74 Interview: IES Director Mark Schneider on Education Research and the Future of Schools
The Institute of Education Sciences turns 21 this year. After five years at its helm, Director Mark Schneider is hoping to shepherd its transition to maturity.
When he was appointed by President Trump in 2017, Schneider took over an agency designed to reveal the truth of how schooling is delivered in the United States. IES houses four research centers that measure the effects of educational interventions from preschool to university, and through the National Assessment of Educational Progress — the agency's most recognizable research product, often referred to as the Nation's Report Card — it delivers regular updates on the state of student achievement.
But Schneider sees a new role for federal research endeavors. Through the use of public competitions and artificial intelligence, the director wants IES to help incubate breakthrough technologies and treatments that can help student performance take a giant leap forward in the coming years. Rapid-cycle experimentation and replication, he hopes, will help reverse more than a decade of stagnation in K–12 performance.
Late in his six-year term, Schneider is candid about his status as one of the few holdovers from the previous administration still serving in government. In part, he quips, that's because education research isn't considered important enough for a Trump appointee to be fired. But he's also labored to win the trust of Congress and cultivate bipartisan support for a vision of educational improvement powered by data.
Now he believes that vision could soon be realized. In December, Congress approved a substantial increase in IES's budget that could fund a fifth national center, which some have dubbed a "DARPA" for education research, after the Pentagon's famous hub for high-risk research and development. Further legislation is needed to authorize a center for advanced development in education sciences, but potential research strands are already being sketched out.
Schneider — a political scientist who left academia for leadership and research roles at the American Institutes for Research and the American Enterprise Institute — has a commanding perspective on the federal education bureaucracy, having served as head of the National Center for Education Statistics in the 2000s. His sometimes tart observations about Washington's research efforts, and the future of IES, can be found on his frequently updated blog.
In a wide-ranging conversation with The 74's Kevin Mahnken, Schneider spoke with surprising openness about the Department of Education (which "operates like a bank" in its grantmaking capacity), the "horrifying" reality of university master's programs ("It's a money machine, and so you create more of them"), and why he believes some concerns about data privacy are overblown ("If I were really worried about this, I wouldn't wear an Apple Watch").
Above all, he said, the task ahead is to develop a research base that can yield transformative educational tools on the order of COVID vaccines and ChatGPT.
"The goal, using this foundation, is to look at things that pop out, that would not exist otherwise," Schneider said. "If we can do this with vaccines, if we can use it with chatbots, then what's our foundation?"
The conversation has been edited for length and clarity.
The 74: Tell me a little about what you’re anticipating this year in terms of legislation to establish a DARPA-type program for education.
Mark Schneider: There are two parts of the legislation. The first is to set up the National Center for Advanced Development in Education, or NCADE, and the other is a major reinvestment in Statewide Longitudinal Data Systems [SLDS]. Most people focus on the first part, but the second is also really important because we spent a billion dollars building those data systems over the last 18 years. The whole thing is a great system, but it needs to be rebuilt.
What needs to be modified in those systems?
It's old technology. I think the first round of money for them went out the door in 2006. [Gestures at iPhone sitting on the table] Can you imagine having a technology system that was built in 2006? So they need to be modernized, but the more important thing is that we now have a much more expansive vision of what they can do after almost 20 years of work.
The example I point to is absenteeism. States have really good records on attendance because money flows based on average daily attendance, and they have to take counts. They know who the chronic absentees are, but they don't know why. It could be food insecurity, health, migration status; it could be a dozen things or more. But if we use these longitudinal data systems as a backbone and then plug in information from criminal justice, health, Social Security, we would have a much better sense of what's going on with any student in a given school. The strength of the SLDS has always been tracking students over time.
"Why did I survive when almost nobody else did? I don't think education research is that important. I think I’m good at my job, and the reforms we’re pursuing … are really strongly supported by the current administration. But I’m not important enough to be fired."
The biggest problem, of course, is that as you merge more data, the issues of privacy become more intense because it's easier and easier to identify people when there's more information. We’re nowhere near good enough at privacy protection, but we’re getting way better, and there are so many more ways of protecting privacy than there were 20 years ago.
Given the lengthy timetables of federal projects like the SLDS, do you ever feel like you’re painting the Golden Gate Bridge, and now that you’ve finally established these tools, it's already time to overhaul them?
Well, we spent $1 billion building this, and right now, we're spending about $35 million per year on grants to states to do things with it. What percentage of $1 billion is going back into maintenance and expansion? About 3.5 percent a year. It's pocket change. So you always have to remember that this is a state-owned system, designed to help them do their work. And to take an example, Tennessee is surrounded by eight other states, and they end up doing their own collaborations and data exchanges.
Is the inherent federalism of that approach, especially layered over the archaic technology, difficult to manage? How did it play out during the pandemic, for instance, when real-time data was so hard to generate?
The trickiness had nothing to do with SLDS, though. It had to do with the world we woke up to in March 2020.
For me, SLDS is like an exemplar of a federal system where the states assume almost all responsibility. But capacity varies enormously from state to state: there are states like Massachusetts that are doing an unbelievably good job, and other states are not. Our role there is providing the resources to enable states to a) experiment like Massachusetts and b) bring states that have little capacity up to speed.
Probably the most alarming federal data coming out of the COVID era has been the release of scores from the National Assessment of Educational Progress, which showed huge drops in achievement in reading and especially math. Did those results match what you were expecting?
By the time NAEP landed, we had NWEA results and others that suggested it was going to be a debacle. We knew the scores were going to go down by a bunch. But NAEP is NAEP — it's national, it's rock-solid in terms of its methodologies and its sample. So it's indisputable that this was an awful situation, right?
To connect the dots with SLDS: One of the problems with the system is that it was conceived as a data warehouse strategy. And I tried and tried, but nobody caught that this was a stupid way of phrasing its purpose. I said, "We don't need a data warehouse. What goes into a warehouse? A forklift?" We want an Amazon model, where we also have retail stores and you can go in and find stuff.
I understand that states are very hesitant to let random academics and researchers have access to very private data. But as we rebuild the SLDS, we need to make sure that there are use requirements as part of the deal — always, always consistent with privacy protections, but we have to use these more. It's a little tricky because some states have a history of opening up the doors and letting in researchers, and others just don't. In the state of Texas, it can depend on who the attorney general is.
It can be striking how many research papers come out of, for instance, Wake County, North Carolina.
It's because they’ve opened the data to more people. And that's part of the deal, but Wake County is not the United States. We need more.
My days of active research are behind me, but the possibilities built into these data are incredible. I thought I was going to be able to do a deal with Utah, where there's an organization doing early childhood interventions; all the evidence is that they’re good, but we need to see if "good" sticks. Well, SLDS is perfectly designed to figure out if interventions stick. I thought this work in Utah would allow us to identify students in their early childhood interventions, work with the state to track those students over time, and find out if those very positive pre-K results — it's a very inexpensive intervention with great results in the early years — stick. We have the means to do it. We just need to do it.
It seems like efforts like that would be complicated by the growing political salience of data security.
It's everywhere, and for good reason. I’m not really a privacy hawk, but all the privacy protections need to consider benefits versus costs. In too many places, we’ve concentrated on the risk without considering the benefit. But that's only half the equation. We have to be able to say, "This risk can be mitigated, and there could be huge benefits to come out of this."
"It's largely the same technology that ETS invented 40 years ago. But the world has changed. It's just gotten more and more expensive, but the amount of reimagining NAEP and its structure — whether or not we can do this cheaper and faster — is just lagging. It's really frustrating."
This is what political systems do all the time — they balance risks against rewards. But we have to do it in a much more sophisticated way.
Why are you a privacy dove? There is something a little funny about how guarded people are about government intrusions when they so freely hand over their data to Amazon or whomever.
I have an Amazon Echo in every room in my house, and I know that they’re listening! Everyone has a story where they’re talking about something, and then they go on their Amazon account and see an advertisement related to the product they were talking about. It's really scary, but I’ve only turned off the microphone on one of my devices because of the convenience of being able to say, "Alexa, turn on my lights, play the BBC." For me, those benefits are worth getting a bunch of stupid advertisements.
If I were really worried about this, I wouldn't wear an Apple Watch or own an Apple phone. We all should be concerned about privacy, especially when it comes to children. Obviously, the standards have to be high. But again, there are benefits to using a more comprehensive database, which is my vision of what SLDS would be. The technology issues are real, and it's a constant war: people try to hack these systems, and we need to develop better mechanisms for protection.
What are you trying to achieve, organizationally, with the proposed addition of an advanced research center?
IES is only 20 years old. My predecessor, Russ Whitehurst, was the founding director, and he was brilliant. He set out to modernize the research and development infrastructure, and his goal was to make randomized controlled trials the coin of the realm. I was the NCES commissioner for three years, and I argued with him all the time about his model of RCTs as the gold standard. The way he saw it was — and he knew what he was doing, he's really smart — "I can't compromise this at the beginning. If I say, 'Maybe we do this, maybe we do that,' then nobody goes in the direction I want, and they just wait me out."
The problem with the model was that RCTs, as they were originally introduced, were about average effects across populations. But to use a specific example, we've now moved into individualized medicine — it's about what works for you, and under what conditions. So the mantra of IES now is, "What works for whom, and under what conditions?" Of course, we still have studies that look at main effects, but our work is all about identifying what works for individuals or groups of students. This requires a lot of changes in the way we think and how we do business.
My joke is that almost every science has gone through a replication crisis. We don't have a replication crisis, because we don't replicate anything. Even if it works, we don't replicate it! So a few years ago, we launched a replication RFA [request for applications]. IES was moving in that direction anyway, but we needed much more systematic attention to replication. My mistake was that we structured the replications this way: "Something worked in New York City, so give me another $5 million, and I'll try it in Philadelphia." Or, "It worked for some African American kids, let's try it with Hispanic kids." They were all big experiments, five years long. You can't make progress that way.
Now we're running an X Prize, which will be announced before the summer. I'm not sure how generalizable this will be, but the prize is based on using digital learning platforms to run experiments. The critical part is that you have to have 100,000 users on your platform to qualify. You run those experiments, you fail fast — that's an incredibly important principle, fail fast — and the few things that work, you have to do multiple replications. The original plan was: experiment, replication, then another round of replications, at the end of which the goal is to say, "Here's an intervention that worked for these students, but not for these students." Then you take what worked for those students and push it further. [On May 9, Adaptive Experimentation Accelerator was announced as the winner of the $1 million Digital Learning Challenge prize.]
It's a systematic approach to rapid replication. Not everything in education research can be done in short order. Some things take a long time. But there are many, many things that last a semester or a school year, and at the end of that time, we have proximate measures for distal outcomes. This prize approach is just a different process for how we replicate.
ChatGPT just opened up a whole world of discussion about the use of AI. But what happened with ChatGPT is like what we’re trying to do. The world has been doing AI for literally decades, but the last 10 years have seen increased computing power and more complexity in the models, and the foundational models have gotten bigger and bigger and bigger. We built an incredible foundation: machine learning, data science, AI. And all of a sudden, boom! ChatGPT is the first thing that caught the public's attention, but it was built on this amazing foundation. Nobody knows what the next thing is that will break through, but they’re all being built on decades’ worth of work that established this foundation. It's the same thing with mRNA research — the COVID vaccine could not have happened without that foundation.
What I’m trying to do is use IES resources to build this kind of foundation, which includes the learning platforms, rapid-cycle experimentation and replication, transformative research money. And the goal, using this foundation, is to look at things that pop out, that would not exist otherwise. That's the goal: If we can do this with vaccines, if we can use it with chatbots, then what's our foundation? What I hope is that, when we get NCADE going, we move this activity there and let it consolidate and interact. Then we start doing new, innovative research based on that foundation.
What are the kinds of research projects and outcomes that perhaps seem fantastical now, but could be realized in the way that mRNA vaccines have been?
The telos, the North Star, is individualized education. The first thing that is popping from this work is an AI institute that IES is launching with the National Science Foundation, and it's designed for students with speech pathologies. There aren't enough speech-language pathologists in schools, so the demand for them is really high. We also do something incredibly stupid by burdening them with unbelievable paperwork.
"My joke is that almost every science has gone through a replication crisis. We don't have a replication crisis, because we don't replicate anything. Even if it works, we don't replicate it!"
This AI institute is funded by $20 million, split between IES and the NSF, and it has several prongs to it. The first is to develop an AI-assisted universal screener, because it takes time to diagnose exactly what students’ speech pathologies are — whether it has to do with sentence structure, vocabulary, pronunciation. Medicine has been doing this forever, by the way. The second prong is to use an AI toolbox to help design, update, and monitor the treatment plan. In other words, we’ve got a labor shortage, we know we need assessment and a treatment plan, and AI can do this. Or, AI should be able to do this, whether or not we can pull it off with this group. It's a risk, like everything we do is a risk. But to me, this is a breakthrough.
I’m very optimistic that they’re going to pull it off, in part because of the third prong, which relates to the paperwork. It's a lot of work, multiple forms, and it's routine. Well, guess what can now type up routine paragraphs?
It seems like school districts, let alone Congress, could be really hesitant about deploying AI to write up after-incident reports, or what have you. Some regulatory structure is going to have to be created to govern the use of this technology.
I’m sure, like me, you’ve been monitoring the reaction to ChatGPT. There's an extreme reaction, "Ban it completely." Another extreme would be, "This is amazing, go for it!" And then there's the right reaction: This is a tool that's never going back in the box. So how do we use it appropriately? How do we use it in classrooms, and to free teachers from drudgery?
At least for the foreseeable future, humans will have a role because ChatGPT is often wrong. And the biggest problem is that we sometimes don't know when it's wrong. It’ll get better over time, I don't think there's a question about that, but it needs human intervention. Humans have to know that it's not infallible, and they have to have the intelligence to know how to read ChatGPT and say, "That doesn't work."
Of course, it writes very boring prose.
But so do students.
And so do reporters.
Touché. You mentioned that you ran NCES over a decade ago. I’m wondering if you’ve noticed a change in Washington's ambitions around using federal data to spur school improvement, especially now that the peak reform era is long gone.
It's true that the level of skepticism is much greater. But the technology has also gotten way, way better. We hired the National Academies [of Sciences, Engineering, and Medicine] to do three reports for us to coincide with our 20th anniversary. The one about NCES was the most interesting. It talks about new and somewhat less intrusive measures.
NCES is old. There are lots of arguments about when it started, but the modern NCES was actually a reaction to [sociologist and researcher] James Coleman, who was intimately involved in the early design of longitudinal studies. They’ve gotten more complicated — the original was "High School and Beyond" — and they’re all based on survey data, just going out and talking to people. Well, you know the fate of surveys: Response rates are falling and falling, and it's harder to get people to talk.
That's how bad it's gotten?
We were forced — "forced" makes it sound like it was a bad idea, and it did turn out to be a bad idea — to ask schools that were participating for a lot of information about IEPs [individualized education programs] and students with special needs. This gets back to that cost/benefit calculation: schools would not share the classification of students with special needs, and they just refused to participate. So we ended up canceling that data collection. That was a leading indicator of the problem.
"I taught public policy for decades at Stony Brook University, and when I decided that I was never going back, they asked me to give a talk. … My opening remark set everyone back on their heels because I said, ‘I taught here for 20 years, and every one of my students should sue me for malpractice.’ Nothing I taught had anything to do with the way the sausage is really made."
Increasingly, the question is what we can do to get the kind of data that these longitudinal studies generated without having to interview 15,000 or 18,000 kids. It requires a modification in the way you think, and it requires an expansive view of where the data lie. How much of the data that we’re asking students and parents and teachers about resides in state longitudinal data systems, for example? Could we drive the need for human interviewing to 5 percent or 10 percent of what we do now? It actually calls for a different thought process than, "Well, we always do ‘High School and Beyond’ this way!" But federal bureaucracies aren't known for their innovative thinking, quite frankly.
This adaptation might also mean that some of the unique things we get from surveys are going to have to go because no one will give them to you.
What, if anything, is the effect of changes in government on a massive organization like IES? You were appointed under President Trump, so the Department of Education has already undergone a really significant change, and now Congress has changed hands as well.
We’re not massive. We’re pretty small, actually.
We're a science agency, and we were created when the Education Sciences Reform Act was authorized in 2002. I think the vision was that IES would grow not to the size of the National Institutes of Health or the National Science Foundation, but on a trajectory that would put it into that kind of group. If you look at the original legislation, it's still there. We have a board that is almost fully populated now, and the ex officio members include the director of the Census Bureau, the commissioner of the Bureau of Labor Statistics, and somebody from NIH. You don't create a board with those kinds of people on it unless you expect it to be a big, major player.
It never got there. The budget is up to $808 million, in part because we got a pretty big chunk of money in the omnibus package. But $30 million of that was for DARPA-Ed, which we don't have yet. Another $10 million is for the School Pulse Panel. So Congress is interested in modernization, and we have to prove that this investment is worthwhile.
What about the difference at the top? Are there notably different attitudes between Secretary DeVos and Secretary Cardona with respect to IES's mission?
I’ve gotten enormous support from the department. We would not have gotten the money for NCADE, we would not have gotten the money for School Pulse without that support. DeVos's goal was to make the Education Department go away, so this administration is obviously much more expansive. They’ve been careful in their support of things, but again, NCADE wouldn't have gotten this far without the full-throated backing of the department, and of the Office of Management and Budget and the White House.
I’m reminded of the parties’ divergent positions on the federal government's role in education, and how close the Department of Education came to never being authorized.
Jimmy Carter is a really good ex-president and a good human being, but was not a very effective president. As you know, the establishment of the department was in response to support that he got from teachers’ unions. So there is a philosophical debate about the role of the federal government in education, and it's not a slam dunk. There are things that are worth talking about. A huge chunk of the money that the department manages is Title IV, so it operates like a bank, and it's by far the smallest cabinet department in terms of workforce.
The other thing I'm not sure people fully understand is that the department isn't just a grant-making operation; it's also a contract shop. I taught public policy for decades at Stony Brook University, and when I decided that I was never going back, they asked me to give a talk to my former colleagues — almost all of whom I'd hired — and graduate students. My opening remark set everyone back on their heels because I said, "I taught here for 20 years, and every one of my students should sue me for malpractice." Nothing I taught had anything to do with the way the sausage is really made.
You hear this all the time, and academics pooh-pooh it. But I've been on both sides of it, and it's really true: Academic research and the sausage factory are not the same. In 20 years of teaching public policy, I never once mentioned contractors. And contractors run the whole show. It's the way we do business, and it's even more interesting than just: "I run this agency, but here's what you, the contractor, should do." All too often, it's the contractors doing the actual thinking.
There's been a long argument, on and off, over the 20 years that I've been associated with this stuff. We should, and must, contract out the work and the implementation, but we should not be contracting out the thinking. And that's easy to articulate, but what's the dividing line? When are we surrendering our intellectual capital — our control of the ship, if you will — to contractors who now design the ship, build the ship, and steer the ship?
Are there concrete examples from education research where you can point to projects that have gone off-course?
NAEP is $185 million per year, and it gets renewed every five years. Do you know how long the Educational Testing Service has had the contract? Forty years. There are reasons why they get this contract — they're good! But this is decades of either minimal or zero competition. And as the test has gotten bigger and more complicated, even putting together a bid to compete costs millions of dollars. People ask, "Why would we spend millions of dollars to compete with ETS when they've had the contract for 40 years and we see no indication that it will ever be different?"
To me, this is a serious issue.
Given that NAEP is the foremost product of NCES, there's probably very little scope for reimagining it beyond, say, changing the testing modality from pen-and-paper to computers.
I agree with that; it's largely the same technology that ETS invented 40 years ago. But the world has changed. The test has just gotten more and more expensive, and the work of reimagining NAEP and its structure — whether or not we can do this cheaper and faster — is just lagging. It's really frustrating.
Even before COVID, there was a lot of pondering about the future of NAEP and the costs of administering it. The Long-Term Trend test was postponed between 2012 and 2020, right?
Yeah, but that's an interesting case. The modern version of NAEP — which measures fourth- and eighth-grade reading and math — was authorized in 2002, I believe. It goes back to the '70s, really, but we've been doing this version of it for 20 years. People love the Long-Term Trend test, but do we really need it when we've had 20 years of the main NAEP?
You’ve spent a lot of your career studying the value of higher education. Do you think we’re staring at a financial or demographic apocalypse for colleges and universities?
"Apocalypse" is way too strong a word. There are demographic trends such that the pool of students is shrinking, and there's also incredible regional variation. The New England and mid-Atlantic states are experiencing much sharper declines than the South and the West. And of course, universities are not mobile; if you invest all this infrastructure in frigid Massachusetts or northern New York, and all the students move, you have to ask, "What do I do with all this infrastructure now?"
As to the value of a four-year degree, you and I operate in a sphere where everybody is highly literate. I read all the time, and I’m not talking about technical stuff. I read novels all the time because it's an opportunity to live in a different world. But what's the definition of literacy in the world we now live in, and what skills do we truly need? It's still only a minority of people who go to four-year programs, but do we need to send even that many students to get four-year degrees? Most of them want jobs and family-sustaining wages, and do we need four-year degrees for that? The answer is obviously not, if you look at what's happening in Maryland and Pennsylvania [where governors have recently removed degree requirements from thousands of state jobs].
The fact of the matter is, this is happening. To the extent that it's happening, which I believe is necessary and important, the incentives for getting a bachelor's degree start to decline. It becomes more of an individual question: "I’m going to spend five or six years at a four-year institution. It's pretty much a cookie cutter, stamp-stamp-stamp experience, and I get a bachelor's degree. Then, at a job interview, they ask what my skills are, and I can't answer. Well, I can use ChatGPT!"
That's quite grim. But is there a way to offer prospective students better information about the value they’re actually getting from college?
When I was at the American Institutes for Research, I ran something called College Measures, which was the first systematic attempt to crack open the data on what happens to students when they graduate, data that until then had been reported only at the university level. In the end, it's the variation in programs that really matters — as soon as we started unpacking student outcomes, program by program, the programs that were technical were the winners. And the numbers were amazing. The first results we published came from Virginia and Tennessee, and I swear to God, when I saw the results, I didn't believe them. I thought we had an error in the data because associate's degree holders were out-earning bachelor's degree holders.
We repeated this over and over and over again, in maybe 10 different states. It was always technical degrees coming out of community colleges that had the best earnings. In the state of Florida, I think the best postsecondary certificate was "Elevator Mechanic/Constructor." There aren't a lot of them, but the starting wage was $100,000! Then you start looking at sociology, English, psychology, and [gestures downward with his hand, makes crashing sound].
It turned out that these degree programs were increasingly becoming surrogates for skills. The worst outcomes of all were for students who went into liberal arts and general studies at community colleges. They're doing that because they want to transfer to a four-year school, but only 20 percent of them actually transfer. They come out with a general education and no skills, and the labor market outcomes were a disaster.
I was working with the Burning Glass Institute, which has employment records for millions of people and scrapes job advertisements, to start looking for what skills were in high demand. The beauty of it was that it was such good data, and even better, it was regional. Most people don't move that often, so if I'm living and going to school in western Tennessee, it doesn't help me at all to know what somebody's hiring for in Miami. It basically asked, "How much money is each skill worth?" Things have probably changed since that time, but one of the highest-demand skills in almost every market was [the customer relationship management software] Salesforce, which was worth between $10,000 and $20,000.
The other thing we did, which made me really popular, was look at the same outcomes for master's programs. Colleges just create these programs, and the money goes to support everything that academics love: travel, course buyouts, graduate students. But the numbers are horrifying for most master's programs. You create a master's program, and it tends to be relatively cheap to run — you don't give TAs to master's students, so it's all cash. It's a money machine, and so you create more of them.
This brings me back to my previous question. If young people start seeing the value proposition of a four-year degree differently, and American fertility rates are producing fewer young people to begin with, it seems like the music eventually has to stop for the higher education sector. And if that happens, employers are going to have to rely on something besides the apparent prestige of a B.A. to distinguish between job candidates, right?
Both my daughters think I've become increasingly conservative because of what goes on in postsecondary education. Look at university endowments: All the money is hidden, but the subsidy we give to well-off students is humongous because their endowments are tax-free. Princeton has a huge endowment and a small student population; Harvard has a bigger endowment, but also a larger enrollment. When I was at the American Institutes for Research, we calculated the subsidy at Princeton per undergraduate student, and it was something in the vicinity of $100,000 per year. All hidden, nobody talks about it. Meanwhile, the total subsidy for Montclair State University, which is down the road, was $12,000; the local community college was $3,000. This includes both state and federal money. What kind of system is this?
I testified before the Senate Finance Committee, and we got a small tax on endowments that was only for the very, very richest schools. I think it's still on the books, but it was nowhere near as aggressive as it should have been. What I wanted was to take the money and set up a competitive grant program for community colleges, because what they do is hard work and they absolutely need the money. But what happened was that we got a much smaller tax that went into the general fund and didn't go into improving anything. It was a disappointment.
This leads me to wonder what you make of the Biden administration's student debt relief!
I’m not going to talk anymore. [Laughs]
The other part of that same campaign was about property taxes. Georgetown and George Washington University, for example, don't pay property taxes. Some universities acknowledge that they’re getting police services, fire, sewage, and so forth, and they negotiate something called a PILOT, a payment in lieu of taxes. One case was Harvard, which negotiated a PILOT with Boston that was way lower than what they would have otherwise paid, and they didn't even fully pay it! A past college president told me once, "Your campaign to go after the endowments is never going to happen in a serious way. But if you start attacking our property tax exemption, that gets us worried."
"The numbers were amazing. The first results we published came from Virginia and Tennessee, and I swear to God, when I saw the results, I didn't believe them. I thought we had an error in the data because associate's degree holders were out-earning bachelor's degree holders."
Back when I thought some of this was actually going to stick, I wrote an op-ed in the Washington Post. Washington, D.C.'s Office of Tax and Revenue turns out to be a pretty good agency, and I asked them for a list of all the properties owned by Georgetown and George Washington. I just asked them to calculate the value of those properties and what the payment should be given the commercial tax rate. It was a lot of money. The average residential property owner in Princeton, New Jersey, pays thousands of dollars more in taxes than they otherwise would because Princeton University doesn't pay property taxes.
Criticizing universities in the Washington Post doesn't sound like a good way to make friends in your current position.
Well, I haven't done anything like that in years. And of course, I was appointed by the previous administration, when none of this stuff was particularly poisonous.
So why did I survive when almost nobody else did? I don't think education research is that important. I think I’m good at my job, and the reforms we’re pursuing — whether it's establishing NCADE or revising the SLDS — are really strongly supported by the current administration, which I really appreciate. But I’m not important enough to be fired.
Isn't that something of an indictment of federal policymakers, though? They should care more about education research!
Yeah, but then I would have been fired. [Laughs]
I was affiliated with AEI [the American Enterprise Institute, a conservative think tank], and I still have many friends there. But this NCADE proposal has Democratic backing in Congress. A lot of the work is still nonpartisan, or bipartisan. We work really hard at this, and some of the things we’re pushing are just so fundamentally important that it doesn't matter which party you’re in.
Does partisanship make it harder to pursue the higher education issues you’re interested in, though?
I’m only the third IES director that's been confirmed and served any length of time. Russ Whitehurst was totally focused on early childhood literacy, and John Easton cared the most about K–12. So even over these last five years, IES is predominantly still K–12 oriented.
My newest thing in postsecondary research is to collect data on non-credit activity, and I don't think people understand how big that is in community colleges. A lot of it is people enrolling to use a swimming pool, or someone who takes three courses in musicology but isn't interested in credit or a degree. But increasingly, non-credit activity is being used for non-credit certificates that are job- and career-related. Maybe you need three courses to upgrade your skills for auto body repair, or to upgrade your IT skills, but you don't want a whole degree or to enroll in college. So you can do it on a non-credit basis.
We don't even know how many non-credit certificates are being granted because we don't collect any data on it. IPEDS [the Integrated Postsecondary Education Data System, the federal government's primary source of information on colleges and universities] is rooted in Title IV, and it doesn't collect information about schools that don't take federal grants or about non-credit activity. But it's really big, and many people are betting time and energy and money to acquire non-credit certificates. We're trying to do some work on that, and OMB is very hesitant to mandate any collection of data because of Title IV, but they've approved a voluntary data collection. I don't do research anymore, but I'm trying to broker deals with researchers and states — Virginia has a beautiful data set, for instance — to find out what happens if you get a non-credit certificate. Indiana is another opportunity.
Launching this stuff is hard because it's pretty untraditional, and it requires strong state data systems and the willingness of states to work with independent researchers. And of the $808 million we’ve got, none of it is walking-around money; all of it is competitive, everything's peer-reviewed. Which it should be, but I can't just say, "Sure, sounds great, I’ll send you $50,000."
Kevin Mahnken is a senior writer at The 74.