The evidence disconnect in education policy

Is good evidence winning or losing the battle over education policy? I sat in on an interesting panel discussion on reading last week hosted by the New America Foundation, where the conclusion was, in essence, that good evidence isn’t winning often enough. In the case of reading, where research on what works is relatively abundant, very few school districts and classrooms actually implement tried-and-true methods, the panelists said.

The problem goes beyond reading, of course. The education world is notorious for the relatively weak research base that informs what happens in schools and classrooms. Recently, the Obama administration has tried to encourage practices grounded in good research through its Investing in Innovation Fund, in which groups and school districts competed for federal money to expand research-backed education reform programs.

But shaky or conflicting evidence is still the norm in many areas of education. Several stories this week highlight the problem: On Monday, Sam Dillon of The New York Times wrote about the debate over class size, which has grown shriller as budgets have shrunk and districts have been forced to increase student-teacher ratios. Educators are divided over what the research says about whether small class sizes matter, or matter enough to make up for the extra costs. (See what we’ve written on the issue, here.)

Also in the news, Gothamschools.org highlights a new report finding that merit pay, a favorite among education reformers including Education Secretary Arne Duncan, failed to improve test scores in a New York City experiment and was actually associated with declining scores among middle-school students. A Vanderbilt University study released in September 2010 found largely the same thing: offering middle-school math teachers bonuses of up to $15,000 did not produce gains in student test scores.

There are many other examples. Does the preschool research show enough of a lasting benefit for children to justify its (often high) expense? As states decide whether to divert more resources to charter schools, which of the various charter studies should be believed: the big national studies that found charters to be mediocre on average, or the smaller studies finding that, in cities at least, they perform better than their public school counterparts?

These questions over which policies are supported by research, and which research is best, are likely to get more heated as districts set priorities in tight financial times. Will tighter wallets force schools and policymakers to pay more attention to the evidence, ensuring a better bang for the buck, or will evidence continue to get short shrift?