At 11:30 am this past Monday, I was slated to meet with John King, deputy commissioner of education for New York State, in midtown Manhattan.
At 10:40 am, his secretary called and canceled. Something important had unexpectedly come up, she said.
That “something” turned out to be a major agreement between the New York State Education Department and state teachers’ unions on how teachers are evaluated and, more specifically, on how student test scores fit into the picture. The New York Times reported on the breakthrough yesterday, pointing out that the move was part of the state’s efforts to secure Race to the Top money in Round Two. New York could win up to $700 million in federal funds, although it finished a distant 15th out of 16 finalists in Round One.
Under the proposed system, which must still be approved by the state legislature, teachers would be categorized as “highly effective,” “effective,” “developing” or “ineffective.” (Currently, teachers in New York are rated only “satisfactory” or “unsatisfactory.”) Forty percent of teacher evaluations would depend on student test scores…but only, of course, for those teachers whose students take annual standardized exams. Most teachers’ students do not.
It’s now time for a semi-bold prediction: if this new system is implemented in New York, I’d be willing to bet that the vast majority of teachers will be rated “highly effective,” “effective” or “developing.”
In the past, criticism has been leveled against the binary rating system of “satisfactory” or “unsatisfactory,” in large part because 99 out of 100 teachers are rated “satisfactory.” The New Teacher Project’s widely cited 2009 report on this phenomenon, “The Widget Effect: Our National Failure to Acknowledge and Act on Differences in Teacher Effectiveness,” also found the following: “Districts that use a broader range of rating options do little better; in these districts, 94 percent of teachers receive one of the top two ratings and less than 1 percent are rated unsatisfactory.”
So don’t expect too much to change in terms of how many teachers are rated “ineffective.” It’s not likely to be much more than one percent. It might even be less — maybe half a percent.
I applaud New York’s decision to tie “only” 40 percent of teacher evaluations to student test scores — because, as the state commissioner and the chancellor of the State Board of Regents have acknowledged, New York’s current tests aren’t very good.
Teachers are often allergic to proposals that link their evaluations to their students’ test scores, and not without reason: many standardized tests are poor measures of true and meaningful learning. Most measure memorization. Most are highly predictable and easily gamed. This is, after all, why the multi-billion-dollar test-prep industry exists. Show me a good test and I’ll show you teachers unafraid of having their evaluations tied to whether their students ace it. (The only tests I know of that come close are part of the International Baccalaureate Diploma Programme.)
For my take on why tying student test scores to teacher evaluations is highly problematic, see a commentary I wrote in February that appeared in Education Week. (Those who don’t subscribe to Education Week can access the entire piece here.)