Value-added analysis of teachers’ work is the subject of Room for Debate.
How should this information be used? What are the strengths and pitfalls of this kind of measurement? If it has flaws, can it be improved and made into a worthwhile tool?
Advocates, such as Amy Wilkins of the Education Trust, say value-added measures coupled with “rigorous classroom observation” provide valuable feedback for teachers.
When summed over several years, these data can provide teachers with valuable feedback about what kinds of students they are most successful with and with whom they need to improve. They can help schools match the most able teachers with the students who most need them. And they can help leaders better target teacher supports and rewards.
Critics, such as Stanford’s Linda Darling-Hammond, think the method is too unreliable to be useful.
While scores may play a role in teacher evaluation, they need to be viewed in context, along with other evidence of the teacher’s practice.
Better systems exist — like the career ladder evaluations in Denver and Rochester, the Teacher Advancement Program and the rigorous performance assessments used for National Board Certification, all of which link evidence of student learning to what teachers do in teaching curriculum to specific students. These systems also help teachers improve their practice — accomplishing what evaluation, ultimately, should be designed to do.
Notice that Wilkins supports both value-added scores and classroom observation, while Darling-Hammond prefers observation but concedes a role for test scores in teacher evaluation. Is a fuzzy consensus emerging?