Evaluating teachers’ performance by how much they raise students’ test scores is “fatally flawed,” writes Dan Willingham on Britannica Blog. Among his objections to value-added analyses:
Suppose Teacher A has a class of high-achievers, and Teacher B has a class of low-achievers. The fact that we’re looking at change scores is supposed to mean that if each class improves, say, 10 points on a reading scale, we infer that the teachers are equally effective. But who says it’s equally hard or easy to move high-achievers and low-achievers 10 points on the reading scale?
It’s OK to use value-added analysis for research, he writes, but not to decide who’s a good or bad teacher.
Using an unreliable measure to make important personnel decisions is a certain way to engender mistrust and lower morale.
Eduwonkette lists a series of problems with value-added performance measures. Among them are questions about how schools and colleagues affect teachers’ effectiveness and whether teachers who promote short-term score gains are equally “effective in promoting longer-term academic growth.”