The uses (and misuses) of value-added research

Value-added research, which uses “sophisticated statistical techniques to attempt to isolate a teacher’s effect on student test score growth,” makes sense, writes Matt DiCarlo in a thoughtful analysis on Shanker Blog. What’s troubling is how the models are used.

For example, the most prominent conclusion of this body of evidence is that teachers are very important, that there’s a big difference between effective and ineffective teachers, and that whatever is responsible for all this variation is very difficult to measure (see here, here, here and here). These analyses use test scores not as judge and jury, but as a reasonable substitute for “real learning,” with which one might draw inferences about the overall distribution of “real teacher effects.”
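
To make that idea concrete, here is a minimal sketch of the kind of specification such models build on: regress current test scores on prior scores plus teacher indicators, and read the spread of the estimated teacher coefficients as the “distribution of teacher effects.” The synthetic data, variable names, and statsmodels call below are illustrative assumptions, not DiCarlo’s or any particular study’s model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, n_teachers = 600, 20

# Hypothetical synthetic data: a prior score, a randomly assigned teacher, and a
# current score driven by both plus noise.
df = pd.DataFrame({
    "prior_score": rng.normal(50, 10, n_students),
    "teacher": rng.integers(0, n_teachers, n_students),
})
true_effect = rng.normal(0, 3, n_teachers)  # the "real teacher effects"
df["score"] = (5 + 0.9 * df["prior_score"]
               + true_effect[df["teacher"]]
               + rng.normal(0, 8, n_students))

# A simple value-added specification: prior achievement plus teacher fixed effects.
model = smf.ols("score ~ prior_score + C(teacher)", data=df).fit()

# The spread of the estimated teacher coefficients is the "wide variation" the
# research describes; each individual estimate is noisy, which is the caution.
teacher_estimates = model.params.filter(like="C(teacher)")
print(teacher_estimates.describe())
```

Even in a toy setup like this, the overall spread of teacher coefficients can be estimated far more reliably than any single teacher’s coefficient, which is the gap between “teachers matter” and “we can rank individual teachers” that the post turns on.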

And then there are all the peripheral contributions to understanding that this line of work has made.

What the “research does not show is that it’s a good idea to use value-added and other growth model estimates as heavily-weighted components in teacher evaluations or other personnel-related systems,” DiCarlo concludes.

As has been discussed before, there is a big difference between demonstrating that teachers matter overall – that their test-based effects vary widely, and in a manner that is not just random – and being able to accurately identify the “good” and “bad” performers at the level of individual teachers.

Most districts and states use value-added models poorly, DiCarlo concludes.
