The value-added bubble

The rush to evaluate teachers by value-added models reminds Rick Hess of the collateralized mortgage bubble.

Edu-econometricians are eagerly building intricate models stacked atop value-added scores. Yet today’s value-added measures are, at best, a pale measure of teacher quality. There are legitimate concerns about test quality; the noisiness and variability of calculations; the fact that metrics don’t account for the impact of specialists, support staff, or shared instruction; and the degree to which value-added calculations rest upon a narrow, truncated conception of good teaching. Value-added does tell us something useful, and I’m in favor of integrating it into evaluation and pay decisions accordingly, but I worry when it becomes the foundation upon which everything else is constructed.

Even the best model is only as good as the data, Hess writes. If test scores are “flawed, biased, or incomplete measures of learning or teacher effectiveness, the models won’t pick that up.”