New teacher evaluation systems tend to give lower ratings to teachers with disadvantaged students. Teacher Beat’s Stephen Sawchuk asks the critical question: Are the ratings biased? Or do high-need kids get fewer high-quality teachers?
Value-added measures (VAM) are supposed to judge teachers by whether they’ve done better than previous teachers at improving their students’ progress. But many question whether VAM is a reliable measure of teachers’ effectiveness.
Evaluation systems also include classroom observations, and those have problems too, writes Sawchuk: “Observations by principals can reflect bias, rather than actual teaching performance.”
Yet we also know that disadvantaged students are less likely to have teachers capable of boosting their test scores, and that black students are about four times more likely than white students to attend schools with many uncertified teachers.
Teachers in low-poverty Washington, D.C. schools were far more likely to ace the district’s teacher-evaluation system, IMPACT, observes Matthew Di Carlo at the Shanker Blog.
The Pittsburgh teacher-evaluation program shows similar results, according to a federal analysis, writes Sawchuk. “Teachers of low-income and minority students tended to receive lower scores from principals conducting observations, and from surveys administered to students. Those teaching gifted students tended to get higher ratings.”
It’s hard to know whether all methods of evaluation are inaccurate or whether a “maldistribution of talent” explains the low scores for teachers of disadvantaged students, concludes Sawchuk.
It will be hard to persuade teachers to work in high-poverty, high-minority schools if they know they’ll risk being rated ineffective.