Complex formulas used to rate teachers

“Value-added” formulas used to evaluate teachers are incredibly complex, reports the Wall Street Journal. The goal is to isolate how much of students’ improvement is due to good teaching by accounting for each student’s previous performance.
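The actual state formulas are far more elaborate (and, as commenters note below, often proprietary), but the core idea can be sketched in a few lines. This is a hypothetical, simplified illustration, not any state's real model: predict each student's current score from prior performance with a simple regression, then credit each teacher with the average amount by which their students beat or miss that prediction.

```python
# Hypothetical, minimal sketch of a value-added calculation.
# Real models add many controls (demographics, class size, multiple
# prior years); this only shows the basic residual-based idea.

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def value_added(students):
    """students: list of (teacher, prior_score, current_score) tuples.

    Returns each teacher's mean residual: how far their students'
    actual scores landed above or below the predicted scores.
    """
    priors = [s[1] for s in students]
    currents = [s[2] for s in students]
    slope, intercept = fit_line(priors, currents)
    residuals = {}
    for teacher, prior, current in students:
        predicted = slope * prior + intercept
        residuals.setdefault(teacher, []).append(current - predicted)
    return {t: sum(r) / len(r) for t, r in residuals.items()}
```

On toy data, `value_added([("A", 50, 60), ("A", 70, 80), ("B", 50, 50), ("B", 70, 70)])` gives teacher A a positive score and teacher B a negative one, even though both taught students with the same prior scores. Note how sensitive the result is to the prediction line: two classes with similar raw test scores can get very different value-added ratings, which is the puzzle the Ohio commenter below describes.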

For the first time this year, teachers in Rhode Island and Florida will see their evaluations linked to the complex metric. Louisiana and New Jersey will pilot the formulas this year and roll them out next school year. At least a dozen other states and school districts will spend the year finalizing their teacher-rating formulas.

Few people understand the models, said Janice Poda, strategic-initiatives director for the Council of Chief State School Officers. “States have to trust the vendor is designing a system that is fair and, right now, a lot of the state officials simply don’t have the information they need.”

In New York City, principals now use value-added data to make teacher tenure decisions. Last year, only 3 percent of teachers were denied tenure, though many more were deferred.

At Frederick Douglass Academy in Harlem, principal Gregory Hodge uses the value-added results to alter instruction, move teachers to new classroom assignments and pair weak students with the highest-performing teachers. Mr. Hodge said the data for teachers generally aligns with his classroom observations. “It’s confirming what an experienced principal knows,” he said.

Evaluations have helped low performers improve — or leave, reports the New Haven Independent.

After the first year of grading teachers and principals on student performance, 34 low-ranked teachers left voluntarily. The teachers’ union has no complaints.

The new system graded 1,846 classroom teachers on a five-point scale, from 1 (“needs improvement”) to 5 (“exemplary”). Teachers’ scores came from classroom observations and goals they set for their kids, based largely on growth in student test scores.

Overall, 75 teachers were flagged as poor performers in November or in March. Of those, 39 percent rose out of the “needs improvement” category by the end of the year. Another 20 percent didn’t improve their rating but got to keep their jobs. The final 41 percent resigned or retired in the face of termination.

Three quarters of the city’s teachers were rated “effective,” “strong” or “exemplary.” The 36 “exemplary” teachers will be offered the chance to lead their own “professional learning communities,” funded by a private grant.

One of 44 principals received a low evaluation and left the district. Fourteen percent of principals were rated “developing,” 39 percent “effective,” 34 percent “strong” and 11 percent “exemplary.”

  1. Here in Texas, we just received our performance ratings. Now, not only is the formula incredibly complex, no one can tell us how it will be applied.

    When a group of statisticians asked to review the “complex formula,” they were denied. Evidently, it’s considered a trade secret. So even if you have a brilliant non-educational math mind, you aren’t allowed to review it.

    And yet careers and money, lots of money, are being decided by these secret formulas.

    Kind of reminds me of the “double secret probation” from some movie…..

    • Ah, you must be in Houston, Mike. That’s been our complaint since this whole thing started. Thankfully, I just quit teaching after over 15 years, the last 10 spent in HISD.

      Sweet freedom from the ingrained stupidity of that district.

  2. We are facing the same thing in Ohio. I’ve seen my value-added scores for the past two years. Last year shows extraordinary growth, and the year before shows that the students actually went backwards. I can’t figure it out.

    These results make no sense to me based on my knowledge of the kids. I didn’t change my teaching much from one year to the next. My population changed, but not enough to explain that kind of difference. I can’t understand where these numbers are coming from, and no one is able to explain it to me, but I am being held accountable for the results.

    I’m disturbed for two reasons: one, because my professional future will depend on this, but more importantly, because if I truly did something in the classroom that caused regression OR exceptional gains then I want to know what that is, so I stop doing the bad and keep doing the good!

    My gut feeling is that these results are a load of crap, and both groups of students made good but not exceptional gains. I would pay good money, though, to see the formula used so I could understand why I’m getting such different results.

    I ask, and ask. The answer is, “It’s complicated. Trust us, it’s correct.”

  3. I should clarify – both groups of students earned similar scores on the end-of-year tests. Based on my classroom assessments, both groups seemed to start and end in about the same places. It’s the state “Value Added” metric that is totally different for the two groups.