Tennessee: Observers inflate teachers’ scores

Principals are giving high scores to low-performing teachers, concludes a Tennessee Education Department report on the state’s new evaluation system, reports the Tennessean. Principals need more training in how to evaluate teachers, the report recommends.

. . . instructors who got failing grades when measured by their students’ test scores tended to get much higher marks from principals who watched them in classrooms. State officials expected to see similar scores from both methods.

“Evaluators are telling teachers they exceed expectations in their observation feedback when in fact student outcomes paint a very different picture,” the report states.

More than 75 percent of teachers received top scores of 4 or 5 in classroom observations, but only 50 percent earned high value-added scores based on their students’ academic progress. By contrast, fewer than 2.5 percent received a 1 or 2 observation score; 16 percent were rated that low based on student progress. Teachers with a learning gains score of 1 averaged an observational score of 3.6.

Teachers can be denied tenure, or lose it, if they score 1s or 2s for two consecutive years.

. . . Half of each evaluation is based on observations. The other half comes from standardized tests and other measures of student performance.

But almost two-thirds of instructors don’t teach subjects that show up on state standardized tests, so for those teachers — including in kindergarten through second grade, and in subjects like art and foreign languages — a score is applied based on the entire school’s learning gains, which the state calls its “value-added score.”

Rather than using schoolwide scores, the state should develop other ways to measure these teachers, the report recommends. It also calls for principals to “spend less time evaluating teachers who scored well and more time with teachers who need more training,” reports the Tennessean.  “High-scoring teachers may get the chance to undergo fewer observations and to choose to use their value-added scores for 100 percent of their overall scores.”

 

Comments

  1. The flaw in this argument is the phrase “as measured by student test scores.” If test scores say anything about the quality of teaching (and that’s debatable), it is the GAIN in performance that counts, not the absolute score on a standardized test. If teacher A has a class of bright students with a history of good test scores, but under teacher A the class’s scores, though still good, decline 5%, does that make teacher A a better teacher than teacher B, whose class of students with a history of low test scores improves 7%?

    I would rate teacher B a better teacher than teacher A.

    • That’s no flaw.

      Students who don’t attain some absolute, minimal level of mastery are a failure of the education system and of those employed by the public education system.

      A teacher who moves a kid from completely illiterate to functional illiteracy is a failure, because a minimal level of literacy is what’s important, not how close you come to that goal.

      • You can’t be serious. You can’t put that blame squarely on ONE teacher.
        If a student entered 3 grades behind in math and, after 1 year with a given math teacher, improved to being only 1 grade behind, that teacher is clearly a success. You may be correct that the system as a whole may be failing that student, but when it comes to evaluating THAT teacher to determine if THAT teacher should be retained, you have to take the inputs into account.

  2. Tennessee uses value-added measures, not absolute scores, so a teacher whose students make larger gains than they have in the past would be considered a success. Teacher A would not fare well, even if her students outscore Teacher B’s class.

  3. Cranberry says:

    By what criteria are principals supposed to judge classroom performance? If the rubric tilts toward progressive, small-group, hands-on activities in heterogeneous classes, the rubric may award the highest marks to teaching practices which have little connection to student learning. Or not–that is to say, what proof does anyone have that the rubric’s standard of teaching correlates with teaching which increases measurable student performance?

    Should principals assess the performance of their own teachers? These are people they hired. If they give negative assessments, they may fear they’ll have to fire them. Wouldn’t it make more sense for teachers to be assessed by teams of people who are not connected to the school? Such assessors could compare teachers across schools, by grade level.

    • SuperSub says:

      I’d say your first comment is spot on. I’ve known bad teachers who could hit every single criterion on Danielson’s framework for their scheduled observations, yet their students did poorly on standardized exams and, when interacting with other teachers’ students, routinely commented on how little they learned.

    • I’ve always thought that observers — assuming that this is the way to judge teachers — should come from the ranks of retired teachers. That way, teachers could be observed by teachers who’ve worked in the same discipline and age group, and could be observed many times throughout the year, possibly by several different observers.

      The idea that principals or assistant principals should be observing teachers is absurd, especially when the principal’s experience is radically different from what the observed teacher is teaching.

      It should be a surprise to no one that observers look primarily for entertainment value and little else.

    • This is it exactly. I am observed on a list of criteria that my district wants to see, most of which I believe do not help my students but redirect my time and energy into less effective areas.

      • In some schools the observations are announced so you can prepare your dog and pony show for a particular day and then go back to teaching. In my case, my AP would always show up unannounced so the only way to keep in her good graces would be to stage a circus all the time.

        I suspect that if Dr. Shapiro taught physics in HS instead of college he might be strongly in favor of judging by test scores instead of observations.

  4. “. . . instructors who got failing grades when measured by their students’ test scores tended to get much higher marks from principals who watched them in classrooms. State officials expected to see similar scores from both methods.”

    The best teacher in the world cannot educate a student who doesn’t care. Put some real consequences for students and parents back into public education and we’ll see a change…as it is now, even shame and embarrassment have been removed.

  5. lightly seasoned says:

    I rarely get observed unless I’m up in the review cycle, and I understand why, but it is nice to get some feedback now and then. Sometimes I have student teacher/observers collect data for me as the price of admission, so to speak.

    I’ve also made a point of inviting the super and assistant super into my classroom several times a year. They have elementary backgrounds and often make decisions that leave us at the secondary level scratching our heads. Having them observe my classes has helped them understand the differences.