Observing teachers: Is it worth the time?

Teacher evaluation is off to a “bumpy start” in New York City schools, reports the New York Times. For example, PS 130 Principal Lily Din Woo and her assistant principal “are spending parts of each day darting in and out of classrooms, clipboards and iPads in hand, as they go over checklists for good teaching. Is the lesson clear? Is the classroom organized?”

All told, they will spend over two of the 40 weeks of the school year on such visits. The hours spent sitting with teachers to discuss each encounter and entering their marks into the school system’s temperamental teacher-grading database easily stretch to more than a month.

So, the principal and her assistant now will spend 10 percent of their time visiting classrooms, observing and giving teachers feedback on their teaching. Is that really excessive?

“Talent coaches” are helping and retirees may be hired “to pitch in at schools where the workload is heavy.”

Writing up observations, which must be “low inference” and aligned to the “Danielson rubric,” will be more time consuming and taxing than the Times estimates, predicts NYC Educator.

Minnesota is piloting a new teacher evaluation system that includes more classroom observation by the principal, reports Hechinger’s Tim Post for Minnesota Public Radio.

Pine Island, Minn. – Principal Cindy Hansen’s fingers fly across her laptop as she types notes in a corner of Scott Morgan’s classroom, watching as the special education teacher works with a kindergartner on her social skills.

This is more than a principal pop-in. Hansen and Morgan are part of a new, experimental kind of teacher evaluation. Earlier, they met for a pre-evaluation chat. Later, they’ll talk over the teacher’s strengths and weaknesses and set performance goals. She’ll evaluate 70 teachers this way.

“It’s not meant to be a ‘gotcha’ kind of a situation,” Hansen says later. “It really is meant to be a helpful kind of conversation.”

Beginning teachers will be observed three times a year for the first three years, while veteran teachers will be observed at least once a year, with a more thorough review once every three years. Student performance will count for 35 percent of overall evaluations. Student surveys also will be factored in.

Use of test scores to evaluate teachers is controversial. Now there’s resistance to principals evaluating their teachers’ classroom performance.


Comments

  1. I have always (and I mean always) enjoyed and benefited from classroom observations. What bothers me about the new system is that it’s based heavily on the Danielson rubric, which in turn is based on assumptions with which I profoundly disagree. (I don’t disagree with all of them, but some I find incorrect and inappropriate.)

    To me, the value of a classroom observation lies in the discernment of the observer. If the principal cannot rely on her wisdom, experience, and judgment, then I see no point. Granted, the principal’s judgment must be informed and may need a counterbalance. But the Danielson rubric is more than a counterbalance. Principals are expected to follow it.

  2. I think observations could be useful, theoretically, if they were performed by a skilled teacher with at least the perceptive abilities of the teacher being observed, and with a grasp of the discipline being taught and an understanding of how the particular class session fit into the larger unit and year plan.

    It’s been many years since I’ve encountered a principal with those qualifications. Younger principals, in my experience, have been indoctrinated into a collective ideology in which they cannot see behaviors in plain sight, if those behaviors conflict with the abstractions their ilk repeat like incantations.

    Requiring principals to pass at least three AP subject exams, demonstrating that they are at least as educated as the top high schoolers, would be a step toward restoring knowledge (rather than ideology) as a school’s foundation.

  3. My area also uses a variation of the Danielson rubric. First, many parts do have underlying assumptions about what is “good teaching” that I fundamentally disagree with. Second, most of the principals I have met have no concept of the idea that student activity and intellectual engagement are not the same thing, and are not necessarily that highly correlated for some students.

    Lastly, as a bit of humor, our state decided to drop some aspects of the Danielson rubric that they felt were not as important. Content knowledge of the subject being taught was one of the areas dropped. I think I died a little inside when I heard that.

  4. I hate Danielson…really I do. You might say I’m being harsh, but she put her name on the rubric and continues to promote it personally, so I think it’s fair to transfer my distaste for the rubric to her.

    That being said, there are so many useless categories in the rubric. Some are just downright inconsequential in the grand scheme of things, some are completely outside the teacher’s control, some promote the same one-size-fits-all style that the Common Core does, and others run counter to everything I’ve learned as being effective over my years of teaching low SES students.

    I’m not sure how true this is, but a dean at the school where I got my education degree, someone with 30 years in public school classrooms, told me that when Danielson first came out with her rubric it was intended solely as a way to suggest improvements, not to measure effectiveness… and as a result, it was designed as a very expansive list of attributes drawn from an entire group of successful teachers; no single teacher actually did (or could do) everything in the list.

    Yet, as I’ve seen the rubric implemented in my district and others, I’m expected to display evidence of all of the first three domains and some of domain four in a single 40-80 minute lesson…and my building admins have all attended Danielson-approved training. This may just be poor implementation, but I’ve seen too much poor implementation of the rubric across multiple districts to write it off simply as crappy administrators…at some point Danielson and her team become responsible for a poorly-designed product.

  5. Richard Aubrey says:

    Two problems: The question is whether the observation is “worth” the time and effort, which means there is an opportunity cost to the observation. So we are implicitly asking whether a better use could be made of the time, which means, further, that there should be some demonstrable benefit first, one that offsets whatever goes undone because of the observation’s time and effort.
    I know, that much is so obvious as to not need stating.
    Except new ideas always miss the point.
    The other problem is whether a principal actually knows his or her stuff.
    My wife taught HS with several different principals. The last one was good (young and full of energy), but he was beginning to run down because energy is not in limitless supply.
    An earlier principal had had it and was barely there, figuratively speaking. Might have been good once.
    Young principals are promoted early due to some kind of demonstrated quality. That’s good. But it means less classroom time, and possibly that they looked good because they got the good classes. How much do they know about the rest of the business?