Students recognize good teaching

Students’ assessments of their teachers tend to match value-added measures of effectiveness, concludes research funded by the Gates Foundation. From the New York Times: 

Teachers whose students described them as skillful at maintaining classroom order, at focusing their instruction and at helping their charges learn from their mistakes are often the same teachers whose students learn the most in the course of a year, as measured by gains on standardized test scores . . . 

Researchers are looking for correlations between value-added rankings and other measures of teacher effectiveness, reports the Times.

Classrooms where a majority of students said they agreed with the statement, “Our class stays busy and doesn’t waste time,” tended to be led by teachers with high value-added scores, the report said.

The same was true for teachers whose students agreed with the statements, “In this class, we learn to correct our mistakes,” and, “My teacher has several good ways to explain each topic that we cover in this class.”

“Kids know effective teaching when they experience it,” said researcher Ronald Ferguson, who designed the student questionnaires. “As a nation, we’ve wasted what students know about their own classroom experiences instead of using that knowledge to inform school reform efforts.”

Twenty states are redesigning their systems for evaluating teachers, often asking the Gates Foundation for help in assessing effectiveness, Vicki L. Phillips, a director of education at the foundation, told the Times.

Teachers who spend a lot of time on test prep have lower value-added learning gains than those who “work their way methodically through the key concepts of literacy and mathematics,” Phillips said.

Comments

  1. Yeah, I agree that students know good teaching when they see it; they see a lot of teachers on a regular basis, something most administrators fail to do.

    The problem is that a teacher can be effective but not popular, and receive lower ratings as a result. I’ve read some interesting research recently that pointed to two findings about college professors, who have been rated by students for years.

    1. The college professors with the highest ratings also tended to have higher grades and easier courses. Not good for learning if you’re letting your evaluations determine your curriculum, I’d say.

    2. Students would often lie on the evaluations either to bolster evaluations for nice professors with poor teaching or to reduce evaluations for “mean” professors.

    I’m not exactly sure how to resolve this as it seems clear to me that students should have a voice in talking about teacher quality.

  2. David raises some good points, but it seems to me that the same concerns would apply to any subjective evaluation of teachers. Two questions:

    1) If this were implemented in other schools, would the teachers be aware of individual evaluations and who wrote them? I doubt any union would allow anonymous evaluations to be used, and I’m not sure I want students to be empowered by anonymity to write whatever they feel like on an evaluation of me.

    2) The significance of the correlation for evaluation purposes relies upon the validity of value-added measures, which don’t hold much water in my book; I’m not a big fan of the individual instruments used to determine added value. Are the students basing their opinions on the same measures used to determine added value?

  3. There’s another thing here: students weren’t asked whether they had a good teacher; they were asked some specific things about how they were taught. It’s quite possible to know how the class works without making the connection to whether or not the teacher is good. You have to be careful how you ask these questions.

  4. Roger Sweeny says:

    The college professors with the highest ratings also tended to have higher grades and easier courses.

    We are supposed to just assume this is a bad thing. We don’t even notice that we aren’t asking, “What do the students learn?”

    The fact that a course covers a lot of ground and has hard tests doesn’t mean that students actually learned much. (And, no, I don’t think it’s legitimate to assume that the more a course covers, the more a student learns, or the harder the course, the more students learn.)

  5. I’m afraid I believe this is true.

    I once gave out a student survey and the results were devastating. I learned a lot from it, but it sure was painful.

  6. Peace Corps says:

    I should survey my students. I think I know what they would say based on what they have told me to my face: “You grade too hard.” “You go too fast.” “You confuse me when you show me different ways to approach the same problem.” “You are the only teacher that gives us homework every night.”

    But I have also had students tell me that I am the only teacher they have that “teaches.”

  7. Peace Corps, the results from a written survey can be quite different from what students will tell you to your face.

  8. What LSquared said.

    The survey was designed by a Harvard professor who really knew what he was doing. From the questions in the NYT article, it doesn’t seem like they were asking about teacher popularity or about favorites. The questions are very specific and zoom in on what is actually happening in the classroom.

    I’ve witnessed teaching that was little more than going through the motions. Students would be assigned 3- or 4-page flip books as a “lab assignment.” Because they counted for such a large proportion of the grade, students were penalized if they didn’t do them, but the assignments were a joke.

    Of course, the kids were blamed for not doing well.

  9. Bill Leonard says:

    As a layman who nevertheless recalls his college and high school days, I suspect that students may not always be able to judge how effective their teachers and professors are while they are still students in those teachers’ classrooms.

    OTOH, I do think most thinking students understand incompetence when they are subjected to it. I certainly did.

    Bill

  10. Yes, I agree with Bill Leonard. Students can spot incompetence, but not necessarily competence.

    I survey my students pretty routinely. The first time I did so this year, I was bracing myself for complaints, but by far the most common request was that I “kick out the noisy students who don’t care”: not a complaint that I wasn’t managing them, but a request that I kick them out more often (which, of course, I can’t do).

    I think it’s worth surveying students, but each survey should be linked to a) the student’s relative improvement in the class, b) the student’s discipline record, c) absentee record, and d) the student’s overall GPA.

    Another thing we did this year that I thought helpful was to create a course-alike quiz and give it to all our students. We could then see how each teacher was doing on covering key concepts. Of course, that has to be tempered by the teacher’s student population. I came in third of five on all but one question, where my students came in first (complicated multi-step equation; whoo and hoo!). But it was a third place well in the hunt, within 5-10% of the leaders, and I had the bulk of the students (over 100), as well as an intervention class and a high level of 504/IEPs (and, as the discipline coordinator pointed out, an even higher level of kids with a top ranking on the Top 25 Discipline Issues list). The two teachers with a slightly higher degree of comprehension had one and three classes, no intervention, and very few trouble students (for example, I had about 10% blank submissions, one of the other teachers had two, and the other had omitted her blanks).

    The thing that has struck me after a year and a half of teaching is how widely a teacher’s life varies based on the subject taught and the population taught to. I teach algebra to kids who have already taken it once, twice or, in a few terrible cases, three times. Last year, I taught geometry, which should have been a bit easier, but the kids had no algebra to speak of, and nearly 10% of my students were expelled or routinely suspended. I hear AP US History teachers, or calculus teachers, talk about their challenges, and I have to stop myself from laughing (I’m a tutor and private instructor, so I know their population, too).

    Incidentally, someone sent me an email asking me about differentiation, and I got swamped. I should have time in the next week, but if you don’t hear from me, feel free to nag. I just got overbooked.

  11. I meant to say this about course-alike quizzes: what I like about them is that they give a course-wide overview of how you are doing relative to everyone else. In our case, the range of answers made it clear that all the teachers were doing well for their populations, but if there had been coverage problems, the quiz results would have helped ferret them out.

  12. Michael E. Lopez says:

    Cal Saith:

    I think it’s worth surveying students, but each survey should be linked to a) the student’s relative improvement in the class, b) the student’s discipline record, c) absentee record, and d) the student’s overall GPA.

    While I certainly think that such information linking is useful, your point made me think of something else. I think that the surveys should be limited based on (1) attendance and (2) some combination of class effort and performance. In other words, I don’t think every student should be allowed to fill out an evaluation.

    Let me explain #2 a little: in a math class, you might have a kid who did all the homework but still failed. I’d want to know what that kid thought, because that kid was clearly “doing the class” and not out to lunch, so he presumably would have something informed to say about the class and the instructor. You might have a kid who didn’t do the homework but got an A. In such a situation, I think we can also say that the kid might have something informative to say. Likewise, in an English class, you want the opinion of the people who actually read the books and the poems.

    It’s the kids who don’t show up and who don’t do the reading whose opinions are worthless: how can they comment on a class they didn’t really take? When it comes down to it, evaluations are only as useful as the knowledge base of the evaluator. You wouldn’t ask students from Mr. Harris’ English class to do evaluations for Mrs. Jungfrauen, so why would you ask a student who didn’t really take Mrs. Jungfrauen’s class to comment on her? Just because he was nominally a witness, even though he has all but confessed that he didn’t actually see anything?

    More on evaluations in general in a subsequent post.

  13. Michael E. Lopez says:

    So I filled out my first evaluation when I was in college, as I imagine many of you did as well. My first reaction was disbelief: “What the f*** do I know about teaching history?” I said to one of my friends who was seated next to me.

    Nevertheless, I filled it out and I continued to fill them out. But they’ve always puzzled me.

    Now I get them every quarter. Dozens of them. I’ve got a big stack on my desk right now telling me what a prick I am. They’re mixed in with exuberant praise, thoughtful critique, and indifferent sarcasm. My evaluations actually tend to be pretty good, but much more polarized than those of the other graduate students in my program. Students love me or hate me.

    I’m with Robert Wright on this: evaluations are extremely useful, and often eye-opening. They’re just about the BEST self-improvement tools an instructor can have. Even if you don’t take them at face value (because the students don’t always know what they are talking about), they’re informative nonetheless: you get a window into the sorts of things that matter to your students, and an idea of what THEY think is important to a good class.

    If nothing else, that gives you a reason and a basis for explaining why you do the things that you do, which is almost always a good idea. (Though not always!)

    The problem I see with high-school-level evaluations is that most high school students aren’t trusted to walk themselves off campus to lunch. Are we really going to trust them with something that can make or break a teacher’s career? College-level evals are somewhat more justified, because college students are (or should be) a self-selected group of marginally more responsible, thoughtful students who aren’t being forced to be there. There’s limited utility in asking a prisoner, even one in a gilded cage, how he likes his confinement. And if you are imprisoning him “for his own good,” then you’re undercutting the very idea that he’s competent to make decisions for himself.

    Anyone who has been reading my commentary on this site for any appreciable amount of time will know that I never tire of saying that the students know who the good teachers are. I find it unsurprising that evaluations track other objective measures of teacher performance (though I am skeptical about value-added’s utility to the profession). Nevertheless, I wouldn’t be in favor of making ANY sort of employment decision — tenure, salary, or anything else — based in any way on student evaluations, and I might propose that unless a teacher’s evaluations reach some critical numerical threshold — an overall cross-class average of “3” on a 10-point scale, perhaps — no one but the school secretary and the teacher him- or herself should ever see them.

    I also think that making them anonymous is very important: students don’t believe you when you say that there is a HUGE difference between whether or not you like them and what sort of grade you give them. They come around after a full quarter or two of demonstrated sincerity, but the fact is that most of their educational experiences (by the time I get them, anyway) have convinced them that this isn’t true. So they need to feel safe from retribution.

  14. Dr. Sanford Aranoff says:

    What are we professors supposed to do? My position is that we are there to help students understand the material. They must learn the principles of rational thinking. This means understanding basic principles, the logical conclusions, empirical verifications, and the need to change the principles due to a better logical understanding or empirical evidence. Instead, college professors sell their thoughts, not focusing on rational thinking. How else can we explain so many left-wing professors, when left ideas violate the rules of rationality? See the new book, Rational Thinking, Government Policies, Science, and Living. See also the book, Teaching and Helping Students Think and Do Better.

  15. I also think that making them anonymous is very important …

    For what it is worth, I didn’t even believe that the anonymous feedback was anonymous (in high school and college). I think the breaking point was when I noticed that each one of the anonymous forms had a unique numerical ID on it … and the teacher handed out the forms to the students in the order in which they sat in the classroom …

    I *do* believe that the students can, in general, roughly rank their teachers in order of effectiveness. Getting this information out of the students is a much harder task.

  16. Diana Senechal says:

    Funny, though: the correlation between teacher value-added and student feedback is a bit stronger for math than for ELA.

    A curious detail: the top-quartile teachers don’t always seem to have better class control than the bottom-quartile teachers, according to the student surveys (see pp. 12-13). In some areas they seem to do worse.

  17. M. Lopez,

    Your comments on the validity of student evaluations – especially when we don’t actually trust students to “value” their own education – are quite insightful. Too many teachers are “favored” by kids and parents for the ease and fun of their classes. I think this is most common among mid-level students. The top students want the rigor because they don’t want to end up at Dartmouth unprepared. Of course, they are also under the most pressure to simply get the grade, not the knowledge. It’s a tricky thing to trust students to evaluate the process when we don’t trust them to choose to pursue education for their own good – instead mandating it and identifying core requirements for a diploma.