Lashing the anti-testing backlash

To protest curiosity-crushing test prep, Penn State Professor Timothy Slekar told his 11-year-old son to write “I prefer not to take your test” on the state exam.

He has been forced to complete worksheets in language arts and mathematics. He can alphabetize spelling words and find the main idea of a paragraph. He’s had practice in sequencing. He can round numbers. He can add, subtract, multiply and divide with fractions and decimals. And he has mastered the scripted art of estimating (Who knew there were incorrect estimates?). He has had multiple PSSA practice tests and according to these tests my son is ready.

. . . But what has been lost during these past five months? He sits in social studies and science classes that have been shortened to allow more time for reading and math instruction. He hasn’t been given the opportunity to engage real children’s literature.

Inspired by Slekar, a Pennsylvania mother opted her sons out of testing, falsely claiming a religious objection.

But there’s a backlash against the anti-testing backlash. At Jezebel’s Learning Curves, Anna North argues that testing is necessary, especially for children whose parents lack the “time, education and English proficiency” to monitor their children’s learning and spot when they’re falling behind.

Standardized testing is rarely fun — and it could almost certainly be improved — but it’s not nearly as antithetical to real, deep learning as its detractors suggest. Learning how to study will serve kids well throughout life — and while stimulating curiosity is important, most adults are probably glad our curiosity was supplemented by requirements from time to time.

If well-educated parents scuttle standardized testing, their children are likely to learn critical reading and math skills, North argues. Other people’s children may not.

Like North, I see no problem in teaching Pennsylvania children to find the main idea in a paragraph, to add, subtract, multiply and divide with fractions and decimals, or to learn sequencing, rounding and estimating. Apparently, the school teaches these skills in a boring way, without integrating reading and math into history and science. But it is possible to teach reading comprehension and math skills without drudgery.

“Standardized testing is not the devil,” writes Robert Pondiscio. “Test prep is the devil.” Time-wasting test prep is most likely to be a problem at high-poverty, low-performing schools, he adds.

At my South Bronx elementary school, we had a Teachers College consultant who encouraged us to “teach tests as a genre of literature.” But even that pales in comparison to a grad student of mine who was mandated to spend two hours per day on test prep from the first day of school.

Instead of boycotting the tests, parents should demand good teaching, Pondiscio writes.

. . . I would march into the school office the first day of school with the following bargain: “I’m sure you agree the best test prep is great teaching and a robust curriculum, Ms. Principal. So let’s keep our focus right there. Don’t worry about spending my child’s time and your budget dollars on test prep materials. Because if they show up in our kids’ classrooms, we can promise our kids won’t be showing up for the test.”

Pro-testers think anti-testers are like parents who won’t vaccinate their children, suggests Alexander Russo.

Comments

  1. Roger Sweeny says:

    The problem isn’t the testing. It’s putting everyone in the same room and doing the same things. Some kids NEED that help. Some, like Professor Slekar’s kid, don’t. The kids who need the help should get it. The ones who don’t should be doing something else.

    Either hire exceptional teachers who can “differentiate instruction” or put kids with different needs in different places.

  2. Thank you, Joanne. My liberal friends are driving me nuts with their “my child did well on xyz standardized test, so public education works!” comments on Facebook. I am very tempted to tell them that despite their progressive opinions, they really aren’t liberal *enough.* :9

  3. Richard Aubrey says:

    Testing and test prep are either good or bad depending on whether the first tests for actually useful knowledge and how well the second prepares for the first. If a test doesn’t test for useful knowledge (global warming, say, doesn’t count), then it’s useless except to see how well the school teaches to the test.
    Don’t see why you can’t design a test to test for useful knowledge. If you do, then learning it will be a two-fer. Test results are good and the kid learns something.
    Perhaps many tests are already aimed at useful knowledge, and what we have is the mush-head view that memorization is fascist and you don’t have to know stuff. Just think good thoughts according to the Dept of Ed’s Good Thoughts Syllabus.

  4. Cranberry says:

    I would have fallen into the “testing is not a problem” camp a few years ago. Having witnessed the effect of high-stakes testing on our local school, I am not a fan. On the plus side, our local school now makes every effort to teach young children to read. The administration has also made it clear to parents and teachers that knowledge of basic math facts is not optional.

    After that, however, “teaching to the test” takes hold. In some districts, testing may keep the administrators honest. In other districts, it limits the range and depth of instruction. Any number of items are taught only because they appear on the state tests. What doesn’t appear on the state tests isn’t taught; there isn’t time for it. Writing instruction is divided between what’s necessary for the open-response items and preparation for a creative college application essay.

  5. SuperSub says:

    I took standardized tests throughout my school career…and we had plenty of science, social studies, and other subjects taught to us. The difference? NCLB.
    The tests were truly used to evaluate students and guide instruction, not held over administrators’ heads.

  6. Thinly Veiled Anonymity says:

    Standardized tests are supposed to be measures of academic competence — not actual academic accomplishments themselves.

    That said, a survey of any kind doesn’t measure what it WANTS to measure; it measures what it measures. In the case of standardized tests, what is measured is how well students do on standardized tests. The question, then, is how well is performance on standardized tests correlated with what it wants to measure, namely academic competence?

    That depends, of course, on the test. Some tests are better than others, and the formulaic mass-graded essay tests are somewhere near the bottom of the pile. But if there’s any daylight between the two, between test performance and academic competence, then the former can be addressed independently of the latter.

    Continuing to use essay tests as a case study (mindful that the principle can be transferred to other sorts of tests as well), what essay tests are supposed to be measuring is a student’s ability to write. But to the extent that students are conditioned to provide stock, five-paragraph responses to the test prompts, their test performance soars and their actual writing ability plummets, to the dismay of their college instructors. This is precisely because what amounts to excellent performance on the exam is not just different from, but actually at odds with what is supposed to be measured.

    That most standardized tests DO have significant gaps between what they measure and what they want to measure can be confirmed simply by looking at the fact of “test prep”. I’ve often said that by far the best preparation for the SAT is 12 years of good schooling. The SAT is, generally speaking, a pretty good test of what it wants to measure (or at least it used to be — I’m not as familiar with it as I once was); test prep doesn’t affect scores that much at all.

    My suspicion — and it’s just a suspicion — is that all the mad test prep that schools are doing doesn’t affect their scores that much, either, and that if they just mellowed out and got serious about a full, rich curriculum taught by people who know their subjects, test scores would go up far more than they do under a regime where what’s studied is the test, just the test, only the test, and naught but the test.

    In other words, I’d bet that the tests are actually pretty good at measuring what they intend to measure (essay tests excepted — there is no way to mass grade good writing). Nevertheless, the approach of most schools seems to be premised on the notion that they aren’t good measures, and it’s affecting instruction. It may be as SuperSub says — that NCLB is responsible. But I was never a fan of that legislation to begin with.

    And for that reason — not because the tests are evil but because they have deleterious effects on schools — I’m fully in favor of parents and students taking a stand against the testing regime. And any brave teachers and administrators who feel so inclined should join them.

  7. test prep doesn’t affect scores that much at all.

    In the case of the SAT, you’re completely wrong, because you don’t understand what “on average” means. Research shows that, on average, SAT prep improves scores by 50 points or so. But within that average, results range from “scores go down slightly” to “scores improve 300-400 points per section”.

    And, mind you, that was all test prep of any sort, not closely controlled studies of quality test prep.
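
    To put rough numbers on that (purely hypothetical figures, chosen only to show the shape of a distribution, not taken from any study): an average gain of 50 points is perfectly compatible with outcomes running from small losses to very large gains.

    ```python
    # Hypothetical per-student score changes after SAT prep.
    # Illustrative only; not data from any actual study.
    gains = [-20, -10, 0, 5, 15, 30, 50, 80, 120, 230]

    mean_gain = sum(gains) / len(gains)
    print(f"average gain: {mean_gain:.0f} points")        # 50 points
    print(f"range: {min(gains)} to {max(gains)} points")  # -20 to 230
    ```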

    In the case of standardized tests, what is measured is how well students do on standardized tests. The question, then, is how well is performance on standardized tests correlated with what it wants to measure, namely academic competence?

    The first sentence is wrong, of course, and you seem to realize this because the second sentence completely contradicts the first.

    Really, those of you who blithely declare that “standardized tests” don’t work really need to stop repeating it simply because you want it to be true.

    what essay tests are supposed to be measuring is a student’s ability to write.

    And wrong yet a third time. Essay tests are not and were not ever intended to measure a student’s ability to write. Quality of writing and mechanics is the fourth of four major criteria, and the least important.

    It’s also quite apparent that you don’t understand that a “full, rich curriculum” and “standards” are not mutually exclusive. Perhaps you should go read up on this topic. Or maybe prepare for a standardized test on the meaning of standards in school curriculum.

  8. I’ve long favored an admittedly quixotic idea: institute random testing in grades 3-8. If you accept the proposition (and really, I don’t see how it can be sensibly disputed) that in an era of high-stakes accountability, testing will exert not just a disproportionate influence on classroom practice but complete dominance over it, then you must have an accountability system designed to elicit the best possible practice. Thus if you want all children to have a broad, rich curriculum and to demonstrate certain skills within those disciplines, you create a system wherein the schools have no idea which students will be tested, on what subject and on what day. The only “defense” against this would be to ensure that all subjects are taught well to every child.
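
    As a toy sketch of the mechanics (every name and number below is hypothetical; a real system would draw from actual state enrollment data):

    ```python
    import random

    # Hypothetical roster, subject list, and testing window.
    students = [f"student_{i:03d}" for i in range(1, 501)]
    subjects = ["reading", "math", "science", "history", "art", "music"]
    test_days = [f"day_{d:02d}" for d in range(1, 21)]

    # Draw a small random audit: the school cannot predict which
    # students are tested, on what subject, or on what day.
    audit = [(s, random.choice(subjects), random.choice(test_days))
             for s in random.sample(students, k=25)]

    for student, subject, day in audit:
        print(f"{student}: {subject} on {day}")
    ```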

  9. Thinly Veiled Anonymity says:

    Cal-

    I’ve three issues with what you’ve said:

    FIRST:
    When I said that “test prep doesn’t affect scores that much at all” I was expressly talking only about the SAT, hence my use of the semicolon. Now, perhaps I’m completely wrong, as you suggest, but I’ve two things to say in response. First — I was making a claim about the old, pre-1994 SAT, and I believe that I am much less wrong about that. Second — if I am completely wrong, then I’m simply wrong about the SAT, not about my overall point.

    SECOND:
    Now, I said the following:

    In the case of standardized tests, what is measured is how well students do on standardized tests. The question, then, is how well is performance on standardized tests correlated with what it wants to measure, namely academic competence?

    To which you responded:

    The first sentence is wrong, of course, and you seem to realize this because the second sentence completely contradicts the first.

    Really, those of you who blithely declare that “standardized tests” don’t work really need to stop repeating it simply because you want it to be true.

    I don’t think you’re thinking very clearly here. The first sentence can’t be wrong: it’s a truism. I’m not saying that’s all that the tests measure, merely that performance on the exam is the only thing that is necessarily measured by performance on an exam. How can you disagree with this?

    Nor did I say that the tests don’t “work” — only that they don’t necessarily work. Whether they work or not has to do entirely with how strong a correlation exists between what is measured and what is sought to be measured. Think of it this way: you might have a ruler that has inch marks on it, but the inch marks are actually 1.34 inches apart. Your ruler still measures length in terms of marks, like all rulers, but the data you’re getting isn’t what you want to be getting. But if you have a ruler that is properly calibrated, you’re still measuring the same thing — length in terms of marks on the ruler — but the measurement you’re getting is useful for something else: determining inches.

    I can’t for the life of me figure out what you find objectionable about this reasoning, nor can I fathom how an educated person such as yourself can’t see that both sentences are not only true, but so obviously true that you’d really have to have something wrong with you to not see that they’re true. Nor do I see any contradiction. I can only assume that you weren’t reading carefully.

    THIRD:
    Finally, as for the essay tests, again — like the factual issues about the SAT, I can be wrong on this point without harming my overall argument. I don’t think I’m wrong, though. I hardly think it’s a stretch to think that the essay tests are supposed to measure a student’s ability to write. From the College Board:

    The SAT essay measures your ability to:
    * develop a point of view on an issue presented in an excerpt
    * support your point of view using reasoning and examples from your reading, studies, experience, or observations
    * follow the conventions of standard written English

    Those three things are foundational blocks of an ability that educators typically call “writing ability”. Being able to write flawless, beautiful prose is useless if you’ve nothing to write about. And grammar and content are useless without dialectic and rhetoric.

    And the SAT essay test wants to measure these things. It doesn’t — it measures what it measures — but it wants to.

    The same goes for pretty much any mass-scored essay test given to students.

  10. Cranberry says:

    The same goes for pretty much any mass-scored essay test given to students.

    Just wait until the software salesmen manage to persuade states to outsource essay grading to machines. Then the fun will really start. All this anti-testing and anti-anti-testing furor is only a warm-up.

  11. Now, perhaps I’m completely wrong, as you suggest, but I’ve two things to say in response. First — I was making a claim about the old, pre-1994 SAT, and I believe that I am much less wrong about that. Second — if I am completely wrong, then I’m simply wrong about the SAT, not about my overall point.

    I know you were only talking about the SAT. You are wrong. You were wrong about the larger point because you were using your beliefs about the SAT to “bolster” your larger point.

    Those three things are foundational blocks of an ability that educators typically call “writing ability”.

    Really. So if someone said you were a “good writer”, they would be talking about the ability to develop an idea? Normally we call that “critical thinking”, as in “She’s a good critical thinker, although her writing is weak.”

    The rest of the writing section of the SAT is a test on writing skills. But the essay is a test of reasoning in written form. And mechanics are a minimal part of the score.

    It doesn’t — it measures what it measures — but it wants to.

    No, it does. I am extremely knowledgeable about what the SAT essay grades, and it unquestionably evaluates the writer’s ability to express a point of view with support. It’s not terribly granular, but it gets the job done. And that’s all that can really happen in a test taken by over a million people a year.

    I do get tired of snobs sneering at the five-paragraph essay (which is not required, by the way). For non-writers and people of lower ability, the five-paragraph essay is an essential structure that allows them to realize that ideas can be organized and expressed in written form. It’s an essential tool and there’s nothing wrong with teaching it. Students who can’t master the five-paragraph essay shouldn’t go on to anything else.

    merely that performance on the exam is the only thing that is necessarily measured by performance on an exam.

    Because it is manifestly absurd. Tests are designed to measure many things, and most standardized tests measure them quite well.