Low grades for low standards

Compare the rigor of state proficiency standards in *Education Next*. Whether Johnny can read depends on where he’s being tested, write Paul Peterson and Frederick Hess. Tennessee, Texas, and Oklahoma have the lowest standards; South Carolina, Maine, Missouri, Wyoming, and Massachusetts earn an A grade.

Comments

  1. Even though I’ve written about the same topic in the past, I think it’s too simplistic to compare standards this way.

    Yes, standards in one state can be tougher than in another, but they can also be qualitatively different. That is, the *type* of math skills/knowledge being assessed can vary across states.
    For example, Hess & Peterson gave Michigan and Florida about average marks for their elementary math assessments. But that doesn’t mean the standards are comparable. Look at Michigan’s math assessment and you’ll see it asks for more written explanations than either the NAEP or Florida’s, which asks for written definitions and answers (but not explanations).

  2. I take all claims of “proficiency” with a boulder of salt because the word can mean anything the user (or, in this case, state educrats) wants it to mean. This article demonstrates my point.

    When I hear someone is “proficient,” I want to hear concrete examples of their “proficiency.”

    For example, if someone is “literate,” what does that mean? That they can recognize the alphabet? Write their name? Read sentences haltingly? Use phonics to pretty much decode anything? Comprehend newspapers? Write essays? When country A is “more” (or “less”) literate than country B, are they being held to the same standard?

    Another ambiguous term is “fluent.” Some English speakers love to throw that around even if they have only a smattering of a foreign language or a far-from-perfect command of it.

    But if “fluency” is defined as the ability to respond without a learner’s hesitation, that still leaves open the possibility of ungrammatical, poorly worded (but instant) responses. It is possible to speak a language badly and be “fluent” in that sense at the same time.

    Many immigrants achieve that state. As long as they understand others and can get their point across, why aim for perfection? I see nothing wrong with this. They are still far more “fluent” in my book than people who grossly overestimate their language skills. And in my experience, such immigrants don’t brag about their English. It is a practical tool for them, not an ego-booster.

  3. Tom West says:

    you’ll see they ask for more written explanations than the NAEP

    I always hated those (and now my son is hating them too). If you’re a math geek, it’s a heck of a lot easier to do than to explain.

    It’s why learning calculus is a high school subject, while *proving* calculus (God gives you 1 and the successor function, you go on from there…) is only taught in the “Math for Mathochists” stream of university.

  4. Amritas wrote:

    I take all claims of “proficiency” with a boulder of salt

    Yeah, but proficiency testing is pretty elderly stuff. It’s been around for so long and been used by agencies that demand accurate information that I find it difficult to believe there isn’t something in the way of science or engineering with which to determine which test is most informative/accurate.

    Fer instance, the Department of Defense places a good deal of value on determining which sturdy lad or lass is most likely to complete pilot training and which shouldn’t be allowed to start. It seems that after X number of decades of doing that type of testing there ought to be some way to tell the better tests from the poorer, the new, improved test from its unimproved predecessor.

    In this case it’s a comparison of the results of the state tests, where applicable, to the results of the NAEP. If the NAEP’s a lousy test, then the foundation of the study falls apart.

    Beyond that, I don’t know enough to judge whether the sort of comparison being described in the table is valid, but it does seem that there’s an unspoken assumption: that there’s no difference between the states other than the rigor of their state proficiency tests.

  5. Amritas says:

    allen,

    “Yeah, but proficiency testing is pretty elderly stuff. It’s been around for so long and been used by agencies that demand accurate information that I find it difficult to believe there isn’t something in the way of science or engineering with which to determine which test is most informative/accurate.”

    That’s not the issue as I see it. I am not questioning the idea of proficiency *testing*. It is, I think, possible to create the “most informative/accurate” test, at least for certain academic fields. So let’s suppose such a test exists. If some states use it and others don’t, can we really compare performance? No. Now let’s suppose all the states use the same ideal test but they draw the lines of “proficiency” differently. State A educrats say 2/3 right is “proficient” whereas State B educrats say 3/4 right is “proficient” (a short sketch after the comments below makes this concrete). Without an explicit standard, “proficiency” means whatever the user wants it to mean, just like “fluency” or “literacy” (which are types of “proficiency”).

    The bottom line is, when someone says X is “proficient,” I want to know what X can and cannot do. I want to know what kind of test X took, and what X scored on it. I want more than the label “proficient.” The word is meant to be reassuring, but I want to peel back the label and see what, if anything, is beneath it.

    “it does seem that there’s an unspoken assumption: that there’s no difference between the states other than the rigor of their state proficiency tests.”

    There are, in fact, so many variables that the debates will go on forever.

    No one-size-fits-all public education system can ever account for all variables to give every student an “equal” education. But smaller institutions (right down to the smallest of all – a one-parent homeschooler) can more easily adapt to those variables. That doesn’t mean they all actually will do so.

  6. Amritas wrote:

    I am not questioning the idea of proficiency *testing*.

    Neither am I. I’m saying that, given the length of time the testing business has been around, it should be possible to compare the various states’ tests in the manner that Education Next has done in the linked article.

    That comparison would necessarily have to include the artful dodges of the responsible agencies. Not much point in having a good test that’s subverted to reach a politically convenient result.

    A means of differentiating the states whose tests are nothing but a whitewash from the states that have valid proficiency tests would be a useful precursor to reordering the priorities of the education business.

    It’ll happen as a result of the gradually increasing element of competition in public education, but that competitiveness could be spurred by a useful standard of quality.
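
To make Amritas’s cut-score point concrete, here is a minimal sketch in Python. The scores and the 20-question test are invented for illustration; only the 2/3 and 3/4 cut fractions come from the comment above. Two states grading identical results report quite different “proficiency” rates.

```python
# Minimal sketch: the same scores on the same hypothetical 20-question
# test, judged "proficient" at two different cut lines (2/3 right vs.
# 3/4 right). The scores are invented; only the cut fractions come
# from the comment above.

scores = [10, 12, 13, 14, 14, 15, 15, 16, 17, 19]  # correct answers out of 20
TOTAL_QUESTIONS = 20

def proficiency_rate(scores, cut_fraction):
    """Share of students scoring at or above the state's cut line."""
    cut = cut_fraction * TOTAL_QUESTIONS
    return sum(s >= cut for s in scores) / len(scores)

print(f"State A (2/3 right): {proficiency_rate(scores, 2/3):.0%} proficient")  # 70%
print(f"State B (3/4 right): {proficiency_rate(scores, 3/4):.0%} proficient")  # 50%
```

Same students, same test; only where the line is drawn changes, and the headline “percent proficient” moves twenty points.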