To end cheating, open up tests

Instead of boosting security for test questions to prevent cheating, why not have open tests? asks Eric Hanushek on Education Next.

He proposes developing a very large bank of test questions that cover the entire curriculum from basic to advanced topics. All questions would be made public. Teachers could teach to the test, knowing they’re covering the entire curriculum. Critics could challenge test questions they think are misleading, irrelevant or otherwise inappropriate.

Then, move to computerized adaptive testing, in which a student’s answers to an initial set of questions steer the test toward easier or harder items. Adaptive testing permits accurate assessment at varying levels while reducing the burden of questions that reveal little about an individual student’s performance. Such assessments would not be limited to the minimal-proficiency levels that today’s tests focus on, so they could provide useful information to districts that find current testing too easy. Students would be given a random selection of questions, and their answers would go directly into the computer, bypassing erasure checks, comparisons of responses with other students, and the like.
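The adaptive mechanism described here can be sketched as a simple “staircase” rule: step the difficulty up after a correct answer and down after a miss. This is a toy illustration only — real adaptive tests typically use item response theory, and every name and number below is made up for the example:

```python
import random

def run_adaptive_test(bank, answer_fn, num_items=10, start_level=3):
    """Toy staircase-style adaptive test.

    bank: dict mapping difficulty level -> list of questions.
    answer_fn(question) -> True if the student answers correctly.
    Returns a list of (level, question, correct) tuples.
    """
    level = start_level
    history = []
    for _ in range(num_items):
        question = random.choice(bank[level])  # random item at current level
        correct = answer_fn(question)
        history.append((level, question, correct))
        # Step up after a correct answer, down after a miss,
        # staying within the bank's difficulty range.
        if correct:
            level = min(level + 1, max(bank))
        else:
            level = max(level - 1, min(bank))
    return history

# Example: a hypothetical student who reliably answers items up to level 5.
bank = {lvl: [f"L{lvl}-q{i}" for i in range(20)] for lvl in range(1, 9)}
student = lambda q: int(q[1]) <= 5
history = run_adaptive_test(bank, student, num_items=12)
final_level = history[-1][0]
```

In this sketch the test quickly climbs from the starting level and then oscillates around the student’s actual ability (levels 5 and 6 here), which is exactly the resolving power a fixed-form test of the same length lacks.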

This is how the FAA tests applicants for a private pilot license, he writes. There are so many possible questions that it’s easier to learn the underlying concepts than to memorize all possible answers.

Students spend less time taking adaptive tests, because they’re not asked lots of too-easy or too-hard questions. Teachers get the results immediately.

Does Hanushek’s idea make sense?

 

 


Comments

  1. Yes. My school and many others are already paying for adaptive computerized tests that we can use as universal screens. We immediately take the information from those and use them to reexamine learning goals and groupings for our students. State testing data only really has punitive value months after the test is over. If the state were to have a flexible system we could frequently use for adjustments to instruction, we wouldn’t then need to spend additional, local taxpayer money to get information we can actually use.

  2. Open tests would have many advantages. In particular, as Hanushek points out, preparation for the test and instruction in the subject would not diverge. And critics could see and discuss the test questions while they were still test questions, not after the fact. Having the tests open to the public would take away the unnecessary mystery and could help raise their quality.

    I am skeptical about adaptive tests, mainly because it is easy for such a program to go wrong. Some students make careless errors or see the question in a way that didn’t occur to the test writer. The test should not “adapt” to a lower level in these cases. Also, adaptive testing can take the challenge away from some students; they will grow accustomed to questions that they can answer easily.

    Yet another pitfall: if open testing offers transparency, adaptive testing takes some of it away. A student doesn’t know for sure whether or when the program is adapting. It could be unsettling for a student to wonder whether the program was switching to a lower level. And if the program indicated when it was doing so, that would be unsettling as well. (“Wait! Stop! I get this problem!”) There’s something to be said for tackling the material unassisted, for better or for worse.

  3. I like the idea.

    Testing for a ham radio license works the same way.

  4. A comment on another website makes sense: increase test security by drastically limiting the time the tests spend in the school. Tests arrive one afternoon, are given the next day, and are in the hands of the returning shipper (FedEx, UPS, USPS) within an hour of that day’s school closure.

    • Lightly Seasoned says:

      No, it doesn’t. Districts are failed on the basis of LNDs (level not determined): kids on the rolls who do not take the test. One day doesn’t allow for extended-time IEPs, absences, or tracking down students who are suspended, have long-term illnesses, or are incarcerated. And yes, we have to run around doing all that. We get 7 days with the tests, and it’s a challenge even then.

      • Could there be an alternate date a week or so later? Or online? It’s not snark; I’d like to know. The suggestion was originally made by an active teacher.

        • Lightly Seasoned says:

          One day isn’t enough to travel to the detention center, homes, hospital, etc. unless you have a lot of people dedicated to the task. I’ve done the homebound students for just my department, and it takes me a week to get around to them all and proctor the series of exams (after school, of course). A small school might have only one or two, but once you have over 1000 kids, the numbers get prohibitive. At any one time, we usually have a dozen kids out on homebound instruction alone.

  5. Yes. Adaptive testing makes sense. The local school district does this two or three times a year to give teachers a sense of where each student is in math, reading, and language arts. While it’s not as detailed as a formative assessment, it does catch kids who need extra support, sometimes before teachers notice, and it can highlight those children who are ready for some enrichment even if they don’t qualify for a GT program.

    I can second the experience of having the question pool available for ham radio licensing. That made me fill in all those little gaps I had, so I can honestly say I came out of the testing process knowing much more than when I started. And isn’t that the whole idea?

  6. superdestroyer says:

    Open tests mean that Asian students, such as Koreans, Indians, and Chinese, will spend a large amount of time memorizing every question. Then, when the Asian scores go up relative to whites, whites will have to commit to spending more time memorizing all of the answers.

    Open tests mean changing from teaching to the test to memorizing the test.

  7. I am skeptical about adaptive tests, mainly because it is easy for such a program to go wrong.

    Three words:  Software Quality Assurance.  We can make software reliable enough to fly an airplane with you aboard (and do); making testing software work correctly is a piece of cake.

    Some students make careless errors or see the question in a way that didn’t occur to the test writer.

    Eliminating ambiguous questions is one of the features of open testing.  Careless errors are only human, and can be distinguished from lack of understanding by performance on other questions of similar difficulty.

    Also, adaptive testing can take the challenge away from some students; they will grow accustomed to questions that they can answer easily.

    I think you misunderstand what adaptive testing is intended to do.  If the testee’s achieving 100% at a particular level, the difficulty goes up or moves to later concepts.  Two 6th-grade students might both score 95+% on 6th-grade math questions, but adaptive testing might find that one achieves 80% on 7th-grade material and the other achieves 80% on 9th-grade material.  A non-adaptive test cannot resolve such things in reasonable time.

  8. This sounds wonderful. With all the cheating being uncovered, I have to wonder whether some of the teachers I’ve worked with have done the same, and whether their wonderful test results were earned fairly.

    What I would like about the open system is the opportunity to see the questions to reduce a common reason for getting them wrong: unfamiliarity with the vocabulary or terms used in the questions. I mostly work with low readers, who can perform adequately in the classroom but have serious difficulty reading the test, or understanding the questions, if they are put in sentences longer than 8-10 words or paragraphs longer than a sentence or two. The more the test-maker tries to make the question “real-life,” the greater the likelihood that the verbiage is just too dense for my students to make sense of.

    With open testing, I could teach them how to analyze the question, using real examples. I could make sure that I teach the vocabulary needed (particularly important given my large number of non-English-speaking students).

  9. Adaptive tests also adjust downward. That was part of my point. Students would get used to tests that turned easier whenever they struggled with the problems.

    • If they’re consistently “struggling” (failing) on questions at a particular level or concept, it means they haven’t mastered that part yet.  In a subject like math it means they need to go back to earlier material to re-teach, and determining where to start is one of the most important elements of the process.

  10. My concern is that kids will give up on questions that look difficult, knowing the computer will make things easier if they do so.

    No, struggling isn’t necessarily the same as failing. You have to struggle with certain things to reach another level. Students go through bumpy times when the material seems very difficult. Then they “get” it, and that material becomes easy. Its difficulty at certain points does not mean that the student should be given easier material. (There are cases when it does mean this and cases when it does not.)

    There are other complications as well. Most subjects consist not only of progressions, but also of discrete topics. A student might have weak knowledge of one topic but strong knowledge of another.

    If adaptive tests do become the norm, each test report should state when and why the test adapted to the student. That way, if the shift from one level to another isn’t well founded, the teacher, student, or parent can point this out.

    • If a student is still struggling with something at the time of the test, do they need more work, or do you really think they’re ready to move on anyway?

      Moving test questions back to the pre-struggling material pins down the material mastered and where teaching needs to start.  That seems to be what testing ought to do.