Selling unproven software

Schools spent nearly $2 billion in 2006 to buy education software of dubious quality, writes Todd Oppenheimer in the spring issue of Education Next. Software companies claim their products have been proven effective, but their research is shoddy and misleading.

The government’s What Works Clearinghouse has rejected the validity of 75 percent of studies backing various instructional software programs, earning the nickname of “the Nothing Works Clearinghouse.”

The nickname carries an important double meaning. To some, it’s another example of governmental blockheadedness — specifically, that understanding how teaching and learning work in the real world is beyond the skill of a federal agency. To others, including many leaders in the research community, the message is actually more harsh. It is that most new classroom gimmicks don’t add much of value, and studies packaged to suggest otherwise are to be treated with great suspicion. In fairness, suspicious research sometimes contains perfectly innocent flaws. That’s because truly scientific research is extremely difficult, time-consuming, and costly — and thus very rare — which is precisely why the WWC has found so few studies to be satisfactory.

The unproven software might be effective. But there’s a great risk of wasting billions of dollars.

  1. I wonder how much of the software used in speech pathology and special ed has been proven effective by anyone other than the publisher of the software.

  2. wayne martin says:

    Hmmm... anyone who reads this article could come away with an unfavorable impression of software, or at least of educational software.

    > When companies that sell instructional software
    > used to come calling on Reid Lyon, expert on
    > reading instruction and former advisor to President Bush,
    > he played a little game. First, he listened politely to the sales
    > reps’ enthusiastic pitches and colorful demonstrations of
    > how computer software can build reading skills in new ways.
    > Then he asked to see their technical manuals.

    This opening paragraph causes me to think that the author doesn’t understand software very well. Mainframe software required manuals. Well-designed PC/interactive software is designed not to require manuals to get up and running. A meaningful demonstration should be possible within ten minutes, or the software package will not stand on its own.

    > One reason is that researchers who evaluate classroom exercises
    > and educators who work inside those classrooms represent two
    > often conflicting cultures

    This is true when automating any system. The “new” (automated) and the “old” (current) will never see eye to eye, or speak the same language. The goal is to educate the “old” folks to see the capabilities and the power of the tools the “new” folks are providing.

    There are dozens of books on managing a successful software project. In the cases that work, the project works closely with the client, defining checkpoints where the client is given access to the work in progress to provide comment and feedback. School software is a little different, since it is intended for the mass market rather than a single client. However, a clever software project manager will find one or more beta testers and work closely with them during the development of the project. Failure to proceed along those lines could well prove fatal to a software product.

    Schools would be well advised to do their own evaluation. Have the Asst. Sup. for Instruction obtain some sort of evaluation kit or license, install the software, and “have at it”. If you can’t understand it within a very short period of time, then box it up and send it back. A software package that is supposed to provide reading instruction should not need a user manual. There might be a manual for teachers covering various advanced functions, such as client software that uploads student evaluations to the server workstation.

  3. wayne martin says:

    > I wonder how much of the software used in speech pathology
    > and special ed has been proven effective by anyone other
    > than the publisher of the software.

    Such questions can be directed to the salesperson for the software. If they can’t provide reference sites, then you will either have to evaluate the package yourself or look to another package/vendor.

  4. Walter E. Wallis says:

    Anyone know of a program that will drill my grandson on his Wii for 15 minutes, then give him a quiz and reward him with play time commensurate with his score?

  5. When I worked in the computer games business years ago, we used to say “those who can program, program. Those who can’t program, program educational software.”

  6. “Then he asked to see their technical manuals.” Was this perhaps a request for the instructions for the teacher on how to adjust lesson plans to work with the software and otherwise best utilize it to supplement teaching? That’s a different matter than the built-in instructions on the screen, and I’d expect it to be equally important. If the lesson plan and the materials provided to the students are haring off in different directions… (This is something that often went wrong without computers, too, as I recall from my school days in the ’60s.)

    It could also have been a request to see documentation of just how the software computes scores, decides when a student is ready for the next level, and tries to move them from the fun games to the actual learning. Those kinds of specifications are pretty important for assessing whether a program was properly designed, and hard to deduce just by running it, because much of the bookkeeping and program logic is hidden. However, anyone who received such documentation and just took it at face value would be terribly naive about software. Almost always, the specifications say how the program was supposed to work, not how it was actually written and “debugged”, but they give you a starting point for figuring out what the program really does when you start running it.