Does Facebook need ethics education?

There has been outrage over Facebook’s psychological experiment on 700,000 unwitting users. In order to test its ability to manipulate users’ posts, Facebook used an algorithm that altered the emotional content of their news feeds. (In half of the cases, it omitted content associated with negative emotions; in the other half, positive emotions.)

According to the study’s abstract, “for people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred.” The findings were published in the March 2014 issue of the Proceedings of the National Academy of Sciences (and reported in numerous places, including the Wall Street Journal article that informed this post).

Now, these findings aren’t surprising–who wants to be all cheery when their “friends” are down in the dumps?–but they left many people angry. An experiment of this kind isn’t just a misuse of data; it deliberately provokes people to post things they might not otherwise have posted, in a “space” (i.e., the news feed) that many consider their own, since it includes only what they want to include. (Yes, they’re mistaken in considering it their own, but Facebook does a lot to feed that illusion.)

Did Facebook have the right to conduct this experiment in the first place? Kate Crawford, visiting professor at MIT’s Center for Civic Media and principal researcher at Microsoft Research, says no. Moreover, she holds that ethics should be part of the education of data scientists. (For a more detailed exposition of this view, see danah boyd and Kate Crawford, “Critical Questions for Big Data,” Information, Communication & Society, 15:5, 662-679.)

What would “ethics education” look like in this context? Would it focus on the issues at hand, or would it examine ethics more broadly, with readings and analysis of ethical problems? Would it take the form of a professional development course, or would it start in high school or earlier?

It is possible that the Facebook controversy (and others like it) will lead to a greater emphasis on ethics in education. That could be promising if handled well. One pitfall of ethics education is that it may be reduced to specific issues and even mistaught. That is, those studying the “Ethics of Big Data” may never consider ethics outside of Big Data, or ancient ethical problems that relate to their own, or even the distinction between ethics and morality (which has been articulated in different ways but is worth considering in any case).

So ethics education, if taken up by “big data” and other nebulous entities, will need to go beyond a crash course or professional development session. Study ethics, but study it well. How do you do that? Read seminal texts, raise questions boldly, stay aware of your errors and fallacies, and put your principles and reasoning into practice. That’s just a start.


  1. My experience in a Computer Science department is that when someone says that we should teach “ethics”, what he really means is that we should teach HIS ethics.

  2. Stacy in NJ says:

    The path entrepreneurs seem eventually to tread – from disruptive innovators to big businesses protecting their turf and trying to limit their competition, allowing themselves to be co-opted so they can purchase political influence – was followed by Facebook long ago.

    Ethics now means whatever they can get away with while hiring public relations firms and paying off the appropriate academics with grants and gifts and politicians with campaign contributions.

  3. SuperSub says:

    Schools (primary, secondary, postsecondary and graduate) will have as much success teaching ethics as they do morals and values – which is none.
    Ethics in mainstream scientific fields is not maintained through feel-good seminars and professional development; it’s enforced by supposedly-independent review boards that are designed to shut down unethical research.
    Facebook, as a corporate entity, really cannot function in the same manner. Money corrupts, and Facebook will get its way most of the time. The only way to limit Facebook is by going after their money, either by boycotting or legal actions. I don’t see the core constituency of Facebook batting an eyelash at this incident, and since even Facebook supposedly didn’t keep records on who it used as test subjects, I doubt a class action lawsuit would succeed. Maybe a state AG could go after them, but it would be a hard case to make without victims.
    The real culprits here, the ones who should have known better, are the researchers and review board at Cornell and the journal that published the results. They knew what Facebook did was unethical, but explained it away simply by stating that the harm was already done.

  4. Mark Roulo says:

    Yes, there has been outrage. I’m still trying to understand *why*, however.
    The entire *point* of advertising is to change the behavior of the folks who are being targeted. This has been true for pretty much as long as advertising has existed. And *measuring* the change isn’t new, either. Fred Pohl has a great description of his time as an advertising copywriter at a company that sold books via mail order. It was common for them to prepare a bunch of different pitches for each book, send out a few thousand of each, and then send out many more of the ones that worked. This was in the 1950s.
    So Facebook is altering the *selection* (but not content … yet) of what gets shown on customer feeds and then measuring the results. This is different from any other advertising company measuring the result of various ad campaigns … how?