Teaching Responsible Computing Playbook

Talking About Unanticipated Consequences

Author: Vance Ricks

This section is for Computing instructors who want to build the idea of unanticipated consequences into their Computing coursework and their conversations with computing students. Focusing on that idea has several benefits. It is a good way to introduce your students to knowledge from other disciplines. It can encourage your students to prioritize those who are affected by technological developments. And it can help your students become accustomed to considering what will happen when the tools they develop are used in unexpected ways, and to planning accordingly.

Most computing students will already be familiar with one kind of unanticipated consequence: a program fails to run because of a syntax error that the programmers didn’t catch ahead of time. The topic of this section is more like (but also importantly different from) a logic error that the programmers didn’t catch: the program will run, but the fact that its results are wrong will not be immediately apparent.
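
To make that contrast concrete, here is a minimal Python sketch (invented for this section, not drawn from any particular course). The syntax error stops the program before it can run at all; the logic error lets the program run and quietly report a wrong answer.

```python
# A syntax error is caught before the program runs at all:
#     def average(values)      <- missing colon; Python refuses to execute this
#
# A logic error is quieter: the program runs, but the answer is wrong.
def average(values):
    total = 0
    for i in range(len(values) - 1):  # Bug: the off-by-one skips the last item
        total += values[i]
    return total / len(values)

print(average([2, 4, 6]))  # prints 2.0, but the true mean is 4.0
```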

It’s possible to learn the history of technological developments as a history of the unanticipated consequences of the introduction or proliferation of those developments. Not every unanticipated consequence is a negative one. (Consider, as an example, the bicycle, which played and continues to play an important role in movements for greater gender equality.) However, the usual connotation of “unanticipated consequences” is of some outcome or result that is not merely unexpected, but also often unwanted. This is why “unanticipated” differs in a key way from “unintended”: the former implies a lack of knowledge, while the latter merely implies an absence of intent. Discussing some of the unanticipated consequences of digital technologies is a good idea, for a few reasons.

First, it is a good way to introduce your students to concepts and information from a variety of other disciplines. In addition to History, disciplines such as Anthropology, Cultural Studies, Environmental Studies, Political Science, and Sociology have important perspectives to offer on how and why different technologies have had unanticipated consequences. This multidisciplinary approach is an important corrective both to a presentist (i.e., focused only on recent events) perspective on technologies and to treating “technology” as a synonym for “digital computers”. Nearly every technological system, from agriculture to indoor plumbing to writing to digital computers, has had some consequences that its earliest creators and users did not -- and perhaps could not -- anticipate or plan for.

If done thoughtfully, that multidisciplinary approach will help your students practice distinguishing what people bear responsibility for from what people should be blamed for. Although “unanticipated” doesn’t strictly imply “undesired”, many unanticipated consequences of technologies are ones that the creators or developers of those technologies might not have expected or desired. If the benefit of historical perspective allows your students to say, “but they should have expected those consequences!” about earlier technological developments, then it should also allow them to wonder what they “should have expected” about the technologies that they’re designing, developing, and using.

That point leads to the second benefit of discussing examples of unanticipated consequences: it immediately raises the questions, “‘Unanticipated’ by whom? Did no one anticipate those consequences?” Without assuming that there are any omniscient humans or that every expression of worry should be treated as prophecy, you and your students can find cases where some consequences of a technology were unanticipated by those who designed or implemented it, but not by those who were outside of those design or implementation decisions. In other words: a discussion of unanticipated consequences can direct your students’ attention both to the developers and users of technologies and to those who are affected by those technologies. There will always be unanticipated consequences of technologies. However, a more diverse group of developers, from a diverse range of backgrounds, might be better able to anticipate, mitigate, or even prevent a wider range of them.

This leads to the third benefit of a focus on unanticipated consequences. “Responsible Computing” implies a richer conception of taking responsibility, which in turn implies both exercising and demonstrating care. One common failure to demonstrate care occurs at the design and implementation stages of a project. That could mean, for example, assuming that a technology will be used only in benign ways and only by people with benign intentions. It could also mean assuming that any unanticipated consequences are outside the scope of the project, and will just have to be dealt with by someone else later. Discussing examples of unanticipated consequences in both computing and in other technological systems can help train your students to ask about what will happen when -- not if -- the tools they develop are used in ways that they didn’t expect, by people whom they didn’t expect to be using those tools.


Key Questions:

  • Which examples of unanticipated consequences of a technological system will be discussed? Choose a diverse set of examples from a range of time periods, places, and types of technology. Also, take your students’ interests into account when choosing your examples.
  • Which other fields of study will you draw on for perspective? Incorporate knowledge from other perspectives and disciplines (e.g., Cultural Studies, History, Psychology) about how, when, and why some kinds of consequences go unanticipated.
  • How are those examples discussed? Tell your students about hindsight bias: the fact that an unexpected outcome seems obvious after the fact doesn’t prove that it was obvious at the time. On the other hand, “obviousness” can be a matter of perspective. From whose perspective(s) was an outcome not obvious, and why?
  • What broader lesson should students learn from those examples? “There will always be unanticipated consequences” can be an expression of hard-earned wisdom. It can, equally, be an expression of passivity and an evasion of responsibility. How can examples and discussions be framed so that, as much as possible, you are encouraging wisdom rather than passivity?
  • How will you apply that broader lesson in your computing courses? Here are some questions you could require students to answer and submit with any (substantial) software project you assign -- perhaps even before they’ve started to design and write it.
    • What do you think or hope it will be used for?
    • Who do you envision using it?
    • How do you envision them using it? (What are they using it for?)
    • What else could it be used for?
    • What might happen if a user you didn’t imagine uses it for purposes you didn’t expect? What can you put into place to expose, mitigate, or prevent any of those uses that might be harmful or otherwise bad?
    • How might your tool be used to hurt, control, or profile others who are more vulnerable than you?
    • If you were born in another country, or in a different part of this country, how might you feel about your tool?
    • How might it affect other parts of your own society? Other societies? The planet?
  • How should students apply that broader lesson beyond computing courses? Give specific examples of how software engineers or programmers iteratively applied their own knowledge of the phenomenon of unanticipated consequences.


Checklist

☐ Decide which (diverse set of) examples of a technological system’s unanticipated consequences you will discuss (keeping your students’ interests in mind).

☐ Incorporate knowledge from other perspectives and disciplines.

☐ Decide on how the example will be presented to the students.

☐ Decide what broader lesson(s) you want the discussion to convey.

☐ Discuss with students how they can apply those lessons to their Computing courses.

☐ Discuss with students how they can apply those lessons beyond their Computing courses.


Examples

Guilford College

At Guilford College, the CTIS 210 course (Introduction to Computer Programming) includes a unit on “Moral Machines”.

Checklist walkthrough (Here, we show how some of the checklist items in the previous section of this document could be used. The checklist did not exist when this assignment was designed, so this walkthrough is meant only for illustrative purposes.)

  • Decide which (diverse set of) examples of a technological system’s unanticipated consequences you will discuss (keeping your students’ interests in mind).
    • In this unit of the course, students visit MIT’s Moral Machine platform and browse the scenarios that are presented there. Students discuss the (non-autonomous) automobile as a technological system and some of its unanticipated social and environmental consequences. Finally, students use pseudocode to write IF-THEN-ELSE statements that express their judgments of how to address the scenarios that involve autonomous vehicles. As the students are presented with additional scenarios, they have to iteratively redesign their original statements to address those cases. (A hypothetical sketch of that kind of iterative redesign appears after this list.)
  • Incorporate knowledge from other perspectives and disciplines.
    • The students are pointed to information and perspectives from cognitive scientists, psychologists, urban designers, and moral philosophers to see the wide variety of challenges raised by autonomous and semi-autonomous vehicles. Some of those perspectives focus on the overall harms of “car culture” and the possibility that autonomous vehicles will only entrench that culture.
  • Decide what broader lesson(s) you want the discussion to convey.
    • We make assumptions as we develop technology, both about who uses the technology and who is affected by the technology (and those aren’t always the same people). A technological system such as self-driving vehicles raises numerous questions about not just balancing priorities, but also the assumptions, guidelines, ethics, and models of the world we use in determining those priorities.
  • Discuss with your students how they can apply those lessons to their Computing courses.
    • In post-assignment discussions, students are asked to acknowledge the importance of revising one’s models to reflect new information; the difficulty of representing ethical values in software; and the necessity of always considering the real-world effects of their programming choices even in software that seems “trivial” or “small”.
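
The course assignment uses pseudocode; the following Python sketch is purely illustrative, with scenario details, priorities, and function names all invented here rather than taken from the course. It shows how an initial IF-THEN-ELSE rule can be forced into redesign by a scenario its author never anticipated.

```python
# Hypothetical first draft: a student's initial rule for an
# autonomous-vehicle dilemma (all scenario details are invented).
def choose_action(pedestrians_ahead, can_swerve):
    if pedestrians_ahead == 0:
        return "continue"
    elif can_swerve:
        return "swerve"
    else:
        return "brake"

# A later scenario adds a complication the first draft never considered:
# swerving may itself endanger bystanders. The rule has to be redesigned,
# which is the point of the iterative exercise.
def choose_action_v2(pedestrians_ahead, can_swerve, bystanders_in_swerve_path):
    if pedestrians_ahead == 0:
        return "continue"
    elif can_swerve and bystanders_in_swerve_path == 0:
        return "swerve"
    else:
        return "brake"

print(choose_action(2, True))        # "swerve"
print(choose_action_v2(2, True, 3))  # "brake" -- the new information changes the decision
```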


Allegheny College

At Allegheny College, the course CMPSC 301 Data Analytics features guest presenters.

Checklist walkthrough (Here, we show how some of the checklist items in the previous section of this document could be used. The checklist did not exist when the course was last taught, so this walkthrough is meant only for illustrative purposes.)

  • Incorporate knowledge from other perspectives and disciplines.
    • Speakers from diverse disciplines and industries (e.g., Biology, Business Development & Marketing, Economics, and Psychology) were invited to give talks and host discussions about how they use computation and data analysis in their own research. The talks incorporated discussion of how to interpret results and make responsible decisions from the perspective of the speaker's area, and of what unintended consequences of data and its analysis may look like in that area. The themes of the class labs followed from these talks.
  • Decide what broader lesson(s) you want the discussion to convey.
    • In addition to course time spent understanding types of analysis and the use of computational strategies and statistical tools, the talks added a sense of realism to the course material, helping students see that ethical thinking is a mainstream professional value. For example, discussions followed from the notion that disciplines and industries come with their own inherent benchmarks of responsibility (for example, marketing: information accuracy; psychology: personal privacy; biology: correctness of data collection), and the talks were an effective way to introduce these modes of thinking and to show how professionals incorporate ethics into their fields.
  • Discuss with your students how they can apply those lessons to their Computing courses.
    • In terms of positive, yet unintended, consequences, students learn from these talks that real-world policies are often created on the basis of computational analyses. Since policies concerning political, humanitarian, and environmental issues, for example, can have widespread impacts, students begin to cultivate a newfound respect for the power of their own skills for working with data. They also come to realize that the worth of an analysis depends entirely on the quality of the data behind it, and therefore the importance of working with reliable and unbiased data. (A minimal sketch of that last point follows this list.)
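
That sketch, with entirely invented numbers (not taken from the course): the same analysis can run without complaint on corrupted data and still mislead anyone who acts on it.

```python
# Hypothetical illustration: one summary statistic computed on reliable
# data versus on data with a single unnoticed recording error.
clean_readings = [3.1, 2.9, 3.0, 3.2, 2.8]      # e.g., sensor values
corrupted_readings = clean_readings + [300.0]    # one mis-keyed entry

def mean(xs):
    return sum(xs) / len(xs)

print(mean(clean_readings))      # 3.0
print(mean(corrupted_readings))  # 52.5 -- the analysis "works", the answer misleads
```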


University at Buffalo

At University at Buffalo (UB), we teach a course on Machine Learning and Society where we discuss various kinds of bias that can creep into an ML system, including a discussion of deployment bias (as defined by Suresh and Guttag).

Checklist walkthrough (Here, we show how some of the checklist items in the previous section of this document could be used. The checklist did not exist when the course was last taught at UB, so this walkthrough is meant only for illustrative purposes.)

  • Decide which (diverse set of) examples of a technological system’s unanticipated consequences you will discuss (keeping your students’ interests in mind).
    • We talk about the Tay Chatbot, which was supposed to learn to talk like a teenager while interacting with other Twitter users. However, in less than a day Tay, “trained” by other Twitter users, was using racist and antisemitic language in its tweets and had to be shut down.
  • Decide on how the example will be presented to the students.
    • The example was presented as one of many possible ways bias can creep into an ML pipeline (as classified by Suresh and Guttag).
  • Decide what broader lesson(s) you want the discussion to convey.
    • The broader lesson for students was to realize that when designing an ML system, the designer should take into account how the system will be used in society (the Tay Chatbot being a case where the developers did not do this). A minimal, invented sketch of that failure mode follows this list.
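
Purely as an invented sketch (not Microsoft's actual design): a chatbot that adds every user message to its response pool, unvetted, will come to echo whatever it is fed. A designer who thinks ahead to deployment can at least put a moderation step between users and the learned behavior.

```python
import random

# Hypothetical sketch of naive online learning from user input:
# every incoming message joins the pool of possible replies, unfiltered.
class NaiveChatbot:
    def __init__(self):
        self.responses = ["Hello!", "Tell me more."]

    def chat(self, user_message):
        self.responses.append(user_message)  # learns from anyone, unvetted
        return random.choice(self.responses)

# One guardrail the designers could have anticipated needing: screen new
# material before it enters the response pool. The tiny blocklist here is
# only a placeholder for real content moderation.
class GuardedChatbot(NaiveChatbot):
    BLOCKED = {"slur", "insult"}  # stand-in for an actual moderation step

    def chat(self, user_message):
        if not any(word in user_message.lower() for word in self.BLOCKED):
            self.responses.append(user_message)
        return random.choice(self.responses)
```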


Authors and Contributors

Vance Ricks (author)

Oliver Bonham-Carter

Atri Rudra