

INFO 1260 / CS 1340: Choices and Consequences in Computing
Spring 2021
Jon Kleinberg and Karen Levy

Course description

Computing requires difficult choices that can have serious implications for real people. This course covers a range of ethical, societal, and policy implications of computing and information. It draws on recent developments in digital technology and their impact on society, situating these in the context of fundamental principles from computing, policy, ethics, and the social sciences. A particular emphasis will be placed on large areas in which advances in computing have consistently raised societal challenges: privacy of individual data; fairness in algorithmic decision-making; dissemination of online content; and accountability in the design of computing systems. As this is an area in which the pace of technological development raises new challenges on a regular basis, the broader goal of the course is to enable students to develop their own analyses of new situations as they emerge at the interface of computing and societal interests.


There are no formal prerequisites for this course. It is open to students of all majors.

This course satisfies the Knowledge, Cognition, & Moral Reasoning (KCM) distribution requirement. For Information Science majors, the course may substitute for INFO 1200 to fulfill major requirements. Students may receive credit for both INFO 1200 and INFO 1260, as the scopes of the two courses are distinct.


Principles of computing and its societal impact

  • We begin by discussing some of the broad forces that laid the foundations for this course, particularly the ways in which applications of computing developed in the online domain have come to impact societal institutions more generally, and the ways in which principles from the social sciences, law, and policy can be used to understand and potentially to shape this impact.

Algorithmic decision-making, fairness, and bias

  • Algorithms trained using machine learning are increasingly being used to evaluate people in a range of different contexts, including employment, education, credit, healthcare, and the legal system. We consider the ways in which these kinds of algorithmic evaluations may incorporate biases that are present in the human decisions they're trained on, and what mechanisms might be available to counteract these forms of bias.

Data collection, data aggregation, and the problem of privacy

  • Computing platforms are capable of collecting vast amounts of data about their users, and can analyze those data to make inferences about users' characteristics and behaviors. Data collection and analysis have become central to platforms' business models, but also present fundamental challenges to users' privacy expectations. Here, we describe the difficult choices that platforms must make about how they gather, store, combine, and analyze users' information, and what social and political impacts those practices can have.

Content creation and platform policies

  • One of the most visible developments in computing over the past two decades has been the growth of enormous social platforms on the Internet through which people connect with each other and share information. We look at some of the profound challenges these platforms face as they set policies to regulate these behaviors, and how those decisions relate to longstanding debates about the values of speech.

Experimentation in the design of user-facing algorithms

  • When computing platforms evaluate new features and functionality, a common paradigm is to try out different versions and measure user response. This means that platforms are engaging in long-running sets of experiments with human participants, and so it is important to ask how principles developed for reasoning about such experiments more broadly should be applied in the online domain.

Cyber-physical systems: robots, drones, and sensors

  • A number of the important applications of computing are embedded in the physical world -- robots, autonomous vehicles, and networks of sensors are some basic instances. Many of the issues we consider in the course have direct analogues in physical settings; we'll consider how these physical manifestations add important dimensions to the questions.

Assurance and accountability for algorithms

  • Many forms of computing come with different types of guarantees -- for example, the guarantee that an application has been tested, debugged, or proved correct. How do these guarantees correspond to different forms of accountability for the behavior of a computing application or platform? This is a broad question that we will consider as a way to synthesize a number of the themes that run through the course.