Web Exclusive

R. U. Robot Ready?

Carnegie Mellon University students face the societal implications of AI

By Jennifer Keating

April 25, 2023

This article originally appeared in the Summer/Fall 2019 issue of Liberal Education.

Algorithms designed to narrow job applicant pools, network platforms that process data to “read” case law, cars that upload new operating systems overnight, and pocket-size devices that hold troves of vulnerable personal data—these are not potential anomalies of an imagined future. They are already features of everyday life. As artificial intelligence (AI) continues to develop in its sophistication and integration into contemporary society, how well are we preparing students to navigate this reality? What language do we use to describe these systems? How does the public understand or imagine the capabilities of state-of-the-art systems? How do we equip designers to assess and predict the influences and impacts of the systems that they develop?

“The ethics of AI” has become shorthand for several complex and perhaps delicate considerations about the social and cultural implications of advancing technology. Concerns that have largely been the focus of computer scientists, roboticists, and engineers are rapidly moving into mainstream conversations in news outlets, congressional hearings, and our lives. No longer can technologists be alone in interrogating the intended and unintended influences of advancing AI on communities. Our world of ever-increasing technology requires careful contemplation across many disciplines.

Carnegie Mellon University (CMU) is harnessing the expertise of faculty members across campus to create innovative curricular offerings that expose students, regardless of their majors, to many modes of inquiry into the ethical and social ramifications of AI and other technologies. Our emerging efforts introduce students in technical fields, the humanities, and the social sciences to questions that will doubtlessly persist as AI advances in virtually all areas of our economic markets, public governance, and private lives.

Xiang Zhi Tan, a PhD student in Carnegie Mellon’s Robotics Institute, tests the Baxter robot’s ability to assess its own performance at tasks. (Carnegie Mellon University)

In grappling with these questions in specific courses, we consider with our students the ways that AI can, for instance, diminish or exacerbate socioeconomic disparity. We consider how human-to-machine relationships shape and reconfigure human-to-human relationships and negotiations of power, ranging from relationships that involve the most influential brokers of our government and corporations to those that involve individuals who have inequitable access to the internet and digital devices. We also trace technological advancements over time to determine how society has responded to systems like the printing press, the cotton gin, the steam engine, and various other advances that have had considerable impacts on human institutions and relationships. We then look for ways in which these past innovations, and their social and cultural consequences, offer lessons on how we might handle the current technological revolution. As we mine the past and present, we seek indicators of how we can model future interactions between humans and machines and how these interactions might shape our human relationships as individuals, communities, and nations.

Technologists like Illah Nourbakhsh at CMU, Alan Winfield at the University of the West of England, Bristol, and Noel Sharkey at the University of Sheffield have undertaken research into the ethics of robotics and integrated such work into curricular offerings. Anthropologist Lucy Suchman at Lancaster University has collaborated across disciplinary lines for decades. For example, in “Wishful Mnemonics and Autonomous Killing Machines,” coauthored with Sharkey, Suchman considers US military goals for increasing automated technology in warfare, articulating an urgency for addressing the existential threats associated with humans relinquishing consequential decision making to automated systems.

But how might that urgency be translated into other object lessons that consider the societal impacts of seemingly more innocuous technologies that also raise questions of human agency and control? How might a humanist attend to or interrogate the potential surveillance capabilities of a system like Amazon’s Alexa, or even the advancing autonomous vehicle systems of Uber, Argo AI, Aurora Innovations, and Tesla? Technologists at these companies can use customer information to teach systems to optimize safety, routing efficiency, and other features. Alongside technologists’ concerns, an anthropologist, a rhetorician, or a historian might analyze passenger routing data to consider how a company or a local government could monitor the movement of individuals or groups within a population. In societies under strife, an in-home device like Alexa, or data on population movement in autonomous vehicles, could allow a government to control a population through surveillance at levels well beyond historic examples. Think about the British military using Alexa as a tool in individual homes in a context like the Troubles in Northern Ireland, or about transportation data being used to monitor Black, Indian, and other populations under a political regime like South Africa’s apartheid government. When we move past the novelty of these devices and consider their more sinister applications in vulnerable societies, imagining the intended and unintended consequences of advancing AI offers rich modes of analysis in undergraduate classrooms.

As institutions of higher learning help students develop skills to attend to the dynamic issues arising in this era of advancing AI, they can pull in their various experts to develop rich, cross-disciplinary curricula, as we are doing at CMU. Some examples of how our undergraduate curricula respond to and shape circumstances around enhancing technologies include offering:

  • a major in artificial intelligence in the School of Computer Science that integrates courses on societal impact, modules on ethics, and responsible technological design principles;
  • a minor in societal and human impacts of future technologies (SHIFT); and
  • the Grand Challenge Seminar: Artificial Intelligence and Humanity (in Dietrich College).

While the AI major focuses primarily on helping students gain the sophisticated skills needed to build the next generation of technologies, its courses also emphasize awareness of and sensitivity to the potential implications of systems that do not follow responsible design practices. Courses meant to develop students’ specific technical skills include modules on ethical practices, and the program offers stand-alone courses on ethics and technological development.

The SHIFT minor provides a deeper emphasis on complementing technological skill with an understanding of the larger implications of advanced systems. Students majoring in a technical field can, for instance, take classes on the history of science and technology or ethics in philosophy. Students majoring in the social sciences or the humanities, meanwhile, can supplement their studies with a SHIFT minor to build their technical skills. This reflexive curricular design requires a combination of courses that supports students’ content development in their majors and also offers plenty of opportunities to explore AI’s technical elements or its social implications through a robust minor.

The Grand Challenge seminar AI and Humanity offers an integrated approach to challenges pertaining to AI and society. In Grand Challenge courses, which are held in CMU’s Dietrich College of Humanities and Social Sciences, students interrogate persistent societal problems and concerns such as racism and climate change. Taught by faculty teams, the seminars expose students to the power of multidisciplinary modes of inquiry and collaborative modeling in their first semester on campus. They set a tone of continual engagement with faculty, peers, and others in the campus community throughout students’ time at CMU.

A Robotics Institute team uses organizational and cognitive sciences to explore how AI can help humans work better together. (Carnegie Mellon University)

In AI and Humanity, which I teach with Illah Nourbakhsh in the Robotics Institute in the School of Computer Science, we build a common language with students to analyze various AI and robotics systems. We study narratives in various forms (films, plays, paintings, and television episodes) that explore human interactions with machines, in combination with detailed histories of the technical development of systems ranging from the internet to IBM’s Watson to autonomous vehicles and weapons. We consider how the networking of data facilitates rudimentary and sophisticated modes for surveillance. We also discuss what political contexts allow populations to tolerate or protest against such systems. Our learning outcomes include the following:

  • Identify, describe, and respond to historical examples of negotiations of power between human individuals and communities through materials presented in various mediums.
  • Develop verbal and written communication skills to diagram, describe, and articulate evaluations of the historical and contemporary evolution of machines and human relationships to these systems.
  • Survey a variety of narrative forms that explore human relationships to emerging technologies over time, including futurism.
  • Map foundational technology innovations that have resulted in and might lead to disruptive advancements in AI.
  • Create individual and/or collaborative narratives pertaining to the evolving relationships between humans and machines.

Throughout the semester, several course themes draw on the history of language as a basis for building a common vocabulary. Students work with the texts Keywords: A Vocabulary of Culture and Society and Keywords for Today: A 21st Century Vocabulary to build a shared language and consider the evolution of language. As we move through the semester, we grapple with the following themes:

  • Concepts of “artificial” and “nature”
  • The early internet and AI systems
  • Concepts of the individual and the self
  • Narrative
  • Identity
  • Labor
  • Economy
  • “State of the art” in robotics and AI
  • Surveillance
  • Information and data
  • Networks
  • Autonomous vehicles
  • Autonomous weaponry

As students study developing technologies and narrative imaginings of near and far futures, they are offered the intellectual space to wonder aloud. We watch episodes of the British television series Black Mirror that deal with autonomous weaponry, such as “Hated in the Nation,” and with concepts of personhood and sentience, such as “Be Right Back.” We read Karel Čapek’s science fiction play R.U.R. (Rossum’s Universal Robots) and compare it to Jordan Harrison’s play Marjorie Prime as we consider the power of narrative, the portrayal of humanoid robotic systems, and the manner in which these plays contend with human-to-human negotiations of power and conceptions of personhood. We analyze these plays in the context of the Narrative of the Life of Frederick Douglass, an American Slave, in which concepts of property, personhood, and citizenship are discussed in a historical framework. We also consider the plays alongside contemporary stunts like Saudi Arabia’s granting citizenship to the female humanoid robot “Sophia.”

Guest speakers visit the course, examining issues ranging from responsible design to the potential implications for autonomous weaponry regulation. Recent speakers include Lancaster’s Suchman; John Havens (IEEE, a technical professional organization); Mark Kamlet (CMU economist); David Danks (CMU philosopher and ethicist); and Louis Chude-Sokei (Boston University literary critic). We also hold public events, such as Public Engagement with AI & Robotics through the Arts in Pittsburgh, at which we hosted the creative director for the Pittsburgh Public Theater; the head curator for photography at the Carnegie Museum of Art; the director of new plays at City Theatre; and a professor of architecture who uses robotic systems to experiment with artisanal plastering techniques.

In addition to guest speakers, the class also visits research sites. We have toured the Community Robotics, Education and Technology Empowerment (CREATE) Lab and other robotics labs at CMU. Tours of local corporations include Argo AI, Aurora Innovations, and Uber. These tours allow students to see state-of-the-art technology and interact with engineers and technologists in both academe and industry to discuss the issues and ideas with which students engage throughout the term.

In July 2019, we piloted a guided-research course to build on themes from the first-year seminar. For the research course, a group of four students developed the focus of our shared work: the influence of technology on education. The work took shape around a discussion of an article in Science Robotics, “Social Robots for Education: A Review.” As we shared initial reactions to the reading, a rising second-year student in the School of Computer Science raised questions about how a child’s developmental sense of agency can be altered by interacting with embodied systems through social robots in contrast to virtual tutorial systems on a computer interface. The student cited the definition of agency—an ability to act and exact change—and questioned how it might be understood as a feature of a system, which is distinct, of course, from considerations of a child’s sense of agency. This opened up a theoretical and semantic interrogation of what agency means, as a keyword, in children and in a child’s interaction with the examples of social educational tools described in the article, which are modeled after a tutor’s role in a classroom.

Our student’s sensitivity to language and to the concerns associated with power negotiations that manifest in K–12 classrooms was an exciting extension of the first-year seminar. The group’s attention to language, to the history of technological advancement over time, and to technology’s intended and unintended consequences reflected an impressive trajectory as students moved from consuming ideas presented in the first-year seminar to leading our discussions and choosing their own project themes in the research course. The students’ guided research project demonstrated the promise of a dynamic cross-disciplinary approach to curricular development and its applicability to curricular design well beyond introductory-level classes.

The only way to successfully tackle the challenges of AI, as well as a range of other societal concerns—from climate change to racism to political rhetoric and democracy—is through cross-disciplinary inquiry and collaboration. While there is considerable work ahead to equip our graduates with the tools necessary to navigate and lead in a rapidly changing world, we believe an interdisciplinary approach, whether across a curriculum or within a single course, prepares them to attend to these challenges and the changes already underway.



Jennifer Keating is assistant dean for educational initiatives in the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University.