Marion Boulicault and Milo Phillips-Brown are part of a team working on transforming technology ethics education at MIT.
Boulicault, a PhD candidate in MIT Philosophy, is a neuroethics fellow with the National Science Foundation’s Center for Neurotechnology, for which she has organized ethics roundtables and taught ethics workshops in partnership with Momentum and MOSTEC, MIT programs for undergraduate and high school students. She also co-leads a National Institutes of Health-funded project examining the ethical implications of the use of neurotechnology for treating psychiatric disorders, and is a teaching fellow at Harvard University’s Embedded EthiCS program. Her dissertation on infertility measurement investigates technology ethics through the lens of feminist philosophy. For the past two years, she’s been a member of Harvard’s GenderSci Lab, an interdisciplinary research group engaged in generating feminist concepts, methods, and theories for scientific research on sex and gender.
Phillips-Brown is a postdoc in the ethics of technology in MIT Philosophy (one half of the Department of Linguistics and Philosophy), a research fellow in digital ethics and governance at the Jain Family Institute, and a member of the Advisory Board on the Social and Ethical Responsibilities of Computing. He has collaborated with professors from computer science, brain and cognitive science, and across the School of Engineering to integrate ethics into engineering classes. He also teaches 24.131 (Ethics of Technology) in MIT Philosophy.
Q: It seems that barely a day goes by without some controversy about technology in the news — for example, Facebook’s controversial policies toward political advertising on its platform. Do you think technology ethics education can help us understand and address these controversies? And if so, how?
A: Technology ethics education can definitely help us understand and address these controversies. But we believe that to do so most effectively, a new approach is needed.
If you open an engineering textbook and flip to the ethics section — if there is one — you’ll likely see historical case studies of technologies gone awry (the space shuttle Challenger disaster, say) and bite-sized versions of moral theories (for example, excerpts from the philosophers John Stuart Mill on utilitarianism or Immanuel Kant on why one should follow rules). This has been the traditional approach to technology ethics education.
The problem with the traditional approach is that it’s too far removed from what engineers and technologists do when they actually make things. You can, of course, learn from case studies of people’s mistakes, but students and instructors with whom we’ve worked say they feel alienated from traditional case studies and don’t always understand what these studies have to do with their own work. And the abstract realm of moral theory is, well, abstract!
It’s not always clear how to operationalize these theories in practice. And we’re philosophers: we don’t mean this as a knock on moral theory. It’s just that Mill, Kant, and most others in the business of doing moral theory weren’t necessarily thinking about the intersection of moral theory with technological change.
The alternative approach we’ve been piloting across MIT is teaching ethics as a set of skills (or what Aristotle would call techné). If we’re going to make a difference in whether our students make things ethically and responsibly, they have to know how to do that. They need ethical skills that they can apply to their own work.
This spans from skills for how to think about the seemingly mundane decisions they make on a daily basis in the lab, or in a meeting at their startup, to decisions about whether to accept industry funding or how to speak about their work in public, to fundamental decisions about whether a technology should be made in the first place. All of these decisions have ethical dimensions, and we want to teach students the skills to navigate them now and throughout their future careers.
Q: What does skills-based ethics pedagogy look like at MIT?
A: In 2018, we, together with Abby Jaques and Jim Magarian, began piloting a skills-based approach as part of the New Engineering Education Transformation (NEET). NEET is an interdisciplinary School of Engineering initiative that’s oriented towards competency- and skill-based learning. Over the course of a year, NEET students build a technology — like an autonomous drone, or a biological “microchip” that simulates the human gut — and during an in-class workshop, we provide a structured, step-by-step guide for students on how to recognize and think through some of the complex ethical and political dimensions of their technology.
We’ve also been working with professors in EECS [the Department of Electrical Engineering and Computer Science] to add ethics questions to engineering problem sets, with the expectation that students will grapple with ethical decision-making as they train to become engineers.
We don’t have all the answers. This is still very much an exploratory phase to figure out what works and what doesn’t with this new approach. One thing we’ve found so far is that students are more inclined to engage with ethical thinking when their professors signal that they care about ethical engineering. For example, professors can speak to why they care about ethics at the beginning of our in-class workshops. Putting ethics questions alongside technical material in problem sets is also effective because it signals that ethical issues are on par with, and inextricable from, technical ones.
Q: How would you like to see technology ethics integrated across MIT?
A: Ultimately, we would like to see MIT take a fully immersive approach to ethics education. By that, we mean ethical reasoning skills should be taught, valorized, and rewarded at every stage and in every dimension of undergraduate and graduate education. The result, we hope, is that students — and the Institute at large — would come to see technology, ethics, and politics as inescapably intertwined. That’s in contrast to a model where the engineer makes something, then thinks “let’s check for ethical issues.” Ethics and politics are implicated every step of the way when technology is created.
The MIT Schwarzman College of Computing is a great opportunity to exemplify this model. In a recent article in MIT News, the college’s dean, Dan Huttenlocher, wrote that “no other academic institution is taking on the scale and scope of change that we are pursuing at MIT.” The college has named David Kaiser, the Germeshausen Professor of the History of Science, and Julie Shah, an assistant professor in the Department of Aeronautics and Astronautics, as associate deans for the program in Social and Ethical Responsibilities of Computing, so there is an opportunity for the “scale and scope” and also the direction of this monumental change to encompass social justice.
Teaching ethics as a skill is a key part of this, as is having complementary classes in the School of Humanities, Arts, and Social Sciences that encourage students to see the ethical, political, and social nature of technology through the lenses of various disciplines. For example, Milo has co-taught an MIT philosophy class, Ethics of Technology, that addresses moral and political theory in relation to questions about technology that are making headlines right now. In this class, students read an article about China’s surveillance state alongside Foucault on the Panopticon, or a white paper on best practices for accessible data visualization alongside a recent academic paper in the theory of disability.
We are also currently partnering with Kate Trimble, the associate dean for public service, to integrate ethical reasoning into summer experiential education programs, such as UROP [Undergraduate Research Opportunities Program] and MISTI [MIT International Science and Technology Initiatives Program]. In doing so, we are working towards building an interdisciplinary, multimodal, and fully immersive approach to ethics education at MIT, one which provides students with opportunities for learning and practicing ethical reasoning skills across all of their experiences at the Institute.