One scientist’s journey from the Middle East to MIT

“I recently exhaled a breath I’ve been holding in for nearly half my life. After applying over a decade ago, I’m finally an American. This means so many things to me. Foremost, it means I can go back to the Middle East, and see my mama and the family, for the first time in 14 years.”

The words appear on a social media post next to a photo of Ubadah Sabbagh, a postdoc at MIT’s McGovern Institute, who in 2021 joined the lab of Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor at MIT. Sabbagh, a Syrian national, is dressed in a charcoal gray jacket, a keffiyeh loose around his neck, and holds his U.S. citizenship papers, which he began applying for when he was 19 and an undergraduate at the University of Missouri-Kansas City (UMKC) studying biology and bioinformatics.

In the photo he is 29.

A clarity of vision

Sabbagh’s journey from the Middle East to his research position at MIT has been marked by determination and courage, a multifaceted curiosity, and a dual role as scientist-writer and scientist-advocate. He is particularly committed to the importance of humanity in science.

“For me, a scientist is a person who is not only in the lab but also has a unique perspective to contribute to society,” he says. “The scientific method is an idea, and that can be objective. But the process of doing science is a human endeavor, and like all human endeavors, it is inherently both social and political.”

At just 30 years of age, Sabbagh has already disrupted conventional thinking about how science is done in the United States. He believes nations should do science not primarily to compete, for example, but to be aspirational.

“It is our job to make our work accessible to the public, to educate and inform, and to help ground policy,” he says. “In our technologically advanced society, we need to raise the baseline for public scientific intuition so that people are empowered and better equipped to separate truth from myth.”

His research and advocacy work have won him accolades, including the 2023 Young Arab Pioneers Award from the Arab Youth Center and the 2020 Young Investigator Award from the American Society of Neurochemistry. He was also named to the 2021 Forbes “30 under 30” list, the first Syrian to be selected in the Science category.

A path to knowledge

Sabbagh’s path to that knowledge began when, living on his own at age 16, he attended Longview Community College in Kansas City, Missouri, often juggling multiple jobs. It continued at UMKC, where he fell in love with biology and had his first research experience with bioinformatician Gerald Wyckoff at the same time the civil war in Syria escalated, with his family still in the Middle East. “That was a rough time for me,” he says. “I had a lot of survivor’s guilt: I am here, I have all of this stability and security compared to what they have, and while they had suffocation, I had opportunity. I need to make this mean something positive, not just for me, but in as broad a way as possible for other people.”

The war also sparked Sabbagh’s interest in human behavior — “where it originates, what motivates people to do things, but in a biological, not a psychological way,” he says. “What circuitry is engaged? What is the infrastructure of the brain that leads to X, Y, Z?”

His passion for neuroscience blossomed as a graduate student at Virginia Tech, where he earned his PhD in translational biology, medicine, and health. There, he received a six-year NIH F99/K00 Award, and under the mentorship of a neuroscientist at the Fralin Biomedical Research Institute he researched the connections between the eye and the brain — specifically, mapping the architecture of the principal neurons in a region of the thalamus essential to visual processing.

“The retina, and the entire visual system, struck me as elegant, with beautiful layers of diverse cells found at every node,” says Sabbagh, his own eyes lighting up.

His research earned him a coveted spot on the Forbes “30 under 30” list, generating enormous visibility, including in the Arab world, adding visitors to his already robust X (formerly Twitter) account, which has more than 9,200 followers. “The increased visibility lets me use my voice to advocate for the things I care about,” he says.

Those causes range from promoting equity and inclusion in science to transforming the American system of doing science for the betterment of science and the scientists themselves. He co-founded the nonprofit Black in Neuro to celebrate and empower Black scholars in neuroscience, and he continues to serve on the board. He is the chair of an advisory committee for the Society for Neuroscience (SfN), recommending ways SfN can better address the needs of its young members, and a member of the Advisory Committee to the National Institutes of Health (NIH) director working group charged with re-envisioning postdoctoral training. He serves on the advisory board of Community for Rigor, a new NIH initiative that aims to teach scientific rigor at national scale and, in his spare time, he writes articles about the relationship of science and policy for publications including Scientific American and The Washington Post.

Still, there have been obstacles. The same year Sabbagh received the NIH F99/K00 Award, he faced major setbacks in his application to become a citizen. He would not try again until 2021, when he had his PhD in hand and had joined the McGovern Institute for Brain Research.

An MIT postdoc and citizenship

Sabbagh dove into his research in Guoping Feng’s lab with the same vigor and outside-the-box thinking that characterized his previous work. He continues to investigate the thalamus, but in a region that is less involved in processing pure sensory signals, such as light and sound, and more focused on cognitive functions of the brain. He aims to understand how thalamic brain areas orchestrate complex functions we carry out every day, including working memory and cognitive flexibility.

“This is important to understand because when this orchestra goes out of tune it can lead to a range of neurological disorders, including schizophrenia,” he says. He is also developing new tools for studying the brain using genome editing and viral engineering to expand the toolkit available to neuroscientists.

The environment at the McGovern Institute is also a source of inspiration for Sabbagh’s research. “The scale and scope of work being done at McGovern is remarkable. It’s an exciting place for me to be as a neuroscientist,” said Sabbagh. “Besides being intellectually enriching, I’ve found great community here — something that’s important to me, wherever I work.”

Returning to the Middle East

While at an advisory meeting at the NIH, Sabbagh learned he had been selected as a Young Arab Pioneer by the Arab Youth Center and was flown the next day to Abu Dhabi for a ceremony overseen by Shamma Al Mazrui, cabinet member and minister of community development in the United Arab Emirates. The ceremony recognized 20 Arab youth from around the world in sectors ranging from scientific research to entrepreneurship and community development. Sabbagh’s research “presented a unique portrayal of creative Arab youth and an admirable representation of the values of youth beyond the Arab world,” says Sadeq Jarrar, executive director of the center.

“There I was, among other young Arab leaders, learning firsthand about their efforts, aspirations, and their outlook for the future,” says Sabbagh, who was deeply inspired by the experience.

Just a month earlier, his passport finally secured, Sabbagh had reunited with his family in the Middle East after more than a decade in the United States. “I had been away for so long,” he says, describing the experience as a “cultural reawakening.”

Sabbagh saw a gaping need he had not been aware of when he left 14 years earlier, as a teen. “The Middle East had such a glorious intellectual past,” he says. “But for years people have been leaving to get their advanced scientific training, and there is no adequate infrastructure to support them if they want to go back.” He wondered: What if there were a scientific renaissance in the region? How would we build infrastructure to cultivate local minds and local talent? What if the next chapter of the Middle East included being a new nexus of global scientific advancements?

“I felt so inspired,” he says. “I have a longing, someday, to meaningfully give back.”

Who will benefit from AI?

What if we’ve been thinking about artificial intelligence the wrong way?

After all, AI is often discussed as something that could replicate human intelligence and replace human work. But there is an alternate future: one in which AI provides “machine usefulness” for human workers, augmenting but not usurping jobs, while helping to create productivity gains and spread prosperity.

That would be a fairly rosy scenario. However, as MIT economist Daron Acemoglu emphasized in a public campus lecture on Tuesday night, society has started to move in a different direction — one in which AI replaces jobs and ratchets up societal surveillance, and in the process reinforces economic inequality while concentrating political power further in the hands of the ultra-wealthy.

“There are transformative and very consequential choices ahead of us,” warned Acemoglu, Institute Professor at MIT, who has spent years studying the impact of automation on jobs and society.

Major innovations, Acemoglu suggested, are almost always bound up with matters of societal power and control, especially those involving automation. Technology generally helps society increase productivity; the question is how narrowly or widely those economic benefits are shared. When it comes to AI, he observed, these questions matter acutely “because there are so many different directions in which these technologies can be developed. It is quite possible they could bring broad-based benefits — or they might actually enrich and empower a very narrow elite.”

But when innovations augment rather than replace workers’ tasks, he noted, it creates conditions in which prosperity can spread to the work force itself.

“The objective is not to make machines intelligent in and of themselves, but more and more useful to humans,” said Acemoglu, speaking to a near-capacity audience of almost 300 people in Wong Auditorium.

The Productivity Bandwagon

The Starr Forum is a public event series held by MIT’s Center for International Studies (CIS), focused on leading issues of global interest. Tuesday’s event was hosted by Evan Lieberman, director of CIS and the Total Professor of Political Science and Contemporary Africa.

Acemoglu’s talk drew on themes detailed in his book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” which was co-written with Simon Johnson and published in May by PublicAffairs. Johnson is the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management.

In Tuesday’s talk, as in his book, Acemoglu discussed some famous historical examples to make the point that the widespread benefits of new technology cannot be assumed, but are conditional on how technology is implemented.

It took at least 100 years after the 18th-century onset of the Industrial Revolution, Acemoglu noted, for the productivity gains of industrialization to be widely shared. At first, real earnings did not rise, working hours increased by 20 percent, and labor conditions worsened as factory textile workers lost much of the autonomy they had held as independent weavers.

Similarly, Acemoglu observed, Eli Whitney’s invention of the cotton gin made the conditions of slavery in the U.S. even worse. That overall dynamic, in which innovation can potentially enrich a few at the expense of the many, Acemoglu said, has not vanished.

“We’re not saying that this time is different,” Acemoglu said. “This time is very similar to what went on in the past. There has always been this tension about who controls technology and whether the gains from technology are going to be widely shared.”

To be sure, he noted, there are many, many ways society has ultimately benefited from technologies. But it’s not something we can take for granted.

“Yes indeed, we are immeasurably more prosperous, healthier, and more comfortable today than people were 300 years ago,” Acemoglu said. “But again, there was nothing automatic about it, and the path to that improvement was circuitous.”

Ultimately what society must aim for, Acemoglu said, is what he and Johnson term “The Productivity Bandwagon” in their book. That is the condition in which technological innovation is adapted to help workers, not replace them, spreading economic growth more widely. In this way, productivity growth is accompanied by shared prosperity.

“The Productivity Bandwagon is not a force of nature that applies under all circumstances automatically, and with great force, but it is something that’s conditional on the nature of technology and how production is organized and the gains are shared,” Acemoglu said.

Crucially, he added, this “double process” of innovation involves one more thing: a significant amount of worker power, something which has eroded in recent decades in many places, including the U.S.

That erosion of worker power, he acknowledged, has made it less likely that multifaceted technologies will be used in ways that help the labor force. Still, Acemoglu noted, there is a healthy tradition within the ranks of technologists, including innovators such as Norbert Wiener and Douglas Engelbart, to “make machines more usable, or more useful to humans, and AI could pursue that path.”

Conversely, Acemoglu noted, “There is every danger that overemphasizing automation is not going to get you many productivity gains either,” since some technologies may be merely cheaper than human workers, not more productive.

Icarus and us

The event included a commentary from Fotini Christia, the Ford International Professor of the Social Sciences and director of the MIT Sociotechnical Systems Research Center. Christia offered that “Power and Progress” was “a tremendous book about the forces of technology and how to channel them for the greater good.” She also noted “how prevalent these themes have been even going back to ancient times,” referring to Greek myths involving Daedalus, Icarus, and Prometheus.

Additionally, Christia raised a series of pressing questions about the themes of Acemoglu’s talk, including whether the advent of AI represented a more concerning set of problems than previous episodes of technological advancement, many of which ultimately helped many people; which people in society have the most ability and responsibility to help produce changes; and whether AI might have a different impact on developing countries in the Global South.

In an extensive audience question-and-answer session, Acemoglu fielded over a dozen questions, many of them about the distribution of earnings, global inequality, and how workers might organize themselves to have a say in the implementation of AI.

Broadly, Acemoglu suggested it is still to be determined how greater worker power can be obtained, and noted that workers themselves should help suggest productive uses for AI. At multiple points, he emphasized that workers cannot just protest circumstances, but must also pursue policy changes where possible.

“There is some degree of optimism in saying we can actually redirect technology and that it’s a social choice,” Acemoglu acknowledged.

Acemoglu also suggested that countries in the Global South were also vulnerable to the potential effects of AI, in a few ways. For one thing, he noted, as the work of MIT economist Martin Beraja shows, China has been exporting AI surveillance technologies to governments in many developing countries. For another, he noted, countries that have made overall economic progress by employing more of their citizens in low-wage industries might find labor force participation being undercut by AI developments.

Separately, Acemoglu warned, if private companies or central governments anywhere in the world amass more and more information about people, it is likely to have negative consequences for most of the population.

“As long as that information can be used without any constraints, it’s going to be antidemocratic and it’s going to be inequality-inducing,” he said. “There is every danger that AI, if it goes down the automation path, could be a highly unequalizing technology around the world.”

Re-imagining the opera of the future

In the mid-1980s, composer Tod Machover came across a copy of Philip K. Dick’s science fiction novel “VALIS” in a Parisian bookstore. Based on a mystical vision Dick called his “pink light experience,” “VALIS” was an acronym for “vast active living intelligence system.” The metaphysical novel would become the basis for Machover’s opera of the same name, which premiered at the Pompidou Center in 1987, and was recently re-staged at MIT for a new generation.

At the time, Machover was in his 20s and the director of musical research at the renowned French Institute IRCAM, a hotbed of the avant-garde known for its pioneering research in music technology. The Pompidou, Machover says, had given him carte blanche to create a new piece for its 10th anniversary. So, throughout the summer and fall, the composer had gone about constructing an elaborate theater inside the center’s cavernous entrance hall, installing speakers and hundreds of video monitors.

Creating the first computer opera

Machover, who is now Muriel R. Cooper Professor of Music and Media and director of the MIT Media Lab’s Opera of the Future research group, had originally wanted to use IRCAM founder Pierre Boulez’s Ensemble Intercontemporain, but was turned down when he asked to rehearse with them for a full two months. “Like a rock band,” he says. “I went back and thought, ‘Well, what’s the smallest number of players that can make and generate the richness and layered complexity of music that I was thinking about?’”

He decided his orchestra would consist of only two musicians: a keyboardist and a percussionist. With tools like personal computers, MIDI, and the DX7 newly available, the possibilities of digital sound and intelligent interaction were beginning to expand. Soon, Machover took a position as a founding faculty member of MIT’s Media Lab, shuttling back and forth between Cambridge, Massachusetts, and Paris. “That’s when we invented hyperinstruments,” says Machover. The hyperinstruments, developed at the Media Lab in collaboration with Machover’s very first graduate research assistant, Joe Chung, allowed the musician to control a much fuller range of sound. At the time, he says, “no serious composers were using real-time computer instruments for concert music.”

Word spread at IRCAM that Machover’s opera was, to say the least, unusual. Over the course of December 1987, “VALIS” opened to packed houses in Paris, eliciting both cheers and groans of horror. “It was really controversial,” Machover says, “It really stirred people up. It was like, ‘Wow, we’ve never heard anything like this. It has melody and harmonies and driving rhythms in a way that new music isn’t supposed to.’” “VALIS” existed somewhere between an orchestra and a rock band, the purely acoustic dissolving into the electric as the opera progressed. In today’s era of the remix, audiences might be accustomed to a mélange of musical styles, but at the time this hybrid approach was new. Machover — who trained as a cellist in addition to playing bass in rock bands — has always borrowed freely from high and low, classical and rock, human and synthetic, acoustic and hi-tech, combining parts to create new wholes.

The story of Dick’s philosophical novel is itself a study of fragments, of the divided self, as the main character, Phil, confronts his fictional double, Horselover Fat, while entering on a hallucinatory spiritual quest after the suicide of a friend. At the time of Dick’s writing, the term artificial intelligence had yet to achieve widespread use. And yet, in “VALIS,” he combines ideas about AI and mysticism to explore questions of existence. In Dick’s vision, “VALIS” was the grand unifying theory that connected a vast array of seemingly disparate ideas. “For him, that’s what God was: this complex technological system,” Machover says, “His big question was: Is it possible for technology to be the answer? Is it possible for anything to be the answer, or am I just lost? He was looking for what could possibly reconnect him to the world and reconnect the parts of his personality, and envisioned a technology to do that.”

A performance for the contemporary era

A full production of “VALIS” hasn’t been mounted in over 30 years, but it’s a fitting moment to re-stage the opera as Dick’s original vision of the living artificial intelligence system — as well as hopes for its promise and fears for its pitfalls — seems increasingly prophetic. The new performance was developed at MIT over the course of the last few years with funding from the MIT Center for Art, Science and Technology, among other sources. Performed at MIT Theater Building W97, the production stars baritone Davóne Tines and mezzo-soprano Anaïs Reno. Joining them were vocalists Timur Bekbosunov, David Cushing, Maggie Finnegan, Rose Hegele, and Kristin Young, as well as pianist/keyboardist Julia Carey and multi-percussionist Maria Finkelmeier. New AI-enhanced technologies, created and performed by Max Addae, Emil Droga, Nina Masuelli, Manaswi Mishra, and Ana Schon, were developed in the MIT Media Lab’s Opera of the Future group, which Machover directs.

At MIT, Machover collaborated with theater director Jay Scheib, Class of 1949 Professor of Music and Theater Arts, whose augmented reality theater productions have long probed the confused border between the simulacra and the real. “We took camera feeds of live action, process the signal and then project it back, like a strange film, on a variety of surfaces, both TV- and screen-like but also diaphanous and translucent,” says Scheib, “It’s lots and lots of images accumulating at a really high speed, and a mix of choreography and styles of film acting, operatic acting.” Against an innovative set designed by Oana Botez, lighting by Yuki Link, and media by Peter A. Torpey PhD ’13, actors played multiple characters as time splinters and refracts. “Reality is constantly shifting,” says Scheib.

As the opera sped toward the hallucinatory finale, becoming progressively disorienting, a computer music composer named Mini appeared, originally played by Machover, conjuring the angelic hologram Sophia who delivers Phil/Fat to a state of wholeness. In the opera’s libretto, Mini is described as “sculpting sound” instead of simply playing the keyboard, “setting off musical structures with the flick of his hand — he seemed to be playing the orchestra of the future.” In the original production, Machover had composed Mini’s section in advance, but the contemporary performance used a custom-built AI model, fed with Machover’s own compositions, to create new music in real time. “It’s not an instrument, exactly. It’s a living system that gets explored during the performance,” says Machover, “It’s like a system that Mini might actually have built.”

As they were developing the project this past spring, the Opera of the Future group wrestled with the question: How would Mini “perform” the system? “Because this is live, this is real, we wanted it to feel fresh and new, and not just be someone waving hands in the air,” says Machover. One day, Nina Masuelli ’23, who had recently completed her undergraduate degree at MIT, brought a large clear plastic jar into the lab. The group experimented with applying sensors to the jar, and then connected it to the AI system. As Mini manipulates the jar, the machine’s music responds in turn. “It’s incredibly magical,” says Machover. “It’s this new kind of object that allows a living system to be explored and to form right in front of you. It’s different every time, and every time it makes me smile with delight as something unexpected is revealed.”

As the performance neared, and Machover watched Masuelli continue to sculpt sound with the hollow jug, a string of Christmas lights coiled inside, something occurred to him: “Why don’t you be Mini?”

In some ways, in the age of ChatGPT and DALL-E, Mini’s exchange with the AI system is symbolic of humanity’s larger dance with machine intelligence, as we experiment with ways to exist and create alongside it: an ongoing venture that will eventually be for the next generation to explore. Writing thousands of sprawling pages in what he called his “exegesis,” Philip K. Dick spent the rest of his life after his “pink light experience” trying to make sense of a universe “transformed by information.” Though the many questions raised by “VALIS” — Is technology the answer? — might never be fully answered, says Machover, “you can feel them through music.”

Audiences apparently felt the same way. As one reviewer wrote, “’VALIS’ is an operatic tour-de-force.” The three shows were filled to capacity, with long waiting lists, and response was wildly enthusiastic.

“It has been deeply gratifying to see that ‘VALIS’ has captured the imagination of a new group of creative collaborators and astonishing performers, of brilliant student inventors and artists, and of the public, wonderfully diverse in age and background,” says Machover, “This is partially due to the visionary nature of Philip K. Dick’s novel (much of which is even more relevant today than when the book and opera first appeared). I hope it also reflects something of the musical vitality and richness of the score, which feels as fresh to me as when I composed it over 35 years ago. I am truly delighted that ‘VALIS’ is back, and hope very much that it is here to stay!”

Professor Emerita Evelyn Fox Keller, influential philosopher and historian of science, dies at 87

MIT Professor Emerita Evelyn Fox Keller, a distinguished and groundbreaking philosopher and historian of science, has died at age 87.

Keller gained acclaim for her powerful critique of the scientific establishment’s conception of objectivity, which she found lacking in its own terms and heavily laden with gendered assumptions. Her work drove many scholars toward a more nuanced and sophisticated understanding of the subjective factors and socially driven modes of thought that can shape scientific theories and hypotheses.

A trained physicist who conducted academic research in biology and then focused on the scientific enterprise and the self-understanding of scientists, Keller joined MIT in 1992, serving in the Program in Science, Technology, and Society.

Having faced outright hostility and discouragement as a female graduate student in the sciences in the late 1950s and early 1960s, Keller by the 1980s had become a prominent academic thinker and public intellectual, ready and willing to bring her ideas to a larger general audience.

“There is no magic lens that will enable us to look at, to see nature unclouded … uncolored by any values, hopes, fears, anxieties, desires, goals that we bring to it,” Keller told journalist Bill Moyers in 1990 for his “World of Ideas” show on PBS.

By that time, Keller had become well-known for two high-profile books. In “A Feeling for the Organism: The Life and Work of Barbara McClintock,” published in 1983, Keller examined the work of the biologist whose close studies of corn showed that genetic elements could move around on a chromosome over time, affecting gene expression. Initially ignored, McClintock won the Nobel Prize — within a year of the book’s publication — and her distinctive, well-developed sense of her own research methods meshed with, and fed into, Keller’s ideas about the complexity of discovery.

In “Reflections on Gender and Science,” published in 1985, Keller looked broadly at how the 17th-century institutionalization of science both demarcated it strictly as an activity for men and, relatedly, generated a notion of purely objective inquiry that stood in contrast to the purportedly more emotional and less linear thinking of women. Those foundational works helped other scholars question the idea of unmediated scientific discovery and better recognize the immense gender imbalances in the sciences.

Overcoming hurdles

Keller, born Evelyn Fox, grew up in New York City, a child of Russian Jewish immigrant parents, and first attended Queens College as an undergraduate, before transferring to Brandeis University, where she received her BA in physics in 1957. She received an MA from Radcliffe College in 1959 and earned her PhD in physics from Harvard University in 1963.

The social environment Keller encountered while working toward her PhD, however, showed her firsthand how much science could be a closed shop to women.

“I was leered at by some,” Keller later wrote, recounting “open and unbelievably rude laughter with which I was often received.” As the journalist Beth Horning wrote in a 1993 profile of Keller published in MIT Technology Review, Keller’s “seriousness and ambition were publicly derided by both her peers and her elders.”

As much as Keller was taken aback, she kept moving forward, earning her doctorate while turning her academic focus toward molecular biology. After briefly returning to physics early in her research career, Keller took a faculty position in mathematical biology at Northeastern University. Among other appointments, Keller served on the faculty at the State University of New York at Purchase, where she began expanding her teaching toward subjects such as women’s studies, and writing about the institutional difficulties she had faced in science.

By the late 1970s, Keller had met McClintock and started writing about McClintock’s work — a kind of case study in the complicated issues Keller wanted to explore. The book’s title was a McClintock phrase, about having “a feeling for the organism” one was studying; McClintock emphasized the importance of being closely attuned to the corn she was studying, which ultimately helped her detect some of the unexpected genomic behavior she identified.

However, as Keller would often emphasize later on, this approach did not mean that McClintock was pursuing science in a distinctively feminine way, either. Instead, as Horning notes, Keller’s aim, stated in “Reflections on Gender and Science,” was the “reclamation, from within science, of science as a human instead of a masculine project.” McClintock’s methods may have been considered unusual and her findings unexpected, but that reflected a narrowness on the part of the scientific establishment.

At the Institute

Starting in 1979, Keller had multiple appointments at MIT as a visiting fellow, visiting scholar, and visiting professor. In 1988, Keller joined the faculty at the University of California at Berkeley, before moving to MIT as a tenured faculty member four years later.

At MIT, Keller joined her older brother, Maurice Fox, in the Institute faculty ranks. Fox was an accomplished biologist who taught at MIT from 1962 through 1996, served as head of the Department of Biology from 1985 through 1989, and was an expert in mutation and recombination, among other subjects; he died in 2020. Keller’s sister is the prominent scholar and social activist Frances Fox Piven, whose wide-ranging work has examined social welfare, working class movements, and democratic practices in the U.S., and influenced the expansion of voting access.

In 1992 Keller received a MacArthur Foundation “genius” award for her scholarship. The foundation called her “a scholar whose interdisciplinary work raises important questions about the interrelationships among language, gender, and science,” while also noting that she had “stimulated thought about alternative styles of scientific research” through her book on McClintock.

In all, Keller wrote 11 books on science and co-edited three other volumes; her individually authored books include “The Century of the Gene” (2000, Harvard University Press), “Making Sense of Life” (2002, Harvard University Press), and “The Mirage of a Space between Nature and Nurture” (2010, Duke University Press).

That third book examined the history and implications of nature-nurture debates. Keller found the purported distinction between nature and nurture to be a relatively recent one historically, promoted heavily in the late 19th century by the statistician (and eugenicist) Francis Galton, but not one that had much currency before then.

“We’re stuck with our DNA, but lots of things affect the way DNA is deployed,” Keller told MIT News in 2010, in an interview about the book. “It’s not enough to know what your DNA sequence is to understand about disease, behavior, and physiology.”

Most recently, in early 2023, Keller also published an autobiography, “Making Sense of My Life in Science: A Memoir,” issued by Modern Memoirs.

An intrepid scholar, Keller helped make clear that, although nature exists apart from humans, our understanding of it is always mediated by our own ideas and values.

As Keller told Moyers in 1990, “it is a fantasy that any human product could be free of human values. And science is a human product. It’s a wonderful, glorious human product.”

Among other career honors, Keller was elected to the American Academy of Arts and Sciences, and to the American Philosophical Society; received a Guggenheim Fellowship; was granted the 2018 Dan David Prize; and also received honorary degrees from Dartmouth College, Luleå University of Technology, Mount Holyoke College, Rensselaer Polytechnic Institute, Simmons College, the University of Amsterdam, and Wesleyan University.

Keller is survived by her son, Jeffrey Keller; her daughter, Sarah Keller; her sister, Frances Fox Piven; her granddaughters, Chloe Marschall and Cale Marschall; her nephews, Jonathan Fox, Gregory Fox, and Michael Fox; and her niece, Sarah Piven.

MIT scholars awarded seed grants to probe the social implications of generative AI

In July, MIT President Sally Kornbluth and Provost Cynthia Barnhart issued a call for papers to “articulate effective roadmaps, policy recommendations, and calls for action across the broad domain of generative AI.”

Over the next month, they received an influx of responses from every school at MIT proposing to explore generative AI’s potential applications and impact across areas ranging from climate and the environment to education, health care, companionship, music, and literature.

Now, 27 proposals have been selected to receive exploratory funding. Co-authored by interdisciplinary teams of faculty and researchers affiliated with all five of the Institute’s schools and the MIT Schwarzman College of Computing, the proposals represent a sweeping array of perspectives for exploring the transformative potential of generative AI, in both positive and negative directions for society.

“In the past year, generative AI has captured the public imagination and raised countless questions about how this rapidly advancing technology will affect our world,” Kornbluth says. “This summer, to help shed light on those questions, we offered our faculty seed grants for the most promising ‘impact papers’ — basically, proposals to pursue intensive research on some aspect of how generative AI will shape people’s life and work. I’m thrilled to report that we received 75 proposals in short order, across an enormous spectrum of fields and very often from interdisciplinary teams. With the seed grants now awarded, I cannot wait to see how our faculty expand our understanding and illuminate the potential impacts of generative AI.”

Each selected research group will receive between $50,000 and $70,000 to create 10-page impact papers that will be due by Dec. 15. Those papers will be shared widely via a publication venue managed and hosted by the MIT Press and the MIT Libraries.

The papers were reviewed by a committee of 19 faculty representing a dozen departments. Reflecting generative AI’s wide-ranging impact beyond the technology sphere, 11 of the selected proposals have at least one author from the School of Humanities, Arts, and Social Sciences. All submissions were reviewed initially by three members of the committee, with professors Caspar Hare, Dan Huttenlocher, Asu Ozdaglar, and Ron Rivest making final recommendations.

“It was exciting to see the broad and diverse response which the call for papers generated,” says Ozdaglar, who is also deputy dean of the MIT Schwarzman College of Computing and the head of the Department of Electrical Engineering and Computer Science. “Our faculty have contributed some truly innovative ideas. We are hoping to capitalize on the current momentum around this topic and to support our faculty in turning these abstracts into impact that is accessible to broad audiences beyond academia and that can help inform public conversation in this important area.”

The robust response has already spurred new collaborations, and an additional call for proposals will be made later this semester to further expand the scope of generative AI research on campus. Many of the selected proposals act as roadmaps for broad fields of inquiry into the intersection of generative AI and other fields. Indeed, committee members characterized these papers as the beginning of much more research.

“Our goal with this call was to spearhead further exciting work for thinking about the implications of new AI technologies and how to best develop and use them,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing. “We also wanted to encourage new pathways for collaboration and information exchange across MIT.”

Thomas Tull, a member of the MIT School of Engineering Dean’s Advisory Council and a former innovation scholar at the School of Engineering, contributed to the effort.

“While there is no doubt the long-term implications of AI will be enormous, because it is still in its nascent stages, it has been the subject of endless speculation and countless articles — both positive and negative,” says Tull. “As such, I felt strongly about funding an effort involving some of the best minds in the country to facilitate a meaningful public discourse on this topic and, ideally, help shape how we think about and best use what is likely the biggest technological innovation in our lifetime.”

The selected papers are:

  • “Can Generative AI Provide Trusted Financial Advice?” led by Andrew Lo and Jillian Ross;
  • “Evaluating the Effectiveness of AI-Identification in Human-AI Communication,” led by Athulya Aravind and Gabor Brody (Brown University);
  • “Generative AI and Research Integrity,” led by Chris Bourg, Sue Kriegsman, Heather Sardis, and Erin Stalberg;
  • “Generative AI and Equitable AI Pathway Education,” led by Cynthia Breazeal, Antonio Torralba, Kate Darling, Asu Ozdaglar, George Westerman, Aikaterini Bagiati, and Andres Salazar Gomez;
  • “How to Label Content Produced by Generative AI,” led by David Rand and Adam Berinsky;
  • “Auditing Data Provenance for Large Language Models,” led by Deb Roy and Alex “Sandy” Pentland;
  • “Artificial Eloquence: Style, Citation, and the Right to One’s Own Voice in the Age of A.I.,” led by Joshua Brandon Bennett;
  • “The Climate and Sustainability Implications of Generative AI,” led by Elsa Olivetti, Vivienne Sze, Mohammad Alizadeh, Priya Donti, and Anantha Chandrakasan;
  • “From Automation to Augmentation: Redefining Engineering Design and Manufacturing in the Age of NextGen AI,” led by Faez Ahmed, John Hart, Simon Johnson, and Daron Acemoglu;
  • “Advancing Equality: Harnessing Generative AI to Combat Systemic Racism,” led by Fotini Christia, Catherine D’Ignazio, Munther Dahleh, Marzyeh Ghassemi, Peko Hosoi, and Devavrat Shah;
  • “Defining Agency for the Era of Generative AI,” led by Graham M. Jones and Arvind Satyanarayan;
  • “Generative AI and K-12 Education,” led by Hal Abelson, Eric Klopfer, Cynthia Breazeal, and Justin Reich;
  • “Labor Market Matching,” led by John Horton and Manish Raghavan;
  • “Towards Robust, End-to-End Explainable, and Lifelong Learnable Generative AI with Large Population Models,” led by Josh Tenenbaum and Vikash Mansinghka;
  • “Implementing Generative AI in U.S. Hospitals,” led by Julie Shah, Retsef Levi, and Kate Kellogg;
  • “Direct Democracy and Generative AI,” led by Lily Tsai and Alex “Sandy” Pentland;
  • “Learning from Nature to Achieve Material Sustainability: Generative AI for Rigorous Bio-inspired Materials Design,” led by Markus Buehler;
  • “Generative AI to Support Young People in Creative Learning Experiences,” led by Mitchel Resnick;
  • “Employer Implementation of Generative AI Future of Inequality,” led by Nathan Wilmers;
  • “The Pocket Calculator, Google Translate, and Chat-GPT: From Disruptive Technologies to Curricular Innovation,” led by Per Urlaub and Eva Dessein;
  • “Closing the Execution Gap in Generative AI for Chemicals and Materials: Freeways or Safeguards,” led by Rafael Gomez-Bombarelli, Regina Barzilay, Connor Wilson Coley, Jeffrey Grossman, Tommi Jaakkola, Stefanie Jegelka, Elsa Olivetti, Wojciech Matusik, Mingda Li, and Ju Li;
  • “Generative AI in the Era of Alternative ‘Facts,’” led by Saadia Gabriel, Marzyeh Ghassemi, Jacob Andreas, and Asu Ozdaglar;
  • “Who Do We Become When We Talk to Machines? Thinking About Generative AI and Artificial Intimacy, the New AI,” led by Sherry Turkle;
  • “Bringing Workers’ Voices into the Design and Use of Generative AI,” led by Thomas A. Kochan, Julie Shah, Ben Armstrong, Meghan Perdue, and Emilio J. Castilla;
  • “Experiment With Microsoft to Understand the Productivity Effect of Copilot on Software Developers,” led by Tobias Salz and Mert Demirer;
  • “AI for Musical Discovery,” led by Tod Machover; and
  • “Large Language Models for Design and Manufacturing,” led by Wojciech Matusik.

MIT Sloan Dean Emeritus Bill Pounds, an expert in corporate governance and operations management, dies at 95

William Pounds, a former dean of the MIT Sloan School of Management who was known for his ubiquitous presence around campus and mentorship of junior faculty members, and for advising the broader Institute through the turbulent years of the Vietnam War, died Aug. 23. He was 95.

Pounds joined MIT Sloan in 1961 as an assistant professor and became dean in 1966. After stepping down from that role in 1980, Pounds was senior advisor to the Rockefeller family from 1981 to 1991. He served on boards for a wide range of organizations, including Idexx, General Mills, Putnam Investments, Sun Oil Co. (now Sunoco), the Museum of Fine Arts Boston, and WGBH. He was an active member of the American Academy of Arts and Sciences.

“Through his great gifts and skills and determination, Bill brought the MIT Sloan School into the modern landscape of management education and research,” says David Schmittlein, who has served as dean since 2007. “Much of what has been accomplished during my time at MIT has been shaped by his voice and guided by his vision. Bill’s legacy for the school is, and will remain, unequaled.”

Pounds earned a degree in chemical engineering from the Carnegie Institute of Technology (now Carnegie Mellon University) and flew fighter planes with the U.S. Navy during the Korean War.

He gained operations management experience working for companies like Kodak and Pittsburgh Plate Glass, which supplied automobile paint to General Motors. He was working for PPG when then-dean Howard Johnson invited him to MIT Sloan to teach production management.

In a 2009 interview, Pounds said he came to MIT Sloan with a respect for industry workers and a commitment to helping the organizations employing them operate as well as they could.

“That’s how I see the Sloan school: It’s an enterprise aimed at producing people who are capable of making whatever part of the world they may be involved in work better,” Pounds said.

Pounds directly shaped the future of both the management school and the overall Institute during his decades of academic service.

The “Pounds Panel”

On March 4, 1969, faculty members, scientists, students, and other members of the MIT community gathered on campus to protest the U.S. Department of Defense’s funding of two Institute laboratories in the midst of the Vietnam War. Pounds was appointed chair of a review panel organized to consider whether action should be taken related to the labs. It would ultimately be known as the Pounds Panel.

“It was like Noah’s Ark of conservative and radical students and faculty and staff members from the laboratory, members of the [MIT] Corporation, alumni, faculty, on and on,” Pounds said in the 2009 interview. “Just every shade of constituency was represented.”

Some felt that the laboratories were serving the nation’s interests, while others felt that the work was evil and could have terrible effects on the world. But both sides felt that if the work had to be done, it was better to do it at MIT.

“The two groups could agree that the work should stay at MIT for almost opposite reasons,” Pounds said. “And as long as we didn’t ask them to agree on the reasons, they could agree on the conclusion.”

A cheerleader and constructive critic

While leading MIT Sloan, Pounds said he put most of his attention into building out the school’s faculty. “It’s really [the] selection of people and then encouraging them to do what they do best,” Pounds said.

Among those he encouraged was MIT Sloan professor and Nobel laureate Robert Merton. In 1970, Merton was a junior faculty member in the school’s finance group. There had been an explosion of interest in financial science, and Merton and other junior faculty members at the time, including Stewart Myers and Myron Scholes, were trying to expand their research and practice, but without the benefit of senior leaders. Pounds encouraged the cohort to keep going.

“He had the ability to say, ‘Hey, even if it isn’t conventional or the usual way we would do this, this makes sense,’” said Merton, who, along with Scholes, was awarded the Nobel Memorial Prize in Economic Sciences in 1997.

During Pounds’s 14-year term as dean, MIT Sloan’s faculty expanded to include names like Lotte Bailyn, Arnoldo Hax, Thomas Magnanti, John Van Maanen, Eric von Hippel, and Phyllis Wallace.

After Pounds heard Wallace give a talk about her research on Black teenage women in the labor market, he invited her to be a visiting professor in 1973. Wallace was the first woman to receive tenure at MIT Sloan and the first to be promoted to full professor.

Bailyn, who became the second tenured woman at MIT Sloan, said she valued Pounds’s opinion.

“He was a constructive critic of some of the work we were doing, as he was for many people,” she said.

Pounds was frequently seen around the MIT Sloan campus lunching with students in the cafeteria or poking his head into faculty offices. One of those offices belonged to Professor Andrew Lo, who joined MIT Sloan in 1988 as an assistant professor in the finance group. Even though Pounds hadn’t recruited Lo, the emeritus dean took an interest in the junior faculty member’s work.

“He gave honest feedback about my ideas and was quite realistic in the opposition that I would face with my colleagues,” Lo says of his early work in developing his adaptive markets hypothesis. “But he also gave me perspective and said, ‘If you persist, you will eventually be rewarded.’”

A “pragmatic visionary”

Tom Pounds SM ’88 says his father was “a pragmatic visionary” who approached everything he did with his feet firmly on the ground, but also with a curiosity and desire to translate his own experience and observations into deeper insights that could improve management practice broadly.

“He got a reputation as an unusually thoughtful board member who was going to do much more than lend his title to the role,” Tom Pounds says. “He was the one who could be counted on to ask hard questions and to work with his fellow directors to enhance their collective performance and impact.”

In early 2023, father and son published a collection of essays, “What Is This Management? Essays on Corporate Governance and Management Education,” which included reflections on the elder Pounds’s career in corporate governance and management.

Along with a legacy of philanthropic support to MIT and other institutions, the Pounds family recently endowed the William and Helen Pounds Fellowship Fund to support up to four MIT Sloan graduate students per year.

How an archeological approach can help leverage biased data in AI to improve medicine

The classic computer science adage “garbage in, garbage out” lacks nuance when it comes to understanding biased medical data, argue computer science and bioethics professors from MIT, Johns Hopkins University, and the Alan Turing Institute in a new opinion piece published in a recent edition of the New England Journal of Medicine (NEJM). The rising popularity of artificial intelligence has brought increased scrutiny to the matter of biased AI models resulting in algorithmic discrimination, which the White House Office of Science and Technology Policy identified as a key issue in its recent Blueprint for an AI Bill of Rights.

When encountering biased data, particularly for AI models used in medical settings, the typical response is to either collect more data from underrepresented groups or generate synthetic data to make up for the missing parts, ensuring that the model performs equally well across an array of patient populations. But the authors argue that this technical approach should be augmented with a sociotechnical perspective that takes both historical and current social factors into account. By doing so, researchers can be more effective in addressing bias in public health. 

“The three of us had been discussing the ways in which we often treat issues with data from a machine learning perspective as irritations that need to be managed with a technical solution,” recalls co-author Marzyeh Ghassemi, an assistant professor in electrical engineering and computer science and an affiliate of the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Institute of Medical Engineering and Science (IMES). “We had used analogies of data as an artifact that gives a partial view of past practices, or a cracked mirror holding up a reflection. In both cases the information is perhaps not entirely accurate or favorable: Maybe we think that we behave in certain ways as a society — but when you actually look at the data, it tells a different story. We might not like what that story is, but once you unearth an understanding of the past you can move forward and take steps to address poor practices.” 

Data as artifact 

In the paper, titled “Considering Biased Data as Informative Artifacts in AI-Assisted Health Care,” Ghassemi, Kadija Ferryman, and Maxine Mackintosh make the case for viewing biased clinical data as “artifacts” in the same way anthropologists or archeologists would view physical objects: pieces of civilization-revealing practices, belief systems, and cultural values — in the case of the paper, specifically those that have led to existing inequities in the health care system. 

For example, a 2019 study showed that an algorithm widely considered to be an industry standard used health-care expenditures as an indicator of need, leading to the erroneous conclusion that sicker Black patients require the same level of care as healthier white patients. What researchers found was algorithmic discrimination failing to account for unequal access to care.  
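The proxy failure described here can be seen in a toy simulation. The sketch below is a hypothetical illustration with invented numbers and a deliberately simple ranking rule — not the 2019 study’s data or algorithm: two groups have identical clinical need, but one spends less on care because of unequal access, so an algorithm that ranks patients by past spending systematically under-selects it.

```python
def flag_high_need(patients, budget=2):
    """Rank patients by past spending (cost used as a proxy for need)
    and flag the top `budget` patients for extra care."""
    ranked = sorted(patients, key=lambda p: p["spending"], reverse=True)
    return {p["id"] for p in ranked[:budget]}

# Hypothetical patients: `need` counts chronic conditions. Groups A and B
# have identical need, but Group B's spending is systematically lower
# because of unequal access to care.
patients = [
    {"id": "A1", "group": "A", "need": 5, "spending": 9_000},
    {"id": "A2", "group": "A", "need": 2, "spending": 6_000},
    {"id": "B1", "group": "B", "need": 5, "spending": 5_000},
    {"id": "B2", "group": "B", "need": 2, "spending": 2_000},
]

# The spending proxy selects both Group A patients, passing over B1,
# who is just as sick as A1 but spent less on care.
print(flag_high_need(patients) == {"A1", "A2"})  # True
```

Collecting more spending records would not repair this: the bias lives in the choice of label, which is why the authors argue for examining how and why the data were produced.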

In this instance, rather than viewing biased datasets or lack of data as problems that only require disposal or fixing, Ghassemi and her colleagues recommend the “artifacts” approach as a way to raise awareness of the social and historical elements influencing how data are collected, and of alternative approaches to clinical AI development. 

“If the goal of your model is deployment in a clinical setting, you should engage a bioethicist or a clinician with appropriate training reasonably early on in problem formulation,” says Ghassemi. “As computer scientists, we often don’t have a complete picture of the different social and historical factors that have gone into creating data that we’ll be using. We need expertise in discerning when models generalized from existing data may not work well for specific subgroups.” 

When more data can actually harm performance 

The authors acknowledge that one of the more challenging aspects of implementing an artifact-based approach is being able to assess whether data have been racially corrected: i.e., using white, male bodies as the conventional standard that other bodies are measured against. The opinion piece cites an example from the Chronic Kidney Disease Epidemiology Collaboration in 2021, which developed a new equation to measure kidney function because the old equation had previously been “corrected” under the blanket assumption that Black people have higher muscle mass. Ghassemi says that researchers should be prepared to investigate race-based correction as part of the research process. 

In another recent paper accepted to this year’s International Conference on Machine Learning, co-authored by Ghassemi’s PhD student Vinith Suriyakumar and University of California at San Diego Assistant Professor Berk Ustun, the researchers found that assuming that including personalized attributes like self-reported race improves the performance of ML models can actually lead to worse risk scores, models, and metrics for minority and minoritized populations.  

“There’s no single right solution for whether or not to include self-reported race in a clinical risk score. Self-reported race is a social construct that is both a proxy for other information, and deeply proxied itself in other medical data. The solution needs to fit the evidence,” explains Ghassemi. 

How to move forward 

This is not to say that biased datasets should be enshrined, or biased algorithms don’t require fixing — quality training data is still key to developing safe, high-performance clinical AI models, and the NEJM piece highlights the role of the National Institutes of Health (NIH) in driving ethical practices.  

“Generating high-quality, ethically sourced datasets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” NIH acting director Lawrence Tabak stated in a press release when the NIH announced its $130 million Bridge2AI Program last year. Ghassemi agrees, pointing out that the NIH has “prioritized data collection in ethical ways that cover information we have not previously emphasized the value of in human health — such as environmental factors and social determinants. I’m very excited about their prioritization of, and strong investments towards, achieving meaningful health outcomes.” 

Elaine Nsoesie, an associate professor at the Boston University School of Public Health, believes there are many potential benefits to treating biased datasets as artifacts rather than garbage, starting with the focus on context. “Biases present in a dataset collected for lung cancer patients in a hospital in Uganda might be different from a dataset collected in the U.S. for the same patient population,” she explains. “In considering local context, we can train algorithms to better serve specific populations.” Nsoesie says that understanding the historical and contemporary factors shaping a dataset can make it easier to identify discriminatory practices that might be coded in algorithms or systems in ways that are not immediately obvious. She also notes that an artifact-based approach could lead to the development of new policies and structures ensuring that the root causes of bias in a particular dataset are eliminated. 

“People often tell me that they are very afraid of AI, especially in health. They’ll say, ‘I’m really scared of an AI misdiagnosing me,’ or ‘I’m concerned it will treat me poorly,’” Ghassemi says. “I tell them, you shouldn’t be scared of some hypothetical AI in health tomorrow, you should be scared of what health is right now. If we take a narrow technical view of the data we extract from systems, we could naively replicate poor practices. That’s not the only option — realizing there is a problem is our first step towards a larger opportunity.” 

Making life friendlier with personal robots

“As a child, I wished for a robot that would explain others’ emotions to me,” says Sharifa Alghowinem, a research scientist in the Media Lab’s Personal Robots Group (PRG). Growing up in Saudi Arabia, Alghowinem says she dreamed of coming to MIT one day to develop Arabic-based technologies, and of creating a robot that could help herself and others navigate a complex world.

In her early life, Alghowinem faced difficulties with understanding social cues and never scored well on standardized tests, but her dreams carried her through. She earned an undergraduate degree in computing before leaving home to pursue graduate education in Australia. At the Australian National University, she discovered affective computing for the first time and began working to help AI detect human emotions and moods, but it wasn’t until she came to MIT as a postdoc with the Ibn Khaldun Fellowship for Saudi Arabian Women, which is housed in the MIT Department of Mechanical Engineering, that she was finally able to work on a technology with the potential to explain others’ emotions in English and Arabic. Today, she says her work is so fun that she calls the lab “my playground.” 

Alghowinem can’t say no to an exciting project. She found one with great potential to make robots more helpful to people by working with Jibo, a friendly robot companion developed by MIT Professor and Dean for Digital Learning Cynthia Breazeal, founder of the Personal Robots Group and the social robot startup Jibo Inc. Breazeal’s research explores the potential for companion robots to go far beyond assistants that obey transactional commands, like requests for the daily weather, adding items to shopping lists, or controlling lighting. At the MIT Media Lab, the PRG team designs Jibo to be an insightful coach and companion, advancing social robotics technologies and research. Visitors to the MIT Museum can experience Jibo’s charming personality.

Alghowinem’s research has focused on mental health care and education, often in collaboration with other graduate students and Undergraduate Research Opportunity Program students in the group. In one study, Jibo coached young and older adults via positive psychology, adapting his interventions based on the verbal and non-verbal responses he observed in the participants. For example, Jibo takes in the verbal content of a participant’s speech and combines it with non-verbal information like prolonged pauses and self-hugs. If he concludes that deep emotions have been disclosed, Jibo responds with empathy. When the participant doesn’t disclose, Jibo asks a gentle follow-up question like, “Can you tell me more?” 

Another project studied how a robot can effectively support high-quality parent and child interactions while reading a storybook together. Multiple PRG studies work together to learn what types of data are needed for a robot to understand people’s social and emotional states.

“I would like to see Jibo become a companion for the whole household,” says Alghowinem. Jibo can take on different roles with different family members, such as a companion reminding elders to take medication or a playmate for children. Alghowinem is especially motivated by the unique role Jibo could play in emotional wellness, including helping to prevent depression or even suicide. Integrating Jibo into daily life would give him the opportunity to detect emerging concerns and intervene, acting as a confidential resource or mental health coach. 

Alghowinem is also passionate about teaching and mentoring others, and not only via robots. She makes sure to meet individually with the students she mentors every week and she was instrumental earlier this year in bringing two visiting undergraduate students from Prince Sultan University in Saudi Arabia. Mindful of their social-emotional experience, she worked hard to create the opportunity for the two students, together, to visit MIT so they could support each other. One of the visiting students, Tasneem Burghleh, says she was curious to meet the person who went out of her way to make opportunities for strangers and discovered in her an “endless passion that makes her want to pass it on and share it with everyone else.”

Next, Alghowinem is working to create opportunities for children who are refugees from Syria. Still in the fundraising stage, the plan is to equip social robots to teach the children English language and social-emotional skills and provide activities to preserve cultural heritage and Arabic abilities.

“We’ve laid the groundwork by making sure Jibo can speak Arabic as well as several other languages,” says Alghowinem. “Now I hope we can learn how to make Jibo really useful to kids like me who need some support as they learn how to interact with the world around them.”

School of Humanities, Arts, and Social Sciences welcomes 10 new faculty

Dean Agustín Rayo and the MIT School of Humanities, Arts, and Social Sciences recently welcomed 10 new professors to the MIT community. They arrive with diverse backgrounds and vast knowledge in their areas of research.

Isaiah Andrews PhD ’14 joins MIT as a professor in the Department of Economics. Andrews is an econometrician who develops reliable and broadly applicable methods of statistical inference to address key challenges in economics, social science, and medicine. He is the recipient of the prestigious John Bates Clark Medal, a MacArthur Fellowship, and a Sloan Research Fellowship. Andrews earned his PhD in economics from MIT and was previously an assistant and associate professor in the Department of Economics.

Joshua Bennett is professor of literature and distinguished chair of the humanities. He is the author of five books of poetry, criticism, and narrative nonfiction, including most recently “Spoken Word: A Cultural History” (Knopf, 2023) and “The Study of Human Life” (Penguin, 2022), which is being adapted for television in collaboration with Warner Brothers Studios. He earned his PhD in English from Princeton University, and an MA in theater and performance studies from the University of Warwick, where he was a Marshall Scholar. For his creative writing and scholarship, Bennett has received fellowships and awards from the Guggenheim Foundation, the Whiting Foundation, the National Endowment for the Arts, and the Society of Fellows at Harvard University.

Nathaniel Hendren PhD ’12 is a professor in the Department of Economics. His research quantifies the differences in economic mobility and opportunity for people of different backgrounds, explores why private markets often fail to provide economic opportunity, and offers new tools for government policymakers evaluating the effectiveness of social programs. Hendren founded and co-directs Policy Impacts and Opportunity Insights. He has received the Presidential Early Career Award for Scientists and Engineers and a Sloan Research Fellowship. Hendren earned his PhD in economics from MIT.

Crystal Lee PhD ’22 is an assistant professor in computational media and design with joint appointments in the MIT Schwarzman College of Computing and Comparative Media Studies Program/Writing. She works broadly on research related to ethical tech, social media, data visualization, and disability. This research has been supported by fellowships from the National Science Foundation, Social Science Research Council, and the MIT Programs for Digital Humanities. She is also a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University, where she co-leads the Ethical Tech Working Group, and a senior fellow at Mozilla. She graduated with high honors from Stanford University and completed her PhD at MIT.

Eli Nelson joins the Program on Science, Technology, and Society as an assistant professor. Nelson completed a PhD in the history of science from Harvard University in 2018. His research focuses on the history of Native sciences in North America in the 19th and 20th centuries. Before coming to MIT, Nelson was an assistant professor of American studies at Williams College.

Ashesh Rambachan is a new assistant professor in the Department of Economics. He studies economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets and developing algorithmic procedures for discovering new behavioral models. Rambachan also develops methods for determining causation using cross-sectional and dynamic data. He earned his PhD in economics from Harvard, and is joining MIT after spending a year as a postdoc at Microsoft New England.

Nina Roussille joins the Department of Economics as an assistant professor after completing postdoctoral fellowships at MIT and the London School of Economics (LSE). Roussille studies topics in labor and gender economics, including how biased beliefs about outside options can keep workers stuck in low-wage jobs and how gender differences in salary demands can generate wage inequality. She is also the executive director of LSE’s Hub for Equal Representation. Roussille earned her PhD from the University of California at Berkeley.

Jessica Ruffin is an assistant professor of literature. Her first book, “Becoming Amphibious: critical ethical encounters between land and sea,” engages philosophical aesthetics, critical theory, and philosophies of race to trace the potential for ethics amid white supremacy and anti-Blackness. Her essays include “Preface to a Philosophy by Which No One Can Live” (New German Critique); “The Myth of the Sneeze in the Dream of Film History” (Discourse); and “Between Friends” (qui parle). Her second manuscript reframes Frankfurt School critical theory and psychoanalysis in light of Arthur Schopenhauer’s aesthetics — exploring the ethical and mystical in German avant-garde media through the conclusion of World War II. She earned a PhD in film and media, with a designated emphasis in critical theory (2021). She also holds an MA in German literature and culture (University of California at Berkeley, 2018) and an MA in humanities (University of Chicago, 2008). She comes to MIT after two years as an assistant professor of film, television, and media and a member of the Michigan Society of Fellows at the University of Michigan at Ann Arbor.

Caitlin Talmadge PhD ’11 is an associate professor of political science. She also serves as a senior non-resident fellow in foreign policy at the Brookings Institution; a member of the Defense Policy Board at the U.S. Department of Defense; and a series editor for Cornell Studies in Security Affairs at Cornell University Press. During academic year 2023-24, she is on leave from MIT as a fellow at the Woodrow Wilson Center for Scholars in Washington. Talmadge’s research and teaching focus on nuclear deterrence and escalation, U.S. military operations and strategy, and security issues in Asia and the Persian Gulf. Talmadge is a graduate of Harvard (BA, government, summa cum laude) and MIT (PhD, political science). Previously, she has worked as a researcher at the Center for Strategic and International Studies; a consultant to the Office of Net Assessment at the U.S. Department of Defense; and a professor at the George Washington University and Georgetown University.

Miguel Zenón is an assistant professor in the Music and Theater Arts Section. He is a Puerto Rican alto saxophonist, composer, band leader, music producer, and educator. He is a multiple Grammy Award nominee, and the recipient of a Guggenheim Fellowship and a MacArthur Fellowship. He also holds an honorary doctorate degree in the arts from Universidad del Sagrado Corazón. Zenón has built a distinguished career as a leader, releasing several critically acclaimed albums while touring and recording with some of the great musicians of our time.

Dreaming of waves

Ocean waves are easy on the eyes, but hard on the brain. How do they form? How far do they travel? How do they break? Those magnificent waves you see crashing into the shore are complex.

“I’ve often asked this question,” the eminent wave scientist Walter Munk told MIT Professor Stefan Helmreich several years ago. “If we met somebody from another planet who had never seen waves, could [they] dream about what it’s like when a wave becomes unstable in shallow water? About what it would do? I don’t think so. It’s a complicated problem.”

In recent decades, scientists have gotten to know waves better. In the 1960s, they confirmed that waves travel across the world; a storm in the Tasman Sea can create great surf in California. In the 1990s, scientists obtained eye-opening measurements of massive “rogue” waves. Meanwhile, experts continue to tailor a standard model of waves, developed in the 1980s, to local conditions, as data and theory keep influencing each other.

“Waves are empirical and conceptual phenomena both,” writes Helmreich in his new work, “A Book of Waves,” published this month by Duke University Press. In it, Helmreich examines the development of wave science globally, the propagation of wave theory into other areas of life — such as the “waves” of the Covid-19 pandemic — and the way researchers develop both empirical knowledge and abstractions describing nature in systematic terms.

“Wave science is constantly going back and forth between registering data and interpreting that data,” says Helmreich, the Elting E. Morison Professor of Anthropology at MIT. “The aspiration of so much wave science has been to formalize and automate measurement so that everything becomes a matter of simple data registration. But you can never get away from the human interpretation of those results. Humans are the ones who care about what waves are doing.”

“You need the world”

Helmreich has long been interested in ocean science. His 2009 book “Alien Ocean” examined marine biologists and their study of microbes. In 2014, Helmreich presented material that wound up in “A Book of Waves” while delivering the Lewis Henry Morgan lectures at the University of Rochester, the nation’s oldest anthropology lecture series.

To research the book, Helmreich traveled far and wide, from the Netherlands to Australia, among other places, often embedding himself with researchers. That included a stint on board the FLIP ship, a unique, now-retired vessel operated by the Scripps Institution of Oceanography, which could turn itself from a long horizontal vessel into a kind of giant live-aboard vertical buoy, for conducting wave measurements. The FLIP ship is one of many distinctive wave science tools; as the book draws out, this has been a diverse and even quirky field, methodologically, with wave scientists approaching their subject from all angles.

“Ocean and water waves look very different in different national contexts,” Helmreich says. “In the Netherlands, interest in waves is very much bound up with hydrological engineers’ desires to keep the country dry. In the United States, ocean wave science was crucially formatted by World War II, and the Cold War, and military prerogatives.”

As it happens, the late Munk (1917-2019), whom The New York Times once called “The Einstein of waves,” developed some of his insights and techniques while helping to forecast wave heights for the Allied invasion of Normandy in World War II. In spinning out his thought experiment about aliens to Helmreich, Munk was making the case for empiricism in wave science.

“Mathematical formalisms and representations are vital to understanding what waves are doing, but they’re not enough,” Helmreich says. “You need the world.”

Disney makes waves

But as Helmreich also emphasizes in his work, wave science depends on a delicate interplay between theory, modeling, and inventive empirical research. What might the Disney film “Fantasia” have to do with wave science? Movies once relied on optical film recordings to play their soundtracks, and “Fantasia’s” soundtrack included schematic renderings of sound levels. British wave scientists realized they could adapt this technique of depicting sound patterns to represent sets of waves.

For that matter, by the 1960s, scientists also began categorizing waves into a wave spectrum, sorted by the frequency with which they arrived at the shore. That idea comes directly from the concept of spectra of light, radio, and sound waves. In this sense, existing scientific concepts have periodically been deployed by wave researchers to make sense of what they can already see.

“The book asks questions about the relationship between reality and its representations,” Helmreich says. “Waves are obviously empirical things in the world. But understanding how they work requires abstractions, whether you are a scientist at sea, a surfer, or an engineer trying to figure out what will happen at a coastline. And those representations are influenced by the tools scientists use, whether cameras, pressure sensors, sonar, film, buoys, or computer models. What scientists think waves are is imprinted by the media they use to study waves.”

As Helmreich notes, the interdisciplinary nature of wave science has evolved. Physics shaped wave science for much of the 20th century. More recently, as scientists recognize that waves transmit things like agricultural runoff and the aerosolized signatures of coastal communities’ car exhaust, biological and chemical oceanographers have entered the field. And climate scientists and engineers are increasingly concerned with rising sea levels and seemingly bigger waves.

“Ocean waves used to belong to the physicists,” Helmreich says. “Today a lot of it is about climate change and sea level rise.”

The shape of things to come

But even as other fields have fed into ocean wave science, so too has wave science influenced other disciplines. From medicine to social science, the concept of the wave has been applied to social phenomena to help organize our understanding of matters such as disease transmission and public health.

“People use the figure of the wave to think about the shape of things to come,” Helmreich says. “We certainly saw that during the Covid pandemic, when the wave was considered to be both descriptive, of what was happening, and predictive, of what would happen next.”

Scholars have praised “A Book of Waves.” Hugh Raffles, a professor and chair of anthropology at The New School, has called it “a model of expansive transdisciplinary practice,” as well as “a constant surprise, a mind-opening recalibration of the ways we assemble nature, science, ethnography, and the arts.”

Helmreich hopes readers will consider how extensively social, political, and civic needs have influenced wave studies. Back during World War II, Walter Munk developed a concept called “significant wave height” to help evaluate the viability of landing craft off Normandy.

“There’s an interesting, very contingent history to the metric of significant wave height,” Helmreich says. “But one can open up the concept of significance to ask: Significant for whom, and for what? Significance, in its wider cultural meaning, is about human projects, whether to do with warfare, coastal protection, humanitarian rescue at sea, shipping, surfing, or recreation of other kinds. How waves become significant is an anthropological question. ‘A Book of Waves’ seeks to map the many different ways that waves have become significant to people.”
