3 Questions: Daron Acemoglu on the “dangerous situation” still facing the U.S.

Two stunning events on Jan. 6 — rioters invading the U.S. Capitol, and roughly 140 GOP members of Congress voting not to certify the presidential election results in certain states — have intensified national concern about the future of American democracy.

To extend the discussion, MIT News spoke with MIT economist and Institute Professor Daron Acemoglu, who has written extensively about democratic institutions, political dynamics, and the way democracy increases economic growth. Acemoglu’s most recent book, “The Narrow Corridor,” co-authored with James Robinson of the University of Chicago and published by Penguin Random House in 2019, contends that we overestimate the supposedly “brilliant design” of the U.S. Constitution as a bulwark against antidemocratic forces. Instead, rights and liberties in the U.S. have depended upon “society’s mobilization … at every turn” throughout our history.

Indeed, Acemoglu asserts, while a state can protect weaker citizens within society, we must also work to limit the power of the state. Liberty and democracy thus exist in a “narrow corridor” between lawlessness and authoritarianism; social action is needed to protect liberty when the state starts to discard the rule of law. We asked Acemoglu to reflect upon U.S. events through this framework.

Q: You have said in the past that democracy-based liberty in the U.S. has a “much more troubled and contingent existence” than we might like to imagine. What is your evaluation of the condition of U.S. democracy in light of this month’s events?

A: My take is that the events of Jan. 6 are nothing surprising, and they’re not really a change in the trend that we have experienced. I think we have been in a very dangerous situation for the last four years. Of course Trump’s election itself is not something that came out of nowhere. But focusing on the last four years, Trump has continuously weakened U.S. institutions, polarized the country, and destroyed political norms that were unfortunately already weak. So it has been a rocky ride for U.S. democracy, and one of the most pernicious weapons that Trump has had has been the use of social media to appeal especially to a group of fairly extremist and even prone-to-violence supporters.

We’ve seen this play out, in slow motion and then accelerated form, over the last four years. I don’t find what happened in January 2021 surprising, and I would be shocked if anybody is shocked after what we’ve experienced.

I am also very opposed to the narrative that the events of Jan. 6 prove U.S. institutions work because we’ve come back from the brink. Everybody should be concerned that this is not an isolated event, and the next one could be much more damaging. I think the only good thing that has come out is that hopefully people are becoming a little bit better informed, and recognizing that a broad reassessment of where we are is necessary. Trump will not be the last American populist, and the next one may be much more dangerous, so we really have to rethink our institutional safeguards and the foundational issues that have brought us here.

Q: What are the main actions that are needed for the U.S. to stay in what you have termed the “narrow corridor” in which political liberty is possible?

A: My view is that the four years of Trump would have been even more damaging if people had not taken action. From the first day, there were civil society actions against Trump, and the media has been the disinfecting sunlight on Trump’s corruption and all sorts of unethical behavior. So I think we really owe the survival of our democracy to that sort of civil society action.

But of course the question is: How can we rebuild our institutions? And my analysis of history suggests that when things become very polarized, very zero-sum inside the narrow corridor, we have a lot of instability. There are economic problems that we have to tackle — creating shared prosperity would help a lot, and during economically hard times, politics becomes more polarized. There are a lot of challenges ahead, and none of them are easy. But we cannot afford to waste any time to get started on them.

Q: To what extent is the U.S. part of a global trend toward illiberalism — or conversely, how much are its politics unique?

A: Every country, just like Tolstoy’s unhappy families, is living this crisis differently, but of course, it cannot help but strike one that there are many parallels between what’s happening in Brazil, Turkey, Hungary, Poland, the U.S., and to some degree even the U.K. I don’t believe social science has generated a very good explanation for why so many disparate countries are going in the same direction. So I don’t pretend to know the answer.

What’s common about these economies is they’re all having a very hard time creating a model of shared prosperity, and they’re all being affected by globalization, in the cultural sense as well as economically. Nationalism is on the rise, and what the reasons for that are I don’t know, but discontent among lower- and middle-income people, along with nationalism, creates a pretty good breeding ground for right-wing populism.

All kinds of autocrats have worked out how to exploit media, especially social media, and seem to have a better playbook for weakening democratic participation and checks and controls at the moment. But we have seen in the U.S. that people who were worried about Trump’s destructive impact did protest against him, created a lot of pressure, and turned out in the midterm and presidential elections. I wouldn’t take that as comfort that we’re going to be okay, but I wouldn’t say that nobody wants to defend democracy, either.

Biden taps Eric Lander and Maria Zuber for senior science posts

President-elect Joseph Biden has selected two MIT faculty leaders — Broad Institute Director Eric Lander and Vice President for Research Maria Zuber — for top science and technology posts in his administration.

Lander has been named Presidential Science Advisor, a position he will assume soon after Biden’s inauguration on Jan. 20. He has also been nominated as director of the Office of Science and Technology Policy (OSTP), a position that requires Senate confirmation.

Biden intends to elevate the Presidential Science Advisor, for the first time in history, to be a member of his Cabinet.

Zuber has been named co-chair of the President’s Council of Advisors on Science and Technology (PCAST), along with Caltech chemical engineer Frances Arnold, a 2018 winner of the Nobel Prize in chemistry. Zuber and Arnold will be the first women ever to co-chair PCAST.

Lander, Zuber, Arnold, and other appointees will join Biden in Wilmington, Delaware, on Saturday afternoon, where the president-elect will introduce his team of top advisors on science and technology, domains he has declared as crucial to America’s future. Biden has charged this team with recommending strategies and actions to ensure that the nation maximizes the benefits of science and technology for America’s welfare in the 21st century, including addressing health needs, climate change, national security, and economic prosperity.

“From Covid-19 to climate change, cybersecurity to U.S. competitiveness in innovation, the nation faces urgent challenges whose solutions depend on a broad and deep understanding of the frontiers of science and technology. In that context, it is enormously meaningful that science is being raised to a Cabinet-level position for the first time,” MIT President L. Rafael Reif says. “With his piercing intelligence and remarkable record as scientific pioneer, Eric Lander is a superb choice for this new role. And given her leadership of immensely complex NASA missions and her deep engagement with the leading edge of dozens of scientific domains as MIT’s vice president for research, it is difficult to imagine someone more qualified to co-chair PCAST than Maria Zuber. This is a banner day for science, and for the nation.”

Lander will take a leave of absence from MIT, where he is a professor of biology, and the Broad Institute, which he has led since its 2004 founding. The Broad Institute announced today that Todd Golub, currently its chief scientific officer as well as a faculty member at Harvard Medical School and an investigator at the Dana-Farber Cancer Institute, will succeed Lander as director.

Zuber, the E.A. Griswold Professor of Geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, will continue to serve as the Institute’s vice president for research, a position she has held since 2013.

Separately, Biden announced earlier this week that he will nominate Gary Gensler, professor of the practice of global economics and management at the MIT Sloan School of Management, as chair of the Securities and Exchange Commission.

Eric Lander

Eric S. Lander, 63, has served since 2004 as founding director of the Broad Institute of MIT and Harvard. A geneticist, molecular biologist, and mathematician, he was one of the principal leaders of the international Human Genome Project from 1990 to 2003, and is committed to attracting, teaching, and mentoring a new generation of scientists to fulfill the promise of genomic insights to benefit human health.

From 2009 to 2017, Lander informed federal policy on science and technology as co-chair of PCAST throughout the two terms of President Barack Obama.

“Our country once again stands at a consequential moment with respect to science and technology, and how we respond to the challenges and opportunities ahead will shape our future for the rest of this century,” Lander says. “President-elect Biden understands the central role of science and technology, and I am deeply honored to have been asked to serve.”

Trained as a mathematician, Lander earned a BA in mathematics from Princeton University in 1978. As a Rhodes Scholar from 1978 to 1981, he attended Oxford University, where he earned his doctorate in mathematics. Lander served on the Harvard Business School faculty from 1981 to 1990, teaching courses on managerial economics, decision analysis, and bargaining.

In 1983, his younger brother, Arthur, a developmental neurobiologist, suggested that, with his interest in coding theory, Lander might be interested in how biological systems, including the brain, encode and process information. Lander began to audit courses at Harvard and to moonlight in laboratories around Harvard and MIT, learning about molecular biology and genetics.

In 1986, he was appointed a Whitehead Fellow of the Whitehead Institute for Biomedical Research, where he started his own laboratory. In 1990, Lander was appointed as a tenured professor in MIT’s Department of Biology and as a member of the Whitehead Institute.

Lander’s honors and awards include the MacArthur Fellowship, the Breakthrough Prize in Life Sciences, the Albany Prize in Medicine and Biological Research, the Gairdner Foundation International Award of Canada, and MIT’s Killian Faculty Achievement Award. He was elected as a member of the U.S. National Academy of Sciences in 1997 and of the U.S. Institute of Medicine in 1999. 

Maria Zuber

The daughter of a Pennsylvania state trooper and the granddaughter of coal miners, Maria T. Zuber, 62, has been a member of the MIT faculty since 1995 and MIT’s vice president for research since 2013. She has served since 2012 on the 24-member National Science Board (NSB), the governing body of the National Science Foundation, serving as NSB chair from 2016 to 2019.

Zuber’s own research bridges planetary geophysics and the technology of space-based laser and radio systems. She was the first woman to lead a NASA spacecraft mission, serving as principal investigator of the space agency’s Gravity Recovery and Interior Laboratory (GRAIL) mission, an effort launched in 2008 to map the moon’s gravitational field to answer fundamental questions about the moon’s evolution and internal composition. In all, Zuber has held leadership roles associated with scientific experiments or instrumentation on nine NASA missions since 1990.

As MIT’s vice president for research, Zuber is responsible for research administration and policy. She oversees more than a dozen interdisciplinary research centers, including the David H. Koch Institute for Integrative Cancer Research, the Plasma Science and Fusion Center, the Research Laboratory of Electronics, the Institute for Soldier Nanotechnologies, the MIT Energy Initiative (MITEI), and the Haystack Observatory. She is also responsible for MIT’s research integrity and compliance, and plays a central role in research relationships with the federal government.

“Many of the most pressing challenges facing the nation and the world will require breakthroughs in science and technology,” Zuber says. “An essential element of any solution must be rebuilding trust in science, and I’m thrilled to have the opportunity to work with President-elect Biden and his team to drive positive change.”

Zuber holds a BA in astronomy and geology from the University of Pennsylvania, awarded in 1980, and an ScM and PhD in geophysics from Brown University, awarded in 1983 and 1986, respectively. She has received awards and honors including MIT’s Killian Faculty Achievement Award; the American Geophysical Union’s Harry H. Hess Medal; and numerous NASA awards, including the Distinguished Public Service Medal and the Outstanding Public Leadership Medal. She was elected as a member of the National Academy of Sciences in 2004.

Todd Golub

Todd Golub, 57, will become the next director of the Broad Institute. He joined Dana-Farber and Harvard Medical School in 1997, and is currently a professor of pediatrics at Harvard Medical School and the Charles A. Dana Investigator in Human Cancer Genetics at Dana-Farber.

Golub served as a leader of the Whitehead Institute/MIT Center for Genome Research, the precursor to the Broad Institute. He has also been an investigator with the Howard Hughes Medical Institute, and has served as chair of numerous scientific advisory boards, including at St. Jude Children’s Research Hospital and the National Cancer Institute’s Board of Scientific Advisors.

Golub is also an entrepreneur, having co-founded several biotechnology companies to develop diagnostic and therapeutic products. He has created and applied genomic tools to understand the basis of disease, and to develop new approaches to drug discovery. He has made fundamental discoveries in the molecular basis of human cancer, and has helped develop new approaches to precision medicine.

“Broad is in a stronger scientific and cultural position today than at any point in our 16-year history,” Golub says. “Moreover, the pandemic has pushed us to think differently about nearly every aspect of how we collaborate and deliver on our scientific mission. We are well-positioned to work with the larger scientific community to confront some of the most urgent challenges in biomedicine: from developing novel diagnostics and therapeutics for infectious diseases and cancer, to understanding the genetic basis of cardiovascular disease and mental illness. I am honored to serve as director of this remarkable institution.”

Members of the Broad Institute’s Board of Directors thanked Lander for his lengthy service and expressed optimism in Golub’s ability to build upon that foundation.

“Todd’s deep knowledge of the Broad Institute community, its science, and its mission to propel the understanding and treatment of disease make him the perfect choice for the Institute’s next director,” says Louis Gerstner, Jr., chair of the Broad Institute Board of Directors. “Todd is well-positioned to lead the Institute and our key scientific collaborations forward, and the board is highly confident he will continue the Broad’s culture of innovation, collegiality, and constant renewal.”

Broad board member Shirley Tilghman, professor of molecular biology and public policy and president emerita of Princeton University, adds: “In its 16 years, the Broad has become one of the most unique institutions in the biomedical ecosystem. Under Eric’s and Todd’s leadership, it has developed powerful new methods and made many contributions to genomic medicine that will benefit human health.”

MIT launches Center for Constructive Communication

Today MIT announced the launch of the interdisciplinary Center for Constructive Communication, which will leverage data-driven analytics to better understand current social and mass media ecosystems and design new tools and communication networks capable of bridging social, cultural, and political divides.

An important aspect of the new center is its commitment to reach beyond academia to work closely with experienced, locally based organizations and trusted influencers in underserved, marginalized communities across the country. These collaborations will be critical for launching pilot programs to evaluate which tools offer the greatest potential to create more trusted communication within our deeply fragmented society.

Based at the MIT Media Lab, and fostering collaborations across the MIT campus and beyond, the center is being established with over $10 million in commitments from foundations, corporations, government programs, and philanthropists. It will bring together researchers in artificial intelligence, computational social science, digital interactive design, and learning technologies to collaborate with software engineers, journalists, artists, public health experts, and community organizers.

The center will be directed by Deb Roy, professor of media arts and sciences, whose work on machine learning, human-machine interaction, analysis of large-scale media ecosystems, and advancing constructive dialogue will be integrated into the new center’s broad research program. For the past year, Roy also served as the executive director of the Media Lab. As an entrepreneur, Roy was co-founder and CEO of Bluefin Labs, a media analytics company acquired by Twitter in 2013, and is co-founder and chairperson of Cortico, a nonprofit collaborator with the center that builds systems for bringing underheard community voices into a healthier public sphere.

“Social media technologies promised to open up our worlds to new people and perspectives, but too often have ended up limiting and distorting our understanding of others,” says Roy. “Last week’s violence in Washington, D.C. — carried out by a mob mobilized largely in online bubbles — laid bare the challenge at hand. We now live in a fragmented society dominated by the loudest, most extreme voices, stifling avenues of communication that might lead to more constructive dialogues.”

“What our new center hopes to achieve is the creation of new ‘spaces’ where diverse voices and nuanced perspectives of so many who have been marginalized can be heard, and the design of tools and methods that enable influencers and organizations to play new and even more beneficial roles in society,” says Roy.

“A societal imperative”

In addressing these highly polarizing divides with the aid of new human-machine systems, center researchers will remain firmly committed to ensuring that the AI technologies key to this effort will enhance rather than replace human capabilities, and will move quickly to identify and guard against misuse of any of the work coming out of the center.

Marking the center’s launch, MIT President L. Rafael Reif says, “A signature strength of MIT is developing technological solutions to address humanity’s great challenges. Yet those who design and benefit from advanced technologies have a special responsibility to protect society from their unintended harms. Few forces in our society today are more powerful — and therefore more dangerous — than social media. To the challenge of protecting society against social media’s potential harms and helping it fulfill its highest purposes, MIT offers a unique depth and breadth of expertise. The establishment of the Center for Constructive Communication, with Deb Roy at the helm, is a significant step in this important work.”

The center’s research program, focused on creating new human-machine communication capabilities, will be advanced by center fellows from BIPOC (Black, Indigenous, and people of color) and other communities, who work in media and communications, health disparities, immigration, racial justice, AI, and data analytics. These fellows will be important collaborators in designing, testing, and deploying strategies that build on ongoing research that uses AI-powered tools for listening, mapping, and shaping how information spreads.

“Involving trusted local influencers and organizers in the creation of these new capabilities is critical to addressing the harms of misinformation on societal issues such as Covid-19, immigration, and poverty,” says Ceasar McDowell, professor of the practice of civic design and associate head of the Department of Urban Studies and Planning, who will oversee the center’s fellowship program. “This requires building local capacity to use sophisticated communication analytics, to effectively identify, understand, and counter misinformation that threatens the health, safety, and security of marginalized communities.”

Martha Minow, the 300th Anniversary University Professor at Harvard University, who serves as a senior advisor to the center, stresses the importance of its work in maintaining a healthy pluralistic democracy. “In recent years, we’ve seen just how quickly an epidemic of misinformation and broken communication networks can expose the frailties of our democracy,” says Minow. “Opening trusted communication channels, encouraging dialogue across the full political spectrum, and engaging our most marginalized communities is not merely a wish list — it is a societal imperative.”

Building on the Laboratory for Social Machines

The Center for Constructive Communication represents the evolution and scaling of the Laboratory for Social Machines (LSM), which was established by Roy at the MIT Media Lab in 2014 and today includes a research team of approximately 30. LSM’s work built on Roy’s earlier Media Lab research on language analysis, and expanded it into social media analytics, often presented through data visualizations that detailed social and mass media ecosystems.

More recently, LSM has expanded into the design of social technologies to support communication and learning across divides, establishing a track record that includes:

  • more than 160 peer-reviewed publications in human-machine communication and learning;
  • a study, in collaboration with Sinan Aral, the David Austin Professor of Management, on the spread of false news that was the cover story of Science magazine;
  • a tech-assisted coaching system, Learning Loops, for supporting kids’ narrative development, already successfully piloted with hundreds of participants in collaboration with community organizations; and
  • Beat the Virus, a coalition created in close collaboration with global health expert, visiting professor, and former U.S. Assistant Surgeon General Susan Blumenthal in response to Covid-19, to deliver science-grounded public health guidance via social media influencers and to serve as a resource hub for trusted information about the pandemic. In addition, LSM social media analytics guided the generation of over 650 million media impressions and 5.5 million engagements with no paid media.

The new center will incorporate all work currently underway in LSM, including:

  • PULSE (Public Understanding, Listening, and Sense-Making Engine), a collection of tools and methods for combining human listening with machine learning to make sense of public expressions of opinion and lived experience;
  • HealthPULSE, a public health communication system that leverages the PULSE toolkit to navigate our fragmented media landscape to provide reliable and relevant public guidance;
  • Clover, a pro-social media network designed to promote positive identity development, a sense of belonging, and exploration for tweens and early teens; and
  • StoryLine, media and content analytics for understanding how the content of stories connects with the audiences of stories.

Through a cooperation agreement with Cortico, the center incorporates Local Voices Network (LVN) data and methods in many projects. LVN aims to build infrastructure for a stronger democracy through technology-powered insights from small group facilitated conversations connected in a network of community engagement.

In addition to current collaborations with Cortico, New America, 826 Boston, and Frontline/PBS, the center is seeking to expand its relationship with numerous additional organizations across fields — organizations best suited to building the kind of trust needed to achieve meaningful change.

Ten “keys to reality” from Nobel laureate Frank Wilczek

In the spring of 1970, colleges across the country erupted with student protests in response to the Vietnam War and the National Guard’s shooting of student demonstrators at Kent State University. At the University of Chicago, where Frank Wilczek was an undergraduate, regularly scheduled classes were “improvised and semivoluntary” amid the turmoil, as he recalls.

It was during this turbulent time that Wilczek found unexpected comfort, and a new understanding of the world, in mathematics. He had decided to sit in on a class by physics professor Peter Freund, who, with a zeal “bordering on rapture,” led students through mathematical theories of symmetry and ways in which these theories can predict behaviors in the physical world.

In his new book, “Fundamentals: Ten Keys to Reality,” published today by Penguin Press, Wilczek writes that the lessons were a revelation: “To experience the deep harmony between two different universes — the universe of beautiful ideas and the universe of physical behavior — was for me a kind of spiritual awakening. It became my vocation. I haven’t been disappointed.”

Wilczek, who is the Herman Feshbach Professor of Physics at MIT, has since made groundbreaking contributions to our fundamental understanding of the physical world, for which he has been widely recognized, most notably in 2004 with the Nobel Prize in Physics, which he shared with physicists David Gross and David Politzer. He has also authored several popular science books on physics and the history of science.

In his new book, he distills scientists’ collective understanding of the physical world into 10 broad philosophical themes, using the fundamental theories of physics, from cosmology to quantum mechanics, to reframe ideas of space, time, and our place in the universe.

“People wrestle with what the world is all about,” Wilczek tells MIT News. “They’re not concerned with knowing precisely what Coulomb’s law is, but want to know more about questions like the ancient Greeks asked: What is space? What is time? So in the end, I came up with 10 assertions, at the levels of philosophy but backed up by very concrete facts, to organize what we know.”

A rollercoaster reborn

Wilczek wrote the bulk of the book this past spring, in the midst of another tumultuous time, at the start of a global pandemic. His grandson had been born as Wilczek was laying out the structure for his book, and in the preface, the physicist writes that he watched as the baby began building up a model of the world, based on his observations and interactions with the environment, “with insatiable curiosity and few preconceptions.”

Wilczek says that scientists may take a cue from the way babies learn — by building and pruning more detailed models of the world, with a similar unbiased, open outlook. He can recall times when he felt his own understanding of the world fundamentally shift. The college course on mathematical symmetry was an early instance. More recently, the rise of artificial intelligence and machine learning has prompted him to rethink “what knowledge is, and how it’s acquired.”

He writes: “The process of being born again can be disorienting. But, like a roller-coaster ride, it can also be exhilarating. And it brings this gift: To those who are born again, in the way of science, the world comes to seem fresh, lucid, and wonderfully abundant.”

“Patterns in matter”

Wilczek’s book contains ample opportunity for readers to reframe their view of the physical world. For instance, in a chapter entitled “There’s Plenty of Space,” he writes that, while the universe is vast, there is another scale of vastness in ourselves. To illustrate his point, he calculates that there are roughly 10 octillion atoms that make up the human body. That’s about 1 million times the number of stars in the visible universe. The multitudes within and beyond us are not contradictory, he says, but can be explained by the same set of physical rules.
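
Those orders of magnitude are easy to check. The short sketch below is only an illustration of the arithmetic, using the round figures implied by the passage: roughly 10^28 atoms in a human body and on the order of 10^22 stars in the visible universe (published estimates of the star count vary by an order of magnitude or more).

```python
# Order-of-magnitude check of the comparison in the passage.
atoms_in_body = 1e28              # "10 octillion" atoms, as Wilczek calculates
stars_in_visible_universe = 1e22  # a commonly quoted rough estimate

ratio = atoms_in_body / stars_in_visible_universe
print(f"atoms per star: {ratio:.0e}")  # ~1e+06, i.e., about a million
```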

And in fact, the universe, in all its diversity, can be described by a surprisingly small set of rules, collectively known as the Standard Model of physics, though Wilczek prefers to call it by another name.

“The so-called Standard Model is the culmination of millennia of investigation, allowing us to understand how matter works, very fully,” Wilczek says. “So calling it a model, and standard, is kind of a lost opportunity to really convey to people the magnitude of what’s been achieved by humanity. That’s why I like to call it the ‘Core.’ It’s a central body of understanding that we can build out from.”

Wilczek takes the reader through many of the key experiments, theories, and revelations that physicists have made in building and validating the Standard Model, and its mathematical descriptions of the universe.

Included in this often joyful scientific tour are brief mentions of Wilczek’s own contributions, such as his Nobel-winning work establishing the theory of quantum chromodynamics; his characterization of the axion, a theoretical particle that he named after a laundry detergent (“It was short, catchy, and would fit in nicely alongside proton, neutron, electron, and pion,” he writes); and his introduction of the anyon — an entirely new kind of particle that is neither a fermion nor a boson.
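
A standard way to state what sets anyons apart (a textbook formulation, not a passage from the book) is through exchange statistics: swapping two identical particles multiplies the system’s wavefunction by a phase, and the allowed value of that phase is what distinguishes the families of particles.

```latex
% Exchange of two identical particles:
\psi(x_2, x_1) = e^{i\theta}\,\psi(x_1, x_2),
\qquad
\theta =
\begin{cases}
0 & \text{bosons}\\
\pi & \text{fermions}\\
\text{any intermediate value} & \text{anyons (in two-dimensional systems)}
\end{cases}
```

The name is a nod to that freedom: in two dimensions the exchange phase can take essentially any value.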

In April, and then separately in July, scientists made the first observations of anyons, nearly 40 years after Wilczek first proposed their existence.

“I was beginning to think it would never happen,” says Wilczek, who was finishing up his book when the discoveries were made public. “When it finally did, it was a beautiful surprise.”

The discovery of anyons opens possibilities for the particles to be used as building blocks for quantum computers, and marks another milestone in our understanding of the universe.

In closing his book, Wilczek writes about “complementarity” — a concept in physics that refers to two seemingly contrasting theories, such as the wave and particle theories of light, that can separately explain the same set of phenomena. He points to many complementary theories of physics throughout the book and extends the idea to philosophy and ways in which accepting contrasting views of the world can help us to expand our experience.

“With progress, we’ve come to consider people and creatures as having intrinsic value and being worthy of profound respect, just like ourselves,” he writes. “When we see ourselves as patterns in matter, it is natural to draw our circle of kinship very wide indeed.”

MIT.nano’s Immersion Lab opens for researchers and students

The MIT.nano Immersion Lab, MIT’s first open-access facility for augmented and virtual reality (AR/VR) and interacting with data, is now open and available to MIT students, faculty, researchers, and external users.

The powerful set of capabilities is located on the third floor of MIT.nano in a two-story space resembling a black-box theater. The Immersion Lab contains embedded systems and individual equipment and platforms, as well as data capacity to support new modes of teaching and applications such as creating and experiencing immersive environments, human motion capture, 3D scanning for digital assets, 360-degree modeling of spaces, interactive computation and visualization, and interfacing of physical and digital worlds in real time.

“Give the MIT community a unique set of tools and their relentless curiosity and penchant for experimentation is bound to create striking new paradigms and open new intellectual vistas. They will probably also invent new tools along the way,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology. “We are excited to see what happens when students, faculty, and researchers from different disciplines start to connect and collaborate in the Immersion Lab — activating its virtual realms.”

A major focus of the lab is to support data exploration, allowing scientists and engineers to analyze and visualize their research at the human scale with large, multidimensional views, enabling visual, haptic, and aural representations. “The facility offers a new and much-needed laboratory to individuals and programs grappling with how to wield, shape, present, and interact with data in innovative ways,” says Brian W. Anthony, the associate director of MIT.nano and faculty lead for the Immersion Lab.

Massive data is one output of MIT.nano, as the workflow of a typical scientific measurement system within the facility requires iterative acquisition, visualization, interpretation, and data analysis. The Immersion Lab will accelerate the data-centric work of MIT.nano researchers, but also of others who step into its space, driven by their pursuits of science, engineering, art, entertainment, and education.

Tools and capabilities

The Immersion Lab not only assembles a variety of advanced hardware and software tools, but is also an instrument in and of itself, says Anthony. The two-story cube, measuring approximately 28 feet on each side, is outfitted with an embedded OptiTrack system that enables precise motion capture via real-time active or passive 3D tracking of objects, as well as full-body motion analysis with the associated software.

Complementing the built-in systems are stand-alone instruments that study the data, analyze and model the physical world, and generate new, immersive content, including:

  • a Matterport Pro2 photogrammetric camera to generate 3D, geographically and dimensionally accurate reconstructions of spaces (Matterport can also be used for augmented reality creation and tagging, virtual reality walkthroughs, and 3D models of the built environment);
  • a Lenscloud system that uses 126 cameras and custom software to produce high-volume, 360-degree photogrammetric scans of human bodies or human-scale objects;
  • software and hardware tools for content generation and editing, such as 360-degree cameras, 3D animation software, and green screens;
  • backpack computers and VR headsets to allow researchers to test and interact with their digital assets in virtual spaces, untethered from a stationary desktop computer; and
  • hardware and software to visualize complex and multidimensional datasets, including HP Z8 data science workstations and Dell Alienware gaming workstations.

Like MIT.nano’s fabrication and characterization facilities, the Immersion Lab is open to researchers from any department, lab, and center at MIT. Expert research staff are available to assist users.

Support for research, courses, and seminars

Anthony says the Immersion Lab is already supporting cross-disciplinary research at MIT, working with multiple MIT groups for diverse uses — quantitative geometry measurements of physical prototypes for advanced manufacturing, motion analysis of humans for health and wellness uses, creation of animated characters for arts and theater production, virtual tours of physical spaces, and visualization of fluid and heat flow for architectural design, to name a few.

The MIT.nano Immersion Lab Gaming Program is a four-year research collaboration between MIT.nano and video game development company NCSOFT that seeks to chart the future of how people interact with the world and each other via hardware and software innovations in gaming technologies. In the program’s first two calls for proposals in 2019 and 2020, 12 projects from five different departments were awarded $1.5 million of combined research funding. The collaborative proposal selection process by MIT.nano and NCSOFT ensures that the awarded projects are developing industrially impactful advancements, and that MIT researchers are exposed to technical practitioners at NCSOFT.

The Immersion Lab also partners with the Clinical Research Center (CRC) at the MIT Institute for Medical Engineering and Science to generate a human-centric environment in which to study health and wellness. Through this partnership, the CRC has provided sensors, equipment, and expertise to capture physiological measurements of a human body while immersed in the physical or virtual realm of the Immersion Lab.

Undergraduate students can use the Immersion Lab through sponsored Undergraduate Research Opportunities Program (UROP) projects. Recent UROP work includes jumping as a new form of locomotion in virtual reality and analyzing human muscle lines using motion capture software. Starting with MIT’s 2021 Independent Activities Period, the Immersion Lab will also offer workshops, short courses, and for-credit classes in the MIT curriculum.

Members of the MIT community and general public can learn more about the various application areas supported by the Immersion Lab through a new seminar series, Immersed, beginning in February. This monthly event will feature talks by experts in the fields of current work, highlighting future goals to be pursued with the immersive technologies. Slated topical areas include motion in sports, uses for photogrammetry, rehabilitation and prosthetics, and music/performing arts.

New ways of teaching and learning

Virtual reality makes it possible for instructors to bring students to environments that are hard to access, either geographically or at scale. New modalities for introducing the language of gaming into education allow students to discover concepts for themselves.

As a recent example, William Oliver, associate professor in electrical engineering and computer science, is developing Qubit Arcade to teach core principles of quantum computing via a virtual reality demonstration. Users can create Bloch spheres, control qubit states, measure results, and compose quantum circuits in an intuitive 3D representation with virtualized quantum gates.
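
At the heart of such a demonstration is the textbook mapping from a qubit’s state to a point on the Bloch sphere. The snippet below is a generic sketch of that mapping (it is not code from Qubit Arcade): it computes the Bloch coordinates of a single-qubit state as expectation values of the Pauli operators.

```python
import numpy as np

# Pauli operators
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(alpha, beta):
    """Bloch-sphere coordinates (x, y, z) of the qubit state alpha|0> + beta|1>."""
    psi = np.array([alpha, beta], dtype=complex)
    psi = psi / np.linalg.norm(psi)  # normalize the state
    return tuple(float(np.real(psi.conj() @ P @ psi)) for P in (X, Y, Z))

print(bloch_vector(1, 0))                            # |0>: north pole, (0, 0, 1)
print(bloch_vector(1 / np.sqrt(2), 1 / np.sqrt(2)))  # equal superposition: equator, (1, 0, 0)
```

Single-qubit gates then act as rotations of this vector, which is what makes a 3D representation with virtualized gates a natural teaching tool.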

IMES Director Elazer Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, is using the Immersion Lab as a teaching tool for interacting with 3D models of the heart. With the 3D and 4D visualization tools of the Lab, Edelman and his students can see in detail the evolution of congenital heart failure models, something his students could previously only study if they happened upon a case in a cadaver.

“Software engineers understand how to implement concepts in a digital environment. Artists understand how light interacts with materials and how to draw the eye to a particular feature through contrast and composition. Musicians and composers understand how the human ear responds to sound. Dancers and animators understand human motion. Teachers know how to explain concepts and challenge their students. Hardware engineers know how to manipulate materials and matter to build new physical functionality. All of these fields have something to contribute to the problems we are tackling in the Immersion Lab,” says Anthony.

A faculty advisory board has been established to help the MIT.nano Immersion Lab identify opportunities enabled by the current tools and those that should be explored with additional software and hardware capabilities. The lab’s advisory board currently comprises seven MIT faculty from six departments. Such broad faculty engagement ensures that the Immersion Lab engages in projects across many disciplines and launches new directions of cross-disciplinary discoveries.

Visit nanousers.mit.edu/immersion-lab to learn more.

2020 MIT Water Summit brings international audiences together to discuss resilience

Earlier this semester, the MIT student group The Water Club gathered to discuss topics for their eighth annual MIT Water Summit. Given the dramatic challenges of 2020, the group knew this year’s decision was particularly weighty. Commenting on the process, Laura Chen, a junior in chemical engineering and director of the 2020 Water Summit, recalled, “In light of the effects of Covid-19 across the world, as well as the resurgence of the Black Lives Matter movement in the U.S., we thought a lot about how we might [through the Water Summit] create a better picture of our future, on the scale of system-wide challenges.” To that end, the student organizers sought a theme and group of presenters that could focus the discussion on structural change, rather than point-source solutions.

The Water Summit, a three-day student-run conference that gathers college students, professors, researchers, and industry professionals to discuss the newest ideas and innovations in the world’s water resources and our use of them, attracted 480 people from all over the world. The international nature of both audience and presenters showcased the benefits that can come from the pivot to virtual space, its own demonstration of resilience. 

Resilience has become a bit of a buzzword in recent years, cropping up in discussions of everything from mental health, to business development, to climate change response, and more. Given the ubiquity of the term — especially in 2020, when the pandemic forced rapid changes to everyday practices around the world — it proved a potent frame for questioning systemic challenges and structural solutions in the water sector. In the words of opening speaker Rebecca Farnum, community outreach advocate at Syracuse University London: “Who are we expecting to be resilient? What is forcing certain communities to be resilient? As we re-imagine resilience, can our policies and governments be, so our people and ecosystems don’t have to be?”

Water systems resilience: Different definitions, similar goals

Farnum’s opening talk framed the complexities inherent to resilience when it comes to water. “Resilience is different for everyone,” she explained, and it all depends on which water sector stakeholder you are talking to. The differences stem from the different uses each individual or community has for water. For a farmer, water is an expense. For an activist, it is a right. However, this does not mean that any one perspective should be dominant over another. Each holds an important piece of the puzzle of water resilience that, if understood together, can guide innovation and decision-making that can improve water sustainability around the world and build toward a water-secure future for all.

Using resilience as a motivator for water sector innovation is an exciting mission for many researchers and entrepreneurs. However, Farnum noted that resilience can be a burden as well, especially to those who have no choice but to exercise it daily. In conversations with young people living in water-insecure areas, she has heard these frustrations time and again. The problem is that it’s often the individual that is required to be continuously resilient to challenges of water insecurity, while the structures exacerbating these challenges rarely — or slowly — change. Farnum summed up this call for structural change this way: “Placing the burden of resiliency on people is problematic. Can we re-imagine a world where systems bear this burden, rather than people?”

In fact, the challenges of water insecurity — and need for widespread resilience — are increasing globally, as the world’s poorest people bear the brunt of its effects. About 2.2 billion people around the world lack access to safe drinking water. The spread of Covid-19 revealed these significant gaps in clean water access and sanitation, and how unprepared many communities were to respond to emergency scenarios like this one. On top of this, climate change is further depleting water sources while scaling up natural disasters like hurricanes and droughts, which have reached record levels of destruction in 2020. From record-breaking wildfires in the United States and Australia, to cyclones in India that caused billions of dollars in damage, to devastating floods in Vietnam and Cambodia, disaster scenarios like these are increasing in frequency and power. It is therefore more important than ever to not only create resilient communities that can absorb and recover from these challenges, but also find ways to mitigate them in the first place.

Place-based approaches to water planning

While water issues are present across the world, they differ from country to country and from community to community. Uma Lele, president-elect of the International Association of Agricultural Economists, spoke to this challenge, with a particular focus on China and India. Population growth, income growth, and industrialization have increased both countries’ need for high quality water. Yet, while China and India face similar water challenges, they have taken very different approaches. When it comes to the stability of water and water quality, Lele stated that “it is important to consider what a country’s policies are and how they affect all populations.” China has a centralized governmental approach to water issues, while India delegates more power to the states. Therefore, China’s centralized nature makes it easier to pursue water usage policies such as caps and pricing, whereas it is more difficult to implement these policies successfully in India, where states that prioritize farmer productivity may resist.

Throughout the summit, a common theme was how everyone, regardless of geography, is vulnerable to water challenges. In a panel on water and energy, Newsha Ajami, director of Urban Water Policy at Stanford University’s Water in the West program, shared that by 2050, 80 percent of the world population will be living in urban areas. Given this, embedding sustainable water management practices in urban design right now is imperative to our ability to move toward a water-secure future. “Sixty percent of these cities haven’t even been built yet,” she noted, which provides the opportunity for urban planners to foreground water sustainability from the ground up.

Paula Kehoe, director of water resources with the San Francisco Public Utilities Commission, also spoke to what cities can do right now. In a panel discussion on decentralized water systems, she shared how San Francisco is using “greywater” — wastewater generated in households or office buildings from streams without fecal contamination — for non-potable applications in order to conserve water in public buildings. What began as a controversial water-saving policy born of a moment of water scarcity in 2012 is now a mandatory, ongoing city water conservation strategy.

People-focused approaches to water solutions

When making decisions about water issues, all stakeholders should be at the table. These stakeholders include industry, government, researchers, and especially people on the ground that are directly affected by water challenges. In one example, Pamela Silva Díaz ’12, an independent engineering consultant who works with MIT D-Lab, spoke to how disaster mitigation needs to adapt to all populations. This means taking into account language, literacy levels, income levels, and gender. Connecting with many different communities and forming relationships on the ground early and in an ongoing way can alleviate inequities and other challenges down the line.

In fact, the importance of community engagement in water solutions to improve water systems resilience was a common thread throughout the summit. One method discussed in a panel on data-driven decision-making for the water sector was citizen science. This practice helps water utilities and other entities with their data collection but, more importantly, it keeps the public engaged in water challenges and their long-term solutions. Luis Montestruque, vice president of digital solutions for Xylem, explained that a critical role for citizen science is to allow people to participate in monitoring contamination in areas where they live and work.

Understanding levels of contamination and its impact is vital to empowering a community to maintain water quality through generations. It additionally helps create datasets for populations that are often underrepresented by research, such as members of poor or rural communities. Charlene Ren SM ’16, SM ’17, founder of the startup MyH2O, echoed these sentiments. She has found, through the on-the-ground work of MyH2O in China, that people have a tendency to trust the water sources they have used their entire lives, even if they are contaminated. MyH2O’s citizen science efforts help people understand what is actually in their water and turn them from users into advocates.

Water inequalities extend to the United States, and these domestic challenges were discussed by Emma Robbins, director of the Navajo Water Project and one of the Water Summit’s keynote presenters. She shared a startling statistic: 2 million Americans, including many Indigenous peoples in Native nations across the U.S., do not have piped water in their homes. On top of this, for the Navajo nation in particular, much of the groundwater is contaminated with naturally occurring arsenic and uranium, as well as residual contaminants from mines on their land that have not been properly cleaned up and closed. Given this lack of clean water access, Covid-19 hit Navajo communities particularly hard, a challenge to which the Navajo Water Project has been responding since March 2020. 

The Navajo Water Project, an Indigenous-led and Indigenous-operated organization, uses first-hand experience as well as close collaboration with community organizations and Navajo nation leaders to determine effective water infrastructure solutions. In the beginning of the pandemic, due to the urgency of the situation, the Navajo Water Project distributed bottled water to residents in need. They drew heavily on their connections with community organizations and tribal leaders to ensure that their messaging about Covid-safe practices, and the water itself, reached even the most remote households. However, larger and longer-term solutions were needed, and the Navajo Water Project developed, in close consultation with their contacts across the Navajo nation, modular weather-resilient water tanks and innovations for in-home water systems. Given how resource-intensive installing home water systems is, designing solutions with community members’ ongoing input continues to be essential to ensure that they respond to the unique nature of both the site and the users’ everyday needs from the outset.

A closing lesson: Continue to center equity and community input in water innovation

“This has been hands-down the best Water Summit ever,” said MIT D-Lab lecturer Susan Murcott in her closing remarks for the event. A participant in all eight summits, she pointed out with delight that for the first time, equity, community engagement, sustainability, and resilience were central themes, as opposed to taking a back seat to technology-heavy discussions. She and many presenters throughout the summit reminded participants of the importance of keeping these concepts at the center, even after the summit ended. The reason is clear, especially for MIT students and others involved in building next-generation water policies and technologies: Centering these themes can ultimately help with the long-term sustainability of their water sector innovations.

As Emma Robbins reminded the audience: “It’s important to realize that oftentimes there might be a different solution than you have in mind.” Why is it so important to put time and energy into learning about the communities one seeks to help? They are the ones that will be using whatever technology or practice is introduced, long after these decisions are made.

Evelyn Hu delivers 2020 Dresselhaus Lecture on leveraging defects at the nanoscale

Harvard University Professor Evelyn Hu opened the 2020 Mildred S. Dresselhaus Lecture with a question: In an imperfect world, is perfection a necessary precursor for transformative advances in science and engineering?

Over the course of the next hour, for a virtual audience of nearly 300, the Tarr-Coyne Professor of Applied Physics and Electrical Engineering at the John A. Paulson School of Engineering and Applied Sciences at Harvard University argued that, at the nanoscale, there must be more creative ways to approach materials. By looking at what nature gives us in terms of electron energy levels, phonons, and a variety of processes, Hu said, scientists can re-engineer the properties of materials.

To illustrate her point, Hu described the effect of defects — vacancies or missing atoms — in otherwise perfect crystalline semiconductors. In transforming these defects, Hu demonstrated how unique properties at the nanoscale involving quantum confinement can profoundly change the electron density of states. Hu’s talk exemplified the exceptional scholarship and leadership that have defined her career, says Vladimir Bulović, the founding faculty director of MIT.nano.

“Professor Hu has developed groundbreaking techniques for designing at the nanoscale, used those techniques to produce extraordinary innovations, and extended her impact through inspirational mentorship and teaching,” says Bulović, who is also the Fariborz Maseeh Professor of Emerging Technologies. “We were honored to have her present this year’s Dresselhaus Lecture.”

Hu attended the same high school as Dresselhaus — Hunter College High School in New York City — a coincidence that “was like a good luck talisman to me,” she said. “It gives me such great pleasure to try and express my gratefulness to Millie for all the guidance and mentorship she’s given to me from the time I was a graduate student … and the inspiration that she’s given to us all.”

Making a perfect material less perfect

Inspired by Dresselhaus’s work in the early 1990s to rethink thermoelectric materials, Hu’s research group is working on new ways to engineer materials that can exhibit a combination of photon correlation and spin coherence. Her talk showcased how silicon vacancies in silicon carbide, when integrated within nanoscale optical cavities, can result in a controlled output of light. The integrated defect-cavity system can also serve as a “nanoscope” into the material, allowing scientists to learn about the interactions with surrounding defects, providing broader insights into long-term quantum coherence.

Hu displayed an image of a perfect, single crystal semiconductor, then quickly disrupted that perfection by removing the silicon atoms to create a silicon vacancy. Changing the material in this way allows her to look for opportunities, she said. “Think of vacancies not as something missing,” Hu explained, “but as atom-like entities with particular electronic and spin states embedded in a complex, wide bandgap environment. The silicon vacancy has ground and electronic states. It also has an electron spin.”

In order to obtain enough signal from this single atomic-scale defect, Hu manipulates the nanoscale to create an integrated environment for the silicon vacancies that she calls a cavity. “Think of this as a breakout room,” she says. “A place our atomic-scale silicon vacancy can be in an intimate and isolated conversation with its environment.”

The cavity recycles the photon energy as it goes back and forth between the emitter and this environment. When the silicon vacancy is placed within this cavity, the signal-to-noise is enormously better, Hu said.
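
The standard quantitative statement of that improvement (a textbook relation, not a formula quoted in the lecture) is the Purcell factor, which estimates how strongly a resonant cavity enhances an emitter’s spontaneous emission compared with free space:

```latex
% Purcell enhancement for an emitter resonant with a cavity of quality factor Q
% and mode volume V, at wavelength \lambda in a medium of refractive index n:
F_P = \frac{3}{4\pi^{2}} \left(\frac{\lambda}{n}\right)^{3} \frac{Q}{V}
```

A small mode volume and a high quality factor, exactly the regime nanoscale cavities are engineered for, translate directly into the stronger, cleaner signal Hu described.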

At the end of her lecture, Hu answered audience questions ranging from scalability of her work and mathematical models that enumerate these discoveries, to limiting factors and the use of molecules as active spin states as compared to crystalline semiconductors. Hu concluded her talk by reflecting on Dresselhaus’s legacy, not only as a great scientist but as someone who was beloved.

This word, she says, means a degree of trust, of willingness to follow, to believe, to listen to. “For a scientist and engineer to be beloved in that way, and to have trust in that way, makes the difference between effectiveness and the ability to effect change.”

Honoring Mildred S. Dresselhaus

Hu was the second speaker to deliver the Dresselhaus Lecture. Established in 2019 to honor the late MIT physics and electrical engineering professor Mildred Dresselhaus, the “Queen of Carbon Science,” the annual event features a speaker selected by a committee of MIT faculty from a list of nominations submitted by the MIT community, scholars from other institutions and research laboratories, and members of the general public. The process and lecture are coordinated by MIT.nano, an open access facility for nanoscience and nanoengineering of which Dresselhaus was a strong faculty supporter.

Muriel Médard, the Cecil H. Green Professor in MIT’s Department of Electrical Engineering and Computer Science, opened the lecture with an invitation to nominate candidates for a new honor named for Dresselhaus by the Institute of Electrical and Electronics Engineers (IEEE). Established in 2019, the IEEE Mildred Dresselhaus Medal will honor an individual for outstanding technical contributions in science and engineering of great impact to IEEE fields of interest. “We’re really looking for people who have had an impact that goes beyond the technical,” says Médard. “Do consider nominating a worthy colleague, somebody whom you feel reflects well the kind of qualities that made Millie so remarkable.”

Nominations for the 2021 Dresselhaus Lecture can be submitted on MIT.nano’s website at any time. Any significant figure in science and engineering from anywhere in the world may be nominated.

To the brain, reading computer code is not the same as reading language

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

“Understanding computer code seems to be its own thing. It’s not the same as language, and it’s not the same as math and logic,” says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

A major focus of Fedorenko’s research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain’s language network, which includes Broca’s area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

“Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn’t be any hardwired mechanisms that make us good programmers,” Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages that the researchers focused on in this study are known for their readability — Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
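
For a sense of the task, a Python stimulus of this general kind is a short snippet whose output the participant must predict before seeing it run. The example below is purely hypothetical, included only to illustrate the format, and is not drawn from the study’s materials:

# Hypothetical code-comprehension stimulus (illustrative only, not from the study).
# A participant reads the snippet and predicts what it prints.
words = ["code", "language", "math"]
result = []
for w in words:
    if len(w) > 4:          # only words longer than four letters are kept
        result.append(w.upper())
print(result)               # a correct prediction: ['LANGUAGE']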

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

“It does pretty much anything that’s cognitively challenging, that makes you think hard,” Ivanova says.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn’t identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

“It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system,” Fedorenko says. “In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn’t seem like you see any specialization yet.”

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn’t a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that’s because learning to program may draw on both language and multiple demand systems, even if — once learned — programming doesn’t rely on the language regions, the researchers say.

“There have been claims from both camps — it has to be together with math, it has to be together with language,” Ivanova says. “But it looks like computer science educators will have to develop their own approaches for teaching code most effectively.”

The research was funded by the National Science Foundation, the Department of Brain and Cognitive Sciences at MIT, and the McGovern Institute for Brain Research.

Model could help determine quarantine measures needed to reduce Covid-19’s spread

Some of the research described in this article has been published on a preprint server but has not yet been peer-reviewed by experts in the field.

As Covid-19 infections soar across the U.S., some states are tightening restrictions and reinstituting quarantine measures to slow the virus’ spread. A model developed by MIT researchers shows a direct link between the number of people who become infected and how effectively a state maintains its quarantine measures.

The researchers described their model in a paper published in Cell Patterns in November, showing that the system could recapitulate the effects that quarantine measures had on viral spread in countries around the world. In their next study, recently posted to the preprint server medRxiv, they drilled into data from the United States last spring and summer. That earlier surge in infections, they found, was strongly related to a drop in “quarantine strength” — a measure the team defines as the ability to keep infected individuals from infecting others.

The latest study focuses on last spring and early summer, when the southern and west-central United States saw a precipitous rise in infections as states in those regions reopened and relaxed quarantine measures. The researchers used their model to calculate the quarantine strength in these states, many of which were early to reopen following initial lockdowns in the spring.

If these states had not reopened so early, or had reopened but strictly enforced measures such as mask-wearing and social distancing, the model calculates that more than 40 percent of infections could have been avoided in all states that the researchers considered. In particular, the study estimates, if Texas and Florida had maintained stricter quarantine measures, more than 100,000 infections could have been avoided in each of those states.

“If you look at these numbers, simple actions on an individual level can lead to huge reductions in the number of infections and can massively influence the global statistics of this pandemic,” says lead author Raj Dandekar, a graduate student in MIT’s Department of Civil and Environmental Engineering. 

As the country battles a winter wave of new infections, and states are once again tightening restrictions, the team hopes the model can help policymakers determine the level of quarantine measures to put in place.

“What I think we have learned quantitatively is, jumping around from hyper-quarantine to no quarantine and back to hyper-quarantine definitely doesn’t work,” says co-author Christopher Rackauckas, an applied mathematics instructor at MIT. “Instead, good consistent application of policy would have been a much more effective tool.”

The new paper’s MIT co-authors also include undergraduate Emma Wang and professor of mechanical engineering George Barbastathis.

Strength learning

The team’s model is a modification of a standard SIR model, an epidemiological model that is used to predict the way a disease spreads, based on the number of people who are either “susceptible,” “infectious,” or “recovered.” Dandekar and his colleagues enhanced an SIR model with a neural network that they trained to process real Covid-19 data.

The machine-learning-enhanced model learns to identify patterns in data of infected and recovered cases, and from these data, it calculates the number of infected individuals who are not transmitting the virus to others (presumably because the infected individuals are following some sort of quarantining measures). This value is what the researchers label as “quarantine strength,” which reflects how effective a region is in quarantining an infected individual. The model can process data over time to see how a region’s quarantine strength evolves.
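
To make the structure concrete, the Python sketch below integrates a standard SIR-type model augmented with a time-varying quarantine-strength term Q(t) and a compartment for quarantined individuals. In the team’s actual model, Q(t) is parameterized by a neural network and fit to reported case data; here a hand-written stand-in function and assumed parameter values are used purely for illustration:

import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000            # population size (assumed for illustration)
beta, gamma = 0.3, 0.1   # transmission and recovery rates (assumed)

def Q(t):
    # Stand-in for the learned quarantine strength: weak at first,
    # then strengthening around day 30 as measures take hold.
    return 0.05 + 0.25 / (1.0 + np.exp(-(t - 30) / 5.0))

def qsir(t, y):
    S, I, T, R = y       # susceptible, infectious, quarantined, recovered
    dS = -beta * S * I / N
    dI = beta * S * I / N - (gamma + Q(t)) * I   # Q(t) removes infectious people from circulation
    dT = Q(t) * I
    dR = gamma * I
    return [dS, dI, dT, dR]

sol = solve_ivp(qsir, (0, 120), [N - 100, 100, 0, 0])
print("Peak of actively transmitting infections:", int(sol.y[1].max()))

The point of learning Q(t) from data, rather than fixing it by hand as in this sketch, is that the fitted curve itself becomes the readout of how a region’s quarantine strength changed over time.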

The researchers developed the model in early February and have since applied it to Covid-19 data from more than 70 countries, finding that it has accurately simulated the on-the-ground quarantine situation in European, South American, and Asian countries that were initially hard-hit by the virus.

“When we look at these countries to see when quarantines were instituted, and we compare that with results for the trained quarantine strength signal, we see a very strong correlation,” Rackauckas says. “The quarantine strength in our model changes a day or two after policies are instituted, among all countries. Those results validated the model.”

The team published these country-level results last month in Cell Patterns, and are also hosting the results at covid19ml.org, where users can click on a map of the world to see how a given country’s quarantine strength has changed over time.

What if states had delayed?

Once the researchers validated the model at the country level, they applied it to individual states in the U.S., to see not only how a state’s quarantine measures evolved over time, but how the number of infections would have changed if a state modified its quarantine strength, for instance by delaying reopening.

They focused on the south and west-central U.S., where many states were early to reopen and subsequently experienced rapid surges in infections. The team used the model to calculate the quarantine strength for Arizona, Florida, Louisiana, Nevada, Oklahoma, South Carolina, Tennessee, Texas, and Utah, all of which opened before May 15. They also modeled New York, New Jersey, and Illinois — states that delayed reopening to late May and early June.

They fed the model the number of infected and recovered individuals that was reported for each state, starting from when the 500th infection was reported in each state, up until mid-July. They also noted the day on which each state’s stay-at-home order was lifted, effectively signaling the state’s reopening.

For every state, the quarantine strength declined soon after reopening; the steepness of this decline, and the subsequent rise in infections, was strongly related to how early a state reopened. States that reopened early on, such as South Carolina and Tennessee, had a steeper drop in quarantine strength and a higher rate of daily cases.

“Instead of just saying that reopening early is bad, we are actually quantifying here how bad it was,” Dandekar says.

Meanwhile, states like New York and New Jersey, which delayed reopening or enforced quarantine measures such as mask-wearing even after reopening, kept a more or less steady quarantine strength, with no significant rise in infections. 

“Now that we can give a measure of quarantine strength that matches reality, we can say, ‘What if we kept everything constant? How much difference would the southern states have had in their outlook?’” Rackauckas says.

Next, the team reversed its model to estimate the number of infections that would have occurred if a given state maintained a steady quarantine strength even after reopening. In this scenario, more than 40 percent of infections could have been avoided in each state they modeled. In Texas and Florida, that percentage amounts to about 100,000 preventable cases for each state.

Conceivably, as the pandemic continues to ebb and surge, policymakers could use the model to calculate the quarantine strength needed to keep a state’s current infections below a certain number. They could then look back through the data for a point in time when the state exhibited that same value, and use the restrictions that were in place at that time as a guide to the policies they could put in place now.

“What is the rate of growth of the disease that we’re comfortable with, and what would be the quarantine policies that would get us there?” Rackauckas says. “Is it everyone holing up in their houses, or is it everyone allowed to go to restaurants, but once a week? That’s what the model can kind of tell us. It can give us more of a refined quantitative view of that question.”

This research was funded, in part, by the Intelligence Advanced Research Projects Activity (IARPA).

3 Questions: Phillip Sharp on the discoveries that enabled RNA vaccines for Covid-19

Some of the most promising vaccines developed to combat Covid-19 rely on messenger RNA (mRNA) — a template cells use to carry genetic instructions for producing proteins. The mRNA vaccines take advantage of this cellular process to make proteins that then trigger an immune response that targets SARS-CoV-2, the virus that causes Covid-19.

Recently developed technologies allow mRNA vaccines to be created and deployed on a large scale far more rapidly than other types of vaccines — crucial advantages in the fight against Covid-19. Within a year of the identification and sequencing of the SARS-CoV-2 virus, companies such as Pfizer and Moderna developed mRNA vaccines and ran large-scale trials in the race to have a vaccine approved by the U.S. Food and Drug Administration — a feat unheard of with traditional vaccines that use live attenuated or inactivated viruses. These vaccines appear to have greater than 90 percent efficacy in protecting against infection.

The fact that these vaccines could be rapidly developed within these last 10 months rests on more than four decades of study of mRNA. This success story begins with Institute Professor Phillip A. Sharp’s discovery of split genes and spliced RNA that took place at MIT in the 1970s — a discovery that would earn him the 1993 Nobel Prize in Physiology or Medicine.

Sharp, a professor in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research at MIT, commented on the long arc of scientific research that has led to this groundbreaking, rapid vaccine development — and looked ahead to what the future might hold for mRNA technology.

Q: Professor Sharp, take us back to the fifth floor of the MIT Center for Cancer Research in the 1970s. Were you and your colleagues thinking about vaccines when you studied viruses that caused cancer?

A: Not RNA vaccines! There was a hope in the ’70s that viruses were the cause of many cancers and could possibly be treated by conventional vaccination with inactivated virus. This is not the case, except for a few cancers, such as cervical cancer caused by HPV.

Also, not all groups at the MIT Center for Cancer Research (CCR) focused directly on cancer. We knew so little about the causes of cancer that Professor Salvador Luria, director of the CCR, recruited faculty to study cells and cancer at the most fundamental level. The center’s three focuses were viruses and genetics, cell biology, and immunology. These were great choices.

Our research was initially funded by the American Cancer Society, and we later received federal funding from the National Cancer Institute (part of the National Institutes of Health) and from the National Science Foundation — as well as support from MIT through the CCR, of course.

At Cold Spring Harbor Laboratory in collaboration with colleagues, we had mapped the parts of the adenovirus genome responsible for tumor development. While doing so, I became intrigued by the report that adenovirus RNA in the nucleus was longer than the RNA found outside the nucleus in the cytoplasm where the messenger RNA was being translated into proteins. Other scientists had also described longer-than-expected nuclear RNA from cellular genes, and this seemed to be a fundamental puzzle to solve.

Susan Berget, a postdoc in my lab, and Claire Moore, a technician who ran MIT’s electron microscopy facility for the cancer center and would later be a postdoc in my lab, were instrumental in designing the experiments that would lead to the iconic electron micrograph that was the key to unlocking the mystery of this “heterogeneous” nuclear RNA. Since those days, Sue and Claire have had successful careers as professors at Baylor College of Medicine and Tufts Medical School, respectively.

The micrograph showed loops that would later be called “introns” — unnecessary material in between the relevant segments of mRNA, or “exons.” These exons would be joined together, or spliced, to create the final, shorter message for translation into proteins in the cytoplasm of the cell.

This data was first presented at the Cancer Center fifth floor group meeting that included Bob Weinberg, David Baltimore, David Housman, and Nancy Hopkins. Their comments, particularly those of David Baltimore, were catalysts in our discovery. Our curiosity to understand this basic cellular mechanism drove us to learn more, to design the experiments that could elucidate the RNA splicing process. The collaborative environment of the MIT Cancer Center allowed us to share ideas and push each other to see problems in a new way.

Q: Your discovery of RNA splicing was a turning point, opening up new avenues that led to new applications. What did this foundation allow you to do that you couldn’t do before?

A: Our discovery in 1977 occurred just as biotechnology appeared with the objective of introducing complex human proteins as therapeutic agents, for example interferons and antibodies. Engineering genes to express these proteins in industrial tanks was dependent on this discovery of gene structure. The same is true of the RNA vaccines for Covid-19: By harnessing new technology for synthesis of RNA, researchers have developed vaccines whose chemical structure mimics that of cytoplasmic mRNA.

In the early 1980s, following isolation of many human mutant disease genes, we recognized that about one-fifth of these were defective for accurate RNA splicing. Further, we also found that different isoforms of mRNAs encoding different proteins can be generated from a single gene. This is “alternative RNA splicing” and may explain the puzzle that humans have fewer genes — 21,000 to 23,000 — than many less complex organisms, but these genes are expressed in more complex protein isoforms. This is just speculation, but there are so many things about biology yet to be discovered.

I liken RNA splicing to discovering the Rosetta Stone. We understood how the same letters of the alphabet could be written and rewritten to form new words, new meaning, and new languages. The new “language” of mRNA vaccines can be developed in a laboratory using a DNA template and readily available materials. Knowing the genetic code of SARS-CoV-2 is the first step in generating the mRNA vaccine. Building on that fundamental understanding of mRNA, it took decades more work and ingenuity to figure out how to deliver vaccines effectively into the body and evade the cellular mechanisms, perfected over hundreds of millions of years of evolution, that destroy foreign genetic material.

Q: Looking ahead 40 more years, where do you think mRNA technology might be?

A: In the future, mRNA vaccine technology may allow for one vaccine to target multiple diseases. We could also create personalized vaccines based on individuals’ genomes.

Messenger RNA vaccines have several benefits compared to other types of vaccines, including the use of noninfectious elements and shorter manufacturing times. The process can be scaled up, making vaccine development faster than traditional methods. RNA vaccines can also be moved rapidly into clinical trials, which will be critical for responding to the next pandemic.

It is impossible to predict the future of RNA therapies, such as the new vaccines, but there are signs that new advances could happen very quickly. A few years ago, the first RNA-based therapy was approved for the treatment of a lethal genetic disease. That treatment was designed based on the discovery of RNA interference. Messenger RNA-based therapies will also likely be used to treat genetic diseases, vaccinate against cancer, and generate transplantable organs. It is another tool at the forefront of modern medical care.

But keep in mind that all mRNAs in human cells are encoded by only 2 percent of the total genome sequence. Most of the other 98 percent is transcribed into cellular RNAs whose activities remain to be discovered. There could be many future RNA-based therapies.
