Knight Science Journalism Program at MIT partners with STAT on a new health journalism fellowship

The following is an adaptation of a joint announcement from the Knight Science Journalism Program at MIT, STAT, and the Chan Zuckerberg Initiative.

The Knight Science Journalism Program at MIT and STAT, the award-winning Boston-based health, science, and medicine publication, have teamed up to launch the Sharon Begley-STAT Science Reporting Fellowship.

The fellowship’s goal is to better diversify the ranks of science and health journalists and to foster broader and more inclusive coverage of science. The Chan Zuckerberg Initiative (CZI) is providing $225,000 to support the first two years of the program, which is named in honor of Sharon Begley, an acclaimed science writer for STAT who died in January from complications of lung cancer.

The nine-month fellowship is intended for early-career journalists from racial and ethnic groups underrepresented in the profession and will prepare them for a successful career in science journalism. It will combine a paid reporting apprenticeship at STAT with an educational component provided through MIT’s prestigious Knight Science Journalism (KSJ) Program. The fellowship is now accepting applications for the inaugural Begley Fellow to start in September 2021, with plans to select two additional fellows in 2022.

“KSJ is honored to be a partner in this pioneering fellowship that honors the exceptional work of Sharon Begley and offers a new opportunity to support outstanding and inclusive science and health journalism,” says Deborah Blum, director of the KSJ Program at MIT. “We appreciate the commitment of the Chan Zuckerberg Initiative to improving racial and ethnic diversity in our community, which we believe is essential to smarter and more inclusive coverage of scientific research. And we are delighted to be working with STAT, one of the best health news publications available today, in assuring the success of this project.”

Science journalism reflects the structural and systemic inequities in our society, with Black, Hispanic/Latinx, and Indigenous reporters often not getting the same opportunities as white applicants to gain relevant experience. Roughly 80 percent of science journalists are white, according to the most recent membership data from two of the leading professional organizations, with 6 percent identifying as Asian or Pacific Islander, 1-4 percent as Black, 3-4 percent as Hispanic or Latinx, and 1 percent as Native American.

“The best way to make our profession and workplace more diverse and inclusive is for news organizations to grow their own talent — and that’s exactly what Sharon aimed to do,” Gideon Gil, a STAT managing editor, says in explaining why STAT decided to create the fellowship in Begley’s name. “Sharon relished mentoring younger science journalists, and her professional progeny work at news organizations across the U.S. So we could think of no more fitting way to honor her.”

The funding from CZI will enable KSJ and STAT to offer Begley Fellows a stipend totaling $75,000 for each fellowship term. KSJ will also provide MIT-based health insurance for each fellow. In addition, STAT plans to raise additional funding to cover fellows’ reporting expenses and the program’s administrative costs, and to keep the fellowship operating in future years.

Fellows will work at STAT’s Boston, Massachusetts, office alongside its team of experienced science and health reporters and editors. The fellows will report and write articles, with additional opportunities for building connections, mentorship, and learning across publication teams. KSJ and MIT are providing support for the university-based part of the program, which offers opportunities including training seminars and other fellowship community events, university library access, and the chance to audit classes at MIT and Harvard University. The Sharon Begley-STAT Science Reporting Fellowship aims to serve as a model for expanding racial diversity in science journalism that could be replicated at other publications.

Begley, STAT’s senior science writer, was long one of the nation’s finest science journalists. She was known as a generous supporter of younger journalists and was especially eager to help other women advance in a profession that, when she began as a researcher at Newsweek in 1977, was unwelcoming. She later worked at the Wall Street Journal and Reuters, before joining STAT at its founding in 2015.

“Sharon loved working at STAT and did some of her best reporting there, and mentoring younger journalists was one of her talents and priorities,” says her husband, Ned Groth. “So, for there to be a Sharon Begley Fellow at STAT, honing their journalistic skills in association with and mentored by colleagues who were in turn mentored by Sharon, seems like a perfect tribute to her.”

Her legacy includes her powerful advocacy for people of color, exemplified by a series she wrote in 2016 and 2017 about the neglect by scientists, government funders, drug makers, and hospitals of patients with sickle cell disease, who, in the United States, are predominantly Black.

“Supporting reporters from racial and ethnic groups underrepresented in journalism will bring important perspectives to the newsroom and surface new narratives and stories relevant to more communities, which will not only make biomedical reporting better and more accurate, but also help encourage greater public trust in science among historically marginalized groups,” says CZI Science Communications Manager Leah Duran. “We’re proud to support STAT and MIT to stand up this exciting program to cultivate talent and expand representation in science journalism.” In 2019, CZI supported the University of California at Santa Cruz to increase diversity, inclusion, and representation in its science journalism program.

The Sharon Begley-STAT Science Reporting Fellowship is accepting applications through June 30 at 5 p.m. For more information and application instructions, please visit KSJ’s online portal. For inquiries, technical assistance, or other questions pertaining to this application, please contact Gideon Gil or Deborah Blum.

Engineers create a programmable fiber

MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.

Yoel Fink, who is a professor of materials science and electrical engineering, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.

Or, you might someday store your wedding music in the gown you wore on the big day — more on that later.

Fink and his colleagues describe the features of the digital fiber today in Nature Communications. Until now, electronic fibers have been analog — carrying a continuous electrical signal — rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.

“This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally,” Fink says.

MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master’s student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.

Memory and more

The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.

The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, “When you put it into a shirt, you can’t feel it at all. You wouldn’t know it was there.”

Making a digital fiber “opens up different areas of opportunities and actually solves some of the problems of functional fibers,” he says.

For instance, it offers a way to control individual elements within a fiber, from one point at the fiber’s end. “You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers,” Loke explains. The research team devised a digital addressing method that allows them to “switch on” the functionality of one element without turning on all the elements.
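The announcement does not spell out the addressing protocol, but Loke's "corridor and rooms" analogy can be sketched in a few lines of Python. Everything here (class names, message format) is hypothetical, not taken from the paper:

```python
# Toy sketch of per-element digital addressing along a shared line.
# Every element sees every message, but only the one whose address
# matches acts on it -- like "room numbers" along a corridor.

class FiberElement:
    def __init__(self, address):
        self.address = address
        self.active = False

    def receive(self, message):
        # Non-matching elements simply ignore the payload.
        if message["address"] == self.address:
            self.active = (message["command"] == "on")

def broadcast(elements, address, command):
    """Send one message down the shared bus to all elements."""
    message = {"address": address, "command": command}
    for element in elements:
        element.receive(message)

fiber = [FiberElement(addr) for addr in range(8)]
broadcast(fiber, address=3, command="on")   # switch on element 3 only
print([e.active for e in fiber])            # only element 3 is on
```

The design choice this illustrates is that a single shared connection suffices: selectivity comes from the address check inside each element, not from separate wiring to each one.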

A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.

When they were dreaming up “crazy ideas” for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber’s creation into its components.

Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian. Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.

On-body artificial intelligence

The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
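The paper's network architecture and training details are not given in this article, but the general idea of classifying activity from windows of temperature readings can be sketched with synthetic data and a simple logistic regression. Everything below is illustrative, not the fiber's actual stored network:

```python
# Toy sketch of activity classification from surface-temperature
# windows: synthetic data plus a plain logistic regression stand in
# for the fiber's stored neural network.
import math
import random

random.seed(0)

def make_window(active):
    # Hypothetical assumption: activity raises mean skin temperature.
    base = 34.0 + (0.8 if active else 0.0)
    return [base + random.gauss(0, 0.3) for _ in range(10)]

data = [(make_window(label), label) for label in [0, 1] * 100]

def features(window):
    # Bias term, mean shift, and spread of the window.
    return [1.0, sum(window) / len(window) - 34.0, max(window) - min(window)]

w = [0.0, 0.0, 0.0]
for _ in range(200):                      # simple stochastic gradient descent
    for x, y in data:
        f = features(x)
        p = 1 / (1 + math.exp(-sum(wi * fi for wi, fi in zip(w, f))))
        w = [wi + 0.1 * (y - p) * fi for wi, fi in zip(w, f)]

def predict(x):
    return 1 / (1 + math.exp(-sum(wi * fi for wi, fi in zip(w, features(x))))) > 0.5

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(accuracy)
```

The real system is far richer (a 1,650-connection network stored in the fiber itself, trained on 270 minutes of real data), but the pipeline shape is the same: windows of sensor readings in, activity label out.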

Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these “lush data” are perfect for machine learning algorithms, Loke says.

“This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before,” he says.

With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.

The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.

“When we can do that, we can call it a fiber computer,” Loke says.

This research was supported by the U.S. Army Institute for Soldier Nanotechnologies, the National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency.

A better way to introduce digital tech in the workplace

When bringing technologies into the workplace, it pays to be realistic. Often, for instance, bringing new digital technology into an organization does not radically improve a firm’s operations. Despite high-level planning, a more frequent result is the messy process of frontline employees figuring out how they can get tech tools to help them to some degree.

That task can easily fall on overburdened workers who have to grapple with getting things done, but don’t always have much voice in an organization. So isn’t there a way to think systematically about implementing digital technology in the workplace?

MIT Professor Kate Kellogg thinks there is, and calls it “experimentalist governance of digital technology”: Let different parts of an organization experiment with the technology — and then centrally remove roadblocks to adopt the best practices that emerge, firm-wide.

“If you want to get value out of new digital technology, you need to allow local teams to adapt the technology to their setting,” says Kellogg, the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management. “You also need to form a central group that’s tracking all these local experiments, and revising processes in response to problems and possibilities. If you just let everyone do everything locally, you’re going to see resistance to the technology, particularly among frontline employees.”

Kellogg’s perspective comes after she conducted an 18-month close ethnographic study of a teaching hospital, examining many facets of its daily workings — including things like the integration of technology into everyday medical practices.

Some of the insights from that organizational research now appear in a paper Kellogg has written, “Local Adaptation Without Work Intensification: Experimentalist Governance of Digital Technology for Mutually Beneficial Role Reconfiguration in Organizations,” recently published online in the journal Organization Science.

In the hospital

Kellogg’s on-the-ground, daily, ethnographic research took place in the primary care unit of an academic hospital in the northeastern U.S., where there were six medical teams, each consisting of seven to nine doctors, three or four nurses and medical assistants, and four or five receptionists.

The primary care group was transitioning to using new digital technology available in the electronic health system to provide clinical decision support, by indicating when patients needed vaccinations, diabetes tests, and pap smears. Previously, certain actions might only have been called for after visits with primary-care doctors. The software made those things part of the preclinical patient routine, as needed.

In practice, however, implementing the digital technology led to significantly more work for the medical assistants, who were in charge of acting on the alerts and communicating with patients, and who were often assigned even more background work by doctors. When the recommendation provided by the technology was not aligned with a doctor’s individual judgment about when a particular action was needed, the medical assistants would be tasked with finding out more about a patient’s medical history.

“I was surprised to find that it wasn’t working well,” Kellogg says.

She adds: “The promise of these technologies is that they’re going to automate a lot of practices and processes, but they don’t do that perfectly. There often need to be people who fill the gaps between what the technology can do and what’s really required, and oftentimes it’s less-skilled workers who are asked to do that.”

As such, Kellogg observed, the challenges of using the software were not just technological or logistical, but organizational. The primary-care unit was willing to let its different groups experiment with the software, but the people most affected by it were least-well positioned to demand changes in the hospital’s routines.

“It sounds great to have all the local teams doing experimentation, but in practice … a lot of people are asking frontline workers to do a lot of things, and they [the workers] don’t have any way to push back on that without being seen as complainers,” Kellogg notes.

Three types of problems

All told, Kellogg identified three types of problems regarding digital technology implementation. The first, which she calls “participation problems,” are when lower-ranking employees do not feel comfortable speaking up about workplace issues. The second, “threshold problems,” involve getting enough people to agree to use the solutions discovered through local experiments for the solutions to become beneficial. The third are “free rider problems,” when, say, doctors benefit from medical assistants doing a wider range of work tasks, but then don’t follow the proposed guidelines required to free up medical assistant time. 

So, while the digital technology provided some advantages, the hospital still had to take another step in order to use it effectively: form a centralized working group to take advantage of solutions identified in local experiments, while balancing the needs of doctors with realistic expectations for medical assistants.

“What I found was this local adaptation of digital technology needed to be complemented by a central governing body,” Kellogg says. “The central group could do things like introduce technical training and a new performance evaluation system for medical assistants, and quickly spread locally developed technology solutions, such as reprogrammed code with revised decision support rules.”

Placing a representative of the hospital’s medical assistants on this kind of governing body, for example, means “the lower-level medical assistant can speak on behalf of their counterparts, rather than [being perceived as] a resister, now [they’re] being solicited for a valued opinion of what all their colleagues are struggling with,” Kellogg notes.

Another tactic: Rather than demand all doctors follow the central group’s recommendations, the group obtained “provisional commitments” from the doctors — willingness to try the best practices — and found that to be a more effective way of bringing everyone on board.

“What experimentalist governance is, you allow for all the local experimentation, you come up with solutions, but then you have a central body composed of people from different levels, and you solve participation problems and leverage opportunities that arise during local adaptation,” Kellogg says.

A bigger picture

Kellogg has long done much of her research through extensive ethnographic work in medical settings. Her 2011 book “Challenging Operations,” for instance, used on-the-ground research to study the controversy of the hours demanded of medical residents. This new paper, for its part, is one product of over 400 sessions Kellogg spent following medical workers around inside the primary care unit.

“The holy grail of ethnography is finding a surprise,” says Kellogg. It also requires, she observes, “a diehard focus on the empirical. Let’s get past abstractions and dig into a few concrete examples to really understand the more generalizable challenges and the best practices for addressing them. I was able to learn things that you wouldn’t be able to learn by conducting a survey.”

For all the public discussion about technology and jobs, then, there is no substitute for a granular understanding of how technology really affects workers. Kellogg says she hopes the concept of experimentalist governance could be used widely to help harness promising-but-imperfect digital technology adoption. It could also apply, she suggests, to banks, law firms, and all kinds of businesses using various forms of enterprise software to streamline processes such as human resources management, customer support, and email marketing.

“The bigger picture is, when we engage in digital transformation, we want to encourage experimentation, but we also need some kind of central governance,” Kellogg says. “It’s a way to solve problems that are being experienced locally and make sure that successful experiments can be diffused. … A lot of people talk about digital technology as being either good or bad. But neither the technology itself nor the type of work being done dictates its impact. What I’m showing is that organizations need an experimentalist governance process in place to make digital technology beneficial for both managers and workers.”

The potential of artificial intelligence to bring equity in health care

Health care is at a junction, a point where artificial intelligence tools are being introduced to all areas of the space. This introduction comes with great expectations: AI has the potential to greatly improve existing technologies, sharpen personalized medicine, and, with an influx of big data, benefit historically underserved populations.

But in order to do those things, the health care community must ensure that AI tools are trustworthy, and that they don’t end up perpetuating biases that exist in the current system. Researchers at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), an initiative to support AI research in health care, call for creating a robust infrastructure that can aid scientists and clinicians in pursuing this mission.

Fair and equitable AI for health care

The Jameel Clinic recently hosted the AI for Health Care Equity Conference to assess current state-of-the-art work in this space, including new machine learning techniques that support fairness, personalization, and inclusiveness; identify key areas of impact in health care delivery; and discuss regulatory and policy implications.

Nearly 1,400 people virtually attended the conference to hear from thought leaders in academia, industry, and government who are working to improve health care equity and further understand the technical challenges in this space and paths forward.

During the event, Regina Barzilay, the School of Engineering Distinguished Professor of AI and Health and the AI faculty lead for Jameel Clinic, and Bilal Mateen, clinical technology lead at the Wellcome Trust, announced the Wellcome Fund grant conferred to Jameel Clinic to create a community platform supporting equitable AI tools in health care.

The project’s ultimate goal is not to solve an academic question or reach a specific research benchmark, but to actually improve the lives of patients worldwide. Researchers at Jameel Clinic insist that AI tools should not be designed with a single population in mind, but instead be crafted to be iterative and inclusive, to serve any community or subpopulation. To do this, a given AI tool needs to be studied and validated across many populations, usually in multiple cities and countries. Also on the project wish list is to create open access for the scientific community at large, while honoring patient privacy, to democratize the effort.

“What became increasingly evident to us as a funder is that the nature of science has fundamentally changed over the last few years, and is substantially more computational by design than it ever was previously,” says Mateen.

The clinical perspective

This call to action is a response to health care in 2020. At the conference, Collin Stultz, a professor of electrical engineering and computer science and a cardiologist at Massachusetts General Hospital, spoke on how health care providers typically prescribe treatments and why these treatments are often incorrect.

In simplistic terms, a doctor collects information on their patient, then uses that information to create a treatment plan. “The decisions providers make can improve the quality of patients’ lives or make them live longer, but this does not happen in a vacuum,” says Stultz.

Instead, he says, a complex web of forces can influence how a patient receives treatment. These forces range from the hyper-specific to the universal: factors unique to an individual patient, provider biases such as knowledge gleaned from flawed clinical trials, and broad structural problems like uneven access to care.

Datasets and algorithms

A central question of the conference revolved around how race is represented in datasets, since it’s a variable that can be fluid, self-reported, and defined in non-specific terms.

“The inequities we’re trying to address are large, striking, and persistent,” says Sharrelle Barber, an assistant professor of epidemiology and biostatistics at Drexel University. “We have to think about what that variable really is. Really, it’s a marker of structural racism,” says Barber. “It’s not biological, it’s not genetic. We’ve been saying that over and over again.”

Some aspects of health are purely determined by biology, such as hereditary conditions like cystic fibrosis, but the majority of conditions are not straightforward. According to Massachusetts General Hospital oncologist T. Salewa Oseni, when it comes to patient health and outcomes, research tends to assume biological factors have outsized influence, but socioeconomic factors should be considered just as seriously.

Even as machine learning researchers detect preexisting biases in the health care system, they must also address weaknesses in algorithms themselves, as highlighted by a series of speakers at the conference. They must grapple with important questions that arise in all stages of development, from the initial framing of what the technology is trying to solve to overseeing deployment in the real world.

Irene Chen, a PhD student at MIT studying machine learning, examines all steps of the development pipeline through the lens of ethics. As a first-year doctoral student, Chen was alarmed to find an “out-of-the-box” algorithm, which happened to project patient mortality, churning out significantly different predictions based on race. This kind of algorithm can have real impacts, too; it guides how hospitals allocate resources to patients.

Chen set about understanding why this algorithm produced such uneven results. In later work, she defined three specific sources of bias that could be disentangled in any model. The first is “bias,” but in a statistical sense — maybe the model is not a good fit for the research question. The second is variance, which is controlled by sample size. The last source is noise, which has nothing to do with tweaking the model or increasing the sample size. Instead, it indicates that something has happened during the data collection process, a step well before model development. Many systemic inequities, such as limited health insurance or a historic mistrust of medicine in certain groups, get “rolled up” into noise.

“Once you identify which component it is, you can propose a fix,” says Chen.
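This three-way decomposition of error (statistical bias, variance, noise) is a standard one, and a small simulation makes the distinction concrete. The data and model below are invented for illustration: a misspecified straight-line model is fit to a curved ground truth with noisy labels.

```python
# Toy sketch of the three error sources: bias (model misfit),
# variance (finite sample size), and noise (data collection).
import random

random.seed(0)

def truth(x):
    return x * x                          # the real relationship

def sample_training_set(n=30, noise_sd=0.5):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [truth(x) + random.gauss(0, noise_sd) for x in xs]
    return xs, ys

def fit_line(xs, ys):
    # Least-squares line y = a + b*x: a misspecified model here.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

x0 = 0.9                                  # evaluate error at one point
preds = []
for _ in range(2000):                     # many resampled training sets
    a, b = fit_line(*sample_training_set())
    preds.append(a + b * x0)

mean_pred = sum(preds) / len(preds)
bias_sq = (mean_pred - truth(x0)) ** 2    # shrinks only by changing the model
variance = sum((p - mean_pred) ** 2 for p in preds) / len(preds)  # shrinks with more data
noise = 0.5 ** 2                          # fixed at data collection time
print(bias_sq, variance, noise)
```

Here the dominant term is the squared bias: no amount of extra data fixes a model that cannot represent the curve, just as no amount of model tweaking fixes noise baked in during data collection. That is the diagnostic value of knowing which component you are looking at.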

Marzyeh Ghassemi, an assistant professor at the University of Toronto and an incoming professor at MIT, has studied the trade-off between anonymizing highly personal health data and ensuring that all patients are fairly represented. In cases like differential privacy, a machine-learning tool that guarantees the same level of privacy for every data point, individuals who are too “unique” in their cohort started to lose predictive influence in the model. In health data, where trials often underrepresent certain populations, “minorities are the ones that look unique,” says Ghassemi.

“We need to create more data, it needs to be diverse data,” she says. “These robust, private, fair, high-quality algorithms we’re trying to train require large-scale data sets for research use.”

Beyond Jameel Clinic, other organizations are recognizing the power of harnessing diverse data to create more equitable health care. Anthony Philippakis, chief data officer at the Broad Institute of MIT and Harvard, presented on the All of Us research program, an unprecedented project from the National Institutes of Health that aims to bridge the gap for historically under-recognized populations by collecting observational and longitudinal health data on over 1 million Americans. The database is meant to uncover how diseases present across different sub-populations.

One of the largest questions of the conference, and of AI in general, revolves around policy. Kadija Ferryman, a cultural anthropologist and bioethicist at New York University, points out that AI regulation is in its infancy, which can be a good thing. “There’s a lot of opportunities for policy to be created with these ideas around fairness and justice, as opposed to having policies that have been developed, and then working to try to undo some of the policy regulations,” says Ferryman.

Even before policy comes into play, there are certain best practices for developers to keep in mind. Najat Khan, chief data science officer at Janssen R&D, encourages researchers to be “extremely systematic” when choosing datasets. Even large, common datasets contain inherent bias.

Even more fundamental is opening the door to a diverse group of future researchers.

“We have to ensure that we are developing folks, investing in them, and having them work on really important problems that they care about,” says Khan. “You’ll see a fundamental shift in the talent that we have.”

The AI for Health Care Equity Conference was co-organized by MIT’s Jameel Clinic; Department of Electrical Engineering and Computer Science; Institute for Data, Systems, and Society; Institute for Medical Engineering and Science; and the MIT Schwarzman College of Computing.

Artificial intelligence system could help counter the spread of disinformation

Disinformation campaigns are not new — think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.

Steven Smith, a staff member from MIT Lincoln Laboratory’s Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives as well as those individuals who are spreading the narratives within social media networks. Earlier this year, the team published a paper on their work in the Proceedings of the National Academy of Sciences, and the team received an R&D 100 award last fall.

The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.

“We were kind of scratching our heads,” Smith says of the data. So the team applied for internal funding through the laboratory’s Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.

In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.

What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.

“If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts,” says Edward Kao, who is another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. “What we found is that in many cases this is not sufficient. It doesn’t actually tell you the impact of the accounts on the social network.”

As part of Kao’s PhD work in the laboratory’s Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach — now used in RIO — to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
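Kao's published method is a causal statistical model, but the gap between raw activity counts and network impact can be illustrated with a toy retweet graph and a PageRank-style score. The accounts and edges below are invented, and this is not the RIO team's actual influence estimator:

```python
# Toy sketch of "activity counts vs. network impact": account C posts
# the most, but a PageRank-style score over who-amplifies-whom shows
# that account A actually drives the network.

# retweets[u] = accounts that u retweets (u passes influence to them)
retweets = {
    "A": [],                  # A posts original narratives
    "B": ["A"], "C": ["A"],   # B and C amplify A...
    "D": ["B"], "E": ["B"], "F": ["C"],   # ...and are amplified in turn
}
post_counts = {"A": 2, "B": 5, "C": 9, "D": 1, "E": 1, "F": 1}

accounts = list(retweets)
score = {u: 1.0 for u in accounts}
for _ in range(50):                       # simple power iteration
    new = {u: 0.15 for u in accounts}     # damping / base score
    for u in accounts:
        for t in retweets[u]:
            new[t] += 0.85 * score[u] / len(retweets[u])
    score = new

most_active = max(post_counts, key=post_counts.get)
most_influential = max(score, key=score.get)
print(most_active, most_influential)      # two different accounts
```

Even in this six-node graph, the most active poster and the most structurally influential account diverge, which is the shortcoming of activity counts that Kao describes; RIO goes further by estimating how much an account causally changes the network's behavior.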

Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO to classify these accounts by looking into data related to behaviors such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.

Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign.

The team envisions RIO being used by both government and industry as well as beyond social media and in the realm of traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.

“Defending against disinformation is not only a matter of national security, but also about protecting democracy,” says Kao.

3 Questions: Caroline White-Nockleby on the socio-environmental complexities of renewable energy

Caroline White-Nockleby is a PhD student in MIT’s doctoral program in History, Anthropology, and Science, Technology, and Society (HASTS), which is co-sponsored by the History and Anthropology sections, and the Program in Science, Technology and Society (STS). White-Nockleby’s research centers on the shifting supply chains of renewable energy infrastructures. In particular, she is interested in the interfaces between policymaking, social dynamics, and tech innovations in the sourcing, manufacture, and implementation of energy storage technologies. She received a BA in geosciences and American studies from Williams College and an MPhil in social anthropology from the University of Cambridge, England. MIT SHASS Communications spoke with her for the series Solving Climate: Humanistic Perspectives from MIT about the perspectives her field and research bring to addressing the climate crisis. 
 
Q: How has research from the HASTS doctoral program shaped your understanding of global climate change and its myriad ecological and social impacts?

A: MIT HASTS alum Candis Callison [PhD ’10], now an anthropologist and professor of journalism, wrote her first book, “How Climate Change Comes to Matter,” about the different discursive frameworks — what she terms “vernaculars” — through which scientists, journalists, Indigenous communities, sustainable investment firms, and evangelical Christian environmental organizations understand climate change.

Through ethnographic research, Callison shows that although these understandings were grounded in a shared set of facts, each drew from different cultural and ethical frameworks. These variations could silo conversations, even as they illustrated the pluralities of the climate crisis by highlighting different challenges and compelling different actions.

HASTS faculty member and environmental historian Megan Black, an associate professor in the MIT History Section, is currently researching the history of the first Landsat satellites launched in the 1970s. The technical capacities of Landsat’s visualization mechanisms were influenced by the political context of the Cold War. Black’s investigation has revealed, among other findings, that Landsat’s imaging devices were particularly well-suited to surfacing geological features and thus to minerals exploration, which was a key application of Landsat data in its inaugural decade. The historical context of the satellite’s initial design has thus shaped — and limited — the information accessible to the many investigations that today use early Landsat imagery as a vital indicator of decadal-scale environmental changes. 

Climate change is not only a scientific and technological matter, but also a social, political, and historical one. It stems from centuries of uneven geographies of energy extraction and distribution; related historical and geographical processes today distribute climate vulnerabilities unevenly across places and people.

The dimensions of today’s promising interventions have, in turn, been configured by past funding and research agendas — and the many technologies employed have a wide variety of implications for equity, ethics, and justice. The parameters of public opinion and policy debate on the nature and risks of climate change, as well as its conceivable solutions, are similarly shaped by socio-historical contexts.

MIT’s Program in History, Anthropology, Science, Technology and Society (HASTS) supports research that attends to the social and historical facets of climate change. Just as importantly, the HASTS program equips scholars with the tools to develop nuanced understandings of the scientific and technological mechanisms of its causes, impacts, and proposed solutions. Such technical and social attunement makes the program well-situated — perhaps particularly so — to unravel the myriad social and ecological dimensions of the climate crisis.

Q: Technology offers hope for addressing climate change, and it also presents challenges. The renewable energy industry, for example, relies on the mining of lithium and other metals — a process that is itself damaging to the environment. What has your research revealed about the trade-offs humanity is facing in its efforts to combat global climate change, and, how would you suggest we begin to grapple with such trade-offs?

A: Renewable energy can sometimes be positioned as immaterial and inherently redistributive. In some sense these characterizations arise from physical qualities: the sun and wind don’t require extraction, won’t run out, and are distributed across space.

Yet renewable energy must be collected, stored, and transported; it requires financing, metals extraction, and the processing of decommissioned materials. Energy access, mining, and waste deposition are material, geographically situated dynamics. Not everyone stands to benefit equally from renewable energy’s financial and environmental potentials, and not everyone will be equally exposed to its socio-environmental impacts.

The distribution of burdens is in some cases already mapping onto existing inequities in power and privilege, disproportionately impacting BIPOC [Black, Indigenous, and people of color] and low-income individuals, as well as communities in the Global South — often in locales also on the front lines of climate change or other forms of environmental injustice.

None of these challenges should stall renewable energy implementation; renewables are an absolutely crucial part of climate mitigation and can also increase climate resilience and reduce environmental contamination, among other co-benefits.

Moreover, neither the parameters of these challenges nor the potential interventions are clear-cut. Minerals extraction is key for many local economies.

Different metals also have distinct environmental and social footprints. Cobalt mining, which takes place largely in the Democratic Republic of the Congo under environmentally and economically precarious conditions, poses different socio-ecological challenges than copper extraction, which takes place around the world, primarily at large scales via increasingly remote methods. Lithium, meanwhile, can be found in salt flats, igneous rocks, geothermal fluids, and clays, each of which requires different mining techniques.

Minimizing the localized burdens of renewable energy implementation will be complex. Here at MIT, researchers are working on technical approaches to develop less-intensive forms of mining, novel battery chemistries, robust energy storage technologies, recycling mechanisms, and policies to extend energy access. Just as important, I think, is understanding the historical processes through which the benefits and burdens of different energies have been distributed — and ensuring that the ethical frameworks by which current and future projects might be mapped and evaluated are sufficiently nuanced.

I’m still in the planning phase of my own research, but I hope it will help surface, and offer tools with which to think through, some of these socio-environmental complexities.

Q: In confronting an issue as formidable as climate change, what gives you hope?
   
A: In college I did an interview project to learn about collaborations between student environmental groups and a local church to address climate change. Toward the end of each interview, I found myself coming back to the same question: What gives you energy in your work on climate change? What keeps you going?

The question wasn’t strictly necessary for my project; I was asking, mostly, for myself. Climate change can be truly overwhelming, in part because it so dramatically dwarfs the scope, in space and in time, of a single human life. It is also complex — intertwined with so many different ways of knowing the world.

My interviewees gave different answers. Some told me they were careful to mentally segment the issue so as to keep “climate change,” as a paralyzing totality, from sapping a sense of purpose from their daily research or advocacy endeavors. Others I spoke with took the opposite approach, conceptually linking their own efforts — which could feel insufficiently quotidian — to a sense of the broader stakes. But almost everyone I talked to highlighted the importance of being part of a community — of engaging in and through collaborative efforts.

That’s what gives me hope as well: people working together to address climate change in ways that attend to both its scientific and its social complexities. Intersections between climate change and social justice like the Sunrise Movement or the Climate Justice Alliance give me hope.

Climate-related collaborations are also happening all across MIT; I find the initiatives that have emerged from the Climate Grand Challenges process particularly inspirational. In STS, individuals such as HASTS alum Sara Wylie [PhD ’11], who has researched the impacts of hydraulic fracturing, have built deep relationships with the communities they work within, leveraging their research to support relevant climate justice initiatives.

For my part, I’ve been energized by my involvement in a project led by MIT MLK Scholar Luis G. Murillo [former minister of environment and sustainable development in Colombia] that convenes policymakers, community advocates, and researchers to advance initiatives that foment racial justice, conservation, climate mitigation, and peace.

Prepared by MIT SHASS Communications
Series editor and designer: Emily Hiestand
Co-editor: Kathryn O’Neill

MIT students and alumni “hack” Hong Kong Kowloon East

The year 2020 was undoubtedly a challenge for everyone. The pandemic had vast negative impacts on the world at a physical, psychological, and emotional level: mobility was restricted, socialization was limited, and economic and industrial progress was put on hold. Many industries and small independent businesses suffered, and academia and research also experienced many difficulties. The education of future generations transitioned online, but the shift limited in-person learning experiences and social growth.

On the collegiate level, first-year students were barred from anticipated campus learning and research, while seniors faced tremendous anxiety over the lack of face-to-face consultations and the uncertainty of their graduation. To meet the increasing desire to reconnect, the MIT Hong Kong Innovation Node took on a new role: to expand the MIT Global Classroom initiative and breach the boundaries of learning via the collaboration of colleagues, students, and alumni across the globe.

Since its founding in 2016, the MIT Hong Kong Innovation Node has focused on cultivating the innovative and entrepreneurial capabilities of MIT students and Hong Kong university students. The collaboration with MIT alumni and students has contributed to the establishment of numerous landing programs around the globe. This accomplishment is best demonstrated by the success of the MIT Entrepreneurship and Maker Skills Integrator (MEMSI) and the MIT Entrepreneurship and FinTech Integrator (MEFTI).

In 2020, the node executed the Kowloon East Inclusive Innovation and Growth Project, which carried out smart city activities intended to boost inclusion, innovation, and growth for Hong Kong communities. The exchange of ideas among MIT students, faculty, researchers, and alumni, in collaboration with the rest of the Hong Kong community, revealed opportunities beyond Kowloon East in the neighboring cities of the Pearl River Delta region, including the creation of internships and public engagement programs.

“Hacking” Kowloon East: activating technology for urban life

The MIT Hong Kong Innovation Node welcomed 2021 with an Independent Activities Period virtual site visit to Hong Kong in collaboration with the Department of Urban Studies and Planning. The two-week “hacking” series offered by Associate Professor Brent Ryan, head of the City Design and Development Group, altered the concept of smart cities by exploring how the current initiative in Kowloon East can be better leveraged by emerging digital technologies to connect residents to each other and enhance economic opportunities.

As a paradigm of high-density urbanism and the center of a wide variety of global and local challenges, Hong Kong provides an opportunity to rethink how physical spaces can be integrated with digital technologies for better synergy. “Hacking” series participants took advantage of this fact. Equal numbers of undergraduate student ambassadors were recruited from local universities and paired with MIT students and Hong Kong-MIT graduate students based in Boston. Some of the project ideas focused on revitalizing retail, promoting health care and the environment, and establishing an overall human-centered urban design.

“Although I couldn’t travel physically, special lectures from the domain experts and the student pairing system with HK student ambassadors helped me discover a specific problem I wanted to tackle,” says Younjae Oh, a second-year student of the master of science in architecture studies (design) program at MIT. She went on to state that the series “inspired creativity within the team and led us to make more insightful, considered decisions upon cultural awareness. What I have found valuable in this workshop is the extremity of engagement with the cross-cultural team.”

This blend of “Hacking” contributors collaborated in an open-ended structure, proposing and developing reality-based projects to promote “smart, equitable urbanism” in the Kowloon East (Kwun Tong) neighborhood of Hong Kong. Queenie Kwan Li, a first-year student in the master of science in architecture studies (design) program at MIT, describes the program: “Direct consultations with local and international domain experts lined up by the MIT Innovation Node immensely deepened my understanding of my home city’s development.” She adds, “It also gifted me a unique opportunity to relate my ongoing training at MIT for a potential impact in Hong Kong.”

Global classroom-in-action

Despite the progress in innovation, entrepreneurship, and smart city restructuring achieved through this collaboration with the node, the pandemic highlighted an ongoing challenge: how the School of Architecture and Planning can offer a hybrid learning experience for a professional audience that includes mentorships and apprenticeships.

Architecture and urban design training emphasizes the design studio culture of collective learning, which is vastly different from learning alone at home. This learning usually begins with a physical site visit: surveys, interviews, and meetings with locals to gain firsthand engagement experience. In experimenting with a hybrid format, the teaching team had to curate and piece together fragments that could recreate those local perspectives through tailored exercises built on online interactions and team collaborations.

Although travel is always the best and most direct way to understand the benefits and deficits of an area, to appreciate its culture and customs, and to pinpoint the challenges locals face, it is easy to forget, when learning solely online, that people are the core of a place’s identity. To make up for that deficit, the “Hacking” series invited local and international members of the MIT alumni community with relevant domain expertise to attend in person.

Sean Kwok ’01 says, “MIT graduates spanning five decades volunteered to teach and guide current students. In return, this workshop gave us, former MIT students, the rare opportunity to participate in the MIT academic life again, learn from our colleagues, and give back to the school at the same time.”

Some of the domain expertise included those with backgrounds in architecture, urban design and planning, real estate, mobility and transportation, public housing, workforce development, city science and urban analytics, art administration, and engineering. In fact, a total of 23 domain experts, local stakeholders, and eight mentors from various disciplines were physically involved in the program at the node’s headquarters in Hong Kong.

Throughout the series, they shared their knowledge and experiences in a hybrid format so that non-Hong Kong-based members could also participate. Joel Austin Cunningham, a first-year student in the master of science in architecture studies (design) program at MIT, commends the “Hacking” series, stressing that it “addressed the unprecedented constraints of the coronavirus with an innovative educational solution … As architecture and urban planning students, we rely heavily upon active engagements with a project’s site, something which has been significantly constrained this academic year. The IAP workshop responded to this issue, through a multi-institutional collaboration which compensated for our inability to travel through active engagements with an array of local stakeholders and collaborators based in the city.”

Learning is a feedback loop: part of it is drawn from the reconstruction of previous experience, and part is constructed as we develop the learning experience together, assimilating new information, insights, and ideas from one another. Such interconnectedness calls for a human-centric approach, strong communication skills, and cultural and moral values grounded in the inclusion, diversity, and empathy of everyone.

Solv[ED] inspires young people to become global problem-solvers

On May 3, during its annual flagship event Solve at MIT, MIT Solve launched a new program called Solv[ED], geared toward young innovators to help them become problem solvers and learn about social entrepreneurship. 

Starting in June, Solv[ED] will feature a variety of workshops and learning sessions and provide resources that are designed to support young people aged 24 and under with the skills needed to make an impact on their communities and the world. Solv[ED] will host its first annual Youth Innovation Challenge this September and invite young people to submit and pitch solutions to solve problems worldwide. 

Via events throughout the year, young problem-solvers will also be able to network with one another, as well as the broader Solv[ED] community through its open innovation platform, to brainstorm ideas and advance their solutions and enterprises.

“There is no one path through Solv[ED]’s offerings. We’re creating a program for young people to design their own social impact journeys,” says Alex Amouyel, executive director of Solve. “We can’t do this alone. That is why we are inviting youth organizations, education providers, and other cross-sector leaders to join us and support young problem-solvers all over the world.”

Emma Yang, the youngest MIT Solver and founder of Timeless, a startup that empowers Alzheimer’s patients to stay engaged and connected to their loved ones, is excited about the launch of Solv[ED] and believes that it will generate a large community of youth looking to work together to make change.

“Solv[ED] will give young people the opportunity to learn about and practice skills for social entrepreneurship. I’m especially excited about the ways that it’ll do this while bringing young people from around the world together,” Yang says.

In addition to young innovators, Solv[ED]’s community gathers member organizations looking to support these youth, such as Anant National University, Antropia ESSEC, Firefly Innovations at City University of New York, Instituto Tecnológico de Monterrey, Learn with Leaders, T.A. Pai Management Institute, Universidad de los Andes, and Universidad Privada Peruano Alemana. 

Solv[ED] partners include the Morgridge Family Foundation, the Rieschel Foundation, and the Pozen Social Innovation Prize. 

MIT students can sign up for the Solv[ED] newsletter for more updates, and organizations that support youth innovation can become Solv[ED] Members.

Behind Covid-19 vaccine development

When starting a vaccine program, scientists generally have at least an anecdotal understanding of the disease they aim to target. When Covid-19 surfaced over a year ago, there were so many unknowns about the fast-moving virus that scientists had to act quickly and rely on new methods and techniques even to begin understanding the basics of the disease.

Scientists at Janssen Research & Development, developers of the Johnson & Johnson-Janssen Covid-19 vaccine, leveraged real-world data and, working with MIT researchers, applied artificial intelligence and machine learning to help guide the company’s research efforts into a potential vaccine.

“Data science and machine learning can be used to augment scientific understanding of a disease,” says Najat Khan, chief data science officer and global head of strategy and operations for Janssen Research & Development. “For Covid-19, these tools became even more important because our knowledge was rather limited. There was no hypothesis at the time. We were developing an unbiased understanding of the disease based on real-world data using sophisticated AI/ML algorithms.”

In preparing for clinical studies of Janssen’s lead vaccine candidate, Khan put out a call for collaborators on predictive modeling to partner with her data science team in identifying key locations for trial sites. Through Regina Barzilay, the MIT School of Engineering Distinguished Professor for AI and Health, faculty lead of AI for MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health, and a member of Janssen’s scientific advisory board, Khan connected with Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management at MIT. Bertsimas had developed a leading machine learning model that tracks Covid-19 spread in communities and predicts patient outcomes, and he came on board as the primary technical partner on the project.

DELPHI

When the World Health Organization declared Covid-19 a pandemic in March 2020 and forced much of the world into lockdown, Bertsimas, who is also the faculty lead of entrepreneurship for the Jameel Clinic, brought his group of 25-plus doctoral and master’s students together to discuss how they could use their collective skills in machine learning and optimization to create new tools to aid the world in combating the spread of the disease.

The group started tracking their efforts on the COVIDAnalytics platform, where their models are generating accurate real-time insight into the pandemic. One of the group’s first projects was charting the progression of Covid-19 with an epidemiological model they developed named DELPHI, which predicts state-by-state infection and mortality rates based upon each state’s policy decisions.

DELPHI is based on the standard SEIR model, a compartmental model that simplifies the mathematical modeling of infectious diseases by dividing populations into four categories: susceptible, exposed, infectious, and recovered. The ordering of the labels is intentional, reflecting the flow of people between the compartments. DELPHI expands on this model with a system that looks at 11 possible states of being to account for realistic effects of the pandemic, such as comparing the length of time those who recovered from Covid-19 spent in the hospital versus those who died.

“The model has some values that are hardwired, such as how long a person stays in the hospital, but we went deeper to account for the nonlinear change of infection rates, which we found were not constant and varied over different periods and locations,” says Bertsimas. “This gave us more modeling flexibility, which led the model to make more accurate predictions.”
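The SEIR structure Bertsimas describes can be sketched in a few lines of Python. The following is a generic illustration, not DELPHI itself (which adds 11 compartments and fits location-specific rates to real data), and every parameter value below is hypothetical; the time-varying `beta()` stands in for the nonlinear infection rate the team modeled.

```python
import math

# Generic SEIR sketch with a time-varying infection rate.
# Not the DELPHI implementation; all parameter values are hypothetical.

def beta(t):
    """Hypothetical infection rate that declines after day 30,
    standing in for lockdowns and mask-wearing taking effect."""
    return 0.5 if t < 30 else 0.5 * math.exp(-0.03 * (t - 30))

def simulate_seir(days=180, n=1_000_000, dt=1.0):
    sigma = 1 / 5.2   # exposed -> infectious rate (1 / incubation period, days)
    gamma = 1 / 10.0  # infectious -> recovered rate (1 / infectious period, days)
    s, e, i, r = n - 100.0, 0.0, 100.0, 0.0  # start with 100 infectious people
    history = []
    for step in range(int(days / dt)):
        t = step * dt
        new_exposed = beta(t) * s * i / n * dt   # S -> E
        new_infectious = sigma * e * dt          # E -> I
        new_recovered = gamma * i * dt           # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((t, s, e, i, r))
    return history

history = simulate_seir()
peak_day = max(history, key=lambda row: row[3])[0]  # day the infectious curve peaks
```

Fitting `beta(t)` to observed case data for each location, rather than fixing it in advance, is what gives a DELPHI-style model the flexibility Bertsimas describes across different periods and places.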

A key innovation of the model is capturing the behaviors of people related to measures put into place during the pandemic, such as lockdowns, mask-wearing, and social distancing, and the impact these had on infection rates.

“By June or July, we were able to augment the model with these data. The model then became even more accurate,” says Bertsimas. “We also considered different scenarios for how various governments might respond with policy decisions, from implementing serious restrictions to no restrictions at all, and compared them to what we were seeing happening in the world. This gave us the ability to make a spectrum of predictions. One of the advantages of the DELPHI model is that it makes predictions on 120 countries and all 50 U.S. states on a daily basis.”

A vaccine for today’s pandemic

Being able to determine where Covid-19 is likely to spike next proved to be critical to the success of Janssen’s clinical trials, which were “event-based” — meaning that “we figure out efficacy based on how many ‘events’ are in our study population, events such as becoming sick with Covid-19,” explains Khan.

“To run a trial like this, which is very, very large, it’s important to go to hot spots where we anticipate the disease transmission to be high so that you can accumulate those events quickly. If you can, then you can run the trial faster, bring the vaccine to market more quickly, and also, most importantly, have a very rich dataset where you can make statistically sound analysis.”

Bertsimas assembled a core group of researchers to work with him on the project, including two doctoral students from MIT’s Operations Research Center, where he is a faculty member: Michael Li, who led implementation efforts, and Omar Skali Lami. Other members included Hamza Tazi MBAn ’20, a former master of business analytics student, and Ali Haddad, a data research scientist at Dynamic Ideas LLC.

The MIT team began collaborating with Khan and her team last May to forecast where the next surge in cases might happen. Their goal was to identify Covid-19 hot spots where Janssen could conduct clinical trials and recruit participants who were most likely to get exposed to the virus.

With clinical trials due to start last September, the teams had to immediately hit the ground running and make predictions four months in advance of when the trials would actually take place. “We started meeting daily with the Janssen team. I’m not exaggerating — we met on a daily basis … sometimes over the weekend, and sometimes more than once a day,” says Bertsimas.

To understand how the virus was moving around the world, data scientists at Janssen continuously monitored and scouted data sources across the globe. The team built a global surveillance dashboard that pulled in case numbers, hospitalizations, and mortality and testing rates at the country, state, and even county level, depending on data availability.

The DELPHI model integrated these data with additional information about local policies and behaviors, such as whether people were being compliant with mask-wearing, and was making daily predictions in the 300-400 range. “We were getting constant feedback from the Janssen team, which helped to improve the quality of the model. The model eventually became quite central to the clinical trial process,” says Bertsimas.

Remarkably, the vast majority of Janssen’s clinical trial sites that DELPHI predicted to be Covid-19 hot spots ultimately had extremely high numbers of cases, including in South Africa and Brazil, where new variants of the virus had surfaced by the time the trials began. According to Khan, high incidence rates typically indicate variant involvement.

“All of the predictions the model made are publicly available, so one can go back and see how accurate the model really is. It held its own. To this day, DELPHI is one of the most accurate models the scientific community has produced,” says Bertsimas.

“As a result of this model, we were able to have a highly data-rich package at the time of submission of our vaccine candidate,” says Khan. “We are one of the few trials that had clinical data in South Africa and Brazil. That became critical because we were able to develop a vaccine that became relevant for today’s needs, today’s world, and today’s pandemic, which consists of so many variants, unfortunately.” 

Khan points out that the DELPHI model was further evolved with diversity in mind, taking into account biological risk factors, patient demographics, and other characteristics. “Covid-19 impacts people in different ways, so it was important to go to areas where we were able to recruit participants from different races, ethnic groups, and genders. Due to this effort, we had one of the most diverse Covid-19 trials that’s been run to date,” she says. “If you start with the right data, unbiased, and go to the right places, we can actually change a lot of the paradigms that are limiting us today.”

In April, the MIT and Janssen R&D Data Science teams were jointly recognized by the Institute for Operations Research and the Management Sciences (INFORMS) as the winner of the 2021 Innovative Applications in Analytics Award for their innovative and highly impactful work on Covid-19. Building on this success, the teams are continuing their collaboration to apply their data-driven approach and technical rigor to other infectious diseases. “This was not a partnership in name only. Our teams really came together in this and continue to work together on various data science efforts across the pipeline,” says Khan. The team also credits the investigators on the ground, who contributed to site selection in combination with the model.

“It was a very satisfying experience,” concurs Bertsimas. “I’m proud to have contributed to this effort and to have helped the world in the fight against the pandemic.”
