MIT Press’s Direct to Open opens access to full list of 2024 monographs

Now in its third year of operation, the MIT Press’s Direct to Open (D2O) recently announced that it reached its full funding goal for 2024 and will open access to 79 new monographs and edited book collections this year.

Launched in 2021, D2O is an innovative, sustainable framework for open-access monographs that shifts publishing from a solely market-based purchase model, in which individuals and libraries buy single e-books, to a collaborative, library-supported open-access model.

“Reaching our overall funding goal — in full and on time — is a major milestone in developing a sustainable open-access publishing model,” says Amy Harris, senior manager, library relations and sales at the MIT Press. “We are extremely grateful for the support of our library and consortium partners that makes this possible.” 

There are other models that offer fund-to-open opportunities on a title-by-title basis or that focus on opening access within specific disciplines. D2O is unique because it allows the press to open access to its entire slate of scholarly books at scale during each funding cycle. Thanks to D2O, all monograph authors have the opportunity for their work to be open access, and the press can offer equal support to traditionally underfunded disciplines in the social sciences and humanities. 

At a time when the traditional market for scholarly books continues to decline, works funded through D2O are reaching larger audiences online than ever before — averaging 2,694 reads per title and bringing important scholarship to new audiences. D2O books have also been academically cited almost 1,100 times.

“D2O is meeting the needs of academics, readers, and libraries alike, and our usage and citation stats demonstrate that the academic community is embracing open-access scholarship across a wide range of fields and for many purposes — from the classroom to research projects to professional interest reading,” says Harris. “This further aligns the work of the MIT Press with the mission of MIT to advance knowledge in science, technology, the arts, and other areas of scholarship to best serve the nation and the world, and provides opportunities for expansion of the model in the forthcoming years.”

The MIT Press will now turn its attention to its fourth funding cycle and invites libraries and library consortia to participate. For details, please visit: mitpress.mit.edu/D2O.

Nancy Hopkins awarded the National Academy of Sciences Public Welfare Medal

The National Academy of Sciences has awarded the 2024 Public Welfare Medal to MIT biologist Nancy Hopkins, the Amgen Professor of Biology Emerita, in recognition of “her courageous leadership over three decades to create and ensure equal opportunity for women in science.”

The award recognizes Hopkins’s role in catalyzing and leading MIT’s “A Study on the Status of Women Faculty in Science,” made public in 1999. The landmark report, the result of the efforts of numerous members of the MIT faculty and administration, revealed inequities in the treatment and resources available to women versus men on the faculty at the Institute, helped drive significant changes to MIT policies and practices, and sparked a national conversation about the unequal treatment of women in science, engineering, and beyond.

Since the medal was established in 1914 to honor extraordinary use of science for the public good, it has been awarded to several MIT-affiliated scientists, including Karl Compton, James R. Killian Jr., and Jerome B. Wiesner, as well as Vannevar Bush, Isidor I. Rabi, and Victor Weisskopf.

“The Public Welfare Medal has been awarded to MIT faculty who have helped define our Institute and scientists who have shaped modern science on the national stage,” says Susan Hockfield, MIT president emerita. “It is more than fitting for Nancy to join their ranks, and — importantly — celebrates her critical role in increasing the participation of women in science and engineering as a significant national achievement.”

When Hopkins joined the faculty of the MIT Center for Cancer Research (CCR) in 1973, she did not set out to become an advocate for equality for women in science. For the first 15 years, she distinguished herself in pioneering studies linking genes of RNA tumor viruses to their roles in causing some forms of cancer. But in 1989, Hopkins changed course: She began developing molecular technologies for the study of zebrafish that would help establish it as an important model for vertebrate development and cancer biology.

To make the pivot, Hopkins needed more space to accommodate fish tanks and new equipment. Although Hopkins strongly suspected that she had been assigned less lab space than her male peers in the building, her hypothesis carried little weight and her request was denied. Ever the scientist, Hopkins believed the path to more lab space was to collect data. One night in 1993, with a measuring tape in hand, she visited each lab to quantify the distribution of space in her building. Her hypothesis appeared correct.

Hopkins shared her initial findings — and her growing sense that there was bias against women scientists — with one female colleague, and then others, many of whom reported similar experiences. The senior women faculty in MIT’s School of Science began meeting to discuss their concerns, ultimately documenting them in a letter to Dean of Science Robert Birgeneau. The letter was signed by professors Susan Carey, Sylvia Ceyer, Sallie “Penny” Chisholm, Suzanne Corkin, Mildred Dresselhaus, Ann Graybiel, Ruth Lehmann, Marcia McNutt, Terry Orr-Weaver, Mary-Lou Pardue, Molly Potter, Paola Malanotte-Rizzoli, Leigh Royden, Lisa Steiner, and Joanne Stubbe. Also important were Hopkins’s discussions with Lorna Gibson, a professor in the Department of Materials Science and Engineering, since Gibson had made similar observations with her female colleagues in the School of Engineering. Despite the biases against these women, they were highly accomplished scientists. Four of them were eventually awarded the U.S. National Medal of Science, and 11 were, or became, members of the National Academy of Sciences.

In response to the women in the School of Science, Birgeneau established the Committee on the Status of Women Faculty in 1995, which included both female faculty and three male faculty members who had been department chairs: Jerome Friedman, Dan Kleitman, and Robert Silbey. In addition to interviewing essentially all the female faculty members in the school, the committee collected data on salaries, space, and other resources. It found that of 209 tenured professors in the School of Science, only 15 were women, and that the women often had lower salaries and smaller labs, and raised more of their salaries from grants than equivalent male faculty.

At the urging of Lotte Bailyn, a professor at the MIT Sloan School of Management and chair of the faculty, Hopkins and the committee summarized their findings to be presented to MIT’s faculty. Struck by the pervasive and well-documented pattern of bias against women across the School of Science, both Birgeneau and MIT President Charles Vest added prefaces to the report before it was published in the faculty newsletter. Vest commented, “I have always believed that contemporary gender discrimination within universities is part reality and part perception. True, but I now understand that reality is by far the greater part of the balance.”

Vest took an “engineers’ approach” to addressing the report’s findings, remarking “anything I can measure, I can fix.” He tasked Provost Robert Brown with establishing committees to produce reports on the status of women faculty for all five of MIT’s schools. The reports were published in 2002 and drew attention to the small number of women faculty in some schools, as well as discrepancies similar to those first documented in the School of Science.

In response, MIT implemented changes in hiring practices, updated pay equity reviews, and worked to improve the working environment for women faculty. On-campus day care facilities were built and leave policies were expanded for the benefit of all faculty members with families. To address underrepresentation of individuals of color, as well as the unique biases against women of color, Brown established the Council on Faculty Diversity with Hopkins and Philip Clay, then MIT’s chancellor and a professor in the Department of Urban Studies and Planning. Meanwhile, Vest spearheaded a collaboration with presidents of other leading universities to increase representation of women faculty.

MIT increased the number of women faculty by altering hiring procedures — particularly in the School of Engineering under Dean Thomas Magnanti and in the School of Science under Birgeneau, and later Associate Dean Hazel Sive. MIT did not need to alter its standards for hiring to increase the number of women on its faculty: Women hired under the revised policies at the Institute have been equally successful and have gone on to important leadership roles at MIT and other institutions.

In the wake of the 1999 report, the press thrust MIT — and Hopkins — into the national spotlight. The careful documentation in the report, and first Birgeneau’s and then Vest’s endorsement of and proactive response to its findings, were persuasive to many reporters and their readers. The reports and media coverage resonated with women across academia, resulting in a flood of mail to Hopkins’s inbox, as well as many requests for speaking engagements. Hopkins would eventually give hundreds of talks across the United States and in many other countries advocating for the equitable treatment of women in science.

Her advocacy work continued after her retirement. In 2019, Hopkins, along with Hockfield and Sangeeta Bhatia, the John J. and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science, founded the Boston Biotech Working Group — which later evolved into the Faculty Founder Initiative — to increase women’s representation as founders and board members of biotech companies in Massachusetts.

Hopkins, however, believes she became “this very visible person by chance.”

“An almost uncountable number of people made this happen,” she continues. “Moreover, I know how much work went on before I even set foot on campus, such as by Emily Wick, Shirley Ann Jackson, Sheila Widnall, and Mildred Dresselhaus. I stood on the shoulders of a great institution and the long, hard work of many people that belong to it.”

The National Academy of Sciences will present the 2024 Public Welfare Medal to Hopkins in April at its 161st annual meeting. Hopkins is the recipient of many other awards and honors, both for her scientific achievements and her advocacy for women in science. She is a member of the National Academy of Sciences, the National Academy of Medicine, the American Academy of Arts and Sciences, and the AACR Academy. Other awards include the Centennial Medal from Harvard University, the MIT Gordon Y. Billard Award for “special service” to MIT, the MIT Laya Wiesner Community Award, the Maria Mitchell Women in Science Award, and the STAT Biomedical Innovation Award. In addition, she has received eight honorary doctorates, most recently from Rockefeller University, the Hong Kong University of Science and Technology, and the Weizmann Institute.

What to do about AI in health?

Before a drug is approved by the U.S. Food and Drug Administration (FDA), it must demonstrate both safety and efficacy. However, the FDA does not require an understanding of a drug’s mechanism of action for approval. This acceptance of results without explanation raises the question of whether the “black box” decision-making process of a safe and effective artificial intelligence model must be fully explained in order to secure FDA approval.

This topic was one of many discussion points addressed on Monday, Dec. 4 during the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) AI and Health Regulatory Policy Conference, which ignited a series of discussions and debates amongst faculty; regulators from the United States, EU, and Nigeria; and industry experts concerning the regulation of AI in health. 

As machine learning continues to evolve rapidly, uncertainty persists as to whether regulators can keep up while still reducing the likelihood of harmful impact and ensuring that their respective countries remain competitive in innovation. To promote an environment of frank and open discussion, the Jameel Clinic event was limited to a carefully curated audience of about 100 attendees and conducted under the Chatham House Rule, which allows speakers to voice controversial opinions and arguments without being identified as the source.

Rather than hosting an event to generate buzz around AI in health, the Jameel Clinic’s goal was to create a space to keep regulators apprised of the most cutting-edge advancements in AI, while allowing faculty and industry experts to propose new or different approaches to regulatory frameworks for AI in health, especially for AI use in clinical settings and in drug development. 

AI’s role in medicine is more relevant than ever, as the industry struggles with a post-pandemic labor shortage, increased costs (“Not a salary issue, despite common belief,” said one speaker), as well as high rates of burnout and resignations among health care professionals. One speaker suggested that priorities for clinical AI deployment should be focused more on operational tooling rather than patient diagnosis and treatment. 

One attendee pointed out a “clear lack of education across all constituents — not just amongst developer communities and health care systems, but with patients and regulators as well.” Given that medical doctors are often the primary users of clinical AI tools, a number of the medical doctors present pleaded with regulators to consult them before taking action. 

Data availability was a key issue for the majority of AI researchers in attendance. They lamented the lack of data needed to make their AI tools work effectively. Many faced barriers such as intellectual property restrictions barring access, or simply a dearth of large, high-quality datasets. “Developers can’t spend billions creating data, but the FDA can,” a speaker pointed out during the event. “There’s a price uncertainty that could lead to underinvestment in AI.” Speakers from the EU touted the development of a system obligating governments to make health data available for AI researchers.

By the end of the daylong event, many of the attendees suggested prolonging the discussion and praised the selective curation and closed environment, which created a unique space conducive to open and productive discussions on AI regulation in health. Once future follow-up events are confirmed, the Jameel Clinic will develop additional workshops of a similar nature to maintain the momentum and keep regulators in the loop on the latest developments in the field.

“The North Star for any regulatory system is safety,” acknowledged one attendee. “Generational thought stems from that, then works downstream.” 

Award shines a spotlight on local science journalism

Local reporting is a critical tool in the battle against disinformation and misinformation. It can also provide valuable data about everything from environmental damage derived from questionable agribusiness practices to the long-term effects of logging on communities. 

Reporting like this requires more than just journalistic chops. It needs a network that can share these important stories, access to readers, and financial support. That’s why organizations like the Knight Science Journalism Program at MIT and its Victor K. McElheny Award are important. 

Founded in 2018 with a gift from Knight Science Journalism (KSJ) Program founding director Victor McElheny and his wife, Ruth McElheny, the KSJ Victor K. McElheny Award rewards local science journalists for their pioneering work and their stories’ impacts. 

“The prize can help illustrate a continuing contribution to the maximum level of public understanding of what technology and science are achieving, and what these achievements imply for humanity,” McElheny says.

The award comes with a $10,000 prize.

“Local science journalism has value, in part, because consolidation in this sector has meant fewer journalists and a shrinking pool of resources with which to do this important work,” notes editor Cathy Clabby, a Knight Science Journalism Fellowship Program alumna (2008). Clabby was part of the team at The Charlotte Observer and The Raleigh News and Observer that earned the McElheny Award in 2023 for its poultry farm investigation.

“The award demonstrated a commitment to high journalistic standards,” Clabby says.

These journalistic standards and the accompanying national recognition for awardees can lend further legitimacy to long-form science journalism. 

Features and outcomes

Additionally, while some news outlets are starved of the resources necessary to produce deeply researched, high-quality stories, receiving the McElheny Award can raise the visibility of small and nonprofit newsrooms, which in turn can help with circulation, operating expenses, and fundraising.

“The award has a very real value to our audience, especially as we develop our digital subscriber model,” notes journalist Tony Bartelme, one of several Charleston Post and Courier reporters whose feature on the Gulf Stream won the inaugural award in 2019. “If readers see this kind of national recognition, they’re more likely to see the value of subscribing.”

“The financial element of the award is certainly a delightful surprise, particularly for a team project like this with a small budget,” says journalist Aaron Scott, whose team at Oregon Public Broadcasting won for its Timber Wars podcast series in 2021. “It filled me with joy getting to tell my colleagues they’d be getting bonus checks in the mail.”

Deborah Blum, the Pulitzer Prize-winning director of the Knight Science Journalism Program and founder of Undark Magazine, argues that local and regional journalists play a central role in promoting science literacy and critical thinking skills among their readers. Blum describes an information ecosystem worthy of preservation, with local science journalism acting as a fundamental building block of public consciousness and shared understanding.

“Science stories told by reporters in the home community, known and trusted by their neighbors, have a special ability to reach readers and listeners,” Blum says.

Value, vision, and recognition

Storytelling has value beyond views, clicks, and shares, according to McElheny Award winners. 

“An informed electorate helps ensure a functional and accountable government,” Clabby asserts.

Journalists point to the skills needed to produce thoughtful, reasoned stories as valuable assets, noting that such pieces can have a lasting impact on readers, communities, and other journalists.

“Science journalism is hard to do because it takes time to wade through it all and understand the science with enough depth to tell the story properly,” Bartelme says. “But, what’s more important than a planet on fire?”

Further, recognition from their peers can serve as validation for what can sometimes become months of research and reporting to produce such important stories.

“Recognition [as evidenced by] the Victor K. McElheny Award is deeply rewarding,” Scott believes, “because it means some of our most accomplished and thoughtful peers are listening to, reading, and thinking deeply about a story we’ve invested so much in telling.”

Outcomes and impacts

The Victor K. McElheny Award for Local Science Journalism confers national recognition on journalists performing a critical function in producing an informed electorate. Local science journalism can have lasting impacts on readers, apprise audiences of advances and challenges related to science and technology, and help secure funding for current and future efforts.

“Fact-based journalism has value for audiences,” Clabby says.

Scott, noting the value of balanced science reporting, described science journalism as “both more important, and more under threat by politicization, than ever before.”

“The McElheny Award is really the only award that celebrates science stories that reach this important audience,” Bartelme concludes. “Local journalists have a special and often more intimate relationship with readers than national organizations.”

Solving complex problems with technology and varied perspectives at Sphere Las Vegas

Something new, large, and round has dominated the Las Vegas skyline since July: Sphere.

After debuting this summer, the state-of-the-art entertainment venue became instantly recognizable thanks to pictures and videos on social media and Reddit. Some of the most viral posts depict the 580,000-square-foot, fully programmable LED Exosphere projecting a giant yellow emoji that smiles, sleeps, and follows airplanes flying overhead with a look of wonder.

According to Jared Miller ’98, MBA ’03, SM ’03, Sphere’s growing popularity even before its official opening last September — when the Irish rock band U2 began its months-long residency — is a testament to the work of the creative team that made it happen.

“The team we have assembled in many ways reflects my experience at MIT,” says Miller, who is executive vice president and CIO at Sphere Entertainment.

“We have deep technology experts, engineers, scientists, artists, creative technologists, and people who have worked in many different industries who have come together to embrace this vision,” adds Miller. “The diversity of the people you’re surrounded with … brings different perspectives [and an] enthusiasm to come together and collaborate on a solution. This is what’s really special about Sphere, and it applies to MIT as well.”

Embracing the pivot

As an undergraduate, Miller majored in chemical engineering and interned in the oil and gas industry, after which he decided to pursue an alternative career path. This led to a job at Intel during the race to build the first microprocessor capable of achieving 1 gigahertz.

Miller learned a lot about himself and his professional interests during the experience, and he was eager for more. “I wanted to learn more about the business aspects; to move from being an engineer into a broader management and strategy role,” he says.

He applied to the program then known as Leaders for Manufacturing (LFM) and matriculated in 2001. The program was then focused on “Big M manufacturing,” but as Miller recalls, LFM was growing and evolving toward its eventual renaming as Leaders for Global Operations (LGO). As a result, the student experience was expanding far beyond manufacturing and into other disciplines.

For Miller, this meant the airline industry. “The intersection of technology and guest experience was taking hold in the industry because it required a pretty rapid shift in how airports and airlines were thinking about … how they were moving people through their journey,” he says.

LGO students participate in six-month internships at LGO partner companies that serve as a basis for their thesis projects. Miller interned at Continental Airlines, where he studied the use of self-service check-in kiosks and their impact on traveler experience.

After graduation, he remained at Continental — which merged with United Airlines in 2010 — for almost a decade, until he pivoted to designing and building new venues in the sports and entertainment industry.

“MIT constantly encouraged and challenged us to think very openly about the opportunities that lie ahead. In my case, these pivots didn’t seem that odd or awkward between the different engineering fields and industries. It was just another step in the journey,” says Miller. “The intersection of technology and the guest experience was at the heart of what I was doing.”

Merging invention with varied perspectives

Until the venue’s official launch, all the public knew about Sphere was what they could see displayed on its massive Exosphere. Once U2 played their first of 40 shows and filmmaker Darren Aronofsky’s “Postcard from Earth” premiered as part of The Sphere Experience, audiences were granted access to what Miller and his team had also been working on.

These include a fully immersive display plane with 16k x 16k resolution, 4D technologies like haptic systems and atmospheric effects to influence what guests are literally feeling, the world’s largest beamforming audio system, and more.

“So much of what we’ve done at Sphere has been about invention,” says Miller.

By “invention,” Miller means identifying potential experiences for the audience and working backward from that point when developing the necessary technologies, though he is quick to explain that technology is not always the solution to a problem, but simply one of many tools that can be used.

“A lot of it comes through process improvements,” explains Miller. “You’ve got to analyze what didn’t work, using a lot of data to come back and say, ‘You know what? This is what needs to change. This is why this approach didn’t work.’ Then get right back up and find another way to tackle the problem.”

From using systems thinking and data analytics to address complex problems — like how to guarantee that 18,000 people in a spherical structure will have the same experience — to building teams that collaborate well to produce possible solutions, Miller credits many of the tools at his disposal to his learnings at MIT.

He learned how to think about complex problems more broadly, and how to think collaboratively with others from a wide variety of backgrounds — much like the team at Sphere.

“At LGO, we discussed and worked on problems that hadn’t been solved yet. We needed a diverse group of people to come together and use all their experiences and expertise to create that solve,” says Miller. “It’s bringing together that diverse group of people to work together that ultimately gets to a great solution.”

New hope for early pancreatic cancer intervention via AI-based risk prediction

The first documented case of pancreatic cancer dates back to the 18th century. Since then, researchers have undertaken a protracted and challenging odyssey to understand the elusive and deadly disease. To date, there is no better cancer treatment than early intervention. Unfortunately, the pancreas, nestled deep within the abdomen, is particularly difficult to reach for early detection.

MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists, alongside Limor Appelbaum, a staff scientist in the Department of Radiation Oncology at Beth Israel Deaconess Medical Center (BIDMC), were eager to better identify potential high-risk patients. They set out to develop two machine-learning models for early detection of pancreatic ductal adenocarcinoma (PDAC), the most common form of the cancer. To access a broad and diverse database, the team synced up with a federated network company, using electronic health record data from various institutions across the United States. This vast pool of data helped ensure the models’ reliability and generalizability, making them applicable across a wide range of populations, geographical locations, and demographic groups.

The two models, the “PRISM” neural network and a logistic regression model (a statistical technique for estimating probability), outperformed current methods. The team’s comparison showed that while standard screening criteria identify about 10 percent of PDAC cases using a five-times-higher relative risk threshold, PRISM can detect 35 percent of PDAC cases at this same threshold.

Using AI to detect cancer risk is not a new phenomenon: algorithms analyze mammograms and CT scans for lung cancer, and assist in the analysis of Pap smear tests and HPV testing, to name a few applications. “The PRISM models stand out for their development and validation on an extensive database of over 5 million patients, surpassing the scale of most prior research in the field,” says Kai Jia, an MIT PhD student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and first author on an open-access paper in eBioMedicine outlining the new work. “The model uses routine clinical and lab data to make its predictions, and the diversity of the U.S. population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions, like a few health-care centers in the U.S. Additionally, using a unique regularization technique in the training process enhanced the models’ generalizability and interpretability.”

“This report outlines a powerful approach to use big data and artificial intelligence algorithms to refine our approach to identifying risk profiles for cancer,” says David Avigan, a Harvard Medical School professor and the cancer center director and chief of hematology and hematologic malignancies at BIDMC, who was not involved in the study. “This approach may lead to novel strategies to identify patients with high risk for malignancy that may benefit from focused screening with the potential for early intervention.” 

Prismatic perspectives

The journey toward the development of PRISM began over six years ago, fueled by firsthand experiences with the limitations of current diagnostic practices. “Approximately 80-85 percent of pancreatic cancer patients are diagnosed at advanced stages, where cure is no longer an option,” says senior author Appelbaum, who is also a Harvard Medical School instructor as well as a radiation oncologist. “This clinical frustration sparked the idea to delve into the wealth of data available in electronic health records (EHRs).”

The CSAIL group’s close collaboration with Appelbaum made it possible to understand the combined medical and machine learning aspects of the problem better, eventually leading to a much more accurate and transparent model. “The hypothesis was that these records contained hidden clues — subtle signs and symptoms that could act as early warning signals of pancreatic cancer,” she adds. “This guided our use of federated EHR networks in developing these models, for a scalable approach for deploying risk prediction tools in health care.”

Both PrismNN and PrismLR models analyze EHR data, including patient demographics, diagnoses, medications, and lab results, to assess PDAC risk. PrismNN uses artificial neural networks to detect intricate patterns in data features like age, medical history, and lab results, yielding a risk score for PDAC likelihood. PrismLR uses logistic regression for a simpler analysis, generating a probability score of PDAC based on these features. Together, the models offer a thorough evaluation of different approaches in predicting PDAC risk from the same EHR data.
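For readers curious how the simpler of the two approaches works mechanically, the sketch below trains a logistic-regression risk scorer on synthetic, EHR-style features. The feature names, simulated data, and plain L2 penalty are illustrative stand-ins, not the published PRISM feature set, cohort, or training procedure.

```python
# Minimal sketch of a PrismLR-style risk scorer; NOT the published model.
# Feature names and data below are illustrative stand-ins only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Toy EHR-style features: age, diabetes diagnosis flag, physician visits per year.
X = np.column_stack([
    rng.normal(60, 12, n),   # age in years
    rng.integers(0, 2, n),   # diabetes diagnosis (0/1)
    rng.poisson(4, n),       # visits per year
])

# Synthetic outcome loosely tied to the features, for demonstration only.
logits = 0.04 * (X[:, 0] - 60) + 0.8 * X[:, 1] + 0.15 * (X[:, 2] - 4) - 4.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Plain L2 regularization keeps the sketch self-contained; the paper describes
# its own regularization scheme, which is not reproduced here.
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]  # probability-style risk score per patient
print("AUC on held-out toy data:", round(roc_auc_score(y_test, risk), 3))
```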

One paramount point for gaining the trust of physicians, the team notes, is a better understanding of how the models work, known in the field as interpretability. The scientists pointed out that while logistic regression models are inherently easier to interpret, recent advancements have made deep neural networks somewhat more transparent. This helped the team to refine the thousands of potentially predictive features derived from a single patient’s EHR to approximately 85 critical indicators. These indicators, which include patient age, diabetes diagnosis, and an increased frequency of visits to physicians, are automatically discovered by the model but match physicians’ understanding of risk factors associated with pancreatic cancer.

The path forward

Despite the promise of the PRISM models, as with all research, some parts are still a work in progress. U.S. data alone are the current diet for the models, necessitating testing and adaptation for global use. The path forward, the team notes, includes expanding the model’s applicability to international datasets and integrating additional biomarkers for more refined risk assessment.

“A subsequent aim for us is to facilitate the models’ implementation in routine health care settings. The vision is to have these models function seamlessly in the background of health care systems, automatically analyzing patient data and alerting physicians to high-risk cases without adding to their workload,” says Jia. “A machine-learning model integrated with the EHR system could empower physicians with early alerts for high-risk patients, potentially enabling interventions well before symptoms manifest. We are eager to deploy our techniques in the real world to help all individuals enjoy longer, healthier lives.” 

Jia wrote the paper alongside Appelbaum and MIT EECS Professor and CSAIL Principal Investigator Martin Rinard, who are both senior authors of the paper. Researchers on the paper were supported during their time at MIT CSAIL, in part, by the Defense Advanced Research Projects Agency, Boeing, the National Science Foundation, and Aarno Labs. TriNetX provided resources for the project, and the Prevent Cancer Foundation also supported the team.

Stratospheric safety standards: How aviation could steer regulation of AI in health

What is the likelihood of dying in a plane crash? According to a 2022 report released by the International Air Transport Association, the industry fatality risk is 0.11. In other words, on average, a person would need to take a flight every day for 25,214 years to have a 100 percent chance of experiencing a fatal accident. Long touted as one of the safest modes of transportation, the highly regulated aviation industry has MIT scientists thinking that it may hold the key to regulating artificial intelligence in health care. 
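As a rough sanity check on that framing (assuming, purely for illustration, that the 0.11 figure is expressed per million flights), the arithmetic behind the “flight every day” comparison looks roughly like this:

```python
# Back-of-envelope check of the cited figures, assuming (as an illustration)
# that 0.11 denotes fatal-accident risk per million flights.
risk_per_flight = 0.11 / 1_000_000
flights_for_one_expected_accident = 1 / risk_per_flight      # ~9.1 million flights
years_at_one_flight_per_day = flights_for_one_expected_accident / 365
print(round(years_at_one_flight_per_day))  # ~24,900 years, close to the ~25,214 cited
```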

Marzyeh Ghassemi, an assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering and Science, and Julie Shah, the H.N. Slater Professor of Aeronautics and Astronautics at MIT, share an interest in the challenges of transparency in AI models. After chatting in early 2023, they realized that aviation could serve as a model to ensure that marginalized patients are not harmed by biased AI models.

Ghassemi, who is also a principal investigator at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Shah then recruited a cross-disciplinary team of researchers, attorneys, and policy analysts across MIT, Stanford University, the Federation of American Scientists, Emory University, University of Adelaide, Microsoft, and the University of California San Francisco to kick off a research project, the results of which were recently accepted to the Equity and Access in Algorithms, Mechanisms and Optimization Conference. 

“I think I can speak for both Marzyeh and myself when I say that we’re really excited to see kind of excitement around AI starting to come about in society,” says first author Elizabeth Bondi-Kelly, now an assistant professor of EECS at the University of Michigan who was a postdoc in Ghassemi’s lab when the project began. “But we’re also a little bit cautious and want to try to make sure that it’s possible we can have frameworks in place to manage potential risks as these deployments start to happen, so we were looking for inspiration for ways to try to facilitate that.” 

AI in health today bears a resemblance to where the aviation industry was a century ago, says co-author Lindsay Sanneman, a PhD student in the Department of Aeronautics and Astronautics at MIT. Though the 1920s were known as “the Golden Age of Aviation,” fatal accidents were “disturbingly numerous,” according to the Mackinac Center for Public Policy.  

Jeff Marcus, the current chief of the National Transportation Safety Board (NTSB) Safety Recommendations Division, recently published a National Aviation Month blog post noting that while a number of fatal accidents occurred in the 1920s, 1929 remains the “worst year on record” for the most fatal aviation accidents in history, with 51 reported accidents. By today’s standards, that would be 7,000 accidents per year, or 20 per day. In response to the high number of fatal accidents in the 1920s, President Calvin Coolidge signed landmark legislation in 1926 known as the Air Commerce Act, which would regulate air travel via the Department of Commerce.

But the parallels do not stop there — aviation’s subsequent path into automation is similar to AI’s. AI explainability has been a contentious topic given AI’s notorious “black box” problem, which has AI researchers debating how much an AI model must “explain” its result to the user before potentially biasing them to blindly follow the model’s guidance.  

“In the 1970s there was an increasing amount of automation … autopilot systems that take care of warning pilots about risks,” Sanneman adds. “There were some growing pains as automation entered the aviation space in terms of human interaction with the autonomous system — potential confusion that arises when the pilot doesn’t have keen awareness about what the automation is doing.” 

Today, becoming a commercial airline captain requires 1,500 hours of logged flight time along with instrument training. According to the researchers’ paper, this rigorous and comprehensive process takes approximately 15 years, including a bachelor’s degree and co-piloting. Researchers believe the success of extensive pilot training could be a potential model for training medical doctors on using AI tools in clinical settings.

The paper also proposes encouraging reports of unsafe health AI tools in the way the Federal Aviation Administration (FAA) does for pilots — via “limited immunity,” which allows pilots to retain their license after doing something unsafe, as long as it was unintentional.

According to a 2023 report published by the World Health Organization, on average, one in every 10 patients is harmed by an adverse event (i.e., “medical errors”) while receiving hospital care in high-income countries. 

Yet in current health care practice, clinicians and health care workers often fear reporting medical errors, not only because of concerns related to guilt and self-criticism, but also due to negative consequences that emphasize the punishment of individuals, such as a revoked medical license, rather than reforming the system that made medical error more likely to occur.  

“In health, when the hammer misses, patients suffer,” wrote Ghassemi in a recent comment published in Nature Human Behaviour. “This reality presents an unacceptable ethical risk for medical AI communities who are already grappling with complex care issues, staffing shortages, and overburdened systems.”

Grace Wickerson, co-author and health equity policy manager at the Federation of American Scientists, sees this new paper as a critical addition to a broader governance framework that is not yet in place. “I think there’s a lot that we can do with existing government authority,” they say. “There’s different ways that Medicare and Medicaid can pay for health AI that makes sure that equity is considered in their purchasing or reimbursement technologies, the NIH [National Institutes of Health] can fund more research in making algorithms more equitable and build standards for these algorithms that could then be used by the FDA [Food and Drug Administration] as they’re trying to figure out what health equity means and how they’re regulated within their current authorities.”

Among others, the paper lists existing government agencies that could help regulate health AI: the FDA, the Federal Trade Commission (FTC), the recently established Advanced Research Projects Agency for Health, the Agency for Healthcare Research and Quality, the Centers for Medicare and Medicaid Services, the Department of Health and Human Services, and the Office for Civil Rights (OCR).

But Wickerson says that more needs to be done. The most challenging part to writing the paper, in Wickerson’s view, was “imagining what we don’t have yet.”  

Rather than solely relying on existing regulatory bodies, the paper also proposes creating an independent auditing authority, similar to the NTSB, that allows for a safety audit for malfunctioning health AI systems. 

“I think that’s the current question for tech governance — we haven’t really had an entity that’s been assessing the impact of technology since the ’90s,” Wickerson adds. “There used to be an Office of Technology Assessment … before the digital era even started, this office existed and then the federal government allowed it to sunset.” 

Zach Harned, co-author and recent graduate of Stanford Law School, believes a primary challenge in emerging technology is having technological development outpace regulation. “However, the importance of AI technology and the potential benefits and risks it poses, especially in the health-care arena, has led to a flurry of regulatory efforts,” Harned says. “The FDA is clearly the primary player here, and they’ve consistently issued guidances and white papers attempting to illustrate their evolving position on AI; however, privacy will be another important area to watch, with enforcement from OCR on the HIPAA [Health Insurance Portability and Accountability Act] side and the FTC enforcing privacy violations for non-HIPAA covered entities.” 

Harned notes that the area is evolving fast, including developments such as the recent White House Executive Order 14110 on the safe and trustworthy development of AI, as well as regulatory activity in the European Union (EU), including the capstone EU AI Act that is nearing finalization. “It’s certainly an exciting time to see this important technology get developed and regulated to ensure safety while also not stifling innovation,” he says. 

In addition to regulatory activities, the paper suggests other opportunities to create incentives for safer health AI tools such as a pay-for-performance program, in which insurance companies reward hospitals for good performance (though researchers recognize that this approach would require additional oversight to be equitable).  

So just how long do researchers think it would take to create a working regulatory system for health AI? According to the paper, “the NTSB and FAA system, where investigations and enforcement are in two different bodies, was created by Congress over decades.” 

Bondi-Kelly hopes that the paper is a piece to the puzzle of AI regulation. In her mind, “the dream scenario would be that all of us read the paper and are super inspired and able to apply some of the helpful lessons from aviation to help AI to prevent some of the potential harm that might come about.”

In addition to Ghassemi, Shah, Bondi-Kelly, and Sanneman, MIT co-authors on the work include Senior Research Scientist Leo Anthony Celi and former postdocs Thomas Hartvigsen and Swami Sankaranarayanan. Funding for the work came, in part, from an MIT CSAIL METEOR Fellowship, Quanta Computing, the Volkswagen Foundation, the National Institutes of Health, the Herman L. F. von Helmholtz Career Development Professorship and a CIFAR Azrieli Global Scholar award.

Michael John Gorman named MIT Museum director

MIT has appointed Michael John Gorman the Mark R. Epstein (Class of 1963) Director of the recently re-imagined MIT Museum.

Gorman replaces longtime museum director John Durant, who stepped down in 2023. Originally from Ireland, Gorman is the founding director of BIOTOPIA – Naturkundemuseum Bayern in Munich, Germany, a newly established innovative center and museum space for life sciences and environment. Since 2015, he has been responsible for the development of the center’s vision, exhibition strategy, and operations and festivals combining science and the arts. He is also a tenured university professor for life sciences in society at the Ludwig Maximilians University in Munich.  

Gorman was the founding director of Science Gallery at Trinity College, Dublin, Ireland, a groundbreaking public space for innovation, science, and the arts. From 2012 to 2016, Gorman served as CEO of Science Gallery International, a nonprofit he founded with university partners to support the establishment of the Global Science Gallery Network in cities including London; Melbourne, Australia; and Bengaluru, India.  

From 1999 to 2000, Gorman held dual postdoctoral fellowships at MIT’s Dibner Institute and Harvard University’s Department of History of Science, before becoming a lecturer in science, technology, and society at Stanford University. He is the author of books on topics ranging from Buckminster Fuller’s designs to 17th-century art and science to the recent book “Idea Colliders: The Future of Science Museums,” published by the MIT Press. He has curated numerous exhibitions and festivals bridging science, art, technology, and design around the world.

“I see the MIT Museum as a dynamic public forum, a place to encounter possible futures, and a leading center for public engagement at the nexus of science, technology, and the arts and design,” says Gorman. “I’m greatly looking forward to building on the excellent work that’s been done by the museum team since its re-opening at the spectacular new site in Kendall Square, and to realizing the museum’s vast potential as MIT’s window to the world.”

Kathryn Wysocki Gunsch, deputy director of the MIT Museum, will serve as interim director until Gorman takes up his post this summer.

In October 2022, a reinvented MIT Museum opened in a new location in the heart of the Kendall Square Gateway of MIT’s campus at 314 Main Street in Cambridge, Massachusetts. The museum aims to make innovation and research available to all by presenting the best of STEAM, and to “turn MIT inside-out,” inviting visitors to take part in ongoing research while demonstrating how science and innovation will shape the future of society.

Highlights include freshly conceived exhibitions featuring objects from the museum’s prodigious collections of over 1.5 million objects, along with loans of art and artifacts; the Lee Family Exchange event space for public dialogue and conversation; a hands-on Heide Maker Hub, where audiences can experiment with putting scientific ideas into action; and an enlarged store.

Turning history of science into a comic adventure

The Covid-19 pandemic taught us how complex the science and management of infectious disease can be, as the public grappled with rapidly evolving science, shifting and contentious policies, and mixed public health messages.

The purpose of scientific communication is to make the complexity of such topics engaging and accessible while also making sure the information conveyed is scientifically accurate. With that goal in mind, one MIT team recently transformed themselves into time-traveling comic book characters, in an effort to convey the fascinating history of infectious disease science.

The multimedia project, “A Paradigm Shift in Infectious Diseases,” follows its creators — and the story’s protagonists — on a journey through scientific history. MIT Associate Professor Lydia Bourouiba and cancer-researcher-turned-graphic-artist Argha Manna travel across the world, leaping from one century to the next, to learn about paradigm shifts in science from philosophers of science and to meet the scientific luminaries and other scholars who changed the understanding of infectious diseases and their transmission.

“Our goal with this project was to communicate effectively about the scientific method,” says Bourouiba, director of MIT’s Fluid Dynamics of Disease Transmission Laboratory, part of the Fluids and Health Network; a core faculty member of the Institute for Medical Engineering and Science (IMES); and an associate professor in the departments of Civil and Environmental Engineering and of Mechanical Engineering. “During crises like the Covid-19 pandemic, we saw a lot of confusion and misunderstanding from the public that stemmed, in part, from a lack of knowledge about how science actually evolves.”

The project was exhibited in MIT’s Rotch Library Gallery last month and was the subject of an event at the Hayden Library that explored broader questions about the scientific method and scientific literacy. The authors are currently in talks with publishers to create a comic book from the story, and Bourouiba is teaching a related class, HST.537/1.063/2.25 (Fluids and Diseases), this spring.

The exciting history of infectious disease research

Bourouiba pitched the idea for the exhibit to the MIT Center for Art, Science and Technology (CAST) in 2021 during the Covid-19 pandemic. CAST agreed to fund the project, which also received support from the Department of Civil and Environmental Engineering, IMES, and the MIT Libraries.

“We wanted to use visual art in the form of comics, which allows us to convey multilayered messages, with the two protagonists traveling through time and locations to try to understand the processes that led to the different understandings of infectious diseases and how they are transmitted,” Bourouiba explains.

Like all good science communication, the project tells a story. The comic starts with Bourouiba and Manna discussing how infectious diseases spread. They read about experiments by William F. Wells in the 1930s, focusing on how the size of exhaled droplets determines how fast they evaporate. And they learn about the origins of germ theory, which, after much pushback and debate, was eventually established by Louis Pasteur and Robert Koch toward the end of the 19th century. Soon, Bourouiba and Manna are transported back in time to come face to face with the subjects of their study. The adventure brings them to ancient Greece, Egypt, Italy, and eventually back to MIT — but in the 1940s — where Harold “Doc” Edgerton conducted pioneering work on stroboscopic photography, which could capture images of moving droplets in unprecedented detail.

“Through the adventure of the protagonists in this comic, one learns that the evolution of ideas on infectious diseases is far from solely a school of medicine effort,” Bourouiba says. “Instead, it involved, from its start, physicists, ecologists, engineers, and modelers, in addition to those managing public good, eventually establishing public health structures.”

Through it all, the audience learns about various “paradigm shifts” in science that mark progress and put in perspective contemporary shifts in our understanding of infectious disease.

The power of science communication

A panel at the Hayden Library served to launch the exhibit and included Joel Gill, associate professor of art and chair of the Department of Visual Narrative at Boston University; Edward Nardell, professor of global health and social medicine at Harvard Medical School; Carl Zimmer, New York Times journalist and author; John Durant, then-director of the MIT Museum and adjunct professor in the MIT Program in Science, Technology, and Society (STS); and Robin Scheffler, associate professor in MIT STS.

The panel discussed shifts in ideas about science and how we communicate them using media like videos, books, and comics.

“We need to think about our audience, we need to know the audience we’re talking to, and we need to be prepared to listen as well as to speak to the audience,” Durant said. “We also need to find ways of moving outside of the circle of people who think the way we do.”

In Scheffler’s talk, he showed examples throughout history of scientists using art and artists using science.

“By thinking about the slippery-ness between [art and science] and having a greater sense that there isn’t a hard and fast line to draw in terms of paradigm shifts in science, I think we can all have a more empathetic and practical approach in how we communicate and talk about the nature of changing science and changing understandings of disease,” Scheffler said.

Ultimately, the comic exemplifies an idea by one of its central characters, Doc Edgerton. The famed educator once said, “The trick to education is to teach people in such a way that they don’t realize they’re learning until it’s too late.”

Inclusive research for social change

Pair a decades-old program dedicated to creating research opportunities for underrepresented minorities and populations with a growing initiative committed to tackling the very issues at the heart of such disparities, and you’ll get a transformative partnership that only MIT can deliver. 

Since 1986, the MIT Summer Research Program (MSRP) has led an institutional effort to prepare underrepresented students (minorities, women in STEM, or students with low socioeconomic status) for doctoral education by pairing them with MIT labs and research groups. For the past three years, the Initiative on Combatting Systemic Racism (ICSR), a cross-disciplinary research collaboration led by MIT’s Institute for Data, Systems, and Society (IDSS), has joined them in their mission, helping bring the issue full circle by providing MSRP students with the opportunity to use big data and computational tools to create impactful changes toward racial equity.

“ICSR has further enabled our direct engagement with undergrads, both within and outside of MIT,” says Fotini Christia, the Ford International Professor of the Social Sciences, associate director of IDSS, and co-organizer for the initiative. “We’ve found that this line of research has attracted students interested in examining these topics with the most rigorous methods.”

The initiative fits well under the IDSS banner, as IDSS research seeks solutions to complex societal issues through a multidisciplinary approach that includes statistics, computation, modeling, social science methodologies, human behavior, and an understanding of complex systems. With the support of faculty and researchers from all five schools and the MIT Schwarzman College of Computing, the objective of ICSR is to work on an array of different societal aspects of systemic racism through a set of verticals including policing, housing, health care, and social media.

Where passion meets impact

Grinnell senior Mia Hines has always dreamed of using her love for computer science to support social justice. She has experience working with unhoused people and labor unions, and advocating for Indigenous peoples’ rights. When applying to college, she focused her essay on using technology to help Syrian refugees.

“As a Black woman, it’s very important to me that we focus on these areas, especially on how we can use technology to help marginalized communities,” Hines says. “And also, how do we stop technology or improve technology that is already hurting marginalized communities?”   

Through MSRP, Hines was paired with research advisor Ufuoma Ovienmhada, a fourth-year doctoral student in the Department of Aeronautics and Astronautics at MIT. A member of Professor Danielle Wood’s Space Enabled research group at MIT’s Media Lab, Ovienmhada received funding from an ICSR Seed Grant and NASA’s Applied Sciences Program to support her ongoing research measuring environmental injustice and socioeconomic disparities in prison landscapes. 

“I had been doing satellite remote sensing for environmental challenges and sustainability, starting out looking at coastal ecosystems, when I learned about an issue called ‘prison ecology,’” Ovienmhada explains. “This refers to the intersection of mass incarceration and environmental justice.”

Ovienmhada’s research uses satellite remote sensing and environmental data to characterize exposures to different environmental hazards such as air pollution, extreme heat, and flooding. “This allows others to use these datasets for real-time advocacy, in addition to creating public awareness,” she says.

Focused especially on extreme heat, Hines used satellite remote sensing to monitor temperature fluctuations and assess the risk imposed on prisoners, up to and including death, particularly in states like Texas, where 75 percent of prisons either don’t have full air conditioning or have none at all.
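A minimal sketch of that kind of extreme-heat flagging is shown below, using hand-entered daily temperatures for hypothetical facilities; the real analysis draws on satellite-derived temperature products, and the facility names and 95-degree threshold here are assumptions chosen only for illustration.

```python
# Toy illustration of flagging extreme-heat exposure at prison locations.
# Facility names, temperatures, and the 95 F threshold are hypothetical stand-ins;
# the actual analysis uses satellite-derived temperature data.
import numpy as np

HEAT_THRESHOLD_F = 95.0
facilities = {
    "Facility A (no A/C)": np.array([88, 96, 101, 99, 93, 97, 102], dtype=float),
    "Facility B (partial A/C)": np.array([85, 90, 94, 92, 89, 91, 96], dtype=float),
}

for name, daily_max_f in facilities.items():
    hot_days = int((daily_max_f >= HEAT_THRESHOLD_F).sum())
    print(f"{name}: {hot_days} day(s) at or above {HEAT_THRESHOLD_F} F "
          f"out of {daily_max_f.size}, peak {daily_max_f.max():.0f} F")
```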

“Before this project I had done little to no work with geospatial data, and as a budding data scientist, getting to work with and understanding different types of data and resources is really helpful,” Hines says. “I was also funded and afforded the flexibility to take advantage of IDSS’s Data Science and Machine Learning online course. It was really great to be able to do that and learn even more.”

Filling the gap

Much like Hines, Harvey Mudd senior Megan Li was specifically interested in the IDSS-supported MSRP projects. She was drawn to the interdisciplinary approach, and she seeks in her own work to apply computational methods to societal issues and to make computer science more inclusive, considerate, and ethical. 

Working with Aurora Zhang, a grad student in IDSS’s Social and Engineering Systems PhD program, Li used county-level data on income and housing prices to quantify and visualize how affordability based on income alone varies across the United States. She then expanded the analysis to include assets and debt to determine the most common barriers to home ownership.

“I spent my day-to-day looking at census data and writing Python scripts that could work with it,” reports Li. “I also reached out to the Census Bureau directly to learn a little bit more about how they did their data collection, and discussed questions related to some of their previous studies and working papers that I had reviewed.” 
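A minimal sketch of that kind of county-level affordability calculation appears below; the toy rows and the three-times-income rule of thumb are illustrative assumptions, not the data or methodology actually used in the project.

```python
# Hypothetical county-level affordability sketch; the data and the 3x-income
# rule of thumb are illustrative only, not the study's actual methodology.
import pandas as pd

counties = pd.DataFrame({
    "county": ["County A", "County B", "County C"],
    "median_income": [55_000, 82_000, 48_000],         # dollars per year
    "median_home_price": [210_000, 510_000, 390_000],  # dollars
})

# Simple price-to-income ratio; values above ~3 are often treated as unaffordable.
counties["price_to_income"] = counties["median_home_price"] / counties["median_income"]
counties["affordable_on_income_alone"] = counties["price_to_income"] <= 3.0

print(counties.to_string(index=False))
```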

Outside of actual day-to-day research, Li says she learned a lot in conversations with fellow researchers, particularly changing her “skeptical view” of whether or not mortgage lending algorithms would help or hurt home buyers in the approval process. “I think I have a little bit more faith now, which is a good thing.”

“Harvey Mudd is undergraduate-only, and while professors do run labs here, my specific research areas are not well represented,” Li says. “This opportunity was enormous in that I got the experience I need to see if this research area is actually something that I want to do long term, and I got more mirrors into what I would be doing in grad school from talking to students and getting to know faculty.”

Closing the loop

While participating in MSRP offered crucial research experience to Hines, the ICSR projects enabled her to engage in topics she’s passionate about and work that could drive tangible societal change.

“The experience felt much more concrete because we were working on these very sophisticated projects, in a supportive environment where people were very excited to work with us,” she says.

A significant benefit for Li was the chance to steer her research in alignment with her own interests. “I was actually given the opportunity to propose my own research idea, versus supporting a graduate student’s work in progress,” she explains. 

For Ovienmhada, the pairing of the two initiatives solidifies the efforts of MSRP and closes a crucial loop in diversity, equity, and inclusion advocacy. 

“I’ve participated in a lot of different DEI-related efforts and advocacy and one thing that always comes up is the fact that it’s not just about bringing people in, it’s also about creating an environment and opportunities that align with people’s values,” Ovienmhada says. “Programs like MSRP and ICSR create opportunities for people who want to do work that’s aligned with certain values by providing the needed mentoring and financial support.”
