Caspar Hare, Georgia Perakis named associate deans of Social and Ethical Responsibilities of Computing

Caspar Hare and Georgia Perakis have been appointed the new associate deans of the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Stephen A. Schwarzman College of Computing. Their new roles will take effect on Sept. 1.

“Infusing social and ethical aspects of computing in academic research and education is a critical component of the college mission,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I look forward to working with Caspar and Georgia on continuing to develop and advance SERC and its reach across MIT. Their complementary backgrounds and their broad connections across MIT will be invaluable to this next chapter of SERC.”

Caspar Hare

Hare is a professor of philosophy in the Department of Linguistics and Philosophy. A member of the MIT faculty since 2003, he works mainly in ethics, metaphysics, and epistemology. The general theme of his recent work has been to bring ideas about practical rationality and metaphysics to bear on issues in normative ethics and epistemology. He is the author of two books: “On Myself, and Other, Less Important Subjects” (Princeton University Press, 2009), about the metaphysics of perspective, and “The Limits of Kindness” (Oxford University Press, 2013), about normative ethics.

Georgia Perakis

Perakis is the William F. Pounds Professor of Management and professor of operations research, statistics, and operations management at the MIT Sloan School of Management, where she has been a faculty member since 1998. She investigates the theory and practice of analytics and its role in operations problems and is particularly interested in how to solve complex and practical problems in pricing, revenue management, supply chains, health care, transportation, and energy applications, among other areas. Since 2019, she has been the co-director of the Operations Research Center, an interdepartmental PhD program that jointly reports to MIT Sloan and the MIT Schwarzman College of Computing, a role in which she will remain. Perakis will also assume an associate dean role at MIT Sloan in recognition of her leadership.

Hare and Perakis succeed David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, and Julie Shah, the H.N. Slater Professor of Aeronautics and Astronautics, who will be stepping down from their roles at the conclusion of their three-year term on Aug. 31.

“My deepest thanks to Dave and Julie for their tremendous leadership of SERC and contributions to the college as associate deans,” says Huttenlocher.

SERC impact

As the inaugural associate deans of SERC, Kaiser and Shah have been responsible for advancing a mission to incorporate humanist, social science, social responsibility, and civic perspectives into MIT’s teaching, research, and implementation of computing. In doing so, they have engaged dozens of faculty members and thousands of students from across MIT during these first three years of the initiative.

They have brought together people from a broad array of disciplines to collaborate on crafting original materials such as active learning projects, homework assignments, and in-class demonstrations. A collection of these materials was recently published and is now freely available to the world via MIT OpenCourseWare.

In February 2021, they launched the MIT Case Studies in Social and Ethical Responsibilities of Computing for undergraduate instruction across a range of classes and fields of study. The specially commissioned and peer-reviewed cases are based on original research and are brief by design. Three issues have been published to date and a fourth will be released later this summer. Kaiser will continue to oversee the successful new series as editor.

Last year, 60 undergraduates, graduate students, and postdocs joined a community of SERC Scholars to help advance SERC efforts in the college. The scholars participate in unique opportunities throughout the year, such as the summer Experiential Ethics program. Through SERC, a multidisciplinary team of graduate students last winter worked with the instructors and teaching assistants of class 6.036 (Introduction to Machine Learning), MIT’s largest machine learning course, to infuse weekly labs with material covering ethical computing, data and model bias, and fairness in machine learning.

Through efforts such as these, SERC has had a substantial impact at MIT and beyond. Over the course of their tenure, Kaiser and Shah have engaged about 80 faculty members, and more than 2,100 students took courses that included new SERC content in the last year alone. SERC’s reach extended well beyond engineering students, with about 500 exposed to SERC content through courses offered in the School of Humanities, Arts, and Social Sciences, the MIT Sloan School of Management, and the School of Architecture and Planning.

Why it’s a problem that pulse oximeters don’t work as well on patients of color

Pulse oximetry is a noninvasive test that measures the oxygen saturation level in a patient’s blood, and it has become an important tool for monitoring many patients, including those with Covid-19. But new research links faulty readings from pulse oximeters with racial disparities in health outcomes, potentially leading to higher rates of death and complications, such as organ dysfunction, in patients with darker skin.

It is well known that non-white intensive care unit (ICU) patients receive less-accurate readings of their oxygen levels using pulse oximeters — the common devices clamped on patients’ fingers. Now, a paper co-authored by MIT scientists reveals that inaccurate pulse oximeter readings can lead to critically ill patients of color receiving less supplemental oxygen during ICU stays.

The paper, “Assessment of Racial and Ethnic Differences in Oxygen Supplementation Among Patients in the Intensive Care Unit,” published in JAMA Internal Medicine, focused on the question of whether there were differences in supplemental oxygen administration among patients of different races and ethnicities that were associated with pulse oximeter performance discrepancies.

The findings showed that inaccurate readings of Asian, Black, and Hispanic patients resulted in them receiving less supplemental oxygen than white patients. These results provide insight into how health technologies such as the pulse oximeter contribute to racial and ethnic disparities in care, according to the researchers.

The study’s senior author, Leo Anthony Celi, clinical research director and principal research scientist at the MIT Laboratory for Computational Physiology, and a principal research scientist at the MIT Institute for Medical Engineering and Science (IMES), says the challenge is that health care technology is routinely designed around the majority population.

“Medical devices are typically developed in rich countries with white, fit individuals as test subjects,” he explains. “Drugs are evaluated through clinical trials that disproportionately enroll white individuals. Genomics data overwhelmingly come from individuals of European descent.”

“It is therefore not surprising that we observe disparities in outcomes across demographics, with poorer outcomes among those who were not included in the design of health care,” Celi adds.

While pulse oximeters are widely used because of their ease of use, the most accurate way to measure blood oxygen saturation (SaO2) levels is by taking a sample of the patient’s arterial blood. Falsely normal pulse oximetry (SpO2) readings can therefore mask hidden hypoxemia. Elevated bilirubin in the bloodstream and the use of certain ICU medications called vasopressors can also throw off pulse oximetry readings.

More than 3,000 participants were included in the study, of whom 2,667 were white, 207 Black, 112 Hispanic, and 83 Asian, using data from the Medical Information Mart for Intensive Care version 4, or MIMIC-IV, dataset. This dataset comprises more than 50,000 patients admitted to the ICU at Beth Israel Deaconess Medical Center and includes both pulse oximeter readings and oxygen saturation levels detected in blood samples. MIMIC-IV also includes rates of administration of supplemental oxygen.

When the researchers compared SpO2 levels taken by pulse oximeter to oxygen saturation measured in blood samples, they found that Black, Hispanic, and Asian patients had higher SpO2 readings than white patients for a given blood-sample oxygen saturation level. The turnaround time of arterial blood gas analysis may range from several minutes up to an hour. As a result, clinicians typically make decisions based on pulse oximetry readings, unaware of their suboptimal performance in certain patient demographics.
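
As a rough illustration of how such a comparison can be tabulated (this is not the study’s code; the column names and the 92 percent/88 percent cutoffs are assumptions made only for the sketch), paired oximeter and arterial readings can be grouped to estimate “hidden hypoxemia” rates:

```python
# Minimal sketch (not the study's code): estimating "hidden hypoxemia" rates
# by race/ethnicity from paired pulse-oximeter (SpO2) and arterial (SaO2)
# readings. Column names and the 92%/88% thresholds are illustrative only.
import pandas as pd

# Hypothetical paired readings, one row per blood-gas measurement
df = pd.DataFrame({
    "race": ["White", "White", "Black", "Black", "Hispanic", "Asian"],
    "spo2": [97, 93, 96, 94, 95, 93],   # pulse oximeter reading (%)
    "sao2": [95, 91, 86, 87, 89, 90],   # arterial blood gas reading (%)
})

# Hidden hypoxemia: the oximeter looks reassuring while arterial oxygen is low
df["hidden_hypoxemia"] = (df["spo2"] >= 92) & (df["sao2"] < 88)

# Rate of hidden hypoxemia within each group
print(df.groupby("race")["hidden_hypoxemia"].mean())
```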

Eric Gottlieb, the study’s lead author, a nephrologist, a lecturer at MIT, and a Harvard Medical School fellow at Brigham and Women’s Hospital, called for more research to be done, in order to better understand “how pulse oximeter performance disparities lead to worse outcomes; possible differences in ventilation management, fluid resuscitation, triaging decisions, and other aspects of care should be explored. We then need to redesign these devices and properly evaluate them to ensure that they perform equally well for all patients.”

Celi emphasizes that understanding biases that exist within real-world data is crucial in order to better develop algorithms and artificial intelligence to assist clinicians with decision-making. “Before we invest more money on developing artificial intelligence for health care using electronic health records, we have to identify all the drivers of outcome disparities, including those that arise from the use of suboptimally designed technology,” he argues. “Otherwise, we risk perpetuating and magnifying health inequities with AI.”

Celi described the project and research as a testament to the value of data sharing that is the core of the MIMIC project. “No one team has the expertise and perspective to understand all the biases that exist in real-world data to prevent AI from perpetuating health inequities,” he says. “The database we analyzed for this project has more than 30,000 credentialed users consisting of teams that include data scientists, clinicians, and social scientists.”

The many researchers working on this topic together form a community that shares and performs quality checks on codes and queries, promotes reproducibility of the results, and crowdsources the curation of the data, Celi says. “There is harm when health data is not shared,” he says. “Limiting data access means limiting the perspectives with which data is analyzed and interpreted. We’ve seen numerous examples of model mis-specifications and flawed assumptions leading to models that ultimately harm patients.”

Removing Oxygen from Natural Gas

Do you want to learn more about the deoxygenation of natural gas? Then you are in the right place. This post covers the benefits of removing oxygen from natural gas and the information you need about how it is done. Oxygen is present both in the environment and in natural gas streams: natural gas, liquefied petroleum gas (LPG), and liquefied natural gas all contain some free oxygen. Gas drawn under vacuum from sources such as coal mines, oil recovery systems, and landfills also picks up oxygen. Many pipeline specifications require natural gas to contain no more than a few parts per million of oxygen, yet contaminated gas streams often exceed that, and even conventional streams can carry up to 100 ppm. Oxygen can also be introduced during processing, for example when air is used in gas dryers, when LPG is blended with air to reduce its calorific value, or when air is drawn into the landfill as the gas is extracted.

Why Does Natural Gas Need to Have the Oxygen Removed?

Oxygen in natural gas should be avoided because it can corrode processing equipment, increasing maintenance and replacement costs. When oxygen reacts with hydrogen sulfide, it forms elemental sulfur. Oxygen also degrades purge streams by oxidizing the glycol solvent used in drying plants and by producing salts in acid gas removal systems. Oxygen in a natural gas stream can therefore lead to several problems, including the breakdown of process chemicals (such as amines), increased pipeline erosion, and violation of pipeline limits of around 10 ppm. Separating oxygen from natural gas is challenging: the technology is not widely developed or accessible, and the market’s potential is thought to be limited. Because such removal projects are costly and suitable treatment routes are scarce, the sector has yet to build up broad expertise and competence.

Catalytic Oxidation

Passing a natural gas stream over a catalyst bed at elevated temperature can remove oxygen from the gas: the natural gas itself serves as the fuel in a catalytic “burn” that converts the oxygen to CO2 and water. Because heavier hydrocarbons (propane and above) react at lower temperatures, they are favored as the fuel; this makes it possible to treat heavier hydrocarbon streams with higher oxygen concentrations than streams containing only methane. In some circumstances, hydrogen can serve as the fuel. If sufficient hydrogen is not already present, it can be injected into the gas stream to promote the reaction. Hydrogen’s two main benefits are a lower reaction temperature and less potential for secondary reactions.
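
For illustration only (the article does not spell out the stoichiometry), the overall reactions, assuming methane or hydrogen as the fuel, can be written as:

```latex
% Illustrative overall reactions for catalytic oxygen removal
% (methane or hydrogen assumed as the fuel; not taken from the article)
\begin{align*}
  \mathrm{CH_4} + 2\,\mathrm{O_2} &\rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} \\
  2\,\mathrm{H_2} + \mathrm{O_2}  &\rightarrow 2\,\mathrm{H_2O}
\end{align*}
```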

Advantages of Taking the Oxygen Out of Natural Gas

  • Handles gases of any volume or oxygen content
  • Simple to operate
  • Economical
  • Highly reliable
  • Reduces the oxygen content below detectable levels

Conclusion

That brings this article to a close. If you have read it in full, you now know the essentials of deoxygenating natural gas.

Contact Us:

Chemical Products Industries, Inc.

Address: 7649 SW 34th St, Oklahoma City, OK
Phone: (800) 624-4356

Christopher Capozzola named senior associate dean for open learning

MIT Professor Christopher Capozzola has joined MIT Open Learning as senior associate dean, effective Aug. 1. Reporting to interim Vice President for Open Learning Eric Grimson, Capozzola will oversee open education offerings including OpenCourseWare, MITx, and MicroMasters, as well as the Digital Learning Lab, Digital Learning in Residential Education, and MIT Video Productions.

Capozzola has a long history of participation in the MIT Open Learning mission. A member of the MITx Faculty Advisory Committee, Capozzola also has five courses published on OpenCourseWare (OCW), and one course, Visualizing Imperialism in the Philippines, published on both MITx and the Open Learning Library.

“Chris has proven his commitment to the mission of Open Learning through his contributions both to external learners and to MIT students, as well as through his own research and professional projects. He’s also demonstrated his ability to engage collaboratively with the MIT faculty and broader community on issues related to effective delivery of educational experiences,” says Grimson. “MIT’s open online education offerings are more relevant than they’ve ever been, reaching many millions of people around the world. Chris will provide the essential faculty attention and dedicated support needed to help Open Learning continue to reflect the full spectrum of MIT’s knowledge and teaching to the world.”

Capozzola comes to MIT Open Learning from the History Section in the School of Humanities, Arts and Social Sciences (SHASS), where he has taught since 2002. He’s the author of two books and numerous articles exploring citizenship, war, and the military in modern American history. He has served as department head since 2020 and is a MacVicar Faculty Fellow, MIT’s highest honor for undergraduate teaching. He also served as MIT’s secretary of the faculty from 2015 to 2017.

In addition to his teaching and faculty governance roles, Capozzola is an active proponent of public history. He served as a co-curator of “The Volunteers: Americans Join World War I, 1914-1919,” a multi-platform public history initiative marking the centennial of World War I. He currently serves as academic advisor for the Filipino Veterans Recognition and Education Project, an online educational initiative.

His interest in public-facing education projects has grown, he says, “because the best parts of my job involve sharing history with excited and curious audiences. It’s very clear that those audiences are enormous and global, and that learners bring their own backgrounds, questions, and interests to the kind of history we produce at MIT.”

This enthusiasm extends to MIT Open Learning as well: Capozzola is eager to work with MIT faculty to leverage digital learning to be more nimble in their teaching, and to support learners in moving smoothly through MIT’s digital resources.

“What has drawn me to Open Learning from the beginning is my own curiosity about how we can teach better and differently, as well as the creativity of the people who are involved. Everything that I’ve worked on with Open Learning has been very collaborative,” says Capozzola. “People bring all different kinds of expertise: about technology, about the science of learning, about students at MIT and learners beyond. Only by getting everybody together and collaborating can we produce these amazing resources.”

In his new role as senior associate dean, he’s looking forward to collaborating with faculty and instructors across all of the Institute’s schools and departments, helping them to work with MIT Open Learning through every possible avenue and lowering barriers to participation.

“Open Learning is a critical component of the overall MIT mission. We need to share MIT’s knowledge with the nation and the world in the 21st century. One way to think about that is, if we’re doing something at MIT that we think advances the mission but it’s not on Open Learning, then we’re not advancing MIT’s mission,” he says. This includes offering courses through MITx and OCW, as well as working with the Residential Education team and the Digital Learning Lab to incorporate learning design and digital technologies into the classroom to improve teaching and learning at MIT.

“When it comes to technology in the classroom, I have a skeptical enthusiasm and an enthusiastic skepticism. I want to think about what it means to teach and learn at a residential university in the 2020s. We have all learned a lot of lessons about that during the pandemic, and now is a great moment to convene conversations within Open Learning, at MIT, and beyond. It’s time for thoughtful reflection about what we do and how we can engage the most people with as much of an MIT education as we can,” Capozzola says.

Another exciting opportunity Capozzola sees is guiding MIT Open Learning toward reflecting the Institute’s values and priorities as well as its knowledge. Working closely with dean for digital learning Cynthia Breazeal, who oversees MIT Open Learning’s professional and corporate education offerings and research and engagement units, Capozzola envisions developing new content and strategies that accelerate MIT’s efforts in diversity, equity, and inclusion; climate and sustainability; and more. 

“I really want Open Learning to reflect MIT. By which I mean, everyone at MIT should see themselves, their disciplines, and their high standards for teaching, learning, and research represented in Open Learning,” Capozzola says. “The staff and leadership of Open Learning have worked hard over the last 10 years to do that. I’m looking forward to thinking with Open Learning and in dialogue with MIT about our priorities, our values, and our next steps.”

3 Questions: John Durant on the new MIT Museum at Kendall Square

To the outside world, much of what goes on at MIT can seem mysterious. But the MIT Museum, whose new location is in the heart of Kendall Square, wants to change that. With a specially designed space by architects Höweler + Yoon, new exhibitions, and new public programs, this fall marks a reset for the 50-year-old institution. 

The museum hopes to inspire future generations of scientists, engineers, and innovators. And with its new free Cambridge Residents Membership, the museum is sending a clear message to its neighbors that all are welcome.

John Durant, The Mark R. Epstein (Class of 1963) Director of the MIT Museum and an affiliate of MIT’s Program in Science, Technology, and Society, speaks here about the museum’s transformation and what’s to come when it opens its doors to the public on Oct. 2.

Q: What role will the new museum play in making MIT more accessible and better understood?

A: The MIT Museum is consciously standing at the interface between a world-famous research institute and the wider world. Our task here is to “turn MIT inside out,” by making what MIT does visible and accessible to the wider world. We are focused on the question: What does all this intensive creativity, research, innovation, teaching, and learning at MIT mean? What does it all mean for the wider community of which we’re part?

Our job as a museum is to make what MIT does, both the processes and the products, accessible. We do this for two reasons. First, MIT’s mission statement is a public service mission statement — it intends to help make the world a better place. The second reason is that MIT is involved with potentially world-changing ideas and innovations. If we’re about ideas, discoveries, inventions, and applications that can literally change the world, then we have a responsibility to the world. We have a responsibility to make these things available to the people who will end up being affected by them, so that we can have the kinds of informed conversations that are necessary in a democratic society. 

“Essential MIT,” the first gallery in the museum, highlights the people behind the research and innovation at MIT. Although it’s tempting to focus on the products of research, in the end everything we do is about the people who do it. We want to humanize research and innovation, and the best way to do that is to put the people — whether they are senior faculty, junior faculty, students, or even visitors — at the center of the story. In fact, there will be a big digital wall display of all the people that we comprise, a visualization of the MIT community, and the visitor will be able to join this community on a temporary basis if they want to, by putting themselves in the display. 

MIT can sometimes seem like a rather austere place. It may be seen as the kind of a place where only those super-smart people go to do super-smart things. We don’t want to send that message. We’re an open campus, and we want to send a message to people that whoever they are, from whatever background, whatever part of the community, whatever language they speak, wherever they live, they have a warm welcome with us. 

Q: How will the museum be showcasing innovation and research? 

A: The new museum is structured in a series of eight galleries, which spiral up the building, and that travel from the local to the global and back again. “Essential MIT” is quite explicitly an introduction to the Institute itself. In that gallery, we feature a few examples of current big projects that illustrate the kind of work that MIT does. In the last gallery, the museum becomes local again through the museum’s collections. On the top floor, for the first time in the museum’s history, we will be able to show visitors that we’re a collecting museum, and that we hold all manner of objects and artifacts, which make up a permanent record — an archive, if you will — of the research and innovation that has gone on in this place. 

But, of course, MIT doesn’t only concern itself with things that only have local significance. It’s involved in some of the biggest research questions that are being tackled worldwide: climate change, fundamental physics, genetics, artificial intelligence, the nature of cancer, and many more. Between the two bookends of these rather locally focused galleries, therefore, we have put galleries dealing with global questions in research and innovation. We’re trying to point out that current research and innovation raises big questions that go beyond the purely scientific or purely technical. We don’t want to shy away from the ethical, social, or even political questions posed by this new research, and some of these larger questions will be treated “head-on” in these galleries. 

For example, we’ve never before tried to explain to people what AI is, and what it isn’t — as well as some of its larger implications for society. In “AI: Mind the Gap,” we’re going to explain what AI is good at doing, and by the same token, what it is not good at doing. For example, we will have an interactive exhibit that allows visitors to see a neural network learning in real time — in this case, how to recognize faces and facial expressions. Such learning machines are fundamental to what AI can do, and there are many positive applications of that in the real world. We will also give people the chance to use AI to create poetry. But we’ll also be looking at some of the larger concerns that some of these technologies raise — issues like algorithmic bias, or the area called deepfake technology, which is increasingly widely used. In order to explain this technology to people, we are going to display an artwork based on the Apollo moon landings that uses deepfakes.

With one exception, nothing in the new museum is something the visitor will have seen before, and that exception is by careful design. We’re bringing with us some of the kinetic, or moving, sculptures by the artist Arthur Ganson. We value the connections his work raises at the interface between science, technology, and the arts. In trying to get people to think in different ways about what’s happening in the worlds of research and innovation, artists often bring fresh perspectives.

Q: What kinds of educational opportunities will the museum now be able to present?

A: The new museum has about 30 percent more space for galleries and exhibitions than the old museum, but it has about 300 percent more space for face-to-face activities. We’re going to have two fully equipped teaching labs in the new museum, where we can teach a very wide variety of subjects, including wet lab work. We shall also have the Maker Hub, a fully-equipped maker space for the public. MIT’s motto is “mens et manus,” mind and hand, and we want to be true to that. We want to give people a chance not only just to look at stuff, but also to make stuff, to do it themselves. 

At the heart of the new museum is a space called The Exchange, which is designed for face-to-face meetings, short talks, demonstrations, panel discussions, debates, films, anything you like. I think of The Exchange as the living room of the new museum, a place with double-height ceilings, bleacher seating, and a very big LED screen so that we can show almost anything we need to show. It’s a place where visitors can gather, learn, discuss, and debate; where they can have the conversations about what to do about deepfakes, or how to apply gene editing most wisely, or whatever the issue of the day happens to be. We’re unapologetically putting these conversations center stage. 

Finally, the first month of the opening events includes an MIT Community Day, a Cambridge Residents Day, and the museum’s public opening on Oct. 2. The first week after the opening will feature the Cambridge Science Festival, the festival founded and presented by the MIT Museum which has been re-imagined this year. The festival will feature large-scale projects, many taking place in MIT’s Open Space, an area we think of as the new museum’s “front lawn.”

Study finds Wikipedia influences judicial behavior

Mixed appraisals of one of the internet’s major resources, Wikipedia, are reflected in the slightly dystopian article “List of Wikipedia Scandals.” Yet billions of users routinely flock to the online, anonymously editable, encyclopedic knowledge bank for just about everything. How this unauthoritative source influences our discourse and decisions is hard to reliably trace. But a new study attempts to measure how knowledge gleaned from Wikipedia may play out in one specific realm: the courts.

A team of researchers led by Neil Thompson, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), recently came up with a friendly experiment: creating new legal Wikipedia articles to examine how they affect the legal decisions of judges. They set off by developing over 150 new Wikipedia articles on Irish Supreme Court decisions, written by law students. Half of these were randomly chosen to be uploaded online, where they could be used by judges, clerks, lawyers, and so on — the “treatment” group. The other half were kept offline, and this second group of cases provided the counterfactual basis of what would happen to a case absent a Wikipedia article about it (the “control”). They then looked at two measures: whether the cases were more likely to be cited as precedents by subsequent judicial decisions, and whether the argumentation in court judgments echoed the linguistic content of the new Wikipedia pages. 

It turned out the published articles tipped the scales: Getting a public Wikipedia article increased a case’s citations by more than 20 percent. The increase was statistically significant, and the effect was particularly strong for cases that supported the argument the citing judge was making in their decision (but not the converse). Unsurprisingly, the increase was bigger for citations by lower courts — the High Court — and mostly absent for citations by appellate courts — the Supreme Court and Court of Appeal. The researchers suspect this is showing that Wikipedia is used more by judges or clerks who have a heavier workload, for whom the convenience of Wikipedia offers a greater attraction. 

“To our knowledge, this is the first randomized field experiment that investigates the influence of legal sources on judicial behavior. And because randomized experiments are the gold standard for this type of research, we know the effect we are seeing is causation, not just correlation,” says Thompson, the lead author of the study. “The fact that we wrote up all these cases, but the only ones that ended up on Wikipedia were those that won the proverbial ‘coin flip,’ allows us to show that Wikipedia is influencing both what judges cite and how they write up their decisions.”

“Our results also highlight an important public policy issue,” Thompson adds. “With a source that is as widely used as Wikipedia, we want to make sure we are building institutions to ensure that the information is of the highest quality. The finding that judges or their staffs are using Wikipedia is a much bigger worry if the information they find there isn’t reliable.” 

A paper describing the study is being published in “The Cambridge Handbook of Experimental Jurisprudence” (Cambridge University Press, 2022). Joining Thompson on the paper are Brian Flanagan, Edana Richardson, and Brian McKenzie of Maynooth University in Ireland and Xueyun Luo of Cornell University.

The researchers’ statistical model essentially compared how much citation behavior changed for the treatment group (first difference: before versus after) and how that compared with the change that happened for the control group (second difference: treatment versus control).
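
As a minimal sketch of that two-by-two comparison (with made-up citation counts, not the study’s data or code), the difference-in-differences arithmetic looks like this:

```python
# Minimal sketch of the two-by-two difference-in-differences comparison
# described above, using made-up citation counts (not the study's data).
import pandas as pd

data = pd.DataFrame({
    "group":     ["treatment", "treatment", "control", "control"],
    "period":    ["before", "after", "before", "after"],
    "citations": [1.0, 1.6, 1.0, 1.3],   # average citations per case (hypothetical)
})

means = data.set_index(["group", "period"])["citations"]

# First difference: change over time within each group
d_treat   = means["treatment", "after"] - means["treatment", "before"]
d_control = means["control", "after"] - means["control", "before"]

# Second difference: treatment change relative to control change
did_estimate = d_treat - d_control
print(f"Difference-in-differences estimate: {did_estimate:+.2f} citations per case")
```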

In 2018, Thompson first explored the idea of proving the causal role that Wikipedia plays in shaping knowledge and behavior by looking at how it shapes academic science. It turns out that adding scientific articles, in this case about chemistry, changed how the topic was discussed in scientific literature, and science articles added as references to Wikipedia received more academic citations as well.

That led Brian McKenzie, an associate professor at Maynooth University, to make a call. “I was working with students to add articles to Wikipedia at the time I read Neil’s research on the influence of Wikipedia on scientific research,” explains McKenzie. “There were only a handful of Irish Supreme Court cases on Wikipedia, so I reached out to Neil to ask if he wanted to design another iteration of his experiment using court cases.”

The Irish legal system proved the perfect test bed, as it shares a key similarity with other national legal systems, such as those of the United Kingdom and the United States — it operates within a hierarchical court structure where decisions of higher courts subsequently bind lower courts. Also, there are relatively few Wikipedia articles on Irish Supreme Court decisions compared to those of the U.S. Supreme Court — over the course of their project, the researchers increased the number of such articles tenfold.

In addition to looking at the case citations made in the decisions, the team also analyzed the language used in the written decision using natural language processing. What they found were the linguistic fingerprints of the Wikipedia articles that they’d created.

So what might this influence look like? Suppose A sues B in federal district court. A argues that B is liable for breach of contract; B acknowledges A’s account of the facts but maintains that they gave rise to no contract between them. The assigned judge, conscious of the heavy workload already delegated to her clerks, decides to conduct her own research. On reviewing the parties’ submissions, the judge forms the preliminary view that a contract has not truly been formed and that she should give judgment for the defendant. To write her official opinion, the judge googles some previous decisions cited in B’s brief that seem similar to the case between A and B. On confirming their similarity by reading the relevant case summaries on Wikipedia, the judge paraphrases some of the text of the Wikipedia entries in her draft opinion to complete her analysis. The judge then enters her judgment and publishes her opinion.

“The text of a court’s judgment itself will guide the law as it becomes a source of precedent for subsequent judicial decision-making. Future lawyers and judges will look back at that written judgment, and use it to decide what its implications are so that they can treat ‘like’ cases alike,” says coauthor Brian Flanagan. “If the text itself is influenced, as this experiment shows, by anonymously sourced internet content, that’s a problem. For the many potential cracks that have opened up in our ‘information superhighway’ that is the internet, you can imagine that this vulnerability could potentially lead to adversarial actors manipulating information. If easily accessible analysis of legal questions is already being relied on, it behooves the legal community to accelerate efforts to ensure that such analysis is both comprehensive and expert.”

Explained: How to tell if artificial intelligence is working the way we want it to

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer.

These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain. 

As the field of machine learning has grown, artificial neural networks have grown along with it.

Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don’t fully understand how they work. This makes it hard to know whether they are working correctly.

For instance, maybe a model designed to help physicians diagnose patients correctly predicted that a skin lesion was cancerous, but it did so by focusing on an unrelated mark that happens to frequently occur when there is cancerous tissue in a photo, rather than on the cancerous tissue itself. This is known as a spurious correlation. The model gets the prediction right, but it does so for the wrong reason. In a real clinical setting where the mark does not appear on cancer-positive images, it could result in missed diagnoses.

With so much uncertainty swirling around these so-called “black-box” models, how can one unravel what’s going on inside the box?

This puzzle has led to a new and rapidly growing area of study in which researchers develop and test explanation methods (also called interpretability methods) that seek to shed some light on how black-box machine-learning models make predictions.

What are explanation methods?

At their most basic level, explanation methods are either global or local. A local explanation method focuses on explaining how the model made one specific prediction, while global explanations seek to describe the overall behavior of an entire model. This is often done by developing a separate, simpler (and hopefully understandable) model that mimics the larger, black-box model.
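One common way to build such a global explanation is a surrogate model. The sketch below is only an illustration of that idea (synthetic data, not a method attributed to the researchers quoted here): it fits a shallow decision tree to reproduce a larger model’s predictions and reports how faithfully it mimics them.

```python
# Minimal sketch of a global surrogate explanation: fit a small, readable
# decision tree to mimic a larger black-box model's predictions.
# Synthetic data; illustrative only, not a specific method from the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# "Black-box" model
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained to reproduce the black box's outputs
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the surrogate mimics the black box (fidelity), and its rules
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```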

But because deep learning models work in fundamentally complex and nonlinear ways, developing an effective global explanation model is particularly challenging. This has led researchers to turn much of their recent focus onto local explanation methods instead, explains Yilun Zhou, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who studies models, algorithms, and evaluations in interpretable machine learning.

The most popular types of local explanation methods fall into three broad categories.

The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.

Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the model’s prediction.

Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.

“Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted,” says Zhou.
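
The article does not tie feature attribution to any particular algorithm. As one minimal, hedged example on tabular data, permutation importance measures how much a model’s accuracy drops when each feature is shuffled; the sketch below uses scikit-learn and synthetic data.

```python
# Minimal sketch of one feature-attribution technique (permutation importance)
# on tabular data. The article does not prescribe a specific method; this is
# just an illustration using scikit-learn and synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:+.3f}")
```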

A second type of explanation method is known as a counterfactual explanation. Given an input and a model’s prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model’s prediction, need to be higher for her to be approved.

“The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn’t get it, this explanation would tell them what they need to do to achieve their desired outcome,” he says.
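
As a toy, hedged illustration of the idea (synthetic data and hypothetical feature names, not a published counterfactual method), one can nudge a denied applicant’s features until a simple loan model flips its decision:

```python
# Minimal sketch of a counterfactual explanation for a toy loan model:
# starting from a denied applicant, nudge the (hypothetical) income and
# credit-score features until the model's decision flips. This is a naive
# search for illustration, not a specific published method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: columns = [income in $k, credit score]
X = np.column_stack([rng.normal(60, 20, 500), rng.normal(650, 80, 500)])
y = ((X[:, 0] / 100 + (X[:, 1] - 600) / 100
      + rng.normal(0, 0.3, 500)) > 0.8).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[45.0, 600.0]])   # denied applicant
step = np.array([[1.0, 5.0]])           # +$1k income, +5 score per step

x = applicant.copy()
while model.predict(x)[0] == 0:         # keep nudging until approved
    x = x + step

print("original      :", applicant[0], "-> decision", model.predict(applicant)[0])
print("counterfactual:", x[0], "-> decision", model.predict(x)[0])
```

The difference between the two rows is the counterfactual explanation: the smallest change (along this crude search direction) that would have flipped the model’s decision.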

The third category of explanation methods is known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.

A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
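
Principled sample-importance methods rely on influence estimates; as a minimal stand-in that conveys the idea, the sketch below simply retrieves the training example nearest to a query point (synthetic data, illustrative only).

```python
# Minimal illustration of the idea behind sample-importance explanations:
# retrieve the training example closest to a query point in feature space.
# (Principled methods use influence estimates; nearest-neighbor lookup is
# only a rough stand-in, shown here with synthetic data.)
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

X_train, y_train = make_classification(n_samples=500, n_features=4,
                                        random_state=0)

index = NearestNeighbors(n_neighbors=1).fit(X_train)

query = X_train[42] + 0.05              # a new point near training sample 42
dist, idx = index.kneighbors([query])

print(f"Most similar training sample: #{idx[0][0]} "
      f"(label {y_train[idx[0][0]]}, distance {dist[0][0]:.3f})")
```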

How are explanation methods used?

One motivation for developing these explanations is to perform quality assurance and debug the model. With more understanding of how features impact a model’s decision, for instance, one could identify that a model is working incorrectly and intervene to fix the problem, or toss the model out and start over.

Another, more recent, area of research is exploring the use of machine-learning models to discover scientific patterns that humans haven’t uncovered before. For instance, a cancer-diagnosing model that outperforms clinicians could be faulty, or it could actually be picking up on hidden patterns in an X-ray image that represent an early pathological pathway for cancer, patterns that were either unknown to human doctors or thought to be irrelevant, Zhou says.

It’s still very early days for that area of research, however.

Words of warning

While explanation methods can sometimes be useful for machine-learning practitioners when they are trying to catch bugs in their models or understand the inner workings of a system, end users should proceed with caution when trying to use them in practice, says Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in CSAIL.

As machine learning has been adopted in more disciplines, from health care to education, explanation methods are being used to help decision makers better understand a model’s predictions so they know when to trust the model and use its guidance in practice. But Ghassemi warns against using these methods in that way.

“We have found that explanations make people, both experts and nonexperts, overconfident in the ability or the advice of a specific recommendation system. I think it is very important for humans not to turn off that internal circuitry asking, ‘let me question the advice that I am given,’” she says.

Scientists know explanations make people over-confident based on other recent work, she adds, citing some recent studies by Microsoft researchers.

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi’s recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups.

Another pitfall of explanation methods is that it is often impossible to tell if the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn’t know how the model works, this is circular logic, Zhou says.

He and other researchers are working on improving explanation methods so they are more faithful to the actual model’s predictions, but Zhou cautions that even the best explanation should be taken with a grain of salt.

“In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations is balanced,” he adds.

Zhou’s most recent research seeks to do just that.

What’s next for machine-learning explanation methods?

Rather than focusing on providing explanations, Ghassemi argues that more effort needs to be made by the research community to study how information is presented to decision makers so they understand it, and more regulation needs to be put in place to ensure machine-learning models are used responsibly in practice. Better explanation methods alone aren’t the answer.

“I have been excited to see that there is a lot more recognition, even in industry, that we can’t just take this information and make a pretty dashboard and assume people will perform better with that. You need to have measurable improvements in action, and I’m hoping that leads to real guidelines about improving the way we display information in these deeply technical fields, like medicine,” she says.

And in addition to new work focused on improving explanations, Zhou expects to see more research related to explanation methods for specific use cases, such as model debugging, scientific discovery, fairness auditing, and safety assurance. By identifying fine-grained characteristics of explanation methods and the requirements of different use cases, researchers could establish a theory that would match explanations with specific scenarios, which could help overcome some of the pitfalls that come from using them in real-world scenarios.

Review: IT in health care has produced modest changes — so far

It has never been hard to imagine how information technology (IT) might improve health care services. Fast messaging replacing faxes. Electronic health records that can be accessed more easily. Software that can inform doctors’ decisions. Telemedicine that makes care more flexible. The possibilities seem endless.

But as a new review paper from an MIT economist finds, the overall impact of information technology on health care has been evolutionary, not revolutionary. Technology has lowered costs and improved patient care — but to a modest extent that varies across the health care landscape, while only improving productivity slightly. High-tech tools have also not replaced many health care workers.

“What we found is that even though there’s been this explosion in IT adoption, there hasn’t been a dramatic change in health care productivity,” says Joseph Doyle, an economist at the MIT Sloan School of Management and co-author of the new paper. “We’ve seen in other industries that it takes time to learn how to use [IT] best. Health care seems to be marching along that path.”

Relatedly, when it comes to health care jobs, Doyle says, “We don’t see dramatic changes in employment or wages across different levels of health care. We’re seeing case evidence of less hiring of people who transcribe orders, while for people who work in IT, we’re seeing more hiring of workers with those skills. But nothing dramatic in terms of nurse employment or doctor employment.”

Still, Doyle notes that health care “could be on the cusp of major changes” as organizations get more comfortable deploying technology efficiently.

The paper, “The Impact of Health Information and Communication Technology on Clinical Quality, Productivity, and Workers,” has been published online by the Annual Review of Economics as part of its August issue.

The authors are Ari Bronsoler PhD ’22, a recent doctoral graduate in economics at MIT; Doyle, who is the Erwin H. Schell Professor of Management and Applied Economics at the MIT Sloan School of Management; and John Van Reenen, a digital fellow in MIT’s Initiative for the Digital Economy and the Ronald Coase School Professor at the London School of Economics.

Safety first

The paper itself is a broad-ranging review of 975 academic research papers on technology and health care services; Doyle is a leading health care economist whose own quasi-experimental studies have quantified, among other things, the difference that increased health care spending yields. This literature review was developed as part of MIT’s Work of the Future project, which aims to better understand the effects of innovation on jobs. Given that health care spending accounted for 18 percent of U.S. GDP in 2020, grasping the effects of high-tech tools on the sector is an important component of this effort.

One facet of health care that has seen massive IT-based change is the use of electronic health records. In 2009, fewer than 10 percent of hospitals were using such records; by 2014, about 97 percent of hospitals had them. In turn, these records allow for easier flow of information within providers and help with the use of clinical decision-support tools — software that helps inform doctors’ decisions.

However, a review of the evidence shows the health care industry has not followed up to the same extent regarding other kinds of applications, like decision-support tools. One reason for that may be patient-safety concerns.

“There is risk aversion when it comes to people’s health,” Doyle observes. “You [medical providers] don’t want to make a mistake. As you go to a new system, you have to make sure you’re doing it very, very well, in order to not let anything fall through the cracks as you make that transition. So, I can see why IT adoption would take longer in health care, as organizations make that transition.”

Multiple studies do show a boost in overall productivity stemming from IT applications in health care, but not by an eye-catching amount — the total effect seems to be from roughly 1 percent to about 3 percent.

Complements to the job, not substitutes, so far

Patient outcomes also seem to be helped by IT, but with effects that vary. Examining other literature reviews of specific studies, the authors note that a 2011 survey found 60 percent of studies showed better patient outcomes associated with greater IT use, no effect in 30 percent of studies, and a negative association in 10 percent of studies. A 2018 review of 37 studies found positive effects from IT in 30 cases, 7 studies with no clear effect, and none with negative effects.

The more positive effects in more recent studies “may reflect a learning curve” by the industry, Bronsoler, Doyle, and Van Reenen write in their paper.

Their analysis also suggests that despite periodic claims that technology will wipe out health care jobs — through imaging, robots, and more — IT tools themselves have not reduced the medical labor force. In 1990, there were 8 million health care workers in the U.S., accounting for 7 percent of jobs; today there are 16 million health care workers in the U.S., accounting for 11 percent of jobs. In that time there has been a slight reduction in medical clerical workers, dropping from 16 percent to 13 percent of the health care workforce, likely due to automation of some routine tasks. But the persistence of hands-on jobs has been robust: The percentage of nurses has slightly increased among health care jobs since 1990, for example, from 15.5 percent to 17.1 percent.

“We don’t see a major shock to the labor markets yet,” Doyle says. “These digital tools are mostly supportive [for workers], as opposed to replacements. We say in economics that they’re complements and not substitutes, at least so far.”

Will tech lower our bills, or not?

As the authors note in the paper, past trends are no guarantee of future outcomes. In some industries, adoption of IT tools in recent decades has been halting at first and more influential later. And in the history of technology, many important inventions, like electricity, produce their greatest effects decades after their introduction.

It is thus possible that the U.S. health care industry could be headed toward some more substantial IT-based shifts in the future.

“We can see the pandemic speeding up telemedicine, for example,” Doyle says. To be sure, he notes, that trend depends in part on what patients want outside of the acute stages of a pandemic: “People have started to get used to interacting with their physicians [on video] for routine things. Other things, you need to go in and be seen … But this adoption-diffusion curve has had a discontinuity [a sudden increase] during the pandemic.”

Still, the adoption of telemedicine also depends on its costs, Doyle notes.

“Every phone call now becomes a [virtual] visit,” he says. “Figuring out how we pay for that in a way that still encourages the adoption, but doesn’t break the bank, is something payers [insurers] and providers are negotiating as we speak.”

Regarding all IT changes in medicine, Doyle adds, “Even though already we spend one in every five dollars that we have on health care, having more access to health care could increase the amount we spend. It could also improve health in ways that subsequently prevent escalation of major health care expenses.” In this sense, he adds, IT could “add to our health care bills or moderate our health care bills.”

For their part, Bronsoler, Doyle, and Van Reenen are working on a study that tracks variation in U.S. state privacy laws to see how those policies affect information sharing and the use of electronic health records. In all areas of health care, Doyle adds, continued study of technology’s impact is welcome.

“There is a lot more research to be done,” Doyle says.

Funding for the research was provided, in part, by the MIT Work of the Future Task Force and the U.K.’s Economic and Social Research Council, through its Programme on Innovation and Diffusion.

Donald “Bruce” Montgomery, influential electromagnet engineer, dies at 89

Donald “Bruce” Montgomery SM ’57, a highly influential engineer and longtime MIT researcher whose career was focused on the development of large-scale electromagnets, died on July 1. He was 89.

Montgomery’s contributions have been pivotal for numerous major facilities in fusion energy, in the design of magnets for particle accelerators for physics and medical applications, for magnetically levitated transportation, and in many other disciplines. He was a recognized international leader in magnet design and fusion engineering, a member of the National Academy of Engineering, and recipient of numerous awards including the Dawson Award for Excellence in Plasma Physics Research (1983) and the Fusion Power Associates Distinguished Career Award (1998).

Montgomery earned a BA from Williams College and, in 1957, an SM from MIT’s Department of Electrical Engineering. In 1967 he received an ScD from the University of Lausanne.

Following his graduation from MIT, he joined the staff of MIT Lincoln Laboratory, and shortly after began work on high-field magnets under Francis Bitter, the renowned magnet designer and founder of the National Magnet Laboratory at MIT. Montgomery rose to become associate director of what was later renamed the Francis Bitter National Magnet Laboratory. During this period he authored the book “Solenoid Magnet Design: The Magnetic and Mechanical Aspects of Resistive and Superconducting Magnets,” which remains a standard reference.

A turn toward fusion

Montgomery’s expertise was next harnessed to a growing program in fusion energy. Following the measurement of plasma temperatures exceeding 10 million degrees in the Soviet T3 tokamak, a race was on to build ever more capable magnetic confinement experiments. Working with Bruno Coppi of MIT’s physics department and Ron Parker from electrical engineering, Montgomery led a team that designed and constructed two tokamak devices capable of operating with magnetic fields of 12 tesla and above, a field strength that remains unmatched in fusion research to this day. The initial device, known as Alcator A, set a world record for the key plasma confinement metric. The follow-on device, Alcator C, extended this record in the 1980s and gave confidence that plasma conditions sufficient for a fusion power plant could indeed be achieved.

The record-setting performance by both devices was made possible by the use of breakthrough magnet technology developed with Montgomery’s insight and leadership. One can draw a straight line from these early breakthroughs in magnet technology, and the scientific progress they enabled, to the further evolution of magnet technology now being used in SPARC, a demonstration fusion device led by MIT and the startup company Commonwealth Fusion Systems that is designed to produce more energy than it consumes.

Montgomery also had a well-recognized ability to manage very large projects and to lead diverse groups of scientists, engineers, technicians, and students. As a result he was appointed chief engineer on several national fusion construction projects and had a leadership role in the early days of the international fusion project known as ITER. In the 1990s he led one of three national consortia vying to develop maglev technology under the U.S. Department of Transportation’s Maglev Initiative.

Creating a revolutionary cable

While at the National Magnet Lab, Montgomery, Henry Kolm, and Mitch Hoenig invented the concept of the cable-in-conduit conductor (CICC). In those early days of large-scale superconducting magnet research, large-bore, high-field superconducting magnets were built by brute force. These older designs were unstable and could not meet the demand for the ever-higher magnetic fields and larger sizes that improve the performance of magnetic confinement fusion machines. The technology was impeding advancement, especially for the tokamak’s poloidal field magnets, which were required to deliver rapidly changing fields.

Montgomery, Kolm, and Hoenig solved these problems by combining many superconducting wires into a cable, using standard industrial equipment, and then placing the cable inside a tube (conduit) of steel or another high-strength metal alloy. The magnet was cooled to and maintained at 4 K by flowing supercritical helium within the conduit. Because each conductor could be insulated against high voltages, large-bore, high-field magnets with high stored magnetic energy could be safely protected from quench. The strong metal alloy conduit provided mechanical strength distributed optimally throughout the winding cross-section. And the flowing helium provided excellent heat transfer from all the superconducting wires in the cable, resulting in very high electrothermal stability, especially for fast-ramped magnets.

Although the CICC concept was deemed heretical within the international applied superconductivity community and dismissed as impractical, under Montgomery’s leadership the MIT group rapidly developed and proved the concept. Today, every working fusion device in the world that uses superconducting magnets employs this conductor, including tokamaks (e.g., EAST, KSTAR, JT60-SA), helical machines (LHD), and stellarators (Wendelstein 7-X). It is the baseline conductor design for ITER and has found application in particle accelerators and magnetic levitation.

Exploring magnetic levitation and propulsion

In the 1970s, Montgomery and Kolm of the Francis Bitter Magnet Laboratory collaborated with Richard Thornton of the MIT Department of Electrical Engineering to formulate the “magplane” concept of magnetic levitation and propulsion. An early model-scale demonstration device was built and tested on MIT’s athletic fields. Montgomery and Kolm later founded Magplane Technology, Inc. (MTI), a small company focused on developing advanced applications of magnetic levitation and propulsion. A working version of the technology was built in China, where it was used to transport coal from mines, avoiding the excessive coal dust and waste produced by open trucking. In the 1980s, Montgomery worked with Peter Marston and Mitch Hoenig, leading an MIT team that developed very large-scale superconducting magnets for magnetohydrodynamic electric power generation.

Engineers and scientists know that failure can be the best instructor. Montgomery took that lesson to heart, diagnosing failure mechanisms in large magnet systems and authoring several meta-studies that analyzed and tabulated the underlying causes. This work allowed engineers to focus on the most critical aspects of their designs and contributed to the growing reliability of research magnets. After his retirement from MIT in 1996, Montgomery founded and served as president of MTECHNOLOGY Inc., an engineering consultancy that specializes in risk and reliability.

An engineer’s engineer

Joe Minervini, one of Montgomery’s protégés, notes: “Bruce was considered by me and most people who knew him to be an ‘engineer’s engineer.’ Although he always possessed a deep scientific understanding of the technology problem he was attacking, he always seemed to formulate a brilliant but practical engineering solution. Over his long career at MIT, he demonstrated this time and again on many of the most advanced and challenging new technologies built around conventional and superconducting magnets.”

Beyond the breadth of his technical contributions and committed mentorship, Bruce Montgomery will be remembered for his warm personality and his calm, steady demeanor, which was of inestimable value when things got tough — a common occurrence when pushing the envelope in research. He had a unique ability to take control of contentious technical and management discussions and to gently pull or push everyone to an effective consensus and into action. He will be sorely missed by his friends, family, and colleagues.

Montgomery was predeceased by his wife of 52 years, Nancy Ford Fenn, who passed away in 2006, and by Elizabeth Bartlett Sturges, with whom he spent many happy years until her passing in 2021. He is survived by his son, Timothy Montgomery, and his wife Susan of Scituate, Massachusetts; his daughter, Melissa Sweeny, and her husband Tom of Groton, Massachusetts; and his grandchildren, Jenna Sweeny, Christopher Sweeny, and Benjamin Sweeny.
