Political Methodology Lab to dig deep into new kinds of data

Political scientists with a quantitative bent have long relied on time-tested tools such as in-person and telephone surveys and official government statistics. But the rapid ascendance of digital technology and the emergence of new channels for communication and information pose a challenge for data-based social science.

“Technology has completely changed how we get political news and form opinions, with people interacting almost entirely through social media like Facebook and Twitter,” says Teppei Yamamoto, the Alfred Henry and Jean Morrison Hayes Career Development Associate Professor of Political Science. “Today scientists seeking to understand what shapes political beliefs and behavior must focus on online activities, which means contending with massive amounts of new kinds of data.”

In order to cope with this situation, says Yamamoto, “researchers need new tools for asking questions and analyzing responses.”

To help accomplish just that, Yamamoto has launched the Political Methodology Lab. A research initiative sponsored by the Department of Political Science and the School of Humanities, Arts, and Social Sciences, the lab will advance new computational and analytical research by department faculty, host a speaker series and workshops on advanced quantitative methods, and fund innovative data science projects by students, with the help of its own new, high-performance computing cluster.

Behind the methodology

Powering the lab’s mission are its core members, researchers who “are at the forefront of innovating statistical and computational approaches to important problems,” says Yamamoto.

Among this group is F. Daniel Hidalgo, the Cecil and Ida Green Associate Professor of Political Science, who is examining electoral behavior and political representation in developing countries, as well as government transparency in the United States. Hidalgo’s work involves new statistical models and the application of machine-learning techniques.

Another core colleague, Assistant Professor In Song Kim, is developing a database of lobbying activities in Washington, D.C. Built with a novel algorithm and accessible through a publicly available portal, LobbyView, the database permits users to “follow the money” from interest groups on specific legislative issues.

Yamamoto’s own lab research concerns a troubling trend in American democracy. “There is such a polarization of opinions that Democrats and Republicans think about the world in completely different ways, so they can’t talk to each other, much less understand each other,” says Yamamoto. “I want to know why people are getting stuck in these informational silos.”

Yamamoto is investigating one explanation for this increased public partisanship: the tendency of individuals to engage exclusively in social media that matches their pre-existing ideologies.

“In the past, people watched or read the national news and received information that might challenge their beliefs,” says Yamamoto. “Today, it seems people choose information environments where they receive reinforcement for their opinions, making it much less likely for them to be receptive to alternative views.”

But testing this hypothesis in a real-world way requires a fresh approach. Past research has involved showing subjects different types of media content and querying them later about the potential impact of their media consumption. Yamamoto and lab affiliate Adam Berinsky, the Mitsui Professor of Political Science, are devising new experimental designs to get a better grasp on the polarizing effects of media.

One study works with online survey respondents in an effort to determine whether their political persuasions confine them to similar sorts of media streams and news articles. In a related experiment, lab researchers are analyzing a data set tracking the media consumption of thousands of people with known political ideologies to see whether exposure to different media treatments of political information alters the media they pursue.

Yamamoto hopes that the results of these different experiments will reveal in a quantifiable way whether consuming politically tinged social media contributes to trapping people inside ideological silos and reinforcing their pre-existing opinions.

Another major question the lab hopes to pursue is whether the algorithms embedded in some of today’s most popular communications platforms, such as YouTube, channel users into these silos, closing them off from alternative political perspectives.

The age of information technology

The overarching goal of Political Methodology Lab research, suggests Yamamoto, is to reveal the rapidly evolving machinery of political discourse, participation, and governance, with the hope of identifying threats to these institutions. “All over the world democracy is in trouble,” says Yamamoto. “The thing is, the societal changes brought about by information technology create both crises and new ways to analyze what is happening.”

Yamamoto, whose father served as a Japanese prefecture official, once imagined becoming a diplomat, but he has long been devoted to using scientific methods to understand and buttress democratic institutions. As an undergraduate at the University of Tokyo, he studied the voting behavior of European Parliament members, and he later went to Oxford University to research the formation of the European Union.

“I was intrigued by this great idea of a super government formed through the union of nation states,” recalls Yamamoto. “I wanted to study the mechanisms behind it empirically, using statistics.” At Princeton University, where he earned his doctoral degree, Yamamoto developed new approaches for analyzing and modeling voting behavior, just as the digital revolution was bringing about a sea change in political communication and unleashing a flood of public opinion data.

Today, with the European Union on shaky footing and American democracy in peril, Yamamoto is optimistic that the Political Methodology Lab will be able to take advantage of new technology and data “to potentially solve crises,” he says. “Once our projects find the causal mechanism behind a problem, we will be in a good position to make policy recommendations.” But actually implementing the fix will be a matter of politics — “and that’s outside the realm of science,” says Yamamoto.

MIT class reveals, explores Institute’s connections to slavery

MIT’s first president, William Barton Rogers, held enslaved persons in his Virginia household until the early 1850s, roughly a decade before he founded the Institute, according to new research from an MIT history class that scholars and administrators designed to examine the Institute’s connections to the legacy of slavery.

While Massachusetts outlawed slavery in the early 1780s, Rogers lived in Virginia, where slavery was still legal, from 1819 until 1853, mostly on the campuses of the College of William and Mary and the University of Virginia. Documents from the time indicate that in those settings, Rogers had enslaved persons in his household in both 1840 and 1850.

MIT was founded in 1861 and began offering classes in 1865, just as the U.S. Civil War was ending the era of legal slavery in the South. But even as the Institute emerged in a new historical period, it bore marks of that older era as well.

“Our founder was a slave owner,” says Craig Steven Wilder, the Barton L. Weller Professor of History at MIT and a leading expert on the links between universities and slavery. Given how often such institutions drew personnel and material support from wealthy families that had profited from slavery, “people shouldn’t be surprised that MIT has these connections,” Wilder notes.

“I think that by looking at MIT’s ties to slavery, what you start to see is the centrality of slavery to the rise of the United States and its institutions,” Wilder adds.

The discovery comes from an archival research class for undergraduates that was set in motion by MIT’s president, L. Rafael Reif, and held in the fall of 2017 under the guidance of Wilder and Nora Murphy, an archivist in the MIT Libraries.

While the students in the class researched a variety of topics using primary sources from the 19th century, Murphy herself discovered that Rogers had six enslaved persons in his household in 1850, and two in 1840. The findings come from Murphy’s close examination of household census data.

“We need to ask all kinds of questions, and it’s important to keep an open mind because sometimes the findings are unexpected,” says Murphy, who is MIT’s archivist for researcher services. Once the project was under way, she adds, it was “easy to just begin to look at the censuses and see who was living in the household.”

President Reif says the new finding is an important step toward a better understanding of MIT’s history, and will lead to a productive dialogue about the Institute’s relationship to society, past and present.

“At MIT, we believe in looking at the facts, even when they’re painful. So I am deeply grateful to Professor Wilder for giving us a mechanism for finding and sharing the truth,” Reif says. “The next challenge is up to all of us: embracing this opportunity to take a new look at our past, and exploring together how to tell a more complete version of our history.”

A charge to investigate

The class emerged in part from discussions about MIT’s possible historical links to slavery, held among leaders in MIT’s Office of Minority Education and MIT’s central administration. With Reif seeking ways of examining the subject with a sharper historical lens, the Institute turned to Wilder, a scholar who has established himself as the leading expert on the historical connections between slavery and American universities, and asked him to propose a path forward.

“One of the things that MIT owes all of us — itself, its constituencies, its alumni, its students, its faculty, and the broader public — is to be brutally honest about its past,” Wilder says.

Wilder’s award-winning 2013 book, “Ebony and Ivy,” documents how slavery shaped U.S. colleges and universities from the 1600s onward. Such institutions were often founded or run by men who were slaveholders and slave traders, who received financial support from slave-based businesses, or who recruited students from families who had grown wealthy from such forms of commerce.

Few of the oldest universities in the U.S. had examined the issue until recently. But in 2006 Brown University released a report detailing its manifold links to slavery, and since then Columbia University, Georgetown University, Harvard University, and Princeton University, among others, have published their own findings. Columbia University president Lee Bollinger has noted that reading “Ebony and Ivy” helped persuade him to initiate his university’s own study of the issue.

Given that MIT was founded more recently than those other institutions, it might seem a less obvious candidate for historical scrutiny in this regard. But the pervasive entanglement of slavery in the U.S. made the possibility of connections to the Institute worth examining more closely. 

“The MIT way is to confront challenges and not to shrink from them, and so that was the impetus for the class,” says Melissa Nobles, dean of MIT’s School of Humanities, Arts, and Social Sciences.

Wilder and Murphy proposed the class, which became 21H.S01 (MIT and Slavery), to Nobles and Reif, among others in MIT’s administration. The proposal was approved, and the class will continue to be offered in the future.

“It has been wonderful to have President Reif’s support and his willingness to be as transparent as possible about the class, and what the class is looking into, and what the results of the class are,” Murphy says.

Students in the archives

“MIT and Slavery” is designed to have undergraduates do original archival research. While virtually all history courses assign substantial secondary reading, and many ask students to read primary-source documents or visit archives to an extent, 21H.S01 had students performing archival work from the first weeks of the semester onward.

“It was a very different sort of way of teaching and doing a class, working so closely with the archives,” Murphy says.

Primary-source archives, Wilder adds, are where historians “spend much of our creative time, and some of the most important intellectual experiences that we have are actually in the archives. So bringing students into that space … was really, in fact, this intense research experience for them … right in the laboratory of history.”

Each student then settled on a research topic. One examined racial imagery in early MIT student publications; another studied student debate of a mural on campus that reproduced J.M.W. Turner’s 1840 painting, “Slavers Throwing Overboard the Dead and Dying,” and found that the discussion focused on the history of technology and not the question of slavery itself.

A third student found that in its early years, MIT held a popular class in moral philosophy that discussed slavery, but it dropped the course by the 1880s. A fourth research project examined how MIT drew students from Louisville, Kentucky, and then sent graduates back to the South during the Reconstruction period.

Understanding MIT’s involvement in Reconstruction is bound to be a major topic for the class’ students in the future, Wilder observes.

“The rise of MIT is in many ways a story of the transformation from a slave economy to a post-slavery industrial economy, with lots of racial legacies and lots of unresolved conflicts that continue to play out in the United States today, including the really quite critical question of the position of black people and black labor in American society, and how we will ultimately define freedom for people who aren’t white,” Wilder says. 

An additional concept for all students to wrestle with, Wilder notes, is that technology itself, and the institutions developing it, do not stand apart from society, but are always entangled in it.

“What we have to understand is that technology, engineering, and science are, in fact, human endeavors that are driven by the economic, the commercial, and the political interests of nations,” Wilder says.

Further steps for MIT

Even as further iterations of the class continue, MIT intends the new findings about Rogers and the other topics to form the basis of a community dialogue about the Institute and the legacy of slavery. On Feb. 16, MIT will host an event that features a conversation involving Reif, Wilder, and Murphy, and includes presentations by the students who participated in the class, who will speak about their research projects.

“History burdens all of us, and part of what it means to be in a community is that we share each other’s burden,” Nobles says. “So my expectation and my hope is, given the nature of the MIT community and our commitment to each other, that we will see this as a shared responsibility and we will all participate … to help each of us understand what it means to us as individuals, and what it means for the institution as a whole.”

Beyond campus, Wilder is also working to develop an ongoing research project involving MIT and other prominent technical universities founded in the 19th century, in which all the institutions have a chance to explore the legacies of slavery in science, engineering, and technical education during the 1800s.

“MIT is part of a larger exploration of the ties between American universities and slavery, but we are not just participating, we are also leading a part of it,” Wilder says. “We are leading the research about the relationship of technology and science to the institution of slavery — not only to better understand our own history, but to fulfill our role as an elite university and to help build our role for the 21st century.”

Programming drones to fly in the face of uncertainty

Companies like Amazon have big ideas for drones that can deliver packages right to your door. But even putting aside the policy issues, programming drones to fly through cluttered spaces like cities is difficult. Being able to avoid obstacles while traveling at high speeds is computationally complex, especially for small drones that are limited in how much they can carry onboard for real-time processing.

Many existing approaches rely on intricate maps that aim to tell drones exactly where they are relative to obstacles, which isn’t particularly practical in real-world settings with unpredictable objects. If a drone’s estimated location is off by even a small margin, it can easily crash.

With that in mind, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed NanoMap, a system that allows drones to consistently fly at 20 miles per hour through dense environments such as forests and warehouses.

One of NanoMap’s key insights is a surprisingly simple one: The system considers the drone’s position in the world over time to be uncertain, and actually models and accounts for that uncertainty.

“Overly confident maps won’t help you if you want drones that can operate at higher speeds in human environments,” says graduate student Pete Florence, lead author on a new related paper. “An approach that is better aware of uncertainty gets us a much higher level of reliability in terms of being able to fly in close quarters and avoid obstacles.”

Specifically, NanoMap uses a depth-sensing system to stitch together a series of measurements about the drone’s immediate surroundings. This allows it to not only make motion plans for its current field of view, but also anticipate how it should move around in the hidden fields of view that it has already seen.

“It’s kind of like saving all of the images you’ve seen of the world as a big tape in your head,” says Florence. “For the drone to plan motions, it essentially goes back in time to think individually of all the different places that it was in.”

The team’s tests demonstrate the impact of uncertainty. For example, if NanoMap wasn’t modeling uncertainty and the drone drifted just 5 percent away from where it was expected to be, the drone would crash more than once every four flights. Meanwhile, when it accounted for uncertainty, the crash rate dropped to 2 percent.
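
For a more concrete picture of what “accounting for uncertainty” means here, the following is a minimal sketch — written in Python for illustration, and not the team’s actual code — of the core idea: each depth snapshot is stored together with an estimate of how uncertain the drone’s pose was when the snapshot was captured, and collision checks demand extra clearance from obstacles seen in poorly localized frames. The class names, numbers, and 2-D simplification are all assumptions made for the example.

```python
import math

class Frame:
    """One stored depth snapshot: obstacle points (in meters, 2-D for
    simplicity) plus the standard deviation of the pose estimate for
    the moment the snapshot was captured."""
    def __init__(self, obstacles, pose_sigma):
        self.obstacles = obstacles    # list of (x, y) obstacle points
        self.pose_sigma = pose_sigma  # position uncertainty, meters

def is_safe(candidate, frames, clearance=0.5, k=2.0):
    """Check a candidate waypoint against every stored frame. Rather than
    trusting each obstacle's recorded position exactly, require an extra
    margin of k * pose_sigma (k=2 gives roughly a 95 percent bound)."""
    for frame in frames:
        margin = clearance + k * frame.pose_sigma
        for ox, oy in frame.obstacles:
            if math.hypot(candidate[0] - ox, candidate[1] - oy) < margin:
                return False
    return True

# Older frames have accumulated more drift, so their obstacles
# automatically get a wider berth.
frames = [
    Frame(obstacles=[(2.0, 0.0)], pose_sigma=0.05),  # recent, well localized
    Frame(obstacles=[(2.0, 1.2)], pose_sigma=0.40),  # older, uncertain
]
print(is_safe((2.0, 0.8), frames))  # False: too close, given the drift margin
```

The design point the sketch tries to capture is that an uncertainty-aware planner grows its safety margins exactly where its knowledge is weakest, rather than trusting a single overly confident map.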

The paper was co-written by Florence and MIT Professor Russ Tedrake alongside research software engineers John Carter and Jake Ware. It was recently accepted to the IEEE International Conference on Robotics and Automation, which takes place in May in Brisbane, Australia.

For years computer scientists have worked on algorithms that allow drones to know where they are, what’s around them, and how to get from one point to another. Common approaches such as simultaneous localization and mapping (SLAM) take raw data of the world and convert them into mapped representations.

But the outputs of SLAM methods aren’t typically used to plan motions. That’s where researchers often use methods like “occupancy grids,” in which many measurements are incorporated into one specific representation of the 3-D world.

The problem is that such data can be both unreliable and hard to gather quickly. At high speeds, computer-vision algorithms can’t make much of their surroundings, forcing drones to rely on inexact data from the inertial measurement unit (IMU) sensor, which measures things like the drone’s acceleration and rate of rotation.

The way NanoMap handles this is that it essentially doesn’t sweat the minor details. It operates under the assumption that, to avoid an obstacle, you don’t have to take 100 different measurements and find the average to figure out its exact location in space; instead, you can simply gather enough information to know that the object is in a general area.

“The key difference to previous work is that the researchers created a map consisting of a set of images with their position uncertainty rather than just a set of images and their positions and orientation,” says Sebastian Scherer, a systems scientist at Carnegie Mellon University’s Robotics Institute. “Keeping track of the uncertainty has the advantage of allowing the use of previous images even if the robot doesn’t know exactly where it is, and allows for improved planning.”

Florence describes NanoMap as the first system that enables drone flight with 3-D data that is aware of “pose uncertainty,” meaning that the drone takes into consideration that it doesn’t perfectly know its position and orientation as it moves through the world. Future iterations might also incorporate other pieces of information, such as the uncertainty in the drone’s individual depth-sensing measurements.

NanoMap is particularly effective for smaller drones moving through smaller spaces, and works well in tandem with a second system that is focused on more long-horizon planning. (The researchers tested NanoMap last year in a program tied to the Defense Advanced Research Projects Agency, or DARPA.)

The team says that the system could be used in fields ranging from search and rescue and defense to package delivery and entertainment. It can also be applied to self-driving cars and other forms of autonomous navigation.

“The researchers demonstrated impressive results avoiding obstacles and this work enables robots to quickly check for collisions,” says Scherer. “Fast flight among obstacles is a key capability that will allow better filming of action sequences, more efficient information gathering and other advances in the future.”

This work was supported in part by DARPA’s Fast Lightweight Autonomy program.

Study finds gender and skin-type bias in commercial artificial-intelligence systems

Three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases, according to a new paper researchers from MIT and Stanford University will present later this month at the Conference on Fairness, Accountability, and Transparency.

In the researchers’ experiments, the three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned — to more than 20 percent in one case and more than 34 percent in the other two.

The findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. For instance, according to the paper, researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent for a face-recognition system they’d designed. But the data set used to assess its performance was more than 77 percent male and more than 83 percent white.

“What’s really important here is the method and how that method applies to other applications,” says Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group and first author on the new paper. “The same data-centric techniques that can be used to try to determine somebody’s gender are also used to identify a person when you’re looking for a criminal suspect or to unlock your phone. And it’s not just about computer vision. I’m really hopeful that this will spur more work into looking at [other] disparities.”

Buolamwini is joined on the paper by Timnit Gebru, who was a graduate student at Stanford when the work was done and is now a postdoc at Microsoft Research.

Chance discoveries

The three programs that Buolamwini and Gebru investigated were general-purpose facial-analysis systems, which could be used to match faces in different photos as well as to assess characteristics such as gender, age, and mood. All three systems treated gender classification as a binary decision — male or female — which made their performance on that task particularly easy to assess statistically. But the same types of bias probably afflict the programs’ performance on other tasks, too.

Indeed, it was the chance discovery of apparent bias in face-tracking by one of the programs that prompted Buolamwini’s investigation in the first place.

Several years ago, as a graduate student at the Media Lab, Buolamwini was working on a system she called Upbeat Walls, an interactive, multimedia art installation that allowed users to control colorful patterns projected on a reflective surface by moving their heads. To track the user’s movements, the system used a commercial facial-analysis program.

The team that Buolamwini assembled to work on the project was ethnically diverse, but the researchers found that, when it came time to present the device in public, they had to rely on one of the lighter-skinned team members to demonstrate it. The system just didn’t seem to work reliably with darker-skinned users.

Curious, Buolamwini, who is black, began submitting photos of herself to commercial facial-recognition programs. In several cases, the programs failed to recognize the photos as featuring a human face at all. When they did, they consistently misclassified Buolamwini’s gender.

Quantitative standards

To begin investigating the programs’ biases systematically, Buolamwini first assembled a set of images in which women and people with dark skin are much better-represented than they are in the data sets typically used to evaluate face-analysis systems. The final set contained more than 1,200 images.

Next, she worked with a dermatologic surgeon to code the images according to the Fitzpatrick scale of skin tones, a six-point scale, from light to dark, originally developed by dermatologists as a means of assessing risk of sunburn.

Then she applied three commercial facial-analysis systems from major technology companies to her newly constructed data set. Across all three, the error rates for gender classification were consistently higher for females than they were for males, and for darker-skinned subjects than for lighter-skinned subjects.

For darker-skinned women — those assigned scores of IV, V, or VI on the Fitzpatrick scale — the error rates were 20.8 percent, 34.5 percent, and 34.7 percent. But with two of the systems, the error rates for the darkest-skinned women in the data set — those assigned a score of VI — were worse still: 46.5 percent and 46.8 percent. Essentially, for those women, the system might as well have been guessing gender at random.
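
The methodological point — evaluate performance per subgroup rather than in aggregate — is simple to illustrate. Below is a minimal Python sketch with made-up records (not the study’s data); it tallies gender-classification error rates separately for each combination of skin-type group and gender, mirroring the kind of breakdown reported above.

```python
from collections import defaultdict

# Hypothetical records: Fitzpatrick skin type (1-6), true gender, prediction.
predictions = [
    {"skin_type": 2, "gender": "male",   "predicted": "male"},
    {"skin_type": 2, "gender": "female", "predicted": "female"},
    {"skin_type": 6, "gender": "female", "predicted": "male"},
    {"skin_type": 6, "gender": "female", "predicted": "female"},
]

totals = defaultdict(int)
errors = defaultdict(int)
for p in predictions:
    # Group darker skin types (IV-VI) against lighter ones (I-III),
    # following the paper's breakdown.
    shade = "darker" if p["skin_type"] >= 4 else "lighter"
    key = (shade, p["gender"])
    totals[key] += 1
    errors[key] += p["predicted"] != p["gender"]

for key in sorted(totals):
    rate = 100.0 * errors[key] / totals[key]
    print(f"{key}: {rate:.1f}% error ({errors[key]}/{totals[key]})")
```

A single aggregate accuracy over these toy records would look respectable while hiding the 50 percent error rate on one subgroup — precisely the false sense of progress that Buolamwini warns benchmarks can give.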

“To fail on one in three, in a commercial system, on something that’s been reduced to a binary classification task, you have to ask, would that have been permitted if those failure rates were in a different subgroup?” Buolamwini says. “The other big lesson … is that our benchmarks, the standards by which we measure success, themselves can give us a false sense of progress.”

“This is an area where the data sets have a large influence on what happens to the model,” says Ruchir Puri, chief architect of IBM’s Watson artificial-intelligence system. “We have a new model now that we brought out that is much more balanced in terms of accuracy across the benchmark that Joy was looking at. It has half a million images with balanced types, and we have a different underlying neural network that is much more robust.”

“It takes time for us to do these things,” he adds. “We’ve been working on this roughly eight to nine months. The model isn’t specifically a response to her paper, but we took it upon ourselves to address the questions she had raised directly, including her benchmark. She was bringing up some very important points, and we should look at how our new work stands up to them.”

Jennifer Rupp: Engineering practical ceramics

Ensuring that her research contributes to society’s well-being is a major driving force for Jennifer Rupp.

“Even if my work is fundamental, I want to think about how it can be useful for society,” says Rupp, the Thomas Lord Assistant Professor of Materials Science and Engineering and an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT.

Since joining the Department of Materials Science and Engineering in February 2017, she has been focusing not only on the basics of ceramics processing techniques but also on how to further develop those techniques to design new practical devices as well as materials with novel structures. Her current research applications range from battery-based storage for renewable energy, to energy-harvesting systems, to devices used to store data during computation.

Rupp first became intrigued with ceramics during her doctoral studies at ETH Zurich.

“I got particularly interested in how they can influence structures to gain certain functionalities and properties,” she says. During this time, she also became fascinated with how ceramics can contribute to the conversion and storage of energy. The need to transition to a low-carbon energy future motivates much of her work at MIT. “Climate change is happening,” she says. “Even though not everybody may agree on that, it’s a fact.”

One way to tackle the climate change problem is by capitalizing on solar energy. Sunlight falling on the Earth delivers roughly 170,000 terawatts — about 10,000 times the rate at which energy is consumed worldwide. “So we have a lot of solar energy,” says Rupp. “The question is, how do we profit the most from it?”

To help convert that solar energy into a renewable fuel, her team is designing a ceramic material that can be used in a solar reactor in which incoming sunlight is controlled to create a heat cycle. As the temperature shifts, the ceramic material takes up and releases oxygen: at the higher temperature it loses oxygen, and at the lower temperature it regains it. When carbon dioxide and water are flushed into the solar reactor during this oxidation step, they are split, yielding a combination of carbon monoxide and hydrogen known as syngas, which can be converted catalytically into ethanol, methanol, or other liquid fuels.
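
The article does not specify the ceramic, but two-step solar thermochemical cycles of this kind are commonly written as a generic metal-oxide redox pair; the equations below use ceria, a frequent choice in such reactors, purely as an illustrative stand-in:

```latex
% Two-step redox cycle (ceria used as an illustrative example only)
\begin{align*}
  \text{reduction (high } T\text{):} \quad
    & \mathrm{CeO_2} \longrightarrow \mathrm{CeO_{2-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2} \\
  \text{oxidation (lower } T\text{):} \quad
    & \mathrm{CeO_{2-\delta}} + \delta\,\mathrm{CO_2} \longrightarrow \mathrm{CeO_2} + \delta\,\mathrm{CO} \\
    & \mathrm{CeO_{2-\delta}} + \delta\,\mathrm{H_2O} \longrightarrow \mathrm{CeO_2} + \delta\,\mathrm{H_2}
\end{align*}
```

The carbon monoxide and hydrogen released in the oxidation step are the two components of syngas, and the re-oxidized ceramic is ready for the next heat cycle.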

While the challenges are many, Rupp says she feels bolstered by the humanitarian ethos at MIT. “At MIT, there are scientists and engineers who care about social issues and try to contribute with science and their problem-solving skills to do more,” she says. “I think this is quite important. MIT gives you strong support to try out even very risky things.”

In addition to continuing her work on new materials, Rupp looks forward to exploring new concepts with her students. During the fall of 2017, she taught two recitation sections of 3.091 (Introduction to Solid State Chemistry), a class that has given thousands of MIT undergraduates a foundation in chemistry from an engineering perspective. This spring, she will begin teaching a new elective for graduate students on ceramics processing and engineering that will delve into making ceramic materials not only on the conventional large-scale level but also as nanofabricated structures and small-system structures for devices that can store and convert energy, compute information, or sense carbon dioxide or various environmental pollutants.

To further engage with students, Rupp has proposed an extracurricular club for them to develop materials science comic strips. The first iteration is available on Instagram (@materialcomics), and it depicts three heroes who jump into various structures to investigate their composition and, naturally, to have adventures. Rupp sees the comics as an exciting avenue to engage the nonscientific community as a whole and to illustrate the structures and compositions of various everyday materials.

“I think it is important to create interest in the topic of materials science across various ages and simply to enjoy the fun in it,” she says. 

Rupp says MIT is proving to be a stimulating environment. “Everybody is really committed and open to being creative,” she says. “I think a scientist is not only a teacher or a student; a scientist is someone of any age, of any rank, someone who simply enjoys unlocking creativity to design new materials and devices.”

This article appears in the Autumn 2017 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Alumni call on MIT to champion artificial intelligence education

In the weeks before the launch of the MIT Intelligence Quest, an initiative that will advance the science and engineering of human and machine intelligence, School of Engineering graduates were asked: “What positive role can MIT play in the AI revolution?”

Alumni urged MIT to energize the artificial intelligence community, including people in industry, academia, and the government, around a thoughtful strategy for the future. They wrote directly to Anantha P. Chandrakasan, dean of engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who posed the question in a monthly newsletter, The Infinite.

“The AI community is struggling to ensure that AI-inspired transformations end up benefiting science and society,” says Auroop Ganguly PhD ’02, a professor of civil and environmental engineering at Northeastern University. “A clear lack of leadership is becoming apparent, particularly in this country, across not just government agencies and labs, but also in academia and private-public sectors. This is an area where MIT, with its enormous resources and reputation, can help jump-start innovations.”

AI-inspired technologies hold great promise, alumni wrote. Smart vehicles may save lives worldwide, and smart buildings may save energy and reduce carbon emissions. The latest advances are inspiring progress in health care, education, energy, and the environment.

But decisive leadership is essential to guarantee that development of AI technologies includes consideration of societal and ethical questions alongside the technical, say alumni.

“Recent developments ranging from autonomous cars and infrastructural resilience to weather prediction and remote sensing suggest the possibilities that AI can bring to bear in these areas,” says Ganguly. “Private industry may be willing to invest once academia, led by MIT, moves in this direction, as may be evident from signals coming out of technology companies,” he says, noting Microsoft’s AI for Earth initiative and Google’s stated desire to use AI to address humanity’s greatest challenges.

MIT alumnus Tom Wylonis SM ’68, board chairman of Evaxion Biotech, which is using AI to seek solutions to global challenges to human health, wrote: “Since industry investments tend to be short-term due to risk aversion, MIT should also lead the way with fundamental research that lays the foundation for longer-term benefits from AI.”

Wylonis, an active mentor to MIT students and funding board member of Sandbox, an Institute-wide program that supports student-initiated ideas, suggests MIT has another leadership role to play.

“I believe that MIT should do all it can to increase the number and quality of MIT graduates with AI specialization,” says Wylonis, citing a rise in AI investment in nearly all industrial sectors. “Along with this, encourage other educational and research institutes to follow suit since we are likely to encounter a massive future shortage of AI talent.”

Alumni repeatedly circled back to the issue of ethical deployment of AI technologies. “MIT can amplify the positive impact of AI by actively encouraging a dialogue between technologists and social scientists on where AI can and should impact society,” wrote Don Shobrys ’75, an engineer and consultant who has volunteered extensively at MIT, including stints serving on the Corporation, as president of the Alumni Association, and as co-director of the Venture Mentoring Service.

Yu Chen ’00, a technical program manager at Google, advised: “MIT will be one of the technology leaders defining the future, so I want us to be mindful about designing a future that works for everyone, not just those who have more access to money or resources or information.”

Many alumni also suggested MIT must help set the agenda for public debate and convene conversations about important issues. Offering people clarity about the role and workings of AI is crucial, they say.

Ray Stata ’57, SM ’58, a longtime MIT volunteer and benefactor, wrote: “MIT is already well along in AI education and research. But for the alumni it would be great to offer an online course which describes the basics of neural networks and how deep learning is applied to solve problems in new ways.”

Joanna Bryson ScD ’01, an associate professor focused on AI ethics at the University of Bath, declared: “MIT needs to take a strong stand on fact-based assessment of AI.” She added: “MIT needs to claim the high ground and maintain a human-centered perspective on AI.”

Cornel West advocates the “examined life” on campus

How can universities be a force for social good in turbulent times? At an MIT talk on Wednesday evening, the prominent philosopher Cornel West had a clear answer: painful self-reflection.

More precisely, West suggested, the individuals who populate institutions of higher education should rigorously reexamine the consensus beliefs they encounter and, ideally, develop an “aversion to conformity” that will help bring vitality and diversity to academic life.

“The unexamined life is not worth living,” West said, alluding to the ideas of Socrates. “The examined life is painful.”

Higher education, West added, should be about not “information,” but “transformation” — a process of questioning assumptions and refining habits of critical thinking that can be applied to any issue.

“I don’t fetishize smartness,” West said, observing that the lessons of one discipline do not necessarily translate into other realms — and that we should be wary of overestimating people based on their apparent sharpness in one sphere of life. At leading universities, West suggested, there is greater danger in overestimating people than in being humble about our capabilities.

“We recognize we will be wrong as well as right,” West said.

West’s talk, titled “Speaking Truth to Power! A Discussion on Institutional Provincialism,” took place before a packed auditorium in MIT’s Room 10-250. West was joined by five MIT scholars who made their own remarks after his talk and engaged with an extensive round of questions from the audience.

West is a professor of the practice of public philosophy at Harvard Divinity School and holds a joint appointment with Harvard’s Department of African and African-American Studies. He is author of, among other books, “Race Matters” and “Democracy Matters.” He has appeared in over two dozen documentaries about social issues and released three spoken-word albums.

West was introduced by Ty Austin, a graduate student in MIT’s Department of Architecture, who outlined the issue West discussed: Institutions benefit by developing stable identities, but too much conformity, or too narrow an institutional identity, can limit a university’s ability to influence an ever-changing world.

“We bring a sort of identity [and] mindset into this vast metropolis of higher learning,” Austin said. And yet, he observed, if a university’s inhabitants adhere to “the same identity and like-mindedness,” it is quite possible that “the very institutions that are said to broaden horizons and advance technology and society” would exist for “the benefit of the few, by marginalizing the many.”

“A test of who we are”

West did not offer a detailed critique of the Institute: “I’m not here to pontificate. I don’t know that much about the internal dynamics of MIT,” he said. Instead, he offered reflections about the practice of self-examination, as well as the larger, pressing problems in the world today.  

We are facing, among other hazards, “economic catastrophe” in the form of inequality, West said. He noted that the three richest Americans have wealth equivalent to the bottom half of the population.

“Salute their smartness, their intelligence, [but] we’re talking about structures, we’re talking about institutions in place, we’re talking about policies that generate massive redistribution of wealth from poorer working people to the well-to-do,” West contended.

The changing climate, West said, was an environmental “catastrophe” in the making for all of society. He also decried the increase of racially charged politics and immigration issues in the U.S. in recent years.

“We live in one of the bleakest moments in the history of the empire,” West said, adding: “It’s a test of who we are.”

West’s talk occurred during Black History Month, which he called “sacred ground” in American life, and he decried the use of sanitizing euphemisms for political discord in the U.S., such as the term “race problem” as a description of civic conflict. Instead, West said, there have been a “series of catastrophes visited upon blacks” in the U.S.

While talking about the need for self-critique on campuses, West also sounded upbeat notes about the possibilities for social rejuvenation that come with intellectual freedom.

“Social movements need some MIT folks — who do their homework,” West said.

“That’s a challenge to all my brothers and sisters here of all colors at MIT,” he added. “How do you not just talk about it, but enact a sensitivity to the problem, in your curriculum, in your own praxis, in your organizational affiliation?”

The event was sponsored by all five of MIT’s schools — the School of Engineering, the School of Science, the School of Architecture and Planning, the Sloan School of Management, and the School of Humanities, Arts, and Social Sciences — as well as the Media Lab, the Program in Media Arts and Sciences, the Office of Graduate Education, the Institute Community and Equity Office, and the Committee on Race and Diversity.

The view from the panel

The event also featured a panel of MIT students and faculty who spoke about how they work to help bring alternate ideas to the Institute.

Joy Buolamwini, a PhD student at the MIT Media Lab, founded the Algorithmic Justice League to push back against ethnic biases in machine learning, such as in facial recognition programs.

“I’m coming up against something I call ‘the coded gaze,’” Buolamwini said, referring to the decisions and assumptions in such programs, which, she noted, reflect “the priorities and preferences of what those who have power choose to focus on, who’s visible, who’s rendered invisible.”

Sasha Costanza-Chock, the Mitsui Career Development Associate Professor in Contemporary Technology, suggested that preventing intellectual provincialism at MIT means avoiding “the technological solutionist ideology” of attempting to solve hard problems from the lab without sufficient on-the-ground knowledge of social realities.

“In this process of problem-solving,” said Costanza-Chock, it is important “to think about how do we say, ‘Well, I may be a brilliant person, and I may have a certain set of skills and knowledge in one domain, but how do I really work in partnership with, or even in service to [people who] are experiencing the lived reality of intersectional oppression?’”

Jennifer Light, chair of MIT’s Program in Science, Technology, and Society, observed that knowledge of the past can make clear that science and technology should always be understood in relationship to civic life — and have at times been used to advance harmful social goals. For instance, Light noted, professors at elite U.S. universities in the 1920s favored eugenics programs.

“Smart people have lots of bad ideas,” said Light. “And history can be a tool to understand that, and take to your own present.”

Finding joy in social activism

The event’s concluding remarks were delivered by Duane Lee, an astronomer who is a postdoc at Vanderbilt University and an MLK Visiting Scholar at MIT. Lee discussed the need to challenge conventional wisdom as a way of increasing diversity in academia.

“At times we delude ourselves that we are immune to biases,” said Lee, relating multiple instances in which professional colleagues have asserted to him that increased diversity in his field would lead to a lowering of overall academic standards.

On the contrary, Lee suggested, if the discipline had been tapping into a talent pool consisting of everyone in society, not just the narrower cross-section of society it has traditionally employed, then the standards of the field, along with the rate of progress, would likely be higher than they currently are.

West praised the contributions of the other panelists and, along with them, fielded audience questions, telling one undergraduate that social engagement can also be a source of energy and enjoyment.

“There’s got to be some joy in it,” West said about the practice of social activism. “If it’s just done out of joylessness, then you’re not going to be a long-distance runner.”

For that matter, West noted earlier in the event, persistence and determination are key components of enacting civic change in the face of setbacks or just intermittent public indifference. 

“All of us fall short,” West said. “Samuel Beckett is right: Try again. Fail again. Fail better.”

Cities of the future may be built with locally available volcanic ash

MIT engineers working with scientists in Kuwait have found that volcanic rocks, when pulverized into a fine ash, can be used as a sustainable additive in concrete structures.

In a paper published online in the Journal of Cleaner Production, the researchers report that, by replacing a certain percentage of traditional cement with volcanic ash, they can reduce a concrete structure’s “embodied energy,” or the total energy that goes into making concrete. According to their calculations, it takes 16 percent less energy to construct a pilot neighborhood with 26 concrete buildings made with 50 percent volcanic ash, compared with the energy it takes to make the same structures entirely of traditional Portland cement.

When they ground volcanic ash down to increasingly small particle sizes, the researchers found that a mixture of the finer powder and Portland cement produced stronger concrete structures, compared with those made from cement alone. However, the process of grinding volcanic ash down to such fine particles requires energy, which in turn increases the resulting structure’s embodied energy. There is, then, a tradeoff between a concrete structure’s strength and its embodied energy, when volcanic ash is used.

Based on experiments with various concrete and volcanic ash mixtures, and calculations of the resulting structures’ embodied energy, the researchers have mapped out the relationship between strength and embodied energy. They say engineers can use this relationship as a blueprint of sorts to help them choose, for instance, the percentage of cement they would want to replace with volcanic ash to produce a given structure.

“You can customize this,” says Oral Buyukozturk, a professor in MIT’s Department of Civil and Environmental Engineering (CEE). “If it is for a traffic block, for example, where you may not need as much strength as, say, for a high-rise building. So you could produce those things with much less energy. That is huge if you think of the amount of concrete that’s used over the world.”

Buyukozturk is joined on the paper by an interdisciplinary team of researchers, including research scientist Kunal Kupwade-Patil and undergraduate Stephanie Chin of CEE; former doctoral student Catherine De Wolf and Professor John Ochsendorf of MIT’s Department of Architecture; Ali Hajiah of the Kuwait Institute for Scientific Research; and Adil Al-Mumin of Kuwait University.

A natural additive

After water, concrete is the most abundantly used material in the world. The manufacturing of concrete involves first blasting rocks such as limestone out of quarries, then transporting the rocks to mills, where they are further crushed and treated at high temperatures through various processes, resulting in the production of cement.

Such energy-intensive processes create a significant environmental footprint; the production of traditional Portland cement accounts for about 5 percent of the world’s carbon dioxide emissions. To cut down on these emissions, Buyukozturk and others have been looking for sustainable additives and alternatives to cement.

Volcanic ash has several advantages as a sustainable additive in manufacturing concrete: The rocky material lies in ample supply around active and inactive volcanoes worldwide; it is typically considered a waste material, as it has no widespread use; and some volcanic ashes have intrinsic “pozzolanic” properties, meaning that, in powder form, they bind naturally with water and other materials to form cement-like pastes, allowing them to replace a portion of the cement.

“Cement production takes a lot of energy because there are high temperatures involved, and it’s a multistage process,” says Chin, who with Kupwade-Patil led much of the group’s experimental work as a student in the Undergraduate Research Opportunities Program (UROP) with Buyukozturk. “That’s the main motivation for trying to find an alternative. Volcanic ash forms under high heat and high pressure, and nature kind of does all those chemical reactions for us.”

The team looked first at how much energy it would take to make concrete from a mixture of cement and volcanic ash, versus cement alone. To do this, the researchers consulted several databases in which others had calculated the embodied energy associated with various industrial processes, such as the energy that goes into crushing rock or curing cement. The researchers picked through the databases to assemble the individual processes associated with producing traditional cement and cement containing 10 to 50 percent volcanic ash.

They then went into the lab, where they manufactured small samples of concrete with various percentages of volcanic ash, as well as samples made only of Portland cement. Chin and her colleagues subjected each sample to standard tests of strength, such as compressing the structures until they began to crack. Then they mapped out each sample’s strength against its calculated embodied energy.

According to their results, replacing 50 percent of traditional cement with volcanic ash with an average particle size of 17 micrometers can bring down concrete’s embodied energy by 16 percent. However, at this particle size, volcanic ash can compromise concrete’s overall strength. Grinding the ash down to a particle size of about 6 micrometers significantly increases concrete’s strength, as smaller particles provide more surface area with which water and cement can chemically bind.
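
As a rough illustration of how such a “blueprint” might be used, here is a toy Python sketch. Every coefficient in it is an assumption invented for the example, anchored only to the figures quoted above (a 16 percent energy saving at 50 percent ash ground to 17 micrometers, with strength recovered by grinding toward 6 micrometers); it is not the researchers’ model.

```python
# Toy tradeoff model: replacing cement with volcanic ash saves embodied
# energy, but grinding the ash finer (for strength) costs some energy back.
# All coefficients below are illustrative assumptions, not measured values.

def embodied_energy(ash_fraction, particle_um):
    """Normalized embodied energy of the blend (1.0 = all-Portland-cement).
    Anchored to the article's figure: 16% savings at 50% ash / 17 um."""
    savings = 0.16 * (ash_fraction / 0.50)               # linear in ash content
    grind_penalty = 0.01 * max(0.0, 17.0 - particle_um)  # hypothetical rate
    return 1.0 - savings + ash_fraction * grind_penalty

def relative_strength(ash_fraction, particle_um):
    """Toy strength model (1.0 = all-cement baseline): coarse ash dilutes
    the cement and weakens the mix; finer ash exposes more surface area
    for binding and recovers strength."""
    dilution = 1.0 - 0.3 * ash_fraction                  # hypothetical
    fineness_bonus = 0.04 * ash_fraction * max(0.0, 17.0 - particle_um)
    return dilution + fineness_bonus

for ash, size in [(0.0, 17.0), (0.5, 17.0), (0.5, 6.0)]:
    print(f"ash={ash:.0%}, {size:4.1f} um: "
          f"energy={embodied_energy(ash, size):.2f}, "
          f"strength={relative_strength(ash, size):.2f}")
```

The point of parameterizing the mix this way is the one Buyukozturk makes above: a traffic block and a high-rise can sit at different spots on the strength-versus-energy curve.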

Cities of ash

The team extrapolated its results to see how structures made partly with volcanic ash would affect concrete’s embodied energy at the scale of entire buildings and neighborhoods.

The researchers focused on a neighborhood in Kuwait with 13 residential and 13 commercial buildings, all made with traditional Portland cement, mostly imported from Europe. With the help of their collaborators in Kuwait, they flew a drone over the neighborhood to collect images and measurements. They also consulted local authorities, who provided them with additional information on each building system.

Using all this information, the team calculated the neighborhood’s existing embodied energy, and then calculated how that embodied energy would change if buildings were made with concrete composed of various percentages of volcanic ash, which is in ample supply in the Middle East.

As with their experiments in the lab, they found that a neighborhood’s infrastructure can be made with considerably less energy if the same buildings are built with concrete made from a cement mixture that is 30 percent volcanic ash.

“What we’ve found out is that concrete can be manufactured with natural additives with desired properties, and reduced embodied energy, which can be translated into significant energy savings when you are creating a neighborhood or a city,” Buyukozturk says.

This research was supported in part by the Kuwait Foundation for the Advancement of Sciences. The project was conducted as part of the Kuwait-MIT signature project on sustainability of Kuwait’s built environment for which Buyukozturk was the principal investigator.

Undergraduates share authorship

In a second paper, which will soon appear in the ASCE Journal of Materials in Civil Engineering, co-authors Chin and Maranda L. Johnston, also a former UROP student, explore the binding mechanism involved when Portland cement is replaced with finely ground volcanic ash. The team used various techniques, including synchrotron X-ray diffraction at Argonne National Laboratory, to examine the microstructure of hardened cement pastes.

They found the finer-sized volcanic ash particles produced nanometer-scale products within the cement paste as it hardened, which helped to densify the matrix as it cured. “Our work provides a basis for the engineers to optimize their mixes with natural additives according to their specified requirements,” Kupwade-Patil says.

“It has become in a way a tradition in my laboratory to involve freshmen and other UROP students in high-level multidisciplinary research leading to journal publications,” Buyukozturk says. “This learning experience is an important part of our educational system.”

When numbers started counting

Odds are, you’ve tried to win arguments by citing statistics. Who has been the greater player, LeBron James or Michael Jordan? Which health care policy is right? Where are the best schools? Which city has the worst morning traffic? If you can find the numbers, then maybe — maybe — you can resolve these matters.

But have you ever wondered: When did people start using numbers in politics or other public debates, anyway? Did the Egyptians have quantitative arguments about pyramid policy? Or is it a very recent phenomenon, due to the spread of data and electronic communications?

In a new book, William Deringer, an assistant professor at MIT, offers an answer: In the English-speaking world, people started using numbers in political debates in Britain around 1688, and the practice took firm hold over the next few decades.

Why then? England had just concluded its “Glorious Revolution,” in which William and Mary usurped the throne, deposing James II, while Parliament gained a stronger hold on state affairs. That rise of parliamentary power, along with polarized political parties and the growth of the press, contributed to a public culture of debate and dispute — one in which numbers increasingly became a form of ammunition.

“It was part of a larger phenomenon,” says Deringer, who is the Leo Marx Career Development Assistant Professor of Science, Technology, and Society. “Issue after issue, you had two sides arguing intensely. This turned out to be a political context in which numbers functioned really well.”

Moreover, by 1720, when the infamous episode of global financial speculation known as the South Sea Bubble reached a crisis point, quantitative arguments became even more embedded in civic life, given the junction of politics and economics. Indeed, the advent of numerical arguments in politics spans the whole period from 1688 through at least 1720, and even somewhat beyond.

As Deringer suggests in the book, the influence of this change has been immense. The practices of British political culture thoroughly informed American colonial politics and in a sense created the means for quantitative reasoning to gain authority in the modern U.S. state.

“The developments of the 17th and 18th centuries created cultural conditions that continue to influence us today,” Deringer says.

Deringer’s book, “Calculated Values: Finance, Politics, and the Quantitative Age,” is being published this week by Harvard University Press.

Fiscal duty and free speech

To be clear, Deringer’s historical claim is not that numbers or mathematics were wholly ignored in civic life before the late 17th century. From the ancient Greeks who discoursed upon the moral value of mathematics, to late-Medieval Venetians who used double-entry bookkeeping to change commerce, mathematics mattered in many ways. The English themselves compiled the famous Domesday Book around 1086 to keep track of land and income.

What Deringer is tracing, however, is a new era in which “fighting with numbers,” as he writes in the book, became “a regular part” of politics. In the modern world, we look to statistics to help resolve public issues and give quantitative evidence considerable weight.

This new practice in politics, Deringer believes, stems crucially from the expansion of parliamentary powers in Britain, in the years after 1688. Those powers, in a series of parliamentary acts, limited the monarch’s ability to control courts and elections, ensured the right of free speech in Parliament, and, significantly, included the “financial settlement” in which the monarch had to keep reapplying to Parliament for state funds.

In a short time, then, Parliament became increasingly active in controlling Britain’s purse strings, and it tolerated increasingly vocal debate on the subject — conditions in which statistics gained authority.

“People were using calculations as a way of making criticism,” Deringer says. The polarized politics, with Tories and Whigs at odds, and the growing press meant that this was “an environment remarkably hospitable to numerical calculation as a mode of thinking and arguing.”

Indeed, a fair amount of our language for such things dates to this time period; the phrase “facts and figures” is first found in 1727, for instance.

Critics with a cause — and calculations

Deringer’s findings also cut against the grain of theoretical work that regards the state as an overwhelming source of repressive power. Contrary to that notion, the emergence of statistics in British politics was not an instrument of state control. Numbers helped both sides make claims, and in fact gave outsiders and antiestablishment critics credibility for their assertions.

“One of the things I found fascinating about this period is that [state power] could not have been the only explanation” for the advent of statistics in politics, Deringer says. “The state was not as functional as it could have been. The state didn’t know what the public thought it should know.”

That applies to the debates about the South Sea Bubble as well, he observes. In the book, Deringer chronicles the public saga of one Archibald Hutcheson, a critic of the South Sea Company who believed its stock was overvalued and engaged in quantitative financial detective work to prove his point.

“The people who were doing the most intensive calculations about the bubble were consistently people who were critical of this scheme,” Deringer says. The collapse of the South Sea Bubble, he writes, “was probably the greatest political triumph for calculation in the entire 18th century.”

Of course, simply wielding numbers is no guarantee of winning a political debate, something apparent in contemporary times as well. Sometimes entrenched interests override solid numerical reasoning; at other times, statistics rest on debatable assumptions or yield results open to multiple interpretations.

In many cases, Deringer says, “Calculations can be really flexible. There’s a lot of give in some numbers. If you change a couple of assumptions, there will be a very different conclusion.”
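To see how much give there can be, consider a small, purely illustrative present-value calculation (the sums and discount rates below are hypothetical, not drawn from Deringer's book). Change a single assumption, the discount rate, and the same future benefit becomes two very different numbers today:

```python
# Purely illustrative: one assumption swings a "hard number."
# Present value of a $1,000,000 benefit arriving 50 years from now,
# under two defensible-sounding discount rates.

def present_value(amount: float, rate: float, years: int) -> float:
    """Standard discounting: today's value of a future payment."""
    return amount / (1 + rate) ** years

for rate in (0.02, 0.07):
    pv = present_value(1_000_000, rate, 50)
    print(f"discount rate {rate:.0%}: worth ${pv:,.0f} today")
# At 2% the benefit is worth roughly $372,000 today; at 7%, roughly
# $34,000. Same future fact, nearly opposite arguments.
```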

The use of numbers in politics, he thinks, also creates a heightened skepticism of quantitative claims — which can be a good thing if it creates more critical thinking and sharpens our analysis of complicated issues. Having numbers on hand can be good; asking questions about them can be better. And that has remained constant, from 1688 through the present day.

“I think these things go together in a two-sided relationship,” Deringer says. “There is something healthy about that.”

Understanding and treating disease

In 2006, a discovery opened up a new world of possibility for treating diseases. For the first time, researchers created stem cells without using embryos. Adult skin cells were reprogrammed into induced pluripotent stem cells, or iPSCs, that could differentiate into specialized cells for use in almost any part of the body — from the liver to the heart or brain, and everywhere in between. Areas of the body damaged by disease could be made healthy again.

But after more than a decade of research on iPSCs, the process of creating them is still incredibly inefficient. “We have been puzzled that after 10 years of intense research in that direction, the efficiency of iPSC reprogramming is still only about 0.1 percent,” says Associate Professor Domitilla Del Vecchio. “It’s not really at the point that you can use it for clinical purposes.”

Del Vecchio and her colleagues are hoping to change that. Currently, researchers develop iPSCs by delivering synthetic DNA to the nucleus of a somatic cell, such as a skin cell. This synthetic DNA produces high levels of select proteins — known as transcription factors — with the aim of “pushing” the somatic cell to reprogram into a stem cell. But overloading a cell with such a high level of transcription factors leads to a highly inefficient process. “If you have a mechanical system, such as a car or a robotic manipulator, and you give it an arbitrary push, you should not expect that the system will end up exactly in the configuration you want,” says Del Vecchio.

To fix this problem, Del Vecchio and her team are adding accelerators and brakes to the process. Using mathematical analysis, they can show that, with the right balance between the two, the pluripotent stem cell state can be reached. With the help of small molecules, synthetic DNA delivered via a virus can produce tunable levels of transcription factors based on a target configuration. This method — called a synthetic genetic feedback controller — steers the concentration of transcription factors in the cell to the point at which it can become a stem cell.
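As a rough intuition for what a feedback controller buys here, the sketch below simulates a toy version of the idea: production acts as the accelerator, degradation as the brake, and feedback balances the two. The model, gains, and numbers are illustrative assumptions, not Del Vecchio's actual design.

```python
# Toy model of feedback steering of a transcription factor (TF) level.
# A minimal sketch under assumed dynamics and parameters, not
# Del Vecchio's synthetic genetic feedback controller itself.

GAMMA = 0.1    # assumed TF degradation/dilution rate (1/hour)
K_P = 0.5      # assumed feedback gain (the accelerator/brake strength)
TARGET = 10.0  # assumed TF level (arbitrary units) for the target state
DT = 0.01      # integration time step (hours)

def steer(x0: float, hours: float = 100.0) -> float:
    """Integrate dx/dt = u - GAMMA * x under proportional feedback u."""
    x = x0
    for _ in range(int(hours / DT)):
        u = max(0.0, K_P * (TARGET - x))  # production cannot go negative
        x += (u - GAMMA * x) * DT
    return x

# Unlike an "arbitrary push," feedback brings very different starting
# points to the same level near the target. (Proportional control alone
# settles slightly below the target; integral action would close the gap.)
for start in (0.0, 2.0, 50.0):
    print(f"start = {start:5.1f}  ->  final TF level = {steer(start):.2f}")
```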

Del Vecchio’s work may have far-reaching implications for the way diseases are treated. Researchers could quickly create healthy heart cells for patients with a heart condition, or beta cells for diabetic patients.

Del Vecchio’s eyes light up when discussing the possibilities. “It’s clearly high risk,” she admits. “But we want to proceed in this direction because if it works, it will be highly impactful for society and hence extremely rewarding for us.”

Giving doctors the ability to quickly create stem cells could change the way many diseases are treated. The research Del Vecchio’s team is conducting represents just one example of how mechanical engineering researchers across a diverse range of specialties are developing new and innovative ways to deepen our understanding of disease and unlock new therapies to treat it. 

Diagnosing disease

Understanding disease starts with the cell. Assistant Professor Ming Guo, who serves as the d’Arbeloff Career Development Professor, is interested in decoding the differences between the mechanical properties of a healthy cell and a diseased cell as a way to develop a diagnostic tool. “Just by looking at a healthy cell compared to a diseased cell, you can tell that they’re different mechanically,” says Guo.

Stiffness in particular is a key trait for distinguishing diseased cells — for example, cancer cells have been shown to be soft, while asthma cells are often stiff. Currently, this information is probed with methods such as atomic force microscopy or optical tweezers, which apply either a mechanical tip or a focused laser beam to a patient’s tissue to measure cell properties. Guo and his colleagues have now developed a safer, less invasive way to analyze the mechanics of a cell and help formulate a diagnosis: simply take a small sample of cells and watch them under a standard optical microscope.

“We came up with the method by simply observing the movement of organelles in the cell,” says Guo. The team took videos of cells under a microscope and tracked the movements of individual organelles and particles within the cell at frame rates of 10 per second and higher. Then, by plugging these measured movements into a generalized form of the Stokes-Einstein equation, they could calculate the stiffness of a cell.
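In outline, that computation can be sketched in a few lines. The simplified Stokes-Einstein form and every parameter below are assumptions chosen for illustration, not Guo's published analysis; the point is the pipeline: track positions, compute the mean squared displacement (MSD), and convert it to an apparent stiffness.

```python
import numpy as np

# Hypothetical pipeline: organelle tracks -> MSD -> apparent modulus.
KB_T = 4.11e-21   # thermal energy at room temperature (J)
RADIUS = 0.25e-6  # assumed tracked-particle radius (m)

def mean_squared_displacement(track: np.ndarray, lag: int) -> float:
    """MSD at a given frame lag for a 2-D track of shape (n_frames, 2)."""
    disp = track[lag:] - track[:-lag]
    return float(np.mean(np.sum(disp**2, axis=1)))

def apparent_modulus(track: np.ndarray, lag: int) -> float:
    """Simplified generalized Stokes-Einstein estimate, in pascals:
    G ~ kB*T / (pi * a * MSD). Smaller fluctuations -> stiffer cell."""
    return KB_T / (np.pi * RADIUS * mean_squared_displacement(track, lag))

# Synthetic tracks (positions in meters, sampled at 10 frames per second):
rng = np.random.default_rng(0)
soft = np.cumsum(rng.normal(0, 20e-9, size=(300, 2)), axis=0)
stiff = np.cumsum(rng.normal(0, 5e-9, size=(300, 2)), axis=0)
for name, track in (("soft", soft), ("stiff", stiff)):
    print(f"{name} cell: apparent modulus ~ {apparent_modulus(track, 1):.0f} Pa")
```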

“We found that high frequency fluctuation can help us gauge cell stiffness and understand its mechanics,” says Guo. Understanding these mechanical properties can help doctors diagnose diseases on the spot. Guo has begun collaborating with doctors at Massachusetts General Hospital on applying this method to cancer and asthma cells. The hope is that doctors and researchers can test drug efficacy by measuring a cell’s mechanics before and after treatment.

Tracking disease

While improving diagnostic methods could help catch a disease early, tracking how diseases grow could be key to developing new therapeutic interventions. Roger Kamm, the Cecil and Ida Green Distinguished Professor, and his lab use a device that’s roughly the size of a quarter to track tumor cells as they leave the vascular network and eventually grow into tumors. These tiny microfluidic devices can help us understand how cancer metastasizes.

“We’ve developed a 3-D vascularized network in which we can track cancer cells inside a capillary,” says Kamm. “It’s about understanding how cell populations interact. We can watch the tumor cells escape from the vessel to invade the surrounding tissue.” 

The microfluidic device consists of two media channels on either side with 3-D hydrogel in the center. The gel is seeded with endothelial cells that form capillaries where the tumor cells are introduced. From there, Kamm and his team use microscopic imaging to watch every single movement the tumor cell makes. From intravasation — when a cancer cell enters the bloodstream — to extravasation — when a tumor cell leaves the bloodstream and becomes metastatic — the cell’s path is studied with painstaking precision.

“Using microfluidics, we can follow this process over time,” explains Kamm. “After the cell enters the metastatic organ, we can see how a single tumor cell starts to multiply over days and how it begins to form a metastatic tumor.”

Microfluidic devices allow Kamm to analyze the forces that inform the cancer cell’s behavior. Understanding whether tumor cells push or pull when they leave the capillary, and how endothelial and tumor cells interact mechanically, points to potential therapeutic targets for preventing or minimizing metastasis.

“We can look at all these different tumor cell lines, treat the different cells, and see how that affects the rate at which the tumor cells escape from the vasculature and grow,” says Kamm. This knowledge could open up new opportunities in treating cancer and even developing new immunotherapies.

Treating disease

Armed with more knowledge of how diseases grow and spread, researchers are better able to develop new ways to treat, and in some cases cure, disease. Among them is Assistant Professor Ellen Roche, who is taking a unique dual approach to treating heart disease using both mechanical and biological therapies.

“The idea is to mechanically assist the heart,” says Roche, who also serves as Helmholtz Career Development Professor at MIT’s Institute for Medical Engineering and Science. “Rather than take over its function we just assist and augment it using a biomimetic approach.”

Roche uses new techniques like soft robotics to develop devices that mimic both the tissue properties and the motion of the heart. One such device is a sleeve that wraps around the heart to assist with pumping. Soft robots like this sleeve use elastomeric materials and fluidic actuation to mimic an organ’s movement. “By smartly designing simple fluidics channels and reinforcing soft materials in just the right way, you can achieve very complex motion with just elastomeric changes, and pressurized air or water,” says Roche.

While working on mechanical therapies to treat things like congenital heart disease and heart attacks, Roche is also looking at how biological therapies can help in treating these diseases. She and her team are developing smart devices that provide localized drug delivery instead of systemic drug delivery.

“One of my main goals is to combine these mechanical and biological therapies and see how they interplay with each other,” says Roche. Understanding how these different therapies interact could help determine the best timing sequence for maximum efficacy. With the help of collaborators in the cardiac surgery group at Boston Children’s Hospital, Roche is creating and testing models for these therapies. “We really want to see if we can treat disease and recover function using polytherapy rather than just a mechanical or biological approach.”

Rehabilitation from disease

In instances when disease is not detected or treated in time, researchers are developing tools that assist in the recovery process. From optimizing the design of prosthetic feet to building cheaper wheelchairs, mechanical engineers are finding ways to improve the quality of life for those living with the aftermath of disease. This work also includes tools and devices that can be used in physical rehabilitation. One such tool is the MIT-MANUS — a robot developed by Professor Neville Hogan to help stroke victims recover and regain mobility.

“They say no two snowflakes are alike, well no two stroke patients are alike either,” says Hogan. “That makes the problem spectacularly complicated.” Hogan has collaborated with neurologists and neuroscientists on understanding the process of recovery and basic motor control in the brain. He used this knowledge to develop robots that interact with stroke patients and help them regain control of their movements.

MIT-MANUS was originally designed to help restore motor function in stroke patients’ shoulders and elbows. Patients strap their forearm into a brace attached to a robotic arm and grasp onto a controller connected to a video screen. On the screen, a video game provides patients with prompts to move their arm and wrist. If the patient is unable to fully move their arm on their own, MIT-MANUS provides guidance and assists their movements. The robot then tracks and stores this data for physical therapists and specialists to analyze.
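The guide-and-assist behavior can be sketched with a simple impedance-style control law, in the spirit of the impedance control approach Hogan is known for. The gains, deadband, and numbers below are illustrative assumptions, not the actual MIT-MANUS controller.

```python
import numpy as np

# Assist-as-needed sketch: pull gently toward the on-screen target only
# when the patient falls behind. All parameter values are assumed.

STIFFNESS = 60.0  # N/m of virtual spring toward the target (assumed)
DAMPING = 8.0     # N*s/m resisting velocity (assumed)
DEADBAND = 0.02   # m; within this distance the patient moves unassisted

def assist_force(pos, vel, target):
    """Impedance-style assist: spring toward the target plus damping."""
    pos, vel, target = map(np.asarray, (pos, vel, target))
    error = target - pos
    if np.linalg.norm(error) < DEADBAND:
        return np.zeros_like(error)  # patient is on track: no help needed
    return STIFFNESS * error - DAMPING * vel

# One control tick: hand at (0.10, 0.00) m, nearly still, target at
# (0.20, 0.05) m. Prints the assisting force vector in newtons.
print(assist_force([0.10, 0.0], [0.0, 0.01], [0.20, 0.05]))
```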

“In clinical trials of MIT-MANUS we found that there was a reduction of impairment in joints exercised through use of the robot,” says Hogan. Over the years the scope of this robot-aided therapy for stroke victims has grown beyond hands and arms. Hogan and his collaborators have put together a robotic gym that helps deliver localized therapy to various limbs and joints throughout the body.

Whether it’s constructing large robots like the MIT-MANUS to help rehabilitate stroke victims, tracking the minuscule movements of organelles in the cell, or using genetic circuits to create stem cells, mechanical engineers are shaping both our fundamental understanding of disease and the way in which doctors approach treatments and therapies.
