In a suddenly remote spring, library support services carry on

While the MIT Libraries’ physical spaces and tangible collections are currently inaccessible, their network of people, services, and resources has mobilized behind the scenes to ensure that Institute learning and research continue despite the disruptions of Covid-19.

Since mid-March, the MIT Libraries have provided only services and resources that can be accessed remotely. Library staff — like many across MIT — had to quickly pivot to a new reality, finding creative ways to provide the expertise and resources the community needs now.

Expanding access to digital content

Without access to physical collections, library staff have had to navigate a complex landscape of publishers, platforms, and copyright to access the materials MIT students, faculty, and researchers depend on.

Once physical library locations closed, staff sprang into action to identify alternatives to print resources and to fulfill requests from the MIT community and beyond:

  • Staff have loaded more than 300,000 new e-books and added 475,000 links to digital versions of materials in the catalog through the HathiTrust Digital Library.

  • The “Covid Collections Group” created a guide to dozens of free and expanded resources for textbooks, e-books, journals, film, and music through offers from publishers during Covid-19. 

  • 1.4 million titles are available through the Internet Archive’s National Emergency Library.

  • Staff expedited 150 purchases of materials requested by the MIT community, including technical standards related to face mask manufacturing.

  • Article requests, which have seen a 33 percent increase during the closure, have been filled at a rate of 88 percent, with an average turnaround time of nine hours.

  • Staff fulfilled 90 percent of lending requests despite the difficult circumstances, with less than a half-day turnaround, on average. 

The rapid move to remote work has required a nimble, creative response from library staff, but the benefits of pivoting to a digital-first model could last far beyond Covid-19.

“Providing comprehensive digital access to content has been a foundational part of our vision for the future of research libraries,” says Chris Bourg, director of the MIT Libraries. “But I think our current crisis, where we’ve been forced to quickly adapt to a remote environment, has really thrown into relief how important it is. Research and learning depend on more open and equitable access to knowledge, and that will be true long after this crisis passes.”

Here to help, wherever you are

Some library services have had to adjust to provide more flexibility: recognizing that plans change and time zones and working hours vary, the libraries increased the hours the AskUs chat service is available, and extended the length of time interlibrary borrowing requests can be downloaded. Others have adapted easily — such as providing expertise in a specific subject area, data management, or copyright — with the help of a few new tools.

Daniel Sheehan, program head for GIS and Statistical Software Services, describes a recent morning’s work: “I got a couple of economics grad students — one in France — together with a political science grad student to talk over Zoom about redistricting software while getting real-time help from a colleague over Slack. An undergrad needed help accessing the mapping software ArcGIS Pro remotely on one of our GIS and Data Lab computers, so I got her going with screen sharing. Then I looked at data via Dropbox from an EAPS grad student in advance of a Zoom meeting he requested. It’s not the same as being on campus in person, but the technology seems to be working well in this strange time.”

All in it together: Support across the community

Beyond one-on-one help, library departments are finding ways to adapt to current needs, often requiring collaborations between teams or across the Institute. The libraries are gearing up for MIT-wide electronic thesis submission this spring. After running a successful e-thesis pilot with several departments, labs, and centers last year, library staff will take an existing tool developed for digital archives transfers and test it to ensure it’s ready for widespread production to support the graduating class of 2020.

Distinctive Collections, meanwhile, is working with several classes, including Debbie Douglas’s History of MIT, to find ways of incorporating documentary efforts into assignments, so students take an active role in archiving our current experience. Staff are also web-archiving MIT websites and working to collect experiences across the Institute during this unprecedented time.

Seeing opportunity amid the crisis, staff have found ways to streamline processes, call on diverse experts to problem-solve jointly, and employ their sophisticated understanding of the online information landscape to help MIT users. Looking to the future, they could brainstorm with faculty on new ways to bring library expertise into remote courses.

“At the libraries, our vision is for a more open, equitable, and interactive information ecosystem,” says Karrie Peterson, head of Liaison, Instruction, and Reference Services. “Librarians — online and asynchronously — are still working toward that vision, whether helping researchers with data-sharing practices or assisting student teams in exploring sustainability issues in their literature reviews. The current situation has highlighted the importance of open and equitable knowledge sharing and provides us with an extraordinary opportunity to move in that direction.”

Study finds stronger links between automation and inequality

This is part 3 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. 

Modern technology affects different workers in different ways. In some white-collar jobs — designer, engineer — people become more productive with sophisticated software at their side. In other cases, forms of automation, from robots to phone-answering systems, have simply replaced factory workers, receptionists, and many other kinds of employees.

Now a new study co-authored by an MIT economist suggests automation has a bigger impact on the labor market and income inequality than previous research would indicate — and identifies the year 1987 as a key inflection point in this process, the moment when jobs lost to automation stopped being replaced by an equal number of similar workplace opportunities.

“Automation is critical for understanding inequality dynamics,” says MIT economist Daron Acemoglu, co-author of a newly published paper detailing the findings.

Within industries adopting automation, the study shows, the average “displacement” (or job loss) from 1947 to 1987 was 17 percent of jobs, while the average “reinstatement” (new opportunities) was 19 percent. But from 1987 to 2016, displacement was 16 percent, while reinstatement was just 10 percent. In short, those factory positions or phone-answering jobs are not coming back.

“A lot of the new job opportunities that technology brought from the 1960s to the 1980s benefitted low-skill workers,” Acemoglu adds. “But from the 1980s, and especially in the 1990s and 2000s, there’s a double whammy for low-skill workers: They’re hurt by displacement, and the new tasks that are coming are coming slower and benefitting high-skill workers.”

The new paper, “Unpacking Skill Bias: Automation and New Tasks,” will appear in the May issue of the American Economic Association Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

Low-skill workers: Moving backward

The new paper is one of several studies Acemoglu and Restrepo have conducted recently examining the effects of robots and automation in the workplace. In a just-published paper, they concluded that across the U.S. from 1993 to 2007, each new robot replaced 3.3 jobs.

In still another new paper, Acemoglu and Restrepo examined French industry from 2010 to 2015. They found that firms that quickly adopted robots became more productive and hired more workers, while their competitors fell behind and shed workers — with jobs again being reduced overall.

In the current study, Acemoglu and Restrepo construct a model of technology’s effects on the labor market, while testing the model’s strength by using empirical data from 44 relevant industries. (The study uses U.S. Census statistics on employment and wages, as well as economic data from the Bureau of Economic Analysis and the Bureau of Labor Statistics, among other sources.)

The result is an alternative to the standard economic modeling in the field, which has emphasized the idea of “skill-biased” technological change — meaning that technology tends to benefit select high-skilled workers more than low-skill workers, helping the wages of high-skilled workers more, while the value of other workers stagnates. Think again of highly trained engineers who use new software to finish more projects more quickly: They become more productive and valuable, while workers lacking synergy with new technology are comparatively less valued.  

However, Acemoglu and Restrepo think even this scenario, with the prosperity gap it implies, is still too benign. Where automation occurs, lower-skill workers are not just failing to make gains; they are actively pushed backward financially. Moreover, Acemoglu and Restrepo note, the standard model of skill-biased change does not fully account for this dynamic; it estimates that productivity gains and real (inflation-adjusted) wages of workers should be higher than they actually are.

More specifically, the standard model implies an estimate of about 2 percent annual growth in productivity since 1963, whereas annual productivity gains have been about 1.2 percent; it also estimates wage growth for low-skill workers of about 1 percent per year, whereas real wages for low-skill workers have actually dropped since the 1970s.

“Productivity growth has been lackluster, and real wages have fallen,” Acemoglu says. “Automation accounts for both of those.” Moreover, he adds, “Demand for skills has gone down almost exclusively in industries that have seen a lot of automation.”

Why “so-so technologies” are so, so bad

Indeed, Acemoglu says, automation is a special case within the larger set of technological changes in the workplace. As he puts it, automation “is different than garden-variety skill-biased technological change,” because it can replace jobs without adding much productivity to the economy.

Think of a self-checkout system in your supermarket or pharmacy: It reduces labor costs without making the task more efficient; the difference is simply that the work is now done by you rather than by paid employees. These kinds of systems are what Acemoglu and Restrepo have termed “so-so technologies,” because of the minimal value they offer.

“So-so technologies are not really doing a fantastic job, nobody’s enthusiastic about going one-by-one through their items at checkout, and nobody likes it when the airline they’re calling puts them through automated menus,” Acemoglu says. “So-so technologies are cost-saving devices for firms that just reduce their costs a little bit but don’t increase productivity by much. They create the usual displacement effect but don’t benefit other workers that much, and firms have no reason to hire more workers or pay other workers more.”

To be sure, not all automation resembles self-checkout systems, which were not around in 1987. Automation at that time consisted more of printed office records being converted into databases, or machinery being added to sectors like textiles and furniture-making. Robots became a more common addition to heavy industrial manufacturing in the 1990s. Automation is a suite of technologies, continuing today with software and AI, which are inherently worker-displacing.

“Displacement is really the center of our theory,” Acemoglu says. “And it has grimmer implications, because wage inequality is associated with disruptive changes for workers. It’s a much more Luddite explanation.”

After all, the Luddites — British textile mill workers who destroyed machinery in the 1810s — may be synonymous with technophobia, but their actions were motivated by economic concerns; they knew machines were replacing their jobs. That same displacement continues today, although, Acemoglu contends, the net negative consequences of technology for jobs are not inevitable. We could, perhaps, find more ways to produce job-enhancing technologies, rather than job-replacing innovations.

“It’s not all doom and gloom,” says Acemoglu. “There is nothing that says technology is all bad for workers. It is the choice we make about the direction to develop technology that is critical.”

Robots help some firms, even while workers across industries struggle

This is part 2 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. 

Overall, adding robots to manufacturing reduces jobs — by more than three per robot, in fact. But a new study co-authored by an MIT professor reveals an important pattern: Firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly.

The study, by MIT economist Daron Acemoglu, examines the introduction of robots to French manufacturing in recent decades, illuminating the business dynamics and labor implications in granular detail.

“When you look at use of robots at the firm level, it is really interesting because there is an additional dimension,” says Acemoglu. “We know firms are adopting robots in order to reduce their costs, so it is quite plausible that firms adopting robots early are going to expand at the expense of their competitors whose costs are not going down. And that’s exactly what we find.”

Indeed, as the study shows, a 20 percentage point increase in robot use in manufacturing from 2010 to 2015 led to a 3.2 percent decline in industry-wide employment. And yet, for firms adopting robots during that timespan, employee hours worked rose by 10.9 percent, and wages rose modestly as well.

A new paper detailing the study, “Competing with Robots: Firm-Level Evidence from France,” will appear in the May issue of the American Economic Association Papers and Proceedings. The authors are Acemoglu, who is an Institute Professor at MIT; Clair Lelarge, a senior research economist at the Banque de France and the Centre for Economic Policy Research; and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

A French robot census

To conduct the study, the scholars examined 55,390 French manufacturing firms, of which 598 purchased robots during the period from 2010 to 2015. The study uses data provided by France’s Ministry of Industry, client data from French robot suppliers, customs data about imported robots, and firm-level financial data concerning sales, employment, and wages, among other things.

The 598 firms that did purchase robots, while comprising just 1 percent of manufacturing firms, accounted for about 20 percent of manufacturing production during that five-year period.

“Our paper is unique in that we have an almost comprehensive [view] of robot adoption,” Acemoglu says.

The manufacturing industries most heavily adding robots to their production lines in France were pharmaceutical companies, chemicals and plastic manufacturers, food and beverage producers, metal and machinery manufacturers, and automakers.

The industries investing least in robots from 2010 to 2015 included paper and printing, textiles and apparel manufacturing, appliance manufacturers, furniture makers, and minerals companies.

The firms that did add robots to their manufacturing processes became more productive and profitable, and the use of automation lowered their labor share — the part of their income going to workers — by between roughly 4 and 6 percentage points. However, because their investments in technology fueled more growth and more market share, they added more workers overall.

By contrast, the firms that did not add robots saw no change in the labor share, and for every 10 percentage point increase in robot adoption by their competitors, these firms saw their own employment drop 2.5 percent. Essentially, the firms not investing in technology were losing ground to their competitors.

This dynamic — job growth at robot-adopting firms, but job losses overall — fits with another finding Acemoglu and Restrepo made in a separate paper about the effects of robots on employment in the U.S. There, the economists found that each robot added to the workforce essentially eliminated 3.3 jobs nationally.

“Looking at the result, you might think [at first] it’s the opposite of the U.S. result, where the robot adoption goes hand in hand with destruction of jobs, whereas in France, robot-adopting firms are expanding their employment,” Acemoglu says. “But that’s only because they’re expanding at the expense of their competitors. What we show is that when we add the indirect effect on those competitors, the overall effect is negative and comparable to what we find in the U.S.”

Superstar firms and the labor share issue

The competitive dynamics the researchers found in France resemble those in another high-profile piece of economics research by MIT professors. In a recent paper, MIT economists David Autor and John Van Reenen, along with three co-authors, published evidence indicating that the decline in the U.S. labor share as a whole was driven by gains made by “superstar firms,” which find ways to lower their labor share and gain market power.

While those elite firms may hire more workers and even pay relatively well as they grow, labor share declines in their industries, overall.

“It’s very complementary,” Acemoglu observes about the work of Autor and Van Reenen. However, he notes, “A slight difference is that superstar firms [in the work of Autor and Van Reenen, in the U.S.] could come from many different sources. By having this individual firm-level technology data, we are able to show that a lot of this is about automation.”

So, while economists have offered many possible explanations for the decline of the labor share generally — including technology, tax policy, changes in labor market institutions, and more — Acemoglu suspects technology, and automation specifically, is the prime candidate, certainly in France.

“A big part of the [economic] literature now on technology, globalization, labor market institutions, is turning to the question of what explains the decline in the labor share,” Acemoglu says. “Many of those are reasonably interesting hypotheses, but in France it’s only the firms that adopt robots — and they are very large firms — that are reducing their labor share, and that’s what accounts for the entirety of the decline in the labor share in French manufacturing. This really emphasizes that automation, and in particular robots, is a critical part in understanding what’s going on.”

3 Questions: How MIT experienced the 1918-19 flu pandemic

Just over a century ago, the world grappled with a major pandemic when the H1N1 influenza virus infected about 500 million people in 1918 and 1919. When the virus first appeared, MIT had just relocated from Boston to its current campus in Cambridge, Massachusetts, and World War I was approaching its conclusion.

As the MIT community now grapples with Covid-19, the MIT Libraries’ Nora Murphy has been exploring archival materials related to the 1918 flu pandemic, which similarly disrupted life at the Institute. Murphy, the archivist for researcher services in the MIT Libraries’ Distinctive Collections department, has worked with numerous MIT courses, encouraging active learning and critical analysis using the Institute’s collections. She co-teaches the MIT and Slavery class with Craig Steven Wilder, the Barton L. Weller Professor of History. Here, she shares some of what she found of life during the 1918-19 pandemic, and offers insights on documenting our current crisis for the future.

Q: What materials have you been able to find, and what has stood out to you about that time at the Institute?

A: We did research on MIT’s response to the 1918 flu back in 2006, and, luckily, I have access to those notes. In 1918, the flu epidemic hit while MIT was in the midst of developing and offering training programs to prepare soldiers and officers to fight with U.S. Naval and Army forces in World War I. The Institute was balancing its normal academic program with more than seven of these war-related programs. To accommodate them, MIT was busy building temporary housing and research facilities — the buildings on the relatively new Cambridge campus were deemed insufficient for the task. Of course, that changed after the armistice was declared on Nov. 11, 1918. 

According to the 1918 Report of the President and The Tech, there was a three-week delay in the start of the fall 1918 semester at the request of federal and state authorities “due to prevalence of Spanish Influenza and Grippe which has spread throughout this section of the country.” In an Oct. 2, 1918, announcement of the postponement, The Tech editors write, “It is our aim to aid in every way possible the fight against this terrible disease which now seems to have passed its crisis.”

Q: Do you see any parallels between the 1918 flu pandemic and the current Covid-19 crisis at MIT?

A: One parallel would be social distancing efforts. Contemporary newspaper accounts show that MIT complied with emergency governmental regulations of local municipalities and delayed the start of the semester in 1918, and the opening of a newly constructed mess hall on campus was delayed to prevent the congregation of large numbers of people in one space.    

There is also a circular letter from President Richard Maclaurin to the students in the Dec. 21, 1918, issue of The Tech in which he refers to the “abnormal” conditions of the fall semester. While it’s not clear if he is referring to the effects of the flu epidemic or the wartime training programs, there did seem to be similar questions to those MIT is considering now of maintaining academic continuity during a major disruption. He writes that “[the faculty] will not adopt a policy that will involve a lowering of the Institute’s standards,” but acknowledges that the faculty would take the current conditions into account and notes some of the ways students could catch up to continue their normal academic progress.

Q: How do you, as an archivist, think about documenting this experience of the Covid-19 pandemic at MIT for the future?

A: Documenting our individual and collective experiences in these extraordinary times will help each of us and our successors to reflect on the decisions made in order to evaluate how we respond to a future “abnormal” event. It gives us the chance to acknowledge the strengths and weaknesses of our plans and the chance to share our experiences.

Currently, there are several efforts underway to document MIT’s experience with the Covid-19 pandemic, and more are likely to develop. MIT Distinctive Collections is web-archiving Institute websites and working to more broadly collect experiences across MIT. Community members are invited to submit all forms of personal reflections and firsthand accounts on our website. Debbie Douglas of the MIT Museum is leading an effort, in collaboration with members of the Class of 1970, leaders at the Institute, and Distinctive Collections, to document reflections of students in her History of MIT class, as well as those of alumni. In addition, Distinctive Collections is working with classes to find ways of incorporating documentary efforts into assignments.

We also strongly recommend that MIT academic and administrative offices record their pandemic-related responses and activities in their annual reports to the president. These reports are a wonderful resource for scholars.

The Institute Archives, which is part of Distinctive Collections, welcomes discussion of how and what to share with us, so that what the members of our MIT community are experiencing now — whether in labs on campus, at home, or wherever they have found a safe haven — is available for the future. Any questions can be sent to us at MITstory-covid19@mit.edu.

How many jobs do robots really replace?

This is part 1 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu.  

In many parts of the U.S., robots have been replacing workers over the last few decades. But to what extent, really? Some technologists have forecast that automation will lead to a future without work, while other observers have been more skeptical about such scenarios.

Now a study co-authored by an MIT professor puts firm numbers on the trend, finding a very real impact — although one that falls well short of a robot takeover. The study also finds that in the U.S., the impact of robots varies widely by industry and region, and may play a notable role in exacerbating income inequality.

“We find fairly major negative employment effects,” MIT economist Daron Acemoglu says, although he notes that the impact of the trend can be overstated.

From 1990 to 2007, the study shows, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percentage points, with some areas of the U.S. affected far more than others.

This means each additional robot added in manufacturing replaced about 3.3 workers nationally, on average.

That increased use of robots in the workplace also lowered wages by roughly 0.4 percent during the same time period.

“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.
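A back-of-envelope sketch helps connect the per-thousand-workers figure and the 0.2-percentage-point drop in the employment-to-population ratio to the estimate of roughly 3.3 jobs lost per robot. The population and employment numbers below are illustrative assumptions, not values from the study, so treat the output only as an order-of-magnitude check:

```python
# Back-of-envelope check: how "one robot per 1,000 workers lowers the
# employment-to-population ratio by ~0.2 percentage points" can translate
# into roughly 3.3 jobs lost per robot.  Population and employment figures
# below are illustrative assumptions, not numbers taken from the study.

population = 250_000_000   # assumed U.S. working-age population
workers = 150_000_000      # assumed number of employed workers

robots_added = workers / 1_000       # one robot per 1,000 workers
ratio_drop = 0.002                   # 0.2 percentage points, as a fraction
jobs_lost = ratio_drop * population  # fall in employment implied by the ratio change

print(f"Robots added:        {robots_added:,.0f}")
print(f"Jobs lost:           {jobs_lost:,.0f}")
print(f"Jobs lost per robot: {jobs_lost / robots_added:.1f}")
```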

The paper, “Robots and Jobs: Evidence from U.S. Labor Markets,” appears in advance online form in the Journal of Political Economy. The authors are Acemoglu and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.

Displaced in Detroit

To conduct the study, Acemoglu and Restrepo used data on 19 industries, compiled by the International Federation of Robotics (IFR), a Frankfurt-based industry group that keeps detailed statistics on robot deployments worldwide. The scholars combined that with U.S.-based data on population, employment, business, and wages, from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics, among other sources.

The researchers also compared robot deployment in the U.S. to that of other countries, finding it lags behind that of Europe. From 1993 to 2007, U.S. firms actually did introduce almost exactly one new robot per 1,000 workers; in Europe, firms introduced 1.6 new robots per 1,000 workers.

“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.

In the U.S., four manufacturing industries account for 70 percent of robots: automakers (38 percent of robots in use), electronics (15 percent), the plastics and chemical industry (10 percent), and metals manufacturers (7 percent).

The study analyzed the impact of robots across 722 commuting zones in the continental U.S. — essentially metropolitan areas — and found considerable geographic variation in how intensively robots are utilized.

Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.

“Different industries have different footprints in different places in the U.S.,” Acemoglu observes. “The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”

In commuting zones where robots were added to the workforce, each robot replaces about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country — by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.

The inequality issue

In conducting the study, Acemoglu and Restrepo went to considerable lengths to see if the employment trends in robot-heavy areas might have been caused by other factors, such as trade policy, but they found no complicating empirical effects.

The study does suggest, however, that robots have a direct influence on income inequality. The manufacturing jobs they replace come from parts of the workforce without many other good employment options; as a result, there is a direct connection between automation in robot-using industries and sagging incomes among blue-collar workers.

“There are major distributional implications,” Acemoglu says. When robots are added to manufacturing plants, “The burden falls on the low-skill and especially middle-skill workers. That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”

So while claims about machines wiping out human work entirely may be overstated, the research by Acemoglu and Restrepo shows that the robot effect is a very real one in manufacturing, with significant social implications.

“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu says. “But it does imply that automation is a real force to be grappled with.”

Myth-busting on YouTube

In mid-March, Izabella Pena received a WhatsApp text from a friend in Indianapolis, Indiana. “He said, ‘Oh, I got your audio message from a priest in rural São Paulo,’” remembers Pena, a postdoc in Department of Biology Professor David Sabatini’s lab at the Whitehead Institute for Biomedical Research.

Pena had recorded the five-minute audio message about risk groups and the novel coronavirus SARS-CoV-2 for her family’s text thread after she heard one too many comments about how only the elderly caught the more severe forms of Covid-19. She never imagined it would spread like wildfire. “I realized the power of these tools,” says Pena of WhatsApp. “You can really reach people and share your information.”

While Pena’s message was fact-checked and scientifically correct, a lot of the information being shared on these platforms isn’t. In Pena’s native Brazil, the messaging platform WhatsApp has played an outsized role in the spread of fake news concerning SARS-CoV-2. Seeing the onslaught of misinformation, Pena first panicked. Then she fought back, choosing to use the vehicles of fake news to spread facts. “We scientists need to learn how to use WhatsApp, YouTube, and Twitter to communicate,” says Pena. “Because that’s how people are getting their information.”

At first, Pena’s misinformation-busting efforts were focused on friends and family. She recorded short audio messages in Portuguese to answer their questions and try to convince them that Covid-19 isn’t just another cold. The rapid spread of her audio messages, which alerted listeners about the importance of physical isolation and risk groups, sparked an idea: to take her science communication efforts from WhatsApp to YouTube, where she could reach a larger audience. Video also has the benefit of being a visual medium, where there’s a face attached to the information being shared. “I think that if people see you, there’s more reliability,” says Pena.

Pena uploaded her first video in late March, answering questions she had received via WhatsApp about Covid-19. Since then, she’s uploaded another five videos and is aiming to release one a week while the pandemic lasts. Many of these videos are in direct response to the messages she gets from viewers. “For example, everybody is asking when is life going to go back to normal, and I think life is only going to go back to ‘normal’ when there’s a vaccine,” says Pena. On April 10, she uploaded a video focused on vaccines, explaining what exactly a vaccine is and how they are made.

On camera, Pena is warm and inviting, delivering updated information about the coronavirus’s biology and epidemiology without clunky jargon and with an abundance of analogies. In a recent video that delved into the biology of SARS-CoV-2 and the different treatments being explored for the virus, she compared the human protein TMPRSS2, which primes the virus’ spike protein to enable the fusion of the virion to a cell’s membrane, to the scissors you use to open a tough plastic snack bag.

In using analogies, Pena is following the advice of Paulo Freire, a famed Brazilian educator and one of her personal idols. “Freire says that the best way to teach something very complicated to someone is to try to bring that concept close to their lives,” says Pena.

Trying to make complex and novel science digestible requires time. According to Pena, just writing the script and developing the analogies takes a couple of hours. “I collect all the information I need before I write the script,” says Pena, whose videos include a long list of references in the description, an unexpected sight on YouTube. “Then I film and edit the video. It all takes a few hours.”

Pena’s videos are filmed late at night because she continues to perform research during the pandemic, mostly virtually. But, she explains, “I’m part of the essential personnel in my lab.” Pena’s work in the Sabatini Lab focuses on the lysosome, the garbage disposal unit of cells that breaks down old cell parts and waste to recycle nutrients. It’s the perfect organelle for someone who has always enjoyed cell metabolism.

“I’ve always liked how chemicals in the cells are made and broken down,” says Pena. Her PhD research at the University of Campinas in Brazil investigated how metabolic problems in the brain could cause epilepsy. Since joining the Sabatini lab in 2018, Pena has studied neurodegenerative disorders, like Parkinson’s and Huntington’s, and what role the lysosome plays in them. “For neurodegenerative diseases, there’s a lot of evidence that there’s lysosome influence,” she says. “There are many lysosome gene mutations associated with these disorders, so it’s a nice target to look at.”

Mostly working from home in Cambridge, Massachusetts, Pena is analyzing data and writing grants and papers, balancing her research with her “after-hours job” as a science communicator. “It’s a lot of commitment and dedication, but I believe this is very important, so I’ll keep doing it,” she says. “We are living in a hard time, where science and education are constantly under attack. As scientists, we need to help inform people with accurate and life-saving information.”

Recently, Pena added another job title to her resumé: vice-president of ContraCovid, an initiative to make coronavirus information accessible to Latino and immigrant individuals. “We are sharing information in four languages: English, Portuguese, Spanish, and Haitian Creole, to benefit the community here in the U.S. and abroad,” says Pena. But ContraCovid wants to do more, including creating videos like Pena’s in other languages and recruiting more scientists, so that their materials can reach more and more people.

Accessibility of information is at the front of Pena’s mind when she sits down to make a new video. “If you look at how scientists communicate with each other, it’s a bit intimidating,” says Pena. The jargon and the excess of data make it hard for the general public to locate the main takeaways. Pena focuses on stripping away the excess and delivering the message, such as the importance of flattening the curve, in an easily digestible manner.

When imagining her viewers, Pena thinks of her mother. “My mom is not a scientist, but she’s super into technology like YouTube and WhatsApp,” says Pena, who usually sends her audio clips and videos to her mom first, only uploading them once her mom gives the go-ahead. “My mom helps a lot with sharing the videos because she has lots of followers,” Pena laughs. That’s actually how her involvement in Covid-19 outreach started: with her mom wildly sharing Pena’s audio message about risk groups with her numerous followers. 

How growth of the scientific enterprise influenced a century of quantum physics

Austrian quantum theorist Erwin Schrödinger first used the term “entanglement,” in 1935, to describe the mind-bending phenomenon in which the actions of two distant particles are bound up with each other. Entanglement was the kind of thing that could keep Schrödinger awake at night; like his friend Albert Einstein, he thought it cast doubt on quantum mechanics as a viable description of the world. How could it be real?  

And yet, evidence keeps accumulating that entanglement exists. Two years ago MIT Professor David Kaiser and an international team used lasers, single-photon detectors, atomic clocks, and huge telescopes collecting light that had been released by distant quasars 8 billion years ago to further refine tests of quantum entanglement. The researchers thus effectively ruled out a potential objection, that the appearance of entanglement might derive from some correlation between the selection of measurements to perform and the behavior of the particles being tested.

Yes, entanglement defies our intuition, but at least scientists can keep learning about it, Kaiser notes.

“Schrödinger could only stay up all night,” says Kaiser, meaning that theorists in the 1930s just had “pencil and paper and very hard-thought calculations and compelling analogies” to guide them, but little physical evidence. Today, by contrast, “we have instruments to study these questions in ways that weren’t even possible experimentally or empirically until recently.”

Now Kaiser, a professor of physics at MIT and the Germeshausen Professor of the History of Science in MIT’s Program in Science, Technology, and Society, has written a new history of the subject, “Quantum Legacies: Dispatches from an Uncertain World,” published this month by the University of Chicago Press. Moving between vignettes of key physicists, original research about the growth of the field, and accounts of his own work in cosmology, Kaiser emphasizes the vast changes in the field over time.

“There have been really quite dramatic shifts in the fortunes of the discipline,” says Kaiser, who says he aimed to present readers with “a different kind of story, with different through-lines, over a very turbulent century.”

The physics boom and the crash

Indeed, many histories of quantum physics have been telescopic in form, focusing on the field’s most well-known stars: the foundational quantum theorists Niels Bohr, Paul Dirac, Werner Heisenberg, and Schrödinger, with Einstein usually featured as a famous quantum skeptic. Before the physics community was thrown into turmoil by world war, these scientists developed quantum mechanics and identified its most baffling features — including entanglement and the uncertainty principle (the trade-off in accuracy when measuring things like the position and momentum of a particle).

We still struggle to interpret these concepts, but much else has changed. In particular, Kaiser emphasizes, physics witnessed a quarter-century of unprecedented growth starting in the 1940s, especially when students flooded back into America’s universities after World War II.

“We trained more people in physics in that quarter-century after the war than had previously been trained, cumulatively, since the dawn of time,” Kaiser says of this growth phase.

Meanwhile, massive particle colliders changed the methods of physics and yielded new knowledge about subatomic structures. Huge teams collaborated on experiments, strictly intent on grinding out empirical advances. More people than ever were becoming physicists, but seemingly fewer than ever pondered the “philosophical” problems raised by quantum physics, which became unfashionable.

“It was more than a pendulum swing,” Kaiser says. “Physics saw these quite dramatic shifts in what even counted as a real question.”

Kaiser carefully documents this shift through close readings of physics textbooks, showing how an ethos of pragmatic calculation became dominant. Textbook authors, he adds, are “always making a range of value judgements: What’s an appropriate topic, what’s an appropriate method? What should we be asking questions about? What is ‘merely’ philosophical?”

And then the physics bubble burst: Funding, enrollment numbers, and jobs in the field all dropped precipitously in the early 1970s, due to a slowing economy and decreased federal funding.  

“Those numbers crashed for virtually every field of study across the academy, but none fell faster than physics,” Kaiser says.

The Tao of large colliders

Perhaps surprisingly, that 1970s job-market crunch helped revive interest in the quantum curiosities of the 1930s. As Kaiser detailed in his 2011 book “How the Hippies Saved Physics” — which grew out of this book project — some key advances toward understanding entanglement came from then-marginal physicists who, lacking fast-track research opportunities, had relative freedom to explore neglected issues. 

Such unconventional thinking soon began to influence teaching as well, Kaiser notes in “Quantum Legacies.” Fritjof Capra’s period bestseller “The Tao of Physics,” linking Eastern religion and quantum mysteries, is known today as a New Age staple — but it landed on academic syllabi in the 1970s, thanks to physics professors eager to lure students back to their classrooms.

Since the 1970s, quantum physics has seen multiple mini-eras zip by. Defense spending spurred a 1980s recovery in physics, but when the U.S. Congress killed the Superconducting Super Collider project in 1993, physicists in some branches of the discipline could not generate many new experimental results — until the Large Hadron Collider came online in 2008. Multiple recent academic generations have thus experienced physics as a turbulent discipline, with its fortunes tied to distant politics.

“Sometimes people got caught out of sync, they entered physics during boom times and, through no fault of their own, the opportunities vanished before they got their degrees,” Kaiser says. “And we’ve seen that happen twice in this country in the last half-century.”

So while the likes of Schrödinger could make progress with a pencil and paper, the material conditions of physics matter immensely as far as contemporary progress in the discipline goes.

“The ideas matter a great deal,” Kaiser says. “But the ideas are embedded in a changing world.”

“Quantum Legacies” has drawn praise from scholars; Nobel-winning physicist Kip Thorne of Caltech praises the book’s “remarkable set of vignettes about major developments in physics and cosmology of the past century,” which “beautifully integrate science with human history.” Award-winning novelist Nell Freudenberger notes Kaiser’s “talent for uncovering connections between otherworldly ideas and the social and political worlds in which they take shape,” which, she continues, makes for “a simply spellbinding guide to the mysteries of the universe.”

For his part, Kaiser hopes readers will ponder the “doubleness” of scientists — they hope to find eternal answers, despite being bound by their era’s tools and assumptions. And while “Quantum Legacies” explores the lives of some individual physicists, such as Dirac, Kaiser also hopes readers will appreciate how thoroughly quantum physics has been a collaborative enterprise.

“In science there is a tradition of writing about the single genius, but quantum mechanics from day one has required an ensemble cast,” Kaiser says, adding, “When we study institutions, generations, and cohorts, I find that more valuable than thinking about these unattainable geniuses on the mountaintop — which is always a fable, but it’s an especially poor-fitting fable for this set of developments.”

Consider, he says, that more than 15,000 physicists published papers relating to the Higgs boson — exploring how subatomic particles acquire mass — over a 50-year span. But only after the Large Hadron Collider started running could scientists find evidence for it.

“It makes me think about my own [work] in a different way,” Kaiser says. “What have I not been able to think of, that the next generation will open up? I find that much more exciting, as a human story, as a conceptual story, than focusing on a single lone genius.”

Meet the MIT bilinguals: Dual history and planetary science major Charlotte Minsky

“I wasn’t someone who grew up thinking of MIT as my dream school. But, at the end of the day, I knew I couldn’t say no to MIT.”

It was not a lack of enthusiasm or appreciation for MIT that gave Charlotte Minsky pause when she entered her first year. She had two academic “loves” and expected her love of science to lead her to a career in the sciences after MIT.

“I was afraid I would lose my ability to spend as much time on history as I wanted to if I came to MIT,” says Minsky, now a senior. “That was my main fear. History was something I cared about, but it was completely separate from the science.”
 
What a difference four years can make

Minsky may be the only student graduating this spring with a major in three schools at MIT; she will earn a double major in earth, atmospheric, and planetary sciences (EAPS), and in history and computer science. This fall, Minsky will study the history and philosophy of science at the University of Cambridge, England, having recently been awarded a Gates Cambridge Scholarship. After earning her MPhil, she plans to earn a doctorate in planetary science.  

Combining humanistic and scientific/technical forms of knowledge and exploration is a path increasingly championed at MIT, and referred to as a “bilingual” education. In her case, Minsky observes, this approach developed over time. In her first year, she locked into the Institute’s HASS (humanities, arts, and social sciences) requirements by taking history classes. She also selected an Undergraduate Research Opportunities Program research project in astronomy that involved searching for an as-yet undetected planet.

“It was awesome,” says Minsky of her astronomy research. “We didn’t find the planet, but it was my first exposure to planetary science and astronomy research and an introduction to the EAPS department, Course 12. EAPS at MIT combines many different fields — astronomy, oceanography, geology, atmospheric chemistry — all very different. It felt like a way to declare a major without having to declare a major. There were so many things I could explore.”

Making history

From the beginning, Minsky was keen to sample everything MIT offered. In her sophomore year she enrolled in the initial class of the MIT and Slavery Project, an ongoing undergraduate history research effort exploring the Institute’s entanglement with the legacy of slavery in science and engineering fields and in the lives of some early leaders. In this project, students are writing a formerly unexplored aspect of the history of MIT, for MIT. Minsky calls the course “transformative.”

“That was the first course that started to make me think how history and science are connected,” she says, “and that it’s actually imperative for us to examine science and technology in an historical context — in particular, to understand the ways that science and technology have at times benefited from, and perpetuated, unjust or inequitable social structures.”

In the fall of her junior year, Minsky discovered something she only then realized she had been missing: a role model. She recalls that everything changed when she walked into the classroom of Sara Seager, a professor of physics and planetary science. The class was not only Minsky’s introduction to exoplanets — a subject on which she now plans to focus professionally — but also the first time a female MIT professor had led one of her classes solo. A year earlier, Minsky had taken a class in which teaching responsibilities were shared between a male and a female professor.

“Those two are still the only female STEM professors I’ve had,” says Minsky. “And I did not realize until I got to Professor Seager’s class that the absence of a role model was one of the reasons I could not quite see a future for myself in scientific research academia.”

Another of MIT’s humanistic courses that semester — Theories and Methods in the Study of History — deepened Minsky’s devotion to history, and she began to see how history also connects with planetary science. Her path forward began to shine brightly.

“Exoplanets are just really cool”

Exoplanets — planets orbiting other stars — can almost never be seen directly. One of the main ways they’re detected is by observing a star dim slightly when an exoplanet passes in front of it. Minsky’s goal is to study the atmospheres of these exoplanets, which can also give us a better understanding of our own atmosphere.

“Seeing that different brightness is the equivalent of standing here in Boston, looking across the country, and seeing a moth fly in front of a street light,” says Minsky. “The fact that we can do this, that we can find other worlds in other solar systems, kind of blew my mind. And then, to go a step further, and say that not only can we see the moth fly in front of the street light but we can figure out what the dust on the moth’s wing is made of! That’s my analogy for the atmosphere. You can study the envelopes of gas around these planets that hover around stars that are hundreds of light years away.”
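The geometry behind that analogy can be made concrete. The fractional dimming during a transit is roughly the ratio of the planet’s and the star’s cross-sectional areas, (R_planet / R_star)^2. A minimal sketch, using standard textbook radii that are illustrative values rather than figures from this article, shows why an Earth-size transit dims a Sun-like star by less than one part in ten thousand:

```python
# Transit depth: the fractional dimming of a star as a planet crosses its
# disk is (R_planet / R_star)**2.  Radii are standard textbook values,
# used here only for illustration.

R_SUN_KM = 696_000
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional drop in starlight during a transit."""
    return (r_planet_km / r_star_km) ** 2

for name, radius in [("Jupiter-size", R_JUPITER_KM), ("Earth-size", R_EARTH_KM)]:
    depth = transit_depth(radius)
    print(f"{name} planet: depth = {depth:.6f} ({depth * 1e6:.0f} parts per million)")
```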

When asked about the value of fluency in both humanistic and technical thinking, Minsky says studying history alongside planetary science enables her to be “reflexive,” and to see the mutually informing relationships between scientific work, history, and society. Studying science has made her a better historian, she says, and studying history has made her a better scientist.

“The fields of history and planetary science have very different ways of thinking and working, and they tell different types of stories. Studying both fields, I’ve been able to break out of the constraints of what each field considers the right way to find truth.

“In science in general, there are very strict procedures for how to conduct a study that creates a construction of scientific validity. But, the scientific process is actually messy and fuzzy, and there’s no such thing as objective truth. History, and the history of science, has made me aware of that fact — and more mindful that when we’re doing science, we are always making political and valuated choices: about what types of questions to ask, and what types of answers and experiments are considered valid and truthful.

“At the same time, I think the evidential approach of science has made me a better historian. Being steeped in the scientific approach drives me to search, as a historian, for the most concrete evidence I can find.”
 
A voice for undergraduates

During her four years at MIT, Minsky was active in a number of student organizations, including the Institute’s Undergraduate Association, for which she is currently vice president. The group advocates on behalf of students’ interests, sponsors events, and works with MIT’s administration to address concerns. The role is one Minsky takes to heart. One initiative she champions is greater support for MIT’s humanities school.

“If we’re going to talk about being ‘bilingual,’ the Institute needs to focus on more support for the School of Humanities, Arts, and Social Sciences,” she says.

Before moving from Cambridge, Massachusetts, to Cambridge, England, Minsky has one more history to explicate; she’s writing her thesis on the history of the Lick Observatory, located on Mount Hamilton, in Santa Clara County, California. The 19th-century telescope — the world’s first permanent mountaintop telescope — was enabled and shaped by settler colonialism, says Minsky.

“This writing is turning out to be a culmination of all the things I’ve been able to study at MIT. It’s a history of science, but it’s not just general science, technology, and engineering. It’s specifically a history in my scientific field with its entanglements, including a historical context in structures of oppression.”

Story prepared by MIT SHASS Communications
Editorial and design director: Emily Hiestand
Writer: Maria Iacobo

3 Questions: Tom Leighton on the major surge in internet traffic triggered by physical distancing

With various physical distancing guidelines in place throughout the world as a means to curb the spread of Covid-19, the internet has experienced a dramatic spike in overall traffic. MIT Professor Tom Leighton is chief executive officer and co-founder of Akamai Technologies, a global content delivery network, cybersecurity, and cloud service company that provides web and internet security services. At MIT he specializes in applied mathematics in the Department of Mathematics and is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The Department of Mathematics Communications spoke to Leighton about his company’s response to the world’s increased reliance on the internet during the Covid-19 pandemic.

Q: How is the pandemic changing the way people use the internet?

A: The internet has become our lifeline as we face the challenges of working remotely, distance learning, and sheltering in place. Everything has moved online: religious services, movie premieres, commerce of all kinds, and even gatherings of friends for a cup of coffee. We’ve already been doing many of these things online for years — the big difference now is that we are suddenly only doing them online.

When we’ve emerged from the pandemic, it seems quite possible that our usage of the internet for nearly every facet of our lives will have increased permanently. Many more people may be working remotely even when offices reopen; the shift to virtual meetings may become the norm even when we can travel again; a much greater share of commerce may be conducted online even when we can return to shopping malls; and our usage of social media and video streaming could well be greater than ever before, even when it’s OK to meet others in person.

Q: How much more use is the internet seeing as a result of the pandemic?

A: Akamai operates a globally distributed intelligent edge platform with more than 270,000 servers in 4,000 locations across 137 countries. From our vantage point, we can see that global internet traffic increased by about 30 percent during the past month. That’s about 10 times normal, and it means we’ve seen an entire year’s worth of growth in internet traffic in just the past few weeks. And that’s without any live sports streaming, like the usual March Madness college basketball tournament in the United States.

Just a few weeks ago, we set a new peak record of traffic on the Akamai edge platform of 167 terabytes per second. That’s more than double the peak we saw one year before. These are truly unprecedented times. The internet is being used at a scale that the world has never experienced.

Q: Can the internet keep up with the surge in traffic?

A: The answer is yes, but with many more caveats now.

Around the world, some regulators, major carriers, and content providers are taking steps to reduce load during peak traffic times in an effort to avert online gridlock. For example, European regulators have asked telecom providers and streaming platforms to switch to standard definition video during periods of peak demand. And Akamai is working with leading companies such as Microsoft and Sony to deliver software updates for e-gaming at off-peak traffic times. The typical software update uses as much traffic as about 30,000 web pages, so this makes a big difference when it comes to managing congestion.
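For a rough sense of scale, the short sketch below assumes an average web page of about 2 megabytes, an illustrative figure rather than one provided by Akamai:

```python
# Rough scale of the comparison quoted above: a software update equal in
# traffic to ~30,000 web pages.  The page size is an assumed illustrative
# value, not a figure from Akamai.

AVG_PAGE_MB = 2.0          # assumed average web-page weight
PAGES_PER_UPDATE = 30_000  # figure quoted in the interview

update_gb = AVG_PAGE_MB * PAGES_PER_UPDATE / 1_000
print(f"Implied update size: about {update_gb:.0f} GB")  # ~60 GB
```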

In addition, Akamai’s intelligent edge network architecture is designed to mitigate and minimize network congestion. Because we’ve deployed our infrastructure deep into carrier networks, we can help those networks avoid overload by diverting traffic away from areas experiencing high levels of congestion.

Overall, we fully expect to maintain the integrity and reliability of website and mobile application delivery, as well as security services, for all of our customers during this time. In particular, Akamai customers across sectors such as government, health care, financial services, commerce, manufacturing, and business services should not experience any change in the performance of their services. We will continue working with governments, network operators, and our customers to minimize stress on the system. At the same time, we’ll do our best to make sure that everyone who is relying on the internet for their work, studies, news, and entertainment continues to have a high-quality, positive experience.

With lidar and artificial intelligence, road status clears up after a disaster

Consider the days after a hurricane strikes. Trees and debris are blocking roads, bridges are destroyed, and sections of roadway are washed out. Emergency managers soon face a bevy of questions: How can supplies get delivered to certain areas? What’s the best route for evacuating survivors? Which roads are too damaged to remain open?

Without concrete data on the state of the road network, emergency managers often have to base their answers on incomplete information. The Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory hopes to use its airborne lidar platform, paired with artificial intelligence (AI) algorithms, to fill this information gap.  

“For a truly large-scale catastrophe, understanding the state of the transportation system as early as possible is critical,” says Chad Council, a researcher in the group. “With our particular approach, you can determine road viability, do optimal routing, and also get quantified road damage. You fly it, you run it, you’ve got everything.”

Since the 2017 hurricane season, the team has been flying its advanced lidar platform over stricken cities and towns. Lidar works by pulsing photons down over an area and measuring the time it takes for each photon to bounce back to the sensor. These time-of-arrival data points paint a 3D “point cloud” map of the landscape — every road, tree, and building — to within about a foot of accuracy.
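
The underlying time-of-flight calculation is straightforward; here is a minimal sketch (the timing value is made up for illustration):

```python
# Range from a single lidar return: distance = (speed of light * round-trip time) / 2
C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance from the sensor to the reflecting surface for one photon."""
    return C * round_trip_seconds / 2.0

# A photon returning after ~6.67 microseconds came from a surface about 1 km away.
print(range_from_time_of_flight(6.67e-6))   # ~1000 m
# A one-foot (~0.3 m) range resolution corresponds to timing each return to about 2 nanoseconds.
```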

To date, they’ve mapped huge swaths of the Carolinas, Florida, Texas, and all of Puerto Rico. In the immediate aftermath of hurricanes in those areas, the team manually sifted through the data to help the Federal Emergency Management Agency (FEMA) find and quantify damage to roads, among other tasks. The team’s focus now is on developing AI algorithms that can automate these processes and find ways to route around damage.

What’s the road status?

Information about the road network after a disaster comes to emergency managers in a “mosaic of different information streams,” Council says, namely satellite images, aerial photographs taken by the Civil Air Patrol, and crowdsourcing from vetted sources.

“These various efforts for acquiring data are important because every situation is different. There might be cases when crowdsourcing is fastest, and it’s good to have redundancy. But when you consider the scale of disasters like Hurricane Maria on Puerto Rico, these various streams can be overwhelming, incomplete, and difficult to coalesce,” he says.

During these times, lidar can act as an all-seeing eye, providing a big-picture map of an area and also granular details on road features. The laboratory’s platform is especially advanced because it uses Geiger-mode lidar, which is sensitive to a single photon. As such, its sensor can collect each of the millions of photons that trickle through openings in foliage as the system is flown overhead. This foliage can then be filtered out of the lidar map, revealing roads that would otherwise be hidden from aerial view.
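
One simple way to picture the foliage filtering: estimate the ground level locally, then keep only returns near that level. The toy sketch below illustrates the idea; it is not the laboratory's processing pipeline.

```python
import numpy as np

def filter_foliage(points: np.ndarray, cell: float = 1.0, max_height: float = 0.5) -> np.ndarray:
    """Keep only lidar returns within `max_height` meters of a crude local ground estimate.

    points: N x 3 array of (x, y, z) returns. The ground estimate is simply the
    lowest return in each `cell` x `cell` meter grid square -- a stand-in for a
    real ground-classification step.
    """
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]
    ground = {}
    for k, z in zip(keys, points[:, 2]):
        ground[k] = min(ground.get(k, np.inf), z)
    heights = points[:, 2] - np.array([ground[k] for k in keys])
    return points[heights <= max_height]
```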

To provide the status of the road network, the lidar map is first run through a neural network trained to find and extract the roads and to determine their widths. AI algorithms then search these roads and flag anomalies that indicate a road is impassable. For example, a cluster of lidar points extending up and across a road is likely a downed tree; a sudden drop in elevation is likely a hole or washed-out section of road.
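
A highly simplified sketch of those two heuristics, applied to points already attributed to one road segment (this is an illustration of the idea, not the laboratory's algorithm; the thresholds are arbitrary):

```python
import numpy as np

def flag_road_anomalies(road_points: np.ndarray,
                        rise_threshold: float = 0.5,
                        drop_threshold: float = 0.5):
    """Return indices of returns far above (possible debris) or below (possible washout) the road.

    road_points: N x 3 array of (x, y, z) lidar returns lying on one road segment.
    """
    surface = np.median(road_points[:, 2])         # crude estimate of the road's elevation
    dz = road_points[:, 2] - surface
    debris = np.where(dz > rise_threshold)[0]      # e.g., a downed tree across the road
    washouts = np.where(dz < -drop_threshold)[0]   # e.g., a hole or washed-out section
    return debris, washouts
```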

The extracted road network, with its flagged anomalies, is then merged with an OpenStreetMap of the area (an open-access map similar to Google Maps). Emergency managers can use this system to plan routes, or in other cases to identify isolated communities — those that are cut off from the road network. The system will show them the most efficient route between two specified locations, finding detours around impassable roads. Users can also specify how important it is to stay on the road; on the basis of that input, the system provides routes through parking lots or fields.  
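
A minimal sketch of that style of routing, using the networkx library; the graph, damage flags, and off-road penalty below are hypothetical stand-ins for the merged lidar and OpenStreetMap data.

```python
import networkx as nx

# Hypothetical road graph: each edge has a length in meters, a damage flag
# (from the lidar anomaly detection), and an off-road flag (parking lot, field).
G = nx.Graph()
G.add_edge("A", "B", length=500, damaged=False, off_road=False)
G.add_edge("B", "C", length=400, damaged=True,  off_road=False)   # blocked by debris
G.add_edge("B", "D", length=600, damaged=False, off_road=True)    # cut through a field
G.add_edge("D", "C", length=300, damaged=False, off_road=False)

def cost(u, v, data, off_road_penalty=3.0):
    """Edge weight: damaged edges are impassable; off-road links cost extra per meter."""
    if data["damaged"]:
        return None                       # returning None hides the edge from the search
    factor = off_road_penalty if data["off_road"] else 1.0
    return data["length"] * factor

print(nx.shortest_path(G, "A", "C", weight=cost))   # ['A', 'B', 'D', 'C'] -- detours around the damage
```

Raising the off-road penalty corresponds to telling the system that staying on the road matters more; lowering it makes cutting through parking lots or fields more acceptable.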

This process, from extracting roads to finding damage to planning routes, can be applied to the data at the scale of a single neighborhood or across an entire city.

How fast and how accurate?

To gain an idea of how fast this system works, consider that in a recent test, the team flew the lidar platform, processed the data, and produced AI-based analytics in 36 hours. That sortie covered 250 square miles, roughly the size of Chicago, Illinois.

But accuracy is just as important as speed. “As we incorporate AI techniques into decision support, we’re developing metrics to characterize an algorithm’s performance,” Council says.

For finding roads, the algorithm determines whether a point in the lidar point cloud is “road” or “not road.” The team ran a performance evaluation of the algorithm against 50,000 square meters of suburban data, and the resulting ROC curve indicated that the current algorithm has an 87 percent true positive rate (correctly labeling a road point as “road”) and a 20 percent false positive rate (labeling a point as “road” when it is not). The false positives are typically areas that geometrically look like a road but aren’t.
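
Those two rates are simply counts over the four outcomes of a binary "road / not road" decision; a small sketch of the bookkeeping (the labels below are toy data):

```python
def rates(predicted, actual):
    """True positive rate and false positive rate for binary road/not-road labels."""
    tp = sum(p and a for p, a in zip(predicted, actual))            # road correctly called road
    fp = sum(p and not a for p, a in zip(predicted, actual))        # non-road called road
    fn = sum(not p and a for p, a in zip(predicted, actual))        # road missed
    tn = sum(not p and not a for p, a in zip(predicted, actual))    # non-road correctly rejected
    return tp / (tp + fn), fp / (fp + tn)

# Toy example: 10 points, 5 of them truly road.
pred  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(rates(pred, truth))   # (0.8, 0.2) -- compare with the 87 percent / 20 percent reported above
```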

“Because we have another data source for identifying the general location of roads, OpenStreetMaps, these false positives can be excluded, resulting in a highly accurate 3D point cloud representation of the road network,” says Dieter Schuldt, who has been leading the algorithm-testing efforts.

For the algorithm that detects road damage, the team is in the process of aggregating more ground-truth data to evaluate its performance. In the meantime, preliminary results have been promising. Their damage-finding algorithm recently flagged a potentially blocked road in Bedford, Massachusetts, for review; the obstruction appeared to be a hole measuring 10 meters wide by 7 meters long by 1 meter deep. A check with the town’s public works department and a site visit confirmed that construction was blocking the road.

“We actually didn’t go in expecting that this particular sortie would capture examples of blocked roads, and it was an interesting find,” says Bhavani Ananthabhotla, a contributor to this work. “With additional ground truth annotations, we hope to not only evaluate and improve performance, but also to better tailor future models to regional emergency management needs, including informing route planning and repair cost estimation.”

The team is continuing to test, train, and tweak their algorithms to improve accuracy. Their hope is that these techniques may soon be deployed to help answer important questions during disaster recovery.

“We picture lidar as a 3D scaffold that other data can be draped over and that can be trusted,” Council says. “The more trust, the more likely an emergency manager, and a community in general, will use it to make the best decisions they can.”
