Solv[ED] inspires young people to become global problem-solvers

On May 3, during its annual flagship event Solve at MIT, MIT Solve launched a new program called Solv[ED], geared toward young innovators to help them become problem solvers and learn about social entrepreneurship. 

Starting in June, Solv[ED] will feature a variety of workshops and learning sessions and provide resources designed to equip young people aged 24 and under with the skills needed to make an impact on their communities and the world. Solv[ED] will host its first annual Youth Innovation Challenge this September, inviting young people to submit and pitch solutions to problems worldwide.

Via events throughout the year, young problem-solvers will also be able to network with one another, as well as the broader Solv[ED] community through its open innovation platform, to brainstorm ideas and advance their solutions and enterprises.

“There is no one path through Solv[ED]’s offerings. We’re creating a program for young people to design their own social impact journeys,” says Alex Amouyel, executive director of Solve. “We can’t do this alone. That is why we are inviting youth organizations, education providers, and other cross-sector leaders to join us and support young problem-solvers all over the world.”

Emma Yang, the youngest MIT Solver and founder of Timeless, a startup that empowers Alzheimer’s patients to stay engaged and connected to their loved ones, is excited about the launch of Solv[ED] and believes that it will generate a large community of youth looking to work together to make change.

“Solv[ED] will give young people the opportunity to learn about and practice skills for social entrepreneurship. I’m especially excited about the ways that it’ll do this while bringing young people from around the world together,” Yang says.

In addition to young innovators, Solv[ED]’s community gathers member organizations looking to support these youth, such as Anant National University, Antropia ESSEC, Firefly Innovations at City University of New York, Instituto Tecnológico de Monterrey, Learn with Leaders, T.A. Pai Management Institute, Universidad de los Andes, and Universidad Privada Peruano Alemana. 

Solv[ED] partners include the Morgridge Family Foundation, the Rieschel Foundation, and the Pozen Social Innovation Prize. 

MIT students can sign up for the Solv[ED] newsletter for more updates, and organizations that support youth innovation can become Solv[ED] Members.

Behind Covid-19 vaccine development

When starting a vaccine program, scientists generally have at least an anecdotal understanding of the disease they’re aiming to target. When Covid-19 surfaced over a year ago, there were so many unknowns about the fast-moving virus that scientists had to act quickly and rely on new methods and techniques just to begin understanding the basics of the disease.

Scientists at Janssen Research & Development, developers of the Johnson & Johnson-Janssen Covid-19 vaccine, leveraged real-world data and, working with MIT researchers, applied artificial intelligence and machine learning to help guide the company’s research efforts into a potential vaccine.

“Data science and machine learning can be used to augment scientific understanding of a disease,” says Najat Khan, chief data science officer and global head of strategy and operations for Janssen Research & Development. “For Covid-19, these tools became even more important because ­­­our knowledge was rather limited. There was no hypothesis at the time. We were developing an unbiased understanding of the disease based on real-world data using sophisticated AI/ML algorithms.”

In preparing for clinical studies of Janssen’s lead vaccine candidate, Khan put out a call for collaborators on predictive modeling to partner with her data science team in identifying key locations for trial sites. Through Regina Barzilay, the MIT School of Engineering Distinguished Professor for AI and Health, faculty lead of AI for MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health, and a member of Janssen’s scientific advisory board, Khan connected with Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management at MIT. Bertsimas had developed a leading machine learning model that tracks Covid-19 spread in communities and predicts patient outcomes, and he was brought on as the primary technical partner on the project.

DELPHI

When the World Health Organization declared Covid-19 a pandemic in March 2020 and forced much of the world into lockdown, Bertsimas, who is also the faculty lead of entrepreneurship for the Jameel Clinic, brought his group of 25-plus doctoral and master’s students together to discuss how they could use their collective skills in machine learning and optimization to create new tools to aid the world in combating the spread of the disease.

The group started tracking their efforts on the COVIDAnalytics platform, where their models generate accurate real-time insight into the pandemic. One of the group’s first projects was charting the progression of Covid-19 with an epidemiological model they developed named DELPHI, which predicts state-by-state infection and mortality rates based upon each state’s policy decisions.

DELPHI is based on the standard SEIR model, a compartmental model that simplifies the mathematical modeling of infectious diseases by dividing the population into four categories: susceptible, exposed, infectious, and recovered. The ordering of the labels is intentional, reflecting the flow between compartments. DELPHI expands on this model with a system of 11 possible states of being, to account for realistic effects of the pandemic, such as differences in the length of time spent in the hospital by those who recovered from Covid-19 and those who died.

“The model has some values that are hardwired, such as how long a person stays in the hospital, but we went deeper to account for the nonlinear change of infection rates, which we found were not constant and varied over different periods and locations,” says Bertsimas. “This gave us more modeling flexibility, which led the model to make more accurate predictions.”
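To make the compartmental idea concrete, the sketch below simulates a bare-bones SEIR model with an infection rate that declines over time. It is only an illustration of the approach described above, not DELPHI itself: DELPHI uses 11 compartments and fits its parameters to observed data, and the declining infection-rate curve here is a hypothetical stand-in for the policy and behavior effects the team modeled.

```python
import math

def seir_with_varying_beta(days=180, population=1_000_000, exposed0=100,
                           sigma=1 / 5.2, gamma=1 / 10):
    """Bare-bones SEIR simulation with a time-varying infection rate.

    sigma: rate of leaving the exposed compartment (1 / incubation period)
    gamma: recovery rate (1 / infectious period)
    The beta(t) curve is a hypothetical stand-in for lockdowns and behavior
    change: transmission starts high and declines after roughly day 60.
    """
    S, E, I, R = population - exposed0, exposed0, 0.0, 0.0
    history = []
    for t in range(days):
        beta = 0.5 - 0.35 / (1 + math.exp(-(t - 60) / 10))
        new_exposed = beta * S * I / population
        new_infectious = sigma * E
        new_recovered = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((t, S, E, I, R))
    return history

# Day on which infections peak under this hypothetical policy scenario.
peak = max(seir_with_varying_beta(), key=lambda row: row[3])
print("infections peak around day", peak[0])
```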

A key innovation of the model is capturing the behaviors of people related to measures put into place during the pandemic, such as lockdowns, mask-wearing, and social distancing, and the impact these had on infection rates.

“By June or July, we were able to augment the model with these data. The model then became even more accurate,” says Bertsimas. “We also considered different scenarios for how various governments might respond with policy decisions, from implementing serious restrictions to no restrictions at all, and compared them to what we were seeing happening in the world. This gave us the ability to make a spectrum of predictions. One of the advantages of the DELPHI model is that it makes predictions on 120 countries and all 50 U.S. states on a daily basis.”

A vaccine for today’s pandemic

Being able to determine where Covid-19 is likely to spike next proved to be critical to the success of Janssen’s clinical trials, which were “event-based” — meaning that “we figure out efficacy based on how many ‘events’ are in our study population, events such as becoming sick with Covid-19,” explains Khan.

“To run a trial like this, which is very, very large, it’s important to go to hot spots where we anticipate the disease transmission to be high so that you can accumulate those events quickly. If you can, then you can run the trial faster, bring the vaccine to market more quickly, and also, most importantly, have a very rich dataset where you can make statistically sound analysis.”
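To illustrate what an “event-based” trial means in practice, the sketch below computes a simple vaccine efficacy point estimate from event counts in each arm, using made-up numbers rather than Janssen’s data. Real trial analyses add confidence intervals and case adjudication; the point here is only that the faster events accrue at hot spots, the sooner such an estimate becomes statistically meaningful.

```python
def naive_vaccine_efficacy(events_vaccine, person_years_vaccine,
                           events_placebo, person_years_placebo):
    """Point estimate of efficacy: 1 minus the incidence rate ratio."""
    rate_vaccine = events_vaccine / person_years_vaccine
    rate_placebo = events_placebo / person_years_placebo
    return 1 - rate_vaccine / rate_placebo

# Hypothetical counts: 30 Covid-19 events among vaccinated participants
# versus 100 among placebo recipients, with equal follow-up time per arm.
print(f"efficacy ≈ {naive_vaccine_efficacy(30, 10_000, 100, 10_000):.0%}")
```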

Bertsimas assembled a core group of researchers to work with him on the project, including two doctoral students from MIT’s Operations Research Center, where he is a faculty member: Michael Li, who led implementation efforts, and Omar Skali Lami. Other members included Hamza Tazi MBAn ’20, a former master of business analytics student, and Ali Haddad, a data research scientist at Dynamic Ideas LLC.

The MIT team began collaborating with Khan and her team last May to forecast where the next surge in cases might happen. Their goal was to identify Covid-19 hot spots where Janssen could conduct clinical trials and recruit participants who were most likely to get exposed to the virus.

With clinical trials due to start last September, the teams had to immediately hit the ground running and make predictions four months in advance of when the trials would actually take place. “We started meeting daily with the Janssen team. I’m not exaggerating — we met on a daily basis … sometimes over the weekend, and sometimes more than once a day,” says Bertsimas.

To understand how the virus was moving around the world, data scientists at Janssen continuously monitored and scouted data sources worldwide. The team built a global surveillance dashboard that pulled in data on case numbers, hospitalizations, mortality, and testing rates at the country, state, and even county level, depending on data availability.

The DELPHI model integrated these data with additional information about local policies and behaviors, such as whether people were complying with mask-wearing, and was making daily predictions in the 300-400 range. “We were getting constant feedback from the Janssen team, which helped to improve the quality of the model. The model eventually became quite central to the clinical trial process,” says Bertsimas.

Remarkably, the vast majority of Janssen’s clinical trial sites that DELPHI predicted to be Covid-19 hot spots ultimately saw extremely high numbers of cases, including in South Africa and Brazil, where new variants of the virus had surfaced by the time the trials began. According to Khan, high incidence rates typically indicate variant involvement.

“All of the predictions the model made are publicly available, so one can go back and see how accurate the model really is. It held its own. To this day, DELPHI is one of the most accurate models the scientific community has produced,” says Bertsimas.

“As a result of this model, we were able to have a highly data-rich package at the time of submission of our vaccine candidate,” says Khan. “We are one of the few trials that had clinical data in South Africa and Brazil. That became critical because we were able to develop a vaccine that became relevant for today’s needs, today’s world, and today’s pandemic, which consists of so many variants, unfortunately.” 

Khan points out that the DELPHI model was further evolved with diversity in mind, taking into account biological risk factors, patient demographics, and other characteristics. “Covid-19 impacts people in different ways, so it was important to go to areas where we were able to recruit participants from different races, ethnic groups, and genders. Due to this effort, we had one of the most diverse Covid-19 trials that’s been run to date,” she says. “If you start with the right data, unbiased, and go to the right places, we can actually change a lot of the paradigms that are limiting us today.”

In April, the MIT and Janssen R&D data science teams were jointly recognized by the Institute for Operations Research and the Management Sciences (INFORMS) as the winner of the 2021 Innovative Applications in Analytics Award for their innovative and highly impactful work on Covid-19. Building on this success, the teams are continuing their collaboration to apply their data-driven approach and technical rigor to other infectious diseases. “This was not a partnership in name only. Our teams really came together in this and continue to work together on various data science efforts across the pipeline,” says Khan. The team also credits the investigators on the ground, who contributed to site selection in combination with the model.

“It was a very satisfying experience,” concurs Bertsimas. “I’m proud to have contributed to this effort and help the world in the fight against the pandemic.”

All About Customer Call Tracking Software

Are you a business owner? Do you wish to streamline your marketing efforts? Then this article is for you. With traditional call tracking methods now ineffective and obsolete, it is time for business owners to switch to advanced call tracking software packed with powerful features. Read on to find out all about customer call tracking software.

What is customer call tracking software?

Customer call tracking software is a modern, advanced application that tracks, monitors, and records incoming phone calls and conversations in a specific geographical area. Through it, marketers can attribute phone calls to the marketing channels that generated them. The software also helps you receive more phone calls from prospective customers.

What are the characteristics of customer call tracking software?

Dynamic number insertion

DNI allows you to assign a particular number to every online or ad source. The number is then shown to your customers and the visitors who arrive at your website. This helps you to track the source of every phone call.
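In practice, dynamic number insertion is typically a small script that swaps the displayed phone number based on how the visitor arrived at the site. The sketch below shows only the core lookup logic, in Python, with hypothetical tracking numbers and traffic sources; commercial call tracking products manage the number pool, the client-side swap, and the attribution reporting for you.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical pool of tracking numbers, one per traffic source.
TRACKING_NUMBERS = {
    "google_ads": "+1-555-0100",
    "facebook":   "+1-555-0101",
    "newsletter": "+1-555-0102",
}
DEFAULT_NUMBER = "+1-555-0199"  # shown to direct or unknown traffic

def pick_tracking_number(landing_url: str) -> str:
    """Choose the phone number to display based on the utm_source parameter."""
    query = parse_qs(urlparse(landing_url).query)
    source = query.get("utm_source", ["direct"])[0]
    return TRACKING_NUMBERS.get(source, DEFAULT_NUMBER)

print(pick_tracking_number("https://example.com/?utm_source=google_ads"))
# Calls to +1-555-0100 can then be attributed to the Google Ads campaign.
```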

Multi-channel attribution

This enables you to find out all the sources that led to the incoming calls. Through this, you can check the campaign that attracted the visitor to your website as well as the pages they went through before making the call.

Call recording

This feature lets you evaluate the performance of the agents handling your calls and check whether customer queries were resolved accurately. Call recording is especially valuable for generating higher-quality leads.

CRM integration

If you want to boost your sales, then make sure that your call tracking software is integrated with a CRM tool. This integration with the Customer Relationship Management tool will help you in viewing customers’ profiles and call history.

Call routing

With call routing, you can connect customers to the right agents, ensuring that their needs are met and enriching the overall customer experience. Call routing also helps boost agents’ productivity.

What are the benefits of customer call tracking software?

Provide high-quality leads

Call tracking services are increasingly used across industries such as e-commerce, healthcare, retail, and insurance, where call tracking software delivers high-quality leads to businesses and boosts demand for their products or services.

Evaluate marketing efforts

The software goes a long way in helping you understand the performance and results of your marketing efforts. With call tracking software, you can identify the source of each caller, whether an organic search or a PPC ad campaign.

Improve SEO activities

By using this software, you can also find out details about the specific landing pages of your website that are producing the maximum calls and traffic. Once you know this, you can work on refining your SEO-based activities.

Analyze call quality

Today’s call tracking software is packed with technologies such as artificial intelligence, machine learning, and voice recognition, which make automated call scoring and call quality analysis possible with far less manual effort.

MIT unveils a new action plan to tackle the climate crisis

MIT has released an ambitious new plan for action to address the world’s accelerating climate crisis. The plan, titled “Fast Forward: MIT’s Climate Action Plan for the Decade,” includes a broad array of new initiatives and significant expansions of existing programs, to address the needs for new technologies, new policies, and new kinds of outreach to bring the Institute’s expertise to bear on this critical global issue.

As MIT President L. Rafael Reif and other senior leaders have written in a letter to the MIT community announcing the new plan, “Humanity must find affordable, equitable ways to bring every sector of the global economy to net-zero carbon emissions no later than 2050.” And in order to do that, “we must go as far as we can, as fast as we can, with the tools and methods we have now.” But that alone, they stress, will not be enough to meet that essential goal. Significant investments will also be needed to invent and deploy new tools, including technological breakthroughs, policy initiatives, and effective strategies for education and communication about this epochal challenge.

“Our approach is to build on what the MIT community does best — and then aspire for still more. Harnessing MIT’s long record as a leader in innovation, the plan’s driving force is a series of initiatives to ignite research on, and accelerate the deployment of, the technologies and policies that will produce the greatest impact on limiting global climate change,” says Vice President for Research Maria Zuber, who led the creation and implementation of MIT’s first climate action plan and oversaw the development of the new plan alongside Associate Provost Richard Lester and School of Engineering Dean Anantha Chandrakasan.

The new plan includes a commitment to investigate the essential dynamics of global warming and its impacts, increasing efforts toward more precise predictions, and advocating for science-based climate policies and increased funding for climate research. It also aims to foster innovation through new research grants, faculty hiring policies, and student fellowship opportunities.

Decarbonizing the world’s economy in time will require “new ideas, transformed into practical solutions, in record time,” the plan states, and so it includes a push for research focused on key areas such as cement and steel production, heavy transportation, and ways to remove carbon from the air. The plan affirms the imperative for decarbonization efforts to emphasize the need for equity and fairness, and for broad outreach to all segments of society.

Charting a shared course for the future

Having made substantial progress in implementing the Institute’s original five-year Plan for Action on Climate Change, MIT’s new plan outlines measures to build upon and expand that progress over the next decade. The plan consists of five broad areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

MIT is already well on its way to reaching the initial target, set in 2015, to reduce the Institute’s net carbon emissions by at least 32 percent from 2005 levels by the year 2030. That goal is being met through a combination of innovative off-campus power purchase agreements that enable the construction of large-scale solar and wind farms, and an array of renewable energy and building efficiency measures on campus. In the new plan, MIT commits to net-zero direct carbon emissions by 2026.

The initial plan focused largely on intensifying efforts to find breakthrough solutions for addressing climate change, through a series of actions including the creation of new low-carbon energy centers for research, and the convening of researchers, industry leaders, and policymakers to facilitate the sharing of best practices and successful measures. The new plan expands upon these actions and incorporates new measures, such as climate-focused faculty positions and student work opportunities to help tackle climate issues from a variety of disciplines and perspectives.

A long-running series of symposia, community forums, and other events and discussions helped shape a set of underlying principles that apply to all of the plan’s many component parts. These themes are:

  • The centrality of science, to build on MIT’s pioneering work in understanding the dynamics of global warming and its effects;
  • The need to innovate and scale, requiring new ideas to be made into practical solutions quickly;
  • The imperative of justice, since many of those who will be most affected by climate change are among those with the least resources to adapt;
  • The need for engagement, dealing with government, industry, and society as a whole, reflecting the fact that decarbonizing the world’s economy will require working with leaders in all sectors; and
  • The power of coordination, emphasizing the need for the many different parts of the Institute’s climate research, education, and outreach to have clear structures for decision making, action, and accountability.

Bolstering research and innovation

The new plan features a wide array of action items to encourage innovation in critical areas, including new programs as well as the expansions of existing programs. This includes the Climate Grand Challenges, announced last year, which focus on game-changing research advances across disciplines spanning MIT.

“We must, and we do, call for critical self-examination of our own footprint, and aspire for substantial reductions. We also must, and we do, renew and bolster our commitment to the kind of paradigm-shifting research and innovation, across every sector imaginable (and some perhaps still waiting to be discovered), that the world expects from MIT,” Lester says. “An immediate and existential crisis like climate change calls for both near-term and extraordinary long-term strokes. I believe the people of MIT are capable of both.” 

The plan also calls for expanding the MIT Climate and Sustainability Consortium, created earlier this year, to foster collaborations among companies and researchers on solutions to climate problems. The aim is to greatly accelerate the adoption of large-scale, real-world climate solutions across different industries around the world, by working with large companies as they seek ways to meet new net-zero climate targets, in areas ranging from aerospace to packaged food.

Another planned action is to establish a Future Energy Systems Center, which will coalesce the work that has been fostered through MIT’s Low-Carbon Energy Centers, created under the previous climate action plan. The Institute is also committing to devoting at least 20 upcoming faculty positions to climate-focused talent. And, there will be new midcareer ignition grants for faculty to spur work related to climate change and clean energy.

For students, the plan will provide up to 100 new Climate and Sustainability Energy Fellowships, spanning the Institute’s five schools and one college. These will enable work on current or new projects related to climate change. A new Climate Education Task Force will evaluate current offerings and make recommendations for strengthening education on climate-related topics. And in-depth climate or clean-energy-related research opportunities will be offered to every undergraduate who wants one. Climate and sustainability topics and examples will be introduced into courses throughout the Institute, especially in the General Institute Requirements that all undergraduates must take.

This emphasis on MIT’s students is reflected in the plan’s introductory cover letter from Reif, Zuber, Lester, Chandrakasan, and Executive Vice President and Treasurer Glen Shor. They write: “In facing this challenge, we have very high expectations for our students; we expect them to help make the impossible possible. And we owe it to them to face this crisis by coming together in a whole-of-MIT effort — deliberately, wholeheartedly, and as fast as we can.”

The plan’s educational components provide “the opportunity to fundamentally change how we have our graduates think in terms of a sustainable future,” Chandrakasan says. “I think the opportunity to embed this notion of sustainability into every class, to think about design for sustainability, is a very important aspect of what we’re doing. And, this plan could significantly increase the faculty focused on this critical area in the next several years. The potential impact of that is tremendous.”

Reaching outward

The plan calls for creating a new Sustainability Policy Hub for undergraduates and graduate students to foster interactions with sustainability policymakers and faculty, including facilitating climate policy internships in Washington. There will be an expansion of the Council on the Uncertain Human Future, which started last year to bring together various groups to consider the climate crisis and its impacts on how people might live now and in the future.

“The proposed new Sustainability Policy Hub, coordinated by the Technology and Policy Program, will help MIT students and researchers engage with decision makers on topics that directly affect people and their well-being today and in the future,” says Noelle Selin, an associate professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric, and Planetary Sciences. “Ensuring sustainability in a changed climate is a collaborative effort, and working with policymakers and communities will be critical to ensure our research leads to action.”

A new series of Climate Action Symposia, similar to a successful series held in 2019-2020, will be convened. These events may include a focus on climate challenges for the developing world. In addition, MIT will develop a science- and fact-based curriculum on climate issues for high school students, aimed at underserved populations and at countering sources of misinformation.

Building on its ongoing efforts to provide reliable, evidence-based information on climate science, technology, and policy solutions to policymakers at all levels of government, MIT is establishing a faculty-led Climate Policy Working Group, which will work with the Institute’s Washington office to help connect faculty members doing relevant research with officials working in those areas.

In the financial arena, MIT will lead more research and discussions aimed at strengthening the financial disclosures relating to climate that corporations need to make, thus making the markets more sensitive to the true risks to investors posed by climate change. In addition, MIT will develop a series of case studies of companies that have made a conversion to decarbonized energy and to sustainable practices, in order to provide useful models for others.

MIT will also expand the reach of its tools for modeling the impacts of various policy decisions on climate outcomes, economics, and energy systems. And, it will continue to send delegations to the major climate policy forums such as the UN’s Conference of the Parties, and to find new audiences for its Climate Portal, web-based Climate Primer, and TILclimate podcast.

“This plan reaffirms MIT’s commitment to developing climate change solutions,” says Christopher Knittel, the George P. Shultz Professor of Applied Economics. “It understands that solving climate change will require not only new technologies but also new climate leaders and new policy. The plan leverages MIT’s strength across all three of these, as well as its most prized resources: its students. I look forward to working with our students and policymakers in using the tools of economics to provide the research needed for evidence-based policymaking.”

Recognizing that the impacts of climate change fall most heavily on some populations that have contributed little to the problem but have limited means to make the needed changes, the plan emphasizes the importance of addressing the socioeconomic challenges posed by major transitions in energy systems, and will focus on job creation and community support in affected regions, both domestically and in the developing world. These programs include the Environmental Solutions Initiative’s Natural Climate Solutions Program and the Climate Resilience Early Warning System Network, which aims to provide fine-grained climate predictions.

“I’m extraordinarily excited about the plan,” says Professor John Fernández, director of the Environmental Solutions Initiative and a professor of building technology. “These are exactly the right things for MIT to be doing, and they align well with an increasing appetite across our community. We have extensive expertise at MIT to contribute to diverse solutions, but our reach should be expanded and I think this plan will help us do that.”

“It’s so encouraging to see environmental justice issues and community collaborations centered in the new climate action plan,” says Amy Moran-Thomas, the Alfred Henry and Jean Morrison Hayes Career Development Associate Professor of Anthropology. “This is a vital step forward. MIT’s policy responses and climate technology design can be so much more significant in their reach with these engagements done in a meaningful way.”

Decarbonizing campus

MIT’s first climate action plan produced mechanisms and actions that have led to significant reductions in net emissions. For example, through an innovative collaborative power purchase agreement, MIT enabled the construction of a large solar farm and the early retirement of a coal plant, and also provided a model that others have since adopted. Because of the existing agreement, MIT has already reduced its net emissions by 24 percent despite a boom in construction of new buildings on campus. This model will be extended moving forward, as MIT explores a variety of possible large-scale collaborative agreements to enable solar energy, wind energy, energy storage, and other emissions-curbing facilities.

Using the campus as a living testbed, the Institute has studied every aspect of its operations to assess their climate impacts, including heating and cooling, electricity, lighting, materials, and transportation. The studies confirm the difficulties inherent in transforming large existing infrastructure, but all feasible reductions in emissions are being pursued. Among them: All new purchases of light vehicles will be zero-emissions if available. The amount of solar generation on campus will increase fivefold, from 100 to 500 kilowatts. Shuttle buses will begin converting to electric power no later than 2026, and the number of car-charging stations will triple, to 360.

Meanwhile, a new working group will study possibilities for further reductions of on-campus emissions, including the indirect emissions known as Scope 3, such as embedded energy in construction materials, as well as possible measures to offset off-campus Institute-sponsored travel. The group will also study goals relating to food, water, and waste systems; develop a campus climate resilience plan; and expand the accounting of greenhouse gas emissions to include MIT’s facilities outside the campus. It will encourage all labs, departments, and centers to develop plans for sustainability and reductions in emissions.

“This is a broad and appropriately ambitious plan that reflects the headway we’ve made building up capacity over the last five years,” says Robert Armstrong, director of the MIT Energy Initiative. “To succeed we’ll need to continually integrate new understanding of climate science, science and technology innovations, and societal engagement from the many elements of this plan, and to be agile in adapting ongoing work accordingly.”

Examining investments

To help bring MIT’s investments in line with these climate goals, MIT has already begun the process of decarbonizing its portfolio, but aims to go further.

Beyond merely declaring an aspirational goal for such reductions, the Institute will take this on as a serious research question, by undertaking an intensive analysis of what it would mean to achieve net-zero carbon by 2050 in a broad investment portfolio.

“I am grateful to MITIMCO for their seriousness in affirming this step,” Zuber says. “We hope the outcome of this analysis will help not just our institution but possibly other institutional managers with a broad portfolio who aspire to a net-zero carbon goal.”

MIT’s investment management company will also review its environmental, social, and governance investment framework and post it online. And, as a member of Climate Action 100+, MIT will be actively engaging with major companies about their climate-change planning. For the planned development of the Volpe site in Kendall Square, MIT will offset the entire carbon footprint and raise the site above the projected 2070 100-year flood level.

Institute-wide participation

A centerpiece of the new plan is the creation of two high-level committees representing all parts of the MIT community. The MIT Climate Steering Committee, a council of faculty and administrative leaders, will oversee and coordinate MIT’s strategies on climate change, from technology to policy. The steering committee will serve as an “orchestra conductor,” coordinating with the heads of the various climate-related departments, labs, and centers, as well as issue-focused working groups, seeking input from across the Institute, setting priorities, committing resources, and communicating regularly on the progress of the climate plan’s implementation.

The second committee, called the Climate Nucleus, will include representatives of climate- and energy-focused departments, labs, and centers that have significant responsibilities under the climate plan, as well as the MIT Washington Office. It will have broad responsibility for overseeing the management and implementation of all elements of the plan, including program planning, budgeting and staffing, fundraising, external and internal engagement, and program-level accountability. The Nucleus will make recommendations to the Climate Steering Committee on a regular basis and report annually to the steering committee on progress under the plan.

“We heard loud and clear that MIT needed both a representative voice for all those pursuing research, education, and innovation to achieve our climate and sustainability goals, but also a body that’s nimble enough to move quickly and imbued with enough budgetary oversight and leadership authority to act decisively. With the steering committee and Climate Nucleus together, we hope to do both,” Lester says.

The new plan also calls for the creation of three working groups to address specific aspects of climate action. The working groups will include faculty, staff, students, and alumni and give these groups direct input into the ongoing implementation of MIT’s plans. The three groups will focus on climate education, climate policy, and MIT’s own carbon footprint. They will track progress under the plan and make recommendations to the Nucleus on ways of increasing MIT’s effectiveness and impact.

“MIT is in an extraordinary position to make a difference and to set a standard of climate leadership,” the plan’s cover letter says. “With this plan, we commit to a coordinated set of leadership actions to spur innovation, accelerate action, and deliver practical impact.”

“Successfully addressing the challenges posed by climate change will require breakthrough science, daring innovation, and practical solutions, the very trifecta that defines MIT research,” says Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography. “The MIT climate action plan lays out a comprehensive vision to bring the whole Institute together and address these challenges head on. Last century, MIT helped put humans on the moon. This century, it is committing to help save humanity and the environment from climate change here on Earth.”

New system cleans messy data tables automatically

MIT researchers have created a new system that automatically cleans “dirty data” —  the typos, duplicates, missing values, misspellings, and inconsistencies dreaded by data analysts, data engineers, and data scientists. The system, called PClean, is the latest in a series of domain-specific probabilistic programming languages written by researchers at the Probabilistic Computing Project that aim to simplify and automate the development of AI applications (others include one for 3D perception via inverse graphics and another for modeling time series and databases).

According to surveys conducted by Anaconda and Figure Eight, data cleaning can take a quarter of a data scientist’s time. Automating the task is challenging because different datasets require different types of cleaning, and common-sense judgment calls about objects in the world are often needed (e.g., which of several cities called “Beverly Hills” someone lives in). PClean provides generic common-sense models for these kinds of judgment calls that can be customized to specific databases and types of errors.

PClean uses a knowledge-based approach to automate the data cleaning process: Users encode background knowledge about the database and what sorts of issues might appear. Take, for instance, the problem of cleaning state names in a database of apartment listings. What if someone said they lived in Beverly Hills but left the state column empty? Though there is a well-known Beverly Hills in California, there’s also one in Florida, Missouri, and Texas … and there’s a neighborhood of Baltimore known as Beverly Hills. How can you know in which the person lives? This is where PClean’s expressive scripting language comes in. Users can give PClean background knowledge about the domain and about how data might be corrupted. PClean combines this knowledge via common-sense probabilistic reasoning to come up with the answer. For example, given additional knowledge about typical rents, PClean infers the correct Beverly Hills is in California because of the high cost of rent where the respondent lives. 
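PClean itself is written in its own probabilistic scripting language, so the snippet below is not PClean code. It is a plain-Python sketch of the Bayesian reasoning just described, with made-up priors and rent figures, showing how knowledge about typical rents can disambiguate which “Beverly Hills” a listing refers to.

```python
import math

# Hypothetical prior over candidate locations before seeing the rent,
# e.g. roughly proportional to population.
prior = {"CA": 0.70, "FL": 0.10, "MO": 0.05, "TX": 0.05, "MD (Baltimore)": 0.10}

# Hypothetical typical monthly rents (mean, standard deviation) per candidate.
rent_model = {"CA": (4500, 1200), "FL": (1400, 400), "MO": (900, 300),
              "TX": (1300, 400), "MD (Baltimore)": (1200, 400)}

def normal_pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def posterior_over_state(observed_rent):
    """Bayes' rule: P(state | rent) is proportional to P(rent | state) * P(state)."""
    unnormalized = {s: prior[s] * normal_pdf(observed_rent, *rent_model[s])
                    for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# A listing with no state but a $4,200 monthly rent strongly suggests California.
for state, p in sorted(posterior_over_state(4200).items(), key=lambda kv: -kv[1]):
    print(f"{state}: {p:.3f}")
```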

Alex Lew, the lead author of the paper and a PhD student in the Department of Electrical Engineering and Computer Science (EECS), says he’s most excited that PClean gives a way to enlist help from computers in the same way that people seek help from one another. “When I ask a friend for help with something, it’s often easier than asking a computer. That’s because in today’s dominant programming languages, I have to give step-by-step instructions, which can’t assume that the computer has any context about the world or task — or even just common-sense reasoning abilities. With a human, I get to assume all those things,” he says. “PClean is a step toward closing that gap. It lets me tell the computer what I know about a problem, encoding the same kind of background knowledge I’d explain to a person helping me clean my data. I can also give PClean hints, tips, and tricks I’ve already discovered for solving the task faster.”

Co-authors are Monica Agrawal, a PhD student in EECS; David Sontag, an associate professor in EECS; and Vikash K. Mansinghka, a principal research scientist in the Department of Brain and Cognitive Sciences.

What innovations allow this to work? 

The idea that probabilistic cleaning based on declarative, generative knowledge could potentially deliver much greater accuracy than machine learning was previously suggested in a 2003 paper by Hanna Pasula and others from Stuart Russell’s lab at the University of California at Berkeley. “Ensuring data quality is a huge problem in the real world, and almost all existing solutions are ad-hoc, expensive, and error-prone,” says Russell, professor of computer science at UC Berkeley. “PClean is the first scalable, well-engineered, general-purpose solution based on generative data modeling, which has to be the right way to go. The results speak for themselves.” Co-author Agrawal adds that “existing data cleaning methods are more constrained in their expressiveness, which can be more user-friendly, but at the expense of being quite limiting. Further, we found that PClean can scale to very large datasets that have unrealistic runtimes under existing systems.”

PClean builds on recent progress in probabilistic programming, including a new AI programming model built at MIT’s Probabilistic Computing Project that makes it much easier to apply realistic models of human knowledge to interpret data. PClean’s repairs are based on Bayesian reasoning, an approach that weighs alternative explanations of ambiguous data by applying probabilities based on prior knowledge to the data at hand. “The ability to make these kinds of uncertain decisions, where we want to tell the computer what kind of things it is likely to see, and have the computer automatically use that in order to figure out what is probably the right answer, is central to probabilistic programming,” says Lew.

PClean is the first Bayesian data-cleaning system that can combine domain expertise with common-sense reasoning to automatically clean databases of millions of records. PClean achieves this scale via three innovations. First, PClean’s scripting language lets users encode what they know. This yields accurate models, even for complex databases. Second, PClean’s inference algorithm uses a two-phase approach, based on processing records one-at-a-time to make informed guesses about how to clean them, then revisiting its judgment calls to fix mistakes. This yields robust, accurate inference results. Third, PClean provides a custom compiler that generates fast inference code. This allows PClean to run on million-record databases with greater speed than multiple competing approaches. “PClean users can give PClean hints about how to reason more effectively about their database, and tune its performance — unlike previous probabilistic programming approaches to data cleaning, which relied primarily on generic inference algorithms that were often too slow or inaccurate,” says Mansinghka. 

As with all probabilistic programs, the lines of code needed for the tool to work are far fewer than for alternative state-of-the-art options: PClean programs need only about 50 lines of code to outperform benchmarks in terms of accuracy and runtime. For comparison, a simple snake cellphone game takes twice as many lines of code to run, and Minecraft comes in at well over 1 million lines of code.

In their paper, just presented at the 2021 Society for Artificial Intelligence and Statistics conference, the authors show PClean’s ability to scale to datasets containing millions of records by using PClean to detect errors and impute missing values in the 2.2 million-row Medicare Physician Compare National dataset. Running for just seven-and-a-half hours, PClean found more than 8,000 errors. The authors then verified by hand (via searches on hospital websites and doctor LinkedIn pages) that for more than 96 percent of them, PClean’s proposed fix was correct. 

Since PClean is based on Bayesian probability, it can also give calibrated estimates of its uncertainty. “It can maintain multiple hypotheses — give you graded judgments, not just yes/no answers. This builds trust and helps users override PClean when necessary. For example, you can look at a judgment where PClean was uncertain, and tell it the right answer. It can then update the rest of its judgments in light of your feedback,” says Mansinghka. “We think there’s a lot of potential value in that kind of interactive process that interleaves human judgment with machine judgment. We see PClean as an early example of a new kind of AI system that can be told more of what people know, report when it is uncertain, and reason and interact with people in more useful, human-like ways.”

David Pfau, a senior research scientist at DeepMind, noted in a tweet that PClean meets a business need: “When you consider that the vast majority of business data out there is not images of dogs, but entries in relational databases and spreadsheets, it’s a wonder that things like this don’t yet have the success that deep learning has.”

Benefits, risks, and regulation

PClean makes it cheaper and easier to join messy, inconsistent databases into clean records, without the massive investments in human and software systems that data-centric companies currently rely on. This has potential social benefits — but also risks, among them that PClean may make it cheaper and easier to invade peoples’ privacy, and potentially even to de-anonymize them, by joining incomplete information from multiple public sources.

“We ultimately need much stronger data, AI, and privacy regulation, to mitigate these kinds of harms,” says Mansinghka. Lew adds, “As compared to machine-learning approaches to data cleaning, PClean might allow for finer-grained regulatory control. For example, PClean can tell us not only that it merged two records as referring to the same person, but also why it did so — and I can come to my own judgment about whether I agree. I can even tell PClean only to consider certain reasons for merging two entries.” Unfortunately, the researchers say, privacy concerns persist no matter how fairly a dataset is cleaned.

Mansinghka and Lew are excited to help people pursue socially beneficial applications. They have been approached by people who want to use PClean to improve the quality of data for journalism and humanitarian applications, such as anticorruption monitoring and consolidating donor records submitted to state boards of elections. Agrawal says she hopes PClean will free up data scientists’ time, “to focus on the problems they care about instead of data cleaning. Early feedback and enthusiasm around PClean suggest that this might be the case, which we’re excited to hear.”

Saving the radome

Perched atop the MIT Cecil and Ida Green Building (Building 54), MIT’s tallest academic building, a large, golf ball-like structure protrudes from the roof, holding its own in the iconic MIT campus skyline. This radar dome — or “radome” for short — is a fiberglass shell that encases a large parabolic dish, shielding it from the elements while allowing radio waves to penetrate. First installed in 1966, it was used initially to pioneer weather radar research. As the years passed and technology evolved, the radome eventually fell out of use for this purpose and was subsequently slated for removal as MIT began a major renovation and capital improvement project for the building. That’s when the student-led MIT Radio Society, who had found creative new uses for the radome, sprang into action to save it — and succeeded.

“When we say ‘save the radome,’ what we set out to accomplish was to preserve a scientific instrument with great potential from demolition and incorporate an in-place renovation of the dish into the overall building renewal project,” says Kerri Cahoy, associate professor in MIT’s Department of Aeronautics and Astronautics and the Department of Earth, Atmospheric and Planetary Sciences, who serves as the faculty advisor for the MIT Radio Society.

The call to action

Starting in the early 1980s, the MIT Radio Society took up residence alongside the radome on the roof of the Green Building, using the highest point on campus accessible to students as a manageable, unobstructed laboratory to house equipment like antenna arrays and an FM repeater. In recent years, the Radio Society adapted and upgraded the radome for their microwave experiments, most notably enabling its use for Earth-moon-Earth or “moonbounce” communication, in which signals are bounced off the moon to reach Earth-bound receivers at greater distances than radio communications sent on the ground.

“Before the pandemic, we participated in a contest where we used moonbounce to make contact with as many people in as many places as possible to earn points,” says Milo Hooper, a senior in mechanical engineering and president of the MIT Radio Society. “We had to get up at 2 a.m. to make sure the moon was in the right position at the right time, and we were able to talk to people in Europe and on the West Coast. As a student, it’s amazing to have the opportunity to use a world-class instrument on a college campus. It’s unrivaled.”

To secure the large dish’s future and replace the deteriorating radome, the MIT Radio Society spearheaded a fundraising effort and immediately got to work. Building on the momentum of a previous successful fundraising campaign among Radio Society alumni that helped refurbish their equipment on the roof, they further mobilized the MIT community of alumni and friends by organizing a second campaign. The students also pulled together a successful grant application in record time to Amateur Radio Digital Communications (ARDC), a non-profit private foundation supporting amateur radio and digital communications science, resulting in ARDC’s largest-ever philanthropic contribution, made in memory of the organization’s founder Brian Kantor. This lead gift brought the MIT Radio Society across the finish line to successfully meet their fundraising goal.

“We were overwhelmed at first by the amount we needed to raise, and the short time we had before the renovation project needed to begin. We just had to hope that someone would see the same promise and potential in the dish that we did,” says Gregory Allan, a PhD student in the MIT Department of Aeronautics and Astronautics who led ARDC grant submission efforts. “When we contacted ARDC, they were so supportive and willing to do whatever it took to make this happen. We’re really grateful to them for this incredible gift.”

Finding a new purpose

When it comes to satellite communication, the bigger the dish, the further you can send communication signals. The large dish atop the Green Building is 18 feet wide, which is unique because academic institutions don’t typically have access to a dish that size without partnering with a commercial provider. The dish rests upon a mount that also boasts a unique feature: Built and used initially for an earlier project to track aircraft movement during World War II, the mount can reposition the dish quickly. This will be particularly useful for tracking satellites in low-Earth orbit that streak across the night sky in less than 10 minutes.

“The dish is really perfect for both low-Earth orbit satellite communications because of the fast-tracking and also for deep space lunar CubeSats because of its large size. Additionally, the surface of the dish is in good shape, so we can use it for communications at relatively high frequencies which allow us to transfer data at higher rates,” says Mary Knapp ’11, PhD ’18, who is now a research scientist at MIT Haystack Observatory. “Basically, it’s ready to be put to use by many of the CubeSat projects in the process of being developed at MIT, and potentially for outside parties as well.”

The large radome has also proven to be a valuable asset in the classroom, particularly supporting remote-learning efforts during the Covid-19 pandemic. With the help of the MIT Radio Society, the dish enabled remote radio astronomy experiments for the Physics Junior Laboratory (J-Lab), a foundational course in the physics curriculum. The astronomy experiment typically involves using a small radio telescope to measure how the galaxy rotates from our perspective here on Earth. Instead, students collected “exquisite” high-quality data using the large dish, allowing the class to maintain operations close to normal even while working remotely during the pandemic.

“From my perspective, there are three big shining stars that helped make this happen: the initiative and energy of the students, the support of alumni and the MIT community, and ARDC who saw the potential and the exciting future of this facility and how we can use it to educate future generations and support forward-thinking research on campus,” says Cahoy. “We feel grateful that MIT gave us the opportunity to see this through, and appreciate the support, partnership, and guidance we received from the Department of Facilities Campus Construction team who helped us navigate this complex project.”

Turning technology against human traffickers

Last October, the White House released the National Action Plan to Combat Human Trafficking. The plan was motivated, in part, by a greater understanding of the pervasiveness of the crime. In 2019, 11,500 situations of human trafficking in the United States were identified through the National Human Trafficking Hotline, and the federal government estimates there are nearly 25 million victims globally.

This increasing awareness has also motivated MIT Lincoln Laboratory, a federally funded research and development center, to harness its technological expertise toward combating human trafficking.

In recent years, researchers in the Humanitarian Assistance and Disaster Relief Systems Group have met with federal, state, and local agencies, nongovernmental organizations (NGOs), and technology companies to understand the challenges in identifying, investigating, and prosecuting trafficking cases. In 2019, the team compiled their findings and 29 targeted technology recommendations into a roadmap for the federal government. This roadmap informed the U.S. Department of Homeland Security’s recent counter-trafficking strategy released in 2020.

“Traffickers are using technology to gain efficiencies of scale, from online commercial sex marketplaces to complex internet-driven money laundering, and we must also leverage technology to counter them,” says Matthew Daggett, who is leading this research at the laboratory.

In July, Daggett testified at a congressional hearing about many of the current technology gaps and made several policy recommendations on the role of technology in countering trafficking. “Taking advantage of digital evidence can be overwhelming for investigators. There’s not a lot of technology out there to pull it all together, and while there are pockets of tech activity, we see a lot of duplication of effort because this work is siloed across the community,” he adds.

Breaking down these silos has been part of Daggett’s goal. Most recently, he brought together almost 200 practitioners from 85 federal and state agencies, NGOs, universities, and companies for the Counter–Human Trafficking Technology Workshop at Lincoln Laboratory. This first-of-its-kind virtual event prompted discussions of how technology is used today, where gaps exist, and what opportunities there are for new partnerships.

The workshop was also an opportunity for the laboratory’s researchers to present several advanced tools in development. “The goal is to come up with sustainable ways to partner on transitioning these prototypes out into the field,” Daggett adds.

Uncovering networks

One of the most mature capabilities at the laboratory in countering human trafficking addresses the challenge of discovering large-scale, organized trafficking networks.

“We cannot just disrupt pieces of an organized network, because many networks recover easily. We need to uncover the entirety of the network and disrupt it as a whole,” says Lin Li, a researcher in the Artificial Intelligence Technology Group.

To help investigators do that, Li has been developing machine learning algorithms that automatically analyze online commercial sex ads to reveal whether they are likely associated with human trafficking activity and whether they belong to the same organization.

This task may have been easier only a few years ago, when a large percentage of trafficking-linked activities were advertised, and reported, from listings on Backpage.com. Backpage was the second-largest classified ad listing service in the United States after Craigslist, and was seized in 2018 by a multi-agency federal investigation. A slew of new advertising sites has since appeared in its wake. “Now we have a very decentralized distributed information source, where people are cross-posting on many web pages,” Li says. Traffickers are also becoming more security-aware, Li says, often using burner cellular or internet phones that make it difficult to use “hard” links such as phone numbers to uncover organized crime.

So, the researchers have instead been leveraging “soft” indicators of organized activity, such as semantic similarities in the ad descriptions. They use natural language processing to extract unique phrases in content to create ad templates, and then find matches for those templates across hundreds of thousands of ads from multiple websites.

“We’ve learned that each organization can have multiple templates that they use when they post their ads, and each template is more or less unique to the organization. By template matching, we essentially have an organization-discovery algorithm,” Li says.
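The laboratory’s algorithms are not published in detail, but the core idea of template matching can be sketched simply: extract distinctive word n-grams from each ad and connect ads that share enough of them. The toy version below uses hypothetical, innocuous ad texts and a fixed overlap threshold in place of a trained model.

```python
from itertools import combinations

def ngrams(text, n=4):
    """Return the set of word n-grams in a piece of ad text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def cluster_ads(ads, min_shared=3, n=4):
    """Group ads whose texts share at least `min_shared` word n-grams (union-find)."""
    parent = list(range(len(ads)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    grams = [ngrams(a, n) for a in ads]
    for i, j in combinations(range(len(ads)), 2):
        if len(grams[i] & grams[j]) >= min_shared:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(ads)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Hypothetical ad texts: ads 0 and 1 reuse the same phrasing template.
ads = [
    "new in town available now call anytime best rates in the city",
    "new in town available now call anytime best rates downtown",
    "completely different wording with nothing shared at all here",
]
print(cluster_ads(ads))  # [[0, 1], [2]]
```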

In this analysis process, the system also ranks the likelihood of an ad being associated with human trafficking. By definition, human trafficking involves compelling individuals to provide service or labor through the use of force, fraud, or coercion — and does not apply to all commercial sex work. The team trained a language model to learn terms related to race, age, and other marketplace vernacular in the context of the ad that may be indicative of potential trafficking. 

To show the impact of this system, Li gives an example scenario in which an ad is reported to law enforcement as being linked to human trafficking. A traditional search to find other ads using the same phone number might yield 600 ads. But by applying template matching, approximately 900 additional ads could be identified, enabling the discovery of previously unassociated phone numbers.

“We then map out this network structure, showing links between ad template clusters and their locations. Suddenly, you see a transnational network,” Li says. “It could be a very powerful way, starting with one ad, of discovering an organization’s entire operation.”
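The scenario Li describes amounts to a graph expansion: “hard” links (shared phone numbers) and “soft” links (shared templates) are followed outward from the reported ad until no new ads appear. A toy version of that idea, with invented ads, looks like this:

```python
# Minimal sketch of the expansion idea described above: starting from one
# reported ad, alternately follow "hard" links (shared phone numbers) and
# "soft" links (shared templates) until no new ads are found. All data here
# is hypothetical.
from collections import deque

ads = {
    "ad_1": {"phone": "555-0101", "template": "T1"},
    "ad_2": {"phone": "555-0101", "template": "T2"},
    "ad_3": {"phone": "555-0199", "template": "T2"},
    "ad_4": {"phone": "555-0142", "template": "T1"},
}

def expand_network(seed):
    network, queue = {seed}, deque([seed])
    while queue:
        current = ads[queue.popleft()]
        for ad_id, ad in ads.items():
            linked = (ad["phone"] == current["phone"] or      # hard link
                      ad["template"] == current["template"])  # soft link
            if linked and ad_id not in network:
                network.add(ad_id)
                queue.append(ad_id)
    return network

print(sorted(expand_network("ad_1")))  # ['ad_1', 'ad_2', 'ad_3', 'ad_4']
```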

Analyzing digital evidence

Once a human trafficking investigation is underway, the process of analyzing evidence to find probable cause for warrants, corroborate victim statements, and build a case for prosecution can be very time- and human-intensive. A case folder might hold thousands of pieces of digital evidence — a conglomeration of business or government records, financial transactions, cell phone data, emails, photographs, social media profiles, audio or video recordings, and more.

“The wide range of data types and formats can make this process challenging. It’s hard to understand the interconnectivity of it all and what pieces of evidence hold answers,” Daggett says. “What investigators want is a way to search and visualize this data with the same ease they would a Google search.”

The system Daggett and his team are prototyping takes all the data contained in an evidence folder and indexes it, extracting the information inside each file into three major buckets — text, imagery, and audio data. These three types of data are then passed through specialized software processes to structure and enrich them, making them more useful for answering investigative questions.                                
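Conceptually, that first indexing pass is a routing step. The sketch below, with made-up file extensions and a made-up case folder name, shows the general shape of sorting an evidence folder into the three buckets before enrichment:

```python
# Schematic of the indexing step described above: walk an evidence folder and
# route each file to a text, imagery, or audio bucket for downstream
# enrichment. Extensions and the folder name are illustrative only.
from pathlib import Path

BUCKETS = {
    "text":    {".txt", ".csv", ".eml", ".pdf", ".docx"},
    "imagery": {".jpg", ".jpeg", ".png", ".heic"},
    "audio":   {".wav", ".mp3", ".m4a"},
}

def index_evidence(folder):
    index = {bucket: [] for bucket in BUCKETS}
    for path in Path(folder).rglob("*"):
        if path.is_file():
            for bucket, extensions in BUCKETS.items():
                if path.suffix.lower() in extensions:
                    index[bucket].append(path)
    return index

# index = index_evidence("case_0042_evidence")  # hypothetical case folder
```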

The image processor, for example, can recognize and extract text, faces, and objects from images. The processor can then detect near-duplicate images in the evidence, making a link between an image that appears in a sex advertisement and the cell phone that took it, even for images that have been heavily edited or filtered. The team is also working on facial recognition algorithms that can identify the unique faces within a set of evidence, model them, and find them elsewhere within the evidence files, under widely different lighting conditions and shooting angles. These techniques are useful for identifying additional victims and corroborating who knows whom.
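One common way to approximate near-duplicate image matching, shown here purely for illustration, is perceptual hashing: edited or filtered copies of a photo tend to hash to values within a small Hamming distance of the original. The third-party imagehash package and the distance threshold below are stand-ins, not the laboratory’s implementation.

```python
# Illustration of near-duplicate detection via perceptual hashing using the
# third-party "imagehash" package; the laboratory's actual approach is not
# specified in the article. Similar photos yield hashes a small Hamming
# distance apart.
from PIL import Image
import imagehash

def near_duplicates(paths, max_distance=8):
    hashes = {p: imagehash.phash(Image.open(p)) for p in paths}
    pairs = []
    for i, a in enumerate(paths):
        for b in paths[i + 1:]:
            if hashes[a] - hashes[b] <= max_distance:  # Hamming distance
                pairs.append((a, b))
    return pairs

# e.g., near_duplicates(["ad_photo.jpg", "phone_gallery_102.jpg"])  # hypothetical files
```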

Another enrichment capability allows investigators to find “signatures” of trafficking in the data. These signatures can be specific vernacular used, for example, in text messages between suspects that refer to illicit activity. Other trafficking signatures can be image-based, such as whether a picture was taken in a hotel room, contains certain objects such as cash, or shows specific types of tattoos that traffickers use to brand their victims. A deep learning model the team is working on now is specifically aimed at recognizing crown tattoos associated with trafficking. “The challenge is to train the model to identify the signature across a wide range of crown tattoos that look very different from one another, and we’re seeing robust performance using this technique,” Daggett says.
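Text-based signatures can be as simple as a curated term list applied to message content. The sketch below uses invented terms and messages purely to show the pattern; a fielded system would rely on vetted vernacular and far more careful matching.

```python
# Toy illustration of a text-based "signature" search: flag messages that use
# vernacular associated with illicit activity. The term list and messages are
# invented for this example, not drawn from the laboratory's work.
import re

SIGNATURE_TERMS = [r"\bnew girl\b", r"\bquota\b", r"\bout of pocket\b"]
pattern = re.compile("|".join(SIGNATURE_TERMS), re.IGNORECASE)

messages = [
    "she was out of pocket again last night",
    "pick up dinner on the way home",
]

flagged = [m for m in messages if pattern.search(m)]
print(flagged)  # ['she was out of pocket again last night']
```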

One particularly time-intensive process for investigators is analyzing thousands of jail phone calls from suspects who are awaiting trial, for indications of witness tampering or continuing illicit operations. The laboratory has been leveraging automated speech recognition technology to develop a tool to allow investigators to partially transcribe and analyze the content of these conversations. This capability gives law enforcement a general idea of what a call might be about, helping them triage ones that should be prioritized for a closer look. 
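The triage step can be pictured as scoring rough transcripts against terms of interest and surfacing the highest-scoring calls first. Everything in the sketch below, from the term list to the transcripts, is hypothetical.

```python
# Sketch of the triage idea: given rough automatic transcripts of calls, rank
# them by how many terms of interest they contain so investigators can review
# the highest-scoring calls first. Transcripts and terms are hypothetical.
TERMS_OF_INTEREST = {"witness", "statement", "money", "delete", "phone"}

def triage(transcripts):
    def score(text):
        words = set(text.lower().split())
        return len(words & TERMS_OF_INTEREST)
    return sorted(transcripts.items(), key=lambda kv: score(kv[1]), reverse=True)

calls = {
    "call_017": "tell her to change her statement before the witness list goes out",
    "call_018": "can you put money on my books and call my mother",
    "call_019": "the game was on last night did you watch it",
}
for call_id, text in triage(calls):
    print(call_id)  # call_017, then call_018, then call_019
```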

Finally, the team has been developing a series of user-facing tools that use all of the processed data to enable investigators to search, discover, and visualize connections between evidentiary artifacts, explore geolocated information on a map, and automatically build evidence timelines.
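At its core, the timeline tool orders timestamped artifacts of any type while keeping a pointer back to the source file. A stripped-down version, with invented records, might look like this:

```python
# Minimal sketch of the timeline idea: collect timestamped artifacts, sort
# them chronologically, and keep a link back to the original evidence file.
# The artifact records below are hypothetical.
from datetime import datetime

artifacts = [
    {"time": datetime(2021, 3, 2, 14, 5),  "kind": "sms",     "source": "phone_dump/msg_221.txt"},
    {"time": datetime(2021, 2, 27, 9, 30), "kind": "payment", "source": "bank_records/row_88.csv"},
    {"time": datetime(2021, 3, 1, 22, 47), "kind": "photo",   "source": "gallery/IMG_5531.jpg"},
]

timeline = sorted(artifacts, key=lambda a: a["time"])
for entry in timeline:
    print(entry["time"], entry["kind"], "->", entry["source"])
```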

“The prosecutors really like the timeline tool, as this is one of the most labor-intensive tasks when preparing for trial,” Daggett says.

When users click on a document, a map pin, or a timeline entry, they see a data card that links back to the original artifacts. “These tools point you back to the primary evidence that cases can be built on,” Daggett says. “A lot of this prototyping is picking what might be called low-hanging fruit, but it’s really more like fruit already on the ground that is useful and just isn’t getting picked up.”

Victim-centered training

These data analytics are especially useful for helping law enforcement corroborate victim statements. Victims may be fearful or unwilling to provide a full picture of their experience to investigators, or may have difficulty recalling traumatic events. The more nontestimonial evidence that prosecutors can use to tell the story to a jury, the less pressure they must place on victims to help secure a conviction. There is greater awareness of the retraumatization that can occur during the investigation and trial processes.

“In the last decade, there has been a greater shift toward a victim-centered approach to investigations,” says Hayley Reynolds, an assistant leader in the Human Health and Performance Systems Group and one of the early leaders of counter–human trafficking research at the laboratory. “There’s a greater understanding that you can’t bring the case to trial if a survivor’s needs are not kept at the forefront.”

Improving training for law enforcement, specifically in interacting with victims, was one of the team’s recommendations in the trafficking technology roadmap. In this area, the laboratory has been developing a scenario-based training capability that uses game-play mechanics to teach law enforcement aspects of trauma-informed victim interviewing. The training, called a “serious game,” helps officers experience how the approach they choose to gather information can build rapport and trust with a victim, or can reduce the feeling of safety and retraumatize victims. The capability is currently being evaluated by several organizations that specialize in victim-centered practitioner training. The laboratory recently published a journal article on serious games built for multiple mission areas over the last decade.

Daggett says that prototyping in partnership with the state and federal investigators and prosecutors these tools are intended for is critical. “Everything we do must be user-centered,” he says. “We study their existing workflows and processes in detail, present ideas for technologies that could improve their work, and they rate what would have the most operational utility. It’s our way to methodically figure out how to solve the most critical problems.”

When Daggett gave congressional testimony in July, he spoke of the need to establish a unified, interagency entity focused on R&D for countering human trafficking. Since then, some progress has been made toward that goal — the federal government has now launched the Center for Countering Human Trafficking, the first integrated center to support investigations and intelligence analysis, outreach and training activities, and victim assistance.

Daggett hopes that future collaborations will enable technologists to apply their work toward capabilities needed most by the community. “Thoughtfully designed technology can empower the collective counter–human trafficking community and disrupt these illicit operations. Increased R&D holds the potential to make a tremendous impact by accelerating justice and hastening the healing of victims.”

Media Advisory — MIT researchers: AI policy needed to manage impacts, build more equitable systems

On Thursday, May 6 and Friday, May 7, the AI Policy Forum — a global effort convened by researchers from MIT — will present its initial policy recommendations aimed at managing the effects of artificial intelligence and building AI systems that better reflect society’s values. Recognizing that there is unlikely to be any single national AI policy, but rather public policies for the distinct ways in which we encounter AI in our lives, forum leaders will preview their preliminary findings and policy recommendations in three key areas: finance, mobility, and health care.

The inaugural AI Policy Forum Symposium, a virtual event hosted by the MIT Schwarzman College of Computing, will bring together AI and public policy leaders, government officials from around the world, regulators, and advocates to investigate some of the pressing questions posed by AI in our economies and societies. The symposium’s program will feature remarks from public policymakers helping shape governments’ approaches to AI; state and federal regulators on the front lines of these issues; designers of self-driving cars and cancer-diagnosing algorithms; faculty examining the systems used in emerging finance companies and associated concerns; and researchers pushing the boundaries of AI.

WHAT:
AI Policy Forum (AIPF) Symposium

WHO:
MIT speakers: 

  • Martin A. Schmidt, MIT provost
  • Daniel Huttenlocher, AIPF chair and dean of the MIT Schwarzman College of Computing
  • Regina Barzilay, MIT School of Engineering Distinguished Professor of AI and Health; AI faculty lead of the Jameel Clinic at MIT
  • Daniel Weitzner, founding director of the MIT Internet Policy Research Initiative; former U.S. deputy chief technology officer in the Office of Science and Technology Policy
  • Luis Videgaray, senior lecturer in the MIT Sloan School of Management; former foreign minister and minister of finance of Mexico
  • Aleksander Madry, professor of computer science in the MIT Department of Electrical Engineering and Computer Science
  • R. David Edelman, director of public policy for the MIT Internet Policy Research Initiative; former special assistant to U.S. President Barack Obama for economic and technology policy
  • Julie Shah, MIT associate professor of aeronautics and astronautics; associate dean of social and ethical responsibilities of computing in the MIT Schwarzman College of Computing
  • Andrew Lo, professor of finance in the MIT Sloan School of Management

Guest speakers and participants: 

  • Julie Bishop, chancellor of the Australian National University; former minister of foreign affairs and member of the Parliament of Australia
  • Andrew Wyckoff, director for science, technology and innovation at the Organization for Economic Cooperation and Development (OECD)
  • Martha Minow, 300th Anniversary University Professor at Harvard Law School; former dean of the Harvard Law School
  • Alejandro Poiré, dean of the School of Public Policy at Monterrey Tec; former secretary of the interior of Mexico
  • Ngaire Woods, dean of the Blavatnik School of Government at the University of Oxford
  • Darran Anderson, director of strategy and innovation at the Texas Department of Transportation
  • Nat Beuse, vice president of safety at Aurora; former head safety regulator for autonomous vehicles at the U.S. Department of Transportation
  • Laura Major, chief technology officer of Motional
  • Manuela Veloso, head of AI research at JP Morgan Chase
  • Stephanie Lee, managing director of BlackRock Systematic Active Equities Emerging Markets

WHEN:
Thursday and Friday, May 6 and 7

Media RSVP:
Reporters interested in attending can register here. More information on the AI Policy Forum can be found here.

With a zap of light, system switches objects’ colors and patterns

When was the last time you repainted your car? Redesigned your coffee mug collection? Gave your shoes a colorful facelift?

You likely answered: never, never, and never. You might consider these arduous tasks not worth the effort. But a new color-shifting “programmable matter” system could change that with a zap of light.

MIT researchers have developed a way to rapidly update imagery on object surfaces. The system, dubbed “ChromoUpdate,” pairs an ultraviolet (UV) light projector with items coated in light-activated dye. The projected light alters the reflective properties of the dye, creating colorful new images in just a few minutes. The advance could accelerate product development, enabling designers to churn through prototypes without getting bogged down with painting or printing.


ChromoUpdate “takes advantage of fast programming cycles — things that wouldn’t have been possible before,” says Michael Wessely, the study’s lead author and a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory.

The research will be presented at the ACM Conference on Human Factors in Computing Systems this month. Wessely’s co-authors include his advisor, Professor Stefanie Mueller, as well as postdoc Yuhua Jin, recent graduate Cattalyya Nuengsigkapian ’19, MNG ’20, visiting master’s student Aleksei Kashapov, postdoc Isabel Qamar, and Professor Dzmitry Tsetserukou of the Skolkovo Institute of Science and Technology.

ChromoUpdate builds on the researchers’ previous programmable matter system, called PhotoChromeleon. That method was “the first to show that we can have high-resolution, multicolor textures that we can just reprogram over and over again,” says Wessely. PhotoChromeleon used a lacquer-like ink comprising cyan, magenta, and yellow dyes. The user covered an object with a layer of the ink, which could then be reprogrammed using light. First, UV light from an LED was shone on the ink, fully saturating the dyes. Next, the dyes were selectively desaturated with a visible light projector, bringing each pixel to its desired color and leaving behind the final image. PhotoChromeleon was innovative, but it was sluggish. It took about 20 minutes to update an image. “We can accelerate the process,” says Wessely.

They achieved that with ChromoUpdate by fine-tuning the UV saturation process. Rather than using an LED, which uniformly blasts the entire surface, ChromoUpdate uses a UV projector that can vary light levels across the surface. So, the operator has pixel-level control over saturation levels. “We can saturate the material locally in the exact pattern we want,” says Wessely. That saves time — someone designing a car’s exterior might simply want to add racing stripes to an otherwise completed design. ChromoUpdate lets them do just that, without erasing and reprojecting the entire exterior.
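As a very rough numerical picture of that two-step process, the sketch below treats a texture as per-pixel cyan/magenta/yellow dye values in [0, 1]: a UV mask saturates only the pixels that need to change, and visible light then desaturates each channel down to its target. This is an illustration of the idea, not the researchers’ control software, and the dye values are invented.

```python
# Highly simplified model of the saturate-then-desaturate process described
# above (not the researchers' actual control code). A "texture" here is an
# array of CMY dye values in [0, 1] for each pixel.
import numpy as np

current = np.array([[[0.2, 0.9, 0.1]]])   # existing CMY texture (1x1 "image")
target  = np.array([[[0.7, 0.3, 0.1]]])   # desired CMY texture

uv_mask = np.any(current != target, axis=-1, keepdims=True)  # pixel-level UV control
saturated = np.where(uv_mask, 1.0, current)                   # UV drives dyes to full saturation

# Visible light then removes (1 - target) from each channel of the saturated pixels.
result = np.where(uv_mask, saturated - (1.0 - target), current)
print(result)  # approximately [[[0.7 0.3 0.1]]]
```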

This selective saturation procedure allows designers to create a black-and-white preview of a design in seconds, or a full-color prototype in minutes. That means they could try out dozens of designs in a single work session, a previously unattainable feat. “You can actually have a physical prototype to see if your design really works,” says Wessely. “You can see how it looks when sunlight shines on it or when shadows are cast. It’s not enough just to do this on a computer.”

That speed also means ChromoUpdate could be used for providing real-time notifications without relying on screens. “One example is your coffee mug,” says Wessely. “You put your mug in our projector system and program it to show your daily schedule. And it updates itself directly when a new meeting comes in for that day, or it shows you the weather forecast.”

Wessely hopes to keep improving the technology. At present, the light-activated ink is specialized for smooth, rigid surfaces like mugs, phone cases, or cars. But the researchers are working toward flexible, programmable textiles. “We’re looking at methods to dye fabrics and potentially use light-emitting fibers,” says Wessely. “So, we could have clothing — t-shirts and shoes and all that stuff — that can reprogram itself.”

The researchers have partnered with a group of textile makers in Paris to see how ChromoUpdate can be incorporated into the design process.

This research was funded, in part, by Ford.

Study finds ride-sharing intensifies urban road congestion

Transport network companies (TNCs), or ride-sharing companies, have gained widespread popularity across much of the world, with more and more cities adopting their services. While ride-sharing has been credited with being more environmentally friendly than taxis and private vehicles, is that really the case today, or do these services instead contribute to urban congestion?

Researchers at the Future Urban Mobility (FM) Interdisciplinary Research Group (IRG) at the Singapore-MIT Alliance for Research and Technology (SMART), MIT, and Tongji University conducted a study to find out.

In a paper titled “Impacts of transportation network companies on urban mobility” recently published in Nature Sustainability, the first-of-its-kind study assessed three aspects of how ride-sharing (more accurately called ride-hailing) impacts urban mobility in the United States — road congestion, public transport ridership, and private vehicle ownership — and how they have evolved over time.

“While public transportation provides high-efficiency shared services, it can only accommodate a small portion of commuters, as their coverage is limited in most places,” says Jinhua Zhao, SMART FM principal investigator and associate professor in the MIT Department of Urban Studies and Planning. “While mathematical models in prior studies showed that the potential benefit of on-demand shared mobility could be tremendous, our study suggests that translating this potential into actual gains is much more complicated in the real world.”

Using a panel dataset covering mobility trends, socio-demographic changes, and TNC entry at the metropolitan statistical area level to construct a set of fixed-effects panel models, the researchers found that the entrance of TNCs led to increased road congestion in terms of both intensity and duration. Specifically, congestion intensity increased by almost 1 percent, while the duration of congestion rose by 4.5 percent. They also found an 8.9 percent drop in public transport ridership and an insignificant decrease of only 1 percent in private vehicle ownership.
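For readers unfamiliar with fixed-effects panel models, the sketch below shows the general form of such a regression with metro-area and year effects. The variable names, the tiny dataset, and the coefficient it prints are placeholders for illustration, not the study’s data or results.

```python
# Illustrative two-way fixed-effects panel regression of the kind described
# above, using metro-area (MSA) and year dummies. The data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "msa":        ["A", "A", "B", "B", "C", "C"],
    "year":       [2014, 2017, 2014, 2017, 2014, 2017],
    "tnc_entry":  [0, 1, 0, 1, 0, 0],      # 1 once TNCs operate in the metro area
    "congestion": [1.10, 1.18, 1.05, 1.12, 0.98, 0.99],
})

# MSA and year fixed effects absorb metro-specific and year-specific shocks;
# the tnc_entry coefficient then captures the estimated TNC effect.
model = smf.ols("congestion ~ tnc_entry + C(msa) + C(year)", data=df).fit()
print(model.params["tnc_entry"])
```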

While many previous studies have focused on Uber alone, this study takes into account both Uber and Lyft — the two most popular ride-sharing companies in the United States. Uber accounts for 69 percent of the market and Lyft for a significant 29 percent, so including both in the dataset gives a more holistic and unbiased estimate of the TNC effect.

The study also finds that easy access to ride-sharing discourages commuters from taking greener alternatives, such as walking or public transportation. Survey data from various U.S. cities showed that approximately half of TNC trips would otherwise have been made by walking, cycling, or public transport, or would not have been made at all.

“We are still in the early stages of TNCs and we are likely to see many changes in how these ride-sharing businesses operate,” says Hui Kong, SMART-FM alumna and postdoc at the MIT Urban Mobility Lab, and an author of the paper. “Our research shows that over time TNCs have intensified urban transport challenges and road congestion in the United States, mainly through the extended duration and slightly through the increased intensity. With this information, policies can then be introduced that could lead to positive changes.”

The researchers think that the substantial deadheading miles (miles traveled without a passenger) driven by TNC vehicles could contribute to TNCs’ negative impact on road congestion. Other studies have estimated that approximately 40.8 percent of TNC miles are deadheading miles.

“Our findings can provide useful insights into the role that TNCs have played in urban transport systems,” says Professor Mi Diao of Tongji University, a SMART-FM alumnus and the lead author of the paper. “It can be very useful in supporting transportation planners and policymakers in their decisions and regulations with regard to TNCs.”

The research was carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

FM is one of five IRGs in SMART. FM harnesses new technological and institutional innovations to create the next generation of urban mobility systems to increase accessibility, equity, safety, and environmental performance for the citizens and businesses of Singapore and other metropolitan areas worldwide.

SMART is MIT’s research enterprise in Singapore, established in partnership with the NRF in 2007. SMART was the first entity in CREATE. It serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, FM, and Low Energy Electronic Systems.
