Stackable online Master of Science in supply chain management announced

EdX, Arizona State University, and MIT have announced the launch of an online master’s degree program in supply chain management. This unique credit pathway between MIT and ASU takes a MicroMasters program from one university, MIT, and stacks it up to a full master’s degree on edX from ASU. Learners who complete and pass the Supply Chain Management MicroMasters program and then apply and gain admission to ASU are eligible to earn a top-ranked graduate degree from ASU’s W. P. Carey School of Business and ASU Online. MIT and ASU are both currently ranked in the top three for graduate supply chain and logistics by U.S. News and World Report.

This new master’s degree is the latest program to launch following edX’s October 2018 announcement of 10 disruptively priced and top-ranked online master’s degree programs available on edX.org. Master’s degrees on edX are unique because they are stacked, degree-granting programs with a MicroMasters program component. A MicroMasters program is a series of graduate-level courses that provides learners with valuable standalone skills that translate into career-focused advancement, as well as the option to use the completed coursework as a stepping stone toward credit in a full master’s degree program. 

“We are excited to strengthen our relationship with ASU to offer this innovative, top-ranked online master’s degree program in supply chain management,” says Anant Agarwal, edX CEO and MIT professor. “This announcement comes at a time when the workplace is changing more rapidly than ever before, and employers are in need of highly skilled talent, especially in the fields most impacted by advances in technology. This new offering truly transforms traditional graduate education by bringing together two top-ranked schools in supply chain management to create the world’s first stackable, hybrid graduate degree program. This approach to a stackable, flexible, top-quality online master’s degree is the latest milestone in addressing today’s global skills gap.”

ASU’s online master’s degree program will help prepare a highly technical and competent global workforce for advancement in supply chain management careers across a broad diversity of industries and functions. Students enrolled in the program will also gain an in-depth understanding of the role the supply chain manager can play in an enterprise supply chain and in determining overall strategy. 

“We’re very excited to collaborate with MIT and edX to increase accessibility to a top-ranked degree in supply chain management,” says Amy Hillman, dean of the W. P. Carey School of Business at ASU. “We believe there will be many students who are eager to dive deeper after their MicroMasters program to earn a master’s degree from ASU, and that more learners will be drawn to the MIT Supply Chain Management MicroMasters program as this new pathway to a graduate degree within the edX platform becomes available.”
 
With this new pathway, the MIT Supply Chain Management MicroMasters program now offers learners pathways to completing a master’s degree at 21 institutions. This new program with ASU for the supply chain management online master’s degree offers a seamless learner experience through an easy transition of credit and a timely completion of degree requirements without leaving the edX platform. 

“Learners who complete the MITx MicroMasters program credential from the MIT Center for Transportation and Logistics will now have the opportunity to transition seamlessly online to a full master’s degree from ASU,” says Krishna Rajagopal, dean for digital learning at MIT Open Learning. “We are delighted to add this program to MIT’s growing number of pathways that provide learners with increased access to higher education and career advancement opportunities in a flexible, affordable manner.”

The online Master of Science in supply chain management from ASU will launch in January 2020. Students currently enrolled in, or who have already completed, the MITx Supply Chain Management MicroMasters program can apply now for the degree program, with an application deadline of Dec. 16.

3Q: David Mindell on his vision for human-centered robotics

David Mindell, Frances and David Dibner Professor of the History of Engineering and Manufacturing in the School of Humanities, Arts, and Social Sciences and professor of aeronautics and astronautics, researches the intersections of human behavior, technological innovation, and automation. Mindell is the author of five acclaimed books, most recently “Our Robots, Ourselves: Robotics and the Myths of Autonomy” (Viking, 2015) as well as the co-founder of the Humatics Corporation, which develops technologies for human-centered automation. SHASS Communications spoke with Mindell recently on how his vision for human-centered robotics is developing and his thoughts about the new MIT Stephen A. Schwarzman College of Computing, which aims to integrate technical and humanistic research and education.  
 
Q: Interdisciplinary programs have proved challenging to sustain, given the differing methodologies and vocabularies of the fields being brought together. How might the MIT Schwarzman College of Computing design the curriculum to educate “bilinguals” — students who are adept in both advanced computation and one or more of the humanities, arts, and social science fields?
 
A: Some technology leaders today are naive and uneducated in humanistic and social thinking. They still think that technology evolves on its own and “impacts” society, instead of understanding technology as a human and cultural expression, as part of society.

As a historian and an engineer, and MIT’s only faculty member with a dual appointment in engineering and the humanities, I’ve been “bilingual” my entire career (long before we began using that term for fluency in both humanities and technology fields). My education started with firm grounding in two fields — electrical engineering and history — that I continue to study.

Dual competence is a good model for undergraduates at MIT today as well. Pick two: not necessarily the two that I chose, but any two disciplines that capture the core of technology and the core of the humanities. Disciplines at the undergraduate level provide structure, conventions, and professional identity (although my appointment is in Aero/Astro, I still identify as an electrical engineer). I prefer the term “dual disciplinary” to “interdisciplinary.” 

The College of Computing curriculum should focus on fundamentals, not just engineering plus some dabbling in social implications.

It sends the wrong message to students that “the technical stuff is core, and then we need to add all this wrapper humanities and social sciences around the engineering.” Rather, we need to say: “master two fundamental ways of thinking about the world, one technical and one humanistic or social.” Sometimes these two modes will be at odds with each other, which raises critical questions. Other times they will be synergistic and energizing. For example, my historical work on the Apollo guidance computer inspired a great deal of my current engineering work on precision navigation.

Q: In naming the company you founded Humatics, you’ve combined “human” and “robotics,” highlighting the synergy between human beings and our advanced technologies. What projects underway at Humatics define and demonstrate how you envision people working collaboratively with machines? 

A: Humatics builds on the synthesis that has defined my career — the name is the first four letters of “human” and the last four letters of “robotics.” Our mission is to build technologies that weave robotics into the human world, rather than shape human behavior to the limitations of the robots. We do very technical stuff: We build our own radar chips, our own signal processing algorithms, our own AI-based navigation systems. But we also craft our technologies to be human-centered, to give users and workers information that enables them to make their own decisions and work safer and more efficiently.

We’re currently working to incorporate our ultra-wideband navigation systems into subway and mass transit systems. Humatics’ technologies will enable modern signaling systems to be installed more quickly and less expensively. It’s gritty, dirty work down in the tunnels, but it is a “smart city” application that can improve the daily lives of millions of people. By enabling the trains to navigate themselves with centimeter-precision, we enable greater rush-hour throughput, fewer interruptions, even improved access for people with disabilities, at a minimal cost compared to laying new track.
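The link between navigation precision and rush-hour throughput can be illustrated with a toy headway calculation. This is my own back-of-the-envelope sketch, not Humatics data: the speeds, braking distances, and block sizes below are assumed values chosen only to show the mechanism.

```python
def trains_per_hour(speed_mps: float, brake_dist_m: float,
                    position_uncertainty_m: float, dwell_s: float = 30.0) -> float:
    """Trains per hour when each train must trail the one ahead by its
    braking distance plus a margin for position uncertainty, and spends
    dwell_s seconds stopped at each station."""
    if speed_mps <= 0:
        raise ValueError("speed must be positive")
    headway_m = brake_dist_m + position_uncertainty_m
    headway_s = headway_m / speed_mps + dwell_s
    return 3600.0 / headway_s

# Legacy fixed-block signaling may only localize a train to a ~200 m block;
# centimeter-level navigation shrinks that margin to ~0.1 m.
legacy = trains_per_hour(15.0, 300.0, 200.0)   # ~56.8 trains/hour
precise = trains_per_hour(15.0, 300.0, 0.1)    # ~72.0 trains/hour
```

With the same track, trains, and braking performance, tightening the position margin alone raises the toy model's throughput by roughly a quarter, which is the intuition behind the article's claim.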

A great deal of this work focuses on reliability, robustness, and safety. These are large technological systems that MIT used to focus on in the Engineering Systems Division. They are legacy infrastructure running at full capacity, with a variety of stakeholders, and technical issues hashed out in political debate. As an opportunity to improve people’s lives with our technology, this project is very motivating for the Humatics team.

We see a subway system as a giant robot that collaborates with millions of people every day. Indeed, for all its flaws, it does so today in beautifully fluid ways. Disruption is not an option. Similarly, we see factories, e-commerce fulfillment centers, even entire supply chains as giant human-machine systems that combine three key elements: people, robots (vehicles), and infrastructure. Humatics builds the technological glue that ties these systems together.

Q: Autonomous cars were touted to be available soon, but their design has run into issues and ethical questions. Is there a different approach to the design of artificially intelligent vehicles, one that does not attempt to create fully autonomous vehicles? If so, what are the barriers or resistance to human-centered approaches?

A: Too many engineers still imagine autonomy as meaning “alone in the world.” This approach derives from a specific historical imagination of autonomy, derived from Defense Advanced Research Projects Agency sponsorship and elsewhere, that a robot should be independent of all infrastructure. While that’s potentially appropriate for military operations, the promise of autonomy on our roads must be the promise of autonomy in the human world, in myriad exquisite relationships.

Autonomous vehicle companies are learning, at great expense, that they already depend heavily on infrastructure (including roads and traffic signs) and that the sooner they learn to embrace it, the sooner they can deploy at scale. Decades of experience have taught us that, to function in the human world, autonomy must be connected, relational, and situated. Human-centered autonomy in automobiles must be more than a fancy Fitbit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Whom do we trust? Who owns our data? How are our systems trained? How do they handle failure? Who gets to decide?

The current crisis over the Boeing 737 MAX control systems shows these questions are hard to get right, even in aviation. There we have a great deal of regulation, formalism, training, and procedure, not to mention a safety culture that evolved over a century. For autonomous cars, with radically different regulatory settings and operating environments, not to mention non-deterministic software, we still have a great deal to learn. Sometimes I think it could take the better part of this century to really learn how to build robust autonomy into safety-critical systems at scale.
 

Interview prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Interview conducted by writer Maria Iacobo

 

A scholar and teacher re-examines moments in the history of STEM

When Clare Kim began her fall 2017 semester as the teaching assistant for 21H.S01, the inaugural “MIT and Slavery” course, she didn’t know she and her students would be creating a historical moment of their own at the Institute.

Along with Craig Steven Wilder, the Barton L. Weller Professor of History, and Nora Murphy, an archivist for researcher services in the MIT Libraries, Kim helped a team of students use archival materials to examine the Institute’s ties to slavery and how that legacy has impacted the modern structure of scientific institutions. The findings that came to light through the class thrust Kim and her students onto a prominent stage. They spoke about their research in media interviews and at a standing-room-only community forum, and helped bring MIT into a national conversation about universities and the institution of slavery in the United States.

For Kim, a PhD student in MIT’s Program in History, Anthropology, and Science, Technology, and Society (HASTS), it was especially rewarding to help the students to think critically about their own scientific work through a historical context. She enjoyed seeing how the course challenged conventional ideas that had been presented to them about their various fields of study.

“I think people tend to think too much about history as a series of true facts where the narrative that gets constructed is stabilized. Conducting historical research is fun because you have a chance to re-examine evidence, examine archival materials, reinterpret some of what has already been written, and craft a new narrative as a result,” Kim says.

This year, Kim was awarded the prestigious Goodwin Medal for her work as a TA for several MIT courses. The award recognizes graduate teaching assistants who have gone the extra mile in the classroom. Faculty, colleagues, and former students praised Kim for her compassionate, supportive, and individual approach to teaching.

“I love teaching,” she says. “I like to have conversations with my students about what I’m thinking about. It’s not that I’m just imparting knowledge, but I want them to develop a critical way of thinking. I want them to be able to challenge whatever analyses I introduce to them.”

Kim also applies this critical-thinking lens to her own scholarship in the history of mathematics. She is particularly interested in studying math this way because the field is often perceived as “all-stable” and contained, when in fact its boundaries have been much more fluid.

Mathematics and creativity

Kim’s own work re-examines the history of mathematical thought and how it has impacted nonscientific and technical fields in U.S. intellectual life. Her dissertation focuses on the history of mathematics and the ways that mathematicians interacted with artists, humanists, and philosophers throughout the 20th century. She looks at the dialogue and negotiations between different scholars, exploring how they reconfigured the boundaries between academic disciplines.

Kim says that this moment in history is particularly interesting because it reframes mathematics as a field that hasn’t operated autonomously, but rather has engaged with humanistic and artistic practices. This creative perspective, she says, suggests an ongoing, historical relationship between mathematics and the arts and humanities that may come as a surprise to those more likely to associate mathematics with technical and military applications, at least in terms of practical uses.

“Accepting this clean divide between mathematics and the arts occludes all of these fascinating interactions and conversations between mathematicians and nonmathematicians about what it meant to be modern and creative,” Kim says. One such moment of interaction she explores is between mathematicians and design theorists in the 1930s, who worked together in an attempt to develop and teach a mathematical theory of “aesthetic measure,” a way of ascribing judgments of beauty and taste.  
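The best-known product of that 1930s effort is George D. Birkhoff's 1933 book "Aesthetic Measure," which proposed scoring an object as perceived order divided by complexity, M = O/C. A minimal sketch of the idea follows; the numeric scores in the usage comment are illustrative values of my own, not Birkhoff's published tabulations.

```python
def aesthetic_measure(order: float, complexity: float) -> float:
    """Birkhoff-style aesthetic measure M = O / C: more perceived
    order (symmetry, balance) and fewer structural elements yield
    a higher score."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return order / complexity

# Illustrative comparison: a symmetric figure with few elements
# outscores an irregular, busy one under this ratio.
assert aesthetic_measure(6, 4) > aesthetic_measure(3, 9)
```

The formula itself is the point of historical interest: it is exactly the kind of attempt to quantify judgments of beauty and taste that Kim's work examines.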

Building the foundation

With an engineering professor father and a mathematician mother, Kim has long been interested in science and mathematics. However, she says influences from her family, which includes a twin sister who is a classicist and an older sister who studied structural biology, ensured that she would also develop a strong background in the humanities and literature.

Kim entered college thinking that she would pursue a technical field, though likely not math itself — she jokes that her math career peaked during her time competing in MATHCOUNTS as a kid. But during her undergraduate years at Brown University, she took a course on the history of science taught by Joan Richards, a professor specializing in the history of mathematics. There, she discovered her interest in studying not just scientific knowledge, but the people who pursue it.

After earning a bachelor’s in history at Brown, with a focus in mathematics and science, Kim decided to pursue a doctoral degree. MIT’s HASTS program appealed to her because of its interdisciplinary approach to studying the social and political components of science and technology.

“In addition to receiving more formal training in the history of science itself, HASTS trained me in anthropological inquiry, political theory, and all these different kinds of methods that could be brought to bear on the social sciences and humanities more generally,” Kim says.

After defending her thesis, Kim will begin a postdoc at Washington University in St. Louis, where she will continue her research and begin converting her dissertation into a book manuscript. She will also be teaching a course she has developed called “Code and Craft,” which explores, in a variety of historical contexts, the artful and artisanal components of AI, computing, and otherwise “technical” domains.

In her free time, Kim practices taekwondo (she has a first-degree black belt) and enjoys taking long walks through Cambridge, which she says is how she gets some of her best thinking done.

Transmedia Storytelling Initiative launches with $1.1 million gift

Driven by the rise of transformative digital technologies and the proliferation of data, human storytelling is rapidly evolving in ways that challenge and expand our very understanding of narrative. Transmedia, in which stories and data move across multiple platforms and social transformations, encompasses a wide range of theoretical, philosophical, and creative perspectives, and calls for a shared critical conversation about both making and understanding.

MIT’s School of Architecture and Planning (SA+P), working closely with faculty in the MIT School of Humanities, Arts, and Social Sciences (SHASS) and others across the Institute, has launched the Transmedia Storytelling Initiative under the direction of Professor Caroline Jones, an art historian, critic, and curator in the History, Theory, Criticism section of SA+P’s Department of Architecture. The initiative will build on MIT’s bold tradition of art education, research, production, and innovation in media-based storytelling, from film through augmented reality. Supported by a foundational gift from David and Nina Fialkow, this initiative will create an influential hub for pedagogy and research in time-based media.

The goal of the program is to create new partnerships among faculty across schools, offer pioneering pedagogy to students at the graduate and undergraduate levels, convene conversations among makers and theorists of time-based media, and encourage shared debate and public knowledge about pressing social issues, aesthetic theories, and technologies of the moving image.

The program will bring together faculty from SA+P and SHASS, including the Comparative Media Studies/Writing program, and from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). The formation of the MIT Stephen A. Schwarzman College of Computing adds another powerful dimension to the collaborative potential.

“We are grateful to Nina and David for helping us build on the rich heritage of MIT in this domain and carry it forward,” says SA+P Dean Hashim Sarkis. “Their passion for both innovation and art is invaluable as we embark on this new venture.”

The Fialkows’ interest in the initiative stems from their longstanding engagement with filmmaking. David Fialkow, cofounder and managing director of venture capital firm General Catalyst, earned the 2018 Academy Award for producing the year’s best documentary, “Icarus.” Nina Fialkow has worked as an independent film producer for PBS as well as on several award-winning documentaries. Nina has served as chair of the Massachusetts Cultural Council since 2016.

“We are thrilled and humbled to support MIT’s vision for storytelling,” say David and Nina Fialkow. “We hope to tap into our ecosystem of premier thinkers, creators, and funders to grow this initiative into a transformative program for MIT’s students, the broader community, and our society.”

The building blocks

The Transmedia Storytelling Initiative draws on MIT’s long commitment to provocative work produced at the intersection of art and technology.

In 1967, the Department of Architecture established the Film Section and founded the Center for Advanced Visual Studies (CAVS). Over time, CAVS brought scores of important video, computer, and “systems” artists to campus. In parallel, the Film Section trained generations of filmmakers as part of Architecture’s Visual Arts Program (VAP). SA+P uniquely brought making together with theorizing, as Urban Studies and Architecture departments fostered sections such as History, Theory, Criticism (HTC), and the Architecture Machine group that became the Media Lab in 1985.

A major proponent of “direct cinema,” the Film Section was based in the Department of Architecture until it relocated to the Media Lab. With the retirement of its charismatic leader, Professor Richard Leacock, its energies shifted to the Media Lab’s Interactive Cinema group (1987–2004) under the direction of the lab’s research scientist and Leacock’s former student, Glorianna Davenport.

The 1990s’ shift from analog film and video to “digitally convergent” forms (based on bits, bytes, and algorithms) transformed production and critical understanding of time-based media, distributing storytelling and making across the Institute (and across media platforms, going “viral” around the globe).

In parallel to Davenport’s Interactive Cinema group and preceding the Media Lab’s Future Storytelling group (2008–2017), the Comparative Media Studies program — now Comparative Media Studies/Writing (CMS/W) — emerged in SHASS in 1999 and quickly proved to be a leader in cross-media studies. The research of CMS/W scholars such as Henry Jenkins gave rise to the terms “transmedia storytelling” and “convergence” that have since become widely adopted.

The program’s commitment to MIT’s “mens-et-manus” (“mind-and-hand”) ethos takes the form of several field-shaping research labs, including: the Open Documentary Lab, which partners with Sundance and Oculus and explores storytelling and storyfinding with interactive, immersive, and machine learning systems; and the Game Lab, which draws on emergent technologies and partners with colleagues in the Department of Computer Science and Engineering to create rule-based ludic narratives. Current CMS/W faculty such as professors William Uricchio, Nick Montfort, D. Fox Harrell, and Lisa Parks each lead labs that draw fellows and postdocs to their explorations of expressive systems. All have been actively involved in the discussions leading to and shaping this new initiative.

Reflecting on the new initiative, Melissa Nobles, Kenan Sahin Dean of SHASS, says, “For more than two decades, the media, writing, and literature faculty in MIT SHASS have been at the forefront of examining the changing nature of media to empower storytelling, collaborating with other schools across the Institute. The Transmedia Initiative will enable our faculty in CMS/W and other disciplines in our school to work with the SA+P faculty and build new partnerships that apply the humanistic lens to emerging media, especially as it becomes increasingly digital and ever more influential in our society.”

The Transmedia Storytelling Initiative will draw on these related conversations across MIT, in the urgent social project of revealing stories created within data by filters and algorithms, as well as producing new stories through the emerging media of the future.

“For the first time since the analog days of the Film Section, there will be a shared conversation around the moving image and its relationship to our lived realities,” says Caroline Jones. “Transmedia’s existing capacity to multiply storylines and allow users to participate in co-creation will be amplified by the collaborative force of MIT makers and theorists. MIT is the perfect place to launch this, and now is the time.”

Involving members of several schools will be important to the success of the new initiative. Increasingly, faculty across SA+P use moving images, cinematic tropes, and powerful narratives to model potential realities and tell stories with design in the world. Media theorists in SHASS use humanistic tools to decode the stories embedded in our algorithms and the feelings provoked by media, from immersion to surveillance. 

SA+P’s Art, Culture and Technology program — the successor to VAP and CAVS — currently includes three faculty who are renowned for theorizing and producing innovative forms of what has long been theorized as “expanded cinema”: Judith Barry (filmic installations and media theory); Renée Green (“Free Agent Media,” “Cinematic Migrations”); and Nida Sinnokrot (“Horizontal Cinema”). In these artists’ works, the historical “new media” of cinema is reanimated, deconstructed, and reassembled to address wholly contemporary concerns.

Vision for the initiative

Understandings of narrative, the making of time-based media, and modes of alternative storytelling go well beyond “film.” CMS in particular ranges across popular culture entities such as music video, computer games, and graphic novels, as well as more academically focused practices from computational poetry to net art.

The Transmedia Storytelling Initiative will draw together the various strands of such compelling research and teaching about time-based media to meet the 21st century’s unprecedented demands, including consideration of ethical dimensions.

“Stories unwind to reveal humans’ moral thinking,” says Jones. “Implicit in the Transmedia Storytelling Initiative is the imperative to convene an ethical conversation about what narratives are propelling the platforms we share and how we can mindfully create new stories together.”

Aiming ultimately for a physical footprint offering gathering, production, and presentation spaces, the initiative will begin to coordinate pedagogy for a proposed undergraduate minor in Transmedia. This course of study will encompass storytelling via production and theory, spanning computational platforms that convert data into affective videos, artistic documentary forms, and the analysis and critique of contemporary media technologies.

Engineers set the standards

It might not seem consequential now, but in 1863, Scientific American weighed in on a pressing technological issue: the standardization of screw threads in U.S. machine shops. Given standard-size threads — the ridges running around screws and bolts — screws missing from machinery could be replaced with hardware from any producer. But without a standard, fixing industrial equipment would be harder or even impossible.

Moreover, Great Britain had begun standardizing the size of screw threads, so why couldn’t the U.S.? After energetic campaigning by a mechanical engineer named William Sellers, both the U.S. Navy and the Pennsylvania Railroad got on board with the idea, greatly helping standardization take hold.

Why did it matter? The latter half of the 1800s was an unprecedented time of industrial expansion. But the products and tools of the time were not necessarily uniform. Making them compatible served as an accelerant for industrialization. The standardization of screw threads was a signature moment in this process — along with new standards for steam boilers (which had a nasty habit of exploding) and for the steel rails used in train tracks.

Moreover, what goes for 19th-century hardware goes for hundreds of things used in daily life today. From software languages to batteries, transmission lines to power plants, cement, and more, standardization still helps fuel economic growth.

“Everything around us is full of standards,” says JoAnne Yates, the Sloan Distinguished Professor of Management at MIT. “None of us could function without standards.”

But how did this all come about? One might expect government treaties to be essential for global standards to exist. But time and again, Yates notes, industrial standards are voluntary and have the same source: engineers. Or, more precisely, nongovernmental standard-setting bodies dominated by engineers, which work to make technology uniform across borders.

“On one end of a continuum is government regulation, and on the other are market forces, and in between is an invisible infrastructure of organizations that helps us arrive at voluntary standards without which we couldn’t operate,” Yates says.

Now Yates is the co-author of a new history that makes the role of engineers in setting standards more visible than ever. The book, “Engineering Rules: Global Standard Setting since 1880,” is being published this week by Johns Hopkins University Press. It is co-authored by Yates, who teaches in the MIT Sloan School of Management, and Craig N. Murphy, who is the Betty Freyhof Johnson ’44 Professor of International Relations at Wellesley College.

Joint research project

As it happens, Murphy is also Yates’ husband — and, for the first time, they have collaborated on a research project.

“He’s a political scientist and I’m a business historian, but we had said throughout our careers, ‘Some day we should write a book together,’” Yates says. When it crossed their radar as a topic, the evolution of standards “immediately appealed to both of us,” she adds. “From Craig’s point of view, he studies global governance, which also includes nongovernmental institutions like this. I saw it as important because of the way firms play a role in it.”

As Yates and Murphy see it, there have been three distinct historical “waves” of technological standardization. The first, the late 19th- and early 20th-century industrial phase, was spurred by the professionalization of engineering itself. Those engineers were trying to impose order on a world far less organized than ours: Although the U.S. Constitution gives Congress the power to set standards, a U.S. National Bureau of Standards was not created until 1901, when there were still 25 different basic units of length — such as “rods” — being used in the country.

Much of this industrial standardization occurred country by country. But by the early 20th century, engineers ramped up their efforts to make standards international — and some, like the British engineer Charles le Maistre, a key figure in the book, were very aspirational about global standards.

“Technology evangelists, like le Maistre, spread the word about the importance of standardizing and how technical standards should transcend politics and transcend national boundaries,” Yates says, adding that many had a “social movement-like fervor, feeling that they were contributing to the common good. They even thought it would create world peace.”

It didn’t. Still, the momentum for standards created by le Maistre carried into the post-World War II era, the second wave detailed in the book. This new phase, Yates notes, is exemplified by the creation of the standardized shipping container, which made worldwide commerce vastly easier in terms of logistics and efficiency.

“This second wave was all about integrating the global market,” Yates says. 

The third and most recent wave of standardization, as Yates and Murphy see it, is centered on information technology — where engineers have once again toiled, often with a sense of greater purpose, to develop global standards.

To some degree this is an MIT story; Tim Berners-Lee, inventor of the World Wide Web, moved to MIT to establish a global standards consortium for the web, W3C, which was founded in 1994, with the Institute’s backing. More broadly, Yates and Murphy note, the era is marked by efforts to speed up the process of standard-setting, “to respond to a more rapid pace of technological change” in the world.

Setting a historical standard

Intriguingly, as Yates and Murphy document, many efforts to standardize technologies required firms and business leaders to put aside their short-term interests for a longer-term good — whether for a business, an industry, or society generally.

“You can’t explain the standards world entirely by economics,” Yates says. “And you can’t explain the standards world entirely by power.”

Other scholars regard the book as a significant contribution to the history of business and globalization. Yates and Murphy “demonstrate the crucial impact of private and informal standard setting on our daily lives,” according to Thomas G. Weiss, a professor of international relations and global governance at the Graduate Center of the City University of New York. Weiss calls the book “essential reading for anyone wishing to understand the major changes in the global economy.”

For her part, Yates says she hopes readers will, among other things, reflect on the idealism and energy of the engineers who regarded international standards as a higher cause.

“It is a story about engineers thinking they could contribute something good for the world, and then putting the necessary organizations into place,” Yates notes. “Standardization didn’t create world peace, but it has been good for the world.”

Dwaipayan Banerjee receives 2019 Levitan Prize in the Humanities

Assistant Professor Dwaipayan Banerjee of the Program in Science, Technology, and Society (STS) has been awarded the 2019 James A. (1945) and Ruth Levitan Prize in the Humanities. The prestigious award comes with a $29,500 grant that will support Banerjee’s research on the history of computing in India.

Melissa Nobles, the Kenan Sahin Dean of MIT’s School of Humanities, Arts, and Social Sciences (SHASS), announced the award, noting that a committee of senior faculty had reviewed submissions for the Levitan Prize and selected Banerjee’s proposal as the most outstanding.

“Dwai’s work is extremely relevant today, and I look forward to seeing how his new project expands our understanding of technology and technological culture as a part of the human world,” Nobles says.

Postcolonial India and computing

Banerjee’s scholarship centers on the social contexts of science, technology, and medicine in the global south. He has two book projects now nearing completion: “Enduring Cancer: Health and Everyday Life in Contemporary India” (forthcoming in 2020, Duke University Press) and “Hematologies: The Political Life of Blood in India” (forthcoming in 2019, Cornell University Press; co-authored with J. Copeman). Both books assess how India’s post-colonial history has shaped, and been shaped by, practices of biomedicine and health care.

Banerjee says he was delighted to receive the Levitan Award, which is presented annually by SHASS to support innovative and creative scholarship in one of the Institute’s humanities, arts, or social science fields. “Its funds will go a long way in helping explore archives about computational research and technology spread across India, some of which have yet to receive sustained scholarly attention,” he says.

Global computing histories

Banerjee’s Levitan project will investigate the post-colonial history of computing in India from the 1950s to today. “Contemporary scholarly and popular narratives about computing in India suggest that, even as India supplies cheap IT labor to the rest of the world, the country lags behind in basic computing research and development,” he says. “My new project challenges these representations.”

Banerjee adds, “In presenting this account, I urge social science research, which has predominantly focused on the history of computing in Europe and the United States, to take account of more global histories of computing.”

The project, titled “A Counter History of Computing in India,” will trace major shifts in the relation between the Indian state and computing research and practice. Banerjee explains that “In the first decades after India’s independence, the postcolonial state sought to develop indigenous computing expertise and infrastructure by creating public institutions of research and education, simultaneously limiting private enterprise and the entry of global capital.”

Noting that today the vision for development relies heavily on private entrepreneurship, Banerjee asks: “Why and how did the early post-colonial vision of publicly-driven computing research and development decline?”

Policy, computing, and outsourcing

More broadly, Banerjee plans to investigate how changing policies have impacted the development of computing and shaped the global distribution of expertise and labor. “After economic liberalization in the 1980s, a transformed Indian state gave up its protectionist outlook and began to court global corporations, giving rise to the new paradigm of outsourcing.”

Banerjee says he will endeavor to answer the question, “What is lost when a handful of U.S.-based corporations seek to determine hierarchies of technology work and control how its social benefits are globally distributed?” The Levitan Prize will support Banerjee’s field research in India and help him develop a multi-city archive of primary sources relating to the history of computational science and technology in the region.

First awarded in 1990, the Levitan Prize in the Humanities was established through a gift from the late James A. Levitan, a 1945 MIT graduate in chemistry who was also a member of the MIT Corporation.
 

Story prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Writer: Kathryn O’Neill

Michael Bloomberg’s Commencement address

Below is the text of the Commencement address delivered by entrepreneur, philanthropist, and three-term New York City mayor Michael Bloomberg for the Institute’s 2019 Commencement held June 7, 2019.

As excited as all of you are today, there’s a group here that is beaming with pride and that deserves a big round of applause – your parents and your families.

You’ve been very lucky to study at a place that attracts some of the brightest minds in the world. And during your time here, MIT has extended its tradition of groundbreaking research and innovation. Most of you were here when LIGO proved that Einstein was right about gravitational waves, something that I – as a Johns Hopkins engineering graduate – claimed all along.

And just this spring, MIT scientists and astronomers helped to capture the first-ever image of a black hole. Those really are incredible accomplishments for MIT.

All of you are part of an amazing institution that has proven – time and time again – that human knowledge and achievement is limitless. In fact, this is the place that proved moonshots are worth taking.

Fifty years ago next month, the Apollo 11 lunar module touched down on the moon. It’s fair to say the crew never would have gotten there without MIT. I don’t just mean that because Buzz Aldrin was class of ’63 here, and took Richard Battin’s famous astrodynamics course. As Chairman Millard mentioned, Apollo 11 literally got there thanks to its navigation and control systems, which were designed right here at what is now the Draper Laboratory.

Successfully putting a man on the moon required solving so many complex problems. How to physically guide a spacecraft on a half-million-mile journey was arguably the biggest one, and your fellow alums and professors solved it by building a one-cubic-foot computer at a time when computers were giant machines that filled whole rooms.

The only reason those MIT engineers even tried to build that computer in the first place was that they had been asked to help do something that people thought was either impossible or unnecessary.

Going to the moon was not a popular idea back in the 1960s. And Congress didn’t want to pay for it. Imagine that, a Congress that didn’t want to invest in science. Go figure – that would never happen today.

President Kennedy needed to persuade the taxpayers that a manned mission to the moon was possible and worth doing. So in 1962, he delivered a speech that inspired the country. He said, ‘We choose to go to the moon this decade, and do the other things, not because they are easy, but because they are hard.’

In that one sentence, Kennedy summed up mankind’s inherent need to reach for the stars. He continued by saying, ‘That challenge is one that we are willing to accept, one we are unwilling to postpone, and which we intend to win.’

In other words, for the good of the United States, and humanity, it had to be done. And he was right. Neil Armstrong took a great leap for mankind, the U.S. won a major Cold War victory, and a decade of scientific innovation led to an unprecedented era of technological advancement.

The inventions that emerged from that moonshot changed the world: satellite television, computer microchips, CAT scan machines, and many other things we now take for granted – even video game joysticks.

The world we live in today is fundamentally different, not just because we landed on the moon, but because we tried to get there in the first place. In hindsight, President Kennedy’s call for the original moonshot at exactly the right moment in history was brilliant. And the brightest minds of their generation – many of them MIT graduates – delivered.

Today, I believe that we are living in a similar moment. And once again, we’ll be counting on MIT graduates – all of you – to lead us.

But this time, our most important and pressing mission – your generation’s mission – is not only to explore deep space and reach faraway places. It is to save our own planet, the one that we’re living on, from climate change. And unlike 1962, the primary challenge before you is not scientific or technological. It is political.

The fact is we’ve already pioneered the technology to tackle climate change. We know how to power buildings using sun and wind. We know how to power vehicles using batteries charged with renewable energy. We know how to power factories and industry using hydrogen and fuel cells. And we know that these innovations don’t require us to sacrifice financially or economically. Just the opposite, these investments, on balance, create jobs and save money.

Yes, all of those power sources need to be brought to scale – and that will require further scientific innovation which we need you to help lead. But the question isn’t how to tackle climate change. We’ve known how to do that for many years. The question is: why the hell are we moving so slowly?

The race we are in is against time, and we are losing. And with each passing year, it becomes clearer just how far behind we’ve fallen, how fast the situation is deteriorating, and how tragic the results can be.

In the past decade alone, we’ve seen historic hurricanes devastate islands across the Caribbean. We’ve seen ‘thousand-year floods’ hit the Midwestern and Southern United States multiple times in a decade. We’ve seen record-breaking wildfires ravage California, and record-breaking typhoons kill thousands in the Philippines.

This is a true crisis. If we fail to rise to the occasion, your generation, your children, and grandchildren will pay a terrible price. So scientists know there can be no delay in taking action – and many governments and political leaders around the world are starting to understand that.

Yet here in the United States, our federal government is seeking to become the only country in the world to withdraw from the Paris Climate Agreement. The only one. Not even North Korea is doing that.

Those in Washington who deny the science of climate change are no more based in reality than those who believe the moon landing was faked. And while the moon landing conspiracy theorists are relegated to the paranoid corners of talk radio, climate skeptics occupy the highest positions of power in the United States government.

Now, in the administration’s defense: climate change, they say, is only a theory. Yeah, like gravity is only a theory.

People can ignore gravity at their own risk, at least until they hit the ground. But when they ignore the climate crisis they are not only putting themselves at risk, they are putting all humanity at risk.

Instead of challenging Americans to believe in our ability to master the universe, as President Kennedy did, the current administration is pandering to the skeptics who, in the 1960s, looked at the space program and only saw short-term costs, not long-term benefits.

President Kennedy’s era earned the nickname ‘The Greatest Generation’ – not only because they persevered through the Great Depression and won the Second World War. They earned it because of their determination to rise, to pioneer, to innovate, and to fulfill the promise of American freedom.

They dreamed in moonshots. They reached for the stars. And they began to redeem – through the civil rights movement – the failures of the past. They set the standard for leadership and service to our nation’s ideals.

Now, your generation has the opportunity to join them in the history books. The challenge that lies before you – stopping climate change – is unlike any other ever faced by humankind. The stakes could not be higher.

If left unchecked, the climate change crisis threatens to destroy oceanic life that feeds so many people on this planet. It threatens to breed war by spreading drought and hunger. It threatens to sink coastal communities, devastate farms and businesses, and spread disease.

Now, some people say we should leave it in God’s hands. But most religious leaders, I’m happy to say, disagree. After all, where in the Bible, or the Torah, or the Koran, or any other book about faith or philosophy does it teach that we should do things that make floods and fires and plagues more severe? I must have missed that day in religion class.

Today, most Americans in both parties accept that human activity is driving the climate crisis and they want government to take action. Over the past few months, there has been a healthy debate – mostly within the Democratic Party – over what those actions should be. And that’s great.

In the years ahead, we need to build consensus around comprehensive and ambitious federal policies that the next Congress should pass. But everyone who is concerned about the climate crisis should also be able to agree on two realities.

The first is that, given opposition in the Senate and the White House, there is virtually no chance of passing such policies before 2021. The second is that we can’t wait to act. We can’t put this mission off any longer. Mother Nature does not wait on the election calendar – and neither can we.

Our foundation, Bloomberg Philanthropies, has been working for years to rally cities, states, and businesses to lead on this issue – and we’ve had real success. Just not enough.

So today, I’m happy to announce that, with our foundation, I am committing $500 million to the launch of a new national climate initiative, and I hope that you will all become part of it. We are calling it Beyond Carbon. Our last initiative was Beyond Coal; this one is Beyond Carbon because we have greater goals.

And our goal is to move the U.S. toward a 100 percent clean energy economy as expeditiously as possible, and begin that process right now. We intend to succeed not by sacrificing things we need, but by investing in things we want: more good jobs, cleaner air and water, cheaper power, more transportation options, and less congested roads.

To do it, we will defeat in the courts the EPA’s attempts to roll back regulations that reduce carbon pollution and protect our air and water. But most of our battles will take place outside of Washington. We are going to take the fight to the cities and states – and directly to the people. And the fight will take place on four main fronts.

First, we will push states and utilities to phase out every last U.S. coal-fired power plant by 2030 – just 11 years from now. Politicians keep making promises about climate change mitigation by the year 2050 – hypocritically, after they’re long gone and no one can hold them accountable. Meanwhile, the science keeps moving the possible inflection point of irreversible global warming closer and closer. We have to set goals for the near-term – and we have to hold our elected officials accountable for meeting them.

We know that closing every last U.S. coal-fired power plant over the next 11 years is achievable because we’re already more than half-way there. Through a partnership between Bloomberg Philanthropies and the Sierra Club, we’ve shut down 289 coal-fired power plants since 2011, and that includes 51 that we have retired since the 2016 presidential election despite all the bluster from the White House. As a matter of fact, since Trump got elected the rate of closure has gone up.

Second, we will work to stop the construction of new gas plants. By the time they are built, they will already be out of date – because renewable energy will be cheaper. Cities like Los Angeles are already stopping new gas plant construction in favor of renewable energy, and states like New Mexico, Washington, Hawaii, and California are working to convert their electrical systems to 100 percent clean energy.

We don’t want to replace one fossil fuel with another. We want to build a clean energy economy – and we will push more states to do that.

Third, we will support our most powerful allies – governors, mayors, and legislators – in their pursuit of ambitious policies and laws, and we will empower the grassroots army of activists and environmental groups that are currently driving progress state-by-state.

Together, we will push for new incentives and mandates that increase renewable power, pollution-free buildings, waste-free industry, access to mass transit, and sales of electric vehicles, which are now turning the combustion engine – and all of its pollution – into a relic of the industrial revolution.

Fourth, and finally, we will get deeply involved in elections across the country, because climate change is now first and foremost a political problem, not a scientific quandary, or even a technological puzzle.

Now, I know that for scientists and engineers, politics can be a dirty word. I’m an engineer – I get it. But I’m also a realist, so I have three words for you: get over it.

At least for the foreseeable future, winning the battle against climate change will depend less on scientific advancement and more on political activism.

That’s why Beyond Carbon includes political spending that will mobilize voters to go to the polls and support candidates who actually are taking action on something that could end life on Earth as we know it. And at the same time, we will defeat at the voting booth those who try to block action and those who pander with rhetoric that just kicks the can down the road.

Our message to elected officials will be simple: face reality on climate change, or face the music on Election Day. Our lives and our children’s lives depend on it. And so should their political careers.

Now, most of America will experience a net increase in jobs as we move to renewable energy sources and reductions in pollution. In some places jobs are being lost – we know that, and we can’t leave those communities behind.

For example, generations of miners powered America to greatness – and many paid for it with their lives and their health. But today they need our help to change with technology and the economy.

And while it is up to the federal government to make those investments, Beyond Carbon will continue our foundation’s work to show that progress really is possible. So we will support local organizations in Appalachia and the western mountain states and work to spur economic growth and re-train workers for jobs in growing industries.

Taken together, these four elements of Beyond Carbon will be the largest coordinated assault on the climate crisis that our country has ever undertaken.

We will work to empower and expand the volunteers and activists fighting these battles community by community, state by state. It’s a process that our foundation and I have proved can succeed. After all, this isn’t the first time we’ve done an end run around Washington.

A decade ago no one would have believed that we could take on the coal industry and close half of all U.S. plants. But we have.

A decade ago no one would have believed we could take on the NRA and pass stronger gun safety laws in states like Florida, Colorado, and Nevada. But we have.

Two decades ago, no one would have believed that we could take on the tobacco industry and spread New York City’s smoking ban to most of America and to countries around the world. But we have.

And now, we will take on the fossil fuel industry to accelerate the transition to a clean energy economy. I believe we will succeed again – but only if one thing happens and that is: you have to help lead the way by raising your voices, by joining an advocacy group, by knocking on doors, by calling your elected officials, by voting, and getting your friends and family to join you.

Back in the 1960’s, when scientists here at MIT were racing to the moon, there was a popular saying that went: if you’re not part of the solution you’re part of the problem. Today, Washington is a very, very big part of the problem.

We have to be part of the solution through political activism that puts the screws to our elected officials. Let me reiterate, this has gone from a scientific challenge to a political one.

It is time for all of us to accept that climate change is the challenge of our time. As President Kennedy said 57 years ago of the moon mission: we are willing to accept this challenge, we are unwilling to postpone it, and we intend to win it. We must again do what is hard.

Graduates, we need your minds and your creativity to achieve a clean energy future. But that is not all. We need your voices. We need your votes. And we need you to help lead us where Washington will not. It may be a moonshot – but it’s the only shot we’ve got.

As you leave this campus I hope you will carry with you MIT’s tradition of taking – and making – moonshots. Be ambitious in every facet of your life. And don’t ever let something stop you because people say it’s impossible. Let those words inspire you. Because just trying to make the impossible possible can lead to achievements you never dreamed of. And sometimes, you actually do land on the moon.

Tomorrow start working on the mission that, if you succeed, will lead the whole world to call you the Greatest Generation, too.

Thank you, and congratulations.

Featured video: The MIT Brass Rat

Since its debut in 1929, the MIT class ring — affectionately nicknamed the “Brass Rat” for its featured mascot, the MIT beaver — has become a distinctive symbol of the Institute, worn proudly by many of its 138,000 alumni.

Each year, the Brass Rat is redesigned by a committee of 12 students from the second-year class, who then collaborate with the manufacturer, Herff Jones. The committee is responsible for designing, premiering, selling, and delivering the ring to its class. (The MIT graduate ring, known as the “Grad Rat,” is redesigned in a similar process every five years.)

“We try to represent every single community and every single background on the ring,” says Nicholas Salinas, vice chair of the Class of 2021 Ring Committee. “So that way, students are excited and really feel like they have a home here at MIT.”

Video by Melanie Gonick/MIT | 5 min. 17 sec.

CSAIL hosts first-ever TEDxMIT

On Tuesday, May 28, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) hosted a special TEDx event featuring an all-female line-up of MIT scientists and researchers who discussed cutting-edge ideas in science and technology.

TEDxMIT speakers included roboticists, engineers, astronomers, and policy experts, including former White House chief technology officer Megan Smith ’86 SM ’88 and MIT Institute Professor Emerita Barbara Liskov, winner of the A.M. Turing Award, often considered the “Nobel Prize for computing.”

From Professor Nergis Mavalvala’s work on gravitational waves to Associate Provost Krystyn Van Vliet’s efforts to improve cell therapy, the afternoon was filled with energizing and historic success stories of women in STEM.

In an early talk, MIT Associate Professor Julie Shah touched on the much-discussed narrative of artificial intelligence and job displacement, and how that relates to her own work creating systems that she described as “being intentional about augmenting human capabilities.” She spoke about her efforts developing robots to help reduce the cognitive burden of overwhelmed workers, like the nurses on labor wards who have to make hundreds of split-second decisions for scheduling deliveries and C-sections.

“We can create a future where we don’t have robots who replace humans, but that help us accomplish what neither group can do alone,” said Shah.

CSAIL Director Daniela Rus, a professor of electrical engineering and computer science, spoke of how computer scientists can inspire the next generation of programmers by emphasizing the many possibilities that coding opens up.

“I like to say that those of us who know how to … breathe life into things through programming have superpowers,” said Rus.

Throughout the day scientists showed off technologies that could fundamentally transform many industries, from Professor Dava Newman’s prototype Mars spacesuit to Associate Professor Vivienne Sze’s low-power processors for machine learning.

Judy Brewer, director of the World Wide Web Consortium’s Web Accessibility Initiative, discussed the ways in which the web has made the world a more connected place for those with disabilities — and yet, how important it is for the people who design digital technologies to be better about making them accessible.

“When the web became available, I could go and travel anywhere,” Brewer said. “There’s a general history of excluding people with disabilities, and then we go and design tech that perpetuates that exclusion. In my vision of the future everything is accessible, including the digital world.”

Liskov captivated the audience with her tales of the early days of computer programming. She was asked to learn Fortran on her first day of work in 1961 — having never written a line of code before.

“I didn’t have any training,” she said. “But then again, nobody did.”

In 1971 Liskov joined MIT, where she created the programming language CLU, which established the notion of “abstract data types” and laid the groundwork for languages like Java and C#. Many coders now take so-called “object-oriented programming” (OOP) for granted: She wryly reflected on how, after she won the Turing Award, one internet commenter looked at her contributions to data abstraction and pointed out that “everybody knows that, anyway.”

“It was a statement to how much the world has changed,” she said with a smile. “When I was doing that work decades earlier, nobody knew anything about [OOP].”

Other researchers built off of Liskov’s remarks in discussing the birth of big data and machine learning. Professor Ronitt Rubinfeld spoke about how computer scientists’ work in sublinear time algorithms has allowed them to better make sense of large amounts of data, while Hamsa Balakrishnan spoke about the ways in which algorithms can help systems engineers make air travel more efficient.

The event’s overarching theme was to highlight examples of female role models in a field where they’ve often been overlooked. Paula Hammond, head of MIT’s Department of Chemical Engineering, touted the fact that more than half of undergrads in her department this year were women. Rus urged the women in the audience, many of whom were MIT students, to think about what role they might want to play in continuing to advance science in the coming years.

“To paraphrase our hometown hero, President John F. Kennedy, we need to prepare [women] to see both what technology can do for them — and what they can do for technology,” Rus said.

Rus led the planning of the TEDxMIT event alongside MIT research affiliate John Werner and student directors Stephanie Fu and Rucha Kelkar, both first-years.

Bringing human-like reasoning to driverless car navigation

With aims of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.

Human drivers are exceptionally good at navigating roads they haven’t driven on before, using observation and simple tools. We simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Driverless cars, however, struggle with this basic reasoning. In every new area, the cars must first map and analyze all the new roads, which is very time-consuming. The systems also rely on complex maps — usually generated by 3-D scans — which are computationally intensive to generate and process on the fly.

In a paper being presented at this week’s International Conference on Robotics and Automation, MIT researchers describe an autonomous control system that “learns” the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver.

Like human drivers, the system also detects any mismatches between its map and features of the road. This helps the system determine if its position, sensors, or mapping are incorrect, in order to correct the car’s course.

To train the system initially, a human operator controlled an automated Toyota Prius — equipped with several cameras and a basic GPS navigation system — to collect data from local suburban streets including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.

“With our system, you don’t need to train on every road beforehand,” says first author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.”

Joining Rus and Amini on the paper are Guy Rosman, a researcher at the Toyota Research Institute, and Sertac Karaman, an associate professor of aeronautics and astronautics at MIT.

Point-to-point navigation

Traditional navigation systems process data from sensors through multiple modules customized for tasks such as localization, mapping, object detection, motion planning, and steering control. For years, Rus’s group has been developing “end-to-end” navigation systems, which process inputted sensory data and output steering commands, without a need for any specialized modules.

Until now, however, these models were strictly designed to safely follow the road, without any real destination in mind. In the new paper, the researchers advanced their end-to-end system to drive from goal to destination, in a previously unseen environment. To do so, the researchers trained their system to predict a full probability distribution over all possible steering commands at any given instant while driving.

The system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches and learns how to steer from a human driver. The CNN correlates steering wheel rotations to road curvatures it observes through cameras and an inputted map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries.

“Initially, at a T-shaped intersection, there are many different directions the car could turn,” Rus says. “The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right.”
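The multimodal distribution Rus describes can be sketched with a simple discretized histogram over steering commands. This is an illustrative stand-in for the CNN's learned output, with made-up demonstration data; the real model predicts a continuous distribution from camera images.

```python
from collections import Counter

# Toy stand-in for the CNN's output: a discrete probability distribution
# over steering commands, estimated from (invented) human demonstrations
# observed at one road type -- here, a T-shaped intersection.
demonstrations = ["left", "right", "left", "right", "right", "left"]

counts = Counter(demonstrations)
total = sum(counts.values())
p = {cmd: counts.get(cmd, 0) / total for cmd in ("left", "straight", "right")}

print(p)  # "straight" ends up with probability 0.0: nobody drove straight here
```

As more demonstrations accumulate, mass concentrates on the turns people actually take, and the impossible direction (straight through a T-intersection) is ruled out exactly as Rus describes.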

What does the map say?

In testing, the researchers fed the system a map with a randomly chosen route. While driving, the system extracts visual features from the camera feed, which enables it to predict road structures. For instance, it identifies a distant stop sign or broken lane lines on the side of the road as signs of an upcoming intersection. At each moment, it uses its predicted probability distribution over steering commands to choose the one most likely to keep it on its route.
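Choosing a command then amounts to combining the learned steering distribution with the direction the map's route requests. A minimal sketch, with invented probabilities and an invented route encoding:

```python
# Toy illustration: weight the model's steering distribution by the route
# direction the map requests, then pick the most likely compatible command.
# The probabilities and route encoding are made up for illustration.

def choose_command(steering_probs, route_direction):
    # Commands compatible with the routed direction keep their
    # probability; incompatible ones are suppressed.
    compatible = {
        "left": {"left"},
        "right": {"right"},
        "straight": {"straight"},
    }[route_direction]
    weighted = {c: (p if c in compatible else 0.0)
                for c, p in steering_probs.items()}
    return max(weighted, key=weighted.get)

probs = {"left": 0.5, "straight": 0.0, "right": 0.5}  # T-intersection
print(choose_command(probs, "right"))  # "right"
```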

Importantly, the researchers say, the system uses maps that are easy to store and process. Autonomous control systems typically use LIDAR scans to create massive, complex maps that require roughly 4,000 gigabytes (4 terabytes) of data to store the city of San Francisco alone. For every new destination, the car must create new maps, which means processing enormous amounts of data. The maps used by the researchers' system, however, capture the entire world using just 40 gigabytes of data.

During autonomous driving, the system also continuously matches its visual data to the map data and notes any mismatches. Doing so helps the autonomous vehicle better determine where it is located on the road. And it ensures the car stays on the safest path if it’s being fed contradictory input information: If, say, the car is cruising on a straight road with no turns, and the GPS indicates the car must turn right, the car will know to keep driving straight or to stop.
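That fallback behavior can be sketched as a simple consistency check between what the camera-based model believes is possible and what the routed GPS command requests. The threshold and names below are hypothetical, not the paper's mechanism:

```python
# Toy consistency check: if the map/GPS requests a turn that the vision
# model assigns (near-)zero probability -- e.g. "turn right" on a straight
# road with no intersection -- ignore the request and keep driving straight,
# or stop. The probability threshold is invented for illustration.

def safe_command(vision_probs, routed_command, min_prob=0.05):
    if vision_probs.get(routed_command, 0.0) >= min_prob:
        return routed_command          # camera and map agree
    # Contradiction: fall back to the safest visually supported option.
    if vision_probs.get("straight", 0.0) >= min_prob:
        return "straight"
    return "stop"

# Straight road: the camera sees no right turn, but GPS says "right".
probs = {"left": 0.02, "straight": 0.95, "right": 0.03}
print(safe_command(probs, "right"))  # "straight"
```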

“In the real world, sensors do fail,” Amini says. “We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road.”
