Local rocks can yield more crops

Nitrogen, phosphorus, and potassium are the three elements that support the productivity of all plants used for agriculture, and they are the constituents of the commercial fertilizers that farmers use throughout the world.

Potassium (also referred to as potash) is largely produced in the Northern Hemisphere, where it is abundant. In fact, the potash market is dominated by just a few producers, largely in Canada, Russia, and Belarus. As a result, potash (and fertilizers in general) can be accessed relatively affordably by farmers in northern regions, where it also happens to be a closer match for the soil nutrient needs of their farms and crops.

But that’s not necessarily the case for farmers elsewhere. For tropical growing regions in Brazil and some countries in Africa, differing soil and rock compositions make for a poor match for the fertilizers that are currently on the market. When these fertilizers — which are resource intensive to produce — need to be shipped long distances to reach consumers in Southern Hemisphere countries, costs can skyrocket. When the fertilizer isn’t the right match for the soil needs, farmers may need to add more in order to achieve as much gain as their counterparts in the north, if they are even able to afford more in the first place.    

So while these fertilizers promise higher yields, small- and medium-scale farmers can still end up with lower profits, higher soil salinity, a rapid reduction in overall soil fertility, and increased leaching into groundwater, rivers, and streams. This makes it challenging for these farmers to thrive, especially in Africa. Expensive or unsuitable fertilizer lowers food production capacity, affecting farmers’ economic and nutritional self-sufficiency. Now, at a time when the United Nations projects that global population will rise to 8.5 billion by 2030 — an overall increase of over 1.2 billion people — the need for local, sustainable fertilizer solutions to increase yields is even more urgent.

Meeting food security needs with more interdisciplinary research

This mismatch — and the regional food security implications that it entails — was the inspiration for Antoine Allanore, associate professor of metallurgy in the Department of Materials Science and Engineering at MIT, to focus his efforts on finding alternative fertilizer materials. Over the last six years, he has built a research team, including Davide Ciceri, a research scientist in his lab through 2018. 

Having immersed themselves in fertilizer research, Allanore and Ciceri were surprised by how little attention others in the materials science field have paid to this topic.

“Industry hasn’t put as much thought as is needed into doing research on the raw materials [used in fertilizers],” says Ciceri. “Their product has worked so far, and no one has complained, so there is little space for innovation.” 

Allanore thinks of it this way: “Unfortunately, farming is not a very profitable field.  They make so little compared to those who work in trade or food processing and marketing, which, as a result, have received a lot of investment and attention.  Because of this lack of research investment, we know very little about what happens to some of the elements that we’re putting in the soil.”

This lack of investment is especially problematic for farmers in the Global South who are without affordable access to the fertilizers that are currently available on the market. Motivated by their desire to find local, sustainable fertilizer solutions for African farmers and fueled by J-WAFS seed funding, Allanore, Ciceri, and other members of their research team have created a road map that materials scientists and others can use to develop a new generation of potash-independent fertilizers suitable for African soils. Published last August in the journal Science of the Total Environment, the paper, “Local fertilizers to achieve food self-sufficiency in Africa,” was one of the first comprehensive studies of the use of fertilizer across Africa from a materials science perspective. It indicated urgently needed advancements in fertilizer research, technology, and policy, and recommended approaches that can help to achieve the yield gains necessary to meet current and future demand sustainably.

“From the standpoint of materials processing, there’s really so much to do on the mineral resources required for fertilizers,” says Ciceri. “What we wanted to do was to promote a discussion in the community about this. Why is there no research on new fertilizer developments? What strategies are implementable? Is there enough field crop testing that can be done to support what chemists can do in the lab?”

While their paper was geared toward materials scientists, Allanore recognizes that what is needed is an interdisciplinary approach. “We are about to know the full genome of humans, but we don’t yet know how a crop uptakes nutrients,” he says. Collaboration between agronomists, soil scientists, materials scientists, economists, and others can improve our understanding of all of the interactions, materials, and products that go into obtaining the optimal yield of agricultural crops with minimal negative impact on the surrounding ecosystem. He is quick to state, however, that the goal is not to replicate what has been done with modern agriculture, but to go beyond it to find sustainable solutions so that the African continent can provide its own food, profitability, and a decent life for the people who are growing crops.

Finding new sources for potassium and testing results

Professor Allanore’s lab has already discovered a potash alternative that is derived from potassium feldspar, a rock that is commonly found all over the world. To Ciceri, finding a solution in feldspar was startlingly obvious.

“Looking back at years of research, I was surprised to find that no one had looked to K-feldspar as a source,” he says. “It’s so abundant. How could it be that in 2015 our research team was the first to get potassium out of it?” 

And yet, that’s just what they’ve been able to do. With the support of a partnership with two Brazilian entities, Terrativa and EMBRAPA (the Brazilian Agricultural Research Corporation), the research team was able to develop a hydrothermal process to turn K-feldspar rocks into a new fertilizing material. But while this early collaboration helped the researchers develop an understanding of feldspar and how it could be used as a fertilizer for specific crops in Brazil, the team did not have direct control or access to the agronomic trials. 

That’s where J-WAFS funding proved supportive. The 2017 seed grant provided the research team the opportunity to conduct an independent assessment of the fertilizing potential of the new materials, and also contextualize their discovery within a broader conversation about global food security, as they did in their paper.

For crop testing, they began with tomatoes, which are among the most common and economically important horticultural crops and rank among the most consumed vegetables in the world. A collaboration with Allen Barker, a professor of plant and soil sciences at the Stockbridge School of Agriculture at the University of Massachusetts Amherst, made this work possible. Barker provided greenhouse space for testing, as well as essential expertise in agronomy that helped the MIT research team perform the rigorous analysis that has now determined the new material’s effectiveness.

“This was an extremely important step for our research,” Allanore says. “The J-WAFS funding gave us the freedom to enter into this collaboration with the University of Massachusetts at Amherst. And, unlike what happens with corporate sponsorship research agreements, in this case we all had open access to the data.” 

Allanore is particularly grateful for the contributions of Barker and his team, since the tests would not have been possible without their participation. The results of this work were published on Jan. 22 in the article “Fertilizing properties of potassium feldspar altered hydrothermally” in the journal Communications in Soil Science and Plant Analysis. The paper was co-authored by Ciceri, Barker, Allanore, and Thomas Close, another member of the MIT team who is currently completing his doctorate.

Creating new spaces for art

For the first half of 2018, a large contemporary artwork greeted people entering the famed Isabella Stewart Gardner Museum in Boston. An image two stories high on the museum’s façade showed people in a refugee boat looking upward — where a drone was taking photos.

“Global Displacement,” stated text printed over the image. “1 in 100 people worldwide are displaced from their homes.”

The installation was the work of Judith Barry, a prominent contemporary American artist and the new director of the MIT Program in Art, Culture, and Technology (ACT). Like much of Barry’s oeuvre, this work was attention-grabbing, but with subtle twists. For example, the people in the original photo had been replaced by portraits of faces gazing upward, taken by associates in Barry’s studio.

Those looking closely at the work might find themselves asking new questions. For instance: What if you recognized people in refugee boats, or saw them as people much like yourself?

“Art constructs a space that people can inhabit,” Barry says. “And when you enter into the space that art makes, if you engage with the work in that space, other kinds of experiences are possible.”

Over three decades, Barry has gained acclaim while making new spaces for people to inhabit in galleries around the world. She has created video installations for major museums, performance art pieces, collages, and much more, all while exploring socially relevant topics.

“I don’t have a signature style,” Barry says. “The form and the content are derived from my research process. And that’s been the case since the very beginning.”

That applies to works such as the Gardner mural among many others she has developed. Barry’s acclaimed 2011 work, “Cairo Stories,” was a video installation based on 215 interviews with Egyptian women describing the conditions of daily life they encounter, and took nearly a decade to complete; it was reinstalled at the Mary Boone gallery in New York City this fall.

“I met many people across Egyptian society that I never would have gotten to know had I done the project in a shorter period of time,” Barry says. “It took many years, and the experience was profoundly moving.”

Barry’s emphasis on research, innovation, and social relevance all make her a natural fit at MIT; Barry joined the Institute in January 2018 as a professor with tenure and head of ACT.

A training in space

Barry was born in Columbus, Ohio, although, as she recounts of her life growing up, “We were moving all the time.” Like some other kids who move a lot, Barry developed some transportable skills — “I could draw, and I was athletic” — which, in her case, included dance. As an undergraduate at the University of California at Berkeley, Barry studied architecture, and subsequently found herself working for a large firm in the field.

“I got to design bathrooms and hallways and HVAC systems — all the things young architects do,” Barry says. “That was not interesting. But when I began taking art classes, another world opened to me.”

Indeed, Barry soon realized that art was a place where she could combine many of her interests. Inspired in part by the renowned artist (and MIT professor emerita) Joan Jonas, Barry developed performance art pieces in San Francisco in the 1970s. Before long she had expanded her repertoire to include video art installations. Indeed, as a leading video-art practitioner, Barry had exhibitions in venues such as New York’s Whitney Museum fairly soon after leaving art school (at the San Francisco Art Institute).

“It was so different then,” Barry says of the 1970s art scene — meaning it was more open to newcomers of many backgrounds. By the late 1980s, she thinks, careers in the field had already begun to depend more on professionalized study, with graduate art degrees becoming the norm for many aspiring artists.

Still, as Barry notes, education is highly valuable. She herself has often used her architectural training in her career as an artist.

“When you’re in the space of an art exhibition, art happens at that moment when the viewer and the artwork come into contact,” Barry says. “It doesn’t necessarily carry over into daily life or another experience. But there are those moments when you encounter an artwork and something happens — it’s the sense of discovery and your engagement that produces an art experience. I try to set places where this can happen when I design my work. I use my architecture training as a methodology to interrogate space, so that in certain spots, something happens spatially that might keep you engaged.”

All told, Barry has been a prolific artist and gained international recognition for her challenging works. Among other honors, she received the Frederick Kiesler Prize for Architecture and the Arts in 2000, the “Best Pavilion” award at the Cairo Biennale in 2001, and a Guggenheim Fellowship in 2011. Barry’s work has been displayed multiple times at the Venice Biennale and the Whitney, and at biennales in Berlin, Nagoya, São Paulo, Sydney, and Sharjah (in the United Arab Emirates, where “Cairo Stories” debuted), among others.

At the Institute

Along the way, Barry spent one academic year teaching at MIT, in 2002-03, and says she is eager to explore new possibilities for teaching and creating art at ACT.

“It’s a great opportunity to rethink the question of what art, culture, and technology might become in the 21st century, especially at MIT, where you’re in a maw of technology — unlike at traditional art schools,” Barry says. “I hope to use my time as director to put together programs and projects that reflect this revised sense of art, culture, and technology.”

Among other things, Barry notes, artists are grappling with issues of diversity in evolving ways: “In terms of culture, you have to ask, what is culture today? It is not one unified culture, but composed of many diverse cultures which are reflected in the student population at MIT.”

Barry finds herself in an interesting position with regard to technology, as well. She has often used technologies in her work, even while depicting tensions that arise in part from technological forces.

“Now we’re living in a technology-anxious time, where you read article after article about AI and robots taking over the world,” Barry says. “One of the major issues about technology facing society is your privacy, for instance. Or the anxiety that because machines do not rely on visual language, the need for mimesis [the depiction of things] will disappear. I hope questions about how technology affects daily life will become part of a much broader public debate.”  

Indeed, Barry adds, “Artists are often charged with the task of representation — in other words, finding ways to make these issues visible. Art has an important role to play in this discussion.”

President Reif calls for federal funding, focused education to address “opportunity and threat” of AI

In an op-ed piece published today in Financial Times, MIT President L. Rafael Reif argues for sustained federal investment in artificial intelligence, and encourages the nation’s colleges and universities to prepare students for new societal challenges posed by AI.  

AI promises to help “humanity learn more, waste less, work smarter, live longer and better understand and predict almost anything that can be measured,” Reif writes. But with great power comes great responsibility: New technologies could pose serious risks, he says, “including threats to privacy, public safety, jobs and the security of nations.”

Countries around the world have started heavily investing in national AI initiatives, with China alone spending a reported $1 billion annually. To stay competitive, Reif says, the U.S. must commit to at least a decade of sustained financial support for rising researchers and new academic centers across the nation.

But with the looming “opportunity and threat” of AI advancements, he says, higher education must be prepared to guide students through new ethical and cultural issues, especially as computer science seeps into other fields of study. At MIT, for instance, 40 percent of students major in computer science alone or paired with other subjects, such as molecular biology, economics, and urban planning. Higher education must now teach students to become “AI bilingual,” Reif says.

The op-ed comes on the heels of several of MIT’s major investments in AI, most recently the MIT Stephen A. Schwarzman College of Computing. Announced in October, the new college aims to educate the leaders of the AI future, with a particular focus on research and education on the ethical implications and societal impact of computing technologies. Other initiatives include the MIT–IBM Watson AI Lab launched in late 2017 and the MIT Intelligence Quest launched last February.

Reif concludes his piece with a call for a “broad strategic effort across society” in dealing with AI. “Technology belongs to all of us,” he says. “We must all be alert to the risks posed by AI, but this is no time to be afraid. Those nations and institutions which act now to help shape the future of AI will help shape the future for us all.”

Letter regarding the MIT Schwarzman College of Computing working groups and Idea Bank

The following letter was sent to the MIT community on Feb. 7 by Provost Martin A. Schmidt.

To the members of the MIT community:

In October 2018, MIT announced the establishment of the MIT Stephen A. Schwarzman College of Computing. The College aims to create a shared academic structure to facilitate the connection of computing scholarship and resources to all disciplines at MIT, and to provide opportunities for pathbreaking initiatives in computing-related education and research.

At that time, we anticipated the formation of several working groups to develop ideas and options for creation of the College that can help the administration plan for its launch.  I am writing to let you know that we have created five working groups for this purpose, as follows:

  1. Organizational Structure — Co-chairs:  Asu Ozdaglar, Department Head, Electrical Engineering and Computer Science, and School of Engineering Distinguished Professor of Engineering; Nelson Repenning, Associate Dean of Leadership and Special Projects, and School of Management Distinguished Professor of System Dynamics and Organization Studies
  2. Faculty Appointments — Co-chairs:  Eran Ben-Joseph, Department Head, Urban Studies and Planning; William Freeman, Thomas and Gerd Perkins Professor of Electrical Engineering
  3. Curriculum and Degrees — Co-chairs: Srini Devadas, Edwin Sibley Webster Professor of Electrical Engineering and Computer Science; Troy Van Voorhis, Haslam and Dewey Professor of Chemistry
  4. Social Implications and Responsibilities of Computing — Co-chairs:  Melissa Nobles, Kenan Sahin Dean of Humanities, Arts, and Social Sciences; Julie Shah, Associate Professor, Aeronautics and Astronautics
  5. College Infrastructure — Co-chairs:  Benoit Forget, Associate Professor, Nuclear Science and Engineering; Nicholas Roy, Professor, Aeronautics and Astronautics, and Member, Computer Science and Artificial Intelligence Laboratory   

The full memberships of these groups, which include faculty, staff, and students from a wide range of MIT departments, can be found here.

These groups will convene throughout the spring 2019 semester with the aim of producing a report describing their thoughts on these important issues by May. A steering committee — composed of the 10 co-chairs, Dean of Engineering Anantha Chandrakasan, MIT Faculty Chair Susan Silbey, and me — will provide collaborative guidance to the working groups. In addition, we have established an Idea Bank in order to gain input from the MIT community. Community members can submit ideas related to the working group topics to the Idea Bank until the end of April.

I wish to express my appreciation to all members of the working groups and the steering committee for their efforts on the important task of planning for the launch of the new MIT Schwarzman College of Computing in fall 2019. I am certain the guidance from their recommendations will help shape the College’s path to lasting success. 

Sincerely,

Martin A. Schmidt
Provost

Peering under the hood of fake-news detectors

New work from MIT researchers peers under the hood of an automated fake-news detection system, revealing how machine-learning models catch subtle but consistent differences in the language of factual and false stories. The research also underscores how fake-news detectors should undergo more rigorous testing to be effective for real-world applications.

Popularized as a concept in the United States during the 2016 presidential election, fake news is a form of propaganda created to mislead readers, in order to generate views on websites or steer public opinion.

Almost as quickly as the issue became mainstream, researchers began developing automated fake-news detectors — neural networks that “learn” from scores of data to recognize linguistic cues indicative of false articles. Given new articles to assess, these networks can, with fairly high accuracy, separate fact from fiction in controlled settings.

One issue, however, is the “black box” problem — meaning there’s no telling what linguistic patterns the networks analyze during training. They’re also trained and tested on the same topics, which may limit their potential to generalize to new topics, a necessity for analyzing news across the internet.

In a paper presented at the Conference and Workshop on Neural Information Processing Systems, the researchers tackle both of those issues. They developed a deep-learning model that learns to detect language patterns of fake and real news. Part of their work “cracks open” the black box to find the words and phrases the model captures to make its predictions.

Additionally, they tested their model on a novel topic it didn’t see in training. This approach classifies individual articles based solely on language patterns, which more closely represents a real-world application for news readers. Traditional fake news detectors classify articles based on text combined with source information, such as a Wikipedia page or website.

“In our case, we wanted to understand what was the decision-process of the classifier based only on language, as this can provide insights on what is the language of fake news,” says co-author Xavier Boix, a postdoc in the lab of Eugene McDermott Professor Tomaso Poggio at the Center for Brains, Minds, and Machines (CBMM) in the Department of Brain and Cognitive Sciences (BCS).

“A key issue with machine learning and artificial intelligence is that you get an answer and don’t know why you got that answer,” says graduate student and first author Nicole O’Brien ’17. “Showing these inner workings takes a first step toward understanding the reliability of deep-learning fake-news detectors.”

The model identifies sets of words that tend to appear more frequently in either real or fake news — some perhaps obvious, others much less so. The findings, the researchers say, point to subtle yet consistent differences between fake news — which favors exaggerations and superlatives — and real news, which leans more toward conservative word choices.

“Fake news is a threat for democracy,” Boix says. “In our lab, our objective isn’t just to push science forward, but also to use technologies to help society. … It would be powerful to have tools for users or companies that could provide an assessment of whether news is fake or not.”

The paper’s other co-authors are Sophia Latessa, an undergraduate student in CBMM; and Georgios Evangelopoulos, a researcher in CBMM, the McGovern Institute of Brain Research, and the Laboratory for Computational and Statistical Learning.

Limiting bias

The researchers’ model is a convolutional neural network that trains on a dataset of fake news and real news. For training and testing, the researchers used a popular fake-news research dataset hosted on Kaggle, which contains around 12,000 fake news sample articles from 244 different websites. They also compiled a dataset of real news samples, using more than 2,000 from the New York Times and more than 9,000 from The Guardian.

In training, the model captures the language of an article as “word embeddings,” where words are represented as vectors — basically, arrays of numbers — with words of similar semantic meanings clustered closer together. In doing so, it captures triplets of words as patterns that provide some context — such as, say, a negative comment about a political party. Given a new article, the model scans the text for similar patterns and runs them through a series of layers. A final output layer determines the probability of each classification: real or fake.
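As a rough illustration of the mechanics described above — not the authors’ actual model — the sketch below scores consecutive word triplets against a single “learned” filter and squashes the strongest response into a probability. The embeddings and filter weights are invented for the example.

```python
import math

# Hypothetical 2-d "embeddings" for a tiny vocabulary (values invented).
EMBED = {
    "the": [0.1, 0.0], "shocking": [0.9, 0.2], "truth": [0.7, 0.3],
    "report": [0.2, 0.8], "finds": [0.1, 0.9],
}

# One made-up convolutional filter spanning a triplet (3 words x 2 dims).
FILTER = [0.5, -0.2, 0.8, 0.1, 0.3, -0.4]

def triplet_scores(words):
    """Score every consecutive 3-word window with the filter (dot product)."""
    scores = []
    for i in range(len(words) - 2):
        # Concatenate the three word vectors into one 6-number window.
        window = sum((EMBED.get(w, [0.0, 0.0]) for w in words[i:i + 3]), [])
        scores.append(sum(a * b for a, b in zip(FILTER, window)))
    return scores

def fake_probability(words):
    """Max-pool the triplet scores, then squash to a probability (sigmoid)."""
    best = max(triplet_scores(words))
    return 1.0 / (1.0 + math.exp(-best))

p = fake_probability(["the", "shocking", "truth", "report", "finds"])
```

A real detector would learn many filters and stack several layers, but the core idea — sliding a triplet-sized window over embeddings and pooling the responses into a final real-vs-fake score — is the same.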

The researchers first trained and tested the model in the traditional way, using the same topics. But they thought this might create an inherent bias in the model, since certain topics are more often the subject of fake or real news. For example, fake news stories are generally more likely to include the words “Trump” and “Clinton.”

“But that’s not what we wanted,” O’Brien says. “That just shows topics that are strongly weighted in fake and real news. … We wanted to find the actual patterns in language that are indicative of those.”

Next, the researchers trained the model on all topics without any mention of the word “Trump,” and tested the model only on samples that had been set aside from the training data and that did contain the word “Trump.” While the traditional approach reached 93-percent accuracy, the second approach reached 87-percent accuracy. This accuracy gap, the researchers say, highlights the importance of using topics held out from the training process, to ensure the model can generalize what it has learned to new topics.
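The topic-holdout protocol described above can be sketched in a few lines. The articles, labels, and held-out word below are invented placeholders; the point is only the split logic — training data never contains the held-out word, while test data always does.

```python
# Word whose topic is held out of training (following the paper's example).
HOLD_OUT = "trump"

# Invented (article text, label) pairs for illustration only.
articles = [
    ("trump rally draws crowd", "fake"),
    ("senate passes budget bill", "real"),
    ("trump signs executive order", "real"),
    ("city council approves park", "real"),
    ("aliens endorse trump claim", "fake"),
]

# Train only on articles that never mention the held-out word...
train = [(t, y) for t, y in articles if HOLD_OUT not in t.split()]
# ...and test only on articles that do contain it.
test = [(t, y) for t, y in articles if HOLD_OUT in t.split()]
```

Any accuracy drop between the usual split and this one then measures how much the model was leaning on topic words rather than on general language patterns.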

More research needed

To open the black box, the researchers then retraced their steps. Each time the model makes a prediction about a word triplet, a certain part of the model activates, depending on whether the triplet is more likely from a real or fake news story. The researchers designed a method to retrace each prediction back to its designated part and then find the exact words that made it activate.
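The retracing idea can be illustrated with a toy sketch: record which word triplet produced each activation, then report the words behind the strongest one. The activation scores here are invented; in the real model they would come from the network.

```python
def top_triplet(words, scores):
    """Return the 3-word phrase whose activation score is largest."""
    triplets = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    best_i = max(range(len(scores)), key=lambda i: scores[i])
    return triplets[best_i]

words = ["this", "absolutely", "shocking", "story", "emerged"]
scores = [0.2, 1.7, 0.4]  # one (made-up) activation per consecutive triplet
phrase = top_triplet(words, scores)  # -> ("absolutely", "shocking", "story")
```

Collecting these top-activating phrases across many articles is what surfaces the characteristic vocabulary of fake versus real news.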

More research is needed to determine how useful this information is to readers, Boix says. In the future, the model could potentially be combined with, say, automated fact-checkers and other tools to give readers an edge in combating misinformation. After some refining, the model could also be the basis of a browser extension or app that alerts readers to potential fake news language.

“If I just give you an article, and highlight those patterns in the article as you’re reading, you could assess if the article is more or less fake,” he says. “It would be kind of like a warning to say, ‘Hey, maybe there is something strange here.’”

3 Questions: Ken Urban on theater, science, and tech

Ken Urban, a senior lecturer in MIT’s Music and Theater Arts Program (MTA), is a screenwriter, director, musician, and highly acclaimed playwright whose work has been performed in New York, London, Boston, and Washington. He joined the faculty in 2017 and now leads MIT’s playwriting program. Recently, he launched the MTA Playwrights Lab, a groundbreaking collaboration between MIT students and professional theater artists.

Q: You began college studying chemical engineering but instead became a world-class playwright. In what ways does your affinity for math and science inform your writing or your approach to theater-making? More broadly, do you see fruitful connections between the sciences, technology, and the arts?

A. In terms of how the engineer in me helps my playwriting, it comes down to a question of structure. The thing I loved about studying math and science was how it helped reveal the hidden structure of the universe, and answered questions about how things functioned. When I write a play, I am telling a story and I need to find the best structure to tell that story. When I was in Catholic grammar school, I loved to diagram sentences. We would take a complex sentence and break it down into the parts of speech, then represent that structure in a compact, orderly diagram. What I loved was how it combined my love of language with my love of problem solving. I do the same thing, in a way, when I write a play. I break down the story into scenes, into beats, trying to figure out the best, most exciting way to reveal a character or the plot. That feels to me like the work of an engineer.

The larger connection between theater-making and science and technology is a little trickier. As a playwright invested in psychology, I love the unadorned quality of plays, of actors on a set being in a believable and emotionally rich scenario. I admire the work of the Wooster Group, Reza Abdoh — I’m helping organize a retrospective of his work here on campus in February — and others in the experimental scene, who use technology as an integral part of their aesthetic. I just don’t tend to create work like that. Plays about science are especially hard. The amount of material you need to cover for a general audience to understand the science itself can make those plays feel exposition-heavy. That’s never a good thing. It might be why great plays on science are few and far between. But that isn’t stopping me from trying. I am currently working on a new play inspired by Henrietta Lacks and the ethical dilemmas regarding her immortal cells, which are used in labs across the globe.

That play will be workshopped here at MIT at our new theater building W97 in March. “The Immortals” is a dangerous comedy that uses the science as a springboard for a larger investigation of ethics. I am looking forward to my students seeing how a new play is developed in rehearsal, and no doubt, they will help the actors, the director, and me understand more about biomedical ethics.

Q: What have you learned as a playwright and dramatic writer that might help individuals and societies better navigate this complex time in history?

A. The best writing advice I ever got was from playwright Erik Ehn. He told me you need to feel the breath of your characters on your neck. I took that to mean you need to know them intimately. They cannot be held at a distance. I got that advice at a crucial moment. I was working on “Sense of an Ending,” a play about the Rwandan genocide, and I was frustrated because I couldn’t understand the two nuns in my play. I was basing these characters on two actual nuns who were convicted of “crimes against humanity” for their perceived role in a church massacre during the 1994 genocide of the Tutsis by the Hutu majority. But Erik’s advice helped me realize that I couldn’t look at these women from the outside. To make these characters work as dramatic engines, to make the play successful as an evening of theater, I had to understand why Sister Justina and Sister Alice did what they did. To see myself in them. To have empathy for, or at least understanding of, why these women did not help.

Understanding others is crucial right now. Remember, of course, that understanding is not the same as forgiving or ignoring conflict. But not to sit in a place of judgment: That is the goal. And that’s what being a playwright has taught me. Not to get too personal, but my father is a climate change denier. It enrages me. But what I have come to understand is that he is motivated by fear. To acknowledge the reality of global climate change is terrifying because it means we have to do something. And it means we are leaving a damaged world to the generations after us. Realizing this facet of my father helped me find ways to challenge him without dismissing him as a person. I ask my writers here at MIT to read an article about a 43-year-old female steelworker who is asked to train the Mexican workers who are replacing her when the American plant is shut down and the company moves. I chose this article because I know this is an experience far removed from my students’ lives, but I want them to do the hard work of finding themselves inside her experience and use that as a springboard for their new play. You cannot write convincingly until you care about people who are different from you.

Q: President Reif has said that the solutions to today’s challenges depend on pairing advanced technical and scientific capabilities with a broad understanding of the world’s political, cultural, and economic realities. What do you view as the main deterrent to such collaborative, multi-disciplinary problem-solving and how can we resolve it?

A. In key ways, knowledge has become more and more bifurcated. We have specialties and the solution to these global problems requires a multi-faceted approach. One of the joys of a career in the arts is that I am constantly being asked to go outside my comfort zone and to explore subject matter that is beyond my expertise. My PhD is in English literature and my dissertation was focused on nihilism and 1990s British theater. I was trained to know a lot about Nietzsche and Sarah Kane. But what do I know about Henrietta Lacks and biomedical research? The Rwandan genocide? Being gay in Uganda?

Writing plays has helped me gain a broader understanding of our world. I don’t know how to solve this vast problem [of siloed research], but I do hope that teaching dramatic writing at MIT helps in some small way. Perhaps teaching students about the collaborations that foster new writing in the theater also helps to catalyze new ideas and models for how collaborations might work in their own fields and areas of expertise.

Interview prepared by MIT SHASS Communications
Series Editor: Emily Hiestand
Consulting Editor: Elizabeth Karagianis

Want to squelch fake news? Let the readers take charge

Would you like to rid the internet of false political news stories and misinformation? Then consider using — yes — crowdsourcing.

That’s right. A new study co-authored by an MIT professor shows that crowdsourced judgments about the quality of news sources may effectively marginalize false news stories and other kinds of online misinformation.

“What we found is that, while there are real disagreements among Democrats and Republicans concerning mainstream news outlets, basically everybody — Democrats, Republicans, and professional fact-checkers — agree that the fake and hyperpartisan sites are not to be trusted,” says David Rand, an MIT scholar and co-author of a new paper detailing the study’s results.

Indeed, using a pair of public-opinion surveys to evaluate 60 news sources, the researchers found that Democrats trusted mainstream media outlets more than Republicans did — with the exception of Fox News, which Republicans trusted far more than Democrats did. But when it comes to lesser-known sites peddling false information, as well as “hyperpartisan” political websites (the researchers include Breitbart and Daily Kos in this category), both Democrats and Republicans show a similar disregard for such sources.

Trust levels for these alternative sites were low overall. For instance, in one survey, when respondents were asked to give a trust rating from 1 to 5 for news outlets, the result was that hyperpartisan websites received a trust rating of only 1.8 from both Republicans and Democrats; fake news sites received a trust rating of only 1.7 from Republicans and 1.9 from Democrats. 

By contrast, mainstream media outlets received a trust rating of 2.9 from Democrats but only 2.3 from Republicans; Fox News, however, received a trust rating of 3.2 from Republicans, compared to 2.4 from Democrats.

The study adds a twist to a high-profile issue. False news stories have proliferated online in recent years, and social media sites such as Facebook have received sharp criticism for giving them visibility. Facebook also faced pushback for a January 2018 plan to let readers rate the quality of online news sources. But the current study suggests such a crowdsourcing approach could work well, if implemented correctly.

“If the goal is to remove really bad content, this actually seems quite promising,” Rand says. 

The paper, “Fighting misinformation on social media using crowdsourced judgments of news source quality,” is being published in Proceedings of the National Academy of Sciences this week. The authors are Gordon Pennycook of the University of Regina, and Rand, an associate professor in the MIT Sloan School of Management.

To promote, or to squelch?

To perform the study, the researchers conducted two online surveys that had roughly 1,000 participants each, one on Amazon’s Mechanical Turk platform, and one via the survey tool Lucid. In each case, respondents were asked to rate their trust in 60 news outlets, about a third of which were high-profile, mainstream sources.

The second survey’s participants had demographic characteristics resembling those of the country as a whole — including partisan affiliation. (The researchers weighted Republicans and Democrats equally in the survey to avoid any perception of bias.) That survey also measured the general audience’s evaluations against a set of judgments by professional fact-checkers, to see whether the larger audience’s judgments were similar to the opinions of experienced researchers.

But while Democrats and Republicans regarded prominent news outlets differently, that party-based mismatch largely vanished when it came to the other kinds of news sites, where, as Rand says, “By and large we did not find that people were really blinded by their partisanship.”

In this vein, Republicans trusted MSNBC more than Breitbart, even though many of them regarded MSNBC as a left-leaning news channel. Meanwhile, Democrats, although they trusted Fox News less than any other mainstream news source, trusted it more than left-leaning hyperpartisan outlets (such as Daily Kos).

Moreover, because the respondents generally distrusted the more marginal websites, there was significant agreement among the general audience and the professional fact-checkers. (As the authors point out, this also challenges claims about fact-checkers having strong political biases themselves.)

That means the crowdsourcing approach could work especially well in marginalizing false news stories — for instance by building audience judgments into an algorithm ranking stories by quality. Crowdsourcing would probably be less effective, however, if a social media site were trying to build a consensus about the very best news sources and stories.
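The ranking approach described above can be illustrated with a short sketch. Everything here — the source names, the individual ratings, and the ranking rule — is invented for illustration; the paper does not prescribe a particular algorithm.

```python
# Hypothetical sketch: down-rank stories from sources with low
# crowdsourced trust. Source names and ratings are invented.

def average_trust(ratings):
    """Mean of individual 1-5 trust ratings for one source."""
    return sum(ratings) / len(ratings)

def rank_stories(stories, trust_by_source):
    """Order stories by their source's average trust, highest first.
    Unknown sources default to the lowest possible rating."""
    return sorted(stories,
                  key=lambda s: trust_by_source.get(s["source"], 1.0),
                  reverse=True)

trust_by_source = {
    "mainstream-a": average_trust([3, 3, 2, 4]),     # 3.0
    "hyperpartisan-b": average_trust([2, 1, 2, 2]),  # 1.75
}
stories = [
    {"title": "Story 1", "source": "hyperpartisan-b"},
    {"title": "Story 2", "source": "mainstream-a"},
]
ranked = rank_stories(stories, trust_by_source)  # "Story 2" first
```

A real system would need safeguards the study also discusses, such as handling raters unfamiliar with a source.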

Where Facebook failed: Familiarity?

If the new study by Rand and Pennycook rehabilitates the idea of crowdsourcing news source judgments, their approach differs from Facebook’s stated 2018 plan in one crucial respect. Facebook was only going to let readers who were familiar with a given news source give trust ratings.

But Rand and Pennycook conclude that this method would indeed build bias into the system, because people are more skeptical of news sources they have less familiarity with — and there is likely good reason why most people are not acquainted with many sites that run fake or hyperpartisan news.

 “The people who are familiar with fake news outlets are, by and large, the people who like fake news,” Rand says. “Those are not the people that you want to be asking whether they trust it.”

Thus for crowdsourced judgments to be a part of an online ranking algorithm, there might have to be a mechanism for using the judgments of audience members who are unfamiliar with a given source. Or, better yet, Pennycook and Rand suggest, show users sample content from each news outlet before having them produce trust ratings.

For his part, Rand acknowledges one limit to the overall generalizability of the study: The dynamics could be different in countries that have more limited traditions of freedom of the press.

“Our results pertain to the U.S., and we don’t have any sense of how this will generalize to other countries, where the fake news problem is more serious than it is here,” Rand says.

All told, Rand says, he also hopes the study will help people look at America’s fake news problem with something less than total despair.

“When people talk about fake news and misinformation, they almost always have very grim conversations about how everything is terrible,” Rand says. “But a lot of the work Gord [Pennycook] and I have been doing has turned out to produce a much more optimistic take on things.”

Support for the study came from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the Social Sciences and Humanities Research Council of Canada, and the Templeton World Charity Foundation.

Why Use a Shotgun for Your Home Defense

There are many small arms in use across the country and the world, but there are good reasons to consider the shotgun one of the best options for protecting yourself and the other members of your household. It can also come in very handy if you want to keep your valuables and your home safe from intruders. With so many other options available, though, it pays to gather some information before weighing the various shotgun for sale OKC options. We are therefore happy to share some points below that we believe are clear benefits of using a shotgun. We hope they will help our readers make up their minds and buy the right licensed firearm for the protection of their homes, family members, and valuables.

It Does Fire Multiple Projectiles

This is perhaps the biggest advantage of a shotgun: it fires multiple projectiles each time the trigger is pulled, something no other defensive firearm offers. The impact of multiple pellets hitting a target at short range can be devastating. Any self-defense class will teach you to fire as many rapid shots as possible to stop an attacker, and a shotgun does this best, because a single shot sprays many pellets at once. Fired from short range, it can well and truly immobilize a target, and an intruder is more likely to flee the scene than risk serious or even fatal injury.

It Has A Menacing Appearance

Shotguns are easily identifiable and stand apart from other firearms. If you simply carry a shotgun, few miscreants or criminal-minded persons are likely to confront you. They would rather try their luck elsewhere, so you can keep burglars, thieves, and other criminals at bay without ever firing it.

It Is Easy To Use

Further, modern-day shotguns are easy to use, so you need not struggle through lengthy lessons to operate one properly. They are light, easy to carry around, and offer good value for money. They also come in different sizes and shapes, so you can choose the one that best suits your specific needs and requirements.

However, on the flip side, there are some disadvantages. If you want something discreet, a shotgun may not be the ideal option: its large size is a giveaway, and if you want the element of surprise and swiftness while defending yourself and your family, then pistols, revolvers, and other small arms could be a better choice.

In the end, when you weigh the pros and cons of shotguns, the pros clearly outnumber the cons. A shotgun is a good defensive option, apart from being useful for hunting and other sporting purposes.

Contact Us:

H&H Shooting Sports

Address:
400 South Vermont Ave #110
Oklahoma City, OK
Phone: 1-405-947-3888

A faster, more efficient cryptocurrency

MIT researchers have developed a new cryptocurrency that drastically reduces the data users need to join the network and verify transactions — by up to 99 percent compared to today’s popular cryptocurrencies. This means a much more scalable network.

Cryptocurrencies, such as the popular Bitcoin, are networks built on the blockchain, a financial ledger formatted in a sequence of individual blocks, each containing transaction data. These networks are decentralized, meaning there are no banks or organizations to manage funds and balances, so users join forces to store and verify the transactions.

But decentralization leads to a scalability problem. To join a cryptocurrency, new users must download and store all transaction data from hundreds of thousands of individual blocks. They must also store these data to use the service and help verify transactions. This makes the process slow or computationally impractical for some.

In a paper being presented at the Network and Distributed System Security Symposium next month, the MIT researchers introduce Vault, a cryptocurrency that lets users join the network by downloading only a fraction of the total transaction data. It also incorporates techniques that delete empty accounts that take up space, and enables verifications using only the most recent transaction data that are divided and shared across the network, minimizing an individual user’s data storage and processing requirements.

In experiments, Vault reduced the bandwidth for joining its network by 99 percent compared to Bitcoin and 90 percent compared to Ethereum, which is considered one of today’s most efficient cryptocurrencies. Importantly, Vault still ensures that all nodes validate all transactions, providing tight security equal to that of its existing counterparts.

“Currently there are a lot of cryptocurrencies, but they’re hitting bottlenecks related to joining the system as a new user and to storage. The broad goal here is to enable cryptocurrencies to scale well for more and more users,” says co-author Derek Leung, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Leung on the paper are CSAIL researchers Yossi Gilad and Nickolai Zeldovich, who is also a professor in the Department of Electrical Engineering and Computer Science (EECS); and recent alumnus Adam Suhl ’18.

Vaulting over blocks

Each block in a cryptocurrency network contains a timestamp, its location in the blockchain, and a fixed-length string of numbers and letters, called a “hash,” that’s basically the block’s identification. Each new block contains the hash of the previous block in the blockchain. Blocks in Vault also contain up to 10,000 transactions — or 10 megabytes of data — that must all be verified by users. The structure of the blockchain and, in particular, the chain of hashes, ensures that an adversary cannot hack the blocks without detection.
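The hash chain described above can be sketched in a few lines. The field names and transaction strings here are simplified stand-ins, not Vault's actual block layout:

```python
# Minimal sketch of a hash chain: each block stores the hash of its
# predecessor, so altering any past block changes every later link
# and is immediately detectable.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Add a block whose prev_hash field commits to the chain so far."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"height": len(chain),
                  "prev_hash": prev,
                  "transactions": transactions})

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])

# The link holds: block 1 commits to block 0's exact contents.
assert chain[1]["prev_hash"] == block_hash(chain[0])
```

Tampering with any field of block 0 would change `block_hash(chain[0])`, breaking the stored link and exposing the modification.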

New users join cryptocurrency networks, or “bootstrap,” by downloading all past transaction data to ensure they’re secure and up to date. To join Bitcoin last year, for instance, a user would download 500,000 blocks totaling about 150 gigabytes. Users must also store all account balances to help verify new users and ensure users have enough funds to complete transactions. Storage requirements are becoming substantial, as Bitcoin expands beyond 22 million accounts.

The researchers built their system on top of a new cryptocurrency network called Algorand — invented by Silvio Micali, the Ford Professor of Engineering at MIT — that’s secure, decentralized, and more scalable than other cryptocurrencies.

With traditional cryptocurrencies, users compete to solve equations that validate blocks, with the first to solve the equations receiving funds. As the network scales, this slows down transaction processing times. Algorand uses a “proof-of-stake” concept to more efficiently verify blocks and better enable new users to join. For every block, a representative verification “committee” is selected. Users with more money — or stake — in the network have higher probability of being selected. To join the network, users verify each certificate, not every transaction.

But each block holds some key information to validate the certificate immediately ahead of it, meaning new users must start with the first block in the chain, along with its certificate, and sequentially validate each one in order, which can be time-consuming. To speed things up, the researchers give each new certificate verification information based on a block a few hundred or 1,000 blocks behind it — called a “breadcrumb.” When a new user joins, they match the breadcrumb of an early block to a breadcrumb 1,000 blocks ahead. That breadcrumb can be matched to another breadcrumb 1,000 blocks ahead, and so on.
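The breadcrumb idea above can be sketched as follows. The checkpoint structure and the hop size are illustrative stand-ins, not Vault's actual certificate format:

```python
# Illustrative sketch of "breadcrumb" bootstrapping: instead of
# validating every certificate in sequence, a joining user hops from
# one checkpoint to the next, STRIDE blocks at a time.

STRIDE = 1000  # blocks skipped per hop (the paper mentions a few hundred to 1,000)

def bootstrap(breadcrumbs, chain_length):
    """Return the checkpoint heights a joining user actually verifies."""
    verified = []
    height = 0
    while height + STRIDE <= chain_length:
        # Match this position against the breadcrumb STRIDE blocks ahead.
        assert breadcrumbs[height + STRIDE]["points_back_to"] == height
        height += STRIDE
        verified.append(height)
    return verified

# Toy chain of 5,000 blocks with back-pointing breadcrumbs every STRIDE.
breadcrumbs = {h: {"points_back_to": h - STRIDE}
               for h in range(STRIDE, 5001, STRIDE)}
hops = bootstrap(breadcrumbs, 5000)
# The user verifies 5 checkpoints rather than 5,000 blocks.
```

The bandwidth saving comes from skipping everything between matched breadcrumbs, which is the "vaulting" the paper's title puns on.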

“The paper title is a pun,” Leung says. “A vault is a place where you can store money, but the blockchain also lets you ‘vault’ over blocks when joining a network. When I’m bootstrapping, I only need a block from way in the past to verify a block way in the future. I can skip over all blocks in between, which saves us a lot of bandwidth.”

Divide and discard

To reduce data storage requirements, the researchers designed Vault with a novel “sharding” scheme. The technique divides transaction data into smaller portions — or shards — that it shares across the network, so individual users only have to process small amounts of data to verify transactions.

To implement sharding in a secure way, Vault uses a well-known data structure called a binary Merkle tree. In binary trees, a single top node branches off into two “children” nodes, and those two nodes each break into two children nodes, and so on.

In Merkle trees, the top node contains a single hash, called a root hash. But the tree is constructed from the bottom, up. The tree combines each pair of children hashes along the bottom to form their parent hash. It repeats that process up the tree, assigning a parent node from each pair of children nodes, until it combines everything into the root hash. In cryptocurrencies, the top node contains a hash of a single block. Each bottom node contains a hash that signifies the balance information about one account involved in one transaction in the block. The balance hash and block hash are tied together.

To verify any one transaction, the network combines the two children nodes to get the parent node hash. It repeats that process working up the tree. If the final combined hash matches the root hash of the block, the transaction can be verified. But with traditional cryptocurrencies, users must store the entire tree structure.
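The bottom-up construction and the verification walk described above can be sketched with a tiny Merkle tree. The leaf contents are invented account-balance strings; a real implementation would also pin down leaf ordering and encoding:

```python
# Sketch of Merkle-tree construction and verification: combine sibling
# hashes pairwise up the tree and compare the result to the root hash.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build the tree bottom-up and return the root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate last hash if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, proof, root):
    """proof: list of (sibling_hash, sibling_is_left) pairs, leaf to root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

leaves = [b"acct-a:10", b"acct-b:4", b"acct-c:7", b"acct-d:0"]
root = merkle_root(leaves)

# Proof for leaf 1: its sibling leaf 0 on the left, then the combined
# hash of leaves 2 and 3 on the right.
proof = [(h(leaves[0]), True),
         (h(h(leaves[2]) + h(leaves[3])), False)]
assert verify(leaves[1], proof, root)
```

Note that verifying one leaf needs only a logarithmic number of sibling hashes, not the whole tree, which is what makes sharding the leaves practical.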

With Vault, the researchers divide the Merkle tree into separate shards assigned to separate groups of users. Each user account only ever stores the balances of the accounts in its assigned shard, as well as root hashes. The trick is having all users store one layer of nodes that cuts across the entire Merkle tree. When a user needs to verify a transaction from outside of their shard, they trace a path to that common layer. From that common layer, they can determine the balance of the account outside their shard, and continue validation normally.

“Each shard of the network is responsible for storing a smaller slice of a big data structure, but this small slice allows users to verify transactions from all other parts of network,” Leung says.

Additionally, the researchers designed a novel scheme that recognizes and discards from a user’s assigned shard accounts that have had zero balances for a certain length of time. Other cryptocurrencies keep all empty accounts, which increase data storage requirements while serving no real purpose, as they don’t need verification. When users store account data in Vault, they ignore those old, empty accounts.
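The pruning idea can be sketched simply. The expiry threshold and account-record format below are invented for illustration; the paper's actual scheme operates on the shard data structures described above:

```python
# Toy sketch of pruning: discard accounts that have sat at a zero
# balance for longer than some threshold, keeping storage lean.

EXPIRY = 100  # blocks an account may stay empty before being discarded

def prune(accounts, current_height):
    """Keep accounts that hold funds, or were only recently emptied."""
    return {name: acct for name, acct in accounts.items()
            if acct["balance"] > 0
            or current_height - acct["emptied_at"] <= EXPIRY}

accounts = {
    "alice": {"balance": 5, "emptied_at": None},   # funded: kept
    "bob":   {"balance": 0, "emptied_at": 900},    # recently emptied: kept
    "carol": {"balance": 0, "emptied_at": 100},    # long empty: discarded
}
live = prune(accounts, current_height=950)
```

Since empty accounts need no verification, dropping them shrinks each user's stored shard without weakening transaction checks.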

AI, the law, and our future

Scientists and policymakers converged at MIT on Tuesday to discuss one of the hardest problems in artificial intelligence: How to govern it.

The first MIT AI Policy Congress featured seven panel discussions sprawling across a variety of AI applications, and 25 speakers — including two former White House chiefs of staff, former cabinet secretaries, homeland security and defense policy chiefs, industry and civil society leaders, and leading researchers.

Their shared focus: how to harness the opportunities that AI is creating — across areas including transportation and safety, medicine, labor, criminal justice, and national security — while vigorously confronting challenges, including the potential for social bias, the need for transparency, and missteps that could stall AI innovation while exacerbating social problems in the United States and around the world.

“When it comes to AI in areas of public trust, the era of moving fast and breaking everything is over,” said R. David Edelman, director of the Project on Technology, the Economy, and National Security (TENS) at the MIT Internet Policy Research Initiative (IPRI), and a former special assistant to the president for economic and technology policy in the Obama White House.

Added Edelman: “There is simply too much at stake for all of us not to have a say.”

Daniel Weitzner, founding director of IPRI and a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), said a key objective of the dialogue was to help policy analysts feel confident about their ability to actively shape the effects of AI on society.

“I hope the policymakers come away with a clear sense that AI technology is not some immovable object, but rather that the right interaction between computer science, government, and society at large will help shape the development of new technology to address society’s needs,” Weitzner said at the close of the event.

The MIT AI Policy Congress was organized by IPRI, alongside a two-day meeting of the Organization for Economic Cooperation and Development (OECD), the Paris-based intergovernmental association, which is developing AI policy recommendations for 35 countries around the world. As part of the event, OECD experts took part in a half-day, hands-on training session in machine learning, as they trained and tested a neural network under the guidance of Hal Abelson, the Class of 1922 Professor of Computer Science and Engineering at MIT.

Tuesday’s forum also began with a primer on the state of the art in AI from Antonio Torralba, a professor in CSAIL and the Department of Electrical Engineering and Computer Science (EECS), and director of the MIT Quest for Intelligence. Noting that “there are so many things going on” in AI, Torralba quipped: “It’s very difficult to know what the future is, but it’s even harder to know what the present is.”

A new “commitment to address ethical issues”

Tuesday’s event, co-hosted by the IPRI and the MIT Quest for Intelligence, was held at a time when AI is receiving a significant amount of media attention — and an unprecedented level of financial investment and institutional support.

For its part, MIT announced in October 2018 that it was founding the MIT Stephen A. Schwarzman College of Computing, supported by a $350 million gift from Stephen Schwarzman, which will serve as an interdisciplinary nexus of research and education in computer science, data science, AI, and related fields. The college will also address policy and ethical issues relating to computing.

“Here at MIT, we are at a unique moment with the impending launch of the new MIT Schwarzman College of Computing,” Weitzner noted. “The commitment to address policy and ethical issues in computing will result in new AI research, and curriculum to train students to develop new technology to meet society’s needs.”

Other institutions are making an expanded commitment to AI as well — including the OECD.

“Things are evolving quite quickly,” said Andrew Wyckoff, director for science, technology, and innovation at the OECD. “We need to begin to try to get ahead of that.”

Wyckoff added that AI was a “top three” policy priority for the OECD in 2019-2020, and said the organization was forming a “policy observatory” to produce realistic assessments of AI’s impact, including the issue of automation replacing jobs.

“There’s a lot of fear out there about [workers] being displaced,” said Wyckoff. “We need to look at this and see what is reality, versus what is fear.”

A fair amount of that idea stems more from fear than reality, said Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and a professor at the MIT Sloan School of Management, during a panel discussion on manufacturing and labor.

Compared to the range of skills needed in most jobs, “Today what machine learning can do is much more narrow,” Brynjolfsson said. “I think that’s going to be the status quo for a number of years.”

Brynjolfsson noted that his own research on the subject, evaluating the full range of specific tasks used in a wide variety of jobs, shows that automation tends to replace some but not all of those tasks.

“In not a single one of those occupations did machine learning run the table” of tasks, Brynjolfsson said. “You’re not just going to be able to plug in a machine very often.” However, he noted, the fact that computers can usurp certain tasks means that “reinvention and redesign” will be necessary for many jobs. Still, as Brynjolfsson emphasized, “That process is going to play out over years, if not decades.”

A varied policy landscape

One major idea underscored at the event is that AI policymaking could unfold quite differently from industry to industry. For autonomous vehicles — perhaps the most widely-touted application of AI — U.S. states have significant rulemaking power, and laws could vary greatly across state lines.

In a panel discussion on AI and transportation, Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of CSAIL, remarked that she sees transportation “as one of the main targets and one of the main points of adoption for AI technologies in the present and near future.”

Rus suggested that the use of autonomous vehicles in some low-speed, less-complex environments might be possible within five years or so, but she also made clear that autonomous vehicles fare less well in more complicated, higher-speed situations, and struggle in bad weather.

Partly for those reasons, many autonomous vehicles figure to feature systems where drivers can take over the controls. But as Rus noted, that “depends on people’s ability to take over instantaneously,” while studies are currently showing that it takes drivers about nine seconds to assume control of their vehicles.

The transportation panel discussion also touched on the use of AI in nautical and aerial systems. In the latter case, “you can’t look into your AI co-pilot’s eyes and judge their confidence,” said John-Paul Clarke, the vice president of strategic technologies at United Technologies, regarding the complex dynamics of human-machine interfaces.

In other industries, fundamental AI challenges involve access to data, a point emphasized by both Torralba and Regina Barzilay, an MIT professor in both CSAIL and EECS. During a panel on health care, Barzilay presented on one aspect of her research, which uses machine learning to analyze mammogram results for better early detection of cancer. In Barzilay’s view, key technical challenges in her work that could be addressed by AI policy include access to more data and testing across populations — both of which can help refine automated detection tools.

The matter of how best to create access to patient data, however, led to some lively subsequent exchanges. Tom Price, former secretary of health and human services in the Trump administration, suggested that “de-identified data is absolutely the key” to further progress, while some MIT researchers in the audience suggested that it is virtually impossible to create totally anonymous patient data.

Jason Furman, a professor of the practice of economic policy at the Harvard Kennedy School and a former chair of the Council of Economic Advisors in the Obama White House, addressed the concern that insurers would deny coverage to people based on AI-generated predictions about which people would most likely develop diseases later in life. Furman suggested that the best solution for this lies outside the AI domain: preventing denial of care based on pre-existing conditions, an element of the Affordable Care Act.

But overall, Furman added, “the real problem with artificial intelligence is we don’t have enough of it.”

For his part, Weitzner suggested that, in lieu of perfectly anonymous medical data, “we should agree on what are the permissible uses and the impermissible uses” of data, since “the right way of enabling innovation and taking privacy seriously is taking accountability seriously.”

Public accountability

For that matter, the accountability of organizations constituted another touchstone of Tuesday’s discussions, especially in a panel on law enforcement and AI.

“Government entities need to be transparent about what they’re doing with respect to AI,” said Jim Baker, Harvard Law School lecturer and former general counsel of the FBI. “I think that’s obvious.”

Carol Rose, executive director of the American Civil Liberties Union’s Massachusetts chapter, warned against overuse of AI tools in law enforcement.

“I think AI has tremendous promise, but it really depends if the data scientists and law enforcement work together,” Rose said, suggesting that a certain amount of “junk science” had already made its way into tools being marketed to law-enforcement officials. Rose also cited Joy Buolamwini of the MIT Media Lab as a leader in the evaluation of such AI tools; Buolamwini founded the Algorithmic Justice League, a group scrutinizing the use of facial recognition technologies.

“Sometimes I worry we have an AI hammer looking for a nail,” Rose said.

All told, as Edelman noted in closing remarks, the policy world consists of “very different bodies of law,” and policymakers will need to ask themselves to what extent general regulations are meaningful, or if AI policy issues are best addressed in more specific ways — whether in medicine, criminal justice, or transportation.  

“Our goal is to see the interconnection among these fields … but as we do, let’s also ask ourselves if ‘AI governance’ is the right frame at all — it might just be that in the near future, all governance deals with AI issues, one way or another,” Edelman said.

Weitzner concluded the conference with a call for governments to continue engagement with the computer science and artificial intelligence technical communities. “The technologies that are shaping the world’s future are being developed today. We have the opportunity to be sure that they serve society’s needs if we keep up this dialogue as way of informing technical design and cross-disciplinary research.”
