People Should Find A Safe Storm Shelter During A Thunderstorm

Storm Shelters in OKC

Tuesday, June 5, 2001 marked the start of an extraordinary time in the history of my beloved Houston. Tropical Storm Allison came to visit that early summer day. The storm moved through quickly that Tuesday. Then Friday arrived, and Allison returned, this time moving slowly, and this time from the north. The storm stalled. Thousands of people were driven from their homes. Several leading hospitals closed just when they were needed most. Dozens of major surface roads, and every major highway, were covered in high water.

Yet even before the rain stopped, stories of Christian compassion and service to others began to be written. About 75 people had gathered for a couples class at Lakewood Church, one of the largest nondenominational churches in the United States. By the time they were ready to leave, the waters had risen so high that they were stranded. Lakewood’s facility stayed high and dry at the center of one of the hardest-hit parts of town, and refugees from the powerful storm began arriving at its doorstep. With no advance preparation and no official sanction, those 75 classmates started a disaster shelter that grew to hold over 3,000 people, the largest of more than 30 shelters established at the height of the storm.

Afterward, Lakewood served as a Red Cross Service Center where help was given to those who had suffered losses. When it became clear that FEMA and Red Cross aid would not be enough, Lakewood and Second Baptist Houston joined together to produce an adopt-a-family plan to help get people back on their feet more quickly. In the days that followed, armies of Christians arrived at both churches. People of every economic standing, race, and denomination gathered from all over town. Wet, rotted carpet was pulled up and sheetrock removed. Piles of donated clothes, food, and bedding were handed out. Elbow grease and cleaning equipment went to work erasing traces of the damage.

If the story stopped here, it would already be an excellent example of practical ministry in a time of disaster, but it continues. Many other churches served as shelters as well, and in the days that followed, as Red Cross Service Centers. Scores of new volunteers, many of them Christians, were put through accelerated training and put to work. That Saturday I was trapped in my own subdivision, but certain that my family was safe, because I worked at Storm Shelters OKC, near where I lived. What the people I saw would not permit the storm to take was their need to give, their faith, or their self-respect. I saw so many people praising the Lord as they brought gifts of food, clothing, and bedding. I saw young children coming with their parents to give new or rarely used toys to kids who had none.

Leaning On God Through Hard Times

Unity Church of Christianity, from a part of town also impacted by the storm, sent a large supply of bedding and other materials. A small troupe of musicians and Christian clowns arrived and asked to be allowed to entertain the kids in the shelter where I served. We, of course, promptly accepted their offer. They gathered the children in a large empty stretch of floor. They sang, they told stories, they made balloon animals. The children, frightened and at least temporarily displaced, laughed.

When not occupied elsewhere, I did a lot of listening. I listened to disappointed survivors and frustrated relief workers. I listened to children trying to make sense of a situation they could not comprehend. These are only the stories I saw or heard myself. I know that churches, religious groups, and many other individual Christians served admirably, and I want to thank them for their efforts in the disaster. I thank the Lord for providing them to serve.

I didn’t write this so you would feel sorry for Houston or its people. Rather, what I saw as this disaster unfolded strengthened my belief that the Lord will provide for us through our brothers and sisters in faith. No matter how badly your community is hit, you, the individual Christian, can be part of the remedy. Those blankets you have stored away and will probably never use mean a great deal to people who have none. You can help if you can drive. You can help if you can set up a cot. You can help if you can scrub a wall. You can help if all you can do is sit and listen. Large catastrophes like Allison get plenty of attention, but a disaster can come in any size. If a single house burns, that is a serious disaster to the family that called it home. It will be generations before the people here forget Allison.

United States Oil and Gas Exploration Opportunities

Firms investing in this sector can explore, develop, and produce oil and gas, and enjoy the benefits of a global oil and gas portfolio without the usual political and economic disadvantages. The US permitting regime and financial conditions are rated among the best in the world, and petroleum produced in the US is sold at international prices. Firms are also likely to gain because the US has a booming domestic market. Most petroleum exploration in the US has been concentrated around the Taranaki Basin, where some 500 exploration wells have been drilled. The remaining US sedimentary basins are still largely unexplored, and many show evidence of petroleum seeps and structures; survey data have also revealed formations with high hydrocarbon potential. There have been onshore gas discoveries in the past, including the Great South, East Coast, and offshore Canterbury basins.

Interest in petroleum is expected to grow strongly during this period, which only reinforces the bright expectations for this sector. Demand for petroleum is anticipated to reach 338 PJ per annum. The US government is eager to augment the gas and oil supply. Because new discoveries are required to meet national demand, raise the level of self-reliance, and minimize the cost of petroleum imports, the oil and gas exploration sector is considered one of the sunrise sectors. The US government has devised a distinctive approach to reach its petroleum and gas exploration targets: it has developed a “Benefit For Attempt” model for petroleum and gas exploration projects in the US.

In the current analysis, “Benefit For Attempt” is defined as oil reserves found per kilometer drilled. It helps derive an estimate of the reserves found for each kilometer drilled and each dollar spent on exploration. Because the cost of exploration weighs on exploration activity, the US government has shown considerable signs that it will bring about changes favoring the exploration of new oil reserves. The government has made information about the country’s oil potential available in its study report. Transparency of information on royalty and allocation regimes, along with simplicity of processes, has enhanced the attractiveness of the petroleum and natural gas sector in the United States.

Petroleum was the third-biggest export earner for the US in 2008, and the opportunity to sustain the sector’s growth is broadly available through new exploration endeavors. The government is poised to keep up the momentum in this sector. Many firms are now active with new exploration projects in the Challenger Plateau of the United States, the Northland East Slope Basin region, the outer Taranaki Basin, and the Bellona Trough region. The 89 Energy oil and gas sector reassures foreign investors: to encourage growth, the government declared a five-year continuation of an exemption for offshore petroleum and gas exploration in its 2009 budget, and the authorities provide nonresident rig operators with tax breaks.

Modern Robot Duct Cleaning Uses

Heating, ventilation, and air conditioning systems collect pollutants and contaminants like mold, debris, dust, and bacteria that can have an adverse impact on indoor air quality. Most people are now aware that indoor air pollution can be a health concern, and the field has gained visibility accordingly. Studies have also suggested that cleaning enhances these systems’ efficiency and contributes to a longer operating life, along with maintenance and energy cost savings. Duct cleaning is the cleaning of the components of forced-air heating, ventilation, and cooling systems. Robots are an advantageous tool, improving both the cost and the effectiveness of the procedure. Using robots for duct cleaning is therefore no longer a new practice.

A clean air duct system creates a cleaner, healthier indoor environment while lowering energy costs and increasing efficiency. As we spend more hours indoors, air duct cleaning has become an important part of the cleaning industry. Indoor pollutant levels can build up. Health effects can show up immediately or years after repeated or prolonged exposure. These effects range from respiratory diseases to cardiovascular disease and cancer, and can be debilitating or deadly. It is therefore wise to ensure that indoor air quality is not compromised inside buildings. According to the Environmental Protection Agency, levels of dangerous pollutants found indoors can exceed those of outdoor air.

Duct cleaning by Air Duct Cleaning Edmond professionals removes both visible contaminants and microbial contaminants that may not be visible to the naked eye. Such contaminants can affect indoor air quality and present a health hazard. Air ducts can host a number of hazardous microbial agents. Legionnaires’ disease is one malady that has gained public notice; our modern surroundings support the growth of the bacteria that cause the affliction and can lead to outbreaks. Typical disease-causing environments involve moisture-producing equipment, such as poorly maintained cooling towers in air-conditioned buildings. In short, in designing and building systems to control our surroundings, we have created perfect conditions for this disease. Those systems must be properly monitored and maintained; that is the secret to controlling it.

Robots allow the job to be done faster while saving workers from exposure. Signs of the technological progress in the duct cleaning business are apparent in the variety of equipment now available, including an array of robotic gear for use in air duct cleaning. Robots are priceless in hard-to-reach places. Robots once used only to view conditions inside the duct may now be used for spraying, cleaning, and sampling procedures. The remote-controlled robotic gear can be fitted with tool and fastener attachments to serve many different functions.

Video recorders and a closed-circuit television camera system can be attached to the robotic gear to view conditions and operations, and for documentation purposes. Inspection devices on the robot examine the insides of ducts. Robots can travel to particular sections of the system and move around barriers. Some combine functions that enable cleaning operation under manual control and fit into small ducts. They can deliver a useful viewing range, with models delivering disinfection, cleaning, inspection, coating, and sealing abilities economically.

The remote-controlled robotic gear comes in various sizes and shapes for different uses. The first use of robotic video cameras was in the 1980s, to record conditions inside ducts. Robotic cleaning systems now have many more uses. These devices provide improved access for better cleaning and reduce labor costs. Lately, the service industries have expanded the areas of use for small mobile robots, including inspection and duct cleaning.

More improvements are being considered to make an already productive tool even more effective. If you decide to have your heating, ventilation, and cooling system cleaned, it is important to make sure the contractor is qualified and cleans all parts of the system. Failure to clean one component of a contaminated system can lead to re-contamination of the entire system.

When To Call A DWI Attorney

Reducing or dismissing the charges or fines against a DWI offender requires a qualified Sugar Land criminal defense attorney, so a DWI attorney is undoubtedly needed by anyone facing such charges. Even for a first-time violation the penalties can be severe, so being represented by a qualified DWI attorney is vitally important. If you are facing subsequent charges for DWI, the punishments can be harsher still and can include felony charges. Finding a good attorney is thus a job you should approach as soon as possible.

Every state in America makes its own laws and legislation regarding DWI violations, so bear in mind that you should hire a DWI attorney who practices in the state where the violation occurred. They will have the knowledge and experience of the relevant state law to defend you adequately, and will be familiar with the processes and tests performed to establish your guilt.

As your attorney, they will look at the tests that were conducted at the time of your arrest and the accompanying police evidence to assess whether those tests were accurately performed, carried out by competent staff, and whether the right procedures were followed. Police testimony can also be challenged in court, although it is rare for police testimony to be argued against.

When you start looking for a DWI attorney, you should try to find someone who specializes in these kinds of cases. While many attorneys may be willing to take on your case, a specialist has the skilled knowledge needed to interpret the scientific and medical tests conducted when you were detained. The first consultation is free and gives you the chance to ask about their experience with these cases and their fees.

Many attorneys work according to an hourly fee or on a set-fee basis determined by the kind of case. You can find out how they are paid so it suits your financial situation, and you may be able to negotiate the terms of their fee. If you cannot afford to hire a private attorney, you can request a court-appointed attorney paid for by the state. Before you hire a DWI attorney, make sure you understand the precise charges against you and when you are expected to appear in court.

How Credit Cards Work

The credit card makes your life easier, supplying an amazing set of options. The credit card is a retail trade settlement instrument: a credit system worked through the little plastic card that bears its name. The physical card itself always takes the same structure, size, and shape, regulated by the ISO 7810 standard that defines credit cards. A strip of special material on the card (the substance resembles that of a floppy disk or a magnetic tape) stores all the necessary data. This magnetic strip enables the credit card’s validation. The design has become an important variable as well; an enticing credit card design is essential to keeping the card’s information reliable.

A credit card is supplied to the user only after a bank approves an account, weighing a varied set of variables to ascertain financial reliability. This bank is the credit provider. When an individual makes a purchase, he must sign a receipt to verify the transaction; the receipt records the card details and the amount of money to be paid. Many shops take electronic authorization for credit cards and use cloud tokenization for authorization. Nearly all verifications are made using a digital verification system, which makes it possible to confirm that the card is valid. Any retailer may also check whether the customer has enough credit to cover the purchase he is attempting to make while staying within his credit limit.
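
Card numbers themselves also carry a built-in checksum, the Luhn algorithm, which lets a terminal reject mistyped or obviously invalid numbers before any authorization request is sent. A minimal sketch in Python (the sample number is a well-known test number, not a real account):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digits pass the Luhn checksum used by card numbers."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    # From the right, double every second digit; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: a standard test number
print(luhn_valid("4111 1111 1111 1112"))  # False: last digit corrupted
```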

As the credit provider, it is up to the bank to keep the user informed of his statement. Banks typically send monthly statements detailing each transaction processed through the card, the outstanding fees, and the amounts owed. This enables the cardholder to check that all the payments are correct and to spot mistakes or fraudulent activity to dispute. The bank typically charges interest on the outstanding balance and establishes a minimum repayment amount due by the end of the following billing cycle.

The precise way the interest is charged is normally set out in an initial agreement, and the provider specifies these elements on the back of the credit card statement. Generally, the credit card is a simple form of revolving credit from one month to the next. It can also be a sophisticated financial instrument with multiple balance segments, affording a greater degree of credit management. Interest rates may also differ from one card to another. Credit card promotion services use appealing incentives to keep their customers and to find some new ones along the way.
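
As a rough illustration of how a revolving balance accrues interest from one cycle to the next, here is a minimal sketch; the APR, balance, and minimum-payment rule are invented for the example, since each card’s actual terms come from its initial agreement:

```python
# Toy revolving-credit month: numbers are illustrative, not any card's real terms.
balance = 1_000.00        # balance carried into the cycle
apr = 0.1999              # annual percentage rate from the card agreement
monthly_rate = apr / 12   # simple monthly periodic rate

interest = balance * monthly_rate
new_balance = balance + interest

# One common minimum-payment rule: a percentage of the balance, with a floor.
minimum_payment = max(25.00, 0.02 * new_balance)

print(f"Interest charged: ${interest:.2f}")        # $16.66
print(f"New balance:      ${new_balance:.2f}")     # $1016.66
print(f"Minimum payment:  ${minimum_payment:.2f}") # $25.00
```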

Why Get Help From A Property Management Company?

One solution for enjoying the revenue of your rental home while removing much of the anxiety is to contact and engage property management in Oklahoma City, Oklahoma. If you are considering this option and wish to know more, please read the remainder of this post. As many landlords understand, leasing out your piece of real property can be a real cash cow, but that cash flow usually comes with a tremendous concern. Late-night phone calls from tenants, the trouble of marketing the house when you have a vacancy, overdue lease payments that you must chase down, and overflowing lavatories take a lot of the pleasure out of earning money off of leases. One solution that preserves the earnings while removing much of the anxiety is to engage a property management organization.

These businesses act as the go-between for the tenant and you. The tenant never actually needs to know who you are when you hire a property management company. The company manages the day-to-day relationship with the tenant while you still retain the ability to make the final judgments regarding the home. If you have a vacant unit, the company can manage the marketing for you. Since the company has more connections in a bigger market, and knows the industry better than you do, you will discover your unit gets filled a whole lot more quickly with their aid. In addition, the property management company can take care of screening prospective tenants and help prospects move in by partnering with the right home services and moving company. Depending on the arrangement you have, you may still retain the final say on whether a tenant is qualified for the unit, but the day-to-day difficulty of finding a suitable tenant is no longer your problem. They will also manage the move-in inspections as well as the inspections required after a tenant moves out.

Once the unit is filled, you can step back and watch the profits. If there is an issue, the company will handle communication with the tenant. You will not be telephoned if a pipe bursts in the middle of the night. The tenant calls your representative at the company, who then makes the arrangements required to get the issue repaired by a maintenance provider. You may get a phone call a day later, or you may not know there was a problem until you check in with the business. The property management organization will also collect your rental payments. If a tenant is missing payments, the company will do what is required to collect. In certain arrangements, the organization will also take over paying the taxes, insurance, and mortgage on the piece of property. You need do nothing but enjoy the revenue that is sent your way after all the bills are paid.

With all these advantages, you are probably wondering what the downside to employing a property management organization must be. The primary factor that stops some landlords from hiring one is the price: all of these services must be paid for. You must weigh the price against the time you will save, time that you can then use to pursue additional revenue-producing efforts or simply to enjoy the fruits of your investment.

Benefits From Orthodontic Care

Orthodontics is the specialty of dentistry centered on the diagnosis and treatment of dental and related facial problems. The outcomes of Norman Orthodontist OKC treatment can be dramatic: lovely grins, improved oral health, and enhanced aesthetics and facial harmony, amounting to an improved quality of life for many individuals of all ages. Whether cosmetic dentistry attention is needed or not is an individual’s own choice. Most folks tolerate conditions like various kinds of bite issues or overbites and don’t get treated. Nevertheless, many of us feel more assured with teeth that are properly aligned, appealing, and simpler to care for. Orthodontic care can enhance structure, appearance, and strength. It may also help you speak with clarity or chew better.

Orthodontic care isn’t only cosmetic in character. It can also benefit long-term oral health. Straight, properly aligned teeth are easier to floss and clean, which can ease cleaning and decrease the risk of decay. It may also stop the gum irritation known as gingivitis, which occurs once microorganisms cluster around the area where the teeth and the gums meet. Untreated gingivitis can end in periodontitis, and such a condition can destroy the bone that surrounds the teeth and result in tooth loss. People with harmful bites chew less efficiently, and a few of us with a serious bite problem might have difficulty obtaining enough nutrients; this can happen when the teeth aren’t aligned correctly. Repairing bite issues can make it easier to chew and digest meals.

One may also have speech problems when the top and lower front teeth do not align right. These can be fixed through therapy, occasionally combined with medical help. Finally, treatment may help avoid early wear of the rear teeth. As you chew down, your teeth endure an enormous amount of pressure; if your top teeth do not meet properly, your back teeth will degrade. The most frequently encountered forms of therapy are braces (or retainers) and headgear. However, many people complain about discomfort with this technique, which, unfortunately, is unavoidable. Braces can hurt during sports, and some individuals have trouble talking. Dental practitioners, though, say the soreness normally disappears within several days. Occasionally braces cause annoyance; if you would like to avoid more unpleasant sensations, stick to fresh, soft, bland food. In addition, do not take your braces off unless the medical professional says so.

It is advised that you see your medical professional often for examinations to prevent possible problems that may appear while undergoing therapy. If necessary, you will be prescribed a specific dental hygiene regimen. Dental specialists today look out for and manage malocclusion. Orthodontia, the relevant specialization of medicine, mainly targets repairing jaw problems and teeth, and thus your smile and your bite. Dentists, however, do not only do jaw remedies and emergency dental work; they also handle mild to severe dental conditions that may grow into risky states. You do not have to live your whole life measured by a single predicament. See a dental specialist, and you will soon notice just how stunning your smile can be.

In MIT visit, Dropbox CEO Drew Houston ’05 explores the accelerated shift to distributed work

When the cloud storage firm Dropbox decided to shut down its offices with the outbreak of the Covid-19 pandemic, co-founder and CEO Drew Houston ’05 had to send the company’s nearly 3,000 employees home and tell them they were not coming back to work anytime soon. “It felt like I was announcing a snow day or something,” he recalls.

In the early days of the pandemic, Houston says that Dropbox reacted as many others did to ensure that employees were safe and customers were taken care of. “It’s surreal, there’s no playbook for running a global company in a pandemic over Zoom. For a lot of it we were just taking it as we go.”

Houston talked about his experience leading Dropbox through a public health crisis and how Covid-19 has accelerated a shift to distributed work in a fireside chat on Oct. 14 with Dan Huttenlocher, dean of the MIT Stephen A. Schwarzman College of Computing.

During the discussion, Houston also spoke about his $10 million gift to MIT, which will endow the first shared professorship between the MIT Schwarzman College of Computing and the MIT Sloan School of Management, as well as provide a catalyst startup fund for the college.

“The goal is to find ways to unlock more of our brainpower through a multidisciplinary approach between computing and management,” says Houston. “It’s often at the intersection of these disciplines where you can bring people together from different perspectives, where you can have really big unlocks. I think academia has a huge role to play [here], and I think MIT is super well-positioned to lead. So, I want to do anything I can to help with that.”

Virtual first

While the abrupt swing to remote work was unexpected, Houston says it was pretty clear that the entire way of working as we knew it was going to change indefinitely for knowledge workers. “There’s a silver lining in every crisis,” says Houston, noting that people have been using Dropbox for years to work more flexibly so it made sense for the company to lean in and become early adopters of a distributed work paradigm in which employees work in different physical locations.

Dropbox proceeded to redesign the work experience throughout the company, unveiling a “virtual first” working model in October 2020 in which remote work is the primary experience for all employees. Individual work spaces went by the wayside and offices located in areas with a high concentration of employees were converted into convening and collaborative spaces called Dropbox Studios for in-person work with teammates.

“There’s a lot we could say about Covid, but for me, the most significant thing is that we’ll look back at 2020 as the year we shifted permanently from working out of offices to primarily working out of screens. It’s a transition that’s been underway for a while, but Covid completely finished the swing,” says Houston.

Designing for the future workplace

Houston says the pandemic also prompted Dropbox to reevaluate its product line and begin thinking of ways to make improvements. “We’ve had this whole new way of working sort of forced on us. No one designed it; it just happened. Even tools like Zoom, Slack, and Dropbox were designed in and for the old world.”

Undergoing that process helped Dropbox gain clarity on where they could add value and led to the realization that they needed to get back to their roots. “In a lot of ways, what people need today in principle is the same thing they needed in the beginning — one place for all their stuff,” says Houston.

Dropbox reoriented its product roadmap to refocus efforts from syncing files to organizing cloud content. The company is focused on building toward this new direction with the release of new automation features that users can easily implement to better organize their uploaded content and find it quickly. Dropbox also recently announced the acquisition of Command E, a universal search and productivity company, to help accelerate its efforts in this space.

Houston views Dropbox as still evolving and sees many opportunities ahead in this new era of distributed work. “We need to design better tools and smarter systems. It’s not just the individual parts, but how they’re woven together.” He’s surprised by how little intelligence is actually integrated into current systems and believes that rapid advances in AI and machine learning will soon lead to a new generation of smart tools that will ultimately reshape the nature of work — “in the same way that we had a new generation of cloud tools revolutionize how we work and had all these advantages that we couldn’t imagine not having now.”

Founding roots

Houston famously turned his frustration with carrying USB drives and emailing files to himself into a demo for what became Dropbox.

After graduating from MIT in 2005 with a bachelor’s degree in electrical engineering and computer science, he teamed up with fellow classmate Arash Ferdowsi to found Dropbox in 2007 and led the company’s growth from a simple idea to a service used by 700 million people around the world today.

Houston credits MIT for preparing him well for his entrepreneurial journey, recalling that what surprised him most about his student experience was how much he learned outside the classroom. At the event, he stressed the importance of developing both sides of the brain to a select group of computer science and management students who were in attendance, and a broader live stream audience. “One thing you learn about starting a company is that the hardest problems are usually not technical problems; they’re people problems.” He says that he didn’t realize it at the time, but some of his first lessons in management were gained by taking on responsibilities in his fraternity and in various student organizations that evoked a sense of being “on the hook.”

As CEO, Houston has had a chance to look behind the curtain at how things happen and has come to appreciate that problems don’t solve themselves. While individual people can make a huge difference, he explains that many of the challenges the world faces right now are inherently multidisciplinary ones, which sparked his interest in the MIT Schwarzman College of Computing.

He says that the mindset embodied by the college to connect computing with other disciplines resonated and inspired him to initiate his biggest philanthropic effort to date sooner rather than later because “we don’t have that much time to address these problems.”

The reasons behind lithium-ion batteries’ rapid cost decline

Lithium-ion batteries, those marvels of lightweight power that have made possible today’s age of handheld electronics and electric vehicles, have plunged in cost since their introduction three decades ago at a rate similar to the drop in solar panel prices, as documented by a study published last March. But what brought about such an astonishing cost decline, of about 97 percent?
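
For a sense of scale, a 97 percent decline implies that costs fell by roughly 11 percent per year on average over three decades. A quick back-of-the-envelope check (the 30-year span is an assumption for illustration):

```python
# Average annual rate implied by a 97% total decline over roughly 30 years.
remaining_fraction = 0.03            # a 97 percent decline leaves 3 percent
years = 30                           # assumed span, for illustration only
annual_factor = remaining_fraction ** (1 / years)
print(f"Average decline: {1 - annual_factor:.1%} per year")  # ~11.0% per year
```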

Some of the researchers behind that earlier study have now analyzed what accounted for the extraordinary savings. They found that by far the biggest factor was work on research and development, particularly in chemistry and materials science. This outweighed the gains achieved through economies of scale, though that turned out to be the second-largest category of reductions.

The new findings are being published today in the journal Energy and Environmental Science, in a paper by MIT postdoc Micah Ziegler, recent graduate student Juhyun Song PhD ’19, and Jessika Trancik, a professor in MIT’s Institute for Data, Systems and Society.

The findings could be useful for policymakers and planners to help guide spending priorities in order to continue the pathway toward ever-lower costs for this and other crucial energy storage technologies, according to Trancik. Their work suggests that there is still considerable room for further improvement in electrochemical battery technologies, she says.

The analysis required digging through a variety of sources, since much of the relevant information consists of closely held proprietary business data. “The data collection effort was extensive,” Ziegler says. “We looked at academic articles, industry and government reports, press releases, and specification sheets. We even looked at some legal filings that came out. We had to piece together data from many different sources to get a sense of what was happening.” He says they collected “about 15,000 qualitative and quantitative data points, across 1,000 individual records from approximately 280 references.”

Data from the earliest times are hardest to access and can have the greatest uncertainties, Trancik says, but by comparing different data sources from the same period they have attempted to account for these uncertainties.

Overall, she says, “we estimate that the majority of the cost decline, more than 50 percent, came from research-and-development-related activities.” That included both private sector and government-funded research and development, and “the vast majority” of that cost decline within that R&D category came from chemistry and materials research.

That was an interesting finding, she says, because “there were so many variables that people were working on through very different kinds of efforts,” including the design of the battery cells themselves, their manufacturing systems, supply chains, and so on. “The cost improvement emerged from a diverse set of efforts and many people, and not from the work of only a few individuals.”

The findings about the importance of investment in R&D were especially significant, Ziegler says, because much of this investment happened after lithium-ion battery technology was commercialized, a stage at which some analysts thought the research contribution would become less significant. Over roughly a 20-year period starting five years after the batteries’ introduction in the early 1990s, he says, “most of the cost reduction still came from R&D. The R&D contribution didn’t end when commercialization began. In fact, it was still the biggest contributor to cost reduction.”

The study took advantage of an analytical approach that Trancik and her team initially developed to analyze the similarly precipitous drop in costs of silicon solar panels over the last few decades. They also applied the approach to understand the rising costs of nuclear energy. “This is really getting at the fundamental mechanisms of technological change,” she says. “And we can also develop these models looking forward in time, which allows us to uncover the levers that people could use to improve the technology in the future.”

One advantage of the methodology Trancik and her colleagues have developed, she says, is that it helps to sort out the relative importance of different factors when many variables are changing all at once, which typically happens as a technology improves. “It’s not simply adding up the cost effects of these variables,” she says, “because many of these variables affect many different cost components. There’s this kind of intricate web of dependencies.” But the team’s methodology makes it possible to “look at how that overall cost change can be attributed to those variables, by essentially mapping out that network of dependencies,” she says.
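
As a toy illustration of the idea (not the paper’s actual model), suppose a cell’s cost were simply its materials cost divided by its energy density. In log terms, the overall cost change then splits exactly into one contribution per variable; the dependency-mapping analysis generalizes this to many interacting cost components. All numbers below are invented:

```python
import math

# Hypothetical values for an early and a later period.
materials_cost = {"early": 100.0, "late": 60.0}   # $ per cell, invented
energy_density = {"early": 1.0, "late": 2.5}      # relative units, invented

# cost = materials_cost / energy_density, so the log-cost change decomposes exactly.
total_change = math.log((materials_cost["late"] / energy_density["late"]) /
                        (materials_cost["early"] / energy_density["early"]))
contrib = {
    "materials_cost": math.log(materials_cost["late"] / materials_cost["early"]),
    "energy_density": -math.log(energy_density["late"] / energy_density["early"]),
}
for name, value in contrib.items():
    print(f"{name}: {value / total_change:.0%} of the log-cost change")
assert abs(sum(contrib.values()) - total_change) < 1e-12
```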

This can help provide guidance on public spending, private investments, and other incentives. “What are all the things that different decision makers could do?” she asks. “What decisions do they have agency over so that they could improve the technology, which is important in the case of low-carbon technologies, where we’re looking for solutions to climate change and we have limited time and limited resources? The new approach allows us to potentially be a bit more intentional about where we make those investments of time and money.”

“This paper collects data available in a systematic way to determine changes in the cost components of lithium-ion batteries between 1990-1995 and 2010-2015,” says Laura Diaz Anadon, a professor of climate change policy at Cambridge University, who was not connected to this research. “This period was an important one in the history of the technology, and understanding the evolution of cost components lays the groundwork for future work on mechanisms and could help inform research efforts in other types of batteries.”

The research was supported by the Alfred P. Sloan Foundation, the Environmental Defense Fund, and the MIT Technology and Policy Program.

Giving robots social skills

Robots can deliver food on a college campus and hit a hole-in-one on the golf course, but even the most sophisticated robot can’t perform basic social interactions that are critical to everyday human life.

MIT researchers have now incorporated certain social interactions into a framework for robotics, enabling machines to understand what it means to help or hinder one another, and to learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders this other robot based on its own goals.

The researchers also showed that their model creates realistic and predictable social interactions. When they showed videos of these simulated robots interacting with one another to humans, the human viewers mostly agreed with the model about what type of social behavior was occurring.

Enabling robots to exhibit social skills could lead to smoother and more positive human-robot interactions. For instance, a robot in an assisted living facility could use these capabilities to help create a more caring environment for elderly individuals. The new model may also enable scientists to measure social interactions quantitatively, which could help psychologists study autism or analyze the effects of antidepressants.

“Robots will live in our world soon enough, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially,” says Boris Katz, principal research scientist and head of the InfoLab Group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).

Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a research scientist at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.

A social simulation

To study social interactions, the researchers created a simulated environment where robots pursue physical and social goals as they move around a two-dimensional grid.

A physical goal relates to the environment. For example, a robot’s physical goal might be to navigate to a tree at a certain point on the grid. A social goal involves guessing what another robot is trying to do and then acting based on that estimation, like helping another robot water the tree.

The researchers use their model to specify what a robot’s physical goals are, what its social goals are, and how much emphasis it should place on one over the other. The robot is rewarded for actions it takes that get it closer to accomplishing its goals. If a robot is trying to help its companion, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. The planner, an algorithm that decides which actions the robot should take, uses this continually updating reward to guide the robot to carry out a blend of physical and social goals.
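
A minimal sketch of that reward blending (the function and parameter names here are illustrative, not the paper’s actual code):

```python
def blended_reward(physical_reward: float,
                   estimated_other_reward: float,
                   social_weight: float,
                   attitude: str) -> float:
    """Combine a robot's physical reward with a social term.

    A helper's social term tracks the other robot's estimated reward, a
    hinderer's term is its opposite, and social_weight sets how much
    emphasis the robot places on social versus physical goals.
    """
    sign = {"help": 1.0, "hinder": -1.0, "neutral": 0.0}[attitude]
    return physical_reward + social_weight * sign * estimated_other_reward

# A helpful robot that values its companion's progress as much as its own:
print(blended_reward(physical_reward=0.5, estimated_other_reward=0.8,
                     social_weight=1.0, attitude="help"))  # 1.3
```

The planner would recompute this reward at every step as the robot’s estimate of its companion’s goal is updated.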

“We have opened a new mathematical framework for how you model social interaction between two agents. If you are a robot, and you want to go to location X, and I am another robot and I see that you are trying to go to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another better X, or taking whatever action you had to take at X. Our formulation allows the plan to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.

Blending a robot’s physical and social goals is important to create realistic interactions, since humans who help one another have limits to how far they will go. For instance, a rational person likely wouldn’t just hand a stranger their wallet, Barbu says.

The researchers used this mathematical framework to define three types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has physical and social goals but assumes all other robots only have physical goals. Level 1 robots can take actions based on the physical goals of other robots, like helping and hindering. A level 2 robot assumes other robots have social and physical goals; these robots can take more sophisticated actions like joining in to help together.

Evaluating the model

To see how their model compared to human perspectives about social interactions, they created 98 different scenarios with robots at levels 0, 1, and 2. Twelve humans watched 196 video clips of the robots interacting, and then were asked to estimate the physical and social goals of those robots.

In most instances, their model agreed with what the humans thought about the social interactions that were occurring in each frame.

“We have this long-term interest, both to build computational models for robots, but also to dig deeper into the human aspects of this. We want to find out what features from these videos humans are using to understand social interactions. Can we make an objective test for your ability to recognize social interactions? Maybe there is a way to teach people to recognize these social interactions and improve their abilities. We are a long way from this, but even just being able to measure social interactions effectively is a big step forward,” Barbu says.

Toward greater sophistication

The researchers are working on developing a system with 3D agents in an environment that allows many more types of interactions, such as the manipulation of household objects. They are also planning to modify their model to include environments where actions can fail.

The researchers also want to incorporate a neural network-based robot planner into the model, which learns from experience and performs faster. Finally, they hope to run an experiment to collect data about the features humans use to determine if two robots are engaging in a social interaction.

“Hopefully, we will have a benchmark that allows all researchers to work on these social interactions and inspire the kinds of science and engineering advances we’ve seen in other areas such as object and action recognition,” Barbu says.

“I think this is a lovely application of structured reasoning to a complex yet urgent challenge,” says Tomer Ullman, assistant professor in the Department of Psychology at Harvard University and head of the Computation, Cognition, and Development Lab, who was not involved with this research. “Even young infants seem to understand social interactions like helping and hindering, but we don’t yet have machines that can perform this reasoning at anything like human-level flexibility. I believe models like the ones proposed in this work, that have agents thinking about the rewards of others and socially planning how best to thwart or support them, are a good step in the right direction.”

This research was supported by the Center for Brains, Minds, and Machines; the National Science Foundation; the MIT CSAIL Systems that Learn Initiative; the MIT-IBM Watson AI Lab; the DARPA Artificial Social Intelligence for Successful Teams program; the U.S. Air Force Research Laboratory; the U.S. Air Force Artificial Intelligence Accelerator; and the Office of Naval Research.

Toward speech recognition for uncommon spoken languages

Automated speech-recognition technology has become more common with the popularity of virtual assistants like Siri, but many of these systems only perform well with the most widely spoken of the world’s roughly 7,000 languages.

Because these systems largely don’t exist for less common languages, the millions of people who speak them are cut off from many technologies that rely on speech, from smart home devices to assistive technologies and translation services.

Recent advances have enabled machine learning models that can learn the world’s uncommon languages, which lack the large amount of transcribed speech needed to train algorithms. However, these solutions are often too complex and expensive to be applied widely.

Researchers at MIT and elsewhere have now tackled this problem by developing a simple technique that reduces the complexity of an advanced speech-learning model, enabling it to run more efficiently and achieve higher performance.

Their technique involves removing unnecessary parts of a common, but complex, speech recognition model and then making minor adjustments so it can recognize a specific language. Because only small tweaks are needed once the larger model is cut down to size, it is much less expensive and time-consuming to teach this model an uncommon language.

This work could help level the playing field and bring automatic speech-recognition systems to many areas of the world where they have yet to be deployed. The systems are important in some academic environments, where they can assist students who are blind or have low vision, and are also being used to improve efficiency in health care settings through medical transcription and in the legal field through court reporting. Automatic speech-recognition can also help users learn new languages and improve their pronunciation skills. This technology could even be used to transcribe and document rare languages that are in danger of vanishing.  

“This is an important problem to solve because we have amazing technology in natural language processing and speech recognition, but taking the research in this direction will help us scale the technology to many more underexplored languages in the world,” says Cheng-I Jeff Lai, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of the paper.

Lai wrote the paper with fellow MIT PhD students Alexander H. Liu, Yi-Lun Liao, Sameer Khurana, and Yung-Sung Chuang; his advisor and senior author James Glass, senior research scientist and head of the Spoken Language Systems Group in CSAIL; MIT-IBM Watson AI Lab research scientists Yang Zhang, Shiyu Chang, and Kaizhi Qian; and David Cox, the IBM director of the MIT-IBM Watson AI Lab. The research will be presented at the Conference on Neural Information Processing Systems in December.

Learning speech from audio

The researchers studied a powerful neural network that has been pretrained to learn basic speech from raw audio, called Wave2vec 2.0.

A neural network is a series of algorithms that can learn to recognize patterns in data; modeled loosely off the human brain, neural networks are arranged into layers of interconnected nodes that process data inputs.

Wave2vec 2.0 is a self-supervised learning model, so it learns to recognize a spoken language after it is fed a large amount of unlabeled speech. The training process only requires a few minutes of transcribed speech. This opens the door for speech recognition of uncommon languages that lack large amounts of transcribed speech, like Wolof, which is spoken by 5 million people in West Africa.

However, the neural network has about 300 million individual connections, so it requires a massive amount of computing power to train on a specific language.

The researchers set out to improve the efficiency of this network by pruning it. Just like a gardener cuts off superfluous branches, neural network pruning involves removing connections that aren’t necessary for a specific task, in this case, learning a language. Lai and his collaborators wanted to see how the pruning process would affect this model’s speech recognition performance.

After pruning the full neural network to create a smaller subnetwork, they trained the subnetwork with a small amount of labeled Spanish speech and then again with French speech, a process called finetuning.  

“We would expect these two models to be very different because they are finetuned for different languages. But the surprising part is that if we prune these models, they will end up with highly similar pruning patterns. For French and Spanish, they have 97 percent overlap,” Lai says.

They ran experiments using 10 languages, from Romance languages like Italian and Spanish to languages that have completely different alphabets, like Russian and Mandarin. The results were the same — the finetuned models all had a very large overlap.

A simple solution

Drawing on that unique finding, they developed a simple technique to improve the efficiency and boost the performance of the neural network, called PARP (Prune, Adjust, and Re-Prune).

In the first step, a pretrained speech recognition neural network like Wave2vec 2.0 is pruned by removing unnecessary connections. Then in the second step, the resulting subnetwork is adjusted for a specific language, and then pruned again. During this second step, connections that had been removed are allowed to grow back if they are important for that particular language.

Because connections are allowed to grow back during the second step, the model only needs to be finetuned once, rather than over multiple iterations, which vastly reduces the amount of computing power required.
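
A rough sketch of the PARP loop under simple global magnitude pruning; the helper names and the re-pruning schedule are assumptions for illustration, not the authors’ released code:

```python
import torch

def magnitude_masks(model: torch.nn.Module, sparsity: float) -> list:
    """Global magnitude pruning: mask the smallest fraction of weights by |w|."""
    all_weights = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = max(1, int(all_weights.numel() * sparsity))
    threshold = all_weights.kthvalue(k).values
    return [(p.detach().abs() > threshold).float() for p in model.parameters()]

def apply_masks(model: torch.nn.Module, masks: list) -> None:
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)

def parp(model, train_step, batches, sparsity: float, reprune_every: int = 100):
    """Prune, Adjust, Re-Prune: finetune once, letting pruned weights grow back."""
    masks = magnitude_masks(model, sparsity)   # step 1: prune the pretrained model
    apply_masks(model, masks)
    for step, batch in enumerate(batches):
        train_step(model, batch)               # step 2: adjust on the target language;
        # zeroed weights still receive gradients, so connections important for
        # this language can grow back before the next re-pruning.
        if (step + 1) % reprune_every == 0:
            masks = magnitude_masks(model, sparsity)
            apply_masks(model, masks)          # re-prune
    return model, masks
```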

Testing the technique

The researchers put PARP to the test against other common pruning techniques and found that it outperformed them all for speech recognition. It was especially effective when there was only a very small amount of transcribed speech to train on.

They also showed that PARP can create one smaller subnetwork that can be finetuned for 10 languages at once, eliminating the need to prune separate subnetworks for each language, which could also reduce the expense and time required to train these models.

Moving forward, the researchers would like to apply PARP to text-to-speech models and also see how their technique could improve the efficiency of other deep learning networks.

“There are increasing needs to put large deep-learning models on edge devices. Having more efficient models allows these models to be squeezed onto more primitive systems, like cell phones. Speech technology is very important for cell phones, for instance, but having a smaller model does not necessarily mean it is computing faster. We need additional technology to bring about faster computation, so there is still a long way to go,” Zhang says.

Self-supervised learning (SSL) is changing the field of speech processing, so making SSL models smaller without degrading performance is a crucial research direction, says Hung-yi Lee, associate professor in the Department of Electrical Engineering and the Department of Computer Science and Information Engineering at National Taiwan University, who was not involved in this research.

“PARP trims the SSL models, and at the same time, surprisingly improves the recognition accuracy. Moreover, the paper shows there is a subnet in the SSL model, which is suitable for ASR tasks of many languages. This discovery will stimulate research on language/task agnostic network pruning. In other words, SSL models can be compressed while maintaining their performance on various tasks and languages,” he says.

This work is partially funded by the MIT-IBM Watson AI Lab and the 5k Language Learning Project.

3 Questions: Blending computing with other disciplines at MIT

The demand for computing-related training is at an all-time high. At MIT, there has been a remarkable tide of interest in computer science programs, with heavy enrollment from students studying everything from economics to life sciences eager to learn how computational techniques and methodologies can be used and applied within their primary field.

Launched in 2020, the Common Ground for Computing Education was created through the MIT Stephen A. Schwarzman College of Computing to meet the growing need for enhanced curricula that connect computer science and artificial intelligence with different domains. In order to advance this mission, the Common Ground is bringing experts across MIT together and facilitating collaborations among multiple departments to develop new classes and approaches that blend computing topics with other disciplines.

Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, and the chairs of the Common Ground Standing Committee — Jeff Grossman, head of the Department of Materials Science and Engineering and the Morton and Claire Goulder and Family Professor of Environmental Systems; and Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing, head of the Department of Electrical Engineering and Computer Science, and the MathWorks Professor of Electrical Engineering and Computer Science — discuss here the objectives of the Common Ground, pilot subjects that are underway, and ways they’re engaging faculty to create new curricula for MIT’s class of “computing bilinguals.”

Q: What are the objectives of the Common Ground and how does it fit into the mission of the MIT Schwarzman College of Computing?

Huttenlocher: One of the core components of the college mission is to educate students who are fluent in both the “language” of computing and that of other disciplines. Machine learning classes, for example, attract a lot of students outside of electrical engineering and computer science (EECS) majors. These students are interested in machine learning for modeling within the context of their fields of interest, rather than inner workings of machine learning itself as taught in Course 6. So, we need new approaches to how we develop computing curricula in order to provide students with a thorough grounding in computing that is relevant to their interests, to not just enable them to use computational tools, but understand conceptually how they can be developed and applied in their primary field, whether it be science, engineering, humanities, business, or design.

The core goals of the Common Ground are to infuse computing education throughout MIT in a coordinated manner, as well as to serve as a platform for multi-departmental collaborations. All classes and curricula developed through the Common Ground are intended to be created and offered jointly by multiple academic departments to meet ‘common’ needs. We’re bringing the forefront of rapidly-changing computer science and artificial intelligence fields together with the problems and methods of other disciplines, so the process has to be collaborative. As much as computing is changing thinking in the disciplines, the disciplines are changing the way people develop new computing approaches. It can’t be a stand-alone effort — otherwise it won’t work.

Q: How is the Common Ground facilitating collaborations and engaging faculty across MIT to develop new curricula?

Grossman: The Common Ground Standing Committee was formed to oversee the activities of the Common Ground and is charged with evaluating how best to support and advance program objectives. There are 29 members on the committee — all are faculty experts in various computing areas, and they represent 18 academic departments across all five MIT schools and the college. The structure of the committee very much aligns with the mission of the Common Ground in that it draws from all parts of the Institute. Members are organized into subcommittees currently centered on three primary focus areas: fundamentals of computational science and engineering; fundamentals of programming/computational thinking; and machine learning, data science, and algorithms. The subcommittees, with extensive input from departments, framed prototypes for what Common Ground subjects would look like in each area, and a number of classes have already been piloted to date.

It has been wonderful working with colleagues from different departments. The level of commitment that everyone on the committee has put into this effort has truly been amazing to see, and I share their enthusiasm for pursuing opportunities in computing education.

Q: Can you tell us more about the subjects that are already underway?

Ozdaglar: So far, we have four offerings for students to choose from: in the fall, Linear Algebra and Optimization, with the Department of Mathematics and EECS, and Programming Skills and Computational Thinking in-Context, with the Experimental Study Group and EECS; in the spring, Modeling with Machine Learning: From Algorithms to Applications, with disciplinary modules developed by multiple engineering departments and MIT Supply Chain Management; and during both semesters, Introduction to Computational Science and Engineering, a collaboration between the Department of Aeronautics and Astronautics and the Department of Mathematics.

We have had students from a range of majors take these classes, including mechanical engineering, physics, chemical engineering, economics, and management, among others. The response has been very positive. It is very exciting to see MIT students having access to these unique offerings. Our goal is to enable them to frame disciplinary problems using a rich computational framework, which is one of the objectives of the Common Ground.

We are planning to expand Common Ground offerings in the years to come and welcome ideas for new subjects. Some ideas currently in the works include classes on causal inference, creative programming, and data visualization with communication. In addition, this fall we put out a call for proposals to develop new subjects. We invited instructors from across the campus to submit ideas for pilot computing classes that are useful across a range of areas and support the educational mission of individual departments. The selected proposals will receive seed funding from the Common Ground to assist in the design, development, and staffing of new, broadly applicable computing subjects and the revision of existing subjects in alignment with the Common Ground’s objectives. We are explicitly looking to facilitate opportunities in which multiple departments would benefit from coordinated teaching.

Making machine learning more useful to high-stakes decision makers

The U.S. Centers for Disease Control and Prevention estimates that one in seven children in the United States experienced abuse or neglect in the past year. Child protective services agencies around the nation receive a high number of reports of alleged neglect or abuse each year (about 4.4 million in 2019). With so many cases, some agencies are implementing machine learning models to help child welfare specialists screen cases and determine which ones to recommend for further investigation.

But these models don’t do any good if the humans they are intended to help don’t understand or trust their outputs.

Researchers at MIT and elsewhere launched a research project to identify and tackle machine learning usability challenges in child welfare screening. In collaboration with a child welfare department in Colorado, the researchers studied how call screeners assess cases, with and without the help of machine learning predictions. Based on feedback from the call screeners, they designed a visual analytics tool that uses bar graphs to show how specific factors of a case contribute to the predicted risk that a child will be removed from their home within two years.

The researchers found that screeners are more interested in seeing how each factor, like the child’s age, influences a prediction, rather than understanding the computational basis of how the model works. Their results also show that even a simple model can cause confusion if its features are not described with straightforward language.

These findings could be applied to other high-risk fields where humans without data science experience use machine learning models to help them make decisions, says Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and senior author of the paper.

“Researchers who study explainable AI, they often try to dig deeper into the model itself to explain what the model did. But a big takeaway from this project is that these domain experts don’t necessarily want to learn what machine learning actually does. They are more interested in understanding why the model is making a different prediction than what their intuition is saying, or what factors it is using to make this prediction. They want information that helps them reconcile their agreements or disagreements with the model, or confirms their intuition,” he says.

Co-authors include electrical engineering and computer science PhD student Alexandra Zytek, who is the lead author; postdoc Dongyu Liu; and Rhema Vaithianathan, professor of economics and director of the Center for Social Data Analytics at the Auckland University of Technology and professor of social data analytics at the University of Queensland. The research will be presented later this month at the IEEE Visualization Conference.

Real-world research

The researchers began the study more than two years ago by identifying seven factors that make a machine learning model less usable, including lack of trust in where predictions come from and disagreements between user opinions and the model’s output.

With these factors in mind, Zytek and Liu flew to Colorado in the winter of 2019 to learn firsthand from call screeners in a child welfare department. This department is implementing a machine learning system developed by Vaithianathan that generates a risk score for each report, predicting the likelihood the child will be removed from their home. That risk score is based on more than 100 demographic and historic factors, such as the parents’ ages and past court involvements.
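The article does not describe the internals of the deployed system, but the general pattern it implies — a predictive model whose output probability is mapped onto a small integer scale — can be sketched. The following Python snippet is a minimal illustration of that idea only; the model class, the synthetic features, and the 1-20 binning are all assumptions for the example, not the actual system.

```python
# Minimal sketch (illustrative assumptions only): a probabilistic model
# trained on historical case features, with its predicted probability
# binned into a 1-20 risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in training data: rows are past reports, columns are 100
# demographic and historical factors (e.g., parents' ages, prior
# court involvement). All values here are synthetic.
X_train = rng.normal(size=(500, 100))
y_train = rng.integers(0, 2, size=500)  # 1 = child later removed from home

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def risk_score(case_features: np.ndarray) -> int:
    """Map the model's predicted probability onto a 1-20 score."""
    p = model.predict_proba(case_features.reshape(1, -1))[0, 1]
    return max(1, int(np.ceil(p * 20)))

print(risk_score(rng.normal(size=100)))  # a number between 1 and 20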

“As you can imagine, just getting a number between one and 20 and being told to integrate this into your workflow can be a bit challenging,” Zytek says.

They observed how teams of screeners process cases in about 10 minutes and spend most of that time discussing the risk factors associated with the case. That inspired the researchers to develop a case-specific details interface, which shows how each factor influenced the overall risk score using color-coded, horizontal bar graphs that indicate the magnitude of the contribution in a positive or negative direction.
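A chart along those lines is easy to picture with a short sketch. The factor names, contribution values, and color choices below are invented for illustration; this is not the actual interface, only the kind of signed, color-coded horizontal bar graph the article describes.

```python
# Illustrative feature-contribution chart: horizontal bars, color-coded
# by whether each factor pushed the predicted risk up or down.
import matplotlib.pyplot as plt

factors = ["Child's age", "Prior referrals", "Household size",
           "Parent's age", "Past court involvement"]
contributions = [-1.2, 2.4, 0.3, -0.5, 1.8]  # signed effect on risk score

# Red for risk-increasing factors, blue for risk-decreasing ones.
colors = ["#d62728" if c > 0 else "#1f77b4" for c in contributions]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(factors, contributions, color=colors)
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Contribution to predicted risk")
ax.set_title("Why this case received its score")
fig.tight_layout()
plt.show()
```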

Based on observations and detailed interviews, the researchers built four additional interfaces that provide explanations of the model, including one that compares a current case to past cases with similar risk scores. Then they ran a series of user studies.

The studies revealed that more than 90 percent of the screeners found the case-specific details interface to be useful, and it generally increased their trust in the model’s predictions. On the other hand, the screeners did not like the case comparison interface. While the researchers thought this interface would increase trust in the model, screeners were concerned it could lead to decisions based on past cases rather than the current report.   

“The most interesting result to me was that, the features we showed them — the information that the model uses — had to be really interpretable to start. The model uses more than 100 different features in order to make its prediction, and a lot of those were a bit confusing,” Zytek says.

Keeping the screeners in the loop throughout the iterative process helped the researchers make decisions about what elements to include in the machine learning explanation tool, called Sibyl.

As they refined the Sibyl interfaces, the researchers were careful to consider how providing explanations could contribute to some cognitive biases, and even undermine screeners’ trust in the model.

For instance, since explanations are based on averages in a database of child abuse and neglect cases, having three past abuse referrals may actually decrease the risk score of a child, since averages in this database may be far higher. A screener may see that explanation and decide not to trust the model, even though it is working correctly, Zytek explains. And because humans tend to put more emphasis on recent information, the order in which the factors are listed could also influence decisions.

Improving interpretability

Based on feedback from call screeners, the researchers are working to tweak the explanation model so the features that it uses are easier to explain.

Moving forward, they plan to enhance the interfaces they’ve created based on additional feedback and then run a quantitative user study to track the effects on decision making with real cases. Once those evaluations are complete, they can prepare to deploy Sibyl, Zytek says.

“It was especially valuable to be able to work so actively with these screeners. We got to really understand the problems they faced. While we saw some reservations on their part, what we saw more of was excitement about how useful these explanations were in certain cases. That was really rewarding,” she says.

This work is supported, in part, by the National Science Foundation.

3 Questions: Administering elections in a hyper-partisan era

Charles Stewart III is the Kenan Sahin Distinguished Professor of Political Science at MIT and a renowned expert on U.S. election administration. A founding member of the influential Caltech/MIT Voting Technology Project, Stewart also founded MIT’s Election Data and Science Lab, which recently teamed up with the American Enterprise Institute to release a major report: Lessons Learned from the 2020 Election. MIT SHASS Communications asked Stewart to share some additional insights on the state of U.S. elections in advance of November voting.

Q: The United States has a decentralized system of election administration, which means local jurisdictions have a lot of control over how votes are collected and counted. What are the pros and cons of this system — particularly at this moment, when partisan political efforts are highly focused on election administration?

A: The advantage of the American decentralized system is that the basic parameters of how people vote get decided locally. This has helped create a great deal of trust among voters about how their own votes are counted. Historically, the greatest disadvantage has been that the anti-democratic pockets of America — think of the pre-Voting Rights Act Deep South — have been able to suppress voting, sometimes brutally.

At present, however, all issues of consequence have become nationalized, and all policy choices — including those around voting — are therefore seen through the lens of the national parties, not local needs. This nationalization of politics has left little room for local election officials to experiment with new technologies and methodologies, and it has made election administration particularly toxic. Now, even those who trust how votes are counted in their own backyards are often deeply distrustful of how votes are counted elsewhere.

Thus, the question about what is best for Arizona or Georgia or California is not left simply to residents of those states, as in the past; today, it is the subject of attention from (often angry) partisan zealots elsewhere in the country. In such an environment, America’s decentralized — and thus naturally inconsistent — system can be a liability.

Another way that the decentralization of the system hurts election administration is often overlooked. Because each state is autonomous and often devolves authority down to the local level, it has been difficult to create standardized voting systems. This means there is no national market for technology and business solutions to the challenges of election administration — and yet, innovation is sorely needed. The American system of election administration was designed for voting in the 1880s, but the 2020s present a very different set of problems.

Many other policies that used to be hyper-local — public education, water and sewer service, public health, etc. — have often been consolidated into larger government units and there has been greater cooperation across towns and counties. States and the federal government have taken on a bigger role in funding them. But not elections. The result is that election administration often throws antiquated solutions at modern problems or, as in the case of the cybersecurity threat, is slow to react.

The reaction to the challenges of voting during the pandemic saw some movement toward more modern and coordinated management of election administration. States stepped in and provided centralized services, such as printing and processing mail ballots or developing online portals for voters to track mail ballots. The federal government provided nearly half a billion dollars to shore up security and meet the many demands on election managers as they quickly pivoted to new election modalities. One hopes that this momentum will carry into the near future, but efforts to re-litigate the 2020 election are a major distraction.

Q: What safeguards exist to ensure that future elections remain free of interference — particularly from those at the top echelons of political power?

A: The 2020 election showed the resilience of the fact-based part of the election administration system — election administrators, judges, and research institutions (including universities) — that have stood for the rule of law in the face of illiberal attacks on election administration. Opponents of fair elections recognize this and have attacked all parts of this fact-based bulwark. They are physically threatening election workers, trying to remove judicial oversight of election administration, and creating so-called “election integrity” think tanks to perpetuate disinformation about elections in America.

The fact-based part of election administration is robust, but we can’t be complacent about its health. The federal government is stepping up the protection of election workers and officials; states should do this as well. Unfortunately, some states have taken action that is less than helpful by passing laws that try to strip authority from local election officials and empower state legislatures to overturn the results of free and fair elections. I think there’s every reason to be concerned about these laws, but not because I think they will achieve these worrisome ends. The biggest worry is that such laws encourage doubt about outcomes and give those who lost elections a platform to sow that doubt.
 
It’s important to keep in mind that the first principles of election laws have not been overturned in these states, nor have constitutional guarantees. There is a judicial principle that says that if an election has been run under a set of rules established before the election, that election’s results must stand even if some of the rules may have been contestable beforehand. If partisan election officials or state legislatures want to throw out an election result because they don’t like the outcome, or on a pretext of unproven fraud, the courts will intervene.

Similarly, if local election officials are replaced for pretextual reasons, it’s hard to imagine a state or federal court letting this stand. Still, democracy will be damaged regardless of the outcomes of such disputes. Politicians will be given more opportunities to denigrate the voting process, and baseless conspiracy theories will be given a megaphone.

For now, I’m more worried about the culture of democracy than I am about whether winners will be properly certified. That can change if the assaults on neutral election administration continue.

One of the final challenges facing the free conduct of elections is how to stanch the stream of disinformation about elections that is the source of the populist energy centered on attacking the system. Even when the clown-show of a ballot review in Arizona had to conclude that Joe Biden legitimately won that state in the 2020 presidential election, the release of the reviewers’ report was used by an array of manipulative pundits to continue to sow doubt.

This is a misinformation problem that infects American public life generally: It’s not confined to election administration alone, or even to politics. Insisting on responsible behavior by the social media platforms is a necessary first step toward addressing the plague of misinformation, but it’s not likely to be enough. I think we are seeing the consequences of the half-century-long destruction of responsibly curated news sources in the name of economic disruption, and that problem and its consequences extend far beyond the administration of elections.

Q: Can you suggest some efforts — either by citizens, legislators, scholars, and/or pro-democracy organizations — that could effectively protect and strengthen democracy at this moment in the nation’s history?

A: The greatest effort to protect and strengthen democracy is voting itself. The consensus on the ground in states like North Carolina and Texas, where recent efforts have been made to raise barriers to voting, is that those legislative efforts have actually served to mobilize pro-election forces. Donating to candidates who are pro-democracy and working for their election is probably the most important thing citizens can do.

For scholars, we need to be laser-focused on what we do uniquely best, which is documenting the actual consequences of election laws on participation. Legislators who support barriers to voting often mis-estimate the consequences of such laws. Citizen groups who are hypervigilant about threats to democracy also may overestimate the power of some election laws to suppress or expand the vote. As scholars, we need to be independent voices in identifying the worst of the barriers, and we need to act to redress anti-democratic efforts, either through our publications or through litigation. If we don’t ground advocacy in science, those of us who study the workings of our democracy squander what we have to offer that is distinct.

Finally, I think we all have to be clear that the illiberal wind is blowing at a gale force in just one of the political parties. This is not a partisan statement, but a fact. Working to isolate the illiberal fringe of the Republican Party and protect those in the party who value open elections and political competition may be the most important thing of all, although how to do that is still not clear. My liberal friends, of whom I have hundreds, don’t like to hear it, but I think that saving the Republican Party from extreme illiberalism may be the most important pro-democracy activity in America. 

At the moment, it’s not clear how that might happen, but ideas are being suggested. In a recent New York Times op-ed, Miles Taylor and Christine Todd Whitman, both strong Republicans, argued that the best path to change is for Republicans to vote for Democrats in 2022. They also alluded to the possibility of creating a conservative third party based on more traditional Republican values, not anti-democratic ones. Changes to election laws that discourage victory by extreme candidates of the left and the right might also work. (Ranked-choice voting is one such popular reform.)

To be clear, changing voting rules to box out extremists or withholding votes from illiberal candidates will not purge the country of extreme anti-democratic movements. But, if we think that the biggest threat to upholding democratic values is the fact that political leaders believe they must appeal to anti-democratic elements, at least we can work to reduce the payoff to appealing to the fringes.

Chronicles of the epic mission to deliver Covid vaccines to the world

The race to deliver a Covid-19 vaccine has been likened to a moonshot, but in several ways landing a man on the moon was easier. In his new book, “A Shot in the Arm: How Science, Engineering, and Supply Chains Converged to Vaccinate the World” (MIT CTL Media, 2021), MIT Professor Yossi Sheffi recounts the vaccine’s extraordinary journey from scientific breakthroughs to coronavirus antidote and mass vaccination. And he explores how the mission could transform the fight against deadly diseases and other global-scale challenges.

“The historic Apollo moonshots built a dozen or so rockets to carry astronauts to a single location. In contrast, vaccine mission teams mass-produced billions of doses of a complex medication from a standing start, and delivered them to billions of individuals across the globe,” says Sheffi. This is a story of bold innovation and risk-taking, he notes, “and interdisciplinary teamwork that involved experts vital to the mission’s success, such as manufacturing engineers and supply chain managers.”

Like previous moonshot quests, this one was founded on revolutionary science. The book describes how the effort built on decades of biochemistry and microbiology research to develop Covid mRNA vaccines. The vaccines teach the human body how to recognize coronavirus invaders and neutralize them before they convert the body’s cells into virus factories.

However, a weapon is impotent without the means to make and distribute it. The book explains how governments joined forces with the scientific community and industry to fund, produce, and deliver the vaccine to a world in danger of losing the battle against the pandemic. The author characterizes this monumental endeavor as the greatest product launch in history. Along the way, the mission teams broke new ground in their respective fields.

The teams also made mistakes, and the book shows how these failures will inform future campaigns. Other obstacles in the way included disinformation, public mistrust of science and government, and political opportunism. Sheffi explores the root causes of these opposing forces and the societal implications.

“Yossi Sheffi offers strategic lessons behind the record-breaking development, production, and global delivery of the Covid vaccines and what they mean for the future,” says Bob Langer, the David H. Koch Institute Professor at MIT and co-founder of Moderna.

“A Shot in the Arm” ends on an optimistic note with a look at the Covid vaccine mission’s formidable legacy. In addition to providing templates for fighting pandemics, the effort has advanced immunology and highlighted the breathtaking potential of mRNA-based vaccines. Future vaccines could cure life-threatening illnesses including cancer, and when combined with other technologies, spur innovations in other fields such as agriculture. The book argues that the convergence of multiple disciplines, industries, and sectors — which resulted in the vaccine — provides a blueprint for humanity for tackling global challenges. These include poverty, food and water security, and climate change, particularly in getting from R&D and lab work to scaling innovations addressing such challenges.

Fifteen MIT faculty honored as “Committed to Caring” for 2021-23

In a normal academic year at MIT, the guidance and mentoring that faculty advisors offer their graduate students are of paramount importance. This became only more true during the ongoing Covid-19 pandemic, which thrust the entire world into uncertainty.

Very suddenly, activities that were once commonplace became shrouded in fear; people were confined to their homes, unable to see family and friends; and academic life at MIT was completely disrupted overnight. Many graduate students were left unsure of what would happen to their coursework and research, and caring mentors became even more of a lifeline.

Throughout the pandemic, numerous faculty members have stepped up to support and guide their graduate students in unique and impactful ways, through efforts such as championing diversity, equity, and inclusion programs within their departments; respecting students’ mental health concerns and finding appropriate ways to accommodate them; and fostering community within their advising groups and departments.

Through a process driven by graduate student involvement, from the submission of nomination letters to the selection of honorees, the Committed to Caring (C2C) program at MIT recognizes faculty members who go above and beyond in their mentorship of graduate students. In light of their exceptional efforts, 15 MIT faculty members have been recognized by the C2C program for the 2021-23 cycle, joining the ranks of 60 previous C2C honorees.

The following faculty members are the 2021-23 Committed to Caring honorees:

  • Angelika Amon, Department of Biology (posthumously)
  • Athulya Aravind, Department of Linguistics
  • Mariana Arcaya, Department of Urban Studies and Planning
  • David Autor, Department of Economics
  • Michael Birnbaum, Department of Biological Engineering
  • Irmgard Bischofberger, Department of Mechanical Engineering
  • Devin Bunten, Department of Urban Studies and Planning
  • Esther Duflo, Department of Economics
  • Jeffrey Grossman, Department of Materials Science and Engineering
  • Janelle Knox-Hayes, Department of Urban Studies and Planning
  • Karthish Manthiram, Department of Chemical Engineering 
  • Miho Mazereeuw, Department of Architecture
  • Kerstin Perez, Department of Physics
  • Arvind Satyanarayan, Department of Electrical Engineering and Computer Science
  • Ben Schneider, Department of Political Science

Careful consideration of the nominees

Every other year since the C2C program was founded in 2014, the Office of Graduate Education solicits nominations from graduate students. Each nomination letter calls attention to a specific faculty member for their outstanding mentorship skills and practices. A selection committee, made up of graduate students, staff members, and graduate administrators, deliberates and selects the faculty members who have demonstrated a genuine commitment to the success and well-being of their graduate students.

This year, the selection criteria included the extent of the faculty members’ mentorship and caring actions; their dedication to ensuring students’ academic and professional success; and their willingness to develop their mentorship style and how they can best support students. Of particular note to the selection committee was how faculty members responded to the Covid-19 crisis and altered their mentorship styles to best support students throughout this trying time. Additionally, increased attention was paid to faculty members’ efforts to make meaningful strides toward advancing diversity in their departments and across the Institute. 

This year’s selection committee included graduate students Ellie Immerman (2019-21 C2C graduate community fellow; Program in Science, Technology, and Society); Daniel Korsun (2021-22 C2C graduate community fellow; Department of Nuclear Science and Engineering); Sidhant Pai (Department of Civil and Environmental Engineering); Neha Sunil (Department of Mechanical Engineering); Paula do Vale Pereira (Department of Aeronautics and Astronautics); and Raspberry Simpson (Department of Nuclear Science and Engineering). These students were joined on the committee by Assistant Dean for Graduate Education Gaurav Jashnani (Office of Graduate Education), Academic Administrator Jennifer Weisman (Department of Chemistry), Director of Special Projects Rachel Beingessner (Office of the Associate Provost), and Associate Dean for Graduate Education Suraiya Baluch (Office of Graduate Education).

Baluch, the chair of the selection committee, notes, “What spoke to me during this process was the great lengths to which faculty went to support their students throughout the pandemic, as well as how meaningful and impactful these efforts were to their students. These faculty members have truly made MIT a supportive learning environment for their students.”

This year’s honorees have demonstrated an impassioned commitment to improving the lives of their graduate students through mentorship. By recognizing the impact that these professors have had on their students, the Committed to Caring program hopes to reinforce the MIT community’s dedication to fostering a respectful learning culture.

Saving seaweed with machine learning

Last year, Charlene Xia ’17, SM ’20 found herself at a crossroads. She was finishing up her master’s degree in media arts and sciences from the MIT Media Lab and had just submitted applications to doctoral degree programs. All Xia could do was sit and wait. In the meantime, she narrowed down her career options, regardless of whether she was accepted to any program.

“I had two thoughts: I’m either going to get a PhD to work on a project that protects our planet, or I’m going to start a restaurant,” recalls Xia.

Xia pored over her extensive cookbook collection, researching international cuisines as she anxiously awaited word about her graduate school applications. She even looked into the cost of a food truck permit in the Boston area. Just as she started hatching plans to open a plant-based skewer restaurant, Xia received word that she had been accepted into the mechanical engineering graduate program at MIT.

Shortly after starting her doctoral studies, Xia’s advisor, Professor David Wallace, approached her with an interesting opportunity. MathWorks, a software company known for developing the MATLAB computing platform, had announced a new seed funding program in MIT’s Department of Mechanical Engineering. The program encouraged collaborative research projects focused on the health of the planet.

“I saw this as a super-fun opportunity to combine my passion for food, my technical expertise in ocean engineering, and my interest in sustainably helping our planet,” says Xia.

Wallace knew Xia would be up to the task of taking an interdisciplinary approach to solve an issue related to the health of the planet. “Charlene is a remarkable student with extraordinary talent and deep thoughtfulness. She is pretty much fearless, embracing challenges in almost any domain with the well-founded belief that, with effort, she will become a master,” says Wallace.

Alongside Wallace and Associate Professor Stefanie Mueller, Xia proposed a project to predict and prevent the spread of diseases in aquaculture. The team focused on seaweed farms in particular.

Already popular in East Asian cuisines, seaweed holds tremendous potential as a sustainable food source for the world’s ever-growing population. In addition to its nutritive value, seaweed combats various environmental threats. It helps fight climate change by absorbing excess carbon dioxide in the atmosphere, and can also absorb fertilizer run-off, keeping coasts cleaner.

As with so much of marine life, seaweed is threatened by the very thing it helps mitigate: climate change. Climate stressors like warm temperatures or minimal sunlight encourage the growth of harmful bacteria, leading to conditions such as ice-ice disease. Within days, entire seaweed farms can be decimated by unchecked bacterial growth.

To solve this problem, Xia turned to the microbiota present in these seaweed farms as a predictive indicator of any threat to the seaweed or livestock. “Our project is to develop a low-cost device that can detect and prevent diseases before they affect seaweed or livestock by monitoring the microbiome of the environment,” says Xia.

The team pairs old technology with the latest in computing. Using a submersible digital holographic microscope, they take a 2D image. They then use a machine learning system known as a neural network to convert the 2D image into a representation of the microbiome present in the 3D environment.

“Using a machine learning network, you can take a 2D image and reconstruct it almost in real time to get an idea of what the microbiome looks like in a 3D space,” says Xia.
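One plausible way to frame that 2D-to-3D step, purely as a sketch: a convolutional network that takes the single-channel hologram image and emits one output channel per depth plane of the reconstructed volume. The architecture, sizes, and the channels-as-depth-slices convention below are assumptions for illustration; the team's actual network is not described in the article.

```python
# Hypothetical 2D-to-3D reconstruction sketch: a small convolutional
# network mapping a hologram image to a stack of depth slices.
import torch
import torch.nn as nn

DEPTH_SLICES = 32  # assumed number of z-planes in the reconstructed volume

class Hologram2Dto3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, DEPTH_SLICES, kernel_size=3, padding=1),
        )

    def forward(self, hologram):  # (batch, 1, H, W)
        # Each output channel is interpreted as one z-plane of the volume.
        return self.net(hologram)  # (batch, DEPTH_SLICES, H, W)

model = Hologram2Dto3D()
volume = model(torch.randn(1, 1, 128, 128))
print(volume.shape)  # torch.Size([1, 32, 128, 128])
```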

The software can be run on a small Raspberry Pi that could be attached to the holographic microscope. To figure out how to communicate these data back to the research team, Xia drew upon her master’s degree research.

In that work, under the guidance of Professor Allan Adams and Professor Joseph Paradiso in the Media Lab, Xia focused on developing small underwater communication devices that can relay data about the ocean back to researchers. Rather than the usual $4,000, these devices were designed to cost less than $100, helping lower the cost barrier for those interested in uncovering the many mysteries of our oceans. The communication devices can be used to relay data about the ocean environment from the machine learning algorithms.
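Given the severe bandwidth constraints of underwater links, one would expect the Raspberry Pi to transmit compact summaries rather than raw images. The sketch below shows one hypothetical message layout; the field choices and encoding are invented for the example and are not the team's actual protocol.

```python
# Hypothetical sketch: compress each microbiome reading into a fixed
# 10-byte message suitable for a low-bandwidth underwater link.
import struct
import time

def encode_reading(site_id: int, risk_flag: bool, top_taxa: list[int]) -> bytes:
    """Pack a station ID, Unix timestamp, alert flag, and up to three
    dominant microbe class IDs into a fixed 10-byte message."""
    taxa = (top_taxa + [0, 0, 0])[:3]  # pad/truncate to exactly 3 IDs
    return struct.pack("<HIB3B", site_id, int(time.time()) & 0xFFFFFFFF,
                       int(risk_flag), *taxa)

msg = encode_reading(site_id=7, risk_flag=True, top_taxa=[12, 3])
print(len(msg), msg.hex())  # 10 bytes on the wire
```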

By combining these low-cost communication devices with microscopic images and machine learning, Xia hopes to design a low-cost, real-time monitoring system that can be scaled to cover entire seaweed farms.

“It’s almost like having the ‘internet of things’ underwater,” adds Xia. “I’m developing this whole underwater camera system alongside the wireless communication I developed that can give me the data while I’m sitting on dry land.”

Armed with these data about the microbiome, Xia and her team can detect whether or not a disease is about to strike and jeopardize seaweed or livestock before it is too late.

While Xia still daydreams about opening a restaurant, she hopes the seaweed project will prompt people to rethink how they consider food production in general.

“We should think about farming and food production in terms of the entire ecosystem,” she says. “My meta-goal for this project would be to get people to think about food production in a more holistic and natural way.”
