People Should Find a Safe Storm Shelter During a Thunderstorm

Storm Shelters in OKC

Tuesday, June 5, 2001 marked the start of an extraordinary time in the annals of my cherished Houston. Tropical Storm Allison came to visit that early summer day. That Tuesday the storm moved through rapidly. Then Friday arrived, and Allison returned, this time moving slowly and coming in from the north. Then the storm stalled. Thousands of people were driven from their homes. Several leading hospitals closed just when they were needed most. Dozens of important surface roads, and every major highway, were covered in high water.

Yet even before the rain stopped, stories of Christian compassion and service to others began to be written. About 75 people had assembled at Lakewood Church, one of the largest nondenominational churches in the United States, for a couples class. By the time they got ready to depart, the waters had climbed so high that they were stranded. Lakewood’s facility stayed high and dry at the center of one of the hardest-hit parts of town, and refugees from the powerful storm started arriving at its doorstep. With no advance preparation, and without waiting for official sanction, those 75 classmates started a disaster shelter that grew to hold over 3,000 people, the largest of more than 30 shelters established at the height of the storm.

Afterward, Lakewood functioned as a Red Cross Service Center, where help was doled out to those who had suffered losses. When it became clear that FEMA and Red Cross aid would not be enough, Lakewood and Second Baptist Houston joined together to produce an adopt-a-family plan to help get folks back on their feet more quickly. In the days that followed, armies of Christians arrived at both churches. People of every economic standing, race, and denomination gathered from all over town. Wet, rotted carpeting was pulled up and sheetrock removed. Piles of donated clothes, food, and bedding were doled out. Elbow grease and cleaning equipment were used to start eliminating traces of the damage.

If the story stopped here, it would already be an excellent example of practical ministry in a time of disaster, but it continues. Many other churches also functioned as shelters, and in the days that followed, as Red Cross Service Centers. Scores of new volunteers, many of them Christians, were put through accelerated training and put to work. That Saturday I was trapped in my own subdivision, yet certain that my family was safe, because I worked at Storm Shelters OKC, near where I lived. What they would not permit the storm to take was their need to live out their faith, or their self-respect. I saw so many people praising the Lord as they brought gifts of food, clothing, and bedding. I saw young kids coming with their parents to give new, rarely used toys to kids who had none.

Leaning On God Through Hard Times

Unity Church of Christianity, located across town in an area impacted by the storm, sent a sizable supply of bedding and other materials. A small troupe of musicians and Christian clowns arrived and asked to be permitted to entertain the kids in the shelter where I served. We of course promptly accepted their offer. They gathered the kids in a large empty stretch of floor. They sang, they told stories, and they made balloon animals. The frightened, at least briefly displaced kids laughed.

When not occupied elsewhere, I did a lot of listening. I listened to disappointed survivors and frustrated relief workers. I listened to kids trying to make sense of a situation they could not comprehend. These are only the stories I have seen or heard firsthand. I know that churches, spiritual groups, and many other individual Christians served admirably, and I want to thank them for their efforts in this disaster. I thank the Lord for providing them to serve.

I didn’t write this so you’d feel sorry for Houston or its people. Rather, what I saw as this disaster unfolded strengthened my belief that the Lord will provide for us through our brothers and sisters in faith. No matter how badly your community is hit, you, the individual Christian, can be part of the remedy. Those blankets you have stored away and will probably never use mean a great deal to people who have none. You can help if you can drive. You can help if you can set up a cot. You can help if you can scrub a wall. You can help if all you can do is sit and listen. Large catastrophes like Allison get lots of attention, but a disaster can come in virtually any size. If a single house burns, that is a serious disaster to the family that called it home. It will be generations before the folks here forget Allison.

United States Oil and Gas Exploration Opportunities

Firms investing in this sector can explore, develop, and produce, and enjoy the advantages of a global oil and gas portfolio without the usual political and economic disadvantages. The US permitting regime and financial conditions are rated among the best in the world, and petroleum produced in the US is sold at international prices. The firms are also likely to gain because the US has a booming domestic market. Most of the petroleum exploration in the US has been concentrated around the Taranaki Basin, where some 500 exploration wells have been drilled. The remaining US sedimentary basins, however, are still largely unexplored; many show evidence of petroleum seeps and structures, and survey data have also revealed formations with high hydrocarbon potential. There have already been onshore gas discoveries, including in the Great South, East Coast, and offshore Canterbury basins.

Demand for petroleum is expected to grow strongly during this period, which does nothing to dim the bright expectations for this sector. Demand for petroleum is anticipated to reach 338 PJ per annum. The US government is eager to augment the oil and gas supply. Because new discoveries are needed to meet domestic demand, raise the level of self-reliance, and minimize spending on petroleum imports, the oil and gas exploration sector is considered one of the sunrise sectors. The US government has devised a distinctive approach to reach its petroleum and gas exploration targets: it has developed a “Benefit For Attempt” model for petroleum and gas exploration projects in the US.

In the current analysis, “Benefit For Attempt” is defined as oil reserves found per kilometer drilled. It helps in deriving an estimate of reserves found for each kilometer drilled and each dollar spent on exploration. The US government has given considerable signals that it will bring in favorable changes to encourage exploration of new oil reserves, since the cost of exploration has an adverse effect on exploration activity. The government has also made information about the country’s oil potential accessible in its study report. Transparency of information on royalty and allocation regimes, together with simplicity of processes, has enhanced the attractiveness of the petroleum and natural gas sector in the United States.
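
As a simple illustration of how a reserves-per-effort metric of this kind might be computed, here is a minimal sketch; the function name and every figure in it are hypothetical assumptions, not values taken from the report mentioned above.

def benefit_for_attempt(reserves_mmbbl, km_drilled, exploration_cost_usd):
    """Return (reserves per kilometer drilled, reserves per exploration dollar)."""
    per_km = reserves_mmbbl / km_drilled
    per_dollar = reserves_mmbbl / exploration_cost_usd
    return per_km, per_dollar

# Hypothetical inputs: 120 million barrels found, 85 km drilled, $240M spent.
per_km, per_dollar = benefit_for_attempt(120.0, 85.0, 2.4e8)
print(f"{per_km:.2f} million barrels per kilometer drilled")
print(f"{per_dollar * 1e6:.2f} barrels per exploration dollar")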

Petroleum was the third biggest export earner for the US in 2008, and the opportunity to keep up the growth of the sector is broadly available by way of new exploration endeavors. The government is poised to keep the momentum in this sector. Many firms are now active with new exploration projects in the Challenger Plateau of the United States, the Northland East Slope Basin region, the outer Taranaki Basin, and the Bellona Trough region. The 89 Energy oil and gas sector reassures foreign investors, as the government, to encourage growth, has declared a five-year continuance of an exemption for offshore petroleum and gas exploration in its 2009 budget. The authorities also provide nonresident rig operators with tax breaks.

Modern Robot Duct Cleaning Uses

Heating, ventilation, and air conditioning systems collect pollutants and contaminants like mold, debris, dust, and bacteria that can have an adverse impact on indoor air quality. Most folks are now aware that indoor air pollution can be a health concern, and the field has thus gained increased visibility. Studies have also suggested that cleaning enhances these systems’ efficiency and contributes to a longer operating life, along with maintenance and energy cost savings. Duct cleaning is the cleaning of the components of forced-air heating, ventilating, and cooling systems. Robots are an advantageous tool, improving the cost and efficiency aspects of the procedure. Consequently, using modern robots for duct cleaning is no longer a new practice.

A clean air duct system creates a cleaner, healthier indoor environment, lowers energy costs, and increases efficiency. As we spend more hours indoors, air duct cleaning has become an important part of the cleaning sector. Indoor pollutant levels can build up. Health effects can show up immediately or years after repeated or prolonged exposure. These effects range from respiratory diseases to cardiovascular disease and cancer, and can be debilitating or deadly. It is therefore wise to ensure that indoor air quality is not endangered inside buildings. According to the Environmental Protection Agency, dangerous pollutants found indoors can exceed outdoor air pollutant levels.

Duct cleaning from Air Duct Cleaning Edmond professionals removes both visible contaminants and microbial contaminants that may not be visible to the naked eye. These can affect indoor air quality and present a health hazard. Air ducts can host a number of hazardous microbial agents. Legionnaires’ disease is one malady that has gained public notice because our modern surroundings support the growth of the bacteria that cause it and can lead to outbreaks. Typical disease-causing environments include moisture-producing equipment, such as poorly maintained cooling towers in air-conditioned buildings. In short, in designing and building systems to control our surroundings, we have created perfect conditions for this disease. Those systems must be properly monitored and maintained. That is the secret to controlling it.

Robots allow the job to be done faster while saving workers from exposure. Evidence of technological progress in the duct cleaning business is apparent in the variety of equipment now available, such as the array of robotic gear used in air duct cleaning. Robots are invaluable in hard-to-reach places. Robots once used only to view conditions inside the duct may now be used for spraying, cleaning, and sampling procedures. The remote-controlled robotic gear can be fitted with practical tools and fastener attachments to serve many different functions.

Video recorders and a closed-circuit television camera system can be attached to the robotic gear to view conditions and operations and for documentation purposes. Inspection devices on the robot examine the inside of the ducts. Robots can travel to particular sections of the system and move around barriers. Some combine functions that enable cleaning operation and manual control, and fit into small ducts. They can deliver a useful viewing range, with models delivering disinfection, cleaning, inspection, coating, and sealing capabilities economically.

The remote-controlled robotic gear comes in various sizes and shapes for different uses. The first use of robotic video cameras was in the 1980s, to record conditions inside the duct. Robotic cleaning systems now have many more uses. These devices provide improved accessibility for better cleaning and reduce labor costs. Lately, the service industries have expanded the roles of small mobile robots, including uses in inspection and duct cleaning.

More improvements are being considered to make an already productive tool even more effective. If you decide to have your ventilation, heating, and cooling system cleaned, it is important to make sure the contractor is qualified and cleans all parts of the system. Failure to clean one part of a contaminated system can lead to re-contamination of the entire system.

When To Call A DWI Attorney

Charges or fees against a DWI offender require a qualified Sugar Land criminal defense attorney in order to reduce or dismiss the charges or fees. So, undoubtedly, everyone in this position needs a DWI attorney. Even if it is a first-time violation, the penalties can be severe, so being represented by a qualified DWI attorney is vitally important. If you are facing subsequent charges for DWI, the punishments can be severe and can include felony charges. Finding an excellent attorney is thus a job you should approach as soon as possible.

Every state in America makes its own laws and legislation regarding DWI violations, so you must bear in mind that you should hire a DWI attorney who practices within the state where the violation occurred. This is because they will have the knowledge and expertise of the relevant state law to defend you adequately, and they will be familiar with the procedures and tests performed to establish your guilt.

As your attorney, they will look at the tests that were completed at the time of your arrest and the accompanying police evidence to assess whether these tests were accurately performed, carried out by competent staff, and whether the right procedures were followed. Police testimony can also be challenged in court, although it is not often that police testimony is argued against.

When you start looking for a DWI attorney, you should try to locate someone who specializes in these kinds of cases. While many attorneys may be willing to take on your case, a lawyer who specializes in these cases has the skilled knowledge needed to interpret the scientific and medical tests run when you were detained. The first consultation is free and provides you with the chance to ask about their experience in these cases and their fees.

Many attorneys will work according to an hourly fee or on a set-fee basis determined by the kind of case. You may find that how they are paid can be arranged to suit your financial situation, and you will be able to negotiate the terms of their fee. If you are unable to afford a private attorney, you can request a court-appointed attorney paid for by the state. Before you hire a DWI attorney, you should make sure you know when you are expected to appear in court and that you understand the precise charges brought against you.

How A Credit Card Works

The credit card makes your life easier, supplying an amazing set of options. The credit card is a retail transaction settlement and credit system, worked through the little plastic card that bears its name. Regulated by ISO 7810, which defines credit cards, the actual card itself always takes the same structure, size, and shape. A strip of special material on the card (the substance resembles that of a floppy disk or a magnetic tape) stores all the necessary data. This magnetic strip enables the credit card’s validation. The layout has also become an important factor; an enticing credit card design is essential while preserving the card’s reliability and its information-keeping properties.

A credit card is supplied to the user only after a bank approves an account, weighing a varied range of variables to ascertain financial reliability. This bank is the credit provider. When an individual makes a purchase, he must sign a receipt to verify the transaction; the receipt records the card details and the amount of money to be paid. Many shops take electronic authorization for credit cards and use cloud tokenization for authorization. Nearly all verifications are made using an electronic verification system, which confirms that the card is valid. Any retailer can also check whether the customer has enough credit to cover the purchase he is attempting to make while staying within his credit limit.
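
As a rough sketch of what such a check boils down to, and not any particular card network’s actual protocol, the snippet below validates the card number with the standard Luhn checksum and confirms the purchase fits within the remaining credit; the function names and all the figures are hypothetical.

def luhn_valid(card_number: str) -> bool:
    """Standard Luhn checksum for card-number validity (catches typos, not fraud)."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def authorize(card_number: str, amount: float, balance: float, credit_limit: float) -> bool:
    """Hypothetical authorization: valid card number and purchase within remaining credit."""
    return luhn_valid(card_number) and (balance + amount) <= credit_limit

# Hypothetical example values.
print(authorize("4539578763621486", amount=120.00, balance=350.00, credit_limit=1000.00))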

As the credit provider, it is up to the bank to keep the user informed of his statement. Banks typically send monthly statements detailing each transaction processed through the card, the outstanding fees, and the amounts owed. This enables the cardholder to ensure that all the payments are right, and to discover mistakes or fraudulent activity to dispute. Interest is typically charged, and a minimum repayment amount set, by the end of the following billing cycle.

The precise way the interest is charged is normally set out in an initial agreement, and the provider specifies these elements on the back of the credit card statement. Generally, the credit card is a simple form of revolving credit from one month to the next. It can also be a sophisticated financial instrument, with many balance segments to allow finer-grained credit management. Interest rates may also differ from one card to another. Credit card promotion services use appealing incentives to keep their customers and to find some new ones along the way.
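
Here is a minimal numeric sketch of how revolving interest and a minimum payment could be worked out for one billing cycle; the APR, carried balance, and minimum-payment rule are all assumptions for illustration, not any particular issuer’s terms.

def monthly_statement(balance: float, apr: float, min_rate: float = 0.02, min_floor: float = 25.00):
    """Return (interest charged, new balance, minimum payment) for one billing cycle."""
    interest = balance * apr / 12                 # simple monthly periodic interest
    new_balance = balance + interest
    minimum_payment = max(new_balance * min_rate, min_floor)
    return interest, new_balance, minimum_payment

# Hypothetical example: $1,200 carried balance at a 19.99% APR.
interest, new_balance, minimum = monthly_statement(balance=1200.00, apr=0.1999)
print(f"Interest this cycle: ${interest:.2f}")
print(f"New balance:         ${new_balance:.2f}")
print(f"Minimum payment:     ${minimum:.2f}")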

Why Get Help From A Property Management Company?

One solution for enjoying the revenue of your rental home while removing much of the anxiety is to contact and engage a property management company in Oklahoma City, Oklahoma. If you are considering this option and wish to know more, please read the remainder of this post. As many landlords know, leasing out your piece of real property can be a real cash cow, but that cash flow usually comes with a tremendous amount of worry. Late-night phone calls from tenants, the trouble of marketing the house when you have a vacancy, overdue rent payments that you must chase down, and overflowing toilets all take a lot of the pleasure out of earning money off of rentals. One solution for enjoying the earnings while removing much of the anxiety is to engage a property management organization.

These businesses act as the go-between for the tenant and you. The tenant never actually needs to know who you are when you hire a property management company. The company manages the day-to-day relationship with the tenant while you still retain the ability to make the final judgments regarding the home. If you have a unit that is vacant, the company can manage the marketing for you. Since the company will have more connections in the industry and a bigger market reach than you do, you’ll discover your unit gets filled a whole lot more quickly with their aid. In addition, the property management company will take care of screening prospective tenants. Depending on the arrangement you have, you may still be able to get the final say on whether a tenant is qualified for the unit, but the day-to-day difficulty of locating a suitable tenant is no longer your problem. They’ll also manage the move-in inspections as well as the inspections required after a tenant moves out.

After the unit is filled, you can step back and watch the profits. If there is an issue, communication with the tenant will be handled by the company. You won’t be telephoned if a pipe bursts in the middle of the night. The tenant calls your representative at the company, who then makes the arrangements required to get the issue repaired by a maintenance provider. You may get a phone call a day later, or you may not know there was an issue until you check in with the business. The property management organization will also collect your rental payments for you. If a tenant is late making a payment, the company will do what’s required to collect. In certain arrangements, the organization will also take over paying the taxes, insurance, and mortgage on the piece of property. You really need do nothing but enjoy the revenue that is sent your way after all the bills are paid.

With all these advantages, you’re probably wondering what the downside of employing a property management organization must be. The primary factor that stops some landlords from hiring one is the price: you will be paying for all these services. You must weigh the price against the time you’ll save, time that you can then use to pursue additional revenue-producing efforts or simply to enjoy the fruits of your investment.

Benefits From Orthodontic Care

Orthodontics is the specialty of dentistry centered on the diagnosis and treatment of dental and related facial problems. The outcomes of Norman Orthodontist OKC treatment can be dramatic: lovely grins, improved oral health, better aesthetics, increased facial harmony, and an improved quality of life for many individuals of all ages. Whether or not cosmetic dental care is needed is an individual’s own choice. Most folks tolerate conditions like various kinds of bite issues or overbites and don’t get treated. Nevertheless, a number of us feel more assured with teeth that are properly aligned, appealing, and simpler to care for. Dental care may enhance both appearance and strength. It may also help you speak with clarity or chew better.

Orthodontic care isn’t only cosmetic in character. It can also benefit long-term oral health. Straight, properly aligned teeth are easier to floss and clean. This eases cleaning and decreases the risk of decay. It may also stop gum irritation, which occurs when bacteria cluster around the place where the teeth and the gums meet; left untreated, such irritation can lead to periodontitis and infection. An unhealthy condition of this kind can result in tooth loss and may destroy the bone that surrounds the teeth. People with harmful bites chew less efficiently, and a few people with a serious bite problem may have difficulty obtaining enough nutrients. This can happen when the teeth aren’t aligned correctly. Fixing bite issues can make it easier to chew and digest meals.

One may also have speech problems when the top and lower front teeth do not align properly. All of these are fixed through therapy, occasionally combined with surgical help. Finally, treatment may help prevent early wear of the rear teeth. Your teeth are subjected to an enormous amount of pressure as you bite down, and if your top teeth do not meet properly, it will cause your back teeth to degrade. The most frequently encountered type of therapy is braces (or a retainer) and headgear. But a lot of people complain about discomfort with this technique, which, unfortunately, is also unavoidable. Sports can damage braces, and some individuals have trouble talking. Dental practitioners, though, say the soreness normally disappears within several days. Occasionally braces cause irritation. If you’d like to avoid more unpleasant sensations, you should stick to fresh, soft, and bland food. In addition, do not take your braces off unless the medical professional says so.

It is advised that you see your medical professional regularly for checkups to prevent possible problems that may appear while receiving therapy. You will be given a specific dental hygiene regimen if necessary. A dental specialist can now look out for and manage malocclusion. Orthodontia, the specialty in question, mainly targets repairing jaw problems and teeth, your smile, and thus your bite. Dentists, however, won’t only do jaw treatments and emergency tooth work; they also handle mild to severe dental conditions that may grow into risky states. You don’t have to measure your whole life against a single predicament. See a dental specialist, and you’ll soon notice plenty of what makes your smile stunning.

Behind the scenes of the Apollo mission at MIT

Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today.

MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper Labs) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today.

MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated.

Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.

A new age in electronics

The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways.

The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning. The computer itself, the 1-cubic-foot Apollo Guidance Computer, was the first significant use of silicon integrated circuit chips and greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product.

In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, with a given wire passing outside the donut to represent a zero, or through the hole for a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible.
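
As a toy model of that encoding, and only of the encoding, the sketch below treats each core as one word of read-only memory and each sense wire as one bit position: a wire threaded through the core reads as 1, a wire routed around it reads as 0. The data words, word length, and function names are illustrative assumptions, not the AGC’s actual rope layout or electronics.

def weave_rope(words, word_length=16):
    """For each core, record the set of wire positions that thread through it (the 1 bits)."""
    rope = []
    for word in words:
        rope.append({bit for bit in range(word_length) if (word >> bit) & 1})
    return rope

def read_word(rope, address, word_length=16):
    """Reconstruct a word: a wire contributes a 1 only if it threads the addressed core."""
    threaded = rope[address]
    return sum(1 << bit for bit in range(word_length) if bit in threaded)

program = [0o30001, 0o04017, 0o77777]          # arbitrary example words, in octal
rope = weave_rope(program)                     # "weaving" fixes the contents permanently
assert all(read_word(rope, i) == w for i, w in enumerate(program))
print([oct(read_word(rope, i)) for i in range(len(program))])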

As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System), which is taught every other year.

As sophisticated as the computer was for its time, computer users today would barely recognize it as such. Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.

Priceless rocks

MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said.

Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls.

The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.

MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Atmospheric and Planetary Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples.

Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well.

Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different than those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says.

Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says.

Sorting out the facts

In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. “There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says.

For example, many accounts have described the sequence of alarms that came from the guidance computer during the last four minutes of the landing, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot.

But Eyles says the actual reason was a much more complex sequence of events, including a crucial mismatch between two circuits that would only occur in rare circumstances and thus would have been hard to detect in testing, and a probably last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. But he thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.

Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says.

In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.”
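
To make that interaction style concrete, here is a loose sketch of a verb/noun dispatcher: the verb selects the action and the noun selects the data it acts on. The verb and noun codes, register names, and handler functions are invented for illustration and do not reproduce the AGC’s actual assignments.

# Hypothetical registers addressed by "noun" codes.
REGISTERS = {36: ("mission clock", 102_345), 65: ("altitude, ft", 7_421)}

def display(noun):
    name, value = REGISTERS[noun]
    print(f"DISPLAY {name}: {value}")

def load(noun, value):
    name, _ = REGISTERS[noun]
    REGISTERS[noun] = (name, value)
    print(f"LOAD {name} <- {value}")

# Hypothetical verbs: the action to perform on whatever the noun names.
VERBS = {16: display, 21: load}

def execute(verb, noun, *args):
    VERBS[verb](noun, *args)

execute(16, 65)       # "verb 16, noun 65": display the altitude register
execute(21, 36, 0)    # "verb 21, noun 36": load a new value into the clock register
execute(16, 36)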

Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”

Behind the scenes

At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper Laboratory, the Instrumentation Lab’s successor, which spun off from MIT in 1973. A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end.

Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid 2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads.

Many others who played major roles in that software and hardware development have also had their roles little recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent.
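
A minimal sketch of that priority idea follows: each scheduled job carries a priority, and the executive always runs the highest-priority job that is ready. This is a toy cooperative scheduler written to illustrate the concept, not a reconstruction of Laning’s actual AGC executive.

import heapq

class Executive:
    """Toy priority-driven executive: always dispatch the highest-priority ready job."""

    def __init__(self):
        self._queue = []        # min-heap keyed on negative priority
        self._counter = 0       # tie-breaker so equal priorities keep insertion order

    def schedule(self, priority, name, job):
        heapq.heappush(self._queue, (-priority, self._counter, name, job))
        self._counter += 1

    def run(self):
        while self._queue:
            _, _, name, job = heapq.heappop(self._queue)
            print(f"running: {name}")
            job()

executive = Executive()
executive.schedule(10, "update thruster commands", lambda: None)   # most important
executive.schedule(3, "refresh crew display", lambda: None)
executive.schedule(1, "log telemetry", lambda: None)
executive.run()     # thruster update runs first, telemetry last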

While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.

Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”

J-PAL North America announces second round of competition partners

J-PAL North America, a research center at MIT, will partner with two leading education technology nonprofits to test promising models to improve learning, as part of the center’s second Education, Technology, and Opportunity Innovation Competition. 

Running in its second year, J-PAL North America’s Education, Technology, and Opportunity Innovation Competition supports education leaders in using randomized evaluations to generate evidence on how technology can improve student learning, particularly for students from disadvantaged backgrounds. Last year, J-PAL North America partnered with the Family Engagement Lab to develop an evaluation of a multilingual digital messaging platform, and with Western Governors University’s Center for Applied Learning Science to evaluate scalable models to improve student learning in math.

This year, J-PAL North America will continue its work to support rigorous evaluations of educational technologies aimed at reducing disparities by partnering with Boys and Girls Clubs of Greater Houston, a youth-development organization that provides education and social services to at-risk students, and MIND Research Institute, a nonprofit committed to improving math education.

“Even just within the first and second year of the J-PAL ed-tech competition, there continues to be an explosion in promising new initiatives,” says Philip Oreopoulos, professor of economics at the University of Toronto and co-chair of the J-PAL Education, Technology, and Opportunity Initiative. “We’re excited to try to help steer this development towards the most promising and effective programs for improving academic success and student well-being.”

Boys and Girls Clubs of Greater Houston will partner with J-PAL North America to develop an evaluation of the BookNook reading app, a research-based intervention technology that aims to improve literacy skills of K-8 students.

“One of our commitments to our youth is to prepare them to be better citizens in life, and we do this through our programming, which supplements the education they receive in school,” says Michael Ewing, director of programs at Boys & Girls Clubs of Greater Houston. “BookNook is one of our programs that we know can increase reading literacy and help students achieve at a higher level. We are excited about this opportunity to conduct a rigorous evaluation of BookNook’s technology because we can substantially increase our own accountability as an organization, ensuring that we are able to track the literacy gains of our students when the program is implemented with fidelity.”

Children who do not master reading by a young age are often placed at a significant disadvantage to their peers throughout the rest of their development. However, many effective interventions for students struggling with reading involve one-on-one or small-group instruction that places a heavy demand on school resources and teacher time. This makes it particularly challenging for schools that are already resource-strapped and face a shortage of teachers to meet the needs of students who are struggling with reading.

The BookNook app offers a channel to bring research-proven literacy intervention strategies to greater numbers of students through accessible technology. The program is heavily scaffolded so that both teachers and non-teachers can use it effectively, allowing after-school staff like those at Boys & Girls Clubs of Greater Houston to provide adaptive instruction to students struggling with reading.

“Our main priority at BookNook is student success,” says Nate Strong, head of partnerships for the BookNook team. “We are really excited to partner with J-PAL and with Boys & Girls Clubs of Greater Houston to track the success of students in Houston and learn how we can do better for them over the long haul.”

MIND Research Institute seeks to partner with J-PAL North America to develop a scalable model that will increase students’ conceptual understanding of mathematical concepts. MIND’s Spatial Temporal (ST) Math program is a pre-K-8 visual instructional program that leverages the brain’s spatial-temporal reasoning ability, using challenging visual puzzles, non-routine problem solving, and animated informative feedback to help students understand and solve mathematical problems.

“We’re thrilled and honored to begin this partnership with J-PAL to build our capacity to conduct randomized evaluations,” says Andrew Coulson, chief data science officer for MIND. “It’s vital we continue to rigorously evaluate the ability of ST Math’s spatial-temporal approach to provide a level playing field for every student, and to show substantial effects on any assessment. With the combination of talent and experience that J-PAL brings, I expect that we will also be exploring innovative research questions, metrics and outcomes, methods and techniques to improve the applicability, validity and real-world usability of the findings.”

J-PAL North America is excited to work with these two organizations and continue to support rigorous evaluations that will help us better understand the role technology should play in learning. Boys & Girls Clubs of Greater Houston and MIND Research Institute will help J-PAL contribute to the growing evidence base on education technology that can help guide decision-makers in understanding which uses of education technology are truly helping students learn amid a rapidly changing technological landscape.

J-PAL North America is a regional office of the Abdul Latif Jameel Poverty Action Lab. J-PAL was established in 2003 as a research center at MIT’s Department of Economics. Since then, it has built a global network of affiliated professors based at over 58 universities and regional offices in Africa, Europe, Latin America and the Caribbean, North America, South Asia, and Southeast Asia. J-PAL North America was established with support from the Alfred P. Sloan Foundation and Arnold Ventures and works to improve the effectiveness of social programs in North America through three core activities: research, policy outreach, and capacity building. J-PAL North America’s education technology work is supported by the Overdeck Family Foundation and Arnold Ventures.

MIT and Fashion Institute of Technology join forces to create innovative textiles

If you knew that hundreds of millions of running shoes are disposed of in landfills each year, would you prefer a high-performance athletic shoe that is biodegradable? Would being able to monitor your fitness in real time and help you avoid injury while you are running appeal to you? If so, look no further than the collaboration between MIT and the Fashion Institute of Technology (FIT). 

For the second consecutive year, students from each institution teamed up for two weeks in late June to create product concepts exploring the use of advanced fibers and technology. The workshops were held collaboratively with Advanced Functional Fabrics of America (AFFOA), a Cambridge, Massachusetts-based national nonprofit whose goal is to enable a manufacturing-based transformation of traditional fibers, yarns, and textiles into highly sophisticated, integrated, and networked devices and systems. 

“Humans have made use of natural fibers for millennia. They are essential as tools, clothing and shelter,” says Gregory C. Rutledge, lead principal investigator for MIT in AFFOA and the Lammot du Pont Professor in Chemical Engineering. “Today, new fiber-based solutions can have a significant and timely impact on the challenges facing our world.” 

The students had the opportunity this year to respond to a project challenge posed by footwear and apparel manufacturer New Balance, a member of the AFFOA network. Students spent their first week in Cambridge learning new technologies at MIT and the second at FIT, a college of the State University of New York, in New York City working on projects and prototypes. On the last day of the workshop, the teams presented their final projects at the headquarters of Lafayette 148 at the Brooklyn Navy Yard, with New Balance Creative Manager of Computational Design Onur Yuce Gun in attendance.

Team Natural Futurism presented a concept to develop a biodegradable lifestyle shoe using natural material alternatives, including bacterial cellulose and mycelium, and advanced fiber concepts to avoid use of chemical dyes. The result was a shoe that is both sustainable and aesthetic. Team members included: Giulia de Garay (FIT, Textile Development and Marketing), Rebecca Grekin ’19 (Chemical Engineering), rising senior Kedi Hu (Chemical Engineering/Architecture), Nga Yi “Amy” Lam (FIT, Textile Development and Marketing), Daniella Koller (FIT, Fashion Design), and Stephanie Stickle (FIT, Textile Surface Design).

Team CoMIT to Safety Before ProFIT explored the various ways that runners get hurt, sometimes from acute injuries but more often from overuse. Their solution was to incorporate intuitive textiles, as well as tech elements such as a silent alarm and LED display, into athletic clothing and shoes for entry-level, competitive, and expert runners. The goal is to help runners at all levels to eliminate distraction, know their physical limits, and be able to call for help. Team members included Rachel Cheang (FIT, Fashion Design/Knitwear), Jonathan Mateer (FIT, Accessories Design), Caroline Liu ’19 (Materials Science and Engineering), and Xin Wen ’19 (Electrical Engineering and Computer Science).

“It is critical for design students to work in a team environment engaging in the latest technologies. This interaction will support the invention of products that will define our future,” comments Joanne Arbuckle, deputy to the president for industry partnerships and collaborative programs at FIT.

The specific content of this workshop was co-designed by MIT postdocs Katia Zolotovsky of the Department of Biological Engineering and Mehmet Kanik of the Research Laboratory of Electronics, with assistant professor of fashion design Andy Liu from FIT, to teach the fundamentals of fiber fabrication, 3-D printing with light, sensing, and biosensing. Participating MIT faculty included Yoel Fink, who is CEO of AFFOA and professor of materials science and electrical engineering; Polina Anikeeva, who is associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences; and Nicholas Xuanlai Fang, professor of mechanical engineering. Participating FIT faculty were Preeti Arya, assistant professor, Textile Development and Marketing; Patrice George, associate professor, Textile Development and Marketing; Suzanne Goetz, associate professor, Textile Surface Design; Tom Scott, Fashion Design; David Ulan, adjunct assistant professor, Accessories Design; and Gregg Woodcock, adjunct instructor, Accessories Design.  

To facilitate the intersection of design and engineering for products made of advanced functional fibers, yarns, and textiles, a brand-new workforce must be created and inspired by future opportunities. “The purpose of the program is to bring together undergraduate students from different backgrounds, and provide them with a cross-disciplinary, project-oriented experience that gets them thinking about what can be done with these new materials,” Rutledge adds. 

The goal of MIT, FIT, AFFOA, and industrial partner New Balance is to accelerate innovation in high-tech, U.S.-based manufacturing involving fibers and textiles, and potentially to create a whole new industry based on breakthroughs in fiber technology and manufacturing. AFFOA, a Manufacturing Innovation Institute founded in 2016, is a public-private partnership between industry, academia, and both state and federal governments.

“Collaboration and teamwork are DNA-level attributes of the New Balance workplace,” says Chris Wawrousek, senior creative design lead in the NB Innovation Studio. “We were very excited to participate in the program from a multitude of perspectives. The program allowed us to see some of the emerging research in the field of technical textiles. In some cases, these technologies are still very nascent, but give us a window into future developments.”  

“The diverse pairing and short time period also remind us of the energy captured in an academic crash course, and just how much teams can do in a condensed period of time,” Wawrousek adds. “Finally, it’s a great chance to connect with this future generation of designers and engineers, hopefully giving them an exciting window into the work of our brand.”

By building upon their different points of view from design and science, the teams demonstrated what is possible when creative individuals from each area act and think as one. “When designers and engineers come together and open their minds to creating new technologies that ultimately will impact the world, we can imagine exciting new multi-material fibers that open up a new spectrum of applications in various markets, from clothing to medical and beyond,” says Yuly Fuentes, MIT Materials Research Laboratory project manager for fiber technologies. 

Professor Emeritus Fernando Corbató, MIT computing pioneer, dies at 93

Fernando “Corby” Corbató, an MIT professor emeritus whose work in the 1960s on time-sharing systems broke important ground in democratizing the use of computers, died on Friday, July 12, at his home in Newburyport, Massachusetts. He was 93.

Decades before the existence of concepts like cybersecurity and the cloud, Corbató led the development of one of the world’s first operating systems. His “Compatible Time-Sharing System” (CTSS) allowed multiple people to use a computer at the same time, greatly increasing the speed at which programmers could work. It’s also widely credited as the first computer system to use passwords.

After CTSS Corbató led a time-sharing effort called Multics, which directly inspired operating systems like Linux and laid the foundation for many aspects of modern computing. Multics doubled as a fertile training ground for an emerging generation of programmers that included C programming language creator Dennis Ritchie, Unix developer Ken Thompson, and spreadsheet inventors Dan Bricklin and Bob Frankston.

Before time-sharing, using a computer was tedious and required detailed knowledge. Users would create programs on cards and submit them in batches to an operator, who would enter them to be run one at a time over a series of hours. Minor errors would require repeating this sequence, often more than once.

But with CTSS, which was first demonstrated in 1961, answers came back in mere seconds, forever changing the model of program development. Decades before the PC revolution, Corbató and his colleagues also opened up communication between users with early versions of email, instant messaging, and word processing. 

“Corby was one of the most important researchers for making computing available to many people for many purposes,” says long-time colleague Tom Van Vleck. “He saw that these concepts don’t just make things more efficient; they fundamentally change the way people use information.”

Besides making computing more efficient, CTSS also inadvertently helped establish the very concept of digital privacy itself. With different users wanting to keep their own files private, CTSS introduced the idea of having people create individual accounts with personal passwords. Corbató’s vision of making high-performance computers available to more people also foreshadowed trends in cloud computing, in which tech giants like Amazon and Microsoft rent out shared servers to companies around the world. 

“Other people had proposed the idea of time-sharing before,” says Jerry Saltzer, who worked on CTSS with Corbató after starting out as his teaching assistant. “But what he brought to the table was the vision and the persistence to get it done.”

CTSS was also the spark that convinced MIT to launch “Project MAC,” the precursor to the Laboratory for Computer Science (LCS). LCS later merged with the Artificial Intelligence Lab to become MIT’s largest research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), which is now home to more than 600 researchers. 

“It’s no overstatement to say that Corby’s work on time-sharing fundamentally transformed computers as we know them today,” says CSAIL Director Daniela Rus. “From PCs to smartphones, the digital revolution can directly trace its roots back to the work that he led at MIT nearly 60 years ago.” 

In 1990 Corbató was honored for his work with the Association for Computing Machinery’s Turing Award, often described as “the Nobel Prize for computing.”

From sonar to CTSS

Corbató was born on July 1, 1926 in Oakland, California. At 17 he enlisted as a technician in the U.S. Navy, where he first got the engineering bug working on a range of radar and sonar systems. After World War II he earned his bachelor’s degree at Caltech before heading to MIT to complete a PhD in physics. 

As a PhD student, Corbató met Professor Philip Morse, who recruited him to work with his team on Project Whirlwind, the first computer capable of real-time computation. After graduating, Corbató joined MIT’s Computation Center as a research assistant, soon moving up to become deputy director of the entire center. 

It was there that he started thinking about ways to make computing more efficient. For all its innovation, Whirlwind was still a rather clunky machine. Researchers often had trouble getting much work done on it, since they had to take turns using it for half-hour chunks of time. (Corbató said that it had a habit of crashing every 20 minutes or so.) 

Since computer input and output devices were much slower than the computer itself, in the late 1950s a scheme called multiprogramming was developed to allow a second program to run whenever the first program was waiting for some device to finish. Time-sharing built on this idea, allowing other programs to run while the first program was waiting for a human user to type a request, thus allowing the user to interact directly with the first program.
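
The sketch below illustrates only that overlap idea: while one program waits on a slow device, the processor is handed to other work, so the elapsed time is close to the device wait alone rather than the sum of the two. The one-second device delay and the stand-in jobs are assumptions for the example, and a Python thread is used purely as a convenient way to simulate the wait; real multiprogramming and time-sharing were implemented inside the operating system, not like this.

import threading
import time

def slow_device_read(seconds=1.0):
    time.sleep(seconds)                   # stand-in for a card reader or tape drive

def other_job():
    total = sum(i * i for i in range(100_000))    # stand-in for useful computation
    print("second program finished while the first was waiting on its device")
    return total

start = time.time()
reader = threading.Thread(target=slow_device_read)
reader.start()                            # first program is now "waiting on I/O"
other_job()                               # the processor runs another program meanwhile
reader.join()
print(f"elapsed: {time.time() - start:.2f}s (close to the 1.0s device wait alone)")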

Saltzer says that Corbató pioneered a programming approach that would be described today as agile design. 

“It’s a buzzword now, but back then it was just this iterative approach to coding that Corby encouraged and that seemed to work especially well,” he says.  

In 1962 Corbató published a paper about CTSS that quickly became the talk of the slowly-growing computer science community. The following year MIT invited several hundred programmers to campus to try out the system, spurring a flurry of further research on time-sharing.

Foreshadowing future technological innovation, Corbató was amazed — and amused — by how quickly people got habituated to CTSS’ efficiency.

“Once a user gets accustomed to [immediate] computer response, delays of even a fraction of a minute are exasperatingly long,” he presciently wrote in his 1962 paper. “First indications are that programmers would readily use such a system if it were generally available.”

Multics, meanwhile, expanded on CTSS’ more ad hoc design with a hierarchical file system, better interfaces to email and instant messaging, and more precise privacy controls. Peter Neumann, who worked at Bell Labs when it was collaborating with MIT on Multics, says that its design prevented the possibility of many vulnerabilities that affect modern systems, like “buffer overflow” (which happens when a program writes data past the end of a fixed-size memory buffer). 

“Multics was so far ahead of the rest of the industry,” says Neumann. “It was intensely software-engineered, years before software engineering was even viewed as a discipline.” 

In spearheading these time-sharing efforts, Corbató served as a soft-spoken but driven commander in chief — a logical thinker who led by example and had a distinctly systems-oriented view of the world.

“One thing I liked about working for Corby was that I knew he could do my job if he wanted to,” says Van Vleck. “His understanding of all the gory details of our work inspired intense devotion to Multics, all while still being a true gentleman to everyone on the team.” 

Another legacy of the professor’s is “Corbató’s Law,” which states that the number of lines of code someone can write in a day is the same regardless of the language used. This maxim is often cited by programmers when arguing in favor of using higher-level languages.

Corbató was an active member of the MIT community, serving as associate department head for computer science and engineering from 1974 to 1978 and 1983 to 1993. He was a member of the National Academy of Engineering, and a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science. 

Corbató is survived by his wife, Emily Corbató, of Brooklyn, New York; his stepsons, David and Jason Gish; his brother, Charles; his daughters, Carolyn and Nancy, from his marriage to his late wife, Isabel; and five grandchildren. 

In lieu of flowers, gifts may be made to MIT’s Fernando Corbató Fellowship Fund via Bonny Kellermann in the Memorial Gifts Office. 

CSAIL will host an event to honor and celebrate Corbató in the coming months. 

Visiting lecturer to spearhead project exploring the geopolitics of artificial intelligence

Artificial intelligence is expected to have tremendous societal impact across the globe in the near future. Now Luis Videgaray PhD ’98, former foreign minister and finance minister of Mexico, is coming to MIT to spearhead an effort that aims to help shape global AI policies, focusing on how such rising technologies will affect people living in all corners of the world.

Starting this month, Videgaray, an expert in geopolitics and AI policy, will serve as director of the MIT Artificial Intelligence Policy for the World Project (MIT AIPW), a collaboration between the MIT Sloan School of Management and the new MIT Stephen A. Schwarzman College of Computing. Videgaray will also serve as a senior lecturer at MIT Sloan and as a distinguished fellow at the MIT Internet Policy Research Initiative.

The MIT AIPW will bring together researchers from across the Institute to explore and analyze best AI policies for countries around the world based on various geopolitical considerations. The end result of the year-long effort, Videgaray says, will be a report with actionable policy recommendations for national and local governments, businesses, international organizations, and universities — including MIT.

“The core idea is to analyze, raise awareness, and come up with useful policy recommendations for how the geopolitical context affects both the development and use of AI,” says Videgaray, who earned his PhD at MIT in economics. “It’s called AI Policy for the World, because it’s not only about understanding the geopolitics, but also includes thinking about people in poor nations, where AI is not really being developed but will be adopted and have significant impact in all aspects of life.”

“When we launched the MIT Stephen A. Schwarzman College of Computing, we expressed the desire for the college to examine the societal implications of advanced computational capabilities,” says MIT Provost Martin Schmidt. “One element of that is developing frameworks which help governments and policymakers contemplate these issues. I am delighted to see us jump-start this effort with the leadership of our distinguished alumnus, Dr. Videgaray.”

Democracy, diversity, and de-escalation

As Mexico’s finance minister from 2012 to 2016, Videgaray led Mexico’s energy liberalization process, a telecommunications reform to foster competition in the sector, a tax reform that reduced the country’s dependence on oil revenues, and the drafting of the country’s laws on financial technology. In 2012, he was campaign manager for President Peña Nieto and head of the presidential transition team.

As foreign minister from 2017 to 2018, Videgaray led Mexico’s relationship with the Trump White House, including the renegotiation of the North American Free Trade Agreement (NAFTA). He is one of the founders of the Lima Group, created to promote regional diplomatic efforts toward restoring democracy in Venezuela. He also directed Mexico’s leading role in the UN toward an inclusive debate on artificial intelligence and other new technologies. In that time, Videgaray says, AI went from being a “science-fiction” concept in the first year to a major global political issue the following year.

In the past few years, academic institutions, governments, and other organizations have launched initiatives that address those issues, and more than 20 countries have strategies in place that guide AI development. But they miss a very important point, Videgaray says: AI’s interaction with geopolitics.

MIT AIPW will have three guiding principles to help shape policy around geopolitics: democratic values, diversity and inclusion, and de-escalation.

One of the most challenging and important issues MIT AIPW faces is whether AI “can be a threat to democracy,” Videgaray says. To that end, the project will explore policies that help advance AI technologies while upholding the values of liberal democracy.

“We see some countries starting to adopt AI technologies not to improve the quality of life, but for social control,” he says. “This technology can be extremely powerful, but we are already seeing how it can also be used to … influence people and have an effect on democracy. In countries where institutions are not as strong, there can be an erosion of democracy.”

A policy challenge in that regard is how to deal with private data restrictions in different countries. If some countries don’t put any meaningful restrictions on data usage, it could potentially give them a competitive edge. “If people start thinking about geopolitical competition as more important than privacy, biases, or algorithmic transparency, and the concern is to win at all costs, then the societal impact of AI around the world could be quite worrisome,” Videgaray says.

In the same vein, MIT AIPW will focus on de-escalation of potential conflict by promoting an analytical, practical, and realistic collaborative approach to developing and using AI technologies. While the media have dubbed the worldwide rise of AI a kind of “arms race,” Videgaray says that type of thinking is potentially hazardous to society. “That reflects a sentiment that we’re moving again into an adversarial world, and technology will be a huge part of it,” he says. “That will have negative effects on how technology is developed and used.”

For inclusion and diversity, the project will make AI’s ethical impact “a truly global discussion,” Videgaray says. That means promoting awareness and participation from countries around the world, including those that may be less developed and more vulnerable. Another challenge is deciding not only what policies should be implemented, but also where those policies might be best implemented. That could mean at the state level or national level in the United States, in different European countries, or with the UN.

“We want to approach this in a truly inclusive way, which is not just about countries leading development of technology,” Videgaray says. “Every country will benefit and be negatively affected by AI, but many countries are not part of the discussion.”

Building connections

While MIT AIPW won’t be drafting international agreements, Videgaray says another aim of the project is to explore different options and elements of potential international agreements. He also hopes to reach out to decision makers in governments and businesses around the world to gather feedback on the project’s research.         

Part of Videgaray’s role includes building connections across MIT departments, labs, and centers to pull in researchers to focus on the issue. “For this to be successful, we need to integrate the thinking of people from different backgrounds and expertise,” he says.

At MIT Sloan, Videgaray will teach classes alongside Simon Johnson, the Ronald A. Kurtz Professor of Entrepreneurship and a professor of global economics and management. His lectures will focus primarily on the issues explored by the MIT AIPW project.

Next spring, MIT AIPW plans to host a conference at MIT to convene researchers from the Institute and around the world to discuss the project’s initial findings and other topics in AI.

Pathways to a low-carbon China

Fulfilling the ultimate goal of the 2015 Paris Agreement on climate change — keeping global warming well below 2 degrees Celsius, if not 1.5 C — will be impossible without dramatic action from the world’s largest emitter of greenhouse gases, China. Toward that end, China began in 2017 developing an emissions trading scheme (ETS), a national carbon dioxide market designed to enable the country to meet its initial Paris pledge with the greatest efficiency and at the lowest possible cost. China’s pledge, or nationally determined contribution (NDC), is to reduce its CO2 intensity of gross domestic product (emissions produced per unit of economic activity) by 60 to 65 percent in 2030 relative to 2005, and to peak CO2 emissions around 2030.
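Because the pledge is defined relative to economic output rather than as an absolute cap, the arithmetic behind it is simple. Here is a rough sketch with made-up placeholder numbers (not official Chinese figures) of the intensity target and the absolute emissions it would imply:

```python
# Rough sketch of the intensity arithmetic (placeholder numbers, not official data).
# CO2 intensity = CO2 emissions / GDP; the pledge caps 2030 intensity at 35-40 percent
# of its 2005 level (i.e., a 60-65 percent reduction).

def max_intensity_2030(intensity_2005, reduction):
    """Highest allowed 2030 CO2 intensity under a given fractional reduction pledge."""
    return intensity_2005 * (1.0 - reduction)

def implied_emissions_cap(intensity_2005, gdp_2030, reduction):
    """Absolute 2030 emissions allowed if the intensity target exactly binds."""
    return max_intensity_2030(intensity_2005, reduction) * gdp_2030

# Hypothetical inputs: intensity in tons of CO2 per unit of GDP, GDP in matching units.
for reduction in (0.60, 0.65):
    cap = implied_emissions_cap(intensity_2005=2.0, gdp_2030=10.0, reduction=reduction)
    print(f"{int(reduction * 100)}% cut in intensity -> emissions cap of {cap:.1f} tons")
```

The key point of the structure is that allowed emissions still grow with GDP, which is why meeting the long-term Paris goals requires steadily tightening the intensity targets over time.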

When it’s rolled out, China’s carbon market will initially cover the electric power sector (which currently produces more than 3 billion tons of CO2) and likely set CO2 emissions intensity targets (e.g., grams of CO2 per kilowatt hour) to ensure that its short-term NDC is fulfilled. But to help the world achieve the long-term 2 C and 1.5 C Paris goals, China will need to continually decrease these targets over the course of the century.

A new study of China’s long-term power generation mix under the nation’s ETS projects that until 2065, renewable energy sources will likely expand to meet these targets; after that, carbon capture and storage (CCS) could be deployed to meet the more stringent targets that follow. Led by researchers at the MIT Joint Program on the Science and Policy of Global Change, the study appears in the journal Energy Economics.

“This research provides insight into the level of carbon prices and mix of generation technologies needed for China to meet different CO2 intensity targets for the electric power sector,” says Jennifer Morris, lead author of the study and a research scientist at the MIT Joint Program. “We find that coal CCS has the potential to play an important role in the second half of the century, as part of a portfolio that also includes renewables and possibly nuclear power.”

To evaluate the impacts of multiple potential ETS pathways — different starting carbon prices and rates of increase — on the deployment of CCS technology, the researchers enhanced the MIT Economic Projection and Policy Analysis (EPPA) model to include the joint program’s latest assessments of the costs of low-carbon power generation technologies in China. Among the technologies included in the model are natural gas, nuclear, wind, solar, coal with CCS, and natural gas with CCS. Assuming that power generation prices are the same across the country for any given technology, the researchers identify different ETS pathways in which CCS could play a key role in lowering the emissions intensity of China’s power sector, particularly for targets consistent with achieving the long-term 2 C and 1.5 C Paris goals by 2100.

The study projects a two-stage transition — first to renewables, and then to coal CCS. The transition from renewables to CCS is driven by two factors. First, at higher levels of penetration, renewables incur increasing costs related to accommodating the intermittency challenges posed by wind and solar. This paves the way for coal CCS. Second, as experience with building and operating CCS technology is gained, CCS costs decrease, allowing the technology to be rapidly deployed at scale after 2065 and replace renewables as the primary power generation technology.

The study shows that carbon prices of $35-40 per ton of CO2 make CCS technologies coupled with coal-based generation cost-competitive against other modes of generation, and that carbon prices higher than $100 per ton of CO2 allow for a significant expansion of CCS.

“Our study is at the aggregate level of the country,” says Sergey Paltsev, deputy director of the joint program. “We recognize that the cost of electricity varies greatly from province to province in China, and hope to include interactions between provinces in our future modeling to provide deeper understanding of regional differences. At the same time, our current results provide useful insights to decision-makers in designing more substantial emissions mitigation pathways.”

Third annual MIT Teaching with Digital Technology Awards recipients selected

Seven MIT educators have received awards this year for their significant digital learning innovations and their contributions to teaching and learning at MIT and around the world.

Polina Anikeeva, Martin Bazant, and Jessica Sandland shared the third annual MITx Prize for Teaching and Learning in MOOCs — an award given to educators who have developed massive open online courses (MOOCs) that share the best of MIT knowledge and perspectives with learners around the world. Additionally, John Belcher, Amy Carleton, Jared Curhan, and Erik Demaine received Teaching with Digital Technology Awards, nominated by MIT students for their innovative use of digital technology to improve their teaching at MIT.

The MITx Prize for Teaching and Learning in MOOCs

This year’s MITx prize winners were honored at an MIT Open Learning event in May. Professor Polina Anikeeva of the Department of Materials Science and Engineering and Digital Learning Lab Scientist Jessica Sandland received the award for teaching 3.024x (Electronic, Optical and Magnetic Properties of Materials). The course was praised for not only its global impact, but also for the way in which it enhanced the residential experience. Increased flexibility from integrating the online content allowed for the addition of design reviews, which give MIT students firsthand experience working on complicated engineering problems.

3.024x is fast-paced and challenging. To bring some levity to the subject, the instructors designed problem sets around a series of superhero-themed comic strips that integrated the science and engineering concepts that students learned in class.

Martin Bazant, of the departments of Chemical Engineering and Mathematics, received the MITx prize for his course, 10.50.1x (Analysis of Transport Phenomena: Mathematical Methods). Most problems in the course involve long calculations, which can be tricky to demonstrate online.

To solve this challenge, Bazant broke up problems into smaller parts that included tips and tutorials to help learners solve the problem while maintaining the rigorous intellectual challenge. Course participants included a diverse group of college students, industry professionals, and faculty from other universities in many science and engineering disciplines across the globe.

Teaching with Digital Technology Awards

Co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, the Teaching with Digital Technology Awards are student-nominated awards for faculty and instructors who have improved teaching and learning at MIT with digital technology. MIT students nominated 117 faculty and instructors for this award this year, more than in any previous year. The winners were celebrated at an awards luncheon in early June. John Belcher, Erik Demaine, and Jared Curhan attended the awards luncheon, and — in the spirit of an award reception for digital innovation — Amy Carleton joined the event virtually, through video chat.

John Belcher was honored for his physics courses on electricity and magnetism. Students appreciated the way that Belcher incorporated videos with his lectures to help provide a physical representation of an abstract subject. He created the animated videos to show visualizations of fundamental physics concepts such as energy transfer and magnetic fields. Students remarked that the videos helped them learn about everything from solar flares and the solar cycle to the fundamentally relativistic nature of electromagnetism.

Erik Demaine of the Computer Science and Artificial Intelligence Lab received the award for his course 6.892 (Fun with Hardness Proofs). The course flipped the traditional classroom model. Instead of lecturing in person, all lectures were posted online and problems were done in class. This allowed the students to spend class time working together on collaborative problem solving through an online application that Demaine created, called Coauthor.

Jared Curhan received the award for his negotiation courses at the MIT Sloan School of Management, including 15.672 (Negotiation Analysis), which he designed for students across the Institute. Curhan used digital technology to provide feedback while students practiced their negotiating skills in class. A platform called iDecisionGames helped simulate negotiation exercises between students, and after each exercise it provided data about how each participant performed, both objectively and subjectively.

Amy Carleton received the award for her course on science writing and new media. During the course, students learned how to write about scientific and technical topics for a general audience. They put their skills to work by writing Wikipedia articles, where they used advanced editing techniques and wrote mathematical expressions in LaTeX. They also used Google Docs during class to edit articles in small groups, and developed PowerPoint presentations where they learned to incorporate sound and graphics to emphasize their ideas.

Both awards celebrate instructors who are using technology in innovative ways to help teach challenging courses to both traditional students and online learners.

“At MIT, there is no shortage of digital learning innovation, and this year’s winners reflect the Institute’s strong commitment to transforming teaching and learning at MIT and around the globe,” says MIT Professor Krishna Rajagopal, dean for digital learning. “They have set new standards for online and blended learning.”

MIT Media Lab Director’s Fellows announced for 2019

The MIT Media Lab has added 11 members to the diverse group of visionary innovators and leaders it calls the Director’s Fellows.

Now in its seventh year, the Director’s Fellows program links a vast array of creators, advocates, artists, scientists, educators, philosophers, and others to the lab. The goal of the program is for the fellows to get involved in the lab’s work, bringing new perspectives, ideas, and knowledge to projects and initiatives.

Conversely, the fellows spread insights, knowledge, and work of the lab out into the world, giving it exposure in spaces as varied as fashion, human rights, and sports.

“My intention was to bring a wide range of voices into the Media Lab that we might not otherwise hear, because I firmly believe that technology and engineering alone cannot address the complexity of the challenges we face in today’s world,” says Joi Ito, director of the Media Lab. “Addressing an issue as complex as climate change or public health requires solutions involving philosophy and politics and anthropology — a range of knowledge, skills, and talents that we don’t necessarily have at the lab.”

With the addition of this year’s fellows, the Director’s Fellows network will be roughly 70 people strong. The fellows may collaborate on projects with students and faculty, serve as advisers, bring a project idea into the lab, or work on projects together. Those living abroad may participate in Media Lab workshops and other offsite events.

Fellows have a formal affiliation with the lab for two years, but the hope is that the network continues to flourish after that period ends. “Our intention is to keep them as close as possible, both to each other and to the lab,” says Claudia Robaina, the program’s director. “They are great resources for us and for each other, a huge network of collaborators.”

The fellows this year are as diverse as ever, although Robaina says there is perhaps a greater diversity of age than in the typical class. Among them are a career police officer, a freestyle skateboarder, and a physician.

The Media Lab’s 2019 Director’s Fellows are listed below.

Jaylen Brown, an NBA basketball player with the Boston Celtics, has a wide range of interests, including history, finance, technology, and meditation. Considered an innovator by his peers, he entered the NBA draft in 2016 without an agent, and a year later created a stir by pulling together a networking event for rookie players at the NBA Summer League, which was followed by a “Tech Hustle” event at the NBA All-Star Weekend that attracted venture capitalists, rap stars, and corporate chieftains to help players understand investment.

Jan Fuller, a former senior digital forensics investigator for the Redmond Police Department in Washington state, began conducting forensic investigations of electronic devices in 2003, when 1 gigabyte was a lot of data. Currently, she’s pursuing projects aimed at improving law enforcement capabilities deployed against digital crimes and coaching and mentoring students interested in careers in digital forensics.

Kathy Jetñil-Kijiner, a poet of Marshall Islands ancestry, achieved international acclaim with her performance at the opening of the United Nations Climate Summit in New York in 2014. She has published a collection of poetry, Iep Jāltok: Poems from a Marshallese Daughter, and she directs a Marshall Islands-based nonprofit dedicated to empowering Marshallese youth to seek solutions to the environmental challenges their homeland faces.  

Ayana Elizabeth Johnson, founder and chief executive of Ocean Collectiv, a consulting firm for conservation solutions, is a marine biologist and policy expert. She founded the Urban Ocean Lab, a think tank focused on coastal cities, and has worked on ocean policy at the U.S. Environmental Protection Agency and the National Oceanic and Atmospheric Administration.

Lehua Kamalu, an apprentice navigator and the voyaging director at the Polynesian Voyaging Society, researched and devised the sail plan for Hōkūleʻa, a double-hulled canoe, as it circumnavigated the Earth from 2014 to 2018 on a voyage named “Malama Honua — to care for the Earth.” She sees the practice of deep-sea voyaging as a means to challenge the depth and quality of our individual relationships to the ocean, nature, and one another.

AiLun Ku, president and chief operating officer at The Opportunity Network, works to create spaces for first-generation high school and college students of color to enhance and improve their postsecondary and career readiness education. She trains partners to integrate culturally balanced, student-centered curriculum design with rigorous data-driven practices with the goal of influencing systems that have traditionally excluded young people of color from college and career opportunities.

Nonabah Lane, a member of the Navajo Nation, is a sustainability specialist and entrepreneur in environmental and culturally conscious business development, energy education, and tribal community commitment. She is a co-founder of Navajo Ethno-Agriculture, a farm that teaches Navajo culture through traditional farming and bilingual education and is active in promoting and developing tribal sustainable energy strategies.

Kate McCall-Kiley, co-founder and director at xD, an emerging technology lab within the U.S. government, works to create new environments and mechanisms for behavior change while experimenting with different ways to productively challenge convention. She served as a White House Presidential Innovation Fellow for the Obama administration, where she worked on projects including vote.gov, The Opportunity Project, worker.gov, BroadbandUSA, and Vice President Joe Biden’s Cancer Moonshot.

Rodney Mullen, co-founder of one of the most dominant skateboarding companies in America, invented many of the tricks in use in skateboarding today and holds two patents related to the sport’s equipment. He has pivoted to work in the open source community, where he finds many parallels between the creativity of skateboarders and hackers. He still skates two hours a day.

Elizabeth Pettit, executive director of Clínica Integral Almas in Álamos, Mexico, which works with remote indigenous communities, is a physician. Medicine and work in rural public health is a second act: Pettit previously was a designer, creating specialty materials for art and architecture and for the film and entertainment industry.

Michael Tubbs, mayor of Stockton, California, has received national attention for his ambitious progressive agenda, which includes securing $20 million to finance scholarships to triple the number of the city’s students entering and graduating from college, and the country’s first universal basic income pilot project. He is the youngest mayor in the history of the country to represent a city with more than 100,000 residents and is Stockton’s first African-American mayor.

Learn more about all of the fellows from all seven cohorts at directorsfellows.media.mit.edu.

Drag-and-drop data analytics

In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.

For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.

In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on users’ datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. When using an interactive whiteboard, everyone can also collaborate in real time.

The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.

“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data System and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”

VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.    
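The paper’s own search machinery is not reproduced here, but the basic AutoML recipe can be sketched with off-the-shelf tools: enumerate a few candidate preprocessing-plus-model pipelines, score each by cross-validation, and keep the best. A minimal sketch using scikit-learn follows; the dataset and the particular pipelines are arbitrary choices for illustration, not those used by VDS.

```python
# Minimal AutoML-style pipeline search with scikit-learn (illustrative; not VDS code).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate pipelines: a preprocessing step plus a model, with a couple of settings each.
candidates = {
    "scaled logistic regression": make_pipeline(StandardScaler(),
                                                LogisticRegression(max_iter=1000)),
    "random forest (100 trees)": make_pipeline(RandomForestClassifier(n_estimators=100,
                                                                      random_state=0)),
    "random forest (300 trees)": make_pipeline(RandomForestClassifier(n_estimators=300,
                                                                      random_state=0)),
}

# Score each candidate by cross-validated accuracy and keep the best,
# the way an AutoML tool maintains a constantly updated leaderboard.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {name}")
print("best pipeline:", max(scores, key=scores.get))
```

A production AutoML system searches a far larger space of pipelines and hyperparameters, but the loop is the same: propose, evaluate, rank, repeat.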

Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig who recently moved from Brown to the Technical University of Darmstadt in Germany.

An “unbounded canvas” for analytics

The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.

Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.

The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patient’s age distribution. Drawing a line between the two boxes links them together. By circling age ranges, the algorithm immediately computes the co-occurrence of the three diseases among the age range.  
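Those drag-and-drop steps amount to a row filter followed by a co-occurrence count. A rough pandas equivalent is sketched below, assuming a toy table with one row per patient and 0/1 disease flags; the column names and values are made up and are not the actual ICU dataset schema.

```python
import pandas as pd

# Toy patient table (made-up values); 1 means the disease category was recorded.
patients = pd.DataFrame({
    "age":        [25, 67, 71, 45, 80, 33, 69],
    "blood":      [0,  1,  1,  0,  1,  0,  1],
    "infectious": [0,  1,  0,  1,  1,  0,  1],
    "metabolic":  [1,  1,  0,  0,  1,  0,  0],
})

# "Circling" an age range in the interface corresponds to a row filter.
selected = patients[(patients["age"] >= 60) & (patients["age"] <= 85)]

# Co-occurrence of all three disease categories within the selected age range.
co_occurs = (selected[["blood", "infectious", "metabolic"]] == 1).all(axis=1)
print(f"{co_occurs.mean():.0%} of patients aged 60-85 have all three diseases")
```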

“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”

Approximating AutoML

With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.

Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other details.

According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage. It automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds. 
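The engine’s internals are not spelled out here, but the progressive-sampling idea can be sketched: compute the statistic of interest on a small random sample first, then keep enlarging the sample and refreshing the answer, so the user sees a fast approximation that converges toward the exact value. The sketch below is my own illustration of that idea, not the Northstar engine.

```python
import random

def progressive_mean(values, batch=1_000, seed=0):
    """Yield successively better estimates of the mean from growing random samples."""
    rng = random.Random(seed)
    shuffled = values[:]            # shuffle a copy so every prefix is a random sample
    rng.shuffle(shuffled)
    total, count = 0.0, 0
    for start in range(0, len(shuffled), batch):
        chunk = shuffled[start:start + batch]
        total += sum(chunk)
        count += len(chunk)
        yield total / count         # quick approximation, refined as more data arrives

data = [random.gauss(50, 10) for _ in range(50_000)]   # stand-in for a cloud-hosted column
for i, estimate in enumerate(progressive_mean(data), start=1):
    if i == 1 or i % 10 == 0:
        print(f"after {i * 1_000:>6} rows: estimated mean = {estimate:.2f}")
```

The first estimate arrives after only a thousand rows, which is why the user can keep interacting while the exact answer is still being computed in the background.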

“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.

“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, “show that the moment you delay giving users results, they start to lose engagement with the system.”

The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’ approximations were just as accurate but were generated within seconds, far faster than tools that take minutes to hours.

Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.  

“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”
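A warning of that kind could be as simple as flagging values outside a plausible range before any model is trained. The toy check below uses the 0-and-200 labeling convention mentioned above as its example; the threshold and data are arbitrary illustrations, not part of the Northstar system.

```python
import pandas as pd

ages = pd.Series([34, 0, 58, 200, 71, 0, 45], name="age")

# Ages of exactly 0 or above 120 are implausible and are likely privacy placeholders;
# flag them before they skew a predictive model.
suspicious = ages[(ages == 0) | (ages > 120)]
if not suspicious.empty:
    print(f"warning: {len(suspicious)} of {len(ages)} age values look like placeholders:",
          list(suspicious))
```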

New AI programming language goes beyond deep learning

A team of MIT researchers is making it easier for novices to get their feet wet with artificial intelligence, while also helping experts advance the field.

In a paper presented at the Programming Language Design and Implementation conference this week, the researchers describe a novel probabilistic-programming system named “Gen.” Users write models and algorithms from multiple fields where AI techniques are applied — such as computer vision, robotics, and statistics — without having to deal with equations or manually write high-performance code. Gen also lets expert researchers write sophisticated models and inference algorithms — used for prediction tasks — that were previously infeasible.

In their paper, for instance, the researchers demonstrate that a short Gen program can infer 3-D body poses, a difficult computer-vision inference task that has applications in autonomous systems, human-machine interactions, and augmented reality. Behind the scenes, this program includes components that perform graphics rendering, deep learning, and types of probability simulations. The combination of these diverse techniques leads to better accuracy and speed on this task than earlier systems developed by some of the researchers.

Due to its simplicity — and, in some use cases, automation — the researchers say Gen can be used easily by anyone, from novices to experts. “One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math,” says first author Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science. “We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems.”

The researchers also demonstrated Gen’s ability to simplify data analytics by using another Gen program that automatically generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data. That builds on the researchers’ previous work that let users write a few lines of code to uncover insights into financial trends, air travel, voting patterns, and the spread of disease, among other trends. This is different from earlier systems, which required a lot of hand coding for accurate predictions.

“Gen is the first system that’s flexible, automated, and efficient enough to cover those very different types of examples in computer vision and data science and give state-of-the-art performance,” says Vikash K. Mansinghka ’05, MEng ’09, PhD ’09, a researcher in the Department of Brain and Cognitive Sciences who runs the Probabilistic Computing Project.

Joining Cusumano-Towner and Mansinghka on the paper are Feras Saad and Alexander K. Lew, both CSAIL graduate students and members of the Probabilistic Computing Project.

Best of all worlds

In 2015, Google released TensorFlow, an open-source library of application programming interfaces (APIs) that helps beginners and experts automatically generate machine-learning systems without doing much math. Now widely used, the platform is helping democratize some aspects of AI. But, although it’s automated and efficient, it’s narrowly focused on deep-learning models, which are both costly and limited compared with the broader promise of AI in general.

But there are plenty of other AI techniques available today, such as statistical and probabilistic models, and simulation engines. Some other probabilistic programming systems are flexible enough to cover several kinds of AI techniques, but they run inefficiently.

The researchers sought to combine the best of all worlds — automation, flexibility, and speed — into one. “If we do that, maybe we can help democratize this much broader collection of modeling and inference algorithms, like TensorFlow did for deep learning,” Mansinghka says.

In probabilistic AI, inference algorithms perform operations on data and continuously readjust probabilities based on new data to make predictions. Doing so eventually produces a model that describes how to make predictions on new data.
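As a concrete, if toy, illustration of that readjustment, the sketch below performs a simple Bayesian update in Python: a coin’s unknown bias is represented by a grid of candidate probabilities, and each new observation re-weights them. This is generic probabilistic inference for illustration only, not Gen’s actual modeling language or API.

```python
# Toy Bayesian updating: infer a coin's bias from observed flips.
# A generic illustration of probabilistic inference, not Gen's API.

def update(prior, observation):
    """Re-weight each candidate bias by how well it explains one flip (1 = heads)."""
    posterior = {}
    for bias, weight in prior.items():
        likelihood = bias if observation == 1 else (1 - bias)
        posterior[bias] = weight * likelihood
    total = sum(posterior.values())
    return {bias: w / total for bias, w in posterior.items()}  # renormalize

# Uniform prior over 11 candidate biases: 0.0, 0.1, ..., 1.0.
belief = {round(b / 10, 1): 1 / 11 for b in range(11)}

for flip in [1, 1, 0, 1, 1, 1, 0, 1]:       # observed data, heads-heavy
    belief = update(belief, flip)

print("most probable bias:", max(belief, key=belief.get))
```

Systems like Gen automate this kind of bookkeeping for far richer models, where the "candidates" are entire program executions rather than a handful of grid points.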

Building off concepts used in their earlier probabilistic-programming system, Church, the researchers incorporate several custom modeling languages into Julia, a general-purpose programming language that was also developed at MIT. Each modeling language is optimized for a different type of AI modeling approach, making it more all-purpose. Gen also provides high-level infrastructure for inference tasks, using diverse approaches such as optimization, variational inference, certain probabilistic methods, and deep learning. On top of that, the researchers added some tweaks to make the implementations run efficiently.

Beyond the lab

External users are already finding ways to leverage Gen for their AI research. For example, Intel is collaborating with MIT to use Gen for 3-D pose estimation from its depth-sense cameras used in robotics and augmented-reality systems. MIT Lincoln Laboratory is also collaborating on applications for Gen in aerial robotics for humanitarian relief and disaster response.

Gen is beginning to be used on ambitious AI projects under the MIT Quest for Intelligence. For example, Gen is central to an MIT-IBM Watson AI Lab project, as well as to the ongoing Machine Common Sense project of the U.S. Defense Advanced Research Projects Agency, which aims to model human common sense at the level of an 18-month-old child. Mansinghka is one of the principal investigators on this project.

“With Gen, for the first time, it is easy for a researcher to integrate a bunch of different AI techniques. It’s going to be interesting to see what people discover is possible now,” Mansinghka says.

Zoubin Ghahramani, chief scientist and vice president of AI at Uber and a professor at Cambridge University, who was not involved in the research, says, “Probabilistic programming is one of the most promising areas at the frontier of AI since the advent of deep learning. Gen represents a significant advance in this field and will contribute to scalable and practical implementations of AI systems based on probabilistic reasoning.”

Peter Norvig, director of research at Google, who also was not involved in this research, praised the work as well. “[Gen] allows a problem-solver to use probabilistic programming, and thus have a more principled approach to the problem, but not be limited by the choices made by the designers of the probabilistic programming system,” he says. “General-purpose programming languages … have been successful because they … make the task easier for a programmer, but also make it possible for a programmer to create something brand new to efficiently solve a new problem. Gen does the same for probabilistic programming.”

Gen’s source code is publicly available and is being presented at upcoming open-source developer conferences, including Strange Loop and JuliaCon. The work is supported, in part, by DARPA.
