Finding a Safe Storm Shelter During a Thunderstorm

Storm Shelters in OKC

Tuesday, June 5, 2001 marked the start of an extraordinary time in the history of my beloved Houston. Tropical Storm Allison came to visit that early summer day. That Tuesday, the storm moved through quickly. Then Friday arrived, and Allison returned, this time moving slowly in from the north. The storm stalled. Thousands of people were driven from their homes. Several leading hospitals closed just when they were needed most. Dozens of important surface roads, and every major highway, were covered in high water.

Yet even before the rain stopped, service to others began, and stories of Christian compassion started to be written. About 75 people had assembled for a couples class at Lakewood Church, one of the largest nondenominational churches in the United States. By the time they got ready to depart, the waters had climbed so high they were stranded. Lakewood’s facility stayed high and dry at the center of one of the hardest-hit parts of town. Refugees from the powerful storm started arriving at its doorstep. With no advance preparation, and without waiting for official sanction, those 75 classmates started a disaster shelter that grew to hold over 3,000 people, the largest of more than 30 shelters established at the height of the storm.

Afterward, Lakewood functioned as a Red Cross Service Center, where help was doled out to those who had suffered losses. When it became clear that FEMA and Red Cross aid would not be enough, Lakewood and Second Baptist Church of Houston joined to create an adopt-a-family plan to help get folks back on their feet more quickly. In the days that followed, armies of Christians arrived at both churches. People of every economic standing, race, and denomination gathered from all over town. Wet, rotted carpeting was pulled up and sheetrock removed. Piles of donated clothes, food, and bedding were doled out. Elbow grease and cleaning equipment were put to work erasing traces of the damage.

If the story stopped here, it would already be an excellent example of practical ministry in a time of disaster, but it continues. Many other churches served as shelters, and in the days that followed, as Red Cross Service Centers. Scores of new volunteers, many of them Christians, were put through accelerated training and put to work. That Saturday I was trapped in my own subdivision, but certain that my family was safe, because I worked at Storm Shelters OKC, near where I lived. What the people would not permit the storm to take was their desire to give, their faith, or their self-respect. I saw so many people praising the Lord as they brought gifts of food, clothes, and bedding. I saw young children coming with their parents to give new or barely used toys to kids who had none.

Leaning On God Through Hard Times

Unity Church of Christianity, located across town in an area also impacted by the storm, sent a sizable supply of bedding and other materials. A small troupe of musicians and Christian clowns arrived and asked to be permitted to entertain the kids in the shelter where I served. We, of course, promptly accepted their offer. They gathered the kids in a large empty stretch of floor. They sang, they told stories, they made balloon animals. The frightened, at least briefly displaced kids laughed.

When not occupied elsewhere, I did a lot of listening. I listened to disappointed survivors and frustrated relief workers. I listened to kids trying to make the best of a situation they could not comprehend. These are only the stories I have seen or heard myself. I know that churches, spiritual groups, and many other individual Christians served admirably, and I want to thank them for their efforts in the disaster. I thank the Lord for providing them to serve.

I didn’t write this so you would feel sorry for Houston or its people. Rather, what I saw as this disaster unfolded strengthened my belief that the Lord will provide for us through our brothers and sisters in faith. No matter how hard your community is hit, you, the individual Christian, can be part of the remedy. Those blankets you have stored away and will probably never use mean much to people who have none. You can help if you can drive. You can help if you can build a cot. You can help if you can scrub a wall. You can help if all you can do is sit and listen. Large catastrophes like Allison get a lot of attention, but a disaster can come in any size. If a single house burns, that is a serious disaster to the family that called it home. It will be generations before the people here forget Allison.

United States Oil and Gas Exploration Opportunities

Firms investing in this sector can explore, develop, and produce, and enjoy the benefits of a global oil and gas portfolio without the usual political and economic disadvantages. The US permitting regime and financial conditions are rated among the best in the world, and petroleum produced in the US is sold at international prices. Firms are also likely to gain because the US has a booming domestic market. Most petroleum exploration in the US has been concentrated around the Taranaki Basin, where some 500 exploration wells have been drilled. The other US sedimentary basins, however, remain largely unexplored; many show evidence of petroleum seeps and structures, and survey data have revealed formations with high hydrocarbon potential. There have been gas discoveries before, onshore and offshore, including the Great South and East Coast basins and offshore Canterbury.

Interest in petroleum is expected to grow strongly during this period, which only brightens the future expectations for this sector. Demand for petroleum is anticipated to reach 338 PJ per annum. The US government is eager to augment the oil and gas supply. Because new discoveries are required to meet national demand, raise the level of self-reliance, and minimize the cost of petroleum imports, the oil and gas exploration sector is regarded as one of the sunrise sectors. The US government has devised a distinctive approach to reach its petroleum and gas exploration targets: it has developed a “Benefit For Attempt” model for petroleum and gas exploration projects in the US.

In current analysis, “Benefit For Attempt” is defined as oil reserves found per kilometer drilled. It helps derive an estimate of reserves found for each kilometer drilled and each dollar spent on exploration. The US government has shown considerable signs that it will bring in favorable changes to encourage exploration for new oil reserves, since the cost of exploration weighs against exploration activity. The government has made information about the country’s oil potential available in its study report. Transparency of information on royalty and allocation regimes, and simplicity of processes, have enhanced the attractiveness of the petroleum and natural gas sector in the United States.
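To make the metric concrete, here is a minimal sketch of the arithmetic in Python. The function name and all figures are hypothetical, invented for illustration, and are not drawn from any government report:

```python
# Hypothetical "Benefit For Attempt" style arithmetic: reserves found
# per kilometer drilled, and per dollar spent. All figures are invented.

def benefit_for_attempt(reserves_bbl, km_drilled, dollars_spent):
    """Return (reserves per km drilled, reserves per dollar spent)."""
    return reserves_bbl / km_drilled, reserves_bbl / dollars_spent

# Example: a campaign that found 2.4 million barrels over 120 km of
# drilling at a cost of $60 million.
per_km, per_dollar = benefit_for_attempt(2_400_000, 120, 60_000_000)
print(f"{per_km:,.0f} bbl/km, {per_dollar:.3f} bbl/$")
# -> 20,000 bbl/km, 0.040 bbl/$
```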

Petroleum was the third-biggest export earner for the US in 2008, and the opportunity to maintain the sector’s growth is broadly available through new exploration endeavors. The government is poised to keep up the momentum in this sector. Many firms are now active with new exploration projects in the Challenger Plateau, the Northland East Slope Basin region, the outer Taranaki Basin, and the Bellona Trough region. The 89 Energy oil and gas sector reassures foreign investors, as the government, to encourage growth, has declared a five-year continuation of an exemption for offshore petroleum and gas exploration in its 2009 budget. The authorities also provide nonresident rig operators with tax breaks.

Modern Robot Duct Cleaning Uses

Heating, ventilation, and air conditioning (HVAC) systems collect pollutants and contaminants like mold, debris, dust, and bacteria that can have an adverse impact on indoor air quality. Most people are now aware that indoor air pollution can be a health concern, and the field has thus gained increased visibility. Studies have also suggested that cleaning enhances a system’s efficiency and contributes to a longer operating life, along with maintenance and energy cost savings. Duct cleaning is the cleaning of the components of forced-air heating, ventilation, and cooling systems. Robots are an advantageous tool, improving both the cost and efficiency of the procedure. Consequently, the use of modern duct-cleaning robots is no longer a new practice.

A clean air duct system creates a cleaner, healthier indoor environment, lowers energy costs, and increases efficiency. As we spend more hours indoors, air duct cleaning has become an important part of the cleaning sector. Indoor pollutant levels can build up. Health effects can show up immediately, or years after repeated or prolonged exposure. These effects range from respiratory diseases to cardiovascular disease and cancer, and can be debilitating or deadly. It is therefore wise to ensure that indoor air quality is not endangered inside buildings. According to the Environmental Protection Agency, levels of dangerous pollutants found indoors can exceed those of outdoor air pollutants.

Duct cleaning by Air Duct Cleaning Edmond professionals removes both visible contaminants and microbial contaminants that may not be visible to the naked eye. These can affect indoor air quality and present a health hazard. Air ducts can host a number of hazardous microbial agents. Legionnaires’ disease is one malady that has gained public notice, as our modern surroundings support the growth of the bacteria that cause the affliction and can lead to outbreaks. Typical disease-causing environments involve moisture-producing equipment, such as the cooling towers of poorly maintained air-conditioned buildings. In summary, in designing and building systems to control our surroundings, we have created perfect conditions for this disease. Those systems must be correctly monitored and maintained. That is the secret to controlling the disease.

Robots allow the job to be done faster while saving workers from exposure. Signs of technological progress in the duct-cleaning business are apparent in the variety of equipment now available, including an array of robotic gear for air duct cleaning. Robots are priceless in hard-to-reach places. Robots once used only to inspect conditions inside a duct can now be used for spraying, cleaning, and sampling procedures. The remote-controlled robotic gear can be fitted with tool and fastener attachments to serve many different functions.

Video recorders and a closed-circuit television camera system can be attached to the robotic gear to view conditions and operations, and for documentation purposes. Inspection devices on the robot examine the inside of the ducts. Robots can travel to particular sections of the system and maneuver around barriers. Some combine cleaning functions with manual control and fit into small ducts. They can deliver a useful viewing range, with some models delivering disinfection, cleaning, inspection, coating, and sealing abilities economically.

Remote-controlled robotic gear comes in various sizes and shapes for different uses. The first use of robotic video cameras was in the 1980s, to record conditions inside ducts. Robotic cleaning systems now have many more uses. These devices provide improved access for better cleaning and reduce labor costs. Lately, the service industries have expanded the roles of small mobile robots, including uses for inspection and duct cleaning.

More improvements are being considered to make an already productive tool even more effective. If you decide to have your heating, ventilation, and cooling system cleaned, it is important to make sure the contractor is qualified to clean all parts of the system. Failure to clean one part of a contaminated system can lead to re-contamination of the entire system.

When To Call A DWI Attorney

Charges or fees against a DWI offender require a qualified Sugar Land criminal defense attorney in order to reduce or dismiss them, so undoubtedly everyone facing them needs a DWI attorney. Even for a first-time violation the penalties can be severe, so being represented by a qualified DWI attorney is vitally important. If you are facing subsequent DWI charges, the punishments can be harsher and can include felony charges. Finding a good attorney is thus a job you should approach as soon as possible.

Every state in America makes its own laws and legislation regarding DWI violations, so you must bear in mind that you should hire a DWI attorney who practices in the state where the violation occurred. This is because they will have the knowledge and expertise of the relevant state law to defend you adequately, and will be familiar with the processes and tests performed to establish your guilt.

As your attorney, they will examine the tests that were carried out at the time of your arrest and the accompanying police evidence to assess whether those tests were accurately performed, conducted by competent staff, and whether the right procedures were followed. Police testimony can also be challenged in court, although it is not often that police testimony is argued against.

When you start looking for a DWI attorney, you should try to find someone who specializes in this kind of case. While many attorneys may be willing to take on your case, these cases call for a lawyer with the specialized knowledge needed to interpret the scientific and medical tests run when you were detained. The first consultation is free and gives you the chance to ask about their experience with these cases and about their fees.

Many attorneys work according to an hourly fee or on a set-fee basis determined by the kind of case. You can find out how they are paid to suit your financial situation, and you may be able to negotiate the terms of their fee. If you cannot afford to hire a private attorney, you can request a court-appointed attorney paid for by the state. Before you hire a DWI attorney, you should make sure you understand the precise charges brought against you and when you are expected to appear in court.

How Credit Card Works

The credit card makes your life easier, supplying an amazing set of options. The credit card is a means of retail trade settlement: a credit system worked through the little plastic card that bears its name. The card itself always takes the same structure, size, and shape, as defined by the ISO 7810 standard. A strip of special material on the card (the substance resembles that of a floppy disk or a magnetic tape) stores all the necessary data. This magnetic strip enables the credit card’s validation. Design has also become an important variable; an enticing credit card design is essential, while still ensuring the card’s reliability and data-keeping properties.

A credit card is supplied to the user only after a bank approves an account, weighing a varied range of variables to ascertain financial reliability. This bank is the credit provider. When an individual makes a purchase, he must sign a receipt to verify the transaction; it records the card details and the amount of money to be paid. Many shops accept electronic authorization for credit cards and use cloud tokenization for authorization. Nearly all verifications are made using a digital verification system, which makes it possible to confirm that the card is valid. Any retailer may also check whether the customer has enough credit to cover the purchase he is attempting to make while staying within his credit limit.
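The article does not spell out what the digital verification system checks, but one standard, publicly documented piece of card validation is the Luhn checksum carried by the card number itself. A minimal sketch in Python, using a well-known test number rather than a real account:

```python
def luhn_valid(card_number: str) -> bool:
    """Check a card number's Luhn checksum.

    Starting from the rightmost digit, double every second digit;
    if doubling yields a two-digit number, subtract 9. The grand
    total must be divisible by 10.
    """
    digits = [int(c) for c in card_number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:   # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: a common test number
print(luhn_valid("4111 1111 1111 1112"))  # False: checksum broken
```

Note that the checksum only catches typos and crude fakes; the real authorization still happens online against the issuing bank.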

As the credit provider, it is up to the bank to keep the user informed of his statement. Banks typically send monthly statements detailing each transaction processed through the card, the outstanding fees, and the amounts owed. This enables the cardholder to ensure all the payments are correct, and to spot mistakes or fraudulent activity to dispute. The bank typically charges interest and sets a minimum repayment amount due by the end of the following billing cycle.

The precise way the interest is charged is normally set out in an initial agreement, and the provider spells out these elements on the back of the credit card statement. Generally, the credit card is a simple form of revolving credit from one month to the next. It can also be a sophisticated financial instrument, with multiple balance segments that afford a greater degree of credit management. Interest rates may also differ from one card to another. Credit card promotion services use appealing incentives to keep their customers and to find new ones along the way.
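To make the billing mechanics concrete, here is a minimal sketch of one month’s interest and minimum payment on a revolving balance. The 18 percent APR, the simple 1/12 monthly proration, and the 2-percent-of-balance minimum are assumptions for illustration, not any real card agreement:

```python
# Illustrative revolving-credit arithmetic. The APR, the monthly
# proration, and the minimum-payment rule are assumed for the example.

APR = 0.18        # assumed 18% annual percentage rate
MIN_RATE = 0.02   # assumed minimum payment: 2% of the new balance

def month_end(balance: float) -> tuple[float, float]:
    """Return (interest charged, minimum payment due) for one cycle."""
    interest = balance * APR / 12      # simple monthly proration
    new_balance = balance + interest
    return interest, MIN_RATE * new_balance

interest, min_due = month_end(1_000.00)
print(f"interest: ${interest:.2f}, minimum due: ${min_due:.2f}")
# -> interest: $15.00, minimum due: $20.30
```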

Why Get Help From A Property Management Company?

One solution for collecting the revenue of your rental home while removing much of the anxiety is to contact and engage a property management company in Oklahoma City, Oklahoma. If you are considering this option and wish to know more, please read the remainder of this post. As many landlords understand, leasing out your piece of real property can be a real cash cow, but that cash flow usually comes with a tremendous amount of worry. Late-night phone calls from tenants, the trouble of marketing the house when you have a vacancy, overdue lease payments you must chase down, and overflowing lavatories all take much of the pleasure out of earning money from leases. One solution that removes much of the anxiety while keeping the earnings is to engage a property management organization.

These businesses act as the go-between for you and the tenant. When you hire a property management company, the tenant never actually needs to know who you are. The company manages the day-to-day relationship with the tenant while you retain the ability to make the final judgments about the home. If you have a vacant unit, the company can handle the marketing for you. Since the company has more connections in a bigger market than you do, and knows the industry better than you do, you’ll find your unit gets filled much more quickly with their help. In addition, the property management company will take care of screening prospective tenants. Depending on the arrangement you have, you may still get the final say on whether a tenant is qualified for the unit, but the day-to-day difficulty of finding a suitable tenant is no longer your problem. They’ll also manage the move-in inspections as well as the inspections required after a tenant moves out.

Once the unit is filled, you can step back and watch the profits. If there is an issue, the company will handle communication with the tenant. You won’t be telephoned if a pipe bursts in the middle of the night. The tenant calls your representative at the company, who then makes the arrangements required to get the issue repaired by a maintenance provider. You may get a phone call a day later, or you may not know there was an issue until you check in with the business. The property management organization will also collect your rental payments. If a tenant is missing payments, the company will do what’s required to collect. In certain arrangements, the organization will also take over paying the taxes, insurance, and mortgage on the piece of property. You need do nothing but enjoy, after all the invoices are paid, the revenue that is sent your way.

With all these advantages, you’re probably wondering what the downside of employing a property management organization must be. The primary factor that stops some landlords from hiring one is the price: you will be paying for all of these services. You must weigh the price against the time you’ll save, time that you may then use to pursue additional revenue-producing efforts or simply to enjoy the fruits of your investment.

Benefits From Orthodontic Care

Orthodontics is the specialty of dentistry centered on the diagnosis and treatment of dental and related facial problems. The outcomes of Norman Orthodontist OKC treatment can be dramatic: lovely smiles, improved oral health, and an enhanced quality of life and cosmetic harmony for many individuals of all ages. Whether or not cosmetic dental attention is needed is an individual’s own choice. Most folks tolerate conditions like various kinds of bite issues or overbites and do not get treated. Nevertheless, a number of us feel more assured with teeth that are properly aligned, appealing, and simpler to maintain. Dental attention can improve both appearance and chewing power. It may also help you speak with clarity or chew better.

Orthodontic attention isn’t only cosmetic in character. It can also benefit long-term oral health. Straight, properly aligned teeth are easier to floss and clean. This can reduce the risk of decay. It may also stop gingivitis, the irritation that troubles gums. Gingivitis occurs once microorganisms gather around the area where the teeth and the gums meet, and untreated gingivitis can end in periodontitis. Such a condition can lead to tooth loss and may destroy the bone that surrounds the teeth. People with harmful bites chew with less efficiency. Some of us with a serious bite problem may have difficulty getting enough nutrients; this can happen when the teeth are not aligned correctly. Repairing bite issues can make it easier to chew and digest meals.

One may also have speech problems when the top and lower front teeth do not align right. These are fixed through therapy, sometimes combined with surgical help. Finally, treatment may help avoid early wear of the rear teeth. As you bite down, your teeth come under a surprising amount of pressure. If your top teeth do not match up properly, they will cause your back teeth to wear down. The most frequently encountered forms of treatment are braces (or a retainer) and headgear. Many people, however, complain about discomfort with this technique, which, unfortunately, is unavoidable. Braces can cause sores, and some individuals have trouble speaking. Dental practitioners, though, say the soreness normally disappears within several days. Occasionally braces cause irritation. If you would like to avoid more unpleasant sensations, stick to fresh, soft, bland food and avoid anything hard or crunchy. In addition, do not take your braces off unless the medical professional says so.

It is advised that you see your medical professional regularly for examinations, to head off potential problems that may appear while you are getting therapy. You will be prescribed a specific dental hygiene routine if necessary. Dental specialists today look after the identification and management of malocclusion. Orthodontia, this specialized branch of dentistry, mainly targets repairing jaw problems and teeth, your smile, and thus your bite. Orthodontists, however, do not only do jaw treatments and emergency dental work; they also handle mild to severe dental conditions that could grow into risky states. You do not have to measure your whole life against a predicament. See a dental specialist, and you will notice just how stunning your smile will soon be.

Letter regarding the Schwarzman College of Computing Task Force update

The following letter was sent to the MIT community on August 15 by Provost Martin A. Schmidt.

To the members of the MIT community:

I write today to update you on the work of the MIT Stephen A. Schwarzman College of Computing Task Force and the status of the College. The comment period for the task force working group reports has ended, and the final versions and executive summary of the reports are now available, along with a summary of the comments received. I am deeply grateful to everyone who so vigorously engaged in the process—as a member of a working group, as an active participant in one of the community forums, or as a contributor to our web-based idea bank.

The working groups had a number of excellent ideas and provided us with a broad range of perspectives. Moving forward, I will be working with our new dean of the College, Dan Huttenlocher, and the School deans to develop implementation plans for the College.

In the near term, we will need to focus on four items. First, we need to define the status of Electrical Engineering and Computer Science (EECS) faculty in the College. Dan is working closely with the EECS leadership and Dean of Engineering Anantha Chandrakasan on this, with their thinking strongly informed by the ideas of the Organizational Structure Working Group. Similarly, Dan and the Institute for Data, Systems, and Society (IDSS) leadership are working to define the status of IDSS faculty in the College. Second, the work of the Faculty Appointments Working Group has evolved the concept of “bridge faculty,” and Dan is working with the school deans to further advance the “cluster” concept of these faculty appointments. Third, we need to define the details of how best to integrate teaching and research on the societal implications of computing into the fabric of the College.

Finally, I would like to create an ongoing advisory mechanism to facilitate input from the MIT community. The process of establishing the College has benefited greatly from broad community engagement. In that spirit, we will endeavor to share regular updates on the College’s status and will communicate means for the community to continue to share their thoughts.

Sincerely,

Martin A. Schmidt

The intersection of technology and war

Pursuing big questions is part of the MIT ethos, says Fiona Cunningham PhD ’19.  

“Walking through the Infinite Corridor, you can see what people are doing in this space. There is such dedication across the Institute to solving big problems. There is dedication to doing the best work, without hubris, and often without a break. I find this so exciting, and it’s a huge part of what makes me so proud to be an alumna. This dedication will stay with me forever.”

Cunningham completed her PhD at the Department of Political Science, where she was also a member of the Security Studies Program. Her work explores how technology affects warfare in the post-Cold War era. She studies how nations — China specifically — plan to use technology in conflict to achieve their aims. 

“I want to understand the changing nature of warfare and how new technologies have become both opportunities and restraints for countries in international politics. These questions are the kinds of questions that global leaders are thinking about when they are grappling with the rise of China, how technology factors into the current U.S.-China trade war, and how technology does or doesn’t fit within national boundaries.”

She received the Lucian Pye Award for outstanding PhD thesis. The award was established by the political science department in 2005 and recipients are determined by the graduate studies committee. Pye was a leading China scholar who taught political science at MIT for 35 years.

“Fiona’s thesis was exemplary. She asked an important question that bears on the future of peace and stability among nations, and conducted an impressive amount of original research about a topic that is especially challenging to study. In this way, she combined academic rigor with policy relevance,” says Taylor Fravel, the Arthur and Ruth Sloan Professor of Political Science and director of the MIT Security Studies Program.

The road to China

Cunningham was born and raised in Australia, where the influences of neighboring East Asia are strong. This is what led to her initial curiosity about the region. After high school, she took a gap year and spent part of it in China, where she was drawn into the culture and politics — and the challenge of learning Chinese.

She returned to Australia for her undergraduate studies and recalls two pivotal experiences that guided her academic path: a visiting semester at Harvard University, where she got a taste for the U.S. approach to studying international relations, and working as a research associate at the Lowy Institute for International Policy, an Australian think tank. There she worked with Rory Medcalf, whose early attention to the international security challenges created by the rise of China really helped shape her research questions, says Cunningham. 

After those experiences, she knew what she wanted to study and she knew she wanted to study at MIT.

“I chose MIT because no other political science graduate program had such strengths in both East Asia and security studies. And, as someone who has always been interested in science and technology and its impact on international politics, the idea that I would be at an Institute where so much brain power is dedicated to advancing the scientific and technological aspects of how our societies, businesses, and militaries operate was amazing!”

A model community 

The Department of Political Science and the Security Studies Program provided a thriving community for Cunningham.   

The faculty and scholars she worked with — Taylor Fravel, Vipin Narang, Barry Posen, Owen Coté, Frank Gavin — are models of how to do rigorous scholarship about the things that really matter for the way our world works, she says: “They somehow contribute fully to the discipline and the public debate, which is both super-human and very inspiring.”

Fravel served as her dissertation chair. “Taylor was my mentor, my professor, and, in addition to that, my co-author. I was so fortunate to be able to learn how to think, research, write, and teach from him in all of those roles.” 

Fravel and Cunningham co-authored a paper in 2015 on China’s nuclear strategy. They have a forthcoming paper that delves further into that topic, examining China’s views of nuclear escalation.

Three women — Lena Andrews ’18, Marika Landau-Wells ’18, and Ketian Zhang ’19 — went through the program with Cunningham. “We really helped each other and we will always have a special bond.”

The support she found in these relationships, plus her family, has been a source of inspiration. “My parents have always encouraged me to do something I was passionate about, do it really well, and to do something that will make a difference,” she says.

Breaking new ground

Cunningham joined George Washington University as assistant professor of political science and international affairs this fall after completing a postdoctoral fellowship at the Center for International Security and Cooperation at Stanford University. 

She chose an academic track because she wants the freedom to continue to pursue the international relations questions she finds most important.  

It is also her strong ambition to continue doing fieldwork, especially within China. 

“I want to see the problems I research through the eyes of people on the front lines. In addition to my fieldwork in China, the Security Studies Program provided me with these kinds of experiences through field trips to U.S. military bases during graduate school. You can’t get that from a book.”

She also looks forward to teaching. “For me, teaching is about teaching students how to think critically about future problems, and how to write and communicate their analysis and their thinking.”

Cunningham had the opportunity to serve as a teaching assistant in undergraduate courses while at MIT. “The students at MIT are so capable. They would bring their STEM background to topics like cybersecurity and the causes of war. I would walk away amazed! If these students are our future, then our world will be in good hands.”

As a professor, she aims to help her students consider the consequences, both intended and unintended, of employing technology. She wants them to think about the political questions that come into play both now and into the future.

MIT really gets you attuned to this crossover of technology and its social and political implications, she explains.  

The San Francisco (California) Bay Area, where she has spent the last year, provided fertile ground for her to dig deeper. 

“Silicon Valley is the innovation engine of the U.S. economy, and arguably the world economy. I’ve been looking around there to see what are the next political science questions. What is the next big question that sits at the intersection of technology and conflict? And what role does great power competition play in the day-to-day life of tech companies? What is the role of individuals and the companies they are running in making decisions that have big political implications?” 

Pursuing big questions is a part of Cunningham’s ethos. This dedication will stay with her forever. 

The music of the spheres

Space has long fascinated poets, physicists, astronomers, and science fiction writers. Musicians, too, have often found beauty and meaning in the skies above. At MIT’s Kresge Auditorium, a group of composers and musicians manifested their fascination with space in a concert titled “Songs from Extrasolar Spaces.” Featuring the Lorelei Ensemble — a Boston, Massachusetts-based women’s choir — the concert included premieres by MIT composers John Harbison and Elena Ruehr, along with compositions by Meredith Monk and Molly Herron. All the music was inspired by discoveries in astronomy.

“Songs from Extrasolar Spaces” was part of an MIT conference on TESS, the Transiting Exoplanet Survey Satellite, launched in April 2018. TESS is an MIT-led NASA mission that scans the skies for evidence of exoplanets: bodies ranging from dwarf planets to giant planets that orbit stars other than our sun. During its two-year mission, TESS and its four highly sensitive cameras survey 85 percent of the sky, monitoring more than 200,000 stars for the temporary dips in brightness that might signal a transit, the passage of a planetary body across that star.
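Conceptually, the search comes down to flagging short-lived dips in an otherwise steady light curve. The sketch below is a deliberately oversimplified thresholding illustration of that idea, not the mission’s actual detection pipeline, which fits transit models and accounts for noise, systematics, and periodicity:

```python
import statistics

def find_dips(brightness, n_sigma=3.0):
    """Flag samples that drop well below the light curve's typical level.

    A toy stand-in for transit detection, for illustration only.
    """
    mid = statistics.median(brightness)
    spread = statistics.pstdev(brightness)
    return [i for i, b in enumerate(brightness)
            if b < mid - n_sigma * spread]

# A flat light curve with a shallow dip (a candidate transit) in the middle.
curve = [1.000] * 20 + [0.990] * 3 + [1.000] * 20
print(find_dips(curve))  # -> [20, 21, 22]
```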

“There is a feeling you get when you look at these images from TESS,” says Ruehr, an award-winning MIT lecturer in the Music and Theater Arts Section and former Guggenheim Fellow. “A sense of vastness, of infinity. This is the sensation I tried to capture and transpose into vocal music.” 

Supported by the MIT Center for Art, Science and Technology’s Fay Chandler Creativity Grant; MIT Music and Theater Arts; and aerospace and technology giant Northrop Grumman, which also built the TESS satellite, the July 30 concert was conceived by MIT Research Associate Natalia Guerrero. Both the conference and concert marked the 50th anniversary of the Apollo 11 moon landing — another milestone in the quest to chart the universe and Earth’s place in it.

A 2014 MIT graduate, Guerrero manages the team finding planet candidates in the TESS images at the MIT Kavli Institute for Astrophysics and Space Research and is also the lead for the MIT branch of the mission’s communications team. “I wanted to include an event that could make the TESS mission accessible to people who aren’t astronomers or physicists,” says Guerrero. “But I also wanted that same event to inspire astronomers and physicists to look at their work in a new way.”

Guerrero majored in physics and creative writing at MIT, and after graduating she deejayed a radio show called “Voice Box” on the MIT radio station WMBR. That transmission showcased contemporary vocal music and exposed her to composers including Harbison and Ruehr. Last year, in early summer, Guerrero contacted Ruehr to gauge her interest in composing music for a still-hypothetical concert that might complement the 2019 TESS conference.

Ruehr was keen on the idea. She was also a perfect fit for the project. The composer had often drawn inspiration from visual images and other art forms for her music. “Sky Above Clouds,” an orchestral piece she composed in 1989, was inspired by the Georgia O’Keeffe paintings she viewed as a child at the Art Institute of Chicago. Ruehr had also created music inspired by David Mitchell’s visionary novel “Cloud Atlas” and Ann Patchett’s “Bel Canto.” “It’s a question of reinterpreting language, capturing its rhythms and volumes and channeling them into music,” says Ruehr. “The source language can be fiction, or painting, or in this case these dazzling images of the universe.”

In addition, Ruehr had long been fascinated by space and stars. “My father was a mathematician who studied fast Fourier transform analysis,” says Ruehr, who is currently composing an opera set in space. “As a young girl, I’d listen to him talking about infinity with his colleagues on the telephone. I would imagine my father existing in infinity, on the edge of space.”

Drawing inspiration from the images TESS beams back to Earth, Ruehr composed two pieces for “Songs from Extrasolar Spaces.” The first, titled “Not from the Stars,” takes its name and lyrics from a Shakespeare sonnet. For the second, “Exoplanets,” Ruehr used a text that Guerrero extrapolated from the titles of the first group of scientific papers published from TESS data. “I’m used to working from images,” explains Ruehr. “First, I study them. Then, I sit down at the piano and try to create a single sound that captures their essence and resonance. Then, I start playing with that sound.”

Ruehr was particularly pleased to compose music about space for the Lorelei Ensemble. “There’s a certain quality in a women’s choir, especially the Lorelei Ensemble, that is perfectly suited for this project,” says Ruehr. “They have an ethereal sound and wonderful harmonic structures that make us feel as if we’re perceiving a small dab of brightness in an envelope of darkness.”

At the 2019 MIT TESS conference, experts from across the globe shared results from the first year of observation in the sky above the Southern Hemisphere, and discussed plans for the second-year trek above the Northern Hemisphere. The composers and musicians hope “Songs from Extrasolar Spaces” brought attention to the TESS mission, offered a new perspective on space exploration, and will perhaps spark further collaborations between scientists and artists. George Ricker, TESS principal investigator; Sara Seager, TESS deputy director of science; and Guerrero presented a pre-concert lecture. “Music has the power to generate incredibly powerful emotions,” says Ruehr. “So do these images from TESS. In many ways, they are more beautiful than any stars we might ever imagine.”

TESS is a NASA Astrophysics Explorer mission led and operated by MIT in Cambridge, Massachusetts, and managed by Goddard Spaceflight Center. Additional partners include Northrop Grumman, based in Falls Church, Virginia; NASA’s Ames Research Center in California’s Silicon Valley; the Harvard-Smithsonian Center for Astrophysics in Cambridge; MIT Lincoln Laboratory; and the Space Telescope Science Institute in Baltimore, Maryland. More than a dozen universities, research institutes, and observatories worldwide are participants in the mission.

What Is Hydrogen Sulfide And How To Remove It

Hydrogen sulfide (H2S) is a dissolved gas known for its pungent, unpleasant smell. It gives off the stench of a rotten egg and has a very foul taste. Apart from being a health hazard (when present in high quantities), H2S can also corrode piping systems. It creates an unpleasant odor in the house and can also turn your fixtures and water black. High-quality sterling silver can turn black almost instantaneously when it comes into contact with hydrogen sulfide. H2S is so pungent that it can cause an odor problem at concentration levels as low as 0.05 mg/l (ppm). It is therefore important to remove it from water and other sources. It is generated in many industries, so oil refineries and similar facilities have well-established ways and means of removing H2S from biogas, oil, and other end products.

Hydrogen Sulfide In Well Water

H2S occurs quite often in well water, where it is caused by the presence of sulfate-reducing bacteria. It can also be present in stored water systems. Bacteria are the most common cause of the presence of hydrogen sulfide, so treatment should focus on eliminating and controlling bacteria. This should be the first line of approach. Shock chlorination is considered the standard treatment for controlling sulfate-reducing bacteria and iron bacteria.

Water Heaters And H2S Problem

There can be situations where H2S is present only in the hot water stored and used in a household. This happens because of a biological reaction involving sulfates in the water, the presence and multiplication of sulfate-reducing bacteria, or the presence of organic matter in the water. Identifying the source of the problem is important before settling on a treatment approach. If the problem is caused by sulfate-reducing bacteria, which thrive in hot water, you must disinfect the water heater using chlorine bleach or hydrogen peroxide. Also make sure the drain is not the source of the hydrogen sulfide and the smell associated with it.

How To Solve The Problem

Disinfection is the first step: the water heater must be thoroughly flushed and made completely free of all types of sediment. This can be done quite easily with the help of a large-diameter garden hose connected to the flush valve at the bottom of the water heater. The water heater must be flushed at the right pressure for around 15 to 30 minutes, which will help remove all sediment. The inlet valve should then be closed, and you should add around 3 to 4 large bottles of hydrogen peroxide. This will help remove the hydrogen sulfide completely.

Other Traditional Methods

Other traditional methods can also be used. Chlorination combined with an activated carbon filter is a simple but effective method. The H2S is oxidized by the chlorine, and the insoluble sulfur particles are removed by the activated carbon filter. The filter also comes in handy for removing any residual chlorine that remains after oxidation of the hydrogen sulfide.

Hence, if you look around, there are professional as well as natural methods that can help remove hydrogen sulfide from water and other sources.

Contact Us:

Chemical Products Industries, Inc.
Address: 7649 SW 34th St, Oklahoma City, OK
Phone: (800) 624-4356

Study measures how fast humans react to road hazards

Imagine you’re sitting in the driver’s seat of an autonomous car, cruising along a highway and staring down at your smartphone. Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel. Once you look back at the road, how much time will you need to safely avoid the collision?

MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers. The findings could help developers of autonomous cars ensure they are allowing people enough time to safely take the controls and steer clear of unexpected hazards.

Previous studies have examined hazard response times while people kept their eyes on the road and actively searched for hazards in videos. In this new study, recently published in the Journal of Experimental Psychology: General, the researchers examined how quickly drivers can recognize a road hazard if they’ve just looked back at the road. That’s a more realistic scenario for the coming age of semiautonomous cars that require human intervention and may unexpectedly hand over control to human drivers when facing an imminent hazard.

“You’re looking away from the road, and when you look back, you have no idea what’s going on around you at first glance,” says lead author Benjamin Wolfe, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We wanted to know how long it takes you to say, ‘A moose is walking into the road over there, and if I don’t do something about it, I’m going to take a moose to the face.’”

For their study, the researchers built a unique dataset that includes YouTube dashcam videos of drivers responding to road hazards — such as objects falling off truck beds, moose running into the road, 18-wheelers toppling over, and sheets of ice flying off car roofs — and other videos without road hazards. Participants were shown split-second snippets of the videos, in between blank screens. In one test, they indicated if they detected hazards in the videos. In another test, they indicated if they would react by turning left or right to avoid a hazard.

The results indicate that younger drivers are quicker at both tasks: Older drivers (55 to 69 years old) required 403 milliseconds to detect hazards in videos, and 605 milliseconds to choose how they would avoid the hazard. Younger drivers (20 to 25 years old) only needed 220 milliseconds to detect and 388 milliseconds to choose.

Those age results are important, Wolfe says. When autonomous vehicles are ready to hit the road, they’ll most likely be expensive. “And who is more likely to buy expensive vehicles? Older drivers,” he says. “If you build an autonomous vehicle system around the presumed capabilities of reaction times of young drivers, that doesn’t reflect the time older drivers need. In that case, you’ve made a system that’s unsafe for older drivers.”

Joining Wolfe on the paper are: Bobbie Seppelt, Bruce Mehler, Bryan Reimer, of the MIT AgeLab, and Ruth Rosenholtz of the Department of Brain and Cognitive Sciences and CSAIL.

Playing “the worst video game ever”

In the study, 49 participants sat in front of a large screen that closely matched the visual angle and viewing distance for a driver, and watched 200 videos from the Road Hazard Stimuli dataset for each test. They were given a toy wheel, brake, and gas pedals to indicate their responses. “Think of it as the worst video game ever,” Wolfe says.

The dataset includes about 500 eight-second dashcam videos of a variety of road conditions and environments. About half of the videos contain events leading to collisions or near collisions. The other half try to closely match each of those driving conditions, but without any hazards. Each video is annotated at two critical points: the frame when a hazard becomes apparent, and the first frame of the driver’s response, such as braking or swerving.
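A minimal sketch of how such annotations might be represented in code; the field names and values are hypothetical, not the dataset’s actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HazardClip:
    """One annotated dashcam clip (hypothetical field names)."""
    path: str
    has_hazard: bool
    hazard_frame: Optional[int] = None    # frame where hazard becomes apparent
    response_frame: Optional[int] = None  # first frame of the driver's response

clips = [
    HazardClip("dashcam_0001.mp4", True, hazard_frame=87, response_frame=112),
    HazardClip("dashcam_0002.mp4", False),  # matched hazard-free control
]
print([c.path for c in clips if c.has_hazard])  # -> ['dashcam_0001.mp4']
```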

Before each video, participants were shown a split-second white noise mask. When that mask disappeared, participants saw a snippet of a random video that did or did not contain an imminent hazard. After the video, another mask appeared. Directly following that, participants stepped on the brake if they saw a hazard or the gas if they didn’t. There was then another split-second pause on a black screen before the next mask popped up.

When participants started the experiment, the first video they saw was shown for 750 milliseconds. But the duration changed during each test, depending on the participants’ responses. If a participant responded incorrectly to one video, the next video’s duration would extend slightly. If they responded correctly, it would shorten. In the end, durations ranged from a single frame (33 milliseconds) up to one second. “If they got it wrong, we assumed they didn’t have enough information, so we made the next video longer. If they got it right, we assumed they could do with less information, so made it shorter,” Wolfe says.
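That adjustment rule is a classic adaptive staircase. A minimal sketch of the idea in Python; the fixed 33-millisecond step size is an assumption for illustration (the study reports only the 750 ms start and the 33 ms-to-1 s range, not its exact step rule):

```python
FRAME_MS = 33    # one video frame, the shortest duration used
MAX_MS = 1000    # the longest duration used
STEP_MS = 33     # assumed step size: one frame per adjustment

def next_duration(duration_ms: int, was_correct: bool) -> int:
    """Adaptive staircase: shorten the snippet after a correct response,
    lengthen it after an error, clamped to the allowed range."""
    duration_ms += -STEP_MS if was_correct else STEP_MS
    return max(FRAME_MS, min(MAX_MS, duration_ms))

d = 750  # the study's starting duration
for correct in [True, True, False, True]:
    d = next_duration(d, correct)
print(d)  # -> 684, after three correct responses and one error
```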

The second task used the same setup to record how quickly participants could choose a response to a hazard. For that, the researchers used a subset of videos where they knew the response was to turn left or right. The video stops, and the mask appears on the first frame that the driver begins to react. Then, participants turned the wheel either left or right to indicate where they’d steer.

“It’s not enough to say, ‘I know something fell into the road in my lane.’ You need to understand that there’s a shoulder to the right and a car in the next lane that I can’t accelerate into, because I’ll have a collision,” Wolfe says.

More time needed

The MIT study didn’t record how long it actually takes people to, say, physically look up from their phones or turn a wheel. Instead, it showed people need up to 600 milliseconds to just detect and react to a hazard, while having no context about the environment.

Wolfe thinks that’s concerning for autonomous vehicles, since they may not give humans adequate time to respond, especially under panic conditions. Other studies, for instance, have found that it takes people who are driving normally, with their eyes on the road, about 1.5 seconds to physically avoid road hazards, starting from initial detection.

Driverless cars will already require a couple hundred milliseconds to alert a driver to a hazard, Wolfe says. “That already bites into the 1.5 seconds,” he says. “If you look up from your phone, it may take an additional few hundred milliseconds to move your eyes and head. That doesn’t even get into time it’ll take to reassert control and brake or steer. Then, it starts to get really worrying.”

Next, the researchers are studying how well peripheral vision helps in detecting hazards. Participants will be asked to stare at a blank part of the screen — indicating where a smartphone may be mounted on a windshield — and similarly pump the brakes when they notice a road hazard.

The work is sponsored, in part, by the Toyota Research Institute.  

Tips For Finding the Right Homes For Rent

Because of a fall in homeownership rates over the past decade or so, there is increased demand for rental homes, which makes finding the right rental home quite difficult. If you are from Oklahoma City or the surrounding areas and are on the lookout for the right rental home, there are a few things you should do. We are sharing a few of them for the benefit of our readers and other stakeholders.

Planning Ahead Is Important

The secret, according to experts and brokers, is to plan ahead. Renters often make the mistake of waiting until the last minute to look for another apartment. When they are in a hurry, renters often end up taking apartments they do not actually want. Hence, it is always better to start looking for rental homes early. Ideally, you should allow at least 60 days or more if you are keen on finding the right rental home in OKC.

Start Early In The Month

Bear in mind that the best rentals, whether in terms of price, location, amenities, or other such things, get booked early in the month. Hence, it is not wise to wait until mid-month to start looking for a new place to live. It is always better to start searching for a rental home around 45 to 60 days before you actually want to move. You should also bear in mind that the second and third weekends of any month tend to be quite busy. If you start looking for rental homes during the first weekend of the month, you can be sure there will be less competition and the best properties will still be available for rent.

Start Online But Don’t Depend Too Much On It

There is nothing wrong with starting to look for your rental apartment on Zillow or Craigslist. Starting your search online is a good way forward. It will help you get a good sense of the pricing and the various amenities you can expect from rented apartments. However, if you are moving to a new city, it may not be advisable to rely on online research alone. This is because it will not tell you enough about the neighborhood or about local amenities such as public transportation, schools, hospitals, markets, and entertainment venues.

Importance Of Taking Professional Help

In many cases, you can come across brokers who help renters find the right rental properties free of charge. However, you have to choose these professionals carefully. Look for those who specialize in rental properties rather than home sales. If you are looking for rental properties in areas where competition and demand are high, you may have to talk to a number of real estate brokers before committing to one, because different real estate agents have relationships with different apartments, homes, and building owners.

Hence, there are many things you must keep in mind when you are planning to find the right home for rent in Nichols Hills, OK.

 

Contact Us:

J Marshall Square

Address: 9017 N University Ave, Oklahoma City, OK
Phone: (405) 702-0060

Behind the scenes of the Apollo mission at MIT

Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface. That moment changed the world in ways that still reverberate today.

MIT’s deep and varied connections to that epochal event — many of which have been described on MIT News — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper Labs) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today.

MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated.

Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.

A new age in electronics

The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways.

The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer Hal Laning. The computer itself, at just one cubic foot, was the first significant use of silicon integrated circuit chips and greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product.

In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, with a given wire passing outside the donut to represent a zero, or through the hole for a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible.
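The read-out rule is simple enough to model directly: each stored word is just the pattern of cores that a given sense wire threads. A toy sketch, with the wiring pattern invented for illustration:

```python
# Toy model of core rope memory. True = the wire passes through a
# core (a 1 bit); False = it passes outside (a 0 bit). The wiring
# pattern below is invented for illustration.

def read_word(threading: list[bool]) -> int:
    """Assemble the read-only word encoded by a wire's threading."""
    word = 0
    for passes_through in threading:
        word = (word << 1) | int(passes_through)
    return word

wire = [True, False, True, True, False]   # threads cores 1, 3, and 4
print(f"{read_word(wire):05b}")           # -> 10110

# Changing even one bit means physically rewiring the rope, which is
# why any late software change required weaving a whole new rope.
```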

As David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing, points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class 16.895J (Engineering Apollo: The Moon Project as a Complex System), which is taught every other year.

As sophisticated as the computer was for its time, computer users today would barely recognize it as such. Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.

Priceless rocks

MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” he has said.

Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” Seismic waves that on Earth would die out within a few minutes continued for two hours on the moon, which turned out to be a result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls.

The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.

MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples.

Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well.

Among other findings, the rocks showed that the ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different from those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created by a giant impact with a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says.

Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. “There’s still a lot of exciting stuff” being found in these samples, Grove says.

Sorting out the facts

In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. “There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says.

For example, many accounts have described the sequence of alarms that came from the guidance computer during the final four minutes of the lunar descent, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot.

But Eyles says the actual reason was a much more complex sequence of events, including a crucial mismatch between two circuits that would occur only in rare circumstances and thus would have been hard to detect in testing, and a probably last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper available online, but he says they are difficult to summarize simply. He thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.

Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says.

In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.”
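
As a rough illustration of that verb/noun style, here is a toy dispatcher in Python. The verb and noun codes, the labels, and the register values are hypothetical stand-ins rather than the real AGC assignments; the sketch only shows how an action code combines with a data code.

```python
# Toy dispatcher in the verb/noun style of the AGC's DSKY interface.
# A "verb" names an action; a "noun" names the data it acts on.
# All codes, labels, and register values below are hypothetical.

NOUNS = {
    36: ("mission clock", [102, 45, 30]),
    62: ("velocity and altitude", [5420, 321, 47]),
}

def monitor(noun):
    label, registers = NOUNS[noun]
    print(f"MONITOR {label}: {registers}")

def display_octal(noun):
    label, registers = NOUNS[noun]
    print(f"DISPLAY {label} (octal): {[oct(r) for r in registers]}")

VERBS = {16: monitor, 5: display_octal}

def key_in(verb, noun):
    """Execute a 'verb noun' pair, e.g. key_in(16, 36)."""
    VERBS[verb](noun)

key_in(16, 36)
key_in(5, 62)
```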

Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”

Behind the scenes

At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper Laboratory, the Instrumentation Lab’s successor, which spun off from MIT in 1973. A few of those, such as the near-legendary “Doc” Draper himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end.

Margaret Hamilton, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside the program itself until an iconic photo of her next to the original stacks of AGC code began making the rounds on social media in the mid-2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads.

Many others who played major roles in that software and hardware development have also had their roles little recognized over the years. For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly told MIT Technology Review. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent.
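
Here is a minimal sketch, in Python, of the priority-driven idea behind Laning’s executive: every job carries a priority, and when capacity runs short the scheduler keeps the highest-priority work and sheds the rest. The task names, priorities, and capacity figure are hypothetical.

```python
import heapq

# Minimal sketch of a priority-driven executive: each job carries a
# priority, and under overload only the highest-priority jobs run.
# Task names, priorities, and the capacity figure are hypothetical.

class Executive:
    def __init__(self):
        self._jobs = []  # min-heap of (-priority, name)

    def schedule(self, priority, name):
        heapq.heappush(self._jobs, (-priority, name))

    def run(self, capacity):
        """Run up to `capacity` jobs, highest priority first; shed the rest."""
        while self._jobs and capacity > 0:
            _, name = heapq.heappop(self._jobs)
            print("running:", name)
            capacity -= 1
        if self._jobs:
            print("shed under overload:", [name for _, name in self._jobs])
            self._jobs.clear()

executive = Executive()
executive.schedule(30, "thruster control")  # must always run
executive.schedule(10, "display update")
executive.schedule(5, "radar data ingest")  # safe to shed
executive.run(capacity=2)  # simulated overload: only two jobs fit
```

Under simulated overload the thruster-control job still runs while lower-priority work is shed, which is, in spirit, what let the landing software keep flying the lunar module through the alarms.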

While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told The Wall Street Journal that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.

Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told MIT News in 2009, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”

J-PAL North America announces second round of competition partners

J-PAL North America, a research center at MIT, will partner with two leading education technology nonprofits to test promising models to improve learning, as part of the center’s second Education, Technology, and Opportunity Innovation Competition. 

Now in its second year, J-PAL North America’s Education, Technology, and Opportunity Innovation Competition supports education leaders in using randomized evaluations to generate evidence on how technology can improve student learning, particularly for students from disadvantaged backgrounds. Last year, J-PAL North America partnered with the Family Engagement Lab to develop an evaluation of a multilingual digital messaging platform, and with Western Governors University’s Center for Applied Learning Science to evaluate scalable models to improve student learning in math.

This year, J-PAL North America will continue its work to support rigorous evaluations of educational technologies aimed at reducing disparities by partnering with Boys & Girls Clubs of Greater Houston, a youth-development organization that provides education and social services to at-risk students, and MIND Research Institute, a nonprofit committed to improving math education.

“Even just within the first and second year of the J-PAL ed-tech competition, there continues to be an explosion in promising new initiatives,” says Philip Oreopoulos, professor of economics at the University of Toronto and co-chair of the J-PAL Education, Technology, and Opportunity Initiative. “We’re excited to try to help steer this development towards the most promising and effective programs for improving academic success and student well-being.”

Boys & Girls Clubs of Greater Houston will partner with J-PAL North America to develop an evaluation of the BookNook reading app, a research-based intervention technology that aims to improve the literacy skills of K-8 students.

“One of our commitments to our youth is to prepare them to be better citizens in life, and we do this through our programming, which supplements the education they receive in school,” says Michael Ewing, director of programs at Boys & Girls Clubs of Greater Houston. “BookNook is one of our programs that we know can increase reading literacy and help students achieve at a higher level. We are excited about this opportunity to conduct a rigorous evaluation of BookNook’s technology because we can substantially increase our own accountability as an organization, ensuring that we are able to track the literacy gains of our students when the program is implemented with fidelity.”

Children who do not master reading at a young age are often placed at a significant disadvantage relative to their peers throughout the rest of their development. However, many effective interventions for struggling readers involve one-on-one or small-group instruction, which places a heavy demand on school resources and teacher time. That demand is particularly hard to meet for schools that are already resource-strapped and facing teacher shortages.

The BookNook app offers a channel to bring research-proven literacy intervention strategies to greater numbers of students through accessible technology. The program is heavily scaffolded so that both teachers and non-teachers can use it effectively, allowing after-school staff like those at Boys & Girls Clubs of Greater Houston to provide adaptive instruction to students struggling with reading.

“Our main priority at BookNook is student success,” says Nate Strong, head of partnerships for the BookNook team. “We are really excited to partner with J-PAL and with Boys & Girls Clubs of Greater Houston to track the success of students in Houston and learn how we can do better for them over the long haul.”

MIND Research Institute will partner with J-PAL North America to develop a scalable model for increasing students’ conceptual understanding of mathematics. MIND’s Spatial Temporal (ST) Math program is a pre-K-8 visual instructional program that leverages the brain’s spatial-temporal reasoning ability, using challenging visual puzzles, non-routine problem solving, and animated informative feedback to help students understand and solve mathematical problems.

“We’re thrilled and honored to begin this partnership with J-PAL to build our capacity to conduct randomized evaluations,” says Andrew Coulson, chief data science officer for MIND. “It’s vital we continue to rigorously evaluate the ability of ST Math’s spatial-temporal approach to provide a level playing field for every student, and to show substantial effects on any assessment. With the combination of talent and experience that J-PAL brings, I expect that we will also be exploring innovative research questions, metrics and outcomes, methods and techniques to improve the applicability, validity and real-world usability of the findings.”

J-PAL North America is excited to work with these two organizations and to continue supporting rigorous evaluations that will help us better understand the role technology should play in learning. Boys & Girls Clubs of Greater Houston and MIND Research Institute will help J-PAL contribute to the growing evidence base on education technology, which can help guide decision-makers in understanding which uses of technology are truly helping students learn amid a rapidly changing technological landscape.

J-PAL North America is a regional office of the Abdul Latif Jameel Poverty Action Lab. J-PAL was established in 2003 as a research center at MIT’s Department of Economics. Since then, it has built a global network of affiliated professors based at over 58 universities and regional offices in Africa, Europe, Latin America and the Caribbean, North America, South Asia, and Southeast Asia. J-PAL North America was established with support from the Alfred P. Sloan Foundation and Arnold Ventures and works to improve the effectiveness of social programs in North America through three core activities: research, policy outreach, and capacity building. J-PAL North America’s education technology work is supported by the Overdeck Family Foundation and Arnold Ventures.

MIT and Fashion Institute of Technology join forces to create innovative textiles

If you knew that hundreds of millions of running shoes are disposed of in landfills each year, would you prefer a high-performance athletic shoe that is biodegradable? Would being able to monitor your fitness in real time and help you avoid injury while you are running appeal to you? If so, look no further than the collaboration between MIT and the Fashion Institute of Technology (FIT). 

For the second consecutive year, students from each institution teamed up for two weeks in late June to create product concepts exploring the use of advanced fibers and technology. The workshops were held collaboratively with Advanced Functional Fabrics of America (AFFOA), a Cambridge, Massachusetts-based national nonprofit whose goal is to enable a manufacturing-based transformation of traditional fibers, yarns, and textiles into highly sophisticated, integrated, and networked devices and systems. 

“Humans have made use of natural fibers for millennia. They are essential as tools, clothing and shelter,” says Gregory C. Rutledge, lead principal investigator for MIT in AFFOA and the Lammot du Pont Professor in Chemical Engineering. “Today, new fiber-based solutions can have a significant and timely impact on the challenges facing our world.” 

The students had the opportunity this year to respond to a project challenge posed by footwear and apparel manufacturer New Balance, a member of the AFFOA network. Students spent their first week in Cambridge learning new technologies at MIT and the second at FIT, a college of the State University of New York, in New York City working on projects and prototypes. On the last day of the workshop, the teams presented their final projects at the headquarters of Lafayette 148 at the Brooklyn Navy Yard, with New Balance Creative Manager of Computational Design Onur Yuce Gun in attendance.

Team Natural Futurism presented a concept for a biodegradable lifestyle shoe that uses natural material alternatives, including bacterial cellulose and mycelium, and advanced fiber concepts to avoid the use of chemical dyes. The result was a shoe concept that is both sustainable and aesthetically appealing. Team members included Giulia de Garay (FIT, Textile Development and Marketing), Rebecca Grekin ’19 (Chemical Engineering), rising senior Kedi Hu (Chemical Engineering/Architecture), Nga Yi “Amy” Lam (FIT, Textile Development and Marketing), Daniella Koller (FIT, Fashion Design), and Stephanie Stickle (FIT, Textile Surface Design).

Team CoMIT to Safety Before ProFIT explored the various ways that runners get hurt, sometimes from acute injuries but more often from overuse. Their solution was to incorporate intuitive textiles, as well as tech elements such as a silent alarm and LED display, into athletic clothing and shoes for entry-level, competitive, and expert runners. The goal is to help runners at all levels to eliminate distraction, know their physical limits, and be able to call for help. Team members included Rachel Cheang (FIT, Fashion Design/Knitwear), Jonathan Mateer (FIT, Accessories Design), Caroline Liu ’19 (Materials Science and Engineering), and Xin Wen ’19 (Electrical Engineering and Computer Science).

“It is critical for design students to work in a team environment engaging in the latest technologies. This interaction will support the invention of products that will define our future,” comments Joanne Arbuckle, deputy to the president for industry partnerships and collaborative programs at FIT.

The specific content of this workshop was co-designed by MIT postdocs Katia Zolotovsky of the Department of Biological Engineering and Mehmet Kanik of the Research Laboratory of Electronics, with assistant professor of fashion design Andy Liu from FIT, to teach the fundamentals of fiber fabrication, 3-D printing with light, sensing, and biosensing. Participating MIT faculty included Yoel Fink, who is CEO of AFFOA and professor of materials science and electrical engineering; Polina Anikeeva, who is associate professor in the departments of Materials Science and Engineering and Brain and Cognitive Sciences; and Nicholas Xuanlai Fang, professor of mechanical engineering. Participating FIT faculty were Preeti Arya, assistant professor, Textile Development and Marketing; Patrice George, associate professor, Textile Development and Marketing; Suzanne Goetz, associate professor, Textile Surface Design; Tom Scott, Fashion Design; David Ulan, adjunct assistant professor, Accessories Design; and Gregg Woodcock, adjunct instructor, Accessories Design.  

Bringing design and engineering together in products made of advanced functional fibers, yarns, and textiles will require a new workforce, one created and inspired by the opportunities ahead. “The purpose of the program is to bring together undergraduate students from different backgrounds, and provide them with a cross-disciplinary, project-oriented experience that gets them thinking about what can be done with these new materials,” Rutledge adds.

The goal of MIT, FIT, AFFOA, and industrial partner New Balance is to accelerate innovation in high-tech, U.S.-based manufacturing involving fibers and textiles, and potentially to create a whole new industry based on breakthroughs in fiber technology and manufacturing. AFFOA, a Manufacturing Innovation Institute founded in 2016, is a public-private partnership between industry, academia, and both state and federal governments.

“Collaboration and teamwork are DNA-level attributes of the New Balance workplace,” says Chris Wawrousek, senior creative design lead in the NB Innovation Studio. “We were very excited to participate in the program from a multitude of perspectives. The program allowed us to see some of the emerging research in the field of technical textiles. In some cases, these technologies are still very nascent, but give us a window into future developments.”  

“The diverse pairing and short time period also remind us of the energy captured in an academic crash course, and just how much teams can do in a condensed period of time,” Wawrousek adds. “Finally, it’s a great chance to connect with this future generation of designers and engineers, hopefully giving them an exciting window into the work of our brand.”

By building upon their different points of view from design and science, the teams demonstrated what is possible when creative individuals from each area act and think as one. “When designers and engineers come together and open their minds to creating new technologies that ultimately will impact the world, we can imagine exciting new multi-material fibers that open up a new spectrum of applications in various markets, from clothing to medical and beyond,” says Yuly Fuentes, MIT Materials Research Laboratory project manager for fiber technologies. 

Professor Emeritus Fernando Corbató, MIT computing pioneer, dies at 93

Fernando “Corby” Corbató, an MIT professor emeritus whose work in the 1960s on time-sharing systems broke important ground in democratizing the use of computers, died on Friday, July 12, at his home in Newburyport, Massachusetts. He was 93.

Decades before the existence of concepts like cybersecurity and the cloud, Corbató led the development of one of the world’s first operating systems. His “Compatible Time-Sharing System” (CTSS) allowed multiple people to use a computer at the same time, greatly increasing the speed at which programmers could work. It’s also widely credited as the first computer system to use passwords.

After CTSS, Corbató led a time-sharing effort called Multics, which directly inspired operating systems like Linux and laid the foundation for many aspects of modern computing. Multics also doubled as a fertile training ground for an emerging generation of programmers, including C programming language creator Dennis Ritchie, Unix developer Ken Thompson, and spreadsheet inventors Dan Bricklin and Bob Frankston.

Before time-sharing, using a computer was tedious and required detailed knowledge. Users would create programs on cards and submit them in batches to an operator, who would enter them to be run one at a time over a series of hours. Minor errors would require repeating this sequence, often more than once.

But with CTSS, which was first demonstrated in 1961, answers came back in mere seconds, forever changing the model of program development. Decades before the PC revolution, Corbató and his colleagues also opened up communication between users with early versions of email, instant messaging, and word processing. 

“Corby was one of the most important researchers for making computing available to many people for many purposes,” says long-time colleague Tom Van Vleck. “He saw that these concepts don’t just make things more efficient; they fundamentally change the way people use information.”

Besides making computing more efficient, CTSS also inadvertently helped establish the very concept of digital privacy itself. With different users wanting to keep their own files private, CTSS introduced the idea of having people create individual accounts with personal passwords. Corbató’s vision of making high-performance computers available to more people also foreshadowed trends in cloud computing, in which tech giants like Amazon and Microsoft rent out shared servers to companies around the world. 

“Other people had proposed the idea of time-sharing before,” says Jerry Saltzer, who worked on CTSS with Corbató after starting out as his teaching assistant. “But what he brought to the table was the vision and the persistence to get it done.”

CTSS was also the spark that convinced MIT to launch “Project MAC,” the precursor to the Laboratory for Computer Science (LCS). LCS later merged with the Artificial Intelligence Lab to become MIT’s largest research lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), which is now home to more than 600 researchers. 

“It’s no overstatement to say that Corby’s work on time-sharing fundamentally transformed computers as we know them today,” says CSAIL Director Daniela Rus. “From PCs to smartphones, the digital revolution can directly trace its roots back to the work that he led at MIT nearly 60 years ago.” 

In 1990 Corbató was honored for his work with the Association for Computing Machinery’s Turing Award, often described as “the Nobel Prize for computing.”

From sonar to CTSS

Corbató was born on July 1, 1926, in Oakland, California. At 17 he enlisted as a technician in the U.S. Navy, where he first got the engineering bug while working on a range of radar and sonar systems. After World War II he earned his bachelor’s degree at Caltech before heading to MIT to complete a PhD in physics.

As a PhD student, Corbató met Professor Philip Morse, who recruited him to work with his team on Project Whirlwind, the first computer capable of real-time computation. After graduating, Corbató joined MIT’s Computation Center as a research assistant, soon moving up to become deputy director of the entire center. 

It was there that he started thinking about ways to make computing more efficient. For all its innovation, Whirlwind was still a rather clunky machine. Researchers often had trouble getting much work done on it, since they had to take turns using it for half-hour chunks of time. (Corbató said that it had a habit of crashing every 20 minutes or so.) 

Since computer input and output devices were much slower than the computer itself, in the late 1950s a scheme called multiprogramming was developed to allow a second program to run whenever the first program was waiting for some device to finish. Time-sharing built on this idea, allowing other programs to run while the first program was waiting for a human user to type a request, thus allowing the user to interact directly with the first program.
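
As a toy illustration of that idea, the Python sketch below simulates a processor cycling among several users’ programs instead of idling while one of them waits. It is a bare round-robin simplification, not CTSS’s actual scheduling policy, and the user names and work units are invented.

```python
from collections import deque

# Toy round-robin simulation of time-sharing: instead of idling while
# one user's program waits, the processor cycles among all of them.
# User names and work units are invented; CTSS's real policy was richer.

def run_time_shared(programs, quantum=1):
    ready = deque(programs.items())  # (user, remaining work units)
    while ready:
        user, remaining = ready.popleft()
        work = min(quantum, remaining)
        print(f"CPU runs {user} for {work} unit(s)")
        remaining -= work
        if remaining:
            ready.append((user, remaining))  # back of the line; others run

run_time_shared({"alice": 2, "bob": 3, "carol": 1})
```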

Saltzer says that Corbató pioneered a programming approach that would be described today as agile design. 

“It’s a buzzword now, but back then it was just this iterative approach to coding that Corby encouraged and that seemed to work especially well,” he says.  

In 1962 Corbató published a paper about CTSS that quickly became the talk of the slowly growing computer science community. The following year MIT invited several hundred programmers to campus to try out the system, spurring a flurry of further research on time-sharing.

Foreshadowing future technological innovation, Corbató was amazed — and amused — by how quickly people got habituated to CTSS’ efficiency.

“Once a user gets accustomed to [immediate] computer response, delays of even a fraction of a minute are exasperatingly long,” he presciently wrote in his 1962 paper. “First indications are that programmers would readily use such a system if it were generally available.”

Multics, meanwhile, expanded on CTSS’ more ad hoc design with a hierarchical file system, better interfaces to email and instant messaging, and more precise privacy controls. Peter Neumann, who worked at Bell Labs when they were collaborating with MIT on Multics, says that its design prevented the possibility of many vulnerabilities that impact modern systems, like “buffer overflow” (which happens when a program tries to write data outside the computer’s short-term memory). 
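
To make the “buffer overflow” reference concrete, here is a toy Python illustration of the bug class being prevented. The analogy is loose: Multics relied on language design and hardware protections rather than checks like these, and the buffer size and helper function here are invented for illustration.

```python
# Toy illustration of the "buffer overflow" bug class. In unchecked
# systems, writing past the end of a fixed-size buffer silently corrupts
# neighboring memory; a bounds-checked design refuses the write instead.
# The buffer size and helper function are invented for illustration.

buffer = [0] * 8  # fixed-size buffer with 8 slots

def store(index, value):
    if not 0 <= index < len(buffer):
        raise IndexError(f"write at index {index} falls outside the buffer")
    buffer[index] = value

store(3, 42)      # in bounds: fine
try:
    store(8, 99)  # one past the end: the classic overflow
except IndexError as err:
    print("blocked:", err)
```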

“Multics was so far ahead of the rest of the industry,” says Neumann. “It was intensely software-engineered, years before software engineering was even viewed as a discipline.” 

In spearheading these time-sharing efforts, Corbató served as a soft-spoken but driven commander in chief — a logical thinker who led by example and had a distinctly systems-oriented view of the world.

“One thing I liked about working for Corby was that I knew he could do my job if he wanted to,” says Van Vleck. “His understanding of all the gory details of our work inspired intense devotion to Multics, all while still being a true gentleman to everyone on the team.” 

Another of the professor’s legacies is “Corbató’s Law,” which states that the number of lines of code someone can write in a day is the same regardless of the language used. Programmers often cite the maxim when arguing for higher-level languages, since each line in such a language accomplishes more.

Corbató was an active member of the MIT community, serving as associate department head for computer science and engineering from 1974 to 1978 and 1983 to 1993. He was a member of the National Academy of Engineering, and a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science. 

Corbató is survived by his wife, Emily Corbató, from Brooklyn, New York; his stepsons, David and Jason Gish; his brother, Charles; his daughters, Carolyn and Nancy, from his marriage to his late wife, Isabel; and five grandchildren.

In lieu of flowers, gifts may be made to MIT’s Fernando Corbató Fellowship Fund via Bonny Kellermann in the Memorial Gifts Office. 

CSAIL will host an event to honor and celebrate Corbató in the coming months. 
