People Should Find A Safe Storm Shelter During A Thunderstorm

Storm Shelters in OKC

Tuesday, June 5, 2001 marked the start of an extraordinary time in the history of my beloved Houston. Tropical Storm Allison came to visit that early summer day. The storm passed through quickly that Tuesday. Then Friday arrived, and Allison returned, this time moving slowly and from the north. The storm stalled. Thousands of people were driven from their homes. Several leading hospitals closed just when they were needed most. Dozens of important surface roads, and every major highway, were covered in high water.

Yet even before the rain stopped, stories of Christian compassion and service to others began to be written. About 75 people had assembled at Lakewood Church, one of the largest nondenominational churches in the United States, for a couples class. By the time they got ready to depart, the waters had climbed so high that they were stranded. Lakewood's facility stayed high and dry at the center of one of the hardest-hit parts of town. Refugees from the powerful storm began arriving at its doorstep. Without any advance preparation or official sanction, those 75 classmates started a disaster shelter that grew to hold over 3,000 people, the largest of more than 30 shelters established at the height of the storm.

Afterward, Lakewood functioned as a Red Cross Service Center, where help was doled out to those who had suffered losses. When it became clear that FEMA and Red Cross aid would not be enough, Lakewood joined with Second Baptist Houston to produce an adopt-a-family plan to help get folks back on their feet more quickly. In the days that followed, armies of Christians arrived at both churches. People of every economic standing, race, and denomination gathered from all over town. Wet, rotted carpet was pulled up and sheetrock removed. Piles of donated clothes, food, and bedding were doled out. Elbow grease and cleaning equipment were used to start eliminating traces of the damage.

If the story stopped here, it would still be an excellent example of practical ministry in a time of disaster, but it continues. Many other churches served as shelters, and in the days that followed, as Red Cross Service Centers. Scores of new volunteers, many of them Christians, were put through accelerated training and put to work. That Saturday, I was trapped in my own subdivision, yet certain that my family was safe, because I worked at Storm Shelters OKC, near where I used to live. What the survivors would not permit the storm to take was their desire to live out their faith, or their self-respect. I saw so many people praising the Lord as they brought gifts of food, clothes, and bedding. I saw young kids coming with their parents to give new or rarely used toys to kids who had none.

Leaning On God Through Hard Times

Unity Church of Christianity, from a part of town also affected by the storm, sent a sizable supply of bedding and other materials. A small troupe of musicians and Christian clowns arrived and asked to be allowed to entertain the kids in the shelter where I served. We of course promptly accepted their offer. They gathered the kids in a large empty stretch of floor. They sang, they told stories, they made balloon animals. The frightened, at least briefly displaced kids laughed.

When not occupied elsewhere, I did a lot of listening. I listened to disappointed survivors and frustrated relief workers. I listened to kids trying to make the best of a situation they could not comprehend. These are only the stories I have seen or heard myself. I know that churches, spiritual groups, and many other individual Christians served admirably. I want to thank them for their efforts in the disaster. I thank the Lord for providing them to serve.

I didn't write this so you would feel sorry for Houston or its people. Rather, what I saw as this disaster unfolded strengthened my belief that the Lord will provide for us through our brothers and sisters in faith. No matter how badly your community is hit, you, the individual Christian, can be part of the remedy. Those blankets you have stored away and will probably never use mean much to people who have none. You can help if you can drive. You can help if you can make up a cot. You can help if you can scrub a wall. You can help if all you can do is sit and listen. Large catastrophes like Allison get a lot of attention, but a disaster can come in virtually any size. If a single house burns, that is a serious disaster to the family that called it home. It will be generations before the people here forget Allison.

United States Oil and Gas Exploration Opportunities

Firms investing in this sector can explore, develop, and produce, and enjoy the advantages of a global oil and gas portfolio without the political and economic disadvantages. The US permitting regime and financial conditions are rated among the best in the world, and the petroleum produced in the US is sold at international prices. Firms are also likely to gain because the US has a booming domestic market. Most petroleum exploration in the US has been concentrated around the Taranaki Basin, where 500 exploration wells have been drilled. The remaining US sedimentary basins are still largely unexplored; many show evidence of petroleum seeps and structures, and survey data have also revealed formations with high hydrocarbon potential. There have been onshore gas discoveries in the past, including in the Great South and East Coast basins and offshore Canterbury.

Interest in petroleum is expected to grow strongly during this period, which only brightens the future expectations for this sector. Demand for petroleum is anticipated to reach 338 PJ per annum. The US government is eager to augment the oil and gas supply. Because new discoveries are required to meet domestic demand, raise the level of self-reliance, and minimize the cost of petroleum imports, the oil and gas exploration sector is considered one of the dawn sectors. The US government has devised a distinctive approach to reach its petroleum and gas exploration targets: it has developed a “Benefit For Attempt” model for petroleum and gas exploration projects in the US.

In this analysis, the “Benefit For Attempt” is defined as oil reserves found per kilometer drilled. It helps derive an estimate of the reserves found for each kilometer drilled and each dollar spent on exploration. The US government has shown considerable signs that it will make changes favoring the exploration of new oil reserves, since the cost of exploration weighs against exploration activity. The government has made information about the country's oil potential available in its study report. Transparency in royalty and allocation regimes, and simplicity of procedures, have enhanced the attractiveness of the petroleum and natural gas sector in the United States.
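Read as formulas, and assuming only the plain meaning of the definition above (the notation is ours, not the report's):

```latex
\mathrm{Benefit\ For\ Attempt} = \frac{\text{oil reserves found}}{\text{kilometers drilled}},
\qquad
\text{cost efficiency} = \frac{\text{oil reserves found}}{\text{dollars spent on exploration}}
```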

Petroleum was the third-biggest export earner for the US in 2008, and the chance to keep up the sector's growth is broadly available by way of new exploration endeavors. The government is poised to maintain the momentum in this sector. Many firms are now active with new exploration projects in the Challenger Plateau of the United States, the Northland East Slope Basin region, the outer Taranaki Basin, and the Bellona Trough region. The 89 Energy oil and gas sector reassures foreign investors, as the government, to encourage growth, has declared a five-year continuance of an exemption for offshore petroleum and gas exploration in its 2009 budget. The authorities also provide nonresident rig operators with tax breaks.

Modern Robot Duct Cleaning Uses

Heating, ventilation, and air conditioning systems collect pollutants and contaminants like mold, debris, dust, and bacteria that can have an adverse impact on indoor air quality. Most folks are now aware that indoor air pollution can be a health concern, and the field has thus gained increased visibility. Studies have also suggested that cleaning these systems enhances their efficiency and contributes to a longer operating life, along with maintenance and energy cost savings. Duct cleaning is the cleaning of the components of forced-air heating, ventilating, and cooling systems. Robots are an advantageous tool, improving both the cost and the efficiency of the procedure. Using modern robots for duct cleaning is therefore no longer a new practice.

A clean air duct system creates a cleaner, healthier indoor environment while lowering energy costs and increasing efficiency. As we spend more hours indoors, air duct cleaning has become an important part of the cleaning sector. Indoor pollutant levels can build up. Health effects can show up immediately or years after repeated or prolonged exposure. These effects range from respiratory diseases to cardiovascular disease and cancer, and can be debilitating or deadly. It is therefore wise to ensure that indoor air quality is not endangered inside buildings. According to the Environmental Protection Agency, levels of dangerous pollutants found indoors can exceed those of outdoor air pollutants.

Duct cleaning by Air Duct Cleaning Edmond professionals removes both visible contaminants and microbial contaminants that may not be visible to the naked eye. These can impact indoor air quality and present a health hazard. Air ducts can host a number of hazardous microbial agents. Legionnaires' disease is one malady that has received public notice, since our modern surroundings support the growth of the bacteria that cause the affliction and have the potential to cause outbreaks. Typical disease-causing surroundings involve moisture-producing gear, such as poorly maintained cooling towers in air-conditioned buildings. In summary, in designing and building systems to control our surroundings, we have created perfect conditions for these organisms. Those systems must be correctly monitored and maintained. That is the secret to controlling this disease.

Robots allow the job to be done faster while saving workers from exposure. Evidence of technological progress in the duct cleaning business is apparent in the variety of equipment now available, for example the array of robotic gear used in air duct cleaning. Robots are priceless in hard-to-reach places. Robots once used only to observe conditions inside the duct may now be used for spraying, cleaning, and sampling procedures. The remote-controlled robotic gear can be fitted with practical tool and fastener attachments to serve many different functions.

Video recorders and a closed-circuit television camera system can be attached to the robotic gear to view conditions and operations and for documentation purposes. Inspection devices on the robot examine the inside of the ducts. Robots can travel to particular sections of the system and move around barriers. Some combine functions that enable cleaning operations and manual instruction, and fit into small ducts. They can deliver a useful viewing range, with models offering disinfection, cleaning, inspection, coating, and sealing capabilities economically.

The remote-controlled robotic gear comes in various sizes and shapes for different uses. The first use of robotic video cameras was in the 1980s, to record conditions inside ducts. Robotic cleaning systems now have many more uses. These devices provide improved access for better cleaning and reduce labor costs. Lately, the service industries have expanded the roles of small mobile robots, including uses for inspection and duct cleaning.

More improvements are being considered to make an already productive tool even more effective. If you decide to have your heating, ventilation, and cooling system cleaned, it is important to make sure the contractor is qualified to clean all parts of the system, and does so. Failure to clean one part of a contaminated system can lead to re-contamination of the entire system.

When To Call A DWI Attorney

Charges or fees against a DWI offender require a qualified Sugar Land criminal defense attorney in order to reduce or dismiss the charges or fees. So a DWI attorney is undoubtedly needed by anyone facing them. Even if it is a first-time violation, the penalties can be severe, so being represented by a qualified DWI attorney is vitally important. If you are facing subsequent charges for DWI, the punishments can be severe and include felony charges. Finding an excellent attorney is thus a job you should approach as soon as possible.

Every state within America makes its own laws and legislation regarding DWI violations, so you must bear in mind that you should hire a DWI attorney who practices within the state where the violation occurred. This is because they will have the knowledge and expertise of the relevant state law to defend you sufficiently, and will be familiar with the processes and tests performed to establish your guilt.

As your attorney, they will look into the tests that were completed at the time of your arrest and the accompanying police evidence to assess whether those tests were accurately performed, carried out by competent staff, and whether the right procedures were followed. Police testimony can also be challenged in court, although it is not often that police testimony is argued against.

When you start looking for a DWI attorney, you should attempt to locate someone who specializes in these kinds of cases. While many attorneys may be willing to take on your case, a lawyer who specializes in these cases has the skilled knowledge needed to interpret the scientific and medical tests run when you were detained. The first consultation is free and provides you with the chance to inquire about their experience in these cases and their fees.

Many attorneys will work according to an hourly fee or on a set-fee basis determined by the kind of case. You may find that how they are paid can be arranged to suit your financial situation, and you will have the capacity to negotiate the terms of their fee. If you are unable to afford to hire a private attorney, you can request a court-appointed attorney paid for by the state. Before you hire a DWI attorney, you should make sure you understand the precise charges imposed against you and when you are expected to appear in court.

How A Credit Card Works

The credit card makes your life easier, supplying an amazing set of options. The credit card is a retail transaction settlement and credit system, worked through the little plastic card that bears its name. The card itself consistently takes the same structure, size, and shape, as regulated by the ISO 7810 standard. A strip of special material on the card (the substance resembles that of a floppy disk or a magnetic tape) stores all the necessary data. This magnetic strip enables the credit card's validation. The layout has also become an important factor: an enticing credit card design is essential, alongside the reliability of its data-keeping properties.

A credit card is supplied to the user only after a bank approves an account, weighing a varied range of variables to ascertain financial reliability. This bank is the credit provider. When an individual makes a purchase, he must sign a receipt to verify the transaction; the receipt records the card details and the amount of cash to be paid. Many shops accept electronic authorization for credit cards and use cloud tokenization for authorization. Nearly all verifications are made using a digital verification system that confirms the card is valid. Any retailer may also check whether the customer, staying within his credit limit, has enough credit to cover the purchase he is attempting to make.
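The passage above does not spell out the checksum, but card numbers themselves carry one. As a hedged illustration, the standard Luhn (mod-10) check that card numbers are designed to pass looks like this:

```python
def luhn_valid(card_number: str) -> bool:
    """Check a card number against the Luhn (mod-10) checksum.

    This catches most single-digit typos and transpositions; it says
    nothing about whether the account exists or has available credit.
    """
    digits = [int(d) for d in card_number if d.isdigit()]
    if len(digits) < 12:          # typical card numbers are 12-19 digits
        return False
    total = 0
    # Walk from the rightmost digit; double every second digit.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9            # same as summing the two digits
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # True: a well-known test number
```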

As the credit provider, it is up to the bank to keep the user informed of his statement. Banks typically send monthly statements detailing each transaction processed through the card, the outstanding fees, and the sums owed. This enables the cardholder to ensure that all the payments are right, and to discover mistakes or fraudulent activity to dispute. Interest is typically charged at the end of the following billing cycle, and the bank establishes a minimal repayment amount.

The precise way the interest is charged is normally set out in an initial agreement, and the provider specifies these elements on the back of the credit card statement. Generally, the credit card is a simple form of revolving credit from one month to the next. It can also be a sophisticated financial instrument with many balance sections, affording a greater degree of credit management. Interest rates may also differ from one card to another. The credit card promotion services use appealing incentives to keep their customers and find some new ones along the way.
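As a hedged numerical sketch of that month-to-month revolving mechanic (the balance, rate, and minimum-payment rule below are invented for illustration, not taken from any agreement):

```python
# Illustrative only: one simplified revolving-credit month, assuming
# interest is charged monthly as APR / 12 on the carried balance.
balance = 1_000.00     # carried from last cycle (hypothetical)
apr = 0.18             # 18% annual rate (hypothetical)
min_repay_rate = 0.03  # minimum repayment: 3% of balance (hypothetical)

interest = balance * apr / 12
balance += interest
minimum_payment = max(25.0, balance * min_repay_rate)

print(f"interest charged: ${interest:.2f}")         # $15.00
print(f"new balance:      ${balance:.2f}")          # $1015.00
print(f"minimum payment:  ${minimum_payment:.2f}")  # $30.45
```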

Why Get Help From A Property Management Company?

One solution for collecting the revenue of your rental home while removing much of the anxiety is to contact and engage property management in Oklahoma City, Oklahoma. If you are considering the service and wish to know more, please read the remainder of the post. As many landlords understand, leasing out your piece of real property can be a real cash cow, but that cash flow usually comes with a tremendous amount of concern. Night phone calls from tenants, the trouble of marketing the house when you have a vacancy, overdue lease payments you must chase down, and overflowing lavatories all take a lot of the pleasure out of earning money off of leases. One solution that delivers the earnings while removing much of the anxiety is to engage a property management organization.

These businesses act as the go-between for the tenant and you. When you hire a property management company, the tenant will never actually need to know who you are. The company manages the day-to-day relationship with the tenant while you still possess the ability to make the final judgments regarding the home. If you are in possession of a unit that is vacant, the company will manage the marketing for you. Since the company has more connections in a bigger market, and knows the industry better than you do, you will discover your unit gets filled a whole lot more quickly with their aid. In addition, the property management company will take care of screening prospective tenants and help prospects move in by partnering with the right home services and moving company. Depending on the arrangement you have, you may still be able to get the last say regarding whether a tenant is qualified for the unit, but the day-to-day difficulty of locating a suitable tenant is no longer your problem. They will also manage the pre-move-in inspections as well as the inspections required after a tenant moves out.

After the unit is filled, you can step back and watch the profits. If there is an issue, the company will handle communication with the tenant. You will not be telephoned if a pipe bursts in the middle of the night. The tenant calls your representative at the company, who then makes the required arrangements to get the issue repaired by a maintenance provider. You may get a phone call a day later, or may not know there was an issue at all until you check in with the business. The property management organization will also collect your rental payments for you. If your tenant is late making a payment, the company will do what is required to collect. In certain arrangements, the organization will also take over paying the taxes, insurance, and mortgage on the piece of property. You really need do nothing but enjoy the revenue that is sent your way after all the bills are paid.

With all these advantages, you are probably questioning what the downside to employing a property management organization must be. The primary factor that stops some landlords from hiring one is the price: all these services will be paid for by you. You must weigh the price against the time you will save, time that you may subsequently use to pursue additional revenue-producing efforts or simply to take pleasure in the fruits of your investment work.

Benefits From Orthodontic Care

Orthodontics is the specialty of dentistry centered on the diagnosis and treatment of dental and related facial problems. The outcomes of Norman Orthodontist OKC treatment can be dramatic: lovely smiles, improved oral health, better aesthetics, and an advanced quality of life for many individuals of all ages. Whether cosmetic dentistry care is needed or not is an individual's own choice. Most folks tolerate conditions like various kinds of bite issues or overbites and don't get treated. Nevertheless, a number of people feel more assured with teeth that are correctly aligned, appealing, and simpler to care for. Orthodontic care may enhance appearance and bite strength. It may likewise help you speak with clarity or chew better.

Orthodontic care isn't only cosmetic in character. It may also benefit long-term oral health. Straight, correctly aligned teeth are easier to floss and clean. This may ease cleaning and decrease the risk of decay. It may also prevent gingivitis, the inflammation that damages gums, which occurs once microorganisms gather around the area where the teeth and the gums meet. Untreated, gingivitis can end in periodontitis. Such a condition may ruin the bone that surrounds the teeth and lead to tooth loss. People with harmful bites chew less efficiently. Some people with a serious bite problem may have difficulty obtaining enough nutrients; this can happen when the teeth aren't aligned correctly. Repairing bite issues may make it easier to chew and digest meals.

One may also have speech problems when the top and lower front teeth do not align right. These can be fixed through therapy, occasionally combined with medical help. Finally, treatment may help avoid early wear of back teeth. Your teeth sustain a great amount of pressure as you bite down. If your top teeth do not match up, it will cause your back teeth to degrade. The most frequently encountered types of treatment are braces (or a retainer) and headgear. But a lot of people complain about the discomfort of this technique, which, unfortunately, is unavoidable. Braces can cause sores, and some individuals have problems talking. Dental practitioners, though, say the hurting normally disappears within several days. Occasionally braces cause annoyance. If you would like to avoid more unpleasant sensations, you should keep to fresh, soft, bland food. In addition, do not take your braces off unless the medical professional says so.

It is advised that you see your medical professional often for examinations, to prevent possible problems that may appear while you are getting therapy. You will be prescribed a specific dental hygiene regimen if necessary. A dental specialist can look out for and manage malocclusion today. Orthodontia, this specialty of medicine, mainly targets repairing jaw problems and teeth, and thus your smile and your bite. Orthodontists, however, won't only do jaw remedies and emergency teeth. They also handle mild to severe dental conditions that may grow into risky states. You really don't have to measure your whole life against one predicament. See a dental specialist, and you'll notice plenty that is stunning about your smile.

Making art through computation

Chelsi Cocking is an interdisciplinary artist who explores the human body with the help of computers. For her work, she develops sophisticated software to use as her artistic tools, including facial detection techniques, body tracking software, and machine learning algorithms.

Cocking’s interest in the human body stems from her childhood training in modern dance. Growing up in Kingston, Jamaica, she equally loved the arts and sciences, refusing to pick one over the other. For college, “I really wanted to find a way to do both, but it was hard,” she says. “Luckily, through my older brother, I found [the field of] computational media at Georgia Tech.” There, she learned to develop technology for computer-based media, such as animation and graphics.

In her final year of undergrad, Cocking took a studio class where she worked with two other students on a dance performance piece. Together, they tracked the movements of three local dancers and projected visualizations of these movements in real-time. Cocking quickly fell in love with this medium of computational art. But before she could really explore it, she graduated and left to start a full-time job in product design that she had already lined up. 

Cocking worked in product design for four years, first at a startup, then at Dropbox. “In the back of my mind, I always wanted to go back to grad school” to continue exploring computational art, she says. “But I didn’t really have the courage to do so.” When the pandemic hit and everything moved online, she saw an opportunity to chase her dreams. With encouragement from her family, she sought out online courses at the School for Poetic Computation, while still keeping her day job. As soon as she started, everything clicked: “This is what I want to do,” she says.

Through the school, Cocking heard that her current advisor, Zach Lieberman, an adjunct associate professor in the Media Lab, had an opening in his research group, the Future Sketches group. Now, she spends each day exploring new ideas for making art through computation. “Fun is enough justification for my research,” she says.

A long-awaited return to computational art

When Cocking first joined the Future Sketches group last fall, she was filled with ideas and armed with strong design skills, which she had developed as a product designer. But she had also been on a four-year hiatus from full-time coding and needed to get back in shape. After consulting with Lieberman, she set out on a project where she could ramp up her coding skills while still exploring her interests in the human body.

For this project, Cocking delved into a new medium: photography. In a series of images entitled Photorythms, she took photographic portraits of people and manipulated them using techniques from facial detection. “Within facial detection, you get 68 points of your face,” she says. “Using those points, you can manipulate how the image looks to create more expressive portrait photography.” Many of her images slice portraits using a particular shape, such as concentric rings or vertical stripes, and reassemble them in different configurations, reminiscent of cubism.
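A hedged sketch of the kind of pipeline Photorythms suggests (the vertical-strip slicing rule here is invented for illustration, and dlib's pretrained 68-point landmark model stands in for whichever facial detection Cocking actually uses):

```python
# Sketch: detect 68 facial landmarks, then slice and reshuffle the portrait.
# Assumes dlib's pretrained "shape_predictor_68_face_landmarks.dat" is on disk.
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = np.array(Image.open("portrait.jpg").convert("RGB"))
face = detector(img)[0]                      # take the first detected face
shape = predictor(img, face)
points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]

# Center the slicing on the face (mean of the landmark points).
cx = int(np.mean([x for x, _ in points]))

# Cut the image into vertical strips and mirror-reorder them around cx,
# a cubism-like rearrangement (one of many possible slicing rules).
strip_w = 40
strips = [img[:, x:x + strip_w] for x in range(0, img.shape[1], strip_w)]
pivot = cx // strip_w
reordered = strips[pivot::-1] + strips[pivot + 1:]
Image.fromarray(np.hstack(reordered)).save("photorythm_sketch.jpg")
```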

Through Photorythms, Cocking also adopted a practice of “daily sketching” from her advisor, where she develops new code every day to generate a new piece of art. If the resulting work turns out to be something she’s proud of, she shares it with the world, sometimes through Instagram. But “even if the code doesn’t amount to anything, [I’m] sharpening [my] coding skills every day,” she says.

Now that she’s reacclimated to intensive coding, “I really want to dive into body tracking this summer,” Cocking says. She’s currently in the ideation phase, brainstorming different ways to interactively combine body tracking and live performance. “I am half-scared and half-excited,” she says.

To help generate ideas, she’s participating in an intensive five-day workshop in early July that will bring together artists interested in computational art for dance. Cocking plans to attend the workshop with her best friend from college, Raianna Brown, who’s a dancer. “We’re going to be there for a week in Chatham [UK], just playing around with choreography and code,” she says. “Hopefully that can spark new ideas and new relationships” for future collaborations.

Spreading love for coding and design

Throughout her circuitous and hard-working journey to computational art, “I’ve never taken the position that I was in for granted,” Cocking says. She recognizes the value of having access to opportunities from her own experience, with a self-sustaining cycle of access in one place opening doors for her in another place. But “there’s so many people that I’m surrounded by who are intelligent and talented but don’t have access to opportunities,” especially in computer science and design, she says. Because of this, since college, Cocking has devoted some of her time to providing access to these fields to children and young professionals from underrepresented backgrounds.

This past spring, Cocking worked with fellow Media Lab student Cecilé Sadler to develop a workshop for introducing kids to coding concepts in a fun way. The two partners taught the workshop in parallel at different places in May and June: Sadler taught a series in Cambridge in collaboration with blackyard, a grassroots organization centering Black, Indigenous, and POC youth, while Cocking returned to her home country of Jamaica and taught at the Freedom Skatepark youth center near Kingston.

To get the workshop curriculum to Jamaica, Cocking reached out to her friend Rica G., who teaches computer science at the Freedom Skatepark youth center. Together, they co-taught the curriculum over several weeks. “I was so nervous [the kids] would just walk out,” Cocking says. “But they actually liked it!”

Cocking hopes to use this workshop as a stepping stone to someday establish “a core center for kids in Jamaica to explore creative coding or computational art,” she says. “Hopefully people will see coding as a tool for creation and expression without feeling intimidated, and use it to make the world a little weirder.”

Q&A: Neil Thompson on computing power and innovation

Moore’s Law is the famous prognostication by Intel co-founder Gordon Moore that the number of transistors on a microchip would double every year or two. This prediction has mostly been met or exceeded since the 1970s — computing power doubles about every two years, while better and faster microchips become less expensive.

This rapid growth in computing power has fueled innovation for decades, yet in the early 21st century researchers began to sound alarm bells that Moore’s Law was slowing down. With standard silicon technology, there are physical limits to how small transistors can get and how many can be squeezed onto an affordable microchip.

Neil Thompson, an MIT research scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Sloan School of Management, and his research team set out to quantify the importance of more powerful computers for improving outcomes across society. In a new working paper, they analyzed five areas where computation is critical, including weather forecasting, oil exploration, and protein folding (important for drug discovery). The working paper is co-authored by research assistants Gabriel F. Manso and Shuning Ge.

They found that between 49 and 94 percent of improvements in these areas can be explained by computing power. For instance, in weather forecasting, increasing computer power by a factor of 10 improves three-day-ahead predictions by one-third of a degree.

But computer progress is slowing, which could have far-reaching impacts across the economy and society. Thompson spoke with MIT News about this research and the implications of the end of Moore’s Law.

Q: How did you approach this analysis and quantify the impact computing has had on different domains?

A: Quantifying the impact of computing on real outcomes is tricky. The most common way to look at computing power, and IT progress more generally, is to study how much companies are spending on it, and look at how that correlates to outcomes. But spending is a tough measure to use because it only partially reflects the value of the computing power being purchased. For example, today’s computer chip may cost the same amount as last year’s, but it is also much more powerful. Economists do try to adjust for that quality change, but it is hard to get your hands around exactly what that number should be. For our project, we measured the computing power more directly — for instance, by looking at capabilities of the systems used when protein folding was done for the first time using deep learning. By looking directly at capabilities, we are able to get more precise measurements and thus get better estimates of how computing power influences performance.
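As a hedged illustration of that approach (the numbers below are invented placeholders, not measurements from the paper), one can regress outcome quality on the logarithm of measured computing power and read the gain per tenfold increase off the slope:

```python
# Sketch: estimate how performance scales with compute via a log-linear fit.
# Data below are invented placeholders, not the paper's measurements.
import numpy as np

compute = np.array([1e12, 1e13, 1e14, 1e15, 1e16])  # FLOP/s of systems used
error = np.array([2.1, 1.8, 1.4, 1.1, 0.8])         # forecast error (deg C)

# Fit error = a + b * log10(compute); b is the change per 10x compute.
b, a = np.polyfit(np.log10(compute), error, 1)
print(f"each 10x in compute changes error by {b:.2f} degrees")
```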

Q: How are more powerful computers enabling improvements in weather forecasting, oil exploration, and protein folding?

A: The short answer is that increases in computing power have had an enormous effect on these areas. With weather prediction, we found that there has been a trillionfold increase in the amount of computing power used for these models. That puts into perspective how much computing power has increased, and also how we have harnessed it. This is not someone just taking an old program and putting it on a faster computer; instead users must constantly redesign their algorithms to take advantage of 10 or 100 times more computer power. There is still a lot of human ingenuity that has to go into improving performance, but what our results show is that much of that ingenuity is focused on how to harness ever-more-powerful computing engines.

Oil exploration is an interesting case because it gets harder over time as the easy wells are drilled, so what is left is more difficult. Oil companies fight that trend with some of the biggest supercomputers in the world, using them to interpret seismic data and map the subsurface geology. This helps them to do a better job of drilling in exactly the right place.

Using computing to do better protein folding has been a longstanding goal because it is crucial for understanding the three-dimensional shapes of these molecules, which in turn determines how they interact with other molecules. In recent years, the AlphaFold systems have made remarkable breakthroughs in this area. What our analysis shows is that these improvements are well-predicted by the massive increases in computing power they use.

Q: What were some of the biggest challenges of conducting this analysis?

A: When one is looking at two trends that are growing over time, in this case performance and computing power, one of the most important challenges is disentangling how much of the relationship between them is causation and how much is just correlation. We can answer that question, partially, because in the areas we studied companies are investing huge amounts of money, so they are doing a lot of testing. In weather modeling, for instance, they are not just spending tens of millions of dollars on new machines and then hoping they work. They do an evaluation and find that running a model for twice as long does improve performance. Then they buy a system that is powerful enough to do that calculation in a shorter time so they can use it operationally. That gives us a lot of confidence. But there are also other ways that we can see the causality. For example, we see that there were a number of big jumps in the computing power used by NOAA (the National Oceanic and Atmospheric Administration) for weather prediction. And, when they purchased a bigger computer and it got installed all at once, performance really jumps.

Q: Would these advancements have been possible without exponential increases in computing power?

A: That is a tricky question because there are a lot of different inputs: human capital, traditional capital, and also computing power. All three are changing over time. One might say, if you have a trillionfold increase in computing power, surely that has the biggest effect. And that’s a good intuition, but you also have to account for diminishing marginal returns. For example, if you go from not having a computer to having one computer, that is a huge change. But if you go from having 100 computers to having 101, that extra one doesn’t provide nearly as much gain. So there are two competing forces — big increases in computing on one side but decreasing marginal benefits on the other side. Our research shows that, even though we already have tons of computing power, it is getting bigger so fast that it explains a lot of the performance improvement in these areas.

Q: What are the implications that come from Moore’s Law slowing down?

A: The implications are quite worrisome. As computing improves, it powers better weather prediction and the other areas we studied, but it also improves countless other areas we didn’t measure but that are nevertheless critical parts of our economy and society. If that engine of improvement slows down, it means that all those follow-on effects also slow down.

Some might disagree, arguing that there are lots of ways of innovating — if one pathway slows down, other ones will compensate. At some level that is true. For example, we are already seeing increased interest in designing specialized computer chips as a way to compensate for the end of Moore’s Law. But the problem is the magnitude of these effects. The gains from Moore’s Law were so large that, in many application areas, other sources of innovation will not be able to compensate.

The MIT Press relaunches the Software Studies series

The MIT Press has announced the relaunch of the Software Studies series, a book series committed to exploring the vast possibilities, histories, relations, and harms that software encompasses. The revamped series will move beyond broad statements about software and integrate a wide range of disciplines, including mathematics, critical race theory, software art, and queer theory.  

A new set of editors — Wendy Hui Kyong Chun, Winnie Soon, and Jichen Zhu — have joined founding editor Noah Wardrip-Fruin to reshape the vision for the series. The revamped series will publish books that focus on software as a site of societal and technical power, by responding to the following questions: How do we see, think, consume, and make software? How does software — from algorithmic procedures and machine learning models to free and open-source software programs — shape our everyday lives, cultures, societies, and identities? How can we critically and creatively analyze something that seems so ubiquitous and general — yet is also so specific and technical? And how do artists, designers, coders, scholars, hackers, and activists create new spaces to engage computational culture, enriching the understanding of software as a cultural form?

“The MIT Press is committed to providing a platform for challenging, provocative, and transformative scholarship that crosses traditional academic boundaries,” says Amy Brand, director and publisher, the MIT Press. “The newly formed editorial board will ensure that the Software Studies series continues to publish cutting-edge research, while pushing the field in new and exciting directions.” 

“Software studies is still so important because software is so nebulous; it touches and reshapes — and is touched and reshaped by — almost everything,” says Professor Wendy Hui Kyong Chun of Simon Fraser University. “The Software Studies series enables us to think in broad and/or interconnected terms, to move beyond, between, and beside the various layers and programs.”

The Software Studies series was originally launched in 2009, under the guidance of editors Matthew Fuller, Lev Manovich, and Noah Wardrip-Fruin. For over a decade, the series was dedicated to publishing the best new work that tracks how software is substantially integrated into the processes of contemporary culture and society through the scholarly modes of the humanities and social science, as well as in the software creation/research modes of computer science, the arts, and design. Important books published under the tenure of Fuller, Manovich, and Wardrip-Fruin include (among others) Nick Montfort et al.’s collaborative treatise on the single line of code, “10 PRINT CHR$(205.5+RND(1)); : GOTO 10”; Benjamin Bratton’s comprehensive overview of an accidental megastructure, “The Stack”; and Annette Vee’s argument for a computational mentality in “Coding Literacy.”

To officially relaunch the series, the editors came together to share their vision of why software studies remains necessary. Read the roundtable discussion.

Student-powered machine learning

From their early days at MIT, and even before, Emma Liu ’22, MNG ’22, Yo-whan “John” Kim ’22, MNG ’22, and Clemente Ocejo ’21, MNG ’22 knew they wanted to perform computational research and explore artificial intelligence and machine learning. “Since high school, I’ve been into deep learning and was involved in projects,” says Kim, who participated in a Research Science Institute (RSI) summer program at MIT and Harvard University and went on to work on action recognition in videos using Microsoft’s Kinect.

As students in the Department of Electrical Engineering and Computer Science who recently graduated from the Master of Engineering (MEng) Thesis Program, Liu, Kim, and Ocejo have developed the skills to help guide application-focused projects. Working with the MIT-IBM Watson AI Lab, they have improved text classification with limited labeled data and designed machine-learning models for better long-term forecasting for product purchases. For Kim, “it was a very smooth transition and … a great opportunity for me to continue working in the field of deep learning and computer vision in the MIT-IBM Watson AI Lab.”

Modeling video

Collaborating with researchers from academia and industry, Kim designed, trained, and tested a deep learning model for recognizing actions across domains — in this case, video. His team specifically targeted the use of synthetic data from generated videos for training and ran prediction and inference tasks on real data, which is composed of different action classes. They wanted to see how pre-training models on synthetic videos (particularly game-engine simulations of human or humanoid actions) stacked up against training on real data: publicly available videos scraped from the internet.

The reason for this research, Kim says, is that real videos can have issues, including representation bias, copyright, and/or ethical or personal sensitivity, e.g., videos of a car hitting people would be difficult to collect, or the use of people’s faces, real addresses, or license plates without consent. Kim is running experiments with 2D, 2.5D, and 3D video models, with the goal of creating domain-specific or even a large, general, synthetic video dataset that can be used for some transfer domains, where data are lacking. For instance, for applications to the construction industry, this could include running its action recognition on a building site. “I didn’t expect synthetically generated videos to perform on par with real videos,” he says. “I think that opens up a lot of different roles [for the work] in the future.”

Despite a rocky start to the project gathering and generating data and running many models, Kim says he wouldn’t have done it any other way. “It was amazing how the lab members encouraged me: ‘It’s OK. You’ll have all the experiments and the fun part coming. Don’t stress too much.’” It was this structure that helped Kim take ownership of the work. “At the end, they gave me so much support and amazing ideas that help me carry out this project.”

Data labeling

Data scarcity was also a theme of Emma Liu’s work. “The overarching problem is that there’s all this data out there in the world, and for a lot of machine learning problems, you need that data to be labeled,” says Liu, “but then you have all this unlabeled data that’s available that you’re not really leveraging.”

Liu, with direction from her MIT and IBM group, worked to put that data to use, training text classification semi-supervised models (and combining aspects of them) to add pseudo labels to the unlabeled data, based on predictions and probabilities about which categories each piece of previously unlabeled data fits into. “Then the problem is that there’s been prior work that’s shown that you can’t always trust the probabilities; specifically, neural networks have been shown to be overconfident a lot of the time,” Liu points out.

Liu and her team addressed this by evaluating the accuracy and uncertainty of the models and recalibrating them to improve her self-training framework. The self-training and calibration step allowed her to have better confidence in the predictions. This pseudo-labeled data, she says, could then be added to the pool of real data, expanding the dataset; this process could be repeated in a series of iterations.
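A minimal sketch of such a self-training loop, assuming a generic scikit-learn classifier; the model, threshold, and calibration choices here are illustrative stand-ins, not the ones used in Liu's work:

```python
# Sketch of calibrated self-training for classification.
# The classifier, threshold, and calibration step are illustrative choices.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=3, threshold=0.9):
    for _ in range(rounds):
        # Calibrate the probabilities, since raw classifier scores are
        # often overconfident.
        model = CalibratedClassifierCV(LogisticRegression(max_iter=1000))
        model.fit(X_labeled, y_labeled)

        if len(X_unlabeled) == 0:
            break
        probs = model.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold

        # Promote confident predictions to pseudo-labels and grow the pool.
        X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
        y_labeled = np.concatenate([y_labeled, probs[confident].argmax(axis=1)])
        X_unlabeled = X_unlabeled[~confident]
    return model
```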

For Liu, her biggest takeaway wasn’t the product, but the process. “I learned a lot about being an independent researcher,” she says. As an undergraduate, Liu worked with IBM to develop machine learning methods to repurpose drugs already on the market and honed her decision-making ability. After collaborating with academic and industry researchers to acquire skills to ask pointed questions, seek out experts, digest and present scientific papers for relevant content, and test ideas, Liu and her cohort of MEng students working with the MIT-IBM Watson AI Lab felt they had confidence in their knowledge, freedom, and flexibility to dictate their own research’s direction. Taking on this key role, Liu says, “I feel like I had ownership over my project.”

Demand forecasting

After his time at MIT and with the MIT-IBM Watson AI Lab, Clemente Ocejo also came away with a sense of mastery, having built a strong foundation in AI techniques and time-series methods beginning with his MIT Undergraduate Research Opportunities Program (UROP), where he met his MEng advisor. “You really have to be proactive in decision-making,” says Ocejo, “vocalizing it [your choices] as the researcher and letting people know that this is what you’re doing.”

Ocejo used his background in traditional time-series methods for a collaboration with the lab, applying deep learning to better predict product demand in the medical field. Here, he designed, wrote, and trained a transformer, a specific machine learning model, which is typically used in natural-language processing and has the ability to learn very long-term dependencies. Ocejo and his team compared target forecast demands between months, learning dynamic connections and attention weights between product sales within a product family. They looked at identifier features concerning the price and amount, as well as account features about who is purchasing the items or services.

“One product does not necessarily impact the prediction made for another product in the moment of prediction. It just impacts the parameters during training that lead to that prediction,” says Ocejo. “Instead, we wanted to make it have a little more of a direct impact, so we added this layer that makes this connection and learns attention between all of the products in our dataset.”
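A hedged sketch of what such a layer can look like in PyTorch (the shapes, dimensions, and module itself are illustrative assumptions, not the team's actual architecture): self-attention across the products in a family, so each product's forecast can draw on the others at prediction time.

```python
# Sketch: a cross-product attention layer for demand forecasting.
# Shapes and hyperparameters are illustrative, not the actual model's.
import torch
import torch.nn as nn

class ProductAttention(nn.Module):
    """Let each product's embedding attend to every other product's."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_products, d_model), one embedding per product,
        # already encoding its sales history, price, and account features.
        attended, weights = self.attn(x, x, x)  # weights: learned links
        return self.norm(x + attended)          # residual connection

# Usage: 32 product families, 10 products each, 64-dim embeddings.
layer = ProductAttention()
out = layer(torch.randn(32, 10, 64))
print(out.shape)  # torch.Size([32, 10, 64])
```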

In the long run, over a one-year prediction, the MIT-IBM Watson AI Lab group was able to outperform the current model; more impressively, it did so in the short run (close to a fiscal quarter). Ocejo attributes this to the dynamic of his interdisciplinary team. “A lot of the people in my group were not necessarily very experienced in the deep learning aspect of things, but they had a lot of experience in the supply chain management, operations research, and optimization side, which is something that I don’t have that much experience in,” says Ocejo. “They were giving a lot of good high-level feedback of what to tackle next and … knowing what the field of industry wanted to see or was looking to improve, so it was very helpful in streamlining my focus.”

For this work, a deluge of data didn’t make the difference for Ocejo and his team, but rather its structure and presentation. Oftentimes, large deep learning models require millions and millions of data points in order to make meaningful inferences; however, the MIT-IBM Watson AI Lab group demonstrated that outcomes and technique improvements can be application-specific. “It just shows that these models can learn something useful, in the right setting, with the right architecture, without needing an excess amount of data,” says Ocejo. “And then with an excess amount of data, it’ll only get better.”

MIT unveils new Wright Brothers Wind Tunnel

When Mark Drela first set foot in Cambridge to study aerospace engineering at MIT in 1978, he was no stranger to wind tunnels. Just two years before, he had constructed a 1-foot-by-1-foot wind tunnel for the Westinghouse Science Talent Search that earned him a visit to the White House as a finalist. But nothing could have prepared him for the first time he saw the iconic Wright Brothers Wind Tunnel, a moment that would tie into his later career and eventually impact the very fabric of MIT’s campus.

“It was my very first day on MIT’s campus, so I was just wandering around when I turned the corner and saw it — whoa! A wind tunnel! And it’s a big one!” says Drela ’82, SM ’83, PhD ’85. “I had no idea it was even here. I ran up and knocked on the door, and [longtime tunnel operator] Frank Durgin answered. He was the first AeroAstro person I met on campus, and he could see how excited I was, so he gave me a tour.”

Since its dedication in 1938, the Wright Brothers Wind Tunnel has become a campus landmark used for education, research, industry, and outreach. Still, by the time Drela had his fateful first encounter, it was already showing its age. In 2017, the MIT Department of Aeronautics and Astronautics (AeroAstro) announced it would replace the tunnel with a brand-new facility thanks to a lead funding commitment from Boeing with Drela, now the Terry J. Kohler Professor and director of the Wright Brothers Wind Tunnel, at the helm.

Today, MIT is home to the most advanced academic wind tunnel in the country, capable of reaching wind speeds up to 230 miles per hour (mph), with the largest test section in U.S. academia.

“If I had one word to describe the state of the old tunnel after 80 years, it would be decrepit. The tunnel shell and supporting foundations, the instrumentation, and the drive motor and fan were all in a state of decay. The airflow quality was poor, and the tunnel was extremely loud and power-inefficient,” says Drela. “It just wasn’t holding up against our modern standards of wind tunnel testing. Our goal was to bring our vintage tunnel into the 21st century and beyond, and we did that.”

Go with the flow

Wind tunnels have been in use for more than 150 years — even Wilbur and Orville Wright tested candidate wing designs in a simple open-ended wind tunnel they constructed before their historic flight in 1903. Nearly everything on the Earth’s surface has air flowing over it. Instead of moving an object through the air, wind tunnels move air over a stationary object in a controlled environment, allowing the operator to take aerodynamic measurements. When building something that needs to interact with airflow, it’s necessary to understand and predict the aerodynamic forces in that interaction to diagnose and fix any problems or shortcomings in the design.

Wind tunnel measurements can determine how much fuel an aircraft will consume, how slowly it can fly during landing, or how much control it has in maneuvers. But wind tunnels are not limited to aerospace applications. They can also measure the aerodynamic loads on ground vehicles, such as cars and bicycles, or wind loads on stationary objects, such as bridges and buildings. Scientists and engineers also use wind tunnels for fundamental research, like studying how the air behaves when it interacts with an object to understand the science of fluid mechanics.

Uses for the Wright Brothers Wind Tunnel continued to evolve throughout its 80-year history. During World War II, the U.S. government took over the Wright Brothers Wind Tunnel for days to perform top-secret aircraft research and development. Over the years, in addition to aerospace research, investigators used it to test ski and bike equipment, analyze city landscapes, and even demonstrate how a 130-million-year-old four-winged dinosaur might have flown, for a documentary film.

Beyond its use in research, educators also used the tunnel extensively for coursework and public outreach. But after nearly eight decades, the aged equipment became a challenge to use. A full replacement was in order, and thanks to its urban campus home, the project presented several unique challenges.

“To have the best facility possible, we knew we needed a large test section with very good airflow quality and a maximum speed of at least 200 miles per hour, which dictated a large tunnel size and a powerful drive motor,” says Drela. “But since the tunnel sits right in the middle of campus, we had to achieve these goals while making it compatible with our urban environment. When your goals massively conflict with your constraints, you get an incredibly challenging project.”

Innovating convention

In general, nearly all wind tunnels aim to generate “clean” airflow, meaning uniform flow with a constant velocity, free from distortion or turbulence. Convention would dictate a large tunnel for the required test-section size, which paradoxically requires less power to produce higher airflow quality while generating less noise. But for the Wright Brothers Wind Tunnel, a larger size was not an option.

“Like any engineering project, size and cost were major considerations. We couldn’t just take the design of a conventional tunnel and size it to fit into the old tunnel’s relatively small space and expect it to work,” says Drela. “We had to design an entirely new architecture with many innovations to the fan, diffusers, contraction, and the corner vanes to give the new tunnel our desired capabilities within the limits of the old tunnel’s existing footprint.”

Both the old and new Wright Brothers tunnels are closed-circuit types, where the air flows through the tunnel’s test section for measurement-taking before recirculating around the tunnel again. But that is where the similarities end.

One of the most distinctive visual differences between old and new is the design of the fan itself. The old fan followed convention still commonly seen today: a 13-foot diameter with six blades made of wood that resembled boat oars. The 2,000-horsepower motor could only run at four fixed speeds, and the operator adjusted the airflow speed by varying the fan’s pitch mechanically. As a result, the system was complex, and the fan was noisy to operate. To mitigate these issues in the new tunnel, Drela worked with wind tunnel vendor Aerolab to conceive and manufacture an entirely new design: the Boundary Layer Ingesting (BLI) fan.

Air flowing over an object has a layer of slow-moving air over the object’s surface caused by fluid friction called a boundary layer. Consequently, the airflow inside a wind tunnel has boundary layers over the entire inner surface of the shell. In the test section, where the airflow is cleanest, the boundary layer is only a few inches thick, but it grows as the airflow moves downstream. By the time it enters the fan, the airflow has a thick boundary layer extending over approximately half the length of each fan blade. Traditional wind tunnel fan design typically ignores the boundary layer, opting to eliminate it by mixing it with the rest of the flow farther downstream. But with 17 uniquely-shaped blades and a 16-foot diameter, the BLI fan is specifically designed not only to accommodate this inflow nonuniformity, but to exploit it.

“The flared tips of the fan blades add extra work to the boundary layer where the velocity is lowest, near the wall,” says Drela. “Using the fan to remove this velocity nonuniformity requires less power than the downstream mixing in all other wind tunnels. The resulting flow that exits the fan is uniform, further reducing the power losses in the downstream portion of the tunnel.”

The BLI fan is driven directly by a 2,500-horsepower motor, so the overall drive system in effect has only one moving part — a significant improvement over the mechanically complex variable-pitch drive of the old tunnel. A variable frequency drive controls the motor speed, making the new system more power-efficient and quieter than the old tunnel’s.

The fan pressurizes most of the tunnel’s flow circuit, so the far wall opposite the fan withstands up to 80 tons of load when the tunnel operates at full speed, equivalent to the force of a 240-mph hurricane. To accommodate the resulting elastic flexing of the walls, the only parts of the Wright Brothers Wind Tunnel anchored to the ground are the fan and the test section. The remainder of the tunnel rests on sliding and rocking supports, allowing it to “squirm” in place by up to 1 centimeter and alleviating significant stress generated by the pressure loads and temperature variations.
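Those figures can be cross-checked with the textbook dynamic-pressure relation q = ½ρV². The sketch below is a back-of-the-envelope estimate assuming sea-level air density and U.S. short tons; the implied wall area is an inference from the quoted numbers, not a published dimension:

```python
# Back-of-the-envelope check of the quoted wall load, assuming standard
# sea-level air density and U.S. short tons (both are assumptions).

RHO = 1.225                      # kg/m^3, sea-level air density
MPH_TO_MS = 0.44704

v = 240 * MPH_TO_MS              # 240 mph in m/s (~107 m/s)
q = 0.5 * RHO * v ** 2           # dynamic pressure, q = 1/2 * rho * V^2

load_n = 80 * 2000 * 4.448       # 80 short tons expressed in newtons
implied_area = load_n / q        # wall area implied by the quoted figures

print(f"dynamic pressure: {q / 1000:.1f} kPa")          # ~7.1 kPa
print(f"implied wall area: {implied_area:.0f} m^2 "
      f"(~{implied_area * 10.764:.0f} ft^2)")
```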

After the flow leaves the test section, it turns through corners one and two, then passes through the fan and a heat exchanger that regulates the air temperature, followed by corner three. Up to this point, the layout is standard for most current wind tunnels, but according to Drela, the final corner four “is where the real magic happens” in the Wright Brothers Wind Tunnel.

While the first three corners have vanes that only turn the airflow 90 degrees, corner four not only turns the flow but also expands its area while slowing it down significantly, enabled by a screen and aluminum honeycomb diffusers installed in the passages between the vanes. Performing the same flow-deceleration and straightening in a conventional tunnel requires more space and separate honeycomb filters and screens. By combining these components into the single corner vane row, the Wright Brothers Wind Tunnel achieves the same flow turning, deceleration, and straightening functions with minimal added space.

“If we didn’t have the screen expanding turning vanes suppressing the wall boundary layers in corner four, they would ‘burst’ or separate after the corner, thus filling the entire flow path and making the air slosh around like in a washing machine. The resulting flow going into the test section would be very messy and unusable for aerodynamic tests,” says Drela. “The screened expanding turning vanes at corner four are arguably the most important components of the new tunnel because it allows for a large flow area expansion in no added space while maintaining a nearly uniform flow.”

Although the airflow exiting corner four is relatively clean, it next passes through four flow-conditioning screens to make it even smoother and more uniform. Immediately after the final screen, the air enters the contraction, which narrows from the tunnel’s widest cross-section to accelerate the flow into the test section. A key parameter indicating the efficiency and quality of a wind tunnel is the contraction ratio: the ratio of the airflow velocity in the test section to the velocity just after the flow-conditioning screens. The old tunnel had a contraction ratio of 4.5:1, but Drela wanted to reach the “sweet spot” by increasing the ratio in the new tunnel to 8:1.
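The reason the contraction ratio doubles as a velocity multiplier is incompressible continuity: the volume flow rate A·V is conserved, so shrinking the flow area by 8:1 speeds the flow up eightfold. A minimal illustration, using an assumed round-number settling-chamber speed rather than a measured one:

```python
# Incompressible continuity (A1 * V1 = A2 * V2): shrinking the flow area
# by the contraction ratio multiplies the velocity by the same factor.
# The 25-mph settling-chamber speed is an assumed round number.

def test_section_speed(settling_speed_mph: float,
                       contraction_ratio: float) -> float:
    """Velocity scales with the area ratio for incompressible flow."""
    return settling_speed_mph * contraction_ratio

print(test_section_speed(25, 8.0))   # new 8:1 contraction -> 200.0 mph
print(200 / 4.5)                     # old 4.5:1 needs ~44 mph upstream
```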

“For the new tunnel, we used computational fluid dynamics to carefully design a minimum-length contraction by combining it with the usual settling chamber after the screens,” says Drela. “This combination saved us about eight feet of space, which was significant for a tunnel that is only 96 feet in total length.”

In the test section, an object is mounted on a slender post connected to the main force balance, the instrument installed immediately under the test section floor that senses the aerodynamic forces as the airflow interacts with the model. The test section’s size and shape are other significant improvements over the old tunnel. The old test section had only 57 square feet (ft²) of flow area, and its elliptical shape meant a cramped floor that was only 12 feet long. By comparison, the new test section has 90 ft² of flow area, and its rectangular cross-section is 18 feet long with a floor twice as wide as before. A bigger test section can accommodate larger models, which helps in collecting more accurate data while significantly improving the user experience by allowing plenty of workspace for the researcher.

The tunnel also features a new MATLAB-based control and data acquisition system, which combines the typical functions of manual tunnel operation, control, and data collection into a streamlined, fully customizable platform. The test section’s glass walls and ceiling windows give extensive optical access, enabling laser Doppler velocimetry and particle image velocimetry measurements as well as optical tracking of model motion. Safety and security features are also built directly into the tunnel control system, which monitors tunnel health parameters such as temperatures, pressures, and vibration levels. The system automatically switches to a rapid shutdown mode if any health parameter exceeds its preset physical limit, or in the event of a manual emergency stop.

“You can control everything through this interface — tunnel speed, model positioning, instrument interrogation, data display, logging, and more — all from the same place,” says Drela. “It removes as much human error from the process as possible. Since the system is watching your back, you literally cannot do anything to break the tunnel from the keyboard, which is very comforting from the user’s perspective.”
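The article doesn’t describe how that protection is implemented (the real system is MATLAB-based), but the behavior it outlines maps naturally onto a watchdog loop. Here is a minimal Python sketch; every sensor name, threshold, and helper function is a hypothetical stand-in:

```python
# Minimal sketch of a health-monitoring watchdog like the one described
# above. The real system is MATLAB-based; every name, threshold, and
# sensor function here is a hypothetical stand-in.

import time

LIMITS = {                       # preset physical limits (illustrative)
    "motor_temp_c": 90.0,
    "duct_pressure_kpa": 8.0,
    "vibration_mm_s": 12.0,
}

def read_sensors() -> dict:
    """Placeholder for the data acquisition layer."""
    return {"motor_temp_c": 62.0, "duct_pressure_kpa": 7.1,
            "vibration_mm_s": 3.4}

def rapid_shutdown(reason: str) -> None:
    """Placeholder: spin the fan down and log the trip event."""
    print(f"RAPID SHUTDOWN: {reason}")

def watchdog(poll_s: float = 0.5, emergency_stop=lambda: False) -> None:
    """Poll health parameters; trip on any limit violation or e-stop."""
    while True:
        if emergency_stop():
            rapid_shutdown("manual emergency stop")
            return
        for name, value in read_sensors().items():
            if value > LIMITS[name]:
                rapid_shutdown(f"{name}={value} exceeds {LIMITS[name]}")
                return
        time.sleep(poll_s)
```

The design point echoed in Drela’s comment is that the limits live inside the control layer itself, so no keyboard command can push the tunnel past them.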

Breaking new ground

Construction for the Wright Brothers Wind Tunnel project broke ground in fall 2019 and was completed 22 months later, in tandem with a complete renovation of Building 17, which previously housed only the wind tunnel control room and the headquarters of the MIT Rocket Team.

The Building 17 renovation overhauled these spaces, combined them with the Gerhard Neumann Hangar and Laboratory (formerly housed in Building 33), and added meeting rooms and research laboratory space. Historic buildings come with inherent renovation challenges: toxic materials like lead and asbestos tend to come standard in old buildings, and delivering massive tunnel components to the job site meant carefully maneuvering the tight squeeze between campus buildings. But the global pandemic was a curveball that no one saw coming.

“Safety is always a top priority on any construction site. The coronavirus situation took it to another level, especially with the Cambridge-wide moratorium on construction projects that lasted for weeks,” says Anthony Zolnik, manager of infrastructure for AeroAstro, who represented the department on the project management team. “Thankfully, we had an amazing team, both within MIT and our external vendors, so we could work together to add additional measures to keep the workers safe. I’m happy to say that we made it through without any outbreaks, and we were able to keep the construction progress on track.”

Boeing’s generous contribution to the project reflects a long-standing relationship between the company and MIT, and illustrates how collaborations between academia and industry have helped aerospace grow into a global industry.

“Boeing’s work with MIT dates back more than a century — but in today’s world, that collaboration is more critical than ever,” says Greg Hyslop, chief engineer of the Boeing Co. and executive vice president of engineering, test, and technology. “No one entity can meet the need for scalable innovation, and the value that academic research brings to our industry is nearly incalculable.”

In addition to support from Boeing, the Wright Brothers Wind Tunnel replacement and Building 17 renovation were made possible by gifts from Becky Samberg and the late Arthur “Art” Samberg ’62, and from MathWorks for the MIT Wind Tunnel Instrumentation Platform Project, which is helping MIT build and operate a state-of-the-art test driver and data acquisition system.

Even though the cranes and bulldozers have left the site, the team continues to make final calibrations to the instrumentation and other finishing touches in order to reach full operational capacity by midsummer. At that point, the Wright Brothers Wind Tunnel will be open to the outside world for industry testing, scheduled tours, and more. Planning is already underway for the fall semester, when Drela will incorporate laboratory activities in the tunnel to complement the coursework for the classes he oversees.

In keeping with its predecessor, the new tunnel will carry forward an important legacy, representing AeroAstro in outreach efforts across MIT and to the public. Other MIT instructors used the previous tunnel to teach classes, and student groups used it to test club equipment. It has always been a popular attraction during campus events, where visitors can step into the test section and experience the wind tunnel in action with the air blowing at a breezy 30 mph.

“We’re looking forward to bringing this sense of excitement back to campus since it’s been on hiatus due to construction and the pandemic,” says Daniel Hastings, associate dean of engineering for diversity, equity, and inclusion at MIT; head of AeroAstro; and Cecil and Ida Green Education Professor. “As we conclude this project, we find ourselves once again at the forefront of academic wind tunnels, which will allow us to deliver world-class capabilities to further education, research, and industry while creating unique, immersive experiences that will inspire future generations of engineers and scientists.”

According to Drela, even in the age of advanced computing, simulation, and modeling, practical testing in wind tunnels is just as valuable as ever, especially when paired with these advanced techniques.

“Even with the most advanced computer, we can’t calculate flow with adequate precision or confidence or without significant margins of error, which could be catastrophic in some circumstances. For example, if you significantly underestimate stall speed, a crucial aspect of airplane performance, it’s the difference between life or death,” says Drela. “While there are situations where I wouldn’t trust calculations over measurements, wind tunnel testing and computation are extremely complementary. Experimental data obtained in wind tunnels will always be indispensable for validating a theoretical and computational fluid flow model.”

Students imagine better products, services, and infrastructure for an aging society

A pop-up hearing aid exposition called HearWeAre. A travel agency that matches older and younger travelers for group adventures. An app that guides outgoing hospital patients through every step of the discharge process.

These are a few of the projects presented by students on the final day of the MIT Department of Urban Studies and Planning (DUSP) class 11.547J/SCM.287J (Global Aging and the Built Environment). Taught by Joseph Coughlin, the director of the MIT AgeLab, and supported by his team of AgeLab researchers, the class guides students toward understanding the impact of increased longevity on systems and markets and invites them to imagine how they might design better products, services, and infrastructure for an aging society.

The class attracted MIT, Harvard University, and Wellesley College students from a diverse array of disciplines, including urban planning, industrial design, supply chain management, engineering, business, and architecture. Their projects, accordingly, spanned a wide range of areas, from re-imagining the physical and service architecture of shopping malls, to addressing challenges in evaluating and purchasing hearing aids, to an analysis of the pain points older (and younger) adults experience when navigating the built environment of the bathroom.

The lengthening human lifespan — a trend in industrialized societies since the early 20th century — is often characterized as a crisis, and aging is often discussed as a problem in need of solutions. But in his research and public appearances, Coughlin stresses that longer lives are a boon to individuals as well as an unfulfilled market opportunity.

“A 100-year lifespan is the new normal for many of us. That’s an unqualified achievement,” Coughlin says. “But I think we need to also focus on ensuring and supporting 100 good years of life. There is a market and a need for improving our quality of life as we age that has yet to be meaningfully explored.”

Sheng-Hung Lee, a graduating master’s student in MIT’s Integrated Design and Management program and the teaching assistant for the course, explains that the class was project- and solution-driven. “We guided students to focus on real unmet needs of users: learn how to interview real users, understand their pain points, and translate that learning into the design process,” he says.

Students had access to the MIT AgeLab’s research tools, including AGNES, an empathy tool that simulates limitations that are commonly associated with aging. The class also had the opportunity to interview and collaborate with members of the 85+ Lifestyle Leaders Panel, a cohort of research participants aged 85 and older.

Throughout the semester, students worked through the design-thinking process, with their coursework organized around the development and unveiling of their final projects. “The aim of the course was not just to create ideas, but to understand what it takes to bring them out into the world,” Coughlin says. With that goal in mind, the class’s projects were informed by players in the industries they were hoping to participate in. Each group paired with a company or organization — including Adventist Health, Lowe’s, Kohler, Viking Cruises, Boston Properties, and AARP — to receive industry input on their projects.

David Hong, a first-year graduate student in DUSP, worked on a project that looked to facilitate older adults’ travel to and from hospitals. His project group observed that the “last 50 feet,” from stepping onto the pavement to reaching the hospital receptionist, was a challenging and typically unaided part of the hospital journey for older travelers.

Rather than imagine a new transportation mode or service, Hong and his classmates went with a human solution. Connecting older hospital travelers with a medical volunteer — someone to help with aspects of the journey from getting over the curb onto the sidewalk to patient advocacy in the waiting room — could increase travelers’ levels of ease and safety, make them more willing to travel for medical care, and improve health outcomes.

For Hong, the theoretical underpinnings of the course helped to guide the development of his group’s project from the beginning. “Joe’s framing of global demographic trends — both the issues and the business opportunities behind them — was a paradigm shift for me to begin to view aging as a social construct, as well as to view the issues of older adults as consumer needs that have yet to be met,” he says.

Throughout the semester, the class received support and guest lectures from AgeLab research staff, who instructed them on design thinking, conducting interviews, and research methods. “The supportive and collaborative nature of the AgeLab brings like-minded folks together,” says Hong. “With my project group, there were four researchers attached who provided help to us.”

On the final day of class, the student groups presented their ideas before an audience of their peers, industry representatives, AgeLab researchers, and older adults. They were instructed to imagine themselves pitching their ideas to potential investors. And at least one collaborating company, Kohler, plans to continue working with its affiliated student group after the semester is over.

“The class connects the dots between industry and academia,” says Lee, describing how the Global Aging course fits MIT’s broader institutional philosophy. “We wanted to prioritize ‘design making’ over design thinking. We asked students to use their hands to think.”

Inaugural Day of AI brings new digital literacy to classrooms worldwide

The first annual Day of AI, held on Friday, May 13, introduced artificial intelligence literacy to classrooms all over the world. An initiative of MIT Responsible AI for Social Empowerment and Education (RAISE), Day of AI is an opportunity for teachers to introduce K-12 students of all backgrounds to artificial intelligence (AI) and its role in their lives.

With over 3,000 registrations from educators across 88 countries — far exceeding the first-year goal of 1,000 registrations in the United States — the initiative has clearly struck a chord with students and teachers who want to better understand the technology that’s increasingly part of everyday life.

In today’s technology-driven world, kids are exposed to and interact with AI in ways they might not realize — from search algorithms to smart devices, video recommendations to facial recognition. Day of AI aims to help educators and students develop AI literacy with an easy entry point, with free curricula and hands-on activities developed by MIT RAISE for grades 3-12.

Professor Cynthia Breazeal, director of MIT RAISE, dean for digital learning, and head of the MIT Media Lab’s Personal Robots research group, says, “We’re so inspired by the enthusiasm that students have expressed about learning about AI. We created this program because we want students and their teachers to be able to learn about these technologies in a way that’s engaging, that’s meaningful, that gives them the experience so they know that they can do AI too.”

AI is for everyone

The MIT RAISE team designed all Day of AI activities to be accessible to educators and students of all backgrounds and abilities, including those with little or no technology experience. In collaboration with education provider i2 Learning, MIT RAISE also offered teachers free professional development sessions prior to teaching the material. “That really helped me understand GANs and how that works,” says Gar-Hay Kit, a sixth-grade teacher from Mary Lyon School in Boston. “The slides that we were given were easy to work with and my class was engaged with all of the activities that we did that day.”

Students engaged with AI topics such as deepfakes, generative adversarial networks (GANs), algorithmic bias in datasets, and responsible design in social media platforms. Through hands-on activities and accessible, age-appropriate lessons, they learned what these technologies do, how they’re built, and their potential dangers, along with how responsible design and use can bring benefit while mitigating unintended negative consequences.

To celebrate the inaugural Day of AI, the RAISE team hosted an event at WBUR CitySpace, where fifth- and sixth-grade students from the Mary Lyon School shared projects they had created using the Day of AI curriculum during the previous few days. They demonstrated that Google QuickDraw was more likely to recognize a cow drawn with spots, because the majority of users submit drawings of spotted cows; the AI’s dataset wasn’t broad enough to account for breeds of cows with different patterns or solid colors.
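For readers curious about the mechanics behind that observation, the following toy sketch (not the students’ actual demo) shows how an underrepresented class skews a classifier. All data and feature names are synthetic, and it assumes numpy and scikit-learn are available:

```python
# Toy illustration of dataset bias: a classifier trained almost entirely
# on spotted cows learns to lean on the "spots" cue and then misses
# solid-colored cows. All data and feature names are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def samples(n, spottiness, cow_shape):
    """Two synthetic features: spot texture and a noisy body-shape cue."""
    return np.column_stack([
        rng.normal(spottiness, 0.3, n),   # clean "spots" feature
        rng.normal(cow_shape, 0.8, n),    # noisy "shape" feature
    ])

# Training set: 950 spotted cows, only 50 solid cows, 1,000 non-cows.
X = np.vstack([samples(950, 1.0, 1.0),    # spotted cows
               samples(50, 0.0, 1.0),     # solid cows (underrepresented)
               samples(1000, 0.0, 0.0)])  # non-cows
y = np.array([1] * 1000 + [0] * 1000)

model = LogisticRegression().fit(X, y)

# Recall is near-perfect on spotted cows, noticeably worse on solid ones.
print("spotted-cow recall:", model.predict(samples(200, 1.0, 1.0)).mean())
print("solid-cow recall:  ", model.predict(samples(200, 0.0, 1.0)).mean())
```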

In a project about responsible social media and game design, students showed how the Roblox game platform only recommends gendered clothing for characters based on the user-entered gender. The solution the students proposed was to change the design of the recommendation system by adding more options that were less overtly gendered, and by allowing all users access to all of the clothing.

When asked what stuck out the most about the Day of AI activities, sixth-grade student Julia said, “It was cool how they were teaching young students AI and how we got to watch videos, and draw on the website.”

“One of the great benefits of this program is that no experience is necessary. You can be from anywhere and still have access to this career,” said Lieutenant Governor Karyn Polito at the event. The accessibility of Day of AI curricula relates to the tenet of Massachusetts STEM Week, “See yourself in STEM,” and Massachusetts’ STEM education goals at large. When Polito asked the audience of fifth- and sixth-graders from Mary Lyon School if they saw themselves in STEM, dozens of hands shot up in the air.

Breazeal echoed that sentiment, saying, “No matter your background, we want you to feel empowered and see a place where you can be inventing and driving these technologies in responsible ways to make a better world.” Working professionals and graduate students who use AI aren’t the only ones affected by this technology. RAISE pursues research, innovation, and outreach programs like Day of AI so K-12 students of all ages can recognize AI, evaluate its influence, and learn how to use it responsibly. Addressing the students, Breazeal said, “As you grow up, you’ll have a voice in our democracy to say how you want to see AI used.” 

More than just robots … but sometimes robots

Breazeal also moderated a panel of professionals who work with AI every day: Daniella DiPaola, PhD student at the MIT Media Lab; Steve Idowu, senior manager of strategic innovation at Liberty Mutual; Alex Aronov, executive director of data strategy and solutions at Vertex; and Sara Saperstein, head of data science, cybersecurity, and fraud at MassMutual. The panelists discussed how they’re able to leverage AI in a variety of different ways at their jobs.

Aronov explained that, in a broad sense, AI can help automate “mundane” tasks so employees can focus on projects that require creative, innately “human” thinking. Idowu uses AI to improve customer and employee experiences, from claims to risk assessments. DiPaola addressed the common misconception that AI refers to sentient robots: when the Media Lab developed the social robot Jibo, the AI in action was not the robot itself but natural language understanding, the technology that helps Jibo understand what people say and mean. Throughout her academic career, DiPaola has been interested in how people interact with technology. “AI is helping us uncover things about ourselves,” she said.

The panelists also spoke to the broader goals of Day of AI — not only to introduce a younger generation to the STEM concepts at the core of AI technology, but to help them envision a future for themselves that uses those skills in new ways. “It’s not just the math and computer science, it’s about thinking deeply about what we’re doing — and how,” said Saperstein.

Jeffrey Leiden, executive chair of Vertex Pharmaceuticals (a founding sponsor of Day of AI as well as the CitySpace event), said, “Twenty years ago, I don’t think any of us could have predicted how much AI and machine learning would be in our lives. We have Siri on our phones, AI can tell us what’s in our fridges, it can change the temperature automatically on our thermostats.” As someone working in the medical industry, he is particularly excited about how AI can detect medical events before they happen, so patients can be treated proactively.

By introducing STEM subjects as early as elementary and middle school, educators can build pathways for students to pursue STEM in high school and beyond. Exposure to future careers as scientists and researchers working in fields ranging from life sciences to robotics can empower students to bring their ideas forward and come up with even better solutions for science’s great questions.

The first Day of AI was hugely successful, with teachers posting photos and stories of their students’ enthusiasm from all over the world on social media using #DayofAI. Further Day of AI events are planned in Australia and Hong Kong later this summer, and the MIT RAISE team is already planning new curriculum modules, resources, and community-building efforts in advance of next year’s event. Plans include engaging the growing global community for language translation, more cultural localization for curriculum modules, and more.

Linguistics luminaries Noam Chomsky and Morris Halle honored

Nearly 60 years ago, Noam Chomsky and Morris Halle established the MIT Department of Linguistics. This spring, the department dedicated a wing of its Stata Center home to these founding fathers.

“Together, they defined and transformed the entire field of linguistics,” says Danny Fox, the Anshen-Chomsky Professor of Language and Thought and department head. “Naming the wing after them seemed like a way of indicating their centrality not only to our discipline but in so many ways to all of cognitive science.”

Halle, who taught at MIT from 1951 to 1996 and became an Institute Professor, died in 2018. Chomsky came to MIT in 1955 and retired in 2002, continuing his research as Institute Professor Emeritus. He moved to the University of Arizona several years ago, where he is laureate professor of linguistics. Halle and Chomsky shared an office in MIT’s fabled Building 20, and when it was demolished, they moved to a space in the Stata Center. After Chomsky’s departure, this area was redesigned for use as the department’s Language Acquisition Lab.

“With our growing emphasis on experimental work, it seemed natural to devote this space to our new lab,” says David Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics, and former department head. “Both Noam and Morris were my teachers in the early 1980s, and no student today can work in this field without being influenced by them. We thought it would be wonderful to name this area for our beloved colleagues, who taught so many of us.”

Talk and toasts

At an event that combined celebration with reunion, Chomsky delivered a virtual lecture titled “Genuine Explanation and the Strong Minimalist Thesis,” which was made available to the entire MIT community. After the talk, faculty, alumni, and graduate students — representing the ancestral tree of linguists trained by Chomsky and Halle directly, and by their protégés — engaged in lively colloquy with Chomsky, in line with time-honored linguistics department tradition.

At the reception afterwards, friends and family toasted the long-lived partnership. Jay Keyser, professor emeritus of linguistics, described the “deep affection” Chomsky and Halle had for each other. “It was one of those rare moments in history when their paths converged,” he said. “Like Darwin, Newton, Einstein, and Niels Bohr, they were scientists who changed the way we looked at ourselves.”

Robert C. Berwick, a professor of computer science and engineering and computational linguistics, characterized the two as “playing together in an especially complementary way.”

Halle’s son Tim spoke of “the lifelong friendship that defined them both,” while John recalled roaming the halls of Building 20 and noticing “how much fun people were having in their labs.” “There was arguing, and the point was to make progress, advance the science, but laughter is what I remember most,” he said.

Just 15 years earlier, Pesetsky remarked, he could walk down the hall if he “wanted to argue with Morris or find out what Noam had to say about something.” Today, in the revamped wing, “we do much the same, learning from and arguing with each other, in the spirit of the department they created.”

Nostalgia

“My main reaction when I learned about the dedication was, I must admit, nostalgia,” says Chomsky. “I started thinking how Morris and I had worked together all these years, since we met as graduate students in the mid-20th century.” The two commuted to campus on the subway, and later Chomsky would drive them both in. “When Morris and I decided in the late 1950s to start a department, even to us it seemed a pretty wild idea. Would students come to MIT to study linguistics — which was not yet recognized as a field? To our surprise, a group of outstanding students came the first year, and then our contemporary form of linguistics exploded, moving around to the rest of the country and elsewhere in the world.”

This newly defined discipline aimed to characterize human language acquisition as a unique and biologically based trait. Chomsky specialized in syntax and Halle in phonology, but they produced seminal texts together, such as “The Sound Pattern of English.” They also served as the engine driving a tight-knit group of graduate researchers and young faculty.

Among them was Donca Steriade, the Class of 1941 Professor of Linguistics. “Morris had a seriousness of purpose in science that obliterated everything else,” she recalls. “One could agree or disagree with him over points of doctrine or the ways in which we carried out our work, but there was no question that he was profoundly dedicated to finding the truth.” As a young researcher, Steriade found it reassuring that Halle and Chomsky “viewed their lives as organized around the act of doing science.”

Steriade and other students were held to high standards. “There were quite a few of us as graduate students whom Morris suggested shouldn’t be in the field, and it wasn’t easy always being tested,” she says. But Halle was also “interested in each of us as individuals as well as a source of ideas, and curious about what kind of human beings we were.” His critiques were intended to compel young researchers to embrace linguistics not as a career so much as a path for advancing science. “Don’t look at data without a purpose; Morris and Noam never forgot this, and I never forget this,” says Steriade.

Advancing the field

At the MIT Language Acquisition Lab, Associate Professor Martin Hackl and Assistant Professor Athulya Aravind currently investigate how human infants and children determine the properties of their native language in a startlingly brief period of time. Aravind is acutely aware of what it means to conduct her research in the newly dedicated wing. “We are standing on the shoulders of giants like Chomsky and Halle,” she says.

While theorists such as Chomsky and Halle laid down the theoretical foundations for language acquisition, the experimental field has lagged, she says. “We are dealing with a special population, infants and young children,” she adds, “which has made it difficult to collect empirical data to help us understand what kids know about language, and the biological origins of language development.”

At the lab, Aravind and her colleagues are devising new methods and behavioral measures to explore how babies and toddlers pick up language. “What are slowing us down are just practicalities,” she says. Joining her in the lab are similarly dedicated graduate researchers, another legacy of the Chomsky-Halle collaboration, she notes. “Their vision was specific: They insisted that graduate students must be part of the process of scientific discovery from the get-go, and their ideas and findings must be taken seriously by senior faculty, and they must take themselves seriously as well.”

Chomsky says he could not be more heartened by the experimental and educational objectives of Aravind and her colleagues. “It is very gratifying to have this wing dedicated to new research, which is really exciting work at the frontiers of understanding,” he says. Chomsky also sees no end of questions to explore in the field. “Every time you make a discovery, it opens up a door to new problems, which will go on indefinitely. Somehow, each of us finds new thoughts in our minds, maybe new in the history of language, or in our experience. How do we do that? Nobody has a clue. That one we may never solve.”

Congressional seminar introduces MIT faculty to 30 Washington staffers

More than 30 congressional and executive branch staffers were hosted by MIT’s Security Studies Program (SSP) for a series of panels and a keynote address focused on contemporary national security issues. 

Organized by the Security Studies Program, the Executive Branch and Congressional Staff Seminar was held from Wednesday, April 20, to Friday, April 22, in Cambridge, Massachusetts. The program, supported by a generous grant from the Raymond Frankel Foundation, is hosted by MIT every other year to encourage interaction and exchange between scholars studying national security and policymakers.

Staff members from the U.S. House of Representatives, the Senate, and the Congressional Research Service were joined by more than 15 MIT SSP faculty members and research affiliates, experts on topics ranging from China’s ambitions to great-power competition.

This year’s program included a guided tour of the MIT Lincoln Laboratory in Lexington, Massachusetts, four intensive panels with SSP faculty and affiliates, and a keynote address by Admiral John Richardson, the former chief of naval operations.

Keynote address

In his address, Richardson argued the United States is facing two simultaneous revolutions that have the potential to reshape the world. First, a political revolution of rising powers is returning the world to multipolarity and spreading authoritarianism. Second, a technological revolution of interconnected new technologies, from artificial intelligence to quantum computing, promises not only to increase speed and efficiency, but also to allow for entirely new capabilities. 

Richardson compared the current moment to two points in history: the turn of the 19th century and the beginning of the Cold War. In both periods, he said, the United States faced intertwined political and technological revolutions. 

In each case, he said, the U.S. and its allies prevailed. This success was won in both the political and technological spheres. 

In both periods, he said, a sense of existential urgency enabled a more adaptable, learning-based approach to rapid change. In the end, the United States benefited from a coherent strategy for addressing worldwide changes.

The current challenges, Richardson said, demand a similar sense of urgency, adaptability, and learning if the U.S. is to prevail in preserving its influence in the world, and its quality of life.

The changing international order

During a panel on the “Changing International Order,” staffers heard from Ford International Professor of Political Science Barry Posen, SSP Senior Advisor Carol Saivetz, and Jonathan Kirshner, a professor of political science and international studies at Boston College.

Posen focused his remarks on Russia and China’s growing power relative to the United States, in the context of the 2008 financial crisis, the Covid-19 pandemic, and the war in Ukraine. Kirshner identified the domestic politics of key participants in the international order, especially domestic dysfunction in the United States, as the chief driver of change. Saivetz offered several hypotheses on the causes of Russia’s invasion of Ukraine, including pushback against the expansion of NATO and the European Union, the desire for great-power status, concerns about a liberal democracy on Russia’s borders, and the influence of the Russian Orthodox Church.

New tools of statecraft

A panel on “New Tools of Statecraft” featured remarks by Richard Nielsen, associate professor of political science at MIT, Mariya Grinberg, assistant professor of political science at MIT, and Joel Brenner, senior advisor to MIT SSP. MIT’s R. David Edelman, director of the Project on Technology, Economy and National Security and Computer Science and Artificial Intelligence Laboratory affiliate, chaired the panel.

Nielsen discussed the role of U.S. influence in a world beset by misinformation. He emphasized that the internet is more fragmented than it has ever been, and that America’s ability to shape people’s opinions through the internet is extremely limited. Grinberg, an expert on conflict economies, addressed which policy changes were necessary — and which were not — in response to the Covid-19 pandemic’s effects on markets. Brenner observed that many existing tools of statecraft are not “new,” but that the speed, coordination, and synchronization with which they are deployed is new, as demonstrated by both the Russians and the Ukrainians in the ongoing war.

China’s growing ambitions

A panel on “China’s Growing Ambitions” featured remarks by MIT SSP director and Arthur and Ruth Sloan Professor of Political Science M. Taylor Fravel along with two SSP alumni: Joseph Torigian PhD ’16, an assistant professor with the School of International Service at American University, and Fiona Cunningham PhD ’18, an assistant professor of political science at the University of Pennsylvania.

Torigian suggested that Chinese General Secretary Xi Jinping’s views likely balance the pursuit of the Communist Party’s ideals and mission against a deep skepticism of radical policies, the kind of leftism and radicalism associated with events such as the Cultural Revolution. Xi is ideological, he said, but flexible. Cunningham spoke broadly on China’s ambitions and concluded that the U.S. needs to do more to implement a competitive Indo-Pacific policy, especially on trade, and that U.S. officials should work to protect and strengthen existing channels of communication so that they remain functional in a crisis. Fravel discussed recent military changes in China, noting that China adopted a new military strategy in 2019 that identifies the U.S. and Taiwan as principal adversaries, but arguing that it amounted to little more than top-level cosmetic changes to the 2014 military strategy, intended to help cement Xi’s role as a military leader.

The new nuclear era

The “New Nuclear Era” panel featured three MIT faculty and affiliates: Senior Research Associate Jim Walsh, Principal Research Scientist Eric Heginbotham, and Caitlin Talmadge PhD ’11, an associate professor with the School of Foreign Service at Georgetown University and an SSP alumna.

Heginbotham discussed the increasing number and variety of roles that nuclear weapons play in international affairs, emphasizing how multipolarity and nuclear proliferation create “nested security dilemmas.” Talmadge similarly highlighted the complexity of the deterrence environment with multiple, multi-sided nuclear competitions occurring at once. Walsh framed the war in Ukraine as a reminder of nuclear danger that motivates the public both to “hug nuclear weapons more closely in a more dangerous world” and to “reduce nuclear danger before unimaginably bad things happen.”

Virtual worlds apart

What is virtual reality? On a technical level, it is a headset-enabled system using images and sounds to make the user feel as if they are in another place altogether. But in terms of the content and essence of virtual reality — well, that may depend on where you are.

In the U.S., for instance, virtual reality (VR) has deep roots in military training technology. Later it took on a “techno-utopian” air as it gained more attention in the 1980s and 1990s, as MIT Professor Paul Roquet observes in a new book on the subject. But in Japan, virtual reality has become heavily oriented around “isekai,” or “other world” fantasies, including scenarios where the VR user enters a portal to another world and must find their way back.

“Part of my goal, in pulling out these different senses of virtual reality, is that it can mean different things in different parts of the world, and is changing a lot over time,” says Roquet, an associate professor of media studies and Japan studies in MIT’s Comparative Media Studies/Writing program.

As such, VR constitutes a useful case study in the interactions of society and technology, and in the way innovations evolve in relation to the cultures that adopt them. Roquet details these differences in the new book, “The Immersive Enclosure: Virtual Reality in Japan,” published this week by Columbia University Press.

Different lineages

As Roquet notes in the book, virtual reality has a lengthy lineage of precursor innovations, dating at least to early 20th-century military flight simulators. A 1960s stereoscopic arcade machine, the Sensorama, is regarded as the first commercial VR device. Later in the decade, Ivan Sutherland, a computer scientist with an MIT PhD, developed a pioneering computerized head-mounted display.

By the 1980s in the U.S., however, virtual reality, often linked with technologist Jaron Lanier, had veered off in a different direction, being cast as a liberatory tool, “more pure than what came before,” as Roquet puts it. He adds: “It goes back to the Platonic ideal of the world that can be separated from everyday materiality. And in the popular imagination, VR becomes this space where we can fix things like sexism, racism, discrimination, and inequality. There’s a lot of promises being made in the U.S. context.”

In Japan, though, VR has a different trajectory. Partly because Japan’s postwar constitution prohibited most military activities, virtual reality developed more in relation to forms of popular entertainment such as manga, anime, and video games. Roquet believes its Japanese technological lineage also includes the Sony Walkman, which created private space for media consumption.

“It’s going in different directions,” Roquet says. “The technology moves away from the kind of military and industrial uses promised in the U.S.”

As Roquet details in the book, different Japanese phrases for virtual reality reflect this. One term, “bacharu riariti,” reflects the more idealistic notion that a virtual space could functionally substitute for a real one; another, “kaso genjitsu,” situates virtual reality more as entertainment where the “feeling matters as much as technology itself.”

The actual content of VR entertainment can vary, from multiplayer battle games to other kinds of fantasy-world activities. As Roquet examines in the book, Japanese virtual reality also has a distinct gender profile: One survey in Japan showed that 87 percent of social virtual reality users were male, but 88 percent of them were embodying female lead characters, and not necessarily in scenarios that are empowering to women. Men are thus “everywhere in control yet nowhere to be seen,” Roquet writes, while “covertly reinscribing gender norms.”

A rather different potential application for virtual reality is telework. As Roquet also details, considerable research has been applied to the idea of using VR to control robots for use in numerous settings, from health care to industrial tasks. This is something Japanese technologists share with, say, Mark Zuckerberg of Meta, whose company has become the leading U.S. backer of virtual reality.

“It’s not so much that there’s an absolute divide [between the U.S. and Japan],” Roquet says; instead, he notes, there is a different emphasis in terms of “what virtual reality is about.”

What escapism cannot escape

Other scholars have praised “The Immersive Enclosure.” Yuriko Furuhata, an associate professor at McGill University, has called the book “a refreshing new take on VR as a consumer technology.” James J. Hodge, an associate professor at Northwestern University, has called it “a must-read for scholars in media studies and general readers alike fascinated by the flawed revolutionary potential of VR.”

Ultimately, as Roquet concludes at the end of the book, virtual reality still faces key political, commercial, and social questions. One of them, he writes, is “how to envision a VR future governed by something other than a small set of corporate landlords and the same old geopolitical struggles.” Another, as the book notes, is “what it means for a media interface to assert control over someone’s spatial awareness.”

In both matters, that means understanding virtual reality — and technology broadly — as it gets shaped by society. Virtual reality may often present itself as a form of escapism, but there is no escaping the circumstances in which it has been developed and refined.

“You can create a space that’s outside of the social world, but it ends up being highly shaped by whoever is doing the creation,” Roquet says.
