People Should Find a Safe Storm Shelter During a Thunderstorm

Storm Shelters in OKC

Tuesday, June 5, 2001 marked the start of an extraordinary time in the history of my beloved Houston. Tropical Storm Allison came to visit that early summer day. The storm passed through quickly that Tuesday. Then Friday arrived, and Allison returned, this time moving slowly, this time from the north. The storm stalled. Thousands of people were driven from their homes. Several leading hospitals closed just when they were needed most. Dozens of major surface roads, and every major highway, were covered in high water.

Yet even before the rain stopped, stories of Christian compassion and service to others began to be written. About 75 people had gathered for a couples class at Lakewood Church, one of the largest nondenominational churches in the United States. By the time they were ready to leave, the waters had risen so high that they were stranded. Lakewood's facility stayed high and dry in the middle of one of the hardest-hit parts of town. Refugees from the powerful storm began arriving at their doorstep. With no advance preparation, and no need of official sanction, those 75 classmates started a disaster shelter that grew to hold over 3,000 people, the largest of more than 30 shelters established at the height of the storm.

Afterward, Lakewood served as a Red Cross Service Center, where help was handed out to those who had suffered losses. When it became clear that FEMA and Red Cross aid would not bring enough relief, Lakewood and Second Baptist Church of Houston joined to create an adopt-a-family plan to help get folks back on their feet more quickly. In the days that followed, armies of Christians arrived at both churches. People of every economic standing, race, and denomination gathered from all over town. Wet, rotted carpets were pulled up and sheetrock removed. Piles of donated clothes, food, and bedding were handed out. Elbow grease and cleaning equipment were used to begin removing the traces of the damage.

If the story stopped here, it would already be an excellent example of practical ministry in a time of disaster, but it continues. Many other churches served as shelters, and in the days that followed, as Red Cross Service Centers. Scores of new volunteers, many of them Christians, were put through accelerated training and put to work. That Saturday I was trapped in my own subdivision, but certain that my family was safe because I worked at Storm Shelters OKC, near where I lived. What they would not permit the storm to take was their desire to give, their faith, or their self-respect. I saw so many people praising the Lord as they brought gifts of food, clothes, and bedding. I saw young children coming with their parents to give new, rarely used toys to kids who had none.

Leaning On God Through Hard Times

Unity Church of Christianity, from a location across town, sent a sizable supply of bedding and other supplies. A small troupe of musicians and Christian clowns asked to be allowed to entertain the children in the shelter where I served, and arrived. We, of course, promptly accepted their offer. They gathered the children in a large empty space of floor. They sang, they told stories, they made balloon animals. The frightened, at least briefly displaced children laughed.

When not occupied elsewhere, I did a lot of listening. I listened to disappointed survivors and frustrated relief workers. I listened to children trying to make sense of a situation they could not comprehend. These are only the stories I have seen or heard myself. I know that churches, spiritual groups, and many other individual Christians served admirably. I do want to thank them for their efforts in the disaster. I thank the Lord for providing them to serve.

I didn't write this so you would feel sorry for Houston or its people. Rather, what I saw as this disaster unfolded strengthened my belief that the Lord will provide for us through our brothers and sisters in faith. No matter how badly your community is hit, you, the individual Christian, can be part of the remedy. Those blankets you have stored away and will probably never use mean much to people who have none. You can help if you can drive. You can help if you can make up a cot. You can help if you can scrub a wall. You can help if all you can do is sit and listen. Large catastrophes like Allison get lots of attention, but a disaster can come in any size. If a single house burns, that is a serious disaster to the family that called it home. It will be generations before the people here forget Allison.

United States Oil and Gas Exploration Opportunities

Firms investing in this sector can explore, develop, and produce, and enjoy the benefits of a global oil and gas portfolio without the usual political and economic disadvantages. The US permitting regime and financial conditions are rated among the best in the world, and the petroleum produced in the US is sold at international prices. The firms are also likely to gain because the US has a booming domestic market. Most petroleum exploration in the US has so far been concentrated around the Taranaki Basin, where about 500 exploration wells have been drilled. Many other US sedimentary basins, on the other hand, remain unexplored; many show evidence of petroleum seeps and structures, and survey data has also revealed formations with high hydrocarbon potential. There have been onshore gas discoveries before, including in the Great South, East Coast, and offshore Canterbury basins.

Demand for petroleum is expected to grow strongly during this period, and this does nothing to dim the bright future expected for this sector. Demand for petroleum is anticipated to reach 338 PJ per annum. The US government is eager to augment the oil and gas supply. Because new discoveries are needed to meet domestic demand, raise the level of self-reliance, and minimize spending on petroleum imports, the oil and gas exploration sector is considered one of the dawn sectors. The US government has devised a distinctive approach to reach its petroleum and gas exploration targets: it has developed a "Benefit For Attempt" model for petroleum and gas exploration projects in the US.

"Benefit For Attempt" in today's analysis is defined as oil reserves found per kilometer drilled. It helps in estimating the reserves found for each kilometer drilled and each dollar spent on exploration. The US government has shown considerable signs that it will introduce favorable changes encouraging exploration of new oil reserves, since the cost of exploration has an adverse effect on exploration activity. The government has made information about the country's oil potential accessible in its study report. Transparency of information on royalty and allocation regimes, and simplicity of processes, have enhanced the attractiveness of the petroleum and natural gas sector in the United States.
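The metric described above reduces to two simple ratios. Here is a minimal sketch of the computation; the barrel, kilometer, and dollar figures are made-up illustrations, not actual survey data, and the function name is hypothetical.

```python
def benefit_for_attempt(reserves_found_bbl: float, km_drilled: float,
                        dollars_spent: float):
    """Oil reserves found per kilometer drilled, and per dollar spent."""
    per_km = reserves_found_bbl / km_drilled
    per_dollar = reserves_found_bbl / dollars_spent
    return per_km, per_dollar

# Hypothetical exploration campaign: 2,000,000 barrels found
# over 40 km of drilling at a cost of $50,000,000.
per_km, per_dollar = benefit_for_attempt(2_000_000, 40, 50_000_000)
print(per_km)      # 50000.0 barrels per km drilled
print(per_dollar)  # 0.04 barrels per dollar spent
```

Comparing these ratios across prospects is what lets a government or investor judge where each exploration dollar and kilometer is best spent.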

Petroleum was the third-biggest export earner for the US in 2008, and the opportunity to keep up the sector's growth is broadly available by way of new exploration endeavors. The government is poised to maintain the momentum in this sector. Many firms are now active with new exploration projects in the Challenger Plateau of the United States, the Northland East Slope Basin region, the outer Taranaki Basin, and the Bellona Trough region. The 89 Energy oil and gas sector reassures foreign investors, as the government, to encourage high growth, has declared a five-year continuation of an exemption for offshore petroleum and gas exploration in its 2009 budget. The authorities also provide nonresident rig operators with tax breaks.

Modern Robot Duct Cleaning Uses

Heating, ventilation, and air conditioning systems collect pollutants and contaminants like mold, debris, dust, and bacteria that can have an adverse impact on indoor air quality. Most folks are now aware that indoor air pollution can be a health concern, and the area has thus gained increased visibility. Studies have also suggested that cleaning these systems improves their efficiency and contributes to a longer operating life, along with maintenance and energy cost savings. Duct cleaning is the cleaning of the components of forced-air heating, ventilation, and cooling systems. Robots are an advantageous tool, improving the cost and efficiency of the procedure. The use of modern robots in duct cleaning is therefore no longer a new practice.

A clean air duct system creates a cleaner, healthier indoor environment, lowers energy costs, and increases efficiency. As we spend more hours indoors, air duct cleaning has become an important part of the cleaning sector. Indoor pollutant levels can build up. Health effects can show up immediately or years after repeated or long exposure. These effects range from respiratory diseases to cardiovascular disease and cancer, and can be debilitating or deadly. It is therefore wise to ensure that indoor air quality is not endangered inside buildings. According to the Environmental Protection Agency, levels of dangerous pollutants found indoors can exceed those of outdoor air.

Duct cleaning by Air Duct Cleaning Edmond professionals removes both visible contaminants and microbial contaminants that may not be visible to the naked eye. These can affect indoor air quality and present a health hazard. Air ducts can be host to a number of hazardous microbial agents. Legionnaires' disease is one malaise that has gotten public notice, as our modern surroundings support the growth of the bacteria that causes the affliction and has the potential to cause outbreaks. Typical disease-causing environments include moisture-producing equipment, such as that found in air-conditioned buildings with badly maintained cooling towers. In summary, in designing and building systems to control our surroundings, we have created perfect conditions for this disease. Those systems must be correctly monitored and maintained. That is the secret to controlling this disorder.

Robots allow the job to be done faster while saving workers from exposure. Signs of the technological progress in the duct cleaning business are apparent in the variety of equipment now available, for example the array of robotic gear used in air duct cleaning. Robots are priceless in hard-to-reach places. Robots once used only to view conditions inside the duct may now be used for spraying, cleaning, and sampling procedures. The remote-controlled robotic gear can be fitted with practical attachments and fasteners to serve many different functions.

Video recorders and a closed-circuit television camera system can be attached to the robotic gear to view conditions and operations, and for documentation purposes. Inspection devices on the robot examine the inside of the ducts. Robots can travel to particular sections of the system and move around barriers. Some combine functions that enable cleaning operations and fit into small ducts. They can deliver a useful viewing range, with models delivering disinfection, cleaning, inspection, coating, and sealing abilities economically.

The remote-controlled robotic gear comes in various shapes and sizes for different uses. The first use of robotic video cameras was in the 1980s, to record conditions inside ducts. Robotic cleaning systems now have many more uses. These devices provide improved access for better cleaning and reduce labor costs. Lately, the service industries have expanded the functions of small mobile robots, including uses for inspection and duct cleaning.

More improvements are being considered to make an already productive tool even more effective. If you decide to have your heating, ventilation, and cooling system cleaned, it is important to make sure the contractor is qualified and cleans all parts of the system. Failure to clean one part of a contaminated system can lead to re-contamination of the entire system.

When To Call A DWI Attorney

Charges or fees against a DWI offender need a qualified Sugar Land criminal defense attorney in order to be reduced or dismissed. So, undoubtedly, everyone charged needs a DWI attorney. Even for a first-time violation the penalties can be severe, so being represented by a qualified DWI attorney is vitally important. If you are facing subsequent charges for DWI, the punishments can be severe and can include felony charges. Finding an excellent attorney is thus a job you should approach as soon as possible.

Every state in America makes its own laws and legislation regarding DWI violations, so you must bear in mind that you should hire a DWI attorney who practices within the state where the violation occurred. This is because they will have the knowledge and experience of the relevant state law to adequately defend you, and will be familiar with the processes and tests performed to establish your guilt.

As your attorney, they will examine the tests that were completed at the time of your arrest, and the accompanying police evidence, to assess whether these tests were accurately performed, carried out by competent staff, and whether the right procedures were followed. Police testimony can also be challenged in court, although it is not often that police testimony is argued against.

When you start looking for a DWI attorney, you should try to locate someone who specializes in these kinds of cases. While many attorneys may be willing to take on your case, a lawyer who specializes in these cases has the expert knowledge needed to interpret the scientific and medical tests run when you were detained. The first consultation is free and gives you the chance to ask about their experience in these cases and their fees.

Many attorneys work according to an hourly fee, or on a set-fee basis determined by the kind of case. You can find out how they are paid to suit your financial situation, and you may be able to negotiate the terms of their fee. If you cannot afford to hire a private attorney, you can request a court-appointed attorney paid for by the state. Before you hire a DWI attorney, you should make sure you understand the precise charges against you and when you are expected to appear in court.

How a Credit Card Works

The credit card makes your life easier, supplying an amazing set of options. The credit card is a retail payment instrument: a credit system worked through the little plastic card that bears its name. The physical card itself always takes the same structure, size, and shape, as defined by the ISO 7810 standard. A strip of special material on the card (the substance resembles that of a floppy disk or a magnetic tape) stores all the necessary data. This magnetic strip enables the credit card's validation. The design has also become an important factor; an enticing credit card design is essential in ensuring its dependability and information-keeping properties.

A credit card is supplied to the user only after a bank approves an account, assessing a varied range of variables to ascertain financial reliability. This bank is the credit provider. When an individual makes a purchase, he must sign a receipt to verify the transaction. The receipt carries the card details and the amount of money to be paid. Many shops accept electronic authorization for credit cards and use cloud tokenization for authorization. Nearly all verifications are made using a digital verification system, which allows checking that the card is valid. A retailer may also check whether the customer has enough credit to cover the purchase he is attempting to make, staying within his credit limit.
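One well-known component of card validation, not named in the text above but widely used alongside the checks it describes, is the Luhn checksum built into card numbers. A minimal sketch, using a sample number constructed for illustration:

```python
def luhn_valid(card_number: str) -> bool:
    """Check a card number against the Luhn checksum (mod-10 algorithm)."""
    digits = [int(ch) for ch in card_number if ch.isdigit()]
    checksum = 0
    # Walk from the rightmost digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4539148803436467"))  # True  (sample Luhn-valid number)
print(luhn_valid("4539148803436468"))  # False (last digit corrupted)
```

The check catches most single-digit typos and transpositions before any request is sent to the bank; it proves only that the number is well-formed, not that the account exists or has credit available.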

As the credit provider, it is up to the bank to keep the user informed of his statement. Banks typically send monthly statements detailing each transaction processed through the card, the outstanding fees, and the amounts owed. This enables the cardholder to ensure all the payments are correct, and to discover mistakes or fraudulent activity to dispute. Interest is typically charged, and a minimum repayment amount established, by the end of the following billing cycle.

The precise way the interest is charged is normally set in an initial agreement, and the provider specifies these elements on the back of the credit card statement. Generally, the credit card is a simple form of revolving credit from one month to the next. It can also be a sophisticated financial instrument, having many balance sections to afford a greater extent of credit management. Interest rates can also differ from one card to another. Credit card promotion services use some appealing incentives to keep their customers and find some new ones along the way.
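To make the revolving-credit mechanics above concrete, here is a small sketch of one billing cycle. The 18% APR, the 2% minimum-payment rate, and the $25 floor are illustrative assumptions, not figures from any actual card agreement, and real issuers often use average daily balances rather than a single month-end balance.

```python
def monthly_statement(balance: float, apr: float, min_rate: float = 0.02):
    """One billing cycle: interest charged, new balance, minimum payment due."""
    monthly_rate = apr / 12                  # simple monthly periodic rate
    interest = balance * monthly_rate        # interest on the carried balance
    new_balance = balance + interest
    minimum_due = max(25.0, new_balance * min_rate)  # assumed $25 floor
    return round(interest, 2), round(new_balance, 2), round(minimum_due, 2)

interest, new_balance, minimum_due = monthly_statement(1000.0, 0.18)
print(interest, new_balance, minimum_due)  # 15.0 1015.0 25.0
```

Paying only the minimum leaves most of the balance to roll into the next cycle, which is why carried balances grow so quickly at these rates.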

Why Get Help From A Property Management Company?

One solution for keeping the revenue of your rental home while removing much of the anxiety is to contact and engage property management in Oklahoma City, Oklahoma. If you are considering this and wish to know more, please read the rest of the post. As many landlords understand, leasing out your piece of real property can be a real cash cow, but that cash flow usually comes with a tremendous amount of worry. Night phone calls from tenants with overflowing lavatories, overdue rent payments you must chase down, and the trouble of marketing the house when you have a vacancy all take a lot of the pleasure out of earning money off of leases. One solution for keeping the earnings while removing much of the anxiety is to engage a property management company.

These businesses act as the go-between for you and the tenant. When you hire a property management company, the tenant never actually needs to know who you are. The company manages the day-to-day relationship with the tenant, while you still retain the ability to make the final judgments regarding the home. If you have a unit that is vacant, the company can manage the marketing for you. Since the company will have more connections in a bigger market, and knows the industry better than you do, you'll find your unit gets filled much more quickly with their help. In addition, the property management company will take care of screening prospective tenants and help prospects move in by partnering with the right home services and moving company. Depending on the arrangement you have, you may still have the final say on whether a tenant is qualified for the unit, but the day-to-day difficulty of finding a suitable tenant is no longer your problem. They will also manage the move-in inspections as well as the inspections required after a tenant moves out.

Once the unit is filled, you can step back and watch the profits. If there is an issue, the company will handle communication with the tenant. You won't be telephoned if a pipe bursts in the middle of the night. The tenant calls your representative at the company, who then makes the arrangements required to get the issue repaired by a maintenance provider. You may get a phone call a day later, or may not know there was a problem until you check in with the business. The property management company will also collect your rental payments for you. If your tenant is late making a payment, the company will do what's required to collect. In certain arrangements, the company will also take over paying the taxes, insurance, and mortgage on the piece of property. You need do nothing but enjoy the revenue that is sent your way after all the bills are paid.

With all these advantages, you are probably wondering what the downside to employing a property management company must be. The primary factor that stops some landlords from hiring one is the price: you will be paying for all of these services. You must weigh the price against the time you'll save, time that you may then use to pursue additional revenue-producing efforts or simply to enjoy the fruits of your investment work.

Benefits From Orthodontic Care

Orthodontics is the specialty of dentistry centered on the diagnosis and treatment of dental and related facial problems. The outcomes of Norman Orthodontist OKC treatment can be dramatic: lovely smiles, improved oral health, better aesthetics, and an advanced quality of life for many individuals of all ages. Whether cosmetic dental attention is needed or not is an individual's own choice. Most folks tolerate conditions like various kinds of bite issues or overbites and don't get treated. Nevertheless, a number of us feel more assured with teeth that are correctly aligned, appealing, and simpler to maintain. Dental care can enhance appearance and build strength. It might also help you speak with clarity or chew better.

Orthodontic care isn't only cosmetic in character. It can also benefit long-term oral health. Straight, correctly aligned teeth are easier to floss and clean. This can decrease the risk of decay. It can also stop the gum irritation that occurs once microorganisms cluster around the place where the teeth and the gums meet. Left untreated, such an unhealthy state can ruin the bone that surrounds the teeth and result in tooth loss. People with harmful bites chew with less efficiency. A few of us with a serious bite problem might have difficulty obtaining enough nutrients; this can happen when the teeth aren't aligned correctly. Repairing bite issues can make it easier to chew and digest meals.

One may also have speech problems when the top and lower front teeth do not align right. These can be fixed through therapy, occasionally combined with medical help. Finally, treatment may help prevent premature wear of the rear teeth. As you chew down, your teeth are subjected to an enormous amount of pressure. If your top teeth do not match up well, your back teeth will degrade. The most frequently encountered forms of therapy are braces (or a retainer) and headgear. A lot of people, however, complain about discomfort with this technique, which, unfortunately, is unavoidable. Braces can cause damage during sports, and some individuals have problems talking. Dental practitioners, though, say the hurting normally disappears within several days. If you'd like to avoid more unpleasant sensations, you should avoid hard, crunchy, and sticky food. In addition, do not take your braces off unless the medical professional says so.

It is advised that you see your medical professional regularly for examinations, to prevent possible problems that may appear while you are getting therapy. If necessary, you will be prescribed a specific dental hygiene routine. Dental specialists now look out for the identification and management of malocclusion. Orthodontia, the relevant specialization of the field, mainly targets repairing jaw and teeth problems, your smile, and thus your bite. Orthodontists, however, won't only do jaw remedies and emergency tooth work; they also handle mild to severe dental conditions that could grow into risky states. You really do not have to measure out your whole life around such a predicament. See a dental specialist, and you'll soon notice just how stunning your smile can be.

Q&A: Global challenges surrounding the deployment of AI

The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI.

The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.

Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?

Ozdaglar: There is no shortage of discussion about AI at different venues, but conversations are often high-level, focused on questions of ethics and principles, or on policy problems alone. The approach the AIPF takes to its work is to target specific questions with actionable policy solutions and engage with the stakeholders working directly in these areas. We work “behind the scenes” with smaller focus groups to tackle these challenges and aim to bring visibility to some potential solutions alongside the players working directly on them through larger gatherings.

Q: AI impacts many sectors, which makes us naturally worry about its trustworthiness. Are there any emerging best practices for development and deployment of trustworthy AI?

Madry: The most important thing to understand regarding deploying trustworthy AI is that AI technology isn’t some natural, preordained phenomenon. It is something built by people. People who are making certain design decisions.

We thus need to advance research that can guide these decisions as well as provide more desirable solutions. But we also need to be deliberate and think carefully about the incentives that drive these decisions. 

Now, these incentives stem largely from the business considerations, but not exclusively so. That is, we should also recognize that proper laws and regulations, as well as establishing thoughtful industry standards have a big role to play here too.

Indeed, governments can put in place rules that prioritize the value of deploying AI while being keenly aware of the corresponding downsides, pitfalls, and impossibilities. The design of such rules will be an ongoing and evolving process as the technology continues to improve and change, and we need to adapt to socio-political realities as well.

Q: Perhaps one of the most rapidly evolving domains in AI deployment is in the financial sector. From a policy perspective, how should governments, regulators, and lawmakers make AI work best for consumers in finance?

Videgaray: The financial sector is seeing a number of trends that present policy challenges at the intersection of AI systems. For one, there is the issue of explainability. By law (in the U.S. and in many other countries), lenders need to provide explanations to customers when they take actions deleterious in whatever way, like denial of a loan, to a customer’s interest. However, as financial services increasingly rely on automated systems and machine learning models, the capacity of banks to unpack the “black box” of machine learning to provide that level of mandated explanation becomes tenuous. So how should the finance industry and its regulators adapt to this advance in technology? Perhaps we need new standards and expectations, as well as tools to meet these legal requirements.

Meanwhile, economies of scale and data network effects are leading to a proliferation of AI outsourcing, and more broadly, AI-as-a-service is becoming increasingly common in the finance industry. In particular, we are seeing fintech companies provide the tools for underwriting to other financial institutions — be it large banks or small, local credit unions. What does this segmentation of the supply chain mean for the industry? Who is accountable for the potential problems in AI systems deployed through several layers of outsourcing? How can regulators adapt to guarantee their mandates of financial stability, fairness, and other societal standards?

Q: Social media is one of the most controversial sectors of the economy, resulting in many societal shifts and disruptions around the world. What policies or reforms might be needed to best ensure social media is a force for public good and not public harm?

Ozdaglar: The role of social media in society is of growing concern to many, but the nature of these concerns can vary quite a bit — with some seeing social media as not doing enough to prevent, for example, misinformation and extremism, and others seeing it as unduly silencing certain viewpoints. This lack of unified view on what the problem is impacts the capacity to enact any change. All of that is additionally coupled with the complexities of the legal framework in the U.S. spanning the First Amendment, Section 230 of the Communications Decency Act, and trade laws.

However, these difficulties in regulating social media do not mean that there is nothing to be done. Indeed, regulators have begun to tighten their control over social media companies, both in the United States and abroad, be it through antitrust procedures or other means. In particular, Ofcom in the U.K. and the European Union are already introducing new layers of oversight to platforms. Additionally, some have proposed taxes on online advertising to address the negative externalities caused by the current social media business model. So, the policy tools are there, if the political will and proper guidance exist to implement them.

Cell Rover: Exploring and augmenting the inner world of the cell

Researchers at the MIT Media Lab have designed a miniature antenna that can operate wirelessly inside of a living cell, opening up possibilities in medical diagnostics and treatment and other scientific processes because of the antenna’s potential for monitoring and even directing cellular activity in real-time.

“The most exciting aspect of this research is we are able to create cyborgs at a cellular scale,” says Deblina Sarkar, assistant professor and AT&T Career Development Chair at the MIT Media Lab and head of the Nano-Cybernetic Biotrek Lab. “We are able to fuse the versatility of information technology at the level of cells, the building blocks of biology.”

A paper describing the research was published today in the journal Nature Communications.

The technology, named Cell Rover by the researchers, represents the first demonstration of an antenna that can operate inside a cell and is compatible with 3D biological systems. Typical bioelectronic interfaces, Sarkar says, are millimeters or even centimeters in size, and are not only highly invasive but also fail to provide the resolution needed to interact with single cells wirelessly — especially considering that changes to even one cell can affect a whole organism.

The antenna developed by Sarkar’s team is much smaller than a cell. In fact, in the team’s research with oocyte cells, the antenna represented less than .05 percent of the cell volume, putting it well below a size that would intrude upon and damage the cell.

Finding a way to build an antenna of that size to work inside a cell was a key challenge.

This is because conventional antennas need to be comparable in size to the wavelength of the electromagnetic waves they transmit and receive. Such wavelengths are very large — they represent the velocity of light divided by the wave frequency. At the same time, increasing the frequency in order to reduce that ratio and the size of the antenna is counterproductive because high frequencies produce heat damaging to living tissue.

The antenna developed by the Media Lab researchers converts electromagnetic waves into acoustic waves, whose wavelengths are five orders of magnitude smaller — representing the velocity of sound divided by the wave frequency — than those of the electromagnetic waves.
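As a back-of-the-envelope illustration of that gap, the wavelength in each case is simply the propagation velocity divided by the frequency. The frequency and sound speed below are assumed for illustration, not taken from the paper:

```python
# Wavelength = propagation velocity / frequency.
# Comparing an electromagnetic wave and an acoustic wave at the same
# (assumed) operating frequency shows the ~5-orders-of-magnitude gap.

C_LIGHT = 3.0e8   # speed of light, m/s
V_SOUND = 3.0e3   # rough speed of sound in a solid, m/s (assumed)
FREQ = 1.0e9      # assumed 1 GHz operating frequency

em_wavelength = C_LIGHT / FREQ        # 0.3 m
acoustic_wavelength = V_SOUND / FREQ  # 3e-6 m, i.e. 3 micrometers

ratio = em_wavelength / acoustic_wavelength
print(f"EM wavelength:       {em_wavelength:.3e} m")
print(f"Acoustic wavelength: {acoustic_wavelength:.3e} m")
print(f"Ratio: {ratio:.0e}")  # 1e5: five orders of magnitude
```

The acoustic wavelength at the same frequency fits comfortably inside a cell, which is what makes a sub-cellular antenna feasible.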

This conversion from electromagnetic to acoustic waves is accomplished by fabricating the miniature antennas using material that is referred to as magnetostrictive. When a magnetic field is applied to the antenna, powering and activating it, magnetic domains within the magnetostrictive material align to the field, creating strain in the material, the way metal bits woven into a piece of cloth could react to a strong magnet, causing the cloth to contort.

When an alternating magnetic field is applied to the antenna, the varying strain and stress (pressure) produced in the material is what creates the acoustic waves in the antenna, says Baju Joy, a student in Sarkar’s lab and the lead author of this work. “We have also developed a novel strategy using a non-uniform magnetic field to introduce the rovers into the cells,” Joy adds.

Configured in this way, the antenna could be used to explore the fundamentals of biology as natural processes occur, Sarkar says. Instead of destroying cells to examine their cytoplasm as is typically done, the Cell Rover could monitor the development or division of a cell, detecting different chemicals and biomolecules such as enzymes, or physical changes such as in cell pressure — all in real-time and in vivo.

Materials such as polymers that undergo change in mass or stress in response to chemical or biomolecular changes — already used in medical and other research — could be integrated with the operation of the Cell Rover, according to the researchers. Such an integration could provide insights not afforded by the current observational techniques that involve destruction of the cell.

With such capabilities, the Cell Rovers could be valuable in cancer and neurodegenerative disease research, for example. As Sarkar explains, the technology could be used to detect and monitor biochemical and electrical changes associated with the disease over its progression in individual cells. Applied in the field of drug discovery, the technology could illuminate the reactions of live cells to different drugs.

Because of the sophistication and scale of nanoelectronic devices such as transistors and switches — “representing five decades of tremendous advancements in the field of information technology,” Sarkar says — the Cell Rover, with its mini antenna, could carry out functions ranging all the way to intracellular computing and information processing for autonomous exploration and modulation of the cell. The research demonstrated that multiple Cell Rovers can be engaged, even within a single cell, to communicate among themselves and outside of the cells.

“The Cell Rover is an innovative concept as it can embed sensing, communication and information technology inside a living cell,” says Anantha P. Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “This opens up unprecedented opportunities for extremely precise diagnostics, therapeutics, and drug discovery, as well as creating a new direction at intersection between biology and electronic devices.”

The researchers named their intracellular antenna technology Cell Rover to invoke, like that of a Mars rover, its mission to explore a new frontier.

“You can think of the Cell Rover,” says Sarkar, “as being on an expedition, exploring the inner world of the cell.”

Ocean scientists measure sediment plume stirred up by deep-sea-mining vehicle

What will be the impact to the ocean if humans are to mine the deep sea? It’s a question that’s gaining urgency as interest in marine minerals has grown.

The ocean’s deep-sea bed is scattered with ancient, potato-sized rocks called “polymetallic nodules” that contain nickel and cobalt — minerals in high demand for manufacturing batteries, such as those that power electric vehicles and store renewable energy, with demand expected to rise further as urbanization and electrification increase. The deep ocean contains vast quantities of mineral-laden nodules, but the impact of mining the ocean floor is both unknown and highly contested.

Now MIT ocean scientists have shed some light on the topic, with a new study on the cloud of sediment that a collector vehicle would stir up as it picks up nodules from the seafloor.

The study, appearing today in Science Advances, reports the results of a 2021 research cruise to a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. There, researchers equipped a pre-prototype collector vehicle with instruments to monitor sediment plume disturbances as the vehicle maneuvered across the seafloor, 4,500 meters below the ocean’s surface. Through a sequence of carefully conceived maneuvers, the MIT scientists used the vehicle to monitor its own sediment cloud and measure its properties.

Their measurements showed that the vehicle created a dense plume of sediment in its wake, which spread under its own weight, in a phenomenon known in fluid dynamics as a “turbidity current.” As it gradually dispersed, the plume remained relatively low, staying within 2 meters of the seafloor, as opposed to immediately lofting higher into the water column as had been postulated.

“It’s quite a different picture of what these plumes look like, compared to some of the conjecture,” says study co-author Thomas Peacock, professor of mechanical engineering at MIT. “Modeling efforts of deep-sea mining plumes will have to account for these processes that we identified, in order to assess their extent.”

The study’s co-authors include lead author Carlos Muñoz-Royo, Raphael Ouillon, and Souha El Mousadik of MIT; and Matthew Alford of the Scripps Institution of Oceanography.

Deep-sea maneuvers

To collect polymetallic nodules, some mining companies are proposing to deploy tractor-sized vehicles to the bottom of the ocean. The vehicles would vacuum up the nodules along with some sediment along their path. The nodules and sediment would then be separated inside of the vehicle, with the nodules sent up through a riser pipe to a surface vessel, while most of the sediment would be discharged immediately behind the vehicle.

Peacock and his group have previously studied the dynamics of the sediment plume that associated surface operation vessels may pump back into the ocean. In their current study, they focused on the opposite end of the operation, to measure the sediment cloud created by the collectors themselves.

In April 2021, the team joined an expedition led by Global Sea Mineral Resources NV (GSR), a Belgian marine engineering contractor that is exploring the CCZ for ways to extract metal-rich nodules. A European-based science team, Mining Impacts 2, also conducted separate studies in parallel. The cruise was the first in over 40 years to test a “pre-prototype” collector vehicle in the CCZ. The machine, called Patania II, stands about 3 meters high, spans 4 meters wide, and is about one-third the size of what a commercial-scale vehicle is expected to be.

While the contractor tested the vehicle’s nodule-collecting performance, the MIT scientists monitored the sediment cloud created in the vehicle’s wake. They did so using two maneuvers that the vehicle was programmed to take: a “selfie,” and a “drive-by.”

Both maneuvers began in the same way, with the vehicle setting out in a straight line, all its suction systems turned on. The researchers let the vehicle drive along for 100 meters, collecting any nodules in its path. Then, in the “selfie” maneuver, they directed the vehicle to turn off its suction systems and double back around to drive through the cloud of sediment it had just created. The vehicle’s installed sensors measured the concentration of sediment during this “selfie” maneuver, allowing the scientists to monitor the cloud within minutes of the vehicle stirring it up.

For the “drive-by” maneuver, the researchers placed a sensor-laden mooring 50 to 100 meters from the vehicle’s planned tracks. As the vehicle drove along collecting nodules, it created a plume that eventually spread past the mooring after an hour or two. This “drive-by” maneuver enabled the team to monitor the sediment cloud over a longer timescale of several hours, capturing the plume evolution.

Out of steam

Over multiple vehicle runs, Peacock and his team were able to measure and track the evolution of the sediment plume created by the deep-sea-mining vehicle.

“We saw that the vehicle would be driving in clear water, seeing the nodules on the seabed,” Peacock says. “And then suddenly there’s this very sharp sediment cloud coming through when the vehicle enters the plume.”

From the selfie views, the team observed a behavior that was predicted by some of their previous modeling studies: The vehicle stirred up a heavy amount of sediment that was dense enough that, even after some mixing with the surrounding water, it generated a plume that behaved almost as a separate fluid, spreading under its own weight in what’s known as a turbidity current.

“The turbidity current spreads under its own weight for some time, tens of minutes, but as it does so, it’s depositing sediment on the seabed and eventually running out of steam,” Peacock says. “After that, the ocean currents get stronger than the natural spreading, and the sediment transitions to being carried by the ocean currents.”

By the time the sediment drifted past the mooring, the researchers estimate that 92 to 98 percent of the sediment either settled back down or remained within 2 meters of the seafloor as a low-lying cloud. There is, however, no guarantee that the sediment always stays there rather than drifting further up in the water column. Recent and future studies by the research team are looking into this question, with the goal of consolidating understanding for deep-sea mining sediment plumes.

“Our study clarifies the reality of what the initial sediment disturbance looks like when you have a certain type of nodule mining operation,” Peacock says. “The big takeaway is that there are complex processes like turbidity currents that take place when you do this kind of collection. So, any effort to model a deep-sea-mining operation’s impact will have to capture these processes.”

“Sediment plumes produced by deep-seabed mining are a major concern with regards to environmental impact, as they will spread over potentially large areas beyond the actual site of mining and affect deep-sea life,” says Henko de Stigter, a marine geologist at the Royal Netherlands Institute for Sea Research, who was not involved in the research. “The current paper provides essential insight in the initial development of these plumes.”

This research was supported, in part, by the National Science Foundation, ARPA-E, the 11th Hour Project, the Benioff Ocean Initiative, and Global Sea Mineral Resources. The funders had no role in any aspects of the research analysis, the research team states.

Empowering Cambridge youth through data activism

For over 40 years, the Mayor’s Summer Youth Employment Program (MSYEP, or the Mayor’s Program) in Cambridge, Massachusetts, has been providing teenagers with their first work experience, but 2022 brought a new offering. Collaborating with MIT’s Personal Robots research group (PRG) and Responsible AI for Social Empowerment and Education (RAISE) this summer, MSYEP created a STEAM-focused learning site at the Institute. Eleven students joined the program to learn coding and programming skills through the lens of “Data Activism.”

MSYEP’s partnership with MIT provides an opportunity for Cambridge high schoolers to gain exposure to more pathways for their future careers and education. The Mayor’s Program aims to respect students’ time and show the value of their work, so participants are compensated with an hourly wage as they learn workforce skills at MSYEP worksites. In conjunction with two ongoing research studies at MIT, PRG and RAISE developed the six-week Data Activism curriculum to equip students with critical-thinking skills so they feel prepared to utilize data science to challenge social injustice and empower their community.

Rohan Kundargi, K-12 Community Outreach Administrator for MIT Office of Government and Community Relations (OGCR), says, “I see this as a model for a new type of partnership between MIT and Cambridge MSYEP. Specifically, an MIT research project that involves students from Cambridge getting paid to learn, research, and develop their own skills!”

Cross-Cambridge collaboration

Cambridge’s Office of Workforce Development initially contacted MIT OGCR about hosting a potential MSYEP worksite that taught Cambridge teens how to code. When Kundargi reached out to MIT pK-12 collaborators, MIT PRG’s graduate research assistant Raechel Walker proposed the Data Activism curriculum. Walker defines “data activism” as utilizing data, computing, and art to analyze how power operates in the world, challenge power, and empathize with people who are oppressed.

Walker says, “I wanted students to feel empowered to incorporate their own expertise, talents, and interests into every activity. In order for students to fully embrace their academic abilities, they must remain comfortable with bringing their full selves into data activism.”

As Kundargi and Walker recruited students for the Data Activism learning site, they wanted to make sure the cohort of students — the majority of whom are individuals of color — felt represented at MIT and felt they had the agency for their voice to be heard. “The pioneers in this field are people who look like them,” Walker says, speaking of well-known data activists Timnit Gebru, Rediet Abebe, and Joy Buolamwini.

When the program began this summer, some of the students were not aware of the ways data science and artificial intelligence exacerbate systemic oppression in society, or some of the tools currently being used to mitigate those societal harms. As a result, Walker says, the students wanted to learn more about discriminatory design in every aspect of life. They were also interested in creating responsible machine learning algorithms and AI fairness metrics.

A different side of STEAM

The development and execution of the Data Activism curriculum contributed to Walker’s and postdoc Xiaoxue Du’s respective research at PRG. Walker is studying AI education, specifically creating and teaching data activism curricula for minoritized communities. Du’s research explores processes, assessments, and curriculum design that prepares educators to use, adapt, and integrate AI literacy curricula. Additionally, her research targets how to leverage more opportunities for students with diverse learning needs.

The Data Activism curriculum utilizes a “liberatory computing” framework, a term Walker coined in her position paper with Professor Cynthia Breazeal, director of MIT RAISE, dean for digital learning, and head of PRG, and Eman Sherif, a then-undergraduate researcher from the University of California at San Diego, titled “Liberatory Computing for African American Students.” This framework ensures that students, especially minoritized students, acquire a sound racial identity, critical consciousness, collective obligation, and a liberation-centered academic/achievement identity, as well as the activism skills to use computing to transform a multi-layered system of barriers in which racism persists. Walker says, “We encouraged students to demonstrate competency in every pillar because all of the pillars are interconnected and build upon each other.”

Walker developed a series of interactive coding and project-based activities that focused on understanding systemic racism, utilizing data science to analyze systemic oppression, data drawing, responsible machine learning, how racism can be embedded into AI, and different AI fairness metrics.

This was the students’ first time learning how to create data visualizations using the programming language Python and the data analysis tool Pandas. In one project meant to examine how different systems of oppression can affect different aspects of students’ own identities, students created datasets with data from their respective intersectional identities. Another activity highlighted African American achievements, where students analyzed two datasets about African American scientists, activists, artists, scholars, and athletes. Using the data visualizations, students then created zines about the African Americans who inspired them.
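A minimal sketch of the kind of exercise described, using pandas; the dataset here is illustrative only, not the students’ actual data:

```python
import pandas as pd

# Illustrative data only -- not the students' actual datasets.
scientists = pd.DataFrame({
    "name": ["Katherine Johnson", "Mae Jemison", "Mark Dean", "Marie Daly"],
    "field": ["Mathematics", "Medicine", "Computing", "Chemistry"],
    "birth_year": [1918, 1956, 1957, 1921],
})

# A simple summary of the dataset -- the kind of tally a chart
# or zine page might start from.
counts = scientists["field"].value_counts()
print(counts)
```

From a DataFrame like this, a single call such as `scientists.plot.barh(x="name", y="birth_year")` produces a first visualization, which is roughly the on-ramp the curriculum describes.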

RAISE hired Olivia Dias, Sophia Brady, Lina Henriquez, and Zeynep Yalcin through the MIT Undergraduate Research Opportunity Program (UROP) and PRG hired freelancer Matt Taylor to work with Walker on developing the curriculum and designing interdisciplinary experience projects. Walker and the four undergraduate researchers constructed an intersectional data analysis activity about different examples of systemic oppression. PRG also hired three high school students to test activities and offer insights about making the curriculum engaging for program participants. Throughout the program, the Data Activism team taught students in small groups, continually asked students how to improve each activity, and structured each lesson based on the students’ interests. Walker says Dias, Brady, Henriquez, and Yalcin were invaluable to cultivating a supportive classroom environment and helping students complete their projects.

Student Nina says, “It’s opened my eyes to a different side of STEM. I didn’t know what ‘data’ meant before this program, or how intersectionality can affect AI and data.” Before MSYEP, Nina took Intro to Computer Science and AP Computer Science, but she has been coding since Girls Who Code first sparked her interest in middle school. “The community was really nice. I could talk with other girls. I saw there needs to be more women in STEM, especially in coding.” Now she’s interested in applying to colleges with strong computer science programs so she can pursue a coding-related career.

From MSYEP to the mayor’s office

Mayor Sumbul Siddiqui visited the Data Activism learning site on Aug. 9, accompanied by Breazeal. A graduate of MSYEP herself, Siddiqui says, “Through hands-on learning through computer programming, Cambridge Rindge and Latin School students have the unique opportunity to see themselves as data scientists. Students were able to learn ways to combat discrimination that occurs through artificial intelligence.” In an Instagram post, Siddiqui also said, “I had a blast visiting the students and learning about their projects.”

Students worked on an activity that asked them to envision how data science might be used to support marginalized communities. They transformed their answers into block-printed T-shirt designs, carving pictures of their hopes into rubber block stamps. Some students focused on the importance of data privacy, like Jacob T., who drew a birdcage to represent data stored and locked away by third party apps. He says, “I want to open that cage and restore my data to myself and see what can be done with it.”

Many students wanted to see more representation in both the media they consume and across various professional fields. Nina talked about the importance of representation in media and how that could contribute to greater representation in the tech industry, while Kiki talked about encouraging more women to pursue STEM fields. Jesmin said, “I wanted to show that data science is accessible to everyone, no matter their origin or language you speak. I wrote ‘hello’ in Bangla, Arabic, and English, because I speak all three languages and they all resonate with me.”

“Overall, I hope the students continue to use their data activism skills to re-envision a society that supports marginalized groups,” says Walker. “Moreover, I hope they are empowered to become data scientists and understand how their race can be a positive part of their identity.”

The power of weak ties in gaining new employment

If you have a LinkedIn account, your connections probably consist of a core group of people you know well, and a larger set of people you know less well. The latter are what experts call “weak ties.” Now a unique, large-scale experiment co-directed by an MIT scholar shows that on LinkedIn, those weak ties are more likely to land you new employment, compared to your ties with people you know better.

“When we look at the experimental data, weak ties are better, on average, for job mobility than strong ties,” says Sinan Aral, a management professor at MIT and co-author of a new paper detailing the results of the study, which involved millions of LinkedIn users.

The experiment upholds the idea, first posited nearly 50 years ago, that weak ties have a value strong ties do not. The people you know best may have social networks that closely resemble your own and thus may not add much new job-seeking value for you. Your more casual acquaintances, on the other hand, have social networks that overlap less with yours and may provide connections or information you would not otherwise be able to access.

In recent years, however, some scholars have suggested there is a “paradox of weak ties,” in which strong ties actually are more useful in the job market. But the new experiment provides evidence to the contrary; weak ties are indeed more useful, a finding that particularly applies to more digitally oriented industries.

“Our experiment provides evidence in the opposite direction from the ‘paradox of weak ties,’” Aral says.

The paper, “A Causal Test of the Strength of Weak Ties,” appears today in Science. The authors are Karthik Rajkumar, a computational social scientist at LinkedIn; Guillaume Saint-Jacques PhD ’18, a senior manager at Apple who previously worked as a research scientist and manager at LinkedIn; Iavor Bojinov, an assistant professor at Harvard Business School and a former data scientist at LinkedIn; Erik Brynjolfsson, the Jerry Yang and Akiko Yamazaki Professor and Senior Fellow at the Stanford Institute for Human-Centered AI, and director of the Stanford Digital Economy Lab; and Aral, the David Austin Professor of Management at the MIT Sloan School of Management and director of the MIT Initiative on the Digital Economy.

A novel test of weak ties

The notion that there is something especially useful about the more tenuous connections in your social network dates to a highly influential 1973 paper by Stanford sociologist Mark Granovetter, “The Strength of Weak Ties,” from The American Journal of Sociology. In it, Granovetter identified weak ties as a key source of “diffusion of influence and information, mobility opportunity, and community organization.”

Granovetter’s ideas have spread widely in academia and Silicon Valley, especially with the growth of online social networks, but have been tough to test. For instance, regarding job-hunting, it can be difficult to untangle the impact of someone’s social network from their networking skills. As the scholars also note in the paper, it is also challenging to find solid data sets linking social networks and job searches in the first place.

The current study gets traction on the issue in a unique way, as a five-year experiment involving LinkedIn’s “People You May Know” (PYMK) algorithm, which suggests new connections to site users. To conduct the experiment, from 2015 through 2019, LinkedIn adjusted the PYMK algorithm, so that some site users saw a higher concentration of PYMK suggestions to whom they had strong ties, and others received more PYMK suggestions to people with whom they had weak ties.

The scholars also defined tie strength in two ways: by interaction intensity, based on the number of messaging interactions people had, and in structural terms, based on the number of mutual friends two users had in common.
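In code, the two measures might be sketched like this. The data structures and names are assumptions for illustration; the study’s actual pipeline operated on LinkedIn’s internal data at a vastly larger scale:

```python
# Sketch of the study's two tie-strength notions (names assumed):
# structural strength = number of mutual connections two users share;
# interaction intensity = number of messages they have exchanged.

connections = {
    "alice": {"bob", "carol", "dan", "erin"},
    "bob":   {"alice", "carol", "frank"},
}

messages = {("alice", "bob"): 42, ("alice", "carol"): 3}

def structural_strength(u, v):
    """Mutual connections between users u and v."""
    return len(connections[u] & connections[v])

def interaction_intensity(u, v):
    """Messages exchanged between u and v, in either direction."""
    return messages.get((u, v), 0) + messages.get((v, u), 0)

print(structural_strength("alice", "bob"))    # 1 (carol is the mutual tie)
print(interaction_intensity("alice", "bob"))  # 42
```

The point of carrying both measures is that, as the results below show, they behave differently: structural strength turns out to have a non-linear relationship with job transmission, while interaction intensity is closer to linear.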

All told, the experiment involved around 20 million LinkedIn users, who over the five years ended up creating about 2 billion new connections on the site, recorded over 70 million job applications, and wound up accepting 600,000 new jobs identified through the site.

“This is by far the largest longitudinal, randomized controlled experiment on the strength of weak ties ever conducted,” Aral says. “I don’t think there’s any real debate about that.”

And from that mass of data, a clear pattern emerged. As the researchers write in the paper, “the stronger the newly added ties were, the less likely they were to lead to a job transmission.”

The upside-down U shape

The fact that weak ties led to more job opportunities, overall, is just one of multiple related findings from the study. The scholars also found that the connection between structural tie strength and job transmission does not exist in a simple inverse form.

“The strength of weak ties is not linear,” Aral says. Charting the relationship between structural tie strength — the number of connections you have in common with someone — and usefulness, the scholars found those two properties have an upside-down U-shape form. The connecting bar between the two sides of the “U” is where moderately weak ties are, representing the highest-yield connections that people have on LinkedIn.

“Moderately weak ties are the best,” Aral says. “Not the weakest, but slightly stronger than the weakest.” The inflection point is around 10 mutual connections between people; if you share more than that with someone on LinkedIn, the usefulness of your connection to the other person, in job-hunting terms, diminishes.

However, when it comes to interaction intensity — how often you are in communication with someone — the results look a bit more linear. In this case, what emerges is closer to the idea that the weakest ties produce the most results, and the strongest ties produce the least job transmission.

“These two measures behave differently,” Aral says. “It’s important to think about weak ties in this multifaceted way, with interaction intensity and structural bridging.”

Finally, the usefulness of weak ties varies by industry on LinkedIn. The power of weak ties on the site is especially strong in high-tech industries.

“Weak ties are better in more digital industries,” Aral says, defining those as fields that are “more suitable for machine learning, artificial intelligence, more software intensive, more suitable for remote work, and so on. In those industries, weak ties are even more important. In analog industries, stronger ties can be more important.”

This could be due to what Aral has in previous research called the “refresh rate” of digital industries, in which they keep evolving quickly, making it more important to have a wide range of connections — especially weak ties — in those fields. Still, Aral notes, “We encourage more research, because we need to know more about why this variation seems to exist across industries.”

Developed at MIT

The genesis of the study goes back several years, when Saint-Jacques was pursuing his PhD research at MIT Sloan, advised by Brynjolfsson (then at MIT) and Aral. The group developed the idea of the research project, and after Saint-Jacques joined LinkedIn, had the opportunity to engineer the large-scale experiment.

The sheer size of the study, Aral notes, made it possible for the researchers to draw their multiple conclusions with confidence.

“The scale of the experiment is necessary for that, because you need a lot of statistical power to examine the question with such granularity,” Aral observes.

Other scholars say the study is a significant addition to the literature on social and professional networks and weak ties.

For his part, Aral says he regards the current study as part of a larger effort, involving both himself and other members of MIT’s Initiative on the Digital Economy, to grasp the real-world impacts of digital social platforms.

“The main thrust of conversation around those platforms in the world today has been about how they affect society, like teen mental health, democracies and our elections and the spread of fake news, and whether misinformation affects the global pandemic and vaccine hesitancy,” Aral says.

In this case, Aral adds, “What this study really highlights is, we have to add to that: How are these platforms affecting the global economy? … This shows that one of the algorithms on LinkedIn can affect employment patterns, and it’s the largest professional network on the planet. We need to add that to the discussion about the impact of digital social networking on the world.”

MIT accelerates efforts on path to carbon reduction goals

Under its “Fast Forward” climate action plan, which was announced in May 2021, MIT has set a goal of eliminating direct emissions from its campus by 2050. An important near-term milestone will be achieving net-zero emissions by 2026. Many other colleges and universities have set similar targets. What does it take to achieve such a dramatic reduction?

Since 2014, when MIT launched a five-year plan for action on climate change, net campus emissions have been cut by 20 percent. To meet the 2026 target, and ultimately achieve zero direct emissions by 2050, the Institute is making its campus buildings dramatically more energy efficient, transitioning to electric vehicles (EVs), and enabling large-scale renewable energy projects, among other strategies.

“This is an ‘all-in’ moment for MIT, and we’re taking comprehensive steps to address our carbon footprint,” says Glen Shor, executive vice president and treasurer. “Reducing our emissions to zero will be challenging, but it’s the right aspiration.”

“As an energy-intensive campus in an urban setting, our ability to achieve this goal will, in part, depend on the capacity of the local power grid to support the electrification of buildings and transportation, and how ‘green’ that grid electricity will become over time,” says Joe Higgins, MIT’s vice president for campus services and stewardship. “It will also require breakthrough technology improvements and new public policies to drive their adoption. Many of those tech breakthroughs are being developed by our own faculty, and our teams are planning scenarios in anticipation of their arrival.”

Working toward an energy-efficient campus

The on-campus reductions have come primarily from a major upgrade to MIT’s Central Utilities Plant, which provides electricity, heating, and cooling for about 80 percent of all Institute buildings. The upgraded plant, which uses advanced cogeneration technology, became fully operational at the end of 2021 and is meeting campus energy needs at greater efficiency and lower carbon intensity (on average 15 to 25 percent cleaner) compared to the regional electricity grid. Carbon reductions from the increased efficiency provided by the enhanced plant are projected to counter the added greenhouse gas emissions caused by recently completed and planned construction and operation of new buildings on campus, especially energy-intensive laboratory buildings.

Energy from the plant is delivered to campus buildings through MIT’s district energy system, a network of underground pipes and power lines providing electricity, heating, and air conditioning. With this adaptable system, MIT can introduce new technologies as they become available to increase the system’s energy efficiency. The system enables MIT to export power when the regional grid is under stress and to import electricity from the power grid as it becomes cleaner, likely over the next decade as the availability of offshore wind and renewable resources increases. “At the same time, we are reviewing additional technology options such as industrial-scale heat pumps, thermal batteries, geothermal exchange, microreactors, bio-based fuels, and green hydrogen produced from renewable energy,” Higgins says.

Along with upgrades to the plant, MIT is gradually converting existing steam-based heating systems into more efficient hot-water systems. This long-term project to lower campus emissions requires replacing the vast network of existing steam pipes and infrastructure, and will be phased in as systems need to be replaced. Currently MIT has four buildings that are on a hot-water system, with five more buildings transitioning to hot water by the fall of 2022.  

Minimizing emissions by implementing meaningful building efficiency standards has been an ongoing strategy in MIT’s climate mitigation efforts. In 2016, MIT made a commitment that all new campus construction and major renovation projects must earn at least Leadership in Energy and Environmental Design (LEED) Gold certification. To date, 24 spaces and buildings at MIT have earned a LEED designation; LEED is a performance-based rating system that evaluates a building’s environmental attributes associated with its design, construction, operations, and management.

Current efficiency efforts focus on reducing energy in the 20 buildings that account for more than 50 percent of MIT’s energy usage. One such project under construction aims to improve energy efficiency in Building 46, which houses the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory and is the biggest energy user on the campus because of its large size and high concentration of lab spaces. Interventions include optimizing ventilation systems that will significantly reduce energy use while improving occupant comfort, and working with labs to implement programs such as fume hood hibernation and equipment adjustments. For example, raising ultralow freezer set points by 10 degrees can reduce their energy consumption by as much as 40 percent. Together, these measures are projected to yield a 35 percent reduction in emissions for Building 46, which would contribute to reducing campus-level emissions by 2 percent.
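As a rough consistency check of the Building 46 figures above (assuming the 2 percent campus-level reduction comes entirely from the building's own 35 percent cut, an inference not stated in the source), the implied share of campus emissions attributable to the building can be computed directly:

```python
# Back-of-the-envelope check of the Building 46 figures quoted above.
# Assumption: the 2 percent campus-level reduction comes entirely from
# the 35 percent reduction in Building 46's own emissions.

building_cut = 0.35   # projected emissions reduction within Building 46
campus_cut = 0.02     # resulting reduction in campus-level emissions

# Implied share of total campus emissions attributable to Building 46
building_share = campus_cut / building_cut
print(f"Implied Building 46 share of campus emissions: {building_share:.1%}")
# → Implied Building 46 share of campus emissions: 5.7%
```

That order of magnitude is plausible for a single large, lab-heavy building described as the biggest energy user on campus.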

Over the past decade, in addition to whole building intervention programs, the campus has taken targeted measures in over 100 campus buildings to add building insulation, replace old, inefficient windows, transition to energy-efficient lighting and mechanical systems, optimize lab ventilation systems, and install solar panels on solar-ready rooftops on campus — and will increase the capacity of renewable energy installations on campus by a minimum of 400 percent by 2026. These smaller scale contributions to overall emissions reductions are essential steps in a comprehensive campus effort.

Electrification of buildings and vehicles

With an eye to designing for “the next energy era,” says Higgins, MIT is looking to large-scale electrification of its buildings and district energy systems to reduce building use-associated emissions. Currently under renovation, the Metropolitan Storage Warehouse — which will house the MIT School of Architecture and Planning (SA+P) and the newly established MIT Morningside Academy for Design — will be the first building on campus to undergo this transformation by using electric heat pumps as its main heating and supplemental cooling source. The project team, consisting of campus engineering and construction teams as well as the designers, is working with SA+P faculty to design this innovative electrification project. The system will move excess heat from the district energy infrastructure and nearby facilities to supply the heat pumps, an approach that uses less energy and produces fewer carbon emissions.

Next to building energy use, emissions from on-campus vehicles are a key target for reduction; one of the goals in the “Fast Forward” plan is the electrification of on-campus vehicles. This includes the expansion of electric vehicle charging stations, and work has begun on the promised 200 percent expansion of the number of stations on campus, from 120 to 360. Sites are being evaluated to make sure that all members of the MIT community have easy access to these facilities.

The electrification effort also includes working toward replacing existing MIT-owned vehicles, from shuttle buses and vans to pickup trucks and passenger cars, as well as grounds maintenance equipment. Shu Yang Zhang, a junior in the Department of Materials Science and Engineering, is part of an Office of Sustainability student research team that carried out an evaluation of the options available for each type of vehicle and compared both their lifecycle costs and emissions.

Zhang says the team examined “the specifics of the vehicles that we own, looking at key measures such as fuel economy and cargo capacity,” and determined what alternatives exist in each category. The team carried out a study of the costs for replacing existing vehicles with EVs on the market now, versus buying new gas vehicles or leaving the existing ones in place. They produced a set of specific recommendations about fleet vehicle replacement and charging infrastructure installation on campus that supports both commuters and an MIT EV fleet in the future. According to their estimates, Zhang says, “the costs should be not drastically different” in the long run for the new electric vehicles.

Strength in numbers

While a panoply of measures has contributed to the successful offsetting of emissions so far, the biggest single contributor was MIT’s creation of an innovative, collaborative power purchase agreement (PPA) that enabled the construction of a large solar farm in North Carolina, which in turn contributed to the early retirement of a large coal-fired power plant in that region. MIT is committed to buying 73 percent of the power generated by the new facility, which is equivalent to approximately 40 percent of the Institute’s electricity use.
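The two percentages quoted above also imply how the solar farm's total output compares with MIT's consumption. A back-of-the-envelope calculation (an inference from the quoted figures, not a number stated in the source):

```python
# Implication of the PPA figures quoted above: if MIT's 73 percent share of
# the solar farm's output covers about 40 percent of MIT's electricity use,
# the farm's full output corresponds to roughly 55 percent of MIT's usage.

mit_share_of_farm = 0.73   # fraction of the farm's output MIT buys
share_of_mit_use = 0.40    # fraction of MIT's electricity use that covers

farm_output_vs_mit_use = share_of_mit_use / mit_share_of_farm
print(f"Farm output ~ {farm_output_vs_mit_use:.0%} of MIT's electricity use")
# → Farm output ~ 55% of MIT's electricity use
```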

That PPA, which was a collaboration between three institutions, provided a template that has already been emulated by other institutions, in many cases enabling smaller organizations to take part in such a plan and achieve greater offsets of their carbon emissions than might have been possible acting on their own. Now, MIT is actively pursuing new, larger variations on that plan, which may include a wider variety of organizational participants, perhaps including local governments as well as institutions and nonprofits. The hope is that, as was the case with the original PPA, such collaborations could provide a model that other institutions and organizations may adopt as well.

Strategic portfolio agreements like the PPA will help achieve net zero emissions on campus while accelerating the decarbonization of regional electricity grids — a transformation critical to achieving net zero emissions, alongside all the work that continues to reduce the direct emissions from the campus itself.

“PPAs play an important role in MIT’s net zero strategy and have an immediate and significant impact in decarbonization of regional power grids by enabling renewable energy projects,” says Paul L. Joskow, the Elizabeth and James Killian Professor of Economics. “Many well-known U.S. companies and organizations that are seeking to enable and purchase CO2-free electricity have turned to long-term PPAs selected through a competitive procurement process to help to meet their voluntary internal decarbonization commitments. While there are still challenges regarding organizational procurements — including proper carbon emissions mitigation accounting, optimal contract design, and efficient integration into wholesale electricity markets — we are optimistic that MIT’s efforts and partnerships will contribute to resolving some of these issues.”

Addressing indirect sources of emissions

MIT’s examination of emissions is not limited to the campus itself; it also covers the indirect sources associated with the Institute’s operations, research, and education. Of these indirect emissions, the three largest are business travel, purchased goods and services, and construction of buildings, which together exceed the total direct emissions from campus.

The strategic sourcing team in the Office of the Vice President for Finance has been working to develop opportunities and guidelines for making it easier to purchase sustainable products, for everything from office paper to electronics to lab equipment. Jeremy Gregory, executive director of MIT’s Climate and Sustainability Consortium, notes that MIT’s characteristic independent spirit resists placing limits on what products researchers can buy, but, he says, “we have opportunities to centralize some of our efforts and empower our community to choose low-impact alternatives when making procurement decisions.”

The path forward

The process of identifying and implementing MIT’s carbon reductions will be supported, in part, by the Carbon Footprint Working Group, which was launched by the Climate Nucleus, a new body MIT created to manage the implementation of the “Fast Forward” climate plan. The nucleus includes a broad representation from MIT’s departments, labs, and centers that are working on climate change issues. “We’ve created this internal structure in an effort to integrate operational expertise with faculty and student research innovations,” says Director of Sustainability Julie Newman.

Whatever measures end up being adopted to reduce energy use and associated emissions, their results will be made available to members of the MIT community in real time through a campus data gateway, Newman says — a degree of transparency that is exceptional in higher education. “If you’re interested in supporting all these efforts and following this,” she says, “you can track the progress via Energize MIT,” a set of online visualizations that display various measures of MIT’s energy usage and greenhouse gas emissions over time.

3Q: How MIT is working to reduce carbon emissions on our campus

Fast Forward: MIT’s Climate Action Plan for the Decade, launched in May 2021, charges MIT to eliminate its direct carbon emissions by 2050. Setting an interim goal of net zero emissions by 2026 is an important step to getting there. Joe Higgins, vice president for campus services and stewardship, speaks here about the coordinated, multi-team effort underway to address the Institute’s carbon-reduction goals, the challenges and opportunities in getting there, and creating a blueprint for a carbon-free campus in 2050.

Q: The Fast Forward plan laid out specific goals for MIT to address its own carbon footprint. What has been the strategy to tackle these priorities?

A: The launch of the Fast Forward Climate Action Plan empowered teams at MIT to expand the scope of our carbon reduction tasks beyond the work we’ve been doing to date. The on-campus activities called for in the plan range from substantially expanding our electric vehicle infrastructure on campus, to increasing our rooftop solar installations, to setting impact goals for food, water, and waste systems. Another strategy utilizes artificial intelligence to further reduce energy consumption and emissions from our buildings. When fully implemented, these systems will adjust a building’s temperature setpoints throughout the day while maintaining occupant comfort, and will use occupancy data, weather forecasts, and carbon intensity projections from the grid to make more efficient use of energy. 
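As a toy illustration of the kind of logic such a system might apply (this sketch is invented for illustration and is not MIT's actual control system; the function name, thresholds, and setpoints are all hypothetical):

```python
# Minimal illustrative sketch of setpoint adjustment driven by occupancy
# and a grid carbon-intensity forecast. All numbers are invented.

def choose_setpoints(occupied: bool, carbon_intensity: float,
                     comfort=(70.0, 74.0), relaxed=(66.0, 78.0),
                     dirty_grid=400.0):
    """Return (heating_setpoint_F, cooling_setpoint_F).

    occupied         -- whether occupancy sensors report people present
    carbon_intensity -- forecast grid intensity in gCO2/kWh
    """
    if not occupied:
        return relaxed  # let the space drift when empty
    if carbon_intensity > dirty_grid:
        # Grid is carbon-heavy: trade a little comfort for lower emissions.
        return (comfort[0] - 1.0, comfort[1] + 1.0)
    return comfort

print(choose_setpoints(occupied=False, carbon_intensity=250.0))  # (66.0, 78.0)
print(choose_setpoints(occupied=True, carbon_intensity=500.0))   # (69.0, 75.0)
```

A production system would replace these fixed rules with learned models of occupancy, weather, and comfort, but the decision structure is the same: widen setpoints when the cost of comfort, in energy or carbon, is highest.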

We have tremendous momentum right now thanks to the progress made over the past decade by our teams — which include planners, designers, engineers, construction managers, and sustainability and operations experts. Since 2014, our efforts to advance energy efficiency and incorporate renewable energy have reduced net emissions on campus by 20% (from a 2014 baseline) despite significant campus growth. One of our current goals is to further reduce energy use in high-intensity research buildings — 20 of our campus buildings consume more than 50% of our energy. To reduce energy usage in these buildings we have major energy retrofit projects in design or in planning for buildings 32, 46, 68, 76, E14, and E25, and we expect this work will reduce overall MIT emissions by an additional 10 to 15%.

Q: The Fast Forward plan acknowledges the challenges we face in our efforts to reach our campus emission reduction goals, in part due to the current state of New England’s electrical grid. How does MIT’s district energy system factor into our approach? 

A: MIT’s district energy system is a network of underground pipes and power lines that moves energy from the Central Utilities Plant (CUP) around to the vast majority of Institute buildings to provide electricity, heating, and air conditioning. Using a closed-loop, central-source system like this enables MIT to operate more efficiently by using less energy to heat and cool its buildings and labs, and by maintaining better load control to accommodate seasonal variations in peak demand.

When the new MIT campus was built in Cambridge in 1916, it included a centralized state-of-the-art steam and electrical power plant that would service the campus buildings. This central district energy approach allowed MIT to avoid having individual furnaces in each building and to easily incorporate progressively cleaner fuel sources campus-wide over the years. After starting with coal as a primary energy source, MIT transitioned to fuel oil, then to natural gas, and then to cogeneration in 1995 — and each step has made the campus more energy efficient. Our continuous investment in a centralized infrastructure has facilitated our ability to improve energy efficiency while adding capacity; as new technologies become available, we can implement them across the entire campus. Our district energy system is very adaptable to seasonal variations in demand for cooling, heating and electricity, and builds upon decades of centralized investments in energy-efficient infrastructure.

This past year, MIT completed a major upgrade of the district energy system whereby the majority of buildings on campus now benefit from the most advanced cogeneration technology for combined heating, cooling, and power delivery. This system generates electrical power that produces 15 to 25% less carbon than the current New England grid. We also have the ability to export power during times when the grid is most stressed, which contributes to the resiliency of local energy systems. On the flip side, any time the grid is a cleaner option, MIT is able to import a higher amount of electricity from the utility by distributing this energy through our centralized system. In fact, it’s important to note that we have the ability to import 100% of our electrical energy from the grid as it becomes cleaner. We anticipate that this will happen as the next major wave of technology innovation unfolds and the abundance of offshore wind and other renewable resources increases as anticipated by the end of this decade. As the grid gets greener, our adaptable district energy system will bring us closer to meeting our decarbonization goals.

MIT’s ability to adapt its system and use new technologies is crucial right now as we work in collaboration with faculty, students, industry experts, peer institutions, and the cities of Cambridge and Boston to evaluate various strategies, opportunities, and constraints. In terms of evolving into a next-generation district energy system, we are reviewing options such as electric steam boilers and industrial-scale heat pumps, thermal batteries, geothermal exchange, microreactors, bio-based fuels, and green hydrogen produced from renewable energy. We are preparing to incorporate the most beneficial technologies into a blueprint that will get us to our 2050 goal.

Q: What is MIT doing in the near term to reach the carbon-reduction goals of the climate action plan?

A: In the near term, we are exploring several options, including enabling large-scale renewable energy projects and investing in verified carbon offset projects that reduce, avoid, or sequester carbon. In 2016, MIT joined a power purchase agreement (PPA) partnership that enabled the construction of a 650-acre solar farm in North Carolina and resulted in the early retirement of a nearby coal plant. We’ve documented a huge emissions savings from this, and we’re exploring how to do something similar on a much larger scale with a broader group of partners. As we seek out collaborative opportunities that enable the development of new renewable energy sources, we hope to provide a model for other institutions and organizations, as the original PPA did. Because PPAs accelerate the decarbonization of regional electricity grids, they can have an enormous and far-reaching impact. We see these partnerships as an important component of achieving net zero emissions on campus as well as accelerating the decarbonization of regional power grids — a transformation that must take place to reach zero emissions by 2050.

Other near-term initiatives include enabling community solar power projects in Massachusetts to support the state’s renewable energy goals and provide opportunities for more property owners (municipalities, businesses, homeowners, etc.) to purchase affordable renewable energy. MIT is engaged with three of these projects; one of them is in operation today in Middleton, and the two others are scheduled to be built soon on Cape Cod.

We’re joining the commonwealth and its cities, its organizations and utility providers on an unprecedented journey — the global transition to a clean energy system. Along the way, everything is going to change as technologies and the grid continue to evolve. Our focus is on both the near term and the future, as we plan a path into the next energy era.

Analysis of email traffic suggests remote work may stifle innovation

The debate over what is lost when remote work replaces an in-person workplace just got an infusion of much-needed data. According to a study conducted at MIT, when workers go remote, the types of work relationships that encourage innovation tend to be hard hit.

Two and a half years after Covid-19 shut down offices and research labs around the world, “we can finally use data to address a critical question: How did the pandemic-induced adoption of remote working affect our creativity and innovation on the job?” says Carlo Ratti, professor of the practice of urban technology and planning and director of MIT’s Senseable City Lab. “Until now, we could only guess. Today we can finally start to put real data behind those hypotheses.”

The MIT researchers, with colleagues at Texas A&M University, the Italian National Research Council, the Technical University of Denmark, and Oxford University, analyzed aspects of a de-identified email network comprising 2,834 MIT research staff, faculty, and postdoctoral researchers, for 18 months starting in December 2019. All of the emails were anonymized and examined to analyze the network structure of their origins and destinations, not their content.

In late March 2020, the Covid pandemic abruptly ended much of the on-site research on MIT’s campus. With the shift to remote work, the new study shows, email communications between different research units fell off, leading to a decrease in what researchers call the “weak ties” that undergird the exchange of new ideas and tend to foster innovation.

A weak tie was defined as a connection between two people who had no mutual contact in the email network. In other words, two people, A and B, formed a weak tie if there was no third person C that both of them also contacted. “Strong ties,” on the other hand, which are the type of communication that tends to expose us to the same ideas repeatedly, increased. Over the course of the lockdown, the researchers found that “ego networks,” referring to an individual’s unique web of connections, became more stagnant, with contacts becoming more similar each week.
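The weak-tie definition lends itself to a few lines of code. A minimal sketch, using an invented toy contact graph rather than the study's actual data:

```python
# Toy illustration of the study's weak-tie definition: A and B form a weak
# tie if they contact each other but share no third mutual contact C.
# The email graph below is invented for illustration.

contacts = {
    "A": {"B", "C"},
    "B": {"A", "C"},        # A, B, C form a triangle
    "C": {"A", "B", "D"},
    "D": {"C"},             # D's only contact is C
}

def is_weak_tie(graph, a, b):
    """A tie a-b is 'weak' when a and b have no mutual contact."""
    if b not in graph[a]:
        return False        # not a tie at all
    mutual = (graph[a] - {b}) & (graph[b] - {a})
    return len(mutual) == 0

print(is_weak_tie(contacts, "A", "B"))  # False: A and B share contact C
print(is_weak_tie(contacts, "C", "D"))  # True: C and D have no mutual contact
```

Under this definition, ties inside a tight cluster (the A-B-C triangle) are strong, while connections that bridge out of a cluster (C-D) are weak, which is exactly why their loss matters for the spread of new ideas.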

The study was published in the August 22 issue of Nature Computational Science.

The researchers hypothesized that physical proximity should play a role in the development of weak ties. As such, weak ties between researchers in physically distant labs, who would be unlikely to encounter each other by chance even when working on campus, should not have dropped significantly when workers went remote. The data turned out to support that hypothesis, the team reports.

“Our research shows that co-location is a crucial factor to foster weak ties,” says Paolo Santi, researcher at MIT’s Senseable City Lab and at the Italian National Research Council. “Our data showed that weak ties evaporated at MIT starting on March 23, 2020, with a 38 percent drop,” he says. Over the next 18 months, the drop translated into an estimated cumulative loss of more than 5,100 new weak ties.

The idea that “weak ties” are conducive to innovation dates back to research published in 1973 by sociologist Mark Granovetter, who wrote that “an initially unpopular innovation spread by those with few weak ties is more likely to be confined to a few cliques. … Individuals with many weak ties are, by my arguments, best placed to diffuse such a difficult innovation.” Granovetter’s research was “just the beginning of a vast literature in sociology, which has subsequently confirmed and substantiated his ideas,” Ratti says.

In an accompanying commentary article in Nature Computational Science, John Meluso of the University of Vermont calls the “weak ties” idea “one of the oldest theories in social networks,” while pointing out that what generates and maintains the ties has remained vague. He notes that the new study’s computational techniques shed light on the causal mechanism of “propinquity,” the idea that proximity increases the odds of creating new connections and strengthening existing ones.

The researchers investigated not only the abrupt decline in weak ties when the MIT campus was shut down, but also the transition when researchers began returning to campus on July 15, 2021. A partial reinstatement of weak ties occurred after that point, the team found. With these findings, the researchers created a model that predicted that a complete return to the workplace would result in a “complete recovery of weak ties.”

Ratti and his colleagues suggest that as companies and organizations refine their post-lockdown remote work policies, they should try to find ways to encourage serendipitous interactions across departments and research units to foster the spread of new and diverse information. As Ratti explains, those interactions expose those involved to “a diverse set of people and ideas.” At the same time, the study authors acknowledged that remote or hybrid work offers advantages to individuals, especially in terms of flexibility.

“Employers would make a mistake in discarding the newfound flexibility of the Covid years,” Ratti says. “Our study hints at the fact that establishing a work balance trade-off by combining in-person and remote interactions among colleagues seems to be the optimal solution, which could inform the transition to a hybrid, post-Covid-19 ‘new normal.’”

Achieving that balance could involve modeling the minimum amount of in-person work needed to keep weak ties activated. It could also involve transforming traditional office floor plans designed for individual tasks into “more open, dynamic spaces that encourage the so-called cafeteria effect,” in which people from diverse groups sit and converse together, or event-based spaces for different communities to converge, Ratti says.

Tech in translation

The Sony Walkman and virtual reality headsets are not just prominent examples of personal technology. In the hands of Paul Roquet, they’re also vehicles for learning more about Japan, the U.S., global technology trends — and ourselves.

Roquet is an associate professor in MIT’s program in Comparative Media Studies/Writing, and his forte is analyzing how new consumer technologies change the way people interact with their environments. His focus in this effort has been Japan, an early adopter of many postwar trends in personal tech.

For instance, in his 2016 book “Ambient Media: Japanese Atmospheres of Self” (University of Minnesota Press), Roquet examines how music, film, and other media have been deployed in Japan to create soothing, relaxing individual atmospheres for people. That gives people a feeling of control, even though their moods are now mediated by the products they consume.

In his 2022 book, “The Immersive Enclosure: Virtual Reality in Japan” (Columbia University Press), Roquet explored the impact of VR technologies on users, understanding these devices as tools for both closing off the outside world and interacting with others in networked settings. Roquet also detailed the cross-cultural trajectories of VR, which in the U.S. emerged out of military and aviation applications, but in Japan has been centered around forms of escapist entertainment.

As Roquet puts it, his work is steadily focused on “the relationship between media technologies and environmental perception, and how this relationship plays out differently in different cultural contexts.”

He adds: “There’s a lot to be gained by trying to think through the same questions in different parts of the world.”

Those different cultures are connected, to be sure: In Japan, for example, the English musician Brian Eno was a significant influence in the understanding of ambient media. The translation of VR technologies from the U.S. to Japan happened, in part, via technologists and innovators with MIT links. Meanwhile, Japan gave the world the Sony Walkman, a sonic enclosure of its own. 

As such, Roquet’s work is innovative, pulling together cultural trends across different media and tracing them around the globe, through the history, present, and future of technology. For his research and teaching, Roquet was granted tenure at MIT earlier this year.

Exchange program pays off

Roquet grew up in California, where his family moved around to a few different towns while he was a kid. As a high school student learning Japanese in Davis, he enrolled in an exchange program with Japan, the California-Japan Scholars program, enabling him to see the country up close. It was the first time Roquet had been outside of the U.S., and the trip had a lasting impact.

Roquet kept studying Japanese language and culture while an undergraduate at Pomona College; he earned his BA in 2003, in Asian studies and media studies. Roquet also indulged his growing fascination with atmospheric media by hosting a college radio show featuring often-experimental forms of ambient music. Soon Roquet discovered, to his bemusement, that his show was being played — with unknown effects on customers — at a local car dealership.

Japanese film was still another source of Roquet’s emergent intellectual interests, due to the differences he perceived with mainstream U.S. cinema.

“The storytelling would often function very differently,” Roquet says. “I found myself drawn to films where there was less of an emphasis on plot, and more emphasis on atmosphere and space.”

After college, Roquet won a Thomas J. Watson Fellowship and immediately spent a year on an ambitious research project, investigating what the local soundscape meant to residents across the Asia-Pacific region — including Malaysia, Singapore, Australia, New Zealand, Fiji, the Cook Islands — as well as Canada.

“It made me aware of how different people’s relationship to the soundscape can be from one place to another, and how history, politics, and culture shape the sensory environment,” Roquet says.

He then earned his MA in 2007 from the University of California at Berkeley, and ultimately his PhD from Berkeley in 2012, with a focus on Japan Studies and a Designated Emphasis in Film Studies. His dissertation formed the basis of his “Ambient Media” book.

Following three years as an Andrew W. Mellon Postdoctoral Fellow in the Humanities, at Stanford University, and one as a postdoc in global media at Brown University, Roquet joined the MIT faculty in 2016. He has remained at the Institute since, producing his second book, as well as a range of essays on VR and other forms of environmental media.

Willingness to explore

MIT has been an excellent fit, Roquet says, given his varied interests in the relationship between technology and culture.

“One thing I love about MIT is there’s a real willingness to explore newly emerging ideas and practices, even if they may not be situated in an established disciplinary context yet,” Roquet says. “MIT allows that interdisciplinary conversation to take place because you have this location that ties everything together.”

Roquet has also taught a wide range of undergraduate classes, including introductions to media studies and to Japanese culture; a course on Japanese and Korean cinema; another on Japanese literature and cinema; and a course on digital media in Japan and Korea. This semester he is teaching a new course on critical approaches to immersive media studies. 

Of MIT’s undergraduates, Roquet notes, “They have a remarkable range of interests, and this means class discussions shift from year to year in really interesting ways. Whatever sparks their curiosity, they are always ready to dig deep.”

When it comes to his ongoing research, Roquet is exploring how the increasing use of immersive media works to transform a society’s relationship with the existing physical landscape.

“These kinds of questions are not asked nearly enough,” Roquet says. “There’s a lot of emphasis on what virtual spaces offer to the consumer, but there are always environmental and social impacts created by inserting new layers of mediation between a person and their surrounding world. Not to mention by manufacturing headsets that often become obsolete within a couple years.”

Wherever his work takes him, Roquet will still be engaging in a career-long project of exploring the cultural and historical differences among countries in order to expand our understanding of media and technology.

“I don’t want to make the argument that Japan is radically different from the U.S. These histories are very intertwined, and there’s a lot of back and forth [between the countries],” Roquet says. “But also, when you pay close attention to local contexts you can uncover critical differences in how media technologies are understood and put to use. These can teach us a lot, and challenge our assumptions.”

Artificial intelligence model can detect Parkinson’s from breathing patterns

Parkinson’s disease is notoriously difficult to diagnose because diagnosis relies primarily on the appearance of motor symptoms such as tremors, stiffness, and slowness, which often emerge several years after disease onset. Now, Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and principal investigator at MIT Jameel Clinic, and her team have developed an artificial intelligence model that can detect Parkinson’s just from reading a person’s breathing patterns.

The tool in question is a neural network, a series of connected algorithms that mimic the way a human brain works, capable of assessing whether someone has Parkinson’s from their nocturnal breathing — i.e., breathing patterns that occur while sleeping. The neural network, which was trained by MIT PhD student Yuzhe Yang and postdoc Yuan Yuan, is also able to discern the severity of someone’s Parkinson’s disease and track the progression of their disease over time. 

Yang and Yuan are co-first authors on a new paper describing the work, published today in Nature Medicine. Katabi, who is also an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory and director of the Center for Wireless Networks and Mobile Computing, is the senior author. They are joined by 12 colleagues from Rutgers University, the University of Rochester Medical Center, the Mayo Clinic, Massachusetts General Hospital, and the Boston University College of Health and Rehabilitation.

Over the years, researchers have investigated the potential of detecting Parkinson’s using cerebrospinal fluid and neuroimaging, but such methods are invasive, costly, and require access to specialized medical centers, making them unsuitable for frequent testing that could otherwise provide early diagnosis or continuous tracking of disease progression.

The MIT researchers demonstrated that the artificial intelligence assessment of Parkinson’s can be done every night at home while the person is asleep and without touching their body. To do so, the team developed a device with the appearance of a home Wi-Fi router, but instead of providing internet access, the device emits radio signals, analyzes their reflections off the surrounding environment, and extracts the subject’s breathing patterns without any bodily contact. The breathing signal is then fed to the neural network to assess Parkinson’s passively, requiring no effort from the patient or caregiver.
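The article does not detail how the device recovers breathing from radio reflections. As a hedged sketch of the general idea — chest motion modulates the reflected signal, so isolating the physiological breathing band (roughly 0.1–0.5 Hz, or 6–30 breaths per minute) recovers a breathing waveform — one could band-pass a simulated reflection like this (the sample rate, band edges, and signal model are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 20.0  # sample rate of the reflected-signal envelope, in Hz (assumed)

# Simulated reflected-signal envelope: chest motion at 0.25 Hz (15 breaths/min)
# mixed with faster body motion and noise.
t = np.arange(0, 300, 1 / fs)
reflection = (np.sin(2 * np.pi * 0.25 * t)          # breathing component
              + 0.5 * np.sin(2 * np.pi * 2.0 * t)   # other body motion
              + 0.2 * rng.standard_normal(t.size))  # measurement noise

def bandpass_breathing(x, fs, lo=0.1, hi=0.5):
    """Keep only the physiological breathing band (~6-30 breaths/min)
    via an FFT mask -- a simple stand-in for the system's RF processing."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=x.size)

breathing = bandpass_breathing(reflection, fs)

# Estimate the breathing rate from zero crossings (two crossings per cycle).
crossings = np.sum(np.diff(np.signbit(breathing).astype(int)) != 0)
rate_bpm = 60.0 * (crossings / 2) / (t[-1] - t[0])
print(round(rate_bpm, 1))
```

The recovered rate lands near the simulated 15 breaths per minute; the real system would feed the full waveform, not just a rate, into the neural network.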

“A relationship between Parkinson’s and breathing was noted as early as 1817, in the work of Dr. James Parkinson. This motivated us to consider the potential of detecting the disease from one’s breathing without looking at movements,” Katabi says. “Some medical studies have shown that respiratory symptoms manifest years before motor symptoms, meaning that breathing attributes could be promising for risk assessment prior to Parkinson’s diagnosis.”

The fastest-growing neurological disease in the world, Parkinson’s is the second-most common neurological disorder, after Alzheimer’s disease. In the United States alone, it afflicts over 1 million people and has an annual economic burden of $51.9 billion. The research team’s device was tested on 7,687 individuals, including 757 Parkinson’s patients.

Katabi notes that the study has important implications for Parkinson’s drug development and clinical care. “In terms of drug development, the results can enable clinical trials with a significantly shorter duration and fewer participants, ultimately accelerating the development of new therapies. In terms of clinical care, the approach can help in the assessment of Parkinson’s patients in traditionally underserved communities, including those who live in rural areas and those with difficulty leaving home due to limited mobility or cognitive impairment,” she says.

“We’ve had no therapeutic breakthroughs this century, suggesting that our current approaches to evaluating new treatments are suboptimal,” says Ray Dorsey, a professor of neurology at the University of Rochester and Parkinson’s specialist who co-authored the paper. Dorsey adds that the study is likely one of the largest sleep studies ever conducted on Parkinson’s. “We have very limited information about manifestations of the disease in their natural environment and [Katabi’s] device allows you to get objective, real-world assessments of how people are doing at home. The analogy I like to draw [of current Parkinson’s assessments] is a street lamp at night, and what we see from the street lamp is a very small segment … [Katabi’s] entirely contactless sensor helps us illuminate the darkness.”

This research was performed in collaboration with the University of Rochester, Mayo Clinic, and Massachusetts General Hospital, and is sponsored by the National Institutes of Health, with partial support by the National Science Foundation and the Michael J. Fox Foundation.
