Feed aggregator
Penguin poop could limit climate impacts on Antarctica
Scientists: Clownfish shrink their bodies to survive ocean heat waves
A magnetic pull toward materials
Growing up in Coeur d’Alene, Idaho, with engineer parents who worked in the state’s silver mining industry, MIT senior Maria Aguiar developed an early interest in materials. The star garnet, the state’s mineral, is still her favorite. It’s a sheer coincidence, though, that her undergraduate thesis also focuses on garnets.
Her research explores ways to manipulate the magnetic properties of garnet thin films — work that can help improve data storage technologies. After all, says Aguiar, a major in the Department of Materials Science and Engineering (DMSE), technology and energy applications increasingly rely on the use of materials with favorable electronic and magnetic properties.
Passionate about engineering in high school — science fiction was also her jam — Aguiar applied to MIT and was accepted. At the time, she knew of materials engineering only from a Google search. She assumed she would gravitate toward aerospace engineering, astronomy, or even physics, subjects that had all piqued her interest at one time or another.
Aguiar was indecisive about a major for a while but began to realize that the topics she enjoyed would invariably center on materials. “I would visit an aerospace museum and would be more interested in the tiles they used in the shuttle to tolerate the heat. I was interested in the process to engineer such materials,” Aguiar remembers.
It was a first-year pre-orientation program (FPOP), designed to help new students test-drive majors, that convinced Aguiar that materials engineering was a good fit for her interests. It helped that the DMSE students were friendly and approachable. “They were proud to be in that major, and excited to talk about what they did,” Aguiar says.
During the FPOP, Associate Professor James LeBeau, a DMSE expert in transmission electron microscopy, asked students about their interests. When Aguiar piped up, saying she loved astronomy, LeBeau compared the subject to microscopy.
“An electron microscope is just a telescope in reverse,” she recalls him saying. Instead of looking at something far away, you go from big to small — zooming in to see the finer details. That comparison stuck with Aguiar and inspired her to pursue her first Undergraduate Research Opportunities Program (UROP) project with LeBeau, where she learned more about microscopy.
Drawn to magnetic materials
It was class 3.152 (Magnetic Materials), taught by Professor Caroline Ross, that stoked Aguiar’s interest in magnetic materials. The subject matter was fascinating, Aguiar says, and she knew related research would make important contributions to modern data storage technology. After starting a UROP in Ross’s magnetic materials lab in the spring of her junior year, Aguiar was hooked, and the work eventually morphed into her undergraduate thesis, “Effects of Annealing on Atomic Ordering and Magnetic Anisotropy in Iron Garnet Thin Films.”
The broad goal of her work was to understand how to manipulate materials’ magnetic properties, such as anisotropy — the tendency of a material’s magnetic properties to change depending on which direction they are measured in. It turns out that changing where certain metal atoms — or cations — sit in the garnet’s crystal structure can influence this directional behavior. By carefully arranging these atoms, researchers can “tune” garnet films to deliver novel magnetic properties, enabling the design of advanced materials for electronics.
When Aguiar joined the lab, she began working with doctoral candidate Allison Kaczmarek, who was investigating the connection between cation ordering and magnetic properties for her PhD thesis. Specifically, Kaczmarek was studying the growth and characterization of garnet films, evaluating different ways to induce cation ordering by varying the parameters in the pulsed laser deposition process — a technique that fires a laser at a target material (in this case, garnet), vaporizing it so it deposits onto a substrate, such as glass. Adjusting variables such as laser energy, pressure, and temperature, along with the composition of the mixed oxides, can significantly influence the resulting film.
Aguiar studied one specific parameter: annealing — heating a material to a high temperature and then cooling it. The heat treatment is often used to alter the way atoms are arranged in a material. “So far, I have found that when we anneal these films for times as short as five minutes, the film gets closer to preferring out-of-plane magnetization,” Aguiar says. This property, known as perpendicular magnetic anisotropy, is significant for magnetic memory applications because it offers advantages in performance, scalability, and energy efficiency.
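To make the idea of tuning anisotropy concrete, here is a minimal sketch, not taken from Aguiar's thesis, of the textbook uniaxial anisotropy energy; the anisotropy constant used below is hypothetical. The sign of the effective constant decides whether a film prefers in-plane or out-of-plane magnetization.

```python
import numpy as np

def anisotropy_energy(theta_rad, ku):
    """Uniaxial anisotropy energy density (J/m^3) at angle theta
    measured from the film normal: E(theta) = Ku * sin^2(theta)."""
    return ku * np.sin(theta_rad) ** 2

ku = 5e3  # J/m^3, hypothetical effective anisotropy constant
for theta in np.linspace(0, np.pi / 2, 7):
    print(f"theta = {np.degrees(theta):5.1f} deg -> "
          f"E = {anisotropy_energy(theta, ku):8.1f} J/m^3")

# With Ku > 0 the energy minimum sits at theta = 0 (the film normal),
# i.e., perpendicular magnetic anisotropy; Ku < 0 favors in-plane
# magnetization instead.
```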
“Maria has been very reliable and quick to be independent. She picks things up very quickly and is very thoughtful about what she’s doing,” Kaczmarek says. That thoughtfulness showed early on. When asked to identify an optimal annealing temperature for the films, Aguiar didn’t just run tests — she first conducted a thorough literature review to understand what had already been established, then carefully tested films at different temperatures to find the one that worked best.
Kaczmarek first got to know Aguiar as a teaching assistant for class 3.030 (Microstructural Evolution of Materials), taught by Professor Geoffrey Beach. Even before starting the UROP in Ross’s lab, Aguiar had shared a clear research goal: to gain hands-on experience with advanced techniques such as X-ray diffraction, vibrating sample magnetometry, and ferromagnetic resonance — tools typically used by more senior researchers. “That’s a goal she has certainly achieved,” Kaczmarek says.
Beyond the lab, beyond MIT
Outside the lab, Aguiar combines her love of materials with a strong commitment to community building. As co-president of the Society of Undergraduate Materials Scientists in DMSE, she helps organize events that make the department more inclusive. Class dinners are great fun — many seniors recently went to a Cambridge restaurant for sushi — and “Materials Week,” held every semester, functions primarily as a recruitment event for new students. A hot cocoa event near the winter holidays combined seasonal cheer with class evaluations — painful for some, perhaps, but necessary for improving instruction.
After graduating this spring, Aguiar is looking forward to graduate school at Stanford University and is setting her sights on teaching. She loved her time as a teaching assistant for the popular first-year classes 3.091 (Introduction to Solid-State Chemistry) and 3.010 (Structure of Materials), work that earned her an undergraduate student teaching award.
Ross is convinced that Aguiar is a strong fit for graduate studies. “For graduate school, you need academic excellence and technical skills like being good in the lab, and Maria has both. Then there are the soft skills, which have to do with how well organized you are, how resilient you are, how you manage different responsibilities. Usually, students learn them as they go along, but Maria is well ahead of the curve,” Ross says.
“One thing that makes me hopeful for Maria’s time in grad school is that she is very broadly interested in a lot of aspects of materials science,” Kaczmarek adds.
Aguiar’s passion for the subject spilled over into a fun side project: a DMSE-exclusive “Meow-terials Science” T-shirt she designed — featuring cats doing familiar lab experiments — was a hit among students.
She remains endlessly fascinated by the materials around her, even in the water bottle she drinks from every day. “Studying materials science has changed the way I see the world. I can pick up something as ordinary as this water bottle and think about the metallurgical processing techniques I learned from my classes. I just love that there’s so much to learn from the everyday.”
New research, data advance understanding of early planetary formation
A team of international astronomers led by Richard Teague, the Kerr-McGee Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), has gathered the most sensitive and detailed observations of 15 protoplanetary disks to date, giving the astronomy community a new look at the mechanisms of early planetary formation.
“The new approaches we’ve developed to gather this data and images are like switching from reading glasses to high-powered binoculars — they reveal a whole new level of detail in these planet-forming systems,” says Teague.
Their open-access findings were published in a special collection of 17 papers in the Astrophysical Journal Letters, with several more coming out this summer. The collection sheds light on a breadth of questions, including ways to calculate the mass of a disk by measuring its gravitational influence and to extract rotational velocity profiles to a precision of meters per second.
Protoplanetary disks are collections of dust and gas around young stars, from which planets form. Observing the dust in these disks is easier because it is brighter, but the information that can be gleaned from dust alone is only a snapshot of what is going on. Teague’s research has shifted attention to the gas in these systems, which can tell us more about the dynamics of a disk, including properties such as gravity, velocity, and mass.
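As a rough illustration of how gas kinematics constrain mass (a toy calculation with made-up numbers, not exoALMA's actual analysis): the disk's own gravity makes the gas orbit slightly faster than the star alone would dictate, and meters-per-second velocity precision makes that small residual measurable.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_velocity(r_au, m_star_msun):
    """Rotation speed (m/s) at radius r for a star-only potential."""
    return np.sqrt(G * m_star_msun * M_SUN / (r_au * AU))

# Hypothetical numbers: a 1-solar-mass star, and an "observed" rotation
# curve running 0.5% fast because the disk's gravity adds to the star's.
r = np.array([50.0, 100.0, 200.0])   # radii in AU
v_star_only = keplerian_velocity(r, 1.0)
v_observed = v_star_only * 1.005     # assumed super-Keplerian excess

# The residual is tens of m/s here, so a meters-per-second velocity
# precision can detect it, and the residual constrains the disk mass.
for ri, vk, vo in zip(r, v_star_only, v_observed):
    print(f"r = {ri:6.1f} AU: Keplerian {vk:7.1f} m/s, "
          f"observed {vo:7.1f} m/s, residual {vo - vk:5.1f} m/s")
```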
To achieve the resolution necessary to study gas, the exoALMA program spent five years coordinating longer observation windows on the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. As a result, the international team of astronomers, many of whom are early-career scientists, was able to collect some of the most detailed images ever taken of protoplanetary disks.
“The impressive thing about the data is that it’s so good, the community is developing new tools to extract signatures from planets,” says Marcelo Barraza-Alfaro, a postdoc in the Planet Formation Lab and a member of the exoALMA project. The team developed several new techniques for calibrating and improving the images to take full advantage of the observations’ higher resolution and sensitivity.
As a result, “we are seeing new things that require us to modify our understanding of what’s going on in protoplanetary disks,” he says.
One of the papers with the largest EAPS contribution explores planetary formation through vortices. It has been known for some time that the simple model of formation often proposed, in which dust grains clump together and “snowball” into a planetary core, is not enough on its own. One mechanism that could help is vortices: localized perturbations in the gas that pull dust toward their centers. There, the grains are more likely to clump, the way soap bubbles collect in a draining tub.
“We can see the concentration of dust in different regions, but we cannot see how it is moving,” says Lisa Wölfer, another postdoc in the Planet Formation Lab at MIT and first author on the paper. While astronomers can see that the dust has gathered, there isn’t enough information to rule out how it got to that point.
“Only through the dynamics in the gas can we actually confirm that it’s a vortex, and not something else, creating the structure,” she says.
During the data collection period, Teague, Wölfer, and Barraza-Alfaro developed simple models of protoplanetary disks to compare to their observations. When they got the data back, however, the models couldn’t explain what they were seeing.
“We saw the data and nothing worked anymore. It was way too complicated,” says Teague. “Before, everyone thought they were not dynamic. That’s completely not the case.”
The team was forced to reevaluate their models and work with more complex ones incorporating more motion in the gas, which take more time and resources to run. But early results look promising.
“We see that the patterns look very similar; we think this is the best test case to study further with more observations,” says Wölfer.
The new data, which have been made public, come at a fortuitous time: ALMA will be going dark for a period in the next few years while it undergoes upgrades. During this time, astronomers can continue the monumental process of sifting through all the data.
“It’s going to just keep on producing results for years and years to come,” says Teague.
A new approach could fractionate crude oil using much less energy
Separating crude oil into products such as gasoline, diesel, and heating oil is an energy-intensive process that accounts for about 6 percent of the world’s CO2 emissions. Most of that energy goes into the heat needed to separate the components by their boiling point.
In an advance that could dramatically reduce the amount of energy needed for crude oil fractionation, MIT engineers have developed a membrane that filters the components of crude oil by their molecular size.
“This is a whole new way of envisioning a separation process. Instead of boiling mixtures to purify them, why not separate components based on shape and size? The key innovation is that the filters we developed can separate very small molecules at an atomistic length scale,” says Zachary P. Smith, an associate professor of chemical engineering at MIT and the senior author of the new study.
The new filtration membrane can efficiently separate heavy and light components from oil, and it is resistant to the swelling that tends to occur with other types of oil separation membranes. The membrane is a thin film that can be manufactured using a technique that is already widely used in industrial processes, potentially allowing it to be scaled up for widespread use.
Taehoon Lee, a former MIT postdoc who is now an assistant professor at Sungkyunkwan University in South Korea, is the lead author of the paper, which appears today in Science.
Oil fractionation
Conventional heat-driven processes for fractionating crude oil make up about 1 percent of global energy use, and it has been estimated that using membranes for crude oil separation could reduce the amount of energy needed by about 90 percent. For this to succeed, a separation membrane needs to allow hydrocarbons to pass through quickly, and to selectively filter compounds of different sizes.
Until now, most efforts to develop a filtration membrane for hydrocarbons have focused on polymers of intrinsic microporosity (PIMs), including one known as PIM-1. Although this porous material allows the fast transport of hydrocarbons, it tends to excessively absorb some of the organic compounds as they pass through the membrane, leading the film to swell, which impairs its size-sieving ability.
To come up with a better alternative, the MIT team decided to try modifying polymers that are used for reverse osmosis water desalination. Since their adoption in the 1970s, reverse osmosis membranes have reduced the energy consumption of desalination by about 90 percent — a remarkable industrial success story.
The most commonly used membrane for water desalination is a polyamide that is manufactured using a method known as interfacial polymerization. During this process, a thin polymer film forms at the interface between water and an organic solvent such as hexane. Water and hexane do not normally mix, but at the interface between them, a small amount of the compounds dissolved in them can react with each other.
In this case, a hydrophilic monomer called MPD, which is dissolved in water, reacts with a hydrophobic monomer called TMC, which is dissolved in hexane. The two monomers are joined together by a connection known as an amide bond, forming a polyamide thin film (named MPD-TMC) at the water-hexane interface.
While highly effective for water desalination, MPD-TMC doesn’t have the right pore sizes and swelling resistance that would allow it to separate hydrocarbons.
To adapt the material to separate the hydrocarbons found in crude oil, the researchers first modified the film by changing the bond that connects the monomers from an amide bond to an imine bond. This bond is more rigid and hydrophobic, which allows hydrocarbons to quickly move through the membrane without causing noticeable swelling of the film compared to the polyamide counterpart.
“The polyimine material has porosity that forms at the interface, and because of the cross-linking chemistry that we have added in, you now have something that doesn’t swell,” Smith says. “You make it in the oil phase, react it at the water interface, and with the crosslinks, it’s now immobilized. And so those pores, even when they’re exposed to hydrocarbons, no longer swell like other materials.”
The researchers also introduced a monomer called triptycene. This shape-persistent, molecularly selective molecule further helps the resultant polyimines to form pores that are the right size for hydrocarbons to fit through.
This approach represents “an important step toward reducing industrial energy consumption,” says Andrew Livingston, a professor of chemical engineering at Queen Mary University of London, who was not involved in the study.
“This work takes the workhorse technology of the membrane desalination industry, interfacial polymerization, and creates a new way to apply it to organic systems such as hydrocarbon feedstocks, which currently consume large chunks of global energy,” Livingston says. “The imaginative approach using an interfacial catalyst coupled to hydrophobic monomers leads to membranes with high permeance and excellent selectivity, and the work shows how these can be used in relevant separations.”
Efficient separation
When the researchers used the new membrane to filter a mixture of toluene and triisopropylbenzene (TIPB), a benchmark for evaluating separation performance, it achieved a toluene concentration 20 times greater than that of the original mixture. They also tested the membrane with an industrially relevant mixture consisting of naphtha, kerosene, and diesel, and found that it could efficiently separate the heavier and lighter compounds by their molecular size.
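For a sense of what a "20 times" enrichment means in practice, here is a minimal sketch with hypothetical feed and permeate compositions; the paper's actual values are not reproduced here.

```python
# The enrichment factor for one component of a membrane separation is
# simply the ratio of its concentration in the permeate (what passes
# through) to its concentration in the feed (the starting mixture).

def enrichment_factor(c_permeate, c_feed):
    """Ratio of permeate to feed concentration for one component."""
    return c_permeate / c_feed

# Hypothetical toluene/TIPB feed: 4% toluene (mole fraction), with an
# assumed 80% toluene permeate chosen purely for illustration.
c_feed_toluene = 0.04
c_permeate_toluene = 0.80

print(f"Toluene enrichment: "
      f"{enrichment_factor(c_permeate_toluene, c_feed_toluene):.0f}x")
# -> 20x, the scale of enrichment described in the study.
```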
If adapted for industrial use, a series of these filters could be used to generate a higher concentration of the desired products at each step, the researchers say.
“You can imagine that with a membrane like this, you could have an initial stage that replaces a crude oil fractionation column. You could partition heavy and light molecules and then you could use different membranes in a cascade to purify complex mixtures to isolate the chemicals that you need,” Smith says.
Interfacial polymerization is already widely used to create membranes for water desalination, and the researchers believe it should be possible to adapt those processes to mass produce the films they designed in this study.
“The main advantage of interfacial polymerization is it’s already a well-established method to prepare membranes for water purification, so you can imagine just adopting these chemistries into existing scale of manufacturing lines,” Lee says.
The research was funded, in part, by ExxonMobil through the MIT Energy Initiative.
MIT physicists discover a new type of superconductor that’s also a magnet
Magnets and superconductors go together like oil and water — or so scientists have thought. But a new finding by MIT physicists is challenging this century-old assumption.
In a paper appearing today in the journal Nature, the physicists report that they have discovered a “chiral superconductor” — a material that conducts electricity without resistance, and also, paradoxically, is intrinsically magnetic. What’s more, they observed this exotic superconductivity in a surprisingly ordinary material: graphite, the primary material in pencil lead.
Graphite is made from many layers of graphene — atomically thin, lattice-like sheets of carbon atoms — that are stacked together and can easily flake off when pressure is applied, as when pressing down to write on a piece of paper. A single flake of graphite can contain several million sheets of graphene, which are normally stacked such that every other layer aligns. But every so often, graphite contains tiny pockets where graphene is stacked in a different pattern, resembling a staircase of offset layers.
The MIT team has found that when four or five sheets of graphene are stacked in this “rhombohedral” configuration, the resulting structure can exhibit exceptional electronic properties that are not seen in graphite as a whole.
In their new study, the physicists isolated microscopic flakes of rhombohedral graphene from graphite and subjected the flakes to a battery of electrical tests. They found that when the flakes are cooled to 300 millikelvins (about -273 degrees Celsius), the material turns into a superconductor, meaning an electrical current can flow through it without resistance.
They also found that when they swept an external magnetic field up and down, the flakes could be switched between two different superconducting states, just like a magnet. This suggests that the superconductor has some internal, intrinsic magnetism. Such switching behavior is absent in other superconductors.
“The general lore is that superconductors do not like magnetic fields,” says Long Ju, assistant professor of physics at MIT. “But we believe this is the first observation of a superconductor that behaves as a magnet with such direct and simple evidence. And that’s quite a bizarre thing because it is against people’s general impression on superconductivity and magnetism.”
Ju is senior author of the study, which includes MIT co-authors Tonghang Han, Zhengguang Lu, Zach Hadjri, Lihan Shi, Zhenghan Wu, Wei Xu, Yuxuan Yao, Jixiang Yang, Junseok Seo, Shenyong Ye, Muyang Zhou, and Liang Fu, along with collaborators from Florida State University, the University of Basel in Switzerland, and the National Institute for Materials Science in Japan.
Graphene twist
In everyday conductive materials, electrons flow through in a chaotic scramble, whizzing by each other, and pinging off the material’s atomic latticework. Each time an electron scatters off an atom, it has, in essence, met some resistance, and loses some energy as a result, normally in the form of heat. In contrast, when certain materials are cooled to ultracold temperatures, they can become superconducting, meaning that the material can allow electrons to pair up, in what physicists term “Cooper pairs.” Rather than scattering away, these electron pairs glide through a material without resistance. With a superconductor, then, no energy is lost in translation.
Since superconductivity was first observed in 1911, physicists have shown many times over that zero electrical resistance is a hallmark of a superconductor. Another defining property was first observed in 1933, when the physicist Walther Meissner discovered that a superconductor will expel an external magnetic field. This “Meissner effect” is due in part to a superconductor’s electron pairs, which collectively act to push away any magnetic field.
Physicists have assumed that all superconducting materials should exhibit both zero electrical resistance, and a natural magnetic repulsion. Indeed, these two properties are what could enable Maglev, or “magnetic levitation” trains, whereby a superconducting rail repels and therefore levitates a magnetized car.
Ju and his colleagues had no reason to question this assumption as they carried out their experiments at MIT. In the last few years, the team has been exploring the electrical properties of pentalayer rhombohedral graphene. The researchers have observed surprising properties in the five-layer, staircase-like graphene structure, most recently that it enables electrons to split into fractions of themselves. This phenomenon occurs when the pentalayer structure is placed atop a sheet of hexagonal boron nitride (a material similar to graphene), and slightly offset by a specific angle, or twist.
Curious as to how electron fractions might change with changing conditions, the researchers followed up their initial discovery with similar tests, this time by misaligning the graphene and hexagonal boron nitride structures. To their surprise, they found that when they misaligned the two materials and sent an electrical current through, at temperatures less than 300 millikelvins, they measured zero resistance. It seemed that the phenomenon of electron fractions disappeared, and what emerged instead was superconductivity.
The researchers went a step further to see how this new superconducting state would respond to an external magnetic field. They applied a magnet to the material, along with a voltage, and measured the electrical current coming out of the material. As they dialed the magnetic field from negative to positive (similar to a north and south polarity) and back again, they observed that the material maintained its superconducting, zero-resistance state, except in two instances, once at either magnetic polarity. In these instances, the resistance briefly spiked, before switching back to zero, and returning to a superconducting state.
“If this were a conventional superconductor, it would just remain at zero resistance, until the magnetic field reaches a critical point, where superconductivity would be killed,” Zach Hadjri, a first-year student in the group, says. “Instead, this material seems to switch between two superconducting states, like a magnet that starts out pointing upward, and can flip downwards when you apply a magnetic field. So it looks like this is a superconductor that also acts like a magnet. Which doesn’t make any sense!”
“One of a kind”
As counterintuitive as the discovery may seem, the team observed the same phenomenon in six similar samples. They suspect that the unique configuration of rhombohedral graphene is the key. The material has a very simple arrangement of carbon atoms. When cooled to ultracold temperatures, the thermal fluctuation is minimized, allowing any electrons flowing through the material to slow down, sense each other, and interact.
Such quantum interactions can lead electrons to pair up and superconduct. These interactions can also encourage electrons to coordinate: electrons can collectively occupy one of two opposite momentum states, or “valleys.” When all electrons are in one valley, they effectively spin in one direction rather than the other. In conventional superconductors, electrons can occupy either valley, and a pair is typically made from electrons of opposite valleys that cancel each other out. The pair overall, then, has zero momentum and does not spin.
In the team’s material structure, however, they suspect that all electrons interact such that they share the same valley, or momentum state. When electrons then pair up, the superconducting pair overall has a “non-zero” momentum and a spin that, combined with many other pairs, can amount to an internal, superconducting magnetism.
“You can think of the two electrons in a pair spinning clockwise, or counterclockwise, which corresponds to a magnet pointing up, or down,” Tonghang Han, a fifth-year student in the group, explains. “So we think this is the first observation of a superconductor that behaves as a magnet due to the electrons’ orbital motion, which is known as a chiral superconductor. It’s one of a kind. It is also a candidate for a topological superconductor which could enable robust quantum computation.”
“Everything we’ve discovered in this material has been completely out of the blue,” says Zhengguang Lu, a former postdoc in the group and now an assistant professor at Florida State University. “But because this is a simple system, we think we have a good chance of understanding what is going on, and could demonstrate some very profound and deep physics principles.”
“It is truly remarkable that such an exotic chiral superconductor emerges from such simple ingredients,” adds Liang Fu, professor of physics at MIT. “Superconductivity in rhombohedral graphene will surely have a lot to offer.”
The part of the research carried out at MIT was supported by the U.S. Department of Energy and a MathWorks Fellowship.
Study: Climate change may make it harder to reduce smog in some regions
Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.
The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.
The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits.
However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.
The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere.
By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.
“Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.
Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science & Technology.
Controlling ozone
Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.
Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.
“That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.
Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxide and volatile organic compounds in nonlinear ways.
Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.
“Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.
To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology.
They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate.
Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.
The researchers focused on eastern North America, Western Europe, and northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data.
They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared them to a historical scenario capturing 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.
Capturing climate variability
“The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.
They were able to overcome that challenge thanks to recent advances in atmospheric chemistry modeling and by using parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.
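The arithmetic behind the ensemble is simple, as the sketch below illustrates with synthetic numbers; the signal and variability values are assumptions, not the study's results. Averaging over 80 model years shrinks the noise that natural variability adds to any single year.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five independent 16-year realizations per scenario give 80 model
# years, enough to separate a forced change in ozone from
# year-to-year meteorological noise.
n_realizations, n_years = 5, 16
climate_signal = 2.0        # ppb, assumed mean ozone change
natural_variability = 5.0   # ppb, assumed interannual std deviation

ensemble = climate_signal + rng.normal(
    0.0, natural_variability, size=(n_realizations, n_years))

mean = ensemble.mean()
stderr = ensemble.std(ddof=1) / np.sqrt(ensemble.size)
print(f"{ensemble.size} model years: mean change = {mean:.2f} ppb "
      f"+/- {stderr:.2f} ppb (standard error)")
# A single 16-year realization would have a standard error about
# sqrt(5) times larger, making the forced signal harder to isolate.
```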
The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature.
Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.
“This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.
On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios.
“But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.
Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health.
“Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.
In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.
“We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.
This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.
The Voter Experience
Technology and innovation have transformed every part of society, including our electoral experiences. Campaigns are spending and doing more than at any other time in history. Ever-growing war chests fuel billions of voter contacts every cycle. Campaigns now have better ways of scaling outreach methods and offer volunteers and donors more efficient ways to contribute time and money. Campaign staff have adapted to vast changes in media and social media landscapes, and use data analytics to forecast voter turnout and behavior.
Yet despite these unprecedented investments in mobilizing voters, overall trust in electoral health, democratic institutions, voter satisfaction, and electoral engagement has significantly declined. What might we be missing?...
Trump, who called FEMA ‘very slow,’ has failed to act on 17 disaster requests
How former NATO chief helped save Empire Wind
China was striking climate deals as Trump toured oil kingdoms
Dems and Zeldin square off in fiery debate over EPA grants
Maryland governor signs sprawling energy plan, vetoes climate studies
Wildfires push tropical forest destruction to 20-year high
Record pace of snowmelt in US West threatens another drought
Conservationists step up efforts to protect amphibian habitat
Ghana e-bike maker approved to sell CO2 credits to Switzerland
AI learns how vision and sound are connected, without human intervention
Humans naturally learn by making connections between sight and sound. For instance, we can watch someone playing the cello and recognize that the cellist’s movements are generating the music we hear.
A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be useful in applications such as journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval.
In the longer term, this work could be used to improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected.
Improving upon prior work from their group, the researchers created a method that helps machine-learning models align corresponding audio and visual data from video clips without the need for human labels.
They adjusted how their original model is trained so it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The researchers also made some architectural tweaks that help the system balance two distinct learning objectives, which improves performance.
Taken together, these relatively simple improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.
“We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities. Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,” says Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research.
He is joined on the paper by lead author Edson Araujo, a graduate student at Goethe University in Germany; Yuan Gong, a former MIT postdoc; Saurabhchand Bhati, a current MIT postdoc; Samuel Thomas, Brian Kingsbury, and Leonid Karlinsky of IBM Research; Rogerio Feris, principal scientist and manager at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Hilde Kuehne, professor of computer science at Goethe University and an affiliated professor at the MIT-IBM Watson AI Lab. The work will be presented at the Conference on Computer Vision and Pattern Recognition.
Syncing up
This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without the need for human labels.
The researchers feed this model, called CAV-MAE, unlabeled video clips, and it encodes the visual and audio data separately into representations called tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.
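Here is a minimal sketch of that alignment idea, written as a standard symmetric contrastive (InfoNCE) loss; it is illustrative, not the authors' released code, and the embedding sizes and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb, visual_emb, temperature=0.07):
    """Pull embeddings of the same clip together, push mismatched
    pairs apart. audio_emb, visual_emb: (batch, dim) tensors whose
    rows correspond to the same clips."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature       # (batch, batch) similarities
    targets = torch.arange(a.size(0))      # true pairs lie on the diagonal
    loss_a2v = F.cross_entropy(logits, targets)
    loss_v2a = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_a2v + loss_v2a)

# Stand-in encoder outputs for a batch of 8 clips:
audio = torch.randn(8, 256)
visual = torch.randn(8, 256)
print(contrastive_loss(audio, visual))
```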
They found that using two learning objectives balances the model’s learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to recover video clips that match user queries.
But CAV-MAE treats audio and visual samples as one unit, so a 10-second video clip and the sound of a door slamming are mapped together, even if that audio event happens in just one second of the video.
In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio.
During training, the model learns to associate one video frame with the audio that occurs during just that frame.
“By doing that, the model learns a finer-grained correspondence, which helps with performance later when we aggregate this information,” Araujo says.
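A sketch of the windowing step follows; the shapes and names are assumptions for illustration, not the released implementation. Instead of one embedding for a whole clip, the audio is cut into short windows so each video frame can be paired with the audio around it.

```python
import torch

def split_audio_into_windows(spectrogram, n_windows):
    """spectrogram: (time, freq) tensor. Returns a tensor of shape
    (n_windows, time // n_windows, freq), dropping any remainder."""
    time_steps = spectrogram.size(0) - spectrogram.size(0) % n_windows
    return spectrogram[:time_steps].reshape(
        n_windows, -1, spectrogram.size(1))

# Roughly 10 seconds of mel-spectrogram frames (made-up dimensions):
clip = torch.randn(1000, 128)
windows = split_audio_into_windows(clip, n_windows=10)
print(windows.shape)  # torch.Size([10, 100, 128])

# Each window is then encoded separately, and training matches video
# frame i with audio window i rather than with the clip as a whole.
```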
They also incorporated architectural improvements that help the model balance its two learning objectives.
Adding “wiggle room”
The model incorporates a contrastive objective, where it learns to associate similar audio and visual data, and a reconstruction objective, which aims to recover specific audio and visual data based on user queries.
In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model’s learning ability.
They include dedicated “global tokens” that help with the contrastive learning objective and dedicated “register tokens” that help the model focus on important details for the reconstruction objective.
“Essentially, we add a bit more wiggle room to the model so it can perform each of these two tasks, contrastive and reconstructive, a bit more independently. That benefitted overall performance,” Araujo adds.
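A rough sketch of how such extra learnable tokens might be wired into a shared encoder follows; the dimensions, token counts, and class names are assumptions, not the paper's architecture. The global tokens feed the contrastive loss while the register tokens give the reconstruction pathway spare capacity, so the two objectives interfere less.

```python
import torch
import torch.nn as nn

class TokenAugmentedEncoder(nn.Module):
    """Hypothetical encoder: prepends learnable global and register
    tokens to the patch tokens before a shared transformer."""
    def __init__(self, dim=256, n_global=1, n_register=4, n_heads=8):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.zeros(1, n_global, dim))
        self.register_tokens = nn.Parameter(torch.zeros(1, n_register, dim))
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.n_global, self.n_register = n_global, n_register

    def forward(self, patch_tokens):  # (batch, n_patches, dim)
        b = patch_tokens.size(0)
        extras = torch.cat([self.global_tokens, self.register_tokens], dim=1)
        x = torch.cat([extras.expand(b, -1, -1), patch_tokens], dim=1)
        x = self.encoder(x)
        g = x[:, :self.n_global]                          # -> contrastive loss
        patches = x[:, self.n_global + self.n_register:]  # -> reconstruction
        return g, patches

g, patches = TokenAugmentedEncoder()(torch.randn(2, 49, 256))
print(g.shape, patches.shape)  # (2, 1, 256) and (2, 49, 256)
```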
While the researchers had some intuition these enhancements would improve the performance of CAV-MAE Sync, it took a careful combination of strategies to shift the model in the direction they wanted it to go.
“Because we have multiple modalities, we need a good model for both modalities by themselves, but we also need to get them to fuse together and collaborate,” Rouditchenko says.
In the end, their enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like a dog barking or an instrument playing.
Its results were more accurate than their prior work, and it also performed better than more complex, state-of-the-art methods that require larger amounts of training data.
“Sometimes, very simple ideas or little patterns you see in the data have big value when applied on top of a model you are working on,” Araujo says.
In the future, the researchers want to incorporate new models that generate better data representations into CAV-MAE Sync, which could improve performance. They also want to enable their system to handle text data, which would be an important step toward generating an audiovisual large language model.
This work is funded, in part, by the German Federal Ministry of Education and Research and the MIT-IBM Watson AI Lab.
Learning how to predict rare kinds of failures
On Dec. 21, 2022, just as peak holiday season travel was getting underway, Southwest Airlines went through a cascading series of failures in its scheduling, initially triggered by severe winter weather in the Denver area. The problems spread through its network, and over the course of the next 10 days the crisis ended up stranding over 2 million passengers and causing losses of $750 million for the airline.
How did a localized weather system end up triggering such a widespread failure? Researchers at MIT have examined this widely reported failure as an example of cases where systems that work smoothly most of the time suddenly break down and cause a domino effect of failures. They have now developed a computational system that combines sparse data about a rare failure event with much more extensive data on normal operations, working backward to pinpoint the root causes of the failure and, ideally, to find ways to adjust the systems to prevent such failures in the future.
The findings were presented at the International Conference on Learning Representations (ICLR), held in Singapore April 24-28, by MIT doctoral student Charles Dawson, professor of aeronautics and astronautics Chuchu Fan, and colleagues from Harvard University and the University of Michigan.
“The motivation behind this work is that it’s really frustrating when we have to interact with these complicated systems, where it’s really hard to understand what’s going on behind the scenes that’s creating these issues or failures that we’re observing,” says Dawson.
The new work builds on previous research from Fan’s lab, where they looked at problems involving hypothetical failure prediction problems, she says, such as with groups of robots working together on a task, or complex systems such as the power grid, looking for ways to predict how such systems may fail. “The goal of this project,” Fan says, “was really to turn that into a diagnostic tool that we could use on real-world systems.”
The idea was to provide a way that someone could “give us data from a time when this real-world system had an issue or a failure,” Dawson says, “and we can try to diagnose the root causes, and provide a little bit of a look behind the curtain at this complexity.”
The intent is for the methods they developed “to work for a pretty general class of cyber-physical problems,” he says. These are problems in which “you have an automated decision-making component interacting with the messiness of the real world,” he explains. There are available tools for testing software systems that operate on their own, but the complexity arises when that software has to interact with physical entities going about their activities in a real physical setting, whether it be the scheduling of aircraft, the movements of autonomous vehicles, the interactions of a team of robots, or the control of the inputs and outputs on an electric grid. In such systems, what often happens, he says, is that “the software might make a decision that looks OK at first, but then it has all these domino, knock-on effects that make things messier and much more uncertain.”
One key difference, though, is that in systems like teams of robots, unlike the scheduling of airplanes, “we have access to a model in the robotics world,” says Fan, who is a principal investigator in MIT’s Laboratory for Information and Decision Systems (LIDS). “We do have some good understanding of the physics behind the robotics, and we do have ways of creating a model” that represents their activities with reasonable accuracy. But airline scheduling involves processes and systems that are proprietary business information, and so the researchers had to find ways to infer what was behind the decisions, using only the relatively sparse publicly available information, which essentially consisted of just the actual arrival and departure times of each plane.
“We have grabbed all this flight data, but there is this entire system of the scheduling system behind it, and we don’t know how the system is working,” Fan says. And the amount of data relating to the actual failure is just several days’ worth, compared to years of data on normal flight operations.
The impact of the weather events in Denver during the week of Southwest’s scheduling crisis clearly showed up in the flight data, just from the longer-than-normal turnaround times between landing and takeoff at the Denver airport. But the way that impact cascaded through the system was less obvious, and required more analysis. The key turned out to involve the concept of reserve aircraft.
Airlines typically keep some planes in reserve at various airports, so that if problems are found with one plane that is scheduled for a flight, another plane can be quickly substituted. Southwest uses only a single type of plane, so they are all interchangeable, making such substitutions easier. But most airlines operate on a hub-and-spoke system, with a few designated hub airports where most of those reserve aircraft may be kept, whereas Southwest does not use hubs, so their reserve planes are more scattered throughout their network. And the way those planes were deployed turned out to play a major role in the unfolding crisis.
“The challenge is that there’s no public data available in terms of where the aircraft are stationed throughout the Southwest network,” Dawson says. “What we’re able to find using our method is, by looking at the public data on arrivals, departures, and delays, we can use our method to back out what the hidden parameters of those aircraft reserves could have been, to explain the observations that we were seeing.”
What they found was that the way the reserves were deployed was a “leading indicator” of the problems that cascaded in a nationwide crisis. Some parts of the network that were affected directly by the weather were able to recover quickly and get back on schedule. “But when we looked at other areas in the network, we saw that these reserves were just not available, and things just kept getting worse.”
For example, the data showed that Denver’s reserves were rapidly dwindling because of the weather delays, but then “it also allowed us to trace this failure from Denver to Las Vegas,” he says. While there was no severe weather there, “our method was still showing us a steady decline in the number of aircraft that were able to serve flights out of Las Vegas.”
He says that “what we found was that there were these circulations of aircraft within the Southwest network, where an aircraft might start the day in California and then fly to Denver, and then end the day in Las Vegas.” What happened in the case of this storm was that the cycle got interrupted. As a result, “this one storm in Denver breaks the cycle, and suddenly the reserves in Las Vegas, which is not affected by the weather, start to deteriorate.”
In the end, Southwest was forced to take a drastic measure to resolve the problem: They had to do a “hard reset” of their entire system, canceling all flights and flying empty aircraft around the country to rebalance their reserves.
Working with experts in air transportation systems, the researchers developed a model of how the scheduling system is supposed to work. Then, “what our method does is, we’re essentially trying to run the model backwards.” Looking at the observed outcomes, the model allows them to work back to see what kinds of initial conditions could have produced those outcomes.
While the data on the actual failures were sparse, the extensive data on typical operations helped in teaching the computational model “what is feasible, what is possible, what’s the realm of physical possibility here,” Dawson says. “That gives us the domain knowledge to then say, in this extreme event, given the space of what’s possible, what’s the most likely explanation” for the failure.
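Conceptually, the procedure resembles the toy inference sketch below. This stand-in uses made-up numbers and a deliberately crude forward model; it is not the team's CalNF tool. The idea: score candidate hidden parameters by how well a simulated schedule reproduces the observed delays, weighted by how plausible those parameters are given normal operations, then keep the most likely explanation.

```python
import numpy as np

def simulate_delays(reserves):
    """Toy forward model: fewer reserve aircraft -> longer average
    delays (minutes). The functional form is invented."""
    return 60.0 / (1.0 + reserves)

observed_delay = 20.0              # minutes, hypothetical observation
prior_mean, prior_std = 3.0, 1.0   # reserves/airport, fit on normal days

candidates = np.linspace(0.0, 6.0, 61)
log_posterior = (
    -0.5 * ((simulate_delays(candidates) - observed_delay) / 5.0) ** 2  # fit
    - 0.5 * ((candidates - prior_mean) / prior_std) ** 2                # prior
)
best = candidates[np.argmax(log_posterior)]
print(f"Most likely hidden reserve level: {best:.2f} aircraft")
```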
This could lead to a real-time monitoring system, he says, in which data on normal operations are constantly compared with current data to determine what the trend looks like. “Are we trending toward normal, or are we trending toward extreme events?” Seeing signs of impending issues could allow for preemptive measures, such as redeploying reserve aircraft in advance to areas of anticipated problems.
Work on developing such systems is ongoing in her lab, Fan says. In the meantime, they have produced an open-source tool for analyzing such failures, called CalNF, which is available for anyone to use. Meanwhile, Dawson, who earned his doctorate last year, is working as a postdoc to apply the methods developed in this work to understanding failures in power networks.
The research team also included Max Li from the University of Michigan and Van Tran from Harvard University. The work was supported by NASA, the Air Force Office of Scientific Research, and the MIT-DSTA program.
A new technology for extending the shelf life of produce
We’ve all felt the sting of guilt when fruit and vegetables go bad before we could eat them. Now, researchers from MIT and the Singapore-MIT Alliance for Research and Technology (SMART) have shown they can extend the shelf life of harvested plants by injecting them with melatonin using biodegradable microneedles.
That’s a big deal because the problem of food waste goes way beyond our salads. More than 30 percent of the world’s food is lost after it’s harvested — enough to feed more than 1 billion people. Refrigeration is the most common way to preserve foods, but it requires energy and infrastructure that many regions of the world can’t afford or lack access to.
The researchers believe their system could offer an alternative or complement to refrigeration. Central to their approach are patches of silk microneedles. The microneedles can get through the tough, waxy skin of plants without causing a stress response, and deliver precise amounts of melatonin into plants’ inner tissues.
“This is the first time that we’ve been able to apply these microneedles to extend the shelf life of a fresh-cut crop,” says Benedetto Marelli, the study’s senior author, associate professor of civil and environmental engineering at MIT, and the director of the Wild Cards mission of the MIT Climate Project. “We thought we could use this technology to deliver something that could regulate or control the plant’s post-harvest physiology. Eventually, we looked at hormones, and melatonin is already used by plants to regulate such functions. The food we waste could feed about 1.6 billion people. Even in the U.S., this approach could one day expand access to healthy foods.”
For the study, which appears today in Nano Letters, Marelli and researchers from SMART applied small patches of the microneedles containing melatonin to the base of the leafy vegetable pak choy. After application, the researchers found the melatonin was able to extend the vegetables’ shelf life by four days at room temperature and 10 days when refrigerated, which could allow more crops to reach consumers before they’re wasted.
“Post-harvest waste is a huge issue. This problem is extremely important in emerging markets around Africa and Southeast Asia, where many crops are produced but can't be maintained in the journey from farms to markets,” says Sarojam Rajani, co-senior author of the study and a senior principal investigator at the Temasek Life Sciences Laboratory in Singapore.
Plant destressors
For years, Marelli’s lab has been exploring the use of silk microneedles for things like delivering nutrients to crops and monitoring plant health. Microneedles made from silk fibroin protein are nontoxic and biodegradable, and Marelli’s previous work has described ways of manufacturing them at scale.
To test the microneedles’ ability to extend the shelf life of food, the researchers studied how well the patches could deliver a hormone known to affect the senescence process. Aside from helping humans sleep, melatonin is also a natural hormone in many plants that helps them regulate growth and aging.
“The dose of melatonin we’re delivering is so low that it’s fully metabolized by the crops, so it would not significantly increase the amount of melatonin normally present in the food; we would not ingest more melatonin than usual,” Marelli says. “We chose pak choy because it's a very important crop in Asia, and also because pak choy is very perishable.”
Pak choy is typically harvested by cutting the leafy plant from the root system, exposing a shoot base that provides easy access to the vascular bundles that distribute water and nutrients to the rest of the plant. To begin their study, the researchers first used their microneedles to inject a fluorescent dye into the base to confirm that the vasculature could spread the dye throughout the plant.
The researchers then compared the shelf life of untreated pak choy plants with that of plants that had been sprayed with or dipped into melatonin, and found no difference.
With their baseline shelf life established, the researchers applied small patches of the melatonin-filled microneedles to the bottom of pak choy plants by hand. They then stored the treated plants, along with controls, in plastic boxes both at room temperature and under refrigeration.
The team evaluated the plants by monitoring their weight, visual appearance, and concentration of chlorophyll, a green pigment that decreases as plants age.
At room temperature, the leaves of the untreated control group began yellowing within two or three days. By the fourth day, the yellowing accelerated to the point that the plants likely could not be sold. Plants treated with the melatonin-loaded silk microneedles, in contrast, remained green on day five, and the yellowing process was significantly delayed. The weight loss and chlorophyll reduction of treated plants also slowed significantly at room temperature. Overall, the researchers estimated the microneedle-treated plants retained their saleable value until the eighth day.
“We clearly saw we could enhance the shelf life of pak choy without the cold chain,” Marelli says.
In refrigerated conditions of about 40 degrees Fahrenheit, plant yellowing was delayed by about five days on average, with treated plants remaining relatively green until day 25.
“Spectrophotometric analysis of the plants indicated the treated plants had higher antioxidant activity, while gene analysis showed the melatonin set off a protective chain reaction inside the plants, preserving chlorophyll and adjusting hormones to slow senescence,” says Monika Jangir, co-first author and former postdoc at the Temasek Life Sciences Laboratory.
“We studied melatonin’s effects and saw it improves the stress response of the plant after it’s been cut, so it’s basically decreasing the stress that plants experience, and that extends its shelf life,” says Yangyang Han, co-first author and research scientist at the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group at SMART.
Toward postharvest preservation
While the microneedles could make it possible to minimize waste when compared to other application methods like spraying or dipping crops, the researchers say more work is needed to deploy microneedles at scale. For instance, although the researchers applied the microneedle patches by hand in this experiment, the patches could be applied using tractors, autonomous drones, and other farming equipment in the future.
“For this to be widely adopted, we’d need to reach a performance versus cost threshold to justify its use,” Marelli explains. “This method would need to become cheap enough to be used by farmers regularly.”
Moving forward, the research team plans to study the effects of a variety of hormones on different crops using its microneedle delivery technology. The team believes the technique should work with all kinds of produce.
“We’re going to continue to analyze how we can increase the impact this can have on the value and quality of crops,” Marelli says. “For example, could this let us modulate the nutritional values of the crop, how it’s shaped, its texture, etc.? We're also going to continue looking into scaling up the technology so this can be used in the field.”
The work was supported by the Singapore-MIT Alliance for Research and Technology (SMART) and the National Research Foundation of Singapore.