MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT mechanical engineering course invites students to “build with biology”

Wed, 05/28/2025 - 3:25pm

MIT course 2.797/2.798 (Molecular, Cellular, and Tissue Biomechanics) teaches students about the role that mechanics plays in biology, with a focus on biomechanics and mechanobiology: “Two words that sound similar, but are actually very different,” says Ritu Raman, the Eugene Bell Career Development Professor of Tissue Engineering in the MIT Department of Mechanical Engineering.

Biomechanics, Raman explains, describes the mechanical properties of biological materials, while mechanobiology examines how cells feel and respond to forces in their environment. “When students take this class, they're getting a really unique fusion of not only fundamentals of mechanics, but also emerging research in biomechanics and mechanobiology,” says Raman.

Raman and Peter So, professor of mechanical engineering, co-teach the course, which So says offers a concrete application of some of the basic theory. “We talk about some of the applications and why the fundamental concept is important.”

The pair recently revamped the curriculum to incorporate hands-on lab learning through the campus BioMakers space and the Safety, Health, Environmental Discovery Lab (SHED) bioprinting makerspace. This updated approach invites students to “build with biology” and watch in real time how cells respond to forces in their environment. The change was welcomed from the start: the first offering drew the course’s largest-ever enrollment.

“Many concepts in biomechanics and mechanobiology can be hard to conceptualize because they happen at length scales that we can't typically visualize,” Raman explains. “In the past, we've done our best to convey these ideas via pictures, videos, and equations. The lab component adds another dimension to our teaching methods. We hope that students seeing firsthand how living cells sense and respond to their environment helps the concepts sink in deeper and last longer in their memories.”

Makerspaces, which are located throughout the campus, offer tools and workspace for MIT community members to invent, prototype, and bring ideas to life. The Institute has over 40 design/build/project spaces that include facilities for 3D printing, glassblowing, woodworking, metalworking, and more. The BioMakers space welcomes students engaged in hands-on bioengineering projects. SHED similarly leverages cutting-edge technologies across disciplines, including a new space focused on 3D bio-printing.

Kamakshi Subramanian, a cross-registered Wellesley College student, says she encountered a polymer model in a prior thermodynamics class, but wondered how she’d apply it. Taking this course gave her a new frame of reference. “I was like, ‘Why are we doing this?’ … and then I came here and I was like, ‘OK, thinking about entropy in this way is actually useful.’”

Raman says there’s a special kind of energy and excitement associated with being in a lab versus staying in the classroom. “It reminds me of going on a field trip when I was in elementary school,” she says, adding that seeing that energy in students during the course’s first run inspired the instructors to expand lab offerings even further in the second offering.  

“[In addition to] one main lab on the biomechanics of muscle contraction, we have added a second lab where students visit the SHED makerspace to learn about 3D bio-printing,” she says. “We have also incorporated an optional hands-on component into the final project, [and] most students in the class are taking advantage of this extra lab time to try exciting curiosity-driven experiments at the intersection of biology and mechanics.”

Raman and So, who were joined in teaching the second iteration of the course this semester by professor of biological engineering Mark Bathe, say they hope to keep expanding the hands-on time built into the class in the coming years.

Ayi Agboglo, a Harvard-MIT Health Sciences and Technology graduate student who is studying the physical properties of red blood cells relevant to sickle cell disease (SCD), says taking the course introduced him to studies where mathematical models extracted mechanical properties of red blood cell (RBC) membranes in the context of SCD.

“In SCD, deoxygenation causes rigid protein fibers to form within cells, altering their mechanical and physical properties,” he explains. “This field of work has largely informed my research which focuses on measuring the physical properties of RBCs (mass, volume, and density) in both oxygenated and deoxygenated states. These measurements aim to reveal patient-specific differences in fiber formation — the primary pathological event in SCD — potentially uncovering new therapeutic opportunities.”

Agboglo, who works in Professor Cullen Buie’s lab at MIT and John Higgins’ lab at MGH, says, “I left [the class] not only understanding more about molecular mechanics, but also understanding just fundamentals about thermodynamics and energy and things that I think will be useful as a scientist in general.”

In addition to lab and lecture time, 2.797/2.798 students also had the opportunity to work with the Museum of Science, Boston, to generate open-source educational resources about the interplay between mechanics and biology. These resources are now available on the museum's website.

A high-fat diet sets off metabolic dysfunction in cells, leading to weight gain

Wed, 05/28/2025 - 11:00am

Consuming a high-fat diet can lead to a variety of health problems — not only weight gain but also an increased risk of diabetes and other chronic diseases.

At the cellular level, hundreds of changes take place in response to a high-fat diet. MIT researchers have now mapped out some of those changes, with a focus on metabolic enzyme dysregulation that is associated with weight gain.

Their study, conducted in mice, revealed that hundreds of enzymes involved in sugar, lipid, and protein metabolism are affected by a high-fat diet, and that these disruptions lead to an increase in insulin resistance and an accumulation of damaging molecules called reactive oxygen species. These effects were more pronounced in males than females.

The researchers also showed that most of the damage could be reversed by giving the mice an antioxidant along with their high-fat diet.

“Under metabolic stress conditions, enzymes can be affected to produce a more harmful state than what was initially there,” says Tigist Tamir, a former MIT postdoc. “Then what we’ve shown with the antioxidant study is that you can bring them to a different state that is less dysfunctional.”

Tamir, who is now an assistant professor of biochemistry and biophysics at the University of North Carolina at Chapel Hill School of Medicine, is the lead author of the new study, which appears today in Molecular Cell. Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering and a member of the Koch Institute for Integrative Cancer Research at MIT, is the senior author of the paper.

Metabolic networks

In previous work, White’s lab has found that a high-fat diet stimulates cells to turn on many of the same signaling pathways that are linked to chronic stress. In the new study, the researchers wanted to explore the role of enzyme phosphorylation in those responses.

Phosphorylation, or the addition of a phosphate group, can turn enzyme activity on or off. This process, which is controlled by enzymes called kinases, gives cells a way to quickly respond to environmental conditions by fine-tuning the activity of existing enzymes within the cell.

Many enzymes involved in metabolism — the conversion of food into the building blocks of key molecules such as proteins, lipids, and nucleic acids — are known to undergo phosphorylation.

The researchers began by analyzing databases of human enzymes that can be phosphorylated, focusing on enzymes involved in metabolism. They found that many of the metabolic enzymes that undergo phosphorylation belong to a class called oxidoreductases, which transfer electrons from one molecule to another. Such enzymes are key to metabolic reactions such as glycolysis — the breakdown of glucose into a smaller molecule known as pyruvate.

Among the hundreds of enzymes the researchers identified are IDH1, which is involved in breaking down sugar to generate energy, and AKR1C1, which is required for metabolizing fatty acids. The researchers also found that many phosphorylated enzymes are important for the management of reactive oxygen species, which are necessary for many cell functions but can be harmful if too many of them accumulate in a cell.

Phosphorylation of these enzymes can lead them to become either more or less active, as they work together to respond to the intake of food. Most of the metabolic enzymes identified in this study are phosphorylated on sites found in regions of the enzyme that are important for binding to the molecules that they act upon or for forming dimers — pairs of proteins that join together to form a functional enzyme.

“Tigist’s work has really shown categorically the importance of phosphorylation in controlling the flux through metabolic networks. It’s fundamental knowledge that emerges from this systemic study that she’s done, and it’s something that is not classically captured in the biochemistry textbooks,” White says.

Out of balance

To explore these effects in an animal model, the researchers compared two groups of mice, one that received a high-fat diet and one that consumed a normal diet. They found that, overall, phosphorylation of metabolic enzymes led to a dysfunctional state in which cells were in redox imbalance, meaning they were producing more reactive oxygen species than they could neutralize. These mice also became overweight and developed insulin resistance.

“In the context of continued high fat diet, what we see is a gradual drift away from redox homeostasis towards a more disease-like setting,” White says.

These effects were much more pronounced in male mice than in female mice. Female mice were better able to compensate for the high-fat diet by activating pathways involved in processing fat and metabolizing it for other uses, the researchers found.

“One of the things we learned is that the overall systemic effect of these phosphorylation events led to, especially in males, an increased imbalance in redox homeostasis. They were expressing a lot more stress and a lot more of the metabolic dysfunction phenotype compared to females,” Tamir says.

The researchers also found that many of these effects were reversed when mice on a high-fat diet were given an antioxidant called BHA. These mice showed a significant decrease in weight gain and did not become prediabetic, unlike the other mice fed a high-fat diet.

It appears that the antioxidant treatment leads cells back into a more balanced state, with fewer reactive oxygen species, the researchers say. Additionally, metabolic enzymes showed a systemic rewiring and changed state of phosphorylation in those mice.

“They’re experiencing a lot of metabolic dysfunction, but if you co-administer something that counters that, then they have enough reserve to maintain some sort of normalcy,” Tamir says. “The study suggests that there is something biochemically happening in cells to bring them to a different state — not a normal state, just a different state in which now, at the tissue and organism levels, the mice are healthier.”

In her new lab at the University of North Carolina, Tamir now plans to further explore whether antioxidant treatment could be an effective way to prevent or treat obesity-associated metabolic dysfunction, and what the optimal timing of such a treatment would be.

The research was funded in part by the Burroughs Wellcome Fund, the National Cancer Institute, the National Institutes of Health, the Ludwig Center at MIT, and the MIT Center for Precision Cancer Medicine.

$20 million gift supports theoretical physics research and education at MIT 

Wed, 05/28/2025 - 9:00am

A $20 million gift from the Leinweber Foundation, in addition to a $5 million commitment from the MIT School of Science, will support theoretical physics research and education at MIT.

Leinweber Foundation gifts to five institutions, totaling $90 million, will establish the newly renamed MIT Center for Theoretical Physics – A Leinweber Institute within the Department of Physics, affiliated with the Laboratory for Nuclear Science in the School of Science; Leinweber Institutes for Theoretical Physics at three other top research universities, the University of Michigan, the University of California at Berkeley, and the University of Chicago; and a Leinweber Forum for Theoretical and Quantum Physics at the Institute for Advanced Study.

“MIT has one of the strongest and broadest theory groups in the world,” says Professor Washington Taylor, the director of the newly funded center and a leading researcher in string theory and its connection to observable particle physics and cosmology.

“This landmark endowment from the Leinweber Foundation will enable us to support the best graduate students and postdoctoral researchers to develop their own independent research programs and to connect with other researchers in the Leinweber Institute network. By pledging to support this network and fundamental curiosity-driven science, Larry Leinweber and his family foundation have made a huge contribution to maintaining a thriving scientific enterprise in the United States in perpetuity.”

The Leinweber Foundation’s investment across five institutions — constituting the largest philanthropic commitment ever for theoretical physics research, according to the Science Philanthropy Alliance, a nonprofit organization that promotes philanthropic funding for science — will strengthen existing programs at each institution and foster collaboration across the universities. Recipient institutions will work both independently and collaboratively to explore foundational questions in theoretical physics. Each institute will continue to shape its own research focus and programs, while also committing to big-picture cross-institutional convenings around topics of shared interest. Moreover, each institute will have significantly more funding for graduate students and postdocs, including fellowship support for three to eight fully endowed Leinweber Physics Fellows at each institute.

“This gift is a commitment to America’s scientific future,” says Larry Leinweber, founder and president of the Leinweber Foundation. “Theoretical physics may seem abstract to many, but it is the tip of the spear for innovation. It fuels our understanding of how the world works and opens the door to new technologies that can shape society for generations. As someone who has had a lifelong fascination with theoretical physics, I hope this investment not only strengthens U.S. leadership in basic science, but also inspires curiosity, creativity, and groundbreaking discoveries for generations to come.”

The gift to MIT will create a postdoc program that, once fully funded, will support up to six postdocs at a time, with two selected each year for three-year terms. In addition, the gift will provide financial support, including fellowships, for up to six graduate students per year studying theoretical physics. The goal is to attract top talent to the MIT Center for Theoretical Physics – A Leinweber Institute and to support its ongoing research programs more robustly.

A portion of the funding will also go toward visitors, seminars, and other scholarly activities of current postdocs, faculty, and students in theoretical physics, as well as toward administrative support.

“Graduate students are the heart of our country’s scientific research programs. Support for their education to become the future leaders of the field is essential for the advancement of the discipline,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics.

The Leinweber Foundation gift is the second significant gift for the center. “We are always grateful to Virgil Elings, whose generous gift helped make possible the space that houses the center,” says Deepto Chakrabarty, head of the Department of Physics. Elings PhD ’66, co-founder of Digital Instruments, which designed and sold scanning probe microscopes, made his gift more than 20 years ago to support a space for theoretical physicists to collaborate.

“Gifts like those from Larry Leinweber and Virgil Elings are critical, especially now in this time of uncertain funding from the federal government for support of fundamental scientific research carried out by our nation’s leading postdocs, research scientists, faculty and students,” adds Mavalvala.

Professor Tracy Slatyer, whose work is motivated by questions of fundamental particle physics — particularly the nature and interactions of dark matter — will become the next director of the MIT Center for Theoretical Physics – A Leinweber Institute beginning this fall. Slatyer will join Mavalvala, Taylor, Chakrabarty, and the rest of the theoretical physics community at a dedication ceremony planned for the near future.

The Leinweber Foundation was founded in 2015 by software entrepreneur Larry Leinweber, and has worked with the Science Philanthropy Alliance since 2021 to shape its philanthropic strategy. “It’s been a true pleasure to work with Larry and the Leinweber family over the past four years and to see their vision take shape,” says France Córdova, president of the Science Philanthropy Alliance. “Throughout his life, Larry has exemplified curiosity, intellectual openness, and a deep commitment to learning. This gift reflects those values, ensuring that generations of scientists will have the freedom to explore, to question, and to pursue ideas that could change how we understand the universe.”

MIT D-Lab students design global energy solutions through collaboration

Wed, 05/28/2025 - 12:00am

This semester, MIT D-Lab students built prototype solutions to help farmers in Afghanistan, people living in informal settlements in Argentina, and rural poultry farmers in Cameroon. The projects span continents and collectively stand to improve thousands of lives — and they all trace back to two longstanding MIT D-Lab classes.

For nearly two decades, 2.651/EC.711 (Introduction to Energy in Global Development) and 2.652/EC.712 (Applications of Energy in Global Development) have paired students with international organizations and communities to learn D-Lab’s participatory approach to design and study energy technologies in low-resource environments. Hundreds of students from across MIT have taken the courses, which feature visits from partners and trips to the communities after the semester. They often discover a passion for helping people in low-resource settings that lasts a lifetime.

“Through the trips, students often gain an appreciation for what they have at home, and they can’t forget about what they see,” says D-Lab instructor Josh Maldonado ’23, who took both courses as a student. “For me, it changed my entire career. Students maintain relationships with the people they work with. They stay on the group chats with community members and meet up with them when they travel. They come back and want to mentor for the class. You can just see it has a lasting effect.”

The introductory course takes place each spring and is followed by summer trips for students. The applications class, which is more focused on specific projects, is held in the fall and followed by student travel over winter break.

“MIT has always advocated for going out and impacting the world,” Maldonado says. “The fact that we can use what we learn here in such a meaningful way while still a student is awesome. It gets back to MIT’s motto, ‘mens et manus’ (‘mind and hand’).”

Curriculum for impact

Introduction to Energy in Global Development has been taught since around 2008, with past projects focusing on mitigating the effects of aquatic weeds for fishermen in Ghana, making charcoal for cookstoves in Uganda, and creating brick evaporative coolers to extend the shelf life of fruits and vegetables in Mali.

The class follows MIT D-Lab’s participatory design philosophy in which students design solutions in close collaboration with local communities. Along the way, students learn about different energy technologies and how they might be implemented cheaply in rural communities that lack basic infrastructure.

“In product design, the idea is to get out and meet your customer where they are,” Maldonado explains. “The problem is our partners are often in remote, low-resource regions of the world. We put a big emphasis on designing with the local communities and building their creative capacity to show them they can build solutions themselves.”

Both courses are open to graduate and undergraduate students from across MIT, along with students from Harvard University and Wellesley College. MIT senior Kanokwan Tungkitkancharoen took the introductory class this spring.

“There are students from chemistry, computer science, civil engineering, policy, and more,” says Tungkitkancharoen. “I think that convergence models how things get done in real life. The class also taught me how to communicate the same information in different ways to cater to different people. It helped me distill my approach to: What is this person trying to learn, and how can I convey that information?”

Tungkitkancharoen’s team worked with a nonprofit called Weatherizers Without Borders to implement weatherization strategies that enhance housing conditions and environmental resilience for people in the southern Argentinian community of Bariloche.

The team built model homes and used heat-sensing cameras to show the impact of weatherization strategies to locals and policymakers in the region.

“Our partners live in self-built homes, but the region is notorious for being very cold in the winter and very hot in the summer,” Tungkitkancharoen says. “We’re helping our partners retrofit homes so they can withstand the weather better. Before the semester, I was interested in working directly with people impacted by these technologies and the current climate situation. D-Lab helped me work with people on the ground, and I’ve been super grateful to our community partners.”

The project to design micro-irrigation systems to support agricultural productivity and water conservation in Afghanistan is in partnership with the Ecology and Conservation Organization of Afghanistan and a team from a local university in Afghanistan.

“I love the process of coming into class with a practical question you need to solve and working closely with community partners,” says MIT master’s student Khadija Ghanizada, who has served as a teaching assistant for both the introductory and applications courses. “All of these projects will have a huge impact, but being from Afghanistan, I know this will make a difference because it’s a landlocked country, it’s dealing with droughts, and 80 percent of our economy depends on agriculture. We also make sure students are thinking about scalability of their solutions, whether scaling worldwide or just nationally. Every project has its own impact story.”

Meeting community partners

Now that the spring semester is over, many students from the introductory class will travel over the summer, with instructors and local guides, to the regions they studied.

“The traveling and implementation are things students always look forward to,” Maldonado says. “Students do a lot of prep work, thinking about the tools they need, the local resources they need, and working with partners to acquire those resources.”

Following travel, students write a report on how the trip went, which helps D-Lab refine the course for next semester.

“Oftentimes instructors are also doing research in these regions while they teach the class,” Maldonado says. “To be taught by people who were just in the field two weeks before the class started, and to see pictures of what they’re doing, is really powerful.”

Students who have taken the class have gone on to careers in international development and at nonprofits, and have started companies that grow the impact of their class projects. But the most immediate impact can be seen in the communities that students work with.

“These solutions should be able to be built locally, sourced locally, and potentially also lead to the creation of localized markets based around the technology,” Maldonado says. “Almost everything the D-Lab does is open-sourced, so when we go to these communities, we don’t just teach people how to use these solutions, we teach them how to make them. Technology, if implemented correctly by mindful engineers and scientists, can be highly adopted and can grow a community of makers and fabricators and local businesses.”

Shaping the future through systems thinking

Tue, 05/27/2025 - 3:20pm

Long before she stepped into a lab, Ananda Santos Figueiredo was stargazing in Brazil, captivated by the cosmos and feeding her curiosity about science through pop culture, books, and the internet. She was drawn to astrophysics for its blend of visual wonder and mathematics.

Even as a child, Santos sensed her aspirations reaching beyond the boundaries of her hometown. “I’ve always been drawn to STEM,” she says. “I had this persistent feeling that I was meant to go somewhere else to learn more, explore, and do more.”

Her parents saw their daughter’s ambitions as an opportunity to create a better future. The summer before her sophomore year of high school, her family moved from Brazil to Florida. She recalls that moment as “a big leap of faith in something bigger, and we had no idea how it would turn out.” She was certain of one thing: She wanted an education that was both technically rigorous and deeply expansive, one that would allow her to pursue all her passions.

At MIT, she found exactly what she was seeking in a community and curriculum that matched her curiosity and ambition. “I’ve always associated MIT with something new and exciting that was grasping towards the very best we can achieve as humans,” Santos says, emphasizing the use of technology and science to significantly impact society. “It’s a place where people aren’t afraid to dream big and work hard to make it a reality.”

As a first-generation college student, she carried the weight of financial stress and the uncertainty that comes with being the first in her family to navigate college in the U.S. But she found a sense of belonging in the MIT community. “Being a first-generation student helped me grow,” she says. “It inspired me to seek out opportunities and help support others too.”

She channeled that energy into student government roles for the undergraduate residence halls. Through Dormitory Council (DormCon) and her dormitory, Simmons Hall, her voice could help shape life on campus. She began as reservations chair for Simmons Hall, went on to become its president, and was later elected DormCon’s dining chair and vice president. She’s worked to improve dining hall operations and has planned major community events like Simmons Hall’s 20th anniversary and DormCon’s inaugural Field Day.

Now, a senior about to earn her bachelor’s degree, Santos says MIT’s motto, “mens et manus” — “mind and hand” — has deeply resonated with her from the start. “Learning here goes far beyond the classroom,” she says. “I’ve been surrounded by people who are passionate and purposeful. That energy is infectious. It’s changed how I see myself and what I believe is possible.”

Charting her own course

Initially a physics major, Santos’ academic path took a turn after a transformative internship with the World Bank’s data science lab between her sophomore and junior years. There, she used her coding skills to study the impacts of heat waves in the Philippines. The experience opened her eyes to the role technology and data can play in improving lives and broadened her view of what a STEM career could look like.

“I realized I didn’t want to just study the universe — I wanted to change it,” she says. “I wanted to join systems thinking with my interest in the humanities, to build a better world for people and communities.”

When MIT launched a new major in climate system science and engineering (Course 1-12) in 2023, Santos was the first student to declare it. The interdisciplinary structure of the program, blending climate science, engineering, energy systems, and policy, gave her a framework to connect her technical skills to real-world sustainability challenges.

She tailored her coursework to align with her passions and career goals, applying her physics background (now her minor) to understand problems in climate, energy, and sustainable systems. “One of the most powerful things about the major is the breadth,” she says. “Even classes that aren’t my primary focus have expanded how I think.”

Hands-on fieldwork has been a cornerstone of her learning. During MIT’s Independent Activities Period (IAP), she studied climate impacts in Hawai’i through Course 1.091 (Traveling Research Environmental Experiences, or TREX). This year, she took Course 1.096/10.496 (Design of Sustainable Polymer Systems) under MISTI’s Global Classroom program. The latter class brought her to the middle of the Amazon Rainforest to see what the future of plastic production could look like with products from the Amazon. “That experience was incredibly eye-opening,” she explains. “It helped me build a bridge between my own background and the kind of problems that I want to solve in the future.”

Santos also found enjoyment beyond labs and lectures. A member of the MIT Shakespeare Ensemble since her first year, she took to the stage in her final spring production of “Henry V,” performing as both the Chorus and Kate. “The ensemble’s collaborative spirit and the way it brings centuries-old texts to life has been transformative,” she adds.

Her passion for the arts also intersected with her work on the MIT Lecture Series Committee. She helped host a special screening of the film “Sing Sing” in collaboration with MIT’s Educational Justice Institute (TEJI). That connection led her to enroll in a TEJI course, illustrating the surprising and meaningful ways that different parts of MIT’s ecosystem overlap. “It’s one of the beautiful things about MIT,” she says. “You stumble into experiences that deeply change you.”

Throughout her time at MIT, the community of passionate, sustainability-focused individuals has been a major source of inspiration. She’s been actively involved with the MIT Office of Sustainability’s decarbonization initiatives and participated in the Climate and Sustainability Scholars Program.

Santos acknowledges that working in sustainability can sometimes feel overwhelming. “Tackling the challenges of sustainability can be discouraging,” she says. “The urgency to create meaningful change in a short period of time can be intimidating. But being surrounded by people who are actively working on it is so much better than not working on it at all.”

Looking ahead, she plans to pursue graduate studies in technology and policy, with aspirations to shape sustainable development, whether through academia, international organizations, or diplomacy.

“The most fulfilling moments I’ve had at MIT are when I’m working on hard problems while also reflecting on who I want to be, what kind of future I want to help create, and how we can be better and kinder to each other,” she says. “That’s what excites me — solving real problems that matter.”

New fuel cell could enable electric aviation

Tue, 05/27/2025 - 11:00am

Batteries are nearing their limits in terms of how much power they can store for a given weight. That’s a serious obstacle for energy innovation and the search for new ways to power airplanes, trains, and ships. Now, researchers at MIT and elsewhere have come up with a solution that could help electrify these transportation systems.

Instead of a battery, the new concept is a kind of fuel cell — which is similar to a battery but can be quickly refueled rather than recharged. In this case, the fuel is liquid sodium metal, an inexpensive and widely available commodity. The other side of the cell is just ordinary air, which serves as a source of oxygen atoms. In between, a layer of solid ceramic material serves as the electrolyte, allowing sodium ions to pass freely through, and a porous air-facing electrode helps the sodium to chemically react with oxygen and produce electricity.

In a series of experiments with a prototype device, the researchers demonstrated that this cell could carry more than three times as much energy per unit of weight as the lithium-ion batteries used in virtually all electric vehicles today. Their findings are being published today in the journal Joule, in a paper by MIT doctoral students Karen Sugano, Sunil Mair, and Saahir Ganti-Agrawal; professor of materials science and engineering Yet-Ming Chiang; and five others.

“We expect people to think that this is a totally crazy idea,” says Chiang, who is the Kyocera Professor of Ceramics. “If they didn’t, I’d be a bit disappointed because if people don’t think something is totally crazy at first, it probably isn’t going to be that revolutionary.”

And this technology does appear to have the potential to be quite revolutionary, he suggests. In particular, for aviation, where weight is especially crucial, such an improvement in energy density could be the breakthrough that finally makes electrically powered flight practical at significant scale.

“The threshold that you really need for realistic electric aviation is about 1,000 watt-hours per kilogram,” Chiang says. Today’s electric vehicle lithium-ion batteries top out at about 300 watt-hours per kilogram — nowhere near what’s needed. Even at 1,000 watt-hours per kilogram, he says, that wouldn’t be enough to enable transcontinental or trans-Atlantic flights.

That’s still beyond reach for any known battery chemistry, but Chiang says that getting to 1,000 watt-hours per kilogram would be an enabling technology for regional electric aviation, which accounts for about 80 percent of domestic flights and 30 percent of the emissions from aviation.
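The stakes of that threshold are easy to see with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the 3,000-kilowatt-hour flight energy budget is an assumed figure for the sake of the example, not a number from the study:

```python
# Illustrative arithmetic: mass of an onboard energy store needed to
# carry a fixed energy budget at different specific energies (Wh/kg).
def pack_mass_kg(energy_kwh: float, specific_energy_wh_per_kg: float) -> float:
    """Mass (kg) = energy (Wh) / specific energy (Wh/kg)."""
    return energy_kwh * 1000.0 / specific_energy_wh_per_kg

budget_kwh = 3000.0  # assumed regional-flight energy budget (hypothetical)
print(pack_mass_kg(budget_kwh, 300))   # ~today's lithium-ion: 10000.0 kg
print(pack_mass_kg(budget_kwh, 1000))  # at the 1,000 Wh/kg threshold: 3000.0 kg
```

At the same energy budget, tripling the specific energy cuts the store's mass to a third, which is why the jump from roughly 300 to 1,000 watt-hours per kilogram matters so much for flight.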

The technology could be an enabler for other sectors as well, including marine and rail transportation. “They all require very high energy density, and they all require low cost,” he says. “And that’s what attracted us to sodium metal.”

A great deal of research has gone into developing lithium-air or sodium-air batteries over the last three decades, but it has been hard to make them fully rechargeable. “People have been aware of the energy density you could get with metal-air batteries for a very long time, and it’s been hugely attractive, but it’s just never been realized in practice,” Chiang says.

By using the same basic electrochemical concept, only making it a fuel cell instead of a battery, the researchers were able to get the advantages of the high energy density in a practical form. Unlike a battery, whose materials are assembled once and sealed in a container, with a fuel cell the energy-carrying materials go in and out.

The team produced two different versions of a lab-scale prototype of the system. In one, called an H cell, two vertical glass tubes are connected by a tube across the middle, which contains a solid ceramic electrolyte material and a porous air electrode. Liquid sodium metal fills the tube on one side, and air flows through the other, providing the oxygen for the electrochemical reaction at the center, which ends up gradually consuming the sodium fuel. The other prototype uses a horizontal design, with a tray of the electrolyte material holding the liquid sodium fuel. The porous air electrode, which facilitates the reaction, is affixed to the bottom of the tray. 

Tests using an air stream with a carefully controlled humidity level produced an energy density of more than 1,500 watt-hours per kilogram at the level of an individual “stack,” which would translate to over 1,000 watt-hours per kilogram at the full system level, Chiang says.

The researchers envision that to use this system in an aircraft, fuel packs containing stacks of cells, like racks of food trays in a cafeteria, would be inserted into the fuel cells; the sodium metal inside these packs gets chemically transformed as it provides the power. A stream of its chemical byproduct is given off, and in the case of aircraft this would be emitted out the back, not unlike the exhaust from a jet engine.

But there’s a very big difference: There would be no carbon dioxide emissions. Instead, the emissions, consisting of sodium oxide, would actually soak up carbon dioxide from the atmosphere. This compound would quickly combine with moisture in the air to make sodium hydroxide — a material commonly used as a drain cleaner — which readily combines with carbon dioxide to form a solid material, sodium carbonate, which in turn forms sodium bicarbonate, otherwise known as baking soda.
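The cascade described above can be summarized as a chain of reactions. This is a schematic rendering of the article's description, not the paper's exact stoichiometry:

```latex
\begin{align*}
4\,\mathrm{Na} + \mathrm{O_2} &\rightarrow 2\,\mathrm{Na_2O}
  && \text{(discharge product)}\\
\mathrm{Na_2O} + \mathrm{H_2O} &\rightarrow 2\,\mathrm{NaOH}
  && \text{(reacts with moisture in air)}\\
2\,\mathrm{NaOH} + \mathrm{CO_2} &\rightarrow \mathrm{Na_2CO_3} + \mathrm{H_2O}
  && \text{(captures carbon dioxide)}\\
\mathrm{Na_2CO_3} + \mathrm{CO_2} + \mathrm{H_2O} &\rightarrow 2\,\mathrm{NaHCO_3}
  && \text{(baking soda)}
\end{align*}
```

Each step is thermodynamically downhill, which is what Chiang means when he says the cascade is spontaneous.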

“There’s this natural cascade of reactions that happens when you start with sodium metal,” Chiang says. “It’s all spontaneous. We don’t have to do anything to make it happen, we just have to fly the airplane.”

As an added benefit, if the final product, the sodium bicarbonate, ends up in the ocean, it could help to de-acidify the water, countering another of the damaging effects of greenhouse gases.

Using sodium hydroxide to capture carbon dioxide has been proposed as a way of mitigating carbon emissions, but on its own, it’s not an economic solution because the compound is too expensive. “But here, it’s a byproduct,” Chiang explains, so it’s essentially free, producing environmental benefits at no cost.

Importantly, the new fuel cell is inherently safer than many other batteries, he says. Sodium metal is extremely reactive and must be well-protected. As with lithium batteries, sodium can spontaneously ignite if exposed to moisture. “Whenever you have a very high energy density battery, safety is always a concern, because if there’s a rupture of the membrane that separates the two reactants, you can have a runaway reaction,” Chiang says. But in this fuel cell, one side is just air, “which is dilute and limited. So you don’t have two concentrated reactants right next to each other. If you’re pushing for really, really high energy density, you’d rather have a fuel cell than a battery for safety reasons.”

While the device so far exists only as a small, single-cell prototype, Chiang says the system should be quite straightforward to scale up to practical sizes for commercialization. Members of the research team have already formed a company, Propel Aero, to develop the technology. The company is currently housed in MIT’s startup incubator, The Engine.

Producing enough sodium metal to enable widespread, full-scale global implementation of this technology should be practical, since the material has been produced at large scale before. When leaded gasoline was the norm, before it was phased out, sodium metal was used to make the tetraethyl lead used as an additive, and it was being produced in the U.S. at a capacity of 200,000 tons a year. “It reminds us that sodium metal was once produced at large scale and safely handled and distributed around the U.S.,” Chiang says.

What’s more, sodium primarily originates from sodium chloride, or salt, so it is abundant, widely distributed around the world, and easily extracted, unlike lithium and other materials used in today’s EV batteries.

The system they envisage would use a refillable cartridge, which would be filled with liquid sodium metal and sealed. When it’s depleted, it would be returned to a refilling station and loaded with fresh sodium. Sodium melts at 98 degrees Celsius, just below the boiling point of water, so it is easy to heat to the melting point to refuel the cartridges.

Initially, the plan is to produce a brick-sized fuel cell that can deliver about 1,000 watt-hours of energy, enough to power a large drone, in order to prove the concept in a practical form that could be used for agriculture, for example. The team hopes to have such a demonstration ready within the next year.

Sugano, who conducted much of the experimental work as part of her doctoral thesis and will now work at the startup, says that a key insight was the importance of moisture in the process. As she tested the device with pure oxygen, and then with air, she found that the amount of humidity in the air was crucial to making the electrochemical reaction efficient. The humid air resulted in the sodium producing its discharge products in liquid rather than solid form, making it much easier for these to be removed by the flow of air through the system. “The key was that we can form this liquid discharge product and remove it easily, as opposed to the solid discharge that would form in dry conditions,” she says.

Ganti-Agrawal notes that the team drew from a variety of different engineering subfields. For example, there has been much research on high-temperature sodium, but none with a system with controlled humidity. “We’re pulling from fuel cell research in terms of designing our electrode, we’re pulling from older high-temperature battery research as well as some nascent sodium-air battery research, and kind of mushing it together,” which led to “the big bump in performance” the team has achieved, he says.

The research team also included Alden Friesen, an MIT summer intern who attends Desert Mountain High School in Scottsdale, Arizona; Kailash Raman and William Woodford of Form Energy in Somerville, Massachusetts; Shashank Sripad of And Battery Aero in California; and Venkatasubramanian Viswanathan of the University of Michigan. The work was supported by ARPA-E, Breakthrough Energy Ventures, and the National Science Foundation, and used facilities at MIT.nano.

Overlooked cells might explain the human brain’s huge storage capacity

Tue, 05/27/2025 - 10:00am

The human brain contains about 86 billion neurons. These cells fire electrical signals that help the brain store memories and send information and commands throughout the brain and the nervous system.

The brain also contains billions of astrocytes — star-shaped cells with many long extensions that allow them to interact with millions of neurons. Although they have long been thought to be mainly supportive cells, recent studies have suggested that astrocytes may play a role in memory storage and other cognitive functions.

MIT researchers have now put forth a new hypothesis for how astrocytes might contribute to memory storage. The architecture suggested by their model would help to explain the brain’s massive storage capacity, which is much greater than would be expected using neurons alone.

“Originally, astrocytes were believed to just clean up around neurons, but there’s no particular reason that evolution did not realize that, because each astrocyte can contact hundreds of thousands of synapses, they could also be used for computation,” says Jean-Jacques Slotine, an MIT professor of mechanical engineering and of brain and cognitive sciences, and an author of the new study.

Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab and IBM Research, is the senior author of the open-access paper, which appeared May 23 in the Proceedings of the National Academy of Sciences. Leo Kozachkov PhD ’22 is the paper’s lead author.

Memory capacity

Astrocytes have a variety of support functions in the brain: They clean up debris, provide nutrients to neurons, and help to ensure an adequate blood supply.

Astrocytes also send out many thin tentacles, known as processes, which can each wrap around a single synapse — the junctions where two neurons interact with each other — to create a tripartite (three-part) synapse.

Within the past couple of years, neuroscientists have shown that if the connections between astrocytes and neurons in the hippocampus are disrupted, memory storage and retrieval are impaired.

Unlike neurons, astrocytes can’t fire action potentials, the electrical impulses that carry information throughout the brain. However, they can use calcium signaling to communicate with other astrocytes. Over the past few decades, as the resolution of calcium imaging has improved, researchers have found that calcium signaling also allows astrocytes to coordinate their activity with neurons in the synapses that they associate with.

These studies suggest that astrocytes can detect neural activity, which leads them to alter their own calcium levels. Those changes may trigger astrocytes to release gliotransmitters — signaling molecules similar to neurotransmitters — into the synapse.

“There’s a closed circle between neuron signaling and astrocyte-to-neuron signaling,” Kozachkov says. “The thing that is unknown is precisely what kind of computations the astrocytes can do with the information that they’re sensing from neurons.”

The MIT team set out to model what those connections might be doing and how they might contribute to memory storage. Their model is based on Hopfield networks — a type of neural network that can store and recall patterns.

Hopfield networks, originally developed by John Hopfield and Shun-Ichi Amari in the 1970s and 1980s, are often used to model the brain, but it has been shown that these networks can’t store enough information to account for the vast memory capacity of the human brain. A newer, modified version of a Hopfield network, known as dense associative memory, can store much more information through a higher order of couplings between more than two neurons.

However, it is unclear how the brain could implement these many-neuron couplings at a hypothetical synapse, since conventional synapses only connect two neurons: a presynaptic cell and a postsynaptic cell. This is where astrocytes come into play.

“If you have a network of neurons, which couple in pairs, there’s only a very small amount of information that you can encode in those networks,” Krotov says. “In order to build dense associative memories, you need to couple more than two neurons. Because a single astrocyte can connect to many neurons, and many synapses, it is tempting to hypothesize that there might exist an information transfer between synapses mediated by this biological cell. That was the biggest inspiration for us to look into astrocytes and led us to start thinking about how to build dense associative memories in biology.”

The neuron-astrocyte associative memory model that the researchers developed in their new paper can store significantly more information than a traditional Hopfield network — more than enough to account for the brain’s memory capacity.
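To make the idea of higher-order couplings concrete, here is a minimal toy sketch of dense associative memory retrieval in the style of modern Hopfield networks. This is our own illustration, not the authors' neuron-astrocyte model: stored patterns compete through a sharp softmax, so a corrupted probe snaps back to the nearest stored pattern:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())  # subtract max for numerical stability
    return e / e.sum()

def dense_recall(patterns, probe, beta=8.0, steps=5):
    """Toy dense-associative-memory retrieval: repeatedly move the probe
    toward a softmax-weighted combination of stored patterns. The sharp
    (high-beta) softmax plays the role of a higher-order coupling."""
    X = np.asarray(patterns, dtype=float)  # shape: (num_patterns, dim)
    x = np.asarray(probe, dtype=float)
    for _ in range(steps):
        x = softmax(beta * (X @ x)) @ X    # attend to the best-matching pattern
    return x

rng = np.random.default_rng(0)
stored = np.sign(rng.standard_normal((10, 64)))  # 10 random +/-1 patterns
noisy = stored[3].copy()
noisy[:12] *= -1                                 # corrupt 12 of 64 entries
recovered = dense_recall(stored, noisy)          # snaps back to stored[3]
```

In a pairwise Hopfield network the number of reliably storable patterns scales only linearly with the number of neurons; the energy landscape above, by contrast, keeps stored patterns separated even when there are far more patterns than dimensions, which is the capacity advantage the astrocyte hypothesis is meant to implement biologically.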

Intricate connections

The extensive biological connections between neurons and astrocytes offer support for the idea that this type of model might explain how the brain’s memory storage systems work, the researchers say. They hypothesize that within astrocytes, memories are encoded by gradual changes in the patterns of calcium flow. This information is conveyed to neurons by gliotransmitters released at synapses that astrocyte processes connect to.

“By careful coordination of these two things — the spatial temporal pattern of calcium in the cell and then the signaling back to the neurons — you can get exactly the dynamics you need for this massively increased memory capacity,” Kozachkov says.

One of the key features of the new model is that it treats astrocytes as collections of processes, rather than a single entity. Each of those processes can be considered one computational unit. Because of the high information storage capabilities of dense associative memories, the ratio of the amount of information stored to the number of computational units is very high and grows with the size of the network. This makes the system not only high capacity, but also energy efficient.

“By conceptualizing tripartite synaptic domains — where astrocytes interact dynamically with pre- and postsynaptic neurons — as the brain’s fundamental computational units, the authors argue that each unit can store as many memory patterns as there are neurons in the network. This leads to the striking implication that, in principle, a neuron-astrocyte network could store an arbitrarily large number of patterns, limited only by its size,” says Maurizio De Pitta, an assistant professor of physiology at the Krembil Research Institute at the University of Toronto, who was not involved in the study.

To test whether this model might accurately represent how the brain stores memory, researchers could try to develop ways to precisely manipulate the connections between astrocytes’ processes, then observe how those manipulations affect memory function.

“We hope that one of the consequences of this work could be that experimentalists would consider this idea seriously and perform some experiments testing this hypothesis,” Krotov says.

In addition to offering insight into how the brain may store memory, this model could also provide guidance for researchers working on artificial intelligence. By varying the connectivity of the process-to-process network, researchers could generate a huge range of models that could be explored for different purposes, for instance, creating a continuum between dense associative memories and attention mechanisms in large language models.

“While neuroscience initially inspired key ideas in AI, the last 50 years of neuroscience research have had little influence on the field, and many modern AI algorithms have drifted away from neural analogies,” Slotine says. “In this sense, this work may be one of the first contributions to AI informed by recent neuroscience research.” 

The proud history and promising future of MIT’s work on manufacturing

Tue, 05/27/2025 - 10:00am

MIT’s Initiative for New Manufacturing, announced today by President Sally A. Kornbluth, is the latest installment in a grand tradition: Since its founding, MIT has worked overtime to expand U.S. manufacturing, creating jobs and economic growth.

Indeed, one of the strongest through lines in MIT history is its commitment to U.S. manufacturing, which the Institute has pursued in economic good times and lean times, during wartime and in peacetime, and across scores of industries. MIT was founded in 1861 partly to improve U.S. industrial output, and has long devised special programs to bolster it — including multiple projects in recent decades aimed at renewing U.S. manufacturing.

“We want to deliberately design high-quality human-centered manufacturing jobs that bring new life to communities across the country,” Kornbluth wrote in a letter to the Institute community this morning, announcing the Initiative for New Manufacturing. She added: “I’m convinced that there is no more important work we can do to meet the moment and serve the nation now.”

“Embedded in MIT’s core ethos”

On one level, manufacturing is in MIT’s essential DNA. The Institute’s research and education have advanced industries from construction and transportation, to defense, electronics, biosciences, chemical engineering, and more. MIT contributions to management and logistics have also helped manufacturing firms thrive.

As Kornbluth noted in today’s letter, “Frankly, it’s not too much to say that the Institute was founded in 1861 to make manufacturing better.”

The historical record shows this, too. “There is no branch of practical industry, whether in the arts of construction, manufactures or agriculture, which is not capable of being better practiced, and even of being improved in its processes,” wrote MIT’s first president, William Barton Rogers, in a proposal for a new technical university, before MIT opened its doors.

“Manufacturing is embedded in MIT’s core ethos,” says Christopher Love, a chemical engineering professor and one of the leads of the Initiative for New Manufacturing.

Beyond its everyday work, MIT has created many special projects to bolster manufacturing. In 1919, under the Institute’s third president, Richard Maclaurin, MIT developed the “Tech Plan,” engaging over 200 corporate sponsors, including AT&T and General Electric, to improve their businesses; period photos show MIT students examining a General Electric factory. (Similarly, today’s Initiative for New Manufacturing contains a “Factory Observatory” among its many facets, enabling Institute students to visit manufacturers.)

“Made in America”

For a few decades after World War II, the U.S. had an especially large global lead in manufacturing. The sector also accounted for roughly a quarter of U.S. GDP for much of the 1950s, compared to about 12 percent in recent years. To be sure, other U.S. industries naturally grew; additionally, global manufacturing increased. But the U.S. still had around 20 million manufacturing jobs in 1979, compared to about 12.8 million today. The 1980s saw concerted job loss in manufacturing, and many believed the U.S. was losing its edge in key industries, including automaking and consumer electronics.

In response, MIT formed a task force on the subject, the MIT Commission on Industrial Productivity — and the group’s work produced a bestselling book.

“Made in America: Regaining the Productive Edge,” co-authored by Michael Dertouzos, Richard Lester, and Robert Solow, sold over 300,000 copies after its publication in 1989. The book closely examined U.S. manufacturing practices across eight core industries and found overly short growth horizons for firms, suboptimal technology transfer, a neglect of human resources, and more.

Solow was an apt co-author: The MIT economist produced breakthrough research in the 1950s and 1960s, based on U.S. economic data, showing that technical advances of multiple kinds were responsible for most economic growth — to a much greater extent than, say, population growth or capital expansion. “Total factor productivity,” as Solow called it, included technological innovation, education, and skill-related changes.

Solow’s work won him a Nobel Prize in 1987 and illuminated how important technology and education are to economic expansion: Growth is not largely about making more of the same stuff, but creating new things.

The 21st Century: PIE, The Engine, and INM

This century, manufacturing has had periods of growth, but heavy job losses in the first decade of the 2000s. That led to a flurry of new MIT manufacturing projects and research.

For one, an Institute task force on Production in the Innovation Economy (PIE), based on two years of empirical research, found considerable potential for U.S. advanced manufacturing, but also that the country needed to improve its capacity at turning innovations into deliverable products. These findings were detailed in the book “Making in America,” written by Institute Professor Suzanne Berger, a political scientist who has long studied the industrial economy.

MIT also participated in a government initiative, the Advanced Manufacturing Partnership, to help create high-tech economic hubs in parts of the U.S. that had suffered from de-industrialization, an effort that included developing new education initiatives for industrial workers.

And in 2016, MIT first announced a creative effort to spur manufacturing directly, in the form of The Engine, a startup accelerator, innovation hub, and venture fund located adjacent to campus in Cambridge. The Engine seeks to boost promising “tough tech” startups that need time to gain traction, and has invested in dozens of promising companies.

Additionally, MIT’s Work of the Future task force, a multi-year project issuing a final report in 2020, uncovered manufacturing insights while not being solely focused on them. The task force found that automation will not wipe away colossal numbers of jobs — but that a key issue for the future is how technology can help workers to spur productivity, while not replacing them.

MIT continues to feature a variety of long-term programs and centers focused on manufacturing. The Initiative for New Manufacturing is an outgrowth of the Manufacturing@MIT working group; MIT’s Leaders for Global Operations (LGO) program offers a joint Engineering-MBA degree with a strong focus on manufacturing; the Department of Mechanical Engineering offers an advanced manufacturing concentration; and the Industrial Liaison Program develops corporate partnerships with MIT.

All told, as Kornbluth wrote in today’s letter, “Manufacturing has been a throughline in MIT’s research and education … and it’s been an essential part of our service to the nation.” 

MIT announces the Initiative for New Manufacturing

Tue, 05/27/2025 - 10:00am

MIT today launched its Initiative for New Manufacturing (INM), an Institute-wide effort to reinfuse U.S. industrial production with leading-edge technologies, bolster crucial U.S. economic sectors, and ignite job creation.

The initiative will encompass advanced research, innovative education programs, and partnership with companies across many sectors, in a bid to help transform manufacturing and elevate its impact.

“We want to work with firms big and small, in cities, small towns and everywhere in between, to help them adopt new approaches for increased productivity,” MIT President Sally A. Kornbluth wrote in a letter to the Institute community this morning. “We want to deliberately design high-quality, human-centered manufacturing jobs that bring new life to communities across the country.”

Kornbluth added: “Helping America build a future of new manufacturing is a perfect job for MIT — and I’m convinced that there is no more important work we can do to meet the moment and serve the nation now.”

The Initiative for New Manufacturing also announced its first six founding industry consortium members: Amgen, Flextronics International USA, GE Vernova, PTC, Sanofi, and Siemens. Participants in the INM Industry Consortium will support seed projects proposed by MIT researchers, initially in the area of artificial intelligence for manufacturing.

INM joins the ranks of MIT’s other presidential initiatives — including The Climate Project at MIT; MITHIC, which supports the human-centered disciplines; MIT HEALS, centered on the life sciences and health; and MGAIC, the MIT Generative AI Impact Consortium.

“There is tremendous opportunity to bring together a vibrant community working across every scale — from nanotechnology to large-scale manufacturing — and across a wide range of applications including semiconductors, medical devices, automotive, energy systems, and biotechnology,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of engineering, who is part of the initiative’s leadership team. “MIT is uniquely positioned to harness the transformative power of digital tools and AI to shape the future of manufacturing. I’m truly excited about what we can build together and the synergies this creates with other cross-cutting initiatives across the Institute.”

The initiative is just the latest MIT-centered effort in recent decades aiming to expand American manufacturing. A faculty research group wrote the 1989 bestseller “Made in America: Regaining the Productive Edge,” advocating for a renewal of manufacturing; another MIT project, called Production in the Innovation Economy, called for expanded manufacturing in the early 2010s. In 2016, MIT also founded The Engine, a venture fund investing in hardware-based “tough tech” startups, including many with potential to become substantial manufacturing firms.

As developed, the MIT Initiative for New Manufacturing is based around four major themes:

  • Reimagining manufacturing technologies and systems: realizing breakthrough technologies and system-level approaches to advance energy production, health care, computing, transportation, consumer products, and more;
  • Elevating the productivity and experience of manufacturing: developing and deploying new digitally driven methods and tools to amplify productivity and improve the human experience of manufacturing;
  • Scaling new manufacturing: accelerating the scaling of manufacturing companies and transforming supply chains to maximize efficiency and resilience, fostering product innovation and business growth; and
  • Transforming the manufacturing base: driving the deployment of a sustainable global manufacturing ecosystem that provides compelling opportunities to workers, with major efforts focused on the U.S.

The initiative has mapped out many concrete activities and programs, which will include an Institute-wide research program on emerging technologies and other major topics; workforce and education programs; and industry engagement and participation. INM also aims to establish new labs for developing manufacturing tools and techniques; a “factory observatory” program which immerses students in manufacturing through visits to production sites; and key “pillars” focusing on areas from semiconductors and biomanufacturing to defense and aviation.

The workforce and education element of INM will include TechAMP, an MIT-created program that works with community colleges to bridge the gap between technicians and engineers; AI-driven teaching tools; professional education; and an effort to expand manufacturing education on campus in collaboration with MIT departments and degree programs.

INM’s leadership team has three faculty co-directors: John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering; Suzanne Berger, Institute Professor at MIT and a political scientist who has conducted influential empirical studies of manufacturing; and Chris Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering. The initiative’s executive director is Julie Diop.

The initiative is in the process of forming a faculty steering committee with representation from across the Institute, as well as an external advisory board. INM stems partly from the work of the Manufacturing@MIT working group, formed in 2022 to assess many of these issues.

The launch of the new initiative was previewed at a daylong MIT symposium on May 7, titled “A Vision for New Manufacturing.” The event, held before a capacity audience in MIT’s Wong Auditorium, featured over 30 speakers from a wide range of manufacturing sectors.

“The rationale for growing and transforming U.S. manufacturing has never been more urgent than it is today,” Berger said at the event. “What we are trying to build at MIT now is not just another research project. … Together, with people in this room and outside this room, we’re trying to change what’s happening in our country.”

“We need to think about the importance of manufacturing again, because it is what brings product ideas to people,” Love told MIT News. “For instance, in biotechnology, new life-saving medicines can’t reach patients without manufacturing. There is a real urgency about this issue for both economic prosperity and creating jobs. We have seen the impact for our country when we have lost our lead in manufacturing in some sectors. Biotechnology, where the U.S. has been the global leader for more than 40 years, offers the potential to promote new robust economies here, but we need to advance our capabilities in biomanufacturing to maintain our advantage in this area.”

Hart adds: “While manufacturing feels very timely today, it is of enduring importance. Manufactured products enable our daily lives and manufacturing is critical to advancing the frontiers of technology and society. Our efforts leading up to the launch of the initiative revealed great excitement about manufacturing across MIT, especially from students. Working with industry — from small to large companies, and from young startups to industrial giants — will be instrumental to creating impact and realizing the vision for new manufacturing.”

In her letter to the MIT community today, Kornbluth stressed that the initiative’s goal is to drive transformation by making manufacturing more productive, resilient, and sustainable.

“We want to reimagine manufacturing technologies and systems to advance fields like energy production, health care, computing, transportation, consumer products, and more,” she wrote. “And we want to reach well beyond the shop floor to tackle challenges like how to make supply chains more resilient, and how to inform public policy to foster a broad, healthy manufacturing ecosystem that can drive decades of innovation and growth.”

Why are some rocks on the moon highly magnetic? MIT scientists may have an answer

Fri, 05/23/2025 - 2:00pm

Where did the moon’s magnetism go? Scientists have puzzled over this question for decades, ever since orbiting spacecraft picked up signs of a high magnetic field in lunar surface rocks. The moon itself has no inherent magnetism today. 

Now, MIT scientists may have solved the mystery. They propose that a combination of an ancient, weak magnetic field and a large, plasma-generating impact may have temporarily created a strong magnetic field, concentrated on the far side of the moon.

In a study appearing today in the journal Science Advances, the researchers show through detailed simulations that an impact, such as from a large asteroid, could have generated a cloud of ionized particles that briefly enveloped the moon. This plasma would have streamed around the moon and concentrated at the opposite location from the initial impact. There, the plasma would have interacted with and momentarily amplified the moon’s weak magnetic field. Any rocks in the region could have recorded signs of the heightened magnetism before the field quickly died away.

This combination of events could explain the presence of highly magnetic rocks detected in a region near the south pole, on the moon’s far side. As it happens, one of the largest impact basins — the Imbrium basin — is located in the exact opposite spot on the near side of the moon. The researchers suspect that whatever made that impact likely released the cloud of plasma that kicked off the scenario in their simulations.

“There are large parts of lunar magnetism that are still unexplained,” says lead author Isaac Narrett, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But the majority of the strong magnetic fields that are measured by orbiting spacecraft can be explained by this process — especially on the far side of the moon.”

Narrett’s co-authors include Rona Oran and Benjamin Weiss at MIT, along with Katarina Miljkovic at Curtin University, Yuxi Chen and Gábor Tóth at the University of Michigan at Ann Arbor, and Elias Mansbach PhD ’24 at Cambridge University. Nuno Loureiro, professor of nuclear science and engineering at MIT, also contributed insights and advice.

Beyond the sun

Scientists have known for decades that the moon holds remnants of a strong magnetic field. Samples from the surface of the moon, returned by astronauts on NASA’s Apollo missions of the 1960s and 70s, as well as global measurements of the moon taken remotely by orbiting spacecraft, show signs of remnant magnetism in surface rocks, especially on the far side of the moon.

The typical explanation for surface magnetism is a global magnetic field, generated by an internal “dynamo,” or a core of molten, churning material. The Earth today generates a magnetic field through a dynamo process, and it’s thought that the moon once may have done the same, though its much smaller core would have produced a much weaker magnetic field that may not explain the highly magnetized rocks observed, particularly on the moon’s far side.

An alternative hypothesis that scientists have tested from time to time involves a giant impact that generated plasma, which in turn amplified any weak magnetic field. In 2020, Oran and Weiss tested this hypothesis with simulations of a giant impact on the moon, in combination with the solar-generated magnetic field, which is weak as it stretches out to the Earth and moon.

In simulations, they tested whether an impact to the moon could amplify such a solar field enough to explain the highly magnetic measurements of surface rocks. It turned out that it could not, and their results seemed to rule out impact-generated plasmas as playing a role in the moon’s missing magnetism.

A spike and a jitter

But in their new study, the researchers took a different tack. Instead of accounting for the sun’s magnetic field, they assumed that the moon once hosted a dynamo that produced a magnetic field of its own, albeit a weak one. Given the size of its core, they estimated that such a field would have been about 1 microtesla, or 50 times weaker than the Earth’s field today.

From this starting point, the researchers simulated a large impact to the moon’s surface, similar to what would have created the Imbrium basin, on the moon’s near side. Using impact simulations from Katarina Miljkovic, the team then simulated the cloud of plasma that such an impact would have generated as the force of the impact vaporized the surface material. They adapted a second code, developed by collaborators at the University of Michigan, to simulate how the resulting plasma would flow and interact with the moon’s weak magnetic field.

These simulations showed that as a plasma cloud arose from the impact, some of it would have expanded into space, while the rest would stream around the moon and concentrate on the opposite side. There, the plasma would have compressed and briefly amplified the moon’s weak magnetic field. This entire process, from the moment the magnetic field was amplified to the time that it decays back to baseline, would have been incredibly fast — somewhere around 40 minutes, Narrett says.

Would this brief window have been enough for surrounding rocks to record the momentary magnetic spike? The researchers say, yes, with some help from another, impact-related effect.

They found that an Imbrium-scale impact would have sent a pressure wave through the moon, similar to a seismic shock. These waves would have converged to the other side, where the shock would have “jittered” the surrounding rocks, briefly unsettling the rocks’ electrons — the subatomic particles that naturally orient their spins to any external magnetic field. The researchers suspect the rocks were shocked just as the impact’s plasma amplified the moon’s magnetic field. As the rocks’ electrons settled back, they assumed a new orientation, in line with the momentary high magnetic field.

“It’s as if you throw a 52-card deck in the air, in a magnetic field, and each card has a compass needle,” Weiss says. “When the cards settle back to the ground, they do so in a new orientation. That’s essentially the magnetization process.”

The researchers say this combination of a dynamo plus a large impact, coupled with the impact’s shockwave, is enough to explain the moon’s highly magnetized surface rocks — particularly on the far side. One way to know for sure is to directly sample the rocks for signs of shock and high magnetism. That could soon be possible, as the rocks lie on the far side, near the lunar south pole, a region that missions such as NASA’s Artemis program plan to explore.

“For several decades, there’s been sort of a conundrum over the moon’s magnetism — is it from impacts or is it from a dynamo?” Oran says. “And here we’re saying, it’s a little bit of both. And it’s a testable hypothesis, which is nice.”

The team’s simulations were carried out using the MIT SuperCloud. This research was supported, in part, by NASA. 

A magnetic pull toward materials

Thu, 05/22/2025 - 5:10pm

Growing up in Coeur d’Alene, Idaho, with engineer parents who worked in the state’s silver mining industry, MIT senior Maria Aguiar developed an early interest in materials. The star garnet, the state’s mineral, is still her favorite. It’s a sheer coincidence, though, that her undergraduate thesis also focuses on garnets.

Her research explores ways to manipulate the magnetic properties of garnet thin films — work that can help improve data storage technologies. After all, says Aguiar, a major in the Department of Materials Science and Engineering (DMSE), technology and energy applications increasingly rely on the use of materials with favorable electronic and magnetic properties.

Passionate about engineering in high school — science fiction was also her jam — Aguiar applied and got accepted to MIT. But she had only learned about materials engineering through a Google search. She assumed she would gravitate toward aerospace engineering, astronomy, or even physics, subjects that had all piqued her interest at one time or another.

Aguiar was indecisive about a major for a while but began to realize that the topics she enjoyed would invariably center on materials. “I would visit an aerospace museum and would be more interested in the tiles they used in the shuttle to tolerate the heat. I was interested in the process to engineer such materials,” Aguiar remembers.

It was a first-year pre-orientation program (FPOP), designed to help new students test-drive majors, that convinced Aguiar that materials engineering was a good fit for her interests. It helped that the DMSE students were friendly and approachable. “They were proud to be in that major, and excited to talk about what they did,” Aguiar says.

During the FPOP, Associate Professor James LeBeau, a DMSE expert in transmission electron microscopy, asked students about their interests. When Aguiar piped up, saying she loved astronomy, LeBeau compared the subject to microscopy.

“An electron microscope is just a telescope in reverse,” she recalls him saying. Instead of looking at something far away, you go from big to small — zooming in to see the finer details. That comparison stuck with Aguiar and inspired her to pursue her first Undergraduate Research Opportunities Program (UROP) project with LeBeau, where she learned more about microscopy.

Drawn to magnetic materials

It was class 3.152 (Magnetic Materials), taught by Professor Caroline Ross, that stoked Aguiar’s interest in magnetic materials. The subject matter was fascinating, Aguiar says, and she knew related research would make important contributions to modern data storage technology. After starting a UROP in Ross’s magnetic materials lab in the spring of her junior year, Aguiar was hooked, and the work eventually morphed into her undergraduate thesis, “Effects of Annealing on Atomic Ordering and Magnetic Anisotropy in Iron Garnet Thin Films.”

The broad goal of her work was to understand how to manipulate materials’ magnetic properties, such as anisotropy — the tendency of a material’s magnetic properties to change depending on which direction they are measured in. It turns out that changing where certain metal atoms — or cations — sit in the garnet’s crystal structure can influence this directional behavior. By carefully arranging these atoms, researchers can “tune” garnet films to deliver novel magnetic properties, enabling the design of advanced materials for electronics.

When Aguiar joined the lab, she began working with doctoral candidate Allison Kaczmarek, who was investigating the connection between cation ordering and magnetic properties for her PhD thesis. Specifically, Kaczmarek was studying the growth and characterization of garnet films, evaluating different ways to induce cation ordering by varying the parameters in the pulsed laser deposition process — a technique that fires a laser at a target material (in this case, garnet), vaporizing it so it deposits onto a substrate, such as glass. Adjusting variables such as laser energy, pressure, and temperature, along with the composition of the mixed oxides, can significantly influence the resulting film.

Aguiar studied one specific parameter: annealing — heating a material to a high temperature before cooling it. The heat treatment is often used to alter the way atoms are arranged in a material. “So far, I have found that when we anneal these films for times as short as five minutes, the film gets closer to preferring out-of-plane magnetization,” Aguiar says. This property, known as perpendicular magnetic anisotropy, is significant for magnetic memory applications because it offers advantages in performance, scalability, and energy efficiency.

“Maria has been very reliable and quick to be independent. She picks things up very quickly and is very thoughtful about what she’s doing,” Kaczmarek says. That thoughtfulness showed early on. When asked to identify an optimal annealing temperature for the films, Aguiar didn’t just run tests — she first conducted a thorough literature review to understand what had been worked out before, then carefully tested films at different temperatures to find one that worked the best.

Kaczmarek first got to know Aguiar as a teaching assistant for class 3.030 (Microstructural Evolution of Materials), taught by Professor Geoffrey Beach. Even before starting the UROP in Ross’s lab, Aguiar had shared a clear research goal: to gain hands-on experience with advanced techniques such as X-ray diffraction, vibrating sample magnetometry, and ferromagnetic resonance — tools typically used by more senior researchers. “That’s a goal she has certainly achieved,” Kaczmarek says.

Beyond the lab, beyond MIT

Outside of the lab, Aguiar combines her love of materials with a strong sense of community outreach and social cohesion. As co-president of the Society of Undergraduate Materials Scientists in DMSE, she helps organize events that make the department more inclusive. Class dinners are great fun — many seniors recently went to a Cambridge restaurant for sushi — and “Materials Week” every semester functions primarily as a recruitment event for new students. A hot cocoa event near the winter holidays combined seasonal cheer with class evaluations — painful for some, perhaps, but necessary for improving instruction.

After graduating this spring, Aguiar is looking forward to pursuing graduate school at Stanford University and is setting her sights on teaching. She loved her time as a teaching assistant for the popular first-year classes 3.091 (Introduction to Solid-State Chemistry) and 3.010 (Structure of Materials), a role that earned her an undergraduate student teaching award.

Ross is convinced that Aguiar is a strong fit for graduate studies. “For graduate school, you need academic excellence and technical skills like being good in the lab, and Maria has both. Then there are the soft skills, which have to do with how well organized you are, how resilient you are, how you manage different responsibilities. Usually, students learn them as they go along, but Maria is well ahead of the curve,” Ross says.

“One thing that makes me hopeful for Maria’s time in grad school is that she is very broadly interested in a lot of aspects of materials science,” Kaczmarek adds.

Aguiar’s passion for the subject spilled over into a fun side project: a DMSE-exclusive “Meow-terials Science” T-shirt she designed — featuring cats doing familiar lab experiments — was a hit among students.

She remains endlessly fascinated by the materials around her, even in the water bottle she drinks from every day. “Studying materials science has changed the way I see the world. I can pick up something as ordinary as this water bottle and think about the metallurgical processing techniques I learned from my classes. I just love that there’s so much to learn from the everyday.”

New research, data advance understanding of early planetary formation

Thu, 05/22/2025 - 3:40pm

A team of international astronomers led by Richard Teague, the Kerr-McGee Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), has gathered the most sensitive and detailed observations of 15 protoplanetary disks to date, giving the astronomy community a new look at the mechanisms of early planetary formation.

“The new approaches we’ve developed to gather this data and images are like switching from reading glasses to high-powered binoculars — they reveal a whole new level of detail in these planet-forming systems,” says Teague.

Their open-access findings were published in a special collection of 17 papers in the Astrophysical Journal Letters, with several more coming out this summer. The papers shed light on a breadth of questions, including ways to calculate the mass of a disk by measuring its gravitational influence and to extract rotational velocity profiles to a precision of meters per second.

Protoplanetary disks are collections of dust and gas around young stars, from which planets form. Observing the dust in these disks is easier because it is brighter, but dust alone provides only a snapshot of what is going on. Teague’s research shifts attention to the gas in these systems, which can reveal more about the dynamics of a disk, including properties such as gravity, velocity, and mass.

To achieve the resolution necessary to study gas, the exoALMA program spent five years coordinating longer observation windows on the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. As a result, the international team of astronomers, many of whom are early-career scientists, was able to collect some of the most detailed images ever taken of protoplanetary disks.

“The impressive thing about the data is that it’s so good, the community is developing new tools to extract signatures from planets,” says Marcelo Barraza-Alfaro, a postdoc in the Planet Formation Lab and a member of the exoALMA project. The team developed several new techniques to calibrate and improve the images, making full use of the array’s higher resolution and sensitivity.

As a result, “we are seeing new things that require us to modify our understanding of what’s going on in protoplanetary disks,” he says.

One of the papers with the largest EAPS influence explores planetary formation through vortices. It has been known for some time that the simple model of formation often proposed, where dust grains clump together and “snowball” into a planetary core, is not enough. One possible aid is vortices: localized perturbations in the gas that pull dust toward their centers. There, the grains are more likely to clump, the way soap bubbles collect in a draining tub.

“We can see the concentration of dust in different regions, but we cannot see how it is moving,” says Lisa Wölfer, another postdoc in the Planet Formation Lab at MIT and first author on the paper. While astronomers can see that the dust has gathered, there isn’t enough information to rule out how it got to that point.

“Only through the dynamics in the gas can we actually confirm that it’s a vortex, and not something else, creating the structure,” she says.

During the data collection period, Teague, Wölfer, and Barraza-Alfaro developed simple models of protoplanetary disks to compare to their observations. When they got the data back, however, the models couldn’t explain what they were seeing.

“We saw the data and nothing worked anymore. It was way too complicated,” says Teague. “Before, everyone thought they were not dynamic. That’s completely not the case.”

The team was forced to reevaluate their models and work with more complex ones incorporating more motion in the gas, which take more time and resources to run. But early results look promising.

“We see that the patterns look very similar; we think this is the best test case to study further with more observations,” says Wölfer.

The new data, which have been made public, come at a fortuitous time: ALMA will be going dark for a period in the next few years while it undergoes upgrades. During this time, astronomers can continue the monumental process of sifting through all the data.

“It’s going to just keep on producing results for years and years to come,” says Teague.

A new approach could fractionate crude oil using much less energy

Thu, 05/22/2025 - 2:00pm

Separating crude oil into products such as gasoline, diesel, and heating oil is an energy-intensive process that accounts for about 6 percent of the world’s CO2 emissions. Most of that energy goes into the heat needed to separate the components by their boiling point.

In an advance that could dramatically reduce the amount of energy needed for crude oil fractionation, MIT engineers have developed a membrane that filters the components of crude oil by their molecular size.

“This is a whole new way of envisioning a separation process. Instead of boiling mixtures to purify them, why not separate components based on shape and size? The key innovation is that the filters we developed can separate very small molecules at an atomistic length scale,” says Zachary P. Smith, an associate professor of chemical engineering at MIT and the senior author of the new study.

The new filtration membrane can efficiently separate heavy and light components from oil, and it is resistant to the swelling that tends to occur with other types of oil separation membranes. The membrane is a thin film that can be manufactured using a technique that is already widely used in industrial processes, potentially allowing it to be scaled up for widespread use.

Taehoon Lee, a former MIT postdoc who is now an assistant professor at Sungkyunkwan University in South Korea, is the lead author of the paper, which appears today in Science.

Oil fractionation

Conventional heat-driven processes for fractionating crude oil make up about 1 percent of global energy use, and it has been estimated that using membranes for crude oil separation could reduce the amount of energy needed by about 90 percent. For this to succeed, a separation membrane needs to allow hydrocarbons to pass through quickly, and to selectively filter compounds of different sizes.
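The two figures quoted above combine into a simple back-of-envelope estimate of the potential savings. The arithmetic below uses only the article’s approximate numbers and is purely illustrative:

```python
# Back-of-envelope estimate from the figures quoted above (illustrative only):
# conventional crude oil fractionation uses ~1% of global energy, and
# membranes could cut that energy by ~90%.
conventional_share = 0.01   # fraction of global energy used by fractionation
membrane_reduction = 0.90   # fractional energy reduction from membranes

energy_saved = conventional_share * membrane_reduction
print(f"Potential saving: ~{energy_saved:.1%} of global energy use")
```

In other words, under these rough assumptions, on the order of nine-tenths of a percent of global energy use could in principle be saved.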

Until now, most efforts to develop a filtration membrane for hydrocarbons have focused on polymers of intrinsic microporosity (PIMs), including one known as PIM-1. Although this porous material allows the fast transport of hydrocarbons, it tends to excessively absorb some of the organic compounds as they pass through the membrane, leading the film to swell, which impairs its size-sieving ability.

To come up with a better alternative, the MIT team decided to try modifying polymers that are used for reverse osmosis water desalination. Since their adoption in the 1970s, reverse osmosis membranes have reduced the energy consumption of desalination by about 90 percent — a remarkable industrial success story.

The most commonly used membrane for water desalination is a polyamide that is manufactured using a method known as interfacial polymerization. During this process, a thin polymer film forms at the interface between water and an organic solvent such as hexane. Water and hexane do not normally mix, but at the interface between them, a small amount of the compounds dissolved in them can react with each other.

In this case, a hydrophilic monomer called MPD, which is dissolved in water, reacts with a hydrophobic monomer called TMC, which is dissolved in hexane. The two monomers are joined together by a connection known as an amide bond, forming a polyamide thin film (named MPD-TMC) at the water-hexane interface.

While highly effective for water desalination, MPD-TMC doesn’t have the right pore sizes or the swelling resistance needed to separate hydrocarbons.

To adapt the material to separate the hydrocarbons found in crude oil, the researchers first modified the film by changing the bond that connects the monomers from an amide bond to an imine bond. This bond is more rigid and hydrophobic, which allows hydrocarbons to quickly move through the membrane without causing noticeable swelling of the film compared to the polyamide counterpart.

“The polyimine material has porosity that forms at the interface, and because of the cross-linking chemistry that we have added in, you now have something that doesn’t swell,” Smith says. “You make it in the oil phase, react it at the water interface, and with the crosslinks, it’s now immobilized. And so those pores, even when they’re exposed to hydrocarbons, no longer swell like other materials.”

The researchers also introduced a monomer called triptycene. This shape-persistent, molecularly selective molecule further helps the resultant polyimines to form pores that are the right size for hydrocarbons to fit through.

This approach represents “an important step toward reducing industrial energy consumption,” says Andrew Livingston, a professor of chemical engineering at Queen Mary University of London, who was not involved in the study.

“This work takes the workhorse technology of the membrane desalination industry, interfacial polymerization, and creates a new way to apply it to organic systems such as hydrocarbon feedstocks, which currently consume large chunks of global energy,” Livingston says. “The imaginative approach using an interfacial catalyst coupled to hydrophobic monomers leads to membranes with high permeance and excellent selectivity, and the work shows how these can be used in relevant separations.”

Efficient separation

When the researchers used the new membrane to filter a mixture of toluene and triisopropylbenzene (TIPB) as a benchmark for evaluating separation performance, it was able to achieve a concentration of toluene 20 times greater than its concentration in the original mixture. They also tested the membrane with an industrially relevant mixture consisting of naphtha, kerosene, and diesel, and found that it could efficiently separate the heavier and lighter compounds by their molecular size.
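The 20-fold figure is an enrichment factor: the ratio of a component’s concentration in the filtered product to its concentration in the feed. A minimal sketch, with hypothetical feed and permeate concentrations (the article reports only the roughly 20x ratio for toluene):

```python
def enrichment_factor(feed_fraction: float, permeate_fraction: float) -> float:
    """Ratio of a component's concentration in the filtered product
    (permeate) to its concentration in the original mixture (feed)."""
    return permeate_fraction / feed_fraction

# Hypothetical numbers: toluene at 1% of the feed rising to 20% of the
# permeate would correspond to the ~20x enrichment reported for the
# toluene/TIPB benchmark.
print(f"enrichment: ~{enrichment_factor(0.01, 0.20):.0f}x")
```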

If adapted for industrial use, a series of these filters could be used to generate a higher concentration of the desired products at each step, the researchers say.

“You can imagine that with a membrane like this, you could have an initial stage that replaces a crude oil fractionation column. You could partition heavy and light molecules and then you could use different membranes in a cascade to purify complex mixtures to isolate the chemicals that you need,” Smith says.

Interfacial polymerization is already widely used to create membranes for water desalination, and the researchers believe it should be possible to adapt those processes to mass produce the films they designed in this study.

“The main advantage of interfacial polymerization is it’s already a well-established method to prepare membranes for water purification, so you can imagine just adopting these chemistries into existing scale of manufacturing lines,” Lee says.

The research was funded, in part, by ExxonMobil through the MIT Energy Initiative. 

MIT physicists discover a new type of superconductor that’s also a magnet

Thu, 05/22/2025 - 1:45pm

Magnets and superconductors go together like oil and water — or so scientists have thought. But a new finding by MIT physicists is challenging this century-old assumption.

In a paper appearing today in the journal Nature, the physicists report that they have discovered a “chiral superconductor” — a material that conducts electricity without resistance, and also, paradoxically, is intrinsically magnetic. What’s more, they observed this exotic superconductivity in a surprisingly ordinary material: graphite, the primary material in pencil lead.

Graphite is made from many layers of graphene — atomically thin, lattice-like sheets of carbon atoms — that are stacked together and can easily flake off when pressure is applied, as when pressing down to write on a piece of paper. A single flake of graphite can contain several million sheets of graphene, which are normally stacked such that every other layer aligns. But every so often, graphite contains tiny pockets where graphene is stacked in a different pattern, resembling a staircase of offset layers.

The MIT team has found that when four or five sheets of graphene are stacked in this “rhombohedral” configuration, the resulting structure can exhibit exceptional electronic properties that are not seen in graphite as a whole.

In their new study, the physicists isolated microscopic flakes of rhombohedral graphene from graphite, and subjected the flakes to a battery of electrical tests. They found that when the flakes are cooled to 300 millikelvins (about -273 degrees Celsius), the material turns into a superconductor, meaning that any electrical current passing through the material can flow through without resistance.

They also found that when they swept an external magnetic field up and down, the flakes could be switched between two different superconducting states, just like a magnet. This suggests that the superconductor has some internal, intrinsic magnetism. Such switching behavior is absent in other superconductors.

“The general lore is that superconductors do not like magnetic fields,” says Long Ju, assistant professor of physics at MIT. “But we believe this is the first observation of a superconductor that behaves as a magnet with such direct and simple evidence. And that’s quite a bizarre thing because it is against people’s general impression on superconductivity and magnetism.”

Ju is senior author of the study, which includes MIT co-authors Tonghang Han, Zhengguang Lu, Zach Hadjri, Lihan Shi, Zhenghan Wu, Wei Xu, Yuxuan Yao, Jixiang Yang, Junseok Seo, Shenyong Ye, Muyang Zhou, and Liang Fu, along with collaborators from Florida State University, the University of Basel in Switzerland, and the National Institute for Materials Science in Japan.

Graphene twist

In everyday conductive materials, electrons flow through in a chaotic scramble, whizzing by each other, and pinging off the material’s atomic latticework. Each time an electron scatters off an atom, it has, in essence, met some resistance, and loses some energy as a result, normally in the form of heat. In contrast, when certain materials are cooled to ultracold temperatures, they can become superconducting, meaning that the material can allow electrons to pair up, in what physicists term “Cooper pairs.” Rather than scattering away, these electron pairs glide through a material without resistance. With a superconductor, then, no energy is lost in translation.

Since superconductivity was first observed in 1911, physicists have shown many times over that zero electrical resistance is a hallmark of a superconductor. Another defining property was first observed in 1933, when the physicist Walther Meissner discovered that a superconductor will expel an external magnetic field. This “Meissner effect” is due in part to a superconductor’s electron pairs, which collectively act to push away any magnetic field.

Physicists have assumed that all superconducting materials should exhibit both zero electrical resistance, and a natural magnetic repulsion. Indeed, these two properties are what could enable Maglev, or “magnetic levitation” trains, whereby a superconducting rail repels and therefore levitates a magnetized car.

Ju and his colleagues had no reason to question this assumption as they carried out their experiments at MIT. In the last few years, the team has been exploring the electrical properties of pentalayer rhombohedral graphene. The researchers have observed surprising properties in the five-layer, staircase-like graphene structure, most recently that it enables electrons to split into fractions of themselves. This phenomenon occurs when the pentalayer structure is placed atop a sheet of hexagonal boron nitride (a material similar to graphene), and slightly offset by a specific angle, or twist. 

Curious as to how electron fractions might change with changing conditions, the researchers followed up their initial discovery with similar tests, this time by misaligning the graphene and hexagonal boron nitride structures. To their surprise, they found that when they misaligned the two materials and sent an electrical current through, at temperatures less than 300 millikelvins, they measured zero resistance. It seemed that the phenomenon of electron fractions disappeared, and what emerged instead was superconductivity.

The researchers went a step further to see how this new superconducting state would respond to an external magnetic field. They applied a magnet to the material, along with a voltage, and measured the electrical current coming out of the material. As they dialed the magnetic field from negative to positive (similar to a north and south polarity) and back again, they observed that the material maintained its superconducting, zero-resistance state, except in two instances, once at either magnetic polarity. In these instances, the resistance briefly spiked, before switching back to zero, and returning to a superconducting state.

“If this were a conventional superconductor, it would just remain at zero resistance, until the magnetic field reaches a critical point, where superconductivity would be killed,” Zach Hadjri, a first-year student in the group, says. “Instead, this material seems to switch between two superconducting states, like a magnet that starts out pointing upward, and can flip downwards when you apply a magnetic field. So it looks like this is a superconductor that also acts like a magnet. Which doesn’t make any sense!”

“One of a kind”

As counterintuitive as the discovery may seem, the team observed the same phenomenon in six similar samples. They suspect that the unique configuration of rhombohedral graphene is the key. The material has a very simple arrangement of carbon atoms. When it is cooled to ultracold temperatures, thermal fluctuations are minimized, allowing any electrons flowing through the material to slow down, sense each other, and interact.

Such quantum interactions can lead electrons to pair up and superconduct. These interactions can also encourage electrons to coordinate. Namely, electrons can collectively occupy one of two opposite momentum states, or “valleys.” When all electrons are in one valley, they effectively spin in the same direction rather than in opposite directions. In conventional superconductors, electrons can occupy either valley, and any pair of electrons is typically made from electrons of opposite valleys that cancel each other out. The pair overall, then, has zero momentum and does not spin.

In the team’s material structure, however, they suspect that all electrons interact such that they share the same valley, or momentum state. When electrons then pair up, the superconducting pair overall has a “non-zero” momentum and spins; together with many other pairs, this can amount to an internal, superconducting magnetism.

“You can think of the two electrons in a pair spinning clockwise, or counterclockwise, which corresponds to a magnet pointing up, or down,” Tonghang Han, a fifth-year student in the group, explains. “So we think this is the first observation of a superconductor that behaves as a magnet due to the electrons’ orbital motion, which is known as a chiral superconductor. It’s one of a kind. It is also a candidate for a topological superconductor which could enable robust quantum computation.”

“Everything we’ve discovered in this material has been completely out of the blue,” says Zhengguang Lu, a former postdoc in the group and now an assistant professor at Florida State University. “But because this is a simple system, we think we have a good chance of understanding what is going on, and could demonstrate some very profound and deep physics principles.”

“It is truly remarkable that such an exotic chiral superconductor emerges from such simple ingredients,” adds Liang Fu, professor of physics at MIT. “Superconductivity in rhombohedral graphene will surely have a lot to offer.”

The part of the research carried out at MIT was supported by the U.S. Department of Energy and a MathWorks Fellowship.

Study: Climate change may make it harder to reduce smog in some regions

Thu, 05/22/2025 - 8:00am

Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits.

However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future. 

The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere.

By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

“Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

Controlling ozone

Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.

Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.

“That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxide and volatile organic compounds in nonlinear ways.

Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

“Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology.

They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate.

Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

The researchers focused on eastern North America, Western Europe, and northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data.

They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared them to a historical scenario capturing 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

Capturing climate variability

“The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

They overcame that challenge thanks to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.
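The logic of the ensemble approach can be sketched with a toy calculation (the function, trend value, and noise level below are illustrative assumptions, not the study’s actual model): averaging several independent realizations suppresses year-to-year noise so that a slow underlying trend stands out.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_realization(n_years=16, trend=0.05, noise_sd=1.0):
    """Toy stand-in for one 16-year model run: a slow trend in some
    pollutant metric plus large interannual variability."""
    years = np.arange(n_years)
    return trend * years + rng.normal(0.0, noise_sd, n_years)

# Five independent realizations -> 80 model-years per scenario
ensemble = np.array([simulate_realization() for _ in range(5)])

# A single realization is noisy; the ensemble mean exposes the trend
single_run_estimate = np.polyfit(np.arange(16), ensemble[0], 1)[0]
ensemble_estimate = np.polyfit(np.arange(16), ensemble.mean(axis=0), 1)[0]
print(single_run_estimate, ensemble_estimate)
```

Averaging five runs cuts the noise on the estimated trend by roughly a factor of the square root of five, which is the same reason the researchers needed many simulated years to see past natural climate variability.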

The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature.

Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

“This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios.

“But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.

Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health.

“Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.

“We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.

AI learns how vision and sound are connected, without human intervention

Thu, 05/22/2025 - 12:00am

Humans naturally learn by making connections between sight and sound. For instance, we can watch someone playing the cello and recognize that the cellist’s movements are generating the music we hear.

A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be useful in applications such as journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval.

In the longer term, this work could be used to improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected.

Improving upon prior work from their group, the researchers created a method that helps machine-learning models align corresponding audio and visual data from video clips without the need for human labels.

They adjusted how their original model is trained so it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The researchers also made some architectural tweaks that help the system balance two distinct learning objectives, which improves performance.

Taken together, these relatively simple improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.

“We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities. Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,” says Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research.

He is joined on the paper by lead author Edson Araujo, a graduate student at Goethe University in Germany; Yuan Gong, a former MIT postdoc; Saurabhchand Bhati, a current MIT postdoc; Samuel Thomas, Brian Kingsbury, and Leonid Karlinsky of IBM Research; Rogerio Feris, principal scientist and manager at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Hilde Kuehne, professor of computer science at Goethe University and an affiliated professor at the MIT-IBM Watson AI Lab. The work will be presented at the Conference on Computer Vision and Pattern Recognition.

Syncing up

This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without the need for human labels.

The researchers feed this model, called CAV-MAE, unlabeled video clips and it encodes the visual and audio data separately into representations called tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.

They found that using two learning objectives balances the model’s learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to recover video clips that match user queries.

But CAV-MAE treats audio and visual samples as one unit, so a 10-second video clip and the sound of a door slamming are mapped together, even if that audio event happens in just one second of the video.

In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio.

During training, the model learns to associate one video frame with the audio that occurs during just that frame.

“By doing that, the model learns a finer-grained correspondence, which helps with performance later when we aggregate this information,” Araujo says.
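The finer-grained pairing idea can be sketched in a few lines (a hypothetical helper, not the actual CAV-MAE Sync code): split a clip’s audio into equal windows, one per video frame, so each frame gets its own audio representation instead of sharing one representation for the whole clip.

```python
import numpy as np

def split_audio_into_windows(audio, num_frames):
    """Split a 1-D audio waveform into num_frames equal windows,
    one per video frame (illustrative sketch only)."""
    samples_per_window = len(audio) // num_frames
    usable = samples_per_window * num_frames  # drop any remainder
    return audio[:usable].reshape(num_frames, samples_per_window)

# A 10-second clip at 16 kHz paired with 10 video frames:
audio = np.random.randn(10 * 16000)
windows = split_audio_into_windows(audio, num_frames=10)
print(windows.shape)  # (10, 16000)
```

Each window would then be encoded separately, so a door slam occurring in second 7 is tied to frame 7 rather than smeared across the whole 10-second clip.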

They also incorporated architectural improvements that help the model balance its two learning objectives.

Adding “wiggle room”

The model incorporates a contrastive objective, where it learns to associate similar audio and visual data, and a reconstruction objective, which aims to recover specific audio and visual data based on user queries.

In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model’s learning ability.

They include dedicated “global tokens” that help with the contrastive learning objective and dedicated “register tokens” that help the model focus on important details for the reconstruction objective.

“Essentially, we add a bit more wiggle room to the model so it can perform each of these two tasks, contrastive and reconstructive, a bit more independently. That benefitted overall performance,” Araujo adds.
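In many multimodal models, the two objectives are combined as a weighted sum during training; the sketch below is a generic illustration of that pattern (the loss functions, the toy embeddings, and the weight `lam` are all assumptions, not the CAV-MAE Sync implementation).

```python
import numpy as np

def contrastive_loss(audio_emb, video_emb, temperature=0.07):
    """InfoNCE-style loss pulling matched audio/video embeddings together
    (toy version operating on pre-computed, L2-normalized embeddings)."""
    sims = audio_emb @ video_emb.T / temperature   # pairwise similarities
    labels = np.arange(len(audio_emb))             # i-th audio matches i-th video
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

def reconstruction_loss(reconstructed, target):
    """Mean-squared error on the data the model must recover."""
    return np.mean((reconstructed - target) ** 2)

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 8)); a /= np.linalg.norm(a, axis=1, keepdims=True)
v = rng.normal(size=(4, 8)); v /= np.linalg.norm(v, axis=1, keepdims=True)
recon, target = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))

# Total training objective: weighted sum of the two losses
lam = 0.1  # trade-off weight (an assumed value, for illustration)
total = lam * contrastive_loss(a, v) + reconstruction_loss(recon, target)
print(total)
```

Because the two losses pull the shared representation in different directions, dedicating separate tokens to each objective, as the researchers did, is one way to let the model satisfy both with less interference.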

While the researchers had some intuition these enhancements would improve the performance of CAV-MAE Sync, it took a careful combination of strategies to shift the model in the direction they wanted it to go.

“Because we have multiple modalities, we need a good model for both modalities by themselves, but we also need to get them to fuse together and collaborate,” Rouditchenko says.

In the end, their enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like a dog barking or an instrument playing.

Its results were more accurate than their prior work, and it also performed better than more complex, state-of-the-art methods that require larger amounts of training data.

“Sometimes, very simple ideas or little patterns you see in the data have big value when applied on top of a model you are working on,” Araujo says.

In the future, the researchers want to incorporate new models that generate better data representations into CAV-MAE Sync, which could improve performance. They also want to enable their system to handle text data, which would be an important step toward generating an audiovisual large language model.

This work is funded, in part, by the German Federal Ministry of Education and Research and the MIT-IBM Watson AI Lab.

Learning how to predict rare kinds of failures

Wed, 05/21/2025 - 4:35pm

On Dec. 21, 2022, just as peak holiday season travel was getting underway, Southwest Airlines went through a cascading series of failures in their scheduling, initially triggered by severe winter weather in the Denver area. But the problems spread through their network, and over the course of the next 10 days the crisis ended up stranding over 2 million passengers and causing losses of $750 million for the airline.

How did a localized weather system end up triggering such a widespread failure? Researchers at MIT have examined this widely reported failure as an example of cases where systems that work smoothly most of the time suddenly break down and cause a domino effect of failures. They have now developed a computational system that combines sparse data about a rare failure event with much more extensive data on normal operations to work backward, pinpoint the root causes of the failure, and, ideally, find ways to adjust the systems to prevent such failures in the future.

The findings were presented at the International Conference on Learning Representations (ICLR), held in Singapore April 24-28, by MIT doctoral student Charles Dawson, professor of aeronautics and astronautics Chuchu Fan, and colleagues from Harvard University and the University of Michigan.

“The motivation behind this work is that it’s really frustrating when we have to interact with these complicated systems, where it’s really hard to understand what’s going on behind the scenes that’s creating these issues or failures that we’re observing,” says Dawson.

The new work builds on previous research from Fan’s lab, where they looked at problems involving hypothetical failure prediction problems, she says, such as with groups of robots working together on a task, or complex systems such as the power grid, looking for ways to predict how such systems may fail. “The goal of this project,” Fan says, “was really to turn that into a diagnostic tool that we could use on real-world systems.”

The idea was to provide a way that someone could “give us data from a time when this real-world system had an issue or a failure,” Dawson says, “and we can try to diagnose the root causes, and provide a little bit of a look behind the curtain at this complexity.”

The intent is for the methods they developed “to work for a pretty general class of cyber-physical problems,” he says. These are problems in which “you have an automated decision-making component interacting with the messiness of the real world,” he explains. There are available tools for testing software systems that operate on their own, but the complexity arises when that software has to interact with physical entities going about their activities in a real physical setting, whether it be the scheduling of aircraft, the movements of autonomous vehicles, the interactions of a team of robots, or the control of the inputs and outputs on an electric grid. In such systems, what often happens, he says, is that “the software might make a decision that looks OK at first, but then it has all these domino, knock-on effects that make things messier and much more uncertain.”

One key difference, though, is that in systems like teams of robots, unlike the scheduling of airplanes, “we have access to a model in the robotics world,” says Fan, who is a principal investigator in MIT’s Laboratory for Information and Decision Systems (LIDS). “We do have some good understanding of the physics behind the robotics, and we do have ways of creating a model” that represents their activities with reasonable accuracy. But airline scheduling involves processes and systems that are proprietary business information, and so the researchers had to find ways to infer what was behind the decisions, using only the relatively sparse publicly available information, which essentially consisted of just the actual arrival and departure times of each plane.

“We have grabbed all this flight data, but there is this entire system of the scheduling system behind it, and we don’t know how the system is working,” Fan says. And the amount of data relating to the actual failure is just several days’ worth, compared to years of data on normal flight operations.

The impact of the weather events in Denver during the week of Southwest’s scheduling crisis clearly showed up in the flight data, just from the longer-than-normal turnaround times between landing and takeoff at the Denver airport. But the way that impact cascaded through the system was less obvious, and required more analysis. The key turned out to involve the concept of reserve aircraft.

Airlines typically keep some planes in reserve at various airports, so that if problems are found with one plane that is scheduled for a flight, another plane can be quickly substituted. Southwest uses only a single type of plane, so they are all interchangeable, making such substitutions easier. Most airlines operate on a hub-and-spoke system, keeping reserve aircraft at a few designated hub airports; Southwest does not use hubs, so its reserve planes are scattered throughout the network. And the way those planes were deployed turned out to play a major role in the unfolding crisis.

“The challenge is that there’s no public data available in terms of where the aircraft are stationed throughout the Southwest network,” Dawson says. “What we’re able to find using our method is, by looking at the public data on arrivals, departures, and delays, we can use our method to back out what the hidden parameters of those aircraft reserves could have been, to explain the observations that we were seeing.”

What they found was that the way the reserves were deployed was a “leading indicator” of the problems that cascaded into a nationwide crisis. Some parts of the network that were affected directly by the weather were able to recover quickly and get back on schedule. “But when we looked at other areas in the network, we saw that these reserves were just not available, and things just kept getting worse.”

For example, the data showed that Denver’s reserves were rapidly dwindling because of the weather delays, but then “it also allowed us to trace this failure from Denver to Las Vegas,” he says. While there was no severe weather there, “our method was still showing us a steady decline in the number of aircraft that were able to serve flights out of Las Vegas.”

He says that “what we found was that there were these circulations of aircraft within the Southwest network, where an aircraft might start the day in California and then fly to Denver, and then end the day in Las Vegas.” What happened in the case of this storm was that the cycle got interrupted. As a result, “this one storm in Denver breaks the cycle, and suddenly the reserves in Las Vegas, which is not affected by the weather, start to deteriorate.”

In the end, Southwest was forced to take a drastic measure to resolve the problem: They had to do a “hard reset” of their entire system, canceling all flights and flying empty aircraft around the country to rebalance their reserves.

Working with experts in air transportation systems, the researchers developed a model of how the scheduling system is supposed to work. Then, “what our method does is, we’re essentially trying to run the model backwards.” Looking at the observed outcomes, the model allows them to work back to see what kinds of initial conditions could have produced those outcomes.

While the data on the actual failures were sparse, the extensive data on typical operations helped in teaching the computational model “what is feasible, what is possible, what’s the realm of physical possibility here,” Dawson says. “That gives us the domain knowledge to then say, in this extreme event, given the space of what’s possible, what’s the most likely explanation” for the failure.
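The “run the model backwards” idea can be illustrated with a toy forward model (the function and numbers below are entirely hypothetical; the actual work uses far richer scheduling models and the CalNF tool): scan candidate values of a hidden parameter and keep the one whose predicted outcome best explains the observation.

```python
import numpy as np

def forward_model(reserves, weather_severity):
    """Toy forward model: predicted average delay (minutes) given a
    hidden reserve-aircraft count and an observed weather severity."""
    return 60.0 * weather_severity / (1.0 + reserves)

# Observed outcome (hypothetical): heavy weather, 30-minute average delays
observed_delay, severity = 30.0, 2.0

# "Run the model backwards": scan candidate hidden parameters and keep
# the one whose predicted outcome best matches the observation.
candidates = np.arange(0, 20)
errors = [(forward_model(r, severity) - observed_delay) ** 2 for r in candidates]
best_reserves = candidates[int(np.argmin(errors))]
print(best_reserves)  # reserves=3 gives 120/4 = 30 minutes exactly
```

In the real system the hidden parameters are high-dimensional and the search is done with learned probabilistic models rather than a grid scan, but the principle is the same: infer the unobserved initial conditions most consistent with the public arrival, departure, and delay data.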

This could lead to a real-time monitoring system, he says, in which data on normal operations are constantly compared to incoming data to determine what the trend looks like. “Are we trending toward normal, or are we trending toward extreme events?” Seeing signs of impending issues could allow for preemptive measures, such as redeploying reserve aircraft in advance to areas of anticipated problems.

Work on developing such systems is ongoing in her lab, Fan says. In the meantime, they have produced an open-source tool for analyzing failure systems, called CalNF, which is available for anyone to use. Meanwhile, Dawson, who earned his doctorate last year, is working as a postdoc to apply the methods developed in this work to understanding failures in power networks.

The research team also included Max Li from the University of Michigan and Van Tran from Harvard University. The work was supported by NASA, the Air Force Office of Scientific Research, and the MIT-DSTA program.

A new technology for extending the shelf life of produce

Wed, 05/21/2025 - 11:00am

We’ve all felt the sting of guilt when fruits and vegetables go bad before we can eat them. Now, researchers from MIT and the Singapore-MIT Alliance for Research and Technology (SMART) have shown they can extend the shelf life of harvested plants by injecting them with melatonin using biodegradable microneedles.

That’s a big deal because the problem of food waste goes way beyond our salads. More than 30 percent of the world’s food is lost after it’s harvested — enough to feed more than 1 billion people. Refrigeration is the most common way to preserve foods, but it requires energy and infrastructure that many regions of the world can’t afford or lack access to.

The researchers believe their system could offer an alternative or complement to refrigeration. Central to their approach are patches of silk microneedles. The microneedles can get through the tough, waxy skin of plants without causing a stress response, and deliver precise amounts of melatonin into plants’ inner tissues.

“This is the first time that we’ve been able to apply these microneedles to extend the shelf life of a fresh-cut crop,” says Benedetto Marelli, the study’s senior author, associate professor of civil and environmental engineering at MIT, and the director of the Wild Cards mission of the MIT Climate Project. “We thought we could use this technology to deliver something that could regulate or control the plant’s post-harvest physiology. Eventually, we looked at hormones, and melatonin is already used by plants to regulate such functions. The food we waste could feed about 1.6 billion people. Even in the U.S., this approach could one day expand access to healthy foods.”

For the study, which appears today in Nano Letters, Marelli and researchers from SMART applied small patches of the microneedles containing melatonin to the base of the leafy vegetable pak choy. After application, the researchers found the melatonin was able to extend the vegetables’ shelf life by four days at room temperature and 10 days when refrigerated, which could allow more crops to reach consumers before they’re wasted.

“Post-harvest waste is a huge issue. This problem is extremely important in emerging markets around Africa and Southeast Asia, where many crops are produced but can't be maintained in the journey from farms to markets,” says Sarojam Rajani, co-senior author of the study and a senior principal investigator at the Temasek Life Sciences Laboratory in Singapore.

Plant destressors

For years, Marelli’s lab has been exploring the use of silk microneedles for things like delivering nutrients to crops and monitoring plant health. Microneedles made from silk fibroin protein are nontoxic and biodegradable, and Marelli’s previous work has described ways of manufacturing them at scale.

To test the microneedles’ ability to extend the shelf life of food, the researchers studied their capacity to deliver a hormone known to affect the senescence process. Aside from helping humans sleep, melatonin is also a natural hormone in many plants that helps them regulate growth and aging.

“The dose of melatonin we’re delivering is so low that it’s fully metabolized by the crops, so it would not significantly increase the amount of melatonin normally present in the food; we would not ingest more melatonin than usual,” Marelli says. “We chose pak choy because it's a very important crop in Asia, and also because pak choy is very perishable.”

Pak choy is typically harvested by cutting the leafy plant from the root system, exposing the shoot base that provides easy access to vascular bundles which distribute water and nutrients to the rest of the plant. To begin their study, the researchers first used their microneedles to inject a fluorescent dye into the base to confirm that vasculature could spread the dye throughout the plant.

The researchers then compared the shelf life of regular pak choy plants and plants that had been sprayed with or dipped in melatonin, finding no difference in shelf life.

With their baseline shelf life established, the researchers applied small patches of the melatonin-filled microneedles to the bottom of pak choy plants by hand. They then stored the treated plants, along with controls, in plastic boxes both at room temperature and under refrigeration.

The team evaluated the plants by monitoring their weight, visual appearance, and concentration of chlorophyll, a green pigment that decreases as plants age.

At room temperature, the leaves of the untreated control group began yellowing within two or three days. By the fourth day, the yellowing accelerated to the point that the plants likely could not be sold. Plants treated with the melatonin-loaded silk microneedles, in contrast, remained green on day five, and the yellowing process was significantly delayed. The weight loss and chlorophyll reduction of treated plants also slowed significantly at room temperature. Overall, the researchers estimated the microneedle-treated plants retained their saleable value until the eighth day.

“We clearly saw we could enhance the shelf life of pak choy without the cold chain,” Marelli says.

In refrigerated conditions of about 40 degrees Fahrenheit, plant yellowing was delayed by about five days on average, with treated plants remaining relatively green until day 25.

“Spectrophotometric analysis of the plants indicated the treated plants had higher antioxidant activity, while gene analysis showed the melatonin set off a protective chain reaction inside the plants, preserving chlorophyll and adjusting hormones to slow senescence,” says Monika Jangir, co-first author and former postdoc at the Temasek Life Sciences Laboratory.

“We studied melatonin’s effects and saw it improves the stress response of the plant after it’s been cut, so it’s basically decreasing the stress that plants experience, and that extends their shelf life,” says Yangyang Han, co-first author and research scientist at the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group at SMART.

Toward postharvest preservation

While the microneedles could minimize waste compared to other application methods, such as spraying or dipping crops, the researchers say more work is needed to deploy them at scale. For instance, the researchers applied the microneedle patches by hand in this experiment; in the future, the patches could be applied by tractors, autonomous drones, and other farming equipment.

“For this to be widely adopted, we’d need to reach a performance versus cost threshold to justify its use,” Marelli explains. “This method would need to become cheap enough to be used by farmers regularly.”

Moving forward, the research team plans to study the effects of a variety of hormones on different crops using its microneedle delivery technology. The team believes the technique should work with all kinds of produce.

“We’re going to continue to analyze how we can increase the impact this can have on the value and quality of crops,” Marelli says. “For example, could this let us modulate the nutritional values of the crop, how it’s shaped, its texture, etc.? We're also going to continue looking into scaling up the technology so this can be used in the field.”

The work was supported by the Singapore-MIT Alliance for Research and Technology (SMART) and the National Research Foundation of Singapore.

Startup enables 100-year bridges with corrosion-resistant steel

Wed, 05/21/2025 - 12:00am

According to the American Road and Transportation Builders Association, one in three bridges needs repair or replacement, amounting to more than 200,000 bridges across the country. A key culprit behind America’s aging infrastructure is rusted rebar: as rust accumulates, it cracks the surrounding concrete, making bridges more likely to collapse.

Now Allium Engineering, founded by two MIT PhDs, is tripling the lifetime of bridges and other structures with a new technology that uses a stainless steel cladding to make rebar resilient to corrosion. By eliminating corrosion, the technology lets infrastructure last much longer, reduces the need for repairs, and cuts carbon emissions. The company’s technology is easily integrated into existing steelmaking processes to make America’s infrastructure more resilient, affordable, and sustainable over the next century.

“Across the U.S., the typical bridge deck lasts about 30 years on average — we’re enabling 100-year lifetimes,” says Allium co-founder and CEO Steven Jepeal PhD ’21. “There’s a huge backlog of infrastructure that needs to be replaced, and that has frankly aged faster than it was expected to, largely because the materials we were using at the time weren’t cut out for the job. We’re trying to ride the momentum of rebuilding America’s infrastructure, but rebuild in a way that makes it last.”

To accomplish that, Allium adds a thin protective layer of stainless steel on top of traditional steel rebar to make it more resistant to corrosion. About 100,000 pounds of Allium’s stainless steel-clad rebar have already been used in construction projects around the U.S., and the company believes its process can be quickly scaled alongside steel mills.

“We integrate our system into mills so they don’t have to do anything differently,” says Jepeal, who co-founded Allium with Sam McAlpine PhD ’22. “We add everything we need to make a normal product into a stainless-clad product so that any mill out there can make a material that won’t corrode. That’s what needs to happen for all of the world’s infrastructure to be longer lasting.”

Toward better bridges

Jepeal completed his PhD in the MIT Department of Nuclear Science and Engineering (NSE) under Professor Zach Hartwig. During that time, he saw Hartwig and fellow NSE researchers spin out Commonwealth Fusion Systems to create the first commercial fusion reactors, which he says sparked his interest in startups.

“It definitely helped me catch the startup bug,” Jepeal says. “MIT is also where I got my materials science chops.”

McAlpine completed his PhD under Associate Professor Michael Short. In 2019, McAlpine and Short worked on an ARPA-E-funded project combining metals to improve corrosion resistance in extreme environments.

Jepeal and McAlpine decided to start a company applying a similar approach to improve the resilience of metals in everyday settings. They worked with MIT’s Venture Mentoring Service and spoke with Tata Steel, one of the world’s largest steelmakers, which has worked with the MIT Industrial Liaison Program (ILP). Members of Tata told the founders that steel corrosion was one of their biggest problems.

A key early problem the founders set out to solve was depositing corrosion-resistant material without adding significant costs or disrupting existing processes. Steelmaking traditionally begins by putting huge pieces of precursor steel through machines called rollers at extremely high temperatures to stretch out the material. Jepeal compares the process to making pasta on an industrial scale.

The founders decided to add their cladding before the rolling process. Although Allium’s system is customized, the company currently adds its cladding using existing equipment from other metal-processing applications, such as welding.

“We go into the mills and take big chunks of steel that are going through the steelmaking process but aren’t the end-product, and we deposit stainless steel on the outside of their cheap carbon steel, which is typically just recycled scrap from products like cars and fridges,” Jepeal says. “The treated steel then goes through the mill’s typical process for making end products like rebar.”

Each 40-foot piece of thick precursor steel turns into about a mile of rebar following the rolling process. Rebar treated by Allium is still more than 95 percent regular rebar and doesn’t need any special post-processing or handling.

“What comes out of the mill looks like regular rebar,” Jepeal says. “It is just as strong and can be bent, cut, and installed in all the same ways. But instead of being put into a bridge and lasting an average of 30 years, it will last 100 years or more.”

Infrastructure to last

Last year, Allium’s factory in Billerica, Massachusetts, began producing its first commercial cladding material, helping to manufacture about 100 tons of the company’s stainless steel-clad rebar in collaboration with a partner steel mill. That rebar has since been placed into construction projects in California and Florida.

Allium’s first facility has the capacity to produce about 1,000 tons of its long-lasting rebar each year, but the company is hoping to build more facilities closer to the steel mills it partners with, eventually integrating them into mill operations.

“Our mission of reducing emissions and improving this infrastructure is what’s driving us to scale very quickly to meet the needs of the industry,” Jepeal says. “Everyone we talk to wants this to be bigger than it is today.”

Allium is also experimenting with other cladding materials and composites. Down the line, Jepeal sees Allium’s tech being used for things beyond rebar like train tracks, steel beams, and pipes. But he stresses the company’s focus on rebar will keep it busy for the foreseeable future.

“Almost all of our infrastructure has this corrosion problem, so it’s the biggest problem we could imagine solving with our set of skills,” Jepeal says. “Tunnels, bridges, roads, industrial buildings, power plants, chemical factories — all of them have this problem.”

Fueling social impact: PKG IDEAS Challenge invests in bold student-led social enterprises

Tue, 05/20/2025 - 4:25pm

On Wednesday, April 16, members of the MIT community gathered at the MIT Welcome Center to celebrate the annual IDEAS Social Innovation Challenge Showcase and Awards ceremony. Hosted by the Priscilla King Gray Public Service Center (PKG Center), the event celebrated 19 student-led teams who spent the spring semester developing and implementing solutions to complex social and environmental challenges, both locally and globally.

Founded in 2001, the IDEAS Challenge is an experiential learning incubator that prepares students to take their early-stage social enterprises to the next level. As the program approaches its 25th anniversary, IDEAS serves a vital role in the Institute’s innovation ecosystem — with a focus on social impact that encourages students across disciplines to think boldly, act compassionately, and engineer for change.

This year’s event featured keynote remarks by Amy Smith, co-founder of IDEAS and founder of D-Lab, who reflected on IDEAS’ legacy and the continued urgency of its mission. She emphasized the importance of community-centered design and celebrated the creativity and determination of the program’s participants over the years. 

“We saw the competition as a vehicle for MIT students to apply their technical skills to problems that they cared about, with impact and community engagement at the forefront,” Smith said. “I think that the goal of helping as many teams as possible along their journey has continued to this day.”

A legacy of impact and a vision for the future

Since its inception, the IDEAS Challenge has fueled over 1,200 ventures through training, mentorship, and seed funding; the program has also awarded more than $1.3 million to nearly 300 teams. Many of these have gone on to effect transformative change in the areas of global health, civic engagement, energy and the environment, education, and employment.  

Over the course of the spring semester, MIT student-led teams engage in a rigorous process of ideating, prototyping, and stakeholder engagement, supported by a robust series of workshops on the topics of systems change, social impact measurement, and social enterprise business models. Participants also benefit from mentorship, an expansive IDEAS alumni network, and connections with partners across MIT’s innovation ecosystem. 

“IDEAS continues to serve as a critical home to MIT students determined to meaningfully address complex systems challenges by building social enterprises that prioritize social impact and sustainability over profit,” said Lauren Tyger, the PKG Center’s assistant dean of social innovation, who has overseen the program since 2023. 

Voices of innovation

For many of this year’s participants, IDEAS offered the chance to turn their academic and professional experience into real-world impact. Blake Blaze, co-founder of SamWise, was inspired to design a platform that provides personalized education for incarcerated students after teaching classes in Boston-area jails and prisons in partnership with The Educational Justice Institute (TEJI) at MIT.

“Our team began the year motivated by a good idea, but IDEAS gave us the frameworks, mindset, and, more simply, the language to be effective collaborators with the communities we aim to serve,” said Blaze. “We learned that sometimes building technology for a customer requires more than product-market fit — it requires proper orientation for meaningful outcomes and impact.”

Franny Xi Wu, who co-founded China Dispossession Watch, a platform to document and raise awareness of grassroots anti-displacement activism in China, highlighted the niche space that IDEAS occupies within the entrepreneurship ecosystem. “IDEAS provided crucial support by helping us achieve federated, trust-based program rollout rather than rapid extractive scaling, pursue diversified funding aligned with community-driven incentives, and find like-minded collaborators equally invested in human rights and spatial justice.” 

A network of alumni and other volunteers plays an invaluable mentorship role in IDEAS, fostering remarkable growth in their mentees over the course of the semester. 

“Engaging with mentors, judges, and peers profoundly validated our vision, reinforcing our confidence to pursue what initially felt like audacious goals,” said Xi Wu. “Their insightful feedback and genuine encouragement created a supportive environment that inspired and energized us. They also provided us valuable perspectives on how to effectively launch and scale social ventures, communicate compellingly with funders, and navigate the multifaceted challenges in impact entrepreneurship.”

“Being a PKG IDEAS mentor for the last two years has been an incredible experience. I have met a group of inspiring entrepreneurs trying to solve big problems, helped them on their journeys, and developed my own mentoring skills along the way,” said IDEAS mentor Dheera Ananthakrishnan SM ’90, EMBA ’23. “The PKG network is an incredible resource, a reinforcing loop, giving back so much more than it gets — I’m so proud to be a part of it. I look forward to seeing the impact of IDEAS teams as they continue on their journey, and I am excited to mentor and learn with the MIT PKG Center in the future.”

Top teams recognized with over $60K in awards

The 2025 IDEAS Challenge culminated with the announcement of this year’s winners. Teams were evaluated by a panel of volunteer judges representing a wide range of industries, who assessed each proposal for innovation, feasibility, and potential for social impact. Eight teams were selected to receive awards and additional mentorship to jump-start their social innovations. 

The showcase was not just a celebration of projects — it was a testament to the value of systems-driven design, collaborative problem-solving, and sustained engagement with community partners.

The 2025 grantees include:

  • $20,000 award: SamWise is an AI-powered oral assessment tool that provides personalized education for incarcerated students, overcoming outdated testing methods. By leveraging large language models, it enhances learning engagement and accessibility.
  • $15,000 award: China Dispossession Watch is developing a digital platform to document and raise awareness of grassroots anti-displacement activism and provide empirical analysis of forced expropriation and demolition in China.
  • $10,000 award: Liberatory Computing is an educational framework that empowers African-American youth to use data science and AI to address systemic inequities.
  • $7,500 award: POLLEN is a purpose-driven card game and engagement framework designed to spark transnational conversations around climate change and disaster preparedness.
  • $5,000 award: Helix Carbon is transforming carbon conversion by producing electrolyzers with enhanced system lifetimes, enabling the onsite conversion of carbon dioxide into useful chemicals at industrial facilities.
  • $2,000 award: Forma Systems has developed a breakthrough in concrete floor design, using up to 72 percent less cement and 67 percent less steel, with the potential for significant environmental impact.
  • $2,000 award: Precisia empowers women with real-time, data-driven insights into their hormonal health through micro-needle patch technology, allowing them to make informed decisions about their well-being.
  • $2,000 award: BioBoost is experimenting with converting Caribbean sargassum seaweed waste into carbon-neutral energy using pyrolysis, addressing both the region's energy challenges and the environmental threat of seaweed accumulation.

Looking ahead: Supporting the next generation

As IDEAS nears its 25th anniversary, the PKG Center is launching a year-long celebration and campaign to ensure the program’s longevity and expand its reach. Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering, announced the IDEAS25 campaign during the event.

“Over the past quarter-century, close to 300 teams have launched projects through the support of IDEAS Awards, and several hundred more have entered the challenge — working on projects in over 60 countries,” Ortiz said. “IDEAS has supported student-led work that has had real-world impact across sectors and regions.”

In honor of the program’s 25th year, the PKG Center will measure the collective impact of IDEAS teams, showcase the work of alumni and partners at an Alumni Showcase this fall, and rally support to sustain the program for the next 25 years. 

“Whether you're a past team member, a mentor, a friend of IDEAS, or someone who just learned about the program tonight,” Ortiz said, “we invite you to join us. Let’s keep the momentum going together.”