MIT Latest News

At the Venice Biennale, design through flexible thinking
When the Venice Biennale’s 19th International Architecture Exhibition launches on May 10, its guiding theme will be applying nimble, flexible intelligence to a demanding world — an ongoing focus of its curator, MIT faculty member Carlo Ratti.
The Biennale is the world’s most renowned exhibition of its kind, an international event whose subject matter shifts over time, with a new curator providing new focus every two years. This year, the Biennale’s formal theme is “Intelligens,” the Latin word behind “intelligence,” in English, and “intelligenza,” in Italian — a word that evokes both the exhibition’s international scope and the many ways humans learn, adapt, and create.
“Our title is ‘Intelligens. Natural, artificial, collective,’” notes Ratti, who is a professor of the practice of urban technologies and planning in the MIT School of Architecture and Planning. “One key point is how we can go beyond what people normally think about intelligence, whether in people or AI. In the built environment we deal with many types of feedback and need to leverage all types of intelligence to collect and use it all.”
That applies to the subject of climate change, as adaptation is an ongoing focal point for the design community, whether facing the need to rework structures or to develop new, resilient designs for cities and regions.
“I would emphasize how eager architects are today to play a big role in addressing the big crises we face on the planet we live in,” Ratti says. “Architecture is the only discipline to bring everybody together, because it means rethinking the built environment, the places we all live.”
He adds: “If you think about the fires in Los Angeles, or the floods in Valencia or Bangladesh, or the drought in Sicily, these are cases where architecture and design need to apply feedback and use intelligence.”
Not just sharing design, but creating it
The Venice Biennale is the leading event of its kind globally and one of the earliest: It started with art exhibitions in 1895 and later added biennial shows focused on other facets of culture. Starting in 1980, the Biennale of Architecture was held every two years until the 2020 exhibition — curated by MIT’s Hashim Sarkis — was rescheduled to 2021 due to the Covid-19 pandemic. It now continues in odd-numbered years.
After its May 10 opening, this year’s exhibition runs until Nov. 23.
Ratti is a wide-ranging scholar, designer, and writer, and the longtime director of MIT’s Senseable City Lab, which has been on the leading edge of using data to understand cities as living systems.
Additionally, Ratti is a founding partner of the international design firm Carlo Ratti Associati. He graduated from the Politecnico di Torino and the École Nationale des Ponts et Chaussées in Paris, then earned his MPhil and PhD at Cambridge University. He has authored and co-authored hundreds of publications, including the books “Atlas of the Senseable City” (2023) and “The City of Tomorrow” (2016). Ratti’s work has been exhibited at the Venice Biennale, the Design Museum in Barcelona, the Science Museum in London, and the Museum of Modern Art in New York, among other venues.
In his role as curator of this year’s Biennale, Ratti adapted the traditional format to engage with some of the leading questions design faces. Ratti and the organizers created multiple forums to gather feedback about the exhibition’s possibilities, sifting through responses during the planning process.
Ratti has also publicly called this year’s Biennale a “living lab,” not just an exhibition, in accordance with the idea of learning from feedback and developing designs in response.
Back in 1895, Ratti notes, the Biennale was principally “a place to share existing knowledge, with artists and architects coming together every two years. Today, and for a few decades, you can find almost anything in architecture and art immediately online. I think Biennales can not only be places where you share existing knowledge, but places where you create new knowledge.”
At this moment, he emphasizes, that will often mean listening to nature as we grapple with climate solutions. It also implies recognizing that nature itself inevitably responds to inputs, too.
In this vein, Ratti says, “Remember what the great architect Carlo Scarpa once said: ‘Between a tree and a house, choose the tree.’ I see that as a powerful call to learn from nature — a vast lab of trial and error, guided by feedback loops. Too often in the 20th century, architects believed they had the solution and simply needed to scale it up. The results? Frequently disastrous. Especially now, when adaptability is everything, I believe in a different approach: experimentation, feedback, iteration. That’s the spirit I hope defines this year’s Biennale.”
An MIT touch
This year, MIT will again have a robust presence at the Biennale, even beyond Ratti’s presence as curator. In the first place, he emphasizes, there is a strong team organizing the Biennale. That includes MIT graduate student Claire Gorman, who has taken a year out of her studies to serve as principal assistant to the Biennale curator.
Many of the Biennale’s projects, Gorman observes, “align ecology, technology, and culture in stunning illustrations of the fact that intelligence emerges from the complex behaviors of many parts working together. Visitors to the exhibition will discover robots and artisans collaborating alongside algae, 3D printers, ancient building practices, and new materials. … One of the strengths of the exhibition is that it includes participants who approach similar topics from different points of view.”
Overall, Gorman adds, “Our hope is that visitors will come away from the exhibition with a sense of optimism about the capacity of design fields to unite many forms of expertise.”
Numerous other Institute faculty and researchers are represented as well. For instance, Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), has helped design an installation about using robotics in the restoration of ancient structures. And famed MIT computer scientist Tim Berners-Lee, creator of the World Wide Web, is participating in a Biennale event on intelligence.
“In choosing ‘Intelligens’ as the Venice Biennale theme, Carlo Ratti recognizes that our moment requires a holistic understanding of how different forms of intelligence — from social and ecological to computational and spatial — converge to shape our built environment,” Rus says. “The Biennale offers a timely platform to explore how architecture can mediate between these intelligences, creating buildings and cities that think with and for us.”
Even as the Biennale runs, there is also a separate exhibit in Venice showcasing MIT work in architecture and design. Running from May 10 through Nov. 23, at the Palazzo Diedo, the show, “The Next Earth: Computation, Crisis, Cosmology,” features the work of 40 faculty members in MIT’s Department of Architecture, along with entries from the think tank Antikythera.
Meanwhile, for the Biennale itself, the main exhibition hall, the Arsenale, is open, but other event spaces are being renovated. That means the organizers are using additional spaces in the city of Venice this year to showcase cutting-edge design work and installations.
“We’re turning Venice into a living lab — taking the Biennale beyond its usual borders,” Ratti says. “But there’s a bigger picture: Venice may be the world’s most fragile city, caught between rising seas and the crush of mass tourism. That’s why it could become a true laboratory for the future. Venice today could be a glimpse of the world tomorrow.”
Merging design and computer science in creative ways
The speed with which new technologies hit the market is nothing compared to the speed with which talented researchers find creative ways to use them, train them, even turn them into things we can’t live without. One such researcher is MIT MAD Fellow Alexander Htet Kyaw, a graduate student pursuing dual master’s degrees in architectural studies in computation and in electrical engineering and computer science.
Kyaw takes technologies like artificial intelligence, augmented reality, and robotics, and combines them with gesture, speech, and object recognition to create human-AI workflows that have the potential to interact with our built environment, change how we shop, design complex structures, and make physical things.
One of his latest innovations is Curator AI, for which he and his MIT graduate student partners took first prize — $26,000 in OpenAI products and cash — at the MIT AI Conference’s AI Build: Generative Voice AI Solutions, a weeklong hackathon at MIT with final presentations held last fall in New York City. Working with Kyaw were Richa Gupta (architecture) and Bradley Bunch, Nidhish Sagar, and Michael Won — all from the MIT Department of Electrical Engineering and Computer Science (EECS).
Curator AI is designed to streamline online furniture shopping by providing context-aware product recommendations using AI and AR. The platform uses AR to take the dimensions of a room with locations of windows, doors, and existing furniture. Users can then speak to the software to describe what new furnishings they want, and the system will use a vision-language AI model to search for and display various options that match both the user’s prompts and the room’s visual characteristics.
“Shoppers can choose from the suggested options, visualize products in AR, and use natural language to ask for modifications to the search, making the furniture selection process more intuitive, efficient, and personalized,” Kyaw says. “The problem we’re trying to solve is that most people don’t know where to start when furnishing a room, so we developed Curator AI to provide smart, contextual recommendations based on what your room looks like.” Although Curator AI was developed for furniture shopping, it could be expanded for use in other markets.
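The underlying pattern is straightforward to sketch: constrain the search by the room’s measured geometry, then rank the remaining candidates by how well they match the spoken prompt. Below is a minimal, hypothetical Python illustration of that filter-then-rank idea; the catalog, the embed stub, and the recommend function are invented stand-ins, not Curator AI’s actual code, which relies on AR measurements and a vision-language model.

```python
# Illustrative sketch only: a toy version of the "filter, then rank" pattern
# Curator AI is described as using. The embedding stub is a hashed bag of
# words, a deliberate placeholder for a real vision-language model.
import numpy as np

CATALOG = [
    {"name": "compact oak desk",  "width_cm": 110, "tags": "wood minimal desk"},
    {"name": "mid-century sofa",  "width_cm": 210, "tags": "fabric green sofa"},
    {"name": "narrow bookshelf",  "width_cm": 60,  "tags": "wood tall shelf"},
]

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a learned text embedding (hashed bag of words)."""
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def recommend(prompt: str, free_wall_cm: float):
    """Keep items that physically fit the room, then rank by prompt similarity."""
    q = embed(prompt)
    fits = [item for item in CATALOG if item["width_cm"] <= free_wall_cm]
    return sorted(fits, key=lambda it: -float(embed(it["tags"]) @ q))

for item in recommend("a wooden piece for a small study", free_wall_cm=120):
    print(item["name"])
```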
Another example of Kyaw’s work is Estimate, a product that he and three other graduate students created during the MIT Sloan Product Tech Conference’s hackathon in March 2024. The focus of that competition was to help small businesses; Kyaw and team decided to base their work on a painting company in Cambridge that employs 10 people. Estimate uses AR and an object-recognition AI technology to take the exact measurements of a room and generate a detailed cost estimate for a renovation and/or paint job. It also leverages generative AI to display images of the room or rooms as they might look after painting or renovating, and generates an invoice once the project is complete.
The team won that hackathon and $5,000 in cash. Kyaw’s teammates were Guillaume Allegre, May Khine, and Anna Mathy, all of whom graduated from MIT in 2024 with master’s degrees in business analytics.
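The arithmetic at the core of a tool like Estimate is easy to picture once AR has produced the room’s measurements. Here is a minimal sketch with assumed rates and a hypothetical paint_estimate function; the real product’s pricing logic and AI components are not public.

```python
# A minimal sketch of the kind of estimate such a tool might produce once
# AR has measured the room. Rates and the opening-deduction rule are
# assumptions, not figures from the Estimate product.
def paint_estimate(length_m, width_m, height_m, openings_m2=4.0,
                   coats=2, coverage_m2_per_l=10.0, price_per_l=25.0,
                   labor_rate_per_m2=8.0):
    # Wall area = perimeter * height, minus doors/windows
    wall_area = 2 * (length_m + width_m) * height_m - openings_m2
    paint_litres = coats * wall_area / coverage_m2_per_l
    materials = paint_litres * price_per_l
    labor = wall_area * labor_rate_per_m2
    return {"wall_area_m2": round(wall_area, 1),
            "materials": round(materials, 2),
            "labor": round(labor, 2),
            "total": round(materials + labor, 2)}

print(paint_estimate(5.0, 4.0, 2.7))
```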
In April, Kyaw will give a TEDx talk at his alma mater, Cornell University, in which he’ll describe Curator AI, Estimate, and other projects that use AI, AR, and robotics to design and build things.
One of these projects is Unlog, for which Kyaw combined AR with gesture recognition to build software that takes input from the touch of a fingertip on the surface of a material, or even in the air, to map the dimensions of building components. That’s how Unlog — a towering art sculpture made from ash logs that stands on the Cornell campus — came about.
Unlog represents the possibility that structures can be built directly from a whole log, rather than having the log travel to a lumber mill to be turned into planks or two-by-fours, then shipped to a wholesaler or retailer. It’s a good representation of Kyaw’s desire to use building materials in a more sustainable way. A paper on this work, “Gestural Recognition for Feedback-Based Mixed Reality Fabrication: A Case Study of the UnLog Tower,” was published by Kyaw, Leslie Lok, Lawson Spencer, and Sasa Zivkovic in the Proceedings of the 5th International Conference on Computational Design and Robotic Fabrication, January 2024.
Another system Kyaw developed integrates physics simulation, gesture recognition, and AR to design active bending structures built with bamboo poles. Gesture recognition allows users to manipulate digital bamboo modules in AR, and the physics simulation is integrated to visualize how the bamboo bends and where to attach the bamboo poles in ways that create a stable structure. This work appeared in the Proceedings of the 41st Education and Research in Computer Aided Architectural Design in Europe, August 2023, as “Active Bending in Physics-Based Mixed Reality: The Design and Fabrication of a Reconfigurable Modular Bamboo System.”
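A physics simulation for active bending ultimately has to answer one basic question: how tightly can a given pole bend before it is overstressed? The sketch below works that out from standard beam theory (outer-fiber stress sigma = E * (d/2) / R); the material constants are rough literature values for bamboo, not numbers from Kyaw’s system.

```python
# A back-of-the-envelope check in the spirit of the physics feedback the
# system provides. Material constants are rough literature values for bamboo.
E_BAMBOO = 18e9      # Young's modulus, Pa (order of magnitude for bamboo)
SIGMA_ALLOW = 70e6   # allowable bending stress, Pa (conservative)

def min_bend_radius(diameter_m: float) -> float:
    """Bending a pole to radius R puts sigma = E * (d/2) / R in the outer
    fiber; solving sigma = SIGMA_ALLOW gives the tightest safe radius."""
    return E_BAMBOO * (diameter_m / 2) / SIGMA_ALLOW

for d_cm in (2, 4, 6):
    print(f"{d_cm} cm pole: safe bend radius >= {min_bend_radius(d_cm / 100):.1f} m")
```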
Kyaw pitched a similar idea using bamboo modules to create deployable structures last year to MITdesignX, an MIT MAD program that selects promising startups and provides coaching and funding to launch them. Kyaw has since founded BendShelters to build prefabricated, modular bamboo shelters and community spaces for refugees and displaced persons in Myanmar, his home country.
“Where I grew up, in Myanmar, I’ve seen a lot of day-to-day effects of climate change and extreme poverty,” Kyaw says. “There’s a huge refugee crisis in the country, and I want to think about how I can contribute back to my community.”
His work with BendShelters has been recognized by MIT Sandbox, the PKG Social Innovation Challenge, and the Amazon Robotics Prize for Social Good.
At MIT, Kyaw is collaborating with Professor Neil Gershenfeld, director of the Center for Bits and Atoms, and PhD student Miana Smith to use speech recognition, 3D generative AI, and robotic arms to create a workflow that can build objects in an accessible, on-demand, and sustainable way. Kyaw holds bachelor’s degrees in architecture and computer science from Cornell. Last year, he was awarded an SJA Fellowship from the Steve Jobs Archive, which provides funding for projects at the intersection of technology and the arts.
“I enjoy exploring different kinds of technologies to design and make things,” Kyaw says. “Being part of MAD has made me think about how all my work connects, and helped clarify my intentions. My research vision is to design and develop systems and products that enable natural interactions between humans, machines, and the world around us.”
New chip tests cooling solutions for stacked microelectronics
As demand grows for more powerful and efficient microelectronics systems, industry is turning to 3D integration — stacking chips on top of each other. This vertically layered architecture could allow high-performance processors, like those used for artificial intelligence, to be packaged closely with other highly specialized chips for communication or imaging. But technologists everywhere face a major challenge: how to prevent these stacks from overheating.
Now, MIT Lincoln Laboratory has developed a specialized chip to test and validate cooling solutions for packaged chip stacks. The chip dissipates extremely high power, mimicking high-performance logic chips, to generate heat through the silicon layer and in localized hot spots. Then, as cooling technologies are applied to the packaged stack, the chip measures temperature changes. When sandwiched in a stack, the chip will allow researchers to study how heat moves through stack layers and benchmark progress in keeping them cool.
"If you have just a single chip, you can cool it from above or below. But if you start stacking several chips on top of each other, the heat has nowhere to escape. No cooling methods exist today that allow industry to stack multiples of these really high-performance chips," says Chenson Chen, who led the development of the chip with Ryan Keech, both of the laboratory’s Advanced Materials and Microsystems Group.
The benchmarking chip is now being used at HRL Laboratories, a research and development company co-owned by Boeing and General Motors, as they develop cooling systems for 3D heterogeneous integrated (3DHI) systems. Heterogeneous integration refers to the stacking of silicon chips with non-silicon chips, such as III-V semiconductors used in radio-frequency (RF) systems.
"RF components can get very hot and run at very high powers — it adds an extra layer of complexity to 3D integration, which is why having this testing capability is so needed," Keech says.
The Defense Advanced Research Projects Agency (DARPA) funded the laboratory's development of the benchmarking chip to support the HRL program. All of this research stems from DARPA's Miniature Integrated Thermal Management Systems for 3D Heterogeneous Integration (Minitherms3D) program.
For the Department of Defense, 3DHI opens new opportunities for critical systems. For example, 3DHI could increase the range of radar and communication systems, enable the integration of advanced sensors on small platforms such as uncrewed aerial vehicles, or allow artificial intelligence data to be processed directly in fielded systems instead of remote data centers.
The test chip was developed through collaboration between circuit designers, electrical testing experts, and technicians in the laboratory's Microelectronics Laboratory.
The chip serves two functions: generating heat and sensing temperature. To generate heat, the team designed circuits that could operate at very high power densities, in the kilowatts-per-square-centimeter range, comparable to the projected power demands of high-performance chips today and into the future. They also replicated the layout of circuits in those chips, allowing the test chip to serve as a realistic stand-in.
"We adapted our existing silicon technology to essentially design chip-scale heaters," says Chen, who brings years of complex integration and chip design experience to the program. In the 2000s, he helped the laboratory pioneer the fabrication of two- and three-tier integrated circuits, leading early development of 3D integration.
The chip's heaters emulate both the background levels of heat within a stack and localized hot spots. Hot spots often occur in the most buried and inaccessible areas of a chip stack, making it difficult for 3D-chip developers to assess whether cooling schemes, such as microchannels delivering cold liquid, are reaching those spots and are effective enough.
That's where the temperature-sensing elements come in. Distributed across the chip are what Chen likens to "tiny thermometers" that read out the temperature at multiple locations as coolants are applied.
These thermometers are actually diodes, devices that let current flow through a circuit once a voltage is applied. As the diodes heat up, the current-to-voltage ratio changes. "We're able to check a diode's performance and know that it's 200 degrees C, or 100 degrees C, or 50 degrees C, for example," Keech says. "We thought creatively about how devices could fail from overheating, and then used those same properties to design useful measurement tools."
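A minimal sketch shows how such a diode doubles as a thermometer: at a fixed bias current, a silicon diode's forward voltage falls roughly linearly with temperature, at about 2 mV per degree C. The calibration constants below are generic textbook assumptions, not the Lincoln Laboratory chip's measured values.

```python
# Converting a diode's forward voltage to temperature with a linear
# calibration. Constants are typical silicon-diode assumptions.
V_REF_25C = 0.65        # forward voltage at 25 C, volts (typical silicon diode)
SLOPE_V_PER_C = -0.002  # about -2 mV per degree C at constant bias current

def diode_temperature_c(v_forward: float) -> float:
    """Invert the linear calibration to recover temperature from voltage."""
    return 25.0 + (v_forward - V_REF_25C) / SLOPE_V_PER_C

for v in (0.65, 0.50, 0.30):
    print(f"Vf = {v:.2f} V  ->  ~{diode_temperature_c(v):.0f} C")
```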
Chen and Keech — along with other design, fabrication, and electrical test experts across the laboratory — are now collaborating with HRL Laboratories researchers as they couple the chip with novel cooling technologies, and integrate those technologies into a 3DHI stack that could boost RF signal power. "We need to cool the heat equivalent of more than 190 laptop CPUs [central processing units], but in the size of a single CPU package," Christopher Roper, co-principal investigator at HRL, said in a recent press release announcing their program.
According to Keech, the rapid timeline for delivering the chip was a challenge overcome by teamwork through all phases of the chip's design, fabrication, testing, and 3D heterogeneous integration.
"Stacked architectures are considered the next frontier for microelectronics," he says. "We want to help the U.S. government get ahead in finding ways to integrate them effectively and enable the highest performance possible for these chips."
The laboratory team presented this work at the annual Government Microcircuit Applications and Critical Technology Conference (GOMACTech), held March 17-20.
A new computational framework illuminates the hidden ecology of diseased tissues
To understand what drives disease progression in tissues, scientists need more than just a snapshot of cells in isolation — they need to see where the cells are, how they interact, and how that spatial organization shifts across disease states. A new computational method called MESA (Multiomics and Ecological Spatial Analysis), detailed in a study published in Nature Genetics, is helping researchers study diseased tissues in more meaningful ways.
The work details the results of a collaboration between researchers from MIT, Stanford University, Weill Cornell Medicine, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard, and was led by the Stanford team.
MESA brings an ecology-inspired lens to tissue analysis. It offers a pipeline to interpret spatial omics data — the product of cutting-edge technology that captures molecular information along with the location of cells in tissue samples. These data provide a high-resolution map of tissue “neighborhoods,” and MESA helps make sense of the structure of that map.
“By integrating approaches from traditionally distinct disciplines, MESA enables researchers to better appreciate how tissues are locally organized and how that organization changes in different disease contexts, powering new diagnostics and the identification of new targets for preventions and cures,” says Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an institute member of the Broad Institute and a member of the Ragon Institute.
“In ecology, people study biodiversity across regions — how animal species are distributed and interact,” explains Bokai Zhu, MIT postdoc and author on the study. “We realized we could apply those same ideas to cells in tissues. Instead of rabbits and snakes, we analyze T cells and B cells.”
By treating cell types like ecological species, MESA quantifies “biodiversity” within tissues and tracks how that diversity changes in disease. For example, in liver cancer samples, the method revealed zones where tumor cells consistently co-occurred with macrophages, suggesting these regions may drive unique disease outcomes.
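The ecological analogy can be made concrete in a few lines of Python: treat the annotated cell types in a spatial neighborhood like species counts and compute a Shannon diversity index. This is only an illustration of the core idea; MESA's actual pipeline is far richer.

```python
# Shannon diversity of cell types in two invented tissue neighborhoods,
# treating cell types the way an ecologist treats species.
import math
from collections import Counter

def shannon_diversity(cell_types):
    """Shannon entropy H = -sum(p_i * ln p_i) over cell-type frequencies."""
    counts = Counter(cell_types)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

tumor_margin = ["T cell", "B cell", "macrophage", "tumor", "tumor", "T cell"]
tumor_core = ["tumor"] * 5 + ["macrophage"]

print(f"margin diversity: {shannon_diversity(tumor_margin):.2f}")  # higher: mixed
print(f"core diversity:   {shannon_diversity(tumor_core):.2f}")    # lower: uniform
```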
“Our method reads tissues like ecosystems, uncovering cellular ‘hotspots’ that mark early signs of disease or treatment response,” Zhu adds. “This opens new possibilities for precision diagnostics and therapy design.”
MESA also offers another major advantage: It can computationally enrich tissue data without the need for more experiments. Using publicly available single-cell datasets, the tool transfers additional information — such as gene expression profiles — onto existing tissue samples. This approach deepens understanding of how spatial domains function, especially when comparing healthy and diseased tissue.
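One simple way to picture this kind of enrichment is nearest-neighbor transfer: match each spatial cell to its closest reference cell in a shared feature space, then borrow the reference cell's fuller expression profile. The sketch below uses random data and a plain Euclidean match; MESA's actual transfer method is more sophisticated.

```python
# Toy nearest-neighbor transfer of expression profiles from a single-cell
# reference onto spatial data. All data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 20))  # 100 reference cells x 20 shared genes
ref_full = rng.normal(size=(100, 500))  # same cells x 500 additional genes
spatial = rng.normal(size=(30, 20))     # 30 spatial cells, shared genes only

# For each spatial cell, find its nearest reference cell (Euclidean distance)
d = np.linalg.norm(spatial[:, None, :] - reference[None, :, :], axis=2)
nearest = d.argmin(axis=1)

# Borrow the matched cells' full expression profiles as imputed values
spatial_enriched = ref_full[nearest]    # 30 x 500
print(spatial_enriched.shape)
```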
In tests across multiple datasets and tissue types, MESA uncovered spatial structures and key cell populations that were previously overlooked. It integrates different types of omics data, such as transcriptomics and proteomics, and builds a multilayered view of tissue architecture.
Currently available as a Python package, MESA is designed for academic and translational research. Although spatial omics is still too resource-intensive for routine in-hospital clinical use, the technology is gaining traction among pharmaceutical companies, particularly for drug trials where understanding tissue responses is critical.
“This is just the beginning,” says Zhu. “MESA opens the door to using ecological theory to unravel the spatial complexity of disease — and ultimately, to better predict and treat it.”
Gene circuits enable more precise control of gene therapy
Many diseases are caused by a missing or defective copy of a single gene. For decades, scientists have been working on gene therapy treatments that could cure such diseases by delivering a new copy of the missing gene to the affected cells.
Despite those efforts, very few gene therapy treatments have been approved by the FDA. One of the challenges to developing these treatments has been achieving control over how much the new gene is expressed in cells — too little and it won’t succeed, too much and it could cause serious side effects.
To help achieve more precise control of gene therapy, MIT engineers have tuned and applied a control circuit that can keep expression levels within a target range. In human cells, they showed that they could use this method to deliver genes that could help treat diseases including fragile X syndrome, a disorder that leads to intellectual disability and other developmental problems.
“In theory, gene supplementation can solve monogenic disorders that are very diverse but have a relatively straightforward gene therapy fix if you could control the therapy well enough,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering and the senior author of the new study.
MIT graduate student Kasey Love is the lead author of the paper, which appears today in Cell Systems. Other authors of the paper include MIT graduate students Christopher Johnstone, Emma Peterman, and Stephanie Gaglione, and Michael Birnbaum, an associate professor of biological engineering at MIT.
Delivering genes
While gene therapy holds promise for treating a variety of diseases, including hemophilia and sickle cell anemia, only a handful of treatments have been approved so far, for an inherited retinal disease and certain blood cancers.
Most gene therapy approaches use a virus to deliver a new copy of a gene, which is then integrated into the DNA of host cells. Some cells may take up many copies of the gene, while others don’t receive any.
“Simple overexpression of that payload can result in a really wide range of expression levels in the target genes as they take up different numbers of copies of those genes or just have different expression levels,” Love says. “If it's not expressing enough, that defeats the purpose of the therapy. But on the other hand, expressing at too high levels is also a problem, as that payload can be toxic.”
To try to overcome this, scientists have experimented with different types of control circuits that constrain expression of the therapeutic gene. In this study, the MIT team decided to use a type of circuit called an incoherent feedforward loop (IFFL).
In an IFFL circuit, activation of the target gene simultaneously activates production of a molecule that suppresses gene expression. One type of molecule that can be used to achieve that suppression is microRNA — a short RNA sequence that binds to messenger RNA, preventing it from being translated into protein.
In this study, the MIT team designed an IFFL circuit, called “ComMAND” (Compact microRNA-mediated attenuator of noise and dosage), so that a microRNA strand that represses mRNA translation is encoded within the therapeutic gene. The microRNA is located within a short segment called an intron, which gets spliced out of the gene when it is transcribed into mRNA. This means that whenever the gene is turned on, both the mRNA and the microRNA that represses it are produced in roughly equal amounts.
This approach allows the researchers to control the entire ComMAND circuit with just one promoter — the DNA site where gene transcription is turned on. By swapping in promoters of different strengths, the researchers can tailor how much of the therapeutic gene will be produced.
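The buffering effect of an IFFL can be seen directly in its steady-state algebra: because the microRNA level rises with gene dose, repression strengthens exactly when expression would otherwise overshoot. The short sketch below uses arbitrary illustrative parameters, not measurements from the ComMAND study, to compare open-loop expression with the feedforward-limited version as copy number grows.

```python
# Steady-state sketch of why an incoherent feedforward loop buffers gene dose.
# Both mRNA and microRNA are produced in proportion to copy number n; the
# microRNA degrades the mRNA. All parameter values are arbitrary illustrations.
K = 1.0       # transcription rate per gene copy
DEG_M = 1.0   # basal mRNA degradation rate
DEG_MI = 1.0  # microRNA degradation rate
GAMMA = 5.0   # microRNA-mediated mRNA degradation strength

def mrna_steady_state(n_copies: float) -> float:
    mirna = n_copies * K / DEG_MI                  # microRNA scales with dose
    return n_copies * K / (DEG_M + GAMMA * mirna)  # so repression does too

print(" n   open-loop   IFFL")
for n in (1, 2, 5, 10, 50):
    open_loop = n * K / DEG_M  # no feedforward repression: linear in dose
    print(f"{n:2d}   {open_loop:7.2f}   {mrna_steady_state(n):5.3f}")
# Open-loop output grows 50-fold; the IFFL output saturates near DEG_MI/GAMMA.
```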
In addition to offering tighter control, the circuit’s compact design allows it to be carried on a single delivery vehicle, such as a lentivirus or adeno-associated virus, which could improve the manufacturability of these therapies. Both of those viruses are frequently used to deliver therapeutic cargoes.
“Other people have developed microRNA-based incoherent feedforward loops, but what Kasey has done is put it all on a single transcript, and she showed that this gives the best possible control when you have variable delivery to cells,” Galloway says.
Precise control
To demonstrate this system, the researchers designed ComMAND circuits that could deliver the gene FXN, which is mutated in Friedreich’s ataxia — a disorder that affects the heart and nervous system. They also delivered the gene Fmr1, whose dysfunction causes fragile X syndrome. In tests in human cells, they showed that they could tune gene expression levels to about eight times the levels normally seen in healthy cells.
Without ComMAND, gene expression was more than 50 times the normal level, which could pose safety risks. Further tests in animal models would be needed to determine the optimal levels, the researchers say.
The researchers also performed tests in rat neurons, mouse fibroblasts, and human T-cells. For those cells, they delivered a gene that encodes a fluorescent protein, so they could easily measure the gene expression levels. In those cells, too, the researchers found that they could control gene expression levels more precisely than without the circuit.
The researchers now plan to study whether they could use this approach to deliver genes at a level that would restore normal function and reverse signs of disease, either in cultured cells or animal models.
“There's probably some tuning that would need to be done to the expression levels, but we understand some of those design principles, so if we needed to tune the levels up or down, I think we'd know potentially how to go about that,” Love says.
Other diseases that this approach could be applied to include Rett syndrome, muscular dystrophy, and spinal muscular atrophy, the researchers say.
“The challenge with a lot of those is they're also rare diseases, so you don't have large patient populations,” Galloway says. “We're trying to build out these tools that are robust so people can figure out how to do the tuning, because the patient populations are so small and there isn't a lot of funding for solving some of these disorders.”
The research was funded by the National Institute of General Medical Sciences, the National Science Foundation, the Institute for Collaborative Biotechnologies, and the Air Force Research Laboratory.
The chemistry of creativity
Senior Madison Wang, a double major in creative writing and chemistry, developed her passion for writing in middle school. Her interest in chemistry fit nicely alongside her commitment to producing engaging narratives.
Wang believes that world-building in stories supported by science and research can make for a more immersive reader experience.
“In science and in writing, you have to tell an effective story,” she says. “People respond well to stories.”
A native of Buffalo, New York, Wang applied early action for admission to MIT and learned quickly that the Institute was where she wanted to be. “It was a really good fit,” she says. “There was positive energy and vibes, and I had a great feeling overall.”
The power of science and good storytelling
“Chemistry is practical, complex, and interesting,” says Wang. “It’s about quantifying natural laws and understanding how reality works.”
Chemistry and writing both help us “see the world’s irregularity,” she continues. Together, they can erase the artificial and arbitrary line separating one from the other and work in concert to tell a more complete story about the world, the ways in which we participate in building it, and how people and objects exist in and move through it.
“Understanding magnetism, material properties, and believing in the power of magic in a good story … these are why we’re drawn to explore,” she says. “Chemistry describes why things are the way they are, and I use it for world-building in my creative writing.”
Wang lauds MIT’s creative writing program and cites a course she took with Comparative Media Studies/Writing Professor and Pulitzer Prize winner Junot Díaz as an affirmation of her choice. Seeing and understanding the world through the eyes of a scientist — its building blocks, the ways the pieces fit and function together — helps explain her passion for chemistry, especially inorganic and physical chemistry.
Wang cites the work of authors like Sam Kean and Knight Science Journalism Program Director Deborah Blum as part of her inspiration to study science. The books “The Disappearing Spoon” by Kean and “The Poisoner’s Handbook” by Blum “both present historical perspectives, opting for a story style to discuss the events and people involved,” she says. “They each put a lot of work into bridging the gap between what can sometimes be sterile science and an effective narrative that gets people to care about why the science matters.”
Genres like fantasy and science fiction are complementary, according to Wang. “Constructing an effective world means ensuring readers understand characters’ motivations — the ‘why’ — and ensuring it makes sense,” she says. “It’s also important to show how actions and their consequences influence and motivate characters.”
As she explores the world’s building blocks inside and outside the classroom, Wang works to navigate multiple genres in her writing, as with her studies in chemistry. “I like romance and horror, too,” she says. “I have gripes with committing to a single genre, so I just take whatever I like from each and put them in my stories.”
In chemistry, Wang favors an environment in which scientists can regularly test their ideas. “It’s important to ground chemistry in the real world to create connections for students,” she argues. Advancements in the field have occurred, she notes, because scientists could exit the realm of theory and apply ideas practically.
“Fritz Haber’s work on ammonia synthesis revolutionized approaches to food supply chains,” she says, referring to the German chemist and Nobel laureate. “Converting nitrogen and hydrogen gas to ammonia for fertilizer marked a dramatic shift in how farming could work.” This kind of work could only result from the consistent, controlled, practical application of the theories scientists consider in laboratory environments.
A future built on collaboration and cooperation
Watching the world change dramatically and seeing humanity struggle to grapple with the implications of phenomena like climate change, political unrest, and shifting alliances, Wang emphasizes the importance of deconstructing silos in academia and the workplace. Technology can be a tool for harm, she notes, so inviting more people inside previously segregated spaces helps everyone.
Criticism, Wang believes, is a valuable tool for continuous improvement in both chemistry and writing. Effective communication, explaining complex concepts, and partnering to develop long-term solutions are invaluable when working at the intersection of history, art, and science. In writing, Wang says, criticism can help pinpoint where a story needs work and shape interesting ideas.
“We’ve seen the positive results that can occur with effective science writing, which requires rigor and fact-checking,” she says. “MIT’s cross-disciplinary approach to our studies, alongside feedback from teachers and peers, is a great set of tools to carry with us regardless of where we are.”
Wang explores connections between science and stories in her leisure time, too. “I’m a member of MIT’s Anime Club and I enjoy participating in MIT’s Sport Taekwondo Club,” she says. The competitive side of tae kwon do feeds her competitive drive and gets her out of her head. Her participation in DAAMIT (Digital Art and Animation at MIT) creates connections with different groups of people and gives her ideas she can use to tell better stories. “It’s fascinating exploring others’ minds,” she says.
Wang argues that there’s a false divide between science and the humanities and wants the work she does after graduation to bridge that divide. “Writing and learning about science can help,” she asserts. “Fields like conservation and history allow for continued exploration of that intersection.”
Ultimately, Wang believes it’s important to examine narratives carefully and to question notions of science’s inherent superiority over humanities fields. “The humanities and science have equal value,” she says.
Artificial intelligence enhances air mobility planning
Every day, hundreds of chat messages flow between pilots, crew, and controllers of the Air Mobility Command's 618th Air Operations Center (AOC). These controllers direct a fleet of roughly 1,000 aircraft, juggling variables to determine which routes to fly, how much time fueling or loading supplies will take, and who can fly those missions. Their mission planning allows the U.S. Air Force to quickly respond to national security needs around the globe.
"It takes a lot of work to get a missile defense system across the world, for example, and this coordination used to be done through phone and email. Now, we are using chat, which creates opportunities for artificial intelligence to enhance our workflows," says Colonel Joseph Monaco, the director of strategy at the 618th AOC, which is the Department of Defense's largest air operations center.
The 618th AOC is sponsoring Lincoln Laboratory to develop these artificial intelligence tools through a project called Conversational AI Technology for Transition (CAITT).
During a visit to Lincoln Laboratory from the 618th AOC's headquarters at Scott Air Force Base in Illinois, Colonel Monaco, Lieutenant Colonel Tim Heaton, and Captain Laura Quitiquit met with laboratory researchers to discuss CAITT. CAITT is a part of a broader effort to transition AI technology into a major Air Force modernization initiative, called the Next Generation Information Technology for Mobility Readiness Enhancement (NITMRE).
The type of AI being used in this project is natural language processing (NLP), which allows models to read and process human language. "We are utilizing NLP to map major trends in chat conversations, retrieve and cite specific information, and identify and contextualize critical decision points," says Courtland VanDam, a researcher in Lincoln Laboratory's AI Technology and Systems Group, which is leading the project. CAITT encompasses a suite of tools leveraging NLP.
One of the most mature tools, topic summarization, extracts trending topics from chat messages and formats those topics in a user-friendly display highlighting critical conversations and emerging issues. For example, a trending topic might read, "Crew members missing Congo visas, potential for delay." The entry shows the number of chats related to the topic and summarizes in bullet points the main points of conversations, linking back to specific chat exchanges.
"Our missions are very time-dependent, so we have to synthesize a lot of information quickly. This feature can really cue us as to where our efforts should be focused," says Monaco.
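A toy version of topic summarization can be built from classic NLP components: vectorize messages with TF-IDF, cluster them, and label each cluster by its dominant terms. The sketch below does exactly that with scikit-learn on invented messages; CAITT's production tools go well beyond this baseline.

```python
# Illustrative trending-topic extraction: TF-IDF + k-means, with each
# cluster labeled by its top terms. Messages are invented examples.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

messages = [
    "crew missing Congo visas, departure may slip",
    "visa paperwork for Congo crew still pending",
    "fuel stop added for tail 0421",
    "tail 0421 refuel window confirmed",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(messages)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

terms = vec.get_feature_names_out()
for cluster in range(2):
    centroid = np.asarray(X[labels == cluster].mean(axis=0)).ravel()
    top = [terms[i] for i in centroid.argsort()[-3:][::-1]]
    n = int((labels == cluster).sum())
    print(f"topic {cluster} ({n} chats): {', '.join(top)}")
```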
Another tool in production is semantic search. This tool improves upon the chat service's search engine, which currently returns empty results if chat messages do not contain every word in the query. Using the new tool, users can ask questions in a natural language format, such as why a specific aircraft is delayed, and receive intelligent results. "It incorporates a search model based on neural networks that can understand the user intent of the query and go beyond term matching," says VanDam.
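In sketch form, semantic search replaces term matching with embedding similarity. The example below uses the open-source sentence-transformers library as a stand-in encoder; the actual NITMRE/CAITT search model is not public, so treat this purely as an illustration of the intent-matching behavior VanDam describes.

```python
# Embedding-based retrieval: rank chat messages by cosine similarity to a
# natural-language query, with no shared keywords required.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

chats = [
    "Tail 0421 delayed two hours for a hydraulic leak inspection",
    "Loading complete on the C-17, pallets secured",
    "Weather hold at destination, expect revised arrival time",
]
query = "why is that aircraft late?"

scores = util.cos_sim(model.encode(query), model.encode(chats))[0]
best = int(scores.argmax())
# "late" shares no terms with "delayed ... inspection", yet it still matches.
print(chats[best], float(scores[best]))
```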
Other tools under development aim to automatically add users to chat conversations deemed relevant to their expertise, predict the amount of ground time needed to unload specific types of cargo from aircraft, and summarize key processes from regulatory documents as a guide to operators as they develop mission plans.
The CAITT project grew out of the DAF–MIT AI Accelerator, a three-pronged effort between MIT, Lincoln Laboratory, and the Department of the Air Force (DAF) to develop and transition AI algorithms and systems to advance both the DAF and society. "Through our involvement in the AI Accelerator via the NITMRE project, we realized we could do something innovative with all of the unstructured chat information in the 618th AOC," says Heaton.
As laboratory researchers advance their prototypes of CAITT tools, they have begun to transition them to the 402nd Software Engineering Group, a software provider for the Department of Defense. That group will implement the tools into the operational software environment in use by the 618th AOC.
Designing a new way to optimize complex coordinated systems
Coordinating complicated interactive systems, whether it’s the different modes of transportation in a city or the various components that must work together to make an effective and efficient robot, is an increasingly important subject for software designers to tackle. Now, researchers at MIT have developed an entirely new way of approaching these complex problems, using simple diagrams as a tool to reveal better approaches to software optimization in deep-learning models.
They say the new method makes addressing these complex tasks so simple that it can be reduced to a drawing that would fit on the back of a napkin.
The new approach is described in the journal Transactions of Machine Learning Research, in a paper by incoming doctoral student Vincent Abbott and Professor Gioele Zardini of MIT’s Laboratory for Information and Decision Systems (LIDS).
“We designed a new language to talk about these new systems,” Zardini says. This new diagram-based “language” is heavily based on something called category theory, he explains.
It all has to do with designing the underlying architecture of computer algorithms — the programs that will actually end up sensing and controlling the various different parts of the system that’s being optimized. “The components are different pieces of an algorithm, and they have to talk to each other, exchange information, but also account for energy usage, memory consumption, and so on.” Such optimizations are notoriously difficult because each change in one part of the system can in turn cause changes in other parts, which can further affect other parts, and so on.
The researchers decided to focus on deep-learning algorithms, currently a hot topic of research. Deep learning is the basis of today’s large artificial intelligence models, including large language models such as ChatGPT and image-generation models such as Midjourney. These models manipulate data through a “deep” series of matrix multiplications interspersed with other operations. The numbers within the matrices are parameters, updated during long training runs to capture complex patterns. Because models consist of billions of parameters, computation is expensive, which makes improved resource usage and optimization invaluable.
Diagrams can represent details of the parallelized operations that deep-learning models consist of, revealing the relationships between algorithms and the parallelized graphics processing unit (GPU) hardware they run on, supplied by companies such as NVIDIA. “I’m very excited about this,” says Zardini, because “we seem to have found a language that very nicely describes deep learning algorithms, explicitly representing all the important things, which is the operators you use,” for example the energy consumption, the memory allocation, and any other parameter that you’re trying to optimize for.
Much of the progress within deep learning has stemmed from resource efficiency optimizations. The latest DeepSeek model showed that a small team can compete with top models from OpenAI and other major labs by focusing on resource efficiency and the relationship between software and hardware. Typically, in deriving these optimizations, he says, “people need a lot of trial and error to discover new architectures.” For example, a widely used optimization program called FlashAttention took more than four years to develop, he says. But with the new framework they developed, “we can really approach this problem in a more formal way.” And all of this is represented visually in a precisely defined graphical language.
But the methods that have been used to find these improvements “are very limited,” he says. “I think this shows that there’s a major gap, in that we don’t have a formal systematic method of relating an algorithm to either its optimal execution, or even really understanding how many resources it will take to run.” But now, with the new diagram-based method they devised, such a system exists.
Category theory, which underlies this approach, is a way of mathematically describing the different components of a system and how they interact in a generalized, abstract manner. Different perspectives can be related. For example, mathematical formulas can be related to algorithms that implement them and use resources, or descriptions of systems can be related to robust “monoidal string diagrams.” These visualizations allow you to directly play around and experiment with how the different parts connect and interact. What they developed, he says, amounts to “string diagrams on steroids,” which incorporates many more graphical conventions and many more properties.
“Category theory can be thought of as the mathematics of abstraction and composition,” Abbott says. “Any compositional system can be described using category theory, and the relationship between compositional systems can then also be studied.” Algebraic rules that are typically associated with functions can also be represented as diagrams, he says. “Then, a lot of the visual tricks we can do with diagrams, we can relate to algebraic tricks and functions. So, it creates this correspondence between these different systems.”
As a result, he says, “this solves a very important problem, which is that we have these deep-learning algorithms, but they’re not clearly understood as mathematical models.” But by representing them as diagrams, it becomes possible to approach them formally and systematically, he says.
One thing this enables is a clear visual understanding of the way parallel real-world processes can be represented by parallel processing in multicore computer GPUs. “In this way,” Abbott says, “diagrams can both represent a function, and then reveal how to optimally execute it on a GPU.”
The “attention” algorithm is used by deep-learning algorithms that require general, contextual information, and is a key phase of the serialized blocks that constitute large language models such as ChatGPT. FlashAttention is an optimization that took years to develop, but resulted in a sixfold improvement in the speed of attention algorithms.
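For reference, the baseline attention computation that FlashAttention reorganizes can be written in a few lines of numpy. The version below materializes the full N x N score matrix, which is exactly the memory bottleneck FlashAttention's tiled schedule avoids; it is a sketch of the standard math, not of FlashAttention itself.

```python
# Baseline scaled dot-product attention in numpy.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V, computed naively."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # N x N: the memory bottleneck
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(attention(Q, K, V).shape)  # (8, 4)
```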
Applying their method to the well-established FlashAttention algorithm, Zardini says that “here we are able to derive it, literally, on a napkin.” He then adds, “OK, maybe it’s a large napkin.” But to drive home the point about how much their new approach can simplify dealing with these complex algorithms, they titled their formal research paper on the work “FlashAttention on a Napkin.”
This method, Abbott says, “allows for optimization to be really quickly derived, in contrast to prevailing methods.” While they initially applied this approach to the already existing FlashAttention algorithm, thus verifying its effectiveness, “we hope to now use this language to automate the detection of improvements,” says Zardini, who in addition to being a principal investigator in LIDS, is the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering, and an affiliate faculty with the Institute for Data, Systems, and Society.
The plan is that ultimately, he says, they will develop the software to the point that “the researcher uploads their code, and with the new algorithm you automatically detect what can be improved, what can be optimized, and you return an optimized version of the algorithm to the user.”
In addition to automating algorithm optimization, Zardini notes that a robust analysis of how deep-learning algorithms relate to hardware resource usage allows for systematic co-design of hardware and software. This line of work integrates with Zardini’s focus on categorical co-design, which uses the tools of category theory to simultaneously optimize various components of engineered systems.
Abbott says that “this whole field of optimized deep learning models, I believe, is quite critically unaddressed, and that’s why these diagrams are so exciting. They open the doors to a systematic approach to this problem.”
“I’m very impressed by the quality of this research. ... The new approach to diagramming deep-learning algorithms used by this paper could be a very significant step,” says Jeremy Howard, founder and CEO of Answer.AI, who was not associated with this work. “This paper is the first time I’ve seen such a notation used to deeply analyze the performance of a deep-learning algorithm on real-world hardware. ... The next step will be to see whether real-world performance gains can be achieved.”
“This is a beautifully executed piece of theoretical research, which also aims for high accessibility to uninitiated readers — a trait rarely seen in papers of this kind,” says Petar Velickovic, a senior research scientist at Google DeepMind and a lecturer at Cambridge University, who was not associated with this work. These researchers, he says, “are clearly excellent communicators, and I cannot wait to see what they come up with next!”
The new diagram-based language, having been posted online, has already attracted great attention and interest from software developers. A reviewer of Abbott’s prior paper introducing the diagrams noted that “The proposed neural circuit diagrams look great from an artistic standpoint (as far as I am able to judge this).” “It’s technical research, but it’s also flashy!” Zardini says.
Martina Solano Soto wants to solve the mysteries of the universe, and MIT Open Learning is part of her plan
Martina Solano Soto is on a mission to pursue her passion for physics and, ultimately, to solve big problems. Since she was a kid, she has had a lot of questions: Why do animals exist? What are we doing here? Why don’t we know more about the Big Bang? And she has been determined to find answers.
“That’s why I found MIT OpenCourseWare,” says Solano, of Girona, Spain. “When I was 14, I started to browse and wanted to find information that was reliable, dynamic, and updated. I found MIT resources by chance, and it’s one of the biggest things that has happened to me.”
In addition to OpenCourseWare, which offers free, online, open educational resources from more than 2,500 courses that span the MIT undergraduate and graduate curriculum, Solano also took advantage of the MIT Open Learning Library. Part of MIT Open Learning, the library offers free courses and invites people to learn at their own pace while receiving immediate feedback through interactive content and exercises.
Solano, who is now 17, has studied quantum physics via OpenCourseWare — also part of MIT Open Learning — and she has taken Open Learning Library courses on electricity and magnetism, calculus, quantum computation, and kinematics. She even created her own syllabus, complete with homework, to ensure she stayed on track and kept her goals in mind. Those goals include studying math and physics as an undergraduate. She also hopes to study general relativity and quantum mechanics at the doctoral level. “I really want to unify them to find a theory of quantum gravity,” she says. “I want to spend all my life studying and learning.”
Solano was particularly motivated by Barton Zwiebach, professor of physics, whose courses Quantum Physics I and Quantum Physics II are available on MIT OpenCourseWare. She took advantage of all of the resources that were provided: video lectures, assignments, lecture notes, and exams.
“I was fascinated by the way he explained. I just understood everything, and it was amazing,” she says. “Then, I learned about his book, 'A First Course in String Theory,' and it was because of him that I learned about black holes and gravity. I’m extremely grateful.”
While Solano gives much credit to the variety and quality of Open Learning resources, she also stresses the importance of being organized. As a high school student, she has things other than string theory on her mind: her school, extracurriculars, friends, and family.
For anyone in a similar position, she recommends “figuring out what you’re most interested in and how you can take advantage of the flexibility of Open Learning resources. Is there a half-hour before bed to watch a video, or some time on the weekend to read lecture notes? If you figure out how to make it work for you, it is definitely worth the effort.”
“If you do that, you are going to grow academically and personally,” Solano says. “When you go to school, you will feel more confident.”
And Solano is not slowing down. She plans to continue using Open Learning resources, this time turning her attention to graduate-level courses, all in service of her curiosity and drive for knowledge.
“When I was younger, I read the book 'The God Equation,' by Michio Kaku, which explains quantum gravity theory. Something inside me awoke,” she recalls. “I really want to know what happens at the center of a black hole, and how we unify quantum mechanics, black holes, and general relativity. I decided that I want to invest my life in this.”
She is well on her way. Last summer, Solano applied for and received a scholarship to study particle physics at the Autonomous University of Barcelona. This summer, she’s applying for opportunities to study the cosmos. All of this, she says, is only possible thanks to what she has learned with MIT Open Learning resources.
“The applications ask you to explain what you like about physics, and thanks to MIT, I’m able to express that,” Solano says. “I’m able to go for these scholarships and really fight for what I dream.”
Luna: A moon on Earth
On March 6, MIT launched its first lunar landing mission since the Apollo era, sending three payloads — the AstroAnt, the RESOURCE 3D camera, and the HUMANS nanowafer — to the moon’s south polar region. The mission was based out of Luna, a mission control space designed by MIT Department of Architecture students and faculty in collaboration with the MIT Space Exploration Initiative, Inploration, and Simpson Gumpertz and Heger. It is installed in the MIT Media Lab ground-floor gallery and is open to the public as part of Artfinity, MIT’s Festival for the Arts. The installation allows visitors to observe payload operators at work and, thanks to virtual reality, interact with the software used for the mission.
A central hub for mission operations, the control room is a structural and conceptual achievement: it balances technical challenges with a vision for an immersive experience, and it is the result of a multidisciplinary approach. “This will be our moon on Earth,” says Mateo Fernandez, a third-year MArch student and 2024 MAD Design Fellow, who designed and fabricated Luna in collaboration with Nebyu Haile, a PhD student in the Building Technology program in the Department of Architecture, and Simon Lesina Debiasi, a research assistant in the SMArchS Computation program and part of the Self-Assembly Lab. “The design was meant for people — for the researchers to be able to see what’s happening at all times, and for the spectators to have a 360-degree panoramic view of everything that’s going on,” explains Fernandez. “A key vision of the team was to create a control room that broke away from the traditional, closed-off model — one that instead invited the public to observe, ask questions, and engage with the mission,” adds Haile.
For this project, students were advised by Skylar Tibbits, founder and co-director of the Self-Assembly Lab, associate professor of design research, and the Morningside Academy for Design (MAD)’s assistant director for education; J. Roc Jih, associate professor of the practice in architectural design; John Ochsendorf, MIT Class of 1942 Professor with appointments in the departments of Architecture and Civil and Environmental Engineering, and founding director of MAD; and Brandon Clifford, associate professor of architecture. The team worked closely with Cody Paige, director of the Space Exploration Initiative at the Media Lab, and her collaborators, emphasizing that they “tried to keep things very minimal, very simple, because at the end of the day,” explains Fernandez, “we wanted to create a design that allows the researchers to shine and the mission to shine.”
“This project grew out of the Space Architecture class we co-taught with Cody Paige and astronaut and MIT AeroAstro [Department of Aeronautics and Astronautics] faculty member Jeff Hoffman” in the fall semester, explains Tibbits. “Mateo was part of that studio, and from there, Cody invited us to design the mission control project. We then brought onboard Mateo, Simon, Nebyu, and the rest of the project team.” According to Tibbits, “this project represents MIT’s mind-and-hand ethos. We had designers, architects, artists, computational experts, and engineers working together, reflecting the polymath vision — left brain, right brain, the creative and the technical coming together to make this possible.”
Luna was funded and informed by a grant Tibbits and Jih received through MIT’s Professor Amar G. Bose Research Grant Program. “J. Jih and I had been doing research for the Bose grant around basalt and mono-material construction,” says Tibbits, adding that they “had explored foamed glass materials similar to pumice or foamed basalt, which are also similar to lunar regolith.” “FOAMGLAS is typically used for insulation, but it has diverse applications, including direct ground contact and exterior walls, with strong acoustic and thermal properties,” says Jih. “We helped Mateo understand how the material is used in architecture today, and how it could be applied in this project, aligning with our work on new material palettes and mono-material construction techniques.”
Additional funding came from Inploration, a project run by creative director, author, and curator Lawrence Azerrad, as well as expeditionary artist, curator, and analog astronaut Richelle Ellis, and from Comcast, a Media Lab member company. The project was also supported by the MIT Morningside Academy for Design through Fernandez’s Design Fellowship, and by industry members including Owens Corning (construction materials) and Bose (communications), as well as MIT Media Lab member companies Dell Technologies (operations hardware) and Steelcase (operations seating).
A moon on Earth
While the lunar mission ended prematurely, the team says it succeeded in designing and building a control room that embodies MIT’s design approach and its capacity to explore new technologies while maintaining simplicity. Luna itself evokes the moon: depending on where the viewer stands, its form reads as a full disc or a crescent.
“What’s remarkable is how close the final output is to Mateo’s original sketches and renderings,” Tibbits notes. “That often doesn’t happen — where the final built project aligns so precisely with the initial design intent.”
Luna’s entire structure is built from FOAMGLAS, a durable material composed of glass cells usually used for insulation. “FOAMGLAS is an interesting material,” says Lesina Debiasi, who supported fabrication efforts, ensuring a fast and safe process. “It’s relatively durable and light, but can easily be crumbled with a sharp edge or blade, requiring every step of the fabrication process — cutting, texturing, sealing — to be carefully controlled.”
Fernandez, whose design experience was influenced by the idea that “simple moves” are most powerful, explains: “We’re giving a second life to materials that are not thought of for building construction … and I think that’s an effective idea. Here, you don’t need wood, concrete, rebar — you can build with one material only.” While the interior of the dome-shaped construction is smooth, the exterior was hand textured to evoke the basalt-like surface of the moon.
The lightweight cellular glass, produced by Owens Corning, which sponsored part of the material, is an unexpected choice for a compression structure — a type of architectural design in which stability is achieved through the natural force of compression, and which usually calls for heavy materials. The control room uses no connectors or additional supports, depending instead on the precise placement, size, and weight of individual blocks to create a stable form from a succession of arches.
“Traditional compression structures rely on their own weight for stability, but using a material that is more than 10 times lighter than masonry meant we had to rethink everything. It was about finding the perfect balance between design vision and structural integrity,” reflects Haile, who was responsible for the structural calculations for the dome and its support.
Compression relies on gravity, and wouldn’t be a viable construction method on the moon itself. “We’re building using physics, loads, structures, and equilibrium to create this thing that looks like the moon, but depends on Earth’s forces to be built. I think people don’t see that at first, but there’s something cheeky and ironic about it,” confides Fernandez, acknowledging that the project merges historical building methods with contemporary design.
The location and purpose of Luna — both a work space and an installation engaging the public — implied balancing privacy and transparency to achieve functionality. “One of the most important design elements that reflected this vision was the openness of the dome,” says Haile. “We worked closely from the start to find the right balance — adjusting the angle and size of the opening to make the space feel welcoming, while still offering some privacy to those working inside.”
The power of collaboration
With the FOAMGLAS material, the team had to invent a fabrication process that would achieve the initial vision while maintaining structural integrity. Sourcing a material with properties radically different from conventional construction materials meant collaborating closely on the engineering front, since the lightweight cellular glass demanded creative problem-solving: “What appears perfect in digital models doesn’t always translate seamlessly into the real world,” says Haile. “The slope, curves, and overall geometry directly determine whether the dome will stand, requiring Mateo and me to work in sync from the very beginning through the end of construction.” While the engineering was primarily led by Haile and Ochsendorf, the structural design was officially reviewed and approved by Paul Kassabian at Simpson Gumpertz and Heger (SGH), ensuring compliance with engineering standards and building codes.
“None of us had worked with FOAMGLAS before, and we needed to figure out how best to cut, texture, and seal it,” says Lesina Debiasi. “Since each row consists of a distinct block shape and specific angles, ensuring accuracy and repeatability across all the blocks became a major challenge. Since we had to cut each individual block four times before we were able to groove and texture the surface, creating a safe production process and mitigating the distribution of dust was critical,” he explains. “Working inside a tent, wearing personal protective equipment like masks, visors, suits, and gloves made it possible to work for an extended period with this material.”
In addition, manufacturing introduced small margins of error threatening the structural integrity of the dome, prompting hands-on experimentation. “The control room is built from 12 arches,” explains Fernandez. “When one of the arches closes, it becomes stable, and you can move on to the next one … Going from side to side, you meet at the middle and close the arch using a special block — a keystone, which was cut to measure,” he says. “In conversations with our advisors, we decided to account for irregularities in the final keystone of each row. Once this custom keystone sat in place, the forces would stabilize the arch and make it secure,” adds Lesina Debiasi.
“This project exemplified the best practices of engineers and architects working closely together from design inception to completion — something that was historically common but is less typical today,” says Haile. “This collaboration was not just necessary — it ultimately improved the final result.”
Fernandez, who is supported this year by the MAD Design Fellowship, expressed how “the fellowship gave [him] the freedom to explore [his] passions and also keep [his] agency.”
“In a way, this project embodies what design education at MIT should be,” Tibbits reflects. “We’re building at full scale, with real-world constraints, experimenting at the limits of what we know — design, computation, engineering, and science. It’s hands-on, highly experimental, and deeply collaborative, which is exactly what we dream of for MAD, and MIT’s design education more broadly.”
“Luna, our physical lunar mission control, highlights the incredible collaboration across the Media Lab, Architecture, and the School of Engineering to bring our lunar mission to the world. We are democratizing access to space for all,” says Dava Newman, Media Lab director and Apollo Professor of Astronautics.
A full list of contributors and supporters can be found on the Morningside Academy for Design's website.
Six from MIT elected to American Academy of Arts and Sciences for 2025
Six MIT faculty members are among the nearly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 23.
One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.
Those elected from MIT in 2025 are:
- Lotte Bailyn, T. Wilson Professor of Management Emerita;
- Gareth McKinley, School of Engineering Professor of Teaching Innovation;
- Nasser Rabbat, Aga Khan Professor;
- Susan Silbey, Leon and Anne Goldberg Professor of Humanities and professor of sociology and anthropology;
- Anne Whiston Spirn, Cecil and Ida Green Distinguished Professor of Landscape Architecture and Planning; and
- Catherine Wolfram, William Barton Rogers Professor in Energy and professor of applied economics.
“These new members’ accomplishments speak volumes about the human capacity for discovery, creativity, leadership, and persistence. They are a stellar testament to the power of knowledge to broaden our horizons and deepen our understanding,” says Academy President Laurie L. Patton. “We invite every new member to celebrate their achievement and join the Academy in our work to promote the common good.”
Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.
Robotic system zeroes in on objects most relevant for helping humans
For a robot, the real world is a lot to take in. Making sense of every data point in a scene can take a huge amount of computational effort and time. Using that information to then decide how to best help a human is an even thornier exercise.
Now, MIT roboticists have a way to cut through the data noise, to help robots focus on the features in a scene that are most relevant for assisting humans.
Their approach, which they aptly dub “Relevance,” enables a robot to use cues in a scene, such as audio and visual information, to determine a human’s objective and then quickly identify the objects that are most likely to be relevant in fulfilling that objective. The robot then carries out a set of maneuvers to safely offer the relevant objects or actions to the human.
The researchers demonstrated the approach with an experiment that simulated a conference breakfast buffet. They set up a table with various fruits, drinks, snacks, and tableware, along with a robotic arm outfitted with a microphone and camera. Applying the new Relevance approach, they showed that the robot was able to correctly identify a human’s objective and appropriately assist them in different scenarios.
In one case, the robot took in visual cues of a human reaching for a can of prepared coffee, and quickly handed the person milk and a stir stick. In another scenario, the robot picked up on a conversation between two people talking about coffee, and offered them a can of coffee and creamer.
Overall, the robot was able to predict a human’s objective with 90 percent accuracy and to identify relevant objects with 96 percent accuracy. The method also improved a robot’s safety, reducing the number of collisions by more than 60 percent, compared to carrying out the same tasks without applying the new method.
“This approach of enabling relevance could make it much easier for a robot to interact with humans,” says Kamal Youcef-Toumi, professor of mechanical engineering at MIT. “A robot wouldn’t have to ask a human so many questions about what they need. It would just actively take information from the scene to figure out how to help.”
Youcef-Toumi’s group is exploring how robots programmed with Relevance can help in smart manufacturing and warehouse settings, where they envision robots working alongside and intuitively assisting humans.
Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang, will present their new method at the IEEE International Conference on Robotics and Automation (ICRA) in May. The work builds on another paper presented at ICRA the previous year.
Finding focus
The team’s approach is inspired by our own ability to gauge what’s relevant in daily life. Humans can filter out distractions and focus on what’s important, thanks to a region of the brain known as the Reticular Activating System (RAS). The RAS is a bundle of neurons in the brainstem that acts subconsciously to prune away unnecessary stimuli, so that a person can consciously perceive the relevant stimuli. The RAS helps to prevent sensory overload, keeping us, for example, from fixating on every single item on a kitchen counter, and instead helping us to focus on pouring a cup of coffee.
“The amazing thing is, these groups of neurons filter everything that is not important, and then it has the brain focus on what is relevant at the time,” Youcef-Toumi explains. “That’s basically what our proposition is.”
He and his team developed a robotic system that broadly mimics the RAS’s ability to selectively process and filter information. The approach consists of four main phases. The first is a watch-and-learn “perception” stage, during which a robot takes in audio and visual cues, for instance from a microphone and camera, that are continuously fed into an AI “toolkit.” This toolkit can include a large language model (LLM) that processes audio conversations to identify keywords and phrases, and various algorithms that detect and classify objects, humans, physical actions, and task objectives. The AI toolkit is designed to run continuously in the background, similarly to the subconscious filtering that the brain’s RAS performs.
The second stage is a “trigger check” phase, which is a periodic check that the system performs to assess if anything important is happening, such as whether a human is present or not. If a human has stepped into the environment, the system’s third phase will kick in. This phase is the heart of the team’s system, which acts to determine the features in the environment that are most likely relevant to assist the human.
To establish relevance, the researchers developed an algorithm that takes in real-time predictions made by the AI toolkit. For instance, the toolkit’s LLM may pick up the keyword “coffee,” and an action-classifying algorithm may label a person reaching for a cup as having the objective of “making coffee.” The team’s Relevance method would factor in this information to first determine the “class” of objects that have the highest probability of being relevant to the objective of “making coffee.” This might automatically filter out classes such as “fruits” and “snacks,” in favor of “cups” and “creamers.” The algorithm would then further filter within the relevant classes to determine the most relevant “elements.” For instance, based on visual cues of the environment, the system may label a cup closest to a person as more relevant — and helpful — than a cup that is farther away.
In the fourth and final phase, the robot would then take the identified relevant objects and plan a path to physically access and offer the objects to the human.
Helper mode
The researchers tested the new system in experiments that simulate a conference breakfast buffet. They chose this scenario based on the publicly available Breakfast Actions Dataset, which comprises videos and images of typical activities that people perform during breakfast time, such as preparing coffee, cooking pancakes, making cereal, and frying eggs. Actions in each video and image are labeled, along with the overall objective (frying eggs, versus making coffee).
Using this dataset, the team tested various algorithms in their AI toolkit so that, when observing the actions of a person in a new scene, the algorithms could accurately label and classify the human’s tasks and objectives, along with the associated relevant objects.
In their experiments, they set up a robotic arm and gripper and instructed the system to assist humans as they approached a table filled with various drinks, snacks, and tableware. They found that when no humans were present, the robot’s AI toolkit operated continuously in the background, labeling and classifying objects on the table.
When, during a trigger check, the robot detected a human, it snapped to attention, turning on its Relevance phase and quickly identifying objects in the scene that were most likely to be relevant, based on the human’s objective, which was determined by the AI toolkit.
“Relevance can guide the robot to generate seamless, intelligent, safe, and efficient assistance in a highly dynamic environment,” says co-author Zhang.
Going forward, the team hopes to apply the system to scenarios that resemble workplace and warehouse environments, as well as to other tasks and objectives typically performed in household settings.
“I would want to test this system in my home to see, for instance, if I’m reading the paper, maybe it can bring me coffee. If I’m doing laundry, it can bring me a laundry pod. If I’m doing repair, it can bring me a screwdriver,” Zhang says. “Our vision is to enable human-robot interactions that can be much more natural and fluent.”
This research was made possible by the support and partnership of King Abdulaziz City for Science and Technology (KACST) through the Center for Complex Engineering Systems at MIT and KACST.
Wearable device tracks individual cells in the bloodstream in real time
Researchers at MIT have developed a noninvasive medical monitoring device powerful enough to detect single cells within blood vessels, yet small enough to wear like a wristwatch. One important aspect of this wearable device is that it can enable continuous monitoring of circulating cells in the human body.
The technology was presented online on March 3 by the journal npj Biosensing and is forthcoming in the journal’s print version.
The device — named CircTrek — was developed by researchers in the Nano-Cybernetic Biotrek research group, led by Deblina Sarkar, assistant professor at MIT and AT&T Career Development Chair at the MIT Media Lab. This technology could greatly facilitate early diagnosis of disease, detection of disease relapse, assessment of infection risk, and determination of whether a disease treatment is working, among other medical processes.
Whereas traditional blood tests are like a snapshot of a patient’s condition, CircTrek was designed to provide real-time assessment, described in the npj Biosensing paper as “an unmet goal to date.” A different technology that offers somewhat continuous monitoring of cells in the bloodstream, in vivo flow cytometry, “requires a room-sized microscope, and patients need to be there for a long time,” says Kyuho Jang, a PhD student in Sarkar’s lab.
CircTrek, on the other hand, which is equipped with an onboard Wi-Fi module, could even monitor a patient’s circulating cells at home and send that information to the patient’s doctor or care team.
“CircTrek offers a path to harnessing previously inaccessible information, enabling timely treatments, and supporting accurate clinical decisions with real-time data,” says Sarkar. “Existing technologies provide monitoring that is not continuous, which can lead to missing critical treatment windows. We overcome this challenge with CircTrek.”
The device works by directing a focused laser beam to stimulate cells beneath the skin that have been fluorescently labeled. Such labeling can be accomplished with a number of methods, including applying antibody-based fluorescent dyes to the cells of interest or genetically modifying such cells so that they express fluorescent proteins.
For example, a patient receiving CAR T cell therapy, in which immune cells are collected and modified in a lab to fight cancer (or, experimentally, to combat HIV or Covid-19), could have those cells labeled at the same time with fluorescent dyes or genetic modification so the cells express fluorescent proteins. Importantly, cells of interest can also be labeled with in vivo labeling methods approved in humans. Once the cells are labeled and circulating in the bloodstream, CircTrek is designed to apply laser pulses to enhance and detect the cells’ fluorescent signal while an arrangement of filters minimizes low-frequency noise such as heartbeats.
“We optimized the optomechanical parts to reduce noise significantly and only capture the signal from the fluorescent cells,” says Jang.
By detecting the labeled CAR T cells, CircTrek could assess whether the cell therapy is working. For example, persistence of the CAR T cells in the blood after treatment is associated with better outcomes in patients with B-cell lymphoma.
To keep CircTrek small and wearable, the researchers were able to miniaturize the components of the device, such as the circuit that drives the high-intensity laser source and keeps the power level of the laser stable to avoid false readings.
The sensor that detects the fluorescent signals of the labeled cells is also minute, and yet it is capable of detecting a quantity of light equivalent to a single photon, Jang says.
The device’s subcircuits, including the laser driver and the noise filters, were custom-designed to fit on a circuit board measuring just 42 mm by 35 mm, allowing CircTrek to be approximately the same size as a smartwatch.
CircTrek was tested on an in vitro configuration that simulated blood flow beneath human skin, and its single-cell detection capabilities were verified through manual counting with a high-resolution confocal microscope. For the in vitro testing, a fluorescent dye called Cyanine5.5 was employed. That particular dye was selected because it reaches peak activation at wavelengths within skin tissue’s optical window, or the range of wavelengths that can penetrate the skin with minimal scattering.
The safety of the device, particularly the laser-induced temperature increase on experimental skin tissue, was also investigated. The measured increase of 1.51 degrees Celsius at the skin surface is well below the level that would damage tissue, with enough margin that the device’s detection area and power could safely be increased to ensure the observation of at least one blood vessel.
While clinical translation of CircTrek will require further steps, Jang says its parameters can be modified to broaden its potential, so that doctors could be provided with critical information on nearly any patient.
A brief history of expansion microscopy
Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another, and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.
This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of MIT McGovern Institute for Brain Research investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.
“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute (HHMI) investigator, a professor of brain and cognitive sciences and biological engineering, and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.
Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.
“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”
Origins of ExM
To develop expansion microscopy, Boyden and his team turned to hydrogel, a material with remarkable water-absorbing properties that had already been put to practical use; it’s layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while absorbing hundreds of times their original weight in water, expanding the space between their chemical components as they swelled.
After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue must be infused with a hydrogel. Components of the tissue, biomolecules, are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened and water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.
The first report on expansion microscopy, by Boyden and graduate students Fei Chen and Paul Tillberg, was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers — a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a fourfold expansion.
Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with their own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now, anybody can go look at the building blocks of life and how they relate to each other.”
Empowering scientists
Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting on discoveries they have made using expansion microscopy. For neuroscientists, the technique has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.
It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small, most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things — which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.
Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.
Always improving
Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher-resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.
They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less-costly diagnoses.
Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now re-stain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.
But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet — but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.
Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now, you can get images that look a lot like electron microscopy images, but on regular old light microscopes — the kind that everybody has access to,” Boyden says.
Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are used with an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California at Berkeley, called lattice light-sheet microscopy, the entire brain of a fruit fly can be imaged at high resolution in just a few days.
And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”
Expanding possibilities
Ten years past the first demonstration of expansion microscopy’s power, Boyden and his team are committed to continuing to make expansion microscopy more powerful. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify — so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.
Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoc in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebra fish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.
“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.
His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network — how life really operates,” he says.
New electronic “skin” could enable lightweight night-vision glasses
MIT engineers have developed a technique to grow and peel ultrathin “skins” of electronic material. The method could pave the way for new classes of electronic devices, such as ultrathin wearable sensors, flexible transistors and computing elements, and highly sensitive and compact imaging devices.
As a demonstration, the team fabricated a thin membrane of pyroelectric material — a class of heat-sensing material that produces an electric current in response to changes in temperature. The thinner the pyroelectric material, the better it is at sensing subtle thermal variations.
With their new method, the team fabricated the thinnest pyroelectric membrane yet, measuring 10 nanometers thick, and demonstrated that the film is highly sensitive to heat and radiation across the far-infrared spectrum.
The newly developed film could enable lighter, more portable, and highly accurate far-infrared (IR) sensing devices, with potential applications for night-vision eyewear and autonomous driving in foggy conditions. Current state-of-the-art far-IR sensors require bulky cooling elements. In contrast, the new pyroelectric thin film requires no cooling and is sensitive to much smaller changes in temperature. The researchers are exploring ways to incorporate the film into lighter, higher-precision night-vision glasses.
“This film considerably reduces weight and cost, making it lightweight, portable, and easier to integrate,” says Xinyuan Zhang, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE). “For example, it could be directly worn on glasses.”
The heat-sensing film could also have applications in environmental and biological sensing, as well as imaging of astrophysical phenomena that emit far-infrared radiation.
What’s more, the new lift-off technique is generalizable beyond pyroelectric materials. The researchers plan to apply the method to make other ultrathin, high-performance semiconducting films.
Their results are reported today in a paper appearing in the journal Nature. The study’s MIT co-authors are first author Xinyuan Zhang, Sangho Lee, Min-Kyu Song, Haihui Lan, Jun Min Suh, Jung-El Ryu, Yanjie Shao, Xudong Zheng, Ne Myo Han, and Jeehwan Kim, associate professor of mechanical engineering and of materials science and engineering, along with researchers at the University of Wisconsin-Madison led by Professor Chang-Beom Eom and authors from multiple other institutions.
Chemical peel
Kim’s group at MIT is finding new ways to make smaller, thinner, and more flexible electronics. They envision that such ultrathin computing “skins” can be incorporated into everything from smart contact lenses and wearable sensing fabrics to stretchy solar cells and bendable displays. To realize such devices, Kim and his colleagues have been experimenting with methods to grow, peel, and stack semiconducting elements, to fabricate ultrathin, multifunctional electronic thin-film membranes.
One method that Kim has pioneered is “remote epitaxy” — a technique where semiconducting materials are grown on a single-crystalline substrate, with an ultrathin layer of graphene in between. The substrate’s crystal structure serves as a scaffold along which the new material can grow. The graphene acts as a nonstick layer, similar to Teflon, making it easy for researchers to peel off the new film and transfer it onto flexible and stacked electronic devices. After peeling off the new film, the underlying substrate can be reused to make additional thin films.
Kim has applied remote epitaxy to fabricate thin films with various characteristics. In trying different combinations of semiconducting elements, the researchers happened to notice that a certain pyroelectric material, called PMN-PT, did not require an intermediate layer in order to separate from its substrate. By growing PMN-PT directly on a single-crystalline substrate, the researchers could simply lift off the grown film, with no rips or tears to its delicate lattice.
“It worked surprisingly well,” Zhang says. “We found the peeled film is atomically smooth.”
Lattice lift-off
In their new study, the MIT and UW Madison researchers took a closer look at the process and discovered that the key to the material’s easy-peel property was lead. Along with colleagues at Rensselaer Polytechnic Institute, the team discovered that, as part of its chemical structure, the pyroelectric film contains an orderly arrangement of lead atoms with a large “electron affinity”: lead attracts electrons and prevents the charge carriers from traveling and bonding to other materials, such as an underlying substrate. The lead acts as tiny nonstick units, allowing the material as a whole to peel away, perfectly intact.
The team ran with the realization and fabricated multiple ultrathin films of PMN-PT, each about 10 nanometers thin. They peeled off the pyroelectric films and transferred them onto a small chip to form an array of 100 ultrathin heat-sensing pixels, each about 60 square microns in area. They exposed the films to ever-slighter changes in temperature and found the pixels were highly sensitive to small changes across the far-infrared spectrum.
The sensitivity of the pyroelectric array is comparable to that of state-of-the-art night-vision devices. These devices are currently based on photodetector materials, in which a change in temperature induces the material’s electrons to jump in energy and briefly cross an energy “band gap,” before settling back into their ground state. This electron jump serves as an electrical signal of the temperature change. However, this signal can be affected by noise in the environment, and to prevent such effects, photodetectors have to also include cooling devices that bring the instruments down to liquid nitrogen temperatures.
Current night-vision goggles and scopes are heavy and bulky. With the group’s new pyroelectric-based approach, night-vision devices could achieve the same sensitivity without the weight of cooling hardware.
The researchers also found that the films were sensitive beyond the range of current night-vision devices and could respond to wavelengths across the entire infrared spectrum. This suggests that the films could be incorporated into small, lightweight, and portable devices for various applications that require different infrared regions. For instance, when integrated into autonomous vehicle platforms, the films could enable cars to “see” pedestrians and vehicles in complete darkness or in foggy and rainy conditions.
The films could also be used in gas sensors for real-time, on-site environmental monitoring, helping to detect pollutants. In electronics, they could monitor heat changes in semiconductor chips to catch early signs of malfunctioning elements.
The team says the new lift-off method can be generalized to materials that may not themselves contain lead. In those cases, the researchers suspect that they can infuse Teflon-like lead atoms into the underlying substrate to induce a similar peel-off effect. For now, the team is actively working toward incorporating the pyroelectric films into a functional night-vision system.
“We envision that our ultrathin films could be made into high-performance night-vision goggles, considering its broad-spectrum infrared sensitivity at room-temperature, which allows for a lightweight design without a cooling system,” Zhang says. “To turn this into a night-vision system, a functional device array should be integrated with readout circuitry. Furthermore, testing in varied environmental conditions is essential for practical applications.”
This work was supported by the U.S. Air Force Office of Scientific Research.
New model predicts a chemical reaction’s point of no return
When chemists design new chemical reactions, one useful piece of information is the reaction’s transition state — the point of no return past which a reaction must proceed.
This information allows chemists to try to produce the right conditions that will allow the desired reaction to occur. However, current methods for predicting the transition state and the path that a chemical reaction will take are complicated and require a huge amount of computational power.
MIT researchers have now developed a machine-learning model that can make these predictions in less than a second, with high accuracy. Their model could make it easier for chemists to design chemical reactions that could generate a variety of useful compounds, such as pharmaceuticals or fuels.
“We’d like to be able to ultimately design processes to take abundant natural resources and turn them into molecules that we need, such as materials and therapeutic drugs. Computational chemistry is really important for figuring out how to design more sustainable processes to get us from reactants to products,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering, a professor of chemistry, and the senior author of the new study.
Former MIT graduate student Chenru Duan PhD ’22, who is now at Deep Principle; former Georgia Tech graduate student Guan-Horng Liu, who is now at Meta; and Cornell University graduate student Yuanqi Du are the lead authors of the paper, which appears today in Nature Machine Intelligence.
Better estimates
For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. These transition states are so fleeting that they’re nearly impossible to observe experimentally.
As an alternative, researchers can calculate the structures of transition states using techniques based on quantum chemistry. However, that process requires a great deal of computing power and can take hours or days to calculate a single transition state.
“Ideally, we’d like to be able to use computational chemistry to design more sustainable processes, but this computation in itself is a huge use of energy and resources in finding these transition states,” Kulik says.
In 2023, Kulik, Duan, and others reported on a machine-learning strategy that they developed to predict the transition states of reactions. This strategy is faster than using quantum chemistry techniques, but still slower than what would be ideal because it requires the model to generate about 40 structures, then run those predictions through a “confidence model” to predict which states were most likely to occur.
One reason why that model needs to be run so many times is that it uses randomly generated guesses for the starting point of the transition state structure, then performs dozens of calculations until it reaches its final, best guess. These randomly generated starting points may be very far from the actual transition state, which is why so many steps are needed.
The researchers’ new model, React-OT, described in the Nature Machine Intelligence paper, uses a different strategy. In this work, the researchers trained their model to begin from an estimate of the transition state generated by linear interpolation — a technique that estimates each atom’s position by moving it halfway between its position in the reactants and in the products, in three-dimensional space.
“A linear guess is a good starting point for approximating where that transition state will end up,” Kulik says. “What the model’s doing is starting from a much better initial guess than just a completely random guess, as in the prior work.”
Because of this, it takes the model fewer steps and less time to generate a prediction. In the new study, the researchers showed that their model could make predictions with only about five steps, taking about 0.4 seconds. These predictions don’t need to be fed through a confidence model, and they are about 25 percent more accurate than the predictions generated by the previous model.
“That really makes React-OT a practical model that we can directly integrate to the existing computational workflow in high-throughput screening to generate optimal transition state structures,” Duan says.
“A wide array of chemistry”
To create React-OT, the researchers trained it on the same dataset that they used to train their older model. These data contain structures of reactants, products, and transition states, calculated using quantum chemistry methods, for 9,000 different chemical reactions, mostly involving small organic or inorganic molecules.
Once trained, the model performed well on other reactions from this set, which had been held out of the training data. It also performed well on other types of reactions that it hadn’t been trained on, and could make accurate predictions involving reactions with larger reactants, which often have side chains that aren’t directly involved in the reaction.
“This is important because there are a lot of polymerization reactions where you have a big macromolecule, but the reaction is occurring in just one part. Having a model that generalizes across different system sizes means that it can tackle a wide array of chemistry,” Kulik says.
The researchers are now working on training the model so that it can predict transition states for reactions between molecules that include additional elements, including sulfur, phosphorus, chlorine, silicon, and lithium.
“To quickly predict transition state structures is key to all chemical understanding,” says Markus Reiher, a professor of theoretical chemistry at ETH Zurich, who was not involved in the study. “The new approach presented in the paper could very much accelerate our search and optimization processes, bringing us faster to our final result. As a consequence, also less energy will be consumed in these high-performance computing campaigns. Any progress that accelerates this optimization benefits all sorts of computational chemical research.”
The MIT team hopes that other scientists will make use of their approach in designing their own reactions, and have created an app for that purpose.
“Whenever you have a reactant and product, you can put them into the model and it will generate the transition state, from which you can estimate the energy barrier of your intended reaction, and see how likely it is to occur,” Duan says.
The research was funded by the U.S. Army Research Office, the U.S. Department of Defense Basic Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the U.S. Office of Naval Research.
MIT engineers print synthetic “metamaterials” that are both strong and stretchy
In metamaterials design, the name of the game has long been “stronger is better.”
Metamaterials are synthetic materials with microscopic structures that give the overall material exceptional properties. A huge focus has been in designing metamaterials that are stronger and stiffer than their conventional counterparts. But there’s a trade-off: The stiffer a material, the less flexible it is.
MIT engineers have now found a way to fabricate a metamaterial that is both strong and stretchy. The base material is typically highly rigid and brittle, but it is printed in precise, intricate patterns that form a structure that is both strong and flexible.
The key to the new material’s dual properties is a combination of stiff microscopic struts and a softer woven architecture. This microscopic “double network,” which is printed using a plexiglass-like polymer, produced a material that could stretch over four times its size without fully breaking. In comparison, the polymer in other forms has little to no stretch and shatters easily once cracked.
The researchers say the new double-network design can be applied to other materials, for instance to fabricate stretchy ceramics, glass, and metals. Such tough yet bendy materials could be made into tear-resistant textiles, flexible semiconductors, electronic chip packaging, and durable yet compliant scaffolds on which to grow cells for tissue repair.
“We are opening up this new territory for metamaterials,” says Carlos Portela, the Robert N. Noyce Career Development Associate Professor at MIT. “You could print a double-network metal or ceramic, and you could get a lot of these benefits, in that it would take more energy to break them, and they would be significantly more stretchable.”
Portela and his colleagues report their findings today in the journal Nature Materials. His MIT co-authors include first author James Utama Surjadi as well as Bastien Aymon and Molly Carton.
Inspired gel
Along with other research groups, Portela and his colleagues have typically designed metamaterials by printing or nanofabricating microscopic lattices using conventional polymers similar to plexiglass and ceramic. The specific pattern, or architecture, that they print can impart exceptional strength and impact resistance to the resulting metamaterial.
Several years ago, Portela was curious whether a metamaterial could be made from an inherently stiff material, but be patterned in a way that would turn it into a much softer, stretchier version.
“We realized that the field of metamaterials has not really tried to make an impact in the soft matter realm,” he says. “So far, we’ve all been looking for the stiffest and strongest materials possible.”
Instead, he looked for a way to synthesize softer, stretchier metamaterials. Rather than printing microscopic struts and trusses, similar to those of conventional lattice-based metamaterials, he and his team made an architecture of interwoven springs, or coils. They found that, while the material they used was itself stiff like plexiglass, the resulting woven metamaterial was soft and springy, like rubber.
“They were stretchy, but too soft and compliant,” Portela recalls.
In looking for ways to bulk up their softer metamaterial, the team found inspiration in an entirely different material: hydrogel. Hydrogels are soft, stretchy, Jell-O-like materials that are composed of mostly water and a bit of polymer structure. Researchers including groups at MIT have devised ways to make hydrogels that are both soft and stretchy, and also tough. They do so by combining polymer networks with very different properties, such as a network of molecules that is naturally stiff, which gets chemically cross-linked with another molecular network that is inherently soft. Portela and his colleagues wondered whether such a double-network design could be adapted to metamaterials.
“That was our ‘aha’ moment,” Portela says. “We thought: Can we get inspiration from these hydrogels to create a metamaterial with similar stiff and stretchy properties?”
Strut and weave
For their new study, the team fabricated a metamaterial by combining two microscopic architectures. The first is a rigid, grid-like scaffold of struts and trusses. The second is a pattern of coils that weave around each strut and truss. Both networks are made from the same acrylic plastic and are printed in one go, using a high-precision, laser-based printing technique called two-photon lithography.
The researchers printed samples of the new double-network-inspired metamaterial, each measuring in size from several square microns to several square millimeters. They put the material through a series of stress tests, in which they attached either end of the sample to a specialized nanomechanical press and measured the force it took to pull the material apart. They also recorded high-resolution videos to observe the locations and ways in which the material stretched and tore as it was pulled apart.
They found their new double-network design was able to stretch to three times its own length — about 10 times farther than a conventional lattice-patterned metamaterial printed with the same acrylic plastic. Portela says the new material’s combination of stretch and tear resistance comes from the interactions between the material’s rigid struts and the messier, coiled weave as the material is stressed and pulled.
“Think of this woven network as a mess of spaghetti tangled around a lattice. As we break the monolithic lattice network, those broken parts come along for the ride, and now all this spaghetti gets entangled with the lattice pieces,” Portela explains. “That promotes more entanglement between woven fibers, which means you have more friction and more energy dissipation.”
In other words, the softer structure wound throughout the material’s rigid lattice takes on more stress thanks to multiple knots or entanglements promoted by the cracked struts. As this stress spreads unevenly through the material, an initial crack is unlikely to go straight through and quickly tear the material. What’s more, the team found that if they introduced strategic holes, or “defects,” in the metamaterial, they could further dissipate any stress that the material undergoes, making it even stretchier and more resistant to tearing apart.
“You might think this makes the material worse,” says study co-author Surjadi. “But we saw once we started adding defects, we doubled the amount of stretch we were able to do, and tripled the amount of energy that we dissipated. That gives us a material that’s both stiff and tough, which is usually a contradiction.”
The team has developed a computational framework that can help engineers estimate how a metamaterial will perform given the pattern of its stiff and stretchy networks. They envision such a blueprint will be useful in designing tear-proof textiles and fabrics.
“We also want to try this approach on more brittle materials, to give them multifunctionality,” Portela says. “So far we’ve talked of mechanical properties, but what if we could also make them conductive, or responsive to temperature? For that, the two networks could be made from different polymers that respond to temperature in different ways, so that a fabric can open its pores or become more compliant when it’s warm and can be more rigid when it’s cold. That’s something we can explore now.”
This research was supported, in part, by the U.S. National Science Foundation and the MIT MechE MathWorks Seed Fund. This work was performed, in part, through the use of MIT.nano’s facilities.
MIT D-Lab spinout provides emergency transportation during childbirth
Amama has lived in a rural region of northern Ghana all her life. In 2022, she went into labor with her first child. Women in the region traditionally give birth at home with the help of a local birthing attendant, but Amama experienced last-minute complications, and the decision was made to go to a hospital. Unfortunately, there were no ambulances in the community and the nearest hospital was 30 minutes away, so Amama was forced to take a motorcycle taxi, leaving her husband and caregiver behind.
Amama spent the next 30 minutes traveling over bumpy dirt roads to get to the hospital. She was in pain and afraid. When she arrived, she learned her child had not survived.
Unfortunately, Amama’s story is not unique. Around the world, more than 700 women die every day due to preventable pregnancy and childbirth complications. A lack of transportation to hospitals contributes to those deaths.
Moving Health was founded by MIT students to give people like Amama a safer way to get to the hospital. The company, which got its start in a class at MIT D-Lab, works with local partners in rural Ghana to build a network of motorized tricycle ambulances for communities that lack emergency transportation options.
The locally made ambulances are designed for the challenging terrain of rural Ghana, equipped with medical supplies, and have space for caregivers and family members.
“We’re providing the first rural-focused emergency transportation network,” says Moving Health CEO and co-founder Emily Young ’18. “We’re trying to provide emergency transportation coverage for less cost and with a vehicle tailored to local needs. When we first started, a report estimated there were 55 ambulances in the country of over 30 million people. Now, there is more coverage, but still the last mile areas of the country do not have access to reliable emergency transportation.”
Today, Moving Health’s ambulances and emergency transportation network cover more than 100,000 people in northern Ghana who previously lacked reliable medical transportation.
One of those people is Amama. During her most recent pregnancy, she was able to take a Moving Health ambulance to the hospital. This time, she traveled in a sanitary environment equipped with medical supplies and surrounded by loved ones. When she arrived, she gave birth to healthy twins.
From class project to company
Young and Sade Nabahe ’17, SM ’21, met while taking Course 2.722J (D-Lab: Design), which challenges students to think like engineering consultants on international projects. Their group worked on ways to transport pregnant women in remote areas of Tanzania to hospitals more safely and quickly. Young credits D-Lab instructor Matt McCambridge with helping the students continue the project outside of class. Fellow Moving Health co-founder Eva Boal ’18 joined the effort the following year.
The early idea was to build a trailer that could attach to any motorcycle and be used to transport women. Following the early class projects, the students received funding from MIT’s PKG Center and the MIT Undergraduate Giving Campaign, which they used to travel to Tanzania in the following year’s Independent Activities Period (IAP). That’s when they built their first prototype in the field.
The founders realized they needed to better understand the problem from the perspective of locals and interviewed over 250 pregnant women, clinicians, motorcycle drivers, and birth attendants.
“We wanted to make sure the community was leading the charge to design what this solution should be. We had to learn more from the community about why emergency transportation doesn’t work in these areas,” Young says. “We ended up redesigning our vehicle completely.”
Following their graduation from MIT in 2018, the founders bought one-way tickets to Tanzania and deployed a new prototype. A big part of their plans was creating a product that could be manufactured by the community to support the local economy.
Nabahe and Boal left the company in 2020, but word spread of Moving Health’s mission, and Young received messages from organizations in about 15 different countries interested in expanding the company’s trials.
Young found the most alignment in Ghana, where she met two local engineers, Ambra Jiberu and Sufiyanu Imoro, who were building cars from scratch and inventing agricultural technologies. With the two of them on board, she was confident the team could build a solution in Ghana.
Taking what they’d learned in Tanzania, the new team set up hundreds of interviews and focus groups to understand the Ghanaian health system. The team redesigned their product to be a fully motorized tricycle based on the most common mode of transportation in northern Ghana. Today Moving Health focuses solely on Ghana, with local manufacturing and day-to-day operations led by Country Director and CTO Isaac Quansah.
Moving Health is focused on building a holistic emergency transportation network. To do this, Moving Health’s team sets up community-run dispatch systems, which involves organizing emergency phone numbers, training community health workers, dispatchers, and drivers, and integrating all of that within the existing health care system. The company also conducts educational campaigns in the communities it serves.
Moving Health officially launched its ambulances in 2023. The ambulance has an enclosed space for patients, family members, and medical providers and includes a removable stretcher along with supplies like first aid equipment, oxygen, IVs, and more. It costs about one-tenth the price of a traditional ambulance.
“We’ve built a really cool, small-volume manufacturing facility, led by our local engineering team, that has incredible quality,” Young says. “We also have an apprenticeship program that our two lead engineers run that allows young people to learn more hard skills. We want to make sure we’re providing economic opportunities in these communities. It’s very much a Ghanaian-made solution.”
Unlike the national ambulances, Moving Health’s ambulances are stationed in rural communities, at community health centers, to enable faster response times.
“When the ambulances are stationed in these people’s communities, at their local health centers, it makes all the difference,” Young says. “We’re trying to create an emergency transportation solution that is not only geared toward rural areas, but also focused on pregnancy and prioritizing women’s voices about what actually works in these areas.”
A lifeline for mothers
When Young first got to Ghana, she met Sahada, a local woman who shared the story of her first birth at the age of 18. Sahada had intended to give birth in her community with the help of a local birthing attendant, but she began experiencing so much pain during labor the attendant advised her to go to the nearest hospital. With no ambulances or vehicles in town, Sahada’s husband called a motorcycle driver, who took her alone on the three-hour drive to the nearest hospital.
“It was rainy, extremely muddy, and she was in a lot of pain,” Young recounts. “She was already really worried for her baby, and then the bike slips and they crash. They get back on, covered in mud, she has no idea if the baby survived, and finally gets to the maternity ward.”
Sahada was able to give birth to a healthy baby boy, but her story stuck with Young.
“The experience was extremely traumatic, and what’s really crazy is that counts as a successful birth statistic,” Young says. “We hear that kind of story a lot.”
This year, Moving Health plans to expand into a new region of northern Ghana. The team is also exploring other ways its network can provide health care to rural regions. But no matter how the company evolves, the team remains grateful to have seen its D-Lab project turn into such an impactful solution.
“Our long-term vision is to prove that this can work on a national level and supplement the existing health system,” Young says. “Then we’re excited to explore mobile health care outreach and other transportation solutions. We’ve always been focused on maternal health, but we’re staying cognizant of other community ideas that might be able to help improve health care more broadly.”
“Periodic table of machine learning” could fuel AI discovery
MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected. The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones.
For instance, the researchers used their framework to combine elements of two different algorithms to create a new image-classification algorithm that performed 8 percent better than current state-of-the-art approaches.
The periodic table stems from one key idea: All these algorithms learn a specific kind of relationship between data points. While each algorithm may accomplish that in a slightly different way, the core mathematics behind each approach is the same.
Building on these insights, the researchers identified a unifying equation that underlies many classical AI algorithms. They used that equation to reframe popular methods and arrange them into a table, categorizing each based on the approximate relationships it learns.
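The article doesn't reproduce that equation, but following the I-Con paper's formulation it can be sketched as follows (our hedged rendering, with notation defined here rather than taken from the article): for each data point i, let p(·|i) be the "true" distribution over which other points are connected to i, and let q_θ(·|i) be the distribution of connections the learned model assigns. The unifying objective asks the learned distribution to match the true one:

```latex
% Schematic form of the unifying objective (our rendering):
\mathcal{L}(\theta) \;=\; \sum_{i} D_{\mathrm{KL}}\big(\, p(\cdot \mid i) \,\big\|\, q_{\theta}(\cdot \mid i) \,\big)
```

Roughly speaking, different choices of p and q recover different methods: a p concentrated on augmented copies of the same image yields contrastive learning, while a q defined through cluster assignments yields clustering objectives.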
Just like the periodic table of chemical elements, which initially contained blank squares that were later filled in by scientists, the periodic table of machine learning also has empty spaces. These spaces predict where algorithms should exist but haven’t yet been discovered.
The table gives researchers a toolkit to design new algorithms without the need to rediscover ideas from prior approaches, says Shaden Alshammari, an MIT graduate student and lead author of a paper on this new framework.
“It’s not just a metaphor,” adds Alshammari. “We’re starting to see machine learning as a system with structure that is a space we can explore rather than just guess our way through.”
She is joined on the paper by John Hershey, a researcher at Google AI Perception; Axel Feldmann, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Mark Hamilton, an MIT graduate student and senior engineering manager at Microsoft. The research will be presented at the International Conference on Learning Representations.
An accidental equation
The researchers didn’t set out to create a periodic table of machine learning.
After joining the Freeman Lab, Alshammari began studying clustering, a machine-learning technique that classifies images by learning to organize similar images into nearby clusters.
She realized the clustering algorithm she was studying was similar to another classical machine-learning algorithm, called contrastive learning, and began digging deeper into the mathematics. Alshammari found that these two disparate algorithms could be reframed using the same underlying equation.
“We almost got to this unifying equation by accident. Once Shaden discovered that it connects two methods, we just started dreaming up new methods to bring into this framework. Almost every single one we tried could be added in,” Hamilton says.
The framework they created, information contrastive learning (I-Con), shows how a variety of algorithms can be viewed through the lens of this unifying equation. It includes everything from classification algorithms that can detect spam to the deep learning algorithms that power large language models.
The equation describes how such algorithms find connections between real data points and then approximate those connections internally.
Each algorithm aims to minimize the amount of deviation between the connections it learns to approximate and the real connections in its training data.
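As a concrete, self-contained sketch of that pattern (our illustration, not the researchers' code): build a "true" neighbor distribution from the raw data, build a learned one from an embedding, and score the deviation between them with a KL divergence.

```python
import numpy as np

def neighbor_distribution(sim: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Turn a similarity matrix into row-wise neighbor probabilities via a
    softmax. Each row sums to 1; a point is never its own neighbor."""
    logits = sim / temperature
    np.fill_diagonal(logits, -np.inf)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def connection_loss(p_true: np.ndarray, q_learned: np.ndarray, eps: float = 1e-12) -> float:
    """Mean KL divergence between true and learned neighbor distributions,
    one row (one data point) at a time."""
    kl_rows = np.sum(p_true * np.log((p_true + eps) / (q_learned + eps)), axis=1)
    return float(np.mean(kl_rows))

# Toy example: 4 points in 2-D. "True" connections come from the raw
# data; "learned" connections come from an (untrained, random) embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))          # raw data points
z = rng.normal(size=(4, 2))          # learned embedding

p = neighbor_distribution(x @ x.T)
q = neighbor_distribution(z @ z.T)
print(f"deviation before any training: {connection_loss(p, q):.3f}")
```

Training would adjust the embedding to drive this deviation down; swapping in different constructions for p and q is what moves you between cells of the table.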
They decided to organize I-Con into a periodic table to categorize algorithms based on how points are connected in real datasets and the primary ways algorithms can approximate those connections.
“The work went gradually, but once we had identified the general structure of this equation, it was easier to add more methods to our framework,” Alshammari says.
A tool for discovery
As they arranged the table, the researchers began to see gaps where algorithms could exist but hadn’t yet been invented.
The researchers filled in one gap by borrowing ideas from contrastive learning and applying them to image clustering. The result was a new algorithm that could classify unlabeled images 8 percent better than another state-of-the-art approach.
They also used I-Con to show how a data debiasing technique developed for contrastive learning could be used to boost the accuracy of clustering algorithms.
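The article doesn't spell out the debiasing step, but one common trick of this general kind (offered as an illustrative assumption, not the paper's exact method) is to blend a small amount of uniform probability into the target neighbor distribution, so the model isn't trained to treat every unlinked pair as a certain negative:

```python
import numpy as np

def debias_targets(p_true: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Blend target neighbor probabilities with a uniform distribution.
    alpha is an assumed smoothing knob, not a value from the paper."""
    n = p_true.shape[1]
    return (1.0 - alpha) * p_true + alpha / n

# Toy 3-point target: point 0's only known neighbor is point 1, and so on.
p = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(debias_targets(p, alpha=0.1))   # rows still sum to 1, but softened
```

Because the smoothed targets remain valid probability distributions, they drop straight into the same KL-style loss sketched above.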
In addition, the flexible periodic table allows researchers to add new rows and columns to represent additional types of connections between data points.
Ultimately, having I-Con as a guide could help machine learning scientists think outside the box, encouraging them to combine ideas in ways they wouldn’t necessarily have thought of otherwise, says Hamilton.
“We’ve shown that just one very elegant equation, rooted in the science of information, gives you rich algorithms spanning 100 years of research in machine learning. This opens up many new avenues for discovery,” he adds.
“Perhaps the most challenging aspect of being a machine-learning researcher these days is the seemingly unlimited number of papers that appear each year. In this context, papers that unify and connect existing algorithms are of great importance, yet they are extremely rare. I-Con provides an excellent example of such a unifying approach and will hopefully inspire others to apply a similar approach to other domains of machine learning,” says Yair Weiss, a professor in the School of Computer Science and Engineering at the Hebrew University of Jerusalem, who was not involved in this research.
This research was funded, in part, by the Air Force Artificial Intelligence Accelerator, the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions, and Quanta Computer.