MIT Latest News

Recovering from the past and transitioning to a better energy future
As the frequency and severity of extreme weather events grow, it may become increasingly necessary to employ a bolder approach to climate change, warned Emily A. Carter, the Gerhard R. Andlinger Professor in Energy and the Environment at Princeton University. Carter made her case for why the energy transition is no longer enough in the face of climate change while speaking at the MIT Energy Initiative (MITEI) Presents: Advancing the Energy Transition seminar on the MIT campus.
“If all we do is take care of what we did in the past — but we don’t change what we do in the future — then we’re still going to be left with very serious problems,” she said. Our approach to climate change mitigation must comprise transformation, intervention, and adaptation strategies, said Carter.
Transitioning to a decarbonized electricity system is one piece of the puzzle. Growing amounts of solar and wind energy — along with nuclear, hydropower, and geothermal — are slowly transforming the electricity landscape, but Carter noted that there are new technologies farther down the pipeline.
“Advanced geothermal may come on in the next couple of decades. Fusion will only really start to play a role later in the century, but could provide firm electricity such that we can start to decommission nuclear,” said Carter, who is also a senior strategic advisor and associate laboratory director at the Department of Energy’s Princeton Plasma Physics Laboratory.
Taking this a step further, Carter outlined how this carbon-free electricity should then be used to electrify everything we can. She highlighted the industrial sector as a critical area for transformation: “The energy transition is about transitioning off of fossil fuels. If you look at the manufacturing industries, they are driven by fossil fuels right now. They are driven by fossil fuel-driven thermal processes.” Carter noted that thermal energy is much less efficient than electricity and highlighted electricity-driven strategies that could replace heat in manufacturing, such as electrolysis, plasmas, light-emitting diodes (LEDs) for photocatalysis, and joule heating.
The transportation sector is also a key area for electrification, Carter said. While electric vehicles have become increasingly common in recent years, heavy-duty transportation is not as easily electrified. The solution? “Carbon-neutral fuels for heavy-duty aviation and shipping,” she said, emphasizing that these fuels will need to become part of the circular economy. “We know that when we burn those fuels, they’re going to produce CO2 [carbon dioxide] again. They need to come from a source of CO2 that is not fossil-based.”
The next step is intervention in the form of carbon dioxide removal, which then necessitates methods of storage and utilization, according to Carter. “There’s a lot of talk about building large numbers of pipelines to capture the CO2 — from fossil fuel-driven power plants, cement plants, steel plants, all sorts of industrial places that emit CO2 — and then piping it and storing it in underground aquifers,” she explained. Offshore pipelines are much more expensive than those on land, but can mitigate public concerns over their safety. Europe is focusing its efforts exclusively offshore for this very reason, and the same could be true for the United States, Carter said.
Once carbon dioxide is captured, commercial utilization may provide economic leverage to accelerate sequestration, even if only a few gigatons are used per year, Carter noted. Through mineralization, CO2 can be converted into carbonates, which could be used in building materials such as concrete and road-paving materials.
There is another form of intervention that Carter currently views as a last resort: solar geoengineering, sometimes known as solar radiation management or SRM. In 1991, Mount Pinatubo in the Philippines erupted and released sulfur dioxide into the stratosphere, which caused a temporary cooling of the Earth by approximately 0.5 degrees Celsius for over a year. SRM seeks to recreate that cooling effect by injecting particles into the atmosphere that reflect sunlight. According to Carter, there are three main strategies: stratospheric aerosol injection, cirrus cloud thinning (thinning clouds to let more of the infrared radiation emitted by the Earth escape to space), and marine cloud brightening (brightening clouds with sea salt so they reflect more light).
“My view is, I hope we don't ever have to do it, but I sure think we should understand what would happen in case somebody else just decides to do it. It’s a global security issue,” said Carter. “In principle, it’s not so difficult technologically, so we’d like to really understand and to be able to predict what would happen if that happened.”
With any technology, stakeholder and community engagement is essential for deployment, Carter said. She emphasized the importance of both respectfully listening to concerns and thoroughly addressing them, stating, “Hopefully, there’s enough information given to assuage their fears. We have to gain the trust of people before any deployment can be considered.”
A crucial component of this trust starts with the responsibility of the scientific community to be transparent and critique each other’s work, Carter said. “Skepticism is good. You should have to prove your proof of principle.”
MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The series will continue in fall 2025. For more information on this and additional events, visit the MITEI website.
Inroads to personalized AI trip planning
Travel agents provide end-to-end logistics — like transportation, lodging, and meals — for businesspeople, vacationers, and everyone in between. For those looking to make their own arrangements, large language models (LLMs) seem like a strong tool for the task because of their ability to interact iteratively using natural language, apply some commonsense reasoning, collect information, and call in other tools to help. However, recent work has found that state-of-the-art LLMs struggle with complex logistical and mathematical reasoning, as well as with multi-constraint problems like trip planning, where they have been found to produce viable solutions 4 percent of the time or less, even with additional tools and application programming interfaces (APIs).
To improve the success rate of LLM solutions for such complex problems, a research team from MIT and the MIT-IBM Watson AI Lab reframed the issue. “We believe a lot of these planning problems are naturally a combinatorial optimization problem,” where several constraints must be satisfied in a certifiable way, says Chuchu Fan, associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS), and a researcher in the MIT-IBM Watson AI Lab. Her team applies machine learning, control theory, and formal methods to develop safe and verifiable control systems for robotics, autonomous systems, controllers, and human-machine interactions.
Noting the transferable nature of their work for travel planning, the group sought to create a user-friendly framework that can act as an AI travel broker to help develop realistic, logical, and complete travel plans. To achieve this, the researchers combined common LLMs with algorithms and a complete satisfiability solver. Solvers are mathematical tools that rigorously check if criteria can be met and how, but they require complex computer programming for use. This makes them natural companions to LLMs for problems like these, where users want help planning in a timely manner, without the need for programming knowledge or research into travel options. Further, if a user’s constraint cannot be met, the new technique can identify and articulate where the issue lies and propose alternative measures to the user, who can then choose to accept, reject, or modify them until a valid plan is formulated, if one exists.
“Different complexities of travel planning are something everyone will have to deal with at some point. There are different needs, requirements, constraints, and real-world information that you can collect,” says Fan. “Our idea is not to ask LLMs to propose a travel plan. Instead, an LLM here is acting as a translator to translate this natural language description of the problem into a problem that a solver can handle [and then provide that to the user],” says Fan.
Co-authoring a paper on the work with Fan are Yang Zhang of MIT-IBM Watson AI Lab, AeroAstro graduate student Yilun Hao, and graduate student Yongchao Chen of MIT LIDS and Harvard University. This work was recently presented at the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.
Breaking down the solver
Math tends to be domain-specific. For example, in natural language processing, LLMs perform regressions to predict the next token, a.k.a. “word,” in a series to analyze or create a document. This works well for generalizing diverse human inputs. LLMs alone, however, wouldn’t work for formal verification applications, like in aerospace or cybersecurity, where circuit connections and constraint tasks need to be complete and proven, otherwise loopholes and vulnerabilities can sneak by and cause critical safety issues. Here, solvers excel, but they need fixed formatting inputs and struggle with unsatisfiable queries. A hybrid technique, however, provides an opportunity to develop solutions for complex problems, like trip planning, in a way that’s intuitive for everyday people.
“The solver is really the key here, because when we develop these algorithms, we know exactly how the problem is being solved as an optimization problem,” says Fan. Specifically, the research group used a solver called satisfiability modulo theories (SMT), which determines whether a formula can be satisfied. “With this particular solver, it’s not just doing optimization. It’s doing reasoning over a lot of different algorithms there to understand whether the planning problem is possible or not to solve. That’s a pretty significant thing in travel planning. It’s not a very traditional mathematical optimization problem because people come up with all these limitations, constraints, restrictions,” notes Fan.
Translation in action
The “travel agent” works in four steps that can be repeated, as needed. The researchers used GPT-4, Claude-3, or Mistral-Large as the method’s LLM. First, the LLM parses a user’s requested travel plan prompt into planning steps, noting preferences for budget, hotels, transportation, destinations, attractions, restaurants, and trip duration in days, as well as any other user prescriptions. Those steps are then converted into executable Python code (with a natural language annotation for each of the constraints), which calls APIs such as CitySearch and FlightSearch to collect data, and the SMT solver to begin executing the steps laid out in the constraint satisfaction problem. If a sound and complete solution can be found, the solver outputs the result to the LLM, which then provides a coherent itinerary to the user.
If one or more constraints cannot be met, the framework begins looking for an alternative. The solver outputs code identifying the conflicting constraints (with their corresponding annotations), which the LLM then presents to the user along with a potential remedy. The user can then decide how to proceed, until a solution is found (or the maximum number of iterations is reached).
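To make the division of labor concrete, here is a minimal sketch of how a handful of LLM-translated constraints might be posed to an SMT solver. It uses the Z3 solver’s Python bindings; the constraint values and labels are illustrative assumptions, not the team’s actual code, datasets, or APIs.

```python
# Minimal sketch (not the paper's implementation) of checking travel
# constraints with the Z3 SMT solver and naming conflicts on failure.
from z3 import Int, Solver, sat

flight_cost = Int("flight_cost")  # round-trip airfare, in dollars
hotel_cost = Int("hotel_cost")    # total lodging cost
num_days = Int("num_days")        # trip duration

s = Solver()
s.set(unsat_core=True)

# Each constraint carries a natural-language label, mirroring the
# annotations the LLM attaches so conflicts can be explained to the user.
s.assert_and_track(flight_cost == 450, "flight: cheapest fare found is $450")
s.assert_and_track(hotel_cost == 120 * num_days, "hotel: $120 per night")
s.assert_and_track(num_days >= 5, "user wants at least 5 days")
s.assert_and_track(flight_cost + hotel_cost <= 900, "user budget is $900")

if s.check() == sat:
    print("Feasible plan:", s.model())
else:
    # The unsat core is the small set of mutually incompatible constraints;
    # the LLM can turn it into a plain-language remedy (e.g., raise budget).
    print("No valid plan. Conflicts:", s.unsat_core())
```

In this toy example the budget, fare, and five-night minimum cannot all hold at once, so the solver reports those constraints as the conflict rather than simply failing — which is what allows the framework to propose targeted fixes.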
Generalizable and robust planning
The researchers tested their method using the aforementioned LLMs against other baselines: GPT-4 by itself, OpenAI o1-preview by itself, GPT-4 with a tool to collect information, and a search algorithm that optimizes for total cost. Using the TravelPlanner dataset, which includes data for viable plans, the team looked at multiple performance metrics: how frequently a method could deliver a solution, whether the solution satisfied commonsense criteria like not visiting two cities in one day, the method’s ability to meet one or more constraints, and a final pass rate indicating that it could meet all constraints. The new technique generally achieved over a 90 percent pass rate, compared to 10 percent or lower for the baselines. The team also explored adding a JSON representation within the query step, which further improved performance, yielding pass rates of 84.4–98.9 percent.
The MIT-IBM team posed additional challenges for their method. They examined how important each component of their solution was — such as removing human feedback or the solver — and how that affected plan adjustments to unsatisfiable queries within 10 or 20 iterations, using a modified version of TravelPlanner and a new dataset they created, called UnsatChristmas, which includes unseen constraints. On average, the group’s framework achieved 78.6 and 85 percent success, rising to 81.6 and 91.7 percent with additional plan-modification rounds. The researchers also analyzed how well it handled new, unseen constraints and paraphrased query-step and step-code prompts. In both cases, it performed very well, including an 86.7 percent pass rate in the paraphrasing trial.
Lastly, the MIT-IBM researchers applied their framework to other domains, with tasks including block picking, task allocation, the traveling salesman problem, and warehouse operation. Here, the method must select numbered, colored blocks to maximize its score; optimize robot task assignment for different scenarios; plan trips that minimize distance traveled; and complete and optimize robot tasks in a warehouse.
“I think this is a very strong and innovative framework that can save a lot of time for humans, and also, it’s a very novel combination of the LLM and the solver,” says Hao.
This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.
Melding data, systems, and society
Research that crosses the traditional boundaries of academic disciplines, and boundaries between academia, industry, and government, is increasingly widespread, and has sometimes led to the spawning of significant new disciplines. But Munther Dahleh, a professor of electrical engineering and computer science at MIT, says that such multidisciplinary and interdisciplinary work often suffers from a number of shortcomings and handicaps compared to more traditionally focused disciplinary work.
But increasingly, he says, the profound challenges that face us in the modern world — including climate change, biodiversity loss, how to control and regulate artificial intelligence systems, and the identification and control of pandemics — require such meshing of expertise from very different areas, including engineering, policy, economics, and data analysis. That realization is what guided him, a decade ago, in the creation of MIT’s pioneering Institute for Data, Systems and Society (IDSS), aiming to foster a more deeply integrated and lasting set of collaborations than the usual temporary and ad hoc associations that occur for such work.
Dahleh has now written a book detailing the process of analyzing the landscape of existing disciplinary divisions at MIT and conceiving of a way to create a structure aimed at breaking down some of those barriers in a lasting and meaningful way, in order to bring about this new institute. The book, “Data, Systems, and Society: Harnessing AI for Societal Good,” was published this March by Cambridge University Press.
The book, Dahleh says, is his attempt “to describe our thinking that led us to the vision of the institute. What was the driving vision behind it?” It is aimed at a number of different audiences, he says, but in particular, “I’m targeting students who are coming to do research that they want to address societal challenges of different types, but utilizing AI and data science. How should they be thinking about these problems?”
A key concept that has guided the structure of the institute is something he refers to as “the triangle.” This refers to the interaction of three components: physical systems, people interacting with those physical systems, and then regulation and policy regarding those systems. Each of these affects, and is affected by, the others in various ways, he explains. “You get a complex interaction among these three components, and then there is data on all these pieces. Data is sort of like a circle that sits in the middle of this triangle and connects all these pieces,” he says.
When tackling any big, complex problem, he suggests, it is useful to think in terms of this triangle. “If you’re tackling a societal problem, it’s very important to understand the impact of your solution on society, on the people, and the role of people in the success of your system,” he says. Often, he says, “solutions and technology have actually marginalized certain groups of people and have ignored them. So the big message is always to think about the interaction between these components as you think about how to solve problems.”
As a specific example, he cites the Covid-19 pandemic. That was a perfect example of a big societal problem, he says, and illustrates the three sides of the triangle: there’s the biology, which was little understood at first and was subject to intensive research efforts; there was the contagion effect, having to do with social behavior and interactions among people; and there was the decision-making by political leaders and institutions, in terms of shutting down schools and companies or requiring masks, and so on. “The complex problem we faced was the interaction of all these components happening in real time, when the data wasn’t all available,” he says.
Making a decision, for example shutting schools or businesses, based on controlling the spread of the disease, had immediate effects on economics and social well-being and health and education, “so we had to weigh all these things back into the formula,” he says. “The triangle came alive for us during the pandemic.” As a result, IDSS “became a convening place, partly because of all the different aspects of the problem that we were interested in.”
Examples of such interactions abound, he says. Social media and e-commerce platforms are another case of “systems built for people, and they have a regulation aspect, and they fit into the same story if you’re trying to understand misinformation or the monitoring of misinformation.”
The book presents many examples of ethical issues in AI, stressing that they must be handled with great care. He cites self-driving cars as an example, where programming decisions in dangerous situations can appear ethical but lead to negative economic and humanitarian outcomes. For instance, while most Americans support the idea that a car should sacrifice its driver rather than kill an innocent person, they wouldn’t buy such a car. This reluctance lowers adoption rates and ultimately increases casualties.
In the book, he explains the difference, as he sees it, between the concept of “transdisciplinary” versus typical cross-disciplinary or interdisciplinary research. “They all have different roles, and they have been successful in different ways,” he says. The key is that most such efforts tend to be transitory, and that can limit their societal impact. The fact is that even if people from different departments work together on projects, they lack a structure of shared journals, conferences, common spaces and infrastructure, and a sense of community. Creating an academic entity in the form of IDSS that explicitly crosses these boundaries in a fixed and lasting way was an attempt to address that lack. “It was primarily about creating a culture for people to think about all these components at the same time.”
He hastens to add that of course such interactions were already happening at MIT, “but we didn’t have one place where all the students are all interacting with all of these principles at the same time.” In the IDSS doctoral program, for instance, there are 12 required core courses — half of them from statistics and optimization theory and computation, and half from the social sciences and humanities.
Dahleh stepped down from the leadership of IDSS two years ago to return to teaching and to continue his research. But as he reflected on the work of that institute and his role in bringing it into being, he realized that unlike his own academic research, in which every step along the way is carefully documented in published papers, “I haven’t left a trail” to document the creation of the institute and the thinking behind it. “Nobody knows what we thought about, how we thought about it, how we built it.” Now, with this book, they do.
The book, he says, is “kind of leading people into how all of this came together, in hindsight. I want to have people read this and sort of understand it from a historical perspective, how something like this happened, and I did my best to make it as understandable and simple as I could.”
How we really judge AI
Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?
A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.
“We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context,” says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”
The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.
New framework adds insight
People’s reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI, compared to advice from humans.
To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “Capability–Personalization Framework” — the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.
Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct “decision contexts” — for instance, whether or not participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability–Personalization Framework indeed helps account for people’s preferences.
“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization. People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal.”
He adds: “The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too.”
For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI’s abilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.
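Schematically, the framework’s core prediction reduces to a simple conjunction. The toy function below (an illustration for this article, not code or materials from the study) encodes it, using the examples above as test cases.

```python
# Toy encoding of the Capability-Personalization Framework's prediction:
# AI appreciation requires BOTH perceived superior capability AND a low
# need for personalization; otherwise the framework predicts aversion.
def predicted_attitude(ai_more_capable: bool, needs_personalization: bool) -> str:
    if ai_more_capable and not needs_personalization:
        return "AI appreciation"
    return "AI aversion"

print(predicted_attitude(True, False))   # fraud detection, sorting large datasets
print(predicted_attitude(True, True))    # therapy, job interviews, medical diagnoses
print(predicted_attitude(False, False))  # AI perceived as less capable at the task
```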
“People have a fundamental desire to see themselves as unique and distinct from other people,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel AI can’t grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people.”
Context also matters: From tangibility to unemployment
The study also uncovered other factors that influence individuals’ preferences for AI. For instance, AI appreciation is more pronounced for tangible robots than for intangible algorithms.
Economic context also matters. In countries with lower unemployment, AI appreciation is more pronounced.
“It makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you’re less likely to embrace it.”
Lu is continuing to examine people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the Capability–Personalization Framework offers a valuable lens for understanding how people evaluate AI across different contexts.
“We’re not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” Lu concludes.
In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.
The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
“Each of us holds a piece of the solution”
MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.
“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”
Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.
“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”
The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the crowd before circulating among attendees to share their perspectives and discuss community questions and ideas.
Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting relevant work today on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.
Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.
“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”
Universal nanosensor unlocks the secrets to plant growth
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group within the Singapore-MIT Alliance for Research and Technology have developed the world’s first near-infrared fluorescent nanosensor capable of real-time, nondestructive, and species-agnostic detection of indole-3-acetic acid (IAA) — the primary bioactive auxin hormone that controls the way plants develop, grow, and respond to stress.
Auxins, particularly IAA, play a central role in regulating key plant processes such as cell division, elongation, root and shoot development, and response to environmental cues like light, heat, and drought. External factors like light affect how auxin moves within the plant, temperature influences how much is produced, and a lack of water can disrupt hormone balance. When plants cannot effectively regulate auxins, they may not grow well, adapt to changing conditions, or produce as much food.
Existing IAA detection methods, such as liquid chromatography, require taking samples from the plant, which harms or removes part of it. Conventional methods also measure the effects of IAA rather than detecting it directly, and cannot be used universally across different plant types. In addition, because IAA is a small molecule that cannot easily be tracked in real time, measuring auxin otherwise requires inserting biosensors containing fluorescent proteins into the plant’s genome so that it emits a fluorescent signal for live imaging.
SMART’s newly developed nanosensor enables direct, real-time tracking of auxin levels in living plants with high precision. The sensor uses near-infrared imaging to monitor IAA fluctuations non-invasively across tissues like leaves, roots, and cotyledons, and it is capable of bypassing chlorophyll interference to ensure highly reliable readings even in densely pigmented tissues. The technology does not require genetic modification and can be integrated with existing agricultural systems — offering a scalable precision tool to advance both crop optimization and fundamental plant physiology research.
By providing real-time, precise measurements of auxin, the sensor empowers farmers with earlier and more accurate insights into plant health. With these insights and comprehensive data, farmers can make smarter, data-driven decisions on irrigation, nutrient delivery, and pruning, tailored to the plant’s actual needs — ultimately improving crop growth, boosting stress resilience, and increasing yields.
“We need new technologies to address the problems of food insecurity and climate change worldwide. Auxin is a central growth signal within living plants, and this work gives us a way to tap it to give new information to farmers and researchers,” says Michael Strano, co-lead principal investigator at DiSTAP, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper. “The applications are many, including early detection of plant stress, allowing for timely interventions to safeguard crops. For urban and indoor farms, where light, water, and nutrients are already tightly controlled, this sensor can be a valuable tool in fine-tuning growth conditions with even greater precision to optimize yield and sustainability.”
The research team documented the nanosensor’s development in a paper titled, “A Near-Infrared Fluorescent Nanosensor for Direct and Real-Time Measurement of Indole-3-Acetic Acid in Plants,” published in the journal ACS Nano. The sensor comprises single-walled carbon nanotubes wrapped in a specially designed polymer, which enables it to detect IAA through changes in near-infrared fluorescence intensity. Successfully tested across multiple species, including Arabidopsis, Nicotiana benthamiana, choy sum, and spinach, the nanosensor can map IAA responses under various environmental conditions such as shade, low light, and heat stress.
“This sensor builds on DiSTAP’s ongoing work in nanotechnology and the CoPhMoRe technique, which has already been used to develop other sensors that can detect important plant compounds such as gibberellins and hydrogen peroxide. By adapting this approach for IAA, we’re adding to our inventory of novel, precise, and nondestructive tools for monitoring plant health. Eventually, these sensors can be multiplexed, or combined, to monitor a spectrum of plant growth markers for more complete insights into plant physiology,” says Duc Thinh Khong, research scientist at DiSTAP and co-first author of the paper.
“This small but mighty nanosensor tackles a long-standing challenge in agriculture: the need for a universal, real-time, and noninvasive tool to monitor plant health across various species. Our collaborative achievement not only empowers researchers and farmers to optimize growth conditions and improve crop yield and resilience, but also advances our scientific understanding of hormone pathways and plant-environment interactions,” says In-Cheol Jang, senior principal investigator at TLL, principal investigator at DiSTAP, and co-corresponding author of the paper.
Looking ahead, the research team is looking to combine multiple sensing platforms to simultaneously detect IAA and its related metabolites to create a comprehensive hormone signaling profile, offering deeper insights into plant stress responses and enhancing precision agriculture. They are also working on using microneedles for highly localized, tissue-specific sensing, and collaborating with industrial urban farming partners to translate the technology into practical, field-ready solutions.
The research was carried out by SMART, and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program.
AI-enabled control system helps autonomous drones stay on target in uncertain environments
An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances in flight presents an enormous challenge for the drone’s flight control system.
To help such a drone stay on target, MIT researchers developed a new, machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.
Unlike standard approaches, the new technique does not require the person programming the autonomous drone to know anything in advance about the structure of these uncertain disturbances. Instead, the control system’s artificial intelligence model learns all it needs to know from a small amount of observational data collected from 15 minutes of flight time.
Importantly, the technique automatically determines which optimization algorithm it should use to adapt to the disturbances, which improves tracking performance. It chooses the algorithm that best suits the geometry of the specific disturbances the drone is facing.
The researchers train their control system to do both things simultaneously using a technique called meta-learning, which teaches the system how to adapt to different types of disturbances.
Taken together, these ingredients enable their adaptive control system to achieve 50 percent less trajectory-tracking error than baseline methods in simulations and to perform better with new wind speeds it didn’t see during training.
In the future, this adaptive control system could help autonomous drones more efficiently deliver heavy parcels despite strong winds or monitor fire-prone areas of a national park.
“The concurrent learning of these components is what gives our method its strength. By leveraging meta-learning, our controller can automatically make choices that will be best for quick adaptation,” says Navid Azizan, who is the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this control system.
Azizan is joined on the paper by lead author Sunbochen Tang, a graduate student in the Department of Aeronautics and Astronautics, and Haoyuan Sun, a graduate student in the Department of Electrical Engineering and Computer Science. The research was recently presented at the Learning for Dynamics and Control Conference.
Finding the right algorithm
Typically, a control system incorporates a function that models the drone and its environment, and includes some existing information on the structure of potential disturbances. But in a real world filled with uncertain conditions, it is often impossible to hand-design this structure in advance.
Many control systems use an adaptation method based on a popular optimization algorithm, known as gradient descent, to estimate the unknown parts of the problem and determine how to keep the drone as close as possible to its target trajectory during flight. However, gradient descent is only one member of a larger family of algorithms to choose from, known as mirror descent.
“Mirror descent is a general family of algorithms, and for any given problem, one of these algorithms can be more suitable than others. The name of the game is how to choose the particular algorithm that is right for your problem. In our method, we automate this choice,” Azizan says.
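Loosely speaking, every member of the mirror descent family performs the same kind of update, measured under a different geometry set by a “distance-generating function.” The sketch below (illustrative only, not the researchers’ code) shows two members of the family: the Euclidean choice recovers ordinary gradient descent, while a negative-entropy choice yields the exponentiated-gradient update suited to parameters that live on a probability simplex.

```python
# Two members of the mirror descent family, differing only in geometry.
import numpy as np

def gd_update(theta, grad, lr=0.1):
    # psi(x) = 0.5 * ||x||^2 (Euclidean) -> ordinary gradient descent.
    return theta - lr * grad

def eg_update(theta, grad, lr=0.1):
    # psi(x) = sum_i x_i log x_i (negative entropy), for parameters on the
    # probability simplex -> the exponentiated-gradient update.
    w = theta * np.exp(-lr * grad)
    return w / w.sum()

theta = np.array([0.5, 0.3, 0.2])
grad = np.array([0.2, -0.1, 0.0])
print(gd_update(theta, grad))  # Euclidean geometry
print(eg_update(theta, grad))  # simplex geometry
```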
In their control system, the researchers replaced the function that contains some structure of potential disturbances with a neural network model that learns to approximate them from data. In this way, they don’t need to specify the structure of the wind speeds the drone could encounter in advance.
Their method also uses an algorithm to automatically select the right mirror-descent function while learning the neural network model from data, rather than assuming a user has the ideal function picked out already. The researchers give this algorithm a range of functions to pick from, and it finds the one that best fits the problem at hand.
“Choosing a good distance-generating function to construct the right mirror-descent adaptation matters a lot in getting the right algorithm to reduce the tracking error,” Tang adds.
Learning to adapt
While the wind speeds the drone may encounter could change every time it takes flight, the controller’s neural network and mirror function should stay the same so they don’t need to be recomputed each time.
To make their controller more flexible, the researchers use meta-learning, teaching it to adapt by showing it a range of wind speed families during training.
“Our method can cope with different objectives because, using meta-learning, we can learn a shared representation through different scenarios efficiently from data,” Tang explains.
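As a rough illustration of the idea, the deliberately simplified, Reptile-style toy below (not the paper’s algorithm) shows how an outer loop can find a shared initialization from which a few inner adaptation steps suffice for each new wind condition. In the actual system, the shared component is a learned neural network representation and the adaptation uses the mirror-descent machinery described above.

```python
# Reptile-style meta-learning on a scalar toy problem (illustrative only).
winds = [2.0, 5.0, 8.0]   # stand-ins for wind-speed families seen in training
theta = 0.0               # shared initialization (the meta-learned part)
inner_lr, outer_lr, inner_steps = 0.1, 0.1, 10

for epoch in range(200):
    for wind in winds:
        adapted = theta
        for _ in range(inner_steps):                    # fast inner-loop adaptation
            adapted -= inner_lr * 2 * (adapted - wind)  # grad of (adapted - wind)**2
        theta += outer_lr * (adapted - theta)           # outer (meta) update

print(theta)  # settles near the middle of the training winds (~5)
```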
In the end, the user feeds the control system a target trajectory, and it continuously recalculates, in real time, how the drone should produce thrust to keep it as close as possible to that trajectory while accommodating the uncertain disturbances it encounters.
In both simulations and real-world experiments, the researchers showed that their method led to significantly less trajectory tracking error than baseline approaches with every wind speed they tested.
“Even if the wind disturbances are much stronger than we had seen during training, our technique shows that it can still handle them successfully,” Azizan adds.
In addition, the margin by which their method outperformed the baselines grew as the wind speeds intensified, showing that it can adapt to challenging environments.
The team is now performing hardware experiments to test their control system on real drones with varying wind conditions and other disturbances.
They also want to extend their method so it can handle disturbances from multiple sources at once. For instance, changing wind speeds could cause a parcel the drone is carrying to shift in flight, especially if it holds a sloshing payload.
They also want to explore continual learning, so the drone could adapt to new disturbances without the need to also be retrained on the data it has seen so far.
“Navid and his collaborators have developed breakthrough work that combines meta-learning with conventional adaptive control to learn nonlinear features from data. Key to their approach is the use of mirror descent techniques that exploit the underlying geometry of the problem in ways prior art could not. Their work can contribute significantly to the design of autonomous systems that need to operate in complex and uncertain environments,” says Babak Hassibi, the Mose and Lillian S. Bohn Professor of Electrical Engineering and Computing and Mathematical Sciences at Caltech, who was not involved with this work.
This research was supported, in part, by MathWorks, the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, and the MIT-Google Program for Computing Innovation.
Envisioning a future where health care tech leaves some behind
Will the perfect storm of potentially life-changing, artificial intelligence-driven health care and the desire to increase profits through subscription models alienate vulnerable patients?
For the third year in a row, MIT's Envisioning the Future of Computing Prize asked students to describe, in 3,000 words or fewer, how advancements in computing could shape human society for the better or worse. All entries were eligible to win a number of cash prizes.
Inspired by recent research on the outsized effect microbiomes have on overall health, MIT-WHOI Joint Program in Oceanography and Applied Ocean Science and Engineering PhD candidate Annaliese Meyer created the concept of “B-Bots,” a synthetic bacterial mimic designed to regulate gut microbiomes and activated by Bluetooth.
For the contest, which challenges MIT students to articulate their musings on what a future driven by advances in computing holds, Meyer submitted a work of speculative fiction about how recipients of a revolutionary new health-care technology find their treatment put in jeopardy by the introduction of a subscription-based payment model.
In her winning paper, titled “(Pre/Sub)scribe,” Meyer chronicles the usage of B-Bots from the perspective of both their creator and a B-Bots user named Briar. They celebrate the effects of the supplement, helping them manage vitamin deficiencies and chronic conditions like acid reflux and irritable bowel syndrome. Meyer says that the introduction of a B-Bots subscription model “seemed like a perfect opportunity to hopefully make clear that in a for-profit health-care system, even medical advances that would, in theory, be revolutionary for human health can end up causing more harm than good for the many people on the losing side of the massive wealth disparity in modern society.”
As a Canadian, Meyer has experienced the differences between the health care systems in the United States and Canada. She recounts her mother’s recent cancer treatments, emphasizing the cost and coverage of treatments in British Columbia when compared to the U.S.
Aside from a cautionary tale of equity in the American health care system, Meyer hopes readers take away an additional scientific message on the complexity of gut microbiomes. Inspired by her thesis work in ocean metaproteomics, Meyer says, “I think a lot about when and why microbes produce different proteins to adapt to environmental changes, and how that depends on the rest of the microbial community and the exchange of metabolic products between organisms.”
Meyer had hoped to participate in the previous year’s contest, but the time constraints of her lab work put her submission on hold. Now in the midst of thesis work, she saw the contest as a way to add some variety to what she was writing while keeping engaged with her scientific interests. However, writing has always been a passion. “I wrote a lot as a kid (‘author’ actually often preceded ‘scientist’ as my dream job while I was in elementary school), and I still write fiction in my spare time,” she says.
Named the winner of the $10,000 grand prize, Meyer says the essay and presentation preparation were extremely rewarding.
“The chance to explore a new topic area which, though related to my field, was definitely out of my comfort zone, really pushed me as a writer and a scientist. It got me reading papers I’d never have found before, and digging into concepts that I’d barely ever encountered. (Did I have any real understanding of the patent process prior to this? Absolutely not.) The presentation dinner itself was a ton of fun; it was great to both be able to celebrate with my friends and colleagues as well as meet people from a bunch of different fields and departments around MIT.”
Envisioning the Future of Computing Prize
Co-sponsored by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing and the School of Humanities, Arts, and Social Sciences (SHASS), with support from MAC3 Philanthropies, the contest this year attracted 65 submissions from undergraduate and graduate students across various majors, including brain and cognitive sciences, economics, electrical engineering and computer science, physics, anthropology, and others.
Caspar Hare, associate dean of SERC and professor of philosophy, launched the prize in 2023. He says that the object of the prize was “to encourage MIT students to think about what they’re doing, not just in terms of advancing computing-related technologies, but also in terms of how the decisions they make may or may not work to our collective benefit.”
He emphasized that the Envisioning the Future of Computing prize will continue to remain “interesting and important” to the MIT community. There are plans in place to tweak next year’s contest, offering more opportunities for workshops and guidance for those interested in submitting essays.
“Everyone is excited to continue this for as long as it remains relevant, which could be forever,” he says, suggesting that in years to come the prize could give us a series of historical snapshots of what computing-related technologies MIT students found most compelling.
“Computing-related technology is going to be transforming and changing the world. MIT students will remain a big part of that.”
Crowning a winner
As part of a two-stage evaluation process, all the submitted essays were reviewed anonymously by a committee of faculty members from the college, SHASS, and the Department of Urban Studies and Planning. The judges advanced three finalists whose papers were deemed the most articulate, thorough, grounded, imaginative, and inspiring.
In early May, a live awards ceremony was held at which the finalists gave 20-minute presentations on their entries and took questions from the audience. Nearly 140 MIT community members, family members, and friends attended in support of the finalists. Audience members and the judging panel asked the presenters challenging, thoughtful questions about the societal impact of their fictional computing technologies.
The winner was determined by a final score weighted at 75 percent for the essay and 25 percent for the presentation.
This year’s judging panel included:
- Marzyeh Ghassemi, associate professor in electrical engineering and computer science;
- Caspar Hare, associate dean of SERC and professor of philosophy;
- Jason Jackson, associate professor in political economy and urban planning;
- Brad Skow, professor of philosophy;
- Armando Solar-Lezama, associate director and chief operating officer of the MIT Computer Science and Artificial Intelligence Laboratory; and
- Nikos Trichakis, interim associate dean of SERC and associate professor of operations management.
The judges also awarded $5,000 to the two runners-up: Martin Staadecker, a graduate student in the Technology and Policy Program in the Institute for Data, Systems, and Society, for his essay on a fictional token-based system to track fossil fuels, and Juan Santoyo, a PhD candidate in the Department of Brain and Cognitive Sciences, for his short story of a field-deployed AI designed to help the mental health of soldiers in times of conflict. In addition, eight honorable mentions were recognized, with each receiving a cash prize of $1,000.
Helping machines understand visual content with AI
Data should drive every decision a modern business makes. But most businesses have a massive blind spot: They don’t know what’s happening in their visual data.
Coactive is working to change that. The company, founded by Cody Coleman ’13, MEng ’15 and William Gaviria Rojas ’13, has created an artificial intelligence-powered platform that can make sense of data like images, audio, and video to unlock new insights.
Coactive’s platform can instantly search, organize, and analyze unstructured visual content to help businesses make faster, better decisions.
“In the first big data revolution, businesses got better at getting value out of their structured data,” Coleman says, referring to data from tables and spreadsheets. “But now, approximately 80 to 90 percent of the data in the world is unstructured. In the next chapter of big data, companies will have to process data like images, video, and audio at scale, and AI is a key piece of unlocking that capability.”
Coactive is already working with several large media and retail companies to help them understand their visual content without relying on manual sorting and tagging. That’s helping them get the right content to users faster, remove explicit content from their platforms, and uncover how specific content influences user behavior.
More broadly, the founders believe Coactive serves as an example of how AI can empower humans to work more efficiently and solve new problems.
“The word coactive means to work together concurrently, and that’s our grand vision: helping humans and machines work together,” Coleman says. “We believe that vision is more important now than ever because AI can either pull us apart or bring us together. We want Coactive to be an agent that pulls us together and gives human beings a new set of superpowers.”
Giving computers vision
Coleman met Gaviria Rojas in the summer before their first year through the MIT Interphase Edge program. Both would go on to major in electrical engineering and computer science and to work on bringing MIT OpenCourseWare content to Mexican universities, among other projects.
“That was a great example of entrepreneurship,” Coleman recalls of the OpenCourseWare project. “It was really empowering to be responsible for the business and the software development. It led me to start my own small web-development businesses afterward, and to take [the MIT course] Founder’s Journey.”
Coleman first explored the power of AI at MIT while working as a graduate researcher with the Office of Digital Learning (now MIT Open Learning), where he used machine learning to study how humans learn on MITx, which hosts massive, open online courses created by MIT faculty and instructors.
“It was really amazing to me that you could democratize this transformational journey that I went through at MIT with digital learning — and that you could apply AI and machine learning to create adaptive systems that not only help us understand how humans learn, but also deliver more personalized learning experiences to people around the world,” Coleman says of MITx. “That was also the first time I got to explore video content and apply AI to it.”
After MIT, Coleman went to Stanford University for his PhD, where he worked on lowering barriers to using AI. The research led him to work with companies like Pinterest and Meta on AI and machine-learning applications.
“That’s where I was able to see around the corner into the future of what people wanted to do with AI and their content,” Coleman recalls. “I was seeing how leading companies were using AI to drive business value, and that’s where the initial spark for Coactive came from. I thought, ‘What if we create an enterprise-grade operating system for content and multimodal AI to make that easy?’”
Meanwhile, Gaviria Rojas moved to the Bay Area in 2020 and started working as a data scientist at eBay. As part of the move, he needed help transporting his couch, and Coleman was the lucky friend he called.
“On the car ride, we realized we both saw an explosion happening around data and AI,” Gaviria Rojas says. “At MIT, we got a front-row seat to the big data revolution, and we saw people inventing technologies to unlock value from that data at scale. Cody and I realized we had another powder keg about to explode with enterprises collecting tremendous amounts of data, but this time it was multimodal data like images, video, audio, and text. There was a missing technology to unlock it at scale. That was AI.”
The platform the founders went on to build — what Coleman describes as an “AI operating system” — is model agnostic, meaning the company can swap out the AI systems under the hood as models continue to improve. Coactive’s platform includes prebuilt applications that business customers can use to do things like search through their content, generate metadata, and conduct analytics to extract insights.
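One way to picture what “model agnostic” means in practice: applications code against a generic embedding interface, so the vision model underneath can be swapped as better ones appear. The sketch below is hypothetical; none of its names come from Coactive’s actual product or API.

```python
# Hypothetical model-agnostic design: applications depend only on an
# embedding interface, so the underlying vision model is swappable.
from typing import Protocol
import numpy as np

class EmbeddingModel(Protocol):
    """Anything that maps raw content (an image, a video frame) to a vector."""
    def embed(self, content: bytes) -> np.ndarray: ...

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query: bytes, assets: list[bytes], model: EmbeddingModel) -> int:
    # Embed the query and every asset with whichever model is plugged in,
    # then return the index of the best match.
    q = model.embed(query)
    return max(range(len(assets)), key=lambda i: cosine(q, model.embed(assets[i])))
```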
“Before AI, computers would see the world through bytes, whereas humans would see the world through vision,” Coleman says. “Now with AI, machines can finally see the world like we do, and that’s going to cause the digital and physical worlds to blur.”
Improving the human-computer interface
Reuters’ database of images supplies the world’s journalists with millions of photos. Before Coactive, the company relied on reporters manually entering tags with each photo so that the right images would show up when journalists searched for certain subjects.
“It was incredibly slow and expensive to go through all of these raw assets, so people just didn’t add tags,” Coleman says. “That meant when you searched for things, there were limited results even if relevant photos were in the database.”
Now, when journalists on Reuters’ website select ‘Enable AI Search,’ Coactive can pull up relevant content based on its AI system’s understanding of the details in each image and video.
“It’s vastly improving the quality of results for reporters, which enables them to tell better, more accurate stories than ever before,” Coleman says.
Reuters is not alone in struggling to manage all of its content. Digital asset management is a huge undertaking for many media and retail companies, which today often rely on manually entered metadata for sorting and searching through that content.
Another Coactive customer is Fandom, one of the world’s largest platforms for information about TV shows, video games, and movies, with more than 300 million monthly active users. Fandom is using Coactive to understand visual data in its online communities and help remove excessive gore and sexualized content.
“It used to take 24 to 48 hours for Fandom to review each new piece of content,” Coleman says. “Now with Coactive, they’ve codified their community guidelines and can generate finer-grain information in an average of about 500 milliseconds.”
With every use case, the founders see Coactive as enabling a new paradigm in the ways humans work with machines.
“Throughout the history of human-computer interaction, we’ve had to bend over a keyboard and mouse to input information in a way that machines could understand,” Coleman says. “Now, for the first time, we can just speak naturally, we can share images and video with AI, and it can understand that content. That’s a fundamental change in the way we think about human-computer interactions. The core vision of Coactive is that, because of that change, we need a new operating system and a new way of working with content and AI.”
How the brain distinguishes between ambiguous hypotheses
When navigating a place that we’re only somewhat familiar with, we often rely on unique landmarks to help make our way. However, if we’re looking for an office in a brick building, and there are many brick buildings along our route, we might use a rule like looking for the second building on a street, rather than relying on distinguishing the building itself.
Until that ambiguity is resolved, we must hold in mind that there are multiple possibilities (or hypotheses) for where we are in relation to our destination. In a study of mice, MIT neuroscientists have now discovered that these hypotheses are explicitly represented in the brain by distinct neural activity patterns.
This is the first time that neural activity patterns that encode simultaneous hypotheses have been seen in the brain. The researchers found that these representations, which were observed in the brain’s retrosplenial cortex (RSC), not only encode hypotheses but also could be used by the animals to choose the correct way to go.
“As far as we know, no one has shown in a complex reasoning task that there’s an area in association cortex that holds two hypotheses in mind and then uses one of those hypotheses, once it gets more information, to actually complete the task,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Jakob Voigts PhD ’17, a former postdoc in Harnett’s lab and now a group leader at the Howard Hughes Medical Institute Janelia Research Campus, is the lead author of the paper, which appears today in Nature Neuroscience.
Ambiguous landmarks
The RSC receives input from the visual cortex, the hippocampal formation, and the anterior thalamus, which it integrates to help guide navigation.
In a 2020 paper, Harnett’s lab found that the RSC uses both visual and spatial information to encode landmarks used for navigation. In that study, the researchers showed that neurons in the RSC of mice integrate visual information about the surrounding environment with spatial feedback of the mice’s own position along a track, allowing them to learn where to find a reward based on landmarks that they saw.
In their new study, the researchers wanted to delve further into how the RSC uses spatial information and situational context to guide navigational decision-making. To do that, they devised a much more complicated navigational task than those typically used in mouse studies. They set up a large, round arena with 16 small openings, or ports, along the side walls. One of these openings would give the mice a reward when they stuck their nose through it. In the first set of experiments, the researchers trained the mice to go to different reward ports indicated by dots of light on the floor that were only visible when the mice got close to them.
Once the mice learned to perform this relatively simple task, the researchers added a second dot. The two dots were always the same distance from each other and from the center of the arena. But now the mice had to go to the port by the counterclockwise dot to get the reward. Because the dots were identical and only became visible at close distances, the mice could never see both dots at once and could not immediately determine which dot was which.
To solve this task, mice therefore had to remember where they expected a dot to show up, integrating their own body position, the direction they were heading, and the path they took to figure out which landmark was which. By measuring RSC activity as the mice approached the ambiguous landmarks, the researchers could determine whether the RSC encodes hypotheses about spatial location. The task was carefully designed to require the mice to use the visual landmarks to obtain rewards, instead of other strategies like odor cues or dead reckoning.
“What is important about the behavior in this case is that mice need to remember something and then use that to interpret future input,” says Voigts, who worked on this study while a postdoc in Harnett’s lab. “It’s not just remembering something, but remembering it in such a way that you can act on it.”
The researchers found that as the mice accumulated information about which dot might be which, populations of RSC neurons displayed distinct activity patterns while the information remained incomplete. Each of these patterns appears to correspond to a hypothesis about where the mouse thought it was with respect to the reward.
When the mice got close enough to determine which dot indicated the reward port, these patterns collapsed into the one representing the correct hypothesis. The findings suggest that these patterns not only passively store hypotheses, but can also be used to compute how to get to the correct location, the researchers say.
“We show that RSC has the required information for using this short-term memory to distinguish the ambiguous landmarks. And we show that this type of hypothesis is encoded and processed in a way that allows the RSC to use it to solve the computation,” Voigts says.
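The study’s analyses are far more involved, but the core logic, maintaining two mutually exclusive hypotheses and collapsing onto one as evidence accumulates, can be illustrated with a toy Bayesian update. This is a conceptual sketch only, not the paper’s model; all numbers are made up.

```python
# Toy illustration of holding two hypotheses and collapsing on evidence.
# A conceptual sketch only, not the study's analysis.

def update(beliefs: list[float], likelihoods: list[float]) -> list[float]:
    """One Bayesian update over two hypotheses about which dot is which."""
    unnormalized = [b * l for b, l in zip(beliefs, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

beliefs = [0.5, 0.5]  # fully ambiguous: both landmark identities equally likely
# Each approach to a dot yields sensory evidence favoring hypothesis 0.
for likelihoods in ([0.6, 0.4], [0.7, 0.3], [0.9, 0.1]):
    beliefs = update(beliefs, likelihoods)

print(beliefs)  # ~[0.97, 0.03]: belief has collapsed onto one hypothesis
```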
Interconnected neurons
When analyzing their initial results, Harnett and Voigts consulted with MIT Professor Ila Fiete, who had run a study about 10 years ago using an artificial neural network to perform a similar navigation task.
That study, previously published on bioRxiv, showed that the neural network displayed activity patterns that were conceptually similar to those seen in the animal studies run by Harnett’s lab. The neurons of the artificial neural network ended up forming highly interconnected low-dimensional networks, like the neurons of the RSC.
“That interconnectivity seems, in ways that we still don’t understand, to be key to how these dynamics emerge and how they’re controlled. And it’s a key feature of how the RSC holds these two hypotheses in mind at the same time,” Harnett says.
In his lab at Janelia, Voigts now plans to investigate how other brain areas involved in navigation, such as the prefrontal cortex, are engaged as mice explore and forage in a more naturalistic way, without being trained on a specific task.
“We’re looking into whether there are general principles by which tasks are learned,” Voigts says. “We have a lot of knowledge in neuroscience about how brains operate once the animal has learned a task, but in comparison we know extremely little about how mice learn tasks or what they choose to learn when given freedom to behave naturally.”
The research was funded, in part, by the National Institutes of Health, a Simons Center for the Social Brain at MIT postdoctoral fellowship, the National Institute of General Medical Sciences, and the Center for Brains, Minds, and Machines at MIT, funded by the National Science Foundation.
Infinite Threads popup thrift store helps the MIT community and the planet
Three years ago, Massachusetts passed a law prohibiting the disposal of used clothing and textiles. The law aims to reduce waste and promote recycling and repurposing. While many remain unaware of the relatively new law, MIT students at the helm of Infinite Threads were happy to see its passage.
Infinite Threads is a spinoff of the Undergraduate Association Sustainability Committee — a group of students running reuse-related events since 2013. With new leadership and a new focus, Infinite Threads went from holding three to four popup sales a year to nine.
A group of students collects lightly used clothing from MIT community members and sells the items at deeply discounted prices at popup sales held several times each semester. Sales take place outside the Student Center to take advantage of the high foot traffic in the area. Anyone can purchase items at the sales, and Infinite Threads accepts clothing donations at the popups as well.
Administrators Cameron Dougal ’25, a recent graduate who majored in urban science and planning with computer science (Course 11-6), and Erin Hovendon, a rising senior in mechanical engineering (Course 2), led the small student-run organization for much of the 2024-25 academic year.
“Our mission is to reduce material waste. We collect a lot of clothing at the end of the spring semester when students are moving out of their residence halls. We then sell items such as shirts, jackets, pants, and jeans at the popup sales for $2 to $6,” says Dougal, adding, “We often have a lot of leftover T-shirts from residence hall events and career fairs that we give away for free. These MIT-related items demonstrate the importance of a hyperlocal reuse ecosystem. As soon as these types of items leave campus, there is a much lower chance that they will find a new home.”
Hovendon, who has an interest in sustainability and hopes to pursue a career in renewable energy, joined the group after seeing an email sent to DormSpam. “It was a great opportunity to jump into a sustainability leadership role while also helping the MIT community. We aim to offer affordable clothing options, and we get a lot of positive feedback about the thrift popups — I love hearing from students that they got clothing items they now wear frequently from one of our sales,” says Hovendon.
“Any money made at the popups is used to pay the student workers and to rent the U-Haul we use to bring the clothing we store at MIT’s Furniture Exchange warehouse to the Student Center. Our goal is simple: we want to keep clothing out of landfills, which in return helps the planet,” says Dougal.
Studies show that a pair of cotton denim jeans can take up to a year to decompose, while jeans or other items of clothing made with polyester can take 40-200 years to decompose. According to the Environmental Protection Agency, textiles account for about 5 percent of landfill space. Infinite Threads saves clothing items from ending up in landfills.
Hovendon agrees. “We don’t make a lot of money at the sales — it’s not our goal. Our goal is to help the environment. We received some seed funding from the MIT Women's League, the Office of Sustainability, and the MIT Fabric Innovation Hub.”
Infinite Threads also collaborates with the MIT Office of Sustainability (MITOS) to bring awareness to their work.
“Infinite Threads is a fantastic model for how students can directly take action, empower individuals, and leverage the collective community to design out clothing waste and climate impacts through the reuse culture. MIT students, like Cameron and Erin, are well-positioned to tackle sustainability challenges on campus and out in the world as they bring a willingness to solve complex challenges, experiment with many solutions, and grapple with operational realities,” says Brian Goldberg, assistant director of MITOS.
In 2024-25, the club sold over 1,000 clothing items. Any clothing that does not sell at the thrift shop is given to Helpsy, an organization that helps keep clothing out of the trash and landfills while also creating jobs. Dougal and Hovendon say they have diverted about 750 pounds of textiles to Helpsy in 2024-25 alone.
Lauren Higgins, a rising senior majoring in political science who took over managing Infinite Threads from Dougal earlier this year, says, “I originally joined as one of the staff for Infinite Threads, and I love being able to help out with waste reduction and sustainability efforts on campus. It's been great to see our impact, and I hope we're able to continue that this upcoming year.”
Animation technique simulates the motion of squishy objects
Animators could create more realistic bouncy, stretchy, and squishy characters for movies and video games thanks to a new simulation method developed by researchers at MIT.
Their approach allows animators to simulate rubbery and elastic materials in a way that preserves the physical properties of the material and avoids pitfalls like instability.
The technique simulates elastic objects for animation and other applications with improved reliability compared to existing methods, many of which produce elastic animations that become erratic or sluggish, or that break down entirely.
To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.
“The way animations look often depends on how accurately we simulate the physics of the problem,” says Leticia Mattos Da Silva, an MIT graduate student and lead author of a paper on this research. “Our method aims to stay true to physical laws while giving more control and stability to animation artists.”
Beyond 3D animation, the researchers also see potential future uses in the design of real elastic objects, such as flexible shoes, garments, or toys. The method could be extended to help engineers explore how stretchy objects will perform before they are built.
She is joined on the paper by Silvia Sellán, an assistant professor of computer science at Columbia University; Natalia Pacheco-Tallaj, an MIT graduate student; and senior author Justin Solomon, an associate professor in the MIT Department of Electrical Engineering and Computer Science and leader of the Geometric Data Processing Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the SIGGRAPH conference.
Truthful to physics
If you drop a rubber ball on a wooden floor, it bounces back up. Viewers expect to see the same behavior in an animated world, but recreating such dynamics convincingly can be difficult. Many existing techniques simulate elastic objects using fast solvers that trade physical realism for speed, which can result in excessive energy loss or even simulation failure.
More accurate approaches, including a class of techniques called variational integrators, preserve the physical properties of the object, such as its total energy or momentum, and, in this way, mimic real-world behavior more closely. But these methods are often unreliable because they depend on complex equations that are hard to solve efficiently.
The MIT researchers tackled this problem by rewriting the equations of variational integrators to reveal a hidden convex structure. They broke the deformation of elastic materials into a stretch component and a rotation component, and found that the stretch portion forms a convex problem that is well-suited for stable optimization algorithms.
“If you just look at the original formulation, it seems fully non-convex. But because we can rewrite it so that it is convex in at least some of its variables, we can inherit some advantages of convex optimization algorithms,” she says.
These convex optimization algorithms, when applied under the right conditions, come with guarantees of convergence, meaning they are more likely to find the correct answer to the problem. This generates more stable simulations over time, avoiding issues like a bouncing rubber ball losing too much energy or exploding mid-animation.
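The paper’s exact formulation isn’t reproduced in the article, but the stretch-rotation split it describes resembles a polar decomposition of the deformation gradient; the notation below is illustrative, not the authors’:

```latex
% Illustrative notation, not the paper's exact formulation.
% Polar-style split of the deformation gradient F into a rotation R
% and a symmetric stretch S:
F = R\,S, \qquad R \in SO(3), \quad S = S^{\top} \succeq 0
```

If the elastic energy, viewed as a function of the stretch alone, is convex, then holding the rotation fixed turns each solve into a convex subproblem, which is where the convergence guarantees mentioned above come from.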
One of the biggest challenges the researchers faced was reinterpreting the formulation so they could extract that hidden convexity. Earlier work had explored hidden convexity in static problems, but it was not clear whether the structure held for dynamic problems like simulating elastic objects in motion, Mattos Da Silva says.
Stability and efficiency
In experiments, their solver was able to simulate a wide range of elastic behavior, from bouncing shapes to squishy characters, with preservation of important physical properties and stability over long periods of time. Other simulation methods quickly ran into trouble: Some became unstable, causing erratic behavior, while others showed visible damping.
“Because our method demonstrates more stability, it can give animators more reliability and confidence when simulating anything elastic, whether it’s something from the real world or even something completely imaginary,” she says.
While the solver is not as fast as some simulation tools that prioritize speed over accuracy, it avoids many of the trade-offs those methods make. Compared to other physics-based approaches, it also avoids the need for complex, nonlinear solvers that can be sensitive and prone to failure.
In the future, the researchers want to explore techniques to further reduce computational cost. In addition, they want to explore applications of this technique in fabrication and engineering, where reliable simulations of elastic materials could support the design of real-world objects, like garments and toys.
“We were able to revive an old class of integrators in our work. My guess is there are other examples where researchers can revisit a problem to find a hidden convexity structure that could offer a lot of advantages,” she says.
This research is funded, in part, by a MathWorks Engineering Fellowship, the Army Research Office, the National Science Foundation, the CSAIL Future of Data Program, the MIT-IBM Watson AI Laboratory, the Wistron Corporation, and the Toyota-CSAIL Joint Research Center.
Former MIT researchers advance a new model for innovation
Academic research groups and startups are essential drivers of scientific progress. But some projects, like the Hubble Space Telescope or the Human Genome Project, are too big for any one academic lab or loose consortium. They’re also not immediately profitable enough for industry to take on.
That’s the gap researchers at MIT were trying to fill when they created the concept of focused research organizations, or FROs. They describe a FRO as a new type of entity, often philanthropically funded, that undertakes large research efforts using tightly coordinated teams to create a public good that accelerates scientific progress.
The original idea for focused research organizations came out of talks among researchers, most of whom were working to map the brain in MIT Professor Ed Boyden’s lab. After they began publishing their ideas, however, the researchers realized FROs could be a powerful tool to unlock scientific advances across many other applications.
“We were quite pleasantly surprised by the range of fields where we see FRO-shaped problems,” says Adam Marblestone, a former MIT research scientist who co-founded the nonprofit Convergent Research to help launch FROs in 2021. “Convergent has FRO proposals from climate, materials science, chemistry, biology — we even have launched a FRO on software for math. You wouldn’t expect math to be something with a large-scale technological research bottleneck, but it turns out even there, we found a software engineering bottleneck that needed to be solved.”
Marblestone helped formulate the idea for focused research organizations at MIT with a group including Andrew Payne SM ’17, PhD ’21 and Sam Rodriques PhD ’19, who were PhD students in Boyden’s lab at the time. Since then, the FRO concept has caught on. Convergent has helped attract philanthropic funding for FROs working to decode the immune system, identify the unintended targets of approved drugs, and understand the impacts of carbon dioxide removal in our oceans.
In total, Convergent has supported the creation of 10 FROs since its founding in 2021. Many of those groups have already released important tools for better understanding our world — and their leaders believe the best is yet to come.
“We’re starting to see these first open-source tools released in important areas,” Marblestone says. “We’re seeing the first concrete evidence that FROs are effective, because no other entity could have released these tools, and I think 2025 is going to be a significant year in terms of our newer FROs putting out new datasets and tools.”
A new model
Marblestone joined Boyden’s lab in 2014 as a research scientist after completing his PhD at Harvard University. He also took on a new position that Boyden helped create, director of scientific architecting at the MIT Media Lab, through which he tried to organize individual research efforts into larger projects. His own research focused on overcoming the challenges of measuring brain activity across large scales.
Marblestone discussed this and other large-scale neuroscience problems with Payne and Rodriques, and the researchers began thinking about gaps in scientific funding more broadly.
“The combination of myself, Sam, Andrew, Ed, and others’ experiences trying to start various large brain-mapping projects convinced us of the gap in support for medium-sized science and engineering teams with startup-inspired structures, built for the nonprofit purpose of building scientific infrastructure,” Marblestone says.
Through MIT, the researchers also connected with Tom Kalil, who was at the time working as the U.S. deputy director for technology and innovation. Rodriques wrote about the concept of a focused research organization as the last chapter of his PhD thesis in 2019.
“Ed always encouraged us to dream very, very big,” Rodriques says. “We were always trying to think about the hardest problems in biology and how to tackle them. My thesis basically ended with me explaining why we needed a new structure that is like a company, but nonprofit and dedicated to science.”
As part of a fellowship with the Federation of American Scientists in 2020, and working with Kalil, Marblestone interviewed scientists in dozens of fields outside of neuroscience and learned that the funding gap existed across disciplines.
When Rodriques and Marblestone published an essay about their findings, it helped attract philanthropic funding, which Marblestone, Kalil, and co-founder Anastasia Gamick used to launch Convergent Research, a nonprofit science studio for launching FROs.
“I see Ed’s lab as a melting pot where myself, Ed, Sam, and others worked on articulating a need and identifying specific projects that might make sense as FROs,” Marblestone says. “All those ideas later got crystallized when we created Convergent Research.”
In 2021, Convergent helped launch the first FROs: E11 Bio, which is led by Payne and committed to developing tools to understand how the brain is wired, and Cultivarium, a FRO making microorganisms more accessible for work in synthetic biology.
“From our brain mapping work we started asking the question, ‘Are there other projects that look like this that aren’t getting funded?’” Payne says. “We realized there was a gap in the research ecosystem, where some of these interdisciplinary, team science projects were being systematically overlooked. We knew a lot of amazing things would come out of getting those projects funded.”
Tools to advance science
Early progress from the first focused research organizations has strengthened Marblestone’s conviction that they’re filling a gap.
[C]Worthy is the FRO building tools to ensure safe, ocean-based carbon dioxide removal. It recently released an interactive map of alkaline activity to improve our understanding of a carbon-sequestration method known as ocean alkalinity enhancement. Last year, a math FRO, Lean, released a programming language and proof assistant that Google’s DeepMind AI lab used to solve problems in the International Mathematical Olympiad, reaching silver-medalist level in the competition for the first time. The synthetic biology FRO Cultivarium, in turn, has already released software that can predict growth conditions for microbes based on their genome.
Last year, E11 Bio previewed a new method for mapping the brain called PRISM, which it has used to map out a portion of the mouse hippocampus. It will be making the data and mapping tool available to all researchers in coming months.
“A lot of this early work has proven you can put a really talented team together and move fast to go from zero to one,” Payne says. “The next phase is proving FROs can continue to build on that momentum and develop even more datasets and tools, establish even bigger collaborations, and scale their impact.”
Payne credits Boyden for fostering an ecosystem where researchers could think about problems beyond their narrow area of study.
“Ed’s lab was a really intellectually stimulating, collaborative environment,” Payne says. “He trains his students to think about impact first and work backward. It was a bunch of people thinking about how they were going to change the world, and that made it a particularly good place to develop the FRO idea.”
Marblestone says supporting FROs has been the highest-impact thing he’s been able to do in his career. Still, he believes the success of FROs should be judged over closer to 10-year periods and will depend on not just the tools they produce but also whether they spin out companies, partner with other institutes, and create larger, long-lasting initiatives to deploy what they built.
“We were initially worried people wouldn’t be willing to join these organizations because it doesn’t offer tenure and it doesn’t offer equity in a startup,” Marblestone says. “But we’ve been able to recruit excellent leaders, scientists, engineers, and others to create highly motivated teams. That’s good evidence this is working. As we get strong projects and good results, I hope it will create this flywheel where it becomes easier to fund these ideas, more scientists will come up with them, and I think we’re starting to get there.”
Scene at MIT: Reflecting on a shared journey toward MIT PhDs
“My wife, Erin Tevonian, and I both graduated last week with our PhDs in biological engineering, a program we started together when we arrived at MIT in fall 2019. At the time, we had already been dating for three years, having met as classmates in the bioengineering program at the University of Illinois at Urbana-Champaign in 2015. We went through college together — taking classes, vacationing with friends, and biking cross-country, all side-by-side — and so we were lucky to be able to continue doing so by coming to Course 20 at MIT together. It was during our graduate studies at MIT that we got engaged (spring 2022) and married (last September), a milestone that we were able to celebrate with the many wonderful friends we found at MIT.
First-year students in the MIT Biological Engineering PhD program rotate through labs of interest before picking where they will complete their doctorates, and so we found our way to research groups by January 2020, just before the Covid-19 pandemic disrupted on-campus research and forced social distancing. Erin completed her PhD in Doug Lauffenburger and Linda Griffith’s labs, during which she used computational and experimental models to study human insulin resistance and built better liver tissue models for recapitulating disease pathology. I completed my PhD in Anders Hansen’s lab and studied how DNA folds in 3D space to drive gene regulation by building and applying a new method for mapping DNA architecture at finer resolutions than previously possible. The years flew by as we dove into our research projects, and we defended our PhDs a week apart back in April.
Erin and I were standing at Commencement with the Class of 2025 at the moment this photo was snapped, smiling as we listened to MIT’s school song. Graduation is a bittersweet milestone because it represents the end of what has been an incredible adventure for us, an adventure that made campus feel like home, so I must admit that I wasn’t sure how I would feel going into graduation week. This moment, though, felt like a fitting close for our time at MIT, and I was filled with gratitude for the many memories, opportunities, and adventures I got to share with Erin over the course of grad school. I also graduated from the MIT Sloan School of Management/School of Engineering’s Leaders for Global Operations program (hence the stole), so I was also reflecting on the many folks I’ve met across campus that make MIT the wonderful place that it is, and how special it is to be a part of a community that makes it so hard to say goodbye.”
—Viraat Goel MBA ’25, PhD ’25
Have a creative photo of campus life you'd like to share? Submit it to Scene at MIT.
New system enables robots to solve manipulation problems in seconds
Ready for that long-awaited summer vacation? First, you’ll need to pack all items required for your trip into a suitcase, making sure everything fits securely without crushing anything fragile.
Because humans possess strong visual and geometric reasoning skills, this is usually a straightforward problem, even if it may take a bit of finagling to squeeze everything in.
To a robot, though, it is an extremely complex planning challenge that requires thinking simultaneously about many actions, constraints, and mechanical capabilities. Finding an effective solution could take the robot a very long time — if it can even come up with one.
Researchers from MIT and NVIDIA Research have developed a novel algorithm that dramatically speeds up the robot’s planning process. Their approach enables a robot to “think ahead” by evaluating thousands of possible solutions in parallel and then refining the best ones to meet the constraints of the robot and its environment.
Instead of testing each potential action one at a time, like many existing approaches, this new method considers thousands of actions simultaneously, solving multistep manipulation problems in a matter of seconds.
The researchers harness the massive computational power of specialized processors called graphics processing units (GPUs) to enable this speedup.
In a factory or warehouse, their technique could enable robots to rapidly determine how to manipulate and tightly pack items that have different shapes and sizes without damaging them, knocking anything over, or colliding with obstacles, even in a narrow space.
“This would be very helpful in industrial settings where time really does matter and you need to find an effective solution as fast as possible. If your algorithm takes minutes to find a plan, as opposed to seconds, that costs the business money,” says MIT graduate student William Shen SM ’23, lead author of the paper on this technique.
He is joined on the paper by Caelan Garrett ’15, MEng ’15, PhD ’21, a senior research scientist at NVIDIA Research; Nishanth Kumar, an MIT graduate student; Ankit Goyal, a NVIDIA research scientist; Tucker Hermans, a NVIDIA research scientist and associate professor at the University of Utah; Leslie Pack Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Tomás Lozano-Pérez, an MIT professor of computer science and engineering and a member of CSAIL; and Fabio Ramos, principal research scientist at NVIDIA and a professor at the University of Sydney. The research will be presented at the Robotics: Science and Systems Conference.
Planning in parallel
The researchers’ algorithm is designed for what is called task and motion planning (TAMP). The goal of a TAMP algorithm is to come up with a task plan for a robot, which is a high-level sequence of actions, along with a motion plan, which includes low-level action parameters, like joint positions and gripper orientation, that complete that high-level plan.
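In code, a TAMP solution couples those two levels. A minimal sketch of the data structure follows; the names are illustrative, not from the paper.

```python
# Illustrative sketch of the two levels a TAMP plan combines;
# names are not from the paper.
from dataclasses import dataclass


@dataclass
class MotionStep:
    """Low-level parameters realizing one high-level action."""
    joint_positions: list[float]
    gripper_orientation: list[float]


@dataclass
class TampPlan:
    """High-level action sequence plus one motion parameterization each."""
    actions: list[str]         # e.g. ["pick", "move", "place"]
    motions: list[MotionStep]  # len(motions) == len(actions)
```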
To create a plan for packing items in a box, a robot needs to reason about many variables, such as the final orientation of packed objects so they fit together, as well as how it is going to pick them up and manipulate them using its arm and gripper.
It must do this while determining how to avoid collisions and achieve any user-specified constraints, such as a certain order in which to pack items.
With so many potential sequences of actions, sampling possible solutions at random and trying one at a time could take an extremely long time.
“It is a very large search space, and a lot of actions the robot does in that space don’t actually achieve anything productive,” Garrett adds.
Instead, the researchers’ algorithm, called cuTAMP, which is accelerated using a parallel computing platform called CUDA, simulates and refines thousands of solutions in parallel. It does this by combining two techniques, sampling and optimization.
Sampling involves choosing a solution to try. But rather than sampling solutions randomly, cuTAMP limits the range of potential solutions to those most likely to satisfy the problem’s constraints. This modified sampling procedure allows cuTAMP to broadly explore potential solutions while narrowing down the sampling space.
“Once we combine the outputs of these samples, we get a much better starting point than if we sampled randomly. This ensures we can find solutions more quickly during optimization,” Shen says.
Once cuTAMP has generated that set of samples, it performs a parallelized optimization procedure that computes a cost, which corresponds to how well each sample avoids collisions and satisfies the motion constraints of the robot, as well as any user-defined objectives.
It updates the samples in parallel, chooses the best candidates, and repeats the process until it narrows them down to a successful solution.
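cuTAMP itself runs on CUDA and is considerably more sophisticated, but the sample-then-optimize loop described above can be sketched with vectorized NumPy standing in for GPU parallelism. The cost function and every parameter below are illustrative assumptions, not the paper’s.

```python
# Sketch of parallel sample-then-optimize; NumPy stands in for a GPU,
# and the cost function is an illustrative placeholder.
import numpy as np

def cost(params: np.ndarray) -> np.ndarray:
    """Placeholder cost: distance from a feasible target configuration.
    A real TAMP cost would score collisions and constraint violations."""
    return np.sum((params - 1.0) ** 2, axis=1)

rng = np.random.default_rng(0)
# Sample thousands of candidate action parameterizations at once, drawn
# from a region chosen to be likely to satisfy the constraints.
samples = rng.normal(loc=0.8, scale=0.3, size=(4096, 12))

step, eps = 0.1, 1e-4
for _ in range(50):  # refine all candidates in parallel
    grad = np.zeros_like(samples)
    for j in range(samples.shape[1]):  # finite-difference gradient per dim
        bump = np.zeros(samples.shape[1])
        bump[j] = eps
        grad[:, j] = (cost(samples + bump) - cost(samples - bump)) / (2 * eps)
    samples -= step * grad

best = samples[np.argmin(cost(samples))]  # lowest-cost candidate wins
```

On a GPU, every row of `samples` is refined simultaneously, which is why evaluating thousands of candidates costs little more than evaluating one.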
Harnessing accelerated computing
The researchers leverage GPUs, specialized processors that are far more powerful for parallel computation and workloads than general-purpose CPUs, to scale up the number of solutions they can sample and optimize simultaneously, maximizing the performance of their algorithm.
“Using GPUs, the computational cost of optimizing one solution is the same as optimizing hundreds or thousands of solutions,” Shen explains.
When they tested their approach on Tetris-like packing challenges in simulation, cuTAMP took only a few seconds to find successful, collision-free plans that might take sequential planning approaches much longer to solve.
And when deployed on a real robotic arm, the algorithm always found a solution in under 30 seconds.
The system works across robots and has been tested on a robotic arm at MIT and a humanoid robot at NVIDIA. Since cuTAMP is not a machine-learning algorithm, it requires no training data, which could enable it to be readily deployed in many situations.
“You can give it a brand-new problem and it will provably solve it,” Garrett says.
The algorithm is generalizable to situations beyond packing, like a robot using tools. A user could incorporate different skill types into the system to expand a robot’s capabilities automatically.
In the future, the researchers want to leverage large language models and vision language models within cuTAMP, enabling a robot to formulate and execute a plan that achieves specific objectives based on voice commands from a user.
This work is supported, in part, by the National Science Foundation (NSF), Air Force Office for Scientific Research, Office of Naval Research, MIT Quest for Intelligence, NVIDIA, and the Robotics and Artificial Intelligence Institute.
Guardian Ag’s crop-spraying drone is replacing dangerous pilot missions
Every year during the growing season, thousands of pilots across the country climb into small planes loaded with hundreds of pounds of pesticides and fly extremely close to the ground at upward of 140 miles an hour, unloading their cargo onto rows of corn, cotton, and soybeans.
The world of agricultural aviation is as dangerous as it is vital to America’s farms. Unfortunately, fatal crashes are common. Now Guardian Ag, founded by former MIT Electronics Research Society (MITERS) makers Adam Bercu and Charles Guan ’11, is offering an alternative in the form of a large, purpose-built drone that can autonomously deliver 200-pound payloads across farms. The company’s drones feature an 18-foot spray radius, 80-inch rotors, a custom battery pack, and aerospace-grade materials designed to make crop spraying safer, more efficient, and less expensive for farmers.
“We’re trying to bring technology to American farms that are hundreds or thousands of acres, where you’re not replacing a human with a hand pump — you’re replacing a John Deere tractor or a helicopter or an airplane,” Bercu says.
“With Guardian, the operator shows up about 30 minutes before they want to spray, they mix the product, path plan the field in our app, and it gives an estimate for how long the job will take,” he says. “With our fast charging, you recharge the aircraft while you fill the tank, and those two operations take about the same amount of time.”
From Battlebots to farmlands
At a young age, Bercu became obsessed with building robots. Growing up in south Florida, he’d attend robotic competitions, build prototypes, and even dumpster dive for particularly hard-to-find components. At one competition, Bercu met Charles Guan, who would go on to major in mechanical engineering at MIT, and the two robot enthusiasts became lifelong friends.
“When Charles came to MIT, he basically convinced me to move to Cambridge,” Bercu says. “He said, ‘You need to come up here. I found more people like us. Hackers!’”
Bercu visited Cambridge, Massachusetts, and indeed fell in love with the region’s makerspaces and hacker culture. He moved soon after, and he and Guan began spending free time at spaces including the Artisans Asylum makerspace in Somerville, Massachusetts; MIT’s International Design Center; and the MIT Electronics Research Society (MITERS) makerspace. Guan held several leadership positions at MITERS, including facilities manager, treasurer, and president.
“MIT offered enormous latitude to its students to be independent and creative, which was reflected in the degree of autonomy they permit student-run organizations like MITERS to have compared to other top-tier schools,” Guan says. “It was a key selling point to me when I was touring mechanical engineering labs as a junior in high school. I was well-known in the department circle for being at MITERS all the time, possibly spending even more time there than on classes.”
After Guan graduated, he and Bercu started a hardware consulting business and competed in the robot combat show Battlebots. Guan also began working as a design instructor in MIT’s Department of Mechanical Engineering, where he taught a section of Course 2.007 that tasked students with building go-karts.
Eventually, Guan and Bercu decided to use their experience to start a drone company.
“Over the course of Battlebots and building go-karts, we knew electric batteries were getting really cheap and electric vehicle supply chains were established,” Bercu explains. “People were raising money to build eVTOL [electric vertical take-off and landing] vehicles to transport people, but we knew diesel fuel still outperformed batteries over long distances. Where electric systems did outperform combustion engines was in areas where you needed peak power for short periods of time. Basically, batteries are awesome when you have a short mission.”
That idea made the founders think crop spraying could be a good early application. Bercu’s family runs an aviation business, and he knew pilots who would spray crops as their second jobs.
“It’s one of those high-paying but very dangerous jobs,” Bercu says. “Even in the U.S., we lose between 1 and 2 percent of all agriculture pilots each year to fatal accidents. These people are rolling the dice every time they do this: You’re flying 6 feet off the ground at 140 miles an hour with 800 gallons of pesticide in your tank.”
After cobbling together spare parts from Battlebots and their consulting business, the founders built a 600-pound drone. When they finally got it to fly, they decided the time was right to launch their company, receiving crucial early guidance and their first funding from the MIT-affiliated investment firm the E14 Fund.
The founders spent the next year interviewing crop dusters and farmers. They also started engaging with the Federal Aviation Administration.
“There was no category for anything like this,” Bercu explains. “With the FAA, we not only got through the approval process, we helped them build the process as we went through it, because we wanted to establish some common-sense standards.”
Guardian custom-built its batteries to optimize throughput and utilization rate of its drones. Depending on the farm, Bercu says his machines can unload about 1.5 to 2 tons of payload per hour.
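Taking those figures at face value, and assuming US short tons and full 200-pound loads, the implied sortie rate works out to:

```latex
% Back-of-envelope check; assumes US short tons (2{,}000 lb) and full loads.
\frac{(1.5 \text{ to } 2)\ \text{tons/hr} \times 2000\ \text{lb/ton}}{200\ \text{lb/flight}}
\;\approx\; 15 \text{ to } 20\ \text{flights per hour}
```

That is roughly one loaded sortie every three to four minutes, consistent with Bercu’s point that recharging and refilling take about the same amount of time.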
Guardian’s drones can also spray more precisely than planes, reducing the environmental impact of pesticides, which often pollute the landscapes and waterways surrounding farms.
“This thing has the precision to spray the ‘Mona Lisa’ on 20 acres, but we’re not leveraging that functionality today,” Bercu says. “For the operator we want to make it very easy. The goal is to take someone who sprays with a tractor and teach them to spray with a drone in less than a week.”
Scaling for farmers
To date, Guardian Ag has built eight of its aircraft, which are actively delivering payloads over California farms in trials with paying customers. The company is currently ramping up manufacturing in its 60,000-square-foot facility in Massachusetts, and Bercu says Guardian has a backlog of hundreds of millions of dollars’ worth of drones.
“Grower demand has been exceptional,” Bercu says. “We don’t need to educate them on the need for this. They see the big drone with the big tank and they’re in.”
Bercu envisions Guardian’s drones helping with a number of other tasks, such as ship-to-ship logistics, delivering supplies to offshore oil rigs, and supporting mining and other operations where helicopters and small aircraft currently fly through difficult terrain. But for now, the company is starting with agriculture.
“Agriculture is such an important and foundational aspect of our country,” says Guardian Ag chief operating officer Ashley Ferguson MBA ’19. “We work with multigenerational farming families, and when we talk to them, it’s clear aerial spray has taken hold in the industry. But there’s a large shortage of pilots, especially for agriculture applications. So, it’s clear there’s a big opportunity.”
Seven years since founding Guardian, Bercu remains grateful that MIT’s community opened its doors for him when he moved to Cambridge.
“Without the MIT community, this company wouldn’t be possible,” Bercu says. “I was never able to go to college, but I’d love to one day apply to MIT and do my engineering undergrad or go to the Sloan School of Management. I’ll never forget MIT’s openness to me. It’s a place I hold near and dear to my heart.”
Physicists observe a new form of magnetism for the first time
MIT physicists have demonstrated a new form of magnetism that could one day be harnessed to build faster, denser, and less power-hungry “spintronic” memory chips.
The new magnetic state is a mash-up of two main forms of magnetism: the ferromagnetism of everyday fridge magnets and compass needles, and antiferromagnetism, in which materials have magnetic properties at the microscale yet are not macroscopically magnetized.
The team has termed this new state “p-wave magnetism.”
Physicists have long observed that electrons of atoms in regular ferromagnets share the same orientation of “spin,” like so many tiny compasses pointing in the same direction. This spin alignment generates a magnetic field, which gives a ferromagnet its inherent magnetism. Electrons belonging to magnetic atoms in an antiferromagnet also have spin, although these spins alternate, with electrons orbiting neighboring atoms aligning their spins antiparallel to each other. Taken together, the equal and opposite spins cancel out, and the antiferromagnet does not exhibit macroscopic magnetization.
The team discovered the new p-wave magnetism in nickel iodide (NiI2), a two-dimensional crystalline material that they synthesized in the lab. Like a ferromagnet, the electrons exhibit a preferred spin orientation, and, like an antiferromagnet, equal populations of opposite spins result in a net cancellation. However, the spins on the nickel atoms exhibit a unique pattern, forming spiral-like configurations within the material that are mirror-images of each other, much like the left hand is the right hand’s mirror image.
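The Nature paper gives the formal treatment, but a coplanar spin spiral of the kind described can be written schematically as follows; the notation is illustrative, not the authors’:

```latex
% Illustrative notation, not the authors'. Spin on nickel site j at
% position r_j, with wave vector q setting the spiral period:
\mathbf{S}_j = S\left(\cos(\mathbf{q}\cdot\mathbf{r}_j),\; \pm\sin(\mathbf{q}\cdot\mathbf{r}_j),\; 0\right)
```

The sign choice fixes the handedness, giving the two mirror-image spirals; over a full period the spins cancel, which is why there is no net magnetization despite the ordered pattern.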
What’s more, the researchers found this spiral spin configuration enabled them to carry out “spin switching”: Depending on the direction of spiraling spins in the material, they could apply a small electric field in a related direction to easily flip a left-handed spiral of spins into a right-handed spiral of spins, and vice-versa.
The ability to switch electron spins is at the heart of “spintronics,” which is a proposed alternative to conventional electronics. With this approach, data can be written in the form of an electron’s spin, rather than its electronic charge, potentially allowing orders of magnitude more data to be packed onto a device while using far less power to write and read that data.
“We showed that this new form of magnetism can be manipulated electrically,” says Qian Song, a research scientist in MIT’s Materials Research Laboratory. “This breakthrough paves the way for a new class of ultrafast, compact, energy-efficient, and nonvolatile magnetic memory devices.”
Song and his colleagues published their results May 28 in the journal Nature. MIT co-authors include Connor Occhialini, Batyr Ilyas, Emre Ergeçen, Nuh Gedik, and Riccardo Comin, along with Rafael Fernandes at the University of Illinois Urbana-Champaign, and collaborators from multiple other institutions.
Connecting the dots
The discovery expands on work by Comin’s group in 2022. At that time, the team probed the magnetic properties of the same material, nickel iodide. At the microscopic level, nickel iodide resembles a triangular lattice of nickel and iodine atoms. Nickel is the material’s main magnetic ingredient, as the electrons on the nickel atoms exhibit spin, while those on iodine atoms do not.
In those experiments, the team observed that the spins of those nickel atoms were arranged in a spiral pattern throughout the material’s lattice, and that this pattern could spiral in two different orientations.
At the time, Comin had no idea that this unique pattern of atomic spins could enable precise switching of spins in surrounding electrons. This possibility was later raised by collaborator Rafael Fernandes, who along with other theorists was intrigued by a recently proposed idea for a new, unconventional, “p-wave” magnet, in which electrons moving along opposite directions in the material would have their spins aligned in opposite directions.
Fernandes and his colleagues recognized that if the spins of atoms in a material form the geometric spiral arrangement that Comin observed in nickel iodide, that would be a realization of a “p-wave” magnet. Then, when an electric field is applied to switch the “handedness” of the spiral, it should also switch the spin alignment of the electrons traveling along the same direction.
In other words, such a p-wave magnet could enable simple and controllable switching of electron spins, in a way that could be harnessed for spintronic applications.
“It was a completely new idea at the time, and we decided to test it experimentally because we realized nickel iodide was a good candidate to show this kind of p-wave magnet effect,” Comin says.
Spin current
For their new study, the team synthesized single-crystal flakes of nickel iodide by first depositing powders of the respective elements on a crystalline substrate, which they placed in a high-temperature furnace. The process causes the elements to settle into layers, each arranged microscopically in a triangular lattice of nickel and iodine atoms.
“What comes out of the oven are samples that are several millimeters wide and thin, like cracker bread,” Comin says. “We then exfoliate the material, peeling off even smaller flakes, each several microns wide, and a few tens of nanometers thin.”
The researchers wanted to know whether, indeed, the spiral geometry of the nickel atoms’ spins would force electrons traveling in opposite directions to have opposite spins, as Fernandes expected a p-wave magnet should. To observe this, the group applied to each flake a beam of circularly polarized light — light that produces an electric field rotating in a particular direction, for instance, either clockwise or counterclockwise.
They reasoned that if traveling electrons interacting with the spin spirals have spins aligned in the same direction, then incoming light polarized in that same direction should resonate and produce a characteristic signal. Such a signal would confirm that the traveling electrons’ spins align because of the spiral configuration and, furthermore, that the material does in fact exhibit p-wave magnetism.
And indeed, that’s what the group found. In experiments with multiple nickel iodide flakes, the researchers directly observed that the direction of the electrons’ spins was correlated with the handedness of the light used to excite those electrons. This is a telltale signature of p-wave magnetism, observed here for the first time.
Going a step further, they looked to see whether they could switch the spins of the electrons by applying an electric field, or a small amount of voltage, along different directions through the material. They found that when the direction of the electric field was in line with the direction of the spin spiral, it switched the electrons along that route to spin in the same direction, producing a current of like-spinning electrons.
“With such a current of spin, you can do interesting things at the device level, for instance, you could flip magnetic domains that can be used for control of a magnetic bit,” Comin explains. “These spintronic effects are more efficient than conventional electronics because you’re just moving spins around, rather than moving charges. That means you’re not subject to any dissipation effects that generate heat, which is essentially the reason computers heat up.”
“We just need a small electric field to control this magnetic switching,” Song adds. “P-wave magnets could save five orders of magnitude of energy, which is huge.”
“We are excited to see these cutting-edge experiments confirm our prediction of p-wave spin polarized states,” says Libor Šmejkal, head of the Max Planck Research Group in Dresden, Germany, who is one of the authors of the theoretical work that proposed the concept of p-wave magnetism but was not involved in the new paper. “The demonstration of electrically switchable p-wave spin polarization also highlights the promising applications of unconventional magnetic states.”
The team observed p-wave magnetism in nickel iodide flakes only at ultracold temperatures of about 60 kelvins.
“That’s below liquid nitrogen, which is not necessarily practical for applications,” Comin says. “But now that we’ve realized this new state of magnetism, the next frontier is finding a material with these properties, at room temperature. Then we can apply this to a spintronic device.”
This research was supported, in part, by the National Science Foundation, the Department of Energy, and the Air Force Office of Scientific Research.
Day of Climate inspires young learners to take action
“Close your eyes and imagine we are on the same team. Same arena. Same jersey. And the game is on the line,” Jaylen Brown, the 2024 NBA Finals MVP for the Boston Celtics, said to a packed room of about 200 people at the recent Day of Climate event at the MIT Museum.
“Now think about this: We aren’t playing for ourselves; we are playing for the next generation,” Brown added, encouraging attendees to take climate action.
The inaugural Day of Climate event brought together local learners, educators, community leaders, and the MIT community. Featuring project showcases, panels, and a speaker series, the event sparked hands-on learning and inspired climate action across all ages.
The event marked the celebration of the first year of a larger initiative by the same name. Led by the pK-12 team at MIT Open Learning, Day of Climate has brought together learners and educators by offering free, hands-on curriculum lessons and activities designed to introduce learners to climate change, teach how it shapes their lives, and consider its effects on humanity.
Cynthia Breazeal, dean of digital learning at MIT Open Learning, notes the breadth of engagement across MIT that made the event, and the larger initiative, possible with contributions from more than 10 different MIT departments, labs, centers, and initiatives.
“MIT is passionate about K-12 education,” she says. “It was truly inspiring to witness how our entire community came together to demonstrate the power of collaboration and advocacy in driving meaningful change.”
From education to action
The event kicked off with a showcase, where the Day of Climate grantees and learners invited attendees to learn about their projects and meaningfully engage with lessons and activities. Aranya Karighattam, a local high school senior, adapted the curriculum Urban Heat Islands — developed by Lelia Hampton, a PhD student in electrical engineering and computer science at MIT, and Chris Rabe, program director at the MIT Environmental Solutions Initiative — sharing how this phenomenon affects the Boston metropolitan area.
Karighattam discussed what could be done to shield local communities from urban heat islands. They suggested doubling the tree cover in areas in the lowest quartile of tree coverage as one mitigation strategy, but noted that even small steps, like planting a garden and raising awareness of the issue, can help.
Day of Climate echoed a consistent call to action, urging attendees to meaningfully engage in both education and action. Brown, who is an MIT Media Lab Director’s Fellow, spoke about how education and collective action will pave the way to tackle big societal challenges. “We need to invest in sustainability communities,” he said. “We need to invest in clean technology, and we need to invest in education that fosters environmental stewardship.”
Part of MIT’s broader sustainability efforts, including The Climate Project, the event reflected a commitment to building a resilient and sustainable future for all. Influenced by Climate Action Through Education (CATE), Day of Climate panelist Sophie Shen shared how climate education has shaped her civic life. “Learning about climate change has inspired me to take action on a wider systemic level,” she said.
Shen, a senior at Arlington High School and a local elected official, emphasized how engagement and action look different for everyone. “There are so many ways to get involved,” she said. “That could be starting a community garden — those can be great community hubs and learning spaces — or it could include advocating to your local or state governments.”
Becoming a catalyst for change
The larger Day of Climate initiative encourages young people to understand the interdisciplinary nature of climate change and consider how the changing climate impacts many aspects of life. With curriculum available for learners from ages 4 to 18, these free activities range from Climate Change Charades — where learners act out words like “deforestation” and “recycling” — to Climate Change Happens Below Water, where learners use sensors to analyze water quality data like pH and solubility.
Many of the speakers at the event shared personal anecdotes from their childhood about how climate education, both in and out of the classroom, has changed the trajectory of their lives. Addaline Jorroff, deputy climate chief and director of mitigation and community resilience in the Office of Climate Resilience and Innovation for the Commonwealth of Massachusetts, explained how resources from MIT were instrumental in her education as a middle and high schooler, while Jaylen Brown described how his grandmother helped him see the importance of taking care of the planet when he was young, through recycling and picking up trash together.
Claudia Urrea, director of the pK-12 team at Open Learning and director of Day of Climate, emphasizes that providing opportunities at schools — through new curricula, classroom resources, and mentorship — is crucial, but that other educational opportunities matter as well: in particular, opportunities that support learners in becoming strong leaders.
“I strongly believe that this event not only inspired young learners to take meaningful action, both large and small, towards a better future, but also motivated all the stakeholders to continue to create opportunities for these young learners to emerge as future leaders,” Urrea says.
The team plans to hold the Day of Climate event annually, bringing together young people, educators, and the MIT community. Urrea hopes the event will act as a catalyst for change — for everyone.
“We hope Day of Climate serves as the opportunity for everyone to recognize the interconnectedness of our actions,” Urrea says. “Understanding this larger system is crucial for addressing current and future challenges, ultimately making the world a better place for all.”
The Day of Climate event was hosted by the Day of Climate team in collaboration with MIT Climate Action Through Education (CATE) and Earth Day Boston.
Highlights from MIT’s first-ever Artfinity festival
When people think of MIT, they may first think of code, circuits, and cutting-edge science. But the school has a rich history of interweaving art, science, and technology in unexpected and innovative ways — and that’s never been more clear than with the Institute’s latest festival, Artfinity: A Celebration of Creativity and Community at MIT.
After an open-call invitation to the MIT community in early 2024, the inaugural Artfinity delivered an extended multi-week exploration of art and ideas, with more than 80 free performing and visual arts events between Feb. 15 and May 2, including a two-day film festival, interactive augmented reality art installations, an evening at the MIT Museum, a simulated lunar landing, and concerts by both student groups and internationally renowned musicians.
“Artfinity was a fantastic celebration of MIT’s creative excellence, offering so many different ways to explore our thriving arts culture,” says MIT president Sally Kornbluth. “It was wonderful to see people from our community getting together with family, friends, and neighbors from Cambridge and Boston to experience the joy of music and the arts.”
Among the highlights were a talk by Tony-winning scenic designer Es Devlin, a concert by Grammy-winning rapper and visiting scholar Lupe Fiasco, and a series of events commemorating the opening of the Edward and Joyce Linde Music Building.
Devlin shared art tied to her recent spring residency at MIT as the latest honoree of the Eugene McDermott Award in the Arts. Working with MIT faculty, students, and staff, she inspired a site-specific installation called “Face to Face,” in which more than 100 community members were paired with strangers to draw each other. In recent years, Devlin has focused her work on fostering interpersonal connection, as in her London multimedia exhibition “Congregation,” in which she drew 50 people displaced from their homelands and documented their stories on video.
Fiasco’s May 2 performance centered on a new project inspired by MIT’s public art collection, developed this year in collaboration with students and faculty as part of his work as a visiting scholar teaching the class “Rap Theory and Practice.” With the backing of MIT’s Festival Jazz Ensemble, Fiasco presented original compositions based on famed campus sculptures such as Alexander Calder’s “La Grande Voile (The Big Sail)” and Jaume Plensa’s “Alchemist,” with members of the MIT Rap Ensemble also jumping on board for many of the pieces. Several students in the ensemble also spearheaded complex multi-instrument arrangements of some of Fiasco’s most popular songs, including “The Show Goes On” and “Kick, Push.”
Artfinity’s programming also encompassed an eclectic mix of concerts commemorating the new Linde Music Building, which features the 390-seat Tull Hall, rehearsal rooms, a recording studio, and a research lab to support a new music technology graduate program launching this fall. Events included performances by multiple student ensembles, the Boston Symphony Chamber Players, the Boston Chamber Music Society, Sanford Biggers’ group Moonmedicin, and Grammy-winning jazz saxophonist Miguel Zenón, an assistant professor of music at MIT.
“Across campus, from our new concert hall to the Great Dome, in gallery spaces and in classrooms, our community was inspired by the visual and performing arts of the Artfinity festival,” says MIT provost Cynthia Barnhart. “Artfinity has been an incredible celebration and display of the collective creativity and innovative spirit of our community of students, faculty, and staff.”
A handful of other Artfinity pieces also made use of MIT’s iconic architecture, including Creative Lumens and Media Lab professor Behnaz Farahi’s “Gaze to the Stars.” Taking place March 12–14 and coinciding with the total lunar eclipse, Farahi’s large-scale video projections illuminated a wide range of campus buildings, transforming the exteriors of the new Linde Music Building, the MIT Chapel, the Stratton Student Center, the Zesiger Sports & Fitness Center, and even the Great Dome, onto which Farahi’s team projected images of eyes from the MIT community.
Other popular events included the MIT Museum’s After Dark series and its Argus Installation, which examined the interplay of light and hand-blown glass. A two-day film festival at Bartos Theatre featured works by students, staff, and faculty, ranging from shorts to 30-minute productions and spanning fiction, nonfiction, animation, and experimental pieces. The Welcome Center also hosted “All Our Relations,” a multimedia celebration of MIT’s Indigenous community through song, dance, and story.
An Institute event, Artfinity was organized by the Office of the Arts, and led by professor of art, culture, and technology Azra Akšamija and Institute Professor of Music Marcus A. Thompson. Both professors spoke about the importance of spotlighting the arts and demonstrating a diverse breadth and depth of programming for future iterations of the event.
“People think of MIT as a place you go to only for technology. But, in reality, MIT has always attracted students with broad interests and required them to explore balance in their programs with substantive world-class offerings in the humanities, social sciences, and visual and performing arts,” says Thompson. “We are hoping this festival, Artfinity, will showcase the infinite variety and quality we have been offering and actually doing in the arts for quite some time.”
Professor of music and theater arts Jay Scheib sees the mix of art and technology as a way for students to approach research challenges from new angles. “In the arts, we tend to look at problems in a different way … framed by ideas of aesthetics, civic discourse, and experience,” says Scheib. “This approach can help students in physics, aerospace design, or artificial intelligence to ask different, yet equally useful, questions.”
The largest arts festival at MIT since the Institute’s 150th anniversary in 2011, Artfinity serves as both a student spotlight and an opportunity to interact with, and meaningfully give back to, MIT’s surrounding community in Cambridge and greater Boston, says Akšamija, who is director of MIT’s Art, Culture, and Technology (ACT) program.
“What became evident during the planning of this festival was the quantity and quality of art here at MIT, and how much of that work is cutting-edge,” says Akšamija. “We wanted to celebrate the creativity and joyfulness of the brilliant minds on campus [and] to bring joy and beauty to MIT and the surrounding community.”
Women’s track and field wins first NCAA Division III Outdoor National Championship
With a dramatic victory in the 4x400m relay, the MIT women's track and field team clinched the 2025 NCAA Division III Outdoor Track and Field National Championship on May 24 at the SPIRE Institute's outdoor track and field facility. The title was MIT's first NCAA women's outdoor track and field national championship. MIT finished first among 79 scoring teams with 56 points; Washington University was second with 47 points, and the University of Wisconsin-La Crosse was third with 38 points.
With the victory, MIT completed a sweep of the 2024-25 NCAA Division III women's cross country, indoor track and field, and outdoor track and field titles — becoming the first women's program to sweep all three in the same year.
MIT earned 20 All-America honors across three days, including the program's first relay national championship in the 4x400m on Saturday and Alexis Boykin's eighth career national title with an NCAA record-breaking performance in the shot put on Friday.
On Thursday, Boykin opened the championships with a third-place performance in the discus as MIT quickly moved to the top of the team leaderboard on the first day of competition. Boykin and classmate Emily Ball each earned a spot on the podium. Boykin was third with a throw of 45.12m (148' 0") on her second attempt and Ball was seventh with a mark of 41.90m (137' 5") on her final throw of prelims.
In the pole vault, junior Katelyn Howard tied for fifth, clearing 3.85m (12' 7.5") to pick up three points for MIT. Howard passed on the opening height and then cleared both 3.75m and 3.85m, but could not clear the fourth height in the progression. Classmate Hailey Surace was 14th, clearing 3.75m (12' 3.5").
Junior Elaine Wang picked up a big point for MIT with an eighth-place finish in the javelin. Wang's second attempt traveled 40.44m (132' 8"), briefly moving her into sixth place, and the mark ultimately held up for eighth.
The opening day concluded with junior Kate Sanderson finishing fourth with a personal best of 34:48.601 in the 10,000m to earn a spot on the podium, as MIT continued to lead the team standings.
On Friday, Boykin set the NCAA Division III women's shot put all-time record, winning her eighth career national championship with a throw of 16.80m (55' 1 1/4"). The record-breaking mark, which surpassed Robyn Jarocki's previous Division III record, came on her final preliminary attempt, and Boykin won the event by more than 2 meters.
MIT wrapped up the day's action with the 3,000m steeplechase final, where sophomore Liv Girand finished 10th in 10:58.71 to earn the first All-America honor of her career. MIT continued to lead the team standings at the end of the second day of competition.
On Saturday, Boykin earned her third All-America honor in as many events with a third-place finish in the hammer throw at 58.79m (192' 10"), while junior Nony Otu Ugwu took 10th in the triple jump with a mark of 11.91m (39' 1") on her final attempt of prelims; she did not advance to the final.
MIT shone on the track to secure the title, as grad student Gillian Roeder and senior Christina Crow picked up seven big points in the 1,500m final. Roeder was fifth in 4:27.76, and Crow was one spot back in sixth at 4:28.81.
Senior Marina Miller followed and picked up six more points while earning the first of two All-America honors on the day with a third-place finish and a personal record of 54.32 in the 400m.
Junior Rujuta Sane, Roeder, and junior Kate Sanderson finished 13th, 14th, and 16th, respectively, in the 5,000m. Sane had a time of 16:51.45, with Roeder finishing in 16:54.07 and Sanderson clocking in at 17:00.55.
With MIT leading second-place Washington University by seven points heading into the final event, MIT's 4x400m relay team of senior Olivia Dias, junior Shreya Kalyan, junior Krystal Montgomery, and Miller left no doubt, securing the team championship with a national title of their own as Miller moved from third to first over the final 50m to win an electric final race.