MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

A cool new way to study gravity

Tue, 05/20/2025 - 4:10pm

One of the most profound open questions in modern physics is: “Is gravity quantum?” 

The other fundamental forces — electromagnetic, weak, and strong — have all been successfully described, but no complete and consistent quantum theory of gravity yet exists.  

“Theoretical physicists have proposed many possible scenarios, from gravity being inherently classical to fully quantum, but the debate remains unresolved because we’ve never had a clear way to test gravity’s quantum nature in the lab,” says Dongchel Shin, a PhD candidate in the MIT Department of Mechanical Engineering (MechE). “The key to answering this lies in preparing mechanical systems that are massive enough to feel gravity, yet quiet enough — quantum enough — to reveal how gravity interacts with them.”

Shin, who is also a MathWorks Fellow, researches quantum and precision metrology platforms that probe fundamental physics and are designed to pave the way for future industrial technology. He is the lead author of a new paper that demonstrates laser cooling of a centimeter-long torsional oscillator. The open-access paper, “Active laser cooling of a centimeter-scale torsional oscillator,” was recently published in the journal Optica.

Lasers have been routinely employed to cool down atomic gases since the 1980s, and since around 2010 they have been used to cool the linear motion of nanoscale mechanical oscillators. The new paper marks the first time this technique has been extended to torsional oscillators, which are key to a worldwide effort to study gravity using such systems.

“Torsion pendulums have been classical tools for gravity research since [Henry] Cavendish’s famous experiment in 1798. They’ve been used to measure Newton’s gravitational constant, G, test the inverse-square law, and search for new gravitational phenomena,” explains Shin.

By using lasers to remove nearly all thermal motion from atoms, in recent decades scientists have created ultracold atomic gases at micro- and nanokelvin temperatures. These systems now power the world’s most precise clocks — optical lattice clocks — with timekeeping precision so high that they would gain or lose less than a second over the age of the universe.

“Historically, these two technologies developed separately — one in gravitational physics, the other in atomic and optical physics,” says Shin. “In our work, we bring them together. By applying laser cooling techniques originally developed for atoms to a centimeter-scale torsional oscillator, we try to bridge the classical and quantum worlds. This hybrid platform enables a new class of experiments — ones that could finally let us test whether gravity needs to be described by quantum theory.”

The new paper demonstrates laser cooling of a centimeter-scale torsional oscillator from room temperature down to 10 millikelvins (one-hundredth of a kelvin) using a mirrored optical lever.

“An optical lever is a simple but powerful measurement technique: You shine a laser onto a mirror, and even a tiny tilt of the mirror causes the reflected beam to shift noticeably on a detector. This magnifies small angular motions into easily measurable signals,” explains Shin, noting that while the premise is simple, the team faced challenges in practice. “The laser beam itself can jitter slightly due to air currents, vibrations, or imperfections in the optics. These jitters can falsely appear as motion of the mirror, limiting our ability to measure true physical signals.”

To overcome this, the team used the mirrored optical lever approach, which employs a second, mirrored version of the laser beam to cancel out the unwanted jitter.

“One beam interacts with the torsional oscillator, while the other reflects off a corner-cube mirror, reversing any jitter without picking up the oscillator’s motion,” Shin says. “When the two beams are combined at the detector, the real signal from the oscillator is preserved, and the false motion from [the] laser jitter is canceled.”

This approach reduced noise by a factor of a thousand, which allowed the researchers to detect motion with extreme precision, nearly 10 times better than the oscillator’s own quantum zero-point fluctuations. “That level of sensitivity made it possible for us to cool the system down to just 10 millikelvins using laser light,” Shin says.
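
The cancellation scheme can be sketched in a few lines of code. The toy model below is illustrative only: the lever arm, oscillation frequency, and jitter amplitude are assumed values, not the team's numbers. It shows how a reference beam carrying reversed jitter, but none of the oscillator's motion, cancels the common-mode noise when the two beams are combined.

```python
import numpy as np

# Toy model of a mirrored optical lever (illustrative only; every value is assumed).
# A mirror tilt of theta radians shifts the reflected beam by roughly 2 * theta * L
# on a detector placed a lever-arm distance L away.

rng = np.random.default_rng(0)
L = 0.5                                        # lever arm to the detector, meters (assumed)
t = np.linspace(0.0, 1.0, 10_000)              # one second of samples
theta = 1e-9 * np.sin(2 * np.pi * 100 * t)     # torsional motion of the mirror, radians (assumed)
jitter = 1e-7 * rng.standard_normal(t.size)    # beam-pointing jitter, radians

signal_beam = 2 * (theta + jitter) * L         # beam that hit the oscillator: motion plus jitter
reference_beam = -2 * jitter * L               # corner-cube beam: jitter reversed, no oscillator motion

combined = signal_beam + reference_beam        # the jitter cancels when the beams are combined
print(np.allclose(combined, 2 * theta * L))    # True: only the oscillator's motion remains
```

In the actual instrument this cancellation happens optically, when the two beams are combined at the detector, rather than in post-processing.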

Shin says this work is just the beginning. “While we’ve achieved quantum-limited precision below the zero-point motion of the oscillator, reaching the actual quantum ground state remains our next goal,” he says. “To do that, we’ll need to further strengthen the optical interaction — using an optical cavity that amplifies angular signals, or optical trapping strategies. These improvements could open the door to experiments where two such oscillators interact only through gravity, allowing us to directly test whether gravity is quantum or not.”

The paper’s other authors from the Department of Mechanical Engineering include Vivishek Sudhir, assistant professor of mechanical engineering and the Class of 1957 Career Development Professor, and PhD candidate Dylan Fife. Additional authors are Tina Heyward and Rajesh Menon of the Department of Electrical and Computer Engineering at the University of Utah. Shin and Fife are both members of Sudhir’s lab, the Quantum and Precision Measurements Group.

Shin says one thing he’s come to appreciate through this work is the breadth of the challenge the team is tackling. “Studying quantum aspects of gravity experimentally doesn’t just require deep understanding of physics — relativity, quantum mechanics — but also demands hands-on expertise in system design, nanofabrication, optics, control, and electronics,” he says.

“Having a background in mechanical engineering, which spans both the theoretical and practical aspects of physical systems, gave me the right perspective to navigate and contribute meaningfully across these diverse domains,” says Shin. “It’s been incredibly rewarding to see how this broad training can help tackle one of the most fundamental questions in science.”

How to solve a bottleneck for CO2 capture and conversion

Tue, 05/20/2025 - 9:00am

Removing carbon dioxide from the atmosphere efficiently is often seen as a crucial need for combating climate change, but such systems suffer from a tradeoff. Chemical compounds that efficiently capture CO₂ from the air do not easily release it once captured, and compounds that release CO₂ efficiently are not very efficient at capturing it. Optimizing one part of the cycle tends to make the other part worse.

Now, using nanoscale filtering membranes, researchers at MIT have added a simple intermediate step that facilitates both parts of the cycle. The new approach could improve the efficiency of electrochemical carbon dioxide capture and release by six times and cut costs by at least 20 percent, they say.

The new findings are reported today in the journal ACS Energy Letters, in a paper by MIT doctoral students Simon Rufer, Tal Joseph, and Zara Aamer, and professor of mechanical engineering Kripa Varanasi.

“We need to think about scale from the get-go when it comes to carbon capture, as making a meaningful impact requires processing gigatons of CO₂,” says Varanasi. “Having this mindset helps us pinpoint critical bottlenecks and design innovative solutions with real potential for impact. That’s the driving force behind our work.”

Many carbon-capture systems work using chemicals called hydroxides, which readily combine with carbon dioxide to form carbonate. That carbonate is fed into an electrochemical cell, where the carbonate reacts with an acid to form water and release carbon dioxide. The process can take ordinary air with only about 400 parts per million of carbon dioxide and generate a stream of 100 percent pure carbon dioxide, which can then be used to make fuels or other products.
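
To get a feel for the scale involved, here is a back-of-the-envelope sketch in code. The reactions and constants are textbook values, and perfect capture is assumed; none of these numbers come from the MIT study.

```python
# Back-of-the-envelope scale check (textbook constants and ideal capture assumed).
# Schematic of the cycle described above:
#   capture:  2 OH-  + CO2  ->  CO3^2-  + H2O
#   release:  CO3^2- + 2 H+ ->  CO2     + H2O

MOLAR_MASS_CO2 = 44.01     # grams per mole
MOLAR_VOLUME = 24.45e-3    # cubic meters per mole for an ideal gas near 25 C
CO2_FRACTION = 400e-6      # roughly 400 parts per million by volume in ambient air

moles_per_ton = 1e6 / MOLAR_MASS_CO2         # about 22,700 moles in one metric ton of CO2
co2_volume = moles_per_ton * MOLAR_VOLUME    # about 560 cubic meters of pure CO2
air_volume = co2_volume / CO2_FRACTION       # about 1.4 million cubic meters of air

print(f"Air processed per ton of CO2 captured (ideal case): ~{air_volume:,.0f} m^3")
```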

Both the capture and release steps operate in the same water-based solution, but the first step needs a solution with a high concentration of hydroxide ions, and the second step needs one high in carbonate ions. “You can see how these two steps are at odds,” says Varanasi. “These two systems are circulating the same sorbent back and forth. They’re operating on the exact same liquid. But because they need two different types of liquids to operate optimally, it’s impossible to operate both systems at their most efficient points.”

The team’s solution was to decouple the two parts of the system and introduce a third part in between. Essentially, after the hydroxide in the first step has been mostly chemically converted to carbonate, special nanofiltration membranes then separate ions in the solution based on their charge. Carbonate ions carry a charge of minus 2, while hydroxide ions carry a charge of minus 1. “The nanofiltration is able to separate these two pretty well,” Rufer says.

Once separated, the hydroxide ions are fed back to the absorption side of the system, while the carbonates are sent ahead to the electrochemical release stage. That way, both ends of the system can operate at their more efficient ranges. Varanasi explains that in the electrochemical release step, protons are being added to the carbonate to cause the conversion to carbon dioxide and water, but if hydroxide ions are also present, the protons will react with those ions instead, producing just water.

“If you don’t separate these hydroxides and carbonates,” Rufer says, “the way the system fails is you’ll add protons to hydroxide instead of carbonate, and so you’ll just be making water rather than extracting carbon dioxide. That’s where the efficiency is lost. Using nanofiltration to prevent this was something that we aren’t aware of anyone proposing before.”

Testing showed that the nanofiltration could separate the carbonate from the hydroxide solution with about 95 percent efficiency, validating the concept under realistic conditions, Rufer says. The next step was to assess how much of an effect this would have on the overall efficiency and economics of the process. They created a techno-economic model, incorporating electrochemical efficiency, voltage, absorption rate, capital costs, nanofiltration efficiency, and other factors.

The analysis showed that present systems cost at least $600 per ton of carbon dioxide captured, while with the nanofiltration component added, that drops to about $450 a ton. What’s more, the new system is much more stable, continuing to operate at high efficiency even under variations in the ion concentrations in the solution. “In the old system without nanofiltration, you’re sort of operating on a knife’s edge,” Rufer says; if the concentration varies even slightly in one direction or the other, efficiency drops off drastically. “But with our nanofiltration system, it kind of acts as a buffer where it becomes a lot more forgiving. You have a much broader operational regime, and you can achieve significantly lower costs.”
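
As a quick sanity check on those figures (simple arithmetic only, not part of the published techno-economic model):

```python
# Reported cost estimates in USD per ton of CO2 captured; the arithmetic is illustrative only.
cost_baseline = 600.0   # present systems, per the techno-economic analysis
cost_with_nf = 450.0    # with the nanofiltration step added

reduction = (cost_baseline - cost_with_nf) / cost_baseline
print(f"Relative cost reduction: {reduction:.0%}")   # 25%, consistent with the "at least 20 percent" estimate
```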

Rufer adds that this approach could apply not only to the direct air capture systems they studied specifically, but also to point-source systems — which are attached directly to emissions sources such as power plants — or to the next stage of the process, converting captured carbon dioxide into useful products such as fuel or chemical feedstocks. Those conversion processes, he says, “are also bottlenecked in this carbonate and hydroxide tradeoff.”

In addition, this technology could lead to safer alternative chemistries for carbon capture, Varanasi says. “A lot of these absorbents can at times be toxic, or damaging to the environment. By using a system like ours, you can improve the reaction rate, so you can choose chemistries that might not have the best absorption rate initially but can be improved to enable safety.”

Varanasi adds that “the really nice thing about this is we’ve been able to do this with what’s commercially available,” and with a system that can easily be retrofitted to existing carbon-capture installations. If the costs can be further brought down to about $200 a ton, it could be viable for widespread adoption. With ongoing work, he says, “we’re confident that we’ll have something that can become economically viable” and that will ultimately produce valuable, saleable products.

Rufer notes that even today, “people are buying carbon credits at a cost of over $500 per ton. So, at this cost we’re projecting, it is already commercially viable in that there are some buyers who are willing to pay that price.” But by bringing the price down further, that should increase the number of buyers who would consider buying the credit, he says. “It’s just a question of how widespread we can make it.” Recognizing this growing market demand, Varanasi says, “Our goal is to provide industry scalable, cost-effective, and reliable technologies and systems that enable them to directly meet their decarbonization targets.”

The research was supported by Shell International Exploration and Production Inc. through the MIT Energy Initiative, and the U.S. National Science Foundation, and made use of the facilities at MIT.nano.

Technique rapidly measures cells’ density, reflecting health and developmental state

Tue, 05/20/2025 - 5:00am

Measuring the density of a cell can reveal a great deal about the cell’s state. As cells proliferate, differentiate, or undergo cell death, they may gain or lose water and other molecules, which is revealed by changes in density.

Tracking these tiny changes in cells’ physical state is difficult to do at a large scale, especially with single-cell resolution, but a team of MIT researchers has now found a way to measure cell density quickly and accurately — measuring up to 30,000 cells in a single hour.

The researchers also showed that density changes could be used to make valuable predictions, including whether immune cells such as T cells have become activated to kill tumors, or whether tumor cells are susceptible to a specific drug.

“These predictions are all based on looking at very small changes in the physical properties of cells, which can tell you how they’re going to respond,” says Scott Manalis, the David H. Koch Professor of Engineering in the departments of Biological Engineering and Mechanical Engineering, and a member of the Koch Institute for Integrative Cancer Research.

Manalis is the senior author of the new study, which appears today in Nature Biomedical Engineering. The paper’s lead author is MIT Research Scientist Weida (Richard) Wu.

Measuring density

As cells enter new states, their molecular contents, including lipids, proteins, and nucleic acids, can become more or less crowded. Measuring the density of a cell offers an indirect view of this crowding.

The new density measurement technique reported in this study builds on work that Manalis’ lab has done over the past two decades on technologies for making measurements of cells and tiny particles. In 2007, his lab developed a microfluidic device known as a suspended microchannel resonator (SMR), which consists of a microchannel across a tiny silicon cantilever that vibrates at a specific frequency. As a cell passes through the channel, the frequency of the vibration changes slightly, and the magnitude of that change can be used to calculate the cell’s mass.

In 2011, the researchers adapted the technique to measure the density of cells. To achieve that, cells are sent through the device twice, suspended in two liquids of different densities. A cell’s buoyant mass (its mass as it floats in fluid) depends on its absolute mass and volume, so by measuring two different buoyant masses for a cell, its mass, volume, and density can be calculated.
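
In code, that two-fluid calculation amounts to solving two linear equations in two unknowns. The sketch below is illustrative, with invented numbers in consistent units (nanograms, picoliters, grams per milliliter), not data from the study.

```python
# Buoyant mass in a fluid of density rho is  m_b = m - rho * V,
# so two measurements in fluids of known, different densities give two equations
# in the two unknowns: the cell's mass m and volume V. Numbers below are invented.

def cell_properties(mb1, rho1, mb2, rho2):
    """Return (mass, volume, density) from buoyant masses measured in two fluids."""
    volume = (mb1 - mb2) / (rho2 - rho1)
    mass = mb1 + rho1 * volume
    return mass, volume, mass / volume

# Hypothetical cell with true mass 1.06 ng, volume 1.0 pL, density 1.06 g/mL,
# measured in fluids of density 1.00 and 1.10 g/mL (note that 1 g/mL equals 1 ng/pL).
mass, volume, density = cell_properties(mb1=0.06, rho1=1.00, mb2=-0.04, rho2=1.10)
print(f"mass {mass:.2f} ng, volume {volume:.2f} pL, density {density:.2f} g/mL")
```

The faster single-pass approach described next instead measures volume and mass directly in one trip through the device, so density is simply their ratio.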

That technique works well, but swapping fluids and flowing cells through each one is time-consuming, so it can only be used to measure a few hundred cells at a time.

To create a faster, more streamlined system, the researchers combined their SMR device with a fluorescent microscope, which enables measurements of cell volume. The microscope is positioned at the entrance to the resonator, and cells flow through the device while floating in a fluorescent dye that can’t be absorbed by cells. When cells pass by the microscope, the dip in the fluorescent signal can be used to determine the volume of the cell.

After that volume measurement is taken, the cells flow into the resonator, which measures their mass. This process, which allows for rapid calculation of density, can be used to measure up to 30,000 cells in an hour.

“Instead of trying to flow the cells back and forth at least twice through the cantilever to get cell density, we wanted to try to create a method to do a streamlined measurement, so the cells only need to pass through the cantilever once,” Wu says. “From a cell’s mass and volume, we can then derive its density, without compromising the throughput or the precision.”

Evaluating T cells

The researchers used their new technique to track what happens to the density of T cells after they are activated by signaling molecules.

As T cells transition from a quiescent state to an active state, they gain new molecules, as well as water, the researchers found. From their pre-activation state to the first day of activation, the densities of the cells dropped from an average of 1.08 grams per milliliter to 1.06 grams per milliliter. This means that the cells are becoming less crowded, as they gain water faster than they gain other molecules.

“This is suggesting that cell density is very likely reflecting an increase in cellular water content as the cells transit from a quiescent, non-proliferative state to a high-growth state,” Wu says. “These data are pointing to the notion that cell density is an interesting biomarker that is changing during T-cell activation and may have functional relevance to how well the T cells could proliferate.”

Travera, a clinical-stage company co-founded by Manalis, is working on using the SMR mass measurements to predict whether individual cancer patients’ T cells will respond to drugs meant to stimulate a strong anti-tumor immune response. The company has also begun using the density measurement technique, and preliminary studies have found that using mass and density measurements together gives a much more accurate prediction than using either one alone.

“Both mass and density are revealing something about the overall fitness of the immune cells,” Manalis says.

Using physical measurements of cells to monitor their immune activation “is very exciting and may offer a new way of evaluating and measuring changes in immune cells in circulation,” says Genevieve Boland, an associate professor of surgery at Harvard Medical School and vice chair of research for the Integrated Department of Surgery at Mass General Brigham, who was not involved in the study.

“This is a complementary, but very different, method than those currently used for immune assessments in cancer and other diseases, potentially offering a novel tool to assist in clinical decision-making regarding the need for and the choice of a specific cancer therapy, allow monitoring of response to therapy, and/or in early detection of side effects of immune-based therapies,” she says.

Making predictions

Another potential application for this approach is predicting how tumor cells will respond to different types of cancer drugs. In previous work, Manalis has shown that tracking changes in cell mass after treatment can predict whether a tumor cell is undergoing drug-induced apoptosis. In the new study, he found that density could also reveal these responses.

In those experiments, the researchers treated pancreatic cancer cells with one of two different drugs — one that the cells are susceptible to, and one they are resistant to. They found that density changes after treatment accurately reflected the cells’ known responses to treatment.

“We capture something about the cells that is highly predictive within the first couple of days after they get taken out from the tumor,” Wu says. “Cell density is a rapid biomarker to predict in vivo drug response in a very timely manner.”

Manalis’ lab is now working on using measurements of cell mass and density as a way to evaluate the fitness of cells used to synthesize complex proteins such as therapeutic antibodies.

“As cells are producing these proteins, we can learn from these markers of cell fitness and metabolic state to try to make predictions about how well these cells can produce these proteins, and hopefully in the future also guide design and control strategies to even further improve the yield of these complex proteins,” Wu says.

The research was funded by the Paul G. Allen Frontiers Group, the Virginia and Daniel K. Ludwig Fund for Cancer Research, the MIT Center for Precision Cancer Medicine, the Stand up to Cancer Convergence Program, Bristol Myers Squibb, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Scientists discover potential new targets for Alzheimer’s drugs

Tue, 05/20/2025 - 5:00am

By combining information from many large datasets, MIT researchers have identified several new potential targets for treating or preventing Alzheimer’s disease.

The study revealed genes and cellular pathways that haven’t been linked to Alzheimer’s before, including one involved in DNA repair. Identifying new drug targets is critical because many of the Alzheimer’s drugs that have been developed to this point haven’t been as successful as hoped.

Working with researchers at Harvard Medical School, the team used data from humans and fruit flies to identify cellular pathways linked to neurodegeneration. This allowed them to identify additional pathways that may be contributing to the development of Alzheimer’s.

“All the evidence that we have indicates that there are many different pathways involved in the progression of Alzheimer’s. It is multifactorial, and that may be why it’s been so hard to develop effective drugs,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We will need some kind of combination of treatments that hit different parts of this disease.”

Matthew Leventhal PhD ’25 is the lead author of the paper, which appears today in Nature Communications.

Alternative pathways

Over the past few decades, many studies have suggested that Alzheimer’s disease is caused by the buildup of amyloid plaques in the brain, which triggers a cascade of events that leads to neurodegeneration.

A handful of drugs have been developed to block or break down these plaques, but these drugs usually do not have a dramatic effect on disease progression. In hopes of identifying new drug targets, many scientists are now working on uncovering other mechanisms that might contribute to the development of Alzheimer’s.

“One possibility is that maybe there’s more than one cause of Alzheimer’s, and that even in a single person, there could be multiple contributing factors,” Fraenkel says. “So, even if the amyloid hypothesis is correct — and there are some people who don’t think it is — you need to know what those other factors are. And then if you can hit all the causes of the disease, you have a better chance of blocking and maybe even reversing some losses.”

To try to identify some of those other factors, Fraenkel’s lab teamed up with Mel Feany, a professor of pathology at Harvard Medical School who specializes in fruit fly genetics.

Using fruit flies as a model, Feany and others in her lab did a screen in which they knocked out nearly every conserved gene expressed in fly neurons. Then, they measured whether each of these gene knockdowns had any effect on the age at which the flies develop neurodegeneration. This allowed them to identify about 200 genes that accelerate neurodegeneration.

Some of these were already linked to neurodegeneration, including genes for the amyloid precursor protein and for proteins called presenilins, which play a role in the formation of amyloid proteins.

The researchers then analyzed this data using network algorithms that Fraenkel’s lab has been developing over the past several years. These are algorithms that can identify connections between genes that may be involved in the same cellular pathways and functions.

In this case, the aim was to try to link the genes identified in the fruit fly screen with specific processes and cellular pathways that might contribute to neurodegeneration. To do that, the researchers combined the fruit fly data with several other datasets, including genomic data from postmortem tissue of Alzheimer’s patients.

The first stage of their analysis revealed that many of the genes identified in the fruit fly study also decline as humans age, suggesting that they may be involved in neurodegeneration in humans.

Network analysis

In the next phase of their study, the researchers incorporated additional data relevant to Alzheimer’s disease, including eQTL (expression quantitative trait locus) data — a measure of how different gene variants affect the expression levels of certain genes.

Using their network optimization algorithms on this data, the researchers identified pathways that link genes to their potential role in Alzheimer’s development. The team chose two of those pathways to focus on in the new study.

The first is a pathway, not previously linked to Alzheimer’s disease, related to RNA modification. The network suggested that when either of two genes in this pathway — MEPCE and HNRNPA2B1 — is missing, neurons become more vulnerable to the tau tangles that form in the brains of Alzheimer’s patients. The researchers confirmed this effect by knocking down those genes in studies of fruit flies and in human neurons derived from induced pluripotent stem cells (IPSCs).

The second pathway reported in this study is involved in DNA damage repair. This network includes two genes called NOTCH1 and CSNK2A1, which have been linked to Alzheimer’s before, but not in the context of DNA repair. Both genes are most well-known for their roles in regulating cell growth.

In this study, the researchers found evidence that when these genes are missing, DNA damage builds up in cells, through two different DNA-damaging pathways. Buildup of unrepaired DNA has previously been shown to lead to neurodegeneration.

Now that these targets have been identified, the researchers hope to collaborate with other labs to help explore whether drugs that target them could improve neuron health. Fraenkel and other researchers are working on using IPSCs from Alzheimer’s patients to generate neurons that could be used to evaluate such drugs.

“The search for Alzheimer’s drugs will get dramatically accelerated when there are very good, robust experimental systems,” he says. “We’re coming to a point where a couple of really innovative systems are coming together. One is better experimental models based on IPSCs, and the other one is computational models that allow us to integrate huge amounts of data. When those two mature at the same time, which is what we’re about to see, then I think we’ll have some breakthroughs.”

The research was funded by the National Institutes of Health.

Imaging technique removes the effect of water in underwater scenes

Tue, 05/20/2025 - 12:00am

The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.

Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.

The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.

“With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produces better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.

The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles in various locations, including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer, more vivid, and more varied in color than those produced by previous methods.

The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color 3D representation that scientists could then virtually “fly” through, at their own pace and path, to inspect the underwater scene for signs of coral bleaching.

“Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”

Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

Aquatic optics

In the ocean, the color and clarity of objects are distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce the true colors in the ocean. These efforts involved adapting tools that were developed originally for environments out of water, for instance to reveal the true color of features in foggy conditions. One recent work, an algorithm named “Sea-Thru,” accurately reproduces true colors in the ocean, though this method requires a huge amount of computational power, which makes its use in producing 3D scene models challenging.

In parallel, others have made advances in 3D Gaussian splatting, with tools that seamlessly stitch images of a scene together and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated 3D scene not just from the perspective of the original images, but from any angle and distance.

But 3DGS has only successfully been applied to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off of tiny particles in the ocean, creating a veil-like haze. Attenuation is the phenomenon by which light of certain wavelengths fades with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.

Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.

“One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

A model swim

In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects, and computes what the pixel’s true color must be.
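
The sketch below illustrates the kind of per-pixel correction involved, using a commonly cited underwater image-formation model rather than SeaSplat's actual formulation. The per-channel coefficients, veiling light, and range are assumed values; SeaSplat infers the distortion from the images themselves rather than taking such values as given.

```python
import numpy as np

# Illustrative per-pixel correction (not SeaSplat's actual formulation).
# Observed color I at range z is the true color J dimmed by attenuation,
# plus a backscatter veil that grows with distance:
#     I = J * exp(-beta_att * z) + B_inf * (1 - exp(-beta_back * z))

beta_att  = np.array([0.40, 0.10, 0.08])   # per-channel attenuation (R, G, B), assumed
beta_back = np.array([0.20, 0.15, 0.12])   # per-channel backscatter coefficients, assumed
B_inf     = np.array([0.05, 0.25, 0.35])   # veiling light at infinite range, assumed

def restore(observed_rgb, z):
    """Invert the model above to estimate a pixel's true color at range z (meters)."""
    backscatter = B_inf * (1.0 - np.exp(-beta_back * z))
    direct = observed_rgb - backscatter                      # strip away the haze veil
    return np.clip(direct * np.exp(beta_att * z), 0.0, 1.0)  # undo the attenuation

# A reddish patch viewed through 5 meters of water looks dim and blue-shifted:
true_color = np.array([0.8, 0.3, 0.2])
observed = true_color * np.exp(-beta_att * 5) + B_inf * (1 - np.exp(-beta_back * 5))
print(observed.round(2), restore(observed, z=5).round(2))    # recovers roughly [0.8, 0.3, 0.2]
```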

Yang then worked the color-correcting algorithm into a 3D Gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.

The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team took from a pre-existing dataset, represent a range of ocean locations and water conditions. They also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.

From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance zooming in and out of a scene and viewing certain features from different perspectives. Even when viewing from different angles and distances, they found objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.

“Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.

For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle, tied to a ship, can explore and take images that can be sent up to a ship’s computer.

“This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”

This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.

MIT students turn vision to reality

Mon, 05/19/2025 - 4:45pm

Life is a little brighter in Kapiyo these days.

For many in this rural Kenyan town, nightfall used to signal the end to schoolwork and other family activities. Now, however, the darkness is pierced by electric lights from newly solar-powered homes. Inside, children in this off-the-grid area can study while parents extend daily activities past dusk, thanks to a project conceived by an MIT mechanical engineering student and financed by the MIT African Students Association (ASA) Impact Fund.

There are changes coming, too, in the farmlands of Kashusha in the Democratic Republic of Congo (DRC), where another ASA Impact Fund project is working with local growers to establish an energy-efficient mill for processing corn — adding value, creating jobs, and sparking new economic opportunities. Similarly, plans are underway to automate processing of locally-grown cashews in the Mtwara area of Tanzania — an Impact Fund project meant to increase the income of farmers who now send over 90 percent of their nuts abroad for processing.

Inspired by a desire by MIT students to turn promising ideas into practical solutions for people in their home countries, the ASA Impact Fund is a student-run initiative that launched during the 2023-24 academic year. Backed by an alumni board, the fund empowers students to conceive, design, and lead projects with social and economic impact in communities across Africa.

After financing three projects its first year, the ASA Impact Fund received eight project proposals earlier this year and plans to announce its second round of two to four grants sometime this spring, says Pamela Abede, last year’s fund president. Last year’s awards totaled approximately $15,000.

The fund is an outgrowth of MIT’s African Learning Circle, a seminar open to the entire MIT community where biweekly discussions focus on ways to apply MIT’s educational resources, entrepreneurial spirit, and innovation to improve lives on the African continent.

“The Impact Fund was created,” says MIT African Students Association president Victory Yinka-Banjo, “to take this to the next level … to go from talking to execution.”

Aimed at bridging a gap between projects Learning Circle participants envision and resources available to fund them, the ASA Impact Fund “exists as an avenue to assist our members in undertaking social impact projects on the African continent,” the initiative’s website states, “thereby combining theoretical learning with practical application in alignment with MIT's motto.”

The fund’s value extends to the Cambridge campus as well, says ASA Impact Fund board member and 2021 MIT graduate Bolu Akinola.

“You can do cool projects anywhere,” says Akinola, who is originally from Nigeria and currently pursuing a master’s degree in business administration at Harvard University. “Where this is particularly catalyzing is in incentivizing folks to go back home and impact life back on the continent of Africa.”

MIT-Africa managing director Ari Jacobovits, who helped students get the fund off the ground last year, agrees.

“I think it galvanized the community, bringing people together to bridge a programmatic gap that had long felt like a missed opportunity,” Jacobovits says. “I’m always impressed by the level of service-mindedness ASA members have towards their home communities. It’s something we should all be celebrating and thinking about incorporating into our home communities, wherever they may be.”

Alumni Board president Selam Gano notes that a big part of the Impact Fund’s appeal is the close connections project applicants have with the communities they’re working with. MIT engineering major Shekina Pita, for example, is from Kapiyo, and recalls “what it was like growing up in a place with unreliable electricity,” which “would impact every aspect of my life and the lives of those that I lived around.” Pita’s personal experience and familiarity with the community informed her proposal to install solar panels on Kapiyo homes.

So far, the ASA Impact Fund has financed installation of solar panels for five households where families had been relying on candles so their children could do homework after dark.

“A candle is 15 Kenya shillings, and I don’t always have that amount to buy candles for my children to study. I am grateful for your help,” comments one beneficiary of the Kapiyo solar project.

Pita anticipates expanding the project, 10 homes at a time, and involving some college-age residents of those homes in solar panel installation apprenticeships.

“In general, we try to balance projects where we fund some things that are very concrete solutions to a particular community’s problems — like a water project or solar energy — and projects with a longer-term view that could become an organization or a business — like a novel cashew nut processing method,” says Gano, who conducted projects in his father’s homeland of Ethiopia while an MIT student. “I think striking that balance is something I am particularly proud of. We believe that people in the community know best what they need, and it’s great to empower students from those same communities.”  

Vivian Chinoda, who received a grant from the ASA Impact Fund and was part of the African Students Association board that founded it, agrees.

“We want to address problems that can seem trivial without the lived experience of them,” says Chinoda. “For my friend and I, getting funding to go to Tanzania and drive more than 10 hours to speak to remotely located small-scale cashew farmers … made a difference. We were able to conduct market research and cross-check our hypotheses on a project idea we brainstormed in our dorm room in ways we would not have otherwise been able to access remotely.”

Similarly, Florida Mahano’s Impact Fund-financed project is benefiting from her experience growing up near farms in the DRC. Partnering with her brother, a mechanical engineer in her home community of Bukavu in eastern DRC, Mahano is on her way to developing a processing plant that will serve the needs of local farmers. Informed by market research involving about 500 farmers, consumers, and retailers that took place in January, the plant will likely be operational by summer 2026, says Mahano, who has also received funding from MIT’s Priscilla King Gray (PKG) Public Service Center.

“The ASA Impact Fund was the starting point for us,” paving the way for additional support, she says. “I feel like the ASA Impact Fund was really amazing because it allowed me to bring my idea to life.”

Importantly, Chinoda notes that the Impact Fund has already had early success in fostering ties between undergraduate students and MIT alumni.

“When we sent out the application to set up the alumni board, we had a volume of respondents coming in quite quickly, and it was really encouraging to see how the alums were so willing to be present and use their skill sets and connections to build this from the ground up,” she says.

Abede, who is originally from Ghana, would like to see that enthusiasm continue — increasing alumni awareness about the fund “to get more alums involved … more alums on the board and mentoring the students.”

Mentoring is already an important aspect of the ASA Impact Fund, says Akinola. Grantees, she says, get paired with alumni to help them through the process of getting projects underway. 

“This fund could be a really good opportunity to strengthen the ties between the alumni community and current students,” Akinola says. “I think there are a lot of opportunities for funds like this to tap into the MIT alumni community. I think where there is real value is in the advisory nature — mentoring and coaching current students, helping the transfer of skills and resources.”

As more projects are proposed and funded each year, awareness of the ASA Impact Fund among MIT alumni will increase, Gano predicts.

“We’ve had just one year of grantees so far, and all of the projects they’ve conducted have been great,” he says. “I think even if we just continue functioning at this scale, if we’re able to sustain the fund, we can have a real lasting impact as students and alumni and build more and more partnerships on the continent.”

The sweet taste of a new idea

Mon, 05/19/2025 - 4:30pm

Behavioral economist Sendhil Mullainathan has never forgotten the pleasure he felt the first time he tasted a delicious, crisp yet gooey Levain cookie. He compares the experience to what he feels when he encounters new ideas.

“That hedonic pleasure is pretty much the same pleasure I get hearing a new idea, discovering a new way of looking at a situation, or thinking about something, getting stuck and then having a breakthrough. You get this kind of core basic reward,” says Mullainathan, the Peter de Florez Professor with dual appointments in the MIT departments of Economics and Electrical Engineering and Computer Science, and a principal investigator at the MIT Laboratory for Information and Decision Systems (LIDS).

Mullainathan’s love of new ideas, and by extension of going beyond the usual interpretation of a situation or problem by looking at it from many different angles, seems to have started very early. As a child in school, he says, the multiple-choice answers on tests all seemed to offer possibilities for being correct.

“They would say, ‘Here are three things. Which of these choices is the fourth?’ Well, I was like, ‘I don’t know.’ There are good explanations for all of them,” Mullainathan says. “While there’s a simple explanation that most people would pick, natively, I just saw things quite differently.”

Mullainathan says the way his mind works, and has always worked, is “out of phase” — that is, not in sync with how most people would readily pick the one correct answer on a test. He compares the way he thinks to “one of those videos where an army’s marching and one guy’s not in step, and everyone is thinking, what’s wrong with this guy?”

Luckily, Mullainathan says, “being out of phase is kind of helpful in research.”

And apparently so. Mullainathan has received a MacArthur “Genius Grant,” has been designated a “Young Global Leader” by the World Economic Forum, was named a “Top 100 thinker” by Foreign Policy magazine, was included in the “Smart List: 50 people who will change the world” by Wired magazine, and won the Infosys Prize, the largest monetary award in India recognizing excellence in science and research.

Another key aspect of who Mullainathan is as a researcher — his focus on financial scarcity — also dates back to his childhood. When he was about 10, just a few years after his family moved to the Los Angeles area from India, his father lost his job as an aerospace engineer because of a change in security clearance laws regarding immigrants. When his mother told him that without work, the family would have no money, he says he was incredulous.

“At first I thought, that can’t be right. It didn’t quite process,” he says. “So that was the first time I thought, there’s no floor. Anything can happen. It was the first time I really appreciated economic precarity.”

His family got by running a video store and then other small businesses, and Mullainathan made it to Cornell University, where he studied computer science, economics, and mathematics. Although he was doing a lot of math, he found himself drawn not to standard economics, but to the behavioral economics of an early pioneer in the field, Richard Thaler, who later won the Nobel Memorial Prize in Economic Sciences for his work. Behavioral economics brings the psychological, and often irrational, aspects of human behavior into the study of economic decision-making.

“It’s the non-math part of this field that’s fascinating,” says Mullainathan. “What makes it intriguing is that the math in economics isn’t working. The math is elegant, the theorems. But it’s not working because people are weird and complicated and interesting.”

Behavioral economics was so new as Mullainathan was graduating that he says Thaler advised him to study standard economics in graduate school and make a name for himself before concentrating on behavioral economics, “because it was so marginalized. It was considered super risky because it didn’t even fit a field,” Mullainathan says.

Unable to resist thinking about humanity’s quirks and complications, however, Mullainathan focused on behavioral economics, got his PhD at Harvard University, and says he then spent about 10 years studying people.

“I wanted to get the intuition that a good academic psychologist has about people. I was committed to understanding people,” he says.

As Mullainathan was formulating theories about why people make certain economic choices, he wanted to test these theories empirically.

In 2013, he published a paper in Science titled “Poverty Impedes Cognitive Function.” The research measured sugarcane farmers’ performance on intelligence tests in the days before their yearly harvest, when they were out of money, sometimes nearly to the point of starvation. In the controlled study, the same farmers took tests after their harvest was in and they had been paid for a successful crop — and they scored significantly higher.

Mullainathan says he is gratified that the research had far-reaching impact, and that those who make policy often take its premise into account.

“Policies as a whole are kind of hard to change,” he says, “but I do think it has created sensitivity at every level of the design process, that people realize that, for example, if I make a program for people living in economic precarity hard to sign up for, that’s really going to be a massive tax.”

To Mullainathan, the most important effect of the research was on individuals, an impact he saw in reader comments that appeared after the research was covered in The Guardian.

“Ninety percent of the people who wrote those comments said things like, ‘I was economically insecure at one point. This perfectly reflects what it felt like to be poor.’”

Such insights into the way outside influences affect personal lives could be among important advances made possible by algorithms, Mullainathan says.

“I think in the past era of science, science was done in big labs, and it was actioned into big things. I think the next age of science will be just as much about allowing individuals to rethink who they are and what their lives are like.”

Last year, Mullainathan came back to MIT (after having previously taught at MIT from 1998 to 2004) to focus on artificial intelligence and machine learning.

“I wanted to be in a place where I could have one foot in computer science and one foot in a top-notch behavioral economic department,” he says. “And really, if you just objectively said ‘what are the places that are A-plus in both,’ MIT is at the top of that list.”

While AI can automate tasks and systems, such automation of abilities humans already possess is “hard to get excited about,” he says. Computer science can be used to expand human abilities, a notion only limited by our creativity in asking questions.

“We should be asking, what capacity do you want expanded? How could we build an algorithm to help you expand that capacity? Computer science as a discipline has always been so fantastic at taking hard problems and building solutions,” he says. “If you have a capacity that you’d like to expand, that seems like a very hard computing challenge. Let’s figure out how to take that on.”

The sciences that “are very far from having hit the frontier that physics has hit,” like psychology and economics, could be on the verge of huge developments, Mullainathan says. “I fundamentally believe that the next generation of breakthroughs is going to come from the intersection of understanding of people and understanding of algorithms.”

He explains a possible use of AI in which a decision-maker, for example a judge or doctor, could see what their average decision would be for a particular set of circumstances. Such an average would be potentially freer of day-to-day influences — such as a bad mood, indigestion, slow traffic on the way to work, or a fight with a spouse.

Mullainathan sums the idea up as “average-you is better than you. Imagine an algorithm that made it easy to see what you would normally do. And that’s not what you’re doing in the moment. You may have a good reason to be doing something different, but asking that question is immensely helpful.”
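
A minimal sketch of that idea, with entirely hypothetical cases and decisions, might look like this:

```python
# Sketch of the "average-you" idea with made-up data (not a real system): given a history
# of a decision-maker's past calls on similar cases, show the average decision for cases
# like the current one, so a mood- or fatigue-driven deviation stands out.

from statistics import mean

# Hypothetical history: (case features, decision) pairs, with decisions coded 0 or 1.
history = [
    ({"severity": "low", "prior_record": False}, 0),
    ({"severity": "low", "prior_record": False}, 0),
    ({"severity": "low", "prior_record": False}, 1),
    ({"severity": "low", "prior_record": True},  1),
]

def average_you(current_case, history):
    """Average of past decisions on cases with the same features as the current one."""
    similar = [decision for features, decision in history if features == current_case]
    return mean(similar) if similar else None

today = {"severity": "low", "prior_record": False}
print(average_you(today, history))   # ~0.33: on a typical day, this case is usually decided "0"
```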

Going forward, Mullainathan will absolutely be trying to work toward such new ideas — because to him, they offer such a delicious reward.

Study in India shows several tactics together boost vaccination against deadly diseases

Mon, 05/19/2025 - 12:00am

Around the world, low immunization rates for children are a persistent problem. Now, an experiment conducted in India shows that an inexpensive combination of methods, including text reminders and small financial incentives, has a major impact on immunization.

Led by MIT economists, the research finds that a trifecta of incentives, text messages, and information provided by local residents creates a 44 percent increase in child immunizations, at low cost. Alternatively, without financial incentives but still using text messages and local information, there is a 9 percent increase in immunizations at virtually no expense — the most cost-effective increase the researchers found.

“The most effective package overall has incentives, reminders, and enlisting of community ambassadors to remind people,” says MIT economist Esther Duflo, who helped lead the research. “The cost is very low. And an even more cost-effective package is to not have incentives — you can increase immunization just from reminders through social networks. That’s basically a free lunch because you are making a more effective use of the immunization infrastructure in place. So the small cost of the program is more compensated by the fact that the full cost of administering an immunization goes down.”

The experiment is also notable for the sophisticated new method the research team developed to combine a variety of these approaches in the experiment — and then see precisely what effects were produced by different combinations as well as their component parts.

“What is good about this is that it triangulates and links all these pieces of evidence together,” says MIT economist Abhijit Banerjee, who also helped lead the project. “In terms of our confidence in saying this is a reasonable policy recipe, that’s very important.”

A new paper detailing the results and the method, “Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization,” is being published in the journal Econometrica. Duflo and Banerjee are among 11 co-authors of the paper, along with several staff members of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL).

Duflo and Banerjee are also two of the co-founders of J-PAL, a global leader in field experiments about antipoverty programs. In 2019 they were awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with Michael Kremer of Harvard University.

Analyzing 75 approaches at once

About 2 million children die per year globally from diseases that are vaccine-preventable. As of 2016, when the current study began, only 62 percent of children in India were fully immunized against tuberculosis, measles, diphtheria, tetanus, and polio.

Prior research by Duflo and Banerjee has helped validate the value of finding new ways to boost immunization rates. In one prior study, the economists found that immunization rates for rural children in the state of Rajasthan, India, jumped from 5 percent to 39 percent when their families were offered a modest quantity of lentils as an incentive. (That finding was mentioned in their Nobel citation.) Subsequently, many other researchers have studied new methods of increasing immunization.

To conduct the current study, the research team partnered with the state government of Haryana, India, to conduct an experiment spanning more than 900 villages, from 2016 through 2019.

The researchers based the experiment around their three basic ways of encouraging parents to get their children vaccinated: financial incentives, text messages, and information from local “ambassadors,” that is, well-connected residents. The research team then developed a set of varying combinations of these elements. In some cases they would offer more incentives, or fewer, along with different amounts of text messages, and different kinds of exposure to local information.

In all, the researchers wound up with 75 combinations of these elements and developed a new method to evaluate them all, which they call treatment variant aggregation (TVA). Essentially, the scholars developed an algorithm that used a systematic, data-driven approach to pool together variations that were ultimately identical, and noted which ones were ineffective. To select the best package, they also adjusted their results for the so-called “winner’s curse” of social-science studies, in which the policy option that looks best in a particular experiment tends to have done so partly by random chance.
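
To make the pooling idea concrete, here is a heavily simplified conceptual sketch with invented numbers. It is not the authors' TVA procedure, which involves a more careful statistical treatment, but it shows what it means to aggregate treatment variants whose estimated effects are indistinguishable.

```python
import math

# Conceptual sketch only, with invented numbers -- not the authors' TVA procedure.
# The idea illustrated: arms whose estimated effects are statistically indistinguishable
# get pooled, so the final comparison is over a few aggregated packages rather than
# all 75 raw combinations.

# (arm name, estimated effect on immunization rate, standard error)
arms = [
    ("incentive+sms+ambassador", 0.44, 0.06),
    ("high_incentive+sms",       0.41, 0.07),
    ("sms+ambassador",           0.09, 0.04),
    ("sms_only",                 0.08, 0.05),
    ("no_intervention",          0.00, 0.02),
]

def indistinguishable(a, b, z=1.96):
    """True if we cannot reject (at roughly 5%) that arms a and b have the same effect."""
    diff = abs(a[1] - b[1])
    se = math.sqrt(a[2] ** 2 + b[2] ** 2)
    return diff / se < z

# Greedy pooling: each arm joins the first pool whose representative it matches.
pools = []
for arm in arms:
    for pool in pools:
        if indistinguishable(arm, pool[0]):
            pool.append(arm)
            break
    else:
        pools.append([arm])

for pool in pools:
    print([name for name, _, _ in pool])
# ['incentive+sms+ambassador', 'high_incentive+sms']
# ['sms+ambassador', 'sms_only']
# ['no_intervention']
```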

All told, the scholars believe they have developed a way of evaluating many “treatments” — the individual elements, such as financial incentives — within the same experiment, rather than just trying out one concept, like distributing lentils, per every large study.

“It’s not one experiment where you compare A with B,” says Banerjee, who is also the Ford Foundation International Professor of Economics. “What we do here is evaluate a combination of things. Even in scenarios where you see no effect, there is information to be harvested. It may be that in a combination of treatments, maybe one element works well, and the others have a negative effect and the net is zero, but there is information there. So, you want to keep track of all the possibilities as you go along, although it is a mathematically difficult exercise.”

The researchers were also able to discern that differences among local populations have an impact on the effectiveness of the different elements being tested. Generally, groups with lower immunization rates will respond more to incentives to immunize.

“In a way, we are landing back where we were in [the lentil study in] Rajasthan, where low immunization rates lead to super-high effects for these incentives,” says Duflo, who is also the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics. “We replicated the result in this context.” However, she reinforces, the new method allows scholars to acquire more information about that process more quickly.

An actionable path

The research team is hopeful that the new TVA method will gain wider adoption among scholars and lead to more experiments with multifaceted approaches, in which numerous potential solutions are evaluated simultaneously. The method could apply to antipoverty research, medical trials, and more.

Beyond that, they note, these kinds of results give governments and other organizations the ability to see how different policy options will play out, in both medical and fiscal terms.

“The reason why we did this was to be able to give the government of Haryana an actionable path, moving forward,” Duflo says.

She adds: “People before thought in order to say something with confidence, you should try just one treatment at a time,” meaning, one type of intervention at a time, such as incentives, or text messages. However, Duflo notes, “I’m very happy to say you can have more than one, and you can analyze all of them. It takes many steps, but such is life: many steps.”

In addition to Duflo and Banerjee, the co-authors of the study are Arun G. Chandrasekhar of J-PAL; Suresh Dalpath of the Government of Haryana; John Floretta of J-PAL; Matthew O. Jackson, an economist at Stanford University; Harini Kannan of J-PAL; Francine Loza of J-PAL; Anirudh Sankar of Stanford; Anna Schrimpf of J-PAL; and Maheshwor Shrestha of the World Bank.

The research was made possible through cooperation with the Haryana Department of Health and Family Welfare. 

A day in the life of MIT MBA student David Brown

Fri, 05/16/2025 - 1:25pm

“MIT Sloan was my first and only choice,” says MIT graduate student David Brown. After receiving his BS in chemical engineering at the U.S. Military Academy at West Point, Brown spent eight years as a helicopter pilot in the U.S. Army, serving as a platoon leader and troop commander. 

Now in the final year of his MBA, Brown has co-founded a climate tech company — Helix Carbon — with Ariel Furst, an MIT assistant professor in the Department of Chemical Engineering, and Evan Haas MBA ’24, SM ’24. Their goal: erase the carbon footprint of tough-to-decarbonize industries like ironmaking, polyurethanes, and olefins by generating competitively priced, carbon-neutral fuels directly from waste carbon dioxide (CO2). It’s an ambitious project; they’re looking to scale the company large enough to have a gigaton-per-year impact on CO2 emissions. They have lab space off campus, and after graduation, Brown will be taking a full-time job as chief operating officer.

“What I loved about the Army was that I felt every day that the work I was doing was important or impactful in some way. I wanted that to continue, and felt the best way to have the greatest possible positive impact was to use my operational skills learned from the military to help close the gap between the lab and impact in the market.”

The following photo essay provides a snapshot of what a typical day for Brown has been like as an MIT student.

Usha Lee McFarling named director of the Knight Science Journalism Program

Fri, 05/16/2025 - 12:30pm

The Knight Science Journalism Program (KSJ) at MIT has announced that Usha Lee McFarling, national science correspondent for STAT and a former KSJ Fellow, will be joining the team in August as its next director.

As director, McFarling will play a central role in helping to manage KSJ — an elite mid-career fellowship program that brings prominent science journalists from around the world for 10 months of study and intellectual exploration at MIT, Harvard University, and other institutions in the Boston area.

“I’m eager to take the helm during this critical time for science journalism, a time when journalism is under attack both politically and economically and misinformation — especially in areas of science and health — is rife,” says McFarling. “My goal is for the program to find even more ways to support our field and its practitioners as they carry on their important work.”

McFarling is a veteran science writer, most recently working for STAT News. She previously reported for the Los Angeles Times, The Boston Globe, Knight Ridder Washington Bureau, and the San Antonio Light, and was a Knight Science Journalism Fellow in 1992-93. McFarling graduated from Brown University with a degree in biology in 1988 and later earned a master’s degree in biological psychology from the University of California at Berkeley.

Her work on the diseased state of the world’s oceans earned the 2007 Pulitzer Prize for explanatory journalism and a Polk Award, among other honors. Her coverage of health disparities at STAT has earned an Edward R. Murrow Award, as well as awards from the Association of Health Care Journalists and the Asian American Journalists Association. In 2024, she was awarded the Victor Cohn Prize for excellence in medical science reporting and the Bernard Lo, MD Award in bioethics.

McFarling will succeed Deborah Blum, who served as director for 10 years. Blum, also a Pulitzer Prize-winning journalist and the bestselling author of six books, is retiring to return to a full-time writing career. She will join the board of Undark, a magazine she helped found while at KSJ, and continue as a board member of the Council for the Advancement of Science Writing and the Burroughs Wellcome Fund, among others.

“It’s been an honor to serve as director of the Knight Science Journalism program for the past 10 years and a pleasure to be able to support the important work that science journalists do,” Blum says. “And I know that under the direction of Usha McFarling — who brings such talent and intelligence to the job — that KSJ will continue to grow and thrive in all the best ways.”

With AI, researchers predict the location of virtually any protein within a human cell

Thu, 05/15/2025 - 10:30am

A protein located in the wrong part of a cell can contribute to several diseases, such as Alzheimer’s, cystic fibrosis, and cancer. But there are about 70,000 different proteins and protein variants in a single human cell, and since scientists can typically only test for a handful in one experiment, it is extremely costly and time-consuming to identify proteins’ locations manually.

A new generation of computational techniques seeks to streamline the process using machine-learning models that often leverage datasets containing thousands of proteins and their locations, measured across multiple cell lines. One of the largest such datasets is the Human Protein Atlas, which catalogs the subcellular behavior of over 13,000 proteins in more than 40 cell lines. But as enormous as it is, the Human Protein Atlas has only explored about 0.25 percent of all possible pairings of all proteins and cell lines within the database.

Now, researchers from MIT, Harvard University, and the Broad Institute of MIT and Harvard have developed a new computational approach that can efficiently explore the remaining uncharted space. Their method can predict the location of any protein in any human cell line, even when both protein and cell have never been tested before.

Their technique goes one step further than many AI-based methods by localizing a protein at the single-cell level, rather than as an averaged estimate across all the cells of a specific type. This single-cell localization could pinpoint a protein’s location in a specific cancer cell after treatment, for instance.

The researchers combined a protein language model with a special type of computer vision model to capture rich details about a protein and cell. In the end, the user receives an image of a cell with a highlighted portion indicating the model’s prediction of where the protein is located. Since a protein’s localization is indicative of its functional status, this technique could help researchers and clinicians more efficiently diagnose diseases or identify drug targets, while also enabling biologists to better understand how complex biological processes are related to protein localization.

“You could do these protein-localization experiments on a computer without having to touch any lab bench, hopefully saving yourself months of effort. While you would still need to verify the prediction, this technique could act like an initial screening of what to test for experimentally,” says Yitong Tseo, a graduate student in MIT’s Computational and Systems Biology program and co-lead author of a paper on this research.

Tseo is joined on the paper by co-lead author Xinyi Zhang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and the Eric and Wendy Schmidt Center at the Broad Institute; Yunhao Bai of the Broad Institute; and senior authors Fei Chen, an assistant professor at Harvard and a member of the Broad Institute, and Caroline Uhler, the Andrew and Erna Viterbi Professor of Engineering in EECS and the MIT Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS). The research appears today in Nature Methods.

Collaborating models

Many existing protein prediction models can only make predictions based on the protein and cell data on which they were trained or are unable to pinpoint a protein’s location within a single cell.

To overcome these limitations, the researchers created a two-part method for prediction of unseen proteins’ subcellular location, called PUPS.

The first part utilizes a protein sequence model to capture the localization-determining properties of a protein and its 3D structure based on the chain of amino acids that forms it.

The second part incorporates an image inpainting model, which is designed to fill in missing parts of an image. This computer vision model looks at three stained images of a cell to gather information about the state of that cell, such as its type, individual features, and whether it is under stress.

PUPS joins the representations created by each model to predict where the protein is located within a single cell, using an image decoder to output a highlighted image that shows the predicted location.

“Different cells within a cell line exhibit different characteristics, and our model is able to understand that nuance,” Tseo says.

A user inputs the sequence of amino acids that form the protein and three cell stain images — one for the nucleus, one for the microtubules, and one for the endoplasmic reticulum. Then PUPS does the rest.
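
The paper itself is the authoritative description of PUPS; the sketch below is only a schematic of the two-branch design described above, written in PyTorch with hypothetical layer sizes, module names, and input shapes (the real system builds on a pretrained protein language model and an image inpainting network).

```python
import torch
import torch.nn as nn

class PUPSSketch(nn.Module):
    """Schematic two-branch model: a protein-sequence encoder plus an image
    encoder over three cell-stain channels, fused and decoded into a
    per-pixel localization map. Sizes and modules are illustrative only."""

    def __init__(self, protein_dim=512, image_channels=3, hidden=256):
        super().__init__()
        # Stand-in for a pretrained protein language model that maps an
        # amino-acid sequence embedding to a fixed-size vector.
        self.protein_encoder = nn.Sequential(
            nn.Linear(protein_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Stand-in for the image (inpainting-style) encoder over the
        # nucleus, microtubule, and endoplasmic-reticulum stain channels.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(image_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Decoder that outputs a single-channel "highlighted" localization map.
        self.decoder = nn.Conv2d(hidden * 2, 1, kernel_size=1)

    def forward(self, protein_embedding, stain_images):
        # protein_embedding: (batch, protein_dim); stain_images: (batch, 3, H, W)
        p = self.protein_encoder(protein_embedding)               # (batch, hidden)
        img = self.image_encoder(stain_images)                    # (batch, hidden, H, W)
        # Broadcast the protein vector over every pixel and fuse the two branches.
        p_map = p[:, :, None, None].expand(-1, -1, img.shape[2], img.shape[3])
        fused = torch.cat([img, p_map], dim=1)                    # (batch, 2*hidden, H, W)
        return torch.sigmoid(self.decoder(fused))                 # predicted localization map

model = PUPSSketch()
pred = model(torch.randn(1, 512), torch.randn(1, 3, 64, 64))
print(pred.shape)  # torch.Size([1, 1, 64, 64])
```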

A deeper understanding

The researchers employed a few tricks during the training process to teach PUPS how to combine information from each model in such a way that it can make an educated guess on the protein’s location, even if it hasn’t seen that protein before.

For instance, they assign the model a secondary task during training: to explicitly name the compartment of localization, like the cell nucleus. This is done alongside the primary inpainting task to help the model learn more effectively.

A good analogy might be a teacher who asks their students to draw all the parts of a flower in addition to writing their names. This extra step was found to help the model improve its general understanding of the possible cell compartments.
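
As a rough illustration of this training trick (and only an illustration; the actual loss terms, weights, and compartment labels are defined in the paper), the auxiliary compartment-classification objective can be added to the main localization loss like so:

```python
import torch
import torch.nn as nn

num_compartments = 10                      # illustrative count of labeled compartments
localization_loss = nn.BCELoss()           # main task: predicted vs. observed protein signal
compartment_loss = nn.CrossEntropyLoss()   # auxiliary task: name the compartment

def training_loss(pred_map, true_map, compartment_logits, compartment_label, aux_weight=0.1):
    """Total loss = main localization loss + weighted auxiliary classification loss.
    The 0.1 weight is an arbitrary placeholder, not the paper's value."""
    main = localization_loss(pred_map, true_map)
    aux = compartment_loss(compartment_logits, compartment_label)
    return main + aux_weight * aux

# Toy usage with random tensors.
loss = training_loss(
    torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64).round(),
    torch.randn(1, num_compartments), torch.tensor([3]),
)
print(loss.item())
```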

In addition, the fact that PUPS is trained on proteins and cell lines at the same time helps it develop a deeper understanding of where in a cell image proteins tend to localize.

PUPS can even understand, on its own, how different parts of a protein’s sequence contribute separately to its overall localization.

“Most other methods usually require you to have a stain of the protein first, so you’ve already seen it in your training data. Our approach is unique in that it can generalize across proteins and cell lines at the same time,” Zhang says.

Because PUPS can generalize to unseen proteins, it can capture changes in localization driven by unique protein mutations that aren’t included in the Human Protein Atlas.

The researchers verified that PUPS could predict the subcellular location of new proteins in unseen cell lines by conducting lab experiments and comparing the results. In addition, when compared to a baseline AI method, PUPS exhibited lower average prediction error across the proteins they tested.

In the future, the researchers want to enhance PUPS so the model can understand protein-protein interactions and make localization predictions for multiple proteins within a cell. In the longer term, they want to enable PUPS to make predictions for living human tissue, rather than cultured cells.

This research is funded by the Eric and Wendy Schmidt Center at the Broad Institute, the National Institutes of Health, the National Science Foundation, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, the Merkin Institute, the Office of Naval Research, and the Department of Energy.

Particles carrying multiple vaccine doses could reduce the need for follow-up shots

Thu, 05/15/2025 - 10:00am

Around the world, 20 percent of children are not fully immunized, leading to 1.5 million child deaths each year from diseases that are preventable by vaccination. About half of those underimmunized children received at least one vaccine dose but did not complete the vaccination series, while the rest received no vaccines at all.

To make it easier for children to receive all of their vaccines, MIT researchers are working to develop microparticles that can release their payload weeks or months after being injected. This could lead to vaccines that can be given just once, with several doses that would be released at different time points.

In a study appearing today in the journal Advanced Materials, the researchers showed that they could use these particles to deliver two doses of diphtheria vaccine — one released immediately, and the second two weeks later. Mice that received this vaccine generated as many antibodies as mice that received two separate doses two weeks apart.

The researchers now hope to extend those intervals, which could make the particles useful for delivering childhood vaccines that are given as several doses over a few months, such as the polio vaccine.

“The long-term goal of this work is to develop vaccines that make immunization more accessible — especially for children living in areas where it’s difficult to reach health care facilities. This includes rural regions of the United States as well as parts of the developing world where infrastructure and medical clinics are limited,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research.

Jaklenec and Robert Langer, the David H. Koch Institute Professor at MIT, are the senior authors of the study. Linzixuan (Rhoda) Zhang, an MIT graduate student who recently completed her PhD in chemical engineering, is the paper’s lead author.

Self-boosting vaccines

In recent years, Jaklenec, Langer, and their colleagues have been working on vaccine delivery particles made from a polymer called PLGA. In 2018, they showed they could use these types of particles to deliver two doses of the polio vaccine, which were released about 25 days apart.

One drawback to PLGA is that as the particles slowly break down in the body, the immediate environment can become acidic, which may damage the vaccine contained within the particles.

The MIT team is now working on ways to overcome that issue in PLGA particles and is also exploring alternative materials that would create a less acidic environment. In the new study, led by Zhang, the researchers decided to focus on another type of polymer, known as polyanhydride.

“The goal of this work was to advance the field by exploring new strategies to address key challenges, particularly those related to pH sensitivity and antigen degradation,” Jaklenec says.

Polyanhydrides, biodegradable polymers that Langer developed for drug delivery more than 40 years ago, are very hydrophobic. This means that as the polymers gradually erode inside the body, the breakdown products hardly dissolve in water and generate a much less acidic environment.

Polyanhydrides usually consist of chains of two different monomers that can be assembled in a huge number of possible combinations. For this study, the researchers created a library of 23 polymers, which differed from each other based on the chemical structures of the monomer building blocks and the ratio of the two monomers that went into the final product.

The researchers evaluated these polymers based on their ability to withstand temperatures of at least 104 degrees Fahrenheit (40 degrees Celsius, or slightly above body temperature) and whether they could remain stable throughout the process required to form them into microparticles.

To make the particles, the researchers developed a process called stamped assembly of polymer layers, or SEAL. First, they use silicon molds to form cup-shaped particles that can be filled with the vaccine antigen. Then, a cap made from the same polymer is applied and sealed using heat. Polymers that proved too brittle or didn’t seal completely were eliminated from the pool, leaving six top candidates.

The researchers used those polymers to design particles that would deliver diphtheria vaccine two weeks after injection, and gave them to mice along with vaccine that was released immediately. Four weeks after the initial injection, those mice showed comparable levels of antibodies to mice that received two doses two weeks apart.

Extended release

As part of their study, the researchers also developed a machine-learning model to help them explore the factors that determine how long it takes the particles to degrade once in the body. These factors include the type of monomers that go into the material, the ratio of the monomers, the molecular weight of the polymer, and the loading capacity, or how much vaccine can go into the particle.

Using this model, the researchers were able to rapidly evaluate nearly 500 possible particles and predict their release time. They tested several of these particles in controlled buffers and showed that the model’s predictions were accurate.
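
The article doesn't specify which model the team used, so the snippet below is only a generic illustration of the idea: fit a regressor that maps polymer descriptors (monomer ratio, molecular weight, drug loading, and so on) to a measured release time, then screen a large batch of candidate formulations. All feature names and values are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: one row per characterized particle formulation.
# Columns: monomer A fraction, molecular weight (kDa), drug loading (wt%), monomer-pair flag.
X_train = rng.uniform([0.1, 5, 1, 0], [0.9, 50, 20, 1], size=(60, 4))
# Hypothetical measured release times in days (synthetic, not real data).
y_train = 5 + 30 * X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(0, 2, 60)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Screen ~500 candidate formulations and rank them by predicted release time.
candidates = rng.uniform([0.1, 5, 1, 0], [0.9, 50, 20, 1], size=(500, 4))
predicted_days = model.predict(candidates)
best = np.argsort(predicted_days)[-5:]
print("Longest predicted release times (days):", predicted_days[best].round(1))
```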

In future work, this model could also help researchers to develop materials that would release their payload after longer intervals — months or even years. This could make them useful for delivering many childhood vaccines, which require multiple doses over several years.

“If we want to extend this to longer time points, let’s say over a month or even further, we definitely have some ways to do this, such as increasing the molecular weight or the hydrophobicity of the polymer. We can also potentially do some cross-linking. Those are further changes to the chemistry of the polymer to slow down the release kinetics or to extend the retention time of the particle,” Zhang says.

The researchers now hope to explore using these delivery particles for other types of vaccines. The particles could also prove useful for delivering other types of drugs that are sensitive to acidity and need to be given in multiple doses, they say.

“This technology has broad potential for single-injection vaccines, but it could also be adapted to deliver small molecules or other biologics that require durability or multiple doses. Additionally, it can accommodate drugs with pH sensitivities,” Jaklenec says.

The research was funded, in part, by the Koch Institute Support (core) Grant from the National Cancer Institute.

Class pairs students with military officers to build mission-critical solutions

Thu, 05/15/2025 - 12:00am

On a recent Friday afternoon, Marine Corps Major and U.S. Congressman Jake Auchincloss stood at the front of a crowded MIT classroom in Building 1 and made his case for modernizing America’s military to counter the threat from China. Part of his case involved shifting resources away from the U.S. Army to bolster the Marines, Navy, and Air Force.

When it was time for questions, several hands shot up. One person took exception to Auchincloss’ plans for the Army, although he admitted his views were influenced by the fact that he was an active member of the Army’s Special Forces. Another person had a question about the future of wartime technology. Again, the questioner had some personal experience: He sits on the board of a Ukrainian drone-manufacturing company. Next up was an MIT student with a question about artificial intelligence.

Course 15.362/6.9160 (Engineering Innovation: Global Security Systems) is not your typical MIT class. It teaches students about the most pressing problems in global security and challenges them to build functioning prototypes over the course of one whirlwind semester. Along the way, students hear from high-ranking members of the military, MIT professors, government officials, startup founders, and others to learn about the realities of combat and how best to create innovative solutions.

“As far as I know, this is the only class in the world that works in this way,” says Gene Keselman MBA ’17, a lecturer in the MIT Sloan School of Management and a colonel in the U.S. Air Force Reserves who helped start the class. “There are other classes trying to do something similar, but they use intermediaries. In this course, the Navy SEALs are in the classroom working directly with the students. By teaching students in this way, we’re giving them exposure to something they’d never otherwise be exposed to.”

At the beginning of the semester, students split into interdisciplinary groups that feature both undergraduate and graduate students. Each group is assigned mentors with deep military experience. From there, students learn to take a problem, map out a set of possible solutions, and pitch their prototypes to the active members of the armed services they’re trying to help.

They get feedback on their ideas and iterate as they go through a series of presentation milestones throughout the semester.

“The outcomes are twofold,” says A.J. Perez ’13, MEng ’14, PhD ’23, a lecturer in the MIT School of Engineering and a research scientist with the Office of Innovation, who built the course’s engineering design curriculum. “There are the prototypes, which could have real impact on war fighters, and then there are the learnings students get by going through the process of defining a problem and building a prototype. The prototype is important, but the process of the class leads to skills that are transferable to a multitude of other domains.”

The class’s organizers say although the course is only in its second year, it aligns with MIT’s long legacy of working alongside the military.

“MIT has these incredibly fruitful relationships with the Department of Defense going back to World War II,” says Keselman. “We developed advanced radar systems that helped win the war and launched the military-industrial complex, including organizations like MIT Lincoln Laboratory and MITRE. It’s in our ethos, it’s in our culture, and this is another extension of that. This is another way for MIT to lead in tough tech and work on the world’s hardest problems. We couldn’t do this class in another university in this country.”

Tapping into student interest

Like many things at MIT, the class was inspired by a hackathon. For several years, college students in the U.S. Armed Forces’ Reserve Officers’ Training Corps (ROTC) program came to MIT from across the country for a weekend hackathon focused on solving specific military problems.

Last year, Keselman, Perez, and others decided to create the class to give MIT’s ROTC cadets more time to work on the projects and give them the opportunity to earn course credit. But when Keselman and Perez announced a class geared toward solving problems in the armed forces, many non-ROTC MIT students enrolled.

“We realized there was a lot of interest in national security at MIT beyond the ROTC cadets,” Keselman explains. “National security is obviously important to a lot of people, but it also offers super interesting problems you can’t find anywhere else. I think that attracted students from all over MIT.”

About 25 students enrolled the first year to work on a problem that prevented U.S. Navy SEALs from bringing lithium-ion batteries onto submarines. This year, the organizers, who include senior faculty members Fiona Murray, Sertac Karaman, and Vladimir Bulovic, couldn’t fit everyone who showed up, so they expanded to room 1-190, a lecture hall. They also added graduate-level credits and were more prepared for student interest.

More than 70 students registered this year from 15 different MIT departments, Harvard College, the Harvard Business School, and the Harvard Kennedy School. Student groups contain undergraduates, graduates, engineers, and business students. Many have military experience, and each group has access to mentors from places including the Navy, Air Force, Special Operations, and the Massachusetts State Police.

“Last year a student said, ‘This class is weird, and that’s exactly why it needs to stick around,’” Keselman says. “It is weird. It’s not normal for this many disciplines to come together, to have a Congressman showing up, Navy SEALs, and members of the Army’s Special Forces all sitting in a room. Some are active-duty students, some are mentors, but it’s an incredible melting pot. I think it’s exactly what MIT embodies.”

From projects to military programs

This year’s class project challenges students to develop countermeasures for autonomous drone systems, which travel by either air or sea. Over the course of the semester, teams have built solutions that achieve early drone detection, categorization, and countermeasures. The solutions must also integrate AI and consider domestic manufacturing capabilities and supply chains.

One group is using sensors to detect the auditory signature of drones in the air. In the class, they gave a live demo that would only signal a threat when it detected a certain steady pitch associated with the electric motor of an air drone.

“Nothing motivates MIT students like a problem in the real world that they know really matters,” Perez says. “At the core of this year’s problem is how we protect a human from a drone attack. They take the process seriously.”

Organizers have also talked about extending the class into a year-long program that would allow the teams to build their projects into real products in partnership with groups at places like Lincoln Laboratory.

“This class is spreading the seeds of collaboration between academia and government,” Keselman says. “It’s a true partnership as opposed to just a funding program. Government officials come to MIT and sit in the classroom and see what’s actually happening here — and they rave about how impressive all the work is.”

3 Questions: Making the most of limited data to boost pavement performance

Wed, 05/14/2025 - 11:20pm

Pavements form the backbone of our built environment. In the United States, almost 2.8 million lane-miles, or about 4.6 million lane-kilometers, are paved. They take us to work or school, take goods to their destinations, and much more.

To secure a more sustainable future, we must take a careful look at the long-term performance and environmental impacts of our pavements. Haoran Li, a postdoc at the MIT Concrete Sustainability Hub and the Department of Civil and Environmental Engineering, is deeply invested in studying how to give stakeholders the information and tools they need to make informed pavement decisions with the future in mind. Here, he discusses life-cycle assessments for pavements as well as research from MIT in addressing pavement sustainability.

Q: What is life-cycle assessment, and why does it matter for pavements?

A: Life-cycle assessment (LCA) is a method that helps us holistically assess the environmental impacts of products and systems throughout their life cycle — everything from the impacts of raw materials to construction, use, maintenance, and repair, and finally decommissioning. For pavements, up to 78 percent of the life-cycle impact comes from the use phase, with the majority stemming from vehicle fuel use impacted by pavement characteristics, such as stiffness and smoothness. This phase also includes the sunlight reflected by pavements: Lighter, more reflective pavement bounces heat back into the atmosphere instead of absorbing it, which can help keep nearby buildings and streets cooler. At the same time, there are positive use phase impacts like carbon uptake — the natural process by which cement-based products like concrete roads and infrastructure sequester CO2 [carbon dioxide] from the atmosphere. Due to the sheer area of our pavements, they offer great potential as part of the sustainability solution. Unlike many decarbonization solutions, pavements are managed by government agencies and influence the emissions from vehicles and surrounding buildings, allowing for a coordinated push toward sustainability through better materials, designs, and maintenance.

Q: What are the gaps in current pavement life-cycle assessment methods and tools and what has the MIT Concrete Sustainability Hub done to address them so far?

A: A key gap is the complexity of performing pavement LCA. Practitioners should assess both the long-term structural performance and environmental impacts of paving materials, considering the pavements’ interactions with the built environment. Another key gap is the great uncertainty associated with pavement LCA. Since pavements are designed to last for decades, it is necessary to handle the inherent uncertainty through their long-term performance evaluations.

To tackle these challenges, the MIT Concrete Sustainability Hub (CSHub) developed an innovative method and practical tools that address data intensity and uncertainty while offering context-specific and probabilistic LCA strategies. For instance, we demonstrated that it is possible to achieve meaningful results on the environmentally preferred pavement alternatives while reducing data collection efforts by focusing on the most influential and least variable parameters. By targeting key variables that significantly impact the pavement’s life cycle, we can streamline the process and still obtain robust conclusions. Overall, the efforts of the CSHub aim to enhance the accuracy and efficiency of pavement LCAs, making them better aligned with real-world conditions and more manageable in terms of data requirements.

Q: How does the MIT Concrete Sustainability Hub’s new streamlined pavement life-cycle assessment method improve on previous designs?

A: The CSHub recently developed a new framework to streamline both probabilistic and comparative LCAs for pavements. Probabilistic LCA accounts for randomness and variability in data, while comparative LCA allows the analysis of different options simultaneously to determine the most sustainable choice.

One key innovation is the use of a structured data underspecification approach, which prioritizes the data collection efforts. In pavement LCA, underspecifying can reduce the overall data collection burden by up to 85 percent, allowing for a reliable decision-making process with minimal data. By focusing on the most critical elements, we can still reach robust conclusions without the need for extensive data collection.
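
As a loose illustration of the underspecification idea (not the CSHub's actual method or data), one can run a probabilistic LCA with wide "underspecified" ranges for every input, rank the inputs by how much of the output variance they drive, and then collect detailed data only for the top-ranked few. All inputs, ranges, and the toy emissions model below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20_000

# Hypothetical pavement-LCA inputs with wide "underspecified" ranges (units arbitrary).
inputs = {
    "cement_content":   rng.uniform(0.8, 1.2, N),
    "traffic_volume":   rng.uniform(0.5, 2.0, N),
    "pavement_albedo":  rng.uniform(0.9, 1.1, N),
    "maintenance_freq": rng.uniform(0.95, 1.05, N),
}

# Toy life-cycle emissions model: weighted product of the inputs (purely illustrative).
emissions = (inputs["cement_content"] ** 1.0 * inputs["traffic_volume"] ** 2.0
             * inputs["pavement_albedo"] ** 0.3 * inputs["maintenance_freq"] ** 0.1)

# Rank inputs by squared correlation with the output, a crude proxy for variance
# contribution. Only the top-ranked inputs would then be specified in detail;
# the rest can stay at their coarse, underspecified ranges.
ranking = sorted(
    inputs, key=lambda k: np.corrcoef(inputs[k], emissions)[0, 1] ** 2, reverse=True
)
print("Collect detailed data for (in order):", ranking)
```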

To make this framework practical and accessible, it is being integrated into an online LCA software tool. This tool facilitates use by practitioners, such as departments of transportation and metropolitan planning organizations. It helps them identify choices that lead to the highest-performing, longest-lasting, and most environmentally friendly pavements. Some of these solutions could include incorporating low-carbon concrete mixtures, prioritizing long-lasting treatment actions, and optimizing the design of pavement geometry to reduce life-cycle greenhouse gas emissions.

Overall, the CSHub’s new streamlined pavement LCA method significantly improves the efficiency and accessibility of conducting pavement LCAs, making it easier for stakeholders to make informed decisions that enhance pavement performance and sustainability.

Deploying a practical solution to space debris

Wed, 05/14/2025 - 5:30pm

At this moment, there are approximately 35,000 tracked human-generated objects in orbit around Earth. Of these, only about one-third are active payloads: science and communications satellites, research experiments, and other beneficial technology deployments. The rest are categorized as debris — defunct satellites, spent rocket bodies, and the detritus of hundreds of collisions, explosions, planned launch vehicle separations, and other “fragmentation events” that have occurred throughout humanity’s 67 years of space launches. 

The problem of space debris is well documented, and only set to grow in the near term as launch rates increase and fragmentation events escalate accordingly. The clutter of debris — which includes an estimated 1 million objects over 1 centimeter, in addition to the tracked objects — regularly causes damage to satellites, requires the repositioning of the International Space Station, and has the potential to cause catastrophic collisions with increasing frequency. 

To address this issue, in 2019 the World Economic Forum selected a team co-led by MIT Associate Professor Danielle Wood’s Space Enabled Research Group at the MIT Media Lab to create a system for scoring space mission operators on their launch and de-orbit plans, collision-avoidance measures, debris generation, and data sharing, among other factors that would allow for better coordination and maintenance of space objects. The team has developed a system called the Space Sustainability Rating (SSR), and launched it in 2021 as an independent nonprofit. 

“Satellites provide valuable services that impact everyone in the world by helping us understand the environment, communicate globally, navigate, and operate our modern infrastructure. As innovative new missions are proposed that operate thousands of satellites, a new approach is needed to provide space traffic management. National governments and space operators need to design coordination approaches to reduce the risk of losing access to valuable satellite missions,” says Wood, who is jointly appointed in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics (AeroAstro). “The Space Sustainability Rating plays a role by compiling internationally recognized responsible on-orbit behaviors, and celebrating space actors that implement them.” 

France-based Eutelsat Group, a geostationary Earth orbit and low Earth orbit satellite operator, signed on as the first constellation operator with a large deployment of satellites to undergo a rating. Eutelsat submitted a mission to SSR for assessment, and was rated on a tiered scoring system based on six performance modules. Eutelsat earned a platinum rating with a score exceeding 80 percent, indicating that the mission demonstrated exceptional sustainability in design, operations, and disposal practices.

As of December 2024, SSR has also provided ratings to operators such as OHB Sweden AB, Stellar, and TU Delft. 

In a new open-access paper published in Acta Astronautica, lead author Minoo Rathnasabapathy, Wood, and the SSR team provide the detailed history, motivation, and design of the Space Sustainability Rating as an incentive system that provides a score for space operators based on their effort to reduce space debris and collision risk. The researchers include AeroAstro alumnus Miles Lifson SM '20, PhD '24; University of Texas at Austin professor and former MIT MLK Scholar Moriba Jah; and collaborators from the European Space Agency, BryceTech, and the Swiss Federal Institute of Technology Lausanne Space Center (eSpace).

The paper provides transparency about the inception of SSR as a cross-organizational collaboration and its development as a composite indicator that evaluates missions across multiple quantifiable factors. The aim of SSR is to provide actionable feedback and a score recognizing operators’ contributions to the space sustainability effort. The paper also addresses the challenges SSR faces in adoption and implementation, and its alignment with various international space debris mitigation guidelines. 
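
A composite indicator of this kind can be pictured as a weighted sum of module scores mapped onto tiers. The module names, weights, and tier thresholds below are placeholders for illustration only; the article confirms just that the rating uses six performance modules and that platinum corresponds to a score above 80 percent.

```python
# Hypothetical module scores for one mission, each normalized to [0, 1].
module_scores = {
    "mission_design": 0.85,
    "collision_avoidance": 0.90,
    "data_sharing": 0.75,
    "detectability_and_tracking": 0.80,
    "standards_adoption": 0.95,
    "external_services": 0.70,
}

# Equal weights as a placeholder; the real SSR weighting is not given in the article.
weights = {name: 1 / len(module_scores) for name in module_scores}

score = sum(weights[m] * s for m, s in module_scores.items())

def tier(score):
    """Illustrative tier thresholds; only 'platinum above 80 percent' is confirmed above."""
    if score > 0.80:
        return "platinum"
    if score > 0.60:
        return "gold"
    if score > 0.40:
        return "silver"
    return "bronze"

print(f"composite score = {score:.0%} -> {tier(score)}")
```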

SSR draws heavily on proven rating methodologies from other industries, particularly Leadership in Energy and Environmental Design (LEED) in the building and manufacturing industries, Sustainability Assessment of Food and Agriculture systems (SAFA) in the agriculture industry, and Sustainability Tracking, Assessment and Rating System (STARS) in the education industry. 

“By grounding SSR in quantifiable metrics and testing it across diverse mission profiles, we created a rating system that recognizes sustainable decisions and operations by satellite operators, aligned with international guidelines and industry best practices,” says Rathnasabapathy. 

The Space Sustainability Rating is a nongovernmental approach to encourage space mission operators to take responsible actions to reduce space debris and collision risk. The paper highlights the roles for private sector space operators and public sector space regulators to put steps in place to ensure such responsible actions are pursued. 

The Space Enabled Research Group continues to perform academic research that illustrates the benefits of space missions and government oversight bodies enforcing sustainable and safe space practices. Future work will highlight the need for a sustainability focus as practices such as satellite service and in-space manufacturing start to become more common. 

Dimitris Bertsimas receives the 2025-2026 Killian Award

Wed, 05/14/2025 - 4:00pm

Dimitris Bertsimas SM ’87, PhD ’88, a leading figure in operations research, has been named the recipient of the 2025-26 James R. Killian Jr. Faculty Achievement Award. It is the highest honor the MIT faculty grants to its own professors.

Bertsimas is the Boeing Professor of Operations Research at the MIT Sloan School of Management, where he has made substantial contributions to business practices in many fields. He has also been a prolific advisor of graduate students and an enterprising leader of academic programs, serving as the inaugural faculty director of the Master of Business Analytics (MBAn) program since 2016, associate dean of business analytics since 2019, and vice provost for open learning since 2024.

“To be recognized among the group of Killian Award winners is a very humbling experience,” Bertsimas says. “I love this institution. This is where I have spent the last 40 years of my life. We don’t do things at MIT to get awards; we do things here because we believe they are important. It’s definitely something I’m proud of, but I’m also humbled to be in the company of many of my heroes.”

The Killian Award citation states that Bertsimas, “through his remarkable intellectual breadth and accomplishments, incredible productivity, outstanding contributions to theory and practice, and educational leadership, has made enormous contributions to his profession, the Institute, and the world.” It also notes that his “scholarly contributions are both vast and groundbreaking.”

Bertsimas received his BS in electrical engineering and computer science from the National Technical University of Athens in Greece. He moved to MIT in 1985 for his graduate work, earning his MS in operations research and his PhD in applied mathematics and operations research. After completing his doctoral work in 1988, Bertsimas joined the MIT faculty, and has remained at the Institute ever since.

A powerhouse researcher, Bertsimas has tackled a wide range of problems during his career. One area of his work has focused on optimization, the development of mathematical tools to help business operations be as efficient and logical as possible. Another focus of his scholarship has been machine learning and its interaction with optimization as well as stochastic systems. Overall, Bertsimas has developed the concepts and tools of “robust optimization,” allowing people to make better decisions under uncertainty.

At times in his career, Bertsimas has focused on health care issues, examining how machine learning can be used to develop more tools for personalized medical care. But all told, Bertsimas’ work has long been applied across many industries, from medicine to finance, energy, and beyond. The fingerprints of his research can be found in financial portfolios, school bus routing, supply chain logistics, energy use, medical data mining, diabetes management, and more.

“My strategy has been to address significant challenges that the world faces, and try to make progress,” Bertsimas says.

A dedicated educator, Bertsimas has been the principal doctoral thesis advisor to 103 MIT PhD students as of this spring. Lately he has been advising about five doctoral students per year, a remarkable number.

“Working with my doctoral students is my principal and most favorite activity,” Bertsimas says. “Second, in my research, I’ve tried to address problems that I think are important. If you solve them, something changes, in what we teach or in industry, and in short order. Not in 50 years, but in two years. Third, I feel it’s my obligation to educate — not only to create new knowledge, but to transmit it.”

As such, Bertsimas has been the founder and driving force behind MIT Sloan’s leading-edge MBAn program, and has thrown himself into leading MIT’s Open Learning efforts over the past year. He has also founded 10 data analytics companies during his career, while co-authoring hundreds of papers and eight graduate-level textbooks on data analytics.

The Killian Award was founded in 1971 in recognition of “extraordinary professional accomplishments by full-time members of the MIT faculty,” as the citation notes. The award is named after James R. Killian Jr., who served as president of MIT from 1948 to 1959 and as chair of the MIT Corporation from 1959 to 1971.

By tradition, Bertsimas will give a lecture in spring 2026 about his work.

The Killian Award is the latest honor in Bertsimas’ career. In 2019, he received the John von Neumann Theory Prize from INFORMS, the Institute for Operations Research and the Management Sciences, for his contributions to the theory of operations research and the management sciences. He also received the INFORMS President’s Award in 2019 for his contributions to societal welfare. Bertsimas was elected a member of the National Academy of Engineering at age 43.

Reflecting on his career so far, Bertsimas emphasizes that he operates on a philosophy centered on positive thinking, high aspirations, and a can-do attitude applied to making a difference in the world for other people. He praises MIT Sloan and the Operations Research Center as ideal places to pursue his work, due to their interdisciplinary nature, the quality of the students, and their openness to founding firms based on breakthrough work.

“I have been very happy at Sloan,” Bertsimas says. “It gives me the opportunity to work on things that are important with exceptional students predominantly from the Operations Research Center, and encourages my entrepreneurial spirit. Being at MIT Sloan and at the Operations Research Center has made a material difference in my career and my life.”

Steven Truong ’20 named 2025 Knight-Hennessy Scholar

Wed, 05/14/2025 - 10:45am

MIT alumnus Steven Truong ’20 has been awarded a 2025 Knight-Hennessy Scholarship and will join the eighth cohort of the prestigious fellowship. Knight-Hennessy Scholars receive up to three years of financial support for graduate studies at Stanford University.

Knight-Hennessy Scholars are selected for their independence of thought, purposeful leadership, and civic mindset. Truong is dedicated to making scientific advances in metabolic disorders, specifically diabetes, a condition that has affected many of his family members.

Truong, the son of Vietnamese refugees, originally hails from Minneapolis and graduated from MIT in 2020 with bachelor’s degrees in biological engineering and creative writing. During his time at MIT, Truong conducted research on novel diabetes therapies with professors Daniel Anderson and Robert Langer at the Koch Institute for Integrative Cancer Research and with Professor Douglas Lauffenburger in the Department of Biological Engineering.

Truong also founded a diabetes research project in Vietnam and co-led Vietnam’s largest genome-wide association study with physicians at the University of Medicine and Pharmacy in Ho Chi Minh City, where the team investigated the genetic determinants of Type 2 diabetes.

In his senior year at MIT, Truong won a Marshall Scholarship for post-graduate studies in the U.K. As a Marshall Scholar, he completed an MPhil in computational biology at Cambridge University and an MA in creative writing at Royal Holloway, University of London. Truong is currently pursuing an MD and a PhD in biophysics at the Stanford School of Medicine.

In addition to winning a Knight-Hennessy Scholarship and the Marshall Scholarship, Truong was the recipient of a 2019-20 Goldwater Scholarship and a 2023 Paul and Daisy Soros Fellowship for New Americans.

Students interested in applying to the Knight-Hennessy Scholars program can contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development. 

Drug injection device wins MIT $100K Competition

Wed, 05/14/2025 - 10:00am

The winner of this year’s MIT $100K Entrepreneurship Competition is helping advanced therapies reach more patients faster with a new kind of drug-injection device.

CoFlo Medical says its low-cost device can deliver biologic drugs more than 10 times faster than existing methods, accelerating the treatment of a range of conditions including cancers, autoimmune diseases, and infectious diseases.

“For patients battling these diseases, every hour matters,” said Simon Rufer SM ’22 in the winning pitch. “Biologic drugs are capable of treating some of the most challenging diseases, but their administration is unacceptably time-consuming, infringing on the freedom of the patient and effectively leaving them tethered to their hospital beds. The requirement of a hospital setting also makes biologics all but impossible in remote and low-access areas.”

Today, biologic drugs are mainly delivered through intravenous infusions, requiring patients to sit in hospital beds for hours during each delivery. That’s because many biologic drugs are too viscous to be pushed through a needle. CoFlo’s device enables quick injections of biologic drugs no matter how viscous they are. It works by surrounding the viscous drug with a second, lower-viscosity fluid.

“Imagine trying to force a liquid as viscous as honey through a needle: It’s simply not possible,” said Rufer, who is currently a PhD candidate in the Department of Mechanical Engineering. “Over the course of six years of research and development at MIT, we’ve overcome a myriad of fluidic instabilities that have otherwise made this technology impossible. We’ve also patented the fundamental inner workings of this device.”
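
To get a feel for the underlying physics (this is a back-of-the-envelope estimate, not CoFlo's analysis, and all numbers are illustrative), the Hagen-Poiseuille law, pressure drop = 8 * viscosity * length * flow rate / (pi * radius^4), shows why a honey-like drug cannot simply be forced through a fine needle:

```python
import math

def pressure_drop_pa(viscosity_pa_s, length_m, flow_m3_s, radius_m):
    """Hagen-Poiseuille pressure drop for laminar flow through a cylindrical tube."""
    return 8 * viscosity_pa_s * length_m * flow_m3_s / (math.pi * radius_m**4)

# Illustrative numbers (not from CoFlo): a honey-like biologic (~10 Pa·s)
# pushed at 1 mL/min through a thin needle (~0.1 mm inner radius, 13 mm long).
dp = pressure_drop_pa(viscosity_pa_s=10, length_m=0.013, flow_m3_s=1e-6 / 60, radius_m=1e-4)

# A thumb pressing with ~30 N on a ~1 cm^2 syringe plunger supplies roughly 0.3 MPa.
print(f"required pressure ~{dp / 1e6:.0f} MPa vs ~0.3 MPa available by hand")
```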

Rufer made the winning pitch on May 12 to a packed Kresge Auditorium that included a panel of judges. In a video, he showed someone injecting a biologic drug with CoFlo’s device using just one hand.

Rufer says the second fluid in the device could be the buffer of the drug solution itself, which wouldn’t alter the drug formulation and could potentially expedite the device’s approval in clinical trials. The device can also easily be made using existing mass manufacturing processes, which will keep the cost low.

In laboratory experiments, CoFlo’s team has demonstrated injections that are up to 200 times faster.

“CoFlo is the only technology that is capable of administering viscous drugs while simultaneously optimizing the patient experience, minimizing the clinical burden, and reducing device cost,” Rufer said.

Celebrating entrepreneurship

The MIT $100K Competition started more than 30 years ago, when students, along with the late MIT Professor Ed Roberts, raised $10,000 to turn MIT’s “mens et manus” (“mind and hand”) motto into a startup challenge. Over time, with sponsor support, the event grew into the renowned, highly anticipated startup competition it is today, highlighting some of the most promising new companies founded by MIT community members each year.

The Monday night event was the culmination of months of work and preparation by participating teams. The $100K program began with student pitches in December and was followed by mentorship, funding, and other support for select teams over the course of ensuing months.

This year more than 50 teams applied for the $100K’s final event. A network of external judges whittled that down to the eight finalists that made their pitches.

Other winners

In addition to the grand prize, finalists were also awarded a $50,000 second-place prize, a $5,000 third-place prize, and a $5,000 audience choice award, which was voted on during the judges’ deliberations.

The second-place prize went to Haven, an artificial intelligence-powered financial planning platform that helps families manage lifelong disability care. Haven’s pitch was delivered by Tej Mehta, a student in the MIT Sloan School of Management who explained the problem by sharing his own family’s experience managing his sister’s intellectual disability.

“As my family plans for the future, a number of questions are keeping us up at night,” Mehta told the audience. “How much money do we need to save? What public benefits is she eligible for? How do we structure our private assets so she doesn’t lose those public benefits? Finally, how do we manage the funds and compliance over time?”

Haven works by using family information and goals to build a personalized roadmap that can predict care needs and costs over more than 50 years.

“We recommend to families the exact next steps they need to take, what to apply for, and when,” Mehta explained.

The third-place prize went to Aorta Scope, which combines AI and ultrasound to provide augmented reality guidance during vascular surgery. Today, surgeons must rely on a 2-D X-ray image as they feed a large stent into a patient’s body during a common surgery known as endovascular repair.

Aorta Scope has developed a platform for real-time, 3-D implant alignment. The solution combines intravascular ultrasound technology with fiber optic shape sensing. Tom Dillon built the system that combines data from those sources as part of his ongoing PhD in MIT’s Department of Mechanical Engineering.

Finally, the audience choice award went to Flood Dynamics, which provides real-time flood risk modeling to help cities, insurers, and developers adapt and protect urban communities from flooding.

Although most urban flood damage today is driven by rain, flood models don’t account for rainfall, leaving cities less prepared for flooding risks.

“Flooding, and especially rain-driven flooding, is the costliest natural hazard around the world today,” said Katerina Boukin SM ’20, PhD ’25, who developed the company’s technology at MIT. “The price of staying rain-blind is really steep. This is an issue that is costing the U.S. alone more than $30 billion a year.”

Study shows vision-language models can’t handle queries with negation words

Wed, 05/14/2025 - 12:00am

Imagine a radiologist examining a chest X-ray from a new patient. She notices the patient has swelling in the tissue but does not have an enlarged heart. Looking to speed up diagnosis, she might use a vision-language machine-learning model to search for reports from similar patients.

But if the model mistakenly identifies reports with both conditions, the most likely diagnosis could be quite different: If a patient has tissue swelling and an enlarged heart, the condition is very likely to be cardiac related, but with no enlarged heart there could be several underlying causes.

In a new study, MIT researchers have found that vision-language models are extremely likely to make such a mistake in real-world situations because they don’t understand negation — words like “no” and “doesn’t” that specify what is false or absent. 

“Those negation words can have a very significant impact, and if we are just using these models blindly, we may run into catastrophic consequences,” says Kumail Alhamoud, an MIT graduate student and lead author of this study.

The researchers tested the ability of vision-language models to identify negation in image captions. The models often performed no better than a random guess. Building on those findings, the team created a dataset of images with corresponding captions that include negation words describing missing objects.

They show that retraining a vision-language model with this dataset leads to performance improvements when a model is asked to retrieve images that do not contain certain objects. It also boosts accuracy on multiple choice question answering with negated captions.

But the researchers caution that more work is needed to address the root causes of this problem. They hope their research alerts potential users to a previously unnoticed shortcoming that could have serious implications in high-stakes settings where these models are currently being used, from determining which patients receive certain treatments to identifying product defects in manufacturing plants.

“This is a technical paper, but there are bigger issues to consider. If something as fundamental as negation is broken, we shouldn’t be using large vision/language models in many of the ways we are using them now — without intensive evaluation,” says senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Ghassemi and Alhamoud are joined on the paper by Shaden Alshammari, an MIT graduate student; Yonglong Tian of OpenAI; Guohao Li, a former postdoc at Oxford University; Philip H.S. Torr, a professor at Oxford; and Yoon Kim, an assistant professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

Neglecting negation

Vision-language models (VLMs) are trained using huge collections of images and corresponding captions, which they learn to encode as sets of numbers called vector representations. The models use these vectors to distinguish between different images.

A VLM utilizes two separate encoders, one for text and one for images, and the encoders learn to output similar vectors for an image and its corresponding text caption.

“The captions express what is in the images — they are a positive label. And that is actually the whole problem. No one looks at an image of a dog jumping over a fence and captions it by saying ‘a dog jumping over a fence, with no helicopters,’” Ghassemi says.

Because the image-caption datasets don’t contain examples of negation, VLMs never learn to identify it.

To dig deeper into this problem, the researchers designed two benchmark tasks that test the ability of VLMs to understand negation.

For the first, they used a large language model (LLM) to re-caption images in an existing dataset by asking the LLM to think about related objects not in an image and write them into the caption. Then they tested models by prompting them with negation words to retrieve images that contain certain objects, but not others.

For the second task, they designed multiple choice questions that ask a VLM to select the most appropriate caption from a list of closely related options. These captions differ only by adding a reference to an object that doesn’t appear in the image or negating an object that does appear in the image.

The models often failed at both tasks, with image retrieval performance dropping by nearly 25 percent with negated captions. When it came to answering multiple choice questions, the best models only achieved about 39 percent accuracy, with several models performing at or even below random chance.
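
The paper defines its own benchmarks; the snippet below is only a generic illustration of the multiple-choice setup, using an off-the-shelf CLIP model from Hugging Face Transformers and made-up captions. It scores each candidate caption against an image and picks the highest-similarity one; the failure mode described above is that negated captions tend to be scored almost as if the negation were not there.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder path; any RGB image works

# Closely related candidate captions: one adds an absent object, one negates it.
captions = [
    "a dog jumping over a fence",
    "a dog jumping over a fence, with a helicopter",
    "a dog jumping over a fence, with no helicopters",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, number of captions)

best = logits.argmax(dim=-1).item()
print("model's choice:", captions[best])
```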

One reason for this failure is a shortcut the researchers call affirmation bias — VLMs ignore negation words and focus on objects in the images instead.

“This does not just happen for words like ‘no’ and ‘not.’ Regardless of how you express negation or exclusion, the models will simply ignore it,” Alhamoud says.

This was consistent across every VLM they tested.

“A solvable problem”

Since VLMs aren’t typically trained on image captions with negation, the researchers developed datasets with negation words as a first step toward solving the problem.

Using a dataset with 10 million image-text caption pairs, they prompted an LLM to propose related captions that specify what is excluded from the images, yielding new captions with negation words.
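
The article does not give the exact prompt or LLM used, but the re-captioning step has roughly the shape sketched below, where ask_llm stands in for whatever LLM client is available and the prompt wording is an assumption rather than the authors’ text.

```python
# Sketch of the negation-aware re-captioning step used to build training data.
# `ask_llm` is a placeholder for any chat/completions API; the prompt is an
# assumption, not the authors' exact wording.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def add_negated_caption(original_caption: str) -> str:
    prompt = (
        f"Here is an image caption: '{original_caption}'. "
        "Name one object that is related to the scene but clearly absent, "
        "and rewrite the caption so it explicitly states that the object "
        "is not present. Return only the new caption."
    )
    return ask_llm(prompt)

# Hypothetical example of the intended behavior:
# add_negated_caption("a dog jumping over a fence")
# -> "a dog jumping over a fence, with no other animals in sight"
```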

They had to be especially careful that these synthetic captions still read naturally; otherwise, a VLM fine-tuned on them could fail in the real world when faced with more complex captions written by humans.

They found that fine-tuning VLMs with their dataset led to performance gains across the board. It improved models’ image retrieval abilities by about 10 percent, while also boosting performance in the multiple-choice question answering task by about 30 percent.

“But our solution is not perfect. We are just recaptioning datasets, a form of data augmentation. We haven’t even touched how these models work, but we hope this is a signal that this is a solvable problem and others can take our solution and improve it,” Alhamoud says.

At the same time, he hopes their work encourages more users to think about the problem they want to use a VLM to solve and design some examples to test it before deployment.
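
A minimal pre-deployment probe along these lines might check whether a model actually prefers the image that satisfies a negated query. In the sketch below, the query, image files, and pass criterion are placeholders, and the CLIP checkpoint is only an example of a model one might be evaluating.

```python
# Rough pre-deployment check: does the model rank the image that truly matches
# a negated query above one that violates it? All inputs are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = "a street scene with no cars"
images = [
    Image.open("street_without_cars.jpg"),  # should score highest
    Image.open("street_with_cars.jpg"),
]

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_text[0]  # one score per image

if int(scores.argmax()) != 0:
    print("Warning: the model ignored the negation in this probe.")
```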

In the future, the researchers could expand upon this work by teaching VLMs to process text and images separately, which may improve their ability to understand negation. In addition, they could develop additional datasets that include image-caption pairs for specific applications, such as health care.

Duke University Press to join MIT Press’ Direct to Open, publish open-access monographs

Tue, 05/13/2025 - 5:10pm

The MIT Press has announced that beginning in 2026, Duke University Press will join its Direct to Open (D2O) program. This collaboration marks the first such partnership with another university press for the D2O program, and reaffirms their shared commitment to open access publishing that is ethical, equitable, and sustainable.

Launched in 2021, D2O is the MIT Press’ framework for open access monographs that shifts publishing from a solely market-based purchase model, where individuals and libraries buy single e-books, to a collaborative, library-supported open access model. 

Duke University Press brings its distinguished catalog in the humanities and social sciences to Direct to Open, providing open access to 20 frontlist titles annually alongside the MIT Press’ 80 scholarly books each year. Its participation in the D2O program — which will also include free term access to a paywalled collection of 250 key backlist titles — enhances the range of openly available academic content for D2O’s library partners.

“By expanding the Direct to Open model to include one of the most innovative university presses publishing today, we’re taking a significant step toward building a more open and accessible future for academic publishing,” says Amy Brand, director and publisher of the MIT Press. “We couldn’t be more thrilled to be building this partnership with Duke University Press. This collaboration will benefit the entire scholarly community, ensuring that more books are made openly available to readers worldwide.”

“We are honored to participate in MIT Press’ dynamic and successful D2O program,” says Dean Smith, director of Duke University Press. “It greatly expands our open-access footprint and serves our mission of making bold and transformational scholarship accessible to the world.”

With Duke University Press’ involvement in 2026, D2O will feature multiple package options, combining content from both the MIT Press and Duke University Press. Participating institutions will have the opportunity to support each press individually, providing flexibility for libraries while fostering collective impact.

For details on how your institution might participate in or support Direct to Open, please visit the D2O website or contact the MIT Press library relations team.  
