MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT Department of Economics to launch James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work

Tue, 05/13/2025 - 4:35pm

Starting in July, MIT’s Shaping the Future of Work Initiative in the Department of Economics will usher in a significant new era of research, policy, and education of the next generation of scholars, made possible by a gift from the James M. and Cathleen D. Stone Foundation. In recognition of the gift and the expansion of priorities it supports, on July 1 the initiative will become part of the new James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work. This center will be officially launched at a public event in fall 2025.

The Stone Center will be led by Daron Acemoglu, Institute Professor, and co-directors David Autor, the Daniel (1972) and Gail Rubinfeld Professor in Economics, and Simon Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship. It will join a global network of 11 other wealth inequality centers funded by the Stone Foundation as part of an effort to advance research on the causes and consequences of the growing accumulation at the top of the wealth distribution.

“This generous gift from the Stone Foundation advances our pioneering economics research on inequality, technology, and the future of the workforce. This work will create a pipeline of scholars in this critical area of study, and it will help to inform the public and policymakers,” says Provost Cynthia Barnhart.

Originally established as part of MIT Blueprint Labs with a foundational gift from the William and Flora Hewlett Foundation, the Shaping the Future of Work Initiative is a nonpartisan research organization that applies economics research to identify innovative ways to move the labor market onto a more equitable trajectory, with a central focus on revitalizing labor market opportunities for workers without a college education. Building on frontier micro- and macro-economics, economic sociology, political economy, and other disciplines, the initiative seeks to answer key questions about the decline in labor market opportunities for non-college workers in recent decades. These labor market changes have been a major driver of growing wealth inequality, a phenomenon that has, in turn, broadly reshaped our economy, democracy, and society.

Support from the Stone Foundation will allow the new Stone Center to build on the Shaping the Future of Work Initiative’s ongoing research agenda and extend its focus to include a growing emphasis on the interplay between technologies and inequality, as well as the technology sector’s role in defining future inequality.

Core objectives of the James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work will include fostering connections between scholars doing pathbreaking research on automation, AI, the intersection of work and technology, and wealth inequality across disciplines, including within the Department of Economics, the MIT Sloan School of Management, and the MIT Stephen A. Schwarzman College of Computing; strengthening the pipeline of emerging scholars focused on these issues; and using research to inform and engage a wider audience including the public, undergraduate and graduate students, and policymakers.     

The Stone Foundation’s support will allow the center to strengthen and expand its commitments to produce new research, convene additional events to share research findings, promote connection and collaboration between scholars working on related topics, provide new resources for the center’s research affiliates, and expand public outreach to raise awareness of this important emerging challenge. “Cathy and I are thrilled to welcome MIT to the growing family of Stone Centers dedicated to studying the urgent challenges of accelerating wealth inequality,” James M. Stone says.

Agustín Rayo, dean of the School of Humanities, Arts, and Social Sciences, says, “I am thrilled to celebrate the creation of the James M. and Cathleen D. Stone Center in the MIT economics department. Not only will it enhance the cutting-edge work of MIT’s social scientists, but it will support cross-disciplinary interactions that will enable new insights and solutions to complex social challenges.”

Jonathan Gruber, chair of the Department of Economics, adds, “I couldn’t be more excited about the Stone Foundation’s support for the Shaping the Future of Work Initiative. The initiative’s leaders have been far ahead of the curve in anticipating the rapid changes that technological forces are bringing to the labor market, and their influential studies have helped us understand the potential effects of AI and other technologies on U.S. workers. The generosity of the Stone Foundation will allow them to continue this incredible work, while expanding their priorities to include other critical issues around inequality. This is a great moment for the paradigm-shifting research that Acemoglu, Autor, and Johnson are leading here at MIT.”

“We are grateful to the James M. and Cathleen D. Stone Foundation for their generous support enabling us to study two defining challenges of our age: inequality and the future of work,” says Acemoglu, who was awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel in 2024 (with co-laureates Simon Johnson and James A. Robinson). “We hope to go beyond exploring the causes of inequality and the determinants of the availability of good jobs in the present and in the future, but also develop ideas about how society can shape both the work of the future and inequality by its choices of institutions and technological trajectories.”

“We are incredibly fortunate to be joining the family of Stone Centers around the world. Jim and Cathleen Stone are far-sighted and generous donors, and we are delighted that they are willing to back us and MIT in this way,” says Johnson. “We look forward to working with all our colleagues, at MIT and around the world, to advance understanding and practical approaches to inequality and the future of work.”

Autor adds, “This support will enable us — and many others — to focus our scholarship, teaching and public outreach towards shaping a labor market that offers opportunity, mobility, and economic security to a far broader set of people.” 

Daily mindfulness practice reduces anxiety for autistic adults

Tue, 05/13/2025 - 2:40pm

Just 10 to 15 minutes of mindfulness practice a day led to reduced stress and anxiety for autistic adults who participated in a study led by scientists at MIT’s McGovern Institute for Brain Research. Participants in the study used a free smartphone app to guide their practice, giving them the flexibility to practice when and where they chose.

Mindfulness is a state in which the mind is focused only on the present moment. It is a way of thinking that can be cultivated with practice, often through meditation or breathing exercises — and evidence is accumulating that practicing mindfulness has positive effects on mental health. The new open-access study, reported April 8 in the journal Mindfulness, adds to that evidence, demonstrating clear benefits for autistic adults.

“Everything you want from this on behalf of somebody you care about happened: reduced reports of anxiety, reduced reports of stress, reduced reports of negative emotions, and increased reports of positive emotions,” says McGovern investigator and MIT Professor John Gabrieli, who led the research with Liron Rozenkrantz, an investigator at the Azrieli Faculty of Medicine at Bar-Ilan University in Israel and a research affiliate in Gabrieli’s lab. “Every measure that we had of well-being moved significantly in a positive direction,” adds Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

One of the reported benefits of practicing mindfulness is that it can reduce the symptoms of anxiety disorders. This prompted Gabrieli and his colleagues to wonder whether it might benefit adults with autism, who tend to report above average levels of anxiety and stress, which can interfere with daily living and quality of life. As many as 65 percent of autistic adults may also have an anxiety disorder.

Gabrieli adds that the opportunity for autistic adults to practice mindfulness with an app, rather than needing to meet with a teacher or class, seemed particularly promising. “The capacity to do it at your own pace in your own home, or any environment you like, might be good for anybody,” he says. “But maybe especially for people for whom social interactions can sometimes be challenging.”

The research team, including Cindy Li, the autism recruitment and outreach coordinator in Gabrieli’s lab, recruited 89 autistic adults to participate in their study. Those individuals were split into two groups: one would try the mindfulness practice for six weeks, while the others would wait and try the intervention later.

Participants were asked to practice daily using an app called Healthy Minds, which guides participants through seated or active meditations, each lasting 10 to 15 minutes. Participants reported that they found the app easy to use and had little trouble making time for the daily practice.

After six weeks, participants reported significant reductions in anxiety and perceived stress. These changes were not experienced by the wait-list group, which served as a control. However, after their own six weeks of practice, people in the wait-list group reported similar benefits. “We replicated the result almost perfectly. Every positive finding we found with the first sample we found with the second sample,” Gabrieli says.

The researchers followed up with study participants after another six weeks. Almost everyone had discontinued their mindfulness practice — but remarkably, their gains in well-being had persisted. Based on this finding, the team is eager to further explore the long-term effects of mindfulness practice in future studies. “There’s a hypothesis that a benefit of gaining mindfulness skills or habits is they stick with you over time — that they become incorporated in your daily life,” Gabrieli says. “If people are using the approach to being in the present and not dwelling on the past or worrying about the future, that’s what you want most of all. It’s a habit of thought that’s powerful and helpful.”

Even as they plan future studies, the researchers say they are already convinced that mindfulness practice can have clear benefits for autistic adults. “It’s possible mindfulness would be helpful at all kinds of ages,” Gabrieli says. But he points out the need is particularly great for autistic adults, who usually have fewer resources and support than autistic children have access to through their schools. Gabrieli is eager for more people with autism to try the Healthy Minds app. “Having scientifically proven resources for adults who are no longer in school systems might be a valuable thing,” he says.

This research was funded, in part, by The Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT and the Yang Tan Collective.

How we think about protecting data

Tue, 05/13/2025 - 5:00am

How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public.

A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”

The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life.

“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write.

The open-access paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published today in Nature: Humanities and Social Sciences Communications. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.

The study is based around a card game with poker-type chips the researchers created to study the issue, called Data Slots. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, vehicle location information, and more — that relate to three types of domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in-person in 18 different countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study.

The point behind the game is to examine the valuations that members of the public themselves generate about data privacy. Some research on the subject involves surveys with pre-set options that respondents choose from. But in Data Slots, the players themselves generate valuations for a wide range of data-use scenarios, allowing the researchers to estimate the relative weight people place on privacy in different situations. 

The idea is “to let people themselves come up with their own ideas and assess the benefits and privacy concerns of their peers’ ideas, in a participatory way,” Ratti explains.

The game strongly suggests that people’s ideas about data privacy are malleable, although the results do indicate some tendencies. The data privacy card players valued most highly involved personal mobility; given the opportunity in the game to keep it or exchange it, players retained it in their hands 43 percent of the time, an indicator of its value. That was followed, in order, by personal health data and utility use. (With apologies to pet owners, the type of data privacy card players held on to the least, about 10 percent of the time, involved animal health.)
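
As a rough illustration of how that kind of valuation can be computed, the sketch below derives per-card retention rates from a handful of hypothetical play records; it is not the authors’ analysis code, and the card names and records are invented for the example.

```python
# Minimal sketch (hypothetical records, not the authors' analysis code):
# estimate how much players value keeping each data card private from how
# often they retain the card when they could exchange it.
from collections import defaultdict

# Each record: (card_type, kept), where kept=True means the player held on
# to the card when an exchange was offered.
plays = [
    ("personal mobility", True), ("personal mobility", True),
    ("personal mobility", False), ("personal health", True),
    ("utility use", False), ("animal health", False),
    ("animal health", False),
]

offered = defaultdict(int)
held = defaultdict(int)
for card, kept in plays:
    offered[card] += 1
    held[card] += int(kept)

# Retention rate serves as a rough proxy for the relative weight a player
# places on that card's privacy.
for card, n in offered.items():
    print(f"{card}: retained {held[card] / n:.0%} of the time")
```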

However, the game distinctly suggests that the value of privacy is highly contingent on specific use-cases. The game shows that people care about health data to a substantial extent but also value the use of environmental data in the workplace, for instance. And the players of Data Slots also seem less concerned about data privacy when use of data is combined with clear benefits. In combination, that suggests a deal to be cut: Using health data can help people understand the effects of the workplace on wellness.

“Even in terms of health data in work spaces, if they are used in an aggregated way to improve the workspace, for some people it’s worth combining personal health data with environmental data,” Mora says.

Mazzarello adds: “Now perhaps the company can make some interventions to improve overall health. It might be invasive, but you might get some benefits back.”

In the bigger picture, the researchers suggest, taking a more flexible, user-driven approach to understanding what people think about data privacy can help inform better data policy. Cities — the core focus of the Senseable City Lab — often face such scenarios. City governments can collect a lot of aggregate traffic data, for instance, but public input can help determine how anonymized such data should be. Understanding public opinion along with the benefits of data use can produce viable policies for local officials to pursue.

“The bottom line is that if cities disclose what they plan to do with data, and if they involve resident stakeholders to come up with their own ideas about what they could do, that would be beneficial to us,” Duarte says. “And in those scenarios, people’s privacy concerns start to decrease a lot.” 

Eldercare robot helps people sit and stand, and catches them if they fall

Tue, 05/13/2025 - 12:00am

The United States population is older than it has ever been. Today, the country’s median age is 38.9, which is nearly a decade older than it was in 1980. And the number of adults older than 65 is expected to balloon from 58 million to 82 million by 2050. The challenge of caring for the elderly, amid shortages in care workers, rising health care costs, and evolving family structures, is an increasingly urgent societal issue.

To help address the eldercare challenge, a team of MIT engineers is looking to robotics. They have built and tested the Elderly Bodily Assistance Robot, or E-BAR, a mobile robot designed to physically support the elderly and prevent them from falling as they move around their homes.

E-BAR acts as a set of robotic handlebars that follows a person from behind. A user can walk independently or lean on the robot’s arms for support. The robot can support the person’s full weight, lifting them from sitting to standing and vice versa along a natural trajectory. And the robot’s arms can catch them by rapidly inflating side airbags if they begin to fall.

With their design, the researchers hope to prevent falls, which today are the leading cause of injury in adults who are 65 and older. 

“Many older adults underestimate the risk of fall and refuse to use physical aids, which are cumbersome, while others overestimate the risk and may not exercise, leading to declining mobility,” says Harry Asada, the Ford Professor of Engineering at MIT. “Our design concept is to provide older adults having balance impairment with robotic handlebars for stabilizing their body. The handlebars go anywhere and provide support anytime, whenever they need.”

In its current version, the robot is operated via remote control. In future iterations, the team plans to automate much of the bot’s functionality, enabling it to autonomously follow and physically assist a user. The researchers are also working on streamlining the device to make it slimmer and more maneuverable in small spaces.

“I think eldercare is the next great challenge,” says E-BAR designer Roberto Bolli, a graduate student in the MIT Department of Mechanical Engineering. “All the demographic trends point to a shortage of caregivers, a surplus of elderly persons, and a strong desire for elderly persons to age in place. We see it as an unexplored frontier in America, but also an intrinsically interesting challenge for robotics.”

Bolli and Asada will present a paper detailing the design of E-BAR at the IEEE Conference on Robotics and Automation (ICRA) later this month.

Asada’s group at MIT develops a variety of technologies and robotic aids to assist the elderly. In recent years, others have developed fall-prediction algorithms and designed robots and automated devices, including robotic walkers, wearable self-inflating airbags, and robotic frames that secure a person with a harness and move with them as they walk.

In designing E-BAR, Asada and Bolli aimed for a robot that essentially does three tasks: providing physical support, preventing falls, and safely and unobtrusively moving with a person. What’s more, they looked to do away with any harness, to give a user more independence and mobility.

“Elderly people overwhelmingly do not like to wear harnesses or assistive devices,” Bolli says. “The idea behind the E-BAR structure is, it provides body weight support, active assistance with gait, and fall catching while also being completely unobstructed in the front. You can just get out anytime.”

The team looked to design a robot specifically for aging in place at home or helping in care facilities. Based on their interviews with older adults and their caregivers, they came up with several design requirements, including that the robot must fit through home doors, allow the user to take a full stride, and support their full weight to help with balance, posture, and transitions from sitting to standing.

The robot consists of a heavy, 220-pound base whose dimensions and structure were optimized to support the weight of an average human without tipping or slipping. Underneath the base is a set of omnidirectional wheels that allows the robot to move in any direction without pivoting, if needed. (Imagine a car’s wheels shifting to slide into a space between two other cars, without parallel parking.)
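
For readers curious about how such a base turns a desired motion into wheel commands, here is a minimal kinematics sketch for a generic three-wheel omnidirectional platform. E-BAR’s actual wheel layout and control software are not described in the article, so the geometry, numbers, and function below are assumptions for illustration only.

```python
# Minimal sketch (hypothetical geometry, not E-BAR's actual drive system):
# wheel speeds for a generic three-wheel omnidirectional base. Each wheel sits
# at angle theta_i around the base and drives tangentially, letting the base
# translate in any direction (vx, vy) and rotate (omega) without pivoting.
import math

def omni_wheel_speeds(vx, vy, omega, base_radius=0.3,
                      wheel_angles=(0.0, 2 * math.pi / 3, 4 * math.pi / 3)):
    """Return the linear rolling speed (m/s) required of each wheel."""
    return [
        -math.sin(a) * vx + math.cos(a) * vy + base_radius * omega
        for a in wheel_angles
    ]

# Slide sideways into a gap at 0.2 m/s with no rotation -- the maneuver the
# parallel-parking analogy describes.
print(omni_wheel_speeds(vx=0.2, vy=0.0, omega=0.0))
```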

Extending out from the robot’s base is an articulated body made from 18 interconnected bars, or linkages, that can reconfigure like a foldable crane to lift a person from a sitting to standing position, and vice versa. Two arms with handlebars stretch out from the robot in a U-shape, which a person can stand between and lean against if they need additional support. Finally, each arm of the robot is embedded with airbags made from a soft yet grippable material that can inflate instantly to catch a person if they fall, without causing bruising on impact. The researchers believe that E-BAR is the first robot able to catch a falling person without wearable devices or use of a harness.

They tested the robot in the lab with an older adult who volunteered to use the robot in various household scenarios. The team found that E-BAR could actively support the person as they bent down to pick something up from the ground and stretched up to reach an object off a shelf — tasks that can be challenging to do while maintaining balance. The robot also was able to lift the person up and over the lip of a tub, simulating the task of getting out of a bathtub.

Bolli envisions that a design like E-BAR would be ideal for use in the home by elderly people who still have a moderate degree of muscle strength but require assistive devices for activities of daily living.

“Seeing the technology used in real-life scenarios is really exciting,” says Bolli.

In their current paper, the researchers did not incorporate any fall-prediction capabilities in E-BAR’s airbag system. But another project in Asada’s lab, led by graduate student Emily Kamienski, has focused on developing algorithms with machine learning to control a new robot in response to the user’s real-time fall risk level.

Alongside E-BAR, Asada sees different technologies in his lab as providing different levels of assistance for people at certain phases of life or mobility.

“Eldercare conditions can change every few weeks or months,” Asada says. “We’d like to provide continuous and seamless support as a person’s disability or mobility changes with age.”

This work was supported, in part, by the National Robotics Initiative and the National Science Foundation.

In Down syndrome mice, 40Hz light and sound improve cognition, neurogenesis, connectivity

Mon, 05/12/2025 - 4:50pm

Studies by a growing number of labs have identified neurological health benefits from exposing human volunteers or animal models to light, sound, and/or tactile stimulation at the brain’s “gamma” frequency rhythm of 40Hz. In the latest such research at The Picower Institute for Learning and Memory and Alana Down Syndrome Center at MIT, scientists found that 40Hz sensory stimulation improved cognition and circuit connectivity and encouraged the growth of new neurons in mice genetically engineered to model Down syndrome.

Li-Huei Tsai, Picower Professor at MIT and senior author of the new study in PLOS ONE, says that the results are encouraging, but also cautions that much more work is needed to test whether the method, called GENUS (for gamma entrainment using sensory stimulation), could provide clinical benefits for people with Down syndrome. Her lab has begun a small study with human volunteers at MIT.

“While this work, for the first time, shows beneficial effects of GENUS on Down syndrome using an imperfect mouse model, we need to be cautious, as there is not yet data showing whether this also works in humans,” says Tsai, who directs The Picower Institute and The Alana Center, and is a member of MIT’s Department of Brain and Cognitive Sciences faculty.

Still, she says, the newly published article adds evidence that GENUS can promote a broad-based, restorative, “homeostatic” health response in the brain amid a wide variety of pathologies. Most GENUS studies have addressed Alzheimer’s disease in humans or mice, but others have found benefits from the stimulation for conditions such as “chemo brain” and stroke.

Down syndrome benefits

In the study, the research team led by postdoc Md Rezaul Islam and Brennan Jackson PhD ’23 worked with the commonly used “Ts65Dn” Down syndrome mouse model. The model recapitulates key aspects of the disorder, although it does not exactly mirror the human condition, which is caused by carrying an extra copy of chromosome 21.

In the first set of experiments in the paper, the team shows that an hour a day of 40Hz light and sound exposure for three weeks was associated with significant improvements on three standard short-term memory tests — two involving distinguishing novelty from familiarity and one involving spatial navigation. Because these kinds of memory tasks involve a brain region called the hippocampus, the researchers looked at neural activity there and measured a significant increase in activity indicators among mice that received the GENUS stimulation versus those that did not.

To better understand how stimulated mice could show improved cognition, the researchers examined whether cells in the hippocampus changed how they express their genes. To do this, the team used a technique called single cell RNA sequencing, which provided a readout of how nearly 16,000 individual neurons and other cells transcribed their DNA into RNA, a key step in gene expression. Many of the genes whose expression varied most prominently in neurons between the mice that received stimulation and those that did not were directly related to forming and organizing neural circuit connections called synapses.
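
As a hedged illustration of what such an analysis can look like in practice, the sketch below runs a generic single-cell differential-expression comparison with the open-source Scanpy toolkit. The file name, condition labels, and parameters are hypothetical; this is not the pipeline the researchers used.

```python
# Minimal sketch (not the study's pipeline): a generic single-cell RNA-seq
# differential-expression pass with Scanpy, assuming a hypothetical AnnData
# file "hippocampus.h5ad" whose .obs["condition"] labels each cell as "40hz"
# (GENUS-stimulated) or "control".
import scanpy as sc

adata = sc.read_h5ad("hippocampus.h5ad")       # ~16,000 cells in the study
sc.pp.normalize_total(adata, target_sum=1e4)   # library-size normalization
sc.pp.log1p(adata)                             # log-transform the counts

# Rank genes whose expression differs between stimulated and control cells.
sc.tl.rank_genes_groups(adata, groupby="condition", groups=["40hz"],
                        reference="control", method="wilcoxon")
top = sc.get.rank_genes_groups_df(adata, group="40hz").head(20)
print(top[["names", "logfoldchanges", "pvals_adj"]])
```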

To confirm the significance of that finding, the researchers directly examined the hippocampus in stimulated and control mice. They found that in a critical subregion, the dentate gyrus, stimulated mice had significantly more synapses.

Diving deeper

The team not only examined gene expression across individual cells, but also analyzed those data to assess whether there were patterns of coordination across multiple genes. Indeed, they found several such “modules” of co-expression. Some of this evidence further substantiated the idea that 40Hz-stimulated mice made important improvements in synaptic connectivity, but another key finding highlighted a role for TCF4, a key regulator of gene transcription needed for generating new neurons, or “neurogenesis.”  

The team’s analysis of genetic data suggested that TCF4 is underexpressed in Down syndrome mice, but the researchers saw improved TCF4 expression in GENUS-stimulated mice. When the researchers went to the lab bench to determine whether the mice also exhibited a difference in neurogenesis, they found direct evidence that stimulated mice exhibited more neurogenesis in the dentate gyrus than unstimulated mice. These increases in TCF4 expression and neurogenesis are only correlational, the researchers noted, but they hypothesize that the increase in new neurons likely helps explain at least some of the increase in new synapses and improved short-term memory function.

“The increased putative functional synapses in the dentate gyrus is likely related to the increased adult neurogenesis observed in the Down syndrome mice following GENUS treatment,” Islam says.

This study is the first to document that GENUS is associated with increased neurogenesis.

The analysis of gene expression modules also yielded other key insights. One is that a cluster of genes whose expression typically declines with normal aging, and in Alzheimer’s disease, remained at higher expression levels among mice who received 40Hz sensory stimulation.

And the researchers also found evidence that mice that received stimulation retained more cells in the hippocampus that express Reelin. Reelin-expressing neurons are especially vulnerable in Alzheimer’s disease, but expression of the protein is associated with cognitive resilience amid Alzheimer’s disease pathology, which Ts65Dn mice develop. About 90 percent of people with Down syndrome develop Alzheimer’s disease, typically after the age of 40.

“In this study, we found that GENUS enhances the percentage of Reln+ neurons in hippocampus of a mouse model of Down syndrome, suggesting that GENUS may promote cognitive resilience,” Islam says.

Taken together with other studies, Tsai and Islam say, the new results add evidence that GENUS helps to stimulate the brain at the cellular and molecular level to mount a homeostatic response to aberrations caused by disease pathology, be it neurodegeneration in Alzheimer’s, demyelination in chemo brain, or deficits of neurogenesis in Down syndrome.

But the authors also cautioned that the study had limits. Not only is the Ts65Dn model an imperfect reflection of human Down syndrome, but also the mice used were all male. Moreover, the cognitive tests in the study only measured short-term memory. And finally, while the study was novel for extensively examining gene expression in the hippocampus amid GENUS stimulation, it did not look at changes in other cognitively critical brain regions, such as the prefrontal cortex.

In addition to Jackson, Islam, and Tsai, the paper’s other authors are Maeesha Tasnim Naomi, Brooke Schatz, Noah Tan, Mitchell Murdock, Dong Shin Park, Daniela Rodrigues Amorim, Fred Jiang, S. Sebastian Pineda, Chinnakkaruppan Adaikkan, Vanesa Fernandez, Ute Geigenmuller, Rosalind Mott Firenze, Manolis Kellis, and Ed Boyden.

Funding for the study came from the Alana Down Syndrome Center at MIT and the Alana USA Foundation, the U.S. National Science Foundation, the La Caixa Banking Foundation, a European Molecular Biology Organization long-term postdoctoral fellowship, Barbara J. Weedon, Henry E. Singleton, and the Hubolow family.

Student spotlight: Aria Eppinger ’24

Fri, 05/09/2025 - 4:40pm

This interview is part of a series of short interviews from the MIT Department of Electrical Engineering and Computer Science, called Student Spotlights. Each spotlight features a student answering their choice of questions about themselves and life at MIT. Today’s interviewee, Aria Eppinger ’24, graduated with her undergraduate degree in Course 6-7 (Computer Science and Molecular Biology) last spring. This spring, she will complete her MEng in 6-7. Her thesis, supervised by Ford Professor of Engineering Doug Lauffenburger in the Department of Biological Engineering, investigates the biological underpinnings of adverse pregnancy outcomes, including preterm birth and preeclampsia, by applying polytope-fitting algorithms.

Q: Tell us about one teacher from your past who had an influence on the person you’ve become. 

A: There are many teachers who had a large impact on my trajectory. I would first like to thank my elementary and middle school teachers for imbuing in me a love of learning. I would also like to thank my high school teachers for not only teaching me the foundations of writing strong arguments, programming, and designing experiments, but also instilling in me the importance of being a balanced person. It can be tempting to be ruled by studies or work, especially when learning and working are so fun. My high school teachers encouraged me to pursue my hobbies, make memories with friends, and spend time with family. As life continues to be hectic, I’m so grateful for this lesson (even if I’m still working on mastering it).

Q: Describe one conversation that changed the trajectory of your life.

A: A number of years ago, I had the opportunity to chat with Warren Buffett. I was nervous at first, but soon put to ease by his descriptions of his favorite foods — hamburgers, French fries, and ice cream — and his hitchhiking stories. His kindness impressed and inspired me, which is something I carry with me and aim to emulate all these years later.

Q: Do you have any pets?

A: I have one dog who lives at home with my parents. Dodger, named after “Artful Dodger” in Oliver Twist, is as mischievous as beagles tend to be. We adopted him from a rescue shelter when I was in elementary school. 

Q: Are you a re-reader or a re-watcher — and if so, what are your comfort books, shows, or movies?

A: I don’t re-read many books or re-watch many movies, but I never tire of Jane Austen’s “Pride and Prejudice.” I bought myself an ornately bound copy when I was interning in New York City last summer. Austen’s other novels, especially “Sense and Sensibility,” “Persuasion,” and “Emma,” are also favorites, and I’ve seen a fair number of their movie and miniseries adaptations. My favorite adaptation is the 1995 BBC production of “Pride and Prejudice” because of the cohesion with the original book and the casting of the leads, as well as the touches and plot derivations added by the producer and director to bring the work to modern audiences. The adaptation is quite long, but I have fond memories of re-watching it with some fellow Austenites at MIT.

Q: If you had to teach a really in-depth class about one niche topic, what would you pick?

A: There are two types of people in the world: those who eat to live, and those who live to eat. As one of the latter, I would have to teach some sort of in-depth class on food. Perhaps I would teach the science behind baking chocolate cake, or churning the perfect ice cream. Or maybe I would teach the biochemistry of digesting. In any case, I would have to have lots of hands-on demos and reserve plenty for taste-testing!

Q: What was the last thing you changed your mind about?

A: Brisket! I never was a big fan of brisket until I went to a Texas BBQ restaurant near campus, The Smoke Shop BBQ. Growing up, I had never had true BBQ, so I was quite skeptical. However, I enjoyed not only the brisket but also the other dishes. The Brussels sprouts with caramelized onions is probably my favorite dish, but it feels like a crime to say that about a BBQ place!

Q: What are you looking forward to about life after graduation? What do you think you’ll miss about MIT? 

A: I’m looking forward to new adventures after graduation, including working in New York City and traveling to new places. I cross-registered to take Intensive Italian at Harvard this semester and am planning a trip to Italy to practice my Italian, see the historic sites, visit the Vatican, and taste the food. Non vedo l’ora di viaggiare all’Italia! [I can't wait to travel to Italy!]

While I’m excited for what lies ahead, I will miss MIT. What a joy it is to spend most of the day learning information from a fire hose, taking a class on a foreign topic because the course catalog description looked fun, talking to people whose viewpoint is very similar or very different from my own, and making friends that will last a lifetime.

School of Engineering faculty and staff receive awards for winter 2025

Fri, 05/09/2025 - 4:25pm

MIT faculty and researchers receive many external awards throughout the year. The MIT School of Engineering periodically highlights the honors, prizes, and medals won by community members working in academic departments, labs, and centers. Winter 2025 honorees include the following:

  • Faez Ahmed, the American Bureau of Shipping Career Development Professor in Naval Engineering and Utilization and an assistant professor in the Department of Mechanical Engineering (MechE), received a 2024 National Science Foundation (NSF) CAREER Award. The CAREER program is one of NSF’s most prestigious awards that supports early-career faculty who display outstanding research, excellent education, and the integration of education and research.
     
  • Martin Zdenek Bazant, the E.G. Roos (1944) Professor in the Department of Chemical Engineering (ChemE), was elected to the National Academy of Engineering (NAE). Membership in the NAE is awarded to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Angela Belcher, the James Mason Crafts Professor in the Department of Biological Engineering and the Department of Materials Science and Engineering (DMSE), received the National Medal of Science. The award is the nation’s highest honor for scientists and innovators.
     
  • Moshe E. Ben-Akiva, the Edmund K. Turner Professor in Civil Engineering, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Emery Brown, the Edward Hood Taplin Professor of Medical Engineering, received the National Medal of Science. The award is the nation’s highest honor for scientists and innovators.
     
  • Charles L. Cooney, professor emeritus of the Department of ChemE, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Yoel Fink, the Danae and Vasilis (1961) Salapatas Professor in the DMSE, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • James Fujimoto, the Elihu Thomson Professor in the Department of Electrical Engineering and Computer Science (EECS), is a 2025 inductee into the National Inventors Hall of Fame. Inductees are patent-holding inventors whose work has made all our lives easier, safer, healthier, and more fulfilling.
     
  • Mohsen Ghaffari, an associate professor in the Department of EECS, received a 2025 Sloan Research Fellowship. The fellowship honors exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders.
     
  • Marzyeh Ghassemi, the Germeshausen Career Development Professor and associate professor in the Department of EECS and the Institute for Medical Engineering and Science, received a 2025 Sloan Research Fellowship. The fellowship honors exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders.
     
  • Linda Griffith, the School of Engineering Professor of Teaching Innovation in the Department of Biological Engineering, received the 2025 BMES Robert A. Pritzker Distinguished Lectureship Award. The award is given to individuals who have demonstrated impactful leadership and accomplishments in biomedical engineering science and practice.
     
  • Paula Hammond, MIT’s vice provost for faculty and an Institute Professor in the Department of ChemE, received the National Medal of Technology and Innovation. The award is the nation’s highest honor for scientists and innovators.
     
  • Kuikui Liu, the Elting Morison Career Development Professor and an assistant professor in the Department of EECS, received the 2025 Michael and Sheila Held Prize. The award is presented annually to honor outstanding, innovative, creative, and influential research in combinatorial and discrete optimization or related parts of computer science, such as the design and analysis of algorithms and complexity theory.
     
  • Farnaz Niroui, an associate professor in the Department of EECS, received a DARPA Innovation Fellowship. The highly selective program chooses fellows to develop and manage a portfolio of high-impact, exploratory research efforts to help identify breakthrough technologies for the U.S. Department of Defense.
     
  • Tomás Lozano-Pérez, the School of Engineering Professor of Teaching Excellence in the Department of EECS, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Kristala L. Prather, the Arthur Dehon Little Professor and head of the Department of ChemE, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Frances Ross, the TDK Professor in DMSE, received the Joseph F. Keithley Award for Advances in Measurement Science. The award recognizes physicists who have been instrumental in developing measurement techniques or equipment that have impacted the physics community by providing better measurements.
     
  • Henry “Hank” Smith, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering Emeritus in the Department of EECS, received the SPIE Frits Zernike Award for Microlithography. The award is presented for outstanding accomplishments in microlithographic technology, especially those furthering the development of semiconductor lithographic imaging and patterning solutions.
     
  • Eric Swanson, research affiliate at the Research Laboratory of Electronics, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Evelyn N. Wang, MIT's vice president for energy and climate and Ford Professor of Engineering in the Department of MechE, was elected to the National Academy of Engineering. Membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education.”
     
  • Bilge Yildiz, the Breene M. Kerr (1951) Professor in the Department of Nuclear Science and Engineering and the DMSE, received the Faraday Medal. The award is given to individuals for notable scientific or industrial achievement in engineering or for conspicuous service rendered to the advancement of science, engineering, and technology.
     
  • Feng Zhang, the James and Patricia Poitras Professor of Neuroscience and professor of brain and cognitive sciences and biological engineering, received the National Medal of Technology and Innovation. The award is the nation’s highest honor for scientists and innovators.

Twenty-one exceptional students receive 2025 MIT Supply Chain Excellence Awards

Thu, 05/08/2025 - 4:35pm

The MIT Supply Chain Management (SCM) master’s program has recognized 34 exceptional students from nine renowned undergraduate programs specializing in supply chain management and engineering across the United States and Mexico. Twenty-one students have won the 2025 MIT Supply Chain Excellence Award, while an additional 13 were named honorable mentions.

Presented annually, the MIT Supply Chain Excellence Awards honor undergraduate students who have demonstrated outstanding talent in supply chain management or industrial engineering. These students originate from the institutions that have collaborated with the MIT Center for Transportation and Logistics’ Supply Chain Management master’s program since 2013 to expand opportunities for graduate study and advance the field of supply chain and logistics.

In this year’s awards, the MIT SCM master’s program has provided over $800,000 in fellowship funding to the recipients. These students come from schools like Arizona State University, University of Illinois Urbana-Champaign, Lehigh University, Michigan State University, Monterrey Institute of Technology and Higher Education (Mexico), Penn State University, Purdue University, the University of Massachusetts at Amherst, and Syracuse University.

Recipients can use their awards by applying to the SCM program after gaining two to five years of professional experience post-graduation. Fellowship funds can be applied toward tuition fees for the SCM master’s program at MIT, or at MIT Supply Chain and Logistics Excellence (SCALE) network centers.

Winners ($30,000 fellowship awards):

  • Grace Albano, Lehigh University
  • Addison Clauss, Purdue University
  • Avery Geiger, University of Illinois Urbana-Champaign
  • Patrick Estefan, Michigan State University
  • Addison Kiteley, Michigan State University
  • Sarah Seo, Michigan State University
  • Dakarai Young, Michigan State University
  • Denver Zhang, Michigan State University
  • Mickey Miller, University of Massachusetts Amherst
  • Ana Paula Martínez Caldera, Monterrey Tech
  • Valeria Quinto Lange, Monterrey Tech
  • Alejandro Garza, Monterrey Tech
  • Mariana Otero Becerril, Monterrey Tech
  • Drew Gibble, Penn State University
  • Gabe Marshall, Penn State University
  • Eric Chen, Arizona State University
  • Dachi Tabatadze, Arizona State University
  • Srishti Garg, Arizona State University
  • Amanda Gong, Arizona State University
  • Austin Hurley, Arizona State University
  • Emily Wong, Arizona State University

Honorable Mentions ($15,000 fellowship awards):

  • Alisa Chen, Arizona State University
  • Sean Ratigan, Arizona State University
  • Natalie Alexander, Arizona State University
  • Chris Lewis, Arizona State University
  • Aiden Lyons, Arizona State University
  • Mia Thorn, Syracuse University
  • Devangi Deoras, Michigan State University
  • Api Sen, Michigan State University
  • Ashley Sheko, Michigan State University
  • Mila Straskraba, Michigan State University
  • Abeeha Zaidi, Michigan State University
  • Valeria Gonzalez Garcia, Monterrey Tech
  • Ceci Herrera Guerrero, Monterrey Tech

The MIT Center for Transportation and Logistics (CTL) is a world leader in supply chain management research and education, with over 50 years of expertise. The center’s work spans industry partnerships, cutting-edge research, and the advancement of sustainable supply chain practices, creating supply chain innovation and driving it into practice through three pillars: research, outreach, and education.

Founded in 1998 by the CTL, MIT SCM attracts a diverse group of talented and motivated students from across the globe. Students work directly with researchers and industry experts on complex and challenging problems in all aspects of supply chain management. MIT SCM students propel their classroom and laboratory learning straight into industry. They graduate from our programs as thought leaders ready to engage in an international, highly competitive marketplace. For more information, contact Kate Padilla.

Inaugural Morningside Academy for Design Professorships named

Thu, 05/08/2025 - 4:00pm

The newly established Morningside Academy for Design (MAD) Professorships recognize outstanding faculty whose teaching, research, and service have significantly shaped the field of design at MIT and beyond. The appointments support a commitment to interdisciplinary collaboration, mentorship, and the development of new educational approaches to design.

These appointments mark the creation of the MAD Professorships and were formally announced on April 29 at the MAD in Dialogue event, where faculty members, introduced by their department heads, each gave a short presentation on their work, followed by a shared conversation on the future of design education. 

The inaugural chair-holders are Behnaz Farahi, assistant professor of media arts and sciences and director of the Critical Matter Group in the MIT Media Lab; Skylar Tibbits, associate professor of architecture, co-founder and director of the MIT Self-Assembly Lab, and assistant director for education at MAD; and David Wallace, professor of mechanical engineering, MacVicar Fellow, and Class of 1960 Innovation in Education Fellow. 

John Ochsendorf, MAD’s founding director, reflects that “the professorships are more than titles — they’re affirming the central role of design in empowering students to solve complex challenges. Behnaz, Skylar, and David are all celebrated designers who each bring a unique perspective to design education and research. By supporting them, we will cultivate more agile, creative thinkers across MIT.”

Professor Farahi’s MAD professorship appointment will begin Sept. 1, upon the completion of her Asahi Broadcasting Corporation professorship. Tibbits’ and Wallace’s appointments are effective immediately. The faculty members will remain affiliated with their respective departments.

Behnaz Farahi

Having joined the MIT faculty in fall 2024 as an assistant professor in media arts and sciences, Behnaz Farahi brings her critical lens to design research and education. With a foundation in architecture, her career spans fashion and creative technology. Farahi takes interest in addressing critical social issues with a design practice engaging emerging technologies, human bodies, and the environment. As director of the Critical Matter research group at the MIT Media Lab, Farahi aims to re-integrate the tradition of critical thinking in philosophy and social sciences with the concerns of “matter” in science and technology. 

She has won awards including the Cooper Hewitt Smithsonian Design Museum Digital Design Award, Innovation by Design Fast Company Award, and the World Technology Award. Her work has been included in the permanent collection of the Museum of Science and Industry in Chicago and has been exhibited internationally.

Her most recent installation, “Gaze to the Stars,” projected video closeups of MIT community members’ eyes onto the Great Dome, with encoded personal stories of perseverance and transformation. The project integrated large language model and computer vision tools in service of a collective art experience.

Currently the recipient of the Asahi Broadcasting Corporation Career Development Professorship in Media Arts and Sciences, Farahi’s MAD appointment will begin after the completion of her present chair. She will remain affiliated with the MIT Media Lab. 

Skylar Tibbits

An architect by training, Skylar Tibbits combines design and computer science as co-founder and director of the Self-Assembly Lab at MIT and associate professor of design research in the Department of Architecture. Dedicated to broadening the reach of design education, he directs the undergraduate design programs at MIT and contributes to its curricula.

At the Self-Assembly Lab, Tibbits oversees the advancement of self-assembly and programmable material technologies such as 4D knitting and liquid metal printing, with a plurality of applications ranging from garments and housing to coastal resilience.

He has designed and built large-scale installations and exhibited in galleries around the world, including the Museum of Modern Art, Centre Pompidou, Philadelphia Museum of Art, Cooper Hewitt Smithsonian Design Museum, Victoria and Albert Museum, and various others. 

David Robert Wallace

David Wallace has long been a recognized leader in design research and education at MIT and around the world. Wallace began his research career focused on computational tools for design representation and has since broadened his interests to environmentally conscious design approaches, software tools that enhance design and creativity, and new media and tools for the design classroom that empower engineers and designers. His research goals are to develop new methods that improve the practice of product development and to help inspire and equip the next generation of engineering innovators.

Wallace is known both inside and outside of MIT for his development of two iconic design classes, 2.009 (Product Engineering Processes) and 2.00B (Toy Product Design). In sculpting and refining 2.009 over many years, Wallace merged a studio-based approach with rigorous engineering to create a new paradigm for team-based, project-based design. In these courses, students build and test their ideas hands-on in real-world contexts, learning what it means to design for real users, not just design in theory.

His approach to design education is captured in the video series “Play Seriously!,” which follows one semester of 2.009. For his tremendous educational contributions, he has been awarded the Baker Award for Teaching Excellence and was named a MacVicar Faculty Fellow, which is MIT’s highest teaching award.

Biologists identify targets for new pancreatic cancer treatments

Thu, 05/08/2025 - 2:00pm

Researchers from MIT and Dana-Farber Cancer Institute have discovered that a class of peptides expressed in pancreatic cancer cells could be a promising target for T-cell therapies and other approaches that attack pancreatic tumors.

Known as cryptic peptides, these molecules are produced from sequences in the genome that were not thought to encode proteins. Such peptides can also be found in some healthy cells, but in this study, the researchers identified about 500 that appear to be found only in pancreatic tumors.

The researchers also showed they could generate T cells targeting those peptides. Those T cells were able to attack pancreatic tumor organoids derived from patient cells, and they significantly slowed down tumor growth in a study of mice.

“Pancreas cancer is one of the most challenging cancers to treat. This study identifies an unexpected vulnerability in pancreas cancer cells that we may be able to exploit therapeutically,” says Tyler Jacks, the David H. Koch Professor of Biology at MIT and a member of the Koch Institute for Integrative Cancer Research.

Jacks and William Freed-Pastor, a physician-scientist in the Hale Family Center for Pancreatic Cancer Research at Dana-Farber Cancer Institute and an assistant professor at Harvard Medical School, are the senior authors of the study, which appears today in Science. Zackery Ely PhD ’22 and Zachary Kulstad, a former research technician at Dana-Farber Cancer Institute and the Koch Institute, are the lead authors of the paper.

Cryptic peptides

Pancreatic cancer has one of the lowest survival rates of any cancer — about 10 percent of patients survive for five years after their diagnosis.

Most pancreatic cancer patients receive a combination of surgery, radiation treatment, and chemotherapy. Immunotherapy treatments such as checkpoint blockade inhibitors, which are designed to help stimulate the body’s own T cells to attack tumor cells, are usually not effective against pancreatic tumors. However, therapies that deploy T cells engineered to attack tumors have shown promise in clinical trials.

These therapies involve programming the T-cell receptor (TCR) of T cells to recognize a specific peptide, or antigen, found on tumor cells. There are many efforts underway to identify the most effective targets, and researchers have found some promising antigens that consist of mutated proteins that often show up when pancreatic cancer genomes are sequenced.

In the new study, the MIT and Dana-Farber team wanted to extend that search into tissue samples from patients with pancreatic cancer, using immunopeptidomics — a strategy that involves extracting the peptides presented on a cell surface and then identifying the peptides using mass spectrometry.

Using tumor samples from about a dozen patients, the researchers created organoids — three-dimensional growths that partially replicate the structure of the pancreas. The immunopeptidomics analysis, which was led by Jennifer Abelin and Steven Carr at the Broad Institute, found that the majority of novel antigens found in the tumor organoids were cryptic antigens. Cryptic peptides have been seen in other types of tumors, but this is the first time they have been found in pancreatic tumors.

Each tumor expressed an average of about 250 cryptic peptides, and in total, the researchers identified about 1,700 cryptic peptides.

“Once we started getting the data back, it just became clear that this was by far the most abundant novel class of antigens, and so that’s what we wound up focusing on,” Ely says.

The researchers then performed an analysis of healthy tissues to see if any of these cryptic peptides were found in normal cells. They found that about two-thirds of them were also found in at least one type of healthy tissue, leaving about 500 that appeared to be restricted to pancreatic cancer cells.
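
Conceptually, that filtering step is a set difference between peptides detected in tumors and peptides detected in any healthy tissue. The minimal sketch below illustrates the idea with invented peptide identifiers rather than the study’s data.

```python
# Minimal sketch (invented identifiers, not the study's data): keep only the
# cryptic peptides seen in tumor organoids and never in any healthy tissue.
tumor_peptides = {"PEPTIDE_A", "PEPTIDE_B", "PEPTIDE_C", "PEPTIDE_D"}
healthy_by_tissue = {
    "liver": {"PEPTIDE_A"},
    "lung": {"PEPTIDE_A", "PEPTIDE_B"},
}

seen_in_healthy = set().union(*healthy_by_tissue.values())
tumor_restricted = tumor_peptides - seen_in_healthy  # candidate therapy targets
print(sorted(tumor_restricted))                      # ['PEPTIDE_C', 'PEPTIDE_D']
```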

“Those are the ones that we think could be very good targets for future immunotherapies,” Freed-Pastor says.

Programmed T cells

To test whether these antigens might hold potential as targets for T-cell-based treatments, the researchers exposed about 30 of the cancer-specific antigens to immature T cells and found that 12 of them could generate large populations of T cells targeting those antigens.

The researchers then engineered a new population of T cells to express those T-cell receptors. These engineered T cells were able to destroy organoids grown from patient-derived pancreatic tumor cells. Additionally, when the researchers implanted the organoids into mice and then treated them with the engineered T cells, tumor growth was significantly slowed.

This is the first time that anyone has demonstrated the use of T cells targeting cryptic peptides to kill pancreatic tumor cells. Even though the tumors were not completely eradicated, the results are promising, and it is possible that the T cells’ killing power could be strengthened in future work, the researchers say.

Freed-Pastor’s lab is also beginning to work on a vaccine targeting some of the cryptic antigens, which could help stimulate patients’ T cells to attack tumors expressing those antigens. Such a vaccine could include a collection of the antigens identified in this study, including those frequently found in multiple patients.

This study could also help researchers design other types of therapy, such as T cell engagers — antibodies that bind an antigen on one side and T cells on the other, which allows them to redirect any T cell to kill tumor cells.

Any potential vaccine or T cell therapy is likely a few years away from being tested in patients, the researchers say.

The research was funded in part by the Hale Family Center for Pancreatic Cancer Research, the Lustgarten Foundation, Stand Up To Cancer, the Pancreatic Cancer Action Network, the Burroughs Wellcome Fund, a Conquer Cancer Young Investigator Award, the National Institutes of Health, and the National Cancer Institute.

MIT engineering students crack egg dilemma, finding sideways is stronger

Thu, 05/08/2025 - 11:45am

It’s been a scientific truth so universally acknowledged that it’s taught in classrooms and repeated in pop-science videos: An egg is strongest when dropped vertically, on its ends. But when MIT engineers actually put this assumption to the test, they cracked open a surprising revelation. 

Their experiments revealed that eggs dropped on their sides — not their tips — are far more resilient, thanks to a clever physics trick: Sideways eggs bend like shock absorbers, trading stiffness for superior energy absorption. Their open-access findings, published today in Communications Physics, don’t just rewrite the rules of the classic egg drop challenge — they’re a lesson in intellectual humility and curiosity. Even “settled” science can yield surprises when approached with rigor and an open mind.

At first glance, an eggshell may seem fragile, but its strength is a marvel of physics. Crack an egg on its side for your morning omelet and it breaks easily. Intuitively, we believe eggs are harder to break when positioned vertically. This notion has long been a cornerstone of the classic “egg drop challenge,” a popular science activity in STEM classrooms across the country that introduces students to physics concepts of impact, force, kinetic energy, and engineering design.

The annual egg drop competition is a highlight of first-year orientation in the MIT Department of Civil and Environmental Engineering. “Every year we follow the scientific literature and talk to the students about how to position the egg to avoid breakage on impact,” says Tal Cohen, associate professor of civil and environmental engineering and mechanical engineering. “But about three years ago, we started to question whether vertical really is stronger.” 

That curiosity sparked an initial experiment by Cohen’s research group, which leads the department’s egg drop event. They decided to put their remaining box of eggs to the test in the lab. “We expected to confirm the vertical side was tougher based on what we had read online,” says Cohen. “But when we looked at the data — it was really unclear.”

What began as casual inquiry evolved into a research project. To rigorously investigate the strength of both egg orientations, the researchers conducted two types of experiments: static compression tests, which applied gradually increasing force to measure stiffness and toughness; and dynamic drop tests, to quantify the likelihood of breaking on impact.

“In the static testing, we wanted to keep an egg at a standstill and push on it until it cracked,” explains Avishai Jeselsohn, an undergraduate researcher and an author of the study. “We used thin paper supports to precisely orient the eggs vertically and horizontally.”

The researchers found that it took the same amount of force to initiate a crack in both orientations. “However, we noticed a key difference in how much the egg compressed before it broke,” says Joseph Bonavia, a PhD candidate who contributed to the work. “The horizontal egg compressed more under the same amount of force, meaning it was more compliant.”

Using mechanical modeling and numerical simulations to validate results of their experiments, the researchers concluded that even though the force to crack the egg was consistent, the horizontal eggs absorbed more energy due to their compliance. “This suggested that in situations where energy absorption is important, like in a drop, the horizontal orientation might be more resilient. We then performed the dynamic drop tests to see if this held true in practice,” says Jeselsohn.
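
To make the physics concrete: toughness here corresponds to the area under the force-displacement curve up to the point of cracking. The short Python sketch below, which uses entirely hypothetical numbers rather than the study's data, shows that when two orientations crack at the same peak force, the one that compresses farther absorbs more energy.

```python
import numpy as np

# Illustrative numbers only; these are not data from the MIT study.
# Energy absorbed before cracking is the area under the force-displacement curve.

def area_under_curve(force, displacement):
    """Trapezoidal integration of a force-displacement curve (energy in joules)."""
    return float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement)))

peak_force = 45.0  # newtons; assume both orientations crack at the same peak force

# Hypothetical compression at the moment of cracking: the horizontal (sideways)
# egg compresses farther because it is more compliant.
disp_vertical = np.linspace(0.0, 0.10e-3, 50)    # up to 0.10 mm
disp_horizontal = np.linspace(0.0, 0.17e-3, 50)  # up to 0.17 mm

# Assume a roughly linear force ramp up to the same peak force in both cases.
force_vertical = peak_force * disp_vertical / disp_vertical[-1]
force_horizontal = peak_force * disp_horizontal / disp_horizontal[-1]

print(f"Vertical:   {area_under_curve(force_vertical, disp_vertical)*1e3:.2f} mJ absorbed")
print(f"Horizontal: {area_under_curve(force_horizontal, disp_horizontal)*1e3:.2f} mJ absorbed")
# Same peak force, but the more compliant orientation absorbs more energy before cracking.
```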

The researchers designed a drop setup using solenoids and 3D-printed supports, ensuring simultaneous release and consistent egg orientation. Eggs were dropped from various heights to observe breakage patterns. The result: Horizontal eggs cracked less frequently when dropped from the same height.

“This confirmed what we saw in the static tests,” says Jeselsohn. “Even though both orientations experienced similar peak forces, the horizontal eggs absorbed energy better and were more resistant to breaking.”

Challenging common notions

The study reveals a misconception in popular science regarding the strength of an egg when subjected to impact. Even seasoned researchers in fracture mechanics initially assumed that vertically oriented eggs would be stronger. “It’s a widespread, accepted belief, referenced in many online sources,” notes Jeselsohn.

Everyday experience may reinforce that misconception. After all, we often crack eggs on their sides when cooking. “But that’s not the same as resisting impact,” explains Brendan Unikewicz, a PhD candidate and author on the paper. “Cracking an egg for cooking involves applying locally focused force for a clean break to retrieve the yolk, while its resistance to breaking from a drop involves distributing and absorbing energy across the shell.”

The difference is subtle but significant. A vertically oriented egg, while stiffer, is more brittle under sudden force. A horizontal egg, being more compliant, bends and absorbs energy over a greater distance — similar to how bending your knees during a fall softens the blow.

“In a way, our legs are ‘weaker’ when bent, but they’re actually tougher in absorbing impact,” Bonavia adds. “It’s the same with the egg. Toughness isn’t just about resisting force — it’s about how that force is dissipated.”

The research findings offer more than insight into egg behavior — they underscore a broader scientific principle: that widely accepted “truths” are worth re-examining.

Which came first?

“It’s great to see an example of ‘received wisdom’ being tested scientifically and shown to be incorrect. There are many such examples in the scientific literature, and it’s a real problem in some fields because it can be difficult to secure funding to challenge an existing, ‘well-known’ theory,” says David Taylor, emeritus professor in the Department of Mechanical, Manufacturing and Biomedical Engineering at Trinity College Dublin, who was not affiliated with the study.

The authors hope their findings encourage young people to remain curious and recognize just how much remains to be discovered in the physical world.

“Our paper is a reminder of the value in challenging common notions and relying on empirical evidence, rather than intuition,” says Cohen. “We hope our work inspires students to stay curious, question even the most familiar assumptions, and continue thinking critically about the physical world around them. That’s what we strive to do in our group — constantly challenge what we’re taught through thoughtful inquiry.”

In addition to Cohen, who serves as senior author on the paper, co-authors include lead authors Antony Sutanto MEng ’24 and Suhib Abu-Qbeitah, a postdoc at Tel Aviv University, as well as the following MIT affiliates: Avishai Jeselsohn, an undergraduate in mechanical engineering; Brendan Unikewicz, a PhD candidate in mechanical engineering; Joseph Bonavia, a PhD candidate in mechanical engineering; Stephen Rudolph, a lab instructor in civil and environmental engineering; Hudson Borja da Rocha, an MIT postdoc in civil and environmental engineering; and Kiana Naghibzadeh, Engineering Excellence Postdoctoral Fellow in civil and environmental engineering. The research was funded by the U.S. Office of Naval Research with support from the U.S. National Science Foundation.

Ping pong bot returns shots with high-speed precision

Thu, 05/08/2025 - 12:00am

MIT engineers are getting in on the robotic ping pong game with a powerful, lightweight design that returns shots with high-speed precision.

The new table tennis bot comprises a multijointed robotic arm that is fixed to one end of a ping pong table and wields a standard ping pong paddle. Aided by several high-speed cameras and a high-bandwidth predictive control system, the robot quickly estimates the speed and trajectory of an incoming ball and executes one of several swing types — loop, drive, or chop — to precisely hit the ball to a desired location on the table with various types of spin.

In tests, the engineers threw 150 balls at the robot, one after the other, from across the ping pong table. The bot successfully returned the balls with a hit rate of about 88 percent across all three swing types. The robot’s strike speed approaches the top return speeds of human players and is faster than that of other robotic table tennis designs.

Now, the team is looking to increase the robot’s playing radius so that it can return a wider variety of shots. Then, they envision the setup could be a viable competitor in the growing field of smart robotic training systems.

Beyond the game, the team says the table tennis tech could be adapted to improve the speed and responsiveness of humanoid robots, particularly for search-and-rescue scenarios and situations in which a robot would need to react or anticipate quickly.

“The problems that we’re solving, specifically related to intercepting objects really quickly and precisely, could potentially be useful in scenarios where a robot has to carry out dynamic maneuvers and plan where its end effector will meet an object, in real-time,” says MIT graduate student David Nguyen.

Nguyen is a co-author of the new study, along with MIT graduate student Kendrick Cancio and Sangbae Kim, associate professor of mechanical engineering and head of the MIT Biomimetics Robotics Lab. The researchers will present the results of those experiments in a paper at the IEEE International Conference on Robotics and Automation (ICRA) this month.

Precise play

Building robots to play ping pong is a challenge that researchers have taken up since the 1980s. The problem requires a unique combination of technologies, including high-speed machine vision, fast and nimble motors and actuators, precise manipulator control, and accurate, real-time prediction, as well as higher-level planning of game strategy.

“If you think of the spectrum of control problems in robotics, we have on one end manipulation, which is usually slow and very precise, such as picking up an object and making sure you’re grasping it well. On the other end, you have locomotion, which is about being dynamic and adapting to perturbations in your system,” Nguyen explains. “Ping pong sits in between those. You’re still doing manipulation, in that you have to be precise in hitting the ball, but you have to hit it within 300 milliseconds. So, it balances similar problems of dynamic locomotion and precise manipulation.”

Ping pong robots have come a long way since the 1980s, most recently with designs by Omron and Google DeepMind that employ artificial intelligence techniques to “learn” from previous ping pong data, to improve a robot’s performance against an increasing variety of strokes and shots. These designs have been shown to be fast and precise enough to rally with intermediate human players.

“These are really specialized robots designed to play ping pong,” Cancio says. “With our robot, we are exploring how the techniques used in playing ping pong could translate to a more generalized system, like a humanoid or anthropomorphic robot that can do many different, useful things.”

Game control

For their new design, the researchers modified a lightweight, high-power robotic arm that Kim’s lab developed as part of the MIT Humanoid — a bipedal, two-armed robot that is about the size of a small child. The group is using the robot to test various dynamic maneuvers, including navigating uneven and varying terrain as well as jumping, running, and doing backflips, with the aim of one day deploying such robots for search-and-rescue operations.

Each of the humanoid’s arms has four joints, or degrees of freedom, which are each controlled by an electrical motor. Cancio, Nguyen, and Kim built a similar robotic arm, which they adapted for ping pong by adding an additional degree of freedom in the wrist to allow for control of a paddle.

The team fixed the robotic arm to a table at one end of a standard ping pong table and set up high-speed motion capture cameras around the table to track balls that are bounced at the robot. They also developed optimal control algorithms that predict, based on the principles of math and physics, what speed and paddle orientation the arm should execute to hit an incoming ball with a particular type of swing: loop (or topspin), drive (straight-on), or chop (backspin).
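
As a rough illustration of this kind of physics-based prediction (not the team's actual algorithm, which also accounts for spin and swing selection), the sketch below fits tracked ball positions to a simple drag-free ballistic model and solves for where and when the ball will cross an assumed strike plane; all numbers and function names are illustrative.

```python
import numpy as np

# A minimal, drag- and spin-free sketch of physics-based ball prediction:
# fit position samples from motion-capture cameras to a ballistic trajectory,
# then solve for when the ball reaches the robot's strike plane.
# This is a simplified illustration, not the MIT team's control algorithm.

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2

def fit_ballistic(times, positions):
    """Least-squares fit of p(t) = p0 + v0*t + 0.5*g*t^2 to tracked samples."""
    times = np.asarray(times)
    positions = np.asarray(positions)
    # Subtract the known gravity term, then fit a straight line for p0 and v0.
    adjusted = positions - 0.5 * G * times[:, None] ** 2
    A = np.column_stack([np.ones_like(times), times])
    coeffs, *_ = np.linalg.lstsq(A, adjusted, rcond=None)
    return coeffs[0], coeffs[1]   # p0, v0

def time_to_plane(p0, v0, x_plane):
    """Time at which the ball's x-coordinate crosses the strike plane."""
    return (x_plane - p0[0]) / v0[0]

# Hypothetical camera samples (seconds, meters).
t = [0.00, 0.01, 0.02, 0.03]
pos = [[2.50, 0.10, 0.30],
       [2.42, 0.10, 0.31],
       [2.34, 0.11, 0.31],
       [2.26, 0.11, 0.31]]

p0, v0 = fit_ballistic(t, pos)
t_hit = time_to_plane(p0, v0, x_plane=0.2)            # assumed strike plane at x = 0.2 m
intercept = p0 + v0 * t_hit + 0.5 * G * t_hit ** 2    # predicted paddle contact point
print("Predicted intercept:", intercept, "at t =", round(t_hit, 3), "s")
```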

They implemented the algorithms using three computers that simultaneously processed camera images, estimated a ball’s real-time state, and translated these estimations to commands for the robot’s motors to quickly react and take a swing.

After consecutively bouncing 150 balls at the arm, they found the robot’s hit rate, or accuracy of returning the ball, was about the same for all three types of swings: 88.4 percent for loop strikes, 89.2 percent for chops, and 87.5 percent for drives. They have since tuned the robot’s reaction time and found the arm hits balls faster than existing systems, at velocities of 20 meters per second.

In their paper, the team reports that the robot’s strike speed, or the speed at which the paddle hits the ball, is on average 11 meters per second. Advanced human players have been known to return balls at speeds of 21 to 25 meters per second. Since writing up the results of their initial experiments, the researchers have further tweaked the system and have recorded strike speeds of up to 19 meters per second (about 42 miles per hour).

“Some of the goal of this project is to say we can reach the same level of athleticism that people have,” Nguyen says. “And in terms of strike speed, we’re getting really, really close.”

Their follow-up work has also enabled the robot to aim. The team incorporated control algorithms into the system that predict not only how but where to hit an incoming ball. With its latest iteration, the researchers can set a target location on the table, and the robot will hit a ball to that same location.

Because it is fixed to the table, the robot has limited mobility and reach, and can mostly return balls that arrive within a crescent-shaped area around the midline of the table. In the future, the engineers plan to rig the bot on a gantry or wheeled platform, enabling it to cover more of the table and return a wider variety of shots.

“A big thing about table tennis is predicting the spin and trajectory of the ball, given how your opponent hit it, which is information that an automatic ball launcher won’t give you,” Cancio says. “A robot like this could mimic the maneuvers that an opponent would do in a game environment, in a way that helps humans play and improve.”

This research is supported, in part, by the Robotics and AI Institute.

System lets robots identify an object’s properties through handling

Thu, 05/08/2025 - 12:00am

A human clearing junk out of an attic can often guess the contents of a box simply by picking it up and giving it a shake, without the need to see what’s inside. Researchers from MIT, Amazon Robotics, and the University of British Columbia have taught robots to do something similar.

They developed a technique that enables robots to use only internal sensors to learn about an object’s weight, softness, or contents by picking it up and gently shaking it. With their method, which does not require external measurement tools or cameras, the robot can accurately guess parameters like an object’s mass in a matter of seconds.

This low-cost technique could be especially useful in applications where cameras might be less effective, such as sorting objects in a dark basement or clearing rubble inside a building that partially collapsed after an earthquake.

Key to their approach is a simulation process that incorporates models of the robot and the object to rapidly identify characteristics of that object as the robot interacts with it. 

The researchers’ technique is as good at guessing an object’s mass as some more complex and expensive methods that incorporate computer vision. In addition, their data-efficient approach is robust enough to handle many types of unseen scenarios.

“This idea is general, and I believe we are just scratching the surface of what a robot can learn in this way. My dream would be to have robots go out into the world, touch things and move things in their environments, and figure out the properties of everything they interact with on their own,” says Peter Yichen Chen, an MIT postdoc and lead author of a paper on this technique.

His coauthors include fellow MIT postdoc Chao Liu; Pingchuan Ma PhD ’25; Jack Eastman MEng ’24; Dylan Randle and Yuri Ivanov of Amazon Robotics; MIT professors of electrical engineering and computer science Daniela Rus, who leads MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Wojciech Matusik, who leads the Computational Design and Fabrication Group within CSAIL. The research will be presented at the International Conference on Robotics and Automation.

Sensing signals

The researchers’ method leverages proprioception, which is a human or robot’s ability to sense its movement or position in space.

For instance, a human who lifts a dumbbell at the gym can sense the weight of that dumbbell in their wrist and bicep, even though they are holding the dumbbell in their hand. In the same way, a robot can “feel” the heaviness of an object through the multiple joints in its arm.

“A human doesn’t have super-accurate measurements of the joint angles in our fingers or the precise amount of torque we are applying to an object, but a robot does. We take advantage of these abilities,” Liu says.

As the robot lifts an object, the researchers’ system gathers signals from the robot’s joint encoders, which are sensors that detect the rotational position and speed of its joints during movement. 

Most robots have joint encoders within the motors that drive their moveable parts, Liu adds. This makes their technique more cost-effective than some approaches because it doesn’t need extra components like tactile sensors or vision-tracking systems.

To estimate an object’s properties during robot-object interactions, their system relies on two models: one that simulates the robot and its motion and one that simulates the dynamics of the object.

“Having an accurate digital twin of the real world is really important for the success of our method,” Chen adds.

Their algorithm “watches” the robot and object move during a physical interaction and uses joint encoder data to work backward and identify the properties of the object.

For instance, a heavier object will move more slowly than a lighter one if the robot applies the same amount of force.

Differentiable simulations

They utilize a technique called differentiable simulation, which allows the algorithm to predict how small changes in an object’s properties, like mass or softness, impact the robot’s ending joint position. The researchers built their simulations using NVIDIA’s Warp library, an open-source developer tool that supports differentiable simulations.

Once the differentiable simulation matches up with the robot’s real movements, the system has identified the correct property. The algorithm can do this in a matter of seconds and only needs to see one real-world trajectory of the robot in motion to perform the calculations.
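
The general recipe can be illustrated on a toy problem: simulate a system with a guessed parameter, compare the simulated trajectory to the observed one, and nudge the parameter to reduce the mismatch. The sketch below does this for a point mass pushed by a known force, using plain NumPy and a finite-difference gradient as a stand-in for the automatic differentiation that Warp provides; it illustrates the idea and is not the researchers' code.

```python
import numpy as np

# Toy illustration of parameter identification via simulation matching:
# a point mass is pushed with a known force, we "observe" its trajectory
# (standing in for encoder-derived data), and we recover the mass by
# gradient descent on the mismatch between simulated and observed motion.

def simulate(mass, force, dt=0.01, steps=100):
    """Positions of a point mass under a constant force (explicit Euler)."""
    x, v = 0.0, 0.0
    xs = []
    for _ in range(steps):
        v += (force / mass) * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

true_mass, applied_force = 2.0, 1.5              # arbitrary illustrative values
observed = simulate(true_mass, applied_force)    # the "real-world" trajectory

mass_est, lr, eps = 0.5, 1.0, 1e-4
for _ in range(2000):
    loss = np.mean((simulate(mass_est, applied_force) - observed) ** 2)
    loss_eps = np.mean((simulate(mass_est + eps, applied_force) - observed) ** 2)
    grad = (loss_eps - loss) / eps               # finite-difference gradient
    mass_est -= lr * grad

print(f"True mass: {true_mass:.3f} kg, estimated mass: {mass_est:.3f} kg")
```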

“Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify,” Liu says.

The researchers used their method to learn the mass and softness of an object, but their technique could also determine properties like moment of inertia or the viscosity of a fluid inside a container.

Plus, because their algorithm does not need an extensive dataset for training like some methods that rely on computer vision or external sensors, it would not be as susceptible to failure when faced with unseen environments or new objects.

In the future, the researchers want to try combining their method with computer vision to create a multimodal sensing technique that is even more powerful.

“This work is not trying to replace computer vision. Both methods have their pros and cons. But here we have shown that without a camera we can already figure out some of these properties,” Chen says.

They also want to explore applications with more complicated robotic systems, like soft robots, and more complex objects, including sloshing liquids or granular media like sand.

In the long run, they hope to apply this technique to improve robot learning, enabling future robots to quickly develop new manipulation skills and adapt to changes in their environments.

“Determining the physical properties of objects from data has long been a challenge in robotics, particularly when only limited or noisy measurements are available. This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools,” says Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved with this research.

This work is funded, in part, by Amazon and the GIST-CSAIL Research Program.

Dopamine signals when a fear can be forgotten

Wed, 05/07/2025 - 9:50am

Dangers come but dangers also go, and when they do, the brain has an “all-clear” signal that teaches it to extinguish its fear. A new study in mice by MIT neuroscientists shows that the signal is the release of dopamine along a specific interregional brain circuit. The research therefore pinpoints a potentially critical mechanism of mental health, restoring calm when it works, but prolonging anxiety or even post-traumatic stress disorder when it doesn’t.

“Dopamine is essential to initiate fear extinction,” says Michele Pignatelli di Spinazzola, co-author of the new study from the lab of senior author Susumu Tonegawa, Picower Professor of biology and neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics within The Picower Institute for Learning and Memory at MIT, and a Howard Hughes Medical Institute (HHMI) investigator.

In 2020, Tonegawa’s lab showed that learning to be afraid, and then learning when that’s no longer necessary, result from a competition between populations of cells in the brain’s amygdala region. When a mouse learns that a place is “dangerous” (because it gets a little foot shock there), the fear memory is encoded by neurons in the anterior of the basolateral amygdala (aBLA) that express the gene Rspo2. When the mouse then learns that a place is no longer associated with danger (because they wait there and the zap doesn’t recur), neurons in the posterior basolateral amygdala (pBLA) that express the gene Ppp1r1b encode a new fear extinction memory that overcomes the original dread. Notably, those same neurons encode feelings of reward, helping to explain why it feels so good when we realize that an expected danger has dwindled.

In the new study, the lab, led by former members Xiangyu Zhang and Katelyn Flick, sought to determine what prompts these amygdala neurons to encode these memories. The rigorous set of experiments the team reports in the Proceedings of the National Academy of Sciences shows that it’s dopamine sent to the different amygdala populations from distinct groups of neurons in the ventral tegmental area (VTA).

“Our study uncovers a precise mechanism by which dopamine helps the brain unlearn fear,” says Zhang, who also led the 2020 study and is now a senior associate at Orbimed, a health care investment firm. “We found that dopamine activates specific amygdala neurons tied to reward, which in turn drive fear extinction. We now see that unlearning fear isn’t just about suppressing it — it’s a positive learning process powered by the brain’s reward machinery. This opens up new avenues for understanding and potentially treating fear-related disorders, like PTSD.”

Forgetting fear

The VTA was the lab’s prime suspect to be the source of the signal because the region is well known for encoding surprising experiences and instructing the brain, with dopamine, to learn from them. The first set of experiments in the paper used multiple methods for tracing neural circuits to see whether and how cells in the VTA and the amygdala connect. They found a clear pattern: Rspo2 neurons were targeted by dopaminergic neurons in the anterior and left and right sides of the VTA. Ppp1r1b neurons received dopaminergic input from neurons in the center and posterior sections of the VTA. The density of connections was greater on the Ppp1r1b neurons than for the Rspo2 ones.

The circuit tracing showed that dopamine is available to amygdala neurons that encode fear and its extinction, but do those neurons care about dopamine? The team showed that indeed they express “D1” receptors for the neuromodulator. Commensurate with the degree of dopamine connectivity, Ppp1r1b cells had more receptors than Rspo2 neurons.

Dopamine does a lot of things, so the next question was whether its activity in the amygdala actually correlated with fear encoding and extinction. Using a method to track and visualize it in the brain, the team watched dopamine in the amygdala as mice underwent a three-day experiment. On Day One, they went to an enclosure where they experienced three mild shocks on the feet. On Day Two, they went back to the enclosure for 45 minutes, where they didn’t experience any new shocks — at first, the mice froze in anticipation of a shock, but then relaxed after about 15 minutes. On Day Three they returned again to test whether they had indeed extinguished the fear they showed at the beginning of Day Two.

The dopamine activity tracking revealed that during the shocks on Day One, Rspo2 neurons had the larger response to dopamine, but in the early moments of Day Two, when the anticipated shocks didn’t come and the mice eased up on freezing, the Ppp1r1b neurons showed the stronger dopamine activity. More strikingly, the mice that learned to extinguish their fear most strongly also showed the greatest dopamine signal at those neurons.

Causal connections

The final sets of experiments sought to show that dopamine is not just available and associated with fear encoding and extinction, but also actually causes them. In one set, they turned to optogenetics, a technology that enables scientists to activate or quiet neurons with different colors of light. Sure enough, when they quieted VTA dopaminergic inputs in the pBLA, doing so impaired fear extinction. When they activated those inputs, it accelerated fear extinction. The researchers were surprised that when they activated VTA dopaminergic inputs into the aBLA they could reinstate fear even without any new foot shocks, impairing fear extinction.

The other way they confirmed a causal role for dopamine in fear encoding and extinction was to manipulate the amygdala neurons’ dopamine receptors. In Ppp1r1b neurons, over-expressing dopamine receptors impaired fear recall and promoted extinction, whereas knocking the receptors down impaired fear extinction. Meanwhile in the Rspo2 cells, knocking down receptors reduced the freezing behavior.

“We showed that fear extinction requires VTA dopaminergic activity in the pBLA Ppp1r1b neurons by using optogenetic inhibition of VTA terminals and cell-type-specific knockdown of D1 receptors in these neurons,” the authors wrote.

The scientists are careful in the study to note that while they’ve identified the “teaching signal” for fear extinction learning, the broader phenomenon of fear extinction occurs brainwide, rather than in just this single circuit.

But the circuit seems to be a key node to consider as drug developers and psychiatrists work to combat anxiety and PTSD, Pignatelli di Spinazzola says.

“Fear learning and fear extinction provide a strong framework to study generalized anxiety and PTSD,” he says. “Our study investigates the underlying mechanisms suggesting multiple targets for a translational approach, such as pBLA and use of dopaminergic modulation.”

Marianna Rizzo is also a co-author of the study. Support for the research came from the RIKEN Center for Brain Science, the HHMI, the Freedom Together Foundation, and The Picower Institute.

Using AI to explore the 3D structure of the genome

Wed, 05/07/2025 - 12:00am

Inside every human cell, 2 meters of DNA is crammed into a nucleus that is only one-hundredth of a millimeter in diameter.

To fit inside that tiny space, the genome must fold into a complex structure known as chromatin, made up of DNA and proteins. The structure of that chromatin, in turn, helps to determine which of the genes will be expressed in a given cell. Neurons, skin cells, and immune cells each express different genes depending on which of their genes are accessible to be transcribed.

Deciphering those structures experimentally is a time-consuming process, making it difficult to compare the 3D genome structures found in different cell types. MIT Professor Bin Zhang is taking a computational approach to this challenge, using computer simulations and generative artificial intelligence to determine these structures.

“Regulation of gene expression relies on the 3D genome structure, so the hope is that if we can fully understand those structures, then we could understand where this cellular diversity comes from,” says Zhang, an associate professor of chemistry.

From the farm to the lab

Zhang first became interested in chemistry when his brother, who was four years older, bought some lab equipment and started performing experiments at home.

“He would bring test tubes and some reagents home and do the experiment there. I didn’t really know what he was doing back then, but I was really fascinated with all the bright colors and the smoke and the odors that could come from the reactions. That really captivated my attention,” Zhang says.

His brother later became the first person from Zhang’s rural village to go to college. That was the first time Zhang had an inkling that it might be possible to pursue a future other than following in the footsteps of his parents, who were farmers in China’s Anhui province.

“Growing up, I would have never imagined doing science or working as a faculty member in America,” Zhang says. “When my brother went to college, that really opened up my perspective, and I realized I didn’t have to follow my parents’ path and become a farmer. That led me to think that I could go to college and study more chemistry.”

Zhang attended the University of Science and Technology of China, in Hefei, where he majored in chemical physics. He enjoyed his studies and discovered computational chemistry and computational research, which became his new fascination.

“Computational chemistry combines chemistry with other subjects I love — math and physics — and brings a sense of rigor and reasoning to the otherwise more empirical rules,” he says. “I could use programming to solve interesting chemistry problems and test my own ideas very quickly.”

After graduating from college, he decided to continue his studies in the United States, which he recalled thinking was “the pinnacle of academics.” At Caltech, he worked with Thomas Miller, a professor of chemistry who used computational methods to understand molecular processes such as protein folding.

For Zhang’s PhD research, he studied a transmembrane protein that acts as a channel to allow other proteins to pass through the cell membrane. This protein, called translocon, can also open a side gate within the membrane, so that proteins that are meant to be embedded in the membrane can exit directly into the membrane.

“It’s really a remarkable protein, but it wasn’t clear how it worked,” Zhang says. “I built a computational model to understand the molecular mechanisms that dictate what are the molecular features that allow certain proteins to go into the membrane, while other proteins get secreted.”

Turning to the genome

After finishing grad school, Zhang’s research focus shifted from proteins to the genome. At Rice University, he did a postdoc with Peter Wolynes, a professor of chemistry who had made many key discoveries in the dynamics of protein folding. Around the time that Zhang joined the lab, Wolynes turned his attention to the structure of the genome, and Zhang decided to do the same.

Unlike proteins, which tend to have highly structured regions that can be studied using X-ray crystallography or cryo-EM, DNA is a very globular molecule that doesn’t lend itself to those types of analysis.

A few years earlier, in 2009, researchers at the Broad Institute, the University of Massachusetts Medical School, MIT, and Harvard University had developed a technique for studying the genome’s structure by cross-linking DNA in a cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.

Zhang and Wolynes used data generated by this technique, known as Hi-C, to explore the question of whether DNA forms knots when it’s condensed in the nucleus, similar to how a strand of Christmas lights may become tangled when crammed into a box for storage.

“If DNA was just like a regular polymer, you would expect that it will become tangled and form knots. But that could be very detrimental for biology, because the genome is not just sitting there passively. It has to go through cell division, and also all this molecular machinery has to interact with the genome and transcribe it into RNA, and having knots will create a lot of unnecessary barriers,” Zhang says.

They found that, unlike Christmas lights, DNA does not form any knots even when packed into the cell nucleus, and they built a computational model allowing them to test hypotheses for how the genome is able to avoid those entanglements.

Since joining the MIT faculty in 2016, Zhang has continued developing models of how the genome behaves in 3D space, using molecular dynamic simulations. In one area of research, his lab is studying how differences between the genome structures of neurons and other brain cells give rise to their unique functions, and they are also exploring how misfolding of the genome may lead to diseases such as Alzheimer’s.

When it comes to connecting genome structure and function, Zhang believes that generative AI methods will also be essential. In a recent study, he and his students reported a new computational model, ChromoGen, that uses generative AI to predict the 3D structures of genomic regions, based on their DNA sequences.

“I think that in the future, we will have both components: generative AI and also theoretical chemistry-based approaches,” he says. “They nicely complement each other and allow us to both build accurate 3D structures and understand how those structures arise from the underlying physical forces.” 

How can India decarbonize its coal-dependent electric power system?

Tue, 05/06/2025 - 5:00pm

As the world struggles to reduce climate-warming carbon emissions, India has pledged to do its part, and its success is critical: In 2023, India was the third-largest carbon emitter worldwide. The Indian government has committed to having net-zero carbon emissions by 2070.

To fulfill that promise, India will need to decarbonize its electric power system, and that will be a challenge: Fully 60 percent of India’s electricity comes from coal-burning power plants that are extremely inefficient. To make matters worse, the demand for electricity in India is projected to more than double in the coming decade due to population growth and increased use of air conditioning, electric cars, and so on.

Despite having set an ambitious target, the Indian government has not proposed a plan for getting there. Indeed, as in other countries, in India the government continues to permit new coal-fired power plants to be built, and aging plants to be renovated and their retirement postponed.

To help India define an effective — and realistic — plan for decarbonizing its power system, key questions must be addressed. For example, India is already rapidly developing carbon-free solar and wind power generators. What opportunities remain for further deployment of renewable generation? Are there ways to retrofit or repurpose India’s existing coal plants that can substantially and affordably reduce their greenhouse gas emissions? And do the responses to those questions differ by region?

With funding from IHI Corp. through the MIT Energy Initiative (MITEI), Yifu Ding, a postdoc at MITEI, and her colleagues set out to answer those questions by first using machine learning to determine the efficiency of each of India’s current 806 coal plants, and then investigating the impacts that different decarbonization approaches would have on the mix of power plants and the price of electricity in 2035 under increasingly stringent caps on emissions.

First step: Develop the needed dataset

An important challenge in developing a decarbonization plan for India has been the lack of a complete dataset describing the current power plants in India. While other studies have generated plans, they haven’t taken into account the wide variation in the coal-fired power plants in different regions of the country. “So, we first needed to create a dataset covering and characterizing all of the operating coal plants in India. Such a dataset was not available in the existing literature,” says Ding.

Making a cost-effective plan for expanding the capacity of a power system requires knowing the efficiencies of all the power plants operating in the system. For this study, the researchers used as their metric the “station heat rate,” a standard measurement of the overall fuel efficiency of a given power plant. The station heat rate of each plant is needed in order to calculate the fuel consumption and power output of that plant as plans for capacity expansion are being developed.
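
As a quick, assumed-numbers illustration of what the station heat rate captures, the snippet below converts a heat rate into a thermal efficiency and a rough annual coal requirement; none of the figures are taken from the study's dataset.

```python
# Worked example with assumed numbers (not values from the MIT dataset).
# The station heat rate is the fuel energy burned per unit of electricity generated.

heat_rate_kcal_per_kwh = 2600      # assumed heat rate for an older subcritical plant
kcal_per_kwh_electric = 860        # 1 kWh of electricity is equivalent to ~860 kcal of heat

efficiency = kcal_per_kwh_electric / heat_rate_kcal_per_kwh
print(f"Thermal efficiency: {efficiency:.1%}")            # roughly 33 percent

# Rough annual coal requirement for a plant of a given size and utilization.
capacity_mw = 500                   # assumed plant capacity
load_factor = 0.6                   # assumed fraction of the year at full output
coal_kcal_per_kg = 4000             # assumed calorific value of the coal

annual_generation_kwh = capacity_mw * 1000 * 8760 * load_factor
annual_coal_tonnes = annual_generation_kwh * heat_rate_kcal_per_kwh / coal_kcal_per_kg / 1000
print(f"Estimated coal use: {annual_coal_tonnes / 1e6:.1f} million tonnes per year")
```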

Some of the Indian coal plants’ efficiencies were recorded before 2022, so Ding and her team used machine-learning models to predict the efficiencies of all the Indian coal plants operating now. In 2024, they created and posted online the first comprehensive, open-sourced dataset for all 806 power plants in 30 regions of India. The work won the 2024 MIT Open Data Prize. This dataset includes each plant’s power capacity, efficiency, age, load factor (a measure indicating how much of the time it operates), water stress, and more.

In addition, they categorized each plant according to its boiler design. A “supercritical” plant operates at a relatively high temperature and pressure, which makes it thermodynamically efficient, so it produces a lot of electricity for each unit of heat in the fuel. A “subcritical” plant runs at a lower temperature and pressure, so it’s less thermodynamically efficient. Most of the Indian coal plants are still subcritical plants running at low efficiency.

Next step: Investigate decarbonization options

Equipped with their detailed dataset covering all the coal power plants in India, the researchers were ready to investigate options for responding to tightening limits on carbon emissions. For that analysis, they turned to GenX, a modeling platform that was developed at MITEI to help guide decision-makers as they make investments and other plans for the future of their power systems.

Ding built a GenX model based on India’s power system in 2020, including details about each power plant and transmission network across 30 regions of the country. She also entered the coal price, potential resources for wind and solar power installations, and other attributes of each region. Based on the parameters given, the GenX model would calculate the lowest-cost combination of equipment and operating conditions that can fulfill a defined future level of demand while also meeting specified policy constraints, including limits on carbon emissions. The model and all data sources were also released as open-source tools for all viewers to use.
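
To give a flavor of what such a least-cost model does (GenX itself is far more detailed, with hourly dispatch, storage, transmission, and more), the toy linear program below picks a generation mix that meets demand at minimum cost subject to a renewable-resource limit and a carbon cap, using made-up costs and constraints.

```python
import numpy as np
from scipy.optimize import linprog

# A deliberately tiny least-cost planning problem in the spirit of GenX.
# All numbers below are made up for illustration only.

techs = ["coal", "solar", "wind"]
annualized_cost = np.array([80.0, 50.0, 60.0])   # assumed $ per MWh served
emissions = np.array([0.95, 0.0, 0.0])           # assumed tonnes CO2 per MWh
demand = 1000.0                                  # MWh to be served
renewable_limit = 700.0                          # max solar + wind (resource/supply-chain cap)
carbon_cap = 400.0                               # tonnes CO2 allowed

# Minimize cost subject to: meet demand, respect the renewable limit and the carbon cap.
res = linprog(
    c=annualized_cost,
    A_ub=np.vstack([
        -np.ones(3),          # -(coal + solar + wind) <= -demand  (meet demand)
        [0.0, 1.0, 1.0],      # solar + wind <= renewable_limit
        emissions,            # total emissions <= carbon_cap
    ]),
    b_ub=[-demand, renewable_limit, carbon_cap],
    bounds=[(0, None)] * 3,
    method="highs",
)

for name, mwh in zip(techs, res.x):
    print(f"{name}: {mwh:.0f} MWh")
print(f"Total cost: ${res.fun:,.0f}")
```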

Ding and her colleagues — Dharik Mallapragada, a former principal research scientist at MITEI who is now an assistant professor of chemical and biomolecular engineering at NYU Tandon School of Engineering and a MITEI visiting scientist; and Robert J. Stoner, the founding director of the MIT Tata Center for Technology and Design and former deputy director of MITEI for science and technology — then used the model to explore options for meeting demands in 2035 under progressively tighter carbon emissions caps, taking into account region-to-region variations in the efficiencies of the coal plants, the price of coal, and other factors. They describe their methods and their findings in a paper published in the journal Energy for Sustainable Development.

In separate runs, they explored plans involving various combinations of current coal plants, possible new renewable plants, and more, to see their outcome in 2035. Specifically, they assumed the following four “grid-evolution” scenarios:

Baseline: The baseline scenario assumes limited onshore wind and solar photovoltaics development and excludes retrofitting options, representing a business-as-usual pathway.

High renewable capacity: This scenario calls for the development of onshore wind and solar power without any supply chain constraints.

Biomass co-firing: This scenario assumes the baseline limits on renewables, but here all coal plants — both subcritical and supercritical — can be retrofitted for “co-firing” with biomass, an approach in which clean-burning biomass replaces some of the coal fuel. Certain coal power plants in India already co-fire coal and biomass, so the technology is known.

Carbon capture and sequestration plus biomass co-firing: This scenario is based on the same assumptions as the biomass co-firing scenario with one addition: All of the high-efficiency supercritical plants are also retrofitted for carbon capture and sequestration (CCS), a technology that captures and removes carbon from a power plant’s exhaust stream and prepares it for permanent disposal. Thus far, CCS has not been used in India. This study specifies that 90 percent of all carbon in the power plant exhaust is captured.

Ding and her team investigated power system planning under each of those grid-evolution scenarios and four assumptions about carbon caps: no cap, which is the current situation; 1,000 million tons (Mt) of carbon dioxide (CO2) emissions, which reflects India’s announced targets for 2035; and two more-ambitious targets, namely 800 Mt and 500 Mt. For context, CO2 emissions from India’s power sector totaled about 1,100 Mt in 2021. (Note that transmission network expansion is allowed in all scenarios.)

Key findings

Assuming the adoption of carbon caps under the four scenarios generated a vast array of detailed numerical results. But taken together, the results show interesting trends in the cost-optimal mix of generating capacity and the cost of electricity under the different scenarios.

Even without any limits on carbon emissions, most new capacity additions will be wind and solar generators — the lowest-cost option for expanding India’s electricity-generation capacity. Indeed, this is observed to be the case now in India. However, the increasing demand for electricity will still require some new coal plants to be built. Model results show a 10 to 20 percent increase in coal plant capacity by 2035 relative to 2020.

Under the baseline scenario, renewables are expanded up to the maximum allowed under the assumptions, implying that more deployment would be economical. More coal capacity is built, and as the cap on emissions tightens, there is also investment in natural gas power plants, as well as batteries to help compensate for the now-large amount of intermittent solar and wind generation. When a 500 Mt cap on carbon is imposed, the cost of electricity generation is twice as high as it was with no cap.

The high renewable capacity scenario reduces the development of new coal capacity and produces the lowest electricity cost of the four scenarios. Under the most stringent cap — 500 Mt — onshore wind farms play an important role in bringing the cost down. “Otherwise, it’ll be very expensive to reach such stringent carbon constraints,” notes Ding. “Certain coal plants that remain run only a few hours per year, so are inefficient as well as financially unviable. But they still need to be there to support wind and solar.” She explains that other backup sources of electricity, such as batteries, are even more costly. 

The biomass co-firing scenario assumes the same capacity limit on renewables as in the baseline scenario, and the results are much the same, in part because the biomass replaces such a low fraction — just 20 percent — of the coal in the fuel feedstock. “This scenario would be most similar to the current situation in India,” says Ding. “It won’t bring down the cost of electricity, so we’re basically saying that adding this technology doesn’t contribute effectively to decarbonization.”

But CCS plus biomass co-firing is a different story. It also assumes the limits on renewables development, yet it is the second-best option in terms of reducing costs. Under the 500 Mt cap on CO2 emissions, retrofitting for both CCS and biomass co-firing produces a 22 percent reduction in the cost of electricity compared to the baseline scenario. In addition, as the carbon cap tightens, this option reduces the extent of deployment of natural gas plants and significantly improves overall coal plant utilization. That increased utilization “means that coal plants have switched from just meeting the peak demand to supplying part of the baseline load, which will lower the cost of coal generation,” explains Ding.

Some concerns

While those trends are enlightening, the analyses also uncovered some concerns for India to consider, in particular, with the two approaches that yielded the lowest electricity costs.

The high renewables scenario is, Ding notes, “very ideal.” It assumes that there will be little limiting the development of wind and solar capacity, so there won’t be any issues with supply chains, which is unrealistic. More importantly, the analyses showed that implementing the high renewables approach would create uneven investment in renewables across the 30 regions. Resources for onshore and offshore wind farms are mainly concentrated in a few regions in western and southern India. “So all the wind farms would be put in those regions, near where the rich cities are,” says Ding. “The poorer cities on the eastern side, where the coal power plants are, will have little renewable investment.”

So the approach that’s best in terms of cost is not best in terms of social welfare, because it tends to benefit the rich regions more than the poor ones. “It’s like [the government will] need to consider the trade-off between energy justice and cost,” says Ding. Enacting state-level renewable generation targets could encourage a more even distribution of renewable capacity installation. Also, as transmission expansion is planned, coordination among power system operators and renewable energy investors in different regions could help in achieving the best outcome.

CCS plus biomass co-firing — the second-best option for reducing prices — solves the equity problem posed by high renewables, and it assumes a more realistic level of renewable power adoption. However, CCS hasn’t been used in India, so there is no precedent in terms of costs. The researchers therefore based their cost estimates on the cost of CCS in China and then increased the required investment by 10 percent, the “first-of-a-kind” index developed by the U.S. Energy Information Administration. Based on those costs and other assumptions, the researchers conclude that coal plants with CCS could come into use by 2035 when the carbon cap for power generation is less than 1,000 Mt.

But will CCS actually be implemented in India? While there’s been discussion about using CCS in heavy industry, the Indian government has not announced any plans for implementing the technology in coal-fired power plants. Indeed, India is currently “very conservative about CCS,” says Ding. “Some researchers say CCS won’t happen because it’s so expensive, and as long as there’s no direct use for the captured carbon, the only thing you can do is put it in the ground.” She adds, “It’s really controversial to talk about whether CCS will be implemented in India in the next 10 years.”

Ding and her colleagues hope that other researchers and policymakers — especially those working in developing countries — may benefit from gaining access to their datasets and learning about their methods. Based on their findings for India, she stresses the importance of understanding the detailed geographical situation in a country in order to design plans and policies that are both realistic and equitable.

Philip Khoury to step down as vice provost for the arts

Tue, 05/06/2025 - 12:50pm

MIT Provost Cynthia Barnhart has announced that Vice Provost for the Arts Philip S. Khoury will step down from the position on Aug. 31. Khoury, the Ford International Professor of History, served in the role for 19 years. After a sabbatical, he will rejoin the faculty in the School of Humanities, Arts, and Social Sciences (SHASS).

“Since arriving at MIT in 1981, Philip has championed what he calls the Institute’s ‘artistic ecosystem,’ which sits at the intersection of technology, science, the humanities, and the arts. Thanks to Philip’s vision, this ecosystem is now a foundational element of MIT’s educational and research missions and a critical component of how we advance knowledge, understanding, and discovery in service to the world,” says Barnhart.

Khoury was appointed associate provost in 2006 by then-MIT president Susan Hockfield, with a double portfolio enhancing the Institute’s nonacademic arts programs and beginning a review of MIT’s international activities. Those programs include the List Visual Arts Center, the MIT Museum, the Center for Art, Science and Technology (CAST), and the Council for the Arts at MIT (CAMIT). After five years, the latter half of this portfolio evolved into the Office of the Vice Provost for International Activities. 

Khoury devoted most of his tenure to expanding the Institute’s arts infrastructure, promoting the visibility of its stellar arts faculty, and guiding the growth of student participation in the arts. Today, more than 50 percent of MIT undergraduates take arts classes, with more than 1,500 studying music.

“Philip has been a remarkable leader at MIT over decades. He has ensured that the arts are a prominent part of the MIT ‘mens-et-manus’ [‘mind-and-hand’] experience and that our community has the opportunity to admire, learn from, and participate in creative thinking in all realms,” says L. Rafael Reif, the Ray and Maria Stata Professor of Electrical Engineering and Computer Science and MIT president emeritus. “A historian — and a humanist at heart — Philip also played a crucial role in helping MIT develop a thoughtful international strategy in research and education.”

“I will miss my colleagues first and foremost as I leave this position behind,” says Khoury. “But I have been proud to see the quality of the faculty grow and the student interest in the arts grow almost exponentially, along with an awareness of how the arts are prospering at MIT.”

Stream of creativity

During his time as vice provost, he partnered with then-School of Architecture and Planning (SAP) dean Adèle Santos and SHASS dean Deborah Fitzgerald to establish CAST in 2012. The center encourages artistic collaborations and provides seed funds and research grants to students and faculty.

Khoury also helped oversee a significant expansion of the Institute’s art facilities, including the unique multipurpose design of the Theater Arts Building, the new MIT Museum, and the Edward and Joyce Linde Music Building. Along with the List Visual Arts Center, which will celebrate its 40th anniversary this year, these vibrant spaces “offer an opportunity for our students to do something different from what they came to MIT to do in science and engineering,” Khoury suggests. “It gives them an outlet to do other kinds of experimentation.”

“What makes the arts so successful here is that they are very much in the stream of creativity, which science and technology are all about,” he adds.

One of Khoury’s other long-standing goals has been to elevate the recognition of the arts faculty, “to show that the quality of what we do in those areas matches the quality of what we do in engineering and science,” he says.

“I will always remember Philip Khoury’s leadership and advocacy as dean of the School of Humanities and Social Sciences for changing the definition of the ‘A’ in SHASS from ‘and’ to ‘Arts.’ That small change had large implications for professional careers for artists, enrollments, and subject options that remain a source of renewal and strength to this day,” says Institute Professor Marcus Thompson.

Most recently, Khoury and his team, in collaboration with faculty, students, and staff from across the Institute, oversaw the development and production of MIT’s new festival of the arts, known as Artfinity. Launched in February and open to the public, the Institute-sponsored, campus-wide festival featured a series of 80 performing and visual arts events.

International activities

Khoury joined the faculty as an assistant professor in 1981 and later served as dean of SHASS between 1991 and 2006. In 2002, he was appointed the inaugural Kenan Sahin Dean of SHASS.

His academic focus made him a natural choice for the first coordinator of MIT international activities, a role he served in from 2006 to 2011. During that time, he traveled widely to learn more about the ways MIT faculty were engaged abroad, and he led the production of an influential report on the state of MIT’s international activities.

“We wanted to create a strategy, but not a foreign policy,” Khoury said of the report.

Khoury’s time in the international role led him to consider ways that collaborations with other countries should be balanced so as not to diminish MIT’s offerings at home, he says. He also looked for ways to encourage more collaborations with countries in sub-Saharan Africa, South America, and parts of the Middle East.

Future plans

Khoury was instrumental in establishing the Future of the Arts at MIT Committee, which was charged by Provost Barnhart in June 2024 in collaboration with Dean Hashim Sarkis of the School of Architecture and Planning and Dean Agustín Rayo of SHASS. The committee aims to find new ways to envision the place of arts at the Institute — a task that was last undertaken in 1987, he says. The committee submitted a draft report to Provost Barnhart in April. 

“I think it will hit the real sweet spot of where arts meet science and technology, but not where art is controlled by science and technology,” Khoury says. “I think the promotion of that, and the emphasis on that, among other connections with art, are really what we should be pushing for and developing.”

After he steps down as vice provost, Khoury plans to devote more time to writing two books: a personal memoir and a book about the Middle East. And he is looking forward to seeing how the arts at MIT will flourish in the near future. “I feel elated about where we’ve landed and where we’ll continue to go,” he says.

As Barnhart noted in her letter to the community, the Future of the Arts at MIT Committee's efforts, combined with Khoury staying on through the end of the summer, provide President Kornbluth, the incoming provost, and Khoury with the opportunity to reflect on the Institute’s path forward in this critical space.

Hybrid AI model crafts smooth, high-quality videos in seconds

Tue, 05/06/2025 - 12:15pm

What would a behind-the-scenes look at a video generated by an artificial intelligence model be like? You might think the process is similar to stop-motion animation, where many images are created and stitched together, but that’s not quite the case for “diffusion models” like OpenAI's Sora and Google's Veo 2.

Instead of producing a video frame-by-frame (or “autoregressively”), these systems process the entire sequence at once. The resulting clip is often photorealistic, but the process is slow and doesn’t allow for on-the-fly changes. 
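To make that distinction concrete, here is a minimal Python sketch of the two generation styles. It is purely illustrative, not code from OpenAI, Google, or the MIT team, and the model stand-ins, frame counts, and step counts are made-up assumptions.

```python
# Illustrative contrast between full-sequence ("diffusion-style") and
# frame-by-frame ("autoregressive") video generation. Toy stand-ins only.
import torch

T, C, H, W = 16, 3, 64, 64  # frames, channels, height, width (arbitrary sizes)

def full_sequence_generation(denoise_step, num_steps=50):
    """Diffusion-style: the whole clip is refined together, many times over."""
    video = torch.randn(T, C, H, W)      # start from noise for every frame at once
    for _ in range(num_steps):           # e.g., a 50-step refinement process
        video = denoise_step(video)      # each pass sees all frames simultaneously
    return video

def frame_by_frame_generation(predict_next, first_frame, num_frames=T):
    """Autoregressive: each new frame is predicted from the frames generated so far."""
    frames = [first_frame]
    for _ in range(num_frames - 1):
        frames.append(predict_next(torch.stack(frames)))  # only past frames are visible
    return torch.stack(frames)

# Toy stand-ins so the sketch runs end to end (not real models):
toy_denoise = lambda video: 0.98 * video
toy_predict = lambda past: past[-1] + 0.01 * torch.randn(C, H, W)

clip_a = full_sequence_generation(toy_denoise)                         # shape (16, 3, 64, 64)
clip_b = frame_by_frame_generation(toy_predict, torch.zeros(C, H, W))  # shape (16, 3, 64, 64)
```

The structural difference is visible in the loops: the diffusion-style routine revisits every frame on every pass, while the causal routine commits to each frame before the next one exists, which is what makes it fast and editable mid-generation.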

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Adobe Research have now developed a hybrid approach, called “CausVid,” to create videos in seconds. Much like a quick-witted student learning from a well-versed teacher, a full-sequence diffusion model trains an autoregressive system to swiftly predict the next frame while ensuring high quality and consistency. CausVid’s student model can then generate clips from a simple text prompt, turning a photo into a moving scene, extending a video, or altering its creations with new inputs mid-generation.
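The teacher-student idea can be sketched in a few lines of PyTorch. This is a hedged illustration under simplifying assumptions, not CausVid's actual training procedure: the "teacher" here is a stand-in that simply emits clips, and the "student" is a small causal network trained to predict each next frame of those clips.

```python
# Minimal teacher-to-student sketch: a causal "student" learns next-frame
# prediction from clips produced by a (stand-in) full-sequence "teacher."
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D = 16, 256  # frames per clip and feature size per frame (arbitrary)

class CausalStudent(nn.Module):
    """Predicts frame t+1 from frames 0..t, i.e., autoregressively."""
    def __init__(self, dim=D):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # causal by construction
        self.head = nn.Linear(dim, dim)

    def forward(self, frames):            # frames: (batch, time, D)
        hidden, _ = self.rnn(frames)
        return self.head(hidden)          # next-frame prediction at every step

def teacher_clips(batch=8):
    """Stand-in for a pretrained full-sequence generator (here: random clips)."""
    return torch.randn(batch, T, D)

student = CausalStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):                   # tiny training loop for illustration
    clip = teacher_clips()                # treat the teacher's output as the target
    pred = student(clip[:, :-1])          # student sees frames 0..T-2
    loss = F.mse_loss(pred, clip[:, 1:])  # match the teacher's next frames
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the published system the teacher is a pretrained video diffusion model and the distillation objective is more sophisticated; the sketch only captures the general setup the researchers describe, in which a full-sequence model supervises a frame-by-frame one.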

This dynamic tool enables fast, interactive content creation, cutting a 50-step process into just a few actions. It can craft many imaginative and artistic scenes, such as a paper airplane morphing into a swan, woolly mammoths venturing through snow, or a child jumping in a puddle. Users can also make an initial prompt, like “generate a man crossing the street,” and then make follow-up inputs to add new elements to the scene, like “he writes in his notebook when he gets to the opposite sidewalk.”

The CSAIL researchers say that the model could be used for different video editing tasks, like helping viewers understand a livestream in a different language by generating a video that syncs with an audio translation. It could also help render new content in a video game or quickly produce training simulations to teach robots new tasks.

Tianwei Yin SM ’25, PhD ’25, a recently graduated student in electrical engineering and computer science and CSAIL affiliate, attributes the model’s strength to its mixed approach.

“CausVid combines a pre-trained diffusion-based model with autoregressive architecture that’s typically found in text generation models,” says Yin, co-lead author of a new paper about the tool. “This AI-powered teacher model can envision future steps to train a frame-by-frame system to avoid making rendering errors.”

Yin’s co-lead author, Qiang Zhang, is a research scientist at xAI and a former CSAIL visiting researcher. They worked on the project with Adobe Research scientists Richard Zhang, Eli Shechtman, and Xun Huang, and two CSAIL principal investigators: MIT professors Bill Freeman and Frédo Durand.

Caus(Vid) and effect

Many autoregressive models can create a video that’s initially smooth, but the quality tends to drop off later in the sequence. A clip of a person running might seem lifelike at first, but their legs begin to flail in unnatural directions, indicating frame-to-frame inconsistencies (also called “error accumulation”).

Error-prone video generation was common in prior causal approaches, which learned to predict frames one by one on their own. CausVid instead uses a high-powered diffusion model to teach a simpler system its general video expertise, enabling it to create smooth visuals, but much faster.
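A toy calculation, offered as an assumption for illustration rather than a figure from the paper, shows why that drift matters: even a small per-frame error compounds quickly when every frame is built on the one before it.

```python
# Hypothetical compounding of a small per-frame error in purely causal generation.
error_per_frame = 0.02   # assume 2 percent relative error added at every step
frames = 120             # roughly a few seconds of video at typical frame rates
drift = 1.0
for _ in range(frames):
    drift *= 1 + error_per_frame
print(f"relative drift after {frames} frames: {drift:.1f}x")  # prints about 10.8x
```

Supervision from a teacher that has seen whole sequences is meant to keep that compounding in check.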

CausVid displayed its video-making aptitude when researchers tested its ability to make high-resolution, 10-second-long videos. It outperformed baselines like “OpenSORA” and “MovieGen,” working up to 100 times faster than its competition while producing the most stable, high-quality clips.

Then, Yin and his colleagues tested CausVid’s ability to put out stable 30-second videos, where it also topped comparable models on quality and consistency. These results indicate that CausVid may eventually produce stable, hours-long videos, or even videos of indefinite duration.

A subsequent study revealed that users preferred the videos generated by CausVid’s student model over its diffusion-based teacher.

“The speed of the autoregressive model really makes a difference,” says Yin. “Its videos look just as good as the teacher’s and take far less time to produce; the trade-off is that its visuals are less diverse.”

CausVid also excelled when tested on over 900 prompts using a text-to-video dataset, receiving the top overall score of 84.27. It boasted the best metrics in categories like imaging quality and realistic human actions, eclipsing state-of-the-art video generation models like “Vchitect” and “Gen-3.”

While an efficient step forward in AI video generation, CausVid may soon be able to design visuals even faster — perhaps instantly — with a smaller causal architecture. Yin says that if the model is trained on domain-specific datasets, it will likely create higher-quality clips for robotics and gaming.

Experts say that this hybrid system is a promising upgrade from diffusion models, which are currently bogged down by processing speeds. “[Diffusion models] are way slower than LLMs [large language models] or generative image models,” says Carnegie Mellon University Assistant Professor Jun-Yan Zhu, who was not involved in the paper. “This new work changes that, making video generation much more efficient. That means better streaming speed, more interactive applications, and lower carbon footprints.”

The team’s work was supported, in part, by the Amazon Science Hub, the Gwangju Institute of Science and Technology, Adobe, Google, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator. CausVid will be presented at the Conference on Computer Vision and Pattern Recognition in June.

How J-WAFS Solutions grants bring research to market

Tue, 05/06/2025 - 11:55am

For the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), 2025 marks a decade of translating groundbreaking research into tangible solutions for global challenges. Few examples illustrate that mission better than NONA Technologies. With support from a J-WAFS Solutions grant, MIT electrical engineering and biological engineering Professor Jongyoon Han and his team developed a portable desalination device that transforms seawater into clean drinking water without filters or high-pressure pumps. 

Conventional desalination technologies, like reverse osmosis, are energy-intensive, prone to fouling, and typically deployed at large, centralized plants. In contrast, the device developed in Han’s lab employs ion concentration polarization technology to remove salts and particles from seawater, producing potable water that exceeds World Health Organization standards. It is compact, solar-powered, and operable at the push of a button, making it an ideal solution for off-grid and disaster-stricken areas.

This research laid the foundation for NONA Technologies, spun out by co-founders Junghyo Yoon PhD ’21 of Han’s lab and Bruce Crawford MBA ’22 to commercialize the technology and address pressing water-scarcity issues worldwide. “This is really the culmination of a 10-year journey that I and my group have been on,” said Han in an earlier MIT News article. “We worked for years on the physics behind individual desalination processes, but pushing all those advances into a box, building a system, and demonstrating it in the ocean ... that was a really meaningful and rewarding experience for me.” A video showcasing the device in action is available online.

Moving breakthrough research out of the lab and into the world is a well-known challenge. While traditional “seed” grants typically support early-stage research at Technology Readiness Level (TRL) 1-2, few funding sources exist to help academic teams navigate to the next phase of technology development. The J-WAFS Solutions Program is strategically designed to address this critical gap by supporting technologies in the high-risk, early-commercialization phase that is often neglected by traditional research, corporate, and venture funding. By supporting technologies at TRLs 3-5, the program increases the likelihood that promising innovations will survive beyond the university setting, advancing sufficiently to attract follow-on funding.

Equally important, the program gives academic researchers the time, resources, and flexibility to de-risk their technology, explore customer needs and potential real-world applications, and determine whether and how they want to pursue commercialization. For faculty-led teams like Han’s, the J-WAFS Solutions Program provided the critical financial runway and entrepreneurial guidance needed to refine the technology, test assumptions about market fit, and lay the foundation for a startup team. While still in the MIT innovation ecosystem, NONA secured over $200,000 in non-dilutive funding through competitions and accelerators, including the prestigious MIT delta v Educational Accelerator. These early wins laid the groundwork for further investment and technical advancement.

Since spinning out of MIT, NONA has made major strides in both technology development and business viability. What started as a device capable of producing just over half-a-liter of clean drinking water per hour has evolved into a system that now delivers 10 times that capacity, at 5 liters per hour. The company successfully raised a $3.5 million seed round to advance its portable desalination device, and entered into a collaboration with the U.S. Army Natick Soldier Systems Center, where it co-developed early prototypes and began generating revenue while validating the technology. Most recently, NONA was awarded two SBIR Phase I grants totaling $575,000, one from the National Science Foundation and another from the National Institute of Environmental Health Sciences.

Now operating out of Greentown Labs in Somerville, Massachusetts, NONA has grown to a dedicated team of five and is preparing to launch its nona5 product later this year, with a wait list of over 1,000 customers. It is also kicking off its first industrial pilot, marking a key step toward commercial scale-up. “Starting a business as a postdoc was challenging, especially with limited funding and industry knowledge,” says Yoon, who currently serves as CTO of NONA. “J-WAFS gave me the financial freedom to pursue my venture, and the mentorship pushed me to hit key milestones. Thanks to J-WAFS, I successfully transitioned from an academic researcher to an entrepreneur in the water industry.”

NONA is one of several J-WAFS-funded technologies that have moved from the lab to market, part of a growing portfolio of water and food solutions advancing through MIT’s innovation pipeline. As J-WAFS marks a decade of catalyzing innovation in water and food, NONA exemplifies what is possible when mission-driven research is paired with targeted early-stage support and mentorship.

To learn more or get involved in supporting startups through the J-WAFS Solutions Program, please contact jwafs@mit.edu.

If time is money, here’s one way consumers value it

Tue, 05/06/2025 - 12:00am

As the saying goes, time is money. That’s certainly evident in the transportation sector, where people will pay more for direct flights, express trains, and other ways to get somewhere quickly.

Still, it is difficult to measure precisely how much people value their time. Now, a paper co-authored by an MIT economist uses ride-sharing data to reveal multiple implications of personalized pricing.

By focusing on a European ride-sharing platform that auctions its rides, the researchers found that people are more responsive to prices than to wait times. They also found that people pay more to save time during the workday, and that when people pay more to avoid waiting, it notably increases business revenues. And some segments of consumers are distinctly more willing than others to pay higher prices.

Specifically, when people can bid for rides that arrive sooner, the amount above the minimum price the platform can charge increases by 5.2 percent. Meanwhile, the gap between offered prices and the maximum that consumers are willing to pay decreases by 2.5 percent. In economics terms, this creates additional “surplus” value for firms, while lowering the “consumer surplus” in these transactions.

“One of the important quantities in transportation is the value of time,” says MIT economist Tobias Salz, co-author of a new paper detailing the study’s findings. “We came across a setting that offered a very clean way of examining this quantity, where the value of time is revealed by people’s transportation choices.”

The paper, “Personalized Pricing and the Value of Time: Evidence from Auctioned Cab Rides,” is being published in Econometrica. The authors are Nicholas Buchholz, an assistant professor of economics at Princeton University; Laura Doval, a professor at Columbia Business School; Jakub Kastl, a professor of economics at Princeton University; Filip Matejka, a professor at Charles University in Prague; and Salz, the Castle Krob Career Development Associate Professor of Economics in MIT’s Department of Economics.

It is not easy to study how much money people will spend to save time — and time alone. Transportation is one sector where it is possible to do so, though not the only one. People will also pay more for, say, an express pass to avoid long lines at an amusement park. But data for those scenarios, even when available, may contain complicating factors. (Also, the value of time shouldn’t be confused with how much people pay for services charged by the hour, from accountants to tuba lessons.)

In this case, however, the researchers were provided data from Liftago, a ride-sharing platform in Prague with a distinctive feature: It lets drivers bid on a passenger’s business, with the wait time until the car arrives as one of the factors involved. Drivers can also indicate when they will be available. In studying how passengers compare offers with different wait times and prices, the researchers see exactly how much people are paying not to wait, other things being equal. All told, they examined 1.9 million ride requests and 5.2 million bids.

“It’s like an eBay for taxis,” Salz says. “Instead of assigning the driver to you, drivers bid for the passengers’ business. With this, we can very directly observe how people make their choices. How they value time is revealed by the wait and the prices attached to that. In many settings we don’t observe that directly, so it’s a very clean comparison that rids the data of a lot of confounds.”

The data set allows the researchers to examine many aspects of personalized pricing and the way it affects the transportation market in this setting. That produces a set of insights on its own, along with the findings on time valuation. 

Ultimately, the researchers found that demand was four to 10 times more elastic with respect to price than to wait time, meaning riders are considerably more intent on avoiding high prices than on avoiding a longer wait.

The team found the overall value of time in this context is $13.21 per hour for users of the ride-share platform, though the researchers note that is not a universal measure of the value of time and is dependent on this setting. The study also shows that bids increase during work hours.
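The measurement logic behind a figure like that can be illustrated with a small, hedged sketch: if a rider's utility for a bid falls with both its price and its wait, the implied value of time is the ratio of the wait sensitivity to the price sensitivity. The Python example below is not the authors' model or data; it simulates choices under assumed parameters and recovers that ratio with a simple logit fit.

```python
# Hedged illustration of revealed value of time: simulate riders choosing between
# two bids, then recover the price and wait sensitivities with a binary logit.
import numpy as np

rng = np.random.default_rng(0)
true_a, true_b = 0.30, 0.07   # assumed disutility per dollar and per minute of waiting

n = 5000
price = rng.uniform(5, 25, size=(n, 2))    # two competing bids: price in dollars
wait = rng.uniform(2, 20, size=(n, 2))     # and wait in minutes
utility = -true_a * price - true_b * wait + rng.gumbel(size=(n, 2))
chose_second = (utility[:, 1] > utility[:, 0]).astype(float)

# With Gumbel noise, P(choose the first bid) = sigmoid(a*dprice + b*dwait),
# where the differences are (second bid minus first). Fit [a, b] by gradient ascent.
dx = np.stack([price[:, 1] - price[:, 0], wait[:, 1] - wait[:, 0]], axis=1)
theta = np.zeros(2)
for _ in range(5000):
    p_first = 1.0 / (1.0 + np.exp(-(dx @ theta)))
    gradient = dx.T @ ((1.0 - chose_second) - p_first) / n
    theta += 0.01 * gradient

a_hat, b_hat = theta
print(f"estimated value of time: ${b_hat / a_hat * 60:.2f} per hour")
# The simulated truth is 0.07 / 0.30 * 60 = $14 per hour; the estimate should land nearby.
```

The paper's demand model is richer than this two-bid logit, but the underlying idea is the same: the value of time falls out as a ratio of how strongly riders respond to waits versus prices.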

Additionally, the research reveals a split among consumers: Bidders in the top quartile placed a value on time 3.5 times higher than bidders in the bottom quartile.

Then there is still the question of how much personalized pricing benefits consumers, providers, or both. The numbers, again, show that the overall surplus increases — meaning business benefits — while the consumer surplus is reduced. However, the data show an even more nuanced picture. Because the top quartile of bidders are paying substantially more to avoid longer waits, they are the ones who absorb the brunt of the costs in this kind of system.

“The majority of consumers still benefit,” Salz says. “The consumers hurt by this have a very high willingness to pay. The source of welfare gains is that most consumers can be brought into the market. But the flip side is that the firm, by knowing every consumer’s choke point, can extract the surplus. Welfare goes up, the ride-sharing platform captures most of that, and drivers — interestingly — also benefit from the system, although they do not have access to the data.”

Economic theory and other transportation studies alone would not necessarily have predicted the study’s results or their nuances.

“It was not clear a priori whether consumers benefit,” Salz observes. “That is not something you would know without going to the data.”

While this study might hold particular interest for firms and others interested in transportation, mobility, and ride-sharing, it also fits into a larger body of economics research about information in markets and how its presence, or absence, influences consumer behavior, consumer welfare, and the functioning of markets.

“The [research] umbrella here is really information about where to find trading partners and what their willingness to pay is,” Salz says. “What I’m broadly interested in is these types of information frictions and how they determine market outcomes, how they might impact consumers, and be used by firms.”

The research was supported, in part, by the National Bureau of Economic Research, the U.S. Department of Transportation, and the National Science Foundation. 
