MIT Latest News

This compact, low-power receiver could give a boost to 5G smart devices
MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices that need to run continuously for long periods, such as environmental sensors, smart thermostats, health wearables, smart cameras, and industrial monitoring sensors.
The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to turn on and off than those typically used in IoT receivers.
The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
“This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
A new standard
A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
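To make that pipeline concrete, here is a toy model of a receiver chain in Python; the gain, filter, and sample rate are illustrative stand-ins, not the MIT chip's specifications.

```python
# Toy model of the receiver chain described above: amplify a weak antenna
# signal, low-pass filter it to suppress out-of-band interference, then
# quantize it into digital codes. All values are illustrative.
import math

def receive(samples, gain=1000.0, alpha=0.05, bits=8):
    """Amplify -> one-pole low-pass filter -> uniform quantizer."""
    full_scale = 2 ** (bits - 1) - 1
    y, codes = 0.0, []
    for x in samples:
        v = gain * x                      # low-noise amplification
        y += alpha * (v - y)              # simple band-limiting filter
        q = max(-full_scale, min(full_scale, round(y * full_scale)))
        codes.append(q)                   # digitized sample for processing
    return codes

# A weak 1 kHz tone sampled at 100 kHz spans usable digital codes:
tone = [1e-3 * math.sin(2 * math.pi * 1e3 * n / 1e5) for n in range(500)]
codes = receive(tone)
print(max(codes), min(codes))
```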
Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
“This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.
One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
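A small numerical experiment illustrates the problem (a generic hard-switching mixer model, not the researchers' circuit): a switch driven by a square wave also responds to odd harmonics of its tuned frequency, so interferers at three and five times that frequency fold down to baseband along with the wanted signal.

```python
# Why harmonic interference arises in switched receivers (generic model, not
# the MIT circuit): a square-wave switch contains odd harmonics, so tones at
# 3x and 5x the tuned frequency also downconvert to baseband.
import math

def square(theta):
    """Hard-switching local oscillator: +1 or -1 from the sign of a sine."""
    return 1.0 if math.sin(theta) >= 0 else -1.0

f_lo, n, window = 1.0e6, 100_000, 1.0e-3   # 1 MHz switch, 1 ms of samples
dt = window / n
for f_in in (1e6, 2e6, 3e6, 5e6):
    dc = sum(math.sin(2 * math.pi * f_in * k * dt)
             * square(2 * math.pi * f_lo * k * dt) for k in range(n)) / n
    print(f"{f_in / 1e6:.0f} MHz tone -> baseband output {dc:+.3f}")
# Roughly: 1 MHz -> +0.64, 2 MHz -> 0, 3 MHz -> +0.21, 5 MHz -> +0.13;
# the odd harmonics leak through, and that leakage is what must be filtered.
```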
Shrinking the circuit
Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
“This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
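The relationship behind that trick is simple: a capacitor C placed in the feedback path of an inverting amplifier with gain -A presents an effective input capacitance of C(1 + A). A minimal sketch with hypothetical values:

```python
# Miller effect: a feedback capacitor across an inverting amplifier with
# gain -A looks like a capacitor of value C * (1 + A) at the amplifier
# input. The capacitance and gain below are hypothetical, not the chip's.

def miller_effective_capacitance(c_farads: float, gain_a: float) -> float:
    """Effective input capacitance of feedback capacitor C across gain -A."""
    return c_farads * (1.0 + gain_a)

c_physical = 1e-12   # a 1 pF on-chip capacitor (illustrative)
gain_a = 20.0        # amplifier gain magnitude (illustrative)
print(miller_effective_capacitance(c_physical, gain_a))  # ~2.1e-11 F (21 pF)
```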
Their receiver has an active area of less than 0.05 square millimeters.
One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
To address this, the researchers used a circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably, while using less power and fewer components than traditional clock-boosting methods.
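The essence of clock bootstrapping fits in a back-of-the-envelope sketch (illustrative, not the chip's actual implementation): a capacitor precharged to the supply rail is stacked on top of the rising clock edge, momentarily roughly doubling the available gate drive.

```python
# Back-of-the-envelope view of bootstrap clocking (illustrative, not the
# chip's actual circuit): precharge a capacitor to the supply while the
# clock is low, then stack it on the rising clock edge so the switch gate
# briefly sees about twice the supply voltage.
V_DD = 0.6                      # the chip's 0.6 V supply (from the article)
v_cap = V_DD                    # capacitor precharged to the supply rail
v_gate_boosted = V_DD + v_cap   # gate drive right after the clock edge
print(f"gate drive: {v_gate_boosted:.1f} V from a {V_DD:.1f} V supply")
```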
Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
“Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
This research is supported, in part, by the National Science Foundation.
Gaspare LoDuca named VP for information systems and technology and CIO
Gaspare LoDuca has been appointed MIT’s vice president for information systems and technology (IS&T) and chief information officer, effective Aug. 18. Currently vice president for information technology and CIO at Columbia University, LoDuca has held IT leadership roles in or related to higher education for more than two decades. He succeeds Mark Silis, who led IS&T from 2019 until 2024, when he left MIT to return to the entrepreneurial ecosystem in the San Francisco Bay area.
Executive Vice President and Treasurer Glen Shor announced the appointment today in an email to MIT faculty and staff.
“I believe that Gaspare will be an incredible asset to MIT, bringing wide-ranging experience supporting faculty, researchers, staff, and students and a highly collaborative style,” says Shor. “He is eager to start his work with our talented IS&T team to chart and implement their contributions to the future of information technology at MIT.”
LoDuca will lead the IS&T organization and oversee MIT’s information technology infrastructure and services that support its research and academic enterprise across student and administrative systems, network operations, cloud services, cybersecurity, and customer support. As co-chair of the Information Technology Governance Committee, he will guide the development of IT policy and strategy at the Institute. He will also play a key role in MIT’s effort to modernize its business processes and administrative systems, working in close collaboration with the Business and Digital Transformation Office.
“Gaspare brings to his new role extensive experience leading a complex IT organization,” says Provost Cynthia Barnhart, who served as one of Shor’s advisors during the search process. “His depth of experience, coupled with his vision for the future state of information technology and digital transformation at MIT, are compelling, and I am excited to see the positive impact he will have here.”
“As I start my new role, I plan to learn more about MIT’s culture and community to ensure that any decisions or changes we make are shaped by the community’s needs and carried out in a way that fits the culture. I’m also looking forward to learning more about the research and work being done by students and faculty to advance MIT’s mission. It’s inspiring, and I’m eager to support their success,” says LoDuca.
In his role at Columbia, LoDuca has overseen the IT department, headed IT governance committees for school and department-level IT functions, and ensured the secure operation of the university’s enterprise-class systems since 2015. During his tenure, he has crafted a culture of customer service and innovation — building a new student information system, identifying emerging technologies for use in classrooms and labs, and creating a data-sharing platform for university researchers and a grants dashboard for principal investigators. He also revamped Columbia’s technology infrastructure and implemented tools to ensure the security and reliability of its technology resources.
Before joining Columbia, LoDuca was the technology managing director for the education practice at Accenture from 1998 to 2015. In that role, he helped universities to develop and implement technology strategies and adopt modern applications and systems. His projects included overseeing the implementation of finance, human resources, and student administration systems for clients such as Columbia University, University of Miami, Carnegie Mellon University, the University System of Georgia, and Yale University.
“At a research institution, there’s a wide range of activities happening every day, and our job in IT is to support them all while also managing cybersecurity risks. We need to be creative and thoughtful in our solutions, and consider the needs and expectations of our community,” he says.
LoDuca holds a bachelor’s degree in chemical engineering from Michigan State University. He and his wife are recent empty nesters, and are in the process of relocating to Boston.
Closing in on superconducting semiconductors
In 2023, data centers, which are essential for processing large quantities of information, consumed about 4.4 percent (176 terawatt-hours) of the electricity used in the United States. Of that 176 TWh, approximately 100 TWh (57 percent) went to CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.
Superconducting electronics have emerged as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient-temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”
Moodera was working on a stubborn problem. One critical, long-standing requirement is the efficient conversion of AC currents into DC currents on a chip operating at the extremely cold cryogenic temperatures that superconductors need to work efficiently. For example, in superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, the AC-to-DC issue limits scalability and prevents the use of ERSFQs in larger circuits of higher complexity. To respond to this need, Moodera and his team created superconducting diode (SD)-based superconducting rectifiers — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.
Quantum computer circuits can only operate at temperatures close to 0 kelvin (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Using superconducting rectifiers to convert AC currents into DC within the cryogenic environment instead reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.
In a 2023 experiment, Moodera and his co-authors developed SDs made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) current flow and could be the superconducting counterpart to standard semiconductor diodes. Even though SDs have garnered significant attention, especially since 2020, up until that point research had focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.
Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures.
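Functionally, the four diodes act as a classic full-wave bridge. The sketch below models an idealized bridge numerically (the rectifier's function only, not the paper's superconducting device physics) to show how the arrangement turns an AC drive into a DC level.

```python
# Idealized full-wave diode bridge: on each half-cycle, two of the four
# diodes conduct, so the load always sees the magnitude of the AC input.
# This models only the function, not superconducting device physics.
import math

def bridge_output(v_ac: float) -> float:
    """Load voltage of an ideal four-diode bridge."""
    return abs(v_ac)

n = 10_000
avg = sum(bridge_output(math.sin(2 * math.pi * k / n)) for k in range(n)) / n
print(f"DC output of an ideal bridge: {avg:.3f} x the AC amplitude")  # ~0.637
```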
The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators/circulators, assisting in insulating qubit signals from external influence. The successful assimilation of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality.
“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance the qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward the integration of such devices into actual superconducting logic circuits, including in dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Lawrence Berkeley National Laboratory.
This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research.
A brief history of the global economy, through the lens of a single barge
In 1989, New York City opened a new jail. But not on dry land. The city leased a barge, then called the “Bibby Resolution,” which had been topped with five stories of containers made into housing, and anchored it in the East River. For five years, the vessel lodged inmates.
A floating detention center is a curiosity. But then, the entire history of this barge is curious. Built in 1979 in Sweden, it housed British troops during the Falkland Islands war with Argentina, became worker housing for Volkswagen employees in West Germany, was sent to New York, served again as a detention center off the coast of England, and finally was deployed as oil worker housing off the coast of Nigeria. The barge has had nine names and several owners, and has flown the flags of five countries.
In this one vessel, then, we can see many currents: globalization, the transience of economic activity, and the hazy world of transactions many analysts and observers call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.
“The offshore presents a quick and potentially cheap solution to a crisis,” says MIT lecturer Ian Kumekawa. “It is not a durable solution. The story of the barge is the story of it being used as a quick fix in all sorts of crises. Then these expediences become the norm, and people get used to them and have an expectation that this is the way the world works.”
Now Kumekawa, a historian who started teaching as a lecturer at MIT earlier this year, explores the ship’s entire history in “Empty Vessel: The Global Economy in One Barge,” just published by Knopf and John Murray. In it, he traces the barge’s trajectory and the many economic and geopolitical changes that helped create the ship’s distinctive deployments around the world.
“The book is about a barge, but it’s also about the developing, emerging offshore world, where you see these layers of globalization, financialization, privatization, and the dissolution of territoriality and orders,” Kumekawa says. “The barge is a vehicle through which I can tell the story of those layers together.”
“Never meant to be permanent”
Kumekawa first found out about the vessel several years ago; New York City obtained another floating detention center in the 1990s, which prompted Kumekawa to start looking into the past of the older jail ship, the former “Bibby Resolution.” The more he found out about its distinctive past, the more curious he became.
“You start pulling on a thread, and you realize you can keep pulling,” Kumekawa says.
The barge Kumekawa follows in the book was built in Sweden in 1979 as the “Balder Scapa.” Even then, commerce was plenty globalized: The vessel was commissioned by a Norwegian shell company, with negotiations run by an expatriate Swedish shipping agent whose firm was registered in Panama and used a Miami bank.
The barge was built at an inflection point following the economic slowdown and oil shocks of the 1970s. Manufacturing was on the verge of declining in both Western Europe and the U.S.; about half as many people now work in manufacturing in those regions, compared to 1960. Companies were looking to find cheaper global locations for production, reinforcing the sense that economic activity was now less durable in any given place.
The barge became part of this transience. The five-story accommodation block was added in the early 1980s; in 1983 it was re-registered in the UK and sent to the Falkland Islands as a troop accommodation named the “COASTEL 3.” Then it was re-registered in the Bahamas and sent to Emden, West Germany, as housing for Volkswagen workers. The vessel then served its stints as inmate housing — first in New York, then off the coast of England from 1997 to 2005. By 2010, it had been re-registered yet again, in St. Vincent and the Grenadines, and was housing oil workers off the coast of Nigeria.
“Globalization is more about flow than about stocks, and the barge is a great example of that,” Kumekawa says. “It’s always on the move, and never meant to be a permanent container. It’s understood people are going to be passing through.”
As Kumekawa explores in the book, this sense of social dislocation overlapped with the shrinking of state capacity, as many states increasingly encouraged companies to pursue globalized production and lightly regulated financial activities in numerous jurisdictions, in the hope that this would enhance growth. And it has, albeit with unresolved questions about who reaps the benefits, the social dislocation of workers, and more.
“In a certain sense it’s not an erosion of state power at all,” Kumekawa says. “These states are making very active choices to use offshore tools, to circumvent certain roadblocks.” He adds: “What happens in the 1970s and certainly in the 1980s is that the offshore comes into its own as an entity, and didn’t exist in the same way even in the 1950s and 1960s. There’s a money interest in that, and there’s a political interest as well.”
Abstract forces, real materials and people
Kumekawa is a scholar with a strong interest in economic history; his previous book, “The First Serious Optimist: A.C. Pigou and the Birth of Welfare Economics,” was published in 2017. This coming fall, Kumekawa will be team-teaching a class on the relationship between economics and history, along with MIT economists Abhijit Banerjee and Jacob Moscona.
Working on “Empty Vessel” also necessitated that Kumekawa use a variety of research techniques, from archival work to journalistic interviews with people who knew the vessel well.
“I had a wonderful set of conversations with the man who was the last bargemaster,” Kumekawa says. “He was the person in effect steering the vessel for many years. He was so aware of all of the forces at play — the market for oil, the prices of accommodations, the regulations, the fact no one had reinforced the frame.”
“Empty Vessel” has already received critical acclaim. Reviewing it in The New York Times, Jennifer Szalai writes that this “elegant and enlightening book is an impressive feat.”
For his part, Kumekawa also took inspiration from a variety of writings about ships, voyages, commerce, and exploration, recognizing that these vessels contain stories and vignettes that illuminate the wider world.
“Ships work very well as devices connecting the global and the local,” he says. Using the barge as the organizing principle of his book, Kumekawa adds, “makes a whole bunch of abstract processes very concrete. The offshore itself is an abstraction, but it’s also entirely dependent on physical infrastructure and physical places. My hope for the book is it reinforces the material dimension of these abstract global forces.”
Students and staff work together for MIT’s first “No Mow May”
In recent years, some grass lawns around the country have grown a little taller in springtime thanks to No Mow May, a movement launched in 2019 by the U.K. nonprofit Plantlife to raise awareness of the ecological impacts of the traditional, resource-intensive, manicured grass lawn. No Mow May encourages people to skip spring mowing to let grass grow tall and provide food and shelter for beneficial creatures, including bees, beetles, and other pollinators.
This year, MIT took part in the practice for the first time, with portions of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard forgoing mowing from May 1 through June 6 to make space for local pollinators, decrease water use, and encourage new thinking about the traditional lawn. MIT’s first No Mow May was the result of championing by the Graduate Student Council Sustainability Subcommittee (GSC Sustain) and made possible by the Office of the Vice Provost for Campus Space Management and Planning.
A student idea sprouts
Despite being a dense urban campus, MIT has no shortage of green spaces — from pocket gardens and community-managed vegetable plots to thousands of shade trees — and interest in these spaces continues to grow. In recent years, student-led initiatives supported by Institute leadership and operational staff have transformed portions of campus by increasing the number of native pollinator plants and expanding community gardens, like the Hive Garden. With No Mow May, these efforts stepped out of the garden and into MIT’s many grassy open spaces.
“The idea behind it was to raise awareness for more sustainable and earth-friendly lawn practices,” explains Gianmarco Terrones, GSC Sustain member. Those practices include reducing the burden of mowing, limiting use of fertilizers, and providing shelter and food for pollinators. “The insects that live in these spaces are incredibly important in terms of pollination, but they’re also part of the food chain for a lot of animals,” says Terrones.
Research has shown that holding off on mowing in spring, even in small swaths of green space, can have an impact. The early months of spring have the lowest number of flowers in regions like New England, and providing a resource and refuge — even for a short duration — can support fragile pollinators like bees. Additionally, No Mow May aims to help people rethink their yards and practices, which are not always beneficial for local ecosystems.
Signage at each No Mow site on campus highlighted information on local pollinators, the impact of the project, and questions for visitors to ask themselves. “Having an active sign there to tell people, ‘Look around. How many butterflies do you see after six weeks of not mowing? Do you see more? Do you see more bees?’ can cause subtle shifts in people’s awareness of ecosystems,” says GSC Sustain member Mingrou Xie. A mowed barrier around each project also helped visitors know that areas of tall grass at No Mow sites are intentional.
Campus partners bring sustainable practices to life
To make MIT’s No Mow May possible, GSC Sustain members worked with the Office of the Vice Provost and the Open Space Working Group, co-chaired by Vice Provost for Campus Space Management and Planning Brent Ryan and Director of Sustainability Julie Newman. The Working Group, which also includes staff from Open Space Programming, Campus Planning, and faculty in the School of Architecture and Planning, helped to identify potential No Mow locations and develop strategies for educational signage and any needed maintenance. “Massachusetts is a biodiverse state, and No Mow May provides an exciting opportunity for MIT to support that biodiversity on its own campus,” says Ryan.
Students were eager for space on campus with high visibility, and the chosen locations of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard fit the bill. “We wanted to set an example and empower the community to feel like they can make a positive change to an environment they spend so much time in,” says Xie.
For GSC Sustain, that positive change also takes the form of the Native Plant Project, which they launched in 2022 to increase the number of Massachusetts-native pollinator plants on campus — plants like swamp milkweed, zigzag goldenrod, big leaf aster, and red columbine, with which native pollinators have co-evolved. Partnering with the Open Space Working Group, GSC Sustain is currently focused on two locations for new native plant gardens — the President’s Garden and the terrace gardens at the E37 Graduate Residence. “Our short-term goal is to increase the number of native [plants] on campus, but long term we want to foster a community of students and staff interested in supporting sustainable urban gardening,” says Xie.
Campus as a test bed continues to grow
After just a few weeks of growing, the campus No Mow May locations sprouted buttercups, mouse ear chickweed, and small tree saplings, highlighting the diversity waiting dormant in the average lawn. Terrones also notes other discoveries: “It’s been exciting to see how much the grass has sprung up these last few weeks. I thought the grass would all grow at the same rate, but as May has gone on the variations in grass height have become more apparent, leading to non-uniform lawns with a clearly unmanicured feel,” he says. “We hope that members of MIT noticed how these lawns have evolved over the span of a few weeks and are inspired to implement more earth-friendly lawn practices in their own homes/spaces.”
No Mow May and the Native Plant Project fit into MIT’s overall focus on creating resilient ecosystems that support and protect the MIT community and the beneficial critters that call it home. MIT Grounds Services has long included native plants in the mix of what is grown on campus, and native pollinator gardens, like the Hive Garden, have been developed and cared for through partnerships between students and Grounds Services in recent years. Grounds, along with the consultants that design and install campus landscape projects, strives to select plants that help meet sustainability goals, such as managing stormwater runoff and cooling. No Mow May can provide one more data point for the iterative process of choosing the best plants and practices for a unique microclimate like the MIT campus.
“We are always looking for new ways to use our campus as a test bed for sustainability,” says Director of Sustainability Julie Newman. “Community-led projects like No Mow May help us to learn more about our campus and share those lessons with the larger community.”
The Office of the Vice Provost, the Open Space Working Group, and GSC Sustain plan to reconnect in the fall for a formal debrief of the project and its success. Given the positive community feedback, they will discuss the possibility of expanding or extending No Mow May in the future.
Professor Emeritus Hank Smith honored for pioneering work in nanofabrication
Nanostructures are a stunning array of intricate patterns that are imperceptible to the human eye, yet they help power modern life. They form the building blocks of microchip transistors, are etched onto the grating substrates of space-based X-ray telescopes, and drive innovations in medicine, sustainability, and quantum computing.
Since the 1970s, Henry “Hank” Smith, MIT professor emeritus of electrical engineering, has been a leading force in this field. He pioneered the use of proximity X-ray lithography, proving that X-rays’ short wavelength could produce high-resolution patterns at the nanometer scale. Smith also made significant advancements in phase-shifting masks (PSMs), a technique that shifts the phase of light waves to enhance contrast. His design of attenuated PSMs, which he co-created with graduate students Mark Schattenburg PhD ʼ84 and Erik H. Anderson ʼ81, SM ʼ84, PhD ʼ88, is still used today in the semiconductor industry.
In recognition of these contributions, as well as highly influential achievements in liquid-immersion lithography, achromatic-interference lithography, and zone-plate array lithography, Smith recently received the 2025 SPIE Frits Zernike Award for Microlithography. Given by the Society of Photo-Optical Instrumentation Engineers (SPIE), the accolade recognizes scientists for their outstanding accomplishments in microlithographic technology.
“The Zernike Award is an impressive honor that aptly recognizes Hank’s pioneering contributions,” says Karl Berggren, MIT’s Joseph F. and Nancy P. Keithley Professor in Electrical Engineering and faculty head of electrical engineering. “Whether it was in the classroom, at a research conference, or in the lab, Hank approached his work with a high level of scientific rigor that helped make him decades ahead of industry practices.”
Now 88 years old, Smith has garnered many other honors. He was also awarded the SPIE BACUS Prize, named a member of the National Academy of Engineering, and is a fellow of the American Academy of Arts and Sciences, IEEE, the National Academy of Inventors, and the International Society for Nanomanufacturing.
Jump-starting the nano frontier
From an early age, Smith was fascinated by the world around him. He took apart clocks to see how they worked, explored the outdoors, and even observed the movement of water. After graduating from high school in New Jersey, Smith majored in physics at the College of the Holy Cross. From there, he pursued his doctorate at Boston College and served three years as an officer in the U.S. Air Force.
It was his job at MIT Lincoln Laboratory that ultimately changed Smith’s career trajectory. There, he met visitors from MIT and Harvard University who shared their big ideas for electronic and surface acoustic wave devices but were stymied by the physical limitations of fabrication. Yet, few were inclined to tackle this challenge.
“The job of making things was usually brushed off the table with, ‘oh well, we’ll get some technicians to do that,’” Smith said in his oral history for the Center for Nanotechnology in Society. “And the intellectual content of fabrication technology was not appreciated by people who had been ‘traditionally educated,’ I guess.”
More interested in solving problems than maintaining academic rank, Smith set out to understand the science of fabrication. His breakthrough in X-ray lithography signaled to the world the potential and possibilities of working at the nanometer scale, says Schattenburg, who is a senior research scientist at the MIT Kavli Institute for Astrophysics and Space Research.
“His early work proved to people at MIT and researchers across the country that nanofabrication had some merit,” Schattenburg says. “By showing what was possible, Hank really jump-started the nano frontier.”
Cracking open lithography’s black box
By 1980, Smith left Lincoln Lab for MIT’s main campus and continued to push forward new ideas in his NanoStructures Laboratory (NSL), formerly the Submicron Structures Laboratory. NSL served as both a research lab and a service shop that provided optical gratings, which are pieces of glass engraved with sub-micron periodic patterns, to the MIT community and outside scientists. It was a busy time for the lab; NSL attracted graduate students and international visitors. Still, Smith and his staff ensured that anyone visiting NSL would also receive a primer on nanotechnology.
“Hank never wanted anything we produced to be treated as a black box,” says Mark Mondol, MIT.nano e-beam lithography domain expert who spent 23 years working with Smith in NSL. “Hank was always very keen on people understanding our work and how it happens, and he was the perfect person to explain it because he talked in very clear and basic terms.”
The physical NSL space in MIT Building 39 shuttered in 2023, a decade after Smith became an emeritus faculty member. NSL’s knowledgeable staff and unique capabilities transferred to MIT.nano, which now serves as MIT’s central hub for supporting nanoscience and nanotechnology advancements. Unstoppable, Smith continues to contribute his wisdom to the ever-expanding nano community by giving talks at the NSL Community Meetings at MIT.nano focused on lithography, nanofabrication, and their future.
Smith’s career is far from complete. Through his startup LumArray, Smith continues to push the boundaries of knowledge. He recently devised a maskless lithography method, known as X-ray Maskless Lithography (XML), that has the potential to lower manufacturing costs of microchips and thwart the sale of counterfeit microchips.
Dimitri Antoniadis, MIT professor emeritus of electrical engineering and computer science, is Smith’s longtime collaborator and friend. According to him, Smith’s commitment to research is practically unheard-of.
“Once professors reach emeritus status, we usually inspire and supervise research,” Antoniadis says. “It’s very rare for retired professors to do all the work themselves, but he loves it.”
Enduring influence
Smith’s legacy extends far beyond the groundbreaking tools and techniques he pioneered, say his friends, colleagues, and former students. His relentless curiosity and commitment to his graduate students helped propel his field forward.
He earned a reputation for sitting in the front row at research conferences, ready to ask the first question. Fellow researchers sometimes dreaded seeing him there.
“Hank kept us honest,” Berggren says. “Scientists and engineers knew that they couldn’t make a claim that was a little too strong, or use data that didn’t support the hypothesis, because Hank would hold them accountable.”
Smith never saw himself as playing the good cop or bad cop — he was simply a curious learner unafraid to look foolish.
“There are famous people, Nobel Prize winners, that will sit through research presentations and not have a clue as to what’s going on,” Smith says. “That is an utter waste of time. If I don’t understand something, I’m going to ask a question.”
As an advisor, Smith held his graduate students to high standards. If they came unprepared or lacked understanding of their research, he would challenge them with tough, unrelenting questions. Yet, he was also their biggest advocate, helping students such as Lisa Su SB/SM ʼ91, PhD ʼ94, who is now the chair and chief executive officer of AMD, and Dario Gil PhD ʼ03, who is now the chair of the National Science Board and senior vice president and director of research at IBM, succeed in the lab and beyond.
Research Specialist James Daley has spent nearly three decades at MIT, most of them working with Smith. In that time, he has seen hundreds of advisees graduate and return to offer their thanks. “Hank’s former students are all over the world,” Daley says. “Many are now professors mentoring their own graduate students and bringing with them some of Hank’s style. They are his greatest legacy.”
Celebrating an academic-industry collaboration to advance vehicle technology
On May 6, MIT AgeLab’s Advanced Vehicle Technology (AVT) Consortium, part of the MIT Center for Transportation and Logistics, celebrated 10 years of its global academic-industry collaboration. AVT was founded to develop new data that deepen automotive manufacturers’, suppliers’, and insurers’ real-world understanding of how drivers use and respond to increasingly sophisticated vehicle technologies, such as assistive and automated driving, while accelerating the applied insight needed to advance design and development. The celebration brought together stakeholders from across the industry for keynote addresses and panel discussions on topics critical to the industry and its future, including artificial intelligence, automotive technology, collision repair, consumer behavior, sustainability, vehicle safety policy, and global competitiveness.
Bryan Reimer, founder and co-director of the AVT Consortium, opened the event by remarking that over the decade AVT has collected hundreds of terabytes of data, presented and discussed research with its over 25 member organizations, supported members’ strategic and policy initiatives, published select outcomes, and built AVT into a global influencer with tremendous impact in the automotive industry. He noted that current opportunities and challenges for the industry include distracted driving, a lack of consumer trust and concerns around transparency in assistive and automated driving features, and high consumer expectations for vehicle technology, safety, and affordability. How will industry respond? Major players in attendance weighed in.
In a powerful exchange on vehicle safety regulation, John Bozzella, president and CEO of the Alliance for Automotive Innovation, and Mark Rosekind, former chief safety innovation officer of Zoox, former administrator of the National Highway Traffic Safety Administration, and former member of the National Transportation Safety Board, challenged industry and government to adopt a more strategic, data-driven, and collaborative approach to safety. They asserted that regulation must evolve alongside innovation, not lag behind it by decades. Appealing to the automakers in attendance, Bozzella cited the success of voluntary commitments on automatic emergency braking as a model for future progress. “That’s a way to do something important and impactful ahead of regulation.” They advocated for shared data platforms, anonymous reporting, and a common regulatory vision that sets safety baselines while allowing room for experimentation. The 40,000 annual road fatalities demand urgency — what’s needed is a move away from tactical fixes and toward a systemic safety strategy. “Safety delayed is safety denied,” Rosekind stated. “Tell me how you’re going to improve safety. Let’s be explicit.”
Drawing inspiration from aviation’s exemplary safety record, Kathy Abbott, chief scientific and technical advisor for the Federal Aviation Administration, pointed to a culture of rigorous regulation, continuous improvement, and cross-sectoral data sharing. Aviation’s model, built on highly trained personnel and strict predictability standards, contrasts sharply with the fragmented approach in the automotive industry. The keynote emphasized that a foundation of safety culture — one that recognizes that technological ability alone isn’t justification for deployment — must guide the auto industry forward. Just as aviation doesn’t equate absence of failure with success, vehicle safety must be measured holistically and proactively.
With assistive and automated driving top of mind in the industry, Pete Bigelow of Automotive News offered a pragmatic diagnosis. As companies like Ford and Volkswagen step back from full-autonomy projects like Argo AI, the industry is now focused on Level 2 and 3 technologies, which refer to assisted and automated driving, respectively. Tesla, GM, and Mercedes are experimenting with subscription models for driver assistance systems, yet consumer confusion remains high. J.D. Power reports that many drivers do not grasp the differences between L2 and L2+, or whether these technologies offer safety or convenience features. Safety benefits have yet to manifest in reduced traffic deaths, which have risen by 20 percent since 2020. The recurring challenge: L3 systems demand that human drivers take over during technical difficulties, despite driver disengagement being their primary benefit, potentially worsening outcomes. Bigelow cited a quote from Bryan Reimer as one of the best he’s received in his career: “Level 3 systems are an engineer’s dream and a plaintiff attorney’s next yacht,” highlighting the legal and design complexity of systems that demand handoffs between machine and human.
In terms of the impact of AI on the automotive industry, Mauricio Muñoz, senior research engineer at AI Sweden, underscored that despite AI’s transformative potential, the automotive industry cannot rely on general AI megatrends to solve domain-specific challenges. While landmark achievements like AlphaFold demonstrate AI’s prowess, automotive applications require domain expertise, data sovereignty, and targeted collaboration. Energy constraints, data firewalls, and the high costs of AI infrastructure all pose limitations, making it critical that companies fund purpose-driven research that can reduce costs and improve implementation fidelity. Muñoz warned that while excitement abounds — with some predicting artificial superintelligence by 2028 — real progress demands organizational alignment and a deep understanding of the automotive context, not just computational power.
Turning the focus to consumers, a collision repair panel featuring Richard Billyeald from Thatcham Research, Hami Ebrahimi from Caliber Collision, and Mike Nelson from Nelson Law explored the unintended consequences of vehicle technology advances: spiraling repair costs, labor shortages, and a lack of repairability standards. Panelists warned that even minor repairs for advanced vehicles now require costly and complex sensor recalibrations — compounded by inconsistent manufacturer guidance and no clear consumer alerts when systems are out of calibration. The panel called for greater standardization, consumer education, and repair-friendly design. As insurance premiums climb and more people forgo insurance claims, the lack of coordination between automakers, regulators, and service providers threatens consumer safety and undermines trust. The group warned that until Level 2 systems function reliably and affordably, moving toward Level 3 autonomy is premature and risky.
While the repair panel emphasized today’s urgent challenges, other speakers looked to the future. Honda’s Ryan Harty, for example, highlighted the company’s aggressive push toward sustainability and safety. Honda aims for zero environmental impact and zero traffic fatalities, with plans to be 100 percent electric by 2040 and to lead in energy storage and clean power integration. The company has developed tools to coach young drivers and is investing in charging infrastructure, grid-aware battery usage, and green hydrogen storage. “What consumers buy in the market dictates what the manufacturers make,” Harty noted, underscoring the importance of aligning product strategy with user demand and environmental responsibility. He stressed that manufacturers can only decarbonize as fast as the industry allows, and emphasized the need to shift from cost-based to life-cycle-based product strategies.
Finally, a panel involving Laura Chace of ITS America, Jon Demerly of Qualcomm, Brad Stertz of Audi/VW Group, and Anant Thaker of Aptiv covered the near-, mid-, and long-term future of vehicle technology. Panelists emphasized that consumer expectations, infrastructure investment, and regulatory modernization must evolve together. Despite record bicycle fatality rates and persistent distracted driving, features like school bus detection and stop sign alerts remain underutilized due to skepticism and cost. Panelists stressed that we must design systems for proactive safety rather than reactive response. The slow integration of digital infrastructure — sensors, edge computing, data analytics — stems not only from technical hurdles, but procurement and policy challenges as well.
Reimer concluded the event by urging industry leaders to re-center the consumer in all conversations — from affordability to maintenance and repair. With the rising costs of ownership, growing gaps in trust in technology, and misalignment between innovation and consumer value, the future of mobility depends on rebuilding trust and reshaping industry economics. He called for global collaboration, greater standardization, and transparent innovation that consumers can understand and afford. He highlighted that global competitiveness and public safety both hang in the balance. As Reimer noted, “success will come through partnerships” — between industry, academia, and government — that work toward shared investment, cultural change, and a collective willingness to prioritize the public good.
Anantha Chandrakasan named MIT provost
Anantha Chandrakasan, a professor of electrical engineering and computer science who has held multiple leadership roles at MIT, has been named the Institute’s new provost, effective July 1.
Chandrakasan has served as the dean of the School of Engineering since 2017 and as MIT’s inaugural chief innovation and strategy officer since 2024. Prior to becoming dean, he headed the Department of Electrical Engineering and Computer Science (EECS), MIT’s largest academic department, for six years.
“Anantha brings to this post an exceptional record of shaping and leading important innovations for the Institute,” wrote MIT President Sally Kornbluth, in an email announcing the decision to the MIT community today. “I am particularly grateful that we will be able to draw on Anantha’s depth and breadth of experience; his nimbleness, entrepreneurial spirit and boundless energy; his remarkable record in raising funds from outside sources for important ideas; and his profound commitment to MIT’s mission.”
The provost is MIT’s senior academic and budget officer, with overall responsibility for the Institute’s educational programs, as well as for the recruitment, promotion, and tenuring of faculty. With the president and other members of the Institute’s senior leadership team, the provost establishes academic priorities, manages financial planning and research support, and oversees MIT’s international engagements.
“I feel deeply honored to take on the role of provost,” says Chandrakasan, who is also the Vannevar Bush Professor of Electrical Engineering and Computer Science. “Looking ahead, I see myself as a key facilitator, enabling faculty, students, postdocs, and staff to continue making extraordinary contributions to the nation and the world.”
Investing in excellence
Chandrakasan succeeds Cynthia Barnhart, who announced her decision to step down from the role in February. As dean of engineering, Chandrakasan worked with Barnhart closely during her tenure as provost and, before that, chancellor.
“Cindy has been a tremendous mentor,” he says. “She is always very thoughtful and makes sure she hears all the viewpoints, which is something I will strive to do as well. I so admire how deftly she approaches complex problems and supports a variety of perspectives and approaches.”
As MIT’s chief academic officer, Chandrakasan will focus on three overarching priorities: understanding institutional needs and strategic financial planning, attracting and retaining top talent, and supporting cross-cutting research, education, and entrepreneurship programming. On all of these fronts, he plans to seek frequent input from across the Institute.
“Recognizing that each school and other academic units operate within a unique context, I plan to engage deeply with their leaders to understand their challenges and aspirations. This will help me refine and set the priorities for the Office of the Provost,” Chandrakasan says.
He also plans to establish a provost faculty advisory group to hear on an ongoing basis from faculty across the five schools and the college, as well as student/postdoc advisory groups and an external provost advisory council.
“My goal is to continue to facilitate excellence at MIT at all levels,” Chandrakasan says.
He adds: “There is a tremendous opportunity for MIT to be at the center of the innovations in areas where the United States wants to lead. It’s about AI. It’s about semiconductors. It’s about quantum, the biosecurity and biomanufacturing space — but not only that. We need students who can do more than just code or design or build. We really need students who understand the human perspective and human insights. This is why collaborations between STEM fields and the humanities, arts and social sciences, such as through the new MIT Human Insight Collaborative, are so important.”
In her email to the MIT community, Kornbluth also noted that Institute Professor Paula Hammond, currently vice provost for faculty, will take on an expanded portfolio with the new title of executive vice provost, and Deputy Dean of Engineering Maria Yang will serve as interim dean until the new dean is in place.
Advancing the president’s vision
In February 2024, Chandrakasan was appointed as MIT’s first chief innovation and strategy officer, to help develop and implement plans to advance research, education, and innovation in areas that President Kornbluth identified as her top priorities.
Working closely with the president, Chandrakasan oversaw MIT’s launch of several Institute-wide initiatives, including the MIT Human Insight Collaborative (MITHIC), the MIT Health and Life Sciences Collaborative (MIT HEALS), the MIT Generative AI Impact Consortium (MGAIC, or “magic”), the MIT Initiative for New Manufacturing (INM), and multiple energy- and climate-related initiatives including the MIT-GE Vernova Energy and Climate Alliance.
These initiatives bring together MIT faculty, staff, and students from across the Institute, as well as industry partners, supporting bold, ground-breaking research and education to address pressing problems. In launching them, Chandrakasan was responsible for the “full stack” of tasks, from developing the vision to finding funding to implementing the programming — a significant undertaking on top of his other responsibilities.
“People consider me intense, which might be true,” he says, with a chuckle. “The reality is that I’m deeply passionate about the academic mission of MIT to create breakthrough technologies, educate the next generation of leaders, and serve the country and the world.”
New models for collaboration
During his time as dean of engineering, Chandrakasan played a key role in advancing a variety of historic Institute-wide initiatives, including the founding of the MIT Schwarzman College of Computing and the development of the MIT Fast Forward plan for addressing climate change. He also served as the inaugural chair of the Abdul Latif Jameel Clinic for Machine Learning in Health and as the co-chair of the academic workstream for MIT’s Task Force 2021. Earlier, he led an Institute-wide working group to guide the development of policies and procedures related to MIT’s 2016 launch of The Engine, an incubator and accelerator for tough tech, and also served on its inaugural board.
He implemented a variety of interdisciplinary programs within the School of Engineering, creating new models for how academia and industry can work together to accelerate the pace of research. This work led to multiple new initiatives, such as the MIT Climate and Sustainability Consortium, the MIT-IBM Watson AI Lab, the MIT-Takeda Program, the MIT and Accenture Convergence Initiative, the MIT Mobility Initiative, the MIT Quest for Intelligence, the MIT AI Hardware Program, the MIT-Northpond Program, the MIT Faculty Founder Initiative, and the MIT-Novo Nordisk Artificial Intelligence Postdoctoral Fellows Program.
Chandrakasan also welcomed and supported 110 new faculty members to the School of Engineering, including in the Department of Electrical Engineering and Computer Science, which jointly reports between the School of Engineering and the MIT Schwarzman College of Computing. He also oversaw 274 faculty and senior researcher promotion cases in Engineering Council.
One of his priorities as dean was to bolster the School of Engineering’s sense of community, launching several programs to give students and staff a more active role in shaping the initiatives and operations of the school, including the Staff Advice and Implementation Committee (SAIC), the undergraduate Student Advisory Group, the Graduate Student Advisory Group (GradSage), and the MIT School of Engineering Postdoctoral Fellowship Program for Engineering Excellence. Working closely with GradSage, Chandrakasan also played a key role in establishing the Daniel J. Riccio Graduate Engineering Leadership Program.
A champion for EECS research and education
Chandrakasan earned his BS, MS, and PhD in electrical engineering and computer sciences from the University of California at Berkeley. After joining the MIT faculty, he was the director of the Microsystems Technology Laboratories from 2006 until 2011, when he became the EECS department head.
An active researcher throughout his time at MIT, Chandrakasan has led the MIT Energy-Efficient Circuits and Systems Group even while taking on new administrative roles. The group works on the design and implementation of integrated systems, from ultra-low-power wireless sensors and multimedia devices to biomedical systems. Chandrakasan has more than 120,000 citations and has advised or co-advised and graduated 78 PhD students. He says this experience will help him succeed as provost.
“To understand the pain points of our research scholars, you have to be in the trenches,” he says.
While at the helm of EECS, Chandrakasan also launched a number of initiatives on behalf of the department’s students. For example, the Advanced Undergraduate Research Opportunities Program, more commonly known as “SuperUROP,” is a year-long independent research program that launched in EECS in 2012 and expanded to the whole School of Engineering in 2015.
Chandrakasan also initiated the Rising Stars program in EECS, an annual event that convenes women graduate students and postdocs to share advice about the early stages of an academic career. Another program for EECS postdocs, Postdoc6, aimed to foster a sense of community for postdocs and help them develop skills that will serve their careers.
As higher education faces new challenges, Chandrakasan says he is looking forward to helping MIT position itself for the future. “I’m not afraid to try bold things,” he says.
Startup’s biosensor makes drug development and manufacturing cheaper
In the biotech and pharmaceutical industries, ELISA tests provide critical quality control during drug development and manufacturing. The tests can precisely quantify protein levels, but they also require hours of work by trained technicians and specialized equipment. That makes them prohibitively expensive, driving up the costs of drugs and putting research testing out of reach for many.
Now the Advanced Silicon Group (ASG), founded by Marcie Black ’94, MEng ’95, PhD ’03 and Bill Rever, is commercializing a new technology that could dramatically lower the time and costs associated with protein sensing. ASG’s proprietary sensor combines silicon nanowires with antibodies that can bind to different proteins to create a highly sensitive measurement of their concentration in a given solution.
The tests can measure the concentration of many different proteins and other molecules at once, with results typically available in less than 15 minutes. Users simply place a tiny amount of solution on the sensor, rinse the sensor, and then insert it into ASG’s handheld testing system.
“We’re making it 15 times faster and 15 times lower cost to test for proteins,” Black says. “That’s on the drug development side. This could also make the manufacturing of drugs significantly faster and more cost-effective. It could revolutionize how we create drugs in this country and around the world.”
Since developing its sensor, ASG has received inquiries from a long list of people interested in using the technology to develop new therapeutics, help elite athletes train, and understand soil concentrations in agriculture, among other applications.
For now, though, the small company is focusing on lowering barriers in health care by selling its low-cost sensors to companies developing and manufacturing drugs.
“Right now, money is a limiting factor in researching and creating new drugs,” explains Marissa Gillis, a member of ASG’s team. “Making these processes faster and less costly could dramatically increase the amount of biologic testing and creation. It also makes it more viable for companies to develop drugs for rare conditions with smaller markets.”
A family away from home
Black grew up in a small town in Ohio before coming to MIT for three degrees in electrical engineering.
“Going to MIT changed my life,” Black says. “It opened my eyes to the possibilities of doing science and engineering to make the world a better place. Also, just being around so many amazing people taught me how to dream big.”
For her PhD, Black worked with the late Institute Professor Mildred Dresselhaus, a highly acclaimed physicist and nanotechnology pioneer whom Black remembers for her mentorship and compassion as much as for her contributions to our understanding of exotic materials. Black couldn’t always afford to go home for holidays, so she’d spend Thanksgivings with the Dresselhaus family.
“Millie was an amazing person, and her family was a family away from home for me,” Black says. “Millie continued to be my mentor — and I hear she did this with a lot of students — until the day she died.”
For her thesis, Black studied the optical properties of nanowires, which taught her about the nanostructures and optoelectronics she’d eventually use as part of the Advanced Silicon Group.
Following graduation, Black worked at the Los Alamos National Laboratory before founding the company Bandgap Engineering, which developed efficient, low-cost nanostructured solar cells. That technology was subsequently commercialized by other companies and became the subject of a patent dispute. In 2015, Black spun out the Advanced Silicon Group to apply a similar technology to protein sensing.
ASG’s sensors combine known approaches for sensitizing silicon to biological molecules, using the photoelectric properties of silicon nanowires to detect proteins electrically.
“It’s basically a solar cell that we functionalize with an antibody that’s specific to a certain protein,” Black says. “When the protein gets close, it brings an electrical charge with it that will repel light carriers inside the silicon, and doing that changes how well the electrons and the holes can recombine. By looking at the photocurrent when you’re exposed to a solution, you can tell how much protein is bound to the surface and thus the concentration of that protein.”
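Black’s description implies a simple readout: photocurrent drops as more protein binds the surface. A minimal sketch of how such a readout could be calibrated, assuming a Langmuir-type binding curve with illustrative constants (not ASG’s actual calibration):

```python
import numpy as np

# Hypothetical calibration model for a nanowire photocurrent biosensor.
# Assumes Langmuir-type binding (bound fraction = C / (C + Kd)) and a
# photocurrent that falls linearly with the bound fraction. All constants
# are illustrative, not ASG's actual values.

I0 = 100.0   # photocurrent with no protein bound (microamps, assumed)
ALPHA = 0.4  # maximum fractional drop in photocurrent at saturation (assumed)
KD = 5.0     # dissociation constant (nanomolar, assumed)

def photocurrent(concentration_nM: float) -> float:
    """Predicted photocurrent for a given protein concentration."""
    bound_fraction = concentration_nM / (concentration_nM + KD)
    return I0 * (1.0 - ALPHA * bound_fraction)

def estimate_concentration(measured_current: float) -> float:
    """Invert the model: recover concentration from a measured photocurrent."""
    bound_fraction = (1.0 - measured_current / I0) / ALPHA
    bound_fraction = np.clip(bound_fraction, 0.0, 0.999)
    return KD * bound_fraction / (1.0 - bound_fraction)

print(estimate_concentration(photocurrent(5.0)))  # recovers ~5.0 nM
```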
ASG was accepted into MIT.nano’s START.nano startup accelerator and MIT’s Office of Corporate Relations Startup Exchange Program soon after its founding, which gave Black’s team access to cutting-edge equipment at MIT and connected her with potential investors and partners.
Black has also received broad support from MIT’s Venture Mentoring Service and worked with researchers from MIT’s Microsystems Technology Laboratories (MTL), where she conducted research as a student.
“Even though the company is in Lowell, [Massachusetts], I’m constantly going to MIT and getting help from professors and researchers at MIT,” Black says.
Biosensing for impact
From extensive discussions with people in the pharmaceutical industry, Black learned about the need for a more affordable protein-measurement tool. During drug development and manufacturing, protein levels must be measured to detect problems such as contamination from host cell proteins, which can be fatal to patients even at very low quantities.
“It can cost more than $1 billion to develop a drug,” Black says. “A big part of the process is bioprocessing, and 50 to 80 percent of bioprocessing is dedicated to purifying these unwanted proteins. That challenge leads to drugs being more expensive and taking longer to get to market.”
ASG has since worked with researchers to develop tests for biomarkers associated with lung cancer and dormant tuberculosis and has received multiple grants from the National Science Foundation, the National Institute of Standards and Technology, and the commonwealth of Massachusetts, including funding to develop tests for host cell proteins.
This year, ASG announced a partnership with Axogen to help the regenerative nerve repair company grow nerve tissue.
“There’s a lot of interest in using our sensor for applications in regenerative medicine,” Black says. “Another example we envision is if you’re sick in rural India and there’s no doctor nearby, you can show up at a clinic, nurses can give this to you and test for the flu, Covid-19, food poisoning, pregnancy, and 10 other things all at once. The results come in 15 minutes, then you could get what you need or teleconference a doctor.”
ASG is currently able to produce about 2,000 of its sensors per production line on 8-inch wafers at its partner’s semiconductor foundry. As the company continues scaling up production, Black is hopeful the sensors will lower costs at every step between drug developers and patients.
“We really want to lower the barriers for testing so that everyone has access to good health care,” Black says. “Beyond that, there are so many applications for protein sensing. It’s really where the rubber hits the road in biology, agriculture, diagnostics. We’re excited to partner with leaders in every one of these industries.”
After more than a decade of successes, ESI’s work will spread out across the Institute
MIT’s Environmental Solutions Initiative (ESI), a pioneering cross-disciplinary body that helped give a major boost to sustainability and solutions to climate change at MIT, will close as a separate entity at the end of June. But that’s far from the end for its wide-ranging work, which will go forward under different auspices. Many of its key functions will become part of MIT’s recently launched Climate Project. John Fernandez, head of ESI for nearly a decade, will return to the School of Architecture and Planning, where some of ESI’s important work will continue as part of a new interdisciplinary lab.
When the ideas that led to the founding of MIT’s Environmental Solutions Initiative first began to be discussed, its founders recall, there was already a great deal of work happening at MIT relating to climate change and sustainability. As Professor John Sterman of the MIT Sloan School of Management puts it, “there was a lot going on, but it wasn’t integrated. So the whole added up to less than the sum of its parts.”
ESI was founded in 2014 to help fill that coordinating role, and in the years since it has reached significant milestones in research, education, and communication about sustainable solutions across a wide range of areas. Its founding director, Professor Susan Solomon, helmed it for its first year, and then handed the leadership to Fernandez, who has led it since 2015.
“There wasn’t much of an ecosystem [on sustainability] back then,” Solomon recalls. But with the help of ESI and some other entities, that ecosystem has blossomed. She says that Fernandez “has nurtured some incredible things under ESI,” including work on nature-based climate solutions, and also other areas such as sustainable mining, and reduction of plastics in the environment.
Desiree Plata, director of MIT’s Climate and Sustainability Consortium and associate professor of civil and environmental engineering, says that one key achievement of the initiative has been in “communication with the external world, to help take really complex systems and topics and put them in not just plain-speak, but something that’s scientifically rigorous and defensible, for the outside world to consume.”
In particular, ESI has created three very successful products, which continue under the auspices of the Climate Project. These include the popular TIL Climate Podcast, the Webby Award-winning Climate Portal website, and the online climate primer developed with Professor Kerry Emanuel. “These are some of the most frequented websites at MIT,” Plata says, and “the impact of this work on the global knowledge base cannot be overstated.”
Fernandez says that ESI has played a significant part in helping to catalyze what has become “a rich institutional landscape of work in sustainability and climate change” at MIT. He emphasizes three major areas where he feels the ESI has been able to have the most impact: engaging the MIT community, initiating and stewarding critical environmental research, and catalyzing efforts to promote sustainability as fundamental to the mission of a research university.
Engagement of the MIT community, he says, began with two programs: a research seed grant program and the creation of MIT’s undergraduate minor in environment and sustainability, launched in 2017.
ESI also created a Rapid Response Group, which gave students a chance to work on real-world projects with external partners, including government agencies, community groups, nongovernmental organizations, and businesses. In the process, they often learned why dealing with environmental challenges in the real world takes so much longer than they might have thought, he says, and that a challenge that “seemed fairly straightforward at the outset turned out to be more complex and nuanced than expected.”
The second major area, initiating and stewarding environmental research, grew into a set of six specific program areas: natural climate solutions, mining, cities and climate change, plastics and the environment, arts and climate, and climate justice.
These efforts included collaborations with a Nobel Peace Prize laureate, three successive presidential administrations from Colombia, and members of communities affected by climate change, including coal miners, indigenous groups, various cities, companies, the U.N., many agencies — and the popular musical group Coldplay, which has pledged to work toward climate neutrality for its performances. “It was the role that the ESI played as a host and steward of these research programs that may serve as a key element of our legacy,” Fernandez says.
The third broad area, he says, “is the idea that the ESI as an entity at MIT would catalyze this movement of a research university toward sustainability as a core priority.” While MIT was founded to be an academic partner to the industrialization of the world, “aren’t we in a different world now? The kind of massive infrastructure planning and investment and construction that needs to happen to decarbonize the energy system is maybe the largest industrialization effort ever undertaken. Even more than in the recent past, the set of priorities driving this have to do with sustainable development.”
Overall, Fernandez says, “we did everything we could to infuse the Institute in its teaching and research activities with the idea that the world is now in dire need of sustainable solutions.”
ESI has been “a very strong and useful program, both for education and research,” Solomon says. But it is appropriate at this time to distribute its projects to other venues, she says. “We do now have a major thrust in the Climate Project, and you don’t want to have redundancies and overlaps between the two.”
Fernandez says “one of the missions of the Climate Project is really acting to coalesce and aggregate lots of work around MIT.” Now, with the Climate Project itself, along with the Climate Policy Center and the Center for Sustainability Science and Strategy, it makes more sense for ESI’s climate-related projects to be integrated into these new entities, and other projects that are less directly connected to climate to take their places in various appropriate departments or labs, he says.
“We did enough with ESI that we made it possible for these other centers to really flourish,” he says. “And in that sense, we played our role.”
As of June 1, Fernandez has returned to his role as professor of architecture and urbanism and building technology in the School of Architecture and Planning, where he directs the Urban Metabolism Group. He will also be starting up a new group called Environment ResearchAction (ERA) to continue ESI work in cities, nature, and artificial intelligence.
Tiny organisms, huge implications for people
Back in 1676, a Dutch cloth merchant with a keen interest in microscopes, Antony van Leeuwenhoek, discovered microbes and began cataloging them. Two hundred years later, a German doctor in current-day Poland, Robert Koch, identified the anthrax bacterium, a crucial step toward modern germ theory. Those two signal advances, with others, have helped create the conditions of modern living as we know it.
After all, germ theory led to modern medical advances that have drastically limited deaths from infectious diseases. In the U.S. in 1900, the leading causes of death were pneumonia, influenza, tuberculosis, and gut infection, which combined for close to half of the country’s fatalities. For that matter, due to the threat of disease, childhood was a precarious thing more or less from the start of civilization until the last half-century.
“The world we’ve experienced since the 1950s, and really since the 1970s, is unprecedented in human history,” says MIT Professor Thomas Levenson. “Think of all the grandparents able to dance at their grandkids’ weddings who would not have been able to, because either they or the kids would have died from one of these diseases. Human flourishing has come from this extraordinary scientific development.”
To Levenson, two things about this historical trajectory stand out. One is that it took 200 years to develop germ theory. Another is our ability to combat these diseases so thoroughly — something he believes we should not take for granted.
Now in a new book, “So Very Small: How Humans Discovered the Microcosmos, Defeated Germs — and May Still Lose the War against Infectious Disease,” published by Penguin Random House, Levenson explores both these issues, crafting a historically rich narrative with relevance today. In writing about the development of germ theory, Levenson says, he is aiming to better illuminate “the single most lifesaving tool that human ingenuity has ever come up with.”
A 200-year incubation period
The starting point of Levenson’s research was the simple fact that van Leeuwenhoek’s discovery — accompanied by his illustrations of microbes we can identify today — did not lead to concrete advances for a long, long time.
“It’s almost exactly 200 years between the discovery of bacteria and the definitive proof that they matter to us in life-and-death ways,” Levenson says. “Infectious disease is a big deal and yet it took two centuries to get there. And I wanted to know why.”
Among other things, a variety of ideas, often about the structure of society, blocked the way. The common notion of a “great chain of being” steered people away from the idea that microorganisms could affect human health. Still, some people did recognize the possibility that tiny creatures might be spreading disease. In the late 1600s, the Puritan clergyman Cotton Mather wondered if specific types of “animalcules” might each be responsible for spreading different diseases.
Into the 19th century, a few intellectually lonely figures recognized the significance of microbes in the spread of infectious disease, without their ideas gaining much traction. An 18th-century physician in Aberdeen, Scotland, Alexander Gordon, traced the spread of puerperal fever — a disease that killed new mothers — to something doctors and midwives carried on their hands as they delivered babies. A few decades later, a doctor in Vienna, Ignaz Semmelweis, deduced that doctors performing autopsies were spreading illness into maternity wards. But skeptics doubted that respectable, gentlemanly doctors could be vectors of disease, and for decades, little was done to prevent the spread of infection.
Eventually, as Levenson chronicles, more scientists, especially Louis Pasteur in France, accumulated enough evidence to establish bacteriology as a field. Medicine advanced through much of the 20th century to the point where, in the postwar years in the U.S., vaccines and antibiotics had enormously reduced human deaths and suffering.
Ultimately, acceptance of new ideas like microbes causing disease involves “how strong cultural presuppositions are and how strong the hierarchical organization of society is,” Levenson says. “If you think you’ve shown that doctors can carry infections from patient to patient, but other people can’t entertain that insight because of other assumptions, that tells you why it took so long to arrive at germ theory. The facts of the science may win out in the end, but even if they do, the end can be delayed.”
He adds: “It can happen when a solution then gets entangled with things that have nothing to do with science.”
Science and society
Understanding that entanglement, between science and society, is a key part of “So Very Small,” as it is in Levenson’s numerous books and other works. Science almost never stands apart from society. The question is how they interact, in any given circumstance.
“One of the themes of my work is how science really works, as opposed to how we’re told it works,” Levenson says. “It’s not simply an ongoing iterative machine to generate new knowledge and hypotheses. Science is a huge human endeavor. The human beings who do it have their own beliefs and cultural assumptions, and are part of larger societies which they interact with all the time, and which have their own characteristics. Those things matter a lot to what science gets done, and how. And that’s still true.”
To be sure, infectious diseases have never entirely been a thing of the past. Some are still prevalent in developing countries, while the Covid-19 and HIV/AIDS epidemics are cases where new medical treatments needed to be developed to staunch emerging illnesses. Still, as Levenson observes in the book, the interplay of science and society may produce yet more uncertainties for us in the future. Antibiotics can lose effectiveness over time, for one thing.
“If we want new antibiotics that can defeat bacterial infections, we need to fund research into them and market them and regulate them,” Levenson says. “That isn’t a political statement. Bacteria do what they do, they evolve when they are challenged.” Meanwhile, he notes, while “there has always been [human] resistance to vaccines,” the greater prevalence of that today introduces new questions about how widely vaccines will be available and used.
“So Very Small” has earned strongly positive reviews in major publications. The Wall Street Journal praised its “extraordinary detail and authoritative prose,” writing that “what Mr. Levenson’s book makes clear is that the battle against germs never ends.” The New York Review of Books has called it “an elegant, wide-ranging history of the discovery of microorganisms and their relation to disease.”
Ultimately, Levenson says, “Science both gives us the material power that drives changes in society, that drives history, and science is done by people who are embedded in places and times. Looking at that is a wonderful way into bigger questions. That’s true of germ theory as well. It tells you a great deal about what societies value, and probes the society we now live in.”
Decarbonizing steel is as tough as steel
The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly within the iron and steel sector, industry’s largest emitter of CO2.
Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for generating heat, converting iron ore to iron, and strengthening steel. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.
Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), along with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).
Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could result in deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.
Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if it is generated by renewable energy sources), using more scrap steel, and implementing energy efficiency measures.
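For a feel of the arithmetic behind such comparisons, here is a brief sketch using ballpark literature emission factors for the major production routes (illustrative values, not figures from the study):

```python
# Illustrative emissions-intensity comparison for steelmaking routes.
# Emission factors are rough literature ballparks (t CO2 per t steel),
# not numbers from the MIT study.

EMISSION_FACTORS = {
    "BF-BOF (coal-based blast furnace)": 2.0,
    "NG DRI-EAF (natural gas, no CCS)": 1.0,
    "Scrap-EAF (recycled steel, grid power)": 0.4,
}

baseline = EMISSION_FACTORS["BF-BOF (coal-based blast furnace)"]
for route, factor in EMISSION_FACTORS.items():
    cut = 100 * (1 - factor / baseline)
    print(f"{route}: {factor} t CO2/t steel ({cut:.0f}% below the coal route)")
```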
“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).
This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.
The shadow architects of power
In Washington, where conversations about Russia often center on a single name, political science doctoral candidate Suzanne Freeman is busy redrawing the map of power in autocratic states. Her research upends prevailing narratives about Vladimir Putin’s Russia, asking us to look beyond the individual to understand the system that produced him.
“The standard view is that Putin originated Russia’s system of governance and the way it engages with the world,” Freeman explains. “My contention is that Putin is a product of a system rather than its author, and that his actions are very consistent with the foreign policy beliefs of the organization in which he was educated.”
That organization — the KGB and its successor agencies — stands at the center of Freeman’s dissertation, which examines how authoritarian intelligence agencies intervene in their own states’ foreign policy decision-making processes, particularly decisions about using military force.
Dismantling the “yes men” myth
Past scholarship has relied on an oversimplified characterization of intelligence agencies in authoritarian states. “The established belief that I’m challenging is essentially that autocrats surround themselves with ‘yes’ men,” Freeman says. She notes that this narrative stems in great part from a famous Soviet failure, when intelligence officers were too afraid to contradict Stalin’s belief that Nazi Germany wouldn’t invade in 1941.
Freeman’s research reveals a far more complex reality. Through extensive archival work, including newly declassified documents from Lithuania, Moldova, and Poland, she shows that intelligence agencies in authoritarian regimes actually have distinct foreign policy preferences and actively work to advance them.
“These intelligence agencies are motivated by their organizational interests, seeking to survive and hold power inside and beyond their own borders,” Freeman says.
When an international situation threatens those interests, authoritarian intelligence agencies may intervene in the policy process using strategies Freeman has categorized in an innovative typology: indirect manipulation (altering collected intelligence), direct manipulation (misrepresenting analyzed intelligence), preemption in the field (unauthorized actions that alter a foreign crisis), and coercion (threats against political leadership).
“By intervene, I mean behaving in some way that’s inappropriate in accordance with what their mandate is,” Freeman explains. That mandate includes providing policy advice. “But sometimes intelligence agencies want to make their policy advice look more attractive by manipulating information,” she notes. “They may change the facts out on the ground, or in very rare circumstances, coerce policymakers.”
From Soviet archives to modern Russia
Rather than studying contemporary Russia alone, Freeman uses historical case studies of the Soviet Union’s KGB. Her research into this agency’s policy intervention covers eight foreign policy crises between 1950 and 1981, including uprisings in Eastern Europe, the Sino-Soviet border dispute, and the Soviet-Afghan War.
What she discovered contradicts prior assumptions that the agency was primarily a passive information provider. “The KGB had always been important for Soviet foreign policy and gave policy advice about what they thought should be done,” she says. Intelligence agencies were especially likely to pursue policy intervention when facing a “dual threat”: domestic unrest sparked by foreign crises combined with the loss of intelligence networks abroad.
This organizational motivation, rather than simply following a leader’s preferences, drove policy recommendations in predictable ways.
Freeman sees striking parallels to Russia’s recent actions in Ukraine. “This dual organizational threat closely mirrors the threat that the KGB faced in Hungary in 1956, Czechoslovakia in 1968, and Poland from 1980 to 1981,” she explains. After 2014, Ukrainian intelligence reform weakened Russian intelligence networks in the country — a serious organizational threat to Russia’s security apparatus.
“Between 2014 and 2022, this network weakened,” Freeman notes. “We know that Russian intelligence had ties with a polling firm in Ukraine, where they had data saying that 84 percent of the population would view them as occupiers, that almost half of the Ukrainian population was willing to fight for Ukraine.” In spite of these polls, officers recommended going into Ukraine anyway.
This pattern resembles the KGB’s advocacy for invading Afghanistan using the manipulation of intelligence — a parallel that helps explain Russia’s foreign policy decisions beyond just Putin’s personal preferences.
Scholarly detective work
Freeman’s research innovations have allowed her to access previously unexplored material. “From a methodological perspective, it’s new archival material, but it’s also archival material from regions of a country, not the center,” she explains.
In Moldova, she examined previously classified KGB documents: huge amounts of newly available and unstructured material that provided insight into how anti-Soviet sentiment during foreign crises affected the KGB.
Freeman’s willingness to search beyond central archives distinguishes her approach, especially valuable as direct research in Russia becomes increasingly difficult. “People who want to study Russia or the Soviet Union who are unable to get to Russia can still learn very meaningful things, even about the central state, from these other countries and regions.”
From Boston to Moscow to MIT
Freeman grew up in Boston in an academic, science-oriented family; both her parents were immunologists. Going against the grain, she was drawn to history, particularly Russian and Soviet history, beginning in high school.
“I was always curious about the Soviet Union and why it fell apart, but I never got a clear answer from my teachers,” says Freeman. “This really made me want to learn more and solve that puzzle myself.”
At Columbia University, she majored in Slavic studies and completed a master’s degree at the School of International and Public Affairs. Her undergraduate thesis examined Russian military reform, a topic that gained new relevance after Russia’s 2014 invasion of Ukraine.
Before beginning her doctoral studies at MIT, Freeman worked at the Russia Maritime Studies Institute at the U.S. Naval War College, researching Russian military strategy and doctrine. There, surrounded by scholars with political science and history PhDs, she found her calling.
“I decided I wanted to be in an academic environment where I could do research that I thought would prove valuable,” she recalls.
Bridging academia and public education
Beyond her core research, Freeman has established herself as an innovator in war-gaming methodology. With fellow PhD student Benjamin Harris, she co-founded the MIT Wargaming Working Group, which has developed a partnership with the Naval Postgraduate School to bring mid-career military officers and academics together for annual simulations.
Their work on war-gaming as a pedagogical tool resulted in a peer-reviewed publication in PS: Political Science & Politics titled “Crossing a Virtual Divide: Wargaming as a Remote Teaching Tool.” This research demonstrates that war games are effective tools for active learning even in remote settings and can help bridge the civil-military divide.
When not conducting research, Freeman works as a tour guide at the International Spy Museum in Washington. “I think public education is important — plus they have a lot of really cool KGB objects,” she says. “I felt like working at the Spy Museum would help me keep thinking about my research in a more fun way and hopefully help me explain some of these things to people who aren’t academics.”
Looking beyond individual leaders
Freeman’s work offers vital insight for policymakers who too often focus exclusively on autocratic leaders, rather than the institutional systems surrounding them. “I hope to give people a new lens through which to view the way that policy is made,” she says. “The intelligence agency and the type of advice that it provides to political leadership can be very meaningful.”
As tensions with Russia continue, Freeman believes her research provides a crucial framework for understanding state behavior beyond individual personalities. “If you're going to be negotiating and competing with these authoritarian states, thinking about the leadership beyond the autocrat seems very important.”
Currently completing her dissertation as a predoctoral fellow at George Washington University’s Institute for Security and Conflict Studies, Freeman aims to contribute critical scholarship on Russia’s role in international security and inspire others to approach complex geopolitical questions with systematic research skills.
“In Russia and other authoritarian states, the intelligence system may endure well beyond a single leader’s reign,” Freeman notes. “This means we must focus not on the figures who dominate the headlines, but on the institutions that shape them.”
Bringing meaning into technology deployment
In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research incorporating social, ethical, and technical considerations and expertise, each project supported by a seed grant established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.
“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”
“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research, in the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.
Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of which are available to watch on YouTube, included:
Making the kidney transplant system fairer
Policies regulating the organ transplant system in the United States are made by a national committee, and each policy often takes more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.
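The article does not detail Bertsimas’ formulation, but allocation policies of this kind are often prototyped as weighted scoring rules over candidate criteria. A minimal sketch with hypothetical fields and weights, not the actual UNOS or Bertsimas model:

```python
from dataclasses import dataclass

# Sketch of scoring transplant candidates under a policy expressed as
# criterion weights. Fields and weights are hypothetical illustrations.

@dataclass
class Candidate:
    years_waiting: float
    mortality_risk: float   # 0..1, higher = more medically urgent
    distance_km: float      # donor-to-candidate distance

def score(c: Candidate, w_wait=1.0, w_mortality=5.0, w_distance=0.01) -> float:
    # Reward waiting time and urgency; penalize distance.
    return (w_wait * c.years_waiting
            + w_mortality * c.mortality_risk
            - w_distance * c.distance_km)

candidates = [
    Candidate(years_waiting=4.0, mortality_risk=0.30, distance_km=120),
    Candidate(years_waiting=1.5, mortality_risk=0.70, distance_km=40),
]

# Evaluating a policy scenario amounts to re-ranking candidates under
# a new set of weights; the speedup comes from doing this at scale.
ranked = sorted(candidates, key=score, reverse=True)
print(ranked[0])
```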
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the new algorithm’s impact:
“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
The ethics of AI-generated social media content
As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.
In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions affected users’ perception of deception, their intent to engage with the post, and their belief in whether the post was true or false.
“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”
Using AI to increase civil discourse online
“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.
Online deliberative platforms have recently been rising in popularity across the United States in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say — but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and secondly, online discourse has become increasingly “uncivil.”
The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.
Tsai told the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank, but a framework — one that articulated how artificial intelligence and machine learning work could integrate community methods and utilize participatory design.
In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” said D’Ignazio.
Photonic processor could streamline 6G wireless signal processing
As more connected devices demand an increasing amount of bandwidth for tasks like teleworking and cloud computing, it will become extremely challenging to manage the finite amount of wireless spectrum available for all users to share.
Engineers are employing artificial intelligence to dynamically manage the available wireless spectrum, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and can’t operate in real time.
Now, MIT researchers have developed a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.
The photonic chip is about 100 times faster than the best digital alternative, while converging to about 95 percent accuracy in signal classification. The new hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. At the same time, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.
The device could be especially useful in future 6G wireless applications, such as cognitive radios that optimize data rates by adapting wireless modulation formats to the changing wireless environment.
By enabling an edge device to perform deep-learning computations in real-time, this new hardware accelerator could provide dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles make split-second reactions to environmental changes or enable smart pacemakers to continuously monitor the health of a patient’s heart.
“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.
He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research appears today in Science Advances.
Light-speed processing
State-of-the-art digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computationally intensive nature of deep neural networks makes it infeasible for many time-sensitive applications.
Optical systems can accelerate deep neural networks by encoding and processing data using light, which is also less energy intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when used for signal processing, while ensuring the optical device is scalable.
By developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN), the researchers tackled that problem head-on.
The MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain — before the wireless signals are digitized.
The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.
Thanks to this innovative design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require one device for each individual computational unit, or “neuron.”
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
The researchers accomplish this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.
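The paper’s hardware details go beyond the article, but the core primitive it names, analog multiplication of frequency-encoded signals, is easy to illustrate numerically: multiplying two tones concentrates energy at their sum and difference frequencies. A toy NumPy sketch (a conceptual analogy, not a model of the MAFT-ONN device):

```python
import numpy as np

# Toy illustration of multiplication in the frequency domain: the product
# of two tones at f1 and f2 carries energy at f1 - f2 and f1 + f2. This
# mimics the role of an analog multiply conceptually; it is not a model
# of the actual photoelectric-multiplication hardware.

fs = 10_000                         # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)       # 1 second of samples
f1, f2 = 440.0, 100.0

signal = np.cos(2 * np.pi * f1 * t)     # "data" tone
weight = np.cos(2 * np.pi * f2 * t)     # "weight" tone
product = signal * weight               # analog multiply

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(product), 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print(peaks)   # ~[340. 540.] -> f1 - f2 and f1 + f2
```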
Results in nanoseconds
MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along to the edge device for downstream operations. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries.
One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.
“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.
When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot, which can quickly converge to more than 99 percent accuracy using multiple measurements. MAFT-ONN only required about 120 nanoseconds to perform the entire process.
“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
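The jump from 85 percent single-shot accuracy to more than 99 percent follows standard repeated-measurement statistics if the shots are roughly independent. A quick simulation under that independence assumption (an idealization, not the paper’s exact protocol):

```python
import random

# Majority vote over repeated single-shot classifications, assuming
# independent shots with 85% per-shot accuracy. An idealization, not
# the paper's measurement protocol.

P_SINGLE = 0.85
TRIALS = 100_000

def vote_accuracy(n_shots: int) -> float:
    correct = 0
    for _ in range(TRIALS):
        hits = sum(random.random() < P_SINGLE for _ in range(n_shots))
        correct += hits > n_shots // 2   # majority of shots correct
    return correct / TRIALS

for n in (1, 3, 5, 9):
    print(n, round(vote_accuracy(n), 4))
# Under these assumptions, 9 votes already exceed 99% accuracy.
```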
While state-of-the-art digital radio frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.
Moving forward, the researchers want to employ what are known as multiplexing schemes so they could perform more computations and scale up the MAFT-ONN. They also want to extend their work into more complex deep learning architectures that could run transformer models or LLMs.
This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
Have a damaged painting? Restore it in just hours with an AI-generated “mask”
Art restoration takes steady hands and a discerning eye. For centuries, conservators have restored paintings by identifying areas needing repair, then mixing an exact shade to fill in one area at a time. Often, a painting can have thousands of tiny regions requiring individual attention. Restoring a single painting can take anywhere from a few weeks to over a decade.
In recent years, digital restoration tools have opened a route to creating virtual representations of original, restored works. These tools apply techniques from computer vision, image recognition, and color matching to generate a “digitally restored” version of a painting relatively quickly.
Still, there has been no way to translate digital restorations directly onto an original work, until now. In a paper appearing today in the journal Nature, Alex Kachkine, a mechanical engineering graduate student at MIT, presents a new method he’s developed to physically apply a digital restoration directly onto an original painting.
The restoration is printed on a very thin polymer film, in the form of a mask that can be aligned and adhered to an original painting. It can also be easily removed. Kachkine says that a digital file of the mask can be stored and referred to by future conservators, to see exactly what changes were made to restore the original painting.
“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine says. “And that’s never really been possible in conservation before.”
As a demonstration, he applied the method to a highly damaged 15th-century oil painting. The method automatically identified 5,612 separate regions in need of repair, and filled in these regions using 57,314 different colors. The entire process, from start to finish, took 3.5 hours, which he estimates is about 66 times faster than traditional restoration methods.
Kachkine acknowledges that, as with any restoration project, there are ethical issues to consider, in terms of whether a restored version is an appropriate representation of an artist’s original style and intent. Any application of his new method, he says, should be done in consultation with conservators with knowledge of a painting’s history and origins.
“There is a lot of damaged art in storage that might never be seen,” Kachkine says. “Hopefully with this new method, there’s a chance we’ll see more art, which I would be delighted by.”
Digital connections
The new restoration process started as a side project. In 2021, as Kachkine made his way to MIT to start his PhD program in mechanical engineering, he drove up the East Coast and made a point to visit as many art galleries as he could along the way.
“I’ve been into art for a very long time now, since I was a kid,” says Kachkine, who restores paintings as a hobby, using traditional hand-painting techniques. As he toured galleries, he came to realize that the art on the walls is only a fraction of the works that galleries hold. Much of the art that galleries acquire is stored away because the works are aged or damaged, and take time to properly restore.
“Restoring a painting is fun, and it’s great to sit down and infill things and have a nice evening,” Kachkine says. “But that’s a very slow process.”
As he has learned, digital tools can significantly speed up the restoration process. Researchers have developed artificial intelligence algorithms that quickly comb through huge amounts of data. The algorithms learn connections within this visual data, which they apply to generate a digitally restored version of a particular painting, in a way that closely resembles the style of an artist or time period. However, such digital restorations are usually displayed virtually or printed as stand-alone works and cannot be directly applied to retouch original art.
“All this made me think: If we could just restore a painting digitally, and effect the results physically, that would resolve a lot of pain points and drawbacks of a conventional manual process,” Kachkine says.
“Align and restore”
For the new study, Kachkine developed a method to physically apply a digital restoration onto an original painting, using a 15th-century painting that he acquired when he first came to MIT. His new method involves first using traditional techniques to clean a painting and remove any past restoration efforts.
“This painting is almost 600 years old and has gone through conservation many times,” he says. “In this case there was a fair amount of overpainting, all of which has to be cleaned off to see what’s actually there to begin with.”
He scanned the cleaned painting, including the many regions where paint had faded or cracked. He then used existing artificial intelligence algorithms to analyze the scan and create a virtual version of what the painting likely looked like in its original state.
Then, Kachkine developed software that creates a map of regions on the original painting that require infilling, along with the exact colors needed to match the digitally restored version. This map is then translated into a physical, two-layer mask that is printed onto thin polymer-based films. The first layer is printed in color, while the second layer is printed in the exact same pattern, but in white.
“In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains. “If those two layers are misaligned, that’s very easy to see. So I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
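A minimal sketch of the mask-generation step as described: flag damaged pixels, pull their colors from the digital restoration, and emit a color layer plus a white layer in the identical pattern. The file names and the simple pixel-difference damage test are placeholders, not Kachkine’s actual software:

```python
import numpy as np
from PIL import Image

# Sketch of building a two-layer restoration mask: a color layer holding
# the infill colors, and a white backing layer printed in the identical
# pattern. The damage detector (any pixel differing noticeably between
# scan and digital restoration) stands in for the real region-mapping step.

scan = np.asarray(Image.open("cleaned_scan.png").convert("RGB"), dtype=np.int16)
restored = np.asarray(Image.open("digital_restoration.png").convert("RGB"), dtype=np.int16)

# Pixels needing infill: where the restoration differs from the scan.
damaged = np.abs(scan - restored).sum(axis=-1) > 30

color_layer = np.zeros((*damaged.shape, 4), dtype=np.uint8)  # RGBA, transparent
color_layer[damaged, :3] = restored[damaged].astype(np.uint8)
color_layer[damaged, 3] = 255

white_layer = np.zeros_like(color_layer)                     # same pattern, white
white_layer[damaged] = 255

Image.fromarray(color_layer).save("mask_color_layer.png")
Image.fromarray(white_layer).save("mask_white_layer.png")
print(f"{damaged.sum()} pixels flagged for infill")
```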
Kachkine used high-fidelity commercial inkjets to print the mask’s two layers, which he carefully aligned and overlaid by hand onto the original painting and adhered with a thin spray of conventional varnish. The printed films are made from materials that can be easily dissolved with conservation-grade solutions, in case conservators need to reveal the original, damaged work. The digital file of the mask can also be saved as a detailed record of what was restored.
For the painting that Kachkine used, the method was able to fill in thousands of losses in just a few hours. “A few years ago, I was restoring this baroque Italian painting with probably the same order of magnitude of losses, and it took me nine months of part-time work,” he recalls. “The more losses there are, the better this method is.”
He estimates that the new method can be orders of magnitude faster than traditional, hand-painted approaches. If the method is adopted widely, he emphasizes that conservators should be involved at every step in the process, to ensure that the final work is in keeping with an artist’s style and intent.
“It will take a lot of deliberation about the ethical challenges involved at every stage in this process to see how this can be applied in a way that’s most consistent with conservation principles,” he says. “We’re setting up a framework for developing further methods. As others work on this, we’ll end up with methods that are more precise.”
This work was supported, in part, by the John O. and Katherine A. Lutz Memorial Fund. The research was carried out, in part, through the use of equipment and facilities at MIT.nano, with additional support from the MIT Microsystems Technology Laboratories, the MIT Department of Mechanical Engineering, and the MIT Libraries.
Window-sized device taps the air for safe drinking water
Today, 2.2 billion people in the world lack access to safe drinking water. In the United States, more than 46 million people experience water insecurity, living with either no running water or water that is unsafe to drink. The increasing need for drinking water is stretching traditional resources such as rivers, lakes, and reservoirs.
To improve access to safe and affordable drinking water, MIT engineers are tapping into an unconventional source: the air. The Earth’s atmosphere contains millions of billions of gallons of water in the form of vapor. If this vapor can be efficiently captured and condensed, it could supply clean drinking water in places where traditional water resources are inaccessible.
With that goal in mind, the MIT team has developed and tested a new atmospheric water harvester and shown that it efficiently captures water vapor and produces safe drinking water across a range of relative humidities, including dry desert air.
The new device is a black, window-sized vertical panel, made from a water-absorbent hydrogel material, enclosed in a glass chamber coated with a cooling layer. The hydrogel resembles black bubble wrap, with small dome-shaped structures that swell when the hydrogel soaks up water vapor. When the captured vapor evaporates, the domes shrink back down in an origami-like transformation. The evaporated vapor then condenses on the glass, where it can flow down and out through a tube, as clean and drinkable water.
The system runs entirely on its own, without a power source, unlike other designs that require batteries, solar panels, or electricity from the grid. The team ran the device for over a week in Death Valley, California — the driest region in North America. Even in very low-humidity conditions, the device squeezed drinking water from the air at rates of up to 160 milliliters (about two-thirds of a cup) per day.
The team estimates that multiple vertical panels, set up in a small array, could passively supply a household with drinking water, even in arid desert environments. What’s more, the system’s water production should increase with humidity, supplying drinking water in temperate and tropical climates.
“We have built a meter-scale device that we hope to deploy in resource-limited regions, where even a solar cell is not very accessible,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering and Civil and Environmental Engineering at MIT. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact.”
Zhao and his colleagues present the details of the new water harvesting design in a paper appearing today in the journal Nature Water. The study’s lead author is former MIT postdoc Chang “Will” Liu, who is currently an assistant professor at the National University of Singapore (NUS). MIT co-authors include Xiao-Yun Yan, Shucong Li, and Bolei Deng, along with collaborators from multiple other institutions.
Carrying capacity
Hydrogels are soft, porous materials that are made mainly from water and a microscopic network of interconnecting polymer fibers. Zhao’s group at MIT has primarily explored the use of hydrogels in biomedical applications, including adhesive coatings for medical implants, soft and flexible electrodes, and noninvasive imaging stickers.
“Through our work with soft materials, one property we know very well is the way hydrogel is very good at absorbing water from air,” Zhao says.
Researchers are exploring a number of ways to harvest water vapor for drinking water. Among the most efficient so far are devices made from metal-organic frameworks, or MOFs — ultra-porous materials that have also been shown to capture water from dry desert air. But MOFs do not swell or stretch as they absorb water, which limits how much vapor they can hold.
Water from air
The group’s new hydrogel-based water harvester also addresses a key problem in similar designs. Other groups have designed water harvesters out of micro- or nano-porous hydrogels, but the water those designs produce can be salty and require additional filtering. That is because salt naturally attracts moisture, and researchers embed salts, typically lithium chloride, in hydrogels to increase their water absorption. The drawback is that this salt can leak out with the water when it is eventually collected.
The team’s new design significantly limits salt leakage. Within the hydrogel itself, they included an extra ingredient: glycerol, a liquid compound that naturally stabilizes salt, keeping it within the gel rather than letting it crystallize and leak out with the water. The hydrogel itself has a microstructure that lacks nanoscale pores, which further prevents salt from escaping the material. The salt levels in the water they collected were below the standard threshold for safe drinking water, and significantly below the levels produced by many other hydrogel-based designs.
In addition to tuning the hydrogel’s composition, the researchers made improvements to its form. Rather than keeping the gel as a flat sheet, they molded it into a pattern of small, bubble-wrap-like domes that increase the gel’s surface area and, with it, the amount of water vapor it can absorb.
The researchers fabricated a half-square-meter sheet of hydrogel and encased the material in a window-like glass chamber. They coated the exterior of the chamber with a special polymer film that helps to cool the glass, prompting water vapor in the hydrogel to evaporate and condense onto the glass. They installed a simple tubing system to collect the water as it flows down the glass.
In November 2023, the team traveled to Death Valley, California, and set up the device as a vertical panel. Over seven days, they took measurements as the hydrogel absorbed water vapor during the night (the time of day when water vapor in the desert is highest). In the daytime, with help from the sun, the harvested water evaporated out from the hydrogel and condensed onto the glass.
Over this period, the device worked across a range of humidities, from 21 to 88 percent, and produced between 57 and 161.5 milliliters of drinking water per day. Even in the driest conditions, the device harvested more water than other passive and some actively powered designs.
“This is just a proof-of-concept design, and there are a lot of things we can optimize,” Liu says. “For instance, we could have a multipanel design. And we’re working on a next generation of the material to further improve its intrinsic properties.”
“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” says Zhao, who has plans to further test the panels in many resource-limited regions. “Then you could have many panels together, collecting water all the time, at household scale.”
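For a rough sense of the scale such an array implies, the short Python sketch below runs the arithmetic. The per-panel yields are the Death Valley figures reported above; the household demand numbers are assumptions made for illustration, not figures from the study.

```python
# Back-of-envelope scaling from the reported field test. The per-panel
# yields are the Death Valley figures (57 to 161.5 mL per day from one
# half-square-meter panel); household demand is an assumed value.

PANEL_YIELD_ML_PER_DAY = (57.0, 161.5)   # measured desert range, per panel
DRINKING_L_PER_PERSON = 2.0              # assumed daily drinking-water need
HOUSEHOLD_SIZE = 4                       # assumed

demand_ml = DRINKING_L_PER_PERSON * HOUSEHOLD_SIZE * 1000

for yield_ml in PANEL_YIELD_ML_PER_DAY:
    panels = demand_ml / yield_ml
    print(f"{yield_ml:6.1f} mL/day per panel -> ~{panels:.0f} panels "
          f"(~{panels * 0.5:.0f} m^2 of hydrogel)")
```

Yields should rise with humidity, so temperate and tropical sites would need correspondingly fewer panels; the arithmetic mostly underscores why the team is pursuing larger panels and parallel arrays.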
This work was supported, in part, by the MIT J-WAFS Water and Food Seed Grant, the MIT-Chinese University of Hong Kong collaborative research program, and the UM6P-MIT collaborative research program.
How the brain solves complicated problems
The human brain is very good at solving complicated problems. One reason is that humans can break a problem into manageable subtasks that are easy to solve one at a time.
This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.
While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.
In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.
The researchers were also able to determine the circumstances under which people choose each of those strategies.
“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.
Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.
Rational strategies
When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.
Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.
“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.
To get at that question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations behind them can be measured.
The task requires participants to predict which of four possible trajectories a ball will take through a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. No human can solve the task with perfect accuracy.
“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”
The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.
For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.
The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.
That suggests that instead of tracking all the possible paths the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, then continued to track it as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
Going back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones they heard. But these memories are not always reliable, and the researchers found that people decided whether to go back based on how good they believed their memory to be.
“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
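To make the interplay of the two strategies concrete, here is a minimal toy simulation in Python. The maze timings, noise levels, mismatch threshold, and memory-trust rule are all invented for illustration; this is a sketch of the general idea, not the computational model from the paper.

```python
import random

# Toy agent for a two-junction maze with four possible paths.
# All timings and noise levels below are invented for illustration.
ARM = {"L": 0.20, "R": 0.50}   # seconds from entry to the first junction
SUB = {"L": 0.20, "R": 0.60}   # seconds from junction 1 to junction 2

TIMING_SD = 0.12    # noise on perceived tone times (the subject's timing skill)
MEMORY_SD = 0.15    # extra noise when later recalling the first tone
MISMATCH = 0.20     # timing error that triggers a counterfactual check
TRUST_MEMORY = MEMORY_SD < 0.25   # poor recall -> skip counterfactuals entirely

def closest(obs, options):
    """Pick the option whose expected timing best matches an observation."""
    return min(options, key=lambda k: abs(obs - options[k]))

def trial(use_counterfactual=True):
    b, s = random.choice("LR"), random.choice("LR")       # the true path
    tone1 = random.gauss(ARM[b], TIMING_SD)               # perceived 1st tone
    tone2 = random.gauss(ARM[b] + SUB[s], TIMING_SD)      # perceived 2nd tone

    # Hierarchical reasoning: commit to one branch from the first tone,
    # then solve only that branch's sub-choice.
    branch = closest(tone1, ARM)
    sub = closest(tone2 - ARM[branch], SUB)
    err = abs(tone2 - (ARM[branch] + SUB[sub]))

    # Counterfactual reasoning: if the second tone fits the committed branch
    # poorly, re-examine the (noisier) memory of the first tone and possibly
    # switch branches -- but only when memory is deemed reliable enough.
    if use_counterfactual and TRUST_MEMORY and err > MISMATCH:
        remembered = random.gauss(ARM[b], MEMORY_SD)      # degraded recall
        branch = closest(remembered, ARM)
        sub = closest(tone2 - ARM[branch], SUB)

    return (branch, sub) == (b, s)

N = 20_000
for flag in (False, True):
    acc = sum(trial(flag) for _ in range(N)) / N
    print(f"counterfactual={flag}: accuracy {acc:.1%}")
```

The MEMORY_SD knob plays the role of the memory reliability described above: raise it past the trust threshold and the agent stops going back, collapsing to pure hierarchical reasoning. It is also, in spirit, the kind of constraint the researchers imposed on their neural network model, described below.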
Human limitations
To further validate their results, the researchers trained an artificial neural network to complete the task. A network trained on this task tracks the ball’s path accurately and makes the correct prediction every time, unless the researchers impose limitations on its performance.
When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies as humans do. When they reduced the model’s memory recall ability, it resorted to counterfactual reasoning only when it judged its recall reliable enough to get the right answer, just as humans do.
“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”
By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.
The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.
Once-a-week pill for schizophrenia shows promise in clinical trials
For many patients with schizophrenia, other psychiatric illnesses, or diseases such as hypertension and asthma, it can be difficult to take their medicine every day. To help overcome that challenge, MIT researchers have developed a pill that can be taken just once a week and gradually releases medication from within the stomach.
In a phase 3 clinical trial conducted by MIT spinout Lyndra Therapeutics, the researchers used the once-a-week pill to deliver a widely used medication for managing the symptoms of schizophrenia. They found that this treatment regimen maintained consistent levels of the drug in patients’ bodies and controlled their symptoms just as well as daily doses of the drug. The results are published today in The Lancet Psychiatry.
“We’ve converted something that has to be taken once a day to once a week, orally, using a technology that can be adapted for a variety of medications,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, an associate member of the Broad Institute, and an author of the study. “The ability to provide a sustained level of drug for a prolonged period, in an easy-to-administer system, makes it easier to ensure patients are receiving their medication.”
Traverso’s lab began developing the ingestible capsule studied in this trial more than 10 years ago, as part of an ongoing effort to make medications easier for patients to take. The capsule is about the size of a multivitamin, and once swallowed, it expands into a star shape that helps it remain in the stomach until all of the drug is released.
Richard Scranton, chief medical officer of Lyndra Therapeutics, is the senior author of the paper, and Leslie Citrome, a clinical professor of psychiatry and behavioral sciences at New York Medical College School of Medicine, is the lead author. Nayana Nagaraj, medical director at Lyndra Therapeutics, and Todd Dumas, senior director of pharmacometrics at Certara, are also authors.
Sustained delivery
Over the past decade, Traverso’s lab has been working on a variety of capsules that can be swallowed and remain in the digestive tract for days or weeks, slowly releasing their drug payload. In 2016, his team reported the star-shaped device, which was then further developed by Lyndra for clinical trials in patients with schizophrenia.
The device contains six arms that can be folded in, allowing it to fit inside a capsule. The capsule dissolves when the device reaches the stomach, allowing the arms to spring out. Once the arms are extended, the device becomes too large to pass through the pylorus (the exit of the stomach), so it remains freely floating in the stomach as drugs are slowly released from the arms. After about a week, the arms break off on their own, and each segment exits the stomach and passes through the digestive tract.
For the clinical trials, the capsule was loaded with risperidone, a commonly prescribed medication used to treat schizophrenia. Most patients take the drug orally once a day. There are also injectable versions that can be given every two weeks, every month, or every two months, but they require administration by a health care provider and are not always acceptable to patients.
The MIT and Lyndra team chose to focus on schizophrenia in hopes that a drug regimen that could be administered less frequently, through oral delivery, could make treatment easier for patients and their caregivers.
“One of the areas of unmet need that was recognized early on is neuropsychiatric conditions, where the illness can limit or impair one’s ability to remember to take their medication,” Traverso says. “With that in mind, one of the conditions that has been a big focus has been schizophrenia.”
The phase 3 trial was coordinated by researchers at Lyndra and enrolled 83 patients at five different sites around the United States. Forty-five of those patients completed the full five weeks of the study, in which they took one risperidone-loaded capsule per week.
Throughout the study, the researchers measured the amount of drug in each patient’s bloodstream. Each week, they found a sharp increase on the day the pill was given, followed by a slow decline over the next week. The levels were all within the optimal range, and there was less variation over time than is seen when patients take a pill each day.
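That profile is roughly what a simple pharmacokinetic model predicts. The Python sketch below compares a once-daily pill with an idealized week-long depot in a toy one-compartment model; the half-life, doses, and constant-rate release are assumptions for illustration, not the trial’s measured values, and the real capsule shows an initial burst rather than perfectly constant release.

```python
import math

# Toy one-compartment pharmacokinetic sketch: once-daily bolus dosing
# versus a gastric depot releasing at a near-constant rate for a week.
# Every number here is an illustrative assumption, not a trial value.

HALF_LIFE_H = 20.0                    # assumed plasma half-life (hours)
k = math.log(2) / HALF_LIFE_H         # first-order elimination rate (1/h)
DT = 0.5                              # simulation time step (hours)
STEPS = int(21 * 24 / DT)             # simulate three weeks

def simulate(bolus_at, depot_rate):
    """bolus_at(t) gives units entering plasma instantly at hour t;
    depot_rate is a constant release (units/h) from a weekly 700-unit depot."""
    depot = plasma = 0.0
    series = []
    for i in range(STEPS):
        t = i * DT
        plasma += bolus_at(t)
        if depot_rate and t % 168 == 0:
            depot += 700.0                          # swallow the weekly capsule
        release = min(depot_rate, max(depot, 0.0) / DT)
        depot -= release * DT
        plasma += (release - k * plasma) * DT       # forward-Euler update
        series.append(plasma)
    return series

daily = simulate(lambda t: 100.0 if t % 24 == 0 else 0.0, depot_rate=0.0)
weekly = simulate(lambda t: 0.0, depot_rate=700.0 / 168.0)  # same weekly total

for name, s in (("once-daily pill", daily), ("weekly depot", weekly)):
    last_week = s[-int(168 / DT):]                  # near steady state
    print(f"{name}: steady-state peak/trough = {max(last_week) / min(last_week):.2f}")
```

With the same weekly drug total, the depot’s steady-state swings between peak and trough come out far smaller than the daily pill’s, which is the qualitative pattern the trial observed.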
Effective treatment
Using an evaluation known as the Positive and Negative Syndrome Scale (PANSS), the researchers also found that the patients’ symptoms remained stable throughout the study.
“One of the biggest obstacles in the care of people with chronic illnesses in general is that medications are not taken consistently. This leads to worsening symptoms, and in the case of schizophrenia, potential relapse and hospitalization,” Citrome says. “Having the option to take medication by mouth once a week represents an important option that can assist with adherence for the many patients who would prefer oral medications versus injectable formulations.”
Side effects from the treatment were minimal, the researchers found. Some patients experienced mild acid reflux and constipation early in the study, but these did not last long. The results, showing effectiveness of the capsule and few side effects, represent a major milestone in this approach to drug delivery, Traverso says.
“This really demonstrates what we hypothesized a decade ago: that a single capsule can provide a drug depot within the GI tract,” he says. “Here what you see is that the capsule can achieve the drug levels that were predicted, and also control symptoms in a sizeable cohort of patients with schizophrenia.”
The investigators now hope to complete larger phase 3 studies before applying for FDA approval of this delivery approach for risperidone. They are also preparing for phase 1 trials using this capsule to deliver other drugs, including contraceptives.
“We are delighted that this technology which started at MIT has reached the point of phase 3 clinical trials,” says Robert Langer, the David H. Koch Institute Professor at MIT, who was an author of the original study on the star capsule and is a co-founder of Lyndra Therapeutics.
The research was funded by Lyndra Therapeutics.