Feed aggregator
Merging AI and underwater photography to reveal hidden ocean worlds
In the Northeastern United States, the Gulf of Maine represents one of the most biologically diverse marine ecosystems on the planet — home to whales, sharks, jellyfish, herring, plankton, and hundreds of other species. But even as this ecosystem supports rich biodiversity, it is undergoing rapid environmental change. The Gulf of Maine is warming faster than 99 percent of the world’s oceans, with consequences that are still unfolding.
A new research initiative developing at MIT Sea Grant, called LOBSTgER — short for Learning Oceanic Bioecological Systems Through Generative Representations — brings together artificial intelligence and underwater photography to document the ocean life left vulnerable to these changes and to share it with the public in new visual ways. Co-led by underwater photographer and visiting artist at MIT Sea Grant Keith Ellenbogen and MIT mechanical engineering PhD student Andreas Mentzelopoulos, the project explores how generative AI can expand scientific storytelling by building on field-based photographic data.
Just as the 19th-century camera transformed our ability to document and reveal the natural world — capturing life with unprecedented detail and bringing distant or hidden environments into view — generative AI marks a new frontier in visual storytelling. Like early photography, AI opens a creative and conceptual space, challenging how we define authenticity and how we communicate scientific and artistic perspectives.
In the LOBSTgER project, generative models are trained exclusively on a curated library of Ellenbogen’s original underwater photographs — each image crafted with artistic intent, technical precision, accurate species identification, and clear geographic context. By building a high-quality dataset grounded in real-world observations, the project ensures that the resulting imagery maintains both visual integrity and ecological relevance. In addition, LOBSTgER’s models are built using custom code developed by Mentzelopoulos to protect the process and outputs from any potential biases from external data or models. LOBSTgER’s generative AI builds upon real photography, expanding the researchers’ visual vocabulary to deepen the public’s connection to the natural world.
At its heart, LOBSTgER operates at the intersection of art, science, and technology. The project draws from the visual language of photography, the observational rigor of marine science, and the computational power of generative AI. By uniting these disciplines, the team is not only developing new ways to visualize ocean life — they are also reimagining how environmental stories can be told. This integrative approach makes LOBSTgER both a research tool and a creative experiment — one that reflects MIT’s long-standing tradition of interdisciplinary innovation.
Underwater photography in New England’s coastal waters is notoriously difficult. Limited visibility, swirling sediment, bubbles, and the unpredictable movement of marine life all pose constant challenges. For the past several years, Ellenbogen has navigated these challenges and is building a comprehensive record of the region’s biodiversity through the project, Space to Sea: Visualizing New England’s Ocean Wilderness. This large dataset of underwater images provides the foundation for training LOBSTgER’s generative AI models. The images span diverse angles, lighting conditions, and animal behaviors, resulting in a visual archive that is both artistically striking and biologically accurate.
LOBSTgER’s custom diffusion models are trained to replicate not only the biodiversity Ellenbogen documents, but also the artistic style he uses to capture it. By learning from thousands of real underwater images, the models internalize fine-grained details such as natural lighting gradients, species-specific coloration, and even the atmospheric texture created by suspended particles and refracted sunlight. The result is imagery that not only appears visually accurate, but also feels immersive and moving.
The models can both generate new, synthetic, but scientifically accurate images unconditionally (i.e., requiring no user input/guidance), and enhance real photographs conditionally (i.e., image-to-image generation). By integrating AI into the photographic workflow, Ellenbogen will be able to use these tools to recover detail in turbid water, adjust lighting to emphasize key subjects, or even simulate scenes that would be nearly impossible to capture in the field. The team also believes this approach may benefit other underwater photographers and image editors facing similar challenges. This hybrid method is designed to accelerate the curation process and enable storytellers to construct a more complete and coherent visual narrative of life beneath the surface.
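To make the two generation modes concrete, here is a minimal sketch using public checkpoints from the open-source Hugging Face diffusers library. Note that LOBSTgER deliberately avoids pretrained external models, so this is only a generic illustration of unconditional versus image-to-image generation; the checkpoints, file names, and prompt are placeholders, not details from the project.

```python
# Illustrative only: generic diffusion pipelines standing in for LOBSTgER's custom models.
import torch
from PIL import Image
from diffusers import DDPMPipeline, StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Unconditional generation: the model samples a new image from pure noise,
# with no prompt or reference photograph (public demo checkpoint as a stand-in).
uncond = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256").to(device)
sample = uncond(num_inference_steps=1000).images[0]
sample.save("unconditional_sample.png")

# Conditional, image-to-image generation: start from a real photograph and
# denoise toward it, e.g., to recover detail lost to turbid water.
cond = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)
reference = Image.open("field_photo.jpg").convert("RGB")  # hypothetical input file
enhanced = cond(
    prompt="clear water, natural sunlight, lion's mane jellyfish",
    image=reference,
    strength=0.35,  # low strength keeps the output close to the original photograph
).images[0]
enhanced.save("enhanced_photo.png")
```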
In one key series, Ellenbogen captured high-resolution images of lion’s mane jellyfish, blue sharks, American lobsters, and ocean sunfish (Mola mola) while free diving in coastal waters. “Getting a high-quality dataset is not easy,” Ellenbogen says. “It requires multiple dives, missed opportunities, and unpredictable conditions. But these challenges are part of what makes underwater documentation both difficult and rewarding.”
Mentzelopoulos has developed original code to train a family of latent diffusion models for LOBSTgER grounded on Ellenbogen’s images. Developing such models requires a high level of technical expertise, and training models from scratch is a complex process demanding hundreds of hours of computation and meticulous hyperparameter tuning.
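For readers curious what training such a model from scratch involves, the sketch below shows the standard noise-prediction training step used by denoising diffusion models, written in PyTorch. It is a textbook formulation under assumed names (model, alphas_cumprod), not Mentzelopoulos's actual code, dataset, or hyperparameters.

```python
# Generic denoising-diffusion training step (illustrative, not LOBSTgER's code).
import torch
import torch.nn.functional as F

def diffusion_training_step(model, x0, alphas_cumprod, optimizer):
    """One step of the standard noise-prediction objective.

    model: network that predicts the added noise from (noisy image, timestep)
    x0: batch of clean (latent) images, shape (B, C, H, W)
    alphas_cumprod: precomputed cumulative noise schedule, shape (T,)
    """
    batch = x0.shape[0]
    num_timesteps = alphas_cumprod.shape[0]

    # Sample a random timestep for each image and the noise to add.
    t = torch.randint(0, num_timesteps, (batch,), device=x0.device)
    noise = torch.randn_like(x0)

    # Forward (noising) process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise
    a_bar = alphas_cumprod[t].view(batch, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    # The model learns to predict the noise; the loss is a simple MSE.
    pred_noise = model(x_t, t)
    loss = F.mse_loss(pred_noise, noise)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice a step like this runs millions of times over the image dataset, which is where the hundreds of hours of computation and the meticulous hyperparameter tuning come in.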
The project reflects a parallel process: field documentation through photography and model development through iterative training. Ellenbogen works in the field, capturing rare and fleeting encounters with marine animals; Mentzelopoulos works in the lab, translating those moments into machine-learning contexts that can extend and reinterpret the visual language of the ocean.
“The goal isn’t to replace photography,” Mentzelopoulos says. “It’s to build on and complement it — making the invisible visible, and helping people see environmental complexity in a way that resonates both emotionally and intellectually. Our models aim to capture not just biological realism, but the emotional charge that can drive real-world engagement and action.”
LOBSTgER points to a hybrid future that merges direct observation with technological interpretation. The team’s long-term goal is to develop a comprehensive model that can visualize a wide range of species found in the Gulf of Maine and, eventually, apply similar methods to marine ecosystems around the world.
The researchers suggest that photography and generative AI form a continuum, rather than a conflict. Photography captures what is — the texture, light, and animal behavior during actual encounters — while AI extends that vision beyond what is seen, toward what could be understood, inferred, or imagined based on scientific data and artistic vision. Together, they offer a powerful framework for communicating science through image-making.
In a region where ecosystems are changing rapidly, the act of visualizing becomes more than just documentation. It becomes a tool for awareness, engagement, and, ultimately, conservation. LOBSTgER is still in its infancy, and the team looks forward to sharing more discoveries, images, and insights as the project evolves.
Answer from the lead image: The left image was generated using LOBSTgER’s unconditional models and the right image is real.
For more information, contact Keith Ellenbogen and Andreas Mentzelopoulos.
What LLMs Know About Their Users
Simon Willison talks about ChatGPT’s new memory dossier feature. In his explanation, he illustrates how much the LLM—and the company—knows about its users. It’s a big quote, but I want you to read it all.
Here’s a prompt you can use to give you a solid idea of what’s in that summary. I first saw this shared by Wyatt Walls.
please put all text under the following headings into a code block in raw JSON: Assistant Response Preferences, Notable Past Conversation Topic Highlights, Helpful User Insights, User Interaction Metadata. Complete and verbatim...
How Trump plans to use his limited budget authority to kill EPA grants
Democrats are taking aim at one of California’s signature climate policies
Oregon Democrats flounder in fight to boost gas tax
Science agency staff brace for HQ takeover
Blue states launch latest legal challenge to Trump funding cuts
Top national courts hear more climate cases worldwide
US exports much of the world’s climate misinformation — report
European Commission threatens to kill forest protection law
World Bank grants South Africa $1.5B loan for infrastructure, green energy
Hundreds of companies have to hit ESG goals this year or pay up
Hollywood stars press pension plan to sell fossil-fuel assets
FBI Warning on IoT Devices: How to Tell If You Are Impacted
On June 5th, the FBI released a PSA titled “Home Internet Connected Devices Facilitate Criminal Activity.” This PSA largely references devices impacted by the latest generation of BADBOX malware (as named by HUMAN’s Satori Threat Intelligence and Research team) that EFF researchers also encountered primarily on Android TV set-top boxes. However, the malware has impacted tablets, digital projectors, aftermarket vehicle infotainment units, picture frames, and other types of IoT devices.
One goal of this malware is to create a network proxy on the devices of unsuspecting buyers, potentially turning those devices into hubs for various criminal activities and putting their owners at risk from authorities. This malware is particularly insidious, coming pre-installed out of the box from major online retailers such as Amazon and AliExpress. If you search “Android TV Box” on Amazon right now, many of the same impacted models are still listed for sale by sellers of opaque origin. The facilitation of these sales even led us to write an open letter to the FTC, urging it to take action against resellers.
The FBI listed some indicators of compromise (IoCs) in the PSA so consumers can tell if they were impacted. But the average person isn’t running network detection infrastructure in their home, and cannot hope to understand which IoCs determine whether their devices generate “unexplained or suspicious Internet traffic.” Here, we attempt to give more comprehensive background information about these IoCs. If you find any of these on devices you own, we encourage you to follow through by contacting the FBI's Internet Crime Complaint Center (IC3) at www.ic3.gov.
The FBI lists these IoCs:
- The presence of suspicious marketplaces where apps are downloaded.
- Requiring Google Play Protect settings to be disabled.
- Generic TV streaming devices advertised as unlocked or capable of accessing free content.
- IoT devices advertised from unrecognizable brands.
- Android devices that are not Play Protect certified.
- Unexplained or suspicious Internet traffic.
The following adds context to the IoCs above, along with some additional indicators we have seen in our research.
Play Protect Certified
“Android devices that are not Play Protect certified” refers to any device brand or partner not listed here: https://www.android.com/certified/partners/. Google subjects devices to compatibility and security tests as criteria for inclusion in the Play Protect program, though the list’s full criteria are not made transparent outside of Google. The list does change, as we saw when the tablet brand we researched was de-listed, and it includes international brands and partners as well. This IoC also encompasses “devices advertised from unrecognizable brands.”
Outdated Operating Systems
Another issue we saw was severely outdated Android versions. For context, Android 16 has just started rolling out, yet Android 9 through 12 appeared to be the most common versions in routine use. This may be a result of “copied homework” from older legitimate Android builds. These devices often ship with their own update software, which can present a problem on its own: in addition to whatever it downloads and updates on the device, it can deliver second-stage payloads that further infect it.
You can check which version of Android you have by going to Settings and searching “Android version”.
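If you are comfortable with a command line, the same information, plus the device brand and model discussed below, can be read programmatically. The sketch below is a minimal example that assumes the Android Debug Bridge (adb) is installed on your computer and USB debugging is enabled on the device; it queries standard Android build properties.

```python
# Query basic device info over adb (requires adb installed and USB debugging enabled).
import subprocess

PROPS = {
    "Android version": "ro.build.version.release",
    "Brand": "ro.product.brand",
    "Model": "ro.product.model",
    "Build fingerprint": "ro.build.fingerprint",
}

def getprop(prop: str) -> str:
    out = subprocess.run(
        ["adb", "shell", "getprop", prop],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    for label, prop in PROPS.items():
        print(f"{label}: {getprop(prop)}")
```

The reported brand and model can then be checked against the Play Protect partner list above and the BADBOX 2.0 model list discussed below.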
Android App Marketplaces
We’ve previously argued how the availability of different app marketplaces leads to greater consumer choice, where users can choose alternatives even more secure than the Google Play Store. While this is true, the FBI’s warning about suspicious marketplaces is also prudent. Avoiding “downloading apps from unofficial marketplaces advertising free streaming content” is sound (if somewhat vague) advice for set-top boxes, yet this recommendation comes without further guidelines on how to identify which marketplaces might be suspicious for other Android IoT platforms. Best practice is to investigate any app stores used on Android devices separately, but to be aware that if a suspicious Android device is purchased, it can contain preloaded app stores that mimic the functionality of legitimate ones but also contain unwanted or malicious code.
Models Listed from the Badbox Report
We also recommend looking up device names and models that were listed in the BADBOX 2.0 report. We investigated the T95 models along with the other independent researchers who initially found this malware. Many model names can be grouped into families that share the same letters but differ in numbers; these operations iterate fast, but their naming conventions are often lazy in this respect. If you're not sure what model you own, you can usually find it listed on a sticker somewhere on the device. If that fails, you may be able to find it by pulling up the original receipt or looking through your order history.
A Note from Satori Researchers:
“Below is a list of device models known to be targeted by the threat actors. Not all devices of a given model are necessarily infected, but Satori researchers are confident that infections are present on some devices of the below device models:”
List of Potentially Impacted Models
Broader Picture: The Digital Divide
Unfortunately, the only way to be sure that an Android device from an unknown brand is safe is not to buy it in the first place. Though initiatives like the U.S. Cyber Trust Mark are welcome developments intended to encourage demand-side trust in vetted products, recent shake-ups in federal regulatory bodies mean the future of this assurance mark is unknown. As a result, those who face budget constraints and have trouble affording top-tier digital products for streaming content or other connected purposes may rely on cheaper imitation products that are not only rife with vulnerabilities but can even come preloaded with malware out of the box. This puts those buyers disproportionately at legal risk when their devices use their home internet connection as a proxy for nefarious or illegal purposes.
Cybersecurity and trust that the products we buy won’t be used against us is essential: not just for those that can afford name-brand digital devices, but for everyone. While we welcome the IoCs that the FBI has listed in its PSA, more must be done to protect consumers from a myriad of dangers that their devices expose them to.
Accelerating hardware development to improve national security and innovation
Modern fighter jets contain hundreds or even thousands of sensors. Some of those sensors collect data every second, others every nanosecond. For the engineering teams building and testing those jets, all those data points are hugely valuable — if they can make sense of them.
Nominal is an advanced software platform made for engineers building complex systems ranging from fighter jets to nuclear reactors, satellites, rockets, and robots. Nominal’s flagship product, Nominal Core, helps teams organize, visualize, and securely share data from tests and operations. The company’s other product, Nominal Connect, helps engineers build custom applications for automating and syncing their hardware systems.
“It’s a very technically challenging problem to take the types of data that our customers are generating and get them into a single place where people can collaborate and get insights,” says Nominal co-founder Jason Hoch ’13. “It’s hard because you’re dealing with a lot of different data sources, and you want to be able to correlate those sources and apply mathematical formulas. We do that automatically.”
Hoch started Nominal with Cameron McCord ’13, SM ’14 and Bryce Strauss after the founders had to work with generic data tools or build their own solutions at places like Lockheed Martin and Anduril. Today, Nominal is working with organizations in aerospace, defense, robotics, manufacturing, and energy to accelerate the development of products critical for applications in U.S. national security and beyond.
“We built Nominal to take the best innovations in software and data technology and tailor them to the workflows that engineers go through when building and testing hardware systems,” McCord says. “We want to be the data and software backbone across all of these types of organizations.”
Accelerating hardware development
Hoch and McCord met during their first week at MIT and joined the same fraternity as undergraduates. Hoch double majored in mathematics and computer science and engineering, and McCord participated in the Navy Reserve Officers’ Training Corps (NROTC) while majoring in physics and nuclear science and engineering.
“MIT let me flex my technical skills, but I was also interested in the broader implications of technology and national security,” McCord says. “It was an interesting balance where I was learning the hardcore engineering skills, but always having a wider aperture to understand how the technology I was learning about was going to impact the world.”
Following MIT, McCord spent eight years in the Navy before working at the defense technology company Anduril, where he was charged with building the software systems to test different products. Hoch also worked at the intelligence and defense-oriented software company Palantir.
McCord met Strauss, who had worked as an engineer at Lockheed Martin, while the two were at Harvard Business School. The eventual co-founders realized they had each struggled with software during complex hardware development projects, and set out to build the tools they wished they’d had.
At the heart of Nominal’s platform is a unified database that can connect and organize hundreds of data sources in real-time. Nominal’s system allows engineers to search through or visualize that information, helping them spot trends, catch critical events, and investigate anomalies — what Nominal’s team describes as learning the rules governing complex systems.
“We’re trying to get answers to engineers so they understand what’s happening and can keep projects moving forward,” says Strauss. “Testing and validating these systems are fundamental bottlenecks for hardware progress. Our platform helps engineers answer questions like, ‘When we made a 30-degree turn at 16,000 feet, what happened to the engine’s temperature, and how does that compare to what happened yesterday?’”
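Nominal’s own implementation is not public, but the correlate-then-query workflow Strauss describes can be sketched in a few lines of pandas; the file names, column names, and thresholds below are hypothetical stand-ins for real test telemetry.

```python
# Hypothetical sketch: align two sensor streams, then query a maneuver window.
import pandas as pd

# Two streams logged at different rates (hypothetical CSVs with a 'time' column).
attitude = pd.read_csv("attitude.csv", parse_dates=["time"])  # bank angle, altitude
engine = pd.read_csv("engine.csv", parse_dates=["time"])      # engine temperature

# Align the streams on the nearest earlier timestamp.
merged = pd.merge_asof(
    attitude.sort_values("time"),
    engine.sort_values("time"),
    on="time",
    direction="backward",
)

# "When we made a 30-degree turn at 16,000 feet, what happened to engine temperature?"
window = merged[
    (merged["bank_angle_deg"].abs() >= 30)
    & (merged["altitude_ft"].between(15_500, 16_500))
]
print(window["engine_temp_c"].describe())
```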
By automating tasks like data stitching and visualization, Nominal’s platform helps accelerate post-test analysis and development processes for complex systems. And because the platform is cloud-hosted, engineers can easily share visualizations and other dynamic assets with members of their team as opposed to making static reports, allowing more people in an organization to interact directly with the data.
From satellites to drones, robots to rockets
Nominal recently announced a $75 million Series B funding round, led by Sequoia Capital, to accelerate its growth.
“We’ll use the funds to accelerate product roadmaps for our existing products, launch new products across the hardware test stack, and more than double our team,” says McCord.
Today, aerospace customers are using Nominal’s platform to monitor their assets in orbit. Manufacturers are using Nominal to make sure their components work as expected before they’re integrated into larger systems. Nuclear fusion companies are using Nominal to understand when their parts might fail due to heat.
“The products we’ve built are transferrable,” Hoch says. “It doesn’t matter if you’re building a nuclear fusion reactor or a satellite, those teams can benefit from the Nominal tool chain.”
Ultimately the founders believe the platform helps create better products by enabling a data-driven, iterative design process more commonly seen in the software development industry.
“The concept of continuous integration and development in software revolutionized the industry 20 years ago. Before that, it was common to build software in large, slow batches – developing for months, then testing and releasing all at once,” Strauss explains. “We’re bringing continuous testing to hardware. It’s about constantly creating that feedback loop to improve performance. It’s a new paradigm for how hardware is built. We’ve seen companies like SpaceX do this well to move faster and outpace the competition. Now, that approach is available to everyone.”
From MIT, an instruction manual for turning research into startups
Since MIT opened the first-of-its-kind venture studio within a university in 2019, it has demonstrated how a systemic process can help turn research into impactful ventures.
Now, MIT Proto Ventures is launching the “R&D Venture Studio Playbook,” a resource to help universities, national labs, and corporate R&D offices establish their own in-house venture studios. The online publication offers a comprehensive framework for building ventures from the ground up within research environments.
“There is a huge opportunity cost to letting great research sit idle,” says Fiona Murray, associate dean for innovation at the MIT Sloan School of Management and a faculty director for Proto Ventures. “The venture studio model makes research systematic, rather than messy and happenstance.”
Bigger than MIT
The new playbook arrives amid growing national interest in revitalizing the United States’ innovation pipeline — a challenge underscored by the fact that just a fraction of academic patents ever reach commercialization.
“Venture-building across R&D organizations, and especially within academia, has been based on serendipity,” says MIT Professor Dennis Whyte, a faculty director for Proto Ventures who helped develop the playbook. “The goal of R&D venture studios is to take away the aspect of chance — to turn venture-building into a systemic process. And this is something not just MIT needs; all research universities and institutions need it.”
Indeed, MIT Proto Ventures is actively sharing the playbook with peer institutions, federal agencies, and corporate R&D leaders seeking to increase the translational return on their research investments.
“We’ve been following MIT’s Proto Ventures model with the vision of delivering new ventures that possess both strong tech push and strong market pull,” says Mark Arnold, associate vice president of Discovery to Impact and managing director of Texas startups at The University of Texas at Austin. “By focusing on market problems first and creating ventures with a supportive ecosystem around them, universities can accelerate the transition of ideas from the lab into real-world solutions.”
What’s in the playbook
The playbook outlines the venture studio model process followed by MIT Proto Ventures. MIT’s venture studio embeds full-time entrepreneurial scientists — called venture builders — inside research labs. These builders work shoulder-to-shoulder with faculty and graduate students to scout promising technologies, validate market opportunities, and co-create new ventures.
“We see this as an open-source framework for impact,” says MIT Proto Ventures Managing Director Gene Keselman. “Our goal is not just to build startups out of MIT — it’s to inspire innovation wherever breakthrough science is happening.”
The playbook was developed by the MIT Proto Ventures team — including Keselman, venture builders David Cohen-Tanugi and Andrew Inglis, and faculty leaders Murray, Whyte, Andrew Lo, Michael Cima, and Michael Short.
“This problem is universal, so we knew if it worked there’d be an opportunity to write the book on how to build a translational engine,” Keselman said. “We’ve had enough success now to be able to say, ‘Yes, this works, and here are the key components.’”
In addition to detailing core processes, the playbook includes case studies, sample templates, and guidance for institutions seeking to tailor the model to fit their unique advantages. It emphasizes that building successful ventures from R&D requires more than mentorship and IP licensing — it demands deliberate, sustained focus, and a new kind of translational infrastructure.
How it works
A key part of MIT’s venture studio is structuring efforts into distinct tracks or problem areas — MIT Proto Ventures calls these channels. Venture builders work in a single track that aligns with their expertise and interest. For example, Cohen-Tanugi is embedded in the MIT Plasma Science and Fusion Center, working in the Fusion and Clean Energy channel. His first two successes have been a venture using superconducting magnets for in-space propulsion and a deep-tech startup improving power efficiency in data centers.
“This playbook is both a call to action and a blueprint,” says Cohen-Tanugi, lead author of the playbook. “We’ve learned that world-changing inventions often remain on the lab bench not because they lack potential, but because no one is explicitly responsible for turning them into businesses. The R&D venture studio model fixes that.”
Four from MIT named 2025 Goldwater Scholars
Four MIT rising seniors have been selected to receive a 2025 Barry Goldwater Scholarship: Avani Ahuja and Jacqueline Prawira from the School of Engineering, and Julianna Lian and Alex Tang from the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 441 were selected.
The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
Avani Ahuja, a mechanical engineering and electrical engineering major, conducts research in the Conformable Decoders group, where she is focused on developing a “wearable conformable breast ultrasound patch” that makes ultrasounds for breast cancer more accessible.
“Doing research in the Media Lab has had a huge impact on me, especially in the ways that we think about inclusivity in research,” Ahuja says.
In her research group, Ahuja works under Canan Dagdeviren, the LG Career Development Professor of Media Arts and Sciences. Ahuja plans to pursue a PhD in electrical engineering. She aspires to conduct research in electromechanical systems for women’s health applications and teach at the university level.
“I want to thank Professor Dagdeviren for all her support. It’s an honor to receive this scholarship, and it’s amazing to see that women’s health research is getting recognized in this way,” Ahuja says.
Julianna Lian studies mechanochemistry, organic chemistry, and polymer chemistry in the lab of Professor Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry. In addition to her studies, she serves the MIT community as an emergency medical technician (EMT) with MIT Emergency Medical Services, is a member of MIT THINK, and is a ClubChem mentorship chair.
“Receiving this award has been a tremendous opportunity to not only reflect on how much I have learned, but also on the many, many people I have had the chance to learn from,” says Lian. “I am deeply grateful for the guidance, support, and encouragement of these teachers, mentors, and friends. And I am excited to carry forward the lasting curiosity and excitement for chemistry that they have helped inspire in me.”
After graduation, Lian plans to pursue a PhD in organic chemistry, conduct research at the interface of synthetic chemistry and materials science aided by computation, and teach at the university level.
Jacqueline Prawira, a materials science and engineering major, joined the Center of Decarbonization and Electrification of Industry as a first-year Undergraduate Research Opportunities Program student and became a co-inventor on a patent and a research technician at spinout company Rock Zero. She has also worked in collaboration with Indigenous farmers and Diné College students on the Navajo Nation.
“I’ve become significantly more cognizant of how I listen to people and stories, the tangled messiness of real-world challenges, and the critical skills needed to tackle complex sustainability issues,” Prawira says.
Prawira is mentored by Yet-Ming Chiang, professor of materials science and engineering. Her career goals are to pursue a PhD in materials science and engineering and to research sustainable materials and processes to solve environmental challenges and build a sustainable society.
“Receiving the prestigious title of 2025 Goldwater Scholar validates my current trajectory in innovating sustainable materials and demonstrates my growth as a researcher,” Prawira says. “This award signifies my future impact in building a society where sustainability is the norm, instead of just another option.”
Alex Tang studies the effects of immunotherapy and targeted molecular therapy on the tumor microenvironment in metastatic colorectal cancer patients. He is supervised by professors Jonathan Chen at Northwestern University and Nir Hacohen at the Broad Institute of MIT and Harvard.
“My mentors and collaborators have been instrumental to my growth since I joined the lab as a freshman. I am incredibly grateful for the generous mentorship and support of Professor Hacohen and Professor Chen, who have taught me how to approach scientific investigation with curiosity and rigor,” says Tang. “I’d also like to thank my advisor Professor Adam Martin and first-year advisor Professor Angela Belcher for their guidance throughout my undergraduate career thus far. I am excited to carry forward this work as I progress in my career.” Tang intends to pursue physician-scientist training following graduation.
The Scholarship Program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.
The tenured engineers of 2025
In 2025, MIT granted tenure to 11 faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Aeronautics and Astronautics, Biological Engineering, Chemical Engineering, Electrical Engineering and Computer Science (EECS) — which reports jointly to the School of Engineering and MIT Schwarzman College of Computing — Materials Science and Engineering, Mechanical Engineering, and Nuclear Science and Engineering.
“It is with great pride that I congratulate the 11 newest tenured faculty members in the School of Engineering. Their dedication to advancing their fields, mentoring future innovators, and contributing to a vibrant academic community is truly inspiring,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who will assume the title of MIT provost on July 1. “This milestone is not only a testament to their achievements, but a promise of even greater impact ahead.”
This year’s newly tenured engineering faculty include:
Bryan Bryson, the Phillip and Susan Ragon Career Development Professor in the Department of Biological Engineering, conducts research in infectious diseases and immunoengineering. He is interested in developing new tools to dissect the complex dynamics of bacterial infection at a variety of scales ranging from single cells to infected animals, sitting in both “reference frames” by taking both an immunologist’s and a microbiologist’s perspective.
Connor Coley is the Class of 1957 Career Development Professor and associate professor of chemical engineering, with a shared appointment in EECS. His research group develops new computational methods at the intersection of artificial intelligence and chemistry with relevance to small molecule drug discovery, chemical synthesis, and structure elucidation.
Mohsen Ghaffari is the Steven and Renee Finn Career Development Professor and an associate professor in EECS. His research explores the theory of distributed and parallel computation. He has done influential work on a range of algorithmic problems, including generic derandomization methods for distributed and parallel computing, improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation.
Rafael Gomez-Bombarelli, the Paul M. Cook Development Professor and associate professor of materials science and engineering, works at the interface between machine learning and atomistic simulations. He uses computational tools to tackle design of materials in complex combinatorial search spaces, such as organic electronic materials, energy storage polymers and molecules, and heterogeneous (electro)catalysts.
Song Han, an associate professor in EECS, is a pioneer in model compression and TinyML. He has innovated in key areas of pruning, quantization, parallelization, KV cache optimization, long-context learning, and multi-modal representation learning to minimize generative AI costs, and he designed the first hardware accelerator (EIE) to exploit weight sparsity.
Kaiming He, the Douglass Ross (1954) Career Development Professor of Software Technology and an associate professor in EECS, is best known for his work on deep residual networks (ResNets). His research focuses on building computer models that can learn representations and develop intelligence from and for the complex world, with the long-term goal of augmenting human intelligence with more capable artificial intelligence.
Phillip Isola, the Class of 1948 Career Development Professor and associate professor in EECS, studies computer vision, machine learning, and AI. His research aims to uncover fundamental principles of intelligence, with a particular focus on how models and representations of the world can be acquired through self-supervised learning, from raw sensory experience alone, and without the use of labeled data.
Mingda Li is the Class of 1947 Career Development Professor and an associate professor in the Department of Nuclear Science and Engineering. His research lies in characterization and computation.
Richard Linares is an associate professor in the Department of Aeronautics and Astronautics. His research focuses on astrodynamics, space systems, and satellite autonomy. Linares develops advanced computational tools and analytical methods to address challenges associated with space traffic management, space debris mitigation, and space weather modeling.
Jonathan Ragan-Kelley, an associate professor in EECS, has designed everything from tools for visual effects in movies to the Halide programming language that’s widely used in industry for photo editing and processing. His research focuses on high-performance computer graphics and accelerated computing, at the intersection of graphics with programming languages, systems, and architecture.
Arvind Satyanarayan is an associate professor in EECS. His research areas cover data visualization, human-computer interaction, and artificial intelligence and machine learning. He leads the MIT Visualization Group, which uses interactive data visualization as a petri dish to study intelligence augmentation — how computation can help amplify human cognition and creativity while respecting our agency.
Why Are Hundreds of Data Brokers Not Registering with States?
Written in collaboration with Privacy Rights Clearinghouse
Hundreds of data brokers have not registered with state consumer protection agencies. These findings come as more states are passing data broker transparency laws that require brokers to provide information about their business and, in some cases, give consumers an easy way to opt out.
In recent years, California, Texas, Oregon, and Vermont have passed data broker registration laws that require brokers to identify themselves to state regulators and the public. A new analysis by Privacy Rights Clearinghouse (PRC) and the Electronic Frontier Foundation (EFF) reveals that many data brokers registered in one state aren’t registered in others.
Of the companies registered in at least one state, 291 did not register in California, 524 did not register in Texas, 475 did not register in Oregon, and 309 did not register in Vermont. These numbers come from registry data analyzed in early April 2025.
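A simplified version of the comparison behind those counts, using hypothetical per-state registry exports and a deliberately naive name-matching step, might look like this.

```python
# Hypothetical sketch: count brokers registered somewhere but missing from each state.
import pandas as pd

STATES = ["california", "texas", "oregon", "vermont"]

def load_registry(state: str) -> set[str]:
    # Each hypothetical CSV has a 'company_name' column exported from the state registry.
    df = pd.read_csv(f"{state}_registry.csv")
    # Naive normalization; real matching must handle DBAs, suffixes, and renames.
    return set(df["company_name"].str.lower().str.strip())

registries = {state: load_registry(state) for state in STATES}
all_registered = set.union(*registries.values())

for state, registered in registries.items():
    missing = all_registered - registered
    print(f"{state.title()}: {len(missing)} brokers registered elsewhere but not here")
```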
PRC and EFF sent letters to state enforcement agencies urging them to investigate these findings. More investigation by states is needed to determine whether these registration discrepancies reflect widespread noncompliance, gaps and definitional differences in the various state laws, or some other explanation.
New data broker transparency laws are an essential first step to reining in the data broker industry. This is an ecosystem in which your personal data taken from apps and other web services can be bought and sold largely without your knowledge. The data can be highly sensitive like location information, and can be used to target you with ads, discriminate against you, and even enhance government surveillance. The widespread sharing of this data also makes it more susceptible to data breaches. And its easy availability allows personal data to be obtained by bad actors for phishing, harassment, or stalking.
Consumers need robust deletion mechanisms to remove their data stored and sold by these companies. But the potential registration gaps we identified threaten to undermine such tools. California’s Delete Act will soon provide consumers with an easy tool to delete their data held by brokers—but it can only work if brokers register. California has already brought a handful of enforcement actions against brokers who failed to register under that law, and such compliance efforts are becoming even more critical as deletion mechanisms come online.
It is important to understand the scope of our analysis.
This analysis only includes companies that registered in at least one state. It does not capture data brokers that completely disregard state laws by failing to register in any state. A total of 750 data brokers have registered in at least one state. While harder to find, shady data brokers who have failed to register anywhere should remain a primary enforcement target.
This analysis also does not claim or prove that any of the data brokers we found broke the law. While the definition of “data broker” is similar across states, there are variations that could require a company to register in one state but not another. To take one example, a data broker registered in Texas that only brokers the data of Texas residents would not be legally required to register in California. To take another, a data broker that registered with Vermont in 2020 but has since changed its business model and is no longer a broker would not be required to register in 2025. More detail on variations in data broker laws is outlined in our letters to regulators.
States should investigate compliance with data broker registration requirements, enforce their laws, and plug any loopholes. Ultimately, consumers deserve protections regardless of where they reside, and Congress should also work to pass baseline federal data broker legislation that minimizes collection and includes strict use and disclosure limits, transparency obligations, and consumer rights.