Feed aggregator
Judge Rejects Government’s Attempt to Dismiss EFF Lawsuit Against OPM, DOGE, and Musk
NEW YORK—A lawsuit seeking to stop the U.S. Office of Personnel Management (OPM) from disclosing tens of millions of Americans’ private, sensitive information to Elon Musk’s “Department of Government Efficiency” (DOGE) can continue, a federal judge ruled Thursday.
Judge Denise L. Cote of the U.S. District Court for the Southern District of New York partially rejected the defendants’ motion to dismiss the lawsuit, which was filed Feb. 11 on behalf of two labor unions and individual current and former government workers across the country. This decision is a victory: The court agreed that the claims that OPM illegally disclosed highly personal records of millions of people to DOGE agents can move forward with the goal of stopping that ongoing disclosure and requiring that any shared information be returned.
Cote ruled current and former federal employees "may pursue their request for injunctive relief under the APA [Administrative Procedure Act]. ... The defendants’ Kafkaesque argument to the contrary would deprive the plaintiffs of any recourse under the law."
"The complaint plausibly alleges that actions by OPM were not representative of its ordinary day-to-day operations but were, in sharp contrast to its normal procedures, illegal, rushed, and dangerous,” the judge wrote.
The Court added: “The complaint adequately pleads that the DOGE Defendants 'plainly and openly crossed a congressionally drawn line in the sand.'"
OPM maintains databases of highly sensitive personal information about tens of millions of federal employees, retirees, and job applicants. The lawsuit by EFF, Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to DOGE agents in violation of the federal Privacy Act of 1974, a watershed anti-surveillance statute that prevents the federal government from abusing our personal information.
The lawsuit’s union plaintiffs are the American Federation of Government Employees AFL-CIO and the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO.
“Today’s legal victory sends a crystal-clear message: Americans’ private data stored with the government isn't the personal playground of unelected billionaires,” said AFGE National President Everett Kelley. “Elon Musk and his DOGE cronies have no business rifling through sensitive data stored at OPM, period. AFGE and our allies fought back – and won – because we will not compromise when it comes to protecting the privacy and security of our members and the American people they proudly serve.”
As the federal government is the nation’s largest employer, the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs; nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records.
OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure.
With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. The law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President. The judge ruled that the claims seeking an injunction for Privacy Act violations can go forward under the Administrative Procedure Act, but not directly under the Privacy Act.
For the order denying the motion to dismiss: https://www.eff.org/document/afge-v-opm-opinion-and-order-motion-dismiss
For the complaint: https://www.eff.org/document/afge-v-opm-complaint
For more about the case: https://www.eff.org/cases/american-federation-government-employees-v-us-office-personnel-management
Contacts
Electronic Frontier Foundation: press@eff.org
Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com
EFF Joins Amicus Brief Supporting Perkins Coie Law Firm Against Unconstitutional Executive Order
EFF has joined the American Civil Liberties Union and other legal advocacy organizations across the ideological spectrum in filing an amicus brief asking a federal judge to strike down President Donald Trump’s executive order targeting law firm Perkins Coie for its past work on voting rights lawsuits and its representation of the President’s prior political opponents.
As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including numerous legal challenges to federal government overreach, EFF unequivocally supports Perkins Coie’s challenge to this shocking, vindictive, and unconstitutional executive order. In punishing the law firm for its zealous advocacy on behalf of its clients, the March 6 order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie and the other targeted law firms that have chosen to fight back, along with their legal representatives.
“If allowed to stand, these pressure tactics will have broad and lasting impacts on Americans' ability to retain legal counsel in important matters, to arrange their business and personal affairs as they like, and to speak their minds,” our brief says.
Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect a confidence in both the judiciary to decide such matters fairly and justly, and the executive to abide by the court’s determination. They are a backstop against autocracy and a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803).
The executive order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs.
This is a fundamental abuse of executive power.
The constitutional problems are legion, but here are a few:
- The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velasquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545.
- The executive order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Lozman v. City of Riviera Beach, 585 U.S. 87, 90 (2018).
- The executive order violates fundamental precepts of separation of powers and the Fifth and Sixth Amendment rights of litigants to select the counsel of their choice. See United States v. Gonzalez-Lopez, 548 U.S. 140, 147–48 (2006).
An independent legal profession is a fundamental component of democracy and the rule of law. As a nonprofit legal organization that frequently sues the federal government, we well understand the value of this bedrock principle and how it – and First Amendment rights more broadly – are threatened by President Trump’s executive orders targeting Perkins Coie and other law firms. It is especially important that the whole legal profession speak out against the executive orders in light of the capitulation by a few large law firms.
The order must be swiftly nullified by the U.S. District Court for the District of Columbia, and must be uniformly vilified by the entire legal profession.
The ACLU’s press release with quotes from fellow amici can be found here.
Engineers develop a way to mass manufacture nanoparticles that deliver cancer drugs directly to tumors
Polymer-coated nanoparticles loaded with therapeutic drugs show significant promise for cancer treatment, including ovarian cancer. These particles can be targeted directly to tumors, where they release their payload while avoiding many of the side effects of traditional chemotherapy.
Over the past decade, MIT Institute Professor Paula Hammond and her students have created a variety of these particles using a technique known as layer-by-layer assembly. They’ve shown that the particles can effectively combat cancer in mouse studies.
To help move these nanoparticles closer to human use, the researchers have now come up with a manufacturing technique that allows them to generate larger quantities of the particles, in a fraction of the time.
“There’s a lot of promise with the nanoparticle systems we’ve been developing, and we’ve been really excited more recently with the successes that we’ve been seeing in animal models for our treatments for ovarian cancer in particular,” says Hammond, who is also MIT’s vice provost for faculty and a member of the Koch Institute for Integrative Cancer Research. “Ultimately, we need to be able to bring this to a scale where a company is able to manufacture these on a large level.”
Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Advanced Functional Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital and a visiting scientist at the Koch Institute, and Ezra Gordon ’24 are the lead authors of the paper. Heikyung Suh, an MIT research technician, is also an author.
A streamlined process
More than a decade ago, Hammond’s lab developed a novel technique for building nanoparticles with highly controlled architectures. This approach allows layers with different properties to be laid down on the surface of a nanoparticle by alternately exposing the surface to positively and negatively charged polymers.
Each layer can be embedded with drug molecules or other therapeutics. The layers can also carry targeting molecules that help the particles find and enter cancer cells.
Using the strategy that Hammond’s lab originally developed, one layer is applied at a time, and after each application, the particles go through a centrifugation step to remove any excess polymer. This is time-intensive and would be difficult to scale up to large-scale production, the researchers say.
More recently, a graduate student in Hammond’s lab developed an alternative approach to purifying the particles, known as tangential flow filtration. While this streamlined the process, it was still limited by manufacturing complexity and a maximum production scale.
“Although the use of tangential flow filtration is helpful, it’s still a very small-batch process, and a clinical investigation requires that we would have many doses available for a significant number of patients,” Hammond says.
To create a larger-scale manufacturing method, the researchers used a microfluidic mixing device that allows them to sequentially add new polymer layers as the particles flow through a microchannel within the device. For each layer, the researchers can calculate exactly how much polymer is needed, which eliminates the need to purify the particles after each addition.
“That is really important because separations are the most costly and time-consuming steps in these kinds of systems,” Hammond says.
This strategy eliminates the need for manual polymer mixing, streamlines production, and integrates good manufacturing practice (GMP)-compliant processes. The FDA’s GMP requirements ensure that products meet safety standards and can be manufactured in a consistent fashion, which would be highly challenging and costly using the previous step-wise batch process. The microfluidic device that the researchers used in this study is already used for GMP manufacturing of other types of nanoparticles, including mRNA vaccines.
“With the new approach, there’s much less chance of any sort of operator mistake or mishaps,” Pires says. “This is a process that can be readily implemented in GMP, and that’s really the key step here. We can create an innovation within the layer-by-layer nanoparticles and quickly produce it in a manner that we could go into clinical trials with.”
Scaled-up production
Using this approach, the researchers can generate 15 milligrams of nanoparticles (enough for about 50 doses) in just a few minutes, while the original technique would take close to an hour to create the same amount. This could enable the production of more than enough particles for clinical trials and patient use, the researchers say.
“To scale up with this system, you just keep running the chip, and it is much easier to produce more of your material,” Pires says.
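Rough arithmetic from the reported figures gives a sense of the gain (the "few minutes" figure is vague in the article; a five-minute run time is an assumption for illustration only):

```python
# Reported figures from the study's production comparison.
mg_per_run = 15            # milligrams of nanoparticles per run
doses_per_run = 50         # approximate doses in that batch
new_minutes = 5            # assumed reading of "a few minutes" (not stated exactly)
old_minutes = 60           # "close to an hour" for the same amount

mg_per_dose = mg_per_run / doses_per_run            # ≈ 0.3 mg per dose
speedup = old_minutes / new_minutes                 # ≈ 12x faster under the assumption
doses_per_hour = doses_per_run * (60 / new_minutes) # ≈ 600 doses per hour of chip time

print(mg_per_dose, speedup, doses_per_hour)
```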
To demonstrate their new production technique, the researchers created nanoparticles coated with a cytokine called interleukin-12 (IL-12). Hammond’s lab has previously shown that IL-12 delivered by layer-by-layer nanoparticles can activate key immune cells and slow ovarian tumor growth in mice.
In this study, the researchers found that IL-12-loaded particles manufactured using the new technique performed similarly to the original layer-by-layer nanoparticles. And not only do these nanoparticles bind to cancer tissue, they show a unique ability to remain outside the cancer cells. This allows the nanoparticles to serve as markers on the cancer cells that activate the immune system locally in the tumor. In mouse models of ovarian cancer, this treatment delayed tumor growth and even produced cures.
The researchers have filed for a patent on the technology and are now working with MIT’s Deshpande Center for Technological Innovation in hopes of potentially forming a company to commercialize the technology. While they are initially focusing on cancers of the abdominal cavity, such as ovarian cancer, the work could also be applied to other types of cancer, including glioblastoma, the researchers say.
The research was funded by the U.S. National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Calyx Institute: A Case Study in Grassroots Innovation
Technologists play a huge role in building alternative tools and resources when our rights to privacy and security are undermined by governments and major corporations. This direct resistance ensures that even in the face of powerful adversaries, communities can find some safety and autonomy through community-built tools.
One of the most renowned names in this work is the Calyx Institute, a New York-based 501(c)(3) nonprofit founded by Nicholas Merrill after his successful and influential constitutional challenge to the National Security Letter (NSL) statute in the USA PATRIOT Act. Today Calyx’s mission is to defend digital privacy, advance connectivity, and strive for a future where everyone has access to the resources and tools they need to remain securely connected. Their work is made possible thanks to the generous donations of their over 12,000 grassroots members.
More recently, Calyx joined EFF’s network of grassroots organizations across the US, the Electronic Frontier Alliance (EFA). Members of the alliance are not-for-profit local organizations dedicated to EFA’s five guiding principles: privacy, free expression, access to knowledge, creativity, and security. Calyx has since been an exceptional ally, lifting up and collaborating with fellow members.
If you’re inspired by Calyx to start making a difference in your community, you can get started with our organizer toolkits. Once you’re ready, we hope you consider applying to join the alliance.
We corresponded with Calyx over email to discuss the group's ambitious work, and what the future holds for Calyx. Here are excerpts from our conversation:
Thanks for chatting with us. To get started, could you tell us a bit about Calyx’s current work?

Calyx focuses on three areas: (1) developing a privacy-respecting software ecosystem, (2) bridging the digital divide with affordable internet access, and (3) sustaining our community through grants, research, and educational initiatives.
We build and maintain a digital ecosystem of free and open-source software (FOSS) centering on CalyxOS, an Android operating system that encrypts communications, combats invasive metadata collection, and protects users from geolocation tracking. The Calyx Internet Membership Program offers mobile hotspots so people have a way to stay connected despite limited resources or a lack of viable alternatives. Finally, Calyx actively engages with diverse stakeholder groups to build a shared understanding of privacy, expand digital-security literacy, and provide grants that directly support aligned organizations. By partnering with our peers, funders, and service providers, we hope to drive collective action toward a privacy- and rights-respecting future of technology.
Calyx projects work with a wide range of technologies. What are some barriers Calyx runs into in this work?

Our biggest challenge is one shared by many tech communities, particularly FOSS advocates: it is difficult to balance privacy and security with usability in tool development. On the one hand, the current data-mining business model of the tech sector makes it extremely hard to provide FOSS solutions to proprietary tech while keeping the tool intuitive and easy to use. On the other, there is a general lack of momentum for funding and growing an alternative digital ecosystem.
As a result, many digital rights enthusiasts are left with scarce resources and a narrow space within which to work on technical solutions. We need more people to work together and collectively advocate for a privacy-respecting tech ecosystem that cares about all communities and does not marginalize anyone.
Take CalyxOS, for example. Before it became a tangible project, our founder Nick spent years thinking about an alternative mobile operating system that put privacy first. Back in 2012, Nick spoke to Moxie Marlinspike, the creator of the Signal messaging app, about his idea. Moxie shared several valid concerns that almost led Nick to stop working on it. Fortunately, these warnings, which came from Moxie’s experience and success with Signal, made Nick even more determined, and he recruited an expert global team to help realize his idea.
What do you see as the role of technologists in defending civil liberties with local communities?

Technologists are enablers—they build tools and technical infrastructures, fundamental parts of the digital ecosystem within which people exercise their rights and enjoy their lives. A healthy digital ecosystem consists of technologies that liberate people. It is an arena where people willingly and actively connect and share their expertise, confident in the shared protocols that protect everyone’s rights and dignity. That is why Calyx builds and advocates for people-centered, privacy-focused FOSS tools.
How has Calyx supported folks in NYC? What have you learned from it?

It’s a real privilege to be part of the NYC tech community, which has such a wealth of technologists, policy experts, human rights watchdogs, and grassroots activists. In recent years, we joined efforts led by multiple networks and organizations to mobilize against unjustifiable mass surveillance and other digital threats faced by millions of people of color, immigrants, and other underrepresented groups.
We’re particularly proud of the support we provided to another EFA member, Surveillance Technology Oversight Project, on the Ban the Scan campaign to ban facial recognition in NYC, and CryptoHarlem to sustain their work bringing digital privacy and cybersecurity education to communities in Harlem and beyond. Most recently, we funded Sunset Spark—a small nonprofit offering free education in science and technology in the heart of Brooklyn—to develop a multipurpose curriculum focused on privacy, internet infrastructure, and the roles of the public and private sectors in our digital world.
These experiences deeply inspired us to shape a funding philosophy that centers the needs of organizations and groups with limited resources, helps local communities break barriers and build capacity, and grows reciprocal relationships between each member of the community.
You mentioned a grantmaking program, which is a really unique project for an EFA member. Could you tell us a bit about your theory of change for the program?

Since 2020, the Calyx Institute has been funding the development of digital privacy and security tools, research on mass surveillance systems, and training efforts to equip people with the knowledge and tools they need to protect their right to privacy and connectivity. In 2022, Calyx launched the Fusion Center Research Fund to aid investigations into law enforcement harvesting of personal data through intelligence-sharing centers. This effort, with nearly $200,000 disbursed to grantees, helped reveal the deleterious impact of surveillance technology on privacy and freedom of expression.
These efforts have led to the Sepal Fund, Calyx’s pilot program to offer small groups unrestricted and holistic grants. This program will provide five organizations, collectives, or projects a yearly grant of up to $50,000 for a total of three years. In addition, we will provide our grantees opportunities for professional development, as well as other resources. Through this program, we hope to sustain and elevate research, tool development, and education that will support digital privacy and defend internet freedom.
Could you tell us a bit about how people can get involved?
All our projects are, at their core, community projects, and we welcome insights and involvement from anyone to whom our work is relevant. CalyxOS offers a variety of ways to connect, including a CalyxOS Matrix room and GitLab repository where users and programmers interact in real time to troubleshoot and discuss improvements. Part of making CalyxOS accessible is ensuring that it’s as widely available as possible, so anyone who would like to be part of that translation and localization effort should visit our weblate site.
What does the future look like for Calyx?

We are hoping that the future holds big things for us, such as CalyxOS builds for more affordable and globally available mobile devices, so that people in different locations with varied resources can equally enjoy the right to privacy. We are also looking forward to updating our visual communication—we have been “substance over style” for so long that it will be exciting to see how a refreshed look will help us reach new audiences.
Finally, what’s your “moonshot”? What’s the ideal future Calyx wants to build?

The Calyx dream is accessible digital privacy, security, and connectivity for all, regardless of budget or tech background, centering communities that are most in need.
We want a future where everyone has access to the resources and tools they need to remain securely connected. To get there, we’ll need to work on building a lot of capacity, both technological and informational. Great tools can only fulfill their purpose if people know why and how to use them. Creating those tools and spreading the word about them requires collaboration, and we are proud to be working toward that goal alongside all the organizations that make up the EFA.
Our thanks to the Calyx Institute for their continued efforts to build private and secure tools for targeted groups, in New York City and across the globe. You can find and support other Electronic Frontier Alliance affiliated groups near you by visiting eff.org/fight.
Web 3.0 Requires Data Integrity
If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and artificial intelligent agents, integrity will be paramount.
What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure...
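The tamper-detection side of integrity is commonly implemented with cryptographic hashes. As a minimal sketch (not drawn from the excerpt above), a SHA-256 digest acts as a fingerprint that changes if even one byte of the data does:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a SHA-256 hex digest serving as an integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

record = b"salary: 91,000; union member: yes"
fingerprint = digest(record)

# Later, verify the record has not been modified in storage or transit.
assert digest(record) == fingerprint

# Any change, however small, yields a different digest, exposing tampering.
tampered = b"salary: 910,000; union member: yes"
assert digest(tampered) != fingerprint
```

A hash alone only detects modification; the broader integrity properties described above (provenance, completeness, accuracy at collection time) require process and design, not just cryptography.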
Trump killed US climate aid. Here’s what it means for the world.
HHS extreme heat programs hollowed out by Trump staff cuts
Minority advocate warns against merging the two US carbon markets
Judge grills Trump admin lawyer on canceled climate grants
Tesla’s plunging sales and Trump’s tariffs mark a day of EV turmoil
Duffy pushes for faster permitting in next highway bill
Climate finance finds itself at a pivotal moment in history
Once-rare fungal diseases kill millions in an unprepared world
Storms batter Greek islands for second day, with Crete hardest hit
High waves cause damage on Sydney waterfront
Enhance responsible governance to match the scale and pace of marine–climate interventions
Nature Climate Change, Published online: 03 April 2025; doi:10.1038/s41558-025-02292-3
Oceans are on the frontline of an array of new marine–climate actions that are both poorly understood and under-regulated. Development and deployment of these interventions is outpacing governance readiness to address risks and ensure responsible transformation and effective action.

Novel marine-climate interventions hampered by low consensus and governance preparedness
Nature Climate Change, Published online: 03 April 2025; doi:10.1038/s41558-025-02291-4
Oceans are on the front line of new planned climate actions, but understanding of novel marine-climate intervention development and deployment remains low. Here a survey among intervention practitioners allows identification of science and governance gaps for marine-climate interventions.

Vana is letting users own a piece of the AI models trained on their data
In February 2024, Reddit struck a $60 million deal with Google to let the search giant use data on the platform to train its artificial intelligence models. Notably absent from the discussions were Reddit users, whose data were being sold.
The deal reflected the reality of the modern internet: Big tech companies own virtually all our online data and get to decide what to do with that data. Unsurprisingly, many platforms monetize their data, and the fastest-growing way to accomplish that today is to sell it to AI companies, who are themselves massive tech companies using the data to train ever more powerful models.
The decentralized platform Vana, which started as a class project at MIT, is on a mission to give power back to the users. The company has created a fully user-owned network that allows individuals to upload their data and govern how they are used. AI developers can pitch users on ideas for new models, and if the users agree to contribute their data for training, they get proportional ownership in the models.
The idea is to give everyone a stake in the AI systems that will increasingly shape our society while also unlocking new pools of data to advance the technology.
“This data is needed to create better AI systems,” says Vana co-founder Anna Kazlauskas ’19. “We’ve created a decentralized system to get better data — which sits inside big tech companies today — while still letting users retain ultimate ownership.”
From economics to the blockchain
A lot of high school students have pictures of pop stars or athletes on their bedroom walls. Kazlauskas had a picture of former U.S. Treasury Secretary Janet Yellen.
Kazlauskas came to MIT sure she’d become an economist, but she ended up being one of five students to join the MIT Bitcoin club in 2015, and that experience led her into the world of blockchains and cryptocurrency.
From her dorm room in MacGregor House, she began mining the cryptocurrency Ethereum. She even occasionally scoured campus dumpsters in search of discarded computer chips.
“It got me interested in everything around computer science and networking,” Kazlauskas says. “That involved, from a blockchain perspective, distributed systems and how they can shift economic power to individuals, as well as artificial intelligence and econometrics.”
Kazlauskas met Art Abal, who was then attending Harvard University, in the former Media Lab class Emergent Ventures, and the pair decided to work on new ways to obtain data to train AI systems.
“Our question was: How could you have a large number of people contributing to these AI systems using more of a distributed network?” Kazlauskas recalls.
Kazlauskas and Abal were trying to address the status quo, where most models are trained by scraping public data on the internet. Big tech companies often also buy large datasets from other companies.
The founders’ approach evolved over the years and was informed by Kazlauskas’ experience working at the financial blockchain company Celo after graduation. But Kazlauskas credits her time at MIT with helping her think about these problems, and the instructor for Emergent Ventures, Ramesh Raskar, still helps Vana think about AI research questions today.
“It was great to have an open-ended opportunity to just build, hack, and explore,” Kazlauskas says. “I think that ethos at MIT is really important. It’s just about building things, seeing what works, and continuing to iterate.”
Today Vana takes advantage of a little-known law that allows users of most big tech platforms to export their data directly. Users can upload that information into encrypted digital wallets in Vana and disburse it to train models as they see fit.
AI engineers can suggest ideas for new open-source models, and people can pool their data to help train the model. In the blockchain world, the data pools are called data DAOs, which stands for decentralized autonomous organization. Data can also be used to create personalized AI models and agents.
In Vana, data are used in a way that preserves user privacy because the system doesn’t expose identifiable information. Once the model is created, users maintain ownership, so every time it’s used, they’re rewarded proportionally based on how much their data helped train it.
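The article does not specify Vana's actual reward formula, but a simple proportional-attribution scheme like the one described could be sketched as follows (all names and numbers hypothetical):

```python
def reward_split(contributions: dict[str, float], payout: float) -> dict[str, float]:
    """Split a model-usage payout among data contributors in proportion
    to how much data each supplied. A real system might instead weight
    data by its measured effect on model quality, not just volume."""
    total = sum(contributions.values())
    return {user: payout * amount / total for user, amount in contributions.items()}

# Three users pooled 60, 30, and 10 MB of data; the model earns 100 units.
shares = reward_split({"alice": 60, "bob": 30, "carol": 10}, payout=100)
# shares == {"alice": 60.0, "bob": 30.0, "carol": 10.0}
```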
“From a developer’s perspective, now you can build these hyper-personalized health applications that take into account exactly what you ate, how you slept, how you exercise,” Kazlauskas says. “Those applications aren’t possible today because of those walled gardens of the big tech companies.”
Crowdsourced, user-owned AI
Last year, a machine-learning engineer proposed using Vana user data to train an AI model that could generate Reddit posts. More than 140,000 Vana users contributed their Reddit data, which contained posts, comments, messages, and more. Users decided on the terms under which the model could be used, and they maintained ownership of the model after it was created.
Vana has enabled similar initiatives with user-contributed data from the social media platform X; sleep data from sources like Oura rings; and more. There are also collaborations that combine data pools to create broader AI applications.
“Let’s say users have Spotify data, Reddit data, and fashion data,” Kazlauskas explains. “Usually, Spotify isn’t going to collaborate with those types of companies, and there’s actually regulation against that. But users can do it if they grant access, so these cross-platform datasets can be used to create really powerful models.”
Vana has over 1 million users and over 20 live data DAOs. More than 300 additional data pools have been proposed by users on Vana’s system, and Kazlauskas says many will go into production this year.
“I think there’s a lot of promise in generalized AI models, personalized medicine, and new consumer applications, because it’s tough to combine all that data or get access to it in the first place,” Kazlauskas says.
The data pools are allowing groups of users to accomplish something even the most powerful tech companies struggle with today.
“Today, big tech companies have built these data moats, so the best datasets aren’t available to anyone,” Kazlauskas says. “It’s a collective action problem, where my data on its own isn’t that valuable, but a data pool with tens of thousands or millions of people is really valuable. Vana allows those pools to be built. It’s a win-win: Users get to benefit from the rise of AI because they own the models. Then you don’t end up in a scenario where you have a single company controlling an all-powerful AI model. You get better technology, but everyone benefits.”
MIT welcomes 2025 Heising-Simons Foundation 51 Pegasi b Fellow Jess Speedie
The MIT School of Science welcomes Jess Speedie, one of eight recipients of the 2025 51 Pegasi b Fellowship. The announcement was made March 27 by the Heising-Simons Foundation.
The 51 Pegasi b Fellowship, named after the first exoplanet discovered orbiting a sun-like star, was established in 2017 to provide postdocs with the opportunity to conduct theoretical, observational, and experimental research in planetary astronomy.
Speedie, who expects to complete her PhD in astronomy at the University of Victoria, Canada, this summer, will be hosted by the Department of Earth, Atmospheric and Planetary Sciences (EAPS). She will be mentored by Kerr-McGee Career Development Professor Richard Teague as she uses a combination of observational data and simulations to study the birth of planets and the processes of planetary formation.
“The planetary environment is where all the good stuff collects … it has the greatest potential for the most interesting things in the universe to happen, such as the origin of life,” she says. “Planets, for me, are where the stories happen.”
Speedie’s work has focused on understanding “cosmic nurseries” and the detection and characterization of the youngest planets in the galaxy. Much of this work has made use of the Atacama Large Millimeter/submillimeter Array (ALMA), located in northern Chile. Made up of 66 parabolic dishes, ALMA studies the universe at radio wavelengths, and Speedie has developed a novel approach to finding signals of gravitational instability, a mechanism of planet formation, in protoplanetary disk data.
“One of the big, big questions right now in the community focused on planet formation is, where are the planets? It is that simple. We think they’re developing in these disks, but we’ve detected so few of them,” she says.
While working as a fellow, Speedie is aiming to develop an algorithm that carefully aligns and stacks a decade of ALMA observational data to correct for a blurring effect that happens when combining images captured at different times. Doing so should produce the sharpest, most sensitive images of early planetary systems to date.
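The align-and-stack idea can be illustrated with a toy integer-pixel version. This is a minimal sketch of the general shift-and-stack technique, not Speedie's algorithm: a real pipeline must estimate sub-pixel offsets from the data, weight epochs by sensitivity, and work with interferometric images.

```python
import numpy as np

def align_and_stack(images, offsets):
    """Shift each image by its (dy, dx) offset onto a common reference
    frame, then average, so a source smeared across epochs by pointing
    drift adds up coherently in the stack.

    images: sequence of 2-D arrays (one per observing epoch)
    offsets: per-image integer (dy, dx) shifts that undo the drift
    """
    aligned = [np.roll(np.asarray(img), off, axis=(0, 1))
               for img, off in zip(images, offsets)]
    # Averaging aligned epochs sharpens real sources and suppresses noise.
    return np.mean(np.stack(aligned), axis=0)
```

With correct offsets the stacked peak keeps its full amplitude, whereas naive averaging of misaligned epochs would smear it out.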
She is also interested in studying infant planets, especially ones that may be forming in disks around protoplanets, rather than stars. Modeling how these ingredient materials in orbit behave could give astronomers a way to measure the mass of young planets.
“What’s exciting is the potential for discovery. I have this sense that the universe as a whole is infinitely more creative than human minds — the kinds of things that happen out there, you can’t make that up. It’s better than science fiction,” she says.
The other 51 Pegasi b Fellows and their host institutions this year are Nick Choksi (Caltech), Yan Liang (Yale University), Sagnick Mukherjee (Arizona State University), Matthew Nixon (Arizona State University), Julia Santos (Harvard University), Nour Skaf (University of Hawaii), and Jerry Xuan (University of California at Los Angeles).
The fellowship provides up to $450,000 of support over three years for independent research, a generous salary and discretionary fund, mentorship at host institutions, an annual summit to develop professional networks and foster collaboration, and an option to apply for another grant to support a future position in the United States.
A flexible robot can help emergency responders search through rubble
When major disasters hit and structures collapse, people can become trapped under rubble. Extricating victims from these hazardous environments can be dangerous and physically exhausting. To help rescue teams navigate these structures, MIT Lincoln Laboratory, in collaboration with researchers at the University of Notre Dame, developed the Soft Pathfinding Robotic Observation Unit (SPROUT). SPROUT is a vine robot — a soft robot that can grow and maneuver around obstacles and through small spaces. First responders can deploy SPROUT under collapsed structures to explore, map, and find optimum ingress routes through debris.
"The urban search-and-rescue environment can be brutal and unforgiving, where even the most hardened technology struggles to operate. The fundamental way a vine robot works mitigates a lot of the challenges that other platforms face," says Chad Council, a member of the SPROUT team, which is led by Nathaniel Hanson. The program is conducted out of the laboratory's Human Resilience Technology Group.
First responders regularly integrate technology, such as cameras and sensors, into their workflows to understand complex operating environments. However, many of these technologies have limitations. For example, cameras specially built for search-and-rescue operations can only probe along a straight path inside a collapsed structure. If a team wants to search farther into a pile, they need to cut an access hole to reach the next area of the space. Robots are good for exploring on top of rubble piles, but are ill-suited for searching in tight, unstable structures and are costly to repair if damaged. The challenge that SPROUT addresses is how to get under collapsed structures using a low-cost, easy-to-operate robot that can carry cameras and sensors and traverse winding paths.
SPROUT is composed of an inflatable tube made of airtight fabric that unfurls from a fixed base. The tube inflates with air, and a motor controls its deployment. As the tube extends into rubble, it can flex around corners and squeeze through narrow passages. A camera and other sensors mounted to the tip of the tube image and map the environment the robot is navigating. An operator steers SPROUT with joysticks, watching a screen that displays the robot's camera feed. Currently, SPROUT can deploy up to 10 feet, and the team is working on expanding it to 25 feet.
When building SPROUT, the team overcame a number of challenges related to the robot's flexibility. Because the robot is made of a deformable material that bends at many points, determining and controlling the robot's shape as it unfurls through the environment is difficult — think of trying to control an expanding wiggly sprinkler toy. Pinpointing how to apply air pressure within the robot so that steering is as simple as pointing the joystick forward to make the robot move forward was essential for system adoption by emergency responders. In addition, the team had to design the tube to minimize friction while the robot grows and engineer the controls for steering.
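One common way to steer a pressure-driven soft robot is with actuation chambers spaced around the tube, and the joystick mapping described above can be caricatured under that assumption. This is an illustrative sketch, not SPROUT's control code: the three-chamber layout, base pressure, gain, and units are all invented for the example.

```python
import math

def steering_pressures(jx, jy, base=10.0, gain=5.0):
    """Map a joystick deflection to per-chamber pressure commands.

    jx, jy: joystick axes in [-1, 1]
    base:   neutral chamber pressure (illustrative units)
    gain:   how strongly deflection changes pressure
    Assumes three steering chambers spaced 120 degrees around the tube;
    each chamber is driven above or below the base pressure according to
    how well its direction aligns with the commanded direction.
    """
    angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # chamber directions
    magnitude = min(1.0, math.hypot(jx, jy))
    heading = math.atan2(jy, jx)
    return [base + gain * magnitude * math.cos(heading - a) for a in angles]
```

With a centered stick every chamber sits at the base pressure and the tube grows straight; pushing the stick to one side raises pressure in the aligned chamber and lowers it in the others, bending the tip, which is the "point the joystick where you want to go" behavior the team describes.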
While a teleoperated system is a good starting point for assessing the hazards of void spaces, the team is also finding new ways to apply robot technologies to the domain, such as using data captured by the robot to build maps of the subsurface voids. "Collapse events are rare but devastating events. In robotics, we would typically want ground truth measurements to validate our approaches, but those simply don't exist for collapsed structures," Hanson says. To solve this problem, Hanson and his team made a simulator that allows them to create realistic depictions of collapsed structures and develop algorithms that map void spaces.
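The void-mapping step can be reduced to a simple caricature: cells the robot's tip has passed through are, by definition, traversable void space. The grid representation and names below are assumptions for illustration, not the laboratory's mapping software, which builds on the simulator and algorithms described above.

```python
def voids_from_path(path, cell=0.1):
    """Convert a tip trajectory into a coarse map of known void space.

    path: iterable of (x, y, z) tip positions in meters
    cell: grid resolution in meters (illustrative choice)
    Returns the set of grid cells the robot has directly traversed.
    """
    return {tuple(int(c // cell) for c in p) for p in path}

# Two tip positions 10 cm apart fall in two adjacent 10 cm cells.
cells = voids_from_path([(0.05, 0.05, 0.05), (0.15, 0.05, 0.05)], cell=0.1)
```

A real subsurface map would fuse camera and sensor readings to label unvisited cells as void, rubble, or unknown; this sketch records only the directly observed free space.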
SPROUT was developed in collaboration with Margaret Coad, a professor at the University of Notre Dame and an MIT graduate. When looking for collaborators, Hanson — a graduate of Notre Dame — was already aware of Coad's work on vine robots for industrial inspection. Coad's expertise, together with the laboratory's experience in engineering, strong partnership with urban search-and-rescue teams, and ability to develop fundamental technologies and prepare them for transition to industry, "made this a really natural pairing to join forces and work on research for a traditionally underserved community," Hanson says. "As one of the primary inventors of vine robots, Professor Coad brings invaluable expertise on the fabrication and modeling of these robots."
Lincoln Laboratory tested SPROUT with first responders at the Massachusetts Task Force 1 training site in Beverly, Massachusetts. The tests allowed the researchers to improve the durability and portability of the robot and learn how to grow and steer the robot more efficiently. The team is planning a larger field study this spring.
"Urban search-and-rescue teams and first responders serve critical roles in their communities but typically have little-to-no research and development budgets," Hanson says. "This program has enabled us to push the technology readiness level of vine robots to a point where responders can engage with a hands-on demonstration of the system."
Sensing in constrained spaces is not a problem unique to disaster response communities, Hanson adds. The team envisions the technology being used in the maintenance of military systems or critical infrastructure with difficult-to-access locations.
The initial program focused on mapping void spaces, but future work aims to localize hazards and assess the viability and safety of operations through rubble. "The mechanical performance of the robots has an immediate effect, but the real goal is to rethink the way sensors are used to enhance situational awareness for rescue teams," says Hanson. "Ultimately, we want SPROUT to provide a complete operating picture to teams before anyone enters a rubble pile."