Feed aggregator

Study shows making hydrogen with soda cans and seawater is scalable and sustainable

MIT Latest News - Tue, 06/03/2025 - 11:00am

Hydrogen has the potential to be a climate-friendly fuel since it doesn’t release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a “green” fuel over its entire life cycle.

A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.

Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.

Now, the researchers have carried out a “cradle-to-grave” life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions associated with conventional hydrogen production.

In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.

The low-carbon footprint is on par with other proposed “green hydrogen” technologies, such as those powered by solar and wind energy.

“We’re in the ballpark of green hydrogen,” says lead author Aly Kombargi PhD ’25, who graduated this spring from MIT with a doctorate in mechanical engineering. “This work highlights aluminum’s potential as a clean energy source and offers a scalable pathway for low-emission hydrogen deployment in transportation and remote energy systems.”

The study’s MIT co-authors are Brooke Bao, Enoch Ellis, and professor of mechanical engineering Douglas Hart.

Gas bubble

Dropping an aluminum can in water won’t normally cause much of a chemical reaction. That’s because when aluminum is exposed to oxygen, it instantly forms a shield-like oxide layer. Strip away that layer, and the pure aluminum underneath reacts readily with water: aluminum atoms efficiently break up water molecules, producing aluminum oxide and pure hydrogen. And it doesn’t take much of the metal to bubble up a significant amount of the gas.
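The reaction described above can be written as a balanced equation. One commonly cited form, shown here as an illustration, yields boehmite (AlO(OH)), the aluminum oxyhydroxide byproduct discussed later in this article; under other conditions the solid product is aluminum hydroxide instead:

```latex
2\,\mathrm{Al} + 4\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{AlO(OH)} + 3\,\mathrm{H_2}
```

On this stoichiometry, two moles of aluminum (about 54 grams of metal) release three moles of hydrogen gas, which is why a small amount of aluminum bubbles up a significant volume of gas.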

“One of the main benefits of using aluminum is the energy density per unit volume,” Kombargi says. “With a very small amount of aluminum fuel, you can conceivably supply much of the power for a hydrogen-fueled vehicle.”

Last year, he and Hart developed a recipe for aluminum-based hydrogen production. They found they could puncture aluminum’s natural shield by treating it with a small amount of gallium-indium, which is a rare-metal alloy that effectively scrubs aluminum into its pure form. The researchers then mixed pellets of pure aluminum with seawater and observed that the reaction produced pure hydrogen. What’s more, the salt in the water helped to precipitate gallium-indium, which the team could subsequently recover and reuse to generate more hydrogen, in a cost-saving, sustainable cycle.

“We were explaining the science of this process in conferences, and the questions we would get were, ‘How much does this cost?’ and, ‘What’s its carbon footprint?’” Kombargi says. “So we wanted to look at the process in a comprehensive way.”

A sustainable cycle

For their new study, Kombargi and his colleagues carried out a life cycle assessment to estimate the environmental impact of aluminum-based hydrogen production, at every step of the process, from sourcing the aluminum to transporting the hydrogen after production. They set out to calculate the amount of carbon associated with generating 1 kilogram of hydrogen — an amount that they chose as a practical, consumer-level illustration.

“With a hydrogen fuel cell car using 1 kilogram of hydrogen, you can go between 60 and 100 kilometers, depending on the efficiency of the fuel cell,” Kombargi notes.

They performed the analysis using Earthster — an online life cycle assessment tool that draws data from a large repository of products and processes and their associated carbon emissions. The team considered a number of scenarios for producing hydrogen with aluminum, starting either with “primary” aluminum mined from the Earth or with “secondary” aluminum recycled from soda cans and other products, and using various methods to transport the aluminum and hydrogen.

After running life cycle assessments for about a dozen scenarios, the team identified the one with the lowest carbon footprint. This scenario centers on recycled aluminum — a source that saves a significant amount of emissions compared with mining aluminum — and seawater — a natural resource that also saves money by enabling the recovery of gallium-indium. They found that this scenario, from start to finish, would generate about 1.45 kilograms of carbon dioxide for every kilogram of hydrogen produced. The cost of the fuel, they calculated, would be about $9 per kilogram, comparable to the price of hydrogen generated with other green technologies such as wind and solar energy.
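A quick back-of-the-envelope check, using only the figures quoted in this article, shows the scale of the claimed savings (a sketch, not part of the study’s methodology):

```python
# Life-cycle emissions figures quoted in the article (kg CO2 per kg H2).
ALUMINUM_SEAWATER = 1.45  # best scenario: recycled aluminum + seawater
FOSSIL_BASED = 11.0       # conventional fossil-fuel-based production

# Fractional reduction in emissions relative to the fossil baseline.
reduction = 1 - ALUMINUM_SEAWATER / FOSSIL_BASED
print(f"Emissions reduction: {reduction:.0%}")  # Emissions reduction: 87%
```

In other words, the best aluminum-seawater scenario cuts life-cycle emissions by roughly 87 percent relative to the fossil-fuel baseline.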

The researchers envision that if the low-carbon process were ramped up to a commercial scale, it would look something like this: The production chain would start with scrap aluminum sourced from a recycling center. The aluminum would be shredded into pellets and treated with gallium-indium. The pretreated pellets could then be transported as aluminum “fuel,” rather than as hydrogen itself, which is flammable and harder to move safely. The pellets would be delivered to a fuel station, ideally situated near a source of seawater, which could be mixed with the aluminum on demand to produce hydrogen. A consumer could then directly pump the gas into a car with either an internal combustion engine or a fuel cell.

The entire process does produce an aluminum-based byproduct, boehmite, which is a mineral that is commonly used in fabricating semiconductors, electronic elements, and a number of industrial products. Kombargi says that if this byproduct were recovered after hydrogen production, it could be sold to manufacturers, further bringing down the cost of the process as a whole.

“There are a lot of things to consider,” Kombargi says. “But the process works, which is the most exciting part. And we show that it can be environmentally sustainable.”

The group is continuing to develop the process. They recently designed a small reactor, about the size of a water bottle, that takes in aluminum pellets and seawater to generate enough hydrogen to power an electric bike for several hours. They previously demonstrated that the process can produce enough hydrogen to fuel a small car. The team is also exploring underwater applications, and is designing a hydrogen reactor that would take in surrounding seawater to power a small boat or underwater vehicle.

This research was supported, in part, by the MIT Portugal Program.

New Linux Vulnerabilities

Schneier on Security - Tue, 06/03/2025 - 7:07am

They’re interesting:

Tracked as CVE-2025-5054 and CVE-2025-4598, both vulnerabilities are race condition bugs that could enable a local attacker to gain access to sensitive information. Tools like Apport and systemd-coredump are designed to handle crash reporting and core dumps in Linux systems.

[…]

“This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.”...

Trump fired the heat experts. Now he might kill their heat rule.

ClimateWire News - Tue, 06/03/2025 - 6:18am
Government layoffs threaten to make it easier for the Trump administration to ditch draft regulations for heat safety.

Trump seeks record-high FEMA funding after vowing to cut agency

ClimateWire News - Tue, 06/03/2025 - 6:17am
The president’s request for an additional $4 billion in disaster aid indicates that he might not carry through with his threats to dismantle the Federal Emergency Management Agency.

Labor Department ready to roll back climate investing rule

ClimateWire News - Tue, 06/03/2025 - 6:16am
The administration intends to issue new guidelines after a Trump-appointed judge twice upheld a Biden-era rule that lets investors consider climate costs.

Relaxing tailpipe rules would hurt climate and consumers, critics say

ClimateWire News - Tue, 06/03/2025 - 6:16am
The Trump team is looking to roll back fuel economy standards put in place by the Biden administration.

EU science advisers slam Brussels’ weakened 2040 climate plans

ClimateWire News - Tue, 06/03/2025 - 6:15am
Using international carbon credits in place of domestic action undermines climate efforts, the scientific advisory board says.

EU climate chief lobbied Germany to back weakened 2040 goal

ClimateWire News - Tue, 06/03/2025 - 6:13am
Wopke Hoekstra successfully pushed the incoming coalition to back foreign carbon credits, helping shift the EU-level 2040 talks.

States roll out red carpets for data centers. But some lawmakers push back.

ClimateWire News - Tue, 06/03/2025 - 6:12am
The fights revolve around the things that tech companies and data center developers seem to most want: large tracts of land, tax breaks and huge volumes of electricity and water.

River dammed by huge Swiss landslide flows once again

ClimateWire News - Tue, 06/03/2025 - 6:12am
Authorities are still leaving open the possibility of evacuations farther downstream if required, though the risk to other villages appears very low.

Flood-induced selective migration patterns examined

Nature Climate Change - Tue, 06/03/2025 - 12:00am

Nature Climate Change, Published online: 03 June 2025; doi:10.1038/s41558-025-02346-6

Selective migration patterns emerge in flood-prone regions in the USA. The sociodemographic profiles of individuals who were more inclined to move in or out of flood-prone areas were strikingly different. Media sentiment aggravates population replacement in these regions, leading to short-term structural changes in the housing market and long-term socioeconomic decline.

New 3D printing method enables complex designs and creates less waste

MIT Latest News - Tue, 06/03/2025 - 12:00am

Hearing aids, mouth guards, dental implants, and other highly tailored structures are often products of 3D printing. These structures are typically made via vat photopolymerization — a form of 3D printing that uses patterns of light to shape and solidify a resin, one layer at a time.

The process also involves printing structural supports from the same material to hold the product in place as it’s printed. Once a product is fully formed, the supports are removed manually and typically thrown out as unusable waste.

MIT engineers have found a way to bypass this last finishing step, in a way that could significantly speed up the 3D-printing process. They developed a resin that turns into two different kinds of solids, depending on the type of light that shines on it: Ultraviolet light cures the resin into a highly resilient solid, while visible light turns the same resin into a solid that is easily dissolvable in certain solvents.

The team exposed the new resin simultaneously to patterns of UV light to form a sturdy structure and patterns of visible light to form the structure’s supports. Instead of having to carefully break away the supports, they simply dipped the printed material into a solution that dissolved the supports away, revealing the sturdy, UV-printed part.

The supports can dissolve in a variety of food-safe solutions, including baby oil. Interestingly, the supports could even dissolve in the main liquid ingredient of the original resin, like a cube of ice in water. This means that the material used to print structural supports could be continuously recycled: Once a printed structure’s supporting material dissolves, that mixture can be blended directly back into fresh resin and used to print the next set of parts — along with their dissolvable supports.

The researchers applied the new method to print complex structures, including functional gear trains and intricate lattices.

“You can now print — in a single print — multipart, functional assemblies with moving or interlocking parts, and you can basically wash away the supports,” says graduate student Nicholas Diaco. “Instead of throwing out this material, you can recycle it on site and generate a lot less waste. That’s the ultimate hope.”

He and his colleagues report the details of the new method in a paper appearing today in Advanced Materials Technologies. The MIT study’s co-authors include Carl Thrasher, Max Hughes, Kevin Zhou, Michael Durso, Saechow Yap, Professor Robert Macfarlane, and Professor A. John Hart, head of MIT’s Department of Mechanical Engineering.

Waste removal

Conventional vat photopolymerization (VP) begins with a 3D computer model of a structure to be printed — for instance, of two interlocking gears. Along with the gears themselves, the model includes small support structures around, under, and between the gears to keep every feature in place as the part is printed. This computer model is then sliced into many digital layers that are sent to a VP printer for printing.

A standard VP printer includes a small vat of liquid resin that sits over a light source. Each slice of the model is translated into a matching pattern of light that is projected onto the liquid resin, which solidifies into the same pattern. Layer by layer, a solid, light-printed version of the model’s gears and supports forms on the build platform. When printing is finished, the platform lifts the completed part above the resin bath. Once excess resin is washed away, a person can go in by hand to remove the intermediary supports, usually by clipping and filing, and the support material is ultimately thrown away.
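The slicing-and-exposure pipeline described above can be pictured with a small sketch. This is purely illustrative (not the team’s actual software): each sliced layer is split into two exposure masks, one for the UV-cured part and one for the visible-light-cured, dissolvable supports.

```python
# Each printed layer is a grid of voxels labeled part, support, or empty.
# Dual-wavelength printing splits one layer into two exposure masks:
# UV light cures the part voxels, visible light cures the support voxels.
PART, SUPPORT, EMPTY = "P", "S", "."

def split_layer(layer):
    """Return (uv_mask, visible_mask) for one sliced layer."""
    uv = [[cell == PART for cell in row] for row in layer]
    visible = [[cell == SUPPORT for cell in row] for row in layer]
    return uv, visible

# A toy layer: a part feature held in place by supports on either side.
layer = [
    [SUPPORT, PART, PART, SUPPORT],
    [EMPTY,   PART, PART, EMPTY],
]
uv_mask, visible_mask = split_layer(layer)
print(uv_mask[0])       # [False, True, True, False]
print(visible_mask[0])  # [True, False, False, True]
```

In a conventional single-wavelength printer, the two masks would be merged into one pattern and the supports would cure into the same tough material as the part, which is why they must be removed by hand.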

“For the most part, these supports end up generating a lot of waste,” Diaco says.

Print and dip

Diaco and the team looked for a way to simplify and speed up the removal of printed supports and, ideally, recycle them in the process. They came up with a general concept for a resin that, depending on the type of light that it is exposed to, can take on one of two phases: a resilient phase that would form the desired 3D structure and a secondary phase that would function as a supporting material but also be easily dissolved away.

After working out some chemistry, the team found they could make such a two-phase resin by mixing two commercially available monomers, the chemical building blocks that are found in many types of plastic. When ultraviolet light shines on the mixture, the monomers link together into a tightly interconnected network, forming a tough solid that resists dissolution. When the same mixture is exposed to visible light, the same monomers still cure, but at the molecular scale the resulting monomer strands remain separate from one another. This solid can quickly dissolve when placed in certain solutions.

In benchtop tests with small vials of the new resin, the researchers found the material did transform into both the insoluble and soluble forms in response to ultraviolet and visible light, respectively. But when they moved to a 3D printer with LEDs dimmer than the benchtop setup, the UV-cured material fell apart in solution. The weaker light only partially linked the monomer strands, leaving them too loosely tangled to hold the structure together.

Diaco and his colleagues found that adding a small amount of a third “bridging” monomer could link the two original monomers together under UV light, knitting them into a much sturdier framework. This fix enabled the researchers to simultaneously print resilient 3D structures and dissolvable supports using timed pulses of UV and visible light in one run.

The team applied the new method to print a variety of complex structures, including interlocking gears, intricate lattices, a ball within a square frame, and, for fun, a small dinosaur encased in an egg-shaped support that dissolved away when dipped in solution.

“With all these structures, you need a lattice of supports inside and out while printing,” Diaco says. “Removing those supports normally requires careful, manual removal. This shows we can print multipart assemblies with a lot of moving parts, and detailed, personalized products like hearing aids and dental implants, in a way that’s fast and sustainable.”

“We’ll continue studying the limits of this process, and we want to develop additional resins with this wavelength-selective behavior and mechanical properties necessary for durable products,” says professor of mechanical engineering John Hart. “Along with automated part handling and closed-loop reuse of the dissolved resin, this is an exciting path to resource-efficient and cost-effective polymer 3D printing at scale.”

This research was supported, in part, by the Center for Perceptual and Interactive Intelligence (InnoHK) in Hong Kong, the U.S. National Science Foundation, the U.S. Office of Naval Research, and the U.S. Army Research Office.

Teaching AI models what they don’t know

MIT Latest News - Tue, 06/03/2025 - 12:00am

Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to do things like develop drugs, synthesize information, and drive autonomous cars.

Now, the MIT spinout Themis AI is helping quantify model uncertainty and correct outputs before they cause bigger problems. The company’s Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying AI models to enable them to detect patterns in their data processing that indicate ambiguity, incompleteness, or bias.
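Capsa’s internals are proprietary and not described here, but one generic, widely used way to make any model report uncertainty is to query an ensemble and flag outputs where the members disagree. The sketch below illustrates that idea only; the threshold and toy models are invented for the example and are not a Themis AI interface.

```python
from statistics import mean, pstdev

def predict_with_uncertainty(models, x, threshold=0.5):
    """Query an ensemble; flag the output as unreliable if members disagree.

    `models` is any list of callables returning a scalar prediction.
    The threshold is an illustrative choice, not a Capsa parameter.
    """
    preds = [m(x) for m in models]
    spread = pstdev(preds)  # disagreement serves as an uncertainty proxy
    return mean(preds), spread, spread > threshold

# Toy ensemble: three "models" that agree near x=0 and diverge elsewhere.
ensemble = [lambda x: x, lambda x: 1.1 * x, lambda x: 0.9 * x]
avg, unc, flagged = predict_with_uncertainty(ensemble, 10.0)
print(avg, flagged)  # 10.0 True (spread of about 0.82 exceeds 0.5)
```

The same wrapper pattern works regardless of what the underlying models are, which echoes the claim that the platform "can work with any machine-learning model."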

“The idea is to take a model, wrap it in Capsa, identify the uncertainties and failure modes of the model, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’re excited about offering a solution that can improve models and offer guarantees that the model is working correctly.”

Rus founded Themis AI in 2021 with Alexander Amini ’17, SM ’18, PhD ’22 and Elaheh Ahmadi ’20, MEng ’21, two former research affiliates in her lab. Since then, they’ve helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.

“We want to enable AI in the highest-stakes applications of every industry,” Amini says. “We’ve all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Our software can make these systems more transparent.”

Helping models know what they don’t know

Rus’ lab has been researching model uncertainty for years. In 2018, she received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.

“That is a safety-critical context where understanding model reliability is very important,” Rus says.

In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model’s training data, showing it eliminated bias. The algorithm worked by identifying the unrepresentative parts of the underlying training data and generating new, similar data samples to rebalance it.
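The published algorithm identified under-represented regions of a learned latent space and generated new samples there. As a much simpler stand-in for the rebalancing idea, one can weight each training sample inversely to its group’s frequency (the group labels here are hypothetical):

```python
from collections import Counter

def rebalancing_weights(group_labels):
    """Weight each sample inversely to its group's frequency.

    A simplified stand-in for the debiasing idea described above; the
    published algorithm worked in a learned latent space rather than on
    explicit group labels.
    """
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    return [n / (k * counts[g]) for g in group_labels]

# Toy dataset: group "a" is over-represented 3-to-1.
weights = rebalancing_weights(["a", "a", "a", "b"])
print(weights)  # each "a" sample weighs about 0.67, the "b" sample 2.0
```

Down-weighting the over-represented group and up-weighting the rare one pushes the model to treat both groups with comparable importance during training.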

In 2021, the eventual co-founders showed a similar approach could be used to help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.

“Guiding drug discovery could potentially save a lot of money,” Rus says. “That was the use case that made us realize how powerful this tool could be.”

Today Themis is working with companies in a wide variety of industries, and many of those companies are building large language models. By using Capsa, the models are able to quantify their own uncertainty for each output.

“Many companies are interested in using LLMs that are based on their data, but they’re concerned about reliability,” observes Stewart Jamieson SM ’20, PhD ’24, Themis AI's head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and flagging unreliable outputs.”

Themis AI is also in discussions with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.

“Normally these smaller models that work on phones or embedded systems aren’t very accurate compared to what you could run on a server, but we can get the best of both worlds: low latency, efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they’re unsure of their output, they can forward those tasks to a central server.”
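The edge-versus-server split Jamieson describes amounts to confidence-based routing. A minimal sketch, with an invented threshold and return convention rather than any real Themis AI API:

```python
def route(prediction, uncertainty, max_uncertainty=0.2):
    """Keep confident results on-device; escalate uncertain ones.

    A sketch of the edge/server split described above; the threshold and
    return values are illustrative, not a real Themis AI interface.
    """
    if uncertainty <= max_uncertainty:
        return ("edge", prediction)   # low latency, handled locally
    return ("server", prediction)     # forwarded for a second opinion

print(route("cat", 0.05))  # ('edge', 'cat')
print(route("cat", 0.60))  # ('server', 'cat')
```

The design choice is that the edge device only pays the round-trip cost to the server on the minority of inputs it cannot handle confidently.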

Pharmaceutical companies can also use Capsa to improve AI models being used to identify drug candidates and predict their performance in clinical trials.

“The predictions and outputs of these models are very complex and hard to interpret — experts spend a lot of time and effort trying to make sense of them,” Amini remarks. “Capsa can give insights right out of the gate to understand if the predictions are backed by evidence in the training set or are just speculation without a lot of grounding. That can accelerate the identification of the strongest predictions, and we think that has a huge potential for societal good.”

Research for impact

Themis AI’s team believes the company is well-positioned to improve the cutting edge of constantly evolving AI technology. For instance, the company is exploring Capsa’s ability to improve accuracy in an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to get to an answer.

“We’ve seen signs Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” Amini says. “We think that has huge implications in terms of improving the LLM experience, reducing latencies, and reducing computation requirements. It’s an extremely high-impact opportunity for us.”

For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.

“My students and I have become increasingly passionate about going the extra step to make our work relevant for the world,” Rus says. “AI has tremendous potential to transform industries, but AI also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”

At MIT, Lindsay Caplan reflects on artistic crossroads where humans and machines meet

MIT Latest News - Mon, 06/02/2025 - 4:35pm

The intersection of art, science, and technology presents a unique, sometimes challenging, viewpoint for both scientists and artists. It is in this nexus that art historian Lindsay Caplan positions herself: “My work as an art historian focuses on the ways that artists across the 20th century engage with new technologies like computers, video, and television, not merely as new materials for making art as they already understand it, but as conceptual platforms for reorienting and reimagining the foundational assumptions of their practice.”

With this introduction, Caplan, an assistant professor at Brown University, opened the inaugural Resonances Lecture — a new series by STUDIO.nano to explore the generative edge where art, science, and technology meet. Delivered on April 28 to an interdisciplinary crowd at MIT.nano, Caplan’s lecture, titled “Analogical Engines — Collaborations across Art and Technology in the 1960s,” traced how artists across Europe and the Americas in the 1960s engaged with and responded to the emerging technological advances of computer science, cybernetics, and early AI. “By the time we reached the 1960s,” she said, “analogies between humans and machines, drawn from computer science and fields like information theory and cybernetics, abound among art historians and artists alike.”

Caplan’s talk centered on two artistic networks, with a particular emphasis on American artist Liliane Lijn: the New Tendencies exhibitions (1961-79) and the Signals gallery in London (1964-66). She deftly analyzed the artist’s material experimentation with contemporary advances in emergent technologies — quantum physics and mathematical formalism, particularly Heisenberg’s uncertainty principle. She argued that both art historical formalism and mathematical formalism share struggles with representation, indeterminacy, and the tension between constructed and essential truths.

Following her talk, Caplan was joined by MIT faculty Mark Jarzombek, professor of the history and theory of architecture, and Gediminas Urbonas, associate professor of art, culture, and technology (ACT), for a panel discussion moderated by Ardalan SadeghiKivi SM ’22, lecturer of comparative media studies. The conversation expanded on Caplan’s themes with discussions of artists’ attraction to newly developed materials and technology, and the critical dimension of reimagining and repurposing technologies that were originally designed with an entirely different purpose.

Urbonas echoed the urgency of these conversations. “It is exceptionally exciting to witness artists working in dialectical tension with scientists — a tradition that traces back to the founding of the Center for Advanced Visual Studies at MIT and continues at ACT today,” reflected Urbonas. “The dual ontology of science and art enables us to grasp the world as a web of becoming, where new materials, social imaginaries, and aesthetic values are co-constituted through interdisciplinary inquiry. Such collaborations are urgent today, offering tools to reimagine agency, subjectivity, and the role of culture in shaping the future.”

The event concluded with a reception in MIT.nano’s East Lobby, where attendees could view MIT ACT student projects currently on exhibition in MIT.nano’s gallery spaces. The reception was, itself, an intersection of art and technology. “The first lecture of the Resonances Lecture Series lived up to the title,” reflects Jarzombek. “A brilliant talk by Lindsay Caplan proved that the historical and aesthetical dimensions in the sciences have just as much relevance to a critical posture as the technical.”

The Resonances lecture and panel series seeks to gather artists, designers, scientists, engineers, and historians who examine how scientific endeavors shape artistic production, and vice versa. Their insights expose the historical context on how art and science are made and distributed in society and offer hints at the possible futures of such productions.

“When we were considering who to invite to launch this lecture series, Lindsay Caplan immediately came to mind,” says Tobias Putrih, ACT lecturer and academic advisor for STUDIO.nano. “She is one of the most exciting thinkers and historians writing about the intersection between art, technology, and science today. We hope her insights and ideas will encourage further collaborative projects.”

The Resonances series is one of several new activities organized by STUDIO.nano, a program within MIT.nano, to connect the arts with cutting-edge research environments. “MIT.nano generates extraordinary scientific work,” says Samantha Farrell, manager of STUDIO.nano, “but it’s just as vital to create space for cultural reflection. STUDIO.nano invites artists to engage directly with new technologies — and with the questions they raise.”

In addition to the Resonances lectures, STUDIO.nano organizes exhibitions in the public spaces at MIT.nano, and an Encounters series, launched last fall, to bring artists to MIT.nano. To learn about current installations and ongoing collaborations, visit the STUDIO.nano web page.

The Defense Attorney’s Arsenal In Challenging Electronic Monitoring

EFF: Updates - Mon, 06/02/2025 - 4:32pm

In criminal prosecutions, electronic monitoring (EM) is pitched as a “humane alternative” to incarceration – but it is not. The latest generation of “e-carceration” tools are burdensome, harsh, and often just as punitive as imprisonment. Fortunately, criminal defense attorneys have options when shielding their clients from this overused and harmful tech.

Framed as a tool that enhances public safety while reducing jail populations, EM is increasingly used as a condition of pretrial release, probation, parole, or even civil detention. However, this technology imposes serious infringements on liberty, privacy, and due process for not only those placed on it but also for people they come into contact with. It can transform homes into digital jails, inadvertently surveil others, impose financial burdens, and punish every misstep—no matter how minor or understandable.

Even though EM may appear less severe than incarceration, research and litigation reveal that these devices often function as a form of detention in all but name. Monitored individuals must often remain at home for long periods, request permission to leave for basic needs, and comply with curfews or “exclusion zones.” Violations, even technical ones—such as a battery running low or a dropped GPS signal—can result in arrest and incarceration. Being able to take care of oneself and reintegrate into the world becomes a minefield of compliance and red tape. The psychological burden, social stigma, and physical discomfort associated with EM are significant, particularly for vulnerable populations.   

For many, EM still evokes bulky wrist or ankle “shackles” that can monitor a subject’s location, and sometimes even their blood alcohol levels. These devices have matured with digital technology, however, and EM is increasingly imposed through more sophisticated devices like smartwatches or mobile phone applications. Newer iterations of EM have also followed a trajectory of collecting much more data, including biometrics and more precise location information.

This issue is more pressing than ever, as the 2020 COVID pandemic led to an explosion in EM adoption. As incarceration and detention facilities became superspreader zones, judges kept some offenders out of these facilities by expanding the use of EM; so much so that some jurisdictions ran out of classic EM devices like ankle bracelets.

Today the number of people placed on EM in the criminal system continues to skyrocket. Fighting the spread of EM requires many tactics, but on the front lines are the criminal defense attorneys challenging EM impositions. This post will focus on the main issues for defense attorneys to consider while arguing against the imposition of this technology.

PRETRIAL ELECTRONIC MONITORING

We’ve seen challenges to EM programs in a variety of ways, including attacking the constitutionality of the program as a whole and arguing against pretrial and/or post-conviction imposition. However, it is likely that the most successful challenges will come from individualized challenges to pretrial EM.

First, courts have not been receptive to arguments that entire EM programs are unconstitutional. For example, in Simon v. San Francisco et al., 135 F.4th 784 (9th Cir. 2025), the Ninth Circuit held that although San Francisco’s EM program constituted a Fourth Amendment search, a warrant was not required. The court explained its decision by stating that the program was a condition of pretrial release, included the sharing of location data, and was consented to by the individual (with counsel present) by signing a form that essentially operated as a contract. This decision exemplifies the court’s failure to grasp the coercive nature of this type of “consent,” which is pervasive in the criminal legal system.

Second, pretrial defendants have more robust rights than they do after conviction. While a person’s expectation of privacy may be slightly diminished following arrest but before trial, the Fourth Amendment is not entirely out of the picture. Their “privacy and liberty interests” are, for instance, “far greater” than a person who has been convicted and is on probation or parole. United States v. Scott, 450 F.3d 863, 873 (9th Cir. 2006). Although individuals continue to retain Fourth Amendment rights after conviction, the reasonableness analysis will be heavily weighted towards the state as the defendant is no longer presumed innocent. However, even people on probation have a “substantial” privacy interest. United States v. Lara, 815 F.3d 605, 610 (9th Cir. 2016). 

THE FOURTH AMENDMENT

The foundational constitutional rights most directly threatened by the sheer invasiveness of EM are those protected by the Fourth Amendment. This concern is only heightened as the technology improves and collects increasingly detailed information. Unlike traditional probation or parole supervision, EM often tracks individuals with no geographic limitations or oversight, and can automatically record more than just approximate location information.

Courts have increasingly recognized that this new technology poses greater and more novel threats to our privacy than earlier generations. In Grady v. North Carolina, 575 U.S. 306 (2015), the Supreme Court, relying on United States v. Jones, 565 U.S. 400 (2012), held that attaching a GPS tracking device to a person—even a convicted sex offender—constitutes a Fourth Amendment search and is thus subject to the inquiry of reasonableness. A few years later, the monumental decision in Carpenter v. United States, 138 S. Ct. 2206 (2018), firmly established that Fourth Amendment analysis is affected by the advancement of technology, holding that long-term cell-site location tracking by law enforcement constituted a search requiring a warrant.

As criminal defense attorneys are well aware, the Fourth Amendment’s ostensibly powerful protections are often less effective in practice. Nevertheless, this line of cases still forms a strong foundation for arguing that EM should be subjected to exacting Fourth Amendment scrutiny.

DUE PROCESS

Three key procedural due process challenges that defense attorneys can raise under the Fifth and Fourteenth Amendments are: inadequate hearing, lack of individualized assessment, and failure to consider ability to pay.

First, many courts impose EM without adequate consideration of individual circumstances or less restrictive alternatives. Defense attorneys should demand evidentiary hearings where the government must prove that monitoring is necessary and narrowly tailored. If the defendant is not given notice, a hearing, or the opportunity to object, that could arguably constitute a violation of due process. For example, in the previously mentioned case, Simon v. San Francisco, the Ninth Circuit found that individuals who were not informed of the details regarding the city’s pretrial EM program in the presence of counsel had their rights violated.

Second, imposition of EM should be based on an individualized assessment rather than a blanket rule. For pretrial defendants, EM is frequently used as a condition of bail. Although under both federal and state bail frameworks, courts are generally required to impose the least restrictive conditions necessary to ensure the defendant’s court appearance and protect the community, many jurisdictions have included EM as a default condition rather than individually assessing whether EM is appropriate. The Bail Reform Act of 1984, for instance, mandates that release conditions be tailored to the individual’s circumstances. Yet in practice, many jurisdictions impose EM categorically, without specific findings or consideration of alternatives. Defense counsel should challenge this practice by insisting that judges articulate on the record why EM is necessary, supported by evidence related to flight risk or danger. Where clients have stable housing, employment, and no history of noncompliance, EM may be more restrictive than justified.

Lastly, financial burdens associated with EM may also implicate due process where a failure to pay can result in violations and incarceration. In Bearden v. Georgia, 461 U.S. 660 (1983), the Supreme Court held that courts cannot revoke probation for failure to pay fines or restitution without first determining whether the failure was willful. Relying on Bearden, defense attorneys can argue that EM fees imposed on indigent clients amount to unconstitutional punishment for poverty. A growing number of lower courts have agreed, particularly where clients were not given the opportunity to contest their ability to pay. Defense attorneys should request fee waivers, present evidence of indigence, and challenge any EM orders that functionally condition liberty on wealth.

STATE LAW PROTECTIONS

State constitutions and statutes often provide stronger protections than federal constitutional minimums. In addition to state corollaries to the Fourth and Fifth Amendments, some states have also enacted statutes to govern pretrial release and conditions. A number of states have established a presumption in favor of release on recognizance or personal recognizance bonds. In those jurisdictions, the state has to overcome this presumption before the court can impose restrictive conditions like EM. Some states require courts to impose the least restrictive conditions necessary to achieve legitimate purposes, making EM appropriate only when less restrictive alternatives are inadequate.

Most pretrial statutes list specific factors courts must consider, such as community ties, employment history, family responsibilities, nature of the offense, criminal history, and risk of flight or danger to community. Courts that fail to adequately consider these factors or impose generic monitoring conditions may violate statutory requirements.

For example, Illinois's SAFE-T Act includes specific protections against overly restrictive EM conditions, but implementation has been inconsistent. Defense attorneys in Illinois and states with similar laws should challenge monitoring conditions that violate specific statutory requirements.

TECHNOLOGICAL ISSUES

Attorneys should also consider the reliability of EM technology. Devices frequently produce false violations and alerts, particularly in urban areas or buildings where GPS signals are weak. Misleading data can lead to violation hearings and even incarceration. Attorneys should demand access to raw location data, vendor records, and maintenance logs. Expert testimony can help demonstrate technological flaws, human error, or system limitations that cast doubt on the validity of alleged violations.

In some jurisdictions, EM programs are operated by private companies under contracts with probation departments, courts, or sheriffs. These companies profit from fees paid by clients and have minimal oversight. Attorneys should request copies of contracts, training manuals, and policies governing EM use. Discovery may reveal financial incentives, lack of accountability, or systemic issues such as racial or geographic disparities in monitoring. These findings can support broader litigation or class actions, particularly where indigent individuals are jailed for failing to pay private vendors.

Recent research provides compelling evidence that EM fails to achieve its stated purposes while creating significant harms. Studies have not found significant relationships between EM of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do they show that law enforcement is employing EM on individuals they would otherwise put in jail.

To the contrary, studies indicate that law enforcement is using EM to surveil and constrain the liberty of those who wouldn't otherwise be detained, as the rise in the number of people placed on EM has not coincided with a decrease in detention. This research demonstrates that EM represents an expansion of government control rather than a true alternative to detention.

Additionally, EM devices may be rife with technical issues as described above, including communication system failures that prevent proper monitoring and device malfunctions that cause electric shocks. Cutting off ankle bracelets is a common occurrence among users, especially when the technology is malfunctioning or hurting them. Defense attorneys should document all technical issues and argue that unreliable technology cannot form the basis for liberty restrictions or additional criminal charges.

CREATING A RECORD FOR APPEAL

Attorneys should always make sure they are creating a record on which the EM imposition can be appealed, should the initial hearing be unsuccessful. This requires lawyers to include the factual basis for the challenge and preserve the appropriate legal arguments. The modern generation of EM has yet to undergo the extensive judicial review that ankle shackles have been subjected to, so it is essential to build an extensive record of the ways in which it is more invasive and harmful, and to argue to an appellate court that the nature of the newest EM requires more than perfunctory application of decades-old precedent. As we saw with Carpenter, the rapid advancement of technology may push the courts to reconsider older paradigms for constitutional analysis and find them wanting. A comprehensive record is therefore critical to show EM as it is—an extension of incarceration—rather than a benevolent alternative to detention.

Defeating electronic monitoring will require a multidimensional approach that includes litigating constitutional claims, contesting factual assumptions, exposing technological failures, and advocating for systemic reforms. As the carceral state evolves, attorneys must remain vigilant and proactive in defending the rights of their clients.

The EU’s “Encryption Roadmap” Makes Everyone Less Safe

EFF: Updates - Mon, 06/02/2025 - 4:15pm

EFF has joined more than 80 civil society organizations, companies, and cybersecurity experts in signing a letter urging the European Commission to change course on its recently announced “Technology Roadmap on Encryption.” The roadmap, part of the EU’s ProtectEU strategy, discusses new ways for law enforcement to access encrypted data. That framing is dangerously flawed. 

Let’s be clear: there is no technical “lawful access” to end-to-end encrypted messages that preserves security and privacy. Any attempt to circumvent encryption—like client-side scanning—creates new vulnerabilities, threatening the very people governments claim to protect.

This letter is significant not just for its content, but for who signed it. The breadth of the coalition makes one thing clear: civil society and the global technical community overwhelmingly reject the idea that weakening encryption can coexist with respect for fundamental rights.

Strong encryption is a pillar of cybersecurity, protecting everyone: activists, journalists, everyday web users, and critical infrastructure. Undermining it doesn’t just hurt privacy. It makes everyone’s data more vulnerable and weakens the EU’s ability to defend against cybersecurity threats.

EU officials should scrap any roadmap focused on circumvention and instead invest in stronger, more widespread use of end-to-end encryption. Security and human rights aren’t in conflict. They depend on each other.

You can read the full letter here.

AI stirs up the recipe for concrete in MIT study

MIT Latest News - Mon, 06/02/2025 - 3:45pm

For weeks, the whiteboard in the lab was crowded with scribbles, diagrams, and chemical formulas. A research team across the Olivetti Group and the MIT Concrete Sustainability Hub (CSHub) was working intensely on a key problem: How can we reduce the amount of cement in concrete to save on costs and emissions? 

The question was certainly not new; materials like fly ash, a byproduct of coal combustion, and slag, a byproduct of steelmaking, have long been used to replace some of the cement in concrete mixes. However, the demand for these products is outpacing supply as industry looks to reduce its climate impacts by expanding their use, making the search for alternatives urgent. The challenge that the team discovered wasn’t a lack of candidates; the problem was that there were too many to sort through.

On May 17, the team, led by postdoc Soroush Mahjoubi, published an open-access paper in Nature’s Communications Materials outlining their solution. “We realized that AI was the key to moving forward,” notes Mahjoubi. “There is so much data out there on potential materials — hundreds of thousands of pages of scientific literature. Sorting through them would have taken many lifetimes of work, by which time more materials would have been discovered!”

With large language models, like the chatbots many of us use daily, the team built a machine-learning framework that evaluates and sorts candidate materials based on their physical and chemical properties. 

“First, there is hydraulic reactivity. The reason that concrete is strong is that cement — the ‘glue’ that holds it together — hardens when exposed to water. So, if we replace this glue, we need to make sure the substitute reacts similarly,” explains Mahjoubi. “Second, there is pozzolanicity. This is when a material reacts with calcium hydroxide, a byproduct created when cement meets water, to make the concrete harder and stronger over time.  We need to balance the hydraulic and pozzolanic materials in the mix so the concrete performs at its best.”

Analyzing scientific literature and over 1 million rock samples, the team used the framework to sort candidate materials into 19 types, ranging from biomass to mining byproducts to demolished construction materials. Mahjoubi and his team found that suitable materials were available globally — and, more impressively, many could be incorporated into concrete mixes just by grinding them. This means it’s possible to extract emissions and cost savings without much additional processing. 
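The balancing act Mahjoubi describes can be pictured as a simple ranking problem. The toy sketch below is purely illustrative: the material names, scores, categories, and weights are hypothetical placeholders, not data or methods from the paper.

```python
# Toy illustration of ranking cement-substitute candidates by the two
# properties described above. All values here are made up for illustration.

candidates = [
    # (name, hydraulic_reactivity, pozzolanicity, category), on 0-1 scales
    ("crushed brick",   0.2, 0.8, "demolition waste"),
    ("mine tailings A", 0.1, 0.5, "mining byproduct"),
    ("rice husk ash",   0.1, 0.9, "biomass"),
    ("steel slag",      0.7, 0.3, "industrial byproduct"),
]

def score(material, w_hydraulic=0.5, w_pozzolanic=0.5):
    """Combine the two reactivity properties into a single ranking score."""
    _, hyd, poz, _ = material
    return w_hydraulic * hyd + w_pozzolanic * poz

# Rank candidates from most to least promising under equal weighting.
ranked = sorted(candidates, key=score, reverse=True)
for name, hyd, poz, cat in ranked:
    print(f"{name:16s} {cat:22s} score={score((name, hyd, poz, cat)):.2f}")
```

In the actual study, of course, the scoring comes from a language-model framework reading the literature rather than hand-assigned numbers; this sketch only shows why a single comparable score per material makes hundreds of thousands of candidates sortable.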

“Some of the most interesting materials that could replace a portion of cement are ceramics,” notes Mahjoubi. “Old tiles, bricks, pottery — all these materials may have high reactivity. That’s something we’ve observed in ancient Roman concrete, where ceramics were added to help waterproof structures. I’ve had many interesting conversations on this with Professor Admir Masic, who leads a lot of the ancient concrete studies here at MIT.”

The potential of everyday materials like ceramics and industrial materials like mine tailings is an example of how materials like concrete can help enable a circular economy. By identifying and repurposing materials that would otherwise end up in landfills, researchers and industry can help to give these materials a second life as part of our buildings and infrastructure.

Looking ahead, the research team is planning to upgrade the framework to be capable of assessing even more materials, while experimentally validating some of the best candidates. “AI tools have gotten this research far in a short time, and we are excited to see how the latest developments in large language models enable the next steps,” says Professor Elsa Olivetti, senior author on the work and member of the MIT Department of Materials Science and Engineering. She serves as an MIT Climate Project mission director, a CSHub principal investigator, and the leader of the Olivetti Group.

“Concrete is the backbone of the built environment,” says Randolph Kirchain, co-author and CSHub director. “By applying data science and AI tools to material design, we hope to support industry efforts to build more sustainably, without compromising on strength, safety, or durability.”

In addition to Mahjoubi, Olivetti, and Kirchain, co-authors on the work include MIT postdoc Vineeth Venugopal; Ipek Bensu Manav SM ’21, PhD ’24; and CSHub Deputy Director Hessam AzariJafari.

245 Days Without Justice: Laila Soueif’s Hunger Strike and the Fight to Free Alaa Abd el-Fattah

EFF: Updates - Mon, 06/02/2025 - 3:14pm

Laila Soueif has now been on hunger strike for 245 days. On Thursday night, she was taken to the hospital once again. Soueif’s hunger strike is a powerful act of protest against the failures of two governments. The Egyptian government continues to deny basic justice by keeping her son, Alaa Abd el-Fattah, behind bars—his only “crime” was sharing a Facebook post about the torture of a fellow detainee. Meanwhile, the British government, despite Alaa’s citizenship, has failed to secure even a single consular visit. Its muted response reflects an unacceptable unwillingness to stand up for the rights of its own citizens.

This is the second time this year that Soueif’s health has collapsed due to her hunger strike. Now, her condition is dire. Her blood sugar is dangerously low, and every day, her family fears it could be her last. Doctors say it’s a miracle she’s still alive.

Her protest is a call for accountability—a demand that both governments uphold the rule of law and protect human rights, not only in rhetoric, but through action.

Late last week, after an 18-month investigation, the United Nations Working Group on Arbitrary Detention (UNWGAD) issued its Opinion on Abd el-Fattah’s case, stating that he is being held unlawfully by the Egyptian government. That Egypt will not provide the United Kingdom with consular access to a British citizen further violates Egypt’s obligations under international law.

As stated in a letter to British Prime Minister Keir Starmer by 21 organizations, including EFF, the UK must now use every tool it has at its disposal to ensure that Alaa Abd el-Fattah is released immediately.

MIT students and postdoc explore the inner workings of Capitol Hill

MIT Latest News - Mon, 06/02/2025 - 3:00pm

This spring, 25 MIT students and a postdoc traveled to Washington, where they met with congressional offices to advocate for federal science funding and specific, science-based policies based on insights from their research on pressing issues — including artificial intelligence, health, climate and ocean science, energy, and industrial decarbonization. Organized annually by the Science Policy Initiative (SPI), this year’s trip came at a particularly critical moment, as science agencies are facing unprecedented funding cuts.

Over the course of two days, the group met with 66 congressional offices across 35 states and select committees, advocating for stable funding for science agencies such as the Department of Energy, the National Oceanic and Atmospheric Administration, the National Science Foundation, NASA, and the Department of Defense.

Congressional Visit Days (CVD), organized by SPI, offer students and researchers a hands-on introduction to federal policymaking. In addition to meetings on Capitol Hill, participants connected with MIT alumni in government and explored potential career paths in science policy.

This year’s trip was co-organized by Mallory Kastner, a PhD student in biological oceanography at MIT and Woods Hole Oceanographic Institution (WHOI), and Julian Ufert, a PhD student in chemical engineering at MIT. Ahead of the trip, participants attended training sessions hosted by SPI, the MIT Washington Office, and the MIT Policy Lab. These sessions covered effective ways to translate scientific findings into policy, strategies for a successful advocacy meeting, and hands-on demos of a congressional meeting.

Participants then contacted their representatives’ offices in advance and tailored their talking points to each office’s committees and priorities. This structure gave participants direct experience initiating policy conversations with those actively working on issues they cared about.

Audrey Parker, a PhD student in civil and environmental engineering studying methane abatement, emphasizes the value of connecting scientific research with priorities in Congress: “Through CVD, I had the opportunity to contribute to conversations on science-backed solutions and advocate for the role of research in shaping policies that address national priorities — including energy, sustainability, and climate change.”

To many of the participants, stepping into the shoes of a policy advisor was a welcome diversion from their academic duties and scientific routine. For Alex Fan, an undergraduate majoring in electrical engineering and computer science, the trip was enlightening: “It showed me that student voices really do matter in shaping science policy. Meeting with lawmakers, especially my own representative, Congresswoman Bonamici, made the experience personal and inspiring. It has made me seriously consider a future at the intersection of research and policy.”

“I was truly impressed by the curiosity and dedication of our participants, as well as the preparation they brought to each meeting,” says Ufert. “It was inspiring to watch them grow into confident advocates, leveraging their experience as students and their expertise as researchers to advise on policy needs.”

Kastner adds: “It was eye-opening to see the disconnect between scientists and policymakers. A lot of knowledge we generate as scientists rarely makes it onto the desk of congressional staff, and even more rarely onto the congressperson’s. CVD was an incredibly empowering experience for me as a scientist — not only am I more motivated to broaden my scientific outreach to legislators, but I now also have the skills to do so.”

Funding is the bedrock that allows scientists to carry out research and make discoveries. In the United States, federal funding for science has enabled major technological breakthroughs and advancements in manufacturing and other industrial sectors, and led to important environmental protection standards. While participants found the degree of support for science funding variable among offices from across the political spectrum, they were reassured by the fact that many offices on both sides of the aisle still recognized the significance of science. 

Teaching AI models the broad strokes to sketch more like humans do

MIT Latest News - Mon, 06/02/2025 - 2:50pm

When you’re trying to communicate or understand ideas, words don’t always do the trick. Sometimes the more efficient approach is to do a simple sketch of that concept — for example, diagramming a circuit might help make sense of how the system works.

But what if artificial intelligence could help us explore these visualizations? While these systems are typically proficient at creating realistic paintings and cartoonish drawings, many models fail to capture the essence of sketching: its stroke-by-stroke, iterative process, which helps humans brainstorm and edit how they want to represent their ideas.

A new drawing system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University can sketch more like we do. Their method, called “SketchAgent,” uses a multimodal language model — AI systems that train on text and images, like Anthropic’s Claude 3.5 Sonnet — to turn natural language prompts into sketches in a few seconds. For example, it can doodle a house either on its own or through collaboration, drawing with a human or incorporating text-based input to sketch each part separately.

The researchers showed that SketchAgent can create abstract drawings of diverse concepts, like a robot, butterfly, DNA helix, flowchart, and even the Sydney Opera House. One day, the tool could be expanded into an interactive art game that helps teachers and researchers diagram complex concepts or give users a quick drawing lesson.

CSAIL postdoc Yael Vinker, who is the lead author of a paper introducing SketchAgent, notes that the system introduces a more natural way for humans to communicate with AI.

“Not everyone is aware of how much they draw in their daily life. We may draw our thoughts or workshop ideas with sketches,” she says. “Our tool aims to emulate that process, making multimodal language models more useful in helping us visually express ideas.”

SketchAgent teaches these models to draw stroke-by-stroke without training on any data — instead, the researchers developed a “sketching language” in which a sketch is translated into a numbered sequence of strokes on a grid. The system was given an example of how things like a house would be drawn, with each stroke labeled according to what it represented — such as the seventh stroke being a rectangle labeled as a “front door” — to help the model generalize to new concepts.
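The stroke-sequence idea can be made concrete with a small sketch. The representation below is an illustrative guess at what a numbered, labeled stroke language on a grid might look like; the grid size, field names, and serialization format are assumptions, not SketchAgent's actual specification.

```python
# Illustrative stroke-sequence representation, loosely following the
# description above: each stroke is a numbered, labeled polyline whose
# points lie on a small coordinate grid. Details here are hypothetical.

GRID_SIZE = 50  # assumed grid resolution

def make_stroke(number, label, points):
    """Build one numbered, labeled stroke; points are (x, y) grid cells."""
    for x, y in points:
        assert 0 <= x < GRID_SIZE and 0 <= y < GRID_SIZE, "point off grid"
    return {"number": number, "label": label, "points": points}

# A toy "house": each stroke carries a semantic label, which is what lets
# a language model associate strokes with the parts they represent.
house = [
    make_stroke(1, "left wall",  [(10, 10), (10, 30)]),
    make_stroke(2, "right wall", [(40, 10), (40, 30)]),
    make_stroke(3, "floor",      [(10, 10), (40, 10)]),
    make_stroke(4, "roof",       [(10, 30), (25, 45), (40, 30)]),
    make_stroke(5, "front door", [(22, 10), (22, 20), (28, 20), (28, 10)]),
]

def describe(sketch):
    """Serialize a sketch as one numbered 'stroke N (label): points' line
    per stroke -- the kind of text a language model can read or emit."""
    return "\n".join(
        f"stroke {s['number']} ({s['label']}): "
        + " -> ".join(f"({x},{y})" for x, y in s["points"])
        for s in sketch
    )

print(describe(house))
```

Because the sketch is just text, a pre-trained language model can produce or consume it stroke by stroke without ever having been trained on image data, which is the core of the approach described above.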

Vinker wrote the paper alongside three CSAIL affiliates — postdoc Tamar Rott Shaham, undergraduate researcher Alex Zhao, and MIT Professor Antonio Torralba — as well as Stanford University Research Fellow Kristine Zheng and Assistant Professor Judith Ellen Fan. They’ll present their work at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR) this month.

Assessing AI’s sketching abilities

While text-to-image models such as DALL-E 3 can create intriguing drawings, they lack a crucial component of sketching: the spontaneous, creative process where each stroke can impact the overall design. On the other hand, SketchAgent’s drawings are modeled as a sequence of strokes, appearing more natural and fluid, like human sketches.

Prior works have mimicked this process, too, but they trained their models on human-drawn datasets, which are often limited in scale and diversity. SketchAgent uses pre-trained language models instead, which are knowledgeable about many concepts, but don’t know how to sketch. When the researchers taught language models this process, SketchAgent began to sketch diverse concepts it hadn’t explicitly trained on.

Still, Vinker and her colleagues wanted to see if SketchAgent was actively working with humans on the sketching process, or if it was working independently of its drawing partner. The team tested their system in collaboration mode, where a human and a language model work toward drawing a particular concept in tandem. Removing SketchAgent’s contributions revealed that their tool’s strokes were essential to the final drawing. In a drawing of a sailboat, for instance, removing the artificial strokes representing a mast made the overall sketch unrecognizable.

In another experiment, CSAIL and Stanford researchers plugged different multimodal language models into SketchAgent to see which could create the most recognizable sketches. Their default backbone model, Claude 3.5 Sonnet, generated the most human-like vector graphics (essentially text-based files that can be converted into high-resolution images). It outperformed models like GPT-4o and Claude 3 Opus.

“The fact that Claude 3.5 Sonnet outperformed other models like GPT-4o and Claude 3 Opus suggests that this model processes and generates visual-related information differently,” says co-author Tamar Rott Shaham.

She adds that SketchAgent could become a helpful interface for collaborating with AI models beyond standard, text-based communication. “As models advance in understanding and generating other modalities, like sketches, they open up new ways for users to express ideas and receive responses that feel more intuitive and human-like,” says Shaham. “This could significantly enrich interactions, making AI more accessible and versatile.”

While SketchAgent’s drawing prowess is promising, it can’t make professional sketches yet. It renders simple representations of concepts using stick figures and doodles, but struggles to doodle things like logos, sentences, complex creatures like unicorns and cows, and specific human figures.

At times, their model also misunderstood users’ intentions in collaborative drawings, like when SketchAgent drew a bunny with two heads. According to Vinker, this may be because the model breaks down each task into smaller steps (also called “Chain of Thought” reasoning). When working with humans, the model creates a drawing plan, potentially misinterpreting which part of that outline a human is contributing to. The researchers could possibly refine these drawing skills by training on synthetic data from diffusion models.

Additionally, SketchAgent often requires a few rounds of prompting to generate human-like doodles. In the future, the team aims to make it easier to interact and sketch with multimodal language models, including refining their interface. 

Still, the tool suggests AI could draw diverse concepts the way humans do, with step-by-step human-AI collaboration that results in more aligned final designs.

This work was supported, in part, by the U.S. National Science Foundation, a Hoffman-Yee Grant from the Stanford Institute for Human-Centered AI, the Hyundai Motor Co., the U.S. Army Research Laboratory, the Zuckerman STEM Leadership Program, and a Viterbi Fellowship.
