Feed aggregator
Discord Voluntarily Pushes Mandatory Age Verification Despite Recent Data Breach
Discord has begun rolling out mandatory age verification and the internet is, understandably, freaking out.
At EFF, we’ve been raising the alarm about age verification mandates for years. In December, we launched our Age Verification Resource Hub to push back against laws and platform policies that require users to hand over sensitive personal information just to access basic online services. At the time, age gates were largely enforced in jurisdictions where they were mandated by law. Now they’re landing on platforms and in places where they’re not required.
Beginning in early March, users whom Discord either (a) estimates to be under 18, or (b) lacks enough information about, may find themselves locked into a “teen-appropriate experience.” That means content filters, age gates, restrictions on direct messages and friend requests, and the inability to speak in “Stage channels,” the large-audience audio spaces that power many community events. Discord says most adults should be sorted automatically through a new “age inference” system that relies on account tenure, device and activity data, and broader platform patterns. Those whose age can’t be inferred for lack of information, or who are inferred not to be adults, will be asked to scan their face or upload a government ID through a third-party vendor if they want to avoid the default teen account restrictions.
We’ve written extensively about why age verification mandates are a censorship and surveillance nightmare. Discord’s shift only reinforces those concerns. Here’s why:
The 2025 Breach and What's Changed Since
Discord literally won our 2025 “We Still Told You So” Breachies Award. Last year, attackers accessed roughly 70,000 users’ government IDs, selfies, and other sensitive information after compromising Discord’s third-party customer support system.
To be clear: Discord is no longer using that system, which involved routing ID uploads through its general ticketing system for age verification. It now uses dedicated age verification vendors (k-ID globally and Persona for some users in the United Kingdom).
That’s an improvement. But it doesn’t eliminate the underlying potential for data breaches and other harms. Discord says that it will delete records of any user-uploaded government IDs, and that any facial scans will never leave users’ devices. But platforms are closed-source, audits are limited, and history shows that data (especially this ultra-valuable identity data) will leak—whether through hacks, misconfigurations, or retention mistakes. Users are being asked to simply trust that this time will be different.
Age Verification and Anonymous Speech
For decades, we’ve taught young people a simple rule: don’t share personal information with strangers online.
Age verification complicates that advice. Suddenly, some Discord users will be asked to submit a government ID or facial scan to access certain features if Discord’s age-inference technology fails. Discord has said on its blog that it will not associate a user’s ID with their account (only using that information to confirm their age) and that identifying documents won’t be retained. We take those commitments seriously. However, users have little independent visibility into how those safeguards operate in practice or whether they are sufficient to prevent identification.
Even if Discord can technically separate IDs from accounts, many users are understandably skeptical, especially after the platform’s recent breach involving age-verification data. For people who rely on pseudonymity, being required to upload a face scan or government ID at all can feel like crossing a line.
Many people rely on anonymity to speak freely. LGBTQ+ youth, survivors of abuse, political dissidents, and countless others use aliases to explore identity, find support, and build community safely. When identity checks become a condition of participation, many users will simply opt out. The chilling effect isn’t only about whether an ID is permanently linked to an account; it’s about whether users trust the system enough to participate in the first place. When you’re worried that what you say can be traced back to your government ID, you speak differently—or not at all.
No one should have to choose between accessing online communities and protecting their privacy.
Age Verification Systems Are Not Ready for Prime Time
Discord says it is trying to address privacy concerns by using device-based facial age estimation and separating government IDs from user accounts, retaining only a user’s age rather than their identity documents. This is meant to reduce the risks of collecting and retaining this sensitive data. However, even when privacy safeguards are in place, we are faced with another problem: there is no current technology that is fully privacy-protective, universally accessible, and consistently accurate. Facial age estimation tools are notoriously unreliable, particularly for people of color, trans and nonbinary people, and people with disabilities. Stories of people bypassing these facial age estimation tools have proliferated across the internet. But when systems get it wrong, users may be forced into appeals processes or required to submit more documentation, such as government-issued IDs, which would exclude those whose appearance doesn’t match their documents and the millions of people around the world who don’t have government-issued identity documents at all.
Even newer approaches (things like age inference, behavior tracking, financial database checks, digital ID systems) expand the web of data collection, and carry their own tradeoffs around access and error. As we mentioned earlier, no current approach is simultaneously privacy-protective, universally accessible, and consistently accurate across all demographics.
That’s the challenge: the technology itself is not fit for the sweeping role platforms are asking it to play.
The Aftermath
Discord reports over 200 million monthly active users, and is one of the largest platforms used by gamers to chat. The video game industry is larger than movies, TV, and music combined, and Discord represents an almost-default option for gamers looking to host communities.
Many communities, including open-source projects, sports teams, fandoms, friend groups, and families, use Discord to stay connected. If communities or individuals are wrongly flagged as minors, or asked to complete the age verification process, they may face a difficult choice: submit to facial scans or ID checks, or accept a more restricted “teen” experience. For those who decline to go through the process, the result can mean reduced functionality, limited communication tools, and the chilling effects that follow.
Most importantly, Discord did not have to “comply in advance” by requiring age verification for all users, whether or not they live in a jurisdiction that mandates it. Other social media platforms and their trade groups have fought back against more than a dozen age verification laws in the U.S., and Reddit has now taken the legal fight internationally. For a platform with as much market power as Discord, voluntarily imposing age verification is unacceptable.
So You’ve Hit an Age Gate. Now What?
Discord should reconsider whether expanding identity checks is worth the harm to its communities. But in the meantime, many users are facing age checks today.
That’s why we created our guide, “So You’ve Hit an Age Gate. Now What?” It walks through practical steps to minimize risk, such as:
- Submit the least amount of sensitive data possible.
- Ask: What data is collected? Who can access it? How long is it retained?
- Look for evidence of independent, security-focused audits.
- Be cautious about background details in selfies or ID photos.
There is unfortunately no perfect option, only tradeoffs. And every user will have their own unique set of safety concerns to consider. Amidst this confusion, our goal is to help keep you informed, so you can make the best choices for you and your community.
In light of the harms imposed by age-verification systems, EFF encourages all services to stop adopting these systems when they are not mandated by law. And lawmakers around the world who are considering bills that would make Discord’s approach the norm for every platform should watch this backlash and move away from the idea.
If you care about privacy, free expression, and the right to participate online without handing over your identity, now is the time to speak up.
EPA repeals endangerment finding
Maria Yang named vice provost for faculty
Maria Yang ’91, the William E. Leonhard (1940) Professor in the Department of Mechanical Engineering, has been appointed vice provost for faculty at MIT, a role in which she will oversee programs and strategies to recruit and retain faculty members and support them throughout their careers.
Provost Anantha Chandrakasan announced Yang’s appointment, which is effective Feb. 16, in an email to MIT faculty and staff today.
“In the nearly two decades since Maria joined the MIT faculty, she has exemplified dedicated service to the Institute and deep interdisciplinary collaboration,” Chandrakasan wrote. He added that, in a series of leadership positions within the School of Engineering, Yang “consistently demonstrated her skill as a leader, her empathy as a colleague, and her values-driven decision-making.”
As vice provost for faculty, Yang will play a pivotal role in creating an environment where MIT’s faculty members are able to do their best work, “pursuing bold ideas with excellence and creativity,” according to Chandrakasan’s letter. She will partner with school and department leaders on faculty recruitment and retention, mentorship, and strategic planning, and she will oversee programs to support faculty members’ professional development at every stage of their careers.
“Part of what makes MIT unique is the way it provides faculty the room and the encouragement to do work that they think is important, impactful, and sometimes unexpected,” says Yang. “I think it’s vital to foster a culture and a sense of community that really enables our faculty to perform at their best — as researchers, of course, but also as educators and mentors, and as citizens of MIT.”
In addition to her role supporting MIT faculty, Yang will also handle oversight and planning responsibilities for campus academic and research spaces, in partnership with the Office of the Executive Vice President and Treasurer. She will also serve as the principal investigator for the National Science Foundation’s New England Innovation Corps Hub, oversee MIT Solve, and represent the provost on various boards and committees, such as MIT International and the Axim Collaborative.
Yang, who attended MIT as an undergraduate in mechanical engineering as part of the Class of 1991 before earning her master’s and PhD degrees from the design division of the mechanical engineering department at Stanford University, returned to MIT in 2007 as an assistant professor. She has held a number of leadership positions at MIT, including associate dean, deputy dean, and interim dean of the School of Engineering.
In 2021, Yang co-chaired an Institute-wide committee on the future of design, which recommended the creation of a center to support design opportunities at MIT. Through a generous gift from the Morningside Foundation, the recommendation came to life as the interdisciplinary Morningside Academy for Design (MAD), where Yang has served as associate director since its inception. Yang has been instrumental in the development of several new programs at MAD, including design-focused graduate fellowships open to students across MIT and a new design-themed first-year learning community.
Since 2017, Yang has also served as academic faculty director for MIT D-Lab, which uses participatory design to collaborate with communities around the world on the development of solutions to poverty challenges. And since 2024, Yang has served as a co-chair of the SHASS+ Connectivity Fund, which funds research projects in which scholars in the School of Humanities, Arts, and Social Sciences collaborate with faculty colleagues from other schools at MIT.
Given Yang’s extensive track record of working across disciplinary lines, Chandrakasan said in his letter that he had “no doubt that in her new role she will be an effective and trusted champion for colleagues across the Institute.”
An internationally recognized leader in design theory and methodology, Yang is currently focused on researching the early-stage processes used to create successful designs for everything from consumer products to complex, large-scale engineering systems, and the role that these early-stage processes play in determining design outcomes.
Yang, a fellow of the American Society of Mechanical Engineers (ASME), received the 2024 ASME Design Theory and Methodology Award, recognizing “sustained and meritorious contributions” in the field. She has also been recognized with a National Science Foundation CAREER award and the American Society of Engineering Education Fred Merryfield Design Award. In 2017 Yang was named a MacVicar Faculty Fellow, one of MIT’s highest teaching honors.
Yang succeeds Institute Professor Paula Hammond, who had served in the role since 2023 before being named dean of the School of Engineering, a position she assumed in January.
3D Printer Surveillance
New York is contemplating a bill that adds surveillance to 3D printers:
New York’s 2026–2027 executive budget bill (S.9005 / A.10005) includes language that should alarm every maker, educator, and small manufacturer in the state. Buried in Part C is a provision requiring all 3D printers sold or delivered in New York to include “blocking technology.” This is defined as software or firmware that scans every print file through a “firearms blueprint detection algorithm” and refuses to print anything it flags as a potential firearm or firearm component...
States target oil giants’ wealth as climate damages rise
Hearing on looming shutdown turns into FEMA fight
12 states debate heat rules as Trump delays action
Union asks judge to block upcoming FEMA staff cuts
China’s emissions fall as US scraps bedrock climate rules
European chemical giants plot to weaken EU’s flagship climate policy
California lawmaker reintroduces bill to expand CARB’s regulatory authority
Climate change set the stage for Argentina and Chile fires, says study
Death toll rises to 31 after Tropical Cyclone Gezani hits Madagascar
Accelerating science with AI and simulations
For more than a decade, MIT Associate Professor Rafael Gómez-Bombarelli has used artificial intelligence to create new materials. As the technology has expanded, so have his ambitions.
Now, the newly tenured professor in materials science and engineering believes AI is poised to transform science in ways never before possible. His work at MIT and beyond is devoted to accelerating that future.
“We’re at a second inflection point,” Gómez-Bombarelli says. “The first one was around 2015 with the first wave of representation learning, generative AI, and high-throughput data in some areas of science. Those are some of the techniques I first brought into my lab at MIT. Now I think we’re at a second inflection point, mixing language and merging multiple modalities into general scientific intelligence. We’re going to have all the model classes and scaling laws needed to reason about language, reason over material structures, and reason over synthesis recipes.”
Gómez-Bombarelli’s research combines physics-based simulations with approaches like machine learning and generative AI to discover new materials with promising real-world applications. His work has led to new materials for batteries, catalysts, plastics, and organic light-emitting diodes (OLEDs). He has also co-founded multiple companies and served on scientific advisory boards for startups applying AI to drug discovery, robotics, and more. His latest company, Lila Sciences, is working to build a scientific superintelligence platform for the life sciences, chemical, and materials science industries.
All of that work is designed to ensure the future of scientific research is more seamless and productive than research today.
“AI for science is one of the most exciting and aspirational uses of AI,” Gómez-Bombarelli says. “Other applications for AI have more downsides and ambiguity. AI for science is about bringing a better future forward in time.”
From experiments to simulations
Gómez-Bombarelli grew up in Spain and gravitated toward the physical sciences from an early age. In 2001, he won a Chemistry Olympiad competition, setting him on an academic track in chemistry, which he studied as an undergraduate at his hometown college, the University of Salamanca. Gómez-Bombarelli stuck around for his PhD, where he investigated the function of DNA-damaging chemicals.
“My PhD started out experimental, and then I got bitten by the bug of simulation and computer science about halfway through,” he says. “I started simulating the same chemical reactions I was measuring in the lab. I like the way programming organizes your brain; it felt like a natural way to organize one’s thinking. Programming is also a lot less limited by what you can do with your hands or with scientific instruments.”
Next, Gómez-Bombarelli went to Scotland for a postdoctoral position, where he studied quantum effects in biology. Through that work, he connected with Alán Aspuru-Guzik, a chemistry professor at Harvard University, whom he joined for his next postdoc in 2014.
“I was one of the first people to use generative AI for chemistry in 2016, and I was on the first team to use neural networks to understand molecules in 2015,” Gómez-Bombarelli says. “It was the early, early days of deep learning for science.”
Gómez-Bombarelli also began working to eliminate manual parts of molecular simulations to run more high-throughput experiments. He and his collaborators ended up running hundreds of thousands of calculations across materials, discovering hundreds of promising materials for testing.
After two years in the lab, Gómez-Bombarelli and Aspuru-Guzik started a general-purpose materials computation company, which eventually pivoted to focus on producing organic light-emitting diodes. Gómez-Bombarelli joined the company full-time and calls it the hardest thing he’s ever done in his career.
“It was amazing to make something tangible,” he says. “Also, after seeing Aspuru-Guzik run a lab, I didn’t want to become a professor. My dad was a professor in linguistics, and I thought it was a mellow job. Then I saw Aspuru-Guzik with a 40-person group, and he was on the road 120 days a year. It was insane. I didn’t think I had that type of energy and creativity in me.”
In 2018, Aspuru-Guzik suggested Gómez-Bombarelli apply for a new position in MIT’s Department of Materials Science and Engineering. But, with his trepidation about a faculty job, Gómez-Bombarelli let the deadline pass. Aspuru-Guzik confronted him in his office, slammed his hands on the table, and told him, “You need to apply for this.” It was enough to get Gómez-Bombarelli to put together a formal application.
Fortunately at his startup, Gómez-Bombarelli had spent a lot of time thinking about how to create value from computational materials discovery. During the interview process, he says, he was attracted to the energy and collaborative spirit at MIT. He also began to appreciate the research possibilities.
“Everything I had been doing as a postdoc and at the company was going to be a subset of what I could do at MIT,” he says. “I was making products, and I still get to do that. Suddenly, my universe of work was a subset of this new universe of things I could explore and do.”
It’s been nine years since Gómez-Bombarelli joined MIT. Today his lab focuses on how the composition, structure, and reactivity of atoms impact material performance. He has also used high-throughput simulations to create new materials and helped develop tools for merging deep learning with physics-based modeling.
“Physics-based simulations generate data, and AI algorithms get better the more data you give them,” Gómez-Bombarelli says. “There are all sorts of virtuous cycles between AI and simulations.”
The research group he has built is solely computational — they don’t run physical experiments.
“It’s a blessing because we can have a huge amount of breadth and do lots of things at once,” he says. “We love working with experimentalists and try to be good partners with them. We also love to create computational tools that help experimentalists triage the ideas coming from AI.”
Gómez-Bombarelli is also still focused on the real-world applications of the materials he invents. His lab works closely with companies and organizations like MIT’s Industrial Liaison Program to understand the material needs of the private sector and the practical hurdles of commercial development.
Accelerating science
As excitement around artificial intelligence has exploded, Gómez-Bombarelli has seen the field mature. Companies like Meta, Microsoft, and Google’s DeepMind now regularly conduct physics-based simulations reminiscent of what he was working on back in 2016. In November, the U.S. Department of Energy launched the Genesis Mission to accelerate scientific discovery, national security, and energy dominance using AI.
“AI for simulations has gone from something that maybe could work to a consensus scientific view,” Gómez-Bombarelli says. “We’re at an inflection point. Humans think in natural language, we write papers in natural language, and it turns out these large language models that have mastered natural language have opened up the ability to accelerate science. We’ve seen that scaling works for simulations. We’ve seen that scaling works for language. Now we’re going to see how scaling works for science.”
When he first came to MIT, Gómez-Bombarelli says he was blown away by how non-competitive things were between researchers. He tries to bring that same positive-sum thinking to his research group, which is made up of about 25 graduate students and postdocs.
“We’ve naturally grown into a really diverse group, with a diverse set of mentalities,” Gómez-Bombarelli says. “Everyone has their own career aspirations and strengths and weaknesses. Figuring out how to help people be the best versions of themselves is fun. Now I’ve become the one insisting that people apply to faculty positions after the deadline. I guess I’ve passed that baton.”
🗣 Homeland Security Wants Names | EFFector 38.3
Criticize the government online? The Department of Homeland Security (DHS) might ask Google to cough up your name. By abusing an investigative tool called "administrative subpoenas," DHS has been demanding that tech companies hand over users' names, locations, and more. We're explaining how companies can stand up for users—and covering the latest news in the fight for privacy and free speech online—with our EFFector newsletter.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks our campaign to expand end-to-end encryption protections, a bill to stop government face scans from Immigration and Customs Enforcement (ICE) and others, and why Section 230 remains the best available system to protect everyone’s ability to speak online.
Prefer to listen in? In our audio companion, EFF Senior Staff Attorney F. Mario Trujillo explains how Homeland Security's lawless subpoenas differ from court orders. Find the conversation on YouTube or the Internet Archive.
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against unlawful government surveillance when you support EFF today!
“Free” Surveillance Tech Still Comes at a High and Dangerous Cost
Surveillance technology vendors, federal agencies, and wealthy private donors have long helped provide local law enforcement “free” access to surveillance equipment that bypasses local oversight. The result is predictable: serious accountability gaps and data pipelines to other entities, including Immigration and Customs Enforcement (ICE), that expose millions of people to harm.
The cost of “free” surveillance tools — like automated license plate readers (ALPRs), networked cameras, face recognition, drones, and data aggregation and analysis platforms — is measured not in tax dollars, but in the erosion of civil liberties.
The collection and sharing of our data quietly generates detailed records of people’s movements and associations that can be exposed, hacked, or repurposed without their knowledge or consent. Those records weaken sanctuary and First Amendment protections while facilitating the targeting of vulnerable people.
Cities can and should use their power to reject federal grants, vendor trials, donations from wealthy individuals, or participation in partnerships that facilitate surveillance and experimentation with spy tech.
If these projects are greenlit, oversight is imperative. Mechanisms like public hearings, competitive bidding, public records transparency, and city council supervision help ensure these acquisitions include basic safeguards — like use policies, audits, and consequences for misuse — to protect the public from abuse and from creeping contracts that grow into whole suites of products.
Clear policies and oversight mechanisms must be in place before using any surveillance tools, free or not, and communities and their elected officials must be at the center of every decision about whether to bring these tools in at all.
Here are some of the most common ways “free” surveillance tech makes its way into communities.
Trials and Pilots
Police departments are regularly offered free access to surveillance tools and software through trials and pilot programs that often aren’t accompanied by appropriate use policies. In many jurisdictions, trials do not trigger the same requirements to go before decision-makers outside the police department. This means the public may have no idea that a pilot program for surveillance technology is happening in their city.
In Denver, Colorado, the police department is running trials of possible unmanned aerial vehicles (UAVs) for a drone-as-first-responder (DFR) program from two competing drone vendors: Flock Safety Aerodome drones (through August 2026) and drones from the company Skydio, partnering with Axon, the multi-billion dollar police technology company behind tools like Tasers and AI-generated police reports. Drones create unique issues given their vantage for capturing private property and unsuspecting civilians, as well as their capacity to make other technologies, like ALPRs, airborne.
Functional, Even Without Funding
We’ve seen cities decide not to fund a tool, or run out of funding for it, only to have a company continue providing it in the hope that money will turn up. This happened in Fall River, Massachusetts, where the police department decided not to fund ShotSpotter’s $90,000 annual cost and its frequent false alarms, but continued using the system when the company provided free access.
In May 2025, Denver's city council unanimously rejected a $666,000 contract extension for Flock Safety ALPR cameras after weeks of public outcry over mass surveillance data sharing with federal immigration enforcement. But Mayor Mike Johnston’s office allowed the cameras to keep running through a “task force” review, effectively extending the program even after the contract was voted down. In response, the Denver Taskforce to Reimagine Policing and Public Safety and Transforming Our Communities Alliance launched a grassroots campaign demanding the city “turn Flock cameras off now,” a reminder that when surveillance starts as a pilot or time‑limited contract, communities often have to fight not just to block renewals but to shut the systems off.
Importantly, police technology companies are developing more features and subscription-based models, so what’s “free” today frequently results in taxpayers footing the bill later.
Gifts from Police Foundations and Wealthy Donors
Police foundations and the wealthy have pushed surveillance-driven agendas in their local communities by donating equipment and making large monetary gifts, another means of acquiring these tools without public oversight or buy-in.
In Atlanta, the Atlanta Police Foundation (APF) attempted to use its position as a private entity to circumvent transparency. Following a court challenge from the Atlanta Community Press Collective and Lucy Parsons Labs, a Georgia court determined that the APF must comply with public records laws related to some of its actions and purchases on behalf of law enforcement.
In San Francisco, billionaire Chris Larsen has financially supported a supercharging of the city’s surveillance infrastructure, donating $9.4 million to fund the San Francisco Police Department’s (SFPD) Real-Time Investigation Center, where a menu of surveillance technologies and data come together to surveil the city’s residents. This move comes after the billionaire backed a ballot measure, which passed in March 2025, eroding the city’s surveillance technology law and allowing the SFPD free rein to use new surveillance technologies for a full year without oversight.
Federal grants and Department of Homeland Security funding are another way surveillance technology appears free, only to lock municipalities into long‑term data‑sharing and recurring costs.
Through the Homeland Security Grant Program, which includes the State Homeland Security Program (SHSP) and the Urban Area Security Initiative (UASI), and Department of Justice programs like Byrne JAG, the federal government reimburses states and cities for "homeland security" equipment and software, including law‑enforcement surveillance tools, analytics platforms, and real‑time crime centers. Grant guidance and vendor marketing materials make clear that these funds can be used for automated license plate readers, integrated video surveillance and analytics systems, and centralized command‑center software—in other words, purchases framed as counterterrorism investments but deployed in everyday policing.
Vendors have learned to design products around this federal money, pitching ALPR networks, camera systems, and analytic platforms as "grant-ready" solutions that can be acquired with little or no upfront local cost. Motorola Solutions, for example, advertises how SHSP and UASI dollars can be used for "law enforcement surveillance equipment" and "video surveillance, warning, and access control" systems. Flock Safety, partnering with Lexipol, a company that writes use policies for law enforcement, offers a "License Plate Readers Grant Assistance Program" that helps police departments identify federal and state grants and tailor their applications to fund ALPR projects.
Grant assistance programs let police chiefs fast‑track new surveillance: the paperwork is outsourced, the grant eats the upfront cost, and even when there is a formal paper trail, the practical checks from residents, councils, and procurement rules often get watered down or bypassed.
On paper, these systems arrive “for free” through a federal grant; in practice, they lock cities into recurring software, subscription, and data‑hosting fees that quietly turn into permanent budget lines—and a lasting surveillance infrastructure—as soon as police and prosecutors start to rely on them. In Santa Cruz, California, the police department explicitly sought to use a DHS-funded SHSP grant to pay for a new citywide network of Flock ALPR cameras at the city's entrances and exits, with local funds covering additional cameras. In Sumner, Washington, a $50,000 grant was used to cover the entire first year of a Flock system — including installation and maintenance — after which the city is on the hook for roughly $39,000 every year in ongoing fees. The free grant money opens the door, but local governments are left with years of financial, political, and permanent surveillance entanglements they never fully vetted.
The most dangerous cost of this "free" funding is not just budgetary; it is the way it ties local systems into federal data pipelines. Since 9/11, DHS has used these grant streams to build a nationwide network of roughly 80 state and regional fusion centers that integrate and share data from federal, state, local, tribal, and private partners. Research shows that state fusion centers rely heavily on the DHS Homeland Security Grant Program (especially SHSP and UASI) to "mature their capabilities," with some centers reporting that 100 percent of their annual expenditures are covered by these grants.
Civil rights investigations have documented how this funding architecture creates a backdoor channel for ICE and other federal agencies to access local surveillance data for their own purposes. A recent report by the Surveillance Technology Oversight Project (S.T.O.P.) describes ICE agents using a Philadelphia‑area fusion center to query the city’s ALPR network to track undocumented drivers in a self‑described sanctuary city.
Ultimately, federal grants follow the same script as trials and foundation gifts: what looks “free” ends up costing communities their data, their sanctuary protections, and their power over how local surveillance is used.
Protecting Yourself Against “Free” Technology
The most important protection against "free" surveillance technology is to reject it outright. Cities do not have to accept federal grants, vendor trials, or philanthropic donations. Saying no to "free" tech is not just a policy choice; it is a political power that local governments possess and can exercise. Communities and their elected officials can and should refuse surveillance systems that arrive through federal grants, vendor pilots, or private donations, regardless of how attractive the initial price tag appears.
For those cities that have already accepted surveillance technology, the imperative is equally clear: shut it down. When a community has rejected use of a spying tool, the capabilities, equipment, and data collected from that tool should be shut off immediately. Full stop.
And for any surveillance technology that remains in operation, even temporarily, there must be clear rules: when and how equipment is used, how data is retained and shared, who owns the data and how companies can access and use it, transparency requirements, and consequences for any misuse or abuse.
“Free” surveillance technology is never free. Someone profits or gains power from it. Police technology vendors, federal agencies, and wealthy donors do not offer these systems out of generosity; they offer them because surveillance serves their interests, not ours. That is the real cost of “free” surveillance.
Rewiring Democracy Ebook is on Sale
I just noticed that the ebook version of Rewiring Democracy is on sale for $5 on Amazon, Apple Books, Barnes & Noble, Books A Million, Google Play, Kobo, and presumably everywhere else in the US. I have no idea how long this will last.
Using synthetic biology and AI to address global antimicrobial resistance threat
James J. Collins, the Termeer Professor of Medical Engineering and Science at MIT and faculty co-lead of the Abdul Latif Jameel Clinic for Machine Learning in Health, is embarking on a multidisciplinary research project that applies synthetic biology and generative artificial intelligence to the growing global threat of antimicrobial resistance (AMR).
The research project is sponsored by Jameel Research, part of the Abdul Latif Jameel International network. The initial three-year, $3 million research project in MIT’s Department of Biological Engineering and Institute of Medical Engineering and Science focuses on developing and validating programmable antibacterials against key pathogens.
AMR — driven by the overuse and misuse of antibiotics — has accelerated the rise of drug-resistant infections, while the development of new antibacterial tools has slowed. The impact is felt worldwide, especially in low- and middle-income countries, where limited diagnostic infrastructure leads to delayed or ineffective treatment.
The project centers on developing a new generation of targeted antibacterials using AI to design small proteins to disable specific bacterial functions. These designer molecules would be produced and delivered by engineered microbes, providing a more precise and adaptable approach than traditional antibiotics.
“This project reflects my belief that tackling AMR requires both bold scientific ideas and a pathway to real-world impact,” Collins says. “Jameel Research is keen to address this crisis by supporting innovative, translatable research at MIT.”
Mohammed Abdul Latif Jameel ’78, chair of Abdul Latif Jameel, says, “antimicrobial resistance is one of the most urgent challenges we face today, and addressing it will require ambitious science and sustained collaboration. We are pleased to support this new research, building on our long-standing relationship with MIT and our commitment to advancing research across the world, to strengthen global health and contribute to a more resilient future.”
Prompt Injection Via Road Signs
Interesting research: “CHAI: Command Hijacking Against Embodied AI.”
Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents: drone emergency landing, autonomous driving, and aerial object tracking, and on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness...
