EFF: Updates
Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON 33
Hacker Summer Camp is almost here... and with it comes the Third Annual EFF Benefit Poker Tournament at DEF CON 33 hosted by security expert Tarah Wheeler.
Please join us at the same place and time as last year: Friday, August 8th, at high noon at the Horseshoe Poker Room. The fees haven’t changed; it’s still $250 to register plus $100 the day of the tournament with unlimited rebuys. (AND your registration donation covers your EFF membership for the year.)
Tarah Wheeler—EFF board member and resident poker expert—has been working hard on the tournament since last year! Lintile will emcee this year, and there are going to be bug bounties! When you knock someone out of the tournament, you'll get a bounty pin. Prizes—and major bragging rights—go to the player with the most bounty pins. Be sure to register today and see Lintile in action!
Did we mention there will be Celebrity Bounties? Knock out Wendy Nather, Chris “WeldPond” Wysopal, or Jake “MalwareJake” Williams and get neat EFF swag and the respect of your peers! Plus, as always, knock out Tarah's dad Mike, and she'll donate $250 to EFF in your name!
Find Full Event Details and Registration
Have a friend who might be interested but isn't sure how to play? Have you played some poker before but could use a refresher? Join poker pro Mike Wheeler (Tarah's dad) and celebrities for a free poker clinic from 11:00 am to 11:45 am, just before the tournament. Mike will cover the rules, strategy, table etiquette, and general Vegas slang at the poker table. Even if you know poker pretty well, come a bit early and help out.
Register today and reserve your deck. Be sure to invite your friends to join you!
Oppose STOP CSAM: Protecting Kids Shouldn’t Mean Breaking the Tools That Keep Us Safe
A Senate bill re-introduced this week threatens security and free speech on the internet. EFF urges Congress to reject the STOP CSAM Act of 2025 (S. 1829), which would undermine services offering end-to-end encryption and force internet companies to take down lawful user content.
Tell Congress Not to Outlaw Encrypted Apps
As in the version introduced last Congress, S. 1829 purports to limit the online spread of child sexual abuse material (CSAM), also known as child pornography. CSAM is already highly illegal. Existing law already requires online service providers who have actual knowledge of “apparent” CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC). NCMEC then forwards actionable reports to law enforcement agencies for investigation.
S. 1829 goes much further than current law and threatens to punish any service that works to keep its users secure, including those that do their best to eliminate and report CSAM. The bill applies to “interactive computer services,” which broadly includes private messaging and email apps, social media platforms, cloud storage providers, and many other internet intermediaries and online service providers.
The Bill Threatens End-to-End Encryption
The bill makes it a crime to intentionally “host or store child pornography” or knowingly “promote or facilitate” the sexual exploitation of children. The bill also opens the door for civil lawsuits against providers for the intentional, knowing, or even reckless “promotion or facilitation” of conduct relating to child exploitation, the “hosting or storing of child pornography,” or for “making child pornography available to any person.”
The terms “promote” and “facilitate” are broad, and civil liability may be imposed under a low recklessness state-of-mind standard. This means a court could find an app or website liable for hosting CSAM even if the provider did not know it was hosting CSAM, for example because it employs end-to-end encryption and cannot view the content its users upload.
Creating new criminal and civil claims against providers based on broad terms and low standards will undermine digital security for all internet users. Because the law already prohibits the distribution of CSAM, the bill’s broad terms could be interpreted as reaching more passive conduct, like merely providing an encrypted app.
Due to the nature of their services, encrypted communications providers who receive a notice of CSAM may be deemed to have “knowledge” under the criminal law even if they cannot verify and act on that notice. And there is little doubt that plaintiffs’ lawyers will (wrongly) argue that merely providing an encrypted service that can be used to store any image—not necessarily CSAM—recklessly facilitates the sharing of illegal content.
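To make the technical stakes concrete, here is a minimal sketch of end-to-end encryption, using the PyNaCl library as an illustrative assumption (real messengers use more elaborate protocols, and this is not any particular provider's implementation). The point is architectural: the service only ever handles ciphertext, so there is nothing it could scan or verify.

```python
# Minimal end-to-end encryption sketch with PyNaCl (illustrative only;
# deployed messengers use more elaborate protocols such as Signal's).
from nacl.public import PrivateKey, Box

# Keys are generated on the users' devices and never sent to the server.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message that only Bob can decrypt.
sealed = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The provider stores and relays only this ciphertext. Without a private
# key, it cannot tell whether the bytes are a recipe, a photo, or CSAM.
server_stored_blob = bytes(sealed)

# Bob decrypts on his own device with his private key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(sealed)
assert plaintext == b"meet at noon"
```

A provider architected this way has no way to act on a notice about content it cannot read, yet under S. 1829 it could still face claims that it “recklessly” facilitated whatever those opaque bytes turn out to be.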
Affirmative Defense Is Expensive and Insufficient
While the bill includes an affirmative defense that a provider can raise if it is “technologically impossible” to remove the CSAM without “compromising encryption,” it is not sufficient to protect our security. Online services that offer encryption shouldn’t have to face the impossible task of proving a negative in order to avoid lawsuits over content they can’t see or control.
First, by making this protection an affirmative defense, providers must still defend against litigation, with significant costs to their business. Not every platform will have the resources to fight these threats in court, especially newcomers that compete with entrenched giants like Meta and Google. Encrypted platforms should not have to rely on prosecutorial discretion or favorable court rulings after protracted litigation. Instead, specific exemptions for encrypted providers should be addressed in the text of the bill.
Second, although technologies like client-side scanning break encryption, members of Congress have misleadingly claimed otherwise. Plaintiffs are likely to argue that providers who do not use these techniques are acting recklessly, leading many apps and websites to scan all of the content on their platforms and remove any content that a state court could find, even wrongfully, is CSAM.
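For readers unfamiliar with the term, the toy sketch below shows the basic structure of client-side scanning. This is our own simplified illustration using an exact hash match; real proposals use perceptual hashes, but the shape is the same. Because the check runs against plaintext on the user's device, before encryption, the “end-to-end” guarantee is already compromised, which is exactly the critics' point.

```python
# Toy client-side scanning sketch (our own illustration, not any real
# system): content is checked on-device *before* it is encrypted.
import hashlib

# Hypothetical externally supplied database of forbidden content hashes.
# Deployed proposals use perceptual hashes; exact SHA-256 keeps this simple.
BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def client_side_scan(content: bytes) -> bool:
    """Return True if the content matches the blocklist.

    The crucial point: this runs against the *plaintext* on the user's
    device, so whoever controls the blocklist gains an inspection point
    inside an ostensibly end-to-end encrypted app.
    """
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

message = b"dinner at 7?"
if client_side_scan(message):
    print("flagged: message would be reported, not sent")
else:
    print("clean: message would now be encrypted and sent")
```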
Tell Congress Not to Outlaw Encrypted Apps
The Bill Threatens Free Speech by Creating a New Exception to Section 230
The bill allows a new type of lawsuit to be filed against internet platforms, accusing them of “facilitating” child sexual exploitation based on the speech of others. It does this by creating an exception to Section 230, the foundational law of the internet and online speech. Section 230 provides partial immunity to internet intermediaries when sued over content posted by their users. Without that protection, platforms are much more likely to aggressively monitor and censor users.
Section 230 creates the legal breathing room for internet intermediaries to create online spaces for people to freely communicate around the world, with low barriers to entry. However, creating a new exception that exposes providers to more lawsuits will cause them to limit that legal exposure. Online services will censor more and more user content and accounts, with minimal regard for whether that content is in fact legal. Some platforms may be forced to shut down, or may never get off the ground in the first place, for fear of being swept up in a flood of litigation over alleged CSAM. On balance, this harms all internet users who rely on intermediaries to connect with their communities and the world at large.
Despite Changes, A.B. 412 Still Harms Small Developers
California lawmakers are continuing to promote a bill that would reinforce the power of giant AI companies by burying small AI companies and non-commercial developers in red tape, copyright demands, and, potentially, lawsuits. After several amendments, the bill hasn’t improved much, and in some ways it has actually gotten worse. If A.B. 412 is passed, it will make California’s economy less innovative and less competitive.
The Bill Threatens Small Tech Companies
A.B. 412 masquerades as a transparency bill, but it’s actually a government-mandated “reading list” that will allow rights holders to file a new type of lawsuit in state court, even as the federal courts continue to assess whether and how federal copyright law applies to the development of generative AI technologies.
The bill would require developers—even two-person startups—to keep lists of training materials that are “registered, pre-registered or indexed” with the U.S. Copyright Office, and to help rights holders create digital “fingerprints” of those works—a technical task with no established standards and no realistic path for small teams to follow. Even if it were limited to registered copyrighted material, that’s a monumental task, as we explained in March when we examined the earlier text of A.B. 412.
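To illustrate why the “fingerprint” requirement is underspecified, consider the naive approach a small developer might reach for: a cryptographic hash. This is our own hypothetical sketch, not anything defined in A.B. 412, and it shows the problem: changing a single byte of a work produces a completely different digest, so the fingerprint cannot match re-encoded, cropped, or excerpted copies, and no standard robust alternative exists for the bill to point to.

```python
# Naive "digital fingerprint" sketch: a cryptographic hash of the file.
# Illustrative assumption only; A.B. 412 does not define what counts.
import hashlib
from pathlib import Path

def naive_fingerprint(path: str) -> str:
    """SHA-256 of the raw bytes: cheap to compute, but useless against
    re-encoding, cropping, or excerpting, since any byte-level change
    yields an unrelated digest."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

original = hashlib.sha256(b"The quick brown fox").hexdigest()
one_byte_off = hashlib.sha256(b"The quick brown fox.").hexdigest()
print(original == one_byte_off)  # False: the "fingerprint" no longer matches
```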
The bill’s amendments have made compliance even harder: it now requires technologists to go beyond copyrighted material and somehow identify “pre-registered” copyrights. The amended bill also adds new requirements that technologists document and track their use of works that aren’t copyrighted but are subject to exclusive rights, such as pre-1972 sound recordings—rights that, not coincidentally, are primarily controlled by large entertainment companies.
The penalties for noncompliance are steep—up to $1,000 per day per violation—putting small developers at enormous financial risk even for accidental lapses.
The goal of this list is clear: to let big content companies more easily file lawsuits against software developers, big and small. And for most AI developers, the burden will be crushing. Under A.B. 412, a two-person startup building an open-source chatbot, or an indie developer fine-tuning a language model for disability access, would face the same compliance burdens as Google or Meta.
Reading and Analyzing the Open Web Is Not a Crime
It’s critical to remember that AI training is very likely protected by fair use under U.S. copyright law—a point that’s still being worked out in the courts. The idea that we should preempt that process with sweeping state regulation is not just premature; it’s dangerous.
It’s also worth noting that copyright is governed by federal law. Federal courts are already working to define the boundaries of fair use and copyright in the AI context—the California legislature should let them do their job. A.B. 412 tries to create a state-level regulatory scheme in an area that belongs in federal hands—a risky legal overreach that could further complicate an already unsettled policy space.
A.B. 412 is a solution in search of a problem. The courthouse doors are far from closed to content owners who want to dispute the use of their copyrighted works. Multiple high-profile lawsuits over the use of copyrighted works in AI training are working their way through trial and appeals courts right now.
Scope Creep
Rather than narrowing its focus to make compliance more realistic, the latest amendments to A.B. 412 actually expand the scope of covered works. The bill now demands documentation of obscure categories of content like pre-1972 sound recordings, whose rights are often murky and largely controlled by major media companies.
The bill also adds “preregistered” and indexed works to its coverage. Preregistration, designed to help entertainment companies punish unauthorized copying even before commercial release, expands the universe of content that developers must track—without offering any meaningful help to small creators.
A Moat Serving Big Tech
Ironically, the companies that will benefit most from A.B. 412 are the very same large tech firms that lawmakers often claim they want to regulate. Big companies can hire teams of lawyers and compliance officers to handle these requirements. Small developers? They’re more likely to shut down, sell out, or never enter the field in the first place.
This bill doesn’t create a fairer marketplace. It builds a regulatory moat around the incumbents, locking out new competitors and ensuring that only a handful of companies have the resources to develop advanced AI systems. Truly innovative technology often comes from unknown or small companies, but A.B. 412 threatens to turn California—and anyone who does business there—into a fortress where only the biggest players survive.
A Lopsided Bill
A.B. 412 is becoming an increasingly extreme and one-sided piece of legislation. It’s a maximalist wishlist for legacy rights-holders, delivered at the expense of small developers and the public. The result will be less competition, less innovation, and fewer choices for consumers—not more protection for creators.
This new version does close a few loopholes, and expands the period for AI developers to respond to copyright demands from 7 days to 30 days. But it seriously fails to close others: for instance, the exemption for noncommercial development applies only to work done “exclusively for noncommercial academic or governmental” institutions. That still leaves a huge window to sue hobbyists and independent researchers who don’t have university or government jobs.
While the bill nominally exempts developers who use only public or developer-owned data, that’s a carve-out with no practical value. Nearly every meaningful AI system, like a search engine, relies on mixed sources, and developers can’t realistically track the copyright status of them all.
At its core, A.B. 412 is a flawed bill that would harm the whole U.S. tech ecosystem. Lawmakers should be advancing policies that protect privacy, promote competition, and ensure that innovation benefits the public—not just a handful of entrenched interests.
If you’re a California resident, now is the time to speak out. Tell your legislators that A.B. 412 will hurt small companies, help big tech, and lock California’s economy in the past.
35 Years for Your Freedom Online
Once upon a time we were promised flying cars and jetpacks. Yet we've arrived at a more complicated timeline where rights advocates can find themselves defending our hard-earned freedoms more often than shooting for the moon. In tough times, it's important to remember that your vision for the future can be just as valuable as the work you do now.
Thirty-five years ago, a small group of folks saw the coming digital future and banded together to ensure that technology would empower people, not oppress them—and EFF was born. While the dangers of corporate and state forces grew alongside the internet, EFF and supporters like you faithfully rose to the occasion. Will you help celebrate EFF’s 35th anniversary and donate in support of digital freedom?
Protect Online Privacy & Free Expression
Together we’ve won many fights for encryption, free speech, innovation, and privacy online. Yet it’s plain to see that we must keep advocating for technology users, whether that’s in the courts, before lawmakers, through public education, or by creating privacy-enhancing tools. EFF members make it possible—you can lend a hand and get some great perks!
Summer Swag Is Here
We love making stuff for EFF’s members each year. It’s our way of saying thanks for supporting the mission for your rights online, and we hope it’s your way of starting a conversation about internet freedom with people in your life.
Celebrate EFF's 35th Anniversary in the digital rights movement with this EFF35 Cityscape member t-shirt by Hugh D’Andrade! EFF has a not-so-secret weapon that keeps us in the fight even when the odds are against us: we never lose sight of our vision for a better future. Choose a roomy Classic Fit Crewneck or a soft Slim Fit V-Neck.
And enjoy Lovelace-Klimtian vibes on EFF’s new Motherboard Hooded Sweatshirt by Shirin Mori. Gold details and orange poppies pop on lush forest green. Don't lose the forest for the trees—keep fighting for a world where tech supports people IRL.
Join the Sustaining Donor Challenge (It’s Easy)
You'll get a numbered EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.
If you're already a Sustaining Donor—THANKS! You too can get an EFF 35th Anniversary Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift and let us know by emailing upgrade@eff.org. Get started at eff.org/recurring or go to your PayPal account if you used one.
Support internet freedom with a no-fuss automated recurring donation! Over 30% of EFF members have joined as Sustaining Donors to defend digital rights (and get some great swag every year). Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.
With your help, EFF is here to stay.
Protect Online Privacy & Free Expression
NYC Lets AI Gamble With Child Welfare
The Markup revealed last month that New York City’s Administration for Children’s Services (ACS) has been quietly deploying an algorithmic tool to categorize families as “high risk.” Using a grab-bag of factors like neighborhood and mother’s age, the tool can put families under intensified scrutiny without proper justification or oversight.
ACS knocking on your door is a nightmare for any parent: a single mistake can break up your family and send your children into the foster care system. Putting a family under such scrutiny shouldn’t be taken lightly, and it shouldn’t be a testing ground for automated decision-making by the government.
This “AI” tool, developed internally by ACS’s Office of Research Analytics, scores families for “risk” using 279 variables and subjects those deemed highest-risk to intensified scrutiny. The lack of transparency, accountability, or due process protections demonstrates that ACS has learned nothing from the failures of similar products in the realm of child services.
The algorithm operates in complete secrecy, and the harms from this opaque “AI theater” are not theoretical. The 279 variables are derived solely from cases from 2013 and 2014 in which children were seriously harmed. It is unclear how many cases were analyzed, what kind of auditing and testing (if any) was conducted, and whether including data from other years would have altered the scoring.
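To see how such a system can launder bias, consider this deliberately simplified sketch of a linear risk score. The features, weights, and threshold here are entirely our own invention; ACS has not published its model. The structural point stands regardless: a single proxy variable like neighborhood can dominate the score, and a family crossing the threshold gets flagged with no explanation anyone can inspect or contest.

```python
# Deliberately simplified risk-score sketch. The features, weights, and
# threshold are hypothetical; ACS has not disclosed its actual model.
WEIGHTS = {
    "neighborhood_poverty_rate": 2.5,   # a proxy that tracks race and class
    "mother_age_under_25": 1.0,
    "prior_acs_contact": 1.5,
}
THRESHOLD = 3.0  # families above this line get "intensified scrutiny"

def risk_score(family: dict) -> float:
    """Weighted sum over whatever variables the agency chose to encode."""
    return sum(WEIGHTS[k] * family.get(k, 0.0) for k in WEIGHTS)

family = {
    "neighborhood_poverty_rate": 0.9,  # where you live...
    "mother_age_under_25": 1.0,        # ...and how old you are
    "prior_acs_contact": 0.0,
}
score = risk_score(family)
print(score, "-> flagged" if score >= THRESHOLD else "-> not flagged")
# 3.25 -> flagged: the family is singled out by proxies for poverty,
# with no explanation the family, their lawyer, or a caseworker can audit.
```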
What we do know is disturbing: Black families in NYC face ACS investigations at seven times the rate of white families and ACS staff has admitted that the agency is more punitive towards Black families, with parents and advocates calling its practices “predatory.” It is likely that the algorithm effectively automates and amplifies this discrimination.
Despite this disturbing lack of transparency and accountability, ACS has used the system to subject families ranked “highest risk” to additional scrutiny, including possible home visits, calls to teachers and family members, and consultations with outside experts. But those families, their attorneys, and even caseworkers don't know when or why the system flags a case, making it difficult to challenge the process that leads to this intensified scrutiny.
This is not the only instance in which an AI tool in the child services system has run into systemic bias. Back in 2022, the Associated Press reported that Carnegie Mellon researchers found that, from August 2016 to May 2018, an algorithmic tool used by Allegheny County, Pennsylvania flagged 32.5% of Black children for “mandatory” investigation, compared to just 20.8% of white children (roughly 1.6 times the rate), all while social workers disagreed with the algorithm's risk scores about one-third of the time.
The Allegheny system operates with the same toxic combination of secrecy and bias now plaguing NYC. Families and their attorneys can never see their algorithmic scores, making it impossible to challenge decisions that could destroy their lives. When a judge asked to see a family’s score in court, the county resisted, claiming it didn't want algorithmic numbers influencing legal proceedings. That argument suggests the scores are too unreliable for judicial scrutiny, yet somehow acceptable for targeting families.
Elsewhere, these biased systems have been successfully challenged. The developers of the Allegheny tool had already seen their product rejected in New Zealand, where researchers correctly identified that it would likely result in more Māori families being tagged for investigation. Meanwhile, California spent $195,273 developing a similar tool before abandoning it in 2019, due in part to concerns about racial equity.
Governmental deployment of automated and algorithmic decision-making not only perpetuates social inequalities, but also removes mechanisms for accountability when agencies make mistakes. The state should not use these tools for rights-determining decisions, and any other uses must be subject to vigorous scrutiny and independent auditing to maintain the public’s trust in government.
Criminalizing Masks at Protests Is Wrong
A growing number of states have attempted to criminalize the wearing of face coverings at protests. Now the President has demanded, in the context of ongoing protests in Los Angeles: “ARREST THE PEOPLE IN FACE MASKS, NOW!”
But the truth is: whether you are afraid of catching an airborne illness from your fellow protesters, or you are concerned about reprisals from police or others for expressing your political opinions in public, you should have the right to wear a mask. Attempts to criminalize masks at protests fly in the face of the right to privacy.
Anonymity is a fundamental human right.
In terms of public health, wearing a mask while in a crowd can be a valuable tool to prevent the spread of communicable illnesses. This can be essential for people with compromised immune systems who still want to exercise their First Amendment-protected right to protest.
Moreover, wearing a mask is a perfectly legitimate surveillance self-defense practice during a protest. Surveillance camera networks, face recognition technology, and databases of personal information have all proliferated massively. Law enforcement also has a long history of harassing and surveilling people for publicly criticizing or opposing police practices and other government policies. What’s more, non-governmental actors may try to identify protesters in order to retaliate against them, for example by limiting their employment opportunities.
All of this may chill our willingness to speak publicly or attend a protest for a cause we believe in. Many people would be rightfully less willing to attend a rally or march if they knew that a camera-equipped drone or helicopter would make repeated passes over the crowd, and that police would later use face recognition to scan everyone’s faces and compile a list of protest attendees.
Anonymity is a fundamental human right. EFF has long advocated for anonymity online. We’ve also supported low-tech methods to protect our anonymity from high-tech snooping in public places; for example, we’ve supported legislation to allow car owners to use license plate covers when their cars are parked, to reduce their exposure to automated license plate readers (ALPRs).
A word of caution: no surveillance self-defense technique is perfect. Technology companies are trying to develop ways to use face recognition to identify people wearing masks. But if somebody hides their face to try to avoid government scrutiny, the government should not punish them for it.
While members of the public have a right to wear a mask when they protest, law enforcement officials should not wear a mask when they arrest protesters and others. An elementary principle of police accountability is to require uniformed officers to identify themselves to the public; this discourages officer misconduct, and facilitates accountability if an officer violates the law. This is one reason EFF has long supported the First Amendment right to record on-duty police, including ICE officers.
For these reasons, EFF believes it is wrong for state legislatures, and now federal law enforcement, to try to criminalize or punish mask wearing at protests. It is especially wrong in moments like the present, when the government is taking extreme measures to crack down on the civil liberties of protesters.
Privacy Victory! Judge Grants Preliminary Injunction in OPM/DOGE Lawsuit
NEW YORK–In a victory for personal privacy, a New York federal district court judge today granted a preliminary injunction in a lawsuit challenging the U.S. Office of Personnel Management’s (OPM) disclosure of records to DOGE and its agents.
Judge Denise L. Cote of the U.S. District Court for the Southern District of New York found that OPM violated the Privacy Act and bypassed its established cybersecurity practices in violation of the Administrative Procedure Act. The court will decide the scope of the injunction later this week. The plaintiffs have asked the court to halt DOGE agents’ access to OPM records and to require DOGE and its agents to delete any records that have already been disclosed. OPM’s databases hold highly sensitive personal information about tens of millions of federal employees, retirees, and job applicants.
“The plaintiffs have shown that the defendants disclosed OPM records to individuals who had no legal right of access to those records,” Cote found. “In doing so, the defendants violated the Privacy Act and departed from cybersecurity standards that they are obligated to follow. This was a breach of law and of trust. Tens of millions of Americans depend on the Government to safeguard records that reveal their most private and sensitive affairs.”
The Electronic Frontier Foundation (EFF), Lex Lumina LLP, Democracy Defenders Fund, and The Chandra Law Firm requested the injunction as part of their ongoing lawsuit against OPM and DOGE on behalf of two labor unions and individual current and former government workers across the country. The lawsuit’s union plaintiffs are the American Federation of Government Employees AFL-CIO and the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO.
The lawsuit argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to DOGE agents in violation of the Administrative Procedure Act and the federal Privacy Act of 1974, a watershed anti-surveillance statute that prevents the federal government from abusing our personal information. In addition to seeking to permanently halt the disclosure of further OPM data to DOGE, the lawsuit asks for the deletion of any data previously disclosed by OPM to DOGE.
The federal government is the nation’s largest employer, and the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, Social Security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs; nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records.
OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure.
With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President.
A number of courts have already found that DOGE’s activities at other agencies likely violate the law, including at the Social Security Administration and the Treasury Department.
For the preliminary injunction: https://www.eff.org/document/afge-v-opm-opinion-and-order-granting-preliminary-injunction
For the complaint: https://www.eff.org/document/afge-v-opm-complaint
For more about the case: https://www.eff.org/cases/american-federation-government-employees-v-us-office-personnel-management
Contacts:
Electronic Frontier Foundation: press@eff.org
Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com