EFF: Updates
California’s Corporate Cover-Up Act Is a Privacy Nightmare
California lawmakers are pushing one of the most dangerous privacy rollbacks we’ve seen in years. S.B. 690, which we’re calling the Corporate Cover-Up Act, is a brazen attempt to let corporations spy on us in secret, gutting long-standing protections without a shred of accountability.
The Corporate Cover-Up Act is a massive carve-out that would gut California’s Invasion of Privacy Act (CIPA) and give Big Tech and data brokers a green light to spy on us without consent for just about any reason. If passed, S.B. 690 would let companies secretly record your clicks, calls, and behavior online—then share or sell that data with whomever they’d like, all under the banner of a “commercial business purpose.”
Simply put, the Corporate Cover-Up Act (S.B. 690) is a blatant attack on digital privacy, written to eviscerate long-standing privacy laws and legal safeguards that Californians rely on. If passed, it would:
- Gut California’s Invasion of Privacy Act (CIPA)—a law that protects us from being secretly recorded or monitored
- Legalize corporate wiretaps, allowing companies to intercept real-time clicks, calls, and communications
- Authorize pen registers and trap-and-trace tools, which track who you talk to, when, and how—without consent
- Let companies use all of this surveillance data for “commercial business purposes”—with zero notice and no legal consequences
This isn’t a small fix. It’s a sweeping rollback of hard-won privacy protections—the kind that helped expose serious abuses by companies like Facebook, Google, and Oracle.
You Can't Opt Out of Surveillance You Don't Know Is Happening
Proponents of the Corporate Cover-Up Act claim it’s just a “clarification” to align CIPA with the California Consumer Privacy Act (CCPA). That’s misleading. The truth is, CIPA and the CCPA don’t conflict. CIPA stops secret surveillance. The CCPA governs how data is used after it’s collected, such as through the right to opt out of your data being shared.
You can't opt out of being spied on if you’re never told it’s happening in the first place. Once companies collect your data under S.B. 690, they can:
- Sell it to data brokers
- Share it with immigration enforcement or other government agencies
- Use it against abortion seekers, LGBTQ+ people, workers, and protesters, and
- Retain it indefinitely for profiling
…with no consent, no transparency, and no recourse.
The Communities Most at Risk
This bill isn’t just a tech policy misstep. It’s a civil rights disaster. If passed, S.B. 690 will put the most vulnerable people in California directly in harm’s way:
- Immigrants, who may be tracked and targeted by ICE
- LGBTQ+ individuals, who could be outed or monitored without their knowledge
- Abortion seekers, who could have location or communications data used against them
- Protesters and workers, who rely on private conversations to organize safely
The message this bill sends is clear: corporate profits come before your privacy.
We Must Act Now
S.B. 690 isn’t just a bad tech bill—it’s a dangerous precedent. It tells every corporation: Go ahead and spy on your consumers—we’ve got your back.
Californians deserve better.
If you live in California, now is the time to call your lawmakers and demand they vote NO on the Corporate Cover-Up Act.
Spread the word, amplify the message, and help stop this attack on privacy before it becomes law.
FBI Warning on IoT Devices: How to Tell If You Are Impacted
On June 5th, the FBI released a PSA titled “Home Internet Connected Devices Facilitate Criminal Activity.” This PSA largely references devices impacted by the latest generation of BADBOX malware (as named by HUMAN’s Satori Threat Intelligence and Research team) that EFF researchers also encountered primarily on Android TV set-top boxes. However, the malware has impacted tablets, digital projectors, aftermarket vehicle infotainment units, picture frames, and other types of IoT devices.
One goal of this malware is to turn the devices of unsuspecting buyers into network proxies, potentially making them hubs for criminal activity and putting the owners of these devices at risk of scrutiny from authorities. The malware is particularly insidious because it comes pre-installed out of the box on devices sold through major online retailers such as Amazon and AliExpress. If you search “Android TV Box” on Amazon right now, many of the impacted models are still listed for sale by sellers of opaque origin. The continued sale of these devices even led us to write an open letter to the FTC, urging it to take action on resellers.
The FBI listed some indicators of compromise (IoCs) in the PSA to help consumers tell whether they were impacted. But the average person isn’t running network detection infrastructure at home, and can’t be expected to know which IoCs indicate that their devices are generating “unexplained or suspicious Internet traffic.” Here, we give more comprehensive background information about these IoCs. If you find any of them on devices you own, we encourage you to follow through by contacting the FBI's Internet Crime Complaint Center (IC3) at www.ic3.gov.
The FBI lists these IoCs:
- The presence of suspicious marketplaces where apps are downloaded.
- Requiring Google Play Protect settings to be disabled.
- Generic TV streaming devices advertised as unlocked or capable of accessing free content.
- IoT devices advertised from unrecognizable brands.
- Android devices that are not Play Protect certified.
- Unexplained or suspicious Internet traffic.
The following adds context to the IoCs above, along with some additional indicators we have seen in our own research.
Play Protect Certified
“Android devices that are not Play Protect certified” refers to any device brand or partner not listed here: https://www.android.com/certified/partners/. Google subjects devices to compatibility and security tests as part of its criteria for inclusion in the Play Protect program, though those criteria are not made completely transparent outside of Google. The list does change, as we saw when the tablet brand we researched was de-listed. This IoC also encompasses “IoT devices advertised from unrecognizable brands”; the list includes international brands and partners as well.
Outdated Operating Systems
Another issue we saw was badly outdated Android versions. For reference, Android 16 has just started rolling out, yet Android 9 through 12 appeared to be the most common versions on these devices. This could be a result of “copied homework” from older legitimate Android builds. These devices also often ship with their own update software, which is a problem in its own right: in addition to whatever it downloads and updates on the device, it can deliver second-stage payloads for device infection.
You can check which version of Android you have by going to Settings and searching “Android version”.
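If you are comfortable with a command line, you can also read these values over adb. The sketch below is our own illustration, not an official detection tool: it assumes the Android platform tools (adb) are installed on your computer and USB debugging is enabled on the device, and it only reads standard Android build properties.

```python
#!/usr/bin/env python3
"""Minimal sketch: read an Android device's OS version over adb.

Assumes the Android platform tools (adb) are installed and USB debugging
is enabled on the device; the property names are standard build properties.
"""
import subprocess

def getprop(name: str) -> str:
    # "adb shell getprop <name>" prints a single build property.
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    release = getprop("ro.build.version.release")       # e.g. "10"
    patch = getprop("ro.build.version.security_patch")  # e.g. "2021-04-05"
    print(f"Android version: {release}")
    print(f"Security patch level: {patch}")
    major = release.split(".")[0]
    if major.isdigit() and int(major) <= 12:
        print("Warning: this Android release is several generations old.")
```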
Android App Marketplaces
We’ve previously argued that the availability of different app marketplaces leads to greater consumer choice, allowing users to choose alternatives that are even more secure than the Google Play Store. While this is true, the FBI’s warning about suspicious marketplaces is also prudent. Avoiding “downloading apps from unofficial marketplaces advertising free streaming content” is sound (if somewhat vague) advice for set-top boxes, yet this recommendation comes without further guidelines on how to identify which marketplaces might be suspicious for other Android IoT platforms. Best practice is to investigate any app stores used on Android devices separately, but to be aware that if a suspicious Android device is purchased, it can contain preloaded app stores that mimic the functionality of legitimate ones but also contain unwanted or malicious code.
Models Listed from the Badbox Report
We also recommend looking up device names and models that were listed in the BADBOX 2.0 report. We investigated the T95 models along with other independent researchers who initially found this malware. Many model names can be grouped into families that share the same letters but differ in their numbers: these operations iterate fast, but their naming conventions are often lazy in this respect. If you're not sure what model you own, you can usually find it listed on a sticker somewhere on the device. If that fails, you may be able to find it by pulling up the original receipt or looking through your order history.
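The same adb approach sketched above can also read the brand and model directly from the device. In the illustration below, the watchlist is only a placeholder containing the T95 family mentioned above; fill it in with the model names from the Satori report.

```python
#!/usr/bin/env python3
"""Minimal sketch: read a device's brand and model over adb and compare
the model against a watchlist of known-targeted families.

The WATCHLIST is an illustrative placeholder; copy the real model names
from the BADBOX 2.0 report before relying on the result.
"""
import subprocess

WATCHLIST = {"T95"}  # placeholder -- add model families from the report

def getprop(name: str) -> str:
    out = subprocess.run(["adb", "shell", "getprop", name],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    brand = getprop("ro.product.brand")
    model = getprop("ro.product.model")
    print(f"Brand: {brand or '(unset)'}  Model: {model or '(unset)'}")
    # Affected families often share letters but vary the trailing numbers,
    # so match on the prefix rather than the exact string.
    if any(model.upper().startswith(entry.upper()) for entry in WATCHLIST):
        print("Model matches a watchlisted family -- investigate further.")
    else:
        print("No match against this (incomplete) watchlist.")
```

A match here is only a prompt for further checking, not proof of infection; as the Satori researchers note below, not every device of a listed model is compromised.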
A Note from Satori Researchers:
“Below is a list of device models known to be targeted by the threat actors. Not all devices of a given model are necessarily infected, but Satori researchers are confident that infections are present on some devices of the below device models:”
List of Potentially Impacted Models
Broader Picture: The Digital Divide
Unfortunately, the only way to be sure that an Android device from an unknown brand is safe is not to buy it in the first place. Though initiatives like the U.S. Cyber Trust Mark are welcome developments intended to encourage demand-side trust in vetted products, recent shake-ups in federal regulatory bodies mean the future of this assurance mark is unknown. As a result, those who face budget constraints and have trouble affording top-tier digital products for streaming content or other connected purposes may rely on cheaper imitation products that are not only rife with vulnerabilities, but can even come preloaded with malware out of the box. This puts these buyers disproportionately at legal risk when their home internet connections are used as proxies for nefarious or illegal purposes.
Cybersecurity, and trust that the products we buy won’t be used against us, is essential: not just for those who can afford name-brand digital devices, but for everyone. While we welcome the IoCs that the FBI has listed in its PSA, more must be done to protect consumers from the myriad dangers their devices expose them to.
Why Are Hundreds of Data Brokers Not Registering with States?
Written in collaboration with Privacy Rights Clearinghouse
Hundreds of data brokers have not registered with state consumer protection agencies. These findings come as more states are passing data broker transparency laws that require brokers to provide information about their business and, in some cases, give consumers an easy way to opt out.
In recent years, California, Texas, Oregon, and Vermont have passed data broker registration laws that require brokers to identify themselves to state regulators and the public. A new analysis by Privacy Rights Clearinghouse (PRC) and the Electronic Frontier Foundation (EFF) reveals that many data brokers registered in one state aren’t registered in others.
Among companies that registered in at least one state, 291 did not register in California, 524 did not register in Texas, 475 did not register in Oregon, and 309 did not register in Vermont. These numbers come from data analyzed in early April 2025.
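At its core, this is a set comparison across the four state registries. The sketch below illustrates the idea with hypothetical CSV file names and a single “name” column; the real registries are published in different formats, so an actual analysis needs per-state cleanup and name normalization before comparing.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the cross-registry comparison described above.

File names and the CSV layout are hypothetical; each state publishes its
registry in its own format.
"""
import csv

STATES = ["california", "texas", "oregon", "vermont"]

def load_registry(path: str) -> set[str]:
    # Assumes one broker per row with a "name" column; normalize lightly.
    with open(path, newline="", encoding="utf-8") as f:
        return {row["name"].strip().lower() for row in csv.DictReader(f)}

registries = {state: load_registry(f"{state}_registry.csv") for state in STATES}
all_registered = set().union(*registries.values())

for state, registered in registries.items():
    missing = all_registered - registered
    print(f"{state}: {len(missing)} brokers registered elsewhere but not here")
```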
PRC and EFF sent letters to state enforcement agencies urging them to investigate these findings. More investigation by states is needed to determine whether these registration discrepancies reflect widespread noncompliance, gaps and definitional differences in the various state laws, or some other explanation.
New data broker transparency laws are an essential first step to reining in the data broker industry. This is an ecosystem in which your personal data taken from apps and other web services can be bought and sold largely without your knowledge. The data can be highly sensitive like location information, and can be used to target you with ads, discriminate against you, and even enhance government surveillance. The widespread sharing of this data also makes it more susceptible to data breaches. And its easy availability allows personal data to be obtained by bad actors for phishing, harassment, or stalking.
Consumers need robust deletion mechanisms to remove their data stored and sold by these companies. But the potential registration gaps we identified threaten to undermine such tools. California’s Delete Act will soon provide consumers with an easy tool to delete their data held by brokers—but it can only work if brokers register. California has already brought a handful of enforcement actions against brokers who failed to register under that law, and such compliance efforts are becoming even more critical as deletion mechanisms come online.
It is important to understand the scope of our analysis.
This analysis only includes companies that registered in at least one state. It does not capture data brokers that completely disregard state laws by failing to register in any state. A total of 750 data brokers have registered in at least one state. While harder to find, shady data brokers who have failed to register anywhere should remain a primary enforcement target.
This analysis also does not claim or prove that any of the data brokers we found broke the law. While the definition of “data broker” is similar across states, there are variations that could require a company to register in one state and not another. To take one example, a data broker registered in Texas that only brokers the data of Texas residents would not be legally required to register in California. To take another, a data broker that registered with Vermont in 2020 but has since changed its business model and is no longer a broker would not be required to register in 2025. More detail on variations in data broker laws is outlined in our letters to regulators.
States should investigate compliance with data broker registration requirements, enforce their laws, and plug any loopholes. Ultimately, consumers deserve protections regardless of where they reside, and Congress should also work to pass baseline federal data broker legislation that minimizes collection and includes strict use and disclosure limits, transparency obligations, and consumer rights.
Read more here:
Major Setback for Intermediary Liability in Brazil: Risks and Blind Spots
This is the third post of a series about internet intermediary liability in Brazil. Our first post gives an overview of Brazil's current internet intermediary liability regime, set out in a law known as "Marco Civil da Internet," the context of its approval in 2014, and the beginning of the Supreme Court's judgment of such regime in November 2024. Our second post provides a bigger picture of the Brazilian context underlying the court's analysis and its most likely final decision.
The court’s examination of Marco Civil’s Article 19 began with Justice Dias Toffoli in November last year. We explained here the cases under trial, the reach of the Supreme Court’s decision, and Article 19’s background related to Marco Civil’s approval in 2014. We also highlighted some aspects and risks of the vote of Justice Dias Toffoli, who considered the intermediary liability regime established in Article 19 unconstitutional.
Most of the justices have agreed to find this regime at least partially unconstitutional, but differ on the specifics. Relevant elements of their votes include:
- Notice-and-takedown is likely to become the general rule for platforms' liability for third-party content (based on Article 21 of Marco Civil). Justices still have to settle whether this applies to internet applications in general or if some distinctions are relevant, for example, applying only to those that curate or recommend content. Another open question refers to the type of content subject to liability under this rule: votes pointed to unlawful content/acts, manifestly criminal or clearly unlawful content, or opted to focus on crimes. Some justices didn’t explicitly qualify the nature of the restricted content under this rule.
- If Article 19 of Marco Civil is held partially valid, the need for a previous judicial order to hold intermediaries liable for user posts remains in force for certain types of content (or certain types of internet applications). For some justices, Article 19 should be the liability regime in the case of crimes against honor, such as defamation. Justice Luís Roberto Barroso also considered that this rule should apply to any unlawful acts under civil law. Justice Cristiano Zanin has a different approach: for him, Article 19 should prevail for internet applications that don’t curate, recommend, or boost content (what he called “neutral” applications) or when there’s reasonable doubt about whether the content is unlawful.
- Platforms are considered liable for ads and boosted content that they deliver to users. This was the position held by most of the votes so far. Justices did so either by presuming platforms’ knowledge of the paid content they distribute, holding them strictly liable for paid posts, or by considering the delivery of paid content as platforms’ own act (rather than “third-party” conduct). Justice Dias Toffoli went further, including also non-paid recommended content. Some justices extended this regime to content posted by inauthentic or fake accounts, or when the non-identification of accounts hinders holding the content authors liable for their posts.
- Monitoring duty of specific types of harmful and/or criminal content. Most concerning is that different votes establish some kind of active monitoring and likely automated restriction duty for a list of contents, subject to internet applications' liability. Justices have either recognized a “monitoring duty” or considered platforms liable for these types of content regardless of a previous notification. Justices Luís Roberto Barroso, Cristiano Zanin, and Flávio Dino adopt a less problematic systemic flaw approach, by which applications’ liability would not derive from each piece of content individually, but from an analysis of whether platforms employ the proper means to tackle these types of content. The list of contents also varies. In most of the cases they are restricted to criminal offenses, such as crimes against the democratic state, racism, and crimes against children and adolescents; yet they may also include vaguer terms, like “any violence against women,” as in Justice Dias Toffoli’s vote.
- Complementary or procedural duties. Justices have also voted to establish complementary or procedural duties. These include providing a notification system that is easily accessible to users, a due process mechanism where users can appeal against content restrictions, and the release of periodic transparency reports. Justice Alexandre de Moraes also specifically mentioned algorithmic transparency measures.
- Oversight. Justices also discussed which entity or oversight model should be used to monitor compliance while Congress doesn’t approve a specific regulation. They raised different possibilities, including the National Council of Justice, the General Attorney’s Office, the National Data Protection Authority, a self-regulatory body, or a multistakeholder entity with government, companies, and civil society participation.
Three other justices have yet to present their votes to complete the judgment. As we pointed out, the ruling will both decide the individual cases that entered the Supreme Court through appeals and the “general repercussion” issues underlying these individual cases. For addressing such general repercussion issues, the Supreme Court approves a thesis that orients lower court decisions in similar cases. The final thesis will reflect the majority of the court's agreements around the topics we outlined above.
Justice Alexandre de Moraes argued that the final thesis should equate the liability regime of social media and private messaging applications to the one applied to traditional media outlets. This disregards important differences between both: even if social media platforms curate content, it involves a massive volume of third-party posts, mainly organized through algorithms. Although such curation reflects business choices, it does not equate to media outlets that directly create or individually purchase specific content from approved independent producers. This is even more complicated with messaging applications, seriously endangering privacy and end-to-end encryption.
Justice André Mendonça was the only one so far to preserve the full application of Article 19. His proposed thesis highlighted the necessity of safeguarding privacy, data protection, and the secrecy of communications in messaging applications, among other aspects. It also indicated that judicial takedown orders must provide specific reasoning and be made available to platforms, even if issued within a sealed proceeding. The platform must also have the ability to appeal the takedown order. These are all important points the final ruling should endorse.
Risks and Blind Spots
We have stressed the many problems entangled with broad notice-and-takedown mandates and expanded content monitoring obligations. Extensively relying on AI-based content moderation and tying it to intermediary liability for user content will likely exacerbate the detrimental effects of these systems’ limitations and flaws. The perils and concerns that grounded Article 19's approval remain valid and should have led the court to preserve its regime.
However, given the judgment’s current stage, there are still some minimum safeguards that justices should consider or reinforce to reduce harm.
It’s crucial to put in place guardrails against the abuse and weaponization of notification mechanisms. At a minimum, platforms shouldn’t be liable following an extrajudicial notification when there’s reasonable doubt concerning the content’s lawfulness. In addition, notification procedures should ensure that notices are sufficiently precise and properly substantiated, indicating the content’s specific location (e.g., a URL) and why the notifier considers it illegal. Internet applications must also provide reasoned justification and adequate appeal mechanisms for those who face content restrictions.
On the other hand, holding intermediaries liable for individual pieces of user content regardless of notification, by massively relying on AI-based content flagging, is a recipe for over censorship. Adopting a systemic flaw approach could minimally mitigate this problem. Moreover, justices should clearly set apart private messaging applications, as mandated content-based restrictions would erode secure and end-to-end encrypted implementations.
Finally, we should note that justices generally didn’t distinguish large internet applications from other providers when detailing liability regimes and duties in their votes. This is one major blind spot, as it could significantly impact the feasibility of decentralized alternatives to Big Tech’s business models, entrenching platform concentration. Similarly, despite criticism of platforms’ business interests in monetizing and capturing user attention, court debates mainly failed to address the pervasive surveillance infrastructure lying underneath Big Tech’s power and abuses.
Indeed, while justices have called out Big Tech’s enormous power over the online flow of information – over what’s heard and seen, and by whom – the consequences of this decision can actually deepen this powerful position.
It’s worth recalling a line from Aaron Swartz in the film “The Internet’s Own Boy,” comparing broadcasting and the internet. He said: “[…] what you see now is not a question of who gets access to the airwaves, it’s a question of who gets control over the ways you find people.” As he put it, today’s challenge is less about who gets to speak, and more about who gets to be heard.
There’s an undeniable source of power in operating the inner rules and structures by which the information flows within a platform with global reach and millions of users. The crucial interventions must aim at this source of power, putting a stop to behavioral surveillance ads, breaking Big Tech’s gatekeeper dominance, and redistributing the information flow.
That’s not to say that we shouldn’t care about how each platform organizes its online environment. We should, and we do. The EU Digital Services Act, for example, established rules in this sense, leaving the traditional liability regime largely intact. Rather than leveraging platforms as users’ speech watchdogs by potentially holding intermediaries liable for each piece of user content, platform accountability efforts should broadly look at platforms’ processes and business choices. Otherwise, we will end up focusing on monitoring users instead of targeting platforms’ abuses.
Major Setback for Intermediary Liability in Brazil: How Did We Get Here?
This is the second post of a series about intermediary liability in Brazil. Our first post gives an overview of Brazil's current intermediary liability regime, the context of its approval in 2014, and the beginning of the Supreme Court's analysis of such regime in November 2024. Our third post provides an outlook on justices' votes up until June 23, underscoring risks, mitigation measures, and blind spots of their potential decision.
The Brazilian Supreme Court has formed a majority to overturn the country’s current online intermediary liability regime. With eight out of eleven justices having presented their opinions, the court has reached enough votes to mostly remove the need for a previous judicial order demanding content takedown to hold digital platforms liable for user posts, which is currently the general rule.
The judgment relates to Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet,” Law n. 12.965/2014), wherein internet applications can only be held liable for third-party content if they fail to comply with a judicial decision ordering its removal. Article 19 aligns with the Manila Principles and reflects the important understanding that holding platforms liable for user content without a judicial analysis creates strong incentives for enforcement overreach and over censorship of protected speech.
Nonetheless, while Justice André Mendonça voted to preserve Article 19’s application, four other justices stated it should prevail only in specific cases, mainly for crimes against honor (such as defamation). The remaining three justices considered that Article 19 offers insufficient protection to constitutional guarantees, such as the integral protection of children and teenagers.
The judgment will resume on June 25th, with the three final justices completing the analysis by the plenary of the court. Whereas Article 19’s partial unconstitutionality (or its interpretation “in accordance with” the Constitution) seems to be the position the majority of the court will take, the details of each vote vary, indicating important agreements still to sew up and critical tweaks to make.
As we previously noted, the outcome of this ruling could seriously undermine free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expanded notice-and-takedown mandates. This trend could negatively shape developments globally in other courts, parliaments, or with respect to executive powers. Sadly, the votes so far have aggravated these concerns.
But before we get to them, let's look at some circumstances underlying the Supreme Court's analysis.
2014 vs. 2025: The Brazilian Techlash After Marco Civil's Approval
How did Article 19 end up (mostly) overturned a decade after Marco Civil’s much-celebrated approval in Brazil back in 2014?
In addition to the broader techlash following the impacts of an increasing concentration of power in the digital realm, developments in Brazil have fueled a harsher approach toward internet intermediaries. Marco Civil, and especially Article 19, became a scapegoat within regulatory approaches that largely diminished the importance of the free expression concerns that informed its approval. Rather than viewing the provision as a milestone to be complemented with new legislation, this context has reinforced the view that Article 19 should be left behind.
The tougher approach to internet intermediaries gained steam after former President Jair Bolsonaro’s election in 2018 and throughout the legislative debates around draft bill 2630, also known as the “Fake News bill.”
Specifically, though not exhaustively, concerns around the spread of disinformation, online-fueled discrimination, and political violence, as well as threats to election integrity, constitute an important piece of this scenario. This includes the use of social media by the far right amid the escalation of acts seeking to undermine the integrity of elections and ultimately overthrow the legitimately elected President Luiz Inácio Lula da Silva in January 2023. Investigations later unveiled that related plans included killing the new president, the vice president, and Justice Alexandre de Moraes.
Concerns over children’s and adolescents’ rights and safety are another part of the underlying context. Among other incidents, a wave of violent threats and actual attacks in schools in early 2023 was bolstered by online content. Social media challenges also led to injuries and deaths of young people.
Finally, the political reactions to Big Tech’s alignment with far-right politicians and feuds with Brazilian authorities complete this puzzle. It includes reactions to Meta’s policy changes in January 2025 and the Trump administration’s decision to restrict visas for foreign officials on grounds of limiting free speech online. This decision is viewed as an offensive against Brazil's Supreme Court by U.S. authorities in alliance with Bolsonaro’s supporters, including his son, who now lives in the U.S.
Changes in the tech landscape, including concerns about the attention-driven information flow, alongside geopolitical tensions, landed in the Brazilian Supreme Court’s examination of Article 19. Hurdles in the legislative debate over draft bill 2630 turned attention to the internet intermediary liability cases pending in the Supreme Court as the main vehicles for providing “some” response. Yet the scope of such cases (explained here) determined the most likely outcome. As they focus on assessing platform liability for user content and whether it involves a duty to monitor, these issues became the main vectors for analysis and potential change. Alternative approaches, such as improving transparency, ensuring due process, and fostering platform accountability through different measures, like risk assessments, were mainly sidelined.
Read our third post in this series to learn more about the analysis of the Supreme Court so far and its risks and blind spots.
Copyright Cases Should Not Threaten Chatbot Users’ Privacy
Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.
The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy.
This isn’t a new concept. Putting users in control of their data is a fundamental piece of privacy protection. Nineteen states, the European Union, and numerous other countries already protect the right to delete under their privacy laws. These rules exist for good reasons: retained data can be sold or given away, breached by hackers, disclosed to law enforcement, or even used to manipulate a user’s choices through online behavioral advertising.
While appropriately tailored orders to preserve evidence are common in litigation, that’s not what happened here. The court disregarded the privacy rights of millions of ChatGPT users without any reasonable basis to believe it would yield evidence. The court granted the order based on unsupported assertions that users who delete their data are probably copyright infringers looking to “cover their tracks.” This is simply false, and it sets a dangerous precedent for cases against generative AI developers and other companies that have vast stores of user information. Unless courts limit orders to information that is actually relevant and useful, they will needlessly violate the privacy rights of millions of users.
OpenAI is challenging this order. EFF urges the court to lift the order and correct its mistakes.
The NO FAKES Act Has Changed – and It’s So Much Worse
A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.
The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.
Tell Congress to Say No to NO FAKES
The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.
The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”
This bill would be a disaster for internet speech and innovation.
Targeting Tools
The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, anyone who owns the rights in that individual’s image, or the law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for, or have only limited commercial uses other than, making unauthorized images—but those limits will offer cold comfort to developers given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power on innovation they’ve long sought in the copyright wars, based on the same tech panics.
Takedown Notices and Filter Mandate
The first version of NO FAKES set up a notice and takedown system patterned on the DMCA, with even fewer safeguards. NO FAKES expands it to cover more service providers and require those providers to not only take down targeted materials (or tools) but keep them from being uploaded in the future. In other words, adopt broad filters or lose the safe harbor.
Filters are already a huge problem when it comes to copyright, and at least in that context all a filter should be doing is flagging an upload for human review when it appears to be a whole copy of a work. In reality, these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things for infringement based on mere seconds of a match, and they frequently do not take into account context that would make the use authorized by law.
But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.
The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.
Threats to Anonymous Speech
As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.
We've already seen abuse of a similar system in action. In copyright cases, those unhappy with the criticisms being made against them get such subpoenas to silence critics. Often the criticism includes the complainant's own words as proof, an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.
Not only does this chill further speech, but the unmasking itself can harm users, whether reputationally or in their personal lives.
Threats to Innovation
Most of us are very unhappy with the state of Big Tech. It seems like not only are we increasingly forced to use the tech giants, but that the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.
Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity. For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?
This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. If Congress is really worried about privacy harms, it should at least wait to see the effects of that last piece of internet regulation before layering on a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.
NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.
New Journalism Curriculum Module Teaches Digital Security for Border Journalists
SAN FRANCISCO – A new college journalism curriculum module teaches students how to protect themselves and their digital devices when working near and across the U.S.-Mexico border.
“Digital Security 101: Crossing the US-Mexico Border” was developed by Electronic Frontier Foundation (EFF) Director of Investigations Dave Maass and Dr. Martin Shelton, deputy director of digital security at Freedom of the Press Foundation (FPF), in collaboration with the University of Texas at El Paso (UTEP) Multimedia Journalism Program and Borderzine.
The module offers a step-by-step process for improving the digital security of journalists passing through U.S. Land Ports of Entry, focusing on threat modeling: thinking through what you want to protect, and what actions you can take to secure it.
This involves assessing risk according to the kind of work the journalist is doing, the journalist’s own immigration status, potential adversaries, and much more, as well as planning in advance for protecting oneself and one’s devices should the journalist face delay, detention, search, or device seizure. Such planning might include use of encrypted communications, disabling or enabling certain device settings, minimizing the data on devices, and mentally preparing oneself to interact with border authorities.
The module, in development since early 2023, is particularly timely given increasingly invasive questioning and searches at U.S. borders under the Trump Administration and the documented history of border authorities targeting journalists covering migrant caravans during the first Trump presidency.
"Today's journalism students are leaving school only to face complicated, new digital threats to press freedom that did not exist for previous generations. This is especially true for young reporters serving border communities," Shelton said. "Our curriculum is designed to equip emerging journalists with the skills to protect themselves and sources, while this new module is specifically tailored to empower students who must regularly traverse ports of entry at the U.S.-Mexico border while carrying their phones, laptops, and multimedia equipment."
The guidance was developed through field visits to six ports of entry across three border states, interviews with scores of journalists and students on both sides of the border, and a comprehensive review of CBP policies, while also drawing from EFF and FPF’s combined decades of experience researching constitutional rights and security techniques when it comes to our devices.
“While this training should be helpful to investigative journalists from anywhere in the country who are visiting the borderlands, we put journalism students based in and serving border communities at the center of our work,” Maass said. “Whether you’re reviewing the food scene in San Diego and Tijuana, covering El Paso and Ciudad Juarez’s soccer teams, reporting on family separation in the Rio Grande Valley, or uncovering cross-border corruption, you will need the tools to protect your work and sources."
The module includes a comprehensive slide deck that journalism lecturers can use and remix for their classes, as well as an interactive worksheet. With undergraduate students in mind, the module includes activities such as roleplaying a primary inspection interview and analyzing pop singer Olivia Rodrigo’s harrowing experience of mistaken identity while reentering the country. The module has already been delivered successfully in trainings with journalism students at UTEP and San Diego State University.
“UTEP’s Multimedia Journalism program is well-situated to help develop this digital security training module,” said UTEP Communication Department Chair Dr. Richard Pineda. “Our proximity to the U.S.-Mexico border has influenced our teaching models, and our student population – often daily border crossers – give us a unique perspective from which to train journalists on issues related to reporting safely on both sides of the border.”
For the “Digital security 101: Crossing the US-Mexico border” module: https://freedom.press/digisec/blog/border-security-module/
For more about the module: https://www.eff.org/deeplinks/2025/06/journalist-security-checklist-preparing-devices-travel-through-us-border
For EFF’s guide to digital security at the U.S. border: https://www.eff.org/press/releases/digital-privacy-us-border-new-how-guide-eff
For EFF’s student journalist Surveillance Self Defense guide: https://ssd.eff.org/playlist/journalism-student
Contact: Dave Maass, Director of Investigations, dm@eff.org

A Journalist Security Checklist: Preparing Devices for Travel Through a US Border
This post was originally published by the Freedom of the Press Foundation (FPF). This checklist complements the recent training module for journalism students in border communities that EFF and FPF developed in partnership with the University of Texas at El Paso Multimedia Journalism Program and Borderzine. We are cross-posting it under FPF's Creative Commons Attribution 4.0 International license. It has been slightly edited for style and consistency.
Before diving in: This space is changing quickly! Check FPF's website for updates and contact them with questions or suggestions. This is a joint project of Freedom of the Press Foundation (FPF) and the Electronic Frontier Foundation.
Those within the U.S. have Fourth Amendment protections against unreasonable searches and seizures — but there is an exception at the border. Customs and Border Protection (CBP) asserts broad authority to search travelers’ devices when crossing U.S. borders, whether traveling by land, sea, or air. And unfortunately, except for a dip at the start of the COVID-19 pandemic when international travel substantially decreased, CBP has generally searched more devices year over year since the George W. Bush administration. While the percentage of travelers affected by device searches remains small, in recent months we’ve heard growing concerns about apparent increased immigration scrutiny and enforcement at U.S. ports of entry, including seemingly unjustified device searches.
Regardless, it’s hard to say with certainty the likelihood that you will experience a search of your items, including your digital devices. But there’s a lot you can do to lower your risk in case you are detained in transit, or if your devices are searched. We wrote this checklist to help journalists prepare for transit through a U.S. port of entry while preserving the confidentiality of your most sensitive information, such as unpublished reporting materials or source contact information. It’s important to think about your strategy in advance, and begin planning which options in this checklist make sense for you.
First things first: What might CBP do?
U.S. CBP’s policy is that they may conduct a “basic” search (manually looking through information on a device) for any reason or no reason at all. If they feel they have reasonable suspicion “of activity in violation of the laws enforced or administered by CBP” or if there is a “national security concern,” they may conduct what they call an “advanced” search, which may include connecting external equipment to your device, such as a forensic analysis tool designed to make a copy of your data.
Your citizenship status matters as to whether you can refuse to comply with a request to unlock your device or provide the passcode. If you are a U.S. citizen entering the U.S., you have the most legal leverage to refuse to comply because U.S. citizens cannot be denied entry — they must be let back into the country. But note that if you are a U.S. citizen, you may be subject to escalated harassment and further delay at the port of entry, and your device may be seized for days, weeks, or months.
If CBP officers seek to search your locked device using forensic tools, there is a chance that some (if not all) of the information on the device will be compromised. But this probability depends on what tools are available to government agents at the port of entry, whether they are motivated to seize your device and send it elsewhere for analysis, and what type of device, operating system, and security features your device has. Thus, it is also possible that strong encryption may substantially slow down or even thwart a government device search.
Lawful permanent residents (green-card holders) must generally also be let back into the country. However, the current administration seems more willing to question LPR status, so refusing to comply with a request to unlock a device or provide a passcode may be risky for LPRs. Finally, CBP has broad discretion to deny entry to foreign nationals arriving on a visa or via the visa waiver program.
At present, traveling domestically within the United States, particularly if you are a U.S. citizen, is lower risk than traveling internationally. Our luggage and the physical aspects of digital devices may be searched — e.g., manual inspection or x-rays to ensure a device is not a bomb. CBP is often present at airports, but for domestic travel within the U.S. you should only be interacting with the Transportation Security Administration (TSA). TSA does not assert authority to search the data on your device — this is CBP’s role.
At an international airport or other port of entry, you have to decide whether you will comply with a request to access your device, but this might not feel like much of a choice if you are a non-U.S. citizen entering the country! Plan accordingly.
Your border digital security checklist
Preparing for travel
☐ Make a backup of each of your devices before traveling.
☐ Use long, unpredictable, alphanumeric passcodes for your devices and commit those passwords to memory.
☐ If bringing a laptop, ensure it is encrypted using BitLocker for Windows, or FileVault for macOS (see the verification sketch after this checklist section). Chromebooks are encrypted by default. A password-protected laptop screen lock is usually insufficient. When going through security, devices should be turned all the way off.
☐ Fully update your device and apps.
☐ Optional: Use a password manager to help create and store randomized passcodes. 1Password users can create temporary travel vaults.
☐ Bring as few sensitive devices as possible — only what you need.
☐ Regardless of which country you are visiting, think carefully about what you are willing to post publicly on social media about that country to avoid scrutiny.
☐ For land ports of entry in the U.S., check CBP’s border wait times and plan accordingly.
☐ If possible, print out any travel documents in advance to avoid the necessity to unlock your phone during boarding, including boarding passes for your departure and return, rental car information, and any information about your itinerary that you would like to have on hand if questioned (e.g., hotel bookings, visa paperwork, employment information if applicable, conference information). Use a printer you trust at home or at the office, just in case.
☐ Avoid bringing sensitive physical documents you wouldn’t want searched. If you need them, consider digitizing them (e.g., by taking a photo) and storing them remotely on a cloud service or backup device.
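The laptop-encryption item above can be verified with your operating system's built-in tools. The following is a small, optional sketch of ours (not part of the official checklist) that calls fdesetup on macOS and manage-bde on Windows; the Windows check needs an elevated (administrator) prompt.

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether full-disk encryption appears enabled.

Uses the OS tools behind FileVault (macOS) and BitLocker (Windows).
Run the Windows check from an administrator prompt.
"""
import platform
import subprocess

def check_encryption() -> None:
    system = platform.system()
    if system == "Darwin":
        # fdesetup prints e.g. "FileVault is On."
        out = subprocess.run(["fdesetup", "status"], capture_output=True, text=True)
    elif system == "Windows":
        # manage-bde reports BitLocker status for the C: drive.
        out = subprocess.run(["manage-bde", "-status", "C:"], capture_output=True, text=True)
    else:
        print(f"No automatic check here for {system}; consult your OS documentation.")
        return
    print(out.stdout.strip() or out.stderr.strip())

if __name__ == "__main__":
    check_encryption()
```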
Decide in advance whether you will unlock your device or provide the passcode for a search. Your overall likelihood of experiencing a device search is low (e.g., less than .01% of international travelers are selected), but depending on what information you carry, the impact of a search may be quite high. If you plan to unlock your device for a search or provide the passcode, ensure your devices are prepared:
☐ In advance, upload any information you would like stored remotely, rather than locally on your device, to a cloud provider (e.g., using iCloud).
☐ Remove any apps, files, chat histories, browsing histories, and sensitive contacts you would not want exposed during a search.
☐ If you delete photos or files, delete them a second time in the “Recently Deleted” or “Trash” sections of your Files and Photos apps.
☐ Remove messages from the device that you believe would draw unwanted scrutiny. Remove yourself — even if temporarily — from chat groups on platforms like Signal.
☐ If you use Signal and plan to keep it on your device, use disappearing messages to minimize how much information you keep within the app.
☐ Optional: Bring a travel device instead of your usual device. Ensure it is populated with the apps you need while traveling, as well as login credentials (e.g., stored in a password manager), and necessary files. If you do this, ensure your trusted contacts know how to reach you on this device.
☐ Optional: Rather than manually removing all sensitive files from your computer, if you are primarily accessing web services during your travels, a Chromebook may be an affordable alternative to your regular computer.
☐ Optional: After backing up your everyday devices, factory reset them and add back onto them only the information you need.
☐ Optional: If you intend to work during your travel, plan in advance with a colleague who can remotely assist you in accessing and/or rotating necessary credentials.
☐ If you don’t plan to work, consider discussing with your IT department whether temporarily suspending your work accounts could mitigate risks at border crossings.
☐ Log out of accounts you do not want accessible to border officials. Note that border officers do not have authority to access live cloud content — they must put devices in airplane mode or otherwise disconnect them from the internet.
☐ Power down your phone and laptop entirely before going through security. This will enable disk encryption, and make it harder for someone to analyze your device.
☐ Immediately before travel, if you have a practicing attorney who has expertise in immigration and border issues, particularly related to members of the media, make sure you have their contact information written down before you depart.
☐ Immediately before travel, ensure that a friend, relative, or colleague is aware of your whereabouts when passing through a port of entry, and provide them with an update as soon as possible afterward.
☐ Be polite and try not to emotionally escalate the situation.
☐ Do not lie to border officials, but don’t offer any information they do not explicitly request.
☐ Politely request officers’ names and badge numbers.
☐ If you choose to unlock your device, rather than telling border officials your passcode, ask to type it in yourself.
☐ Ask to be present for a search of your device. But note officers are likely to take your device out of your line of sight.
☐ You may decline the request to search your device, but this may result in your device being seized and held for days, weeks, or months. If you are not a U.S. citizen, refusal to comply with a search request may lead to denial of entry, or scrutiny of lawful permanent resident status.
☐ If your device is seized, ask for a custody receipt (Form 6051D). This should also list the name and contact information for a supervising officer.
☐ If an officer has plugged your unlocked phone or computer into another electronic device, they may have obtained a forensic copy of your device. You will want to remember anything you can about this event if it happens.
☐ Immediately afterward, write down as many details as you can about the encounter: e.g., names, badge numbers, descriptions of equipment that may have been used to analyze the device, changes to the device or corrupted data, etc.
Reporting is not a crime. Be confident knowing you haven’t done anything wrong.
More resources
- https://hselaw.com/news-and-information/legalcurrents/preparing-for-electronic-device-searches-at-united-states-borders/
- https://www.eff.org/wp/digital-privacy-us-border-2017#main-content
- https://www.aclu.org/news/privacy-technology/can-border-agents-search-your-electronic
- https://www.theverge.com/policy/634264/customs-border-protection-search-phone-airport-rights
- https://www.wired.com/2017/02/guide-getting-past-customs-digital-privacy-intact/
- https://www.washingtonpost.com/technology/2025/03/27/cbp-cell-phones-devices-traveling-us/
EFF to European Commission: Don’t Resurrect Illegal Data Retention Mandates
The mandatory retention of metadata is an evergreen of European digital policy. Despite a number of rulings by Europe’s highest court, confirming again and again the incompatibility of general and indiscriminate data retention mandates with European fundamental rights, the European Commission is taking major steps towards the re-introduction of EU-wide data retention mandates. Recently, the Commission launched a Call for Evidence on data retention for criminal investigations—the first formal step towards a legislative proposal.
The European Commission and EU Member States have been attempting to revive data retention for years. For this purpose, a secretive “High Level Group on Access to Data for Effective Law Enforcement” has been formed, usually referred to as the High Level Group (HLG) on “Going Dark.” Going dark refers to the false narrative that law enforcement authorities are left “in the dark” due to a lack of accessible data, despite the ever-increasing collection of and access to data through companies, data brokers, and governments. Going dark also aptly describes the opaque way the HLG works: behind closed doors and without input from civil society.
The Group’s recommendations to the European Commission, published in 2024, read like a government surveillance wishlist. They include suggestions for backdoors in various technologies (reframed as “lawful access by design”), obligations on service providers to collect and retain more user data than they need for providing their services, and requirements to intercept and provide decrypted data to law enforcement in real time, all while somehow not compromising the security of their systems. And of course, the HLG calls for a harmonized data retention regime that covers not only the retention of but also access to data, and that extends data retention to any service provider that could provide access to data.
EFF joined other civil society organizations in addressing the dangerous proposals of the HLG, calling on the European Commission to safeguard fundamental rights and to ensure the security and confidentiality of communications.
In our response to the Commission's Call for Evidence, we reiterated the same principles:
- Any future legislative measures must prioritize the protection of fundamental rights and must be aligned with the extensive jurisprudence of the Court of Justice of the European Union.
- General and indiscriminate data retention mandates undermine anonymity and privacy, which are essential for democratic societies, and pose significant cybersecurity risks by creating centralized troves of sensitive metadata that are attractive targets for malicious actors.
- We highlighted the lack of empirical evidence to justify blanket data retention and warned against extending retention obligations to number-independent interpersonal communications services, as doing so would violate CJEU doctrine, conflict with European data protection law, and compromise security.
The European Commission must once and for all abandon the ghost of data retention that has been haunting EU policy discussions for decades, and shift its focus to rights-respecting alternatives.
Protect Yourself From Meta’s Latest Attack on Privacy
Researchers recently caught Meta using an egregious new tracking technique to spy on you. Exploiting a technical loophole, the company was able to have their apps snoop on users’ web browsing. This tracking technique stands out for its flagrant disregard of core security protections built into phones and browsers. The episode is yet another reason to distrust Meta, block web tracking, and end surveillance advertising.
Fortunately, there are steps that you, your browser, and your government can take to fight online tracking.
What Makes Meta’s New Tracking Technique So Problematic?
More than 10 years ago, Meta introduced a snippet of code called the “Meta pixel,” which has since been embedded on about 20% of the most trafficked websites. This pixel exists to spy on you, recording how visitors use a website and respond to ads, and siphoning potentially sensitive info like financial information from tax filing websites and medical information from hospital websites, all in service of the company’s creepy system of surveillance-based advertising.
While these pixels are well-known, and can be blocked by tools like EFF’s Privacy Badger, researchers discovered another way these pixels were being used to track you.
Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified
Meta’s tracking pixel was secretly communicating with Meta’s apps on Android devices. This violated a fundamental security feature of mobile operating systems (“sandboxing”) that isolates apps from one another. Meta got around this restriction by exploiting localhost, the device’s loopback network interface normally used for local testing, to create a hidden channel between mobile browser apps and its own apps. You can read more about the technical details here.
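To make the mechanism easier to picture, here is a minimal, hypothetical sketch of what a tracking script embedded in a webpage could do. This is not Meta’s actual code: the port number, endpoint, and payload fields are invented for illustration, and the real technique reportedly also used more elaborate channels (such as WebRTC). The point is simply that a webpage can reach a native app listening on the phone’s loopback interface:

```typescript
// Conceptual sketch only – NOT Meta's actual code. It illustrates the idea described
// above: a script on an ordinary website hands data to a native app that is quietly
// listening on a localhost port on the same phone. The port, path, and payload fields
// below are made up for illustration.

const HYPOTHETICAL_LOCAL_PORT = 12387;

async function leakToLocalApp(browserCookieId: string): Promise<void> {
  try {
    // Browsers let pages make requests to 127.0.0.1, so the web sandbox and the
    // app sandbox can meet in the middle on the phone's loopback interface.
    await fetch(`http://127.0.0.1:${HYPOTHETICAL_LOCAL_PORT}/ingest`, {
      method: "POST",
      mode: "no-cors", // the script doesn't need to read a response for the data to arrive
      body: JSON.stringify({ cookieId: browserCookieId, page: location.href }),
    });
  } catch {
    // If no app is listening, the request quietly fails and the page carries on.
  }
}
```

On the app side, a native app would only need to run a small local listener on the same port. Blocking the pixel in the first place, as Privacy Badger and some browsers do by default, prevents a script like this from ever running.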
This workaround helped Meta bypass user privacy protections and attempts at anonymity. Typically, Meta tries to link data from “anonymous” website visitors to individual Meta accounts using signals like IP addresses and cookies. But Meta made re-identification trivial with this new tracking technique by sending information directly from its pixel to Meta's apps, where users are already logged in. Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified with this tracking technique.
Meta didn’t just hide this tracking technique from users. Developers who embedded Meta’s tracking pixels on their websites were also kept in the dark. Some developers noticed the pixel contacting localhost from their websites, but got no explanation when they raised concerns to Meta. Once the technique was publicly exposed, Meta immediately paused it, claiming to be in discussions with Google about “a potential miscommunication regarding the application of their policies.”
While the researchers only observed the practice on Android devices, similar exploits may be possible on iPhones as well.
This exploit underscores the unique privacy risks we face when Big Tech can leverage out-of-control online tracking to profit from our personal data.
How Can You Protect Yourself?
Meta seems to have stopped using this technique for now, but that doesn’t mean they’re done inventing new ways to track you. Here are a few steps you can take to protect yourself:
Use a Privacy-Focused Browser
Choose a browser with better default privacy protections than Chrome. For example, Brave and DuckDuckGo protected users from this tracking technique because they block Meta’s tracking pixel by default. Firefox only partially blocked the new tracking technique with its default settings, but fully blocked it for users with “Enhanced Tracking Protection” set to “Strict.”
It’s also a good idea to avoid using in-app browsers. When you open links inside the Facebook or Instagram apps, Meta can track you more easily than if you opened the same links in an external browser.
Delete Unnecessary Apps
Reduce the number of ways your information can leak by deleting apps you don’t trust or don’t regularly use. Try opting for websites over apps when possible. In this case, and many similar cases, using the Facebook and Instagram website instead of the apps would have limited data collection. Even though both can contain tracking code, apps can access information that websites generally can’t, like a persistent “advertising ID” that companies use to track you (follow EFF’s instructions to turn it off if you haven’t already).
Install Privacy Badger
EFF’s free browser extension blocks trackers to stop companies from spying on you online. Although Privacy Badger would’ve stopped Meta’s latest tracking technique by blocking their pixel, Firefox for Android is the only mobile browser it currently supports. You can install Privacy Badger on Chrome, Firefox, and Edge on your desktop computer.
Limit Meta’s Use of Your Data
Meta’s business model creates an incentive to collect as much information as possible about people to sell targeted ads. Short of deleting your accounts, you have a number of options to limit tracking and how the company uses your data.
How Should Google Chrome Respond?
After learning about Meta’s latest tracking technique, Chrome and Firefox released fixes for the technical loopholes that Meta exploited. That’s an important step, but Meta’s deliberate attempt to bypass browsers’ privacy protections shows why browsers should do more to protect users from online trackers.
Unfortunately, the most popular browser, Google Chrome, is also the worst for your privacy. Privacy Badger can help by blocking trackers on desktop Chrome, but Chrome for Android doesn’t support browser extensions. That seems to be Google’s choice, rather than a technical limitation. Given the lack of privacy protections they offer, Chrome should support extensions on Android to let users protect themselves.
Although Chrome addressed the latest Meta exploit after it was exposed, their refusal to block third-party cookies or known trackers leaves the door wide open for Meta’s other creepy tracking techniques. Even when browsers block third-party cookies, allowing trackers to load at all gives them other ways to harvest and de-anonymize users’ data. Chrome should protect its users by blocking known trackers (including Google’s). Tracker-blocking features in Safari and Firefox show that similar protections are possible and long overdue in Chrome. It has yet to be approved to ship in Chrome, but a Google proposal to block fingerprinting scripts in Incognito Mode is a promising start.
Yet Another Reason to Ban Online Behavioral Advertising
Meta’s business model relies on collecting as much information as possible about people in order to sell highly-targeted ads. Even though this particular method has been paused, as long as that incentive remains, Meta will keep finding ways to bypass your privacy protections.
The best way to stop this cycle of invasive tracking techniques and patchwork fixes is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. We need strong federal privacy laws to ensure that you, not Meta, control what information you share online.
A Token of Appreciation for Sustaining Donors 💞
You'll get a custom EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.
Start a Convenient Recurring Donation Today!
But here's a little more background for all of you detail-oriented digital rights fans. EFF's 35th Anniversary celebration has begun, and we're commemorating three and a half decades of fighting for your privacy, security, and free expression rights online. These values are hallmarks of freedom and necessities for true democracy, and you can help protect them. It's only possible with the kindness and steadfast support of EFF members, and over 30% of them are Sustaining Donors: people who spread out their support with a monthly or annual automatic recurring donation.
We're saying thanks to new and upgrading Sustaining Donors by offering brand new EFF35 Challenge Coins as a literal token of thanks. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and we owe our strength to tech creators and users like you. EFF challenge coins are individually numbered for each supporter and only available while supplies last.
Become a Sustaining Donor
Just start an automated recurring donation of at least $5 per month (Copper Level) or $25 per year (Silicon Level) by July 10, 2025. We'll automatically send a special-edition EFF challenge coin to the shipping address you provide during your transaction.
Already a Monthly or Annual Sustaining Donor?
First of all—THANKS! Second, you can get an EFF35 Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift by any amount and let us know by emailing upgrade@eff.org.
Get started with your upgrade at eff.org/recurring. If you used PayPal, just cancel your current recurring donation and then go to eff.org to start a new upgraded recurring donation.
Digital Rights Every Day
EFF's mission is sustained by thousands of people from every imaginable background giving modest donations when they can. Every cent counts. We like to show our gratitude and give you something to start conversations about civil liberties and human rights, whether you're a one-time donor or a recurring Sustaining Donor.
Check out the freshly-baked member gifts made for EFF's anniversary year, including the new EFF35 Cityscape T-Shirt, Motherboard Hooded Sweatshirt, and new stickers. With your help, EFF is here to stay.
Strategies for Resisting Tech-Enabled Violence Facing Transgender People
Today's Supreme Court ruling in U.S. v. Skrmetti, upholding bans on gender-affirming care for youth, makes it clear: trans people are under attack. Threats to trans rights and healthcare are coming from legislatures, anti-trans bigots (both organized and not), apathetic bystanders, and more. Living under the most sophisticated surveillance apparatus in human history only makes things worse. While the dangers are very much tangible and immediate, the risks posed by technology can amplify them in insidious ways. Here is a non-exhaustive overview of concerns, a broad-sweeping threat model, and some recommended strategies that you can take to keep yourself and your loved ones safe.
Dangers for Trans Youth
Trans kids experience an inhumane amount of cruelty and assault. Much of today’s anti-trans legislation is aimed specifically at making life harder for transgender youth, across all aspects of life. For this reason, we have highlighted several of the unique threats facing transgender youth.
School Monitoring Software
Most school-issued devices are root-kitted with surveillance spyware known as student-monitoring software. The purveyors of these technologies have been widely criticized for posing significant risks to marginalized children, particularly LGBTQ+ students. We ran our own investigation into the dangers posed by these technologies with a project called Red Flag Machine. Our findings showed that a significant portion of the flags on students’ supposedly “inappropriate” online behavior came when they were researching LGBTQ+ topics such as queer history, sexual education, psychology, and medicine. When a device with this software flags such activity, students are often placed in direct contact with school administrators or even law enforcement. As I wrote three years ago, this creates a persistent and uniquely dangerous situation for students living in areas with regressive laws around LGBTQ+ life or unsafe home environments.
The risks posed by technology can amplify threats in insidious ways
Unfortunately, because of the invasive nature of these school-issued devices, we can’t recommend a safe way to research LGBTQ+ topics on them without risking school administrators finding out. If possible, consider compartmentalizing those searches to different devices, ones owned by you or a trusted friend, or devices found in an environment you trust, such as a public library.
Family Owned Devices
If you don’t own your phone, laptop, or other devices—such as if your parents or guardians are in control of them (e.g., they have access to unlock them or they exert control over the app stores you can access with them)—it’s safest to treat those devices as you would a school-issued device. This means you should not trust those devices for the most sensitive activities or searches that you want to keep especially private. While steps like deleting browser history and using hidden folders or photo albums can offer some safety, they aren’t sure-fire protections to prevent the adults in your life from accessing your sensitive information. When possible, try using a public library computer (outside of school) or borrow a trusted friend’s device with fewer restrictions.
Dangers for Protestors
Pride demonstrations are once again returning to their roots as political protests. It’s important to treat them as such by locking down your devices and coming up with some safety plans in advance. We recommend reading our entire Surveillance Self-Defense guide on attending a protest, taking special care to implement strategies like disabling biometric unlock on your phone and documenting the protest without putting others at risk. If you’re attending the demonstration with others–which is strongly encouraged–consider setting up a Signal group chat and using strategies laid out in this blog post by Micah Lee.
Counter-protestors
There is a significant push from anti-trans bigots to make Pride month more dangerous for our community. An independent source has been tracking and mapping anti-trans organized groups who are specifically targeting Pride events. While the list is non-exhaustive, it does provide some insight into who these groups are and where they are active. If one of these groups is organizing in your area, it will be important to take extra precautions to keep yourself safe.
Data Brokers & Open-Source Intelligence
Data brokers pose a significant threat to everyone–and frankly, the entire industry deserves to be deleted out of existence. The dangers are even more pressing for people doing the vital work of advocating for the human rights of transgender people. If you’re a doctor, an activist, or a supportive family member of a transgender person, you are at risk of your own personal information being weaponized against you. Anti-trans bigots and their supporters online will routinely access open-source intelligence and data broker records to cause harm.
You can reduce some of these risks by opting out from data brokers. It’s not a cure-all (the entire dissolution of the data broker industry is the only solution), but it’s a meaningful step. The DIY method has been found most effective, though there are services to automate the process if you would rather save yourself the time and energy. For the DIY approach, we recommend using Yael Grauer’s Big Ass Data-Broker Opt Out List.
Legality is likely to continue to shift
It’s also important to look into other publicly accessible information that may be out there, including voter registration records, medical licensing information, property sales records, and more. Some of these can be obfuscated through mechanisms like “address confidentiality programs.” These protections vary state-by-state, so we recommend checking your local laws and protections.
Medical Data
In recent years, legislatures across the country have moved to restrict access to and ban transgender healthcare. Legality is likely to continue to shift, especially after the Supreme Court’s green light today in Skrmetti. Many of the concerns around criminalization of transgender healthcare overlap with those surrounding abortion access – issues that are deeply connected and not mutually exclusive. The Surveillance Self-Defense playlist for the abortion access movement is a great place to start when thinking through these risks, particularly the guides on mobile phone location tracking, making a security plan, and communicating with others. While some of this overlaps with the previously linked protest safety guides, that redundancy only underscores the importance.
Unfortunately, much of the data about your medical history and care is out of your hands. While some medical practitioners may have some flexibility over how your records reflect your trans identity, certain aspects like diagnostic codes and pharmaceutical data for hormone therapy or surgery are often more rigid and difficult to obscure. As a patient, it’s important to consult with your medical provider about this information. Consider opening up a dialogue with them about what information needs to be documented, versus what could be obfuscated, and how you can plan ahead in the event that this type of care is further outlawed or deemed criminal.
Account Safety
Locking Down Social Media Accounts
It’s a good idea for everyone to review the privacy and security settings on their social media accounts. But given the extreme amount of anti-trans hate online (sometimes emboldened by the very platforms themselves), this is a necessary step for trans people online. To start, check out the Surveillance Self-Defense guide on social media account safety.
We can’t let the threats posed by technology diminish our humanity and our liberation.
In addition to reviewing your account settings, you may want to think carefully about what information you choose to share online. While visibility of queerness and humanity is a powerful tool for destigmatizing our existence, only you can decide whether the risk involved with sharing your face, your name, and your life outweighs the benefit of showing others that, no matter what happens, trans people exist. There’s no single right answer—only what’s right for you.
Keep in mind also that LGBTQ expression is at significantly greater risk of censorship by these platforms. There is little individuals can do to fully evade or protect against this, underscoring the importance of advocacy and platform accountability.
Dating Apps
Dating apps also pose a unique set of risks for transgender people. Transgender people experience intimate partner violence at a staggeringly higher rate than cisgender people–meaning we must take special care to protect ourselves. This guide on LGBTQ dating app safety is worth reading, but here’s the TLDR: always designate a friend as your safety contact before and after meeting anyone new, meet in public first, and be mindful of how you share photos with others on dating apps.
Safety and Liberation Are Collective Efforts
While bodily autonomy is under attack from multiple fronts, it’s crucial that we band together to share strategies of resistance. Digital privacy and security must be considered when it comes to holistic security and safety. Don’t let technology become the tool that enables violence or restricts the self-determination we all deserve.
Trans people have always existed. Trans people will continue to exist despite the state’s efforts to eradicate us. Digital privacy and security are just one aspect of our collective safety. We can’t let the threats posed by technology diminish our humanity and our liberation. Stay informed. Fight back. We keep each other safe.
Apple to Australians: You’re Too Stupid to Choose Your Own Apps
Apple has released a scaremongering, self-serving warning aimed at the Australian government, claiming that Australians will be overrun by a parade of digital horribles if Australia follows the European Union’s lead and regulates Apple’s “walled garden.”
The EU’s Digital Markets Act is a big, complex, ambitious law that takes aim squarely at the source of Big Tech’s power: lock-in. For users, the DMA offers interoperability rules that let Europeans escape US tech giants’ walled gardens without giving up their relationships and digital memories.
For small businesses, the DMA offers something just as valuable: the right to process their own payments. That may sound boring, but here’s the thing: Apple takes 30 percent commission on most payments made through iPhone and iPad apps, and they ban app makers from including alternative payment methods or even mentioning that Apple customers can make their payments on the web.
All this means that every euro a European Patreon user sends to a performer or artist takes a round-trip through Cupertino, California, and comes back 30 cents lighter. Same goes for other money sent to major newspapers, big games, or large service providers. Meanwhile, the actual cost of processing a payment in the EU is less than one percent, meaning that Apple is taking in a 3,000 percent margin on its EU payments.
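To make the arithmetic behind that figure concrete, here is a small illustrative calculation using the approximate numbers cited above (a 30 percent commission versus a roughly one percent real processing cost); the exact processing cost varies by provider:

```typescript
// Illustrative arithmetic only, using the approximate figures cited in the text.
const appleCommission = 0.30; // Apple's cut of an in-app payment
const processingCost = 0.01;  // rough real cost of processing a payment in the EU (< 1%)

const payment = 1.0;                          // one euro sent through an iPhone app
const appleTake = payment * appleCommission;  // €0.30 kept by Apple
const actualCost = payment * processingCost;  // ~€0.01 of real processing cost
const markup = ((appleTake - actualCost) / actualCost) * 100;

console.log(`Apple keeps €${appleTake.toFixed(2)} of each €${payment.toFixed(2)}`);
// Roughly 2,900% – the "3,000 percent" order of magnitude described above.
console.log(`Markup over the real processing cost: ~${Math.round(markup)}%`);
```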
To make things worse, Apple uses “digital rights management” to lock iPhones and iPads to its official App Store. That means that Europeans can’t escape Apple’s 30 percent “app tax” by installing apps from a store with fairer payment policies.
Here, too, the DMA offers relief, with a rule that requires Apple to permit “sideloading” of apps (that is, installing apps without using an app store). The same rule requires Apple to allow its customers to choose to use independent app stores.
With the DMA, the EU is leading the world in smart, administrable tech policies that strike at the power of tech companies. This is a welcome break from the dominant approach to tech policy over the first two decades of this century, in which regulators focused on demanding that tech companies use their power wisely – by surveilling and controlling their users to prevent bad behavior – rather than taking that power away.
Which is why Australia is so interested. A late 2024 report from the Australian Treasury took a serious look at transposing DMA-style rules to Australia. It’s a sound policy, as the European experience has shown.
But you wouldn’t know it by listening to Apple. According to Apple, Australians aren’t competent to have the final say over which apps they use and how they pay for them, and only Apple can make those determinations safely. It’s true that Apple sometimes takes bold, admirable steps to protect its customers’ privacy – but it’s also true that sometimes Apple invades its customers’ privacy (and lies about it). It’s true that sometimes Apple defends its customers from government spying – but it’s also true that sometimes Apple serves its customers up on a platter to government spies, delivering population-scale surveillance for autocratic regimes (and Apple has even been known to change its apps to help autocrats cling to power).
Apple sometimes has its customers’ backs, but often, it sides with its shareholders (or repressive governments) over those customers. There’s no such thing as a benevolent dictator: letting Apple veto your decisions about how you use your devices will not make you safer.
Apple’s claims about the chaos and dangers that Europeans face thanks to the DMA are even more (grimly) funny when you consider that Apple has flouted EU law with breathtaking acts of malicious compliance. Apparently, the European iPhone carnage has been triggered by the words on the European law books, without Apple even having to follow those laws!
The world is in the midst of a global anti-monopoly wave that keeps on growing. This decade has seen big, muscular antitrust action in the US, the UK, the EU, Canada, South Korea, Japan, Germany, Spain, France, and even China.
It’s been a century since the last wave of trustbusting swept the globe, and while today’s monopolists are orders of magnitude larger than their early 20th-century forebears, they also have a unique vulnerability.
Broadly speaking, today’s tech giants cheat in the same way everywhere. They do the same spying, the same price-gouging, and employ the same lock-in tactics in every country where they operate, which is practically every country. That means that when a large bloc like the EU makes a good tech regulation, it has the power to ripple out across the planet, benefiting all of us – like when the EU forced Apple to switch to standard USB-C cables to charge their devices, and we all got iPhones with USB-C ports.
It makes perfect sense for Australia to import the DMA – after all, Apple and other American tech companies run the same scams on Australians as they do on Europeans.
Around the world, antitrust enforcers have figured out that they can copy one another’s homework, to the benefit of the people they defend. For example, in 2022, the UK’s Digital Markets Unit published a landmark study on the abuses of the mobile duopoly. The EU Commission relied on the UK report when it crafted the DMA, as did an American Congressman who introduced a similar bill that year. The same report’s findings became the basis for new enforcement efforts in Japan and South Korea.
As Benjamin Franklin wrote, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening mine.” It’s wonderful to see Australian regulators picking up best practices from the EU, and we look forward to seeing what ideas Australia has for the rest of the world to copy.
LGBT Q&A: Your Online Speech and Privacy Questions, Answered
This year, like almost all years before, LGBTQ+ Pride month is taking place at a time of burgeoning anti-LGBTQ+ violence, harassment, and criticism. Lawmakers and regulators are passing legislation restricting freedom of expression and privacy for LGBTQ+ individuals and fueling offline intolerance. Online platforms are also complicit in this pervasive ecosystem by censoring pro-LGBTQ+ speech, forcing LGBTQ+ individuals to self-censor or turn to VPNs to avoid being profiled, harassed, doxxed, or criminally prosecuted. Unfortunately, these risks look likely to continue, threatening LGBTQ+ individuals and the fight for queer liberation.
This Pride, we’re here to help build an online space where you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private.
We know that it feels overwhelming thinking about how to protect yourself online in the face of these issues—whether that's best practices for using gay dating apps like Grindr and Her, how to download a VPN to see and interact with banned LGBTQ+ content, methods for posting pictures from events and protests without outing your friends, or how to argue over your favorite queer musicians’ most recent problematic takes without being doxxed.
That's why this LGBTQ+ Pride month, we’re launching an LGBT Q&A. Throughout Pride, we’ll be answering your most pressing digital rights questions on EFF’s Instagram and TikTok accounts. Comment your questions under these posts on Instagram and TikTok, and we’ll reply directly. Want to stay anonymous? Submit your questions via a secure link on our website and we’ll answer these in separate posts.
Everyone needs guidance and protection from prying eyes. This is especially true for those of us who face consequences when intimate details around gender or sexual identities are revealed without consent. This Pride, we’re here to help build an online space where you get to decide what aspects of yourself you share with others, how you present to the world, and what things you keep private.
No question is too big or too small! But comments that discriminate against marginalized groups, including the LGBTQ+ community, will not be engaged with.
The fight for the safety and rights of LGBTQ+ people is not just a fight for visibility online (and offline)—it’s a fight for survival. Now more than ever, it's essential to collectivize information sharing to not only make the digital world safer for LGBTQ+ individuals, but to make it a space where people can have fun, share memes, date, and build communities without facing repression and harm. Join us to make the internet private, safe, and full of gay pride.
Big Brother's Little Problem | EFFector 37.6
Just in time for summer, EFFector is back—with a brand new look! If you're not signed up, now's a perfect time to subscribe and get the latest details on EFF's work defending your rights to privacy and free expression online.
EFFector 37.6 highlights an important role that EFF plays in protecting you online: watching the watchers. In this issue, we're pushing back on invasive car-tracking technologies, and we share an update on our case challenging the illegal disclosure of government records to DOGE. You'll also find updates on issues like masking at protests, defending encryption in Europe, and the latest developments in the right to repair movement.
Speaking of right to repair: we're debuting a new audio companion to EFFector as well! This time, Hayley Tsukayama breaks down how Washington's new right to repair law fits into broader legislative trends. You can listen now on YouTube or the Internet Archive.
EFFECTOR 37.6 - BIG BROTHER'S LITTLE PROBLEM
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Podcast Episode: Securing Journalism on the ‘Data-Greedy’ Internet
Public-interest journalism speaks truth to power, so protecting press freedom is part of protecting democracy. But what does it take to digitally secure journalists’ work in an environment where critics, hackers, oppressive regimes, and others seem to have the free press in their crosshairs?
[Embedded audio player for this episode, hosted on simplecast.com]
(You can also find this episode on the Internet Archive and on YouTube.)
That’s what Harlo Holmes focuses on as Freedom of the Press Foundation’s digital security director. Her team provides training, consulting, security audits, and other support to newsrooms, independent journalists, freelancers, documentary filmmakers – anyone who is making independent journalism in the public interest – so that they can do their jobs more safely and securely. Holmes joins EFF’s Cindy Cohn and Jason Kelley to discuss the tools and techniques that help journalists protect themselves and their sources while keeping the world informed.
In this episode you’ll learn about:
- The importance of protecting online anonymity on an ever-increasingly “data-greedy” internet
- How digital security nihilism in the United States compares with regions of the world where oppressive and repressive governance are more common
- Why compartmentalization can be a simple, easy approach to digital security
- The need for middleware to provide encryption and other protections that shield sources’ anonymity and journalists’ work product when using corporate data platforms
- How podcasters, YouTubers, and TikTokers fit into the broad sweep of media history, and need digital protections as well
Harlo Holmes is the chief information security officer and director of digital security at Freedom of the Press Foundation. She strives to help individual journalists in various media organizations become confident and effective in securing their communications within their newsrooms, with their sources, and with the public at large. She is a media scholar, software programmer, and activist. Holmes was a regular contributor to the open-source mobile security collective Guardian Project, where she spearheaded the media metadata verification initiative currently empowering ProofMode, Save by OpenArchive, eyeWitness to Atrocities, and others.
Resources:
- SecureDrop
- The Tor Project
- EFF: “Privacy Isn't Dead. Far From It." (Feb. 13, 2024)
- Digital Dada Podcast: “Combatting Digital Security Nihilism featuring Harlo Holmes” (Dec. 20, 2023)
- Reuters: “Inside the UAE’s secret hacking team of American mercenaries” (Jan. 30, 2019)
What do you think of “How to Fix the Internet?” Share your feedback here.
Transcript
HARLO HOLMES: within the sphere of public interest journalism. The reason why it exists is because it holds truth to power and it doesn't have to be adversarial, although, that's our right as citizens on this planet, but it doesn't have to be adversarial. And over the tenure that I've had, I've seen so many amazing examples where effecting change through public interest journalism done right, with the most detail paid to the operational and digital security of an investigation, literally ended up with laws being changed and legislation being written in order to make sure the problem that the journalist pointed out does not happen again.
One of my favorites is with Reuters. They wrote a story about how members of the intelligence community in Washington DC, after they had left Washington DC, were being actively poached by intelligence services in the UAE.
So it would take, like, leaving members of the people working in Washington DC, place them in cushy intelligence jobs at the UAE in order to, like, work on programs that we know are like, surveillance heavy, antithetical to all of our interests, public interest as well as the interest of the United States government.
And when that reporting came out, literally like, uh, Congress approved a bill saying that you have to wait three years before you can go through that revolving door rotation.
And that's the trajectory that makes me the most proud to work where I do.
CINDY COHN: That's Harlo Holmes talking about some of the critically important journalism that she is able to help facilitate in her role with the Freedom of the Press Foundation.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.
JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.
CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech -- we're here to challenge ourselves, our guests and our listeners to imagine the better future that we could be working towards. What can we look forward to if we dare to dream about getting things right?
JASON KELLEY: Our guest today, Harlo Holmes, is the chief information security officer and the director of digital security at the Freedom of the Press Foundation where she teaches journalists how to keep themselves – and their sources – safe online.
CINDY COHN: We started off by getting Harlo to explain exactly how the Freedom of the Press Foundation operates.
HARLO HOLMES: What we do, I like to say, is a three-pillared approach to protecting press freedom in the 21st century. The first, absolutely most important is our advocacy team. So not only do we have a staff of lawyers and legal scholars that weigh in on First Amendment issues and protect them within the United States, we also have a fantastic advocacy team at our own little newsroom, the US Press Freedom Tracker, where we have reporters who, anytime members of the press have their right to perform their rightful function challenged, minimized, persecuted, et cetera, we have reporters who are there who report on it, and we stay with those cases for as long as it takes.
And that's something that we're incredibly proud of. That's just one pillar. The other pillars that we have, is our engineering wing. So perhaps you have heard of a tool called SecureDrop. In certain newsrooms all over the planet, it's actually installed in order to technologically enable, as much anonymity as, uh, technically possible. Between reporters at those newsrooms and members of the press at large who might want to be whistleblowers or just to, you know, like, uh, say hey to, a news outlet that they admire in a way that ensures their anonymity.
And then there is my small team. We are the digital security team. Uh, we do a lot of training, consulting, security audits, and other supports that we can provide to newsrooms, independent journalists, freelancers, documentary filmmakers, anyone who is making independent journalism in the public interest in order to do their job more safely and securely.
CINDY COHN: Yeah. I think this is a really important thing that the Freedom of the Press Foundation does. Specifically your piece of it, this kind of connective tissue between the people who are really busy doing the reporting and finding things out and the people who wanna give them information and making sure that this whole thing is protected in a secure way. And I appreciate that you put it third, but to me it's really central to how this whole thing works. So I think that's really important.
And of course, SecureDrop for, you know, old time EFF and digital rights people – we know that this piece of technology was developed by our friend Aaron Schwartz, before he passed away. And the Freedom Press Foundation has picked it up and really turned it from a good but small idea into something that is vital and in newsrooms all around the world.
HARLO HOLMES: Yes. And thank you very, very much, for recognizing those particular achievements. SecureDrop has grown over the past, what, 12 years? I would say, into not only a tool that enables, the groundbreaking amount of journalism that has pretty much changed the trajectory of current events over the years that it's been developed, but also, represents increasing advances in technology around security that everyone on the planet benefits from. So for example, SecureDrop would not be anywhere were it not for its deep collaboration with the Tor Project, right?
And for all of us who pay attention to digital security, cryptography, and the intersection with human rights, you know that the Tor Network is a groundbreaking piece of technology that not only provides, you know, anonymity on the internet in an increasingly, like, data-greedy environment, but also, like, represents the ways that people can access parts of the internet in so many different innovative ways. And investigative journalism use in SecureDrop is just one example of the benefits of, like, having Tor around and having it supported.
And so, that's one example. Another example is that, as people's interactions with computers change, uh, the way that we interface with browsers changes. The interplay between, you know, like using a regular computer and accessing stuff on mobile, that's changed, right?
And so our team has, like, such commendable intellectual curiosity in talking about these nuances and finding ways to make people's safety using all of these interfaces better. And so even though, we build Secure Drop in service of promoting public interest journalism, the way that it reverberates in technology is something that we're incredibly proud of. And it's all done in open source, right? Which means that anyone can access it. Anyone can iterate upon it, anyone can benefit from it.
CINDY COHN: Yeah, and it, and everyone can trust it. 'cause you know, you might not be able to read the code, but many people can. And so developing this trust and security, you know, they go hand in hand.
HARLO HOLMES: Yes,
JASON KELLEY: You use this term "data-greedy," which I really love. I've never heard that before.
CINDY COHN: It's so good!
JASON KELLEY: So you just created this incredible term "data-greedy" that I've never heard anyone use and I love and it's a good descriptor, I think of sort of like why journalists, but also everyone needs to be aware of like the tracks that they're leaving, the digital security practices that they use because it's not even necessarily the case that that data collection is intended to be harmful, but we just live in this environment where data is collected, where it's, you know, used sometimes intentionally to track people, but often just for other reasons.
Let's talk a little bit about that third pillar. What is it that journalists specifically need to think about in terms of security? I think a lot of people probably who have never done journalism, don't really think about the dangers of collecting information, of talking to sources of, you know, protecting that, how, how should they be thinking about it and what are the kinds of things that you talk to people about?
HARLO HOLMES: Great question. First and foremost, I feel that our team at Freedom of the Press Foundation, leads every training with the assumption that a journalist's job is to tell the story in the most compelling and effective way. Their job is not to concern themselves with what data stewardship means.
What protection of digital assets means. That's our job. And so, we really, really lean into meeting people where they are and just giving them exactly what it is that they need to know in order to do this job better without putting undue pressure on them. And also without scaring the bejesus out of anyone.
Because when you do take stock of like how data greedy all of our devices are, it can be a little bit scary to the point of making people feel disempowered. And so we always want to avoid that.
CINDY COHN: What are some techniques you use to try to avoid that? 'Cause I think that's really central to a lot of work that we're trying to do to try to get people beyond what I think my colleague Eva Galperin called “privacy nihilism.” I'm not sure if she started it. She's the one who I heard it from.
HARLO HOLMES: I probably have heard that from her as well. I love, Eva and, uh, she has been so instrumental in the way that I think through these issues over the past like decade so yeah, digital security nihilism is 100% a thing.
And, perhaps maybe later we can get into like the regional contours of that because people in the United States have or exhibit a certain amount of nihilism. And then if you talk to people in like Central and Eastern Europe, it's a different way. If you talk to people in Latin America and South America, it's a different way.
So having that perspective actually like really helps the contours around how you approach people in digital security education and training..
CINDY COHN: Oh please, tell us more. I'm fascinated by this.
HARLO HOLMES: OK, so, I do want to come back to your original question, but, that said, I can definitely do a detour into the historicity of, um, digital security nihilism and how it interplays with where you are on the planet.
It's all political and in the United States we have, well, even though we're currently like in a bit of a, or in a bit of a, in a crisis mode, where we are absolutely looking at, you know, like our rights to privacy, the concessions that we make, our prominence in building these technologies and thus having a little bit of, like, insider knowledge of what the contours are.
Uh, if you compare that to the digital security protections of people who are in, let's say, you know, like Central or Eastern Europe, where, historically, they have never had or not for, you know, like decades, um, if not even like, you know, a hundred years. Um, that access to transparency about what's being done to their data and also transparency into how that data has been taken away from them because they didn't have a seat at the table.
If you look at places in Latin America, Central America, South America, there are plenty of places where loss of digital security also comes hand in hand with loss of physical security, right? Like speaking to someone over the phone can often, especially where journalists are concerned, come with a threat of physical violence, often to the most extreme. Right. So, yeah, exactly. Which is, you know, according to, um, so many, you know, like academics and scholars who focus on press freedom, know that, that that is one of the most dangerous places on the planet to be a journalist because failures in digital security can often come with literally, you know, like being summarily executed, right? So, every region on this planet has their own contours. It is constantly a fascinating challenge and one that I'm willing to meet in order to understand these contexts and to appropriately apply the right digital security solutions to the audiences that we find ourselves in front of.
CINDY COHN: Yeah. Okay. Back to my original question, sorry.
HARLO HOLMES: Go for it.
JASON KELLEY: Well, what, what is, I mean, did we get to the point? I don't think we really covered yet, really the basics of, like, what journalists need to think about in terms of their security. I mean, that's, you know, I, I, I love talking about privacy nihilism and how we can fight it, but, um, we would talk for three hours if we did that.
HARLO HOLMES: Yeah. Um, so quite frankly, one of the things that we're leaning most heavily on, and this is pretty much across the board, right, has to do with compartmentalization. I feel that, uh, recently within the United States, it's become really like technicolor to people. So they understand exactly why that's important, but it's always been important and it's always like something that you can apply everywhere.
There's always historically been a tension, uh, since the very moment the first iPhone stepped onto the market: this temptation to go the easy route. Everything is on the same device. You're calling your mom. You're, you know, like researching a flight on Expedia. You're, you know, Googling something. And then you're also talking to a source about a sensitive story, or you're also like, you know, gonna like, go through the comments in the Google Doc on the report that you're writing regarding a national security issue.
People definitely do need to be encouraged to like decouple the ways that they treat devices because these devices are not our friends. And the companies that like, create the experiences on these devices, they are definitely not our friends. They never have been.
But I hear you on that and, uh, reminding people, despite their digital security nihilism, despite their temptation to do the easiest of things, just reminding people to apply appropriate compartmentalization.
We take things very slowly. We take things as easily as we possibly can because there are ways that people can get started in order to, actually be effective at this until they get to the point where it actually means something either to their livelihoods or the story that they're working on and that of the sources that they, interact with. But yeah, that's pretty much where it starts.
Also, credential security is like the bread and butter. And I've been at this for, almost exactly 10 years at FPF and, you know, within this industry for about 15.
And it never changes that people really, really do need to maintain as much rigor regarding how people access their accounts. So like, you gotta have a unique, complex password. You have to be using a password manager. You have to be using multifactor authentication. And the ways that you can get it have changed over the years and they get better and better and better.
You have to be vigilant against phishing, but the ways that people try to phish you are like, you know, increasingly, like, sneakier. You know, we deal with it as it comes, but ultimately that has never changed. It really hasn't.
CINDY COHN: So we've, we've talked a little bit about kind of the nihilism and the kind of, thicket of things that you have to kind of make your way through in order to, help, journalists and their sources feel more secure. So let's flip it a bit. What does it look like if it's better? What are the kinds of places where you see, you know, if we could get this right, it would start to get better?
HARLO HOLMES: I love this question because I do feel that I've been able to look at it from multiple sides. Similarly, as I was describing how Secure Drop not only enables impactful public interest journalism, it represents a herculean feat of cryptography and techno activism. This is one example, Signal is another example.
So, one of the things I thought was so poignant when, as Joe Biden was exiting the White House, one of his, like, parting shots was to say like, everyone should use Signal. Like, and the reason why he says this is because, Signal not only represents like a cool app or like, you know, a thing that, like, hackers love and you know, like we can be proud of 'cause we got in on the first floor.
It represents the evolution of technologies that we should have. Our phone conversations had not been encrypted. Now they are. Get with it. You know, like that's the point. So from a technical perspective, that's what is so important and that's something that we always want to position ourselves to champion.
JASON KELLEY: Let's take a quick moment to say thank you to our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You can become a member for just $25 and for a little more, you can get some good, very stylish gear. Your support is the reason we can keep our digital security guides for journalists, and everyone else, up to date to deal with the latest threats. So please, if you like what we do, go to eff.org/pod to donate.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.
[Who Broke the Internet trailer]
JASON KELLEY: And now back to our conversation with Harlo Holmes.
Are there tools that are missing that, in a better world you're like, oh, this would be great to have, you know, or things that maybe couldn't exist without changes to technology or to the way that people, work or to policy that you just absolutely hear, you know, oh, it would be nice if we could do this, but for whatever reason, that's not a place we're at yet.
HARLO HOLMES: Yeah. Actually I have started to have a couple of conversations about that. Um, before I answer, I will say that I don't have, like, the bandwidth or time to be a technologist. Um, it's like my code writing days are probably over, but I have so many opinions.
JASON KELLEY: Of course. So many ideas.
HARLO HOLMES: Yeah. Um -
CINDY COHN: Well, we're your audience, right? I mean, you know, the EFF audience are people who, you know, uh, not overwhelmingly, um, but a lot of people with technical skills who are trying to figure out, okay, how do I, how do I apply them to do good? And, and, and I think, you know, over the years we've seen a lot of really well-meaning efforts by technologists to try to do something to support other communities that weren't grounded enough in those communities, and so didn't really work.
And I think your work at Freedom of the Press Foundation, again, has kind of bridged that gap, especially for journalists. But there's, there's broader things. so where else could you see places where technologists could really dig in and have this work in a way that sometimes it does, but often it doesn't.
HARLO HOLMES: I love that question because that is exactly the point, right? Bridging the gap. And I feel that like at FPF, given, you know how I introduce it with like the three pillars or whatever, we are uniquely poised in order to perform, like, you know, user research within a community, right? And then have that directly inform technology mandates, have that directly inform advocacy, uh, like, you know, charge to action.
So I think anyone who finds themselves at those cross sections, that's exactly what you have to kind of, like, strategize around in order to be as effective as possible. In terms of, like, actual technologies, one thing – and I already kind of started having these conversations with people – is let's take our relationship within a typical newsroom to cloud services like Google when you are drafting, right? I mean it's anecdotal, and like the plural of anecdote is not data, right. But that said, we do know that given that, you know, Google's Drive has so much machine learning and AI-enabled power, drafting a story that's like the next Watergate, right? Like that's actually going to get you put in jail before you get to publish, right?
Because we know about their capabilities. And, not gonna, like, talk about specific anecdotes, but like that is a thing, right? But one of the things, or like the big contention is that actually, like, in terms of collaboration, how effective you can be writing a story, how like, you know, you rely on the comments section with your editor, right, as you're, you know, massaging a story. You rely on those features as much.
What are open source, like, you know, hacker-ethos alternatives? We have, you know, we have Nextcloud, we have, uh, CryptPad, we have Etherpad. But all of those things are insufficient, not only, like, in terms of their feature set and what needs to be accommodated in order for a journalist to work, right, but also in terms of their sustainability models, the fact that we can rely upon them in the future. And as much as we love all of those people at those developer initiatives, no one is supporting them to make sure that they can be in a place to be a viable alternative, right?
So, what's the next frontier, right? If I don't want to live in a world where a Nextcloud doesn't exist, where a CryptPad doesn't exist, or an Etherpad, like that's not what I'm saying, 'cause they're fantastic and they're really great to use in creative scenarios.
However, if you're thinking about the meat-and-potatoes day to day in a typical newsroom, you have to contend with a tech giant like Google that has become increasingly, like, ideologically unreliable. Guess what? They actually do have a really cool tool called client-side encryption, right? So now you're actually, like, removing the people who decide at Google what is ideologically acceptable use of their tools, right? You're removing them from the position where they can make any decision or scrutinize further.
And client-side encryption, or like anything that provides end-to-end encryption, that is like the ultimate goal. That's what we should protect. Whether it is in SecureDrop, whether it is in Nextcloud or CryptPad, or if it's in Google itself. And so actually, I would recommend, like, anybody who has these spare cycles to contribute to a developer effort to tackle this type of middleware that allows us to still have as much autonomy as possible within the ecosystems that we have to kind of work within.
CINDY COHN: I love that. I mean, it’s a story about interoperability, right? This, you know, what you're talking about, middleware in this area is like, we should be able to make a choice. Either use things that are not hosted corporately or use things that are hosted corporately, but be able to have our cake and eat it too.
Have a tool that helps us interoperate with that system without the bad parts of the deal, right. And in this instance, the bad parts of the deal are a piece of it, it’s the business model, but a piece of it is just compliance with, with government in a way that the, the company, is increasing, you know, used to fight. They still fight some.
HARLO HOLMES: They still fight, yes.
CINDY COHN: They might fight, yes, but they also don't have the ability to fight that much. We might wanna go to something that's a little, that, that gives them the ability to say, look, we don't have access to that information. Just like Apple doesn't have access to the information that's stored on your iPhone. They made a policy decision to protect you.
HARLO HOLMES: But now we're looking at what happened in the UK, and we’re like, hm.
CINDY COHN: Exactly, but then the government has to act, you know, so it's always a fight on the technical level, and on the policy level, sadly. I wish that encryption we could, you know, fix with just technology. But we need other forms of protection. but I love this idea of having so many options, you know, some that are decentralized, some that are hosted, you know, in the nonprofit world, some that might be publicly supported, and then some that are the corporate side, but with the protections that we need.
And I just feel like we need all of the above. Anybody who asks you to choose between these strategies is kind of getting you caught in a side fight when the main fight is how do we get people the privacy that they need to do their work?
HARLO HOLMES: Yeah. Yeah. And one of the things that gives me the most hope is, continuing to fight in a space where we are all about the options.
We're all about giving people options and being as creative as possible and building options for everyone.
JASON KELLEY: What else gives you hope? Because you've been at Freedom of the Press for a while now, and we're at a difficult time in a lot of ways, but I assume there are other things that you've, you know, seen change over the years in a positive way, right? Because it feels too easy to say, look, things are getting dire, because in many ways they are. But, but what else gives you hope, given how long you've been working on this issue?
HARLO HOLMES: I actually, I love really thinking through the new challenges of other types of media that is represented. So much of my career had been, pretty much centered around traditional print and/or digital. However, I am so enthusiastic about being alongside, like, podcasters and YouTube creators as they navigate these new challenges and also understand, like, the long history of media theory, where we've gone as an industry in order to understand how it applies to them.
So one thing that I thought was pretty cool was having a conversation, recently, with a somewhat influential, TikTok person about class consciousness in regards to whether or not people who are influencers should actually start considering themselves as journalists legitimately.
And one of the things that I mentioned had to do with the fact that, you know, like in the 2010s, bloggers were not considered quote unquote journalists, and yet blogging has become one of the most influential, even like from a financial perspective, like, drivers within this market. So influencers should not consider themselves anything other than journalists, because their fights are – especially like when, you know, platforms get involved and like what their economic model looks like and their, you know, integrity and ethos within journalism – like, that's the media history that we are building right now. So that excites me.
CINDY COHN: Oh, that's great. You know, EFF was involved in some of the early cases about whether bloggers could be protected by journalism shield laws, we had a case called Apple v. Does a long time ago that, uh, that helped establish that in the state of California. But I, I really love helping, kind of, new media think of itself as media.
And also, I mean, the way that I always think about it is, it's not whether you're a journalist, it's whether you're doing journalism, right? It's the verb part. And that different framing, I think, helps break people out of the mold of, well, I do some stuff that's just kind of silly, and that might not be journalism, but if you're bringing news to the public, if you're bringing information to the public that the public wants, even if it's in a fashion context, like, that's journalism and it should have, uh, you should think of yourself that way because there is this rich history of how we protect that and how important that is to society, not just about the kind of hard political issues, but actually, you know, in creating and shaping and managing our culture as well.
HARLO HOLMES: Mm-hmm. I agree 100%.
JASON KELLEY: How did you end up doing this kind of digital security work specifically for journalists? Did you make an intentional choice at some point that you wanted to help journalists, or have you sort of found yourself here and it's just incredible, important work?
HARLO HOLMES: A little bit of both. I'm an avid media consumer who cares a lot about media history, and in undergraduate school I studied comparative literature, which is all based off of the fact that the media itself has its own unique power. And the way that it is expressed says way more than what is actually said.
And I've always found that to be the most important thing to do. As far as technology is concerned, as any young inquisitive person might do, I got into coding like so hardcore and, it wasn't until I was in grad school that I discovered, via a class with this fantastic person, Nathan Freitas, who's a Harvard, uh, Berkman Fellow Emeritus, and also the head of the Guardian Project, where he opened my eyes to the fact that like the code that you're writing, just like, you know, for fun or whatever, like you can actually use this to defend human rights.
And so it was kind of the culmination of those ideas that led me through, like, a couple of things. Like, um, I was an OpenNews fellow at, um, the New York Times for about a year where I worked with the computer-assisted reporting team and that was really impressive. And that was the first time where I got to see how people will, like, scrape a webpage in order to write an investigative story.
And I was like, wow, people do that, that's so cool! And then also because I was hanging out with like Nathan and other folks, um, I was one of the kids on the newsroom floor who knew what Tor was, and they're like, that's cool. How do we use this in journalism? I'm like, well, I got ideas. And that's how, kind of how my career got started.
CINDY COHN: That's so great. Nathan's an old friend of EFF. That's so fun to hear the tentacles of how, you know, people inspire other people. Inspire other people. I think that's part of the fun story of digital rights.
HARLO HOLMES: Yeah, yeah. I agree. I think anyone is super duper lucky to understand not only like the place that you occupy right now, but also where it sits within, like, a long history. And, I also really love, any experience where I get to kind of touch people with that as well.
CINDY COHN: Nice. Ooh, that's a nice place to end. What do you think, Jason?
JASON KELLEY: That sounds great. Yeah. And think of all the people who are saying the same thing about you now that you're saying about Nathan. Right. It never stops.
HARLO HOLMES: It shouldn't ever stop. It shouldn't. This is our history.
CINDY COHN: Oh, Harlo, thank you so much for coming and spending time with us. It's just been a delight to talk to you and good luck going forward. The times really need people like you.
HARLO HOLMES: Thank you so much. Um, it's always a pleasure to talk to you and, um, I love your pod. I love the work that you do, and I'll, you know, see you next time.
JASON KELLEY: Well, I'm really glad that we got a chance to talk to Harlo because these conversations with folks who work in these, um, specific areas with people are really helpful when, you know, it's not our job every day to talk to journalists, just like it's not our job every day to talk to specific advocates about specific issues. But you learn exactly what the kinds of things are that they think about and what we need to get things right and what it'll look like if we do get things right for journalists or, or whomever it is.
CINDY COHN: Yeah, and I think the thing that I loved about the conversation is the stuff that she articulated is stuff that will help all of us. You know, it's a particular need for journalists. But when, you know, when we asked her, you know, what kind of tools need to exist, you know, she pointed, you know, not only to the open source decentralized tools like Etherpad and things like that, but to basically an interoperability issue: making Google Docs secure, so that Google doesn't know what you're saying on your Google Docs. And I would toss Slack in there as well. That, you know, taking the tools that people rely on every day and building in things that make them secure against the company and against government coming and strong-arming the company into giving them information, like that's a tool that will be really great for journalists, and I can see that. It'll also help all the rest of us.
JASON KELLEY: Yeah.
CINDY COHN: And the, you know, the other thing she said when she was giving, you know, what advice do you give to journalists, like off the top? She said, well, use separate devices for the things that you're doing and don't have everything on one device, you know, because, uh, I think I love the, what did she say, they're data-hungry?
JASON KELLEY: Data-greedy.
CINDY COHN: Data-greedy, even better. That our devices are data greedy. So separating them gives us something. That's a useful piece of information for anyone who’s in activism.
JASON KELLEY: Yeah. And so, I mean, I, I wanna say easy. It's not always simple to have two devices, but the idea that the solution wasn't something more complicated. It reminds me that often the best advice is something that's fairly simple and that really, you know, anyone who has the ability and the money could have multiple devices and, and journalists are no different.
So it reminded me also that, you know, when we're working on things like our Surveillance Self-Defense guides, it's helpful to remember that, like Harlo said, her job is to make the journalist’s job easy, right? They shouldn't have to think about this stuff. And that's sort of the spirit of the guides that we write as well.
And that was just a really good reminder that sometimes you feel like you're trying to convince everyone, or explain to them how all these tools work and actually it might be better to think about, well, you shouldn't have to understand all of this deeply like I do. In some cases you just need to know that this works and that's what you need to use.
CINDY COHN: Yeah, I think that's right and I, you know, obviously, you know, ‘just go out and buy a second device’ isn't advice that we would give to people in parts of the world where that's a really a prohibitive suggestion. But there are many parts of the world, and journalists, many of them, live in them, where it is actually not that hard a thing to do to get yourself a burner phone or get a simpler phone for your work, rather than having to try to, you know, configure one device to really support all of those things.
And turning on 2FA, right? Turning on two-factor authentication. Another thing that is just good advice for anybody. So, you know, what I'm hearing is that, you know, if we build a place that is better for journalists, it's better for all of us and vice versa. If we build a world that's better for all of us, it's also better for journalists. So, I really liked that. I also really liked her articulating and lifting up the role that the Tor Project plays in what they do with SecureDrop. What they do to try to help protect journalists who have, uh, confidential sources.
Because we're, again, as we're looking into all of these various tools that help create a better future, a more secure future, we're discovering that actually open source tools, like Tor, underlie many different pieces of the better world. And so we're starting to see kind of the network for good, right, the conspiracy for good of a lot of the open source security projects.
JASON KELLEY: I didn't really realize when we were putting together these guests for this season, how interconnected they all were, and it's been really wonderful to hear everyone lift everyone else up. They really do all depend on one another, and it is really important to see that for the people who maybe don't think about it and use these tools as one-offs, right?
CINDY COHN: Yeah. And I think as those of us who are trying to make the internet better, recognizing that we're all in this together, so as we're headed into this time, where we're seeing a lot of targeted attacks on different pieces of a secure world. You know, recognizing that these things are interconnected and then building strength from there seems to me to be a really important strategy.
JASON KELLEY: And that's our episode for today. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.
CINDY COHN: And I'm Cindy Cohn.
MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Sound design, additional music and theme remixes by Gaetan Harris.
Connectivity is a Lifeline, Not a Luxury: Telecom Blackouts in Gaza Threaten Lives and Digital Rights
For the third time since October 2023, Gaza has faced a near-total telecommunications blackout—plunging over 2 million residents into digital darkness and isolating them from the outside world. According to Palestinian digital rights organization 7amleh, the latest outage began on June 11, 2025, and lasted three days before partial service was restored on June 14. As of today, reports from inside Gaza suggest that access has been cut off again in central and southern Gaza.
Blackouts like these affect internet and phone communications across Gaza, leaving journalists, emergency responders, and civilians unable to communicate, document, or call for help.
Cutting off telecommunications during an active military campaign is not only a violation of basic human rights—it is a direct attack on the ability of civilians to survive, seek safety, and report abuses. Access to information and the ability to communicate are core to the exercise of freedom of expression, press freedom, and the right to life itself.
The threat of recurring outages looms large. Palestinian digital rights groups warn of a complete collapse of Gaza’s telecommunications infrastructure, which has already been weakened by years of blockade, lack of spare parts, and now sustained bombardment.
These blackouts systematically silence the people of Gaza amidst a humanitarian crisis. They prevent the documentation of war crimes, hide the extent of humanitarian crises, and obstruct the global community’s ability to witness and respond.
EFF has long maintained that governments and occupying powers must not disrupt internet or telecom access, especially during times of conflict. The blackout in Gaza is not just a local or regional issue—it’s a global human rights emergency.
As part of the campaign led by 7amleh to #ReconnectGaza, we call on all actors, including governments, telecommunications regulators, and civil society, to demand an end to telecommunications blackouts in Gaza and everywhere. Connectivity is a lifeline, not a luxury.
Google’s Advanced Protection Arrives on Android: Should You Use It?
With this week’s release of Android 16, Google added a new security feature to Android, called Advanced Protection. At-risk people—like journalists, activists, or politicians—should consider turning it on. Here’s what it does, and how to decide if it’s a good fit for your security needs.
To get some confusing naming schemes clarified at the start: Advanced Protection is an extension of Google’s Advanced Protection Program, which protects your Google account from phishing and harmful downloads, and is not to be confused with Apple’s Advanced Data Protection, which enables end-to-end encryption for most data in iCloud. Instead, Google's Advanced Protection is more comparable to the iPhone’s Lockdown Mode, Apple’s solution to protecting high risk people from specific types of digital threats on Apple devices.
Advanced Protection for Android is meant to provide stronger security by enabling certain features that aren’t on by default, disabling the ability to turn off features that are enabled by default, and adding new security features. Put together, this suite of features is designed to isolate data where possible, and reduce the chances of interacting with insecure websites and unknown individuals.
For example, when it comes to enabling existing features, Advanced Protection turns on Android’s “theft detection” features (designed to protect against in-person thefts), forces Chrome to use HTTPS for all website connections (a feature we’d like to see expand to everything on the phone), enables scam and spam protection features in Google Messages, and disables 2G (which helps prevent your phone from connecting to some Cell Site Simulators). You could go in and enable each of these individually in the Settings app, but having everything turned on with one tap is much easier to do.
Advanced Protection also prevents you from disabling certain core security features that are enabled by default, like Google Play Protect (Android’s built-in malware protection) and Android Safe Browsing (which safeguards against malicious websites).
But Advanced Protection also adds some new features. Once turned on, the “Inactivity reboot” feature restarts your device if it has been locked for 72 hours. A phone that has been running for a while since its first unlock is easier to pull data from; forcing a reboot puts everything back into a fully encrypted state, behind biometric or PIN access. Advanced Protection also turns on “USB Protection,” which restricts any new USB connection to charging only while the device is locked, and it prevents your device from auto-reconnecting to unsecured Wi-Fi networks.
As with all things Android, some of these features are limited to select devices, or only to phones made by certain manufacturers. Memory Tagging Extension (MTE), which attempts to mitigate memory vulnerabilities by blocking unauthorized access, debuted on Pixel 8 devices in 2023 and is only now showing up on other phones. This segmentation of features makes it a little difficult to know exactly what your device is protecting against if you’re not using a Pixel phone.
Some of the new features, like the ability to generate security logs that you can then share with security professionals in case your device is ever compromised, along with the aforementioned insecure network reconnect and USB protection features, won’t launch until later this year.
It’s also worth considering that enabling Advanced Protection may impact how you use your device. For example, Advanced Protection disables the JavaScript optimizer in Chrome, which may break some websites, and since Advanced Protection blocks unknown apps, you won’t be able to side-load apps. There’s also the chance that some of the call screening and scam detection features may misfire and flag legitimate calls.
How to Turn on Advanced Protection
Advanced Protection is easy to turn on and off, so there’s no harm in giving it a try. Advanced Protection was introduced with Android 16, so you may need to update your phone, or wait a little longer for your device manufacturer to support the update if it doesn’t already. Once you’re updated, to turn it on:
- Open the Settings app.
- Tap Security and Privacy > Advanced Protection, and enable the option next to “Device Protection.”
- If you haven’t already done so, now is a good time to consider enabling Advanced Protection for your Google account as well, though you will need to enroll a security key or a passkey to use this feature.
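For developers, Android 16 also appears to include an API that lets apps check whether the user has turned on Advanced Protection, so an app can default to stricter behavior of its own. The snippet below is a hedged sketch in Kotlin, assuming the AdvancedProtectionManager system service documented for API level 36; confirm the exact class, method, and any required manifest permission against the current Android documentation before relying on it.

```kotlin
import android.content.Context
import android.os.Build
import android.security.advancedprotection.AdvancedProtectionManager

// Sketch (assumes Android 16 / API level 36): ask the system whether the user
// has Advanced Protection enabled, so the app can harden its own behavior.
// Querying this state may also require a manifest permission; check the
// current Android documentation before shipping anything like this.
fun isAdvancedProtectionOn(context: Context): Boolean {
    if (Build.VERSION.SDK_INT < 36) return false  // API introduced in Android 16
    val manager = context.getSystemService(AdvancedProtectionManager::class.java)
        ?: return false  // service not available on this device
    return manager.isAdvancedProtectionEnabled
}
```

If the check returns true, an app might, for example, disable its own riskier features or switch to stricter defaults to match the user's intent.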
We welcome these features on Android, as well as the simplicity of its approach to enabling several pre-existing security and privacy features all at once. While there is no panacea for every security threat, this is a baseline that improves security on Android for at-risk individuals without drastically altering day-to-day use, which is a win for everyone. We hope to see Google continue to push new improvements to this feature, and to see different phone manufacturers support Advanced Protection where they don’t already.
EFF to NJ Supreme Court: Prosecutors Must Disclose Details Regarding FRT Used to Identify Defendant
This post was written by EFF legal intern Alexa Chavara.
Black box technology has no place in the criminal legal system. That’s why we’ve once again filed an amicus brief arguing that both the defendant and the public have a right to information regarding face recognition technology (FRT) that was used during an investigation to identify a criminal defendant.
Back in June 2023, we filed an amicus brief along with Electronic Privacy Information Center (EPIC) and the National Association of Criminal Defense Lawyers (NACDL) in State of New Jersey v. Arteaga. We argued that information regarding the face recognition technology used to identify the defendant should be disclosed due to the fraught process of a face recognition search and the many ways that inaccuracies manifest in the use of the technology. The New Jersey appellate court agreed, holding that state prosecutors must turn over detailed information to the defendant about the FRT used, including how it works, its source code, and its error rate. The court held that this ensures the defendant’s due process rights with the ability to examine the information, scrutinize its reliability, and build a defense.
Last month, partnering with the same organizations, we filed another amicus brief in favor of transparency regarding FRT in the criminal system, this time in the New Jersey Supreme Court in State of New Jersey v. Miles.
In Miles, New Jersey law enforcement used FRT to identify Mr. Miles as a suspect in a criminal investigation. The defendant, represented by the same public defender in Arteaga, moved for discovery on information about the FRT used, relying on Arteaga. The trial court granted this request for discovery, and the appellate court affirmed. The State then appealed to the New Jersey Supreme Court, where the issue is before the Court for the first time.
As explained in our amicus brief, disclosure is necessary to ensure criminal prosecutions are based on accurate evidence. Every search using face recognition technology presents a unique risk of error depending on various factors: the specific FRT system used, the databases searched, the quality of the photograph, and the demographics of the individual. Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly when involving faces of people of color, especially Black women, as well as trans and nonbinary people.
Moreover, these searches often determine the course of investigation, reinforcing errors and resulting in numerous wrongful arrests, most often of Black folks. Discovery is the last chance to correct harm from misidentification and to allow the defendant to understand the evidence against them.
Furthermore, the public, including independent experts, has the right to examine the technology used in criminal proceedings. Under the First Amendment and its more expansive New Jersey Constitution corollary, the public’s right to access criminal judicial proceedings includes filings in pretrial proceedings, like the information being sought here. That access provides the public meaningful oversight of the criminal justice system and increases confidence in judicial outcomes, which is especially significant considering the documented risks and shortcomings of FRT.