Feed aggregator

MIx helps innovators tackle challenges in national security

MIT Latest News - Tue, 06/24/2025 - 1:35pm

Startups and government defense agencies have historically seemed like polar opposites. Startups thrive on speed and risk, while defense agencies are more cautious. Over the past few years, however, things have changed. Many startups are eager to work with these organizations, which are always looking for innovative solutions to their hardest problems.

To help bridge that gap while advancing research along the way, MIT Lecturer Gene Keselman launched MIT’s Mission Innovation X (MIx) along with Sertac Karaman, a professor in the MIT Department of Aeronautics and Astronautics, and Fiona Murray, the William Porter Professor of Entrepreneurship at the MIT Sloan School of Management. MIx develops educational programming, supports research at MIT, and facilitates connections among government organizations, startups, and researchers.

“Startups know how to commercialize their tech, but they don’t necessarily know how to work with the government, and especially how to understand the needs of defense customers,” explains MIx Senior Program Manager Keenan Blatt. “There are a lot of different challenges when it comes to engaging with defense, not only from a procurement cycle and timeline perspective, but also from a culture perspective.”

MIx’s work helps innovators secure crucial early funding while giving defense agencies access to cutting-edge technologies, boosting America’s security capabilities in the process. Through the work, MIx has also become a thought leader in the emerging “dual-use” space, in which researchers and founders make strategic choices to advance technologies that have both civilian and defense applications.

Gene Keselman, the executive director of MIx as well as managing director of MIT’s venture studio Proto Ventures and a colonel in the U.S. Air Force Reserve, believes MIT is uniquely positioned to deliver on MIx’s mission.

“It’s not a coincidence MIx is happening at MIT,” says Keselman, adding that supporting national security “is part of MIT’s ethos.”

A history of service

MIx’s work has deep roots at the Institute.

“MIT has worked with the Department of Defense since at least the 1940s, but really going back to its founding years,” says Karaman, who is also the director of MIT’s Laboratory for Information and Decision Systems (LIDS), a research group with its own long history of working with the government.

“The difference today,” adds Murray, who teaches courses on building deep tech ventures and regional innovation ecosystems and is the vice chair of NATO's Innovation Fund, “is that defense departments and others looking to support the defense, security, and resilience agenda are looking to several innovation ecosystem stakeholders — universities, startup ventures, and venture capitalists — for solutions, not only the large prime contractors. We have learned this lesson from Ukraine, but the same ecosystem logic is at the core of our MIx offer.”

MIx was born out of the MIT Innovation Initiative in response to interest Keselman saw from researchers and defense officials in expanding MIT’s work with the defense and global security communities. About seven years ago, he hired Katie Person, who left MIT last year to become a battalion commander, to handle all that interest as a program manager with the initiative. MIx activities, like mentoring and educating founders, began shortly after, and MIx officially launched at MIT in 2021.

“It was a good example of the ways in which MIT responds to its students’ interests and external demand,” Keselman says.

One source of early interest was from startup founders who wanted to know how to work with the defense industry and commercialize technology that could have dual commercial and defense applications. That led the team to launch the Dual Use Ventures course, which helps startup founders and other innovators work with defense agencies. The course has since been offered annually during MIT’s Independent Activities Period (IAP) and tailored for NATO’s Defense Innovation Accelerator for the North Atlantic (DIANA).

Personnel from agencies including U.S. Special Operations Command were also interested in working with MIT students, which led the MIx team to develop course 15.362/6.9160 (Engineering Innovation: Global Security Systems), which is taken each spring by students across MIT and Harvard University.

“There are the government organizations that want to be more innovative and work with startups, and there are startups that want to get access to funding from government and have government as a customer,” Keselman says. “We’re kind of the middle layer, facilitating connections, educating, and partnering on research.”

MIx research activities give student and graduate researchers opportunities to work on pressing problems in the real world, and the MIT community has responded eagerly: More than 150 students applied for MIx’s openings in this summer’s Undergraduate Research Opportunities Program.

“We’re helping push the boundaries of what’s possible and explore the frontiers of technology, but do it in a way that is publishable,” says MIx Head Research Scientist A.J. Perez ’13, MEng ’14, PhD ’23. “More broadly, we want to unlock as much support for students and researchers at MIT as possible to work on problems that we know matter to defense agencies.”

Early wins

Some of MIx’s most impactful research so far has come in partnership with startups. For example, MIx helped the startup Picogrid secure a small business grant from the U.S. Air Force to build an early wildfire detection system. As part of the grant, MIT students built a computer vision model for Picogrid’s devices that can detect smoke in the sky, proving the technical feasibility of the system and describing a promising new pathway in the field of machine learning.

In another recent project with the MIT alumni-founded startup Nominal, MIT students helped improve and automate post-flight data analysis for the U.S. Air Force’s Test Pilot School.

MIx’s work connecting MIT’s innovators and the wider innovation ecosystem with defense agencies has already begun to bear fruit, and many members of MIx believe early collaborations are a sign of things to come.

“We haven’t even scratched the surface of the potential for MIx,” says Karaman. “This could be the start of something much bigger.”

Major Setback for Intermediary Liability in Brazil: Risks and Blind Spots

EFF: Updates - Tue, 06/24/2025 - 11:33am

This is the third post of a series about internet intermediary liability in Brazil. Our first post gives an overview of Brazil's current internet intermediary liability regime, set out in a law known as "Marco Civil da Internet," the context of its approval in 2014, and the beginning of the Supreme Court's judgment of that regime in November 2024. Our second post provides a bigger picture of the Brazilian context underlying the court's analysis and its most likely final decision.

The court’s examination of Marco Civil’s Article 19 began with Justice Dias Toffoli in November last year. We explained here the cases under trial, the reach of the Supreme Court’s decision, and Article 19’s background related to Marco Civil’s approval in 2014. We also highlighted some aspects and risks of the vote of Justice Dias Toffoli, who considered the intermediary liability regime established in Article 19 unconstitutional.

Most of the justices have agreed to find this regime at least partially unconstitutional, but differ on the specifics. Relevant elements of their votes include: 

  • Notice-and-takedown is likely to become the general rule for platforms' liability for third-party content (based on Article 21 of Marco Civil). Justices still have to settle whether this applies to internet applications in general or only to some, for example those that curate or recommend content. Another open question is the type of content subject to liability under this rule: different votes pointed to unlawful content or acts, to manifestly criminal or clearly unlawful content, or focused only on crimes. Some justices didn’t explicitly qualify the nature of the restricted content under this rule.

  • If partially valid, the need for a previous judicial order to hold intermediaries liable for user posts (Article 19 of Marco Civil) remains in force for certain types of content (or certain types of internet applications). For some justices, Article 19 should be the liability regime in the case of crimes against honor, such as defamation. Justice Luís Roberto Barroso also considered this rule should apply for any unlawful acts under civil law. Justice Cristiano Zanin has a different approach. For him, Article 19 should prevail for internet applications that don’t curate, recommend or boost content (what he called “neutral” applications) or when there’s reasonable doubt about whether the content is unlawful.

  • Platforms are considered liable for ads and boosted content that they deliver to users. This was the position held by most of the votes so far. Justices did so either by presuming platforms’ knowledge of the paid content they distribute, holding them strictly liable for paid posts, or by considering the delivery of paid content as platforms’ own act (rather than “third-party” conduct). Justice Dias Toffoli went further, including also non-paid recommended content. Some justices extended this regime to content posted by inauthentic or fake accounts, or when the non-identification of accounts hinders holding the content authors liable for their posts.   

  • Monitoring duty of specific types of harmful and/or criminal content. Most concerning is that different votes establish some kind of active monitoring and likely automated restriction duty for a list of contents, subject to internet applications' liability. Justices have either recognized a “monitoring duty” or considered platforms liable for these types of content regardless of a previous notification. Justices Luís Roberto Barroso, Cristiano Zanin, and Flávio Dino adopt a less problematic systemic flaw approach, by which applications’ liability would not derive from each piece of content individually, but from an analysis of whether platforms employ the proper means to tackle these types of content. The list of contents also varies. In most of the cases they are restricted to criminal offenses, such as crimes against the democratic state, racism, and crimes against children and adolescents; yet they may also include vaguer terms, like “any violence against women,” as in Justice Dias Toffoli’s vote. 

  • Complementary or procedural duties. Justices have also voted to establish complementary or procedural duties. These include providing a notification system that is easily accessible to users, a due process mechanism where users can appeal against content restrictions, and the release of periodic transparency reports. Justice Alexandre de Moraes also specifically mentioned algorithmic transparency measures. 

  • Oversight. Justices also discussed which entity or oversight model should be used to monitor compliance while Congress doesn’t approve a specific regulation. They raised different possibilities, including the National Council of Justice, the General Attorney’s Office, the National Data Protection Authority, a self-regulatory body, or a multistakeholder entity with government, companies, and civil society participation. 

Three other justices have yet to present their votes to complete the judgment. As we pointed out, the ruling will both decide the individual cases that entered the Supreme Court through appeals and the “general repercussion” issues underlying these individual cases. For addressing such general repercussion issues, the Supreme Court approves a thesis that orients lower court decisions in similar cases. The final thesis will reflect the majority of the court's agreements around the topics we outlined above. 

Justice Alexandre de Moraes argued that the final thesis should equate the liability regime of social media and private messaging applications to the one applied to traditional media outlets. This disregards important differences between the two: even if social media platforms curate content, they handle a massive volume of third-party posts, mainly organized through algorithms. Although such curation reflects business choices, it does not equate platforms to media outlets that directly create or individually purchase specific content from approved independent producers. This is even more complicated with messaging applications, where equating regimes seriously endangers privacy and end-to-end encryption.

Justice André Mendonça was the only one so far to preserve the full application of Article 19. His proposed thesis highlighted the necessity of safeguarding privacy, data protection, and the secrecy of communications in messaging applications, among other aspects. It also indicated that judicial takedown orders must provide specific reasoning and be made available to platforms, even if issued within a sealed proceeding. The platform must also have the ability to appeal the takedown order. These are all important points the final ruling should endorse. 

Risks and Blind Spots 

We have stressed the many problems entangled with broad notice-and-takedown mandates and expanded content monitoring obligations. Extensively relying on AI-based content moderation and tying it to intermediary liability for user content will likely exacerbate the detrimental effects of these systems’ limitations and flaws. The perils and concerns that grounded Article 19's approval remain valid and should have led to a position of the court preserving its regime.  

However, given the judgment’s current stage, there are still some minimum safeguards that justices should consider or reinforce to reduce harm.

It’s crucial to put in place guardrails against the abuse and weaponization of notification mechanisms. At a minimum, platforms shouldn’t be liable following an extrajudicial notification when there’s reasonable doubt concerning the content’s lawfulness. In addition, notification procedures should ensure that notices are sufficiently precise and properly substantiated, indicating the content’s specific location (e.g., URL) and why the notifier considers it to be illegal. Internet applications must also provide reasoned justification and adequate appeal mechanisms for those who face content restrictions.

On the other hand, holding intermediaries liable for individual pieces of user content regardless of notification, by massively relying on AI-based content flagging, is a recipe for over censorship. Adopting a systemic flaw approach could minimally mitigate this problem. Moreover, justices should clearly set apart private messaging applications, as mandated content-based restrictions would erode secure and end-to-end encrypted implementations. 

Finally, we should note that justices generally didn’t distinguish large internet applications from other providers when detailing liability regimes and duties in their votes. This is one major blind spot, as it could significantly impact the feasibility of alternate and decentralized alternatives to Big Tech’s business models, entrenching platform concentration. Similarly, despite criticism of platforms’ business interests in monetizing and capturing user attention, court debates mainly failed to address the pervasive surveillance infrastructure lying underneath Big Tech’s power and abuses.   

Indeed, while justices have called out Big Tech’s enormous power over the online flow of information – over what’s heard and seen, and by whom – the consequences of this decision can actually deepen this powerful position.

It’s worth recalling a line from Aaron Swartz in the film “The Internet’s Own Boy,” comparing broadcasting and the internet. He said: “[…] what you see now is not a question of who gets access to the airwaves, it’s a question of who gets control over the ways you find people.” As he puts it, today’s challenge is less about who gets to speak and more about who gets to be heard.

There’s an undeniable source of power in operating the inner rules and structures by which the information flows within a platform with global reach and millions of users. The crucial interventions must aim at this source of power, putting a stop to behavioral surveillance ads, breaking Big Tech’s gatekeeper dominance, and redistributing the information flow.  

That’s not to say that we shouldn’t care about how each platform organizes its online environment. We should, and we do. The EU Digital Services Act, for example, established rules in this sense, leaving the traditional liability regime largely intact. Rather than leveraging platforms as users’ speech watchdogs by potentially holding intermediaries liable for each piece of user content, platform accountability efforts should broadly look at platforms’ processes and business choices. Otherwise, we will end up focusing on monitoring users instead of targeting platforms’ abuses. 

Major Setback for Intermediary Liability in Brazil: How Did We Get Here?

EFF: Updates - Tue, 06/24/2025 - 11:13am

This is the second post of a series about intermediary liability in Brazil. Our first post gives an overview of Brazil's current intermediary liability regime, the context of its approval in 2014, and the beginning of the Supreme Court's analysis of that regime in November 2024. Our third post provides an outlook on justices' votes up until June 23, underscoring risks, mitigation measures, and blind spots of their potential decision.

The Brazilian Supreme Court has formed a majority to overturn the country’s current online intermediary liability regime. With eight out of eleven justices having presented their opinions, the court has reached enough votes to mostly remove the need for a previous judicial order demanding content takedown to hold digital platforms liable for user posts, which is currently the general rule.  

The judgment relates to Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet,” Law n. 12.965/2014), wherein internet applications can only be held liable for third-party content if they fail to comply with a judicial decision ordering its removal. Article 19 aligns with the Manila Principles and reflects the important understanding that holding platforms liable for user content without a judicial analysis creates strong incentives for enforcement overreach and over censorship of protected speech.  

Nonetheless, while Justice André Mendonça voted to preserve Article 19’s application, four other justices stated it should prevail only in specific cases, mainly for crimes against honor (such as defamation). The remaining three justices considered that Article 19 offers insufficient protection to constitutional guarantees, such as the integral protection of children and teenagers.  

The judgment will resume on June 25th, with the three final justices completing the analysis by the plenary of the court. Whereas Article 19’s partial unconstitutionality (or its interpretation “in accordance with” the Constitution) seems to be the position the majority of the court will take, the details of each vote vary, indicating important agreements still to sew up and critical tweaks to make.   

As we previously noted, the outcome of this ruling can seriously undermine free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expands notice-and-takedown mandates. This trend could negatively shape developments globally in other courts, parliaments, and executive branches. Sadly, the votes so far have aggravated these concerns.

But before we get to them, let's look at some circumstances underlying the Supreme Court's analysis. 

2014 vs. 2025: The Brazilian Techlash After Marco Civil's Approval 

How did Article 19 end up (mostly) overturned a decade after Marco Civil’s much-celebrated approval in Brazil back in 2014?   

In addition to the broader techlash following the impacts of an increasing concentration of power in the digital realm, developments in Brazil have leveraged a harsher approach towards internet intermediaries. Marco Civil became a scapegoat, especially Article 19, within regulatory approaches that largely diminished the importance of the free expression concerns that informed its approval. Rather than viewing the provision as a milestone to be complemented with new legislation, this context has reinforced the view that Article 19 should be left behind. 

 The tougher approach to internet intermediaries gained steam after former President Jair Bolsonaro’s election in 2018 and throughout the legislative debates around draft bill 2630, also known as the “Fake News bill.”  

Specifically, though not exhaustively, concerns around the spread of disinformation, online-fueled discrimination, and political violence, as well as threats to election integrity, constitute an important piece of this scenario. This includes the use of social media by the far right in the escalation of acts seeking to undermine the integrity of elections and ultimately overthrow the legitimately elected President Luiz Inácio Lula da Silva in January 2023. Investigations later unveiled that related plans included killing the new president, the vice president, and Justice Alexandre de Moraes.

Concerns over children and adolescents’ rights and safety are another part of the underlying context. Among others, a wave of violent threats and actual attacks on schools in early 2023 was bolstered by online content. Social media challenges also led to injuries and deaths of young people.

Finally, the political reactions to Big Tech’s alignment with far-right politicians and feuds with Brazilian authorities complete this puzzle. They include reactions to Meta’s policy changes in January 2025 and the Trump administration’s decision to restrict visas for foreign officials on grounds of limiting free speech online. This decision is viewed as an offensive against Brazil’s Supreme Court by U.S. authorities in alliance with Bolsonaro’s supporters, including his son, who now lives in the U.S.

Changes in the tech landscape, including concerns about the attention-driven information flow, alongside geopolitical tensions, converged in the Brazilian Supreme Court’s examination of Article 19. Hurdles in the legislative debate over draft bill 2630 turned attention to the internet intermediary liability cases pending in the Supreme Court as the main vehicles for providing “some” response. Yet the scope of such cases (explained here) determined the most likely outcome. As they focus on assessing platform liability for user content and whether it involves a duty to monitor, these issues became the main vectors for analysis and potential change. Alternative approaches, such as improving transparency, ensuring due process, and fostering platform accountability through different measures, like risk assessments, were mainly sidelined.

Read our third post in this series to learn more about the analysis of the Supreme Court so far and its risks and blind spots. 

Here’s a Subliminal Channel You Haven’t Considered Before

Schneier on Security - Tue, 06/24/2025 - 7:09am

Scientists can manipulate air bubbles trapped in ice to encode messages.

GOP budget would slash wind and solar subsidies

ClimateWire News - Tue, 06/24/2025 - 7:05am
Tax credits for clean energy have previously enjoyed bipartisan support.

GOP attorneys general want legal immunity for fossil fuel industry

ClimateWire News - Tue, 06/24/2025 - 7:04am
Red states are urging the Trump administration to take steps to quash lawsuits that seek to hold the oil and gas industry accountable for climate change.

Saudis, US drive strife inside global climate science body

ClimateWire News - Tue, 06/24/2025 - 7:04am
The proposal for a Saudi Aramco oil company staffer to become author of key science report is denounced as “political capture.”

Regulation of industrial carbon emissions surged in past year

ClimateWire News - Tue, 06/24/2025 - 7:00am
A new World Bank report says 40 percent of global industrial emissions are now regulated through carbon taxes or carbon markets.

Digital tool tracks impact of heat, pollution on California’s Latino communities

ClimateWire News - Tue, 06/24/2025 - 6:59am
The dashboard was launched Tuesday by UCLA’s Latino Policy and Politics Institute.

Postal Service EV fleet back on Congress’ hit list

ClimateWire News - Tue, 06/24/2025 - 6:58am
Republicans have proposed selling the Postal Service's electric vehicles. The issue may come up during a hearing Tuesday.

‘Getting especially ugly’: Industry analyst sees uncertain future for US carmakers

ClimateWire News - Tue, 06/24/2025 - 6:58am
Edmunds' Ivan Drury is trying to make sense of an American auto market in constant flux.

Japan boosts effort to curb methane leaks from LNG supply chains

ClimateWire News - Tue, 06/24/2025 - 6:57am
The announcement was made after a three-day energy summit in Tokyo where government officials urged energy importers to secure gas past 2050.

Scientists stumble upon way to cut cow dung methane emissions

ClimateWire News - Tue, 06/24/2025 - 6:57am
Two local scientists began testing the addition of polyferric sulfate in an attempt to recycle the water in cow dung lagoons and made a startling observation.

EU climate boss fought Commission plan to nix greenwashing rules

ClimateWire News - Tue, 06/24/2025 - 6:56am
The vice president of the EU executive pressured the environment commissioner over several days to preserve the law.

Greenpeace joins anti-Bezos protest in Venice about wedding, tax breaks

ClimateWire News - Tue, 06/24/2025 - 6:55am
Activists argue Jeff Bezos' wedding exemplifies broader failures in municipal governance, particularly the prioritization of tourism over resident needs.

Protect young secondary forests for optimum carbon removal

Nature Climate Change - Tue, 06/24/2025 - 12:00am

Nature Climate Change, Published online: 24 June 2025; doi:10.1038/s41558-025-02355-5

The authors generate ~1-km2 growth curves for aboveground live carbon in regrowing forests, globally. They show that maximum carbon removal rates can vary by 200-fold spatially and with age, with the greatest rates estimated at about 30 ± 12 years, highlighting the role of secondary forests in carbon cycling.

Copyright Cases Should Not Threaten Chatbot Users’ Privacy

EFF: Updates - Mon, 06/23/2025 - 10:07pm

Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy.

This isn’t a new concept. Putting users in control of their data is a fundamental piece of privacy protection. Nineteen states, the European Union, and numerous other countries already protect the right to delete under their privacy laws. These rules exist for good reasons: retained data can be sold or given away, breached by hackers, disclosed to law enforcement, or even used to manipulate a user’s choices through online behavioral advertising.

While appropriately tailored orders to preserve evidence are common in litigation, that’s not what happened here. The court disregarded the privacy rights of millions of ChatGPT users without any reasonable basis to believe it would yield evidence. The court granted the order based on unsupported assertions that users who delete their data are probably copyright infringers looking to “cover their tracks.” This is simply false, and it sets a dangerous precedent for cases against generative AI developers and other companies that have vast stores of user information. Unless courts limit orders to information that is actually relevant and useful, they will needlessly violate the privacy rights of millions of users.

OpenAI is challenging this order. EFF urges the court to lift the order and correct its mistakes.  

The NO FAKES Act Has Changed – and It’s So Much Worse

EFF: Updates - Mon, 06/23/2025 - 3:39pm

A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.

The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.

Take Action

Tell Congress to Say No to NO FAKES

The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.

The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”

This bill would be a disaster for internet speech and innovation.

Targeting Tools

The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, anyone who owns the rights in that individual’s image, or the law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for, or have only limited commercial uses other than making unauthorized images—but those limits will offer cold comfort to developers given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power on innovation they’ve long sought in the copyright wars, based on the same tech panics. 

Takedown Notices and Filter Mandate

The first version of NO FAKES set up a notice and takedown system patterned on the DMCA, with even fewer safeguards. NO FAKES expands it to cover more service providers and require those providers to not only take down targeted materials (or tools) but keep them from being uploaded in the future.  In other words, adopt broad filters or lose the safe harbor.

Filters are already a huge problem when it comes to copyright, and at least in that context a filter should do no more than flag an upload for human review when it appears to be a complete copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things for infringement based on mere seconds of a match, and they frequently fail to take into account context that would make the use authorized by law.

But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.

The bill does contain carve-outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.

Threats to Anonymous Speech

As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.

We've already seen abuse of a similar system in action. In copyright cases, people unhappy with criticism aimed at them get such subpoenas to silence critics. Often the criticism includes the complainant's own words as proof—an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.

Not only does this chill further speech; the unmasking itself can harm users, whether reputationally or in their personal lives.

Threats to Innovation

Most of us are very unhappy with the state of Big Tech. It seems that not only are we increasingly forced to use the tech giants, but the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.

Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity.  For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?

This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. If Congress is really worried about privacy harms, it should at least wait to see the effects of that law before layering a new regulation on top. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.

NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.


New Journalism Curriculum Module Teaches Digital Security for Border Journalists

EFF: Updates - Mon, 06/23/2025 - 12:00pm
Module Developed by EFF, Freedom of the Press Foundation, and University of Texas, El Paso Guides Students Through Threat Modeling and Preparation

SAN FRANCISCO – A new college journalism curriculum module teaches students how to protect themselves and their digital devices when working near and across the U.S.-Mexico border. 

“Digital Security 101: Crossing the US-Mexico Border” was developed by Electronic Frontier Foundation (EFF) Director of Investigations Dave Maass and Dr. Martin Shelton, deputy director of digital security at Freedom of the Press Foundation (FPF), in collaboration with the University of Texas at El Paso (UTEP) Multimedia Journalism Program and Borderzine.

The module offers a step-by-step process for improving the digital security of journalists passing through U.S. Land Ports of Entry, focusing on threat modeling: thinking through what you want to protect, and what actions you can take to secure it. 

This involves assessing risk according to the kind of work the journalist is doing, the journalist’s own immigration status, potential adversaries, and much more, as well as planning in advance for protecting oneself and one’s devices should the journalist face delay, detention, search, or device seizure. Such planning might include use of encrypted communications, disabling or enabling certain device settings, minimizing the data on devices, and mentally preparing oneself to interact with border authorities.  

The module, in development since early 2023, is particularly timely given increasingly invasive questioning and searches at U.S. borders under the Trump Administration and the documented history of border authorities targeting journalists covering migrant caravans during the first Trump presidency. 

"Today's journalism students are leaving school only to face complicated, new digital threats to press freedom that did not exist for previous generations. This is especially true for young reporters serving border communities," Shelton said. "Our curriculum is designed to equip emerging journalists with the skills to protect themselves and sources, while this new module is specifically tailored to empower students who must regularly traverse ports of entry at the U.S.-Mexico border while carrying their phones, laptops, and multimedia equipment." 

The guidance was developed through field visits to six ports of entry across three border states, interviews with scores of journalists and students on both sides of the border, and a comprehensive review of CBP policies, while also drawing from EFF and FPF’s combined decades of experience researching constitutional rights and security techniques when it comes to our devices.

“While this training should be helpful to investigative journalists from anywhere in the country who are visiting the borderlands, we put journalism students based in and serving border communities at the center of our work,” Maass said. “Whether you’re reviewing the food scene in San Diego and Tijuana, covering El Paso and Ciudad Juarez’s soccer teams, reporting on family separation in the Rio Grande Valley, or uncovering cross-border corruption, you will need the tools to protect your work and sources." 

The module includes a comprehensive slide deck that journalism lecturers can use and remix for their classes, as well as an interactive worksheet. With undergraduate students in mind, the module includes activities such as roleplaying a primary inspection interview and analyzing pop singer Olivia Rodrigo’s harrowing experience of mistaken identity while reentering the country. The module has already been delivered successfully in trainings with journalism students at UTEP and San Diego State University. 

“UTEP’s Multimedia Journalism program is well-situated to help develop this digital security training module,” said UTEP Communication Department Chair Dr. Richard Pineda. “Our proximity to the U.S.-Mexico border has influenced our teaching models, and our student population – often daily border crossers – give us a unique perspective from which to train journalists on issues related to reporting safely on both sides of the border.” 

For the “Digital Security 101: Crossing the US-Mexico Border” module: https://freedom.press/digisec/blog/border-security-module/

For more about the module: https://www.eff.org/deeplinks/2025/06/journalist-security-checklist-preparing-devices-travel-through-us-border

For EFF’s guide to digital security at the U.S. border: https://www.eff.org/press/releases/digital-privacy-us-border-new-how-guide-eff 

For EFF’s student journalist Surveillance Self Defense guide: https://ssd.eff.org/playlist/journalism-student 

Contact: Dave Maass, Director of Investigations, dm@eff.org

A Journalist Security Checklist: Preparing Devices for Travel Through a US Border

EFF: Updates - Mon, 06/23/2025 - 11:31am

This post was originally published by the Freedom of the Press Foundation (FPF). This checklist complements the recent training module for journalism students in border communities that EFF and FPF developed in partnership with the University of Texas at El Paso Multimedia Journalism Program and Borderzine. We are cross-posting it under FPF's Creative Commons Attribution 4.0 International license. It has been slightly edited for style and consistency.

Before diving in: This space is changing quickly! Check FPF's website for updates and contact them with questions or suggestions. This is a joint project of Freedom of the Press Foundation (FPF) and the Electronic Frontier Foundation.

Those within the U.S. have Fourth Amendment protections against unreasonable searches and seizures — but there is an exception at the border. Customs and Border Protection (CBP) asserts broad authority to search travelers’ devices when crossing U.S. borders, whether traveling by land, sea, or air. And unfortunately, except for a dip at the start of the COVID-19 pandemic when international travel substantially decreased, CBP has generally searched more devices year over year since the George W. Bush administration. While the percentage of travelers affected by device searches remains small, in recent months we’ve heard growing concerns about apparent increased immigration scrutiny and enforcement at U.S. ports of entry, including seemingly unjustified device searches.

Regardless, it’s hard to say with certainty the likelihood that you will experience a search of your items, including your digital devices. But there’s a lot you can do to lower your risk in case you are detained in transit, or if your devices are searched. We wrote this checklist to help journalists prepare for transit through a U.S. port of entry while preserving the confidentiality of your most sensitive information, such as unpublished reporting materials or source contact information. It’s important to think about your strategy in advance, and begin planning which options in this checklist make sense for you.

First things first: What might CBP do?

U.S. CBP’s policy is that they may conduct a “basic” search (manually looking through information on a device) for any reason or no reason at all. If they feel they have reasonable suspicion “of activity in violation of the laws enforced or administered by CBP” or if there is a “national security concern,” they may conduct what they call an “advanced” search, which may include connecting external equipment to your device, such as a forensic analysis tool designed to make a copy of your data.

Your citizenship status matters as to whether you can refuse to comply with a request to unlock your device or provide the passcode. If you are a U.S. citizen entering the U.S., you have the most legal leverage to refuse to comply because U.S. citizens cannot be denied entry — they must be let back into the country. But note that if you are a U.S. citizen, you may be subject to escalated harassment and further delay at the port of entry, and your device may be seized for days, weeks, or months.

If CBP officers seek to search your locked device using forensic tools, there is a chance that some (if not all) of the information on the device will be compromised. But this probability depends on what tools are available to government agents at the port of entry, whether they are motivated to seize your device and send it elsewhere for analysis, and what type of device, operating system, and security features your device has. Thus, it is also possible that strong encryption may substantially slow down or even thwart a government device search.

Lawful permanent residents (green-card holders) must generally also be let back into the country. However, the current administration seems more willing to question LPR status, so refusing to comply with a request to unlock a device or provide a passcode may be risky for LPRs. Finally, CBP has broad discretion to deny entry to foreign nationals arriving on a visa or via the visa waiver program.

At present, traveling domestically within the United States, particularly if you are a U.S. citizen, is lower risk than traveling internationally. Your luggage and the physical aspects of digital devices may be searched — e.g., manual inspection or X-rays to ensure a device is not a bomb. CBP is often present at airports, but for domestic travel within the U.S. you should only be interacting with the Transportation Security Administration. TSA does not assert authority to search the data on your device — this is CBP’s role.

At an international airport or other port of entry, you have to decide whether you will comply with a request to access your device, but this might not feel like much of a choice if you are a non-U.S. citizen entering the country! Plan accordingly.

Your border digital security checklist

Preparing for travel

☐ Make a backup of each of your devices before traveling.
☐ Use long, unpredictable, alphanumeric passcodes for your devices and commit them to memory.
☐ If bringing a laptop, ensure it is encrypted using BitLocker for Windows, or FileVault for macOS. Chromebooks are encrypted by default. A password-protected laptop screen lock is usually insufficient. When going through security, devices should be turned all the way off.
☐ Fully update your device and apps.
☐ Optional: Use a password manager to help create and store randomized passcodes. 1Password users can create temporary travel vaults.
☐ Bring as few sensitive devices as possible — only what you need.
☐ Regardless of which country you are visiting, think carefully about what you are willing to post publicly on social media about that country to avoid scrutiny.
☐ For land ports of entry in the U.S., check CBP’s border wait times and plan accordingly.
☐ If possible, print out any travel documents in advance so you don’t need to unlock your phone during boarding, including boarding passes for your departure and return, rental car information, and any information about your itinerary that you would like to have on hand if questioned (e.g., hotel bookings, visa paperwork, employment information if applicable, conference information). Use a printer you trust at home or at the office, just in case.
☐ Avoid bringing sensitive physical documents you wouldn’t want searched. If you need them, consider digitizing them (e.g., by taking a photo) and storing them remotely on a cloud service or backup device.
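The full-disk-encryption item above is easy to verify before you leave. As a minimal sketch (not part of the original checklist), this shell helper maps an OS name to the standard built-in tool for checking encryption status — FileVault’s `fdesetup` on macOS, `lsblk` (look for `crypto_LUKS`) on Linux, and BitLocker’s `manage-bde` on Windows; confirm the exact command on your own system:

```shell
# Print the platform's standard full-disk-encryption status command,
# keyed on an OS name such as the output of `uname -s`.
fde_check_cmd() {
  case "$1" in
    Darwin) echo "fdesetup status" ;;       # macOS: expect "FileVault is On."
    Linux)  echo "lsblk -o NAME,FSTYPE" ;;  # Linux: look for crypto_LUKS
    *)      echo "manage-bde -status C:" ;; # Windows: check "Protection On"
  esac
}

# Show the check command for the current system.
fde_check_cmd "$(uname -s)"
```

Running the printed command (with administrator rights where required) tells you whether encryption is actually enabled, rather than assuming it from a lock screen.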

Decide in advance whether you will unlock your device or provide the passcode for a search. Your overall likelihood of experiencing a device search is low (e.g., less than .01% of international travelers are selected), but depending on what information you carry, the impact of a search may be quite high. If you plan to unlock your device for a search or provide the passcode, ensure your devices are prepared:

☐ In advance, upload any information you would like to keep to a cloud provider (e.g., iCloud) so it is stored remotely rather than locally on your device.
☐ Remove any apps, files, chat histories, browsing histories, and sensitive contacts you would not want exposed during a search.
☐ If you delete photos or files, delete them a second time in the “Recently Deleted” or “Trash” sections of your Files and Photos apps.
☐ Remove messages from the device that you believe would draw unwanted scrutiny. Remove yourself — even if temporarily — from chat groups on platforms like Signal.
☐ If you use Signal and plan to keep it on your device, use disappearing messages to minimize how much information you keep within the app.
☐ Optional: Bring a travel device instead of your usual device. Ensure it is populated with the apps you need while traveling, as well as login credentials (e.g., stored in a password manager), and necessary files. If you do this, ensure your trusted contacts know how to reach you on this device.
☐ Optional: Rather than manually removing all sensitive files from your computer, if you are primarily accessing web services during your travels, a Chromebook may be an affordable alternative to your regular computer.
☐ Optional: After backing up your device for everyday use, factory reset it and add back only the information you need.
☐ Optional: If you intend to work during your travel, plan in advance with a colleague who can remotely assist you in accessing and/or rotating necessary credentials.
☐ If you don’t plan to work, consider discussing with your IT department whether temporarily suspending your work accounts could mitigate risks at border crossings.

On the day of travel

☐ Log out of accounts you do not want accessible to border officials. Note that border officers do not have authority to access live cloud content — they must put devices in airplane mode or otherwise disconnect them from the internet.
☐ Power down your phone and laptop entirely before going through security. This leaves disk encryption fully engaged and makes it harder for someone to analyze your device.
☐ If you have a practicing attorney with expertise in immigration and border issues, particularly as they relate to members of the media, write down their contact information before you travel.
☐ Immediately before travel, ensure that a friend, relative, or colleague is aware of your whereabouts when passing through a port of entry, and provide them with an update as soon as possible afterward.

If you are pulled into secondary screening

☐ Be polite and try not to emotionally escalate the situation.
☐ Do not lie to border officials, but don’t offer any information they do not explicitly request.
☐ Politely request officers’ names and badge numbers.
☐ If you choose to unlock your device, rather than telling border officials your passcode, ask to type it in yourself.
☐ Ask to be present for a search of your device. But note officers are likely to take your device out of your line of sight.
☐ You may decline the request to search your device, but this may result in your device being seized and held for days, weeks, or months. If you are not a U.S. citizen, refusal to comply with a search request may lead to denial of entry, or scrutiny of lawful permanent resident status.
☐ If your device is seized, ask for a custody receipt (Form 6051D). This should also list the name and contact information for a supervising officer.
☐ If an officer has plugged your unlocked phone or computer into another electronic device, they may have obtained a forensic copy of your device. You will want to remember anything you can about this event if it happens.
☐ Immediately afterward, write down as many details as you can about the encounter: e.g., names, badge numbers, descriptions of equipment that may have been used to analyze the device, changes to the device or corrupted data, etc.

Reporting is not a crime. Be confident knowing you haven’t done anything wrong.

More resources
