EFF Awards Spotlight ✨ Software Freedom Law Center, India

3 hours 41 minutes ago

In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!

All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!

REGISTER TODAY!

GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will be recorded and posted to YouTube and the Internet Archive after the livestream.

We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. And last, but certainly not least—Software Freedom Law Center, India, winner of the EFF Award for Defending Digital Freedoms:

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology. SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India. 

We're excited to celebrate SFLC.IN and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

Christian Romero

Age Verification Is A Windfall for Big Tech—And A Death Sentence For Smaller Platforms

3 hours 52 minutes ago

If you live in Mississippi, you may have noticed that you are no longer able to log into your Bluesky or Dreamwidth accounts from within the state. That’s because, in a chilling early warning sign for the U.S., both social platforms decided to block all users in Mississippi from their services rather than risk hefty fines under the state’s oppressive age verification mandate. 

If this sounds like censorship to you, you’re right—it is. But it’s not these small platforms’ fault. This is the unfortunate result of Mississippi’s wide-sweeping age verification law, H.B. 1126. Though the law had previously been blocked by a federal district court, the Supreme Court lifted that injunction last month, even as one justice (Kavanaugh) concluded that the law is “likely unconstitutional.” This allows H.B. 1126 to go into effect while the broader constitutional challenge works its way through the courts. EFF has opposed H.B. 1126 from the start, arguing consistently and constantly that it violates all internet users’ First Amendment rights, seriously risks our privacy, and forces platforms to implement invasive surveillance systems that ruin our anonymity.

Lawmakers often sell age-verification mandates as a silver bullet for Big Tech’s harms, but in practice, these laws do nothing to rein in the tech giants. Instead, they end up crushing smaller platforms that can’t absorb the exorbitant costs. Now that Mississippi’s mandate has gone into effect, the reality is clear: age verification laws entrench Big Tech’s dominance, while pushing smaller communities like Bluesky and Dreamwidth offline altogether. 

Sorry Mississippians, We Can’t Afford You

Bluesky was the first platform to make the announcement. In a public blog post, Bluesky condemned H.B. 1126’s broad scope, barriers to innovation, and privacy implications, explaining that the law forces platforms to “make every Mississippi Bluesky user hand over sensitive personal information and undergo age checks to access the site—or risk massive fines.” As Bluesky noted, “This dynamic entrenches existing big tech platforms while stifling the innovation and competition that benefits users.” Instead, Bluesky made the decision to cut off Mississippians entirely until the courts consider whether to overturn the law. 

About a week later, we saw a similar announcement from Dreamwidth, an open-source online community similar to LiveJournal where users share creative writing, fanfiction, journals, and other works. In its post, Dreamwidth shared that it too would have to resort to blocking the IP addresses of all users in Mississippi because it could not afford the hefty fines. 

Dreamwidth wrote: “Even a single $10,000 fine would be rough for us, but the per-user, per-incident nature of the actual fine structure is an existential threat.” The service also expressed fear that being involved in the lawsuit against Mississippi left it particularly vulnerable to retaliation—a clear illustration of the chilling effect of these laws. For Dreamwidth, blocking Mississippi users entirely was the only way to survive. 

Age Verification Mandates Don’t Rein In Big Tech—They Entrench It

Proponents of age verification claim that these mandates will hold Big Tech companies accountable for their outsized influence, but really the opposite is true. As we can see from Mississippi, age verification mandates concentrate and consolidate power in the hands of the largest companies—the only entities with the resources to build costly compliance systems and absorb potentially massive fines. While megacorporations like Google (with YouTube) and Meta (with Instagram) are already experimenting with creepy new age-estimation tech on their social platforms, smaller sites like Bluesky and Dreamwidth simply cannot afford the risks. 

We’ve already seen how this plays out in the UK. When the Online Safety Act came into force recently, platforms like Reddit, YouTube, and Spotify implemented broad (and extremely clunky) age verification measures while smaller sites, including forums on parenting, green living, and gaming on Linux, were forced to shutter. Take, for example, the Hamster Forum, “home of all things hamstery,” which announced in March 2025 that the OSA would force it to shut down its community message boards. Instead, users were directed to migrate over to Instagram with this wistful disclaimer: “It will not be the same by any means, but . . . We can follow each other and message on there and see each others [sic] individual posts and share our hammy photos and updates still.” 

This perfectly illustrates the market impact of online age verification laws. When smaller platforms inevitably cave under the financial pressure of these mandates, users will be pushed back to the social media giants. These huge companies—those that can afford expensive age verification systems and aren’t afraid of a few $10,000 fines while they figure out compliance—will end up getting more business, more traffic, and more power to censor users and violate their privacy. 

This consolidation of power is a dream come true for the Big Tech platforms, but it’s a nightmare for users. While the megacorporations get more traffic and a whole lot more user data (read: profit), users are left with far fewer community options and a bland, corporate surveillance machine instead of a vibrant public sphere. The internet we all fell in love with is a diverse and colorful place, full of innovation, connection, and unique opportunities for self-expression. That internet—our internet—is worth defending.

TAKE ACTION

Don't let Congress censor the internet

Molly Buckley

EFF Joins 55 Civil Society Organizations Urging the End of Sanctions on UN Special Rapporteur Francesca Albanese

8 hours 7 minutes ago

Following the U.S. government's overreaching decision to impose sanctions against Francesca Albanese, the United Nations Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967, EFF joined more than 50 civil society organizations in calling for the U.S. to lift the sanctions. 

The U.S.’s sanctions on Francesca Albanese were formally issued in July 2025, pursuant to Section 1(a)(ii)(A) of President Trump’s Executive Order 14203, which imposed sanctions on the International Criminal Court (ICC) in February for having “engaged in illegitimate and baseless actions targeting America and our close ally Israel.” Under this Executive Order, the State Department is instructed to name specific people who have worked with or for the ICC. Rapporteur Albanese joins several ICC judges and the lead prosecutor in having their U.S. property and interests in property blocked, as well as facing restrictions on entering the country, banking, and more. 

One of the reasons cited in the far-reaching U.S. sanction is Albanese’s engagement with the ICC to investigate or prosecute nationals of the U.S. and Israel. The sanction came just days after the publication of the Special Rapporteur’s recent report to the UN Human Rights Council, “From economy of occupation to economy of genocide.” In her report, the Special Rapporteur “urges the International Criminal Court and national judiciaries to investigate and prosecute corporate executives and/or corporate entities for their part in the commission of international crimes and laundering of the proceeds from those crimes.” 

As a UN Special Rapporteur, Albanese conducts independent research, gathers information, and prepares reports on human rights situations, including documenting violations and providing recommendations to the Human Rights Council and other human rights bodies. Special Rapporteurs are independent experts chosen by the UN Human Rights Council in Geneva. They do not represent the UN or hold any formal authority, but their reports and findings are essential for advocacy in transnational situations, informing prosecutors at the International Criminal Court, and pressuring countries over human rights abuses. 

The unilateral sanctions imposed on the UN Special Rapporteur not only target her as an individual but also threaten the broader international human rights framework, undermining crucial work in monitoring and reporting on human rights issues. Such measures risk politicizing their mandates, discouraging frank reporting, and creating a chilling effect on human rights defenders more broadly. With the 80th session of the UN General Assembly opening in New York this September, these sanctions and travel restrictions further impede the Special Rapporteur’s capacity to fulfill her mandate and report on human rights abuses in Palestine.

The Special Rapporteur’s report identifies how AI, cloud services, biometric surveillance, and predictive policing technologies have reinforced military operations, population control, and the unlawful targeting of civilians in the ongoing genocide in Gaza. More specifically, it illuminates the role of U.S. tech giants like Microsoft, Alphabet (Google’s parent company), Amazon, and IBM in providing dual-use infrastructure to “integrate mass data collection and surveillance, while profiting from the unique testing ground for military technology offered by the occupied Palestinian territory.” 

This report is well within her legal mandate to investigate and report on human rights issues in Palestine and provide critical oversight and accountability for human rights abuses. This work is particularly essential at a time when the very survival of Palestinians in the occupied Gaza Strip is at stake—journalists are being killed with deplorable frequency; internet shutdowns and biased censorship by social media platforms are preventing vital information from circulating within and leaving Gaza; and U.S.-based tech companies are continuing to be opaque about their role in providing technologies to the Israeli authorities for use in the ongoing genocide against Palestinians, despite the mounting evidence.

EFF has repeatedly called for greater transparency relating to the role of Big Tech companies like Google, Amazon, and Microsoft in human rights abuses across Gaza and the West Bank, with these U.S.-based companies coming under pressure to reveal more about the services they provide and the nature of their relationships with the Israeli forces engaging in the military response. Without greater transparency, the public cannot tell whether these companies are complying with human rights standards—both those set by the United Nations and those they have publicly set for themselves. We know that this conflict has resulted in alleged war crimes and has involved massive, ongoing surveillance of civilians and refugees living under what international law recognizes as an illegal occupation. That kind of surveillance requires significant technical support and it seems unlikely that it could occur without any ongoing involvement by the companies providing the platforms. 

Top UN human rights officials have called for the reversal of the sanctions against the Special Rapporteur, voicing serious concerns about the dangerous precedent this sets in undermining human rights. The UN High Commissioner for Human Rights, Volker Türk, called for a prompt reversal of the sanctions and noted that, “even in the face of fierce disagreement, UN member states should engage substantively and constructively, rather than resort to punitive measures.” Similarly, UN Spokesperson Stéphane Dujarric noted that whilst Member States “are perfectly entitled to their views and to disagree with” experts’ reports, they should still “engage with the UN’s human rights architecture.”

In a press conference, Albanese said she believed that the sanctions were calculated to weaken her mission, and questioned why they had even been introduced: “for having exposed a genocide? For having denounced the system? They never challenged me on the facts.”

The United States must reverse these sanctions, and respect human rights for all—not just for the people they consider worthy of having them.

Read our full civil society letter here.

Electronic Frontier Foundation

California Lawmakers: Support S.B. 524 to Rein in AI-Written Police Reports

1 day 2 hours ago

EFF urges California state lawmakers to pass S.B. 524, authored by Sen. Jesse Arreguín. This bill is an important first step in regaining control over police using generative AI to write their narrative police reports. 

This bill does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, a popular AI police report writing tool, Axon’s Draft One, would be out of compliance with this bill and would have to be redesigned to be more transparent. 

Draft One takes audio from an officer’s body-worn camera and uses AI to turn that dialogue into a narrative police report. Because independent researchers have been unable to test it, there are important questions about how the system handles things like sarcasm, out-of-context comments, or interactions with members of the public who speak languages other than English. Another major concern is Draft One’s inability to keep track of which parts of a report were written by people and which parts were written by AI. By design, the product does not retain different iterations of the draft—making it easy for an officer to say, “I didn’t lie in my police report, the AI wrote that part.” 

All lawmakers should pass regulations on AI-written police reports. This technology could be nearly everywhere, and soon. Axon is a top supplier of body-worn cameras in the United States, which means it has a massive ready-made customer base. Through product bundling, AI-written police reports could soon be in use at a vast percentage of police departments. 

AI-written police reports are unproven in terms of their accuracy and their overall effects on the criminal justice system. Vendors still have a long way to go to prove this technology can be transparent and auditable. While it would not solve all of the many problems of AI encroaching on the criminal justice system, S.B. 524 is a good first step to rein in an unaccountable piece of technology. 

We urge California lawmakers to pass S.B. 524. 

Matthew Guariglia

EFF Awards Spotlight ✨ Erie Meyer

1 day 3 hours ago

In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!

All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!

REGISTER TODAY!

GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will be recorded and posted to YouTube and the Internet Archive after the livestream.

We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. This time—Erie Meyer, winner of the EFF Award for Protecting Americans' Data:

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

We're excited to celebrate Erie Meyer and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

Christian Romero

From Libraries to Schools: Why Organizations Should Install Privacy Badger

1 day 7 hours ago

In an era of pervasive online surveillance, organizations have an important role to play in protecting their communities’ privacy. Millions of people browse the web on computers provided by their schools, libraries, and employers. By default, popular browsers on these computers leave people exposed to hidden trackers.

Organizations can enhance privacy and security on their devices by installing Privacy Badger, EFF’s free, open source browser extension that automatically blocks trackers. Privacy Badger is already used by millions to fight online surveillance and take back control of their data.

Why Should Organizations Install Privacy Badger on Managed Devices?

Protect People from Online Surveillance

Most websites contain hidden trackers that let advertisers, data brokers, and Big Tech companies monitor people’s browsing activity. This surveillance has serious consequences: it fuels scams, government spying, predatory advertising, and surveillance pricing.

By installing Privacy Badger on managed devices, organizations can protect entire communities from these harms. Most people don’t realize the risks of browsing the web unprotected. Organizations can step in to make online privacy available to everyone, not just the people who know they need it. 

Ad Blocking is a Cybersecurity Best Practice

Privacy Badger helps reduce cybersecurity threats by blocking ads that track you (unfortunately, that’s most ads these days). Targeted ads aren’t just a privacy nightmare. They can also be a vehicle for malware and phishing attacks. Cybercriminals have tricked legitimate ad networks into distributing malware, a tactic known as malvertising.

The risks are serious enough that the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recommends federal agencies deploy ad-blocking software. The NSA, CIA, and other intelligence agencies already follow this guidance. These agencies are using advertising systems to surveil others, but blocking ads for their own employees. 

All organizations, not just spy agencies, should make ad blocking part of their security strategy.

A Tracker Blocker You Can Trust

Four million users already trust Privacy Badger, which has been recommended by The New York Times' Wirecutter, Consumer Reports, and The Washington Post.

Trust is crucial when choosing an ad-blocking or tracker-blocking extension because they require high levels of browser permissions. Unfortunately, not all extensions deserve that trust. Avast’s “privacy” extension was caught collecting and selling users’ browsing data to third parties—the very practice it claimed to prevent.

Privacy Badger is different. EFF released it over a decade ago, and the extension has been open-source—meaning other developers and researchers can inspect its code—that entire time. Because it is built by a nonprofit with a 35-year history of fighting for user rights, organizations can trust that Privacy Badger works for its users, not for profit. 

Which organizations should deploy Privacy Badger?

All of them! Installing Privacy Badger on managed devices improves privacy and security across an organization. That said, Privacy Badger is most beneficial for two types of organizations: libraries and schools. Both can better serve their communities by safeguarding the computers they provide.

Libraries

The American Library Association (ALA) already recommends installing Privacy Badger on public computers to block third-party tracking. Librarians have a long history of defending privacy. The ALA’s guidance is a natural extension of that legacy for the digital age. While librarians protect the privacy of books people check out, Privacy Badger protects the privacy of websites they visit on library computers. 

Millions of Americans depend on libraries for internet access. That makes libraries uniquely positioned to promote equitable access to private browsing. With Privacy Badger, libraries can ensure that safe and private browsing is the default for anyone using their computers. 

Libraries also play a key role in promoting safe internet use through their digital literacy trainings. By including Privacy Badger in these trainings, librarians can teach patrons about a simple, free tool that protects their privacy and security online.

Schools

Schools should protect their students from online surveillance by installing Privacy Badger on computers they provide. Parents are rightfully worried about their children’s privacy online, with a Pew survey showing that 85% worry about advertisers using data about what kids do online to target ads. Deploying Privacy Badger is a concrete step schools can take to address these concerns. 

By blocking online trackers, schools can protect students from manipulative ads and limit the personal data fueling social media algorithms. Privacy Badger can even block tracking in Ed Tech products that schools require students to use. Alarmingly, a Human Rights Watch analysis of Ed Tech products found that 89% shared children’s personal data with advertisers or other companies.

Instead of deploying invasive student monitoring tools, schools should keep students safe by keeping their data safe. Students deserve to learn without being tracked, profiled, and targeted online. Privacy Badger can help make that happen.

How can organizations deploy Privacy Badger on managed devices?

System administrators can deploy and configure Privacy Badger on managed devices by setting up an enterprise policy. Chrome, Firefox, and Edge provide instructions for automatically installing extensions organization-wide. You’ll be able to configure certain Privacy Badger settings for all devices. For example, you can specify websites where Privacy Badger is disabled or prevent Privacy Badger’s welcome page from popping up on computers that get reset after every session. 
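For a concrete sense of what “setting up an enterprise policy” can look like, here is a minimal sketch for one common case: Google Chrome on managed Linux machines, where force-installing an extension comes down to dropping a JSON policy file into Chrome’s managed-policy directory. The extension ID, directory path, and file name shown are assumptions you should verify against the Chrome Web Store listing and Google’s enterprise policy documentation; Firefox, Edge, and Chrome on Windows or macOS use their own policy mechanisms.

    #!/usr/bin/env python3
    """Minimal sketch (not an official EFF tool): force-install Privacy Badger
    for Google Chrome on managed Linux machines by writing an enterprise
    policy file. The extension ID, policy directory, and file name below are
    assumptions to verify for your own fleet."""

    import json
    from pathlib import Path

    # Assumed Chrome Web Store ID for Privacy Badger -- confirm against the
    # extension's Web Store listing before deploying.
    EXTENSION_ID = "pkehgijcmpdhfbdbbnkijodmdjhbjlgp"
    # Standard Chrome Web Store update URL.
    UPDATE_URL = "https://clients2.google.com/service/update2/crx"
    # Directory Google Chrome reads managed policies from on Linux.
    POLICY_DIR = Path("/etc/opt/chrome/policies/managed")

    policy = {
        # Extensions in this list are installed automatically and cannot be
        # removed by the user.
        "ExtensionInstallForcelist": [f"{EXTENSION_ID};{UPDATE_URL}"],
    }

    POLICY_DIR.mkdir(parents=True, exist_ok=True)
    policy_file = POLICY_DIR / "privacy-badger.json"
    policy_file.write_text(json.dumps(policy, indent=2) + "\n")
    print(f"Wrote {policy_file}")

The per-device settings mentioned above, such as disabling Privacy Badger on specific sites or suppressing its welcome page, ride on the same policy machinery; the exact keys are documented in Privacy Badger’s deployment guidance, so they are not guessed at here.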

We recommend educating users about the addition of Privacy Badger and what it does. Since some websites deeply embed tracking, privacy protections can occasionally break website functionality. For example, a video might not play or a comments section might not appear. If this happens, users should know that they can easily turn off Privacy Badger on any website. Just open the Privacy Badger popup and click “Disable for this site.” 

Don't hesitate to reach out if you're interested in deploying Privacy Badger at scale. Our team is here to help you protect your community's privacy. And if you're already deploying Privacy Badger across your organization, we'd love to hear how it’s going.

Make Private Browsing the Default at Your Organization

Schools, libraries, and other organizations can make private browsing the norm by deploying Privacy Badger on devices they manage. If you work at an organization with managed devices, talk to your IT team about Privacy Badger. You can help strengthen the security and privacy of your entire organization while joining the fight against online surveillance.

Lena Cohen

Verifying Trust in Digital ID Is Still Incomplete

1 day 14 hours ago

In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the second in a short series that explains digital ID and the pending use case of age verification. Upcoming posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.

Digital identity encompasses various aspects of an individual's identity that are presented and verified either over the internet or in person. This could mean a digital credential issued by a certification body or a mobile driver’s license provisioned to someone’s mobile wallet. These credentials can be presented in plain text on a device, as a scannable QR code, or by tapping your device to a Near Field Communication (NFC) reader. There are other ways to present credential information that are a little more privacy-preserving, but in practice those three methods are how we are seeing digital ID being used today.

Advocates of digital ID often use a framework they call the "Triangle of Trust." This is usually presented as a triangle of exchange between the holder of an ID—those who use a phone or wallet application to access a service; the issuer of an ID—this is normally a government entity, like the state Departments of Motor Vehicles in the U.S., or a banking system; and the verifier of an ID—the entity that wants to confirm your identity, such as law enforcement, a university, a government benefits office, a porn site, or an online retailer.

This triangle implies that the issuer and verifier—for example, the government that provides the ID and the website checking your age—never need to talk to one another. In theory, this avoids tracking and surveillance threats by preventing your ID, by design, from phoning home every time you verify it with another party.

But it also makes a lot of questionable assumptions, such as:

1) the verifier will only ever ask for a limited amount of information. 

2) the verifier won’t store information it collects.

3) the verifier is always trustworthy. 

The third assumption is especially problematic. How do you trust that the verifier will protect your most personal information and not use, store, or sell it beyond what you have consented to? Any of the following could be verifiers:

  • Law enforcement when doing a traffic stop and verifying your ID as valid.
  • A government benefits office that requires ID verification to sign up for social security benefits.
  • A porn site in a state or country which requires age verification or identity verification before allowing access.
  • An online retailer selling products like alcohol or tobacco.

Looking at the triangle again, this isn’t quite an equal exchange. Your personal ID, like a driver’s license or government ID, is among the most centralized and sensitive documents you have—you can’t control how it is issued or create your own; you have to go through your government to obtain one. This relationship will always be imbalanced. But we have to make sure digital ID does not exacerbate these imbalances.

The effort to answer the question of how to prevent verifier abuse is ongoing. But instead of addressing the harms these systems cause, governments around the world are fast-tracking the technology as they scramble to solve what they see as a crisis of online harms by mandating age verification. And current implementations of the Triangle of Trust have already proven disastrous.

One key example of the speed of implementation outpacing proper protections is the Digital Credential API. Initially launched by Google and now supported by Apple, the rollout allows mass, unfettered verification: apps and websites can use the API to request information from your digital ID. The introduction of this technology to people’s devices came with no limits or checks on what information verifiers can seek—incentivizing verifiers to over-ask for ID information beyond the question of whether a holder is over a certain age, simply because they can. 

The Digital Credential API also incentivizes a variety of websites to ask for ID information they don’t require and did not commonly collect before. Food delivery services, medical services, gaming sites, and literally anyone else interested in being a verifier could become one tomorrow with digital ID and the Digital Credential API. This is both an erosion of personal privacy and a pathway to further surveillance. There must be established limitations and scope, including:

  • verifiers establishing who they are and what they plan to ask from holders. There should also be an established plan for transparency on verifiers and their data retention policies.
  • ways to identify and report abusive verifiers, as well as real consequences, like revoking or blocking a verifier from requesting IDs in the future.
  • unlinkable presentations that do not allow for verifier and issuer collusion, and that share no data between the verifiers you attest to, preventing tracking of your movements in person or online every time you attest your age.

A further point of concern arises in cases of abuse or deception. A malicious verifier can send a request with no limiting mechanisms or checks, and a user who rejects the request could be fully blocked from the website or application. There must be provisions that ensure people have access to vital services that will require age verification from visitors.

Governments’ efforts to tackle verifiers potentially abusing digital ID requests haven’t come to fruition yet. For example, the EU Commission recently launched its age verification “mini app” ahead of the EU ID wallet for 2026. The mini app will not include a registry of verifiers, which EU regulators had promised and then withdrew. Without verifier accountability, the wallet cannot tell if a request is legitimate. As a result, verifiers and issuers will demand verification from the people who want to use online services, but those same people are unable to insist on verification and accountability from the other sides of the triangle. 

While digital ID gets pushed as the solution to the problem of uploading IDs to each site users access, the security and privacy of these systems vary based on implementation. But when privacy is involved, regulators must make room for negotiation. There should be more thoughtful and protective measures for holders, who will interact with more and more potential verifiers over time. Otherwise digital ID solutions will just exacerbate existing harms and inequalities, rather than improving internet accessibility and information access for all.

Alexis Hancock

EFF Statement on ICE Use of Paragon Solutions Malware

2 days 1 hour ago

This statement can be attributed to EFF Senior Staff Technologist Cooper Quintin:

Jack Poulson recently reported on Substack that ICE has reactivated its $2 million contract with Paragon Solutions, a cyber-mercenary and spyware manufacturer. 

The reactivation of the contract between the Department of Homeland Security and Paragon Solutions, a known spyware vendor, is extremely troubling.

Paragon's “Graphite” malware has been implicated in widespread misuse by the Italian government. Researchers at Citizen Lab at the Munk School of Global Affairs at the University of Toronto and with Meta found that it has been used in Italy to spy on journalists and civil society actors, including humanitarian workers. Without strong legal guardrails, there is a risk that the malware will be misused in a similar manner by the U.S. Government.

These reports undermine Paragon Solutions’ public marketing of itself as a more ethical provider of surveillance malware. 

Reportedly, the contract is being reactivated because the US arm of Paragon Solutions was acquired by a Miami-based private equity firm, AE Industrial Partners, and then merged into a Virginia-based cybersecurity company, REDLattice, allowing ICE to circumvent Executive Order 14093, which bans the acquisition of spyware controlled by a foreign government or person. Even though this order was always insufficient to prevent the acquisition of dangerous spyware, it was the best protection we had. This end run around the executive order both ignores the spirit of the rule and does nothing to prevent misuse of Paragon malware for human rights abuses. Nor will it prevent insider threats at Paragon from using the malware to spy on US government officials, or US government officials from misusing it to spy on their personal enemies, rivals, or spouses. 

The contract between Paragon and ICE requires all US users to adjust their threat models and take extra precautions. Paragon’s Graphite isn’t magical; it’s still just malware. It still needs a zero-day exploit to compromise a phone with the latest security updates, and those exploits are expensive. The best thing you can do to protect yourself against Graphite is to keep your phone up to date and enable Lockdown Mode if you are using an iPhone or Advanced Protection Mode on Android. Turning on disappearing messages is also helpful: that way, if someone in your network does get compromised, you don’t also reveal your entire message history. For more tips on protecting yourself from malware, check out our Surveillance Self-Defense guides.

Related Cases: AlHathloul v. DarkMatter Group
Cooper Quintin

EFF Awards Spotlight ✨ Just Futures Law

2 days 2 hours ago

In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!

All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!

REGISTER TODAY!

GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will be recorded and posted to YouTube and the Internet Archive after the livestream.

We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. First up—Just Futures Law, winner of the EFF Award for Leading Immigration and Surveillance Litigation:

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

We're excited to celebrate Just Futures Law and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

Christian Romero

🤐 This Censorship Law Turns Parents Into Content Cops | EFFector 37.11

2 days 3 hours ago

School is back in session! Perfect timing to hit the books and catch up on the latest digital rights news. We've got you covered with bite-sized updates in this issue of our EFFector newsletter.

This time, we're breaking down why Wyoming’s new age verification law is a free speech disaster. You’ll also read about a big win for transparency around police surveillance, how the Trump administration’s war on “woke AI” threatens civil liberties, and a welcome decision in a landmark human rights case.

Prefer to listen? Be sure to check out the audio companion to EFFector! We're interviewing EFF staff about some of the important issues they are working on. This time, EFF Legislative Activist Rindala Alajaji discusses the real harms of age verification laws like the one passed in Wyoming. Tune in on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.11 - This Censorship Law Turns Parents Into Content Cops

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

What WhatsApp’s “Advanced Chat Privacy” Really Does

3 days 2 hours ago

In April, WhatsApp launched its “Advanced Chat Privacy” feature, which, once enabled, disables certain AI features in chats and prevents conversations from being exported. Since its launch, an inaccurate viral post has been ping-ponging around social networks, creating confusion around what exactly it does.

The viral post falsely claims that if you do not enable Advanced Chat Privacy, Meta’s AI tools will be able to access your private conversations. This isn’t true, and it misrepresents both how Meta AI works and what Advanced Chat Privacy is.

The confusion seems to stem from the fact that Meta AI can be invoked through a number of methods, including in any group chat with the @Meta AI command. While the chat contents between you and other people are always end-to-end encrypted on the app, what you say to Meta AI is not. Similarly, if you or anyone else in the chat chooses to use Meta AI's “Summarize” feature, which uses Meta’s “Private Processing” technology, that feature routes the text of the chat through Meta’s servers. However, the company claims that it cannot view the content of those messages. This feature remains opt-in, so it's up to you to decide if you want to use it. The company also recently released the results of two audits detailing the issues that have been found thus far and what it has done to fix them.

For example, if you and your buddy are chatting, and your friend types in @Meta AI and asks it a question, that part of the conversation, which you can both see, is not end-to-end encrypted, and is usable for AI training or whatever other purposes are included in Meta’s privacy policy. But otherwise, chats remain end-to-end encrypted.

Advanced Chat Privacy offers a degree of control over this. The new privacy feature isn’t a universal setting in WhatsApp; you can enable or disable it on a per-chat basis, but it’s turned off by default. When enabled, Advanced Chat Privacy does three core things:

  • Blocks anyone in the chat from exporting the chats,
  • Disables auto-downloading media to chat participant’s phones, and
  • Disables some Meta AI features

Outside disabling some Meta AI features, Advanced Chat Privacy can be useful in other instances. For example, while someone can always screenshot chats, if you’re concerned about someone easily exporting an entire group chat history, Advanced Chat Privacy makes this harder to do because there’s no longer a one-tap option to do so. And since media can’t be automatically downloaded to someone’s phone (the “Save to Photos” option on the chat settings screen), it’s harder for an attachment to accidentally end up on someone’s device.

How to Enable Advanced Chat Privacy

Advanced Chat Privacy is enabled or disabled per chat. To enable it:

  • Tap the chat name at the top of the screen.
  • Select Advanced chat privacy, then tap the toggle to turn it on.

There are some quirks to how this works, though. For one, by default, anyone involved in a chat can turn Advanced Chat Privacy on or off at will, which limits its usefulness but at least helps ensure something doesn’t accidentally get sent to Meta AI.

There’s one way around this, which is for a group admin to lock down what users in the group can do. In an existing group chat that you are the administrator of, tap the chat name at the top of the screen, then:

  • Scroll down to Group Permissions.
  • Disable the option to “Edit Group Settings.” This makes it so only the administrator can change several important permissions, including Advanced Chat Privacy.

You can also set this permission when starting a new group chat. Just be sure to pop into the permissions page when prompted. Even without Advanced Chat Privacy, the “Edit Group Settings” option is an important one for privacy, because it also controls whether participants can change how long disappearing messages can be viewed. It’s something worth considering for every group chat you’re an administrator of, and something WhatsApp should require admins to choose before starting a new chat.

When it comes to one-on-one chats, there is currently no way to block the other person from changing the Advanced Chat Privacy feature, so you’ll have to come to an agreement with them on keeping it enabled if that’s what you want. If the setting is changed, you’ll see a notice in the chat stating so.

There are already serious concerns with how much metadata WhatsApp collects, and as the company introduces ads and AI, it’s going to get harder and harder to navigate the app, understand what each setting does, and properly protect the privacy of conversations. One of the reasons alternative encrypted chat options like Signal tend to thrive is because they keep things simple and employ strong default settings and clear permissions. WhatsApp should keep this in mind as it adds more and more features.

Thorin Klosowski

Open Austin: Reimagining Civic Engagement and Digital Equity in Texas

6 days 21 hours ago

The Electronic Frontier Alliance is growing, and this year we’ve been honored to welcome Open Austin into the EFA. Open Austin began in 2009 as a meetup that successfully advocated for a city-run open data portal, and relaunched as a 501(c)(3) in 2018 dedicated to reimagining civic engagement and digital equity by building volunteer open source projects for local social organizations.

As Central Texas’ oldest and largest grassroots civic tech organization, their work has provided hands-on training for over 1,500 members in the hard and soft skills needed to build digital society, not just scroll through it. Recently, I got the chance to speak with Liani Lye, Executive Director of Open Austin, about the organization, its work, and what lies ahead:

There are so many exciting things happening with Open Austin. Can you tell us about your Civic Digital Lab and your Data Research Hub?

Open Austin's Civic Digital Lab reimagines civic engagement by training central Texans to build technology for the public good. We build freely, openly, and alongside a local community stakeholder to represent community needs. Our lab currently supports 5 products:

  • Data Research Hub: Answering residents' questions with detailed information about our city
  • Streamlining Austin Public Library’s “book a study room” UX and code
  • Mapping landlords’ and rental properties to support local tenant rights organizing
  • Promoting public transit by highlighting points of interest along bus routes
  • Creating an interactive exploration of police bodycam data

We’re actively scaling up our Data Research Hub, which started in January 2025 and was inspired by 9b Corp’s Neighborhood Explorer. Through community outreach, we gather residents’ questions about our region and connect the questions with Open Austin’s data analysts. Each answered question adds to a pool of knowledge that equips communities to address local issues. Crucially, the organizing team at EFF, through the EFA, have connected us to local organizations to generate these questions.

Can you discuss your new Civic Data Fellowship cohort and Communities of Civic Practice?

Launched in 2024, Open Austin’s Civic Data Fellowship trains the next generation of technologically savvy community leaders by pairing aspiring women, people of color, and LGBTQ+ data analysts with mentors to explore Austin’s challenges. These culminate in data projects and talks to advocates and policymakers, which double as powerful portfolio pieces.  While we weren’t able to fully fund fellowship stipends through grants this year, thanks to the generosity of our supporters, we successfully raised 25% through grassroots efforts.

Along with our fellowship and lab, we host monthly Communities of Civic Practice peer-learning circles that build skills for employability and practical civic engagement. Recent sessions include a speaker on service design in healthcare, and co-creating a data visualization on broadband adoption presented to local government staff. Our in-person communities are a great way to learn and build local public interest tech without becoming a full-on Labs contributor.

For those in Austin and Central Texas who want to get involved in person, how can they plug in?

If you can only come to one event for the rest of the year, come to Open Austin’s 2025 Year-End Celebration. Open Austin members and our freshly graduated Civic Data Fellow cohort will give lightning talks to share how they’ve supported local social advocacy through open source software and open data work. Otherwise, come to a monthly remote volunteer orientation call. There, we'll share how to get involved in our in-person Communities of Civic Practice and our remote Civic Digital Labs (i.e., building open source software).

Open Austin welcomes volunteers from all backgrounds, including those with skills in marketing, fundraising, communications, and operations, not just technologists. You can make a difference in various ways. Come to a remote volunteer orientation call to learn more. And, as always, donate. Running multiple open source projects for structured workforce development is expensive, and your contributions help sustain Open Austin's work in the community. Please visit our donation page for ways to give.

Thanks, EFF!

Christopher Vines

Join Your Fellow Digital Rights Supporters for the EFF Awards on September 10!

1 week ago

For over 35 years, the Electronic Frontier Foundation has presented awards recognizing key leaders and organizations advancing innovation and championing digital rights. The EFF Awards celebrate the accomplishments of people working toward a better future for technology users, both in the public eye and behind the scenes.

EFF is pleased to welcome all members of the digital rights community, supporters, and friends to this annual award ceremony. Join us to celebrate this year's honorees with drinks, bytes, and excellent company.

 

EFF Award Ceremony
Wednesday, September 10th, 2025
6:00 PM to 10:00 PM Pacific
San Francisco Design Center Galleria
101 Henry Adams Street, San Francisco, CA

Register Now

General Admission: $55 | Current EFF Members: $45 | Students: $35

The celebration will include a strolling dinner and desserts, as well as a hosted bar with cocktails, mocktails, wine, beer, and non-alcoholic beverages! Vegan, vegetarian, and gluten-free food options will be available. We hope to see you in person, wearing either a signature EFF hoodie, or something formal if you're excited for the opportunity to dress up!

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will be recorded and posted to YouTube and the Internet Archive after the livestream.

We are proud to present awards to this year's winners:

JUST FUTURES LAW

EFF Award for Leading Immigration and Surveillance Litigation

ERIE MEYER

EFF Award for Protecting Americans' Data

SOFTWARE FREEDOM LAW CENTER, INDIA

EFF Award for Defending Digital Freedoms

 More About the 2025 EFF Award Winners

Just Futures Law

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework and seeks to transform how litigation and legal support serve communities and build movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

Erie Meyer

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis. 

 

Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center, India

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology. 

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platforms that has been used to harass women in India.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

 

Christian Romero

Podcast Episode: Protecting Privacy in Your Brain

1 week 2 days ago

The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability for tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.

(Embedded audio player from simplecast.com.) Privacy info. This embed will serve content from simplecast.com

   

(You can also find this episode on the Internet Archive and on YouTube.) 

Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established The Neurorights Foundation, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 

In this episode you’ll learn about:

  • How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation
  • Why neurotechnology regulation needs to be grounded in international human rights
  • Navigating the complex differences between medical and consumer privacy laws
  • The risk that information collected by devices now on the market could be decoded into actual words within just a few years
  • Balancing beneficial innovation with the protection of people’s mental privacy 

Rafael Yuste is a professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, and director of the NeuroTechnology Center at Columbia University. He led the group of researchers that first proposed the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative launched in 2013 by the Obama Administration. 

Jared Genser is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and he is outside general counsel to The Neurorights Foundation, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology.  

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

RAFAEL YUSTE: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And that's the heart of what makes us human is our mind. So this technology is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate the essence of our humanity.
10 years ago we had a breakthrough with studying the mouse’s visual cortex in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse. To make the mouse see things that it was not looking at.
Essentially we introduce, in the brain of the mouse, images. Like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into his brain by activating groups of neurons.
So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?

CINDY COHN: That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?
And today we have two guests who are at the forefront of brain science -- and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.

JASON KELLEY: Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN initiative launched by the Obama administration, which was a large-scale research project akin to the Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Center at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.

CINDY COHN: But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.

JASON KELLEY: We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no less than five Nobel Peace Prize Laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.

CINDY COHN: We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.

RAFAEL YUSTE: I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.
So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award from a Swedish foundation, from the Tällberg Foundation, this Eliasson Prize for Global Leadership. In my case, because of the work I did on the Brain Initiative, and Jared got this award for his human rights work.
And, you know, this is one, good thing of getting an award, or let me put it differently, at least, that getting an award led to something positive in this case is that someone in the award committee said, wait a minute, you guys should be talking to each other. and they put us in touch. He was like a matchmaker.

CINDY COHN: I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.

JARED GENSER: Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.
I definitely had no idea how quickly emerging neuro technologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.
And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think that this is what's made us such a strong team, is that our experiences and our knowledge and expertise are highly complementary.
Um, you know, Rafa and his colleagues had, uh, at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.
And there were five areas of concerns that they had identified that include mental privacy, mental agency, mental identity, concerns about discrimination and the development in application of neurotechnologies and fair use of mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.
Um, but to other extents might need to be further interpreted from existing international treaties. And it was quite clear that when one would think about emerging neuro technologies and what they might be able to do, that a whole dramatic amount of work needed to be done before these things proliferate in such an extraordinary sense around the world.

JASON KELLEY: So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?

RAFAEL YUSTE: So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. And his name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world to decode brain activity from human patients. So he had been working with a patient that was paralyzed, because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the exterior. She was in a wheelchair and they implanted a few electrodes, an electrode array, into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI.
And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.
And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed and Eddie and her team built an avatar of the person in the computer with her face and gave that avatar, her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.
So Eddie called me up and explained to me what they've done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her essentially.
It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. It was such an impressive result. and this was implantable neurotechnology. So it requires a neurosurgeon that go in and put in this electrode. So it is, of course, in a hospital setting, this is all under control and super regulated.
But since then, there's been fast development, partly, spurred by all these investments into neurotechnology that, uh, private and public all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.
And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on. You can actually buy these things in Amazon and couple it to a generative AI algorithm again, like Eddie Chang. In fact, I think they were inspired by Eddie Chang's work and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside. But from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks. No. And they essentially decode, they don't decode absolutely every word that the person is thinking. But enough words that the message comes out loud and clear. So the decoding of inner speech, it's doable, with non-invasive technology. Not only that study from Australia since then, you know, all these teams in the world, uh, we work as we help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy and then another study in China. Anyway, this is now becoming very common practice to use generative AI to decode speech.
And then on the stimulation side is also something that raises a lot of concerns ethically. In 2022 a lab in Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers that were older in age. This was the control group for a study on Alzheimer's patients. And they reported in a very good paper, that they could increase 30% of both short-term and long-term memory.
So this is the first serious case that I know of where again, this is not science fiction, this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic simulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine.
This is what we're actually focusing on our foundation right now, this issue of mental augmentation because we don't think it's science fiction. We think it's coming.

JARED GENSER: Let me just kind of amplify what Rafa's saying and to kind of make this as tangible as possible for your listeners, which is that, as Rafa was already alluding to, when you're talking about, of course, implantable devices, you know, they have to be licensed by the Food and Drug Administration. They're implanted through neurosurgery in the medical context. All the data that's being gathered is covered by, you know, HIPAA and other state health data laws. But there are already available on the market today 30 different kinds of wearable neurotechnology devices that you can buy today and use.
As one example, you know, there's the company, Muse, that has a meditation device and you can buy their device. You put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app. And then basically you'll get back from the company, you know, decoding of your brain activity to know when you're in a meditative state or not.
The problem is, is that these are EEG scanning devices that if they were used in a medical context, they would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1% of it.
And the data that's being gathered, uh, you know, EEG scanning device data in wearable form, you could identify if a person has any of a number of different brain diseases and you could also decode about a dozen different mental states. Are you happy, are you sad? And so forth.
And so at our foundation, at the Neurorights Foundation, we actually did a very important study on this topic that actually was covered on the front page of the New York Times. And we looked at the user agreements for, and the privacy agreements, for the 30 different companies’ products that you can buy today, right now. And what we found was that in 29, out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can do it as they see fit, and they can transfer it, sell it, etc.
Only in one case did a company, ironically called Unicorn, actually keep the data on your local device, and it was never transferred to the company in question. And we benchmark those agreements across a half dozen different global privacy standards and found that there were just, you know, gigantic gaps that were there.
So, you know, why is that a problem? Well take the Muse device I just mentioned, they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text.
Today thought to text, you know, with the EEG can only be done at a relatively slow speed, like 10 or 15 words a minute with like maybe 40, 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought to text at 80 words a minute, 95% accuracy.
And so the problem is that in three, four years, let's say when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and then actually decode what the person was thinking in the form of words when they were actually meditating.
And to help you understand as a last point, why is this, again, science and not science fiction? You know, Apple is already clearly aware of the potential here, and two years ago, they actually filed a patent application for their next generation AirPod device that is going to have built-in EEG scanners in each ear, right?
And they sell a hundred million pairs of AirPods every single year, right? And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera.
But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being, you know, absorbed how is it going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. And we're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law. And we're also working, you know, at the national and sub-national level.
Rafa, my colleague, you know, led the charge in Chile to help create a first-ever constitutional amendment to a constitution that protects mental privacy in Chile. We've been working with a number of states in the United States now, uh, California, Colorado and Montana – very different kinds of states – have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast developing technology and the enormous gaps between these consumer product devices and their user agreements and what is considered to be best practice in terms of data privacy protection.

CINDY COHN: Yeah, I mean I saw that study that you did and it's just, you know, it mirrors a lot of what we do in the other context where we've got click wrap licenses and other, you know, kind of very flimsy one-sided agreements that people allegedly agree to, but I don't think under any lawyer's understanding of like meeting of the minds, and there's a contract that you negotiate that it's anything like that.
And then when you add it to this context, I think it puts these problems on steroids in many ways and makes 'em really worse. And I think one of the things I've been thinking about in this is, you know, you guys have in some ways, you know, one of the scenarios that demonstrates how our refusal to take privacy seriously on the consumer side and on the law enforcement side is gonna have really, really dire, much more dire consequences for people potentially than we've even seen so far. And really requires serious thinking about, like, what do we mean in terms of protecting people's privacy and identity and self-determination?

JARED GENSER: Let me just interject on that one narrow point because I was literally just on a panel discussion remotely at the UN Crime Congress last week that was hosted by the UN Office in Drugs and Crime, UNODC and Interpol, the International Police Organization. And it was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. And so this is coming. They just launched a project jointly to look at potential uses as well as to develop, um, guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.

CINDY COHN: And much of the funding that's come out of this has come out of the Department of Defense thinking about how do we put the right guardrails in place are really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things, like I, you know, that's not the history, right? Law enforcement comes for these things both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just a foray for kind of skeezy companies to do things we don't like.

JARED GENSER: Absolutely.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 10th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Rafael Yuste and Jared Genser.

CINDY COHN: This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.

JARED GENSER: No, I think it's actually a very, very important question. I mean, I think that the bottom line is that there are a lot of ways to look at, um, questions like this. You know, you can think about, you know, a national constitution or national laws. You can think about international treaties or laws.
You can look at ethical frameworks or self governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, you know, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens. And obviously binding on governments and what they do, binding on companies and what they do and so forth.
And so it's not that we don't think, for example, ethical frameworks or ethical standards or self-governance by companies are not important. They are very much a part of an overall approach, but our approach at the Neurorights Foundation is, let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreements on. And when a country signs and ratifies a treaty, as the US has on the key relevant treaty here, which is the International Covenant and Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them then enforceable. And so we think first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law.
Because the United States is obligated and other countries in the world are obligated to protect their citizens from abuses of these rights.
And at the same time, of course that isn't sufficient on its own. We also need to see in certain contexts, probably not in the US context, amendments to a constitution that's much harder to do in the US but laws that are actually enforceable against companies.
And this is why our work in California, Montana and Colorado is so important because now companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, right? They now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right?
And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data are they gathering, how are they gonna use it? If they propose selling or transferring it to whom and so forth, right?
So these are now ultimately gonna be binding law on companies, you know, based in California and, as we're developing this, around the world. But to us, you know, that is really what needs to happen.

JASON KELLEY: Your success has been pretty stunning. I mean, even though you're, you know, there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level and internationally sometimes, and it's hard.
But the two of you together, I think there's something really fascinating about the way, you know, you're building a better future and building in protections for that better future at the same time.
And, like, you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field about, you know, you can make incredible things and also make sure they don't cause huge problems. Right? And that's just a really important lesson.
What we do with this podcast is we do try to think about what the better future that people are building looks like, what it should look like. And the two of you are, you know, thinking about that in a way that I think a lot of our guests aren't because you're at the forefront of a lot of this technology. But I'd love to hear what Rafa and then Jared, you each think, uh, science and the law look like if you get it right, if things go the way you hope they do, what, what does the technology look like? What did the protections look like? Rafa, could you start.

RAFAEL YUSTE: Yeah, I would comment, there's five places in the world today where there's, uh, hard law protection for brain activity and brain data in the Republic of Chile, the state of Rio Grande do Sul in Brazil, in the states of Colorado, California, and Montana in the US. And in every one of these places there's been votes in the legislature, and they're all bicameral legislature, so there've been 10 votes, and every single one of those votes has been unanimous.
All political parties in Chile, in Brazil - actually in Brazil there were 16 political parties. That never happened before that they all agreed on something. California, Montana, and Colorado, all unanimous except for one vote no in Colorado of a person that votes against everything. He's like, uh, he goes, he has some, some axe to grind with, uh, his companions and he just votes no on everything.
But aside from this person. Uh, actually the way the Colorado, um, bill was introduced by a Democratic representative, but, uh, the Republican side, um, took it to heart. The Republican senator said that this is a definition of a no-brainer. And he asked for permission to introduce that bill in the Senate in Colorado.
So he, the person that defended the bill in the Senate in Colorado, was actually not a Democrat but a Republican. So why is that? As this Colorado senator put it, it's a no-brainer: this is an issue where, the minute you get it, you understand. Do you want your brain activity to be decoded without your consent? Well, this is not a good idea.
So not a single person that we've met has opposed this issue. So I think Jared and I do the best job we can and we work very hard. And I should tell you that we're doing this pro bono without being compensated for our work. But the reason behind the success is really the issue, it's not just us. I think that we're dealing with an issue on which there is fundamental, widespread, universal agreement.

JARED GENSER: What I would say is that, you know, on the one hand, and we appreciate of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.
We need to further interpret international law in the way that I'm describing to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world. Subnational laws and various places too, and so forth.
I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law because we've never been able to get Republicans and Democrats to agree on the text of one.
The only kinds of consumer data protected at the federal level is healthcare data under HIPAA and financial data. And there have been multiple efforts to try to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart because they were adding, trying to put together about a dozen different categories of data that would be protected at the federal level. And each one of those has a whole industry association associated with it.
And we were able to get that draft bill amended to include neural data in it, which it didn't originally include, but ultimately the bill died before even coming to a vote at committees. In our view, you know, that then just leaves state consumer data privacy laws. There are about 35 states now that have state level laws. 15 states actually still don't.
And so we are working state by state. Ultimately, I think that when it comes, especially to the sensitivity of neural data, right? You know, we need a federal law that's going to protect neural data. But because it's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, or in general, you know, one way of course to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences to them, right?
And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the, you know, the Trump administration, hey, we need a common way forward here and let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine, but ultimately, I think that there's a huge amount of work still left to be done, obviously all around the world and at the state level as well.

CINDY COHN: I wanna push you a little bit. So what does it look like if we get it right? What is, what is, you know, what does my world look like? Do I, do I get the cool earbuds or do I not?

JARED GENSER: Yeah, I mean, look, I think the bottom line is that, you know, the world that we want to see, and I mean Rafa of course is the technologist, and I'm the human rights guy. But the world that we wanna see is one in which, you know, we promote innovation while simultaneously, you know, protecting people from abuses of their human rights and ensure that neuro technologies are developed in an ethical manner, right?
I mean, so we do need self-regulation by industry. You know, we do need national and international laws. But at the same time, you know, one in three people in their lifetimes will have a neurological disease, right?
The brain diseases that people know best or you know, from family, friends or their own experience, you know, whether you look at Alzheimer's or Parkinson's, I mean, these are devastating, debilitating and all, today, you know, irreversible conditions. I mean, all you can do with any brain disease today at best is to slow its progression. You can't stop its progression and you can't reverse it.
And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like, is the, think about all of the different ways in which humanity is going to be improved, when we're able to not only address, but cure, diseases of this kind, right?
And, you know, one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, right? And our own brain and how it operates and functions. And that is, you know, very, very exciting.
Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that of course, you know, raises enormous questions. And this technology is also gonna, um, also even raise fundamental questions about, you know, what does it actually mean to be human? And who are we as humans, right?
And so, for example, one of the side effects of deep brain stimulation in a very, very, very small percentage of patients is a change in personality. In other words, you know, if you put a device in someone's, you know, mind to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen.
And there's a well known case of a woman, um, who went from being, in essence, an extreme introvert to an extreme extrovert, you know, with deep brain stimulation as a side effect. And she's currently being studied right now, um, along with other examples of these kinds of personality changes.
And if we can figure out in the human brain, for example, what parts of the brain, for example, deal with being an introvert or an extrovert, you know, you're also raising fundamental questions about the, the possibility of being able to change your personality and parts with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses through psychotherapy and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.
So the upside is enormous, you know. Medically and scientifically, economically, from a self-understanding point of view. Right? And at the same time, the downside risks are profound. It's not just decoding our thoughts. I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts, you know, in criminal justice contexts, right?
So there are so many different implications of these emerging technologies, and we are often so far behind, on the regulatory side, the actual scientific developments that in this particular case we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.

JASON KELLEY: I'm fascinated to see, in talking to them, how successful they've been when there isn't a big, you know, lobbying wing of neurorights products and companies stopping them from this because they're ahead of the game. I think that's the thing that really struck me and, and something that we can hopefully learn from in the future that if you're ahead of the curve, you can implement these privacy protections much easier, obviously. That was really fascinating. And of course just talking to them about the technology set my mind spinning.

CINDY COHN: Yeah, in both directions, right? Both what an amazing opportunity and oh my God, how terrifying this is, both at the same time. I thought it was interesting because I think from where we sit as people who are trying to figure out how to bring privacy into some already baked technologies and business models and we see how hard that is, you know, but they feel like they're a little behind the curve, right? They feel like there's so much more to do. So, you know, I hope that we were able to kind of both inspire them and support them in this, because I think to us, they look ahead of the curve and I think to them, they feel a little either behind or over, you know, not overwhelmed, but see the mountain in front of them.

JASON KELLEY: A thing that really stands out to me is when Rafa was talking about the popularity of these protections, you know, and, and who on all sides of the aisle are voting in favor of these protections, it's heartwarming, right? It's inspiring that if you can get people to understand the sort of real danger of lack of privacy protections in one field. It makes me feel like we can still get people, you know, we can still win privacy protections in the rest of the fields.
Like you're worried for good reason about what's going on in your head and that, how that should be protected. But when you type on a computer, you know, that's just the stuff in your head going straight onto the web. Right? We've talked about how like the phone or your search history are basically part of the contents of your mind. And those things need privacy protections too. And hopefully we can, you know, use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.

CINDY COHN: Yeah. And you see kind of both kinds of issues, right? Like, if they're right, it's scary. When they're wrong it's scary. But also I'm excited about and I, what I really appreciated about them, is that they're excited about the potentialities too. This isn't an effort that's about the house of no innovation. In fact, this is where responsibility ought to come from. The people who are developing the technology are recognizing the harms and then partnering with people who have expertise in kind of the law and policy and regulatory side of things. So that together, you know, they're kind of a dream team of how you do this responsibly.
And that's really inspiring to me because I think sometimes people get caught in this, um, weird, you know, choose, you know, the tech will either protect us or the law will either protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

Josh Richman

Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions

2 weeks ago

EFF legal intern Noam Shemtov was the principal author of this post.

When police have a warrant to search a phone, should they be able to see everything on the phone—from family photos to communications with your doctor to everywhere you’ve been since you first started using the phone—in other words, data that is in no way connected to the crime they’re investigating? The Michigan Supreme Court just ruled no. 

In People v. Carson, the court held that to satisfy the Fourth Amendment, warrants authorizing searches of cell phones and other digital devices must contain express limitations on the data police can review, restricting searches to data that they can establish is clearly connected to the crime.


EFF, along with ACLU National and the ACLU of Michigan, filed an amicus brief in Carson, expressly calling on the court to limit the scope of cell phone search warrants. We explained that the realities of modern cell phones call for a strict application of rules governing the scope of warrants. Without clear limits, warrants would  become de facto licenses to look at everything on the device, a great universe of information that amounts to “the sum of an individual’s private life.” 

The Carson case shows just how broad many cell phone search warrants can be. Defendant Michael Carson was suspected of stealing money from a neighbor’s safe. The warrant to search his phone allowed the police to access:

Any and all data including, text messages, text/picture messages, pictures and videos, address book, any data on the SIM card if applicable, and all records or documents which were created, modified, or stored in electronic or magnetic form and, any data, image, or information.

There were no temporal or subject matter limitations. Consequently, investigators obtained over 1,000 pages of information from Mr. Carson’s phone, the vast majority of which did not have anything to do with the crime under investigation.

The Michigan Supreme Court held that this extremely broad search warrant was “constitutionally intolerable” and violated the particularity requirement of the Fourth Amendment. 

The Fourth Amendment requires that warrants “particularly describ[e] the place to be searched, and the persons or things to be seized.” This is intended to limit authorization to search to the specific areas and things for which there is probable cause to search and to prevent police from conducting “wide-ranging exploratory searches.” 


Across two opinions, a four-Justice majority joined a growing national consensus of courts recognizing that, given the immense and ever-growing storage capacity of cell phones, warrants must spell out up-front limitations on the information the government may review, including the dates and data categories that constrain investigators’ authority to search. And magistrates reviewing warrants must ensure the information provided by police in the warrant affidavit properly supports a tailored search.

This ruling is good news for digital privacy. Cell phones hold vast and varied information, including our most intimate data—“privacies of life” like our personal messages, location histories, and medical and financial information. The U.S. Supreme Court has recognized as much, saying that application of Fourth Amendment principles to searches of cell phones must respond to cell phones’ unique characteristics, including the weighty privacy interests in our digital data. 

We applaud the Michigan Supreme Court’s recognition that unfettered cell phone searches pose serious risks to privacy. We hope that courts around the country will follow its lead in concluding that the particularity rule applies with special force to such searches and requires clear limitations on the data the government may access.

Jennifer Pinsof

Victory! Pen-Link's Police Tools Are Not Secret

2 weeks 3 days ago

In a victory for transparency, the government contractor Pen-Link agreed to disclose the prices and descriptions of surveillance products that it sold to a local California Sheriff's office.

The settlement ends a months-long California public records lawsuit with the Electronic Frontier Foundation and the San Joaquin County Sheriff’s Office. The settlement provides further proof that the surveillance tools used by governments are not secret and shouldn’t be treated that way under the law.

Last year, EFF submitted a California public records request to the San Joaquin County Sheriff’s Office for information about its work with Pen-Link and its subsidiary Cobwebs Technologies. Pen-Link went to court to try to block the disclosure, claiming the names of its products and prices were trade secrets. EFF later entered the case to obtain the records it requested.

The Records Show the Sheriff Bought Online Monitoring Tools

The records disclosed in the settlement show that in late 2023, the Sheriff’s Office paid $180,000 for a two-year subscription to the Tangles “Web Intelligence Platform,” which is a Cobwebs Technologies product that allows the Sheriff to monitor online activity. The subscription allows the Sheriff to perform hundreds of searches and requests per month. The source of information includes the “Dark Web” and “Webloc,” according to the price quotation. According to the settlement, the Sheriff’s Office was offered but did not purchase a series of other add-ons including “AI Image processing” and “Webloc Geo source data per user/Seat.”

Have you been blocked from receiving similar information? We’d like to hear from you.

The intelligence platform overall has been described in other documents as analyzing data from the “open, deep, and dark web, to mobile and social.” And Webloc has been described as a platform that “provides access to vast amounts of location-based data in any specified geographic location.” Journalists at multiple news outlets have chronicled Pen-Link's technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. Major local, state, and federal agencies use Pen-Link's technology.

The records also show that in late 2022 the Sheriff’s Office purchased some of Pen-Link’s more traditional products that help law enforcement execute and analyze data from wiretaps and pen-registers after a court grants approval. 

Government Surveillance Tools Are Not Trade Secrets

The public has a right to know what surveillance tools the government is using, no matter whether the government develops its own products or purchases them from private contractors. There are a host of policy, legal, and factual reasons that the surveillance tools sold by contractors like Pen-Link are not trade secrets.

Public information about these products and prices helps communities have informed conversations and make decisions about how their government should operate. In this case, Pen-Link argued that its products and prices are trade secrets partially because governments rely on the company to “keep their data analysis capabilities private.” The company argued that clients would “lose trust” and governments may avoid “purchasing certain services” if the purchases were made public. This troubling claim highlights the importance of transparency. The public should be skeptical of any government tool that relies on secrecy to operate.

Information about these tools is also essential for defendants and criminal defense attorneys, who have the right to discover when these tools are used during an investigation. In support of its trade secret claim, Pen-Link cited terms of service that purported to restrict the government from disclosing its use of this technology without the company’s consent. Terms like this cannot be used to circumvent the public’s right to know, and governments should not agree to them.

Finally, in order for surveillance tools and their prices to be protected as a trade secret under the law, they have to actually be secret. However, Pen-Link’s tools and their prices are already public across the internet—in previous public records disclosures, product descriptions, trademark applications, and government websites.

Lessons Learned

Government surveillance contractors should consider the policy implications, reputational risks, and waste of time and resources when attempting to hide from the public the full terms of their sales to law enforcement.

Cases like these, known as reverse-public records act lawsuits, are troubling because a well-resourced company can frustrate public access by merely filing the case. Not every member of the public, researcher, or journalist can afford to litigate their public records request. Without a team of internal staff attorneys, it would have cost EFF tens of thousands of dollars to fight this lawsuit.

Luckily, in this case, EFF had the ability to fight back, and we will continue our surveillance transparency work. That is why EFF required some attorneys’ fees to be part of the final settlement.

Related Cases: Pen-Link v. County of San Joaquin Sheriff’s Office
Mario Trujillo

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

2 weeks 3 days ago

The Ninth Circuit upheld an important limitation on Digital Millennium Copyright Act (DMCA) subpoenas that other federal courts have recognized for more than two decades. The DMCA, a misguided anti-piracy law passed in the late nineties, created a bevy of powerful tools, ostensibly to help copyright holders fight online infringement. Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls,” unscrupulous litigants who abuse the system at everyone else’s expense.

The DMCA’s “notice and takedown” regime is one of these tools. Section 512 of the DMCA creates “safe harbors” that protect service providers from liability, so long as they disable access to content when a copyright holder notifies them that the content is infringing, and fulfill some other requirements. This gives copyright holders a quick and easy way to censor allegedly infringing content without going to court. 


Section 512(h) is ostensibly designed to facilitate this system, by giving rightsholders a fast and easy way of identifying anonymous infringers. Section 512(h) allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users, just by asking a court clerk to issue one, and attaching a copy of the infringement notice. In other words, they can wield the court’s power to override an internet user’s right to anonymous speech, without permission from a judge.  It’s easy to see why these subpoenas are prone to misuse.

Internet service providers (ISPs)—the companies that provide an internet connection (e.g. broadband or fiber) to customers—are obvious targets for these subpoenas. Often, copyright holders know the Internet Protocol (IP) address of an alleged infringer, but not their name or contact information. Since ISPs assign IP addresses to customers, they can often identify the customer associated with one.

Fortunately, Section 512(h) has an important limitation that protects users. Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs that merely act as conduits for their customers’ internet traffic. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief.

As the Ninth Circuit held:

Because a § 512(a) service provider cannot remove or disable access to infringing content, it cannot receive a valid (c)(3)(A) notification, which is a prerequisite for a § 512(h) subpoena. We therefore conclude from the text of the DMCA that a § 512(h) subpoena cannot issue to a § 512(a) service provider as a matter of law.

This decision preserves the understanding of Section 512(h) that internet users, websites, and copyright holders have shared for decades. As EFF explained to the court in its amicus brief:

[This] ensures important procedural safeguards for internet users against a group of copyright holders who seek to monetize frequent litigation (or threats of litigation) by coercing settlements—copyright trolls. Affirming the district court and upholding the interpretation of the D.C. and Eighth Circuits will preserve this protection, while still allowing rightsholders the ability to find and sue infringers.

EFF applauds this decision. And because three federal appeals courts have all ruled the same way on this question—and none have disagreed—ISPs all over the country can feel confident about protecting their customers’ privacy by simply throwing improper DMCA 512(h) subpoenas in the trash.

Tori Noble

From Book Bans to Internet Bans: Wyoming Lets Parents Control the Whole State’s Access to The Internet

2 weeks 4 days ago

If you've read about the sudden appearance of age verification across the internet in the UK and thought it would never happen in the U.S., take note: many politicians want the same or even stricter laws. As of July 1, South Dakota and Wyoming have enacted laws requiring any website that hosts any sexual content to implement age verification measures. These laws could sweep in a broad range of non-pornographic content, including classic literature and art, and expose platforms of all sizes to civil or criminal liability for failing to verify the age of every user. That includes social media networks like X, Reddit, and Discord; online retailers like Amazon and Barnes & Noble; and streaming platforms like Netflix and Rumble—essentially, any site that allows user-generated or published content without gatekeeping access based on age.

These laws expand on the flawed logic of last month’s troubling Supreme Court decision, Free Speech Coalition v. Paxton, which gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed “harmful to minors.” Wyoming and South Dakota seem to read that decision as license to require age verification—and impose potential legal liability—on any website that contains even one image, video, or post with sexual content that could be interpreted as harmful to minors. Platforms or websites may be able to comply by implementing an “age gate” either around certain sections of their sites where, for example, user-generated content is allowed, or at the point of entry to the entire site.

Although these laws are in effect, we do not believe the Supreme Court’s decision in FSC v. Paxton gives them any constitutional legitimacy. You do not need a law degree to see the difference between the Texas law—which targets sites where a substantial portion (one third) of content is “sexual material harmful to minors”—and these laws, which apply to any site that contains even a single instance of such material. In practice, it is the difference between burdening adults with age gates for websites that host “adult” content, and burdening the entire internet, including any site that allows user-generated or published content.

The law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands

But lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” and use other methods to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature. Books like The Bluest Eye by Toni Morrison, The Handmaid’s Tale by Margaret Atwood, and And Tango Makes Three have all been swept up in these crusades—not because of their overall content, but because of isolated scenes or references.

Wyoming’s law is also particularly extreme: rather than providing for enforcement by the Attorney General, HB0043 is a “bounty” law that deputizes any resident with a child to file civil lawsuits against websites they believe are in violation, effectively turning anyone into a potential content cop. There is no central agency, no regulatory oversight, and no clear standard. Instead, the law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands by suing websites that contain a single example of objectionable content. While most other state age-verification laws allow individuals to make reports to state Attorneys General, who are responsible for enforcement, and some include a private right of action allowing parents or guardians to file civil claims for damages, the Wyoming law is similar to laws in Louisiana and Utah that rely entirely on civil enforcement.

This is a textbook example of a “heckler’s veto,” in which a single person can unilaterally decide what content the public is allowed to access. It is clear that the Wyoming legislature designed the law this way deliberately, to sidestep state enforcement and avoid an early constitutional court challenge, as many other bounty laws—targeting people who assist in abortions, drag performers, and trans people—have done. The result? An open invitation from the Wyoming legislature to weaponize its citizens, and the courts, against platforms big or small. Because when nearly anyone can sue any website over any content they deem unsafe for minors, the result isn’t safety. It’s censorship.

That means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk.

Imagine a Wyomingite stumbling across an NSFW subreddit or a Tumblr fanfic blog and deciding it violates the law. If that resident is the parent of a minor, they could sue the platform, potentially forcing those websites to restrict or geo-block access to the entire state to avoid the cost and risk of litigation. And because there’s no threshold for how much “harmful” content a site must host, a single image or passage could be enough. That means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk.

This law will likely be challenged, and eventually halted, by the courts. But because the state cannot enforce it, those challenges will not come until a parent sues a website. Until then, its mere existence poses a serious threat to free speech online. Risk-averse platforms may over-correct, over-censor, or even restrict access to the state entirely just to avoid the possibility of a lawsuit, as Pornhub has already done. And if sites impose age-verification schemes to comply, those schemes will be a speech and privacy disaster for all state residents.

And let’s be clear: these state laws are not outliers. They are part of a growing political movement to redefine terms like “obscene,” “pornographic,” and “sexually explicit” as catchalls to restrict content for adults and young people alike. What starts in one state and one lawsuit can quickly become a national blueprint.

If we don’t push back now, the internet as we know it could disappear behind a wall of fear and censorship.

Age-verification laws like these have relied on vague language, intimidating enforcement mechanisms, and public complacency to take root. Courts may eventually strike them down, but in the meantime, users, platforms, creators, and digital rights advocacy groups need to stay alert, speak up against these laws, and push back while they can. When governments expand censorship and surveillance, it's our job at EFF to protect your access to a free and open internet. Because if we don’t push back now, the internet as we know it—messy, diverse, and open—could disappear behind a wall of fear and censorship.

Ready to join us? Urge your state lawmakers to reject harmful age-verification laws. Call or email your representatives to oppose KOSA and any other proposed federal age-checking mandates. Make your voice heard by talking to your friends and family about what we all stand to lose if the age-gated internet becomes a global reality. Because the fight for a free internet starts with us.

Rindala Alajaji

New Documents Show First Trump DOJ Worked With Congress to Amend Section 230

3 weeks ago

After rolling out its own proposal in the summer of 2020 to significantly limit a key law protecting internet users’ speech, the Department of Justice under the first Trump administration actively worked with lawmakers to support further efforts to stifle online speech.

The new documents, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were talking with Senate staffers working to pass speech- and privacy-chilling bills like the EARN IT Act and PACT Act (neither became law). DOJ officials also communicated with an organization that sought to condition Section 230’s legal protections on websites using age-verification systems if they hosted sexual content.

Section 230 protects users’ online speech by shielding the online intermediaries we all rely on to communicate: blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.

DOJ’s work to weaken Section 230 began before President Donald Trump issued an executive order targeting social media services in 2020, and officials in DOJ appeared to be blindsided by the order. EFF was counsel to plaintiffs who challenged the order, and President Joe Biden later rescinded it. EFF filed two FOIA suits seeking records about the executive order and the DOJ’s work to weaken Section 230.

The DOJ’s latest release provides more detail on a general theme that has been apparent for years: that the DOJ in 2020 flexed its powers to try to undermine or rewrite Section 230. The documents show that in addition to meeting with congressional staffers, DOJ was critical of a proposed amendment to the EARN IT Act, with one official stating that it “completely undermines” the sponsors’ argument for rejecting DOJ’s proposal to exempt so-called “Bad Samaritan” websites from Section 230.

Further, DOJ reviewed and proposed edits to a rulemaking petition to the Federal Communications Commission that sought to reinterpret Section 230. That effort never moved forward because the FCC lacked any legal authority to reinterpret the law.

You can read the latest release of documents here, and all the documents released in this case are here.

Related Cases: EFF v. OMB (Trump 230 Executive Order FOIA)
Aaron Mackey

President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

3 weeks ago

The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed move will not make models more accurate or “trustworthy,” as the Trump Administration claims; it is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they otherwise wouldn't, and those changes often trickle down to users. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government.

Less Accuracy, More Bias and Discrimination

It’s no secret that AI models—including generative AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in the data they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words: garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them.
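To make the “garbage in, garbage out” point concrete, here is a minimal sketch in Python using scikit-learn and an entirely hypothetical hiring dataset (the features, labels, and numbers are invented for illustration and are not drawn from any real system): a classifier trained on past decisions that penalized one group learns to penalize that group, even when the job-relevant signal is identical.

```python
# Minimal illustration of "garbage in, garbage out": a model trained on
# historically biased decisions learns to reproduce that bias.
# All data below is hypothetical and exaggerated for clarity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One genuinely job-relevant feature: a skill score.
skill = rng.normal(0.0, 1.0, n)
# One irrelevant feature: group membership (0 or 1), unrelated to ability.
group = rng.integers(0, 2, n)

# Hypothetical historical hiring decisions that penalized group 1.
# These labels encode discrimination, not merit.
hired = (skill - 1.5 * group + rng.normal(0.0, 0.5, n)) > 0

# Train a model directly on those biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill but different group membership.
applicants = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])
# The group-1 applicant gets a much lower predicted "hire" probability,
# because the model faithfully reproduces the bias in its training data.
```

Any real system is far more complicated, but the core failure mode is the same: the model optimizes for agreement with past decisions, so it inherits whatever bias those decisions contained.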

This is true across different types of AI. For example, “predictive policing” tools trained on arrest data that reflects overpolicing of Black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Even though people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women.

These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflect trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less.

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.

We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. 

And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Healthcare companies, landlords, and other businesses increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm.

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.

Tori Noble