Admiring Our Heroes for International Women’s Day: Celebrating Women Who Have Received EFF Awards 

For the last hundred years, women have had pivotal and far too often unsung roles in building and shaping the technology that we now use every day. Many have heard of Ada Lovelace’s contributions to computer programming, but far fewer know Mary Allen Wilkes, a pioneering programmer who wrote much of the software for the LINC, one of the world’s first interactive personal computers (it could fit in a single office and cost $40,000, but it was the 1960s). Decades earlier, when ENIAC, the first all-electronic digital computer, was built in the 1940s, the “software” for it was written by women: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas, and Ruth Lichterman.

It’s thankfully become more common knowledge that actor and inventor Hedy Lamarr co-created the concept of "frequency-hopping" that became a basis for radio systems from cell phones to wireless networking. But too few know Laila Ohlgren, who in the 1970s solved a major problem in the development of mobile networks and phones by recognizing that dialed numbers could be stored and sent all at once with a “call button,” rather than sent one digit at a time, which created connection issues before a call was even made.

Women in tech deserve more and brighter spotlights. At EFF, we’ve had the honor of celebrating some of our heroes at our annual EFF Awards, including many women who are leading the digital rights community. For International Women’s Day, we’re highlighting the contributions of just a few of these recipients from the last decade, whose work to protect privacy, speech, and creativity online has had a global impact.

Carolina Botero (EFF Award Winner, 2024) 

Carolina Botero is a leader in the fight for digital rights in Latin America. For over a decade, she led the Colombia-based Karisma Foundation and cultivated its regional and international impact. Botero and Karisma helped connect indigenous peoples to the internet and made it possible for them to contribute content to Wikipedia in their native languages, expanding access to both history and modern information. They built alliances to combat disinformation, pushed for legal tools to protect cultural and heritage institutions from digital black holes, and were, and remain, a necessary voice for human rights in the online world. EFF worked closely with Karisma and Botero to help free Colombian graduate student Diego Gomez, who shared another student’s master’s thesis with colleagues over the internet. Diego’s story demonstrates what can go wrong when nations enact severe penalties for copyright infringement; thanks to work from Karisma, many partners, and many EFF supporters, he was cleared of the criminal charges he faced for this harmless act of sharing scholarly research.

Carolina Botero receiving her EFF Award

Botero stepped down from the role in 2024, opening the door for a new generation. While her work continues—she’s currently on the advisory board of CELE, the Centro de Estudios en Libertad de Expresión—her EFF Award was well-deserved recognition of a strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Learn more about Botero on her EFF Awards page and in the recap of the 2024 event.

Chelsea Manning (EFF Award Winner, 2017)

Chelsea Manning became famous as a whistleblower: In 2010, she disclosed classified Iraq War documents, including a video of the killings of Iraqi civilians and two Reuters reporters by U.S. troops. These documents exposed aspects of U.S. operations in Iraq and Afghanistan that infuriated the public and embarrassed the government. But she is also a transparency and transgender rights advocate, network security expert, author, and former U.S. Army intelligence analyst. 

Manning joined the military in 2007. Her role as an intelligence analyst with an Army unit in Iraq in 2009 gave her access to classified databases, but more importantly, it gave her a uniquely comprehensive view of the war, and she became increasingly disillusioned and frustrated by the gap between what she saw and what was being shared with the public. In 2010, she approached major news outlets hoping to give them information that would reveal a new side of the war. Ultimately, she shared the documents with WikiLeaks.

Manning’s bravery did not end there. When she was arrested a few months later, she endured "cruel, inhuman and degrading" treatment, according to the UN Special Rapporteur on torture. She was locked up alone for 23 hours a day over an 11-month period, before her trial. The mistreatment resulted in public outcry and advocacy by organizations like Amnesty International. Even a State Department spokesperson, Philip Crowley, criticized the treatment as "ridiculous, counterproductive, and stupid," and resigned. She was moved to a medium-security facility in April 2011. 

The government’s charges against Manning were outrageous, but in 2013 she was convicted of 19 of 22 counts as a result of her whistleblowing activities. She became one of fewer than a dozen people prosecuted for espionage in the entire history of the United States, and she was sentenced to the longest punishment ever imposed on a whistleblower. Then, the day after her conviction, isolated from her community and in all likelihood expecting to remain in prison for years if not decades, she courageously issued a statement identifying herself as a trans woman, which she’d wanted to reveal for years.

Over the next several years, while imprisoned, she became an advocate both for government transparency and for transgender rights. Her conviction and sentence pointed to the need for legal reform of both the Computer Fraud and Abuse Act (CFAA) and the Espionage Act. EFF filed an amicus brief with the U.S. Army Court of Criminal Appeals arguing that the CFAA was never meant to criminalize violations of use policies like those governing government systems, and EFF also pushed, and continues to fight, for narrower interpretations of the Espionage Act and stronger protections for whistleblowers, particularly ones that take into account both the motivation of individuals who pass on documents and the ramifications of the disclosure.

Even after President Obama commuted her sentence in 2017, and EFF celebrated her work and her release with an EFF Award in September 2017, her fight wasn’t over. She was imprisoned twice more in 2019 and ultimately fined $256,000 for refusing to testify before grand juries investigating WikiLeaks founder Julian Assange. The U.N. Special Rapporteur on torture again criticized Manning’s treatment, writing that "the practice of coercive detention appears to be incompatible with the international human rights obligations of the United States."

Manning was released in 2020, after spending almost a decade imprisoned in total for her courage. She wrote a memoir, README.txt, in 2022, to take back control of her story.

EFF Award Winners Mike Masnick, Annie Game, and Chelsea Manning

Annie Game (EFF Award Winner, 2017)

Annie Game spent over 16 years as the Executive Director of IFEX, a global network of journalism and civil liberties organizations working together to defend freedom of expression. IFEX (formerly the International Freedom of Expression Exchange) began in the 1990s, when a group of organizations and the Canadian Committee to Protect Journalists came together to consider how to respond as a single voice to free-expression violations around the world. IFEX is now a global hub for the protection of free speech and journalism.

Game recognized early on that digital rights and freedom of expression groups needed one another. Under her leadership, IFEX paired more traditional free-expression organizations with their digital-rights counterparts, with a focus on building organizational security capacities. IFEX initiatives under Game’s leadership have been expansive. For example, the International Day to End Impunity for Crimes against Journalists, November 2, has been an annual wake-up call and reminder for UN member states to live up to their commitments to protecting journalists. UNESCO has observed that more than 1,700 journalists were killed globally between 2006 and 2024, and that nearly 90% of these cases went unsolved in the courts.

Game and IFEX have also focused on high-profile cases of journalists threatened by governments for their work, such as Bahey eldin Hassan in Egypt. Bahey is the director of the Cairo Institute for Human Rights Studies (CIHRS) and has advocated for freedom of expression and the basic human rights of Egyptians, but has lived in exile since 2014. The charges against him, of “disseminating false information” and “insulting the judiciary,” are common tactics of intimidation and harassment. Bahey’s supposed crimes were sharing social media posts criticizing the Egyptian judiciary’s lack of independence, and speaking about the killing in Egypt of Italian researcher Giulio Regeni. Bahey—an IFEX member—is just one of many reporters and human rights workers in danger when they speak. But when journalists and those defending their rights online speak out as one voice, as IFEX helps them do, it makes a difference.

Another initiative has been the Faces of Free Expression project, a partnership between IFEX and the International Free Expression Project. If you’re looking for more heroes, this project details the stories of “risk-takers and change-makers – individuals who put their careers, their freedom, their safety, and sometimes even their lives on the line” while reporting or defending free expression and the right to information.

Wherever authoritarianism and repression of speech have been on the rise, Game has unapologetically called out injustices and made it safer for journalists to do their work, while ensuring accountability when crimes are committed. The work is more critical now than ever, and since leaving IFEX in 2022, she’s remained an activist while focusing increasingly on environmental protection. 

Twelve More Heroes 

EFF has honored many more women with awards over the years—from Anita Borg and Hedy Lamarr to Amy Goodman and Beth Givens. This blog from 2012 looks back and acknowledges the important contributions from twelve more EFF Award winners. 

We’ve also asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us. You can read that here.

Jason Kelley

Admiring Our Heroes for International Women’s Day: Five Women In Tech That EFF Admires

In honor of International Women’s Day, we asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us.  

Anna Politkovskaya 

Jillian York, Activist 
This International Women’s Day, I want to honor the memory of Anna Politkovskaya, the Russian investigative journalist who relentlessly exposed political and social abuses, endured harassment and violence for her work, and was ultimately killed for telling the truth. I had just started my career when I learned of her death, and it forced me to confront that freedom of expression isn’t an abstract principle but rather something people risk—and sometimes lose—their lives for. 

Her story reminds me that journalism at its best is an act of moral courage, not just a profession. In the face of threats, poison, and relentless pressure to stay silent, she chose to continue writing about what she saw, insisting that ordinary people’s lives were worth the world’s attention. She refused to compromise with power, even when she knew it could cost her life. To me, defending freedom of expression means defending those like Anna who bear witness to injustice, prioritize truth, and hold power to account for those whose voices are silenced.  

Cindy Cohn 

Corynne McSherry, Legal Director 
There are so many women who have shaped tech history–most of whom are still unsung heroes—that it’s hard to single out just one. But it’s easier this year because it’s a chance to celebrate my boss, Cindy Cohn, before she leaves EFF for her next adventure.  

Cindy has been fighting for our digital rights for 30 years, leading EFF’s legal work and eventually the whole organization. She helped courts understand that code is speech deserving of constitutional protection at a time when many judges weren’t entirely sure what code even was. She led the fight against NSA spying, and even though outdated and ill-fitting doctrines like the state secrets privilege prevented courts from ruling on the obvious unconstitutionality of the NSA’s mass surveillance program, the fight itself led to real reforms that have expanded over time.

I’ve worked closely with her for much of her EFF career, starting in 2005 when we sued Sony for installing spyware in millions of computers, and I’ve seen firsthand her work as a visionary lawyer, outstanding writer, and tireless champion for user privacy, free expression, and innovation. She’s also warm and funny, with the biggest heart in the world, and I’m proud to call her a friend as well as a mentor.  

Jane

Sarah Hamid, Activist 
When talking about women in tech, we usually mean founders, engineers, and executives. But just as important are the women who quietly built the practices that underpin today’s movement security culture. 

For as long as social movements have organized in the shadow of state surveillance, women have been designing the protocols, mutual aid networks, and information flows that keep people alive. Those threats feel ever-escalating: fusion-center monitoring of protests, federal agencies infiltrating and subpoenaing encrypted Signal and social media chats, prosecutors mining search histories.

In the late 1960s and early 1970s, the underground Jane abortion counseling service—formally the Abortion Counseling Service of Women’s Liberation—built what we would now recognize as a feminist infosec project for abortion access. Jane connected an estimated 11,000 people with safer abortions before Roe v. Wade, using a single public phone number—“Call Jane”—paired with code names, compartmentalized roles, and minimal records, so no one person held the full story of who needed care, who was providing it, and where. When Chicago police raided the collective in 1972, members destroyed their index-card files rather than let them become a ready-made map of patients and helpers—an analog secure-deletion choice that should feel familiar to anyone who has ever wiped a phone or locked down a shared drive.

The lesson we should take from Jane is a set of principles that still hold in our encrypted-but-insecure present: Collect less, separate what you do collect, and be ready to burn the file box. When a search query, a location ping, or a solidarity post can become evidence, treating information as both lifeline and liability is not paranoia—it is care work.

Ebele Okobi

Babette Ngene, Director of Public Interest Technology 
In the winter of 2013, I had just landed my first job at the intersection of tech and human rights, working for a prominent nonprofit, and I was encouraged to attend regular tech and policy events around town. One such event, on internet governance, was held at George Washington University, focusing on multistakeholder engagement on internet policy and governance issues, with companies, nonprofits, and government representatives in attendance. I was inexperienced with these topics, and I’ll admit I was a bit intimidated.

Then I saw her. She was the only woman on the opening panel, an African woman, an accomplished woman. Not only was she a respected lawyer at Yahoo at the time, but her impressive background, presence, and confident speaking style immediately inspired me. She made me feel like I, too, belonged in that room and could become a powerful voice. 

Ebele Okobi would go on to become one of the most powerful and respected voices in the tech and human rights space, known for her advocacy for digital rights and responsible innovation across Africa and the broader global majority during her tenure at Facebook. Beyond her corporate advocacy, Ebele has consistently championed ethical technology and social justice. She embodies the leadership qualities I value most: empathy, speaking truth to power, integrity, and authenticity. 

I remain in the tech and human rights space because I saw her, because seeing her made me feel seen. Representation truly does matter.  

Ada Lovelace 

Allison Morris, Chief Development Director 
I’m not a lawyer, activist, or technologist; I’m a fundraiser and a lover of stories. And what storyteller at EFF couldn’t help but love Ada Lovelace? The daughter of Lord Byron – the human embodiment of Romanticism – Ada was an innovator in math and science and, ultimately, the writer of the first computer program.  

Lovelace saw the potential in Charles Babbage’s theoretical general-purpose computer, the Analytical Engine (which was never actually built), and created the foundations of modern computing long before the digital age. In creating the first computer code, Lovelace took Babbage’s concept of a machine that could perform mathematical calculations and realized that it could manipulate symbols as well as numbers.

Given the expectations of women in her time and the controversy of what work should be attributed to Lovelace as opposed to the man she often worked with, I can’t help but be inspired by her story.  

Allison Morris

Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI-Powered Surveillance

OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance—early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal—Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 

"After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time."

“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind standing by their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under the laws in force at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing doublespeak, it is very clear that companies simply cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people—whether CEOs or Pentagon officials—to protect our civil liberties.

Corynne McSherry

The Government Uses Targeted Advertising to Track Your Location. Here's What We Need to Do.

We've all had the unsettling experience of seeing an ad online that reveals just how much advertisers know about our lives. You're right to be disturbed. New reporting has confirmed that those very same online ad systems have been used by the government to warrantlessly track people's locations.

For years, the internet advertising industry has been sucking up our data, including our location data, to serve us "more relevant ads." At the same time, we know that federal law enforcement agencies have been buying up our location data from shady data brokers that most people have never heard of.

Now, a new report gives us direct evidence that Customs and Border Protection (CBP) has used location data taken from the internet advertising ecosystem to track phones. In a document uncovered by 404 Media, CBP admits what we’ve been saying for years: The technical systems powering creepy targeted ads also allow federal agencies to track your location.

The document acknowledges that a program by the agency to use "commercially available marketing location data" for surveillance drew from the process used to select the targeted ads shown to you on nearly every website and app you visit. In this blog post, we'll tell you what this process is, how it is being used for state surveillance, and what can be done about it—by individuals, by lawmakers, and by the tech companies that enable these abuses.

Advertising Surveillance Enables Government Surveillance

The online advertising industry has built a massive surveillance machine, and the government can co-opt it to spy on us. 

In the absence of strong privacy laws, surveillance-based advertising has become the norm online. Companies track our online and offline activity, then share it with ad tech companies and data brokers to help target ads. Law enforcement agencies take advantage of this advertising system to buy information about us that they would normally need a warrant for, like location data. They rely on the multi-billion-dollar data broker industry to buy location data harvested from people’s smartphones.

We’ve known for years that location data brokers are one part of federal law enforcement's massive surveillance arsenal, including for immigration enforcement agencies like CBP and Immigration and Customs Enforcement (ICE). ICE, CBP, and the FBI have purchased location data from the data broker Venntel and used it to identify immigrants who were later arrested. Last year, ICE purchased a spy tool called Webloc that gathers the locations of millions of phones and makes it easy to search for phones within specific geographic areas over a period of time. Webloc also allows them to filter location data by the unique advertising IDs that Apple and Google assign to our phones.

But a document recently obtained by 404 Media is the first time CBP has acknowledged the location data it buys is partially sourced from the system powering nearly every ad you see online: real-time bidding (RTB). As CBP puts it, “RTB-sourced location data is recorded when an advertisement is served.” 

Even though this document is about a 2019-2021 pilot use of this data, CBP and other federal agencies have continued to purchase and use commercially obtained location data. ICE has purchased location tracking tools since then and recently requested information on “Ad Tech” tools it could use for investigations. 

The CBP document acknowledges two sources of location data that it relies on: software development kits (SDKs) and RTB, both methods of location-tracking that EFF has written about before. Apps for weather, navigation, dating, fitness, and “family safety” often request location permissions to enable key features. But once an app has access to your location, it could share it with data brokers directly through SDKs or indirectly (and often without the app developers' knowledge) through RTB. Data brokers can collect location data from SDKs that they pay developers to put in their apps. When relying on RTB, data brokers don’t need any direct relationship with the apps and websites they’re collecting location data from. RTB is facilitated by ad companies that are already plugged into most websites and apps. 

How Real-Time Bidding Works

RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your information, including location data, to thousands of companies a day. At a high level, here’s how RTB works (a sketch of the kind of data a bid request carries follows the list):

  1. The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. 
  2. This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people. 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space. 
  5. The highest bidder gets to display an ad to you, but advertisers (or the ad tech companies that represent them) can collect your bidstream data regardless of whether they bid on the ad space.
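
To make the privacy stakes concrete, here is a minimal sketch of the kind of personal data a single bid request can carry. The field names are modeled on the openly published OpenRTB specification that most ad exchanges follow; the values, the app name, and the exact structure are invented for illustration:

    # Illustrative sketch of a bid request, with field names modeled on the
    # OpenRTB spec used by most ad exchanges. All values here are invented.
    bid_request = {
        "id": "auction-12345",                     # one-time auction ID
        "app": {"bundle": "com.example.weather"},  # hypothetical app
        "device": {
            "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising ID
            "ip": "203.0.113.7",
            "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # GPS-derived
            "os": "Android",
            "model": "Pixel 7",
        },
        "user": {"yob": 1990, "gender": "F"},      # inferred demographics
    }

    # Every company that receives this request, auction winner or not, can log
    # it. Because the advertising ID ("ifa") stays stable across apps and days,
    # joining these records over time yields a movement history for one phone,
    # which is exactly the product that location data brokers resell.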

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. For example, the FTC found that location data broker Mobilewalla collected data on over a billion people, with an estimated 60% sourced from RTB auctions. Leaked data from another location data broker, Gravy Analytics, referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers, and religion-focused apps. When confronted, several of these apps’ developers said they had never heard of Gravy Analytics.

As Venntel, one of the location data brokers that has sold to ICE, puts it, “Commercially available bidstream data from the advertising ecosystem has long been one of the most comprehensive sources of real-time location and device data available.” But the privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast the average person’s data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately exploited. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used. 

What You Can Do To Protect Yourself

Revelations about the government's exploitation of this location data show how dangerous online tracking has become, but we’re not powerless. Here are two basic steps you can take to better protect your location data:

  1. Disable your mobile advertising ID (see instructions for iPhone/Android). Apple and Google assign unique advertising IDs to each of their phones. Location data brokers use these advertising IDs to stitch together the information they collect about you from different apps. 
  2. Review the apps you’ve granted location permissions to. Apps with access to your location could share it with other companies, so make sure you’re only granting location permission to apps that really need it in order to function. If you can’t disable location access completely for an app, limit it to only while you’re using the app, or share only your approximate location instead of your precise location.

For more tips, check out EFF’s guide to protecting yourself from mobile-device based location tracking. Keep in mind that the security plan that’s best for you will vary in different situations. For example, you may want to take stronger steps to protect your location data when traveling to a sensitive location, like a protest. 

What Tech Companies and Lawmakers Must Do

Legislators and tech companies must act so that individuals don’t bear the burden of defending their data every time they use the internet.

Ad tech companies must reckon with their role in warrantless government surveillance, among other privacy harms. The systems they built for targeted advertising are actively used to track people’s location. The best way to prevent online ads from fueling surveillance is to stop targeting ads based on detailed behavioral profiles. Ads can still be targeted contextually—based on the content people are viewing—without collecting or exposing their sensitive personal information. Short of moving to contextual advertising, tech companies can limit the use of their systems for government location tracking by:

  • Stopping the use of precise location data for targeted advertising. Ad tech companies facilitating ad auctions can and should remove precise location data from bid requests (see the sketch after this list). Ads can be targeted based on people’s coarse location, like the city they’re in, without giving data brokers people’s exact GPS coordinates. Precise location data can reveal where we work, where we live, who we meet, where we protest, where we worship, and more. Broadcasting it to thousands of companies a day through RTB is dangerous.
  • Removing advertising IDs from devices, or at minimum, disabling them by default. Advertising IDs have become a linchpin of the data broker economy and are actively used by law enforcement to track people’s location. Advertising IDs were added to phones in 2012 to let companies track you, and removing them is not a far-fetched idea. When Apple forced apps to request access to people’s advertising IDs starting in 2021 (if you have an iPhone you’ve probably seen the "Ask App Not to Track" pop-ups), 96% of U.S. users opted out, essentially disabling advertising IDs on most iOS devices. One study found that iPhone users were less likely to be victims of financial fraud after Apple implemented this change. Google should follow Apple’s lead and disable advertising IDs by default.
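
As an illustration of that first recommendation, coarsening can happen before a bid request ever leaves the ad exchange. Below is a hypothetical sketch that reuses the invented OpenRTB-style structure from the earlier example; the one-decimal rounding threshold is illustrative, not a standard:

    def coarsen_geo(bid_request: dict, decimals: int = 1) -> dict:
        """Round GPS coordinates down to a roughly city-sized area.

        One decimal place of latitude spans about 11 km, so rounding keeps
        city-level ad targeting possible while stripping the street-level
        precision that makes bidstream data useful for tracking people.
        """
        geo = bid_request.get("device", {}).get("geo")
        if geo:
            for key in ("lat", "lon"):
                if key in geo:
                    geo[key] = round(geo[key], decimals)
            geo.pop("accuracy", None)  # drop precision metadata as well
        return bid_request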

Lawmakers also need to step up to protect their constituents' privacy. We need strong, federal privacy laws to stop companies from spying on us and selling our personal information. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move.

Legislators can and must also close the Fourth Amendment’s "data broker loophole." Instead of obtaining a warrant signed by a judge, law enforcement agencies can simply buy location data from private brokers to find out where you've been. Last year, Montana became the first state in the U.S. to pass a law blocking the government from buying sensitive data it would otherwise need a warrant to obtain. And in 2024, Senator Ron Wyden's EFF-endorsed Fourth Amendment Is Not For Sale Act passed the House before dying in the Senate. Others should follow suit to stop this end-run around constitutional protections.

Online behavioral advertising isn’t just creepy—it’s dangerous. It's wrong that our personal information is being silently harvested, bought by shadowy data brokers, and sold to anyone who wants to invade our privacy. This latest revelation of warrantless government surveillance should serve as a frightening wake-up call about how dangerous online behavioral advertising has become.

Lena Cohen

Speaking Freely: Shin Yang

*This interview has been edited for length and clarity.

David Greene: Shin, please introduce yourself to the Speaking Freely community.

 Shin Yang: My name is Shin Yang. I am a queer writer with a legal background and experience in product management. I am the steward of Lezismore, an independent, self-hosted, open-source community for sexual minorities in Taiwan. For the past decade, I have focused on platform governance as infrastructure, with a particular emphasis on anonymity, minimal data collection, and behavior-based accountability, so that people can speak about intimacy and identity without fear of extraction or exposure. I am a community architect and builder, not an influencer. I’ve spent most of the past decade working anonymously building systems, designing governance protocols, and holding space for others to speak while keeping myself in the background.

 DG: Great. So let’s talk about how that work intersects with freedom of expression as a principle, and your own personal feelings about freedom of expression. With that in mind, let me start with a basic question: what does freedom of expression mean to you?

 SHIN: For me, free expression is about possibility, and possibility always contains both, and even multiple, ends: the beautiful ones and the brutal in equal measure. Maybe not that equal, but you cannot just speak about the beautiful or good things. I think it's not about pushing discomfort out of the room. If we refuse all discomfort, we end up in echo chambers, which are safe, predictable, but dead. What matters to me is the equipment and principles that carry us through that discomfort—self-discipline, mutual support, and the infrastructure and governance that let people grow over time and keep a workable gray space open: room to make mistakes, learn, repair, and keep speaking.

 DG: How does that resonate with you personally? Why are you passionate about that?

 SHIN: Around 2013 in Taiwan's context, when Facebook started to take over the digital ecosystem in Taiwan, many local independent bulletin boards (BBS) that had been formed for sexual minorities were shut down because they had no income from advertisements, and people were pushed into mainstream platforms—like Facebook, Instagram, Meta, whatever, Twitter now X—where sexual expression was usually reported or flagged, and where I watched sharp intra-community exclusionary voices saying “bisexual and trans people were not pure enough”, or that talking openly about sex would harm our image, or that it was inappropriate to children, or it would invite harassment. Those oppressions are even fiercer within the queer community itself, which is self-censoring in order to gain approval from mainstream society.

 So, the community itself says that the best way to do it is don't talk about it. Never talk about it. Never mention a single thing about it. It was a wakeup call for me, because I think it's not right. And also, there's another more private story for me, it's a story I heard from our sexual minority community. I once heard about a butch student who was sexually assaulted by a group of men because she dated a beautiful classmate, a beautiful woman in the class.

 And when I learned what happened to her, that story changed my focus. Because, you know, when people hear this kind of story, they always focus on punishing those men, punishing those criminals—but what matters most to me is building conditions where someone like her could someday still have a chance at intimacy on her own terms, and finally be free from fear. That's more important to me. I may never meet her, but I know who I am and what I'm here to build. I have been building an infrastructure—not just “safe space” as a slogan, but an “ecospace” designed to make survival and growth possible. So that's why I believe that a well-governed space is what matters for communities now.

 DG: Why is it so important for sexual minorities to have forums where they can communicate in that way? When it was just the bulletin boards, before social media, what worked really well and what didn’t work well?

 SHIN: That’s a wonderful question. Okay, the bulletin boards I used before, the registration process doesn't require a lot of information. You just need email.

 What I miss about bulletin boards is the sense of structure. Some boards were school-based, so you had to say which school you were in, but even that wasn’t hard. You didn’t enter a personalized feed—you entered a place with visible rooms and topics, big boards alongside small ones, so you could sense and feel the whole structure of the community. Even in the boards you visited every day, you would definitely encounter views you didn’t like, and you had to live with that—and learn how to argue, or leave, or build something parallel. That’s the everyday practice of civic democracy. On some boards, moderators were community-chosen—people could vote for them or even recall them—which created a practical kind of participation: not perfect democracy, but civic practice.

 DG: You mean, the community can ask them to leave the bulletin boards?

 SHIN: No, they don't actually leave the bulletin board. It's more that the moderator no longer has the right to perform administrative tasks, but they can still be part of the community, and ordinary users can vote in that election.

 DG: Okay, and then what were the shortcomings of the bulletin boards?

 SHIN: Yeah, it’s brutal. Really brutal. And I’ve seen people literally organize to push others out. I didn’t expect this to turn into story time, but I actually love this. So—back in Taiwan, we had this big BBS forum called PTT. There was a board called the ‘Sex’ board, where people could talk about sexual topics and share sexual health info. But around 2010, the space was dominated by mainstream straight cis men. And whenever a woman or a sexual minority posted anything, they often got harassed or attacked. So, women created another board inside the forum—basically a separate space—called ‘Feminine Sex.’ And from then on, the original Sex board and the Feminine Sex board were in conflict all the time. And honestly, if this happened today on Facebook, Threads, or X… we’d just block each other. Easy. Clean. Done.

But the problem is: when blocking becomes the default, we don’t really learn how to argue well, how to organize our reasons, or even how to sit with discomfort and understand why the other side thinks the way they do. We lose that practice—because it’s just so easy to delete people from our world now. I’m not saying blocking is always wrong. But there’s a trade-off.

 DG: I get that. Then when Facebook and the other social media platforms that followed came along and the users migrated over to the commercial services, what was lost? 

 SHIN: What was lost? I think our behavior got shaped—personal branding became the default setting for joining an online community. If you don't do it, like me, you basically don't exist. Influence can be shaped by the number of social media followers; people define each other based on this. Choosing not to obey the logic of mainstream platforms means being unseen, and being unseen means having no influence.

And sure, personal branding can be useful—but I don’t believe it’s the only way to express yourself or connect with a community. The problem is, on mainstream platforms, the whole system is built for visibility. So clout becomes the game. Look at what they push: stories, reels, short-form visuals. And as a former product manager, I can tell you—this is not accidental. It’s designed. It’s designed around human nature: to avoid friction as much as possible. So they keep you scrolling, to make reacting effortless. One tap and you’ve sent a smiley face. Engagement becomes easier… but also cheaper.

And the scary part is, people start thinking that’s the whole internet. It’s not. But the more we get trained by these interfaces, the harder it becomes to even imagine other ways of building community. It is becoming more difficult for people to imagine that the "right" amount of friction can actually help us grow and coexist with diversity.

 DG: So did you find that there were certain things you couldn't talk about on Facebook or on the other social media platforms because they were sexual, because sexual speech was not as welcome as it was earlier?

 SHIN: Yes, when I first started building my community, I knew nothing about technology. Like everyone else, I just created a fan page on Facebook, which was then flagged and deleted. This happened. I think it still happens to this day. At first, I was so angry about it. I felt it was unjust. But every time I wrote to Facebook, they just said that I had violated the user terms. At first I was furious. But I don’t stop at anger. I dig deeper. I thought, “Why do you say I violated the user terms?”

I read the terms, compared policies across platforms and applications, and realized the pattern: All of the terms of use forbid adult or erotic content in fine print. Because these are profit-driven systems optimized to minimize legal and business risk. So, I don’t frame it as “evil platforms.” I frame it as incentives. Once I understood this, I realized that we should not only protest and ask those big tech platforms to “give” us a voice –– that's a good approach, but it shouldn't be the only one. I believe we should build our own community. That's why I started researching open-source software and building my own self-hosted community.

 DG: Please talk a little bit more about what you're building, and how what you're building is consistent with your view of free expression.

 SHIN: Sure. It’s a long process, but the reason I use open-source software is that, as a person who knew nothing about technology, I could go to the open-source community and ask questions. That’s more reliable than building everything from scratch by myself.

 And the second example is about how I designed Lezismore’s registration and community access, mostly through trial and error.

 We don’t require any real-name or ID verification. In fact, you can register with just an email. But instead of “verifying people,” we redesigned the "space".

 Lezismore is built as a two-layer structure. The main website is searchable, but it looks almost… boring on purpose—advocacy articles, writers’ posts, slow content. The truly active community space is inside that main site, and the entry point is not something you casually discover through search. Most people learn how to get in through word of mouth. We also block search engines, bots, and crawlers from the community area. So from day one, we gave up visibility on purpose—we traded reach for resilience.
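
 Mechanically, that trade of reach for resilience is easy to picture in code. Here is a hypothetical sketch of the two-layer split as a small Flask app; the paths, the user-agent heuristic, and the rules are invented for illustration and are not Lezismore’s actual configuration:

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Ask well-behaved crawlers to skip the community layer entirely.
    ROBOTS_TXT = "User-agent: *\nDisallow: /community/\n"

    BOT_MARKERS = ("bot", "crawler", "spider")  # crude, illustrative heuristic

    @app.route("/robots.txt")
    def robots():
        return ROBOTS_TXT, 200, {"Content-Type": "text/plain"}

    @app.before_request
    def block_bots_from_community():
        # robots.txt is only advisory, so also refuse obvious crawlers.
        if request.path.startswith("/community/"):
            ua = (request.headers.get("User-Agent") or "").lower()
            if any(marker in ua for marker in BOT_MARKERS):
                abort(403)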

 Then there’s the onboarding. New users go through an “apprenticeship” period. You can’t immediately post, comment, or DM people. You first have to read, observe, and understand how the community works. We don’t even tell you exactly how long it takes—you just have to be patient. In the fast-content era, people constantly complain that this is “annoying” or “hard to use.” And yes, it is friction indeed.

 But that friction buys something valuable: a space that can stay anonymous, inclusive, and high-trust—without being instantly overwhelmed by harassment or bad-faith users. It also means we don’t need to depend on Big Tech’s third-party verification APIs. With relatively low technical cost, we’re using governance design—not data collection—to balance inclusion and protection.
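
 The apprenticeship gate can be sketched the same way; the names and the threshold here are invented, since, as noted above, the real waiting period isn’t disclosed:

    from datetime import datetime, timedelta

    # Hypothetical probation gate: the actual length is deliberately not public.
    APPRENTICESHIP = timedelta(days=30)
    WRITE_ACTIONS = {"post", "comment", "dm"}

    def is_allowed(action: str, joined_at: datetime) -> bool:
        """Reading is never gated, so the space stays anonymous and
        low-barrier; writing waits out a probation period, which raises the
        cost of drive-by harassment without collecting any identity data."""
        if action not in WRITE_ACTIONS:
            return True  # browsing and reading are always allowed
        return datetime.utcnow() - joined_at >= APPRENTICESHIP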

And honestly, as a platform owner, I have to be real about what users “actually” need. If this was truly “just terrible UX,” the site wouldn’t survive in today’s hyper-competitive platform environment. But Lezismore has been running for over a decade, and we still have tens of thousands of people quietly reading and interacting every month. This is one of the biggest tradeoffs in my governance design. In an attention economy, choosing low visibility is a bold decision, and maintaining it has a real cost.

 On top of that, we rely on human, context-based moderation. We use posts, replies, and Q&A threads to actively teach community norms—why diversity and conflict exist, how to handle risk, and how to protect yourself. Users also share practical safety tips and real interaction experiences with each other. There are many more small mechanisms built into the system, but that’s the core logic.

 And there’s one more layer: the legal environment. In Taiwan, the legal climate around sex and speech can create chilling effects for smaller platforms. Platform owners can be criminally liable in certain scenarios. That’s exactly why governance design matters—it’s how we keep lawful expression possible without over-collecting data.

 DG: Ah, so you need to be careful. I’m curious whether you’ve had any examples of offline repression. Do you have any experiences with censorship or feeling like you didn’t have your full freedom of expression in your offline experiences? Any experiences that might inform what an ideal online community might look like?

 SHIN: Yes—actually, most of my earliest experiences with repression were offline, and they shaped how I later understood the internet as an escape route.

 Back when I was a high school student, I was already involved in student movements and gender-related advocacy. One very concrete example was dress codes. The school restricted what female students could wear, and students organized to push for change. At one point we even had a vote—something like 98% of students supported revising the policy. But when the issue entered the “official” system, the administration simply ignored it. They bypassed procedure, dismissed the consensus, and used authority to shut it down completely.

That was my first clear lesson about repression: it’s not always someone telling you “you’re forbidden to speak.” Sometimes it’s a system designed so that even if students, women, or sexual minorities spend enormous effort building agreement, once our voices enter the institution, they can be treated as if they don’t exist.

That’s why, in the early 2010s, online space became my breakthrough. This was still the blog era, before social platforms fully standardized everything, and even before “share” mechanisms were built into everyday activism. I started experimenting with things like blog-based petitions, and a lot of students joined. The internet became a way to bypass institutional gatekeeping.

In college, I saw another layer. There was serious sexism from people in authority—military-style discipline officers, some teachers, and administrators. When gender-related controversies happened on campus, the media sometimes showed up and reported in ways that were harmful: exposing people, sensationalizing stories, and ignoring the realities of sexual minority students. Meanwhile, the administration would shut down student demands with authority, and at the same time use incentives and pressure behind the scenes, especially around housing or “benefits”—so some student representatives were afraid to speak honestly in meetings.

And this was before livestreaming was a normal tool. But even then, I was already using audio-based live channels to connect students across campuses. Online networks became a lifeline for young advocates, especially those of us who didn’t “fit” the institution and needed each other to survive.

I came from a literature background. I had zero technical training at the beginning. But I’ve always been the kind of person who loves trying new technology. And I was lucky, because I was born in that strange window when the internet was rapidly expanding, but not yet fully swallowed by Big Tech. So, I grew up in this tension between nostalgia and innovation, and I kept pushing, resisting, and experimenting. I’ve experienced both sides of speech: how beautiful freedom can be, and how terrifying it can become. 

 DG: Going back to Lezismore, I’m curious: When you ask people to observe before they post, what are you hoping they learn about the community before they more actively participate in it?

 SHIN: I hope people understand that this is a community rather than a dating app focused on results. The community needs people to support and nurture each other. Some people see us as a dating app and expect a frictionless experience; naturally, they are disappointed. If you’re only looking for a fast-food relationship, that’s fine—but this is a community that offers more than just hooking up. The design focuses on words and a person’s behavioral history rather than just a photo. Dopamine bombing is not how we do things here.

 We’ve also built a library of community safety notes, FAQs, and governance reminders over time. Some written by the team, some contributed by members. Not everyone reads them, and that’s fine. But the design makes it easier for people who want a slower, more intentional space to stay—and for people who want something frictionless to self-select out.

 I run the platform anonymously by design. People may know that there’s an admin called “Shin,” but I don’t associate a face or a personal brand with the role, because I don’t want the community’s trust to depend on my visibility.

 We maintain a clear distinction between work and private life. Admin power is never a shortcut to social capital. In a sex-positive space, this boundary is a matter of ethics. The moment a founder’s identity becomes central, the space starts to orbit that person, and expectations, fan-service dynamics and power asymmetries creep in. Then speech becomes performance.

It also means I’m less “marketable” to attention-driven media—but that tradeoff protects the community’s integrity. Some media outlets only want a face and a persona. However, I accept this cost because I am trying to build a community that can thrive independently of an idol, where people relate to each other through behavior and shared norms, not proximity to the founder.

 DG: It sounds like a lot of what you’re doing is about people being authentic on the site, not using personas or using it to create a personal platform for themselves for marketing purposes.

 SHIN: Exactly, people can share links, but if a post is purely self-promotion with no contribution to the community, we don’t encourage this. I hope people here can respect the reciprocity.

 DG: I want to shift a bit and talk about freedom of expression as a principle for a while. Do you think freedom of expression should be regulated by governments?

 SHIN: Speech regulation is hard, because speech is freaking messy. And once you turn messy human speech into rules that scale, nuance gets flattened. Minority communities usually pay first, because large systems choose efficiency over lived reality.

 I also don’t think the answer is “erase all conflict.” Some friction is the price of pluralism, and with good guidance and interface design, conflict can become a point of learning instead of a point of collapse. From a platform owner’s perspective, legal liability is real and often cruel. So if we expect platforms to be free, frictionless, allow everything we like, erase everything we dislike, and still amplify our visibility—then we’re really asking for magic. That’s why we need to talk seriously about alternatives and procedural safeguards, not just louder demands.

 Age verification is a good example. I get that the goal is to protect minors. But identity-based age gates often turn into identity infrastructure. They chill lawful adult speech, concentrate gatekeeping power, and push everyone to hand over personal data just to access legal content. From my experience, there are other tools that can reduce harm with less damage—things like community design, visibility gating, and human, context-based moderation. Those approaches can protect people without building a personal-data checkpoint for everyone.

 DG: You talked about minority voices and minority speech. Are you concerned that any regulation will end up trying to silence minority speakers, or won’t benefit minority speakers? How are these speakers more vulnerable to speech regulations than others?

 SHIN: Hmmm... a lot of minority speech is context-heavy. The same words can be support, education, or harassment depending on who says them and why. When regulation turns into broad categories, sexual health education, sharing of self-exploration experiences, trans healthcare discussions, or reclaimed language can be treated as “harmful” out of context (on both sides). So the risk isn’t only censorship, it’s misclassification at scale.

 DG: Are there certain types of speech that don’t deserve the conversation? Some people might say that hate speech or speech that’s dehumanizing doesn’t deserve the conversation. Are there any categories of speech that you would say we shouldn’t consider, or do we get to talk about everything?

 SHIN: Okay, I don't think the issue is about saying certain kinds of speech don't deserve to be discussed; the problem lies in the definition. As soon as we suggest that some speech doesn't merit discussion, some people will exploit this to silence their opponents. Whether it's right-wing, left-wing or anything else, if we say that we don't allow any kind of hate speech, the next thing someone will do is define your speech as hate speech. It's an endless war that draws us all into an eagerness to silence others and grab the mic, instead of creating more space for conversations and learning from each other.

 We should go further than just regulation and create spaces where people can coexist in a grey area, endure some discomfort and engage with each other. I prefer this approach to trying to draw lines.

 DG: So even well-intentioned restrictions might always be used against minority speakers?

 SHIN: I wouldn’t say restriction is not good. There always has to be some kind of restriction, but people will always find a way to overcome or take advantage of it. So, the thing I believe is that regulation is regulation, but community should be an open-source archive. How we govern community, how we dialogue between each other when we disagree with each other…how can we create a space where those things can exist? I believe that those things should be open source. People always talk about open source like it’s just coding, but I believe governance should be open source too.

 DG: So when you said before that some restrictions are necessary, and now we’re talking about open-source governance, are we talking about the same thing? When you say some restrictions are necessary, you’re not necessarily saying government restrictions, but that restrictions should come from somewhere else: that’s an open-source governance model?

 SHIN: Yes. And it should include restrictions in law, and how people deal with it, the way we deal with it. I’m not saying every rule or detection signal should be public. By “open-source governance,” I mean shareable governance playbooks: proportional steps, appeals templates, community norms, and design patterns that small communities can adapt. The goal is portability and adaptability of methods, not making systems easy to game. Because malice is always part of the environment.

 DG: Is there anything else you want to say about your theory of open-source governance or what it means to you?

 SHIN: I noticed there was a question in another interview about fostering transparency in social media, and how to appeal, and that the reason [for a takedown] should be more transparent. The interesting thing is that before our interview today I had just joined a law and technology policy research group, and they’re reading a book called “Law and Technology: A Methodical Approach”. It’s very interesting. Apparently, scientists tend to place emphasis on complexity, which often trips up pragmatic reform efforts, so the recommendations often only call for greater transparency or participation.

 I think this echoes what we were talking about before regarding transparency. I heard a podcast in Taiwan about cybersecurity where they interviewed an outsourced ex-moderator from Meta about how the platform moderates speech. Because most of the information is confidential, the moderator couldn’t say too much, but she told us that every day Meta provided a whole set of lists of things they should ban, and every day it changes. Sometimes it even changes on an hourly basis. And they can never make those lists fully transparent to the world. The reason is that some of those words are there to block scams, and the scale is too big. If they were transparent about how they ban things, the scammers would use it against them. Like, “now you’ve banned this word so I’ll just use another one.” It’s an endless war. So, I think transparency matters, but it shouldn’t be the only thing we think about; we should think about governance as well. And when we talk about governance, we shouldn’t just think about some high authority in government or a law forcing the platform into something we like. We should go back and think about what we can do. We’ve got lots of open-source software now and we can literally build those things ourselves. That’s what I’m trying to say.

 DG: Okay, one last question. This is the last question we ask everybody. Who’s your free speech hero?

 SHIN: This is the question I saw everyone answering, and I honestly struggled with it. Because I’m Taiwanese, and the names that often come up in U.S. free speech conversations aren’t the names I’m familiar with. I’m sorry about this.

 DG: That’s okay, it doesn’t have to be a perfect answer.

 SHIN: If you want a public figure from Taiwan, I think of the journalists and dissidents who pushed for press freedom during Taiwan’s democratization—Nylon (Tēnn Lâm-iông) is one name many Taiwanese recognize.

 If I answer this as truthfully as I can, my hero is my family. My father taught me that integrity is not a slogan. It’s the ability to keep your ethics when it costs you something. My mother is the opposite kind of teacher: she’s relentless in a practical way. She doesn’t easily back down, and she keeps finding room to move even when the room is small. Put together, that’s what free expression means to me. It’s not “I can say anything.” It’s about whether you can continue to think independently and live with integrity through layers of fear, pressure, temptation and coercion, while still moving forward and creating more possibilities for others.

David Greene

EFF to Third Circuit: Electronic Device Searches at the Border Require a Warrant

3 days 14 hours ago

EFF, along with the national ACLU and the ACLU affiliates in Pennsylvania, Delaware, and New Jersey, filed an amicus brief in the U.S. Court of Appeals for the Third Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Roggio, involves a man who had been under ongoing criminal investigation for illegal exports when he returned to the United States from an international trip via JFK airport. Border officers used the opportunity to bypass the Fourth Amendment’s warrant requirement when they seized several of his electronic devices (laptop, tablet, cell phone, and flash drive) and conducted forensic searches of them. As the district court explained, “investigative agents had a case coordination meeting and border search authority was discussed in early January 2017,” before Mr. Roggio traveled internationally in February 2017.

The district court denied Mr. Roggio’s motion to suppress the emails and other data obtained from the warrantless searches of his devices. He was subsequently convicted of illegally exporting gun manufacturing parts to Iraq (a superseding indictment also charged him with torture, of which he was convicted as well).

The number of warrantless device searches at the border, and the significant invasion of privacy they represent, is only increasing. In Fiscal Year 2025, U.S. Customs and Border Protection (CBP) conducted 55,318 device searches, both manual (“basic”) and forensic (“advanced”).

While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data, creating a detailed report of the device owner’s activities and communications. Border officers have forensic tools that can help them gain access to data on a locked or encrypted device in their physical possession. From public reporting, we know that more recent devices (and ones that have had the latest security updates applied) are more resistant to these types of tools, especially if they are turned off, or turned on but not yet unlocked.

The U.S. Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country. But a traveler’s privacy interests in their suitcase and its contents are minimal compared to those in all the personal data on the person’s phone or laptop.

In our amicus brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones, laptops and other electronic devices are, of course, the same as those considered in Riley. Modern devices, over a decade later, contain even more data that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data.

Second, digital contraband (such as child sexual abuse material) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.

Finally, searching devices for evidence of contraband smuggling (for example, the emails here revealing details of the illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution. Therefore, emails or other data found on a digital device searched without a warrant at the border cannot and should not be used as evidence in court.

If the Third Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband.

This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband—that is, some set of already known facts pointing to this possibility—while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

We hope that the Third Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

Sophia Cope

The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People

3 days 16 hours ago

The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of their products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made it clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology to be used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give them unrestricted use of the technology. Anthropic refused, and the DoD retaliated.

There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing—but it’s not a sustainable or reliable solution to build our rights on. Given the government’s loose interpretations of the law, ability to find loopholes to surveil you, and willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Imposing and enforcing such restrictions is properly a role for Congress and the courts, not the private sector. 

The companies know this. When speaking about the specific risk that AI poses to privacy, the CEO of Anthropic Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data has been produced on Americans, locations, personal information, political affiliations, to build profiles, and it’s not possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up or the laws passed by Congress have not caught up.” 

The example he cites here is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of peoples’ devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies that could do this analysis, including Palantir, which does AI-enabled analysis of huge amounts of data, the concerns are incredibly well founded. 

But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy—or at least to refuse to help the government violate it.

Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government’s use of their data, and among adults who have heard of AI, 70% have little to no trust in how companies use those products), you would think politicians would be leaping over each other to create the best legislation and companies would be promising us the most high-end privacy-protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, having to paddle our own life rafts.

EFF has always fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest upon the whims of CEOs and backroom deals with the surveillance state. 

Matthew Guariglia

EFF to Supreme Court: Shut Down Unconstitutional Geofence Searches

3 days 22 hours ago
Digital Dragnets Violate Fourth Amendment, Brief Argues

WASHINGTON, D.C. – The Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), the ACLU of Virginia, and the Center on Privacy & Technology at Georgetown Law filed a brief Monday urging the U.S. Supreme Court to rule that invasive geofence warrants are unconstitutional.

The brief argues that geofence warrants—which compel companies to provide information on every electronic device in a given area during a given time period—are the digital version of the exploratory rummaging that the drafters of the Fourth Amendment specifically intended to prevent. 

Unlike typical warrants, geofence warrants do not name a suspect or even target a specific individual or device. Instead, police cast a digital dragnet, demanding location data on every device in a geographic area during a certain time period, regardless of whether the device owner has any connection to the crime under investigation. These searches simultaneously impact the privacy of millions and turn innocent bystanders into suspects, just for being in the wrong place at the wrong time. 

The Supreme Court agreed earlier this year to hear Chatrie v. United States, in which a 2019 geofence warrant compelled Google to search the accounts of all its hundreds of millions of users to see if any one of them was within a radius police drew around a Northern Virginia crime scene. This area amounted to several football fields in size and encompassed numerous homes, businesses, and a church. The amicus brief filed Monday argues that allowing this sweeping power to go unchecked is inconsistent with the basic freedoms of a democratic society. 

"This is not traditional police work, but rather the leveraging of new and powerful technology to claim a novel and formidable power over the people," the brief states. "By their very nature, geofence searches turn innocent bystanders into suspects and leverage even purportedly limited searches into larger dragnets, causing intrusions at a scale far beyond those held unconstitutional in the physical world." 

The brief also cautioned the Court not to authorize future geofence warrants based on the facts of the Chatrie case, which reflect how such searches were conducted in 2019. Since July 2025, mass geofence searches of Google users’ location data have not been possible. However, Google is not the only company collecting location data, nor the only way for police to access mass amounts of data on people with no connection to a crime. All suspicionless searches drag a net through vast swaths of information in hopes of identifying previously unknown suspects—ensnaring innocent bystanders along the way. 

"To courts, to lawmakers, and to tech companies themselves, EFF has repeatedly argued that these high-tech efforts to pull suspects out of thin air cannot be constitutional, even with a warrant," said EFF Surveillance Litigation Director Andrew Crocker. "The Supreme Court should find once and for all that geofence searches are just the kind of impermissible general warrants that the Framers of the Constitution so reviled."

For the brief: https://www.eff.org/document/chatrie-v-united-states-eff-supreme-court-amicus-brief

Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org
Hudson Hongo

EFF to Court: Don’t Make Embedding Illegal

4 days 14 hours ago

Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whoever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if that third party promotes the infringement), but isn’t on the hook under most circumstances.

The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user that links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person that controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video or rewriting an article.
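To make that concrete, here is a minimal sketch of what an embedding page actually contains: a reference to content that lives on, and is served by, someone else’s server. The markup, URLs, and file names below are hypothetical.

```python
# A toy sketch of the server test's factual premise: the embedder
# publishes only a reference; the host's server transmits the bytes.
# All names and URLs below are hypothetical.

EMBED_MARKUP = '<img src="https://photo-host.example/cat.jpg">'

def render_page() -> str:
    # The embedding site's entire contribution is a page containing
    # the reference above; it never stores or serves the image itself.
    return f"<html><body>{EMBED_MARKUP}</body></html>"

# When a visitor's browser renders this page, the browser requests
# cat.jpg directly from photo-host.example. If that server swaps the
# file tomorrow, the page shows the new content with no action by,
# or notice to, the person who embedded it.
print(render_page())
```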

But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.

The Court should decline, or risk destabilizing fundamental, and useful, online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.

Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If they are correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.

Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.

Related Cases: Emmerich Newspapers v. Particle Media
Corynne McSherry

National Book Tour for Cindy Cohn’s Memoir, ‘Privacy’s Defender’

4 days 22 hours ago
MIT Press Publishes EFF Executive Director’s Book As She Prepares to Depart Organization After 25 Years

SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will launch her memoir, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press, March 10), with events in San Francisco and Berkeley before embarking on a national book tour.

In Privacy’s Defender, Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.  

The book will be Cohn’s swansong at EFF as she’s stepping down as executive director later this year after 25 years with the organization. And there’s no timelier topic: Everyone should be concerned about privacy right now, as the federal government consolidates and weaponizes data, companies track our every click, and law enforcement from local police to ICE keep tabs on all of us, everywhere we go, every day. 

The Privacy’s Defender tour will begin with a free event at San Francisco’s famed City Lights Bookstore (261 Columbus Ave., San Francisco, CA 94133), moderated by bestselling author and EFF Special Advisor Cory Doctorow, at 7 p.m. PT on Tuesday, March 10.  

Then EFF will host a launch party at Berkeley’s Ciel Creative Space (940 Parker St., Berkeley, CA 94710) moderated by bestselling author Annalee Newitz at 7 p.m. PT on Thursday, March 12; tickets cost $12.50-$20. 

The book tour will also include events in Portland, OR; Seattle; Denver; Cambridge, MA; Ann Arbor, MI; and Iowa City, IA. Later events are being planned in New York City and Washington, D.C., as well as a May 13 event at Commonwealth Club World Affairs in San Francisco. 

Proceeds from sales of the book benefit EFF. 

“These beautifully written stories show why the fight for privacy is worth having and reveal all that Cindy Cohn and EFF have done to establish the modern privacy doctrine as the essential core of a free society.” -- Lawrence Lessig, Harvard University; author of How to Steal a Presidential Election 

“Cindy Cohn gives readers a first-person window into some of the pivotal legal disputes of the digital era and reminds us that action and activism are crucial to preserving Americans’ freedom.” -- U.S. Sen. Ron Wyden, D-OR, author of It Takes Chutzpah: How to Fight Fearlessly for Progressive Change 

“Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions.” -- Edward Snowden, whistleblower; author of Permanent Record 

For the San Francisco event: https://citylights.com/events/cindy-cohn-launch-party-for-privacys-defender/ 

For the Berkeley event: https://www.eff.org/event/privacys-defender-book-launch-party  

For more on Privacy’s Defender and the book tour: https://www.eff.org/Privacys-Defender 

Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org
Josh Richman

Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data

1 week 1 day ago

In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.

The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.

The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.

In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against which these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.

It is rare for appellate courts to call into question any search warrants. It’s even rarer for them to deny qualified immunity defenses. The Tenth Circuit’s decision should be celebrated as a big win for protesters and anyone concerned about police immunity for violating people’s constitutional rights. The case is now remanded back to the district court to proceed—and hopefully further vindicate the privacy rights we all have in our devices and digital data.

Saira Hussain

☺️ Trust Us With Your Face | EFFector 38.4

1 week 2 days ago

Do you remember the last time you were carded at a bar or restaurant? It was probably such a quick and normal experience that you barely remember it. But have you ever been carded to use the internet? Being required to present your ID to access content online is becoming a growing reality for many. We’re explaining the dangers of age verification laws, and the latest in the fight for privacy and free speech online, in our EFFector newsletter.

For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers Discord's controversial rollout of mandatory age verification, a leaked Meta memo on face-scanning smart glasses, and a Super Bowl surveillance ad that said the quiet part out loud.

Prefer to listen in? In our audio companion, EFF Associate Director of State Affairs Rin Alajaji explains how online age verification hurts free expression for all users. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.4 - ☺️ Trust Us With Your Face

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against mandatory age verification laws when you support EFF today!

Christian Romero

How to Pick Your Password Manager

1 week 2 days ago

Phishing and data breaches are a constant on the internet. The single best defense against both is to use a password manager to generate and automatically fill a unique password for every site. While 1Password has recently raised their prices, and researchers have recently published potential flaws in some implementations, using a password manager is still a critical investment in keeping yourself safe on the internet. There are free options, and even ones built into your operating system or browser. We can help you choose.

Password managers protect you from phishing by memorizing the connection between a password and a website and, if you use the browser integration, filling each password only on the website it belongs to. They protect you from data breaches by making it feasible to use a long, random, unique password on each site. When bad actors get their hands on a data breach that includes email addresses and password data, they will typically try to crack those passwords, and then attempt to log in on dozens of different websites with the email address/password combinations from the breach. If you use the same password everywhere, this can turn one site’s data breach into a personal disaster, as many of your accounts get compromised at once.
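Here is a rough sketch of that anti-phishing check (not any particular manager’s implementation, and the vault contents are made up): autofill compares the current page’s origin to the origin stored with the credential, and offers nothing on a mismatch.

```python
# Sketch of origin-matched autofill: a credential saved for one site
# is never offered on a look-alike site. Vault data is hypothetical.
from urllib.parse import urlsplit

VAULT = {"https://example.com": ("alice", "Xk3#long-random-unique-pw")}

def credentials_for(page_url: str):
    parts = urlsplit(page_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    # A human might not notice "examp1e.com" or "example.com.evil.net",
    # but neither string equals the stored origin, so nothing is filled.
    return VAULT.get(origin)

print(credentials_for("https://example.com/login"))           # filled
print(credentials_for("https://example.com.evil.net/login"))  # None
```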

In recent years, the built-in password managers in browsers and operating systems have come a long way but still stumble on cross-platform support. Within the Apple ecosystem, you can use iCloud Keychain, with support for generating passwords, autofill in Safari, and end-to-end encrypted synchronization, so long as you don’t need access to your passwords in Google Chrome or Android (Windows is supported, though). Within the Google ecosystem, you can use Google Password Manager, which also supports password generation, autofill, and sync. Crucially, though, Google Password Manager does not end-to-end encrypt credentials unless you manually enable on-device encryption. Firefox and Microsoft also offer password managers. All of these platform-based options are free, and may already be on your devices. But they tend to lock you into a single-vendor world.

There are also a variety of third-party password managers: some paid, some free, and some open source. Most of these have the advantage of letting you sync your passwords across a wide variety of devices, operating systems, and browsers. Here are four key things to look out for. First, when synchronizing between devices, your passwords should be encrypted end-to-end using a password that only you know (a “master” or “primary” password). Second, support for autofill can reduce the chance that you’ll get phished. Third, security audits performed by third parties can increase confidence that the software really does what it is designed to do. And finally, of course, random generation of unique passwords is a must.
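As a sketch of that last requirement, here is what random generation can look like using Python’s standard-library secrets module, a cryptographically secure source of randomness; real managers do something similar with their own alphabets and length defaults.

```python
# Sketch of unique random password generation with a CSPRNG.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    # secrets.choice draws from the OS's secure random source, so the
    # result can't be predicted or reproduced the way random.choice can.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One password per site: a breach at one site exposes only that password.
print(generate_password())
```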

Don’t let uncertainty or price increases dissuade you from using a password manager. There’s a good choice for everyone, and using one can make your online life a lot safer. Want more help choosing? Check out our Surveillance Self-Defense guide.

Jacob Hoffman-Andrews

Tech Companies Shouldn’t Be Bullied Into Doing Surveillance

1 week 3 days ago

The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”

In 2025, Anthropic reportedly became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as the constitution of their LLM, Claude, here.

Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.  

Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons. 

Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.

Matthew Guariglia

EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects

2 weeks 1 day ago

We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.

LLMs excel at producing code that looks mostly human-generated, but that code often contains underlying bugs that can be replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs make it easy for well-intentioned people to submit code that may suffer from hallucination, omission, exaggeration, or misrepresentation.

It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not outright ban LLMs, as their use has become so pervasive that a blanket ban is impractical to enforce.

Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems. These include code reviews turning into code refactors for our maintainers when the contributor doesn’t understand the code they submitted, and the sheer scale of AI-generated contributions that are only marginally useful or potentially unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.

EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth mentioning that these tools raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of tech companies’ harmful practices that led us to this point. LLM-generated code isn’t written on a clean slate, but born out of a climate of companies speedrunning their profits over people. We are once again in the “just trust us” territory of Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.

Samantha Baldwin

EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea

2 weeks 3 days ago

Update, February 25, 2026: In response to widespread pushback, Wisconsin lawmakers have removed the provision banning VPN services from S.B. 130 / A.B. 105. The bill now awaits Governor Tony Evers’ signature. While the removal of the VPN provision is a positive step, EFF continues to oppose the bill. Advocates and residents across Wisconsin are urged to maintain pressure and encourage Governor Evers to veto the bill.

Wisconsin’s S.B. 130 / A.B. 105 is a spectacularly bad idea.

It’s an age-verification bill that effectively bans VPN access to certain websites for Wisconsinites and censors lawful speech. We wrote about it last November in our blog “Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing,” but since then, the bill has passed the State Assembly and is scheduled for a vote in the State Senate tomorrow.

In light of this, EFF sent a letter to the entire Wisconsin Legislature urging lawmakers to reject this dangerous bill.

You can read the full letter here.

The short version? This bill both requires invasive age verification for websites that host content lawmakers might deem “sexual” and requires that those sites block any user that connects via a Virtual Private Network (VPN). VPNs are a basic cybersecurity tool used by businesses, universities, journalists, veterans, abuse survivors, and ordinary people who simply don’t want to broadcast their location to every website they visit.

As we lay out in the letter, Wisconsin’s mandate is technically unworkable. Websites cannot reliably determine whether a VPN user is in Wisconsin, a different state, or a different country. So, to avoid liability, websites face an unfortunate choice: over-block IP addresses commonly associated with commercial VPNs, block all Wisconsin users’ access, or impose restrictions nationwide. 
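To illustrate the point (with made-up addresses from reserved documentation ranges, not any real geolocation service): a website only ever sees the connecting IP address, and for a VPN user that address belongs to the exit server, wherever it sits.

```python
# Toy illustration of why IP geolocation can't enforce a state-level
# rule against VPN users. Addresses and lookups are hypothetical.
GEO_DB = {
    "203.0.113.7": "Wisconsin",   # a residential broadband address
    "198.51.100.9": "Germany",    # a commercial VPN exit server
}

def apparent_location(client_ip: str) -> str:
    # The site sees only the source IP of the connection. For a VPN
    # user, that is the exit node, wherever it happens to be.
    return GEO_DB.get(client_ip, "unknown")

# A Milwaukee resident tunneling through a German exit appears to be
# in Germany; an out-of-state visitor routed through a Midwest exit
# could appear to be in Wisconsin. The site cannot tell which is which.
print(apparent_location("198.51.100.9"))  # "Germany"
```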

The bill also creates a privacy nightmare. It pushes websites to collect sensitive personal data (e.g., government IDs, financial information, biometric identifiers) just to access lawful speech. At the same time, it broadens the definition of material deemed “harmful to minors” far beyond the narrow categories courts have historically allowed states to regulate (namely, explicit adult sexual materials), sweeping in material that merely describes sex or depicts human anatomy. This approach chills lawful speech and exposes websites to vague and unpredictable enforcement. That combination—mass data collection plus vague, expansive speech restrictions—is a recipe for over-censorship, data breaches, and constitutional overreach.

If you live in Wisconsin, now is the time for you to contact your State Senator and urge them to vote NO on S.B. 130 / A.B. 105. Tell them protecting young people online should not mean undermining cybersecurity, chilling lawful speech, and forcing residents to hand over their IDs just to browse the internet.

As we said last time: Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.

Rindala Alajaji

San Jose Can Protect Immigrants by Ending Flock Surveillance System

2 weeks 3 days ago

(This appeared as an op-ed published February 12, 2026 in the San Jose Spotlight, written by Huy Tran (SIREN), Jeffrey Wang (CAIR-SFBA), and Jennifer Pinsof.)

As ICE and other federal agencies continue their assault on civil liberties, local leaders are stepping up to protect their communities. This includes pushing back against automated license plate readers, or ALPRs, which are tools of mass surveillance that can be weaponized against immigrants, political dissidents and other targets.

In recent weeks, Mountain View, Los Altos Hills, Santa Cruz, East Palo Alto and Santa Clara County have begun reconsidering their ALPR programs. San Jose should join them. This dangerous technology poses an unacceptable risk to the safety of immigrants and other vulnerable populations.

ALPRs are marketed to promote public safety. But their utility is debatable and they come with significant drawbacks. They don’t just track “criminals.” They track everyone, all the time. Your vehicle’s movements can reveal where you work, worship and obtain medical care. ALPR vendors like Flock Safety put the location information of millions of drivers into databases, allowing anyone with access to instantly reconstruct the public’s movements.

But “anyone with access” is far broader than just local police. Some California law enforcement agencies have used ALPR networks to run searches related to immigration enforcement. In other situations, purported issues with the system’s software have enabled federal agencies to directly access California ALPR data. This is despite the promises of ALPR vendors and clear legal prohibitions.

Communities are saying enough is enough. Just last week, police in Mountain View decided to turn off all of the city’s Flock cameras, following revelations that federal and other unauthorized agencies had accessed their network. The cameras will remain inactive until the City Council provides further direction.

Other localities have shut off the cameras for good. In January, Los Altos Hills terminated its contract with Flock following concerns about ICE. Santa Cruz severed relations with Flock, citing rising tensions with ICE. Most recently, East Palo Alto and Santa Clara County are reconsidering whether to continue their relationships with Flock, given heightened concern for the safety of immigrant communities.

California law prohibits local police from disclosing ALPR data to out-of-state or federal agencies. But at least 75 California police agencies were sharing these records out-of-state as recently as 2023. Just last year, San Francisco police allowed out-of-state agencies access to their data, and 19 of those searches were related to ICE.

Even without direct access, ICE can exploit local ALPR systems. One investigation found more than 4,000 cases where police had made searches on behalf of federal law enforcement, including for immigration investigations.

Compounding the risk, law enforcement routinely searches these networks without first obtaining a warrant. In San Jose, police aren’t required to have any suspicion of wrongdoing before searching ALPR databases, which contain a year’s worth of data representing hundreds of millions of records. In a little over a year, San Jose police logged more than 261,000 ALPR searches, or nearly 700 searches a day, all without a warrant.

Two nonprofit organizations, SIREN and CAIR California, represented by Electronic Frontier Foundation and the ACLU of Northern California, are currently suing to stop San Jose’s warrantless searches of ALPR data. But this is only the first step. A better solution is to simply turn these cameras off.

San Jose cannot afford delay. Each day these cameras remain active, they collect sensitive location data that can be misused to target immigrant families and violate fundamental freedoms. It is a risk materializing across California. City leaders must act now to shut down ALPR systems and make clear that public safety will not come at the expense of privacy, human dignity or community trust.

Related Cases: SIREN and CAIR-CA v. San Jose
Jennifer Pinsof

New Report Helps Journalists Dig Deeper Into Police Surveillance Technology

2 weeks 3 days ago
Report from EFF, Center for Just Journalism, and IPVM Helps Cut Through Sales Hype

SAN FRANCISCO — A new report released today offers journalists tips on cutting through the sales hype about police surveillance technology and reporting accurately on costs, benefits, privacy, and accountability as these invasive and often ineffective tools come to communities across the nation. 

The “Selling Safety” report is a joint project of the Electronic Frontier Foundation (EFF), the Center for Just Journalism (CJJ), and IPVM.

Police technology is often sold as a silver bullet: a way to modernize departments, make communities safer, and eliminate human bias from policing with algorithmic objectivity. Behind the slick marketing is a sprawling, under-scrutinized industry that relies on manufacturing the appearance of effectiveness, not measuring it. The cost of blindly deferring to advertising can be high in tax dollars, privacy, and civil liberties. 

“Selling Safety” helps journalists see through the spin. It breaks down how policing technology companies market their tools, and how those sales claims — which are often misleading — get recycled into media coverage. It offers tools for asking better questions, understanding incentives, and finding local accountability stories. 

“The industry that provides technology to law enforcement is one of the most unregulated, unexamined, and consequential in the United States,” said EFF Senior Policy Analyst Matthew Guariglia. “Most Americans would rightfully be horrified to know how many decisions about policing are made: not by public employees, but by multi-billion-dollar surveillance tech companies who have an insatiable profit motive to market their technology as the silver bullet that will stop crime. Lawmakers often are too eager to seem ‘tough on crime’ and journalists too often see an easy story in publishing law enforcement press releases about new technology. This report offers a glimpse into how the police-tech sausage gets made so reporters and lawmakers can recognize the tactics of glossy marketing pitches, manufactured effectiveness numbers, and chumminess between companies and police.” 

“Surveillance and other police technologies are spreading faster than public understanding or oversight, leaving journalists to do critical accountability work in real time. We hope this report helps make that work easier,” said Hannah Riley Fernandez, CJJ’s Director of Programming. 

"The surveillance technology industry has a documented pattern of making unsubstantiated claims about technology,” said Conor Healy, IPVM's Director of Government Research. “Marketing is not a substitute for evidence. Journalists who go beyond press releases to critically examine vendor claims will often find solutions are not as magical as they may seem. In doing so, they perform essential accountability work that protects both taxpayer dollars and civil liberties." 

EFF also maintains resources for understanding various police technologies and mapping those technologies in communities across the United States. 

For the “Selling Safety” report:  https://www.eff.org/document/selling-safety-journalists-guide-covering-police-technology

For EFF’s Street-Level Surveillance hub: https://sls.eff.org/ 

For EFF’s Atlas of Surveillance: https://www.atlasofsurveillance.org/ 

Contact: Beryl Lipton, Senior Investigative Researcher, beryl@eff.org
Josh Richman

Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans

3 weeks ago

The New York Times reported that Meta is considering adding face recognition technology to its smart glasses. According to an internal Meta document, the company may launch the product “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” 

This is a bad idea that Meta should abandon. If adopted and released to the public, it would violate the privacy rights of millions of people and cost the company billions of dollars in legal battles.   

Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination. Adding this technology to glasses on the street also raises safety concerns.  

 This kind of face recognition feature would require the company to collect a faceprint from every person who steps into view of the camera-equipped glasses to find a match. Meta cannot possibly obtain consent from everyone—especially bystanders who are not Meta users.  

Dozens of state laws consider biometric information to be sensitive and require companies to implement strict protections to collect and process it, including affirmative consent.  

Meta Should Know the Privacy and Legal Risks  

Meta should already know the privacy risks of face recognition technology, after abandoning related technology and paying nearly $7 billion in settlements a few years ago.  

In November 2021, Meta announced that it would shut down its tool that scanned the face of every person in photos posted on the platform. At the time, Meta also announced that it would delete more than a billion face templates. 

Two years before that, in July 2019, Facebook settled a sweeping privacy investigation with the Federal Trade Commission for $5 billion. This included allegations that Facebook’s face recognition settings were confusing and deceptive. At the time, the company agreed to obtain consent before running face recognition on users in the future.   

In March 2021, the company agreed to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law. 

And most recently, in July 2024, Meta agreed to pay $1.4 billion to settle claims that its defunct face recognition system violated Texas law.  

 Privacy Advocates Will Continue to Focus Our Resources on Meta  

 Meta’s conclusion that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.  

Now more than ever, people have seen the real-world risk of invasive technology. The public has recoiled at masked immigration agents roving cities with phones equipped with a face recognition app called Mobile Fortify. And Amazon Ring just experienced a huge backlash when people realized that a feature marketed for finding lost dogs could one day be repurposed for mass biometric surveillance.  

The public will continue to resist these privacy invasive features. And EFF, other civil liberties groups, and plaintiffs’ attorneys will be here to help. We urge privacy regulators and attorneys general to step up to investigate as well.  

Mario Trujillo

Discord Voluntarily Pushes Mandatory Age Verification Despite Recent Data Breach

3 weeks 1 day ago

Update February 25, 2026: Discord announced yesterday that it will delay the global rollout of its age verification system to the “second half of 2026,” instead of March. The company also announced stricter requirements for partners offering facial age estimation, including that the process must run entirely on-device. Discord said one of its initial partners, Persona, “did not meet that bar.”

Discord has begun rolling out mandatory age verification and the internet is, understandably, freaking out.

At EFF, we’ve been raising the alarm about age verification mandates for years. In December, we launched our Age Verification Resource Hub to push back against laws and platform policies that require users to hand over sensitive personal information just to access basic online services. At the time, age gates were largely enforced in places where they were mandated by law. Now they’re landing on platforms and in jurisdictions where they’re not required.

Beginning in early March, users whom Discord either (a) estimates to be under 18 or (b) doesn’t have enough information about may find themselves locked into a “teen-appropriate experience.” That means content filters, age gates, restrictions on direct messages and friend requests, and the inability to speak in “Stage channels,” the large-audience audio spaces that power many community events. Discord says most adults may be sorted automatically through a new “age inference” system that relies on account tenure, device and activity data, and broader platform patterns. Those whose age can’t be inferred for lack of information, or who are estimated not to be adults, will be asked to scan their face or upload a government ID through a third-party vendor if they want to avoid the default teen account restrictions.

We’ve written extensively about why age verification mandates are a censorship and surveillance nightmare. Discord’s shift only reinforces those concerns. Here’s why:

The 2025 Breach and What's Changed Since

Discord literally won our 2025 “We Still Told You So” Breachies Award. Last year, attackers accessed roughly 70,000 users’ government IDs, selfies, and other sensitive information after compromising Discord’s third-party customer support system.

To be clear: Discord is no longer using that system, which involved routing ID uploads through its general ticketing system for age verification. It now uses dedicated age verification vendors (k-ID globally and Persona for some users in the United Kingdom).

That’s an improvement. But it doesn’t eliminate the underlying potential for data breaches and other harms. Discord says that it will delete records of any user-uploaded government IDs, and that any facial scans will never leave users’ devices. But platforms are closed-source, audits are limited, and history shows that data (especially this ultra-valuable identity data) will leak—whether through hacks, misconfigurations, or retention mistakes. Users are being asked to simply trust that this time will be different.

Age Verification and Anonymous Speech

For decades, we’ve taught young people a simple rule: don’t share personal information with strangers online.

Age verification complicates that advice. Suddenly, some Discord users will be asked to submit a government ID or facial scan to access certain features if the age-inference technology fails. Discord has said on its blog that it will not associate a user’s ID with their account (only using that information to confirm their age) and that identifying documents won’t be retained. We take those commitments seriously. However, users have little independent visibility into how those safeguards operate in practice or whether they are sufficient to prevent identification.

Even if Discord can technically separate IDs from accounts, many users are understandably skeptical, especially after the platform’s recent breach involving age-verification data. For people who rely on pseudonymity, being required to upload a face scan or government ID at all can feel like crossing a line.

Many people rely on anonymity to speak freely. LGBTQ+ youth, survivors of abuse, political dissidents, and countless others use aliases to explore identity, find support, and build community safely. When identity checks become a condition of participation, many users will simply opt out. The chilling effect isn’t only about whether an ID is permanently linked to an account; it’s about whether users trust the system enough to participate in the first place. When you’re worried that what you say can be traced back to your government ID, you speak differently—or not at all.

No one should have to choose between accessing online communities and protecting their privacy.

Age Verification Systems Are Not Ready for Prime Time

Discord says it is trying to address privacy concerns by using device-based facial age estimation and separating government IDs from user accounts, retaining only a user’s age rather than their identity documents. This is meant to reduce the risks of collecting and retaining this sensitive data. However, even when privacy safeguards are in place, we are faced with another problem: there is no current technology that is fully privacy-protective, universally accessible, and consistently accurate. Facial age estimation tools are notoriously unreliable, particularly for people of color, trans and nonbinary people, and people with disabilities. Stories of people bypassing these tools have proliferated across the internet. And when the systems get it wrong, users may be forced into appeals processes or required to submit more documentation, such as government-issued IDs. That fallback excludes people whose appearance doesn’t match their documents, as well as the millions of people around the world who don’t have government-issued identity documents at all.
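
On paper, the data-minimization piece of that design looks something like the sketch below: the verifier inspects the document once, and the platform keeps only a yes/no attestation. This is our illustration of the pattern Discord describes, not its (or its vendors’) actual code:

```python
# A minimal sketch of the "retain only the age" pattern: the birth date
# extracted from an ID is used once, and the stored record contains no
# name, photo, document number, or exact birth date. Our illustration,
# not Discord's or its vendors' actual code.

from datetime import date

def age_in_years(birth: date, today: date) -> int:
    years = today.year - birth.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

def attest(account_id: str, birth: date) -> dict:
    # `birth` stands in for whatever a vendor extracts from the ID;
    # the document itself never reaches this record.
    return {"account_id": account_id,
            "is_adult": age_in_years(birth, date.today()) >= 18}

print(attest("user#1234", date(2001, 6, 15)))
# {'account_id': 'user#1234', 'is_adult': True}
```

Even done perfectly, this only addresses retention. Whether the document is actually discarded, and whether only this minimal record survives, depends on code and practices that no one outside the company can inspect, and it does nothing about the accuracy problems described above.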

Even newer approaches (things like age inference, behavior tracking, financial database checks, and digital ID systems) expand the web of data collection and carry their own tradeoffs around access and error. As we mentioned earlier, no current approach is simultaneously privacy-protective, universally accessible, and consistently accurate across all demographics.

That’s the challenge: the technology itself is not fit for the sweeping role platforms are asking it to play.

The Aftermath

Discord reports over 200 million monthly active users and is one of the largest chat platforms for gamers. The video game industry is larger than the movie and music industries combined, and Discord is a near-default option for gamers looking to host communities.

Many communities, including open-source projects, sports teams, fandoms, friend groups, and families, use Discord to stay connected. If communities or individuals are wrongly flagged as minors, or asked to complete the age verification process, they may face a difficult choice: submit to facial scans or ID checks, or accept a more restricted “teen” experience. For those who decline to go through the process, the result can mean reduced functionality, limited communication tools, and the chilling effects that follow. 

Most importantly, Discord did not have to “comply in advance” by requiring age verification for all users, whether or not they live in a jurisdiction that mandates it. Other social media platforms and their trade groups have fought back against more than a dozen age verification laws in the U.S., and Reddit has now taken the legal fight internationally. For a platform with as much market power as Discord, voluntarily imposing age verification is unacceptable. 

So You’ve Hit an Age Gate. Now What?

Discord should reconsider whether expanding identity checks is worth the harm to its communities. But in the meantime, many users are facing age checks today.

That’s why we created our guide, “So You’ve Hit an Age Gate. Now What?” It walks through practical steps to minimize risk, such as:

  • Submit the least amount of sensitive data possible.
  • Ask: What data is collected? Who can access it? How long is it retained?
  • Look for evidence of independent, security-focused audits.
  • Be cautious about background details in selfies or ID photos.

There is unfortunately no perfect option, only tradeoffs. And every user will have their own unique set of safety concerns to consider. Amidst this confusion, our goal is to help keep you informed, so you can make the best choices for you and your community.

In light of the harms imposed by age-verification systems, EFF encourages all services to stop adopting them when they are not mandated by law. And lawmakers around the world who are considering bills that would make Discord’s approach the norm for every platform should take note of this backlash and move away from the idea as well.

If you care about privacy, free expression, and the right to participate online without handing over your identity, now is the time to speak up.

Join us in the fight.

Rindala Alajaji