House Moves Forward With Dangerous Proposal Targeting Nonprofits

4 hours 14 minutes ago

This week, the U.S. House Ways and Means Committee moved forward with a proposal that would allow the Secretary of the Treasury to strip any U.S. nonprofit of its tax-exempt status by unilaterally determining the organization is a “Terrorist Supporting Organization.” This proposal, which places nearly unlimited discretion in the hands of the executive branch to target organizations it disagrees with, poses an existential threat to nonprofits across the U.S. 

This proposal, added to the House’s budget reconciliation bill, is an exact copy of a House-passed bill that EFF and hundreds of nonprofits across the country strongly opposed last fall. Thankfully, the Senate rejected that bill, and we urge the House to do the same when the budget reconciliation bill comes up for a vote on the House floor. 

The goal of this proposal is not to stop the spread of or support for terrorism; the U.S. already has myriad other laws that do that, including existing tax code section 501(p), which allows the government to revoke the tax status of designated “Terrorist Organizations.” Instead, this proposal is designed to inhibit free speech by discouraging nonprofits from working with and advocating on behalf of disadvantaged individuals and groups, like Venezuelans or Palestinians, who may be associated, even completely incidentally, with any group the U.S. deems a terrorist organization. And depending on what future groups this administration decides to label as terrorist organizations, it could also threaten those advocating for racial justice, LGBTQ rights, immigrant communities, climate action, human rights, and other issues opposed by this administration. 

On top of its threats to free speech, the language lacks due process protections for targeted nonprofit organizations. In addition to placing sole authority in the hands of the Treasury Secretary, the bill does not require the Treasury Secretary to disclose the reasons for or evidence supporting a “Terrorist Supporting Organization” designation. This, combined with only providing an after-the-fact administrative or judicial appeals process, would place a nearly insurmountable burden on any nonprofit to prove a negative—that they are not a terrorist supporting organization—instead of placing the burden where it should be, on the government. 

As laid out in a letter led by the ACLU and signed by over 350 diverse nonprofits, this bill would provide the executive branch with: 

“the authority to target its political opponents and use the fear of crippling legal fees, the stigma of the designation, and donors fleeing controversy to stifle dissent and chill speech and advocacy. And while the broadest applications of this authority may not ultimately hold up in court, the potential reputational and financial cost of fending off an investigation and litigating a wrongful designation could functionally mean the end of a targeted nonprofit before it ever has its day in court.” 

Current tax law makes it a crime for the President and other high-level officials to order IRS investigations over policy disagreements. This proposal creates a loophole to this rule that could chill nonprofits for years to come. 

There is no question that nonprofits and educational institutions – along with many other groups and individuals – are under threat from this administration. If passed, future administrations, regardless of party affiliation, could weaponize the powers in this bill against nonprofits of all kinds. We urge the House to vote down this proposal. 

Maddie Daly

The U.S. Copyright Office’s Draft Report on AI Training Errs on Fair Use

20 hours ago

Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.

The Report Bungles Fair Use

Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.

To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.

Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question, after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies. 

Courts Should Reject the Copyright Office’s Fair Use Analysis

The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.   

Courts should accept the Copyright Office’s draft conclusions, however, only if they are persuasive. They are not.   

The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.

The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.

The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may be ultimately used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can’t be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.

The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works.  But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.

Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider overall effects of the use of the models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.

This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love.  This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.

Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.

We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.

The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.

Tori Noble

In Memoriam: John L. Young, Cryptome Co-Founder

1 day 4 hours ago

John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.

John and architect Deborah Natsios, his wife, in 1996 founded Cryptome, an online library which collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”

Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. Cryptome assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free up encryption from government control and undertook litigation, public activism and legislative steps to do so.  Cryptome became required reading for anyone looking for information about that early fight, as well as many others.    

John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks’ alleged internal emails. Transparency was the core of everything John stood for.

John was one of the early, under-recognized heroes of the digital age.

John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome with John its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.

The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.

John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.

John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years.  We will miss him and his unswerving commitment to the public’s right to know.

Cindy Cohn

The Kids Online Safety Act Will Make the Internet Worse for Everyone

1 day 6 hours ago

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA will silence kids and adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

When the safest legal option is to delete a forum, platforms will delete the forum.

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself favors certain viewpoints over others. Plus, liability in KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

TAKE ACTION

TELL CONGRESS: OPPOSE KOSA

Joe Mullin

EFF to California Lawmakers: There’s a Better Way to Help Young People Online

1 day 9 hours ago

We’ve covered a lot of federal and state proposals that badly miss the mark when attempting to grapple with protecting young people’s safety online. These include bills that threaten to cut young people off from vital information, infringe on their First Amendment rights to speak for themselves, subject them (and adults) to invasive and insecure age verification technology, and expose them to danger by sharing personal information with people they may not want to see it.

Several such bills are moving through the California legislature this year, continuing a troubling years-long trend of lawmakers pushing similarly problematic proposals. This week, EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online.

We’re far from the only ones who have issues with this approach. Many of the laws California has passed attempting to address young people’s online safety have been subsequently challenged in court and stopped from going into effect.

Our letter outlines the legal, technical, and policy problems with proposed “solutions” including age verification mandates, age gating, mandatory parental controls, and proposals that will encourage platforms to take down speech that’s even remotely controversial.

There are better paths that don’t hurt young people’s First Amendment rights.

We also note that the current approach completely ignores what we’ve heard from thousands of young people: the online platforms and communities they frequent can be among the safest spaces for them in the physical or digital world. These responses show the relationship between social media and young people’s mental health is far more nuanced than many lawmakers are willing to believe.

While our letter is addressed to California’s Assembly and Senate, they are not the only state lawmakers taking this approach. All lawmakers should listen to the people they’re trying to protect and find ways to help young people without hurting the spaces that are so important to them.

There are better paths that don’t hurt young people’s First Amendment rights and still help protect them against many of the harms that lawmakers have raised. In fact, elements of such approaches, such as data minimization, are already included in some of these otherwise problematic bills. A well-crafted privacy law that empowers everyone—children and adults—to control how their data is collected and used would be a crucial step in curbing many of these problems.

We recognize that many young people face real harms online, that families are grappling with how to deal with them, and that tech companies are not offering much help.

However, many of the California legislature’s proposals—this year, and for several years—miss the root of the problem. We call on lawmakers to work with us to enact better solutions.

Hayley Tsukayama

Keeping People Safe Online – Fundamental Rights Protective Alternatives to Age Checks

1 day 16 hours ago

This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from the decades of experience we already have thinking through risks that can stem from content online. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what it is exactly we are trying to address, and whether the proposed solution is able to solve the problem.

Analyzing, defining, and mitigating risks is a helpful approach in this regard, as it allows us to take a holistic look at possible risks: how likely a risk is to materialize, how severe it would be, and how it may affect different groups of people very differently. 

In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:

  • Content risks: This refers to the negative implications from the exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Conduct risks involve behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: This includes potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material. 

Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.

Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification depending on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool in this regard as children routinely do not have access to the kind of documentation allowing them to prove their age. Adults with bad intentions, on the other hand, are much more likely to be able to circumvent any measures put in place to keep them out.

Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Put differently: whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online. 

Finally, mitigating risks related to content deemed inappropriate is often thought of as shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people, and the ways in which it can lead to harm. This is complicated by the fact that although arguments in favour of age checks claim that the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online. 

But it’s clear that banning all access to certain information for an entire age cohort interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Assuming that all age check mandates can and will be circumvented, they seem to do little in the way of protecting children but rather undermine their fundamental rights to privacy, freedom of expression and access to information crucial for their development. 

At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex, societal challenges: Take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms. 

Taking a Holistic Approach to Risk Mitigation 

In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach. 

Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks faced by young people using their services. Platforms may also come up with their own policies on how to moderate legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at online platforms’ disposal, but it is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.

To counterbalance potential negative implications on users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: Platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Additionally, users have the right to appeal content moderation decisions and can ask platforms to review content moderation decisions they disagree with. Users can also seek resolution through out-of-court dispute settlement bodies, at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports, and give researchers and non-profits access to data to study the impacts of online platforms on society. 

Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.

The DSA also imposes obligations on the largest platforms and search engines – so-called Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs), which have more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA’s systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms and their functionalities.

However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well-advised to use risk assessments responsibly for their regular product and policy assessments when mitigating risks stemming from content, design choices or features, like recommender systems, ways of engaging with content and other users, or online ads. Especially when it comes to possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA. 

The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates. 

Strengthening Users’ Choice 

Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them. 

Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: Allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to alter their children’s online experiences according to their needs. 

The DSA takes a first helpful step in this direction by mandating that online platforms give users transparency about the main parameters used to recommend content to users, and to allow users to easily choose between different recommendation systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling users, thereby giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.

Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain. 

A Privacy First Approach to Addressing Online Harms 

While rights-respecting content moderation and tools to strengthen parents’ and children’s self-determination online are part of the answer, we have long advocated for a privacy-focused approach to fighting online harms. 

We follow this approach for two reasons: On the one hand, privacy risks are complex and young people cannot be expected to predict risks that may materialize in the future. On the other hand, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data. 

Online services collect enormous amounts of personal data and personalize or target their services – the ads they display and the content they recommend – based on that data. While the systems that target and display ads and curate online content are distinct, both are based on the surveillance and profiling of users. In addition to allowing users to choose a recommender system, settings for all users should by default turn off recommender systems based on behavioral data. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections. 

Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent for the processing of their personal data. This data is used by ad tech companies and data brokers to profile users to draw inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used by ad tech companies to target advertisements, including for children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are evils that come with using the web, thereby normalizing being tracked, profiled, and surveilled. 

This is why we have long advocated for a ban of online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. Banning behavioral advertising will be the most effective path to disincentivize the collection and processing of personal data and end the surveillance of all users, including children, online. 

Similarly, pay-for-privacy schemes should be banned, and we welcome the recent decision by the European Commission to fine Meta for breaching the Digital Markets Act by offering its users a binary choice between paying for privacy and having their personal data used for ad targeting. Especially in the face of recent political pressure from the Trump administration to not enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. And especially vulnerable users like children should not be confronted with the choice between paying extra (something that many children will not be able to do) and being surveilled.

Svea Windwehr

Stopping States From Passing AI Laws for the Next Decade Is a Terrible Idea

2 days 1 hour ago

This week, the U.S. House Energy and Commerce Committee moved forward with a proposal in its budget reconciliation bill to impose a ten-year preemption of state AI regulation—essentially saying only Congress, not state legislatures, can place safeguards on AI for the next decade.

We strongly oppose this. We’ve talked before about why federal preemption of stronger state privacy laws hurts everyone. Many of the same arguments apply here. For one, this would override existing state laws enacted to mitigate emerging harms from AI use. It would also keep states, which have been more responsive on AI regulatory issues, from reacting to emerging problems.

Finally, it risks freezing any regulation on the issue for the next decade—a considerable problem given the pace at which companies are developing the technology. Congress does not react quickly and, particularly when addressing harms from emerging technologies, has been far slower to act than states. Or, as a number of state lawmakers who are leading on tech policy issues from across the country said in a recent joint open letter, “If Washington wants to pass a comprehensive privacy or AI law with teeth, more power to them, but we all know this is unlikely.”

Even if Congress does nothing on AI for the next ten years, this would still prevent states from stepping into the breach.

Even if Congress does nothing on AI for the next ten years, this would still prevent states from stepping into the breach. Given how different the AI industry looks now from how it looked just three years ago, it’s hard to even conceptualize how different it may look in ten years. State lawmakers must be able to react to emerging issues.

Many state AI proposals struggle to find the right balance between innovation and speech, on the one hand, and consumer protection and equal opportunity, on the other. EFF supports some bills to regulate AI and opposes others. But stopping states from acting at all puts a heavy thumb on the scale in favor of companies.

Stopping states will stop progress. As the big technology companies have done (and continue to do) with privacy legislation, AI companies are currently going all out to slow or roll back legal protections in states.

For example, Colorado passed a broad bill on AI protections last year. While far from perfect, the bill set down basic requirements to give people visibility into how companies use AI to make consequential decisions about them. This year, several AI companies lobbied to delay and weaken the bill. Meanwhile, POLITICO recently reported that this push in Washington, D.C. is in direct response to proposed California rules.

We oppose the AI preemption language in the reconciliation bill and urge Congress not to move forward with this damaging proposal.

Hayley Tsukayama

Montana Becomes First State to Close the Law Enforcement Data Broker Loophole

2 days 1 hour ago

Montana has done something that many states and the United States Congress have debated but failed to do: it has just become the first to close the dreaded, invasive, unconstitutional, but easily fixed “data broker loophole.” This is a very good step in the right direction because right now, across the country, law enforcement routinely purchases information on individuals it would otherwise need a warrant to obtain.

What does that mean? In every state other than Montana, if police want to know where you have been, rather than presenting evidence and sending a warrant signed by a judge to a company like Verizon or Google to get your geolocation data for a particular period of time, they only need to buy that same data from data brokers. In other words, all the location data apps on your phone collect—sometimes recording your exact location every few minutes—is just sitting for sale on the open market. And police routinely take that as an opportunity to skirt your Fourth Amendment rights.

Now, with SB 282, Montana has become the first state to close the data broker loophole. This means the government may not use money to get access to information about electronic communications (presumably metadata), the contents of electronic communications, the contents of communications sent by a tracking device, digital information on electronic funds transfers, pseudonymous information, or “sensitive data,” which is defined in Montana as information about a person’s private life, personal associations, religious affiliation, health status, citizenship status, biometric data, and precise geolocation. This does not mean such information is now fully off limits to police. There are other ways for law enforcement in Montana to gain access to sensitive information: they can get a warrant signed by a judge, they can get the consent of the owner to search a digital device, or they can get an “investigative subpoena,” which unfortunately requires far less justification than an actual warrant.

Despite the state’s continued reliance on those lower-threshold subpoenas, SB 282 is not the first time Montana has been ahead of the curve when it comes to passing privacy-protecting legislation. For the better part of a decade, the Big Sky State has seriously limited the use of face recognition, passed consumer privacy protections, added an amendment to its constitution recognizing digital data as something protected from unwarranted searches and seizures, and passed a landmark law protecting against the disclosure or collection of genetic information and DNA. 

SB 282 is similar in approach to the federal Fourth Amendment Is Not for Sale Act, a bill EFF has endorsed that was championed in the Senate by Senator Ron Wyden. The House version, H.R. 4639, passed the House in April 2024 but has not been taken up by the Senate. 

Absent the United States Congress being able to pass important privacy protections into law, states, cities, and towns have taken it upon themselves to pass legislation their residents sorely need in order to protect their civil liberties. Montana, with a population of just over one million people, is showing other states how it’s done. EFF applauds Montana for being the first state to close the data broker loophole and show the country that the Fourth Amendment is not for sale. 

Matthew Guariglia

How Signal, WhatsApp, Apple, and Google Handle Encrypted Chat Backups

1 week ago

Encrypted chat apps like Signal and WhatsApp are among the best ways to keep your digital conversations as private as possible. But if you’re not careful with how those conversations are backed up, you can accidentally undermine your privacy.

When a conversation is properly encrypted end-to-end, it means that the contents of those messages are only viewable by the sender and the recipient. The organization that runs the messaging platform—such as Meta or Signal—does not have access to the contents of the messages. But it does have access to some metadata, like the who, where, and when of a message. Companies have different retention policies around whether they hold onto that information after the message is sent.

What happens after the messages are sent and received is entirely up to the sender and receiver. If you’re having a conversation with someone, you may choose to screenshot that conversation and save that screenshot to your computer’s desktop or phone’s camera roll. You might choose to back up your chat history, either to your personal computer or maybe even to cloud storage (services like Google Drive or iCloud, or to servers run by the application developer).

Those backups do not necessarily have the same encryption protections as the chats themselves, and may make those conversations—which were sent with strong, privacy-protecting end-to-end encryption—readable by whoever runs the cloud storage platform you’re backing up to, which also means that provider could hand them over to law enforcement on request.

With that in mind, let’s take a look at how several of the most popular chat apps handle backups, and what options you may have to strengthen the security of those backups.

How Signal Handles Backups

The official Signal app doesn’t offer any way to back up your messages to a cloud server (some alternate versions of the app may provide this, but we recommend you avoid those, as no alternative offers the same level of security as the official app). Even if you use a device backup, like Apple’s iCloud backup, the contents of Signal messages are not included in those backups.

Instead, Signal supports a manual backup and restore option. Basically, messages are not backed up to any cloud storage, and Signal cannot access them, so the only way to transfer messages from one device to another is manually through a process that Signal details here. If you lose your phone or it breaks, you will likely not be able to transfer your messages.

How WhatsApp Handles Backups

WhatsApp can optionally back up the contents of chats to either a Google Account on Android, or iCloud on iPhone, and you have a choice to back up with or without end-to-end encryption. Here are directions for enabling end-to-end encryption in those backups. When you do so, you’ll need to create a password or save a 64-digit key.
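To make “end-to-end encrypted backup” concrete, here is a minimal Python sketch of the general idea (using the third-party cryptography library; it is an illustration only, not WhatsApp’s or any other vendor’s actual implementation): the backup is encrypted on your device with a key derived from a password only you know, so the storage provider only ever holds ciphertext it cannot read.

```python
# Illustrative sketch only: client-side ("end-to-end") encryption of a backup
# before upload, so the cloud provider never sees readable chat history.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a symmetric encryption key from a user-chosen password."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))


def encrypt_backup(chat_history: bytes, password: str) -> tuple[bytes, bytes]:
    """Encrypt the chat history locally, before it ever leaves the device."""
    salt = os.urandom(16)
    ciphertext = Fernet(derive_key(password, salt)).encrypt(chat_history)
    return salt, ciphertext  # only the salt and ciphertext are uploaded


salt, blob = encrypt_backup(b"example chat history", "correct horse battery staple")
# Whoever runs the storage service sees only `blob`, which is unreadable
# without the password (or the equivalent 64-digit key).
```

Real apps differ in the details (key length, key derivation, and how the key is stored), but the principle is the same: if only you hold the password or key, the backup stays as private as the original messages.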

How Apple’s iMessage Handles Backups

Communication between people with Apple devices using Apple’s iMessage (blue bubbles in the Messages app) is end-to-end encrypted, but the backups of those conversations are not end-to-end encrypted by default. This is a loophole we’ve routinely demanded Apple close.

The good news is that with the release of the Advanced Data Protection feature, you can optionally turn on end-to-end encryption for almost everything stored in iCloud, including those backups (unless you’re in the U.K., where Apple is currently arguing with the government over demands to access data in the cloud, and has pulled the feature for U.K. users).

How Google Messages Handles Backups

Similar to Apple’s iMessage, Google Messages conversations are end-to-end encrypted only with other Google Messages users (you’ll know it’s enabled when there’s a small lock icon next to the send button in a chat).

You can optionally back up Google Messages to a Google Account, and as long as you have a passcode or lock screen password, the backup of the text of those conversations is end-to-end encrypted. A feature to turn on end-to-end encrypted backups directly in the Google Messages app, similar to how WhatsApp handles it, was spotted in beta last year but hasn’t been officially announced or released.

Everyone in the Group Chat Needs to Get Encrypted

Note that even if you take the extra step to turn on end-to-end encryption, everyone else you converse with would have to do the same to protect their own backups. If you have particularly sensitive conversations on apps like WhatsApp or Apple Messages, where those encrypted backups are an option but not the default, you may want to ask those participants to either not back up their chats at all, or turn on end-to-end encrypted backups. 

Ask Yourself: Do I Need Backups Of These Conversations?

Of course, there’s a reason people want to back up their conversations. Maybe you want to keep a record of the first time you messaged your partner, or want to be able to look back on chats with friends and family. There should not be a privacy trade-off for those who want to save those conversations, but unfortunately, as part of your security plan, you do need to weigh whether saving your chats is worth the potential of them being exposed.

But it’s also worth considering that we don’t typically need every conversation we have stored forever. Many chat apps, including WhatsApp and Signal, offer some form of “disappearing messages,” which is a way to delete messages after a certain amount of time. This gets a little tricky with backups in WhatsApp. If you create a backup before a message disappears, it’ll be included in the backup, but deleted when you restore later. Those messages will remain there until you back up again, which may be the next day, or may not be for many days if you don’t connect to Wi-Fi.

You can change these disappearing messaging settings on a per-conversation basis. That means you can choose to set the meme-friendly group chat with your friends to delete after a week, but retain the messages with your kids forever. Google Messages and Apple Messages don’t offer any such feature—but they should, because it’s a simple way to protect our conversations that gives more control over to the people using the app.

End-to-end encrypted chat apps are a wonderful tool for communicating safely and privately, but backups are always going to be a contentious part of how they work. Signal’s approach of not offering cloud storage for backups at all is useful for those who need that level of privacy, but is not going to work for everyone’s needs. Better defaults and end-to-end encrypted backups as the only option when cloud storage is offered would be a step forward, and a much easier solution than going through and asking every one of your contacts how or if they back up their chats.

Thorin Klosowski

The FCC Must Reject Efforts to Lock Up Public Airwaves

1 week 2 days ago

President Trump’s attack on public broadcasting has attracted plenty of deserved attention, but there’s a far more technical, far more insidious policy change in the offing—one that will take away Americans’ right to unencumbered access to our publicly owned airwaves.

The FCC is quietly contemplating a fundamental restructuring of all broadcasting in the United States, via a new DRM-based standard for digital television equipment, enforced by a private “security authority” with control over licensing, encryption, and compliance. This move is confusingly called the “ATSC Transition” (ATSC is the digital TV standard the US switched to in 2009 – the “transition” here is to ATSC 3.0, a new version with built-in DRM).

The “ATSC Transition” is championed by the National Association of Broadcasters, who want to effectively privatize the public airwaves, allowing broadcasters to encrypt over-the-air programming, meaning that you will only be able to receive those encrypted shows if you buy a new TV with built-in DRM keys. It’s a tax on American TV viewers, forcing you to buy a new TV so you can continue to access a public resource you already own. 

This may not strike you as a big deal. Lots of us have given up on broadcast and get all our TV over the internet. But millions of Americans still rely heavily or exclusively on broadcast television for everything from news to education to simple entertainment. Many of these viewers live in rural or tribal areas, and/or are low-income households who can least afford to “upgrade.” Historically, these viewers have been able to rely on access to broadcast because, by law, broadcasters get extremely valuable spectrum licenses in exchange for making their programming available for free to anyone within range of their broadcast antennas. 

If broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them

Adding DRM to over-the-air broadcasts upends this system. The “ATSC Transition” is really a transition from the century-old system of universally accessible programming to a privately controlled web of proprietary technological restrictions. It’s a transition from a system where anyone can come up with innovative new TV hardware to one where a centralized, unaccountable private authority gets a veto right over new devices. 

DRM licensing schemes like this are innovation killers. Prime example: DVDs and DVD players, which have been subject to a similar central authority, and haven’t gotten a single new feature since the DVD player was introduced in 1995. 

DRM is also incompatible with fundamental limits on copyright, like fair use.  Those limits let you do things like record a daytime baseball game and then watch it after dinner, skipping the ads. Broadcasters would like to prevent that and DRM helps them do it. Keep in mind that bypassing or breaking a DRM system’s digital keys—even for lawful purposes like time-shifting, ad-skipping, security research, and so on—risks penalties under Section 1201 of the Digital Millennium Copyright Act. That is, unless you have the time and resources to beg the Copyright Office for an exemption (and, if the exemption is granted, to renew your plea every three years). 

Broadcasters say they need this change to offer viewers new interactive features that will serve the public interest. But if broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them. The most reliable indicator that a new feature is cool and desirable is that people voluntarily install it. If the only way to get someone to use a new feature is to lock up the keys so they can’t turn it off, that’s a clear sign that the feature is not in the public interest. 

That's why EFF joined Public Knowledge, Consumer Reports and others in urging the FCC to reject this terrible, horrible, no good, very bad idea and keep our airwaves free for all of us. We hope the agency listens, and puts the interests of millions of Americans above the private interests of a few powerful media cartels.

Corynne McSherry

Appeals Court Sidesteps The Big Questions on Geofence Warrants

1 week 2 days ago

Another federal appeals court has ruled on controversial geofence warrants—sort of. Last week, the US Court of Appeals for the Fourth Circuit sitting en banc issued a single-sentence opinion affirming the lower court opinion in United States v. Chatrie. The practical outcome of this sentence is clear: the evidence collected from a geofence warrant issued to Google can be used against the defendant in this case. But that is largely where the clarity ends, because the fifteen judges of the Fourth Circuit who heard the en banc appeal agreed on little else. The judges wrote a total of nine separate opinions, no single one of which received a majority of votes. Amid this fracture, the judges essentially deadlocked on important constitutional questions about whether geofence warrants are a Fourth Amendment search. As a result, the new opinion in Chatrie is a missed opportunity for the Fourth Circuit to join the two other appellate courts that have considered the issue in finding geofence warrants unconstitutional.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area and time period both specified by law enforcement. This creates a high risk of suspicion falling on innocent people and can reveal sensitive and private information about where individuals have traveled in the past. Following intense scrutiny from the press and the public, Google announced changes to how it stores location data in late 2023, apparently with the effect of eventually making it impossible for the company to respond to geofence warrants.
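For readers less familiar with how these warrants operate, here is a simplified sketch of the kind of dragnet query a geofence warrant compels; it is an illustration only, not any provider’s actual system. Every device that happened to be inside the requested area during the time window is swept in, however innocent its owner.

```python
# Illustrative sketch of a geofence query: return every device observed inside
# a bounding box during a time window, regardless of whose device it is.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime


def geofence_hits(
    records: list[LocationRecord],
    lat_range: tuple[float, float],
    lon_range: tuple[float, float],
    start: datetime,
    end: datetime,
) -> set[str]:
    """Identify every device seen inside the box during the window."""
    return {
        r.device_id
        for r in records
        if lat_range[0] <= r.lat <= lat_range[1]
        and lon_range[0] <= r.lon <= lon_range[1]
        and start <= r.timestamp <= end
    }
```

Because the filter is defined only by place and time, the results inevitably include bystanders: neighbors, passersby, delivery drivers, and anyone else whose phone reported a location nearby.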

Regardless, numerous criminal cases involving geofence evidence continue to make their way through the courts. The district court decision in Chatrie was one of the first, and it set an important precedent in finding the warrant overbroad and unconstitutional. However, the court allowed the government to use the evidence it obtained because it relied on the warrant in “good faith.” On appeal, a three judge panel of the Fourth Circuit voted 2-1 that the geofence warrant did not constitute a search at all. Later, the appeals court agreed to rehear the case en banc, in front of all active judges in the circuit. (EFF filed amicus briefs at both the panel and en banc stages of the appeal).

The only agreement among the fifteen judges who reheard the case was that the evidence should be allowed in, with at least eight relying on the good faith analysis. Meanwhile, seven judges argued that geofence warrants constitute a Fourth Amendment search in at least some fashion, while exactly seven disagreed. Although that means the appellate court did not rule on the Fourth Amendment implications of geofence warrants, neither did it vacate the lower court’s solid constitutional analysis.

Above all, it remains the case that every appellate court to rule on geofence warrants to date has found serious constitutional defects. As we explain in every brief we file in these cases, reverse warrants like these are the very sort of “general searches” that the authors of the Fourth Amendment sought to prohibit. We’re dedicated to fighting them in courts and legislatures around the country.

Andrew Crocker

Podcast Episode: Digital Autonomy for Bodily Autonomy

1 week 2 days ago

We all leave digital trails as we navigate the internet – records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world – and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? What if we reclaimed the right to go, read, see, do and be what we wish online as we try to do offline? Moreover, what if we saw digital autonomy and bodily autonomy as two sides of the same coin – inseparable?

[Embedded audio player; this embed will serve content from simplecast.com.]

(You can also find this episode on the Internet Archive and on YouTube.)

Kate Bertash wants that digital autonomy for all of us, and she pursues it in many different ways – from teaching abortion providers and activists how to protect themselves online, to helping people stymie the myriad surveillance technologies that watch and follow us in our communities. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline. 

In this episode you’ll learn about:

  • Why it’s important for local communities to collaboratively discuss and decide whether and how much they want to be surveilled
  • How the digital era has blurred the bright line between public and private spaces
  • Why we can’t surveil ourselves to safety
  • How DefCon – America's biggest hacker conference – embodies the ideal that we don’t have to simply accept technology as it’s given to us, but instead can break, tinker with, and rebuild it to meet our needs
  • Why building community helps us move beyond hopelessness to build and disseminate technology that helps protect everyone’s privacy

Kate Bertash works at the intersection of tech, privacy, art, and organizing. She directs the Digital Defense Fund, launched in 2017 to meet the abortion rights and bodily autonomy movements’ increased need for security and technology resources after the 2016 election. This multidisciplinary team of organizers, engineers, designers, abortion fund and practical support volunteers provides digital security evaluations, conducts staff training, maintains a library of go-to resources on reproductive justice and digital privacy, and builds software for abortion access, bodily autonomy, and pro-democracy organizations. Bertash also engages in various multidisciplinary civic tech projects as a project manager, volunteer, activist, and artist; she’s especially interested in ways that artistic methods can interrogate use of AI-driven computer vision, other analytical technologies in surveillance, and related intersections with our civil rights. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

KATE BERTASH: It is me, having my experience, like walking through these spaces, and so much of that privacy, right, should, like, treat me as if my digital autonomy in this space is as important as my bodily autonomy in the world.
I think it's totally possible. I have such amazing optimism for the idea of reclaiming our digital autonomy and understanding that it is like the you that moves through the world in this way, rather than just some like shoddy facsimile or some, like, shadow of you.


CINDY COHN: That’s Kate Bertash speaking about how the world will be better when we recognize that our digital selves and our physical selves are the same, and that reclaiming our digital autonomy is a necessary part of reclaiming our bodily autonomy. And that’s especially true for the people she focuses on helping, people who are seeking reproductive assistance.
I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I’m Jason Kelley – EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN: The idea behind this show is that we're trying to make our digital lives BETTER. Now a big part of our job at EFF is to envision the ways things can go wrong online – and to jump into the action to help when things then DO go wrong.
But this show is about optimism, hope and solutions – we want to share visions of what it looks like when we get it right.

JASON KELLEY: Our guest today is someone who has been tirelessly fighting for the safety and privacy of a very vulnerable group of people for many years – and she does so with compassion, creativity and joy.

CINDY COHN: Kate Bertash is a major force in the world of digital privacy and security. Her work with the Digital Defense Fund started in 2017 as a resource to help serve the digital security needs of people seeking abortions and other reproductive care, and they have expanded their purview to include trans rights, elections integrity, harm reduction and other areas that are crucial to an equitable and functional democratic society. She’s also an artist, with a clothing line called Adversarial Fashion. She designs clothes that do all sorts of deliciously sneaky things – like triggering automatic license plate readers, or injecting junk data into invasive state and corporate monitoring systems. We’re just delighted to have her with us today - welcome Kate!

KATE BERTASH: Thank you so much for having me on. What an introduction.

CINDY COHN: Well, let's start with your day job, privacy and reproductive rights. You've been doing this since long before it became, you know, such a national crisis. Tell us about the Digital Defense Fund.

KATE BERTASH: So after Donald Trump was elected in 2016, I had started running some, what I would call tech volunteering events, the most well known of which is the Abortion Access Hackathon in San Francisco, we had about 700 people apply to come and hundreds of people over the weekend who basically were able to help people with very functional requests.
So we went to different organizations in the area and worked to ensure that they could get help with, you know, turning a spreadsheet into a database or getting help working on open source that they use for case management, or fixing something that was broken in their Salesforce. So, very functional stuff.
And then I was approached after that and asked if I wanted to run this new fund, the Digital Defense Fund. So we spent the first couple years kind of figuring out what the fund was going to do, but sort of organically and learning basically from the people that we serve and the organizations that work at Abortion Access, we now have this model where we can provide hands-on, totally free digital security and privacy support to organizations working in the field.
We provide everything from digital security evaluations to trainings. We do a lot of project management, connecting folks with different kinds of vendor software, community support, a lot of professional development.
And I think probably the best part is we also get to help them fund those improvements. So I know we always talk a lot about how things can improve, but I think kind of seeing it through, uh, and getting to watch people actually, you know, install things and turn them on and learn how to be their own experts has been a really incredible experience. So I can't believe that was eight years ago.

JASON KELLEY: You know a lot has changed in eight years, we had the Dobbs decision, um, that happened under the Biden administration, and now we've got the Dobbs decision, under a Trump administration. I assume that, you know, your work has changed a lot. Like at EFF we've been doing some work, with the Repro Uncensored Coalition tracking the changes in takedowns of abortion-related content. And that is a hard thing to do just for, you know, all the reasons, um, that it, you know, tracking what systems take down is sort of a thing you have to do one at a time and just put the data together. But for you, I mean, out of eight years, you know what's different now than, than maybe not 2017 or, but, but certainly, you know, 2022.

KATE BERTASH: I think this is a really excellent question just because I think it's kind of strange to look backwards and, and know that, uh, abortion access is a really interesting space in that for decades it's been under various kinds of different legal, and I would say ideological attacks as well as, you know, dealing with the kind of common problems of nonprofits, usually funding, often being targets of financial scams and crime as all nonprofits are.
But I think the biggest change has been that, um, a lot of folks who I think sort of could always lean on the idea that abortion would be federally legal, and so your job may be helping people get their abortions or performing abortions or supporting folks with funding to get to their procedures, that that always sort of had this like, color of law that would always kind of back you up or provide for you a certain level of security.
Um, now we kind of don't have that safety, mentally, even to lean on anymore as well as legally. And so a lot of the meat and potatoes of the work that we do, um, often it was always about, you know, ensuring patient privacy. But a lot of times now it's also ensuring that organizations are kind of ready to ask and answer kind of hard questions about how they wanna work. What data is at risk when so much is uncertain in a legal space?
Because I think, you know, I hardly have to tell anybody at EFF that, often, uh, we kind of don't know what, what quote unquote qualifies or what is legal under a particular new law or statute until somebody makes you prove it in court.
And I think a lot of our job at Digital Defense Fund really then crystallized into what we can do to help people sort of tolerate this level of uncertainty and ensure that your tools and that your tactics and your understanding even of the environment that you're operating in at least buoys you and is a source of certainty and safety when the world cannot be.

CINDY COHN: Oh, I think that's great. Do you have a, an example?

KATE BERTASH: Yes, absolutely. I think one of the biggest changes that I've seen in how people tend to work and operate is that, uh, I think you know, this kind of backs into many other topics that I know get discussed on this podcast, which is that when we reach into our pocket for the computer that is on us all day, you know, our phone and we reach out to text people, it's, it's a very accessible way to reach somebody and trying to really wrap around the understanding of the difference between sending an SMS text message to somebody, or responding to a text message asking about services that your organization provides or where to get an abortion or something like that, and the difference of how much information is kept, for example, by your cell phone carrier. Usually, you know, as all of you have taught all of us very well, uh, in plain text as far as we know forever.
Uh, and the absolute huge difference then of getting to really inform people about this sort of static understanding of our environment that we operate in, that we kind of take for granted every day, when we're just like texting our friends or, you know, getting a message about whether something's ready for pickup at the pharmacy. Uh, and then instead we get to help move people onto other tools, encrypted chat like Signal or Wire or whatever meets their needs, helping meet people where they're at on other platforms like WhatsApp, and to really not just like tell people these are the quote unquote correct tools to use, because certainly there are many great, uh, you know, all roads lead to Rome as they say.
But I think getting to improve people's sort of environmental understanding of the ocean that we're all swimming in, uh, that it actually doesn't have to work this way, but that these are also the results of systems that, are motivated by capital and how you make money off of data. And so I think trying to help people to be prepared then to make different decisions when they encounter new questions or new technologies has been a really, really big piece of it. And I love that it gets to start with something as simple as, you know, a safer place to have a sensitive conversation in a text message on your phone in a place like Signal. So, yeah.

CINDY COHN: Yeah, no, I think that makes such sense. And we've seen this, right? I mean, you know, we had a mother in Nebraska who went to jail because she used Facebook to communicate with her daughter, I believe about getting reproductive help. And the shifting to a just a different platform might've changed that story quite a bit because, you know, Facebook had this information and, you know, one of the things that, you know, we know as lawyers is that like when Facebook gets a subpoena or process asking for information about a user, the government doesn't have to tell them what the prosecution is for, right? So that, you know, it could be a bank robber or it could be a person seeking reproductive help. The company is not in a position to really know that. Now we've worked in a couple places to create situations in which if the company does happen to know for some reason they can resist.
But the way that the baseline legal system works means that we can't just, you know, uh, as much as I love to blame Facebook, we can't blame Facebook for this. We need actual tools that help protect people from the jump.

KATE BERTASH: Absolutely, and I think that case is a really important example of, especially I think, how unclear it is from platform to platform, sort of how that information is kept and used.
I think one of the really tragic things about that conversation was that it was a very loving conversation. It was the kind of experience I think that you would want to have between a parent and child to be able to be there for each other. And they were even to talking to each other while they were in the same house. So they were just sharing a conversation from one room to the next. And that's something that I think like, to see the reaction the public had to, that I think, was very affirming to me that, that it was wrong, uh, that, you know, that just the way that this platform is structured somehow then, uh, put this extra amount of risk on this family.
I think, because, you know, we can imagine that it should be a common experience or common right to just have a simple conversation within your household and to know that like that's in a safe place, that that's treated with the sensitivity that it deserves. And I think it helps us to understand that. You know, we are actually, and I mean this in a good sense of the word, entitled to that, and I know that seeing actually, uh, Meta respond to the sort of outcry, there was also a very, like, positive flag for me, because they don't typically respond to, uh, their, their comms department does not typically respond to any individual subpoena that they received, but they felt they had to come out and say why they responded and what the, the problem was there. Um, I think as sort of an indication that this is important.
These different kinds of cases that come up, especially around abortion and criminalization, one of the reasons I think they're so important for us to cover is that, you know, on this podcast or within the spaces that both you and I work with so much about digital security and privacy kind of exists in this very like cloudy, theoretical space.
Like we have these, like, ideals of what we know we want to be true and, and often, you know, when you, when you're talking to folks about like big data, it's literally so large that it can be hard to like pin it down and decide how you feel. But these cases, they provide these concrete examples of how you think the world actually should or should not work.
And it really nails it down and lets people form these very strong emotional responses to it. Um, that's why I'm so grateful that, um, you know, organizations like yours get to help us contextualize that like, yes, there's this like, really personal, uh, and, and tragic story – and it also takes place within this larger conversation around your digital civil liberties.

CINDY COHN: Yeah, so let's flip that around a little bit. I've heard you talk about this before, which is, what would the world look like if our technologies actually stood up for us in these contexts? And, you know, inside the home is a very particular one. And I think because the Fourth Amendment is really clear about the need for privacy, it's one of the places where privacy is actually in our constitution, but I think we're having a broader conversation, like what would the world look like if the tools protected us in these times?

KATE BERTASH: I think especially, it's really interesting to think about the, the problems that I know I've learned so much from your team around the, the problem of what is public and what is private. I think, you know, we always talk about abortion access as a right to privacy and then it suddenly exists in this space where we kind of really haven't decided what that means, and especially anything that's very fuzzy about that.
People are often very familiar with the image of the protestor outside of the abortion clinic. There are many of the same problems kind of wrapped up in the fact that protestors will often film or take photographs or write down the license plates of people who are going in and out of clinics, often for a variety of reasons, but mostly to surveil them in some way that we actually see then from state actors or from corporations; this is done on a very personal basis.
And it has a lot of that same level of damage. And we frequently have had to capitulate that like, well, this is a public space, you know, people can take photos in, in a public area, and that information that is taken about your personal abortion experience is unfortunately, you know, can be used and, and misused in, in whatever way people want.
And then we watched that exact same problem map itself onto the online space. So yeah, very important to me.

CINDY COHN: I think this is one of the fundamental things that the digital era brought us was an increasing recognition that this bright line between public spaces and private spaces isn't working.
And so we need a more, you know, it's not like there aren't public spaces online. I definitely want reporters to be able to, you know, do investigations that give us information about people in power and, and what they're doing. Um, so it's not, it's, it's not either or, right, and I think that's the thing we have to have a more nuanced conversation about what public spaces are really not public in the context. You know, what we think of as bright-line public spaces aren't really rightfully treated as public. And I love your reframing about this as being about us. It's about us and our lives.

KATE BERTASH: Absolutely. Uh, I think one of the larger kind of examples that has come up also as well, uh, is that your experience of seeking out medical care actually then travels into the domain of, of the doctor that you see, and they often use an electronic health records system. And so you have this record of something that I don't think any of these companies were really quite adequately prepared for, for the policy eventuality that they would be holding information that would be an enshrined human right in some states’ constitutions, but a crime in a different state. And you know, you have these products like Epic Everywhere, and they allow access to that same information from a variety of places, including from a state where, you know, that, to that state, it is evidence of a crime to have this in the health record versus just it's, you know, a normal continuity of care in a different state.
And kind of seeing how, you know, we tend to have these sort of debates and understandings and trying to, like you say, examine the nuance and get to the bottom of how we wanna live in these different contexts of policy or in court cases. But then so much of it is held in this corporate space and I think they really are not ready for the fact that they are going to have to take a much more active role, I think, than they even want to, uh, in understanding how that shows up for us.

JASON KELLEY: Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You’re the reason that we exist. You can become a member if you’re not already, for just $25, and for a little more you can get some great, very stylish gear. The more members we have, the more power we have - in statehouses, courthouses, and on the streets. EFF has been fighting for digital rights for decades, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate.
And now back to our conversation with Kate Bertash.
So we've been talking a lot about the skills and wisdom that you've learned during the fight for reproductive rights, but I know a lot of that can be used in other areas as well. And I heard recently that you live in a pretty small rural town, and not all your neighbours share your political views. But you've been building sort of a local movement to fight surveillance there – and I’d love to hear about how you are bringing together different people with different sort of political alignments to come together on this privacy issue.

KATE BERTASH: Yeah, it actually had started so many years ago with Dave Moss, who's on the EFF team and I having a conversation about the license plate surveillance actually at clinics and, and kind of how that's affected by the proliferation of automated license plate reader technology. And I had come up with this, this like line of clothing called Adversarial Fashion, which, uh, injects junk into automated license plate readers.
It was a really fun project. I was really happy to see the public response to it, but as a result, I sort of learned a lot about these systems and kind of became a bit of an activist on the privacy issues around them.
And then suddenly, I now live in a rural community in southwest Washington and I then suddenly found out on Facebook one day that our sheriff's department had purchased Flock automated license plate reader cameras, and just installed them already and just announced it. Like there was no public discussion, no debate, no nothing. There had been debate in neighboring counties where they decided, oh, kind of not for us. You know, where a lot of rural communities, uh, and, and like, I wanna give you a sense of the size. Our county has 12,000 people in it. My town has a thousand people in it. So very tiny, like, you kind of almost wonder why you would even need license plate surveillance when you could just like literally ask almost anybody what's going on with, like, I've seen people before on, on Facebook where they're like, Hey, is this your car? You know, somebody stole it. Come pick it up. It's on our, on our hill.

CINDY COHN: I grew up in a very small town in Iowa and the saying in our town was, you know, you don't need turn signals 'cause everybody knows where you're going.

KATE BERTASH: I love that. See exactly like I did not know that about you, Cindy. I love that. And that was kind of this initiating, uh, event where I was just, I, I'll be honest with you, I totally hit the ceiling. When I found out I was, I was really mad because, you know, you are active on all this stuff outside of, you know, your work and your, you know, I've been all over the country talking about the problems with this technology and the privacy issues that it raises and you know, how tech companies take advantage of communities and here they were taking advantage of my community.
It's like, not in my house! How is it in my house?

JASON KELLEY: Well, when did this happen? When? When did they install these?

KATE BERTASH: Oh my gosh, it had to be a couple of months ago. I mean, it was very, very recently. Yeah, it was super recently, and so I kind of did what I know best, which is that I took everything that I learned, I put it into a presentation to my neighbors. I scheduled a bunch of nights at the different libraries and community centers in my county, and invited everybody to come, and the sheriff and the undersheriff came too.
And the most surprising thing about this was that I think, A, that people showed up. I was actually very pleasantly surprised. I think a lot of people, when they move to rural areas, they do so because, you know, they want to feel freer to be not, you know, watched every day by the state or by corporations, or even by their neighbors, frankly.
And so it was really surprising to me when, this is probably the most politically diverse room I've ever presented to. And definitely people that I think would absolutely not love any of my rest of my politics, but both nights, one hundred percent of the room was in agreement that they did not like these cameras, did not think that they were a good fit for our community, that they don't really appreciate, you know, not being asked.
I think that was kind of the core thing we wanted to get through is that even if you do decide these are a good fit, we should have been asked first. And I got people shaking my hands afterwards who were like, thank you, young lady, for bringing up this important issue.
Um, it's still ongoing. We haven't had all of them. Some of them have been removed, uh, but not all of them. And I think there's a lot closer scrutiny now on like the disclosure page that Flock puts up where you get to see kind of how the data is accessed. Uh, but I think it was like, you know, I've been doing this like privacy and safety work for a while, but it made me realize I still have room to be surprised, and I think that like I was surprised that everybody in my community was very united on privacy. It might be the thing on which we most agree, and that was like so heartwarming in such a way. I really can't wait to keep, keep building on that and using it as a way to connect with people.

CINDY COHN: So I'd like to follow up because we've been working hard to try to figure out how to convince people that you can't surveil yourself to safety, right? This stuff is always promoted as if it's going to make us safe. What stories did you hear that were resonating with people? What was the counter story from, you know, surveillance equals safety.

KATE BERTASH: I think the biggest story that I knew really connected with folks was actually the way in which that data was shared outside of our community. And there was somebody who was sitting in the room who I think had elaborated on that point when she said, I might like you as the sheriff, you know, these are all people who voted for the sheriff. We got to actually have this conversation face to face, which was really quite amazing. And they got to say to the sheriff, I voted for you. I might like you just fine. I might think you would be responsible logging into this stuff, but I don't know all those people who these platforms share this stuff with.
And Flock actually shares your data, unless you specifically request that they turn it off, and I think that was where they were like, you know, I don't trust those people, I don't know those people.
I also don't know your successor. Who's gonna get this? If we give this power to this office, I might not trust the future sheriff as much. And in a small town, like, that personal relationship matters a lot. And I think it was like really helpful to kind of take it out of this, you know, I am obviously very concerned about the ways in which they're, you know, abusive of policing technology and power. I think though, because like so many of these people are people who are your neighbors and you know them, it was so helpful to kind of put it in terms of like, you know, I don't want you to think it's about whether or not I trust your competence personally.
It's about rather what we maybe owe each other. And you know, I wish you had asked me first, and it became a very like, powerful personal experience and a personal narrative. And, and I think even at the end of the night, like by the second night, I think the sheriff's department had really changed their tune a lot.
And I said to them, I was like, this is the longest we've ever gotten to talk to each other. And I think that's a great thing.

CINDY COHN: I think that's really great. And what I love about this is landing, it really, you know, community has come up over and over again in the way that we've talked to different people about what's important about making technology serve people.

KATE BERTASH: Yeah, people make these decisions very emotionally. And I think it was really nice to be able to talk about trust and relationships and communication because so much of the conversation when it's just held online, gets pulled into, I think, everybody in this room's least favorite phrase: If you're not doing anything wrong, why do you care about being surveilled?
And it's just sort of like, well, it's not about whether or not I'm committing a crime. It's about whether or not, you know, we've had a discussion about what we should all know about each other, or like, why don't you just come over and ask me first.
I still want our community to have the ability to get people’s stolen cars back or to like find somebody who is like a, a lost senior adult or, or a child who's been abducted, you know? But these are like problems. Then we get to solve together rather than in this like adversarial manner where everybody's an obstacle to some public good.

JASON KELLEY: One of the things that I think a lot of the people we talk with, but I think you in particular are bringing to this conversation is, I don't know, optimism, joy, creativity.
You're someone who is dealing with some complicated, difficult, often depressing stuff. And you think about how to get people involved in ways that aren't, you know, uh, using the word dystopia, which is a word we use too much at EFF because it's too often becoming true. Cindy, I think, mentioned earlier the adversarial fashion line. I think you've done a lot of work in getting people who aren't necessarily engineers thinking about like data issues clearly.
Tell us a little bit about the adversarial fashion work and also just, you know, how we get more people involved in protecting privacy that aren't necessarily the ones working at Facebook, right?

KATE BERTASH: So one of the most fun things about the adversarial fashion line, uh, was in, in kind of researching how I was gonna do that. The reason I did it is because I actually spent some of my free time designing fabrics, like mostly stuff with little, you know, manatees or cats on them, like silly things for kids.
And so I was like, yeah, it's, it's a surface pattern. I could definitely do that. Seems easy. Uh, and I got to research and find out more about sort of the role that art has in a lot of anti-surveillance movements. There's a lot of really cool anti surveillance art projects. Uh, it has been amazing as I present adversarial fashion, uh, in different places to kinda show off how that works.
So the way that the adversarial fashion line works is that these clothes basically have, you know, these sort of iterations of what kind of look like plates on them. And automated license plate readers are kind of interesting in that they're what I guess a software engineer might term a system with low specificity, which is that they are working on a highway at, you know, 60, 70 miles an hour.
They're ingesting hundreds, sometimes thousands of plates a minute. So they really have to just be generous in what they're willing to ingest. So they, they pull into the vacuum things like picket fences and billboards. And so clothing was kind of trivial, frankly, to get them to pick that up as well.
And what was really nice about the example of, you know, like a shirt that, you know, could be read as a car by some of these systems. And it was very easy to show, especially on some of the open source systems that are the exact same models deployed in surveillance technology that's bought and sold, uh, that, you know, you would really think differently about your plate being seen someplace as sort of something that might implicate you in a crime or determine a pattern of behavior or justify somebody surveilling you further if it can be fooled by a t-shirt.
And you know, much like the example we talked about, uh, with, you know, conversations being held on a place like Facebook, anti surveillance artworks are cool in that they get to help people who feel like they're not technical enough or they don't really understand the underlying pieces of technology to have a concrete example that they can form a really strong reaction to. I know that some of the people who had really thrilled me that they were very excited about were like criminal defense attorneys reached out and asked a bunch of questions.
We have a lot of other people who are artists or designers who are like, how did you learn to use these systems? Did you need to know how to code? And I'm like, no, you can just roll them up on, you know, there's actually a bunch of ALPR apps that are available on, you know, the Apple store or that you can use on your computer, on your phone and test out the things that you've made.
And this actually works for many other systems. So, you know, facial recognition systems, if you wanna play around and come up with really great, you know, clothing or masks or makeup or something, you can actually test it with the facial recognition piece of Instagram or any of these different types of applications.
It's a lot of fun. I love getting to answer people's questions. I love seeing the kind of creative spark that they're like, oh yeah, maybe I am smart enough to understand this, or to try and fool it on my own. Or know that like these systems aren't maybe as complex or smart as I give them credit for.

JASON KELLEY: What I like about this especially is that you are, you know, pointing out that this stuff is actually not that complicated and we've moved into a world where often the kind of digital spaces we live in, the technology we use feels so opaque that people can't even understand how to begin to like modify it, or to understand how it works or how they would build it themselves.
It's something we've been talking about with other people about how there's sort of like a moment where you realize that you can modify the digital world or that you can. You, you know how it works. Was there a moment in your work or in your life, um, where you realized that you could sort of understand that technology was there FOR you, not just there like to be thrust upon you?

KATE BERTASH: You know, it might be a little bit late in my life, but I think when I first got this job and I was like, oh my gosh, what am I going to do to really help kind of break through the many types of like privacy and safety problems that are facing this community, somebody had said, Kate, you should go to Def Con, and I went to Def Con, my very first one, and I was like blown back in my chair.
Defcon is America's largest hacker conference. It takes place every single year in Las Vegas and I think going there, you see, not only are these presentations on things that people have broken, but then there are places called villages that you walk through and people show you how to break systems or why, actually, it should be a lot harder to break this than it is.
Like the voting village. They buy old voting machines off of eBay and then, you know, teach everyone who walks in within, you know, 20 minutes how you can break into a voting machine. And it was just this, like, moment where I realized that you don't have to take technology as it is given to you. We all deserve technology that has our back and, and can't be modified or broken to hurt us.
And you can do that by yourself, sort of like actively tinkering on it. And I think that spirit of irreverence Really carried through to a lot of the work that we do with Digital Defense Fund, where we get people all the time who, like, they come in and they are worried about absolutely everything. It's so hard to decide what bite of the elephant to take first on, you know, improving the safety and privacy for the team and how they work and the patients that they serve.
But then we get to kind of show people some great examples of how actually this isn't quite as complicated as you might think. I'm gonna walk you through sort of the difference of, like, getting to use, like, one text app versus another, or turning on two-factor.
We love tools like have I been pwned because they kind of help shape that understanding. You know, like you think about how a hacker gets a password, it feels so abstract or like technical, and then you realize, oh, actually when somebody breaks these, they buy and sell them, and then somebody just takes old passwords and reuses them.
That seems far more intuitive. I can now understand the ecosystem and the logic that's used behind so much of security and it builds on itself. And I think the thing that I'm most proud of is that we not only have this community of folks that we've worked with to improve their safety that we introduced to personal, you know, professional development opportunities to keep growing that understanding. We also manage an amazing community of technologists who build their own systems.
There's one group called the DC Abortion Fund who built their own case management platform because they were not being served by any of these corporate or enterprise options that charge way too much. They have like, you know, dozens of case managers, so that many seats was never gonna be affordable. And so they just sat down and they, you know, worked with Code For DC and they built it out, hand in hand.
And that is a project that I always point to as like, you know, it took somebody saying to themselves, I deserve better than this, and I can learn from everything I like about, you know, systems that you can buy and sell, but also like our community's gonna build what we need.
And to be supported to do that and have that encouragement is, is one of the reasons that I'm so proud that, um, over these years, the number of sort of self-built and community built software projects and other types of like ways that people deploy more secure technology to each other and teach each other has grown by leaps and bounds.
My job is so different now than what it was eight years ago because people are hungry for it. They know that they are, you know, ready to become their own experts in their communities. And the requests that we get then for, for more train the trainer type of material, or to help equip people to bring this back to their space the way, you know, I brought my ALPR presentation back to my own community. It's great to see that everyone is so much more encouraged, especially in these times when like systems are unstable, nonprofits spin up and down. We all have funding problems that have very little often to do with the demand for those resources, that that's not the end of the story.
So, yeah, I love it. It's been a wonderful journey, seeing how everything has changed from, like you said, that spirit of, of being always worried that things are getting worse, focusing on this dystopia, to seeing sort of, you know, how our own community has expanded its imagination. It's really wonderful.

CINDY COHN: What a joy it is to talk to someone like Kate. She brings this spirit of irreverence that I think is great that she centers on Defcon because that's a community that definitely takes security seriously, but don't take themselves very seriously. So I really, I love that attitude and how important that is, I hear, for building community, building resilience through what are pretty dark times for the community that she fundamentally, you know, works with.

JASON KELLEY: And building that understanding that you have the not just ability, but like the right to work with the technology that is presented to you and to understand it and to take it apart and to rebuild it. All of that is, I think, critical to, you know, building the better internet that we want.
And Kate really shows how just, you know, going to the DEF Con Village can change your whole mind about that sort of thing, and hopefully people who don't have technical skills will recognize that you actually don't necessarily need them to do what she's describing. That's another thing that she said that I really liked, which is this, that, you know, she could show up in a room and talk to 40 people about surveillance and she doesn't have to talk about it at a, you know, technical level really, just saying, Hey, here's how this works. Did you know that? And anyone can do that. You know, you just have to show up.

CINDY COHN: Yeah. And how important these, like hyperlocal conversations are to really getting a handle on combating this idea that we can surveil ourselves to safety. What I really loved about that story, about gathering her community together, including the sheriff, is that, you know, they actually had a real conversation about the impact of what the sheriff was, was, is doing with ALPRs and really were able to be like, you know, look, I want you to be able to catch people who are stealing cars, but also there are these other ramifications and really bringing it down to a human level as one of the ways we get people to kind of stop thinking that we can surveil ourselves to safety, and that technology can just replace the kind of individual community-based conversations we need to have.

JASON KELLEY: Yeah. She really is maybe one of the best people I've ever spoken to at bringing it down to that human level.

CINDY COHN: I think of people like Kate as the connective tissue between the communities that really need technologies that serve them, and the people who either develop those technologies or think about them or advocacy groups like us who are kind of doing the policy level work or the national level or even international level work on this.
We need those, those bridges between the communities that need technologies and the people who really think about it in the kind of broader perspective or develop it and deploy it.

JASON KELLEY: I think the thing that I'm gonna take away from this most is again, just Kate's creativity and the fact that she's so optimistic and this is such a difficult topic and, and we're living in such, you know, easily described as dystopic times. Um, but, uh, she's sort of alive with the idea that it doesn't have to be that way, which is really the, the whole point of the podcast. So she embodied it really well.

CINDY COHN: Yep. And this season we're gonna be really featuring the technologies of freedom, the technologies we need in these particular times.
And Kate is just one example of so many people who are really bright spots here and pointing the way to, you know, how we can fix the internet and build ourselves a better future.

JASON KELLEY: Thanks for joining us for this episode – and this new season! – of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley.

CINDY COHN: And I’m Cindy Cohn.

Josh Richman

No Postal Service Data Sharing to Deport Immigrants

1 week 3 days ago

The law enforcement arm of the U.S. Postal Service (USPS) recently joined a U.S. Department of Homeland Security (DHS) task force geared towards finding and deporting immigrants, according to a report from the Washington Post. Now, immigration officials want two sets of data from the U.S. Postal Inspection Service (USPIS). First, they want access to what the Post describes as the agency’s “broad surveillance systems, including Postal Service online account data, package- and mail-tracking information, credit card data and financial material and IP addresses.” Second, they want “mail covers,” meaning “photographs of the outside of envelopes and packages.”

Both proposals are alarming. The U.S. mail is a vital, constitutionally established system of communication and commerce that should not be distorted into infrastructure for dragnet surveillance. Immigrants have a human right to data privacy. And new systems of surveilling immigrants will inevitably expand to cover all people living in our country.

USPS Surveillance Systems

Mail is a necessary service in our society. Every day, the agency delivers 318 million letters, hosts 7 million visitors to its website, issues 209,000 money orders, and processes 93,000 address changes.

To obtain these necessary services, we often must provide some of our personal data to the USPS. According to the USPS’ Privacy Policy: “The Postal Service collects personal information from you and from your transactions with us.” It states that this can include “your name, email, mailing and/or business address, phone numbers, or other information that identifies you personally.” If you visit the USPS’s website, they “automatically collect and store” your IP address, the date and time of your visit, the pages you visited, and more. Also: “We occasionally collect data about you from financial entities to perform verification services and from commercial sources.”

The USPS should not collect, store, disclose, or use our data except as strictly necessary to provide us the services we request. This is often called “data minimization.” Among other things, in the words of a seminal 1973 report from the U.S. government: “There must be a way for an individual to prevent information about him that was obtained for one purpose from being used or made available for other purposes without [their] consent.” Here, the USPS should not divert customer data, collected for the purpose of customer service, to the new purpose of surveilling immigrants.
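
To illustrate the principle, here is a minimal sketch of data minimization in practice. The service names and required fields are hypothetical examples, not USPS systems; the idea is simply that data never collected or retained cannot later be repurposed.

```python
# Minimal illustration of data minimization. The services and fields are
# hypothetical examples, not USPS systems.

REQUIRED_FIELDS = {
    "package_tracking": {"tracking_number"},
    "change_of_address": {"name", "old_address", "new_address"},
}

def minimize(service: str, submitted: dict) -> dict:
    """Keep only the fields strictly necessary to provide the requested service."""
    needed = REQUIRED_FIELDS[service]
    return {field: value for field, value in submitted.items() if field in needed}

# Everything else (IP address, browsing history, financial data) is dropped,
# so it can never be disclosed or repurposed for some other use later.
print(minimize("package_tracking", {
    "tracking_number": "9400100000000000000000",
    "ip_address": "203.0.113.7",
    "pages_visited": ["/track", "/account"],
}))
```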

The USPS is subject to the federal Privacy Act of 1974, a watershed anti-surveillance statute. As the USPS acknowledges: “the Privacy Act applies when we use your personal information to know who you are and to interact with you.” Among other things, the Act limits how an agency may disclose a person’s records. (Sound familiar? EFF has a Privacy Act lawsuit against DOGE and the Office of Personnel Management.) While the Act only applies to citizens and lawful permanent residents, that will include many people who send mail to or receive mail from other immigrants. If USPS were to assert the “law enforcement” exemption from the Privacy Act’s non-disclosure rule, the agency would need to show (among other things) a written request for “the particular portion desired” of “the record.” It is unclear how dragnet surveillance like that reported by the Washington Post could satisfy this standard.

USPS Mail Covers

From 2015 to 2023, according to another report from the Washington Post, the USPS received more than 60,000 requests for “mail cover” information from federal, state, and local law enforcement. Each request could include days or weeks of information about the cover of mail sent to or from a person or address. The USPS approved 97% of these requests, leading to postal inspectors recording the covers of more than 312,000 letters and packages.

In 2023, a bipartisan group of eight U.S. Senators (led by Sen. Wyden and Sen. Paul) raised the alarm about this mass surveillance program:

While mail covers do not reveal the contents of correspondence, they can reveal deeply personal information about Americans’ political leanings, religious beliefs, or causes they support. Consequently, surveillance of this information does not just threaten Americans’ privacy, but their First Amendment rights to freely associate with political or religious organizations or peacefully assemble without the government watching.

The Senators called on the USPIS to “only conduct mail covers when a federal judge has approved this surveillance,” except in emergencies. We agree that, at minimum, a warrant based on probable cause should be required.

The USPS operates other dragnet surveillance programs. Its Mail Isolation Control and Tracking Program photographs the exterior of all mail, and it has been used for criminal investigations. The USPIS’s Internet Covert Operations Program (iCOP) conducts social media surveillance to identify protest activity. (Sound familiar? EFF has a FOIA lawsuit about iCOP.)

This is just the latest of many recent attacks on the data privacy of immigrants. Now is the time to restrain USPIS’s dragnet surveillance programs—not to massively expand them to snoop on immigrants. If this scheme goes into effect, it is only a matter of time before such USPIS spying is expanded against other vulnerable groups, such as protesters or people crossing state lines for reproductive or gender affirming health care. And then against everyone.

Adam Schwartz

Nominations Open for 2025 EFF Awards!

1 week 3 days ago

Nominations are now open for the 2025 EFF Awards! The nomination window will be open until Friday, May 23rd at 2:00 PM Pacific time. You could nominate the next winner today!

For over thirty years, the Electronic Frontier Foundation has presented awards to key leaders and organizations in the fight for freedom and innovation online. The EFF Awards celebrate the longtime stalwarts working on behalf of technology users, both in the public eye and behind the scenes. Past honorees include visionary activist Aaron Swartz, human rights and security researchers The Citizen Lab, media activist Malkia Devich-Cyril, media group 404 Media, and whistle-blower Chelsea Manning.

The internet is a necessity in modern life and a continually evolving tool for communication, creativity, and human potential. Together we carry—and must always steward—the movement to protect civil liberties and human rights online. Will you help us spotlight some of the latest and most impactful work towards a better digital future?

Remember, nominations close on May 23rd at 2:00 PM Pacific time!

GO TO NOMINATION PAGE

Nominate your favorite digital rights heroes now!

After you nominate your favorite contenders, we hope you will consider joining us on September 10 to celebrate the work of the 2025 winners. If you have any questions or if you'd like to receive updates about the event, please email events@eff.org.

The EFF Awards depend on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the EFF Awards, please visit eff.org/thanks or contact tierney@eff.org for more information.

 

Melissa Srago

Beware the Bundle: Companies Are Banking on Becoming Your Police Department’s Favorite “Public Safety Technology” Vendor

1 week 3 days ago

When your local police department buys one piece of surveillance equipment, you can easily expect that the company that sold it will try to upsell them on additional tools and upgrades. 

At the end of the day, public safety vendors are tech companies, and their representatives are salespeople using all the tricks from the marketing playbook. But these companies aren't just after public money—they also want data. 

And each new bit of data that police collect contributes to a pool of information to which the company can attach other services: storage, data processing, cross-referencing tools, inter-agency networking, and AI analysis. The companies may even want the data to train their own AI model. The landscape of the police tech industry is changing, and companies that once specialized in a single technology (such as hardware products like automated license plate readers (ALPRs) or gunshot detection sensors) have developed new capabilities or bought up other tech companies and law enforcement data brokers—all in service of becoming the corporate giant that serves as a one-stop shop for police surveillance needs.

One of the most alarming trends in policing is that companies are regularly pushing police to buy more than they need. Vendors regularly pressure police departments to lock in the price now for a whole bundle of features and tools in the name of “cost savings,” often claiming that the cost à la carte for any of these tools will be higher than the cost of a package, which they warn will also be priced more expensively in the future. Market analysts have touted the benefits of creating “moats” between these surveillance ecosystems and any possible competitors. By making it harder to switch service providers due to integrated features, these companies can lock their cop customers into multi-year subscriptions and long-term dependence. 

Think your local police are just getting body-worn cameras (BWCs) to help with public trust or ALPRs to aid their hunt for stolen vehicles? Don’t assume that’s the end of it. If there’s already a relationship between a company and a department, that department is much more likely to get access to a free trial of whatever other device or software that company hopes the department will put on its shopping list. 

These vendors also regularly help police departments apply for grants and waivers, and provide other assistance to find funding, so that as soon as there’s money available for a public safety initiative, those funds can make their way directly to their business.

Companies like Axon have been particularly successful at leveraging their relationships and their ability to combine equipment to receive “sole source” designations. Typically, government agencies must conduct a bidding process when buying a new product, be it toilet paper, computers, or vehicles. For a company to be designated a sole-source provider, it is supposed to provide a product that no other vendor can provide. If a company can get this designation, it can essentially eliminate any possible competition for particular government contracts. When Axon is under consideration as a vendor for equipment like BWCs, for which there are multiple possible other providers, it’s not uncommon to see a police department arguing for a sole-source procurement for Axon BWCs based on the company’s ability to directly connect their cameras to the Fusus system, another Axon product.

Here are a few of the big players positioning themselves to collect your movements, analyze your actions, and make you—the taxpayer—bear the cost for the whole bundle of privacy invasions. 

Axon Enterprise's ‘Suite’

Axon expects to have yet another year of $2 billion-plus in revenue in 2025. The company first got its hooks into police departments through the Taser, the electric stun gun. Axon then plunged into the BWC market amidst Obama-era outrage at police brutality and the flood of grant money flowing from the federal government to local police departments for BWCs, which were widely promoted as a police accountability tool. Axon parlayed its relationships with hundreds of police departments, and its capture and storage of growing terabytes of police footage, into a menu of new technological offerings.

In its annual year-end securities filing, Axon told investors it was “building the public safety operating system of the future” through its suite of “cloud-hosted digital evidence management solutions, productivity and real-time operations software, body cameras, in-car cameras, TASER energy devices, robotic security and training solutions” to cater to agencies in the federal, corrections, justice, and security sectors.

Axon controls an estimated 85 percent of the police body-worn camera market. Its Evidence.com platform, once a trial add-on for BWC customers, is now also one of the biggest records management systems used by police. Its other tools and services include cloud video storage, drones, connected private cameras, analysis tools, virtual reality training, and real-time crime centers. 

An image from Axon’s Q4 2024 investor slide deck, which describes different levels of the “Officer Safety Plan” (OSP) product package and highlights that 95% of Axon customers are tied to a subscription plan.

Axon has been adding AI to its repertoire, and it now features a whole “AI Era” bundle plan. One recent offering is Draft One, which connects to Axon’s body-worn cameras (BWCs) and uses AI to generate police reports based on the audio captured in the BWC footage. While use of the tool may start off as a free trial, Axon sees Draft One as another key product for capturing new customers, despite widespread skepticism of the accuracy of the reports, the inability to determine which reports have been drafted using the system, and the liability they could bring to prosecutions.

In 2024, Axon acquired a company called Fusus, a platform that combines the growing stores of data that police departments collect—notifications from gunshot detection and automated license plate reader (ALPR) systems; footage from BWCs, drones, public cameras, and sometimes private cameras; and dispatch information—to create “real-time crime centers.” The company now claims that Fusus is being used by more than 250 different policing agencies.

Fusus claims to bring the power of the real-time crime center to police departments of all sizes, which includes the ability to help police access and use live footage from both public and private cameras through an add-on service that requires a recurring subscription. It also claims to integrate nicely with surveillance tools from other providers. Recently, however, Axon has been cutting ties, most notably with Flock Safety, as it starts to envelop some of the options its frenemies had offered.

In the middle of April, Axon announced that it would begin offering fixed ALPR, a key feature of the Flock Safety catalogue, and an AI Assistant, which has been a core offering of Truleo, another Axon competitor.

Flock Safety's Bundles and FlockOS

Flock Safety is another major police technology company that has expanded its focus from one primary technology to a whole package of equipment and software services. 

Flock Safety started with ALPRs. These tools use a camera to read vehicle license plates, collecting the make, model, location, and other details, which can be used for what Flock calls “Vehicle Fingerprinting.” The details are stored in a database and checked against “hot lists” provided by police officers; match or no match, the system stores and shares data on how, where, and when everyone is driving and parking their vehicles. 

Founded in 2017, Flock Safety has been working to expand its camera-based offerings, and it now claims to have a presence in more than 5,000 jurisdictions around the country, including through law enforcement and neighborhood association customers. 

A list of FlockOS features proposed to the Brookhaven Police Department in Georgia.

Among its tools are now a drone-as-first-responder system, gunshot detection, and a software platform meant to combine all of them. Flock also sells an option for businesses to use ALPRs to “optimize” marketing efforts and to analyze traffic patterns in order to segment their patrons. Flock Safety offers the ability to integrate private camera systems as well.

A price proposal for the Flock Safety platform made to Palatine, IL.

Much of what Flock Safety does now comes together in its FlockOS system, which claims to bring together various surveillance feeds and facilitate real-time “situational awareness.”

Flock is optimistic about its future, recently opening a massive new manufacturing facility in Georgia.

Motorola Solutions' "Ecosystem"

When you think of Motorola, you may think of phones—but there’s a good chance that you missed the moment in 2011 when the phone side of the company, Motorola Mobility, split off from Motorola Solutions, which is now a big player in police surveillance.

On its website, Motorola Solutions claims that departments are better off using a whole list of equipment from the same ecosystem, boasting the tagline, “Technology that’s exponentially more powerful, together.” Motorola describes this as an "ecosystem of safety and security technologies" in its securities filings. In 2024, the company also reported $2 billion in sales, but unlike Axon, its customer base is not exclusively law enforcement and includes private entities like sports stadiums, schools, and hospitals.

Motorola’s technology includes 911 services, radio, BWCs, in-car cameras, ALPRs, drones, face recognition, crime mapping, and software that supposedly unifies it all. Notably, video can also come with artificial intelligence analysis, in some cases allowing law enforcement to search video and track individuals across cameras.

A screenshot from Motorola Solutions’ webpage on law enforcement technology.

In January 2019, Motorola Solutions acquired Vigilant Solutions, one of the big players in the ALPR market, as part of its takeover of Vaas International Holdings. Now the company (under the subsidiary DRN Data) claims to have billions of scans saved from police departments and private ALPR cameras around the country. Marketing language for its Vehicle Manager system highlights that “data is overwhelming,” because the amount of data being collected is “a lot.” It’s a similar claim made by other companies: Now that you’ve bought so many surveillance tools to collect so much data, you’re finding that it is too much data, so you now need more surveillance tools to organize and make sense of it.

SoundThinking's ‘SafetySmart Platform’

SoundThinking began as ShotSpotter, a so-called gunshot detection tool that uses microphones placed around a city to identify and locate the sounds of gunshots. As news reports of the tool’s inaccuracy and criticism of its use have grown, the company has rebranded as SoundThinking and added ALPRs, case management, and weapons detection to its offerings. The company is now marketing its SafetySmart platform, which claims to integrate different stores of data and apply AI analytics.

In 2024, SoundThinking laid out its whole scheme in its annual report, referring to it as the "cross-sell" component of their sales strategy. 

The "cross-sell" component of our strategy is designed to leverage our established relationships and understanding of the customer environs by introducing other capabilities on the SafetySmart platform that can solve other customer challenges. We are in the early stages of the upsell/cross-sell strategy, but it is promising - particularly around bundled sales such as ShotSpotter + ResourceRouter and CaseBuilder +CrimeTracer. Newport News, VA, Rocky Mount, NC, Reno, NV and others have embraced this strategy and recognized the value of utilizing multiple SafetySmart products to manage the entire life cycle of gun crime…. We will seek to drive more of this sales activity as it not only enhances our system's effectiveness but also deepens our penetration within existing customer relationships and is a proof point that our solutions are essential for creating comprehensive public safety outcomes. Importantly, this strategy also increases the average revenue per customer and makes our customer relationships even stickier.

Many of SoundThinking’s new tools rely on a push toward “data integration” and artificial intelligence. ALPRs can be integrated with ShotSpotter. ShotSpotter can be integrated with the CaseBuilder records management system, and CaseBuilder can be integrated with CrimeTracer. CrimeTracer, once known as COPLINK X, is a platform that SoundThinking describes as a “powerful law enforcement search engine and information platform that enables law enforcement to search data from agencies across the U.S.” EFF tracks this type of tool in the Atlas of Surveillance as a third-party investigative platform: software tools that combine open-source intelligence data, police records, and other data sources, including even those found on the dark web, to generate leads or conduct analyses. 

SoundThinking, like a lot of surveillance technology, can be costly for departments, but the company seems to see the value in fostering its existing police department relationships even when it’s not getting paid. In Baton Rouge, budget cuts recently resulted in the elimination of the $400,000 annual contract for ShotSpotter, but the city continues to use it.

"They have agreed to continue that service without accepting any money from us for now, while we look for possible other funding sources. It was a decision that it's extremely expensive and kind of cost-prohibitive to move the sensors to other parts of the city," Baton Rouge Police Department Chief Thomas Morse told a local news outlet, WBRZ.

Beware the Bundle

Government surveillance is big business. The companies that provide surveillance and police data tools know that it’s lucrative to cultivate police departments as loyal customers. They’re jockeying for monopolization of the state surveillance market that they’re helping to build. While they may be marketing public safety in their pitches for products, from ALPRs to records management to investigatory analysis to AI everything, these companies are mostly beholden to their shareholders and bottom lines. 

The next time you come across BWCs or another piece of tech on your city council’s agenda or police department’s budget, take a closer look to see what other strings and surveillance tools might be attached. You are not just looking at one line item on the sheet—it’s probably an ongoing subscription to a whole package of equipment designed to erode your privacy, and no discount makes that a price worth paying.

To learn more about what surveillance tools your local agencies are using, take a look at EFF’s Atlas of Surveillance and our Street-Level Surveillance Hub.

Beryl Lipton

Washington’s Right to Repair Bill Heads to the Governor

2 weeks 1 day ago

The right to repair just keeps on winning. Last week, thanks in part to messages from EFF supporters, the Washington legislature passed strong consumer electronics right-to-repair legislation through both the House and Senate. The bill affirms our right to repair by banning restrictions that keep people and local businesses from accessing the parts, manuals, and tools they need for cheaper, easier repairs. It joined another strong right-to-repair bill for wheelchairs, ensuring folks can access the parts and manuals they need to fix their mobility devices. Both measures now head to Gov. Bob Ferguson. If you’re in Washington State, please urge the governor to sign these important bills.

TAKE ACTION

Washington State has come close to passing strong right-to-repair legislation before, only to falter at the last moments. This year, thanks to the work of our friends at the U.S. Public Interest Research Group (USPIRG) and their affiliate Washington PIRG, a coalition of groups got the bill through the legislature by emphasizing that the right to repair is good for people, good for small business, and good for the environment. Given that the cost of new electronic devices is likely to increase, it’s also a pocketbook issue that more lawmakers should get behind.  

This spring marked the first time that all 50 states have considered right-to-repair legislation. Seven states—California, Colorado, Massachusetts, Minnesota, Maine, New York, and Oregon—have right-to-repair laws to date. If you’re in Washington, urge Gov. Ferguson to sign both bills and make your state the eighth to join this elite club. Let’s keep this momentum going!

TAKE ACTION

Hayley Tsukayama

Ninth Circuit Hands Users A Big Win: Californians Can Sue Out-of-State Corporations That Violate State Privacy Laws

2 weeks 1 day ago

Simple common sense tells us that a corporation’s decision to operate in every state shouldn’t mean it can’t be sued in most of them. Sadly, U.S. law doesn’t always follow common sense. That’s why we were so pleased with a recent holding from the Ninth Circuit Court of Appeals. Setting a crucial precedent, the court held that consumers can sue national or multinational companies in the consumers’ home courts if those companies violate state data privacy laws.

The case, Briskin v. Shopify, stems from a California resident’s allegations that Shopify, a company that offers back-end support to e-commerce companies around the U.S. and the globe, installed tracking software on his devices without his knowledge or consent, and used it to secretly collect data about him. Shopify also allegedly tracked users’ browsing activities across multiple sites and compiled that information into comprehensive user profiles, complete with financial “risk scores” that companies could use to block users’ future purchases. The Ninth Circuit initially dismissed the lawsuit for lack of personal jurisdiction, ruling that Shopify did not have a close enough connection to California to be fairly sued there. Collecting data on Californians along with millions of other users was not enough; to be sued in California, Shopify had to do something to target Californians in particular.  

Represented by nonprofit Public Citizen, Briskin asked the court to rehear the case en banc (meaning, review by the full court rather than just a three-judge panel). The court agreed and invited further briefing. After that review, the court vacated the earlier holding, agreeing with the plaintiff (and EFF’s argument in a supporting amicus brief) that Shopify’s extensive collection of information from users in other states should not prevent California plaintiffs from having their day in court in their home state.   

The key issue was whether Shopify’s actions were “expressly aimed” at California. Shopify argued that it was “mere happenstance” that its conduct affected a consumer in California, arising from the consumer’s own choices. The Ninth Circuit rejected that theory, noting:

Pre-internet, there would be no doubt that the California courts would have specific personal jurisdiction over a third party who physically entered a Californian’s home by deceptive means to take personal information from the Californian’s files for its own commercial gain. Here, though Shopify’s entry into the state of California is by electronic means, its surreptitious interception of Briskin’s personal identifying information certainly is a relevant contact with the forum state.

The court further noted that the harm in California was not “mere happenstance” because, among other things, Shopify allegedly knew plaintiff's location either prior to or shortly after installing its initial tracking software on his device as well as those of other Californians.

Importantly, the court overruled earlier cases that had suggested that “express aiming” required the plaintiff to show that a company “targeted” California in particular. As the court recognized, such a requirement would have the

perverse effect of allowing a corporation to direct its activities toward all 50 states yet to escape specific personal jurisdiction in each of those states for claims arising from or relating to the relevant contacts in the forum state that injure that state’s residents.

Instead, the question is whether Shopify’s own conduct connected it to California in a meaningful way. The answer was a resounding yes, for multiple reasons:

Shopify knows about its California consumer base, conducts its regular business in California, contacts California residents, interacts with them as an intermediary for its merchants, installs its software onto their devices in California, and continues to track their activities.

In other words, a company can’t deliberately collect a bunch of information about a person in a given state, including where they are located, use that information for its own commercial purposes, and then claim it has little or no relationship with that state.

As states around the country seek to fill the gaps left by Congress’ failure to pass comprehensive data privacy legislation, this ruling helps ensure that those state laws will have real teeth. In an era of ever-increasing corporate surveillance, that’s a crucial win.

Corynne McSherry

Age Verification in the European Union: The Commission's Age Verification App

2 weeks 3 days ago

This is the second part of a three-part series about age verification in the European Union. In this blog post, we take a deep dive into the age verification app solicited by the European Commission, based on digital identities. Part one gives an overview of the political debate around age verification in the EU and part three explores measures to keep all users safe that do not require age checks. 

In part one of this series on age verification in the European Union, we gave an overview of the state of the debate in the EU and introduced an age verification app, or mini-wallet, that the European Commission has commissioned. In this post, we will take a more detailed look at the app, how it will work and what some of its shortcomings are.

According to the original tender and the app’s recently published specifications, the Commission is soliciting the creation of a mobile application that will act as a digital wallet by storing a proof of age to enable users to verify their ages and access age-restricted content.

After downloading the app, a user would request proof of their age. For this crucial step, the Commission foresees users relying on a variety of age verification methods, including national eID schemes, physical ID cards, linking the app to another app that contains information about a user’s age, like a banking app, or age assessment through third parties like banks or notaries. 

In the next step, the age verification app would generate a proof of age. When the user accesses a website that restricts content for certain age cohorts, the platform would request proof of the user’s age through the app. The app would then present that proof, allowing the online service to verify the age attestation and grant the user access to the age-restricted website or content in question. The goal is to build an app that aligns with, and allows for integration into, the architecture of the upcoming EU Digital Identity Wallet.

The user journey of the European Commission's age verification app
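
To make that flow concrete, here is a minimal sketch, in Python, of the issue-store-present pattern the Commission describes: a trusted issuer signs a bare “over 18” claim, the wallet keeps it on the user’s device, and a website later checks the issuer’s signature without ever seeing a birthdate. This is purely illustrative; it assumes the third-party 'cryptography' package for Ed25519 signatures, and the names, data formats, and trust model are ours, not the Commission’s. Note that a plain signed claim like this is also trivially linkable if reused, which is exactly why the specifications discuss unlinkability, salted hashes, and Zero Knowledge Proofs, covered below.

import json, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Issuance: a trusted issuer (e.g. a national eID scheme or a bank) checks
#    the user's age once and signs a minimal "over 18" claim -- not the birthdate.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

claim = json.dumps({"over_18": True, "issued_at": int(time.time())}).encode()
attestation = {"claim": claim, "signature": issuer_key.sign(claim)}

# 2. Storage: the wallet app keeps the signed attestation on the user's device.
wallet = [attestation]

# 3. Presentation: an age-restricted service requests proof; the wallet presents
#    the signed claim, and the service checks the issuer's signature without
#    learning who the user is or when they were born.
def verify_age(attestation, issuer_public_key):
    try:
        issuer_public_key.verify(attestation["signature"], attestation["claim"])
    except InvalidSignature:
        return False
    return json.loads(attestation["claim"]).get("over_18", False)

print(verify_age(wallet[0], issuer_public_key))  # True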

Review of the Commission’s Specifications for an Age Verification Mini-ID Wallet 

According to the specifications for the app, interoperability, privacy, and security are key concerns for the Commission in designing the main requirements of the app. The Commission acknowledges that development of the app is far from finished and will be an iterative process, and that key areas require feedback from stakeholders across industry and civil society. 

The specifications consider important principles to ensure the security and privacy of users verifying their age through the app, including data minimization, unlinkability (to ensure that only the identifiers required for specific linkable transactions are disclosed), storage limitations, transparency and measures to secure user data and prevent the unauthorized interception of personal data. 

However, taking a closer look at the specifications, many of the mechanisms envisioned to protect users’ privacy are not mandatory requirements but merely optional. For example, the app should implement salted hashes and Zero Knowledge Proofs (ZKPs), but is not required to do so. Indeed, the app’s specifications seem to heavily rely on ZKPs while simultaneously acknowledging that no compatible ZKP solution is currently available. This warrants a closer inspection of what ZKPs are and why they may not be the final answer to protecting users’ privacy in the context of age verification. 

A Closer Look at Zero Knowledge Proofs

Zero Knowledge Proofs provide a cryptographic way to prove something about a piece of information, like your exact date of birth and age, without giving the information itself away. They can offer a “yes-or-no” claim (like above or below 18) to a verifier that requires a legal age threshold. Two properties of ZKPs are “soundness” and “zero knowledge.” Soundness appeals to verifiers and governments because it makes it hard for a prover to present forged information. Zero knowledge benefits the holder, because they don’t have to share the underlying information, just the proof that said information exists. This is objectively more secure than uploading a picture of your ID to multiple sites or applications, but as mentioned above, it still requires an initial ID upload for activation.
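
To illustrate the “prove something without revealing it” idea in runnable form, below is a toy, standard-library-only Schnorr-style proof in Python, made non-interactive with the Fiat-Shamir heuristic. It proves knowledge of a secret value behind a public commitment without disclosing the secret, which captures the soundness and zero-knowledge properties described above. It is not an age proof: real over/under-18 checks need range proofs or anonymous credential schemes, and as the specifications themselves concede, no compatible ZKP solution is available yet. The parameters here are deliberately toy-sized and insecure.

import hashlib, secrets

# Toy group parameters: for illustration only, far too simple for real use.
p = 2**127 - 1      # a Mersenne prime, used as the modulus
g = 5               # a fixed base in Z_p*
q = p - 1           # exponents are reduced modulo p - 1

secret_x = secrets.randbelow(q)       # the prover's secret (e.g. a credential key)
public_y = pow(g, secret_x, p)        # public commitment: y = g^x mod p

def challenge(t):
    # Fiat-Shamir: derive the challenge from a hash instead of a live verifier.
    data = f"{g}|{public_y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prover: show knowledge of x such that y = g^x, without revealing x."""
    r = secrets.randbelow(q)
    t = pow(g, r, p)
    s = (r + challenge(t) * x) % q
    return t, s

def verify(t, s):
    """Verifier: accept iff g^s == t * y^c (mod p); x itself is never seen."""
    return pow(g, s, p) == (t * pow(public_y, challenge(t), p)) % p

print(verify(*prove(secret_x)))       # True: a sound proof, zero knowledge of x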

The Commission’s scheme makes several questionable assumptions: first, that frequently used ZKPs will avoid privacy concerns, and second, that verifiers won’t combine this data with existing information, such as account data, profiles, or interests, for other purposes, such as advertising. The European Commission plans to test these assumptions with extremely sensitive data: government-issued IDs. Though ZKPs are a better approach, this is a brand-new system affecting millions of people, who will be asked to provide an age proof with potentially higher frequency than ever before. That rolls the dice with the resiliency of these privacy measures over time. Furthermore, not all ZKP systems are the same, and while there is research on their use on mobile devices, rushing to implementation before the research matures puts all users at risk.

Who Can Ask for Proof of Your Age?

Regulation of verifiers (the service providers asking for age attestations) and of what they can ask for is just as important, to limit a potential flood of verifiers that didn’t previously need age verification. This is especially true for non-Know-Your-Customer (KYC) cases, in which service providers are not required to perform due diligence on their users. Equally important are rules that determine the consequences for verifiers that violate those regulations. Until recently, the eIDAS framework, whose technical implementation is still being negotiated, required registration certificates for verifiers across all EU member states. By forcing verifiers to register the data categories they intend to ask for, issues like illegal data requests were supposed to be mitigated. But this requirement has now been rolled back, and the Commission’s planned mini-AV wallet will not require it in the beginning. Users will be asked to prove how old they are without the restraint on verifiers that protects against request abuse. Without verifier accountability, or at least industry-level data categories with a determined scope, users are being asked to enter into an imbalanced relationship. An earlier mock-up gave some hope for empowered selective disclosure, where a user could toggle the sharing of discrete pieces of information on and off at the time of the verifier’s request. It would be more proactive to provide that setting to the holder in their wallet settings, before a request is ever made by a relying party.

Privacy tech is offered in this system as a concession to users forced to share information even more frequently, rather than as an additional way to bring equity to existing interactions with those who hold power by mediating access to information, loans, jobs, and public benefits. Words mean things, and ZKPs are not the solution, but a part of one. Most ZKP systems focus more on making proof and verification times efficient than on privacy itself. The latest research on digital credentials has produced more privacy-oriented ways to share information. But at this scale, we will need regulation and added measures against aggressive verification to deliver on the promise of better privacy for eID use.

Who Will Have Access to the Mini-ID Wallet, and Who Will Be Left Out?

Beyond its technical specifications, the proposed app raises a number of accessibility and participation issues. At its heart, the mini-ID wallet will rely on the verification of a user’s age through a proof of age. According to the tender, the wallet should support four methods for issuing and proving a user’s age.

Different age verification methods foreseen by the app

The first option is national eID schemes, which is an obvious choice: many Member States are currently working on (or have already notified) national eID schemes in the context of eIDAS, Europe’s eID framework. The goal is to allow the mini-ID wallet to integrate with the eIDAS node operated by the European Commission to verify a user’s age. But although many Member States are working on national eID schemes, uptake of eIDs has so far been slow, and it’s questionable whether an EU-wide rollout will be successful. 

But even if an EU-wide rollout were achievable, many people would not be able to participate. Those who do not possess ID cards, passports, residence permits, or documents like birth certificates will not be able to obtain an eID and will be at risk of losing access to knowledge, information, and services. This is especially relevant for already marginalized groups, like refugees or unhoused people, who may lose access to critical resources. Many children and teenagers will also be unable to participate in eID schemes. There are no EU-wide rules on when children need to have government-issued IDs, and while some countries, like Germany, mandate that every citizen above the age of 16 possess an ID, others, like Sweden, don’t require their citizens to have an ID or passport. In most EU Member States, the minimum age at which children can apply for an ID without parental consent is 18. So even where children and teenagers have a legal option to get an ID, their parents might withhold consent, making it impossible for a child to verify their age in order to access information or services online.

The second option is so-called smartcards, or physical eID cards, such as national ID cards, e-passports, or other trustworthy physical eID cards. The same limitations as for eIDs apply. Additionally, the Commission’s tender suggests the mini-ID wallet will rely on biometric recognition software to compare a user to the physical ID card they are using to verify their age. This raises a host of questions regarding the processing and storage of sensitive biometric data. A recent study by the National Institute of Standards and Technology compared different age estimation algorithms based on biometric data and found that certain ethnicities are still underrepresented in training data sets, exacerbating the risk that age estimation systems will discriminate against people of color. The study also reports higher error rates for female faces than for male faces, and that overall accuracy is strongly influenced by factors people have no control over, including “sex, image quality, region-of-birth, age itself, and interactions between those factors.” Other studies on the accuracy of biometric recognition software have reported higher error rates for people with disabilities as well as for trans and non-binary people.

The third option foresees a procedure to allow for the verification of a user’s identity through institutions like a bank, a notary, or a citizen service center. It is encouraging that the Commission’s tender foresees an option for different, non-state institutions to verify a user’s age. But neither banks nor notary offices are especially accessible for people who are undocumented, unhoused, don’t speak a Member State’s official language, or are otherwise marginalized or discriminated against. Banks and notaries also often require a physical ID in order to verify a client’s identity, so the fundamental access issues outlined above persist.

Finally, the specification suggests that third party apps that already have verified a user's identity, like banking apps or mobile network operators, could provide age verification signals. In many European countries, however, showing an ID is a necessary prerequisite for opening a bank account, setting up a phone contract, or even buying a SIM card. 

In summary, none of the options the Commission is considering for proving someone’s age accounts for the obstacles faced by different marginalized groups, leaving potentially millions of people across the EU unable to access crucial services and information and thereby undermining their fundamental rights. 

The question of which institutions will be able to verify ages is only one dimension to consider when weighing the ramifications of approaches like the mini-ID wallet for accessibility and participation. Although often forgotten in policy discussions, not everyone has access to a personal device. Age verification methods like the mini-ID wallet, which are device dependent, can be a real obstacle for people who share devices, or for users who access the internet through libraries, schools, or internet cafés, which do not accommodate the use of personal age verification apps. The average number of devices per household has been found to correlate strongly with income and education levels, further underscoring the point that it is often those who are already on the margins of society who are at risk of being left behind by age verification mandates based on digital identities. 

This is why we need to push back against age verification mandates. Not because child safety is not a concern – it is. But because age verification mandates risk undermining crucial access to digital services, eroding privacy and data protection, and limiting the freedom of expression. Instead, we must ensure that the internet remains a space where all voices can be heard, free from discrimination, and where we do not have to share sensitive personal data to access information and connect with each other.

Svea Windwehr

Congress Passes TAKE IT DOWN Act Despite Major Flaws

2 weeks 4 days ago

Today the U.S. House of Representatives passed the TAKE IT DOWN Act, giving the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don't like. President Trump himself has said that he would use the law to censor his critics. The bill passed the Senate in February, and it now heads to the president's desk. 

The takedown provision in TAKE IT DOWN applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Services will rely on automated filters, which are infamously blunt tools. They frequently flag legal content, from fair-use commentary to news reporting. The law’s tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal. As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply depublishing the speech rather than even attempting to verify it.

Congress is using the wrong approach to helping people whose intimate images are shared without their consent. TAKE IT DOWN pressures platforms to actively monitor speech, including speech that is presently encrypted. The law thus presents a huge threat to security and privacy online. While the bill is meant to address a serious problem, good intentions alone are not enough to make good policy. Lawmakers should be strengthening and enforcing existing legal protections for victims, rather than inventing new takedown regimes that are ripe for abuse. 

Jason Kelley

EFF Leads Prominent Security Experts in Urging Trump Administration to Leave Chris Krebs Alone

2 weeks 4 days ago
Political Retribution for Telling the Truth Weakens the Entire Infosec Community and Threatens Our Democracy; Letter Remains Open for Further Sign-Ons

SAN FRANCISCO – The Trump Administration must cease its politically motivated investigation of former U.S. Cybersecurity and Infrastructure Security Agency Director Christopher Krebs, the Electronic Frontier Foundation (EFF) and hundreds (see update below) of prominent cybersecurity and election security experts urged in an open letter. 

The letter – signed by preeminent names from academia, civil society, and the private sector – notes that security researchers play a vital role in protecting our democracy, securing our elections, and building, testing, and safeguarding government infrastructure. 

“By placing Krebs and SentinelOne in the crosshairs, the President is signaling that cybersecurity professionals whose findings do not align with his narrative risk having their businesses and livelihoods subjected to spurious and retaliatory targeting, the same bullying tactic he has recently used against law firms,” EFF’s letter said. “As members of the cybersecurity profession and information security community, we counter with a strong stand in defense of our professional obligation to report truthful findings, even – and especially – when they do not fit the playbook of the powerful. And we stand with Chris Krebs for doing just that.” 

President Trump appointed Krebs as Director of the Cybersecurity and Infrastructure Security Agency in the U.S. Department of Homeland Security in November 2018, and then fired him in November 2020 after Krebs publicly contradicted Trump's false claims of widespread fraud in the 2020 presidential election. 

Trump issued a presidential memorandum on April 9 directing Attorney General Pam Bondi and Homeland Security Secretary Kristi Noem to investigate Krebs, and directing Bondi and Director of National Intelligence Tulsi Gabbard to revoke security clearances held by Krebs and the cybersecurity company for which he worked, SentinelOne.  EFF’s letter urges that both of these actions be reversed immediately. 

“An independent infosec community is fundamental to protecting our democracy, and to the profession itself,” EFF’s letter said. “It is only by allowing us to do our jobs and report truthfully on systems in an impartial and factual way without fear of political retribution that we can hope to secure those systems. We take this responsibility upon ourselves with the collective knowledge that if any one of us is targeted for our work hardening these systems, then we all can be. We must not let that happen. And united, we will not let that happen.” 

EFF also has filed friend-of-the-court briefs supporting four law firms targeted for retribution in Trump’s unconstitutional executive orders. 

For the letter in support of Krebs: https://www.eff.org/document/chris-krebs-support-letter-april-28-2025

To sign onto the letter: https://eff.org/r.uq1r 

Update 04/29/2025: The letter now has over 400 signatures. You can view it here: https://www.eff.org/ChrisKrebsLetter

Contact: William Budington, Senior Staff Technologist, bill@eff.org
Josh Richman