Yes to the “ICE Out of Our Faces Act”

8 hours 24 minutes ago

Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights and civil liberties. For example, immigration agents are routinely scanning faces of people they suspect of unlawful presence in the country – 100,000 times, according to the Wall Street Journal. The technology has already misidentified at least one person, according to 404 Media.

Face recognition technology is so dangerous that government should not use it at all—least of all these out-of-control immigration agencies.

To combat these abuses, EFF is proud to support the “ICE Out of Our Faces Act.” This new federal bill would ban ICE and CBP agents, and some local police working with them, from acquiring or using biometric surveillance systems, including face recognition technology, or information derived from such systems by another entity. This bill would be enforceable, among other ways, by a strong private right of action.

The bill’s lead author is Senator Ed Markey. We thank him for his longstanding leadership on this issue, including introducing similar legislation that would ban all federal law enforcement agencies, and some federally funded state agencies, from using biometric surveillance systems (a bill that EFF also supported). The new “ICE Out of Our Faces Act” is also sponsored by Senator Merkley, Senator Wyden, and Representative Jayapal.

As EFF explains in the new bill’s announcement:

It’s past time for the federal government to end its use of this abusive surveillance technology. A great place to start is its use for immigration enforcement, given ICE and CBP’s utter disdain for the law. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations or if the technology were 100% accurate. We thank the authors of this bill for their leadership in taking steps to end the use of this dangerous and invasive technology.

You can read the bill here, and the bill’s announcement here.

Adam Schwartz

Protecting Our Right to Sue Federal Agents Who Violate the Constitution

1 day 6 hours ago

Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.

To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.

Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.

The Problem

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.

So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents of the Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”

Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.

In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.

In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling.  In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”

Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.

The Solution

At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.

In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.

State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.” 

This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.

We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.

We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.

Adam Schwartz

Smart AI Policy Means Examining Its Real Harms and Benefits

1 day 8 hours ago

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning, and reasons why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.

EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as dissidents using encryption to organize resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So let’s look at the real-world landscape.

AI’s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on.  If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
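
To see the dynamic in miniature, here is a stripped-down sketch using synthetic data rather than any real records; the numbers and feature names are made up purely for illustration. A model trained to imitate past decisions that encode a bias ends up reproducing that bias in its own predictions.

```python
# Toy sketch with synthetic data: a model trained on biased historical decisions
# reproduces the bias, even when two people present identical "risk."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # stand-in for a demographic group (0 or 1)
risk = rng.normal(0, 1, n)           # the factor decisions *should* depend on

# Historical decisions driven partly by risk and partly by group membership (the bias).
past_decision = (risk + 1.5 * group + rng.normal(0, 1, n) > 1).astype(int)

# Train a model to predict the historical decisions from the available features.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, past_decision)

# At identical risk, the model assigns very different probabilities by group:
# it has "learned" the discrimination baked into its training data.
same_risk = np.zeros(2)
print(model.predict_proba(np.column_stack([same_risk, [0, 1]]))[:, 1])
```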

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, with the resulting bias affecting AI tools trained on the existing and biased image data.

These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.

Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers. 

Other considerations that may weigh against AI use are its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AI’s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
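
A small, synthetic contrast makes the difference concrete (an illustration only, not a claim about any particular system): a traditional regression relies on the analyst specifying which factors matter and how, while a machine-learning model searches for whatever pattern predicts best, without producing an explanation a person can read.

```python
# Illustrative contrast on synthetic data: a linear model with assumed additive
# effects vs. a machine-learning model that finds the predictive pattern itself.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(5_000, 2))
y = X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 5_000)    # outcome driven by an interaction

linear = LinearRegression().fit(X, y)                        # assumes additive effects
forest = RandomForestRegressor(n_estimators=100).fit(X, y)   # learns the pattern itself

print("linear R^2:", round(linear.score(X, y), 2))   # ~0.0: misses the interaction
print("forest R^2:", round(forest.score(X, y), 2))   # ~1.0: captures it, but opaquely
```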

To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.

Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.

AI Advancements in Scientific and Medical Research

AI tools can also improve weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
  • Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).

 Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines––accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.

AI Uses for Accessibility and Accountability

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential: many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias protections are crucial. With those caveats in mind, here are two very interesting examples:

  • AI voice generators are giving people their voices back after they have lost the ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and the many websites that are difficult to navigate for users who rely on a screen reader. Other tools can help blind and low-vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users can’t or don’t want to ask another human to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  • The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance: when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state, there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard-won knowledge that ethics are a vital part of work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

Tori Noble

EFF to Close Friday in Solidarity with National Shutdown

1 week ago

The Electronic Frontier Foundation stands with the people of Minneapolis and with all of the communities impacted by the ongoing campaign of ICE and CBP violence. EFF will be closed Friday, Jan. 30 as part of the national shutdown in opposition to ICE and CBP and the brutality and terror they and other federal agencies continue to inflict on immigrant communities and any who stand with them.

We do not make this decision lightly, but we will not remain silent. 

Cindy Cohn

Introducing Encrypt It Already

1 week ago

Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds a little familiar, it’s because this is a spiritual successor to Fix It Already, our 2019 campaign pushing companies to fix longstanding issues.

End-to-end encryption is the best way we have to protect our conversations and data. It ensures that the company providing a service cannot access the data or messages you store on it. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, and they’re accessible only on your devices and your recipients’ devices. When it comes to data, like what’s stored using Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider will not be able to access the data.
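
To make the idea concrete, here is a minimal sketch using the PyNaCl library. It illustrates the general concept only; it is not the protocol any particular messenger actually uses, and real apps layer on forward secrecy, authentication, group messaging, and much more.

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# The service in the middle only ever relays ciphertext it cannot read.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()    # Alice's key pair never leaves her device
bob_key = PrivateKey.generate()      # Bob's key pair never leaves his device

# Alice encrypts to Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the library at noon")

# The provider stores or forwards `ciphertext`, but holds no key that opens it.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))   # b'meet at the library at noon'
```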

We’ve divided this up into three categories, each with three different demands:

  • Keep your Promises: Features that the company has publicly stated they’re working on, but which haven’t launched yet.
    • Facebook should use end-to-end encryption for group messages
    • Apple and Google should deliver on their promise of interoperable end-to-end encryption of RCS
    • Bluesky should launch its promised end-to-end encryption for DMs
  • Defaults Matter: Features that are available on a service or in app already, but aren’t enabled by default.
    • Telegram should default to end-to-end encryption for DMs
    • WhatsApp should use end-to-end encryption for backups by default
    • Ring should enable end-to-end encryption for its cameras by default
  • Protect Our Data: New features that companies should launch, often because their competition is doing it already.
    • Google should launch end-to-end encryption for Google Authenticator backups
    • Google should offer end-to-end encryption for Android backup data
    • Apple and Google should offer a per-app AI permission option to block AI access to secure chat apps

What is only half the problem. How is just as important.

What Companies Should Do When They Launch End-to-End Encryption Features

There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair the security of the platform with the transparency that makes it possible for users to trust that it protects data the way the company claims it does. When companies launch these encryption features, they should consider including:

  • A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
  • Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
  • Data minimization principles whenever feasible, storing as little metadata as possible.

Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, what features may change, and what steps they need to take to set things up so their data is protected the way they expect.

What You Can Do

When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.

For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like. 

In some cases, you can also reach out to a company directly with feature requests, which all of the above companies except Google and WhatsApp offer in some form. We recommend filing these through any service you use for any of the above features you’d like to see.

As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to Telegram’s bugs and suggestions platform to upvote our post, and to Ring’s feature request board to boost ours.

End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead it’s a rare feature limited to a small set of services, like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!

Thorin Klosowski

Google Settlement May Bring New Privacy Controls for Real-Time Bidding

1 week ago

EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over their RTB system is a step in the right direction towards giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.

What Is Real-Time Bidding?

RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works (a simplified sketch follows this list):

  1. The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
  2. This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people. 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space. 
  5. The highest bidder gets to display an ad for you, but advertisers (and the ad tech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.
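
The sketch below walks through steps 2 through 5 with hypothetical data. The field names loosely resemble those found in real bid requests, but this is an illustration of the flow, not any ad exchange’s actual schema or API. Note that every bidder, including one that never bids, keeps a copy of the data.

```python
# Simplified, hypothetical sketch of an RTB auction (illustrative field names only).
bid_request = {
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",   # made-up identifier
    "ip": "203.0.113.7",
    "location": {"lat": 44.97, "lon": -93.26},
    "device": "Pixel 8; Android 15",
    "interests": ["running", "parenting", "mental health"],
    "app": "com.example.weather",
}

def run_auction(request, bidders):
    bids = {}
    for bidder in bidders:
        # Every participant receives the personal data in the request,
        # whether or not it wins -- or even places a bid.
        bidder["seen_data"].append(request)
        if bidder["wants_to_bid"](request):
            bids[bidder["name"]] = bidder["bid_price"]
    # Only the highest bidder shows an ad, but everyone kept the data.
    return max(bids, key=bids.get) if bids else None

bidders = [
    {"name": "acme_ads", "bid_price": 0.40, "seen_data": [],
     "wants_to_bid": lambda r: "running" in r["interests"]},
    {"name": "broker_posing_as_buyer", "bid_price": 0.0, "seen_data": [],
     "wants_to_bid": lambda r: False},   # never bids, just collects bidstream data
]
print(run_auction(bid_request, bidders))   # -> "acme_ads"
```
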
Why Is Real-Time Bidding Harmful?

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.

Data brokers have sold bidstream data for a range of invasive purposes, including tracking union organizers and political protesters, outing gay priests, and conducting warrantless government surveillance. Several federal agencies, including ICE, CBP and the FBI, have purchased location data from a data broker whose sources likely include RTB. ICE recently requested information on “Ad Tech” tools it could use in investigations, further demonstrating RTB’s potential to facilitate surveillance. RTB also poses national security risks, as researchers have warned that it could allow foreign states to obtain compromising personal data about American defense personnel and political leaders.

The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used. 

Proposed Settlement with Google Is a Step in the Right Direction

As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.

Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email. 
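
Conceptually, a setting like the RTB Control works by stripping identifying fields before a bid request is broadcast. The sketch below is a hypothetical illustration of that idea, with made-up field names; it is not Google’s actual implementation.

```python
# Hypothetical illustration of stripping identifiers when an "RTB Control" is on.
IDENTIFYING_FIELDS = {"advertising_id", "ip", "user_agent", "cookie_match_id"}

def prepare_bid_request(request, rtb_control_enabled):
    if not rtb_control_enabled:
        return request
    # Drop the fields that let bidders tie this request to a person or profile.
    return {k: v for k, v in request.items() if k not in IDENTIFYING_FIELDS}

request = {
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 ...",
    "interests": ["running"],
    "app": "com.example.weather",
}
print(prepare_bid_request(request, rtb_control_enabled=True))
# -> {'interests': ['running'], 'app': 'com.example.weather'}
```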

While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default. 

The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.

The Real Solution: Ban Online Behavioral Advertising

Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.

Lena Cohen

✍️ The Bill to Hand Parenting to Big Tech | EFFector 38.2

1 week 1 day ago

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. We're diving into the latest attempt to control how kids access the internet and more with our latest EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks what to do when you hit an age gate online, explains why rent-only copyright culture makes us all worse off, and covers the dangers of law enforcement purchasing straight-up military drones.

Prefer to listen in? In our audio companion, EFF Senior Policy Analyst Joe Mullin explains what lawmakers should do if they really want to help families. Find the conversation on YouTube or the Internet Archive.

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!

Christian Romero

DSA Human Rights Alliance Publishes Principles Calling for DSA Enforcement to Incorporate Global Perspectives

1 week 1 day ago

The Digital Services Act (DSA) Human Rights Alliance has, since its founding by EFF and Access Now in 2021, worked to ensure that the European Union follows a human rights-based approach to platform governance by integrating a wide range of voices and perspectives to contextualise DSA enforcement and examining the DSA’s effect on tech regulations around the world.

As the DSA moves from legislation to enforcement, it has become increasingly clear that its impact depends not only on the text of the Act but also how it’s interpreted and enforced in practice. This is why the Alliance has created a set of recommendations to include civil society organizations and rights-defending stakeholders in the enforcement process. 

 The Principles for a Human Rights-Centred Application of the DSA: A Global Perspective, a report published this week by the Alliance, outlines steps the European Commission, as the main DSA enforcer, as well as national policymakers and regulators, should take to bring diverse groups to the table as a means of ensuring that the implementation of the DSA is grounded in human rights standards.

 The Principles also offer guidance for regulators outside the EU who look to the DSA as a reference framework and international bodies and global actors concerned with digital governance and the wider implications of the DSA. The Principles promote meaningful stakeholder engagement and emphasize the role of civil society organisations in providing expertise and acting as human rights watchdogs.

“Regulators and enforcers need input from civil society, researchers, and affected communities to understand the global dynamics of platform governance,” said EFF International Policy Director Christoph Schmon. “Non-EU-based civil society groups should be enabled to engage on equal footing with EU stakeholders on rights-focused elements of the DSA. This kind of robust engagement will help ensure that DSA enforcement serves the public interest and strengthens fundamental rights for everyone, especially marginalized and vulnerable groups.”

“As activists are increasingly intimidated, journalists silenced, and science and academic freedom attacked by those who claim to defend free speech, it is of utmost importance that the Digital Services Act's enforcement is centered around the protection of fundamental rights, including the right to the freedom of expression,” said Marcel Kolaja, Policy & Advocacy Director—Europe at Access Now. “To do so effectively, the global perspective needs to be taken into account. The DSA Human Rights Principles provide this perspective and offer valuable guidance for the European Commission, policymakers, and regulators for implementation and enforcement of policies aiming at the protection of fundamental rights.”

“The Principles come at the crucial moment for the EU candidate countries, such as Serbia, that have been aligning their legislation with the EU acquis but still struggle with some of the basic rule of law and human rights standards,” said Ana Toskic Cvetinovic, Executive Director for Partners Serbia. “The DSA HR Alliance offers the opportunity for non-EU civil society to learn about the existing challenges of DSA implementation and design strategies for impacting national policy development in order to minimize any negative impact on human rights.”

 The Principles call for:

◼ Empowering EU and non-EU Civil Society and Users to Pursue DSA Enforcement Actions

◼ Considering Extraterritorial and Cross-Border Effects of DSA Enforcement

◼ Promoting Cross-Regional Collaboration Among CSOs on Global Regulatory Issues

◼ Establishing Institutionalised Dialogue Between EU and Non-EU Stakeholders

◼ Upholding the Rule of Law and Fundamental Rights in DSA Enforcement, Free from Political Influence

◼ Considering Global Experiences with Trusted Flaggers and Avoiding Enforcement Abuse

◼ Recognising the International Relevance of DSA Data Access and Transparency Provisions for Human Rights Monitoring

The Principles have been signed by 30 civil society organizations, researchers, and independent experts.

The DSA Human Rights Alliance represents diverse communities across the globe to ensure that the DSA embraces a human rights-centered approach to platform governance and that EU lawmakers consider the global impacts of European legislation.

 

Karen Gullo

Beware: Government Using Image Manipulation for Propaganda

1 week 2 days ago

U.S. Homeland Security Secretary Kristi Noem last week posted a photo of the arrest of Nekima Levy Armstrong, one of three activists who had entered a St. Paul, Minn. church to confront a pastor who also serves as acting field director of the St. Paul Immigration and Customs Enforcement (ICE) office.

A short while later, the White House posted the same photo – except that version had been digitally altered to darken Armstrong’s skin and rearrange her facial features to make it appear she was sobbing or distraught. The Guardian, one of many media outlets to report on this image manipulation, created a handy slider graphic to help viewers see clearly how the photo had been changed.

The New York Times reported it had run the two images through Resemble.AI, an A.I. detection system, which concluded Noem’s image was real but the White House’s version showed signs of manipulation. "The Times was able to create images nearly identical to the White House’s version by asking Gemini and Grok — generative A.I. tools from Google and Elon Musk’s xAI start-up — to alter Ms. Noem’s original image." 

Most of us can agree that the government shouldn’t lie to its constituents. We can also agree that good government does not involve emphasizing cruelty or furthering racial biases. But this abuse of technology violates both those norms. 

“Accuracy and truthfulness are core to the credibility of visual reporting,” the National Press Photographers Association said in a statement issued about this incident. “The integrity of photographic images is essential to public trust and to the historical record. Altering editorial content for any purpose that misrepresents subjects or events undermines that trust and is incompatible with professional practice.” 

This isn’t about “owning the libs” — this is the highest office in the nation using technology to lie to the entire world.

Reworking an arrest photo to make the arrestee look more distraught not only is a lie, but it’s also a doubling-down on a “the cruelty is the point” manifesto. Using a manipulated image further humiliates the individual and perpetuates harmful biases, and the only reason to darken an arrestee’s skin would be to reinforce colorist stereotypes and stoke the flames of racial prejudice, particularly against dark-skinned people.

History is replete with cruel and racist images as propaganda: Think of Nazi Germany’s cartoons depicting Jewish people, or contemporaneously, U.S. cartoons depicting Japanese people as we placed Japanese-Americans in internment camps. Time magazine caught hell in 1994 for using an artificially darkened photo of O.J. Simpson on its cover, and several Republican political campaigns have been called out for similar manipulation in recent years.

But in an age when we can create or alter a photo with a few keyboard strokes, when we can alter what viewers think is reality so easily and convincingly, the danger of abuse by government is greater.   

Had the Trump administration not ham-handedly released the retouched perp-walk photo after Noem had released the original, we might not have known the reality of that arrest at all. This dishonesty is all the more reason why Americans’ right to record law enforcement activities must be protected. Without independent records and documentation of what’s happening, there’s no way to contradict the government’s lies. 

This incident raises the question of whether the Trump Administration feels emboldened to manipulate other photos for other propaganda purposes. Does it rework photos of the President to make him appear healthier, or more awake? Does it rework military or intelligence images to create pretexts for war? Does it rework photos of American citizens protesting or safeguarding their neighbors to justify a military deployment? 

In this instance, like so much of today’s political trolling, there’s a good chance it’ll be counterproductive for the trolls: The New York Times correctly noted that the doctored photograph could hinder Armstrong’s right to a fair trial. “As the case proceeds, her lawyers could use it to accuse the Trump administration of making what are known as improper extrajudicial statements. Most federal courts bar prosecutors from making any remarks about court filings or a legal proceeding outside of court in a way that could prejudice the pool of jurors who might ultimately hear the case.” They also could claim the doctored photo proves the Justice Department bore some sort of animus against Armstrong and charged her vindictively.

In the past, we've urged caution when analyzing proposals to regulate technologies that could be used to create false images. In those cases, we argued that any new regulation should rely on the established framework for addressing harms caused by other forms of harmful false information. But in this situation, it is the government itself that is misusing technology and propagating harmful falsehoods. This doesn't require new laws; the government can and should put an end to this practice on its own. 

Any reputable journalism organization would fire an employee for manipulating a photo this way; many have done exactly that. It’s a shame our government can’t adhere to such a basic ethical and moral code too. 

Josh Richman

EFF Statement on ICE and CBP Violence

1 week 3 days ago

Dangerously unchecked surveillance and rights violations have been a throughline of the Department of Homeland Security since the agency’s creation in the wake of the September 11th attacks. In particular, Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have been responsible for countless civil liberties and digital rights violations since that time. In the past year, however, ICE and CBP have descended into utter lawlessness, repeatedly refusing to exercise or submit to the democratic accountability required by the Constitution and our system of laws.  

The Trump Administration has made indiscriminate immigration enforcement and mass deportation a key feature of its agenda, with little to no accountability for illegal actions by agents and agency officials. Over the past year, we’ve seen massive ICE raids in cities from Los Angeles to Chicago to Minneapolis. Supercharged by an unprecedented funding increase, immigration enforcement agents haven’t been limited to boots on the ground: they’ve been scanning faces, tracking neighborhood cell phone activity, and amassing surveillance tools to monitor immigrants and U.S. citizens alike. 

Congress must vote to reject any further funding of ICE and CBP

The latest enforcement actions in Minnesota have led to federal immigration agents killing Renee Good and Alex Pretti. Both were engaged in their First Amendment right to observe and record law enforcement when they were killed. And it’s only because others similarly exercised their right to record that these killings were documented and widely exposed, countering false narratives the Trump Administration promoted in an attempt to justify the unjustifiable.  

These constitutional violations are systemic, not one-offs. Just last week, the Associated Press reported a leaked ICE memo that authorizes agents to enter homes solely based on “administrative” warrants—lacking any judicial involvement. This government policy is contrary to the “very core” of the Fourth Amendment, which protects us against unreasonable search and seizure, especially in our own homes.  

These violations must stop now. ICE and CBP have grown so disdainful of the rule of law that reforms or guardrails cannot suffice. We join with many others in saying that Congress must vote to reject any further funding of ICE and CBP this week. But that is not enough. It’s time for Congress to do the real work of rebuilding our immigration enforcement system from the ground up, so that it respects human rights (including digital rights) and human dignity, with real accountability for individual officers, their leadership, and the agency as a whole.

Cindy Cohn

Search Engines, AI, And The Long Fight Over Fair Use

1 week 6 days ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.

Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.

Today, the same argument is being recycled against AI. The underlying question is the same: whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.

Fair Use Protects Analysis—Even When It’s Automated

U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.

Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.

Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use. 
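
As a toy illustration of what “extracting statistical relationships” means, consider a bigram model: after reading a text, it keeps only counts of which word tends to follow which, not the text itself. Real language models are vastly more sophisticated, but the purpose of the copying is the same in kind: statistics about language, used to generate something new.

```python
# Toy "training": retain only word-to-word statistics, not the text that was read.
from collections import Counter, defaultdict

text = "the law protects learning and the law protects analysis"
words = text.split()

following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1          # count which word follows which

# What remains after "training" is a table of statistical relationships:
print(following["the"])        # Counter({'law': 2})
print(following["protects"])   # Counter({'learning': 1, 'analysis': 1})

# Generation samples from those statistics rather than copying the source.
def next_word(word):
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))        # 'law'
```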

Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology. 

Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated. 

A Road Forward For AI Training And Fair Use 

One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training is a way of studying how language works, not a means of reproducing or supplanting the original books. Any harm to the market for the original works was speculative.

The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share. 

AI Can Create Problems, But Expanding Copyright Is the Wrong Fix 

Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times are core functions of government. Copyright law doesn’t help with those tasks in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression. 

Broad licensing mandates may also do harm by entrenching the current biggest incumbent companies. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.  

Fair Use Still Matters

Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.

Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.

Joe Mullin

Rent-Only Copyright Culture Makes Us All Worse Off

2 weeks ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

In the Netflix/Spotify/Amazon era, many of us access copyrighted works purely in digital form – and that means we rarely have the chance to buy them. Instead, we are stuck renting them, subject to all kinds of terms and conditions. And because the content is digital, reselling it, lending it, even preserving it for your own use inevitably requires copying. Unfortunately, when it comes to copying digital media, US copyright law has pretty much lost the plot.

As we approach the 50th anniversary of the 1976 Copyright Act, the last major overhaul of US copyright law, we’re not the only ones wondering if it’s time for the next one. It’s a high-risk proposition, given the wealth and influence of entrenched copyright interests who will not hesitate to send carefully selected celebrities to argue for changes that will send more money, into fewer pockets, for longer terms. But it’s equally clear that the law needs updating, and nowhere is that more evident than in the waning influence of Section 109, aka the first sale doctrine.

First sale—the principle that once you buy a copyrighted work you have the right to re-sell it, lend it, hide it under the bed, or set it on fire in protest—is deeply rooted in US copyright law. Indeed, in an era where so many judges are looking to the Framers for guidance on how to interpret current law, it’s worth noting that the first sale principles (also characterized as “copyright exhaustion”) can be found in the earliest copyright cases and applied across the rights in the so-called “copyright bundle.”

Unfortunately, courts have held that first sale, at least as it was codified in the Copyright Act, only applies to distribution, not reproduction. So even if you want to copy a rented digital textbook to a second device, and you go through the trouble of deleting it from the first device, the doctrine does not protect you.

We’re all worse off as a result. Our access to culture, from hit songs to obscure indie films, is mediated by the whims of major corporations. With physical media, the first sale principle built bustling secondhand markets, community swaps, and libraries—places where culture can be shared and celebrated, while making it more affordable for everyone.

And while these new subscription and rental services have an appealing upfront cost, they come with a lot more precarity. If you love rewatching a show, you may find yourself chasing it between services, or discover it is suddenly unavailable on any platform. Or, as fans of Mad Men or Buffy the Vampire Slayer know, you could be stuck with a terrible remaster as the only digital version available.

Last year we saw one improvement with California Assembly Bill 2426 taking effect. In California, companies must now at least disclose to potential customers if a “purchase” is a revocable license—i.e., if they can blow it up after you pay. A story driving this change was Ubisoft revoking access to “The Crew” and making customers’ copies unplayable a decade after launch.

On the federal level, EFF, Public Knowledge, and 15 other public interest organizations backed Sen. Ron Wyden’s message to the FTC to similarly establish clear ground rules for digital ownership and the sale of digital goods. Unfortunately, FTC Chairman Andrew Ferguson has thus far turned down this easy win for consumers.

As for the courts, some scholars think they have just gotten it wrong. We agree, but it appears we need Congress to set them straight. The Copyright Act might not need a complete overhaul, but Section 109 certainly does. The current version hurts consumers, artists, and the millions of ordinary people who depend on software and digital works every day for entertainment, education, transportation, and, yes, growing our food.

We realize this might not be the most urgent problem Congress confronts in 2026—to be honest, we wish it were—but it’s a relatively easy one to solve. That solution could release a wave of new innovation and, equally importantly, restore some degree of agency to American consumers by making them owners again.

Corynne McSherry

Copyright Kills Competition

2 weeks 1 day ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today’s copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else’s expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.

Pro-monopoly regulation through copyright won’t provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.

Entertainment companies’ historical practices bear out this concern. For example, in the late 2000s to mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There’s no reason to think that these same companies would treat their artists more fairly now.

AI Training

In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators’ expense—it’s not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would lock out all but the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition, like higher costs, worse service, and heightened security risks. New, beneficial AI tools that allow people to express themselves or access information may never get built at all.

Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. ROSS trained its tool using “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. The tool didn’t output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will reject this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.

Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry: the cost of licensing enough works to train an LLM would be out of reach for most would-be competitors.

The DMCA’s “Anti-Circumvention” Provision

The Digital Millennium Copyright Act’s “anti-circumvention” provision is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.

In practice, it’s done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It’s been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).

Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creators without impeding competition.

Tori Noble

Copyright Should Not Enable Monopoly

2 weeks 1 day ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

There’s a crisis of creativity in mainstream American culture. We have fewer and fewer studios and record labels and fewer and fewer platforms online that serve independent artists and creators.  

At its core, copyright is a monopoly right on creative output and expression. It’s intended to allow people who make things to make a living through those things, to incentivize creativity. To square the circle that is “exclusive control over expression” and “free speech,” we have fair use.

However, we aren’t just seeing artists having a time-limited ability to make money off of their creations. We are also seeing large corporations turn into megacorporations and consolidating huge stores of copyrights under one umbrella. When the monopoly right granted by copyright is compounded by the speed and scale of media company mergers, we end up with a crisis in creativity. 

People have been complaining about the lack of originality in Hollywood for a long time. What is interesting is that the response from the major studios has rarely been, especially recently, to invest in original programming. Instead, they have increased their copyright holdings through mergers and acquisitions. In today’s consolidated media world, copyright is doing the opposite of its intended purpose: instead of encouraging creativity, it’s discouraging it. The drive to snap up media franchises (or “intellectual properties”) that can generate sequels, reboots, spinoffs, and series for years to come has crowded out truly original and fresh creativity in many sectors. And since copyright terms last so long, there isn’t even a ticking clock to force these corporations to seek out new original creations.

In theory, the internet should provide a counterweight to this problem by lowering barriers to entry for independent creators. But as online platforms for creativity likewise shrink in number and grow in scale, they have closed ranks with the major studios.  

It’s a betrayal of the promise of the internet: that it should be a level playing field where you get to decide what you want to do, watch, listen to, read. And our government should be ashamed for letting it happen.  

Katharine Trendacosta

Statutory Damages: The Fuel of Copyright-based Censorship

2 weeks 2 days ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules—all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that even possibly violates these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.

This dystopia isn’t a fantasy, it’s close to how U.S. copyright’s broken statutory damages regime actually works.

Copyright includes “statutory damages,” which means letting a jury decide how big of a penalty the defendant will have to pay—anywhere from $200 to $150,000 per work—without the jury necessarily seeing any evidence of actual financial losses or illicit profits. In fact, the law gives judges and juries almost no guidelines on how to set damages. This is a huge problem for online speech.

One way or another, everyone builds on the speech of others when expressing themselves online: quoting posts, reposting memes, sharing images from the news. For some users, re-use is central to their online expression: parodists, journalists, researchers, and artists use others’ words, sounds, and images as part of making something new every day. Both these users and the online platforms they rely on risk unpredictable, potentially devastating penalties if a copyright holder objects to some re-use and a court disagrees with the user’s well-intentioned efforts.

On Copyright Week, we like to talk about ways to improve copyright law. One of the most important would be to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. Extremely large jury awards for fraud, for example, have been found to offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright—some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm.

Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that drives overzealous or downright abusive takedowns of creative material from online platforms. Capricious and error-prone copyright enforcement bots, like YouTube’s Content ID, were created in part to avoid the threat of massive statutory damages against the platform. Those same damages create an ever-present bias in favor of major rightsholders and against innocent users in the platforms’ enforcement decisions. And they stop platforms from addressing the serious problems of careless and downright abusive copyright takedowns.

By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.

“But wait,” you might say, “don’t legal protections like fair use and the safe harbors of the Digital Millennium Copyright Act protect users and platforms?” They do—but the threat of statutory damages makes that protection brittle. Fair use allows for many important re-uses of copyrighted works without permission. But fair use is heavily dependent on circumstances and can sometimes be difficult to predict when copyright is applied to new uses. Even well-intentioned and well-resourced users avoid experimenting at the boundaries of fair use when the cost of a court disagreeing is so high and unpredictable.

Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also make statutory damages unavailable in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.

Mitch Stoltz

💾 The Worst Data Breaches of 2025—And What You Can Do | EFFector 38.1

2 weeks 2 days ago

So many data breaches happen throughout the year that it can be easy to lose track not just of whether, but of how many different breaches compromised your data. We're diving into these data breaches and more with our latest EFFector newsletter.

Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks U.S. Immigration and Customs Enforcement's (ICE) surveillance spending spree, explains how hackers are countering ICE's surveillance, and invites you to our free livestream covering online age verification mandates.

Prefer to listen in? In our audio companion, EFF Security and Privacy Activist Thorin Klosowski explains what you can do to protect yourself from data breaches and how companies can better protect their users. Find the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 38.1 - 💾 THE WORST DATA BREACHES OF 2025—and what you can do

Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!

Christian Romero

EFF Joins Internet Advocates Calling on the Iranian Government to Restore Full Internet Connectivity

2 weeks 2 days ago

Earlier this month, Iran’s internet connectivity faced one of its most severe disruptions in recent years, with a near-total disconnection from the global internet and major restrictions on mobile access.

EFF joined architects, operators, and stewards of the global internet infrastructure in calling upon authorities in Iran to immediately restore full and unfiltered internet access. We further call upon the international technical community to remain vigilant in monitoring connectivity and to support efforts that ensure the internet remains open, interoperable, and accessible to all.

This is not the first time people in Iran have been forced to experience this; the government has suppressed internet access in the country for many years. In the past three years in particular, people in Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention and, in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether.

EFF has long maintained that governments and occupying powers must not disrupt internet or telecommunication access. Cutting off telecommunications and internet access is a violation of basic human rights and a direct attack on people's ability to access information and communicate with one another. 

Our joint statement continues:

“We assert the following principles:

  1. Connectivity is a Fundamental Enabler of Human Rights: In the 21st century, the right to assemble, the right to speak, and the right to access information are inextricably linked to internet access.
  2. Protecting the Global Internet Commons: National-scale shutdowns fragment the global network, undermining the stability and trust required for the internet to function as a global commons.
  3. Transparency: The technical community condemns the use of BGP manipulation and infrastructure filtering to obscure events on the ground.”

Read the letter in full here

Paige Collings

EFF Condemns FBI Search of Washington Post Reporter’s Home

2 weeks 6 days ago

Government invasion of a reporter’s home, and seizure of journalistic materials, is exactly the kind of abuse of power the First Amendment is designed to prevent. It represents the most extreme form of press intimidation. 

Yet, that’s what happened on Wednesday morning to Washington Post reporter Hannah Natanson, when the FBI searched her Virginia home and took her phone, two laptops, and a Garmin watch. 

The Electronic Frontier Foundation has joined 30 other press freedom and civil liberties organizations in condemning the FBI’s actions against Natanson. The First Amendment exists precisely to prevent the government from using its powers to punish or deter reporting on matters of public interest—including coverage of leaked or sensitive information. Searches like this threaten not only journalists, but the public’s right to know what its government is doing.

In the statement published yesterday, we call on Congress: 

To exercise oversight of the DOJ by calling Attorney General Pam Bondi before Congress to answer questions about the FBI’s actions; 

To reintroduce and pass the PRESS Act, which would limit government surveillance of journalists, and its ability to compel journalists to reveal sources; 

To reform the 108-year-old Espionage Act so it can no longer be used to intimidate and attack journalists. 

And to pass a resolution confirming that the recording of law enforcement activity is protected by the First Amendment. 

We’re joined on this letter by Free Press Action, the American Civil Liberties Union, PEN America, the NewsGuild-CWA, the Society of Professional Journalists, the Committee to Protect Journalists, and many other press freedom and civil liberties groups.

Joe Mullin

EFF to California Appeals Court: First Amendment Protects Journalist from Tech Executive’s Meritless Lawsuit

2 weeks 6 days ago

EFF asked a California appeals court to uphold a lower court’s decision to strike a lawsuit that tech CEO Maury Blackman filed against a journalist in an effort to silence reporting he didn’t like.

The journalist, Jack Poulson, reported on Maury Blackman’s arrest for felony domestic violence after receiving a copy of the arrest report from a confidential source. Blackman didn’t like that. So, he sued Poulson—along with Substack, Amazon Web Services, and Poulson’s non-profit, Tech Inquiry—to try and force Poulson to take his articles down from the internet.

Fortunately, the trial court saw this case for what it was: a classic SLAPP, or a strategic lawsuit against public participation. The court dismissed the entire complaint under California’s anti-SLAPP statute, which provides a way for defendants to swiftly defeat baseless claims designed to chill their free speech.

The appeals court should affirm the trial court’s correct decision.  

Poulson’s reporting is just the kind of activity that the state’s anti-SLAPP law was designed to protect: truthful speech about a matter of public interest. The felony domestic violence arrest of the CEO of a controversial surveillance company with U.S. military contracts is undoubtedly a matter of public interest. As we explained to the court, “the public has a clear interest in knowing about the people their government is doing business with.”

Blackman’s claims are totally meritless, because they are barred by the First Amendment. The First Amendment protects Poulson’s right to publish and report on the incident report. Blackman argues that a court order sealing the arrest overrides Poulson’s right to report the news—despite decades of Supreme Court and California Court of Appeals precedent to the contrary. The trial court correctly rejected this argument and found that the First Amendment defeats all of Blackman’s claims. As the trial court explained, “the First Amendment’s protections for the publication of truthful speech concerning matters of public interest vitiate Blackman’s merits showing.”

The court of appeals should reach the same conclusion.

Related Cases: Blackman v. Substack, et al.
Karen Gullo

Baton Rouge Acquires a Straight-Up Military Surveillance Drone

2 weeks 6 days ago

The Baton Rouge Police Department announced this week that it will begin using a drone designed by military equipment manufacturer Lockheed Martin and Edge Autonomy, making it one of the first local police departments in the United States to deploy an unmanned aerial vehicle (UAV) with a history of primary use in foreign war zones. Deploying a drone with such extensive surveillance capabilities is a dangerous escalation in the militarization of local law enforcement.

This is a troubling development in an already long history of local law enforcement acquiring and utilizing military-grade surveillance equipment. It should be a cautionary tale that prods communities across the country to be proactive in ensuring that drones can only be acquired and used in ways that are well-documented, transparent, and subject to public feedback.

Baton Rouge bought the Stalker VXE30 from Edge Autonomy, which partners with Lockheed Martin and began operating under the brand Redwire this week. According to reporting from WBRZ ABC2 in Louisiana, the drone, training, and batteries cost about $1 million.

Baton Rouge Police Department officers stand with the Stalker VXE30 drone in a photo shared by the BRPD via Facebook.

All of the regular concerns surrounding drones apply to this new one in use by Baton Rouge:

  • Drones can access and view spaces that are otherwise off-limits to law enforcement, including backyards, decks, and other areas of personal property.
  • Footage captured by camera-enabled drones may be stored and shared in ways that go far beyond the initial flight.
  • Additional camera-based surveillance can be installed on the drone, including automated license plate readers and the retroactive application of biometric analysis, such as face recognition.

However, the use of a military-grade drone hypercharges these concerns. The Stalker VXE30's surveillance capabilities extend for dozens of miles, and it can fly faster and longer than standard police drones already in use.

“It can be miles away, but we can still have a camera looking at your face, so we can use it for surveillance operations," BRPD Police Chief TJ Morse told reporters.

Drone models similar to the Stalker VXE30 have been used in military operations around the world and are currently being used by the U.S. Army and other branches for long-range reconnaissance. Typically, police departments deploy drone models similar to those commercially available from companies like DJI, which until recently was the subject of a proposed Federal Communications Commission (FCC) ban, or devices provided by police technology companies like Skydio, in partnership with Axon and Flock Safety.

Also troubling is the capacity to add further equipment to these drones: so-called “payloads” that could include other types of surveillance equipment and even weapons.

The Baton Rouge community must put policies in place that restrict and provide oversight of any possible uses of this drone, as well as any potential additions law enforcement might make. 

EFF has filed a public records request to learn more about the conditions of this acquisition and gaps in oversight policies. We've been tracking the expansion of police drone surveillance for years, and this acquisition represents a dangerous new frontier. We'll continue investigating and supporting communities fighting back against the militarization of local police and mass surveillance. To learn more about the surveillance technologies being used in your city, please check out the Atlas of Surveillance.

Beryl Lipton