California: Tweet at Governor Newsom to Get A.B. 566 Signed Into Law

14 hours 24 minutes ago

We need your help to make a common-sense bill into California law. Despite the fact that California has one of the nation’s most comprehensive data privacy laws, it’s not always easy for people to exercise those privacy rights. A.B. 566 intends to make it easy by directing browsers to give all their users the option to tell companies they don’t want personal information that’s collected about them on the internet to be sold or shared. Now, we just need Governor Gavin Newsom to sign it into law by October 13, 2025, and this toolkit will help us put on the pressure. Tweet at Gov. Gavin Newsom and help us get A.B. 566 signed into law!

First, pick your platform of choice. Reach Gov. Newsom at any of his social media handles:

Then, pick a message that resonates with you. Or, feel free to remix!

Sample Posts

  • It should be easy for Californians to exercise our rights under the California Consumer Privacy Act, but major internet browser companies are making it difficult for us to do that. @CAgovernor, sign AB 566 and give power to the consumers to protect their privacy!
  • We are living in a time of mass surveillance and tracking. Californian consumers should be able to easily control their privacy and AB 566 would make that possible. @CAgovernor, sign AB 566 and ensure that millions of Californians can opt out of the sale and sharing of their private information!
  • People seeking abortion care, immigrants, and LGBTQ+ people are at risk of bad actors using their online activity against them. @CAgovernor could sign AB 566 and protect the privacy of vulnerable communities and all Californians.
  • AB 566 gives Californians a practical way to use their right to opt out of websites selling or sharing their private info. @CAgovernor can sign it and give consumers power over their privacy choices under the California Consumer Privacy Act.
  • Hey @CAgovernor! AB 566 makes it easy for Californians to tell companies what they want to happen with their own private information. Sign it and make the California Consumer Privacy Act more user-friendly!
  • Companies haven’t made it easy for Californians to tell companies not to sell or share their personal information. We need AB 566 so that browsers MUST give users the option to easily opt out of this data sharing. @CAgovernor, sign AB 566!
  • Major browsers have made it hard for Californians to opt out of the sharing and sale of their private info. Right now, consumers must individually opt out at every website they visit. AB 566 can change that by requiring browsers to create one single opt-out preference, but @CAgovernor MUST sign it!
  • It should be easy for Californians to opt out of the sharing and sale of their private info, such as health info, immigration status, and political affiliation, but browsers have made it difficult. @CAgovernor can sign AB 566 and give power to consumers to more easily opt out of this data sharing.
  • Right now, if a Californian wants to tell companies not to sell or share their info, they must go through the processes set up by each company, ONE BY ONE, to opt out of data sharing. AB 566 can remove that burden. @CAgovernor, sign AB 566 to empower consumers!
  • Industry groups who want to keep the scales tipped in favor of corporations who want to profit off the sale of our private info have lobbied heavily against AB 566, a bill that will make it easy for Californians to tell companies what they want to happen with their own info. @CAgovernor—sign it!
Kenyatta Thomas

Yes to California’s “No Robo Bosses Act”

1 day 11 hours ago

California’s Governor should sign S.B. 7, a common-sense bill to end some of the harshest consequences of automated abuse at work. EFF is proud to join dozens of labor, digital rights, and other advocates in support of the “No Robo Bosses Act.”

Algorithmic decision-making is a growing threat to workers. Bosses are using AI to assess the body language and voice tone of job candidates. They’re using algorithms to predict when employees are organizing a union or planning to quit. They’re automating choices about who gets fired. And these employment algorithms often discriminate based on gender, race, and other protected statuses. Fortunately, many advocates are resisting.

What the Bill Does

S.B. 7 is a strong step in the right direction. It addresses “automated decision systems” (ADS) across the full landscape of employment. It applies to bosses in the private and government sectors, and it protects workers who are employees and contractors. It addresses all manner of employment decisions that involve automated decision-making, including hiring, wages, hours, duties, promotion, discipline, and termination. It covers bosses using ADS to assist or replace a person making a decision about another person.

Algorithmic decision-making is a growing threat to workers.

The bill requires employers to be transparent when they rely on ADS. Before using it to make a decision about a job applicant or current worker, a boss must notify them about the use of ADS. The notice must be in a stand-alone, plain language communication. The notice to a current worker must disclose the types of decisions subject to ADS, and a boss cannot use an ADS for an undisclosed purpose. Further, the notice to a current worker must disclose information about how the ADS works, including what information goes in and how it arrives at its decision (such as whether some factors are weighed more heavily than others).

The bill provides some due process to current workers who face discipline or termination based on the ADS. A boss cannot fire or punish a worker based solely on ADS. Before a boss does so based primarily on ADS, they must ensure a person reviews both the ADS output and other relevant information. A boss must also notify the affected worker of such use of ADS. A boss cannot use customer ratings as the only or primary input for such decisions. And every worker can obtain a copy of the most recent year of their own data that their boss might use as ADS input to punish or fire them.

Other provisions of the bill will further protect workers. A boss must maintain an updated list of all ADS it currently uses. A boss cannot use ADS to violate the law, to infer whether a worker is a member of a protected class, or to target a worker for exercising their labor and other rights. Further, a boss cannot retaliate against a worker who exercises their rights under this new law. Local laws are not preempted, so our cities and counties are free to enact additional protections.

Next Steps

The “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring.

EFF has long been fighting such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

Adam Schwartz

Meta is Removing Abortion Advocates' Accounts Without Warning

2 days 10 hours ago

This is the fifth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When the team at Women Help Women signed into Instagram last winter, they were met with a distressing surprise: without warning, Meta had disabled their account. The abortion advocacy non-profit organization found itself suddenly cut off from its tens of thousands of followers, with limited recourse. Meta claimed Women Help Women had violated its Community Standards on “guns, drugs, and other restricted goods,” but the organization told EFF it uses Instagram only to communicate about safe abortion practices, including sharing educational content and messages aimed at reducing stigma. Eventually, Women Help Women was able to restore its account—but only after launching a public campaign and receiving national news coverage.

Unfortunately, Women Help Women’s experience is not unique. Around a quarter of our Stop Censoring Abortion campaign submissions reported that their entire account or page had been disabled or taken down after sharing abortion information—primarily on Meta platforms. This troubling pattern indicates that the censorship crisis goes beyond content removal. Accounts providing crucial reproductive health information are disappearing, often without warning, cutting users off from their communities and followers entirely.


What's worse, Meta appears to be imposing these negative account actions without clearly adhering to its own enforcement policies. Meta’s own Transparency Center stipulates that an account should receive multiple Community Standards violations or warnings before it is restricted or disabled. Yet many affected users told EFF they experienced negative account actions without any warning at all, or after only one alleged violation (many of which were incorrectly flagged, as we’ve explained elsewhere in this series). 

While Meta clearly has the right to remove accounts from its platforms, disabling or banning an account is an extreme measure. It completely silences a user, cutting off communication with their followers and preventing them from sharing any information, let alone abortion information. Because of this severity, Meta should be extremely careful to ensure fairness and accuracy when disabling or removing accounts. Rules governing account removal should be transparent and easy to understand, and Meta must enforce these policies consistently across different users and categories of content. But as our Stop Censoring Abortion results demonstrate, this isn't happening for many accounts sharing abortion information.  

Meta's Maze of Enforcement Policies 

If you navigate to Meta’s Transparency Center, you’ll find a page titled “How Meta enforces its policies.” This page contains a web of intersecting policies on when Meta will restrict accounts, disable accounts, and remove pages and groups. These policies overlap but don’t directly refer to each other, making it trickier for users to piece together how enforcement happens. 

At the heart of Meta's enforcement process is a strike system. Users receive strikes for posting content that violates Meta’s Community Standards. But not all Community Standards violations result in strikes, and whether Meta applies one depends on the “severity of the content” and the “context in which it was shared.” Meta provides little additional guidance on what violations are severe enough to amount to a strike or how context affects this assessment.  

According to Meta's Restricting Accounts policy, for most violations, 1 strike should only result in a warning—not any action against the account. How additional strikes affect an account differs between Facebook and Instagram (but Meta provides no specific guidance for Threads). Facebook relies on a progressive system, where additional strikes lead to increasing restrictions. Enforcement on Instagram is more opaque and leaves more to Meta’s discretion. Meta still counts strikes on Instagram, but it does not follow the same escalating structure of restrictions as it does on Facebook. 

Despite some vagueness in these policies, Meta is quite clear about one thing: On both Facebook and Instagram, an account should only be disabled or removed after “repeated” violations, warnings, or strikes. Meta states this multiple times throughout its enforcement policies. Its Disabling Accounts policy suggests that generally, an account needs to receive at least 5 strikes for Meta to disable or remove it from the platform. The only caveat is for severe violations, such as posting child sexual exploitation content or violating the dangerous individuals and organizations policy. In those extreme cases, Meta may disable an account after just one violation. 
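To make the gap between policy and practice easier to see, here is a toy model of the escalation logic as we read Meta’s published policies: a first strike should produce only a warning, additional strikes bring progressively harsher restrictions (at least on Facebook), disabling generally requires repeated strikes (roughly five), and only a narrow class of severe violations can disable an account outright. This is an illustration of our reading of the Transparency Center pages described above, not Meta’s actual code; every name and threshold below is an assumption.

```typescript
// Toy model of the enforcement flow Meta's public policies describe.
// Names and thresholds are illustrative assumptions, not Meta's implementation.

type Enforcement = "warning" | "feature-restriction" | "account-disabled";

interface AccountState {
  strikes: number; // per Meta's policy, strikes expire after one year
}

const DISABLE_THRESHOLD = 5; // "at least 5 strikes" per the Disabling Accounts policy

function applyViolation(
  account: AccountState,
  severeViolation: boolean // e.g., child sexual exploitation or dangerous-organizations content
): Enforcement {
  if (severeViolation) {
    // Extreme cases may disable an account after a single violation.
    return "account-disabled";
  }

  account.strikes += 1;

  if (account.strikes === 1) {
    // For most violations, a first strike should result only in a warning.
    return "warning";
  }
  if (account.strikes < DISABLE_THRESHOLD) {
    // Additional strikes bring increasing restrictions (Facebook's progressive system).
    return "feature-restriction";
  }
  // Only repeated strikes should lead to an account being disabled or removed.
  return "account-disabled";
}
```

Measured against even this simple model, an account whose only offense is a single (and often incorrectly flagged) educational post should see, at most, a warning, not removal from the platform.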

Meta’s Practices Don’t Match Its Policies 

Our survey results detailed a different reality. Many survey respondents told EFF that Meta disabled or removed their account without warning and without indication that they had received repeated strikes.  It’s important to note that Meta does not have a unique enforcement process for prescription drug or abortion-related content. When EFF asked Meta about this issue, Meta confirmed that "enforcement actions on prescription drugs are subject to Meta's standard enforcement policies.” 

So here are a few other possible explanations for this disconnect—each troubling in its own way:

Meta is Ignoring Its Own Strike System 

If Meta is taking down accounts without warning or after only one alleged Community Standards violation, the company is failing to follow its own strike system. This makes enforcement arbitrary and denies users the opportunity for correction that Meta's system supposedly provides. It’s also especially problematic for abortion advocates, given that Meta has been incorrectly flagging educational abortion content as violating its Community Standards. This means a single content moderation error could take down not just the post, but the entire account.

This may be what happened to Emory University’s RISE Center for Reproductive Health Research (a story we described in more detail earlier in this series). After RISE shared an educational post about mifepristone, its Instagram account was suddenly disabled. RISE received no earlier warnings from Meta before its account went dark. When RISE was finally able to get back into its account, it discovered that only this single post had been flagged. Again, according to Meta's own policies, one strike should only result in a warning. But this isn’t what happened here.

Similarly, the Tamtang Foundation, an abortion advocacy organization based in Thailand, had its Facebook account suddenly disabled earlier this year. Tamtang told EFF it had received a warning on only one flagged post, which it had posted 10 months before its account was taken down. It received none of the other progressive strike restrictions Meta claims to apply to Facebook accounts.


Meta is Misclassifying Educational Content as "Extreme Violations" 

If Meta is accurately following its strike policy but still disabling accounts after only one violation, this points to an even more concerning possibility. Meta’s content moderation system may be categorizing educational abortion information as severe enough to warrant immediate disabling, treating university research posts and clinic educational materials as equivalent to child exploitation or terrorist content.  

This would be a fundamental and dangerous mischaracterization of legitimate medical information, and it is, we hope, unlikely. But it’s unfortunately not outside the realm of possibility. We already wrote about a similar disturbing mischaracterization earlier in this series. 

Users Are Unknowingly Receiving Multiple Strikes 

Finally, Meta may be giving users multiple strikes without notifying them. This raises several serious concerns.

First is the lack of transparency. Meta explicitly states in its "Restricting Accounts" policy that it will notify users when it “remove[s] your content or add[s] restrictions to your account, Page or group.” This policy is failing if users are not receiving these notifications and are not made aware there’s an issue with their account. 

It may also mean that Meta’s policies themselves are too vague to provide meaningful guidance to users. This lack of clarity is harmful. If users don’t know what's happening to their accounts, they can’t appeal Meta’s content moderation decisions, adjust their content, or understand Meta's enforcement boundaries moving forward. 

Finally—and most troubling—if Meta is indeed disabling accounts that share abortion information for receiving multiple violations, this points to an even broader censorship crisis. Users may not be aware just how many informational abortion-related posts are being incorrectly flagged and counted as strikes. This is especially concerning given that Meta places a one-year time limit on strikes, meaning the multiple alleged violations could not have accumulated over multiple years.  

The Broader Censorship Crisis 

These account suspensions represent just one facet of Meta's censorship of reproductive health information documented by our Stop Censoring Abortion campaign. When combined with post removals, shadowbanning, and content restrictions, the message is clear: Meta platforms are increasingly unfriendly environments for abortion advocacy and education. 

If Meta wants to practice what it preaches, then it must reform its enforcement policies to provide clear, transparent guidelines on when and how strikes apply, and then consistently and accurately apply those policies. Accounts should not be taken down for only one alleged violation when the policies state otherwise.  

The stakes couldn't be higher. In a post-Roe landscape where access to accurate reproductive health information is more crucial than ever, Meta's enforcement system is silencing the very voices communities need most. 

This is the fifth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion  

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 

Lisa Femia

Governor Newsom Should Make it Easier to Exercise Our Privacy Rights

2 days 12 hours ago

California has one of the nation’s most comprehensive consumer data privacy laws. But it’s not always easy for people to exercise those privacy rights. That’s why we supported Assemblymember Josh Lowenthal’s A.B. 566 throughout the legislative session and are now asking California Governor Gavin Newsom to sign it into law. 

The easier it is to exercise your rights, the more power you have.  

A.B. 566 does a very simple thing. It directs browsers—such as Google’s Chrome, Apple’s Safari, Microsoft’s Edge, or Mozilla’s Firefox—to give all their users the option to tell companies they don’t want the personal information that’s collected about them on the internet to be sold or shared. In other words: it makes it easy for Californians to tell companies what they want to happen with their own information.
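A.B. 566 does not tie browsers to a single technical standard, but the existing Global Privacy Control (GPC) signal illustrates the kind of mechanism at issue: the browser attaches a simple signal to web requests, and a site that receives it treats the visitor as having asked that their personal information not be sold or shared. The sketch below shows roughly how a site might honor such a signal; the interface, the registry, and the function names are hypothetical illustrations, not anything drawn from the bill’s text or a real implementation.

```typescript
// Illustrative sketch of honoring a GPC-style opt-out preference signal.
// The GPC proposal exposes the signal as an HTTP request header ("Sec-GPC: 1")
// and as a JavaScript property (navigator.globalPrivacyControl).
// Everything else here (names, the registry) is a hypothetical illustration.

interface IncomingRequestLike {
  headers: Record<string, string | undefined>;
}

function visitorHasOptedOut(req: IncomingRequestLike): boolean {
  // A value of "1" means the user has turned the signal on in their browser.
  return req.headers["sec-gpc"] === "1";
}

// A hypothetical registry a site might use to record the request so that
// downstream ad-tech and data-sharing integrations are suppressed.
class OptOutRegistry {
  private optedOut = new Set<string>();

  record(userId: string): void {
    this.optedOut.add(userId);
  }

  isOptedOut(userId: string): boolean {
    return this.optedOut.has(userId);
  }
}

const registry = new OptOutRegistry();

function handleRequest(req: IncomingRequestLike, userId: string): void {
  if (visitorHasOptedOut(req)) {
    // Treat the signal as a "do not sell or share" request under the CCPA.
    registry.record(userId);
  }
}
```

The bill’s contribution is on the browser side of this exchange: instead of leaving such signals to extensions or hard-to-find settings, browsers would have to offer the option to every user.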

By making it easy to use tools that allow you to send these sorts of signals to companies’ websites, A.B. 566 makes the California Consumer Privacy Act more user-friendly. And the easier it is to exercise your rights, the more power you have.  

This is a necessary step, because even though the CCPA gives all people in California the right to tell companies not to sell or share their personal information, companies have not made it easy to exercise this right. Right now, someone who wants to make these requests has to individually go through the processes set up by each company that may collect their information. Companies have also often made it pretty hard to make, or even find out how to make, these requests. Giving people the option for an easier way to communicate how they want companies to treat their personal information helps rebalance the often-lopsided relationship between the two.

Industry groups who want to keep the scales tipped firmly in favor of corporations have lobbied heavily against A.B. 566. But we urge Gov. Newsom not to listen to those who want it to remain difficult for people to exercise their CCPA rights. EFF’s technologists, lawyers, and advocates think A.B. 566 empowers consumers without imposing regulations that would limit innovation. We think Californians should have easy tools to tell companies how to deal with their information, and urge Gov. Newsom to sign this bill.

Hayley Tsukayama

Safeguarding Human Rights Must Be Integral to the ICC Office of the Prosecutor’s Approach to Tech-Enabled Crimes

2 days 16 hours ago

This is Part I of a two-part series on EFF’s comments to the International Criminal Court Office of the Prosecutor (OTP) about its draft policy on cyber-enabled crimes.

As human rights atrocities around the world unfold in the digital age, genocide, war crimes and crimes against humanity are as heinous and wrongful as they were before the advent of AI and social media.

But criminal methods and evidence increasingly involve technology. Think mass digital surveillance of an ethnic or religious community used to persecute them as part of a widespread or systematic attack against civilians, or cyberattacks that disable hospitals or other essential services, causing injury or death.

The International Criminal Court (ICC) Office of the Prosecutor (OTP) intends to use its mandate and powers to investigate and prosecute cyber-enabled crimes within the court's jurisdiction—those covered under the 1998 Rome Statute treaty. In March 2025, the office released for public comment a draft of its proposed policy setting out how it plans to go about it.

We welcome the OTP draft and urge the OTP to ensure its approach is consistent with internationally recognized human rights, including the rights to free expression, to privacy (with encryption as a vital safeguard), and to fair trial and due process.

We believe those who use digital tools to commit genocide, crimes against humanity, or war crimes should face justice. At the same time, EFF, along with our partner Derechos Digitales, emphasized in comments submitted to the OTP that safeguarding human rights must be integral to its investigations of cyber-enabled crimes.

That’s how we protect survivors, prevent overreach, gather evidence that can withstand judicial scrutiny, and hold perpetrators to account. In a similar context, we’ve opposed abusive domestic cybercrime laws and policing powers that invite censorship, arbitrary surveillance, and other human rights abuses.

In this two-part series, we’ll provide background on the ICC and OTP’s draft policy, including what we like about the policy and areas that raise questions.

OTP Defines Cyber-Enabled Crimes

The ICC, established by the Rome Statute, is the permanent international criminal court with jurisdiction over individuals for four core crimes—genocide, crimes against humanity, war crimes, and the crime of aggression. It also exercises jurisdiction over offences against the administration of justice at the court itself. Within the court, the OTP is an independent organization responsible for investigating these crimes and prosecuting them.

The OTP’s draft policy explains how it will apply the statute when crimes are committed or facilitated by digital means, while emphasizing that ordinary cybercrimes (e.g., hacking, fraud, data theft) are outside ICC jurisdiction and remain the responsibility of national courts to address.

The OTP defines “cyber-enabled crime” as crimes within the court’s jurisdiction that are committed or facilitated by technology. “Committed by” covers cases where the online act is itself the harmful act (or an essential digital contribution to it): for example, if malware is used to disable a hospital and people are injured or die, the cyber operation can be the attack itself.

A crime is “facilitated by” technology, according to the OTP draft, when digital activity helps someone commit a crime under modes of liability other than direct commission (e.g., ordering, inducing, aiding or abetting), and it doesn’t matter if the main crime was itself committed online. For example, authorities use mass digital surveillance to locate members of a protected group, enabling arrests and abuses as part of a widespread or systematic attack (i.e., persecution).

It further makes clear that the OTP will use its full investigative powers under the Rome Statute—relying on national authorities acting under domestic law and, where possible, on voluntary cooperation from private entities—to secure digital evidence across borders.

Such investigations can be highly intrusive and risk sweeping up data about people beyond the target. Yet many states’ current investigative practices fall short of international human rights standards. The draft should therefore make clear that cooperating states must meet those standards, including by assessing whether they can conduct surveillance in a manner consistent with the rule of law and the right to privacy.

Digital Conduct as Evidence of Rome Statute Crimes

Even when no ICC crime happens entirely online, the OTP says online activity can still be relevant evidence. Digital conduct can help show intent, context, or policies behind abuses (for example, to prove a persecution campaign), and it can also reveal efforts to hide or exploit crimes (like propaganda). In simple terms, online activity can corroborate patterns, link incidents, and support inferences about motive, policy, and scale relevant to these crimes.

The prosecution of such crimes or the use of related evidence must be consistent with internationally recognized human rights standards, including privacy and freedom of expression, the very freedoms that allow human rights defenders, journalists, and ordinary users to document and share evidence of abuses.

In Part II we’ll take a closer look at the substance of our comments about the policy’s strengths and our recommendations for improvements and more clarity.

Karen Gullo

EFF Statement on TikTok Ownership Deal

3 days 10 hours ago

One of the reasons we opposed the TikTok "ban" is that the First Amendment is supposed to protect us from government using its power to manipulate speech. But as predicted, the TikTok "ban" has only resulted in turning over the platform to the allies of a president who seems to have no respect for the First Amendment.

TikTok was never proven to be a current national security problem, so it's hard to say the sale will alleviate those unproven concerns. And it remains to be seen if the deal places any limits on the new ownership sharing user data with foreign governments or anyone else—the security concern that purportedly justified the forced sale. As for the algorithm, if the concern had been that TikTok could be a conduit for Chinese government propaganda—a concern the Supreme Court declined to even consider—people can now be concerned that TikTok could be a conduit for U.S. government propaganda. An administration official reportedly has said the new TikTok algorithm will be "retrained" with U.S. data to make sure the system is "behaving properly."

David Greene

Going Viral vs. Going Dark: Why Extremism Trends and Abortion Content Gets Censored

3 days 12 hours ago

This is the fourth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

One of the goals of our Stop Censoring Abortion campaign was to put names, stories, and numbers to the experiences we’d been hearing about: people and organizations having their abortion-related content – or entire accounts – removed or suppressed on social media. In reviewing survey submissions, we found that multiple users reported experiencing shadowbanning. Shadowbanning (or “deranking”) is widely experienced and reported by content creators across various social media platforms, and it’s a phenomenon that those who create content about abortion and sexual and reproductive health know all too well.

Shadowbanning is the often silent suppression of certain types of content or creators in your social media feeds. It’s not something that a U.S.-based creator is notified about, but rather something they simply find out when their posts stop getting the same level of engagement that they’re used to, or when people are unable to easily find their account using the platform’s search function. Essentially, it is when a platform or its algorithm decides that other users should see less of a creator or specific topic. Many platforms deny that shadowbanning exists; they will often blame reduced reach of posts on ‘bugs’ in the algorithm. At the same time, companies like Meta have admitted that content is ranked, but much about how this ranking system works remains unknown. Meta says that there are five content categories that, while allowed on its platforms, “may not be eligible for recommendation.” Content discussing abortion pills may fall under the umbrella of “Content that promotes the use of certain regulated products,” but posts that simply affirm abortion as a valid reproductive decision, or posts from storytellers sharing their experiences, don’t match any of the criteria that would make them ineligible for recommendation by Meta.

Whether a creator relies on a platform for income or uses it to educate the public, shadowbanning can be devastating for the growth of an account. And this practice often seems to disproportionately affect people who are talking about ‘taboo’ topics like sex, abortion, and LGBTQ+ identities, such as Kim Adamski, a sexual health educator who shared her story with our Stop Censoring Abortion project. As you can see in the images below, Kim’s Instagram account does not show up as a suggestion when being searched, and can only be found after typing in the full username.


Earlier this year, the Center for Intimacy Justice shared their report, "The Digital Gag: Suppression of Sexual and Reproductive Health on Meta, TikTok, Amazon, and Google", which found that of the 159 nonprofits, content creators, sex educators, and businesses surveyed, 63% had content removed on Meta platforms and 55% had content removed on TikTok. This suppression is happening at the same time as platforms continue to allow and elevate videos of violence and gore and extremist hateful content. This pattern is troubling and is only becoming more prevalent as people turn to social media to find the information they need to make decisions about their health.

Reproductive rights and sex education have been under attack across the U.S. for decades. Since the Dobbs v. Jackson decision in 2022, 20 states have banned or limited access to abortion. Meanwhile, 16 states don’t require sex education in public schools to be medically accurate, 19 states have laws that stigmatize LGBTQ+ identities in their sex education curricula, and 17 states specifically stigmatize abortion in their sex education curricula.

In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge.

Online platforms are critical lifelines for people seeking possibly life-saving information about their sexual and reproductive health. We know that when people are unable to find or access the information they need within their communities, they will turn to the internet and social media. This is especially important for abortion-seekers and trans youth living in states where healthcare is being criminalized.

In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge. Limiting access to this information by suppressing the people and organizations who are providing it is an attack on free expression and a profound threat to freedom of information—principles that these platforms claim to uphold. Now more than ever, we must continue to push back against censorship of sexual and reproductive health information so that the internet can still be a place where all voices are heard and where all can learn.

This is the fourth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion

Kenyatta Thomas

That Drone in the Sky Could Be Tracking Your Car

3 days 21 hours ago

Police are using their drones as flying automated license plate readers (ALPRs), airborne police cameras that make it easier than ever for law enforcement to follow you. 

"The Flock Safety drone, specifically, are flying LPR cameras as well,” Rahul Sidhu, Vice President of Aviation at Flock Safety, recently told a group of potential law enforcement customers interested in drone-as-first-responder (DFR) programs

The integration of Flock Safety’s flagship ALPR technology with its Aerodome drone equipment is a police surveillance combo poised to elevate the privacy threats to civilians caused by both of these invasive technologies as drone adoption expands. 


A slide from a Flock Safety presentation to Rutherford County Sheriff's Office in North Carolina, obtained via public records, featuring Flock Safety products, including the Aerodome drone and the Wing product, which helps convert surveillance cameras into ALPR systems

The use of DFR programs has grown exponentially. The biggest police technology companies, like Axon, Flock Safety, and Motorola Solutions, are broadening their drone offerings, anticipating that drones could become an important piece of their revenue stream. 

Communities must demand restrictions on how local police use drones and ALPRs, and especially on a dangerous hybrid of the two. Otherwise, we can soon expect that a drone will fly to any call for service and capture sensitive location information about every car in its flight path, adding more ALPR data to the already too large databases of our movements.

ALPR systems typically rely on cameras that have been fixed along roadways or attached to police vehicles. These cameras capture the image of a vehicle, then use artificial intelligence technology to log the license plate, make, model, color, and other unique identifying information, like dents and bumper stickers. This information is usually stored on the manufacturer’s servers and often made available on nationwide sharing networks to police departments from other states and federal agencies, including Immigration and Customs Enforcement. ALPRs are already used by most of the largest police departments in the country, and Flock Safety also now offers the ability for an agency to turn almost any internet-enabled camera into an ALPR camera.
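To make the stakes concrete, here is a sketch of the kind of record a system like the ones described above might produce each time a camera sees a vehicle, and of how those records accumulate into a travel history. The field names and types are illustrative assumptions, not any vendor’s actual schema.

```typescript
// Hypothetical shape of a single ALPR detection, based on the kinds of fields
// described above (plate, make, model, color, distinguishing features, time, place).
// Field names are illustrative assumptions, not any vendor's schema.

interface AlprDetection {
  plate: string;                      // plate as read by the system (may be misread)
  make?: string;                      // vehicle make inferred by the vision model
  model?: string;
  color?: string;
  distinguishingFeatures?: string[];  // e.g., dents, bumper stickers
  capturedAt: Date;                   // timestamp of the sighting
  location: { latitude: number; longitude: number }; // where the camera saw the car
  cameraId: string;                   // fixed camera, patrol car, or drone
}

// Every sighting is a timestamped location point. Collected at scale and shared
// across agencies, the same records can be sorted into a travel history.
function buildTravelHistory(detections: AlprDetection[], plate: string): AlprDetection[] {
  return detections
    .filter((d) => d.plate === plate)
    .sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime());
}
```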

ALPRs present a host of problems. ALPR systems vacuum up data—like the make, model, color, and location of vehicles—on people who will never be involved in a crime, and they are used to grid areas, systematically recording when and where vehicles have been. ALPRs routinely make mistakes, causing police to stop the wrong car and terrorize the driver. Officers have abused law enforcement databases in hundreds of cases. Police have used them to track people seeking legal health procedures across state lines. Even when there are laws against sharing data from these tools with other departments, some policing agencies still do.

Drones, meanwhile, give police a view of roofs, backyards, and other fenced areas where cops can’t casually patrol, and their adoption is becoming more common. Companies that sell drones have been helping law enforcement agencies get certifications from the Federal Aviation Administration (FAA), and recently implemented changes to the restrictions on flying drones beyond the visual line of sight will make it even easier for police to add this equipment. According to the FAA, since a new DFR waiver process was implemented in May 2025, it has granted more than 410 such waivers, already accounting for almost a third of the approximately 1,400 DFR waivers that have been granted since such programs began in 2018.

Local officials should, of course, be informed that the drones they’re buying are equipped to do such granular surveillance from the sky, but it is not clear that this is happening. While the ALPR feature is available as part of Flock drone acquisitions, some government customers may not realize that approving a drone from Flock Safety may also mean approving a flying ALPR. And though not every Flock Safety drone is currently running the ALPR feature, some departments, like the Redondo Beach Police Department, have plans to activate it in the near future.

ALPRs aren’t the only so-called payloads that can be added to a drone. In addition to the high resolution and thermal cameras with which drones can already be equipped, drone manufacturers and police departments have discussed adding cell-site simulators, weapons, microphones, and other equipment. Communities must mobilize now to keep this runaway surveillance technology under tight control.

When EFF posed questions to Flock Safety about the integration of ALPR and its drones, the company declined to comment.

Mapping, storing, and tracking as much personal information as possible—all without warrants—is where automated police surveillance is heading right now. Flock has previously described its desire to connect ALPR scans to additional information on the person who owns the car, meaning that we don’t live far from a time when police may see your vehicle drive by and quickly learn that it’s your car and a host of other details about you. 

EFF has compiled a list of known drone-using police departments. Find out about your town’s surveillance tools at the Atlas of Surveillance. Know something we don't? Reach out at aos@eff.org.

Beryl Lipton

Companies Must Provide Accurate and Transparent Information to Users When Posts are Removed

6 days 17 hours ago

This is the third installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

Imagine sharing information about reproductive health care on social media and receiving a message that your content has been removed for violating a policy intended to curb online extremism. That’s exactly what happened to one person using Instagram who shared her story with our Stop Censoring Abortion project.

Meta’s rules for “Dangerous Organizations and Individuals” (DOI) were supposed to be narrow: a way to prevent the platform from being used by terrorist groups, organized crime, and those engaged in violent or criminal activity. But over the years, we’ve seen these rules applied in far broader—and more troubling—ways, with little transparency and significant impact on marginalized voices.

EFF has long warned that the DOI policy is opaque, inconsistently enforced, and prone to overreach. The policy has been critiqued by others for its opacity and propensity to disproportionately censor marginalized groups.

Samantha Shoemaker's post about Plan C was flagged under Meta's policy on dangerous organizations and individuals

Meta has since added examples and clarifications in its Transparency Center to this and other policies, but their implementation still leaves users in the dark about what’s allowed and what isn’t.

The case we received illustrates just how harmful this lack of clarity can be. Samantha Shoemaker, an individual sharing information about abortion care, posted straightforward facts about accessing abortion pills. Her posts included:

  • A video linking to Plan C’s website, which lists organizations that provide abortion pills in different states.

  • A reshared image from Plan C’s own Instagram account encouraging people to learn about advance provision of abortion pills.

  • A short clip of women talking about their experiences taking abortion pills.

Information Provided to Users Must Be Accurate

Instead of allowing her to facilitate informed discussion, Instagram flagged some of her posts under its “Prescription Drugs” policy, while others were removed under the DOI policy—the same set of rules meant to stop violent extremism from being shared.

We recognize that moderation systems—both human and automated—will make mistakes. But when Meta equates medically accurate, harm-reducing information about abortion with “dangerous organizations,” it underscores a deeper problem: the blunt tools of content moderation disproportionately silence speech that is lawful, important, and often life-saving.

At a time when access to abortion information is already under political attack in the United States and around the world, platforms must be especially careful not to compound the harm. This incident shows how overly broad rules and opaque enforcement can erase valuable speech and disempower users who most need access to knowledge.

And when content does violate the rules, it’s important that users are provided with accurate information as to why. An individual sharing information about health care will undoubtedly be confused or upset by being told that they have violated a policy meant to curb violent extremism. Moderating content responsibly means offering users as much transparency and clarity as possible. As outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, users should be able to readily understand:

  • What types of content are prohibited by the company and will be removed, with detailed guidance and examples of permissible and impermissible content;
  • What types of content the company will take action against other than removal, such as algorithmic downranking, with detailed guidance and examples on each type of content and action; and
  • The circumstances under which the company will suspend a user’s account, whether permanently or temporarily.

What You Can Do if Your Content is Removed

If you find your content removed under Meta’s policies, you do have options:

  • Appeal the decision: Every takedown notice should give you the option to appeal within the app. Appeals are sometimes reviewed by a human moderator rather than an automated system.
  • Request Oversight Board review: In certain cases, you can escalate to Meta’s independent Oversight Board, which has the power to overturn takedowns and set policy precedents.
  • Document your case: Save screenshots of takedown notices, appeals, and your original post. This documentation is essential if you want to report the issue to advocacy groups or rely on it in future proceedings.
  • Share your story: Projects like Stop Censoring Abortion collect cases of unjust takedowns to build pressure for change. Speaking out, whether to EFF and other advocacy groups or to the media, helps illustrate how policies harm real people.

Abortion is health care. Sharing information about it is not dangerous—it’s necessary. Meta should allow users to share vital information about reproductive care. The company must also ensure that users are provided with clear information about how its policies are being applied and how to appeal seemingly wrongful decisions.

This is the third post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion   

Jillian C. York

Shining a Spotlight on Digital Rights Heroes: EFF Awards 2025

1 week ago

It's been a year full of challenges, but also important victories for digital freedoms. From EFF’s new lawsuit against OPM and DOGE, to launching Rayhunter (our new tool to detect cellular spying), to exposing the censorship of abortion-related content on social media, we’ve been busy! But we’re not the only ones leading the charge. 

On September 10 in San Francisco, we presented the annual EFF Awards to three courageous honorees who are pushing back against unlawful surveillance, championing data privacy, and advancing civil liberties online. This year’s awards went to Just Futures Law, Erie Meyer, and the Software Freedom Law Center, India.

If you missed the celebration in person, you can still watch the recording! The full event is posted on YouTube and the Internet Archive, and a transcript of the live captions is also available.

WATCH NOW

SEE THE EFF AWARDS CEREMONY ON YOUTUBE

Looking Back, Looking Ahead

EFF Executive Director Cindy Cohn opened the evening by reflecting on our victories this past year and reiterated how vital EFF’s mission to protect privacy and free speech is today. She also announced her upcoming departure as Executive Director after a decade in the role (and over 25 years of involvement with EFF!). No need to be too sentimental—Cindy isn’t going far. As we like to say: you can check out at any time, but you never really leave the fight. 

Cindy then welcomed one of EFF’s founders, Mitch Kapor, who joked that he had been “brought out of cold storage” for the occasion. Mitch recalled EFF’s early days, when no one knew exactly how constitutional rights would interact with emerging technologies—but everyone understood the stakes. “We understood that the matter of digital rights were very important,” he reflected. And history has proven them right. 

Honoring Defenders of Digital Freedom

The first award of the night, the EFF Award for Defending Digital Freedoms, went to the Software Freedom Law Center, India (SFLC.IN). Presenting the award, EFF Civil Liberties Director David Greene emphasized the importance of international partners like SFLC.IN, whose local perspectives enrich and strengthen EFF’s own work. 

SFLC.IN is at the forefront of digital rights in India—challenging internet shutdowns, tracking violations of free expression with their Free Speech Tracker, and training lawyers across the country. Accepting the award, SFLC.IN founder Mishi Choudhary reminded us: “These freedoms are not abstract. They are fought for every day by people, by organizations, and by movements.” 

SFLC.IN founder Mishi Choudhary accepts the EFF Award for Defending Digital Freedoms

Next, EFF Staff Attorney Mario Trujillo introduced the winner of the EFF Award for Protecting Americans’ Data, Erie Meyer. Erie has served as CTO of the Federal Trade Commission and Consumer Financial Protection Bureau, and was a founding member of the U.S. Digital Service. Today, she continues to fight for better government technology and safeguards for sensitive data. 

In her remarks, Erie underscored the urgency of protecting personal data at scale: “We need to protect people’s data the same way we protect this country from national security risks. What’s happening right now is like all the data breaches in history rolled into one. ‘Trust me, bro’ is not a way to handle 550 million Americans’ data.” 

Erie Meyer accepts the EFF Award for Protecting Americans’ Data

Finally, EFF General Counsel Jennifer Lynch introduced the EFF Award for Leading Immigration and Surveillance Litigation, presented to Just Futures Law. Co-founder and Executive Director Paromita Shah accepted on behalf of the organization, which works to challenge the ways surveillance disproportionately harms people of color in the U.S. 

“For years, corporations and law enforcement—including ICE—have been testing the legal limits of their tools on communities of color,” Paromita said in her speech. Just Futures Law has fought back, suing the Department of Homeland Security to reveal its use of AI, and defending activists against surveillance technologies like Clearview AI. 

Just Futures Law Executive Director Paromita Shah accepts the EFF Award for Leading Immigration and Surveillance Litigation

Carrying the Work Forward

We’re honored to shine a spotlight on these award winners, who are doing truly fearless and essential work to protect online privacy and free expression. Their courage reminds us that the fight for civil liberties will be won when we work together—across borders, communities, and movements. 

Join the fight and donate today


A heartfelt thank you to all of the EFF members worldwide who make this work possible. Public support is what allows us to push for a better internet. If you’d like to join the fight, consider becoming an EFF member—you’ll receive special gear as our thanks, and you’ll help power the digital freedom movement. 

And finally, special thanks to the sponsor of this year’s EFF Awards: Electric Capital.

  Catch Up From the Event

Reminder that if you missed the event, you can watch the live recording on our YouTube and the Internet Archive. Plus, a special thank you to our photographers, Alex Schoenfeldt and Carolina Kroon. You can see some of our favorite group photos that were taken during the event, and photos of the awardees with their trophies. 

Christian Romero

EFF, ACLU to SFPD: Stop Illegally Sharing Data With ICE and Anti-Abortion States

1 week ago

The San Francisco Police Department is the latest California law enforcement agency to get caught sharing automated license plate reader (ALPR) data with out-of-state and federal agencies. EFF and the ACLU of Northern California are calling them out for this direct violation of California law, which has put every driver in the city at risk and is especially dangerous for immigrants, abortion seekers, and other targets of the federal government.

This week, we sent the San Francisco Police Department a demand letter and request for records under the city’s Sunshine Ordinance following the SF Standard’s recent report that SFPD provided non-California agencies direct access to the city’s ALPR database. Reporters uncovered that at least 19 searches run by these agencies were marked as related to U.S. Immigration and Customs Enforcement (“ICE”). The city’s ALPR database was also searched by law enforcement agencies from Georgia and Texas, both states with severe restrictions on reproductive healthcare.

ALPRs are cameras that capture the movements of vehicles and upload the location of the vehicles to a searchable, shareable database. It is a mass surveillance technology that collects data indiscriminately on every vehicle on the road. As of September 2025, SFPD operates 415 ALPR cameras purchased from the company Flock Safety.

Since 2016, sharing ALPR data with out-of-state or federal agencies—for any reason—has violated California law (SB 34). If this data is shared for the purpose of assisting with immigration enforcement, agencies violate an additional California law (SB 54).

In total, the SF Standard found that SFPD had allowed out-of-state cops to run 1.6 million searches of their data. “This sharing violated state law, as well as exposed sensitive driver location information to misuse by the federal government and by states that lack California’s robust privacy protections,” the letter explained.

EFF and ACLU are urging SFPD to launch a thorough audit of its ALPR database, institute new protocols for compliance, and assess penalties and sanctions for any employee found to be sharing ALPR information out of state.

“Your office reportedly claims that agencies outside of California are no longer able to access the SFPD ALPR database,” the letter says. “However, your office has not explained how outside agencies obtained access in the first place or how you plan to prevent future violations of SB 34 and 54.”

As we’ve demonstrated over and over again, many California agencies continue to ignore these laws, exposing sensitive location information to misuse and putting entire communities at risk. As federal agencies continue to carry out violent ICE raids, and many states enforce harsh, draconian restrictions on abortion, ALPR technology is already being used to target and surveil immigrants and abortion seekers. California agencies, including SFPD, have an obligation to protect the rights of Californians, even when those rights are not recognized by other states or the federal government.

See the full letter here: https://www.eff.org/files/2025/09/17/aclu_and_eff_letter_to_sfpd_9.16.2025-1.pdf

Jennifer Pinsof

Appeals Court: Abandoned Phones Don’t Equal Abandoned Privacy Rights

1 week 1 day ago

This post was drafted by EFF legal intern Alexandra Halbeck.

The Court of Appeals for the Ninth Circuit, which covers California and most of the Western U.S., just delivered good news for digital privacy: abandoning a phone doesn’t abandon your Fourth Amendment rights in the phone’s contents. In United States v. Hunt, the court made clear that no longer having control of a device is not the same thing as surrendering the privacy of the information it contains. As a result, courts must separately analyze whether someone intended to abandon a physical phone and whether they intended to abandon the data stored within it. Given how much personal information our phones contain, it will be unlikely for courts to find that someone truly intended to give up their privacy rights in that data.

This approach mirrors what EFF urged in the amicus brief we filed in Hunt, joined by the ACLU, ACLU of Oregon, EPIC, and NACDL. We argued that a person may be separated from—or even discard—a device, yet still retain a robust privacy interest in the information it holds. Treating phones like wallets or backpacks ignores the reality of technology. Smartphones are comprehensive archives of our lives, containing years of messages, photos, location history, health data, browsing habits, and countless other intimate details. As the Supreme Court recognized in Riley v. California, our phones hold “the privacies of life,” and accessing those digital contents generally requires a warrant. This is an issue EFF has worked on across the country, and it is gratifying to see such an unambiguous ruling from an influential appellate court.

The facts of Hunt underscore why the court’s distinction between a device and its contents matters. In 2017, Dontae Hunt was shot multiple times and dropped an iPhone while fleeing for medical help. Police collected the phone from the crime scene and kept it as evidence. Nearly three years later—during an unrelated drug investigation—federal agents obtained a warrant and searched the phone’s contents. Hunt challenged both the warrantless seizure and the later search, arguing he never intended to abandon either the device or its data.

The court rejected the government’s sweeping abandonment theory and drew a crucial line for the digital age: even if police have legal possession of the hardware, they do not have a green light to rummage through its contents. The panel emphasized that courts must treat the device and the data as separate questions under a Fourth Amendment analysis.

In this specific case, because the government ultimately obtained a warrant before searching the device, that aspect of the case survived constitutional scrutiny—but crucially, only on that basis. The court also found that police acted reasonably in initially seizing the phone during the shooting investigation and keeping it as unclaimed property until a warrant could be obtained to search it.

Under Hunt, if officers find a phone that’s been misplaced, dropped during an emergency, or otherwise separated from its owner, they cannot leap from custody of the glass-and-metal shell to unfettered access to the comprehensive digital record inside. This decision ensures that constitutional protections don’t evaporate just because someone abandons their device, and that warrants still matter in the digital age. Our constitutional rights should follow our digital lives—no matter where our devices may end up.

Andrew Crocker

ICE 🤝 Cyber Mercenaries | EFFector 37.12

1 week 1 day ago

It's easy to keep up with the fight for digital privacy and free expression. Our EFFector newsletter delivers bite-sized updates, stories, and actions you can take to stay informed and help out.

In this latest issue, we show how libraries and schools can safeguard their computers with Privacy Badger; highlight the dangers of unaccountable corporations and billionaires buying surveillance tech for police; and share news that EFF’s Executive Director, Cindy Cohn, will be stepping down in mid-2026 after more than two decades of leadership.

EFFector isn’t just for reading—you can listen, too! In our audio companion, EFF Senior Staff Technologist Cooper Quintin explains why ICE’s contract with Paragon Solutions is so dangerous. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.12 - ICE 🤝 Cyber Mercenaries

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

When Knowing Someone at Meta Is the Only Way to Break Out of “Content Jail”

1 week 1 day ago

This is the second installment in a ten-part blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

During our Stop Censoring Abortion campaign, we set out to collect and spotlight the growing number of stories from people and organizations that have had abortion-related content removed, suppressed, or flagged by dominant social media platforms. Our survey submissions have revealed some alarming trends, including this one: if you don’t have a personal or second-degree connection at Meta, your chances of restoring your content or account drop significantly. 

Through the survey, we heard from activists, clinics, and researchers whose accounts were suspended or permanently removed for allegedly violating Meta’s policies on promoting or selling “restricted goods,” even when their posts were purely educational or informational. What the submissions also showed is a pattern of overenforcement, lack of transparency, and arbitrary moderation decisions that have specifically affected reproductive health and reproductive justice advocates. 

When accounts are taken down, appeals can take days, weeks, or even months (if they're even resolved at all, or if users are even given the option to appeal). For organizations and providers, this means losing access to vital communication tools and being cut off from the communities they serve. This is highly damaging since so much of that interaction happens on Meta’s platforms. Yet we saw a disturbing pattern emerge in our survey: on several occasions, accounts were swiftly restored once someone with a connection to Meta intervened.

The Case Studies: An Abortion Clinic

The Red River Women's Clinic is an abortion clinic in Moorhead, Minnesota. It was originally located in Fargo, North Dakota, and for many years was the only abortion clinic in North Dakota. In early January, the clinic’s director heard from a patient who thought the clinic only offered procedural/surgical abortions and not medication abortion. To clarify for other patients, the clinic posted on its page that it offered both procedural and medication abortions—attaching an image of a box of mifepristone. When staff tried to boost the post, the ad was flagged and the account was suspended.

They appealed the decision and initially got the ad approved, yet the page was suspended again shortly after. This time, multiple appeals and direct emails went unanswered until they reached out to a digital rights organization that was able to connect them with staff at Meta who stepped in. Only then was the page restored, with Meta noting that the post did not violate its policies but warning that future violations could lead to permanent removal.

While this may have been a glitch in Meta’s systems or a misapplication of policy, the suspension of the clinic’s Facebook account was detrimental for them. “We were unable to update our followers about dates/times we were closed, we were unable to share important information and news about abortion that would have kept our followers up to date, there was a legislative session happening and we were unable to share events and timely asks for reaching out to legislators about issues,” shared Tammi Kromenaker, Director of Red River Women's Clinic. The clinic was also prevented from starting an Instagram page due to the suspension. “Facebook has a certain audience and Instagram has another audience,” said Kromenaker, “we are trying to cater to all of our supporters so the loss of FB and the inability to access and start an Instagram account were really troubling to us.” 

The Case Studies: RISE at Emory University

RISE, a reproductive health research center at Emory University, launched an Instagram account to share community-centered research and combat misinformation related to reproductive health. In January of this year, they posted educational content about mifepristone on their Instagram. “Let's talk about Mifepristone + its uses + the importance of access,” read the post. Two months later, their account was suddenly suspended, flagged under Meta’s policy against selling illegal drugs. Their appeal was denied, which led to the account being permanently deleted. 

Screenshot submitted by RISE to EFF

“As a team, this was a hit to our morale,” shared Sara Redd, Director of Research Translation at RISE. “We pour countless hours of person-power, creativity, and passion into creating the content we have on our page, and having it vanish virtually overnight took a toll on our team.” For many organizational users like RISE, their social media accounts are a repository for resources and metrics that may not be stored elsewhere. “We spent a significant amount of already-constrained team capacity attempting to recover all of the content we’d created for Instagram that was potentially going to be permanently lost. [...] We also spent a significant amount of time and energy trying to understand what options we might have available from Meta to appeal our case and/or recover our account; their support options are not easily accessible, and the time it took to navigate this issue distracted from our existing work.”  

Meta restored the account only after RISE was able to connect with someone there. Once RISE logged back in, they confirmed that the flagged post was the one about mifepristone. The post never sold pills or directed people to where they could buy them; it simply provided accurate information about the use and efficacy of the drug. 

This Shouldn’t Be How Content Moderation Works

Meta spokespersons have admitted to instances of “overenforcement” in various press statements, noting that content is sometimes incorrectly removed or blurred even when it doesn’t actually violate policy. Meta has insisted to the public that it cares about free speech, as a spokesperson mentioned to The New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space [...] That’s why we allow posts and ads about, discussing and debating abortion.” In fact, their platform policies directly mention this:

Note that advertisers don’t need authorization to run ads that only:

  • Educate, advocate or give public service announcements related to prescription drugs

Additionally:

Note: Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements. 

Meta also has policies specific to “Health and Wellness,” where they state: 

When targeting people 18 years or older, advertisers can run ads that:

  • Promote sexual and reproductive health and wellness products or services, as long as the focus is on health and the medical efficacy of the product or the service and not on the sexual pleasure or enhancement. And these ads must target people 18 years or older. This includes ads for: [...]
  • Family planning methods, such as:
    • Family planning clinics
    • In Vitro Fertilization (IVF) or any other artificial insemination procedures
    • Fertility awareness
    • Abortion medical consultation and related services

But these public commitments don’t always match users’ experiences. 

Take the widely covered case of Aid Access, a group that provides medication abortion by mail. This year, several of the group’s Instagram posts were blurred or removed, including one with tips for feeling safe and supported at home after taking abortion medication. Only after multiple national media outlets contacted Meta for comment on the story were the posts and account restored.

So the question becomes: If Meta admits its enforcement isn’t perfect, why does it still take knowing someone, or having the media involved, to get a fair review? When companies like Meta claim to uphold commitments to free speech, those commitments should materialize in clear policies that are enforced equally, not only when a case is escalated through personal relationships with Meta personnel.

“Facebook Jail” Reform

There is no question that the enforcement of these content moderation policies on Meta platforms and the length of time people are spending in “content jail” or “Facebook/Instagram jail” have created a chilling effect.

“I think that I am more cautious and aware that the 6.1K followers we have built up over time could be taken away at any time based on the whims of Meta,” Tammi from Red River Women’s Clinic told us. 

RISE sees it in a slightly different light, sharing that “[w]hile this experience has not affected our fundamental values and commitment to sharing our work and rigorous science, it has highlighted for us that no information posted on a third-party platform is entirely one’s own, and thus can be dismantled at any moment.”

At the end of the day, clinics are left afraid to post basic information, patients are left confused or misinformed, and researchers lose access to their audiences. But unless your issue catches the attention of a journalist or you know someone at Meta, you might never regain access to your account.

These case studies highlight the urgent need for transparent, equitable, and timely enforcement that is not dependent on insider connections, as well as accountability from platforms that claim to support open dialogue and free speech. Meta’s admitted overenforcement should, at minimum, be coupled with efficient and well-staffed review processes and policies that are transparent and easily understandable. 

It’s time for Meta and other social media platforms to implement the reforms they claim to support, and for them to prove that protecting access to vital health information doesn’t hinge on who you know.

This is the second post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion   

Rindala Alajaji

Mexican Allies Raise Alarms About New Mass Surveillance Laws, Call for International Support

1 week 1 day ago

The Mexican government passed a package of outrageously privacy-invasive laws in July that gives both civil and military law enforcement forces access to troves of personal data and forces every individual to turn over biometric information regardless of any suspicion of crime.   

The laws create a new interconnected intelligence system dubbed the Central Intelligence Platform, under which intelligence and security agencies at all levels of government—federal, state and municipal—have the power to access, from any entity public or private, personal information for “intelligence purposes,” including license plate numbers, biometric information, telephone details that allow the identification of individuals, financial, banking, and health records, public and private property records, tax data, and more. 

You read that right. Banks’ customer information databases? Straight into the platform. Hospital patient records? Same thing. 

The laws were ostensibly passed in the name of gathering intelligence to fight high-impact crime. Civil society organizations, including our partners R3D and Article 19 Mexico, have raised alarms about the laws—as R3D put it, they establish an uncontrolled system of surveillance and social control that goes against privacy and free expression rights and the presumption of innocence.  

In a concept note made public recently, R3D breaks down exactly how bad the laws are. The General Population Act forces every person in Mexico to enroll in a mandatory biometric ID system with fingerprints and a photo. Under the law, public and private entities are required to ask for the ID for any transaction or access to services, such as banking, healthcare, education, and access to social programs. All data generated through the ID mandate will feed into a new Unique Identity Platform under the Disappeared Persons Act.  

The use of biometric IDs creates a system for tracking activities of the population—also accessible through the Central Intelligence Platform.  

The Telecommunications Act requires telecom companies to create a registry that connects people’s phone numbers with their biometric ID held by the government, and to cut off service to customers who won’t go along with the practice.  

It gets worse. 

The Intelligence Act explicitly guarantees the armed forces, through the National Guard, legal access to the Central Intelligence Platform, which enables real-time consultation of interconnected databases across sectors.  

Companies, both domestic and international, must either interconnect their databases or hand over information on request. Mexican authorities can share that information even with foreign governments. It also exempts judicial authorization requirements for certain types of surveillance and classifies the entire system as confidential, with criminal penalties for disclosure. All of this is allowed without any suspicion of a crime or prior judicial approval.  

We urge everyone to pay close attention to and support efforts to hold the Mexican government accountable for this egregious surveillance system. R3D has challenged the laws in court, and international support is critical to raising awareness and pushing back. As R3D put it, "collaboration is vital for the defense of human rights," especially in the face of uncontrolled powers set by disproportionate laws.  

We couldn’t agree more and stand with our Mexican allies. 

Karen Gullo

California, Tell Governor Newsom: Regulate AI Police Reports and Sign S.B. 524

1 week 2 days ago

The California legislature has passed a necessary piece of legislation, S.B. 524, which starts to regulate police reports written by generative AI. Now, it’s up to us to make sure Governor Newsom will sign the bill. 

We must make our voices heard. These technologies obscure certain records and drafts from public disclosure, and vendors have invested heavily in their ability to sell genAI to police.

TAKE ACTION

AI-generated police reports are spreading rapidly. The most popular product on the market is Draft One, made by Axon, already one of the country’s biggest purveyors of police tech, including body-worn cameras. By bundling its products together, Axon has capitalized on its customer base to spread its opaque and potentially harmful genAI product. 

Many things can go wrong when genAI is used to write narrative police reports. First, because the product relies on body-worn camera audio, there’s a big chance of the AI draft missing context like sarcasm, culturally specific vocabulary and slang, or languages other than English. While police are expected to edit the AI’s version of events to make up for these flaws, many officers will simply defer to the AI. Police are also supposed to make an independent decision before arresting a person who was identified by face recognition, and police mess that up all the time. The prosecutor of King County, Washington, has forbidden local officers from using Draft One out of fear that it is unreliable.

Then, of course, there’s the matter of dishonesty. Many public defenders and criminal justice practitioners have voiced concerns about what this technology would do to cross examination. If caught with a different story on the stand than the one in their police report, an officer can easily say, “the AI wrote that and I didn’t edit well enough.” The genAI creates a layer of plausible deniability: carelessness is a very different offense than lying on the stand. 

To make matters worse, an investigation by EFF found that Axon’s Draft One product defies transparency by design. The technology is deliberately built to obscure what portion of a finished report was written by AI and which portions were written by an officer–making it difficult to determine if an officer is lying about which portions of a report were written by AI. 

But now, California has an important chance to join other states, like Utah, that are passing laws to rein in these technologies and to set the minimum safeguards and transparency that must accompany their use. 

S.B. 524 does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, Axon’s Draft One would be out of compliance with this bill, which would require the company to redesign the tool to make it more transparent, a small win for communities everywhere. 

So now we’re asking you: help us make a difference. Use EFF’s Action Center to tell Governor Newsom to sign S.B. 524 into law! 

TAKE ACTION

Matthew Guariglia

Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis

1 week 3 days ago

This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts. 

For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including Plan C, Women on Web, Reproaction, and Women First Digital—we launched the #StopCensoringAbortion campaign to collect and amplify these stories.  

Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms. 

We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that almost none of the submissions we received violated any of the platforms’ stated policies. Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example: 

Screenshot submitted by Lauren Kahre to EFF

In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store the pills safely.  

Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: Meta has publicly insisted that posts like these should not be censored. In a February 2024 letter to Amnesty International, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.” 

Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.” 

Screenshot submitted by Lauren Kahre to EFF

In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”  

Yet in Lauren’s case and others, the posts very clearly did no such thing. And as Meta itself has explained: “Providing guidance on how to legally access pharmaceuticals is permitted as it is not considered an offer to buy, sell or trade these drugs.” 

In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.” 

Over and over again, the policies say one thing, but the actual enforcement says another. 

We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and the gap between those policies and how they’re being applied. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push them to do better.  

In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with Barbara Ortutay at The Associated Press, whose report on some of these takedowns was published today.  

We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.  

With reproductive rights under attack both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information. 

This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion    

Jennifer Pinsof

EFF to Court: The Supreme Court Must Rein in Expansive Secondary Copyright Liability

2 weeks 1 day ago

If the Supreme Court doesn’t reverse a lower court’s ruling, internet service providers (ISPs) could be forced to terminate people’s internet access based on nothing more than mere accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life. EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an amicus brief urging the Court to reverse the decision.

The Stakes: Turning ISPs into Copyright Police

Among other things, if the Supreme Court approves the appeals court’s findings, it will radically change the amount of risk your ISP takes on when a customer infringes copyright, pressuring the ISP to terminate internet access for users accused of copyright infringement—and for everyone else who uses that internet connection.

This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service.

The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). The lower court said that providing a service that could be used for infringement is enough to create liability when a customer infringes.

In the Patent Act, where Congress has explicitly defined secondary liability, there’s a different test: contributory infringement exists only where a product is incapable of substantial non-infringing use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that can’t give rise to liability under the patent framework. Yet under the Fourth Circuit’s rule, ISPs could face billion-dollar damages if they fail to terminate users on the basis of even flimsy or automated infringement claims.

Our Argument: Apply Clear Rules from the Patent Act, Not Confusing Judge-Made Tests

Our brief urges the Court to do what it has done in the past: look to patent law to define the limits of secondary liability in copyright. That means contributory infringement must require more than a “material contribution” by the service provider—it should apply only when a product or service is especially designed for infringement and lacks substantial non-infringing uses.

The Human Cost: Losing Internet Access Hurts Everyone

The Fourth Circuit’s rule threatens devastating consequences for the public. Terminating an ISP account doesn’t just affect a person accused of unauthorized file sharing—it cuts off entire households, schools, libraries, or businesses that share an internet connection.

  • Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.
  • Universities, hospitals, and local governments could see internet access for whole communities disrupted.
  • Households—especially in low-income communities and communities of color, which disproportionately share broadband connections with other people—would face collective punishment for the alleged actions of a single user.

With more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect once cut off. And given how essential internet access is for education, employment, healthcare, and civic participation, the consequences of termination are severe and disproportionate.

What’s Next

The Supreme Court has an opportunity to correct course. We’re asking the Court to reject the Fourth Circuit’s unfounded “material contribution” test, reaffirm that patent law provides the right framework for secondary liability, and make clear that the Constitution requires copyright to serve the public good. The Court should ensure that copyright enforcement doesn’t jeopardize the internet access on which participation in modern life depends.

We’ll be watching closely as the Court considers this case. In the meantime, you can read our amicus brief here.

Betty Gedlu

San Francisco Gets An Invasive Billionaire-Bought Surveillance HQ

2 weeks 1 day ago

San Francisco billionaire Chris Larsen once again has wielded his wallet to keep city residents under the eye of all-seeing police surveillance. 

The San Francisco Police Commission, the Board of Supervisors, and Mayor Daniel Lurie have signed off on Larsen’s $9.4 million gift of a new Real-Time Investigations Center. The plan involves moving the city’s existing police tech hub from the public Hall of Justice not to the city’s brand-new police headquarters but instead to a sublet in the Financial District building of Ripple Labs, Larsen’s crypto-transfer company. Although the city reportedly won’t be paying for the space, the lease cost Ripple $2.3 million and will last until December 2026. 

The deal will also include a $7.25 million gift from the San Francisco Police Community Foundation that Larsen created. Police foundations are semi-public fundraising arms of police departments that allow them to buy technology and gear that the city will not give them money for.  

In Los Angeles, the city’s police foundation got $178,000 from the company Target to pay for the services of the data analytics company Palantir to use for predictive policing. In Atlanta, the city’s police foundation funds a massive surveillance apparatus as well as the much-maligned Cop City training complex. (Despite police foundations’ insistence that they are not public entities and therefore do not need to be transparent or answer public records requests, a judge recently ordered the Atlanta Police Foundation to release documentation related to Cop City.) 

A police foundation in San Francisco brings the same concerns: that an unaccountable and untransparent fundraising arm, schmoozing with corporations and billionaires, would fund unpopular surveillance measures without having to reveal much to the public.  

Larsen was one of the deep pockets behind last year’s Proposition E, a ballot measure to supercharge surveillance in the city. The measure usurped the city’s 2019 surveillance transparency and accountability ordinance, which had required the SFPD to get the elected Board of Supervisors’ approval before buying and using new surveillance technology. This common-sense democratic hurdle was, apparently, a bridge too far for the SFPD and for Larsen.  

We’re no fans of real-time crime centers (RTCCs), as they’re often called elsewhere, to start with. They’re basically control rooms that pull together all feeds from a vast warrantless digital dragnet, often including automated license plate readers, fixed cameras, officers’ body-worn cameras, drones, and other sources. It’s a means of consolidating constant surveillance of the entire population, tracking everyone wherever they go and whatever they do – worrisome at any time, but especially in a time of rising authoritarianism.  

Think of what this data could do if it got into federal hands; imagine how vulnerable city residents would be to harassment if every move they made was centralized and recorded downtown. But you don’t have to imagine, because SFPD already has been caught sharing automated license plate reader data with out-of-state law enforcement agencies assisting in federal immigration investigations.

We’re especially opposed to RTCCs using live feeds from non-city surveillance cameras to push that panopticon’s boundaries even wider, as San Francisco’s does. Those semi-private networks of some 15,000 cameras, already abused by SFPD to surveil lawful protests against police violence, were funded in part by – you guessed it – Chris Larsen.

These technologies could potentially endanger San Franciscans by directing armed police at them due to reliance on a faulty algorithm or by putting already-marginalized communities at further risk of overpolicing and surveillance. But studies find that these technologies just don’t work. If the goal is to stop crime before it happens, to spare someone the hardship and the trauma of getting robbed or hurt, cameras clearly do not accomplish this. There’s plenty of footage of crime occurring that belies the idea that surveillance is an effective deterrent, and although police often look to technology as a silver bullet to fight crime, evidence suggests that it does little to alter the historic ebbs and flows of criminal activity. 

Yet now this unelected billionaire – who already helped gut police accountability and transparency rules and helped fund sketchy surveillance of people exercising their First Amendment rights – wants to bankroll, expand, and host the police’s tech nerve center. 

Policing must be a public function so that residents can control, and demand accountability and transparency from, those who serve and protect but also surveil and track us all. Being financially beholden to private interests erodes the community’s trust and control and can leave the public high and dry if a billionaire’s whims change or conflict with the will of the people. Chris Larsen could have tried to address the root causes of crime that affect our community; instead, he exercises his bank account's muscle to decide that surveillance is best for San Franciscans with less in their wallets. 

Elected officials should have said “thanks but no thanks” to Larsen and ensured that the San Francisco Police Department remained under the complete control and financial auspices of nobody except the people of San Francisco. Rich people should not be allowed to fund the further degradation of our privacy as we go about our lives in our city’s public places. Residents should carefully watch what comes next to decide for themselves whether a false sense of security is worth living under constant, all-seeing, billionaire-bankrolled surveillance. 

Josh Richman

Rayhunter: What We Have Found So Far

2 weeks 1 day ago

A little over a year ago we released Rayhunter, our open source tool designed to detect cell-site simulators. We’ve been blown away by the level of community engagement on this project. It has been installed on thousands of devices (or so we estimate; we don’t actually know, since Rayhunter doesn’t have any telemetry!). We have received dozens of packet captures from our open source community, along with hundreds of improvements both minor and major, documentation fixes, and bug reports. This project is a testament to the power and impact of open source, community-driven counter-surveillance.  

If this is your first time hearing about Rayhunter, you can read our announcement blog post here. Or if you prefer, you can watch our DEF CON talk. In short, Rayhunter is an open source Linux program that runs on a variety of mobile hotspots (dedicated devices that use a cellular connection to give you Wi-Fi). Rayhunter’s job is to look for cell-site simulators (CSS), the tools police use to locate or identify people's cell phones, also known as IMSI catchers or Stingrays. Rayhunter analyzes the “handshakes” between your Rayhunter device and the cell towers it is connected to for behaviors consistent with those of a CSS. When it finds potential evidence of a CSS, it alerts the user with an indicator on the screen and potentially a push notification to their phone.  
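To make the detection approach concrete, here is a minimal sketch, written in Python for readability, of what one rule of this kind could look like. The event names, data model, and logic are invented for illustration and loosely follow the “IMSI sent without authentication” pattern mentioned later in this post; this is not Rayhunter’s actual code or rule set.

```python
# Illustrative sketch only -- not Rayhunter's actual implementation.
# It models one heuristic: flag a session in which the network asked the
# device for its permanent identifier (IMSI) but never performed the
# authentication step a legitimate tower would normally perform.

from dataclasses import dataclass

@dataclass
class NasEvent:
    """A simplified cellular signaling event (event names are hypothetical)."""
    kind: str  # e.g. "identity_request_imsi", "authentication_request", "attach_accept"

def imsi_sent_without_authentication(events: list[NasEvent]) -> bool:
    """Return True if an IMSI was requested and the session completed
    without an authentication exchange -- behavior consistent with a
    cell-site simulator harvesting identifiers."""
    imsi_requested = False
    for event in events:
        if event.kind == "identity_request_imsi":
            imsi_requested = True
        elif event.kind == "authentication_request":
            # A real network normally authenticates after identifying the device.
            imsi_requested = False
        elif event.kind == "attach_accept" and imsi_requested:
            # The session proceeded without authentication after the IMSI request.
            return True
    return False

# A suspicious capture would trigger the alert:
suspicious = [NasEvent("identity_request_imsi"), NasEvent("attach_accept")]
print(imsi_sent_without_authentication(suspicious))  # True
```

A real detector also has to contend with legitimate edge cases (first-time registrations, roaming, retries), which is part of why more packet captures from more networks help make signatures like this more reliable.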

Understanding if CSS are being used to spy on protests is one of the main goals of the Rayhunter project. Thanks to members of our community bringing Rayhunter to dozens of protests, we are starting to get a picture of how CSS are currently being used in the US. So far Rayhunter has not turned up any evidence of cell-site simulators being used to spy on protests in the US — though we have found them in use elsewhere.  

There are a couple of caveats here. First, it’s often impossible to prove a negative. Maybe Rayhunter just hasn’t been at protests where CSS have been present. Maybe our detection signatures aren’t picking up the techniques used by US law enforcement. But we’ve received reports from a lot of protests, including pro-Palestine protests, protests in Washington DC and Los Angeles, as well as the ‘No Kings’ and ‘50501’ protests all over the country. So far, we haven’t seen evidence of CSS use at any of them.  

A big part of the reason for the lack of CSS at protests could be that some courts have required a warrant for their use, and even law enforcement agencies not bound by these rulings have policies that require police to get a warrant. CSS are also costly to buy and use, requiring trained personnel and nearly a million dollars’ worth of equipment.  

The fact is, police also have other tools available that are potentially easier to use. If the goal of using a CSS at a protest is to find out who was at the protest, police could use tools such as:  

  • License plate readers to track the vehicles arriving at and leaving the protest. 
  • Location data brokers, such as Locate X and Fog Data Science, to track the phones of protestors by their mobile advertising IDs (MAIDs).
  • Cellebrite and other forensic extraction tools to download all the data from phones of arrested protestors if they are able to unlock those phones.  
  • Geofence warrants, which require internet companies like Google to disclose the identifiers of devices within a given location at a given time.
  • Facial recognition such as Clearview AI to identify everyone present via public or private databases of people’s faces.
  • Tower dumps from phone companies, which, similar to geofence warrants, require phone companies to turn over a list of all the phones connected to a certain tower at a certain time.  

Given the lack of evidence of CSS being used, we think protestors can worry less about CSS and more about these other techniques. Luckily, the actions one should take to protect themselves are largely the same. 

We feel pretty good about Rayhunter’s detection engine, though there could still be things we are missing. Some of our confidence in Rayhunter’s detection engine comes from the research we have done into how CSS work. But the majority of our confidence comes from testing Rayhunter against a commercial cell-site simulator thanks to our friends at Cape. Rayhunter detected every attack run by the commercial CSS.  

Where Rayhunter Has Detected Likely Surveillance

Rayhunter users have found potential evidence of CSS being used in the wild, though not at protests. One of the most interesting examples, which triggered multiple detections and even inspired us to write some new detection rules, was at a cruise port in the Turks and Caicos Islands. The person who captured this data put the packet captures online for other researchers to review.

Rayhunter users have detected likely CSS use in the US as well. We have received reports from Chicago and New York where our “IMSI Sent without authentication” signature was triggered multiple times over the course of a couple hours and then stopped. Neither report was in the vicinity of a protest. We feel fairly confident that these reports are indicative of a CSS being present, though we don’t have any secondary evidence to back them up. 

We have received other reports that have triggered our CSS detection signatures, but the above examples are the ones we feel most confident about.  

We encourage people to keep using Rayhunter and continue bringing it to protests. Law enforcement trends can change over time, and it is possible that some cities are using CSS more often than others (for example, Fontana, California reportedly used its CSS over 300 times in two years). We also know that ICE still uses CSS and has recently renewed its contracts. Interestingly, in January, the FBI requested a warrant from the Foreign Intelligence Surveillance Court to use what was likely a CSS and was rejected. This was the first time the FBI had sought a warrant under the Foreign Intelligence Surveillance Act to use a CSS since 2015, when the Justice Department began requiring a warrant for their use. If police start using CSS to spy on protests, we want to know.

There is still a lot we want to accomplish with Rayhunter, and we have plans for the project that we are excited to share in the near future. But the biggest thing we need right now is more testing outside of the United States.  

Taking Rayhunter International  

We are interested in getting Rayhunter data from every country to help us understand the global use of CSS and to refine our signatures. Just because CSS don't appear to be used to spy on protests in the US right now doesn't mean that is true everywhere. We have also seen that some signatures that work in the US are prone to false positives elsewhere (such as our 2G signature in countries that still have active 2G networks). The first device supported by Rayhunter, the Orbic hotspot, was US only, so we have very little international data. But we now have support for multiple devices! If you are interested in Rayhunter, but can’t find a device that works in your country, let us know. We recommend you consult with an attorney in your country to determine whether running Rayhunter is likely to be legally risky or outlawed in your jurisdiction.

Related Cases: Carpenter v. United States
Cooper Quintin