The Growing Threat of Cybercrime Law Abuse: LGBTQ+ Rights in MENA and the UN Cybercrime Draft Convention


This is Part II of a series examining the proposed UN Cybercrime Treaty in the context of LGBTQ+ communities. Part I looks at the draft Convention’s potential implications for LGBTQ+ rights. Part II provides a closer look at how cybercrime laws might specifically impact the LGBTQ+ community and activists in the Middle East and North Africa (MENA) region.

In the digital age, the rights of the LGBTQ+ community in the Middle East and North Africa (MENA) are gravely threatened by expansive cybercrime and surveillance legislation. This reality leads to systemic suppression of LGBTQ+ identities, compelling individuals to censor themselves for fear of severe reprisal. This looming threat becomes even more pronounced in countries like Iran, where same-sex conduct is punishable by death, and Egypt, where merely raising a rainbow flag can lead to arrest and torture.

Enter the proposed UN Cybercrime Convention. If ratified in its present state, the convention might not only bolster certain countries' domestic surveillance powers to probe actions that some nations mislabel as crimes, but it could also strengthen and validate international collaboration grounded in these powers. Such a UN endorsement could establish a perilous precedent, authorizing surveillance measures for acts that are in stark contradiction with international human rights law. Even more concerning, it might tempt certain countries to formulate or increase their restrictive criminal laws, eager to tap into the broader pool of cross-border surveillance cooperation that the proposed convention offers. 

The draft convention, in Article 35, permits each country to set its own definitions of crimes under its domestic law for cross-border policing. Alarmingly, each country can thus decide for itself which crimes will serve as the foundation for requesting other countries’ assistance in collecting evidence. In certain countries, many of these laws are based on subjective moral judgments that suppress what is considered free expression in other nations, rather than on universally accepted standards.

Indeed, international cooperation is permitted for crimes that carry a penalty of four years of imprisonment or more, and there is a concerning move afoot to reduce this threshold to merely three years. This applies whether or not the alleged offense is cyber-related. Such provisions could result in heightened cross-border monitoring and repercussions for individuals, up to and including torture or the death penalty in some jurisdictions.

While some countries may believe they can sidestep these pitfalls by not collaborating with countries that have controversial laws, this confidence may be misplaced. The draft treaty allows countries to refuse a request if the activity in question is not a crime in its domestic regime (the principle of "dual criminality"). However, given the current strain on the MLAT system, there's an increasing likelihood that requests, even from countries with contentious laws, could slip through the checks. This opens the door for nations to inadvertently assist in operations that might contradict global human rights norms. And where countries do share the same subjective values and problematically criminalize the same conduct, this draft treaty seemingly provides a justification for their cooperation.

One of the more recently introduced pieces of legislation that exemplifies these issues is Jordan’s Cybercrime Law of 2023. Introduced as part of King Abdullah II’s modernization reforms to increase political participation across Jordan, the law was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. Jordan’s pre-existing cybercrime law had already been used against LGBTQ+ people, and the new law expands the state’s capacity to do so. With its overly broad and vaguely defined terms, the law will severely restrict individual human rights across the country and become a tool for prosecuting innocent individuals for their online speech.

Article 13 of the Jordan law expansively criminalizes a wide set of actions tied to online content branded as “pornographic,” from its creation to distribution. The ambiguity in defining what is pornographic could inadvertently suppress content that merely expresses various sexualities, mistakenly deeming them as inappropriate. This goes beyond regulating explicit material; it can suppress genuine expressions of identity. The penalty for such actions entails a period of no less than six months of imprisonment. 

Meanwhile, the nebulous wording in Article 14 of Jordan’s law—terms like “expose public morals,” “debauchery,” and “seduction”—is equally concerning. Such vague language is ripe for misuse, potentially curbing LGBTQ+ content by erroneously associating diverse sexual orientations with immorality. Both articles, in their current form, cast shadows on free expression and are stark reminders that such provisions can lead to over-policing of online content that is not harmful at all. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. Deputy Leader of the Opposition Saleh al Armouti went further and claimed that “Jordan will become a big jail.”

Additionally, the law imposes restrictions on encryption and anonymity in digital communications, preventing individuals from safeguarding their rights to freedom of expression and privacy. Article 12 of the Cybercrime Law prohibits the use of Virtual Private Networks (VPNs) and other proxies, with at least six months imprisonment or a fine for violations. 

This will force people in Jordan to choose between engaging in free online expression or keeping their personal identity private. More specifically, this will negatively impact LGBTQ+ people and human rights defenders in Jordan who particularly rely on VPNs and anonymity to protect themselves online. The impact of Article 12 is exacerbated by the fact that there is no comprehensive data privacy legislation in Jordan to protect people’s rights during cyber attacks and data breaches.  

This is not the first time Jordan has limited access to information and content online. In December 2022, Jordanian authorities blocked TikTok to prevent the dissemination of live updates and information during the workers’ protests in the country's south, and authorities there previously had blocked Clubhouse as well. 

This crackdown on free speech has particularly impacted journalists, such as the recent arrest of Jordanian journalist Heba Abu Taha for criticizing Jordan’s King over his connections with Israel. Given that online platforms like TikTok and Twitter are essential for activists, organizers, journalists, and everyday people around the world to speak truth to power and fight for social justice, the restrictions placed on free speech by Jordan’s new Cybercrime Law will have a detrimental impact on political activism and community building across Jordan.

People across Jordan have protested the law, and the European Union has expressed concern about how it could limit freedom of expression online and offline. In August, EFF and 18 other civil society organizations wrote to the King of Jordan, calling for the rejection of the country’s draft cybercrime legislation. With the law now in effect, we urge Jordan to repeal the Cybercrime Law of 2023.

Jordan’s Cybercrime Law has been said to be a “true copy” of the United Arab Emirates (UAE) Federal Decree Law No. 34 of 2021 on Combatting Rumors and Cybercrimes. This law replaced its predecessor, which had been used to stifle expression critical of the government or its policies—and was used to sentence human rights defender Ahmed Mansoor to 10 years in prison. 

The UAE’s new cybercrime law further restricts the already heavily-monitored online space and makes it harder for ordinary citizens, as well as journalists and activists, to share information online. More specifically, Article 22 mandates prison sentences of between three and 15 years for those who use the internet to share “information not authorized for publishing or circulating liable to harm state interests or damage its reputation, stature, or status.” 

In September 2022, Tunisia passed its new cybercrime law in Decree-Law No. 54 on “combating offenses relating to information and communication systems.” The wide-ranging decree has been used to stifle opposition free speech, and mandates a five-year prison sentence and a fine for the dissemination of “false news” or information that harms “public security.” In the year since Decree-Law 54 was enacted, authorities in Tunisia have prosecuted media outlets and individuals for their opposition to government policies or officials. 

The first criminal investigation under Decree-Law 54 saw the arrest of student Ahmed Hamada in October 2022 for operating a Facebook page that reported on clashes between law enforcement and residents of a neighborhood in Tunisia. 

Similar tactics are being used in Egypt, where the 2018 cybercrime law, Law No. 175/2018, contains broad and vague provisions to silence dissent, restrict privacy rights, and target LGBTQ+ individuals. More specifically, Articles 25 and 26 have been used by the authorities to crack down on content that allegedly violates “family values.”

Since the law’s enactment, these provisions have also been used to target LGBTQ+ individuals across Egypt, particularly regarding the publication or sending of pornography under Article 8, as well as illegal access to an information network under Article 3. For example, in March 2022 a court in Egypt charged singers Omar Kamal and Hamo Beeka with “violating family values” for dancing and singing in a video uploaded to YouTube. In another example, police have used cybercrime laws to prosecute LGBTQ+ individuals for using dating apps such as Grindr.

And in Saudi Arabia, national authorities have used cybercrime regulations and counterterrorism legislation to prosecute online activism and stifle dissenting opinions. Between 2011 and 2015, at least 39 individuals were jailed under the pretense of counterterrorism for expressing themselves online—for composing a tweet, liking a Facebook post, or writing a blog post. And while Saudi Arabia has no specific law concerning gender identity and sexual orientation, authorities have used the 2007 Anti-Cyber Crime Law to criminalize online content and activity that is considered to impinge on “public order, religious values, public morals, and privacy.” 

These provisions have been used to prosecute individuals for peaceful actions, particularly since the Arab Spring in 2011. More recently, in August 2022, Salma al-Shehab was sentenced to 34 years in prison with a subsequent 34-year travel ban for her alleged “crime” of sharing content in support of prisoners of conscience and women human rights defenders.

These cybercrime laws demonstrate that if the proposed UN Cybercrime Convention is ratified in its current form, with its broad scope, it would authorize domestic surveillance for the investigation of offenses such as those in Articles 12, 13, and 14 of Jordan’s law. The convention could also authorize international cooperation in investigating crimes penalized with three or four years of imprisonment, as seen in countries such as the UAE, Tunisia, Egypt, and Saudi Arabia.

As Canada warned (at minute 01:56) at the recent negotiation session, these expansive provisions in the Convention permit states to unilaterally define and broaden the scope of criminal conduct, potentially paving the way for abuse and transnational repression. While the Convention may incorporate some procedural safeguards, its far-reaching scope raises profound questions about its compatibility with the key tenets of human rights law and the principles enshrined in the UN Charter.

The root problem lies not in the severity of penalties, but in the fact that some countries criminalize behaviors and expression that are protected under international human rights law and the UN Charter. This is alarming, given that numerous laws affecting the LGBTQ+ community carry penalties within these ranges, making the potential for misuse of such cooperation profound.

In a nutshell, the proposed UN treaty amplifies the existing threats to the LGBTQ+ community. It endorses a framework where nations can surveil benign activities such as sharing LGBTQ+ content, potentially intensifying the already-precarious situation for this community in many regions.

Online, the lack of legal protection of subscriber data threatens the anonymity of the community, making them vulnerable to identification and subsequent persecution. The mere act of engaging in virtual communities, sharing personal anecdotes, or openly expressing relationships could lead to their identities being disclosed, putting them at significant risk.

Offline, the implications intensify with amplified hesitancy to participate in public events, showcase LGBTQ+ symbols, or even undertake daily routines that risk revealing their identity. The draft convention's potential to bolster digital surveillance capabilities means that even private communications, like discussions about same-sex relationships or plans for LGBTQ+ gatherings, could be intercepted and turned against them. 

To all member states: This is a pivotal moment. This is our opportunity to ensure the digital future is one where rights are championed, not compromised. Pledge to protect the rights of all, especially those communities like the LGBTQ+ that are most vulnerable. The international community must unite in its commitment to ensure that the proposed convention serves as an instrument of protection, not persecution.

Katitza Rodriguez

Watch EFF's Talks from DEF CON 31


EFF had a blast at DEF CON 31! Thank you to everyone who came and supported EFF at the membership booth, participated in our contests, and checked out our various talks. We had a lot of things going on this year, and it was great to see so many new and familiar faces.

This year was our biggest DEF CON yet, with over 900 attendees starting or renewing an EFF membership at the conference. Thank you! Your support is the reason EFF can push for initiatives like protecting encrypted messaging, fighting back against illegal surveillance, and defending your right to tinker and hack the devices you own. Of course if you missed us at DEF CON, you can still become an EFF member and grab some new gear when you make a donation today!

Now you can catch up on the EFF talks from DEF CON 31! Below is a playlist of the talks EFF participated in, covering topics from digital surveillance and the world's dumbest cyber mercenaries to the UN Cybercrime Treaty, and more. Check them out here:

Watch EFF Talks from DEF CON 31

Thank you to everyone in the infosec community who supports our work. DEF CON 32 will come sooner than we all expect, so hopefully we'll see you there next year!

Christian Romero

Get Real, Congress: Censoring Search Results or Recommendations Is Still Censorship


Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 

For the past two years, Congress has been trying to revise the Kids Online Safety Act (KOSA) to address criticisms from EFF, human and digital rights organizations, LGBTQ groups, and others, that the core provisions of the bill will censor the internet for everyone and harm young people. All of those changes fail to solve KOSA’s inherent censorship problem: As long as the “duty of care” remains in the bill, it will still force platforms to censor perfectly legal content. (You can read our analyses here and here.)

Despite never addressing this central problem, some members of Congress are convinced that a new change will avoid censoring the internet: KOSA’s liability is now theoretically triggered only for content that is recommended to users under 18, rather than content that they specifically search for. But that’s still censorship—and it fundamentally misunderstands how search works online. 

Congress should be smart enough to recognize this bait-and-switch fails to solve KOSA’s many faults

As a reminder, under KOSA, a platform would be liable for not “acting in the best interests of a [minor] user.” To do this, a platform would need to “tak[e] reasonable measures in its design and operation of products and services to prevent and mitigate” a long list of societal ills, including anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal behaviors. As we have said, this will be used to censor what young people and adults can see on these platforms. The bill’s coauthors agree, writing that KOSA “will make platforms legally responsible for preventing and mitigating harms to young people online, such as content promoting suicide, eating disorders, substance abuse, bullying, and sexual exploitation.”

Our concern, and the concern of others, is that this bill will be used to censor legal information and restrict minors’ ability to access it, while adding age verification requirements that will push adults off the platforms as well. Additionally, enforcement provisions in KOSA give power to state attorneys general to decide what is harmful to minors, a recipe for disaster that will exacerbate efforts already underway to restrict access to information online (and offline). The result is that platforms will likely feel pressured to remove enormous amounts of information to protect themselves from KOSA’s crushing liability—even if that information is not harmful.

The ‘Limitation’ section of the bill is intended to clarify that KOSA creates liability only for content that the platform recommends. In our reading, this is meant to refer to the content that a platform shows a user that doesn’t come from an account the user follows, is not content the user searches for, and is not content that the user deliberately visits (such as by clicking a URL). In full, the ‘Limitation’ section states that the law is not meant to prevent or preclude “any minor from deliberately and independently searching for, or specifically requesting, content,” nor should it prevent the “platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.” 

In layman’s terms, minors will supposedly still have the freedom to follow accounts, search for, and request any type of content, but platforms won’t have the freedom to share some types of content with them. Again, that fundamentally misunderstands how social media works—and it’s still censorship.



Courts Have Agreed: Recommendations are Protected

If, as the bill’s authors write, they want to hold platforms accountable for “knowingly driving toxic, addicting, and dangerous content” to young people, why stop at search—which can also show toxic, addicting, or dangerous content? We think this section was added for two reasons.

First, members of Congress have attacked social media platforms’ use of automated tools to present content for years, claiming that it causes any number of issues ranging from political strife to mental health problems. The evidence supporting those claims is unclear (and the reverse may be true). 

Second, and perhaps more importantly, the authors of the bill likely believe pinning liability on recommendations will allow them to square a circle and get away with censorship while complying with the First Amendment. It will not.

Courts have affirmed that recommendations are meant to facilitate the communication and content of others and “are not content in and of themselves.” Making a platform liable for the content they recommend is making them liable for the content itself, not the recommending of the content—and that is unlawful. Platforms’ ability to “filter, screen, allow, or disallow content;” “pick [and] choose” content; and make decisions about how to “display,” “organize,” or “reorganize” content is protected by 47 U.S.C. § 230 (“Section 230”), and the First Amendment. (We have written about this in various briefs, including this one.) This “Limitation” in KOSA doesn’t make the bill any less censorious. 

Search Results Are Recommendations

Practically speaking, there is also no clear distinction between “recommendations” and “search results.” The coauthors of KOSA seem to think that content which is shown as a result of a search is not a recommendation by the platform. But of course it is. Accuracy and relevance in search results are algorithmically generated, and any modern search method uses an automated process to determine the search results and the order in which they are presented, which it then recommends to the user. 
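The point that ranked search results are themselves recommendations can be illustrated with a toy example. This is a hypothetical sketch, not any platform's actual ranking code; real systems use far richer relevance signals, but the structure is the same:

```python
# Hypothetical sketch: even the simplest search engine scores and orders
# candidates before showing them -- and that ordering is itself a
# recommendation by the platform.
def search(query: str, posts: list[str], limit: int = 3) -> list[str]:
    q_terms = set(query.lower().split())

    def relevance(post: str) -> int:
        # Count overlapping terms; production systems use far more signals.
        return len(q_terms & set(post.lower().split()))

    # Sorting by relevance is an automated editorial decision about
    # which content to surface first.
    return sorted(posts, key=relevance, reverse=True)[:limit]

posts = [
    "weekly cooking tips",
    "student loans explained",
    "refinancing student loans after college",
]
print(search("student loans", posts))
```

Whatever scoring function is plugged in, the platform is algorithmically choosing what a user sees and in what order, which is exactly the activity KOSA's authors describe as "recommending."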

KOSA’s authors also assume, incorrectly, that content on social media can easily be organized, tagged, or described in the first place, such that it can be shown when someone searches for it, but not otherwise. But content moderation at infinite scale will always fail, in part because whether content fits into a specific bucket is often subjective in the first place.

The coauthors of KOSA seem to think that content which is shown as a result of a search is not a recommendation by the platform. But of course it is.

For example: let’s assume that using KOSA, an attorney general in a state has made it clear that a platform that recommends information related to transgender healthcare will be sued for increasing the risk of suicide in young people. (Because trans people are at a higher risk of suicide, this is one of many ways that we expect an attorney general could torture the facts to censor content—by claiming that correlation is causation.) 

If a young person in that state searches social media for “transgender healthcare,” does this mean that the platform can or cannot show them any content about “transgender healthcare” as a result? How can a platform know which content is about transgender healthcare, much less whether the content matches the attorney general’s views on the subject, or whether they have to abide by that interpretation in search results? What if the user searches for “banned healthcare?” What if they search for “trans controversy?” (Most people don’t search for the exact name of the piece of content they want to find, and most pieces of content on social media aren’t “named” at all.) 

In this example, and in an enormous number of other cases, platforms can’t know in advance what content a person is searching for. Rather than risk showing something controversial that the person did not intend to find, they will remove that content entirely, from recommendations as well as search results. If liability exists for showing it, platforms will remove users’ ability to access all content related to a dangerous topic rather than risk showing it in the occasional instance when they can determine, for certain, that it is what the user is looking for. This blunt response will not only harm children who need access to information, but also adults who may seek the same content online.

“Nerd Harder” to Remove Content Will Never Work

Third, as we have written before, it is impossible for platforms to know what types of content they would be liable for recommending (or showing in search results) in the first place. Because there is no definition of harmful or depressing content that doesn’t include a vast amount of protected expression, almost any content could fit into the categories that platforms would have to censor.  This would include truthful news about what’s going on in the world, such as wars, gun violence, and climate change. 

This Limitation section will have no meaningful effect on the censorial nature of the law. If KOSA passes, the only real option for platforms would be to institute age verification and ban minors entirely, or to remove any ‘recommendations’ and ‘search’ functions almost entirely for minors. As we’ve said repeatedly, these efforts will also impact adult users who either lack the ability to prove they are not minors or are deterred from doing so. Most smaller platforms would be pressured to ban minors entirely, while larger ones, with more money for content moderation and development, would likely block them from finding enormous swathes of content unless they have the exact URL to locate it. In that way, KOSA’s censorship would further entrench the dominant social media platforms.

Congress should be smart enough to recognize this bait-and-switch fails to solve KOSA’s many faults. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing your opposition. 




Jason Kelley

The Federal Government’s Privacy Watchdog Concedes: 702 Must Change


The Privacy and Civil Liberties Oversight Board (PCLOB) has released its much-anticipated report on Section 702, a legal authority that allows the government to collect a massive amount of digital communications around the world and in the U.S. The PCLOB agreed with EFF and organizations across the political spectrum that the program requires significant reforms if it is to be renewed before its December 31, 2023 expiration. Of course, EFF believes that Congress should go further–including letting the program expire–in order to restore the privacy being denied to anyone whose communications cross international boundaries. 

PCLOB is an organization within the federal government appointed to monitor the impact of national security and law enforcement programs and techniques on civil liberties and privacy. Despite this mandate, the board has a history of tipping the scales in favor of the privacy-annihilating status quo. This history is exactly why the recommendations in their new report are such a big deal: the report says Congress should require individualized authorization from the Foreign Intelligence Surveillance Court (FISC) for any searches of 702 databases for U.S. persons. Oversight, even by the secretive FISC, would be a departure from the current system, in which the Federal Bureau of Investigation can, without warrant or oversight, search for communications to or from any of the millions of people in the United States whose communications have been vacuumed up by the mass surveillance program.

The report also recommends a permanent end to the legal authority that allows “abouts” collection, a search that allows the government to look at digital communications between two “non-targets”–people who are not the subject of the investigation–as long as they are talking “about” a specific individual.  The Intelligence Community voluntarily ceased this collection after increasing skepticism about its legality from the FISC. We agree with the PCLOB that it’s time to put the final nail in the coffin of this unconstitutional mass collection. 

Section 702 allows the National Security Agency to collect communications from all over the world. Although the authority supposedly prohibits targeting people on U.S. soil, people in the United States communicate with people overseas all the time and routinely have their communications collected and stored under this program. This results in a huge pool of what the government calls “incidentally” collected communications from Americans which the FBI and other federal law enforcement organizations eagerly exploit by searching without a warrant. These unconstitutional “backdoor” searches have happened millions of times and have continued despite a number of attempts by courts and Congress to rein in the illegal practice.

Along with over a dozen organizations, including the ACLU, Center for Democracy & Technology, Demand Progress, Freedom of the Press Foundation, Project On Government Oversight, and the Brennan Center, EFF lent its voice to the request that the following reforms be the bare minimum precondition for any reauthorization of Section 702:

  • Requiring the government to obtain a warrant before searching the content of Americans’ communications collected under intelligence authorities;
  • Establishing legislative safeguards for surveillance affecting Americans that is conducted overseas under Executive Order 12333–an authority that raises many of the same concerns as Section 702, as previously noted by PCLOB members;
  • Closing the data broker loophole, through which intelligence and law enforcement agencies purchase Americans’ sensitive location, internet, and other data without any legal process or accountability;
  • Bolstering judicial review in FISA-related proceedings, including by shoring up the government’s obligation to give notice when information derived from FISA is used against a person accused of a crime; and
  • Codifying reasonable limits on the scope of intelligence surveillance.

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

Tell Congress: End 702 absent serious reforms

Matthew Guariglia

EFF to D.C. Circuit: Animal Rights Activists Shouldn’t Be Censored on Government Social Media Pages Because Agency Disagrees With Their Viewpoint


Intern Muhammad Essa contributed to this post.

EFF, along with the Foundation for Individual Rights and Expression (FIRE), filed a brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court ruling that upheld the censorship of public comments on a government agency’s social media pages. The district court’s decision is problematic because it undermines our right to freely express opinions on issues of public importance using a modern and accessible way to communicate with government representatives.

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH), arguing that NIH blocks their comments against animal testing in scientific research on the agency’s Facebook and Instagram pages, thus violating the First Amendment. NIH provides funding for research that involves testing on animals from rodents to primates.

NIH claims to apply a general rule prohibiting public comments that are “off topic” to the agency’s social media posts—yet the agency implements this rule by employing keyword filters that include words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop. These words are commonly found in comments that express a viewpoint that is against animal testing and sympathetic to animal rights.

First Amendment law makes it clear that when a government agency opens a forum for public participation, such as the interactive spaces of the agency’s social media pages, it is prohibited from censoring a particular viewpoint in that forum. Any speech restrictions that it may apply must be viewpoint-neutral, meaning that the restrictions should apply equally to all viewpoints related to a topic, not just to the viewpoint that the agency disagrees with.

EFF’s brief argues that courts must approach with skepticism a government agency’s claim that its “off topic” speech restriction is viewpoint-neutral and intended only to exclude irrelevant comments. How such a rule is implemented can reveal that it is in fact a guise for unconstitutional viewpoint discrimination. That is the case here, and the district court erred in ruling for the government.

For example, EFF’s brief argues that NIH’s automated keyword filters are imprecise: they cannot accurately implement an “off topic” rule because they cannot grasp the context and nuance needed to compare a comment to a post. NIH’s keyword filters and the agency’s manual enforcement of the “off topic” rule are also highly underinclusive—that is, other people’s comments that are “off topic” to a post are often allowed to remain on the agency’s social media pages. Yet PETA’s comments against animal testing are reliably censored.
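As a toy illustration (not NIH’s actual system, which we know only through the keyword list reported above), a naive substring filter of this kind blocks a comment for the words it uses, not for whether it is on topic:

```javascript
// Hypothetical sketch of a keyword filter like the one described above.
// Keywords are drawn from the reported NIH list; everything else is illustrative.
const BLOCKED = ['cruelty', 'revolting', 'torment', 'torture', 'hurt', 'kill', 'stop'];

function isBlocked(comment) {
  const text = comment.toLowerCase();
  // Flags any comment containing a blocked substring, with no notion of
  // whether the comment relates to the post it appears under.
  return BLOCKED.some((keyword) => text.includes(keyword));
}

// An on-topic comment expressing an anti-animal-testing viewpoint is filtered...
console.log(isBlocked('Please stop funding research that hurts animals.')); // true
// ...while a genuinely off-topic comment sails through.
console.log(isBlocked('Congrats on the new building!')); // false
```

The asymmetry is the point: a filter keyed to the vocabulary of one viewpoint will reliably suppress that viewpoint while missing off-topic comments phrased in other words.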

Imprecise and underinclusive enforcement of the “off topic” rule suggests that NIH’s rule is not viewpoint-neutral but is really a means to block PETA activists from engaging with the agency online.

EFF’s brief urges the D.C. Circuit to reject the district court’s erroneous holding and rule in favor of the plaintiffs. This would protect everyone’s right to express their opinions freely online. The free exchange of opinions informs public policy and is a crucial characteristic of a democratic society. A genuine representative government must not be afraid of public criticism.

Sophia Cope

How To Turn Off Google’s “Privacy Sandbox” Ad Tracking—and Why You Should

1 day 16 hours ago

Google has rolled out "Privacy Sandbox," a Chrome feature first announced back in 2019 that, among other things, exchanges third-party cookies—the most common form of tracking technology—for what the company is now calling "Topics." Topics is a response to pushback against Google’s proposed Federated Learning of Cohorts (FLoC), which we called "a terrible idea" because it gave Google even more control over advertising in its browser while not truly protecting user privacy. While there have been some changes to how this works since 2019, Topics is still tracking your internet use for Google’s behavioral advertising.

If you use Chrome, you can disable this feature through a series of three confusing settings.

With the version of the Chrome browser released in September 2023, Google tracks your web browsing history and generates a list of advertising "topics" based on the websites you visit. This works as you might expect. At launch there are almost 500 advertising categories—like "Student Loans & College Financing," "Parenting," or "Undergarments"—that you get dumped into based on whatever you're reading about online. A site that supports Privacy Sandbox will ask Chrome what sorts of things you're supposedly into, and then display an ad accordingly.
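In code, that "ask" looks roughly like the sketch below. This is a hedged illustration of the Topics API shape from Chrome's proposal (`document.browsingTopics()`), with a stand-in `fakeDocument` object for testing; availability, permissions policy, and the exact fields returned vary by browser and version.

```javascript
// Sketch: how a participating site might read interest topics.
// Browsers without the API (Firefox, Safari, or Chrome with the
// "Ad topics" setting off) simply expose nothing.
async function getAdTopics(doc) {
  if (!doc || typeof doc.browsingTopics !== 'function') {
    return []; // no topics available: fall back to contextual ads
  }
  try {
    // Resolves to an array of topic entries drawn from the
    // browser-maintained interest list.
    return await doc.browsingTopics();
  } catch (err) {
    return []; // user settings or permissions policy may block the call
  }
}

// Stand-in document object (hypothetical values, for illustration only):
const fakeDocument = {
  browsingTopics: async () => [{ topic: 254, taxonomyVersion: '1' }],
};
```

Note the fallback branches: opting out, or using a browser without the API, degrades gracefully to contextual advertising because the site simply sees an empty list.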

The idea is that instead of the dozens of third-party cookies placed on websites by different advertisers and tracking companies, Google will track your interests in the browser itself, controlling even more of the advertising ecosystem than it already does. Google calls this “enhanced ad privacy,” perhaps leaning into the idea that starting in 2024 it plans to “phase out” the third-party cookies that many advertisers currently use to track people. But the company will still gobble up your browsing habits to serve you ads, preserving its bottom line in a world where competition on privacy is pushing it to phase out third-party cookies.

Google plans to test Privacy Sandbox throughout 2024, which means that for the next year or so, third-party cookies will continue to collect and share your data in Chrome.

The new Topics improves somewhat over the 2019 FLoC. It does not use the FLoC ID, a number that many worried would be used to fingerprint you. The ad-targeting topics are all public on GitHub, hopefully avoiding any clearly sensitive categories such as race, religion, or sexual orientation. Chrome's ad privacy controls, which we detail below, allow you to see what sorts of interest categories Chrome puts you in, and remove any topics you don't want to see ads for. There's also a simple means to opt out, which FLoC never really had during testing.

Other browsers, like Firefox and Safari, baked in privacy protections from third-party cookies in 2019 and 2020, respectively. Neither of those browsers has anything like Privacy Sandbox, which makes them better options if you'd prefer more privacy. 

Google referring to any of this as “privacy” is misleading. Even if it's better than third-party cookies, the Privacy Sandbox is still tracking; it's just done by one company instead of dozens. Instead of waffling between different tracking methods, even with mild improvements, we should work toward a world without behavioral ads.

But if you're sticking to Chrome, you can at least turn these features off.

How to Disable Privacy Sandbox

Depending on when you last updated Chrome, you may have already received a pop-up asking you to agree to “Enhanced ad privacy in Chrome.” If you just clicked the big blue button that said “Got it” to make the pop-up go away, you opted yourself in. But you can still get back to the opt-out page easily enough via the three-dot icon (⋮) > Settings > Privacy & Security > Ad Privacy. There you'll find three settings:

  • Ad topics: This is the fundamental component of Privacy Sandbox that generates a list of your interests based on the websites you visit. If you leave this enabled, you'll eventually get a list of all your interests, which are used for ads, as well as the ability to block individual topics. The topics roll over every four weeks (up from weekly in the FLoC proposal) and random ones will be thrown in for good measure. You can disable this entirely by setting the toggle to "Off."
  • Site-suggested ads: This confusingly named toggle is what allows advertisers to do what’s called "remarketing" or "retargeting," also known as “after I buy a sofa, every website on the internet advertises that same sofa to me.” With this feature, site one gives information to your Chrome instance (like “this person loves sofas”) and site two, which runs ads, can interact with Chrome such that a sofa ad will be shown, even without site two learning that you love sofas. Disable this by setting the toggle to "Off."
  • Ad measurement: This allows advertisers to track ad performance by storing data in your browser that's then shared with other sites. For example, if you see an ad for a pair of shoes, the site would get information about the time of day, whether the ad was clicked, and where it was displayed. Disable this by setting the toggle to "Off."
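For managed machines, the same three toggles (plus the consent pop-up) can be forced off centrally via Chrome enterprise policy. The policy names below match those documented for recent Chrome releases, but treat this as a sketch and verify against the current Chrome Enterprise policy list before deploying:

```json
{
  "PrivacySandboxPromptEnabled": false,
  "PrivacySandboxAdTopicsEnabled": false,
  "PrivacySandboxSiteEnabledAdsEnabled": false,
  "PrivacySandboxAdMeasurementEnabled": false
}
```

On Windows these are typically delivered via Group Policy or the registry; on Linux, as a JSON file under /etc/opt/chrome/policies/managed/.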

If you're on Chrome, Firefox, Edge, or Opera, you should also take your privacy protections a step further with our own Privacy Badger, a browser extension that blocks third-party trackers that use cookies, fingerprinting, and other sneaky methods. On Chrome, Privacy Badger also disables the Topics API by default.

Thorin Klosowski

EFF's Comment to the Meta Oversight Board on Polish Anti-Trans Facebook Post 

2 days 18 hours ago

EFF recently submitted comments in response to the Meta Oversight Board’s request for input on a Facebook post in Polish from April 2023 that targeted trans people. The Oversight Board was created by Meta in 2020 as an appellate body and has 22 members from around the world who review contested content moderation decisions made by the platform.  

Our comments address how Facebook’s automated systems failed to prioritize content for human review. From our observations—and the research of many within the digital rights community—this is a common deficiency, made worse during the pandemic when Meta decreased the number of workers moderating content on its platforms. In this instance, the content was eventually sent for human review, but it was assessed to be non-violating and therefore not escalated further. Facebook kept the post online despite 11 different users reporting it 12 times, and removed it only once the Oversight Board decided to take the case for review.

As EFF has demonstrated, Meta has at times over-removed legal LGBTQ+ related content while simultaneously keeping online content that depicts hate speech toward the LGBTQ+ community. This is often because the content—as in this specific case—is not an explicit depiction of such hate speech, but rather a message embedded in a wider context that automated content moderation tools and inadequately trained human moderators are simply not equipped to consider. These tools cannot recognize nuance or the context of statements, and human reviewers are not trained to remove content that depicts hate speech beyond a basic slur.

This incident serves as part of the growing body of evidence that Facebook’s systems are inadequate in detecting seriously harmful content, particularly that which targets marginalized and vulnerable communities. Our submission looks at the various reasons for these shortcomings and makes the case that Facebook should have removed the content—and should keep it offline.

Read the full submission in the PDF below.

Paige Collings

EFF, ACLU and 59 Other Organizations Demand Congress Protect Digital Privacy and Free Speech

3 days 13 hours ago

Earlier this week, EFF joined the ACLU and 59 partner organizations to send a letter to Senate Majority Leader Chuck Schumer urging the Senate to reject the STOP CSAM Act. This bill threatens encrypted communications and free speech online, and would actively harm LGBTQ+ people, people seeking reproductive care, and many others. EFF has consistently opposed this legislation. This bill has unacceptable consequences for free speech, privacy, and security that will affect how we connect, communicate, and organize.



The STOP CSAM Act, as amended, would lead to censorship of First Amendment protected speech, including speech about reproductive health, sexual orientation and gender identity, and personal experiences related to gender, sex, and sexuality. Even today, without this bill, platforms regularly remove content that has vague ties to sex or sexuality for fear of liability. This would only increase if STOP CSAM incentivized apps and websites to exercise a heavier hand at content moderation.

If enacted, the STOP CSAM Act will also make it more difficult to communicate using end-to-end encryption. End-to-end encrypted communications cannot be read by anyone but the sender or recipient—that means authoritarian governments, malicious third parties, and the platforms themselves can’t read user messages. Offering encrypted services could open apps and websites up to liability, because a court could find that end-to-end encryption services are likely to be used for CSAM, and that merely offering them is reckless.

Congress should not pass this law, which would undermine security and free speech online. Existing law already requires online service providers who have actual knowledge of CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC), a quasi-governmental entity that works closely with law enforcement agencies. Congress and the FTC already have many tools at their disposal to tackle CSAM, some of which go unused.

India McKinney

EFF at FIFAfrica 2023

4 days 14 hours ago

EFF is excited to be in Dar es Salaam, Tanzania for this year's Forum on Internet Freedom in Africa (FIFAfrica), organized by CIPESA (Collaboration on International ICT Policy for East and Southern Africa) from 27 to 29 September 2023.

FIFAfrica is a landmark event in the region that convenes an array of stakeholders from across internet governance and online rights to discuss and collaborate on opportunities for advancing privacy, protecting free expression, and enhancing the free flow of information online. FIFAfrica also offers a space to identify new and important digital rights issues, as well as to explore avenues for engaging with these debates across national, regional, and global spaces.

We hope you have an opportunity to connect with us at the panels listed below. In addition to these, EFF will be attending many other events at FIFAfrica. We look forward to meeting you there!


Combatting Disinformation for Democracy 

2pm to 3:30pm local time 
Location: Hyatt Hotel - Kibo 

Hosted by: CIPESA


  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nompilo Simanje, Africa Advocacy and Partnerships Lead, International Press Institute 
  • Obioma Okonkwo, Head, Legal Department, Media Rights Agenda
  • Daniel O’Maley, Senior Digital Governance Specialist, Center for International Media Assistance 

In an age of falsehoods, facts, and freedoms marked by the rapid spread of information and the proliferation of digital platforms, the battle against disinformation has never been more critical. This session brings together experts and practitioners at the forefront of this fight, exploring the pivotal roles that media, fact checkers, and technology play in upholding truth and combating the spread of false narratives. 

This panel will delve into the multifaceted challenges posed by disinformation campaigns, examining their impact on societies, politics, and public discourse. Through an engaging discussion, the session will spotlight innovative strategies, cutting-edge technologies, and collaborative initiatives employed by media organizations, tech companies, and civil society to safeguard the integrity of information.


Platform Accountability in Africa: Content Moderation and Political Transitions

11am to 12:30pm local time
Location: Hyatt Hotel - Kibo 

Hosted by: Meta Oversight Board, CIPESA, Open Society Foundations 


  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nerima Wako, Executive Director, SIASA PLACE
  • Abigail Bridgman, Deputy Vice President, Content Review and Policy, Meta Oversight Board 
  • Afia Asantewaa Asare-Kyei, Member, Meta Oversight Board

Social media platforms are often criticized for failing to address significant and seemingly preventable harms stemming from online content. This is especially true during volatile political transitions, when disinformation, incitement to violence, and hate speech on the basis of gender, religion, ethnicity, and other characteristics are strongly associated with increased real-life harms.

This session will discuss best practices for combating harmful online content through the lens of the most urgent and credible threats to political transitions on the African continent. With critical general, presidential, and legislative elections fast approaching, as well as the looming threat of violent political transitions, the panelists will highlight current trends in online content, the impact of harmful content, and chart a path forward for the different stakeholders. The session will also assess the roles that different institutions, stakeholders, and experts can play in striking the balance between addressing harms and respecting the human rights of users in such contexts.

Paige Collings

Digital Rights Updates with EFFector 35.12

4 days 16 hours ago

With so much happening in the digital rights movement, it can be difficult to keep up. But EFF has you covered with our EFFector newsletter, containing a collection of the latest headlines! The latest issue is out now and covers a new update to our Privacy Badger browser extension, the fight to require law enforcement to obtain a warrant before using a drone to spy on a home, and EFF's victory helping free the law with Public Resource.

Learn more about all of the latest news by reading the full newsletter here, or you can even listen to an audio version of the newsletter below!

Listen on YouTube

EFFector 35.12 | Freeing the Law with Public Resource

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

The U.S. Government’s Database of Immigrant DNA Has Hit Scary, Astronomical Proportions

4 days 18 hours ago

The FBI recently released its proposed budget for 2024, and its request for a massive increase in funding for its DNA database should concern us all. The FBI is asking for an additional $53 million to aid in the collection, organization, and maintenance of its Combined DNA Index System (CODIS) database in the wake of a 2020 Trump administration rule that requires the Department of Homeland Security to collect DNA from anyone in immigration detention. The database houses genetic information on over 21 million people and added an average of 92,000 DNA samples a month in the last year alone, over 10 times the historical sample volume. The FBI’s increased budget request demonstrates that the federal government has, in fact, made good on its projection of collecting over 750,000 new samples annually from immigrant detainees for CODIS. This type of forcible DNA collection and long-term hoarding of genetic identifiers not only erodes civil liberties by exposing individuals to unnecessary and unwarranted government scrutiny, but also demonstrates the government’s willingness to weaponize biometrics to surveil vulnerable communities.

Since the Supreme Court’s decision in Maryland v. King (2013), which upheld a Maryland statute authorizing DNA collection from individuals arrested for a violent felony offense, states have rapidly expanded DNA collection to encompass more and more offenses—even ones in which DNA plays no part. For example, in Virginia, the ACLU and other advocates fought against a bill that would have added obstruction of justice and shoplifting to the offenses for which DNA could be collected. The federal government’s expansion of DNA collection to all immigrant detainees is the most drastic effort yet to vacuum up as much genetic information as possible, based on false assumptions linking crime to immigration status despite ample evidence to the contrary.

As we’ve previously cautioned, this DNA collection has serious consequences. Studies have shown that increasing the number of profiles in DNA databases doesn’t solve more crimes. A 2010 RAND report instead stated that the ability of police to solve crimes using DNA is “more strongly related to the number of crime-scene samples than to the number of offender profiles in the database.” Moreover, inclusion in a DNA database increases the likelihood that an innocent person will be implicated in a crime. 

Lastly, this increased DNA collection exacerbates the existing racial disparities in our criminal justice system by disproportionately impacting communities of color. Black and Latino men are already overrepresented in DNA databases. Adding nearly a million new profiles of immigrant detainees annually—who are almost entirely people of color, and the vast majority of whom are Latine—will further skew the 21 million profiles already in CODIS.

We are all at risk when the government increases its infrastructure and capacity for collecting and storing vast quantities of invasive data. With the resources to increase the volume of samples collected, and an ever-broadening scope of when and how law enforcement can collect genetic material from people, we are one step closer to a future in which we all are vulnerable to mass biometric surveillance. 

Saira Hussain

Don’t Fall for the Intelligence Community’s Monster of the Week Justifications

1 week ago

In the beloved episodic television shows of yesteryear, the antagonists were often “monsters of the week”: villains who would show up for one episode and get vanquished by the heroes just in time for them to fight the new monster in the following episode. Keeping up with the Intelligence Community’s and law enforcement’s justifications for invasive, secretive, and uncontrollable surveillance powers is a bit like watching one of these shows. This week, they could say they need those powers to fight drugs or other cross-border contraband. Next week, they might need them to fight international polluters, or revert to the tried-and-true national security justifications. The fight over the December 31, 2023 expiration of Section 702 of the Foreign Intelligence Surveillance Act is no exception to the monster-of-the-week phenomenon.

Section 702 is a surveillance authority that allows the National Security Agency to collect communications from all over the world. Although the authority supposedly prohibits targeting people on U.S. soil, people in the United States communicate with people overseas all the time and routinely have their communications collected and stored under this program. This results in a huge pool of “incidentally” collected communications from Americans which the Federal Bureau of Investigation eagerly exploits by searching through without a warrant. These unconstitutional “backdoor” searches have happened millions of times and have continued despite a number of attempts by courts and Congress to rein in the illegal practice.

Take action

TELL congress: End 702 Absent serious reforms

Now, Section 702 is set to expire at the end of December. The Biden administration and the intelligence community, eager to renew their embattled and unpopular surveillance powers, are searching for whatever sufficiently important policy concern is in the news—no matter how disconnected from Section 702’s original purpose—that might convince lawmakers to let them keep all their invasive tools. Justifying the continuation of Section 702 could take the form of vetting immigrants, stopping drug trafficking, or the original and most tried-and-true justification: national security. As National Security Advisor Jake Sullivan wrote in July 2023, “Thanks to intelligence obtained under this authority, the United States has been able to understand and respond to threats posed by the People’s Republic of China, rally the world against Russian atrocities in Ukraine, locate and eliminate terrorists intent on causing harm to America, enable the disruption of fentanyl trafficking, mitigate the Colonial Pipeline ransomware attack, and much more.” Searching for the monster du jour that will scare the public into once again ceding their constitutional right to private communications is what the Intelligence Community does, and has done, for decades.

Fentanyl may be the IC’s current nemesis, but the argumentation behind it is weak. As one recent op-ed in the Hill noted, “Commonsense reforms to protect Americans’ privacy would not make the law less effective in addressing international drug trafficking or other foreign threats. To the contrary, it is the administration’s own intransigence on such reforms that has put reauthorization at risk.”

Since even before 2001, citing the need for new surveillance powers in order to secure the homeland has been a nearly foolproof way of silencing dissenters and creating hard-to-counter arguments for enhanced authorities. These surveillance programs are then so shrouded in secrecy that it becomes impossible to know how they’re being used, if they’re effective, or whether they’ve been abused.

With the pressure to renew Section 702 looming, we know the White House is feeling the pressure of our campaign to restore the privacy of our communications. No matter what bogeyman they present to us to justify its clean renewal, we have to keep the pressure up. You can use this easy tool to contact your members of Congress and tell them: absent major reforms, let 702 expire!

Take action

TELL congress: End 702 Absent serious reforms

Matthew Guariglia

This Bill Would Revive The Worst Patents On Software—And Human Genes  

1 week 1 day ago

The majority of high-tech patent lawsuits are brought by patent trolls—companies that exist not to provide products or services, but to make a business of using patents to threaten others’ work.

Some politicians are proposing to make that bad situation worse. Rather than taking the problem of patent trolling seriously, they want to encourage more bad patents, and make life easier—and more profitable—for the worst patent abusers. 

Take Action

Congress Must Not Bring Back The Worst Computer Patents

The Patent Eligibility Restoration Act (PERA), S. 2140, sponsored by Senators Thom Tillis (R-NC) and Chris Coons (D-DE), would be a huge gift to patent trolls, a few tech firms that aggressively license patents, and patent lawyers. For everyone else, it would be a huge loss. That’s why we’re opposing it, and asking our supporters to speak out as well.

Patent trolling is still a huge, multi-billion dollar problem that’s especially painful for small businesses and everyday internet users. But, in the last decade, we’ve made modest progress placing limits on patent trolling. The Supreme Court’s 2014 decision in Alice v. CLS Bank barred patents that were nothing more than abstract ideas with computer jargon added in. Using the Alice test, federal courts have kicked out a rogue’s gallery of hundreds of the worst patents. 

Under Alice’s clear rules, courts threw out ridiculous patents on “matchmaking,” online picture menus, scavenger hunts, and online photo contests. The nation’s top patent court, the Federal Circuit, actually approved a patent on watching an ad online twice before the Alice rules finally made it clear that patents like that cannot be allowed. The patents on “bingo on a computer”? Gone under Alice. Patents on loyalty programs (on a computer)? Gone. Patents on upselling (with a computer)? All gone.

Alice isn’t perfect, but it has done a good job saving internet users from some of the worst patent claims. At EFF, we have collected stories of people whose careers, hobbies, or small companies were “Saved by Alice.” It’s hard to believe that anyone would want to invite such awful patents back into our legal system—but that’s exactly what PERA does.

PERA’s attempt to roll back progress goes beyond computer technology. For almost 30 years, some biotech and pharmaceutical companies applied for, and were granted, patents on naturally occurring human genes. As a consequence, companies were able to monopolize diagnostic tests that relied on naturally occurring genes to help predict diseases such as breast cancer, making such testing far more expensive. The ACLU teamed up with doctors to confront this horrific practice, and sued. That lawsuit led to a historic victory in 2013, when the Supreme Court disallowed patents on human genes found in nature.

If PERA passes, it will explicitly overturn that ruling, allowing human genes to be patented once again. 

That’s why we’re going to fight this bill, just as we fought off a very similar one last year. Put simply: it’s wrong to let anyone patent basic internet use. It hurts innovation, and it hurts free speech. Nor will we stand idly by when threatened with patents on the building blocks of human life—a nightmarish concept that should be relegated to sci-fi shows.

Take Action

Some Things Shouldn't Be Patented

This Bill Destroys The Best Legal Defense Against Bad Patents 

Critically, Alice allows patents to be thrown out under Section 101 of the patent law before patent trolls can force their opponents into expensive litigation discovery. This is the most efficient and correct way for courts to throw out patents that never should have been issued in the first place. If a patent can’t pass the Alice test, it’s really not much of an “invention” at all.

But the effectiveness of the Alice test has meant that some patent trolls and IP lawyers aren’t making as much money. That’s why they want to insist that other areas of law should be used to knock out bad patents, like the ones requiring patents to be novel and non-obvious. 

This position is willfully blind to the true business model of patent trolling. The patent trolls know their patents are terrible—that’s why they often don’t want them tested in court at all. Many of the worst patent holders, such as Landmark Technology or some Leigh Rothschild entities, make it a point never to get very far in litigation. The cases rarely reach claim construction (an early step in patent litigation, where a judge decides what the patent claims mean), much less a full jury trial. Instead, the trolls simply leverage the high cost of litigation. When it’s hard and expensive for defendants to file a motion challenging a patent, the patents often never get properly tested. Trolling companies thus get to use the judicial system for harassment, making their settlement demands cheaper than fighting back. And for the rare defendant that does fight back, they can simply drop the case.

This Bill Has No Serious Safeguards

The bill eliminates the Alice test and every other judicial limitation on abstract patents that has formed over the decades. After ripping down this somewhat effective gate on the worst patents, it replaces it with a safeguard that’s nearly useless. 

On page 4 of the bill, it states that: 

“performing dance moves, offering marriage proposals, and the like shall not be eligible for patent coverage, and adding a non-essential reference to a computer by merely stating, for example, ‘‘do it on a computer’’ shall not establish such eligibility.” 

The addition of “do it on a computer” patents is an interesting change from last year’s version of the same bill, since that’s a specific phrase we used to critique the bill in our blog post last year.

After Alice, EFF and others rightly celebrated courts’ ability to knock out most “do it on a computer” patents. But “do it on a computer” isn’t language that actually gets used in patents; it’s a description of a whole style of patent. And this bill specifically allows for such patents. It states that any process that “cannot be practically performed without the use of a machine (including a computer)” will be eligible for a patent. 

This language would mean that many of the most ridiculous patents knocked out under Alice in the past decade would survive. They all describe the use of processors, “communications modules,” and other jargon that requires computers. That means patents on an online photo contest, on displaying an object online, on tracking packages, or on making an online menu could once again become part of patent troll portfolios. All would be more effective at extorting everyday people and real innovators making actual products.

“To See Your Own Blood, Your Own Genes”

From the 1980s until the 2013 Myriad decision, the U.S. Patent and Trademark Office granted patents on human genomic sequences. If researchers “isolated” a gene—a necessary part of analysis—they could then get a patent by describing the isolated, or purified, gene as the product of a human process, and insist they weren’t patenting the natural world itself.

But this concept of patenting an “isolated” gene was simply a word game, a distinction without a difference. With the genetic patent in hand, the patent holder could demand royalty payments from any kind of test or treatment involving that gene. And that’s exactly what Myriad Genetics did when it patented the BRCA1 and BRCA2 gene sequences, which are important indicators of the risk of breast or ovarian cancer.

Myriad’s patents significantly increased the cost of those tests to U.S. patients. The company even sent some doctors cease and desist letters, saying the doctors could not perform simple tests on their own patients—even looking at the gene sequences without Myriad’s permission would constitute patent infringement. 

This behavior caused pathologists, scientists, and patients to band together with ACLU lawyers and challenge Myriad’s patents. They litigated all the way to the Supreme Court, and won. “A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated,” the Supreme Court stated in Association for Molecular Pathology v. Myriad Genetics.

A practice like granting and enforcing patents on human genes should truly be left in the dustbin of history. It’s shocking that pro-patent lobbyists have convinced these Senators to introduce legislation seeking to reinstate such patents. Last month, the President of the College of American Pathologists published an op-ed reminding lawmakers and the public about the danger of patenting the human genome, calling gene patents “dangerous to the public welfare.”  

As Lisbeth Ceriani, a breast cancer survivor and a plaintiff in the Myriad case said, “It’s a basic human right to see your own blood, your own genes.”

We can’t allow patents that let internet users be extorted for using the internet to express themselves or do business. And we won’t allow our bodies to be patented. Tell Congress this bill is going nowhere. 

Take Action

Reject Human Gene Patents

Joe Mullin

Today The UK Parliament Undermined The Privacy, Security, And Freedom Of All Internet Users 

1 week 3 days ago

The U.K. Parliament has passed the Online Safety Bill (OSB), which says it will make the U.K. “the safest place” in the world to be online. In reality, the OSB will lead to a much more censored, locked-down internet for British users. The bill could empower the government to undermine not just the privacy and security of U.K. residents, but internet users worldwide.

A Backdoor That Undermines Encryption

A clause of the bill allows Ofcom, the British telecom regulator, to serve a notice requiring tech companies to scan their users (all of them) for child abuse content. This would affect even messages and files that are end-to-end encrypted to protect user privacy. As enacted, the OSB allows the government to force companies to build technology that can scan regardless of encryption; in other words, to build a backdoor. 

These types of client-side scanning systems amount to “Bugs in Our Pockets,” and a group of leading computer security experts has reached the same conclusion as EFF: they undermine privacy and security for everyone. That’s why EFF has strongly opposed the OSB for years.

It’s a basic human right to have a private conversation. This right is even more important for the most vulnerable people. If the U.K. uses its new powers to scan people’s data, lawmakers will damage the security people need to protect themselves from harassers, data thieves, authoritarian governments, and others. Paradoxically, U.K. lawmakers have created these new risks in the name of online safety. 

The U.K. government has made some recent statements indicating that it actually realizes that getting around end-to-end encryption isn’t compatible with protecting user privacy. But given the text of the law, neither the government’s private statements to tech companies, nor its weak public assurances, are enough to protect the human rights of British people or internet users around the world. 

Censorship and Age-Gating

Online platforms will be expected to remove content that the U.K. government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the U.K. as in the U.S., people do not agree about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions. 

The OSB will also lead to harmful age-verification systems. These violate the fundamental principle of anonymous, simple access that has existed since the beginning of the internet. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and to anonymous speech, which is sometimes necessary. 

In the coming months, we’ll be watching what type of regulations the U.K. government publishes describing how it will use these new powers to regulate the internet. If the regulators claim their right to require the creation of dangerous backdoors in encrypted services, we expect encrypted messaging services to keep their promises and withdraw from the U.K. if that nation’s government compromises their ability to protect other users. 

Joe Mullin

We Want YOU (U.S. Federal Employees) to Stand for Digital Freedoms

1 week 3 days ago

It's that time of the year again! U.S. federal employees and retirees can support the digital freedom movement through the Combined Federal Campaign (CFC).

The Combined Federal Campaign is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year, 175 members of the CFC community raised over $34,000 for EFF's lawyers, activists, and technologists fighting for digital freedoms online. But in a year with so many emerging threats to our rights online, we need your support now more than ever.

Giving to EFF through the CFC is easy! Just head over to and use our ID 10437. Once there, click DONATE to give via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can increase your support as well! Scan the QR code below to easily make a pledge or go to!

This year's campaign theme—GIVE HAPPY—shows that when U.S. federal employees and retirees give together, they make a meaningful difference to countless individuals throughout the world. They ensure that organizations like EFF can continue working towards our goals even during challenging times.

With support from those who pledged through the CFC last year, EFF has:

  • Authored amicus briefs in multiple court cases, leading a federal judge to find that device searches at the U.S. border require a warrant.
  • Forced the San Francisco Board of Supervisors to reverse a decision and stop police from equipping robots with deadly weapons.
  • Made great strides in passing protections for the right to repair your tech, with the combined strength of innovation advocates around the country.
  • Convinced Apple to finally abandon its device-scanning plan and encrypt iCloud storage for the good of all its customers.

Federal employees and retirees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!

Christian Romero

New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy

1 week 3 days ago

We released a new version of Privacy Badger that updates how we fight “link tracking” across a number of Google products. With this update Privacy Badger removes tracking from links in Google Docs, Gmail, Google Maps, and Google Images results. Privacy Badger now also removes tracking from links added after scrolling through Google Search results.

Link tracking is a creepy surveillance tactic that allows a company to follow you whenever you click on a link to leave its website. As we wrote in our original announcement of Google link tracking protection, Google uses different techniques in different browsers. The techniques also vary across Google products. One common link tracking approach surreptitiously redirects the outgoing request through the tracker’s own servers. There is virtually no benefit [1] for you when this happens. The added complexity mostly just helps Google learn more about your browsing.
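To illustrate the redirect trick described above, here is a minimal sketch of how a wrapped link can be unwrapped. The exact host, path, and parameter names (`/url`, `q`, `url`) are examples of one pattern Google search results have used; real tracking links vary by product and browser, and this is not Privacy Badger's actual code.

```python
from urllib.parse import parse_qs, urlparse

def unwrap_google_redirect(link):
    """Return the real destination hidden inside a Google-style redirect
    link, or the link unchanged if it isn't one. Illustrative only."""
    parsed = urlparse(link)
    if parsed.netloc.endswith("google.com") and parsed.path == "/url":
        params = parse_qs(parsed.query)
        # Search results have typically stashed the destination in "q" or "url"
        for key in ("q", "url"):
            if key in params:
                return params[key][0]
    return link

wrapped = "https://www.google.com/url?q=https://example.com/page&sa=D"
print(unwrap_google_redirect(wrapped))  # https://example.com/page
```

The point of the sketch: the destination you wanted is a mere query parameter on a request to the tracker's server, which logs the click before sending you along.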

It's been a few years since our original release of Google link tracking protection. Things have changed in the meantime. For example, Google Search now dynamically adds results as you scroll the page ("infinite scroll" has mostly replaced distinct pages of results). Google Hangouts no longer exists! This made it a good time for us to update Privacy Badger’s first party tracking protections.

You can always check to see what Privacy Badger has done on the site you’re currently on by clicking on Privacy Badger’s icon in your browser toolbar. Whenever link tracking protection is active, you will see that reflected in Privacy Badger’s popup window.

We'll get into the technical explanation about how this all works below, but the TL;DR is that this is just one way that Privacy Badger continues to create a less tracking- and tracker-riddled internet experience.

More Details

This update is an overhaul of how Google link tracking removal works. Trying to get it all done inside a “content script” (a script we inject into Google pages) was becoming increasingly untenable. Privacy Badger wasn’t catching all cases of tracking and was breaking page functionality. Patching to catch the missed tracking with the content script was becoming unreasonably complex and likely to break more functionality.

Going forward, Privacy Badger will still attempt to replace tracking URLs on pages with the content script, but will no longer try to prevent links from triggering tracking beacon requests. Instead, it will block all such requests in the network layer.

Often the link destination is replaced with a redirect URL in response to interaction with the link. Sometimes Privacy Badger catches this mutation in the content script and fixes the link in time. Sometimes the page uses a more complicated approach to covertly open a redirect URL at the last moment, which isn’t caught in the content script. Privacy Badger works around these cases by redirecting the redirect to where you actually want to go in the network layer.
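The network-layer fix described above can be modeled roughly as follows. This is a toy model in Python, not extension code: `on_before_request` stands in for a handler like the browser's `webRequest.onBeforeRequest` listener, and `TRACKING_HOSTS` is a hypothetical placeholder for the many patterns the real extension matches.

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical stand-in for the patterns the extension actually matches
TRACKING_HOSTS = {"www.google.com"}

def on_before_request(request_url):
    """Model of a network-layer handler: if the outgoing request is a
    tracking redirect, answer with the true destination so the browser
    skips the tracker's server entirely ("redirecting the redirect")."""
    parsed = urlparse(request_url)
    if parsed.netloc in TRACKING_HOSTS and parsed.path == "/url":
        params = parse_qs(parsed.query)
        if "url" in params:
            return {"redirectUrl": params["url"][0]}
    return {}  # any other request proceeds unmodified

print(on_before_request("https://www.google.com/url?url=https://example.org/"))
# {'redirectUrl': 'https://example.org/'}
```

Because this runs in the network layer rather than in the page, it catches even the covert last-moment redirects that a content script never sees.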

Google’s Manifest V3 (MV3) removes the ability to redirect requests using the flexible webRequest API that Privacy Badger uses now. MV3 replaces blocking webRequest with the limited-by-design declarativeNetRequest (DNR) API. Unfortunately, this means that MV3 extensions are not able to properly fix redirects at the network layer at this time. We would like to see this important functionality gap resolved before MV3 becomes mandatory for all extensions.

Privacy Badger still attempts to remove tracking URLs with the content script so that you can always see, and copy to the clipboard, the links you actually want rather than the mangled links you don’t. For example, without this feature, you may expect to copy “”, but you will instead get something like “”.

To learn more about this update, and to see a breakdown of the different kinds of Google link tracking, visit the pull request on GitHub.

Let us know if you have any feedback through email, or, if you have a GitHub account, through our GitHub issue tracker.

To install Privacy Badger, visit Thank you for using Privacy Badger!

  • [1] No benefit outside of removing the referrer information, which can be accomplished without resorting to obnoxious redirects.
Alexei Miagkov

UN Cybercrime Treaty Talks End Without Consensus on Scope And Deep Divides About Surveillance Powers

2 weeks 2 days ago

As the latest negotiating session on the proposed UN Cybercrime Treaty wrapped up in New York earlier this month, one thing was clear: with time running out to finalize the text, little progress was made, and no consensus was reached, on crucial points such as the treaty's overall scope of application and the reach of its criminal procedure mandates and international cooperation measures.

Instead, a plethora of proposed wording changes was added, further complicated by additional amendments published in informal reports well after the two-week session ended September 1. Many of the same highly dangerous criminal offenses and surveillance measures that had not made it into the zero draft were reintroduced into the text. The original zero draft, as well as the last set of amendments discussed in behind-closed-doors negotiations, turned into a sea of redlines.

It became apparent that many nations, including Russia, Eritrea, Burundi, Sierra Leone, Zimbabwe, Ghana, Korea, and others, were vying to expand the proposed treaty's surveillance scope to cover practically any offense imaginable where a computer was involved, both at home and abroad.

“We believe a future convention ought to cover the largest possible range of offenses that could be committed using information and communication technologies (ICTs),” said Burkina Faso’s delegate.

According to the domestic surveillance chapter, evidence gathering could be marshaled against any act deemed criminal as defined by that nation's own laws. When shifting to international cooperation, the initial drafts and several succeeding amendments indicate that the standard for such surveillance cooperation could be offenses with penalties ranging from three or more years in prison (earlier text limited it to four years), among other alternatives. This proposed treaty could serve as a global license to suppress dissenters, minorities, activists, journalists, and more.

Canada warned delegates about the potential consequences. In a statement (at minute 01:01) that garnered rare applause from the floor, it laid out in stark terms that the relentless push to expand the proposed treaty’s scope has turned it into a general criminal mutual legal assistance treaty that leaves it completely in the hands of any state to decide what conduct is a “crime” or a “serious crime,” and opens up a menu of measures to crack down on those crimes. 

“This represents the potential, and indeed inevitability, for Orwellian reach and control by those states who will choose to abuse this instrument…”

“Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.

“…this is a UN Convention, and as such our responsibility is much bigger than ourselves, it is to the people in those places where there are no protections and where this treaty will be an unprecedented multilateral tool to extend the reach and collaboration of repression and persecution.”

What’s more, Canada said, the UN would be going against its own practices if the cybercrime treaty allows Member States to choose whatever crimes they wish to be covered and targeted under the convention.

“We can find no other UN criminal justice treaty, or any other treaty under the UN for that matter, that leaves it completely in the hands and whims of Member States to define the breadth and type of subject matter that comes under the scope in the instrument, in perpetuity.”

New Zealand, Switzerland, Norway, Uruguay, and Costa Rica, along with Human Rights Watch, Article 19, EFF, Privacy International, Global Partners Digital, and other civil society groups, and companies like Microsoft, also sounded alarms, as we have for years, about the inherent human rights risks posed by the broad scope of the Convention. EFF continued to advocate for narrowing the scope of the treaty and its chapters, adding robust data protection and human rights safeguards throughout the proposed Convention, and removing Article 28.4, which would empower competent authorities to compel individuals with knowledge of specific computer or device functionalities to provide information essential for conducting searches. (Read more about our current demands.)

The scope of the proposed Cybercrime Treaty will have a profound impact on human rights. The question of whether the Convention should apply broadly or be limited in its application affects everything, from criminal procedures (such as domestic surveillance) to international cooperation (like cross-border spying or assistance).

Simply put, if Country B chooses to act as "big brother" for Country A, it could tap into an activist’s live chats or trace their exact whereabouts, all based on the loose privacy standards and arbitrary criminal definitions set by Country B's laws. The absence of a mandate in the proposed Treaty for the same act to be a crime in both countries only magnifies the risks.

And the proposed 3 or 4-year sentence threshold to invoke the international cooperation powers does little to instill confidence. Many laws criminalizing speech could comfortably fit this mold, paving the way for wide-ranging surveillance misuse.

Sierra Leone told Member States during the New York negotiating session:

“Imagine a scenario where a particular national residing in another country continues to use the influence of social media to spread propaganda and hateful messages and incite violence that leads to fatal clashes with security forces,” Sierra Leone said. “These crimes have the potential to interfere with the sovereignty of nations and their peace and stability when individuals become incited by opponents to cause mayhem in another state using ICTs.”

And while governments like the US say they’ll deny requests for electronic evidence on human rights grounds, the draft treaty as a whole risks formalizing a system for international cooperation that encourages surveillance and data sharing, anchored in the laws of the country requesting the assistance and the privacy standards of the country providing it. In that respect, Human Rights Watch Senior Researcher Deborah Brown, underscoring the gravity of national laws misaligned with international standards, emphasized:

“There are many examples of national laws that are inconsistent with international free expression standards and carry 3+ year or 4+ year sentences, as well as examples of such laws being used to prosecute journalists, human rights defenders, free thinkers, and others.

"Some States argue that they will exercise their right to refuse assistance on investigations on human rights grounds. But leaving such critical decisions up to the discretion of government authorities is extremely risky. And if the treaty opens the gates to international cooperation for every conceivable offense, those authorities are going to need to become experts in every crime around the world and their potential misuses. This isn't a focused effort anymore. Rather than honing in on the cybercrimes this convention intended to tackle, there is a risk of diluting efforts and overwhelming mutual legal assistance channels with a deluge of requests.”

But even if certain countries opt to adhere to the dual criminality principle, endorsing the treaty's broad scope raises concerns. This is because States could still apply dual criminality based on offenses that may not align with human rights law. Essentially, the proposed Treaty is laying a foundation for international cooperation on acts that, in some places, are seen more as expressions of opinion than as genuine criminal offenses.

“By narrowing the scope of this [international cooperation chapter], we are not only preserving these rights but also preventing the potential misuse of the treaty in jurisdictions where freedoms and human rights may not be as robustly protected,” EFF's Katitza Rodriguez told delegates earlier this year.

As Canada said,

“…this is a UN Convention, and as such our responsibility is much bigger than ourselves, it is to the people in those places where there are no protections and where this treaty will be an unprecedented multilateral tool to extend the reach and collaboration of repression and persecution.”

Rights-respecting nations participating in this proposed UN treaty must recognize the gravity of their commitment. A focus solely on their own nation's interests is shortsighted when the global ramifications are so profound.

With such divergent views, it’s clear that reaching a consensus will be a meticulous process, and we question whether it’s even possible. The only acceptable path forward might be limiting the treaty to offenses defined by the convention itself; anything more may be a compromise too great.

The next step in negotiations will be the release of a new draft, which is expected by the end of November. With so little consensus emerging from the New York negotiating session, further negotiations will likely be held in the coming months. A completed draft was supposed to be finalized and approved by Member States early next year; that seems unlikely given the lack of agreement. Whenever a draft is approved, it will be appended to a resolution for consideration and adoption by the UN General Assembly next year. Given the stark disagreements in views, it's increasingly likely the resolution will be put to a vote, demanding a two-thirds majority for approval.

The question remains whether a broadly-scoped treaty potentially legitimizing draconian surveillance powers for investigations of acts deemed criminal that target vulnerable communities and free expression and containing few human rights protections should be adopted by the UN at all. As outlined by Canada, the UN was founded to reaffirm faith in human rights, equal rights, and the dignity of human persons. It was also formed to establish conditions under which justice and respect for the obligations arising from treaties and other sources of international law can be maintained. “It is inconsistent with our mandate at the UN to have one aspect that contradicts the other, to have a treaty that speaks on behalf of the UN but with a scope so broad that it obligates, condones and facilitates the domestic and international crackdown on an almost unbounded breadth of conduct,” Canada said.

We applaud this statement and will continue the hard work of ensuring that the fundamental rights of those who will be subject to the treaty are safeguarded.

Karen Gullo

EFF to Michigan Court: Governments Shouldn’t Be Allowed to Use a Drone to Spy on You Without a Warrant

2 weeks 2 days ago

Should the government have to get a warrant before using a drone to spy on your home and backyard? We think so, and in an amicus brief filed last Friday in Long Lake Township v. Maxon, we urged the Michigan Supreme Court to find that warrantless drone surveillance of a home violates the Fourth Amendment. 

In this case, Long Lake Township hired private operators to repeatedly fly drones over Todd and Heather Maxon's home to take aerial photos and videos of their property in a zoning investigation. The Township did this without a warrant and then sought to use this documentation in a court case against them. In our brief, we argue that the township's conduct was governed by and violated the Fourth Amendment and the equivalent section of the Michigan Constitution. 

The Township argued that the Maxons had no reasonable expectation of privacy based on a series of cases from the U.S. Supreme Court in the 1980s. In those cases, law enforcement used helicopters or small planes to photograph and observe private backyards that were thought to be growing cannabis. The Court found there was no reasonable expectation of privacy—and therefore no Fourth Amendment issue—from aerial surveillance conducted by manned aircraft.  

But, as we pointed out in our brief, drones are fundamentally different from helicopters or airplanes. Drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. In other words, the government can buy thousands of drones for the price of one helicopter and its hired pilot. Drones are also smaller and easier to operate. They can fly at much lower altitudes, and they can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters functions as notice to those who are being watched—it’s unlikely you’ll miss a helicopter circling overhead when you’re sunbathing in your yard, but you may not notice a drone.  

Drone prevalence has soared in recent years, fueled by both private and governmental use. We have documented more than 1,471 law enforcement agencies across the United States that operate drones. In some cities, police have begun implementing “drone as first responder” programs, in which drones are constantly flying over communities in response to routine calls for service. It’s important to remember that communities of color are more likely to be the targets of governmental surveillance. And authorities have routinely used aerial surveillance technologies against individuals participating in racial justice movements. Against this backdrop, states like Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia have enacted statutes requiring warrants for police use of drones.  

Warrantless drone surveillance represents a formidable threat to privacy and it's imperative for courts to recognize the danger that governmental drone use poses to our Fourth Amendment rights.

Hannah Zhao

UN's Cybercrime Convention Draft: A Slippery Slope for LGBTQ+ and Gender Rights

2 weeks 2 days ago

This post is divided into two parts. Part I looks at the draft UN's Cybercrime Convention and its potential implications for LGBTQ+ rights. Part II provides a closer look at how cybercrime laws might specifically impact the LGBTQ+ community and activists in the Middle East and North Africa (MENA) region.

EFF has consistently voiced concerns over the misuse of cybercrime laws across the globe, and particularly their impact on marginalized and vulnerable communities—notably LGBTQ+ individuals. These laws, often marked by their broad scope and vague wording, have also been weaponized against security researchers, artists, journalists, and human rights defenders.

As nations progress in their negotiations surrounding the divisive UN Cybercrime Convention draft, they shoulder an immense responsibility. The potential expansion of cross-border surveillance powers is alarming. Some nations might exploit these powers to probe acts they controversially label as crimes based on subjective moral judgments, rather than universally accepted standards. Such powers might lead to surveillance over simple acts like online content sharing, jeopardizing vulnerable groups like the LGBTQ+ community. The UN must ensure that such broad authorizations are not legitimized. If left unchecked, the initial zero draft and its subsequent amendments could unintentionally bestow expansive investigative and prosecutorial surveillance powers across borders that risk undermining human rights both domestically and internationally.

Article 5 on Human Rights Must be Strengthened

So far, it’s looking bleak for human rights. A proposed amendment championed by Uruguay and backed by 50 nations, aimed at bolstering human rights in Article 5 with gender mainstreaming (at minute 01:15), met strong opposition. Nations like Malaysia, Russia, Syria, Nigeria, and Senegal directly opposed it. Meanwhile, countries like China, Saudi Arabia, Egypt, and Iraq chose to back Article 5 as written in the zero draft, which fails to recognize gender mainstreaming. 

And nothing changed in subsequent behind-the-scenes negotiating sessions aimed at ironing out the amendments to this article—Japan, the chair of the informal group, reported that the "best way forward would be to respect [Ad Hoc Committee] Chair’s original Article 5 of the zero draft without any amendments." The outcomes of these secret informal deliberations were later presented in the main session. Uruguay's response was clear (at minute 01:16): “Integrating this language [gender, vulnerable groups and rule of law safeguards] isn't a threat nor an imposition; it accurately mirrors contemporary realities, ensuring the Convention is up-to-date and aligned with current realities.”

In contrast, the Preamble, Article 1 and 55 of the UN Charter support gender equality, and subsequent international instruments like the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) further obligate states to actively combat all forms of gender discrimination and advance gender equality. And the most recent UN General Assembly Resolution (A/RES/77/211) on privacy in the digital age recognizes the right to privacy as a way to prevent gender-based violence and encourages all relevant stakeholders to mainstream a gender perspective in the development and adoption of digital technologies.

As Derechos Digitales and APC told the UN Member States, “it is essential that international instruments mainstream gender to ensure that norms contribute to the fulfillment of human rights and gender equality.” AlSur echoed this recommendation “to address the specific needs of people of diverse sexual orientations and gender expressions.”

Tighten the Loopholes in the International Cooperation Chapter (Article 3 and 35)

At the end of the treaty negotiating session in August, Canada affirmed (at minute 01:01) that the convention’s scope allows each country to define what constitutes a “crime” or “serious crime” on its own terms, potentially leading to overly broad definitions that can be abused. Canada's concerns about the treaty ring especially true when examining real-life cases. Take, for instance, the Human Rights Watch case of “Yamen,” a young gay man from Jordan. Yamen, after being victimized online, turned to his country's authorities expecting justice. Yet under the very cybercrime law he sought protection from, he found himself accused and sentenced for “online prostitution.”

The expansive scope of the Treaty, as highlighted in the "zero draft" and later set of amendments, presents a significant flaw. The chapter on domestic surveillance in the draft endorses evidence gathering with very intrusive surveillance measures for any criminal offense as defined in each country's domestic law. And the international cooperation chapter (also called the "cross-spying assistance" chapter) gives countries an unsettling degree of freedom, enabling them to cooperate based on their national criminal laws when gathering e-evidence for crimes punishable by more than three years (per the results of the informal negotiations) or four years (as in the zero draft). 

In a nutshell, the draft text enables countries to assist each other in spying, but does so based on each country's criminal law rather than a limited set of core cybercrimes defined by the Convention. This means that the country requesting assistance can individually determine what it labels as "crimes" and then ask another country to deploy its sweeping surveillance measures to collect evidence for most of those crimes. Such a structure inadvertently greenlights nations to share surveillance data on actions or behaviors that might be intrinsically protected under international human rights law.

For instance, in some countries where LGBTQ+ online expressions, including sharing content deemed "immoral," are wrongly criminalized, the draft treaty provisions could be misused to further enable domestic surveillance measures targeting these communities. They could also allow one state to help another track an LGBTQ+ individual's whereabouts when that person is traveling abroad. While some countries can choose to require dual criminality, many that have similar laws, or are allied with the requesting government, will be willing to cooperate. This is not acceptable. States should look not only at themselves but at the broader picture of what they are authorizing under a UN umbrella.

The international cooperation chapter has another central problem. Its scope is overly dependent on the severity of penalties—specifically, three or four years of imprisonment—as the primary metric for allowing one country to request assistance from another in surveillance efforts. Numerous laws criminalizing LGBTQ+ individuals merely for their identity, or for content deemed "immoral," often carry penalties that are four years or more and are wrongfully considered "serious crimes." This poses a substantial threat, especially when such criteria can dictate international collaboration and surveillance. 

In some jurisdictions, acts considered minor offenses could be elevated to appear as serious crimes in others, creating an imbalance in the intensity of surveillance applied to these supposed "infractions." This design flaw could lead to "charging up," where authorities might be motivated to amplify charges to meet the four-year "serious crime" threshold. While this threshold is an improvement over an open mandate covering any crime, its ambiguity risks exploitation. Moreover, the resulting surge in requests could further burden an already overwhelmed mutual legal assistance treaty (MLAT) system, exacerbating existing resource challenges.

Refining the proposed treaty to focus exclusively on core cybercrimes, as explicitly detailed within it, isn't merely a constructive approach; it might be the only route to secure approval from multiple national parliaments. This is why Human Rights Watch, ARTICLE 19, EFF, Privacy International, and many others have called for the proposed convention to explicitly rule out provisions for domestic surveillance and cross-border cooperation concerning non-core cybercrimes. This would ensure that nations do not gain a legal foundation under the UN to legitimize collaboration in gathering evidence for the investigation of arbitrary offenses, many of which do not involve inherently criminal conduct but instead stem from discriminatory laws targeting LGBTQ+ individuals' mere expression of their gender identity, sexual orientation, or beliefs. 

Imagine a nation assisting another in spying on an LGBTQ+ individual's internet use, discerning what websites they visit. They intercept personal conversations in real time. They even track where this individual goes around their city. If authorities in certain countries disproportionately target LGBTQ+ individuals, surveilling them merely for expressing their authentic identities, because such expressions are wrongly categorized as "serious crimes" with penalties of over three years in prison, it glaringly exposes a deep-rooted injustice and raises profound concerns. This isn't just about invading someone's privacy. It's about using intrusive technology to deeply and unfairly discriminate against LGBTQ+ people, putting their safety and freedom at severe risk.

Indeed, this is not an abstract concern but a reality that we've seen play out repeatedly in various countries. For instance, the Human Rights Watch 2022 World Report, alongside Derechos Digitales' findings on cybercrime laws used against LGBTQ+ communities, provides evidence that vague cybercrime laws are frequently used to muzzle dissent, with marginalized groups like women and LGBTQIA+ people the most affected.

Domestic surveillance laws and indiscriminate personal data sharing exacerbate the negative impact of such tools when in the hands of state authorities. They are frequently manipulated to amass “evidence”—not just for prosecuting individuals on the grounds of engaging in same-sex relationships, but also for invoking archaic and suppressive “morality clauses.” This unnerving synergy doesn’t merely facilitate hostility; it amplifies the risks for the LGBTQ+ community and supporting activists. Succumbing to these concessions in any international convention would be devastating, and would mark a perilous setback for human rights. 

Accepting a broader scope would be nothing short of catastrophic—especially for already vulnerable LGBTQ+ communities worldwide. There are many other aspects of the Treaty that raise red flags. Stay tuned to our blog in the coming days as we delve deeper into these pressing concerns.

Our second post will map out the recent cybercrime laws in the MENA region vis-a-vis the standards set under the proposed UN Cybercrime Treaty. Stay tuned.

Katitza Rodriguez

Apple and Google Are Introducing New Ways to Defeat Cell Site Simulators, But Is it Enough?

2 weeks 2 days ago

Cell-site simulators (CSS)—also known as IMSI Catchers and Stingrays—are tools that law enforcement and governments use to track the location of phones, intercept or disrupt communications, spy on foreign governments, or even install malware. Cell-site simulators are also used by criminals to send spam and engage in fraud. We have written previously about the privacy implications of CSS, noting that a common tactic is to trick your phone into connecting to a fake 2G cell tower. In the U.S., every major carrier except T-Mobile has turned off its 2G and 3G networks.1
But many countries outside the U.S. have not yet taken steps to turn off their 2G networks, and there are still areas where 2G is the only option for cellular connections. Unfortunately, almost all phones still support 2G, even those sold in countries like the U.S. where carriers no longer use the obsolete protocol. This is cause for concern: even if every 2G network were shut down tomorrow, the fact that phones can still connect to 2G networks leaves them vulnerable. Upcoming changes in iOS and Android could protect users against fake base station attacks, so let's take a look at how they'll work.

In 2021, Google released an optional feature for Android to turn off the ability to connect to 2G cell sites. We applauded this feature at the time. But we also suggested that other companies could do more to protect against cell-site simulators, especially Apple and Samsung, who had not made similar changes. This year more improvements are being made. 

Google's Efforts to Prevent CSS Attacks 

Earlier this year, Google announced another new mobile security setting for Android. This new setting allows users to prevent their phone from using a "null cipher" when connecting to a cell tower. In a well-configured network, every connection with a cell tower is authenticated and encrypted using a symmetric cipher, with a cryptographic key generated by the phone's SIM card and the tower it is connecting to. When the null cipher is used, however, communications are sent in the clear, unencrypted. Null ciphers are useful for tasks like network testing, where an engineer might need to see the content of the packets going over the wire. They are also critical for emergency calls, where connectivity is the top priority even if someone doesn't have a SIM card installed. Unfortunately, fake base stations can also take advantage of null ciphers to intercept traffic from phones, such as SMS messages, calls, and unencrypted internet traffic. 
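To make the distinction concrete, here is a toy sketch in Python. It is not real cellular cryptography (the `xor_keystream` function is a hypothetical stand-in for actual air-interface ciphers such as GSM's A5/3), but it illustrates why a null cipher matters: the bytes sent over the air are the plaintext itself, so any fake base station in the path can read them.

```python
import os

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real air-interface cipher.
    Repeats the key as a keystream and XORs it with the data."""
    keystream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(keystream, data))

def transmit(data: bytes, key: bytes, null_cipher: bool) -> bytes:
    """What actually goes over the air: the plaintext itself under a
    null cipher, ciphertext otherwise."""
    return data if null_cipher else xor_keystream(key, data)

# In a real network, the session key is derived by the SIM and the network.
key = os.urandom(16)
sms = b"meet at the usual place"

# Null cipher: an eavesdropping fake base station sees the message as-is.
assert transmit(sms, key, null_cipher=True) == sms

# Real cipher: the over-the-air bytes differ from the plaintext, but the
# legitimate tower can reverse them with the shared key.
airwaves = transmit(sms, key, null_cipher=False)
assert airwaves != sms
assert xor_keystream(key, airwaves) == sms
```

The new Android setting effectively forces `null_cipher=False` for every ordinary connection, which is why enabling it closes off this interception technique.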

By turning on this new setting, users can prevent their connection to the cell tower from using a null cipher (except when necessary for a call to emergency services), ensuring that their connection to the cell tower is always encrypted.

We are excited to see Google putting more resources into giving Android users tools to protect themselves from fake base stations. Unfortunately, this setting has not yet been released in vanilla Android, and it will only be available on newer phones running Android 14 or higher,2 but we hope that third-party manufacturers—especially those who make lower-cost Android phones—will bring this change to their phones as well. 

Apple Is Taking Steps to Address CSS for the First Time

Apple has also finally taken steps to protect users against cell-site simulators after being called on to do so by EFF and the broader privacy and security community. Apple announced that in iOS 17, out September 18, iPhones will not connect to insecure 2G mobile towers if they are placed in Lockdown Mode. As the name implies, Lockdown Mode is a setting originally released in iOS 16 that locks down several features for people who are concerned about being attacked by mercenary spyware or other nation-state-level attacks. This is a huge step toward protecting iOS users from fake base station attacks, which have been used as a vector to install spyware such as Pegasus.

We are excited to see Apple taking active measures to block fake base stations and hope it will take more measures in the future, such as disabling null ciphers, as Google has done. 

Samsung Continues to Fall Behind 

Not every major phone manufacturer is taking the issue of fake base stations seriously. So far Samsung has not taken any steps to include the 2G toggle from vanilla Android, nor has it indicated that it plans to any time soon. Hardware vendors often heavily modify Android before distributing it on their phones, so even though the setting is available in the Android Open Source Project, Samsung has so far chosen not to make it available on their phones. Samsung also failed to protect its users earlier this year when for months it did not take action against a fake version of the Signal app containing spyware hosted in the Samsung app store. These failures to act suggest that Samsung considers its users’ security and privacy to be an afterthought. Those concerned with the security and privacy of their mobile devices should strongly consider using other hardware.


We applaud the changes that Google and Apple are introducing with their latest round of updates. Cell-site simulators continue to be a problem for privacy and security all over the world, and it’s good that mobile OS manufacturers are starting to take the issue seriously. 

We recommend that iOS users who are concerned about fake base station attacks turn on Lockdown Mode in anticipation of the new protections in iOS 17. Android users with a Pixel 6 or newer should disable 2G, and should disable null ciphers as soon as their phone supports the setting.

  • 1. T-Mobile plans to disable its 2G network on April 2nd, 2024
  • 2. Specifically phones must be running the latest version of the hardware abstraction layer or HAL.
Cooper Quintin