Tools to Protect Your Privacy Online | EFFector 36.1


New year, but EFF is still here to keep you up to date with the latest digital rights happenings! Be sure to check out our latest newsletter, EFFector 36.1, which covers topics including our thoughts on AI watermarking, the changes we'd like to see in the tech landscape in 2024, and updates to our Street Level Surveillance hub and Privacy Badger.

EFFector 36.1 is out now—you can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on YouTube.



Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

The No AI Fraud Act Creates Way More Problems Than It Solves


Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity”—the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, a bill that purports to protect performers in the long run would instead just make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad amount of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involved recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to know he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone—including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment—that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

Corynne McSherry

Companies Make it Too Easy for Thieves to Impersonate Police and Steal Our Data


For years, people have been impersonating police online in order to get companies to hand over incredibly sensitive personal information. Reporting by 404 Media recently revealed that Verizon handed over the address and phone logs of an individual to a stalker pretending to be a police officer who had a PDF of a fake warrant. Worse, the imposter wasn’t particularly convincing. His request was missing a form that is required for search warrants in his state. He used the name of a police officer who did not exist in the department he claimed to be from. And he used a Proton Mail account, which anyone online can use, rather than an official government email address.

Likewise, bad actors have used breached law enforcement email accounts or domain names to send fake warrants, subpoenas, or “Emergency Data Requests” (which police can send without judicial oversight to get data quickly in supposedly life-or-death situations). Impersonating police to get sensitive information from companies isn’t just the realm of stalkers and domestic abusers; according to Motherboard, bounty hunters and debt collectors have also used the tactic.

We have two very big entwined problems. The first is the “collect it all” business model of too many companies, which creates vast reservoirs of personal information stored in corporate data servers, ripe for police to seize and thieves to steal. The second is that too many companies fail to prevent thieves from stealing data by pretending to be police.

Companies have to make it harder for fake “officers” to get access to our sensitive data. For starters, they must do better at scrutinizing warrants, subpoenas, and emergency data requests when they come in. These requirements should be spelled out clearly in a public-facing privacy policy, and all employees who deal with data requests from law enforcement should receive training in how to adhere to these requirements and spot fraudulent requests. Fake emergency data requests raise special concerns, because real ones depend on the discretion of both companies and police—two parties with less than stellar reputations for valuing privacy. 
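
To make that concrete, here is a minimal sketch of the kind of automated screening a company could run on incoming requests. Every name and rule below is a hypothetical stand-in, the checks simply mirror the red flags from the Verizon case above, and no filter replaces out-of-band verification, such as calling the agency back through a publicly listed number:

```typescript
// Hypothetical intake screening for law-enforcement data requests.
// All identifiers, domain lists, and form rules are illustrative stand-ins.

interface DataRequest {
  senderEmail: string;         // address the request arrived from
  claimedAgencyDomain: string; // e.g. "police.examplecity.gov" (hypothetical)
  attachedForms: string[];     // document names included with the request
}

// Free webmail providers that anyone can sign up for.
const FREE_WEBMAIL = new Set(["gmail.com", "proton.me", "protonmail.com", "outlook.com"]);

function redFlags(req: DataRequest, requiredForms: string[]): string[] {
  const flags: string[] = [];
  const senderDomain = (req.senderEmail.split("@")[1] ?? "").toLowerCase();

  // Flag 1: the request came from a free webmail account, not a government domain.
  if (FREE_WEBMAIL.has(senderDomain)) {
    flags.push(`sender uses free webmail (${senderDomain})`);
  }
  // Flag 2: the sender's domain doesn't match the agency the request claims to be from.
  if (senderDomain !== req.claimedAgencyDomain.toLowerCase()) {
    flags.push("sender domain does not match claimed agency");
  }
  // Flag 3: paperwork that the requester's state requires is missing.
  for (const form of requiredForms) {
    if (!req.attachedForms.includes(form)) {
      flags.push(`missing required form: ${form}`);
    }
  }
  return flags; // any entry means: escalate and verify out-of-band before releasing data
}
```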

Matthew Guariglia

EFF’s 2024 In/Out List


Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.
IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy-first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has since been privatized and consolidated into a handful of walled gardens. In the past few years, though, there has been an accelerating swing back toward decentralization, as users grow fed up with the concentration of power and the prevalence of privacy and free expression violations. Many people are fleeing to smaller, independently operated projects, and we will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode it by scanning for all content all the time. As we’ve said many times, there is no middle ground to content scanning, and no “safe backdoor” if the internet is to remain free and private. Mass scanning of people’s messages is wrong, and at odds with human rights.
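
To see why there is no middle ground, consider a minimal sketch of end-to-end encryption using the tweetnacl library (an illustrative choice, not the protocol of any particular messenger). The server relaying the message only ever holds ciphertext, so any scanning mandate necessarily means weakening the endpoints themselves:

```typescript
// Minimal end-to-end encryption sketch with the tweetnacl library.
// Only the endpoints hold secret keys; a relaying server sees ciphertext only.
import nacl from "tweetnacl";
import { decodeUTF8, encodeUTF8 } from "tweetnacl-util";

const alice = nacl.box.keyPair();
const bob = nacl.box.keyPair();

// Alice encrypts for Bob using Bob's *public* key and her own *secret* key.
const nonce = nacl.randomBytes(nacl.box.nonceLength);
const ciphertext = nacl.box(decodeUTF8("meet at noon"), nonce, bob.publicKey, alice.secretKey);

// A server in the middle holds only (ciphertext, nonce, public keys):
// without a secret key, there is nothing for it to "scan".

// Bob decrypts with his secret key and Alice's public key.
const plaintext = nacl.box.open(ciphertext, nonce, alice.publicKey, bob.secretKey);
console.log(plaintext && encodeUTF8(plaintext)); // "meet at noon"
```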

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor law enforcement’s rapidly growing use of machine learning. This includes harvesting the data that other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, and working to ensure the Treaty doesn’t empower regimes to surveil individuals in countries where criticizing the government, or other speech-related activity, is wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. The internet keeps information flowing and people alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and suppresses free speech. Access to the internet shouldn’t be used as a bargaining chip in geopolitical battles.

Paige Collings

EFF Unveils Its New Street Level Surveillance Hub

The Updated and Expanded Hub Sheds New Light on the Digital Surveillance Dragnet that Law Enforcement Deploys Against Everyone

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) today unveiled its new Street Level Surveillance hub, a standalone website featuring expanded and updated content on various technologies that law enforcement agencies commonly use to invade Americans’ privacy. 

The hub has new or updated pages on automated license plate readers, biometric surveillance, body-worn cameras, camera networks, cell-site simulators, drones and robots, face recognition, electronic monitoring, gunshot detection, forensic extraction tools, police access to the Internet of Things, predictive policing, community surveillance apps, real-time location tracking, social media monitoring, and police databases.  

It also features links to the latest articles by EFF’s Street Level Surveillance working group, consisting of attorneys, policy analysts, technologists, and activists with extensive experience in this field. 

“People are surveilled by police at more times and in more ways than ever before, and understanding this panopticon is the first step in protecting our rights,” said EFF Senior Policy Analyst Dr. Matthew Guariglia. “Our new hub is a ‘Field Guide to Police Surveillance,’ providing a reference source for recognizing the most-used police spy technologies. But more than that, it is a vital, constantly updated news feed offering cutting-edge, detailed analysis of law enforcement’s uses and abuses of these devices.”

The new hub also interfaces with several of EFF’s ongoing projects, including: 

  • The Atlas of Surveillance, EFF’s collaboration with the Reynolds School of Journalism at the University of Nevada, Reno to map more than 12,000 police surveillance technologies in use across America; and 
  • Spot the Surveillance, an open-source educational virtual reality tool to help people identify street-level surveillance in their community. 

"We hope community groups, advocacy organizations, defense attorneys, and concerned individuals will use the hub to stay abreast of the latest legal cases and technological developments, and share their own stories with us,” Guariglia said. 

Visit EFF’s new Street Level Surveillance hub at https://sls.eff.org/ 

Contact: Matthew Guariglia, Senior Policy Analyst, matthew@eff.org
Josh Richman

Privacy Badger Puts You in Control of Widgets


The latest version of Privacy Badger (2023.12.1) replaces embedded tweets with click-to-activate placeholders. This is part of Privacy Badger's widget replacement feature, where certain potentially useful widgets are blocked and then replaced with placeholders. This protects privacy by default while letting you restore the original widget whenever you want it or need it for the page to function.

Websites often include external elements such as social media buttons, comments sections, and video players. Although potentially useful, these “widgets” often track your behavior. The tracking happens regardless of whether you click on the widget. If you see a widget, the widget sees you back.

This is where Privacy Badger's widget replacement comes in. When blocking certain social buttons and other potentially useful widgets, Privacy Badger replaces them with click-to-activate placeholders. You will not be tracked by these replacements unless you explicitly choose to activate them.

Privacy Badger’s placeholders tell you exactly what happened while putting you in control.
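
As an illustration of the general technique (not Privacy Badger’s actual code), a content script might swap third-party frames for click-to-activate placeholders along the lines below. The domain list and markup are stand-ins, and a real extension would also block the widget’s network request (for example via the webRequest or declarativeNetRequest APIs) so the frame never loads in the first place:

```typescript
// Illustrative content-script sketch of click-to-activate widget replacement.
// The domain list and placeholder markup are hypothetical stand-ins.
const WIDGET_HOSTS = ["platform.twitter.com", "www.youtube.com"]; // hypothetical

function replaceTrackedWidgets(doc: Document): void {
  for (const frame of Array.from(doc.querySelectorAll<HTMLIFrameElement>("iframe[src]"))) {
    const host = new URL(frame.src).hostname;
    if (!WIDGET_HOSTS.some((h) => host === h || host.endsWith("." + h))) continue;

    const originalSrc = frame.src; // remember what to restore on activation
    const placeholder = doc.createElement("button");
    placeholder.textContent = `Privacy placeholder: click to load widget from ${host}`;

    placeholder.addEventListener("click", () => {
      // The third-party request (and any tracking) happens only now, on an explicit click.
      const restored = doc.createElement("iframe");
      restored.src = originalSrc;
      placeholder.replaceWith(restored);
    });

    frame.replaceWith(placeholder); // swap the frame out so it cannot track on page load
  }
}

replaceTrackedWidgets(document);
```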

Changing the UI of a website is a bold move for a browser extension. That’s what Privacy Badger is all about, though: making strong choices on behalf of user privacy and revealing how that privacy is betrayed by businesses online.

Privacy Badger isn’t the first software to replace embedded widgets with placeholders for privacy or security purposes. As early as 2004, users could install Flashblock, an extension that replaced content embedded with Adobe Flash, a notoriously insecure plugin technology.

Flashblock’s Flash plugin placeholders lacked user-friendly buttons but got the (Flash blocking) job done.

Other extensions, and eventually even browsers, followed Flashblock in offering similar plugin-blocking placeholders. The need for this declined as plugin use dropped over time, but a new concern rose to prominence: privacy was under attack as social media buttons started spreading everywhere.

This brings us to ShareMeNot. Developed in 2012 as a research tool to investigate how browser extensions might enforce privacy on behalf of the user, ShareMeNot replaced social media “share” buttons with click-to-activate placeholders. In 2014, ShareMeNot became a part of Privacy Badger. While the emphasis has shifted away from social media buttons to interactive widgets like video players and comments sections, Privacy Badger continues to carry on ShareMeNot’s legacy.

Unfortunately, widget replacement is not perfect. Sometimes the placeholder’s buttons don’t work, or the placeholder appears in the wrong place or fails to appear at all. We will keep fixing and improving widget replacement. You can help by letting us know when something isn’t working right.

To report problems, first click on Privacy Badger’s icon in your browser toolbar. Privacy Badger’s “popup” window will open. Then, click the “Report broken site” button in the popup.

Pro tip #1: Because our YouTube replacement is not quite ready to be enabled by default, embedded YouTube players are not yet blocked or replaced. If you like, though, you can try our YouTube replacement now.

To opt in, visit Privacy Badger's options page, select the “Tracking Domains” tab, search for “youtube.com”, and move the toggle for youtube.com to the “Block entirely” position.

Pro tip #2: The most private way to activate a replaced widget is to use the “this [YouTube] widget” link (inside the “Privacy Badger has replaced this [YouTube] widget” text), when the link is available. Going through the link, as opposed to one of the Allow buttons, means the widget provider doesn't necessarily get to know what site you activated the widget on. You can also right-click the link to save the widget URL; no need to visit the link or to use browser developer tools.

Click the link to open the widget in a new tab.
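
The reason the link is more private: opening the widget’s own URL directly, without a Referer header, means the provider need not learn which page embedded it. Here is a minimal sketch of that idea, with a hypothetical function name and URL:

```typescript
// Illustrative sketch: open a replaced widget's own URL in a new tab without
// sending a Referer header, so the provider need not learn which site embedded it.
function openWidgetDirectly(widgetUrl: string): void {
  // "noreferrer" suppresses the Referer header and implies "noopener",
  // which also cuts the new tab's scripting access back to this page.
  window.open(widgetUrl, "_blank", "noopener,noreferrer");
}

openWidgetDirectly("https://example.com/embedded-widget"); // hypothetical URL
```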

Privacy tools should be measured not only by efficacy, but also ease of use. As we write in the FAQ, we want Privacy Badger to function well without any special knowledge or configuration by the user. Privacy should be made easy, rather than gatekept for “power users.” Everyone should be able to decide for themselves when and with whom they want to share information. Privacy Badger fights to restore this control, biting back at sneaky non-consensual surveillance.

To install Privacy Badger, visit privacybadger.org. Thank you for using Privacy Badger!

 

Alexei Miagkov

UAE Confirms Trial Against 84 Detainees; Ahmed Mansoor Suspected Among Them


The UAE confirmed this week that it has placed 84 detainees on trial, on charges of “establishing another secret organization for the purpose of committing acts of violence and terrorism on state territory.” Suspected to be among those facing trial is award-winning human rights defender Ahmed Mansoor, also known as “the million dollar dissident,” as he was once the target of exploits that exposed major security flaws in Apple’s iOS operating system—the kind of “zero-day” vulnerabilities that fetch seven figures on the exploit market. Mansoor drew the ire of UAE authorities for criticizing the country’s internet censorship and surveillance apparatus and for calling for a free press and democratic freedoms in the country.

Having previously been arrested in 2011 and sentenced to three years’ imprisonment for “insulting officials,” Ahmed Mansoor was released after eight months due to a presidential pardon influenced by international pressure. Later, Mansoor faced new speech-related charges for using social media to “publish false information that harms national unity.” During this period, authorities held him in an unknown location for over a year, deprived of legal representation, before sentencing him again in May 2018 to ten years in prison under the UAE’s draconian cybercrime law. We have long advocated for his release, and are joined in doing so by hundreds of digital and human rights organizations around the world.

At the recent COP28 climate talks, Human Rights Watch, Amnesty International, and other activists conducted a protest inside the UN-protected “blue zone” to raise awareness of Mansoor’s plight, as well as the cases of UAE detainee Mohamed El-Siddiq and Egyptian-British activist Alaa Abd El Fattah. At the same time, a dissident group reported that the UAE was proceeding with the trial against 84 of its detainees.

We reiterate our call for Ahmed Mansoor’s freedom, and take this opportunity to raise further awareness of the oppressive nature of the legislation used to imprison him. The UAE’s use of its criminal law to silence those who speak truth to power is another example of how counter-terrorism laws restrict free expression and justify disproportionate state surveillance. This concern is not hypothetical; a 2023 study by the Special Rapporteur on counter-terrorism found widespread and systematic abuse of civil society and civic space through the use of similar laws supposedly designed to counter terrorism. Moreover, and problematically, references to terrorism remain in the preamble of the latest draft of the proposed United Nations Cybercrime Treaty, currently being negotiated by more than 190 member states, even though international law has no agreed-upon definition of terrorism. If approved as currently written, the UN Cybercrime Treaty could substantively reshape international criminal law and bolster cross-border police surveillance powers to access and share users’ data, implicating the human rights of billions of people worldwide, and could enable states to justify repressive measures that overly restrict free expression and peaceful dissent.

Jillian C. York

Craig Newmark Philanthropies – Celebrating 30 Years of Support for Digital Rights


EFF has been awarded a new $200,000 grant from Craig Newmark Philanthropies to strengthen our cybersecurity work in 2024. We are especially grateful this year, as it marks 30 years of donations from Craig Newmark, who joined as an EFF member just three years after our founding and four years before he launched the popular website craigslist.  

Over the past several years, grants from Craig Newmark Philanthropies have focused on supporting trustworthy journalism to defend our democracy and hold the powerful accountable, as well as cybersecurity to protect consumers and journalists alike from malware and other dangers online. With this funding, EFF has built networks to help defend against disinformation warfare, fought online harassment, strengthened ethical journalism, and researched state-sponsored malware, cyber-mercenaries, and consumer spyware. EFF’s Threat Lab conducts research on surveillance technologies used to target journalists, communities, activists, and individuals. For example, we helped co-found, and continue to provide leadership to, the Coalition Against Stalkerware. EFF has also created and updated tools to educate and train working and student journalists alike to keep themselves safe from adversarial attacks. In addition to maintaining our popular Surveillance Self-Defense guide, we scaled up our Report Back tool for student journalists, cybersecurity students, and grassroots volunteers to collaboratively study technology in society.

In 2006, EFF recognized craigslist for cultivating a pervasive culture of trust and maintaining its public service charge even as it became one of the most popular websites in the world. Though Craig has retired from craigslist, this ethos continues through his philanthropic giving, which is “focused on a commitment to fairness and doing right by others.” EFF thanks Craig Newmark for his 30 years of financial support, which has helped us grow to become the leading nonprofit defending digital privacy, free speech, and innovation today. 

Josh Richman