Ninth Circuit Gets It: Interoperability Isn’t an Automatic First Step to Liability


A federal appeals court just gave software developers, and users, an early holiday present, holding that software updates aren’t necessarily “derivative,” for purposes of copyright law, just because they are designed to interoperate with the software they update.

This sounds kind of obscure, so let’s cut through the legalese. Lots of developers build software designed to interoperate with preexisting works. This kind of interoperability is crucial to innovation, particularly in a world where a small number of companies control so many essential tools and platforms. If users want to be able to repair, improve, and secure their devices, they must be able to rely on third parties to help. Trouble is, Big Tech companies want to be able to control (and charge for) every possible use of the devices and software they “sell” you – and they won’t hesitate to use the law to enforce that control. 

Courts shouldn’t assist, but unfortunately a federal district court did just that in the latest iteration of Oracle v. Rimini. Rimini provides support to improve the use and security of Oracle products, so customers don’t have to depend entirely on Oracle itself. Oracle doesn’t want this kind of competition, so it sued Rimini for copyright infringement, arguing that a software update Rimini developed was a “derivative work” because it was intended to interoperate with Oracle's software, even though the update didn’t use any of Oracle’s copyrightable code. Derivative works are typically things like a movie based on a novel, or a translation of that novel. Here, the only “derivative” aspect was that Rimini’s code was designed to interact with Oracle’s code.
 
Unfortunately, the district court initially sided with Oracle, setting a dangerous precedent. If a work is derivative, it may infringe the copyright in the preexisting work from which it, well, derives. For decades, software developers have relied, correctly, on the settled view that a work is not derivative under copyright law unless it is substantially similar to a preexisting work in both ideas and expression. Thanks to that rule, software developers can build innovative new tools that interact with preexisting works, including tools that improve privacy and security, without fear that the companies that hold rights in those preexisting works would have an automatic copyright claim to those innovations.  

Rimini appealed to the Ninth Circuit, on multiple grounds. EFF, along with a diverse group of stakeholders representing consumers, small businesses, software developers, security researchers, and the independent repair community, filed an amicus brief in support explaining that the district court ruling on interoperability was not just bad policy, but also bad law.  

 The Ninth Circuit agreed: 

In effect, the district court adopted an “interoperability” test for derivative works—if a product can only interoperate with a preexisting copyrighted work, then it must be derivative. But neither the text of the Copyright Act nor our precedent supports this interoperability test for derivative works. 

 The court goes on to give a primer on the legal definition of derivative work, but the key point is this: a work is only derivative if it “substantially incorporates the other work.”

Copyright already reaches far too broadly, giving rightsholders extraordinary power over how we use everything from music to phones to televisions. This holiday season, we’re raising a glass to the judges who sensibly reined that power in. 

Corynne McSherry

Customs & Border Protection Fails Baseline Privacy Requirements for Surveillance Technology


U.S. Customs and Border Protection (CBP) has failed to address six out of six main privacy protections for three of its border surveillance programs—surveillance towers, aerostats, and unattended ground sensors—according to a new assessment by the Government Accountability Office (GAO).

In the report, GAO compared the policies for these technologies against six of the key Fair Information Practice Principles that agencies are supposed to use when evaluating systems and processes that may impact privacy, as dictated by both Office of Management and Budget guidance and the Department of Homeland Security's own rules.

These include:

  • Data collection. "DHS should collect only PII [Personally Identifiable Information] that is directly relevant and necessary to accomplish the specified purpose(s)."
  • Purpose specification. "DHS should specifically articulate the purpose(s) for which the PII is intended to be used."
  • Information sharing. "Sharing PII outside the department should be for a purpose compatible with the purpose for which the information was collected."
  • Data security. "DHS should protect PII through appropriate security safeguards against risks such as loss, unauthorized access or use, destruction, modification, or unintended or inappropriate disclosure."
  • Data retention. "DHS should only retain PII for as long as is necessary to fulfill the specified purpose(s)."
  • Accountability. "DHS should be accountable for complying with these principles, including by auditing the actual use of PII to demonstrate compliance with these principles and all applicable privacy protection requirements."

These baseline privacy elements for the three border surveillance technologies were not addressed in any "technology policies, standard operating procedures, directives, or other documents that direct a user in how they are to use a Technology," according to GAO's review.

CBP operates hundreds of surveillance towers along both the northern and southern borders, some of which are capable of capturing video more than seven miles away. The agency has six large aerostats (essentially tethered blimps) that use radar along the southern border, with others stationed in the Florida Keys and Puerto Rico. The agency also operates a series of smaller aerostats that stream video in the Rio Grande Valley of Texas, with the newest one installed this fall in southeastern New Mexico. And the report notes deficiencies with CBP's linear ground detection system, a network of seismic sensors and cameras that are triggered by movement or footsteps.

The GAO report underlines EFF's concerns that the privacy of people who live and work in the borderlands is violated when federal agencies deploy militarized, high-tech programs to confront unauthorized border crossings. The rights of border communities are too often treated as acceptable collateral damage in pursuit of border security.

CBP defended its practices by saying that it does, to some extent, address the FIPPs in its Privacy Impact Assessments, documents written for public consumption. GAO rejected this claim, saying that these assessments are not adequate in instructing agency staff on how to protect privacy when deploying the technologies and using the data that has been collected.

In its recommendations, the GAO calls on the CBP Commissioner to "require each detection, observation, and monitoring technology policy to address the privacy protections in the Fair Information Practice Principles." But EFF calls on Congress to hold CBP to account and stop approving massive spending on border security technologies that the agency continues to operate irresponsibly.

Dave Maass

The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year


Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do. Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available somewhere on the internet.

But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches of the year.

If these companies practiced a privacy first approach and focused on data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably at some point someone decides to poke in and steal that data.

Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.

The Winners

The Just Stop Using Tracking Tech Award: Kaiser Permanente

In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.

The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A 2024 study found tracking technologies sharing information with third parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about your online activity.

While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need legislative action to make online privacy the norm for everyone. EFF advocates for a ban on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.


The Most Impactful Data Breach for 90s Kids Award: Hot Topic

If you were in middle or high school any time in the 90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra hard.

In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price of $20,000. The compromised data about approximately 54 million customers reportedly includes: names, email addresses, physical addresses, phone numbers, purchase history, birth dates, and partial credit card details. Research by Hudson Rock indicates that the data was compromised using info stealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from the Snowflake data breach (another Breachie winner); though that hasn’t been confirmed because Hot Topic has still not notified customers, nor responded to our request for comment.

Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all along. 


The Only Stalkers Allowed Award: mSpy

mSpy, a commercially-available mobile stalkerware app owned by Ukrainian-based company Brainstack, was subject to a data breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack employees.

The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often used to intimidate, harass, and harm victims, including by stalkers and abusive (ex) partners. Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware must be stopped.


The I Didn’t Even Know You Had My Information Award: Evolve Bank

Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February. You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!

But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and Mercury Bank (a fintech company). So, a ton of services use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the breach, with most of the data stolen being customer information, including social security numbers, account numbers, and dates of birth.

The small bright side? No customer funds were accessed during the breach. Evolve states that after the breach they are doing some basic things like resetting user passwords and strengthening their security infrastructure.


The We Told You So Award: AU10TIX

AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review sensitive private documents such as driver’s license information before users can register for a site or access some content.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits.

Look no further than the AU10TIX breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed online for more than a year, allowing access to very sensitive user data.

404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents.

The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.

If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You So Breachie award, but it likely won’t be the last. 


The Why We’re Still Stuck on Unique Passwords Award: Roku

In April, Roku announced not yet another new way to display more ads, but a data breach (its second of the year) where 576,000 accounts were compromised using a “credential stuffing attack.” This is a common, relatively easy sort of automated attack where thieves use previously leaked username and password combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were in the Comcast data breach in 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was accessed.

But the ease of this sort of data breach is why it’s important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the Roku breach, the company turned on two-factor authentication for all accounts. This way, even if someone did get access to your account password, they’d need that second code from another device; in Roku’s case, either your phone number or email address.
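
One way to see whether a password you reuse has already shown up in breach corpora is Have I Been Pwned's public Pwned Passwords range API, which uses k-anonymity so the full password never leaves your machine. The sketch below is ours, not Roku's or HIBP's official client; the endpoint URL and "SUFFIX:COUNT" response format reflect HIBP's published documentation at the time of writing and may change.

```python
# Minimal sketch: check a password against HIBP's Pwned Passwords range API.
# Only the first five characters of the password's SHA-1 hash are sent over the network.
import hashlib
import urllib.request

def password_seen_in_breaches(password: str) -> int:
    """Return how many times this password appears in known breach data (0 if never)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line pairs a hash suffix with a breach count: "SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(password_seen_in_breaches("hunter2"))  # a famously reused example password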


The Listen, Security Researchers are Trying to Help Award: City of Columbus

In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order against him.

Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work. 

Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to all of its residents and confirmed that its original announcement was inaccurate.

Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data of hundreds of thousands of people was stolen in the ransomware attack, including driver's licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.


The Have I Been Pwned? Award: Spoutible

The Spoutible breach has layers—layers of “no way!” that keep revealing more and more amazing little facts the deeper one digs.

It all started with a leaky API. On a per-user basis, it didn’t just return the sort of information you’d expect from a social media platform, but also the user’s email, IP address, and phone number. No way! Why would you do that?

But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!

Ah well, at least they offer two-factor authentication (2FA) to protect against password leakages, except… the API was also returning the secret used to generate the 2FA OTP. No way! So, if someone had enabled 2FA it was immediately rendered useless by virtue of this field being visible to everyone.
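
To make concrete why leaking that secret is so damaging: anyone holding a TOTP secret can compute exactly the same one-time codes the legitimate user's authenticator app shows. Here is a minimal sketch, not Spoutible's code, using the third-party pyotp library; the secret value below is invented for illustration.

```python
# Minimal sketch: possession of the TOTP secret defeats 2FA entirely.
# Requires the pyotp package (pip install pyotp).
import pyotp

leaked_secret = "JBSWY3DPEHPK3PXP"  # hypothetical base32 secret, like the one the API exposed

totp = pyotp.TOTP(leaked_secret)
code = totp.now()            # the six-digit code valid right now
print(code)
print(totp.verify(code))     # True: an attacker can pass the 2FA check just like the real user
```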

However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you control the address and can change the password? That was the code! No way!

EFF thanks guest author Troy Hunt for this contribution to the Breachies.


The Reporting’s All Over the Place Award: National Public Data

In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In October, National Public Data filed for bankruptcy, leaving behind nothing but a breach notification on its website.

But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the data that the company failed to secure.

One analysis found that some of the dataset was inaccurate, with a number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers, there were likely somewhere around 272 million in the dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That was the number of rows of data in the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint filed in Florida.

Phew, time to check in with Count von Count on this one, then.

How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy, as NPD was owned by a retired sheriff’s deputy and a small film studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting and hoarding information, and failing to watch out for the next iceberg.


The Biggest Health Breach We’ve Ever Seen Award: Change Healthcare

In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which processes 40% of all U.S. health insurance claims, was forced offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details, financial information, and government identity documents.

The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious medical bills or debt collection notices.

The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form of security.

To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption. As the former president of the American Medical Association put it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are related values, and data breach and monopoly are connected problems.


The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon

When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other adversaries. There are no methods of access that are magically only accessible to “good guys.” No security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese government-backed hacking group.

Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like CALEA, which require telecom companies to provide a means for “lawful intercepts”—in other words, wiretaps.

The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the State Department. 

While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a Senator who has been briefed on the hack. What’s more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated.

The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across the globe, a data breach like Salt Typhoon was only a matter of time. 


The Snowballing Breach of the Year Award: Snowflake

Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others: 165 in total.

This led to a massive breach of billions of data records for individuals using these companies. A combination of infostealer malware infections on non-Snowflake machines as well as weak security used to protect the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, April-July of this year, Snowflake was not requiring two-factor authentication, an account security measure which could have provided protection against the attacks. A number of arrests were made after security researchers uncovered the identities of several of the threat actors.

But what does Snowflake do? According to their website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, they store and index troves of customer data for companies to look at. And the more data stored, the bigger the target for malicious actors looking to gain leverage over and extort those companies. The problem is that the data is about all of us. In the case of Snowflake customer AT&T, this includes billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either not collecting that data in the first place or shortening how long it is retained. Otherwise it just sits there waiting for the next breach.


Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too; it might sound absurd considering they can’t even open bank accounts, but it helps protect them from identity theft as well.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 


(Dis)Honorable Mentions

By one report, 2023 saw over 3,000 data breaches. The figure so far this year is looking slightly smaller, with around 2,200 reported through the end of the third quarter. But 2,200 and counting is little comfort.

We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year. Still, here are some (dis)honorable mentions:

ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet Archive, LA County Department of Mental Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson & Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Thorin Klosowski

Saving the Internet in Europe: Defending Free Expression


This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here. 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe. 

EFF’s approach to free speech

The global spread of Internet access and digital services promised a new era of freedom of expression, where everyone could share and access information, speak out and find an audience without relying on gatekeepers and make, tinker with and share creative works.  

Everyone should have the right to express themselves and share ideas freely. Various European countries have experienced totalitarian regimes and extensive censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention of Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.  

Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines, or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque, and selective, resulting in the suppression of contentious ideas and minority voices.

That’s why EFF fights government threats to free expression and works to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes such as notifications and explanations to users when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship, and spoken up on behalf of human rights defenders.

What’s the Digital Services Act and why do we keep talking about it? 

For the past few years, we have been especially busy addressing human rights concerns with the drafting and implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon to search engines like Google, social networks like Meta, and app stores. However, not all of its rules apply to all services – instead, the DSA follows a risk-based approach that puts the most obligations on the largest services that have the highest impact on users.

All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal, and they face additional transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads.

The most stringent obligations apply to the largest online platforms and search engines, those with more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning, and use of their services. These include risks to the exercise of fundamental rights, elections, public safety, civic discourse, the protection of minors, and public health. This novel approach might have merit but is also cause for concern: systemic risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, for example age verification, have negative consequences themselves, like undermining users’ privacy and access to information.

The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities and the powers it gives government agencies to request such data. We caution against the misuse of the DSA’s emergency mechanism and the expansion of the DSA’s systemic risks governance approach as a catch-all tool to crack down on undesired but lawful speech. Similarly, the appointment of trusted flaggers could lead to pressure on platforms to over-remove content, especially as the DSA does not limit government authorities from becoming trusted flaggers.

EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine freedom of expression. Even though we have been successful in avoiding some of the most harmful ideas, concerns remain, especially with regard to the politicization of the enforcement of the DSA and potential over-enforcement. That’s why we will keep a close eye on the enforcement of the DSA, ready to use all means at our disposal to push back against over-enforcement and to defend user rights.

European laws often implicate users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance. The DSA HR Alliance is formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.

The UK’s Platform Regulation Legislation 

In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Seeking to make the UK “the safest place in the world to be online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just the privacy and security of UK residents, but internet users worldwide. 

Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.  

The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and anonymous speech, which is sometimes necessary.

As Ofcom starts to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will continue to push back wherever its efforts to protect speech and privacy online fall short.

Media freedom and plurality for everyone 

Another issue that we have been championing is media freedom. Similar to the DSA, the EU recently overhauled its rules for media services through the European Media Freedom Act (EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content posted by media outlets. Although intended to bolster media pluralism, forcing platforms to host content has severe consequences: millions of EU users could no longer trust that online platforms would address content violating community standards. There is also no easy way to differentiate between legitimate media providers and those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms' ability to restrict or remove such content could undermine rather than foster public discourse.

The final version of EMFA introduced a number of important safeguards but is still a bad deal for users: We will closely follow its implementation to ensure that the new rules actually foster media freedom and plurality, inspire trust in the media and limit the use of spyware against journalists.  

Exposing censorship and defending those who defend us 

Covering regulation is just a small part of what we do. Over the past years, we have again and again revealed how companies’ broad-stroked content moderation practices censor users in the name of fighting terrorism, and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.  

Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition to share how restrictive speech policies often disproportionally affect sex workers. We will also continue to closely analyze the impact of the increasing and changing use of artificial intelligence in content moderation.  

Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.  

Much work remains to be done. We have put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights. 

Svea Windwehr

We're Creating a Better Future for the Internet 🧑‍🏭


In the early years of the internet, website administrators had to face off with a burdensome and expensive process to deploy SSL certificates. But today, hundreds of thousands of people have used EFF’s free Certbot tool to spread that sweet HTTPS across the web. Now almost all internet traffic is encrypted, and everyone gets a basic level of security. Small actions mean big change when we act together. Will you support important work like this and give EFF a Year-End Challenge boost?

Give Today

Unlock Bonus Grants Before 2025

Make a donation of ANY SIZE by December 31 and you’ll help us unlock bonus grants! Every supporter gets us closer to a series of seven Year-End Challenge milestones set by EFF’s board of directors. These grants become larger as the number of online rights supporters grows. Everyone counts! See our progress.

🚧 Digital Rights: Under Construction 🚧

Since 1990, EFF has defended your digital privacy and free speech rights in the courts, through activism, and by making open source privacy tools. This team is committed to watching out for users no matter what direction technological innovation may take us. And that’s funded entirely by donations.


Show your support for digital rights with free EFF member gear.

With help from people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; push the USPTO to withdraw harmful patent proposals; fight for the public's right to access police drone footage; and show why banning TikTok and passing laws like the Kids Online Safety Act (KOSA) will not achieve internet safety.

As technology’s reach continues to expand, so do everyone’s concerns about harmful side effects. That’s where EFF’s ample experience in tech policy, the law, and human rights shines. You can help us.

Donate to defend digital rights today and you’ll help us unlock bonus grants before the year ends.

Join EFF!

Proudly Member-Supported Since 1990

________________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Aaron Jue

There’s No Copyright Exception to First Amendment Protections for Anonymous Speech


Some people just can’t take a hint. Today’s perfect example is a group of independent movie distributors that have repeatedly tried, and failed, to force Reddit to give up the IP addresses of several users who posted about downloading movies. 

The distributors claim they need this information to support their copyright claims against internet service provider Frontier Communications, because it might be evidence that Frontier wasn’t enforcing its repeat infringer policy and therefore couldn’t claim safe harbor protections under the Digital Millennium Copyright Act. Courts have repeatedly refused to enforce these subpoenas, recognizing that the distributors couldn’t pass the test the First Amendment requires prior to unmasking anonymous speakers.

Here's the twist: after the magistrate judge in this case applied this standard and quashed the subpoena, the movie distributors sought review from the district court judge assigned to the case. The second judge also denied discovery as unduly burdensome but, in a hearing on the matter, also said there was no First Amendment issue because the users were talking about copyright infringement. In their subsequent appeal to the Ninth Circuit, the distributors invite the appellate court to endorse the judge’s statement. 

As we explain in an amicus brief supporting Reddit, the court should refuse that invitation. Discussions about illegal activity clearly are protected speech. Indeed, the Supreme Court recently affirmed that even “advocacy of illegal acts” is “within the First Amendment’s core.” In fact, protecting such speech is a central purpose of the First Amendment because it ensures that people can robustly debate civil and criminal laws and advocate for change. 

There is no reason to imagine that this bedrock principle doesn’t apply just because the speech concerns copyright infringement – especially where the speakers aren’t even defendants in the case, but independent third parties. And unmasking Does in copyright cases carries particular risks given the long history of copyright claims being used as an excuse to take down lawful as well as infringing content online.

We’re glad to see Reddit fighting back against these improper subpoenas, and proud to stand with the company as it stands up for its users. 

Corynne McSherry

UK Politicians Join Organizations in Calling for Immediate Release of Alaa Abd El-Fattah


As the UK’s Prime Minister Keir Starmer and Foreign Secretary David Lammy have failed to secure the release of British-Egyptian blogger, coder, and activist Alaa Abd El-Fattah, UK politicians call for tougher measures to secure Alaa’s immediate return to the UK.

During a debate on detained British nationals abroad in early December, chairwoman of the Commons Foreign Affairs Committee Emily Thornberry asked the House of Commons why the UK has continued to organize industry delegations to Cairo while “the Egyptian government have one of our citizens—Alaa Abd El-Fattah—wrongfully held in prison without consular access.”

In the same debate, Labour MP John McDonnell urged the introduction of a “moratorium on any new trade agreements with Egypt until Alaa is free,” which was supported by other politicians. Liberal Democrat MP Calum Miller also highlighted words from Alaa, who told his mother during a recent prison visit that he had “hope in David Lammy, but I just can’t believe nothing is happening...Now I think either I will die in here, or if my mother dies I will hold him to account.”

Alaa’s mother, mathematician Laila Soueif, has been on hunger strike for 79 days while she and the rest of his family have worked to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs.

Support for Alaa has come from many directions. On December 6, 12 Nobel laureates wrote to Keir Starmer urging him to secure Alaa’s immediate release, “Not only because Alaa is a British citizen, but to reanimate the commitment to intellectual sanctuary that made Britain a home for bold thinkers and visionaries for centuries.” The pressure on Labour’s senior politicians has continued throughout the month, with more than 100 MPs and peers writing to David Lammy on December 15 demanding Alaa be freed.

Alaa should have been released on September 29, after serving his five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognise Alaa’s British citizenship.

David Lammy met with Alaa’s family in November and promised to take action. But the UK’s Prime Minister failed to raise the case at the G20 Summit in Brazil when he met with Egypt’s President El-Sisi. 

If you’re based in the UK, here are some actions you can take to support the calls for Alaa’s release:

  1. Write to your MP (external link): https://freealaa.net/message-mp 
  2. Join Laila Soueif outside the Foreign Office daily between 10-11am
  3. Share Alaa’s plight on social media using the hashtag #freealaa

The UK Prime Minister and Foreign Secretary’s inaction is unacceptable. Every second counts, and time is running out. The government must do everything it can to ensure Alaa’s immediate and unconditional release.

Paige Collings

What You Should Know When Joining Bluesky


Bluesky promises to rethink social media by focusing on openness and user control. But what does this actually mean for the millions of people joining the site?

November was a good month for alternatives to X. Many users hit their breaking point after two years of controversial changes turned Twitter into X, a restrictive hub filled with misinformation and hate speech. Musk’s involvement in the U.S. presidential election was the last straw for many who are now looking for greener pastures.

Threads, the largest alternative, grew about 15% with 35 million new users. However, the most explosive growth came from Bluesky, seeing over 500% growth and a total user base of over 25 million users at the time of writing.

We’ve dug into the nerdy details of how Mastodon, Threads, and Bluesky compare, but given this recent momentum it’s important to clear up some questions for new Bluesky users, and what this new approach to the social web really means for how you connect with people online.

Note that Bluesky is still in an early stage, and many big changes are anticipated from the project. Answers here are accurate as of the time of writing, and will indicate the company’s future plans where possible.

Is Bluesky Just Another Twitter?

At face value the Bluesky app has a lot of similarities to Twitter prior to becoming X. That’s by design: the Bluesky team has prioritized making a drop-in replacement for 2022 Twitter, so everything from the layout to the posting options and even the color scheme will feel familiar to anyone who used that site.

While discussed in the context of decentralization, this experience is still very centralized like traditional social media, with a single platform controlled by one company, Bluesky PBLLC. However, a few aspirations from this company make it stand out: 

  1. Prioritizing interoperability and community development: Other platforms frequently get this wrong, so this dedication to user empowerment and open source tooling is commendable. 
  2. “Credible Exit” Decentralization: Bluesky the company wants Bluesky, the network, to be able to function even if the company is eliminated or ‘enshittified.’

The first difference is evident already from the wide variety of tools and apps on the network. From blocking certain content to highlighting communities you’re a part of, there are a lot of settings to make your feed yours—some of which we walked through here. You can also abandon Bluesky’s Twitter-style interface for an app like Firesky, which presents a stream of all Bluesky content. Other apps on the network can even be geared towards sharing audio or events, or work as a web forum, all using the same underlying AT Protocol. This interoperable and experimental ecosystem parallels another based on the ActivityPub protocol, called “The Fediverse”, which connects Threads to Mastodon as well as many other decentralized apps which experiment with the functions of traditional social media sites.

That “credible exit” priority is less immediately visible, but explains some of the ways Bluesky looks different. The most visible difference is that usernames are domain names, with the default for new users being a subdomain of bsky.social. EFF set it up so that our account name is our website, @eff.org, which will be the case across the Bluesky network, even if viewed with different apps. Comparable to how Mastodon handles verification, no central authority or government documents are needed for verification, just proof of control over a site or record.
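
Because handles are just domain names, verification boils down to publishing your account's identifier (a DID) somewhere only the domain owner controls. Below is a rough sketch, not Bluesky's own code, of how a client might resolve a handle using the two methods the AT Protocol documents (a DNS TXT record at _atproto.<handle>, or a .well-known HTTPS endpoint); treat the exact record and path names as assumptions that may evolve with the protocol.

```python
# Rough sketch: resolve a Bluesky handle (a domain) to the DID it claims.
# DNS lookup uses the third-party dnspython package (pip install dnspython).
import urllib.request
import dns.resolver

def resolve_handle(handle: str) -> str | None:
    """Return the DID a handle points to, or None if neither method resolves."""
    # Method 1: TXT record at _atproto.<handle> containing "did=<did>"
    try:
        for record in dns.resolver.resolve(f"_atproto.{handle}", "TXT"):
            text = record.to_text().strip('"')
            if text.startswith("did="):
                return text[len("did="):]
    except Exception:
        pass  # fall back to the HTTPS well-known method
    # Method 2: plain-text DID served at https://<handle>/.well-known/atproto-did
    try:
        with urllib.request.urlopen(f"https://{handle}/.well-known/atproto-did", timeout=5) as resp:
            return resp.read().decode("utf-8").strip()
    except Exception:
        return None

print(resolve_handle("eff.org"))  # prints the DID EFF's handle resolves to, if reachable
```

Either proof works because only someone who controls the domain can publish the record, which is why no central authority has to vouch for the handle.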

As Bluesky decentralizes, it is likely to diverge more from the Twitter experience as the tricky problems of decentralization creep in. 

How Is Bluesky for Privacy?

While Bluesky is not engaged in surveillance-based advertising like many incumbent social media platforms, users should be aware that shared information is more public and accessible than they might expect.

Bluesky, the app, offers some sensible data-minimizing defaults like requiring user consent for third-party embedded media, which can include tracking. The real assurance to users, however, is that even if the flagship apps were to become less privacy protective, the open tools let others make full-featured alternative apps on the same network.

However, by design, Bluesky content is fully public on the network. Users can change privacy settings to ask apps on the network to require login before displaying their account, but honoring that setting is optional. Every post, every like, and every share is visible to the world. Even blocking data is plainly visible. By design, all of this information is also accessible in one place, as Bluesky aims to be the megaphone for a global audience that Twitter once was.

This transparency extends to how Bluesky handles moderation, where users and content are labeled by a combination of Bluesky moderators, community moderators, and automated labeling. The result is information about you will, over time, be held by these moderators to either promote or hide your content.

Users leaving X out of frustration with the platform using public content to feed AI training may also find that this approach of funneling all content into one stream is very friendly to scraping for AI training by third parties. Bluesky’s CEO has been clear that the company will not engage in AI licensing deals, but this exposure is inherent to any network that prioritizes openness. The freedom to use public data for creative expression, innovation, and research extends to those who use it to train AI.

Users you have blocked may also be able to use this public stream to view your posts without interacting with you. If your threat model includes trolls and other bad actors who might reshare your posts in other contexts, this is important to consider.

Direct messages are not included in this heap of public information. However, they are not end-to-end encrypted and are hosted only on Bluesky's servers. As was the case for X, that means any DM is visible to Bluesky PBLLC. DMs may be accessed for moderation or in response to valid police warrants, and they may even one day become public through a data breach. Encrypted DMs are planned, but we advise moving sensitive conversations to dedicated, fully encrypted messaging services.

How Do I Find People to Follow?

Tools like Skybridge are being built to make it easier for people to import their Twitter contacts into Bluesky. Similar to advice we gave for joining Mastodon, keep in mind these tools may need extensive account access, and may need to be re-run as more people switch networks.

Bluesky has also implemented “starter packs,” which are curated lists of users that anyone can create and share with new users. EFF recently put together a few for you to check out.

Is Bluesky In the Fediverse?

“Fediverse” refers to a wide variety of sites and services generally communicating with each other over the ActivityPub protocol, including Threads, Mastodon, and a number of other projects. Bluesky uses the AT Protocol, which is not currently compatible with ActivityPub, so it is not part of “the fediverse.”

However, Bluesky is already being integrated into the vision of an interoperable and decentralized social web. You can follow Bluesky accounts from the fediverse over RSS. A number of mobile apps will also seamlessly merge Bluesky and fediverse feeds and let you post to both accounts. Even with just one Bluesky or fediverse account, users can also share posts and DMs to both networks using a project called Bridgy Fed.

In recent weeks this bridging also opened up to the hundreds of millions of Threads users. It just requires the additional step of enabling fediverse sharing on Threads before connecting to the Bridgy Fed fediverse account. We’re optimistic that all of these projects will continue to improve their integrations in the future.
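If you just want to follow a Bluesky account from an ordinary feed reader, no bridge is needed at all: as noted above, profiles are also exposed over RSS. The sketch below assumes the bsky.app /profile/&lt;handle&gt;/rss path that profiles currently serve; this is a convenience feature and could change.

```python
# Minimal sketch (assumed bsky.app RSS path): read an account's public posts
# the same way a fediverse or RSS reader would poll them.
import urllib.request
import xml.etree.ElementTree as ET

HANDLE = "eff.org"  # illustrative handle
url = f"https://bsky.app/profile/{HANDLE}/rss"

with urllib.request.urlopen(url, timeout=10) as resp:
    tree = ET.parse(resp)

for item in tree.iter("item"):
    text = item.findtext("title") or item.findtext("description") or ""
    link = item.findtext("link") or ""
    print(f"- {text[:80]} ({link})")
```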

Is the Bluesky Network Decentralized?

The current Bluesky network is not decentralized. 

It is nearly all made and hosted by one company, Bluesky PBLLC, which is working on creating the “credible exit” from their control as a platform host. If Bluesky the company and the infrastructure it operates disappeared tonight, however, the entire Bluesky network would effectively vanish along with it.

Of the 25 million users, only 10,000 are hosted by non-Bluesky services, most of them through fediverse connections. Changing to another host is also currently a one-way exit. All DMs rely on Bluesky-owned servers, as does the current system for managing user identities, as well as the resource-intensive “Relay” server aggregating content from across the network. The same company also handles the bulk of moderation and develops the main apps used by most users. Compared to networks like the fediverse or even email, hosting your own Bluesky node currently requires a considerable investment.

Even once this is no longer the case, a “credible exit” is not quite the same as “decentralized.” An escape hatch for particularly dire circumstances is good, but it falls short of the distributed power and decision-making of decentralized networks. This distinction will become more pressing as the reliance on Bluesky PBLLC is tested and the company opens up each component of the network to more third parties.

How Does Bluesky Make Money?

The past few decades have shown the same ‘enshittification’ cycle too many times. A new startup promises something exciting, users join, and then the platform turns on users to maximize profits—often through surveillance and restricting user autonomy. 

Will Bluesky be any different? From the team’s outlined plan we can glean that Bluesky promises not to use surveillance-based advertising nor to lock in users. Bluesky CEO Jay Graber has also promised not to sell user content for AI training licenses and intends to always keep the service free to join. Paid services like custom domain hosting or paid subscriptions seem likely.

So far, though, the company relies on investment funding. It was initially incubated by Twitter co-founder Jack Dorsey—who has since distanced himself from the project—and more recently received $8 million and $15 million rounds of funding.

That later investment round has raised concerns among the existing userbase that Bluesky would pivot to some form of cryptocurrency service, as it was led by Blockchain Capital, a cryptocurrency focused venture capital company which also had a partner join the Bluesky board. Jay Graber committed to “not hyperfinancialize the social experience” with blockchain projects, and emphasized that Bluesky does not use blockchain.

As noted above, Bluesky has prioritized maintaining a “credible exit” for users, a commitment to interoperability that should keep the company accountable to the community and hopefully prevent the kind of “enshittification” that drove people away from X. Holding the company to all of these promises will be key to seeing the Bluesky network and the AT protocol reach that point of maturity.

How Does Moderation Work?

Our comparison of Mastodon, Threads, and Bluesky gets into more detail, but as it stands Bluesky’s moderation is similar to Twitter’s before Musk. The Bluesky corporation uses the open moderation tools to label posts and users, and will remove users from its hosted services for breaking its terms of service. This tooling keeps the Bluesky company’s moderation tied to its “credible exit” goals, giving it the same leverage any other future operator might have. It also means Bluesky’s centralized moderation of today can’t scale, and even with a good-faith effort it will run into issues.

Bluesky accounts for this by opening its moderation tools to the community. Advanced options are available under settings in the web app, and anyone can label content and users on the site. These labels let users filter, prioritize, or block content. However, only Bluesky has the power to “deplatform” poorly behaved users by removing them, either by no longer hosting their account, no longer relaying their content to other users, or both.

Bluesky aspires to censorship resistance, and part of creating a “credible exit” means reducing the company’s ability to remove users entirely. In a future with a variety of hosts and relays on the Bluesky network, removing a user looks more like removing a website from the internet—not impossible, but very difficult. Instead, users will need to settle for filtering out or blocking speech they object to, and take some comfort that voices they align with will not be removed from the network.

The permeability of Bluesky also means community tooling will need to address network abuses, like last May when a pro-Trump botnet on Nostr bridged to Bluesky via Mastodon to flood timelines. It’s possible that like in the Fediverse, Bluesky may eventually form a network of trusted account hosts and relays to mitigate these concerns.

Bluesky is still a work in progress, but its focus on decentralization, user control, and interoperability makes it an exciting space to watch. Whether you’re testing the waters or planning a full migration, these insights should help you navigate the platform.

Rory Mir

Australia Banning Kids from Social Media Does More Harm Than Good

2 days 21 hours ago

Age verification systems are surveillance systems that threaten everyone’s privacy and anonymity. But Australia’s government recently decided to ignore these dangers, passing a vague, sweeping piece of age verification legislation after giving only a day for comments. The Online Safety Amendment (Social Media Minimum Age) Act 2024, which bans children under the age of 16 from using social media, will force platforms to take undefined “reasonable steps” to verify users’ ages and prevent young people from using them, or face over $30 million in fines. 

The country’s Prime Minister, Anthony Albanese, claims that the legislation is needed to protect young people in the country from the supposed harmful effects of social media, despite no study showing such an impact. This legislation will be a net loss for both young people and adults who rely on the internet to find community and themselves.

The law does not specify which social media platforms will be banned. Instead, this decision is left to Australia’s communications minister who will work alongside the country’s internet regulator, the eSafety Commissioner, to enforce the rules. This gives government officials dangerous power to target services they do not like, all at a cost to both minor and adult internet users.

The legislation also does not specify what type of age verification technology will be necessary to implement the restrictions but prohibits using only government IDs for this purpose. This is a flawed attempt to protect privacy.

Since platforms will have to verify their users' ages by means other than government ID, they will likely rely on unreliable tools like biometric scanners. The Australian government awarded the contract for testing age verification technology to a UK-based company, Age Check Certification Scheme (ACCS), which, according to the company website, “can test all kinds of age verification systems,” including “biometrics, database lookups, and artificial intelligence-based solutions.”

The ban will not take effect for at least another 12 months while these points are decided upon, but we are already concerned that the systems required to comply with this law will burden all Australians’ privacy, anonymity, and data security.

Banning social media and introducing mandatory age verification checks is the wrong approach to protecting young people online, and this bill was hastily pushed through the Parliament of Australia with little oversight or scrutiny. We urge politicians in other countries—like the U.S. and France—to explore less invasive approaches to protecting all people from online harms and focus on comprehensive privacy protections, rather than mandatory age verification.

Paige Collings

EFF Statement on U.S. Supreme Court's Decision to Consider TikTok Ban

2 days 22 hours ago

The TikTok ban itself and the DC Circuit's approval of it should be of great concern even to those who find TikTok undesirable or scary. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the U.S. has previously condemned globally.

The U.S. government should not be able to restrict speech—in this case by cutting off a tool used by 170 million Americans to receive information and communicate with the world—without proving with evidence that the tools are presently seriously harmful. But in this case, Congress has required and the DC Circuit approved TikTok’s forced divestiture based only upon fears of future potential harm. This greatly lowers well-established standards for restricting freedom of speech in the U.S. 

So we are pleased that the Supreme Court will take the case and will urge the justices to apply the appropriately demanding First Amendment scrutiny.

David Greene

Speaking Freely: Winnie Kabintie

2 days 22 hours ago

Winnie Kabintie is a journalist and Communications Specialist based in Nairobi, Kenya. As an award-winning youth media advocate, she is passionate about empowering young people with Media and Information Literacy skills, enabling them to critically engage with and shape the evolving digital media landscape in meaningful ways.

Greene: To get us started, can you tell us what the term free expression means to you? 

I think it's the opportunity to speak in a language that you understand and speak about subjects of concern to you and to anybody who is affected or influenced by the subject of conversation. To me, it is the ability to communicate openly and share ideas or information without interference, control, or restrictions. 

As a journalist, it means having the freedom to report on matters affecting society and my work without censorship or limitations on where that information can be shared. Beyond individual expression, it is also about empowering communities to voice their concerns and highlight issues that impact their lives. Additionally, access to information is a vital component of freedom of expression, as it ensures people can make informed decisions and engage meaningfully in societal discourse because knowledge is power.

Greene: You mention the freedom to speak and to receive information in your language. How do you see that currently? Are language differences a big obstacle that you see currently? 

If I just look at my society—I like to contextualize things—we have Swahili, which is a national language, and we have English as the secondary official language. But when it comes to policies, when it comes to public engagement, we only see this happening in documents that are only written in English. This means when it comes to the public barazas (community gatherings) interpretation is led by a few individuals, which creates room for disinformation and misinformation. I believe the language barrier is an obstacle to freedom of speech. We've also seen it from the civil society dynamics, where you're going to engage the community but you don't speak the same language as them, then it becomes very difficult for you to engage them on the subject at hand. And if you have to use a translator, sometimes what happens is you're probably using a translator for whom their only advantage, or rather the only advantage they bring to the table, is the fact that they understand different languages. But they're not experts in the topic that you're discussing.

Greene: Why do you think the government only produces materials in English? Do you think part of that is because they want to limit who is able to understand them? Or is it just, are they lazy or they just disregard the other languages? 

In all fairness, I think it comes from the systematic approach on how things run. This has been the way of doing things, and it's easier to do it because translating some words from, for example, English to Swahili is very hard. And you see, as much as we speak Swahili in Kenya—and it's our national language—the kind of Swahili we speak is also very diluted or corrupted with English and Sheng—I like to call “ki-shenglish”. I know there were attempts to translate the new Kenyan Constitution, and they did translate some bits of the summarized copy, but even then it wasn’t the full Constitution. We don't even know how to say certain words in Swahili from English which makes it difficult to translate many things. So I think it's just an innocent omission. 

Greene: What makes you passionate about freedom of expression?

 As a journalist and youth media advocate, my passion for freedom of expression stems from its fundamental role in empowering individuals and communities to share their stories, voice their concerns, and drive meaningful change. Freedom of expression is not just about the right to speak—it’s about the ability to question, to challenge injustices, and to contribute to shaping a better society.

For me, freedom of expression is deeply personal as I like to question, interrogate and I am not just content with the status quo. As a journalist, I rely on this freedom to shed light on critical issues affecting society, to amplify marginalized voices, and to hold power to account. As a youth advocate, I’ve witnessed how freedom of expression enables young people to challenge stereotypes, demand accountability, and actively participate in shaping their future. We saw this during the recent Gen Z revolution in Kenya when youth took to the streets to reject the proposed Finance Bill.

Freedom of speech is also about access. It matters to me that people not only have the ability to speak freely, but also have the platforms to articulate their issues. You can have all the voice you need, but if you do not have the platforms, then it becomes nothing. So it's also recognizing that we need to create the right platforms to advance freedom of speech. These, in our case, include platforms like radio and social media platforms. 

So we need to ensure that we have connectivity to these platforms. For example, in the rural areas of our countries, there are some areas that are not even connected to the internet. They don't have the infrastructure including electricity. It then becomes difficult for those people to engage in digital media platforms where everybody is now engaging. I remember recently during the Reject Finance Bill process in Kenya, the political elite realized that they could leverage social media and meet with and engage the youth. I remember the President was summoned to an X-space and he showed up and there was dialogue with hundreds of young people. But what this meant was that the youth in rural Kenya who didn’t have access to the internet or X were left out of that national, historic conversation. That's why I say it's not just as simple as saying you are guaranteed freedom of expression by the Constitution. It's also how governments are ensuring that we have the channels to advance this right. 

Greene: Have you had a personal experience or any personal experiences that shaped how you feel about freedom of expression? Maybe a situation where you felt like it was being denied to you or someone close to you was in that situation?

At a personal level I believe that I am a product of speaking out and I try to use my voice to make an impact! There is also this one particular incident that stands out during my early career as a journalist. In 2014 I amplified a story from a video shared on Facebook by writing a news article that was published on The Kenya Forum, which at the time was one of only two fully digital publications in the country covering news and feature articles.

The story, which was a case of gender based assault, gained traction drawing attention to the unfortunate incident that had seen a woman stripped naked allegedly for being “dressed indecently.” The public uproar sparked the famous #MyDressMyChoice protest in Kenya where women took to the streets countrywide to protest against sexual violence.

Greene: Wow. Do you have any other specific stories that you can tell about the time when you spoke up and you felt that it made a difference? Or maybe you spoke up, and there was some resistance to you speaking up? 

I've had many moments where I've spoken up and it's made a difference including the incident I shared in the previous question. But, on the other hand, I also had a moment where I did not speak out years ago, when a classmate in primary school was accused of theft. 

There was this girl once in class, she was caught with books that didn't belong to her and she was accused of stealing them. One of the books she had was my deskmate’s and I was there when she had borrowed it. So she was defending herself and told the teacher, “Winnie was there when I borrowed the book.” When the teacher asked me if this was true I just said, “I don't know.” That feedback was her last line of defense and the girl got expelled from school. So I’ve always wondered, if I'd said yes, would the teacher have been more lenient and realized that she had probably just borrowed the rest of the books as well? I was only eight years old at the time, but because of that, and how bad the outcome made me feel, I vowed to myself to always stand for the truth even when it’s unpopular with everyone else in the room. I would never look the other way in the face of an injustice or in the face of an issue that I can help resolve. I will never walk away in silence.

Greene: Have you kept to that since then? 

Absolutely.

Greene: Okay, I want to switch tracks a little bit. Do you feel there are situations where it's appropriate for government to limit someone's speech?

Yes, absolutely. In today’s era of disinformation and hate speech, it’s crucial to have legal frameworks that safeguard society. We live in a society where people, especially politicians, often make inflammatory statements to gain political mileage, and such remarks can lead to serious consequences, including civil unrest.

Kenya’s experience during the 2007-2008 elections is a powerful reminder of how harmful speech can escalate tensions and pit communities against each other. That period taught us the importance of being mindful of what leaders say, as their words have the power to unite or divide.

I firmly believe that governments must strike a balance between protecting freedom of speech and preventing harm. While everyone has the right to express themselves, that right ends where it begins to infringe on the rights and safety of others. It’s about ensuring that freedom of speech is exercised responsibly to maintain peace and harmony in society.

Greene: So what do we have to be careful about with giving the government the power to regulate speech? You mentioned hate speech can be hard to define. What's the risk of letting the government define that?

The risk is that the government may overstep its boundaries, as often happens. Another concern is the lack of consistent and standardized enforcement. For instance, someone with influence or connections within the government might escape accountability for their actions, while an activist doing the same thing could face arrest. This disparity in treatment highlights the risks of uneven application of the law and potential misuse of power.

Greene: Earlier you mentioned special concern for access to information. You mentioned children and you mentioned women. Both of those are groups of people where, at least in some places, someone else—not the government, but some other person—might control their access, right? I wonder if you could talk a little bit more about why it's so important to ensure access to information for those particular groups. 

I believe home is the foundational space where access to information and freedom of expression are nurtured. Families play a crucial role in cultivating these values, and it’s important for parents to be intentional about fostering an environment where open communication and access to information are encouraged. Parents have a responsibility to create opportunities for discussion within their households and beyond.

Outside the family, communities provide broader platforms for engagement. In Kenya, for example, public forums known as barazas serve as spaces where community members gather to discuss pressing issues, such as insecurity and public utilities, and to make decisions that impact the neighborhood. Ensuring that your household is represented in these forums is essential to staying informed and being part of decisions that directly affect you.

It’s equally important to help people understand the power of self-expression and active participation in decision-making spaces. By showing up and speaking out, individuals can contribute to meaningful change. Additionally, exposure to information and critical discussions is vital in today’s world, where misinformation and disinformation are prevalent. Families can address these challenges by having conversations at the dinner table, asking questions like, “Have you heard about this? What’s your understanding of misinformation? How can you avoid being misled online?”

By encouraging open dialogue and critical thinking in everyday interactions, we empower one another to navigate information responsibly and contribute to a more informed and engaged society.

Greene: Now, a question we ask everyone, who is your free speech hero? 

I have two. One is a Human Rights lawyer and a former member of Parliament Gitobu Imanyara.  He is one of the few people in Kenya who fought by blood and sweat, literally, for the freedom of speech and that of the press in Kenya. He will always be my hero when we talk about press freedom. We are one of the few countries in Africa that enjoys extreme freedoms around speech and press freedom and it’s thanks to people like him. 

The other is an activist named Boniface Mwangi. He’s a person who never shies away from speaking up. It doesn’t matter who you are or how dangerous it gets, Boni, as he is popularly known, will always be that person who calls out the government when things are going wrong. You’re driving on the wrong side of the traffic just because you’re a powerful person in government. He'll be the person who will not move his car and he’ll tell you to get back in your lane. I like that. I believe when we speak up we make things happen.

Greene: Anything else you want to add? 

I believe it’s time we truly recognize and understand the importance of freedom of expression and speech. Too often, these rights are mentioned casually or taken at face value, without deeper reflection. We need to start interrogating what free speech really means, the tools that enable it, and the ways in which this right can be infringed upon.

As someone passionate about community empowerment, I believe the key lies in educating people about these rights—what it looks like when they are fully exercised and what it means when they are violated and especially in today’s digital age. Only by raising awareness can we empower individuals to embrace these freedoms and advocate for better policies that protect and regulate them effectively. This understanding is essential for fostering informed, engaged communities that can demand accountability and meaningful change.

David Greene

“Can the Government Read My Text Messages?”

2 days 22 hours ago

You should be able to message your family and friends without fear that law enforcement is reading everything you send. Privacy is a human right, and that’s why we break down the ways you can protect your ability to have a private conversation.

Learn how governments are able to read certain text messages, and how to ensure your messages are end-to-end encrypted on Digital Rights Bytes, our new site dedicated to helping break down tech issues into byte-sized pieces.  

Whether you’re just starting to think about your privacy online, or you’re already a regular user of encrypted messaging apps, Digital Rights Bytes is here to help answer some of the common questions that may be bothering you about the devices you use. Watch the short video that explains how to keep your communications private online, and share it with family and friends who may have asked similar questions! 

Have you also wondered why it is so expensive to fix your phone, or if you really own the digital media you paid for? We’ve got answers to those and other questions as well! And, if you’ve got additional questions you’d like us to answer in the future, let us know on your social platform of choice using the hashtag #DigitalRightsBytes.  

Christian Romero

10 Resources for Protecting Your Digital Security | EFFector 36.15

3 days 20 hours ago

Get a head-start on your New Year's resolution to stay up-to-date on digital rights news by subscribing to EFF's EFFector newsletter! 

This edition of the newsletter covers our top ten digital security resources for those concerned about the incoming administration, a new bill that could put an end to SLAPP lawsuits, and our recent amicus brief arguing that device searches at the border require a warrant (we've been arguing this for a long time).

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector 36.15, “10 Resources for Protecting Your Digital Security,” which you can listen to on the Internet Archive or YouTube.

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Still Flawed and Lacking Safeguards, UN Cybercrime Treaty Goes Before the UN General Assembly, then States for Adoption

4 days 12 hours ago

Most UN Member States, including the U.S., are expected to support adoption of the flawed UN Cybercrime Treaty when it’s scheduled to go before the UN General Assembly this week for a vote, despite warnings that it poses dangerous risks to human rights.

EFF and its civil society partners–along with cybersecurity and internet companies, press organizations, the International Chamber of Commerce, the United Nations High Commissioner for Human Rights, and others–have for years raised red flags that the treaty authorizes open-ended evidence-gathering powers for crimes with little nexus to core cybercrimes, and has minimal safeguards and limitations.

 The final draft, unanimously approved in August by over 100 countries that had participated in negotiations, will permit intrusive surveillance practices in the name of engendering cross-border cooperation.

The treaty that will go before the UN General Assembly contains many troubling provisions and omissions that don’t comport with international human rights standards and leave the implementation of human rights safeguards to the discretion of Member States. Many of these Member States have poor track records on human rights and national laws that don’t protect privacy while criminalizing free speech and gender expression.

Thanks to the work of a coalition of civil society groups that included EFF, the U.S. now seems to recognize this potential danger. In a statement by the U.S. Deputy Representative to the Economic and Social Council, the U.S. said it “shares the legitimate concerns” of industry and civil society, which warned that some states could leverage their human rights-challenged national legal frameworks to enable transnational repression.

We expressed grave concerns that the treaty facilitates requests for user data that will enable cross-border spying and the targeting and harassment of those, for example, who expose and work against government corruption and abuse. Our full analysis of the treaty can be found here.

Nonetheless, the U.S. said it will support the convention when it comes up for this vote, noting among other things that its terms don’t permit parties to use it to violate or suppress human rights.

While that’s true as far as it goes, and is important to include in principle, some Member States’ laws empowered by the treaty already fail to meet human rights standards. And the treaty fails to adopt specific safeguards to truly protect human rights.

The safeguards contained in the convention, such as the need for judicial review in the chapter on procedural measures in criminal investigations, are undermined by being potentially discretionary and contingent on states’ domestic laws. In many countries, these domestic laws don’t require judicial authorization based on reasonable suspicion for surveillance or real-time collection of traffic data.

For example, our partner Access Now points out that in Algeria, Lebanon, Palestine, Tunisia, and Egypt, cybercrime laws require telecommunications service providers to preemptively and systematically collect large amounts of user data without judicial authorization.

Meanwhile, Jordan’s cybercrime law has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

The U.S. says it is committed to combating human rights abuses by governments that misuse national cybercrime statutes and tools to target journalists and activists. Implementing the treaty, it says, must be paired with robust domestic safeguards and oversight.

It’s hard to imagine that governments will voluntarily revise cybercrime laws as they ratify and implement the treaty; what’s more realistic is that the treaty normalizes such frameworks.

Advocating for improvements during the two-year-long negotiations was a tough slog. And while the final version is highly problematic, civil society achieved some wins. An early negotiating document named 34 purported cybercrime offenses to be included, many of which would criminalize forms of speech. Civil society warned of the dangers of including speech-related offenses; the list was dropped in later drafts.

Civil society advocacy also helped secure specific language in the general provision article on human rights specifying that protection of fundamental rights includes freedom of expression, opinion, religion, conscience, and peaceful assembly. Left off the list, though, was gender expression.

The U.S., meanwhile, has called on all states “to take necessary steps within their domestic legal systems to ensure the Convention will not be applied in a manner inconsistent with human rights obligations, including those relating to speech, political dissent, and sexual identity.”

Furthermore, the U.S. government pledges to demand accountability – without saying how it will do so – if states seek to misuse the treaty to suppress human rights. “We will demand accountability for States who try to abuse this Convention to target private companies’ employees, good-faith cybersecurity researchers, journalists, dissidents, and others.” Yet the treaty contains no oversight provisions.

The U.S. said it is unlikely to sign or ratify the treaty “unless and until we see implementation of meaningful human rights and other legal protections by the convention’s signatories.”

We’ll hold the government to its word on this and on its vows to seek accountability. But ultimately, the fate of the U.S. declarations and the treaty’s impact in the U.S. are more than uncertain under a second Trump administration, as joining the treaty would require both the Senate’s consent and the President’s formal ratification.

Trump withdrew from climate, trade, and arms agreements in his first term, so signing the UN Cybercrime Treaty may not be in the cards – a positive outcome, though probably not motivated by concerns for human rights.

Meanwhile, we urge states to vote against adoption this week and not to ratify the treaty at home. The document puts global human rights at risk. In a rush to win consensus, negotiators gave Member States lots of leeway to avoid human rights safeguards in their “criminal” investigations, and now millions of people around the world might pay a high price.

 

Karen Gullo

Saving the Internet in Europe: How EFF Works in Europe

4 days 23 hours ago

This post is part one in a series of posts about EFF’s work in Europe.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.

Why EFF Works in Europe

European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.

Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.

Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.

Our Teams

Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings unique expertise in European digital policymaking and fundamental rights online. They engage with lawmakers, provide policy expertise, and coordinate EFF’s work in Europe.

But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.

Our work is not limited to EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and we also work on issues facing users in the Balkans and in accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked with advising on online regulation in Georgia, Moldova and Ukraine.

EFF on Stage

In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.

There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.

Jillian C. York

Speaking Freely: Prasanth Sugathan

1 week ago

Interviewer: David Greene

*This interview has been edited for length and clarity.

Prasanth Sugathan is Legal Director at Software Freedom Law Center, India (SFLC.in). Prasanth is a lawyer with years of practice in the fields of technology law, intellectual property law, administrative law and constitutional law. He is an engineer turned lawyer and has worked closely with the Free Software community in India. He has appeared in many landmark cases before various Tribunals, High Courts and the Supreme Court of India. He has also deposed before Parliamentary Committees on issues related to the Information Technology Act and Net Neutrality.

David Greene: Why don’t you go ahead and introduce yourself. 

Sugathan: I am Prasanth Sugathan, I am the Legal Director at the Software Freedom Law Center, India. We are a nonprofit organization based out of New Delhi, started in the year 2010. So we’ve been working at this for 14 years now, working mostly in the area of protecting rights of citizens in the digital space in India. We do strategic litigation, policy work, trainings, and capacity building. Those are the areas that we work in. 

Greene: What was your career path? How did you end up at SFLC? 

That’s an interesting story. I am an engineer by training. Then I was interested in free software. I had a startup at one point and I did a law degree along with it. I got interested in free software and got into it full time. Because of this involvement with the free software community, the first time I think I got involved in something related to policy was when there was discussion around software patents. When the patent office came out with a patent manual and there was this discussion about how it could affect the free software community and startups. So that was one discussion I followed, I wrote about it, and one thing led to another and I was called to speak at a seminar in New Delhi. That’s where I met Eben and Mishi from the Software Freedom Law Center. That was before SFLC India was started, but then once Mishi started the organization I joined as a Counsel. It’s been a long relationship. 

Greene: Just in a personal sense, what does freedom of expression mean to you? 

Apart from being a fundamental right, as evident in all the human rights agreements we have, and in the Indian Constitution, freedom of expression is the most basic aspect for a democratic nation. I mean without free speech you can not have a proper exchange of ideas, which is most important for a democracy. For any citizen to speak what they feel, to communicate their ideas, I think that is most important. As of now the internet is a medium which allows you to do that. So there definitely should be minimum restrictions from the government and other agencies in relation to the free exchange of ideas on this medium. 

Greene: Have you had any personal experiences with censorship that have sort of informed or influenced how you feel about free expression? 

When SFLC.IN was started in 2010 our major idea was to support the free software community. But how we got involved in the debates on free speech and privacy on the internet was when, in 2011, the IT Rules were introduced by the government as a draft for discussion and finally notified. This was on the regulation of intermediaries, these online platforms. This was secondary legislation based on the Information Technology Act (IT Act) in India, which is the parent law. So when these discussions happened we got involved in it and then one thing led to another. For example, there was a provision in the IT Act called Section 66-A which criminalized the sending of offensive messages through a computer or other communication devices. It was, ostensibly, introduced to protect women. And the irony was that two women were arrested under this law. That was the first arrest that happened, and it was a case of two women being arrested for the comments that they made about a leader who expired. 

This got us working on trying to talk to parliamentarians, trying to talk to other people about how we could maybe change this law. So there were various instances of content being taken down and people being arrested, and it was always done under Section 66-A of the IT Act. We challenged the IT Rules before the Supreme Court. In a judgment in a 2015 case called Shreya Singhal v. Union of India the Supreme Court read down the rules relating to intermediary liability. As for the rules, the platforms could be asked to take down the content. They didn’t have much of an option. If they don’t do that, they lose their safe harbour protection. The Court said it can only be actual knowledge and what actual knowledge means is if someone gets a court order asking them to take down the content. Or let’s say there’s direction from the government. These are the only two cases when content could be taken down.

Greene: You’ve lived in India your whole life. Has there ever been a point in your life when you felt your freedom of expression was restricted? 

Currently we are going through such a phase, where you’re careful about what you’re speaking about. There is a lot of concern about what is happening in India currently. This is something we can see mostly impacting people who are associated with civil society. When they are voicing their opinions there is now a kind of fear about how the government sees it, whether they will take any action against you for what you say, and how this could affect your organization. Because when you’re affiliated with an organization it’s not just about yourself. You also need to be careful about how anything that you say could affect the organization and your colleagues. We’ve had many instances of nonprofit organizations and journalists being targeted. So there is a kind of chilling effect when you really don’t want to say something you would otherwise say strongly. There is always a toning down of what you want to say. 

Greene: Are there any situations where you think it’s appropriate for governments to regulate online speech? 

You don’t have an absolute right to free speech under India’s Constitution. There can be restrictions as stated under Article 19(2) of the Constitution. There can be reasonable restrictions by the government, for instance, for something that could lead to violence or something which could lead to a riot between communities. So mostly if you look at hate speech on the net which could lead to a violent situation or riots between communities, that could be a case where maybe the government could intervene. And I would even say those are cases where platforms should intervene. We have seen a lot of hate speech on the net during India’s current elections as there have been different phases of elections going on for close to two months. We have seen that happening with not just political leaders but with many supporters of political parties publishing content on various platforms which aren’t really in the nature of hate speech but which could potentially create situations where you have at least two communities fighting each other. It’s definitely not a desirable situation. Those are the cases where maybe platforms themselves could regulate or maybe the government needs to regulate. In this case, for example, when it is related to elections, the Election Commission also has its role, but in many cases we don’t see that happening. 

Greene: Okay, let’s go back to hate speech for a minute because that’s always been a very difficult problem. Is that a difficult problem in India? Is hate speech well-defined? Do you think the current rules serve society well or are there problems with it? 

I wouldn’t say it’s well-defined, but even in the current law there are provisions that address it. So anything which could lead to violence or which could lead to animosity between two communities will fall in the realm of hate speech. It’s not defined as such, but then that is where your free speech rights could be restricted. That definitely could fall under the definition of hate speech. 

Greene: And do you think that definition works well? 

I mean the definition is not the problem. It’s essentially a question of how it is implemented. It’s a question of how the government or its agency implements it. It’s a question of how platforms are taking care of it. These are two issues where there’s more that needs to be done. 

Greene: You also talked about misinformation in terms of elections. How do we reconcile freedom of expression concerns with concerns for preventing misinformation? 

I would definitely say it’s a gray area. I mean how do you really balance this? But I don’t think it’s a problem which cannot be addressed. Definitely there’s a lot for civil society to do, a lot for the private sector to do. Especially, for example, when hate speech is reported to the platforms. It should be dealt with quickly, but that is where we’re seeing the worst difference in how platforms act on such reporting in the Global North versus what happens in the Global South. Platforms need to up their act when it comes to handling such situations and handling such content. 

Greene: Okay, let’s talk about the platforms then. How do you feel about censorship or restrictions on freedom of expression by the platforms? 

Things have changed a lot as to how these platforms work. Now the platforms decide what kind of content gets to your feed and how the algorithms work to promote content which is more viral. In many cases we have seen how misinformation and hate speech goes viral. And content that is debunking the misinformation which is kind of providing the real facts, that doesn’t go as far. The content that debunks misinformation doesn’t go viral or come up in your feed that fast. So that definitely is a problem, the way platforms are dealing with it. In many cases it might be economically beneficial for them to make sure that content which is viral and which puts forth misinformation reaches more eyes. 

Greene: Do you think that the platforms that are most commonly used in India—and I know there’s no TikTok in India— serve free speech interests or not? 

When the Information Technology Rules were introduced and when the discussions happened, I would say civil society supported the platforms, essentially saying these platforms ensured we can enjoy our free speech rights, people can enjoy their free speech rights and express themselves freely. How the situation changed over a period of time is interesting. Definitely these platforms are still important for us to express these rights. But when it comes to, let’s say, content being regulated, some platforms do push back when the government asks them to take down the content, but we have not seen that much. So whether they’re really the messiahs for free speech, I doubt. Over the years, we have seen that it is most often the case that when the government tells them to do something, it is in their interest to do what the government says. There has not been much pushback except for maybe Twitter challenging it in the court.  There have not been many instances where these platforms supported users. 

Greene: So we’ve talked about hate speech and misinformation, are there other types of content or categories of online speech that are either problematic in India now or at least that regulators are looking at that you think the government might try to do something with? 

One major concern which the government is trying to regulate is about deepfakes, with even the Prime Minister speaking about it. So suddenly that is something of a priority for the government to regulate. So that’s definitely a problem, especially when it comes to public figures and particularly women who are in politics who often have their images manipulated. In India we see that at election time. Even politicians who have been in the field for a long time, their images have been misused and morphed images have been circulated. So that’s definitely something that the platforms need to act on. For example, you cannot have the luxury of, let’s say, taking 48 hours to decide what to do when something like that is posted. This is something which platforms have to deal with as early as possible. We do understand there’s a lot of content and a lot of reporting happening, but in some cases, at least, there should be some prioritization of these reporting related to non-consensual sexual imagery. Maybe then the priority should go up. 

Greene: As an engineer, how do you feel about deepfake tech? Should the regulatory concerns be qualitatively different than for other kinds of false information? 

When it comes to deepfakes, I would say the problem is that it has become more mainstream. It has become very easy for a person to use these tools that have become more accessible. Earlier you needed to have specialized knowledge, especially when it came to something like editing videos. Now it’s become much easier. These tools are made easily available. The major difference now is how easy it is to access these applications. There can not be a case of fully regulating or fully controlling a technology. It’s not essentially a problem with the technology, because there would be a lot of ethical use cases. Just because something is used for a harmful purpose doesn’t mean that you completely block the technology. There is definitely a case for regulating AI and regulating deepfakes, but that doesn’t mean you put a complete stop to it. 

Greene: How do you feel about TikTok being banned in India? 

I think that’s less a question of technology or regulation and more of a geopolitical issue. I don’t think it has anything to do with the technology or even the transfer of data for that matter. I think it was just a geopolitical issue related to India/China relations. The relations have kind of soured with the border disputes and other things, and I think that was the trigger for the TikTok ban. 

Greene: What is your most significant legal victory from a human rights perspective and why? 

The victory that we had in the fight against the 2011 Rules and the portions related to intermediary liability, which was shot down by the Supreme Court. That was important because when it came to platforms and when it came to people expressing their critical views online, all of this could have been taken down very easily. So that was definitely a case of free speech rights being affected without much recourse. So that was a major victory. 

Greene: Okay, now we ask everyone this question. Who is your free speech hero and why?

I can’t think of one person, but I think of, for example, when the country went through a bleak period in the 1970s and the government declared a national state of emergency. During that time we had journalists and politicians who fought for free speech rights with respect to the news media. At that time even writing something in the publications was difficult. We had many cases of journalists who were fighting this, people who had gone to jail for writing something, who had gone to jail for opposing the government or publicly criticizing the government. So I don’t think of just one person, but we have seen journalists and political leaders fighting back during that state of emergency. I would say those are the heroes who could fight the government, who could fight law enforcement. Then there was the case of Justice H.R. Khanna, a judge who stood up for citizen’s rights and gave his dissenting opinion against the majority view, which cost him the position of Chief Justice. Maybe I would say he’s a hero, a person who was clear about constitutional values and principles.

David Greene

EFF Speaks Out in Court for Citizen Journalists

1 week 1 day ago

No one gets to abuse copyright to shut down debate. Because of that, we at EFF represent Channel 781, a group of citizen journalists whose YouTube channel was temporarily shut down following copyright infringement claims made by Waltham Community Access Corporation (WCAC). As part of that case, the federal court in Massachusetts heard oral arguments in Channel 781 News v. Waltham Community Access Corporation, a pivotal case for copyright law and digital journalism. 

WCAC, Waltham’s public access channel, records city council meetings on video. Channel 781, a group of independent journalists, curates clips of those meetings for its YouTube channel, along with original programming, to spark debate on issues like housing policy and real estate development. WCAC sent a series of DMCA takedown notices that accused Channel 781 of copyright infringement, resulting in YouTube deactivating Channel 781’s channel just days before a critical municipal election.

Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its DMCA takedown notices. We argued that using clips of government meetings from the government access station to engage in public debate is an obvious fair use under copyright. Also, by excerpting factual recordings and using captions to improve accessibility, the group aims to educate the public, a purpose distinct from WCAC’s unannotated broadcasts of hours-long meetings. The lawsuit alleges that WCAC’s takedown requests knowingly misrepresented the legality of Channel 781's use, violating Section 512(f) of the DMCA.

Fighting a Motion to Dismiss

In court this week, EFF pushed back against WCAC’s motion to dismiss the case. We argued to District Judge Patti Saris that Channel 781’s use of video clips of city government meetings was an obvious fair use, and that by failing to consider fair use before sending takedown notices to YouTube, WCAC violated the law and should be liable for damages.

If Judge Saris denies WCAC’s motion, we will move on to proving our case. We’re confident that the outcome will promote accountability for copyright holders who misuse the powerful notice-and-takedown mechanism that the DMCA provides, and also protect citizen journalists in their use of digital tools.

EFF will continue to provide updates as the case develops. Stay tuned for the latest news on this critical fight for free expression and the protection of digital rights.

Betty Gedlu

X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online

1 week 1 day ago

Late last week, the Senate released yet another version of the Kids Online Safety Act, reportedly written with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last-minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users.

TELL CONGRESS: VOTE NO ON KOSA


Update Fails to Protect Users from Censorship or Platforms from Liability

The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA only regulates the design of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.

The authors have failed to grasp the difference between immunizing individual expression and protecting a platform from the liability that KOSA would place on it.

KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will prevent minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022.

This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not users. It is platforms, not users, that must mitigate the harms listed in the bill, and it is the platform’s ability to share users’ views that is at risk—not the ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.

Let’s say, for example, that a covered platform like reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or of a post about drug use that X showed a user through its algorithm. If a platform’s defense were that this information is protected expression, the FTC could simply say that it isn’t enforcing the law based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. The carveout is a superfluous protection for user speech and expression that KOSA never penalized in the first place, while platforms could still be penalized for distributing that speech.

It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression, and protecting their own platform from the liability that KOSA would place on it.  

Compulsive Usage Doesn’t Narrow KOSA’s Scope 

Another of KOSA’s problems has been its vague list of harms, which has remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill.

 The bill doesn’t even require that the impact be a negative one. 

It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, this update cobbles together a definition that sounds just medical, or just legal, enough to appear legitimate—when in fact the definition is devoid of specific legal meaning and dangerously vague to boot.

How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful. 

Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that it believes results in depression or anxiety, so long as it can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swath of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football.

Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation

The latest KOSA draft comes as the incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC, under this or any future administration, wide latitude to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age verification requirements, even if it’s never enforced; the FTC could simply signal the types of content it believes harm children and use the mere threat of enforcement to force platforms to comply.

No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content has no place in a last-minute funding bill.

TELL CONGRESS: VOTE NO ON KOSA


Jason Kelley

Brazil’s Internet Intermediary Liability Rules Under Trial: What Are the Risks?

1 week 3 days ago

The Brazilian Supreme Court is on the verge of deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring removal. A panel of eleven justices is examining two cases jointly, one of which directly challenges whether Brazil’s internet intermediary liability regime for user-generated content meets the standards of the country’s Federal Constitution. The outcome of these cases could seriously undermine important free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expands notice-and-takedown mandates.

The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n. 12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule included in the Brazilian child protection law.

The decision the court reaches will set a precedent for lower courts on two main topics: whether Marco Civil’s internet intermediary liability regime is aligned with Brazil’s Constitution, and whether internet application providers have an obligation to monitor the online content they host and remove it when deemed offensive, without judicial intervention. Moreover, it could have a regional and cross-regional impact as lawmakers and courts look across borders at platform regulation trends amid global coordination initiatives.

After a public hearing held last year, the Court's sessions about the cases started in late November and, so far, only Justice Dias Toffoli, who is in charge of Marco Civil’s constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown regime set in Article 21 of Marco Civil, which relates to unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of these activities.

However, platforms could be held liable for certain content regardless of notification, which effectively creates a monitoring duty. Examples include content considered criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. It also includes the publication of patently false or severely decontextualized facts that lead to violence or have the potential to disrupt the electoral process. If there’s reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.

The court session resumes today, but it’s still uncertain whether all eleven justices will reach a judgment by year’s end.

Some Background About Marco Civil’s Intermediary Liability Regime

The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers' liability for user-generated content reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize their economic interests and security over preserving users’ protected expression and over-remove content to avoid legal battles and regulatory scrutiny. The enforcement overreach of copyright rules online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.

The provision was in line with the recommendations of the Special Rapporteurs for Freedom of Expression of the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then IACHR Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship and would run against the State’s duty to favor an institutional framework that protects and guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns about over-removal and the weaponization of notification mechanisms to censor protected speech.

A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem is more centralized, and algorithmic mediation of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant, and the balance its intermediary liability rule struck remains a sound way of addressing them. As for the current challenges, the changes to the liability regime suggested in Dias Toffoli’s vote will likely reinforce, rather than reduce, corporate surveillance, Big Tech’s predominance, and digital platforms’ power over online speech.

The Cases Under Trial and The Reach of the Supreme Court’s Decision

The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher, sued Google Brasil Internet Ltda to remove an online community created by students to offend her on the now-defunct Orkut platform. She asked for the deletion of the community and compensation for moral damages, as the platform didn’t remove the community after an extrajudicial notification. Google deleted the community following the decision of the lower court, but the judicial dispute over the compensation continued.

In the second case, the plaintiff sued Facebook after the company didn’t remove an offensive fake account impersonating her. The lawsuit sought to shut down the fake account, to identify the account’s IP address, and to obtain compensation for moral damages. As Marco Civil had already passed, the judge denied the moral compensation request. Yet the appeals court found that Facebook could be liable for not removing the fake account after an extrajudicial notification, holding Marco Civil’s intermediary liability regime unconstitutional vis-à-vis Brazil’s constitutional protection of consumers.

Both cases went all the way up to the Supreme Court in two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, what is under analysis by the Brazilian Supreme Court in these appeals is not only the individual cases, but also the court’s understanding of the general repercussion issues involved. What the court stipulates in this regard will orient lower courts’ decisions in similar cases.

The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention. 

There’s a lot at stake for users’ rights online in the outcomes of these cases. 

The Many Perils and Pitfalls on the Way

Brazil’s platform regulation debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their attention-driven business model, and revelations of plans and actions by the previous presidential administration to arbitrarily remain in power have inflamed discussions about regulating Big Tech. Because the debate’s main legislative vehicle, draft bill 2630 (PL 2630), didn’t move forward in the Brazilian Congress, the Supreme Court’s pending cases gained traction as the available alternative for introducing changes.

We’ve written about intermediary liability trends around the globe, how to move forward, and the risks that changes to safe harbor regimes end up reshaping intermediaries’ behavior in ways that ultimately harm freedom of expression and other rights for internet users.

One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering with automated takedowns of potentially infringing content.

While platforms like Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they receive every minute, the resources they have for doing so are not available to other, smaller internet application providers that host users’ expression. Making automated content monitoring a general obligation will likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit or even endanger the existence of less-centralized content moderation models, contributing yet again to entrenching Big Tech’s dominance and business model.

But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn’t mean they do it well. Automated content monitoring is hard at scale, and platforms constantly fail at purging content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based detection of copyright infringement that have deeply undermined fair use rules, automated systems often flag and censor crucial information that should stay online.

Just to give a few examples: during the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police’s harsh repression of demonstrations, having deemed it violent content. In Brazil, we saw similar concerns when Instagram censored images of the 2021 massacre in the Jacarezinho community, the most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.

These are all examples of content similar to what could fall within Justice Toffoli’s list of speech subject to a strict liability regime. And while this regime shouldn’t apply in cases of reasonable doubt, platform companies aren’t likely to risk keeping such content up out of concern that a judge later decides it wasn’t a reasonable-doubt situation and orders them to pay damages. Digital platforms then have a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on how these systems operate, that means a strong incentive to conduct prior censorship potentially affecting protected expression, which defies Article 13 of the American Convention.

Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While the company has the chance to analyze and decide whether to keep content online, the incentive, again, is to err on the side of taking it down to avoid legal costs.

Brazil’s own experience in courts shows how tricky the issue can be. InternetLab’s research based on rulings involving free expression online indicated that Brazilian courts of appeals denied content removal requests in more than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that, at some point in judicial proceedings, judges agreed with content removal requests in around half of the cases, and some of those decisions were reversed later on. This is especially concerning in honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the public-interest logic of preserving access to information. We should not forget the companies that thrived by offering reputation management services built upon the use of takedown mechanisms to make critical content disappear from the internet.

It’s important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli’s vote asserts platforms’ duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to avoid the misuse of notification systems. Article 21 of Marco Civil provides that notices must allow the specific identification of the contested content (generally understood as the URL) and include elements to verify that the complainant is the person offended. Beyond that, there is no further guidance on which details and justifications the notice should contain, or on whether the content’s author would have the opportunity, and a proper mechanism, to respond to or appeal the takedown request.

As we have said before, we should not conflate platform accountability with reinforcing digital platforms as points of control over people’s online expression and actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Unfortunately, the Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, while also creating additional hurdles for smaller platforms and decentralized models to compete with the current digital giants.

Veridiana Alimonti

Introducing EFF’s New Video Series: Gate Crashing

1 week 3 days ago

The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres of novel writing, cultural criticism, and journalism were populated by well-off straight white men, with anyone not meeting those criteria being an outlier. Add in giant corporations acting as gatekeepers to those spheres, and it was a very homogenous culture. The internet has changed that.

There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences like these when thinking about new internet legislation or policy. 

In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel, co-creators of the Rec Center newsletter, in a wide-ranging discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way. 

[Embedded YouTube video: https://www.youtube.com/embed/aeplIxvskx8] Privacy info. This embed will serve content from youtube.com

Katharine Trendacosta