Speaking Freely: Benjamin Ismail

3 months 2 weeks ago

Interviewer: Jillian York

Benjamin Ismail is the Campaign and Advocacy Director for GreatFire, where he leads efforts to expose the censorship apparatus of authoritarian regimes worldwide. He also oversees the App Censorship Project, including the AppleCensorship.com and GoogleCensorship.org platforms, which track mobile app censorship globally. From 2011 to 2017, Benjamin headed the Asia-Pacific desk at Reporters Without Borders (RSF).

Jillian York: Hi Benjamin, it's great to chat with you. We got to meet at the Global Gathering recently and we did a short video there and it was wonderful to get to know you a little bit. I'm going to start by asking you my first basic question: What does free speech or free expression mean to you?

Benjamin Ismail: Well, it starts with a very, very big question. What I have in mind is a cliché answer, but it's what I genuinely believe. I think about all freedoms. When you say free expression, free speech, freedom of information, or Article 19, all of those concepts are linked together, and I immediately think of all human rights at once. Because what I have seen during my current and past work is how that freedom is really the cornerstone of all freedoms. If you don’t have that, you can’t have any other freedom. If you don’t have freedom of expression, if you don't have journalism, you don't have pluralism of opinions—you have self-censorship.

You have realities, violations, that exist but are not talked about, not exposed, not revealed, not tackled, and nothing really improves without that first freedom. I also think about Myanmar, because I remember going there in 2012, when the country had just opened after the democratic revolution. We got the chance to meet with many officials and ministers, and we got to tell them that they should start with that, because their message was “don’t worry, don’t raise freedom of speech, freedom of the press will come in due time.”

And we were saying “no, that’s not how it works!” It doesn’t come in due time when other things are being worked on. It starts with that so you can work on other things. And so I remember very well those meetings and how actually, unfortunately, the key issues that re-emerged afterwards in the country were precisely due to the fact that they failed to truly implement free speech protections when the country started opening.

JY: What was your path to this work?

BI: This is a multi-faceted answer. So, I was studying Chinese language and civilization at the National Institute of Oriental Languages and Civilizations in Paris along with political science and international law. When I started that line of study, I considered maybe becoming a diplomat…that program led to preparing for the exams required to enter the diplomatic corps in France.

But I also heard negative feedback on the Ministry of Foreign Affairs and, notably, first-hand testimonies from friends and fellow students who had done internships there. I already knew that I had a little bit of an issue with authority. My experience as an assistant at Reporters Without Borders challenged the preconceptions I had about NGOs and civil society organizations in general. I was a bit lucky to come at a time when the organization was really trying to find its new direction, its new inspiration. So it was a brief phase where the organization itself was hungry for new ideas.

Being young and not very experienced, I was invited to share my input, my views—among many others of course. I saw that you can influence an organization’s direction, actions, and strategy, and see the materialization of those strategic choices: launching a campaign, setting priorities, and deciding how to tackle issues like freedom of information and the protection of journalists in various contexts.

That really motivated me and I realized that I would have much less to say if I had joined an institution such as the Ministry of Foreign Affairs. Instead, I was part of a human-sized group, about thirty-plus employees working together in one big open space in Paris.

After that experience I set my mind on joining the civil society sector, focusing on freedom of the press. Working on journalistic issues, you get to touch on many different issues in many different regions, and I really like that. So even though it’s kind of monothematic, it's a single topic that encompasses everything at the same time.

I was dealing with safety issues for Pakistani journalists threatened by the Taliban. At the same time I followed journalists pressured by corporations such as TEPCO and the government in Japan for covering nuclear issues. I got to touch on many topics through the work of the people we were defending and helping. That’s what really locked me onto this specific human right.

I was already interested in political and civil rights from my studies, but after that first experience, at the end of 2010, I went to China and got called by Reporters Without Borders. They told me that the head of the Asia desk was leaving and invited me to apply for the position. At that time, I was in Shanghai, working to settle down there. The alternative was accepting a job that would take me back to Paris but likely close the door on any return to China. Once you start giving interviews to outlets like the BBC and CNN, well… you know how that goes—RSF was not viewed favorably in many countries. Eventually, I decided it was a huge opportunity, so I accepted the job and went back to Paris, and from then on I was fully committed to that issue.

 JY: For our readers, tell us what the timeline of this was.

BI: I finished my studies in 2009. I did my internship with Reporters Without Borders that year and continued to work pro bono for the organization on the Chinese website in 2010. Then I went to China, and in January 2011, I was contacted by Reporters Without Borders about the departure of the former head of the Asia-Pacific Desk.

I did my first and last fact-finding mission in China, in Beijing, where, around March 2011, I met the artist Ai Weiwei just a few weeks before his arrest. Then I flew back to Paris and started heading the Asia desk. I left the organization in 2017.

JY: Such an amazing story. I’d love to hear more about the work that you do now.

BI: The story of the work I do now actually starts in 2011. That was my first year heading the Asia-Pacific Desk. That same year, a group of anonymous activists based in China started a group called GreatFire. They launched their project with a website where you can type any URL and it will test the connection from mainland China to that URL and tell you whether it’s accessible or blocked. They also kept the test records, so you can look at the history of the blocking of a specific website, which is great. That was GreatFire’s first project for monitoring web censorship in mainland China.
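
GreatFire's actual testing infrastructure isn't public, but the core of such a monitor is easy to sketch. Here is a minimal, hypothetical Python probe; the verdict labels, history file name, and test URL are illustrative assumptions, and a real monitor would have to run from a vantage point inside mainland China:

    import csv
    import urllib.error
    import urllib.request
    from datetime import datetime, timezone

    def probe(url: str, timeout: float = 10.0) -> str:
        """Return a coarse reachability verdict for one URL."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return f"accessible ({resp.status})"
        except urllib.error.HTTPError as err:
            # The server answered, so the network path itself is not blocked.
            return f"accessible ({err.code})"
        except OSError:
            # Timeout, connection reset, or DNS failure -- the signatures
            # censorship systems typically produce.
            return "blocked-or-unreachable"

    def record(url: str, history_file: str = "probe_history.csv") -> None:
        # Append each result so the blocking history of a URL can be
        # reviewed later, as GreatFire's test records allow.
        with open(history_file, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now(timezone.utc).isoformat(), url, probe(url)]
            )

    record("https://example.com/")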

We started exchanging information, working on the issue of censorship in China. They continued to develop more projects which I tried to highlight as well. I also helped them to secure some funding. At the very beginning, they were working on these things as a side job. And progressively they managed to get some funding, which was very difficult because of the anonymity.

One of the things I remember is that I helped them get some funding from the EU through a mechanism called “Small Grants”, where every grant would be around €20,000 to €30,000. The EU, you know, is a bureaucratic entity, and they were demanding some paperwork and documents. I told them that they wouldn’t be able to get the real names of the people working at GreatFire, but that they shouldn’t be concerned about that, because what they wanted was to finance that tool. So if we were to show them that the people they were going to send the money to were actually the people controlling that website, then it would be fine. And so we featured a little EU logo just for one day, I think on the footer of the website, so they could check that. And that’s how we convinced the EU to support GreatFire for that work. Also, there's this tactic called “Collateral Freedom” that GreatFire uses very well.

The idea is that you host sensitive content on HTTPS servers that belong to companies which also operate inside China and are accessible there. Because it’s HTTPS, the connection is encrypted, so the authorities can’t just block a specific page—they can’t see exactly which page is being accessed. To block it, they’d have to block the entire service. Now, they can do that, but it comes at a higher political and economic cost, because it means disrupting access to other things hosted on that same service—like banks or major businesses. That’s why it’s called “collateral freedom”: you’re basically forcing the authorities to risk broader collateral damage if they want to censor your content.
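
The mechanics are visible at the socket level. In this minimal Python sketch (example.com stands in for a mirror hosted on a big provider's domain, and the path is made up), the hostname is essentially the only thing an on-path observer can read:

    import socket
    import ssl

    HOST = "example.com"  # stand-in for a mirror hosted on a major provider

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as tcp:
        # The hostname below is sent in cleartext in the TLS handshake (the
        # SNI field) -- this is roughly all an on-path censor can see.
        with context.wrap_socket(tcp, server_hostname=HOST) as tls:
            # The HTTP request line, including the path, travels encrypted.
            # A censor cannot tell a request for a mirrored, censored page
            # from a request for anything else on the same host, so blocking
            # the page means blocking the whole host.
            tls.sendall(
                b"GET /mirrored-page HTTP/1.1\r\n"
                b"Host: example.com\r\n"
                b"Connection: close\r\n\r\n"
            )
            print(tls.recv(120))  # first bytes of the decrypted response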

When I was working for RSF, I proposed that we replicate that tactic on the 12th of March—that's the World Day Against Cyber Censorship. We had the habit of publishing what we called the “Enemies of the Internet” report, where we would highlight and update the situation in the countries which were carrying out the harshest repression online; countries like Iran, Turkmenistan, North Korea, and of course, China. I suggested in a team meeting: “What if we highlighted the good guys? Maybe we could highlight 10 exiled media and use collateral freedom to uncensor those.” And so we did: some Iranian, Egyptian, Chinese, and Turkmen media were uncensored using mirrors hosted on HTTPS servers owned by big, and thus harder to block, companies. And that’s how we started to do collateral freedom, and it continued to be an annual thing.

I also helped in my personal capacity, including after I left Reporters Without Borders. After leaving RSF, I joined another NGO focusing on China, which I also knew from my time at RSF. I worked with that group for a year and a half before GreatFire contacted me to work on a website specifically. So here we are at the beginning of 2020: they had just started a website called AppleCensorship.com that allowed users to test the availability of any app in any of Apple’s 175 App Stores worldwide. They needed a better website—one that allowed advocacy content—for that tool.
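
The kind of check AppleCensorship.com automates can be approximated with Apple's public iTunes Lookup API, which reports whether an app ID is listed in a given storefront. A hypothetical sketch follows; the app ID and country codes are illustrative assumptions, and "not found" can mean the app was removed or simply never published there:

    import json
    import urllib.request

    APP_ID = 284882215  # assumed to be Facebook's listing; substitute any trackId
    COUNTRIES = ["us", "cn", "ru", "fr"]  # two-letter storefront codes

    for country in COUNTRIES:
        url = f"https://itunes.apple.com/lookup?id={APP_ID}&country={country}"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        # resultCount == 0 means the app is not listed in that storefront.
        status = "listed" if data.get("resultCount", 0) > 0 else "not found"
        print(f"{country}: {status}")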

The idea was to make a website useful for academics doing research, journalists investigating app store censorship and control, and human rights NGOs and civil society organizations interested in the availability of particular tools. Apple’s censorship in China started quickly after the company entered the Chinese market in 2010.

In 2013, one of GreatFire’s projects, which had been turned into an iOS app, was removed by Apple 48 hours after its release on the App Store, at the demand of the Chinese authorities. That project was Free Weibo, a website that features censored posts from Weibo, the Chinese equivalent of Twitter—we crawl social media, detect censored posts, and republish them on the site. In 2017 it was reported that Apple had removed all VPNs from the Chinese App Store.

That episode in 2013, together with Apple’s growing censorship in China (and in other places too), led to the creation of AppleCensorship in 2019. GreatFire asked me to work on that website. The transformation into an advocacy platform was successful. I then started working full time on that project, which has since evolved into the App Censorship Project and now includes another website, GoogleCensorship.org (offering features similar to AppleCensorship.com but for the 224 Play Stores worldwide). In the meantime, I became the head of campaigns and advocacy, because of my background at RSF.

JY: I want to ask you, looking beyond China, what are some other places in the world that you're concerned about at the moment, whether professionally or just as a person? What are you seeing right now in terms of global trends around free expression that worry you?

BI: I think, like everyone else, that what we're seeing in Western democracies—in the US and even in Europe—is concerning. But I'm still more concerned about authoritarian regimes than about our democracies. Maybe it's a case of not learning my lesson or of naive optimism, but I'm still more concerned about China and Russia than I am about what I see in France, the UK, or the US.

There has been some recent reporting about China developing very advanced censorship and surveillance technologies and exporting them to other countries like Myanmar and Pakistan. As for what we’re seeing in Russia—I’m not an expert on that region, but back in 2022 we heard experts saying that Russia was trying to increase its censorship and control, but that it couldn’t become like China, because China had exerted control over its internet from the very beginning: it blocked Facebook back in 2009, then Google was pushed away by the authorities (and the market). And the Chinese authorities successfully filled the gaps left by the absence of those foreign Western companies.

Some researchers working on Russia were saying that it wasn’t really possible for Russia to do what China had done because it was unprepared and that China had engineered it for more than a decade. What we are seeing now is that Russia is close to being able to close its Internet, to close the country, to replace services by its own controlled ones. It’s not identical, but it’s also kind of replicating what China has been doing. And that’s a very sad observation to make.

Beyond the digital, the question of how far Putin is willing to go in escalating is concerning. As a human being and an inhabitant of the European continent, I’m concerned by the ability of a country like Russia to isolate itself while waging a war. Russia is engaged in a real war and at the same time is able to completely digitally close down the country. Between that and the example of China exporting censorship, I’m not far from thinking that in ten or twenty years we’ll have a completely splintered internet.

JY: Do you feel like having a global perspective like this has changed or reshaped your views in any way?

BI: Yes, in the sense that when you start working with international organizations, and you start hearing about the world and how human rights are universal values, and you get to meet people and go to different countries, you really get to experience how universal those freedoms and aspirations are. When I worked at RSF and lobbied governments to pass a good law or abolish a repressive one, or when I worked on a case of a jailed journalist or blogger, I got to talk to authorities and to hear weird justifications from certain governments (not mentioning any names, but Myanmar and Vietnam) like “those populations are different from the French,” and I would receive pushback that the ideas of freedom I was describing were not applicable to their societies. It’s a bit destabilizing when you hear that for the first time. But as you gain experience, you can clearly explain why human rights are universal and why different populations shouldn’t be ruled differently when it comes to human rights.

Everyone wants to be free. This notion of “universality” is comforting because when you’re working for something universal, the argument is there. The freedoms you defend can’t be challenged in principle, because everyone wants them. If governments and authorities really listened to their people, they would hear them calling for those rights and freedoms.

Or that’s what I used to think. Now we hear this growing rhetoric that we (people from the West) are exporting democracy, that it’s a Western value and not a universal one. This discourse, notably developed by Xi Jinping in China, which frames “Western democracy” as a distinct new concept, is a complete fallacy. Democracy was invented in the West, but democracy is universal. Unfortunately, I now believe that, in the future, we will have to justify and argue much more strongly for the universality of concepts like democracy, human rights, and fundamental freedoms.

JY: Thank you so much for this insight. And now for our final question: Do you have a free speech hero?

BI: No.

JY: No? No heroes? An inspiration maybe.

BI: On the contrary, I’ve been disappointed so much by certain figures that were presented as human rights heroes…Like Aung San Suu Kyi during the Rohingya crisis, on which I worked when I was at RSF.

Myanmar officially recognizes 135 ethnic groups, but somehow this one additional ethnic minority (the Rohingya) is impossible for them to accept. It’s appalling. It’s weird to say, but some heroes are not really good people either. Being a hero is doing a heroic action, but people who do heroic actions can also do very bad things before or after, at a different level. They can be terrible people, husbands, or friends and be a “human rights” hero at the same time.

Some people really inspired me but they’re not public figures. They are freedom fighters, but they are not “heroes”. They remain in the shadows. I know their struggles; I see their determination, their conviction, and how their personal lives align with their role as freedom fighters. These are the people who truly inspire me.

Jillian C. York

A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe

3 months 2 weeks ago

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

TAKE ACTION

TELL CONGRESS: THE GUARD ACT WON'T KEEP US SAFE

Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely. 

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different. 

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.  

EFF has long documented the dangers of age-verification systems:

  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop. 

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses, including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to achieve it.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.

TAKE ACTION

TELL CONGRESS: OPPOSE THE GUARD ACT

Molly Buckley

Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing

3 months 2 weeks ago

Remember when you thought age verification laws couldn't get any worse? Well, lawmakers in Wisconsin, Michigan, and beyond are about to blow you away.

It's unfortunately no longer enough to force websites to check your government-issued ID before you can access certain content, because politicians have now discovered that people are using Virtual Private Networks (VPNs) to protect their privacy and bypass these invasive laws. Their solution? Entirely ban the use of VPNs. 

Yes, really.

As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to implement an age verification system and to block access by users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing—potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.

This follows a notable pattern: As we’ve explained previously, lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature.

Wisconsin’s bill has already passed the State Assembly and is now moving through the Senate. If it becomes law, Wisconsin could become the first state where using a VPN to access certain content is banned. Michigan lawmakers have proposed similar legislation that, among other things, would force internet providers to actively monitor and block VPN connections; it has not moved through the legislature. And in the UK, officials are calling VPNs "a loophole that needs closing."

This is actually happening. And it's going to be a disaster for everyone.

Here's Why This Is A Terrible Idea 

VPNs mask your real location by routing your internet traffic through a server somewhere else. When you visit a website through a VPN, that website only sees the VPN server's IP address, not your actual location. It's like sending a letter through a P.O. box so the recipient doesn't know where you really live. 

So when Wisconsin demands that websites "block VPN users from Wisconsin," they're asking for something that's technically impossible. Websites have no way to tell if a VPN connection is coming from Milwaukee, Michigan, or Mumbai. The technology just doesn't work that way.
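
A toy server makes the point concrete. In this minimal Python sketch (the port number is an arbitrary choice), the peer address of the TCP connection is the only origin information the site ever receives; if the connection arrives through a VPN, that address belongs to the VPN server:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class WhoAmIHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The only network-level identity a website gets is the peer
            # address of the TCP connection. Through a VPN, that is the VPN
            # server's IP; nothing here reveals whether the user is in
            # Milwaukee, Michigan, or Mumbai.
            body = f"You connected from {self.client_address[0]}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), WhoAmIHandler).serve_forever()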

Websites subject to this proposed law are left with this choice: either cease operation in Wisconsin, or block all VPN users, everywhere, just to avoid legal liability in the state. One state's terrible law is attempting to break VPN access for the entire internet, and the unintended consequences of this provision could far outweigh any theoretical benefit.

Almost Everyone Uses VPNs

Let's talk about who lawmakers are hurting with these bills, because it sure isn't just people trying to watch porn without handing over their driver's license.

  1. Businesses run on VPNs. Every company with remote employees uses VPNs. Every business traveler connecting through sketchy hotel Wi-Fi needs one. Companies use VPNs to protect client and employee data, secure internal communications, and prevent cyberattacks. 
  2. Students need VPNs for school. Universities require students to use VPNs to access research databases, course materials, and library resources. These aren't optional, and many professors literally assign work that can only be accessed through the school VPN. The University of Wisconsin-Madison’s WiscVPN, for example, “allows UW–‍Madison faculty, staff and students to access University resources even when they are using a commercial Internet Service Provider (ISP).” 
  3. Vulnerable people rely on VPNs for safety. Domestic abuse survivors use VPNs to hide their location from their abusers. Journalists use them to protect their sources. Activists use them to organize without government surveillance. LGBTQ+ people in hostile environments—both in the US and around the world—use them to access health resources, support groups, and community. For people living under censorship regimes, VPNs are often their only connection to vital resources and information their governments have banned. 
  4. Regular people just want privacy. Maybe you don't want every website you visit tracking your location and selling that data to advertisers. Maybe you don't want your internet service provider (ISP) building a complete profile of your browsing history. Maybe you just think it's creepy that corporations know everywhere you go online. VPNs can protect everyday users from everyday tracking and surveillance.

It’s A Privacy Nightmare

Here's what happens if VPNs get blocked: everyone has to verify their age by submitting government IDs, biometric data, or credit card information directly to websites, with no privacy-protecting layer in between.

We already know how this story ends. Companies get hacked. Data gets breached. And suddenly your real name is attached to the websites you visited, stored in some poorly-secured database waiting for the inevitable leak. This has already happened, and the next breach is not a matter of if but when. And when it comes, the repercussions will be huge.

Forcing people to give up their privacy to access legal content is the exact opposite of good policy. It's surveillance dressed up as safety.

"Harmful to Minors" Is Not a Catch-All 

Here's another fun feature of these laws: they're trying to broaden the definition of “harmful to minors” to sweep in a host of speech that is protected for both young people and adults.

Historically, states can prohibit people under 18 years old from accessing sexual materials that an adult can access under the First Amendment. But the definition of what constitutes “harmful to minors” is narrow — it generally requires that the materials have almost no social value to minors and that they, taken as a whole, appeal to a minor’s “prurient sexual interests.”

Wisconsin's bill defines “harmful to minors” much more broadly. It applies to materials that merely describe sex or feature descriptions/depictions of human anatomy. This definition would likely encompass a wide range of literature, music, television, and films that are protected under the First Amendment for both adults and young people, not to mention basic scientific and medical content.

Additionally, the bill’s definition would apply to any websites where more than one third of the site’s material is "harmful to minors." Given the breadth of the definition and its one-third trigger, we anticipate that Wisconsin could argue that the law applies to most social media websites. And it’s not hard to imagine, as these topics become politicized, Wisconsin claiming it applies to websites containing LGBTQ+ health resources, basic sexual education resources, and reproductive healthcare information.

The breadth of the bill’s definition isn't a bug; it's a feature. It gives the state a vast amount of discretion to decide which speech is “harmful” to young people, and the power to decide what's "appropriate" and what isn't. History shows us those decisions most often harm marginalized communities.

It Won’t Even Work

Let's say Wisconsin somehow manages to pass this law. Here's what will actually happen:

People who want to bypass it will use non-commercial VPNs, open proxies, or cheap virtual private servers that the law doesn't cover. They'll find workarounds within hours. The internet always routes around censorship. 

Even in a fantasy world where every website successfully blocked all commercial VPNs, people would just make their own. You can route traffic through cloud services like AWS or DigitalOcean, tunnel through someone else's home internet connection, use open proxies, or spin up a cheap server for less than a dollar. 

Meanwhile, everyone else (businesses, students, journalists, abuse survivors, regular people who just want privacy) will have their VPN access impacted. The law will accomplish nothing except making the internet less safe and less private for users.

Nonetheless, as we’ve mentioned previously, while VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. Like the larger age verification legislation they are a part of, VPN-blocking provisions simply don't work. They harm millions of people and they set a terrifying precedent for government control of the internet. More fundamentally, legislators need to recognize that age verification laws themselves are the problem. They don't work, they violate privacy, they're trivially easy to circumvent, and they create far more harm than they prevent.

A False Dilemma

People have (predictably) turned to VPNs to protect their privacy as they watched age verification mandates proliferate around the world. Instead of taking this as a sign that maybe mass surveillance isn't popular, lawmakers have decided the real problem is that these privacy tools exist at all and are trying to ban the tools that let people maintain their privacy. 

Let's be clear: lawmakers need to abandon this entire approach.

The answer to "how do we keep kids safe online" isn't "destroy everyone's privacy." It's not "force people to hand over their IDs to access legal content." And it's certainly not "ban access to the tools that protect journalists, activists, and abuse survivors."

If lawmakers genuinely care about young people's well-being, they should invest in education, support parents with better tools, and address the actual root causes of harm online. What they shouldn't do is wage war on privacy itself. Attacks on VPNs are attacks on digital privacy and digital freedom. And this battle is being fought by people who clearly have no idea how any of this technology actually works. 

If you live in Wisconsin—reach out to your Senator and urge them to kill A.B. 105/S.B. 130. Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.

Rindala Alajaji

🔔 Ring's Face Scan Plan | EFFector 37.16

3 months 3 weeks ago

Cozy up next to the fireplace and we'll catch you up on the latest digital rights news with EFF's EFFector newsletter.

In our latest issue, we’re exposing surveillance logs that reveal racist policing; explaining the harms of Google’s plan for Android app gatekeeping; and continuing our new series, Gate Crashing, exploring how the internet empowers people to take nontraditional paths into the traditional worlds of journalism, creativity, and criticism.

Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Mario Trujillo explains why Ring's upcoming facial recognition tool could violate the privacy rights of millions of people. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.16 - 🔔 RING'S FACE SCAN PLAN

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

Washington Court Rules That Data Captured on Flock Safety Cameras Are Public Records

3 months 3 weeks ago

A Washington state trial court has shot down local municipalities’ effort to keep automated license plate reader (ALPR) data secret.

The Skagit County Superior Court in Washington rejected the attempt to block the public’s right to access data gathered by Flock Safety cameras, protecting access to information under the Washington Public Records Act (PRA). Importantly, the ruling from the court makes it clear that this access is protected even when a Washington city uses Flock Safety, a third-party vendor, to conduct surveillance and store personal data on behalf of a government agency.

"The Flock images generated by the Flock cameras...are public records," the court wrote in its ruling. "Flock camera images are created and used to further a governmental purpose. The Flock images created by the cameras located in Stanwood and Sedro-Woolley were paid for by Stanwood and Sedro Wooley [sic] and were generated for the benefit of Stanwood and Sedro-Woolley."

The cities’ move to exempt the records from disclosure was a dangerous attempt to deny transparency and reflects another problem with the massive amount of data that police departments collect through Flock cameras and store on Flock servers: the wiggle room cities seek when public data is hosted on a private company’s server.

Flock Safety's main product is ALPRs, camera systems installed throughout communities to track all drivers all the time. Privacy activists and journalists across the country recently have used public records requests to obtain data from the system, revealing a variety of controversial uses. This has included agencies accessing data for immigration enforcement and to investigate an abortion, the latter of which may have violated Washington law. A recent report from the University of Washington found that some cities in the state are also sharing the ALPR data from their Flock Safety systems with federal immigration agents. 

In this case, a member of the public in April filed a records request with a Flock customer, the City of Stanwood, for all footage recorded during a one-hour period in March. Shortly afterward, Stanwood and another Flock user, the City of Sedro-Woolley, asked the local court to rule that this data is not a public record, asserting that “data generated by Flock [automated license plate reader cameras (ALPRs)] and stored in the Flock cloud system are not public records unless and until a public agency extracts and downloads that data.”

If a government agency is conducting mass surveillance, EFF supports individuals’ access to data collected specifically on them, at the very least. And to address legitimate privacy concerns, governments can and should redact personal information in these records while still disclosing information about how the systems work and the data that they capture. 

This isn’t what these Washington cities offered, though. They tried a few different arguments against releasing any information at all. 

The contract between the City of Sedro-Woolley and Flock Safety clearly states that "As between Flock and Customer, all right, title and interest in the Customer Data, belong to and are retained solely by Customer,” and “Customer Data” is defined as "the data, media, and content provided by Customer through the Services. For the avoidance of doubt, the Customer Data will include the Footage." Other Flock-using police departments across the country have also relied on similar contract language to insist that footage captured by Flock cameras belongs to the jurisdiction in question.

The contract language notwithstanding, officials in Washington attempted to restrict public access by claiming that retrieving video footage stored on Flock’s servers in response to a records request would constitute the generation of a new record. This part of the argument claimed that any information that was gathered but not otherwise accessed by law enforcement, including thousands of images taken every day by the agency’s 14 Flock ALPR cameras, had nothing to do with government business, would generate a new record, and should not be subject to records requests. The cities shut off their Flock cameras while the litigation was ongoing.

If the court had ruled in favor of the cities’ claim, police could move to store all their data — from their surveillance equipment and otherwise — on private company servers and claim that it's no longer accessible to the public. 

The cities threw another reason for withholding information at the wall to see if it would stick, claiming that even if the court found that data collected on Flock cameras are in fact public records, the cities should still be able to block the release of the requested one hour of footage, either because all of the images captured by Flock cameras are sensitive investigation material or because they should be treated the same way as automated traffic safety cameras.

EFF is particularly opposed to this line of reasoning. In 2017, the California Supreme Court sided with EFF and ACLU in a case arguing that “the license plate data of millions of law-abiding drivers, collected indiscriminately by police across the state, are not ‘investigative records’ that law enforcement can keep secret.” 

Notably, when Stanwood Police Chief Jason Toner made his pitch to the City Council to procure the Flock cameras in April 2024, he was adamant that the ALPRs would not be the same as traffic cameras. “Flock Safety Cameras are not ‘red light’ traffic cameras nor are they facial recognition cameras,” Chief Toner wrote at the time, adding that the system would be a “force multiplier” for the department. 

If the court had gone along with this part of the argument, cities could have claimed that the mass surveillance conducted using ALPRs is part of undefined mass investigations, withholding from the public huge amounts of information gathered without warrants or cause.

The cities seemed to be setting up contradictory arguments. Maybe the footage captured by the cities’ Flock cameras belongs to the city — or maybe it doesn’t until the city accesses it. Maybe the data collected by the cities’ taxpayer-funded cameras are unrelated to government business and should be inaccessible to the public — or maybe it’s all related to government business and, specifically, to sensitive investigations, presumably of every single vehicle that goes by the cameras. 

The requester, Jose Rodriguez, still won’t be getting his records, despite the court’s positive ruling. 

“The cities both allowed the records to be automatically deleted after I submitted my records requests and while they decided to have their legal council review my request. So they no longer have the records and can not provide them to me even though they were declared to be public records,” Rodriguez told 404 Media — another possible violation of that state’s public records laws. 

Flock Safety and its ALPR system have come under increased scrutiny in the last few months, as the public has become aware of illegal and widespread sharing of information. 

The system was used by the Johnson County Sheriff’s Office to track someone across the country who’d self-administered an abortion in Texas. Flock repeatedly claimed that this was inaccurate reporting, but materials recently obtained by EFF have affirmed that Johnson County was investigating that individual as part of a fetal death investigation, conducted at the request of her former abusive partner. They were not looking for her as part of a missing person search, as Flock said. 

In Illinois, the Secretary of State conducted an audit of Flock use within the state and found that the Flock Safety system was facilitating Customs and Border Protection access, in violation of state law. And in California, the Attorney General recently sued the City of El Cajon for using Flock to illegally share information across state lines.

Police departments are increasingly relying on third-party vendors for surveillance equipment and storage for the terabytes of information they’re gathering. Refusing the public access to this information undermines public records laws and the assurances the public has received when police departments set these powerful spying tools loose in their streets. While it’s great that these records remain public in Washington, communities around the country must be swift to reject similar attempts at blocking public access.

Beryl Lipton

EFFecting Change: This Title Was Written by a Human

3 months 3 weeks ago

Generative AI is like a Rorschach test for anxieties about technology, be they privacy, replacement of workers, bias and discrimination, surveillance, or intellectual property. Our panelists discuss how to address complex questions and risks in AI while protecting civil liberties and human rights online.

Join EFF Director of Policy and Advocacy Katharine Trendacosta, EFF Staff Attorney Tori Noble, Berkeley Center for Law & Technology Co-Director Pam Samuelson, and Icarus Salon Artist Şerife Wong for a live discussion with Q&A.

EFFecting Change Livestream Series:
This Title Was Written by a Human
Thursday, November 13th (New Date!)
10:00 AM - 11:00 AM Pacific
This event is LIVE and FREE!

Accessibility

This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.

Event Expectations

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Upcoming Events

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague that might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online. 

Recording

We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!

Melissa Srago