Four Actions You Can Take To Protect Digital Rights this International Women’s Day

This International Women’s Day, defend free speech, fight surveillance, and support innovation by calling on our elected politicians and private companies to uphold our most fundamental rights—both online and offline.

1. Pass the “My Body, My Data” Act

Privacy fears should never stand in the way of healthcare. That's why this common-sense federal bill, sponsored by U.S. Rep. Sara Jacobs, would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it would restrict them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for. The protected information includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature, or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. It also lets people take on companies that violate their privacy with a strong private right of action.

2. Ban Government Use of Face Recognition

Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for the faces of people of color, especially Black women, as well as trans and nonbinary people. Because of face recognition errors, a Black woman, Porcha Woodruff, was wrongfully arrested, and another, Lamya Robinson, was wrongfully kicked out of a roller rink.

Yet this technology is widely used by law enforcement for identifying suspects in criminal investigations, including to disparately surveil people of color. At the local, state, and federal level, people across the country are urging politicians to ban the government’s use of face surveillance because it is inherently invasive, discriminatory, and dangerous. Many U.S. cities have done so, including San Francisco and Boston. Now is our chance to end the federal government’s use of this spying technology. 

3. Tell Congress: Don’t Outlaw Encrypted Apps

Advocates of women's equality often face surveillance and repression from powerful interests. That's why they need strong end-to-end encryption. But if the so-called “STOP CSAM Act” passes, it would undermine digital security for all internet users, impacting private messaging and email app providers, social media platforms, cloud storage providers, and many other internet intermediaries and online services. Free speech for women’s rights advocates would also be at risk. STOP CSAM would additionally create a carveout in Section 230, the law that protects our online speech, exposing providers to civil lawsuits for merely hosting a platform where part of the illegal conduct occurred. Tell Congress: don't pass this law that would undermine security and free speech online, two critical elements for fighting for equality for all genders.

4. Tell Facebook: Stop Silencing Palestine

Since Hamas’ attack on Israel on October 7, Meta’s biased moderation tools and practices, as well as policies on violence and incitement and on dangerous organizations and individuals (DOI), have led to Palestinian content and accounts being removed and banned at an unprecedented scale. As Palestinians and their supporters have taken to social platforms to share images and posts about the situation in the Gaza Strip, some have noticed their content suddenly disappear, or had their posts flagged for breaches of the platforms’ terms of use. In some cases, their accounts have been suspended, and in others features such as liking and commenting have been restricted.

This has an exacerbated impact on the most at-risk groups in Gaza, such as those who are pregnant or need reproductive healthcare support, as sharing information online is both an avenue for communicating the reality on the ground to the world and a way of getting vital information to those who need it most.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Voices You Should Hear this International Women’s Day
Paige Collings

Four Infosec Tools for Resistance this International Women’s Day 

While online violence is alarmingly common globally, women are often more likely to be the target of mass online attacks, nonconsensual leaks of sensitive information and content, and other forms of online violence. 

This International Women’s Day, visit EFF’s Surveillance Self-Defense (SSD) to learn how to defend yourself and your friends from surveillance. In addition to tutorials for installing and using security-friendly software, SSD walks you through concepts like making a security plan, the importance of strong passwords, and protecting metadata.

1. Make Your Own Security Plan

This IWD, learn what a security plan looks like and how you can build one. Trying to protect your online data—like pictures, private messages, or documents—from everything all the time is impractical and exhausting. But, have no fear! Security is a process, and through thoughtful planning, you can put together a plan that’s best for you. Security isn’t just about the tools you use or the software you download. It begins with understanding the unique threats you face and how you can counter those threats. 

2. Protect Yourself on Social Networks

Depending on your circumstances, you may need to protect yourself against the social network itself, against other users of the site, or both. Social networks are among the most popular websites on the internet. Facebook, TikTok, and Instagram each have over a billion users. Social networks were generally built on the idea of sharing posts, photographs, and personal information. They have also become forums for organizing and speaking. Any of these activities can rely on privacy and pseudonymity. Visit our SSD guide to learn how to protect yourself.

3. Tips for Attending Protests

Keep yourself, your devices, and your community safe while you make your voice heard. Now, more than ever, people must be able to hold those in power accountable and inspire others through the act of protest. Protecting your electronic devices and digital assets before, during, and after a protest is vital to keeping yourself and your information safe, as well as getting your message out. Theft, damage, confiscation, or forced deletion of media can disrupt your ability to publish your experiences, and those engaging in protest may be subject to search or arrest, or have their movements and associations surveilled. 

4. Communicate Securely with Signal or WhatsApp

Everything you say in a chat app should be private, viewable by only you and the person you're talking with. But that's not how all chats or DMs work. Most of those communication tools aren't end-to-end encrypted, and that means that the company that runs the software could view your chats, or hand over transcripts to law enforcement. That's why it's best to use a chat app like Signal any time you can. Signal uses end-to-end encryption, which means that nobody, not even Signal, can see the contents of your chats. Of course, you can't necessarily force everyone you know to use the communication tool of your choice, but thankfully other popular tools, like Apple's Messages, WhatsApp, and more recently Facebook's Messenger, all use end-to-end encryption too, as long as you're communicating with others on those same platforms. The more people who use these tools, even for innocuous conversations, the better.

On International Women’s Day and every day, stay safe out there! Surveillance self-defense can help.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day
Paige Collings

Four Reasons to Protect the Internet this International Women’s Day

Today is International Women’s Day, a day celebrating the achievements of women globally but also a day marking a call to action for accelerating equality and improving the lives of women the world over. 

The internet is a vital tool for women everywhere—provided they have access and are able to use it freely. Here are four reasons why we’re working to protect the free and open internet for women and everyone.

1. The Fight For Reproductive Privacy and Information Access Is Not Over

Data privacy, free expression, and freedom from surveillance intersect with the broader fight for reproductive justice and safe access to abortion. Like so many other aspects of managing our healthcare, these issues are fundamentally tied to our digital lives. With the decision of Dobbs v. Jackson to overturn the protections that Roe v. Wade offered for people seeking abortion healthcare in the United States, what was benign data before is now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. The reversal of Roe created many new dangers for people seeking healthcare. EFF is working hard to protect your rights in two main areas: 1) your data privacy and security, and 2) your online right to free speech.

2. Governments Continue to Cut Internet Access to Quell Political Dissidence   

The internet is an essential service that enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. Governments are very aware of their power to cut off access to this crucial lifeline, and frequently undertake targeted initiatives to shut down civilian access to the internet. In Iran, people have suffered Internet and social media blackouts on and off for nearly two years, following an activist movement rising up after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response, the Iranian government rushed to limit the visibility of the injustice. Social media has been banned in Iran, and intermittent shutdowns of the entire population's Internet access have cost the country millions, all in an effort to control the flow of information and quell political dissidence.

3. People Need to Know When They Are Being Stalked Through Tracking Tech 

At EFF, we’ve been sounding the alarm about the way physical trackers like AirTags and Tiles can be slipped into a target’s bag or car, allowing stalkers and abusers unprecedented access to a person’s location without their knowledge. We’ve also been calling attention to stalkerware, commercially-available apps that are designed to be covertly installed on another person’s device for the purpose of monitoring their activity without their knowledge or consent. This is a huge threat to survivors of domestic abuse, as stalkers can track their locations and access sensitive information like passwords and documents. For example, Imminent Monitor, once installed on a victim’s computer, could turn on their webcam and microphone, allow perpetrators to view their documents, photographs, and other files, and record all keystrokes entered. Everyone involved in these industries has a responsibility to build safeguards for people.

4. LGBTQ+ Rights Online Are Being Attacked 

An increase in anti-LGBTQ+ intolerance is harming individuals and communities both online and offline across the globe. Several countries are introducing explicitly anti-LGBTQ+ initiatives to restrict freedom of expression and privacy, which is in turn fueling offline intolerance against LGBTQ+ people. Across the United States, a growing number of states have prohibited transgender youths from obtaining gender-affirming health care, and some have restricted access for transgender adults. That’s why we’ve worked to pass data sanctuary laws in pro-LGBTQ+ states to shield health records from disclosure to anti-LGBTQ+ states.

The problem is global. In Jordan, the new Cybercrime Law of 2023 restricts encryption and anonymity in digital communications. And in Ghana, the country’s Parliament just voted to pass the draconian Family Values Bill, which introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as those who promote the rights of gay, lesbian or other non-conventional sexual or gender identities. EFF is working to expose and resist laws like these, and we hope you’ll join us!

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Infosec Tools for Resistance this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day
Paige Collings

The Atlas of Surveillance Removes Ring, Adds Third-Party Investigative Platforms

Running the Atlas of Surveillance, our project to map and inventory police surveillance across the United States, means experiencing emotional extremes.

Whenever we announce that we've added new data points to the Atlas, it comes with a great sense of satisfaction. That's because it almost always means that we're hundreds or even thousands of steps closer to achieving what only a few years ago would've seemed impossible: comprehensively documenting the surveillance state through our partnership with students at the University of Nevada, Reno Reynolds School of Journalism.

At the same time, it's depressing as hell. That's because it also reflects how quickly and dangerously surveillance technology is metastasizing.

We have the exact opposite feeling when we remove items from the Atlas of Surveillance. It's a little sad to see our numbers drop, but at the same time that change in data usually means that a city or county has eliminated a surveillance program.

That brings us to the biggest change in the Atlas since our launch in 2018. This week, we removed 2,530 data points: an entire category of surveillance. With the announcement from Amazon that its home surveillance company Ring will no longer facilitate warrantless requests for consumer video footage, we've decided to sunset that particular dataset.

While law enforcement agencies still maintain accounts on Ring's Neighbors social network, it seems to serve as a communications tool, a function on par with services like Nixle and Citizen, which we currently don't capture in the Atlas. That's not to say law enforcement won't be gathering footage from Ring cameras: they will, through legal process or by directly asking residents to give them access via the Fusus platform. But that type of surveillance doesn't result from merely having a Neighbors account (agencies without accounts can use these methods to obtain footage), which was what our data documented. You can still find out which agencies are maintaining camera registries through the Atlas. 

Ring's decision was a huge victory – and the exact outcome EFF and other civil liberties groups were hoping for. It also has opened up our capacity to track other surveillance technologies growing in use by law enforcement. If we were going to remove a category, we decided we should add one too.

Atlas of Surveillance users will now see a new type of technology: Third-Party Investigative Platforms, or TPIPs. Common TPIP products include Thomson Reuters CLEAR, LexisNexis Accurint Virtual Crime Center, TransUnion TLOxp, and SoundThinking CrimeTracer (formerly Coplink X from Forensic Logic). These are technologies we've been watching for a while but have struggled to categorize and define. Here's the definition we've come up with:

Third-Party Investigative Platforms are cloud-based software systems that law enforcement agencies subscribe to in order to access, share, mine, and analyze various sources of investigative data. Some of the data the agencies upload themselves, but the systems also provide access to data from other law enforcement, as well as from commercial sources and data brokers. Many products offer AI features, such as pattern identification, face recognition, and predictive analytics. Some agencies employ multiple TPIPs.

We are calling this new category a beta feature in the Atlas, since we are still figuring out how best to research and compile this data nationwide. You'll find fairly comprehensive data on the use of CrimeTracer in Tennessee and Massachusetts, because both states provide the software to local law enforcement agencies throughout the state. Similarly, we've got a large dataset for the use of the Accurint Virtual Crime Center in Colorado, due to a statewide contract. (Big thanks to Prof. Ran Duan's Data Journalism students for working with us to compile those lists!) We've also added more than 60 other agencies around the country, and we expect that dataset to grow as we hone our research methods.

If you've got information on the use of TPIPs in your area, don't hesitate to reach out. You can email us at aos@eff.org, submit a tip through our online form, or file a public records request using the template that EFF and our students have developed to reveal the use of these platforms. 

Dave Maass

Join us for EFF's 8th Annual Tech Trivia Night!

Join us in San Francisco on May 9th for EFF's 8th annual Tech Trivia Night! Explore the obscure minutiae of digital security, online rights, and internet culture.

Enjoy delicious tacos, churros, and complimentary adult beverages and soft drinks as you and your team battle through rounds of questions—and cutthroat live judging!—to see who will take home the coveted 1st, 2nd, and 3rd place trophies and EFF swag!

Register Now

$45 for CURRENT EFF Members • $55 for General Admission

Thursday, May 9th, 2024 at Public Works from 6 PM to 10 PM
This event is 21+. Please remember to bring ID and a mask.

Thanks to EFF's Luminary Organizational Members DuckDuckGo, No Starch Press, and the Hering Foundation for their year-round support of EFF's mission.

Fighting for first place at EFF’s Tech Trivia Night helps us fight for your rights online! Sponsor one of our annual events and join the movement for digital privacy, free speech, and innovation. Please contact tierney@eff.org for more information.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Melissa Srago

Victory! EFF Helps Resist Unlawful Warrant and Gag Order Issued to Independent News Outlet

Over the past month, the independent news outlet Indybay has quietly fought off an unlawful search warrant and gag order served by the San Francisco Police Department. Today, a court lifted the gag order and confirmed the warrant is void. The police also promised the court not to seek another warrant directed at Indybay in its investigation.

Nevertheless, Indybay was unconstitutionally gagged from speaking about the warrant for more than a month. And the SFPD once again violated the law despite past assurances that it was putting safeguards in place to prevent such violations.

EFF provided pro bono legal representation to Indybay throughout the process.

Indybay’s experience highlights a worrying police tactic of demanding unpublished source material from journalists, in violation of clearly established shield laws. Warrants like the one issued by the police invade press autonomy, chill news gathering, and discourage sources from contributing. While this is a victory, Indybay was still gagged from speaking about the warrant, and it would have had to pay thousands of dollars in legal fees to fight the warrant without pro bono counsel. Other small news organizations might not be so lucky. 

It started on January 18, 2024, when an unknown member of the public published a story on Indybay’s unique community-sourced newswire, which allows anyone to publish news and source material on the website. The author claimed credit for smashing windows at the San Francisco Police Credit Union.

On January 24, police sought and obtained a search warrant that required Indybay to turn over any text messages, online identifiers like IP addresses, or other unpublished information that would help reveal the author of the story. The warrant also ordered Indybay not to speak about the warrant for 90 days. With the help of EFF, Indybay responded that the search warrant was illegal under both California and federal law and requested that the SFPD formally withdraw it. After several more requests and shortly before the deadline to comply with the search warrant, the police agreed not to pursue the warrant further “at this time.” Under California law, the warrant became void when it was not executed within 10 days, but the gag order remained in place.

Indybay went to court to confirm the warrant would not be renewed and to lift the gag order. It argued it was protected by California and federal shield laws that make it all but impossible for law enforcement to use a search warrant to obtain unpublished source material from a news outlet. California law, Penal Code § 1524(g), in particular, mandates that “no warrant shall issue” for that information. The federal Privacy Protection Act has some exceptions, but they clearly did not apply in this situation. Nontraditional and independent news outlets like Indybay are covered by these laws (Indybay fought this same fight more than a decade ago when one of its photographers successfully quashed a search warrant). And when attempting to unmask a source, an IP address can sometimes be as revealing as a reporter’s notebook. In a previous case, EFF established that IP addresses are among the types of unpublished journalistic information typically protected from forced disclosure by law.

In addition, Indybay argued that the gag order was an unconstitutional content-based prior restraint on speech—noting that the government did not have a compelling interest in hiding unlawful investigative techniques.

Rather than fight the case, the police conceded the warrant was void, promised not to seek another search warrant for Indybay’s information during the investigation, and agreed to lift the gag order. A San Francisco Superior Court Judge signed an order confirming that.

That this happened at all is especially concerning since the SFPD had agreed to institute safeguards following its illegal execution of a search warrant against freelance journalist Bryan Carmody in 2019. In settling a lawsuit brought by Carmody, the SFPD agreed to ensure all its employees were aware of its policies concerning warrants to journalists. As a result, the department instituted internal guidance and procedures, not all of which appear to have been followed with Indybay.

Moreover, the search warrant and gag order should never have been signed by the court, given that the warrant was obviously directed at a news organization. We call on the court and the SFPD to meet with those representing journalists to make sure that we don't have to deal with another unconstitutional gag order and search warrant in another few years.

The San Francisco Police Department's public statement on this case is incomplete. It leaves out the fact that Indybay was gagged for more than a month and that it was only Indybay's continuous resistance that prevented the police from acting on the warrant. It also does not mention whether the police department's internal policies were followed in this case. For one thing, this type of warrant requires approval from the chief of police before it is sought, not after. 

Read more here: 

Stipulated Order

Motion to Quash

Search Warrant

Trujillo Declaration

Burdett Declaration

SFPD Press Release

Mario Trujillo

Should Caddy and Traefik Replace Certbot?

Can free and open source software projects like Caddy and Traefik eventually replace EFF’s Certbot? Although Certbot continues to be developed, we think tools like these help offer a promising path forward in the further development of a secure and encrypted web. For some users, tools like these can replace Certbot completely. 

We started development on Certbot in the mid-2010s with the goal of making it as easy as possible for website operators to offer HTTPS. To accomplish this, we made Certbot interact as well as it could with existing web servers like Apache and Nginx without requiring any changes on their end. Unfortunately, this approach of using an external tool to provide functionality beyond what the server was originally designed for presents several challenges. With the help of open source libraries and hundreds of contributors from around the world, we designed Certbot to parse Apache and Nginx configuration files and modify them as needed to set up HTTPS. Certbot interacted with these web servers using the same command line tools as a human user, and then waited an estimated period of time until the server had (probably) finished doing what we asked it to.
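In practice, all of that machinery is driven by a single command. As a rough sketch (the domains are placeholders, and exact flags vary by setup):

    # Ask Certbot to obtain a certificate and modify the Nginx
    # configuration to enable HTTPS for the listed domains:
    sudo certbot --nginx -d example.com -d www.example.com

    # Do a dry run of renewal to confirm automatic renewal will work:
    sudo certbot renew --dry-run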

All of this worked remarkably well. Today, Certbot is used to maintain HTTPS for over 30 million domain names and it continues to be one of the most popular ways for people to interact with Let’s Encrypt, a free certificate authority, which has been hugely successful by many metrics. Despite this, the ease of enabling HTTPS remains hindered by the need for people to run Certbot in addition to their web server. 

That's where software like Caddy and Traefik is different. These servers are designed with easy HTTPS automation in mind; Caddy even enables HTTPS by default. Both implement the ACME protocol internally, allowing them to integrate with services like Let’s Encrypt to automatically and regularly obtain the certificates needed to offer HTTPS. Since this support is built into the server, it completely avoids problems that Certbot sometimes has as an external tool, such as not parsing configuration files in the same way as the software it's trying to configure. Most importantly, less effort is required from a website operator to turn on HTTPS, further lowering the barrier to entry and making the internet more secure for everyone.
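To give a sense of how little configuration the integrated approach needs, here is a minimal Caddyfile sketch; the domain and path are placeholders for your own:

    # Caddyfile: Caddy serves this site over HTTPS automatically,
    # obtaining and renewing certificates through the ACME protocol.
    example.com {
        root * /var/www/html
        file_server
    }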

Both Caddy and Traefik are written in Go, a memory safe programming language. The Apache and Nginx web servers that Certbot interacts with were written in C, which is not memory safe. This may seem like a minor technical detail, but it’s not. A memory safe programming language is one that systematically prevents software written in it from having certain types of memory access errors which can occur in other programming languages. Studies have found that these memory safety errors are responsible for the majority of security vulnerabilities, leading to a growing push for the development of memory safe software. By adopting software like Caddy or Traefik, you’re able to proactively eliminate an entire class of common security vulnerabilities from that part of your system. 
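To make that distinction concrete, here is a small, self-contained Go example (illustrative only, not code from Caddy or Traefik) showing how Go's runtime bounds-checking turns a classic C-style buffer overflow into a recoverable error instead of silent memory corruption:

    package main

    import "fmt"

    func main() {
        buf := make([]byte, 4) // a 4-byte buffer

        // In C, writing past the end of a buffer can silently corrupt
        // adjacent memory. Go checks every index at runtime and panics
        // instead, so the bug cannot become a memory-safety hole.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("caught out-of-bounds write:", r)
            }
        }()

        for i := 0; i <= 4; i++ { // off-by-one: index 4 is out of range
            buf[i] = 0xFF
        }
    }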

With these benefits and Certbot’s limitations, should tools like Caddy and Traefik replace Certbot? Yes, they probably should eventually. While EFF does not endorse any specific product or service, we think that software like this is part of a larger suite of tools that will eventually make Certbot no longer needed. The ecosystem will be better served by using integrated software, not external tools that try to configure old and hard-to-use ones. 

No single approach to securing traffic to a website will work for everyone. For example, many hosting providers now offer HTTPS, and this will almost certainly be easier than setting up any external software. If you run a website and previously used a tool like Certbot, though, consider whether software like Caddy or Traefik is a better fit for you. These tools have been around for years and have extensive user bases. You can use Caddy or Traefik as a TLS-terminating reverse proxy, or even use Caddy directly as your file server.
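For the reverse proxy case, here is a sketch of a Caddyfile that terminates TLS and forwards plain HTTP to a local backend (the domain and port are placeholders):

    # Caddyfile: TLS-terminating reverse proxy in front of an
    # application listening locally on port 8080.
    example.com {
        reverse_proxy localhost:8080
    }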

If Certbot continues to work best for you for some use cases, that's also okay. We plan to continue developing the project until the happy day comes when running an HTTPS site is so simple that Certbot is no longer needed. Until that day, if you do continue using Certbot, please consider donating to EFF so that we’re able to continue supporting the project.

Brad Warren

Privacy First and Competition

“Privacy First” is a simple, powerful idea: seeing as so many of today’s technological problems are also privacy problems, why don’t we fix privacy first?

Whether you’re worried about kids’ mental health, or tech’s relationship to journalism, or spying by foreign adversaries, or reproductive rights, or AI deepfakes, or nonconsensual pornography, you’re worried about a problem rooted in the primitive, deplorable state of American privacy law.

It’s really impossible to overstate how bad the state of federal privacy law is in America. The last time the USA got a big, muscular, broadly applicable new consumer privacy law, the year was 1988, and the law was targeted at video-store clerks who leaked your VHS rental history.

It’s been a minute. America is long overdue for a strong, comprehensive privacy law.

A new privacy law would help us with all those issues, and more. It would level the playing field between giants with troves of user data and startups who want to build something better. Such a law would keep competition from becoming a race to the bottom on user privacy.

Importantly, a strong privacy law will go a long way to improving the dismal state of competition in America’s ossified and decaying tech sector.

Take the tech sector’s relationship to the news media. The ad-tech duopoly has rigged the advertising market and takes $0.51 out of every advertising dollar. Without their vast troves of nonconsensually harvested personal data, Meta and Google wouldn’t be able to misappropriate billions from the publishers. Banning surveillance advertising wouldn’t just be good for our privacy - it would give publishers leverage to shift those billions back onto their own balance sheets. 

Undoing market concentration will require interoperability so that users can move from dominant services to new, innovative rivals without losing their data and relationships. The biggest challenge to interoperability? Privacy. Every time a user moves from one service to another, the resulting data-flows create risks for those users and their friends, families, customers and other social connections. Congress knows this, which is why every proposed interoperability law incorporates its own little privacy law. Privacy shouldn’t be an afterthought in a tech regulation. A standalone privacy law would give lawmakers the freedom to promote interoperability without having to work out a new privacy system for each effort.

That’s also true of Right to Repair laws: these laws are routinely opposed by tech monopolists who insist that giving Americans the right to choose their own repair shop or parts exposes them to privacy risks. It’s true that our devices harbor vast troves of sensitive information - but that doesn’t mean we should let Big Tech (or Big Car) monopolize repair. Instead, we should require everyone - both original manufacturers and independent repair shops - to honor your privacy.

America’s legal privacy vacuum is largely the result of the commercial surveillance industry’s lobbying power. Increasing competition in the tech sector won’t just help our privacy: it’ll also weaken tech’s lobbying power, which is a function of the vast profits that can be extracted in the absence of “wasteful competition” and the ease with which a concentrated sector can converge on a common lobbying position. 

That’s why EFF has urged the FTC and DOJ to consider privacy impacts when scrutinizing proposed mergers: not just to protect internet users from the harms of surveillance business models, but to protect democracy from the corrupting influence of surveillance cartels.

Privacy isn’t dead. Far from it. For a quarter of a century, would-be tech monopolists have been insisting that we have no privacy and telling us to “get over it.” The vast majority of the public wants privacy and will take it if offered, and grab it if it’s not.  

Whenever someone tells you that privacy is dead, they’re just wishcasting. What they mean is: “If I can convince you privacy is dead, I can make more money at your expense.”

Monopolists want us to believe that their power over our lives is inevitable and unchangeable, just as the surveillance industry banks on convincing you that the fight for privacy was and always will be a lost cause. But we once had a better internet, and we can get a better internet again. The fight for that better internet starts with privacy, a battle that we all want to win.

Cory Doctorow

European Court of Human Rights Confirms: Weakening Encryption Violates Fundamental Rights

In a milestone judgment—Podchasov v. Russia—the European Court of Human Rights (ECtHR) has ruled that the weakening of encryption can lead to general and indiscriminate surveillance of the communications of all users and violates the human right to privacy.

In 2017, the landscape of digital communication in Russia faced a pivotal moment when the government required Telegram Messenger LLP and other “internet communication” providers to store all communication data—and content—for specified durations. These providers were also required to supply law enforcement authorities with users’ data, the content of their communications, as well as any information necessary to decrypt user messages. The FSB (the Russian Federal Security Service) subsequently ordered Telegram to assist in decrypting the communications of specific users suspected of engaging in terrorism-related activities.

Telegram opposed this order on the grounds that it would create a backdoor that would undermine encryption for all of its users. As a result, Russian courts fined Telegram and ordered the blocking of its app within the country. The controversy extended beyond Telegram, drawing in numerous users who contested the disclosure orders in Russian courts. A Russian citizen, Mr Podchasov, escalated the issue to the European Court of Human Rights (ECtHR), arguing that forced decryption of user communications would infringe on the right to private life under Article 8 of the European Convention on Human Rights (ECHR), which reads as follows:

Everyone has the right to respect for his private and family life, his home and his correspondence (Article 8 ECHR, right to respect for private and family life, home and correspondence) 

EFF has always stood against government intrusion into the private lives of users and advocated for strong privacy guarantees, including the right to confidential communication. Encryption not only safeguards users’ privacy but also protects their right to freedom of expression under international human rights law.

In a great victory for privacy advocates, the ECtHR agreed. The Court found that the requirement of continuous, blanket storage of private user data interferes with the right to privacy under the Convention, emphasizing that the possibility for national authorities to access these data is a crucial factor in determining a human rights violation [at 53]. The Court identified the inherent risks of arbitrary government action in secret surveillance in the present case and found again—following its stance in Roman Zakharov v. Russia—that the relevant legislation failed to live up to the quality-of-law standards and lacked adequate and effective safeguards against misuse [75]. Turning to a potential justification for such interference, the ECtHR emphasized the need for a careful balancing test that considers the use of modern data storage and processing technologies and weighs the potential benefits against important private-life interests [62-64].

In addressing the State mandate for service providers to submit decryption keys to security services, the court's deliberations culminated in the following key findings [76-80]:

  1. Encryption is important for protecting the right to private life and other fundamental rights, such as freedom of expression: The ECtHR emphasized the importance of encryption technologies for safeguarding the privacy of online communications. Encryption safeguards and protects the right to private life generally while also supporting the exercise of other fundamental rights, such as freedom of expression.
  2. Encryption as a shield against abuses: The Court emphasized the role of encryption in providing a robust defense against unlawful access, noting that it generally “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.” The Court held that this must be given due consideration when assessing measures which could weaken encryption.
  3. Orders to decrypt communications weaken encryption for all users: The ECtHR established that decrypting Telegram's "secret chats" would require weakening encryption for all users. Taking note again of the dangers of restricting encryption described by many experts in the field, the Court held that backdoors could be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications.
  4. Alternatives to decryption: The ECtHR took note of a range of alternative solutions to compelled decryption that would not weaken the protective mechanisms, such as forensics on seized devices and better-resourced policing.

In light of these findings, the Court held that the mandate to decrypt end-to-end encrypted communications risks weakening the encryption mechanism for all users, which was disproportionate to the legitimate aims pursued.

In summary [80], the Court concluded that the retention and unrestricted state access to internet communication data, coupled with decryption requirements, cannot be regarded as necessary in a democratic society, and are thus unlawful. It emphasized that direct access by authorities to user data on a generalized basis and without sufficient safeguards impairs the very essence of the right to private life under the Convention. The Court also highlighted briefs filed by the European Information Society Institute (EISI) and Privacy International, which provided insight into the workings of end-to-end encryption and explained why mandated backdoors represent an illegal and disproportionate measure.

Impact of the ECtHR ruling on current policy developments 

The ruling is a landmark judgment, which will likely draw new normative lines about human rights standards for private and confidential communication. We are currently supporting Telegram in its parallel complaint to the ECtHR, contending that blocking its app infringes upon fundamental rights. As part of a collaborative effort of international human rights and media freedom organizations, we have submitted a third-party intervention to the ECtHR, arguing that blocking an entire app is a serious and disproportionate restriction on freedom of expression. That case is still pending.

The Podchasov ruling also directly challenges ongoing efforts in Europe to weaken encryption to allow access and scanning of our private messages and pictures.

For example, the UK's controversial Online Safety Act creates the risk that online platforms will use software to search all users’ photos, files, and messages, scanning for illegal content. We recently submitted comments to the relevant UK regulator, Ofcom, urging it to avoid any weakening of encryption when this law becomes operational.

In the EU, we are concerned that the European Commission’s message-scanning proposal (CSAR) would be a disaster for online privacy. It would allow EU authorities to compel online services to scan users’ private messages, compare users’ photos against law enforcement databases, or use error-prone AI algorithms to detect criminal behavior. Such detection measures will inevitably lead to dangerous and unreliable client-side scanning practices, undermining the essence of end-to-end encryption. As the ECtHR deems general user scanning disproportionate, specifically criticizing measures that weaken existing privacy standards, forcing platforms like WhatsApp or Signal to weaken security by inserting a vulnerability into all users’ devices to enable message scanning must be considered unlawful.

The EU regulation proposal is likely to be followed by other proposals to grant law enforcement access to encrypted data and communications. An EU high level expert group on ‘access to data for effective law enforcement’ is expected to make policy recommendations to the next EU Commission in mid-2024. 

We call on lawmakers to take the Court of Human Rights ruling seriously: blanket and indiscriminate scanning of user communication and the general weakening of encryption for users is unacceptable and unlawful. 

Christoph Schmon

Voting No on Prop E Is Easy and Important for San Francisco

San Francisco’s ballot initiative Proposition E is a dangerous and deceptive measure that threatens our privacy, safety, and democratic ideals. It would give the police more power to surveil, chase, and harm. It would allow the police to secretly acquire and use unproven surveillance technologies for a year or more without oversight, eliminating the hard-won protections backed by a majority of San Franciscans that are currently in place. Prop E is not a solution to the city’s challenges, but rather a threat to our rights and freedoms. 

Don’t be fooled by the misleading arguments of Prop E's supporters. A group of tech billionaires have contributed a small fortune to convince San Francisco voters that they would be safer if surveilled. They want us to believe that Prop E will make us safer and more secure, but the truth is that it will do the opposite. Prop E will allow the police to use any surveillance technology they want for up to a year without considering whether it works as promised—or at all—or whether it presents risks to residents’ privacy or safety. Police only have to present a use policy after a year of free and unaccountable use, and absent a majority vote of the Board of Supervisors rejecting the policy, this unaccountable use could continue indefinitely. Worse still, some technologies, like surveillance cameras and drones, would be exempt from oversight indefinitely, putting the unilateral decision about when, where, and how to deploy such technology in the hands of the SFPD.

We want something different for our city. In 2019, with the support of a wide range of community members and civil society groups, including EFF, San Francisco’s Board of Supervisors took a historic step forward by passing a groundbreaking surveillance transparency and accountability ordinance through a 10-1 vote. The law requires that before a city department, including the police, acquires or uses a surveillance technology, the department must present a use policy to the Board of Supervisors, which then considers the proposal in a public process that offers opportunity for public comment. This process respects privacy, dignity, and safety and empowers residents to make their voices heard about the potential impacts and risks.

Despite what Prop E proponents would have you believe, the city’s surveillance ordinance has not stopped police from acquiring new technologies. In fact, they have gained access to broad networks of live-feed cameras. Current law helps ensure that the police follow reasonable guidelines on using technology and mitigating potentially disparate harms. Prop E would gut police accountability from this law and return decision-making about how we are surveilled to closed spaces where unproven and unvetted vendor promises rule the narrative. 

As San Francisco residents, we must stand up for ourselves and our city and vote No on Prop E. Voting No on Prop E is not only an easy choice, but also a necessary one. It is a choice that reflects our values and vision for San Francisco. It is a choice that shows that we will not let a million-dollar campaign of fear drive us to sacrifice our rights. Voting No on Prop E is a choice that proves we are unwilling to accept anything less than what we deserve: privacy, safety, and accountability.

March 5 is election day. Make your voice heard. Vote No on Prop E.  

Nathan Sheard

Celebrating 15 Years of Surveillance Self-Defense

On March 3rd, 2009, we launched Surveillance Self-Defense (SSD). At the time, we pitched it as "an online how-to guide for protecting your private data against government spying." In the last decade and a half, hundreds of people have contributed to SSD, over 20 million people have read it, and the content has nearly doubled in length from 40,000 words to almost 80,000. SSD has served as inspiration for many other guides focused on keeping specific populations safe, and those guides have in turn affected how we've approached SSD. A lot has changed in the world over the last 15 years, and SSD has changed with it.

The Year Is 2009

Let's take a minute to travel back in time to the initial announcement of SSD. Launched with the support of the Open Society Institute, and written entirely by just a few people, we detailed exactly what our intentions were with SSD at the start:

EFF created the Surveillance Self-Defense site to educate Americans about the law and technology of communications surveillance and computer searches and seizures, and to provide the information and tools necessary to keep their private data out of the government's hands… The Surveillance Self-Defense project offers citizens a legal and technical toolkit with tips on how to defend themselves in case the government attempts to search, seize, subpoena or spy on their most private data.

SSD's design when it first launched in 2009.

To put this further into context, it's worth looking at where we were in 2009. Avatar was the top grossing movie of the year. Barack Obama was in his first term as president in the U.S. In a then-novel approach, Iranians turned to Twitter to organize protests. The NSA has a long history of spying on Americans, but we hadn't gotten to Jewel v. NSA or the Snowden revelations yet. And while the iPhone had been around for two years, it hadn't seen its first big privacy controversy yet (that would come in December of that year, but it'd be another year still before we hit the "your apps are watching you" stage).

Most importantly, in 2009 it was more complicated to keep your data secure than it is today. HTTPS wasn't common, using Tor required more technical know-how than it does nowadays, encrypted IMs were the fastest way to communicate securely, and full-disk encryption wasn't a common feature on smartphones. Even for computers, disk encryption required special software and knowledge to implement (not to mention time: solid-state drives were still extremely expensive in 2009, so most people still had spinning-disk hard drives, which took ages to encrypt and usually slowed down your computer significantly).

And thus, SSD in 2009 focused heavily on law enforcement and government access with its advice. Not long after the launch in 2009, in the midst of the Iranian uprising, we launched the international version, which focused on the concerns of individuals struggling to preserve their right to free expression in authoritarian regimes.

And that's where SSD stood, mostly as-is, for about six years. 

The Redesigns

In 2014, we redesigned and relaunched SSD with support from the Ford Foundation. The relaunch had at least 80 people involved in the writing, reviewing, design, and translation process. With the relaunch, there was also a shift in the mission as the threats expanded from just the government, to corporate and personal risks as well. From the press release:

"Everyone has something to protect, whether it's from the government or stalkers or data-miners," said EFF International Director Danny O'Brien. "Surveillance Self-Defense will help you think through your personal risk factors and concerns—is it an authoritarian government you need to worry about, or an ex-spouse, or your employer?—and guide you to appropriate tools and practices based on your specific situation."

2014 proved to be an effective year for a major update. After the murders of Michael Brown and Eric Garner, protestors hit the streets across the U.S., which made our protest guide particularly useful. There were also major security vulnerabilities that year, like Heartbleed, which caused all sorts of security issues for website operators and their visitors, and Shellshock, which opened up everything from servers to cameras to bug exploits, ushering in what felt like an endless stream of software updates on everything with a computer chip in it. And of course, there was still fallout from the Snowden leaks in 2013.

In 2018 we did another redesign, adding a new logo for SSD that came along with EFF's new design. This is more or less the design of the site today.

SSD's current design, which further clarifies what sections a guide is in, and expands the security scenarios.

Perhaps the most notable difference between this iteration of SSD and the years before is the lack of detailed reasoning explaining the need for its existence on the front page. No longer was it necessary to explain why we all need to practice surveillance self-defense. Online surveillance had gone mainstream.

Shifting Language Over the Years

As the years passed and the site was redesigned, we also shifted how we talked about security. In 2009 we wrote about security with terms like, "adversaries," "defensive technology," "threat models," and "assets." These were all common cybersecurity terms at the time, but made security sound like a military exercise, which often disenfranchised the very people who needed help. For example, in the later part of the 2010s, we reworked the idea of "threat modeling," when we published Your Security Plan. This was meant to be less intimidating and more inclusive of the various types of risks that people face.

The advice in SSD has changed over the years, too. Take passwords as an example, where in 2009 we said, "Although we recommend memorizing your passwords, we recognize you probably won't." First off, rude! Second off, maybe that could fly with the lower number of accounts we all had back in 2009, but nowadays nobody is going to remember hundreds of passwords. And regardless, that seems pretty dang impossible when paired with the final bit of advice, "You should change passwords every week, every month, or every year — it all depends on the threat, the risk, and the value of the asset, traded against usability and convenience."

Moving onto 2015, we phrased this same sentiment much differently, "Reusing passwords is an exceptionally bad security practice, because if an attacker gets hold of one password, she will often try using that password on various accounts belonging to the same person… Avoiding password reuse is a valuable security precaution, but you won't be able to remember all your passwords if each one is different. Fortunately, there are software tools to help with this—a password manager."

Well, that's much more polite!

Since then, we've toned that down even more: "Reusing passwords is a dangerous security practice. If someone gets ahold of your password—whether that's from a data breach, or wherever else—they can often gain access to any other account where you used that same password. The solution is to use unique passwords everywhere and take additional steps to secure your accounts when possible."

Security is an always-evolving process, and so is how we talk about it. But the more people we bring on board, the better it is for everyone. How we talk about surveillance self-defense will assuredly continue to adapt in the future.

Shifting Language(s) Over the Years

Initially in 2009, SSD was only available in English, and soon after launch, in Bulgarian. In the 2014 re-launch, we added Arabic and Spanish. Then added French, Thai, Vietnamese, and Urdu in 2015. Later that year, we added a handful of Amharic translations, too. This was accomplished through a web of people in dozens of countries who volunteered to translate and review everything. Many of these translations were done for highly specific reasons. For example, we had a Google Policy Fellow, Endalk Chala, who was part of the Zone 9 bloggers in Ethiopia. He translated everything into Amharic as he was fighting for his colleagues and friends who were imprisoned in Ethiopia on terrorism charges.

By 2019, we were translating most of SSD into at least 10 languages: Amharic, Arabic, Spanish, French, Russian, Turkish, Vietnamese, Brazilian Portuguese, Thai, and Urdu (as well as additional, externally-hosted community translations in Indonesian Bahasa, Burmese, Traditional Chinese, Igbo, Khmer, Swahili, Yoruba, and Twi).

Currently, we're focusing on getting the entirety of SSD re-translated into seven languages, then focusing our efforts on translating specific guides into other languages. 

Always Updating

Since 2009, we've done our best to review and update the guides in SSD. This has included minor changes to respond to news events, deprecating guides completely when they're no longer applicable to modern security plans, and massive rewrites when technology has changed.

The original version of SSD launched mostly as static text (we even offered a printer-friendly version); though updates and revisions did occur, they were not publicly tracked as clearly as they are today. In its early years, SSD was able to provide useful guidance across a number of important events, like Occupy Wall Street, before the major site redesign in 2014, which helped it become more useful for training activists, including around Ferguson and Standing Rock, amongst others. The ability to update SSD along with changing trends and needs has ensured it can always be useful as a resource.

That redesign also better facilitated the update process. The site became easier to navigate and use, and easier to update. For example, in 2017 we took on a round of guide audits in response to concerns following the 2016 election. In 2019 we continued that process with around seven major updates to SSD, and in 2020, we did five. We don't have great stats for 2021 and 2022, but in 2023 we managed 14 major updates or new guides. We're hoping to have the majority of SSD reviewed and revamped by the end of this year, with a handful of expansions along the way.

Which brings us to the future of SSD. We will continue updating, adapting, and adding to SSD in the coming years. It is often impossible to know what will be needed, but rest assured we'll be there to answer that whenever we can. As mentioned above, this includes getting more translations underway, and continuing to ensure that everything is accurate and up-to-date so SSD can remain one of the best repositories of security information available online.

We hope you’ll join EFF in celebrating 15 years of SSD!

Thorin Klosowski

Privacy Isn't Dead. Far From It. | EFFector 36.3

As we continue the journey of fighting for digital freedoms, it can be hard to keep up on the latest happenings. Thankfully, EFF has a guide to keep you in the loop! EFFector 36.3 is out now and covers the latest news, including recent changes to the Kids Online Safety Act (spoiler alert: IT'S STILL BAD), why we flew a plane over San Francisco, and the first episode of Season 5 of our award-winning "How to Fix the Internet" podcast!

You can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube.

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero

A Virtual Reality Tour of Surveillance Tech at the Border: A Conversation with Dave Maass of the Electronic Frontier Foundation

2 months ago

This interview is crossposted from The Markup, a nonprofit news organization that investigates technology and its impact on society.

By: Monique O. Madan, Investigative Reporter at The Markup

After reading my daily news stories amid his declining health, my grandfather made it a habit of traveling the world—all from his desk and wheelchair. When I went on trips, he always had strong opinions and recommendations for me, as if he’d already been there. “I've traveled to hundreds of countries," he would tell me. "It's called Google Earth. Today, I’m going to Armenia.” My Abuelo’s passion for teleporting via Google Street View has always been one of my fondest memories and has never left me. 

So naturally, when I found out that Dave Maass of the Electronic Frontier Foundation gave virtual reality tours of surveillance technology along the U.S.–Mexico border, I had to make it happen. I cover technology at the intersection of immigration, criminal justice, social justice and government accountability, and Maass’ tour aligns with my work as I investigate border surveillance. 

My journey began in a small, quiet conference room at the Homestead Cybrarium, a hybrid virtual public library where I checked out virtual reality gear. The moment I slid the headset onto my face and the tour started, I was transported to a beach in San Diego. An hour and a half later, I had traveled across 1,500 miles' worth of towns and deserts and ended up in Brownsville, Texas.

During that time, we looked at surveillance technology in 27 different cities on both sides of the border. Among the tech I saw were autonomous towers, aerostat blimps, sky towers, automated license plate readers, and border checkpoints.

After the excursion, I talked with Maass, a former journalist, about the experience. Our conversation has been edited for brevity and clarity.

Monique O. Madan: You began by dropping me in San Diego, California, and it was intense. Tell me why you chose the location to start this experience.

Dave Maass: So I typically start the tour in San Diego for two reasons. One is because it is the westernmost part of the border, so it's a natural place to start. But more importantly, it is such a stark contrast to be able to jump from one side to the other, from the San Diego side to the Tijuana side.

When you're in San Diego, you're in this very militarized park that's totally empty, with patrol vehicles and this very fierce-looking wall and a giant surveillance tower over your head. You can really get a sense of the scale.

And once you're used to that, I jump you to the other side of the wall. You're able to suddenly see how it's party time in Tijuana, how they painted the wall, and how there are restaurants and food stands and people playing on the beach and there are all these Instagram moments.

Credit: Electronic Frontier Foundation

Yet on the other side is the American militarized border, you know, essentially spying on everybody who's just going about their lives on the Mexican side.

It also serves as a way to show the power of VR. If there were no wall, you could walk that in a minute. But because of the border wall, you've got to go all the way to the border crossing, and then come all the way back. And we're talking, potentially, hours for you to be able to go that distance. 

Madan: I felt like I was in two different places, but it was really the same place, just feet away from each other. We saw remote video surveillance systems, relocatable ones. We saw integrated fixed towers, autonomous surveillance towers, sky towers, aerostat radar systems, and then covert automated license plate readers. How do you get the average person to digest what all these things really mean?

Maass: Some colleagues at EFF and I were looking at how we could use virtual reality to help people understand surveillance. We came up with a very basic game called “Spot the Surveillance,” where you could put on a headset and it puts you in one location with a 360-degree camera view. We took a photo of a corner in San Francisco that already had a lot of surveillance, but we also Photoshopped in other pieces of surveillance. The idea was for people to look around and try to find the surveillance.

When they found one, it would ping, and it would tell you what the technology could do. And we found that that helped people learn to look around their environment for these technologies and understand them. So it gave people a better sense of how surveillance exists in our environment than if they were shown a picture or a PowerPoint presentation that was like, “This is what a license plate reader looks like. This is what a drone looks like.”

That is why when we're on the southern border tour, there are certain places where I don't point the technology out to you. I ask you to look around and see if you can find it yourself.

Sometimes I start with one where it's overhead, because people are looking around. They're pointing to a radio tower, pointing to something else. It takes them a while before they actually look up in the sky and see there's this giant spy blimp over their head. But, yeah, one of the other ones is these license plate readers that are hidden in traffic cones. People don't notice them there because traffic cones are so ubiquitous along highways and streets that they don't actually think about them.

Madan: People have the impression that surveillance ops are only in militarized settings. Can you talk to me about whether that’s true?

Maass: Certainly there are towers in the middle of the desert. Certainly there are towers that are in remote or rural areas. But there are just so many that are in urban areas, from big cities to small towns.

Rather than just seeing a close-up picture of a tower, once you actually see one and are able to look at where the cameras are pointed, you start to see things like towers that can look into people's back windows, towers that can look into people's backyards, and whole communities that have a tower watching over their neighborhood all the time.

But so rarely does the conversation address the impact on the communities that live on both the U.S. and Mexican sides of the border, who are just there all the time trying to get by and have, you know, the normal dream of prospering and raising a family.

Madan: What does this mean from a privacy, human rights, and civil liberties standpoint? 

Maass: There’s not a lot of transparency around issues of technology. That is one of the major flaws, both for human rights and civil liberties, but it's also a flaw for those who believe that technology is going to address whatever amorphous problem they've identified or failed to identify with border security and migration. So it's hard to know when this is being abused and how.

But what we can say is that as [the government] is applying more artificial intelligence to its camera system, it's able to document the pattern of life of people who live along the border.

It may be capturing people and learning where they work and where they're worshiping or who they are associated with. So you can imagine that if you are somebody who lives in that community and if you're living in that community your whole life, the government may have, by the time you're 31 years old, your entire driving history on file that somebody can access at any time, with who knows what safeguards are in place.

But beyond all that, it really normalizes surveillance for a whole community.

There are a lot of psychological studies out there about how surveillance can affect people over time, affect their behavior, and affect their perceptions of a society. That's one of the other things I worry about: What kind of psychological trauma is surveillance causing for these communities over the long term, in ways that may not be immediately perceptible?

Madan: One of the most interesting parts of experiencing this tour via VR technology was being able to pause and observe every single detail at the border checkpoint.

Maass: Most people are just rolling through, and so you don't get to notice all of the different elements of a checkpoint. But because the Google Street View car went through, we can roll through it at our leisure and point out different things. I have a series of checkpoints that I go through with people, show them this is where the license plate reader is, this is where the scanner truck is, here's the first surveillance camera, here's the second surveillance camera. We can see the body-worn camera on this particular officer. Here's where people are searched. Here's where they're detained. Here's where their car is rolled through an X-ray machine.

Madan: So your team has been mapping border surveillance for a while. Tell us about that and how it fits into this experience.

Maass: We started mapping out the towers in 2022, but we had started researching and building a database of at least the number of surveillance towers by district in 2019.

I don't think it was apparent to anyone, until we started mapping these out, how concentrated towers are in populated areas. Maybe if you were in one of those populated areas, you knew about it, or maybe you didn't.

In the long haul, it may start to tell us a little bit more about border policy in general and whether any of these towers are having any kind of impact, and maybe we start to learn more about apprehensions and other kinds of data that we can connect to them.

Madan: If someone wanted to take a tour like this, if they wanted to hop on in VR and visit a few of these places, how can they do that? 

Maass: So if they have a VR headset, a Meta Quest 2 or newer, the Wander app is what you're going to use. You can just go into the app and position yourself somewhere on the border. Jump around a little bit, maybe five feet or so, and you can start seeing a surveillance tower.

If you don’t have a headset and want to do it in your browser, you can go to EFF’s map and click on a tower. You’ll see a Street View link when you scroll down. Or you can use those tower coordinates and then go to your VR headset and try to find it.

Madan: What are your thoughts about the Meta Quest headset, formerly known as the Oculus Rift, being created by Palmer Luckey, who also founded the company that made one of the towers on the tour?

Maass: There’s certainly some irony in using a technology that was championed by Palmer Luckey to shine light on another technology championed by Palmer Luckey. That's not the only tech irony, of course: Wander [the app used for the tour] also depends on products from Google and Meta, both of which continue to contribute to the rise of surveillance in society, so we're using their tools to investigate surveillance.

Madan: What's your biggest takeaway as the person giving this tour?

Maass: I am a researcher and educator, and an activist and communicator. To me, this is one of the most impactful ways that I can reach people and give them a meaningful experience about the border. 

I think that when people are consuming information about the border, they're just getting little snippets from a little particular area. You know, it's always a little place that they're getting a little sliver of what's going on. 

But when we're able to do this with VR, I'm able to take them everywhere. I'm able to take them to both sides of the border. We're able to see a whole lot, and they're able to come away by the end of it, feeling like they were there. Like your brain starts filling in the blanks. People get this experience that they wouldn't be able to get any other way.

Being able to linger over these spaces on my own time showed me just how much surveillance is truly embedded in people's daily lives. When I left the library, I found myself inspecting traffic cones for license plate readers. 

As I continue to investigate border surveillance, this experience showed me just how educational these tools can be for academia, research, and journalism.

Thanks for reading,
Monique
Investigative Reporter
The Markup

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

Dave Maass

Ghana's President Must Refuse to Sign the Anti-LGBTQ+ Bill

2 months 1 week ago

After three years of political discussions, MPs in Ghana's Parliament voted to pass the country’s draconian Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

President Nana Akufo-Addo must protect the human rights of all people in Ghana and refuse to provide assent to the bill.

This anti-LGBTQ+ legislation introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as for those who promote the rights of gay, lesbian, or other non-conventional sexual or gender identities. This would effectively ban all speech and activity, on and offline, that even remotely supports LGBTQ+ rights.

Ghanaian authorities could probe the social media accounts of anyone applying for a visa for evidence of pro-LGBTQ+ speech, or create lists of pro-LGBTQ+ supporters to be arrested upon entry. They could also require online platforms to suppress content about LGBTQ+ issues, regardless of where it was created.

Enforcing the bill would criminalize the activity of many major cultural and commercial institutions. If President Akufo-Addo does approve the bill, musicians, corporations, and other entities that openly support LGBTQ+ rights would be banned in Ghana.

Despite this direct threat to online freedom of expression, tech giants have yet to speak out publicly against the LGBTQ+ persecution in Ghana. Twitter opened its first African office in Accra in April 2021, citing Ghana as “a supporter of free speech, online freedom, and the Open Internet.” Adaora Ikenze, Facebook’s head of Public Policy in Anglophone West Africa, has said: “We want the millions of people in Ghana and around the world who use our services to be able to connect, share and express themselves freely and safely, and will continue to protect their ability to do that on our platforms.” Both companies have essentially dodged the question.

For many countries across Africa, and indeed the world, the codification of anti-LGBTQ+ discourses and beliefs can be traced back to colonial rule, and a recent CNN investigation from December 2023 found alleged links between the drafting of homophobic laws in Africa and a US nonprofit. The group denied those links, despite having hosted a political conference in Accra shortly before an early version of this bill was drafted.

Regardless of its origin, the past three years of political and social discussion have contributed to a decimation of LGBTQ+ rights in Ghana, and the decision by MPs in Ghana’s Parliament to pass this bill creates severe impacts not just for LGBTQ+ people in Ghana, but for the very principle of free expression online and off. President Nana Akufo-Addo must reject it.

Paige Collings

We Flew a Plane Over San Francisco to Fight Proposition E. Here's Why.

2 months 1 week ago

Proposition E, which San Franciscans will be asked to vote on in the March 5 election, is so dangerous that last weekend we chartered a plane to inform our neighbors about what the ballot measure does and urge them to vote NO on it. If you were in Dolores Park, Golden Gate Park, Chinatown, or anywhere in between on Saturday, there’s a chance you saw it, with a huge banner flying through the sky: “No Surveillance State! No on Prop E.”

Despite the fact that the San Francisco Chronicle has endorsed a NO vote on Prop E, and even quoted some police officers who don’t find its changes useful for keeping the public safe, proponents of Prop E have raised over $1 million to push this unnecessary, ill-thought-out, and downright dangerous ballot measure.

San Francisco, Say NOPE: Vote NO on Prop E on March 5

What Does Prop E Do?

Prop E is a haphazard mess of proposals that tries to capitalize on residents’ fear of crime in an attempt to gut commonsense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the civilian-staffed Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Prop E would also amend an existing law, passed in 2019, that protects San Franciscans from invasive, untested, or biased police surveillance technologies. Currently, if the SFPD wants to acquire a new technology, it must provide a detailed use policy to the democratically elected Board of Supervisors in a process that allows for public comment. The Board then votes on whether and how the police can use the technology.

Prop E guts these protective measures designed to bring communities into the conversation about public safety. If Prop E passes on March 5, then the SFPD can unilaterally use any technology they want for a full year without the Board’s approval, without publishing an official policy about how they’d use the technology, and without allowing community members to voice their concerns.

Why is Prop E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency, accountability, or democratic control.

San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, it would have to let the public know and put the matter to a vote before the city’s democratically elected governing body. Prop E would gut any meaningful democratic check on the police’s acquisition and use of surveillance technologies.

What Technology Would Prop E Allow Police to Use?

That's the thing—we don't know, and if Prop E passes, we may never know. Today, if the SFPD decides to use a piece of surveillance technology, there is a process for sharing that information with the public. With Prop E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. 

Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Prop E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology. San Francisco currently bans police from using remote-controlled robots to deploy deadly force, but if passed, Prop E would allow police to invest in technologies like taser-armed drones without any oversight or potential for elected officials to block the sale.

Don’t let police experiment on San Franciscans with dangerous, untested surveillance technologies. Say NOPE to a surveillance state. Vote NO on Prop E on March 5.

Matthew Guariglia

Sen. Wyden Exposes Data Brokers Selling Location Data to Anti-Abortion Groups That Target Abortion Seekers

2 months 1 week ago

This post was written by Jack Beck, an EFF legal intern

In a recent letter to the FTC and SEC, Sen. Ron Wyden (OR) details new information on data broker Near, which sold the location data of people seeking reproductive healthcare to anti-abortion groups. Near enabled these groups to send targeted ads promoting anti-abortion content to people who had visited Planned Parenthood and similar clinics.

In May 2023, the Wall Street Journal reported that Near was selling location data to anti-abortion groups. Specifically, the Journal found that the Veritas Society, a non-profit established by Wisconsin Right to Life, had hired ad agency Recrue Media. That agency purchased location data from Near and used it to target anti-abortion messaging at people who had sought reproductive healthcare.

The Veritas Society detailed the operation on its website (on a page that was taken down but saved by the Internet Archive) and stated that it delivered over 14 million ads to people who visited reproductive healthcare clinics. These ads appeared on Facebook, Instagram, Snapchat, and other social media for people who had sought reproductive healthcare.

When contacted by Sen. Wyden’s investigative team, Recrue staff admitted that the agency used Near’s website to literally “draw a line” around areas their client wanted them to target. They drew these lines around reproductive health care facilities across the country, using location data purchased from Near to target visitors to 600 different Planned Parenthood locations. Sen. Wyden’s team also confirmed with Near that, until the summer of 2022, no safeguards were in place to protect the data privacy of people visiting sensitive places.

Moreover, as Sen. Wyden explains in his letter, Near was selling data to the government, though it claimed on its website to be doing no such thing. As of October 18, 2023, Sen. Wyden’s investigation found Near was still selling location data harvested from Americans without their informed consent.

Near’s invasion of our privacy shows why Congress and the states must enact privacy-first legislation that limits how corporations collect and monetize our data. We also need privacy statutes that prevent the government from sidestepping the Fourth Amendment by purchasing location information, as Sen. Wyden has proposed. Even the government admits this is a problem. Furthermore, as Near’s misconduct illustrates, safeguards must be in place to protect people in sensitive locations from being tracked.

This isn’t the first time we’ve seen data brokers sell information that can reveal visits to abortion clinics. We need laws now to strengthen privacy protections for consumers. We thank Sen. Wyden for conducting this investigation. We also commend the FTC’s recent bar on a data broker selling sensitive location data. We hope this represents the start of a longstanding trend.

Adam Schwartz

EFF to D.C. Circuit: The U.S. Government’s Forced Disclosure of Visa Applicants’ Social Media Identifiers Harms Free Speech and Privacy

2 months 1 week ago

Special thanks to legal intern Alissa Johnson, who was the lead author of this post.

EFF recently filed an amicus brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court decision upholding a State Department rule that forces visa applicants to the United States to disclose their social media identifiers as part of the application process. If upheld, the district court ruling has severe implications for free speech and privacy, not just for visa applicants but also for the people in their social media networks: millions, if not billions, of people, given that the “Disclosure Requirement” applies to 14.7 million visa applicants annually.

Since 2019, visa applicants to the United States have been required to disclose social media identifiers they have used in the last five years to the U.S. government. Two U.S.-based organizations that regularly collaborate with documentary filmmakers around the world sued, challenging the policy on First Amendment and other grounds. A federal judge dismissed the case in August 2023, and plaintiffs filed an appeal, asserting that the district court erred in applying an overly deferential standard of review to plaintiffs’ First Amendment claims, among other arguments.

Our amicus brief lays out the privacy interests that visa applicants have in their public-facing social media profiles, the Disclosure Requirement’s chilling effect on the speech of both applicants and their social media connections, and the features of social media platforms like Facebook, Instagram, and X that reinforce these privacy interests and chilling effects.

Social media paints an alarmingly detailed picture of users’ personal lives, covering far more information than can be gleaned from a visa application. Although the Disclosure Requirement implicates only “public-facing” social media profiles, registering these profiles still exposes substantial personal information to the U.S. government because of the number of people impacted and the vast amounts of information shared on social media, both intentionally and unintentionally. Moreover, collecting data across social media platforms gives the U.S. government access to a wealth of information that may reveal more in combination than any individual question or post would alone. This risk is heightened further if government agencies use automated tools to conduct their review, which the State Department has not ruled out and which the Department of Homeland Security’s component Customs and Border Protection has already begun doing in its own social media monitoring program. Visa applicants may also unintentionally reveal personal information on their public-facing profiles, either due to difficulties in navigating default privacy settings within or across platforms, or through personal information posted by social media connections rather than the applicants themselves.

The Disclosure Requirement’s infringements on applicants’ privacy are further heightened because visa applicants are subject to social media monitoring not just during the visa vetting process, but even after they arrive in the United States. The policy also allows for public social media information to be stored in government databases for upwards of 100 years and shared with domestic and foreign government entities.  

Because of the Disclosure Requirement’s potential to expose vast amounts of applicants’ personal information, the policy chills First Amendment-protected speech of both the applicant themselves and their social media connections. The Disclosure Requirement allows the government to link pseudonymous accounts to real-world identities, impeding applicants’ ability to exist anonymously in online spaces. In response, a visa applicant might limit their speech, shut down pseudonymous accounts, or disengage from social media altogether. They might disassociate from others for fear that those connections could be offensive to the U.S. government. And their social media connections—including U.S. persons—might limit or sever online connections with friends, family, or colleagues who may be applying for a U.S. visa for fear of being under the government’s watchful eye.  

The Disclosure Requirement hamstrings the ability of visa applicants and their social media connections to freely engage in speech and association online. We hope that the D.C. Circuit reverses the district court’s ruling and remands the case for further proceedings.

Saira Hussain

Podcast Episode: Open Source Beats Authoritarianism

2 months 1 week ago

What if we thought about democracy as a kind of open-source social technology, in which everyone can see the how and why of policy making, and everyone’s concerns and preferences are elicited in a way that respects each person’s community, dignity, and importance?

[Embedded audio player. Privacy info: this embed will serve content from simplecast.com]

(You can also find this episode on the Internet Archive and on YouTube.)

This is what Audrey Tang has worked toward as Taiwan’s first Digital Minister, a position the free software programmer has held since 2016. She has taken the best of open source and open culture, and successfully used them to help reform her country’s government. Tang speaks with EFF’s Cindy Cohn and Jason Kelley about how Taiwan has shown that openness not only works but can outshine more authoritarian competitors, whose governments often lock up data.

In this episode, you’ll learn about:

  • Using technology including artificial intelligence to help surface our areas of agreement, rather than to identify and exacerbate our differences 
  • The “radical transparency” of recording and making public every meeting in which a government official takes part, to shed light on the policy-making process 
  • How Taiwan worked with civil society to ensure that no privacy and human rights were traded away for public health and safety during the COVID-19 pandemic 
  • Why maintaining credible neutrality from partisan politics and developing strong public and civic digital infrastructure are key to advancing democracy. 

Audrey Tang has served as Taiwan's first Digital Minister since 2016, by which time she was already known for revitalizing the computer languages Perl and Haskell, as well as for building the online spreadsheet system EtherCalc in collaboration with Dan Bricklin. In the public sector, she served on the Taiwan National Development Council’s open data committee and basic education curriculum committee, and led the country’s first e-Rulemaking project. In the private sector, she worked as a consultant with Apple on computational linguistics, with Oxford University Press on crowd lexicography, and with Socialtext on social interaction design. In the social sector, she actively contributes to g0v (“gov zero”), a vibrant community focused on creating tools for civil society, with the call to “fork the government.”

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

AUDREY TANG
In October 2016, when I first became Taiwan's digital minister, I had no examples to follow, because I was the first digital minister. And then it turns out that in traditional Mandarin, as spoken in Taiwan, digital, shu wei, means the same as “plural” - so, more than one. So I'm also a plural minister, a minister of plurality. And so to explain this wordplay, I wrote my job description as a prayer, as a poem. It's very short, so I might as well just quickly recite it. It goes like this:
When we see an internet of things, let's make it an internet of beings.
When we see virtual reality, let's make it a shared reality.
When we see machine learning, let's make it collaborative learning.
When we see user experience, let's make it about human experience.
And whenever we hear that a singularity is near, let us always remember the plurality is here.

CINDY COHN
That's Audrey Tang, the Minister of Digital Affairs for Taiwan. She has taken the best of open source and open culture, and successfully used them to help reform government in her country of Taiwan. When many other cultures and governments have been closing down and locking up data and decision making, Audrey has shown that openness not only works, but it can win against its more authoritarian competition.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN
The idea behind this show is we're trying to make our digital lives better. We spend so much time imagining worst-case scenarios, and jumping into the action when things inevitably do go wrong online but this is a space for optimism and hope.

JASON KELLEY
And our guest this week is one of the most hopeful and optimistic people we've had the pleasure of speaking with on this program. As you heard in the intro, Audrey Tang has an incredibly refreshing approach to technology and policy making.

CINDY COHN
We approach a lot of our conversations on the podcast using Lawrence Lessig’s framework of laws, norms, architecture and markets – and Audrey’s work as the Minister of Digital Affairs for Taiwan combines almost all of those pillars. A lot of the initiatives she worked on have touched on so many of the things that we hold dear here at EFF and we were just thrilled to get a chance to speak with her.
As you'll soon hear, this is a wide-ranging conversation but we wanted to start with the context of Audrey's day-to-day life as Taiwan's Minister of Digital Affairs.

AUDREY TANG
In a nutshell I make sure that every day I checkpoint my work so that everyone in the world knows not just the what of the policies made, but the how and why of policy making.
So for easily more than seven years, everything that I did in the process, not the result, of policymaking has been visible to the general public. And that allows for requests, essentially - people can make suggestions on how to steer it in a different direction, instead of waiting until the end of the policymaking cycle, where they have to say, you know, we protest, please scratch this and start anew, and so on.
No, instead of protesting, we welcome demonstrators who demonstrate better ways to make policies, as evidenced during the pandemic, when we relied on civil-society-led contact tracing and counter-pandemic methods, and for three years we never had a single day of lockdown.

JASON KELLEY
Something just popped into my head about the pandemic since you mentioned the pandemic. I'm wondering if your role shifted during that time, or if it sort of remained the same except to focus on a slightly different element of the job in some way.

AUDREY TANG
That's a great question. So entering the pandemic, I was the minister with a portfolio in charge of open government, social innovation, and youth engagement. And during the pandemic, I assumed a new role, which is the cabinet Chief Information Officer. And so the cabinet CIO usually focuses on, for example, making tax paying easier, or using the same SMS number for all official communications, or things like that.
But during the pandemic, I played the role of a Lagrange point, right? Between the gravity centers of privacy protection and social movements on one side, and protecting the economy and keeping TSMC running on the other side. Whereas many countries - I would say everyone other than, say, Taiwan, New Zealand, and a handful of other countries - assumed it would be a trade-off.
Like there's a dial where you have to sacrifice some of the human rights, or you have to sacrifice some lives, right? A very difficult choice. We refused to make such trade-offs.
So as the minister in charge of social innovation, I worked with the civil society leaders, who are themselves privacy advocates, to design contact tracing systems, instead of relying on Google or Apple or other companies to design them. And as cabinet CIO, whenever there was a very good idea, we made sure to turn it into production, rolling it out at a national level the next Thursday. So there was this weekly iteration that took the best ideas from civil society and made them work on a national level. And therefore, it was not just counter-pandemic, but also counter-infodemic. We never had a single administrative takedown of speech during the pandemic, yet we don't have an anti-vax political faction, for example.

JASON KELLEY
That's amazing. I'm hearing already a lot of, uh, things that we might want to look towards in the U.S.

CINDY COHN
Yeah, absolutely. I guess what I'd love to do is, you know, I think you're making manifest a lot of really wonderful ideas in Taiwan. So I'd like you to step back and you know, what does the world look like, you know, if we really embrace openness, we embrace these things, what does the bigger world look like if we go in this direction?

AUDREY TANG
Yeah, I think the main contribution that we made is that the authoritarian regimes for quite a while kept saying that they're more efficient, that for emerging threats, including pandemic, infodemic, AI, climate, whatever, top-down, takedown, lockdown, shutdowns are more effective. And when the world truly embraces democracy, we will be able to pre-bunk – not debunk, pre-bunk – this idea that democracy only leads to chaos and only authoritarianism can be effective. If we do more democracy more openly, then everybody can say, oh, we don't have to make those trade-offs anymore.
So, I think when the whole world embraces this idea of plurality, we'll have much more collaboration and much more diversity. We won't refuse diversity simply because it's difficult to coordinate.

JASON KELLEY
Since you mentioned democracy, I had heard that you have this idea of democracy as a social technology. And I find that really interesting, partly because all the way back in season one, we talked to the chief innovation officer for the state of New Jersey, Beth Noveck, who talked a lot about civic technology and how to facilitate public conversations using technology. So all of that is a lead-in to me asking this very basic question. What does it mean when you say democracy is a social technology?

AUDREY TANG
Yeah. So if you look at democracy as it's currently practiced, you'll see voting, for example: if every four years someone votes among, say, four presidential candidates, that's just two bits of information uploaded from each individual, and the latency is very, very long, right? Four years, two years, one year.
Again, when emerging threats happen - pandemic, infodemic, climate, and so on - they don't work on a four-year schedule. They just come now, and you have to make something next Thursday in order to counter it at its origin, right? So democracy, as currently practiced, suffers from a lack of bandwidth, meaning the preferences of citizens are not fully understood, and from latency, which means that the iteration cycle is too long.
And so to think of democracy as a social technology is to think about ways to make the bandwidth wider. To make sure that people's preferences can be elicited in a way that respects each community's dignities, choices, and context, instead of compressing everything into this one-dimensional poll result.
We can free up the polls so that they become wiki surveys: everybody can write the poll questions together. It can become co-creation. People can co-create a constitutional document for the next generation of AI, which aligns itself to that document, and so on and so forth. And when we do this, like, literally every day, then the latency also shortens, and people can, like a radar, sense societal risks and come up with societal solutions in the here and now.

CINDY COHN
That's amazing. And I know that you've helped develop some of the actual tools. Or at least help implement them, that do this. And I'm interested in, you know, we've got a lot of technical people in our audience, like how do you build this and what are the values that you put in them? I'm thinking about things like Polis, but I suspect there are others too.

AUDREY TANG
Yes, indeed. Polis is quite well known in that it's a kind of social media that, instead of polarizing people to drive so-called engagement or addiction or attention, automatically surfaces bridge-making narratives and statements. So only the ideas that speak to both sides, or to multiple sides, gain prominence in Polis.
The algorithm then surfaces those to the top so that people understand: oh, despite our seeming differences, which were magnified by mainstream and other antisocial media, there is common ground. Like 10 years ago, when UberX first came to Taiwan, the Uber drivers and the taxi drivers and the passengers all actually agreed on insurance, registration, and not undercutting existing meters. These are important things.
So instead of arguing about abstract ideas, like whether it's a sharing economy or an extractive gig economy, we focus, again, on the here and now and settle the ideas in a way that's called rough consensus. Meaning that everybody, maybe not perfectly, can live with it.
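
To make the mechanism concrete: Polis asks participants to agree or disagree with short statements, clusters voters by their voting patterns, and then promotes statements that do well in every cluster rather than merely on average. Below is a minimal Python sketch of that ranking idea; it illustrates the concept only and is not Polis's actual implementation, which uses dimensionality reduction for clustering and a more refined consensus metric.

    import numpy as np
    from sklearn.cluster import KMeans

    def bridge_ranking(votes, n_groups=2):
        """Rank statements by cross-group consensus.

        votes: participants x statements array of +1 (agree),
        -1 (disagree), or 0 (pass / not yet seen).
        """
        # Cluster participants into opinion groups by voting pattern.
        groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(votes)
        scores = []
        for s in range(votes.shape[1]):
            # Agreement rate for statement s within each group.
            rates = [(votes[groups == g, s] == 1).mean()
                     for g in range(n_groups)]
            # A bridging statement must do well in its worst group,
            # so score it by its minimum agreement across groups.
            scores.append(min(rates))
        # Most bridging statements first.
        return np.argsort(scores)[::-1]

    # Toy example: six participants, three statements. Statements 0
    # and 1 split the room; statement 2 is agreed with by both camps.
    votes = np.array([[ 1, -1, 1],
                      [ 1,  0, 1],
                      [ 1, -1, 1],
                      [-1,  1, 1],
                      [-1,  1, 0],
                      [-1,  1, 1]])
    print(bridge_ranking(votes))  # statement 2 ranks first

Scoring by the worst-performing group is what keeps divisive statements from floating to the top, no matter how enthusiastically one side supports them.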

CINDY COHN
I just think they're wonderful and I love the flipping of this idea of algorithmic decision making such that the algorithm is surfacing places of agreement, and I think it also does some mapping as well about places of agreement instead of kind of surfacing the disagreement, right?
And that, that is really, algorithms can be programmed in either direction. And the thinking about how do you build something that brings stuff together to me is just, it's fascinating and doubly interesting because you've actually used it in the Uber example, and I think you've used some version of that also back in the early work with the Sunflower movement as well.

AUDREY TANG
Yeah, the Uber case was 2015, and the Sunflower Movement was 2014. In 2014, the Ma Ying-jeou administration had an approval rate among citizens of less than 10%, which meant that anything the administration said, the citizens ultimately didn't believe, right? And so instead of relying on traditional partisan politics, which totally broke down circa 2014, Ma Ying-jeou worked with people who came from the tech communities and named Simon Chang from Google first as vice premier and then as premier. And then in 2016, when the Tsai Ing-wen administration began, the premier, Lin Chuan, was also independent. So after 2014-15, we were at a new phase of our democracy, where it became normal for me to say, oh, I don't belong to any party, but I work with all the parties. That credible neutrality, this kind of bridge making across parties, became something people expect the administration to do. And again, we don't see that much of this kind of bridge-making action in other advanced democracies.

CINDY COHN
You know, I had this question and, and I know that one of our supporters did as well, which is, what's your view on, you know, kind of hackers? And, and by saying hackers here, I mean people with deep technical understanding. Do you think that they can have more impact by going into government than staying in private industry? Or how do you think about that? Because obviously you made some decisions around that as well.

AUDREY TANG
So my job description basically implies that I'm not working for the government. I'm just working with the government. And not for the people, but with the people. And this is very much in line with the internet governance technical community, right? The technical community within the internet governance communities kind of places ourselves as a hub between the public sector, the private sector, even the civil society, right?
So, the dot net suffix is something else. It is something that includes dot org, dot com, dot edu, dot gov, and even dot military, together into a shared fabric, so that people can find rough consensus and running code, regardless of which sector they come from. And I think this is the main gift that the hacker community gives to modern democracy: we can work on the process, and the process or the mechanism naturally fosters collaboration.

CINDY COHN
Obviously whenever you can toss rough consensus and running code into a conversation, you've got our attention at EFF because I think you're right. And, and I think that the thing that we've struggled with is how to do this at scale.
And I think the thing that's so exciting about the work that you're doing is that you really are doing a version of. transparency, rough consensus, running code, and finding commonalities at a scale that I would say many people weren't sure was possible. And that's what's so exciting about what you've been able to build.

JASON KELLEY
I know that before you joined with the government, you were a civic hacker involved in something called gov zero. And I'm wondering, maybe you can talk a little bit about that and also help people who are listening to this podcast think about ways that they can sort of follow your path. Not necessarily everyone can join the government to do these sorts of things, but I think people would love to implement some of these ideas and know more about how they could get to the position to do so.

AUDREY TANG
Collaborative diversity works not just in the dot gov; if you're working in a large enough dot org or dot com, it all works the same, right? When I first discovered the World Wide Web, I learned about image tags, and the first image tag that I put up was the Blue Ribbon campaign. And it was actually about unifying the concerns of not just librarians, but also the hosting companies and really everybody, right, regardless of their suffix. We saw webpages turning black with this prominent blue ribbon at the center. So by making the movement fashionable across sectors, you don't have to work in the government in order to make a change. Just open source your code, and somebody in the administration who's also a civic hacker will notice and adapt, fork, or merge your code back.
And that's exactly how Gov Zero works. In 2012, a bunch of civic hackers decided that they'd had enough of PDF files that are just image scans of budget descriptions, or things like that, which made it almost impossible for average citizens to understand what was going on with the Ma Ying-jeou administration. And so, they set up forked websites.
So for each government website, something dot gov dot tw, the civic hackers register something dot g0v dot tw, which looks almost the same. So you visit a regular government website, you change your O to a zero, and this domain hack ensures that you're looking at a shadow government version of the same website, except it's on GitHub, except it's powered by open data, except there are real interactions going on, and you can actually have a conversation about any budget item around its visualization with your fellow civic hackers.
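
The domain hack itself is a one-character rewrite. A hypothetical helper in Python (the example URL is made up for illustration; real g0v projects each register their own mirrors):

    from urllib.parse import urlparse, urlunparse

    def to_g0v(url: str) -> str:
        """Rewrite an official *.gov.tw address to its g0v.tw
        shadow-site equivalent: the 'o' in 'gov' becomes a zero."""
        parts = urlparse(url)
        shadow = parts.netloc.replace(".gov.tw", ".g0v.tw")
        return urlunparse(parts._replace(netloc=shadow))

    # Hypothetical URL, purely for illustration:
    print(to_g0v("https://budget.gov.tw/topics"))
    # -> https://budget.g0v.tw/topics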
And many of those projects in Gov Zero became so popular that the administration, the ministries finally merged back their code so that if you go to the official government website, it looks exactly the same as the civic hacker version.

CINDY COHN
Wow. That is just fabulous. And for those who might be a little younger, the Blue Ribbon Campaign was an early EFF campaign where websites across the internet would put a blue ribbon up to demonstrate their commitment to free speech. And so I adore that that was one of the inspirations for the kind of work that you're doing now. And I love hearing these recent examples as well, that this is something that really you can do over and over again.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

TIME magazine recently featured Audrey Tang as one of the 100 most influential people in AI and one of the projects they mentioned is Alignment Assemblies, a collaboration with the Collective Intelligence Project policy organization that employs a chatbot to help enable citizens to weigh in on their concerns around AI and the role it should play.

AUDREY TANG
So it started as just a Polis survey of the leaders at the Summit for Democracy and AI labs and so on, on how exactly their concerns could be bridged when it comes to the three main values identified by the Collective Intelligence Project, which are participation, progress, and safety. Because at the time, with GPT-4 and its effect on everybody's mind, we heard a lot of strong trade-off arguments. Like: to maximize safety, we have to, I don't know, restrict GPU purchasing across the world to put a cap on progress. Or we heard that to make open source possible, we must give up the idea of the AIs aligning themselves, and instead have uncensored models be like personal assistants, so that everybody has one, so that people become inoculated against deepfakes, because everybody can very easily deepfake, and so on.
And we also heard that maybe internet communication will be taken over by deepfakes, and so we will have to reintroduce some sort of real-name internet, because otherwise everybody will be a bot on the internet, and so on. So all these ideas really pushed open the Overton window, right? Because before generative AI, these ideas were considered fringe.
And suddenly, at the end of March this year, those ideas gained prominent ground again. So using Polis, and using TalkToTheCity and other tools, we quickly mapped an actually overlapping consensus. So regardless of which value you come from, people generally understand that if we don't tackle the short-term risks - the interactive deepfakes, the persuasion and addiction risks, and so on - then we won't even coordinate well enough to address the extinction risks a decade or so down the line, right?
So we have to focus on the immediate risks first, and that led to the safe dot ai joint statement, which I signed, and also the Mozilla openness and safety joint statement, which I signed, and so on.
So the bridge-making AI actually enabled a sort of deep canvassing, where I can take all the sides and then make the narratives that bridge the three very different concerns. So it's not a trilemma; rather, the concerns reinforce each other mutually. And so in Taiwan, a surprising consensus that we got from the Polis conversations and the two face-to-face, day-long workshops was that people in Taiwan want the Taiwanese government to pioneer this use of trustworthy AI.
So instead of the private sector producing the first experiences, they want the public servants to exercise their caution, of course, but also to use gen AI in the public service. But with one caveat: this must be public code. That is to say, it should be free software, open source; the way it integrates into decision making should be an assistive role; and everything needs to be meticulously documented, so civil society can replicate it on their own personal computers and so on. And I think that's quite insightful. And therefore, we're actually doubling down on the societal evaluation and certification, and we're setting up a center for that at the end of this year.

CINDY COHN
So what are some of the lessons and things that you've learned in doing this in Taiwan that you think, you know, countries around the world or people around the world ought to take back and, and think about how they might implement it?
Are there pitfalls that you might want to avoid? Are there things that you think really worked well that people ought to double down on?

AUDREY TANG
I think it boils down to two main observations. The first one is that credible neutrality and alignment with the career public service is very, very important. The political parties come and go, but a career public service is very aligned with the civic hackers' kind of thinking because they maintain the mechanism.
They want the infrastructure to work, and they want to serve people who belong to different political parties. It doesn't matter, because that's what a public service does: it serves the public. And so for the first few years of the Gov Zero movement, the projects found not just natural allies in the career public service, but also in the credibly neutral institutions in our society.
For example, our National Academy which doesn't report to the ministers, but rather directly to the president is widely seen as credibly neutral. And so civil society organizations can play such a role equally effectively if they work directly with the people, not just for the policy think tanks and so on.
So one good example may be something like Consumer Reports in the U.S., or National Public Radio, and so on. So, basically, these are the mediators that are very similar to us, the civic hackers, and we need to find allies in them. So this is the first observation. And the second observation is that you can turn any crisis that urgently needs clarity into an opportunity to build future mechanisms that work better.
For that, you need the civil society to trust in it, and the best way to win trust is to give trust. So by simply saying to the opposition party: everyone has the real-time API of the open data, and so if you make a critique of our policy, well, you have the same data as we do. So patches welcome, send us pull requests, and so on. This turns what used to be a zero-sum or negative-sum dynamic in politics, thanks to an emergency like a pandemic or infodemic, into a co-creation opportunity, and the resulting infrastructure becomes so legitimate that no political party will dismantle it. So it becomes another part of the political institution.
So having this idea of digital public infrastructure, and asking the parliament to give it infrastructure money and investment, just like building parks and roads and highways, is also super important.
So when you have a competent society, when we focus not just on the literacy but on the competence of everyday citizens, they can contribute to public infrastructures through civic infrastructures. So credible neutrality as one, and public and civic infrastructure as the other: I think these two are the most fundamental, but also the easiest-to-practice, ways to introduce this plurality idea to other polities.

CINDY COHN
Oh, I think these are great ideas. And it reminds me a little of what we learned when we started doing electronic voting work at EFF. We learned that we needed to really partner with the people who run elections.
We were aligned that all of us really wanted to make sure that the person with the most votes was actually the person who won the election. But we started out a little adversarial and we really had to learn to flip that around. Now that’s something that our friends at Verified Voting have really figured out and have build some strong partnerships. But I suspect in your case it could have been a little annoying to officials that you were creating these shadow websites. I wonder, did it take a little bit of a conversation to flip them around to the situation in which they embraced it?

AUDREY TANG
I think the main intervention that I personally made, back in the days when I ran the MoEdDict, or the Ministry of Education Dictionary project, in the Gov Zero movement, was that we very prominently said that although we reuse all the so-called copyright-reserved data from the Ministry of Education, we relinquish all our copyright under the then very new Creative Commons Zero, so that they cannot say we're stealing any of the work, because obviously we're giving everything back to the public.
So by serving the public in an even more prominent way than the public service, we made ourselves not just natural allies, but kind of reverse mentors of the young people who work with cabinet ministers. And because we served the public better in some ways, they could just take the entire website design - the entire Unicode interoperability, standard conformance, accessibility, and so on - and simply tell their vendors, you know, you can merge it, you don't have to pay these folks a dime. And naturally the service improves, and they get praise from the press, and so on. And that fuels this virtuous cycle of collaboration.

JASON KELLEY
One thing that you mentioned at the beginning of our conversation that I would love to hear more about is the idea of radical transparency. Can you talk about how that shows up in your workflow in practice every day? Like, do you wake up and have a cabinet meeting and record it and transcribe it and upload it? How do you find time to do all that? What is the actual process?

AUDREY TANG
Oh, I have staff, of course. And also, nowadays, language models. So the proofreading language models are very helpful. And I actually train my own language models, because the pre-training of all the leading large language models already read from the seven years or so of public transcripts that I've published.
So they actually know a lot about me. In fact, when facilitating the chatbot conversations, one of the more powerful prompts we discovered was simply: facilitate this conversation in the manner of Audrey Tang. And then the language model actually knows what to do, because it has seen so many facilitative transcripts.

CINDY COHN
Nice! I may start doing that!

AUDREY TANG
It's a very useful elicitation prompt. And so I train my local language model. My emails, especially English ones, are all drafted by the local model. And it raises no privacy concern because it runs in airplane mode: the entire fine-tuning and inference pipeline, everything, is done locally. And so while it does learn from my emails and so on, I always read a draft fully before hitting send.
But this language-model integration of personal computing has already saved, I would say, 90 percent of my time during daily chores like proofreading, checking transcripts, replying to emails and things like that. And so one of the main arguments we make in the cabinet is that this kind of use of what we call local AI, edge AI, or community open AI is actually better for discovering the vulnerabilities and flaws, because the public service has a duty to ensure accuracy, and what better way to ensure the accuracy of language model systems than integrating them into the flow of work in a way that doesn't compromise privacy and personal data protection. So, yeah, AI is a great time saver, and we're also aligning AI as we go.
So for the other ministries that want to learn from this radical transparency mechanism and so on, we almost always sell it as a more secure and time-saving device. And then once they adopt it, they see the usefulness of getting more public input and having a language model digest the collective inputs and respond to the people in the here and now.
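Here is a minimal sketch of what this kind of fully local, airplane-mode setup can look like, assuming the open-source llama-cpp-python bindings and a model file already downloaded to disk; the file path and prompts are illustrative stand-ins, not Audrey Tang's actual configuration:

from llama_cpp import Llama

# Load a quantized model from local disk. Nothing here touches the
# network, so the whole pipeline works in airplane mode.
llm = Llama(model_path="./models/local-model.gguf", n_ctx=4096, verbose=False)

# Ask the local model to draft a reply to an incoming email.
result = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Draft a polite, concise reply to the email below."},
        {"role": "user",
         "content": "Could we reschedule Thursday's call to Friday morning?"},
    ],
    max_tokens=256,
)

# The draft never leaves this machine, and a human reads it fully
# before anything is sent.
print(result["choices"][0]["message"]["content"])

Because both fine-tuning and inference stay on the device, the model can learn from private email without any of that data reaching a third party, which is the privacy property described above.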

CINDY COHN
Oh, that is just wonderful, because I do know that when you start talking with public servants about more public participation, often what you get is: oh, you're making my job harder, right? You're making more work for me. And what you've done is you've been able to use technology in a way that actually makes their job easier. And the other thing I just want to lift up in what you said is how important it is that these AI systems you're using are serving you. One of the things we talk about a lot is the dangers of AI systems: who bears the downside if the AI is wrong?
And when you're using a service that is air-gapped from the rest of the internet, and it is largely being used to serve you in what you're doing, then the downside of it being wrong doesn't fall on, you know, the person who doesn't get bail. It falls on you, and you're in the best position to correct it, to actually recognize that there's a problem and make it better.

AUDREY TANG
Exactly. Yeah. So I call these AI systems assistive intelligence, after assistive technology, because it empowers my dignity, right? I have this assistive tech, which is a pair of eyeglasses. It's very transparent, and if I see things wrong after putting on those eyeglasses, nobody blames the eyeglasses.
It's always the person that is empowered by the eyeglasses. But if instead I wear not eyeglasses but those VR devices that consume all the photons, upload them to the cloud for some very large corporation to calculate and then project back to my eyes, maybe with some advertisement in it and so on, then it's very hard to tell whether the decision making falls on me or on those intermediaries that basically block my eyesight and just present me an alternate reality. So I always prefer things that are like eyeglasses, or bicycles for that matter, that someone can repair themselves without violating an NDA or paying $3 million in license fees.

CINDY COHN
That's great. And open source for the win again there. Yeah.

AUDREY TANG
Definitely.

CINDY COHN
Yeah, well, thank you so much, Audrey. I tell you, this has been kind of like a breath of fresh air, I think, and I really appreciate you giving us a glimpse into a world in which, you know, the values that I think we all agree on are actually being implemented, and implemented, as you said, in a way that scales and makes things better for ordinary people.

AUDREY TANG
Yes, definitely. I really enjoy the questions as well. Thank you so much. Live long and prosper.

JASON KELLEY
Wow. A lot of the time we talk to folks and it's hard to get to a vision of the future that we feel positive about, and this was the exact opposite. I have rarely felt more positive about the options for the future, and about how we can use technology to improve things. What an amazing conversation. What did you think, Cindy?

CINDY COHN
Oh, I agree. And the thing that I love about it is, she's not just positing about the future. You know, she's telling us stories that are ten years old about how they fixed things in Taiwan: the Uber story and some of the other stories from the Sunflower movement. She didn't just, like, show up and say the future's going to be great. She's not just dreaming. They're doing.

JASON KELLEY
Yeah. And that really stood out to me when talking about some of the things that I expected to get more theoretical answers to. Like, what do you mean when you say democracy is a technology? And the answer is, quite literally, that democracy suffers from low bandwidth and high latency, and that the speed at which individuals communicate with the government can be increased in the same way that we can increase bandwidth. It was just such a concrete way of thinking about it.
And another concrete example was, you know, how do you get involved in something like this? And she said, well, we just basically forked the website of the government with a slightly different domain and put up better information until the government was like, okay, fine, we'll just incorporate it. These are such concrete things that people can understand. It's really amazing.

CINDY COHN
Yeah, the other thing I really liked was pointing out how, you know, making government better and work for people is really one of the ways that we counter authoritarianism. She said one of the arguments in favor of authoritarianism is that it's more efficient, that it can get things done faster than a messy, chaotic, democratic process.
And she said, well, you know, we just fixed that: we created systems in which democracy was more efficient than authoritarianism. And she talked a lot about the experience they had during COVID, and the result of that being that they didn't have a huge misinformation problem or a huge anti-vax community in Taiwan, because the government worked.

JASON KELLEY
Yeah, that's absolutely right, and it's so refreshing to see that there are models we can look toward, right? I mean, it feels like we're constantly sort of getting things wrong, and this was just such a great way to say: oh, here's something we can actually do that will make things better in this country or in other countries.
Another point that was really concrete was the technology that twists algorithms around so that instead of surfacing disagreements, they surface agreements. The Polis idea, and ways that we can make technology work for us. There was a phrase that she used, which is thinking of algorithms and other technologies as assistive. And I thought that was really brilliant. What did you think about that?

CINDY COHN
I really agree. I think that building systems that can surface agreement, as opposed to doubling down on disagreement, seems so obvious in retrospect. This open source technology, Polis, has been doing it for a while, but I think that we really do need to think about how we build systems that help us build towards agreement and a shared view of how our society should be, as opposed to feeding polarization. I think this is a problem on everyone's mind.
And, when we go back to Larry Lessig's four pillars, here's actually a technological way to surface agreement. Now, I think Audrey's using all of the pillars. She's using law for sure. She's using norms for sure, because they're creating a shared norm around higher bandwidth democracy.
But really, you know, in her heart, you can tell she's a hacker, right? She's using technologies to try to build this shared world, and it just warms my heart. It's really cool to see this approach, and of course radical openness as part of it all, being applied in a governmental context in a way that really is working far better than I think a lot of people believed possible.
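For a concrete sense of what surfacing agreement can mean algorithmically, here is a much-simplified sketch in the spirit of Polis, not the actual Polis algorithm: cluster participants into opinion groups by their votes, then rank comments by their weakest support across groups, so only statements that every group endorses rise to the top. The data and clustering choices are illustrative assumptions:

import numpy as np
from sklearn.cluster import KMeans

# votes[i, j] is participant i's vote on comment j:
# 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  0],
    [-1,  1,  1,  1],
    [-1,  1,  1,  1],
])

# Group participants into opinion clusters based on voting patterns.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# Score each comment by its weakest average support among the clusters:
# a high score means broad, cross-group agreement.
scores = [
    min(votes[clusters == c, j].mean() for c in np.unique(clusters))
    for j in range(votes.shape[1])
]

# Comment 1, which everyone agrees with, outranks the divisive ones.
for j in np.argsort(scores)[::-1]:
    print(f"comment {j}: consensus score {scores[j]:.2f}")

Real deployments are more sophisticated, but the inversion is the point: the algorithm rewards statements that bridge groups instead of amplifying whatever splits them.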

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode you heard reCreation by airtone, Kalte Ohren by Alex featuring starfrosch and Jerry Spoon, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.
You can find links to their music in our episode notes, or on our website at eff.org/podcast.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis.
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Josh Richman

EFF Statement on Nevada's Attack on End-to-End Encryption

2 months 1 week ago

EFF learned last week that the state of Nevada is seeking an emergency order prohibiting Meta from rolling out end-to-end encryption in Facebook Messenger for all users in the state under the age of 18. The motion for a temporary restraining order is part of a lawsuit by the state Attorney General alleging that Meta’s products are deceptively designed to keep users addicted to the platform. While we regularly fight legal attempts to limit social media access, which are primarily based on murky evidence of its effects on different groups, blocking minors’ use of end-to-end encryption would be entirely counterproductive and just plain wrong.

Encryption is the most vital means we have to protect privacy, which is especially important for young people online. Yet in the name of protecting children, Nevada seems to be arguing that merely offering encryption on a social media platform that Meta knows has been used by criminals is itself illegal. This cannot be the law; in practice it would let the state prohibit all platforms from offering encryption, and such a ruling would raise serious constitutional concerns. Lawsuits like this also demonstrate the risks posed by bills like EARN IT and STOP CSAM that are now pending before Congress: state governments already are trying to eliminate encryption for all of us, and these dangerous bills would give them even more tools to do so.

EFF plans to speak up for users in the Nevada proceeding and fight this misguided effort to prohibit encryption. Stay tuned.

Andrew Crocker

EFF Urges Ninth Circuit to Reinstate X’s Legal Challenge to Unconstitutional California Content Moderation Law

2 months 2 weeks ago

The Electronic Frontier Foundation (EFF) urged a federal appeals court to reinstate X’s lawsuit challenging a California law that forces social media companies to file reports to the state about their content moderation decisions generally, and about five controversial issues in particular—an unconstitutional intrusion into platforms’ right to curate hosted speech free of government interference.

While we are enthusiastic proponents of transparency and have worked, through the Santa Clara Principles and otherwise, to encourage online platforms to provide information to their users, we see a clear threat in these state mandates. Indeed, the Santa Clara Principles themselves warn against governments adopting their voluntary standards as mandates. California’s law is especially concerning since it appears aimed at coercing social media platforms to more actively moderate user posts.

In a brief filed with the U.S. Court of Appeals for the Ninth Circuit, we asserted—as we have repeatedly in the face of state mandates around the country about what speech social media companies can and cannot host—that allowing California to interject itself into platforms’ editorial processes, in any form, raises serious First Amendment concerns.

At issue is California A.B. 587, a 2022 law requiring large social media companies to semiannually report to the state attorney general detailed information about their content moderation decisions, particularly with respect to hot-button issues like hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference.

A.B. 587 requires companies to report “detailed descriptions” of their content moderation practices, both generally and for each of these categories, and also to report detailed information about all posts flagged as belonging to any of those categories, including how content in these categories is defined, how it was flagged, how it was moderated, and whether the action was appealed. Companies can be fined up to $15,000 a day for failing to comply.

X, the social media company formerly known as Twitter, sued to overturn the law, claiming, correctly, that it violates the company’s First Amendment right against being compelled to speak. A federal judge declined to put the law on temporary hold and dismissed the lawsuit.

We agree with X and urge the Ninth Circuit to reverse the lower court. The law was intended to be, and is operating as, an informal censorship scheme to pressure online intermediaries to moderate user speech, which the First Amendment does not allow.

It’s akin to requiring that a state attorney general or law enforcement be able to listen in on editorial board meetings at the local newspaper or TV station, a clear interference with editorial freedom. The Supreme Court has consistently upheld this general principle of editorial freedom in a variety of speech contexts. There shouldn’t be a different rule for social media.

From a legal perspective, the issue before the court is what degree of First Amendment scrutiny should be used to analyze the law. The district court found that the law need only be justified and not burdensome to comply with, a low level of review known as Zauderer scrutiny that is reserved for compelled factual and noncontroversial commercial speech. In our brief, we urge that, as a law that both intrudes upon editorial freedom and disfavors certain categories of speech, A.B. 587 must survive the far more rigorous strict First Amendment scrutiny, and we set out several reasons why strict scrutiny should be applied.

Our brief also distinguishes A.B. 587’s speech compulsions from ones that do not touch the editorial process, such as requirements that companies disclose how they handle user data. Such laws are typically subject to an intermediate level of scrutiny, and EFF strongly supports laws of that kind when they can pass this test.

A.B. 587 says X and other social media companies must report to the California Attorney General whether and how they curate disfavored and controversial speech, and then adhere to those statements or face fines. As a practical matter, this requirement is unworkable—content moderation policies are highly subjective, constantly evolving, and subject to numerous influences.

And as a matter of law, A.B. 587 interferes with platforms’ constitutional right to decide whether, how, when, and in what way to moderate controversial speech. The law is a thinly veiled attempt to coerce sites to remove content the government doesn’t like.

We hope the Ninth Circuit agrees that’s not allowed under the First Amendment.

David Greene