Ban Government Use of Face Recognition In the UK


In 2015, Leicestershire Police scanned the faces of 90,000 individuals at a music festival in the UK and checked these images against a database of people suspected of crimes across Europe. This was the first known deployment of Live Facial Recognition (LFR) at an outdoor public event in the UK. In the years since, the surveillance technology has been frequently used throughout the country with little government oversight and no electoral mandate. 

Face recognition presents an inherent threat to individual privacy, free expression, information security, and social justice. It has an egregious history of misidentifying people of color, leading in some cases to wrongful arrests, and of failing to correctly identify trans and nonbinary people. Of course, even if the technology somehow achieved 100% accuracy overnight, it would still be an unacceptable tool of invasive surveillance, capable of identifying and tracking people on a massive scale.

EFF has spent the last few years advocating for a ban on government use of face recognition in the U.S., and we’ve watched and helped as many municipalities have enacted bans of their own, including in our own backyard. We’ve seen enough of the technology’s use in the UK to reach the same conclusion there.

That’s why we are calling for a ban on government use of face recognition in the UK. We are not alone. London-based civil liberties group Big Brother Watch has been driving the fight to end government use of face recognition across the country. Human rights organization Liberty brought the first judicial challenge against police use of live facial recognition, on the grounds that it breached the Human Rights Act 1998. The government’s own privacy regulator raised concerns about the technical bias of LFR technology, the use of watchlist images with uncertain provenance, and ways that the deployment of LFR evades compliance with data protection principles. And the first independent report commissioned by Scotland Yard challenged police use of LFR as lacking an explicit basis and found the technology 81% inaccurate. The independent Ryder Review also recommended the suspension of LFR in public places until further regulations are introduced.

What Is the UK’s Current Policy on Face Recognition? 

Make no mistake: Police forces across the UK, like police in the US, are using live face recognition. That means full-on Minority Report-style real-time attempts to match people’s faces, as they walk down the street, against databases of photographs, including suspect photos.

Five forces have used the technology in England and Wales, with the silent rollout primarily driven by London’s Metropolitan Police (better known as the Met) and South Wales Police, which oversees the over-1-million-person metro area of Cardiff. The technology is often supplied by Japanese tech company NEC Corporation. It scans every face that passes a camera and checks it against a watchlist of people suspected of crimes or who are court-involved. Successful matches have resulted in immediate arrests. Six police forces in the UK also use Retrospective Facial Recognition (RFR), which compares images obtained by a camera to a police database, but not in real time. Police Scotland has reported its intention to introduce LFR by 2026. By contrast, the Police Service of Northern Ireland apparently has not obtained or implemented face recognition to date.

Unfortunately, the expanding roll-out of this dangerous technology has evaded legislative scrutiny through Parliament. Police forces are unilaterally making the decisions, including whether to adopt LFR, and if so, what safeguards to implement. And earlier this year the UK Government rejected a House of Lords report calling for the introduction of regulations and mandatory training to counter the negative impact that the current deployment of surveillance technologies has on human rights and the rule of law. The evidence that the rules around face recognition need to change is there–many are just unwilling to see it or do anything about it.

Police use of facial recognition was subject to legal review in an August 2020 court case brought by a private citizen against South Wales Police. The Court of Appeal held that the force’s use of LFR was unlawful insofar as it breached privacy rights, data protection laws, and equality legislation. In particular, the court found that the police had too much discretion in determining the location of video cameras and the composition of watchlists.

In light of the ruling, the College of Policing published new guidance: images placed on databases should meet proportionality and necessity criteria, and police should only use LFR when other “less intrusive” methods are unsuitable. Likewise, the then-UK Information Commissioner, Elizabeth Denham, issued a formal opinion warning against law enforcement using LFR for reasons of efficiency and cost reduction alone. Guidance has also been issued on police use of surveillance cameras, most notably the December 2020 Surveillance Camera Commissioner’s guidance for LFR and the January 2022 Surveillance Camera Code of Practice for technology systems connected to surveillance cameras. But these do not provide coherent protections for the individual right to privacy.

London’s Met Police 

Across London, the Met Police uses LFR by bringing a van with mounted cameras to a public place, scanning faces of people walking past, and instantly matching those faces against the Police National Database (PND). 

Images on the PND are predominantly sourced from people who have been arrested, including many individuals who were never charged or were cleared of committing a crime. In 2019, the PND reportedly held around 20 million facial images. According to one report, of 67 people who requested that their images be removed from police databases, only 34 requests were accepted; 14 were declined and the remainder were still pending. Yet the High Court ruled in 2012 that the biometric details of innocent people were being unlawfully held on the database.

This means that once a person is arrested, even if they are cleared, they remain a “digital suspect” having their face searched again and again by LFR. This violation of privacy rights is exacerbated by data sharing between police forces. For example, a 2019 police report detailed how the Met and British Transport Police shared images of seven people with the King’s Cross Estate for a secret use of face recognition between 2016 and 2018.

Between 2016 and 2019, the Met deployed LFR 12 times across London. The first deployment came at Notting Hill Carnival in 2016–the UK’s biggest African-Caribbean celebration. One person was falsely matched. Similarly, at Notting Hill Carnival in 2017, two people were falsely matched and another individual was correctly matched but was no longer wanted. Big Brother Watch reported that at the 2017 Carnival, LFR cameras were mounted on a van behind an iron sheet, making it a semi-covert deployment. Face recognition software has been proven to misidentify ethnic minorities, young people, and women at higher rates. And reports of deployments in spaces like Notting Hill Carnival–where the majority of attendees are Black–exacerbate concerns about the inherent bias of face recognition technologies and the ways that government use amplifies police powers and aggravates racial disparities.

After suspending deployments during the COVID-19 pandemic, the force has since resumed its use of LFR across central London. On 28 January 2022–one day after the UK Government relaxed mask-wearing requirements–the Met deployed LFR with a watchlist of 9,756 people. Four people were arrested, including one who was misidentified and another who was flagged on outdated information. Similarly, a 14 July 2022 deployment outside Oxford Street tube station reportedly scanned the faces of around 15,600 people and resulted in four “true alerts” and three arrests. The Met has previously admitted to deploying LFR in busy areas to scan as many people as possible, despite face recognition data being prone to error. This can implicate people for crimes they haven’t committed.

The Met also recently purchased significant amounts of face recognition technology for Retrospective Facial Recognition (RFR) to use alongside its existing LFR system. In August 2021, the Mayor of London’s office approved a proposal permitting the Met to expand its RFR technology as part of a four-year deal with NEC Corporation worth £3,084,000. And whilst LFR is not currently deployed through CCTV cameras, RFR compares images from national custody databases with already-captured images from CCTV cameras, mobile phones, and social media. The Met’s expansion into RFR will enable the force to tap into London’s extensive CCTV network, with its almost one million cameras, to obtain facial images. According to one 2020 report, London is the third most-surveilled city in the world, with over 620,000 cameras. Another report claims that between 2011 and 2022, the number of CCTV cameras more than doubled across the London Boroughs.

While David Tucker, head of crime at the College of Policing, said RFR will be used “overtly,” he acknowledged that the public will not receive advance notice if an undefined “critical threat” is declared. Cameras are getting more powerful and technology is rapidly improving. And in sourcing images from more than one million cameras, face recognition data is easy for law enforcement to collect and hard for members of the public to avoid. 

South Wales Police

South Wales Police were among the first forces to deploy LFR in the UK. They have reportedly used the surveillance technology more frequently than the Met, with a June 2020 report revealing more than 70 deployments. Two of these led to the August 2020 court case discussed above. In response to the Court of Appeal’s ruling, South Wales Police published a briefing note stating that the force also used RFR to process 8,501 images between 2017 and 2019, identifying 1,921 individuals suspected of committing a crime in the process.

South Wales Police have primarily deployed their two flagship facial recognition projects, LOCATE and IDENTIFY, at peaceful protests and sporting events. LOCATE was first deployed in June 2017 during UEFA Champions League Final week and led to the first arrest using LFR, alongside 2,297 false positives from 2,470 ‘potential matches’. IDENTIFY, launched in August 2017, utilizes the Custody Images Database and allows officers to retrospectively search CCTV stills or other media to identify suspects.

South Wales Police also deployed LFR during peaceful protests at an arms fair in March 2018. The force compiled a watchlist of 508 individuals from its custody database who were wanted for arrest, plus a further six people who were “involved in disorder at the previous event.” No arrests were made. Similar trends are evident in the United States, where face recognition has been used to target people engaging in protected speech, such as deployments at protests surrounding the death of Freddie Gray. Free speech and the right to protest are essential civil liberties, and government use of face recognition at these events discourages free speech, harms entire communities, and violates individual freedoms.

In 2018 the UN Special Rapporteur on the right to privacy criticized the Welsh police’s use of LFR as unnecessary and disproportionate, and urged the government and police to conduct privacy assessments prior to deployment to mitigate violations of privacy rights. The force maintains that it is “absolutely convinced that Facial Recognition is a force for good in policing in protecting the public and preventing harm.” This is despite the fact that face recognition performs worse as the number of people in a database grows: as the likelihood of similar faces increases, matching accuracy decreases.

The Global Perspective

Previous legislative initiatives in the UK have fallen off the policy agenda, and calls from inside Parliament to suspend LFR pending legislative review have been ignored. In contrast, European policymakers have advocated for an end to government use of the technology. The European Parliament recently voted overwhelmingly in favor of a non-binding resolution calling for a ban on police use of facial recognition technology in public places. In April 2021, the European Data Protection Supervisor called for a ban on the use of AI for automated recognition of human features in publicly accessible spaces as part of the European Commission’s legislative proposal for an Artificial Intelligence Act. Likewise, in January 2021 the Council of Europe called for strict regulation of the tech and noted in its new guidelines that face recognition technologies should be banned when used solely to determine a person’s skin color, religious or other belief, sex, racial or ethnic origin, age, health, or social status. Civil liberties groups have also called on the EU to ban biometric surveillance on the grounds that it is inconsistent with EU human rights law.

The United States Congress continues to debate ways of regulating government use of face surveillance. Meanwhile, U.S. states and municipalities have taken it upon themselves to restrict or outright ban police use of face recognition technology. Cities across the United States, large and small, have stood up to this invasive technology by passing local ordinances banning its use. If the UK passes strong face recognition rules, it would set an example for governments around the world, including the United States.

Next Steps

Face recognition is a dangerous technology that harms privacy, racial justice, free expression, and information security. And the UK’s silent rollout has facilitated unregulated government surveillance of this personal biometric data. Please join us in demanding a ban on government use of face recognition in the UK. Together, we can end this threat.

Paige Collings

Study of Electronic Monitoring Smartphone Apps Confirms Advocates’ Concerns of Privacy Harms


Researchers at the University of Washington and Harvard Law School recently published a groundbreaking study analyzing the technical capabilities of 16 electronic monitoring (EM) smartphone apps used as “alternatives” to criminal and civil detention. The study, billed as the “first systematic analysis of the electronic monitoring apps ecosystem,” confirmed many advocates’ fears that EM apps allow access to wide swaths of information, often contain third-party trackers, and are frequently unreliable. The study also raises further questions about the lack of transparency involved in the EM app ecosystem, despite local, state, and federal government agencies’ increasing reliance on these apps.

As of 2020, over 2.3 million people in the United States were incarcerated, and an additional 4.5 million were under some form of “community supervision,” including those on probation, parole, pretrial release, or in the juvenile or immigration detention systems. While EM in the form of ankle monitors has long been used by agencies as an “alternative” to detention, local, state, and federal government agencies have increasingly been turning to smartphone apps to fill this function. The way it works is simple: in lieu of incarceration/detention or an ankle monitor, a person agrees to download an EM app on their own phone that allows the agency to track the person’s location and may require the person to submit to additional conditions such as check-ins involving face or voice recognition. The low costs associated with requiring a person to use their own device for EM likely explain the explosion of EM apps in recent years. Although there is no accurate count of the total number of people who use an EM app as an alternative to detention, in the immigration context alone, nearly 100,000 people today are on EM through the BI SmartLINK app, up from just over 12,000 in 2018. Such widespread use creates a pressing need for public understanding of these apps and the information they collect, retain, and share.

Technical Analysis

The study’s technical analysis, the first of its kind for these types of apps, identified several categories of problems with the 16 apps surveyed. These include privacy issues related to the permissions these apps request (and often require), concerns around the types of third-party libraries and trackers they use, who they send data to and how they do it, as well as some fundamental issues around usability and app malfunctions.

Permissions

When an app wants to collect data from your phone, e.g. by taking a picture with your camera or capturing your GPS location, it must first request permission from you to interact with that part of your device. Because of this, knowing which permissions an app requests gives a good idea of what data it can collect. And while denying unnecessary requests for permission is a great way to protect your personal data, people under EM orders often don’t have that luxury, and some EM apps simply won’t function until all permissions are granted.

Perhaps unsurprisingly, almost all of the apps in the study request permissions like GPS location, camera, and microphone access, which are likely used for various check-ins with the person’s EM supervisor. But some apps request more unusual permissions. Two of the studied apps request access to the phone’s contacts list, which the authors note can be combined with the “read phone state” permission to monitor who someone talks to and how often they talk. And three more request “activity recognition” permissions, which report if the user is in a vehicle, on a bicycle, running, or standing still.
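For researchers, enumerating the permissions an app requests is straightforward once they have its Android package in hand. Below is a minimal sketch using the open-source androguard library (3.x API); the APK filename is a placeholder, and this is our illustration rather than the study’s actual tooling:

from androguard.core.bytecodes.apk import APK  # androguard 3.x layout

# Hypothetical sketch: list every permission an EM app's APK requests.
apk = APK("em_app.apk")  # placeholder filename
for perm in sorted(apk.get_permissions()):
    print(perm)

# Permissions such as ACCESS_FINE_LOCATION, CAMERA, RECORD_AUDIO,
# READ_CONTACTS, or ACTIVITY_RECOGNITION would stand out in the output.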

Third-Party Libraries & Trackers

App developers almost never write every line of code that goes into their software, instead depending on so-called “libraries” of software written by third-party developers. That an app includes these third-party libraries is hardly a red flag by itself. However, because some libraries are written to collect and upload tracking data about a user, it’s possible to correlate their existence in an app with intent to track, and even monetize, user data.

The study found that nearly every app used a Google analytics library of some sort. As EFF has previously argued, Google Analytics might not be particularly invasive if it were used only in a single app, but when combined with its nearly ubiquitous use across the web, it provides Google with a panoptic view of individuals’ online behavior. Worse yet, the app Sprokit “appeared to contain the code necessary for Google AdMob and Facebook Ads SDK to serve ads.” If that is indeed the case, Sprokit’s developers are engaging in an appalling practice of monetizing their captive audience.
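How do researchers spot these libraries? One common approach, similar to the Exodus Privacy project’s, is to scan an APK’s class namespaces for well-known tracker package prefixes. A rough sketch (the tracker list is abbreviated and the filename is a placeholder):

from androguard.misc import AnalyzeAPK  # androguard 3.x

# Hypothetical sketch: flag bundled trackers by their Java package prefixes.
TRACKER_PREFIXES = {
    "Lcom/google/android/gms/analytics": "Google Analytics",
    "Lcom/google/firebase/analytics": "Firebase Analytics",
    "Lcom/google/android/gms/ads": "Google AdMob",
    "Lcom/facebook/ads": "Facebook Ads SDK",
}

a, d, dx = AnalyzeAPK("em_app.apk")  # placeholder filename
found = set()
for cls in dx.get_classes():
    for prefix, name in TRACKER_PREFIXES.items():
        if cls.name.startswith(prefix):
            found.add(name)
print(sorted(found))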

Information Flows

The study aimed to capture the kinds of network traffic these apps send during normal operation, but was limited by not having active accounts for any of the apps (either because the researchers could not create their own accounts, or chose not to in order to avoid agreeing to terms of service). Even so, by installing software that allowed them to snoop on app communications, the researchers were able to draw some worrying conclusions about a few of the studied apps.

Nearly half of the apps made requests to web domains that could be uniquely associated with the app. This is important because even though those web requests are encrypted, the domain they are addressed to is not, meaning that whoever controls the network a user is on (e.g. coffee shops, airports, schools, employers, Airbnb hosts, etc.) could theoretically know if someone is under EM. One app we’ve already mentioned, Sprokit, was particularly egregious in how often it sent data: every five minutes, it would phone home to Facebook’s ad network endpoint with numerous data points harvested from phone sensors and other sensitive data.
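The paper doesn’t spell out the exact interception setup, but tooling like mitmproxy makes this kind of observation possible. A minimal addon sketch that logs every destination an instrumented phone contacts:

# Hypothetical sketch: a mitmproxy addon that logs each domain a phone
# contacts; app-identifying endpoints stand out in this log.
# Run with: mitmproxy -s log_domains.py
from mitmproxy import http

class DomainLogger:
    def request(self, flow: http.HTTPFlow) -> None:
        # pretty_host prefers the Host header over the raw connection address
        print(flow.request.pretty_host, flow.request.path)

addons = [DomainLogger()]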

It’s worth reiterating that, due to the limitations of the study, this is far from an exhaustive picture of each EM app’s behavior. There are still a number of important open questions about what data they send and how they send it.

App Bugs and Technical Issues

As with any software, EM apps are prone to bugs. But unlike other apps, if someone under EM has issues with their app, they’re liable to violate the terms of their court order, which could result in disciplinary action or even incarceration—issues that those who’ve been subjected to ankle monitors have similarly faced.

To study how bugs and other issues with EM apps affected the people forced to use them, the researchers performed a qualitative analysis of the apps’ Google Play store reviews. These reviews were, by a large margin, overwhelmingly negative. Many users report being unable to successfully check-in with the app, sometimes due to buggy GPS/facial recognition, and other times due to not receiving notifications for a check-in. One user describes such an issue in their review: “I’ve been having trouble with the check-ins not alerting my phone which causes my probation officer to call and threaten to file a warrant for my arrest because I missed the check-ins, which is incredibly frustrating and distressing.”

Privacy Policies

As many people who use online services and mobile apps are aware, before you can use a service you often have to agree to a lengthy privacy policy. And whether or not you’ve actually read it, you and your data are bound by its terms if you choose to agree. People who are under EM, however, don’t get a say in the matter: the terms of their supervision are what they’ve agreed to with a prosecutor or court, and often those terms will force them to agree to an EM app’s privacy policy.

And some of those policies include heinous terms. For example, while almost all of the apps’ privacy policies contained language about sharing data with law enforcement to comply with a warrant, they also state reasons they’d share that data without a warrant. Several apps mention that data will be used for marketing. One app, BI SmartLINK, even used to have conditions which allowed the app’s developers to share “virtually any information collected through the application, even beyond the scope of the monitoring plan.” After these conditions were called out in a publication by Just Futures Law and Mijente, the privacy policy was taken down.

Legal Issues 

The study also addressed the legal context in which issues around EM arise. Ultimately, legal challenges to EM apps are likely to be difficult: although the touchstone of the Fourth Amendment’s prohibition against unlawful search and seizure is “reasonableness,” courts have long held that probationers and parolees have diminished expectations of privacy, which are weighed against the government’s interests in preventing recidivism and reintegrating probationers and parolees into the community.

Moreover, the government likely would be able to get around Fourth Amendment challenges by claiming that the person consented to the EM app. But as we’ve argued in other contexts, so-called “consent searches” are a legal fiction. They often occur in high-coercion settings, such as traffic stops or home searches, and leave little room for the average person to feel comfortable saying no. Similarly, here, the choice to submit to an EM app is hardly a choice at all, especially when faced with incarceration as a potential alternative.

Outstanding Questions

This study is the first comprehensive analysis into the ecosystem of EM apps, and lays crucial groundwork for the public’s understanding of these apps and their harms. It also raises additional questions that EM app developers and government agencies that contract with these apps must provide answers for, including:

  • Why EM apps request dangerous permissions that seem to be unrelated to typical electronic monitoring needs, such as access to a phone’s contacts or precise phone state information
  • What developers of EM apps that lack privacy policies do with the data they collect
  • What protections people under EM have against warrantless search of their personal data by law enforcement, or from advertising data brokers buying their data
  • What additional information will be uncovered by being able to establish an active account with these EM apps
  • What information is actually provided about the technical capabilities of EM apps to both government agencies contracting with EM app vendors and people who are on EM apps 

The people who are forced to deal with EM apps deserve answers to these questions, and so does the general public as the adoption of electronic monitoring grows in our criminal and civil systems.

Saira Hussain

San Francisco’s Board of Supervisors Grants Police More Surveillance Powers


In a 7-4 vote, San Francisco’s Board of Supervisors passed a 15-month pilot program granting the San Francisco Police Department (SFPD) more live surveillance powers. This was despite the objections of a diverse coalition of community groups and civil rights organizations, residents, the Bar Association of San Francisco, and even members of the city’s Police Commission, a civilian oversight body comprising mayoral and Board appointees. The ordinance, backed by the Mayor and the SFPD, enables the SFPD to access live video streams from private non-city cameras for the purposes of investigating crimes, including misdemeanor and property crimes. Once the SFPD gets access, it can continue live streaming for 24 hours. The ordinance authorizes such access by consent of the camera owner or a court order.

Make no mistake, misdemeanors like vandalism or jaywalking happen on nearly every street of San Francisco on any given day—meaning that this ordinance essentially gives the SFPD the ability to put the entire city under live surveillance indefinitely.

This troubling ordinance also allows police to surveil “significant events,” loosely defined as large or high-profile events, “for placement of police personnel.” This essentially gives police a green light to monitor—in real-time—protests and other First Amendment-protected activities, so long as they require barricades or street closures associated with public gatherings. The SFPD has previously been caught using these very same cameras to surveil protests following George Floyd’s murder, and the SF Pride Parade, facts that went unaddressed by the majority of Supervisors who authorized the ordinance.

The Amendments

During the hearing, Supervisor Hillary Ronen introduced two key amendments to address and mitigate the ordinance’s civil liberties impacts. The first would have prohibited the SFPD from live monitoring public gatherings unless there was an imminent threat of death or bodily harm. This failed 4-7, the mirror image of the vote on the ordinance itself.

The second, which was successful, required stronger reporting requirements on SFPD’s use of live surveillance and the appointment of an independent auditor to evaluate the efficacy of the pilot program. This amendment was needed to ensure that an independent entity, rather than the SFPD itself, assesses the pilot program’s data to determine exactly how, when, and why these new live monitoring powers were used.

What’s This All About?

During the hearing, several of the Supervisors talked about how San Franciscans are worried about crime, but failed to articulate how giving police live monitoring abilities addresses those fears.

And in fact, many of the examples that both the SFPD and the Supervisors who voted for this ordinance pointed to are the types of situations where live surveillance would not help. Some Supervisors pointed to retail theft or car break-ins as examples of why live surveillance is needed. But under the ordinance, an officer would need to first seek permission from an SFPD captain and then go to a camera owner to request access to live surveillance—steps that would take far longer than the seconds or minutes in which these incidents occur. And if police have reason to believe a crime is about to occur at a particular location, it makes far more sense to send an officer rather than go through the process of getting permission to live monitor a camera, which carries the risk of putting an intersection or a pharmacy under constant police surveillance for no reason.

Moreover, as Supervisor Shamann Walton pointed out, police have always been able to get historical footage of crimes simply by sending a request to the camera’s owner—this is especially true of the thousands of Business Improvement District/Commercial Benefit District cameras from which police have long been obtaining historic footage to build cases or gather evidence. So other than a desire to actively watch large swaths of the city, it’s unclear how live monitoring helps police get anything they couldn’t already get by sending a simple request after the fact.

Which leads us to the sad conclusion that this ordinance isn’t really about the safety of San Franciscans—it’s about security theater. It’s about putting voters at ease that something, anything is being done about crime—even if that proactive move has no discernible effect on crime and, in fact, actively threatens to harm San Francisco’s activists and most vulnerable populations.

A Heartfelt Thank You

A very large coalition pushed back against this ordinance. Without their efforts and the efforts of many other San Franciscans who weighed in during public comment, the 15-month sunset date for the pilot or the independent audit provision would not have been possible.

Commendations should also be heaped upon Supervisors Chan, Preston, Ronen, and Walton for their brave stand at the Board of Supervisors meeting, their sharp critique and questioning of the legislation, and their willingness to listen to concerned community members.

Watching the Watchers

Because this ordinance has a sunset provision that requires it to be renewed 15 months from now, we have another chance to put on our boots, dust off our megaphones, and fight like hell to protect San Franciscans from police overreach. In the meantime, and along with our coalition, we’ll be monitoring for violations and tracking the data that the SFPD produces. And we’ll be there in 15 months to hopefully prevent the reauthorization of this dangerous ordinance.

Related Cases: Williams v. San Francisco
Matthew Guariglia

Lawsuit: SMUD and Sacramento Police Violate State Law and Utility Customers’ Privacy by Sharing Data Without a Warrant

The public power utility and police racially profiled Asian communities in the illegal data-sharing scheme.

SACRAMENTO—The Sacramento Municipal Utility District (SMUD) searches entire zip codes’ worth of people’s private data and discloses it to police without a warrant or any suspicion of wrongdoing, according to a privacy lawsuit filed Wednesday in Sacramento County Superior Court.

SMUD’s bulk disclosure of customer utility data turns its entire customer base into potential leads for police to chase and has particularly targeted Asian homeowners, says the lawsuit filed by the Electronic Frontier Foundation (EFF) and law firm Vallejo, Antolin, Agarwal, and Kanter LLP on behalf of plaintiffs the Asian American Liberation Network, a Sacramento-based nonprofit, and Khurshid Khoja, an Asian American Sacramento resident, SMUD customer, cannabis industry attorney, and cannabis rights advocate. 

“SMUD’s policies claim that ‘privacy is fundamental’ and that it ‘strictly enforces privacy safeguards,’ but in reality, its standard practice has been to hand over its extensive trove of customer data whenever police request it,” said EFF Staff Attorney Saira Hussain. “Doing so violates utility customers’ privacy rights under state law and the California Constitution while disproportionately subjecting Asian and Asian American communities to police scrutiny.”

Utility data has historically provided a detailed picture of what occurs within a home. The advent of smart utility meters has only sharpened that picture. Smart meters provide usage information in increments of 15 minutes or less; this granular information is beamed wirelessly to the utility several times each day and can be stored in the utility’s databases for years. As that data accumulates over time, it can support inferences about private daily routines, such as which devices are being used, when they are in use, and how this changes over time.

The California Public Utilities Code says public utilities generally “shall not share, disclose, or otherwise make accessible to any third party a customer’s electrical consumption data ....” except “as required under federal or state law.” The California Public Records Act prohibits public utilities from disclosing consumer data, except “[u]pon court order or the request of a law enforcement agency relative to an ongoing investigation.” 

“Privacy, not discrimination, was what SMUD promised when it rolled out smart meters,” said Monty Agarwal, EFF’s co-counsel at Vallejo, Antolin, Agarwal, and Kanter LLP.

Yet SMUD in recent years has given protected customer data to the Sacramento Police Department, which asked for it on an ongoing basis—without a warrant or any other court order, nor any suspicion of a particular resident—to find possible illicit cannabis grows. The program has been highly lucrative for the city: under a new city ordinance, Sacramento Police in 2017 began issuing large penalties to owners of properties where cannabis is found, and levied nearly $100 million in fines in just two years.

About 86 percent of those penalties were levied upon people of Asian descent. The lawsuit alleges that officials intentionally designed their mass surveillance to have this disparate impact on Asian communities. The complaint details how a SMUD analyst who provided data to police excluded homes in a predominantly white neighborhood, as well as how one police architect of Sacramento’s program removed non-Asian names from a SMUD list and sent only Asian-sounding names onward for further investigation.

“SMUD and the Sacramento Police Department’s mass surveillance program is unlawful, advances harmful stereotypes, and overwhelmingly impacts Asian communities,” said Megan Sapigao, co-executive director of the Asian American Liberation Network. “It’s unacceptable that two public agencies would carelessly flout state law and utility customers’ privacy rights, and even more unacceptable that they targeted a specific community in doing so.”

“California voters rejected discriminatory enforcement of cannabis laws in 2016, while the Sacramento Police Department and SMUD conduct illegal dragnets through utility customer data to continue these abuses to this day,” Khoja said. “This must stop.”

For the complaint: https://eff.org/document/asian-american-liberation-network-v-smud-complaint

Contact: Saira Hussain, Staff Attorney, saira@eff.org; Aaron Mackey, Senior Staff Attorney, amackey@eff.org
Josh Richman

How to Ditch Facebook Without Losing Your Friends (Or Family, Customers or Communities)


Today, we launch “How to Ditch Facebook Without Losing Your Friends” - a narrated slideshow and essay explaining how Facebook locks in its users, how interoperability can free them, and what it would feel like to use an “interoperable Facebook” of the future, such as the one contemplated by the US ACCESS Act.



Watch the video on the Internet Archive

Watch the video on Youtube

Millions of Facebook users claim to hate the service - its moderation, both high-handed and lax, its surveillance, its unfair treatment of the contractors who patrol it and the publishers who fill it with content - but they keep on using it.

Both Facebook and its critics have an explanation for this seeming paradox: people use Facebook even though they don’t like it because it’s so compelling. For some critics, this is proof that Facebook has perfected an “addictive technology” with techniques like “dopamine loops.” Facebook is rather fond of this critique, as it integrates neatly with Facebook’s pitch to advertisers: “We are so good at manipulating our users that we can help you sell anything.”

We think there’s a different explanation: disgruntled Facebook users keep using the service because they don’t want to leave behind their friends, family, communities and customers. Facebook’s own executives share this belief, as is revealed by internal memos in which those execs plot to raise “switching costs” for disloyal users who quit the service.

“Switching costs” are the economists’ term for everything you have to give up when you switch products or services. Giving up your printer might cost you all the ink you’ve bulk-purchased; switching mobile phone OSes might cost you the apps and media you paid for. 

The switching cost of leaving Facebook is losing touch with the people who stay behind. Because Facebook locks its messaging and communities inside a “walled garden” that can only be accessed by users who are logged into Facebook, leaving Facebook means leaving behind the people who matter to you (hypothetically, you could organize all of them to leave, too, but then you run into a “collective action problem” - another economists’ term describing the high cost of getting everyone to agree to a single course of action).

That’s where interoperability comes in. Laws like the US ACCESS Act and the European Digital Markets Act (DMA) aim to force the largest tech companies to allow smaller rivals to plug into them, so their users can exchange messages with the individuals and communities they’re connected to on Facebook - without using Facebook.

“How to Ditch Facebook Without Losing Your Friends” explains the rationale behind these proposals - and offers a tour of what it would be like to use a federated, interoperable Facebook, from setting up your account to protecting your privacy and taking control of your own community’s moderation policies, overriding the limits and permissions that Facebook has unilaterally imposed on its users.

You can get the presentation as a full video, a highlight reel, a PDF, or a web page. We hope this user manual for an imaginary product will stimulate your own imagination and give you the impetus to demand - or make - something better than our current top-heavy, monopoly-dominated internet.

Cory Doctorow

Giving Big Corporations “Closed Generic” Top-Level Domain Names to Run as Private Kingdoms Is Still a Bad Idea


No business can own the generic word for the product it sells. We would find it preposterous if a single airline claimed exclusive use of the word “air,” or a broadband service tried to stop its rivals from using the word “broadband.” Until this year, it seemed settled that the internet’s top-level domain names (like .com, .org, and so on) would follow the same obvious rule. Alas, ICANN (the California nonprofit that governs the global domain name system) seems intent on taking domains in a more absurd direction by revisiting the thoroughly discredited concept of “closed generics.”

In a nutshell, closed generics are top-level domain names using common words, like “.car.” But unlike other TLDs like “.com,” a closed generic TLD is under the control of a single company, and that company controls all of the domain names within the TLD. This is a terrible idea, for all of the same reasons it has failed twice already. And for one additional reason—defenders of open competition and free expression should not have to fight the same battle a third time.

Closed Generics Rejected and Then Resurrected

The context of this fight is the “new generic top-level domains” process, which expanded the list of “gTLDs” from the original six (.com, .net, .org, .edu, .gov, and .mil) to the 1,400 or so in use today, like .hot, .house, and .horse. In 2012, during the first round of applications to operate new gTLDs, some companies asked for complete, exclusive control over domains like .baby, .blog, .book, .cars, .food, .mail, .movie, .music, .news, .shop, and .video, plus similar terms written in Chinese characters. Most of the applicants were among the largest players in their industries (like Amazon for .book and Johnson & Johnson for .baby).

The outcry was fierce, and ICANN was flooded with public comments. Representatives of domain name registrars, small businesses, non-commercial internet users, and even Microsoft urged ICANN to deny these applications.

Fortunately, ICANN heeded the public’s wishes, telling the applicants that they could operate these top-level domains only if they allowed others to register their own names within those domains. Amazon would not be the sole owner of .book, and Google would not control .map as its private fiefdom. (Some TLDs that are non-generic brand names like .honda, .hermes, and .hyatt were given to the companies that own those brands as their exclusive domains, and some like .pharmacy are restricted to a particular kind of business . . . but not one business.)

A working group within the ICANN community continued to debate the “closed generics” issue, but the working group’s final report in 2020 made no recommendation. Both the supporters and opponents of closed generics tried to find some middle ground, but there was none to be found that protected competition and prevented monopolization of basic words.

That’s where things sat until early this year, when the Chairman of the ICANN Board, out of the blue, asked two bodies that don’t normally make policy to conduct a “dialogue” on closed generics: the ICANN GNSO Council (which oversees community policymaking for generic TLDs) and the ICANN Governmental Advisory Committee, or GAC (a group of government representatives which, as its name indicates, only “advises”). The Board hasn’t voted on the issue, so it’s not clear how many members actually support moving forward.

The Board’s letter was followed up a few days later by a paper from ICANN’s paid staff. It claimed to be a “framing paper” on the proposed dialogue. But in reality, the paper presented a slanted and one-sided history of the issue, suggesting incorrectly that closed generics were “implicitly” allowed under previous ICANN policies. The notion of “implicit” policy is anathema to a body whose legitimacy depends on open, transparent, and participatory decision-making. What’s more, the ICANN staff paper gives no weight to a huge precedent – one of ICANN’s largest waves of global public input, which was almost unanimously opposed to closed generics.

As the ICANN Board (or at least some of its members) tries to start a “dialogue” that would keep the closed generics proposal alive, the staff paper went even further and tried to pre-determine the outcome of that dialogue by suggesting that some closed generic domains would have to be allowed, as long as lawyers for the massive companies that seek to control those domains could come up with convincing “public interest goals.”

As a result, the land rush for new private kingdoms at the highest level of the internet’s domain name system appears poised to begin again.

Still a Bad, Pro-Monopoly Idea

The problems with giving control of every possible domain name within a generic top-level domain to a single company are the same as they were in 2012 and in 2020.

First, it’s out of step with trademark law. In the US and most countries, businesses can’t register a trademark in the generic term for that kind of business. That’s why a computer company and a record label can get trademarks in the name “Apple,” but a fruit company cannot. Some trademark attorneys in the ICANN community have suggested that the US Supreme Court’s decision in the Booking.com case means that trademarks in generic words are now fair game, but that’s misleading. The Supreme Court ruled that adding “.com” to a generic word might result in a valid trademark—but the applicant still has to show with evidence that the public associates that domain name with a particular business, not a general category. And that’s still difficult and rare. If trademark law doesn’t allow companies to “own” generic words, as part of a domain name or otherwise, then ICANN shouldn’t be giving a single company what amounts to ownership over those words as top-level domains.

Second, closed generics are bad policy because they give an unfair advantage to businesses that are already large and often dominant in their field. Control of a new gTLD doesn’t come cheap—the application fee alone is several hundred thousand dollars, and ongoing fees to ICANN are also high. Allowing a bookstore owner named Garcia to run a website at garcia.book is a powerful tool for building a new independent business with its own online identity. A business with a memorable, descriptive domain name like garcia.book is less dependent on its placement in Google’s search results, or Facebook’s news feed. If, instead, only Amazon could create websites that ended in .book, the small businesses of the world would lose that competitive boost, and the image of Amazon as the only online bookseller would be even more durable.

Third, closed generics would blast a big hole in the pro-competitive firewall at the heart of ICANN: the rule that registries (the wholesalers like Verisign who operate top-level domains) and registrars (the retailers like Namecheap who register names for internet users) must remain separate. That rule dates from ICANN’s founding in 1998, and was designed to break a monopoly over domain names. The structural separation rule, which is relatively easy to enforce, helps stop new monopolists from arising in the domain name business. Exclusive control over a generic top-level domain would mean that a single company acts as both the registry and the sole registrar for that top-level domain.

The Public Doesn’t Need Closed Generics, and “Public Interest” Promises Don’t Work in ICANN-Land

The ICANN Board’s letter shared the GAC’s 2013 suggestion that closed generics should be allowed if they could be structured to “serve the public interest.” But which “public” might that be? There’s no reason why giving full control of a generic TLD to a single company would serve internet users better than a domain that’s open to all (or at least all members of a particular business or profession). The justifications we’ve seen boil down to arguing that someone, somewhere will come up with an innovative use for a closed generic domain. That simply begs the question, while not explaining how exclusive control is a necessary feature.

On top of that, ICANN does not have a good track record of holding domain registries to the “public interest” promises they make—its enforcement mechanism is slow, cumbersome, and tends to embroil ICANN in content moderation issues, which is something the organization is rightfully forbidden to do.

No More Sequels

Over the decade-plus of ICANN’s project to expand the top-level domains, no company has been allowed to operate a generic TLD as its private kingdom. And despite two rounds of heated debate, the community has not come up with a plan for doing this well or fairly.

It’s time to stop.

The only motive behind the continuing push for “compromise” on the closed generics issue is the wealthiest players’ desire to control the internet’s basic resources. ICANN should put its foot down at last, put the closed generics idea on the shelf, and leave it there.

Mitch Stoltz

EFF’s DEF CON 30 Puzzle—SOLVED


Puzzlemaster Aaron Steimle of the Muppet Liberation Front contributed to this post.

Every year, EFF joins thousands of computer security professionals, tinkerers, and hobbyists for Hacker Summer Camp, the affectionate term used for the series of Las Vegas technology conferences including BSidesLV, Black Hat, DEF CON, and more. EFF has a long history of standing with online creators and security researchers at events like these for the benefit of all tech users. We’re proud to honor this community’s spirit of curiosity, so each year at DEF CON we unveil a limited edition EFF member t-shirt with an integrated puzzle for our supporters (check the archive!). This year we had help from some special friends.

"The stars at night are big and bright down on the strip of Vegas"

For EFF’s lucky 13th member t-shirt at DEF CON 30, we had the opportunity to collaborate with iconic hacker artist Eddie the Y3t1 Mize and the esteemed multi-year winners of EFF’s t-shirt puzzle challenge: Elegin, CryptoK, Detective 6, and jabberw0nky of the Muppet Liberation Front.

Extremely Online members' design with an integrated challenge.

The result is our tongue-in-cheek Extremely Online T-Shirt, an expression of our love for the internet and the people who make it great. In the end, one digital freedom supporter solved the final puzzle and stood victorious. Congratulations and cheers to our champion cr4mb0!

But How Did They Do It?

Take a guided tour through each piece of the challenge with our intrepid puzzlemasters from the Muppet Liberation Front. Extreme spoilers ahead! You’ve been warned…

_____________________

Puzzle 0

The puzzle starts with the red letters on the shirt on top of a red cube. Trying common encodings won’t work, but a quick Google search of the letters will return various results containing InterPlanetary File System (IPFS) links. The cube is also the logo for IPFS. Thus, the text on the shirt resolves to the following IPFS hash/address:

ipfs://bafkreiebzehf2qlxsm5bdk7cnrnmtnojwb53bnwyrgkkt7ypx5u53typcu
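Content at an IPFS address can be fetched without running a local node, for example through a public HTTP gateway. A sketch in Python (the output filename is assumed):

import requests

# Hypothetical sketch: retrieve the IPFS-addressed puzzle image via a
# public HTTP gateway rather than a local IPFS node.
CID = "bafkreiebzehf2qlxsm5bdk7cnrnmtnojwb53bnwyrgkkt7ypx5u53typcu"
resp = requests.get(f"https://ipfs.io/ipfs/{CID}", timeout=30)
resp.raise_for_status()
with open("puzzle0_image", "wb") as f:  # assumed output filename
    f.write(resp.content)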

QR codes have a standard format and structure that requires the large squares to be placed in three of the four corners. With this in mind, the image can be seen as four separate smaller squares, with the two middle ones overlapping at the large square in the center. These squares can be reconstructed into a valid QR code using an image editing program.

Answer:

Resolves to https://eff.org/Defcon30EFFPuzzleExtraordinaire

This site contains two groups of text: the first paragraph contains four lines that start with the same letters, and the second paragraph looks like Base64-encoded information. Notice that the four lines in the first paragraph all start with the same letters as the text on the shirt. These are also IPFS addresses of the remaining puzzles.

Puzzle 1

ipfs://bafkreigex7eadjwdggka7t6h2ln66ck5wuecq7gnryaayqjcirmdyjgwoe

Wordle players will immediately recognize the style of the puzzle. You can use a wordlist and some regular expressions / pattern matching to identify the only possible solution to this puzzle. Note that the first five words also act as a hint to the theme of each puzzle answer: space/stars.
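That filtering might look like the following sketch, where the green/yellow/gray constraints are placeholders standing in for the clues in the actual puzzle image, and words.txt is any wordlist:

# Hypothetical sketch: filter a wordlist down to words consistent with
# Wordle-style clues. These constraints are placeholders, not the real
# clues from the shirt puzzle.
green = {0: "P"}       # letter known to be at a given position
yellow = {"C": {1}}    # letter in the word, but not at these positions
gray = set("XYZ")      # letters ruled out entirely

def matches(word: str) -> bool:
    word = word.upper()
    if any(word[i] != c for i, c in green.items()):
        return False
    for letter, banned in yellow.items():
        if letter not in word or any(word[i] == letter for i in banned):
            return False
    return not any(c in gray for c in word)

with open("words.txt") as f:  # any seven-letter wordlist
    print([w.strip() for w in f if len(w.strip()) == 7 and matches(w.strip())])
    # with the real clues encoded, a single word should survive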

Answer: PEACOCK

Puzzle 2 Challenge Text

Word on the street is that the font of youth is the key.

[Flight enabling bird feature.] + [Short resonant tones, often indicating a correct response.] + [First Fermat Prime]

55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb

This puzzle consists of some cryptic clues and a line of ciphertext. First, consider the wording of the initial line: “Word on the street is that the font of youth is the key.” These clues should indicate that the solver will need to look into Microsoft Word Fonts.

Next, to decode the clues in the second line:

  1. Flight enabling bird feature = WING
  2. Short resonant tones, often indicating a correct response = DINGS
  3. First Fermat Prime = 3

∴ WINGDINGS 3

Decoding the Cipher Text

55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb

The solver now knows that the ciphertext has something to do with Microsoft Word and the Wingdings 3 font. Typed out in Wingdings 3 font, each character results in some type of arrow. The characters are categorized as arrows as follows:

UP: XYhpr5
DOWN: iqs60
LEFT: Zbdftv
RIGHT: aceguw4
UP-LEFT: jz
UP-RIGHT: k
DOWN-LEFT: lx
DOWN-RIGHT: my

Using these arrows as instructions to a pen, one can draw shapes that resemble letters. Each word of the ciphertext should map to a single letter, with a new plot starting after each space.
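That pen-plotting can be sketched in Python using the direction table above (characters missing from the table are skipped here; feed each path to a plotting library to see the letter it traces):

# Hypothetical sketch: trace each ciphertext word as pen strokes.
DIRS = {
    **dict.fromkeys("XYhpr5", (0, 1)),    # up
    **dict.fromkeys("iqs60", (0, -1)),    # down
    **dict.fromkeys("Zbdftv", (-1, 0)),   # left
    **dict.fromkeys("aceguw4", (1, 0)),   # right
    **dict.fromkeys("jz", (-1, 1)),       # up-left
    "k": (1, 1),                          # up-right
    **dict.fromkeys("lx", (-1, -1)),      # down-left
    **dict.fromkeys("my", (1, -1)),       # down-right
}

ciphertext = ("55rhyykkqisq 4ubhYpYfwg 5pYrmmkks6qi "
              "prkuy6qlf eakjZjk4a rhXkgwy6iqhrddb")

for word in ciphertext.split():
    x, y = 0, 0
    path = [(x, y)]
    for ch in word:
        if ch not in DIRS:  # skip characters not in the table above
            continue
        dx, dy = DIRS[ch]
        x, y = x + dx, y + dy
        path.append((x, y))
    print(word, path)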


Solution

Reading the drawn shapes as letters – the solution: MIMOSA

Puzzle 3

Puzzle solution:

"The name of the game isn’t Craps" and the picture of a person snapping their fingers are references to the game "Snaps." The puzzle uses the rules of Snaps transferred onto a Craps board. Snaps is a game where a clue-giver uses statements and finger-snapping to spell out a well-known name.

Looking at the differences between the given board and a standard Craps board indicates which components are meant to give clues. In a game of Snaps, vowels are indicated by the number of snaps, translated here as the number of pips shown on the colored die. Consonants are indicated with the first letter of a statement given by the clue-giver. On this board, "COME," "NOT PASS BAR," "PASS LINE," and "HOW TO PLAY" have been added or altered, indicating that these statements give the necessary consonants C, N, P, and H by taking the first letter of each statement, as in the game Snaps. The dice have been colored, giving the numbers 1-4 which in Snaps indicate the vowels A, E, I, and O. To order these elements, the rainbow circles to the left of the dice have been colored with the corresponding colors, giving the answer PHOENICIA.

Final answer: PHOENICIA

Puzzle 4

Puzzle Solution:

Unlike the previous puzzles, this image does not take up the entire page, indicating that there might be more information available by inspecting the html. Doing so shows that the embedded image has the file name "OrangeJuicePaperFakeBook.jpg." Deconstructing this, "OrangeJuicePaper" clues the word "pulp" and "FakeBook" clues the word fiction, letting the solver know the puzzle's theme will revolve around the movie Pulp Fiction.

The image itself is hiding information steganographically, and the information can be extracted using the tool steghide. Using steghide on OrangeJuicePaperFakeBook.jpg with no password will write the file QuartDeLivreAvecDuFromage.txt, containing a long series of binary strings of length 8.

'Quart de livre avec du fromage' is 'quarter pounder with cheese' in French. "Do you know what they call a quarter pounder with cheese in Paris?" is a quote from Vincent Vega in Pulp Fiction.

The binary numbers within the file are the ASCII representation of letters and spaces, and can be converted using any of the many tools available upon searching for "binary ASCII converter." Converting the file contents gives legible but nonsensical results:

overconstructed efficiencyapartments coeffect jeffs counterefforts phosphatidylethanolamines eye effed I nonefficient aftereffects theocracy teachereffectiveness inefficaciousnesses a ineffervescibility psychoneuroimmunologically superefficiency coefficientofacceleration o toxic jeffersonian teffs differentialcoefficient milkshake propulsiveefficiency effulges bad lockpick effed upper nonrevolutionaries revolutionarinesses teffs temperaturecoefficient maleffect effable foe butterflyeffect eerie tranquillizing magnetoopticaleffect jeffs plantthermalefficiency nulls rappers I effectiveresistance
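That conversion takes only a couple of lines of Python (a sketch, assuming the file’s 8-bit groups are whitespace-separated):

# Hypothetical sketch: decode the extracted 8-bit binary strings.
with open("QuartDeLivreAvecDuFromage.txt") as f:
    decoded = "".join(chr(int(g, 2)) for g in f.read().split())
print(decoded)  # the word list shown above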

These words aren't used directly, but instead the length of each word is relevant. Converting each word to its character count, and then converting that character count to its letter of the alphabet gives: othenyceallitsarzoyaelewithcheersevigcoentevegas

"They call it a royale with cheese" is another quote from Vincent Vega, also the answer to the previous quote ("Do you know what they call a quarter pounder with cheese in Paris?").

Looking at othenyceallitsarzoyaelewithcheersevigcoentevegas, it contains "they call it a royale with cheese," followed by "vigcent vega." The extra characters mixed in spell 'ones zeroes,' which is a hint that each of the nonsensical words should be converted to a one or a zero themselves. But how? Looking back at the original image, it shows that the EFF score is 1 and the DEF CON score is 0—so represent each word containing the letters "EFF" with a 1, and all other words with a 0. This gives a new binary string, which can itself be again converted to ASCII, giving the ciphertext ymgdzq.
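Both steps are mechanical. A sketch, continuing from the decoded string produced in the snippet above:

# Hypothetical sketch, reusing `decoded` from the previous snippet.
words = decoded.split()

# Step 1: each word's length indexes into the alphabet (15 -> 'o', ...).
print("".join(chr(ord("a") + len(w) - 1) for w in words))
# -> othenyceallitsarzoyaelewithcheersevigcoentevegas

# Step 2: words containing "eff" become 1, all others 0; the bitstring
# is then read back out as 8-bit ASCII.
bits = "".join("1" if "eff" in w else "0" for w in words)
print("".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)))
# -> ymgdzq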

Going back to the quote derived from counting the number of characters in each word, note that Vincent was intentionally misspelled as Vigcent. This is a clue to use a Vigenère cipher to decrypt this new ciphertext with key vega.

Applying the Vigenère cipher to the text 'ymgdzq' with key 'vega' gives the solution: DIADEM
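In Python, the decryption is a few lines (sketch, lowercase a-z only):

# Hypothetical sketch: classic Vigenere decryption.
def vigenere_decrypt(ciphertext: str, key: str) -> str:
    out = []
    for i, ch in enumerate(ciphertext):
        shift = ord(key[i % len(key)]) - ord("a")
        out.append(chr((ord(ch) - ord("a") - shift) % 26 + ord("a")))
    return "".join(out)

print(vigenere_decrypt("ymgdzq", "vega"))  # -> diadem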

Bonus Easter egg: taking the first character of each non-"eff" word in the wordlist gives opeitapotmblunrfetnri, which anagrams to "muppet liberation front."

META

The final block of text is encoded in Base64. Decoding it reveals that the data starts with "Salted__", an artifact of encrypting using OpenSSL.
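
As a sanity check before decrypting, the header can be verified in Python; the filename final.b64 is just an assumption for wherever the Base64 text was saved:

import base64

# Decode the Base64 block and look for OpenSSL's "Salted__" magic
# bytes, which mark data encrypted with a salted key derivation.
with open("final.b64") as f:  # hypothetical filename
    raw = base64.b64decode(f.read())

print(raw[:8] == b"Salted__")  # True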

Concatenate the answers from the four previous puzzles in alphabetical order to create the passphrase that will be used to decrypt the text. With the block of text placed in a file called final.enc, the openssl command to decrypt the text is as follows:

$ openssl aes-256-cbc -d -in final.enc -out final.txt
enter aes-256-cbc decryption password: DiademMimosaPeacockPhoenicia

Decrypting it reveals the solution to the puzzle:

"On behalf of EFF and Muppet Liberation Front,

congratulations on solving the puzzle challenge!

Email the phrase 'The stars at night are big and bright down on the strip of Vegas' to membership@eff.org"

_____________________

EFF is deeply thankful to the Muppet Liberation Front members for creating this puzzle and Eddie the Y3t1 for designing the artwork. After all, how can we fight for a better digital future without some beauty and brainteasers along the way? The movement for digital rights depends on cooperation and mutual support in our communities, and EFF is grateful to everyone on the team!

Happy Hacking!

Aaron Jue

It’s Time For A Federal Anti-SLAPP Law To Protect Online Speakers

1 week 4 days ago

Our country’s fair and independent courts exist to resolve serious disputes. Unfortunately, some parties abuse the civil litigation process to silence others’ speech, rather than resolve legitimate claims. These types of censorious lawsuits have been dubbed Strategic Lawsuits Against Public Participation, or SLAPPs, and they have been on the rise over the past few decades. 

Plaintiffs who bring SLAPPs intend to use the high cost of litigation to harass, intimidate, and silence critics who are speaking out against them. A deep-pocketed plaintiff who files a SLAPP doesn’t need to win the case on the merits—by putting financial pressure on a defendant, along with the stress and time it takes to defend a case, they can take away a person’s free speech rights. 

Fortunately, a bill introduced in Congress today, the SLAPP Protection Act of 2022 (H.R. 8864), aims to deter vexatious plaintiffs from filing these types of lawsuits in federal court.

TAKE ACTION

TELL CONGRESS TO PASS A FEDERAL ANTI-SLAPP BILL

To stop lawsuits that are meant to harass people into silence, we need strong anti-SLAPP laws. When people get hit with a lawsuit because they’re speaking out on a matter of public concern, effective anti-SLAPP law allows for a quick review by a judge. If it’s determined that the case is a SLAPP, the lawsuit gets thrown out, and the SLAPP victim can recover their legal fees. 

In recent years, more states have passed new anti-SLAPP laws or strengthened existing ones.  Those state protections are effective against state court litigation, but they don’t protect people who are sued in federal court. 

Now, a bill has been introduced that would make real progress in stopping SLAPPs in federal courts. The SLAPP Protection Act will provide strong protections to nearly all speakers who are discussing issues of public concern. The SLAPP Protection Act also creates a process that will allow most SLAPP victims in federal court to get their legal fees paid by the people who bring the SLAPP suits. (Here’s our blog post and letter supporting the last federal anti-SLAPP bill that was introduced, more than seven years ago.) 

“Wealthy and powerful corporate entities are dragging citizens through meritless and costly litigation, to expose anyone who dares to stand up to them to financial and personal ruin,” said bill sponsor Rep. Jamie Raskin (D-MD) at a hearing yesterday in which he announced the bill. 

SLAPPs All Around 

SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples: 

Coal Ash Company Sued Environmental Activists

In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income around $8,000—were sued for $30 million by a Georgia-based company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page that said things like the landfill “affected our everyday life” and “You can’t walk outside, and you can not breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist group. 

Shiva Ayyadurai Sued A Tech Blog That Reported On Him

In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick fought back in court and his reporting remains online, but the legal fees had a big effect on his business. 

Logging Company Sued Greenpeace 

In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.) 

Pipeline Company Sued Environmental Activists

In 2017, Greenpeace, Rainforest Action Network, the Sierra Club, and other environmental groups were sued by Energy Transfer Partners because they opposed the Dakota Access Pipeline project. Energy Transfer said that the activists’ tweets, among other communications, amounted to a “fraudulent scheme” and that the oil company should be able to sue them under RICO anti-racketeering laws, which were meant to take on organized crime. 

Congressman Sued His Twitter Critics 

In 2019, anonymous Twitter accounts were sued by Rep. Devin Nunes, then a Congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles @DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to the case, but Virginia’s lack of an anti-SLAPP law has enticed many plaintiffs there. 

The Same Congressman Sued Media Outlets For Reporting On Him

Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper the Fresno Bee, and NBC. 

Fast Relief From SLAPPs

The SLAPP Protection Act meets EFF's criteria for a strong anti-SLAPP law. It would be a powerful tool for defendants hit with a federal lawsuit meant to take away their free speech rights. If the bill passes, any defendant sued for speaking out on a matter of public concern would be allowed to file a special motion to dismiss, which will be decided within 90 days. If the court grants the speaker’s motion, the claims are dismissed. In many situations, speakers who prevail on an anti-SLAPP motion will be entitled to their legal fees. 

The bill won’t reduce protections under state anti-SLAPP laws, either. So in cases where the state law may be as good, or even stronger, the current bill will become a floor, not a ceiling, for the rights of SLAPP defendants. 

EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws enhance the rights of all. We hope Congress passes the SLAPP Protection Act soon. 

TAKE ACTION

TELL CONGRESS TO PASS THE SLAPP PROTECTION ACT

Joe Mullin

Members of Congress Urge FTC to Investigate Fog Data Science

1 week 4 days ago

In the week since EFF and the Associated Press exposed how Fog Data Science purchases geolocation data on hundreds of millions of digital devices in the United States, and maps them for easy-to-use and cheap mass surveillance by police, elected officials have voiced serious concerns about this dangerous tech.

In a strong letter to Lina Khan, the chair of the Federal Trade Commission (FTC), Rep. Anna Eshoo of California on Tuesday criticized the “significant Fourth Amendment search and seizure concerns” raised by Fog and urged the FTC to investigate fully. As public records obtained by EFF show, police often use Fog’s mass surveillance tools without a warrant, in violation of our Fourth Amendment rights.

Eshoo wrote:

“The use of Fog is also seemingly incompatible with protections against unlawful search and seizure guaranteed by the Fourth Amendment. Consumers do not realize that they are potentially nullifying their Fourth Amendment rights when they download and use free apps on their phones. It would be hard to imagine consumers consenting to this if actually given the option, yet this is functionally what occurs.”

Eshoo also pointed out the new threat that Fog’s surveillance tool poses to people seeking reproductive healthcare. In a state where abortion has been criminalized, Fog’s Reveal tool could potentially allow police, without a warrant, to draw a geofence around a health clinic across state lines in a state where abortion remains legal, and see whether any phones there return to their own state. “In a post-Roe v. Wade world, it’s more important than ever to be highly mindful of how tools like Fog Reveal may present new threats as states across the country pass increasingly draconian bills restricting people’s access to abortion services and targeting people seeking reproductive healthcare,” Eshoo wrote.

The FTC recently sued another company selling geolocation data, Kochava, a commendable step to hold the company accountable for its unfair practices.

Eshoo is not alone. Senator Ron Wyden said in a tweet about Fog’s ability to facilitate mass surveillance, “Unfortunately, while it’s outrageous that data brokers are selling location data to law-enforcement agencies, it’s not surprising.”

We echo Eshoo’s request that the FTC conduct a full and thorough investigation into Fog Data Science. We continue to urge Congress to act quickly to regulate this out-of-control industry that jeopardizes our privacy, and allows police to conduct warrantless mass surveillance.  

Matthew Guariglia

The Fight to Overturn FOSTA, an Unconstitutional Internet Censorship Law, Continues

1 week 4 days ago

More than four years after its enactment, FOSTA remains an unconstitutional law that broadly censored the internet and harmed sex workers and others by chilling their ability to speak, organize, and access information online.

And the fight to overturn FOSTA continues. Last week, two human rights organizations, a digital library, a sex worker activist, and a certified massage therapist filed their opening brief in a case that seeks to strike down the law for its many constitutional violations.

Their brief explains to a federal appellate court why FOSTA is a direct regulation of people’s speech that also censors online intermediaries that so many rely upon to speak—classic First Amendment violations. The brief also details how FOSTA has harmed the plaintiffs, sex workers, and allies seeking to decriminalize the work and make it safer, primarily because of its vague terms and its conflation of sex work with coercive trafficking.

“FOSTA created a predictable speech-suppressing ratchet leading to ‘self-censorship of constitutionally protected material’ on a massive scale,” the plaintiffs, Woodhull Freedom Foundation, Human Rights Watch, The Internet Archive, Alex Andrews, and Eric Koszyk, argue. “Websites that support sex workers by providing health-related information or safety tips could be liable for promoting or facilitating prostitution, while those that assist or make prostitution easier—i.e., ‘facilitate’ it—by advocating for decriminalization are now uncertain of their own legality.”

FOSTA created new civil and criminal liability for anyone who “owns, manages, or operates an interactive computer service” and creates content (or hosts third-party content) with the intent to “promote or facilitate the prostitution of another person.” The law also expands criminal and civil liability to classify any online speaker or platform that allegedly assists, supports, or facilitates sex trafficking as though they themselves were participating “in a venture” with individuals directly engaged in sex trafficking.

FOSTA doesn’t just seek to hold platforms and hosts criminally responsible for the actions of sex-traffickers. It also introduces significant exceptions to the civil immunity provisions of one of the internet’s most important laws, 47 U.S.C. § 230. These exceptions create new state law criminal and civil liability for online platforms based on whether their users' speech might be seen as promoting or facilitating prostitution, or as assisting, supporting or facilitating sex trafficking.

The plaintiffs are not alone in viewing FOSTA as an overbroad censorship law that has harmed sex workers and other online speakers. Four friend-of-the-court briefs filed in support of their case this week underscore FOSTA’s disastrous consequences. 

The Center for Democracy & Technology’s brief argues that FOSTA negated the First Amendment’s protections for online intermediaries and thus undercut the vital role those services provide by hosting a broad and diverse array of users’ speech online.

“Although Congress may have only intended the laudable goal of halting sex trafficking, it went too far: chilling constitutionally protected speech and prompting online platforms to shut down users’ political advocacy and suppress communications having nothing to do with sex trafficking for fear of liability,” CDT’s brief argues.

A brief from the Transgender Law Center describes how FOSTA’s breadth has directly harmed lesbian, gay, transgender, and queer people.

“Although FOSTA’s text may not name gender or sexual orientation, FOSTA’s regulation of speech furthers the profiling and policing of LGBTQ people, particularly TGNC people, as the statute’s censorial effect has resulted in the removal of speech created by LGBTQ people and discussions of sexuality and gender identity,” the brief argues. “The overbroad censorship resulting from FOSTA has resulted in real and substantial harm to LGBTQ people’s First Amendment rights as well as economic harm to LGBTQ people and communities.”

Two different coalitions of sex worker advocacy and harm reduction groups filed briefs in support of the plaintiffs that show FOSTA’s direct impact on sex workers and how the law’s conflation of consensual sex work with coercive trafficking has harmed both victims of trafficking and sex workers.

A brief led by Call Off Your Old Tired Ethics (COYOTE) of Rhode Island published data from its recent survey of sex workers showing that FOSTA has made sex trafficking more prevalent and harder to combat.

“Every kind of sex worker, including trafficking survivors, have been impacted by FOSTA precisely because its broad terms fail to distinguish between different types of sex work and trafficking,” the brief argues. The brief goes on to argue that FOSTA’s First Amendment problems have “made sex work more dangerous by curtailing the ability to screen clients on trusted online databases, also known as blacklists.”

A brief led by Decriminalize Sex Work shows that “FOSTA is part of a legacy of federal and state laws that have wrongfully conflated human trafficking and adult consensual sex work while overlooking the realities of each.”

“The limitations on free speech caused by FOSTA have essentially censored harm reduction and safety information sharing, removed tools that sex workers used to keep themselves and others safe, and interrupted organizing and legislative endeavors to make policies that will enhance the wellbeing of sex workers and trafficking survivors alike,” the brief argues. “Each of these effects has had a devastating impact on already marginalized and vulnerable communities; meanwhile, FOSTA has not addressed nor redressed any of the issues cited as motivation for its enactment.”

The plaintiffs’ appeal marks the second time the case has gone up to the U.S. Court of Appeals for the District of Columbia Circuit. The plaintiffs previously prevailed in the appellate court when it ruled in 2020 that they had the legal right, known as standing, to challenge FOSTA, reversing an earlier district court ruling.

Members of Congress have also been concerned about FOSTA’s broad impacts, with senators introducing the SAFE SEX Workers Study Act for the last two years, though it has not become law.

The plaintiffs are represented by Davis Wright Tremaine LLP, Walters Law Group, Daphne Keller, and EFF.

Related Cases: Woodhull Freedom Foundation et al. v. United States
Aaron Mackey

San Francisco Police Must End Irresponsible Relationship with the Northern California Fusion Center

1 week 4 days ago

In yet another failure to follow the rules, the San Francisco Police Department is collaborating with the regional fusion center with nothing in writing—no agreements, no contracts, nothing—governing the relationship, according to new records released to EFF in its ongoing complaint against the agency.

This means that there is no document in place that establishes the limits and responsibilities for sharing and handling criminal justice data or intelligence between SFPD and the fusion center and other law enforcement agencies who access sensitive information through its network.

SFPD must withdraw immediately from any cooperation with the Northern California Regional Information Center (NCRIC). Every moment it continues to collaborate with NCRIC puts sensitive data and the civil rights of Bay Area residents at severe risk.

Fusion centers were started in the wake of 9/11 as part of a Department of Homeland Security program to improve data sharing between local, state, tribal, and federal law enforcement agencies. There are 79 fusion centers across the United States, each with slightly different missions and responsibilities, ranging from generating open-source intelligence reports to monitoring camera networks. NCRIC historically has served as the Bay Area hub for sharing data across agencies from automated license plate readers (ALPRs), face recognition, social media monitoring, drone operations, and "Suspicious Activity Reports" (SARs).

NCRIC requires all participating agencies to sign a data sharing agreement and non-disclosure agreement ("Safeguarding Sensitive But Unclassified Information"), which is consistent with federal guidelines for operating a fusion center. EFF has independently confirmed with NCRIC staff that SFPD has not signed such an agreement. This failure is even more surprising considering that SFPD has had two liaisons assigned to the fusion center and the police chief has served as chair of NCRIC's executive board.

In December 2020, EFF filed a public records request under the San Francisco Sunshine Ordinance, following a San Francisco Chronicle report suggesting that an SFPD officer had submitted a photo of a suspect to the fusion center's email list and received in response a match generated by face recognition, which would potentially violate San Francisco's face recognition ban. We sought records related to this particular case, but more generally, we sought communications related to other requests for photo identification submitted by SFPD, communications about face recognition, and any agreements between SFPD and NCRIC.

When SFPD failed to comply with our records request, we filed a complaint with the San Francisco Sunshine Ordinance Task Force, the citizen body assigned to oversee violations of open records and meetings laws. Many new documents were released and SFPD was found by the task force to have violated both the Sunshine Ordinance and the California Public Records Act. One document was missing though: the fusion center agreement.

New records released in the complaint now explain why: no such agreements exist. SFPD didn't sign any, according to multiple emails sent between staff.

SFPD can't simply solve this problem by signing the boilerplate agreement tomorrow. Any formal partnership or data-sharing relationship with NCRIC would have to go through the process required by the city's surveillance oversight ordinance, which requires public input into such agreements and the Board of Supervisors’ approval. SFPD should expect public opposition to its involvement with the fusion center, just as there was opposition to its involvement in the FBI's Joint Terrorism Task Force.

Even if that process were to move forward, the public must be involved in crafting the exact language of the agreement. For example, when the Bay Area Rapid Transit (BART) Police Department pursued an agreement with NCRIC, the grassroots advocacy group Oakland Privacy (an Electronic Frontier Alliance member) helped negotiate an agreement with stronger considerations for civil liberties and privacy.

This isn't the first time SFPD has played fast and loose with data regulations. EFF is currently suing the department for accessing a live camera network to spy on protesters without first following the process required by the surveillance oversight ordinance. EFF has also filed a second Sunshine Ordinance complaint against SFPD for failing to produce a mandated ALPR report in response to a public records request.

This latest episode re-emphasizes that SFPD has not earned the trust of the people when it comes to its use of technology and data. SFPD should be cut off from NCRIC immediately, and the Board of Supervisors should treat any claim about accountability from SFPD with skepticism. SFPD has proven it doesn't believe rules matter, and that should always be a deal-breaker when it comes to surveillance. 

Related Cases: Williams v. San Francisco
Dave Maass

EFF’s “Cover Your Tracks” Will Detect Your Use of iOS 16’s Lockdown Mode

2 weeks ago

Apple’s new iOS 16 offers a powerful tool for its most vulnerable users. Lockdown Mode reduces the avenues attackers have to hack into users’ phones by disabling certain often-exploited features. While it provides a solid defense against intrusion, it is also trivial to detect that this new feature is enabled on a device. Our web fingerprinting tool Cover Your Tracks has incorporated detection of Lockdown Mode and alerts users when we’ve determined they have this mode enabled.

Over the last few years, journalists, human rights defenders, and activists have increasingly become targets of sophisticated hacking campaigns. With a small cost to usability, at-risk populations can protect themselves from commonly used entry points into their devices. One such entry point is downloading remote fonts when visiting a webpage. iOS 16 in Lockdown Mode disallows remote fonts from being loaded from the web, which would otherwise have the potential to allow access to a device by exploiting the complex ways fonts are rendered. However, it is also easy to use a small piece of JavaScript code on the page to determine whether the font was blocked from being loaded.

While a large win for endpoint security, this is also a small loss for privacy. Compared with the millions who use iOS devices, relatively few people are likely to enable Lockdown Mode, so those who do stand out from the crowd as people who need extra protection. Web fingerprinting is a powerful technique to determine a user's browsing habits, circumventing normal mechanisms users have to avoid tracking, such as clearing cookies.

Make no mistake: Apple’s introduction of this powerful new protection is a welcome development for those that need it the most. But users should also be aware of the information they are exposing to the web while using this feature.

Bill Budington

U.S. Federal Employees Can Take A Stand for Digital Freedoms

2 weeks 3 days ago

It’s that time of the year again when the weather starts to cool down and the leaves start to turn all different shades and colors. More importantly, it is also time for U.S. federal employees to pledge their support for digital freedoms through the Combined Federal Campaign (CFC)!

The pledge period for the CFC is underway and EFF needs your help. Last year, U.S. federal employees raised over $34,000 for EFF through the CFC, helping us fight for free expression, privacy, and innovation on the internet so that we can help create a better digital future.

The Combined Federal Campaign is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.6 billion for local, national, and international charities. This year’s campaign runs from September 1 to January 14, 2023. Be sure to make your pledge for the Electronic Frontier Foundation before the campaign ends!

U.S. federal employees and retirees can give to EFF by going to GiveCFC.org and clicking the DONATE button to give via payroll deduction, credit/debit, or an e-check! If you have a renewing pledge, you can increase your support as well. Be sure to use EFF’s CFC ID #10437.

This year’s CFC campaign theme continues to build off of 2020’s “You Can Be The Face of Change.” U.S. federal employees and retirees give through the CFC to change the world for the better, together. With your support, EFF can continue our strides towards a diverse and free internet that benefits all of its users.

With support from those who pledged to EFF last year, we have rung alarm bells about a police equipment vendor’s now-thwarted plan to arm drones with tasers in response to school shootings, pushed back against government involvement in content moderation on social media platforms, and developed numerous digital security guides for those seeking and offering abortion resources after the overturning of federal protections for reproductive rights.

Federal employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF today by using our CFC ID #10437 when you make a pledge!

Christian Romero

EFF to California Governor: Protect Abortion Data Privacy

2 weeks 3 days ago

In the wake of the Supreme Court’s Dobbs decision, anti-choice sheriffs and bounty hunters will try to investigate and punish abortion seekers based on their internet browsing, private messaging, and phone app location data. Legislators must act now to protect this personal data. Reproductive justice requires data privacy.

That’s why EFF urges California Governor Gavin Newsom to sign A.B. 1242, authored by Assemblymember Rebecca Bauer-Kahan. This bill would protect the data privacy of people seeking abortion, by limiting how California-based entities disclose abortion-related information. Some of the bill’s requirements include the following:     

  • California courts would be prohibited from authorizing wiretaps, pen registers, and other searches for the purpose of enforcing out-of-state laws against abortions that are lawful in California.
  • California businesses that provide electronic communication services, such as email and private messaging, would be prohibited from, in California, providing information in response to out-of-state legal process that arises from anti-abortion laws.
  • California businesses that provide electronic communication services or remote computing services would be prohibited from disclosing communications content and metadata in response to an out-of-state warrant that arises from anti-abortion laws.
  • California government agencies would be prohibited from providing information to any individual or out-of-state agency regarding an abortion lawfully performed in California.

This bill is a strong step forward. But more is needed. Congress and the states must enact comprehensive consumer data privacy legislation, like the federal “My Body, My Data” bill, that limits how businesses collect, retain, use, and share our data. Legislators also must enact new limits on police obtaining personal data from businesses, like banning dragnet police demands to identify all people who visited the same place or used the same keyword search term.

EFF also supports two other California bills that would protect the data privacy of vulnerable people who seek medical sanctuary in California. S.B. 107 would protect trans youths who visit to obtain gender-affirming care, and A.B. 2091 would protect people who visit to obtain an abortion.

You can read here our letter urging California’s Governor to sign A.B. 1242.

Adam Schwartz

VICTORY: Slack Offers Retention Settings to Free Workspaces

2 weeks 5 days ago

In a victory for users, Slack has fixed its long-standing retention problems for free workspaces. Instead of holding onto your messages on its servers for as long as your workspace exists, Slack is now giving free workspace admins the option to automatically delete all messages older than 90 days. This basic ability to decide which information Slack should keep and which information it should delete should be available to all users, and we applaud Slack for making this change.

The new retention settings for free accounts were announced in a July blog post and are effective as of September 1st. Follow Slack's instructions to change retention in your own workspaces, or share them with your workspace admin.

Since 2018, we have urged Slack to recognize its higher-risk users and take more steps to protect them. While Slack is intended for use in white-collar office environments, its free version has proven useful for abortion rights activists, get-out-the-vote phone banking organizers, unions, and other political organizing and activism activities.

Some might argue that the mismatch between enterprise tool design and wider use cases means Slack is simply the wrong tool for high-risk activists. But for many people, especially small and under-resourced organizations, Slack is the most viable option: it’s convenient, easy to use without extensive technical expertise, and already familiar to many.

Enterprise companies have a prerogative to charge more money for an advanced product, but best-practice privacy and security features should not be restricted to those who can afford to pay a premium. Slack’s decision to do the right thing and offer basic retention settings more widely is especially important because the people who cannot afford enterprise subscriptions are often the ones who need strong security and privacy protections the most. 

Gennie Gebhart

FTC Sues Location Data Broker

2 weeks 5 days ago

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder.

So three cheers for the Federal Trade Commission for seeking to end this harmful marketplace! The FTC recently sued Kochava, a location data broker, alleging the company violated a federal ban on unfair business practices. The FTC’s complaint against Kochava illustrates the dangers created by this industry.

Kochava harvests and monetizes a staggering volume of location data. The company claims that on a monthly basis, it provides its customers access to 94 billion data points arising from 125 million active users. The FTC analyzed just one day of Kochava’s data, and found 300 million data points arising from 60 million devices.

Kochava’s data can easily be linked to identifiable people. According to the FTC:

The location data provided by Kochava is not anonymized. It is possible to use the geolocation data, combined with the mobile device’s MAID [that is, its “Mobile Advertising ID”], to identify the mobile device’s user or owner. For example, some data brokers advertise services to match MAIDs with ‘offline’ information, such as consumers’ names and physical addresses.

Even without such services, however, location data can be used to identify people. The location data sold by Kochava typically includes multiple timestamped signals for each MAID. By plotting each of these signals on a map, much can be inferred about the mobile device owners. For example, the location of a mobile device at night likely corresponds to the consumer’s home address. Public or other records may identify the name of the owner or resident of a particular address.

Kochava’s location data can harm people, according to the FTC:

[T]he data may be used to identify consumers who have visited an abortion clinic and, as a result, may have had or contemplated having an abortion. In fact, … it is possible to identify a mobile device that visited a women’s reproductive health clinic and trace that mobile device to a single-family residence.

Likewise, the FTC explains that the same data can be used to identify people who visit houses of worship, domestic violence shelters, homeless shelters, and addiction recovery centers. Such invasions of location privacy expose people, in the words of the FTC, to “stigma, discrimination, physical violence, emotional distress, and other harms.”

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.”

The FTC lays out a powerful case that Kochava’s brokering of location data is unfair and thus unlawful. We hope the court will rule in the FTC’s favor. Other location data brokers should take a hard look at their own business model or risk similar judicial consequences.

The FTC has recently taken many other welcome actions to protect people’s digital rights. Last month, the agency announced it is exploring new rulemaking against commercial surveillance. Earlier this year, the FTC fined Twitter for using account security data for targeted ads, brought lawsuits to protect people’s right-to-repair, and issued a policy statement against edtech surveillance.

Adam Schwartz

EFF to Ninth Circuit: Social Media Content Moderation is Not "State Action"

2 weeks 5 days ago

Former EFF intern Shashank Sirivolu contributed to this blog post.  

Social media users who have sued companies for deleting, demonetizing, and otherwise moderating their content have tried several arguments that this violates their constitutional rights. Courts have consistently ruled against them because social media platforms themselves have the First Amendment right to moderate content. The government and the courts cannot tell them what speech they must remove or, on the flip side, what speech they must carry. And when the government unlawfully conspires with or coerces a platform to censor a user, the user should only be able to hold the platform liable for the government’s interference in rare circumstances.  

In some cases, based on the “state action” doctrine, courts can treat a platform’s action as that of the government. This may allow a user to hold the platform liable for what would otherwise be the platform’s private exercise of its First Amendment rights. These cases are rare and narrow. “Jawboning,” or when the government influences content moderation policies, is common. We have argued that courts should only hold a jawboned social media platform liable as a state actor if: (1) the government replaces the intermediary’s editorial policy with its own, (2) the intermediary willingly cedes its editorial implementation of that policy to the government regarding the specific user speech, and (3) the censored party has no remedy against the government.  

To ensure that the state action doctrine does not nullify social media platforms’ First Amendment rights, we recently filed two amicus briefs in the Ninth Circuit in Huber v. Biden and O'Handley v. Weber. Both briefs argued that these conditions were not met, and the courts should not hold the platforms liable under a state action theory.  

In Huber v. Biden, the plaintiff accused Twitter of conspiring with the White House to suspend a user’s account for violating the company’s policy against disseminating harmful and misleading information related to COVID-19. Our brief argued that the plaintiff’s theory was flawed for several reasons. First, the government did not replace Twitter’s editorial policy with its own, but, at most, advised the company about its concerns regarding the harm of misinformation about the virus. Second, Huber does not allege that the government ever read, much less talked to Twitter about, the tweet at issue. Finally, because Huber brought a claim against the government directly, she may have a remedy for her claim.  

In O’Handley v. Weber, the plaintiff accused Twitter of conspiring with the California Secretary of State to censor and suspend a user’s Twitter account for violating the company’s policies regarding election integrity. In direct response to concerns about election interference in the 2016 Presidential election, the California Legislature established the Office of Election Cybersecurity within the California Secretary of State's office. While the Office of Election Cybersecurity notified Twitter about one of the plaintiff’s tweets that it believed contained potential misinformation, there is nothing unconstitutional about the government speaking about its concerns to a private actor. And even if the government did cross the line, O'Handley did not demonstrate that this one notification got Twitter to cede its editorial decision-making to the government. Rather, Twitter may have considered the government’s view but ultimately made its own decision to suspend O’Handley. Finally, because O’Handley brought a claim against the Secretary of State directly, he may have a remedy. 

While it is important that internet users have a well-defined avenue for holding social media companies liable for harmful collaborations with the government, it must be narrow enough to preserve the platforms’ First Amendment rights to curate and edit their content. Otherwise, users themselves will end up being harmed because they will lose access to platforms with varied forums for speech. 

Mukund Rathi

Arizona Law Tramples People’s Constitutional Right to Record Police

2 weeks 6 days ago
EFF, two Arizona chapters of the National Lawyers Guild, Poder in Action, and Mass Liberation AZ filed a brief in federal court opposing the government's attempt to thwart police accountability.

SAN FRANCISCO–A new Arizona law that bans people from recording videos within eight feet of police violates the constitutional rights of legal observers, grassroots activists, and other Arizonans, says a brief filed Friday in federal court by the Electronic Frontier Foundation (EFF), two Arizona chapters of the National Lawyers Guild (NLG), Poder in Action, and Mass Liberation AZ.

“Arizonans routinely hold police accountable throughout their communities by recording them within eight feet,” said Mukund Rathi, an EFF attorney and Stanton Fellow focusing on free speech litigation. “Arizonans use these recordings to document police activity at protests, expose false charges against protesters, and inform the public of police racism and misconduct. Everyone must be free to use mobile devices and social media to record and publish the news, including how police use their powers.”

The new law makes it a crime, punishable by up to a month in jail, to record videos within eight feet of law enforcement activity. The law was signed by Gov. Doug Ducey in July and is scheduled to take effect Sept. 24.

Several news organizations and the American Civil Liberties Union of Arizona sued last month to prevent the law from going into effect, arguing it “creates an unprecedented and facially unconstitutional content-based restriction on speech about an important governmental function.”

The friend-of-the-court brief filed Friday by EFF and its collaborators agrees, and helps illustrate the potential impact of the law by detailing how the grassroots organizations create and use recordings to hold police accountable and keep their communities free and safe.

The groups, represented by EFF and co-counsel Kathleen E. Brody of Phoenix, told the court the new law harms efforts by legal observers and others to exercise their fundamental right to record police activity. Protest activity often occurs within eight feet of police, and sightlines of police activity are often obscured at greater distances. Also, officers often move closer to protesters and those who are recording videos–essentially creating the crime under this law. Video recordings also are more accurate, detailed, and shareable than written note-taking.

"Police and prosecutors in Maricopa County have arrested and falsely charged hundreds of protesters for their free expression in recent years," said Lola N'sangou, Executive Director of Mass Liberation AZ, a Black-led abolitionist group based in south Phoenix and organizing throughout Arizona. "Scores of these protesters faced false felony charges that were later dropped, in many cases due to recordings filmed within eight feet of the arrests and surrounding circumstances. Without these recordings, most of these protesters would have spent decades in prison. One protester faced 100.5 years on completely fabricated charges." 

The case is Arizona Broadcasters Association v. Brnovich, 2:22-cv-01431, in the U.S. District Court for the District of Arizona.

For the EFF, NLG, Poder in Action, and Mass Liberation AZ amicus brief: https://eff.org/document/arizona-broadcasters-association-v-brnovich-amicus-brief

For the underlying complaint: https://www.aclu.org/legal-document/arizona-broadcasters-association-v-brnovich-complaint

For more on the right to record: https://www.eff.org/issues/right-record

Contact: Mukund Rathi, Stanton Legal Fellow, mukund@eff.org
Josh Richman

Honoring Peter Eckersley, Who Made the Internet a Safer Place for Everyone

3 weeks ago

With deep sadness, EFF mourns the loss of our friend, the technologist, activist, and cybersecurity expert Peter Eckersley. Peter worked at EFF for a dozen years and was EFF’s Chief Computer Scientist for many of those. Peter was a tremendous force in making the internet a safer place. He was recently diagnosed with colon cancer and passed away suddenly on Friday. 

The impact of Peter’s work on encrypting the web cannot be overstated. The fact that transport layer encryption on the web is so ubiquitous that it's nearly invisible is thanks to the work Peter began. It’s a testament to the boldness of his vision that he decided that we could and should encrypt the web, and to his sheer tenacity that he kept at it despite disbelief from so many, and a seemingly endless series of blockages and setbacks. There is no doubt that without Peter’s relentless energy, his strategy of cheerful cajoling, and his flexible cleverness, the project would not have even launched, much less succeeded so thoroughly.

While encrypting the web would have been enough, Peter played a central role in many groundbreaking projects to create free, open source tools that protect the privacy of users’ internet experience by encrypting communications between web servers and users. Peter’s work at EFF included privacy and security projects such as Panopticlick, HTTPS Everywhere, Switzerland, Certbot, Privacy Badger, and the SSL Observatory.

His most ambitious project was probably Let’s Encrypt, the free and automated certificate authority, which entered public beta in 2015. Peter had been incubating the project for several years, but was able to leverage the famous “smiley face” image from the Edward Snowden leaks showing where SSL was added and removed, to build a coalition that actually made it happen. Let’s Encrypt fostered the web’s transition from non-secure HTTP connections that were vulnerable to eavesdropping, content injection, and cookie stealing, to the more secure HTTPS, so websites could offer secure connections to their users and protect them from network-based threats. 

By 2017 it had issued 100 million certificates; by 2021, about 90% of all web page visits used HTTPS. As of today it has issued over a billion certificates to over 280 million websites. 

Peter joined EFF as a staff technologist in 2006, when the role was largely to advise EFF’s lawyers and activists so that our work was always technically correct and smart. His passion at the time was the mismatch between copyright law and how the Internet functions, and he finished his PhD while at EFF. Soon, Peter and EFF’s first staff technologist, Seth Schoen, began to see ways they could leverage small hacks to existing internet infrastructure systems to build technologies that would spur more security and freedom online, as well as ensure that the internet served everyone. They began to build technical projects, recruited and hired some of the internet's most innovative technologists, and before long created EFF’s Technology Projects Team as a full pillar of EFF’s work.

Peter helped launch Switzerland, a tool to tell users when their ISP was interfering with their web traffic, and helped create a movement for open wireless networks. He also documented violations of net neutrality, advocated for keeping modern computer platforms open, and was a driving force behind the campaign against the SOPA/PIPA internet blacklist legislation, after a call from his friend Aaron Swartz. The list goes on and on and includes advising EFF lawyers and activists on all manner of litigation and lobbying efforts.

We'll never forget the gleam in his eye as Peter started talking about his latest idea, nor his wide smile as he kept working to find a way to overcome obstacles and often almost bodily carry his ideas into being. He had the gift of being able to widen the aperture of any problem, giving a perspective that could help see patterns and options that were previously invisible. His single-minded passion could sometimes lead him to step on toes and gloss over problems, but his heart and vision never wavered from what would best serve humanity as a whole. We’ll also never forget the time he secretly built a gazebo on the roof of EFF, or his puckish fashion sense—one year we made special red EFF-logo socks for the entire staff to honor his style.

Peter left EFF in 2018 to focus on studying and calling attention to the malicious use of artificial intelligence and machine learning. He founded AI Objectives Institute, a collaboration between major technology companies, civil society, and academia, to ensure that AI is designed and used to benefit humanity.

Peter’s vision, audacity, and commitment made the web, and the world, a better place. We will miss him.


Cindy Cohn

Hollywood’s Insistence on New Draconian Copyright Rules Is Not About Protecting Artists

3 weeks 4 days ago

Stop us if you’ve heard these: piracy is driving artists out of business. The reason they are starving is that no one pays for things, just illegally downloads them. You wouldn’t steal a car. These arguments are old, and they are being dragged back out to drum up support for rules that would strangle online expression. And they are, as ever, about Hollywood wanting to control creativity, not about protecting artists.

When it comes to box office numbers, they’ve remained pretty consistent except when a global pandemic curtailed theater visits. The problem facing Hollywood is the same one that it’s faced since its inception: greed.

After the fever-pitch moral panic of the early 2000s, discussions about "piracy" disappeared from pop culture for about a decade. Now they’re back, both from the side explaining why piracy happens and the side that wants everyone punished.

Illegal downloading and streaming are not the cause of Hollywood’s woes. They’re a symptom of a system that is broken for everyone except the few megacorporations and the billionaires at the top of them.  Infringement went down when the industry adapted and gave people what they wanted: convenient, affordable, and legal alternatives. But recently, corporations have given up on affordability and convenience.

The Streaming Hellscape

It’s not news to anyone that the video streaming landscape has, in the last few years, become unnavigable. Finding the shows and movies you want has become a treasure hunt where, when you find the prize, you have to fork over your credit card information for it. And then the prize could disappear at any moment.

Rather than having a huge catalog of diverse studio material, which is what made Netflix popular to begin with, convenience has been replaced with exclusivity. But people don’t want everything a single studio offers. They want certain things. But just like the cable bundles that streaming replaced, a subscription fee isn’t just for what you want; it’s for everything the company offers. And it feels like a bargain to pay for all of it when a physical copy of one thing costs the same as a month’s subscription.

Except that paying for every service isn’t affordable. There are too many and they all have one or two things people want. So you can rotate which ones you pay for every so often, which is inconvenient, or just swallow the cost, which is not affordable. And none of that guarantees that what you want is going to be available. Content appears and disappears from streaming services all the time.

Disney removed Avatar from Disney+ because it is re-releasing it in theaters ahead of the sequel. Avatar is a 13-year-old movie, and re-releasing it in theaters should be a draw because of the theater-going experience. Avatar shouldn’t have to be removed from streaming, since its major appeal is what it looks like on a big screen in 3D. But Disney isn’t taking the chance that the moviegoing experience of Avatar alone will get people to pay. It’s making sure people have to pay extra—either by going to the theater or paying for a copy.

And that’s when the content even has a physical form.

After the Warner Bros. merger with Discovery, the new owners wasted almost no time removing things from the streaming service HBO Max, including a number of things that were exclusive to the streaming service. That means there is no place to find copies of the now-removed shows. People used to joke that the internet was forever—once something was online it could not be removed. But that’s not the case anymore. Services that go under take all of their exclusive media with them. Corporate decisions like this remove things from the public record.

It’s a whole new kind of lost media, and like lost media of the past, it’s only going to be preserved by those individuals who did the work to make and save copies of it, often risking draconian legal liability, regardless of how the studio feels about that work.

When things are shuffled around, disappeared, or flat-out not available for purchase, people will make their own copies in order to preserve them. That is not a sign that punishments for copyright infringement are inadequate. It’s a failure of the market to provide what consumers want.

It’s disingenuous for Hollywood’s lobbyists to claim that they need harsher copyright laws to protect artists when it’s the studios that are busy disappearing the creations of these artists. Most artists want their work to find an audience and the fractured, confusing, and expensive market prevents that, not the oft-alleged onslaught of copyright infringement.

Hollywood Cares About Money, Not Artists

There’s a saying that, in various forms, prevails within the creative industry. It goes something like “Art isn’t made in Hollywood. Occasionally, if you get very lucky, it escapes.”

Going back to Warner Bros. and HBO Max: another decision made by the new management was to cancel projects that were largely finished. This included a Batgirl movie, which had a budget of $90 million. The decision was made so that the studio could take a tax write-off, against the wishes of its star and directors, who said, “As directors, it is critical that our work be shown to audiences, and while the film was far from finished, we wish that fans all over the world would have the opportunity to see and embrace the final film themselves. Maybe one day they will insha’Allah.”

The point is that Hollywood isn’t in the art business. It’s in the business business. It is never trying to pay artists, it’s always trying to find a way to keep money out of artists’ hands and in the corporate coffers. There’s a reason “Hollywood accounting” has a Wikipedia entry. It’s an industry infamous for arguing that a movie that made a billion dollars at the box office actually made no money, all to keep from paying the artists involved.

Traditional movie making is a unionized endeavor. Basically everyone involved save the studio has a guild or union. That means that there are minimum standards for the employment contracts that studios have to meet. New technology is attractive to studios because it isn’t covered by those union agreements. They can ignore the demands of labor and then, if the unions threaten to refuse to work with them, they get to negotiate new terms. That’s why the Writers Guild went on strike in 2007.

The new streaming landscape also allowed studios to mistreat their below-the-line workers: everyone who is not an actor, producer, writer, or director. So, most people. IATSE, the union that represents most of those workers, overwhelmingly authorized a strike over working conditions. They particularly called out how streaming projects paid them less, even when those projects had budgets larger than those of traditional media.

Streaming has ruined writers’ ability to make a livable wage from the job, and has all but eliminated mentoring and on-set experience, contrary to the desires of the actual people who make the shows. Instead of investing in writers, studios push for more “efficient” models that make writing jobs harder to get and producing experience nearly impossible to gain.

So when Hollywood lobbyists argue for draconian copyright laws “for artists,” it should ring especially hollow.

What they want is exclusive control. That includes the ability to charge constantly for access, which means preventing people from having their own copies. Hollywood has fought against audiences having their own copies for as long as the technology has existed. They sued to eliminate VCRs, and when they lost, they started selling tapes. They sued the makers of DVRs, and when they lost again, they opened up to video-on-demand. And now, streaming has given them what they’ve always wanted: complete control over copies of their work. No one owns a copy of the material they watch on a streaming service; they get only a license to watch it for a temporary period.

This way, the studios can make you pay for something every month instead of once. They can take it down so you can’t watch it at all. They can edit things post-release, losing some of the history of the creation. And without copies available to own, they prevent creative newcomers from exercising their right to make fair use of it. All of this is anti-artist.

Studios want to point to an outside reason for their actions. Copyright infringement is convenient that way. And when they endorse draconian legislation like the filter mandates of the Strengthening Measures to Advance Rights Technologies Copyright Act, that is why. But when infringement happens, it’s a symptom of a market not meeting demand, not the cause of the problem.

Take Action

Tell your senators to oppose The Filter Mandate

Katharine Trendacosta