You Really Do Have Some Expectation of Privacy in Public

3 months 2 weeks ago

Being out in the world advocating for privacy often means having to face a chorus of naysayers and nihilists. When we spend time fighting the expansion of Automated License Plate Readers capable of tracking cars as they move, or the growing ubiquity of both public and private surveillance cameras, we often hear a familiar refrain: “you don’t have an expectation of privacy in public.” This is not true. In the United States, you do have some expectation of privacy—even in public—and it’s important to stand up and protect that right.

How is it possible to have an expectation of privacy in public? The answer lies in the rise of increasingly advanced surveillance technology. When you are out in the world, of course you are going to be seen, and your presence will be recorded in one way or another. There’s nothing stopping a person standing across the street from observing you. If law enforcement has decided to investigate you, officers can physically follow you. If you go to the bank or visit a courthouse, it’s reasonable to assume you’ll end up on that building’s own video security system.

But our ever-growing network of sophisticated surveillance technology has fundamentally transformed what it means to be observed in public. Today’s technology can effortlessly track your location over time, collect sensitive, intimate information about you, and keep a retrospective record of this data that may be stored for months, years, or indefinitely. This data can be collected for any purpose, or even for none at all. And taken in the aggregate, this data can paint a detailed picture of your daily life—a picture that is more cheaply and easily accessed by the government than ever before.

Because of this, we’re at risk of exposing more information about ourselves in public than we were in decades past. This, in turn, affects how we think about privacy in public. While your expectation of privacy is certainly different in public than it would be in your private home, there is no legal rule that says you lose all expectation of privacy whenever you’re in a public place. To the contrary, the U.S. Supreme Court has emphasized since the 1960s that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” The Fourth Amendment protects “people, not places.” U.S. privacy law instead typically asks whether your expectation of privacy is something society considers “reasonable.”

This is where mass surveillance comes in. While it is unreasonable to assume that everything you do in public will be kept private from prying eyes, there is a real expectation that when you travel throughout town over the course of a day—running errands, seeing a doctor, going to or from work, attending a protest—the entirety of your movements is not being precisely tracked, stored by a single entity, and freely shared with the government. In other words, you have a reasonable expectation of privacy in at least some of the uniquely sensitive and revealing information collected by surveillance technology, although courts and legislatures are still working out the precise contours of what that includes.

In 2018, the U.S. Supreme Court decided a landmark case on this subject, Carpenter v. United States. In Carpenter, the court recognized that you have a reasonable expectation of privacy in the whole of your physical movements, including your movements in public. It therefore held that the defendant had an expectation of privacy in 127 days’ worth of accumulated historical cell site location information (CSLI). CSLI records, generated as your phone connects to nearby cell towers, can provide a comprehensive chronicle of your movements over an extended period of time. Accessing this information intrudes on your private sphere, and the Fourth Amendment ordinarily requires the government to obtain a warrant in order to do so.
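
To see why accumulated location records are so much more revealing than any single observation, here is a minimal sketch (in Python, using entirely hypothetical data and tower names) of the kind of pattern analysis that aggregated CSLI makes possible:

```python
from collections import Counter
from datetime import datetime

# Hypothetical CSLI-style records: (timestamp, tower area). Each ping is
# individually unremarkable; the picture emerges only in the aggregate.
pings = [
    ("2024-03-01 08:10", "tower_near_home"),
    ("2024-03-01 09:05", "tower_near_clinic"),
    ("2024-03-01 12:30", "tower_near_office"),
    ("2024-03-02 08:12", "tower_near_home"),
    ("2024-03-02 18:45", "tower_near_church"),
    ("2024-03-03 08:09", "tower_near_home"),
    ("2024-03-03 09:02", "tower_near_clinic"),
]

# Count appearances per coarse location across the whole period.
visits = Counter(tower for _, tower in pings)
for tower, count in visits.most_common():
    print(f"{tower}: seen {count} time(s)")

# Consistent early-morning pings at one tower imply a home location --
# exactly the kind of inference the Carpenter court warned about.
mornings = {tower for ts, tower in pings
            if datetime.strptime(ts, "%Y-%m-%d %H:%M").hour < 9}
print("Likely home area:", mornings)
```

Even this toy analysis surfaces a likely home and a recurring clinic visit from seven records; 127 days of real CSLI contains vastly more pings and correspondingly richer inferences.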

Importantly, you retain this expectation of privacy even when those records are collected while you’re in public. In coming to its holding, the Carpenter court wished to preserve “the degree of privacy against government that existed when the Fourth Amendment was adopted.” Historically, we have not expected the government to secretly catalogue and monitor all of our movements over time, even when we travel in public. Allowing the government to access cell site location information contravenes that expectation. The court stressed that these accumulated records reveal not only a person’s particular public movements, but also their “familial, political, professional, religious, and sexual associations.”

As Chief Justice John Roberts said in the majority opinion:

“Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology . . . or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell phone site data]. The location information obtained from Carpenter’s wireless carriers was the product of a search. . . .

As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” These location records “hold for many Americans the ‘privacies of life.’” . . .  A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales. Accordingly, when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”

As often happens in the wake of a landmark Supreme Court decision, there has been some confusion among lower courts in trying to determine what other types of data and technology violate our expectation of privacy when we’re in public. There are admittedly still several open questions: How comprehensive must the surveillance be? How long of a time period must it cover? Do we only care about backward-looking, retrospective tracking? Still, one overall principle remains certain: you do have some expectation of privacy in public.

If law enforcement or the government wants to know where you’ve been all day long over an extended period of time, that combined information is considered revealing and sensitive enough that police need a warrant for it. We strongly believe the same principle also applies to other forms of surveillance technology, such as automated license plate reader camera networks that capture your car’s movements over time. As more and more integrated surveillance technologies become the norm, we expect courts will expand existing legal decisions to protect this expectation of privacy.

It's crucial that we do not simply give up on this right. Your location over time, even if you are traversing public roads and public sidewalks, is revealing, more revealing than many people realize. If you drive from a specific person’s house to a protest, and then back to that house afterward—what can police infer from having those sensitive and chronologically expansive records of your movement? What could people insinuate about you if you went to a doctor’s appointment at a reproductive healthcare clinic and then drove to a pharmacy three towns away from where you live? Scenarios like this involve people driving on public roads or being seen in public, but we also have to take time into consideration. Tracking someone’s movements all day is not nearly the same thing as seeing their car drive past a single camera at one time and location.

The law may still be catching up with the technology, but that doesn’t mean it’s a surveillance free-for-all just because you’re in public. The government still faces important restrictions on tracking our movements over time, even when we find ourselves out in the world walking past individual security cameras. This is why we do what we do: despite the naysayers, someone has to continue to hold the line and educate the world on how privacy isn’t dead.

Matthew Guariglia

EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions

3 months 2 weeks ago

EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits and immigrant rights groups to government accountability watchdogs and civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it is used as part of decision-making regarding immigration enforcement and adjudications.

Read the letter here

The letter highlighted findings from a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies: U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives to establish standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt by the federal government to use AI responsibly and to contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated it into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.

Yet the evidence begs to differ, or, at best, is mixed.  

As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, which EFF categorically opposes: they often “predict” that crime is more likely to occur in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design that software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.
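
A toy sketch (in Python, with invented numbers, not any vendor’s actual model) makes the “garbage in, garbage out” dynamic concrete: a “predictive” score trained only on historical arrest counts simply echoes where police patrolled in the past:

```python
# Hypothetical historical arrest counts per neighborhood. If neighborhood A
# was heavily patrolled in the past, it has more recorded arrests,
# regardless of the true underlying crime rate.
historical_arrests = {
    "neighborhood_a": 120,  # heavily patrolled
    "neighborhood_b": 15,   # lightly patrolled
}

def predict_crime_risk(neighborhood: str) -> float:
    """A naive 'predictive policing' score: share of past arrests.

    The model never observes crime itself, only past enforcement,
    so it reproduces whatever bias shaped its training data.
    """
    total = sum(historical_arrests.values())
    return historical_arrests[neighborhood] / total

for name in historical_arrests:
    print(f"{name}: risk score {predict_crime_risk(name):.2f}")
# neighborhood_a: risk score 0.89
# neighborhood_b: risk score 0.11
```

Acting on these scores sends more patrols to the already over-policed neighborhood, which generates more arrests and an even higher score next year: a feedback loop, not a prediction.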

In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operations without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process for determining eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.

At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools. 

Hannah Zhao

U.S. Federal Employees: Plant Your Flag for Digital Freedoms Today!

3 months 2 weeks ago

Like clockwork, September is here—and so is the Combined Federal Campaign (CFC) pledge period!  

The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. You can now make a pledge to support EFF’s lawyers, technologists, and activists in the fight for privacy and free speech online. Last year members of the CFC community raised nearly $34,000 to support digital civil liberties. 

Giving to EFF through the CFC is easy! Just head over to GiveCFC.org and use our ID 10437. Once there, click DONATE to give via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can increase your support as well!

This year's campaign theme—GIVE HAPPY—shows that when U.S. federal employees and retirees give together, they make a meaningful difference to countless individuals throughout the world. They ensure that organizations like EFF can continue working towards our goals even during challenging times.

With support from those who pledged through the CFC last year, EFF has continued its work defending privacy and free speech online.

Federal employees and retirees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!

Christian Romero

EFF Calls For Release of Alexey Soldatov, "Father of the Russian Internet"

3 months 2 weeks ago

EFF was deeply disturbed to learn that Alexey Soldatov, known as the “father of the Russian Internet,” was sentenced in July to two years in prison by a Moscow court for alleged “misuse” of IP addresses.

In 1990, Soldatov led the Relcom computer network that made the first Soviet connection to the global internet. He also served as Russia’s Deputy Minister of Communications from 2008 to 2010.

Soldatov was convicted on charges related to an alleged deal to transfer IP addresses to a foreign organization. He and his lawyers have denied the accusations. His family, many supporters, and Netzpolitik suggest that the accusations are politically motivated. Soldatov’s former business partner, Yevgeny Antipov, was also sentenced to eighteen months in prison.

Soldatov, a nuclear scientist trained at the Kurchatov nuclear research institute, built the Russian Institute for Public Networks (RIPN) during the Soviet era; RIPN was responsible for administering and allocating IP addresses in Russia from the early 1990s onwards. The network RIPN created was called Relcom (RELiable COMmunication). During the 1991 KGB-led coup d’etat, Relcom—unlike traditional media—remained uncensored. As his son, journalist Andrei Soldatov, recalls, Alexey Soldatov insisted on keeping the lines open under all circumstances.

Following the collapse of the Soviet Union, Soldatov ran Relcom as the first ISP in Russia and has since helped establish organizations that provide the technical backbone of the Russian Internet. For this long service, he has been dubbed “the father of RuNet” (the term used to describe the Russian-speaking internet). During the time that Soldatov served as Russia’s deputy minister of communications, he was instrumental in getting ICANN to approve the use of Cyrillic in domain names. He also rejected then-preliminary discussions about isolating the Russian internet from the global internet. 

Multiple reports indicate that this prosecution may be politically motivated, and we are deeply concerned that it is. Soldatov suffers from both prostate cancer and a heart condition, and this sentence would almost certainly further endanger his health.

His son Andrei Soldatov writes, “The Russian state, vindictive and increasingly violent by nature, decided to take his liberty, a perfect illustration of the way Russia treats the people who helped contribute to the modernization and globalization of the country.”

Because of our concerns, EFF calls for his immediate release.

Electronic Frontier Foundation

Victory! California Bill To Impose Mandatory Internet ID Checks Is Dead—It Should Stay That Way

3 months 2 weeks ago

A misguided bill that would have required many people to show ID to get online has died without getting a floor vote in the California legislature, where key deadlines for bill passage lapsed this weekend. Thank you to our supporters for helping us kill this wrongheaded bill, especially those of you who took the time to reach out to your legislators.

EFF opposed this bill from the start. Bills that allow politicians to define what is “sexually explicit” content and then enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. 

A.B. 3080 would have required an age verification system, most likely scanned uploads of government-issued IDs, for any website that had more than 33% “sexually explicit” content. The proposal did not, and could not, differentiate between sites that are largely graphic sexual content and the huge array of sites that host some content appropriate for minors alongside other content geared toward adults. Bills like this are similar to having state prosecutors insist on ID uploads in order to turn on Netflix, regardless of whether the movie you’re seeking is G-rated or R-rated.

Political attempts to use pornography as an excuse to censor and control the internet are now almost 30 years old. These proposals persist despite the fact that appointing government overseers of what Americans read and watch is not only unconstitutional, but broadly unpopular. In Reno v. ACLU, the Supreme Court struck down the indecency provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors. In 2004, the Supreme Court again rejected an age-gated internet in Ashcroft v. ACLU, blocking enforcement of a similar federal law of that era.

The right of adults to read and watch what they want online is settled law. It is also a right that the great majority of Americans want to keep. The age-gating systems that propose to analyze and copy our biometric data, our government IDs, or both, will be a huge privacy setback for Americans of all ages. Electronically uploading and copying IDs is far from the equivalent of an in-person card check. And they won’t be effective at moderating what children see, which can and must be done by individuals and families. 

Other states have passed online age-verification laws this year, including a Texas law that EFF has asked the U.S. Supreme Court to review. Tennessee’s age-verification law even includes criminal penalties, allowing prosecutors to bring felony charges against anyone who “publishes or distributes”—i.e., links to—sexual material.

California politicians should let this unconstitutional and censorious proposal fade away, and resist the urge to bring it back next year. Californians do not want mandatory internet ID checks, nor are they interested in fines and incarceration for those who fail to use them. 

Joe Mullin

EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches

3 months 2 weeks ago

The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit.

The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights.

The case details repeated actions by the police to target and try to intimidate plaintiffs and other local civil rights activists solely for their political speech. After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged Ms. Armendariz “threw” her bike at an officer as he was running, and even though the bike never touched the officer, police charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period and to conduct a time-unlimited search of 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization for a week, even though the Center was not accused of any crime.
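
To see how little a keyword list this generic actually narrows a search, here is a minimal sketch (in Python, with invented example messages; the keywords are drawn from the warrant as described above) of the loosest plausible reading of such matching:

```python
# A few of the broad terms the warrant reportedly listed.
keywords = {"officer", "housing", "human", "right", "celebration", "protest"}

# Entirely hypothetical, everyday messages of the sort any phone holds.
messages = [
    "Turn right at the light, then park by the housing office.",
    "The loan officer called about our mortgage application.",
    "Party planning: celebration starts at 7, bring snacks!",
    "Reminder: HR training on human resources policy tomorrow.",
]

# Case-insensitive substring matching: the loosest plausible reading
# of a warrant that sets no limit on context or time period.
hits = [m for m in messages if any(k in m.lower() for k in keywords)]

print(f"{len(hits)} of {len(messages)} innocuous messages match")
# -> 4 of 4: terms this common sweep in essentially everything.
```

Whatever matching method the police actually used, the point stands: words like “human” and “right” appear in nearly any sizable collection of personal messages, so a keyword list like this imposes no real limit on the search.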

After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity. The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit.

As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.

The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Our electronic devices’ immense storage capacities mean that even a single type of data can reveal more than was previously possible, because it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, video, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially with respect to users’ private messages.

It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity.

Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime they charged Ms. Armendariz with—throwing her bike at an officer—would be found on her devices. And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive given that access to these messages may well have allowed police to map activists who communicated with the Center and about social and political advocacy.

The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it’s all the more crucial that warrants apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That failed to happen here, and the searches thus implicated several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.

Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, they are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case.

The Tenth Circuit should allow this case to proceed. Letting police conduct a virtual fishing expedition through a protester’s devices, especially when the justification for that search is an arrest for a crime with no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.

Brendan Gilligan

Americans Are Uncomfortable with Automated Decision-Making

3 months 2 weeks ago

Imagine that a company you recently applied to used an artificial intelligence program to analyze your application and help expedite the review process. Does that creep you out? Well, you’re not alone.

Consumer Reports recently released a national survey finding that Americans are uncomfortable with the use of artificial intelligence (AI) and algorithmic decision-making in their day-to-day lives. The survey of 2,022 U.S. adults was administered by NORC at the University of Chicago and examined public attitudes on a variety of issues. Consumer Reports found:

  • Nearly three-quarters of respondents (72%) said they would be “uncomfortable”—including nearly half (45%) who said they would be “very uncomfortable”—with a job interview process that allowed AI to screen their interview by grading their responses and, in some cases, facial movements.
  • About two-thirds said they would be “uncomfortable”—including about four in ten (39%) who said they would be “very uncomfortable”—with banks using such programs to determine whether they qualified for a loan, or with landlords using such programs to screen them as potential tenants.
  • More than half said they would be “uncomfortable”—including about a third who said they would be “very uncomfortable”—with video surveillance systems using facial recognition to identify them, and with hospital systems using AI or algorithms to help with diagnosis and treatment planning.

The survey findings indicate that people feel disempowered by losing control over their digital footprints, and by corporations and government agencies adopting AI technology to make life-altering decisions about them. Yet states are moving at breakneck speed to implement AI “solutions” without first creating meaningful guidelines to address these reasonable concerns. In California, Governor Newsom issued an executive order to address government use of AI, and recently granted five vendors approval to test AI tools for a myriad of state agencies. The administration hopes to apply AI to such tasks as health-care facility inspections, assisting residents who are not fluent in English, and customer service.

The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct AI or a computer algorithm to make a decision about them. Another super-majority (91%) said they would want a way to correct that data when a computer algorithm is used.

As states explore how to best protect consumers as corporations and government agencies deploy algorithmic decision-making, EFF urges strict standards of transparency and accountability. Laws should have a “privacy first” approach that ensures people have a say in how their private data is used. At a minimum, people should have a right to access what data is being used to make decisions about them and have the opportunity to correct it. Likewise, agencies and businesses using automated decision-making should offer an appeal process. Governments should ensure that consumers have protections from discrimination in algorithmic decision-making by both corporations and the public sector. Another priority should be a complete ban on many government uses of automated decision-making, including predictive policing.

Whether it’s deciding who gets housing or the best mortgage rates, who gets an interview or a job, or whom law enforcement or ICE investigates, people are uncomfortable with algorithmic decision-making that affects their freedoms. Now is the time for strong legal protections.

Catalina Sanchez