Victory! California Bill To Impose Mandatory Internet ID Checks Is Dead—It Should Stay That Way
A misguided bill that would have required many people to show ID to get online has died without getting a floor vote in the California legislature, where key deadlines for passing bills lapsed this weekend. Thank you to our supporters for helping us kill this wrongheaded bill, especially those of you who took the time to reach out to your legislators.
EFF opposed this bill from the start. Bills that allow politicians to define what is “sexually explicit” content and then enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors.
A.B. 3080 would have required any website with more than 33% “sexually explicit” content to erect an age verification system, most likely one requiring users to upload a scanned government-issued ID. The proposal did not, and could not, differentiate between sites that are largely graphic sexual content and the huge array of sites that host some content appropriate for minors alongside other content geared toward adults. Bills like this are similar to having state prosecutors insist on ID uploads in order to turn on Netflix, regardless of whether the movie you’re seeking is G-rated or R-rated.
Political attempts to use pornography as an excuse to censor and control the internet are now almost 30 years old. These proposals persist despite the fact that putting government overseers between Americans and what they read and watch is not only unconstitutional, but broadly unpopular. In Reno v. ACLU, the Supreme Court struck down key provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors. In 2004, the Supreme Court again rejected an age-gated internet in Ashcroft v. ACLU, blocking enforcement of the Child Online Protection Act, a federal law of that era.
The right of adults to read and watch what they want online is settled law. It is also a right that the great majority of Americans want to keep. Age-gating systems that analyze and copy our biometric data, our government IDs, or both, would be a huge privacy setback for Americans of all ages. Electronically uploading and copying IDs is far from the equivalent of an in-person card check. And these systems won’t be effective at moderating what children see, which can and must be done by individuals and families.
Other states have passed online age-verification bills this year, including a Texas bill that EFF has asked the U.S. Supreme Court to review. Tennessee’s age-verification bill even includes criminal penalties, allowing prosecutors to bring felony charges against anyone who “publishes or distributes”—i.e., links to—sexual material.
California politicians should let this unconstitutional and censorious proposal fade away, and resist the urge to bring it back next year. Californians do not want mandatory internet ID checks, nor are they interested in fines and incarceration for those who fail to use them.
EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches
The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit.
The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights.
The case details repeated actions by the police to target and try to intimidate the plaintiffs and other local civil rights activists solely for their political speech. After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged that Ms. Armendariz “threw” her bike at an officer as he was running, and even though the bike never touched the officer, they charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period, and to run a search, with no time limit, for 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization over a one-week period, even though the Center was not accused of any crime.
After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity. The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit.
As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.
The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Our electronic devices’ immense storage capacities mean that just one type of data can reveal more than was previously possible, because it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, videos, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially users’ private messages.
It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity.
Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime Ms. Armendariz was charged with—throwing her bike at an officer—would be found on her devices. And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive, given that access to these messages may well have allowed police to map the activists who communicated with the Center about social and political advocacy.
The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it is all the more crucial that warrants apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That failed to happen here, implicating several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.
Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, the charges are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case.
The Tenth Circuit should allow this case to proceed. Allowing police to conduct a virtual fishing expedition through a protester’s devices, especially when the justification for that search is an arrest for a crime that has no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.
Americans Are Uncomfortable with Automated Decision-Making
Imagine that a company you recently applied to used an artificial intelligence program to analyze your application and expedite the review process. Does that creep you out? Well, you’re not alone.
Consumer Reports recently released a national survey finding that Americans are uncomfortable with the use of artificial intelligence (AI) and algorithmic decision-making in their day-to-day lives. The survey of 2,022 U.S. adults was administered by NORC at the University of Chicago and examined public attitudes on a variety of issues. Consumer Reports found:
- Nearly three-quarters of respondents (72%) said they would be “uncomfortable”—including nearly half (45%) who said they would be “very uncomfortable”—with a job interview process that allowed AI to screen their interview by grading their responses and, in some cases, facial movements.
- About two-thirds said they would be “uncomfortable”—including about four in ten (39%) who said they would be “very uncomfortable”—allowing banks to use such programs to determine whether they qualified for a loan, or allowing landlords to use such programs to screen them as a potential tenant.
- More than half said they would be “uncomfortable”—including about a third who said they would be “very uncomfortable”—with video surveillance systems using facial recognition to identify them, and with hospital systems using AI or algorithms to help with diagnosis and treatment planning.
The survey findings indicate that people feel disempowered by the loss of control over their digital footprints, and by corporations and government agencies adopting AI technology to make life-altering decisions about them. Yet states are moving at breakneck speed to implement AI “solutions” without first creating meaningful guidelines to address these reasonable concerns. In California, Governor Newsom issued an executive order to address government use of AI, and recently granted five vendors approval to test AI tools for a myriad of state agencies. The administration hopes to apply AI to tasks such as health-care facility inspections, assisting residents who are not fluent in English, and customer service.
The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct AI or a computer algorithm to make a decision about them. Another super-majority (91%) said they would want a way to correct the underlying data when a computer algorithm was used to make a decision about them.
As states explore how best to protect consumers as corporations and government agencies deploy algorithmic decision-making, EFF urges strict standards of transparency and accountability. Laws should take a “privacy first” approach that ensures people have a say in how their private data is used. At a minimum, people should have a right to access the data being used to make decisions about them and an opportunity to correct it. Likewise, agencies and businesses using automated decision-making should offer an appeal process. Governments should ensure that consumers have protections from discrimination in algorithmic decision-making by both corporations and the public sector. Another priority should be a complete ban on many government uses of automated decision-making, including predictive policing.
Whether it’s deciding who gets housing or the best mortgages, who gets an interview or a job, or whom law enforcement or ICE investigates, people are uncomfortable with algorithmic decision-making that affects their freedoms. Now is the time for strong legal protections.