EFF Tells Patent Office: Don’t Cut the Public Out of Patent Review

3 months 2 weeks ago

EFF has submitted its formal comment to the U.S. Patent and Trademark Office (USPTO) opposing a set of proposed rules that would sharply restrict the public’s ability to challenge wrongly granted patents. These rules would make inter partes review (IPR)—the main tool Congress created to fix improperly granted patents—unavailable in most of the situations where it’s needed most.

If adopted, they would give patent trolls exactly what they want: a way to keep questionable patents alive and out of reach.

If you haven’t commented yet, there’s still time. The deadline is today, December 2.

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

Sample comment:

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

IPR Is Already Under Siege, And These Rules Would Make It Worse

Since USPTO Director John Squires was sworn into office just over two months ago, we’ve seen the Patent Office take an increasingly aggressive stance against IPR petitions. In a series of director-level decisions, the USPTO has denied patent challengers the chance to be heard—sometimes dozens of them at a time—without explanation or reasoning. 

That reality makes this rulemaking even more troubling. The USPTO is already denying virtually every new petition challenging patents. These proposed rules would cement that closed-door approach and make it harder for challengers to be heard. 

What EFF Told the USPTO

Our comment lays out how these rules would make patent challenges nearly impossible to pursue for small businesses, nonprofits, software developers, and everyday users of technology. 

Here are the core problems we raised:

First, no one should have to give up their court defenses just to use IPR. The USPTO proposal would force defendants to choose: either use IPR and risk losing their legal defenses, or keep their defenses and lose IPR.

That’s not a real choice. Anyone being sued or threatened for patent infringement needs access to every legitimate defense. Patent litigation is devastatingly expensive, and forcing people to surrender core rights in federal court is unreasonable and unlawful.

Second, one early case should not make a bad patent immune forever. Under the proposed rules, if a patent survives any earlier validity fight—no matter how rushed, incomplete, or poorly reasoned—everyone else could be barred from filing an IPR later.

New prior art? Doesn’t matter. Better evidence? Doesn’t matter. 

Congress never intended IPR to be a one-shot shield for bad patents. 

Third, patent owners could manipulate timing to shut down petitions. The rules would let the USPTO deny IPRs simply because a district court case might move faster.

Patent trolls already game the system by filing in courts with rapid schedules. This rule would reward that behavior. It allows patent owners—not facts, not law, not the merits—to determine whether an IPR can proceed. 

IPR isn't supposed to be a race to the courthouse. It’s supposed to be a neutral review of whether the Patent Office made a mistake.

Why Patent Challenges Matter

IPR isn’t perfect, and it doesn’t apply to every patent. But compared to multimillion-dollar federal litigation, it’s one of the only viable tools available to small companies, developers, and the public. It needs to remain open. 

When an overbroad patent gets waved at hundreds or thousands of people—podcasters, app developers, small retailers—IPR is often the only mechanism that can actually fix the underlying problem: the patent itself. These rules would take that option away.

There’s Still Time To Add Your Voice

If you haven’t submitted a comment yet, now is the time. The more people speak up, the harder it becomes for these changes to slip through.

Comments don’t need to be long or technical. A few clear sentences in your own words are enough. We’ve written a short sample comment below. It’s even more powerful if you add a sentence or two describing your own experience. If you mention EFF in your comment, it helps our collective impact. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Joe Mullin

AI Chatbot Companies Should Protect Your Conversations From Bulk Surveillance

EFF intern Alexandra Halbeck contributed to this blog

When people talk to a chatbot, they often reveal highly personal information they wouldn’t share with anyone else. Chat logs are digital repositories of our most sensitive and revealing information. They are also tempting targets for law enforcement, to which the U.S. Constitution gives only one answer: get a warrant.

AI companies have a responsibility to their users to make sure the warrant requirement is strictly followed, to resist unlawful bulk surveillance requests, and to be transparent with their users about the number of government requests they receive.

Chat logs are deeply personal, just like your emails.

Tens of millions of people use chatbots to brainstorm, test ideas, and explore questions they might never post publicly or even admit to another person. Whether advisable or not, people also turn to consumer AI companies for medical information, financial advice, and even dating tips. These conversations reveal people’s most sensitive information.

Consider the sensitivity of the following prompts: “how to get abortion pills,” “how to protect myself at a protest,” or “how to escape an abusive relationship.” These exchanges can reveal everything from health status to political beliefs to private grief. A single chat thread can expose the kind of intimate detail once locked away in a handwritten diary.

Without privacy protections, users would be chilled in their use of AI systems for learning, expression, and seeking help.

Chat logs require a warrant.

Whether you draft an email, edit an online document, or ask a question to a chatbot, you have a reasonable expectation of privacy in that information. Chatbots may be a new technology, but the constitutional principle is old and clear. Before the government can rifle through your private thoughts stored on digital platforms, it must do what it has always been required to do: get a warrant.

For over a century, the Fourth Amendment has protected the content of private communications—such as letters, emails, and search engine prompts—from unreasonable government searches. AI prompts require the same constitutional protection.

This protection is not aspirational—it already exists. The Fourth Amendment draws a bright line around private communications: the government must show probable cause and obtain a particularized warrant before compelling a company to turn over your data. Companies like OpenAI acknowledge this warrant requirement explicitly, while others like Anthropic could stand to be more precise.

AI companies must resist bulk surveillance orders.

AI companies that create chatbots should commit to having your back and resisting unlawful bulk surveillance orders. A valid search warrant requires law enforcement to show a judge probable cause and to particularly describe the thing to be searched. Bulk surveillance orders often fail that test.

What do these overbroad orders look like? In the past decade or so, police have often sought “reverse” search warrants for user information held by technology companies. Rather than searching for one particular individual, police have demanded that companies rummage through their giant databases of personal data to help develop investigative leads. This has included “tower dumps” or “geofence warrants,” in which police order a company to search all users’ location data to identify anyone who’s been near a particular place at a particular time. It has also included “keyword” warrants, which seek to identify any person who typed a particular phrase into a search engine. This could include a chilling keyword search for a well-known politician’s name or a busy street, or a geofence warrant near a protest or church.

Courts are beginning to rule that these broad demands are unconstitutional. And after years of complying, Google has finally made it technically difficult—if not impossible—to provide mass location data in response to a geofence warrant.

This is an old story: if a company stores a lot of data about its users, law enforcement (and private litigants) will eventually seek it out. Law enforcement is already demanding user data from AI chatbot companies, and it will only increase. These companies must be prepared for this onslaught, and they must commit to fighting to protect their users.

In addition to minimizing the amount of data accessible to law enforcement, they can start with three promises to their users. These aren’t radical ideas. They are basic transparency and accountability standards to preserve user trust and to ensure constitutional rights keep pace with technology:

  1. commit to fighting bulk orders for user data in court,
  2. commit to providing users with advance notice before complying with a legal demand so that users can choose to fight on their own behalf, and 
  3. commit to publishing periodic transparency reports, which tally up how many legal demands for user data the company receives (including the number of bulk orders specifically).

Mario Trujillo

How to Identify Automated License Plate Readers at the U.S.-Mexico Border

U.S. Customs and Border Protection (CBP), the Drug Enforcement Administration (DEA), and scores of state and local law enforcement agencies have installed a massive dragnet of automated license plate readers (ALPRs) in the US-Mexico borderlands. 

In many cases, the agencies have gone out of their way to disguise the cameras from public view. And the problem is only going to get worse: as recently as July 2025, CBP put out a solicitation to purchase 100 more covert trail cameras with license plate-capture ability. 

Last month, the Associated Press published an in-depth investigation into how agencies have deployed these systems and exploited this data to target drivers. But what do these cameras look like? Here's a guide to identifying ALPR systems when you're driving the open road along the border.

Special thanks to researcher Dugan Meyer and AZ Mirror's Jerod MacDonald-Evoy. All images by EFF and Meyer were taken within the last three years. 

ALPR at Checkpoints and Land Ports of Entry 

All land ports of entry have ALPR systems that capture the plates of all vehicles entering and exiting the country. They typically look like this: 

ALPR systems at the Eagle Pass International Bridge Port of Entry. Source: EFF

Most interior checkpoints, which are anywhere from a few miles to more than 60 miles from the border, are also equipped with ALPR systems operated by CBP. However, the DEA operates a parallel system at most interior checkpoints in southern border states. 

When it comes to checkpoints, here's the rule of thumb: If you're traveling away from the border, you are typically being captured by a CBP/Border Patrol system (Border Patrol is a sub-agency of CBP). If you're traveling toward the border, it is most likely a DEA system.

Here's a representative example of a CBP checkpoint camera system:

ALPR system at the Border Patrol checkpoint near Uvalde, Texas. Source: EFF

At a typical port of entry or checkpoint, each vehicle lane will have an ALPR system. We've even seen Border Patrol checkpoints that were temporarily closed continue to funnel people through these ALPR lanes, even though there was no one on hand to vet drivers face-to-face. According to CBP's Privacy Impact Assessments (2017, 2020), CBP keeps this data for 15 years, but agents can generally only search the most recent five years' worth of data. 

The scanners were previously made by a company called Perceptics, which was infamously hacked, leading to a breach of driver data. The systems have since been "modernized" (i.e., replaced) by SAIC.

Here's a close-up of the new systems:

Frontal ALPR camera at the checkpoint near Uvalde, Texas. Source: EFF

In 2024, the DEA announced plans to integrate port of entry ALPRs into its National License Plate Reader Program (NLPRP), which the agency says is a network of both DEA systems and external law enforcement ALPR systems that it uses to investigate crimes such as drug trafficking and bulk cash smuggling.

Again, if you're traveling towards the border and you pass a checkpoint, you're often captured by parallel DEA systems set up on the opposite side of the road. However, these systems have also been found to be installed on their own away from checkpoints. 

These are a major component of the DEA's NLPRP, which has a standard retention period of 90 days. This program dates back to at least 2010, according to records obtained by the ACLU. 

Here is a typical DEA system that you will find installed near existing Border Patrol checkpoints:

DEA ALPR set-up in southern Arizona. Source: EFF

These are typically made by a different vendor, Selex ES, which also includes the brands ELSAG and Leonardo. Here is a close-up:

Close-up of a DEA camera near the Tohono O'odham Nation in Arizona. Source: EFF

Covert ALPR

Along border highways, law enforcement agencies have disguised cameras to capture your movements as you drive. 

The exact number of covert ALPRs at the border is unknown, but to date we have identified approximately 100 sites. We know CBP and DEA each operate covert ALPR systems, but it isn't always possible to know which agency operates any particular set-up. 

Another rule of thumb: if a covert ALPR has a Motorola Solutions camera (formerly Vigilant Solutions) inside, it's likely a CBP system. If it has a Selex ES camera inside, then it is likely a DEA camera. 

Here are examples of construction barrels with each kind of camera: 

A covert ALPR with a Motorola Solutions ALPR camera near Calexico, Calif. Source: EFF

These are typically seen along the roadside, often in sets of three, and almost always connected to some sort of solar panel. They are often placed behind existing barriers.

A covert ALPR with a Selex ES camera in southern Arizona. Source: EFF

The DEA models are also found by the roadside, as well as inside or near checkpoints. 

If you're curious (as we were), here's what they look like inside, courtesy of the US Patent and Trademark Office:

Patent for portable covert license plate reader. Source: USPTO

In addition to orange construction barrels, agencies also conceal ALPRs in yellow sand barrels. For example, these can be found throughout southern Arizona, especially in the southeastern part of the state.

A covert ALPR system in Arizona. Source: EFF

ALPR Trailers

Sometimes a speed trailer or signage trailer isn't designed so much for safety as to conceal ALPR systems. Sometimes ALPRs are attached to nondescript trailers with no discernible purpose that you'd hardly notice by the side of the road. 

It's important to note that it's difficult to know who these belong to, since they often aren't marked. We know that all levels of government, even in the interior of the country, have purchased these setups.

Here are some of the different flavors of ALPR trailers:

An ALPR speed trailer in Texas. Source: EFF

ALPR trailer in Southern California. Source: EFF

ALPR trailer in Southern California. Source: EFF

An ALPR unit in southern Arizona. Source: EFF

ALPR unit in southern Arizona. Source: EFF

A Jenoptik Vector ALPR trailer in La Joya, Texas. Source: EFF

One particularly worrisome version of an ALPR trailer is the Jenoptik Vector: at least two jurisdictions along the border have equipped these trailers not only with ALPR, but with TraffiCatch technology that gathers Bluetooth and Wi-Fi identifiers. This means that in addition to gathering plates, these trailers can also log nearby mobile devices, such as phones, laptops, and even vehicle entertainment systems.

Stationary ALPR 

Stationary or fixed ALPR is one of the more traditional ways of installing these systems. The cameras are placed on existing utility poles or other infrastructure or on poles installed by the ALPR vendor. 

For example, here's a DEA system installed on a highway arch:

The lower set of ALPR cameras belong to the DEA. Source: Dugan Meyer CC BY

ALPR camera in Arizona. Source: Dugan Meyer CC BY

Flock Safety

At the local level, thousands of cities around the United States have adopted fixed ALPR, with the company Flock Safety grabbing a huge chunk of the market over the last few years. County sheriffs and municipal police along the border have also embraced the trend, with many using funds earmarked for border security to purchase these systems. Flock allows these agencies to share data with one another and contribute their ALPR scans to a national pool of data. As part of a pilot program, Border Patrol had access to this ALPR data for most of 2025. 

A typical Flock Safety setup involves attaching cameras and solar panels to poles. For example:

Flock Safety ALPR poles installed just outside the Tohono O'odham Nation in Arizona. Source: EFF

A close-up of a Flock Safety camera in Douglas, Arizona. Source: EFF

We've also seen these camera poles placed outside the Santa Teresa Border Patrol station in New Mexico.

Flock may now be the most common provider nationwide, but it isn't the only player in the field. DHS recently released a market survey of 16 different vendors providing similar technology.  

Mobile ALPR 

ALPR cameras can also be found attached to patrol cars. Here's an example of a Motorola Solutions ALPR attached to a Hidalgo County Constable vehicle in South Texas:

Mobile ALPR on a Hidalgo County Constable vehicle. Source: Weslaco Police Department

These allow officers not only to capture ALPR data in real time as they drive, but also to receive an in-car alert when a scan matches a vehicle on a "hot list," the term for a list of plates that law enforcement has flagged for further investigation. 

Here's another example: 

Mobile ALPR in La Mesa, Calif. Source: La Mesa Police Department Facebook page

Identifying Other Technologies 

EFF has been documenting the wide variety of technologies deployed at the border, including surveillance towers, aerostats, and trail cameras. To learn more, download EFF's zine, "Surveillance Technology at the US-Mexico Border" and explore our map of border surveillance, which includes Google Street View links so you can see exactly how each installation looks on the ground. Currently we have mapped out most DEA and CBP checkpoint ALPR setups, with covert cameras planned for addition in the near future.

Dave Maass

We’re Doubling Down on Digital Rights. You Can, Too.

Technology can uplift democracy, or it can be an authoritarian weapon. EFF is making sure it stays on the side of freedom. We’re defending encryption, exposing abusive surveillance tech, fighting government overreach, and standing up for free expression. But we need your help to protect digital rights—and right now, your donation will be matched dollar-for-dollar.

Power up!

Join EFF Today & Get a Free Donation Match

It’s Power Up Your Donation Week and all online contributions get an automatic match up to $302,700. Many thanks to the passionate EFF supporters who created this year's matching fund! The Power Up matching challenge offers a rare opportunity to double your impact on EFF’s legal, educational, advocacy, and free software work when it’s needed most. If you’ve been waiting for the right moment to give—this is it.

Digital rights are human rights. Governments have silenced online speech, corporations seek to exploit our data for profit, and police are deploying dystopian tools to track our every move. But with the support of EFF’s members, the fight is far from over.

How EFF is fighting back:

  • Creating tools to help people understand and protect their rights
  • Holding powerful institutions accountable in court when those rights are threatened
  • Pushing back against surveillance regimes through the justice system and in legislatures
  • Locking arms with attorneys, technologists, and defenders of digital freedom—including you

As an EFF member, you’ll have your choice of conversation-starting gear as a token of our thanks. Choose from stickers, EFF's 35th Anniversary Cityscape t-shirt, Motherboard hoodie, and more. You’ll also get a bonus Take Back CTRL-themed camera cover set with any member gift.

Will you donate today for privacy and free speech? Your gift will be matched for free, fueling the fight to stop tech from being a tyrant’s dream.

Already an EFF Member? Help Us Spread the Word!

EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:

Don't let democracy be undermined by tools of surveillance and control. Donate to EFF this week and you'll get an automatic match. https://eff.org/power-up

Bluesky | Facebook | LinkedIn | Mastodon
(More at eff.org/social)

_________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Aaron Jue

The UK Has It Wrong on Digital ID. Here’s Why.

In late September, the United Kingdom’s Prime Minister Keir Starmer announced his government’s plans to introduce a new digital ID scheme in the country to take effect before the end of the Parliament (no later than August 2029). The scheme will, according to the Prime Minister, “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like people’s name, date of birth, nationality or residency status, and photo to verify their right to live and work in the country. 

This is the latest example of a government creating a new digital system that is fundamentally incompatible with a privacy-protecting and human rights-defending democracy. This past year alone, we’ve seen federal agencies across the United States explore digital IDs to prevent fraud, the Transportation Security Administration accepting “Digital passport IDs” in Android, and states contracting with mobile driver’s license (mDL) providers. And as we’ve said many times, digital ID is not for everyone, and policymakers should ensure access for people with or without a digital ID. 

But instead, the UK is pushing forward with its plans to roll out digital ID in the country. Here are three reasons why those policymakers have it wrong. 

Digital ID allows the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

Mission Creep 

In his initial announcement, Starmer stated: “You will not be able to work in the United Kingdom if you do not have digital ID. It's as simple as that.” Since then, the government has been forced to clarify those remarks: digital ID will be mandatory to prove the right to work, and will only take effect after the scheme's proposed introduction in 2028, rather than retrospectively. 

The government has also confirmed that digital ID will not be required for pensioners, students, and those not seeking employment, and will also not be mandatory for accessing medical services, such as visiting hospitals. But as civil society organizations are warning, it's possible that the required use of digital ID will not end here. Once this data is collected and stored, it provides a multitude of opportunities for government agencies to expand the scenarios where they demand that you prove your identity before entering physical and digital spaces or accessing goods and services. 

The government may also be able to request information from workplaces on who is registering for employment at that location, or collaborate with banks to aggregate different data points to determine who is self-employed or not registered to work. It potentially leads to situations where state authorities can treat the entire population with suspicion of not belonging, and would shift the power dynamics even further towards government control over our freedom of movement and association. 

And this is not the first time that the UK has attempted to introduce digital ID: politicians previously proposed similar schemes intended to control the spread of COVID-19, limit immigration, and fight terrorism. In a country increasing its deployment of other surveillance technologies like face recognition, this raises additional concerns about how digital ID could lead to new divisions and inequalities based on the data obtained by the system. 

These concerns compound the underlying narrative that digital ID is being introduced to curb illegal immigration to the UK: that digital ID would make it harder for people without residency status to work in the country because it would lower the possibility that anyone could borrow or steal the identity of another. Not only is there little evidence that digital ID will limit illegal immigration, but checks on the right to work in the UK already exist. This narrative is inflammatory and misleading; Liberal Democrat leader Ed Davey noted it would do “next to nothing to tackle channel crossings.”

Inclusivity is Not Inevitable, But Exclusion Is 

While the government announced that its digital ID scheme will be inclusive enough to work for those without access to a passport, reliable internet, or a personal smartphone, as we’ve been saying for years, digital ID leaves vulnerable and marginalized people not only out of the debate but ultimately out of the society that these governments want to build. We remain concerned about the potential for digital identification to exacerbate existing social inequalities, particularly for those with reduced access to digital services or people seeking asylum. 

The UK government has said a public consultation will be launched later this year to explore alternatives, such as physical documentation or in-person support for the homeless and older people; but it’s short-sighted to think that these alternatives are viable or functional in the long term. For example, UK organization Big Brother Watch reported that only about 20% of Universal Credit applicants can use online ID verification methods. 

These individuals should not be an afterthought attached to the end of the announcement for further review. If a tool does not work for those without access to essentials such as the internet or a physical ID, it should not exist.

Digital ID schemes also exacerbate other inequalities in society: abusers, for example, could prevent others from getting jobs or proving other statuses by denying them access to their ID. Likewise, as the scope of digital ID expands, people could be forced to prove their identities to different government agencies and officials, raising issues of institutional discrimination when phones fail to load or when the Home Office has incorrect information on an individual. This is not an unrealistic scenario considering the frequency of internet connectivity issues, or circumstances like passports and other documentation expiring.

Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID.

Attacks on Privacy and Surveillance 

Digital ID systems expand the number of entities that may access personal information and consequently use it to track and surveil. The UK government has nodded to this threat. Starmer stated that the technology would “absolutely have very strong encryption” and wouldn't be used as a surveillance tool. Moreover, junior Cabinet Office Minister Josh Simons told Parliament that “data associated with the digital ID system will be held and kept safe in secure cloud environments hosted in the United Kingdom” and that “the government will work closely with expert stakeholders to make the programme effective, secure and inclusive.” 

But if digital ID is needed to verify people’s identities multiple times per day or week, ensuring end-to-end encryption is the bare minimum the government should require. Unlike sharing a National Insurance Number, a digital ID will show an array of personal information that would otherwise not be available or exchanged. 

This would create a rich environment for hackers or hostile agencies to obtain swathes of personal information on those based in the UK. And if previous schemes in the country are anything to go by, the government’s ability to handle giant databases is questionable. Notably, the eVisa’s multitude of failures last year illustrated the harms that digital IDs can bring, with issues like government system failures and internet outages leading to people being detained, losing their jobs, or being made homeless. Checking someone’s identity against a database in real time requires a host of online and offline factors to work, and the UK has yet to take the structural steps required to remedy this.

Moreover, we know that the Cabinet Office and the Department for Science, Innovation and Technology will be involved in the delivery of digital ID and are clients of U.S.-based tech vendors, specifically Amazon Web Services (AWS). The UK government has spent millions on AWS (and Microsoft) cloud services in recent years, and the One Government Value Agreement (OGVA)—first introduced in 2020, which provides discounts by treating the UK government and public sector organizations as a single client for cloud services—is still active. It is essential that any data collected is not stored or shared with third parties, including through cloud agreements with companies outside the UK.

And even if the UK government published comprehensive plans to ensure data minimization in its digital ID, we will still strongly oppose any national ID scheme. Any identification issued by the government with a centralized database is a power imbalance that can only be enhanced with digital ID, and both the public and civil society organizations in the country are against this.

Ways Forward

Digital ID regimes strip privacy from everyone and further marginalize those seeking asylum or undocumented people. They are pursued as a technological solution to offline problems but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences. 

We cannot base our human rights on the government’s mere promise to uphold them. On December 8th, politicians in the country will be debating a petition that reached almost 3 million signatories rejecting mandatory digital ID. If you’re based in the UK, you can contact your MP (external campaign links) to oppose the plans for a digital ID system. 

The case for digital identification has not been made. The UK government must listen to people in the country and say no to digital ID.

Paige Collings

EFF’s Holiday Gift Guide

3 months 2 weeks ago

Technology is supercharging the attack on democracy and EFF is fighting back. We’re suing to stop government surveillance. We're fighting to protect free expression online. And we're building tools to protect your data privacy.

Help support our mission with new gear from EFF's online store, perfect gifts for the digital rights defender in your life. Take 20% off your order today with code BLACKFRI. Thanks for being an EFF supporter!

Liquid Core Dice are perfect for tabletop games. The metal clear-view EFF display tin contains a seven-piece set of sharp-edged dice. These glittery dice will show that you roll with the crew protecting our civil liberties online.

Celebrate equity and accessibility with this tactile braille sticker that depicts the fiery figure of Lady Justice with braille characters reading "justice" and "EFF." With this embossed sticker, you won't just be showing off your support for justice, you'll actually be able to feel it.

Applaud reproductive rights with this gift bundle hailing your data privacy and personal freedom. The bundle includes all items featuring our mascot for choice and privacy, Lady Lock: the "My Body, My Data, My Choice" tote bag, a "Honey, I Encrypt Everything" sticker, and a heat-changing mug that reveals its secret slogan when hot.

Explore the mysteries of the web with an iconic Bigfoot de la Sasquatch lapel pin: privacy is a "human" right! Continue the journey with campfire tales from The Encryptids, the rarely-seen creatures who’ve become digital rights legends. This sparkling cloisonne pin measures 1.5 inches tall and features a high-quality spring backing.

Find all these items, plus t-shirts, hoodies, beanies, and more at the EFF Online Shop. And as always, you can donate to EFF and give the gift of membership to the digital rights defender or newbie in your life.

Shop Now

Support Digital Rights with Every Purchase

Are you hoping for delivery by December 25 in the continental U.S.? Please place your order by Thursday, December 10. Email us with any questions.

Olivia Montesano

EFF to Arizona Federal Court: Protect Public School Students from Surveillance and Punishment for Off-Campus Speech

3 months 3 weeks ago

Legal Intern Alexandra Rhodes contributed to this blog post. 

EFF filed an amicus brief urging the Arizona District Court to protect public school students’ freedom of speech and privacy by holding that the use of a school-issued laptop or email account does not categorically mean a student is “on campus.” We argued that students need private digital spaces beyond their school’s reach to speak freely, without the specter of constant school surveillance and punishment.  

Surveillance Software Exposed a Bad Joke Made in the Privacy of a Student’s Home 

The case, Merrill v. Marana Unified School District, involves a Marana High School student who, while at home one morning before school started, asked his mother for advice about a bad grade he received on an English assignment. His mother said he should talk to his English teacher, so he opened his school-issued Google Chromebook and started drafting an email. The student then wrote a series of jokes in the draft email that he deleted each time. The last joke stated: “GANG GANG GIMME A BETTER GRADE OR I SHOOT UP DA SKOOL HOMIE,” which he narrated out loud to his mother in a silly voice before deleting the draft and closing his computer.  

Within the hour, the student’s mother received a phone call from the school principal, who said that Gaggle surveillance software had flagged a threat from her son and had sent along the screenshot of the draft email. The student’s mother attempted to explain the situation and reassure the principal that there was no threat. Nevertheless, despite her reassurances and the student’s lack of disciplinary record or history of violence, the student was ultimately suspended over the draft email—even though he was physically off campus at the time, before school hours, and had never sent the email.  

After unsuccessfully challenging the suspension, the family sued the school district, alleging infringement of the student’s right to free speech under the First Amendment and violation of the student’s right to due process under the Fourteenth Amendment. 

Public School Students Have Greater First Amendment Protection for Off-Campus Speech 

The U.S. Supreme Court has addressed the First Amendment rights of public school students in a handful of cases.

Most notably, in Tinker v. Des Moines Independent Community School District (1969), the Court held that students may not be punished for their on-campus speech unless the speech “materially and substantially” disrupted the school day or invaded the rights of others. 

Decades later, in Mahanoy Area School District v. B.L. by and through Levy (2021), in which EFF filed a brief, the Court further held that schools have less leeway to regulate student speech when that speech occurs off campus. Importantly, the Court stated that schools should have a limited ability to punish off-campus speech because “from the student speaker’s perspective, regulations of off-campus speech, when coupled with regulations of on-campus speech, include all the speech a student utters during the full 24-hour day.” 

The Ninth Circuit has further held that off-campus speech is only punishable if it bears a “sufficient nexus” to the school and poses a credible threat of violence. 

In this case, therefore, the extent of the school district’s authority to regulate student speech is tied to whether the high schooler was on or off campus at the time of the speech. The student here was at home and thus physically off campus when he wrote the joke in question; he wrote the draft before school hours; and the joke was not emailed to anyone on campus or anyone associated with the campus.  

Yet the school district is arguing that his use of a school-issued Google Chromebook and Google Workspace for Education account (including the email account) made his speech—and makes all student speech—automatically “on campus” for purposes of justifying punishment under the First Amendment.  

Schools Provide Students with Valuable Digital Tools—But Also Subject Them to Surveillance 

EFF supports the plaintiffs’ argument that the student’s speech was “off campus,” did not bear a sufficient nexus to the school, and was not a credible threat. In our amicus brief, we urged the trial court at minimum to reject a rule that the use of a school-issued device or cloud account always makes a student’s speech “on campus.”   

Our amicus brief supports the plaintiffs’ First Amendment arguments through the lens of surveillance, emphasizing that digital speech and digital privacy are inextricably linked.  

As we explained, Marana Unified School District, like many schools and districts across the country, offers students free Google Chromebooks and requires them to have an online Google Account to access the various cloud apps in Google Workspace for Education, including the Gmail app.  

Marana Unified School District also uses three surveillance technologies that are integrated into Chromebooks and Google Workspace for Education: Gaggle, GoGuardian, and Securly. These surveillance technologies collectively can monitor virtually everything students do on their laptops and online, from the emails and documents they write (or even just draft) to the websites they visit.  

School Digital Surveillance Chills Student Speech and Further Harms Students 

In our amicus brief, we made four main arguments against a blanket rule that categorizes any use of a school-issued device or cloud account as “on campus,” even if the student is geographically off campus or outside of school hours.  

First, we pointed out that such a rule will result in students having no reprieve from school authority, which runs counter to the Supreme Court’s admonition in Mahanoy not to regulate “all the speech a student utters during the full 24-hour day.” There must be some place that is “off campus” for public school students even when they are using digital tools provided by schools; otherwise, schools will reach too far into students’ lives.  

Second, we urged the court to reject such an “on campus” rule to mitigate the chilling effect of digital surveillance on students’ freedom of speech—that is, the risk that students will self-censor and choose not to express themselves in certain ways or access certain information that may be disfavored by school officials. If students know that no matter where they are or what they are doing with their Chromebooks and Google Accounts, the school is watching and the school has greater legal authority to punish them because they are always “on campus,” students will undoubtedly curb their speech. 

Third, we argued that such an “on campus” rule will exacerbate existing inequities in public schools among students of different socio-economic backgrounds. It would distinctly disadvantage lower-income students who are more likely to rely on school-issued devices because their families cannot afford a personal laptop or tablet. This creates a “pay for privacy” scheme: lower-income students are subject to greater school-directed surveillance and related discipline for digital speech, while wealthier students can limit surveillance by using personal laptops and email accounts, enabling them to have more robust free speech protections. 

Fourth, such an “on campus” rule will incentivize public schools to continue eroding student privacy by subjecting them to near constant digital surveillance. The student surveillance technologies schools use are notoriously privacy invasive and inaccurate, causing various harms to students—including unnecessary investigations and discipline, disclosure of sensitive information, and frustrated learning. 

We urge the Arizona District Court to protect public school students’ freedom of speech and privacy by rejecting this approach to school-managed technology. As we said in our brief, students, especially high schoolers, need some sphere of digital autonomy, free of surveillance, judgment, and punishment, as much as anyone else—to express themselves, to develop their identities, to learn and explore, to be silly or crude, and even to make mistakes.  

Sophia Cope

✋ Get A Warrant | EFFector 37.17

3 months 3 weeks ago

Even with the holidays coming up, the digital rights news doesn't stop. Thankfully, EFF is here to keep you up-to-date with our EFFector newsletter!

In our latest issue, we’re explaining why politicians’ latest attempts to ban VPNs are a terrible idea; asking supporters to file public comments opposing new rules that would make bad patents untouchable; and sharing a privacy victory—Sacramento has been forced to end its dragnet surveillance of power meter data.

Prefer to listen in? Check out our audio companion, where EFF Surveillance Litigation Director Andrew Crocker explains our new lawsuit challenging the warrantless mass surveillance of drivers in San Jose. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.17 - ✋ GET A WARRANT

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero