Don’t Let Congress Bring Back the Worst Patents

4 days 13 hours ago

Two dangerous patent bills—PERA and PREVAIL—are back in Congress. These bills would revive harmful patents and make it harder for the public to fight back.

The Patent Eligibility Restoration Act (PERA) would overturn key Supreme Court decisions that currently protect us from patents on the most basic internet software, and even human genes. This would open the floodgates to vague, overbroad claims on simple, widely used web features—exactly the kind of patents that patent trolls exploit.

The PREVAIL Act would gut the inter partes review (IPR) process, one of the most effective tools for challenging bad patents. It would ban many public interest groups, including EFF, from filing challenges.

Electronic Frontier Foundation

Congress Can Act Now to Protect Reproductive Health Data

4 days 13 hours ago

Privacy fears should never stand in the way of healthcare. That's why this common-sense bill would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it would restrict them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for.

Electronic Frontier Foundation

Tell Congress: Throw Out the NO FAKES Act and Start Over

4 days 13 hours ago

AI-generated imitations raise legitimate concerns, and Congress should consider narrowly targeted and proportionate proposals to deal with them. Instead, some Senators have proposed the broad NO FAKES Act, which would create an expansive and confusing new intellectual property right with few real safeguards against abuse. Tell Congress to throw out the NO FAKES Act and start over.

Electronic Frontier Foundation

Congress Shouldn't Control What We’re Allowed to Read Online

4 days 13 hours ago

The Kids Online Safety Act (KOSA) is back—and it still threatens free expression online. It would let government officials pressure or sue platforms to block or remove lawful content—especially on topics like mental health, sexuality, and drug use.

To avoid liability, platforms would over-censor. When forums or support groups get deleted, it's not just teens who lose access—we all do. KOSA would also push services to adopt invasive age verification, handing private data to companies like Clear or ID.me.

Lawmakers should reject KOSA. Tell your Senators to vote NO.

Electronic Frontier Foundation

Weekly Report: IPA Publishes "Information Security Alert for the FY2025 Summer Holidays"

4 days 14 hours ago

The Information-technology Promotion Agency, Japan (IPA) has published its "Information Security Alert for the FY2025 Summer Holidays." It explains the measures that each audience—individual users, corporate and organizational users, and corporate and organizational administrators—should take during the extended holiday period.

EFF to Court: Chatbot Output Can Reflect Human Expression

4 days 19 hours ago

When a technology can have a conversation with you, it’s natural to anthropomorphize that technology—to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people—including judges in cases about AI chatbots—to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.

In fact, chatbot output not only can reflect the expressive choices of its creators and users, but also implicates users' right to receive information. That's why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.

Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.

In addition, the right to receive speech in itself is protected—even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.

None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.

We hope that our brief will be helpful to the court as the case progresses, since the judge decided not to send the question up on appeal at this time.

Read our brief below.

Katharine Trendacosta