Security Alert: Multiple OS command injection vulnerabilities in Trend Micro enterprise endpoint security products (published)
JVN: Multiple vulnerabilities in SATO CL4/6NX-J Plus and CL4/6NX Plus series label printers
Announcement: LaborNet summer camp, August 23–24 in Moroyama, Saitama Prefecture
JVN: Multiple vulnerabilities in Tigo Energy Cloud Connect Advanced
Don’t Let Congress Bring Back the Worst Patents
Two dangerous patent bills—PERA and PREVAIL—are back in Congress. These bills would revive harmful patents and make it harder for the public to fight back.
The Patent Eligibility Restoration Act (PERA) would overturn key Supreme Court decisions that currently protect us from patents on the most basic internet software, and even human genes. This would open the floodgates to vague, overbroad claims on simple, widely used web features—exactly the kind of patents that patent trolls exploit.
The PREVAIL Act would gut the inter partes review (IPR) process, one of the most effective tools for challenging bad patents. It would ban many public interest groups, including EFF, from filing challenges.
Congress Can Act Now to Protect Reproductive Health Data
Privacy fears should never stand in the way of healthcare. That's why this common-sense bill would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it would restrict them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for.
Tell Congress: Throw Out the NO FAKES Act and Start Over
AI-generated imitations raise legitimate concerns, and Congress should consider narrowly targeted and proportionate proposals to deal with them. Instead, some Senators have proposed the broad NO FAKES Act, which would create an expansive and confusing new intellectual property right with few real safeguards against abuse. Tell Congress to throw out the NO FAKES Act and start over.
Congress Shouldn't Control What We’re Allowed to Read Online
The Kids Online Safety Act (KOSA) is back—and it still threatens free expression online. It would let government officials pressure or sue platforms to block or remove lawful content—especially on topics like mental health, sexuality, and drug use.
To avoid liability, platforms will over-censor. When forums or support groups get deleted, it’s not just teens who lose access—we all do. KOSA will also push services to adopt invasive age verification, handing private data to companies like Clear or ID.me.
Lawmakers should reject KOSA. Tell your Senators to vote NO.
Report: Protest press conference over a discriminatory column in Shukan Shincho
Video recording and report materials now available from the August 3 emergency symposium, "Will Academic Freedom Be Protected?"
Korean labor news, late-July issue: "This Is What Happens When the Government Changes"
Weekly Report: IPA publishes "Information Security Alert for the Summer Holidays, FY2025"
Information and Communications Council, ICT Subcommittee, Committee on Effective Radio Spectrum Use, Working Group on Certification of Radio Equipment (1st meeting)
Population, demographic changes, and number of households based on the Basic Resident Register (as of January 1, 2025)
Radio Regulatory Council, Effective Use Evaluation Subcommittee (48th meeting): meeting materials
Information and Communications Council, ICT Subcommittee, Committee on Effective Radio Spectrum Use, Working Group on Methods of Conducting Price Competition (2nd meeting)
Information and Communications Council, ICT Subcommittee, New-Generation Mobile Communications System Committee, HAPS Working Group
43rd Policy Evaluation Council (joint with the 42nd Policy Evaluation System Subcommittee) (held June 24, 2025): materials, meeting summary, and minutes
EFF to Court: Chatbot Output Can Reflect Human Expression
When a technology can have a conversation with you, it’s natural to anthropomorphize that technology—to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people—including judges in cases about AI chatbots—to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.
In fact, chatbot output not only can reflect the expressive choices of its creators and users; it also implicates users' right to receive information. That's why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.
Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.
In addition, the right to receive speech in itself is protected—even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.
None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.
We hope that our brief will be helpful to the court as the case progresses, since the judge declined to send the question up on appeal at this time.
Read our brief below.