Shuppan Nets: The report collection on the Setagaya Ward history compilation dispute is ready!
Yamaguchi lawsuit to halt operation of the Ikata Nuclear Plant concludes; residents argue an active fault lies just a few hundred meters from the site
Lawsuit seeking to halt construction of the Oma Nuclear Plant: resident plaintiffs steer toward closing the proceedings
Seeking withdrawal of disciplinary actions (571): July 31 district court ruling in the fifth Tokyo “Kimigayo” lawsuit; please attend in support
China: A veteran woman worker’s record of struggle at Foxconn (Part 1)
Ari no Hitokoto: The surge of Sanseito and the Democratic Party for the People in the Upper House election is a grave development
[Focus] Taiwan contingency and the path to avoiding war: strengthen cooperation with ASEAN; the world’s No. 4 and No. 6 economies by GDP should team up to ease US-China tensions; from deference to the US toward independence and self-respect. Online lecture by Yujin Fuse = Masahiro Hashizume
Koji Sugihara: “NO to Massive Military Expansion and Base Reinforcement! Action 2025” launch rally on August 2
Anti-nuclear tent diary in front of METI (7/17): end nuclear power and abolish nuclear weapons, stop the massacre in Gaza, denounce xenophobia
Statement by the Chair of the Central Election Management Council on polling day for the 27th regular House of Councillors election
Statement by the Minister for Internal Affairs and Communications on polling day for the 27th regular House of Councillors election
[Open Lecture Series] The future of television as seen through Fuji TV = Shinji Kono
“Yuko Otsubaki has to be in the Diet”: street appeal seeking votes against discrimination
Expo construction cost nonpayment problem: victims and labor union put open questions to the Expo Association
Announcement: 15th National Study and Exchange Gathering on the “Hinomaru and Kimigayo” Issue
Report: Osaka gathering “How Can We End War?”
EFF to Court: The DMCA Didn't Create a New Right of Attribution, You Shouldn't Either
Amid a wave of lawsuits targeting how AI companies use copyrighted works to train large language models that generate new works, a peculiar provision of copyright law is suddenly in the spotlight: Section 1202 of the Digital Millennium Copyright Act (DMCA). Section 1202 restricts intentionally removing or changing copyright management information (CMI), such as a signature on a painting or attached to a photograph. Passed in 1998, the rule was supposed to help rightsholders identify potentially infringing uses of their works and encourage licensing.
OpenAI and Microsoft used code from GitHub as part of the training data for their LLMs, along with billions of other works. A group of anonymous GitHub contributors sued, arguing that those LLMs generated new snippets of code that were substantially similar to theirs—but with the CMI stripped. Notably, they did not claim that the new code was copyright infringement—they are relying solely on Section 1202 of the DMCA. Their problem? The generated code is different from their original work, and courts across the US have adopted an “identicality rule,” on the theory that Section 1202 applies only when CMI is removed from an existing work, not when it is simply missing from a new one.
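To make the dispute concrete, here is what CMI typically looks like in source code: a license and attribution header conveyed with the work. This is a hypothetical sketch; the author, repository URL, and function are invented for illustration and are not from the case.

```python
# Copyright (c) 2021 Jane Developer                (author attribution)
# SPDX-License-Identifier: MIT                     (license terms)
# Source: https://github.com/janedev/text-utils    (hypothetical repository)
#
# Identifying information conveyed with a work, like the three header
# lines above, can qualify as copyright management information (CMI)
# under Section 1202 (codified at 17 U.S.C. § 1202).

def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards."""
    normalized = "".join(ch.lower() for ch in text if ch.isalnum())
    return normalized == normalized[::-1]
```

Under the identicality rule, a model that emits a merely similar is_palindrome() with no header hasn’t “removed” CMI from anything; only stripping the header from the original work (or an identical copy of it) would implicate Section 1202.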
It may sound like an obscure legal question, but the outcome of this battle—currently before the Ninth Circuit Court of Appeals—could have far-reaching implications beyond generative AI technologies. If these rightsholders were correct, Section 1202 would effectively create a freestanding right of attribution, opening up potential liability even for non-infringing uses, such as fair use, whenever those new uses simply omit the CMI. While many fair users might ultimately escape liability under other limitations built into Section 1202, the looming threat of litigation, backed by the risk of high and unpredictable statutory penalties, would be enough to pressure many defendants to settle. Indeed, an entire legal industry of “copyright trolls” has emerged to exploit this dynamic, with no corollary benefit to creativity or innovation.
Fortunately, as we explain in a brief filed today, the text of Section 1202 doesn’t support such an expansive interpretation. The provision repeatedly refers to “works” and “copies of works”—not “substantially similar” excerpts or new adaptations—and its focus on “removal or alteration” clearly contemplates actions taken with respect to existing works, not new ones. Congress could have written the law differently. Wisely, it did not, thereby ensuring that rightsholders cannot leverage the omission of CMI to punish or unfairly threaten otherwise lawful re-uses of a work.
Given the proliferation of copyrighted works in virtually every facet of daily life, the last thing any court should do is give rightsholders a new, freestanding weapon against fair uses. As the Supreme Court once observed, copyright is a “tax on readers for the purpose of giving a bounty to writers.” That tax—including the expense of litigation—can be an important way to encourage new creativity, but it should not be levied unless the Copyright Act clearly requires it.
California A.B. 412 Stalls Out—A Win for Innovation and Fair Use
A.B. 412, the flawed California bill that threatened small developers in the name of AI “transparency,” has been delayed and turned into a two-year bill. That means it won’t move forward in 2025—a significant victory for innovation, freedom to code, and the open web.
EFF opposed this bill from the start. A.B. 412 tried to regulate generative AI, not by looking at the public interest, but by mandating training data “reading lists” designed to pave the way for new copyright lawsuits, many of which are filed by large content companies.
Transparency in AI development is a laudable goal. But A.B. 412 failed to offer a fair or effective path to get there. Instead, it handed companies large and small the near-impossible task of determining which training content was copyrighted and which wasn’t—with severe penalties for anyone who fell short. Only the largest AI companies could have shouldered that burden, freezing out smaller and non-commercial developers who might want to tweak or fine-tune AI systems for the public good.
The most interesting work in AI won’t necessarily come from the biggest companies. It will come from small teams fine-tuning models for accessibility and privacy, and building tools that identify AI harms. And some of the most valuable work will be done using source code under permissive licenses.
A.B. 412 ignored those facts, and would have punished some of the most worthwhile projects.
The Bill Blew Off Fair Use Rights
The question of whether—and how much—AI training qualifies as fair use is being actively litigated right now in federal courts. And so far, courts have found much of this work to be fair use. In a recent landmark AI case, Bartz v. Anthropic, for example, a federal judge found AI training to be “transformative—spectacularly so.” He compared it to how search engines copy images and text in order to provide useful search results to users.
Copyright is federally governed. When states try to rewrite the rules, they create confusion—and more litigation that doesn’t help anyone.
If lawmakers want to revisit AI transparency, they need to do so without giving rights-holders a tool to weaponize copyright claims. That means rejecting A.B. 412’s approach—and crafting laws that protect speech, competition, and the public’s interest in a robust, open, and fair AI ecosystem.