EFF to Court: The DMCA Didn't Create a New Right of Attribution, You Shouldn't Either

1 week 4 days ago

Amid a wave of lawsuits targeting how AI companies use copyrighted works to train large language models that generate new works, a peculiar provision of copyright law is suddenly in the spotlight: Section 1202 of the Digital Millennium Copyright Act (DMCA). Section 1202 restricts intentionally removing or changing copyright management information (CMI), such as a signature on a painting or attached to a photograph. Passed in 1998, the rule was supposed to help rightsholders identify potentially infringing uses of their works and encourage licensing.

OpenAI and Microsoft used code from GitHub as part of the training data for their LLMs, along with billions of other works. A group of anonymous GitHub contributors sued, arguing that those LLMs generated new snippets of code that were substantially similar to theirs—but with the CMI stripped. Notably, they did not claim that the new code infringed their copyrights—they are relying solely on Section 1202 of the DMCA. Their problem? The generated code is different from their original work, and courts across the US have adopted an “identicality rule,” on the theory that Section 1202 applies only when CMI is removed from an existing work, not when it is simply missing from a new one.

It may sound like an obscure legal question, but the outcome of this battle—currently before the Ninth Circuit Court of Appeals—could have far-reaching implications beyond generative AI technologies. If the rightsholders are correct, Section 1202 effectively creates a freestanding right of attribution, opening up potential liability even for non-infringing uses, such as fair use, whenever those new uses simply omit the CMI. While many fair users might ultimately escape liability under other limitations built into Section 1202, the looming threat of litigation, backed by the risk of high and unpredictable statutory penalties, would be enough to pressure many defendants to settle. Indeed, an entire legal industry of “copyright trolls” has emerged to exploit this dynamic, with no corresponding benefit to creativity or innovation.

Fortunately, as we explain in a brief filed today, the text of Section 1202 doesn’t support such an expansive interpretation. The provision repeatedly refers to “works” and “copies of works”—not “substantially similar” excerpts or new adaptations—and its focus on “removal or alteration” clearly contemplates actions taken with respect to existing works, not new ones. Congress could have written the law differently. Wisely, it did not, thereby ensuring that rightsholders couldn’t leverage the omission of CMI to punish or unfairly threaten otherwise lawful re-uses of a work.

Given the proliferation of copyrighted works in virtually every facet of daily life, the last thing any court should do is give rightsholders a new, freestanding weapon against fair uses. As the Supreme Court once observed, copyright is a “tax on readers for the purpose of giving a bounty to writers.” That tax—including the expense of litigation—can be an important way to encourage new creativity, but it should not be levied unless the Copyright Act clearly requires it.

Corynne McSherry

California A.B. 412 Stalls Out—A Win for Innovation and Fair Use

1 week 4 days ago

A.B. 412, the flawed California bill that threatened small developers in the name of AI “transparency,” has been delayed and turned into a two-year bill. That means it won’t move forward in 2025—a significant victory for innovation, freedom to code, and the open web.

EFF opposed this bill from the start. A.B. 412 tried to regulate generative AI not by looking to the public interest, but by mandating training data “reading lists” designed to pave the way for new copyright lawsuits—the kind typically filed by large content companies.

Transparency in AI development is a laudable goal. But A.B. 412 failed to offer a fair or effective path to get there. Instead, it handed companies large and small the impossible task of determining which content in their training data was copyrighted and which wasn’t—with severe penalties for anyone who fell short. That would have entrenched the largest AI companies while freezing out smaller and non-commercial developers who might want to tweak or fine-tune AI systems for the public good.

The most interesting work in AI won’t necessarily come from the biggest companies. It will come from small teams fine-tuning models for accessibility and privacy, and building tools that identify AI harms. And some of the most valuable work will be done using source code under permissive licenses.

A.B. 412 ignored those facts, and would have punished some of the most worthwhile projects. 

The Bill Blew Off Fair Use Rights

The question of whether—and how much—AI training qualifies as fair use is being actively litigated right now in federal courts. And so far, courts have found much of this work to be fair use. In a recent landmark AI case, Bartz v. Anthropic, for example, a federal judge found that AI training work is “transformative—spectacularly so.” He compared it to how search engines copy images and text in order to provide useful search results to users.

Copyright is federally governed. When states try to rewrite the rules, they create confusion—and more litigation that doesn’t help anyone.

If lawmakers want to revisit AI transparency, they need to do so without giving rights-holders a tool to weaponize copyright claims. That means rejecting A.B. 412’s approach—and crafting laws that protect speech, competition, and the public’s interest in a robust, open, and fair AI ecosystem. 

Joe Mullin

[Announcement] A Session to Look Back on Publishing Distribution and Discuss Its Future (Publishing Committee)

1 week 4 days ago
In 2022, the Publishing Industry Research Group of the Japan Society of Publishing Studies (日本出版学会) looked back on the publishing industry of the Heisei era under the theme 『平成の出版が歩んだ道――激変する「出版業界の夢と冒険」30年史』. In the short three years since, the landscape of the publishing industry has changed dramatically, and publishing distribution in particular is about to face large-scale change. Against this backdrop, 『出版流通が歩んだ道――近代出版流通誕生150年の軌跡』 by 能勢仁, 八木壯一, and 樽見博 has now been published. Drawing on the contents of chapters 2 and 3 of the book, one of its authors, 能勢仁, will present 「第4章 出版業界の生き残り..
JCJ

Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance

1 week 4 days ago

Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him comes the surveillance-first, privacy-last approach that made Ring’s cameras some of the most maligned devices in tech. Not only is the company reintroducing new versions of old features that would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices.

This is a bad, bad step for Ring and the broader public. 

Ring is rolling back many of the reforms it has made in the last few years by easing police access to footage from millions of homes—a grave threat to civil liberties in the United States. After all, police have used Ring footage to spy on protestors and have obtained footage without a warrant or the user’s consent. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or to track down people for immigration enforcement.

Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device. 

It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted. 

As if those new features weren’t bad enough, Ring is also planning to roll back some of the necessary reforms it has made: partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and allowing users to consent to letting police livestream directly from their devices.

After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. It introduced end-to-end encryption, ended its formal partnerships with police, which were an ethical minefield, and ended the tool that funneled police requests for footage directly to customers. Now it is pivoting back to being a tool of mass surveillance.

Why now? It is hard to believe the company is betraying the trust of its millions of customers in the name of “safety” at a time when violent crime in the United States is near historic lows. It’s probably not about its customers—the FTC had to compel Ring to take its users’ privacy seriously.

No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.

Shame on Ring.

Matthew Guariglia

[The Sanseito Party and Nazi Ecologism] by 田中優子

1 week 4 days ago
Today, when social media churns out mountains of fake information, what we must guard against is being deceived. Readers of 『週刊金曜日』 are, I suspect, not easily fooled—but what about “organic” products and organic farming? Having continued to publish 「買ってはいけない」 (“Don’t Buy It”) […]
admin

[Lamenting the Spread of Xenophobic Sentiment] by 宇都宮健児

1 week 4 days ago
In recent years, a xenophobic climate hostile to foreigners and people of foreign roots has been spreading rapidly in Japanese society. In a survey conducted in June by NHK and JX通信社 (JX Press), in response to the statement that “foreigners are given more preferential treatment than necessary in Japanese society,” those answering “strongly agre […]
admin

We Support Wikimedia Foundation’s Challenge to UK’s Online Safety Act

1 week 5 days ago

The Electronic Frontier Foundation and ARTICLE 19 strongly support the Wikimedia Foundation’s legal challenge to the categorization regulations of the United Kingdom’s Online Safety Act.

The Foundation – the non-profit that operates Wikipedia and other Wikimedia projects – announced its legal challenge earlier this year, arguing that the regulations endanger Wikipedia and the global community of volunteer contributors who create the information on the site. The High Court of Justice in London will hear the challenge on July 22 and 23.

EFF and ARTICLE 19 agree with the Foundation’s argument that, if enforced, the Category 1 duties – the OSA’s most stringent obligations – would undermine the privacy and safety of Wikipedia’s volunteer contributors, expose the site to manipulation, and divert essential resources from protecting people and improving the site. For example, because the law requires Category 1 services to allow users to block all unverified users from editing any content they post, it effectively requires the Foundation to verify the identity of many Wikipedia contributors. That compelled verification, however, would undermine the privacy that keeps the site’s volunteers safe.

Wikipedia is the world’s most trusted and widely used encyclopedia, with users across the world accessing its wealth of information and participating in free information exchange through the site. The OSA must not be allowed to diminish it or jeopardize the volunteers on whom it depends.

Beyond the issues raised in Wikimedia’s lawsuit, EFF and ARTICLE 19 emphasize that the Online Safety Act poses a serious threat to freedom of expression and privacy online, both in the U.K. and globally. Several key provisions of the law become operational on July 25, and some companies are already rolling out age-verification mechanisms that undermine the free expression and privacy rights of both adults and minors.

David Greene