June 6-7: Please join the Tokyo action of the "Stop the War! Okinawa-West Japan Network"!
"Shukan Kinyobi" news: The Trump administration brandishes its strong-arm power
Keeping the Web Up Under the Weight of AI Crawlers
If you run a site on the open web, chances are you've noticed a big increase in traffic over the past few months, whether or not your site has been getting more viewers, and you're not alone. Operators everywhere have observed a drastic increase in automated traffic—bots—and in most cases attribute much or all of this new traffic to AI companies.
Background
AI—in particular, Large Language Models (LLMs) and generative AI (genAI)—relies on compiling as much information from relevant sources (e.g., "texts written in English" or "photographs") as possible in order to build a functional and persuasive model that users will later interact with. While AI companies in part distinguish themselves by what data their models are trained on, possibly the greatest source of information—one freely available to all of us—is the open web.
To gather up all that data, companies and researchers use automated programs called scrapers (sometimes referred to by the more general term "bots") to "crawl" the links between various webpages, saving the kinds of information they're tasked with collecting as they go. Scrapers are tools with a long, and often beneficial, history: services like search engines, the Internet Archive, and all kinds of scientific research rely on them.
When scrapers are not deployed thoughtfully, however, they can contribute to higher hosting costs, lower performance, and even site outages, particularly when site operators face many of them operating at the same time. In the long run, all of this may lead some sites to shut down rather than keep bearing the brunt of it.
For-profit AI companies must ensure they do not poison the well of the open web they rely on in a short-sighted rush for training data.
Bots: Read the Room
There are existing best practices that those who use scrapers should follow. When bots and their operators ignore these guideposts, it signals to site operators, sometimes explicitly, that they can or should cut off the bots' access; it can also impede performance and, in the worst case, take a site down for all users. Some companies appear to follow these practices most of the time, but we see increasing reports and evidence of new bots that don't.
First, where possible, scrapers should follow instructions given in a site's robots.txt file, whether those are to back off to a certain crawling rate, exclude certain paths, or not to crawl the site at all.
Second, bots should send their requests with a clearly labeled User Agent string which indicates their operator, their purpose, and a means of contact.
Third, those running scrapers should provide a process for site operators to request back-offs, rate caps, or exclusions, and to report problematic behavior, via the contact info or response forms referenced in the User Agent string. (A minimal sketch of a crawler that follows these three practices appears below.)
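For illustration only, here is a minimal sketch, in Python and using only the standard library, of what a crawler honoring these practices might look like. The bot name, contact address, and URLs are placeholder assumptions, not references to any real crawler or site.

    # Sketch of a "polite" crawler: honors robots.txt, backs off to the site's
    # requested crawl delay, and identifies itself with an operator and contact.
    import time
    import urllib.request
    import urllib.robotparser

    # Hypothetical identity string: operator, purpose page, and contact address.
    USER_AGENT = "ExampleResearchBot/1.0 (+https://example.org/bot; contact: bots@example.org)"

    def polite_fetch(url, robots_url):
        """Fetch a page only if robots.txt permits it, honoring any crawl delay."""
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(robots_url)
        rp.read()

        if not rp.can_fetch(USER_AGENT, url):
            return None  # The site has asked crawlers like this one to stay away.

        delay = rp.crawl_delay(USER_AGENT)
        if delay:
            time.sleep(delay)  # Back off to the rate the site requested.

        request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(request) as response:
            return response.read()

    if __name__ == "__main__":
        page = polite_fetch("https://example.org/some/page",
                            "https://example.org/robots.txt")
        print("fetched" if page is not None else "skipped per robots.txt")

A real crawler would also cache robots.txt between requests and spread fetches over time rather than hammering one host, but the shape is the same: check permission, slow down when asked, and say who you are.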
Mitigations for Site Operators
Of course, if you're running a website dealing with a flood of crawling traffic, waiting for those bots to change their behavior for the better might not be realistic. Here are a few suggested, if imperfect, mitigations based in part on our own sometimes frustrating experiences.
First, use a caching layer. In most cases a Content Delivery Network (CDN) or an "edge platform" (essentially a newer iteration of a CDN) can provide this for you, and some services offer a free tier for non-commercial users. There are also a number of great projects if you prefer to self-host; some of the tools we've used for caching include varnish, memcached, and redis. (A toy sketch of this idea, together with rate limiting, appears after the third suggestion below.)
Second, convert to static content to prevent resource-intensive database reads. In some cases this may reduce the need for caching.
Third, use targeted rate limiting to slow down bots without taking your whole site down. But know this can get difficult when scrapers try to disguise themselves with misleading User Agent strings or by spreading a fleet of crawlers out across many IP addresses.
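As a rough illustration of the caching and rate-limiting suggestions above, here is a toy Python sketch using only the standard library. The in-process TTL cache is a stand-in for a real caching layer such as varnish, memcached, or redis, and the keys, limits, and handler are placeholder assumptions rather than recommended values.

    # Toy versions of two mitigations: cache rendered responses briefly, and
    # rate-limit each client with a token bucket before doing expensive work.
    import time

    class TTLCache:
        """Remember rendered responses briefly so repeat hits skip the database."""
        def __init__(self, ttl_seconds=60.0):
            self.ttl = ttl_seconds
            self._store = {}  # path -> (expires_at, body)

        def get(self, path):
            entry = self._store.get(path)
            if entry and entry[0] > time.monotonic():
                return entry[1]
            return None

        def put(self, path, body):
            self._store[path] = (time.monotonic() + self.ttl, body)

    class TokenBucket:
        """Allow bursts up to `capacity` requests, refilled at `rate` per second."""
        def __init__(self, rate=1.0, capacity=5.0):
            self.rate = rate
            self.capacity = capacity
            self._buckets = {}  # client key -> (tokens, last_seen)

        def allow(self, client_key):
            tokens, last = self._buckets.get(client_key, (self.capacity, time.monotonic()))
            now = time.monotonic()
            tokens = min(self.capacity, tokens + (now - last) * self.rate)
            allowed = tokens >= 1.0
            self._buckets[client_key] = (tokens - 1.0 if allowed else tokens, now)
            return allowed

    cache = TTLCache(ttl_seconds=30.0)
    limiter = TokenBucket(rate=1.0, capacity=5.0)

    def handle_request(client_ip, user_agent, path):
        # Key the limiter on whatever identity is available; disguised User Agents
        # and fleets of IP addresses will weaken (but not eliminate) its effect.
        if not limiter.allow(f"{client_ip}|{user_agent}"):
            return "429 Too Many Requests"
        body = cache.get(path)
        if body is None:
            body = f"rendered page for {path}"  # Stand-in for an expensive database-backed render.
            cache.put(path, body)
        return body

In practice logic like this usually lives at the edge (in the CDN, reverse proxy, or web server) rather than in application code, and returns real HTTP responses, but the flow is the same: identify the client, refuse or slow the over-eager ones, and serve cached or static copies wherever possible.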
Other mitigations such as client-side validation (e.g. CAPTCHAs or proof-of-work) and fingerprinting carry privacy and usability trade-offs, and we warn against deploying them without careful forethought.
Where Do We Go From Here?
To reiterate, whatever one's opinion of these particular AI tools, scraping itself is not the problem. Automated access is a fundamental technique of archivists, computer scientists, and everyday users that we hope is here to stay—as long as it can be done non-destructively. However, we realize that not all implementers will follow our suggestions for bots above, and that our mitigations are both technically involved and incomplete.
Because we see so many bots operating for the same purpose at the same time, it seems there's an opportunity here to provide these automated data consumers with tailored data providers, removing the need for every AI company to scrape every website, seemingly, every day.
And on the operators' end, we hope to see more web-hosting and framework technology that is built with an awareness of these issues from day one, perhaps building in responses like just-in-time static content generation or dedicated endpoints for crawlers.
EFF to the FTC: DMCA Section 1201 Creates Anti-Competitive Regulatory Barriers
As part of a multi-pronged effort toward deregulation, the Federal Trade Commission has asked the public to identify any and all “anti-competitive” regulations. Working with our friends at Authors Alliance, EFF answered, calling attention to a set of anti-competitive regulations that many don’t recognize as such: the triennial exemptions to Section 1201 of the Digital Millennium Copyright Act, and the cumbersome process on which they depend.
Copyright grants exclusive rights to creators, but only as a means to serve the broader public interest. Fair use and other limitations play a critical role in that service by ensuring that the public can engage in commentary, research, education, innovation, and repair without unjustified restriction. Section 1201 effectively forbids fair uses where those uses require circumventing a software lock (a.k.a. technological protection measures) on a copyrighted work.
Congress realized that Section 1201 had this effect, so it adopted a safety valve—a triennial process by which the Library of Congress could grant exemptions. Under the current rulemaking framework, however, this intended safety valve functions more like a chokepoint. Individuals and organizations seeking an exemption to engage in lawful fair use must navigate a burdensome, time-consuming administrative maze. The existing procedural and regulatory barriers ensure that the rulemaking process—and Section 1201 itself—thwarts, rather than serves, the public interest.
The FTC does not, of course, control Congress or the Library of Congress. But we hope its investigation and any resulting report on anti-competitive regulations will recognize the negative effects of Section 1201 and that the triennial rulemaking process has failed to be the check Congress intended. Our comments urge the FTC to recommend that Congress repeal or reform Section 1201. At a minimum, the FTC should advocate for fundamental revisions to the Library of Congress’s next triennial rulemaking process, set for 2026, so that copyright law can once again fulfill its purpose: to support—rather than thwart—competitive and independent innovation.
You can find the full comments here.
Stepping outside the algorithm
[B] To Protect Okinawa's Sea and Way of Life: A Rally in the Diet Members' Building Opposing Construction of the New Henoko Base
Summary of Proceedings of the Local Public Finance Council, FY2024 (March 21)
Summary of Proceedings of the Local Public Finance Council, FY2025 (May 20)
737th Meeting of the Bid Monitoring Subcommittee (meeting materials)
Watt-Bit Collaboration Public-Private Council (2nd meeting): distributed materials
Study Team on Enhancing and Strengthening Measures to Maintain and Secure Broadcasting Services in Large-Scale Wide-Area Disasters (5th meeting): distributed materials
Working Group on Measures Against Improper Use (10th meeting)
Decision to award FY2025 grants for supporting the sustainable development of depopulated areas
Call for public comments on a draft ministerial ordinance partially amending the Ordinance for Enforcement of the Telecommunications Business Act
Summary of Minister for Internal Affairs and Communications Murakami's press conference following the Cabinet meeting
Family Income and Expenditure Survey Report (two-or-more-person households), April 2025
Updated: information on ministry visits for applicants who have already passed the comprehensive service (technical track) exam
[Mirror of Film] Spotlighting the underlying strength of Yokohama's citizens: "The Spirit of Yokohama" shows what "community building" should look like in a mayoral election year (by Katsuhiko Suzuki)
The Dangers of Consolidating All Government Information
The Trump administration has been heavily invested in consolidating all of the government’s information into a single searchable, or perhaps AI-queryable, super database. Compiling all of this information is being done under the dubious justification of efficiency and modernization. However, in many cases this information was originally siloed for important reasons: to protect your privacy, to prevent different branches of government from using sensitive data to punish or harass you, and to preserve the trust in and legitimacy of important civic institutions.
This process of consolidation has taken several forms. The purported Department of Government Efficiency (DOGE) has been seeking access to the data and computer systems of dozens of government agencies. According to one report, as of April 2025 this access has given DOGE hundreds of pieces of personal information about people living in the United States, including financial and tax information, health and healthcare information, and even computer IP addresses. EFF is currently engaged in a lawsuit against the U.S. Office of Personnel Management (OPM) and DOGE for disclosing personal information about government employees to people who don’t need it, in violation of the Privacy Act of 1974.
Another key maneuver in centralizing government information has been to steamroll the protections that were in place to keep this information away from agencies that don’t need it, or could abuse it. This has been done by ignoring the law, as the Trump administration did when it ordered the IRS to make tax information available for the purposes of immigration enforcement. It has also been done through the creation of new (and questionable) executive mandates that all executive branch information be made available to the White House or any other agency. Specifically, this has been attempted with the March 20, 2025 Executive Order, “Stopping Waste, Fraud, and Abuse by Eliminating Information Silos,” which mandates that the federal government, as well as all 50 state governments, allow other agencies “full and prompt access to all unclassified agency records, data, software systems, and information technology systems.” But executive orders can’t override privacy laws passed by Congress.
Not only is the Trump administration trying to consolidate all of this data institutionally and statutorily, it is also trying to do so technologically. A new report revealed that the administration has contracted Palantir—the surveillance and security data-analytics firm—to fuse data from multiple agencies, including the Department of Homeland Security and Health and Human Services.
The consolidation of government records means more government power that can be abused. Different government agencies necessarily collect information to provide essential services or collect taxes. The danger comes when the government begins pooling that data and using it for reasons unrelated to the purpose for which it was collected.
Imagine, for instance, a scenario where a government employee could be denied health-related public services or support because of information gathered about them by the agency that handles HR records. Or imagine a person’s research topics, as reflected in their federal grant applications, being used to weigh whether or not that person should be allowed to renew a passport.
Marginalized groups are most vulnerable to this kind of abuse, which includes using tax records to locate individuals for immigration enforcement. Government records could also be weaponized against people who receive food subsidies, apply for student loans, or take government jobs.
Congress recognized these dangers 50 years ago when it passed the Privacy Act to put strict limits on the government’s use of large databases. At that time, trust in the government eroded after revelations about White House enemies’ lists, misuse of existing government personality profiles, and surveillance of opposition political groups.
There’s another important issue at stake: the future of federal and state governments that actually have the information and capacity to help people. The more people learn to distrust the government, because they worry that the information they give certain agencies may later be used to hurt them, the less likely they are to participate or seek the help they need. And the fewer people who engage with these agencies, the less likely those agencies are to survive. Trust is a key part of any relationship between the governed and government, and when that trust is abused or jettisoned, the long-term harms are irreparable.
EFF, like dozens of other organizations, will continue to fight to ensure that personal records held by the government are used and disclosed only as needed and only for the purposes for which they were collected, as federal law demands.
Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management