Alert: Security Alert Regarding Microsoft's August 2024 Security Updates (Published)
Alert: Security Alert Regarding Vulnerabilities in Adobe Acrobat and Reader (APSB24-57) (Published)
Inside the Digital Society: Making the digital economy sustainable
Post: Puigdemont Returns to Make Clear That the Catalan Independence Struggle Continues
EFFecting Change: Reproductive Justice in the Digital Age
Please join EFF for the next segment of EFFecting Change, our newest livestream series, diving into topics near and dear to our hearts.
August 28: Reproductive Justice in the Digital Age
This summer marks the two-year anniversary of the Dobbs decision overturning Roe v. Wade. Join EFF for a livestream discussion about restrictions on reproductive healthcare and the choices people seeking an abortion must face in a digital age where everything is connected and surveillance is rampant. Learn what's happening across the United States and how you can get involved with our panel featuring EFF Staff Technologist Daly Barnett, EFF Associate Director of Legislative Activism Hayley Tsukayama, EFF Staff Attorney Jennifer Pinsof, Director of Research and Policy at the Surveillance Resistance Lab Cynthia Conti-Cook, and community organizer Adri Perez.
October 17: How to Protest with Privacy in Mind
Do you know what to do if you’re subjected to a search or arrest at a protest? Join EFF for a livestream discussion about how to protect your electronic devices and digital assets before, during, and after a demonstration. Learn how you can avoid confiscation or forced deletion of media, and keep your movements and associations private.
We hope you and your friends can join us live for both events! Be sure to spread the word, and share our past livestreams. Please note that all future events will be recorded for later viewing.
Check out the first segment of EFFecting Change: The U.S. Supreme Court Takes on the Internet by watching the recording on our YouTube page!
Ari's Comment: "My Father's War Trauma Is My Own Problem": What Awakened Akio Kuroi
[Publishing Topics] KADOKAWA's Recovery and Damage Status Following the Cyberattack (Publishing Section)
[B] Amnesty Warns That European Countries Are Suppressing Protests and Intensifying Crackdowns and Civilian Surveillance
Digital Apartheid in Gaza: Big Tech Must Reveal Their Roles in Tech Used in Human Rights Abuses
This is part two of an ongoing series. Part one on unjust content moderation is here.
Since the start of the Israeli military response to Hamas’ deadly October 7 attack, U.S.-based companies like Google and Amazon have been under pressure to reveal more about the services they provide and the nature of their relationships with the Israeli forces engaging in the military response.
We agree. Without greater transparency, the public cannot tell whether these companies are complying with human rights standards—both those set by the United Nations and those they have publicly set for themselves. We know that this conflict has resulted in alleged war crimes and has involved massive, ongoing surveillance of civilians and refugees living under what international law recognizes as an illegal occupation. That kind of surveillance requires significant technical support and it seems unlikely that it could occur without any ongoing involvement by the companies providing the platforms.
Google's Human Rights statement claims that "In everything we do, including launching new products and expanding our operations around the globe, we are guided by internationally recognized human rights standards. We are committed to respecting the rights enshrined in the Universal Declaration of Human Rights and its implementing treaties, as well as upholding the standards established in the United Nations Guiding Principles on Business and Human Rights (UNGPs) and in the Global Network Initiative Principles (GNI Principles)." Google goes further in the case of AI technologies, promising not to design or deploy AI in technologies that are likely to facilitate injuries to people, gather or use information for surveillance, or be used in violation of human rights, or even where the use is likely to cause overall harm.
Amazon states that it is "Guided by the United Nations Guiding Principles on Business and Human Rights," and that their “approach on human rights is informed by international standards; we respect and support the Core Conventions of the International Labour Organization (ILO), the ILO Declaration on Fundamental Principles and Rights at Work, and the UN Universal Declaration of Human Rights.”
It is time for Google and Amazon to tell the truth about use of their technologies in Gaza so that everyone can see whether their human rights commitments were real or simply empty promises.
Concerns about Google and Amazon Facilitating Human Rights Abuses
The Israeli government has long procured surveillance technologies from corporations based in the United States. Most recently, an August investigation by +972 and Local Call revealed that the Israeli military has been storing intelligence information on the Amazon Web Services (AWS) cloud after the scale of data collected through mass surveillance of Palestinians in Gaza grew too large for military servers alone. The same article reported that the commander of Israel's Center of Computing and Information Systems unit, which is responsible for providing data processing for the military, confirmed in an address to military and industry personnel that the Israeli army had been using cloud storage and AI services provided by civilian tech companies, with the logos of AWS, Google Cloud, and Microsoft Azure appearing in the presentation.
This is not the first time Google and Amazon have been involved in providing civilian tech services to the Israeli military, nor is it the first time that questions have been raised about whether that technology is being used to facilitate human rights abuses. In 2021, Google and Amazon Web Services signed a $1.2 billion joint contract with the Israeli military called Project Nimbus to provide cloud services and machine learning tools located within Israel. In an official announcement for the partnership, the Israeli Finance Ministry said that the project sought to “provide the government, the defense establishment and others with an all-encompassing cloud solution.” Under the contract, Google and Amazon reportedly cannot prevent particular agencies of the Israeli government, including the military, from using its services.
Not much is known about the specifics of Nimbus. Google has publicly stated that the project is not aimed at military uses; the Israeli military, however, publicly credits Nimbus with assisting it in conducting the war. Reports note that the project involves Google establishing a secure instance of the Google Cloud in Israel. According to Google documents from 2022, Google's Cloud services include object tracking, AI-enabled face recognition and detection, and automated image categorization. Google signed a new consulting deal with the Israeli Ministry of Defense based around the Nimbus platform in March 2024, so Google cannot claim it is simply caught up in circumstances that have changed since 2021.
Alongside Project Nimbus, an anonymous Israeli official reported that the Israeli military deploys face recognition dragnets across the Gaza Strip using two tools that have facial recognition/clustering capabilities: one from Corsight, which is a "facial intelligence company," and the other built into the platform offered through Google Photos.
Clarity Needed
Based on the sketchy information available, there is clearly cause for concern and a need for the companies to clarify their roles.
For instance, Google Photos is a general-purpose service and some of the pieces of Project Nimbus are non-specific cloud computing platforms. EFF has long maintained that the misuse of general-purpose technologies alone should not be a basis for liability. But, as with Cisco’s development of a specific module of China’s Golden Shield aimed at identifying the Falun Gong (currently pending in litigation in the U.S. Court of Appeals for the Ninth Circuit), companies should not intentionally provide specific services that facilitate human rights abuses. They must also not willfully blind themselves to how their technologies are being used.
In short, if their technologies are being used to facilitate human rights abuses, whether in Gaza or elsewhere, these tech companies need to publicly demonstrate how they are adhering to their own Human Rights and AI Principles, which are based on international standards.
We (and the whole world) are waiting, Google and Amazon.
6th Meeting of the Expert Committee on Digital Technologies That Empower Consumers (to be held August 20)
Announcement of the 30th Meeting of the Third Expert Committee on Pesticides (Closed to the Public) (to be held August 21)
[B] Kenji Nozoe's "Forced Korean Labor in Akita Prefecture, Part 12": Hauling Ore from the Ainai Mine, Ainai, Kosaka Town, Kazuno District
[B] Japanese Aeon Employee Detained in Myanmar Released After Sentencing
Damage Report on the Earthquake Centered in the Hyuganada Sea off Miyazaki Prefecture (9th Report)
Summary of Proceedings of the 49th Committee on the Evaluation System for Incorporated Administrative Agencies
Summary of Proceedings of the 66th Evaluation Subcommittee of the Committee on the Evaluation System for Incorporated Administrative Agencies
Ministry of Internal Affairs and Communications Personnel Announcements, August 13, 2024
Federal Appeals Court Finds Geofence Warrants Are “Categorically” Unconstitutional
In a major decision on Friday, the federal Fifth Circuit Court of Appeals held that geofence warrants are "categorically prohibited by the Fourth Amendment." Closely following arguments EFF has made in a number of cases, the court found that geofence warrants constitute the sort of "general, exploratory rummaging" that the drafters of the Fourth Amendment intended to outlaw. EFF applauds this decision because it is essential that every person feel they can simply take their cell phone out into the world without the fear that they might end up a criminal suspect because their location data was swept up in an open-ended digital dragnet.
The new Fifth Circuit case, United States v. Smith, involved an armed robbery and assault of a US Postal Service worker at a post office in Mississippi in 2018. After several months of investigation, police had no identifiable suspects, so they obtained a geofence warrant covering a large geographic area around the post office for the hour surrounding the crime. Google responded to the warrant with information on several devices, ultimately leading police to the two defendants.
On appeal, the Fifth Circuit reached several important holdings.
First, it determined that under the Supreme Court’s landmark ruling in Carpenter v. United States, individuals have a reasonable expectation of privacy in the location data implicated by geofence warrants. As a result, the court broke from the Fourth Circuit’s deeply flawed decision last month in United States v. Chatrie, noting that although geofence warrants can be more “limited temporally” than the data sought in Carpenter, geofence location data is still highly invasive because it can expose sensitive information about a person’s associations and allow police to “follow” them into private spaces.
Second, the court found that even though investigators seek warrants for geofence location data, these searches are inherently unconstitutional. As the court noted, geofence warrants require a provider, almost always Google, to search “the entirety” of its reserve of location data “while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result.” Therefore, “the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient.”
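The mechanics the court describes can be illustrated with a toy sketch. This is a hypothetical data model for illustration only (the `LocationRecord` type and `geofence_query` function are invented here; real providers' systems are far more complex and not public): because a geofence query names no suspect, only a bounding box and a time window, it necessarily scans every user's stored location history to find whoever happens to match.

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix epoch seconds

def geofence_query(records, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return the device IDs whose records fall inside the box and time window.

    Note the shape of the search: there is no target device, so every
    stored record must be examined -- a scan over the entire database.
    """
    hits = set()
    for r in records:  # iterates over ALL users' data, not one suspect's
        if (lat_min <= r.lat <= lat_max
                and lon_min <= r.lon <= lon_max
                and t_start <= r.timestamp <= t_end):
            hits.add(r.device_id)
    return hits
```

The point of the sketch is structural: the filter conditions are purely geographic and temporal, which is exactly the "temporal and geographic location where any given user may turn up post-search" that the court found constitutionally insufficient.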
Unsurprisingly, however, the court found that in 2018, police could have relied on such a warrant in “good faith,” because geofence technology was novel, and police reached out to other agencies with more experience for guidance. This means that the evidence they obtained will not be suppressed in this case.
Nevertheless, it is gratifying to see an appeals court recognize the fundamental invasions of privacy created by these warrants and uphold our constitutional tradition prohibiting general searches. Police around the country have increasingly relied on geofence warrants and other reverse warrants, and this opinion should act as a warning against narrow applications of Fourth Amendment precedent in these cases.
Related Cases: Carpenter v. United States