Mercari suffers data leak via unauthorized access; customer information, source code, and more exposed
[B] Displaced people in Myanmar increase amid military attacks; aid groups appeal for food and other assistance
[B] Satirical verse: "Even the virus won't settle down after being dismissed as a mere fart"
Study Group on Standardization of Tax Systems: Fixed Asset Tax Working Team (10th meeting: functional requirements)
"Technical Conditions for Upgrading Fixed Communication Systems in the 11/15/18 GHz Bands"
"Basic Approach to Operational Coordination of Spatial (Over-the-Air) Wireless Power Transfer Systems" and publication of the results of the call for comments
Internet Traffic Study Group (7th meeting): distributed materials
Study Group on Interconnection Charge Calculation (44th meeting)
Information and Communications Law Research Group, Communications Law Subcommittee (first meeting of FY2021)
627th Bidding Oversight Subcommittee (meeting materials)
Information and Communications Council, Telecommunications Business Policy Division, Telecommunications Numbering Policy Committee (26th meeting): handouts and minutes
Amid Systemic Censorship of Palestinian Voices, Facebook Owes Users Transparency
Over the past few weeks, as protests in—and in solidarity with—Palestine have grown, so too have violations of the freedom of expression of Palestinians and their allies by major social media companies. From posts incorrectly flagged by Facebook as incitement to violence, to financial censorship of relief payments made on Venmo, and the removal of Instagram Stories (which also heavily affected activists in Colombia, Canada, and Brazil), Palestinians are experiencing an unprecedented level of censorship during a time when digital communications are absolutely critical.
The vitality of social media during a time like this cannot be overstated. Journalistic coverage from the ground is minimal—owing to a number of factors, including restrictions on movement by Israeli authorities—while, as the New York Times reported, misinformation is rife and has been repeated by otherwise reliable media sources. Israeli officials have even been caught spreading misinformation on social media.
Palestinian digital rights organization 7amleh has spent the past few weeks documenting content removals, and a coalition of more than twenty organizations, including EFF, has reached out to social media companies, including Facebook and Twitter. The demands include that the companies immediately stop censoring—and reinstate—the accounts and content of Palestinian voices, open an investigation into the takedowns, and transparently and publicly share the results of those investigations.
A brief history
Palestinians face a number of obstacles when it comes to online expression. Depending on where they reside, they may be subject to differing legal regimes, and face censorship from both Israeli and Palestinian authorities. Most Silicon Valley tech companies have offices in Israel (but not Palestine), while some—such as Facebook—have struck particular deals with the Israeli government to deal with incitement. While incitement to violence is indeed against the company’s community standards, groups like 7amleh say that this agreement results in inconsistent application of the rules, with incitement against Palestinians often allowed to remain on the platform.
Additionally, the presence of Hamas—which is the democratically elected government of Gaza, but is also listed as a terrorist organization by the United States and the European Union—complicates things for Palestinians, as any mention of the group (including, at times, something as simple as the group’s flag flying in the background of an image) can result in content removals.
And it isn’t just Hamas—last week, Buzzfeed documented an instance where references to Jerusalem’s Al Aqsa mosque, one of the holiest sites in Islam, were removed because “Al Aqsa” is also contained within another designated group, Al Aqsa Martyrs’ Brigade. Although Facebook apologized for the error, this kind of mistake has become all too common, particularly as reliance on automated moderation has increased amidst the pandemic.
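The Al Aqsa incident above is the classic failure mode of naive substring-based blocklist matching, a common shortcut in automated moderation. The sketch below is purely illustrative (the blocklist entry and matching logic are hypothetical assumptions, not Facebook's actual system); it shows how a post about the mosque can be flagged merely because it shares a fragment with a designated group's name:

```python
# Hypothetical blocklist entry and matching logic -- illustrative only,
# not Facebook's actual moderation system.
BLOCKLIST = ["al aqsa martyrs brigade"]

def phrase_fragments(entry, n=2):
    """All n-word fragments of a blocklist entry, e.g. 'al aqsa'."""
    words = entry.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def naive_flag(post):
    """Flag a post if it contains any fragment of any blocklist entry."""
    text = post.lower()
    return any(
        fragment in text
        for entry in BLOCKLIST
        for fragment in phrase_fragments(entry)
    )

# A post about the mosque shares the fragment "al aqsa" with the listed
# group's name, so it is flagged: a false positive.
print(naive_flag("Prayers at the Al Aqsa mosque tonight"))  # True
print(naive_flag("The weather in Jerusalem today"))         # False
```

A system like this has no notion of context or intent, which is precisely the gap the Oversight Board criticisms discussed below point to.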
“Dangerous Individuals and Organizations”
Facebook’s Community Standard on Dangerous Individuals and Organizations gained a fair bit of attention a few weeks back when the Facebook Oversight Board affirmed that President Trump violated the standard with several of his January 6 posts. But the standard is also regularly used as justification for the widespread removal of content by Facebook pertaining to Palestine, as well as other countries like Lebanon. And it isn’t just Facebook—last Fall, Zoom came under scrutiny for banning an academic event at San Francisco State University (SFSU) at which Palestinian figure Leila Khaled, alleged to belong to another US-listed terrorist organization, was to speak.
SFSU fell victim to censorship again in April of this year when its Arab and Muslim Ethnicities and Diasporas (AMED) Studies Program discovered that its Facebook event “Whose Narratives? What Free Speech for Palestine?,” scheduled for April 23, had been taken down for violating Facebook Community Standards. Shortly thereafter, the program’s entire page, “AMED STUDIES at SFSU,” was deleted, along with its years of archival material on classes, syllabi, webinars and vital discussions not only on Palestine but on Black, Indigenous, Asian and Latinx liberation, gender and sexual justice, and a variety of Jewish voices and perspectives including opposition to Zionism. Although no specific violation was noted, Facebook has since confirmed that the post and the page were removed for violating the Dangerous Individuals and Organizations standard. This was in addition to cancellations by other platforms including Google, Zoom, and Eventbrite.
Given the frequency and the high-profile contexts in which Facebook’s Dangerous Individuals and Organizations Standard is applied, the company should take extra care to make sure the standard reflects freedom of expression and other human rights values. But to the contrary, the standard is a mess of vagueness and overall lack of clarity—a point that the Oversight Board has emphasized.
Facebook has said that the purpose of this community standard is to “prevent and disrupt real-world harm.” In the Trump ruling, the Oversight Board found that President Trump’s January 6 posts readily violated the Standard. “The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained.”
But in two previous decisions, the Oversight Board criticized the standard. In a decision overturning Facebook’s removal of a post featuring a quotation misattributed to Joseph Goebbels, the Oversight Board admonished Facebook for not including all aspects of its policy on dangerous individuals and organizations in the community standard.
Facebook apparently maintains self-designated lists of individuals and organizations subject to the policy that it does not share with users, and treats any quoting of such persons as an “expression of support” unless the user provides additional context to make their benign intent explicit, a condition also not disclosed to users. Facebook's lists evidently include US-designated foreign terrorist organizations, but also seem to go beyond that list.
As the Oversight Board concluded, “this results in speech being suppressed which poses no risk of harm” and found that the standard fell short of international human rights standards: “the policy lacks clear examples that explain the application of ‘support,’ ‘praise’ and ‘representation,’ making it difficult for users to understand this Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users.” Moreover, “the policy fails to explain how it ascertains a user’s intent, making it hard for users to foresee how and when the policy will apply and conduct themselves accordingly.”
The Oversight Board recommended that Facebook explain and provide examples of the application of key terms used in the policy, including the meanings of “praise,” “support,” and “representation.” The Board also recommended that the community standard provide clearer guidance to users on making their intent apparent when discussing such groups, and that a public list of “dangerous” organizations and individuals be provided to users.
The United Nations Special Rapporteur on Freedom of Expression also expressed concern that the standard, and specifically the language of “praise” and “support,” was “excessively vague.”
Recommendations
Policies such as Facebook’s that restrict references to designated terrorist organizations may be well-intentioned, but in their blunt application, they can have serious consequences for documentation of crimes—including war crimes—as well as vital expression, including counterspeech, satire, and artistic expression, as we’ve previously documented. While companies, including Facebook, have regularly claimed that they are required to remove such content by law, it is unclear to what extent this is true. The legal obligations are murky at best. Regardless, Facebook should be transparent about the composition of its "Dangerous Individuals and Organizations" list so that users can make informed decisions about what they post.
But while some content may require removal under certain jurisdictions, it is clear that other decisions are made on the basis of internal policies and external pressure—and are often not in the best interest of the individuals that they claim to serve. This is why it is vital that companies include vulnerable communities—in this case, Palestinians—in policy conversations.
Finally, transparency and appropriate notice to users would go a long way toward mitigating the harm of such takedowns—as would ensuring that every user has the opportunity to appeal content decisions in every circumstance. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a baseline for companies.
[Political cartoon] The Kawai scandal's ¥150 million: what was it spent on?! (Art: Happo Bijin)
Like the Imperial Diet at the time of the Manchurian Incident: scrap the "Important Land Survey and Regulation Bill"
Tanpopo-sha newsletter No. 4206: To resist nuclear power is to question the nature of the state (Tatsuya Murakami)
Inside the Digital Society: The trajectory of the internet - past, present, future
Last week I wrote about how we define the internet. This week, some thoughts about its history and its trajectory. The internet has now been around long enough for its history to be written.
Glyph variant discrepancies in tax payment notices cause trouble in Chiba City