Strong End-to-End Encryption Comes to Discord Calls
We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications.
End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.
When a call is end-to-end encrypted, you’ll see a green lock icon. While it’s not required to use the service, Discord also offers an optional way to verify that a call’s encryption is not being tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it to everyone else on the line, preferably over a different chat app such as Signal, to confirm that no one is compromising participants’ use of end-to-end encryption. This guards against someone impersonating a participant or silently listening in on the conversation.
By default, you have to do this for every call you initiate if you wish to verify that the communication is secure. There is an option to enable persistent verification keys, which means your chat partners only need to verify you once per device you own (e.g. if you sometimes call from a phone and sometimes from a computer, they’ll need to verify each one).
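To make the verification step above concrete, here is a minimal sketch of how an out-of-band code like the “Voice Privacy Code” can work. This is not Discord’s actual DAVE implementation; the function name and key format are hypothetical. The idea is that every participant hashes the same (sorted, so order-independent) key material, so the short codes match only if everyone sees the same keys, i.e. no one is sitting in the middle with substituted keys:

```python
import hashlib

def voice_privacy_code(participant_public_keys: list[bytes], digits: int = 30) -> str:
    """Derive a short, human-comparable code from all participants' public keys.

    Every client hashes the same sorted key material, so the codes agree
    only if all participants see identical keys (no man-in-the-middle).
    Hypothetical sketch, not Discord's actual DAVE construction.
    """
    digest = hashlib.sha256(b"".join(sorted(participant_public_keys))).digest()
    # Map the digest to a fixed number of decimal digits for easy reading aloud.
    code = str(int.from_bytes(digest, "big"))[:digits]
    # Group into chunks of five digits for readability.
    return " ".join(code[i:i + 5] for i in range(0, len(code), 5))

# Both participants compute the code over the same set of keys, in any order:
alice_view = voice_privacy_code([b"key-A", b"key-B"])
bob_view = voice_privacy_code([b"key-B", b"key-A"])
assert alice_view == bob_view  # codes match, so the keys were not substituted
```

If an attacker had replaced one participant’s key, the two computed codes would differ, which is exactly what comparing them over a separate channel detects.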
Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, as well as reliably discovered in a secure way by conversation partners, is no trivial task. Other apps such as Signal require some manual user interaction to ensure the sharing of key-material across multiple devices is done in a secure way. Discord has chosen to avoid this process for the sake of usability, so that even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.
While this is an understandable trade-off, we hope Discord takes the extra step of allowing users with heightened security concerns to share their persistent keys across devices. For the sake of usability, it could still generate separate keys for each device by default while offering cross-device key sharing as an opt-in extra step. Sharing keys would also avoid the risk of conversation partners being able to tell which of your devices you are calling from across calls. We believe making persistent keys easier to use and cross-device would make users safer as well: each conversation partner would need to be verified once, rather than on every call.
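The trade-off discussed above can be illustrated with a toy model. None of this is Discord’s actual code or the DAVE protocol; the key and fingerprint formats are invented for illustration. With per-device persistent keys, a conversation partner ends up pinning a separate fingerprint for each of your devices; with one shared key, they pin a single fingerprint once:

```python
import hashlib
import secrets

def new_identity_key() -> bytes:
    """Stand-in for generating a real per-device signing key pair."""
    return secrets.token_bytes(32)

def fingerprint(key: bytes) -> str:
    """Short human-comparable digest of a key (illustrative only)."""
    return hashlib.sha256(key).hexdigest()[:16]

# Current design: each device generates its own persistent key, so a
# partner must verify a separate fingerprint for each device you own.
phone_key, laptop_key = new_identity_key(), new_identity_key()
partner_pins = {fingerprint(phone_key), fingerprint(laptop_key)}
assert len(partner_pins) == 2  # two devices, two verifications

# The opt-in cross-device sharing argued for above: the same key on
# every device means one fingerprint, verified a single time.
shared_key = new_identity_key()
partner_pins_shared = {fingerprint(shared_key), fingerprint(shared_key)}
assert len(partner_pins_shared) == 1  # one verification covers all devices
```

The set sizes capture the user-facing cost: per-device keys multiply the number of verifications by the number of devices, while a shared key keeps it at one.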
Discord has designed and implemented DAVE in a commendably transparent way: publishing a protocol whitepaper and an open-source library, commissioning an audit from well-regarded outside researchers, and expanding its bug-bounty program to reward security researchers who report vulnerabilities in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud the approach.
But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.
End-to-end encrypted video and audio chats are a good step forward, one that too many messaging apps have yet to take. But because protection of our text conversations is important, and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.
Canada’s Leaders Must Reject Overbroad Age Verification Bill
Canadian lawmakers are considering a bill, S-210, that’s meant to benefit children, but would sacrifice the security, privacy, and free speech of all internet users.
First introduced in 2023, S-210 seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. Typically, these services will require people to show government-issued ID to get on the internet. According to bill authors, this is needed to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.”
The motivation is laudable, but requiring people of all ages to show ID to get online won’t help women or young people. If S-210 isn’t stopped before it reaches its third reading and final vote in the House of Commons, Canadians will be subjected to a repressive and unworkable age verification regime.
Flawed Definitions Would Encompass Nearly the Entire Internet
The bill’s scope is vast. S-210 creates legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for anyone who merely transmits it, knowingly or not.
Internet infrastructure intermediaries, which often do not know what type of content they are transmitting, would also be liable, as would all manner of services, from social media sites to search engines and messaging platforms. Each would be required either to implement age verification or to prevent access to the material entirely, unless it can claim the material serves a “legitimate purpose related to science, medicine, education or the arts.”
Basic internet infrastructure shouldn’t be regulating content at all, but S-210 makes no such distinction. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. History shows that when platforms set out to ban sexual content, over-censorship is inevitable.
Rules banning sexual content usually hurt marginalized communities and groups that serve them the most. That includes organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom.
Promoting Dangerous Age Verification Methods
S-210 notes that “online age-verification technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights.”
This premise is simply wrong: no current technology can verify users’ ages while protecting their privacy. The bill does not specify what technology must be used, leaving that to subsequent regulation, but the age verification systems that do exist are deeply problematic. It is far too likely that any such regulation would embrace tools that retain sensitive user data, exposing it to sale or breaches, with no guardrails preventing companies from doing whatever they like with that data once collected.
We’ve said it before: age verification systems are surveillance systems. Users have no way to be certain that the data they hand over won’t be retained and used in unexpected ways, or even shared with unknown third parties. The bill asks companies to maintain user privacy and destroy any personal data collected, but doesn’t back that requirement with meaningful penalties. That’s not good enough.
Companies responsible for storing or processing sensitive documents like drivers’ licenses can encounter data breaches, potentially exposing not only personal data about users, but also information about the sites that they visit.
Finally, age-verification systems that depend on government-issued identification exclude altogether Canadians who do not have that kind of ID.
Fundamentally, S-210 leads to the end of anonymous access to the web. Instead, Canadian internet access would become a series of checkpoints that many people simply would not pass, either by choice or because the rules are too onerous.
Dangers for Everyone, But This Can Be Stopped
Canada’s S-210 is part of a worldwide wave of proposals seeking to gate access to sexual content online. Many of these proposals share the same flaws, and S-210 is among the worst of them. Both Australia and France have paused rollouts of age verification systems after finding that these systems could neither adequately protect individuals’ data nor, on their own, address the problem of online harms. Canada should take note of these findings.
It's not too late for Canadian lawmakers to drop S-210. It’s what has to be done to protect the future of a free Canadian internet. At the very least, the bill’s broad scope must be significantly narrowed to protect user rights.