Alert: Security Alert Regarding Vulnerabilities in Adobe Acrobat and Reader (APSB24-92) (Published)
Let's All Write! 12.12 Public Comment Seminar on the Economic Security and Secrecy Protection Laws
Weekly Report: Multiple Vulnerabilities in UNIVERGE IX/IX-R/IX-V Series Routers
Business Outlook Survey of Corporate Enterprises (October–December 2024 Quarter)
[B] Kenji Nozoe's "Records of the Forced Transport of Koreans to Akita Prefecture," Part 18: The Cableway of the Ōshū Anthracite Mine, Katsurase, Kitaakita City
Radio Regulatory Council, Effective Use Evaluation Subcommittee (38th Meeting): Meeting Materials
Radio Regulatory Council, Effective Use Evaluation Subcommittee (37th Meeting): Meeting Materials
Results of the Call for Comments on the Partial Amendment of the Ordinance for Enforcement of the Telecommunications Business Act, and the Report from the Information and Communications Administration / Postal Administration Council
Call for Public Comments on the Draft Cabinet Order Partially Amending the Cabinet Order on the Regulation of Hazardous Materials
Approval of Changes to the Interconnection Agreements Concerning the Category I Designated Telecommunications Facilities of NTT East and NTT West
Call for Comments on the Draft Rules for Calculating the Category II Grants and Category II Contributions for the Provision of Category II Basic Telecommunications Services
Information and Communications Administration / Postal Administration Council, Telecommunications Business Subcommittee (150th Meeting): Handouts, Summary, and Minutes
Administrative Evaluation Bureau, General Affairs Division: Fixed-Term Staff Recruitment Information
Introducing EFF’s New Video Series: Gate Crashing
The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres of novel writing, culture criticism, and journalism were populated by well-off straight white men, with anyone not meeting one of those criteria being an outlier. Add in giant corporations acting as gatekeepers to those spheres and it was a very homogenous culture. The internet has changed that.
There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences like these when thinking about new internet legislation or policy.
In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel, co-creators of the Rec Center newsletter, in a wide-ranging discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way.
[Embedded YouTube video: https://www.youtube.com/embed/aeplIxvskx8]
Privacy info: this embed will serve content from youtube.com.
Speaking Freely: Tomiwa Ilori
Interviewer: David Greene
This interview has been edited for length and clarity.
Tomiwa Ilori is an expert researcher and a policy analyst with focus on digital technologies and human rights. Currently, he is an advisor for the B-Tech Africa Project at UN Human Rights and a Senior ICFP Fellow at HURIDOCS. His postgraduate qualifications include masters and doctorate degrees from the Centre for Human Rights, Faculty of Law, University of Pretoria. All views and opinions expressed in this interview are personal.
Greene: Why don’t you start by introducing yourself?
Tomiwa Ilori: My name is Tomiwa Ilori. I’m a legal consultant with expertise in digital rights and policy. I work with a lot of organizations on digital rights and policy including information rights, business and human rights, platform governance, surveillance studies, data protection and other aspects.
Greene: Can you tell us more about the B-Tech project?
The B-Tech project is a project by the UN human rights office and the idea behind it is to mainstream the UN Guiding Principles on Business and Human Rights (UNGPs) into the tech sector. The project looks at, for example, how social media platforms can apply human rights due diligence frameworks or processes to their products and services more effectively. We also work on topical issues such as Generative AI and its impacts on human rights. For example, how do the UNGPs apply to Generative AI? What guidance can the UNGPs provide for the regulation of Generative AI and what can actors and policymakers look for when regulating Generative AI and other new and emerging technologies?
Greene: Great. This series is about freedom of expression. So my first question for you is what does freedom of expression mean to you personally?
I think freedom of expression is like oxygen, more or less like the air we breathe. There is nothing about being human that doesn’t involve expression, just like drawing breath. Even beyond just being a right, it’s an intrinsic part of being human. It’s embedded in us from the start. You have this natural urge to want to express yourself right from being an infant. So beyond being a human right, it is something you can almost not do without in every facet of life. Just to put it as simply as possible, that’s what it means to me.
Greene: Is there a single experience or several experiences that shaped your views about freedom of expression?
Yes. For context, I’m Nigerian and I also grew up in the Southwestern part of the country where most of the Yorùbá people live. As a Yoruba person and as someone who grew up listening and speaking the Yoruba language, language has a huge influence on me, my philosophy and my ideas. I have a mother who loves to speak in proverbs and mostly in Yorùbá. Most of these proverbs which are usually profound show that free speech is the cornerstone of being human, being part of a community, and exercising your right to life and existence. Sharing expression and growing up in that kind of community shaped my worldview about my right to be. Closely attached to my right to be is my right to express myself. More importantly, it also shaped my view about how my right to be does not necessarily interrupt someone else’s right to be. So, yes, my background and how I grew up really shaped me. Then, I was fortunate that I also grew up and furthered my studies. My graduate studies including my doctorate focused on freedom of expression. So I got both the legal and traditional background grounded in free speech studies and practices in unique and diverse ways.
Greene: Can you talk more about whether there is something about Yorùbá language or culture that is uniquely supportive of freedom of expression?
There’s a proverb that goes, “A kìí pa ohùn mọ agogo lẹ́nu,” and what that means in a loose English translation is that you cannot shut the clapperless bell up; it is the bell’s right to speak, to make a sound. So you have no right to stop a bell from doing what it’s meant to do, which suggests that it is everyone’s right to express themselves. It suffices to say that according to that proverb, you have no right to stop people from expressing themselves. There’s another proverb that is a bit similar, which is, “Ọmọdé gbọ́n, àgbà gbọ́n, lafí dá ótù Ifẹ̀,” which when loosely translated refers to how both the old and the young collaborate to make the most of a society by expressing their wisdom.
Greene: Have you ever had a personal experience with censorship?
Yes and I will talk about two experiences. First, and this might not fit the technical definition of censorship, but there was a time when I lived in Kampala and I had to pay tax to access the internet which I think is prohibitive for those who are unable to pay it. If people have to make a choice between buying bread to eat and paying a tax to access the internet, especially when one item is an opportunity cost for the other, it makes sense that someone would choose bread over paying that tax. So you could say it’s a way of censoring internet users. When you make access prohibitive through taxation, it is also a way of censoring people. Even though I was able to pay the tax, I could not stop thinking about those who were unable to afford it and for me that is problematic and qualifies as a kind of censorship.
Another one was actually very recent. During the protests in Nairobi, Kenya in June of 2024, I experienced an internet shutdown for the first time, even though the internet service provider insisted that they did not shut down or throttle the internet. According to the provider, the disruption was the result of an undersea cable cut. Suddenly my emails just stopped working and my Twitter (now X) feed wouldn’t load. The connection appeared to work for a few seconds, then all of a sudden it would stop, then work for some time, then nothing. I felt incapacitated and helpless. That’s the way I would describe it. I felt like, “Wow, I have written, thought, spoken about this so many times and this is it.” For the first time I understood what it means to actually experience an internet shutdown, and it’s not just the experience, it’s the helplessness that comes with it too.
Greene: Do you think there is ever a time when the government can justify an internet shutdown?
The simple answer is no. In my view, those who carry out internet shutdowns, especially state actors, believe that since freedom of expression and some other associated rights are not absolute, they have every right to restrict them without measure. I think what many actors that are involved in internet shutdowns use as justification is a mask for their limited capacity to do the right thing. Actors involved in shutting down the internet say that they usually do not have a choice. For example, they say that hate speech, misinformation, and online violence are being spread online in such a way that it could spill over into offline violence. Some have even gone as far as saying that they’re shutting down the internet because they want to curtail examination fraud. When these are the kind of excuses used by actors, it demonstrates the limited understanding of actors on what international human rights standards prescribe and what can actually be done to address the online harms that are used to justify internet shutdowns.
Let me use an example: international human rights standards provide clear processes for instances where state actors must address online harms or where private actors must address harms to forestall offline violence. The perception is that these standards do not even give room for addressing harms, which is not the case. The process requires that whatever action you take must be legal, i.e. provided clearly in a law that is not vague and that shows unequivocally and in detail the nature of the right being limited. Another requirement is that whatever action is taken to limit a right must be proportional. If you are trying to fight hate speech online, don’t you think it is disproportionate to shut down the entire network just to fight one section of people spreading such speech? Another requirement is that its necessity must be justified, i.e. it must protect a clearly defined public interest or order, which must be specific and not the blanket term ‘national security.’ Additionally, international human rights law is clear that these requirements are cumulative, i.e. you cannot fulfill the requirement of legality and not fulfill that of proportionality or necessity.
This shows that regulation of online harms needs to be very specific. So, for example, state actors can claim that a particular piece of content or speech is causing harm, which they must prove according to the requirements above, and can make a request such that just that content alone is restricted. These requests must also be put in context. Take hate speech as an example: there’s the Rabat Plan of Action, developed by the UN, which is very clear on the conditions that must be met before speech can be categorized as hate speech. So are these conditions met by state actors before, for example, they ask platforms to remove particular hate content? There are steps and processes involved in the regulation of problematic content, but state actors rarely go for targeted removals that comply with international human rights standards; they usually go for the entire network.
I’d also like to add that I find it problematic and ironic that most state actors who are supposedly champions of digital transformation are also the ones quick to shut down the internet during political events. There is no digital transformation that does not include a free, accessible and interoperable internet. These are some of the challenges and problematic issues that I think we need to address in more detail so we can hear each other better, especially when it comes to regulating online speech and fighting internet shutdowns.
Greene: So shutdowns are then inherently disproportionate and not authorized by law. You talked about the types of speech that might be limited. Can you give us a sense of what types of online speech you think might be appropriately regulated by governments?
For categories of speech that can be regulated, of course, that includes hate speech. Under international law, Article 20 of the International Covenant on Civil and Political Rights (ICCPR) prohibits propaganda for war and advocacy of hatred, and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides for this. However, these provisions are not carte blanche for state actors. The major conditions that define hate speech must be fulfilled before speech can be regarded as such. This is done in order to address instances where powerful actors define what constitutes hate speech and violate human rights under the guise of combating it. There are still laws that criminalize disaffection against the state which are used to prosecute dissent.
Greene: In Nigeria or in Kenya or just on the continent in general?
Yes, there are countries that still have lèse-majesté laws in their criminal laws and penal codes. Countries like Nigeria have tried to come up with a version of such laws for the online space, but these have been beaten back, mostly by civil society actors.
So hate speech does qualify as speech that could be limited, but with caveats. There are several conditions that must be met before speech qualifies as hate speech. There must be context around the speech. For example, what kind of power does the person who makes the speech wield? What is the likelihood of that speech leading to violence? What audience has the speech been made to? These are some of the criteria that must be fulfilled before you say, “okay, this qualifies as hate speech.”
There’s also other clearly problematic content, child sexual abuse material for example, that are prima facie illegal and must be censored or removed or disallowed. That goes without saying. It’s customary international human rights law especially as it applies to platform governance. Another category of speech could also be non-consensual sharing of intimate images which could qualify as online gender-based violence. So these are some of the categories that could come under regulation by states.
I also must sound a note of caution that there are contexts to applying speech laws. That is also the reason why speech laws are among the most difficult regulations to come up with: they are usually context-dependent, especially when they are to be balanced against international human rights standards. Of course, some of the biggest fears in platform regulation that touch on freedom of expression are how state actors could weaponize those laws to track or attack dissent, and how businesses platform speech mainly for profit.
Greene: Is misinformation something the government should have a role in regulating or is that something that needs to be regulated by the companies or by the speakers? If it’s something we need to worry about, who has a role in regulating it?
State actors have a role. But in my opinion I don’t think it’s regulation. The fact that you have a hammer does not mean that everything must look like a nail. The fact that a state actor has the power to make laws does not mean that it must always make laws on all social problems. I believe non-legal and multi-stakeholder solutions are required for combatting online harms. State actors have tried to do what they do best by coming up with laws that regulate misinformation. But where has that led us? The arrest and harassment of journalists, human rights defenders and activists. So it has really not solved any problems.
When your approach is not solving any problems, I think it’s only right to re-evaluate. That’s the reason I said state actors have a role. In my view, state actors need to step back in a sense that you don’t necessarily need to leave the scene, but step back and allow for a more holistic dialogue among stakeholders involved in the information ecosystem. You could achieve a whole lot more through digital literacy and skills than you will with criminalizing misinformation. You can do way more by supporting journalists with fact-checking skills than you will ever achieve by passing overbroad laws that limit access to information. You can do more by working with stakeholders in the information ecosystem like platforms to label problematic content than you will ever by shutting down the internet. These are some of the non-legal methods that could be used to combat misinformation and actually get results. So, state actors have a role, but it is mainly facilitatory in the sense that it should bring stakeholders together to brainstorm on what the contexts are and the kinds of useful solutions that could be applied effectively.
Greene: What do you feel the role of the companies should be?
Companies also have an important role, one of which is to respect human rights in the course of providing services. What I always say for technology companies is that, if a certain jurisdiction or context is good enough to make money from, it is good enough to pay attention to and respect human rights there.
One of the perennial issues that platforms face in addressing online harms is aligning their community standards with international human rights standards. But oftentimes what happens is that corporate-speak is louder than the human rights language in many of these standards.
That said, some of the practical things that platforms could do is to step out of the corporate talk of, “Oh, we’re companies, there’s not much we can do.” There’s a lot they can do. Companies need to get more involved, step into the arena and work with key state actors, including civil society, to educate and develop capacity on how their platforms actually work. For example, what are the processes involved in taking down a piece of content? What are the processes involved in getting appeals? What are the processes involved in actually getting redress when a piece of content has been wrongly taken down? What are the ways platforms can accurately—and I say accurately emphatically because I’m not speaking about using automated tools—label content? Platforms also have a responsibility to be totally invested in the contexts they do business in. What are the triggers for misinformation in a particular country? Elections, conflict, protests? These are like early warning systems that platforms need to start paying attention to, to be able to understand their contexts and address the harms on their platforms better.
Greene: What’s the most pressing free speech issue in the region in which you work?
Well, for me, I think of a few key issues. Number one, which has been going on for the longest time, is the government’s use of laws to stifle free speech. Most of the laws that are used are cybercrime laws, electronic communication laws, and old press codes and criminal codes. They were never justified and they’re still not justified.
A second issue is the privatization of speech by companies regarding the kind of speech that gets promoted or demoted. What are the guidelines on, for example, political advertisements? What are the guidelines on targeted advertisement? How are people’s data curated? What is it like in the algorithm black box? Platforms’ roles on who says what, how, when and where also is a burning free speech issue. And we are moving towards a future where speech is being commodified and privatized. Public media, for example, are now being relegated to the background. Everyone wants to be on social media and I’m not saying that’s a terrible thing, but it gives us a lot to think about, a lot to chew on.
Greene: And finally, who is your free speech hero?
His name is Felá Aníkúlápó Kútì. Fela was a political musician and the originator of Afrobeat (not afrobeats with an “s,” but the original Afrobeat from which that genre came). Fela never started out as a political musician, but his music became highly political and highly popular among the people for obvious reasons. His music also became timely because, as a political musician in Nigeria who lived during the brutal military era, it resonated with a lot of people. He was a huge thorn in the flesh of despotic Nigerian and African leaders. So, for me, Fela is my free speech hero. He said quite a lot with his music that many people in his generation would never dare to say because of the political climate at that time. Taking such risks even in the face of brazen violence and even death was remarkable.
Fela was not just a political musician who understood the power of expression. He was also someone who understood the power of visual expression. He’s unique in his own way and expresses himself through music, through his lyrics. He’s someone who has inspired a lot of people including musicians, politicians and a lot of new generation activists.
A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029
The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally. The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere.
As the new mandate of the European institutions begins – a period where newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world.
Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:
- The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
- The EU must create conditions of fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies.
- The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users.
- The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act.
Read on for our full set of recommendations.
[Rally in Front of the Diet] “Protect the Constitution”: First Rally Since the General Election Draws 2,300; Behind the Unopened Trial over the Attack on Former Prime Minister Abe; The International Mission of Japan as an Atomic-Bombed Nation = Yoshihisa Hosaka
Statewatch is seeking new Trustees
We are looking for Trustees who meet the following criteria:
- are committed to the vision, mission and objectives of Statewatch;
- have experience of working in the field - and with people working in the field - of human rights and civil liberties and the state in the UK, the EU, and wider Europe;
- have an understanding of how civil society organisations work, and how they should work;
- are able to commit to the responsibilities of being a Trustee, including by regularly reviewing documentation and paperwork, and providing inputs by email and in Trustee meetings (quarterly, online, with additional meetings as required). The term for Trustees is 3 years (renewable up to 3 times).
Please note that there are legal restrictions on who may serve as a Trustee. The board is currently co-chaired by two of the Trustees (Vicky Canning and Lilana Keith).
In particular, we are looking for people with expertise and experience in the following areas:
- fundraising;
- legal knowledge and expertise, particularly as regards employment and charity law;
- organisational growth, development and improving internal policies and procedures;
- carrying out research and investigation, working for civil society organisations, or campaigning and activism on:
- critical perspectives on counter-terrorism and securitisation;
- addressing harms of policing and prisons;
- anti-racism;
- threats to civil liberties in the United Kingdom;
- being based in, or having good knowledge of, the Central and Eastern Europe geographical region.
Our vision
An open Europe of democracy, civil liberties, personal and political rights, free movement, freedom of information, equality and diversity.
Our mission
To monitor, analyse and expose state activity that threatens civil liberties, human rights and democratic standards in order to inform and enable a culture of diversity, debate and dissent.
Our objectives
At the beginning of 2022 we adopted a new five-year strategic plan based around three overarching themes:
- Objective 1: Strengthen civil society’s access to information
- Objective 2: Expose and challenge new means of surveillance, coercion and control
- Objective 3: Build a more sustainable and more effective organisation
You can read a summary of the strategic plan here.
Applicants are asked to look here and at our website generally to familiarise themselves with our work, and to send us:
- A CV.
- A short statement (no more than one page) explaining what you will bring to Statewatch, including details of your involvement in similar organisations and/or related activities.
- A completed equality and diversity monitoring form (pdf).
- The contact details of two people who can provide a reference.
Please send these to Rahmatolla Tavakkoli at admin [at] statewatch.org
Please note that some applications will be reviewed by the whole Board of Trustees and Director of Statewatch. All information provided will be treated in accordance with our Privacy Policy.
The deadline for applications is 18:00 GMT on Monday 13 January.
Please note that we will only be able to respond to applicants who are invited for interview. We will contact interviewees by Friday 31 January.
If you have any inquiries about this role, please write to: office [at] statewatch.org