Six Years of Dangerous Misconceptions Targeting Ola Bini and Digital Rights in Ecuador

2 weeks 2 days ago

Ola Bini was first detained in Quito’s airport six years ago, called a “Russian hacker,” and accused of “alleged participation in the crime of assault on the integrity of computer systems.” It wouldn't take long for Ecuadorean authorities to find out that he was Swedish and an internationally respected free software developer and computer expert. 

Lacking evidence, authorities rapidly changed the criminal offense underpinning the accusation against Bini and struggled to build a case based on a mere image that shows no wrongdoing. Yet, Bini remained arbitrarily detained for 70 days in 2019 and outrageously remains under criminal prosecution.

This week, the Observation Mission monitoring Ola Bini’s case is again calling out the prosecution’s inaccuracies and abuses, which weaponize misunderstandings about computer security and undermine both Bini’s rights and digital security more broadly. The Observation Mission comprises digital and human rights organizations, including EFF. Specifically, we highlight how Ecuadorean law enforcement authorities have tried to associate the use of Tor, a crucial privacy protection tool, with inherently suspicious activity. 

Following a RightsCon 2025 session about the flaws and risks of such an interpretation, this week we are releasing a technical statement (see below) explaining why Ecuadorean courts must reaffirm Bini’s innocence and repudiate misconceptions about technology and technical knowledge that only disguise the prosecutor’s lack of evidence for the accusations against Bini. 

Let’s not forget that Bini was unanimously acquitted in early 2023. Nonetheless, the Prosecutor’s Office appealed, and the majority of the appeals court found him guilty of attempted unauthorized access to a telecommunications system. The reasoning leading to this conclusion has many problems, including conflating the concepts of private and public IP addresses and disregarding key elements of the acquittal ruling.  
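To illustrate the distinction the appeals ruling blurs: private IP addresses (the RFC 1918 ranges) identify machines only within a local network, while public addresses are routable on the open internet, so traffic involving a private address cannot by itself demonstrate access to a remote system. A minimal sketch using Python’s standard library (the sample addresses are illustrative only, not taken from the case file):

```python
import ipaddress

# RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
# are only meaningful inside a local network; public addresses are
# globally routable. 8.8.8.8 is a well-known public DNS resolver.
samples = ["10.0.0.5", "172.16.0.1", "192.168.1.10", "8.8.8.8"]

for addr in samples:
    ip = ipaddress.ip_address(addr)
    kind = "private" if ip.is_private else "public"
    print(f"{addr}: {kind}")
```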

The ruling also refers to the use of Tor. Among other issues, the prosecution argued that Tor is a tool known only to technical experts, since its purpose is to hide your identity on the internet while leaving no trace that you're using it. As we stressed at RightsCon, this argument turns the use of a privacy-protective, security-enhancing technology into an indication of suspicious criminal activity, which is a dangerous extrapolation of the “nothing-to-hide argument.” 
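The factual premise is also shaky: Tor is free, publicly documented software used by millions of non-experts, and in the simplest case using it means downloading the Tor Browser and clicking “Connect.” Even programmatic use amounts to pointing a SOCKS-aware client at a local Tor daemon. A minimal sketch, assuming a local Tor client listening on its default SOCKS port, 9050 (the Tor Browser bundle listens on 9150 instead):

```python
# Routing an HTTP client through a local Tor client is a one-line proxy
# setting; no special expertise is required. This only builds the proxy
# configuration that a SOCKS-aware library such as `requests` accepts.
TOR_SOCKS_PORT = 9050  # default port for a standalone tor daemon

# "socks5h" (rather than "socks5") asks the client to resolve DNS
# through the proxy too, so lookups also go over the Tor network.
TOR_PROXIES = {
    "http": f"socks5h://127.0.0.1:{TOR_SOCKS_PORT}",
    "https": f"socks5h://127.0.0.1:{TOR_SOCKS_PORT}",
}

# Example (requires `requests[socks]` and a running tor daemon):
# requests.get("https://check.torproject.org/", proxies=TOR_PROXIES)
```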

The prosecutor’s logic, which the majority appeal ruling endorses, is that if you keep your online activities private, you are most likely doing something wrong, rather than that we all have privacy rights and are therefore entitled to use technologies that ensure privacy and security by default. 

Endorsing such an understanding in a court ruling sets an extremely worrying precedent for privacy and security online. The use of Tor must not become a stand-in for evidence when a prosecutor lacks an actual basis to sustain a criminal case.

Bini’s defense has appealed the unfounded conviction. We remain vigilant, hoping that the Ecuadorean judicial system will correct course in keeping with basic tenets of the right to a fair trial, recognizing the weakness of the case rather than surrendering to pressure and prejudice. It's past time for justice to prevail in this case. Six years of a lingering, flimsy prosecution, coupled with the undue restriction of Bini’s fundamental rights, is already far too long.

Read the English translation of the statement below (the original in Spanish is available here):

TECHNICAL STATEMENT
Ola Bini’s innocence must be reaffirmed 

In the context of RightsCon Taipei 2025, the Observation Mission of the Ola Bini case and the Tor Project organized a virtual session to analyze the legal proceedings against the digital security expert in Ecuador and to discuss to what extent, and with what implications, the use of the Tor digital tool is criminalized.[1] In that session, which included civil society organizations and speakers from different countries, we reached the following conclusions and technical consensuses: 

  1. The criminal case against Bini was initiated by political actors, driven by political motivations, and has been marked by dozens of irregularities and illegalities that undermine its legal legitimacy and technical viability. Rather than a criminal case, this is a persecution. 
  2. The way the elements of conviction of the case were established sets a dangerous precedent for the protection of digital rights and expert knowledge in the digital realm in Ecuador and the region. 
  3. The construction of the case and the elements presented as evidence by the Ecuadorian Attorney General’s Office (EAG) are riddled with serious procedural distortions and/or significant technical errors.[2]
  4. Furthermore, to substantiate the crime supposedly under investigation, the EAG has not even ordered a digital forensic examination that would demonstrate whether any kind of system (be it computer, telematic, or telecommunications) was accessed without authorization. 
  5. The reasoning used by the Appeals Court to justify its guilty verdict lacks sufficient elements to prove that Ola Bini committed the alleged crime. This not only violates the rights of the digital expert but also creates precedents of arbitrariness that are dangerous for the rule of law.[3]
  6. More specifically, because of the conviction, part of the Ecuadorian judiciary is creating a concerning precedent for the exercise of the rights to online security and privacy, by holding that the mere use of the Tor tool is sufficient indication of the commission of a criminal act. 
  7. Furthermore, contrary to the global trend that should prevail, this ruling could even inspire courts to criminalize the use of other digital tools used for the defense of human rights online, such as VPNs, which are particularly useful for key actors—like journalists, human rights defenders, academics, and others—in authoritarian political contexts. 
  8. Around the world, millions of people, including state security agencies, use Tor to carry out their activities. In this context, although the use of Tor is not the central focus of analysis in the present case, the current conviction—part of a politically motivated process lacking technical grounding—constitutes a judicial interpretation that could negatively impact the exercise of the aforementioned rights.

For these reasons, and six years after the beginning of Ola Bini’s criminal case, the undersigned civil society organizations call on the relevant Ecuadorian judicial authorities to reaffirm Bini’s presumption of innocence at the appropriate procedural stage, as the first-instance ruling did.

The Observation Mission will continue monitoring the development of the case until its conclusion, to ensure compliance with due process guarantees and to raise awareness of the case’s implications for the protection of digital rights.

1. RightsCon is the leading global summit on human rights in the digital age, organized by Access Now.

2. See https://www.accessnow.org/wp-content/uploads/2022/05/Informe-final-Caso-Ola-Bini.pdf 

3. The Tribunal is composed of Maritza Romero, Fabián Fabara and Narcisa Pacheco. The majority decision is from Fabara and Pacheco. 

Veridiana Alimonti

[Announcement] Symposium to sort out the issues ahead of the ruling in the SLAPP suit against OurPlanet-TV, winner of the 2012 JCJ Award; Shigenori Kanehira and others to speak (co-hosted with MIC)

Sociologist Hiroshi Kainuma filed a lawsuit in 2023 seeking 5 million yen in damages over OurPlanet-TV's reporting. After two years of proceedings, the verdict will be handed down at 2:00 p.m. on Friday, June 6, in Courtroom 415 of the Tokyo District Court. Two issues are at stake. The first is whether media outlets that streamed video of, or published articles quoting, remarks made by the plaintiff's lawyers at the press conference announcing the suit are themselves liable for defamation. The second is whether, if the plaintiff loses the case, reporting on the filing of the suit also becomes unlawful. If such claims were accepted, they would create a major chilling effect, and free and independent reporting..
JCJ

Interview with Marcela Guerra: Meaningful screen time

In this interview, Marcela Guerra - a member of the Portal Sem Porteiras community network in Brazil - tells us more about the challenge of tackling issues such as cybersecurity within the school…
Luisa Bagope

[A Shareholder Proposal Again This Year] Yuko Tanaka

In the March 8 issue of this column last year, I wrote "Let's Raise Our Voices to Television." It announced the shareholder proposal by the "Shine, Television! Citizens' Network." We will make the shareholder proposal again this year. The wording has not yet been finalized, but we plan to propose roughly five additions to the articles of incorporation […]
admin

While TEPCO Executives Are Let Off "Not Guilty," Ruiko Muto Receives German Environmental Prize: "Even on Nights of Despair, There Are Things We Can Do"

While the Supreme Court let TEPCO's executives off as "not guilty" over the accident at TEPCO's Fukushima Daiichi nuclear plant, Ruiko Muto (71), who has conveyed to the world her quiet anger at the radioactive contamination and destruction of her home, Fukushima, and who continues to demand accountability, has been awarded Germany's international environmental […]
admin

[A Journey in Search of Our Starting Point] Choi Son-ae

In late March, I traveled to Seoul, South Korea. There, the catastrophic wildfires that broke out in the country's southeast led the news day after day. On this trip, guided by my younger sister Son-hye, I traced part of the footsteps of our father, Pastor Choi Chang-hwa, who passed away 30 years ago. My sister knows Korea better than I do […]
admin

Osaki Case: Supreme Court Rejects 97-Year-Old Woman's Fourth Retrial Request

The Osaki case, which has taken the extraordinary course of having retrial decisions issued and then overturned three times, has once again failed to see the door to a retrial opened. Concerning the suspicious death of a man in Osaki Town, Kagoshima Prefecture, in 1979, the person who was branded the murderer, served a prison sentence, and continued to plead her innocence, Hara […]
admin

Digital Identities and the Future of Age Verification in Europe


This is the first part of a three-part series about age verification in the European Union. In this blog post, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

As governments across the world pass laws to “keep children safe online,” more often than not these notions of safety rest on platforms, websites, and online entities being able to discern users’ ages. This legislative trend has also arrived in the European Union, where online child safety is becoming one of the issues that will define European tech policy for years to come. 

Like many policymakers elsewhere, European regulators are increasingly focused on a range of online harms they believe are associated with online platforms, such as compulsive design and the effects of social media consumption on children’s and teenagers’ mental health. Many of these concerns lack robust scientific evidence; studies have drawn a far more complex and nuanced picture of how social media and young people’s mental health interact. Still, calls for mandatory age verification have become as ubiquitous as they are trendy. Heads of state in France and Denmark have recently called for banning children under 15 from social media Europe-wide, while Germany, Greece, and Spain are working on their own age verification pilots. 

EFF has been fighting age verification mandates because they undermine the free expression rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security. We do not think that requiring service providers to verify users’ age is the right approach to protecting people online. 

Policymakers frame age verification as a necessary tool to prevent children from accessing content deemed unsuitable, to design online services appropriate for children and teenagers, and to enable minors to participate online in age-appropriate ways. Rarely is it acknowledged that age verification undermines the privacy and free expression rights of all users, routinely blocks access to resources that can be life-saving, and undermines the development of media literacy. Rare, too, are critical conversations about the specific rights of young users: the UN Convention on the Rights of the Child clearly expresses that minors have rights to freedom of expression and access to information online, as well as the right to privacy. These rights are reflected in the European Charter of Fundamental Rights, which establishes the rights to privacy, data protection, and free expression for all European citizens, including children. These rights would be steamrolled by age verification requirements. And rarer still are policy discussions of ways to improve these rights for young people.

Implicitly Mandatory Age Verification

Currently, there is no legal obligation to verify users’ age in the EU. However, different European legal acts that recently entered into force or are being discussed implicitly require providers to know users’ ages or suggest age assessments as a measure to mitigate risks for minors online. At EFF, we consider these proposals akin to mandates because there is often no alternative method to comply except to introduce age verification. 

Under the General Data Protection Regulation (GDPR), providers will in practice often need to implement some form of age verification or age assurance (depending on the type of service and the risks involved): Article 8 stipulates that processing the personal data of children below the age of 16 requires parental consent, a threshold member states may lower to as low as 13. Service providers are thus implicitly required to make reasonable efforts to assess users’ ages – although the law doesn’t specify what “reasonable efforts” entail. 

Another example is the child safety article (Article 28) of the Digital Services Act (DSA), the EU’s recently adopted new legal framework for online platforms. It requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. The article also prohibits targeting minors with personalized ads. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy, and taking measures to protect minors specifically, but it's presently unclear which measures providers must take to comply with these obligations. Recital 71 of the DSA states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. The European Commission is currently working on guidelines for the implementation of Article 28 and may come up with criteria for what they believe would be effective and privacy-preserving age verification. 

The DSA does explicitly name age verification as one measure the largest platforms – so-called Very Large Online Platforms (VLOPs), which have more than 45 million monthly users in the EU – can choose to mitigate systemic risks related to their services. Those risks, while poorly defined, include negative impacts on the protection of minors and on users’ physical and mental wellbeing. While this, too, is not an explicit obligation, the European Commission seems to expect adult content platforms to adopt age verification to comply with their risk mitigation obligations under the DSA. 

Adding another layer of complexity, age verification is a major element of the dangerous European Commission proposal to fight child sexual abuse material through mandatory scanning of private and encrypted communication. While the negotiations of this bill have largely stalled, the Commission’s original proposal puts an obligation on app stores and interpersonal communication services (think messaging apps or email) to implement age verification. While the European Parliament has followed the advice of civil society organizations and experts and has rejected the notion of mandatory age verification in its position on the proposal, the Council, the institution representing member states, is still considering mandatory age verification. 

Digital Identities and Age Verification 

Leaving aside the various policy work streams that implicitly or explicitly consider whether age verification should be introduced across the EU, the European Commission seems to have decided on the how: Digital identities.

In 2024, the EU adopted the updated version of the so-called eIDAS Regulation, which sets out a legal framework for digital identities and authentication in Europe. Member States are now working on national identity wallets, with the goal of rolling out digital identities across the EU by 2026.
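One way wallet-based age verification can be more privacy-preserving than uploading an ID is selective disclosure: a trusted issuer signs only an “over 18” claim, and the wallet presents that claim instead of a birthdate. The toy sketch below illustrates the idea only; it is not the eIDAS protocol, and real wallets use standardized credential formats with asymmetric signatures rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

# Toy shared key for illustration; a real issuer signs with an
# asymmetric key so verifiers cannot forge attestations.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_attestation(age_over: int) -> dict:
    """Issuer signs only the boolean 'age over N' claim, not the birthdate."""
    claim = json.dumps({"age_over": age_over})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(attestation: dict, required_age: int) -> bool:
    """A service checks the signature and the claim; it never sees a birthdate."""
    expected = hmac.new(ISSUER_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return False
    return json.loads(attestation["claim"])["age_over"] >= required_age

attestation = issue_age_attestation(18)
print(verify_attestation(attestation, 18))  # True: the site learns only "over 18"
```

Even this design still lets the issuer know an attestation was requested, which is one reason privacy advocates scrutinize how such wallets handle unlinkability.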

Despite the imminent rollout of digital identities in 2026, which could facilitate age verification, the European Commission clearly felt pressure to act sooner. That’s why, in the fall of 2024, the Commission published a tender for a “mini-ID wallet,” offering four million euros for the development of an “age verification solution” by the second quarter of 2025, to appease member states anxious to introduce age verification today. 

Favoring digital identities for age verification follows an overarching trend of pushing obligations to conduct age assessments ever further down the stack: from apps to app stores to operating system providers. Dealing with age verification at the app store, device, or operating system level is also a demand long made by providers of social media and dating apps seeking to avoid liability for insufficient age verification. Embedding age verification at the device level will make it more ubiquitous and harder to avoid. This is a dangerous direction; digital identity systems raise serious concerns about privacy and equity.

This approach will likely also lead to mission creep: While the Commission limits its tender to age verification for 18+ services (specifically adult content websites), it is made abundantly clear that once available, age verification could be extended to “allow age-appropriate access whatever the age-restriction (13 or over, 16 or over, 65 or over, under 18 etc)”. Extending age verification is even more likely when digital identity wallets don’t come in the shape of an app, but are baked into operating systems. 

In the next post of this series, we will be taking a closer look at the age verification app the European Commission has been working on.

Svea Windwehr