The UN General Assembly and the Fight Against the Cybercrime Treaty


Note on the update: The text has been revised to reflect the updated timeline for the UN General Assembly’s consideration of the convention, which is now expected at the end of this year. The update also emphasizes that states should reject the convention. Additionally, a new section outlines the risks associated with broad evidence sharing, particularly the lack of robust safeguards needed to act as checks against the misuse of power. While most of the investigatory powers in Chapter IV of the convention use “shall” language, and are therefore mandatory, how the accompanying safeguards are applied is left to each state’s discretion. Please note that our piece in Just Security and this post are based on the latest version of the UNCC.

The final draft text of the United Nations Convention Against Cybercrime, adopted last Thursday by the United Nations Ad Hoc Committee, is now headed to the UN General Assembly for a vote. The last hours of deliberations were marked by drama as Iran repeatedly, though unsuccessfully, attempted to remove almost all of the human rights protections that survived in the final text, receiving support from dozens of nations. Although Iran’s efforts were defeated, the resulting text is still nothing to celebrate, as it remains riddled with unresolved human rights issues. States should vote No when the UNGA votes on the UN Cybercrime Treaty.

The Fight Moves to the UN General Assembly

States will likely consider adopting or rejecting the treaty at the UN General Assembly later this year. It is crucial that they vote against it. This moment offers a key opportunity to push back and build a strong, coordinated opposition.

Over more than three years of advocacy, we consistently fought for clearer definitions, narrower scope, and stronger human rights protections. Since the start of the process, we made it clear that we didn’t believe the treaty was necessary, and, given the significant variation in privacy and human rights standards among member states, we raised concerns that the investigative powers adopted in the treaty may accommodate the most intrusive police surveillance practices across participating countries. Yet, we engaged in the discussions in good faith to attempt to ensure that the treaty would be narrow in scope and include strong, mandatory human rights safeguards.

However, in the end, the e-evidence sharing chapter remains broad in scope, and the rights section unfortunately falls short. Instead of merely facilitating cooperation on core cybercrime, this convention authorizes open-ended evidence gathering and sharing for any serious crime that a country chooses to punish with a sentence of at least four years, without meaningful limitations. While the convention excludes cooperation requests where there are substantial grounds to believe that the request is for the purpose of prosecuting or punishing someone based on their political beliefs or personal characteristics, it sets an extremely high bar for such exclusions and provides no operational safeguards or mechanisms to ensure that acts of transnational repression or human rights abuses are refused.

The convention requires that these surveillance measures be proportionate, but it leaves critical safeguards, such as judicial review, the requirement of grounds justifying surveillance, and effective redress, optional despite the intrusive nature of the surveillance powers it adopts. Even more concerning, some states have already indicated that, in their view, these critical safeguards are purely a matter of domestic law, and many domestic legal frameworks already fail to meet international human rights standards and lack meaningful judicial oversight or legal accountability.

The convention ended up accommodating the most intrusive practices. For example, blanket, generalized data retention is problematic under human rights law, yet states that ignore those limits and have such powers under their domestic law can respond to assistance requests by sharing evidence retained through blanket data retention regimes. Similarly, encryption is protected under international human rights standards, but nothing in this convention prevents a state from using encryption-breaking powers available under its domestic law when responding to a cross-border request to access data.

The convention’s underlying flaw is its assumption that, in accommodating all countries’ practices, states will act in good faith. That assumption is misplaced, and it only increases the likelihood that the powerful global cooperation tools the convention establishes will be abused.

The Unsettling Concessions in the Treaty Negotiations

The key function of the Convention, if ratified, will be to create a means of requiring legal assistance between countries that do not already have mutual legal assistance treaties (MLATs) or other cooperation agreements. This includes repressive regimes that may previously have been hindered in their attempts to engage in cross-border surveillance and data sharing, in some cases because their concerning human rights records have excluded them from MLATs. For countries that already have MLATs in place, the new treaty’s cross-border cooperation provisions may provide additional tools for assistance.

A striking pattern throughout the Convention as adopted is the leeway it gives states to decide whether or not to require human rights safeguards; almost all of the details of how human rights protections are implemented are left up to national law. For example, the scope and definition of many offenses “may”—or may not—include certain protective elements. In addition, states are not required to decline requests from other states to help investigate acts that are not crimes under their domestic law; they can choose to cooperate with those requests instead. Nor does the treaty obligate states to carefully scrutinize surveillance requests to ensure they are not pretextual attempts at persecution.

This pattern continues. For example, the list of core cybercrimes under the convention—which in the past has swept in good-faith security research, whistleblowers, and journalistic activities—lets states choose whether specific elements must be present before an act is considered a crime, for example that the offense was committed with dishonest intent or that it caused serious harm. Sadly, these elements are optional, not required.

Similarly, provisions on child sexual abuse material (CSAM) allow states to adopt exceptions that would ensure scientific, medical, artistic or educational materials are not wrongfully targeted, and that would exclude consensual, age-appropriate exchanges between minors, in line with international human rights standards. Again, these exceptions are optional, meaning that over-criminalization is not only consistent with the Convention but also qualifies for the Convention's cross-border surveillance and extradition mechanisms.

The broad discretion granted to states under the UN Cybercrime Treaty is a deliberate design intended to secure agreement among countries with varying levels of human rights protections. This flexibility, in certain cases, allows states with strong protections to uphold them, but it also permits those with weaker standards to maintain their lower levels of protection. This pattern was evident in the negotiations, where key human rights safeguards were made optional rather than mandatory, such as in the list of core cybercrimes and provisions on cross-border surveillance.

These numerous options in the convention are also disappointing because they took the place of what we would have preferred: states with strong protections advancing those protections as global norms, and encouraging or requiring other states to adopt them.

Exposing States’ Contempt For Rights

Iran’s last-ditch attempts to strip human rights protections from the treaty were a clear indicator of the challenges ahead. In the final debate, Iran proposed deleting provisions that would let states refuse international requests for personal data when there is a risk of persecution based on political opinions, race, ethnicity, or other factors. Despite its disturbing implications, the proposal received 25 votes in support, including from India, Cuba, China, Belarus, Korea, Nicaragua, Nigeria, Russia, and Venezuela.

That was just one of a series of proposals by Iran to remove specific human rights or procedural protections from the treaty at the last minute. Iran also requested a vote on deleting Article 6(2) of the treaty, another human rights clause that explicitly states that nothing in the Convention should be interpreted as allowing the suppression of human rights or fundamental freedoms, as well as Article 24, which establishes the conditions and safeguards—the essential checks and balances—for domestic and cross-border surveillance powers.

Twenty-three countries, including Jordan, India, and Sudan, voted to delete Article 6(2), with 26 abstentions from countries like China, Uganda, and Turkey. This means a total of 49 countries either supported or chose not to oppose the removal of this critical clause, showing a significant divide in the international community’s commitment to protecting fundamental freedoms. And 11 countries voted to delete Article 24, with 23 abstentions.

These and other Iranian proposals would have removed nearly every reference to human rights from the convention, stripping the treaty of its substantive human rights protections and impacting both domestic legislation and international cooperation, leaving only the preamble and general clause, which states: "State Parties shall ensure that the implementation of their obligations under this Convention is consistent with their obligations under international human rights law.”

Additional Risks of Treaty Abuse

The risk that treaty powers can be abused to persecute people is real and urgent. It is even more concerning that some states have sought to declare (by announcing a potential future “reservation”) that they may intend not to follow Article 6(2) (the general human rights clause), Article 24 (conditions and safeguards for domestic and cross-border spying assistance), and Article 40(22) on human-rights-based grounds for refusing mutual legal assistance, despite their integral roles in the treaty.

Such reservations should be prohibited. According to the International Law Commission’s "Guide to Practice on Reservations to Treaties," a reservation is impermissible if it is incompatible with the object and purpose of the treaty. Human-rights safeguards, while not robust enough, are essential elements of the treaty, and reservations that undermine these safeguards could be considered incompatible with the treaty’s object and purpose. Furthermore, the Guide states that reservations should not affect essential elements necessary to the general tenor of the treaty, and if they do, such reservations impair the raison d’être of the treaty itself. Therefore, allowing reservations against human rights safeguards may not only undermine the treaty’s integrity but also challenge its legal and moral foundations.

All of the attacks on safeguards in the treaty process raise particular concerns when foreign governments use the treaty powers to demand information from U.S. companies, which should be able to rely on the strong standards embedded in U.S. law. Where norms and safeguards were made optional, we can presume that many states will choose to forego them.

Cramming Even More Crimes Back In?

Throughout the negotiations, several delegations voiced concerns that the scope of the Convention did not cover enough crimes, including many that threaten online content protected by the rights to free expression and peaceful protest. Russia, China, Nigeria, Egypt, Iran, and Pakistan advocated for broader criminalization, including crimes like incitement to violence and desecration of religious values. In contrast, the EU, the U.S., Costa Rica, and others advocated for a treaty that focuses solely on computer-related offenses, like attacks on computer systems, and some cyber-enabled crimes like CSAM and grooming.

Despite significant opposition, Russia, China, and other states successfully advanced the negotiation of a supplementary protocol for additional crimes, even before the core treaty has been ratified and taken effect. This move is particularly troubling as it leaves unresolved the critical issue of consensus on what constitutes core cybercrimes—a ticking time bomb that could lead to further disputes and could retroactively expand application of the Convention's cross-border cooperation regime even further. 

Under the final agreement, it will take 40 ratifications for the treaty to enter into force and 60 before any new protocols can be adopted. While consensus remains the goal, if it cannot be reached, a protocol can still be adopted with a two-thirds majority vote of the countries present.

The treaty negotiations are disappointing, but civil society and human rights defenders can unite to urge states to vote against the convention at the next UN General Assembly, ensuring that these flawed provisions do not undermine human rights globally.

Katitza Rodriguez

Digital ID Isn't for Everybody, and That's Okay


How many times do you pull out your driver’s license a week? Maybe two to four times, to purchase age-restricted items, pick up prescriptions, or go to a bar. If you get a mobile driver’s license (mDL) or one of the other forms of digital identification (ID) being offered in Google and Apple wallets, you may have to share this information much more often than before, because this new technology may expand the scope of scenarios demanding your ID.

These mDLs and digital IDs are being deployed faster than states can draft privacy protections, and they invite presenting your ID to more third parties than ever before. While proponents of these digital schemes emphasize convenience, these IDs can easily expand into new territories, like controversial age verification bills that censor everyone. Moreover, digital ID is simultaneously being tested in sensitive situations and expanded into a potential regime of unprecedented data tracking.

In the digital ID space, the question of “how can we do this right?” often usurps the more pertinent question of “should we do this at all?” While there are highly recommended safeguards for these new technologies, we must always support each person’s right to keep using physical documentation instead of going digital. We must also do more to bring understanding of, and decision-making power over, these technologies to everyone, rather than overzealously promoting them as a potential equalizer.

What’s in Your Wallet?

With modern hardware, phones can now safely store more sensitive data and credentials with higher levels of security. This enables functionality like Google Pay and Apple Pay exchanging transaction data online with e-commerce sites. While there is platform-specific terminology, the general term to know is “Trusted Platform Module” (TPM). This hardware enables “Trusted Execution Environments” (TEEs), isolated environments in which sensitive data can be processed. Most modern phones, tablets, and laptops come with TPMs.

Digital IDs are held to a higher level of security within the Google and Apple wallets (as they should be). So if you have an mDL provisioned on your device, the contents of the mDL are not “synced to the cloud.” Instead, they stay on that device, and you have the option to remotely wipe the credential if the device is stolen or lost.

Moving away from the digital wallets already common on most phones, some states have their own wallet apps for mDLs that must be downloaded from an app store. The security of these applications can vary, along with the data they can and can’t see. Different private partners have been making wallet/ID apps for different states, including IDEMIA, Thales, and Spruce ID, to name a few. Digital identity frameworks, like Europe’s eIDAS, have been creating language and provisions for “open wallets,” where you don’t necessarily have to rely on big tech for a safe and secure wallet.

However, privacy and security need to be paramount. If privacy is an afterthought, digital IDs can quickly become yet another gold mine of breaches for data brokers and bad actors.

New Announcements, New Scope

Digital ID has been moving fast this summer.

Proponents of digital ID frequently present the “over 21” example, which is often described like this:

You go to the bar, you present a claim from your phone that you are over 21, and a bouncer confirms the claim with a reader device via a QR code or a tap over NFC. Very private. Very secure. The bouncer never learns your address or other information, not even your name. This is called an “abstract claim”: rather than exchanging the more sensitive information itself, the wallet sends the verifier only a less sensitive attestation, such as an age threshold instead of your date of birth and name.
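
To make the idea of an abstract claim concrete, here is a minimal, illustrative sketch in Python. It is not the actual ISO/IEC 18013-5 mDL protocol or any vendor’s wallet API; the claim format, the function names, and the use of the `cryptography` library’s Ed25519 signatures are assumptions chosen only to show how a verifier could check an “over 21” attestation without ever seeing a name or birth date.

```python
# Illustrative sketch of an "abstract claim" (age-over-21 attestation).
# NOT the real mDL (ISO/IEC 18013-5) protocol; names and formats are invented
# for clarity. Requires the third-party "cryptography" package.
import base64
import json
from datetime import date

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuing authority (e.g., a DMV) ----------------------------------------
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # shared with verifiers out of band


def issue_attestation(date_of_birth: date, today: date) -> str:
    """Sign only the boolean claim the holder will later present."""
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"age_over_21": age >= 21}  # no name, address, or birth date
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = issuer_key.sign(payload)
    return ".".join(
        base64.urlsafe_b64encode(part).decode() for part in (payload, signature)
    )


# --- Verifier (e.g., a bar's reader device) ----------------------------------
def verify_attestation(token: str) -> bool:
    """Check the issuer's signature and read the claim; learn nothing else."""
    payload_b64, signature_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    try:
        issuer_public_key.verify(base64.urlsafe_b64decode(signature_b64), payload)
    except InvalidSignature:
        return False
    return bool(json.loads(payload).get("age_over_21", False))


token = issue_attestation(date_of_birth=date(2000, 1, 15), today=date.today())
print(verify_attestation(token))  # True — the verifier learns only "over 21"
```

Even in this idealized flow, the verifier still learns that a presentation happened at a particular time and place, which is part of the tracking concern discussed in the rest of this section.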

But there is a high privacy price to pay for this marginal privacy benefit. mDLs will not simply swap in as a one-to-one replacement for your physical ID. Rather, they are likely to expand the scenarios in which businesses and government agencies demand that you prove your identity before entering physical and digital spaces or accessing goods and services. Our personal data will be passed around more often than ever, with identity verified online multiple times a day or week with multiple parties. This privacy menace far surpasses the minor danger of a bar bouncer collecting, storing, and using your name and address after glancing at your birth date on your plastic ID for five seconds in passing. Even where bars do scan IDs, we are being asked to trade one potential privacy risk for a far more expansive one: digital ID presentation across the internet.

While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used with the TSA. In contracts and agreements we have seen with Apple, the company largely controls the marketing and visibility of mDLs.

In another push to boost adoption, Android allows you to create a digital passport ID for domestic travel. This development must be seen through the lens of the federal government’s 20-year effort to impose “REAL ID” on state-issued identification systems. REAL ID is an objectively failed program that pushes for regimes that strip privacy from everyone and further marginalize undocumented people. While federal-level use of digital identity is so far limited to the TSA, this use can easily expand. The TSA wants to propose rules for mDLs in an attempt (the agency says) to “allow innovation” by states while it contemplates uniform rules for everyone. This is concerning, as the scope of the TSA—and its parent agency, the Department of Homeland Security—is very wide. Whatever they decide now for digital ID will have implications well beyond the airport.

Equity First > Digital First

We are seeing new digital ID plans being discussed for the most vulnerable of us. Digital ID must be designed for equity (as well as for privacy).

With Google’s Digital Credential API and Apple’s IP&V Platform (as named in the agreement with California), these two major companies are going to be in direct competition with current age verification platforms. This alarmingly sets up the capacity for anyone to ask for your ID online, and it can spread beyond content that is commonly age-gated today. Different states and countries may try to label additional content as harmful to children (such as LGBTQIA content or abortion resources) and require online platforms to conduct age verification before granting access to that content.

For many of us, opening a bank account is routine, and digital ID sounds like a way to make it more convenient. But millions of working-class people are currently unbanked, and digital IDs won’t solve their problems. Many people can’t get simple services and documentation for a variety of reasons that come with having a low income, and millions of people in our country don’t have identification at all. We shouldn’t impose regimes that use age verification technology on people who often face barriers to compliance, such as license suspension for unpaid fines unrelated to traffic safety. Without regulation that accounts for nuanced lives, a new, far lower-friction technical system for verifying age will lead to an expedited, automated “NO” from digital verification.

Another issue is that many people lack a smartphone or an up-to-date smartphone, or share a smartphone with their family. Many proponents of “digital first” solutions assume a fixed ratio of one smartphone per person. While this assumption may work for some, others will need a human to talk to, on the phone or face-to-face, to access vital services. In the case of an mDL, you still need to upload your physical ID just to obtain one, and you still need to carry a physical ID on your person. Digital ID cannot bypass the problem that some people don’t have physical ID at all. Failing to account for this is a rush to perceived solutions over real problems.

Inevitable?

No, digital identity shouldn’t be inevitable for everyone: many people don’t want it or lack the resources to get it. The dangers posed by digital identity don’t have to be inevitable, either—if states legislate protections for people. It would also be great (for the nth time) to have a comprehensive federal privacy law. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement. At the very minimum, law enforcement should be prohibited from using consent for mDL scans to conduct illegal searches. Florida completely removed its mDL app from app stores and asked residents who had it to delete it; it is good that the state did not simply keep the app around for the sake of pushing digital ID without addressing a clear issue.

State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims for public assistance as to facilitate them. That’s why legal protections are at least as important as the digital IDs themselves.

Lawmakers should ensure better access for people with or without a digital ID.

 

Alexis Hancock

Results of the public comment solicitation on the draft Cabinet Order to partially amend the Cabinet Order that partially amends the Order for Enforcement of the Local Public Officers’ Mutual Aid Association Act and the Cabinet Order on Transitional Measures Concerning Long-Term Benefits under the Local Public Officers’ Mutual Aid Association Act, issued in connection with the enforcement of the Act Partially Amending the Employees’ Pension Insurance Act, etc. for the Unification of Employee Pension Systems and the Act Partially Amending that Act and the Local Public Officers’ Mutual Aid Association Act

総務省 (Ministry of Internal Affairs and Communications)