EFF Sues DOGE and the Office of Personnel Management to Halt Ransacking of Federal Data

EFF and a coalition of privacy defenders have filed a lawsuit today asking a federal court to block Elon Musk’s Department of Government Efficiency (DOGE) from accessing the private information of millions of Americans that is stored by the Office of Personnel Management (OPM), and to order the deletion of any data that has been collected or removed from databases thus far. The lawsuit also names OPM and asks the court to block OPM from sharing further data with DOGE.

The Plaintiffs who have stepped forward to bring this lawsuit include individual federal employees as well as multiple employee unions, including the American Federation of Government Employees and the Association of Administrative Law Judges.

This brazen ransacking of Americans’ sensitive data is unheard of in scale. With our co-counsel Lex Lumina, State Democracy Defenders Fund, and the Chandra Law Firm, we represent current and former federal employees whose privacy has been violated. We are asking the court for a temporary restraining order to immediately cease this dangerous and illegal intrusion. This massive trove of information includes private demographic data and work histories of essentially all current and former federal employees and contractors as well as federal job applicants. Access is restricted by the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit.

What’s in OPM’s Databases?

The data housed by OPM is extraordinarily sensitive for several reasons. The federal government is the nation’s largest employer, and OPM’s records are one of the largest, if not the largest, collections of employee data in the country. In addition to personally identifiable information such as names, Social Security numbers, and demographics, it includes work experience, union activities, salaries, performance, and demotions; health information like life insurance and health benefits; financial information like death benefit designations and savings programs; and classified information nondisclosure agreements. It holds records for millions of federal workers and millions more Americans who have applied for federal jobs.

The mishandling of this information could lead to abuses so significant and varied that they are impossible to fully detail. On its own, DOGE’s unchecked access puts all federal employees at risk of everything from privacy violations to political pressure to blackmail to targeted attacks. Last year, Elon Musk publicly disclosed the names of specific government employees whose jobs he claimed he would cut before he had access to the system. He has also targeted at least one former employee of Twitter. Given his unrestricted access to OPM data and his ownership of the social media platform X, federal employees are at serious risk.

And that’s just the danger from disclosure of the data on individuals. OPM’s records could give an overview of various functions of entire government agencies and branches. Regardless of intention, the law makes it clear that this data is carefully protected and cannot be shared indiscriminately.

In late January, OPM reportedly sent about two million federal employees its “Fork in the Road” form email introducing a “deferred resignation” program. This is one visible way the data can be used: OPM’s databases contain the email address of every federal employee.

How the Privacy Act Protects Americans’ Data

Under the Privacy Act of 1974, disclosure of government records about individuals generally requires the written consent of the individual whose data is being shared, with few exceptions.

Congress passed the Privacy Act in response to a crisis of confidence in the government as a result of scandals including Watergate and the FBI’s Counter Intelligence Program (COINTELPRO). The Privacy Act, like the Foreign Intelligence Surveillance Act of 1978, was created at a time when the government was compiling massive databases of records on ordinary citizens and had minimal restrictions on sharing them, often with erroneous information and in some cases for retaliatory purposes.

Congress was also concerned with the potential for abuse presented by the increasing use of electronic records and the use of identifiers such as social security numbers, both of which made it easier to combine individual records housed by various agencies and to share that information. In addition to protecting our private data from disclosure to others, the Privacy Act, along with the Freedom of Information Act, also allows us to find out what information is stored about us by the government. The Privacy Act includes a private right of action, giving ordinary people the right to decide for themselves whether to bring a lawsuit to enforce their statutory privacy rights, rather than relying on government agencies or officials.

It is no coincidence that these protections were created the last time Congress rose to the occasion of limiting the surveillance powers of an out-of-control President. That was fifty years ago; the potential impact of leaking this government information, representing the private lives of millions, is now even more serious. DOGE and OPM are violating Americans’ most fundamental privacy rights at an almost unheard-of scale. 

OPM’s Data Has Been Under Assault Before

Ten years ago, OPM announced that it had been the target of two data breaches. Over twenty million security clearance records—information on anyone who had undergone a federal employment background check, including their relatives and references—were reportedly stolen by state-sponsored attackers working for the Chinese government. At the time, it was considered one of the most potentially damaging breaches in government history.

DOGE employees likely have access to significantly more data than this. Just as an example, the OPM databases also include personal information for anyone who applied to a federal job through USAJobs.gov—24.5 million people last year. Make no mistake: this is, in many ways, a worse breach than what occurred in 2014. DOGE has access to ten additional years of data, which likely includes what was breached before as well as significantly more sensitive information. And DOGE reportedly has the ability not only to export records, but to add, modify, or delete them. Every day that DOGE maintains its current level of access, more risks mount.

EFF Fights for Privacy

EFF has fought to protect privacy for nearly thirty-five years at the local, state, and federal level, as well as around the world. 

We have been at the forefront of exposing government surveillance and invasions of privacy: In 2006, we sued AT&T on behalf of its customers for violating privacy law by collaborating with the NSA in the massive, illegal program to wiretap and data-mine Americans’ communications. We also filed suit against the NSA in 2008; both cases arose from surveillance that the U.S. government initiated in the aftermath of 9/11. In addition to leading or serving as co-counsel in lawsuits, such as in our ongoing case against Sacramento's public utility company for sharing customer data with police, EFF has filed amicus briefs in hundreds of cases to protect privacy, free speech, and creativity.

EFF’s fight for privacy spans advocacy and technology, as well: Our free browser extension, Privacy Badger, protects millions of individuals from invasive spying by third-party advertisers. Another browser extension, HTTPS Everywhere, alongside Certbot, a tool that makes it easy to install free HTTPS certificates for websites, helped secure the web, which has now largely switched from non-secure HTTP to the more secure HTTPS protocol. 

EFF also fights to improve privacy protections by advancing strong laws, such as the California Electronic Communications Privacy Act (CalECPA) in 2015, which requires state law enforcement to get a warrant before they can access electronic information about who we are, where we go, who we know, and what we do. We also have a long, successful history of pushing companies, from Apple to Amazon, to protect user privacy.

What’s Next

The question is not “what happens if this data falls into the wrong hands.” The data has already fallen into the wrong hands, according to the law, and it must be safeguarded immediately. Violations of Americans’ privacy have played out across multiple agencies, without oversight or safeguards, and EFF is glad to join the brigade of lawsuits to protect this critical information. Our case is fairly simple: OPM’s data is extraordinarily sensitive, OPM gave it to DOGE, and this violates the Privacy Act. We are asking the court to block any further data sharing and to demand that DOGE immediately destroy any and all copies of downloaded material. 

You can view the press release for this case here.

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Jason Kelley

Building a Community Privacy Plan

Digital security training can feel overwhelming, and not everyone will have access to new apps, new devices, and new tools. There also isn't one single system of digital security training, and we can't know the security plans of everyone we communicate with—some people might have concerns about payment processors preventing them from obtaining fees for their online work, whilst others might be concerned about doxxing or safely communicating sensitive medical information. 

This is why good privacy decisions begin with proper knowledge about your situation and a community-oriented approach. To start, explore the following questions together with your friends and family, organizing groups, and others:

  1. What do we want to protect? This might include sensitive messages, intimate images, or information about where protests are organized.
  2. Who do we want to protect it from? For example, law enforcement or stalkers. 
  3. How much trouble are we willing to go through to try to prevent potential consequences? After all, convincing everyone to pivot to a different app when they like their current service might be tricky! 
  4. Who are our allies? Besides those who are collaborating with you throughout this process, it’s a good idea to identify others who are on your side. Because they’re likely to share the same threats you do, they can be a part of your protection plans. 

This might seem like a big task, so here are a few essentials:

Use Secure Messaging Services for Every Communication 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption, ensuring that only the sender and recipient of any communication have access to the content. But this protection does not reach its full potential without others joining you in communicating on these platforms. 
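
To make that guarantee concrete, here is a minimal sketch of the public-key idea behind end-to-end encryption, written in Python with the PyNaCl library. This is an illustration only, not Signal’s actual protocol, which layers forward secrecy, authentication, and key verification on top of this basic exchange:

    # pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each person generates a keypair; the private half never leaves their device.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts with her private key and Bob's public key.
    ciphertext = Box(alice_private, bob_private.public_key).encrypt(
        b"Meet at the library at noon."
    )

    # Any server relaying the message sees only this ciphertext.
    # Decrypting requires Bob's private key, which the server never holds.
    plaintext = Box(bob_private, alice_private.public_key).decrypt(ciphertext)
    print(plaintext.decode())

The point of the sketch is the trust boundary: the relaying server stores and forwards bytes it cannot read, which is exactly what distinguishes end-to-end encryption from encryption that only protects data in transit to the provider.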

Of the most common messaging apps, Signal provides the most extensive privacy protections through its use of end-to-end encryption, and is available for download across the globe. But we know it might not always be possible to encourage everyone in your network to transition away from their current services. There are alternatives, though. WhatsApp, one of the most popular communication platforms in the world, uses end-to-end encryption, but collects more metadata than Signal. Facebook Messenger now also provides end-to-end encryption by default in one-on-one direct messages. 

Specific privacy concerns remain with group chats. Facebook Messenger has not enabled end-to-end encryption for chats that include more than two people, and popular platforms like Slack and Discord similarly do not provide these protections. These services may appear more user-friendly in accommodating large numbers, but in the absence of real privacy protections, make sure you consider what is being communicated on these sites and use alternative messaging services when talking about sensitive topics.

As a service's user base gets larger and more diverse, it's less likely that simply downloading and using it will indicate anything about a particular user's activities. For example, the more people use Signal, the less those seeking reproductive health care or coordinating a protest would stand out by downloading it. So beyond protecting just your communications, you’re building up a user base that can protect others who use encrypted, secure services and give them the shield of a crowd. 

It also protects your messages from being available to law enforcement should they request them from the platforms you use. In choosing a platform that protects our privacy, we create a space of safety and authenticity away from government and corporate surveillance.

For example, prosecutors in Nebraska used messages sent via Facebook Messenger (prior to the platform enabling end-to-end encryption by default) as evidence to charge a mother with three felonies and two misdemeanors for assisting her daughter with an abortion. Given that someone known to the family reported the incident to law enforcement, it’s unlikely using an end-to-end encrypted service would have prevented the arrest entirely, but it would have prevented the contents of personal messages turned over by Meta from being used as evidence in the case. 

Beyond this, it's important to know the privacy limitations of the platforms you communicate on. For example, while a secure messaging app might prevent government and corporate eavesdroppers from snooping on conversations, that doesn't stop someone you're communicating with from taking screenshots, or the government from attempting to compel you (or your contact) to turn over your messages yourselves. Secure messaging apps also can't protect your messages when someone gets physical access to an unlocked phone with all those messages on it, which is why you may want to consider enabling disappearing message features for certain conversations.

Consider The Content You Post On Social Media 

We’re all interconnected in this digital age. Even without everyone having access to their own personal device or the internet, it is pretty difficult to completely opt out of the online world. One person’s decision to upload a picture to a social media platform may impact another person without the latter even knowing it, such as by revealing an association with a movement or a topic that they don’t want to be public knowledge.

Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm.

It’s important to carefully consider the tradeoffs between publicity and privacy when it comes to social media. If you’re promoting something important that needs greater reach, it may be worth posting to the more popular platforms that undermine user privacy. If you do so, it’s vital that you compartmentalize your personal information (registration credentials, post attribution, friends list, etc.) away from these accounts.

If you are organising online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible—perhaps people cannot access different applications, or might not have interest in downloading or using a different service. In this scenario, think about how you can protect your community on the platform you currently engage on. For example, if you currently use Facebook for organizing, work with others to keep your Facebook groups as private and secure as Facebook allows.

Think About Cloud Servers as Other People’s Computers  

For our online world to function, corporations use online servers (often referred to as the cloud) to store the mass amounts of data collected from our devices. When we back up our content to these cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. The best case scenario in the event of a mistaken flag is that your account is temporarily blocked, but the worst case could see your entire account deleted and/or legal action initiated over content perceived as illegal.

For example, in 2021 a father took pictures of his son’s groin area and sent them to a health care provider’s messaging service. Days later, his Google account was disabled because the photos constituted “a severe violation of Google’s policies and might be illegal,” with an attached link flagging “child sexual abuse and exploitation” as one of the possible reasons. Despite the photos being taken for medical purposes, Google refused to reinstate the account, meaning that the father lost access to years of emails, pictures, account login details, and more. In a similar case, a father in Houston took photos of his child’s infected intimate parts to send to his wife via Google’s chat feature. Google refused to reinstate this account, too.

The adage goes, “there are no clouds, just other people’s computers.” It’s true! As countless discoveries over the years have revealed, the information you share on Slack at work is on Slack's computers and made accessible to your employer. So why not take extra care to choose whose computers you’re trusting with sensitive information?

If it makes sense to back up your data onto encrypted thumb drives or limited cloud services that provide options for end-to-end encryption, then so be it. What’s most important is that you follow through with backing it up. And regularly!
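
As a rough sketch of what that looks like in practice (the filenames below are placeholders, and dedicated tools like encrypted disk images accomplish the same thing), this Python snippet uses the cryptography library to encrypt a backup locally before anything is uploaded, so a cloud provider only ever stores ciphertext:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Generate a key once and keep it somewhere safe, such as a password
    # manager; anyone holding the key can read the backup.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt locally, then upload only the encrypted file.
    with open("backup.tar", "rb") as original:
        encrypted = fernet.encrypt(original.read())
    with open("backup.tar.enc", "wb") as sealed:
        sealed.write(encrypted)

    # Restoring requires your key, not the cloud provider's cooperation.
    restored = fernet.decrypt(encrypted)

The design choice to notice is that whoever holds the key, not whoever runs the server, controls access. That is the same property to look for when evaluating cloud services that advertise end-to-end encrypted backups.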

Assign Team Roles

Adopting all of these best practices can be daunting; we get it. Every community is made up of people with different strengths, so with some consideration you can make smart decisions about who does what for the collective privacy and security. Once this work is broken down into smaller, more manageable tasks, it’s easier for a group to accomplish together. As familiarity with these tasks grows, you’ll realize you’re developing a team of experts, and after some time, you can teach each other.

Create Incident Response Plans

Developing a plan for if or when something bad happens is a good practice for anyone, but especially for a community of people who face increased risk. Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies around what to do in the event of such things happening. Doing so before an incident occurs is much easier than doing it in the middle of a crisis.

Only you and your allies can decide what belongs in such a plan, but some strategies might be:

  • Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices
  • Notifying others who may be affected
  • Switching communications to a predetermined more secure alternative
  • Noting behaviors of suspected threats and documenting these 
  • Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility.

Everyone's security plans and situations will always be different, which is why we often say that security and privacy are a state of mind, not a purchase. But the first step is always taking a look at your community and figuring out what's needed and how to get everyone else on board.

Paige Collings

Privacy Loves Company

Most of the internet’s blessings—the opportunities for communities to connect despite physical borders and oppressive controls, the avenues to hold the powerful accountable without immediate censorship, the sharing of our hopes and frustrations with loved ones and strangers alike—tend to come at a price. Governments, corporations, and bad actors too often use our content for surveillance, exploitation, discrimination, and harm.

It’s easy to dismiss these issues because you don’t think they concern you. It might also feel like the whole system is too pervasive to actively opt out of. But we can take small steps to better protect our own privacy, as well as to build an online space that feels as free and safe as speaking with those closest to us in the offline world.

This is why a community-oriented approach helps. In speaking with your friends and family, organizing groups, and others to discuss your specific needs and interests, you can build out digital security practices that work for you. This makes it more likely that your privacy practices will become second nature to you and your contacts.  

Good privacy decisions begin with proper knowledge about your situation—and we’ve got you covered. To learn more about building a community privacy plan, read our ‘how to’ guide here, where we talk you through the topics below in more detail: 

Using Secure Messaging Services For Every Communication 

At some point, we all need to send a message that’s safe from prying eyes, so the chances of these apps becoming the default for sensitive communications are much higher if we use these platforms for all communications. On an even simpler level, it also means that messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger.

Consider The Content You Post On Social Media 

Our decision to send messages, take pictures, and interact with online content has a real offline impact, and whilst we cannot control for every circumstance, we can think about how our social media behaviour impacts those closest to us, as well as those in our proximity. 

Think About Cloud Servers as Other People’s Computers  

When we back up our content to online cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. Whilst we might think we don't have anything to hide, these tools scan without context, and what might be an innocent picture to you may be flagged as harmful or illegal by a corporation's service. So why not take extra care to choose whose computers you’re entrusting with sensitive information?

Assign Team Roles

Once these privacy tasks are broken down into smaller, more manageable projects, it’s much easier for a group to accomplish them together.

Create Incident Response Plans

Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies about what to do in such circumstances. Doing so before an incident occurs is much easier than doing it on the fly when you’re already facing a crisis.

To dig in deeper, continue reading in our blog post Building a Community Privacy Plan here.

Paige Collings

[Film Mirror] At 94, Believing in Her Brother’s Innocence: “Imouto no Jikan” (A Sister’s Time) Lays Bare the Injustice of a Wrongful Conviction, by Katsuhiko Suzuki

Drawing extensively on footage that Tokai TV has continued to shoot since the case arose in 1961, the film redraws the full picture of the “Nabari poisoned wine case” and lays bare, in accessible terms, the injustice of a wrongful conviction. Masaru Okunishi (35 at the time), deemed the killer of five people on the basis of his confession alone, was acquitted at his first trial but sentenced to death on appeal. He continued to protest his innocence from prison after the verdict was finalized, and died at 89. His sister, Miyoko Oka, took over the petitions for retrial. The tenth petition was also denied (the Supreme Court rejected a special appeal last January), and Miyoko is now 94. The retrial…
JCJ

APC at RightsCon 2025

The APC network is gearing up to participate at the 13th edition of RightsCon, taking place in Taipei from 24 to 27 February. Here are the events and sessions where you will find APC staff, members…
APCNews

Why the So-Called AI Action Summit Falls Short

Ever since ChatGPT’s debut, artificial intelligence (AI) has been the center of worldwide discussions on the promises and perils of new technologies. This has spawned a flurry of debates on the governance and regulation of large language models and “generative” AI, which have, among other things, resulted in the Biden administration’s executive order on AI and international guiding principles for the development of generative AI, and influenced Europe’s AI Act. As part of that global policy discussion, the UK government hosted the AI Safety Summit in 2023, which was followed in 2024 by the AI Seoul Summit, leading up to this year’s AI Action Summit hosted by France.

As heads of state and CEOs head to Paris for the AI Action Summit, the summit’s shortcomings are becoming glaringly obvious. The summit, which is hosted by the French government, has been described as a “pivotal moment in shaping the future of artificial intelligence governance.” However, a closer look at its agenda and the voices it will amplify tells a different story.

Focusing on AI’s potential economic contributions, and not differentiating between, for example, large language models and automated decision-making, the summit fails to take into account the many ways in which AI systems can be abused to undermine fundamental rights and push the planet's already stretched ecological limits over the edge. Instead of centering nuanced perspectives on the capabilities of different AI systems and their associated risks, the summit’s agenda paints a one-sided and simplistic picture, not reflective of the global discussion on AI governance. For example, the summit’s main program does not include a single panel addressing issues related to discrimination or sustainability.

This imbalance is also mirrored in the summit’s speakers, among whom industry representatives notably outnumber civil society leaders. While many civil society organizations are putting on side events to counterbalance the summit’s misdirected priorities, an exclusive summit captured by industry interests cannot claim to be a transformative venue for global policy discussions.

The summit’s significant shortcomings are especially problematic in light of the leadership role European countries are claiming when it comes to the governance of AI. The European Union’s AI Act, which recently entered into force, has been celebrated as the world’s first legal framework addressing the risks of AI. However, whether the AI Act will actually “promote the uptake of human centric and trustworthy artificial intelligence” remains to be seen.

It's unclear if the AI Act will provide a framework that incentivizes the rollout of user-centric AI tools or whether it will lock in specific technologies at the expense of users. We like that the new rules contain a lot of promising language on fundamental rights protection; however, exceptions for law enforcement and national security render some of the safeguards fragile. This is especially true when it comes to the use of AI systems in high-risk contexts such as migration, asylum, border controls, and public safety, where the AI Act does little to protect against mass surveillance and profiling and predictive technologies. We are also concerned by the possibility that other governments will copy-paste the AI Act’s broad exceptions without having the strong constitutional and human rights protections that exist within the EU legal system. We will therefore keep a close eye on how the AI Act is enforced in practice.

The summit also lags in addressing the essential role human rights should play in providing a common baseline for AI deployment, especially in high-impact uses. Although human-rights-related concerns appear in a few sessions, the summit, as a purportedly global forum aimed at unleashing the potential of AI for the public good and in the public interest, at a minimum misses the opportunity to clearly articulate how such a goal connects with fulfilling international human rights guarantees and which steps this entails.

Ramping up government use of AI systems is generally a key piece in national strategies for AI development worldwide. While countries must address the AI divide, doing so must not mean replicating AI harms. For example, we’ve elaborated on leveraging Inter-American human rights standards to tackle challenges and violations that emerge from public institutions’ use of algorithmic systems for rights-affecting determinations in Latin America.

In times of a global AI arms race, we do not need more hype for AI. Rather, there is a crucial need for evidence-based policy debates that address AI power centralization and consider the real-world harms associated with AI systems—while enabling diverse stakeholders to engage on an equal footing. The AI Action Summit will not be the place to have this conversation.

Svea Windwehr

[Monthly Media Review: Newspapers] Keep Watch to Ensure Sincere Deliberation of the Budget, by Akira Yamada

The New Year’s Day 2025 editorials of the national dailies were titled: Asahi, “In an Age of Growing Uncertainty, Watch Politics Closely and Build a Strong Society”; Mainichi, “80 Years After the War: Build a ‘Humanity First’ Order for a World and Japan in Turmoil”; Yomiuri, “Time to Rebuild Peace and Democracy: Japan Should Lead Under the Banner of Cooperation”; Nikkei, “Take On Reform and Pass Hope to the Next Generation.” Each paper’s positions and character come through in its editorial. In the Sankei’s “At the Start of the Year” column, the editorial page chief stokes a sense of crisis, writing that “within a few years, Japan may, for the first time since the war, face the threat of having a war waged against it.” Many regional papers’ editorials, including those of Okinawa’s two dailies, are rich in insight. In this year marking 80 years since the war, both at home and abroad…
JCJ