Yes to California’s “No Robo Bosses Act”
California’s Governor should sign S.B. 7, a common-sense bill to end some of the harshest consequences of automated abuse at work. EFF is proud to join dozens of labor, digital rights, and other advocates in support of the “No Robo Bosses Act.”
Algorithmic decision-making is a growing threat to workers. Bosses are using AI to assess the body language and voice tone of job candidates. They’re using algorithms to predict when employees are organizing a union or planning to quit. They’re automating choices about who gets fired. And these employment algorithms often discriminate based on gender, race, and other protected statuses. Fortunately, many advocates are resisting.
What the Bill Does

S.B. 7 is a strong step in the right direction. It addresses “automated decision systems” (ADS) across the full landscape of employment. It applies to bosses in the private and government sectors, and it protects workers who are employees and contractors. It addresses all manner of employment decisions that involve automated decision-making, including hiring, wages, hours, duties, promotion, discipline, and termination. It covers bosses using ADS to assist or replace a person making a decision about another person.
The bill requires employers to be transparent when they rely on ADS. Before using it to make a decision about a job applicant or current worker, a boss must notify them about the use of ADS. The notice must be in a stand-alone, plain language communication. The notice to a current worker must disclose the types of decisions subject to ADS, and a boss cannot use an ADS for an undisclosed purpose. Further, the notice to a current worker must disclose information about how the ADS works, including what information goes in and how it arrives at its decision (such as whether some factors are weighed more heavily than others).
The bill provides some due process to current workers who face discipline or termination based on the ADS. A boss cannot fire or punish a worker based solely on ADS. Before a boss does so based primarily on ADS, they must ensure a person reviews both the ADS output and other relevant information. A boss must also notify the affected worker of such use of ADS. A boss cannot use customer ratings as the only or primary input for such decisions. And every worker can obtain a copy of the most recent year of their own data that their boss might use as ADS input to punish or fire them.
Other provisions of the bill will further protect workers. A boss must maintain an updated list of all ADS it currently uses. A boss cannot use ADS to violate the law, to infer whether a worker is a member of a protected class, or to target a worker for exercising their labor and other rights. Further, a boss cannot retaliate against a worker who exercises their rights under this new law. Local laws are not preempted, so our cities and counties are free to enact additional protections.
Next Steps

The “No Robo Bosses Act” is a great start. And much more is needed, because many kinds of powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring.
EFF has long been fighting such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.
Meta is Removing Abortion Advocates' Accounts Without Warning
This is the fifth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
When the team at Women Help Women signed into Instagram last winter, they were met with a distressing surprise: without warning, Meta had disabled their account. The abortion advocacy non-profit organization found itself suddenly cut off from its tens of thousands of followers and with limited recourse. Meta claimed Women Help Women had violated its Community Standards on “guns, drugs, and other restricted goods,” but the organization told EFF it uses Instagram only to communicate about safe abortion practices, including sharing educational content and messages aimed at reducing stigma. Eventually, Women Help Women was able to restore its account—but only after launching a public campaign and receiving national news coverage.
Unfortunately, Women Help Women’s experience is not unique. Around a quarter of our Stop Censoring Abortion campaign submissions reported that their entire account or page had been disabled or taken down after sharing abortion information—primarily on Meta platforms. This troubling pattern indicates that the censorship crisis goes beyond content removal. Accounts providing crucial reproductive health information are disappearing, often without warning, cutting users off from their communities and followers entirely.
What's worse, Meta appears to be imposing these negative account actions without clearly adhering to its own enforcement policies. Meta’s own Transparency Center stipulates that an account should receive multiple Community Standards violations or warnings before it is restricted or disabled. Yet many affected users told EFF they experienced negative account actions without any warning at all, or after only one alleged violation (many of which were incorrectly flagged, as we’ve explained elsewhere in this series).
While Meta clearly has the right to remove accounts from its platforms, disabling or banning an account is an extreme measure. It completely silences a user, cutting off communication with their followers and preventing them from sharing any information, let alone abortion information. Because of this severity, Meta should be extremely careful to ensure fairness and accuracy when disabling or removing accounts. Rules governing account removal should be transparent and easy to understand, and Meta must enforce these policies consistently across different users and categories of content. But as our Stop Censoring Abortion results demonstrate, this isn't happening for many accounts sharing abortion information.
Meta's Maze of Enforcement Policies

If you navigate to Meta’s Transparency Center, you’ll find a page titled “How Meta enforces its policies.” This page contains a web of intersecting policies on when Meta will restrict accounts, disable accounts, and remove pages and groups. These policies overlap but don’t directly refer to each other, making it trickier for users to piece together how enforcement happens.
At the heart of Meta's enforcement process is a strike system. Users receive strikes for posting content that violates Meta’s Community Standards. But not all Community Standards violations result in strikes, and whether Meta applies one depends on the “severity of the content” and the “context in which it was shared.” Meta provides little additional guidance on what violations are severe enough to amount to a strike or how context affects this assessment.
According to Meta's Restricting Accounts policy, for most violations, 1 strike should only result in a warning—not any action against the account. How additional strikes affect an account differs between Facebook and Instagram (but Meta provides no specific guidance for Threads). Facebook relies on a progressive system, where additional strikes lead to increasing restrictions. Enforcement on Instagram is more opaque and leaves more to Meta’s discretion. Meta still counts strikes on Instagram, but it does not follow the same escalating structure of restrictions as it does on Facebook.
Despite some vagueness in these policies, Meta is quite clear about one thing: On both Facebook and Instagram, an account should only be disabled or removed after “repeated” violations, warnings, or strikes. Meta states this multiple times throughout its enforcement policies. Its Disabling Accounts policy suggests that generally, an account needs to receive at least 5 strikes for Meta to disable or remove it from the platform. The only caveat is for severe violations, such as posting child sexual exploitation content or violating the dangerous individuals and organizations policy. In those extreme cases, Meta may disable an account after just one violation.
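The written policy amounts to a simple decision procedure, which makes the gap between policy and practice easy to see. Here is a minimal sketch of that logic in Python, based on our reading of Meta's public enforcement policies; the policy names, the five-strike threshold, and the action labels are illustrative, not Meta's actual implementation:

```python
# Sketch of the enforcement logic Meta describes in its Transparency
# Center policies. Names and thresholds reflect the written policy as
# we read it, not Meta's real internal system.

SEVERE_POLICIES = {"child_sexual_exploitation", "dangerous_individuals_orgs"}
DISABLE_THRESHOLD = 5  # "at least 5 strikes" per the Disabling Accounts policy

def enforcement_action(prior_strikes: int, violated_policy: str) -> str:
    """Return the action the written policy implies for a new violation."""
    if violated_policy in SEVERE_POLICIES:
        return "disable"        # extreme cases: one violation suffices
    strikes = prior_strikes + 1
    if strikes == 1:
        return "warning"        # first strike should be a warning only
    if strikes >= DISABLE_THRESHOLD:
        return "disable"        # repeated strikes: account disabled
    return "restrict"           # in between: escalating restrictions
```

On this reading, an educational abortion post that draws a first (even wrongly applied) strike should produce only a warning, never an account takedown; that is what makes the experiences described below hard to square with the written policy.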
Meta’s Practices Don’t Match Its Policies

Our survey results detailed a different reality. Many survey respondents told EFF that Meta disabled or removed their account without warning and without indication that they had received repeated strikes. It’s important to note that Meta does not have a unique enforcement process for prescription drug or abortion-related content. When EFF asked Meta about this issue, Meta confirmed that "enforcement actions on prescription drugs are subject to Meta's standard enforcement policies.”
So here are a few other possible explanations for this disconnect, each troubling in its own way:
Meta is Ignoring Its Own Strike System

If Meta is taking down accounts without warning or after only one alleged Community Standards violation, the company is failing to follow its own strike system. This makes enforcement arbitrary and denies users the opportunity for correction that Meta's system supposedly provides. It’s also especially problematic for abortion advocates, given that Meta has been incorrectly flagging educational abortion content as violating its Community Standards. This means that a single content moderation error could result not only in the post coming down, but the entire account too.
This may be what happened to Emory University’s RISE Center for Reproductive Health Research (a story we described in more detail earlier in this series). After sharing an educational post about mifepristone, RISE’s Instagram account was suddenly disabled. RISE received no earlier warnings from Meta before its account went dark. When RISE was finally able to get back into its account, it discovered only that this single post had been flagged. Again, according to Meta's own policies, one strike should only result in a warning. But this isn’t what happened here.
Similarly, the Tamtang Foundation, an abortion advocacy organization based in Thailand, had its Facebook account suddenly disabled earlier this year. Tamtang told EFF it had received a warning on only one flagged post, published 10 months before its account was taken down. It received none of the other progressive strike restrictions Meta claims to apply to Facebook accounts.
Meta is Misclassifying Educational Content as "Extreme Violations"

If Meta is accurately following its strike policy but still disabling accounts after only one violation, this points to an even more concerning possibility. Meta’s content moderation system may be categorizing educational abortion information as severe enough to warrant immediate disabling, treating university research posts and clinic educational materials as equivalent to child exploitation or terrorist content.
This would be a fundamental and dangerous mischaracterization of legitimate medical information, and it is, we hope, unlikely. But it’s unfortunately not outside the realm of possibility. We already wrote about a similar disturbing mischaracterization earlier in this series.
Users Are Unknowingly Receiving Multiple Strikes

Finally, Meta may be giving users multiple strikes without notifying them. This raises several serious concerns.
First is the lack of transparency. Meta explicitly states in its "Restricting Accounts" policy that it will notify users when it “remove[s] your content or add[s] restrictions to your account, Page or group.” This policy is failing if users are not receiving these notifications and are not made aware there’s an issue with their account.
It may also mean that Meta’s policies themselves are too vague to provide meaningful guidance to users. This lack of clarity is harmful. If users don’t know what's happening to their accounts, they can’t appeal Meta’s content moderation decisions, adjust their content, or understand Meta's enforcement boundaries moving forward.
Finally—and most troubling—if Meta is indeed disabling accounts that share abortion information for receiving multiple violations, this points to an even broader censorship crisis. Users may not be aware just how many informational abortion-related posts are being incorrectly flagged and counted as strikes. This is especially concerning given that Meta places a one-year time limit on strikes, meaning the multiple alleged violations could not have accumulated over multiple years.
The Broader Censorship Crisis

These account suspensions represent just one facet of Meta's censorship of reproductive health information documented by our Stop Censoring Abortion campaign. When combined with post removals, shadowbanning, and content restrictions, the message is clear: Meta platforms are increasingly unfriendly environments for abortion advocacy and education.
If Meta wants to practice what it preaches, then it must reform its enforcement policies to provide clear, transparent guidelines on when and how strikes apply, and then consistently and accurately apply those policies. Accounts should not be taken down for only one alleged violation when the policies state otherwise.
The stakes couldn't be higher. In a post-Roe landscape where access to accurate reproductive health information is more crucial than ever, Meta's enforcement system is silencing the very voices communities need most.
This is the fifth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Governor Newsom Should Make it Easier to Exercise Our Privacy Rights
California has one of the nation’s most comprehensive consumer data privacy laws. But it’s not always easy for people to exercise those privacy rights. That’s why we supported Assemblymember Josh Lowenthal’s A.B. 566 throughout the legislative session and are now asking California Governor Gavin Newsom to sign it into law.
A.B. 566 does a very simple thing. It directs browsers—such as Google’s Chrome, Apple’s Safari, Microsoft’s Edge, or Mozilla’s Firefox—to give all their users the option to tell companies not to sell or share the personal information collected about them on the internet. In other words: it makes it easy for Californians to tell companies what they want to happen with their own information.
By making it easy to use tools that allow you to send these sorts of signals to companies’ websites, A.B. 566 makes the California Consumer Privacy Act more user-friendly. And the easier it is to exercise your rights, the more power you have.
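This kind of browser-sent opt-out signal already exists as a web standard proposal: Global Privacy Control (GPC), which participating browsers express as a `Sec-GPC: 1` request header on every request. A.B. 566 doesn't mandate GPC by name, but as a sketch of how a website could honor such a signal (the dict-based request here is a stand-in for a real web framework's request object):

```python
# Sketch of honoring a Global Privacy Control opt-out signal server-side.
# GPC-enabled browsers send the "Sec-GPC: 1" header; the request object
# below is a simplified stand-in for a real framework's request.

def user_opted_out(headers: dict) -> bool:
    """True if the request carries a GPC do-not-sell/share signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}
if user_opted_out(request_headers):
    # Treat this visitor as having made a CCPA "do not sell or share"
    # request, with no per-site forms or account settings required.
    pass
```

The point of the bill is exactly this asymmetry of effort: one browser setting on the user's side, one header check on the site's side, instead of a separate opt-out process at every company.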
This is a necessary step, because even though the CCPA gives all people in California the right to tell companies not to sell or share their personal information, companies have not made it easy to exercise this right. Right now, someone who wants to make these requests has to go through the processes set up by each company that may collect their information individually. Companies have also often made it pretty hard to make, or even find out how to make, these requests. Giving people the option for an easier way to communicate how they want companies to treat their personal information helps rebalance the often-lopsided relationship between the two.
Industry groups who want to keep the scales tipped firmly in favor of corporations have lobbied heavily against A.B. 566. But we urge Gov. Newsom not to listen to those who want it to remain difficult for people to exercise their CCPA rights. EFF’s technologists, lawyers, and advocates think A.B. 566 empowers consumers without imposing regulations that would limit innovation. We think Californians should have easy tools to tell companies how to deal with their information, and urge Gov. Newsom to sign this bill.