Speaking Freely: Tomiwa Ilori

1 week 3 days ago

Interviewer: David Greene

*This interview has been edited for length and clarity.

Tomiwa Ilori is an expert researcher and a policy analyst with a focus on digital technologies and human rights. Currently, he is an advisor for the B-Tech Africa Project at UN Human Rights and a Senior ICFP Fellow at HURIDOCS. His postgraduate qualifications include master's and doctorate degrees from the Centre for Human Rights, Faculty of Law, University of Pretoria. All views and opinions expressed in this interview are personal. 

Greene: Why don’t you start by introducing yourself?

Tomiwa Ilori: My name is Tomiwa Ilori. I’m a legal consultant with expertise in digital rights and policy. I work with a lot of organizations on digital rights and policy including information rights, business and human rights, platform governance, surveillance studies, data protection and other aspects. 

Greene: Can you tell us more about the B-Tech project? 

The B-Tech project is a project by the UN Human Rights Office and the idea behind it is to mainstream the UN Guiding Principles on Business and Human Rights (UNGPs) into the tech sector. The project looks at, for example, how social media platforms can apply human rights due diligence frameworks or processes to their products and services more effectively. We also work on topical issues such as Generative AI and its impacts on human rights. For example, how do the UNGPs apply to Generative AI? What guidance can the UNGPs provide for the regulation of Generative AI and what can actors and policymakers look for when regulating Generative AI and other new and emerging technologies? 

Greene: Great. This series is about freedom of expression. So my first question for you is what does freedom of expression mean to you personally? 

I think freedom of expression is like oxygen, more or less like the air we breathe. There is nothing about being human that doesn’t involve expression, just like drawing breath. Even beyond just being a right, it’s an intrinsic part of being human. It’s embedded in us from the start. You have this natural urge to want to express yourself right from being an infant. So beyond being a human right, it is something you can almost not do without in every facet of life. Just to put it as simply as possible, that’s what it means to me. 

Greene: Is there a single experience or several experiences that shaped your views about freedom of expression? 

Yes. For context, I’m Nigerian and I also grew up in the Southwestern part of the country where most of the Yorùbá people live. As a Yorùbá person and as someone who grew up listening to and speaking the Yorùbá language, language has a huge influence on me, my philosophy and my ideas. I have a mother who loves to speak in proverbs and mostly in Yorùbá. Most of these proverbs, which are usually profound, show that free speech is the cornerstone of being human, being part of a community, and exercising your right to life and existence. Sharing expression and growing up in that kind of community shaped my worldview about my right to be. Closely attached to my right to be is my right to express myself. More importantly, it also shaped my view about how my right to be does not necessarily interrupt someone else’s right to be. So, yes, my background and how I grew up really shaped me. Then, I was fortunate to further my studies. My graduate studies including my doctorate focused on freedom of expression. So I got both the legal and traditional background grounded in free speech studies and practices in unique and diverse ways. 

Greene: Can you talk more about whether there is something about the Yorùbá language or culture that is uniquely supportive of freedom of expression? 

There’s a proverb that goes, “A kìí pa ohùn mọ agogo lẹ́nu” and what that means in a loose English translation is that you cannot shut the clapperless bell up; it is the bell’s right to speak, to make a sound. So you have no right to stop a bell from doing what it’s meant to do; it suggests that it is everyone’s right to express themselves. It suffices to say that according to that proverb, you have no right to stop people from expressing themselves. There’s another proverb that is a bit similar, which is, “Ọmọdé gbọ́n, àgbà gbọ́n, lafí dá ótù Ifẹ̀” which when loosely translated refers to how both the old and the young collaborate to make the most of a society by expressing their wisdom. 

Greene: Have you ever had a personal experience with censorship? 

Yes, and I will talk about two experiences. First, and this might not fit the technical definition of censorship, but there was a time when I lived in Kampala and I had to pay a tax to access the internet, which I think is prohibitive for those who are unable to pay it. If people have to make a choice between buying bread to eat and paying a tax to access the internet, especially when one item is an opportunity cost for the other, it makes sense that someone would choose bread over paying that tax. So you could say it’s a way of censoring internet users. When you make access prohibitive through taxation, it is also a way of censoring people. Even though I was able to pay the tax, I could not stop thinking about those who were unable to afford it and for me that is problematic and qualifies as a kind of censorship. 

Another one was actually very recent. Even though the internet service provider insisted that they did not shut down or throttle the internet, I remember that during the recent protests in Nairobi, Kenya in June of 2024, I experienced an internet shutdown for the first time. According to the internet service provider, the shutdown was the result of an undersea cable cut. Suddenly my emails just stopped working and my Twitter (now X) feed wouldn’t load. The connection appeared to work for a few seconds, and then all of a sudden it would stop, then work for some time, then all of a sudden nothing. I felt incapacitated and helpless. That’s the way I would describe it. I felt like, “Wow, I have written, thought, spoken about this so many times and this is it.” For the first time I understood what it means to actually experience an internet shutdown and it’s not just the experience, it’s the helplessness that comes with it too. 

Greene: Do you think there is ever a time when the government can justify an internet shutdown? 

The simple answer is no. In my view, those who carry out internet shutdowns, especially state actors, believe that since freedom of expression and some other associated rights are not absolute, they have every right to restrict them without measure. I think what many actors that are involved in internet shutdowns use as justification is a mask for their limited capacity to do the right thing. Actors involved in shutting down the internet say that they usually do not have a choice. For example, they say that hate speech, misinformation, and online violence are being spread online in such a way that it could spill over into offline violence. Some have even gone as far as saying that they’re shutting down the internet because they want to curtail examination fraud. When these are the kinds of excuses used, it demonstrates actors’ limited understanding of what international human rights standards prescribe and what can actually be done to address the online harms that are used to justify internet shutdowns. 

Let me use an example: international human rights standards provide clear processes for instances where state actors must address online harms or where private actors must address harms to forestall offline violence. The perception is that these standards do not even give room for addressing harms, which is not the case. The process requires that whatever action you take must be legal, i.e., provided clearly in a law that is not vague, is unequivocal, and shows in detail the nature of the right that is limited. Another requirement says that whatever action is taken to limit a right must be proportional. If you are trying to fight hate speech online, don’t you think it is disproportionate to shut down the entire network just to fight one section of people spreading such speech? Another requirement is that its necessity must be justified, i.e., it must protect a clearly defined public interest or order which must be specific and not the blanket term ‘national security.’ Additionally, international human rights law is clear that these requirements are cumulative, i.e., you cannot fulfill the requirement of legality and not fulfill that of proportionality or necessity. 

This shows that when trying to regulate online harms, regulation needs to be very specific. So, for example, state actors can actually claim that a particular piece of content or speech is causing harm, which the state actors must prove according to the requirements above. You can make a request such that just that content alone is restricted. Also, these must be put in context. Take hate speech as an example: there’s the Rabat Plan of Action on hate speech, which was developed by the UN, and it’s very clear on the conditions that must be met before speech can be categorized as hate speech. So are these conditions met by state actors before, for example, they ask platforms to remove particular hate content? There are steps and processes involved in the regulation of problematic content, but state actors never simply go for targeted removals that comply with international human rights standards; they usually go for the entire network. 

I’d also like to add that I find it problematic and ironic that most state actors who are supposedly champions of digital transformation are also the ones quick to shut down the internet during political events. There is no digital transformation that does not include a free, accessible and interoperable internet. These are some of the challenges and problematic issues that I think we need to address in more detail so we can hear each other better, especially when it comes to regulating online speech and fighting internet shutdowns. 

Greene: So shutdowns are then inherently disproportionate and not authorized by law. You talked about the types of speech that might be limited. Can you give us a sense of what types of online speech you think might be appropriately regulated by governments? 

For categories of speech that can be regulated, of course, that includes hate speech. It’s provided for under international law: Article 20 of the International Covenant on Civil and Political Rights (ICCPR) prohibits propaganda for war and advocacy of hatred that constitutes incitement, and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides for this. However, these applicable provisions are not carte blanche for state actors. The major conditions that must be met before speech qualifies as hate speech must be fulfilled before it can be regarded as such. This is done in order to address instances where powerful actors define what constitutes hate speech and violate human rights under the guise of combating it. There are still laws that criminalize disaffection against the state which are used to prosecute dissent. 

Greene: In Nigeria or in Kenya or just on the continent in general? 

Yes, there are countries that still have lèse-majesté laws in their criminal laws and penal codes. We’ve had countries like Nigeria that were trying to come up with a version of such laws for the online space, but these have been fought down mostly by civil society actors. 

So hate speech does qualify as speech that could be limited, but with caveats. There are several conditions that must be met before speech qualifies as hate speech. There must be context around the speech. For example, what kind of power does the person who makes the speech wield? What is the likelihood of that speech leading to violence? What audience has the speech been made to? These are some of the criteria that must be fulfilled before you say, “okay, this qualifies as hate speech.” 

There’s also other clearly problematic content, child sexual abuse material for example, that is prima facie illegal and must be censored or removed or disallowed. That goes without saying. It’s customary international human rights law, especially as it applies to platform governance. Another category of speech could also be non-consensual sharing of intimate images, which could qualify as online gender-based violence. So these are some of the categories that could come under regulation by states. 

I also must sound a note that there are contexts to applying speech laws. It is also the reason why speech laws are among the most difficult regulations to come up with, because they are usually context-dependent, especially when they are to be balanced against international human rights standards. Of course, some of the biggest fears in platform regulation that touch on freedom of expression are how state actors could weaponize those laws to track or to attack dissent and how businesses platform speech mainly for profit. 

Greene: Is misinformation something the government should have a role in regulating or is that something that needs to be regulated by the companies or by the speakers? If it’s something we need to worry about, who has a role in regulating it? 

State actors have a role. But in my opinion I don’t think it’s regulation. The fact that you have a hammer does not mean that everything must look like a nail. The fact that a state actor has the power to make laws does not mean that it must always make laws on all social problems. I believe non-legal and multi-stakeholder solutions are required for combatting online harms. State actors have tried to do what they do best by coming up with laws that regulate misinformation. But where has that led us? The arrest and harassment of journalists, human rights defenders and activists. So it has really not solved any problems. 

When your approach is not solving any problems, I think it’s only right to re-evaluate. That’s the reason I said state actors have a role. In my view, state actors need to step back, in the sense that you don’t necessarily need to leave the scene, but step back and allow for a more holistic dialogue among stakeholders involved in the information ecosystem. You could achieve a whole lot more through digital literacy and skills than you will by criminalizing misinformation. You can do way more by supporting journalists with fact-checking skills than you will ever achieve by passing overbroad laws that limit access to information. You can do more by working with stakeholders in the information ecosystem like platforms to label problematic content than you will ever by shutting down the internet. These are some of the non-legal methods that could be used to combat misinformation and actually get results. So, state actors have a role, but it is mainly facilitatory in the sense that it should bring stakeholders together to brainstorm on what the contexts are and the kinds of useful solutions that could be applied effectively. 

Greene: What do you feel the role of the companies should be? 

Companies also have an important role, one of which is to respect human rights in the course of providing services. What I always say for technology companies is that, if a certain jurisdiction or context is good enough to make money from, it is good enough to pay attention to and respect human rights there.

One of the perennial issues that platforms face in addressing online harms is aligning their community standards with international human rights standards. But oftentimes what happens is that corporate-speak is louder than the human rights language in many of these standards. 

That said, some of the practical things that platforms could do is to step out of the corporate talk of, “Oh, we’re companies, there’s not much we can do.” There’s a lot they can do. Companies need to get more involved, step into the arena, and work with key stakeholders, including civil society, to educate and develop capacity on how their platforms actually work. For example, what are the processes involved in taking down a piece of content? What are the processes involved in getting appeals? What are the processes involved in actually getting redress when a piece of content has been wrongly taken down? What are the ways platforms can accurately—and I say accurately emphatically because I’m not speaking about using automated tools—label content? Platforms also have responsibilities in being totally invested in the contexts they do business in. What are the triggers for misinformation in a particular country? Elections, conflict, protests? These are like early warning systems that platforms need to start paying attention to in order to understand their contexts and be able to address the harms on their platforms better. 

Greene: What’s the most pressing free speech issue in the region in which you work? 

Well, for me, I think of a few key issues. Number one, which has been going on for the longest time, is the government’s use of laws to stifle free speech. Most of the laws that are used are cybercrime laws, electronic communication laws, and old press codes and criminal codes. They were never justified and they’re still not justified. 

A second issue is the privatization of speech by companies regarding the kind of speech that gets promoted or demoted. What are the guidelines on, for example, political advertisements? What are the guidelines on targeted advertisement? How are people’s data curated? What is it like in the algorithm black box? Platforms’ roles in who says what, how, when, and where are also a burning free speech issue. And we are moving towards a future where speech is being commodified and privatized. Public media, for example, are now being relegated to the background. Everyone wants to be on social media and I’m not saying that’s a terrible thing, but it gives us a lot to think about, a lot to chew on. 

Greene: And finally, who is your free speech hero? 

His name is Felá Aníkúlápó Kútì. Fela was a political musician and the originator of Afrobeat (not afrobeats with an “s,” but the original Afrobeat that the newer genre came from). Fela never started out as a political musician, but his music became highly political and highly popular among the people for obvious reasons. His music also became timely because, as a political musician in Nigeria who lived during the brutal military era, he resonated with a lot of people. He was a huge thorn in the flesh of despotic Nigerian and African leaders. So, for me, Fela is my free speech hero. He said quite a lot with his music that many people in his generation would never dare to say because of the political climate at that time. Taking such risks even in the face of brazen violence and even death was remarkable.

Fela was not just a political musician who understood the power of expression. He was also someone who understood the power of visual expression. He was unique in his own way and expressed himself through music, through his lyrics. He’s someone who has inspired a lot of people including musicians, politicians and a lot of new generation activists.

David Greene

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

1 week 3 days ago

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institution begins – a period where newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world. 

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
  • The EU must create conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies. 
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

Christoph Schmon

FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv

2 weeks ago

The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts by making misleading claims about its technology. Essentially, Evolv’s technology, which is in schools, subways, and stadiums, does far less than the company has been claiming. 

The FTC alleged in their complaint that despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal detectors. One district in Kentucky spent $17 million to outfit its schools with the software. 

The settlement requires notice to the many schools which use this technology to keep weapons out of classrooms that they are allowed to cancel their contracts. It also blocks the company from making any representations about its technology’s:

  • ability to detect weapons
  • ability to ignore harmless personal items
  • ability to detect weapons while ignoring harmless personal items
  • ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags

The company also is prohibited from making statements regarding: 

  • Weapons detection accuracy, including in comparison to the use of metal detectors
  • False alarm rates, including comparisons to the use of metal detectors
  • The speed at which visitors can be screened, as compared to the use of metal detectors
  • Labor costs, including comparisons to the use of metal detectors 
  • Testing, or the results of any testing
  • Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools.

If the company can’t say these things anymore…then what do they even have left to sell? 

There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public data in order to power “AI” surveillance, only for taxpayers to learn it does no such thing. “Just walk out” stores actually required people watching you on camera to determine what you purchased. Gunshot detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often operates. 

Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as policing and crime prediction, and falsely identifying people as suspects based on facial recognition. 

Now politicians, schools, police departments, and private venues have been duped again. This time, by Evolv, a company that purports to sell “weapon detection technology” that it claimed would use AI to scan people entering a stadium, school, or museum and theoretically alert authorities if it recognizes the shape of a weapon on a person. 

Even before the new FTC action, there were indications that this technology was not an effective solution to weapon-based violence. From July to October, New York City rolled out a trial of Evolv technology in 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans there were 118 false positives. Twelve knives and no guns were recovered. 
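Some quick arithmetic on those reported numbers puts the performance in perspective. This is a rough sketch, and it assumes the 118 false positives plus the 12 recovered knives account for all alarms, which the reporting does not spell out:

```python
# Reported figures from the NYC subway pilot of Evolv scanners.
scans = 2749
false_positives = 118
knives_recovered = 12
guns_recovered = 0

# Assumption: each recovered weapon corresponds to one true alarm.
alarms = false_positives + knives_recovered + guns_recovered

print(f"Share of scans that alarmed: {alarms / scans:.1%}")               # ~4.7%
print(f"Share of alarms that were false: {false_positives / alarms:.1%}")  # ~90.8%
```

On those assumptions, roughly nine out of ten alarms were false, and fewer than one scan in two hundred turned up an actual weapon.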

Make no mistake, false positives are dangerous. Falsely telling officers to expect an armed individual is a recipe for an unarmed person to be injured or even killed.

Cities, performance venues, schools, and transit systems are understandably eager to do something about violence–but throwing money at the problem by buying unproven technology is not the answer and actually takes away resources and funding from more proven and systematic approaches. We applaud the FTC for standing up to the lucrative security theater technology industry. 

Matthew Guariglia

This Bill Could Put A Stop To Censorship By Lawsuit

2 weeks 1 day ago

For years now, deep-pocketed individuals and corporations have been turning to civil lawsuits to silence their opponents. These Strategic Lawsuits Against Public Participation, or SLAPPs, aren’t designed to win on the merits, but rather to harass journalists, activists, and consumers into silence by suing them over their protected speech. While 34 states have laws to protect against these abuses, there is still no protection at a federal level. 

Today, Reps. Jamie Raskin (D-MD) and Kevin Kiley (R-CA) introduced the bipartisan Free Speech Protection Act. This bill is the best chance we’ve seen in many years to secure strong federal protection for journalists, activists, and everyday people who have been subject to harassing meritless lawsuits. 

Take action

Tell Congress we don’t want a weaponized court system

The Free Speech Protection Act is a long overdue tool to protect against the use of SLAPP lawsuits as legal weapons that benefit the wealthy and powerful. This bill will help everyday Americans of all political stripes who speak out on local and national issues. 

Individuals or companies who are publicly criticized (or even simply discussed) will sometimes use SLAPP suits to intimidate their critics. Plaintiffs who file these suits don’t need to win on the merits, and sometimes they don’t even intend to see the case through. But the stress of the lawsuit and the costly legal defense alone can silence or chill the free speech of defendants. 

State anti-SLAPP laws work. But since state laws are often not applicable in federal court, people and companies can still maneuver to manipulate the court system, filing cases in federal court or in states with weak or nonexistent anti-SLAPP laws. 

SLAPPs All Around 

SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples: 

Coal Ash Company Sued Environmental Activists

In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income of around $8,000—were sued for $30 million by a Georgia-based company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page, which said things like the landfill “affected our everyday life,” and, “You can’t walk outside, and you cannot breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist group. 

Shiva Ayyadurai Sued A Tech Blog That Reported On Him

In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick, an EFF Award winner, fought back in court and his reporting remains online, but the legal fees had a big effect on his business. With a strong federal anti-SLAPP law, more writers and publishers will be able to fight back against bullying lawsuits without resorting to crowd-funding. 

Logging Company Sued Greenpeace 

In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.) 

Congressman Sued His Twitter Critics And Media Outlets 

In 2019, anonymous Twitter accounts were sued by Rep. Devin Nunes, then a congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles @DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to the case, but Virginia’s weak anti-SLAPP law has enticed many plaintiffs there. 

Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper The Fresno Bee, MSNBC, a group of his own constituents, and others. Nearly all of these lawsuits were dropped or dismissed by courts. If a federal anti-SLAPP law were in place, more defendants would have a chance of dismissing such lawsuits early and recouping their legal fees. 

Fast Relief From SLAPPs

The Free Speech Protection Act gives defendants in SLAPP suits a powerful tool to defend themselves.

The bill would allow a defendant sued for speaking out on a matter of public concern to file a special motion to dismiss, which the court must generally decide on within 90 days. If the court grants the speaker-defendant’s motion, the claims are dismissed. In many situations, defendants who prevail on an anti-SLAPP motion will be entitled to have the plaintiff reimburse them for their legal fees. 

Take action

Tell Congress to pass the Free Speech Protection Act

EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws enhance the rights of all. We urge Congress to pass The Free Speech Protection Act. 

Joe Mullin

Let's Answer the Question: "Why is Printer Ink So Expensive?"

2 weeks 1 day ago

Did you know that most printer ink isn’t even expensive to make? Why, then, is it so expensive to refill the ink on your printer? 

The answer is actually pretty simple: monopolies, weird laws, and companies exploiting their users for profit. If this sounds mildly infuriating and makes you want to learn ways to fight back, then head over to our new site, Digital Rights Bytes! We’ve even created a short video to explain what the heck is going on here.  

We’re answering the common tech questions that may be bugging you. Whether you’re hoping to learn something new or want to share resources with your family and friends, Digital Rights Bytes can be your one-stop-shop to learn more about the technology you use every day.  

Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. If you’ve got additional questions you’d like us to tackle in the future, let us know on your favorite social platform using the hashtag #DigitalRightsBytes! 

Christian Romero

Location Tracking Tools Endanger Abortion Access. Lawmakers Must Act Now.

2 weeks 2 days ago

EFF wrote recently about Locate X, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphone devices. Developed and sold by the data surveillance company Babel Street, Locate X collects smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices. The tool features a navigable map with red dots, each representing an individual device. Users can then follow the location of specific devices as they move about the map.

Locate X–and other similar services–are able to do this by taking advantage of our largely unregulated location data market.

Unfettered location tracking puts us all at risk. Law enforcement agencies can purchase their way around warrant requirements and bad actors can pay for services that make it easier to engage in stalking and harassment. Location tracking tools particularly threaten groups especially vulnerable to targeting, such as immigrants, the LGBTQ+ community, and even U.S. intelligence personnel abroad. Crucially, in a post-Dobbs United States, location surveillance also poses a serious danger to abortion-seekers across the country.

EFF has warned before about how the location data market threatens reproductive rights. The recent reports on Locate X illustrate even more starkly how the collection and sale of location data endangers patients in states with abortion bans and restrictions.

In late October, 404 Media reported that privacy advocates from Atlas Privacy, a data removal company, were able to get their hands on Locate X and use it to track an individual device’s location data as it traveled across state lines to visit an abortion clinic. Although the tool was designed for law enforcement, the advocates gained access by simply asserting that they planned to work with law enforcement in the future. They were then able to use the tool to track an individual device as it traveled from an apparent residence in Alabama, where there is a complete abortion ban, to a reproductive health clinic in Florida, where abortion is banned after 6 weeks of pregnancy. 

Following this report, we published a guide to help people shield themselves from tracking tools like Locate X. While we urge everyone to take appropriate technical precautions for their situation, it’s far past time to address the issue at its source. The onus shouldn’t be on individuals to protect themselves from such invasive surveillance. Tools like Locate X only exist because U.S. lawmakers have failed to enact legislation that would protect our location data from being bought and sold to the highest bidder. 

Thankfully, there’s still time to reshape the system, and there are a number of laws legislators could pass today to help protect us from mass location surveillance. Remember: when our location information is for sale, so is our safety. 

Blame Data Brokers and the Online Advertising Industry

There is a vast array of apps available for your smartphone that request access to your location. Sharing this information, however, may allow your location data to be harvested and sold to shadowy companies known as data brokers. Apps request access to device location to provide various features, but once access has been granted, apps can mishandle that information and are free to share and sell your whereabouts to third parties, including data brokers. These companies collect data showing the precise movements of hundreds of millions of people without their knowledge or meaningful consent. They then make this data available to anyone willing to pay, whether that’s a private company like Babel Street (and anyone they in turn sell to) or government agencies, such as law enforcement, the military, or ICE.
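To make that pipeline concrete, here is a minimal sketch of what a single row in a broker's location feed might look like. The field names are hypothetical; every broker uses its own schema:

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    """One hypothetical row in a commercial location-data feed."""
    ad_id: str       # mobile advertising ID: pseudonymous, but persistent across apps
    latitude: float
    longitude: float
    timestamp: int   # Unix epoch seconds
    source_app: str  # the app whose embedded SDK shared this fix

# Because the ad ID persists, a buyer can stitch rows harvested from many
# different apps into a single device's day-by-day movement history.
record = LocationRecord(
    ad_id="38400000-8cf0-11bd-b23e-10b96e40000d",
    latitude=33.5186,
    longitude=-86.8104,
    timestamp=1730467200,
    source_app="com.example.weather",
)
```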

This puts everyone at risk. Our location data reveals far more than most people realize, including where we live and work, who we spend time with, where we worship, whether we’ve attended protests or political gatherings, and when and where we seek medical care—including reproductive healthcare.

Without massive troves of commercially available location data, invasive tools like Locate X would not exist.

For years, EFF has warned about the risk of law enforcement or bad actors using commercially available location data to track and punish abortion seekers. Multiple data brokers have specifically targeted and sold location information tied to reproductive healthcare clinics. The data broker SafeGraph, for example, classified Planned Parenthood as a “brand” that could be tracked, allowing investigators at Motherboard to purchase data for over 600 Planned Parenthood facilities across the U.S.

Meanwhile, the data broker Near sold the location data of abortion-seekers to anti-abortion groups, enabling them to send targeted anti-abortion ads to people who visited clinics. And location data firm Placer.ai even once offered heat maps showing where visitors to Planned Parenthood clinics approximately lived. Sale to private actors is disturbing given that several states have introduced and passed abortion “bounty hunter” laws, which allow private citizens to enforce abortion restrictions by suing abortion-seekers for cash.

Government officials in abortion-restrictive states are also targeting location information (and other personal data) about people who visit abortion clinics. In Idaho, for example, law enforcement used cell phone data to charge a mother and son with kidnapping for aiding an abortion-seeker who traveled across state lines to receive care. While police can obtain this data by gathering evidence and requesting a warrant based on probable cause, the data broker industry allows them to bypass legal requirements and buy this information en masse, regardless of whether there’s evidence of a crime.

Lawmakers Can Fix This

So far, Congress and many states have failed to enact legislation that would meaningfully rein in the data broker industry and protect our location information. Locate X is simply the end result of such an unregulated data ecosystem. But it doesn’t have to be this way. There are a number of laws that Congress and state legislators could pass right now that would help protect us from location tracking tools.

1. Limit What Corporations Can Do With Our Data

A key place to start? Stronger consumer privacy protections. EFF has consistently pushed for legislation that would limit the ability of companies to harvest and monetize our data. If we enforce strict rules on how location data is collected, shared, and sold, we can stop it from ending up in the hands of private surveillance companies and law enforcement without our consent.

We urge legislators to consider comprehensive, across-the-board data privacy laws. Companies should be required to minimize the collection and processing of location data to only what is strictly necessary to offer the service the user requested (see, for example, the recently-passed Maryland Online Data Privacy Act). Companies should also be prohibited from processing a person’s data, except with their informed, voluntary, specific, opt-in consent.

We also support reproductive health-specific data privacy laws, like Rep. Sara Jacobs’ proposed “My Body My Data” Act. Laws like this would create important protections for a variety of reproductive health data, even beyond location data. Abortion-specific data privacy laws can provide some protection against the specific problem posed by Locate X. But to fully protect against location tracking tools, we must legally limit processing of all location data and not just data at sensitive locations, such as reproductive healthcare clinics.

While a limited law might provide some help, it would not offer foolproof protection. Imagine this scenario: someone travels from Alabama to New York for abortion care. With a data privacy law that protects only sensitive, reproductive health locations, Alabama police could still track that person’s device on the journey to New York. Upon reaching the clinic in New York, their device would disappear into a sensitive location blackout bubble for a couple of hours, then reappear outside of the bubble where police could resume tracking as the person heads home. In this situation, it would be easy to infer where the person was during those missing two hours, giving Alabama police the lead they need.
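A toy example makes that inference concrete. The coordinates, timestamps, and gap threshold below are all invented, and a real commercial feed would be far denser:

```python
from datetime import datetime, timedelta

# Hypothetical location fixes for one device: (time, latitude, longitude).
# A sensitive-location-only law suppresses fixes near the clinic itself.
track = [
    (datetime(2024, 11, 1, 9, 0),   40.7480, -73.9860),  # en route
    (datetime(2024, 11, 1, 9, 40),  40.7540, -73.9840),  # last fix before the "bubble"
    (datetime(2024, 11, 1, 12, 10), 40.7548, -73.9845),  # first fix after it
    (datetime(2024, 11, 1, 13, 0),  40.7280, -74.0020),  # heading away
]

LONG_GAP = timedelta(hours=1)  # an unusual silence for an always-on phone

# Long gaps stand out, and the fixes on either side localize the hidden stop.
for (t1, lat1, lon1), (t2, lat2, lon2) in zip(track, track[1:]):
    if t2 - t1 > LONG_GAP:
        print(f"{t2 - t1} gap near ({lat1}, {lon1})")
        # Anything sensitive within that small area during those hours --
        # say, a clinic -- is an easy inference for an investigator.
```

The bracketing fixes pin the hidden stop to a few blocks and a two-and-a-half-hour window, which is exactly the lead the hypothetical Alabama police would need.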

The best solution is to minimize all location data, no exceptions.

2. Limit How Law Enforcement Can Get Our Data

Congress and state legislatures should also pass laws limiting law enforcement’s ability to access our location data without proper legal safeguards.

Much of our mobile data, like our location data, is information law enforcement would typically need a court order to access. But thanks to the data broker industry, law enforcement can skip the courts entirely and simply head to the commercial market. The U.S. government has turned this loophole into a way to gather personal data on individuals without a search warrant.

Lawmakers must close this loophole—especially if they’re serious about protecting abortion-seekers from hostile law enforcement in abortion-restrictive states. A key way to do this is for Congress to pass the Fourth Amendment is Not For Sale Act, which was originally introduced by Senator Ron Wyden in 2021 and made the important and historic step of passing the U.S. House of Representatives earlier this year. 

Another crucial step is to ban law enforcement from sending “geofence warrants” to corporate holders of location data. Unlike traditional warrants, a geofence warrant doesn’t start with a particular suspect or even a device or account; instead police request data on every device in a given geographic area during a designated time period, regardless of whether the device owner has any connection to the crime under investigation. This could include, of course, an abortion clinic. 
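In data terms, a geofence demand amounts to a dragnet filter over a provider's stored location records, along the lines of the illustrative sketch below. The records, bounding box, and time window are all invented:

```python
# Hypothetical stored fixes: (device_id, latitude, longitude, unix_timestamp).
records = [
    ("device-A", 34.0522, -118.2437, 1717801000),
    ("device-B", 34.0526, -118.2441, 1717802800),
    ("device-C", 34.1000, -118.3000, 1717801500),  # outside the fence
]

# The warrant names a geographic box and a time window -- not a suspect.
LAT_MIN, LAT_MAX = 34.0515, 34.0530
LON_MIN, LON_MAX = -118.2445, -118.2430
T_START, T_END = 1717797600, 1717804800

swept_in = [
    device
    for device, lat, lon, ts in records
    if LAT_MIN <= lat <= LAT_MAX
    and LON_MIN <= lon <= LON_MAX
    and T_START <= ts <= T_END
]
print(swept_in)  # ['device-A', 'device-B']: everyone present, suspect or not
```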

Notably, geofence warrants are very popular with law enforcement. Between 2018 and 2020, Google alone received more than 5,700 demands of this type from states that now have anti-abortion and anti-LGBTQ legislation on the books.

Several federal and state courts have already found individual geofence warrants to be unconstitutional and some have even ruled they are “categorically prohibited by the Fourth Amendment.” But instead of waiting for remaining courts to catch up, lawmakers should take action now, pass legislation banning geofence warrants, and protect all of us–abortion-seekers included–from this form of dragnet surveillance.

3. Make Your State a Data Sanctuary

In the wake of the Dobbs decision, many states stepped up to serve as health care sanctuaries for people seeking abortion care that they could not access in their home states. To truly be a safe refuge, these states must also be data sanctuaries. A state that has data about people who sought abortion care must protect that data, and not disclose it to adversaries who would use it to punish them for seeking that healthcare. California has already passed laws to this effect, and more states should follow suit.

What You Can Do Right Now

Even before lawmakers act, there are steps you can take to better shield your location data from tools like Locate X.  As noted above, we published a Locate X-specific guide several weeks ago. There are also additional tips on EFF’s Surveillance Self-Defense site, as well as many other resources available to provide more guidance in protecting your digital privacy. Many general privacy practices also offer strong protection against location tracking. 

But don’t stop there: we urge you to make your voice heard and contact your representatives. While these precautions offer immediate protection, only stronger laws will ensure comprehensive location privacy in the long run.

Lisa Femia

Top Ten EFF Digital Security Resources for People Concerned About the Incoming Trump Administration

2 weeks 3 days ago

In the wake of the 2024 election in the United States, many people are concerned about tightening up their digital privacy and security practices. As always, we recommend that people start making their security plan by understanding their risks. For most people in the U.S., the threats that they face and the methods by which they are likely to be surveilled or harassed have not changed, but the consequences of digital privacy or security failures may become much more serious, especially for vulnerable populations such as journalists, activists, LGBTQ+ people, people seeking or providing abortion-related care, Black or Indigenous people, and undocumented immigrants.

EFF has decades of experience in providing digital privacy and security resources, particularly for vulnerable people. We’ve written a lot of resources over the years and here are the top ten that we think are most useful right now:

1. Surveillance Self-Defense

https://ssd.eff.org/

Our Surveillance Self-Defense guides are a great place to start your journey of securing yourself against digital threats. We know that it can be a bit overwhelming, so we recommend starting with our guide on making a security plan so you can familiarize yourself with the basics and decide on your specific needs. Or, if you’re planning to head out to a protest soon and want to know the most important ways to protect yourself, check out our guide to Attending a Protest. Many people in the groups most likely to be targeted in the upcoming months will need advice tailored to their specific threat models, and for that we recommend the Security Scenarios module as a quick way to find the right information for your particular situation. 

2. Street-Level Surveillance

https://sls.eff.org/ 

If you are creating your security plan for the first time, it’s helpful to know which technologies might realistically be used to spy on you. If you’re going to be out on the streets protesting or even just existing in public, it’s important to identify which threats to take seriously. Our Street-Level Surveillance team has spent years studying the technologies that law enforcement uses and has made this handy website where you can find information about technologies including drones, face recognition, license plate readers, stingrays, and more.

3. Atlas Of Surveillance

https://atlasofsurveillance.org/ 

Once you have learned about the different types of surveillance technologies police can acquire from our Street-Level Surveillance guides, you might want to know which technologies your local police have already bought. You can find that in our Atlas of Surveillance, a crowd-sourced map of police surveillance technologies in the United States. 

4. Doxxing: Tips To Protect Yourself Online & How to Minimize Harm

https://www.eff.org/deeplinks/2020/12/doxxing-tips-protect-yourself-online-how-minimize-harm

Surveillance by governments and law enforcement is far from the only kind of threat that people face online. We expect to see an increase in doxxing and harassment of vulnerable populations by vigilantes, emboldened by the incoming administration’s threatened policies. This guide is our thinking around the precautions you may want to take if you are likely to be doxxed and how to minimize the harm if you’ve been doxxed already.

5. Using Your Phone in Times of Crisis

https://www.eff.org/deeplinks/2022/03/using-your-phone-times-crisis

Using your phone in general can be a cause for anxiety for many people. We have a short guide on what considerations you should make when you are using your phone in times of crisis. This guide is specifically written for people in war zones, but may also be useful more generally. 

6. Surveillance Self-Defense for Campus Protests

https://www.eff.org/deeplinks/2024/06/surveillance-defense-campus-protests 

One prediction we can safely make for 2025 is that campus protests will continue to be important. This blog post is our latest thinking about how to put together your security plan before you attend a protest on campus.

7. Security Education Companion

https://www.securityeducationcompanion.org/

For those who are already comfortable with Surveillance Self-Defense, you may be getting questions from your family, friends, or community about what to do now. You may even consider giving a digital security training session to people in your community, and for that you will need guidance and training materials. The Security Education Companion has everything you need to get started putting together a training plan for your community, from recommended lesson plans and materials to guides on effective teaching.

8. Police Location Tracking

https://www.eff.org/deeplinks/2024/11/creators-police-location-tracking-tool-arent-vetting-buyers-heres-how-protect 

One police surveillance technology we are especially concerned about is location tracking services. These are data brokers that get your phone's location, usually through the same invasive ad networks that are baked into almost every app, and sell that information to law enforcement. This can include historical maps of where a specific device has been, or a list of all the phones that were at a specific location, such as a protest or abortion clinic. This blog post goes into more detail on the problem and provides a guide on how to protect yourself and keep your location private.

9. Should You Really Delete Your Period Tracking App?

https://www.eff.org/deeplinks/2022/06/should-you-really-delete-your-period-tracking-app

As soon as the Supreme Court overturned Roe v. Wade, one of the most popular bits of advice going around the internet was to “delete your period tracking app.” Deleting your period tracking app may feel like an effective countermeasure in a world where seeking abortion care is increasingly risky and criminalized, but it’s not advice that is grounded in the reality of the ways in which governments and law enforcement currently gather evidence against people who are prosecuted for their pregnancy outcomes. This blog post provides some more effective ways of protecting your privacy and sensitive information. 

10. Why We Can’t Just Tell You Which Messenger App to Use

https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-recommendation

People are always asking us to give them a recommendation for the best end-to-end encrypted messaging app. Unfortunately, this is asking for a simple answer to an extremely nuanced question. While the short answer is “probably Signal most of the time,” the long answer goes into why that is not always the case. Since we wrote this in 2018, some companies have come and gone, but our thinking on this topic hasn’t changed much.

Bonus external guide

https://digitaldefensefund.org/learn

Our friends at the Digital Defense Fund have put together an excellent collection of guides aimed at particularly vulnerable people who are thinking about digital security for the first time. They have a comprehensive collection of links to other external guides as well.

***

EFF is committed to keeping our privacy and security advice accurate and up-to-date, reflecting the needs of a variety of vulnerable populations. We hope these resources will help you keep yourself and your community safe in dangerous times.

Cooper Quintin

Speaking Freely: Aji Fama Jobe

2 weeks 3 days ago

*This interview has been edited for length and clarity.

Aji Fama Jobe is a digital creator, IT consultant, blogger, and tech community leader from The Gambia. She helps run Women TechMakers Banjul, an organization that provides visibility, mentorship, and resources to women and girls in tech. She also serves as an Information Technology Assistant with the World Bank Group where she focuses on resolving IT issues and enhancing digital infrastructure. Aji Fama is a dedicated advocate working to leverage technology to enhance the lives and opportunities of women and girls in Gambia and across Africa.

Greene: Why don’t you start off by introducing yourself? 

My name is Aji Fama Jobe. I’m from Gambia and I run an organization called Women TechMakers Banjul that provides resources to women and girls in Gambia, particularly in the Greater Banjul area. I also work with other organizations that focus on STEM and digital literacy and aim to impact more regions and more people in the world. Gambia is made up of six different regions and we have host organizations in each region. So we go to train young people, especially women, in those communities on digital literacy. And that’s what I’ve been doing for the past four or five years. 

Greene: So this series focuses on freedom of expression. What does freedom of expression mean to you personally? 

For me it means being able to express myself without being judged. Because most of the time—and especially on the internet because of a lot of cyber bullying—I tend to think a lot before posting something. It’s all about, what will other people think? Will there be backlash? And I just want to speak freely. So for me it means to speak freely without being judged. 

Greene: Do you feel like free speech means different things for women in the Gambia than for men? And how do you see this play out in the work that you do? 

In the Gambia we have freedom of expression, the laws are there, but the culture is the opposite of the laws. Society still frowns on women who speak out, not just in the workspace but even in homes. Sometimes men say a woman shouldn’t speak loudly or that there’s a certain way women should express themselves. It’s the culture itself that makes women not speak up in certain situations. In our culture it’s widely accepted that you let the man or the head of the family—who’s normally a man, of course—speak. I feel like freedom of speech is really important when it comes to the work we do. Because women should be able to speak freely. And when you speak freely it gives you that confidence that you can do something. So it’s a larger issue. What our organization does on free speech is address the unconscious bias in the tech space that impacts working women. I work as an IT consultant and sometimes when we’re trying to do something technical people always assume IT specialists are men. So sometimes we just want to speak up and say, “It’s IT woman, not IT guy.” 

Greene: We could say that maybe socially we need to figure this out, but now let me ask you this. Do you think the government has a role in regulating online speech? 

Those in charge of policy enforcement don’t understand how to navigate these online pieces. It’s not just about putting the policies in place. They need to train people how to navigate this thing or how to update these policies in specific situations. It’s not just about what the culture says. The policy is the policy and people should follow the rules, not just as civilians but also as policy enforcers and law enforcement. They need to follow the rules, too. 

Greene: What about the big companies that run these platforms? What’s their role in regulating online speech? 

With cyber-bullying I feel like the big companies need to play a bigger role in trying to bring down content sometimes. Take Facebook for example. They don’t have many people that work in Africa and understand Africa with its complexities and its different languages. For instance, in the Gambia we have 2.4 million people but six or seven languages. On the internet people use local languages to do certain things. So it’s hard to moderate on the platform’s end, but also they need to do more work. 

Greene: So six local languages in the Gambia? Do you feel there’s any platform that has the capability to moderate that? 

In the Gambia? No. We have some civil society that tries to report content, but it’s just civil society and most of them do it on a voluntary basis, so it’s not that strong. The only thing you can do is report it to Facebook. But Facebook has bigger countries and bigger issues to deal with, and you end up waiting in a lineup of those issues and then the damage has already been done. 

Greene: Okay, let’s shift gears. Do you consider the current government of the Gambia to be democratic? 

I think it is pretty democratic because, since 2016, you can speak freely, unlike under our last president. I was born in an era when people were not able to speak up. So I can only compare the last regime and the current one. I think now it’s more democratic because people are able to speak out online. I can remember that before the elections of 2016, if you said certain things online you had to move out of the country. Before 2016 people who were abroad would not come back to Gambia for fear of facing reprisal for content they had posted online. Since 2016 we have seen people we hadn’t seen for like ten or fifteen years. They were finally able to come back. 

Greene: So you lived in the country under a non-democratic regime with the prior administration. Do you have any personal stories you could tell about life before 2016 and feeling like you were censored? Or having to go outside of the country to write something? 

Technically it was a democracy but the fact was you couldn’t speak freely. What you said could get you in trouble—I don’t consider that a democracy. 

During the last regime I was in high school. One thing I realized was that there were certain political things teachers wouldn’t discuss because they had to protect themselves. At some point I realized things changed because before 2016 we didn’t say the president’s name. We would give him nicknames, but the moment the guy left power we felt free to say his name directly. I experienced censorship from not being able to say his name or talk about him. I realized there was so much going on when the Truth, Reconciliation and Reparations Commission (TRRC) happened and people finally had the confidence to go on TV and speak about their stories. 

As a young person I learned that what you see is not everything that’s happening. There were a lot of things happening that we couldn’t see because the media was restricted. The media couldn’t publish certain things. When he left, and through the TRRC, we learned about what happened. A lot of people lost their lives. Some had to flee. Some people lost their mom or dad; some were raped. I think that opened my world. Even though I’m not politically inclined or in the political space, what happened there impacted me. Because we had a political moment where the president didn’t accept the elections, and a lot of people fled and went to Senegal. I stayed, and for like three or four months the whole country was on lockdown. So that was my experience of what happens when things don’t go as planned when it comes to the electoral process. That was my personal experience. 

Greene: Was there news media during that time? Was it all government-controlled or was there any independent news media? 

We had some independent news media, but those were run by Gambians outside of the country. The media inside the country couldn’t publish anything against the government. If you wanted to know what was really happening, you had to go online. At some point, WhatsApp was blocked so we had to move to Telegram and other social media. At some point, because my dad was in Iraq, I had to download a VPN so I could talk to him and tell him what was happening in the country, since my mom and I were there. That’s why when people censor the internet I’m really keen on that aspect, because I’ve experienced it. 

Greene: What made you start doing the work you’re doing now? 

First, when I started doing computer science—I have a computer science background—there was no one there to tell me what to do or how to do it. I had to navigate things for myself or look for people to guide me. I just thought, we don’t have to repeat the same thing for other people. That’s why we started Women TechMakers. We try to guide people and train them. We want employers to focus on skills instead of gender. So we get to train people, and we have a lot of book plans and online resources that we share with people. If you want to go into a certain field we try to guide you and send you resources. That’s one of the things we do. Just for people to feel confident in their skills. And every day people say to me, “Because of this program I was able to get this thing I wanted,” like a job or an event. And that keeps me going. Women get to feel confident in their skills and in the places they work, too. Companies are always looking for diversity and inclusion. Like, “oh I have two female developers.” At the end of the day you can say you have two developers and they’re very good developers. And yeah, they’re women. It’s not like they’re hired because they’re women, it’s because they’re skilled. That’s why I do what I do. 

Greene: Is there anything else you wanted to say about freedom of speech or about preserving online open spaces? 

I work with a lot of technical people who think freedom of speech is not their issue. But what I keep saying to people is that you think it’s not your issue until you experience it. Freedom of speech and digital rights are everybody’s issues. Because at the end of the day, if you don’t have the freedom to speak freely online, or if you are not protected online, we are all vulnerable. It should be everybody’s responsibility. It should be a collective thing, not just government making policies. People also need to be aware of what they’re posting online. The words you put out there can make or break someone, so it’s everybody’s business. That’s how I see digital rights and freedom of expression. As a collective responsibility. 

Greene: Okay, our last question that we ask everybody. Who is your free speech hero? 

My mom’s elder sister. Her name was Mariama Jaw, and she passed away in 2015. She was in the political space even during the time when people were not able to speak. She was my hero because I went to political rallies with her and she would say what people were not willing to say. Not just in political spaces, but in general conversation, too. She’s somebody who would tell you the truth no matter what would happen, whether her life was in danger or not. I got so much inspiration from her because a lot of women don’t go into politics or do certain things, they just want to get a husband, but she went against all odds and she was a politician, a mother, and a sister to a lot of people, to a lot of women in her community.

David Greene

🍿 Today’s Double Feature: Privacy and Free Speech

2 weeks 4 days ago

It’s Power Up Your Donation Week! Right now, your contribution to the Electronic Frontier Foundation will go twice as far to protect digital privacy, security, and free speech rights for everyone. Will you donate today to get a free 2X match?

Power Up!

Give to EFF and get a free donation match

Thanks to a fund created by a group of dedicated supporters, your online donation gets an automatic match up to $307,200 through December 10! This means every dollar you give equals two dollars to fight surveillance, oppose censorship, defend encryption, promote open access to information, and much more. EFF makes every cent count.

Lights, Laptops, Action!

Who has time to decode tech policy, understand the law, then figure out how to change things for the users? EFF does. The purpose of every attorney, activist, and technologist at EFF is to watch your back and make technology better. But you are the superstar who makes it possible with your support.

'Fix Copyright' member shirt inspired by Steamboat Willie entering the public domain.

With the help of people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; keep policymakers on the road to net neutrality; encourage the Fifth Circuit Court of Appeals to rule that location-based geofence warrants are unconstitutional; and explain why banning TikTok and passing laws like the Kids Online Safety Act (KOSA) will not achieve internet safety.

The world struggles to get tech right, but EFF’s experts advocate for you every day of the year. Take action by renewing your EFF membership! You can set the stage for civil liberties and human rights online for everyone. Please give today and let your donation go twice as far for digital rights!

Power Up!

Support internet freedom
(and get an Instant match!)

Already an EFF Member?

Strengthen the community when you help us spread the word about Power Up Your Donation Week! Here’s some sample language that you can share:

Donate to EFF this week for an instant match! Double your impact on digital privacy, security, and free speech rights for everyone. https://eff.org/power-up

Bluesky | Email | Facebook | LinkedIn | X
(More at eff.org/social)

Each of us has the power to help in the movement for internet freedom. Our future depends on forging a web where we can have private conversations and explore the world online with confidence, so I thank you for your moral support and hope to have you on EFF's side as a member, too.

________________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Aaron Jue

Amazon and Google Must Keep Their Promises on Project Nimbus

2 weeks 4 days ago

When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are customers of both technology giants. Both of these companies have made public promises that they will ensure their technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at large.  

It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies, specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the companies to come clean about their involvement.   

But we didn’t just make a public call. We sent letters directly to the Global Head of Public Policy at Amazon and to Google’s Global Head of Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024, how they were complying. We hoped that they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.  

But they failed to respond. This is unfortunate, since it leads us to question how serious they were about their promises. And it should lead you to question that too.

Project Nimbus: Technology at the Expense of Human Rights

Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making, as evidenced by data-driven targeting programs like Project Lavender and Where’s Daddy, which have reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families. 

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation.

The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement, and free association, all of which can be fostered and furthered by pervasive surveillance. These documented violations underscore the ethical responsibility of Amazon and Google, whose technologies are at the heart of this surveillance scheme. 

Amazon and Google’s Promises

Amazon and Google have made public commitments to align with the UN Guiding Principles on Business and Human Rights and their own AI ethics frameworks. These frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed dedication to these principles and casting doubt on their sincerity.

Unanswered Letters, Unanswered Accountability

When we sent letters to Amazon and Google, it was with direct, actionable questions about their involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and demanded explanations of the steps taken to prevent their technologies from facilitating abuse.

Our core demands were straightforward and tied directly to the companies’ commitments:

  • Disclose the scope of their involvement in Project Nimbus.
  • Provide evidence of risk assessments tied to this project.
  • Explain how they are addressing credible reports of misuse.

Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their silence isn’t just an insufficient response—it’s an alarming one.

Why Transparency Cannot Wait

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.

The Fight for Accountability

EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but, having raised these questions publicly, and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.   

Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and accountability.

Betty Gedlu

One Down, Many to Go with Pre-Installed Malware on Android

3 weeks 2 days ago

Last year, we investigated a Dragon Touch children’s tablet (KidzPad Y88X 10) and confirmed that it was linked to a string of fully compromised Android TV Boxes that also had multiple reports of malware, adware, and a sketchy firmware update channel. Since then, Google has taken the (now former) tablet distributor off of their list of Play Protect certified phones and tablets. The burden of catching this type of threat should not be placed on the consumer. Due diligence by manufacturers, distributors, and resellers is the only way to tackle this issue of pre-installed compromised devices making their way into the hands of unknowing customers. But in order to mitigate this issue, regulation and transparency need to be a part of the strategy. 

As of October, Dragon Touch is no longer selling any tablets on their website. However, there is lingering inventory still out there in places like Amazon and Newegg. Storefronts sometimes exist only on reseller sites for better customer reach, but considering Dragon Touch also wiped their blog of any mention of their tablets, we assume something more than a strategy shift happened here.

We wrote a guide to help parents set up their kid’s Android devices safely, but it’s difficult to choose which device to purchase to begin with. Advising people to simply buy a more expensive iPad or Amazon Fire Tablet doesn’t change the fact that people are going to purchase low-budget devices. Lower-budget devices could be just as reputable if the ecosystem provided a path for better accountability.

Who is Responsible?

There are some tools in development for consumer education, like the newly developed, voluntary Cyber Trust Mark by the FCC. This label would aim to inform consumers of an IoT device’s capabilities and guarantee that minimum security standards were met. However, expecting the consumer to carry the burden of checking for pre-installed malware is absolutely ridiculous. Responsibility should fall to regulators, manufacturers, distributors, and resellers to check for this kind of threat.

More often than not, you can search for low-budget Android devices on retailers like Amazon or Newegg and find storefront pages with little transparency on who runs the store and whether or not the devices come from a reputable distributor. This is true for more than just Android devices, but considering how many products are created for and with the Android ecosystem, working on this problem could mean better security for thousands of products.

Yes, it is difficult to track hundreds to thousands of distributors and all of their products. It is hard to keep up with rapidly developing threats in the supply chain. You can’t possibly know of every threat out there.

With all due respect to giant resellers, especially the multi-billion dollar ones: tough luck. This is what you inherit when you want to “sell everything.” You also inherit the responsibility and risk of each market you encroach on or supplant. 

Possible Remedy: Firmware Transparency

Thankfully, there is hope on the horizon and tools exist to monitor compromised firmware.

Last year, Google presented Android Binary Transparency in response to pre-installed malware. This would help track compromised firmware using two components (the first of which is sketched in code after the list):

  • An append-only log of firmware information that is immutable, globally observable, consistent, and auditable, with these properties assured cryptographically.
  • A network of participants that invest in witnesses, log health, and standardization.
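
To make the append-only log component concrete, here is a minimal sketch in Python of a Merkle-tree log over firmware metadata. It is an illustration of the general technique only, not Google’s actual design: real transparency logs (such as those built on Trillian) add RFC 6962-style inclusion and consistency proofs, signed tree heads, and witness gossip, and the entry format below is our own assumption.

```python
# Illustrative toy append-only Merkle log for firmware metadata, in the
# spirit of Certificate Transparency. Not a real implementation.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class FirmwareLog:
    """Append-only list of firmware entries summarized by a Merkle root."""

    def __init__(self) -> None:
        self.leaves: list[bytes] = []

    def append(self, entry: bytes) -> int:
        # Entries may be added but never modified or removed. The 0x00/0x01
        # domain separation of leaves and interior nodes follows RFC 6962.
        self.leaves.append(sha256(b"\x00" + entry))
        return len(self.leaves) - 1

    def root(self) -> bytes:
        # Any change to a past entry changes the root, which is what makes
        # the log auditable: observers who saw yesterday's root can detect
        # retroactive tampering.
        if not self.leaves:
            return sha256(b"")
        level = list(self.leaves)
        while len(level) > 1:
            if len(level) % 2:           # simplification: duplicate the last
                level.append(level[-1])  # node on odd-sized levels
            level = [sha256(b"\x01" + level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

# A regulator, reseller, or researcher checks that a device's firmware hash
# appears in the log and that the published root matches what independent
# witnesses observed.
log = FirmwareLog()
log.append(b"vendor=ExampleOEM model=X10 build=2024.10 sha256=<image digest>")
print(log.root().hex())
```

The point of the sketch is the one-way dependency: every published root commits to the entire history, so a manufacturer cannot quietly swap the firmware record for an already-shipped device without the mismatch being observable.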

Google is not the first to think of this concept; it largely draws on the success of Certificate Transparency. Still, better support for Android images directly from the Android ecosystem would definitely help. It would create an ecosystem of transparency in which manufacturers and developers that utilize the Android Open Source Project (AOSP) could be just as respected as higher-priced brands.

We love open source here at EFF and would like to continue to see innovation and availability in devices that aren’t necessarily created by bigger, more expensive names. But there needs to be an accountable ecosystem for these products so that pre-installed malware can be more easily detected and doesn’t land in consumer hands so easily. Right now you can verify your Pixel device if you have a little technical skill. We would like verification to be done by regulators and/or distributors instead of asking consumers to break out their command lines to verify devices themselves.

It would be ideal to see existing programs like Android’s Play Protect certification run a log like this with open-source log implementations, like Trillian. This way, security researchers, resellers, and regulating bodies could begin to monitor and query information on different Android Original Equipment Manufacturers (OEMs).

There are tools that exist to verify firmware, but right now this ecosystem is a wishlist of sorts. At EFF, we like to imagine what could be better. While a hosted comprehensive log of Android OEMs doesn’t currently exist, the tools to create it do. Some early participants for accountability in the Android realm include F-Droid’s Android SDK Transparency Log and the Guardian Project’s (Tor) Binary Transparency Log.

Time would be better spent on solving this problem systemically than on researching whether every new electronic evil rectangle or IoT device has malware or not.

A complementary solution to binary transparency is the Software Bill of Materials (SBOM). Think of this as a “list of ingredients” that make up software. This is another idea that is not very new, but it has gathered more institutional and government support. The components listed in an SBOM could highlight issues or vulnerabilities that were reported for certain parts of a piece of software. Without binary transparency, though, researchers, verifiers, auditors, and others could still be left attempting to extract firmware from devices whose makers haven’t listed their images. If manufacturers readily provided these images, SBOMs could be generated more easily and help create a less opaque market of electronics. Low budget or not.
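
As a rough illustration of the “list of ingredients” idea, here is what a minimal SBOM might look like, loosely following CycloneDX’s JSON conventions; the component names, versions, and digest below are invented for the example.

```python
# Illustrative sketch: a minimal CycloneDX-style SBOM for a firmware image.
# All component names, versions, and hashes are made up.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "firmware",
            "name": "example-bootloader",  # hypothetical component
            "version": "2.1.0",
            "hashes": [{"alg": "SHA-256", "content": "<hex digest of image>"}],
        },
        {
            "type": "library",
            "name": "libexample",  # hypothetical component
            "version": "0.9.4",
        },
    ],
}

# Auditors can match each listed component against vulnerability databases;
# binary transparency then lets them check that the shipped image is the
# one the SBOM actually describes.
print(json.dumps(sbom, indent=2))
```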

We are glad to see some movement from last year’s investigations, just in time for Black Friday. More can be done, and we hope to see not only devices taken down more swiftly when reported, especially those with shady components, but also better support for proactive detection. Regardless of how much someone can spend, everyone deserves a safe, secure device that doesn’t have malware crammed into it.

Alexis Hancock

Tell the Senate: Don’t Weaponize the Treasury Department Against Nonprofits

3 weeks 2 days ago

Last week the House of Representatives passed a dangerous bill that would allow the Secretary of the Treasury to strip a U.S. nonprofit of its tax-exempt status. If it passes the Senate and is signed into law, H.R. 9495 would give broad and easily abused new powers to the executive branch. Nonprofits would not have a meaningful opportunity to defend themselves, and they could be targeted without the government disclosing the reasons or evidence for its decision. 

This bill is an existential threat to nonprofits of all stripes. Future administrations could weaponize the powers in this bill to target nonprofits on either end of the political spectrum. Even if they are not targeted, the threat alone could chill the activities of some nonprofit organizations.

The bill’s authors have combined this attack on nonprofits, originally written as H.R. 6408, with other legislation that would prevent the IRS from imposing fines and penalties on hostages while they are held abroad. These are separate matters. Congress should separate these two bills to allow a meaningful vote on this dangerous expansion of executive power. No administration should be given this much power to target nonprofits without due process. 

tell your senator

Protect nonprofits

Over 350 civil liberties, religious, reproductive health, immigrant rights, human rights, racial justice, LGBTQ+, environmental, and educational organizations signed a letter opposing the bill as written. Now, we need your help. Tell the Senate not to pass H.R. 9495, the so-called “Stop Terror-Financing and Tax Penalties on American Hostages Act.”

Jason Kelley

EFF Tells the Second Circuit a Second Time That Electronic Device Searches at the Border Require a Warrant

3 weeks 3 days ago

EFF, along with ACLU and the New York Civil Liberties Union, filed a second amicus brief in the U.S. Court of Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Smith, involved a traveler who was stopped at Newark airport after returning from a trip to Jamaica. He was detained by border officers at the behest of the FBI and his cell phone was forensically searched. He had been under investigation for his involvement in a conspiracy to control the New York area emergency mitigation services (“EMS”) industry, which included (among other things) insurance fraud and extortion. He was subsequently prosecuted and sought to have the evidence from his cell phone thrown out of court.

As we wrote about last year, the district court made history in holding that border searches of cell phones require a warrant, and that warrantless device searches at the border therefore violate the Fourth Amendment. However, the judge allowed the evidence to be used in Mr. Smith’s prosecution because, the judge concluded, the officers had a “good faith” belief that they were legally permitted to search his phone without a warrant.

The number of warrantless device searches at the border, and the significant invasion of privacy they represent, is only increasing. In Fiscal Year 2023, U.S. Customs and Border Protection (CBP) conducted 41,767 device searches.

The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.

In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here—and that the district court was correct in applying Riley. In that case, the Supreme Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data.

Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet. As the Smith court stated, “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within the country.”

Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes, as was the case of the FBI investigating Mr. Smith’s involvement in the EMS conspiracy) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.

If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

In our brief, we also highlighted two other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Sultanov (2024) and U.S. v. Fox (2024). We plan to file briefs in their appeals, as well. Earlier this month, we filed a brief in another Second Circuit border search case, U.S. v. Kamaldoss. We hope that the Second Circuit will rise to the occasion in one of these cases and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

Sophia Cope

Looking for the Answer to the Question, "Do I Really Own the Digital Media I Paid For?"

3 weeks 3 days ago

Sure, buying your favorite video game, movie, or album online is super convenient. I personally love being able to pre-order a game and play it the night of release, without needing to go to a store. 

But something you may not have thought about before making your purchase is the difference between owning a physical and a digital copy of that media. Unfortunately, there are quite a few rights you give up by purchasing a digital copy of your favorite game, movie, or album! On our new site, Digital Rights Bytes, we outline the differences between owning physical and digital media, and why we need to break down that barrier. 

Digital Rights Bytes answers this and other common questions about technology that may be getting on your nerves, with short videos featuring adorable animals. You can also read up on what EFF is doing to ensure you actually own the digital media you pay for, and how you can take action, too. 

Got other questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes. 

Christian Romero

Organizing for Digital Rights in the Pacific Northwest

4 weeks 1 day ago

Recently I traveled to Portland, Oregon to speak at the PDX People’s Digital Safety Fair, meet up with five groups in the Electronic Frontier Alliance, and attend BSides PDX 2024. Portland’s first ever Digital Safety Fair was a success, and five of our six EFA organizations in the area participated: Personal Telco Project, Encode Justice Oregon, PDX Privacy, TA3M Portland, and Community Broadband PDX. I was able to reaffirm our support for these organizations and table with most of them as they met local people interested in digital rights. We distributed EFF toolkits as a resource, and we made sure EFA brochures and stickers had a presence on all their tables. A few of these organizations were also present at BSides PDX, and it was great seeing them being leaders in the local infosec and cybersecurity community.

PDX Privacy’s mission is to bring about transparency and control in the acquisition and use of surveillance systems in the Portland Metro area, whether personal data is captured by the government or by commercial entities. Transparency is essential to ensure privacy protections, community control, fairness, and respect for civil rights.

TA3M Portland is an informal meetup designed to connect software creators and activists who are interested in censorship, surveillance, and open technology.

The Oregon Chapter of Encode Justice, the world’s first and largest youth movement for human-centered artificial intelligence, works to mobilize policymakers and the public for guardrails to ensure AI fulfills its transformative potential. Its mission is to ensure we encode justice and safety into the technologies we build.

(l to r) Pictured here with PDX Privacy’s Seth, Boaz, and new president, Nate. Pictured with Chris Bushick, legendary Portland privacy advocate of TA3M PDX. Pictured with the leaders of Encode Justice Oregon.

There's growing momentum in the Seattle and Portland areas

Community Broadband PDX’s focus is on expanding the existing dark fiber broadband network in Portland to all residents, creating an open-source model where the city owns the fiber and it’s controlled by local nonprofits and cooperatives, not large ISPs.

Personal Telco is dedicated to the idea that users have a central role in how their communications networks are operated. This is done by building our own networks that we share with our communities, and by helping to educate others in how they can, too.

At the People’s Digital Safety Fair I spoke in the main room on the campaign to bring high-speed broadband to Portland, which is led by Community Broadband PDX and the Personal TelCo Project. I made a direct call to action for those in attendance to join the campaign. My talk culminated with, “What kind of ACTivist would I be if I didn’t implore you to take an ACTion? Everybody pull out your phones.” Then I guided the room to the website for Community Broadband PDX and to the ‘Join Us’ page where people in that moment signed up to join the campaign, spread the word with their neighbors, and get organized by the Community Broadband PDX team. You can reach out to them at cbbpdx.org and personaltelco.net. You can get in touch with all the groups mentioned in this blog with their hyperlinks above, or use our EFA allies directory to see who’s organizing in your area. 

(l to r) BSidesPDX 2024 swag and stickers. A photo of me speaking at the People’s Digital Privacy Fair on broadband access in PDX. Pictured with Jennifer Redman, President of Community Broadband PDX and former broadband administrator for the city of Portland, OR. A picture of the Personal TelCo table with EFF toolkits printed and EFA brochures on hand. Pictured with Ted, Russell Senior, and Drew of Personal Telco Project. Lastly, it's always great to see a member and active supporter of EFF interacting with one of our EFA groups.

It’s very exciting to see what members of the EFA are doing in Portland! I also went up to Seattle and met with a few organizations, including one now in talks to join the EFA. With new EFA friends in Seattle, and existing EFA relationships fortified, I'm excited to help grow our presence and support in the Pacific Northwest, and have new allies with experience in legislative engagement. It’s great to see groups in the Pacific Northwest engaged and expanding their advocacy efforts, and even greater to stand by them as they do!

Electronic Frontier Alliance members get support from a community of like-minded grassroots organizers from across the US. If your group defends our digital rights, consider joining today. https://efa.eff.org

Christopher Vines

Speaking Freely: Anriette Esterhuysen

4 weeks 1 day ago

*This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil. This interview has been edited for length and clarity. 

Anriette Esterhuysen is a human rights defender and computer networking trailblazer from South Africa. She has pioneered the use of Information and Communications Technologies (ICTs) to promote social justice in South Africa and throughout the world, focusing on affordable Internet access. She was the executive director of the Association for Progressive Communications from 2000 to 2017. In November 2019 Anriette was appointed by the Secretary-General of the United Nations to chair the Internet Governance Forum’s Multistakeholder Advisory Group.

Greene: Can you go ahead and introduce yourself for us?

Esterhuysen: My name is Anriette Esterhuysen, I am from South Africa and I’m currently sitting here with David in São Paulo, Brazil. My closest association remains with the Association for Progressive Communications, where I was executive director from 2000 to 2017. I continue to work for APC as a consultant in the capacity of Senior Advisor on Internet Governance and convenor of the annual African School on Internet Governance (AfriSIG).

Greene: Can you tell us more about the African School on Internet Governance (AfriSIG)?

AfriSIG is fabulous. It differs from internet governance capacity building provided by the technical community in that it aims to build critical thinking. It also does not gloss over the complex power dynamics that are inherent to multistakeholder internet governance. It tries to give participants a hands-on experience of how different interest groups and sectors approach internet governance issues.

AfriSIG started as a result of Titi Akinsanmi,  a young Nigerian doing postgraduate studies in South Africa, approaching APC and saying, “Look, you’ve got to do something. There’s a European School of Internet Governance, there’s one in Latin America, and where is there more need for capacity-building than in Africa?” She convinced me and my colleague Emilar Vushe Gandhi, APC Africa Policy Coordinator at the time, to organize an African internet governance school in 2013 and since then it has taken place every year. It has evolved over time into a partnership between APC and the African Union Commission and Research ICT Africa.

It is a residential leadership development and learning event that takes place over 5 days. We bring together people who are already working in internet or communications policy in some capacity. We create space for conversation between people from government, civil society, parliaments, regulators, the media, business and the technical community on what in Africa are often referred to as “sensitive topics”. This can be anything from LGBTQ rights to online freedom of expression, corruption, authoritarianism, and accountable governance. We try to create a safe space for deep diving the reasons for the dividing lines between, for example, government and civil society in Africa. It’s very delicate. I love doing it because I feel that it transforms people’s thinking and the way they see one another and one another’s roles. At the end of the process, it is common for a government official to say they now understand better why civil society demands media freedom, and how transparency can be useful in protecting the interests of public servants. And civil society activists have a better understanding of the constraints that state officials face in their day-to-day work. It can be quite a revelation for individuals from civil society to be confronted with the fact that in many respects they have greater freedom to act and speak than civil servants do.

Greene: That’s great. Okay now tell me, what does free speech mean to you?

I think of it as freedom of expression. It’s fundamental. I grew up under Apartheid in South Africa and was active in the struggle for democracy. There is something deeply wrong with being surrounded by injustice, cruelty and brutality and not being allowed to speak about it. Even more so when one's own privilege comes at the expense of the oppressed, as was the case for white South Africans like myself. For me, freedom of expression is the most profound part of being human. You cannot change anything, deconstruct it, or learn about it at a human level without the ability to speak freely about what it is that you see, or want to understand. The absence of freedom of expression entrenches misinformation, a lack of understanding of what is happening around you. It facilitates willful stupidity and selective knowledge. That’s why it’s so smart of repressive regimes to stifle freedom of expression. By stifling free speech you disempower the victims of injustice from voicing their reality, on the one hand, and, on the other, you entrench the unwillingness of those who are complicit with the injustice to confront that they’re part of it.

It is impossible to shift a state of repression and injustice without speaking out about it. That is why people who struggle for freedom and justice speak about it, even if doing so gets them imprisoned, assassinated or executed. Change starts through people, the media, communities, families, social movements, and unions, speaking about what needs to change. 

Greene: Having grown up in Apartheid, is there a single personal experience or a group of personal experiences that really shaped your views on freedom of expression?

I think I was fortunate in the sense that I grew up with a mother who—based on her Christian beliefs—came to see Apartheid as being wrong. She was working as a social worker for the main state church—the Dutch Reformed Church (DRC)—at the time of the Cottesloe Consultation convened in Johannesburg by the World Council of Churches (WCC) shortly after the Sharpeville Massacre. An outcome statement from this consultation, and later deliberations by the WCC in Geneva, condemned the DRC for its racism. In response the DRC decided to leave the WCC. At a church meeting my mother attended, she listened to the debate and to someone in the church hierarchy who spoke against this decision and challenged the church for its racist stance. His words made sense to her. She spoke to him after the meeting and soon joined the organization he had started to oppose Apartheid, the Christian Institute. His name was Beyers Naudé and he became an icon of the anti-Apartheid struggle and an enemy of the apartheid state. Apparently, my first protest march was in a pushchair at a rally in 1961 to oppose the right-wing National Party government’s decision for South Africa to leave the Commonwealth.

There’s no single moment that shaped my view of freedom of expression. The thing about living in the context of that kind of racial segregation and repression is that you see it every day. It’s everywhere around you, but like Nazi Germany, people—white South Africans—chose not to see it, or if they did, to find ways of rationalizing it.

Censorship was both a consequence of and a building block of the Apartheid system. There was no real freedom of expression. But because we had courageous journalists, and a broad-based political movement—above ground and underground—that opposed the regime, there were spaces where one could speak, listen, and learn. The Congress of Democrats, established in the 1950s after the Communist Party was banned, was a social justice movement in which people of different faiths and political ideologies (Jewish, Christian and Muslim South Africans alongside agnostics and communists) fought for justice together. Later in the 1980s, when I was a student, this broad front approach was revived through the United Democratic Front. Journalists did amazing things. When censorship was at its height during the State of Emergency in the 1980s, newspapers would go to print with columns of blacked-out text—their way of telling the world that they were being censored.

[Image: a newspaper page with columns of blacked-out text, an example of media censorship]

I used to type up copy filed over the phone or on cassettes by reporters for the Weekly Mail when I was a student. We had to be fast because everything had to be checked by the paper’s lawyers before going to print. Lack of freedom of expression was legislated. The courage of editors and individual journalists to defy this, and, if they could not, to make the censorship obvious, made a huge impact on me.

Greene: Is there a time when you, looking back, would consider that you were personally censored? 

I was very much personally censored at school. I went to an Afrikaans secondary school. And I kind of have a memory of when, after returning from a vacation, my math teacher—who I had no personal relationship with—walked past me in class and asked me how my holiday on Robben Island was. I thought, why is he asking me that? A few days later I heard from a teacher I was friendly with that there had been a special staff meeting about me. They felt I was very politically outspoken in class and the school hierarchy needed to take action. No actual action was taken... but I felt watched, and through that, censored, even if not silenced.

I felt that because, being white, it was easier for me to speak out than for black South Africans, it would be wrong not to do so. As a teenager, I had already made that choice. It was painful from a social point of view because I was very isolated; I didn’t have many friends, and I saw the world so differently from my peers. In 1976 when the Soweto riots broke out I remember someone in my class saying, “This is exactly what we’ve been waiting for because now we can just kill them all.” This is probably also why I feel a deep connection with Israel/Palestine. There are many dimensions to the Apartheid analogy. The one that stands out for me is how, as was the case in South Africa too, those with power—Jewish Israelis—dehumanize and villainize the oppressed: Palestinians.

Greene: At some point did you decide that you want human rights more broadly and freedom of expression to be a part of your career?

I don’t think it was a conscious decision. I think it was what I was living for. It was the raison d’être of my life for a long time. After high school, I had secured places at two universities: one for a science degree and the other for a degree in journalism. But I ended up going to a different university, making the choice based on the strength of its student movement. The struggle against Apartheid was expressed and conceptualized as a struggle for human rights. The Constitution of democratic South Africa was crafted by human rights lawyers and in many respects it is a localized interpretation of the Universal Declaration.

Later, in the late 1980s, when I started working on access to information through the use of Information and Communication Technologies (ICTs), it felt like an extension of the political work I had done as a student and in my early working life. APC, which I joined as a member—not staff—in the 1990s, was made up of people from other parts of the world who had been fighting their own struggles for freedom—Latin America, Asia, and Central/Eastern Europe. All with very similar hopes about how the use of these technologies could enable freedom and solidarity.

Greene: So fast forward to now, currently do you think the platforms promote freedom of expression for people or restrict freedom of expression?

Not a simple question. Still, I think the net effect is more freedom of expression. The extent of online freedom of expression is uneven and it’s distorted by the platforms in some contexts. Just look at the biased pro-Israel way in which several platforms moderate content. Enabling hate speech in contexts of conflict can definitely have a silencing effect. By not restricting hate in a consistent manner, they end up restricting freedom of expression.  But I think it’s disingenuous to say that overall the internet does not increase freedom of expression. And social media platforms, despite their problematic business models, do contribute. They could of course do it so much better, fairly and consistently, and for not doing that they need to be held accountable. 

Greene: We can talk about some of the problems and difficulties. Let’s start with hate speech. You said it’s a problem we have to tackle. How do we tackle it? 

You’re talking to a very cynical old person here. I think that social media amplifies hate speech. But I don’t think they create the impulse to hate. Social media business models are extractive and exploitative. But we can’t fix our societies by fixing social media. I think that we have to deal with hate in the offline world. Channeling energy and resources into trying to grow tolerance and respect for human rights in the online space is not enough. It’s just dealing with the symptoms of intolerance and populism. We need to work far harder to hold people, particularly those with power, accountable for encouraging hate (and disinformation). Why is it easy to get away with online hate in India? Because Modi likes hate. It’s convenient for him, it keeps him in political power. Trump is another example of a leader that thrives on hate. 

What’s so problematic about social media platforms is the monetization of this. That is absolutely wrong and should be stopped—I can say all kinds of things about it. We need to have a multi-pronged approach. We need market regulation, perhaps some form of content regulation, and new ways of regulating advertising online. We need access to data on what happens inside these platforms. Intervention is needed, but I do not believe that content control is the right way to do it.  It is the business model that is at the root of the problem. That’s why I get so frustrated with this huge global effort by governments (and others)  to ensure information integrity through content regulation. I would rather they spend the money on strengthening independent media and journalism.

Greene: We should note we are currently at an information integrity conference today. In terms of hate speech, are there hazards to having hate speech laws? 

South Africa has hate speech laws which I believe are necessary. Racial hate speech continues to be a problem in South Africa. So is xenophobic hate speech. We have an election coming on May 29 [2024] and I was listening to talk radio on election issues and hearing how political parties use xenophobic tropes in their campaigns was terrifying. “South Africa has to be for South Africans.” “Nigerians run organized crime.”  “All drugs come from Mozambique,” and so on. Dangerous speech needs to be called out.  Norms are important. But I think that establishing legalized content regulation is risky. In contexts without robust protection for freedom of expression, such regulation can easily be abused by states to stifle political speech.

Greene: Societal or legal norms?

Both.  Legal norms are necessary because social norms can be so inconsistent, volatile. But social norms shape people’s everyday experience and we have to strive to make them human rights aware. It is important to prevent the abuse of legal norms—and states are, sadly, pretty good at doing just that. In the case of South Africa hate speech regulation works relatively well because there are strong protections for freedom of expression. There are soft and hard law mechanisms. The South African Human Rights Commission developed a social media charter to counter harmful speech online as a kind of self-regulatory tool. All of this works—not perfectly of course—because we have a constitution that is grounded in human rights. Where we need to be more consistent is in holding politicians accountable for speech that incites hate. 

Greene: So do we want checks and balances built into the regulatory scheme or are you just wanting it existing within a government scheme that has checks and balances built in? 

I don’t think you need new global rule sets. I think the existing international human rights framework provides what we need and just needs to be strengthened and its application adapted to emerging tech. One of the reasons why I don’t think we should be obsessive about restricting hate speech online is because it is a canary in a coal mine. In societies where there’s a communal or religious conflict or racial hate,  removing its manifestation online could be a missed opportunity to prevent explosions of violence offline.  That is not to say that there should not be recourse and remedy for victims of hate speech online. Or that those who incite violence should not be held accountable. But I believe we need to keep the bar high in how we define hate speech—basically as speech that incites violence.  

South Africa is an interesting case because we have very progressive laws when it comes to same-sex marriage, same-sex adoption, relationships, insurance, spousal recognition, medical insurance and so on, but there’s still societal prejudice, particularly in poor communities.  That is why we need a strong rights-oriented legal framework.

Greene: So that would be another area where free speech can be restricted and not just from a legal sense but you think from a higher level principles sense. 

Right. Perhaps what I am trying to say is that there is speech that incites violence and it should be restricted. And then there is speech that is hateful and discriminatory, and this should be countered, called out, and challenged, but not censored.  When you’re talking about the restriction—or not even the restriction but the recognition and calling out of—harmful speech it’s important not just to do that online. In South Africa stopping xenophobic speech online or on public media platforms would be relatively simple. But it’s not going to stop xenophobia in the streets.  To do that we need other interventions. Education, public awareness campaigns, community building, and change in the underlying conditions in which hate thrives which in our case is primarily poverty and unemployment, lack of housing and security.

Greene: This morning someone who spoke at this event about misinformation said, “The vast majority of misinformation is online.” And certainly in the US, researchers say that’s not true; most of it is on cable news. But it struck me that someone who is considered an expert should know better. We have information ecosystems, and online does not exist separately. 

It’s not separate. Agree. There’s such a strong tendency to look at online spaces as an alternative universe. Even in countries with low internet penetration, there’s a tendency to focus on the online components of these ecosystems. Another example would be child online protection. Most child abuse takes place in the physical world, and most child abusers are close family members, friends or teachers of their victims—but there is a global obsession with protecting children online.  It is a shortsighted and ‘cheap’ approach and it won’t work. Not for dealing with misinformation or for protecting children from abuse.

Greene: Okay, our last question we ask all of our guests. Who is your free speech hero? 

Desmond Tutu. I have many free speech heroes but Bishop Tutu is a standout because he could be so charming about speaking his truths. He was fearless in challenging the Apartheid regime. But he would also challenge his fellow Christians.  One of his best lines was, “If LGBT people are not welcome in heaven, I’d rather go to the other place.”  And then the person I care about and fear for every day is Egyptian blogger Alaa Abd el-Fattah. I remember walking at night through the streets of Cairo with him in 2012. People kept coming up to him, talking to him, and being so obviously proud to be able to do so. His activism is fearless. But it is also personal, grounded in love for his city, his country, his family, and the people who live in it. For Alaa freedom of speech, and freedom in general, was not an abstract or a political goal. It was about freedom to love, to create art, music, literature and ideas in a shared way that brings people joy and togetherness.

Greene: Well now I have a follow-up question. You said you think free speech is undervalued these days. In what ways and how do we see that? 

We see it manifested in the absence of tolerance, in the increase in people claiming that their freedoms are being violated by the expression of those they disagree with, or who criticize them. It’s as if we’re trying to establish these controlled environments where we don’t have to listen to things that we think are wrong, or that we disagree with. As you said earlier, information ecosystems have offline and online components. Getting to the “truth” requires a mix of different views, disagreement, fact-checking, and holding people who deliberately spread falsehoods accountable for doing so. We need people to have the right to free speech, and to counter-speech. We need research and evidence gathering, investigative journalism, and, most of all, critical thinking. I’m not saying there shouldn't be restrictions on speech in certain contexts, but do it because the speech is illegal or actively inciteful. Don’t do it because you think it will achieve so-called information integrity. And especially, don’t do it in ways that undermine the right to freedom of expression.

David Greene

Oppose The Patent-Troll-Friendly PREVAIL Act

1 month ago

Update 11/21/2024: The Senate Judiciary Committee voted 11-10 in favor of PREVAIL, and several senators expressed concerns about the bill. Thanks to EFF supporters who spoke out! We will continue to oppose this misguided bill. 

Good news: the Senate Judiciary Committee has dropped one of the two terrible patent bills it was considering, the patent-troll-enabling Patent Eligibility Restoration Act (PERA).

Bad news: the committee is still pushing the PREVAIL Act, a bill that would hamstring the U.S.’s most effective system for invalidating bad patents. PREVAIL is a windfall for patent trolls, and Congress should reject it.

Take Action

Tell Congress: No New Bills For Patent Trolls

One of the most effective tools to fight bad patents in the U.S. is a little-known but important system called inter partes review, or IPR. Created by Congress in 2011, the IPR process addresses a major problem: too many invalid patents slip through the cracks at the U.S. Patent and Trademark Office. While not an easy or simple process, IPR is far less expensive and time-consuming than the alternative—fighting invalid patents in federal district court.

That’s why small businesses and individuals rely on IPR for protection. More than 85% of tech-related patent lawsuits are filed by non-practicing entities, also known as “patent trolls”—companies that don’t have products or services of their own, but instead make dozens, or even hundreds, of patent claims against others, seeking settlement payouts.

So it’s no surprise that patent trolls are frequent targets of IPR challenges, often brought by tech companies. Eliminating these worst-of-the-worst patents is a huge benefit to small companies and individuals that might otherwise be unable to afford an IPR challenge themselves. 

For instance, Apple used an IPR-like process to invalidate a patent owned by the troll Ameranth, which claimed rights over using mobile devices to order food. Ameranth had sued over 100 restaurants, hotels, and fast-food chains. Once the patent was invalidated, after an appeal to the Federal Circuit, Ameranth’s barrage of baseless lawsuits came to an end. 

PREVAIL Would Ban EFF and Others From Filing Patent Challenges

The IPR system isn’t just for big tech—it has also empowered nonprofits like EFF to fight patents that threaten the public interest. 

In 2013, a patent troll called Personal Audio LLC claimed that it had patented podcasting. The patent, titled “System for disseminating media content representing episodes in a serialized sequence,” became the basis for the company’s demand for licensing fees from podcasters nationwide. Personal Audio filed lawsuits against three podcasters and threatened countless others.

EFF took on the challenge, raising over $80,000 through crowdfunding to file an IPR petition. The Patent Trial and Appeal Board agreed: the so-called “podcasting patent” should never have been granted. EFF proved that Personal Audio’s claims were invalid, and our victory was upheld all the way to the Supreme Court.

The PREVAIL Act would block such efforts. It limits IPR petitions to parties directly targeted by a patent owner, shutting out groups like EFF that protect the broader public. If PREVAIL becomes law, millions of people indirectly harmed by bad patents—like podcasters threatened by Personal Audio—will lose the ability to fight back.

PREVAIL Tilts the Field in Favor of Patent Trolls

The PREVAIL Act will make life easier for patent trolls at every step of the process. It is shocking that the Senate Judiciary Committee is using the few remaining hours it will be in session this year to advance a bill that undermines the rights of innovators and the public.  

Patent troll lawsuits target individuals and small businesses for simply using everyday technology. Everyone who can meet the legal requirements of an IPR filing should have the right to challenge invalid patents. Use our action center today and tell Congress: that’s not a right we’re willing to give up.

Take Action

Tell Congress: Reject the PREVAIL Act


Joe Mullin

The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable

1 month ago

The Biden White House has released a memorandum on “Advancing the United States’ Leadership in Artificial Intelligence” which includes, among other things, a directive for the national security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, leveraging already functioning private AI models for national security objectives.

Private AI systems like those operated by tech companies are incredibly opaque. People are uncomfortable—and rightly so—with companies that use AI to decide all sorts of things about their lives, from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance, and housing. Right now, as you read this, for-profit companies are leasing their automated decision-making services to all manner of companies and employers, and most of those affected will never know that a computer made a choice about them and will never be able to appeal that decision or understand how it was made.

But it can get worse: combining private AI with national security secrecy threatens to make an already secretive system even more unaccountable and opaque. The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. EFF has had to fight in court a number of times in an attempt to make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining these two will create a Frankenstein’s monster of secrecy, unaccountability, and decision-making power.

While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing tremendous amounts of data, understanding what information it retains and how it arrives at conclusions will become central to how the national security state thinks about issues. This means the state will likely argue not only that the AI’s training data may need to be classified, but also that companies must, under penalty of law, keep the governing algorithms secret.

As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the US national security state attempts to leverage powerful commercial AI to give it an edge, a number of questions remain unanswered about how much that ever-tightening relationship will impact much-needed transparency and accountability for private AI and for-profit automated decision-making systems.

Matthew Guariglia

Now's The Time to Start (or Renew) a Pledge for EFF Through the CFC

1 month ago

The Combined Federal Campaign (CFC) pledge period is underway and runs through January 15, 2025! If you're a U.S. federal employee or retiree, be sure to show your support for EFF by using our CFC ID 10437.

Not sure how to make a pledge? No problem, it’s easy! First, head over to GiveCFC.org and click “DONATE.” Then you can search for EFF using our CFC ID 10437 and make a pledge via payroll deduction, credit/debit card, or e-check. If you have a renewing pledge, you can increase your support there as well!

The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year, members of the CFC community raised nearly $34,000 to support EFF’s work advocating for privacy and free expression online. That support has helped us:

  • Push the Fifth Circuit Court of Appeals to find that geofence warrants are “categorically” unconstitutional.
  • Launch Digital Rights Bytes, a resource dedicated to teaching people how to take control of the technology they use every day.
  • Call out unconstitutional age-verification and censorship laws across the U.S.
  • Continue to develop and maintain our privacy-enhancing tools, like Certbot and Privacy Badger.

Federal employees and retirees greatly impact our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!

Christian Romero

Speaking Freely: Marjorie Heins

1 month ago

*This interview has been edited for length and clarity.

Marjorie Heins is a writer, former civil rights/civil liberties attorney, and past director of the Free Expression Policy Project (FEPP) and the American Civil Liberties Union's Arts Censorship Project. She is the author of "Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge," which won the Hugh M. Hefner First Amendment Award in Book Publishing in 2013, and "Not in Front of the Children: Indecency, Censorship, and the Innocence of Youth," which won the American Library Association's Eli Oboler Award for Best Published Work in the Field of Intellectual Freedom in 2002. 

Her most recent book is "Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project." She has written three other books and scores of popular and scholarly articles on free speech, censorship, constitutional law, copyright, and the arts. She has taught at New York University, the University of California, San Diego, Boston College Law School, and the American University of Paris. Since 2015, she has been a volunteer tour guide at the Metropolitan Museum of Art in New York City.

Greene: Can you introduce yourself and the work you’ve done on free speech and how you got there?

Heins: I’m Marjorie Heins, I’m a retired lawyer. I spent most of my career at the ACLU. I started in Boston, where we had a very small office, and we sort of did everything—some sex discrimination cases, a lot of police misconduct cases, occasionally First Amendment. Then, after doing some teaching and a stint at the Massachusetts Attorney General’s office, I found myself in the national office of the ACLU in New York, starting a project on art censorship. This was in response to the political brouhaha over the National Endowment for the Arts starting around 1989/1990.

The culture wars, including attacks on some of the grants made by the NEA, became a big hot-button issue. The ACLU was able to raise a little foundation money to hire a lawyer to work on some of these cases. And one case that was already filed when I got there was National Endowment for the Arts v. Finley. It was basically a challenge by four theater performance artists whose grants had been recommended by the peer panel but then ultimately vetoed by the director after a lot of political pressure because their work was very much “on the edge.” So I joined the legal team in that case, the Finley case, and it had a long and complicated history. Then, by the mid-1990s we were faced with the internet. And there were all these scares over pornography on the internet poisoning the minds of our children. So the ACLU got very involved in challenging censorship legislation that had been passed by Congress, and I worked on those cases.

I left the ACLU in 1998 to write a book about what I had learned about censorship. I was curious to find out more about the history primarily of obscenity legislation—the censorship of sexual communications. So it’s a scholarly book called “Not in Front of the Children.” Among the things I discovered is that the origins of censorship of sexual content, sexual communications, come out of this notion that we need to protect children and other “vulnerable beings.” And initially that included women and uneducated people, but eventually it really boiled down to children—we need censorship basically of everybody in order to protect children. So that’s what “Not in Front of the Children” was all about. 

And then I took my foundation contacts—because at the ACLU if you have a project you have to raise money—and started a little project, a little think tank which became affiliated with the National Coalition Against Censorship called the Free Expression Policy Project. And at that point we weren’t really doing litigation anymore, we were doing a lot of friend of the court briefs, a lot of policy reports and advocacy articles about some of the values and competing interests in the whole area of free expression. And one premise of this project, from the start, was that we are not absolutists. So we didn’t accept the notion that because the First Amendment says “Congress shall make no law abridging the freedom of speech,” then there’s some kind of absolute protection for something called free speech and there can’t be any exceptions. And, of course, there are many exceptions. 

So the basic premise of the Free Expression Policy Project was that some exceptions to the First Amendment, like obscenity laws, are not really justified because they are driven by different ideas about morality and a notion of moral or emotional harm rather than some tangible harm that you can identify like, for example, in the area of libel and slander or invasion of privacy or harassment. Yes, there are exceptions. The default, the presumption, is free speech, but there could be many reasons why free speech is curtailed in certain circumstances. 

The Free Expression Policy Project continued for about seven years. It moved to the Brennan Center for Justice at NYU Law School for a while, and, finally, I ran out of ideas and funding. I kept up the website for a little while longer, then ultimately ended the website. Then I thought, “okay, there’s a lot of good information on this website and it’s all going to disappear, so I’m going to put it into a book.” Oh, I left out the other book I worked on in the early 2000s – about academic freedom, the history of academic freedom, called “Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge.” This book goes back in history even before the 1940s and 1950s Red Scare and the effect that it had on teachers and universities. And then this last book is called “Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project,” which is basically an anthology of the best writings from the Free Expression Policy Project. 

And that’s me. That’s what I did.

Greene: So we have a ton to talk about because a lot of the things you’ve written about are either back in the news and regulatory cycle or never left it. So I want to start with your book “Not in Front of the Children.” I have at least one copy and I’ve been referring to it a lot and suggesting it because we’ve just seen a ton of efforts to pass new child protection laws to protect kids from online harms. And so I’m curious: first there was a raft of efforts around TikTok being bad for kids, now we’re seeing a lot of efforts aimed at shielding kids from harmful material online. Do you think there’s a throughline from concerns going back to mid-19th-century England? Is it still the same debate or is there something different about these online harms? 

Both are true, I think. It’s the same and it’s different. What’s the same is that using children as an argument for basically trying to suppress information, ideas, or expression that somebody disapproves of goes back to the beginning of censorship laws around sexuality. And the subject matters have changed, the targets have changed. I’m not too aware of new proposals for internet censorship of kids, but I’m certainly aware of what states—of course, Florida being the most prominent example—have done in terms of school books, school library books, public library books, and education, not only K-12 but also higher education, in terms of limiting the subject matters that can be discussed. And the primary target seems to be anything to do with gay or lesbian sexuality and anything having to do with a frank acknowledgement of American slavery or Jim Crow racism. The argument in Florida, and this is explicit in the law, is that it would make white kids feel bad, so let’s not talk about it. So in that sense the two targets that I see now—we’ve got to protect the kids against information about gay and lesbian people and information about the true racial history of this country—are a little different from the 19th century and even much of the 20th century. 

Greene: One of the things I see is that the harms motivating the book bans and school restrictions are the same harms that are motivating at least some of the legislators who are trying to pass these laws. And notably a lot of the laws only address online harmful material without being specific about subject matter. We’re still seeing some that are specifically about sexual material, but a lot of them, including the Kids Online Safety Act, really just focus on online harms more broadly. 

I haven’t followed that one, but it sounds like it might have a vagueness problem!

Greene: One of the things I get concerned about with the focus on design is that, like, a state Attorney General is not going to be upset if the design has kids reading a lot of Bible verses or tomes about being respectful to your parents. But they will get upset and prosecute people if the design feature is recommending gender-affirming care to kids, or whatever. I just don’t know if there’s a way of protecting against that in a law. 

Well, as we all know, when we’re dealing with commercial speech there’s a lot more leeway in terms of regulation, and especially if ads are directed at kids. So I don’t have a problem with government legislation in the area of restricting the kinds of advertising that can be directed at kids. But if you get out of the area of commercial speech and to something that’s kind of medical, could you have constitutional legislation that prohibited websites from directing kids to medically dangerous procedures? You’re sort of getting close to the borderline. If it’s just information then I think the legislation is probably going to be unconstitutional even if it’s related to kids. 

Greene: Let’s shift to academic freedom. Which is another fraught issue. What do you think of the current debates now over both restrictions on faculty and universities restricting student speech? 

Academic freedom is under the gun from both sides of the political spectrum. For example, Diversity, Equity, and Inclusion (DEI) initiatives, although they seem well-intentioned, have led to some pretty troubling outcomes. So when those college presidents were being interrogated by members of Congress (in December 2023), they were in a difficult position, among other reasons, because at least at Harvard and Penn it was pretty clear there were instances of really appalling applications of this idea of Diversity, Equity, and Inclusion – both to require a certain kind of ideological approach and to censor or punish people who didn’t go along with the party line, so to speak. 

The other example I’m thinking of, and I don’t know if Harvard and Penn do this – I know that the University of California system does it, or at least it used to – is that everybody who applies for a faculty position has to sign a diversity statement, like a loyalty oath, saying that these are the principles they agree with and promise to promote. 

And you know you have examples, I mean I may sound very retrograde on this one, but I will not use the pronoun “they” for a singular person. And I know that would mean I couldn’t get a faculty job! And I’m not sure if my volunteer gig at the Met museum is going to be in trouble because they, very much like universities, have given us instructions, pages and pages of instructions, on proper terminology – what terminology is favored or disfavored or should never be used, and “they” is in there. You can have circumlocutions so you can identify a single individual without using he or she if that individual – I mean you can’t even know what the individual’s preference is. So that’s another example of academic freedom threats from I guess you could call the left or the DEI establishment. 

The right in American politics has a lot of material, a lot of ammunition to use when they criticize universities for being too politically correct and too “woke.” On the other hand, you have the anti-woke law in Florida which is really, as I said before, directed against education about the horrible racial history of this country. And some of those laws are just – whatever you may think about the ability of state government and state education departments to dictate curriculum and to dictate what viewpoints are going to be promoted in the curriculum – the Florida anti-woke law and “Don’t Say Gay” law really go beyond, I think, any kind of discretion that the courts have said state and local governments have to determine curriculum. 

Greene: Are you surprised at all that we’re seeing that book bans are as big of a thing now as they were twenty years ago? 

Well, nothing surprises me. But yes, I would not have predicted the current incarnations of what you can remember from the old days: groups like the American Family Association, the Christian Coalition, and the Eagle Forum, the “culture warriors” who were making a lot of headlines with their arguments forty years ago against even just having art that was done by gay people. We’ve come a long way from that, but now we have Moms for Liberty and present-day incarnations of the same groups. The homophobic agenda is a little more nuanced, it’s a little different from what we were seeing in the days of Jesse Helms in Congress. But the attacks on drag performances, this whole argument that children are going to be groomed to become drag queens or become gay—that’s a little bit of a different twist, but it’s basically the same kind of homophobia. So it’s not surprising that it’s being churned up again if this is something that politicians think they can get behind in order to get elected. Or, let me put it another way, if the Moms for Liberty type groups make enough noise and seem to have enough political potency, then politicians are going to cater to them. 

And so the answer has to be groups on the other side that are making the free expression argument or the intellectual freedom argument or the argument that teachers and professors and librarians are the ones who should decide what books are appropriate. Those groups have to be as vocal and as powerful in order to persuade politicians that they don’t have to start passing censorship legislation in order to get votes.

Greene: Going back to the college presidents being grilled on the Hill, you wrote that, in response to the genocide question, which I think is where they were most sharply criticized, there was a better answer they could have given. Could you talk about that? 

I think in that context, both for political reasons and for reasons of policy and free speech doctrine, the answer had to be that if students on campus are calling for genocide of Jews or any other ethnic or religious group, that should not be permitted on campus, and it amounts to racial harassment. Of course, I suppose you could imagine scenarios where two antisemitic kids in the privacy of their dorm room said this and nobody else heard it—okay, maybe it doesn’t amount to racial harassment. But private colleges are not bound by the First Amendment. They all have codes of civility. Public colleges are bound by the First Amendment, but not the same standards as the public square. So I took the position that in that circumstance the presidents had to answer, “Yes, that would violate our policies and subject a student to discipline.” But that’s not the same as calling for the intifada, or even saying that the establishment of the state of Israel 75 years ago was a mistake. So I got a little pushback on that little blog post that I wrote. And somebody said, “I’m surprised a former ACLU lawyer is saying that calling for genocide could be punished on a college campus.” But you know, the ACLU has many different political opinions within both the staff and Board. There were often debates on different kinds of free speech issues and where certain lines are drawn. And certainly on issues of harassment and when hate speech becomes harassment—under what circumstances it becomes harassment. So, yes, I think that’s what they should have said. A lot of legal scholars, including David Cole of the ACLU, said they gave exactly the right answer, the legalistic answer, that it depends on the context. In that political situation that was not the right answer. 

Greene: It was awkward. They did answer as if they were having an academic discussion and not as if they were talking to members of Congress. 

Well they also answered as if they were programmed. I mean Claudine Gay repeated the exact same words that probably somebody had told her to say at least twice if not more. And that did not look very good. It didn’t look like she was even thinking for herself. 

Greene: I do think they were anticipating the follow-up question of, “Well isn’t saying ‘From the River to the Sea’ a call for genocide and how come you haven’t punished students for that?” But as you said, that would then lead into a discussion of how we determine what is or is not a call for genocide. 

Well they didn’t need a follow-up question because to Elise Stefanik, “Intifada” or “from the river to the sea” was equivalent to a call for genocide, period, end of discussion. Let me say one more thing about these college hearings. What these presidents needed to say is that it’s very scary when politicians start interrogating college faculty or college presidents about curriculum, governance, and certainly faculty hires. One of the things that was going on there was they didn’t think there were enough conservatives on college faculties, and that was their definition of diversity. You have to push back on that, and say it is a real threat to academic freedom and all of the values we talk about that are important in a university education when politicians start getting their hands on this and using funding as a threat and so forth. They needed to say that. 

Greene: Let’s pull back and talk about free speech principles more broadly. After many years of work in this area, why do you think free expression is important? 

What is the value of free expression more globally? [laughs] A lot of people have opined on that. 

Greene: Why is it important to you personally? 

Well I define it pretty broadly. So it doesn’t just include political debate and discussion and having all points of view represented in the public square, which used to be the narrower definition of what the First Amendment meant, certainly according to the Supreme Court. But the Court evolved. And so it’s now recognized, as it should be, that free expression includes art. The movies—it doesn’t even have to be verbal—it can be dance, it can be abstract painting. All of the arts, which feed the soul, are part of free expression. And that’s very important to me because I think it enriches us. It enriches our intellects, it enriches our spiritual lives, our emotional lives. And I think it goes without saying that political expression is crucial to having a democracy, however flawed it may be. 

Greene: You mentioned earlier that you don’t consider yourself to be a free speech absolutist. Do you consider yourself to be a maximalist or an enthusiast? What do you see as being sort of legitimate restrictions on any individual’s freedom of expression?

Well, we mentioned this at the beginning. There are a lot of exceptions to the First Amendment that are legitimate. Certainly, when I started at the ACLU I thought that defamation laws and libel and slander laws violated the First Amendment. Well, I’ve changed my opinion. Because there’s real harm that gets caused by libel and slander. As we know, the Supreme Court has put some First Amendment restrictions around those torts, but they’re important to have. Threats are a well-recognized exception to the freedom of speech, and the kind of harm caused by threats, even if they’re not followed through on, is pretty obvious. Incitement becomes a little trickier because where do you draw the lines? But at some point an incitement to violent action I think can be restricted for obvious reasons of public safety. And then we have restrictions on false advertising but, of course, if we’re not in the commercial context, the Supreme Court has told us that lies are protected by the First Amendment. That’s probably wise just in terms of not trying to get the government and the judicial process involved in deciding what is a lie and what isn’t. But of course that’s done all the time in the context of defamation and commercial speech. Hate speech is something, as we know, that’s prohibited in many parts of Europe but not here. At least not in the public square as opposed to employment contexts or educational contexts. Some people would say, “Well, that’s dictated by the First Amendment and they don’t have the First Amendment over there in Europe, so we’re better.” But having worked in this area for a long time and having read many Supreme Court decisions, it seems to me the First Amendment has been subjected to the same kind of balancing test that they use in Europe when they interpret their European Convention on Human Rights or their individual constitutions. They just have different policy choices. And the policy choice to prohibit hate speech given the history of Europe is understandable. Whether it is effective in terms of reducing racism, Islamophobia, antisemitism… is there more of that in Europe than there is here? Hard to know. It’s probably not that effective. You make martyrs out of people who are prosecuted for hate speech. But on the other hand, some of it is very troubling. In the United States, Holocaust denial is protected. 

Greene: Can you talk a little bit about your experience being a woman advocating for First Amendment rights for sexual expression during a time when there was at least some form of feminist movement saying that some types of sexualization of women were harmful to women? 

That drove a wedge right through the feminist movement for quite a number of years. There’s still some of that around, but I think less. The battle against pornography has been pretty much a losing battle. 

Greene: Are there lessons from that time? You were clearly on one side of it, are there lessons to be learned from that when we talk about sort of speech harms? 

One of the policy reports we did at the Free Expression Policy Project was on media literacy as an alternative to censorship. Media literacy can be expanded to encompass a lot of different kinds of education. So if you had decent sex education in this country and kids were able to think about the kinds of messages that you see in commercial pornography and amateur pornography, in R-rated movies, in advertising—I mean the kind of sexist messages and demeaning messages that you see throughout the culture—education is the best way of trying to combat some of that stuff. 

Greene: Okay, our final question that we ask everyone. Who is your free speech hero? 

When I started working on “Priests of Our Democracy,” the most important case, sort of the culmination of the litigation challenging loyalty programs and loyalty oaths, was a case called Keyishian v. Board of Regents. This is a case in which Justice Brennan, writing for a very slim majority of five Justices, said academic freedom is “a special concern of the First Amendment, which does not tolerate laws that cast a pall of orthodoxy over the classroom.” Harry Keyishian was one of the five plaintiffs in the case, faculty members at the University of Buffalo who refused to sign what was called the Feinberg Certificate, which was essentially a loyalty oath. The certificate required all faculty to say “I’ve never been a member of the Communist Party and if I was, I told the President and the Dean all about it.” He was not a member of the Communist Party, but as Harry said much later in an interview – because he had gone to college in the 1950s and he saw some of the best professors being summarily fired for refusing to cooperate with some of these Congressional investigating committees – fast forward to the Feinberg Certificate loyalty oath: he said his refusal to sign was his “revenge on the 1950s.” And so he becomes the plaintiff in this case that challenges the whole Feinberg Law, this whole elaborate New York State law that basically required loyalty investigations of every teacher in the public system. So Harry became my hero. I start my book with Harry. The first line in my book is, “Harry Keyishian was a junior at Queens College in the Fall of 1952 when the Senate Internal Security Subcommittee came to town.” And he’s still around. I think he just had his 90th birthday!

 

David Greene