Courts Must Not Allow Litigants to Plead Around The First Amendment’s Speech Protections

1 day 6 hours ago

Meritless defamation lawsuits can deter legal speech by forcing people to spend time and money fighting them. That is why courts must diligently protect people’s First Amendment rights by quickly dismissing claims that target people’s protected opinions.

That did not happen in a case on appeal to the U.S. Court of Appeals for the Ninth Circuit, Amin v. Winslow, and EFF filed a friend-of-the-court brief last month describing the potential danger to speech when courts let these cases linger.

A whistleblower alleges that the plaintiff, Dr. Mahendra Amin, performed unnecessary medical procedures on women at an Immigration and Customs Enforcement detention facility. When news reports described the whistleblower’s allegations, author Don Winslow shared news stories about the allegations and criticized Amin in a series of tweets.

Amin’s lawsuit focuses on a single Winslow tweet that he alleges is defamatory. But the First Amendment requires courts to analyze the broader context surrounding any statement. The federal district court hearing the case disregarded this long-standing principle when it denied Winslow’s motion to dismiss the case, according to EFF’s brief.

California law protects people against lawsuits known as SLAPPs, or Strategic Lawsuits Against Public Participation. The state’s anti-SLAPP law allows for motions such as Winslow’s, which should allow meritless defamation lawsuits to be dismissed quickly.

“The district court’s decision encourages meritless lawsuits against online speakers by allowing plaintiffs to strip disputed statements of their context and evade California’s robust anti-SLAPP statute,” EFF’s brief argues. “Yet the First Amendment requires courts to examine the fuller context of any alleged actionable statement to protect hyperbole and avoid chilling speakers.”

The brief explains why, when it comes to online speech—particularly on Twitter—understanding the context of any allegedly defamatory statement is crucial.

Twitter users speak in short bursts of 280 characters or fewer, often rapidly firing off multiple tweets on the same topic while referencing other users’ tweets, news articles, and other media. Twitter users expect this and know that, generally, no single tweet contains the full story. Users know that they often will need to read multiple tweets from other users or otherwise try to learn more about any conversation occurring on Twitter.

The district court also did not fully grasp how Twitter’s specific features lead to rapid-fire exchanges and often require users to find additional information to fully understand conversations on the site. Nor did the court take account of the fact that the site is overwhelmingly caustic, the brief argued.

“For better or worse—but protected by the First Amendment all the same—Twitter is an irreverent and hyperbolic place.”

EFF’s brief asks the Ninth Circuit to reverse this erroneous ruling and remand the case.

Aaron Mackey

EFF, ACLU Seek to Protect the Public’s Right to Access Judicial Records

2 days 5 hours ago
Amicus Brief Urges the Court to Increase Transparency of SCA Warrant Requests

ST. LOUIS — The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) today filed a friend-of-the-court brief in support of an appeal filed by the Reporters Committee for Freedom of the Press (RCFP). The brief argues that RCFP has standing to sue to access search warrants and other materials related to the Stored Communications Act (SCA) sealed by a federal district court. The federal district court’s decision that RCFP lacks such standing — and to keep this vital information under seal — contradicts decades of First Amendment and common law broadly granting organizations and the public the right to petition for unsealing.

The SCA authorizes the government to access, among other things, the content of a subscriber’s electronic communications by obtaining a warrant. SCA warrants and related court records, including dockets, are routinely filed and maintained under seal in federal district courts around the country without any explanation of why such secrecy is necessary. In October 2022, the Minnesota District Court denied RCFP’s request to unseal this information, stating that RCFP “does not allege that it has any intent, much less an imminent intent, to access or inspect any of the materials that it seeks to unseal.”

As the ACLU and EFF’s brief to the U.S. Court of Appeals for the Eighth Circuit explains, the lower court’s decision wrongfully cuts off court access in cases where, historically, access has proved important. “The Reporters Committee’s unsealing petition is a prime example of how secrecy can frustrate the public’s ability to even learn about the existence of certain judicial records, in this case, law enforcement requests for court authorization to engage in surveillance or to obtain people’s private data,” the brief reads. Keeping this information under seal implicates “people’s free speech and privacy rights, both in the physical world and digitally.”

In most places, the press and public have no way of knowing how many SCA warrants the government applies for, what kinds of records it’s seeking, what information the government presented to support its warrant applications, and how many of the applications are granted or denied.

For the brief:

Contact: Sophia Cope, Senior Staff Attorney; Aaron Mackey, Senior Staff Attorney
Josh Richman

The Breadth of the Fediverse

2 days 9 hours ago

The Washington Post recently published an op-ed by Megan McArdle titled "Twitter might be replaced, but not by Mastodon or other imitators." The article argues that Mastodon is falling into a common trap for open source projects: building a look-alike alternative that improves things a typical user doesn’t care about, while missing the elements that made the original successful. Instead, she suggests that deposing Twitter will require something wholly new, something that offers the masses what they didn’t know they wanted.

Where we disagree is that Mastodon (as part of the fediverse) does offer that, in the form of a truly interoperable and portable social media presence. Characterizing Mastodon as a mere Twitter clone overlooks the strength of the fediverse: it can be or become any social platform you can imagine. That’s the power of protocols. The fediverse as a whole is a micro-blogging site, as well as a platform for sharing photos, videos, book lists and reading updates, and more.

Since this is a widely held misconception about the fediverse, and as a picture is worth a thousand words, let’s take a look at how the wider world of ActivityPub works in practice.

This is PeerTube. It's a video hosting site that allows people to follow others, upload videos, comment on them, and “like” them. This is the main channel page for the Blender open source project, and from here you can subscribe to the channel.

For this example we created a temporary Mastodon account. Once we hit "Remote subscribe" above, we’re taken to our Mastodon account. From here we just click follow, and now our Mastodon account is following Blender’s PeerTube account.


Now, every time Blender posts a new video to PeerTube, it shows up in our Mastodon timeline. From here, we can also “like” the video and compose a reply…

… and both the like and the reply seamlessly show up on the page for the video.


Pixelfed is another ActivityPub based service that takes the form of a picture sharing social network. Here’s the home page for Dan Supernault, the lead developer.


We can follow him from here, just as we did with the Blender organization’s PeerTube page above, but we can also search for him directly from our Mastodon account if we know his username.


Just as with PeerTube, once we follow Dan, his images will start showing up in Mastodon, and our likes and comments will show up in Pixelfed as well.


These are just a couple of examples of the way that common protocols in general, and ActivityPub in particular, enable innovation in social networking. The fediverse also has BookWyrm, a social reading platform; Funkwhale, a music publishing and sharing service; and WriteFreely, a longer-form blogging platform, among others.

The promise of the fediverse is that all of these interoperate with however someone wants to view them. If I like Mastodon, I can still get pictures from Pixelfed, even if they might be presented better on Pixelfed itself. Moreover, my comments show up on Pixelfed in their expected form.
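Under the hood, the interoperability shown above rests on two open standards: WebFinger, which maps a handle like @blender@video.blender.org to a profile URL, and ActivityPub actor documents served as JSON. Here is a minimal sketch of that resolution, assuming a standards-compliant server (the handle is illustrative):

```python
import json
import urllib.parse
import urllib.request


def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL (RFC 7033) for a fediverse
    handle like '@blender@video.blender.org'."""
    user, domain = handle.lstrip("@").split("@")
    resource = urllib.parse.quote(f"acct:{user}@{domain}")
    return f"https://{domain}/.well-known/webfinger?resource={resource}"


def fetch_actor(handle: str) -> dict:
    """Resolve a handle to its ActivityPub actor document.
    (Requires network access to the remote server.)"""
    with urllib.request.urlopen(webfinger_url(handle)) as resp:
        jrd = json.load(resp)
    # The 'self' link in the WebFinger response points at the
    # actor's ActivityPub profile.
    actor_url = next(
        link["href"] for link in jrd["links"] if link.get("rel") == "self"
    )
    req = urllib.request.Request(
        actor_url, headers={"Accept": "application/activity+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because every compliant server answers the same two requests, a Mastodon instance can follow a PeerTube channel or a Pixelfed account without knowing anything about the software on the other end.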

People coming from Twitter tend to think of the fediverse as a Twitter replacement for obvious reasons, and thus use Mastodon, but that’s only a fraction of its potential. The question isn’t whether the fediverse can replace Twitter, but whether protocols can usurp platforms in our life online. With enough momentum, the fediverse can be the fabric of the social web, incorporating existing systems like Tumblr and Medium and outright replacing stragglers.

Ross Schulman

Civil Society Organizations Call on the House Of Lords to Protect Private Messaging in the Online Safety Bill

3 days 11 hours ago

As the UK's Online Safety Bill enters its Second Reading in the House of Lords, EFF, Liberty, Article 19, and Big Brother Watch are calling on Peers to protect end-to-end encryption and the right to private messaging online.

As we've said before, undermining protections for end-to-end encryption would make UK businesses and individuals less safe online, including the very groups that the Online Safety Bill intends to protect. Criminals, rogue employees, domestic abusers, and authoritarian governments are just some of the bad actors that will eagerly exploit backdoors like those proposed by the Online Safety Bill. Proposals like this threaten a basic human right: our right to have a private conversation.

The briefing continues:

In spite of some changes being made to the Bill during its Commons Committee stage, these provisions have remained untouched, and as a result of the breadth of the Bill have failed to be robustly scrutinised. Throughout the Bill’s passage in the Commons, multiple amendments were also tabled to safeguard end-to-end encryption, although they have not been accepted... We urge parliamentarians to oppose the Online Safety Bill’s intrusion into private messaging at Second Reading.

Paige Collings

Setting the Record Straight: EFF Statement in Support of FCC Nominee Gigi Sohn

4 days 8 hours ago

In the last week, a number of dangerous and conspiracy-driven attacks were made against EFF board member Gigi Sohn, an eminently qualified nominee to the Federal Communications Commission. These attacks attempt to twist EFF's long-held positions and commitments into dog whistles against Ms. Sohn. We’d like to set the record straight.

First, we’ve seen some outlandish headlines about EFF’s 2020 recognition of Danielle Blunt, a leader in the technology policy space and advocate for sex workers, because she is a professional dominatrix. Ms. Blunt is one of the co-founders of Hacking//Hustling, a collective of sex workers and others working at the intersection of tech and social justice to interrupt state surveillance and violence facilitated by technology. Through that work, Ms. Blunt is an expert on the impacts of the censorship law FOSTA-SESTA, and on how content moderation affects the movement work of sex workers and activists. No one is more aware of the way that real-world power imbalances permeate online spaces, or more poised to act, than she is.

Second, much has been made about EFF’s strong and continued opposition to FOSTA-SESTA. These attacks take the claims of FOSTA-SESTA's proponents at face value—that it was a good and useful measure against sex trafficking—when all evidence points to the contrary. Our opposition to FOSTA-SESTA was and remains based on the facts: It will not stop sex trafficking and will instead make stopping it harder. At the same time, the law puts a wide range of online expression at risk, and we are always, unapologetically, against the criminalization and chilling of legal speech.

Third, despite what its supporters claim, the EARN IT Act is a surveillance bill that would have a devastating impact on privacy, security, and free speech. If Congress passes this disastrous bill, it may become too legally risky for companies to offer encryption services. This bill treats every internet user as a potential criminal, and subjects all our communications to mass scanning. We are pleased that Congress has rejected it twice already.

Finally, the flurry of hyperbole and personal attacks should not be allowed to deflect attention from the most important thing about Gigi Sohn's nomination: She is one of the most qualified people possible for the role of FCC commissioner. She has been a fair and balanced advocate for public interest for her entire career, which is why she is supported by experts, industry associations, and consumer groups alike. That is why we were happy to add her to our board— a role from which she will step down if she is appointed—and why we would be thrilled to see her confirmed to the FCC. The public deserves an FCC commissioner who will fight for net neutrality, for rural broadband access, and for strong internet infrastructure. It is past time to let her get to work helping to build a better internet for everyone.

Electronic Frontier Foundation

EFF Files Amicus Briefs in Two Important Geofence Search Warrant Cases

4 days 9 hours ago

Should the police be able to identify everyone who was in a busy metropolitan area, just because a crime occurred there? In two amicus briefs just filed in appellate courts, we argue that’s a clearly unconstitutional search.[1]

The two cases are People v. Meza, in the California Court of Appeal, and United States v. Chatrie, in the federal Fourth Circuit Court of Appeals. In each case, the defendant is challenging the police use of a surveillance tool we’ve written about before called a “geofence warrant.” In both cases, the lower courts denied motions to suppress. In Chatrie, however, the district court issued a lengthy opinion holding the geofence warrant was unconstitutional before ruling that police relied on the warrant in “good faith” and therefore the evidence from their search was admissible.

Unlike traditional warrants for electronic records, a geofence warrant doesn’t start with a particular suspect or even a device or account; instead police request data on every device in a given geographic area during a designated time period, regardless of whether the device owner has any connection to the crime under investigation. Google has said that for each warrant, it must search its entire database of users’ location history information—data on hundreds of millions of users.

The data Google provides to police in response to a geofence warrant has the potential to be very precise—much more precise than cell site location information, for example. It allows Google to determine where a user was at a given date and time, sometimes to within twenty meters or less. Google can even determine a user’s elevation and establish what floor of a building that user may have been on. As the lower court noted in Chatrie last summer, Google’s database “appears to be the most sweeping, granular, and comprehensive tool—to a significant degree—when it comes to collecting and storing location data.” At the same time, however, Google does not guarantee accuracy. Google’s goal is to accurately infer a user’s location within a certain radius a bare 68% of the time. This creates the possibility of both false positives and false negatives—people could be implicated for a crime when they were nowhere near the scene, or the actual perpetrator might not be included at all in the data Google provides to police.
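To make that accuracy caveat concrete, the interaction between a confidence radius and a geofence boundary can be sketched with an ordinary great-circle distance check. This is a hedged illustration, not Google's actual method: any reported point whose confidence circle merely overlaps the fence can be swept into the results, even if the device was in fact outside it.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def may_be_inside(fence_lat, fence_lon, fence_radius_m,
                  pt_lat, pt_lon, confidence_radius_m):
    """A reported location 'may' fall inside the geofence whenever its
    confidence circle overlaps the fence at all -- which is exactly how
    a device that was actually outside the fence can become a false
    positive when the confidence radius is large."""
    d = haversine_m(fence_lat, fence_lon, pt_lat, pt_lon)
    return d <= fence_radius_m + confidence_radius_m
```

For example, a point roughly 220 meters from the center of a 150-meter fence is excluded when its location is known precisely, but included once a 100-meter confidence radius is attached to it.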

The warrants in both the Meza and Chatrie cases encompassed large geographic areas and time periods. In Meza, the police asked for all devices in six discrete, heavily populated areas of Los Angeles during time periods when people were likely to be in sensitive places, like their homes, at church, or at a medical center, or driving along one of the many busy streets included within the geofenced areas. In total, police requested data for a geographic area equivalent to about 24 football fields or five to six city blocks during five morning commute hours. Similarly, in Chatrie, the geographic area was about 17.5 acres (about three and a half times the footprint of a New York City block) and included a church, a chain restaurant, a hotel, several apartments and residences, a senior living facility, a self-storage business, and two busy streets.

In our briefs, we argue these warrants are unconstitutional “general warrants” because they don’t require police to show probable cause to believe any one device was somehow linked to the crime under investigation. Instead, they target everyone in the area and then provide police with unlimited discretion to determine who to investigate further. In Meza, we also argue the practice violates CalECPA, California’s landmark electronic communications privacy law.

Chatrie and Meza are the first cases challenging geofence warrants to make it to the appellate level. However, they appear to be just the tip of the iceberg. The number of police requests for geofence warrants has increased dramatically since their first reported use in 2016. According to Google, geofence requests now constitute more than a quarter of all warrants it receives, and 20% of those come just from law enforcement agencies in California.

There is real reason to be concerned about these overbroad searches. They have, in the past, caused innocent people to be suspected of crimes they didn’t commit. And geofence warrants can and have been used in ways that impact fundamental rights, including free speech and freedom of association. For example, during the protests following the police shooting of Jacob Blake, the ATF used at least 12 geofence warrants to collect people’s location data during protests in Kenosha, Wisconsin, one of which encompassed a third of a major public park for a two-hour window. Police also used a geofence warrant in Minneapolis around the time of the protests following the police killing of George Floyd. And geofence warrants may be used in the near future to target people for reproductive health choices and outcomes. Google has been sufficiently concerned about this possibility to pledge to delete location information shortly after someone visits an abortion clinic, though critics have argued this would be insufficient to protect people.

The Chatrie and Meza cases will both likely be argued sometime later this year. The majority of courts to address geofence warrants in publicly available opinions have raised constitutional concerns, refusing to issue the warrant or suppressing the evidence. We hope these two appellate courts will do the same.

[1] EFF was represented on the Chatrie brief by the NYU Technology Law & Policy Clinic, and the excellent brief was drafted by law students Talya Nevins and Yanan Wang.

Related Cases: Carpenter v. United States
Jennifer Lynch

The FCC Broadband Maps: Meet the New Maps, Same as the Old Maps

4 days 10 hours ago

When the Federal Communications Commission (FCC) released its new broadband map in November 2022, many hoped the chronic inaccuracies of past FCC maps would be resolved. Previous maps of high-speed broadband access in the United States painted inaccurate pictures partly because the definitions of things like “access” and “high-speed” were, frankly, wrong. Furthermore, the maps were based on data self-reported by internet service providers, which have every interest in claiming better service than they actually provide. The new maps have all the problems of the old maps, with the new issue that they are the basis for how $42 billion in broadband infrastructure grants will be spent.

These problems have also been raised by states, local governments, and community organizations, which have filed challenges with the FCC over these inaccuracies. It is now up to the FCC and NTIA to fix the map, and time is of the essence: the Biden administration is set to confirm how the money will be spent by the summer as part of its Broadband Equity, Access, and Deployment (BEAD) program.

Overreliance on internet service providers (ISPs) to report service locations and service availability is a recurring problem. ISPs have no incentive to accurately report, and in fact, have every incentive to overreport, because misinforming the government has never carried a heavy penalty. These same ISPs then use these faulty broadband maps - which are built on their bad data - to challenge and try to prevent would-be competitors from building infrastructure into areas that are underserved or unserved.

The FCC, recognizing this concern, created a challenge process through which government entities as well as individuals are able to challenge the ISPs over their service locations and service availability. Setting aside the issues with the challenge process and the obvious discrepancy of pitting an average consumer or small government agency against well-resourced ISPs, these challenges offer only a glimpse of the true scope of the map’s inaccuracies.

For example, in Nevada, the Nevada State Office of Science, Innovation, and Technology found over 20,000 purported broadband-serviceable locations that they believe overstate coverage. They also found incorrect information on the quality of service available, as well as missing serviceable locations.

In Vermont, the maps show 100% coverage, with only 3% of residents lacking speeds greater than the FCC’s definition of high-speed internet, "25/3" (25 megabits per second download and 3 megabits per second upload). The Vermont Department of Public Service found that the maps omitted 22% of the addresses in the state’s own database: over 60,000 locations. Their Community Broadband Board further estimates that 18.6% of residents, not the map’s stated 3%, lack access to speeds greater than 25/3. It was so bad, the Community Broadband Board released a call to action “urging Vermonters to challenge wrong FCC map data.”

It is not just single-family residences that were omitted but also public and multifamily housing units, where a single omitted location could mean hundreds of people being uncounted, as well as centers of community life like schools, churches, and libraries. A letter signed by 110 organizations representing housing, education, healthcare, library, and state and local government interests finds that nationwide, 20-25% of unconnected households reside in public and multifamily units. The undercounting of schools, churches, and libraries omits crucial places of community and gathering where improving service would have an outsized impact in connecting otherwise unserved rural communities. Leaving these uncounted omits some of the most vulnerable populations and ignores precisely the areas this initiative is meant to assist.

The NTIA, and subsequently state governments, must rely on these inaccurate maps to disburse $42 billion in taxpayer dollars to build out internet infrastructure. An undercounting of a cumulative hundreds of thousands if not millions of underserved and unserved locations and their residents severely hinders how these funds will address existing inequities.

Take for example Los Angeles County, in California: according to the data ISPs have submitted to the FCC, LA County is 100% served. What we know from community organizations in LA County is that this is simply not true.

Unfortunately, the danger of these inaccuracies has extended to the California Public Utilities Commission (CPUC), which recently issued its own priority areas map. The priority areas map marks areas the CPUC sees as priorities for investment, using the same underlying data as the FCC’s inaccurate broadband maps. By failing to correctly reflect the deep inequities that exist in LA County when it comes to internet access, the CPUC map would also fail to prioritize those areas for much needed investment. Of the 100 census tracts in LA County that are least connected - those with the lowest percentage of population with fixed broadband at home - only 5 are in a CPUC priority area. Of the 500 least connected census tracts, only 14 are in a CPUC priority area. Conversely, of the 100 best connected census tracts - those with the highest percentage of population with fixed high speed broadband at home - 19 are in priority areas, and of the 500 most connected census tracts, 80 are in priority areas. If funds are disbursed based on the current data, LA County’s best connected are set to receive more investments in more places than the county’s least connected.

The obvious solution is to require better data and penalize intentional over-reporting of coverage. Looking beyond the map, the NTIA should not distribute the full $42 billion using the current available data given what appears to be systemic inaccuracies that disproportionately harm low-income people in cities. Furthermore, the FCC needs to do more than just count on less resourced entities to correct the false ISP data. They should proactively ensure everyone receives their fair share of federal support from the infrastructure law and root out bad faith efforts by ISPs.

Community organizations and state and local governments can only do so much by flagging the flagrant inaccuracies. These inaccurate maps put at risk the once-in-a-lifetime opportunity to build future-proof internet infrastructure that benefits all Americans for generations to come. 

Chao Liu

Two Steps Forward, One Step Back on Vaccine Privacy in New York

4 days 11 hours ago

EFF was proud to support New York’s A. 7326/S. 6541, which the legislature passed to protect the confidentiality of medical immunity information. It limits what data can be collected or shared, who it can be shared with, and how long it can be stored. (In New York, bills must have identical versions in each chamber.) It’s important to put privacy protections in place now to ensure personal medical information is kept safe, and that that information won’t be used to harm the most vulnerable members of our society.

The original bill would have protected people from having their information misused by private companies, the government, or other entities that wish to track their movements or use their private medical information to punish or discriminate against them. It also would have expressly prohibited immunity information from being shared with immigration agencies seeking to deport someone, or with child services seeking to take away their children. Finally, it would have required those asking for immunity information to accept an analog credential, such as a paper record.

New York Gov. Kathy Hochul signed the bill into law at the end of December. Unfortunately, she amended the bill the legislature passed to weaken some of its provisions on data sharing.

New Yorkers are better off with this law on the books. But it’s disappointing to see signing amendments that run counter to the heart of the bill: that public health requires public trust. We should never worry that seeking health care, especially for something as routine as a vaccine, will land us in legal trouble.

We share the disappointment of the New York Civil Liberties Union (NYCLU), whose director Donna Lieberman said in a statement:

In the face of rising COVID cases and other infectious diseases, this is certainly a step in the right direction. Yet the Governor’s insistence on chapter amendments make it a missed opportunity to ensure New Yorkers can be as confident as possible sharing the personal information required to get a vaccine or use a vaccine passport. No one should be criminalized or deported as a result of participating in public health responses.

We agree. We hope that other states will recognize the importance of protecting vaccine information and pass legislation to fully protect that information from falling into unexpected hands.

Hayley Tsukayama

Stupid Patent of the Month: Digital Verification Systems Patents E-Signatures 

5 days 9 hours ago

Patent trolls make patents, and argue over them. They never have to make the things described in their patents, if it’s even possible to determine what those things are. Instead, they generate legal threats and waste the time and money of companies that actually do make these things.

This month’s Stupid Patent of the Month is a great example of that. U.S. Patent No. 9,054,860 has been used by a company called Digital Verification Services, LLC, (DVS) to sue more than 50 companies that provide different types of e-signature software. 

There’s no evidence that the inventor of this patent, Leigh Rothschild, ever created his own e-signature software. But in patent law, that doesn’t matter. He acquired this patent in 2015, by adding a trivial, almost meaningless limitation to an application that the U.S. Patent Office had spent the previous seven years rejecting. 

You can’t learn much about how to verify digital identities from the patent owned by Digital Verification Services. But the breadth of work on actual digital verification can be gleaned by looking at the long list of companies and products that DVS has sued. In fact, DVS has sued more than 50 different companies. Some are large, like NASDAQ-listed DocuSign, but many more of its targets are small companies with fewer than 50 or even fewer than 10 employees. They stand accused of offering “hardware and/or software for digital signature services.”

That’s a pretty big chunk of litigation even for a Rothschild-linked company. Some of Rothschild’s other “inventions” include an internet drink mixer that’s positively out of a sci-fi novel, and a patent on online movies (from the cloud!) that was filed in 2011.  

So what’s described in this patent, which so many companies are accused of infringing? 

The patent’s key claim describes a “module generating assembly” that will receive a “verification data element,” resulting in a “digital identification module.” This “module” will then “at least partly associate” with an “entity,” and be embedded in a file.

In his declaration, inventor Rothschild says the “module generating assembly” could be a lot of things—a computer application, a web server, a file server, or “other computing device.” In a recent deposition, he declined to describe that term any further. In court filings, DVS described a person “skilled in the art” of understanding this patent as someone “having a bachelor’s degree in computer science or electrical engineering,” or the equivalent. Rothschild—whose software patents have been used to sue hundreds of companies—admitted he had no such degree.

Rothschild’s patent, like the great majority of software patents, includes no code—it simply proclaims it’s made up of “modules,” “assemblies,” and “components.” 

Actual E-Signature Software Rests On Well-Known Standards And Laws

Innovation in this area—electronic signatures—rests on a bedrock of publicly shared knowledge and publicly known law. There’s no evidence at all that patents on e-signatures (of which Rothschild’s stupid patent is not the only example) have done anything to push forward innovation in the e-signature space. 

The history of e-signatures is long. The U.S. government established standards for digital signatures in 1994, and Congress passed the E-Sign Act in 2000, establishing a general rule of validity for electronic signatures. The E.U. regulated them beginning in 1999. The United Nations published a Model Law on Electronic Commerce in 1996 that has addressed e-signatures. These foundational laws all have been revised multiple times. They are all much older than the claimed 2008 priority date on Rothschild’s patent application. 

Using electronic signals to verify identities and seal contracts pre-dates the internet, as well. A New Hampshire Supreme Court case from 1869 called Howley v. Whipple said that a contract established via telegraph was binding. “Nor does it make any difference that in one case common record ink is used, while in the other case a more subtle fluid, known as electricity, performs the same office,” wrote the justices in that case. 

That’s not to say there isn’t room for new and improved types of electronic signatures. But the proof of innovation won’t be found in this stupid patent; it can be found in the ongoing work of the dozens of companies, large and small, that have been sued by DVS. At least one DVS defendant has been providing e-signature services since 2000. As with so many other flourishing areas of software, patents don’t push forward the state of the art—they drag it down. 

Unfortunately, getting software patents based on vague terms like a “module generating assembly” and a “digital identification module” is not uncommon. We need better laws to kick out indefinite patents like these faster, so they can’t be used to harass companies into paying settlements, as most of DVS’ targets appear to have done. We also support a stronger patent review system to weed out patents that courts can’t get to. Finally, we need strong and enforceable fee-shifting in patent cases to penalize repeat litigants like the Rothschild-linked companies, which use some of the stupidest patents we’ve seen to terrorize small companies. 

Joe Mullin

California Law Says Electronic Search Data Must Be Posted Online. So Where Is It?

5 days 10 hours ago

Update, Feb. 1, 2023: Out of an abundance of caution, EFF has temporarily replaced the CalECPA disclosure data from 2020-2022 with new versions provided by the California Department of Justice (CADOJ) that do not include the "nature of investigation" and "facts giving rise to the emergency" columns. Following our publication of the data, CADOJ alerted us that it had failed to properly redact potentially personal information from these fields. This is extremely alarming because CADOJ has known since at least 2017 that its data contains personal information. When EFF raised this problem with CADOJ then, an official told us staff would "begin a secondary screening of the produced data that should resolve this issue." That screening appears either never to have happened, or to have stopped happening sometime before 2020. It certainly did not happen in the five weeks it took CADOJ to produce the data in response to our public records request. The agency had an additional three months to catch the error before we published our blog post. We also gave CADOJ advance notice that we planned to publish our report. Yet CADOJ apparently did not catch the problem until after we went live. It is long past time for CADOJ to have a process in place for redacting this information. A spokesperson says staff is currently reviewing the data and will provide us an updated dataset, with the columns returned but redacted, in the near future. CADOJ also committed to posting it on the OpenJustice website. We have posted their full statement at the end of the blog post.

When it was passed in 2015, the California Electronic Communications Privacy Act (CalECPA) was heralded as a major achievement for digital privacy, because it required law enforcement to obtain a warrant in most cases before searching a suspect's data, be it on a personal device or on the cloud. But the law also contained a landmark transparency measure: the legislature ordered the California Department of Justice (CADOJ) to publish a regularly updated dataset of these search warrants on its website. 

Up until last year, CADOJ was doing a pretty good job at uploading this data to its OpenJustice website, where it hosts a number of public datasets related to criminal justice. Advocacy groups and journalists used it to better understand the digital search landscape and hold law enforcement accountable. For example, the Palm Springs Desert Sun analyzed the data and found that San Bernardino County law enforcement agencies were by a large margin filing more electronic search warrants than any other jurisdiction in the state. The Markup also published a piece highlighting a troubling discrepancy between the number of search warrants based on geolocation (a.k.a. geofence warrants) self-reported by Google and the number of search warrants disclosed by agencies to the California Department of Justice. 

But then, last summer, CADOJ accidentally exposed the personal data of 192,000 people who had applied for a concealed carry weapons permit. Among the various actions it took in response, CADOJ suspended its OpenJustice website. Over the next several months, other datasets–such as data about use of force, jail deaths, complaints against officers, and threats to reproductive health providers–returned to the website. 

But the electronic search warrant data is inexplicably missing, despite CalECPA stating that CADOJ “shall publish all those reports on its Internet Web site within 90 days of receipt.”

CADOJ’s failure to publish the CalECPA data is against the law, and EFF is calling on Attorney General Bonta to immediately put the data back on the website. 

When asked for comment, a CADOJ spokesperson said, "We are working to bring OpenJustice’s other functions back online as soon as possible." They also said that in the meantime, we could submit requests for the data via email. 

EFF did just that on September 30, 2022 through a California Public Records Act (CPRA) request. If CADOJ had been following the law, the data would have been available online instantly. Instead, we were forced to wait four weeks: CADOJ granted itself a deadline extension, and then missed that deadline by a week. 

You can download the data we obtained from 2016-2022 (see update).

When looking at this data, it's important to note that it does not cover all search warrants for data, but only certain categories: when an agency does not know the identity of the person they are targeting or when they delay notification of the target of the search warrant. 

For each of these search warrants, the agency must disclose information about its request, such as the nature of the investigation and crime, whether the warrant is targeting a device or an account, the name of any company that received the search warrant (such as Google or Facebook), the categories of data sought, and the start and end date for the information sought. After receiving the data, CADOJ must publish it within 90 days, though it may redact personal information from the data first. 

Researchers can use this data to seek copies of the search warrants themselves, either through a CPRA request or by visiting the courts. In some cases, the warrants or portions of the warrants will be sealed; previously EFF has litigated the issue, resulting in some records being released, while courts allowed other portions of the records to remain sealed indefinitely.

These search warrant files can reveal important information. In San Bernardino County, the data and search warrants revealed the use of cell-site simulators, devices that masquerade as cell-phone towers in order to track and grab information from cell phones. Based on the CalECPA data, we also obtained a copy of a search warrant that the UC Berkeley Police Department filed to obtain phone records for people who attended a protest. 

The public should not have to file CPRA requests over and over again to receive this data. The California legislature wrote a law establishing that this information must be available online, and CADOJ must follow it.

California Department of Justice Statement, updated Feb. 1, 2023:

The electronic search warrant data previously provided to you erroneously contained personally identifying information (PII) in some of the dataset’s hundreds of data fields. Our office is working as quickly as possible to thoroughly review the data to ensure PII is protected. We will provide redacted data to you, as well as post it on OpenJustice, as soon as it’s available.

Our office is working as expeditiously as possible and we take our duty to release electronic search warrant information as required under the California Electronic Communications Privacy Act extremely seriously. We greatly appreciate your assistance in this matter as we work to meet our statutory obligations and protect PII. We will follow up with you as soon as we have an update.

Dave Maass

Brazil's Telecom Operators Made Strides and Had Shortcomings in InternetLab's New Report on User Privacy Practices

1 week 1 day ago

Brazil’s biggest internet connection providers made moderate advances in protecting customer data and being transparent about their privacy practices, but fell short on meeting certain requirements for upholding users’ rights under Brazil's  data protection law, according to InternetLab’s 2022 Quem Defende Seus Dados? (Who Defends Your Data?) report.

In this seventh annual assessment of Brazil’s providers, InternetLab evaluated six companies, looking at both their broadband and mobile services: Oi (fixed and mobile broadband), Vivo/Telefónica (fixed and mobile broadband), TIM (fixed and mobile broadband), Claro/NET (América Móvil), Brisanet (fixed and mobile broadband), and Algar (broadband only). The operators were evaluated in six categories: providing information about their data protection policies, disclosing guidelines for law enforcement seeking user data, defending user privacy in courts, supporting pro-privacy policies, publishing transparency reports, and notifying users when the government requests their data.

This year, Oi broke into the top tier, tying with TIM for the highest scores: each company garnered full credit in four out of six categories. Every company in the report received full credit for challenging privacy-abusive legislation and government requests for user data except Algar, which received half credit. Brisanet improved its overall standing, earning full credit in this category, but it still received the least credit overall among its peers, echoing last year’s report.

With Brazilian providers steadily improving transparency and customer data protection over the years, methodological changes were made in this edition to raise the bar for achieving credit in a few categories. Specifically, the assessment of companies’ compliance with data protection legislation was expanded to include more requirements for transparency about data sharing with third parties. New criteria for measuring transparency around customers’ rights, data handovers to authorities, and cybersecurity protocols were also added.

Finally, InternetLab checked which companies took a public stance against making it mandatory for users to undergo facial recognition authentication to activate their mobile phone services.

The report’s complete results are here. 

Data Protection Policy Transparency: Pluses and Minuses

Nearly all companies received full credit for informing users about what data about them is collected, how long the information is kept, and who it is shared with. InternetLab noted advances in how companies were providing information to customers about their data, especially the creation of portals allowing users to click on links to access privacy and transparency policies and file complaints concerning their rights under Brazil's data protection law (Lei Geral de Proteção de Dados, or LGPD).

However, the survey revealed deficiencies in companies’ response times to users’ requests through the portals. Under the LGPD, customers have the right to access their personal information, ensure its accuracy, and request deletion, among other things. Most companies were not responding to users’ requests within the maximum of 15 days required by the law. Only Claro/NET and Algar complied with the provision, under which companies must provide a clear and complete response. InternetLab researchers testing company practices were unable to obtain any information from Oi and TIM in response to requests seeking to confirm whether the companies held their personal data and, if so, the quality and quantity of such data. As for Vivo, InternetLab could not even file the request due to technical problems in the company's app.

Finally, Brisanet doesn't provide any online channel for non-customers to confirm whether the company processes their data. Non-customers may have their personal data processed by a telecom operator, for example, when calling or receiving calls from that operator's customers. They have the same right as customers to confirm whether the company processed their personal data and to get access to that data. But Brisanet requires non-customers to send a physical letter to the company's headquarters with notarized copies of their national ID and signature. Although verification measures are reasonable to confirm that the requested data pertains to the person making the request, the company should provide an online, less bureaucratic alternative for all users, not only its customers.

Law Enforcement Guidelines and Public Advocacy for User Privacy

The report showed improvements in two important categories. Every company received full credit for disclosing information on how they handle law enforcement requests for user data, except Brisanet, which received no credit. Algar and TIM again stood out for publishing a specific document detailing their guidelines for law enforcement access to user information. Oi joined them for the first time with substantive guidelines on the types of data that can be requested, the legal basis required to obtain data, which competent authorities may request data, and the company's internal process for analyzing requests before handing user information to authorities. Vivo also received partial credit for a specific section on its website about government requests.

Regarding defending user privacy in court, five out of six companies, including Brisanet, received full credit. Algar received half credit. All companies, represented by telecom industry trade associations such as ACEL and TELCOMP, challenged in court state laws giving law enforcement officials power to request location data without a previous judicial order. Moreover, Oi, Claro/NET, TIM, Vivo, and Brisanet directly challenged government requests for user data because they lacked a judicial order, showed an insufficient legal basis, or went beyond companies' legal obligations to store data. Further, companies improved their scores for publicly taking a stance in support of user privacy, with Oi and TIM receiving full credit and Claro/NET, Vivo, and Algar receiving half credit. Among other actions, all of them collaborated in launching a code of good practices on data protection for the telecommunications sector, which industry trade group Conexis presented to Brazil's National Data Protection Authority. As in 2021, Brisanet received no credit in this category.

Companies Lack Commitments to Notify Users of Government Data Requests, But Improved on Reporting Numbers

This edition reiterates the complete lack of public commitment from companies to notify users when their information is handed over to the government. Since the first edition, all companies have failed to receive credit in this category. The report also reveals that some companies need to improve their transparency reports. This year, Brisanet and Algar failed to disclose general statistical data about government requests for user data. Oi and Claro/NET disclosed this data for the first time ever. Except for Vivo, no other company reveals the number of customers affected by government requests. However, Vivo failed to disclose the number of rejected requests. Only Oi mentioned challenged requests in its report, stating that in 2021 it filed 18 lawsuits to challenge requests it considered unlawful.

Finally, no evaluated company published a data protection impact assessment.

Use of Facial Recognition Assessed

Facial recognition use is on the rise at internet connection providers, especially for prepaid lines. As we’ve said, face recognition represents an inherent threat to privacy, social justice, free expression, and information security. Unfortunately, the InternetLab report showed there was little commitment from companies to increase privacy protections when implementing face recognition as a method of verification, which InternetLab considered especially privacy invasive. This is troubling because some companies have been actively promoting facial recognition technologies, which may have significant consequences for digital privacy in Brazil.

While Oi provides connectivity services for initiatives involving facial recognition in the context of bank fraud and public security, the company does not use the technology when registering users for mobile prepaid services, the InternetLab report shows. TIM, in turn, said it was using facial recognition only with the consent of account holders, and was not making its use mandatory as a security measure. Real and meaningful opt-in consent is the least companies should ensure when tying face recognition to the provision of telecommunication services. This is not the case if giving your face away is mandatory to activate your mobile account. Government proposals forcing users to provide biometric data to use mobile telephone services stirred strong civil society resistance in Mexico and Paraguay, which succeeded in suspending the proposals' implementation and final approval, respectively.


Over the last seven years, Brazil’s internet providers have made steady progress in transparency and commitments to protect user privacy. This year’s report shows this trend continued in 2022. The passage of the LGPD has led over the years to more sophisticated and user-friendly tools for customers to get information about how providers are handling their personal data. But the report also shows that companies still have work to do to fully comply with LGPD requirements and implement best practices for notifying users of data handovers, publishing data protection impact assessments and transparency reports, and taking a stronger stance in favor of user privacy when it comes to face recognition. InternetLab’s work is part of a series of reports across Latin America and Spain adapted from EFF’s Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies.

Karen Gullo

EFF Tells Supreme Court: User Speech Must Be Protected

1 week 3 days ago

The Supreme Court is about to hear a case that could dramatically affect users’ speech rights online. EFF has filed a brief explaining what’s at stake, and urging the court to preserve the key law protecting user expression, 47 U.S.C. § 230 (Section 230).

In Gonzalez v. Google, the petitioning plaintiffs make a radical argument about Section 230. They have asked the Supreme Court to rule that Section 230 doesn’t protect recommendations we get online, or how certain content gets arranged and displayed. According to the plaintiffs, U.S. law allows website and app owners to be sued if they make the wrong recommendation. 

In our brief, EFF explains that online recommendations and editorial arrangements are the digital version of what print newspapers have done for centuries: direct readers’ attention to whatever might be most interesting to them. Newspapers do this with article placement, font size, and use of photographs. Deciding where to direct readers is part of editorial discretion, which has long been protected under the First Amendment. 

If Courts Narrow Section 230, We’ll See A Censored Internet 

If the plaintiffs’ arguments are accepted, and Section 230 is narrowed, the internet as we know it could change dramatically. 

First, online platforms would engage in severe censorship. As of April 2022, there were more than 5 billion people online, including 4.7 billion using social media platforms. Last year, YouTube users uploaded 500 hours of video each minute. Requiring pre-publication human review is not feasible for platforms of even moderate size. Automated tools, meanwhile, often result in censorship of legal and valuable content created by journalists, human rights activists, and artists. Many smaller platforms, unable to even access these flawed automated tools, would shut down. 

The Gonzalez case deals with accusations that Google recommended content that was related to terrorism. If websites and apps can face severe punishments for recommending such content, they’re very likely to limit all speech related to terrorism, including anti-terrorism counter-speech, and critical analysis by journalists and intelligence analysts. The automated tools used to flag content can't tell whether the subject is being discussed, commented on, critiqued, or promoted. That censorship could also make it more difficult for people to access basic information about real-world events, including terrorist attacks.

Second, online intermediaries are likely to stop offering recommendations of new content. To avoid liability for recommendations that others later claim resulted in harm, services are likely to return to presenting content in blunt chronological order, a system that is far less helpful for navigating the vast seas of online information (notably, a newspaper or magazine would never use such a system). 

Third, the plaintiffs want to create a legal distinction under Section 230 related to URLs (the internet’s Uniform Resource Locators, the addresses that begin with “http://”). They argue that Section 230 protects the service from liability for hosting the user-generated content, but it should not protect the service for providing a URL so that others can access the content. The Supreme Court should reject the idea that URLs can be exempted from Section 230 protection. The argument is wrong as both a legal and technical matter. Users direct the creation of URLs when they upload content to a service. Further, Section 230 does not contain any language that indicates Congress wanted to create such a hair-splitting distinction. To rule as the plaintiffs argue would cripple online services up and down the internet “stack,” not just social media companies. The primary means by which everyone accesses content online—the URL—would become a legal liability if the link led to objectionable content. 
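To make the technical point concrete: a URL is nothing more than a structured address whose parts reflect the service's hosting decisions and the user's upload. A minimal sketch using Python's standard library (the URL itself is hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit

# A hypothetical URL of the kind generated when a user uploads content.
url = "https://video.example.com/watch?v=abc123"
parts = urlsplit(url)

assert parts.scheme == "https"                # the protocol
assert parts.netloc == "video.example.com"    # the host serving the content
assert parts.path == "/watch"                 # the service's addressing scheme
assert parts.query == "v=abc123"              # identifier tied to the user's upload

# Reassembling the parts yields the same address: there is no way to host
# the user's content without also generating the address that points to it.
assert urlunsplit(parts) == url
```

In other words, the URL is an automatic byproduct of hosting user-generated content, which is exactly why carving it out of Section 230 makes no technical sense.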

Section 230 Has Allowed Online Culture To Flourish

In the beginning of the digital age, Congress saw that the internet would be a powerful tool for creating and finding diverse communities. They were right. Cultural and educational institutions like Wikipedia, the Internet Archive, and the Library of Congress’ oral history projects enrich our lives, and all benefit from the protections of Section 230. Every message board, email service, social media site, and online marketplace flourishes because of Section 230. The law holds users accountable for their own speech, while allowing more specialized moderation for niche sites and interests. 

The court is scheduled to hear oral arguments in this case on February 21, 2023. EFF’s brief was joined by the American Library Association, the Association of Research Libraries, the Freedom to Read Foundation, and the Internet Archive. You can read the entire brief here. We also filed a brief in a related case being heard by the Supreme Court next month, Taamneh v. Twitter.  

As the internet has grown, its problems have grown, too. But there are ways to address those problems without weakening a law that protects everyone’s digital speech. EFF has endorsed several paths in that regard, including comprehensive privacy legislation and renewed antitrust action. Removing protections for online speech, and online moderation, would be a foolish and damaging approach. The Supreme Court should use the Gonzalez case as an opportunity to ensure that Section 230 continues to offer broad protection of internet users’ rights. 

  • Brief of Amici Curiae Electronic Frontier Foundation et al. in support of Respondent 
Joe Mullin

The Next Stage in Security Expert’s Trial Set for January 31

1 week 4 days ago

Swedish computer security expert Ola Bini was arrested in April 2019 in Ecuador, and a cloud has hung over his case ever since. Bini's case has been impacted by numerous due process violations and human rights concerns, and there have been suspensions or delays at nearly every stage of his trial. EFF conducted a fact-finding mission into the case in 2019 and found that the allegations against Bini—who is known globally as a computer security expert and a contributor to free software projects—were driven by politics more than legitimate accusations. Now, after almost four years, the next stage in the trial is set for the end of this month. Unfortunately, civil society groups, including EFF, remain concerned that misunderstandings of technology and the political ramifications of the trial will overshadow the prosecution’s flimsy case.

After years of pretrial procedures, Bini’s actual trial began in January of last year and resumed in May. This was not the end of trial proceedings because the defense still had evidence to present, and the court still had to hear Bini’s testimony and parties' closing arguments. The trial was set to continue in August, but it was rescheduled given the absence of an expert Swedish-Spanish translator, a right guaranteed by Ecuadorian Law for foreign defendants. The court called a new hearing for November, with no information on whether or not an expert translator would be present. Again, resumption of the trial was delayed when the prosecutor did not show up for the hearing, presenting a medical certificate two days later.

The next trial date is now set for January 31, though it is unlikely that this single day will be enough to complete the proceedings. We hope the court carefully assesses the testimony and alleged evidence while ensuring Ola Bini's rights; misunderstandings of technology and political implications must not guide the final outcome. 

Previous Hearing Highlighted Lack of Legitimate and Founded Accusations

The core accusation against Bini relies mainly on a printed image of a telnet session (telnet is an insecure communication protocol that has largely been abandoned for public-facing technologies). This image, which was supposedly taken by Bini himself and sent to a colleague, shows the telnet login screen of a router. Although the image's authenticity is under debate, it is not even demonstrative of anything beyond the normal procedures that computer security professionals conduct as part of their work. Centro de Autonomía Digital, co-founded by Ola Bini, reported that expert witnesses on both sides of the case agreed the photo fails to sustain the prosecution's accusations. In fact, the prosecution’s technical expert reportedly told the court that the report issued by Ecuador’s national communications provider about the alleged attack didn’t include sufficient evidence that any access had ever happened. Expert witnesses on behalf of the defense, including Tor co-founder Roger Dingledine, reiterated the lack of evidence of non-authorized access to a computer system.
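For readers unfamiliar with the protocol, it is worth spelling out why such a screenshot proves so little: a telnet server presents its login banner to anyone who connects, before any authentication occurs, and anything typed afterward crosses the network in plaintext. A minimal local simulation in Python (no real device involved; the banner text is made up):

```python
import socket

# Simulate a telnet-style exchange over a local socket pair.
server, client = socket.socketpair()

# The "router" sends its banner and login prompt to any connecting client.
# Seeing this screen requires no credentials at all.
server.sendall(b"Router v1.0\r\nlogin: ")
banner = client.recv(1024)
assert b"login:" in banner  # visible before any authentication

# If a client does log in, telnet transmits the credentials unencrypted.
client.sendall(b"admin\r\n")
on_the_wire = server.recv(1024)
assert on_the_wire == b"admin\r\n"  # plaintext, readable by any eavesdropper

server.close()
client.close()
```

Seeing a login prompt, in other words, is what happens to anyone who connects to a telnet port; it is not evidence that the connecting party authenticated or accessed anything.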

From the very outset of Bini’s arrest at the Quito airport, there have been significant concerns about the legitimacy of the allegations. The prosecution has tried to portray the use of Tor as inherently suspicious. And as journalist Diego Cazar, who wrote a book about Bini's case, has underlined, much of the theory of the case is based on Bini's appearance, friendships, books, and the flimsy accusations the former Ecuadorian Ministry of Interior made almost four years ago to detain him (a detention later ruled illegal). Human rights groups observing the hearing have also emphasized the flimsiness of the evidence.

It is perhaps not surprising that Bini’s arrest and detention were also fraught with a litany of due process violations. For example:

  • A habeas corpus decision found his initial detention illegal, although the investigation continued after his release—seeking evidence to back the alleged accusations against him. Problems continued, and as the delays dragged on, the Office of the Inter-American Commission on Human Rights (IACHR) Special Rapporteur for Freedom of Expression included its concern with the delay in Bini’s trial in its 2019, 2020, and 2021 annual reports. 
  • Bini's defense claimed he had been facing continuous monitoring by members of the National Police and unidentified persons and a judge requested that the government provide information about the alleged surveillance. When they did not, the judge concluded that they had unduly denied such information to Ola Bini, failing to offer a timely response to his previous information request. 
  • The judge Yadira Proaño, who oversaw Bini’s pre-trial hearing and determined that the prosecution could proceed with the criminal prosecution, was later "separated" from the case in a ruling that admitted the wrongdoing of successive pretrial suspensions and the violation of due process. 
Computer Security Expertise Is Not A Crime

Overly politicized “hacker panic” cases, which encourage unjust prosecutions when the political and social atmosphere demands it, are not new. EFF’s founding was due in part to a notorious case pursued in the United States by the Secret Service. Our Coders’ Rights Project has worked for decades to protect the security and encryption researchers who help build a safer future for all of us using digital technologies. Bini’s case is, unfortunately, part of a longstanding pattern of unfair criminal persecution of security experts, who have been subjected to the same types of harassment as those they work to protect, such as human rights defenders and activists.

Ola Bini's detention has received international attention and shed a light on the increasing harassment of security experts in Latin America. We look forward to the conclusion of this trial, and hope Ola will be given the fair treatment and due process his case deserves.

Jason Kelley

Podcast Episode: Don't Be Afraid to Poke the Tigers

1 week 4 days ago

What can a bustling electronic components bazaar in Shenzhen, China, tell us about building a better technology future? To researcher and hacker Andrew “bunnie” Huang, it symbolizes the boundless motivation, excitement, and innovation that can be unlocked if people have the rights to repair, tinker, and create. 

Huang believes that to truly unleash innovation that betters everyone, we must replace our current patent and copyright culture with one that truly values making products better, cheaper, and more reliably by encouraging competition around production, quality, and cost optimization. He wants to remind people of the fun, inspiring era when makers didn’t have to live in fear of patent trolls, and to encourage them to demand a return of the “permissionless ecosystem” that nurtured so many great ideas. 

Huang speaks with EFF's Cindy Cohn and Jason Kelley about how we can have it all – from better phones to cooler drones, from handy medical devices to fun Star Wars fan gadgets – if we’re willing to share ideas and trade short-term profit for long-term advancement.


This episode is also available on the Internet Archive.

In this episode you’ll learn about: 

  • How “rent-seeking behavior” stifles innovation. 
  • Why questioning authority and “poking the tigers” of patent law is necessary to move things forward. 
  • What China can teach the United States about competitive production that advances creative invention. 
  • How uniting hardware and software hackers, fan fiction creators, farmers who want to repair their tractors, and other stakeholders into a single, focused right-to-repair movement could change the future of technology.  

Andrew “bunnie” Huang is an American security researcher and hardware hacker with a long history in reverse engineering. He's the author of the widely respected 2003 book, “Hacking the Xbox: An Introduction to Reverse Engineering,” and since then he has served as a research affiliate for the Massachusetts Institute of Technology Media Lab and as a technical advisor for several hardware startups. EFF awarded him a Pioneer Award in 2012 for his work in hardware hacking, open source, and activism. A native of Kalamazoo, MI, he holds a Ph.D. in electrical engineering from MIT and lives in Singapore.


Music for How to Fix the Internet was created for us by Nat Keefe of Beatmower with Reed Mathis. This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by their creators: 

  • CommonGround by airtone (c) copyright 2019 Licensed under a Creative Commons Attribution (3.0) license. Ft: simonlittlefield
  • Probably Shouldn’t by J.Lang (c) copyright 2012 Licensed under a Creative Commons Attribution (3.0) license. Ft: Mr_Yesterday

Additional beds and alternate theme remixes by Gaëtan Harris.


SFX: a loud, exciting, vibrant market in China selling electronics

Andrew “bunnie” Huang:
It's like if you went to a wet market, like a bazaar kind of thing. But instead of pork and beads and art or whatever, it's just selling every variety of the electronic animal. They just carved it up into little pieces and you could buy the different cuts and put them together and make your own electronic roast at home at the end of the day, right?

And I remember standing on the bridge and just being like, God, what I would give to have every top US lawmaker just stand here for 30 seconds and take in the magnitude of economic activity and excitement and the energy of the scene. It's like nothing else that I've ever experienced. It's so much energy. There's so much motivation, so much excitement, so much potential. Everywhere I looked, every corner had a different surprise. Every corner had someone doing something new and interesting, finding a different way to do a different thing. Every single time I went back to the market, I was just gobsmacked by the things I would see there.

Theme music starts

Cindy Cohn:
That’s Andrew Bunnie Huang. He’s talking about the former electronics market in Shenzhen, China – a place where, to be frank, the American approach to patents and copyrights doesn’t really apply. Instead, it’s a place where people copy, tweak, improve, and modify actual stuff like phones and other electronics and then sell it openly, without serious fear of punishment. Is that a good thing? Well … maybe.

I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

Jason Kelley:
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy. This is our podcast series: How to Fix the Internet.

Theme music ends

Cindy Cohn:
The idea behind this show is that we're trying to fix the internet. We're trying to make our digital lives better. You know, EFF spends a lot of time talking about all the ways that things can go wrong and jumping into the fight when things do go wrong, but what we'd like to do with this podcast, for all of us, is to give us a vision of what the world looks like if we get it right. That of course includes everything from the way our networks work to actually the way our physical devices work and how they’re built and innovated on.

Jason Kelley:
Our guest this episode is Andrew Bunnie Huang. He’s a security researcher and hardware hacker with a long history in reverse engineering. Nearly two decades ago, he wrote a widely respected book, Hacking the Xbox, and since then he served as a research affiliate for the MIT Media Lab and a technical advisor for several hardware startups.

Bunnie, thank you so much for joining us. Let's start off with my phone. I have a decent phone. It's not the brand new model, but it's pretty new. It can do a ton of stuff, but what can't it do? What's wrong with the phone that I have that you would like to fix or change?

Bunnie Huang:
Well, if you start with an iPhone, it's pretty locked down. There's a lot of things you can't do with it. You have to have Apple's permission to essentially put software on it.

I actually myself routinely avoid Apple phones because I find them really frustrating to use, but I had a little bit of experience trying to, for example, do some stuff with like GPS and some sensors on it. I can't get raw sensor data. 

I was getting some sanitized versions of the GPS logs, I could tell, that didn't make sense for what I was putting into it, and it was really frustrating to me. That's an example of something that you can't do with it that I'd like to do with it.

There's a whole bunch of other things that are difficult to do with it. It's hard to fix them, it's hard to repair them. It's hard to reuse them in any particular meaningful way. It's hard to incorporate them into another product. There's a whole bunch of things that are problematic with the phone as it is today.

Cindy Cohn:
Our dear friend Peter Eckersley, who passed away recently, called Apple the crystal prison. It's really beautiful. It's really shiny, but there are hard limits on what you can do if you are the kind of person who wants to innovate and take things in a direction that Apple doesn't want you to go.

Bunnie Huang:
That's a great metaphor, crystal prison.

Cindy Cohn:
What's holding back this kind of innovation? What's in your way?

Bunnie Huang:
Well, there's a whole bunch of things that prevent us from doing the things we want to do with it. I guess first I want to frame it by thinking about some of the things that could be different if we could do the things I want to do with it.

Cindy Cohn:
Oh, excellent. Let's look at this shiny world that Bunnie could build us.

Bunnie Huang:
Well, not just me, anybody. That's the exciting part about it.

I think actually a really good real world example of what could happen if phones became more open actually does exist. If you look at the company DJI, the one that makes the drones out of China, the early drones they made, if you took them apart, were basically smartphones with four brushless motors on them. In Shenzhen at the particular time when they were starting up, the whole kind of Shanzhai movement, where phones were being copied and the plans were out there and the circuit boards were being circulated, was very prevalent, and those parts were very available.

You could basically rip, mix, and burn with the pallet of cell phone ingredients. So when these people were presented with a challenge of we need something that's very lightweight, battery powered, powerful, has a good camera, has a full feature software stack on the inside, instead of having to go and build everything from the bottom up and deal with all sorts of stuff, they were basically able to rip, mix, burn, take out portions of the cellphone guts, put them into a lightweight frame, put motors on it, and they had a revolutionary new drone that took the world by storm.

Now, the inverse story of that is, some hardware startups are doing things like medical diagnostics and it would be really helpful for them to, in low volume, be able to create a diagnostic device. For example, they can stick something in your mouth that has a camera on it to look for disease and these sorts of things.

These beg for basically a smartphone with a nice camera on them, but all these startups, particularly when they come from the west, are stymied by the inability to go ahead and take these components and incorporate them into their devices. They're actually having to backtrack it. They live in the country that ostensibly has the rights to produce the world's best iPhone, but they go to China to go and figure out how to access that technology to put it back into the products they're developing in America. It’s this weird, bizarre, why are we running across the entire globe to go ahead and do this? What are the factors that brought us to the point where this becomes the accepted normal thing to do? That is the counterpoint to the example you see that happened with DJI, for example.

Jason Kelley:
Well, I see that you're coming at this from a really unique perspective, as sort of a builder. I can see how, as you're creating a product or a piece of hardware, having access to all these different facilities and potential tools is really helpful. But I think a lot of us come at this from the perspective of, frankly, a user. I don't want to say consumer, but a user. As an example, you mentioned IoT devices. When I'm trying to buy something that will turn on my lights or something like that with my smartphone, what I want is something that connects to as many protocols as possible so that I know that in the future I'm not limited, but companies don't tend to do that. It seems like what they want is to put you in that crystal prison. What's the benefit to them and what's their thinking?

Bunnie Huang:
To me this is classic rent seeking behavior. Rent seeking behavior is one of those terms that I actually didn't understand fully until I read the Wikipedia page on it. I'll just say my version, how I understand it out loud to make sure we're all on the same page. 

The example I read was that you have a river and boats are going through it and doing commerce, and then someone gets the bright idea that they can put a chain across the river and then charge people for removing the chain. The person who has put the chain across the river is collecting rent from the river, but not adding any particular value to the river. The whole economic value they bring is actually removing obstruction that they introduced in the first place.

By definition, that type of behavior is the most profit you can generate for the least amount of effort. In an economic system that rewards maximized profits for minimum effort, rent seeking behavior is really, really typical and very common. The whole idea of locking people into devices, you can essentially make much more money, much more value, extract much more rent out of a resource by erecting barriers rather than simply allowing commerce to travel down the river.

Cindy Cohn:
Yeah, I think that's all true. Look, there's a benefit to creating a zone of scarcity. That's why it's in the American Constitution that in order to promote sciences and useful arts, we're going to have a patent system. We understand that some scarcity, some limiting, can help support innovation. Then the question is, what's the right timeframe, or scope to make that all work?

I think that at least from our perspective, we would never say that there ought to be no ability to control something, but I think that the sense that I'm hearing from you is that we're actually overprotected and now innovation is suffering as a result of overprotection, but the reason that we put it into the Constitution is the idea that we recognize that some protection will help.

Bunnie Huang:
If you're going to take a risk, you should be able to have a reward. Whatever, if you're the person who cleaned up the river so that people who go through it, maybe should have a period of time where you can collect rent and recover that investment. That totally makes sense. It gives you incentive to go ahead and make ways more passable for other people, at the end of the day. 

The problem is that it just turns out that one of the most high reward activities you can do is to take that rent, invest in extending your lease longer, effectively than was originally intended. There's a certain amount of time it takes to go ahead and collect the investment and make it profitable and give you incentive, but then there's this huge extra time that's been added over and over again by revising the laws and pushing that out longer.

If you look at the original time that was allotted in the Constitution, it's much shorter than the current limits today. I would be a fan actually of this original interpretation of what the extent should be. The problem is there’s different categories of investment. There's some things that genuinely do take 10 or 20 years to really come to fruition, so you may want to seek protection and really commit to a long slog, but there is a ton of stuff, particularly in technology, where 20 years is an eternity.

Musical sting

Cindy Cohn:
We've been talking a little bit about patents; we've talked about copyrights. There's also section 1201, the anti-circumvention provisions, and the Computer Fraud and Abuse Act and contractual terms. There's an array of laws that turn out to be rent seeker protection acts... Am I right? Am I listing all of these? Obviously we represent you in a little case called Green v. DOJ that's about the anti-circumvention provisions. But can you talk a little bit about how these laws are creating obstacles to cool new phones and other things you might want to build?

Bunnie Huang:
Yeah, when you were first talking about why we can't load stuff onto iPhones, it's not specifically copyright in that sense but section 1201 that really puts a chill on a ton of innovation. In fact it is incredibly chilling, because as an entrepreneur, as a person who's just into technology, or an engineer, or a geek hanging around, the threat that you could be exposed to legal liability is often enough to scare these people away.

And someone goes ahead and starts doing maybe what they naturally do, figuring out how to go ahead and jailbreak an iPhone and put some application on it, whatever it is. Then one of their friends tells them, "By the way, do you realize there's this law? You can read it just as well as I do, and it's these penalties and these fines and you go to jail and there's all sorts of stuff."

And so when people really start thinking about it, they're like, man, I could innovate, or I could just go home and not get involved in this whole mess and walk away from it. I remember when we were working on an open hardware startup called Chumby. We were trying to build this little device before the iPhone existed that could stream content and that sort of thing. YouTube was around, it was a thing, but the content was not coded in the right format for us. It was too heavy for our little device. We're like, oh, but we could transcode it, we can buy servers, we can do certain stuff. Then someone's like, "But the DMCA." Then we were just like, "Oh, but you're right, and it would cause so much trouble and we don't have the money to fight it, and we're just a little startup." Oh, I guess we won't have videos on our product. That's like classic chilling effect.

Maybe today people will point out and say, "Well, people transcode stuff all the time and it's fine, whatever." But very large companies with lawyers have taken the risk and done this sort of stuff. All these little innovators essentially, were scared away out of the pool.

Cindy Cohn:
I think that's right, and these are the people that EFF helps, but of course we can't help everybody. But with this idea that innovation can only come from a big company that can afford to negotiate and pay for a lot of terms and all of that, we leave a lot of stuff on the cutting room floor.

Bunnie Huang:
Absolutely. I think the cycle does have a tendency to repeat in every ecosystem, because what happens is that when you go to a greenfield, everyone's a small player and everyone has a fair shake, but then the first one to rise up and get big realizes that they came from nothing, so could their competition. So they decide we should use this legal system to go ahead and prevent competition, essentially. The best use of our money is not actually to plow more money into engineering and R&D, but actually to go ahead and create a barrier for the competition.

For example, when Japan was coming up, they copied a lot of American technology to create the first transistor radios and the first cars and whatever it is, and they were maligned for being copycats. Now they love IP protection. They love those laws because it keeps everyone else down the chain and keeps them from competing their way up; it takes on different forms wherever it is.

And the challenge is how do you strike that balance between allowing new greenfield innovation, but still obviously protecting and keeping big companies around because they employ people and they're an economic engine and they have a place too. I obviously am more on the greenfield side of things. I'm not saying you should obviously just break up all the big companies or whatever it is and smash all the walls or whatever it is. I'm not that crazy, but I would definitely like to see a lot more barriers reduced, particularly for small innovators, people getting started. The biggest question I get from a lot of people all the time is, "How do I even get started?" When they look at the litany of potential legal problems and barriers facing them of getting a product into the US market, it's really daunting and really discouraging. 

Musical sting

Jason Kelley:
You've mentioned China a bit, and I know you've talked in a variety of places about China being, at this moment, a place where innovation can happen. I wonder if you can just give us a sense of what that looks like and maybe how that fits into what you would like to see in a different global system overall.

Bunnie Huang:
In the Chinese ecosystem, the thing that struck me about it the most was that before I went to China, I was taught in the traditional American legal sense that people will not take risk to innovate unless they're promised a monopoly reward. That was just gospel, motherhood and apple pie to me. Why would I do anything if I don't have patents? I got out of college thinking we should have lots of patents and all this sorts of stuff. This is a good thing. 

Then I landed in China and I was told it'd be a fishing village and it'd be destitute and all sorts of stuff. All these people were telling me. I landed and I was like, holy cow, this is a modern city with the bustling ecosystem of people. I'm looking around and everyone's building a smartphone at a time when smartphones were really hard to build. It was just all over the place. I'm like, why are people bothering to take the risk to even do anything when it could all be stolen? It really flew in the face of that profound belief that I had put into me about the legal system when I was much younger. 

That really got me. It bothered me, so I started just yanking on that chain and digging deeper and deeper. What is the mechanism that allows people to recover their investment in a system where you don't have strong patents, you don't have strong copyright protection? It boils down to fundamentally a cultural values thing, I think. China is a communist country, and in the communist set of values, workers are glorified. Blue collar is not a stigma.

When I first went to China, parents would literally say, "I hope my child gets a job at a factory. It'll be a better life for them." "I hope people produce things and build things and own factories." As a result, because of this sort of very pro-factory stance, lots of people owned factories. There's lots of means of production, and the way that you made money, the way you produced wealth was through production. It wasn't through rent seeking or barriers to IP, it was gaining customers and what is the best way to gain customers, but to go ahead and just interop and share as much as you can?

Basically, if you went ahead and you came up with a little module and you said, "Here's the module, anyone can produce it. Here's the specs of how to integrate it, but my factory will make it the cheapest of everyone else. My know-how, my IP so to speak, isn't so much about the design, but actually the production, how to make it better, how to make it cheaper, how to make it more reliable, more desirable for you as an end product. I'm just going to put it out there and I want everyone to buy and put it into their stuff." That became the de rigueur in China. That's what everyone essentially ended up doing. It became a very competitive system around production, optimization of costs, quality, these different factors. That became the value, not so much around the design. 

When you look at it from that perspective, the ecosystem makes a lot of sense, and it only works in a system where people really want to have factories, want to be producing stuff, want to be providing that service in the first place versus an ecosystem where the ideal job is you have a window office, you kick back and you watch some green lines on the screen ticking up or something like this, and shareholder profits, that sort of thing. You're not getting your hands dirty on the factory line. Those are two very different outcomes you see at the end of the day.

Jason Kelley:
Could we move that ethos you're describing into the US in some way? 

Bunnie Huang:
That boils down to the question of where a jury of peers would come down in a case, for example. If you say, I went ahead and I built a factory in the United States, and it's incredibly successful and it employs a lot of people, but I violated a couple patents along the way. Now I'm in front of a jury of peers, and they look at me and they say, "Okay, well why'd you do this?" "Well, we had to create a factory making jobs, and this other person here, they didn't do anything. They actually just bought their patents from a troll or whatever it is, and they're coming after us. If you go and shut us down, then we won't have these factories and jobs."

If those people say, "Ah, our values are really on your side, we're going to go ahead and say the patent system is wrong and decide in your favor," then I think you would see more of that culture coming around. But unfortunately, generally these cases don't go that way. They don't cut that way for whatever reason. 

Whereas oftentimes in China, when you have disputes over stuff like this, the government has a fairly pragmatic look in it.

Cindy Cohn:
I see this. We have this culture in America. It's the right to repair culture. When I was a kid, we called them the gear heads, the people who would pull their cars in and open them up and change out the systems to a system that they liked. It's funny because this framing is very interesting because it's almost class-based framing, the blue collar, roll up your sleeves, fix your own tractor. We have a huge problem now with John Deere and not wanting people to repair their tractors. The right to repair movement is really gaining steam because there's something profoundly feeling very un-American about the idea that you can't fix your own things, and that there's something I think really powerful there that is building.

I see that as the ethos that you're talking about quite a bit, that the innovation is not in the idea space for the person who has one good idea and then just lives off of it and their family lives off of it for generations. To me it's an old school traditional way of thinking about things that ought to be a very American way of thinking about it, because people on frontiers had to fix their own stuff.

Music interlude

Jason Kelley:
I want to jump in here… for a little mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

So… a tip of the hat to them for their assistance.

Cindy Cohn:
What does it look like Bunnie? Let's say we get rid of all the barriers that are getting in your way and all the other tinkerers out there, all the other people who want to build the cool new things. What do we get? What does our world look like?

Bunnie Huang:
I mean, I think in an ideal world, if we could rewind a whole bunch of bad decisions that were made, we would end up in a situation where the stuff that’s in our phones today is exactly equivalent to what we're getting right now in an Arduino or Raspberry Pi. The stuff that we have access to at the maker level is essentially identical to the pro level. The reason why I think that's not an unrealistic ask is that if you look at the last N generations of phones, there hasn't really been a lot of change in fundamentally the core specs or anything like this. And so that to me indicates that the playing field could level. We've arrived at the point, and there's just artificial barriers preventing us from getting there.

People talked a lot about these digital fabrication revolutions and building stuff at home. A lot of it didn't materialize. Part of it is that the physics is hard, but a lot of it didn't materialize also because a lot of the sub-modules and sub-components that you would want to have available aren't there. You can't just snap the electronics you want inside of the case you just 3D printed. The best you could do is print a case for an existing phone, as opposed to taking the guts and remodeling them on the inside.

And the corresponding thing that would come with an ecosystem that provided these modules is you would actually also have lots of people who knew how to customize them. Anyone could build it, anyone could service them. We would see much more custom shapes and sizes, much more interesting things that you wouldn't even expect to come out.

There's a whole bunch of examples of these little sorts of gee whiz hardware startups that came and went, the GoPro and the Fitbit and all these sorts of things, but I think they would've been very different, much more interesting, much more integrated, much more exciting if we had more modules to play with at the end of the day. If you actually walk into a biology lab, you'll notice a lot of the equipment has really old school displays on it. They're not connected to the internet, but they're ungodly expensive, right? Those things would all be modern and cool looking, and research would be happening at a faster pace with lower costs. In a way that whole economic base would be lifted up and you would see a lot more interesting things.

You would see a lot of quirky things come out too. People would be like, I'm just really into karaoke or something like this, and so I'm going to build a karaoke microphone into my umbrella, so I always have it with me when I'm traveling or something like this. This is actually a product I saw in China. But it's one of those things that you can do if you're just really into technology and all the production base is there. Everyone has a little itch to scratch, can go ahead and scratch the itch without any sort of barrier to block it.

Cindy Cohn:
I would love to have my phone be able to talk to my car better. Car interfaces are horrible. 

I love devices. I want the devices to work for me and I want to unleash all the people who have the technical knowledge to build versions of these things that fit what I want, not a cookie cutter set of things. 

The other piece that I think is really important is we're ending up with the worst of both worlds here. We have IoT stuff that's tremendously insecure. We have a lot of this stuff that's horribly insecure, and people could actually build products that fill some of the holes in our current ecosystem. Right now, the reason for some of this regulation is often in the name of security, but it's not actually making us more secure. Instead we end up with things that don't work very well, things that don't talk to each other very well, and then we also don't get the benefit of stuff being more secure.

Bunnie Huang:
I think there's also just an interesting cultural aspect that could rise out of it, particularly if, in our magical world, you also reformed copyright. You could just unleash the fans essentially on all the media and stuff, and you would have just weirdo bespoke little outfits and gadgets that mimicked what was in the movie, and people would have these awesome things that rip, mix, and burn off of the Star Wars franchise, making these things that look Star Wars-like and making a living off of selling gadgets that were inspired by science fiction. But you can't do that right now. If you wanted to go ahead and take something you saw in Star Wars and build something like that, something that performs what you saw, or Guardians of the Galaxy or Star Trek or whatever this thing is, someone will definitely come after you, and you'll have some very expensive lawyers coming after you, so you're scared away from doing that sort of thing.

But I think fundamentally actually, that type of permission to innovate is what gets everyday people more excited, I think, more interested in technology. When we talk about securing IoT, “ah, that's just for nerds,” but my favorite fan fic or whatever it is can now be built and I can buy it and I can play with it. I can modify it. I can mix it with this other franchise and do these types of things. People really get into that sort of stuff and that pulls more people into the technology conversation and that just strengthens your overall technology base as opposed to just being spoon fed whatever the product marketing guy came up with for Black Friday. 

Jason Kelley:
We had Adam Savage on last season, and he said some of the exact same things that you're saying, literally about the same franchises, about the movement to tinker and the right to tinker. Here's what he said about that:

Adam Savage clip:
"One of the things that I have also encountered is, I once played poker with the head counsel for Lucasfilm, and he was telling me, this is 20 years ago, but he was like, "We know everybody who's making stuff out there. We know all the stormtrooper costume makers. We know who's making what." He said, "We're not interested in stopping someone from making them. We just don't want someone to turn it into a multimillion dollar business." Which, fair enough actually, but they wouldn't ever say that publicly, and therein lies the problem."

Jason Kelley:
Do you see a movement towards this? Because I think we do on our end, a movement towards this idea that we should be able to tinker in these ways. I think that's what we're fighting for, but do you yourself see that as you do the work that you do?

Bunnie Huang:
I hope to God there is a movement for that. Otherwise I'm wasting my time. 

The biggest challenge I see for that movement really catching on is actually reminding people that there was a time when this was legal and that actually was okay. The problem I have now when I talk to some people is they just assume this is how it always was. Somehow in 1770, the Constitutional framers were like, “this is the world we envision and this is exactly how it was supposed to come out, and that is great.”

Cindy Cohn:
The US was a copyright piracy nation. You're exactly right.

Bunnie Huang:
No, we took all the great technology from Europe and steel furnaces and whatever it is, and improved them and built railroads and all that sort of stuff. A lot of that stuff came from over on the other side of the pond, just like is happening, people taking stuff to the other side of the pond from the United States. It's a cycle that repeats itself, but I was born early enough that I remember a time before the DMCA. At that time innovation was permissionless. When you took off the cover of something, you expected to see a schematic on the inside. That was just a given. You had to fix stuff yourself, and you were just putting stuff together. 

There were magazines and journals that went around, and people were excited to share how to put things together, and the source code was just printed on pages of paper. There was no copyright particularly, I mean, obviously there was the native copyright bestowed in print, but the whole thing about the GPL and lines of code, no one had to talk about it. No one even really worried about that at the time. People just typed it in and ran the code, and then they shipped a product based on it. It was a great time to be in. 

And that kind of permissionless ecosystem was a lot of fun. It was really inspiring and really interesting to be able to tinker with my hardware in that way. I really miss it. I really feel like that's one of the things that is going away from the world today. People today, not even just kids, but people who are even just 20 years old or just getting out of college, they never saw that world. They don't understand a world that exists in that way, and that lack of knowledge of how big the horizon can be, the fact that they were always in that well, makes them not dream big. That is the biggest risk actually to the movement, is that these people who are coming out don't dream big enough, don't see a world that could be when they have all sorts of stuff.

As I get older and older, people think of me more as that weird curmudgeon who's standing on the lawn and shaking my fist at kids these days and that sort of stuff. My challenge is to try and find ways to connect with people and inspire them and get those cultural values to return to where it was before. I think that's fundamentally how that movement maintains steam and keeps going, right?

Cindy Cohn:
I just think that's so beautiful. I think that the thing that we get, if we get this right, is we get people's imaginations bigger. We get people to dream bigger and think of bigger things they could do because they're going to be able to do it. I also hear, what I really love is, there is actually a pretty big movement, but we've kind of separated into pillars. Hardware hacking feels separate from software hacking, which feels separate from fan fiction, which feels separate from the kinds of crazy innovation stuff that Adam Savage does where he brings wookies to life and has them do things. But they're all the same story.

If we count all the different pillars up, I think we have a pretty good-size movement. Maybe our work at EFF, but also our work together as people who think about this, is to try to tear down all of those phony walls because I think if you add all of us together, the farmers that want to fix their tractors, we actually are a pretty big group of people, but we've been segmented in a way that I think isn't serving us.

Bunnie Huang:
Yeah, yeah. No, I agree. That might be part of a deliberate strategy. I don't know. Maybe not deliberate, that's a little too conspiracy. It's a consequence of the adversaries trying to split our groups and getting us hyper-focused on these demons, these very high-risk legal cases. As individuals, what happens is that you get scared of the thing that growls, and you don't look at the world around you and all your friends are with you. When the tiger growls, you're looking at the tiger that's growling. By just creating these growling tigers around the innovators, they're getting them to look and be distracted in different ways and not band together and not actually see the bigger picture for these things. 

Obviously, I'm not a lawyer, this is not legal advice, but I think that a lot of this is just tigers growling. People should not be so afraid to go ahead and poke the tigers, at the end of the day. I've been poking tigers my entire life, and I still have all my fingers and toes. I might lose them one of these days, but I think my life is better for feeling okay to do that. That lack of imbuing of that value to go ahead and question authority and to look for allies and to grab all the means at your disposal to go ahead and innovate is missing somewhere from that ecosystem.

I don't know where we put it back in. I know a lot of people are trying to figure out how to put this back into the ecosystem, but definitely I feel like every university should at least have some mandatory course on both ethics and law, essentially. Not formal law, not making lawyers, but practically speaking, this is how it's going to go down when you get your first demand for payment on a patent. You just basically say, "No, I'm not going to pay it. You guys are trolls and it's going to be fine." 

Jason Kelley: 
It’s a very short course.

Cindy Cohn:
Well, and EFF is around as well, and we're happy, with our Coders' Rights Project and other things, to help guide people through it. Of course Bunnie is a frequent flyer EFF client. But we're just one little organization compared to the size of the problem. I would love to see more classes on fearless innovation in places. Well Bunnie, we really, really appreciate this conversation. It's been great fun.

Jason Kelley:
And very inspiring.

Bunnie Huang:
Thanks. Yeah, I'm glad I was able to make it onto this show. I thought we were just getting started. Thank you very much for having me, it was great fun talking and connecting.

Music interlude

Jason Kelley:
Well, that was a really great conversation with a lot to unpack. And I want to ask you, Cindy, of everything that Bunnie talked about, what's the one thing that sticks with you that you're going to tell people later today? You go out for dinner and you think, “You know, I interviewed Bunnie Huang today, and here's something that he said that really struck me.” What would that thing be?

Cindy Cohn:
I think his vision of the market in Shenzhen is really compelling. And what I appreciate about it is, he's seeing what to somebody else's eyes might look like just a wall of piracy, and he's seeing the innovation behind it. He's seeing what happens if you free people up to rip, mix, and burn their tools. They're going to build a whole bunch of cool things, and you get a cornucopia of ideas – some good, some not so good – but different ideas that emerge when you open that up. It's something he saw visually by going to that market: the excitement of what we could do if we got all these barriers out of the way and the kinds of innovation we could open up. I think that vision is the big takeaway. 

I would cheat a bit and add a second one, and the second one is this idea that all of the various fights about freeing up innovation are actually connected, whether you're writing fan fiction, or you want to fix your tractor, or you want to build hardware from scratch. If you add up all of us who are trying to innovate in that space, there are a lot of us, and we could be a stronger political and social force.

Jason Kelley:
That's exactly what stuck with me. You have the kind of car culture of the ‘50s, or something like that, that you can, I think, compare to the maker movement. We've talked about that before, but it never really occurred to me that when you combine all these movements, there are probably more people at this point who care about this than ever cared about it 75 years ago. Back then it was a smaller thing in some ways, because these laws didn't touch on every single aspect of the work that we do.

Cindy Cohn:
I think that's right, and it's core to the adversarial interoperability work that we've been doing, or competitive compatibility, but it also reminds me of the conversation we had with Anil Dash last season, where he was talking about the K-pop kids building a version of Heardle or other kinds of things. Again, it's not just the technical side. It's the cultural side as well, where we really will see an explosion of innovation if we get some of these barriers out of the way.

Theme music in

Jason Kelley:
Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at or check out the EFF website to become a member or donate.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at 

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. 

We’ll see you again in two weeks.

I’m Jason Kelley

Cindy Cohn:
And I’m Cindy Cohn.

Music fades out

Josh Richman

For Would-Be Censors and the Thin-Skinned, Copyright Law Offers Powerful Tools

2 weeks 1 day ago

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.

Yesterday, we wrote about the importance of fair use as a safeguard for free expression. But all too often, fair use and other legal limits on copyright are not enough to stop copyright enforcement from serving as cover for silencing critics.

Time and again, we see copyright claims getting textbook fair uses erased from the internet, taking particular advantage of the Digital Millennium Copyright Act’s (DMCA) takedown regime. One culprit, the ironically named No Evil Foods, went after journalists and podcasters who reported on accusations of union-busting, claiming copyright in a union organizer’s recordings of anti-union presentations by management.

Whether the presentations were even copyrightable was doubtful. And even if they were copyrightable, using such material to verify and strengthen news reporting is a textbook example of fair use. The public not only has an interest in this information; being able to hear the sources also helps us determine for ourselves how accurate the reporting is. By trying to silence critics using copyright, No Evil Foods was setting itself up for a lawsuit for its bad-faith use of the takedown system. So we sent a letter telling them to knock it off, explaining all of this in clear terms. The takedowns stopped after that.

In other cases, we see copyright claims invented out of thin air—the takedown target didn’t even use any copyrighted material from the claimant. In 2020, Nebraska’s Doane University used a DMCA notice to take down a faculty-built website created to protest deep academic program cuts, claiming copyright in a photo of the university. One problem: that photo was actually taken by an opponent of the cuts, specifically for the website. The professor who made the website submitted a counternotice, but the university’s board was scheduled to vote on the cuts before the legally required waiting period would expire. EFF stepped in and demanded that Doane withdraw its claim, and it worked—the website was back up before the board vote.

A few months earlier, we saw a self-described Twitter troll using the DMCA to remove tweets about an interview he did because he did not like the results. Then, when his target tweeted about the takedown, he used the DMCA to remove the photo of the takedown notice.

And in one exceptionally egregious case, the U.S.-Nigerian investigative news agency Sahara Reporters became the target of a campaign of surveillance, cyberattacks, and DMCA takedowns meant to frustrate their critical journalism activities. There, the perpetrator copied the text of Sahara Reporters’ own article, reposted it in a back-dated blog post, and sent a takedown demand to their website host. Our lawyers were able to jump into action to represent them in filing a counter-notice, but their experience shows how easily copyright can be used in coordinated attacks on free speech and political activity.

Apart from bogus DMCA takedowns, bad-faith actors also use copyright lawsuits as a way to unmask anonymous critics, whether to identify them for retaliation or to intimidate them into silence. Because the DMCA gives platforms incentives to turn over users in order to not be the targets of lawsuits, it’s a great tool for unmasking anonymous criticism. In one case, EFF helped a Redditor win a fight to stay anonymous when Watchtower Bible and Tract Society, a group that publishes doctrines for Jehovah’s Witnesses, tried to learn their identity using copyright infringement allegations.

You may be wondering, why is copyright law so appealing as a censorship mechanism? A key factor is how easy it can be to exploit. Typically, U.S. law erects high barriers for restrictions on speech. But copyright is an outlier. The DMCA’s notice-and-takedown framework gives rightsholders (and anyone who claims to be a rightsholder) tremendous leverage to have content taken offline based on no more than an email or a web form submission. (The rise of automated copyright filters makes this easier than ever.) That copyright holders have a legal tool to get speech removed from the internet without ever setting foot in court is an anomaly in our legal system and an extraordinary advantage. No other area of law gives aggrieved parties this type of leverage to obtain extrajudicial resolution of their complaints.

The law does provide ways for internet users to fight back against abusive takedowns, including counternotice procedures and lawsuits for bad-faith takedowns. Yet the shortcomings of these options—waiting periods, high burdens of proof, the expense of litigation—mean that plenty of incentive remains for DMCA abuse. At EFF, we’re doing what we can to change that. If you think you’ve been the target of abusive copyright claims, reach out to us at

Cara Gagliano

Right to Repair Advocates Have Had Good Victories. We Have To Keep Fighting.

2 weeks 1 day ago

It’s been a good year for right to repair advocates. Colorado passed an important law to allow wheelchair users access to resources they need to fix their own chairs. The Federal Trade Commission has stepped up enforcement of companies that limit the right to repair. And New York made history by passing the first broad consumer right to repair legislation at the end of 2022, requiring some digital electronics manufacturers to provide access to parts, tools, and information necessary for repairing their products.

Thank you to everyone who wrote in to support these bills, and especially to our allies in the Repair Coalition who lead this fight. Despite these wins, however, it’s important that those who care about the right to repair keep pushing to build on these steps. While there are many victories to celebrate, there is still a long way to go, and the hard-won fights for these steps forward have exposed just how much opposition there is to the basic idea that you should be able to tinker with your own stuff.

Take the New York law, for example. While it is indisputably a milestone, the law signed by Gov. Kathy Hochul took a huge step back from the version of the bill that had passed both houses of New York’s state legislature. It was significantly weakened at the last hurdle. Why? The Times Union (Albany, N.Y.) reported that TechNet, which represents tech industry groups, launched a targeted lobbying assault on the governor, asking her to veto the bill or to modify it to exempt specific types of companies from being covered under it.

They succeeded in a few major ways. The bill passed by the legislature would have covered all digital electronics, such as phones, tablets, and IT equipment. The law, as modified by the governor, will only cover products made after July 1, 2023. It also walked back language from the bill passed by the legislature by excluding products sold under “business-to-government” or “business-to-business” contracts. That could mean that schools, hospitals, and other organizations that manage a lot of devices will not benefit from the law. There are also a couple of loopholes added to the law, such as one that allows companies to offer assemblies of parts rather than the individual parts. Manufacturers may see this as an invitation to circumvent the spirit of the law, by making consumers buy unnecessary bundles of parts rather than just the one they need.

Finally, the law also says companies don’t have to provide materials to bypass security features, even though bypassing such features is often an important step in the legitimate diagnosis and repair of electronic devices. This provision responds to debunked worries that allowing independent repairers to work on devices is a security risk. We’ve written before about why that’s nonsense. We urge lawmakers in other states who are looking at right to repair bills for 2023 not to fall into the same traps.

Companies know that the right to repair is popular, and the wins this year—especially in New York—show that advocates can rally people like you to tell lawmakers how important it is to the everyday person. Big firms are feeling the pressure. Microsoft, Apple, and even John Deere, which have all opposed the right to repair in the past, have bowed to pressure and made concessions.

Two things, however, show that we still need to push harder. First, voluntary company action is typically either done for public relations or is at best the product of compromise, and doesn’t address the problems people have. It can also come at a cost. For example, the John Deere right to repair agreement with the Farm Bureau doesn’t fix all of the issues farmers face and doesn’t do anything to foster competition for repair. It also contains a promise that, in exchange for these half-measures, the organization won’t support any right to repair legislation. Time will tell if John Deere follows up on its side of the deal this time.

Second, the incredible lobbying effort still mobilized against right to repair laws, as in New York, shows that companies will make public promises, but privately don’t want to be held to them. That’s why anyone who cares about the right to repair should take this year as a sign to keep on pushing. Your work is making a difference. We just have to keep going.

Hayley Tsukayama

Fair Use Creep Is A Feature, Not a Bug

2 weeks 2 days ago

Lawyers, scholars, and activists, including EFF, often highlight Section 512 of the Digital Millennium Copyright Act and Section 230 (originally of the Communications Decency Act) as the legal foundations of the internet. But there’s another, much older, doctrine that’s at least as important: Fair use, which dates back many decades and is codified in law as Section 107 of the Copyright Act. Fair use is, in essence, the right of the public to use a copyrighted work in a variety of circumstances, without the rightsholder’s permission. It’s why a reviewer can quote from the book they’re reviewing, a parody video can include excerpts from a movie, and security researchers can copy a software program in order to test it for malware.

Fair use is essential to the internet for at least two reasons. First, the vast majority of what we do online, from email to texting to viewing images and making TikToks, involves creating, replicating, and/or repurposing copyrighted works. Since copyright is a limited but lengthy monopoly over those works, in theory using or even viewing them might require a license, now and for many decades to come.

Second, technological innovation rarely means starting from scratch. Instead, developers build on existing technologies, hopefully improving them. But if the technology in question involves code, it is likely copyrightable. If so, that add-on innovation might require a license from the rightsholder, giving them a veto right on technological development.

As digital technologies dramatically (and sometimes controversially) expand the reach of copyright, fair use helps ensure that the rights of the public expand as well.

Examples abound. In 2021, for example, the Supreme Court held that Google’s use of certain Java Application Programming Interfaces (APIs) was a lawful fair use. While we argued that the APIs weren’t copyrightable in the first place, the decision gave more legal certainty to software developers’ common practice of using, re-using, and re-implementing software interfaces written by others, a custom that underlies most of the internet and personal computing technologies we use every day. Or consider Authors Guild v. HathiTrust, where the Second Circuit Court of Appeals held that fair use sheltered book digitization. Contrary to the complaints of rightsholders, neither decision has discouraged investment in new creativity.

Today, fair use is helping to defend the efforts of public interest organizations to share culture, ideas, and knowledge in ways that would never have been possible without the internet. In one case, at stake is the ability of librarians to make decisions about how to curate and lend the books in their collections. In another, at stake is access to the law.

In Hachette v. Internet Archive, four of the biggest publishers in the world are trying to shut down Controlled Digital Lending (CDL), which allows people to check out digital copies of books for two weeks or less and only permits patrons to check out as many copies as the Archive and its partner libraries physically own. That means that if the Archive and its partner libraries have only one copy of a book, then only one patron can borrow it at a time.

Supported by authors, libraries, and scholars, the Internet Archive has explained that CDL is a lawful fair use that serves copyright’s ultimate purpose: enriching our common culture. Through CDL, the Internet Archive is fostering research and learning by helping its patrons access books and by keeping books in circulation when their publishers have lost interest in them. Digital lending also allows patrons to borrow books without having their reading habits tracked by commercial entities, like OverDrive and Amazon, that may not share librarians’ traditional commitment to protecting privacy. Perhaps most importantly, it gives librarians the power to curate their own digital collections, just as they curate their physical collections. If the publishers have their way, however, books, like an increasing amount of other copyrighted works, will only be rented, never owned, available subject to the publishers’ whim.

In ASTM et al v. Public.Resource.Org, three huge industry associations are trying to prevent a tiny nonprofit, Public.Resource.Org, from posting online standards, such as building codes, that have been made into laws. Our laws belong to all of us, and we should be able to find, read, and comment on them free of registration requirements, fees, and other roadblocks. The industry associations insist that because they helped shepherd the volunteers who actually develop those standards, they own and can control access to those laws. As Public Resource explained to a federal appeals court last year, even assuming the standards can be subject to copyright at all, posting them online, for free, to facilitate research and comment, is a quintessential fair use. A lower court has already reached that conclusion, and we expect the appeals court will agree.

The lawsuits are ongoing, but these projects, and the benefits they create, might not exist at all if these nonprofits couldn’t rely on the fair use doctrine.

But even where a use is clearly lawful and fair, efforts to invoke it can be stymied by practical, technical, and legal barriers. Defending fair uses can be expensive. As Professor Larry Lessig once said, “Fair use is the right to hire a lawyer” – and many of us don’t have the resources to do that, nor access to pro bono counsel. Worse, rightsholders often rely on a combination of contracts, technical measures, and legal constraints to prevent or inhibit fair uses. In the gaming space, for example, vendors require users to agree to contracts that forbid them from using add-on services, and do not hesitate to sue third parties who try to provide those services. They put digital locks on games to prevent efforts to remix or even just preserve games for posterity. And if anyone breaks those digital locks, even for otherwise lawful reasons, they may face a legal claim under Section 1201 of the DMCA.

But this problem goes far beyond traditional creative industries. Manufacturers of everything from medical devices to tractors use the same tactics to prevent independent repair and competitive innovation that are otherwise protected fair uses.

As technology creeps into every facet of our lives, rightsholders will continue to look to copyright to jealously guard their legacy position as gatekeepers. Fortunately for the public, fair use has likewise grown to protect the original purpose of copyright: to encourage forward progress. And no matter what Hollywood or John Deere tells you, that’s a feature, not a bug.

Related Cases: Oracle v. Google; Hachette v. Internet Archive; Freeing the Law with Public.Resource.Org; Authors Guild v. HathiTrust
Corynne McSherry

Have You Tried Turning It Off and On Again: Rethinking Tech Regulation and Creative Labor

2 weeks 2 days ago

“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.” -George Orwell, Animal Farm

The Internet Copyright Wars are in their third decade, and despite the billions of dollars and trillions of phosphors spilled on their battlegrounds around the world, precious little progress has been made. A quarter of a century after Napster’s founding, we’re still haunted by the same false binaries that have deadlocked us since the era of 56k modems:

  • Team User v. Team Creator. Creators are users, and not merely because “everything is a remix.” Creative labor builds on the works that came before it. “Genre” is just another word for “works that share a common set of touchstones, norms and assumptions.”
  • Big Tech v. Big Content. Entertainment monopolies aren’t staunch defenders of the creative workers whose labors generate their profits (far from it!) and tech giants aren’t selfless liberators of oppressed artists stuck sharecropping for legacy entertainment companies (not by a long chalk!). No matter whether a giant multinational is a member of the MPA or TechNet, it has the same overriding imperative: to reduce its wage bill and thus retain more earnings for its shareholders.

There is nothing especially virtuous or wicked about either tech companies or entertainment companies. Indeed, in an era in which Google owns the world’s most popular video site; where Amazon and Apple both own movie and television studios; where Microsoft owns multiple game production studios, and where the Big Three music labels own substantial stakes in Spotify, there is no longer a meaningful distinction between “a giant tech company” and “a giant entertainment company.” Both are simply: “a giant company.”

And giant companies are gonna giant company. As paperclip-maximizing artificial life-forms, limited liability corporations are on a remorseless, ceaseless quest for ways of reducing the cost of their inputs, and if payments to creative workers can be squeezed, they will be.

Advanced economies around the world have spent the past 40 years expanding copyright. Today, copyright lasts longer and covers more works than ever, with higher damages and lower bars to securing them than ever. Companies that sell entertainment products are more profitable than ever, and the entertainment sector is larger than ever.

But the share of that income going to creative workers is lower than it has been in generations, and it is continuing to decline.

Even if you think that copyright’s only legitimate purpose is to incentivize creativity, this stinks. No one listens to a song because they loved the record executive who signed the performer’s royalty statement or read a book because they wanted to reward the hard work of the lawyer who drafted the author’s contract. A copyright system that makes intermediaries richer and creative workers poorer is indefensible.

How can more copyright lead to less money for creators? To answer this question, we need to look at the structure of the entertainment and tech sectors. The web has been degraded into “five giant websites, each filled with screenshots of the other four.” 

The entertainment industry is no better, consisting of:

  • Five giant publishers;
  • Four giant movie studios;
  • Three giant record labels (who own three giant music publishers);
  • Two giant ad-tech companies (and two giant app companies);
  • One giant ebook and audiobook retailer.

As these platforms have locked up billions of users inside walled gardens, they have made it all-but-impossible for creators to reach their audiences without first acceding to whatever terms a massive gatekeeper demands.

Under these market conditions, giving a creator extra copyright is like giving a bullied kid extra lunch money: it doesn’t matter how much money you give that kid, the bullies are going to take it all. This is true even – especially – if the bullies use some of that stolen lunch money to pay for a massive global ad campaign exhorting us to think of the poor hungry kids and demanding that we give them even more lunch money.

To create a copyright system that works for creative workers and their audiences, we need to think beyond copyright, toward non-copyright policies that would make copyright better.

The fight that matters isn’t tech vs. content—it’s corporate consolidation vs. creative workers and their audiences. We won’t win that fight with ever-more-Draconian copyright laws – we’ll win it with interventions that are laser-focused on increasing worker power, blunting corporate power, and transferring cash from the corporate side of the ledger to the creators’ side.

Cory Doctorow

Open Data and the AI Black Box

2 weeks 3 days ago

Artificial Intelligence (AI) grabs headlines with new tools like ChatGPT and DALL-E 2, but it is already here and having major impacts on our lives. Increasingly we see law enforcement, medical care, schools and workplaces all turning to the black box of AI to make life-altering decisions—a trend we should challenge at every turn. 

The vast and often secretive data sets behind this technology, used to train AI with machine learning, come with baggage. Data collected through surveillance and exploitation will reflect systemic biases and be “learned” in the process. In their worst form, the buzzwords of AI and machine learning are used to "tech wash" this bias, allowing the powerful to buttress oppressive practices behind the supposed objectivity of code.

It's time to break open these black boxes. Embracing collaboratively maintained Open Data sets in the development of AI would not only be a boon to transparency and accountability for these tools, but makes it possible for the would-be subjects to create their own innovative and empowering work and research. We need to reclaim this data and harness the power of a democratic and open science to build better tools and a better world.

Garbage in, Gospel out

Machine learning is a powerful tool with many impressive use cases, like searching for signs of life on Mars or building synthetic antibodies. But at their core, these algorithms are only as "intelligent" as the data they're fed. You know the saying: "garbage in, garbage out." Machine learning ultimately relies on training data to learn how to make good guesses, through logic that is typically opaque even to the developers. And even the best guesses shouldn't be taken as gospel.

Things turn dire when this veiled logic is used to make life-altering decisions. Consider the impact of predictive policing tools, which are built on a foundation of notoriously inaccurate and biased crime data. This AI-enabled search for "future crimes" is a perfect example of how such tools launder biased police data into biased policing, with algorithms directing yet more attention to already over-policed neighborhoods. This self-fulfilling prophecy even gets rolled out to predict criminality by the shape of your face. Then, when determining cash bail, another algorithm can set the price using data riddled with the same racist and classist biases.
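The self-fulfilling prophecy above can be made concrete with a toy sketch. This is a hypothetical model with made-up numbers, not any real predictive-policing product: two neighborhoods have the same underlying crime rate, but one starts with more recorded incidents because it was historically policed more heavily. A "hotspot" tool that sends patrols wherever recorded counts are highest then keeps inflating that neighborhood's numbers.

```python
def run_feedback_loop(recorded, rounds, patrols=10):
    """Simulate hotspot policing on recorded (not actual) crime counts.

    recorded: dict mapping neighborhood -> historical incident count.
    Each round, every patrol goes to the top-ranked neighborhood, and
    police presence alone adds newly recorded incidents there.
    """
    recorded = dict(recorded)
    for _ in range(rounds):
        hotspot = max(recorded, key=recorded.get)
        recorded[hotspot] += patrols  # presence inflates recorded counts
    return recorded

# Same underlying crime rate in A and B, but A starts with more
# recorded incidents due to historically heavier policing.
history = {"A": 60, "B": 40}
print(run_feedback_loop(history, rounds=5))  # {'A': 110, 'B': 40}
```

The initial 20-incident gap grows to 70 after five rounds, even though nothing about the neighborhoods themselves differs: the algorithm is learning from, and amplifying, its own past deployment.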

Fortunately, transparency laws let researchers identify and bring attention to these issues. Crime data, warts and all, is often made available to the public. The same transparency is not expected from private actors like your employer, your landlord, or your school.

The answer isn’t simply to make all this data public. Some AI is trained on legitimately sensitive information, even when it is publicly available. These datasets are toxic assets, sourced through a mix of surveillance and compelled data disclosures. Preparing the data is itself dubious, often relying on armies of highly exploited workers with no avenue to flag problems with the data or its processing. And despite many "secret sauce" claims, anonymizing these large datasets is very difficult, perhaps impossible, and a breach would disproportionately harm the very people tracked and exploited to produce them.

Instead, embracing collaboratively maintained open data sets would empower data scientists, who are already well versed in the transparency and privacy issues surrounding data, to maintain these sets more ethically. Pooling resources in this way would make data collection consensual and transparent, helping to address these biases while unlocking the creative potential of open science for the future of AI.

An Open and Empowering Future of AI

As we see elsewhere in Open Access, this removal of barriers and paywalls helps less-resourced people access and build expertise. The result could be an ecosystem where AI doesn’t just serve the haves over the have-nots, but in which everyone can benefit from the development of these tools.

Open Source software has long proven the power of pooling resources and collective experimentation. The same holds true of Open Data: making data openly accessible surfaces deficits and lets people build on one another's work more democratically. Purposefully biasing data, known as "data poisoning," is still possible, but it already happens in less transparent systems, where it is far harder to catch. And while a move toward using Open Data in AI development would help mitigate bias and phony claims, it's not a panacea; even harmful and secretive tools can be built with good data.
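One reason open data makes poisoning easier to catch is that anyone can diff a new release against a published snapshot. The sketch below is a deliberately crude, hypothetical audit (the function name, labels, and 5% threshold are all illustrative, not a real project's tooling): it flags any class whose share of the dataset drifted suspiciously between versions.

```python
from collections import Counter

def audit_labels(baseline, current, tolerance=0.05):
    """Flag labels whose share of the dataset shifted by more than
    `tolerance` between two published snapshots. A crude check that
    open, versioned data makes possible for any outside observer."""
    base_freq = {k: v / len(baseline) for k, v in Counter(baseline).items()}
    cur_freq = {k: v / len(current) for k, v in Counter(current).items()}
    flagged = []
    for label in set(base_freq) | set(cur_freq):
        drift = abs(base_freq.get(label, 0) - cur_freq.get(label, 0))
        if drift > tolerance:
            flagged.append(label)
    return sorted(flagged)

snapshot = ["cat"] * 50 + ["dog"] * 50
# A poisoner flips 20 "dog" labels to "cat" in the current release.
poisoned = ["cat"] * 70 + ["dog"] * 30
print(audit_labels(snapshot, poisoned))  # ['cat', 'dog']
```

In a closed pipeline there is no baseline for outsiders to compare against, so the same tampering would go unnoticed; open snapshots turn every user into a potential auditor.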

But an open system for AI development, from data to code to publication, can bring many humanitarian benefits, such as AI's use in life-saving medical research. The ability to remix and quickly collaborate on medical research can supercharge the research process and uncover missed discoveries in the data. The result? Tools for lifesaving medical diagnosis and treatment for all people, mitigating the racial, gender, and other biases in medical research.

Open Data makes data work for the people. While the expertise and resources needed for machine learning remain a barrier for many, crowd-sourced projects like Open Oversight already empower communities by making information about law enforcement visible and accessible. Being able to collect, use, and remix data to build their own tools brings AI research from the ivory tower to the streets and breaks down oppressive power imbalances.

Open Data is not just about making data accessible. It's about embracing the perspectives and creativity of all people to set the groundwork for a more equitable and just society. It's about tearing down exploitative data harvesting and making sure everyone benefits from the future of AI.

Rory Mir

Digital Rights Updates with EFFector 35.1

2 weeks 4 days ago

It's a new year! There's no better time to keep up with the latest updates on your digital rights. Version 35, issue 1 of our EFFector newsletter is out now. Catch up on the latest EFF news by reading our newsletter or listening to the audio version below. This issue covers a collection of EFF's 2022 Year in Review posts (seriously, there are a lot of them!) as well as some upcoming events EFF will be attending and even new job postings.


EFFECTOR 35.1 - Digital Rights In Review 2022

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Christian Romero