Face Scans to Estimate Our Age: Harmful and Creepy AF


Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, they will also sometimes mistake adolescents for adults.

Age estimation is also discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. That means that, as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same, and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.

Some companies are in both the age estimation business and the face identification business.
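To see why that switch is so easy to flip, it helps to know how these systems are typically built: an encoder network converts the face image into a reusable feature vector (an “embedding”), and a cheap task-specific layer on top turns that vector into a prediction. The sketch below is a minimal illustration in Python – every name is hypothetical and the weights are random stand-ins, not any real vendor’s models – showing that swapping the downstream “head” changes what gets inferred without recapturing anything:

```python
import numpy as np

def extract_embedding(face_scan: np.ndarray) -> np.ndarray:
    """Stand-in for a face-encoder network: maps a scan to a fixed-length
    feature vector. This expensive step is shared by every inference task."""
    seed = abs(hash(face_scan.tobytes())) % (2**32)
    return np.random.default_rng(seed).standard_normal(128)

class LinearHead:
    """A cheap task-specific layer applied on top of the shared embedding."""
    def __init__(self, weights: np.ndarray, labels: list):
        self.weights, self.labels = weights, labels

    def predict(self, embedding: np.ndarray) -> str:
        return self.labels[int(np.argmax(self.weights @ embedding))]

rng = np.random.default_rng(0)  # random weights; a real system would train these
age_head      = LinearHead(rng.standard_normal((2, 128)), ["under 18", "18 or over"])
gender_head   = LinearHead(rng.standard_normal((2, 128)), ["estimate A", "estimate B"])
identity_head = LinearHead(rng.standard_normal((3, 128)), ["person 1", "person 2", "person 3"])

scan = np.zeros((64, 64))      # placeholder for a captured face image
emb = extract_embedding(scan)  # computed once from the face scan...
for head in (age_head, gender_head, identity_head):
    print(head.predict(emb))   # ...then reused for whichever inference is wanted
```

Nothing about the capture step reveals which heads will ever run on the result – which is exactly why users cannot tell an age check apart from an identification pipeline.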

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, automated license plate reader (ALPR) errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, cross-referenced with other readily available information, that data can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
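To make the design point concrete, here is a minimal sketch of that lower-risk pattern in Python. The estimate_age() function is a hypothetical stand-in for a local model – no real vendor API is implied – and the point is the data flow: estimate on-device, transmit only a pass/fail bit, and wipe the scan immediately.

```python
import numpy as np

def estimate_age(face_scan: np.ndarray) -> float:
    """Hypothetical placeholder for an on-device age-estimation model."""
    return 25.0  # a real model would infer this from the scan

def passes_age_gate(face_scan: np.ndarray, threshold: int = 18) -> bool:
    """Run the check locally and discard the scan no matter what happens."""
    try:
        result = estimate_age(face_scan) >= threshold
    finally:
        face_scan.fill(0)  # best-effort wipe of the raw pixels in memory
    return result          # only this yes/no bit ever leaves the device

scan = np.zeros((64, 64))     # placeholder for a camera capture
print(passes_age_gate(scan))  # the site learns pass/fail, not the face or the age
```

Even written this way, the guarantees live entirely in the operator’s code – which is why the breaches, silent design changes, and insider abuse described above can undo them.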

Also, face scanning algorithms are often trained on data that was collected using questionable privacy practices – whether from users who gave only murky consent or from non-users who gave none. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significantly here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forgo the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

Adam Schwartz

Second Circuit Rejects Record Labels’ Attempt to Rewrite the DMCA


In a major win for creator communities, the U.S. Court of Appeals for the Second Circuit has once again handed video streaming site Vimeo a solid win in its long-running legal battle with Capitol Records and a host of other record labels.

The labels claimed that Vimeo was liable for copyright infringement on its site, and specifically that it could not rely on the Digital Millennium Copyright Act’s safe harbor because Vimeo employees “interacted” with user-uploaded videos that included infringing recordings of musical performances owned by the labels. Those interactions included commenting on, liking, promoting, demoting, or posting them elsewhere on the site. The record labels contended that these videos contained popular songs, and that it would have been obvious to Vimeo employees that this music was unlicensed.

But as EFF explained in an amicus brief filed in support of Vimeo, even rightsholders themselves mistakenly demand takedowns. Labels often request takedowns of music they don’t own or control, and even request takedowns of their own content. They also regularly target fair uses. When rightsholders themselves cannot accurately identify infringement, courts cannot presume that a service provider can do so, much less a blanket presumption as to hundreds of videos.

In an earlier ruling, the court held that the labels had to show that it would be apparent to a person without specialized knowledge of copyright law that the particular use of the music was unlawful, or prove that the Vimeo workers had expertise in copyright law. The labels argued that Vimeo’s own efforts to educate its employees and users about copyright, among other circumstantial evidence, were enough to meet that burden. The Second Circuit disagreed, finding that:

Vimeo’s exercise of prudence in instructing employees not to use copyrighted music and advising users that use of copyrighted music “generally (but not always) constitutes copyright infringement” did not educate its employees about how to distinguish between infringing uses and fair use.

The Second Circuit also rejected another equally dangerous argument: that Vimeo lost safe harbor protection by receiving a “financial benefit” from infringing activity, such as user-uploaded videos, that the platform had a “right and ability to control.” The labels contended that any website that exercises editorial judgment—for example, by removing, curating, or organizing content—would necessarily have the “right and ability to control” that content. If they were correct, ordinary content moderation would put a platform at risk of crushing copyright liability.

As the Second Circuit put it, the labels’ argument:

would substantially undermine what has generally been understood to be one of Congress’s major objectives in passing the DMCA: encouraging entrepreneurs to establish websites that can offer the public rapid, efficient, and inexpensive means of communication by shielding service providers from liability for infringements placed on the sites by users.

Fortunately, the Second Circuit’s decisions in this case help preserve the safe harbors and the expression and innovation that they make possible. But it should not have taken well over a decade of litigation – and likely several million dollars in legal fees – to get there.

Related Cases: Capitol v. Vimeo
Tori Noble