Is Facial Recognition Inhibiting Our Freedom of Speech?



When technology benefits us, it’s hard not to like it. It can track our sleeping patterns, check our news feeds, alert us to meetings, and remind us to check the chicken in the oven. On a daily basis, it’s convenient, efficient, and not terribly invasive.

However, when the same technologies are deployed to preemptively enforce the law—to detect where we go, with whom we engage, and even to identify us within large gatherings—they can become an affront to our civil rights and liberties.

Facial Recognition, Exploited

Facial recognition has been around since at least the 1960s, beginning with the research of Woodrow Bledsoe at the University of Texas at Austin. It was initially used to classify people’s faces with a RAND tablet—a device that recorded the coordinates of facial features. Its first publicly known use by law enforcement came at the 2001 Super Bowl, where police tried it to pick out faces in large crowds. The software proved unequal to the task of capturing and analyzing moving faces in a crowd; the program was ultimately shut down, and the event reverted to its old surveillance system going forward.

In the 2010s, the technology evolved radically, mainly through neural networks—machines taught to learn from repeated patterns—which became the new standard for discerning people’s faces. With this change, a machine could be fed millions of faces from a digital platform like Facebook and learn to differentiate you from your friends. Today, facial recognition’s uses range from confirming Osama bin Laden’s identity to unlocking your iPhone X.
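To make that mechanism concrete: modern systems typically reduce each face to a numerical “embedding” and compare embeddings by distance. The sketch below illustrates the idea using the open-source face_recognition library, which wraps a neural network that maps a face to 128 numbers. The image filenames are hypothetical stand-ins, and this is only an illustration of the general technique, not any specific system discussed in this article.

```python
# Minimal sketch: compare one known face against faces found in a crowd photo.
# Assumes the open-source `face_recognition` library (pip install face_recognition)
# and two hypothetical image files on disk.
import face_recognition

# Load a reference photo and compute its 128-dimensional embedding.
known_image = face_recognition.load_image_file("you.jpg")  # hypothetical file
known_encoding = face_recognition.face_encodings(known_image)[0]

# Detect every face in a crowd photo and embed each one.
crowd_image = face_recognition.load_image_file("crowd.jpg")  # hypothetical file
crowd_encodings = face_recognition.face_encodings(crowd_image)

# A face "matches" when the Euclidean distance between embeddings is small.
for i, encoding in enumerate(crowd_encodings):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if distance < 0.6:  # the library's conventional matching threshold
        print(f"Face {i} likely matches the reference (distance={distance:.2f})")
```

The same distance comparison scales to millions of stored embeddings, which is what makes matching a face captured at a protest against a scraped database computationally trivial.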

While many companies have employed facial recognition for good, others, such as Clearview AI, have come under scrutiny for more nefarious uses. The tech firm has partnered with Walmart, the Department of Defense, and law enforcement agencies to identify suspects at crime scenes and prevent shoplifting. While this may seem beneficial to companies, Clearview AI’s technology is known to have an alarming failure rate, which in policing can mean false arrests and even wrongful convictions.

In addition, facial recognition technology has now entered the public debate through its use at Black Lives Matter protests. Body cameras worn by police officers are potentially capturing thousands of protesters’ faces. Many scholars and lawyers now question the legality of this practice, especially when protesters are wearing face coverings.

Julian Murphy, in a Washington and Lee Law Review article, explores the constitutional implications of body-worn cameras and facial recognition at public protests. He poses a hypothetical in which a woman wearing a “Black Lives Matter” bandana punches someone at a white supremacist rally in Charlottesville and is later arrested at home after being identified by facial recognition technology. She ultimately sues the city for violating her Fourth Amendment rights.

In this scenario, Murphy questions whether it was lawful in the first place for the police to use software to “search” the suspect without her knowledge and then arrest her at her residence. In other words, when does the First Amendment, which affords “the right of the people peaceably to assemble,” exert pressure upon the Fourth Amendment right to protect people “against unreasonable searches and seizures”? Perhaps, he suggests, a sharpened view of the Fourth Amendment can act as a check on technologically enhanced mass surveillance.

Freedom of Speech or Domestic Terrorism?

In the article, Murphy explores why facial recognition technology poses a new challenge for the legal system in the United States. He argues that when police use body-worn cameras and facial recognition at public protests, they can implicate both the First and Fourth Amendments. If officers wear body cameras and protesters are aware of them, would the cameras deter people from attending protests in the first place? Does the use of a body camera and facial recognition constitute a warrantless search?

The Supreme Court has carved out numerous exceptions to the Fourth Amendment’s warrant requirement: the “hot pursuit” of a fleeing suspect and routine administrative searches (e.g., drug testing of railway personnel), to name just two. Murphy notes that in the late 1960s, when there was a perceived uptick in domestic terrorism, courts used the “special needs” exception, an outgrowth of the administrative search doctrine, to justify terrorism-related searches. The main requirement of this doctrine is that the “government’s ‘direct’, ‘primary’ or ‘immediate’ purpose must be something other than a general interest in crime control.” In the hypothetical Charlottesville incident, Murphy contends that a court could find that no warrant is required for the search because of a substantial threat to public safety.

In this thought experiment, one must determine whether recording and targeting the protester while she was wearing a bandana violated a reasonable expectation of privacy. Because the protester was covering her face, the police could not have identified her with the naked eye; the only way would have been to physically remove her bandana, something legal experts might label a “physical intrusion into a constitutionally protected area.” Instead, they did the virtual equivalent, using footage from a body camera and artificial intelligence to identify the protester’s face—and address.

While there is no clear answer to this dilemma—the Framers of the Constitution surely did not imagine machines sifting through billions of images in a second—Murphy worries that protesters’ First Amendment rights are being “chilled” by the use of body-worn cameras, facial recognition technology, and artificial intelligence. If courts adopt Murphy’s view, legislators may well be forced to re-balance the efficacy of police arrests against the right of people to demonstrate freely in public.

Enter AI Legislation & Ethics Standards

Unless existing regulations change, privacy may remain hostage to advances in artificial intelligence. The lure of convenient mobile applications hides the mountains of personal data flowing from our devices—which is why tech-savvy netizens have begun to engage in “privacy protests” to resist tech companies’ efforts to acquire and sell their data. According to research by UC Davis Law Professor Elizabeth Joh, privacy protesters are “buying ‘burners’ (prepaid phones) or ‘freedom phones’ from Asia that have had all tracking devices removed, or hiding their smartphones in ad hoc Faraday cages that block their signals. They’re also using Tor to surf the Internet. They’re avoiding credit cards and choosing cash, prepaid debit cards, or bitcoins. Some are even moving ‘off the grid’ and burning their garbage.”

A March against NSA mass surveillance in Washington, D.C. on Oct. 26, 2013 (Elvert Barnes, CC BY-SA 2.0)

Even though some private-sector companies such as OpenAI claim to be taking the lead in making sure “artificial intelligence benefits all humanity,” many cities’ policies are still lagging. For instance, for a court to determine whether the body-worn camera in the Charlottesville hypothetical invaded the protester’s privacy, it would have to examine the city’s body-worn camera policy on activation, data retention, storage, and publication. Without a policy requiring that video be deleted after a certain time or prohibiting its use absent a proven incident of domestic terrorism, it is difficult to predict how a court would rule.

Many tech leaders, such as Elon Musk, have decried the power and dangers of facial recognition as used in policing and crowd control. Nevertheless, countless cities continue to deploy software that detects people’s faces and, subsequently, leads to arrests. Many of these cities have no regulations to protect citizens’ data or to delete video after a certain time. If municipalities are to continue incorporating these technologies into their policing strategies, safeguards must be put in place to ensure they are not used as instruments of oppression or totalitarianism.
