San Francisco Partially Bans Facial Recognition, Putting Technology’s Future in Doubt
Photo by Gerry Roarty.
Civil rights and privacy activists asked, and the San Francisco Board of Supervisors delivered.
The city banned the use of facial recognition technology by law enforcement and other municipal agencies on Tuesday, becoming the first in the country to do so. Other bills in the works in Massachusetts and even on Capitol Hill suggest that additional restrictions on the technology may be forthcoming.
While proponents of facial recognition say it can accelerate law-enforcement procedures, potentially saving lives, critics such as the American Civil Liberties Union have clamored for a moratorium on its deployment, raising concerns about its impact on privacy and apparent bias against women and racial minorities.
Proponents of the technology counter that both its baked-in bias and its broader imprecision can be mitigated by proper implementation — one that doesn't treat facial recognition software as a source of absolute answers but uses it as one tool in a multifaceted decision-making process.
“The large producers of face matching algorithms know there are vulnerabilities in the algorithms like racial bias,” wrote Kevin Freiburger, director of identity programs at Valid, in an email to Street Fight. “To counter the bias, the industry’s major players are implementing machine learning behind the algorithms and using a much larger and diverse data set to train them, optimizing the face matching accuracy for the variability encountered across races, geographic regions, and genders. The technology can be safely deployed when used properly as a tool versus an absolute conclusion.”
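The "tool versus an absolute conclusion" approach Freiburger describes can be pictured as a human-in-the-loop policy: a match score below a floor is discarded, and even a strong match is only escalated for human review alongside independent evidence. The following is a minimal, hypothetical sketch of that idea — the function, thresholds, and score scale are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch of using a face-match score as one signal, not a verdict.
# The thresholds and the [0, 1] similarity scale are illustrative assumptions.

def review_match(match_score: float, corroborating_signals: int) -> str:
    """Route a face-match result based on score and independent evidence.

    match_score: similarity in [0, 1] from a hypothetical matching algorithm.
    corroborating_signals: count of independent signals (e.g., a witness ID).
    """
    HIGH, LOW = 0.95, 0.60  # illustrative cutoffs, tuned per deployment

    if match_score < LOW:
        # Too weak to act on at all.
        return "discard"
    if match_score >= HIGH and corroborating_signals >= 2:
        # Strong match plus independent evidence -- still a human decision.
        return "escalate_with_human_review"
    # Anything in between is a lead, never a conclusion.
    return "flag_for_human_review"
```

Under this policy, a 0.97 match with no corroborating evidence is merely flagged, not acted on — the software alone never produces an arrest.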
A report that facial recognition will be implemented at top US airports by 2021 provoked outrage from critics, who said regulations are not yet in place to ensure the efficacy of the technology or protect the privacy of the hundreds of millions of travelers it will affect. U.S. citizens can reportedly opt out of facial scans, though how widely that right is known remains uncertain. More broadly, the entire AI industry suffers from a dearth of female engineers and engineers of color, producing technologies that reflect the unconscious bias and cultural unawareness of their creators, according to a paper published by scientists at New York University's AI Now Institute.
In the retail context, Apple is facing a $1 billion lawsuit for allegedly using facial recognition to misidentify a thief in its stores. Ousmane Bah, the 18-year-old African-American man wrongly arrested for the crime, was linked to the thief because the latter was carrying an ID bearing Bah's personal information. It is unclear whether the suit will hold up in court, since Apple claims it does not in fact use facial recognition technology in its stores.
Whatever the result of Bah's case, the suit points to conflicts that are likely to arise should retailers adopt the technology, whether to collect consumer data or to beef up security. Facial recognition will almost certainly become one more flashpoint in the intensifying national conversation about privacy, particularly concerning the access companies have to consumer data and the uses they put it toward.
“The private sector will continue using biometrics,” Freiburger wrote. “It improves the user interface and user experience for their products. The opportunity is too large for [companies] to pass on a technology that is mature and market ready.”