The Deceptive Arguments Amazon Uses to Shirk Responsibility for AI


In a recent column, Recode co-founder and New York Times columnist Kara Swisher cut to the core of what would seem to be concessionary calls for regulation from Big Tech firms, summarizing their attitude like this: “We make, we break, you fix.” She’s right, and with Google, Amazon, Apple, and Facebook doubling their combined lobbying spending from 2016 levels to $55 million in 2018, it is worth taking a closer look at the kinds of arguments these companies are trotting out to avoid responsibility for the outcomes of the technology they produce and sell. We should be particularly concerned about the arguments tech firms are making about AI, which is already remaking our society by replacing steps in crucial human decision-making processes with machine-generated solutions.

For an example of how tech firms are attempting to get away with peddling potentially dangerous AI-based tech to powerful entities like law enforcement agencies while accepting minimal accountability, consider Amazon’s Rekognition. The company is already selling its facial recognition software to police departments and has at least pitched it to US Immigration and Customs Enforcement, sparking outrage from its own employees.

In a conversation with Swisher last month, Amazon Web Services chief Andy Jassy deemed calls to restrict the technology wrongheaded, saying that in the hands of law enforcement, Rekognition is just one tool among many for human decision-makers. In other words, fears raised by the American Civil Liberties Union and others that the technology discriminates against women and people of color can be swept aside because Rekognition does not make final law-enforcement decisions on its own. Human actors must bear responsibility for any harmful usage of the technology. Guns don’t kill people; people kill people.

Jassy’s conversation with Swisher did not mark the first time Amazon had made this case. Holding humans—in this case, Amazon’s customers, any one of whom can be kicked off Amazon Web Services without disrupting the company’s bottom line—responsible for Rekognition’s repercussions is part of the ethical guidelines for the technology’s use that Amazon has floated on its blog.

Pushing back against press coverage indicating its software yields discriminatory outcomes—the most notable incident being an ACLU experiment in which the technology mistakenly matched the headshots of 28 members of Congress with mugshots, disproportionately producing faulty results for people of color—Amazon claimed that in these cases the “service was not used properly.” It went on to provide these guidelines, among others, for facial recognition software’s use: “There should be notice when video surveillance and facial recognition technology are used together in public or commercial settings,” and, “Human review is a necessary component to ensure that the use of a prediction to make a decision does not violate civil rights.” Examining these two suggestions tells us a great deal about the danger Rekognition and other AI-driven advances pose, as well as about corporate efforts to shirk responsibility for that danger.

First, Amazon’s suggestion that “notice” should be sufficient to record data on our faces for the purpose of facial recognition is consistent with the corporate logic that has run roughshod over privacy concerns, made data the new oil, and turned technology into the world’s most powerful industry in this century.

Shoshana Zuboff describes this corporate logic and its sweeping social implications in a book on surveillance capitalism, a term of her coinage. “The dispossession of human experience is the original sin of surveillance capitalism,” Zuboff writes, explaining that under surveillance capitalism, “Human experience,” the look of our faces, for example, “is claimed as raw material for datafication and all that follows, from manufacturing to sales.” This extraction of personal data, which is used by corporations such as Google, Facebook, and Amazon with minimal transparency to understand the behavior of individuals and predict our future behavior on an ongoing basis, is monetized to the tune of billions from advertisers, who pay to understand what we want to click, like, and buy. As DuckDuckGo founder Gabriel Weinberg argues, this hyper-targeted advertising is not indispensable for tech companies looking to make big profits, but there is little incentive for companies to pursue less invasive alternatives when the political calculus dictates that personal data is up for grabs.

As Zuboff suggests, the problem with surveillance capitalism is precisely this presumption that human experience is fair game for datafication with or without explicit consent. Take, for example, walking into a mall where Amazon’s facial recognition software is at work. The company suggests that notice be given for that technology in the same way we are currently notified that cameras are active in public spaces. The terms, then, for disclosing our data, in this case the look of our faces, to facial recognition software are, like the terms of Facebook’s user policy or Google search, not truly up for debate. We either participate in these digital ecosystems, which are increasingly central to the practice of daily life, or we avoid them altogether. The impression that this binary between unfettered data extraction and outright refusal to participate is sensible, ethical, or necessary is, Zuboff rightly insists, a ruse that lets companies take and monetize our data without consent while accepting little to no responsibility for what is done with that personal information.

Now, what happens if the companies and government agencies collecting our photos commit ethical or legal infractions based on that data? This is where the human blaming, the second of Amazon’s two guidelines mentioned above, comes in. Amazon admits to the fallibility of facial recognition software, but it displaces culpability for any errors made on the basis of that fallibility onto the humans operating it. Here’s how that looks: When the machine fails to recognize a face correctly, which by Amazon’s accounting happens less than one percent of the time, the company says it is up to humans to intervene and render a correct final judgment. When the machine makes a slew of errors, as in the ACLU’s experiment, the machine’s failure is dubbed human failure, since it is humans who have failed to operate the technology properly.
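To make that division of labor concrete, here is a minimal sketch, in Python, of what the “human review” guideline could look like on the client side, using the AWS SDK (boto3) to call Rekognition’s face-comparison API. The threshold value and the queue_for_human_review hand-off are illustrative assumptions for this sketch, not a workflow Amazon prescribes.

```python
# Minimal sketch (illustrative, not Amazon's prescribed workflow): compare two photos
# with Rekognition and hand anything below a chosen confidence threshold to a person.
# REVIEW_THRESHOLD and queue_for_human_review are assumptions made for this example.
import boto3

REVIEW_THRESHOLD = 99.0  # Amazon has publicly suggested a 99% threshold for law enforcement use

rekognition = boto3.client("rekognition")


def queue_for_human_review(match: dict) -> None:
    """Hypothetical hand-off: in a real deployment, a person reviews the candidate match."""
    print(f"Low-confidence match ({match['Similarity']:.1f}%) queued for human review")


def triage_face_match(source_bytes: bytes, target_bytes: bytes) -> None:
    """Compare a probe photo against a target photo and triage the results."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=80.0,  # Rekognition returns only matches above this similarity
    )
    for match in response["FaceMatches"]:
        if match["Similarity"] >= REVIEW_THRESHOLD:
            print(f"High-confidence match ({match['Similarity']:.1f}%), still not a final judgment")
        else:
            queue_for_human_review(match)  # a person, not the model, makes the call
```

Even in a sketch this small, the thresholds and the review step are choices made by the engineers who build and configure the system, which is the point: the “human” in human review includes the humans who wrote the pipeline.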

According to this logic, convenient for Amazon but unfortunate for its clients and the many people affected by its technology, facial recognition software is at once responsible for the advent of a more just age, making its deployment a no-brainer for all but the Luddites among us, yet not responsible for any unjust judgments its flaws facilitate. As Tim Cook has said of Silicon Valley’s attitude toward responsibility in general, Amazon’s view is that the technology and the company itself are responsible only when there is credit to be taken—when chaos breaks out, someone else must step in to take the blame.

Why is the argument that Amazon and its technology bear no responsibility for poor judgments made with Rekognition so misleading, and why should we be concerned about it? The problem is that it is human judgment, the judgment of Amazon’s engineers, that makes decisions facilitated by Rekognition and other AI-based solutions possible in the first place. As long as the monsters have their Frankensteins, conclusions reached via machine learning, even in tandem with human operators, will always be shaped by the errors and biases of the humans who make the tech.

The efficacy of the machines will also be affected by the cultural contexts and environments in which they operate, meaning they may not be as effective in one place as in another, or with people of certain demographic groups as with others. Research released in April by NYU’s AI Now Institute, a research institute dedicated to investigating the social implications of AI, supports that claim. The researchers, who include Google and Microsoft scientists, found that technology created by mostly white and male teams is likely to be less accurate and to discriminate, as Amazon’s AI-based hiring software did, against women and people of color. At both universities and tech firms, AI labs suffer from a lack of diversity even more severe than the tech industry’s overall rates, suggesting we have yet to see the worst of the biased outcomes the technology will yield if sold to powerful entities, especially without due regulation.

The irony in Amazon’s two suggestions for ethically deploying Rekognition—that humans take responsibility for the judgments made based on it and that consent be obtained from the people whose data it will collect—is that both are essential to deploying AI ethically. But contrary to the arguments the company has laid out thus far, both the consent and the human responsibility should be far more comprehensive than what Amazon proposes. Consent for data collection should be transparent and truly negotiable, not a sign in the corner of a mall hallway. And assigning responsibility for the role humans play in deploying AI should begin, first and foremost, with the humans who make the tech and sell it.

Joe Zappa is the Managing Editor of Street Fight. He has spearheaded the newsroom's editorial operations since 2018. Joe is an ad/martech veteran who has covered the space since 2015. You can contact him at [email protected]