To Understand the Tech Industry’s Responsibilities, We Must Think Differently About Humanity
Suppose we could drag the elites running tech and media companies back to the college classroom to start a wide-ranging discussion about the essence of humanity. What does it mean to be human today? What is humanity? And what do these questions have to do with technology?
These questions would be preludes to less abstract ones that would seem more familiar to the creatures of Silicon Valley. Is Facebook responsible if people use WhatsApp and Messenger to spread false news and incite genocide? Is that just the fault of (heinous) people being (heinous) people, or should the platforms be held accountable? As for privacy and data collection, what rights do people have to safeguard their information from the communications platforms they use? What does data scraped from Google search or generated by Amazon’s facial recognition technology have to do with our identities? Can data be human?
The most urgent answer to the above is that there is no humanity without technology. There is no longer a concept of the human entirely dissociable from—uncontaminated by—many of the technologies we recognize today in the form of corporate logos. The exasperating debate about responsibility for tech’s more disastrous social outcomes, the tug of war between blaming people and platforms, is fundamentally misguided because there is no thinking one without the other, no holding one responsible without the other. It is impossible to think about what it means to be human today, to hold humanity as a whole or even certain humans responsible for anything, without taking the tools invented by Facebook, Google, and Amazon into account.
Let’s think about this in relation to a very specific example. Just last week, former Facebook security chief Alex Stamos, now at Stanford, engaged in the following exchange with Kara Swisher:
Stamos: I don’t think Facebook has ruined democracy. I think there’s a couple things going on here. One, there’s a whole class of tech criticism that is actually criticism of other people. Right? The saying “Hell is other people”? Facebook is “other people.” When you talk about anti-vaxxers and crazy parents today recommending bleach … this is the collective decisions of millions or billions of people, when you give them a freedom they never had before.
Now, that doesn’t mean that the company doesn’t have responsibility, but I think one of the problems is we’re not teasing apart what the companies are doing actively and what kind of societal issues have been unleashed by the fact that we have gotten rid of the information gatekeepers. I think that’s one of the core disagreements between the Valley overall and the media, is sometimes for those of us in tech … it feels a bit like there’s a lot of media people who want to go back to the world where 38 middle-aged white guys decide what is the political …
Swisher: No, come on. That’s bullshit. That’s not true. That’s not the case. … What you’re essentially arguing is that “Facebook doesn’t kill people, people kill people.” Right, or not? It’s humanity, essentially.
Stamos: What I’m saying is that people will utilize speech sometimes to do really good things, and a lot of times to do bad things. We’ve got to think about what responsibility we want the companies to have, because when we give them responsibility, we also give them power.
The crucial point here is not whether the platforms (managers) or people (users) are responsible for the verbal and physical violence enabled by Facebook’s communication properties. Nor is it whether we would be giving Facebook too much power and too much responsibility by expecting the company’s stewards to decide what content is admissible, thereby crowning once more, in Stamos’ flawed framing of the issue, the “38” middle-aged, white-guy gatekeepers of public discourse.
The key point, rather, is that Facebook already has more power and more responsibility than Stamos’ framing suggests. Facebook and its Big Tech counterparts already control what discourse and data live on their platforms. We still have information gatekeepers; they simply call a different coast of the United States home. Declining to moderate content with the discernment of a newspaper or magazine editor (the white male straw men of old) does not make tech executives any less powerful or any less responsible than more methodical moderation would. A laissez-faire approach, or the half-baked outsourcing of content moderation to contractors, is already a gatekeeping program, and its consequences, far from blameless or neutral, are manifest in tears and blood.
However, even this way of describing the tech industry’s responsibility for social welfare is incomplete. In fact, the big technology companies’ power far surpasses their policies on data collection and content moderation. Their most essential power, the one that brings them close to the gods that their lust for everlasting life suggests some of their leaders would like to become, is the power to disrupt the ever-changing essence of humanity itself.
Equipped with pocket computers more powerful than any tool the leading scientists of the early 20th century could imagine, we experience the world in ways that stand in stark contrast to the lives our predecessors lived. To think about freedom, happiness, and security today without technology is impossible, not least because the scholars pursuing those ideas approach them through the mediating lens of Google search. No less than the physicists who created nuclear weapons, the engineers behind Big Tech’s software have reconfigured our material and metaphysical reality.
In this regard, both Stamos and Swisher are right. It is the platforms, and it is people, who bear responsibility for the tech industry’s social impact. “It’s humanity, essentially”: the humanity Facebook, Amazon, and Google have created, the humanity they must therefore embrace their responsibility and awesome power to steward, and the humanity democratic institutions must step in to keep safe.