Facebook’s long-standing refusal to take down Holocaust-denial content is not a problem specific to Facebook. It’s not a decision limited to Zuckerberg or a few feckless executives. The problem is not even limited to tech.
Facebook’s purported refusal of politics — its reluctance to accept that it has always been a political actor, and that its content-moderation policies and algorithms have real-world effects on what people believe and what they do, up to and including acts of physical violence, as in Myanmar — is a structural feature of shareholder capitalism. A content ecosystem whose leaders are so timid as to let Holocaust denial flourish is the logical result of an approach to management that treats minimizing costs and maximizing market capitalization as its only responsibilities.
The many arguments adduced to spare Facebook the responsibility of monitoring its content, from removing posts that incite physical violence all the way down to policing false political advertising, fail because they rest on an underdeveloped understanding of responsibility itself. To argue that Facebook should be spared almost all regulatory expectations because it is a technology like the telephone rather than a media outlet like the New York Times, or that it should not be entrusted with taking down false advertising or violent speech because those tasks are best left to the government, is a failure of imagination: a failure to imagine what (civic) responsibility entails. As the word itself suggests (respons-ibility), any company or person who provides the possibility of speech, who can take it away from any given user, and who makes billions in profits off it, has a responsibility to answer for the admittedly unpredictable and deeply complex ramifications of the speech spoken under its auspices.
“Chronic” local listings fraud on Google Maps, where con artists pose as handymen and other local service providers, sometimes stealing the names of legitimate operations, is endangering consumers and sucking business away from viable local businesses, the Wall Street Journal reported.
As Google seeks to prop up its lucrative but “cresting” search business and consolidate its lead in local search, the tech giant is struggling to address the fraud, and perhaps even to care about it.
The task Facebook must take up as it attempts to police hateful content is inseparable from political values, human judgment, and the interpretation of statements that must be parsed by well-trained eyes and bright minds, with a stomach for horror to boot. While machines will play an indispensable role in content moderation on a platform of Facebook’s scale, they will be far from sufficient. That’s because monitoring hate speech touches on nothing less than some of humanistic inquiry’s age-old questions: the contested violence of language, the status of truth, and the foundations of meaning.