Facebook’s longtime refusal to take down Holocaust-denial content is not a problem specific to Facebook. It’s not a decision limited to Zuckerberg or a few feckless executives. The problem is not even limited to tech.
Facebook’s professed refusal of politics — its reluctance to accept that it has always been a political actor and that its content-moderation policies and algorithms have real-world effects on what people believe and what they do, up to and including acts of physical violence, as in Myanmar — is a structural feature of shareholder capitalism. A content ecosystem whose leaders are timid enough to let Holocaust denial flourish is the logical result of an approach to management that views its only responsibilities as minimizing costs and maximizing market capitalization.
Nearly 60% of respondents said they’d be at least somewhat willing to pay for social media, and that figure would likely climb if the monthly subscription fee were small. Twingate contends that Facebook/Instagram would need to charge users only $2.07/month, and Twitter only $1.61/month, to earn via subscription fees what they earn via ad revenue. Respondents said they would be willing to pay $5.24 and $4.75/month, respectively.
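The break-even figures above are simple per-user arithmetic: annual ad revenue divided by the user base, spread over twelve months. A minimal sketch, using illustrative revenue and user numbers that are my assumptions rather than Twingate's actual inputs:

```python
def breakeven_monthly_fee(annual_ad_revenue, monthly_active_users):
    """Monthly fee per user needed to replace a year of ad revenue."""
    return annual_ad_revenue / monthly_active_users / 12

# Illustrative, assumed inputs: roughly $69.7B in annual ad revenue
# across ~2.8B monthly active users (ballpark Facebook-scale figures).
fee = breakeven_monthly_fee(annual_ad_revenue=69.7e9, monthly_active_users=2.8e9)
print(f"${fee:.2f}/month")  # → $2.07/month
```

Under those assumptions the math lands almost exactly on the $2.07/month figure cited above, which suggests the survey's claim is just ad ARPU restated as a subscription price.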
But inertia and apathy are strong, money is even tighter outside the US market, and surveillance advertising, together with the sheer size of its audience, is the X-factor that catapulted Facebook to the top of the global corporate order. I’d bet Google, Facebook, and, increasingly, Amazon will be slow to give up the surveillance revenues and walled-garden ecosystems that have made them this century’s most powerful corporate actors.
Experts at helping SMBs adapt to a tech-first commercial landscape say the pandemic has led some businesses to tap into their long-dormant potential as digital marketers and sellers, possibly setting them up for gains in the aftermath of the recession. Now that e-commerce is the only path to survival, mom-and-pop shops, aided by martech firms, agencies, and Silicon Valley giants, are capitalizing on cutting-edge marketing and retail techniques, many for the first time.
Thousands, if not millions, of Main Street businesses will close their doors for good as a result of the pandemic. Those that survive will be technologically savvier and sleeker than they were before.
It’s well established that Amazon excels at dominating industries adjacent to retail. But that is exactly what makes its Just Walk Out solution suspect. By doubling down on retail as a service, Amazon is courting enterprise customers in the very industry — brick-and-mortar retail — that its core e-commerce business gutted. The Seattle behemoth is asking firms like Walmart and Macy’s to pay it for the chance to meet the same Amazon-driven standards that put some of the retail champions of yesteryear out of business.
Prescriptions by Google, then? The company lacks Amazon’s delivery capabilities but has a stranglehold on search, and therefore on consumers’ connections to local businesses. It is not hard to imagine a world in which Google appears to keep its privacy promise by refusing to sell ads directly based on Fitbit user data but still capitalizes on that data by using it to connect Fitbit users with local health care providers, pharmacists, and even gyms. That would be just one more way Google is edging out the digital middlemen that once closed the loop from a Google search to a local service provider.
The many arguments adduced to spare Facebook the responsibility of monitoring its content, of removing everything from content that incites physical violence to false political advertising, fail because they rest on an underdeveloped understanding of responsibility itself. To argue that Facebook should be spared almost all regulatory expectations because it is a technology like the telephone rather than a media outlet like the New York Times, or that Facebook should not be entrusted with taking down false advertising or violent speech because those tasks are best left to the government, is a failure of imagination and a failure to grasp what (civic) responsibility entails. As the word suggests (respons-ibility), any company or person that provides the possibility of speech, that can take it away from any given user, and that makes billions in profit off it, is responsible for answering for and weighing the admittedly unpredictable and deeply complex ramifications of the speech spoken under its auspices.
If brand safety in the 2020 election season does not immediately seem concerning, consider the following: You’re an advertiser hoping to run digital ads for your ad-tech solution. You pay a high-traffic publisher big money to score impressions on its platform. But as soon as a Democratic voter navigates to the site and sees your ad, a big Trump ad making inflammatory claims about Biden pops up alongside it. The visitor navigates away from the site. Who wins?
Privacy has been slipping away from us since before Scott McNealy, then CEO of Sun Microsystems, said we had none of it in January 1999. Americans still do not understand how companies use their data. While that is a transparency problem incumbent on businesses to fix — and legislation will to some degree remedy it — I think it more likely than not that Americans will keep handing over their data to Amazon for two-day delivery and to Google for the sleekness of its search. What we typically conceive of as privacy itself — concern about how much of our information companies possess — is not the factor that will turn the tide on company practices and legal standards.
Apple execs told the Times that the company’s apps show up so frequently in searches not because it tips the scales but because its apps are already very popular and are designed to please consumers. But that logic is in itself concerning: A company with nearly unparalleled power and insight into what consumers are looking for in apps uses its understanding of consumer desire and its vast resources to create apps that will defeat rivals (especially startups and young companies) in the App Store it owns. Even if there is no algorithmic foul play, the competitive advantage is clear. The question is whether it’s enough for antitrust action.
It’s that factor, consumer data and Amazon’s vast store of it, that stands out most in Jason Del Rey’s reporting on Recode’s new podcast series, Land of the Giants. Especially striking is the episode on Alexa, in which Amazon employees openly speculate about a future in which smart microwaves will hook up with Amazon’s growing healthcare ambitions to tell you when it’s time to stop making popcorn and smart countertops will join the intelligent-kitchen conversation. As Del Rey notes, Amazon execs talk about this future openly, dropping tidbits about customer obsession along the way and appearing truly unperturbed by the thought that such interventions into our domestic lives may go too far or generate unintended consequences. Optimism about the quality of Amazon products and a fervent belief in the company’s benefit to consumers — without due consideration of those products’ risks and proper limits — seem to pervade the corporate culture.
In a recent column, Recode founder and New York Times columnist Kara Swisher cut to the core of what would seem to be concessionary calls for regulation from Big Tech firms, summarizing their attitude like this: “We make, we break, you fix.” She’s right, and with Google, Amazon, Apple, and Facebook having doubled their combined lobbying spending between 2016 and 2018, to $55 million, it is worth taking a closer look at the kinds of arguments the companies are trotting out to avoid responsibility for the outcomes of the technology they produce and sell. We should be particularly concerned about the arguments tech firms are making about AI, which is already remaking our society, replacing steps in crucial human decision-making processes with machine-generated solutions.
For an example of how tech firms are attempting to get away with peddling potentially dangerous AI-based tech to powerful entities like law enforcement agencies while accepting minimal accountability, consider Amazon’s Rekognition.
With the moral and commercial high ground in clear sight, Tim Cook used the spotlight at Stanford University’s commencement ceremony Saturday to slam Big Tech peers Google, Facebook, and Twitter for failing to take responsibility for the hateful content and disinformation on their platforms.
These questions would be preludes to less abstract ones that will seem more familiar to the creatures of Silicon Valley. Is Facebook responsible if people use WhatsApp and Messenger to spread false news and incite genocide? Is that just the fault of (heinous) people being (heinous) people or should the platforms be held accountable? As for privacy and data collection, what rights do people have to safeguard their information from the communications platforms they use? What does data scraped from Google search or Amazon’s facial recognition technology have to do with our identities? Can data be human?
If criticism of Twitter and the news media is ubiquitous, it is largely because content on those platforms so often fails to rise to the challenge of responsibility. It aims to produce outrage and push partisan narratives without interrogating its assumptions and all the facts in play. It lacks thought at a time when the endless and rapid reproduction of content in digital space demands we be more thoughtful than ever because we never know where and in how many places our words will reappear.