Human Judgment, Automation, and the Future of Ad Tech

Can ads be targeted and content recommended in ways that respect consumer privacy and avoid surfacing hateful and fraudulent content on the Internet? What role will artificial intelligence play in these practices, and what are its limitations vis-à-vis human judgment?

These will be the big-picture questions facing the ad tech industry in the decades to come. As YouTube CEO Susan Wojcicki explained in a recent conversation with Recode’s Kara Swisher, Big Tech will need to turn to AI and machine learning—systems that can make decisions on their own—to determine ethically which content should be recommended and allowed on platforms dominated by user-generated material. AI will need to play a role in content moderation, recommendation, and ad targeting because the platforms owned by Alphabet, Facebook, and Amazon are simply too big for humans to manage on their own.

“We have to use humans and machines,” Wojcicki said. “And it’s the combination of using humans to generate basically what we’ll call the golden set or the initial set, the set that our machines can learn from. And then it’s the machines that go out and actually extend all this amazing knowledge that the humans have, and to be able to do that at scale.”
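In machine-learning terms, the “golden set” Wojcicki describes is a human-labeled seed of training examples that a model then generalizes at scale. As a rough illustration only (this is not YouTube’s actual system; all examples, labels, and function names below are invented), a toy classifier trained on a handful of human judgments might look like this:

```python
from collections import Counter

# Hypothetical "golden set": a small batch of content examples that human
# reviewers have labeled by hand. Everything here is invented for illustration.
golden_set = [
    ("great tutorial on cooking pasta", "allow"),
    ("relaxing music for studying", "allow"),
    ("buy followers cheap click here scam", "remove"),
    ("click here free money scam offer", "remove"),
]

def train(labeled):
    """Count word frequencies per human-assigned label."""
    counts = {"allow": Counter(), "remove": Counter()}
    for text, label in labeled:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Score unseen content against each label's word counts and
    return the label whose training examples it most resembles."""
    scores = {
        label: sum(ctr[word] for word in text.split())
        for label, ctr in counts.items()
    }
    return max(scores, key=scores.get)

model = train(golden_set)
print(classify(model, "free money click here"))  # prints "remove"
```

The point of the sketch is the division of labor Wojcicki describes: humans supply the judgment encoded in the labels, and the machine extends it to content no human has reviewed. It is also exactly where any bias in the human-built seed set propagates, a concern the article returns to below.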

In a New York Times op-ed on Tuesday, Katherine Maher, chief executive of the Wikimedia Foundation, argued for precisely this hybrid human-machine approach to managing the Internet, warning against an AI-only approach.

“Too often, artificial intelligence is presented as an all-powerful solution to our problems, a scalable replacement for people,” Maher wrote. “Companies are automating nearly every aspect of their social interfaces, from creating to moderating to personalizing content. At its worst, A.I. can put society on autopilot that may not consider our dearest values.”

Maher points to Amazon’s use of AI in hiring as an example of the technology’s deployment with insufficient human oversight: the system learned from a history of mostly male candidates and appears to have concluded that men are simply preferable to women. Interestingly, Amazon itself has proposed the combination of human and machine thinking as key to the ethical deployment of the latter in decision-making arenas as serious as law enforcement, suggesting, for example, that humans serve as the safeguard against poor decisions made by its facial recognition technology in the context of suspect identification.

The hybrid approach to decision-making is undoubtedly the one to which Silicon Valley will turn in the years to come. As Wojcicki argues, it’s the only feasible approach given the scale of the world’s most powerful platforms and interfaces: Google for search, Facebook and Instagram for social, and Amazon for commerce. Humans cannot regulate these entirely on their own, and human moderators of the most ghastly content on these platforms take up the task at risk to their own health, a problem the media has begun to illuminate and that Big Tech will need to address. Automated decision-making, albeit with human oversight, is here to stay.

On its face, the hybrid approach may appear to be a panacea, a dialectical synthesis that brings together the best of man and machine to resolve the conflict between them and eradicate the technological dread that’s coming to haunt our collective cultural imaginary. The real-time efficiency of machine learning paired with the slow-time decision-making skills of humans will be proposed by many as the good-faith solution we need to run the world’s largest companies without things going awry.

Yet beneath the veil of inevitability lies imperfection with urgent consequences, and it is these that the ad tech industry will need to recognize and confront if it is to do its work in the coming years while respecting consumer privacy and without propagating malicious practices (in the last couple of weeks, for example, it has come to light that YouTube has a major pedophilia problem).

For now, I propose two major concerns—two challenges, even, for further thought—surrounding AI for the ad tech industry.

The first is that the datafication of human experience that has allowed for precise ad targeting needs to be radically reconsidered, not just in terms of what can be done to obtain the consent of consumers for data collection, as the rising privacy movement has called tech companies to consider, but also in terms of what is lost and what is truly gained when the attributes of real people are transformed into consumer data. What is lost when the ad industry takes the very complex set of behaviors and preferences that characterizes a human being and renders her as an identity graph? People at the tech companies whose business models hinge on consumer data and ad targeting should be thinking hard about this question, and the public needs to hear from those who are already doing this work.

The second is that the human-machine hybrid decision-making model, while surely the best available in a hypothetical set that also includes human-only and machine-only models, will have to grapple with the bias and poor decisions of the humans who program the machines that will take on the task of regulating large platforms at scale. The humans who forge the “golden set,” as Wojcicki called it in the case of YouTube, that dictates which kinds of user-generated content should be eliminated from platforms, bring to that work their own susceptibility to corruption, their own prejudices, and their own human weaknesses in judgment. The Verge’s recent report on Facebook contractors employed to moderate the most noxious content on the platform highlights the risks here: Moderators claimed that continual exposure to hate and fake news led some of them to lose their grip on what is real and what is fake, on what falls beyond or within the bounds of acceptable discourse. Again, we know there are smart people in Big Tech and smaller firms working on this problem, and we will need to hear from them more and more in the years to come as human-machine decision-making more deeply shapes our society.

With a harsher spotlight than ever upon it and the 2020 presidential race potentially bringing existential changes to the tech industry, it is imperative that ad tech and the larger platforms in its orbit think critically about these profound ethical questions, which extend beyond the implications for near-term profits. For the many answers already devised to these questions and the many others to come, the media, or at least this media organization, waits with open ears.

Joe Zappa can be reached at [email protected].

Joe Zappa is the Managing Editor of Street Fight. He has spearheaded the newsroom's editorial operations since 2018. Joe is an ad/martech veteran who has covered the space since 2015.