Revamping Your Strategy and Audiences for the Latest “Safe & Civil” Facebook Advertising Update

Your social advertising strategy and audiences may need a bit of an overhaul to align with updates to Facebook’s “Safe & Civil” Advertising Policies, especially if your campaigns fall within the affected categories.

If your customer is running a campaign that does fall within one of the three categories upended by recent Facebook policy changes, it’s best to apply a Special Ads Category before publishing. This will save you the time and headache of resolving errors over the life of your campaign.
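To make the declaration concrete, here is a minimal sketch of building a campaign-creation payload with the category declared up front, as the Marketing API expects for its `special_ad_categories` field. The function name, campaign name, and objective are illustrative assumptions, not prescribed by Facebook; consult the current Marketing API reference for the live endpoint and required fields.

```python
import json

# The three Special Ad Categories covered by Facebook's policy:
# housing, employment, and credit.
SPECIAL_AD_CATEGORIES = {"HOUSING", "EMPLOYMENT", "CREDIT"}

def build_campaign_payload(name, objective, categories):
    """Build form fields for POST /act_<AD_ACCOUNT_ID>/campaigns.

    Categories must be declared at campaign creation; an empty list
    means the campaign falls under no special category. (Helper name
    and defaults are hypothetical.)
    """
    unknown = set(categories) - SPECIAL_AD_CATEGORIES
    if unknown:
        raise ValueError(f"Unknown special ad categories: {unknown}")
    return {
        "name": name,
        "objective": objective,
        "status": "PAUSED",  # create paused, review targeting, then run
        # The API expects a JSON-encoded array for this field.
        "special_ad_categories": json.dumps(sorted(categories)),
    }

payload = build_campaign_payload(
    "Spring Home Loans", "LINK_CLICKS", ["HOUSING"]
)
print(payload["special_ad_categories"])  # ["HOUSING"]
```

Declaring the category at creation time, rather than retrofitting it after a rejection, is exactly the time-saver the advice above describes: the platform restricts the audience tools accordingly instead of erroring mid-flight.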

The Trust Crash: How Our Platforms Are Failing Us At Every Level and What We Can Do About It

It doesn’t have to be this way. There are the seeds of a new generation of open platforms and technologies aimed at evolving the platform paradigm toward transparency, value share, and universal representation in governance. Sharing value with users via data revenue share; giving users access to the insights generated about them and their peers and helping them understand who is trying to engage with them and why; revenue share and benefits for service providers; collaborative governance; and the abolition of unilateral platform expulsion or rule changes are just several of the major changes on the table. A whole host of new open platform operating protocols is emerging.

The Deceptive Arguments Amazon Uses to Shirk Responsibility for AI

In a recent column, Recode founder and New York Times columnist Kara Swisher cut to the core of what would seem to be concessionary calls for regulation from Big Tech firms, summarizing their attitude like this: “We make, we break, you fix.” She’s right, and with Google, Amazon, Apple, and Facebook doubling their combined lobbying spending between 2016 and 2018, to $55 million, it is worth taking a closer look at the kinds of arguments these companies are trotting out to avoid responsibility for the outcomes of the technology they produce and sell. We should be particularly concerned about the arguments tech firms are making about AI, which is already remaking our society, replacing steps in crucial human decision-making processes with machine-generated solutions.

For an example of how tech firms are attempting to get away with peddling potentially dangerous AI-based tech to powerful entities like law enforcement agencies while accepting minimal accountability, consider Amazon’s Rekognition.

Tim Cook Demands New Commitment to Responsibility from Big Tech

With the moral and commercial high ground in clear sight, Tim Cook used the spotlight at Stanford University’s commencement ceremony Saturday to slam Big Tech peers Google, Facebook, and Twitter for failing to take responsibility for the hateful content and disinformation on their platforms.

To Understand the Tech Industry’s Responsibilities, We Must Think Differently About Humanity

These questions would be preludes to less abstract ones that will seem more familiar to the creatures of Silicon Valley. Is Facebook responsible if people use WhatsApp and Messenger to spread false news and incite genocide? Is that just the fault of (heinous) people being (heinous) people or should the platforms be held accountable? As for privacy and data collection, what rights do people have to safeguard their information from the communications platforms they use? What does data scraped from Google search or Amazon’s facial recognition technology have to do with our identities? Can data be human?

Twitter Time: Responsible Writing in Today’s Media Landscape

If criticism of Twitter and the news media is ubiquitous, it is largely because content on those platforms so often fails to rise to the challenge of responsibility. It aims to produce outrage and push partisan narratives without interrogating its assumptions and all the facts in play. It lacks thought at a time when the endless and rapid reproduction of content in digital space demands we be more thoughtful than ever because we never know where and in how many places our words will reappear.

Location Data Confidence in an Exploding Data Universe

Location intelligence, sourced securely and used in the right way, is an extremely powerful tool for crafting precise targeting, predictive modeling, and creative media that drive meaningful marketing moments, massive ROI, and brand growth. Unfortunately, the location intelligence sector has also become a jungle of data fraught with fraud and insecurity.

Location intelligence is powerful, but in today’s highly scrutinized world, you have to challenge every resource you engage to ensure confidence in its quality. There are three critical questions you should ask data partners before you engage them.

Human Judgment, Automation, and the Future of Ad Tech

For now, I propose two major concerns—two challenges, even, for further thought—surrounding AI for the ad tech industry. The first is that the datafication of human experience that has allowed for precise ad targeting needs to be radically reconsidered, not just in terms of what can be done to obtain the consent of consumers for data collection, as the rising privacy movement has called tech companies to consider, but also in terms of what is lost and what is truly gained when the attributes of real people are transformed into consumer data. The second is that the human-machine hybrid decision-making model, while surely the best available in a hypothetical set that also includes human-only and machine-only models, will have to grapple with the bias and poor decisions of the humans who program the machines that will take on the task of regulating large platforms at scale. 

Navigating the ‘New Ethics of Local Journalism’: Dangerous Curves Ahead

Journalistic ethics is ordinarily a head-nodding, Sunday-sermon kind of subject. Unless a community website names a teenager who died of a drug overdose in what was a string of Oxycontin fatalities among local youths… or publishes a “news” story about a business that’s a regular advertiser or is being avidly sought… or takes sides on a divisive […]