It doesn’t have to be this way. The seeds of a new generation of open platforms and technologies are taking root, aimed at evolving the platform paradigm toward transparency, value sharing, and universal representation in governance. The major changes on the table include sharing value with users through data revenue share; giving users access to the insights generated about them and their peers, and helping them understand who is trying to engage with them and why; revenue share and benefits for service providers; collaborative governance; and the abolition of unilateral platform expulsion and rule changes. A whole host of new open-platform operating protocols is emerging.
In a recent column, Recode founder and New York Times columnist Kara Swisher cut to the core of what would seem to be concessionary calls for regulation from Big Tech firms, summarizing their attitude like this: “We make, we break, you fix.” She’s right, and with Google, Amazon, Apple, and Facebook doubling their combined lobbying spending between 2016 and 2018, to $55 million, it is worth taking a closer look at the kinds of arguments the companies are trotting out to avoid responsibility for the outcomes of the technology they produce and sell. We should be particularly concerned about the arguments tech firms are making about AI, which is already remaking our society, replacing steps in crucial human decision-making processes with machine-generated solutions.
For an example of how tech firms are attempting to get away with peddling potentially dangerous AI-based tech to powerful entities like law enforcement agencies while accepting minimal accountability, consider Amazon’s Rekognition.
With the moral and commercial high ground in clear sight, Tim Cook used the spotlight at Stanford University’s commencement ceremony Saturday to slam Big Tech peers Google, Facebook, and Twitter for failing to take responsibility for the hateful content and disinformation on their platforms.
These questions would be preludes to less abstract ones that will seem more familiar to the creatures of Silicon Valley. Is Facebook responsible if people use WhatsApp and Messenger to spread false news and incite genocide? Is that just the fault of (heinous) people being (heinous) people or should the platforms be held accountable? As for privacy and data collection, what rights do people have to safeguard their information from the communications platforms they use? What does data scraped from Google search or Amazon’s facial recognition technology have to do with our identities? Can data be human?
If criticism of Twitter and the news media is ubiquitous, it is largely because content on those platforms so often fails to rise to the challenge of responsibility. It aims to produce outrage and push partisan narratives without interrogating its assumptions and all the facts in play. It lacks thought at a time when the endless and rapid reproduction of content in digital space demands we be more thoughtful than ever because we never know where and in how many places our words will reappear.
Location intelligence, sourced securely and used in the right way, is an extremely powerful tool for crafting precise targeting, predictive modeling, and creative media that drive meaningful marketing moments, massive ROI, and brand growth. Unfortunately, the location intelligence sector has also become a jungle of data fraught with fraud and insecurity.
Location intelligence is powerful, but in today’s highly scrutinized world, you have to vet every resource you engage to ensure confidence in its quality. There are three critical questions you should ask data partners before engaging them.
Journalistic ethics is ordinarily a head-nodding, Sunday-sermon kind of subject. Unless a community website names a teenager who died of a drug overdose in what was a string of OxyContin fatalities among local youths… or publishes a “news” story about a business that’s a regular advertiser or is being avidly sought… or takes sides on a divisive […]