The Transparency Trap: On Low Standards for ‘Transparency’ in the Data Market
Media quality used to be a far more challenging issue. Thanks to a number of transparency initiatives, the industry has reduced fraud, increased brand safety, and improved campaign ROI. But when it comes to the data that drives those same campaigns, the industry, led by organizations such as the IAB and the DMA, is adopting a different, narrower definition of transparency. The difference is more than a question of semantics; it is a question of clarity, consistency, and data expectations, and it has real ramifications. This narrower definition is holding the industry back.
In media, transparency demands accountability. In other words, it means asking media suppliers to “prove it.” It means expecting suppliers to “show me the viewability and fraud percentages, and allow me to suppress ads from running next to unsafe content.” Today, when it comes to data, transparency means only “tell me where the data came from.” That’s it.
That is not enough.
When you order wine with your dinner, would you select an unknown, pricey bottle simply because it came from Napa Valley? You’d likely want to know more about it first. Is it highly rated? What are its characteristics? And finally, once your interest is piqued, you’re even going to taste the wine before it’s served. That’s the proof—proof that it’s what you expect and verification that you’re not about to serve vinegar to your guests.
Viewability and other verification tracking became essential because of the enormous potential for wasted impressions: marketers needed proof that they were getting what they expected. And it has worked. According to Integral Ad Science, between Q3 2015 and H2 2017, viewability issues dropped 30%, fraud dropped 34%, and “brand risk” dropped 31% on direct publisher buys (the only apples-to-apples comparison between those two public reports).
A recent IAB study found that US marketers spent more than $20 billion on third-party data in 2017 alone, and almost 90% of marketers expect to either increase or maintain that spend over the next two years. Yet segmentation data today is highly inaccurate. More than 100 Emodo studies comparing location-based campaigns against highly accurate carrier data (which serves as a truth set for filtering out inaccurate location data points) have shown that nearly half of mobile ads are delivered to consumers who shouldn’t even be in the targeted segment. Nielsen DAR benchmarks through Q1 2018 show mobile age/gender targeting to be between 42% and 71% inaccurate, depending on the specificity of the age range.
These numbers are reminiscent of pre-transparency viewability stats. Yet while bad data is every bit as threatening to campaign ROI as non-viewable impressions, practitioners, vendors, and even the IAB and the DMA seem willing to hold data transparency to a lower standard and a narrower, even detrimental, definition. To make it easier for marketers to compare the composition and origins of data, the IAB has launched a new “Data Transparency Framework” initiative, and the DMA has taken a similar approach with its “Data Quality Labeling Standards” effort. These are important steps, particularly for addressing privacy concerns, but both organizations limit their definition of data “transparency” to data provenance and collection methods. Both claim that they ultimately want to push for accuracy, but neither is doing so in the near term. Worse, the DMA intends to establish a “third-party check,” “verification,” or “certification that what’s on the label is accurate.” In effect, a data provider could buy data from a guy on the street corner for $10, and so long as it admits as much on its data label, the segment would be considered “verified.” That is bound to mislead data users about what “verification” means in this context.
Not only do these initiatives fall short of demanding proof, they essentially endorse data provenance as a proxy for accuracy. How can the industry’s standard organizations raise the standard of data quality if their prescription has nothing to do with assessing the actual quality of the data?
A recent Factual survey found that “nearly all (95%) of location data buyers agree that data transparency accurately indicates the quality of the data.” In other words, marketers are trusting the quality of a given data set purely on their gut instinct about the quality of its source. Provenance labels also obscure the skill of the data seller. For instance, the DMA standards call for data to be labeled as deterministic or modeled. But “modeled” can mean many different things: a model can be built for scale or for accuracy, and some sellers model well while others model badly. That subtlety is completely lost in these labels.
Now that we’re firmly in the era of the “Data Store,” it’s easier than ever to lose track of the differences among more than 30 options for a “Nissan Intender” segment. Information about the data source may help narrow the list, but even data from the best-sounding sources can be wildly inaccurate. If the industry wants a higher standard of data, the initiative has to be about data quality, not just data origins. Different data buyers may have different needs and thresholds for accuracy, but “transparency” in data must be defined not only by honest disclosure; it must also include proof of how the data performs against objective measures of accuracy. The worst thing we can do is “verify” segments for honest disclosure rather than accuracy.
Next time you’re presented with a data transparency pitch, insist on the rest of the story. Ask about accuracy. Ask about proof. Anything less falls short of the industry’s media-established expectations of “transparency.”
Jake Moskowitz is head of the Emodo Institute, a dedicated organization within Ericsson Emodo wholly focused on the research, education, and resolution of data concerns that mobile advertisers face.