Seven Hard Truths About Location Data Accuracy | Street Fight

Seven Hard Truths About Location Data Accuracy

Before you choose targeting data for mobile campaigns or any other location-powered objective, it helps to have a few guideposts to narrow down the options. This article offers seven proven norms that can bring clarity to key data expectations and decision factors.

The seven norms are the conclusive findings of research conducted by Emodo over the last year. The project included an analysis of more than 100 location targeting studies spanning every typical use case of location data, a wide range of verticals, and the vast majority of prominent location data vendors. In each study, the location targeting data were meticulously compared against cell tower data from mobile carriers, the most reliable and accurate source of device location data. While GPS data tends to provide precise coordinates, those coordinates are often incorrect: precision is not the same as accuracy. Because data accuracy was a core focus of the research, carrier data was used as ground truth to rule out impossible locations across all of the included datasets.

All in, the project produced what may be the only objective, statistically viable sample of the true quality of today’s location data, including the seven definitive findings below.

1. Location data is wrong almost 50% of the time.

Only 39% of data points were within one mile of where they were claimed to be. Although a small portion of the studies included geo-fencing parameters which allowed for impressions to be delivered within a larger radius (five miles, for example), the trend is undeniable. Location data has significant quality concerns, regardless of source, use case, or vendor.
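To make the "within one mile" test concrete, here is a minimal sketch of how such a check could be computed: a great-circle (haversine) distance between each claimed location and the ground-truth location, counted as accurate if it falls within the radius. This is purely illustrative; the function names and sample coordinates are hypothetical, and the study's actual methodology is not described in this detail.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in miles.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def accuracy_rate(claimed, truth, radius_miles=1.0):
    # Share of claimed (lat, lon) points within radius_miles of ground truth.
    hits = sum(
        haversine_miles(c[0], c[1], t[0], t[1]) <= radius_miles
        for c, t in zip(claimed, truth)
    )
    return hits / len(claimed)

# Hypothetical example: one point is spot-on, the other is off by ~1.4 miles.
rate = accuracy_rate([(0.0, 0.0), (0.0, 0.02)], [(0.0, 0.0), (0.0, 0.0)])
```

Under this kind of test, the research's 39% figure means that fewer than two in five reported coordinates passed even a generous one-mile threshold.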

2. Vendor “fixes” miss two out of three quality issues.

On average, vendors that process raw location data and turn it into products such as attribution studies, segments, and geofencing campaigns remove only about a third of all inaccuracies. Inaccuracy scores for data sets that vendors claimed they had already filtered (to remove inaccuracies) were, on average, only 33% lower than those for raw data, across both SDK and exchange data.

3. SDK data isn’t always better than exchange data.

There’s a widely held rule of thumb that SDK data is high quality and exchange data is low quality. It turns out the picture isn’t so black and white. Overall, SDK data is more accurate than exchange data: SDK data averages 60-70% accuracy, while exchange data averages 43%. However, every SDK has accuracy issues. Across the full gamut of studies, not a single SDK scored above 70-75% accuracy, and some scored much lower. Exchanges vary widely; one exchange regularly scored higher than any SDK.

4. Data accuracy varies from vendor to vendor but also from segment to segment.

Across the vast majority of the studies analyzed, one of the highest accuracy scores and one of the lowest accuracy scores were measured on segments from the same vendor. In both cases, the vendor claimed that the location data had been filtered for inaccuracies.

5. More than 1/3 of point of interest (POI) visits are definitively wrong.

For use cases that involve measuring specific store or POI visits, such as attribution studies and segments based on historical shopping patterns, 30-35% of supposed visits were ruled out definitively as inaccurate. Note that the accuracy averages for attribution use cases were no better than those for segment use cases, and inaccuracies in attribution studies can lead to misleading results.

6. More than half of real-time proximity data is incorrect.

Impressions that had been served based on real-time location (such as geofenced campaigns) were ruled out as inaccurate 50-55% of the time.

7. Managed service is twice as inaccurate as self-service.

This might be the most controversial finding. Across the studies in which a vendor handled key ad serving or location data decisions, the scores were far worse than for self-service use cases. Managed service use cases proved inaccurate a whopping 40-45% of the time, while self-service use cases proved inaccurate 20-25% of the time. There are many possible reasons for this. One certainly is that self-service campaigns tended to have a lower bar, such as DMA targeting as opposed to tight geo-fencing. But it’s also possible that vendors are getting away with corner-cutting, because even their data decisions are made in a black box!

The takeaway? You need proven industry benchmarks if you want to set realistic goals and expectations. Going forward, these norms can help you form and answer key questions about location data-based tactics, so you can make more informed data decisions. They can help you test the veracity of vendor claims and guide you toward data segments that outperform the averages. Whether you fund, plan, or execute location-targeted campaigns, or use location data for segmentation, attribution, or insights, you need to use real information to make truly informed decisions.

Jake Moskowitz is head of the Emodo Institute, a dedicated organization within Ericsson Emodo wholly focused on the research, education, and resolution of data concerns that mobile advertisers face.

1 thought on “Seven Hard Truths About Location Data Accuracy”

  1. This is a broad attack on location data accuracy by a biased entity with absolutely no insight into vendor-by-vendor filtering methods, which I am most certain perform better than stated in this hit piece designed to drive fear.
