Indoor mapping continues to be one of those “holy grail” topics in local. It picks up where GPS drops off, tracking consumers all the way to the cash register. That’s been beacons’ battle cry for years, though I’ve been skeptical, given their implementation challenges and opt-in friction.
The latest move comes from Google, with an approach that could leapfrog beacons by using positioning capability that’s already in your phone (or soon will be). Built on the Tango platform, it applies “area learning” to map indoor spaces — a technology Google calls the Visual Positioning Service (VPS).
Unveiled at Google’s I/O developer conference, VPS uses computer vision via the smartphone camera to scan interior spaces and form a “point cloud.” That unique digital fingerprint then becomes the basis for positional tracking, indoor navigation and overlaying practical information.
The go-to example is overlaying positional data for store shelves and the items they carry, so shoppers can find obscure items at the Home Depots of the world. It’s a relatable dilemma: I used to work at Home Depot and know it well, but I can never find the sauerkraut at grocery stores.
“GPS can get you to the door, and then VPS can get you to the exact item that you’re looking for,” said Google’s VR/AR lead Clay Bavor at I/O. “Imagine in the future your phone could just take you to that exact screwdriver and point it out to you on the shelf.”
Again, this is not a new promise, but it is a new approach to a promise that beacons have failed to fulfill. VPS is a superior technology, but its optical and sensor components have been cost-prohibitive for smartphone integration. Moore’s law should change that over the next one to two years.
But this goes well beyond the utility of finding things and saving people time, though time savings is the most underrated success factor of any tech product. The real local angle here is the ad attribution potential: VPS represents the last mile to the cash register.
Here, the ties to Google’s ad business are pretty clear. This is just the latest step in its march to bolster search marketing’s value proposition with a better ROI story. And Google knows the way to do that is to track dollars where they’re mostly spent: offline and locally.
But the bigger moment in Google’s I/O bonanza came and went without much notice from the local media world: Google Lens. Like Tango and VPS, this is a computer vision technology that uses machine learning to identify visual content. It essentially turns your camera into a search box.
Though the overt use case is organizing your Google Photos albums (yawn), the meatier implication is identifying storefronts. An unfulfilled promise of Google Goggles, this will let you scan building exteriors with your phone’s viewfinder to reveal identifying info and reviews.
This is another go-to example for AR cheerleaders. But the difference is that Google has the data backbone — place database and Street View imagery — to actually pull it off. We often forget that local AR will need lots of geotagged content to be a meaningful and populated experience.
This also opens up an opportunity for anyone in the business of local data or listings optimization. Their already-established value in the world of mobile search could be amplified by the accelerated need for properly geotagged data to populate local AR apps like Google Lens.
And that’s where it all comes together for Google. We’ve long discussed its moves to counterbalance the smartphone-induced decline of search volume and CPCs. Google Assistant and “micro-moments” were one answer; visual search and VPS will be the next.
That means these technologies will play a part in protecting Google’s $48 billion search business. Combined with massive investments in VR (Daydream) and AR (Tango), Google will put serious muscle behind VPS. And anything Google is that motivated to drive is worth betting on.
Michael Boland is chief analyst and VP of content at BIA/Kelsey. Previously, he was a tech journalist for Forbes, Red Herring, Business 2.0, and other outlets.