Is Visual Mapping the Next Google-Apple Battleground?

This post is the latest in our “Mapping the Future” series, our editorial focus for the month of September covering innovation in mapping, navigation, and local discovery. See the rest of the series here.


One of our top picks for augmented reality (AR) killer apps is visual search. Its utility and frequency mirror those of search, making it a front-runner for killer-app status. It’s also a natural fit for local discovery, and Google is motivated to make it happen.

As background, “visual search” has several meanings, including reverse-image searches on Google desktop. But what we’re talking about here is pointing your smartphone camera at an item to get a live explanation of what that thing is, or where you can buy it. So far it takes form in Google Lens.

Visual mapping is a close cousin of visual search. It applies similar computer vision and machine learning to help users navigate. Also known as visual positioning service (VPS), it overlays 3D urban walking directions on an upheld screen, and takes form so far in Google Live View.

This is more intuitive than looking down at that little blue dot in Google Maps and mentally translating it to 3D space, which is always painful when you emerge from underground transit and don’t know which corner you’re standing on. Live View figures that out and guides you through 3D space.

How did Google build this, and where could it go next? As background, AR doesn’t “just work” the way most people think. Overlaying the right graphics on a given physical space first requires geometric and semantic understanding of that space.

As we’ve written, the AR cloud will be the key to unlocking this. GPS alone won’t cut it because AR graphics need millimeter-level precision. So the AR cloud (assisted by 5G) offers a spatially relevant data layer that coats the earth and dynamically informs AR devices where they should anchor graphics.
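To make that “data layer” idea concrete, here’s a toy sketch of what a geo-indexed anchor store might look like. Everything in it (the Anchor fields, the cell-bucketing scheme, the coordinates) is a hypothetical illustration, not any real AR cloud API:

```python
# Toy sketch of an AR cloud "data layer": anchors bucketed by coarse
# GPS cell so a device can fetch nearby overlays, then rely on visual
# positioning for fine placement. Hypothetical structure, not a real API.
from dataclasses import dataclass

@dataclass
class Anchor:
    anchor_id: str
    lat: float
    lng: float
    elevation_m: float   # height of the overlay above ground
    payload: str         # graphic or label to render

def cell_key(lat: float, lng: float, precision: int = 3):
    """Bucket coordinates into roughly block-sized cells."""
    return (round(lat, precision), round(lng, precision))

store: dict = {}

def register(anchor: Anchor):
    store.setdefault(cell_key(anchor.lat, anchor.lng), []).append(anchor)

def nearby(lat: float, lng: float):
    """Coarse lookup; a real system would refine with visual matching."""
    return store.get(cell_key(lat, lng), [])

register(Anchor("cafe-sign", 37.4219, -122.0841, 2.5, "Cafe ★ 4.6"))
print(nearby(37.42193, -122.08412))  # finds the cafe anchor
```

The point is only the shape of the problem: a coarse geographic index gets candidate anchors onto the device, and computer vision does the precise placement.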

Another name for this is the Internet of Places, which puts it more in Google terms. Just like Google created massive value by indexing the web, today’s opportunity is to index the physical world. Outcomes include visual search, mapping, and lots of other monetizable AR products.

So to develop Live View, Google essentially used its Street View imagery as a visual database against which to match a live camera feed and then “localize” that device. It’s a clever hack for visual mapping that applies existing assets and puts Google far ahead of anyone else.
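For intuition on what “matching a live feed against a visual database” involves, here’s a minimal Python sketch using OpenCV feature matching. The reference images and geotags are stand-ins we’ve made up; Google’s actual pipeline is far more sophisticated and unpublished in detail:

```python
# Minimal sketch of image-based localization: match a live camera frame
# against a database of geotagged reference images (a toy stand-in for
# Street View imagery). Uses OpenCV ORB features; data is hypothetical.
import cv2

def localize(frame_path, references):
    """Return the geotag of the best-matching reference image, or None.

    `references` is a list of (image_path, (lat, lng)) tuples.
    """
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    _, frame_desc = orb.detectAndCompute(frame, None)

    best_score, best_geotag = 0, None
    for ref_path, geotag in references:
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = orb.detectAndCompute(ref, None)
        if frame_desc is None or ref_desc is None:
            continue
        matches = matcher.match(frame_desc, ref_desc)
        # Count only strong matches (small Hamming distance).
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_score, best_geotag = score, geotag
    return best_geotag
```

A production system goes further, recovering the camera’s full 3D pose rather than just the nearest reference image, but the match-against-a-visual-database core is the same.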

But Google’s eventual goal for Live View could be indoor mapping. Considering all the commerce that happens offline, Google’s holy grail is to influence and track that “last mile” to the retail cash register. In-store visual mapping could be Google’s Trojan horse to finally get there.

And it’s not alone. Apple’s initiative to revamp Apple Maps could spin out capability for 3D visual mapping. On the surface, it wants to improve Maps, make iPhones more attractive, and remove the black eye still felt from Mapgate. But a valuable byproduct could be 3D visual mapping.

It will do this much as Google built Live View: while capturing street-level imagery, Apple is future-proofing itself by simultaneously gathering 3D mapping data of roadways. That will be a valuable AR cloud asset, congruent with Apple’s many-sided AR master plan.

Meanwhile, there’s an entire subsector of AR cloud startups building spatial maps. Scape is doing so in major cities for various consumer and enterprise AR use cases. 6D.ai has an API that lets users map the physical world as they go about using AR apps. It’s like Waze for AR.

Autonomous vehicles (AVs) could also create lots of visual mapping data. In order to “see” the road, AVs create real-time 3D lidar scans. A second use for that data could be a constantly evolving image-recognition database for human-readable visual mapping like Google Live View.
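As a rough illustration of how lidar data could be repurposed this way, here’s a minimal Python sketch that rasterizes a 3D point cloud into a top-down image a visual-matching pipeline could index. The data shapes and cell size are our own assumptions, not any AV vendor’s actual pipeline:

```python
# Rasterize a lidar point cloud (an Nx3 array of x, y, z in meters)
# into a top-down occupancy image that could be indexed for visual
# matching. Illustrative assumptions throughout.
import numpy as np

def to_topdown(points: np.ndarray, cell: float = 0.1) -> np.ndarray:
    """Project points onto the ground plane at `cell`-meter resolution."""
    xy = points[:, :2]
    idx = ((xy - xy.min(axis=0)) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 255  # mark occupied cells white
    return grid

# Example: a fake 1,000-point scan spread over a 20m x 20m patch.
scan = np.random.default_rng(0).uniform(0, 20, size=(1000, 3))
image = to_topdown(scan)  # ~200x200 occupancy image
```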

Apple Street Mapping Cars. Image Source: TechCrunch

As all of these pieces come together, we get closer to ubiquitous visual mapping. If that happens, there will be significant implications for entities that currently use search and mapping for marketing or online presence. They’ll need to make sure they’re optimized for this new format.

This could lead to an extension of SEO to cultivate presence in visual experiences. Just like in search, correct business location and details will need to be optimized to show up in the right places. You don’t want the AR overlay for your restaurant floating above the salon next door.
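In practice, the raw material for that optimization looks a lot like today’s structured location data. As a hypothetical example, here’s Python emitting the kind of schema.org LocalBusiness markup, with precise geo coordinates, that an AR-aware index could consume. The schema.org types are real; the business, coordinates, and the tie to AR indexing are our illustration:

```python
# Illustrative schema.org markup for a local business, with the precise
# geo coordinates an AR overlay would need to anchor to the right
# storefront. Business name and coordinates are placeholders.
import json

business = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Bistro",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "CA",
    },
    "geo": {
        "@type": "GeoCoordinates",
        "latitude": 37.4219,
        "longitude": -122.0840,
    },
}

# JSON-LD payload, ready for a <script type="application/ld+json"> tag.
print(json.dumps(business, indent=2))
```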

But like anything else, user behavior has to come first. If visual mapping reaches critical mass, it will be a necessary channel for local businesses to optimize, joining search, social, video, and the rest of the current list. So the remaining question is whether and when that critical mass is reached.

Moving beyond early AR successes in gaming (Pokémon Go) and social (Snapchat Lenses), we believe the eventual killer apps will be more about mundane utilities like visual mapping. They represent tangible user value and potentially high-frequency use cases… just like search.

In fact, one common theme among killer apps is making daily tasks we already do better or easier. That’s the case with everything from Twitter (easier communication) to Uber (easier transportation) to Slack (easier work collaboration). AR will follow the same rules.

Mike Boland has been a tech & media analyst for the past two decades, specifically covering mobile, local, and emerging technologies. He has written for Street Fight since 2011. More can be seen at Localogy.com.