AR Cloud: The Linchpin for Local AR
Augmented Reality (AR) continues to capture the imagination of the tech world. But unfortunately “imagination” is the operative term. AR is far from its potential in both capability and consumer adoption. It’s getting there, though, slowly, and early signals are emerging for best practices and business models.
One of the areas where AR will eventually shine is local commerce, for lots of the reasons we’ve covered here on Street Fight and at my firm, ARtillry Intelligence. But before we come close to that vision, there are a few other things we’re going to need, including lots of data, some back-end infrastructure, and the AR Cloud.
First, as background, AR works by mapping its environment before overlaying graphics. “True AR” works as less of an overlay, integrating graphics in more dimensionally accurate ways (think: animations that pop in and out behind a real tree). ARKit and ARCore have democratized some of these capabilities, and many more advances are to come.
As for local, a close cousin of AR is visual search. Google Lens, for example, hopes to identify products and storefronts using your smartphone camera. Back to the point about first mapping the environment — that will happen through a combination of computer vision, object recognition, and GPS-defined local data.
For example, Google will use Street View imagery and Google My Business data to drive the object recognition in Google Lens. But what about the non-Googles of the world? How will they create AR and visual search apps that can map environments reliably and return the correct info or graphics?
The answer is the still-theoretical but critical AR Cloud. It’s essentially a data repository that AR devices can access wherever they are. The fundamental idea is that the mapping and object-recognition data for the entire world is too extensive to store on-device, but smartphones could offload some of that burden to the AR Cloud.
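To make the offloading idea concrete, here’s a minimal sketch of how a device might query a shared, location-keyed cloud index instead of storing the world’s mapping data locally. Everything here is hypothetical for illustration: the `geo_key` bucketing, the index layout, and the sample entries are not a real AR Cloud API.

```python
# Hypothetical sketch of the AR Cloud lookup flow: the device keeps
# almost nothing locally and offloads world-scale recognition data
# to a shared index keyed by coarse location.

def geo_key(lat: float, lng: float, precision: int = 3) -> str:
    """Bucket GPS coordinates into a coarse tile key (roughly 100m at precision 3)."""
    return f"{round(lat, precision)}:{round(lng, precision)}"

# Stand-in for the cloud-side index: tile key -> recognizable objects there.
AR_CLOUD_INDEX = {
    geo_key(37.7749, -122.4194): [
        {"object": "storefront:example_cafe", "overlay": "menu_card"},
        {"object": "landmark:mural", "overlay": "artist_info"},
    ],
}

def resolve(lat: float, lng: float):
    """Device-side call: fetch only the objects near the camera's position,
    rather than storing the whole world's mapping data on the phone."""
    return AR_CLOUD_INDEX.get(geo_key(lat, lng), [])

# The device resolves its surroundings, then runs local computer vision
# against this short list instead of a world-scale database.
print([hit["object"] for hit in resolve(37.7749, -122.4194)])
```

In a real system the final match would combine this GPS-narrowed candidate list with on-device computer vision, which is exactly the division of labor described above.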
Another way to think about the AR Cloud is as a sort of extension of Google’s mission statement to organize the world’s information. But instead of a search index accessed through typed queries, the AR Cloud delivers information “in situ,” at the item itself, by pointing a camera at it (millennial-friendly).
Popular use of the AR Cloud would also create it: through a crowdsourced approach, a mass of outward-facing cameras captures data and feeds the cloud. The cloud then builds perpetually over time, just like the Web (and Google’s index). This is the approach 6D.ai is taking.
The AR Cloud will also enable a key function: persistence. In other words, AR graphics remain in place across separate AR sessions (come back and it’s still there) and between different users. The latter is key for social AR experiences and multiplayer support, both projected to drive AR’s killer apps.
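As an illustration of persistence, here’s a toy sketch assuming a simple shared anchor store: one user pins a graphic to a location, and a different user, in a separate session, resolves it from the same spot. The `AnchorStore` class and its methods are invented for this example, not a real service.

```python
from collections import defaultdict

class AnchorStore:
    """Toy stand-in for the AR Cloud's shared anchor database."""

    def __init__(self):
        # location key -> graphics persisted at that spot
        self._anchors = defaultdict(list)

    def place(self, location: str, user: str, graphic: str) -> None:
        """Persist a graphic at a location; it outlives the user's session.
        Every call also 'feeds the cloud,' crowdsourcing the shared map."""
        self._anchors[location].append({"by": user, "graphic": graphic})

    def resolve(self, location: str):
        """Any user, in any later session, sees what was left here."""
        return list(self._anchors[location])

cloud = AnchorStore()
cloud.place("union_square", user="alice", graphic="3d_arrow")

# A different user, in a separate session, finds Alice's graphic in place.
print(cloud.resolve("union_square"))
```

The second call could come from any device at any time, which is the cross-session, cross-user behavior that social and multiplayer AR depend on.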
At this point, you’re probably thinking: don’t we already have that? There are indeed mini-AR clouds, or systems that perform similar functions. Google Lens (via Street View and GMB data) is one example. And Pokémon Go works through a home-grown GPS database built over years from its forebear, Ingress.
But these are smaller and proprietary AR clouds that are walled off for a single game or app. And in the above cases, the cloud is only available to giants that can afford and have spent years building them. What’s needed to unlock the AR app economy is a universal and open AR cloud that can be tapped and fed by billions of phones.
That’s where things get tricky. With self-interested tech giants competing to lead the next computing era, there’s understandably little incentive to share their proprietary and hard-earned data. In turn, that points to the possibility of centralized authorities (think: ICANN and DNS), or the more likely answer: blockchain.
I hesitate to go down that buzzword rabbit hole, but blockchain capability does align well with the construction, maintenance, and authentication needs of the AR Cloud. Beyond the above matters of building it out, there will be typical blockchain-centric issues like IP and ownership of graphical assets.
A deeper look into those issues can be saved for another column. For now, suffice it to say that a visual-search world will place lots of value on local listings data. Its current value in listings optimization for “regular” search could take on new life in visual search. The same can be said for voice search.
That means holders of location data (you know who you are) should think about how their product plays in a visual-search world. The good news, back to an earlier point, is that the transition will take a while. Use your next few product cycles to start preparing for AR and visual search; it’s probably 3-5 years away.
Michael Boland is Street Fight’s lead analyst. He is also chief analyst of ARtillry Intelligence and SF President of the VR/AR Association.