At I/O, Google Offers a New Vision for Local Search


When it comes to digital technology, there are very few subject areas in which Google doesn’t have a fairly deep investment. If you review the list of sessions last week at I/O, the company’s developer conference, you’ll encounter a familiarly broad range of topics: payments, gaming, deep learning, security, smart homes, self-driving cars, cloud computing, foldable devices, and myriad others. It can be hard to determine what all of this adds up to, but distilling it into a coherent message is what keynote addresses are for, and Google CEO Sundar Pichai and his team delivered a strong one this time around.

At the start of the keynote, Pichai said that it was still Google’s mission to “organize the world’s information and make it universally accessible and useful.” But the company’s approach to this challenge has evolved: “We’re moving from a company that helps you find answers to a company that helps you get things done.”

The notion of “helping you get things done” provides a through-line for many of the announcements made during the keynote and the conference sessions that followed. Obvious as it may be to state, it struck me watching the presentations how thoroughly Google has become a consumer electronics company, a marketer of devices in which search is a central feature rather than a standalone product. Google, in other words, now presents its famous search capabilities in the context of devices that help you perform daily tasks.

Thus, many of the new features demoed at I/O centered on the Pixel phone. Many of those features also exploited the fact that smartphone users are on the move and are increasingly being trained to use their phones to interact with the world around them. Local search, in particular, is arguably the canonical use case for the smartphone, and judging by what Google had to say at I/O, Pixel phones may have just become the most sophisticated local search devices on the planet.

Announcements ranged from the privacy-related addition of incognito mode to Google Maps, to the introduction of AR-based features for orienting users who request walking directions, to the application of Google Lens technology to create “smart” restaurant menus that link to photo and review content from Google My Business. To be sure, not all of these features are exclusive to Pixel, but some require a combination of hardware and software updates and thus can’t easily be deployed on non-Pixel phones.

Google’s demo of AR-augmented walking directions in Maps

If you want to glimpse the fascinating technical underpinnings of technologies like AR-augmented walking directions, feel free to check out the recording of the session “Developing the First AR Experience for Google Maps,” where you can learn about such topics as visual inertial odometry and marvel at the sophistication required to accurately identify a phone’s location and physical orientation in a complex urban environment.
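To give a flavor of one small piece of the problem: a phone’s compass is notoriously unreliable among tall buildings, which is one reason the AR approach matches what the camera sees against Street View imagery to refine heading. The sketch below is purely illustrative and assumes nothing about Google’s implementation; it shows, in Python, how a noisy compass heading might be blended with a vision-derived heading estimate using a simple complementary filter. The function name and confidence weighting are my own inventions.

```python
import math

def fuse_headings(compass_deg, visual_deg, visual_conf):
    """Blend a noisy compass heading with a vision-derived heading.

    visual_conf in [0, 1] is how much to trust the visual estimate.
    Headings are combined as unit vectors so the 359-to-0 degree
    wraparound is handled correctly. Toy example only.
    """
    c = math.radians(compass_deg)
    v = math.radians(visual_deg)
    x = (1 - visual_conf) * math.cos(c) + visual_conf * math.cos(v)
    y = (1 - visual_conf) * math.sin(c) + visual_conf * math.sin(v)
    return math.degrees(math.atan2(y, x)) % 360

# Compass says 350 degrees, vision says 10; trusting vision at 0.8
# yields roughly 6 degrees, correctly wrapping past due north.
print(fuse_headings(350, 10, 0.8))
```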

For most of us, though, it’s enough to know that such features add actual utility to our lives. Already, I’ve seen my own usage patterns changing as search becomes more tightly integrated with the world around us. Google Lens, for example, has become my constant companion in my quest to finally learn the names of trees and plants native to my part of California, and it has proven to be a very effective tool for that purpose.

Plant identification with Google Lens for iPhone

I’m reminded of a startup that some readers will recall from several years ago: a gaming company called SCVNGR, whose young CEO Seth Priebatsch announced that he wanted to create “the game layer on top of the world.” (The company pivoted to become LevelUp and was acquired last year by Grubhub for $390 million.)

Priebatsch was talking about games linked to real-world entities, something like an AR version of geocaching. Google, by analogy, is creating the information layer on top of the world. In traditional search, the real world is a set of objects described by text or represented by pictures, sound, and video (or, as my colleague Mike Boland calls it, an “Internet of places”). It’s a disembodied reality. Now we need to start thinking of search as an embodied activity, conducted by users situated in a particular time and place, and related to real-world entities whose metadata is linked by technology to their physical presence and proximity to other entities.
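To make that idea concrete, the most basic signal in such an information layer is simple proximity. Here is a minimal sketch, with invented place data and no claim about how Google actually does it, of ranking entities by physical distance from a searcher using the standard haversine formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    r = 6371.0  # mean Earth radius in kilometers
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical place metadata; a real index carries far richer attributes.
places = [
    {"name": "Cafe A", "lat": 37.7793, "lon": -122.4193},
    {"name": "Cafe B", "lat": 37.8044, "lon": -122.2712},
]
user_lat, user_lon = 37.7749, -122.4194  # the searcher's position

for place in sorted(places, key=lambda p: haversine_km(user_lat, user_lon, p["lat"], p["lon"])):
    d = haversine_km(user_lat, user_lon, place["lat"], place["lon"])
    print(f'{place["name"]}: {d:.2f} km away')
```

Real systems layer richer signals on top of raw distance, of course, but the principle is the same: a place’s physical situation becomes part of its searchable metadata.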

It’s a world toward which we’ve been headed for some time but one with which we still haven’t quite come to terms. Wearable technologies like Google Glass tried to bridge the gap between information and reality but were rejected as cumbersome. Perhaps recognizing that its previous attempts were too ambitious, Google is now carefully integrating embodied search into existing apps, targeting use cases that people are likely to find especially intuitive.

For a few years now, apps like Snapchat have gradually acclimated many users to a life where the phone spends a great deal of time floating in the nether space between ourselves and the world around us, mediating our experience in a low-grade, omnipresent fashion. These experiences have been entertaining but haven’t offered us much in the way of “getting things done.” Google seems poised to change that.

Damian Rollison is Director of Market Insights at SOCi.