Does ‘Multisearch Near Me’ Signal a Visual Search Future?


Google’s latest local search play was on display at its recent I/O conference: Multisearch Near Me. If multisearch and local search had a baby, that’s essentially what was announced. It lets users search using images or screenshots along with the text “near me” to see local results.

By local results, we mean businesses. So, if you see a slick jean jacket on the street or an enticing dish on Instagram, snap a pic and use it as a search input, along with the term “near me.” The idea is that this adds visual dimension to search queries and reveals nearby retailers or restaurants that offer similar fare.

In some ways, this is an extension of Google's longer-term evolution toward embracing more content formats in search. Past manifestations of this broader principle include universal search in the 2010s and the ongoing knowledge graph developments that power multimedia-driven SERPs and knowledge panels.

But multisearch flips the script. Rather than using text queries to unearth a variety of multimedia search results, we’re talking here about using multimedia content as search inputs. This can involve starting a product/image search, then narrowing down results with text (e.g., “the same shirt in green”).


Search What You See

Another way to think about this is as a "CTRL+F" for the physical world: you can use the imagery around you to launch searches for similar objects. Here, Google taps into its knowledge graph and image database as a training set for AI object recognition, which makes it uniquely positioned to pull this off.

In that sense, Multisearch Near Me, as positioned at Google I/O, is only the beginning. For one, it will expand in the future beyond single items to entire scenes. In other words, users will be able to choose multiple objects within an image or frame to launch searches for related items or see contextual overlays.

For the latter, this could all converge at some point with another related visual search product: Google Lens. The beauty of Google Lens is that visual searches happen in real-time as you pan your phone around. Search activation dots pop up on your screen where you can launch a visual search.

This real-time approach offers a more seamless UX, as opposed to screenshots and image selection. In other words, when Multisearch Near Me comes to Google Lens – with the possible handoff to Google Live View for last-mile navigation – it could be a more elegant way to find things to do, see, and eat locally.


Line of Sight

Another way all of the above could evolve is to go hands-free. We're talking, of course, about AR glasses… which Google also teased at I/O. In fact, one adoption barrier to Google Lens and Live View is arm fatigue and the social awkwardness of holding up your phone. Smart glasses could alleviate that.

Of course, AR glasses come with their own social awkwardness, as seen with Google Glass. But, importantly, a few things may be different now for Google's AR glasses ambitions, including the underlying tech and the use cases. The former has advanced closer to the realm of socially acceptable, wayfarer-like smart glasses.

Meanwhile, the importance of use cases is a lasting lesson from the Google Glass debacle. That's why its newer concept glasses are positioned for a more practical and focused use case: real-time language translation. And Google can pull off these "captions for the real world" given its Google Translate capabilities.

Moreover, though Google zeroed in on language translation as a central use case for the sake of simplicity and focus, other utility-based uses could be integrated later. And that brings us back to local search: Multisearch Near Me could reach new levels of utility if it’s in your direct line of sight.

This also makes local searches private. Sort of like AirPods (which involve their own flavor of geo-local AR), ‘captions for the real world’ can be delivered discreetly. Of course, there are still several wild cards and variables, including the fickleness of fashion, but we’re now at least closer to a realistic AR killer app.

Mike Boland has been a tech & media analyst for the past two decades, specifically covering mobile, local, and emerging technologies. He has written for Street Fight since 2011. More can be seen at Localogy.com