Google’s AI Chops and Local Search

ChatGPT’s celebrants have lauded it as a Google killer, or questioned why Google didn’t get to generative AI first. But there’s a problem with that narrative: Google has been all over AI for years.

Its launch of Bard this week notwithstanding, Google is arguably the best-positioned company for AI, given the search index and knowledge graph it’s been building for almost 25 years. That data repository is the best AI training set anyone could ask for. The AI engine that runs on top of it can be built or bought, and Google can do both.

And it’s already gotten started, given years of machine learning research and resulting milestones like its Tensor chips and the Transformer architecture. More recently, it announced the AI-powered Multisearch feature, which lets users search with a combination of images (via Google Lens) and text (e.g., “show me the same dress in yellow”).

Multisearch is powered by a flavor of AI called the Multitask Unified Model (MUM), which parses and processes data across a variety of formats to infer connections, meaning, and relevance (Google’s jam). In Google’s case, those formats include text, photos, and video, all of which it has, again, spent the past 20+ years indexing.
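To make that multimodal idea concrete, here is a minimal sketch using the openly available CLIP model from Hugging Face Transformers. This is not MUM (whose internals Google hasn’t published); it simply shows how one model can score an image against candidate text descriptions in a shared embedding space. The photo path and captions are hypothetical.

```python
# Illustrative only: cross-modal relevance scoring with CLIP, a stand-in for
# the kind of multimodal model described above (not Google's MUM).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dress_photo.jpg")  # hypothetical photo of a dress
captions = ["a yellow dress", "a red dress", "a pair of sneakers"]

# One forward pass scores the image against every candidate caption.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-to-text similarity; softmax turns it into a
# probability-like ranking over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.2f}")
```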

Combining visual and text search

This week, Google pushed the ball forward. At its press event in Paris, the company announced that Multisearch is now available on any mobile device that supports Google Lens. For those unfamiliar, Lens is Google’s feature that lets users search based on image inputs instead of keywords.

So essentially, Multisearch is the marriage of visual search and text search. The two modalities come together to offer optionality for users who may be more inclined toward one or the other. For example, sometimes it’s easier to search with images or a live camera feed for fashion items you encounter in real life.

But starting a search in that visual modality only gets you so far. Lens is essentially Google’s latest take on reverse image search (not a new concept), which returns visually similar results or products. The ability to then refine or filter those results with text (per the yellow dress example above) is the key.
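Here’s a rough sketch of what that refinement step can look like, again using CLIP as a stand-in rather than anything Google has disclosed about its own pipeline: a visual query is blended with a text modifier, and the combined vector re-ranks a small, entirely hypothetical product catalog.

```python
# Illustrative sketch: refine a reverse-image search with a text modifier by
# blending image and text embeddings in CLIP's shared space. File paths and
# the catalog are hypothetical; this is not Google's Multisearch pipeline.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    images = [Image.open(p) for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return F.normalize(feats, dim=-1)

def embed_text(text):
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return F.normalize(feats, dim=-1)

catalog_paths = ["catalog_dress_1.jpg", "catalog_dress_2.jpg", "catalog_dress_3.jpg"]
catalog = embed_images(catalog_paths)  # precomputed in a real system
query_img = embed_images(["street_photo_of_dress.jpg"])
query_txt = embed_text("yellow dress")

# Blend the visual query with the text modifier, then rank the catalog by
# cosine similarity (dot product of normalized vectors).
query = F.normalize(0.5 * query_img + 0.5 * query_txt, dim=-1)
scores = (catalog @ query.T).squeeze(1)
for path, score in sorted(zip(catalog_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")
```

In a production system, the catalog embeddings would live in a vector index rather than in memory, but the ranking idea is the same.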

Google also noted that it’s working on a new version of Multisearch that will launch image searches from wherever you are on your phone. Known as “search your screen,” it brings Google Lens outside of Google’s walls and into the other apps and experiences on your phone.

The local version of next-gen AI search

Multisearch also comes in a local flavor, known as Multisearch Near Me, which applies all of the above underlying tech to local search. So, when searching for that same yellow dress, users can add a layer of attributes related to proximity. In other words: where can I buy this item locally?
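Conceptually, the “near me” part is a geo-filter applied after the product match: keep the stores that stock the item and sort them by distance to the user. A minimal sketch, with entirely hypothetical coordinates and inventory flags:

```python
# Toy "near me" layer: filter stores that stock a matched item and sort by
# great-circle distance to the user. All data below is hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

user_lat, user_lon = 40.7411, -73.9897  # hypothetical user location
stores = [
    {"name": "Store A", "lat": 40.7380, "lon": -73.9855, "has_item": True},
    {"name": "Store B", "lat": 40.7505, "lon": -73.9934, "has_item": False},
    {"name": "Store C", "lat": 40.7264, "lon": -73.9818, "has_item": True},
]

nearby = sorted(
    (s for s in stores if s["has_item"]),
    key=lambda s: haversine_km(user_lat, user_lon, s["lat"], s["lon"]),
)
for s in nearby:
    dist = haversine_km(user_lat, user_lon, s["lat"], s["lon"])
    print(f"{s['name']}: {dist:.1f} km away")
```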

Beyond the fashion-discovery use case, Google has expanded into food-based searches to satisfy food lust with local restaurants. For example, see a dish you like on Instagram, use that image to identify the dish with Google Lens… then use Multisearch Near Me to find it (or similar fare) locally.

And once again, Google is uniquely positioned to pull this off. Though upstarts like OpenAI may have superior AI engines, do they have all that local business and product/inventory data? Google is one of the few entities that has such data (to an extent), given Google Business Profiles and Google Shopping.

As for the global rollout of all the above, Multisearch (and its Near Me counterpart) was previously made available in the U.S. in October before expanding to India in December. Starting today, it’s available in all geographies and languages; users only need a mobile device that runs Google Lens.

Mike Boland has been a tech & media analyst for the past two decades, specifically covering mobile, local, and emerging technologies. He has written for Street Fight since 2011. More can be seen at Localogy.com