It’s Neural Matching: Google Explains the November Ranking Shakeup
A tweet on Monday from Google Search Liaison Danny Sullivan provides an explanation for the rankings shakeup that has perplexed the local search community since the beginning of November.
In early November, we began making use of neural matching as part of the process of generating local search results. Neural matching allows us to better understand how words are related to concepts, as explained more here: https://t.co/ShQm7g9CvN
— Google SearchLiaison (@searchliaison) December 2, 2019
Sullivan quotes a Twitter thread from March 21 that explains neural matching in more detail. Calling it a kind of “super-synonym system,” he explains that the technique has been used to improve general search results since 2018, by helping Google to “better relate words to searches.”
In Sullivan’s March 21 thread, he offers the example of a user who enters the phrase “why does my TV look strange” in search. Neural matching, according to Sullivan, “helps us understand that a search for ‘why does my TV look strange’ is related to the concept of ‘the soap opera effect.’ We can then return pages about the soap opera effect, even if the exact words aren’t used.”
In case you’re wondering, the soap opera effect is a phenomenon whereby some viewers are put off by the “too lifelike” quality of HD televisions, likening their vivid displays to the videotaped look of soap operas. It’s interesting to note that this example equates two terms that would not ordinarily be thought of as synonyms. After all, there are many reasons why a TV might “look strange” aside from the soap opera effect. If we’re thinking in terms of a Venn diagram, the soap opera effect would be a small circle inside the larger circle of “reasons my TV looks strange.”
But assuming the example is carefully chosen — and it probably is, since the very same example is cited in Google’s announcement of neural matching in 2018 — I imagine it speaks to a larger truth about the technique, which likely operates according to a set of probabilities indicating that many users who say X are fairly likely to really be asking about Y — even if they don’t know what to call it, or aren’t very specific at all in their queries. In the same 2018 announcement, Google’s Ben Gomes writes:
“[W]e’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”
Importantly, neural networks are designed to match inputs to outputs without requiring anyone to write specific rules for doing so. Instead, they operate by means of machine learning: they learn from examples, reinforce the associations that lead to satisfied searchers, and improve as they process more data. The end result is that concepts are matched based on whether they are likely to satisfy the searcher’s actual intent, no matter how that intent is expressed in words.
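Google hasn’t published the internals of neural matching, but the core idea Gomes describes, mapping words to fuzzy concept vectors and comparing the query’s concepts to a document’s, can be sketched in a few lines. The toy three-dimensional vectors below are invented for illustration; a real system would learn embeddings with hundreds of dimensions from massive amounts of data.

```python
import math

# Toy "concept space" embeddings -- hand-picked illustrative values only.
# In a real system these vectors would be learned by a neural network.
EMBEDDINGS = {
    "why does my TV look strange": [0.9, 0.2, 0.1],
    "the soap opera effect":       [0.8, 0.3, 0.1],  # close to the query in concept space
    "best TV deals this week":     [0.1, 0.1, 0.9],  # a different concept, far away
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query, docs):
    """Order candidate documents by conceptual closeness to the query,
    even when they share no exact words with it."""
    q_vec = EMBEDDINGS[query]
    return sorted(docs,
                  key=lambda d: cosine_similarity(q_vec, EMBEDDINGS[d]),
                  reverse=True)

query = "why does my TV look strange"
docs = ["best TV deals this week", "the soap opera effect"]
print(rank_documents(query, docs))
# The soap-opera-effect page ranks first despite sharing no query words.
```

The point of the sketch is that ranking happens in concept space rather than word space: the page about the soap opera effect wins even though none of its words appear in the query, which is exactly the behavior Sullivan’s example describes.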
In Sullivan’s Twitter announcement, he confirmed that the November update, nicknamed Bedlam by local expert Joy Hawkins, now has a formal name conferred by Google: the November 2019 Local Search Update. It’s a little more prosaic than Bedlam, but at least it’s official.
More importantly, Google’s confirmation points to the fact that local search has just taken a huge evolutionary step. No longer are local results being matched to user queries solely on the basis of identifiable ranking factors, such as proximity to the searcher, keywords in business names, primary category of the listing, review count, and so on. That isn’t to say such factors are now unimportant, but they have been augmented by a broader and more general sense of relevance delivered by neural matching.
Hawkins was the first to notice that the November ranking shakeup had more to do with relevance than anything else, writing recently that Google was doing a much better job of “understanding a broader set of search terms that apply to a single business.” Hawkins cited examples of orthodontists and attorneys who were surfacing for relevant searches where they wouldn’t have done so before, because their primary category did not match the search query. Her characterization isn’t too far away from one of Sullivan’s tweets on Monday, where he states, “The use of neural matching means that Google can do a better job going beyond the exact words in business name or description to understand conceptually how it might be related to the words searchers use and their intents.”
In his Twitter comments, Sullivan has been careful to note that neural matching is not the same as RankBrain or BERT, two contemporary Google search technologies related to machine learning. RankBrain, you may recall, is the method by which Google finds optimal results for queries the search engine hasn’t encountered before. BERT, announced just a few weeks ago, is an algorithm that improves Google’s understanding of complex user queries.
Neural matching is likely to prove more significant than either of these technologies, given how broadly it expands Google’s ability to analyze primary and secondary listing signals in order to determine relevance. And yet, according to Sullivan, this is another one of those updates for which you can’t optimize.
The use of neural matching in local search doesn’t require any changes on behalf of businesses. Those looking to succeed should continue to follow the fundamental advice we offer here: https://t.co/tPkyuyMjsP
— Google SearchLiaison (@searchliaison) December 2, 2019
The link Sullivan provides is to Google’s standard help page for ranking well in local searches, which continues to recommend the basics: claim your listing; fill out all relevant information; add photos; respond to reviews.
According to Sullivan, the November 2019 Local Search Update has completed its rollout and is now in effect globally in all languages.