As Visual Search Takes Off, Brands Adapt to Shifting Demands
Never mind the attention that Amazon’s Alexa and Google Home have garnered in recent years. When it comes to driving growth in retail sales, images are where it’s at.
Visual search and image recognition are captivating the attention of investors, retail insiders, and everyday consumers.
Startups in the visual search and image recognition space are pulling in major funding this year. Just last month, ViSenze—an artificial intelligence firm that offers visual search tools for online retailers—raised $20 million in Series C funding, co-led by Gobi Ventures and Sonae IM. Meanwhile, over at the Shoptalk retail conference in Las Vegas this month, the technology company Slyce debuted a new mobile SDK that facilitates a host of visual search solutions for supermarket retailers.
To find out where visual search is heading, and how marketers can adapt their strategies with the latest trends in mind, we checked in with Apu Gupta, CEO of Curalate, a social commerce company that turns images and videos into storefronts. Here’s what he had to say about the future of visual search, what retailers with physical outposts can learn from direct-to-consumer brands, and how his firm has helped big-name clients like Whole Foods, Nordstrom, and Crate & Barrel turn inspirational content into storefronts that live on social, email, mobile apps, and ads.
Q. How are consumer expectations around visual search changing, and what are companies like your own doing to meet those evolving demands?
A. Visual search is increasingly making its way into mobile shopping environments, enabling people to find products similar to the ones they encounter in the real world. That said, visual search is still in its infancy.
What is becoming more commonplace is the application of visual search to create shoppable content that consumers can interact with. These shoppable images are becoming the new storefront and a core part of our content experiences on social. As consumers engage more with shoppable content in channels like social, where they spend a lot of time, they’ll increasingly expect that they can arrive at any image, wherever it is, tap on it, and see what’s inside of it. This will all be enabled by advances in visual search.
Q. How is Curalate’s visual search and image recognition technology different from anything else on the market?
A. A lot of image recognition software in existence is focused on semantic tagging. That is, a machine looks at an image and labels the image with general themes—for example, ‘beach,’ ‘sky,’ ‘water.’ While this is helpful for broadly categorizing content, it’s not particularly helpful in making content shoppable.
To make content shoppable at scale, you need to tag content more granularly, with specific products. Curalate focuses on this. To do it, Curalate has trained its neural net against lifestyle imagery created by thousands of different content creators, including brands, retailers, consumers, and influencers. The result is a large repository of content, shot in numerous creative styles, that has enabled our neural net to detect objects with a high degree of accuracy.
Q. What can you tell me about the work you’ve done with global retailers, like Nordstrom and Crate & Barrel?
A. Brands and retailers use Curalate to monetize visual content. Brands take their investments in inspirational content and turn that content into storefronts that exist in social, on site, in email, in apps, and even in ads. In doing so, these brands are solving a critical problem that exists with e-commerce. E-commerce today is almost entirely focused on intent—on people who know what they’re looking for. But shopping has never only been about intent, it’s also about inspiration.
In stores, people can wander the aisles and discover things they never knew existed. E-commerce sites struggle with discovery, however. Every single one of the over 1,100 brands and retailers we work with has realized that visual content can play a central role in solving this problem.
Q. What are some of the challenges you see physical retailers face when it comes to social commerce?
A. Counterintuitively, physical-store retailers are often among the clients of ours who best understand the discovery problem that social commerce can solve. That’s largely because these retailers were born in an era when physical stores drove product discovery and merchandising a store was all about getting people to browse. These retailers are all contending with customers who are shifting spend from stores to sites, and they’re investing in bringing an inspirational layer to commerce. The one problem these more established businesses contend with, however, is silos: they tend to operate in much more siloed ways than newer, smaller, and more agile direct-to-consumer brands.
Q. How are your visual content strategies different when you’re working with brands with physical storefronts—like Whole Foods or Nordstrom—compared to online-only brands?
A. For the most part they’re similar strategies. However, one big difference is that many of the digitally native brands have a much smaller product assortment. As a result, the challenge is less about introducing people to the breadth of products and more about helping people discover use cases for the products. By seeing the products in numerous environments, consumers can envision the product in their lives and gain the confidence to buy a product that they’ve not previously seen in person.
Q. How do you see visual search and image recognition evolving right now, and where do you think we’ll be in five to 10 years?
A. Image recognition will be tightly woven into most consumer-facing technologies over the next five to 10 years. It’s a tough problem, but there’s so much attention being put towards it that I’m confident within the next decade consumers will be able to arrive at any image, ask ‘What’s that?’ and get a meaningful answer.
This interview has been edited for length and clarity.