
Placemeter CEO: How ‘Computer Vision’ Is Making Our Cities Smarter

28 January 2014 by Steven Jacobs

It’s mid-afternoon in New York, and the Shake Shack in Madison Square Park is bustling. But there’s no need to brave the cold to find out — just ask Placemeter.

Thanks to rapid developments in “computer vision,” a technology that uses machine learning to identify patterns in video streams, a small team of technologists at the Brooklyn-based startup has built a system that uses over 500 public and private video cameras sited throughout Manhattan to measure everything from the crowd in Times Square to the line outside Shake Shack. The brainchild of Alex Winter, a French expatriate who began working as a computer vision researcher nearly two decades ago, Placemeter is one of a number of new technology startups harnessing a growing network of connected devices — the “internet of things” — to measure activity in the real world.
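At its simplest, the pattern-finding Winter describes begins with detecting change between consecutive video frames. The sketch below is a toy frame-differencing score in Python, purely illustrative — the function name, frames, and threshold are invented here, and this is not Placemeter’s actual pipeline.

```python
# Minimal sketch of the simplest signal a video feed can give: frame
# differencing. Frames are tiny grayscale images as nested lists; a pixel
# that changes by more than a threshold between frames counts as activity.

def activity_score(prev_frame, frame, threshold=30):
    """Fraction of pixels that changed noticeably between two frames."""
    changed = 0
    total = 0
    for prev_row, row in zip(prev_frame, frame):
        for prev_px, px in zip(prev_row, row):
            total += 1
            if abs(px - prev_px) > threshold:
                changed += 1
    return changed / total

# Two 4x4 "frames": a bright blob appears in the lower-right corner.
frame_a = [[10] * 4 for _ in range(4)]
frame_b = [[10] * 4 for _ in range(2)] + [[10, 10, 200, 200] for _ in range(2)]

print(activity_score(frame_a, frame_b))  # 0.25 — 4 of 16 pixels changed
```

Production systems layer background-subtraction models and learned classifiers on top of this kind of raw change signal to tell pedestrians apart from, say, passing cars or shifting shadows.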

By the end of 2014, Winter says, the company will process over 2,500 video streams, nearly 18 hours a day, in New York City alone. That amounts to a little more than 82 exabytes of data every day — the rough equivalent of 20 billion DVDs. The feeds come from traffic cameras, security footage, and a growing community of users who point retired smartphones at the goings-on outside their windows.
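For a sense of how estimates like this are built, the back-of-envelope arithmetic below multiplies stream count, active hours, and an average bitrate. The 2 Mbps bitrate is an illustrative assumption, not a figure from the article; the total scales linearly with whatever resolution and compression the feeds actually use.

```python
# Back-of-envelope estimate of daily data volume for a network of video
# feeds. Stream count and hours come from the article; the bitrate is an
# assumed, illustrative figure.
STREAMS = 2500          # number of video feeds
HOURS_PER_DAY = 18      # active hours per feed
BITRATE_MBPS = 2        # assumed average bitrate, megabits per second

seconds = HOURS_PER_DAY * 3600
bytes_per_stream = BITRATE_MBPS * 1_000_000 / 8 * seconds
total_bytes = bytes_per_stream * STREAMS

print(f"{total_bytes / 1e12:.1f} TB per day")  # 40.5 TB per day at these assumptions
```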

In the lead-up to Street Fight’s Local Data Summit in Denver on February 25th, we’re taking a deep dive into the world of local information, speaking with some of the sharpest minds in the industry about using local data to make businesses more efficient and experiences richer for consumers. Street Fight recently caught up with Winter to discuss Placemeter’s vision, and the opportunity to use the internet of things to make cities, businesses, and consumers more efficient.

Talk a bit about the origins of Placemeter and the problem the company aims to solve.
I wanted to find a big problem to solve, and I found it in something that’s becoming increasingly obvious: the growth of cities. Across the globe, cities are exploding, and we need to figure out how to use this limited space more efficiently. The idea was to measure what’s going on in cities, in places, to optimize the way people interact with those locations. With the right data — if we can measure it well — we can tell people when, for instance, Trader Joe’s is busy, and they can adjust when they go shopping. Businesses can decide where to open a new location based on traffic data and other trends.

You’ve been working on computer vision technology for nearly two decades. Talk a bit about how technology has been used in the past.
After the startup crash, a lot of the opportunities for commercial applications dried up. But we found a good market in the defense industry. We teamed up with investigative units that were looking for child pornographers and wanted details from the background of an image to solve an investigation. Very often in child abuse imagery, faces are blanked out or blurred, so facial recognition software doesn’t work. But investigators could use the technology to identify certain patterns and determine where these crimes were committed by analyzing the background — the shape of a plug, say, or another object in the scene.

Let’s talk sensors. There’s a bunch of different companies using a range of sensors, from wi-fi routers to mobile devices, to measure offline activity. Why choose computer vision?
First, there’s simply a lot more data coming into the world thanks to the explosion of smart objects. There’s even more potential data out there, but it’s not there yet. Down the road, when there are more smart devices, we want to be the platform that gathers and packages that data, and reads it to make it useful. But we’re just not at that point yet. That’s why we’re starting with computer vision. It’s a way to hack our way through that environment.

Why couldn’t you build this product five years ago?
Every five to ten years, there’s a big revolution in computer vision. Five years ago there was a big leap with the invention of a kind of technology that made it possible to match planar, or flat, images very accurately. More recently, there’s been a big revolution in deep learning, a paradigm shift from traditional machine learning that allows computers, so to speak, to learn faster.

But there are also other technologies that have enabled smaller companies like ours to play with computer vision in a way that was previously impossible. The combination of improved machine learning algorithms, cheap storage from clouds like Amazon’s AWS, and steps forward in computing power makes it possible to even consider applications like the one we’re building today, at our scale.

Processing power is key here. The more training data you can process (in our case, video feeds), the better the algorithms become. So each step forward in computing power is a step forward for machine learning and, by extension, computer vision.

Unlike store analytics software, which works with individual retailers, Placemeter accesses public video streams. If you’re measuring shared space, who pays for it?
There are literally thousands of applications for this data. Right now, we’re focusing on two different angles. One is to build a handful of consumer applications and release some of the data directly to consumers through a labs division. For instance, we can give you real-time data on wait times, or tell users whether their grocery store is crowded and when it won’t be. In terms of selling the data, we’ve received a lot of interest from municipal governments, many of which are investing heavily in smart-city initiatives.

But like others in this area, we’re also talking with retailers. One thing we can do better than others is competitive intelligence. Retailers are really interested in data about places they don’t own. They might want information about a competitor’s location, or traffic data on a potential site for a future store.

Steven Jacobs is Street Fight’s deputy editor.

Find out more about how big data can be used in a local context at Street Fight’s Local Data Summit, taking place on February 25th in Denver. Learn from and network with some of the top local data experts in the country. Reserve your ticket today!



