Sense360 Launches Developer Platform to Unlock Mobile Sensor Data



Since leaving Telenav last fall, hyperlocal veteran Eli Portnoy has been working hard on a new startup called Sense360, which analyzes data from users’ smartphone sensors (such as light sensors and accelerometers) to more accurately determine a user’s location or physical context. Portnoy, the founder of Thinknear (which he sold to Telenav in October 2012), announced in January that he had raised $2.75 million for Sense360 from a handful of investors, including Seamless founder Jason Finger.

Today Sense360 is officially opening its platform to developers. The company is hoping that the new tool will allow app makers to easily use this sensor data to better engage and retain users — and potentially combat the high churn rates endemic to the mobile application business. Portnoy says the platform, which utilizes and combines information from 10 different smartphone sensors, can help paint a more nuanced picture of user context in real-world situations.

Street Fight caught up with Portnoy recently to talk about the launch, as well as why he thinks that this kind of sensor data is potentially so valuable.

How did you first get interested in smartphone sensors?
I was fascinated while I was at Thinknear by just how much GPS had changed the mobile experience, and all of the cool things that were happening because of it. And what struck me is that over the years more and more sensors have been added to the phone. The question I kept asking is “When would apps start using all of these sensors? When are they going to start building all of these incredible experiences that are now possible because of these sensors?” And it just wasn’t happening.

And it turns out that there are four reasons why using sensor data is really hard today. The first is that every single sensor has a bunch of APIs around it — so just collecting the data is incredibly hard. … And then the underlying sensors themselves have all of these unique elements where they work differently under different conditions. Everyone knows about GPS, where it can be more or less accurate if there are clouds around; but the barometer also works differently depending on how much humidity is in the air. So there are all of these things you have to get smart about. And then you’ve got battery optimization and privacy.

When you combine all of those things, it was too hard for apps and developers to make use of the sensor data.

Which sensors are you utilizing?
We’re working across the entire suite of sensors and we’re doing two things with those sensors. We’re letting you look for events that are only possible to detect using those sensors — like, for instance, you can say “I want to know when one of my users drives into a bar or leaves their office walking.” Or maybe you want to know when someone is in a specific building on the 25th floor. Or “I want to know when a person is sitting with their phone out of their pocket or not.”

Or even just core location. Obviously there are all of these issues around location accuracy — using other sensors you can get a much more granular understanding of location and get accuracy levels way up.
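Sense360 doesn’t publish its SDK interface in this interview, so as a purely hypothetical illustration of the kind of event subscription Portnoy describes above — every type and method name below is invented for this sketch and is not the actual Sense360 API — a trigger-based interface in Kotlin might look roughly like this:

```kotlin
// Purely hypothetical sketch — these types and names are invented for illustration
// and are NOT the actual Sense360 SDK.

// A few of the real-world events described in the interview.
sealed class ContextTrigger {
    data class EntersPlace(val category: String) : ContextTrigger()                         // e.g. "bar"
    data class LeavesPlaceWhile(val place: String, val activity: String) : ContextTrigger() // e.g. "office", "walking"
    object PhoneTakenOutOfPocket : ContextTrigger()
}

fun interface ContextEventListener {
    fun onTrigger(trigger: ContextTrigger, timestampMillis: Long)
}

// An app registers interest in a trigger and gets a callback when the fused
// sensor data indicates that the event has happened.
class SensorEventEngine {
    private val listeners = mutableMapOf<ContextTrigger, MutableList<ContextEventListener>>()

    fun subscribe(trigger: ContextTrigger, listener: ContextEventListener) {
        listeners.getOrPut(trigger) { mutableListOf() }.add(listener)
    }

    // Called by the (unspecified) detection layer once a trigger fires.
    internal fun dispatch(trigger: ContextTrigger) {
        listeners[trigger]?.forEach { it.onTrigger(trigger, System.currentTimeMillis()) }
    }
}
```

In this imagined shape, an app would subscribe with something like `SensorEventEngine().subscribe(ContextTrigger.EntersPlace("bar")) { trigger, ts -> /* react */ }` and leave the sensor fusion to the platform.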

What are some of the ways that you anticipate developers using the platform?
Some very basic ideas include: say you land in an airport and Uber knows to send you a notification asking if you need a car. Or you leave the office and your app knows this and automatically texts your wife to let her know that you’re coming home. Or you sign up for an app and you say “I want to go to the gym five times a week” and the app tracks it — and if you don’t do that it can hold you accountable by posting a note up on Twitter. There are all of these kinds of things you can do, and we want to enable that.
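As a point of reference, the airport scenario can already be approximated on Android with a plain Play Services geofence — the sketch below is generic geofencing rather than Sense360’s sensor fusion, and the coordinates, radius, and receiver class are placeholder assumptions:

```kotlin
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.location.Geofence
import com.google.android.gms.location.GeofencingRequest
import com.google.android.gms.location.LocationServices

// App-defined receiver that fires when a geofence transition happens.
class GeofenceReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // In a real app, this is where the "Need a car?" notification would be posted.
    }
}

// Sketch: ask Play Services to report when the user arrives at an airport.
// Coordinates and radius are placeholders (LAX here); requires location permissions.
fun watchForAirportArrival(context: Context) {
    val geofence = Geofence.Builder()
        .setRequestId("airport_arrival")
        .setCircularRegion(33.9416, -118.4085, 1500f) // placeholder: LAX, 1.5 km radius
        .setExpirationDuration(Geofence.NEVER_EXPIRE)
        .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER)
        .build()

    val request = GeofencingRequest.Builder()
        .setInitialTrigger(GeofencingRequest.INITIAL_TRIGGER_ENTER)
        .addGeofence(geofence)
        .build()

    val pendingIntent = PendingIntent.getBroadcast(
        context, 0,
        Intent(context, GeofenceReceiver::class.java),
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_MUTABLE
    )

    LocationServices.getGeofencingClient(context)
        .addGeofences(request, pendingIntent)
        .addOnSuccessListener { /* geofence registered; receiver fires on entry */ }
}
```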


What kind of impact have the Apple Watch (and wearables, generally) had on the need to do stuff like this?
A really big one. When I think of [Internet of Things] and wearables, I think what we’re fundamentally saying is that we want to move from a pull-based environment to a push-based environment on mobile. Rather than me having to go into my phone or my computer and ask “Where is the nearest restaurant? I’m hungry,” the apps and the technology around us can start detecting this stuff on their own and anticipating our needs. And I think that’s the future that people are really expecting.

What’s the most under-appreciated sensor that you’ve come across as you’ve created Sense360?
It’s not so much one sensor, but I think what’s interesting is when you start combining the sensors and start looking at the derivative data.

For example, we were trying to answer the question “How can you tell whether someone is in a building or not?” Often GPS will tell you a coordinate, but there is some margin of error, and you can’t quite tell whether the person is right outside the Starbucks or right inside. We were looking at the gyroscope and the accelerometer, and all of these different things, when finally we figured out that the best way to get that information was the GPS signal strength — because if it’s really high then they’re outdoors, and if it’s really low then they’re indoors.
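Portnoy doesn’t go into implementation detail here, but on Android the raw ingredient for that heuristic is the per-satellite carrier-to-noise density exposed through GnssStatus. A minimal sketch, with thresholds that are illustrative guesses rather than Sense360’s actual values, might look like this:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.location.GnssStatus
import android.location.LocationManager

// Rough indoor/outdoor heuristic from GPS signal strength, as described above.
// The 25 dB-Hz cutoff is an illustrative assumption, not Sense360's actual value.
@SuppressLint("MissingPermission") // requires ACCESS_FINE_LOCATION at runtime
fun watchIndoorOutdoor(context: Context, onEstimate: (isIndoors: Boolean) -> Unit) {
    val locationManager = context.getSystemService(Context.LOCATION_SERVICE) as LocationManager

    val callback = object : GnssStatus.Callback() {
        override fun onSatelliteStatusChanged(status: GnssStatus) {
            val count = status.satelliteCount
            if (count == 0) {
                onEstimate(true) // no satellites visible at all: very likely indoors
                return
            }
            // Average carrier-to-noise density (dB-Hz) across visible satellites.
            val avgCn0 = (0 until count).map { status.getCn0DbHz(it) }.average()
            // Strong average signal suggests outdoors; weak suggests indoors.
            onEstimate(avgCn0 < 25.0)
        }
    }
    locationManager.registerGnssStatusCallback(callback, /* handler = */ null)
}
```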

Or another thing was when we were trying to figure out how you could tell if a phone was inside someone’s pocket or not. And we finally said to ourselves: there’s an ambient light sensor, which is usually used to figure out how bright to make the screen. But when your phone is in your pocket the ambient light is zero, and when it’s out of your pocket it’s greater — so we can use that information to figure out with almost 100% precision whether a phone is in a user’s pocket or not.
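That check maps almost directly onto Android’s ambient light sensor. The sketch below assumes a simple lux threshold (a production detector would presumably also consult the proximity sensor and accelerometer):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Pocket detection from the ambient light sensor, as described above.
// The lux threshold is an assumption; near-zero light suggests the phone is covered.
fun watchPocketState(context: Context, onChange: (inPocket: Boolean) -> Unit) {
    val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT) ?: return // no light sensor

    val listener = object : SensorEventListener {
        override fun onSensorChanged(event: SensorEvent) {
            val lux = event.values[0]
            onChange(lux < 2.0f) // ~0 lux: probably in a pocket or bag (threshold assumed)
        }
        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* not needed here */ }
    }
    sensorManager.registerListener(listener, lightSensor, SensorManager.SENSOR_DELAY_NORMAL)
}
```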

In thinking about the platform, have you been working backward from specific problems like this?
There are actually a bunch of different steps to this. The first is that we wanted to start with the problems themselves — all of the different things these sensors could tell us. So we wanted to start with 10-15 different use cases like this that we could solve using the sensors.

So we mapped out a whole bunch of location issues we cared about, a whole bunch of activities, and a whole bunch of contexts. And then we went to our engineers to figure out the answers.

But then the second thing was that in order to make this scalable we had to have big data sets that we could work with — and so we started to build an infrastructure so that it wouldn’t be such a manual process. We basically have a listening app that helps us understand these sensors and how they light up based on these different things that we’re trying to detect — and that helps us mold the algorithm.
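The “listening app” Portnoy describes is essentially a labeled sensor logger. A minimal sketch of that idea — the file format, sensor list, and labeling scheme here are assumptions, not a description of Sense360’s internal tooling — registers a handful of sensors and appends each reading, tagged with the ground-truth activity being recorded, to a CSV:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import java.io.File

// Minimal "listening app" sketch: log raw readings from several sensors, tagged with
// the ground-truth label of what the tester is actually doing (e.g. "walking_out_of_office"),
// so detection algorithms can be tuned against the data later. A real logger would batch writes.
class SensorLogger(context: Context, private val groundTruthLabel: String) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val logFile = File(context.filesDir, "sensor_log.csv") // file name is arbitrary

    private val sensorTypes = listOf(
        Sensor.TYPE_ACCELEROMETER,
        Sensor.TYPE_GYROSCOPE,
        Sensor.TYPE_LIGHT,
        Sensor.TYPE_PRESSURE // barometer
    )

    fun start() {
        sensorTypes.forEach { type ->
            sensorManager.getDefaultSensor(type)?.let { sensor ->
                sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_NORMAL)
            }
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // One CSV row per reading: label, sensor type, event timestamp (ns), then the raw values.
        val row = listOf(groundTruthLabel, event.sensor.type, event.timestamp)
            .plus(event.values.toList())
            .joinToString(",")
        logFile.appendText(row + "\n")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* ignored in this sketch */ }
}
```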

It’s a pretty intense process, but if we weren’t doing this for apps, then apps would have to be doing this for themselves.

Some of the ways that you’re talking about the nuances of understanding consumer context remind me a bit of the ways you would talk about Thinknear three and four years ago. It seems like both companies are looking to acknowledge and account for nuance in day-to-day life. Is there some conceptual thinking from your last project that has informed this one?
Absolutely. I think the experience of starting Thinknear and thinking so much about how people use their mobile devices, and how location specifically plays a role, is very important.

I remember one particular point at Thinknear, thinking about how when I’m using my cell phone when I’m at home in bed the things I care about are just so different from when I’m commuting to work in the morning. Just trying to understand those nuances and constantly thinking about those things is in some ways what brought us to Sense360. It is all about where I am, what I’m doing, and what’s around me.

And another big lesson from the Thinknear days was that it’s just not enough to “build it and they will come” when talking about really cutting edge new stuff. … A lot of getting people to understand this is coming to people with very specific use cases. …

We’re doing a lot of underlying infrastructure and tech, but the question shouldn’t be “do you want to use sensors?” It should be “do you want to know specifically what your users are doing so that you can do X, Y, and Z?” It’s a nuanced sale, and there’s a lot of creative thinking around it.

David Hirschman is Street Fight’s co-founder and chief operating officer.
