Using Images to Spur Location-Based ‘Discovery’
It seems like there are a lot of apps these days that are trying to give insight into the world around you — revealing secret places and tips that others have discovered, and creating e-landmarks where there aren’t any. Foursquare talks about “unlocking the city” through check-ins that reveal where your friends go, Yelp’s users review various establishments and leave their notes, and users of Instagram and Color upload images that can be viewed by location.
Trover is somewhere in between these, seeking to unite photo sharing with location-based discovery. When the app loads, it presents a grid of photographs based on your current location, showing you the closest ones that Trover users have uploaded. As you scroll down through the grid of photographs, you see images that were uploaded from further and further away. Images range from photos of parks or graffiti, to the insides of restaurants, to specific events within a given venue, and users can leave comments about the scene.
The company’s co-founder and CEO, Jason Karas, likens the service to people “leaving breadcrumbs” for one another — places to discover nearby, with notes to give context. All of the content on the site is user-generated, and you can follow specific users or view tips left by everyone.
Street Fight recently caught up with Karas to talk about how the app works, why Trover is different from other photo-sharing sites, and how the location-based “discovery” space is evolving.
What were some of the main issues you had to deal with in creating an image-centric discovery app?
Our first challenge was to build an interface that could capture something to see or do, and help the next person find it and use it really quickly.
The second challenge was: "Okay, we built the tool, but how are you going to get people to use it for that purpose?" People use cameras, mobile phones, and GPS for a huge swath of things. How are you going to incent them and point them to this particular use case: discoveries and things to see and do?
We spent a lot of time building our initial base of users. We went and hand-picked people who were really passionate about cities and who loved to take pictures of things to do. [We wanted to create a community] to say, "Hey, Trover is for this kind of photo. It's for a very specific kind of content."
The third challenge was developing an interface that would make it much more natural and engaging to explore things to see and do in a neighborhood. What am I going to do tonight? Where can I go in the next hour? Where can I go to bring my kids to play? Pretty general questions. What kind of interface would be the most efficient at delivering that kind of information to me? And this is when we designed this interface called a “spatial browser.”
People are really good at pattern recognition. They're really good at synthesizing visual information, so they can decide: "Oh, here is a pocket that looks interesting, I want to drill into this. It looks like a cool part of the neighborhood. Let me see what's going on by drilling down deeper into it." My sense is that these older interfaces just completely strip out that pattern recognition humans are so good at, so they're not as engaging and they're not as good a way to explore.
You’ve announced that you’re planning to integrate with Foursquare. Will Trover use photos from Foursquare’s API?
We could, but we run the risk of becoming just like any other photo site, which is just not as clearly purpose-driven. I think some of the photos in Foursquare would be awesome. Some of the pictures on Foursquare… would be a picture of a chimp drinking a beer, and those are the ones that are not going to help me as much if I’m trying to assess… well, actually, that one might help me.
We just want people to be thoughtful when they are adding content to Trover because it’s specific content. A forklift import of Instagram photos or Foursquare photos into Trover would be bad. It would water down the stuff that we have. Right now, it’s quite focused.
How do you make sure people are uploading photos tied to the places they’re uploading them from?
When somebody uploads a discovery, we do a couple of things: we take a GPS reading at the point where they're uploading it. If it's something they're taking off their camera roll, sometimes the iPhone itself will stamp the photo with the location, so we'll read that little location stamp right off the file. And, if not, our UI will walk them through a few steps and help them put the discovery in the proper place on the map. And then we ask people to flag places if they're not in the right spot. So, another user might go: "Hey, this is actually around the corner, not where you've located it to be. Please adjust it." This is the painstaking stuff and we're trying to be as useful as possible, so getting the location right is totally critical for us.
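The "location stamp" Karas mentions is the EXIF GPS metadata a phone embeds in the photo file. As a rough illustration of what reading it involves (this is a hedged sketch, not Trover's actual implementation; the function name and sample coordinates are invented, and in practice a library such as Pillow would extract the raw tags), EXIF stores latitude and longitude as degrees/minutes/seconds plus a hemisphere reference, which must be converted to the decimal degrees a map pin uses:

```python
# Sketch: converting EXIF-style GPS tags (degrees, minutes, seconds plus a
# hemisphere reference like "N" or "W") into decimal degrees for map placement.

def dms_to_decimal(dms, ref):
    """Convert a (degrees, minutes, seconds) tuple and hemisphere ref
    to signed decimal degrees."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Example with made-up coordinates near Seattle, where Trover is based:
lat = dms_to_decimal((47, 36, 22.5), "N")   # 47.60625
lon = dms_to_decimal((122, 19, 55.2), "W")  # negative: western hemisphere
```

When a photo carries no such tags, the app falls back to the device's live GPS reading or the manual placement flow described above.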
Do you think the check-in behavior (people reporting their location) will continue to be popular over time? Will people become more comfortable with the idea that their phones are being ambiently tracked by various apps?
We've made big strides in the past couple of years in what people are willing to do. You see a bunch of services rolling out now that sort of take your location as a given and ask for your location on an ongoing basis as a way to show you things. Foursquare's Radar is trying this, saying, "Hey, we watch you all the time, so we can actually tell you when one of your friends is nearby." Some say: "That's enough value for me. I'm in. Let's do it. It is so important that I know where my friends are." Or, you know: "It's just not quite enough value for that level of tracking."
It's a crazy question they're even asking, one that could be normal in a year or two. Some people's response now is: "There's no way I'll let you keep a radar on me. That's crazy!" The service is called Radar, so it sounds like I'm a blip on some screen somewhere. The fact that people would even consider that question today, I think, is indicative of the level of comfort that people have with this. It's pretty crazy. There very well may be a world where the value is high enough and the sensitivity low enough that location is just ambient information that you tote around with yourself.
Do you have any plans to integrate other apps’ APIs and data sets into Trover, to layer on more information?
Definitely. We think of the world as a funnel where people learn what options they have and what new things they should try. We're really close to the top of that funnel, where people are looking for a tool for exploration. The user moves down, finds some options, and then reduces that option set from five down to three, and eventually down to one: the thing that they want to go do. It gets very tactical: they need to know the office hours, they need to know the website, they need to know the location. Some of that information we provide, like walking directions. But other, what we call "tactical" details wouldn't make sense for us to go and generate because they're easy to get from other places.
This interview has been edited for length and clarity.