Last night I did a video interview for a project Gabriel Weiner and Dana Karwas are putting together. As we were talking in between segments, Gabe and I got into an interesting discussion about smartphones and what’s next.
The smartphone, and the location-based apps you put on it, seemed so incredibly cool and advanced when they first came out. For the first time ever you had a computer interface between you and the city, one that went with you wherever you went and could give you information about the things and people around you in the moment. That was mind-blowing, really.
And it still is to a certain extent. But when you think about it – and especially when you put your forward-thinking cap on for a minute – this interface is really very primitive. It was exciting because there was literally nothing that came before it (unless you count text-based applications). But it is just the beginning, just a baby step, really. I think we’ve been with the smartphone long enough that we are beginning to grasp this and think about what’s next.
So what is next? What is the future of city-user interaction? (Or augmented city-user interaction, I should say – cities and users have been interacting for as long as there have been cities.)
Here are some quick thoughts:
1) In future city interfaces, the city will become an intelligent part of the interaction. Right now, the user is the only intelligent player in any augmented interaction between city and user. But when the things around you begin to know who you are, where you are, and what you want, that paves the way for interaction that is much more meaningful, much faster, and potentially much deeper. An example of this sort of “intelligent action” from your surroundings is a product I just learned about, called BlueEyes. Developed as a way for the visually impaired to navigate Paris’ subway system, it senses where you are using Bluetooth and gives you directions on how to get to your destination. Kind of like GPS at the micro level, and underground.
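To make the BlueEyes idea concrete, here's a rough sketch of how that kind of beacon-based guidance could work. Everything in it – the beacon IDs, the waypoint names, the station layout – is invented for illustration; the real product's internals are not public in this post.

```python
from collections import deque

# Hypothetical station layout: each Bluetooth beacon ID maps to a
# waypoint, and EDGES lists which waypoints are walkable neighbors.
WAYPOINTS = {
    "beacon-01": "ticket hall",
    "beacon-02": "corridor A",
    "beacon-03": "platform 1",
}
EDGES = {
    "beacon-01": ["beacon-02"],
    "beacon-02": ["beacon-01", "beacon-03"],
    "beacon-03": ["beacon-02"],
}

def next_step(current_beacon, destination_beacon):
    """Breadth-first search for the next waypoint toward the destination.

    `current_beacon` stands in for whichever beacon the user's device
    hears most strongly -- position is inferred from proximity alone.
    """
    queue = deque([[current_beacon]])
    seen = {current_beacon}
    while queue:
        path = queue.popleft()
        if path[-1] == destination_beacon:
            # The first hop after the current position is the
            # instruction the system would speak aloud.
            return WAYPOINTS[path[1]] if len(path) > 1 else "you have arrived"
        for neighbor in EDGES[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(next_step("beacon-01", "beacon-03"))  # → corridor A
```

The point isn't the pathfinding – it's that the environment, not the user, supplies the positioning. No GPS lock, no typed query; being near a beacon is the input.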
2) Once your surroundings become “aware” of you, interaction between cities and users will become less like users reading about their surroundings and more like users in conversation with their surroundings. Information will be going both ways. Rather than just pull from the user, future interfaces are going to be pull and push, from every player and thing you encounter on the street simultaneously. (This will, as you can imagine, have good and bad consequences.)
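The pull-versus-push distinction can be sketched in a few lines of code. This is a minimal toy model, with invented names, not a real protocol: `pull` is the smartphone-era pattern (user asks, city answers), while `subscribe`/`push` is the city volunteering information without being asked.

```python
class Surroundings:
    """Toy model of a 'city' that both answers queries and pushes updates."""

    def __init__(self):
        self.subscribers = []
        self.facts = {"cafe": "open until 22:00"}

    def pull(self, topic):
        # Classic smartphone-era interaction: the user requests, the city answers.
        return self.facts.get(topic)

    def subscribe(self, callback):
        # The user opts in to hearing from the environment.
        self.subscribers.append(callback)

    def push(self, message):
        # The city speaks up on its own, with no request in flight.
        for callback in self.subscribers:
            callback(message)

received = []
city = Surroundings()
city.subscribe(received.append)

print(city.pull("cafe"))                   # pull: user-initiated
city.push("bus 38 arriving in 2 minutes")  # push: city-initiated
print(received)
```

The asymmetry is the whole point: in the pull model nothing happens until the user acts, while in the push model the user's only job is deciding whom to listen to – which is exactly why filtering (point 5 below) matters so much.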
3) The lag on interaction will go away. Current user/city interfaces come with huge lags for any sort of action. You have to open the phone, open the app, send a request for data, wait for it to come back over the network, and then possibly send another request to home in on the results. This amounts to a huge delay for any sort of interaction with your surroundings, sort of like being on a 14.4 modem. I expect that future interfaces will eliminate a lot of these lags, and the experience will become something closer to cable modem, in one way or another (not in download rate, but in user experience).
4) A city/user interface of the future will probably become de-paired from your phone, and may not be something you keep in your pocket. Maybe it will be something you wear on your wrist like a watch? I don’t know – this is getting waaaay out of my domain. I just know that pulling an interface out of your pocket every time you want to use it gets tiring pretty quickly. My guess is it will move out of the pocket at some point.
5) Filtering and permissions will need to be very good. When you’re moving through a city full of things that potentially know who you are, where you are and what you want, you are going to want very tight, very flexible filtering. You aren’t going to want the drugstore on the corner to be bombarding you with coupons as you walk by (and they will be, for sure). But you might want to let friends of friends know who you are. Interfaces in the future will, I think, have to allow for this level of parsing. Otherwise we’ll just have to turn our devices off to escape all of the noise.
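The kind of tiered filtering described above could look something like this. The sender categories and the policy itself are made up for the example – the real challenge would be defining categories like "friend of friend" reliably in the first place.

```python
# Hypothetical permission policy: block commercial broadcasts outright,
# let friends and friends-of-friends through, drop strangers by default.
POLICY = {
    "friend": "allow",
    "friend_of_friend": "allow",
    "merchant": "block",   # no drugstore coupons
    "stranger": "block",
}

def filter_messages(messages, policy=POLICY):
    """Keep only messages whose sender category the policy allows.

    Unknown categories fall through to "block" -- quiet by default.
    """
    return [m for m in messages if policy.get(m["sender"], "block") == "allow"]

incoming = [
    {"sender": "merchant", "text": "20% off toothpaste!"},
    {"sender": "friend_of_friend", "text": "Hey, we both know Dana."},
    {"sender": "stranger", "text": "hello?"},
]
print(filter_messages(incoming))
# only the friend-of-friend message survives
```

Defaulting unknown senders to "block" is the important design choice: in a city where everything can transmit, the interface has to be quiet unless explicitly told otherwise, or we're back to turning the device off.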
Anyway, that’s where I think we’re going in terms of people interacting with their surroundings. Is it too soon for innovators/entrepreneurs to start thinking in terms of next-gen city/user interfaces in their products? Not necessarily. Some parts of this sort of interface may not be so far off into the future. And the way we’ll get there is by iterating on the current slate of products out there.
And if your product happens to be the thing that helps take us there, more power to you.
(If there’s anything I’ve left off in this post that you think will be – or *ought* to be – in the future user-city interface, feel free to post it in the comments below.)
[UPDATE: Less than 24 hours after writing this and hedging about my idea that our "city interfaces" might one day be worn on our wrists, I stumbled across this new post from NYTimes' Bits Blog on that very thing. The future is already here...]