Interacting with the future connected car was one of the subjects covered at the Los Angeles Auto Show’s Connected Car Expo I attended recently. CCE is the annual authoritative pow-wow of automotive and technology experts. It explores the shape of future transportation.
Microsoft was on hand with Bryan T. Biniak, formerly of Nokia's apps business and currently Global VP & General Manager at Microsoft in Silicon Valley. Biniak spoke on where he sees the connected car going, and in his view it shouldn't just be a parallel existence.
Soul of the car
Biniak reckons that the key to making the connected car work is that the car has to understand its occupants. In other words, it needs to listen, watch, and even start to build an understanding of who they are as individuals.
He used the terms "soul of the car" and "soul of the passengers," and said the car has to go beyond co-existence: it has to understand what's important to the occupants.
Advanced mood recognition
Biniak says the car needs to be able to "tune for the experience." In practice, that means audio recognition that identifies what's going on in the passenger compartment.
For example, when kids are screaming in the back, the car should be able to identify the racket and provide a suitable audio program to counter it.
The car shouldn’t just turn the music up louder, he said. Jokes, trivia, or something related to gaming could work, he suggests.
Giving kids in the back something to do is, as we all know, a good solution to brat-like behavior. It works on spouses too, I think.
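A minimal sketch of that "tune for the experience" loop, assuming a stubbed-out audio classifier in place of a real one: the car labels what it hears in the cabin and responds with content rather than volume. All function names and thresholds here are illustrative, not a real automotive API.

```python
def classify_cabin_audio(noise_level_db: float, voices_detected: int) -> str:
    """Toy stand-in for a real cabin-audio classifier."""
    if voices_detected >= 2 and noise_level_db > 80:
        return "kids_screaming"
    if noise_level_db > 70:
        return "loud_conversation"
    return "calm"

def pick_response(audio_label: str) -> str:
    """Counter the racket with content, not louder music."""
    responses = {
        "kids_screaming": "start trivia or an in-car game",
        "loud_conversation": "offer a podcast suggestion",
        "calm": "keep current audio program",
    }
    return responses[audio_label]

# Screaming kids in the back seat:
print(pick_response(classify_cabin_audio(85.0, 3)))
```

The point of the two-stage design is that the response table can be swapped per family without touching the classifier.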
Biniak suggests a system whereby the parent manages the child’s experience from the front seat.
He says a gesture-based interface is the best solution. Conceivably, that could originate in some kind of tablet-like device. But he suggests tying it into a POI, or Point of Interest, recognition element.
In other words, the system would tie the vehicle's known location to mapping data and interesting POIs, such as a notable building, and those elements would automatically appear in order to distract the kids.
Biniak mentioned the dragon-training game School of Dragons, which DreamWorks, the animation studio, has adapted with virtual reality elements.
They took elements from the outside world and transformed them into a real-time game that plays out during a journey.
All of the POIs, as you are driving along, appear in the game. Biniak reckons this is where back-seat gaming should head.
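The POI-to-game idea can be sketched as a simple proximity query: as the car reports new positions, landmarks within a radius are surfaced as in-game objects. The POI list, the radius, and the flat-earth distance math are all simplified assumptions for illustration, not DreamWorks' or Microsoft's implementation.

```python
import math

# Hypothetical landmark database (name, latitude, longitude).
POIS = [
    {"name": "Clock Tower", "lat": 34.05, "lon": -118.25},
    {"name": "Old Bridge", "lat": 34.20, "lon": -118.40},
]

def nearby_pois(lat: float, lon: float, radius_km: float = 2.0) -> list:
    """Return names of POIs within radius_km of the car.

    Uses a crude flat-earth approximation (~111 km per degree),
    good enough over the short distances a radius query covers.
    """
    found = []
    for poi in POIS:
        dist_km = math.hypot(poi["lat"] - lat, poi["lon"] - lon) * 111.0
        if dist_km <= radius_km:
            found.append(poi["name"])
    return found

# As the car passes the Clock Tower, it pops into the back-seat game:
print(nearby_pois(34.051, -118.251))
```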
He also mentioned Magic Leap, the cinematic reality company that is developing a goggles-based system that creates experiences floating around passengers.
Travel-time applications
Biniak said that in-car apps should also tune to where you are when you’re traveling.
He thinks this is particularly important for kids. Apps should start to wind down as the car gets closer to its destination, so the adult can retrieve the tablet from the child, or presumably from a spouse, with a minimum of fuss. Nobody will be in the middle of vital dragon slaying, or other such activity, for example.
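That wind-down behavior reduces to a small state machine keyed off the remaining travel time: free play far from the destination, a wrap-up phase as it nears, then a handover prompt. The phase names and minute thresholds below are invented for illustration.

```python
def app_phase(minutes_to_destination: float) -> str:
    """Map remaining travel time to an in-car app phase."""
    if minutes_to_destination > 10:
        return "free_play"
    if minutes_to_destination > 2:
        return "wrap_up"    # finish the current dragon, save progress
    return "handover"       # prompt the child to return the tablet

print([app_phase(m) for m in (30, 8, 1)])  # prints ['free_play', 'wrap_up', 'handover']
```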
Understanding the individual
Navigation and in-car apps should try to understand the type of individual that the driver is and what’s important to that person, he says.
An example he used was getting gas. Not only should the car warn the driver before running out of gas, as all cars do today, but it should also tell the driver where the nearest station or cheapest gas is, based on the car's location.
In other words, it should differentiate advice between family members. Cost and proximity of gas may be differing priorities for spouses, for example. Everything offered by the car’s search and recommendation functionality should be geared to what the occupant likes, in Biniak’s vision.
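One way to sketch that per-occupant differentiation: each family member's profile weights price against proximity, and candidate stations are scored accordingly. The station data, profiles, and weights are made up for illustration; a real system would draw them from live fuel-price feeds and learned preferences.

```python
# Hypothetical nearby stations (price in $/gal, distance in km).
STATIONS = [
    {"name": "QuickFuel", "price": 4.50, "distance_km": 1.0},
    {"name": "BudgetGas", "price": 3.00, "distance_km": 6.0},
]

# Per-occupant preference profiles: weights sum to 1.0.
PROFILES = {
    "driver_a": {"price_weight": 0.8, "distance_weight": 0.2},  # cost-minded
    "driver_b": {"price_weight": 0.2, "distance_weight": 0.8},  # wants nearby
}

def best_station(profile_name: str) -> str:
    """Recommend the station with the lowest weighted score for this occupant."""
    w = PROFILES[profile_name]

    def score(s):
        return w["price_weight"] * s["price"] + w["distance_weight"] * s["distance_km"]

    return min(STATIONS, key=score)["name"]

print(best_station("driver_a"), best_station("driver_b"))
```

The same two stations produce different recommendations for the two profiles, which is exactly the differentiation Biniak describes.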
Other elements that could be proffered by the car include job searching on the move, and a friend finder, he said.
The smart home should be integrated too, with the car advising the equally sensitive house when it gets close, so the house can warm up, for example. At the same time, the car should advise family members of the driver's arrival time and mood.
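A hypothetical sketch of that car-to-home handshake: once the ETA drops below a threshold, the car queues a pre-heat message for the house and a status update for family members. The message format, recipients, and 15-minute threshold are assumptions for illustration only.

```python
def arrival_messages(eta_minutes: int, driver_mood: str) -> list:
    """Build the notifications the car sends as it approaches home."""
    msgs = []
    if eta_minutes <= 15:
        msgs.append(("home", "pre-heat living room"))
        msgs.append(("family", f"arriving in {eta_minutes} min, mood: {driver_mood}"))
    return msgs

print(arrival_messages(10, "tired"))
```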
The whole thing should be wrapped in night vision, facial and Cortana-like voice recognition—along with the aforementioned intuition, of course.
All I can say is: Are we there yet?
This article is published as part of the IDG Contributor Network.