Case study: The connected car
A thinking project
Introduction
Today our digital lives are connected to everything and everyone. The device we have in our possession nearly 100% of the time makes our lives so much easier: it’s our wallet, our personal assistant, and now even the key to our car.
With cars becoming even more connected, they can unlock by proximity (phone or key), they know where we are, and some can even come to us (well, sort of). One thing they are getting better at is making our lives easier and safer.
In this article I’m setting out to look at how modern car companies use the in-car and out-of-car experience to make drivers’ lives easier and, above all, safer.
Research
Let’s start with Tesla. It’s said they started the new era of the modern car, and I would completely agree. They own the full experience, allowing users to have small moments of delight when using their car. These can be funny little features or really useful ones, such as Dog Mode or Sentry Mode.
I’m not going to go into depth on automated driving, as there are many variations in how such a feature is approached. Other brands like Audi place different limits on automation; for example, their systems don’t kick in until later in the driver interaction, which helps the driver feel safe while still requiring them to stay aware of their surroundings at all times.
Moving on to the in-car experience, Rivian takes the traditional human-machine interaction in a car, such as a simple indicator stalk, and improves it. They keep everyone’s mental model but add an additional layer in the form of a visual aid showing what setting has been activated, a kind of feedback state. For example, changing the windscreen wiper speed has always been muscle memory, but they take the experience one step further and show the driver what setting they are changing and to what.
Using multiple screens within the in-car experience allows each area to have its own set of purposes. The instrument cluster behind the wheel is for driver information, such as speed, navigation, and warnings/notifications, while the center console screen(s) are made for driver and passenger features, such as navigation, music, phone, settings, and comfort. Splitting the information and functions lets a driver become easily accustomed to what is where, but it also allows drivers to set up the screens so they are custom to them. This is limited at the moment depending on the car manufacturer, but it usually allows choosing the primary page order; so if you always listen to music on the morning commute, the car should begin to learn this and surface music on the center console.
The architecture of information is key: everything that is a priority should be easy to reach. If it could be needed in an instant or at a glance while driving, it should be there. This can be something simple, such as giving navigation greater prominence when prompting the driver to exit a junction, or showing car stats and entertainment when parked up. The order of information is not just about making a driver’s life easier but also about the safety of them and their passengers. If the driver has to hunt for key information, that is a huge error.
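To make that idea concrete, here is a minimal sketch of how context-driven display priority could work. This is purely my own illustration, not any manufacturer’s implementation; the DriveContext fields and widget names are assumptions.

```typescript
// A minimal sketch of context-driven display priority.
// All names (DriveContext, Widget) are hypothetical, for illustration only.

type Widget = "navigation" | "media" | "carStats" | "phone";

interface DriveContext {
  isParked: boolean;
  approachingManoeuvre: boolean; // e.g. a junction exit is coming up
}

// Rank widgets so the most relevant one leads the display.
function prioritise(ctx: DriveContext): Widget[] {
  if (ctx.approachingManoeuvre) {
    // Approaching a manoeuvre: navigation dominates the screen.
    return ["navigation", "media", "phone", "carStats"];
  }
  if (ctx.isParked) {
    // Parked up: car stats and entertainment move forward.
    return ["carStats", "media", "phone", "navigation"];
  }
  // Default cruising layout.
  return ["navigation", "media", "carStats", "phone"];
}

console.log(prioritise({ isParked: false, approachingManoeuvre: true }));
// -> ["navigation", "media", "phone", "carStats"]
```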
With cars having so many variable elements, how the user physically interacts with the HMI is key. A user needs to know what an input does at a glance, or just by touch. An example of this is the new Tesla: they went down the route of removing the physical gear selector in favour of a touch input on the central display, with the aim of using the car’s various sensors to predict what the driver will want to do. Plenty of people either loved or hated the update, but it fundamentally comes back to how people think cars should traditionally work. This input has always been something physical, giving the driver feedback that they have changed something physical within the drivetrain. Now that cars are electric it’s literally just the flick of a switch, but the mental model is already built in, so it can be hard to adjust. There are certain elements we should always be cautious about changing so that the target market still feels comfortable driving.
This mix between what a physical switch does and what lives on a touch screen is huge in a car. For example, do I really want to be looking down while driving to change the temperature? This could easily be handled by voice commands, but people are not used to those yet.
- Problem
- Approach
- Next Steps
- Learnings
Problem
What do drivers want from the in-car and connected-car experience?
In the next section I will be looking into what a user would like from a car experience, searching for opportunities to help create an easier and more delightful experience.
I would love to dive into user testing, but my time and access for this mini project are limited. My primary task was to find solutions that would improve the user experience, though of course these would all need to be backed up by learning, iterating, and data.
Approach
Customer Journey map
I wanted to create a mini journey map to guide the ideation towards some key opportunity areas worth exploring, such as the post-ordering process.
I decided to pivot my thinking around two user groups: the first is users who love tech and look for the best features; the second is users who traditionally buy a new car every few years but don’t necessarily care about all the tech features, caring more about safety and the drive.
To help put the users into context, I turned the user groups into scenarios that apply to the person using the product.
Scenarios
- They have just got their car delivered (New Customer)
- The driver uses the car for the daily commute (the car is shared)
To help get into the mind of the user, I began to build a general empathy map to see if I could find some common problems in current experiences.
Ideation affinity mapping was used to capture ideas on possible solutions to improve the experience; this was about thinking wide and not limiting the thought process to one specific area.
To help keep this project short and get to ideating, I began to form How Might We’s for two key areas of focus based on the users’ scenarios. I then took them forward into ideating around each problem area in more detail.
From these I began to ideate possible solutions and narrow down key areas to dive into.
I think best in flows, so I chose a single HMW and moved into sketching a journey for one of the key ideas. The key area was how we create an adaptable, personalised experience for different users (this can also mean different profiles within the same car).
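As a thought experiment, here is a small sketch of how per-driver profiles in a shared car might be modelled. Every name and field here is a hypothetical assumption for illustration, not how any manufacturer actually structures this.

```typescript
// A hypothetical sketch of per-driver profiles inside one shared car.
// Field names and defaults are my own assumptions, for illustration only.

interface DriverProfile {
  name: string;
  primaryPages: string[];      // preferred center console page order
  climateTempC: number;        // preferred cabin temperature
  seatPositionPreset: number;  // saved seat memory slot
}

const profiles: DriverProfile[] = [
  {
    name: "Commuter",
    primaryPages: ["music", "navigation", "phone"],
    climateTempC: 20,
    seatPositionPreset: 1,
  },
  {
    name: "Weekend driver",
    primaryPages: ["navigation", "carStats", "music"],
    climateTempC: 22,
    seatPositionPreset: 2,
  },
];

// On unlock, the car could match the approaching phone or key to a
// profile and apply that driver's layout and comfort settings.
function applyProfile(profile: DriverProfile): void {
  console.log(`Welcome back, ${profile.name}`);
  console.log(`Center console order: ${profile.primaryPages.join(" > ")}`);
}

applyProfile(profiles[0]);
```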
Next Steps
From here, the next steps of the project would be prototyping and testing some of these solutions, basing them on one screen for the instrument cluster and one for the center console.
Learnings
Car manufacturers spend a huge amount of time diving into the data and testing across a wide range of users, always iterating to learn and improve the experience as a whole and how it fits into their wider ecosystem.
Due to the limited amount of research and data I could gather, this article and its thinking are very assumption-driven, and therefore the decisions may have wider limitations.
The in-car experience should be easy to use, intuitive, and above all safe, so the HMI within a car needs to build on muscle memory. A car can have more than one driver, and even more driving styles, so it needs to be personalised, enabling users to feel like their car knows them; in return, through our connected devices, they know what is going on with the car at any given moment.
I would love to go into depth across different problem spaces and features, to further iterate and test, experimenting not just on screens but on how we build out an instrument cluster that might have physical buttons between two screens, or other endless possibilities.