There’s no doubt that technology has the ability to simplify, improve and enhance our lives. However, it’s crucial that it’s implemented correctly and safely, and there are definitely some issues we need to consider with innovation—especially in the automotive space, where it’s not simply about convenience (and making the 38 hours the average American commuter spends in traffic each year as painless as possible), but safety as well.
Here we dive into a few issues that manufacturers need to take into account when innovating in the automotive space…
1. Danger from a Digital Standpoint
Last month, Wired published a story detailing a harrowing and extremely revealing experience in which two hackers remotely killed a Jeep on the highway, taking control of the automobile away from the writer and leaving him essentially helpless:
As the two hackers remotely toyed with the air-conditioning, radio, and windshield wipers, I mentally congratulated myself on my courage under pressure. That’s when they cut the transmission.
Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed, then slowed to a crawl. This occurred just as I reached a long overpass, with no shoulder to offer an escape. The experiment had ceased to be fun.
Now granted, this was an experiment, and the reporter had volunteered for the dangerous drill, but it did start a serious conversation about the issues that arise as more and more automakers add wireless connections to vehicles’ internal networks—namely, the possibility of wireless carjacking (evidenced further when hackers cut a Corvette’s brakes via a simple gadget). This controversial issue has caused Congress to push for more restrictions and even led Chrysler to recall 1.4 million vehicles.
The bottom line: In the digital age, vehicle safety goes far beyond seatbelts, airbags and the car’s physical form.
2. The Distracting Elements of Touch-Screen Interfaces
While touch screens have become an increasingly expected (one might say ubiquitous) feature in vehicles rolling off the assembly line, in terms of interfaces, this isn’t necessarily the best practice. After all, touch screens are terrible for one very important use case: the case where you’re not looking at the screen. And when driving, clearly your eyes should be on the road.
Far from making our roads safer, the currently predominant mode of in-car human-machine interaction is actually harder to use and requires vastly more attention than did the more tactile and mechanical buttons and knobs of the past. Of course, the scope of our interactions is now much broader, and these touch screens afford us never-before-possible options—but often, that presents more of a problem than a solution.
Earlier this year, an article in The Guardian examined the possibilities—or potential consequences, rather—of implementing phone interfaces in cars:
I know how distracting it is trying to change the radio stations, or get the heating levels right in a car; and we read about enough crashes where people were on the phone, or replying to a text. Just as worrying are the commenters on the video, who sing the praises of Android Auto because it has more things you can do at any time on each screen. That really misses the point of good interface design, which must be sensitive to context; offering lots of choices at a time when you want very few is bad, not good design… It’s not a problem if you walk along a street buried in your smartphone. In a car, it can be lethal.
To sum it up, this technology, while seemingly progressive, might not be as convenient or safe as we think.
3. Unnecessary “Upgrades”
We won’t spend too much time on this topic, as it’s something we’ve written about before, but there’s an increasing temptation for companies in all industries—including automotive—to step up the fancy factor and implement technology that, while impressive in its complexity, goes against everything good design is about.
In other words, it doesn’t streamline the user experience, and in many cases, it even makes life more complicated than it was before.
This is a concept that’s illustrated beautifully in Golden Krishna’s The Best Interface is No Interface, in which the author details one case in which the unlocking of a car door went from a two-part process to a process involving 13 steps. (Read more about that here.)
This is a point that we’ll continue to drive home, because user-centered technology is what we at Chaotic Moon are all about.
“As vehicles get more connected and more and more divergent technologies are integrated into our cars, it’s important for automakers to stop themselves from getting carried away,” said Chaotic Moon CEO Ben Lamm. “It’s not about more technology, it’s about the right technology, and we need to remember that the end goal of consumer-oriented tech is simple: it’s to make the user’s life easier.”
4. Automation Risks
While automation in many cases equals convenience (and the concept of cruise control is, granted, certainly nothing new), there are important questions to consider with automation: How do you alert the driver when they need to take control? How do you keep the driver engaged so that they’re ready to do so at a moment’s notice? There are definitely potential issues with a vehicle taking complete control and allowing its passenger to zone out and, say, spend their commute reading The New York Times or playing Candy Crush on their iPad.
An article for the American Psychological Association details this problem:
For some people, automation might lead to complacency, says Nicholas Ward, PhD, a human factors psychologist in the department of mechanical and industrial engineering at Montana State University. Drivers who put too much trust in automation may become overly reliant on it, overestimating what the system can do for them.
In other words, if we rely on our vehicles to take care of everything, will we be ready to take the wheel when necessary?
5. Ethical Considerations
While Google’s self-driving cars have a notably impressive record (they had clocked 700,000 accident-free miles by spring 2014, and by May 2015 had been involved in only 12 accidents), there are some definite questions to consider as more and more self-driving cars take to the roads—especially of the ethical variety.
One article in Mashable illustrates this issue well:
At a recent industry event, [Chris Gerdes, a professor at Stanford University,] gave an example of [a] scenario: a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.
“As we see this with human eyes, one of these obstacles has a lot more value than the other,” Gerdes said. “What is the car’s responsibility?”
Gerdes pointed out that it might even be ethically preferable to put the passengers of the self-driving car at risk. “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? These are very tough decisions that those that design control algorithms for automated vehicles face every day,” he said.
So can we allow machines to make these decisions for us? Is artificial intelligence of this sort capable of making what we would dub the “right” decision? Is the implementation of this technology–which does have the potential to save lives–ultimately more helpful or harmful, right or wrong? The Mashable article essentially ends with that question:
[Bryant Walker-Smith, an assistant professor at the University of South Carolina who studies the legal and social implications of self-driving vehicles] adds that, given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”