Virtue-signaling, safety ratings, and reframing. These may be the keys to building public trust and removing psychological roadblocks to using and buying autonomous vehicles.
Just as Google’s Waymo driverless cars hit the streets, a new paper has been published in the influential journal Nature, entitled “Psychological roadblocks to the adoption of self-driving vehicles”. Co-authored by researchers at MIT, UCI and the Toulouse School of Economics, the report argues that trust-building should be a clear priority for manufacturers, service providers and regulators seeking to promote the adoption of this technology.
Specifically, the authors identify three psychological roadblocks undermining public trust in autonomous vehicles, and offer psychologically informed techniques to overcome them.
- Psychological Roadblock One: I don’t trust this technology to not harm me
In the event of the unexpected, autonomous vehicles will be programmed to prioritize human safety. But whose safety? The passengers’, pedestrians’, or that of passengers in other cars? As our international SYZYGY AI survey found, most people say autonomous vehicles should be programmed to reduce overall human harm, even if this increases the risk of harm to passengers. But ask them whether they’d ride in such a vehicle, and most say no. This creates an impasse.
How to Clear this Roadblock. Adopt a twin strategy – reframe the debate and deploy Prius-style virtue-signaling.
i) Reframe the debate by comparing the superior safety of driverless cars (however programmed) to human-driven cars.
ii) Deploy virtue-signaling through distinctive branding so that riding in an autonomous vehicle becomes an act of conspicuous consumption, allowing people to display their ethical virtue (minimising overall harm).
- Psychological Roadblock Two: I know it’s irrational, but I’m scared of riding
Like fear of flying, but for cars. Media amplification of rare accidents will also play to cognitive biases that heighten fear. Accident coverage feeds both the ‘availability heuristic’ (the more readily an event comes to mind, the more likely we judge it to be) and the ‘affect heuristic’ (the more something evokes negative feelings, the greater the perceived risk). The practical upshot? AVP – autonomous vehicle phobia.
How to Clear this Roadblock. Manage expectations – promote the technology as being perfected, not perfect – and prepare people for the inevitability of rare accidents, albeit fewer than with human drivers. Once again, compare the superior relative safety of autonomous vehicles with alternative modes of transport – particularly human-driven cars.
- Psychological Roadblock Three: I’m wary of technology I don’t understand
The AI technology in driverless cars is advanced and complex, and we tend to distrust or feel anxious about things we don’t understand, particularly if we cannot predict their behavior. On the other hand, too much information can confuse and also create anxiety.
How to Clear this Roadblock. Help people feel they understand how autonomous vehicles work, using simple metaphors, analogies and models. The authors stop short of suggesting that this could involve giving autonomous vehicles a functionally superfluous Alexa-style ‘virtual driver’ – with a name, a personality and, importantly, a safety record and rating. But psychologically, this simple personification of AI could help put people at ease, create a bond of trust, and help people understand that driverless cars are not, in fact, driverless – they are driven.
We think these insights make a useful addition to our initial recommendations on how to communicate AI to clients and consumers. As the field evolves, we’ll continue to update our recommendations.