Autonomous cars are becoming ever smarter, and the next step is teaching them how to better understand people.

Today’s car talk is all about the rise of autonomous vehicles, but there’s still a little way to go before their artificial intelligence becomes smart enough to figure out that most enigmatic of questions: what humans want.

And if this implies that we expect cars to be mind readers, well actually, we kind of do.

Modern cars are increasingly equipped with vast arrays of sensors that give them a 360° view of their surroundings, along with automatic steering and braking designed to reduce crashes. But they still don't have a handle on the human behaviour and gestures that human drivers intuitively recognise.

For instance, an autonomous car that sees a pedestrian stepping into the road will come to a halt and wait for the person to cross, but it won't recognise that the pedestrian has stopped and is waving the car on.

A company called Perceptive Automata is addressing this particular problem with software that can read the pedestrian’s intent and pass this information to an autonomous car’s robotic brain.

The startup, based in Somerville in the US, says its software gives autonomous vehicles the ability to understand the state of mind of pedestrians, cyclists and other motorists.
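Perceptive Automata's actual models are proprietary, but the general idea can be sketched as a classifier that maps observed pedestrian cues to an inferred intent, which the driving planner then consumes. Everything below, including the cue names, the `estimate_intent` function and the planner's response table, is a hypothetical illustration of that pipeline, not the company's API.

```python
from dataclasses import dataclass

@dataclass
class PedestrianCues:
    """Hypothetical visual cues a perception stack might extract."""
    facing_vehicle: bool    # head/body oriented toward the car
    moving_into_road: bool  # feet crossing the kerb line
    arm_raised: bool        # a wave or hand signal

def estimate_intent(cues: PedestrianCues) -> str:
    """Toy rule-based stand-in for a learned intent model.

    Returns one of "crossing", "waving_on" or "waiting". A real
    system would output calibrated probabilities from a model
    trained on annotated examples of human behaviour.
    """
    if cues.arm_raised and cues.facing_vehicle and not cues.moving_into_road:
        return "waving_on"   # pedestrian is ceding right of way
    if cues.moving_into_road:
        return "crossing"    # yield and wait for them to cross
    return "waiting"         # hold position and keep monitoring

def planner_action(intent: str) -> str:
    """Map the inferred state of mind to a driving decision."""
    return {"crossing": "stop",
            "waving_on": "proceed_slowly",
            "waiting": "proceed"}[intent]
```

The point of the separation is that the planner never sees raw pixels: it acts on an explicit estimate of what the pedestrian intends, which is exactly the layer today's sensor-only systems lack.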

South Korean carmaker Hyundai has invested in this predictive technology, which enables self-driving cars to make quick judgments about the intentions and awareness of people on the street, giving them an unprecedented, human-like intuition.

Hyundai says one of the biggest hurdles facing autonomous vehicles is their inability to interpret the critical visual cues about human behaviour that human drivers process effortlessly. The theory goes that vehicles that understand the world more like humans do will deliver a safer, smoother driving experience.

However, making AI more human gives rise to dystopian pessimism about machine revolts as portrayed in movies like The Terminator and The Matrix. The software guiding these thinking, reasoning cars will have to be carefully managed, and it’s a little more complicated than just programming them not to turn against us.

We also need to make sure they can't be hacked by those with malicious intentions, and we need to teach them to resolve tricky ethical dilemmas.

For instance, if a collision becomes unavoidable, will the autonomous car choose to crash into a pedestrian, or swerve into a tree and kill its passengers?
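One deliberately uncomfortable way to see why this dilemma is so hard is to frame it as the car would: a cost-minimisation problem over possible manoeuvres. The function and the harm weights below are entirely hypothetical; the sketch exists to show that someone, somewhere, has to choose those weights, and that choice is the ethical problem.

```python
def choose_manoeuvre(options: dict[str, float]) -> str:
    """Pick the manoeuvre with the lowest expected-harm score.

    The scores are assumed inputs from upstream prediction; this
    toy policy simply minimises them, which quietly encodes a
    utilitarian ethic that a regulator might or might not accept.
    """
    return min(options, key=options.get)

# Arbitrary illustrative weights, not real risk estimates:
outcome = choose_manoeuvre({
    "brake_hard": 0.3,        # may not stop in time
    "swerve_into_tree": 0.8,  # endangers the passengers
    "continue_course": 0.9,   # endangers the pedestrian
})
# With these weights the car brakes hard; change them and it swerves.
```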

While all this may seem to be in sci-fi realms, research into these complex matters is already well under way. Back in 2015 a symposium called "Autonomous Driving, Law and Ethics" was held in Frankfurt, Germany, where more than 100 experts from business, science, politics and the media discussed the ethical and legal challenges of autonomous cars.

The age of the "thinking" car will be upon us sooner than most people realise, because technological progress is accelerating exponentially. Exponential growth is deceptive: it starts out almost imperceptibly, then explodes with unexpected fury, and we are heading into a near-future period of extremely rapid technological change.
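That "imperceptible, then explosive" shape is easy to demonstrate with a few doublings. The starting level and goal below are arbitrary stand-ins for "capability":

```python
# Double a capability level of 1% until it reaches the goal of 100%.
# Progress looks flat for most of the run, then the final doubling
# alone covers the remaining two-thirds of the distance.
level = 0.01
history = []
while level < 1.0:
    history.append(round(level, 4))
    level *= 2

print(history)  # [0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64]
# Six of the seven doublings happen below one-third of the goal.
```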

Some futurists believe we can expect computers to pass the Turing test, that is, to demonstrate intelligence indistinguishable from a human's, by the end of the 2020s. That's just 10 years away.

By then, we’d better be sure we’ve taught cars some ethics.