by Raimund Genes
Transport minister Claire Perry warned this week that hackers may look to disrupt driverless car systems in the future for political or economic ends. “The more we move to technologically assisted forms of transport, whether it’s smart motorways or driver assisted vehicles, there is also a risk of cyber hacking – so we are mindful of that,” she told the Commons transport committee.
It’s not the first time public figures have expressed concerns about our increasingly hi-tech transport systems and I’m sure it won’t be the last. But in focusing on the cyber security risks of driverless cars we risk missing the more important legal and privacy issues which should be addressed first.
Hacking the roads
I’m not suggesting there isn’t a potential danger that some cybercriminals will look to exploit vulnerabilities in on-board computers and smart roadside systems in the future. There are certainly multiple opportunities to cause havoc by remotely controlling cars – from stealing the vehicle itself to ramming or blocking police cars or even holding their occupants to ransom.
However, privacy risks are already accelerating at a rate faster than the likely advent of driverless cars in numbers great enough for hackers to start paying attention. Most modern cars already have a number of in-vehicle computers and sensors, monitoring tyre pressure, emissions, engine temperature and so on. So where is this data sent and stored? Well, usually back to the manufacturer’s HQ. Does the car maker even try to get the driver’s permission before exporting said data? Could technical, car-related data even be ruled personal data, making its collection a privacy infringement?
We don’t know, but it’s a discussion lawmakers need to have now, at a national and European level, before we come to rely on these systems too much. In 15 years’ time it will be too late – rip and replace simply won’t be an option by then.
A new Highway Code
Even more important, however, is for legislators to begin building a legal framework for the manufacture and operation of driverless cars. So many disastrous scenarios present themselves from a legal perspective. Who will be held responsible if a driverless car comes off the road and steers into a group of pedestrians? What if the driver sees what’s happening and tries but fails to override the car? Whose life is more important, the driver’s or the pedestrians’?
I certainly don’t have the answers, but we’d better start asking the questions at a governmental level, in order to work towards building in appropriate checks, balances and controls. In the aviation industry, for example, on-board flight controllers must legally be sourced from three different companies to reduce the risk of a catastrophic failure.
Isaac Asimov formulated his “Three Laws of Robotics” framework governing artificial intelligence as far back as the 1940s, and we must lay down a similar groundwork for driverless cars. Fail to do this now and the industry might have driven off a metaphorical cliff well before the hackers get involved.