
Wednesday, September 9, 2015

How Hackers Can Trick Self-Driving Cars


Copyright © 2015 Bold Ride LLC.

There has been plenty in the news about our connected cars being hacked. Some hacks have been about gaining access to information, some have been about breaking in to steal cars, and some have been demonstrations showing that cars can be completely taken over by hackers. If you think your hacking worries will end with autonomous cars, think again.

It turns out autonomous cars are hackable, too, and they can be tricked into seeing things that don't exist. According to The Guardian, hackers can interfere with the lidar in these cars to change what they see. It's not particularly difficult either, requiring only a gadget similar to a laser pointer that costs all of $60.

Image: Google's self-driving Lexus

The attacker can be on any side of the car and use the device to confuse the lidar into registering echoes from objects that aren't there. Jonathan Petit of the University of Cork's Computer Security Group wrote a paper demonstrating how easy this is to do with off-the-shelf parts and a Raspberry Pi or Arduino. It's like a denial-of-service (DoS) attack on the car's tracking system.
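To see why a cheap laser is enough, remember that lidar infers distance from a pulse's round-trip time of flight. A hypothetical sketch (not Petit's actual code, just the basic physics): an attacker who records a lidar pulse and fires it back after a chosen delay makes the sensor report a phantom object at whatever distance that delay implies.

```python
# Hypothetical illustration of lidar spoofing via time-of-flight.
# A lidar converts a pulse's round-trip time into a distance, so a
# replayed pulse delayed by dt looks like an object at c * dt / 2.

C = 299_792_458.0  # speed of light, m/s


def echo_distance(round_trip_s):
    """Distance a lidar infers from a pulse's round-trip time."""
    return C * round_trip_s / 2.0


# A real wall 30 m away produces an echo after roughly 200 ns:
real_echo = 2 * 30.0 / C
print(f"real object:    {echo_distance(real_echo):.1f} m")

# An attacker replays a recorded pulse with a ~33 ns delay,
# conjuring a phantom obstacle just 5 m from the car:
spoof_delay = 2 * 5.0 / C
print(f"phantom object: {echo_distance(spoof_delay):.1f} m")
```

Because the delay is entirely under the attacker's control, the phantom can be placed at any distance the attacker likes, which is what makes the attack cheap: no powerful laser is needed, only precise timing.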

The attack will force the car to stop, as it will think there are objects everywhere and that it is unsafe to move. Lidar is currently used by companies like Google, Audi, Mercedes-Benz, and others on their prototypes, so this is no small issue. Petit's paper will be presented at November's Black Hat Europe security conference. It will be the first opportunity for autonomous technology developers to get a good look at the hack and figure out how they're going to stop this one.
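The denial-of-service effect can be sketched in a few lines. This is a hypothetical toy planner, not any real car's software: if the vehicle refuses to move while any detection sits inside its safety radius, a single spoofed echo is enough to halt it.

```python
# Hypothetical sketch of why phantom echoes act like a denial of service:
# a planner that refuses to move while any detection is inside its safety
# bubble can be halted indefinitely by injected phantom obstacles.

SAFETY_RADIUS_M = 10.0


def safe_to_move(detections_m):
    """True only if no detected object lies within the safety radius."""
    return all(d > SAFETY_RADIUS_M for d in detections_m)


real_scene = [42.0, 87.5]                 # genuine returns: distant cars
spoofed_scene = real_scene + [3.0, 6.5]   # injected phantom obstacles

print(safe_to_move(real_scene))     # True: the car proceeds
print(safe_to_move(spoofed_scene))  # False: the car stops
```

The sketch also hints at why the fix is hard: simply ignoring close-range detections would defeat the attack but also defeat the sensor's whole purpose.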

Image: An autonomous car from Delphi drives on Treasure Island in San Francisco in preparation for a cross-country trip to New York City