Self-Driving Cars Can Be Tricked Into “Seeing” Non-Existent Obstacles


There is nothing more important to self-driving cars than sensing what is happening around them. Like human drivers, autonomous vehicles must be able to make instant decisions.

Today, most self-driving cars rely on multiple sensors to perceive the world. Most systems use a combination of cameras, radar, and LiDAR (light detection and ranging) sensors. On-board computers fuse this data to create a complete view of what is happening around the vehicle.

Without this data, autonomous vehicles would have no hope of navigating the world safely. Cars with multiple sensor systems perform better and are safer – each sensor can act as a check on the others – but no system is immune to attack.

Unfortunately, these systems are not infallible. Camera-based perception systems, for example, can be tricked simply by putting stickers on traffic signs to completely change their meaning.

Our work, from the University of Michigan’s RobustNet research group with UC Irvine computer scientist Qi Alfred Chen and colleagues from the SPQR lab, has shown that LiDAR-based perception systems can be fooled, too. By strategically spoofing the LiDAR sensor’s signals, an attacker can trick the vehicle’s LiDAR-based perception system into “seeing” a non-existent obstacle. If this happens, a vehicle could cause an accident by blocking traffic or braking suddenly.

LiDAR-Based Perception Systems Have Two Components

The LiDAR sensor itself and a machine learning model that processes the sensor’s data. The LiDAR sensor calculates the distance between itself and its surroundings by emitting a light signal and measuring how long it takes for that signal to bounce off an object and return to the sensor. This round-trip duration is known as the signal’s “time of flight”.
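The time-of-flight arithmetic described above can be sketched in a few lines. This is an illustrative toy calculation, not real LiDAR firmware; the pulse duration used here is just an example value:

```python
# Toy time-of-flight ranging calculation (illustrative sketch only).
C = 299_792_458  # speed of light in a vacuum, m/s


def distance_from_tof(round_trip_seconds: float) -> float:
    """Distance to the reflecting object. The pulse travels out and back,
    so the one-way distance is half the round trip times the speed of light."""
    return C * round_trip_seconds / 2


# A pulse that returns after 200 nanoseconds implies an object ~30 m away.
print(distance_from_tof(200e-9))  # ≈ 29.98 m
```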

A LiDAR device sends tens of thousands of light pulses per second. Its machine learning model then uses the returned pulses to paint a picture of the world around the vehicle. It’s similar to how a bat uses echolocation to work out where obstacles are at night.

The problem is that these pulses can be spoofed. To fool the sensor, an attacker can aim their own light signal at it. That’s all it takes to confuse the sensor.

However, spoofing the LiDAR sensor into “seeing” a “vehicle” that is not there is more difficult. To succeed, an attacker must precisely time the signals fired at the victim LiDAR. This has to happen at the nanosecond scale, since the signals travel at the speed of light. Even small timing differences will stand out when the LiDAR calculates distance from the measured time of flight.
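To get a feel for why nanosecond precision matters, consider a rough sketch of the timing arithmetic. The distances below are hypothetical examples; the point is how narrow the timing window is and how much a small error shifts the fake object:

```python
# Illustrative timing arithmetic for LiDAR spoofing (not an attack tool).
C = 299_792_458  # speed of light, m/s


def spoof_delay(fake_distance_m: float) -> float:
    """Delay (in seconds) after the victim's pulse fires at which a spoofed
    return must arrive to fake an object at the given distance."""
    return 2 * fake_distance_m / C


def range_error(timing_error_s: float) -> float:
    """Distance error in the computed range caused by a given timing error."""
    return C * timing_error_s / 2


# Faking an obstacle 10 m ahead means hitting a window about 66.7 ns
# after the victim's pulse fires...
print(spoof_delay(10))     # ≈ 6.67e-8 s
# ...and being off by just 10 ns shifts the fake object by about 1.5 m.
print(range_error(10e-9))  # ≈ 1.5 m
```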

If an attacker successfully fools the LiDAR sensor, they must also fool the machine learning model. Work done at the OpenAI research lab shows that machine learning models are vulnerable to specially crafted signals or inputs – so-called adversarial examples. For example, specially crafted stickers on traffic signs can deceive camera-based perception.

We found that an attacker can use a similar technique to generate perturbations that work against LiDAR. These would not be visible stickers, but specially crafted fake signals that trick the machine learning model into thinking there’s an obstacle when in fact there isn’t.
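The core idea behind such perturbations can be illustrated with a toy example. The sketch below uses a made-up linear scorer in place of a real LiDAR perception model, and a fast-gradient-sign-style nudge (all weights and inputs are hypothetical); real attacks optimize against far more complex models:

```python
# Toy illustration of an adversarial perturbation against a stand-in
# "obstacle detector" (a hypothetical linear scorer, not a real model).
def score(x, w, b):
    """Toy obstacle score: a positive score means 'obstacle detected'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b


def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)


def fgsm_perturb(x, w, eps):
    """Nudge each input feature by eps in the direction that raises the
    score fastest. For a linear model, the gradient w.r.t. x is just w."""
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]


w = [0.5, -1.0, 0.8]   # hypothetical model weights
b = -1.0
x = [0.2, 0.4, 0.3]    # benign input

adv = fgsm_perturb(x, w, eps=0.6)
print(score(x, w, b) > 0)    # False: benign input, no obstacle reported
print(score(adv, w, b) > 0)  # True: small perturbation triggers "obstacle"
```

The perturbed input differs from the benign one by a bounded amount per feature, yet flips the model’s decision – the same principle, scaled up, lets crafted fake LiDAR returns masquerade as an obstacle.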

The LiDAR sensor will feed the attacker’s fake signals to the machine learning model, which will recognize them as an obstacle.

Adversarial examples – spoofed objects – can be crafted to meet the machine learning model’s expectations.

For example, an attacker could fabricate the signal of a truck that is not moving. Then, to carry out the attack, they might set it up at an intersection or place it on a vehicle driving in front of a self-driving car.

Two Possible Attacks

To demonstrate this spoofing attack, we chose an autonomous driving system used by many car makers: Baidu Apollo. This product has more than 100 partners and has reached mass-production agreements with several manufacturers, including Volvo and Ford.

By exposing vulnerabilities in autonomous driving perception systems, we hope to sound the alarm for the teams building self-driving technology. Research into new types of security problems in autonomous driving systems is just beginning, and we hope to uncover more possible problems before they can be exploited by bad actors on the road.
