Why are driverless cars dangerous?

Self-driving cars are no longer science fiction; they are on the verge of becoming mainstream. The idea is that vehicles will be completely autonomous, with no need for human intervention or manual navigation. However, as with any new technology, especially one that transports human beings from point A to point B, there are inherent dangers involved.

Danger #1 – An Unregulated Industry

Information about the technology is still limited, and although 200 car companies are jumping into the self-driving car space, there are not yet enough solid facts to establish a baseline for safety standards. For now, the industry is unregulated, which is excellent for manufacturers but bad for consumers.

Before purchasing a self-driving car, buyers should first run a VIN (vehicle identification number) lookup and find out as much as they can about the vehicle's history.

Danger #2 – More Accidents Blending Self-Driving and Manual Cars

Currently, American highways and side roads have not been optimized for self-driving cars. Driving is unpredictable, and because a vehicle can only handle situations its software has been programmed to anticipate, accidents and other unforeseen results will happen. The car may not have the software it needs to navigate extreme weather conditions or tricky congestion patterns.

Self-driving cars can also give passengers a false sense of security when, in reality, they should remain extra cautious and ready to take the wheel at any moment should the need arise.

Danger #3 – Vulnerability to Hacking & Remote Control

Any computer device connected to the internet is vulnerable to hacking. These cars rely heavily on the software that runs their components, and if a hacker breaks into the system, they could control virtually every aspect of the car.

Other dangers to be aware of include the theft of private data and even remote access to a cell phone connected to the car via Bluetooth. Self-driving vehicles may also be more susceptible to computer viruses.

Danger #4 – Computer Malfunctions

Most self-driving cars contain not one but 30 to 100 onboard computers, which is a lot of technology in which things can go wrong. The software that runs self-driving cars is admittedly sophisticated, but engineers are still struggling with several difficult challenges: operating smoothly in all weather conditions, correctly controlling the rear-camera sensors, and, most dangerous of all, knowing when to execute a quick stop because someone has stepped into the crosswalk in front of the car. Other concerns that should be solved before these cars hit the road include freeze-ups during autopilot mode and accounting for the unpredictable behavior of other motorists.

The human component also confounds developers of self-driving cars, who have yet to figure out how a vehicle should decide between two bad options, such as hitting a pedestrian or hitting another car. Self-driving cars are supposed to perform better than people, but in these situations "better" is subjective, and a self-driving car may actually be the more dangerous choice.

Danger #5 – Exposure to Radiation

With all the equipment on board, such as GPS, remote controls, power accessories, Bluetooth, Wi-Fi, and music and radio components, drivers will be exposed to higher levels of electromagnetic field radiation. Such exposure has been linked to a range of health complaints, among them high blood pressure, difficulty breathing, migraine headaches, eye problems, exhaustion, and sleeplessness.

Experts predict that self-driving cars will eventually save lives and be safer than manually driven vehicles, and stronger standards could also help keep defective "lemon" cars off the road. However, there is a long way to go before that happens, and the industry will have to solve these issues and develop solid safety guidelines and standards first.

Researchers report that an ordinary object on the side of the road can trick driverless cars into stopping abruptly or into other undesired driving behavior.

“A box, bicycle, or traffic cone may be all that is necessary to scare a driverless vehicle into coming to a dangerous stop in the middle of the street or on a freeway off-ramp, creating a hazard for other motorists and pedestrians,” says Qi Alfred Chen, professor of computer science at the University of California, Irvine, and coauthor of a new paper on the findings.

Chen recently presented the paper at the Network and Distributed System Security Symposium in San Diego.

Vehicles cannot distinguish between objects left on the road by pure accident and those placed there intentionally as part of a physical denial-of-service attack, Chen says. "Both can cause erratic driving behavior."

Driverless cars and caution

Chen and his team focused their investigation on security vulnerabilities specific to the planning module, a part of the software code that controls autonomous driving systems. This component oversees the vehicle’s decision-making processes governing when to cruise, change lanes, or slow down and stop, among other functions.

“The vehicle’s planning module is designed with an abundance of caution, logically, because you don’t want driverless vehicles rolling around, out of control,” says lead author Ziwen Wan, a PhD student in computer science. “But our testing has found that the software can err on the side of being overly conservative, and this can lead to a car becoming a traffic obstruction, or worse.”
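To make that failure mode concrete, here is a minimal, hypothetical sketch in Python of the kind of overly conservative stop rule Wan describes. The names (`Obstacle`, `plan_behavior`) and thresholds are invented for illustration; production planners in Apollo and Autoware are vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_ahead_m: float   # longitudinal distance from the ego vehicle
    lateral_offset_m: float   # 0 = lane center; larger magnitude = farther to the side

def plan_behavior(obstacles, safety_radius_m=3.0):
    """Return 'STOP' if any obstacle falls inside the caution zone, else 'CRUISE'."""
    for obs in obstacles:
        # Overly conservative rule: fires on anything near the lane,
        # even objects sitting harmlessly on the shoulder.
        if obs.distance_ahead_m < 30.0 and abs(obs.lateral_offset_m) < safety_radius_m:
            return "STOP"
    return "CRUISE"

# A cardboard box 2.5 m off the lane center poses no real threat,
# yet this planner halts the vehicle: the failure mode described above.
print(plan_behavior([Obstacle(distance_ahead_m=20.0, lateral_offset_m=2.5)]))  # STOP
```

With a 3-meter lateral threshold, even a box sitting well onto the shoulder halts the car, mirroring the behavior the researchers observed.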

Boxes and bicycles

For the project, the researchers designed a testing tool, dubbed PlanFuzz, which can automatically detect vulnerabilities in widely used automated driving systems. As shown in video demonstrations, the team used PlanFuzz to evaluate three different behavioral planning implementations of the open-source, industry-grade autonomous driving systems Apollo and Autoware.
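Conceptually, a planner-fuzzing tool generates many perturbed driving scenes and checks the planner's decision in each one. The sketch below, which reuses the toy `Obstacle` and `plan_behavior` from the previous example, shows this basic loop of planting benign roadside objects and flagging needless stops; the real PlanFuzz targets Apollo and Autoware directly and is far more sophisticated.

```python
import random

def fuzz_planner(trials=1000, seed=42):
    """Randomly place benign off-path objects and collect needless stops."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        # Every generated object sits OFF the drive path (offset > 1.8 m),
        # so the correct decision for the empty lane is always CRUISE.
        obs = Obstacle(
            distance_ahead_m=rng.uniform(5.0, 50.0),
            lateral_offset_m=rng.uniform(1.8, 5.0) * rng.choice([-1, 1]),
        )
        if plan_behavior([obs]) == "STOP":
            failures.append(obs)  # the planner was scared into stopping
    return failures

found = fuzz_planner()
print(f"{len(found)} benign placements triggered a needless stop out of 1000 scenes")
```

Each flagged scenario is, in effect, a recipe for the kind of physical denial-of-service attack Chen describes: a specific spot where a box or bicycle would freeze the vehicle.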

The researchers found that cardboard boxes and bicycles placed on the side of the road caused vehicles to permanently stop on empty thoroughfares and intersections. In another test, autonomously driven cars, perceiving a nonexistent threat, neglected to change lanes as planned.

“Autonomous vehicles have been involved in fatal collisions, causing great financial and reputation damage for companies such as Uber and Tesla, so we can understand why manufacturers and service providers want to lean toward caution,” says Chen.

“But the overly conservative behaviors exhibited in many autonomous driving systems stand to impact the smooth flow of traffic and the movement of passengers and goods, which can also have a negative impact on businesses and road safety.”

Additional coauthors are from UCLA and UC Irvine. The National Science Foundation funded the work.

Source: UC Irvine
