Tesla faces lawsuits and investigations over Autopilot car accidents

In March 2018, the driver of a Tesla Model X was killed during his commute on Highway 101 in Mountain View, CA, when his car, traveling at 71 mph, drove head-on into a highway barrier. At the time of the accident, the Model X’s Autopilot feature was enabled, a system Tesla has promoted as capable of assisting drivers through the “most burdensome parts of driving.”

The victim’s family sued Tesla and the State of California in April 2019 over their roles in the accident. The claim against California alleges that the state failed to adequately maintain the traffic barrier struck by the Model X: the barrier had been damaged in another accident 11 days before the crash, and Caltrans had not repaired it in that time.

The claim against Tesla specifically names the Autopilot feature, and Tesla’s marketing of the system, as a central cause of the accident.

It is not the first time Tesla’s Autopilot system has been blamed for a fatal car accident. Since 2016, a series of crashes, several of them fatal, has dealt repeated blows to the company’s argument that Autopilot is safe.

Tesla CEO Elon Musk has gone on record saying that the company will be able to offer fully autonomous vehicles by 2020, should regulators allow it. But the company has missed ambitious deadlines and goals in the past, and safety groups are urging the Federal Trade Commission to investigate the Autopilot system and the marketing surrounding it.

What Tesla claims Autopilot can do, and what drivers believe Autopilot is capable of, don’t line up.

According to Tesla, the Autopilot system is not yet fully autonomous. While that is the company’s stated goal, for now the system is closer to being a driver assist system, with semi-autonomous functionality.

The 2016 Enhanced Autopilot system uses a suite of sensors and forward-facing radar to manage cruise control, adjust speed relative to nearby vehicles, hold a lane, and change lanes safely when necessary. However, it is only one of several systems spread across different Tesla models, each with a varying feature set. Capabilities also differ based on the physical hardware installed in each vehicle and on the over-the-air software updates the company regularly provides. Because the system is in continuous development, Tesla has stated that drivers must remain attentive and always be ready to take control.

However, the lawsuit filed by the family of the Mountain View victim suggests that he likely believed the system to be far more capable. According to a report from the National Transportation Safety Board, the driver’s hands were on the wheel for a total of 34 seconds in the 19 minutes before the crash. In the six seconds before the car struck the damaged crash barrier, his hands were not detected on the wheel.

The lawsuit claims that Tesla’s own advertising described the Model X as ‘state-of-the-art’ and ‘safe.’ It asserts that the car’s automatic emergency braking and lane-keeping systems were defective, and that had they functioned properly, the crash would never have happened. Ultimately, it claims, the driver placed his faith in a faulty system, and the software update Tesla rolled out after the accident was a response to these fatal flaws.

If the claims are true, the victim was one of many Tesla owners with such faith in the vehicles. Drivers have been caught sleeping in their Teslas while the cars traveled at freeway speeds. It may be that many Tesla drivers, lulled into a sense of trust by the ‘Autopilot’ name, believe their vehicles are capable of full autonomy.

Consumer groups and vehicle owners have claimed that Tesla’s marketing is encouraging drivers to over-rely on a currently imperfect system.

Tesla and its Autopilot system have come under fire from critics over the accidents and the lack of safety allegedly inherent in the system. Two nonprofit groups, the Center for Auto Safety and Consumer Watchdog, have called on the FTC to investigate Tesla for deceptive marketing after another accident killed a driver in Florida.

Consumer Reports has had a turbulent recent history with the car maker, famously stripping the Tesla Model 3 of its recommendation over a spate of reliability concerns and manufacturing defects. In May 2019, the organization reiterated its warning that the Autopilot system is less reliable than an attentive human driver.

In testing, CR found that the Autopilot system is slow to respond in heavy traffic and to vehicles approaching rapidly from behind. This can result in a Tesla cutting off other vehicles or making unsafe lane changes and illegal passes, which could cause an accident or draw a ticket from the local highway patrol. In the most extreme cases, the system may attempt a lane change into oncoming traffic, a danger particular to narrow two-lane roads where traffic flows in opposite directions.

Owners of various Tesla models have also taken the company to task for deceptive marketing. In April 2017, a class-action lawsuit was filed against the car maker for misrepresenting the Autopilot system’s capabilities. The owners said they paid an additional $5,000 to have the software added to their vehicles, but alleged that the system did not work as advertised and was dangerous to operate. The company reached a settlement with the aggrieved owners in 2018.

All of this has had little effect on Tesla’s marketing. An upgrade to the Autopilot system was rolled out in 2019 under the name “Full Self-Driving Capability.” The company’s Autopilot page describes the system as being “able to conduct short and long distance trips with no action required by the person in the driver’s seat.” This conflicts with Tesla’s support page for the system, which describes it as a “driver assistance” feature that requires “active driver supervision” and does “not make the vehicle autonomous.”

A March 2019 Tesla Autopilot car accident suggests that one long-standing flaw of the system has yet to be addressed.

Both critics of the Autopilot system and Tesla enthusiasts have argued that the current system has a serious flaw that has gone unaddressed for years.

The first widely publicized fatal Tesla Autopilot crash happened in 2016, when a Model S’s Autopilot system failed to detect a white tractor-trailer crossing a highway. The car drove at full speed into the trailer, shearing off the top of the vehicle and killing the driver.

Three years later, a similar accident happened in Florida in March 2019, when a Tesla Model 3 collided with a tractor-trailer crossing a freeway.

The Model 3, pictured after the fatal 2019 accident.

In both instances, the Autopilot system failed to detect the trailer as an obstacle in the road and made no attempt to slow down or otherwise avoid the collision.

Underriding a tractor-trailer is a comparatively rare type of accident, and one that attentive drivers can avoid, even with little warning, if they react quickly. But the available data from both accidents suggest that the drivers made no effort to override Autopilot to brake or take evasive action.

In the three years between the accidents, Tesla made considerable changes to the Autopilot sensor suite, moving from third-party components to hardware developed in-house. Despite these changes, the same flaw was allegedly still present in Tesla’s Autopilot system at the time of the March 2019 accident.

An anonymous Tesla driver and enthusiast hacker has done in-depth analyses of the uses and flaws of the Autopilot system. In a series of Twitter posts in March 2019, the driver described a situation in which their Tesla failed to register a trailer as an obstacle on the road ahead. Instead, the system classified it as an overhead structure, and it would have attempted to navigate under the trailer had it been in control at the time.

This is the view from one of the cameras aboard a Tesla, with overlays made visible through a software hack. Note that the Autopilot system highlights a route under the trailer; had the driver not been in control, the system might have attempted to drive through it.

Despite the Autopilot system’s shortcomings, driver error and neglect were also major factors in fatal Tesla accidents.

When the NTSB investigated the March 2018 accident, it determined that while Autopilot did accelerate the Model X into the crash barrier, onboard telemetry showed the driver took no action to evade the barrier or apply the brakes. An attentive driver would have had ample time to take control of the car from the Autopilot system.

Similarly, in the two fatal trailer-underride accidents, investigations and available data suggest that the drivers could have prevented the crashes, or at least survived them, had they been paying attention to the road.

The NTSB itself cited driver error as a key factor in the 2016 crash, specifically noting that the driver’s overreliance on the Autopilot system was not in line with Tesla’s own warnings and guidance on how to use it.

Tesla does repeatedly state, through onboard warnings, in vehicle owner’s manuals, and on its website, that the Autopilot system is not fully autonomous. The company stresses that the system is still very much “in progress.” The onboard warnings specifically remind drivers, through visual and audio cues, to keep their hands on the wheel and stay alert.

Nearly 6.5 million car accidents were reported in the US in 2017, killing 37,133 people. The Autopilot system is part of an industry-wide effort to make the roads safer through increased automation. But with the technology in rapid development, it is not yet clear whether it can overcome the unpredictable human factors involved.
