NTSB: Tesla’s Autopilot UX played a “major role” in fatal Model S crash [Updated]

[Image caption: The Tesla Model S following its recovery last year from the crash scene near Williston, Florida.]

On Tuesday the National Transportation Safety Board met to discuss 2016's fatal Tesla Model S crash in Florida. The NTSB did not have any major new findings beyond those we reported in June, nor did its conclusions differ from the National Highway Traffic Safety Administration's investigation into the event. At this point the facts are clear: Joshua Brown was overly reliant on Tesla's Autopilot function and interacted with the car only seven times in 37 minutes, all while traveling at 74mph.

The crash occurred on May 7, 2016, on US Highway 27A near Williston, Florida. The road in question is a four-lane highway with the eastbound and westbound lanes separated by a grass median, but it's important to note that it is not a controlled-access highway. As Brown was traveling east, he failed to notice a tractor trailer making a left-hand turn across his path onto a side road, and the truck driver failed to yield to the oncoming car. The Tesla hit the trailer roughly in its middle, a collision that sheared the roof from the electric vehicle, which continued another 300 feet (91m) down the road before slamming into a utility pole and coming to rest a further 50 feet (15m) away in someone's front yard.

As NHTSA found, the Automatic Emergency Braking feature on the Tesla—in common with just about every other AEB fitted to other makes of cars—was not designed in such a way that it could have saved Brown's life. The machine learning algorithms that underpin AEB systems have only been trained to recognize the rear of other vehicles, not profiles or other aspects. The NTSB report did find that a Vehicle-to-Vehicle communication system (V2V) could have alerted both vehicles to the potential danger, but as we have discussed ad nauseam, V2V is still absent from new cars even though the spec is almost 20 years old.

Brown was driving a 2015 Model S, using the original Mobileye-sourced hardware and running Tesla's Firmware 7.1. Although that system works like most other adaptive cruise control and lane-keeping "Level 2" semi-autonomous driving systems offered by other OEMs, Tesla's Autopilot differs in that it allowed the driver to go much, much longer without interacting with the car. The industry standard allows just 15 seconds before the car prompts the driver to interact with the vehicle; fail to do so and the car stops controlling the brakes, accelerator, and steering. Autopilot, by contrast, allows several minutes to pass between prompts, and the NTSB's data reconstruction showed there was no driver interaction for the two minutes leading up to the crash. (Driver interaction in this case is measured by a steering wheel torque sensor.)
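To make that difference concrete, here is a minimal sketch of a torque-based engagement monitor of the kind the paragraph above describes. The class name, thresholds, and timeout values are hypothetical and chosen only for illustration; they are not Tesla's or any other OEM's actual implementation.

```python
import time

# Hypothetical sketch of a torque-based driver-engagement monitor.
# Threshold and timeout values are illustrative only.
TORQUE_THRESHOLD_NM = 0.5   # minimum steering torque counted as "hands on"
PROMPT_AFTER_S = 15.0       # industry-style timeout; Autopilot 7.1 reportedly allowed minutes

class EngagementMonitor:
    def __init__(self, prompt_after_s: float = PROMPT_AFTER_S):
        self.prompt_after_s = prompt_after_s
        self.last_interaction = time.monotonic()

    def update(self, steering_torque_nm: float) -> str:
        """Return 'ok', 'prompt', or 'disengage' based on time since the last torque input."""
        now = time.monotonic()
        if abs(steering_torque_nm) >= TORQUE_THRESHOLD_NM:
            self.last_interaction = now
            return "ok"
        idle = now - self.last_interaction
        if idle < self.prompt_after_s:
            return "ok"
        if idle < self.prompt_after_s * 2:
            return "prompt"       # warn the driver to put hands on the wheel
        return "disengage"        # stop controlling steering, brakes, and accelerator
```

The point of the sketch is the single tunable number: set `prompt_after_s` to 15 seconds and the car nags almost immediately; set it to several minutes and a driver can go a very long way at 74mph without touching anything.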

Level 2 systems like Autopilot are not meant to replace a human driver; although they will handle the steering, acceleration, and braking of the vehicle, the human driver remains responsible for situational awareness at all times. Since the fatal crash, Tesla has modified Autopilot. The relationship with Mobileye broke down, and Tesla is now developing its own sensor suite (HW2) and software (AP2), which has yet to match the functionality of the earlier Mobileye-based system.

Tesla has repeatedly tweaked the operating conditions of Autopilot in response to drivers "doing crazy things." Now, instead of just giving the driver warnings every few minutes and taking no further action, Autopilot will disable itself after three ignored prompts within an hour. However, even with the most recent firmware, Tesla's cars will still let drivers go between one and three minutes without human interaction when traveling above 45mph, sufficient time for Brown's crash to have happened. We should note that Tesla's instruction manual does tell owners that Autopilot requires the human driver to be ready to take over control at any time, and the company has repeatedly stressed that Autopilot is not meant to be a self-driving system when asked about this crash or other misuse of Autopilot.
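A rough sketch of that "three ignored prompts within an hour" lockout, layered on top of the prompt logic above, might look like the following. Again, the names, the rolling window, and the strike count here simply restate the behavior described in this article; they are not Tesla's actual firmware logic.

```python
import time
from collections import deque

# Hypothetical sketch of a strike-based lockout for ignored hands-on prompts.
STRIKE_WINDOW_S = 3600    # count ignored prompts over a rolling hour
MAX_IGNORED_PROMPTS = 3   # after three ignored prompts, disable Autopilot

class PromptLockout:
    def __init__(self):
        self.ignored_prompts = deque()   # timestamps of ignored prompts
        self.locked_out = False

    def record_ignored_prompt(self) -> bool:
        """Record an ignored prompt; return True if Autopilot should be disabled."""
        now = time.monotonic()
        self.ignored_prompts.append(now)
        # Drop prompts that fell outside the rolling one-hour window.
        while self.ignored_prompts and now - self.ignored_prompts[0] > STRIKE_WINDOW_S:
            self.ignored_prompts.popleft()
        if len(self.ignored_prompts) >= MAX_IGNORED_PROMPTS:
            self.locked_out = True
        return self.locked_out
```

Even with a lockout like this, the gap between prompts is what matters: at highway speed, one to three minutes of inattention is more than enough time for a crossing truck to appear.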

Among the NTSB's findings is that "steering wheel torque is a poor surrogate measure for driver engagement," and it is telling that all of the Level 3 systems Ars has seen involve some form of driver-facing camera and gaze tracking to monitor driver awareness. Additionally, these Level 3 cars, which are designed for lengthy periods of driver inattention, are geofenced so that they will only work on divided highways where an accident like this one would not be possible. (Both safety features are also found in Cadillac's new Super Cruise system.) Finally, this specific problem, confusion over whether a human or a machine is in control, is the reason why some car makers have decided to skip Level 3 automation entirely.
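As a rough illustration of that design philosophy, a geofenced, camera-monitored system combines both checks before allowing hands-off operation. The signals and names in this sketch are invented for illustration; real systems rely on detailed HD maps and dedicated driver-monitoring cameras.

```python
from dataclasses import dataclass

# Hypothetical sketch of an engagement gate combining geofencing and gaze tracking.
@dataclass
class VehicleContext:
    road_is_divided_highway: bool   # from the map/geofence layer
    cross_traffic_possible: bool    # at-grade intersections, like US 27A near Williston
    driver_eyes_on_road: bool       # from a driver-facing camera

def may_engage(ctx: VehicleContext) -> bool:
    """Allow hands-off operation only inside the operational design domain."""
    if not ctx.road_is_divided_highway or ctx.cross_traffic_possible:
        return False                 # outside the geofence: crossing traffic is possible
    return ctx.driver_eyes_on_road   # require the driver to be watching the road
```

Under a gate like this, a road with at-grade left turns across traffic would never qualify, which is exactly the kind of environment the NTSB faulted Autopilot for operating in.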

As NTSB Chairman Robert Sumwalt noted in conclusion, "Tesla's system worked as designed, but it was designed to perform limited tasks in a limited range of environments. Tesla allowed the driver to use the system outside of the environment for which it was designed, and the system gave far too much leeway to the driver to divert his attention to something other than driving. The result was a collision that, frankly, should have never happened."

Update: In response to the NTSB hearing today, a Tesla spokesperson told Ars that "at Tesla, the safety of our customers comes first, and one thing is very clear: Autopilot significantly increases safety, as NHTSA has found that it reduces accident rates by 40%. We appreciate the NTSB’s analysis of last year’s tragic accident and we will evaluate their recommendations as we continue to evolve our technology. We will also continue to be extremely clear with current and potential customers that Autopilot is not a fully self-driving technology and drivers need to remain attentive at all times.”

https://arstechnica.com/cars/2017/09/ntsb-teslas-autopilot-ux-a-major-role-in-fatal-model-s-crash/