The Fatal Tesla Crash and Medical Device Risk

There was a tragic car crash two months ago involving a Tesla Model S while the driver had the car in its “Autopilot” mode. The driver lost his life after hitting a tractor trailer making a turn in front of him; Tesla stated that neither the driver nor the car recognized the looming hazard, so the brakes were never applied. Not surprisingly, this first fatal crash involving the Autopilot feature has generated a lot of discussion about the many “self-driving” technologies, driver inattention and responsibility, comparisons to human-driven crash rates, liability, and other issues related to the development and use of autonomous vehicles – certainly all areas outside of my expertise. But while reading about the crash, a quote from an engineering professor at Duke University caught my attention for two reasons. Dr. Missy Cummings, whose research includes human-unmanned vehicle interaction, was quoted in a CNN article as saying, “If we know cars moving high speeds on highways have potential deadly blind spots under autopilot, then the onus on engineers is to either fix the software or turn it off… We should not accept situations where manufacturers know about technology problems, warn drivers about the limitations in an owner’s manual and then shift the blame to the driver when a predictable accident happens.”

First, for those who might have read my article on residual risk, this quote perfectly describes the rationale behind some of the changes made four years ago to ISO 14971 – Medical devices – Application of risk management to medical devices. As I mentioned, “ISO 14971 certainly allows the inclusion of ‘user information’ in the assessment of a device’s risk. But if the assessment shows that the residual risk (what is left after applying all the risk control measures) is too high, the company can no longer lower their risk value to an acceptable level just by telling the user about it.” In this instance, “user information” would be a warning in the owner’s manual or on the Tesla center console display. While ISO 14971 applies only to medical devices, the concepts of risk management and residual risk certainly apply to any technical industry where a device failure has a high potential for catastrophic outcomes. Other industries can use ISO 31000:2009 – Risk management, although its discussion of residual risk is a little sparse. It would be interesting to know whether Tesla adheres to this standard when assessing product risk.
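To make that rule concrete, below is a minimal, purely illustrative sketch (my own construction, not language from ISO 14971 and not anything known about Tesla’s process) of how a risk file might score a single hazard. Design controls can lower the probability term, while “user information” such as a warning in the manual is recorded in the file but leaves the residual risk score, and therefore its acceptability, unchanged. The scoring scale, threshold, and hazard description are all hypothetical.

# Illustrative only: hypothetical severity-x-probability scoring of one hazard.
# Design controls may reduce probability; "user information" is recorded but
# does not change the residual risk score used to judge acceptability.
from dataclasses import dataclass, field

ACCEPTABLE_RISK = 6  # hypothetical acceptability threshold

@dataclass
class Hazard:
    name: str
    severity: int       # 1 (negligible) .. 5 (catastrophic)
    probability: int    # 1 (improbable) .. 5 (frequent)
    controls: list = field(default_factory=list)

    def apply_control(self, description: str, probability_reduction: int = 0) -> None:
        # A warning or manual entry is logged with probability_reduction=0,
        # so it never lowers the score computed below.
        self.controls.append(description)
        self.probability = max(1, self.probability - probability_reduction)

    @property
    def residual_risk(self) -> int:
        return self.severity * self.probability

    def acceptable(self) -> bool:
        return self.residual_risk <= ACCEPTABLE_RISK

hazard = Hazard("crossing vehicle not detected at highway speed", severity=5, probability=3)
hazard.apply_control("improved sensor fusion", probability_reduction=1)  # design change: risk drops
hazard.apply_control("warning in the owner's manual")                    # user information: risk unchanged
print(hazard.residual_risk, hazard.acceptable())  # 10 False -- a warning alone cannot make it acceptable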

Second, Dr. Cummings states that “the onus on engineers is to either fix the software or turn it off”. Engineers can certainly do this, and their unbiased input is critical to making an informed decision, but the final decision on releasing the software lies with Tesla management. It is a company’s management that ultimately decides the acceptable risk level for its product, since device failures can impact both the short- and long-term viability of the company.

There will always be a risk of a car crash no matter how sophisticated the technology, just as there will always be risks of injury or death with the use of medical devices. So where should Tesla management draw the line on the risks involved in using its Autopilot technology? Arguably, this crash, along with other non-fatal crashes involving the Autopilot feature in the Model S, suggests that the technology may not yet be ready for release to the general population. Regardless of where the line is drawn, the risks that remain even after extraordinary engineering can’t be reduced further by simply telling the driver about them.