As a former journalist, I know how hard covering automated driving technology can be. Making such a complex technology comprehensible to a general audience is particularly difficult, especially given the lack of standard language for many aspects and variations of the technology. Like the engineers developing the technology itself, journalists are navigating an immensely difficult topic without an established playbook, figuring things out as they go.
So when we at PAVE highlight a “teachable moment” in media coverage of automated vehicles, we do it not to bash our hardworking media friends but to help journalists everywhere converge on the best ways to communicate these complex issues in an approachable yet accurate manner. We also want to avoid taking sides in controversies about the technology itself, and so we intervene only when we feel that a story has left out information crucial to the public’s understanding.
With that in mind, we turn to a Reuters report that has been generating some controversy on social media. Much of this controversy stems from the murky grey area between systems that are obviously intended to support a human driver and higher levels of automation, where language and technical distinctions are at their most nuanced and unsettled. Rather than passing judgement on Reuters, the companies involved, or their systems, we hope simply to explain the controversy and add more detail than a news wire story can.
On one level there’s a bit of a nomenclature issue here: specifically, Reuters’ use of the term “semi-automated” to describe Ford’s upcoming Active Driving Assist, Tesla’s Autopilot, and Cadillac’s Super Cruise. Though not as egregious as the once-popular term “semi-autonomous,” which Automotive News’s Pete Bigelow once described as “the vehicular equivalent of being a little bit pregnant,” “semi-automated” still sounds similar enough to that increasingly discarded term to invite confusion. Though semantically similar, the AP Stylebook’s preferred terms “partially automated” and (to a greater extent) “driver-assist systems” go further toward avoiding the kind of confusion that might lead a driver to dangerously over-trust a system that is only there to support them.
The complexity deepens when it comes to the distinctions between the three systems that Reuters groups together as “semi-automated.” Ford claims that Active Driving Assist allows for “hands-free driving,” a claim that Cadillac also makes about Super Cruise. (GM is clear in its owner’s manual that Super Cruise is a driver-assistance feature that requires constant vigilance by the driver.) By contrast, Tesla warns drivers that “before enabling Autopilot, the driver first needs to agree to ‘keep your hands on the steering wheel at all times.’” Our interest in this distinction is not to argue that the “hands-free” capability Ford and GM offer makes their systems better or worse, but to illustrate important system-level distinctions that are not necessarily clearly reflected in common terminology or even in tools like the SAE automation levels.
All of these systems are technically SAE Level 2, which means the human is driving and must constantly supervise the system. All three also have driver monitoring systems to ensure this supervision, but whereas Super Cruise and Active Driving Assist use cameras to track the driver’s head position and eye gaze, Tesla uses a torque sensor on the steering wheel to ensure driver attention. This is fundamentally why Tesla warns drivers to keep their hands on the wheel at all times while Ford and GM list “hands-free driving” as a feature: without the driver’s hands on the wheel, Autopilot has no way to ensure that the human driver is paying attention.
There is one other fundamental distinction between Super Cruise and Active Driving Assist on the one hand, and Autopilot on the other: limits on the system’s “operational design domain” (ODD). In plain English, this means that GM’s and Ford’s systems can only be activated on highways that have been mapped and deemed a safe operating domain for the level of technology in the system. These limits enable a higher level of confidence in safe hands-off operation, whereas Tesla’s system is not limited to pre-mapped highways and can in theory be activated in domains it is not designed to handle reliably.
In short, the distinction between a “hands-free” Level 2 system and one that requires the driver to keep their hands on the wheel at all times does not reflect higher levels of automation, but rather design choices concerning driver monitoring and ODD. We would be remiss not to mention that National Transportation Safety Board investigations have cited the lack of operational design domain limits and the shortcomings of wheel torque as a measure of driver attention as factors in multiple crashes involving Tesla Autopilot [see NTSB’s recommendations and automaker responses on these topics here and here], but that is an even more complex and controversial topic beyond the scope of today’s discussion.
The most important thing to remember is that all automation systems on cars available for sale in the United States today are fundamentally intended to assist a human driver who must maintain awareness and control at all times. Beyond that, there is a range of complex design differences that also matter, not because one system is fundamentally better or more advanced than the others but because each needs to be approached with an awareness of its individual capabilities and limitations.
If you’d like to learn more about these issues and other topics related to the challenges of covering automated driving technology, we discuss them with three top journalists covering the space in a PAVE virtual panel.