In August, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into 12 crashes in which Tesla vehicles operating on Autopilot struck stopped emergency vehicles.
The term ADAS Pilot refers to Advanced Driver Assist Systems that take over steering and speed while the human driver watches the road and supervises, ready to take over if needed at any time. The most famous, and generally praised as the highest performing, is Tesla Autopilot, but other carmakers sell similar products, such as GM’s “Super Cruise.” Tesla’s, though, has gotten most of the attention, and these crashes are part of the reason.
These additional inquiries will help NHTSA and the public understand what’s going on, and the differences among systems. They should point to answers to the question of why this number of incidents has been reported with Tesla vehicles. There are three likely reasons:
- Tesla Autopilot is more popular and used for far more miles than competing products, and so would be expected to be involved in more incidents.
- Tesla Autopilot is inferior at detecting stopped vehicles compared to accepted industry norms, and must improve.
- Tesla drivers are more complacent than usual, not watching the road and thus not doing their job to avoid hitting these vehicles. Tesla’s countermeasures against this complacency may be judged insufficient.
We will be able to compare Tesla with the other products and figure out the right philosophy of design for such systems.
Why these crashes happen
In a typical crash, a vehicle is stopped by the side of the road where the shoulder is narrow, so that it occupies part of the lane, or, where there is no shoulder, it is stopped in the left or right lane. Often this is an emergency vehicle stopped to deal with another stopped vehicle or an accident. It’s a dangerous situation. Traffic is supposed to change lanes early to get around these vehicles. Normally the stopped vehicles are unoccupied, but there is still a serious danger of other vehicles plowing in at highway speeds. This can be a problem for any stopped vehicle but is a particular concern for emergency crews, which must stop this way regularly.
ADAS Pilot cars will crash into such vehicles due to a combination of these factors:
- The system, expected to be imperfect and depend on the driver, does not reliably detect and identify certain stopped vehicles in its path.
- In some cases, a large vehicle ahead of the ADAS Pilot car blocks the view of the stopped vehicle. That large vehicle, seeing the obstacle, changes lanes to avoid it, suddenly revealing it. The ADAS Pilot car may have only a short time to deal with what is suddenly revealed, and may not react in time.
- The driver is not paying sufficient attention to the road — possibly ignoring it altogether — and does not intervene when the system fails to sufficiently brake or swerve.
It is important to understand that these systems, being driver assist rather than self-driving, are not designed to recognize and see everything. It is expected they will miss things. While there are ongoing efforts to make them better and better at identifying any obstacle on the road, the responsibility for supervision and action still lies with the driver. The driver, however, often starts to see themselves as secondary, or not even needed at all.
There is a paradox with such systems. Because a basic cruise control does so little, no driver would ever imagine not paying attention to the road. Adaptive cruise controls do just a little more, but you might be OK with looking away for a while on a straight and empty road.
Once you get a system that steers as well, suddenly it becomes clear it’s not that scary to look away for a bit. People are known to do it even in the most basic cars on straight roads, to fiddle with the radio or navigation, grab something from the back, take off their coat and more. They even write text messages. The ADAS Pilot makes you feel more comfortable doing that, and it is indeed not that dangerous to do for a short time — but a rare event could lead to catastrophe, which is easy to forget. Even so, on an empty road, with peripheral vision to alert us if anybody gets close or the car starts to veer, we do it.
As the system gets even better, making errors only once a day or even less often, some drivers have taken to treating it like a self-driving system, going so far as to look full-time at their phones for long periods. Most go no further than that, but some have been known to even sleep, though that’s pretty rare. There are also those who do stunts, trying to get out of the driver’s seat. Of course, they should not do that, though it’s also true that should you fall asleep unintentionally (a very common cause of accidents), the best possible way to do it would be with an ADAS Pilot engaged. 99% of the time you would have no incident, while one is very common otherwise.
NHTSA and others have termed this situation “automation complacency.” It’s an odd paradox: the better the ADAS Pilot performs, the greater the risk of complacency. Generally we want to encourage better performance. It seems odd to want to put extra burdens, liability or regulation on a system because it is superior. We’ve never regulated adaptive cruise controls or lane-keeping systems at all, but a superior combination of the two has raised new questions.
Recently released research from MIT on driver gaze while using Tesla Autopilot confirms that Autopilot users do indeed spend more time looking at things unrelated to driving (i.e. not the road or the dashboard), and the researchers even built a model that is fairly good at predicting such behavior.
Even Elon Musk admits this happens, tweeting on Sept 19 that: “FSD beta system at times can seem so good that vigilance isn’t necessary, but it is. Also, any beta user who isn’t super careful will get booted.”
To reduce automation complacency, vendors have put countermeasures in place. They attempt to make sure you are still paying attention, and will alert you if you aren’t, and will stop the car and turn off, or even disable the ADAS pilot for a time if you don’t respond to the alerts.
Tesla’s system is fairly simple. It tells drivers to keep their hands on the wheel and apply a little force. This small amount of torque on the steering column lets the car know hands are on the wheel. If you take your hands off the wheel, or keep them on but don’t apply the torque, after a short time visual warnings will appear, followed by an audible one. Eventually the car will slow to a stop if you don’t respond.
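The escalating-alert pattern can be sketched as a simple threshold function. This is an illustrative sketch only: the time thresholds below are invented for the example, not Tesla’s actual values.

```python
# Hedged sketch of an escalating attention countermeasure:
# visual warning, then audible alert, then a controlled stop.
# The thresholds are hypothetical, chosen only for illustration.

def countermeasure_state(seconds_without_torque: float) -> str:
    """Map time since the last detected steering-wheel torque to an action."""
    if seconds_without_torque < 15:
        return "none"            # hands-on torque seen recently, no action
    if seconds_without_torque < 30:
        return "visual_warning"  # flash a message on the display
    if seconds_without_torque < 45:
        return "audible_alert"   # chime to regain the driver's attention
    return "slow_to_stop"        # no response: bring the car to a halt

print(countermeasure_state(10))  # driver recently applied torque
print(countermeasure_state(50))  # driver unresponsive through all alerts
```

A gaze-based system like Super Cruise would feed a different signal (time since eyes left the road) into the same kind of escalation.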
GM’s Super Cruise and some others watch the driver’s eyes. They can spot where you are looking. If you look away from the road for too long, they apply the same pattern of alerts. Such systems can allow you to take your hands off the wheel entirely. This is more relaxing, but means a slightly longer response time putting hands back on the wheel.
Tesla’s system can be “defeated” by keeping one hand on the wheel to apply the force while the eyes and other hand do something else. Beyond that, it can be defeated by attaching a weight to the wheel to apply the constant torque. Some view such defeat as solely the responsibility of the driver who does it. Others believe the automaker has a duty to make the countermeasures hard to defeat. The Tesla system will also not operate if the driver’s seatbelt is not in use, though it does not depend on the weight sensor in the seat. (You would not want it to disengage just because the driver lifted themselves up to take off a coat, for example, or if the sensor failed.)
Recently made Teslas have a camera mounted in the rear-view mirror which can see the driver and their eyes. Tesla has experimented with this camera but has resisted implementing gaze-based countermeasures.
It should be noted that many teams testing real self-driving cars with safety drivers also use gaze monitoring to make sure their paid staff are not shirking duty. This got a lot more attention after a safety driver for Uber ATG watched a video while doing her job, and did not intervene when the car struck and killed a pedestrian crossing the road in Tempe, AZ. Gaze countermeasures would have prevented that.
More extreme countermeasures are possible, and alluded to by Elon Musk in his tweet above. If a driver is clearly not paying proper attention, their ADAS pilot can be disabled — for a short time, for the rest of the trip, for the rest of the day — or even forever, if this was specified in the contract. Fear of this loss could make drivers more attentive. A car could even detect times when the road is clear around it (as it already does for lane changes) and deliberately drift towards a clear lane. A driver who doesn’t notice and intervene could be given a strike, even without gaze monitoring.
Detecting Emergency Vehicles
Many people initially assume that the big problem is that the Tesla system is not detecting these stopped vehicles. ADAS systems do not detect everything, which is why they rely on a supervising driver. The first simple systems, namely Adaptive Cruise Control (ACC) and Forward Collision Warning/Avoidance (also known as Automatic Emergency Braking), were largely radar-based and did not work at all on stopped vehicles in front of you. Radar, of course, gets returns from stopped vehicles, but these are mixed in with returns from all the other “stopped” things in the environment, like the ground, fences, signs, trees and stopped cars in other lanes. Typical automotive radars just ignore anything that’s stopped because you get such pings all the time.
More sophisticated radars, and vision systems, are able to know more about stopped things. High definition maps can record the locations of the large radar reflectors in the environment, so that a car can know a radar ping in front of it is likely not from a sign or bridge, and pay more attention to it. Vision systems don’t see speed at all, except by looking for the same object in multiple frames of video and estimating how it is moving. If a car in front stays the same size from moment to moment, it’s moving at the same speed as you. If it’s getting bigger fast, it might be stopped.
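The frame-to-frame size cue can be turned into a rough time-to-collision estimate. A minimal sketch, with hypothetical pixel measurements standing in for a real detector’s bounding boxes:

```python
# Hedged sketch: estimating time-to-collision (TTC) from apparent size,
# illustrating how a camera-only system infers closing speed.
# All the numbers below are hypothetical.

def time_to_collision(width_prev: float, width_now: float, dt: float) -> float:
    """Estimate TTC in seconds from the apparent width (pixels) of the
    same vehicle in two frames taken dt seconds apart.

    For a vehicle of fixed real width at distance d, apparent width s
    scales as 1/d, so TTC is approximately s / (ds/dt).
    """
    growth = (width_now - width_prev) / dt  # pixels per second
    if growth <= 0:
        # Not closing: the lead car matches our speed or is pulling away.
        return float("inf")
    return width_now / growth

# Lead car holding our speed: size barely changes, TTC is very long.
print(time_to_collision(100.0, 100.1, 0.1))
# Stopped car we are closing on: size grows quickly, TTC is short.
print(time_to_collision(100.0, 105.0, 0.1))
```

A noisy width estimate makes this growth signal unreliable, which is one reason stopped, unusual-looking vehicles are hard for vision-only systems.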
Because of this, current vision systems fail too often at detecting stopped vehicles, particularly if the type or angle of the vehicle is not similar to things the system has been trained on. What we don’t know is whether Tesla performs more poorly than other systems and thus could be judged substandard.
That seems unlikely. Tesla has impressive resources for training its vision systems, the most valuable of which is the fleet of a million privately owned Tesla cars driving the roads running Tesla software. If Tesla wants to capture images of police cars pulled to the side of the road to train its system, it can tell all those million cars to look for similar examples and send them back to Tesla, where they can be labeled and included in the training. I am unaware of any other company that has this ability — not even Intel/MobilEye, which has more cameras in more cars, but does not have the same ability to control them and update their software on demand. It is unlikely that Tesla is well below the state of the art in vision systems. However, having removed radar from its new cars, it is possible that Tesla now suffers for it, though none of the accidents under investigation involved cars without radar.
Even if NHTSA finds that Tesla is as good as or better than its competitors at detecting stopped emergency vehicles, it might well wish to push all the vendors to put extra care into detecting these vehicles. While any crash into any vehicle at the side of the road is cause for great concern, the operators of these vehicles are there doing their jobs, and they are there much more often than ordinary drivers and vehicles. Extra effort could be demanded.
What if the good outweighs the bad?
As all admit, if drivers don’t pay attention to the road, for whatever reason, there is a risk of serious accidents, including these crashes into roadside vehicles. At the same time, there is evidence that drivers who use these systems and do pay proper attention to the road become safer than they are on their own, and that most drivers, most of the time, use them properly.
This creates a philosophical dilemma. If 99% of drivers use the system properly, and it makes them safer by putting “two sets of eyes” on the road (one human and one digital), then that’s a lot of increased safety. It could well increase overall safety more than safety is hurt by the small set of people who don’t pay proper attention, either by accident or deliberately. If such systems were banned or limited because of those abusers, the result could be an overall reduction in road safety, which is not what we want. This is true even if the abusers are very bad. If 99% of drivers are 10% safer, and 1% of drivers are 10 times worse, it’s still a net win to have the system on.
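A quick back-of-envelope check of that claim, using the article’s hypothetical percentages rather than any real data:

```python
# Back-of-envelope check of the fleet-safety argument above.
# Risk is expressed relative to an average unassisted driver (1.0).
# The shares and multipliers are the article's hypothetical figures.

baseline = 1.0

attentive_share = 0.99   # drivers who supervise the system properly
attentive_risk = 0.90    # 10% safer than the unassisted baseline
abuser_share = 0.01      # drivers who stop paying attention
abuser_risk = 10.0       # ten times worse than the baseline

fleet_risk = attentive_share * attentive_risk + abuser_share * abuser_risk
print(round(fleet_risk, 3))  # 0.991: still (barely) below the baseline of 1.0
```

Even with the abusers ten times worse, the weighted fleet risk stays just under the unassisted baseline; the margin is thin, which foreshadows the statistics discussion below.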
It can even help the drivers who abuse it. On Sept 17, a drunk woman who passed out in her Tesla on Autopilot was guided to a gentle stop by police near Los Angeles. Huge numbers of accidents are caused by people falling asleep or being drunk. (This situation is odd; normally Teslas slow to a stop on their own after a few minutes of no response from the driver to alerts. The passed-out woman must have had her foot on the accelerator.)
Tesla asserts this is true, but in a misleading way. Every quarter they publish numbers on how many miles their cars go between “accidents,” defined as events that cause the airbags to deploy. They note that with Autopilot on, cars go twice as far between accidents. They also misleadingly point out that average cars go only 1/8th as far between “crashes.” But “crash” here uses NHTSA’s definition, generally incidents reported to police, which includes many where airbags don’t deploy.
The more misleading aspect of these statistics is that Autopilot is, by Tesla’s own description, a highway-only product, though it will activate on many arterial streets. Research suggests drivers use it almost exclusively on limited-access highways and high-speed rural highways. The problem is that the accident rate on those highways is already 3 times better than on urban streets, yet the Teslas are only doing twice as well. The math is somewhat complex, and to be done perfectly needs numbers Tesla has but refuses to release, but calculations suggest that the net result is an accident rate fairly similar with Autopilot on and off. This means the abusers are being roughly balanced out by the increased safety, but not by much, if anything.
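A simplified sketch of that road-mix adjustment, treating the 3x and 2x figures above as stand-ins rather than Tesla’s actual released numbers:

```python
# Rough illustration of the road-mix confound described above.
# All values are normalized stand-ins, not Tesla's actual figures.

overall_miles_per_accident = 1.0   # normalize: Tesla fleet, all road types
highway_multiplier = 3.0           # limited-access highways ~3x safer per mile
autopilot_multiplier = 2.0         # Tesla's claim: 2x the all-roads baseline

# What you'd expect from road type alone, with no Autopilot effect at all:
highway_baseline = overall_miles_per_accident * highway_multiplier  # 3.0
# What Tesla reports with Autopilot engaged (mostly highway miles):
autopilot = overall_miles_per_accident * autopilot_multiplier       # 2.0

# Against the roads it actually runs on, the 2x headline falls short
# of the ~3x expected from road type alone.
print(autopilot / highway_baseline)  # ratio below 1: no clear net gain
```

Under these stand-in numbers, the Autopilot figure does not beat what highway driving alone would predict, which is the article’s point: diligent users’ gains and abusers’ losses roughly wash out.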
Over time, Autopilot will get better at improving safety for those who are diligent. It will also, as Elon Musk suggests, push more people into complacency. We can’t be sure which way the balance will fall. In the meantime, actual, truly full self-driving products are in development for which ignoring the road will be a planned feature. These will, in time, drive more safely than human drivers — Waymo’s already do in the constrained suburban environment in which they work — and reduce accidents and risk on the road for all. NHTSA, Waymo and many others have expressed great frustration with the way Tesla calls its new prototype city-street Autopilot “Full Self-Driving,” because of course it isn’t even remotely that, and the false label may make some people more complacent. Tesla has teased that it will make this beta version available to all customers who paid for “FSD” very soon, and seems to have already expanded the pool of testers from a small set of non-employees to a number of early buyers.
The new head of the NTSB, by the way, has expressed the view that the name FSD is irresponsible and is worried about this planned wider beta program.
It seems there is an obvious answer — better countermeasures against complacency. Tesla has resisted these, believing its torque monitor is sufficient. NHTSA will study whether it is. While a simple analysis may say, “have as many countermeasures as you can,” the reality is that countermeasures are also not perfect, and if they incorrectly nag drivers or reduce the utility of the products that is something neither drivers, nor car companies want. Whether the public wants it is a different question — if too much nagging stops the people who gain more safety from the product from using it, that can be a loss for the public interest as well.
That said, it seems that Tesla, which long ago installed a camera in the mirror that can monitor the driver, should be able to do better without over-nagging. Indeed, GM’s use of the camera allows it to let the driver take their hands off the wheel, which is more relaxing. There is an open debate over whether intervention reaction times are slower with hands off the wheel compared to the hands-on approach Tesla uses, and that’s also worthy of study.
Getting it right is hard. Tesla’s forward collision warning, which sounds an alarm when a vehicle in front of you is going slowly enough that you might hit it, gives mostly false alarms, and false alarms eventually get ignored. But Tesla should apply its AI expertise to countermeasures that are both effective and not bothersome, to get the best of both worlds — and improved safety for all drivers.