More Autopilot Misinformation

Following the historic launch of Crew Dragon this weekend, many wondered what negative talking points Tesla short sellers could possibly muster when they clocked back in on Monday. If you have “Autopilot is going to kill you” on your bingo card, mark it off –– they went with the old favorite.

Of course, TSLAQ twisting facts to create misinformation about Tesla is nothing new. Unfortunately, this time, in a troubling pattern that’s become all too common, certain revenue-starved online publications saw the TSLAQ propaganda and decided it would make great clickbait. Once Frederic Lambert shared the TSLAQ tweets to his audience at Electrek, several other blogs eager to stir up a controversy reported on it too.

The Electrek article actually did a decent job at providing facts surrounding the incident. Normally this is helpful for fighting misinformation, unless you’re publicizing deceptive info that hasn’t already spread on its own. Sadly, on the internet few people read past the headlines, and the headline on this story was chosen to mislead readers and stir controversy:

Video of Tesla Model 3 crashing into a truck on Autopilot goes viral

Electrek

First off, let me just say that nobody is trying to claim Autopilot is perfect. It’s a great driver assistance feature that can do more and more every month, but it still needs to be supervised today. Any regular user of Autopilot has stories about times when the system has done something unexpected, and they’ve taken control. That’s normal and expected for where the technology is today.

Having said that, there’s a tendency to blame Autopilot for anything a Tesla does, even if it’s not being used. Why is this?

  1. Tesla short sellers, competitors, and misc. enemies –– Vocal and well connected to the media, these lunatics are eager to put down Tesla and/or discredit Autopilot at any available opportunity. When a Tesla gets in a car crash, you’ll often see them spreading the word on social media commenting “I hope they were using Autopilot” or trying to prove the system was on. Whether Autopilot was involved or not, they’re going to try and blame Tesla and get others to do the same for their own financial benefit.
  2. The Driver –– We’re all honest people here, but when an embarrassing and potentially costly crash happens it can be convenient to blame the car. If there’s loss of life or intoxication at play, that pressure might be even greater. Assume people are being honest, but also keep in mind the driver has a strong incentive to shift blame away from themselves.
  3. The Media and Blogs –– They need people to click on links to their website to make money. The truth is boring. A controversy gets clicks! Thousands of cars crash every day. That’s not news any more than someone brushing their teeth is, so there’s no money in covering that. But a crash involving Autopilot? People will sure click on that. Thus the media is incentivized to play up the role of Autopilot for the sake of their own bottom line.

Some might say getting all the facts doesn’t matter, and Autopilot should always be criticized and scrutinized by the media as much as possible. But there’s a lot at play here –– autonomous technology has the potential to save millions of lives around the world. The World Health Organization estimates that 1.3 million people die in car crashes every single year. More than that, autonomy has the potential to enable great convenience. It will transform the global economy by dramatically increasing the efficiency of the transportation infrastructure underlying every good and service on the market.

When you’re dealing with a new technology where so much hangs in the balance –– so much potential good, and so much potential harm –– it’s important to make judgments based on fact-based reasoning rather than short-seller propaganda. We’ll attempt to analyze these events here in a fact-based manner, without sensationalism.

Was Autopilot Engaged?

The fact is, there’s no clear indication that Autosteer was engaged. Don’t get me wrong –– it looks like it very well could be Autopilot, it just hasn’t been confirmed. Using Autopilot on highways is very common, and the video does make the truck look like something a human driver would have seen and reacted to, so I wouldn’t be surprised at all if the vehicle was in fact using Autosteer at the time of the crash. But if you care about facts, the vehicle logs tell you for sure exactly what Autopilot features were in use and more. It’s important to separate what the event looks like or feels like from hard data and eyewitness testimony. So far, people saying this is Autopilot are mostly speculating.

Without hard facts, people were left to form their own opinion about whether Autopilot was engaged or not, with most assuming it was:

A video of a Tesla Model 3 crashing into a truck on the highway while reportedly being on Autopilot is going viral. Here’s what we know.

The incident happened in Taiwan yesterday.

A truck rolled over to its side on the highway, leaving the roof of its box exposed to upcoming traffic.

A Tesla Model 3 owner reportedly driving with some Autopilot driver-assist features didn’t see the truck and the safety feature didn’t stop a collision.

Tesla shorts have been using the accident as evidence that the automaker’s Autopilot system is unsafe:

Electrek

This was the story TSLAQ wanted to create, and even Tesla fans ate it up. “TheTeslaLife” tweeted that the Tesla “failed to see an overturned truck”, while TheMuskBros noted “hate to admit it […] but Autopilot is 100% at fault”. Others thought it looked like the car wasn’t centered in the lane, and thus Autosteer wasn’t in use, but this is simply speculation on both sides.

A misinformed reader might walk away thinking: “Wow! A truck flipped over on the highway, totally obvious for any person to see! And Tesla Autopilot was supposed to be driving, and failed to see it, which caused a crash! Tesla Autopilot is supposed to drive for you, so what’s the point of using Autopilot if it can’t see something so obvious? What a fundamental flaw. Tesla cars are unsafe and should be banned”. It’s a compelling story. But it’s not rooted in fact.

When the story was first published, this TSLAQ account shared by Frederic had around 100 followers and the video tweet had few views. After Frederic decided to help spread the misinformation to Electrek’s large audience, the tweet got thousands of likes and the account’s follower count quadrupled. Amusingly, the account was not thankful to Frederic for helping the tweet reach a much wider audience. At the time of writing, the account’s display name and bio had been changed to the following:

the funniest part about this is that it got printed in Electrek, since the tweet was embedded

The Tesla Saw the Truck

The first bit of misinformation here is that the Tesla didn’t see the truck, or even worse, Autopilot does not have the capability to see a turned over truck blocking the highway.

This is untrue –– from the video, it seems clear that the Tesla did in fact detect the incident and applied automatic emergency braking 5 seconds before, 4 seconds before, and immediately before impact. If you look closely, you can see some white smoke or dust getting kicked up each time the automatic emergency brakes are applied.

It seems pretty clear that the Tesla’s industry-leading Automatic Emergency Braking feature was activated before the crash, so the car did “see” that something was going on and react. This isn’t anything too special: many cars come with AEB as standard these days, and it activates to prevent a collision whether the car is being driven manually or on Autopilot.

So if the car saw the obstacle, why did it collide with the truck? Shouldn’t it have stopped? From the security camera footage shared in the TSLAQ tweet, it looks like an open straight road with the overturned truck clearly visible. If it can detect any overturned trucks it should be able to detect this one, right? It would certainly appear the system has a serious flaw.

Looks can be deceiving. In fact, Autopilot can avoid most collisions with stopped cars, objects, or turned-over trucks. The system is pretty impressive. However, it’s important for everyone to understand that the software can’t prevent all crashes –– expecting it to would be unrealistic. Drivers using Autopilot must remain alert, watch the road, and be ready to take over if anything unexpected happens. They’re reminded of this every time they use the system. There are so many weird things that can happen on the road, it’s impossible to imagine all of them. For a rare “once every billion miles” event like a flipped-over truck on a highway in Taiwan, you may never get a chance to test how your system responds until the first time it encounters that corner case in the real world.

Complicating Factors

While the security camera is set up to capture as much of the road as possible, closer inspection reveals that this road is both hilly (with the car traveling uphill before the collision) and curved. Furthermore, the Tesla passes beneath an underpass immediately before detecting the truck and applying automatic emergency braking. The shadow of the bridge can momentarily affect visibility.

In past versions of the software, Autopilot has sometimes performed unnecessary hard brakes when approaching an underpass, mistakenly viewing the shadow or the bridge as an obstacle that needs to be avoided. After many people complained about this, I heard a rumor that work may have been done to avoid false positives on obstacle detection due to an underpass. This is all speculation, but these factors may have delayed the time at which the car became confident there was an obstacle ahead.

But the car still acted to avoid collision 5 seconds before the crash –– why wasn’t it able to avoid the collision? Isn’t that a flaw?

Depending on its speed, the car’s stopping distance could have been as much as 0.1 miles (roughly 160 meters). Simply put, the car might not have had enough time to stop once it detected the truck.
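If you want to sanity-check that figure, here’s a quick back-of-the-envelope sketch. The speed, reaction delay, and braking force below are my own assumptions for illustration, not data from this incident:

```python
# Back-of-the-envelope stopping distance. The speed (~110 km/h), reaction/actuation
# delay (1.5 s), and deceleration values are assumptions for illustration, not data
# from this incident.

def stopping_distance_m(speed_kmh: float, reaction_s: float, decel_ms2: float) -> float:
    """Distance covered during the reaction delay plus the braking phase (v^2 / 2a)."""
    v = speed_kmh / 3.6                    # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

for decel in (8.0, 4.0):                   # hard braking vs. gentler, partial braking
    d = stopping_distance_m(110, 1.5, decel)
    print(f"decel {decel} m/s^2: {d:.0f} m ({d / 1609:.2f} miles)")
```

With gentler, partial braking the total comes out around 160 meters, which is roughly the 0.1 miles mentioned above; even hard braking needs on the order of 100 meters.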

Plus, you have to understand the car isn’t operating with the level of certainty you have as a human. “Stop to 0 mph smoothly when you see something in the road” might seem like a good plan to code up when watching a crash video like this, but let’s see how you feel when that same line of code takes your car from 80 to 0 in the middle of the freeway in less than 5 seconds because of a cardboard box or a bird flying by. That’s not fun.

In a setup like this where a human driver is present, AEB can help give the driver more time to respond but shouldn’t create a dangerous situation by stopping to 0 in the middle of the highway. It might look like the car is completely blind in the video, but it’s just trying to take the safest course of action it can with the information available. Future updates will learn to handle situations like this better, but the car did the best it could and the driver walked away unharmed. Thank God.
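To make that tradeoff concrete, here’s a deliberately simplified sketch of how an emergency braking policy might weigh time-to-collision against detection confidence. Every name, threshold, and number is an assumption for illustration; this is not how Tesla’s AEB actually works:

```python
# A deliberately toy time-to-collision (TTC) policy, illustrating why an AEB system
# might scrub speed instead of slamming to 0 mph. Every name, threshold, and number
# here is an assumption for illustration only, not Tesla's actual logic.

def aeb_action(distance_m: float, closing_speed_ms: float, confidence: float) -> str:
    """Pick a braking response from range, closing speed, and detection confidence."""
    if closing_speed_ms <= 0 or confidence < 0.5:
        return "no action"                      # object moving away, or likely a false positive
    ttc = distance_m / closing_speed_ms         # seconds until impact at the current closing speed
    if ttc < 1.5 and confidence > 0.9:
        return "full braking"                   # imminent, high-confidence threat
    if ttc < 4.0:
        return "partial braking + warn driver"  # scrub speed, buy the driver time to react
    return "monitor"

print(aeb_action(distance_m=100, closing_speed_ms=30, confidence=0.7))   # partial braking + warn driver
print(aeb_action(distance_m=30, closing_speed_ms=30, confidence=0.95))   # full braking
```

The confidence gate is exactly the false positive problem described above: brake hard on every low-confidence detection and the system becomes unusable on the freeway.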

Human & Computer Failed to Avoid a Crash

As of today, Autopilot is always monitored by a human. Even when the human is driving manually, Autopilot safety features remain on in the background.

That means anytime you see a Tesla crash, both the human and the computer failed to avoid (and maybe even to see) the danger ahead. The goal is always for the computer to avoid accidents completely on its own, no matter what the human does. However if an accident seems obvious and easy to avoid, it might be worth asking why the human didn’t see or respond to it either. Perhaps it wasn’t as obvious and easy to avoid as it appeared from a bird’s eye view.

Or perhaps the human driver wasn’t paying attention. This is a frequent criticism of Autopilot: That the system is so good, drivers will get too comfortable and stop paying attention. Then, rather than guarding against the system’s shortcomings the driver opens the door to the possibility of a crash.

There is of course some truth to this. Autopilot is good enough that you could safely stop paying attention for a moment, but you definitely shouldn’t take your eyes off the road at any time. Ultimately, just like we trust drivers not to text or smoke crack while they drive, we have to trust that they will be responsible for their car and pay attention when using driver assistance too.

There is inevitably some risk of danger due to inattention, but this risk is smaller than people think and dramatically overused as a talking point against Autopilot. People actually want to pay attention for their own safety, and it’s not hard to do so.

The vast majority of the time, the system just does the right thing anyway. While it may be easier to get away with not paying attention on Autopilot, you also drive much more calmly at a safer speed. You’re not going to see someone swerving through traffic at 120 miles an hour on Autopilot –– it just doesn’t go that fast, ever, because of rules hardcoded in the software. There are many aspects of Autopilot that provide huge safety benefits, and those far outweigh the quickly shrinking risk of distracted driving.

Besides, distracted driving isn’t a problem Autopilot created: distracted driving is a growing epidemic in all cars. Next time you’re on the road, look around you: everyone is using their phones. Long term, autonomy and advanced driver assistance are actually the solution to distracted driving, not the cause of the problem. Think no human could ever crash into a truck like this? You’d be surprised.

Driver Completely Uninjured; Walks Away Unharmed

However, they don’t note the most important thing, which is that the Tesla driver is reportedly uninjured.

The local media reported (translated):

It can be seen from the picture that the impact force is so great that even the truck shakes. It is understood that the Tesla driver was unharmed. He confessed to the police that the auxiliary system was turned on at the time, and the self-driving state was not adopted. There is no drunk driving situation, and the relevant transcripts have been completed so far, and the two parties have to face the subsequent compensation matters.

Electrek

If this is “the most important thing”, why wasn’t that the headline? “Tesla driver in Taiwan walks away unharmed after running into overturned truck”. That’s pretty amazing! I guess a positive story about Tesla safety isn’t as salacious as one where Autopilot takes the blame for what happened.

Something may have been lost in translation, so it’s unclear to me whether Autopilot was enabled or not. The driver talks about an “auxiliary system” but says “self-driving state was not adopted”. Your guess is as good as mine for what this means. Maybe it meant cruise control but no autosteer? Maybe Autopilot, but no Navigate on Autopilot? Maybe they mean they didn’t buy the FSD or Autopilot package? It’s not clear from the story, but the vehicle data has the truth.

That the car was “on Autopilot” is not clear, but that’s what made the headline. The fact that the truck was overturned is not mentioned unless you take the time to read the whole story. What we do know for sure –– that the driver was unharmed –– is just a footnote to the misleading headline everyone will read on Twitter. After a game of social media telephone twists that original headline, the truth gets lost in the noise.

Some have speculated that the automatic emergency braking events may have actually reduced the likelihood of injury or even saved the driver’s life. Leave it to TSLAQ to blame Autopilot for saving someone’s life, right? While it’s not clear what role the emergency braking played in the driver’s safe outcome, it certainly couldn’t have hurt.

In the video, it also appears that the driver or the emergency automatic braking system applied the brakes at the last second based on brake skid smoke appearing a moment before the crash.

Electrek

Again, why wasn’t this in the headline? Although it’s great to see this clarified in the body of the article, many people (including many Tesla fans) saw the headline or skimmed the article and walked away thinking Autopilot has a serious flaw that prevents it from seeing a truck overturned in the middle of the road. In reality, even the author admits that it “appears the driver or the AEB system applied the brakes”. My money is on the AEB, as I assume a human reacting would have steered away rather than just applying the brakes. But again, just a guess.

It would be nice if Autopilot’s automatic emergency braking system would prevent accidents like that, but it doesn’t always work, and there’s no substitute for paying attention at all times.

Tesla is constantly improving its Autopilot features, but it still asks drivers to keep their hands on the wheel and pay attention at all times every time they activate the features, which ultimately means that the driver is at fault here.

Electrek

Always a good takeaway: regardless of what happened here, it’s always good to pay attention at all times. This is important to remember as the system gets better and better and requires less supervision. That said, after reading this story I don’t worry about or supervise Autopilot any more than I already do. It’s always good to be reminded, but for experienced users, having to pay attention on Autopilot is old news.

Worried about whether Autopilot is safe? Just pay attention. Since any input on the steering wheel or pedals immediately overrides the system, there’s no reason for your car to ever get into a collision even if Autopilot messes up very badly. We all accept responsibility for our car’s actions when we’re on Autopilot, and if we crash because we failed to pay attention while the software made a misstep, that is our fault as Tesla drivers. Don’t like it? Just drive yourself like before –– it’s more fun anyway.

The most important thing is that he is apparently uninjured, which is nothing short of a miracle after a crash like that. It looks like the roof of the bed acted as a great crumple zone.

Automatic emergency braking may have also played a role here in reducing the force of the impact, but it’s not clear if it was the system or the driver who applied the brakes at the last second.

Electrek

Put the misinformation in the headline, bury the truth in the footnotes.

We may call it a miracle, but the Autopilot team works very hard to make sure these miracles happen for us every day.

The reason why Tesla and other vehicles on driver-assist features hit stationary objects on the road while at highway speed like that is due to them trying to reduce the number of false-positive braking events.

It will improve over time, but it’s a good reminder that drivers have to stay attentive and ready to take control at all times.

Electrek

Stop falling for the same FUD over and over

Every day, people brush their teeth, eat lunch, and crash their cars. As more Teslas get on the road, it is statistically certain we’ll see more Tesla crashes every day. I could find a picture of a new Toyota crash every day and try and convince you Toyota had some horrible safety flaw. But that argument doesn’t hold water on its own, because it’s unrealistic to expect Toyota to prevent 100% of crashes. The built-in safety systems on most Toyotas can actually prevent many crashes, but nowhere close to 100%.

Live Autopilot Miles Estimate

Autopilot has now driven over 3.6 billion miles. Before the end of this year, it will cross 5 billion miles.
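For the curious, a live counter like that is just a linear extrapolation: start from a known cumulative total and add an assumed fleet-wide rate. Here’s a minimal sketch; the baseline date is my assumption, and the 115 miles-per-second rate is the same rough estimate used later in this post:

```python
# A minimal sketch of how a "live" miles counter could work: take a published
# cumulative total and extrapolate linearly at an assumed fleet-wide rate. The
# baseline date is an assumption; the 115 miles-per-second rate is the rough
# estimate used later in this post.

from datetime import datetime, timezone

BASELINE_DATE = datetime(2020, 6, 1, tzinfo=timezone.utc)  # assumed date of the 3.6B figure
BASELINE_MILES = 3.6e9                                     # cumulative Autopilot miles at baseline
MILES_PER_SECOND = 115                                     # assumed fleet-wide Autopilot rate

def estimated_autopilot_miles(now: datetime) -> float:
    seconds_elapsed = (now - BASELINE_DATE).total_seconds()
    return BASELINE_MILES + seconds_elapsed * MILES_PER_SECOND

print(f"{estimated_autopilot_miles(datetime.now(timezone.utc)):,.0f} estimated Autopilot miles")
```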

As more Teslas are sold and people use Autopilot more often, the number of miles traveled by Tesla vehicles every day will grow exponentially. Even if the safety of Autopilot continues to improve as expected, it’s again a statistical certainty that the raw number of accidents will increase just based on more miles traveled. You will be misled into believing that Tesla is causing a growing number of crashes, when the reality is that safety has been improved tremendously through the widespread adoption of advanced driver assistance.

The Mythical Perfect Robotaxi

It’s time to dispel the myth of the “perfect Robotaxi”. Yes, we’d all love an autonomous car that never makes any mistakes. But when it comes to deploying this technology, perfect is the enemy of good.

This is a tough pill to swallow. For years the autonomy players have operated on the industry-wide lie that the tech would be released “once it’s perfect”. Reality check: humans aren’t perfect, and it’s not clear robots can be either.

If we say “only a perfect system that makes 0 mistakes can be deployed”, a lot of people will die waiting for perfection. Let’s say humans on average crash twice when driving a million miles. If Autopilot only crashed once every million miles, a universal deployment could theoretically cut accidents in half. But if our standard is “perfect” (0 accidents in 1 million miles), twice as many people will die or become injured every day, waiting for a miracle that may not be coming anytime soon.
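The arithmetic is simple. Using the hypothetical rates above, and an assumed annual mileage figure to make it concrete, waiting for “perfect” forgoes every crash the merely “good” system would have prevented:

```python
# Toy arithmetic behind "perfect is the enemy of good", using the hypothetical rates
# from the paragraph above. The 3 trillion miles/year figure (roughly US annual
# vehicle travel) is an assumption used only to make the comparison concrete.

MILES_PER_YEAR = 3e12

human_rate  = 2 / 1e6    # crashes per mile for human drivers (assumed above)
system_rate = 1 / 1e6    # crashes per mile for an imperfect but "good" system (assumed above)

human_crashes  = MILES_PER_YEAR * human_rate
system_crashes = MILES_PER_YEAR * system_rate

print(f"Humans only:      {human_crashes:,.0f} crashes per year")
print(f"Imperfect system: {system_crashes:,.0f} crashes per year")
print(f"Crashes avoided by deploying 'good' instead of waiting for 'perfect': "
      f"{human_crashes - system_crashes:,.0f}")
```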

The solution? Require human monitoring until the system can perform 10 times better than a human driver. I can guarantee you, based on my personal experience, that a human + a computer will outperform a human alone on safety no matter how you measure it.

Many people who predict autonomous cars are decades away are trying to guess how long it will take to build a perfect Robotaxi. What they need to realize is that the first Robotaxis won’t be perfect –– they’ll be kind of shitty. That may sound shocking and scary, but that’s the path to saving a million lives a year.

People will die

Anytime a Tesla has an accident, Autopilot activated or not, there’s an opportunity for the Autopilot team to examine the data and improve the vehicle’s safety systems to potentially prevent a similar collision in the future. Every crash should be investigated and taken seriously, and Tesla and regulators do exactly that. Any accident in a Tesla is a serious tragedy. To build a world without accidents we’ll need to take each one very seriously, no matter how small or whether anyone was hurt.

At the same time, we need to acknowledge that people are going to crash and even die in their Teslas, both with Autopilot on and off. In terms of the worst accidents involving Autopilot, sadly this isn’t even in the top five. I’m just glad the driver walked away unharmed. In the past, inattentive drivers have died while using Autopilot. If your expectation is that nobody will ever die on Autopilot again, or that nobody will ever die in a Tesla again, reality is going to disappoint you.

This is what I explained to Bloomberg when they interviewed me for a story about these fatal Autopilot accidents. The headline they ended up going with was “Tesla’s Autopilot Could Save the Lives of Millions, But it Will Kill Some People First”.

Now that’s a great clickbait headline if I’ve ever seen one, but there’s some truth to it: There’s no way to save millions of lives without trying the software in the real world, where people die every day. Not letting anyone inside your self-driving car is not a safety strategy. You may avoid the bad press of someone dying in your autonomous car, but you’re not going to save any lives either.

Real World Scale

I have Autopilot activated more than 90% of the time I’m in my car, on both city streets and highways. If I have the misfortune of dying in my car for whatever reason, Autopilot will likely be activated. Even though I’ve driven over 50,000 miles with Autopilot safely in the last 2 years, the media would likely try and blame Autopilot for my death while the fact is I just have Autopilot on all the time. As the system gets better, people will start to use it more and more often. Like me, they will have to take responsibility for their car and the software controlling it. Most will take that responsibility seriously, but the real world is crazy. In the real world, accidents happen.

Right now Autopilot drives roughly 115 miles per second, around the world. On average, a car in the United States sees an accident every 479,000 miles of travel. When you’re at the scale of Tesla Autopilot, that’s not that many miles: Autopilot travels 479,000 miles roughly every 4,165 seconds.

That means if you picked a random selection of cars from across America and had them drive as much as Autopilot does, you’d see a crash every 69 minutes. This would be completely “normal” for a human driver. If you created a computer system that drove exactly like a human, it would crash every 69 minutes too, just like a human.

In Q1 2020, Autopilot saw one accident every 4.68 million miles. Instead of one accident every 69 minutes, which would have been “normal”, Autopilot “only” crashed about once every 11 hours. I could walk up to an uninformed stranger and tell them “Did you know there’s a Tesla Autopilot crash every 12 hours?”, and they’d be horrified. But if they’re thoughtful, they might realize that if it’s “normal” to crash every 69 minutes, one crash every 12 hours doesn’t look so bad.

Humans driving Teslas manually with Autopilot safety features active in the background saw a crash every 1.99 million miles, or every 4 hours and 48 minutes at Autopilot’s current rate of travel. Humans driving Teslas with no Autopilot safety features saw a crash every 1.42 million miles, or every 3 hours and 25 minutes at Autopilot’s current rate of travel. This tells us that most of the safety improvement over the US average comes from having a newer car, not from Autopilot. Still, it appears accidents happen twice as often when the human takes control, compared to when the computer takes the lead and the human merely monitors.

But the best statistic is this: in Q1 2019, Autopilot had an accident every 2.87 million miles. Just one year later, Autopilot is able to travel 63% further without a collision. That just doesn’t happen with humans.
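For anyone who wants to check the math, here’s a short sketch that reproduces the intervals above from the cited figures. The 115 miles-per-second fleet rate is the rough estimate used in this section; the miles-per-accident numbers are the ones quoted from Tesla’s safety reports:

```python
# Reproducing the arithmetic from the paragraphs above, using the figures cited there:
# a fleet-wide rate of roughly 115 miles per second, and the miles-per-accident
# numbers quoted in this section from Tesla's quarterly safety reports.

FLEET_MILES_PER_SECOND = 115

miles_per_accident = {
    "US average":                      479_000,
    "Tesla, no active safety":       1_420_000,
    "Tesla, manual + active safety": 1_990_000,
    "Tesla, on Autopilot (Q1 2019)": 2_870_000,
    "Tesla, on Autopilot (Q1 2020)": 4_680_000,
}

for label, miles in miles_per_accident.items():
    total_minutes = miles / FLEET_MILES_PER_SECOND / 60
    hours, minutes = divmod(total_minutes, 60)
    print(f"{label:32s} one accident every {int(hours)} h {int(minutes)} min of fleet travel")

improvement = (miles_per_accident["Tesla, on Autopilot (Q1 2020)"]
               / miles_per_accident["Tesla, on Autopilot (Q1 2019)"] - 1)
print(f"Year-over-year improvement in Autopilot miles per accident: {improvement:.0%}")
```

Run it and you get roughly one accident every 69 minutes at the US average rate versus one every 11 hours or so on Autopilot, which is the same comparison made above.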

When a human dies or kills others in a drunk driving accident, it’s sad –– but the saddest part is that someone will do it again tomorrow. You can try and run a public awareness campaign, but humanity just comes with some intractable flaws.

With robots, we don’t just accept things like that. Any failure will be analyzed relentlessly, with data uploaded immediately for developers to review. Once they’ve fixed the problem, they can push out a new update to cars around the world with the push of a button. Keep doing this long enough, and we could see a year in our lifetime with 0 deaths on the road. It happened in the airline industry, and we can do it in cars too. But it’s just not possible without robots helping us drive –– robots that can continuously be updated to reduce the risk of an accident until there’s virtually no risk at all.

This change will be hard for people to wrap their heads around. Accidents caused by Robotaxis will look foolish and obvious to a human, but that’s just because humans and robots have different strengths and weaknesses. Humans have common sense; robots are good at focusing on one task and never looking away. The thing to remember is that unlike human accidents, Robotaxi accidents can be eliminated completely just by iterating on software.

The real world has accidents. And Autopilot is operating at real world scale.

The Bottom Line

Don’t be fooled by people who show you some accident to try and scare you away from Autopilot. The fact is:

  1. We don’t know which Autopilot features were in use, if any
  2. Autopilot saw the truck and applied automatic emergency brakes, despite complicating factors
  3. Thankfully, the driver walked away from the incident unharmed

Sadly, we are going to see accidents that are much worse than this, and we have seen accidents much worse than this, with Autopilot and Full Self Driving both confirmed to be on. Very few people understand what the team is trying to achieve, and a lot of misinformation will be spread in the media to try and smear Autopilot as unsafe. It’s important for the few people who actually understand the project to stand up and encourage people to look at the data. It may seem shocking, but when you’ve produced over a million electric vehicles you’re operating at real world scale.

If there were any other companies operating at real world scale, I doubt their safety record would be better than Tesla’s. The Autopilot team is extremely talented and takes this challenge very seriously.

P.S. Don’t support people who make money spreading TSLAQ Misinformation

It is tempting to stoke controversy and spread TSLAQ misinformation. It gets clicks, attention, makes people fight –– it’s good for business.

Don’t let it be good for business. When someone tries to make money spreading short-seller propaganda they know isn’t true, hit ’em with the ad blocker.

P.P.S. Driver Monitoring

Some have suggested that the solution to this problem is to require driver monitoring on Autopilot. I don’t support mandatory driver monitoring, because I think being able to track the driver’s attention properly is a computer vision problem on the order of many of the tasks needed to actually drive the car. Engineering time and resources should be put into driving the car better and reducing the risk of a collision –– not monitoring the driver. Autopilot is getting better so fast any driver monitoring feature would be obsolete before it shipped.

Driver monitoring can be extremely annoying if not implemented correctly. Many people advocate for it earnestly, but others don’t care about safety at all and just want driver monitoring as a way to make solutions like Autopilot less attractive and more annoying for users.

As mentioned above, distracted driving is not a problem that is unique to driver assistance systems. All drivers get distracted. So if you require driver monitoring, you need to require it whether the driver is in manual mode or on Autopilot. Otherwise, people would just turn Autopilot off and drive manually so they could quickly use their phone and shoot out a text. That’s not going to help safety. Any system that’s too annoying to be used at all times in the car is too annoying to be used on Autopilot.

We could end shoplifting if we mandated cameras in all changing rooms. But we don’t, because the vast majority of people never shoplift, and people don’t want to be videotaped when they’re changing. I know a lot of people feel strongly about this, but sorry –– I don’t want a nanny cam in my car beeping at me when it thinks I’m not looking at the road.
