He's been promising "self-driving" for years, except the damn technology isn't ready/safe enough for what people are clearly using it for. The manufacturer puts a warning label on it, but the usage model/UI is clearly flawed.
His engineers know this, which is why Tesla's self-driving group has had such a high turnover rate over the last few years. Musk keeps overriding them.
"Move fast and break things" is probably the worst philosophy to sweep over SV. It encourages a worldview where externalities that potentially break laws and maim or kill other people should be disregarded in the pursuit of "progress" and profits. But it's nice to see the HN community finally start to come to this realization
That... was just Facebook's idea. I don't know of any other company, apart from those founded by former Facebook employees, that took the moniker seriously.
A more mainstream paradigm was 'disrupt!', as in disrupt old industries by applying the rules in ways the incumbents couldn't, aided by technology they were disinclined to use. For instance, Uber used 'disrupt!' as the centerpiece of their business by acting as a contracting service for limo companies, before finally expanding the definition of 'limo' to include any Toyota made after 2015.
Facebook may have invented the slogan, but others still took the concept and ran with it. There are various incubators in SV that repeat that same mantra to startups because they heard it from one of the big boys (Facebook) or read about it in some VC Medium post. I'm not pointing fingers, just saying that even in such innovative places people will still blindly follow those paradigms like it's conditioning.
It started at Facebook and made sense for Facebook since nobody’s life is ruined if they can’t use Facebook for 5 minutes. It’s not appropriate for Tesla or even Google.
"Move fast and break things" was never meant for actual real world physical things (or laws). It was a mantra that encouraged breaking sacred cows and assumptions but has been bastardized and vilified by the anti-big tech crowd. Obviously you can be a little more lax on your production quality if you're building inane web apps rather than rocket ships or medical devices.
False. Read the facts again and stop spreading wrong conclusions. By the way, the study you are thinking of is not open, so it was never independently verified; I believe there is a legal action underway to open the data.
No, that's not at all clear, and is in fact unlikely; rather: some of the capabilities in the basket of features Tesla calls "autopilot" are associated with lower crash rates, and it's not at all clear which; there are steep reductions in accidents, possibly exceeding Tesla's numbers, from cars with automatic emergency braking. Just because AEBs are useful safety features does not mean that auto-steering is necessarily on net a safe feature.
The person posted the same false fact/conclusion, that there is data proving Autopilot is 40% safer, all over HN; it looked to me like a misinformation campaign.
You are right though; it may look like a PR campaign, but I can't be sure. It could just be a misinformed fanboy.
Since circa 1968, "cruise control" has been used to refer to automatic throttle adjustment to maintain speed. [1] Is there some previous meaning that makes you think it misleading? Because I've never thought of it as meaning anything else.
From looking at the photos in the latter article, I'm guessing zero people who crashed expected it to do everything for them. The UI is a knob on which you set the speed you want.
The state of the art in computers at the time was the RAMAC 305, a vacuum tube computer which featured IBM's first disc drive. The computer itself was meant to be installed in a room 30'x50'. The disk drive itself was 4' across, weighed about a ton, and held 5 million characters (6 bits each). All this could be yours for only $300k/year (in current dollars).
Given that, and given that the transistor radio was introduced only a few years before, I doubt anybody expected the sort of magic we see as commonplace today. And if they did, they would let go of the wheel and find out in about 2 seconds that it was not staying on course.
Confusion over the function of cruise control was the subject of a long series of urban legends. I remember hearing at least one of these when I was growing up in the 90s. Ironically, the lesson of many of these stories is that the name for a fully self-driving system is "autopilot" not "cruise control".
> [1995] This guy saves up his money and finally gets the van he always wanted. Fridge and tv in the back, all the works. He starts driving out on a country road that leads to his home. He sets the van on cruise control and gets out of the drivers seat and goes into the back to get a beer. The van of course goes off the road, and when the paramedics ask him what happened, he said he thought he had auto-pilot.
> [1993] An old china man was driving along in his motor home. He turned on his ‘cruise control’. Apparently misunderstanding the function of ‘cruise control’, he then went into the back of the motor home. The motor home drove off the road and crashed. Apparently he did not realize that ‘cruise control’ is not ‘autopilot.’
Likely did contribute to some accidents. Even simple cruise control can catch you out if you're not really paying attention -- e.g. when coming up on slower traffic.
But nobody in 1958 would have the slightest expectation that the car would actually drive itself. Tesla has been using the terms "self driving" and "autopilot" together since they started marketing the car.
This is a good example of a case where engineering from first principles, especially around user experience for driving, can lead to making the same mistakes again (e.g., the autopilot naming, and potentially issues related to the inconsistent braking performance found in the Consumer Reports review).
A commercial pilot doesn't flip on autopilot and start browsing the internet on a phone or take a nap. And that's at 30k feet where there's little traffic to worry about.
It seems like a misunderstanding of what the technology does, by consumers and the media. I'm no expert on Tesla's marketing history, but I'm of the opinion that these crashes are due to drivers misunderstanding what the system does and what its capabilities are.
I don’t totally disagree with you, but I will point out that planes land every day in white-out conditions with no visibility thanks to CAT III autoland systems. The pilot can't see anything and thus can't really help even if they are awake. They're mostly just there to talk to air traffic control for these kinds of landings. One pilot told me the Airbus planes no longer allow manual control.
According to Wikipedia, you still need to have a minimum runway visibility unless you are landing CAT-IIIc, which is not used commercially (yet).
CAT-III landings are automatic, but the pilot needs some visibility to decide if the plane is going to touch down in the landing zone. If they are totally blind they will divert.
From the "Special CAT II and CAT III operations" you linked:
> Some commercial aircraft are equipped with automatic landing systems that allow the aircraft to land without transitioning from instruments to visual conditions for a normal landing. Such autoland operations require specialized equipment, procedures and training, and involve the aircraft, airport, and the crew. Autoland is the only way some major airports such as Paris–Charles de Gaulle Airport remain operational every day of the year. Some modern aircraft are equipped with Enhanced flight vision systems based on infrared sensors, that provide a day-like visual environment and allow operations in conditions and at airports that would otherwise not be suitable for a landing. Commercial aircraft also frequently use such equipment for takeoffs when takeoff minima are not met.
I get your argument, but pilots do sleep all the time. Usually they take turns, but there have been multiple incidents that we know about where both pilots have fallen asleep at the same time. We only ever hear about them when the pilot overshoots the airport or, as in one case, when the pilots self-report it to encourage others to talk about the issue.
It's extremely likely this happens more often than we think, and most pilots simply aren't going to volunteer that information.
The first (and only) thing you see is "Full Self Driving hardware on all cars", and a blurb about how all Teslas have the ability to be self-driven. It does not mention here, nor anywhere else on the page, that the current implementation of autopilot in Tesla cars is not fully self-driving. The video just underneath the headline shows a car using self-driving capabilities, not autopilot.
The entire rest of the page is like this as well. It talks at length about how Teslas have the capability to drive themselves, and hardly ever mentions that self-driving and "autopilot" are not the same thing. There is only one sentence, buried in the middle of the page, that mentions that drivers must remain alert and ready to take control.
And again, this is on Tesla's home page for their autopilot function. This is the first page that comes up when you Google "Tesla Autopilot". It's really shameful for Tesla to have marketing materials like this.
> It seems like a misunderstanding of what the technology does, by consumers and the media. I'm no expert on Tesla's marketing history, but I'm of the opinion that these crashes are due to drivers misunderstanding what the system does and what its capabilities are.
Perhaps the name "autopilot" has something to do with that misunderstanding.
Eh. This is reminding me of a law school friend bragging about how great lawyers are, giving the example of suing a firearms manufacturer for a 'faulty' safety. They'd done something insane, like pointing a loaded gun at a friend and pulling the trigger, thinking "haha the safety's on!" But lo and behold, the safety failed and someone was dead.
She was adamant that a safety was infallible, and any damage from failure was the manufacturer's responsibility. As someone who grew up with guns, I realized that only an idiot would put someone's life in the hands of a 'safety' mechanism.
If people have dumb, uninformed ideas about what words mean in context, they should probably educate themselves. Based on videos and articles I've seen, people are doing extremely stupid things that are clearly warned against by Tesla, like driving down a winding country road, or assuming it's going to somehow know about road work.
Then it's incumbent on car manufacturers who sell their cars with 'auto-pilot' as a marketing term to educate the public on what that term-of-art means. This might cost some money to do. A line in the manual doesn't count.
> A commercial pilot doesn't flip on autopilot and start browsing the internet on a phone or take a nap. And that's at 30k feet where there's little traffic to worry about.
That's rich; you may want to actually do some research into what commercial pilots do when flying before making such claims.
I drove around five miles to my friend's house and back today. I'd estimate that I was in close proximity, with potential for a collision, to around 1,000 cars on that trip.
During the portion of a flight where traditional autopilot is used, how many planes are on a potential collision course? They're traveling at specified altitudes, along specified paths through 3D space. So tell me, how many planes is a jet airliner likely to collide with at cruising altitude when operating on autopilot?
> The manufacturer puts a warning label, but the usage model/UI is clearly flawed.
Not just the model/UI but the actual sensor suite. Remember, the Tesla crash that resulted in a decapitation involved over-saturated cameras that couldn't see a tractor trailer. Of course, that's where the UI comes into play, because if the driver hadn't been watching Harry Potter, he would have seen the tractor trailer as well.
That's the thing: how do you make a UI that unambiguously conveys that the car hasn't seen an object in front of it, so that the driver can react to it?
Maybe with AR the car could highlight the objects it has detected, so that the driver can brake when the car hasn't highlighted an object the driver knows is there, but a head-mounted display seems pretty clunky.
The problem is that the "paying attention" part of driving is actually one of the most mentally taxing for the driver and it seems the pay attention 100% of the time is basically the same as saying 5% of the time an event might occurs that requires your immediate action. The problem being that if you have to jump into action to avoid something catastrophic 'sometimes' you basically have to be paying full attention all the time in order to collect enough contextual information about the incident. It's the same as watching someone toss a baseball to you and catching it vs your friend yelling "Headts up!" as a baseball is already flying through the air at your head.
Japanese train drivers are famous for pointing at things to keep their attention up. Maybe a similar system could be used for cars: for example, the car could narrate objects it sees and ask the driver to point at them, then check with a depth sensor and gesture recognition that the driver pointed correctly, and slow down and stop if they pointed wrong (a rough sketch of the idea below).
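Not a real automotive API, just a minimal sketch of that check loop; every function below (detect_objects, capture_depth_and_gesture, and so on) is a hypothetical placeholder:

    import random
    import time

    # All of these are hypothetical placeholders, not a real vehicle interface.
    def detect_objects():
        """Objects the car currently tracks, each with an id and a bearing in degrees."""
        return []  # e.g. [{"id": "pedestrian", "bearing_deg": 12.0}]

    def narrate(text):
        print(text)  # stand-in for the car's voice prompt

    def capture_depth_and_gesture(timeout_s):
        return None  # stand-in for the depth camera + gesture recognition output

    def estimate_pointed_bearing(capture):
        return None  # bearing (degrees) the driver appears to point at, or None

    def slow_down_and_stop():
        print("Driver inattentive: slowing to a stop.")

    TOLERANCE_DEG = 15.0  # how close the pointed bearing must be to the object's bearing

    def attention_check_loop(interval_s=30.0):
        """Periodically ask the driver to point at a tracked object ('pointing and calling')."""
        while True:
            objects = detect_objects()
            if objects:
                target = random.choice(objects)
                narrate(f"Please point at the {target['id']}.")
                pointed = estimate_pointed_bearing(capture_depth_and_gesture(timeout_s=5.0))
                if pointed is None or abs(pointed - target["bearing_deg"]) > TOLERANCE_DEG:
                    slow_down_and_stop()  # missed or wrong answer: treat as inattention
            time.sleep(interval_s)

Whether drivers would put up with being quizzed like that is a separate question, of course.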
So wait, rather than just pay attention to the road when driving my old beater, I could constantly be pointing at things and struggling with poor voice recognition? Somehow this sounds like more work than just driving the car myself.
I don't think people realize that "Tesla Autopilot" will never, ever, ever stop for an object in front of it. Tesla has said repeatedly it isn't designed to do so. If you run a Tesla toward a brick wall with Autopilot on, it will crash into it 100 out of 100 times. It will hit ANYTHING in the lane ahead of you. It isn't designed to stop in such cases.
So there's no need for a UI. If there's something ahead of you, you must stop or you will hit it at full speed without braking.
They definitely can and do stop. There are plenty of videos of accidents avoided because of stopping, and this is a rather common feature on most mid to high-end cars now, commonly called a collision avoidance system and grouped under the umbrella of pre-crash safety systems.
The avoidance feature is designed to stop the car in the event of an otherwise unavoidable collision with an obstacle directly ahead. It doesn't guarantee a stop, but it will still apply the brakes faster and more fully than a human can react and thus shed more speed before the impact. Pre-crash safety will also tighten the seatbelts, move seat positions, close windows, and inflate auxiliary airbags in preparation, and these are all well-tested and proven features.
What? That's not true. It can and does brake for objects in front of you that it senses. Its ability to sense objects is sadly very incomplete, but it does (quite often, but not often enough) successfully sense objects and slow or stop for them.
Well, I guess the second doesn't presuppose that it senses them. But I think you concentrated on the wrong thing:
Autopilot is not designed to steer around obstructions.
Autopilot is designed to brake for obstructions.
It sometimes fails to brake for obstructions. But it sometimes succeeds in braking for obstructions. It never steers around obstructions, even if it senses them perfectly.
Emergency braking is designed to lessen the impact, not prevent a crash. See the manual. This is mostly for avoiding crashes into a car that suddenly stops in front of you, i.e. in lower-speed city traffic and jams. For this, detection within a few meters is all you need; ultrasound sensors might do. If the velocity delta is 120 km/h, you're not going to meaningfully slow down when already that close.
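Back-of-the-envelope numbers (my own, not from any manual), using the standard v² = v₀² − 2ad stopping relation and an assumed ~8 m/s² of hard braking:

    def impact_speed_kmh(closing_speed_kmh, detection_distance_m, decel_ms2=8.0):
        """Speed left at impact if braking starts detection_distance_m before the obstacle."""
        v0 = closing_speed_kmh / 3.6                        # km/h -> m/s
        v_sq = v0 ** 2 - 2 * decel_ms2 * detection_distance_m
        return max(v_sq, 0.0) ** 0.5 * 3.6                  # m/s -> km/h

    print(impact_speed_kmh(120, 5))    # ~116 km/h: ultrasound-range detection barely helps
    print(impact_speed_kmh(120, 70))   # ~0 km/h: you need tens of meters of warning

At a 120 km/h closing speed you need roughly 70 m of warning to stop at all, which is why short-range detection can only soften, not prevent, that kind of crash.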
> Of course, that's where the UI comes into play, because if the driver hadn't been watching Harry Potter, he would have seen the tractor trailer as well.
Reports at the time said he was watching it on a portable DVD player, not the Tesla screen. Not sure about the NTSB report someone just cited though. If that report really said he wasn't watching anything, I wonder how that rumor started?
I test drove a Volvo S90 recently. The Pilot Assist feature almost feels like a gimmick due to the attention required. If you don't give steering input in five seconds, Pilot Assist disables itself. It basically requires enough of your attention that there's little difference between using it and not using it.
It seems pretty clear to me that Volvo only has Pilot Assist in the first place as a method to gather data via telemetry for the next generation of self-driving vehicles, because it otherwise doesn't seem to add anything useful to the driving experience. Consequently, I haven't heard of any notable incidents where Pilot Assist caused an accident.
I haven't test driven Enhanced Autopilot, but I see people on YouTube taking their hands off the wheel completely for much longer than the Volvo let me. Tesla seems to be pushing out something half-baked and representing it as something it isn't.
I’ve found lane keeping assistants to be fantastic, but you have to think of them as ASSISTING.
It’s not going to drive for you. But it does an amazing job of basically canceling out crosswinds when you find yourself driving down the highway. It makes a huge difference; driving becomes much less tiring/frustrating.
Going slow on a windless day? Straight street? Not that useful.
For helping with very gentle curves or very annoying crosswinds it’s quite nice.
And for basic station keeping in relatively predictable stop and go traffic. It relieves you of some unnecessary micromanagement but it's not autonomous.
> Consequently, I haven't heard of any notable incidents where Pilot Assist caused an accident.
It may be, but it's not necessarily a "consequently". The fact that Volvo isn't cool to write about plays a role; you don't read about boring daily crashes either.
It’s pure arrogance. In the Model 3, when you are in Autopilot, they have video games and memes to play/watch on the giant screen. How does this pass regulation and get approved for sale? Building products to actively distract the operator of a 2-ton vehicle moving at 60 mph?
This is actually true. It baffled me too. For proof, check this review video by Top Gear, where they demo the Easter eggs that are only available in Autopilot mode: https://youtu.be/1GrNv3ow9H8
Weird, the tweet was dated 2018-05-22 but the Youtube video shows easter eggs already deployed by 2018-05-23. I know all Tesla vehicles have automatic over-the-air updates but that's still surprisingly quick. Must be Musk playing around with his time machine again.
The screen displays a Mars rover. You turn the steering wheel to control where that Mars rover goes. I'd consider that a video game.
I guess you don't consider that a video game, which is a perfectly valid opinion. But I just wish you had made your first comment "I don't consider Tesla's interactive easter eggs to be video games" instead. That way you'd clearly communicate your intent, and this whole needless bikeshed could have been avoided in the first place.
"You turn the steering wheel to control where that Mars rover goes."
No you don't. The Mars map is exactly like the Earth map, with a Mars texture instead of the map, and a rover instead of an arrow. Technically you control where the rover goes by using the steering wheel, because the steering wheel controls where your car goes, and the rover's movements come from your car's GPS. But it's not the sort of video game you describe. There's no way it could possibly work as you describe. The steering wheel is mechanically connected to the front wheels and still steers the car even when Autopilot is engaged, so it could not possibly be used as a video game control.
The problem isn't my failure to disclose my opinion of the easter eggs, it's your misunderstanding of what they actually are.
Nope, I am still blaming the government (and I am in no way a Musk fanboy).
Self-driving tech has been around for decades. Convoy driving of trucks (one human-driven truck in front, several trucks following its lead) on highways was developed in the 90s.
On well-maintained infrastructure, automated driving is a problem that was solved before deep learning. The fact that we don't have an automatic lane-driving mode for highways as a standard feature in new cars is a political failure.
In 1997 a huge and successful demonstration of an automated highway system was carried out, and legislators did not follow up by allowing onto the road the systems that more than 10 billion dollars had been spent to develop. [1]
In my opinion, a technology should be considered ready/safe enough if it is x times safer than the average driver. Zero accidents is a crazy requirement. Ten times safer than a human is much more reasonable, but no politician will accept that.
And obviously it is easier to develop a safe system on some well known and well maintained road segments (e.g. highways) than on 100% of the roads and dirt roads on the North American continent.
So yes, the only way for an economic actor to get in there is to bend the existing rules, since rules for automated vehicles were never made. You can't sell a car saying "no need to look at the road, I'll drive for you", so you put in a disclaimer while making sure people understand that you actually made a self-driving car.
You can't sell a self-driving car and argue that your rate of accidents is acceptable, because the laws for such notions are not there. A single accident in your self-driving car (which may be 10 times safer than a manual one) and you will be held responsible.
Entities like Elon Musk are a symptom of the worsening disease of stupidity that has gotten hold of the global human population...
It is not really surprising when our whole system of existence has the stupidity and weakness of its inhabitants at its foundation. Point is, this will only get worse. Because any "progress" along the current lines asks the common man to be more stupid...
> except the damn technology isn't ready/safe enough for what people are clearly using it for
Outrageously false. These incidents are so rare that they have whole news stories written about every single one. How many crashes were there in Laguna Beach yesterday?
Tesla accident rates in aggregate are quite good. Under Autopilot the data is noisy but still below the level of an average car (though not as low as luxury vehicles in the same price range).
Claiming that it "isn't ready" is innumerate nonsense. It could clearly be better, sure. So could you. And no one is demanding to revoke your license, because you aren't a scary AI.
The problem is that the argument being presented ("self driving is safer") is defeated by very obvious failures (driving into pillars, police cars, and people) that are /exactly/ the kinds of things that a computer should be able to handle. Human-caused accidents in these situations are almost invariably due to inattentiveness, which should not apply to any automated system.
So we get to the problem of "if it can't handle the specific, obvious cases where it should be absolutely superior, how can we trust it to handle any of the complex situations that a self-driving car necessarily must handle?" (I would argue an automated system should never have been able to cause either the police-cruiser crash or the 280 crash under any circumstances; both happened in perfect conditions: clear weather, etc.)
The Tesla system solves this by simply saying "the driver is still driving, we're merely assisting", but that assistance leads to inattentiveness. That's human nature; you have to design around it. Which would be fine if the systems were able to handle that, but they clearly are not.
I would argue that given the configuration of the Tesla system, they are less safe, not necessarily in terms of net accidents, but rather because of the failure modes of the accidents. Bumper-to-bumper accidents on a freeway happen all the time; even driving perfectly you can end up in one, because not everyone else is driving perfectly. The problem with things like the Tesla system is that it's generally fine, except in those cases where it drives at speed into objects because it gets confused.
If your vision or mental state leaves you unable to safely account for large obstructions you aren't allowed to drive. Because it isn't safe.
How many Teslas are there? How many crash without Autopilot, and how many crash with Autopilot? How does this compare with other low-ownership-rate cars in the same class?
It seems premature to me to be making bold claims like this. Of cars costing over $100k, what's the accident rate?
There were numbers thrown around in the last thread on this; I gave you my recollection of them. The answer is "Teslas are safe enough". Human drivers with the same accident rate wouldn't raise so much as an eyebrow.
And the "bold claims" bit is exactly backward. You are the one claiming that the technology "isn't ready" based on sensationalist local news story and not actual numbers.
By definition and design, Autopilot is used only in the safest, most predictable driving situations. So it really makes no sense to compare it with statistics on human drivers in general. My first reaction was that it would be better to compare with interstate driving, but even that is still an unfair comparison, because humans are going to have a disproportionate number of fatalities in precisely the situations where most drivers with Autopilot turn it off.
What data are you basing this statement on? Or is this just your vague gut feeling based on reading about a few accidents in the news? I thought statistical data showed otherwise, that the technology was safe enough (ie. better than humans).
Reading that blog entry makes me more distrustful of Tesla than I was, because I don't think it's appropriate to compare Autopilot miles to generic human driver miles. Autopilot is not suitable for all types of driving. The most readily accessible statistics that would be more comparable would be fatalities on interstates.
US interstate deaths are about 1 per 183 million miles, so while Autopilot may be safer, it's not clear-cut given the uncertainty of the 320 million mile figure (rough numbers below). Also, German drivers on the autobahn have roughly half that fatality rate, which is better than the claim for Autopilot. All in all, I think an honest assessment is that it's probably about as good as an average driver in good conditions who isn't drunk. But rhetoric about saving 900K lives raises my hackles; I do not want to trust the source of this sort of PR with my life.
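To put a number on that uncertainty, here's my own back-of-the-envelope, and it assumes the 320-million-mile figure corresponds to a single fatality over that exposure, which I can't verify. A one-event sample gives an enormous confidence interval:

    from scipy.stats import chi2

    def poisson_rate_ci(events, exposure_miles, conf=0.95):
        """Exact (Garwood) confidence interval for a Poisson rate, per million miles."""
        alpha = 1 - conf
        lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events else 0.0
        hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
        per_million = 1e6 / exposure_miles
        return lo * per_million, hi * per_million

    # Assumption (mine): one fatality over 320 million Autopilot miles.
    lo, hi = poisson_rate_ci(events=1, exposure_miles=320e6)
    print(f"Autopilot 95% CI: {lo:.5f} to {hi:.5f} fatalities per million miles")
    print(f"US interstate baseline: ~{1/183:.5f} per million miles")

That interval (roughly 0.00008 to 0.017 per million miles) comfortably straddles the ~0.0055 interstate baseline, so a single data point can't tell you whether Autopilot is safer or not.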
These numbers from Tesla have been debunked several times (see every older discussion on this on HN) because of the flawed methodology (mostly comparing apples to oranges).
No offense but this is exactly the attitude that inhibits progress and led the auto industry to languish in mediocrity for years. If you wait until the technology would prevent all accidents, you'd be waiting forever.
Your attitude is inhibiting progress. Everybody knows the technology is not ready, and everyone is fine with that and willing to face the consequences of refining it. Everyone except Tesla, that is, which insists their self-driving tech is so advanced they even call it Autopilot, encouraging people to treat it as such, and refuses to admit any fault in this mess.
> Everybody knows the technology is not ready
And by what standards are you gauging that it's not ready? My point was that if you wait until it becomes "safe enough", you'll be waiting forever, because Tesla would be forever at the mercy of whoever's rules govern what "safe" means. Roads are hazardous places, and unfortunately accidents are going to happen.
What I do know is that the Autopilot system will never be distracted, will never fall asleep, will never drive intoxicated or recklessly, will never disobey traffic laws like humans do. Tesla also has a financial incentive to improve their technology to prevent this from happening. Unlike humans who, for the most part, have far less incentive to drive more carefully.
I think you're setting up a dichotomy here that doesn't exist. It's not a choice between waiting forever or disregarding safety. Look at how Waymo is approaching the problem, for example.
As good as a human at what? It's 100% better at not falling asleep behind the wheel or driving under the influence, which statistically are among the most common causes of traffic accidents.
We don't yet have evidence that Teslas using Autopilot are any less safe than human drivers. We do have plenty of examples of traditional automakers negligently ignoring safety regulations, manufacturing faulty vehicles, and also causing fatalities. But somehow we don't hear the same pleas for sympathy for those souls.
We don't yet have any evidence that Teslas using Autopilot are any safer than human drivers either. We do have plenty of evidence suggesting that humans are unable to pay proper attention when asked to supervise level 3 vehicle automation. There is plenty of reason to believe this is going to cause an increase in accidents, particularly of the fatal variety (say, running over a pedestrian at lethal speed).
The NHTSA found a 40% crash rate reduction from Tesla's Autopilot. So how about you tell all the people whose lives have been saved that you wish they hadn't been, because you're an emotional reactionary.
I believe many people have found issues with the data used and how it was presented. Also, it's worth comparing the common case of accidents (bumper-to-bumper, etc., on a freeway, which I'd expect all self-driving systems to instantly reduce just by increasing the following distance to the next car) to the number of serious crashes (e.g. driving into parked vehicles or concrete pillars); those have entirely different failure modes.
That said, the reality is that the types of self driving systems currently deployed - Tesla or whomever - are all fundamentally flawed because they do two things:
* Require attention from the human driver at all times
* Cause driver inattention
The first is fairly clear: literally every one of these products is prefaced with "the driver is still in control of the vehicle and must remain attentive". The latter is clearly demonstrated by all of the different mechanisms manufacturers are deploying to deal with the fact that if you tell a human to pay attention to a specific task while they aren't actually performing that task, they will not pay attention. Every system tried merely increases the time it takes a human driver to adapt to unconsciously managing the various "pay attention" alarms.
The solution is fully self driving cars. We don't have that yet. I fully expect them to be safer once we get there.
I don't disagree that levels 2/3 are a danger zone in self-driving technology, but I feel the need to point out that there are plenty of people who are disabled or suffer chronic injuries resulting from being rear-ended. Stop and go traffic can be plenty dangerous as well.