Self-driving in general has been overhyped by grifter tech bros like Elon and really shows the current limits of ML. Today’s ML models are basically fuzzy, probabilistic functions that map inputs to outputs; they are not capable of actual reasoning. There is a long tail of scenarios where a self-driving car will not generalize properly (i.e., will kill people), and throwing ever more data and compute at it won’t suddenly make it capable of reasoning like a human. Like other ML use cases, self-driving is a cool concept that can be put to good use under the right conditions, and can even operate mostly without human supervision. However, anyone claiming it’s safe to let today’s “self-driving” cars shuttle humans around at high speeds with no additional safeguards in place either has an unrealistic understanding of the tech or is a sociopath.
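To make the “fuzzy function” point concrete, here’s a toy sketch (scikit-learn and all the numbers are purely illustrative, nothing from an actual driving stack): a trained classifier will happily emit a confident probability for any input you hand it, including inputs nothing like its training data. There is no built-in “I don’t know.”

```python
# Toy illustration: a classifier is a learned mapping from inputs to
# class probabilities. In-distribution it looks great; far outside its
# training data it still answers with near-certainty, because nothing
# in the math represents "I have never seen this before."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two tight, made-up training clusters ("clear road" vs "obstacle").
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
model = LogisticRegression().fit(X, y)

print(model.predict_proba([[0.1, 0.2]]))    # in-distribution: sensible
print(model.predict_proba([[40.0, -35.0]])) # wildly out of distribution:
# still near-certain about one class -- confidently wrong is possible
```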
IMHO, this was really a video about camera-only automatic emergency braking, not autonomous driving.
Lots of cars have AEB now since a lot of regulators require it, but most use a combination of cameras and ultrasonic sensors. The top-of-the-line systems have LiDAR, cameras, and ultrasonic sensors.
Tesla’s sensors lack redundancy. If the cameras are obstructed or can’t distinguish shapes, the vehicle can’t fall back to another system.
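To illustrate what redundancy buys you, here’s a rough Python sketch of a multi-sensor fallback. The sensor names, readings, and thresholds are invented for the example; this is not any manufacturer’s actual logic:

```python
# Hypothetical sketch of sensor-redundant obstacle detection.
# All values and thresholds are made up for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    distance_m: Optional[float]  # None = nothing usable from this sensor
    confidence: float            # 0.0 .. 1.0

def nearest_obstacle(camera: Reading, radar: Reading, lidar: Reading) -> Optional[float]:
    """Return the most conservative (closest) credible obstacle distance.

    With redundant sensors, a blinded or low-confidence camera can be
    overruled by radar/lidar. Camera-only, there is no second opinion.
    """
    MIN_CONFIDENCE = 0.5
    credible = [
        r.distance_m
        for r in (camera, radar, lidar)
        if r.distance_m is not None and r.confidence >= MIN_CONFIDENCE
    ]
    return min(credible) if credible else None

# Fog scenario: camera trusts nothing, radar and lidar still see a target.
fused = nearest_obstacle(
    camera=Reading(distance_m=None, confidence=0.1),
    radar=Reading(distance_m=38.0, confidence=0.9),
    lidar=Reading(distance_m=37.5, confidence=0.95),
)
print(fused)  # 37.5 -> brake; a camera-only stack would have returned None
```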
Coming here because I saw how downvoted this post was on Reddit lol. I love that it’s triggering the Elon fanboys.
Maybe it was downvoted because of Mormon weirdo Mark Rober and not the content itself?
Is he a weirdo for being Mormon, or something else?
Based on the comments, it’s the Tesla stans.
Yeah, don’t get me wrong, I’m not a Mark Rober fan, and I don’t even think he’s making this video because he’s anti-Elon; he’s making it because it’s popular to hate Elon and Tesla at the moment. It happens to be a good thing, but unfortunately I don’t think Mark is doing it out of virtue.
Who cares about virtue? I just want to be entertained by Tesla cheaping out and then telling drivers it’s a premium car brand that will pay for itself with robo taxi services.
I don’t think anything Rober does is out of virtue.
I stopped watching his vids when it felt like literally every single one was a promotion for his kid box thing. I also feel I may have aged out of his target demo, which feels weird because I’m still watching most of the other science channels I followed at the time.
Ahah, Tesla is like a 2000s knock-off of good existing technology.
I’m bearish on TSLA, but I still saw there’s some controversy surrounding his testing methodology and shortcomings in that video. It was talked about a bit on Philip DeFranco’s show.
IMHO, at the end of the day, all of those vehicles have emergency braking systems. It doesn’t matter if he was in FSD, Autopilot, or manual control; AEB (Automatic Emergency Braking) should’ve stopped or slowed the vehicle.
Yeah, AEB is an always-on thing.
AEB might not always prevent a crash, but it should slow the vehicle at the very least so the crash has less energy.
You could have a system that never prevents a crash and you’d still get an insurance discount due to the slowing benefits.
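To put rough numbers on the “less energy” point: kinetic energy goes with the square of speed, so even partial braking takes a big bite out of a crash. A quick back-of-the-envelope sketch, where the mass and speeds are arbitrary example values:

```python
# Back-of-the-envelope: KE = 0.5 * m * v^2, so AEB that only *slows*
# the car still sharply reduces impact energy. Example values only.
def kinetic_energy_joules(mass_kg: float, speed_kmh: float) -> float:
    v = speed_kmh / 3.6  # km/h -> m/s
    return 0.5 * mass_kg * v ** 2

mass = 2000.0                                # roughly a mid-size EV
before = kinetic_energy_joules(mass, 60.0)   # no braking at all
after = kinetic_energy_joules(mass, 35.0)    # AEB scrubbed off 25 km/h

print(f"no braking: {before/1000:.0f} kJ")   # ~278 kJ
print(f"with AEB:   {after/1000:.0f} kJ")    # ~95 kJ
print(f"reduction:  {(1 - after/before):.0%}")  # ~66% less energy
```

Slowing from 60 to 35 km/h cuts the impact energy by about two thirds, which is why even a “failed” AEB intervention still matters.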
Props to Benn Jordan for doing this a year ago on a slightly lower budget.
Bonus deep dive about using LiDAR to map out Space Mountain
I wouldn’t exactly call that a deep dive.
The channel is for 5-year-olds; they would drown in a real deep dive.
Easy: “If you want to learn more, check out our second channel where we explain in depth how we approached this topic, the technology used, and what we learned.”
💯
I’m shocked Disney isn’t throwing a fit over that. Their legal team must be busy this week.
Getting into a legal battle with an immensely popular YouTuber would probably cost them a lot more in bad publicity than they would reasonably make from a lawsuit. I guarantee someone at Disney is doing or already has done the calculations.
https://www.theguardian.com/film/article/2024/aug/15/disney-wrongful-death-lawsuit-dismissal
Disney is not scared of lawyering up.
Oh, I’m not disagreeing with you there. In this case, they were being brought in as a defendant for a wrongful death case. They probably realized there was some potential PR damage from invoking the Disney+ terms (rightfully so, fuck them), but I’m sure they weighed that against the liability of the wrongful death suit.
In the case of the video, I’d be surprised if they went after Rober, because I don’t see them gaining more from this case than they would lose in publicity. I could be wrong though, perhaps we’ll see!
Insane that the Tesla drives into spaces it’s unsure of. So dangerous.
Sure, but their sensors will detect if you aren’t paying enough attention and report back to Tesla headquarters to get the lawyers ready before you can even get out of your car.
That’s the thing that got me. I would have issues spotting that child through the fog as well, but I wouldn’t have sped through it.
A Tesla stopped for me at a crosswalk and I insisted, you go on ahead, I ain’t trusting Musk Tech with my life.
What makes you think it’s unsure?
True, it’s not unsure, but it should be. If it doesn’t have good visibility, it should slow down or disengage Autopilot.
In fog it doesn’t know it doesn’t have good visibility.
If it had lidar, then it would.
Then it shouldn’t work in fog, and should force the driver to drive.
Maybe it is sure, but that doesn’t make it accurate.
I’ve been shit-talking Elon’s (absolutely boneheaded) decision to intentionally eschew redundancy in systems that are critically responsible for human life for years now. Since he never missed an opportunity to show off his swastikar in MANY of his previous videos, I had assumed Mark Rober was a sponsored member of the alt-right intellectual dark web. But I’m pleasantly surprised to see that this video is a solid (WELL-justified) smear. 👌
I had assumed Mark Rober was a sponsored member of the alt-right intellectual dark web.
He is.
I am not a fan of Tesla/Elon, but are you sure that no human driver would fall for this?
That is a completely legitimate question. That you are downvoted says a lot about the current state of Lemmy. Don’t get me wrong, I’m all for the Musk hate, but it looks like a nuanced discussion on topics where Nazi-Elon is involved is currently not possible.
All the other cars he tested stopped just fine.
Part of the problem is the question of who is at fault if an autonomous car crashes. If a human falls for this and crashes, it’s their fault. They are responsible for their damages and the damages caused by their negligence. We expect a human driver to be able to handle any road hazards. If a self-driving car crashes, whose fault is it? Tesla’s? They say their self-driving is a beta test, so drivers must remain attentive at all times. The human passenger’s? Most people would expect a self-driving car to drive itself. If it crashes, I would expect the people who made the faulty software to be at fault, but they are doing everything they can to shift the blame off of themselves. If a self-driving car crashes, they expect the owner to eat the cost.
As soon as we have hard data from real-world use showing FSD is safer than the average human, it would be unethical not to solve the regulatory and legal issues and apply it on a larger scale to save human lives.
If a human driver causes a crash, the insurance pays. Why shouldn’t it pay if a computer caused the crash, when the computer drives more safely overall, even if only by, say, 10%?
Let’s assume that a human driver would fall for it, for the sake of argument.
Would that make it a good idea to potentially run over a kid just because a human would have as well, when we have a decent option to do better than human senses?
What makes you assume that a vision based system performs worse than the average human? Or that it can’t be 20 times safer?
I think the main reason to go vision-only is the software complexity of merging mixed sensor data. Radar or Lidar alone also have their limitations.
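For what it’s worth, the core math of fusing two noisy estimates is well-trodden; the genuinely hard parts are calibration, timing, and edge cases. Here’s a minimal sketch of inverse-variance weighting, with entirely made-up noise figures, just to show the basic idea isn’t exotic:

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of two noisy
# distance estimates. Noise figures are invented for illustration; real
# automotive fusion (temporal tracking, outlier rejection, calibration)
# is harder, but this is the textbook starting point.
def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Combine two estimates, trusting the less noisy one more."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Camera gets noisy in fog (high variance); lidar stays tight.
camera_distance, camera_var = 55.0, 25.0   # metres, metres^2
lidar_distance, lidar_var = 38.2, 0.04

print(fuse(camera_distance, camera_var, lidar_distance, lidar_var))
# ~38.2 -> the fused estimate leans almost entirely on the lidar
```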
I wish it was a different company or that Musk would sell Tesla. But I think they are the closest to reaching full autonomy. Let’s see how it goes when FSD launches this year.
The main problem in my mind with purely vision-based FSD is that it just isn’t as smart as a real human. A real human can reason about what they see, detect inconsistencies that are too abstract for current ML algorithms to see, and act appropriately in never-before-seen circumstances. A real human wouldn’t drive full speed through very low visibility areas. They can use context to reason about a situation. Current ML algorithms can’t do any of that; they can’t reason. As such they are inherently incapable of using the same sensors (cameras/eyes) to the same effect. Lidar is extremely useful because it fills in a part of the picture that cameras can’t reliably provide. I’m still not sure that even with lidar you can make a fully safe FSD car, but it definitely will help.
The assumption that ML lacks reasoning is outdated. While it doesn’t “think” like a human, it learns from more scenarios than any human ever could. A vision-based system can, in principle, surpass human performance, as it has in other domains (e.g., AlphaGo, GPT, computer vision in medical imaging).
The real question isn’t whether vision-based ML can replace humans—it’s when it will reach the level where it’s unequivocally safer.
Somehow other car companies are managing to merge data from multiple sources fine. Tesla even used to do it, but stopped to shave a few dollars in their costs.
As for assuming there would be safety concerns: well, this video clearly demonstrates that adding lidar avoids three scenarios, at least two of them realistic. As I said, my standard is not “human driver” but the safest option as demonstrated.
Which other system can drive autonomously in potentially any environment without relying on map data?
If merging data from different sensors increases complexity by a factor of 5, it’s just not worth it.
For one, I don’t know if “autonomous no matter what” is an important enough goal versus ADAS, but for another, the gold standard in the industry except Tesla is vehicle-mounted LiDAR, with investments to bring down the tech’s price.
No one ever claimed merging data from different sources was too hard a problem; again, even Tesla used to do it, and decided to downgrade their capabilities to cut costs. “It’s just not worth it” is a strange take on a video demonstrating quite clearly the better data you get from LiDAR than you can possibly get from cameras, and the benefit of avoiding collisions, collisions that kill thousands a year. Even the relatively “won’t turn on unless things are perfect” Autopilot has killed quite a few people, and incurred hundreds of accidents beyond that.
Autopilot is not FSD and I bet many of the deaths were caused by inattentive drivers.
Which other system has a similar architecture and similar potential?
Autopilot is not FSD, but these scenarios are supposed to be within Autopilot’s capability to react to. There’s no indication that FSD is better equipped to handle these sorts of scenarios than Autopilot. Many of the Autopilot scenarios are the car plowing into a static obstacle head on. Yes, the drivers should have been paying attention, but again, the point is that Autopilot, even with all the updates, simply fails to accurately model the environment, even for what should be considered easy.
In terms of comparable systems, I frankly don’t know. No one has a launched offering, and we only know Tesla’s as well as we do because they opt to use random drivers on public roads as guinea pigs, which isn’t great. But again, this video demonstrated “easy mode” scenarios where the Tesla failed and another car succeeded. And all that’s beside the point: it’s not like radar and lidar would preclude FSD either way. The video makes clear, in theory and in practice, that better sensing technology can only improve the safety of a system. FSD with added radar and lidar would have greater capacity for safety than FSD with just cameras. Skipping lidar might be forgivable for cheap cars historically, but removing the radar is bonkers, since radar goes on some pretty low-end cars. No one else wants to risk FSD-like capability without lidar because they see it as too risky. It’s not that Tesla knows some magic to make cameras safe; they’re just willing to inflict bigger risk, and willing to argue “humans are deadly too,” whereas the competition doesn’t even want to try that debate.
FSD is launching this year??! Where have I heard that before?
The Road Runner thing seems a bit far-fetched, yeah. But there were also tests with heavy rain and fog, which the Tesla did not pass.
The Road Runner thing isn’t far-fetched. Teslas have a track record of T-boning semi trucks in overcast conditions, where the sky matches the color of the truck’s trailer.
Isn’t there a rule that if the weather is very heavy and you can’t see, you must stop driving immediately?
You mean a traffic rule? I can’t comment about the US but in Portugal I don’t recall such a rule when learning to drive. Also in Finland I have not experienced that since traffic keeps going even in heavy blizzards.
Should be fine if the car reduces speed to account for the conditions. Just like a human driver does.
And the Tesla doesn’t, that’s the problem. A human would slow down if they can’t see, the Tesla just barrels through blindly.
FSD is still in development
You said it will be released this year in another comment. Are they solving fundamental problems like that in the next 9 months?
Which are the unsolvable problems?
What about the claims that he only used Autopilot, and not Tesla’s Full Self Driving?
(Context: I hate Tesla, just curious for the sake of an honest argument)
All the other cars he tested stopped just fine. Who cares about fiddling with modes and shit.
“Full self driving” still needs to be in quotes. It’s a feature’s brand name for a product that doesn’t actually have full self-driving capabilities.
Try not to carry water for their attempted, repeated lie.
Philip DeFranco had him on yesterday, and he said the reason he didn’t use FSD was that it required you to input an address, but that there isn’t any difference in terms of the sensors being used.
Given that the other car didn’t appear to have a version of FSD either, I’m unclear as to why Autopilot wasn’t the correct move for the most accurate comparison.
Not any tangible difference in this scenario. Both use vision only. And both use the same computers.
Human drivers use vision only
Human drivers generally use sound as well.
Fortunately humans have much better hardware and software to accompany that vision.
FSD hardware and software just have to be good enough for the job.
And still we cause so many deaths because we are tired, distracted or emotional when driving.
FSD hardware and software just have to be good enough for the job.
But…it’s not.
Based on? Have you seen the progress in users’ YouTube videos?
It’s not there yet but I don’t see how it can’t work with vision only. It just has to be safer than human drivers.
Based on?
The fact that both systems are vision only, as I said previously.
Have you seen the progress in users’ YouTube videos?
Even better, I used it myself. For 3 months. It’s awful.
But do they use different software? Maybe FSD is more advanced than Autopilot and could have reacted better?
Just playing devil’s advocate here.
Yes, completely different software, but both are limited by machine vision.
The software may change, but these tests show it’s the hardware that’s limiting them. If the Tesla can’t see a kid through fog, it doesn’t matter what software you pick; that kid’s gonna die.
The other car only used emergency braking, so there’s that.
He was helping Tesla out by doing that. He was helping them get the wins they got instead of Tesla just massacring the kid every time. Note to self: as a pedestrian, if you see a Tesla, don’t cross the street.
I like to help a Tesla out by throwing it batteries, Philly style.