- cross-posted to:
- technology@lemmy.zip
"While Waymo is now providing over 150,000 autonomous rides every week, Tesla still has a long way to go until its controversial “Full Self-Driving” software is ready for the EV maker’s competing robotaxi service.
Just this week, a Tesla driver with the $8,000 add-on feature turned on plowed through a deer without even a hint of slowing down, and another smashed into someone else’s car when its owner employed its Summon feature."…
Musk: “We’ve spared no expense!”
Critics: “So, LiDAR?”
Musk: “We’ve spared some expense!”
I once worked for a big corporation that makes hydraulic rescue tools, where management somehow failed to grasp that the chief selling point of these tools is that they do the job reliably every time. No firefighter wants to be trying to get someone out of a car like, “Damnit! The cutter is acting up again. We should probably look into that.”
But the executives kept demanding that we add “features” to the tools that effectively compromised the reliability and then got all surprised Pikachu face when it was explained to them that the customers thought the tools were overpriced half-assed garbage.
I guess my point is I’ve seen plenty of incredibly stupid examples of management ignoring the engineers and yet somehow Musk demanding that radar be scrapped in favor of cameras is right at the top of the list. Especially if you want your customers to live long enough to buy your products more than once.
Even worse, LIDAR isn’t even that expensive. Musky just thought they should be able to do without it because, “humans do it with just eyes.”
Just so no one mistakes the above as hyperbole, Musk actually said this during an engineering meeting. His engineers kept telling him why Tesla needed the extra sensors, and he replied, “people just have eyes and they can drive.”
Anytime someone tries to play him off as an engineer or anything but a lucky gambler and flim-flam man, just remember the above.
And then the car mistook the side of a white semi for the sky and plowed full speed into the semi, decapitating the driver. These systems see, but don’t really understand what they see.
Self-driving doesn’t need to be perfect; it only needs to be better than humans.
Examples like this are kind of pointless because humans are incredibly bad drivers statistically.
It needs to be quite a bit better than humans to actually take off. The difference between a person driving and anything else is who is in control, and therefore who is to blame. Just being statistically better than the average person isn’t good enough, imho.
An order of magnitude better would build trust. Slightly better doesn’t convince people.
People also have a brain. Most of them even use it when driving.
Ah, an optimist!
Absolutely.
Not a fan of Musk at all, but lidar is quite expensive. A 64-line lidar with 100 m+ range was about $30k+ a few years ago (not sure how prices have changed since). The long-range lidar on top of the Waymo car is probably even higher resolution than that. It’s likely that the sensor suite + compute platform on the Waymo car costs way more than the actual Jaguar base vehicle itself, though Waymo manufactures its own lidars. I think it would have been impossible to keep the cost of Teslas within the general public’s reach if they had done that. Of course, deploying a self-driving/L2+ solution without this sensor fidelity is also questionable.
I agree that perception models will not be able to deal with this well for a while. They are just not good enough at estimating depth information. That being said, a few other companies also attempted “vision-only” solutions. TuSimple (the autonomous trucking company) argued at some point that lidar didn’t offer enough range for their solution, since semi trucks need a lot more time to slow down/react to events ahead because of their massive inertia.
Yeah we used to joke that if you wanted to sell a car with high-resolution LiDAR, the LiDAR sensor would cost as much as the car. I think others in this thread are conflating the price of other forms of LiDAR (usually sparse and low resolution, like that on 3D printers) with that of dense, high resolution LiDAR. However, the cost has definitely still come down.
I agree that perception models aren’t great at this task yet. IMO monodepth never produces reliable 3D point clouds, even though the depth maps and metrics look reasonable. MVS does better but is still prone to errors. I do wonder if any companies are considering depth completion with sparse LiDAR instead. The papers I’ve seen on this topic usually produce much more convincing point clouds.
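For anyone unfamiliar with the term: “depth completion” just means filling in a dense depth map from a handful of sparse LiDAR returns. Real systems use learned networks for this, but a toy nearest-neighbor fill in plain NumPy (hypothetical helper, not from any paper mentioned above) shows the basic idea of propagating each measured depth to nearby pixels:

```python
import numpy as np

def densify_sparse_depth(sparse_depth):
    """Toy depth completion: every missing (zero) pixel copies the
    depth of its nearest measured pixel. Brute force, for illustration
    only -- O(H*W*N), fine for small arrays."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)        # coords of LiDAR returns
    vals = sparse_depth[ys, xs]              # measured depths
    gy, gx = np.mgrid[0:h, 0:w]
    # squared distance from every pixel to every measurement: (h, w, N)
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    # each pixel takes the depth of its closest measurement
    return vals[d2.argmin(axis=-1)]

# two returns on a 3x3 image: 1 m at top-left, 5 m at bottom-right
sparse = np.zeros((3, 3))
sparse[0, 0] = 1.0
sparse[2, 2] = 5.0
dense = densify_sparse_depth(sparse)
```

Learned depth-completion methods replace the nearest-neighbor rule with a network that also looks at the RGB image, which is why their point clouds come out so much cleaner than monodepth’s.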
Even my cheap Chinese vacuum cleaner has it.
More money tho.