Alphabet-owned Waymo unveiled its sixth-generation Driver system on Monday with a more efficient sensor setup. Despite having fewer cameras and LiDAR sensors than the current platform, Waymo says the new setup maintains the same safety levels. Once it’s ready for public rides, it will coexist with the current-gen lineup.

CNBC reports that the new system is built into Geely Zeekr electric vehicles. Waymo first said it would work with the Chinese EV maker in late 2021. The new platform’s rides are boxier than the current-gen lineup, built on Jaguar I-PACE SUVs. The Zeekr-built sixth-gen fleet is reportedly better for accessibility, including a lower step, higher ceiling and more legroom — with roughly the same overall footprint as the Jaguar-based lineup.

The sixth-gen Waymo Driver reduced its camera count from 29 to 13 and its LiDAR sensors from five to four. Alphabet says they work together with overlapping fields of view and safety-focused redundancies that let it perform better in various weather conditions. The company claims the new platform’s field of view extends up to 500 meters (1,640 feet) in daytime and nighttime and “a range of” weather conditions.

  • fpslem@lemmy.world

Propaganda and marketing spin. Waymo also said its previous cars were safe, and they’ve still had multiple incidents.

    It’s utterly unacceptable that these companies have been allowed to beta-test their 2-ton vehicles and beta software on public streets.

    • threelonmusketeers@sh.itjust.works

      they’ve still had multiple incidents

Human drivers have hundreds of incidents per day. Has anyone done an analysis to see if Waymo’s incident rate is better than the human incident rate? While we absolutely need to hold companies accountable, it’s important to remember that autonomous vehicles don’t need to be perfect to be an improvement. They just need to be better than humans, which is a rather low bar.

      • fpslem@lemmy.world

        They just need to be better than humans, which is a rather low bar.

        Man, hard disagree. These systems have to be WAY better than humans to justify their huge costs. From a policy perspective, “better than humans” isn’t good enough. And from a fiscal and legal perspective, it’s disastrous. Companies need to achieve perfect or nearly perfect records to avoid being sued out of existence in products liability suits.

        Also, just a friendly reminder that Cruise (competitor to Waymo) admitted that it had an average of 1.5 employees directing each so-called autonomous car. Waymo hasn’t had to disclose those numbers yet, but it employs far more people than Cruise, so I think it’s safe to assume that the number is not zero. As much as I want it to be true, this tech is nowhere close to actually autonomous yet. My suspicion is that true autonomous vehicles are still many decades away, due to computing power constraints, sensor fidelity, etc. https://www.nextbigfuture.com/2023/11/one-and-half-remote-cruise-employees-were-supporting-each-driverless-car.html

        • IphtashuFitz@lemmy.world

Couldn’t agree more. If all driving were easily predictable, then “just better than humans” would be reasonable. But in my decades of driving I’ve encountered so many edge cases that I seriously doubt true self-driving will exist until we develop true AI (not just the LLM stuff that’s currently all the rage) that can react to events that aren’t pre-programmed.

          Just a few examples of things I’ve encountered:

          • A car fully engulfed in flames in the middle of a busy multi-lane intersection. I had a green light but I could hear (and barely see) emergency vehicles approaching from a different direction, so I had to give way.
          • Trees fallen entirely across the road.
          • I’ve seen a Tesla get confused by a landscaping truck hauling a trailer overflowing with tree trimmings so much that it looked like a giant bush. You couldn’t see the trailer, brake lights, license plate, etc. Would a Waymo car be able to tell the difference between a trailer like this and the above mentioned tree blocking the road?
          • Part of a one-way road near me was closed for a while so a water main could be repaired. People who lived on the street, delivery vehicles, etc. had to drive the wrong way to get out of there. Would a self driving car recognize when to do this?
          • I once stopped at a red light where a construction crew was working at the corner. I didn’t notice a cop standing near them was waving me through the red light until he walked up to my car and yelled at me to go. Would a self driving car recognize when a police officer overrides a traffic light?
• Driving after heavy rain and encountering flooded roads, where the sun and other reflections make the water tough to spot at a distance.
          • And many, many more…
          • threelonmusketeers@sh.itjust.works

            Thanks for sharing your experience. Do you think there are currently more unhandleable edge cases than there are human drivers who are tired, drunk, or distracted?

My feeling is that autonomous vehicles will only get better from this point onward, whereas I don’t foresee any appreciable improvement in human drivers. At what point do you think these lines will cross? 3 years? 8 years? 20 years?

            • IphtashuFitz@lemmy.world

Well that’s the thing about edge cases - by definition they haven’t been explicitly taken into account by the programming in these cars. It is literally impossible to define them all, program responses for them, and test those responses in real-world conditions. For a self-driving car to handle real-world edge cases, it needs to be able to identify when one is happening and very quickly determine a safe response to it.

These cars may already be safer than drunk/drowsy drivers in optimal situations, but even a drowsy driver will likely respond safely if they encounter an unusual situation that they’ve never seen before. At the very least they’d likely slow down or stop until they can assess the situation and figure out how to proceed. Self-driving cars also need to be able to recognize completely new/unexpected situations and figure out how to proceed safely. I don’t think they will be able to do that without some level of human intervention until true AI exists, and we’re still many decades away from that.

    • Thorny_Insight@lemm.ee

We let distracted and inexperienced apes drive on public roads as well. I bet these drive better than they do.