KICK TECH BROS OUT OF 196

  • @jivemasta@reddthat.com
    4 • 9 months ago

    I think the argument is that for it to truly be AI, it would need to be able to react to new situations it wasn’t trained on.

    Like everything it does now is just picking the most likely response out of the things it was trained on, with no real thought given to the current situation (there’s a toy sketch of what I mean at the end of this comment).

    For example, AI-powered self-driving cars can’t really make decisions like, “hey, there’s a child playing with a ball on the side of the road; it’s not a threat yet, but I’d better pay attention to where that ball is going.” The car will just do nothing until it’s on a collision course, and by that time it may not have enough space to stop, because it also can’t really judge the condition of the road.

    The AI as it exists right now basically only knows about the moment it is currently in and the moment it just left. It is not looking toward the future and weighing possible outcomes and plans of action like we do. It doesn’t attempt to identify situations until they actually happen, so while it can react faster than a human, humans can set things up so they never have to react at all.
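
    Here’s the toy sketch: a hypothetical “reactive” policy in Python that only sees the current frame and picks the most likely action from its training data, with no model of what happens next. Every name and number is invented for illustration, not taken from any real driving stack.

    ```python
    # Hypothetical reactive policy: it sees only the current frame and picks
    # the action its training data most often paired with frames like it.
    def reactive_policy(frame: dict) -> str:
        if frame.get("collision_course"):   # threat is already imminent
            return "brake"
        return "continue"                   # no model of where the ball rolls next

    # The child chasing a ball never changes the output until the ball is
    # actually in the car's path, which may be too late to stop:
    print(reactive_policy({"collision_course": False}))  # -> continue
    print(reactive_policy({"collision_course": True}))   # -> brake
    ```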

    • @testfactor@lemmy.world
      7 • 9 months ago

      Okay, two things.

      First, that’s just not true. Current driving models track all the moving objects around them and what those objects are doing, including pedestrians and things like balls. And that all counts as “things happening in the moment”: everything in sensor range is the moment.

      Second, and more philosophically, humans also don’t know how to react to situations they’ve never seen before; we just make a best guess based on prior experience. That’s arguably the definition of intelligence. The only real difference is that humans are better at it.

      • @jivemasta@reddthat.com
        3 • 9 months ago

        Yes, they can track some moving objects, and if something is currently on a collision course the car will react, but not until it’s clear that it’s actually going to hit the thing. The car isn’t going to read the scene ahead of time and work out whether it might need to act.

        For example, is an AI driver going to recognize an animal running in a fenced-in yard as something it can ignore? What about when the animal is running on a trajectory that would eventually cross the car’s path, except that the fence will stop it?

        Or another common occurrence: you’re driving in the right lane of a street, traffic gets backed up in the left lane, and a person doesn’t look and just pulls into your lane. A good defensive driver would be slowing down a little and watching for any sign of someone trying to switch lanes. I guarantee an AI car would not identify the possibility until someone actually started making a move.

        For it to truly be AI, it needs to think in advance, sort of like chess computers do. It needs to take the current and past states, project possible future states, and weigh them, then feed the outcomes of that process back into future decisions. That is true AI; a lot of the “AI” that exists today is just a static chain of probabilities with some randomness sprinkled on top so it looks different each time.
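
        Roughly what I mean, as a toy Python sketch (the state model, the 20% chance, and the scores are all invented for illustration; this is nothing like a real planner): it looks a few steps ahead, weighs each possible future by its probability, and picks the action whose expected outcome is best.

        ```python
        # Toy lookahead planner. State = distance (in car lengths) to a pedestrian
        # who *might* step into the road; each step they do so with 20% chance.
        def outcomes(dist, action):
            nxt = dist - (2 if action == "keep_speed" else 1)
            return [(0.8, (nxt, False)), (0.2, (nxt, True))]  # (prob, (dist, stepped_out))

        def score(dist, stepped_out):
            if stepped_out and dist <= 1:
                return -100                    # collision: catastrophic
            return -1 if dist <= 1 else 0      # mild penalty for being that close

        def expected_value(dist, depth):
            """Best expected score achievable, looking `depth` steps ahead."""
            if depth == 0 or dist <= 0:
                return 0
            return max(
                sum(p * (score(d, out) + expected_value(d, depth - 1))
                    for p, (d, out) in outcomes(dist, a))
                for a in ("keep_speed", "slow_down"))

        def plan(dist, depth=4):
            return max(("keep_speed", "slow_down"),
                       key=lambda a: sum(p * (score(d, out) + expected_value(d, depth - 1))
                                         for p, (d, out) in outcomes(dist, a)))

        print(plan(6))  # -> slow_down: it acts early because a bad future is *possible*
        ```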

        • @testfactor@lemmy.world
          3 • 9 months ago

          I think literally all of those are scenarios that a driving AI would be able to measure and handle heuristically: “in scenarios like this one in my training set, here is what usually followed.” (There’s a toy sketch of that kind of lookup at the end of this comment.) Like, do you think the training set has no instances of people pulling out of blind spots illegally? Of course that’s a scenario the model would have been trained on.

          And second, those are all scenarios that “real intelligences” fail on very regularly, so saying AI isn’t a real intelligence because it might fail in those scenarios doesn’t logically follow.

          But I think what you’re actually arguing is that AI drivers aren’t as good as an “actual intelligence” driver, which is immaterial to the point I’m making and is ultimately quantifiable: as the data comes in, we will know in a very objective way whether an AI driver is safer on average than a human. Regardless of the answer, though, it has no bearing on whether the AI is in fact “intelligent”. Blind people are intelligent, but I don’t want a blind person driving me around either.
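
          To be concrete about that heuristic, here’s a toy nearest-neighbor sketch in Python. The features, numbers, and labels are all invented for illustration, and a real model learns this implicitly rather than storing examples; it just finds the training scenarios most similar to the current one and reports what usually followed.

          ```python
          import math

          # Invented training data: [gap to next car in left lane, left-lane speed]
          # paired with what happened next in that scenario.
          TRAINING = [
              ([0.2, 5.0], "car_merges"),
              ([0.3, 3.0], "car_merges"),
              ([2.0, 25.0], "no_merge"),
              ([1.8, 30.0], "no_merge"),
          ]

          def predict(scenario, k=3):
              """Vote among the k training scenarios most similar to this one."""
              nearest = sorted(TRAINING, key=lambda ex: math.dist(ex[0], scenario))[:k]
              labels = [label for _, label in nearest]
              return max(set(labels), key=labels.count)

          # A backed-up left lane (small gaps, low speed) resembles the training
          # cases where someone cut in, so the prediction is to expect a merge:
          print(predict([0.25, 4.0]))  # -> car_merges
          ```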