• NeoNachtwaechter@lemmy.world
    3 months ago

    When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

    The person who allowed the AI to make these decisions autonomously.

    We should handle it the way Asimov showed us: create “robot laws” similar to the old laws on slavery:

    In principle, the AI is a non-person and therefore a person must take responsibility.

    • Nommer@sh.itjust.works
      3 months ago

      No, you see, the corporations will just lobby until the courts get enough money to classify AI as its own individual entity, just like with Citizens United.

    • Nomecks@lemmy.ca
      3 months ago

      The whole point of Asimov’s three laws was to show that they could never work in reality, because they would be very easy to circumvent.

    • RandomVideos@programming.dev
      3 months ago

      (At least in Romania,) if a child commits a crime, the parents are punished.

      The person allowing the AI to make these decisions should be punished until the AI is at least 15 years old (and killing it and replacing it with a clone of the AI, or with a better AI under the same name, doesn’t reset its age to 0).