As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
No, you see, the corporations will just lobby until the courts get enough money to classify AI as its own individual entity, just like with Citizens United.
(At least in Romania) if a child commits a crime, the parents are punished.
The person allowing the AI to make these decisions should be punished until the AI is at least 15 years old (and killing it and replacing it with a clone, or with a better AI under the same name, still resets the age to 0).
The person who allowed the AI to make these decisions autonomously should be the one held responsible.
We should do it as Asimov showed us: create “robot laws” that are similar to slavery laws:
In principle, the AI is a non-person and therefore a person must take responsibility.
The whole point of Asimov’s three laws was to show how they could never work in reality, because it would be very easy to circumvent them.