I’m migrating millions of encrypted credit cards from one platform to another (it’s all within the same company, but different teams, different infra, etc.).
I’m the one responsible for decrypting each card, preparing the data in a CSV, and encrypting that CSV for transit. The other guy is responsible for decrypting it and loading it into the importer tool. His technical lead wanted me to generate the key pair and send him the private key, since that way I wouldn’t have to wait for the other guy, and “besides, it’s all in the same company, we’re like a family here”.
Of course I didn’t generate the key pair, and I told them I never wanted to have access to the private key in the first place, but wow. That made me lose a lot of respect for that tech lead.
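For context, the flow I was pushing for is the standard one: the receiving team generates the key pair, keeps the private key to themselves, and only ever sends me the public key. Here’s a minimal sketch of that flow in Python, assuming the `cryptography` package and a throwaway Fernet key to hybrid-encrypt the CSV (the CSV bytes, key size, and names are placeholders, not our actual tooling):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.fernet import Fernet

# --- Receiving team (importer side): generate the pair, share ONLY the public key ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)  # this PEM blob is all the sending side ever sees

# --- Sending side (me): hybrid-encrypt the CSV using only the public key ---
public_key = serialization.load_pem_public_key(public_key_pem)
file_key = Fernet.generate_key()                     # one-off symmetric key for the CSV
ciphertext = Fernet(file_key).encrypt(b"card_token,expiry,...\n")  # placeholder CSV bytes
wrapped_key = public_key.encrypt(                    # wrap the symmetric key with RSA-OAEP
    file_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# --- Receiving team again: only they hold the private key, so only they can decrypt ---
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
```

The whole point is that the private key never leaves the importer team’s side, so nothing I do as the sender requires it.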
I was thinking… what if we do manage to make an AI as intelligent as a human, but can’t make it any better than that? Then a human-level AI wouldn’t be able to improve itself, since it only has human intelligence and humans can’t improve it either.
Another thought: what if each improvement to an AI is exponentially harder than the last? Then progress would have to stall at some point, since there wouldn’t be enough resources on a finite planet.
Or what if it takes superhuman intelligence to build human-level AI in the first place? The singularity would be impossible in that case, too.
I don’t think we will see the singularity, at least not in our lifetime.