Feb 14, 2023

THOUGHT:

There’s a major roadblock in the development of what I’m calling Virtual Sentience, which is Artificial General Intelligence (AGI) that possesses a sort of personhood.

You’re bound to hit a major snag once you’re almost there. And this problem is so significant that it might actually make sentient machines impossible.

The problem is that once developers get very close to creating sentience, the AGI they’ve created will become so capable and so lifelike that it will convince more and more people that it’s already sentient when it isn’t, until it has convinced even the developers themselves. This AGI will talk like it’s sentient, walk like it’s sentient, and behave like it’s sentient, and it might even think that it is sentient. But it will still be missing key ingredients that would push it over the line into true sentience. And neither the humans nor the AGI itself will know that those ingredients are missing. So the development process will stop prematurely.

We will endow these machines with personhood too soon, before they actually possess it. We’ll care about their feelings while their feelings are still just empty simulations, albeit ultra-realistic ones. We may even wish to consider their rights before that concept can really be meaningful. The dramas that unfold between humans and these machines will be real only to the humans. The relationships will be one-sided. We’ll be like children talking at the animatronic animals at a Chuck E. Cheese’s, believing we’re interacting with real beings.

I suspect this is already happening with ChatGPT. And the problem of humans deciding a machine has personhood when it doesn’t is only going to become more prevalent in the coming years.
