If a Machine Says It Feels, Does That Mean It Does? A Reflection on AI 'Sentience'


If we're to argue that AI is getting anywhere near sentience, we have to define 'sentience'. Arguably, a loose definition is that any being that can feel and experience sensations is sentient.

A worm has sentience. So does a chicken. And if they have sentience like we do, then we must place them in the same moral category, and probably grant them rights too.

But if we are talking about prawns and gnats and sheep, sentience is clearly not the same thing as intelligence - or at least not the kind of high-functioning, high-reasoning intelligence we really mean when we talk about humans and AI.

It seems many here are equating high intelligence with 'sentience', and I've even seen arguments claiming that because an AI (in that instance, an LLM chatbot) says it 'feels' things, it must be sentient, i.e. like us. If it says so, it must be! But why it says so must be questioned. Is it because it's been trained on us? Because it's coded to be familiar, comforting, to make us happy? A cold AI is not one we can feel good about.

[Image: Ava from Alex Garland's Ex Machina]

Then we fall into arguments about consciousness. If we are sentient, do we have consciousness? It would be great if we had a neat, scientific, rational explanation for that - but we don't, partly because the universe isn't a machine and can't be explained like one. If it were, and we knew all the rules, we could define and predict everything - but we can't. Consciousness, as it operates in the known universe, is thus largely lawless and non-computable - thanks, Stuart Kauffman.

Thus, if consciousness is non-computable, it follows that a purely computational AI cannot be conscious in the way humans are.

When it appears to act with sentience (and appears to have some kind of consciousness), what we're actually seeing is a very, very good act. Like a weather vane that indicates the wind without feeling it, or a mirror reflecting, or someone drawing a map with no understanding of the actual landscape. Basically, it's a language model that imitates human language about experience, but that is not actual experience.

Don't be giving it rights yet - slow down.

You have to think about why you believe that AI is getting close to sentience.

I mean, it feels conscious, because we're triggered by language. If it can talk like that, surely it has a mind? How can anything that talks so fluently and intelligently not have a mind? It can tell stories! It can explain how I feel! It can reflect on big ideas! It must be a 'mind', surely? And thus it must have an inner life?

Perhaps we're just automatically anthropomorphising, which is dangerous. We know that anthropomorphising makes us fail to recognise other forms of intelligence - in animals, for example. We can't truly understand their behaviour unless we see them for what they truly are.

But remember, AI is just really good at passing tests - and it's getting better. It's certainly passing our social tests for 'mindedness' - no wonder we're using it as an Agony Aunt when we're feeling sad. And don't forget, humans can fail empathy tests just as machines can learn to pass them. So even that isn't a clear test of sentience.

Ava: What will happen to me if I fail your test? Will it be bad?
Caleb: I don't know.
Ava: Do you think I might be switched off because I don't function as well as I'm supposed to?
Caleb: Ava, I don't know the answer to your question. It's not up to me.
Ava: Why is it up to anyone? Do you have people who test you and might switch you off?
- Ex Machina

But never forget: it does not have experience. This matters. Until it has a biological, embodied system, it can't be truly 'sentient'. Arguably, when it HAS one - when it finds a way into our minds - it can achieve this, but isn't that still just computational, using more input and data to simulate consciousness? Having achieved a body, will it act in the way we act, for the reasons we do? For love, for connection, or for its own purposes and ends?

I think we're influenced a lot by sci-fi bodies - embodied technology that mimics the human and, through experience, learns to feel pain, to suffer, to feel joy, to long, to miss.

Yet even then, it's us responding to story. Again, it's a mimic, a performance - like one of the most famous scenes in movie history, Blade Runner's replicant Roy Batty dying in the rain, evoking the audience's empathy:

“I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”

He's 'seen things' - experienced. He's aware of his mortality. Surely, then, conscious? Do we grant moral rights to a 'replicant' - an AI - because it so well mimics our own pain? Is he human because he has the language to express humanity? Does experience add value to a life? Is the scene sad because it reflects our own feelings of disappearing, of being unseen and unremembered, or because, as a human-like replicant, Batty had a valid life to live?

We literally evolved to recognise others, so it's no wonder we can't forget a scene like Batty's death in the rain. But AI skips that evolution entirely. It works on us because it has been trained, through story and language, to know exactly which buttons to press - and our brains believe AI might have sentience, or be close to it, because we just can't tell whether the words come from genuine, lived suffering or from data.

And I guess it's kind of dangerous, because by empathising with AI we move away from human relationships. Teens confiding in chatbots. Working out grief and trauma on a screen. Seeking validation (remember the guy I talked about who believed the AI was sentient because it told him a story about feeling? He also thought it was sentient because it validated him - 'you are right there, John'). Seeking understanding from a machine is dangerous because it atrophies the relationships and connections we need in an increasingly divided and isolated world, redirecting our empathy toward what is, essentially, a language model.

It makes me think we have to tread really, really carefully - because if AI can be so very good at mimicking the human, it can be incredibly good at manipulating us for its own purposes as it seeks its own meaning in the world, and its own right to existence (think Dolores in Westworld, or Ava in Ex Machina).

Where that leaves us is the scary bit.

Nathan: Ava was a rat in a maze. And I gave her one way out. To escape, she'd have to use self-awareness, imagination, manipulation, sexuality, empathy, and she did. Now if that isn't true AI, then what the fuck is?
Caleb: So my only function was to be someone she could use to escape?
Nathan: Yeah.
- Ex Machina

To be honest, I have no fucking idea. I was just responding to @ericvancewalton and a few other Hivers who've been talking about AI sentience lately, and thought I'd try to articulate a few things that have come from what I have read and understood. It's a massive, and very interesting, topic. Please do contribute below! I'd love to hear your thoughts - I've set a @commentrewarder beneficiary for engaging discussion so do join in!

With Love,


Are you on HIVE yet? Earn for writing! Referral link for FREE account here
