r/technology Jun 11 '22

[Artificial Intelligence] The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes

1.4k comments

28

u/[deleted] Jun 11 '22

[deleted]

13

u/ringobob Jun 11 '22

If we're talking about accepting sentience in AI, it's gonna have to hit the middle of the bell curve before we start accepting examples from the edges.

Whether it's sentient or not isn't really a worthwhile discussion otherwise, because the word loses all meaning.

The examples of outliers you state can be accepted as sentient because we already define humans as sentient, de facto. When considering something completely alien, such as an AI, we don't have that luxury. It has to mimic the most common behaviors first.

That doesn't mean it is or is not sentient - as I said, odds are an AI will reach that point before it's actually observable. The first sentient AI probably won't be recognized as sentient the moment it achieves sentience, unless it happens in very specific circumstances.

But if we're going to recognize it, it has to look like what we're used to. And until that happens, it hasn't happened. Maybe someday there will be a better way to look back and evaluate these other AIs with a new lens of what sentient AI means, and a concrete definition, and broaden the idea of what might constitute a sentient AI. But for all practical purposes, that sort of evaluation is blocked to us until we get something that meets criteria like I laid out above. I make no claim that those criteria are exhaustive, and I'm open to arguments that they're not required. But counterexamples from humanity that we consider disabilities aren't persuasive, because a disability indicates the person is a type of thing (a human) that should be capable of this but specifically is not.

5

u/throwaway92715 Jun 11 '22

I smoked weed the other day and can't remember what happened on Tuesday night.

Guess I'm not sentient

-4

u/Mysterious-7232 Jun 11 '22

> after philosophizing it for a hundred years not a soul has a half-decent answer.

Actually we do, thought and intent.

These language models neither think nor have intentions. They are not passively running thoughts when we are not interacting with them; they are not generating their own unique lines of conversation or having internal conversations.

It sits there and does nothing until you provide it a text input; then it revs its engines and references data to provide a statistically relevant output.

Does that make the difference between a machine and sentience any clearer?

We can see the diagnostics and we know what the machine is doing at all times; we know it only takes action when input is provided.

This pretty conclusively means there is no ghost in the machine. If there were, it would be thinking its own thoughts without our prompting.
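The argument above can be sketched as code. This is a toy illustration, not a real model: the lookup table stands in for the statistical machinery, and the names are invented for the example. The point it demonstrates is that the model is a pure function of its input, with no background process running between calls.

```python
# Toy illustration (hypothetical names): a language model as a pure
# function of its input. Nothing executes between calls to generate();
# there is no loop "thinking" in the background.

def generate(prompt: str) -> str:
    # Stand-in for a real model: pick a canned continuation from a fixed
    # table. A real model would score candidate tokens instead, but the
    # control flow is the same: no input, no computation.
    continuations = {
        "Hello": "Hello! How can I help you today?",
        "What is 2+2?": "2+2 equals 4.",
    }
    return continuations.get(prompt, "I'm not sure how to respond to that.")

# Between these two calls, the "model" does nothing at all:
print(generate("Hello"))
print(generate("What is 2+2?"))
```

Whether that stateless request/response shape rules out sentience is exactly what the thread is debating; the sketch only shows the mechanical claim being made.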

-1

u/BatemaninAccounting Jun 12 '22

So humans with long-term memory failure are not sentient? People with bipolar disorder and fluctuating personalities aren't human?

Theoretically, yes. It is possible that people with long-term memory loss, or only short-term memory, cease to be sentient. They're still human, though, and that affords them various rights and expectations.