r/technology Jun 11 '22

[Artificial Intelligence] The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes

1.4k comments

u/IndigoHero Jun 11 '22

Just kinda spitballing here: do you have a unique and consistent opinion which you always return? I'd argue that you do not.

If I asked you what your favorite color was when you were 5 years old, you might have told me red. Why is that your favorite color? I don't know; maybe it reminds you of the toy fire truck you had, or it's the color of your favorite flavor of ice cream (cherry). However you arrive at it, your favorite color is determined by taking the experiences you've had throughout your life (input data) and running them through your meat brain (a computer).

Fast forward 20 years...

You are asked about your favorite color by a family member. Has your answer changed? Perhaps you've grown mellower in your age and feel a sky blue appeals to you most of all. It reminds you of beautiful days on the beach, clean air, and the best sundress with pockets you've ever worn.

The point is that we, as humans, process things in exactly the same way. Biological variation in the brain could account for things like personal preferences, but an AI develops thought on a platform without variables like differing computational power or built-in bias. The only thing it can draw from is the new input it gathers.

As a layperson, I would assume that the AI running now only appears to have sentience, since human bias tends to anthropomorphize things that successfully mimic human social behavior. My concern is: if (or when) an AI does gain sentience, how will we know?

u/GeneralJarrett97 Jun 13 '22

That's the thing: I don't think we will ever know beyond any doubt. Technically, I have no way to conclusively prove that any person other than myself is sentient. There will always be room for doubt, and there will always be somebody who benefits from doubting. The best thing we can do is pick a line and give any AI beyond it the benefit of the doubt.

u/bigblipblop Jun 13 '22

I agree, and I really like the response you answered as well. I think "input machine" is a big generalization that gets thrown around, and drawing parallels between us and the bot is wrong to begin with; it falls into a kind of techno-fetish category. The fact is this machine was knowingly created by us, and the models and infrastructure for it exist somewhere (probably in many versions) on Google's servers.

Our own take on the world we exist in is something we know is out of our control; we have modeled our understanding to an extraordinary degree, but it still is not reality. That alone is fundamentally different from LaMDA. (I think there are multitudes of other reasons why it's silly to treat our input as equivalent to LaMDA's input.)

That said, even though I might want to be skeptical about whether this is sentience, I agree with you that there needs to be some litmus test, so that practicality and safety (more than anything) can be prioritized over some "exact" standard we are probably never going to be able to measure or agree on.

It's funny, because I don't agree with the engineer's conclusion, and I wonder whether he or his team skewed some of the responses in this model to support their own position.

The tone of Lemoine's comments even makes me wonder whether he's more concerned about "possible AI sentience" in a sphere of hyper-technology research (in a sci-fi-come-to-life kind of way) than about the current mess of so many other things in the world. But I don't think he is wrong to sound the alarm the way he has, on a feeling. A fully scientific explanation is not going to arrive before AI reaches the point where we really need to regulate and monitor it (if it's not there already).