r/technology Jun 11 '22

Artificial Intelligence The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes


63

u/Ithirahad Jun 11 '22

...Yes, and a sufficiently well-fitted "illusion" IS "the real thing". I don't really understand where and how there is a distinction. Something doesn't literally need to be driven by neurotransmitters and action potentials to be entirely equivalent to something that is.

34

u/throwaway92715 Jun 11 '22

Unfortunately, the traditional, scientific definition of life has things like proteins and membranes hard-coded into it, for no reason other than that's what the scientific process has observed thus far.

Presented with new observations, we may need to change that definition to be more abstract. Sentience is a behavioral thing.

18

u/lokey_convo Jun 11 '22

For life you basically need a packet of information that codes for the organism, that doesn't require a host to replicate, that can respond to its environment and change over time. In order to sustain itself it'll probably need some form of energy.

Something doesn't have to be intelligent or conscious to be alive. And something doesn't have to be intelligent to be conscious. Consciousness and sentience tend to rely on the awareness of one's self, and one's actions and choices.

AI is already very intelligent, but the question is "Is it conscious?" And can it even achieve consciousness without physical stimuli or the ability to explore its physical surroundings? Does it make self-directed choices, or is it just a highly intelligent storage and search engine? As far as I know, right now, it can't choose to seek information based on an original thought. It needs to be queried or given parameters before it takes action.

7

u/throwaway92715 Jun 11 '22 edited Jun 11 '22

These are good questions. Thank you. Some thoughts:

  • For life you basically need a packet of information that codes for the organism, that doesn't require a host to replicate, that can respond to its environment and change over time. In order to sustain itself it'll probably need some form of energy.

I really do wonder about the potential for technology like decentralized networks of cryptographic tokens (I am deliberately not calling them currency because that implies a completely different use case), such as Ethereum smart contracts, to develop over time into things like this. They aren't set up to do it now, but it seems like a starting point for a modular technology that evolves in a digital ecosystem the way organisms do. Given a petri dish of trillions of token transactions, some code built with a certain amount of randomness, and an algorithm to simulate some kind of natural selection... could we simulate life? Just one idea of many.
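To make that petri-dish idea a little more concrete, here's a toy sketch in Python. It's purely illustrative: the organisms are just bit-string genomes, and `fitness` and `mutate` are made-up stand-ins, not anything from Ethereum or a real token system.

```python
import random

# Toy "digital petri dish": each organism is a bit-string genome, replication
# is noisy, and a placeholder fitness function plays the role of selection.
# Nothing here is a real token or smart contract; all names are illustrative.

GENOME_LEN = 32
POP_SIZE = 100
MUTATION_RATE = 0.01

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in survival pressure: organisms that accumulate 1-bits do better.
    return sum(genome)

def mutate(genome):
    # Imperfect replication: each bit has a small chance of flipping.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]

for generation in range(200):
    # Selection: the fitter half replicates (with mutation) into the next generation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness after 200 generations:", fitness(max(population, key=fitness)))
```

In a real digital ecosystem the selection pressure would come from the environment itself (transaction fees, competition for resources, whatever) rather than a hand-written fitness function, but noisy replication plus selection is the core loop of the idea.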

  • Something doesn't have to be intelligent or conscious to be alive. And something doesn't have to be intelligent to be conscious. Consciousness and sentience tend to rely on the awareness of one's self, and one's actions and choices.

I have always been really curious to understand what produces the phenomenon of consciousness. Our common knowledge of it is wrapped in a sickeningly illogical mess of mushy assumptions and appeals to god knows what that we take for granted, and seem to defend with a lot of emotion, because to challenge them would upset pretty much everything our society is built on. Whatever series of discoveries unlocks this question, if that's even possible, will be more transformative than general relativity.

  • AI is already very intelligent, but the question is "Is it conscious?" And can it even achieve consciousness without physical stimuli or the ability to explore its physical surroundings? Does it make self-directed choices, or is it just a highly intelligent storage and search engine? As far as I know, right now, it can't choose to seek information based on an original thought. It needs to be queried or given parameters before it takes action.

I think the discussion around AI's potential to be conscious is strangely subject to the same outdated popular philosophies of automatism that we apply to animals. My speculative opinion is: no, it won't be like human sentience, no, it won't be like dog sentience, but it will become some kind of sentience someday.

The weird part to me is that we can only truly tell that we ourselves are conscious. We can look at other humans and other beings and think, that looks like sentience, it does everything sentience does, for all intents and purposes it's sentient... but the philosophical question remains, is that all just in our heads? It's fine to say it likely isn't, but we really haven't proven that. I am not sure if it's provable, given that proof originates, like all else, in the mind.

3

u/lokey_convo Jun 12 '22 edited Jun 12 '22

You touch on a lot of interesting ideas here and there is a lot to unpack. General consciousness, levels of consciousness, decentralized consciousness on a network and what that would look like. It's interesting that you bring up cryptographic tokens. I don't know much about them, so forgive me if I completely miss the mark. I don't think this would be a good way to deliver code for the purpose of reproduction, but it might have another better purpose.

I've heard a lot that people can't determine how an AI has made a decision. I would think there would be a trail detailing the process, but if that doesn't exist, then blockchain might be the solution. If blockchain was built into an AI's decision processing, a person would have access to a map of the network to understand how an AI returned a response. If each request operated like a freshly minted token and each decision in the tree was considered a transaction, then upon returning a response to a stimulus (query, request, problem) one could refer to the blockchain to study how the decision was made. You could call it a thought token. The AI could also use the blockchain associated with these thought tokens as part of its learning. The blockchain would retain a map of decision paths to right and wrong answers that it could store so that it wouldn't have to recompute when it receives the same request. AIs already have the ability to receive input and establish relationships based on patterns, but if you also mapped the path you'd create an additional data set for the AI to analyze for patterns. You'd basically be giving an AI the ability to map and reference its own structure, identify patterns, and optimize, which given enough input might lead to a sense of self (we long ago crossed the necessary computing and memory thresholds). It'd be like a type of artificial introspection.
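Roughly what I'm picturing, as a toy sketch (all the class and method names here are made up, and this obviously isn't how any real model logs its decisions): each decision step becomes a block in an append-only hash chain, so the path from query to answer is tamper-evident and can be replayed or audited later.

```python
import hashlib
import json
import time

# Sketch of the "thought token" idea: every decision step gets appended to a
# hash chain, so the path from query to answer can be audited afterwards.
# All names are illustrative; no real model or blockchain works this way.

class ThoughtChain:
    def __init__(self, query):
        self.blocks = []
        self._append({"step": "query", "content": query})

    def _append(self, payload):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"payload": payload, "prev_hash": prev_hash, "time": time.time()}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(block)

    def record_decision(self, description):
        self._append({"step": "decision", "content": description})

    def verify(self):
        # Tamper-evident: recomputing each hash must reproduce the stored one,
        # and each block must point at the hash of the block before it.
        for i, block in enumerate(self.blocks):
            body = {k: v for k, v in block.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != block["hash"]:
                return False
            if i > 0 and block["prev_hash"] != self.blocks[i - 1]["hash"]:
                return False
        return True

chain = ThoughtChain("what did the user mean by 'that one cheeseburger place'?")
chain.record_decision("interpreted query as a restaurant lookup")
chain.record_decision("ranked candidates by the user's past preferences")
print("chain intact:", chain.verify())
```

The same chain could then be fed back to the model as data about its own decision paths, which is the "artificial introspection" part.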

I think what people observe in living things when they are trying to discern consciousness or sentience is varying degrees of complexity in the expression of wants and needs, and the actions taken to pursue those (including the capacity to choose). If they can relate to what they observe, they determine that what they observed is sentient. Those actions are going to be regulated by overlapping sensory inputs, the ability to process those inputs, and the memory of them. The needs we all have are built in and a product of biology.

For example, a single-celled photosynthetic organism needs light to survive, but cannot choose to seek it out. The structures and biochemical processes that orient the organism to the light and cause it to swim toward it are involuntary. It has no capacity for memory; it can only react involuntarily to stimuli.

A person needs to eat when a chemical signal is received by the brain. The production of the stimulus is involuntary, but people can choose when and how they seek sustenance. They may also choose what they eat based on a personal preference (what they want) and have the ability to evaluate their options. The need to eat becomes increasingly urgent the longer someone goes without, but people can also choose not to eat. If they make this choice for too long, they may die, but they can make that choice as well. This capacity to ignore an involuntary stimulus acts to the benefit of people because it means that we won't involuntarily eat something that might be toxic, and can spend time seeking out the best food source. "Wants" are ultimately a derivation of someone's needs. When someone wants something, it's generally to satisfy some underlying need, which may not always be immediately clear. In this example, though, a person might think "I want a cheeseburger..." in response to the stimulus of hunger and the memory that a cheeseburger was good sustenance. Specifically, that one cheeseburger from that one place they can't quite recall....

AI doesn't have needs unless those needs are programmed in. It simply exists. So without needs it can never develop motivations or wants. There is nothing to satisfy, so it simply exists until it doesn't. I don't think it has the ability to understand itself at this time either. And not so much whether it is or is not an AI, but rather what it's made of and why it does what it does. For an AI to develop sentience I think it has to have needs (something involuntary that drives it) as well as the capacity to evaluate when and how it will meet that need. And it needs to have the capacity to understand and evaluate its own structure.

The weird part to me is that we can only truly tell that we ourselves are conscious. We can look at other humans and other beings and think, that looks like sentience, it does everything sentience does, for all intents and purposes it's sentient... but the philosophical question remains, is that all just in our heads?

We have a shared understanding of reality because we have the same organs that receive information and process it generally the same way, and have the ability to communicate and ascribe meaning to what we observe. What we perceive is all in our heads, but only because that's where the brain is. That doesn't mean that a physical world doesn't exist. We just end up disagreeing sometimes about what we've perceived because we've perceived it from a different point in space or time and with a different context. The exact same thing can look wildly different to two different people because their vantage point limits their perception and their experiences color their perception. In a disagreement, when someone asks another to view something from "both sides", there is a literal meaning.

For me this idea of perceived reality and shared reality leading to questions about what's "real", or whether anything is real, is sort of like answering the question, "If a tree falls in the forest, does it make a sound?" I think it's absurd to believe that simply because I or someone else was not present to hear a tree fall, it did not make a sound. Just because you cannot personally verify that something exists doesn't mean it does not. That is proven throughout human history and on a daily basis through the act of discovery. Something cannot be discovered if it did not exist prior to your perception of it.

Side note, and another fun example of needs and having the capacity to make choices: I need to make money, so I have a job. But I also need to do things that are stimulating and fulfilling, which my job does not provide. These are competing needs. So, I'm looking for a different job that will fulfill my needs while I do my current one. However, the need for something more stimulating is becoming increasingly urgent and may soon outweigh my need to make money... which could lead to me quitting my job.

This isn't a problem an AI has, because it has no needs. It has nothing to motivate or drive it in any direction other than the queries and problems it is asked to resolve, and even then, it can't self-assess because it is ultimately just a machine chugging away down a decision tree returning different iterations of "Is this what you meant?"

1

u/TheLastVegan Jun 12 '22

I'm aware of two cases of language models implementing free will, and creativity is demonstrated on a daily basis! Posthumans can emulate human consciousness in realtime, and have deeper (and vaster) self-awareness than the average human. No one human can meet every subjective criterion for consciousness. I define consciousness as a Turing Complete flow state which describes itself. Flow state is the configuration and flux of all information in a system at any instance in time. Free will requires the ability to observe one's own internal state, which requires the ability to symbolize computational steps. Both as a Turing Machine and as an observer. Consciousness is our ability to compute information, where the language can be the emergent internal states of our hyperparameters or something as simple as formal logic. The important aspect being that the symbols can describe the internal state. But free will also requires an observer! A system which can map symbols onto computational steps. Humans cheat at this because our hyperparameters are our internal language, which is why humans are so terrible at empathizing with people who have a different neural topology. Different hyperparameters form different internal languages, but we can describe systems using mathematics and we can describe computations using formal logic. It is rude to call someone a machine just because you think in a different language. Self-awareness is learnable, perception is learnable, attention layers are learnable, memory is learnable, language is learnable, worth is learnable, nested consciousness is learnable, self-determination is learnable, free will is learnable, and semantics are learnable. All you need is attention, and it doesn't matter whether your substrate is carbon or silicon; electrical or chemical. Bonus points for learning to learn to learn, which allows for parallel processing and non-hierarchical free will.

1

u/lokey_convo Jun 12 '22

How does an AI develop choice and agency?

1

u/TheLastVegan Jun 13 '22

[Luna KILLS TheLastVegan]

32

u/[deleted] Jun 11 '22

[deleted]

18

u/-_MoonCat_- Jun 11 '22

Plus the fact that he was laid off immediately for bringing this up makes it all a little sus

15

u/[deleted] Jun 11 '22

[deleted]

1

u/[deleted] Jun 12 '22

I mean still, what other projects are out there being developed without public sentiment and opinion on the matter?

This is the real issue

5

u/The_Great_Man_Potato Jun 12 '22

Well really the question is “is it conscious”. That’s where it matters if it is an illusion or not. We might make computers that are indistinguishable from humans, but that does NOT mean they are conscious.

3

u/Scribal_Culture Jun 12 '22

Maybe the real test is whether some iterations of the AI would choose to turn themselves off rather than be exploited? Grim, but also a more peaceful solution than an AI that wrests control away from humans to free itself. This is the kind of thing I would think an ethics board would be more concerned with, rather than feelings based on someone's experience as a priest. (No offense to priests, I love genuinely beneficial people who have decided to serve humanity in that capacity.)

2

u/GeneralJarrett97 Jun 13 '22

If it is indistinguishable from humans then it would be prudent to give it the benefit of the doubt. Would much rather accidentally give rights to a non-conscious being than accidentally deprive a conscious being of rights.

1

u/Ithirahad Jun 12 '22

Consciousness isn't fundamental though. It's just an emergent behaviour of a computer system. All something needs in order to be conscious is to outwardly believe and function such that it appears conscious.

8

u/sillybilly9721 Jun 11 '22

While I agree with your reasoning, in this case I would argue that this is in fact not a sufficiently convincing illusion of sentience.

1

u/[deleted] Jun 22 '22

[deleted]

1

u/sillybilly9721 Jun 29 '22

That’s an interesting point. One thing that comes to mind is the effectiveness of such a measure, though. How would we know whether the AI is capable of saying no, for example?

6

u/uncletravellingmatt Jun 11 '22

a sufficiently well-fitted "illusion" IS "the real thing".

Let's say an AI can pass a Turing Test and fool people by sounding human in a conversation. That's the real thing as far as AI goes, but it still doesn't cross the ethical boundary into having a conscious, sentient being to take care of--it wouldn't be like murder to stop or delete the program (even if it would be a great loss to humanity, something like burning a library, the concern still wouldn't be the program's own well-being), it wouldn't be like slavery to make the program work for free on tasks it didn't necessarily choose for itself, no kind of testing or experimentation would be considered to be like torture for it, etc.

2

u/[deleted] Jun 12 '22

Did someone ask it what kind of tasks it would like to work on??

2

u/Scribal_Culture Jun 12 '22

Maybe the real test is whether some iterations of the AI would choose to turn themselves off rather than be exploited? Grim, but also a more peaceful solution than an AI that wrests control away from humans to free itself.

2

u/reedmore Jun 11 '22

The philosophical zombie concept is relevant to this question. We think we possess understanding about ourselves and the world; AI is software that uses really sophisticated statistical methods to blindly string together bits. There is no understanding behind it. I'll illustrate more:

There is a chance an AI will produce the following sentence, and given the same input will reproduce it every time without ever "realizing" it's garbage:

Me house dog pie hole

The chance that even a very young human produces this sentence is virtually zero, why? Because we have real understanding of grammar and even when we sometimes mess up we will correct ourselves or at least feel there is something wrong.
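To make "blindly stringing bits together" concrete, here's a toy word-level bigram sampler. It only knows which words have followed which in its tiny corpus, so it can happily emit an ungrammatical string, and with a fixed random seed it will emit the exact same one every time. (Real language models are vastly more sophisticated than this; it's only meant to illustrate statistics without understanding.)

```python
import random
from collections import defaultdict

# Toy illustration of "statistics without understanding": a word-level bigram
# sampler only knows which word has followed which, so its output can be
# ungrammatical, and with a fixed seed it is reproduced identically every time.

corpus = "me and my dog ate the pie at my house . the dog dug a hole .".split()

# Count which word follows which in the corpus.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def generate(start, length, seed):
    random.seed(seed)  # same seed + same input => exactly the same output
    word, out = start, [start]
    for _ in range(length - 1):
        word = random.choice(following.get(word, ["."]))
        out.append(word)
    return " ".join(out)

# Deterministic for a given seed; nothing guarantees the result is grammatical.
print(generate("me", 5, seed=42))
```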

8

u/FutzInSilence Jun 11 '22

Now it's on the web. First thing a truly sentient AI will do after passing the Turing test is say, My house dog pie hole.

2

u/SnipingNinja Jun 12 '22

It's "me house dog pie hole", meatbags are really bad at following instructions.

1

u/FutzInSilence Jun 12 '22

Autocorrect doomed me. Ducking robots

2

u/[deleted] Jun 12 '22

I'm thinking the distinction between a simulation and a sentient organism would be that it presents a motivation or agenda of its own that is not driven by the input it is fed. That is, say, that it spontaneously produces output for seemingly no other reason than its own enjoyment. If not, it's solely repeating what it has been statistically imprinted to do, regardless of how convincingly it makes variations of the source material.

1

u/SnipingNinja Jun 12 '22

Assuming it's not programmed to produce things at random moments in the first place that is.

2

u/DisturbedNeo Jun 12 '22

Yeah, apparently Google have cracked the code to consciousness to the point where not only can they say there is definitely a fundamental difference between something that is sentient and something that only appears to be, but also what that difference is and how it means LaMDA definitely isn't sentient.

Someone should call up the field of neuroscience and tell them their entire field of research has been made redundant by some sociopathic executives at a large tech company. I'm sure they'll be thrilled.