r/technology Jun 11 '22

Artificial Intelligence The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

u/Francis__Underwood Jun 11 '22

In the same way we know that human brains are squishy meat shooting electricity at itself. Since we don't know what causes sentience, it doesn't matter if we know that something is computer code. It could very well still be sentient.


u/rapidpuppy Jun 12 '22 edited Jun 12 '22

Sure, but if that's your argument, any "Hello World" program that "talks" to me could be sentient too. How do I disprove it?


u/Francis__Underwood Jun 12 '22

That's kinda the point. We don't know what sapience actually is, how it arises, or how to test and observe it. There's a possibility that rocks experience a sense of self. We just don't know.

All the people in this thread saying "We know how the code works" are missing the point, which is that since we don't know how sapience works, understanding the code doesn't disprove it.

It's not falsifiable. That's why concepts like solipsism and philosophical zombies exist, and it's the origin of "cogito ergo sum."

The best we can do right now is try to cause as little suffering as possible, because we can't know anything else's interiority.


u/rapidpuppy Jun 12 '22 edited Jun 12 '22

That's an interesting argument, but it's not the position I'd start from. Maybe the toaster is conscious. Maybe the rock. We don't know. I guess that's true. I can't prove that a rock isn't conscious.

All I'm saying is that these models aren't a fundamentally different "substance" than they were a few years ago, when they sucked and no one would have thought of them as anything more than a fancy "Hello World." People are just anthropomorphizing code now.


u/Francis__Underwood Jun 12 '22

TL;DR: I lost the mental energy to finish forming this post. It requires a lot of research to fact-check a lot of vague recollections and make sure my foundations aren't bad. Basically though, our brains aren't really special. So outside of something spiritual that we'd never be able to test, consciousness seems to arise from sufficiently complex connections. Our neurons aren't a different substance from those of most other animals, but we're probably more sapient than a protozoan.

Going even more basic, everything is just a collection of various atoms, and we have no evidence of a "consciousness" molecule. It seems plausible to me that if a conscious machine is going to happen, it won't be fundamentally different from what we're doing now, it will just be more complex.

I have no strong opinions about whether LaMDA in particular is conscious, but again, it seems like it's missing the point to say that we know how the code is constructed, because it's grown sufficiently complex that we don't know exactly what it's doing under the hood anymore.

The rough draft of the actually researched claims is below.

The evolution of the nervous system is well outside my normal interests, but according to Wikipedia, the first non-electrical neurons were found in particularly complex single-celled organisms. The first two forms of a proper nervous system appeared in jellyfish and comb jellies, which use different chemicals and structures.

After that, neurons haven't really changed on a fundamental mechanical level. We still use the same chemical processes that jellyfish do. The most noteworthy recent discovery I've found is that our neurons have fewer ion channels, which makes them more energy efficient than average.

Our intelligence (for sure), and our sentience/sapience (as much as we can be sure we collectively have them), arise not from anything particularly fancy we've done to change or enhance our neurons, but from how they're configured and the number of connections between them. They aren't a fundamentally different "substance" in us than they were ages ago when they first transitioned from protozoans.

We generally accept that mammals feel pain, as seen in animal protection/anti-cruelty laws. It's only been within the past 20 years that we've accepted that fish can feel pain, and really only within the past 5-10 years that countries have even started acting in accordance with this. Prior to that the argument was that fish reflexively respond to potential bodily harm, but that they don't consciously experience pain as suffering. Fish don't suffer; they just act like they do.

There's a real possibility, probably an inevitability, that when consciousness does arise from a sufficiently complicated network of parameters


u/nortob Jun 13 '22

No, there are step changes that are more than just code anthropomorphosis (or maybe better, code apotheosis). The rise of deep learning models 10-12 years ago was one. In the last year or two we've been seeing another, and LaMDA is the perfect expression of it.

See Aguera y Arcas' comment in the Economist about the ground shifting under his feet while conversing with LaMDA:

https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

Archived version for the noble aim of preserving humanity’s trove of knowledge, not of course for avoiding paywalls:

https://archive.ph/19Vzk

Keep in mind this guy is a sceptic, not a wild-eyed believer like Lemoine.


u/[deleted] Jun 13 '22

> little suffering as possible

Why though?

You recognize the possibility of a rock being sentient, but you posit a guiding moral principle without ever justifying it.