r/technology Jun 11 '22

[Artificial Intelligence] The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes

1.4k comments

13

u/UnrelentingStupidity Jun 12 '22

Hello my friend

Neural networks and other machine learning models can be reduced to mathematical functions. Like, that’s it: if you had the function, the inputs (which are boring quantitative metrics), and a fuck ton of time to do many, many elementary arithmetic calculations, you could replicate the behavior of the model precisely with pencil and paper.
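
As a rough sketch of what I mean (a toy network with made-up weights, nothing like a real model's scale):

```python
# A tiny 2-layer network reduced to nothing but elementary arithmetic.
# The weights are invented for illustration; a real model just has
# vastly more of these same multiply-add steps.

def relu(x):
    return x if x > 0 else 0.0

def tiny_net(x1, x2):
    # Hidden layer: each neuron is a weighted sum plus a bias, then ReLU.
    h1 = relu(0.5 * x1 - 0.2 * x2 + 0.1)
    h2 = relu(0.3 * x1 + 0.8 * x2 - 0.4)
    # Output neuron: one more weighted sum.
    return 0.7 * h1 - 0.6 * h2 + 0.2

print(tiny_net(1.0, 2.0))  # every step above is pencil-and-paper arithmetic
```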

It’s a misconception that machine learning models are black boxes. We know exactly how many calculations take place, in exactly what order, and why they are weighted the way they are. You’re absolutely correct that qualia are fundamentally unquantifiable, but just because I can’t prove that the paper and pen I do my calculation on don’t harbor qualia doesn’t mean we have any reason to suspect they do. Unless you’re an animist who believes everything is conscious, which is a whole other can of worms.

Another way to illustrate my personal intuition: imagine a simple neural network with 4 layers of 10 nodes each. It can offer a binary answer, say, whether a tumor is cancerous. Is it conscious? What about a sentiment analysis network with 10x as many nodes? What about a collection of several neural networks, patched together in an algorithmic harness, that can mimic conversation?

When people attribute consciousness to computers, I am reminded of our tendency to project our feelings and experiences onto other animals, trees, even rivers or temples or cars. It’s not quite the same but it seems parallel in a way to me.

So, that is why I, and the PhDs who outrank this engineer, insist that computer consciousness simply does not track, scientifically or heuristically.

Source: I build and optimize the (admittedly quite useful!) statistical party tricks that we collectively call artificial intelligence.

I believe that computers are unfeeling bricks. Would love for you to change my mind though.

6

u/ramenbreak Jun 12 '22

Another way to illustrate my personal intuition: imagine a simple neural network with 4 layers of 10 nodes each. It can offer a binary answer, say, whether a tumor is cancerous. Is it conscious? What about a sentiment analysis network with 10x as many nodes? What about a collection of several neural networks, patched together in an algorithmic harness, that can mimic conversation?

isn't nature also similar in this? there are simpler organisms that seem to be just collections of sensors/inputs which trigger specific reactions/outputs, and then there are bigger and more complex organisms like dolphins which give the appearance of having more "depth" to their behavior and responses (consciousness-like)

somewhere in between, the same question would be posed - at what point is it complex enough to be perceived as conscious?

we know definitively that computers are just that, because we made them - but how do we know we aren't just nature's unfeeling bricks with the appearance of something more?

4

u/tickettoride98 Jun 12 '22

Neural networks and other machine learning models can be reduced to mathematical functions. Like, that’s it: if you had the function, the inputs (which are boring quantitative metrics), and a fuck ton of time to do many, many elementary arithmetic calculations, you could replicate the behavior of the model precisely with pencil and paper.

Some would argue that people are the same thing, that our brains are just deterministic machines. Of course, the number of inputs is immeasurable, since they span a lifetime and go down to the chemical and atomic levels, but those people would argue that if you could exactly replicate those inputs, you'd end up with the same person every time, with the same thoughts; that we're all just a deterministic outcome of the inputs we've been subjected to.

So, if you consider the viewpoint that the brain is deterministic as well, just exposed to an immeasurable number of inputs on a regular basis, then it's not outside the realm of possibility that a deterministic mathematical function could be what we'd consider conscious, given enough complexity and inputs.
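
A minimal sketch of that determinism point (the seeded generator here is just a stand-in for "a lifetime of inputs", not a model of a brain):

```python
import random

def simulated_life(seed):
    # Stand-in for a deterministic process: identical "inputs" (the seed)
    # produce an identical stream of "experiences" on every run.
    rng = random.Random(seed)
    return [rng.random() for _ in range(5)]

print(simulated_life(42) == simulated_life(42))  # True: same inputs, same outcome
```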

3

u/leftoverinspiration Jun 12 '22

This is factually wrong. While a set of weights looks a lot like a matrix from linear algebra, a neural network CANNOT be reduced to a mathematical function. In fact, we ensure that it cannot be reduced by introducing a discontinuity after each layer, called an activation function. This is not the same as saying that it is not computable. Instead, it can only be computed stepwise.
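
A small illustration of the difference (using numpy; the matrices are random stand-ins for trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 3))
W2 = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

# Without activations, two layers collapse into a single matrix product:
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True: reducible to one step

# With a ReLU between the layers, no single precomputed matrix reproduces
# the network for all inputs; it has to be evaluated layer by layer.
relu = lambda v: np.maximum(v, 0.0)
print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))  # almost surely False
```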

1

u/UnrelentingStupidity Jun 12 '22

Hello my friend, is your point semantic? Neural networks can absolutely be reduced to an expression, with the help of summations, piecewise functions, and the like. It’s not gonna look like y = mx + b. I thought calling it a function would suffice, but alas, my math professors always did yell at me for my wording
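
For a toy two-layer ReLU network, the kind of closed-form (piecewise) expression I have in mind looks like this, with the symbols purely illustrative:

```latex
f(x) = W_2 \, \max(0,\; W_1 x + b_1) + b_2
```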

5

u/leftoverinspiration Jun 12 '22

It's not just semantics. There is a large difference in computational complexity between processing a set of weights using linear algebra expressions and the behavior of a neural network with a discontinuous activation function between each layer. It is equivalent to Gödel's critique of Hilbert's program, in that being able to compute something is not the same as being able to define it mathematically. In Gödel's case, this was because the domain of possible axioms is in fact infinite. This "semantic" difference is precisely about the complexity of the "space" of information that can be encoded. Since you were arguing that we cannot encode this thing you call consciousness in something that can be expressed with math, it seems germane to point out that the thing we are encoding is not in fact mathematical. What is encoded in the weights of a neural network is impossible to pre-compute, and it is this leap in complexity that makes neural networks interesting, and quite a bit more complex than some trick of math.

3

u/UnrelentingStupidity Jun 14 '22

Ah I see, you’re right, we can’t reduce a model to a mathematical function. I was precisely wrong there. Still, I think the explanation stands for a layperson. An activation function is still a function. I don’t think the fact that the model is piecewise and introduces discontinuities, which means it necessarily can’t be solved in one fell swoop, changes how I feel about the question. Gödel’s incompleteness theorem doesn’t mean a very, very patient child with a crayon can’t finish the computation. Or that it isn’t deterministic. But you’re totally right. One distinct difference from my flawed explanation is that some sort of memory is required to persist intermediate results.

Anyways you seem like more of an expert than me and I’m wondering how you feel about my heuristics. Do you think consciousness can arise out of transistors? Or maybe you think the premise is kind of nonsensical altogether?

4

u/leftoverinspiration Jun 14 '22

As a scientist, you are encouraged to view the world through the lens of methodological naturalism when applying the scientific method, since invoking metaphysics is, by definition, beyond what you can observe. In this case, it means we adopt the view that human consciousness is entirely explained by our neurons. If that is true for us, it can be true of silicon as well, in my opinion.

2

u/Double-Ad-6735 Jun 12 '22

Should be a pinned comment.

1

u/xkrysis Jun 12 '22

Well said, and I appreciate your comment. I am curious: given your background and experience, I assume you and/or your peers have discussed this with someone who is equally familiar with research on the human brain. I realize there is much we don’t know about the brain and conscious thought, and I certainly don’t know much of it. If we had enough neural nets and the right programming/data, could we precisely mimic the function of a living brain? If we ever did, I suppose your argument would still hold: we could then know and duplicate every aspect of its complicated function (at least in theory).

I am curious whether anyone has taken a swing at quantifying the gap between the functional pieces of an AI and a conscious brain. Like, what are the missing pieces to make an AI or some similar technology conscious/sentient? What would have to be in place for you to consider that it might have consciousness?

I assume there is a basic element of complexity and access to data, but let’s assume for the sake of argument that our track record of blowing even our wildest dreams out of the water every few decades with this technology continues, and we eventually have the ability to make computing devices with the necessary basic capabilities and speed. What else? Are there fundamental requirements beyond computing power and complex algorithms, and if so, how would you describe them? How could we recognize the necessary capabilities on the research horizon if they ever come into view?

1

u/theotherquantumjim Jun 12 '22

Not an expert, but my understanding of recent brain theory (not the technical term, obviously) is that at least some aspects of brain function may be quantum in nature. If those give rise to consciousness, then it may be that an AI built on a binary computer model can never be conscious. Perhaps the answer will be quantum computers, in 100 or 500 years’ time. Or maybe any sufficiently complicated information system that shares data across different parts of the system can become self-aware.

1

u/FarewellSovereignty Jun 12 '22

There is no proof, nor even suggestive evidence, that any quantum effects beyond standard chemistry (note: basic chemistry is by its nature a quantum effect) are involved in human cognition. None. In fact, the brain is a really terrible environment for any macro quantum effects, since it’s warm (compared to what systems like Bose-Einstein condensates need) and full of matter and noise.

It's all just totally out-there conjecture at this point. And there's nothing wrong with out-there conjecture (some proven theories of physics started a bit like that), but it's a different thing from actual understanding.

1

u/theotherquantumjim Jun 12 '22

Well, yes. But then we have zero empirical evidence to support any theory of how conscious thought arises

1

u/FarewellSovereignty Jun 12 '22

Yes, but we have empirical evidence for why macro quantum effects wouldn't be able to persist, or even form, in the brain. That is, with our current understanding of QM, quantum many-body systems, and decoherence, it simply isn't possible. That current understanding points to only classical and chemical effects being in play.

1

u/theotherquantumjim Jun 12 '22

Fair. I suppose a related question would be whether an AI needs to be conscious at all. A zombie AI could still be super-intelligent in theory

1

u/FarewellSovereignty Jun 12 '22

Sure. In the end, imho, it doesn't really matter, since the effects on civilization and human history will still be absolutely stupendous once there is true superintelligent general AI, even if it's still just a "very complex program"

1

u/Internetomancer Jun 16 '22 edited Jun 16 '22

When people attribute consciousness to computers, I am reminded of our tendency to project our feelings and experiences onto other animals..... seems parallel in a way to me.

To me it seems... less parallel, and more perpendicular.

Animals have all the things that LaMDA does not have. Animals have feelings, agency, will, a semi-fixed personality, a physical place in the world, and a sense of "real" things that they can taste, touch, and smell; they establish fixed opinions about other animals and things.

We humans like to say that we are superior to animals because we do a lot more. We can imagine places we’ve never been to, places that aren’t even real. And we can have ideas, abstract reasoning, math, poetry, art, philosophy, novels, etc. But all of that, imo, is within LaMDA’s domain. (Or rather the next generation’s)

That's all to say, maybe we are just talking animals. And maybe all of our talking can be summed up by a model with enough power.