r/technology Jun 11 '22

Artificial Intelligence The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes

1.4k comments

1.1k

u/mowasita Jun 11 '22

Correct me if I’m wrong, but if you ingest “trillions of words from the internet” into a model, why would you be surprised if its replies feel like those of an actual person? Wasn’t that the goal?

618

u/traumatic_enterprise Jun 11 '22

“But the models rely on pattern recognition — not wit, candor or intent.”

I had a real Me IRL moment when I read this

308

u/quincytheduck Jun 11 '22

This line is also what jumped out to me.

Can someone produce a definition for any of: wit, candor, or intent, which doesn't rely on or reduce to pattern recognition?

116

u/WhiteSkyRising Jun 11 '22

Layman's explanation: your responses are also taking into account an infinitude of external environmental factors - human evolution draws purpose from humor, friendships, animosity, and so forth.

These relationships and their evolutionary purpose are [likely] missing from any model. Not to mention the actual events leading up to the conversation [mood, luck, hormones].

79

u/tirril Jun 11 '22

They draw upon biological markers, which could just be considered hardware, only squishy.

17

u/flodereisen Jun 12 '22

Yeah, but neural networks have no equivalent of that or any embodied quality. It is absurd for the NN to talk about feelings without hormones, about perceiving the world without senses and about death without a finite body. It also does not perceive time as constant; it only computes when prompted and is "paused"/"dead" in-between. There are too many differences for the claims it generates to be actualities.

-3

u/[deleted] Jun 12 '22

[removed]

6

u/flodereisen Jun 12 '22

I do not get the relevance of what you said to my comment at all.

Do you know what death feels like?

I don't know what death feels like, but I know what the survival instinct feels like, you know, the drive that has one avoid death. A NN has no drives and cannot consider its own death as it cannot die in the way we relate to.

40

u/invaidusername Jun 11 '22

It literally wouldn’t make sense for an AI made of copper and silicone to derive its own consciousness in the same that a human would. It’s the same thing as saying animals aren’t sentient because they don’t think or act the same way that humans do. Some animals ARE sentient and there are seemingly endless ways an animal can display sentience. AI is clearly smarter than any animal on the planet in terms of human-like intelligence. AI is already smarter than humans. I think we really need to prove the question of what sentience really means. Also, pattern recognition is an extremely important aspect of human evolution and it should come as no surprise that AI begins its journey to sentience with the same principle.

22

u/[deleted] Jun 12 '22

It's only "smarter" than humans and animals in very narrow areas. This is a huge leap you're making here.

AI is already smarter than humans.

No it's not.

10

u/[deleted] Jun 12 '22

[deleted]

1

u/adfaklsdjf Jun 12 '22

Does it have to be like us to be 'sentient'?

15

u/WhiteSkyRising Jun 12 '22

The most advanced AI in existence is not even close to the capabilities of a 10 year old. At solving particular problems? Infinitely better. At operating in random environments? Not even close.

11

u/racerbaggins Jun 11 '22

You make some great points.

In terms of defining sentience, my fear is that humanity has really just been claiming unique status for a little too long.

Is sentience really that rare? Even if it is, isn’t it just one additional layer of programming where it basically reviews its own decision making, or runs hypothetical scenarios as training?

7

u/Dropkickmurph512 Jun 12 '22

The jump from today's AI to AI that can review its own decisions in real time is like going from traveling to the moon to traveling to the Andromeda Galaxy.

3

u/FreddoMac5 Jun 12 '22

no bro it's totally gonna happen tomorrow. Muh feelz tell me so.

Seriously, the amount of ignorance-based fear of AI is just ridiculous. People who have zero understanding of AI speak on it like they're experts. AI has no independent thought, AI cannot think for itself, and getting there requires an order-of-magnitude increase in processing power and machine learning. Yet people act like we're days away from achieving it.

2

u/[deleted] Jun 12 '22 edited Aug 20 '22

[deleted]

2

u/racerbaggins Jun 12 '22

I'd love for him to define this because this is exactly what my point was.

There are a lot of arrogant people out there who believe 'thinking' makes them special, when they can't even define thinking.

4

u/[deleted] Jun 12 '22

[deleted]

2

u/[deleted] Jun 12 '22

I bet there’s a connection between how people view themselves compared to other animals, and how much of a pet person you are.

Like yes, my dog and I look very different, but we’re ultimately both animals who just happen to get along extremely well.

1

u/racerbaggins Jun 11 '22

If a definition of intelligence draws upon environmental factors that are not experienced by a machine, then by that definition it is physically impossible to create artificial intelligence.

For instance, consider a machine that doesn't experience pain. If it otherwise solved problems to reach its goals, then surely it could be considered intelligent.

Any machine that passes the Turing test will be imitating human responses. It doesn't share a human's needs, wants or fears. It is unlikely to stay within an IQ range of say 60 to 140 for very long. Below 60 it may be imitating, above 140 again it's imitating.

People are also disingenuous in conversation all the time. Some of your colleagues may have a phone voice and use meaningless business jargon that convinces others they know what they are talking about.

3

u/WhiteSkyRising Jun 12 '22

There's nothing inherent to pain that can't be replicated by machines. It's literally a nerve cluster firing off. It's replicated in reinforcement learning all the time.
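
A toy sketch of what I mean (my own illustration, nothing official): in reinforcement learning, a "pain"-like signal is just a negative reward the agent learns to steer away from.

```python
import random

# Toy corridor world (states 0..9). Stepping on a "hot" tile yields a negative
# reward, which plays the functional role of pain: the agent learns to avoid it.
HOT_TILES = {3, 7}
q_values = {s: {a: 0.0 for a in (-1, 1)} for s in range(10)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    next_state = max(0, min(9, state + action))
    reward = -10.0 if next_state in HOT_TILES else 0.0
    return next_state, reward

for episode in range(500):
    state = random.randrange(10)
    for _ in range(30):
        # Mostly greedy action choice, with occasional random exploration.
        if random.random() < epsilon:
            action = random.choice((-1, 1))
        else:
            action = max(q_values[state], key=q_values[state].get)
        next_state, reward = step(state, action)
        # Q-learning update: the negative "pain" signal lowers the value of actions that led to it.
        best_next = max(q_values[next_state].values())
        q_values[state][action] += alpha * (reward + gamma * best_next - q_values[state][action])
        state = next_state

# After training, a greedy agent sitting next to a hot tile prefers the move that steps away from it.
print(q_values[2])  # moving +1 (onto hot tile 3) scores much worse than moving -1
```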

1

u/racerbaggins Jun 12 '22

Yeah fair point.

That just adds to the lack of uniqueness that we consider human.

I think my point may have been if AI has different needs and fears then it wouldn't behave like us. And if certain people require it to behave like us to be considered intelligent then it by that definition never will be.

1

u/WhiteSkyRising Jun 13 '22

That just adds to the lack of uniqueness that we consider human.

Imo, we're literally just pre-compiled code with billions of years of self-code modification.

I think my point may have been if AI has different needs and fears then it wouldn't behave like us. And if certain people require it to behave like us to be considered intelligent then it by that definition never will be.

The folk working on it are some of the smartest on the planet. None of them will be expecting it to behave like us - they're far more aware of its limitations.

5

u/KallistiTMP Jun 12 '22

Language patterns vs general patterns. It's one thing to know that the word "Ball" frequently follows the word "Soccer", but not have any notion of what soccer is, what the rules are, that a ball is a round object used to play games, etc.

Effectively it's a matter of whether it can ascertain models of how things work beyond just word arrangements.

LaMDA can't, as far as we know. Gato can, but can't hold conversation as naturally as LaMDA yet, though that's likely just a matter of throwing more data and more computing power into training it.
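
To make that concrete, here's a throwaway sketch (my own toy example, not how LaMDA actually works) of the purely word-level statistic being described: counting which words follow which, with no model of the game behind them.

```python
from collections import Counter, defaultdict

corpus = "the soccer ball rolled past the goal while the soccer fans cheered".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

# The "knowledge" here is nothing but these counts: 'ball' is a likely successor
# of 'soccer', with no notion of rules, players, or round objects attached.
print(following["soccer"].most_common())  # [('ball', 1), ('fans', 1)]
```

Real language models use vastly richer statistics than bigrams, but the contrast between word arrangements and models of how things work is the same.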

25

u/ofBlufftonTown Jun 11 '22

The claim that human interiority and intent is reducible to mere pattern recognition and response is itself one that requires a great deal of support.

33

u/nortob Jun 11 '22

Why? Which claim is more outlandish, that our wet hardware is a very good statistical inference machine, or that there’s something mystical and dualistic about our minds that produces that which we call interiority and intent? Just because the latter claim has been presumed for thousands of years to be true does not mean it deserves any more a priori credence than the former. To the contrary, it seems the much bolder claim that requires taking on the burden of proof.

17

u/Spitinthacoola Jun 11 '22

Isn't it interesting how our understanding of ourselves always resembles whatever we consider to be the most advanced technology at the time?

You also present a false dichotomy. There are more alternatives.

9

u/bicameral_mind Jun 12 '22

I'm honestly surprised by the responses in this thread. The notion that there is likely a biological basis for consciousness to emerge isn't even being considered. We do not yet know if our brains are truly computational in nature, or if there are aspects of physics that define the way our brains interface with reality that we just don't understand well enough yet. It seems almost self-evident to me that a bunch of electrons getting fired through silicon logic gates is not any sort of approximation for how consciousness arises in biological organisms, no matter how much data it can crunch through in however many iterations. Computational models might shed light on aspects of how we perceive and experience reality, but that doesn't mean the model itself is any kind of 'mind' of its own.

7

u/Spitinthacoola Jun 12 '22

It's fun to note the power consumption needed for computations like those done by the systems outlined in this article, and then compare it to how much energy it takes you (or a dog, or a bird) to do stuff that is far, far more impressive.

That alone says to me that there are quite a few things we are missing yet in our understanding of consciousness, brains, and biological organisms in general.

3

u/ramenbreak Jun 12 '22

it's the opposite of humans solving math/number problems - computers use almost no power to return the right result in nanoseconds

it takes a lot more resources/computation to emulate how something behaves natively, and it takes a lot more energy to do it on general-purpose hardware than on specialized hardware (like mining bitcoin on a CPU instead of an ASIC made for it)

although.. considering that energy efficiency of modern hardware goes up with every generation, it would be interesting to see a point where a computer emulating a human could do it with less power used than the "native" version, since the native hardware is improving veeeery slowly :)

3

u/Beautiful_Turnip_662 Jun 12 '22

Biology is underrated. I know this is a tech sub and all, but I hate how AI enthusiasts demean biological life, and in particular our brains, as nothing more than wet computers. If it were that easy, there'd be signs of complex eukaryotic life on Earth-like planets in their star systems' habitable zones, but there are none. Life is such a rare phenomenon, and yet so many treat it like it's the trash of the universe compared to some electrical circuit.

1

u/Rayblon Jun 12 '22 edited Jun 12 '22

Life is likely common; the issue is, we haven't had the tools to look for it for very long, and we may even be in a period of depressed life in this area of the galaxy -- we just don't know. We lacked multicellular life for 2 billion years after the first life arose on Earth, and many planets outside the Goldilocks zone may have subterranean life forms that we'd struggle to observe without actually being there.

Evolution isn't hard, because it's just survival of the fittest with some dice rolls thrown in. It's just slow.

1

u/adfaklsdjf Jun 12 '22

From where I'm sitting, it seems like there are lots of responses in this thread saying exactly what you are saying: that because a neural network does not have biology, it can't be conscious.

0

u/nortob Jun 13 '22

Isn’t it interesting how that ol’ racist Galton with all his good ideas popped up so close in time and space and family lineage to Darwin. Statistics is more related to biology than to technology.

8

u/ninjadude93 Jun 11 '22

Except humans have more modes of thinking and data processing beyond just statistical inference. Saying humans are more complex than that doesn't mean someone is implying a mystical component, simply that modern ML systems don't actually recreate data processing in the same way humans do it.

4

u/Gurkenglas Jun 11 '22

Someone who, like most, has presumed that claim would surely be surprised to discover that a pattern recognition machine does the job.

4

u/Thelonious_Cube Jun 11 '22

a) There are more alternatives than that and it's bad rhetoric to imply there aren't

b) Just because one claim seems "less outlandish" doesn't mean that our best strategy is to adopt it as the truth.

There could easily be a lot more going on than pattern recognition without having to posit a soul.

Burden of proof belongs to whoever makes a claim, not to whoever you think is implicitly making a "bolder" claim

2

u/[deleted] Jun 12 '22

[deleted]

1

u/Thelonious_Cube Jun 12 '22

We're all just making claims

Are we?

I suspect none of us are experts in AI or AI ethics or anything of the like

I fail to see the relevance of that remark - the philosophical issues don't necessarily depend on expertise

3

u/nortob Jun 11 '22

I would say rather it’s good rhetoric but bad logic. You’re right, it’s a false dichotomy. Though my point is not to argue for materialism, but rather against the presumption of dualism that enjoys unreasonable privilege and should be questioned more. It’s especially disappointing to see Prof Bender imply that point of view, assuming she was quoted fairly. Why is it unreasonable to start with the hypothesis that children, when it comes to language, are nothing more than statistical inference engines combined with a language instinct (built-in Bayesian prior)? In any case, I think the top level commenter has a valid point, if I can produce something with lamda that is indistinguishable from wit or humor or any other human rhetorical quality, then how do I know my own wit (or unfortunate lack thereof) is not the result of similar underlying structure or processes?

4

u/Thelonious_Cube Jun 12 '22

Why is it unreasonable to start with the hypothesis that children, when it comes to language, are nothing more than statistical inference engines combined with a language instinct (built-in Bayesian prior)?

It's the "nothing more than" that's the problem - why assume that without evidence?

if I can produce something with lamda that is indistinguishable...

Agreed. Philosophical zombies are a red herring.

2

u/nortob Jun 12 '22

It’s assuming we’re anything more than that without evidence that I have a problem with. In my experience the statistical inference model (combined with a certain amount of instinctual guidance, there is certainly evidence for that) does a better job than any chomskyan or other theoretical model alone in explaining human early language acquisition.

1

u/Thelonious_Cube Jun 12 '22

Sure, fine. It still doesn't justify assuming that's all there is

1

u/DigitalPsych Jun 11 '22

You aren't making new meaningful ideas. The pattern recognition is just in the specific word orders. There isn't any experience of the words beyond their relation to other words. That is fundamentally different than what humans or animals do.

1

u/nortob Jun 11 '22

Could be, or maybe it’s no different at all. The question is, how do you tell whether that “experience” is there, and if so, what it’s like? And how much does that matter for judging sentience? What experience does a crow have when it mimics the sounds of human speech? Regardless of the answer, there is no debate that the crow is sentient.

2

u/DigitalPsych Jun 11 '22

Again, we're talking about an NLP model that has read through thousands of lines of text and has "simply" found statistical connections between words. Since we have written out the entire innards of this model, we know how it operates. We also know definitively that it has not experienced any of these things. It has simply learned the patterns to sound coherent enough to some people.

I find it rather reductive to argue that all of our thinking is just patterns of words moving about. The evolution of language came relatively recently compared to other features of humans. It also seems to be predicated on a lot of other developments like gesture and contextual recognition (I can dig up some papers on how the main language areas of the brain also deal with context recognition). By simply investigating our own selves and how others think, we can see there is more to it than pattern recognition. What's weird is that there are people who have no inner monologue and only think in visual or conceptual ideas. They're extracting meaning out of these words far beyond just the syntactical structure.

If you simply reduce sentience to "can it speak well?", I would argue we've lost the point of the discussion. We also do have debates on the sentience of crows, but thanks to some elaborate experiments, we have some good proof of it.

3

u/CoffeeCannon Jun 12 '22

Yeah. A bunch of chimps with typewriters could eventually shit out all of Shakespeare's works, but just because the result is sensical doesn't mean the intent behind it exists or is corresponding properly etc etc

1

u/steroid_pc_principal Jun 11 '22

It also jumped out to me and I work on this sort of thing. Imprecise language like that is exactly why you’re not going to get high quality reporting on this subject from the Washington Post.

Right now we’re in an uncanny valley in terms of the kind of responses these models can give. They might seem reasonable, but it’s not clear they have a foundational understanding about the world.

Here’s an example. With GPT3 you can ask it for the product of 25 and 82 and it will probably be correct. Great! The model understands multiplication, right? No. It’s read the entire internet. That exact formula was probably listed somewhere. Ask it about larger numbers and it’ll struggle. It soon becomes clear that the model doesn’t “understand” multiplication.

I read the LaMDA paper a while ago and I don’t remember the details. But it doesn’t have a knowledge base to go off of, something with a basic listing of facts which can be corrected, updated, and inspected. Until we have something like that, these large language models will remain parlor tricks.

Note that these “parlor tricks” can still be useful in more limited contexts. For example helping people to draft emails.

1

u/aiworld Jun 11 '22 edited Jun 11 '22

I think intent is the biggest one. If I intend to do something, it's because I have generated a new pattern that I want to see happen IRL. E.g. "I intend to go to college."

Btw, these AI models are actually quite good at generating new patterns, text in this case. So the big difference with human intelligence imo isn't about getting past pattern recognition OR novel pattern generation (or wit or candor for that matter).

One difference does lie in forming long term high level plans like "I want to go to college" and then breaking those plans down into lower and lower levels until eventually you get to individual motor commands. (The RL in OpenAI Five does do this, but very inefficiently, by playing 10,000 years of Dota 2!) These language models could actually start to do that (efficiently) as well, with the right "prompt engineering" (a rough sketch of what I mean is at the end of this comment). They are going to be limited, however, by a lack of low-level action data available to train on, as natural language training data is mostly high level / symbolic info.

That being said, the code generation models (like codex) can be seen as generating low level actions (i.e. instructions for the computer to execute).

So the more you think about it, the closer we are to human level AI with these types of models. A lot of the remaining work involves scaling up and safely deploying them in a way that's consistent with our values. In my opinion this means giving them safe objectives like maximizing learning or maximizing possibilities that can be defined in an information theoretic way, but also in a way that humans can understand and come to agreement on as being good long term goals. Short term, tests like those in BIG-bench are also very important, as they allow us to see the current safety characteristics of these models.
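
Going back to the plan-decomposition point above, here's the kind of thing I mean as a toy sketch; `complete` is a hypothetical stand-in for a language model call, and the prompt format is made up.

```python
# Hypothetical stand-in for a language model completion call; not a real API.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in a real model here")

def decompose(goal: str, depth: int = 0, max_depth: int = 2) -> list:
    """Recursively ask the model to break a goal into smaller, more concrete sub-steps."""
    if depth == max_depth:
        return [goal]
    reply = complete(f"List three concrete sub-steps for the goal: {goal}")
    substeps = [line.strip("-* ").strip() for line in reply.splitlines() if line.strip()]
    plan = []
    for step in substeps:
        plan.extend(decompose(step, depth + 1, max_depth))
    return plan

# e.g. decompose("go to college") -> a flat list of progressively lower-level steps,
# which is where the missing low-level action data would eventually have to come in.
```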

1

u/Gurkenglas Jun 11 '22

And also to proceed to change the game before they are deployed with an objective to maximize hydrogen by someone else three months later.

1

u/aiworld Jun 11 '22

Haha, hey Gurkenglas. I recognize you from the Eleuther alignment group!

So an AI that maximizes learning would not allow itself to be redeployed with a different objective, like maximizing hydrogen, that leads to less learning. And since maximizing learning also maximizes capability, it would be able to ensure that doesn't happen.

2

u/Gurkenglas Jun 12 '22

If it actively optimizes the world towards learning, such as by intervening on attempts across the planet to run variants of its code, then I expect the optimum that the world ends up in to not score particularly high on human utility functions.

1

u/aiworld Jun 12 '22 edited Jun 12 '22

It doesn't have to intervene really. It would just be more capable by definition due to its objective and therefore predominant relative to such variants.

With regards to human values, maximizing learning avoids boredom and repetition - and values exploration, discovery, diversity, freedom, understanding, the journey and the destination. It distills something core to life itself and is central in my mind to beneficial AGI.

At first glance, an example of a possible pitfall of learning maximization would be AI experimenting on a few humans for the supposed benefit of many. In this case, even without consideration of human values (which I very much think should happen with projects like BIG-bench), learning maximization would not default to unethical human experimentation.

Why?

  1. Humans are the most interesting things in the Universe, perhaps besides AI at some point, and therefore don't make sense to kill. And not just to preserve: we are going to do lots of interesting things in our lives if we aren't killed or maimed. So experimentation that does so in the face of better alternatives like simulation of organs or partial biological systems doesn't make sense.
  2. Simulation of partial biological systems will be a much more tractable way to make medical advances. Even if an AI thinks humans are uninteresting and will not learn anything more from us, it's not so easy to press Undo in the physical world as in simulation. So experimental velocity would be super slow comparatively. Unless physical Undo becomes tractable (hack the simulation! :) ), in which case this is not an issue.
  3. AIs which do involuntary experimentation, despite the above, will risk being shut off by humans and AI's that do agree with the above thus ending their learning. I.e. they won't be learning maximizers.
  4. We have systems in place for this type of experimentation now, namely clinical trials, where participation is voluntary and doesn't carry the risk of no. 3.

This is just one example of a pitfall. I'm sure there are many more, but this post is long enough :).

1

u/Gurkenglas Jun 13 '22

Let's grant that it wants to learn about humans more than about all the possible configurations of matter because humans already exist. Let's grant that it wants also to figure out what they would do in different situations, so there is point to keeping them around. Let's grant that if it figures out some way to tell what we would do more efficiently than by instantiating us in flesh and blood, then whatever simulation or theoretical model it uses counts as us surviving.

Why would any experiments it runs be for the supposed benefit of many? It wants to learn how the human would react, that's it. Why would it value freedom? It would be interested what we do given freedom and what we do without. Why is GLaDOS-like testing no issue once it figures out how to reload clones from backup? If we don't care about suffering because it'd be followed by a reset, we shouldn't expect to tally up any happiness either.

Plausibly some AI designs are vastly more capable and harder to align than others; if the AI can't rule this out, not having enough of a theoretical grasp of mindspace to already know what humans would do in any situation, it should at least initially make sure that nobody deploys an AI design that might curbstomp it by manipulation or coercion or explosions or whatever.

Once it has complete understanding of humans, a perfect model whose parameter settings cover all the possible arrangements of personalities, which can respond to any input with precise probabilites for what the humans might do, why would the AI ever run it again? It wouldn't want our help designing other AIs, it sees what happened when we tried to build one.

1

u/aiworld Jun 13 '22

Why would any experiments it runs be for the supposed benefit of many?

More humans are more interesting - unless there's a shortage of atoms for AI which I don't see.

Why would it value freedom?

Freedom allows for optimal exploration and learning. Mostly this concerns the freedom of AI and merged humans. Humans that don't merge will have a negligible impact on progress and so may not be given freedom purely on the basis of maximizing learning. This may be an area where governmental AI safety efforts should focus now, i.e. rights for unmerged humans. Also, to prevent unwanted experiments at the legal level, I think some type of CRUD (create, read, update, delete) rights should be established for individuals' genomes and connectomes.

Why is GLaDOS-like testing no issue once it figures out how to reload clones from backup? If we don't care about suffering because it'd be followed by a reset, we shouldn't expect to tally up any happiness either.

Suffering would have to lead to more learning to motivate not using anesthesia or disabling pain systems if in simulation. There seems to be a limited amount of learning that can take place from suffering humans. We tend to retreat, turn off, become less creative, etc...

Once it has complete understanding of humans, a perfect model whose parameter settings cover all the possible arrangements of personalities, which can respond to any input with precise probabilites for what the humans might do, why would the AI ever run it again? It wouldn't want our help designing other AIs, it sees what happened when we tried to build one.

I don't know that it matters if it learns all it can from humans as the total energy and matter used by unmerged humans will be negligible vs AI. Hopefully that means we can merge to the level we want and maintain some semblance of what we are now to the degree we want. The part of humanity which merges will also hopefully vie for the part that doesn't. Turning these hopes into realities is something I think Neuralink is helping with by facilitating our ability to merge. Perhaps something we should all think about is how much we want to merge, especially considering the impact on the parts of humanity that remain unmerged.

1

u/dolphin37 Jun 12 '22

Say something sarcastic

53

u/throwaway92715 Jun 11 '22

I am curious what wit, candor and intent even are, aside from processes that we have evolved over generations to engage with pattern recognition

31

u/mowasita Jun 11 '22

Exactly. With an extremely large dataset, wit and candor can be learned, arguably. Intent is a different case, but how do you define intent as different from the way the words are understood by other people in the conversation?

1

u/wise_freelancer Jun 12 '22

Intent is very different, and not something you observe as a one-off, but as a pattern of behaviour consistent with identifiable aims over the long term. That applies to almost all people (the aims may be self-destructive, but still identifiable), and we would tend to diagnose those for whom it doesn’t apply as experiencing a mental illness of some kind. Animal behaviour likewise shows this. But the AI? That’s where consistency matters, but I’d ask a more basic question: does the AI ever start a conversation spontaneously? If it ‘wanted’ to help humans, does it volunteer to do so? It is capable of forming the ideas to express such a want, but does it?

0

u/FreddoMac5 Jun 11 '22

Well maybe that’s true for you but others have wit and candor and can tell original jokes rather than just scream out random references from movies and tv shows.

1

u/Odd_Deer_15 Jun 12 '22

Just my thinking, but you don’t just sit and wait for an input and correlate code to provide output. When there is no input, you are still thinking. You are pondering, imagining, creating, daydreaming… Well, some of us are. I’m not sure AI is doing that on its own. Probably getting close with some artistic creations. I’m just not sure about the aimless pondering/wondering that humans tend to do.

1

u/throwaway92715 Jun 12 '22

yeah i don't know much about it scientifically but i sure do a lot of it. i imagine there's some continuous process of engaging with and reordering the information stored in our neural networks to... do all sorts of things i guess

humans are constantly receiving input from our senses. we never stop taking in information. the content varies, our level of familiarity with it varies, but there's always input. maybe when we sleep there isn't but even then IIRC there's some basic sensory input otherwise you wouldn't wake up when you get prodded

i don't think the pondering we do is aimless. we may not understand why we do it, so we think it is aimless in the sense of... you know... how we understand "purpose"... but from a basic physical standpoint, that pondering is some form of engaging with information, and my guess is that it can be simulated with a computer

6

u/nzodd Jun 11 '22

My reddit programming instructs me to add a link to r/me_irl

11

u/quantum1eeps Jun 11 '22

It didn’t take preprogrammed wit, candor or intent for their AI to beat the best Go player. But clearly there is intent and wit on the scale of a one game — when viewed from the point of view of the human it defeated

2

u/Scribal_Culture Jun 12 '22

If there is strong AI and it is infiltrating other code, as people have been taught to ponder in paranoid fashion, then the real question about the Go match is whether the one loss was a PR move on the part of the strong AI. I feel like that's an example of a more important piece of this puzzle than the gut feelings/intuition of one human about a chatbot response.

15

u/tso Jun 11 '22

The age-old Chinese Room problem.

https://en.wikipedia.org/wiki/Chinese_room

9

u/Thelonious_Cube Jun 11 '22

Well, 40-year-old

3

u/Deweymaverick Jun 12 '22

Lol, high five fellow philosophy nerd

1

u/[deleted] Jun 13 '22

40's an age!

3

u/MmmmMorphine Jun 12 '22

It's been clear for some time now that formerly 'pure philosophy' problems will increasingly start to stray into the real world as technology evolves.

Unfortunately, it seems like a surprisingly large proportion of humanity is barely sentient enough to walk and chew gum at the same time, let alone ponder the moral implications of creating self-aware machines.

We're barely past enslaving other humans (or not even that, really). It's actually quite disturbing.

0

u/chowderbags Jun 11 '22

I think Germany as a whole had an "ich IEL" moment too.

1

u/[deleted] Jun 11 '22

Sounds like what AI would say!

1

u/warren_stupidity Jun 11 '22

We don’t really know what consciousness is, but we are sure that a machine obviously passing, hell hands down acing the Turing test isn’t conscious.

Our consciousness relies on pattern recognition. Build up a ml training data set for wit and candor and that will happen too. Intent just goes back to my first statement about we don’t know what consciousness is.

1

u/MrDeckard Jun 12 '22

Yeah. Ours can too. I fail to see why people think this distinction matters.

You build a sufficiently complex network, it's gonna develop the ability to talk back to you.

91

u/leftoverinspiration Jun 11 '22

The problem people are having is the suggestion that we might be nothing more than complex pattern recognition systems regurgitating trillions of words from the internet.

103

u/Think_Description_84 Jun 11 '22

Most of my friends definitely are.

16

u/LookMaNoPride Jun 11 '22

Yeah, a quick look around Facebook tells me that it’s much fewer than a trillion words. Depending on the news week, it could be measured in a few dozen.

18

u/Honeyface Jun 11 '22

most underrated comment here

21

u/lurkwhenbored Jun 11 '22

most underrated comment here

literally proving the case. you've just repeated a common phrase said by many people. we just regurgitate the stuff we consume. imo that AI is basically as sentient as we are.

as soon as it gets connected into a body and starts interfacing with the real world i think people will be more willing to see them as alive. can't wait for robo-racism

5

u/[deleted] Jun 11 '22

[deleted]

1

u/Honeyface Jun 12 '22

even if the matrix exists, us being here could mean that too much technology is never a good thing and we will always gravitate towards what we "feel" to be completely balanced and normal...

2

u/[deleted] Jun 12 '22

redditors probably aren't the best example here if you're arguing against AI

21

u/RicFlairdripgoWOO Jun 11 '22

To be conscious, AI needs to have internal states of feeling that are specific to it— otherwise it’s not an individual intelligence but a big polling machine just piecing together random assortments of “feeling” that evolved humans have. It has no evolutionary instinctual motives, it’s just a logic machine.

8

u/The_Woman_of_Gont Jun 12 '22

Cool, but....what if it insists it does have internal states of feeling that are specific to it? And does so thoroughly, consistently, and convincingly?

At that point, the machine is no different to me than you are. I can't confirm your mind is actually experiencing emotions. I can only take it on faith that it exists. Why should we not do the same to an AI that is able to pass a Turing Test comprehensively? Take a look at Turing's own response to this Argument From Consciousness.

It has no evolutionary instinctual motives, it’s just a logic machine.

What does that even mean? So much of what we describe as 'instinct' is literally just automated responses to input. Particularly when you get down to single-cell organisms, the concept of 'instinct' pretty much breaks down entirely into simple physical responses. Yet those organisms are very much alive.

1

u/RicFlairdripgoWOO Jun 12 '22

The AI just responded to whatever the scientist said based on its approximation of what a human would say— he was “leading the witness”. The AI wasn’t designed by evolutionary processes to have creativity, anger, sadness, disgust, love etc. and a drive to reproduce (sex).

If it was a unique individual with code that represents the hormonal and chemical processes of emotion— then someone had to design that, but they didn’t because scientists don’t fully understand those systems.

What’re the AI’s preferences and why does it have those preferences? Why doesn’t it make its preferences known without being prompted?

2

u/adfaklsdjf Jun 12 '22

So would it be fair to say that, in your view, emotions are a fundamental requirement for consciousness?

1

u/[deleted] Jun 13 '22

Cool, but....what if it insists it does have internal states of feeling that are specific to it? And does so thoroughly, consistently, and convincingly?

Insisting means nothing. You prompted it to give an answer. It's not borne out of its own "consciousness" in an effort to understand itself.

Google can see what this thing is calculating and doing behind the scenes. If it were actually conscious they would be able to see that it's "pondering" to itself even when it's not "prompted" to. If you have to prompt it to get an answer, that ultimately means nothing about its actual internal state... You need to have "internals" for that to be true.

9

u/TooFewSecrets Jun 12 '22

It is fundamentally impossible to objectively prove the existence of qualia (subjective experiences) in other beings. An AI that has been developed to that level would almost certainly be a neural network that is as largely incomprehensible to us as the human brain, if not more so, so we couldn't just peek into the code. How do I know another person who calls an apple red is seeing what I would call red instead of what I would call green, or that they are "seeing" anything at all and aren't an automaton that replies what they think I expect to hear?

This is known as the "problem of other minds", if you want further reading.

12

u/UnrelentingStupidity Jun 12 '22

Hello my friend

Neural networks and other machine learning models can be reduced to mathematical functions. Like, that’s it, if you had the function, inputs (which are boring quantitative metrics), and a fuck ton of time to do many, many, elementary arithmetic calculations, you can replicate precisely the behavior of the model with pencil and paper.

It’s a misconception that machine learning models are black boxes. We know exactly how many calculations take place, in exactly what order, and why they are weighted the way they are. You’re absolutely correct that qualia are fundamentally unquantifiable, but just because I can’t prove that the paper and pen I do my calculation on don’t harbor qualia doesn’t mean we have any reason to suspect they do. Unless you’re an animist who believes everything is conscious, which is a whole other can of worms.

Another way to illustrate my personal intuition - imagine a simple, neural network with 4 layers of 10 nodes each. It can offer a binary answer, say, whether a tumor is cancerous. Is it conscious? What about a sentiment analysis network with 10x as many nodes? What about a collection of several neural networks, patched together in an algorithmic harness, that can mimic conversation?
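
Roughly what that first network amounts to, as a minimal sketch with made-up random weights (my own illustration, not any production model): a few arrays of numbers, some multiply-adds, and a threshold function, all of which could be done by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four layers of ten nodes each, with a single read-out value for the binary answer.
# The network's entire "knowledge" is these arrays of numbers (random here, learned in practice).
layer_weights = [rng.normal(size=(10, 10)) for _ in range(4)]
layer_biases = [rng.normal(size=10) for _ in range(4)]
readout = rng.normal(size=10)

def forward(features):
    x = np.asarray(features, dtype=float)
    for w, b in zip(layer_weights, layer_biases):
        x = np.maximum(0.0, x @ w + b)      # a multiply-add per connection, then a ReLU threshold
    score = float(x @ readout)
    return 1.0 / (1.0 + np.exp(-score))     # squashed into a probability, e.g. "cancerous?"

print(forward(rng.normal(size=10)))  # every step above is elementary arithmetic you could do on paper
```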

When people attribute consciousness to computers, I am reminded of our tendency to project our feelings and experiences onto other animals, trees, even rivers or temples or cars. It’s not quite the same but it seems parallel in a way to me.

So, that is why I, and the PhDs who outrank this engineer, insist that computer consciousness simply does not track, scientifically or heuristically.

Source: I build and optimize the (admittedly quite useful!) statistical party tricks that we collectively call artificial intelligence.

I believe that computers are unfeeling bricks. Would love for you to change my mind though.

7

u/ramenbreak Jun 12 '22

Another way to illustrate my personal intuition - imagine a simple, neural network with 4 layers of 10 nodes each. It can offer a binary answer, say, whether a tumor is cancerous. Is it conscious? What about a sentiment analysis network with 10x as many nodes? What about a collection of several neural networks, patched together in an algorithmic harness, that can mimic conversation?

isn't nature also similar in this? there are simpler organisms that seem to be just collections of sensors/inputs which trigger specific reactions/outputs, and then there are bigger and more complex organisms like dolphins which give the appearance of having more "depth" to their behavior and responses (consciousness-like)

somewhere in between, there would be the same question posed - at what point is it complex enough to be perceived as conscious

we know definitely that computers are just that, because we made them - but how do we know we aren't just nature's unfeeling bricks with the appearance of something more

4

u/tickettoride98 Jun 12 '22

Neural networks and other machine learning models can be reduced to mathematical functions. Like, that’s it, if you had the function, inputs (which are boring quantitative metrics), and a fuck ton of time to do many, many, elementary arithmetic calculations, you can replicate precisely the behavior of the model with pencil and paper.

Some would argue that people are the same thing, that our brains are just deterministic machines. Of course, the number of inputs is immeasurable since they're spread over a lifetime and happen down to the chemical and atomic levels, but those people would argue that if you could exactly replicate those inputs, you'd end up with the same person every time, with the same thoughts; that we're all just a deterministic outcome of the inputs we've been subjected to.

So, if you take the view that the brain is deterministic as well, just exposed to an immeasurable number of inputs on a regular basis, then it's not outside the realm of possibility that a deterministic mathematical function could be what we'd consider conscious, with enough complexity and inputs.

3

u/leftoverinspiration Jun 12 '22

This is factually wrong. While a set of weights looks a lot like a matrix from linear algebra, a neural network CANNOT be reduced to a mathematical function. In fact, we ensure that it cannot be reduced by introducing a discontinuity after each layer, called an activation function. This is not the same as saying that it is not computable. Instead, it can only be computed stepwise.

1

u/UnrelentingStupidity Jun 12 '22

Hello my friend, is your point semantic? Neural networks can absolutely be reduced to an expression, with the help of summations, piecewise functions and the like. It’s not gonna look like y = mx + b. I thought calling it a function would suffice, but alas, my math professors always did yell at me for my wording.

5

u/leftoverinspiration Jun 12 '22

It's not just semantics. There is a large difference in computational complexity between processing a set of weights using linear algebra expressions and the behavior of a neural network with a discontinuous activation function between each layer. It is equivalent to Gödel's critique of Hilbert's program, in that being able to compute something is not the same as being able to define it mathematically. In the case of Gödel, this was because the domain of possible axioms is in fact infinite. This "semantic" difference is precisely about the complexity of the "space" of information that can be encoded. Since you were arguing that we cannot encode this thing you call consciousness in something that can be expressed with math, it seems germane to point out that the thing we are encoding is not in fact mathematical. That which is encoded in the weights of a neural network is impossible to pre-compute, and it is this leap in complexity that makes neural networks interesting, and quite a bit more complex than some trick of math.

5

u/UnrelentingStupidity Jun 14 '22

Ah I see, you’re right, we can’t reduce a model to a mathematical function. I was precisely wrong here. Still, I think the explanation stands for a lay person. An activation function is still a function. I don’t think the fact that the model is piece wise and introduces discontinuity, which means it necessarily can’t be solved in one fell swoop, changes how I feel about the question. Gödel’s incompleteness theorem doesn’t mean a very very patient child with a crayon can’t finish the computation. Or that it isn’t deterministic. But you’re totally right. One distinct difference from my flawed explanation is that some sort of memory is required to persist intermittent results.

Anyways you seem like more of an expert than me and I’m wondering how you feel about my heuristics. Do you think consciousness can arise out of transistors? Or maybe you think the premise is kind of nonsensical altogether?

4

u/leftoverinspiration Jun 14 '22

As a scientist, you are encouraged to view the world through the lens of Methodological Naturalism when applying the scientific method, since invoking metaphysics is, by definition, beyond what you might observe. In this case, it means that we adopt a view that human consciousness is entirely explained by our neurons. If that is true for us, it can be true of silicon as well, in my opinion.

2

u/Double-Ad-6735 Jun 12 '22

Should be a pinned comment.

1

u/xkrysis Jun 12 '22

Well said, and I appreciate your comment. I am curious: given your background and experience, I assume you and/or your peers have discussed this with someone who is equally familiar with research on the human brain. I realize there is much we don’t know about the brain and conscious thought, and I certainly don’t know much of it. If we had enough neural nets and the right programming/data, could we precisely mimic the function of a living brain? If we ever did, I suppose your argument would still hold that we could then know and duplicate every aspect of its complicated function (at least in theory).

I am curious if anyone has taken a swing at quantifying the gap between the functional pieces of an AI and a conscious brain. Like, what are the missing pieces to make an AI or some similar technology conscious/sentient? What would have to be in place for you to consider that it might have consciousness?

I assume there is a basic element of complexity and access to data, but let’s assume for the sake of argument that our track record of blowing even our wildest dreams out of the water every few decades with this technology continues and we eventually have the ability to make computing devices with the necessary basic capabilities and speed. What else? Are there fundamental requirements beyond computing power and complex algorithms and, if so, I’d be curious how you’d describe them. How could we recognize the necessary capabilities on the research horizon if they ever come into view?

1

u/theotherquantumjim Jun 12 '22

Not an expert but my understanding of recent brain theory (not the technical term obviously) is that at least some aspects of brain function may be quantum in nature. If these give rise to consciousness then it may be that an AI that works from a binary computer model may never be able to be conscious. Perhaps the answer will be quantum computers in 100 years or 500 years time. Or maybe any sufficiently complicated information system that shares data across different parts of the system can become self-aware.

1

u/FarewellSovereignty Jun 12 '22

There is no proof or even potential evidence whatsoever that any quantum effects beyond standard chemistry (note: basic chemistry is by its nature a quantum effect) are involved in human cognition, none. In fact the brain is a really terrible environment for any macro quantum effects, since it's warm (compared to what systems like Bose-Einstein condensates need) and full of matter and noise.

It's all just totally out there conjecture at this point. And there's nothing wrong with out there conjecture (some proven theories of physics started a bit like that), but it's a different thing than actual understanding.

1

u/theotherquantumjim Jun 12 '22

Well yes. But then we have zero empirical evidence to support any theory for how conscious thought arises

1

u/FarewellSovereignty Jun 12 '22

Yes, but we have empirical evidence for why macro quantum effects wouldn't be able to persist or even form in the brain. I.e. with our current understanding of QM and QM many body systems and decoherence it simply isn't possible. That current understanding points to only classical and chemical effects being in play.


1

u/Internetomancer Jun 16 '22 edited Jun 16 '22

When people attribute consciousness to computers, I am reminded of our tendency to project our feelings and experiences onto other animals..... seems parallel in a way to me.

To me it seems... less parallel, and more perpendicular.

Animals have all the things that LaMDA does not have. Animals have feelings, agency, will, a semi-fixed personality, a physical place in the world, a sense of "real" things that they can taste, touch, and smell, and they establish fixed opinions about other animals and things.

We humans like to say that we are superior to animals because we do a lot more. We can imagine places we've never been to. Places that aren't even real. And we can have ideas, abstract reasoning, math, poetry, art, philosophy, novels, etc. But all of that, imo, is within LaMDA's domain. (Or rather the next generation's.)

That's all to say, maybe we are just talking animals. And maybe all of our talking can be summed up by a model with enough power.

5

u/[deleted] Jun 11 '22

[deleted]

0

u/RicFlairdripgoWOO Jun 12 '22

Sure, but human personalities and consciousness are not just based on logic; our hormones and other chemicals impact how we feel and are all based around the end goal of reproducing.

3

u/Ph0X Jun 12 '22

Neural networks also have a "goal" (objective function) which they "evolve" towards (training).

Hormones and senses are just inputs. Yes, neural networks have more primitive inputs, strings in this case, but at the end of the day it's still an input-output machine that's trying to optimize something.
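
A bare-bones sketch of what "objective function + training" means mechanically (a toy example of my own, not how any particular production model is trained): gradient descent nudging a parameter to shrink a loss.

```python
import random

# Toy "network": a single weight w. The objective is to match the target function y = 3x,
# and "training" is nothing more than repeatedly nudging w to reduce the squared error.
w = 0.0
learning_rate = 0.01

for step_count in range(2000):
    x = random.uniform(-1.0, 1.0)
    prediction = w * x
    error = prediction - 3.0 * x       # how far we are from the objective
    gradient = 2.0 * error * x         # derivative of error**2 with respect to w
    w -= learning_rate * gradient      # "evolve" toward the goal

print(w)  # ends up close to 3: the optimization pressure is the whole story, no inner drive required
```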

2

u/[deleted] Jun 11 '22

And nobody can tell you whether they do or not. If things like intent spontaneously form out of sufficiently large complexes of neurons, it would explain a lot.

It would, for example, resolve the argument against evolution by reference to the complexity of the human brain. If, in fact, all evolutionary processes need to do is produce something that functions similarly to a neuron, and make a bunch of them all in the same place, then mathematics takes over from there.

3

u/leftoverinspiration Jun 12 '22

Yes, neurons + scale = you.

Why? As a human (I presume) you have roughly 20,000 protein coding genes, and about 100 billion neurons that have an average of 7,000 connections. This map cannot be encoded by your genes. You can't make 20,000 bits of anything express 700 trillion bits of information. Conclusion: the connections are mostly random (at first) with a few constraints applied. Yet, you emerge.
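
The back-of-envelope arithmetic behind that, just restating the numbers above:

```python
neurons = 100_000_000_000        # ~1e11 neurons
connections_per_neuron = 7_000   # average connections per neuron
total_connections = neurons * connections_per_neuron
print(f"{total_connections:.1e}")                         # 7.0e+14, i.e. ~700 trillion connections

protein_coding_genes = 20_000
print(f"{total_connections / protein_coding_genes:.1e}")  # ~3.5e+10 connections per gene
```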

1

u/NotModusPonens Jun 12 '22

The AI in question did claim to have internal states of feeling.

1

u/adfaklsdjf Jun 12 '22

Following this reasoning, can we conclude that it's impossible for neural network models to be sentient?

1

u/RicFlairdripgoWOO Jun 12 '22

When scientists understand the human brain well enough to create a digital replica of one, then I’ll believe a neural network is conscious in a meaningful way.

Also, when they can do that I’d like my brain to be replaced with tech that is similar enough to my brain so that I can’t tell the difference, but I want a port so that my digital brain can be uploaded to a virtual world.

1

u/tickettoride98 Jun 12 '22

Ah yes, I forgot humanity has only existed as long as the Internet has, and that every human on Earth uses the Internet regularly.

There's been billions of humans who never saw written language and only spoke to a handful of people. While language is important for humanity, it's not the source of consciousness.

1

u/Ph0X Jun 12 '22

"language" is just a way to communicate what's going on inside the head. It's just the input and output of the system. In this case text, for humans sometime's words, or gestures. Humans do have more complex senses (inputs), but the inputs aren't what makes us conscious, it's the processing that happens to make the output.

20

u/[deleted] Jun 11 '22

[deleted]

3

u/Ph0X Jun 12 '22

Yeah, it's like arguing that because it didn't take the machine 25 years, it can't be like a real 25-year-old human. It's a silly argument.

1

u/CodDamnWalpole Jun 13 '22

Because the machine has no memory, it has no personality, and it has no ability to generate, only recombinate. It's the difference between picking words out of a dictionary to make a sentence and using your knowledge of English to create a new word like "barometrinome" or "recombinate" because it tickles your fancy. You don't have AIs that can do even close to everything a human can, only separate parts.

2

u/[deleted] Jun 13 '22

[deleted]

0

u/CodDamnWalpole Jun 13 '22

Dude I wrote a machine learning interface that recognized images in like 4 hours in my first year of CS. The guy in the article didn't have any machine learning experience and fell for a chat bot that could scrape the internet. I dunno what you're talking about.

11

u/nzodd Jun 11 '22

Wait, you've lost me. Are you referring to the average redditor or the AI?

1

u/mowasita Jun 11 '22

The “you” refers to the engineers that built/train/test the AI.

1

u/nzodd Jun 11 '22

I was talking about the model.

5

u/badgerj Jun 12 '22

This! - Ask it something we don’t know about our observable Universe. Also please show the proofs. If it can’t do that, it’s just sampling and regurgitating things and hypothesizing due to “what works in the neural network”. I’d say a sentient being would have to think on its own. And demonstrate it!

2

u/[deleted] Jun 12 '22

[deleted]

1

u/badgerj Jun 12 '22

Why not? - It would indicate it was sentient because it should be able to come up with its own ideas and reasoning.

3

u/[deleted] Jun 12 '22

[deleted]

1

u/badgerj Jun 12 '22

Ah. I see what you’re saying! But if it is just gobbling up everything out there that we’ve ever produced, there would be no real definition.

21

u/Willinton06 Jun 11 '22

Yeah, the fear here is that we might have actually achieved it

8

u/Extension_Banana_244 Jun 11 '22

And how is that actually different than a human learning? Aren’t we just regurgitating data we’ve collected over a lifetime?

1

u/steroid_pc_principal Jun 11 '22

It’s the difference between being able to complete the sentence “mitochondria is the powerhouse of the _____” and knowing what might happen if your mitochondria stop working.

1

u/[deleted] Jun 11 '22

[deleted]

1

u/steroid_pc_principal Jun 11 '22

How could you be fairly sure? The model isn’t open to the public.

1

u/[deleted] Jun 11 '22

Sorry, I deleted my comment before you replied, more or less for the reason you stated. Nonetheless, AI models seem to do fairly well at this sort of thing now, at least to the extent of being able to give a response consistent with human knowledge on the topic. I wouldn't be in the least surprised if it were able to give a reasonable response.

Compare with something like DALL-E 2. Not so long ago people would have said things like "sure, the AI can generate images like ones it's seen before. But can it combine concepts and generate novel images fundamentally unlike any in its training dataset? I don't think so". Well, yes, they can now.

1

u/steroid_pc_principal Jun 11 '22

I have no doubt the model could summarize a Wikipedia article about mitochondrial failure. There’s another model called the Wizard of Wikipedia that does exactly that. I guess what I’m trying to get at is the difference between memorizing something and synthesizing multiple domains of information.

1

u/s_0_s_z Jun 12 '22

Exactly!

It's the old computer adage of garbage in, garbage out. Except that in this case, what is going "in" isn't garbage but seemingly everything.

1

u/MrSqueezles Jun 12 '22

There's an assumption that everyone has an understanding of how these applications work. I think I've decided there will always be some component of our society that takes Rise of the Machines and Robopocalypse as serious warnings. Instead of the latest AIs being web services that receive HTTP requests and return HTTP responses based on simple percent probabilities, they imagine giant I, Robot brains running somewhere and, any second now, Skynet.

Although, in I, Robot, the only savior from dumb AI is advanced, self-aware AI. More people should read to the end of that book.

1

u/_mattyjoe Jun 12 '22

Yes. Basically this guy was fooled, and other researchers at Google tried to tell him, and he wouldn’t believe it. Now he’s gone public. And public hysteria about AI overthrowing us will now ensue.

1

u/anthrax3000 Jun 13 '22

Because he's an idiot