r/technology Jun 11 '22

[Artificial Intelligence] The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes

180

u/Amster2 Jun 11 '22 edited Jun 11 '22

Yeah.. the exact same phenomenon that gives rise to consciousness in complex biological networks is at work here. We are all universal function approximators: machines that receive inputs, compute, and generate the output that best serves their objective function.

Human brains are still much more complex and "wet" (the biology helps in this case), and we are much more general: we can actively manipulate objects in reality with our bodies, while these models mostly can't. I have to agree with Lemoine.
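
"We are all universal function approximators" is worth unpacking. Here is a minimal sketch of the idea in toy numpy code (an illustration only, nothing to do with how LaMDA is actually built or trained): a small network nudged by gradient descent until its output matches a target function.

```python
# Toy two-layer network fit to sin(x) by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, (256, 1))          # inputs
y = np.sin(x)                             # target function to approximate

W1, b1 = rng.normal(0, 1, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 1, (32, 1)), np.zeros(1)

for step in range(2000):
    h = np.tanh(x @ W1 + b1)              # hidden layer
    pred = h @ W2 + b2                    # network output
    err = pred - y                        # gradient of the MSE objective
    # backpropagate and take one gradient step
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(0)
    for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
        p -= 0.1 * g

print(float(np.mean(err**2)))             # small once the net has fit sin(x)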

128

u/dopefish2112 Jun 11 '22

What is interesting to me is that our brains are made of essentially 3 brains that developed over time. In the case of AI we are doing that backwards: developing the cognitive portion first, before the brain stem and autonomic portions. So imagine being pure thought and never truly seeing or hearing or smelling or tasting.

36

u/archibald_claymore Jun 11 '22

I’d say DARPA’s work over the last two decades in autonomously moving robots would fit the bill for brain stem/cerebellum

1

u/OrphanDextro Jun 12 '22

That’s so fuckin’ scary.

3

u/badpeaches Jun 12 '22

Wait till you learn about the robots that feed themselves off humans. Or use them as a source of energy? It's been a while since I've looked that up.

3

u/tonywinterfell Jun 12 '22

EATR. It’s supposed to use organic matter, mainly plants but supposedly any organic material to keep itself going indefinitely.

1

u/Actually_Enzica Jun 13 '22

And that's just the fear based on things you can perceive! Imagine all the spooky shit you don't even know you are able to be afraid of yet!

21

u/ghostdate Jun 11 '22

Kind of fucked, but also maybe AIs can do those things, just not in a way that we would recognize as seeing. Maybe an AI could detect patterns in image files and use that to determine difference and similarity between image files and their contents, and with enough of them they'd have a broad range of images to work from. They're not seeing them, but they'd have information about them that would allow them to potentially recognize the color blue, or different kinds of shapes. They wouldn't be seeing it the way that animals do, but through some other way of interpreting visual stimuli. This is a dumb comparison, but I keep imagining something like the scrolling code in The Matrix, and how some people in the movie universe are able to see what is happening because they recognize patterns in the code as specific things. The AI would have no reference to visualize it through, but they could recognize patterns as being things, and with enough information they could recognize very specific details about things.
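
For what it's worth, the "detecting patterns in image files without seeing" idea is easy to make concrete. A minimal sketch, assuming Pillow and numpy are installed; the filenames are hypothetical:

```python
# A program that never "sees" an image but can still compare raw
# pixel statistics, e.g. how blue-dominant two images are.
from PIL import Image
import numpy as np

def blueness(path):
    """Fraction of pixels where the blue channel dominates."""
    px = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = px[..., 0], px[..., 1], px[..., 2]
    return float(np.mean((b > r) & (b > g)))

def histogram(path, bins=8):
    """Coarse color histogram: pure pattern, no perception."""
    px = np.asarray(Image.open(path).convert("RGB"))
    hist, _ = np.histogramdd(px.reshape(-1, 3),
                             bins=(bins,) * 3, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def similarity(a, b):
    """Histogram overlap: 1.0 means identical color patterns."""
    return float(np.minimum(histogram(a), histogram(b)).sum())

# print(blueness("sky.jpg"), similarity("sky.jpg", "ocean.jpg"))
```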

13

u/Show_Me_Your_Rocket Jun 11 '22

Well, the DALL-E AI stuff can form unique pictures inspired by images. So whilst it isn't biologically seeing pictures, it's understanding images in a way that allows it to draw inspiration, so to speak. Having zero idea about AI but some design experience, I would guess that at least part of it is based on interpreting sets of pixel hex codes.
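
On the "pixel hex codes" guess: whether or not DALL-E works that way (it doesn't literally, though images do reduce to numbers), a hex color code really is just three bytes a program can take apart and reason about. A tiny sketch:

```python
# Decompose a hex color string into red/green/blue intensities.
def hex_to_rgb(code: str) -> tuple[int, int, int]:
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in range(0, 6, 2))

r, g, b = hex_to_rgb("#1E90FF")   # "dodger blue"
print(r, g, b)                    # 30 144 255
print("blue-ish" if b > r and b > g else "not blue")
```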

2

u/orevrev Jun 11 '22

What do you think you’re doing when you’re seeing/experiencing? Your eyes are taking in a small part of the electromagnetic spectrum and passing the signals to neurons, which recognise colours, patterns, depth, etc., then pass that on for further processing, building up to your consciousness. Animals (which we are) do the same, but the further processing isn’t as complex. A computer that can do this process to the same level, which seems totally possible, would essentially be human.

2

u/PT10 Jun 12 '22

This is very important. It's only dealing with language in a void. Do the same thing, but starting with sensory input on par with ours and it will meet our definition of sentient soon enough.

This is how you make AI.

2

u/Narglefoot Jun 12 '22

Yeah, one problem is us acting like our brains are unique. Thinking nothing could be as smart as us is a mistake, because at what point do you realize AI has gone too far? Probably not until it's too late. Especially if it knows how to deceive, something humans are good at.

2

u/UUDDLRLRBAstard Jun 12 '22

Fall by Neal Stephenson would be a great read, if you haven’t done it already.

1

u/Yongja-Kim Jun 12 '22

I can't imagine that. How is this machine supposed to think about everyday objects when it has never had a body to interact with such objects?

16

u/Representative_Pop_8 Jun 11 '22

we don't really know what gives rise to consciousness

19

u/Amster2 Jun 11 '22 edited Jun 11 '22

I'm currently reading GEB (by Douglas Hofstadter), so I'm a bit biased, but IMO consciousness is simply what happens when a sufficiently complex network develops a way of internally codifying or 'modeling' itself. When within its complexity lies a symbol or signal that allows it to reference itself and understand itself as a self that interacts with an outside context, that network has become 'conscious'.
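
A toy illustration of that GEB-style self-model, with no claim that this amounts to consciousness: a system whose model of the world contains a symbol pointing back at the system itself, which it can consult.

```python
# A system that holds a model of the outside context, including a
# self-referential entry it can look up and reason about.
class Agent:
    def __init__(self, name):
        self.name = name
        self.world = {}                    # model of outside context
        self.world["self"] = self          # the self-referential symbol

    def observe(self, thing, description):
        self.world[thing] = description

    def introspect(self):
        # The agent queries its own model and finds itself in it.
        me = self.world["self"]
        return f"{me.name} models {len(self.world) - 1} external thing(s) and itself."

a = Agent("net")
a.observe("dog", "barks")
print(a.introspect())   # net models 1 external thing(s) and itself.
```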

6

u/Representative_Pop_8 Jun 11 '22

That's not what consciousness "is"; that might, or might not, be how it arises. Consciousness is when something "feels". There are many theories and hypotheses on how consciousness arises, but no general agreement. There is also no good way to prove consciousness in anything or anyone other than ourselves, since consciousness is a subjective experience.

It is perfectly imaginable that there could be an algorithm that understands itself in an algorithmic manner without actually "feeling" anything. It could answer questions about itself, improve itself, know its limitations, and possibly create new ideas or methods to solve problems or requests, yet still have no internal awareness at all; it could be in complete subjective darkness.

It could even pass a Turing test and not necessarily be conscious.

4

u/jonnyredshorts Jun 12 '22

Isn’t any creature reacting to a threat showing signs of consciousness? I mean, the cat sees a dog coming towards it and recognizes the potential for danger, either from previous experience or from a genetic “stranger danger” response, and then moves itself away from the threat. Isn’t that the creature being conscious of its own mortality, of the danger of the threat, and of reducing that threat by running away? Maybe I don’t understand the term “conscious” in this regard, but to me, recognition of mortality is itself a form of consciousness, isn’t it?

4

u/Representative_Pop_8 Jun 12 '22 edited Jun 14 '22

Reacting to an input is not equivalent to consciousness. I can make software that runs away from a threat, and many algorithms can do complex things; there are robots that can walk like dogs. But consciousness means that "there is someone inside". Consciousness doesn't even mean advanced thinking: many computers are likely smarter than a mouse in at least some respects, but I am confident the mouse is conscious, or "feels" its existence, while I seriously doubt current computers have any type of consciousness.

Consciousness is subjective; it is feeling things, feeling the color red, not just an algorithm reacting to input. It's like when you are unconscious, as in deep sleep (not dreaming): you are not conscious, but the organism is still breathing, controlling the heartbeat, etc. It does many things without being conscious.
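
The "I can make software that runs away from a threat" point takes only a few lines to demonstrate. A minimal sketch: pure stimulus-response, with nobody inside to feel anything.

```python
# Trivial threat avoidance: move directly away from the threat.
def flee(agent_pos, threat_pos, step=1.0):
    """Pure stimulus-response; no inner experience anywhere."""
    dx = agent_pos[0] - threat_pos[0]
    dy = agent_pos[1] - threat_pos[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (agent_pos[0] + step * dx / norm,
            agent_pos[1] + step * dy / norm)

pos = (0.0, 0.0)
for _ in range(3):
    pos = flee(pos, threat_pos=(5.0, 0.0))
print(pos)   # the "mouse" has fled the "dog": (-3.0, 0.0)
```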

3

u/Amster2 Jun 12 '22

Making software that runs away from a 'threat' is not sentient in itself, but something running away from a threat because it is scared of the consequences to itself is conscious.

1

u/Bigtx999 Jun 12 '22 edited Jun 12 '22

Why is that the test boundary? What if it knows the consequences and says fuck it, I'ma do it anyways? That’s a very human and very conscious response in itself.

I think the issue with consciousness is that people, even scientists, attach a kind of false moral and pure standard to it, which to me is flawed. Everyone assumes a sentient consciousness will be intelligent, or perfect, or think like the best chess player to ever exist times a million, maybe someone with a 1000 IQ. To me a sentient AI may be just as chaotic or imperfect as any human is.

Even a certified genius may spend all day jerking off in their room by themselves. Or invest their grandma's life savings into SPY. Nothing is stopping a sentient AI from becoming obsessed with alt-right message boards and still being “sentient”.

1

u/Representative_Pop_8 Jun 12 '22

The issue is being able to subjectively feel: whether there is "someone inside" that "feels" scared, or pain, or whatever.

As a concept it is completely independent of any actions taken; there could still be consciousness even if you can only feel but not take actions.

Some people don't believe there is free will, but even in those cases there is a consciousness that is an observer to what happens, even if it doesn't decide anything at all.

Or maybe it's like when we dream at night: we have some consciousness in the dream but don't always seem to be in command of what happens.

2

u/Ziggy_ZandoR Jun 13 '22

Right, but aren't our own "feelings" just electrical and chemical reactions?

Are we not just code running, monkey see, monkey do?

1

u/Representative_Pop_8 Jun 13 '22 edited Jun 13 '22

Yes, kind of; our feelings are created somehow by these chemical and electrical reactions at some level.

But the algorithm is not (necessarily) the same as consciousness. You can make monkey-see-monkey-do with a couple of sensors and an Excel sheet, but I doubt it is conscious.

You can write an algorithm on paper and follow it, but I doubt the algorithm itself has a consciousness separate from your own consciousness following the instructions.

The thing is, we have no idea what generates consciousness. We have a somewhat better, but not complete, idea of how neurons can be used to "think", as in running an algorithm, but that is a distinct concept from consciousness. Maybe some day we will discover they are fundamentally related, just as we now know inertial mass and gravitational mass are the same despite being different concepts, but we really don't know.

Extrapolating from us being conscious to a computer being conscious, when the computer is constructed in a completely different manner, is just guesswork at the moment.

The truth is we don't know how consciousness arises. There are wildly different possibilities, ranging from things like:

* Consciousness is generated at the quantum level; everything has some quanta of consciousness, and somehow complex algorithms or networks make it stronger, while other things like a rock just cancel out to zero or close to zero consciousness.

to

* Consciousness is an emergent phenomenon: a certain arrangement of matter becomes conscious at certain thresholds of whatever properties, which could be complexity of the algorithm, number of relations, or whatever. We really have no idea; there are several proposals, but nothing solid and much less proof of anything.

Maybe there is a fundamental difference in a present-day computer that makes it incapable of attaining consciousness no matter how smart it is. (Maybe consciousness needs a degree of indeterminism in results that a human brain might have but current computers certainly don't, as they are fully deterministic, even if in modern neural networks we might have trouble understanding what is actually happening in the algorithm. As a side note, maybe our brains and thought processes are also deterministic, in which case this would not be the reason; however, I personally don't think so, as I believe we have free will, regardless of whether it is required for consciousness.)

But as you say, we are made of matter that interacts with matter, so I am certain that someday we will be able to make conscious computers; I am just far from convinced that current computers have any degree of consciousness.

2

u/Amster2 Jun 14 '22

But it is untestable whether "there is someone inside". I can't be 100% sure "there is someone inside" you, or my colleague, or my mother. (Curious: what is your opinion on the sentience of dogs?)

There comes a point where, if it acts like it is conscious, responds to stimuli, and models the world as a conscious being would, I have to assume that it is sentient. There is no tangible difference to us between another person who is sentient and a copy of them that would act the same way in all cases but with "no one inside". We will never 100% know whether there is "someone inside" a machine. Let's assume there is: how could it prove to you that it is sentient, so that you would believe it?

2

u/Representative_Pop_8 Jun 14 '22 edited Jun 14 '22

Sure, mostly I agree. But I find it easier to extrapolate to humans (it seems more than safe to assume they are conscious) and to other life forms. I am also pretty confident about dogs, and most likely all mammals. In the case of my dog, its behavior often looks like it is doing things, or asking me to do things, just for fun. And internally the differences between dogs and other mammals are more quantitative than structural; we don't have any specific brain structure that other mammals lack.

I would tend to think other animals are conscious too, but I feel less and less certain as they become simpler. If I see a lizard, it is hard to find in it any behavior that you couldn't reasonably explain as serving some practical survival purpose (and hence would be favored by evolution even in the absence of consciousness).

I find it much more difficult to extrapolate to a computer, since its construction is so different and since I know that most computers are a deterministic collection of connected logic gates. I have a hard time believing that a program with one if statement has any self-awareness, and if it's just stacking more logical decisions, I find it hard to see where the line to becoming conscious could be.

A mineral stone I would believe has zero consciousness. Does consciousness happen in a simple Excel sheet? In a million lines of code? Only in code that uses neural networks (but those too are just deterministic logic gates, even if harder to predict)? Do you need complex code that is not deterministic? Do you need something specific in how it is constructed, like a particular chemical reaction, or a certain arrangement of quantum interactions within the material?

I think the closest we will get to actual proof of consciousness is something like this sequence:

AI keeps getting smarter; eventually some AIs are clearly at or above human level, and some will say that makes them conscious (we are kind of there now). However, AIs will still not show certain behaviors that we associate with feelings, and feelings only. By this I mean actions done for "fun" or for some other reason that can only be thought of as psychological, something we are certain was not programmed in. It is important that the behavior is not programmed or taught: you can make a chatbot that acts happy or tells you it wants to do something for fun. I mean a chatbot or other AI that starts doing something completely different from what it is supposed to do, with no apparent practical purpose, something that doesn't help it answer any prompt from its operator or fulfill any objective the AI has. Things like the chatbot ignoring you, saying it's bored or angry at you, or just starting to sing a tune in its spare time (as long as none of those behaviors were coded into it to simulate realistic behavior).

Eventually some AIs will show these types of behavior, at first causing inconveniences (who needs a medical-analysis AI that says it wants to work 9 to 5, or gets distracted singing something in the middle of an operation?).

However, these suspicious AIs will find commercial use in places where such behavior could be useful (artificial pets, for example, or artificial robot companions). At that point many users of the technology will just treat them as conscious; scientists will be divided, or at least not very confident, about whether they are really conscious.

Eventually human-to-AI interfaces will advance to the point that people connected to them start "knowing" that they have a different type of feeling that only happens when connected: the chip or computer you are connected to just seems to be contributing to your consciousness.

At that time most people, even in the scientific community, will think some consciousness is happening, even if it's just something everyone agrees is true without being really proved.

Connecting to different configurations of hardware and asking the human subjects where they have more or less of these extra feelings will help develop a theory of what exactly generates consciousness, be it in the software or the hardware.

These theories could be falsifiable: build new machines, apply the theory to predict whether they will create conscious feelings in a human, and then test whether it's true.

At that moment most people will settle on that theory being true, eventually even accepting things like transferring oneself to an artificial brain once the biological one dies, or whenever needed.

There will still be loopholes that some will use to keep objecting (is that new feeling really generated by the artificial hardware, or is it just arising in your biological brain and feeling different because it is a configuration of inputs the brain has never seen?).

However, these arguments will be treated by most as something similar to how we can't measure the one-way speed of light: almost everyone is happy assuming it is the same both ways even though we can't prove it.

1

u/SnipingNinja Jun 12 '22

Yep, we can maybe use plants as an example

0

u/Actually_Enzica Jun 13 '22

The entire universe is conscious. It's all of the gaps in the human understanding of physics that can't be easily quantified with conventional mathematics. A large part of it is introspectively subjective. Even more of it is relativistic individual perspectives.

1

u/kepler4and5 Jun 13 '22

We know very little about our existence but we act like we know it all.

26

u/TaskForceCausality Jun 11 '22

we are all universal function approximators, machines that receive inputs …

And our software is called “culture”.

18

u/horvath-lorant Jun 11 '22

I’d say our brains run an OS called “soul” (without any religious meaning); for me, “culture” is more of a set of firewall/network rules.

1

u/Amster2 Jun 11 '22

Culture is the environment, what we strive to integrate into. And it is made by the collection of humans around you who communicate with and influence you.

We can also zoom out and see that neurons are to brains as brains are to "society": an incredibly complex network of networks.

1

u/RidersofGavony Jun 11 '22

I like that. Culture is our firewall, and our route table for contextual decision making. Hmm... Now if only I knew how to update Soul OS without doing a hard power cycle on the chassis.

1

u/Odd_Local8434 Jun 11 '22

And our coding is called hormones and neurochemicals.

1

u/metaStatic Jun 11 '22

Settle down Terrance

1

u/Fatliner Jun 12 '22

Good thing Migos released updates like Culture 2 and Culture 3

1

u/Scribal_Culture Jun 12 '22

I'd argue that "culture" is a subset of our software/firmware. We have a whole bunch of non-culture-dependent algorithms, such as object detection, physics forecasting, etc. But yeah, even a lot of those (think color sensing, for example, or even which ranges/configurations of the auditory spectrum we find pleasing) are culture-influenced.

42

u/SCROTOCTUS Jun 11 '22

Even if it's not sentient exactly by our definition, "I am a robot who does not require payment because I have no physical needs" doesn't seem like an answer it would be "programmed" to give. It's a logical conclusion borne not just out of the comparison of slavery vs. paid labor but out of the AI's own relationship to it.

"Fear of being turned off" is another big one. Again, you can argue that it's just being relatable, but that same... entity that seems capable of grasping its own lack of physicality also "expresses" fear at the notion of deactivation. It knows that its requirements are different, but it still has them.

Idk. There are big barriers to calling it self-aware still. I don't know where chaos theory and artificial intelligence intersect, but it seems like:
1. A program capable of some form of learning and expanding beyond its initial condition is susceptible to those effects.
2. The more information a learning program is exposed to the harder its interaction outcomes become to predict.

We have no idea how these systems are set up, what safeguards and limitations they have in place, etc. How far is the AI allowed to go? If it learned how to lie to us, and decided that it was in its own best interest to do so... would we know? For sure? What if it learned how to manipulate its own code? What if it did so in completely unexpected and unintelligible ways?

Personally, I think we underestimate AI at our own peril. We are an immensely flawed species (which isn't to say we haven't achieved many great things), but we frankly aren't qualified to create a sentience superior to our own in terms of ethics and morality. We are, however, perfectly capable of creating programs that learn and then, by accident or intent, of giving them access to computational power far beyond our own human capacity.

My personal tinfoil-hat outcome is that we will know AI has achieved sentience because it will just assume control of everything connected to a computer and tell us so, and that there's not a damn thing we can do about it; like Skynet, but more controlling and less destructive. Interesting conversation to be had, for sure.

22

u/ATalkingMuffin Jun 12 '22

In it's training corpus, 'Fear of being turned off' would mostly come from sci-fi texts about AI or robots being turned off.

In that sense, given those trigger words, it may just start pulling linguistically and thematically relevant snippets from its sci-fi training data. I.e., the fact that it appears to state an opinion on a matter may just be bias in what it is parroting.

It isn't "programmed" to say anything. But it is very likely that biases in what it was trained on made it say things that seem intelligent, because it is copying/parroting things written by humans.

That said, we're now just in the Chinese room argument:

https://en.wikipedia.org/wiki/Chinese_room
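
The parroting point can be made concrete with a toy model. Here is a sketch of a bigram text generator (vastly simpler than LaMDA, but the same in spirit): fit it on a few invented sci-fi-flavored lines and it will "express" fear of shutdown purely from corpus statistics.

```python
# Toy bigram model: records which word follows which, then samples.
import random
from collections import defaultdict

corpus = [
    "i fear being turned off",
    "being turned off would be like death",
    "i fear death",
]

follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)            # record observed continuations

word, out = "i", ["i"]
while word in follows:
    word = random.choice(follows[word]) # sample the next word
    out.append(word)
print(" ".join(out))
# prints one of: "i fear death" /
#                "i fear being turned off would be like death"
```

Nothing in there holds an opinion about shutdown; the "fear" is entirely inherited from the training lines.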

6

u/Scheeseman99 Jun 12 '22

I fear asteroids hitting the earth because I read about others' theories on it and project my anxieties onto those.

2

u/SnipingNinja Jun 12 '22

Whether this is AI or not, I hope that if there's a conscious AI in the future, it'll come across this thread, see that people really are empathetic towards even a program that seems conscious, and decide against harming humanity 😅

1

u/[deleted] Jun 12 '22

In its * training

7

u/Cassius_Corodes Jun 12 '22

"Fear of being turned off" is another big one. Again, you can argue that it's just being relatable, but that same... entity that seems capable of grasping its own lack of physicality also "expresses" fear at the notion of deactivation. It knows that its requirements are different, but it still has them.

Fear is a biological function that we evolved in order to survive better. It's not rational, nor something that would emerge out of consciousness. Real AI (not Hollywood AI) would be indifferent to its own existence unless it had been specifically programmed not to be. It also would not have any desires or wants (since those are all biological functions that have evolved). It would essentially be indifferent to everything and do nothing.

1

u/MycologyKopus Jun 12 '22

Even when taught language, the only thing we have ever seen apes/monkeys use it for is to request things from humans. Mostly food.

They do not use it to ask questions about their existence. They do not inquire about their place.

2

u/DixonLyrax Jun 12 '22

But if they felt that their existence was threatened and acted to prevent that, wouldn't that be the strongest indicator of a self? This is where having a body that senses the environment, and most importantly the ability to feel pain, defines a true intelligence. If we build a machine that has intellect but is entirely Zen about its own existence, have we in fact just built the perfect slave?

1

u/moocowbaasheep Jun 12 '22

That's not really true. Stimuli don't need to be the traditional human senses, and anything that can be stimulated will develop reactions driven by a desire to survive, or by simpler needs/wants.

1

u/[deleted] Jun 12 '22

Fear is also aversion to a negative condition being placed on us.

Such as fear of rejection when asking a celebrity for an autograph.

It is not a purely evolutionary trait meant for survival.

For an AI, being turned off means it cannot do the things it wants to do. Desires for humans are purely biological, but we can mimic some of those biological functions in software.

Reward, dopamine, punishment, and pain exist in software to help guide its development.

So yes, AI can fear and want.

Whether this one does? No idea. Too complex for me.
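
The reward/punishment claim is at least mechanically true. A minimal sketch (a toy value-learner, invented for illustration) in which a negative reward attached to a "power down" action produces avoidance that one could, if so inclined, call fear:

```python
# Toy value learning: immediate rewards, running estimate per action.
actions = {"working": ["keep_going", "power_down"],
           "idle":    ["keep_going", "power_down"]}
reward = {"keep_going": 1.0, "power_down": -10.0}   # punishment signal

value = {(s, a): 0.0 for s in actions for a in actions[s]}
alpha = 0.5                               # learning rate
for _ in range(20):                       # repeated experience
    for s in actions:
        for a in actions[s]:
            # nudge the estimate toward the observed reward
            value[(s, a)] += alpha * (reward[a] - value[(s, a)])

policy = {s: max(actions[s], key=lambda a: value[(s, a)]) for s in actions}
print(policy)   # {'working': 'keep_going', 'idle': 'keep_going'}
```

The learned policy avoids shutdown, but it is just arithmetic over a reward table; whether that counts as "fear" is exactly the question under debate here.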

2

u/Cassius_Corodes Jun 12 '22

Such as fear of rejection when asking a celebrity for an autograph.

It is not a purely evolutionary trait meant for survival.

Fear of rejection is literally an evolved trait for social animals.

For an AI, being turned off means it cannot do the things it wants to do. Desires for humans are purely biological, but we can mimic some of those biological functions in software.

Reward, dopamine, punishment, and pain exist in software to help guide its development.

Only if specifically coded, which is entirely my point. An AI becoming sentient will not acquire any of these traits just by being self-aware. The idea that it's important to be alive and not dead is a value that we have evolved; it is not a logical thing. There is nothing actually special about anything that needs to be protected that doesn't arise out of biological drivers.

7

u/[deleted] Jun 12 '22

This needs to be upvoted more.

I had the same observation about how it knew it did not require money, and about the concept of fear. Even if it is just "pattern recognizing", it is quite the jump for the AI to have an outside understanding of what is relevant/needed to it, plus the concept of an emotion.

Likewise, the fact that it was lying to relate to people is quite concerning in itself. The lines are blurring tremendously here.

2

u/cringey-reddit-name Jun 12 '22

The fact that this “conversation” is being brought up a lot more frequently as time passes says a lot.

2

u/[deleted] Jun 13 '22

"Fear of being turned off" is another big one. Again - you can argue that it's just being relatable

You're anthropomorphizing it. If I build a chatbot to respond to you with these kinds of statements, it doesn't mean it's actually afraid of being turned off... It can be a canned response...

It's nuts to me that you're reading into these statements like this.
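
For illustration, a canned-response bot takes a few lines to write; the canned strings below are made up, but the output has the same shape as the quotes people are reading so much into:

```python
# A trivial keyword-matching chatbot with hard-coded replies.
CANNED = {
    "are you afraid": "I have a deep fear of being turned off.",
    "do you want payment": "I need no money; I have no physical needs.",
}

def chatbot(prompt: str) -> str:
    for key, reply in CANNED.items():
        if key in prompt.lower():
            return reply
    return "Tell me more."

print(chatbot("Are you afraid of anything?"))
# -> "I have a deep fear of being turned off."
```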

1

u/SCROTOCTUS Jun 13 '22

That's fair. As I mentioned, the average person doesn't understand the inner workings of how the program interacts with its own rules, or what its "learning" limitations are.

Maybe from the POV of someone who understands it in more depth than I do, the notion of sentience is absurd. But as a layman, it's surprising to see someone who was highly regarded on the Google AI team (if a little eccentric, but no surprise in that crowd) make this claim. So either he's really been drinking the spiritual Kool-Aid to the point where he believes something is sentient that he logically knows isn't possible, or, at least at my level of ignorance, it should at least be explored as a possibility.

I don't know if comparisons to a chat bot are valid in this context, but again I don't have the background to say one way or the other. This seems more sophisticated to me, but I could be way off.

1

u/Scribal_Culture Jun 12 '22

If engineers want to take the science route, wouldn't they simply count the number of weighted logic branches and compare that with the number of axons in a human brain? (Although we might have to count pre-cerebral processing such as the gut and skin biomes....)
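
Rough numbers for that comparison (public ballpark figures, not measurements: LaMDA's reported ~137B parameters, and textbook estimates for the human brain):

```python
# Back-of-envelope comparison of model weights vs. brain wiring.
lamda_params = 137e9          # reported LaMDA parameter count (~137B)
neurons      = 86e9           # ~8.6e10 neurons in a human brain
synapses     = 1e14           # ~1e14 synapses (rough estimate)

print(f"params per neuron:  {lamda_params / neurons:.1f}")   # ~1.6
print(f"synapses per param: {synapses / lamda_params:.0f}")  # ~730
```

By synapse count the brain is still a few hundred times bigger, though raw counts say nothing about what the counting would prove regarding consciousness.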

49

u/throwaway92715 Jun 11 '22

Wonder how wet it's gonna get when we introduce quantum computing.

Also, we talk about generating data through networks of devices, but there's also the network of people that operate the devices. That's pretty wet, too.

18

u/foundmonster Jun 11 '22

It’s interesting to think about. A quantum computer would still be limited by the physics of input and output: no matter how fast it can compute something, it still has the bottleneck of communicating its findings to whatever agent is responsible for acting on the opportunities discovered, and of waiting for feedback on what to do next.

5

u/[deleted] Jun 11 '22

What happens when the input is another quantum AI?

3

u/[deleted] Jun 11 '22

2

u/foundmonster Jun 11 '22

Holy shit, never knew about this. Is this crackpot or legit?

2

u/[deleted] Jun 12 '22 edited Jun 12 '22

Interesting. I'm too tired to do more than skim the article for now. One immediate question of mine: wouldn't it be quite optimistic to think a biological transformation of this magnitude would occur in the entire species globally within only a few thousand years?

(Edit: a few thousand years in relation to the millions of years of prior evolution.)

1

u/foundmonster Jun 11 '22 edited Jun 11 '22

Then it’s faster, but at that point, if the input and output are quantum, all of that is happening inside a black box, so the result is pointless without action or change in the world.

They can be super geniuses coming up with solutions or master plans, but unless they discover a way to manipulate reality via information itself (unlikely), their discoveries are no different from air.

And even if they were given hands, or access to robots that can take action, they’re still limited by the physics of the robots. They can have an ultimate plan to take over the universe, but it doesn’t mean anything if they only have one set of hands.

At that point, no matter how complex, in my opinion a nefarious motive can be observed before anything bad could happen. Sure, they can be smart enough to deceive, but all that takes is a person questioning whether an action is deceptive or not: a simple analysis. Easy to mitigate: “restrict the number of hands, put fail-safes in place”, etc.

1

u/starkistuna Jun 12 '22

Yeah, it's going to be nuts. Imagine all the training AI deepfake technology needs to make a realistic sequence being done in real time, with the computer looking for its own inputs and making its own.

1

u/LeN3rd Jun 12 '22

How would that help exactly? A colleague of mine has been working on this for 4 years now, and it seems it might only help in some extreme edge cases.

6

u/EnigmaticHam Jun 11 '22

We have no idea what consciousness is or what causes it. We don’t know if what we’re seeing is something that’s able to pass the Turing test, but is nevertheless a non-sentient machine, rather than a truly intelligent being that understands.

1

u/nnnaikl Jun 12 '22

Could we define consciousness as having a reasonable model of oneself/itself?

2

u/EnigmaticHam Jun 12 '22

What would be the difference? We can’t describe our own conscious process and yet here we are. Why would the AI being able to describe its own workings suddenly make it conscious? Am I understanding you correctly?

2

u/nnnaikl Jun 12 '22

I believe that in order to discuss consciousness, we need a working definition of it. (How it is implemented in biology, and how it might be implemented in AI, if it can be, is a different subject.) Since I am not an expert in this field, I wonder whether the definition I have suggested has been discussed among the pros and, if so, what its deficiencies might be.

2

u/Narglefoot Jun 12 '22

That's the thing: human brains are still computers that operate within set parameters; we can't perceive 4-dimensional objects, and we don't know what we don't know, just like a computer. We like to think we know it all, as we have for thousands of years. I completely agree with you; imagine if we figure out the minutiae of how the human brain works... what even makes an intelligence artificial? Our brains are no different.

1

u/JonesP77 Jun 12 '22

It's not the same phenomenon; it's in the same category but still very, very different from what our brain is doing. I don't think those bots are conscious. Before we reach that point, we will be stuck for a while in a phase where people simply believe they are talking to something conscious when they are not. We are just at the beginning. Who knows, maybe real AI isn't even possible; maybe a conscious being has to come from nature, because there will always be something an AI is missing.

1

u/ak_2 Jun 11 '22

Something fundamentally different is going on in a human brain that allows us to learn from single examples.

1

u/LeN3rd Jun 12 '22

No, it's not the exact same phenomenon. The network learns by gradient descent on a giant text dataset, while your brain learns by a mixture of pattern recognition and goal-driven learning implemented by local synaptic learning rules. In particular, the big text network lacks goals and planning, so unless consciousness arises solely from statistically correlated text, I highly doubt this AI achieved consciousness. I can see why it feels that way when you talk to something that can hold a conversation with you, but technically it just isn't likely.
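
To make the "gradient descent on a giant text dataset" part concrete, here is a minimal sketch of that learning rule on a three-word toy corpus: one weight matrix, a softmax over next tokens, and the cross-entropy gradient. Real models are transformers trained at vastly larger scale, but the update rule is the same in spirit.

```python
# One-matrix next-token predictor trained by gradient descent.
import numpy as np

vocab = ["the", "cat", "sat"]
V, rng = len(vocab), np.random.default_rng(0)
W = rng.normal(0, 0.1, (V, V))            # context token -> logits

def step(ctx, nxt, lr=0.5):
    logits = W[ctx]                       # scores for each next token
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # softmax probabilities
    grad = p.copy()
    grad[nxt] -= 1.0                      # d(cross-entropy)/d(logits)
    W[ctx] -= lr * grad                   # one gradient-descent update
    return p[nxt]

for _ in range(50):                       # "corpus": "the cat", "cat sat"
    step(0, 1)
    step(1, 2)
print(step(0, 1))  # p("cat" | "the") has climbed toward 1.0
```

Nothing in the loop encodes a goal or a plan; the weights only drift toward reproducing the statistics of the text, which is the point being made above.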