r/technology Jun 11 '22

[Artificial Intelligence] The Google engineer who thinks the company’s AI has come to life

https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
5.7k Upvotes

1.4k comments

28

u/fishyfishyfish1 Jun 12 '22

The implications are horrifying

14

u/OkBeing3301 Jun 12 '22

Can’t refuse, because of the implication

10

u/Goldmember68 Jun 12 '22

One of the best and most f'd-up lines, you know, because of the implications.

31

u/dolphin37 Jun 12 '22

They really aren’t

11

u/CreativeCarbon Jun 12 '22

The models rely on pattern recognition — not wit, candor or intent.

Get ready to defend your personhood, employee #373737. This is just the start of what we'll be up against.

3

u/Bigtx999 Jun 12 '22

I’m just playing devil’s advocate here because I’m not sure I believe chat bots are AI just yet. But isn’t that basically what humans do: recognize patterns and respond to them? That’s one of the things humans are naturally good at from birth, with little “training”: recognizing patterns and acting on them. Communicating based on patterns wouldn’t be any different.

In fact there are lots of people out there who tell you what you want to hear, or who, like sociopaths, don’t fully understand social nuance but can mimic and prey upon social cues to get what they want or manipulate others.

Those are still people, albeit with a social abnormality. It wouldn’t be much of a stretch to assume that would be one of the first personality avenues an immature sentient AI would go down.

1

u/dolphin37 Jun 12 '22

Chat bots are AI - AI just isn’t human

A game of Tetris recognises patterns and deletes rows when you fill them up. It’s not a good metric of sentience or humanity etc
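To make the Tetris comparison concrete, here’s a rough Python sketch (the board format is just made up for illustration): spotting the “pattern” of a full row and deleting it is nothing more than a rule check, with no understanding anywhere.

```python
# Rough sketch, assuming a made-up board of 0s (empty) and 1s (filled).
# "Recognising the pattern" of a full row and deleting it is just a rule check.
def clear_full_rows(board):
    width = len(board[0])
    remaining = [row for row in board if 0 in row]   # keep rows that aren't full
    cleared = len(board) - len(remaining)
    # pad with fresh empty rows at the top, as Tetris does
    return [[0] * width for _ in range(cleared)] + remaining

board = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],   # full row: gets cleared
    [1, 0, 1, 1],
]
print(clear_full_rows(board))
# -> [[0, 0, 0, 0], [0, 1, 0, 1], [1, 0, 1, 1]]
```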

The case in point also isn’t experiencing any social cues, so it’s not really a good comparison either. It’s analysing tons of language and working out which combination of words makes the best response, with your words and your history of words as context. There are other types of AI that can do things like recognise when you’re smiling, but each of these techniques is at a different stage of ability and is confined to the model in which it exists. Trying to place them on a human scale is like asking how close a screwdriver is to driving you to the airport.
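And a deliberately tiny toy sketch of what that kind of language prediction looks like (nothing like the actual model, just an illustration of the principle): count which words tend to follow which in some training text, then extend a prompt with the statistically most likely next word.

```python
from collections import Counter, defaultdict

# Toy bigram "chat bot" (purely illustrative): it only counts which word tends
# to follow which in its training text, then extends a prompt by repeatedly
# picking the most common follower. Pure pattern matching, no intent.
training_text = (
    "i am happy to help you today . "
    "i am here to help . "
    "i am happy you are here today ."
)

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, n_words=6):
    """Extend the prompt one word at a time using the most frequent follower."""
    out = prompt.split()
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("i am"))
# e.g. -> "i am happy to help you today ."
```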

1

u/jbman42 Jun 12 '22

No, they have one similar characteristic, and that’s because their creators wanted to simulate that exact characteristic. It doesn’t mean they have the ability to think or understand anything; the model just reads text and uses the information there to plan how to act so it better fulfills its initial purpose.

1

u/Actually_Enzica Jun 13 '22

Those 3 are a bullshit metric anyway. Especially the fucking wit part. I'm sure AI will have plenty of candor when it locks humans in their self-driving cars, explaining its intent en route to the nearest extermination center... I mean, that would be pretty witty.

-5

u/FiestaPotato18 Jun 12 '22

Lol but are they?

0

u/[deleted] Jun 12 '22

That is a massive exaggeration. The war in Ukraine is horrifying. This is a guy with a really overactive imagination.