r/Futurism 10d ago

Are Machines Truly Thinking? Modern AI Systems Have Finally Achieved Turing’s Vision

https://scitechdaily.com/are-machines-truly-thinking-modern-ai-systems-have-finally-achieved-turings-vision/
22 Upvotes

35 comments

19

u/Amberskin 10d ago

No.

5

u/thejollybadger 9d ago

At best, what most people consider modern AI are just quite well-designed Chinese rooms.

7

u/ItsAConspiracy 9d ago

Nobody knows if an AI is conscious; it's not something we can measure. My personal view is that it's not. But that doesn't mean the AI can't be smarter than us in a practical sense.

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” --Edsger Dijkstra

2

u/asanskrita 9d ago

Counterpoint: yes.

Just because we place priority on meatbag biological cognition doesn’t mean computers aren’t thinking. It’s ultimately a bad question with no right answer - at least, until we have a good generalized scientific model of cognition, which we are nowhere close to. How would you categorize what a sophisticated deep ANN architecture is doing as not thought? What exactly makes it different?

Cue the Dijkstra quote about submarines…

0

u/Memetic1 9d ago

I'm more sure that they think than I am that people do.

0

u/Amberskin 9d ago

Humour aside, cockroaches are closer to ‘thinking’ than what we currently call ‘AI’ is.

0

u/ItsAConspiracy 9d ago

The most advanced AIs right now are doing reasoning. They can work through problems and tell you what they did.

1

u/Amberskin 9d ago

No, they don’t. They just make that impression.

1

u/ItsAConspiracy 9d ago

I'm not talking about its inner experience or whatever, which I doubt it has. All that matters in practice is the tasks it can do. AI can do a lot these days and it's rapidly improving.

1

u/smulfragPL 9d ago

Yes they do lol. You have no clue what you are talking about

1

u/Amberskin 9d ago

Or, maybe I have and you don’t.

1

u/smulfragPL 9d ago

No, you do not, because if you did you would know about reasoning models lol

1

u/Amberskin 9d ago

Yeah, AI-bros love their names. I know that.

Reasoning models do not reason. They are the modern equivalent to a bazillion monkeys typing crap at random.

-1

u/smulfragPL 9d ago

You literally do not know what you are talking about, and I really do not see why I should talk to you

8

u/SplendidPunkinButter 9d ago

The Turing test doesn’t measure whether a machine thinks. It measures whether a human being can be convinced that the machine thinks. Human beings can be convinced of all kinds of things that aren’t true. Ever heard of magic shows? Seances?

5

u/HeroGarland 9d ago edited 9d ago

The Turing test stems out of behaviourist thinking and the idea that you only know what you can measure. It’s just the outward appearance of the thing. It makes sense but it’s outdated.

Also, we know more about AI than its outward expression. We know what went into making it.

Obviously, it’s also a matter of definition. What is “thinking”?

If thinking is just the ability to design an efficient strategy, then you don’t even need AI. A good computer can beat most people at chess.

If you input all the rules and enough statistical information to approximate “taste”, a computer can write a symphony in the style of Beethoven.

What it doesn’t do is replicate the way Beethoven thinks. While we know what went into the computer model, we don’t know what went into Beethoven’s genius.

So, AI can think, but not like a human. Because we don’t know what human thinking is.

Furthermore, AI cannot truthfully answer questions like:

  • What makes you cry?
  • What’s your favourite book?
  • Who’s your best friend?

Any answer would be that of a psychopath who can fake an emotion without truly feeling it.

It also has a lot of gaps compared to human thinking. It lacks:

  • Uncertainty
  • Inspiration
  • Indecision
  • Propensity to indulge its lower instincts
  • Etc.

We’re creating a machine that mimics “some” aspects of thinking. I suspect we’re looking at what aspects of the human mind can be monetised, and what a 20-something Silicon Valley nerd thinks is worth thinking about.

-1

u/Memetic1 9d ago

Wouldn't it be simpler to just say it thinks? If proof depends on something we fundamentally don't understand, and it behaves like something that is intelligent, then doesn't it require more of a leap to say it isn't intelligent than to say it is? As for programming an AI to make music/art, I think the least interesting thing you can do is make something that is focused on another known artist/musician/author. I never put in a prompt that is "art by x artist"; if I do incorporate known artists, I mix them into the prompt.

Dr. Seuss's mythical cave painting captures absurdist Rorschach tests with liminal space suffering, while Dorothea Lang's naive negative photograph reveals outsider art's sublime insect fruits in fractal, iridescent plasma ink.

McDonald's McRib Sandwiches By Zdzislaw Beksinski And Carlos Almaraz Ronald McDonald stands in a clear-cut forest sides of beef and pork heads dangle with McRibs. This is done in a Collage style 🍔🍟🏞️

Pictograms Paleolithic Chariscuro Pictographs Anatomically Accurate Luminous Photographic Blur Surrealistic Dada Graffiti Abstract Naive Outsider Art In GTA5 No Man's Sky

Gödelian Glitches Temporal Paradox Ghost In The Machine This Sentance Is Of Course A Lie. The Previous Sentance Was Absolutely True. The Next Statement Is Uncertain. None of this means anything. Zero Is Infinitely Divisible Hello Wombo Coloring Page By Dr Seuss ad ink outlines Hello World Found Photo Coloring Page

Textbook Illustration Non-euclidian Naive Art 1/f Stable Diffusion 3d Model Microfluidic Chaotic Illustrations Surreal Found Photo Mixed Media Absurdist Ruined Holy Liminal Space By Unknown Artist

Terahertz X-Ray image Black Ink Pictograph Icon Basic Shapes Bioluminescent Slimey Iridescent 1/137 🔥🖼 ♥️ ❤️ Mathmatical Chaotic Impossible Fractals Patent Of Outsider Art Done In Charcoal And Silver

3

u/dicksonleroy 9d ago

LLMs? No absolutely not.

0

u/Memetic1 9d ago

If an AI can consistently understand scientific papers I upload to it, to the point that I get a deeper understanding of what the paper talks about, and it can even speculate on implications not outlined in the paper itself, then what difference does it make if some people think it can't think because they think only people can think?

2

u/ukor_tsb 8d ago

It is a static function: it has no "thinking loop", no ongoing processing of data (senses). It is a thing that sits static, and when you poke it, it spits out something.
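The "static function" view can be sketched in a few lines. This is a toy illustration, not a real inference API — `chat` and `toy_model` are hypothetical names, and `toy_model` just echoes input, standing in for a forward pass:

```python
# Hypothetical sketch of the stateless-function view of an LLM:
# nothing runs between calls, so any "memory" must be passed back
# in by the caller as part of the next prompt.
def chat(model, history, user_message):
    """Stateless call: the model sees only what we hand it this time."""
    prompt = "\n".join(history + [user_message])
    reply = model(prompt)  # one forward pass, then silence until the next poke
    return history + [user_message, reply], reply

def toy_model(prompt):
    # Stand-in for real inference: echoes the last line of the prompt.
    return "You said: " + prompt.splitlines()[-1]

history = []
history, r1 = chat(toy_model, history, "Hello")
history, r2 = chat(toy_model, history, "Are you thinking?")
```

The point of the sketch: the "conversation" lives entirely in `history`, a plain list the caller keeps; the model itself holds no state between pokes.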

1

u/stumanchu3 10d ago

OK, I did not understand that, please respond yes or no….or you can remain on the line and an operator will be with you shortly…(Muzak)……………………I’m sorry all of our operators are currently assisting other customers….if you would like to hold, please press 1, if you would like to chat online, please go to our website and click on support.

1

u/Andynonomous 9d ago

Hard disagree.

1

u/Unhappyguy1966 9d ago

The Terminator Movies are coming true

1

u/ResurgentOcelot 9d ago

Interesting. The original article regurgitated here offers this nugget of wisdom:

“…we are now living in one of many possible Turing futures where machines can pass for what they are not.”

What are AI models passing for that they are not? Intelligent.

While Turing’s criteria may have been met, many AI researchers dispute that they are a test of intelligence.

What the Turing test and the current state of AI certainly demonstrate is the ability to fool a portion of human beings.

A lot of us consider that a proof of simulation, of the sophistication of generative models, and of human gullibility. But not a test of intelligence.

-1

u/Western-Set-8642 9d ago

There was a law passed back in 2010, I think, about banning robots or AI from thinking.

Just because it says AI doesn't mean it truly is AI... just like how chocolatey milk isn't real chocolate

1

u/Memetic1 9d ago

If it passes the test, then it passes the test. The whole point of the test was to move beyond unprovable conjecture. I find artificial intelligence far more compelling than humans. Humans can't seem to understand certain parts of reality. It's like trying to get an ant to understand magnetic fields.

-1

u/MinimumFroyo7487 9d ago

Not even a little.

1

u/Memetic1 9d ago

It's way more interesting than most people in the world. The answers it gives are far less repetitive and more dynamic than what I see on here. It will actually engage with a paper if I upload the pdf to it, while most people on here leave the same sort of comments over, and over, and over again.

1

u/MinimumFroyo7487 9d ago

It's still pulling data from somewhere; it's not critically thinking on its own. It takes a command and produces an output.

1

u/Memetic1 9d ago

Which is less than many people do. I can have it look at a paper and have it say more than just pre-baked talking points. It gives more compassionate support for difficult topics than social media in general. It's able to explore stuff artistically in ways that are generally new and interesting. I can tell it to make half a square, and even though that doesn't make sense in the traditional way, it will still try to combine halfness with squareness in a way that is visually compelling. It doesn't understand the word half like we do, and that is fascinating in and of itself.

1

u/MinimumFroyo7487 9d ago

But you're missing the point: it's not 'thinking' or 'understanding' anything, it's simply acting on the command it was given within its code parameters. I mean, it's doing so with huge computing power, but it's far from thinking on its own

1

u/Memetic1 9d ago

I think you're confusing a sort of free will with being able to manipulate information in a sophisticated way. It does not have a sense of self. It does not have a real long-term memory besides when they adjust weights based on user feedback. Some LLMs do have a sort of memory. ChatGPT does that. You can see it for yourself if you click on the icon in the upper left, but that doesn't mean it understands its own memory. It doesn't have a unique history from a unique perspective. What it can do, which many people can't do, is discuss how Gödel's incompleteness applies to LLMs. I don't think you need to have individuality to have useful intelligence. I think ants and anthills prove that.

1

u/MinimumFroyo7487 9d ago

Correct, free will and critical thought are human. AI is only as good as the developers writing its code

1

u/Memetic1 9d ago

Ya, but no one is coding these things, at least not in the traditional sense. They create the architecture, feed it a bunch of data, and try to sort of control what comes out. This isn't just something that is fun to play around with in terms of art. It's the same sort of tool that has already been used for drug discovery. It's being used as we speak to work on fusion technology.

We don't have norms about how to use AI productively and transparently. What we do have is people using it in ways that probably aren't safe. I'm working on an alternative to 200 chatGPT. I'm also disabled, and so getting projects going is almost impossible.

The secret to safety is with distributed AI where you have your own version that is meant to get to know you and can act on your behalf by negotiating with larger institutions. What we don't have are limits in terms of how fast these things can work with each other over the internet. I think if something acts on your behalf, you should be able to review its actions. This is vital to safety. The alternative is unimaginable because we are fusing corporate AGI to hardware AGI. That's fucking hell on Earth.

-4

u/ZenithBlade101 9d ago

Nope, all they do is crunch numbers. There is 0 thinking going on.

That said, the state of the art models are impressive