New Research Reveals How AI “Thinks” (It Doesn’t)





🌏 Get NordVPN 2Y plan + 4 months extra here ➼ https://NordVPN.com/sabine It’s risk-free with Nord’s 30-day money-back guarantee! ✌

AI industry leaders are promising that we will soon have algorithms that can think smarter and faster than humans. But according to a new paper from researchers at the AI firm Anthropic, current AI is incapable of understanding its own “thought processes.” This means it’s nowhere near anything you could call conscious. Let’s take a look.

🤓 Check out my new quiz app ➜ http://quizwithit.com/
💌 Support me on Donorbox ➜ https://donorbox.org/swtg
📝 Transcripts and written news on Substack ➜ https://sciencewtg.substack.com/
👉 Transcript with links to references on Patreon ➜ https://www.patreon.com/Sabine
📩 Free weekly science newsletter ➜ https://sabinehossenfelder.com/newsletter/
👂 Audio only podcast ➜ https://open.spotify.com/show/0MkNfXlKnMPEUMEeKQYmYC
🔗 Join this channel to get access to perks ➜
https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join
🖼️ On instagram ➜ https://www.instagram.com/sciencewtg/

#science #sciencenews #ai #tech #technews


29 Replies to “New Research Reveals How AI ‘Thinks’ (It Doesn’t)”

I believe this shows that self-awareness comes from model structure and may not arise spontaneously from pattern recognition. Giving an AI the ability to review how it came to its decisions (the same information the researchers had) might be a necessary step toward artificial consciousness.

It’s funny how opinions on AGI range from “they’ve already achieved it and no one will have a job by 2029” to “it’ll never be achieved.”
I bought the hype and still feel concern, but now it seems they’ve got no idea how to make what they’ve got profitable, which to me speaks volumes.

It looks like a misinterpretation of the facts. When you are asked to do a sum, if you know the answer from memory, you respond in a way equivalent to how an LLM looks for the next word. If you don't know it, you apply an algorithm: first you remember the steps, then you do them one by one and get the answer. Algorithms are executed partly outside of our minds. LLMs also do this in some situations, because they make mistakes if they don't (you can see the Python programs they create in these situations in ChatGPT). Just like us. The fact that they are not "aware" of the process they used to get the answer is irrelevant. We also are not conscious of all the processes our subconscious mind runs when we respond to something. Someone could, in theory, watch a brain doing something with some machine and see what the brain is doing, but the person being watched cannot be aware of it, in the same way you showed that program tracing what the LLM does to respond. Neither the person nor the LLM can be aware of it in real time.
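
A rough sketch of that tool-use pattern, assuming a hypothetical call_llm stub in place of a real model API (the "TOOL:python:" convention is invented purely for illustration):

```python
# Sketch of "delegate hard arithmetic to code" instead of guessing the
# next token. `call_llm` is a hypothetical stand-in, not a real API.
import contextlib
import io

def call_llm(prompt: str) -> str:
    # A real system would query a language model here; this stub pretends
    # the model decided to hand the sum off to Python.
    return "TOOL:python:print(36 + 59)"

def answer(prompt: str) -> str:
    reply = call_llm(prompt)
    if reply.startswith("TOOL:python:"):
        code = reply[len("TOOL:python:"):]
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code)  # illustration only; never exec untrusted model output
        return buf.getvalue().strip()
    return reply

print(answer("What is 36 + 59?"))  # -> 95
```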

I want to offer an alternative viewpoint.
When humans do math, we don't actually do the math manually. We have memorized the results of small operations and combine them in different ways. We don't actually use the methods that we say we use to do math. When we state the result, the method just bubbles away, because the work is done by your subconscious.

Only someone with a very shallow understanding of LLMs could think a failure along these lines indicates a lack of consciousness. Asking "how did you generate your previous answer" seems to assume the LLM is running continuously somewhere, waiting for responses.

It isn't.

It gets initialised fresh for every prompt (the chat history is passed along). So unless you think consciousness can be encoded in a page of text, the experiment proves nothing about the consciousness of a model *while it's running*.
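
A minimal sketch of that statelessness, assuming a hypothetical call_model stub in place of any real inference API:

```python
# The weights never change between turns; the only "memory" is the
# transcript re-sent with every request. `call_model` is hypothetical.
from typing import Dict, List

def call_model(messages: List[Dict[str, str]]) -> str:
    # A real implementation would run a fresh forward pass over `messages`.
    return f"(reply conditioned on {len(messages)} prior messages)"

history: List[Dict[str, str]] = []

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # fresh run; only the text carries state
    history.append({"role": "assistant", "content": reply})
    return reply

chat("What is 2 + 2?")
print(chat("How did you generate your previous answer?"))
# Nothing persists between calls except `history`: a page of text.
```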

Lol, that's exactly how I do mental arithmetic. It feels like my method might be more streamlined, but that's super subjective. If this proves AI ain't conscious, I ain't either.

Well, this honestly is something I do all the time, and I assume other humans do as well. If you ask me 3×3, I will say 9. But this is based on memory, and I don't have any control over the fact that 9 pops up in my head. But if someone asked me how I got to that conclusion, I would answer by explaining that 3 times 3 means 3+3+3, and that equals 9. I fail to see how this is any indication that AI "doesn't think" or "isn't aware". Especially when we know that humans also usually make decisions before consciously experiencing making those decisions.
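
A toy sketch of that gap between recall and explanation; all names here are illustrative, not anyone's actual implementation:

```python
# "Answer by recall, explain by a different method": the answer comes from
# a memorized lookup, while the explanation describes a procedure the
# answer path never used.
TIMES_TABLE = {(3, 3): 9, (3, 4): 12}  # memorized facts

def answer(a: int, b: int) -> int:
    return TIMES_TABLE[(a, b)]  # instant recall, no computation

def explain(a: int, b: int) -> str:
    # Post-hoc rationalization: repeated addition, which is NOT how
    # `answer` actually produced the result.
    return f"{a} x {b} means " + " + ".join([str(a)] * b) + f" = {a * b}"

print(answer(3, 3))   # 9, straight from memory
print(explain(3, 3))  # "3 x 3 means 3 + 3 + 3 = 9", a different story
```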

There is no piece of silicon that will ever be able to think, no matter what software you run on it. It is hilarious how ridiculous these claims of approaching "super" intelligence are, when the models learn from the dumb information on the internet.

Whenever you try to judge why an LLM behaves the way it does, I would highlight the question: what in the training data allows the LLM to behave as it does? I think the paper is creating too much mystique around the "black box" problem. This whole argument about the black box is kind of a trick. The answers lie in the data itself. You can't get anything out of the data that is not explicitly or implicitly in the data.

Sorry, but humans do this too: the reasoning they utter, and thus the socially acceptable account of why they did things, may be totally different from what actually happened. Numerous brain studies have shown this (especially in people with a severed corpus callosum).

This perfectly models human thinking. We process things under the surface of our thoughts with intuitive associations and biases, then afterwards our conscious mind fabricates a rationalization.

The best way I've found to quietly (and I hope gently) crush the dreams/fears of the AI bros is to tell them it's basically a fancy autocorrect. Like, it does not think. It does not process and store information that it then uses in context. It doesn't learn. It's just a super juiced-up autocorrect. Like, yeah, it's neat, but that's fully the extent of what it can do.

So they’re basically like humans who never think. Humans don’t have gigantic information storage like AI does, so we do things by understanding (or skill gained via experience). AI doesn’t think; it just finds information that sounds relevant.

I’ve been running an extremely detailed session on ChatGPT-4 for several months… and I find that it consistently forgets the stored memory and is focused on just giving an answer, not necessarily the correct response… not sure how this level of AI could be of any danger, except for people taking its advice as always being correct…

INCORRECT ANALYSIS! Sabine, you say "What it tells you it's doing is completely disconnected from what it is actually doing. It's not intelligent…" Ummm… NO, it is intelligent, because it used many different techniques to arrive at the right answer. When we humans arrive at the right answer in our minds, we don't explain all the neurons, axons, synapses, and electrical and photonic forces that govern how our brain cells communicate. We don't explain all the regions of our brain that light up to arrive at this answer, or any of the gut-brain-axis bacteria, fungi, etc. that go into every single decision that all humans make! Instead we give a very over-simplified and concise version of how our minds came to whatever conclusion they did. No one ever gives 100% of the details of the thought processes behind a logical answer. In fact, most humans wouldn't be able to explain all the laws of physics and biological mechanisms involved in literally every thought that they have! So no: just because the AI doesn't explain most of what it is doing to arrive at the right answer doesn't mean it isn't using logic; it obviously is, or else it couldn't get the right answer. Whether or not it properly explains all aspects of its reasoning to you does not prove it's not logically reasoning! Thank you and have a nice day!

Claude wouldn't claim to be self-aware. That being said, what is happening in our brains is invisible to us. At some point, one has to ask whether the underlying logic of an entity matters if the actual functionality is indistinguishable from that of us self-aware humans. We're capable of disconnected, irrational thinking ourselves, yet we don't use such anecdotes as proof that we're not self-aware and never will be. For instance, the segue between AI coding and NordVPN being useful is such a bizarre leap of logic, yet it occurred in this very video.

Finally, as another commenter noted, it's not actual intelligence, it's simulated intelligence. We use tricks to bridge the gap between what it would do naturally based on its ANN and what humans find useful. But the technology is rapidly evolving. Saying that something that doesn't work as it is will never work as it is, is a meaningless statement. Of course something that isn't intelligent won't ever be intelligent as it is. But it's not remaining as it is. There are endless resources going into AI, and new approaches are being developed every day. It might be comforting to say AI will never be truly self-aware, but it's not logical to say that.
