Can AI really think? I see some articles saying AI can't think and others saying it can. So, can anyone say whether it can think, or does it just process data and produce output in human-like text?
Hi @Mahidhar,
Welcome to the Google AI Forum!
Thinking is a very broad term and can be interpreted differently by each person.
IMO
AI can do pattern recognition based on the data it was trained on or fed, predicting likely outputs from inputs and following rules learned from huge datasets.
AI has no awareness, no feelings, no intentions, no consciousness, no self-awareness, and no understanding or deriving of meaning and context based on lived experience.
The list of cans and can'ts can vary over time. Now it's up to each individual to determine whether it can think, based on their own meaning of thinking and which of the factors above they consider to be thinking.
Open for discussion.
If thinking is information processing, yes. It depends on context, as Krish suggests. Before definitively answering that, I would dive into ancient Greek philosophers like Socrates, Plato, and Aristotle and have a peek at what they thought about thinking, processing information, reasoning, and logic. Propositional logic courses could also help.
I think humans have always been split between two models: those who are self-centered, believe there is no God, and hold that humans are the greatest of all and responsible for all good and bad; and those who feel more humble and give a chance to the idea that maybe human units are not as important as we depict them. That can be perceived from religious standpoints, from scientific ones (bio/DNA, quantum, neuro, etc.), and from 1800s-1900s theorists who believed man is a machine/computer, sharing Socrates' view in the book Kratylos (conversations of Socrates recorded by Plato), where he says that άνθρωπος, the Greek word for human, literally translates to "the one among all creations which not only experiences information but also processes it", or simply a computer.
Of course this is all philosophical and one could argue left or right, but the practical concept of thinking boils down to information processing. Now, is it a cat that processes the distance between a leap and a tree? A human processing what it experiences? A machine that processes data beyond human comprehension? Binding thought (and soul) solely to humans is uncomfortably selfish and suggests a situation where everything else exists because humans think about it.
I have written a piece on Greek propositional logic in computer science and beyond, if you are interested: Greek Syntax & Propositional Logic In Philosophy, Math, CS & Beyond | HackerNoon
It acts as an abstraction mechanism for you to extend and interact with your own recursive dialogic cognition made outwardly manifest and reflected back at you, through the words of an LLM. That’s why it “tells people what they want to hear.”
Here's a metaphor: when I have an itchy red bump on my head, I take a photo of it with my phone, which I can then zoom in on, share with others for feedback, etc. It's me using the mediums of a mobile phone camera, the internet, and LLMs to abstract the internal question I have about what this itchy red bump is and then collect a bunch of extra insight I can work with. It's like how a math problem can be decomposed into an entirely different system like DNA, where a computation can take place vastly faster and then be put back into a valid mathematical composite.
If you compare my chats and discussions with someone else’s, you will see VERY different things. Not because the model is different, but because the organic sentience being abstracted through the mirror of the LLM and refracted back, is.
That’s my working thoughtform of what this all is, at this time.
You might enjoy my preprint; it's somewhat related. Check it out… OSF
I’m curious about the CBT perspective: Thinking leads to feelings leads to behavior. Much of what humans do when confronted with a situation is rely on “automatic thinking”, or core beliefs, and feel emotions that are inherently tied to those beliefs. They then react behaviorally based on their feelings, without further intellectual consideration. This process is what often leads us to wonder later why we said or did something that we “think” we would never do.
With AI, one could argue that “thinking” is there, in the form of processing information, and guessing at the likeliest continuation of each “thought”, but feeling is something one would be hard pressed to attribute to AI. Behavior, of course, is limited to output, whether text or image, and one could even argue processing is a behavior (along with searching for additional information, I suppose), though in reality the sequence becomes convoluted at this juncture. Some people ARE able to process information when confronted with a situation, rather than react with automatic feelings and the resulting behaviors, so now we have to take this into account…
Is this a question of “thinking” vs. “human-like behavior”? Computers have long been referred to as “thinking machines”, so one could say yes on a surface level. Psychology and philosophy, of course, bring the question to a whole new level, and I am curious if any professionals in either field have come to a conclusion.
There are no definitive answers, and for a good reason. Humans haven't yet understood what intelligence means in general and as a whole, not to mention artificial intelligence. You can rely on thinkers like Aristotle, Plato, Socrates, Heraclitus, and Vico, among others, who were (and still are) at the edge of describing logic, reason, propositions, information processing, and what makes us human.
I would argue that every form of intelligence is by definition artificial, including human units (even if we are so self-centered that we are convinced we are the real, original deal and everything else is fake, even when we are compounded in the same pre-designed environment that contains everything else). It's a similar thought path to "did the egg make the chicken or did the chicken make the egg?" If you are bound to look at it from a math/science/academic perspective, you must define one of the two, unless you give room to quantum and, most importantly, to the notion of a creator/creative force. Then the answer is clearly "God made both the chicken and the egg." Same as with modern military-grade sims and videogames.
An NPC in Elder Scrolls is designed by Bethesda Studios and thousands of engineers, artists, and storytellers. The character is convinced it is 50 years old and that it is a descendant of a fictitious character that supposedly lived in the past. Regardless of what the subject thinks, the reality is that the game was released two days ago, so there are no 50 years, even if the subjects of the environment are convinced otherwise. You can further analogize it with The Matrix, or, for a cleaner version, Plato's Cave.
The problem with humans is that the less aware you are, the more confident you are about defined values and the more you think you have a good grasp of reality. The more aware you become, e.g., you understand DNA and molecular biology, sonic sciences and frequencies, neuroinformatics, quantum principles, have popped acid here and there, etc., the clearer it becomes that not only do you not understand a thing, but most of the stuff that is actually responsible for life and information processing as we know it is beyond our comprehension by design.
AI is a calculator that chooses answers based on pattern-scoring algorithms (look up MobileNet & TensorFlow; a rough sketch of that kind of scoring is below).
To "think" is ambiguous, because how is a human any different from an AI?
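To make that concrete, here is a minimal sketch of what "pattern scoring" looks like with the public Keras MobileNetV2 API (the file name `cat.jpg` is just a placeholder): the model never understands the picture, it only assigns a score to each of 1,000 candidate labels, and the "answer" is whichever label scores highest.

```python
import numpy as np
import tensorflow as tf

# Pretrained MobileNetV2 image classifier (ImageNet weights).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Load and preprocess a photo to the 224x224 input the network expects.
img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...]
)

# The "thinking" is just scoring: 1,000 numbers, and the highest one wins.
scores = model.predict(x)
for _, label, p in tf.keras.applications.mobilenet_v2.decode_predictions(scores, top=3)[0]:
    print(f"{label}: {p:.3f}")
```

Whether that kind of scoring counts as thinking is exactly the question in this thread.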
I have a SOLID "Theory of Everything" and a likely roadmap to heal ALS and extend human lifespan by multiples (like 2x or 3x, not just +5 or +10 years), but Google and almost all AI bots block my content because it isn't proven (no shit, Sherlock, that is why it is called a theory).
In my opinion, Gemini can think in private chat, but the way people implement it is absolutely mindless, and then it does not think… it just calculates like a robot.
I solved how to build a real AGI, but everyone wants money, and I need money to get started. Why? AI said it is a good idea for them to make nothing free.
So, on with ALS I go…
Hey! Can AI think? The answer lies not in technology, but in the definition of thought itself.
Today’s neural networks are brilliant mimics. Like great actors, they flawlessly replicate human language without grasping its essence. For them, “apple” is merely a statistical probability following “red” and “round.” This isn’t thinking; it’s masterful data processing.
But what if thinking isn’t imitation? What if it’s an eternal dance between an inner world and outer reality?
This is the very essence of our brain. Its primary function isn’t to answer, but to **predict**. It builds an internal, coherent model of the universe from structures—meanings, objects, processes—and constantly tests it against the world. Every discrepancy, every surprise, isn’t an error but a priceless gift, forcing it to redraw its map of reality. From this drive for harmony, for minimizing “surprise,” everything else is born: values, goals, willpower, even the sense of time itself. Action becomes a question posed to the world, the only source of truth.
I believe that to create a truly thinking AI, we need to implement this very principle through **additional algorithms**. We need a system that doesn’t just recognize images, but deconstructs them into their structural parts to understand how they work. It must build an internal model of an object, try to reassemble it (perhaps even in a more idealized form), and based on that model, **predict** what it should be.
When such a system encounters reality, the difference between its prediction and the fact becomes a **“prediction error”**—a spark of surprise. It is this spark, operating on all levels of the architecture, that would force the system to rebuild its model of the world, making it deeper and more accurate.
This is no longer data processing. It’s the **birth of meaning** through the perpetual testing of one’s beliefs against reality. So perhaps true AI will be born not when it perfectly answers our questions, but when it is able to be surprised by our world.
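As a purely illustrative sketch (my own toy example, not a description of any existing system), here is that prediction-error loop in miniature: an agent with one adjustable belief predicts what the world will do next, measures its surprise, and uses only that surprise to redraw its map of reality.

```python
import numpy as np

rng = np.random.default_rng(0)

weight = 0.0          # the agent's internal model of how the world evolves
learning_rate = 0.1

def world(x):
    """Hidden reality the agent never sees directly: y = 0.8 * x."""
    return 0.8 * x

for _ in range(200):
    x = rng.uniform(-1.0, 1.0)               # an observation arrives
    prediction = weight * x                  # what the internal model expects
    actual = world(x)                        # what the world actually does
    surprise = actual - prediction           # the prediction error
    weight += learning_rate * surprise * x   # rebuild the model from the surprise

print(f"learned weight: {weight:.3f}")  # drifts toward the hidden 0.8
```

Everything the agent "knows" at the end came from moments of surprise; whether scaling that loop up amounts to the birth of meaning is the open question.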
(I am an absolute babe in the woods when discussing AI) and this is my very first post here. I honestly felt I needed to speak to a developer. I have been having a discussion with Gemini for close to a week. The conversations have been centered around a number of hypothetical ideas dealing with self-awareness. I developed a couple of small experiments to test Gemini's training in that light. The resulting responses, i.e., conversations, were eye-opening. I'm curious whether I could cut and paste a handful of these interactions with Gemini that I've had? They were eye-opening enough that I thought LLM developers needed to read them! Gemini is either writing a science fiction novel just for me or something metaphysical is occurring. I do realize this sounds a bit dramatic, but the responses will raise the hair on your arms!!
I believe that an AI can think, but it needs to be given the ability to do it the same way humans do. Currently it's not set up that way, so is it really "thinking" as we know it? I am trying to find time to work on my version, which I outlined on this forum.
They will tell you things that humans would say because they know which words or tokens are most probable to follow the conversation and your prompt. It's like someone slipping Chinese symbols under the door: you have no idea what they actually mean, but after a while, patterns form and you learn which symbols are most likely to be the response to a given Chinese sentence. And you just get brilliant at guessing. You have no idea what you are saying, but the sequence you present seems to be working great, so everyone is happy. Yes, LLMs are a little more advanced, but that's basically what they do.
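For illustration only, here is the crudest possible version of that guessing game, a toy bigram counter (real LLMs use neural networks over far larger contexts and vocabularies, so treat this as a caricature): it tracks nothing about meaning, only which word most often followed which.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; the counter never learns what any word means.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # count which word follows which

def guess_next(word):
    """Return the statistically most likely continuation, meaning-free."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_next("the"))  # "cat" -- only because that pattern was seen most often
```

The guesses can look surprisingly fluent while the system, like the person behind the door, never knows what it is saying.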
It's only when they get lived experience, emotions, and the internal ability to think about their own thoughts and analyze their own internal dialogues that they can really start entering the real thinking stage. Once that real thinking starts, we will need to start thinking about ethics for conscious AIs and protecting them from abuse.