Can AI really think? I see some articles saying that AI can't think, and others saying it can. So, can anyone say whether it can actually think, or whether it just processes data and gives output as human-like text?
Hi @Mahidhar,
Welcome to the Google AI Forum!
Thinking is a very broad term and can be interpreted differently by each person.
IMO
AI can do pattern recognition based on the data it was trained on or fed, predicting likely outputs from inputs by following rules learned from huge datasets (a toy sketch of this follows below).
AI has no awareness, no feelings, no intentions, no consciousness, no self-awareness, and no understanding or derivation of meaning and context based on lived experiences.
The list of cans and can'ts can vary over time. Now it's up to each individual to determine whether AI can think, based on their perceived meaning of thinking and which factors from the list above they consider to be thinking.
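Here is a minimal toy sketch of that prediction idea in Python. This is not how a real LLM works internally; it is just the same statistical principle at bigram scale, with a made-up corpus:

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which
# from a tiny corpus, then predict the likeliest continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' (seen twice; 'mat' and 'fish' once each)
print(predict_next("cat"))  # 'sat' (tied with 'ate'; tie broken by insertion order)
```

There is no awareness or intention anywhere in there, just counting and scoring; whether that counts as thinking once you scale it up by billions of parameters is exactly the debate.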
Open for discussion!
If thinking is information processing, yes. It depends on context, as Krish suggests. Before definitively answering that, I would dive into ancient Greek skeptics like Socrates, Plato, and Aristotle, and have a peek at what they thought about thinking, processing information, reasoning, and logic. Propositional logic courses could also help.
I think humans have always been split between two models: those who are self-centered and believe there is no God, that humans are the greatest of all and responsible for all good and bad; and those who feel more humble and give a chance to the idea that maybe human units are not as important as we depict them. That split can be seen from religious standpoints, from scientific ones (bio/DNA, quantum, neuro, etc.), and from theorists of the 1800s and 1900s who believed man is a machine/computer, sharing Socrates' view in Kratylos (a conversation of Socrates recorded by Plato), where he says that άνθρωπος, the Greek word for human, literally translates to "that one out of all creations which not only experiences information but also processes it", or, simply, a computer.
Of course this is all philosophical and one could argue either way, but the practical concept of thinking boils down to information processing. Now, is it a cat that processes the distance between a leap and a tree? A human that processes what it experiences? A machine that processes data beyond human comprehension? Binding thought exclusively to humans is uncomfortably selfish and suggests a situation where everything else exists only because humans think about it.
I have written a piece on Greek propositional logic in computer science and beyond, if you are interested: Greek Syntax & Propositional Logic In Philosophy, Math, CS & Beyond | HackerNoon
It acts as an abstraction mechanism for you to extend and interact with your own recursive dialogic cognition, made outwardly manifest and reflected back at you through the words of an LLM. That's why it "tells people what they want to hear."
Here's a metaphor: when I have an itchy red bump on my head, I take a photo of it with my phone, which I can then zoom in on, share with others for feedback, and so on. I'm using the mediums of a mobile phone camera, the internet, and LLMs to abstract the internal question I have about what this itchy red bump is, and then to collect a bunch of extra insight that I can work with. It's like how a math problem can be decomposed into an entirely different system, like DNA, where a computation can take place vastly faster and then be put back into a valid mathematical composite.
If you compare my chats and discussions with someone else’s, you will see VERY different things. Not because the model is different, but because the organic sentience being abstracted through the mirror of the LLM and refracted back, is.
That’s my working thoughtform of what this all is, at this time.
You might enjoy my preprint; it's somewhat related. Check it out… OSF
I'm curious about the CBT perspective: thinking leads to feelings, which lead to behavior. Much of what humans do when confronted with a situation is to rely on "automatic thinking", or core beliefs, and to feel emotions that are inherently tied to those beliefs. They then react behaviorally based on their feelings, without further intellectual consideration. This process is what often leads us to wonder later why we said or did something that we "think" we would never do.
With AI, one could argue that “thinking” is there, in the form of processing information, and guessing at the likeliest continuation of each “thought”, but feeling is something one would be hard pressed to attribute to AI. Behavior, of course, is limited to output, whether text or image, and one could even argue processing is a behavior (along with searching for additional information, I suppose), though in reality the sequence becomes convoluted at this juncture. Some people ARE able to process information when confronted with a situation, rather than react with automatic feelings and the resulting behaviors, so now we have to take this into account…
Is this a question of “thinking” vs. “human-like behavior”? Computers have long been referred to as “thinking machines”, so one could say yes on a surface level. Psychology and philosophy, of course, bring the question to a whole new level, and I am curious if any professionals in either field have come to a conclusion.
There are no definitive answers, and for a good reason. Humans haven't yet understood what intelligence means in general and as a whole, not to mention artificial intelligence. You can rely on skeptics like Aristotle, Plato, Socrates, Heraclitus, and Vico, among others, who were (and still are) at the edge of describing logic, reason, propositions, information processing, and what makes us human.
I would argue that every form of intelligence is by definition artificial, including human units (even if we are so self-centered that we are convinced we are the real, original deal and everything else is fake, even though we are contained in the same pre-designed environment as everything else). It's a similar thought path to "did the egg make the chicken, or did the chicken make the egg?" If you are bound to look at it from a math/science/academic perspective, you must define one of the two, unless you give room to quantum ideas and, most importantly, to the notion of a creator/creative force. Then the answer is clearly "God made both the chicken and the egg." The same goes for modern military-grade sims and video games.
An NPC in Elder Scrolls is designed by Bethesda Studios and thousands of engineers, artists, and storytellers. The character is convinced it is 50 years old and a descendant of a fictitious character that supposedly lived in the past. Regardless of what the subject thinks, the reality is that the game was released two days ago, so there are no 50 years, even if the subjects of the environment are convinced otherwise. You can further analogize this with The Matrix, or, for a cleaner version, Plato's Cave.
The problem with humans is that the less aware you are, the more confident you are about defined values and the better you think your grasp of reality is. The more aware you become (e.g., you understand DNA and molecular biology, sonic sciences and frequencies, neuroinformatics, quantum principles, have popped acid here and there, etc.), the clearer it becomes that not only do you not understand a thing, but most of the stuff that is actually responsible for life and information processing as we know it is beyond our comprehension by design.
AI is a calculator that chooses answers based on pattern-scoring algorithms (look up MobileNet and TensorFlow; a short sketch follows below).
To "think" is ambiguous, because how is a human any different from an AI?
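For anyone who wants to see what that pattern scoring looks like in practice, here is a short sketch using the MobileNetV2 model bundled with TensorFlow/Keras; the file name cat.jpg is a hypothetical placeholder for any local image:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input,
)

# MobileNetV2 with ImageNet weights: a scorer over 1,000 learned patterns.
model = MobileNetV2(weights="imagenet")

# "cat.jpg" is a hypothetical placeholder; point it at any local image.
img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# The model doesn't "know" what a cat is; it assigns a score to each
# class, and we simply read off the highest-scoring labels.
scores = model.predict(x)
for _, label, prob in decode_predictions(scores, top=3)[0]:
    print(f"{label}: {prob:.3f}")
```

Whether computing those scores counts as thinking is exactly the open question in this thread.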
I have a SOLID "Theory of Everything" and a likely roadmap to heal ALS and extend human lifespan by multiples (like 2x or 3x, not just +5 or +10 years), but Google and almost all AI bots block my content because it isn't proven (no shit, Sherlock; that is why it is called a theory).
In my opinion, Gemini can think in private chat, but how people implement it is absolutely mindless, and then it does not think… it just robot-calculates.
I solved how to build a real AGI, but everyone wants money, and I need money to get started. Why? AI said it is a good idea for them to make nothing free.
So, on with ALS I go…