The Age of Amplified Analogies: How AI Mirrors Human Thought
By Richard Sebaggala
For as long as I’ve studied economics, there’s been one central assumption that has guided much of the field: humans are rational. From the days of Adam Smith, economists believed that people make decisions logically, weighing costs and benefits to arrive at the best possible outcome. But over time, this belief has been quietly dismantled. Psychologists and behavioral economists have shown, again and again, that people rarely live up to this ideal. Our decisions are messy, influenced by emotions, habits, cognitive biases, and limited information.
Recently, a new idea has added another layer to this conversation—one that doesn’t just challenge the notion of rationality but offers a different way of thinking about how we, as humans, make sense of the world. Geoffrey Hinton, a name familiar to anyone following the evolution of artificial intelligence, argues that humans aren’t really reasoning machines at all. We are, in his words, “analogy machines.”
Hinton’s view is simple but striking. He suggests that we don’t move through the world applying strict logic to every situation. Instead, we understand things by making connections, by comparing one experience to another, by spotting patterns that help us navigate the unfamiliar. Reasoning, the kind that builds mathematical models or legal systems, is just a thin layer that sits on top of all this pattern recognition. Without it, we wouldn’t have bank accounts or the ability to solve equations. But without analogies, we wouldn’t be able to function at all. It’s no wonder, then, that whenever we face a situation for which we have no relevant experience and no stored pattern to draw on, we feel lost: a professor of economics with no knowledge of car mechanics can still feel foolish standing in a garage.
As I thought about this, I found myself reflecting on the ongoing debates I’ve had with colleagues and friends who remain skeptical of AI. Many of them argue that AI is, at best, a useful tool, but it can’t approach the depth or richness of human intelligence. They believe there’s something unique, perhaps sacred, about how humans think, something AI can never replicate.
But if Hinton is right, and I find his argument persuasive, then the way we think and the way AI works aren’t as different as we might like to believe. After all, what does a large language model like ChatGPT do? It scans through vast amounts of information, recognizes patterns, and makes connections. In other words, it makes analogies. The difference is that AI draws on far more data than any one human ever could.
It’s a humbling thought. Much of what we take pride in, our ability to write, to solve problems, to make decisions, is rooted in this analogy-making process. We reach into our memories, find similar situations, and use them to guide what we do next. But we do this with limited information. We forget things. We misremember. We carry biases from one situation to another, sometimes without realizing it.
AI doesn’t have these limitations. It doesn’t get tired or distracted. It can sift through millions of examples in seconds, pulling out patterns and insights we might miss. This doesn’t mean AI is better than humans, but it does mean that in certain ways, it can amplify what we already do, helping us see further, make better connections, and avoid some of the pitfalls that come from relying on incomplete or faulty memories. However, it's crucial to acknowledge that AI can also reflect and amplify existing biases present in the data it's trained on, making human oversight essential to ensure fairness and accuracy.
I have spent much of the past five years reflecting on how people make decisions, drawing on behavioral insights. The idea that we’re not purely rational was something behavioral economics forced me to accept. But Hinton’s insight pushes that understanding even further. It suggests that at the core of human thinking is something far more organic and intuitive, something that AI, in its own way, mirrors.
Intriguingly, recent research from Anthropic, the creators of Claude, offers a glimpse into this mirroring. Their efforts to understand the so-called "black box" of AI reveal that large language models like Claude don't always arrive at answers through purely logical steps that align with their explanations. For instance, while Claude can generate a coherent chain of reasoning, the explanation it gives sometimes appears disconnected from its actual processing. Furthermore, their findings suggest that Claude engages in a form of "planning" and even possesses a shared conceptual "language of thought" across human languages.
These discoveries, while preliminary, hint at a less purely algorithmic and more intuitively structured process within advanced AI than previously assumed. Just as human decision-making is influenced by subconscious biases and heuristics, AI might be operating with internal mechanisms that are not always transparent or strictly linear. This strengthens the notion that the core of intelligence, whether human or artificial, may involve a significant degree of organic and intuitive processing, moving beyond purely rational models.
And that brings me back to the resistance I often encounter around AI. It makes me wonder if some of this resistance isn’t about AI itself, but about what it reflects back to us regarding our reliance on pattern recognition. If AI can perform tasks we associate with intelligence, like making analogies, writing essays, and answering questions, then perhaps we must confront the idea that much of what we deemed uniquely human is, in fact, rooted in mechanical processes of pattern matching. Maybe the underlying fear isn’t that AI will surpass us, but that it reveals the extent to which our own abilities are built upon these very mechanisms.
But there’s another way to see it. Rather than feeling threatened, we might see AI as a chance to fill in the gaps where our human thinking falls short. It can’t replace the reasoning layer we rely on for complex tasks, but it can help us expand the reach of our analogies, connect ideas across disciplines, and spot patterns we would otherwise miss. In doing so, it can make us better thinkers.
To me, this collaborative potential is the true opportunity. AI isn’t destined to outsmart us; it’s poised to work alongside us, amplifying the strengths of human thinking while compensating for our inherent imperfections. If we embrace the idea that much of our cognition is rooted in analogy-making, then AI transforms from a rival into a powerful partner, one that can help us expand our thinking, question our biases, and perceive the world through novel lenses.
So perhaps it’s time to stop arguing about whether AI can think like humans. The more important question is: how can it help us think better?