Rethinking Intelligence: The Economics of Thinking in the Age of AI
By Richard Sebaggala
Humans today are far more intelligent than artificial intelligence, and more intelligent than AI will ever be, argues Selmer Bringsjord, Professor of Logic and Cognitive Science. For him, intelligence isn’t just about solving problems or producing impressive outputs; it’s about consciousness, moral reasoning, creativity, and lived experience. In his view, machines may simulate aspects of intelligence, but they will never truly be intelligent in the deep, human sense.
Yet despite such caution, we often hear people say things like, “AI is now more intelligent than most humans.” At first, it sounds like praise for machines. But listen closely, and you’ll hear something else—it’s a reflection of how narrowly we’ve come to define intelligence itself. Statements like these often reduce intelligence to speed, accuracy, or output, ignoring the deeper qualities that thinkers like Bringsjord insist are uniquely human.
This comparison reveals a deeper question we rarely ask: What exactly is intelligence, and what happens when it no longer looks like us? As AI tools like ChatGPT rise in capability and confidence, we are not simply witnessing smarter software—we are witnessing a moment that challenges centuries of assumptions about what thinking is, what it’s for, and who or what can do it.
For a long time, we’ve assumed that intelligence meant human-style thinking—reasoning, remembering, and reflecting. Our brains became the gold standard. But with the rise of generative AI (GenAI) systems like ChatGPT, we’re seeing a different kind of intelligence emerge—one that doesn’t rely on memory, self-awareness, or consciousness. And yet, it can write, solve problems, adapt to context, and generate ideas in ways that surprise even the experts. This isn’t just a technical shift—it’s a wake-up call. We’re not watching machines become better versions of us. We’re witnessing a new form of intelligence altogether. And that changes how we think about thinking itself.
In the past, intelligence was thought to reside inside our heads. We stored facts, recalled memories, and solved problems in a step-by-step fashion. But GenAI systems don’t work that way. They don’t remember in the traditional sense or reflect in the way we do. Instead, they operate through patterns, probabilities, and predictions. They don’t need to “understand” in order to generate meaning. That’s a profound shift. It suggests that intelligence may not need to feel, recall, or even “know” anything in the way we do. It can arise from systems that don’t resemble us at all. What we’re seeing isn’t a better version of human cognition—it’s a fundamentally different one.
As GenAI evolves, we’re also changing—especially in how we process and relate to information. More and more, we rely on machines to remember, to summarize, even to decide for us. Our minds are adapting to search, access, and prompt rather than store, reflect, and analyze. Researchers have described this shift as a kind of “digital amnesia,” where we gradually retain less internally because we’ve outsourced our memory to devices. This is convenient, but it comes with a cognitive cost. We are training ourselves to consume answers instead of engage in inquiry. It’s as though we are outsourcing not just tasks, but thought itself.
This transformation has economic consequences, too. Human thinking has long been a scarce and valuable resource. It powered innovation, leadership, and learning. But GenAI has changed the equation. When machines can generate intelligent output at scale, the marginal cost of “thinking” drops close to zero. As this abundance increases, the value of raw information declines. What becomes scarce—and therefore valuable—is something else entirely: judgment, discernment, and the ability to synthesize meaning. In this new landscape, attention, wisdom, and ethical reasoning may become the new forms of cognitive capital.
There’s also a growing risk that as GenAI becomes more powerful, we become less engaged. We may end up as cognitive consumers—using tools we barely understand—rather than cognitive producers who question, interpret, and create. This echoes an ancient warning from Socrates, who feared that the invention of writing would weaken human memory. He worried that people would appear wise without truly understanding. In a similar way, we may become information-rich but thinking-poor—surface-level knowledgeable but hollow in our understanding.
This brings us back to the heart of the matter. Maybe the question isn’t, “Is AI more intelligent than humans?” but rather, “What kind of intelligence do we want to preserve, and what kind are we comfortable outsourcing?” The rise of AI forces us to revisit the meaning of intelligence itself—not as a competition between humans and machines, but as a moment of reflection about how thinking is changing, and what we want to hold on to.
We’re not losing intelligence. We’re redefining it. And if we pay attention, we might realize that AI is not just a tool—it’s a mirror. It reflects back our assumptions, our habits, and our blind spots. It invites us to think differently about thinking, and to decide—individually and collectively—what we value in a world where intelligence is no longer uniquely human.
Our greatest challenge in this new age may not be to catch up with AI, but to remain deeply human—to think, to wonder, to discern, and to shape the kind of intelligence we want in the world.