Tuesday, 15 July 2025

Where do you draw the line: embrace, fear, or confront the alien intelligence of AI?

By Richard Sebaggala

 



Last week, I had the privilege of speaking to some of the brightest students at Uganda Christian University. The Honours College had invited me to speak about artificial intelligence: its nature, misconceptions and implications for the future of learning, work and leadership. These were not typical students. Each of them ranked among the top academic performers in their degree programme. Needless to say, the room was filled with energy, insight and thoughtful curiosity.

 

Among the many questions I was asked during the discussion, one stood out for its clarity and depth. A soft-spoken young woman sitting at the very front raised her hand and said, “It's clear from your presentation that you have great faith in AI. But where do you personally draw the line when using it?”

 

Her question was as sharp as it was sincere. I answered truthfully. Before I started using AI tools, I was already confident that I could write, research, analyse data and think strategically. These were areas in which I felt capable and competent. But as I began to explore advanced AI systems, I realised how much more was possible, even in areas I thought I had mastered. Faced with this realisation, I had a choice: to feel threatened or to scale. I chose to scale. And there are days when I am amazed, even alarmed, at the extent to which AI has expanded my own capabilities. This is not an admission of hubris, but an acknowledgement that my own boundaries have shifted. AI has not replaced me. It has shown me what else I could be.

 

This exchange reminded me of an earlier essay I wrote, Rethinking Intelligence: The Economics of Cognitive Disruption, in which I argued that AI is not making humans obsolete. Rather, it is forcing us to redefine what it means to be skilful, intelligent and competent in a world where cognitive work is no longer the sole preserve of humans. AI is not simply a tool in the traditional sense. It is a cognitive collaborator, albeit a deeply unfamiliar one.

 

John Nosta’s recent article, AI’s Hidden Geometry of Thought, captures this reality with astonishing precision. He explains that systems like ChatGPT do not mimic human thought. They work with a completely different cognitive architecture, one structured not by emotions, intuition or conscious thought, but by statistical relationships within a 12,000-dimensional mathematical space. This is not a metaphor for intelligence. It is a fundamentally different kind of intelligence, one that functions without intention, consciousness or purpose, and yet achieves remarkable results.

 

When I engage with models like ChatGPT, I am not interacting with something that “understands” me in any human sense. What it does is map language, probability and meaning into a high-dimensional field and navigate it with precision. It does not think. It makes predictions. But the predictions are often so accurate that they resemble thinking.
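
To make this concrete, here is a minimal, hypothetical sketch in Python of the prediction step described above. The vocabulary and scores are invented purely for illustration; a real model like ChatGPT computes such scores from learned weights over thousands of dimensions, not from hand-written numbers.

```python
import numpy as np

# Toy illustration (not a real language model): given some context, the
# model assigns a score (logit) to every word in its vocabulary.
vocab = ["thinking", "predicting", "feeling", "dreaming"]

# Pretend these are the model's scores for the next word, given the
# context "AI is not thinking, it is ...". The numbers are made up.
logits = np.array([1.2, 3.5, 0.3, 0.1])

# Softmax turns scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for word, p in zip(vocab, probs):
    print(f"{word:12s} {p:.3f}")

# The "answer" is simply the most probable continuation: no intention,
# no understanding, just statistics over a high-dimensional space.
print("next word:", vocab[int(np.argmax(probs))])
```

The point of the sketch is the mechanism, not the scale: everything the system produces is a continuation chosen because it is statistically likely, which is why the output can resemble thought without being thought.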

 

This observation recalls another article of mine, The Age of Amplified Analogies, in which I warned against interpreting AI through the lens of familiar metaphors. We often describe AI in human terms, as if it has a brain, learns like a child, or thinks like a colleague. But these analogies, while comforting, are misleading. A more apt analogy is that of an alien intelligence: a form of cognition that masters our language without ever participating in our way of thinking.

 

This is perhaps the most unsettling realisation. AI does not feel, does not reflect, and has no intuition. Yet it now outperforms us at tasks we once thought required these very qualities, such as writing, diagnosing, and even simulating empathy. This is the change we need to embrace. As I explain in The AI Revolution: What Happens When Tools Start Predicting You, the disruptive power of AI is not just automation; it is prediction. These systems now anticipate our decisions, thoughts and behaviours with an accuracy that was previously unimaginable, not because they understand us, but because they have identified patterns that we ourselves are blind to.

 

The student’s question, “Where do you draw the line?”, deserves more than a personal answer. It is a question that society as a whole needs to address. The central question is no longer whether AI will become like us; that question misses the point. The more pressing question is whether we are prepared to live with a form of intelligence that will never be human, yet will become indispensable.

 

We need to move beyond the question of whether AI is as intelligent as we are. The better question is: what kind of intelligence is it, and how can we adapt to it? This is not convergence. It is divergence.

 

In conclusion, I gave the students a message that I carry with me: Don't be afraid of this intelligence. Learn to think with it. If you already have a skill, AI can improve it. If you are already strong, AI can extend your reach. But this is only possible if you stop seeing AI as a rival. It is not there to replace you. It is there to change you. And perhaps in this moment of change, the most human response is not to make AI more like us, but to allow it to expand our definition of what it means to be intelligent.

Monday, 7 July 2025

The Ghost in the Machine? What the MIT Study Gets Wrong About Thinking with ChatGPT

By Richard Sebaggala


 

In June 2025, researchers from the MIT Media Lab published a study entitled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (Kosmyna et al., 2025). The study tracked 54 participants over four months as they completed SAT essay tasks in one of three modes: using only their own thinking, using traditional search engines, or using ChatGPT. Using electroencephalography (EEG), the researchers found that while participants using ChatGPT wrote significantly faster, they showed markedly weaker neural connectivity, particularly in the alpha and theta bands, which are associated with attention, memory and semantic processing.

 

The authors interpret this reduction as a sign of "cognitive debt", a condition in which reliance on AI tools weakens the brain's engagement with the task. ChatGPT users not only wrote more uniform and less original essays, but also showed significant memory deficits: more than 83% could not accurately recall key content just minutes after writing the essay, compared to only 11.1% of those who worked unaided. The study raised particular concern for young people, whose developing brains may be more susceptible to long-term cognitive shifts from repeated reliance on such tools.

 

The study has caused concern in both academic and public discourse, fuelling fears that AI tools such as ChatGPT could blunt our minds and diminish our ability to think independently. The fear recalls a well-known philosophical metaphor, the "ghost in the machine", which questions whether human consciousness and machine function can coexist. In this reading, the worry is that the machine (AI) supplants the mind (human agency), reducing thinking to something automatic, mechanical and impersonal. But is this really the case? Or is AI better understood as a dance partner that reshapes our thinking without eradicating the thinker?

 

We hear a lot about the risk of a "decline of agency" associated with AI: the fear that our critical thinking skills will diminish and our cognitive independence will erode if we rely on these tools. This is a valid concern, and Cornelia Walther's recent article in Psychology Today rightly warns that AI can slip from being a helpful assistant to a crutch we can no longer do without. But that is not the whole story. From my own practical experience, once you discover the areas where AI genuinely augments your capabilities, human agency does not diminish; it increases.

 

Reinterpreting cognitive activity

At this point, a more critical view is needed. The study's interpretation rests on the assumption that conventional, pre-AI markers of cognition, such as heightened brain-wave activity or strong memory performance, are the gold standard for learning. But in a world where intelligence is increasingly supported by digital systems, we need to ask whether reduced neural activity reflects cognitive decline or simply a shift in how thinking is distributed between humans and machines.

 

Throughout history, technologies that lightened the cognitive load, such as the calculator, GPS or search engines, have been met with initial fears of "weakening" the human brain. Yet humanity has always adapted. Before GPS, for example, our brains had to store and retrieve detailed mental maps and routes in order to navigate between places. Today, with Google Maps, we no longer need to memorise every turn or landmark, so we can devote our cognitive energy to higher-level planning, decision-making or even creative thinking while travelling. The reduced reliance on spatial memory does not indicate cognitive decline; it shows how technology reallocates mental resources. The real point is not that we use less brain power, but that our brains work differently. The lower alpha-band connectivity observed when writing with AI, for instance, is probably not a sign of cognitive decline. It could instead reflect a redistribution of effort towards more strategic tasks, such as evaluating AI-generated output or refining prompts, activities that current EEG measures alone cannot fully capture.

 

Limitations of the MIT study

Equally important is the fact that the MIT study includes no baseline data on participants’ cognitive abilities. We do not know whether some participants were already more reliant on digital aids, or whether their writing styles, memory capacities, or learning preferences differed before the study began. Without such a baseline, it is impossible to say whether the observed differences in brain activity were caused by ChatGPT or merely correlate with pre-existing individual differences. The study measures what happens during and shortly after the use of ChatGPT, but says little about how cognitive patterns develop with thoughtful, long-term use of AI tools.

 

Another concern is the study’s reliance on memory recall as a proxy for learning. In traditional educational systems, memory has long been a central measure of mastery. But in an information-rich world shaped by AI, knowing how to access, evaluate and apply knowledge is often more important than being able to recall it verbatim. The assumption that true learning occurs only when knowledge is encoded internally ignores the fact that modern cognition now operates within a broader "extended mind" ecosystem that includes digital tools. The problem is not that students do not remember what they wrote with AI. The problem is the assumption that remembering is still the highest form of understanding.

 

Rethinking AI integration in education

To be fair, the concept of "cognitive debt" has some merit. When learners, or any users, engage with ChatGPT passively, copying and pasting without processing, real thinking suffers. But this is not a fault of the AI itself; it is a fault in how the AI is used. Instead of rejecting AI, educators and institutions should focus on integrating it into learning in a meaningful way: teaching prompt design, encouraging critical reflection on AI output, and helping students use tools like ChatGPT as thinking partners rather than crutches, as the sketch below illustrates.
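
As one concrete illustration of "thinking partner rather than crutch", here is a minimal sketch using the OpenAI Python client. The model name, file path and prompt wording are placeholders of my own, not a prescription; the point is the design, a prompt that forbids rewriting and instead pushes the student to re-engage with their own draft.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical student draft loaded from a local file.
draft = open("student_essay.txt").read()

# The system prompt asks the model to question the draft rather than
# rewrite it, keeping the actual thinking on the student's side.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a Socratic writing tutor. Do not rewrite the "
                "essay. Instead, ask three probing questions about its "
                "weakest argument and point out one unstated assumption."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

Used this way, the tool returns questions rather than finished prose, so the cognitive work of revising, the very engagement the MIT study worries about, stays with the learner.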

 

This discussion is particularly urgent for regions like Africa, where the integration of AI into education is still nascent. Misreading studies like this one could reinforce hesitation and delay much-needed innovation. For educators and leaders who have yet to engage with AI, such research might seem to confirm their doubts, when in fact it underlines the need for better AI literacy rather than a retreat from the technology.

 

The real question is not whether ChatGPT reduces brain activity. It’s whether we’re measuring the right kind of activity in the first place. Rather than judge lower EEG connectivity as a loss, we should be asking: are students getting better at navigating, questioning, and reconfiguring information in an AI-rich environment?

 

Moving forward

The MIT study raises valuable questions, but it should be a starting point for deeper, more nuanced investigations. What kind of cognition do we want to cultivate in the era of AI? What skills are most important when machines can instantly generate, summarise and retrieve information? And how do we equip learners, not just to avoid cognitive debt, but to add cognitive value through the strategic, reflective, and ethical use of AI?

We are not facing a cognitive collapse. We are facing a change in the way intelligence is organised. And it’s time for our metrics, assumptions, and teaching methods to catch up.