Tuesday, 15 July 2025

Where do you draw the line: embracing, fearing, or confronting the alien intelligence of AI?

By Richard Sebaggala

Last week, I had the privilege of speaking to some of the brightest students at Uganda Christian University. The Honours College had invited me to speak about artificial intelligence: its nature, its misconceptions, and its implications for the future of learning, work and leadership. These were not typical students; each was among the top academic performers in their degree programme. Needless to say, the room was filled with energy, insight and thoughtful curiosity.

 

Among the many questions I was asked during the discussion, one stood out for its clarity and depth. A soft-spoken young woman sitting at the very front raised her hand and said, “It's clear from your presentation that you have great faith in AI. But where do you personally draw the line when using it?”

 

Her question was as sharp as it was sincere. I answered truthfully. Before I started using AI tools, I was already confident that I could write, research, analyse data and think strategically. These were areas in which I felt capable and competent. But as I began to explore advanced AI systems, I realised how much more was possible, even in areas I thought I had mastered. Faced with this realisation, I had a choice: to feel threatened or to scale. I chose to scale. And there are days when I am amazed, even alarmed, at the extent to which AI has expanded my own capabilities. This is not an admission of hubris, but an acknowledgement that my own boundaries have shifted. AI has not replaced me. It has shown me what else I could be.

 

This exchange reminded me of an earlier essay I wrote, Rethinking Intelligence: The Economics of Cognitive Disruption, in which I argued that AI is not making humans obsolete. Rather, it is forcing us to redefine what it means to be skilful, intelligent and competent in a world where cognitive work is no longer the sole preserve of humans. AI is not simply a tool in the traditional sense. Rather, it is a cognitive collaborator, albeit a deeply unfamiliar one.

 

John Nosta’s recent article, AI’s Hidden Geometry of Thought, captures this reality with striking precision. He explains that systems like ChatGPT do not mimic human thought. They operate with a completely different cognitive architecture, one structured not by emotions, intuition or conscious thought, but by statistical relationships within a high-dimensional mathematical space of roughly 12,000 dimensions. This is not a metaphor for intelligence. It is a fundamentally different kind of intelligence, one that functions without intention, consciousness or purpose and yet achieves remarkable results.

 

When I engage with models like ChatGPT, I am not interacting with something that “understands” me in any human sense. What it does is map language, probability and meaning into a high-dimensional field and navigate it with precision. It does not think. It makes predictions. But the predictions are often so accurate that they resemble thinking.

 

This observation recalls another article of mine, The Age of Amplified Analogies, in which I warned against interpreting AI through the lens of familiar metaphors. We often describe AI in human terms, as if it has a brain, learns like a child, or thinks like a colleague. But these analogies, while comforting, are misleading. A more appropriate analogy would be that of an alien intelligence, an “alien” form of cognition that masters our language without ever participating in our way of thinking.

 

This is perhaps the most confounding realisation. AI does not feel, does not reflect, and has no intuition. Yet it now outperforms us at tasks we once thought required these very qualities, such as writing, diagnosing, and even simulating empathy. This is the change we need to embrace. As I explain in The AI Revolution: What Happens When Tools Start Predicting You, the disruptive power of AI is not just automation; it is prediction. These systems now predict our decisions, thoughts and behaviours with an accuracy that was previously unimaginable, not because they understand us, but because they have identified patterns to which we ourselves are blind.

 

The student’s question, “Where do you draw the line?”, deserves more than a personal answer. It is a question that society as a whole needs to address. The central question is no longer whether AI will become like us; that question misses the point. The more pressing question is whether we are prepared to live with a form of intelligence that will never be human, but will become indispensable.

 

We need to move beyond the question of whether AI is as intelligent as we are. The better question is: what kind of intelligence is it, and how can we adapt to it? This is not convergence. It is divergence.

 

In conclusion, I gave the students a message that I carry with me: don’t be afraid of this intelligence. Learn to think with it. If you already have a skill, AI can improve it. If you are already strong, AI can extend your reach. But this is only possible if you stop seeing AI as a rival. It is not there to replace you. It is there to change you. And perhaps, in this moment of change, the most human response is not to make AI more like us, but to allow it to expand our definition of what it means to be intelligent.
