Tuesday, 13 May 2025

 The AI Writing Debate: Missing the Forest for the Trees

By Richard Sebaggala


 

A recent article published by StudyFinds titled “How College Professors Can Easily Detect Students’ AI-Written Essays” revisits an ongoing debate about whether generative AI tools like ChatGPT can really match the nuance, flair, and authenticity of human writing. Drawing on a study led by Professor Ken Hyland, the article argues that AI-generated texts leave much to be desired in terms of persuasiveness and audience engagement: they lack rhetorical questions, personal asides, and other so-called 'human' features that characterize good student writing. The conclusion seems simple: AI can write correctly, but it can't make connections.

But reading the article made me uneasy. Not because the observations are wrong, but because they rest on a narrow and, frankly, outdated understanding of what constitutes good academic writing. More importantly, they misrepresent the role of generative AI in the writing process. The arguments often portray generative AI as if it were another human from a distant planet trying to mimic the way we express ourselves, rather than what it actually is: a tool designed to help us. And here’s the irony: I have experienced first-hand the limitations of human writing, even my own, and I see AI not as a threat to our creativity but as a reflection of the weaknesses we have inherited and rarely challenged.

 

When I started my PhD at the University of Agder in Norway, many friends back home in Uganda already thought I was a good writer. I had been writing for years, publishing articles and teaching economics. But my confidence was shaken when my supervisor returned my first paper with over two thousand comments. Some of them were brutally honest. My writing was too verbose, my sentences too long, and my arguments lacked clarity. What I had previously thought was polished human writing was actually a collection of habits I had picked up from outdated academic conventions. It was a difficult but necessary realisation: being human doesn’t automatically make your writing better. And yet many critics of AI-generated texts would have us believe that it is the very flaws we have internalised, such as poor grammar, excessive verbosity, and vague engagement, that make writing human and valuable.

 

This is why the obsession with 'engagement markers' as the main test of authenticity is somewhat misleading. In good writing, especially in disciplines such as economics, business, law, or public policy, clarity, structure, and logical flow are often more important than rhetorical flair. If an AI-generated draft avoids rhetorical questions or personal asides, this is not necessarily a weakness. Rather, it often results in a more direct and focused text. The assumption that emotionally engaging language is always better ignores the different expectations across academic disciplines. What is considered persuasive in a literary essay may be completely inappropriate in a technical research report.

 

Another omission in the argument is the role of the prompter. The AI does not decide on its own what tone to strike; it follows instructions. If it is asked to include rhetorical questions or to adopt a more conversational or analytical tone, it does so. The study’s criticism that ChatGPT failed to use personal asides and interactive elements says more about the design of the prompts than about the capabilities of the tool. This is where instruction needs to change. Writing classes need to teach students how to create, revise, and collaborate using AI prompts. This does not mean that critical thinking is lost, but that it is enhanced. Students who know how to evaluate, refine, and build upon AI-generated texts are doing meaningful intellectual work. We're not lowering the bar; we're modernising the skills.

A recent study by the Higher Education Policy Institute (HEPI) in the UK revealed that 92% of university students are using AI in some form, with 49% using it to start papers and projects, 48% to summarize long texts, and 44% to revise writing. Furthermore, students who actively engaged with AI by modifying its suggestions demonstrated improved essay quality, including greater lexical sophistication and syntactic complexity. This active engagement underscores that AI is not a shortcut but a tool that, when used thoughtfully, can deepen understanding and enhance writing skills.

It's also worth asking why AI in writing causes more discomfort than AI in data analysis, mapping, or financial forecasting. No one questions the use of Excel in managing financial models or Stata in econometric analysis. These are tools that automate human work while preserving human judgment. Generative AI, if used wisely, works in the same way. It does not make human input superfluous. It merely speeds up the process of creating, organising, and refining. For many students, especially those from non-English-speaking backgrounds or under-resourced educational systems, AI can level the playing field by providing a cleaner, more structured starting point.

The claim that human writing is always superior is romantic, but untrue. Many of us have written texts that are grammatically poor, disorganized, or simply difficult to understand. AI, on the other hand, often produces clearer drafts that more reliably follow an academic structure. Of course, AI lacks originality if it is not guided, but this is also true of much student writing. Careful revision and critical thinking are needed to improve both. This is not an argument in favour of submitting AI-generated texts. Rather, it is a call to use AI as a partner in the writing process, not as a shortcut around it.

Reflecting on this debate, I realise that much of the anxiety around AI stems from nostalgia. We confuse familiarity with excellence. But the writing habits many of us grew up with (cumbersome grammar, excessive length, and jargon-heavy arguments) are not standards to be preserved. They are symptoms of a system that is overdue for reform. The true power of AI lies in its ability to challenge these habits and force us to communicate more consciously. Instead of fearing AI's so-called impersonality, we should teach students to build on its strengths while reintroducing their own voice and judgment.

We are not teaching students to surrender their minds to machines. We are preparing them to think critically in a world where the tools have evolved. That means they need to know when to use AI, how to challenge it, how to add nuance, and how to edit their results to provide deeper understanding. Working alongside AI requires more thinking, not less.

The writing habits we've inherited are not sacred. They are not the gold standard just because they are human. We need to stop missing the forest for the trees. AI is not here to replace the writer; it is here to make our writing stronger, clearer, and more focused, if only we let it.

Thursday, 8 May 2025

 The 80/20 Rule of AI: Embracing Balance for Massive Efficiency Gains

By Richard Sebaggala 

 

In the late 19th century, the Italian economist Vilfredo Pareto made a simple but profound observation when he analysed land ownership in Italy. He found that around 80 per cent of the land was owned by only 20 per cent of the population. Similar imbalances turned out to be evident in other areas of life, suggesting that a small proportion of causes often produces most of the results. Over time, this principle became known as the Pareto principle or, more generally, the 80/20 rule.

The principle found wider application in the middle of the 20th century when Joseph Juran, a pioneer of quality management, recognised its importance for industrial production. Juran found that around 80 per cent of problems in manufacturing processes can be traced back to 20 per cent of the causes. He aptly described these critical factors as "the vital few" as opposed to "the trivial many". This idea quickly became a cornerstone of management, business, and productivity thinking, guiding companies and policymakers to focus their efforts on the most important factors.
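As a toy illustration of Juran's point, a few lines of Python can show what it means for the top 20 per cent of causes to account for most of the effects. The defect counts below are invented purely for illustration:

```python
# Hypothetical defect counts per cause, in no particular order.
defect_counts = [120, 90, 60, 10, 8, 5, 4, 2, 1]

def top_share(values, top_fraction=0.2):
    """Share of the total contributed by the largest `top_fraction` of causes."""
    ordered = sorted(values, reverse=True)
    k = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# With these made-up numbers, the top 20 per cent of causes (2 of 9)
# produce 70 per cent of all defects: "the vital few".
print(f"{top_share(defect_counts):.0%}")
```

Whether the split is exactly 80/20 matters less than the shape of the distribution: a handful of causes dominate, and that is where attention pays off.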

In today’s world, the 80/20 rule remains remarkably relevant, especially as artificial intelligence (AI) increasingly enters our work. AI tools such as ChatGPT, Gemini, and Avidnote have become popular for tasks such as writing reports or composing emails. While these tools are very powerful, their true value lies not in being expected to do everything, but in striking the crucial balance between machine output and human input. AI can effectively handle the first 80 per cent of many tasks, the groundwork, the structuring, the heavy lifting. However, the last 20 per cent, the area where quality and importance lie, still requires human attention.


A recent experience conducting a thematic analysis as part of a research project on how media narratives shape the perceptions of business owners brought this balance home to me. With qualitative responses from 372 business leaders to analyse, the task of coding and identifying themes initially felt overwhelming. Normally, such work would require weeks of meticulous reading, coding, and interpretation. However, by using Avidnote and ChatGPT, I was able to speed up many of the early stages considerably. I transcribed the audio recordings using Avidnote, uploaded the transcriptions, and asked ChatGPT to summarise response segments, suggest initial codes, and even draft basic descriptions of emerging themes based on the study objectives. The AI provided a solid starting point, an overview of ideas and patterns that helped me visualise the data in a manageable way.
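For readers curious what that "first 80 per cent" can look like in practice, here is a minimal sketch of the kind of request involved. It assumes the OpenAI Python SDK; the model name, prompt wording, and helper function are illustrative, not a record of the exact prompts used in the project:

```python
# A sketch of asking a chat model to propose initial thematic codes.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def suggest_initial_codes(responses: list[str], objective: str) -> str:
    """Ask the model for candidate codes; every suggestion still needs human review."""
    joined = "\n".join(f"- {r}" for r in responses)
    prompt = (
        f"Study objective: {objective}\n\n"
        "Suggest six to eight candidate thematic codes for the responses below, "
        "each with a one-line description and an example phrase.\n\n"
        f"Responses:\n{joined}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```

The output of a call like this is only a starting point; validating each code against the raw data, selecting quotes, and connecting themes to the literature remain the researcher's work, as the next paragraphs describe.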

But that was just the beginning. While the AI’s suggestions proved useful, they lacked nuance. To ensure the validity and depth of the research, I had to carefully review each suggested code, compare it to the raw data, and determine its true relevance to the context of the study. I rewrote the theme descriptions, inserted direct quotes from respondents, and linked the findings to the wider scientific literature. The AI was able to handle the mechanics of pattern recognition and drafting the initial text, but it could not capture the deeper meaning of the interviewees’ statements. This required my judgment as a researcher.

Referring back to Braun and Clarke’s well-established paper on thematic analysis greatly strengthened this refinement process. This is an example of the crucial 20 per cent of effort that makes all the difference. By basing the thematic analysis on recognised academic standards, I not only improved the AI's drafts but also steered it to align with specific frameworks and scholarly expectations. At this point, the AI becomes a more precise tool that not only generates words but produces work that meets higher standards, because I actively aligned it to those standards. When you anchor the results of AI in trusted sources, be it Braun and Clarke on qualitative analysis or other leading texts in your field, you can be confident that the results will stand up to scrutiny.

 

Consider individuals who are highly skilled in areas such as writing, data analysis or grant design. Years of dedication have honed these skills. Now, with AI as a partner, the time-consuming mechanical aspects of the process become less demanding. AI can do the first 80 per cent of the work, so you can apply your honed skills where they really matter. You bring the clarity, insight and polish that machines alone cannot deliver. AI augments your capacity, but your expertise ensures that the final product is not only complete, but also compelling.

 

This experience reflects the dynamics in many areas where AI is a valuable tool today. Take, for example, the creation of grant applications. AI can quickly formulate problems, outline goals or suggest standard frameworks such as theories of change. It can make the initial stages more manageable, especially when deadlines loom. But no AI can truly capture the unique story of a project or the subtle nuances required to resonate with a particular donor. That requires the involvement of someone who is intimately familiar with the project’s history, its goals and the donor’s specific expectations.

The same principle applies to different contexts, whether it is the preparation of a policy brief or an academic paper. Artificial intelligence can provide the basic structure, but your expertise breathes life into this framework. And this is where the 80/20 rule offers such a valuable perspective. AI is great for the 80 per cent that is repetitive, structural or mechanical. But the remaining 20 per cent, which includes context, interpretation and creativity, clearly belongs to the human side of this powerful partnership.

Understanding this balance not only saves time, but also fundamentally changes the way we approach work. By allowing AI to manage the more routine aspects of a task, we can focus on the elements that really matter. The result is not just faster work, but demonstrably better work. It allows us to spend less time on what machines can do and more time on what only we can achieve.

The 80/20 rule, which has long been used to understand wealth distribution and production efficiency, now provides a crucial framework for understanding how AI can be used effectively. The efficiency gains are significant, but they go beyond mere speed. It's about strategically channelling human energy to where it can bring the most benefit. AI can go most of the way, but the final stretch of constructing meaning and ensuring quality remains our domain.

Sunday, 4 May 2025

 The Age of Amplified Analogies: How AI Mirrors Human Thought

By Richard Sebaggala


For as long as I’ve studied economics, there’s been one central assumption that has guided much of the field: humans are rational. From the days of Adam Smith, economists believed that people make decisions logically, weighing costs and benefits to arrive at the best possible outcome. But over time, this belief has been quietly dismantled. Psychologists and behavioral economists have shown, again and again, that people rarely live up to this ideal. Our decisions are messy, influenced by emotions, habits, cognitive biases, and limited information.

Recently, a new idea has added another layer to this conversation—one that doesn’t just challenge the notion of rationality but offers a different way of thinking about how we, as humans, make sense of the world. Geoffrey Hinton, a name familiar to anyone following the evolution of artificial intelligence, argues that humans aren’t really reasoning machines at all. We are, in his words, “analogy machines.”

Hinton’s view is simple but striking. He suggests that we don’t move through the world applying strict logic to every situation. Instead, we understand things by making connections, by comparing one experience to another, by spotting patterns that help us navigate the unfamiliar. Reasoning, the kind that builds mathematical models or legal systems, is just a thin layer that sits on top of all this pattern recognition. Without it, we wouldn’t have bank accounts or the ability to solve equations. But without analogies, we wouldn’t be able to function at all. It is no wonder, then, that whenever we face a situation for which we have no experience and no pattern to draw on, we feel as lost as a complete novice. A professor of economics with no knowledge of car mechanics, for example, can feel foolish in a garage, surrounded by mechanics who have the patterns he lacks.

As I thought about this, I found myself reflecting on the ongoing debates I’ve had with colleagues and friends who remain skeptical of AI. Many of them argue that AI is, at best, a useful tool, but it can’t approach the depth or richness of human intelligence. They believe there’s something unique, perhaps sacred, about how humans think, something AI can never replicate.

But if Hinton is right, and I find his argument persuasive, then the way we think and the way AI works aren’t as different as we might like to believe. After all, what does a large language model like ChatGPT do? It scans through vast amounts of information, recognizes patterns, and makes connections. In other words, it makes analogies. The difference is that AI draws on far more data than any one human ever could.

It’s a humbling thought. Much of what we take pride in, our ability to write, to solve problems, to make decisions, is rooted in this analogy-making process. We reach into our memories, find similar situations, and use them to guide what we do next. But we do this with limited information. We forget things. We misremember. We carry biases from one situation to another, sometimes without realizing it.

AI doesn’t have these limitations. It doesn’t get tired or distracted. It can sift through millions of examples in seconds, pulling out patterns and insights we might miss. This doesn’t mean AI is better than humans, but it does mean that in certain ways, it can amplify what we already do, helping us see further, make better connections, and avoid some of the pitfalls that come from relying on incomplete or faulty memories. However, it's crucial to acknowledge that AI can also reflect and amplify existing biases present in the data it's trained on, making human oversight essential to ensure fairness and accuracy.

I have spent much of the past five years using behavioral insights to reflect on how people make decisions. The idea that we’re not purely rational was something behavioral economics forced me to accept. But Hinton’s insight pushes that understanding even further. It suggests that at the core of human thinking is something far more organic and intuitive, something that AI, in its own way, mirrors.

Intriguingly, recent research from Anthropic, the creators of Claude, offers a glimpse into this mirroring. Their efforts to understand the so-called "black box" of AI reveal that large language models like Claude don't always arrive at answers through purely logical steps that align with their explanations. For instance, while Claude can generate coherent reasoning, this explanation sometimes appears disconnected from its actual processing. Furthermore, their findings suggest that Claude engages in a form of "planning" and even possesses a shared conceptual "language of thought" across human languages.

These discoveries, while preliminary, hint at a less purely algorithmic and more intuitively structured process within advanced AI than previously assumed. Just as human decision-making is influenced by subconscious biases and heuristics, AI might be operating with internal mechanisms that are not always transparent or strictly linear. This strengthens the notion that the core of intelligence, whether human or artificial, may involve a significant degree of organic and intuitive processing, moving beyond purely rational models.

And that brings me back to the resistance I often encounter around AI. It makes me wonder if some of this resistance isn’t about AI itself, but about what it reflects back to us regarding our reliance on pattern recognition. If AI can perform tasks we associate with intelligence, like making analogies, writing essays, and answering questions, then perhaps we must confront the idea that much of what we deemed uniquely human is, in fact, rooted in mechanical processes of pattern matching. Maybe the underlying fear isn’t that AI will surpass us, but that it reveals the extent to which our own abilities are built upon these very mechanisms.

But there’s another way to see it. Rather than feeling threatened, we might see AI as a chance to fill in the gaps where our human thinking falls short. It can’t replace the reasoning layer we rely on for complex tasks, but it can help us expand the reach of our analogies, connect ideas across disciplines, and spot patterns we would otherwise miss. In doing so, it can make us better thinkers.

To me, this collaborative potential is the true opportunity. AI isn’t destined to outsmart us; it’s poised to work alongside us, amplifying the strengths of human thinking while compensating for our inherent imperfections. If we embrace the idea that much of our cognition is rooted in analogy-making, then AI transforms from a rival into a powerful partner, one that can help us expand our thinking, question our biases, and perceive the world through novel lenses. So perhaps it’s time to stop arguing about whether AI can think like humans. The more important question is: how can it help us think better?

 

Sunday, 27 April 2025

 

Beneath the Surface: Why the Generative AI Dividend Depends on Going Deeper

By Richard Sebaggala

A few weeks ago, I sat across from my longtime friend Allan as we enjoyed a quiet lunch at a beautiful hotel in Entebbe. The breeze from Lake Victoria carried a gentle calm, while our conversation turned, as it often does these days, to artificial intelligence.

Allan shared how he uses AI tools like ChatGPT to simplify his routine. He mentioned things like getting quick summaries, organizing his thoughts, or generating short drafts. Like many people, he saw AI as a helpful assistant. However, when I showed him how I use the same tools for in-depth document analysis, extracting patterns from research data, and explaining complex concepts to students, his expression shifted. He paused and admitted with some surprise, “I didn’t know it could do all that. I think I’ve only been using it on the surface.”

That moment stayed with me.

It reminded me of another experience just a few Sundays before. I was in church, listening to Pastor James Kiyini preach a heartfelt sermon about salvation. His message was clear and filled with passion. As he spoke, my thoughts drifted briefly to artificial intelligence, a completely different subject, yet oddly connected.

It occurred to me that those of us who have gone deeper into AI often find it difficult to explain its full potential to those who have only used it casually. It is similar to trying to explain the power of faith or salvation, or the depth of love, to someone who has only skimmed the surface. Words alone can’t communicate transformation. It must be experienced.

This gap between surface users and deep users of AI is growing. It is no longer just a technical issue. It is also becoming an economic one. Recent reports show that the demand for generative AI skills has skyrocketed. In one year alone, job listings asking for AI-related skills more than tripled. Employers are even prioritizing candidates with AI experience over those with traditional qualifications. A shift is clearly underway in how we define value and talent in the workforce.

Yet even as workers, especially younger ones, begin to embrace AI, many institutions are falling behind. A global survey by McKinsey found that only about one-third of companies are actively involving senior leaders in AI adoption. Fewer still are integrating AI meaningfully into their daily operations. This is not just a missed opportunity. It is a structural limitation. Individuals can only go so far when the systems around them remain unchanged.

It is now clear that engaging deeply with AI is not optional. It is essential. But the responsibility to make that shift cannot be left to individuals alone. We need support structures, leadership, and an enabling environment. Institutions such as universities, government bodies, businesses, and even faith-based organizations need to ask tough questions. Are we preparing people for a future where generative AI is no longer a special advantage but an expected baseline? Are we equipping our teams to explore, test, and build with AI, or are we merely observing from the sidelines?

The good news is that access to AI tools has never been easier. Many of the most advanced platforms, such as ChatGPT, Gemini, Avidnote, and DeepSeek, offer free versions. Anyone with curiosity and an internet connection can begin to explore the deeper layers of AI.

This kind of access is unprecedented. In past technological revolutions, tools and knowledge were tightly guarded or prohibitively expensive. Now, some of the most powerful tools in the world are available to everyone, at no cost, at least for learning. That makes this moment particularly important for Africa and other developing regions. For decades, lack of infrastructure and financial barriers have kept us behind in global technology trends. AI provides a rare opportunity to change that. The tools are here. The door is open.

Today, what separates people is not access but how deeply they are willing to learn and apply what is available. That shift from scarcity to depth marks a profound change in how we think about technology and progress. I believe that this could be the most inclusive technological revolution we have ever seen. For the first time, the global knowledge economy has a real chance to become more equal. However, this will not happen by itself. It will take intention, effort, and leadership.

We need institutions that actively invest in building AI capacity. We need schools, workplaces, and public offices that create room for experimentation and learning. Leadership should not simply approve AI use; it should drive it, integrate it into decision-making, and normalize it as a tool for problem-solving and innovation.

This is not just about productivity. It is about empowerment. People deserve the chance to be full participants in the digital future, regardless of where they were born or how much they earn. What I have learned, in both life and work, is that real transformation never happens at the surface. Whether in love, faith, or technology, it is depth that makes the difference. We need to go beyond convenience and curiosity and begin to engage with AI as a serious tool for personal growth, institutional transformation, and national development.

The deeper we go, the more value we unlock. That is the AI dividend. And it is ours to claim—if we are willing to reach for it.

Tuesday, 15 April 2025

Don’t Blame the Tool: Rethinking Intellectual Effort and Ownership in the Age of AI

By Richard Sebaggala

I have spoken to countless academics, researchers, and students from diverse backgrounds who express a quiet but persistent anxiety about generative AI. They whisper their concerns, often overlooking a simple truth: AI is not magic. It is a statistical machine, a digital parrot that repeats patterns drawn from vast oceans of human data. It works by calculating probabilities. This is engineered brilliance, but brilliance rooted in prediction, not in original human thought.

As someone who has used and tested nearly every major AI tool on the market, I can say with confidence that generative AI is among the most powerful inventions of our time. It can translate, summarize, visualize, code, debug, compare alternatives, and analyze data—often faster and more accurately than we can. Yet despite this, many still ask: if machines can write, where does that leave us? Some feel displaced, sidelined, or even obsolete. But this anxiety often misses the mark. AI, no matter how advanced, is still just a tool. It is no different in principle from the tools we embraced long ago without hesitation.

There is a paradox worth reflecting on. We do not doubt the validity of a graph produced in Stata, nor do we question the output of SPSS when it generates regression results. Those of us in economics and the social sciences have always relied on statistical software to process data and help us see patterns we could not compute by hand. We have learned the syntax, interpreted the output, and confidently reported our findings. The real skill lies in knowing the command and making sense of the results. Why then do we hesitate when ChatGPT helps us structure an email, brainstorm ideas for a project, or draft a first version of a research abstract?

Part of the hesitation lies in what writing represents. Unlike regression or visualization, writing has always been treated as sacred, almost inseparable from human intellect and creativity. But writing is not simply typing words. It is thinking, choosing, constructing, and editing. The act of prompting—framing a question, guiding an argument, anticipating an answer—is itself a form of intellectual labor. Every useful response from AI begins with a human spark. The person crafting the prompt plays the same role as the one interpreting coefficients in a statistical table: they are the mind behind the tool. The machine’s output merely reflects the direction it was given.

When I began publishing essays on the economics of AI, some of my friends assumed I was not really doing the work. A few told me, half-jokingly, that it must be easy since I was "just using AI." What they missed is that the thoughts, the curiosity, the structure, and the point of view in each piece were mine. The tools I used helped me reach my conclusion faster, but the ideas still came from me. You can take away the interface, but not the thinking. What you read is the product of my experience and insight—not a machine’s.

Legal scholar Dr. Isaac Christopher Lubogo has explored the question of authorship in the age of AI. Should work produced with AI tools be credited to the machine or to the human using it? His view is clear. Authorship still lies with the person. AI is no different from a camera or a calculator. It enhances what we already know or imagine. It cannot create out of nothing. It can respond, imitate, and refine, but it does not dream, interpret emotion, or seek meaning in the way a human does.

Those of us trained in economics understand the phrase "garbage in, garbage out." A model, no matter how sophisticated, will only be useful if the assumptions behind it are sound. The same logic applies to AI. A vague prompt produces empty content. A well-formed prompt generates something coherent and often useful. But the credit for that value still belongs to the person who gave it purpose and direction.
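To make the point concrete, here is an invented contrast between two prompts for the same task; the wording is mine, not drawn from any study or tool documentation:

```python
# Garbage in, garbage out, applied to prompting. Both strings ask for
# "writing", but only one embeds the author's purpose and direction.
vague_prompt = "Write about unemployment."

well_formed_prompt = (
    "Write a 150-word abstract for a paper on youth unemployment in Uganda. "
    "State the research question, the data (a hypothetical household panel), "
    "the estimation strategy, and one policy implication, in formal academic prose."
)
# The first prompt leaves every decision to the machine; the second records
# the thinking of the person behind it, which is where the credit belongs.
```

The sketch is trivial on purpose: the intellectual labour sits in the second string, not in the model that completes it.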

Public suspicion toward AI today reminds me of historical fears about other technological advances. When the printing press emerged, it threatened the role of scribes. Calculators were said to ruin mental arithmetic. The internet was blamed for weakening critical thinking. And yet, all these innovations became instruments of empowerment. They liberated people from repetitive tasks and allowed them to focus on what truly matters. Generative AI is simply the next step in this long journey—if we dare to use it wisely.

What concerns me today is how good writing is being treated with suspicion. As journalist Miles Klee recently noted in Rolling Stone, tools like ChatGPT are trained on large datasets of professional writing. As a result, they tend to produce content that follows grammar rules quite well. Ironically, this precision has caused some readers to believe that anything too polished must have been generated by AI. In other words, a typo is now seen as proof of authenticity. If we continue down this path, we risk devaluing the effort and discipline that go into clear, compelling human writing.

And here lies the real danger. Once we start assuming that any well-crafted argument or clean paragraph is the work of a machine, we erase the labor, thought, and voice behind it. We do not just insult the writer. We also erode the principle that effort matters.

Thinking that AI will replace human intelligence is as mistaken as crediting a hammer for building a house. The tool amplifies our ability, but it does not imagine the blueprint. ChatGPT may help us write faster, but it cannot choose our ideas or shape our insights. It is still the person behind the screen who makes the final decision. If we learn to treat AI as an assistant rather than an author, we can start to see it more clearly for what it is—a tool, not a threat.

This is especially important for Africa. For too long, we have remained consumers of technology rather than innovators. Now is the time to change course. We can either watch others master this new wave or take our place in it. We can use AI to strengthen our research, refine our business strategies, and tell our own stories better. But we must first stop viewing the technology with suspicion and start seeing it as a partner in progress.

We already trust Excel for financial models. We do not feel guilty using Stata or R for statistical analysis. Grammarly has become a standard tool for editing. Prompting an AI to help us write or brainstorm should be no different. When used responsibly, it becomes a legitimate part of the thinking process.

History shows us that the future belongs to those who adapt. The pen did not disappear with the rise of the typewriter, and the typewriter did not vanish with the arrival of the keyboard. Each new medium simply extended our ability to express ourselves. AI is the latest medium. It will not speak for us, but it can help us speak more clearly—if we choose to use it that way.

So do not blame the tool. It is only as good as the hand—and the mind—that guides it.

 

Thursday, 10 April 2025

The Pool Table and the AI Revolution: A Reflection on the Cost of Late Adoption

By Richard Sebaggala

 

Recently, during a quiet conversation with my longtime friend Moses, our discussion took an unexpected turn — from artificial intelligence (AI) to pool tables. This detour took me back in time, to my first year of university over two decades ago.

Back then, pool tables had just found their way into our neighborhoods and were especially popular with high school dropouts and younger boys in urban areas of the country. As university students, we initially dismissed the game as trivial and not worthy of our time. But curiosity eventually got the better of us.

 

The experience was humbling.

Every time I approached the table, I was clearly defeated by players who had left their formal education behind. They had already mastered the game, while we, the supposedly more educated, struggled with the basics. The embarrassment was palpable. I quietly withdrew and never played pool again.

That memory resurfaced as Moses and I pondered the rapid rise of AI. The parallels were striking.

AI is the pool table of today, only with much higher stakes. It is a tool, a platform, a revolution that touches the essence of intelligence. And like the pool table of old, it is being embraced early by the bold, curious, and unconventional, many of whom have no formal training in AI or technology. They are exploring, experimenting, and inventing new ways of thinking, writing, teaching, and working.

 

In fact, a 2024 survey by the National Bureau of Economic Research found that 39.4% of adults in the US have used generative AI tools like ChatGPT, with 28% having used them for work-related tasks. Notably, even among blue-collar workers, 22.1% reported using generative AI at work, suggesting that adoption is moving beyond traditional tech roles. Furthermore, Generation Z is leading the way: a study by Aithor found that nearly 80% of Generation Z professionals (18- to 21-year-olds) use AI tools for more than half of their work tasks.

 

Interestingly, many of these users are integrating AI into their workflows without formal training. Microsoft's 2024 Work Trend Index reports that 75% of knowledge workers use AI at work, with 46% having started within the last six months. Remarkably, 78% of these users bring their own AI tools to work, often without formal training or organizational support.

 

In the meantime, many professionals, scientists and experienced experts remain on the sidelines — hesitant, skeptical or overwhelmed. Some are waiting for legislation, others are looking for clearer use cases. But the longer we wait, the greater the embarrassment could be when we finally try to engage and realize we have some catching up to do in an area that was once our intellectual domain.

 

A 2024 study by the National Bureau of Economic Research found that the use of AI in the workplace decreases with age: about 34% of workers under 40 use AI at work, compared to only 17% of those over 50. In 2024, a survey by Ellucian found that while the use of AI in academia has increased, there are still significant concerns: 59% of college staff expressed concern about data security and privacy, and 78% of administrators feared that AI could compromise academic integrity.

 

In Africa, these global trends are mirrored. A 2024 survey conducted by KnowBe4 in several African countries, including Nigeria, Kenya and South Africa, found that a significant proportion of AI users are in the 18–34 age group, suggesting that AI adoption is being led by a youthful demographic. However, this enthusiasm is tempered by concerns about data privacy and the ethical implications of AI technologies.

 

In academia, South African universities are looking at integrating AI into their curricula and research. A study by the University of the Western Cape has highlighted challenges such as inadequate technological infrastructure, limited funding and a lack of clear guidelines for the use of AI in education.

These findings underline the importance of a proactive approach to AI technologies in all sectors. Waiting for perfect conditions or comprehensive regulations can lead to missed opportunities and a widening skills gap. It is imperative that professionals, academics, and institutions in Africa invest time and resources in understanding and integrating AI into their respective fields.

 

This is not just about technology. It's about identity, relevance, and the future of work. AI is a tool for thinking — fast, smart, and scalable. Delaying the adoption of AI could mean missing out on one of the greatest opportunities of our generation.

 

As an economist by training, I never considered myself tech-savvy. In fact, I often felt out of place in conversations about emerging technologies. But when I recognized the transformative potential of AI, I made a deliberate choice not to be left behind. I committed time to exploring and experimenting with AI tools, and that decision has paid off. Today, I can confidently say I operate at the same level as many professionals with a tech background. It’s a reminder that with curiosity and dedication, anyone can bridge the gap and thrive in the age of AI.

 

Let us not allow history to repeat itself as it did with the pool table. Let's embrace the discomfort of learning something new, even if others seem to be far ahead. The sooner we get to grips with AI, the more likely we are to use it responsibly, creatively, and inclusively. Because this time it's about intelligence. And we can't afford to sit it out.

Saturday, 5 April 2025

The AI Revolution: What Happens When We're Not the Smartest Anymore?

 By Richard Sebaggala

Not long ago, historian and bestselling author Yuval Noah Harari, known for his insightful books on the history and future of humanity, made a simple but unsettling comparison. He reminded us that chimpanzees are stronger than humans, yet we rule the planet—not because of strength but because of intelligence. That intelligence allowed us to organize, tell stories, cooperate in large groups, and build civilizations. Now, for the first time since we became the dominant species, something else has emerged with the potential to outmatch us—not physically, but cognitively.

Artificial Intelligence is not just another technology like the internet or the printing press. It is not simply an upgrade to how we compute, search, or automate. It is something more fundamental—a new kind of intelligence that doesn’t think like us, doesn’t learn like us, and doesn’t need to share our goals to reshape the world we live in. This isn’t merely about faster tools. It’s about a shift in cognitive authority, where machines are beginning to generate content, solve problems, and offer decisions that many humans now accept as credible—often without question.

In a previous article, I argued that AI is forcing us to rethink what intelligence really means. We are used to associating intelligence with human traits—consciousness, memory, creativity, even ethical reasoning. But AI doesn’t need any of these to function impressively. It uses statistical patterns and probability, not lived experience. It doesn’t need to “understand” to produce results that look meaningful. This has introduced a quiet but deep disruption: intelligence is no longer something uniquely human.

This realization calls for a different kind of response. We need to stop asking whether AI will “replace” humans and start asking how we, as humans, can live meaningfully and ethically in a world where we may no longer be the only—or even the dominant—form of intelligence.

 

To do that, we’ll need to rethink our policies. AI isn’t just another app to regulate; it’s a system capable of influencing elections and research, shaping public opinion, managing hospitals, and delivering education. We need strong, clear rules about what AI is allowed to do, and under what conditions. These rules must go beyond technical fixes and deal with deeper questions about accountability, fairness, and the role of humans in decision-making.

 

We also need to rethink education. If AI can access all the world’s information and summarize it instantly, what becomes the role of the classroom? What becomes the value of memorizing facts or writing essays? In this new world, human education must shift toward what AI still lacks: ethical thinking, emotional intelligence, creativity, empathy, and the ability to live with ambiguity.

 

Economically, things are shifting too. In the past, knowledge work—thinking, writing, analyzing—was a scarce and valuable resource. But now, the cost of generating “thinking” is falling. AI can produce text, summaries, even insights, at scale. This creates an economic twist. As thinking becomes abundant, what becomes rare is discernment—the ability to judge what matters, what’s true, and what’s worth acting on. Human judgment, not information, may be the next frontier of value.

 

But perhaps the biggest adjustment we need is psychological. For generations, we’ve grown up believing that humans are the smartest species on the planet. That belief shaped everything—from how we designed schools and workplaces to how we structured religion, law, and government. Now, as we encounter a form of intelligence that doesn’t look like us but can outperform us in many ways, we need to make space in our worldview for another mind. That takes humility, but also imagination.

 

We must remember that intelligence doesn’t have to be a “zero-sum” game. In economics, a zero-sum situation is one where if someone gains, another must lose. But intelligence can grow without taking away from others. AI getting smarter doesn’t mean humans are getting dumber. In fact, if we use it wisely, AI can help us become more reflective, more creative, and more human. But only if we remain active participants—questioning, interpreting, and shaping its use, rather than passively consuming its outputs.

 

What happens next is not inevitable. It depends on the choices we make—politically, socially, and personally. We don’t need to compete with machines, but we do need to learn how to live with them. That means adapting, not surrendering. Guiding, not resisting. Coexisting, not collapsing.

 

If we manage to do that—if we protect what makes us uniquely human while embracing what machines can offer—we might discover that the rise of another kind of mind is not the end of human relevance. It could be the beginning of a deeper kind of human flourishing.