Friday, 23 May 2025

 AI in Education: Shifting from Policing to Partnering

By Richard Sebaggala


 

I recently gave a keynote address at the University of South Africa in which I discussed the transformative power of AI for online and distance learning. The Institute for Open and Distance Learning (IODL) had invited me to explore these possibilities at its symposium, “Growing Excellence in Teaching, Learning and Research in Open Distance and e-Learning: Learning Analytics and Post-graduate Research”, but the real conversation began in the Q&A session. As we talked about the AI revolution, a deep-seated fear surfaced: students "cheating" with AI. What struck me was that this fear stemmed not from ethical concerns but from a limited understanding of what AI can do and how to use it responsibly.

 

This pattern is not unique; I have experienced the same fear-driven reaction back home in Uganda. The discussion about AI in higher education is fuelled by panic, not a thoughtful pedagogical approach, and this has an unfair impact on students trying to learn responsibly. In the United States, some students are reported to be recording themselves writing essays just to prove that those essays were not created by ChatGPT, after faulty AI detectors like Turnitin mislabelled their work. A Ugandan student recently confided that after discovering how generative tools can support her learning, she could no longer ignore them. Her university now warns that even “10 per cent” AI help amounts to misconduct. Honest learners are forced into a ritual of self-monitoring simply because the machines can no longer recognise whose words belong to whom.

 

The academic world has faced similar fears before. We used to worry that tools like SPSS or Stata would jeopardise the integrity of research; today, no one asks whether "Stata wrote the regression." We judge the researcher's interpretation, not the software's calculations. Generative AI is no different. True intellectual labour has never consisted of typing out every word by hand. It lies in judgement, synthesis, and the courage to stand by one's own conclusions. When the AI writes a paragraph that you refine, contextualise, and critique, it is still unmistakably your work.

 

Despite this historical precedent, a disproportionate amount of institutional energy has gone into policing rather than pedagogy since ChatGPT debuted in late 2022. Publishers are trying to perfect their detection algorithms, universities are imposing blanket bans, and memos often read like prosecutorial briefs. Meanwhile, the very academics who need AI skills the most, especially in the Global South, are left without adequate training or support. Basically, we have prioritised building walls of surveillance over building bridges of understanding.

 

A healthier equilibrium is possible. Teachers need practical AI skills so they can model responsible use of AI instead of criminalising it. Guidelines should distinguish between unethical outsourcing and legitimate support, just as style guidelines already distinguish between plagiarism and citation. Assessments can encourage reflection and transparency by asking students to outline their process, explain their prompts, and annotate the revisions they made. Detectors, if used at all, should open a dialogue, not deliver a guilty verdict.

 

Above all, we need to remember the real purpose of education: to prepare learners for tomorrow's economy, not to preserve outdated work processes. The future job market will require students to have critical AI skills, adaptive problem-solving abilities, and the ethical judgement to navigate and use artificial intelligence effectively. The economist in me sees a simple opportunity cost. The time spent on a witch hunt could be invested in training students to use AI for literature reviews, exploratory coding, data visualisation, or the messy first draft that every writer knows only too well. Done right, AI lowers the fixed costs of scientific production and frees up scarce cognitive resources for more in-depth investigation, a productivity gain that every economist should welcome.

 

The fundamental question is not whether students will use AI. They are already doing so and that will not change. The real question is whether universities will choose to rule by control or lead by empowerment. If we cling to fear, we will stifle innovation and undermine confidence. If we embrace a coaching mentality, we will train thinkers who can collaborate with machines without giving up their agency.

 

Education has outlived calculators, spreadsheets, and statistical software. It will also survive generative AI, provided we shift our focus from detection to development, from panic to pedagogy. My own PhD supervisor, Professor Roy Mersland, is an example of this open approach. His encouragement to explore, test, and appreciate AI tools was truly inspiring. It fuelled my own passion for learning and for teaching AI to others. This would not have been possible if he had taken the policing, negative stance of many professors. The decision we make now will determine whether we navigate the age of AI with suspicion or with the power of possibility, and whether future generations write with fear or with the confidence to think like innovators in a world where intelligence is increasingly a common commodity.

Saturday, 17 May 2025

 The price of trust: How fraudsters are paralysing small businesses in Uganda 

By Richard Sebaggala

 


I was going to spend the weekend writing about artificial intelligence — its rise, its potential and what it means for our future. But then something happened that completely changed my focus. It was a story that reminded me how fragile the lives of ordinary business people in Uganda can be.

On Friday 16 May, while shopping in the town of Mukono, I ran into a friend I hadn’t seen for years. She used to run a wholesale business in Watoni. After a brief greeting and trying to remember where we had last seen each other, she told me what had happened to her.

Her story was painful to hear.

Some time ago, three men entered her shop and pretended to be procurement officers from Mukono High School. They said they were buying food for the secondary school and its sister primary school. The list included rice, sugar, posho and other food items. After selecting the items, they loaded everything onto a lorry and asked her to prepare the receipts. They told her she would accompany them to the school, deliver the goods and receive payment directly from the school bursar.

 

Everything seemed legitimate. The school was nearby. She even spoke to someone on the phone who introduced herself as the school's bursar and confirmed that they were expecting the delivery. When they arrived at the school gate, the security guard said the bursar was in a meeting, which matched what she had been told on the phone. This small detail convinced her that the transaction was genuine.

She got out of the lorry and waited inside the school while the men said they would first deliver the goods for the primary school, which were supposedly packed on top. She waited. And waited. But the lorry never came back. Only later did she learn that the school’s real bursar was a man. The woman on the phone had been part of the scam. The men had disappeared with goods worth 35 million shillings — her entire capital. Just like that, everything she had built up was gone.

Her troubles didn’t end there. Not long after the incident, the landlord increased the shop rent from 800,000 to 1.5 million shillings and demanded payment for a whole year in advance. With no stock and no money, she had no choice but to close the shop. She tried to start again in Bweyogerere, hoping for a fresh start, but the business never took off. That was the end of her life as a businesswoman.

 

As she told the story, there was a serenity in her voice that hid what she had been through. She had come to terms with it. But I left the conversation feeling heavy, troubled and angry — not only about what had happened to her, but also about how common stories like this are.

 

Uganda is often referred to as the most entrepreneurial country in the world. Our start-up rates are among the highest anywhere. But behind this headline lies a sobering reality. Most Ugandan businesses do not survive beyond the first year. Over 70 per cent collapse within five years. While lack of capital, poor business planning and market saturation are common explanations, we rarely talk about the threat of fraud and con artists.

 

The trick used on my friend was not a matter of bad luck. It was well planned and carefully executed. And unfortunately, such scams are not uncommon. Every day, small business owners fall victim to similar tactics. Years ago, there was a television programme that exposed how these scammers operate across the country. The programme was both fascinating and frightening. The scams were sophisticated, clever and disturbingly effective.

 

If someone took the time to document these tricks in detail and profile them, I believe the results would shock the nation. We are losing billions of shillings, not to economic downturns, but to manipulation and fraud.

 

The very next day, on Saturday the 17th, my own car was stolen while I was attending a funeral. This story deserves its own space. But it got me thinking about how easily things can be taken from us, no matter how careful or prepared we think we are.

 

There is an urgent need for practical business education that goes beyond accounting and customer service. Entrepreneurs need to be trained to check transactions, recognise manipulation and protect themselves. Fraud awareness should be part of every entrepreneurship course and support programme in this country.

 

At the same time, we need laws that treat economic fraud with the seriousness it deserves. These crimes don't just hurt individuals. They undermine economic confidence and discourage hard work and initiative. We also need awareness-raising campaigns and media platforms that explain these risks to the public clearly and accessibly.

Trust should be the foundation of business. But in today's Uganda, trust has become a dangerous gamble.

We can no longer ignore this crisis. We need to talk about it. We need to listen to those who have suffered and learn from their experiences. And we need to build systems that protect the honest and penalise the cheats.


To anyone who has lost a business, not because of bad decisions, but because someone took advantage of their trust, you are not alone. Your story is important. And it needs to be part of the national conversation about what it really means to be an entrepreneur in Uganda.

Tuesday, 13 May 2025

 The AI Writing Debate: Missing the Forest for the Trees

By Richard Sebaggala


 

A recent article published by StudyFinds titled “How College Professors Can Easily Detect Students’ AI-Written Essays” revisits an ongoing debate about whether generative AI tools like ChatGPT can really match the nuance, flair, and authenticity of human writing. Drawing on a study led by Professor Ken Hyland, the article argues that AI-generated texts leave much to be desired in terms of persuasiveness and audience engagement. They lack rhetorical questions, personal asides, and other so-called 'human' features that characterize good student writing. The conclusion seems simple: AI can write correctly, but it can't make connections.

But reading the article made me uneasy. Not because the observations are wrong, but because they are based on a narrow and, frankly, outdated understanding of what constitutes good academic writing. More importantly, they misrepresent the role of generative AI in the writing process. The arguments often portray Gen AI as if it were another human from a distant planet trying to mimic the way we express ourselves, rather than what it actually is, a tool designed to help us. And here’s the irony. I have experienced first-hand the limitations of human writing, even my own, and I see AI not as a threat to our creativity, but as a reflection of the weaknesses we have inherited and rarely challenged.

 

When I started my PhD at the University of Agder in Norway, many friends back home in Uganda already thought I was a good writer. I had been writing for years, publishing articles and teaching economics. But my confidence was shaken when my supervisor returned my first paper with over two thousand comments. Some of them were brutally honest. My writing was too verbose, my sentences too long and my arguments lacked clarity. What I had previously thought was polished human writing was actually a collection of habits I had picked up from outdated academic conventions. It was a difficult but necessary realisation: being human doesn’t automatically make your writing better. And yet many critics of AI-generated texts would have us believe that it's the very mistakes we’ve internalised, such as poor grammar, excessive verbosity, and vague engagement, that make writing human and valuable.

 

This is why the obsession with 'engagement markers' as the main test of authenticity is somewhat misleading. In good writing, especially in disciplines such as economics, business, law or public policy, clarity, structure, and logical flow are often more important than rhetorical flair. If an AI-generated draft avoids rhetorical questions or personal allusions, this is not necessarily a weakness. Rather, it often results in a more direct and focussed text. The assumption that emotionally engaging language is always better ignores the different expectations in different academic disciplines. What is considered persuasive in a literary essay may be completely inappropriate in a technical research report.

 

Another omission in the argument is that it ignores the role of the prompter. The AI does not decide on its own what tone to strike. It follows instructions. If it is asked to include rhetorical questions or to adopt a more conversational or analytical tone, it does so. The study’s criticism that ChatGPT failed to use personal speech and interactive elements says more about the design of the prompts than the capabilities of the tool. This is where instruction needs to change. Writing classes need to teach students how to create, revise, and collaborate using AI prompts. This does not mean that critical thinking is lost, but that it is enhanced. Students who know how to evaluate, refine, and build upon AI-generated texts are doing meaningful intellectual work. We're not lowering the bar; we're modernising the skills.

A recent study by the Higher Education Policy Institute (HEPI) in the UK revealed that 92% of university students are using AI in some form, with 49% starting papers and projects, 48% summarizing long texts, and 44% revising writing. Furthermore, students who actively engaged with AI by modifying its suggestions demonstrated improved essay quality, including greater lexical sophistication and syntactic complexity. This active engagement underscores that AI is not a shortcut but a tool that, when used thoughtfully, can deepen understanding and enhance writing skills.

It's also worth asking why AI in writing causes more discomfort than AI in data analysis, mapping, or financial forecasting. No one questions the use of Excel in managing financial models or Stata in econometric analysis. These are tools that automate human work while preserving human judgment. Generative AI, if used wisely, works in the same way. It does not make human input superfluous. It merely speeds up the process of creating, organising, and refining. For many students, especially those from non-English speaking backgrounds or under-resourced educational systems, AI can level the playing field by providing a cleaner, more structured starting point.

The claim that human writing is always superior is romantic, but untrue. Many of us have written texts that are grammatically poor, disorganized, or simply difficult to understand. AI, on the other hand, often produces clearer drafts that more reliably follow an academic structure. Of course, AI lacks originality if it is not guided, but this is also true of much student writing. Careful revision and critical thinking are needed to improve both. This is not an argument in favour of submitting AI-generated texts. Rather, it is a call to rethink the use of AI as a partner in the writing process, not a shortcut around it.

Reflecting on this debate, I realise that much of the anxiety around AI stems from nostalgia. We confuse familiarity with excellence. But the writing habits many of us grew up with, cumbersome grammar, excessive length and jargon-heavy arguments, are not standards to be preserved. They are symptoms of a system that is overdue for reform. The true power of AI lies in its ability to challenge these habits and force us to communicate more consciously. Instead of fearing AI's so-called impersonality, we should teach students to build on its strengths while reintroducing their own voice and judgment.

We are not teaching students to surrender their minds to machines. We are preparing them to think critically in a world where the tools have evolved. That means they need to know when to use AI, how to challenge it, how to add nuance, and how to edit its output to reach a deeper understanding. Working alongside AI requires more thinking, not less.

The writing habits we've inherited are not sacred. They are not the gold standard just because they are human. We need to stop missing the forest for the trees. AI is not here to replace the writer; it is here to make our writing stronger, clearer, and more focused, if only we let it.

Thursday, 8 May 2025

 The 80/20 Rule of AI: Embracing Balance for Massive Efficiency Gains

By Richard Sebaggala 

 

In the late 19th century, the Italian economist Vilfredo Pareto made a simple but profound observation when he analysed land ownership in Italy. He found that around 80 per cent of the land was owned by only 20 per cent of the population. A similar imbalance was evident in different areas of life, highlighting that a small proportion of causes often produces most of the results. Over time, this principle became known as the Pareto principle or, more generally, the 80/20 rule.

The principle found wider application in the middle of the 20th century when Joseph Juran, a pioneer of quality management, recognised its importance for industrial production. Juran found that around 80 per cent of problems in manufacturing processes can be traced back to 20 per cent of the causes. He aptly described these critical factors as "the vital few" as opposed to "the trivial many". This idea quickly became a cornerstone of management, business, and productivity thinking, guiding companies and policymakers to focus their efforts on the most important factors.
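To make the arithmetic behind the rule concrete, here is a minimal sketch in Python using made-up defect counts of the kind Juran might have tallied. The figures are invented purely for illustration, and real data rarely splits at exactly 80/20; what matters is the skew, with a handful of causes carrying most of the weight.

```python
# A minimal sketch of the 80/20 arithmetic, using invented defect counts.
# Rank the "causes" by contribution and ask what share of the total the
# top 20 per cent account for. Real data rarely splits at exactly 80/20.
defect_counts = {
    "misaligned jig": 410, "worn cutting blade": 240, "supplier variance": 95,
    "operator fatigue": 60, "humidity": 40, "power dips": 30,
    "mislabelled bins": 20, "calibration drift": 15, "packaging tears": 10, "other": 5,
}

total = sum(defect_counts.values())
ranked = sorted(defect_counts.values(), reverse=True)
vital_few = ranked[: max(1, len(ranked) // 5)]   # the top 20 per cent of causes
share = sum(vital_few) / total

print(f"Top 20% of causes account for {share:.0%} of all defects")  # about 70% here
```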

In today’s world, the 80/20 rule remains remarkably relevant, especially as artificial intelligence (AI) increasingly enters our work. AI tools such as ChatGPT, Gemini, and Avidnote have become popular for tasks such as writing reports or composing emails. While these tools are very powerful, their true value lies not in being expected to do everything, but in striking the crucial balance between machine output and human input. AI can effectively handle the first 80 per cent of many tasks, the groundwork, the structuring, the heavy lifting. However, the last 20 per cent, the area where quality and importance lie, still requires human attention.


A recent experience conducting a thematic analysis as part of a research project on how media narratives shape the perceptions of business owners brought this balance home to me. With qualitative responses from 372 business leaders to analyse, the task of coding and identifying themes initially felt overwhelming. Normally, such work would require weeks of meticulous reading, coding, and interpretation. However, by using Avidnote and ChatGPT, I was able to speed up many of the early stages considerably. I transcribed the audio recordings using Avidnote, uploaded the transcriptions, and asked ChatGPT to summarise response segments, suggest initial codes, and even draft basic descriptions of emerging themes based on the study objectives. The AI provided a solid starting point, an overview of ideas and patterns that helped me visualise the data in a manageable way.

But that was just the beginning. While the AI’s suggestions proved useful, they lacked nuance. To ensure the validity and depth of the research, I had to carefully review each suggested code, compare it to the raw data, and determine its true relevance to the context of the study. I rewrote the theme descriptions, inserted direct quotes from respondents, and linked the findings to the wider scientific literature. The AI was able to handle the mechanics of pattern recognition and drafting the initial text, but it could not capture the deeper meaning of the interviewees’ statements. This required my judgment as a researcher.
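For readers who want to see this division of labour spelt out in code, the sketch below shows what such an AI-assisted first pass might look like in Python, assuming the openai package (version 1 or later) and an API key. The model name, prompt wording, and sample responses are invented for illustration; my own workflow ran through Avidnote and the ChatGPT interface rather than a script, and every suggestion still came back to me for review.

```python
# A minimal, hypothetical sketch of the AI-assisted first pass (the "80 per cent"),
# assuming the openai Python package (v1+) and OPENAI_API_KEY in the environment.
# Model name, prompt wording, and sample responses are illustrative only.
from openai import OpenAI

client = OpenAI()

responses = [
    "Media coverage makes customers assume every small trader is cutting corners.",
    "Positive stories about local firms bring in new clients within days.",
    # ... the remaining responses from the 372 business leaders
]

def suggest_initial_codes(text: str) -> str:
    """Ask the model for candidate codes; these are suggestions to review, not final themes."""
    prompt = (
        "You are assisting a thematic analysis of how media narratives shape "
        "business owners' perceptions. Suggest two or three short candidate codes "
        f"for this response, each with a one-line rationale:\n\n{text}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# The 80 per cent: machine-generated starting points for every response.
draft_codes = {i: suggest_initial_codes(r) for i, r in enumerate(responses)}

# The 20 per cent: each suggestion goes back to the researcher, to be checked
# against the raw transcript and the study objectives before it is accepted.
for i, codes in draft_codes.items():
    print(f"Response {i}:\n{codes}\n-> review against the transcript before accepting\n")
```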

Referring back to Braun and Clarke’s well-established paper on thematic analysis greatly strengthened this refinement process. This is an example of the crucial 20 per cent effort that makes all the difference. By basing the thematic analysis on recognised academic standards, I not only improved the quality of the AI’s output but also steered it to align with specific frameworks and scholarly expectations. At this point, the AI becomes a more precise tool that not only generates words but also produces work that meets higher standards, because I actively aligned it to those standards. When you anchor the results of AI in trusted sources, be it Braun and Clarke on qualitative analysis or other leading texts in your field, you can be confident that the results will stand up to scrutiny.

 

Consider individuals who are highly skilled in areas such as writing, data analysis or grant design. Years of dedication have honed these skills. Now, with AI as a partner, the time-consuming mechanical aspects of the process become less demanding. AI can do the first 80 per cent of the work, so you can apply your honed skills where they really matter. You bring the clarity, insight and polish that machines alone cannot deliver. AI augments your capacity, but your expertise ensures that the final product is not only complete, but also compelling.

 

This experience reflects the dynamics in many areas where AI is a valuable tool today. Take, for example, the creation of grant applications. AI can quickly formulate problems, outline goals or suggest standard frameworks such as theories of change. It can make the initial stages more manageable, especially when deadlines loom. But no AI can truly capture the unique story of a project or the subtle nuances required to resonate with a particular donor. That requires the involvement of someone who is intimately familiar with the project’s history, its goals and the donor’s specific expectations.

The same principle applies to different contexts, whether it is the preparation of a policy brief or an academic paper. Artificial intelligence can provide the basic structure, but your expertise breathes life into this framework. And this is where the 80/20 rule offers such a valuable perspective. AI is great for the 80 per cent that is repetitive, structural or mechanical. But the remaining 20 per cent, which includes context, interpretation and creativity, clearly belongs to the human side of this powerful partnership.

Understanding this balance not only saves time, but also fundamentally changes the way we approach work. By allowing AI to manage the more routine aspects of a task, we can focus on the elements that really matter. The result is not just faster work, but demonstrably better work. It allows us to spend less time on what machines can do and more time on what only we can achieve.

The 80/20 rule, which has long been used to understand wealth distribution and production efficiency, now provides a crucial framework for understanding how AI can be used effectively. The efficiency gains are significant, but they go beyond mere speed. It's about strategically channelling human energy to where it can bring the most benefit. AI can go most of the way, but the final stretch of constructing meaning and ensuring quality remains our domain.

Sunday, 4 May 2025

 The Age of Amplified Analogies: How AI Mirrors Human Thought

By Richard Sebaggala


For as long as I’ve studied economics, there’s been one central assumption that has guided much of the field: humans are rational. From the days of Adam Smith, economists believed that people make decisions logically, weighing costs and benefits to arrive at the best possible outcome. But over time, this belief has been quietly dismantled. Psychologists and behavioral economists have shown, again and again, that people rarely live up to this ideal. Our decisions are messy, influenced by emotions, habits, cognitive biases, and limited information.

Recently, a new idea has added another layer to this conversation—one that doesn’t just challenge the notion of rationality but offers a different way of thinking about how we, as humans, make sense of the world. Geoffrey Hinton, a name familiar to anyone following the evolution of artificial intelligence, argues that humans aren’t really reasoning machines at all. We are, in his words, “analogy machines.”

Hinton’s view is simple but striking. He suggests that we don’t move through the world applying strict logic to every situation. Instead, we understand things by making connections, by comparing one experience to another, by spotting patterns that help us navigate the unfamiliar. Reasoning, the kind that builds mathematical models or legal systems, is just a thin layer that sits on top of all this pattern recognition. Without it, we wouldn’t have bank accounts or the ability to solve equations. But without analogies, we wouldn’t be able to function at all. It is no wonder, then, that whenever you face a situation for which you have no experience or pattern to draw on, you feel as lost as a complete novice; a professor of economics with no knowledge of car mechanics can still feel foolish at the garage.

As I thought about this, I found myself reflecting on the ongoing debates I’ve had with colleagues and friends who remain skeptical of AI. Many of them argue that AI is, at best, a useful tool, but it can’t approach the depth or richness of human intelligence. They believe there’s something unique, perhaps sacred, about how humans think, something AI can never replicate.

But if Hinton is right, and I find his argument persuasive, then the way we think and the way AI works aren’t as different as we might like to believe. After all, what does a large language model like ChatGPT do? It scans through vast amounts of information, recognizes patterns, and makes connections. In other words, it makes analogies. The difference is that AI draws on far more data than any one human ever could.

It’s a humbling thought. Much of what we take pride in, our ability to write, to solve problems, to make decisions, is rooted in this analogy-making process. We reach into our memories, find similar situations, and use them to guide what we do next. But we do this with limited information. We forget things. We misremember. We carry biases from one situation to another, sometimes without realizing it.

AI doesn’t have these limitations. It doesn’t get tired or distracted. It can sift through millions of examples in seconds, pulling out patterns and insights we might miss. This doesn’t mean AI is better than humans, but it does mean that in certain ways, it can amplify what we already do, helping us see further, make better connections, and avoid some of the pitfalls that come from relying on incomplete or faulty memories. However, it's crucial to acknowledge that AI can also reflect and amplify existing biases present in the data it's trained on, making human oversight essential to ensure fairness and accuracy.

I have spent much of the past five years reflecting on how people make decisions, drawing on behavioural insights. The idea that we’re not purely rational was something behavioral economics forced me to accept. But Hinton’s insight pushes that understanding even further. It suggests that at the core of human thinking is something far more organic and intuitive, something that AI, in its own way, mirrors.

Intriguingly, recent research from Anthropic, the creators of Claude, offers a glimpse into this mirroring. Their efforts to understand the so-called "black box" of AI reveal that large language models like Claude don't always arrive at answers through purely logical steps that align with their explanations. For instance, while Claude can generate coherent reasoning, this explanation sometimes appears disconnected from its actual processing. Furthermore, their findings suggest that Claude engages in a form of "planning" and even possesses a shared conceptual "language of thought" across human languages.

These discoveries, while preliminary, hint at a less purely algorithmic and more intuitively structured process within advanced AI than previously assumed. Just as human decision-making is influenced by subconscious biases and heuristics, AI might be operating with internal mechanisms that are not always transparent or strictly linear. This strengthens the notion that the core of intelligence, whether human or artificial, may involve a significant degree of organic and intuitive processing, moving beyond purely rational models.

And that brings me back to the resistance I often encounter around AI. It makes me wonder if some of this resistance isn’t about AI itself, but about what it reflects back to us regarding our reliance on pattern recognition. If AI can perform tasks we associate with intelligence, like making analogies, writing essays, and answering questions, then perhaps we must confront the idea that much of what we deemed uniquely human is, in fact, rooted in mechanical processes of pattern matching. Maybe the underlying fear isn’t that AI will surpass us, but that it reveals the extent to which our own abilities are built upon these very mechanisms.

But there’s another way to see it. Rather than feeling threatened, we might see AI as a chance to fill in the gaps where our human thinking falls short. It can’t replace the reasoning layer we rely on for complex tasks, but it can help us expand the reach of our analogies, connect ideas across disciplines, and spot patterns we would otherwise miss. In doing so, it can make us better thinkers.

To me, this collaborative potential is the true opportunity. AI isn’t destined to outsmart us; it is poised to work alongside us, amplifying the strengths of human thinking while compensating for our inherent imperfections. If we embrace the idea that much of our cognition is rooted in analogy-making, then AI transforms from a rival into a powerful partner, one that can help us expand our thinking, question our biases, and perceive the world through novel lenses. So, perhaps it’s time to stop arguing about whether AI can think like humans. The more important question is: how can it help us think better?