Monday, 7 July 2025

 The Ghost in the Machine? What the MIT Study Gets Wrong About Thinking with ChatGPT

By Richard Sebaggala


 

In June 2025, researchers from the MIT Media Lab published a study titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (Kosmyna et al., 2025). The study tracked 54 participants over four months as they completed SAT essay tasks in three modes: using only their own thinking, using traditional search engines, or using ChatGPT. Using electroencephalography (EEG), the researchers found that while participants using ChatGPT wrote significantly faster, they showed markedly weaker neural connectivity, particularly in the alpha and theta bands, which are associated with attention, memory, and semantic processing.

 

The authors interpret this reduction as a sign of "cognitive debt," a condition in which reliance on AI tools weakens the brain's engagement with the task. ChatGPT users not only wrote more uniform and less original essays, but also showed significant memory deficits: more than 83% could not accurately recall key content just minutes after writing the essay, compared to only 11.1% of those who worked without help. The study raised particular concern for young people, whose developing brains may be more susceptible to long-term cognitive shifts caused by repeated reliance on such aids.

 

The study has caused concern in both academic and public discourse, fuelling fears that AI tools such as ChatGPT could blunt our minds and diminish our ability to think independently. The fear is reminiscent of a well-known philosophical metaphor, the "ghost in the machine," which asks whether human consciousness and machine function can coexist. Here, the worry is that the machine (AI) supplants the mind (human agency) and reduces thinking to something automatic, mechanical, and impersonal. But is this really the case? Or is AI more likely to become a dance partner, one that reshapes our thinking without erasing the thinker?

 

We hear a lot about the "decline of agency" risks associated with AI: the fear that our critical thinking skills will diminish and our cognitive independence will erode if we rely on these tools. This is a valid concern, and Cornelia Walther's recent article in Psychology Today rightly warns that AI can go from being a helpful assistant to a crutch we can no longer do without. But that is not the whole story. From my practical experience, I know that once you discover the areas where AI truly augments your competence, human agency does not diminish; it increases.

 

Reinterpreting cognitive activity

At this point, a more critical view is required. The study's interpretation rests on the assumption that conventional, pre-AI markers of cognition, such as heightened brain-wave activity or strong memory performance, are the gold standard for learning. But in a world where intelligence is increasingly supported by digital systems, we need to ask whether reduced neural activity reflects cognitive decline or simply a shift in how thinking is distributed between humans and machines.

 

Throughout history, major technological advances that lightened the cognitive load, such as the calculator, GPS or search engines, were met with initial fears of "weakening" the human brain. But humankind has always adapted. Before GPS, for example, our brains had to store and retrieve detailed mental maps and location-specific cues to find our way. Today, with Google Maps, we no longer need to memorise every turn or landmark, so we can devote our cognitive energy to higher-level planning, decision-making or even creative thinking while travelling. The reduced reliance on spatial memory does not indicate cognitive decline; it shows how technology can reallocate mental resources. The real point isn't that we use less brain power, but that our brains work differently. Likewise, the lower alpha-band connectivity observed when writing with AI is probably not a sign of cognitive decline. Rather, it could indicate a redistribution of effort towards more strategic tasks, such as evaluating AI-generated output or refining prompts, activities that current EEG measures alone cannot fully capture.

 

Limitations of the MIT study

Equally important is the fact that the MIT study includes no baseline data on participants’ cognitive abilities. We do not know whether some participants were already more reliant on digital aids, or whether their writing styles, memory capacities, or learning preferences differed before the study began. Without this baseline, it is impossible to say whether the observed neural differences were caused by ChatGPT or simply correlate with pre-existing individual differences. The study measures what happens during and shortly after the use of ChatGPT, but says little about how cognitive patterns develop with thoughtful, long-term use of AI tools.

 

Another important concern is the study’s use of memory recall as a proxy for learning. In traditional educational systems, memory has long been a central measure of mastery. But in an information-rich world where AI comes into play, knowing how to access, evaluate and apply knowledge is often more important than being able to recall it verbatim. The assumption that true learning occurs only when knowledge is encoded internally ignores the fact that modern cognition now operates in a broader ecosystem, the "extended mind," that includes digital tools. The problem is not that students don't remember what they wrote with AI. The problem is the assumption that remembering is still the highest form of understanding.

 

Rethinking AI integration in education

To be fair, the concept of "cognitive debt" does have some merit. When learners, or any users, engage with ChatGPT passively, copying and pasting without processing, real thinking suffers. But this is not a fault of the AI itself; it is a fault in how the AI is used. Instead of rejecting AI, educators and institutions should focus on integrating it into learning in a meaningful way: teaching prompt design, encouraging critical reflection on AI output, and helping students use tools like ChatGPT as thinking partners rather than crutches.

 

This discussion is particularly urgent for regions like Africa, where the integration of AI into education is still nascent. Misinterpreting studies like this one could reinforce hesitation and delay much-needed innovation. For educators and leaders who have yet to engage with AI, this kind of research might seem to confirm their doubts, when in fact it underscores the need for better AI literacy rather than retreat.

 

The real question is not whether ChatGPT reduces brain activity. It’s whether we’re measuring the right kind of activity in the first place. Rather than judging lower EEG connectivity as a loss, we should be asking: are students getting better at navigating, questioning, and reconfiguring information in an AI-rich environment?

 

Moving forward

The MIT study raises valuable questions, but it should be a starting point for deeper, more nuanced investigations. What kind of cognition do we want to cultivate in the era of AI? What skills are most important when machines can instantly generate, summarise and retrieve information? And how do we equip learners, not just to avoid cognitive debt, but to add cognitive value through the strategic, reflective, and ethical use of AI?

We are not facing a cognitive collapse. We are facing a change in the way intelligence is organised. And it’s time for our metrics, assumptions, and teaching methods to catch up.

Thursday, 26 June 2025

Turning hours into gold: How generative AI can unlock Uganda’s productivity potential

 

By Richard Sebaggala



The world is experiencing a quiet revolution in the way work is done. Across industries and continents, generative AI tools like ChatGPT, Claude and Copilot are changing how people handle everyday tasks. A recent global survey presented by Visual Capitalist found that workers using AI can cut the time needed to complete their tasks by more than 60 percent. Writing a report, for example, dropped from 80 minutes to 25; fixing technical problems, which normally took almost two hours, was reduced to 28 minutes; and solving mathematical problems fell from 108 minutes to just 29. These are not marginal improvements; they represent a complete change in what a single employee can accomplish in a day.

 

The survey also found that tasks that require deeper thinking and human judgment, such as critical thinking, time management or instructing others, saw dramatic increases in productivity. Time spent on critical thinking dropped from 102 to 27 minutes. The time required to instruct and manage employees was also reduced by almost 70 percent. This shows that AI is not only useful for programming or technical analysis, but also for teaching, planning, decision-making and communicating. When people are equipped with the right tools, they are able to produce much more in much less time.

 

While these gains are impressive in advanced economies, their potential is even greater in countries like Uganda. For decades, low productivity has held back development in many African countries. In sectors such as agriculture, education, small businesses and government, workers still spend large parts of their day doing slow, manual and repetitive tasks. Productivity levels remain far below the global average, and this gap continues to fuel inequality between the global North and South.

Uganda has recognized this challenge and is responding with a bold new vision. With its tenfold growth strategy, the country aims to increase its GDP from $50 billion to $500 billion in just 15 years. The plan focuses on unlocking value in key sectors such as agriculture, tourism, minerals, and oil and gas. However, for this vision to succeed, it is not enough to invest in industries alone. Uganda also needs to improve the way people work, and this is where AI can be a game changer.
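
To put that ambition in numbers, here is a back-of-envelope calculation of my own (not a figure from the strategy document): growing tenfold over 15 years implies a sustained average growth rate of roughly 16.6 percent per year, which is exactly why productivity gains of the kind described above matter so much.

```python
# Implied average annual growth rate for a tenfold rise over 15 years.
# Solving (1 + r) ** 15 = 10 for r:
rate = 10 ** (1 / 15) - 1
print(f"{rate:.1%}")  # -> 16.6% per year, sustained for 15 years
```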

 

Many people still think that AI is something reserved for big companies or tech firms. However, the most immediate impact in Africa could come from small, everyday businesses. Just recently, I had an experience in the city of Entebbe that brought this home to me. I wanted to take a photo at a small secretarial bureau that offers passport photos, typing and printing services. While I was waiting, I watched a young man helping a woman who had come in with handwritten pieces of paper. She was applying for a job as a kindergarten teacher and needed a typed CV and cover letter. The man patiently asked her questions, read through her notes, typed slowly, rephrased what she had said and tried to create a professional document.

 

As I watched, I was struck by how much time they were spending on something that generative AI could do in seconds. All the man had to do was photograph or scan the handwritten pages, upload them to ChatGPT, and ask it to create a customised CV and cover letter. He could even include the name of the school to make the cover letter more specific. In less than five minutes, she would have gone home with polished, professional documents, and the man could have moved on to the next client. Instead, this one task took almost an hour.
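
For readers curious about the mechanics, here is a minimal sketch of that five-minute workflow using OpenAI's Python client. The file name, prompt wording and job details are illustrative assumptions of mine; in practice, the same steps can be done directly in the ChatGPT app on a phone.

```python
import base64
from openai import OpenAI  # assumes the official 'openai' package is installed

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Encode a photo of the client's handwritten notes (file name is illustrative).
with open("handwritten_notes.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe these handwritten notes, then draft a "
                     "one-page CV and a tailored cover letter for a "
                     "kindergarten teaching post."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)  # the CV and letter, ready to edit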

 

This small example represents a much larger reality. Across Uganda, hundreds of thousands of people run small businesses like this secretarial bureau. They type, translate, write letters, prepare reports and plan budgets, often by hand or on outdated computers. Most of them don't realize that there is a faster, smarter way to do the same work. AI tools, especially chatbots and mobile-based platforms, can multiply their output without the need to hire more staff or buy expensive software. Time saved is money earned. In many cases, this means better service, more customers and more dignity at work. Personally, before I start a task, I now ask myself how much faster I could do it with AI.

 

In schools, AI can help teachers create lesson plans, grade assignments and design learning materials with just a few clicks. In government agencies, it can optimize reporting, organize data and improve decision-making. In agriculture, farmers can use mobile AI tools to diagnose plant diseases, find out about the weather or call up market prices in their local language. For young entrepreneurs, AI can help write business proposals, design logos, manage inventory and automate customer messaging.

 

Uganda has one of the youngest populations in the world. Our youth are curious about technology, innovative and willing to work. What many of them lack is not ambition, but access to tools that match their energy. Generative AI could completely change Uganda's productivity curve if it is widely adopted and made accessible through training and mobile-friendly platforms. This does not require billions of dollars or complex infrastructure. In many cases, awareness and basic digital skills are enough.

 

To seize this opportunity, Uganda needs to act thoughtfully. Schools and universities should teach students how to use AI tools as part of their education. Government employees should be trained to use AI in their daily tasks. Innovators should be supported to develop localized AI solutions that work in Ugandan languages and sectors. And, perhaps most importantly, the average citizen, like the secretarial worker, needs to see that AI is not a distant or abstract technology. It is a tool they can use today to work faster, serve better, and earn more.

 

If Uganda is serious about achieving its tenfold growth strategy, improving the way people work must be at the center of the journey. AI will not replace human labor; it will augment it. In a country where every minute counts, the difference between three hours and thirty minutes could be the difference between survival and success.

Friday, 20 June 2025

 The Vanishing Ladder: Rethinking Academic Training in the Age of AI

By Richard Sebaggala

 

An article I recently came across contains a quiet but significant observation: AI is not only replacing jobs, it is also replacing learning opportunities. The types of tasks that helped young people gain experience (entry-level administrative work, basic data tasks, support functions) are being quietly handed over to machines. And while this trend is already causing alarm in developed countries, I couldn’t help but think how much more disruptive it could be in countries like Uganda, where graduate unemployment is already painfully high and where academic training often lacks the necessary practical immersion.

In Uganda, almost every university relies on internships as a bridge between theory and practice. They are the one moment in a student's academic career when students can actually enter a workplace and apply what they have learnt in class. Of course, these internships are not enough (the number of opportunities is often far smaller than the number of students who need them), but at least they exist. In my experience supervising interns across various organizations, I've observed that these placements serve a dual purpose: interns handle repetitive administrative tasks while also quietly observing and learning the ropes.

Imagine what will happen when the tasks that are critical to learning, such as filing reports, organising data, writing summaries and composing simple letters, are done faster, cheaper and more consistently by AI tools. This is not a distant scenario. It is already happening in global companies and is slowly finding its way into companies and NGOs in Uganda. If we are not careful, AI will not only destroy jobs, it will also displace the basic tasks through which students observe, try, fail, ask questions and grow. If these tasks disappear, what will happen to internships as we know them?

To me, this is the more pressing threat: not mass unemployment overnight, but the silent erosion of learning opportunities. The risk is not just that students won't find a job after graduation, but that they never get a fair chance to prepare for the labour market at all. We often talk about AI taking over "low-skilled" jobs, but these tasks, as mundane as they may seem, are where many professionals actually start.

What worries me most is how little our academic systems have adapted. Universities are still producing graduates trained for jobs that may no longer exist. Curricula still focus heavily on theory and make little effort to incorporate AI literacy, digital skills or simulations of real-world work environments. Meanwhile, students are entering a labour market that is shifting rapidly under their feet, without institutional support to help them find their footing.

At this point, universities and faculties must ask themselves a new question: what will your graduate look like in the age of AI? A law school should ask how a future lawyer will work with or alongside AI. An economics department should ask what tools an economist will need to stay relevant now that forecasting and modelling are supported by AI. Business departments, medical schools, engineering programmes, even social science departments: all need to think of their graduates not as competitors to AI, but as people able to lead in a world shaped by it. Once we start asking these kinds of questions, we stop seeing AI as a threat and start seeing it as a force to be harnessed. More importantly, we start to orientate education towards a future that is already upon us.

If faculties don't start this rethink now, we risk something much more subtle but deeply damaging. We could end up with a generation that is well-educated on paper but excluded in practice: graduates who are equipped with knowledge but lack the ability to apply it. They will enter a world that demands real-world experience and digital adaptability, yet the very systems that trained them will have offered neither. This is not an alarm bell for a distant future, but a call for urgent reflection. The cost of ignoring these changes now will be far greater when the full consequences unfold.

We need to completely rethink the purpose of academic education. It is no longer enough to prepare students for exams. We need to prepare them to collaborate with AI, to learn quickly, to solve problems in environments where humans and machines will work side by side. This means that we need to revise the way we teach, but also that we need to create new types of internships and apprenticeships where students are exposed to AI-powered workflows rather than being displaced by them.

If traditional internships are shrinking, then universities should start using AI to simulate work tasks. If there are fewer and fewer entry-level positions, companies should be encouraged to keep some open, not for the sake of efficiency, but for the future of human learning. We may not be able to stop automation, but we can at least make sure it doesn’t flatten the stepping stones that young people need to grow into the skilled workers we will one day depend on.

AI will continue to change the landscape. That is a fact. But how we prepare our students to traverse this changing terrain is still a choice. The challenge now is not to resist AI, but to make sure it doesn’t steal our opportunity to learn. Because no matter how powerful machines become, it is still people who ensure that institutions function, societies develop and ideas grow. And they all have to start somewhere.

The future of academic education is not about choosing between people and machines. It’s about creating space for people to grow, even if machines do more. If we get this balance right, we’ll be fine. But if not, we’ll look back and realise that we built a smart future but forgot to educate the people who will inherit it.

Thursday, 12 June 2025

 The productivity dividend: No one should waste time on tasks that AI can do better

By Richard Sebaggala


 

A quiet revolution is underway in the UK civil service. More than 400,000 civil servants are being trained to integrate artificial intelligence into their daily work. This training is not just about improving performance, but also about rethinking what work should look like in the first place. The policy mantra underpinning this change is as direct as it is radical: "No person's substantive time should be spent on a task where digital or AI can do it better, quicker, and to the same high quality and standard digitally or through artificial intelligence."

 

It's not about replacing people. It's about distributing work more intelligently, and about recognising that not all hard work is productive and that not all productivity has to come at the expense of people. What we're seeing is the emergence of a productivity dividend: a return that comes from reconsidering how time is used and who, or what, is best suited to the task at hand.

 

For those of us who have called on students, researchers and institutions in Africa to engage early with AI, this shift is both encouraging and instructive. I have often argued that the real question is not whether AI can help, but why we are still doing things manually when it already can.

 

There is more at stake here than efficiency. This moment forces us to reckon with something deeper. As Kai-Fu Lee pointed out in AI Superpowers (2018), previous technologies such as electricity, the steam engine and the internet extended human capabilities. AI is different: it competes with our cognitive abilities. It doesn't just automate; it encroaches on human thinking, decision-making and problem-solving. This makes it a unique disruptor, but also a unique transformer.

 

The evidence is already there. In the UK, AI tools such as "Humphrey" have been used to process public consultations faster than human analysts, with equally reliable results. The government estimates that it could save 75,000 working days per year on 500 annual consultations, which equates to around £20 million in labour costs. This is not simply a budget cut. It's a reallocation of time and talent in favour of tasks that require real human insight.

 

The pace of change continues to accelerate. OpenAI co-founder Ilya Sutskever put it bluntly at a recent University of Toronto event: "Anything I can learn, anything any of you can learn, AI could do." His message was not speculative; it was a warning. The fact that AI can't do everything yet doesn't mean it never will. It only means it can't do it yet. The real danger is standing still while AI evolves past you.

 

So why are we still teaching students to format references by hand when tools can do it in seconds? Why are policy staff reading thousands of pages of public feedback when AI can collate and summarize it more efficiently? Why do researchers spend days cleaning data when automation can do it in minutes? For those familiar with econometric analysis, refusing automation in this way is akin to insisting that we derive the beta coefficients of a regression using the abstract textbook formulas we learned in second-year econometrics, ignoring powerful software like Stata or SPSS, where a single command produces the results and your real job is to interpret them and check their validity.
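
To make the analogy concrete, here is a minimal sketch in Python (standing in for Stata or SPSS), using simulated data of my own invention: the hand-derived textbook formula and a single library call give exactly the same estimates, so the only real question is where the human time goes.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data for illustration: y = 2 + 3x + noise.
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2 + 3 * x + rng.normal(size=200)

# The second-year econometrics route: beta_hat = (X'X)^(-1) X'y by hand.
X = np.column_stack([np.ones_like(x), x])
beta_by_hand = np.linalg.solve(X.T @ X, X.T @ y)

# The one-command route: let the software produce the same estimates.
beta_by_software = sm.OLS(y, X).fit().params

print(beta_by_hand)      # identical estimates either way;
print(beta_by_software)  # the human value lies in interpreting them.
```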

 

 

In Africa, where resources are limited but creativity and adaptability are abundant, we have a rare opportunity to leapfrog. We aren't burdened by the outdated and often inefficient "legacy systems" that many developed nations are trying to transition away from. Instead of having to dismantle old, established ways of working, we can design our education, government, and economic systems with AI at the center from the start. This approach allows us to see AI not as a threat to be managed, but as a powerful tool to be mastered.

 

However, this means that we can no longer hesitate. It means that students must move beyond the fear of AI plagiarism and learn to work with AI responsibly. It means pushing government departments and universities to use AI to reduce administrative backlogs so that their staff can focus on delivering meaningful services. And it means cultivating a culture where the question "Why am I still doing this manually?" becomes second nature.

 

This is not a call for blind automation. It's a call for strategic delegation. If a machine can do a task better and just as reliably, leaving it in human hands is not care, it's inefficiency.

 

The real dividend of AI is not in what it replaces. It lies in what it unlocks: our ability to think deeper, act faster, and serve better. However, this dividend will only be paid to those who choose to collect it.

The future will not wait. And neither should we.

We need to start asking ourselves, our teams, and our institutions a difficult but necessary question: Why are we still doing this manually? And if the answer is: "Because that's how it's always been done," then perhaps it's time we let AI show us a better way. 

Friday, 23 May 2025

 AI in Education: Shifting from Policing to Partnering

By Richard Sebaggala


 

I recently gave a keynote address at the University of South Africa on the transformative power of AI for online and distance learning. The Institute for Open and Distance Learning (IODL) had invited me to explore these possibilities at its symposium, “Growing Excellence in Teaching, Learning and Research in Open Distance and e-Learning: Learning Analytics and Post-graduate Research”, but the real conversation began in the Q&A session. While we were discussing the AI revolution, a deep-seated fear surfaced: students "cheating" with AI. What struck me was that this fear stemmed not from ethical concerns but from a lack of understanding of AI's capabilities and how to use them responsibly.

 

This pattern is not unique; I have encountered the same fear-driven reaction back home in Uganda. The discussion about AI in higher education is fuelled by panic rather than thoughtful pedagogy, and this unfairly affects students who are trying to learn responsibly. In the United States, some students are reportedly recording themselves writing essays just to prove that the essays were not created by ChatGPT, after unreliable AI detectors such as Turnitin mislabelled their work. A Ugandan student recently confided that after discovering how generative tools can support her learning, she could no longer ignore them. Yet her university now warns that even “10 per cent” AI help amounts to misconduct. Honest learners are forced into a ritual of self-monitoring simply because the machines can no longer recognise whose words belong to whom.

 

The academic world has faced similar fears before. We used to worry that tools like SPSS or Stata would jeopardise the integrity of research; today, no one asks whether "Stata wrote the regression." We judge the researcher's interpretation, not the software's calculations. Generative AI is no different. True intellectual labour has never consisted of typing out every word by hand. It lies in judgement, synthesis, and the courage to stand by one's own conclusions. When the AI writes a paragraph that you refine, contextualise, and critique, it is still unmistakably your work.

 

Despite this historical precedent, a disproportionate amount of institutional energy has gone into policing rather than pedagogy since ChatGPT debuted in late 2022. Publishers try to perfect their detection algorithms, universities impose blanket bans, and memos often read like prosecutorial briefs. Meanwhile, the very academics who need AI skills the most, especially in the Global South, are left without adequate training or support. In essence, we have prioritised building walls of surveillance over building bridges of understanding.

 

A healthier equilibrium is possible. Teachers need practical AI skills so they can model responsible use rather than criminalise it. Guidelines should distinguish between unethical outsourcing and legitimate support, just as style guides already distinguish between plagiarism and citation. Assessments can encourage reflection and transparency by asking students to outline their process, explain their prompts, and annotate the revisions they made. Detectors, if used at all, should open a dialogue, not pass a guilty verdict.

 

Above all, we need to remember the real purpose of education: to prepare learners for tomorrow's economy, not to preserve outdated work processes. The future job market will require students to have the critical AI skills, adaptive problem-solving abilities, and ethical judgement needed to navigate and use artificial intelligence effectively. The economist in me sees a simple opportunity cost. The time spent on witch hunts could be invested in training students to use AI for literature reviews, exploratory coding, data visualisation, or the messy first draft that every writer knows only too well. Done right, AI lowers the fixed costs of scholarly production and frees up scarce cognitive resources for deeper inquiry, a productivity gain that every economist should welcome.

 

The fundamental question is not whether students will use AI. They are already doing so and that will not change. The real question is whether universities will choose to rule by control or lead by empowerment. If we cling to fear, we will stifle innovation and undermine confidence. If we embrace a coaching mentality, we will train thinkers who can collaborate with machines without giving up their agency.

 

Education has outlived calculators, spreadsheets, and statistical software. It will also survive generative AI, provided we stop focusing on detection and start focusing on development, moving from panic to pedagogy. My own PhD supervisor, Professor Roy Mersland, exemplifies this approach. His open encouragement to explore, test, and appreciate AI tools was truly inspiring; it fuelled my own passion for learning AI and teaching it to others, and it would not have been possible had he taken the policing, negative stance of many professors. The decision we make now will determine whether we navigate the age of AI with suspicion or with a sense of possibility, and whether future generations write with fear or with the confidence to think like innovators in a world where intelligence is increasingly a common commodity.

Saturday, 17 May 2025

 The price of trust: How fraudsters are paralysing small businesses in Uganda 

By Richard Sebaggala

 


I was going to spend the weekend writing about artificial intelligence — its rise, its potential and what it means for our future. But then something happened that completely changed my focus. It was a story that reminded me how fragile the lives of ordinary business people in Uganda can be.

On Friday 16 May, while shopping in the town of Mukono, I ran into a friend I hadn’t seen for years. She used to run a wholesale business in Watoni. After a brief greeting and trying to remember where we had last seen each other, she told me what had happened to her.

Her story was painful to hear.

Some time ago, three men entered her shop pretending to be procurement officers from Mukono High School. They said they were buying food for the secondary school and its sister primary school. The list included rice, sugar, posho and other food items. After selecting the items, they loaded everything onto a lorry and asked her to prepare the receipts. They told her she would accompany them to the school, deliver the goods and receive payment directly from the school bursar.

 

Everything seemed legitimate. The school was nearby. She even spoke to someone on the phone who introduced herself as the school's bursar and confirmed that they were expecting the delivery. When they arrived at the school gate, the security guard said the bursar was in a meeting, which matched what she had been told on the phone. This small detail convinced her that the transaction was genuine.

She got out of the lorry and waited inside the school while the men said they would first deliver the goods for the primary school, which were supposedly packed on top. She waited. And waited. But the lorry never came back. Only later did she learn that the school’s real bursar was a man. The woman on the phone had been part of the scam. The men had disappeared with goods worth 35 million shillings — her entire capital. Just like that, everything she had built up was gone.

Her troubles didn’t end there. Not long after the incident, the landlord increased the shop rent from 800,000 to 1.5 million shillings and demanded payment for a whole year in advance. With no stock and no money, she had no choice but to close the shop. She tried to start again in Bweyogerere, hoping for a fresh start, but the business never took off. That was the end of her life as a businesswoman.

 

As she told the story, there was a serenity in her voice that hid what she had been through. She had come to terms with it. But I left the conversation feeling heavy, troubled and angry — not only about what had happened to her, but also about how common stories like this are.

 

Uganda is often referred to as the most entrepreneurial country in the world, and our start-up rates are among the highest anywhere. But behind this headline lies a sobering reality. Most Ugandan businesses do not survive beyond the first year, and over 70 per cent collapse within five years. While lack of capital, poor business planning and market saturation are common explanations, we rarely talk about the threat posed by fraud and con artists.

 

The trick used on my friend was not a matter of bad luck. It was well planned and carefully executed. And unfortunately, such scams are not uncommon. Every day, small business owners fall victim to similar tactics. Years ago, there was a television programme that exposed how these scammers operate across the country. The programme was both fascinating and frightening. The scams were sophisticated, clever and disturbingly effective.

 

If someone took the time to document and profile these tricks in detail, I believe the results would shock the nation. We are losing billions of shillings, not to economic downturns, but to manipulation and fraud.

 

The very next day, on Saturday the 17th, my own car was stolen while I was attending a funeral. This story deserves its own space. But it got me thinking about how easily things can be taken from us, no matter how careful or prepared we think we are.

 

There is an urgent need for practical business education that goes beyond accounting and customer service. Entrepreneurs need to be trained to check transactions, recognise manipulation and protect themselves. Fraud awareness should be part of every entrepreneurship course and support programme in this country.

 

At the same time, we need laws that treat economic fraud with the seriousness it deserves. These crimes don't just hurt individuals. They undermine economic confidence and discourage hard work and initiative. We also need awareness-raising campaigns and media platforms that educate the public about these risks clearly and understandably.

Trust should be the foundation of business. But in today's Uganda, trust has become a dangerous gamble.

We can no longer ignore this crisis. We need to talk about it. We need to listen to those who have suffered and learn from their experiences. And we need to build systems that protect the honest and penalise the cheats.


To anyone who has lost a business, not because of bad decisions, but because someone took advantage of their trust, you are not alone. Your story is important. And it needs to be part of the national conversation about what it really means to be an entrepreneur in Uganda.

Tuesday, 13 May 2025

 The AI Writing Debate: Missing the Forest for the Trees

By Richard Sebaggala


 

A recent article published by StudyFinds titled “How College Professors Can Easily Detect Students’ AI-Written Essays” revisits an ongoing debate about whether generative AI tools like ChatGPT can really match the nuance, flair, and authenticity of human writing. Drawing on a study led by Professor Ken Hyland, the article argues that AI-generated texts leave much to be desired in terms of persuasiveness and audience engagement: they lack rhetorical questions, personal asides, and other so-called 'human' features that characterize good student writing. The conclusion seems simple: AI can write correctly, but it cannot connect with its readers.

But reading the article made me uneasy. Not because the observations are wrong, but because they rest on a narrow and, frankly, outdated understanding of what constitutes good academic writing. More importantly, they misrepresent the role of generative AI in the writing process. The arguments often portray generative AI as if it were another human from a distant planet trying to mimic the way we express ourselves, rather than what it actually is: a tool designed to help us. And here’s the irony. I have experienced first-hand the limitations of human writing, including my own, and I see AI not as a threat to our creativity, but as a mirror of the weaknesses we have inherited and rarely challenged.

 

When I started my PhD at the University of Agder in Norway, many friends back home in Uganda already thought I was a good writer. I had been writing for years, publishing articles and teaching economics. But my confidence was shaken when my supervisor returned my first paper with over two thousand comments. Some of them were brutally honest. My writing was too verbose, my sentences too long, and my arguments lacked clarity. What I had previously thought was polished human writing was actually a collection of habits I had picked up from outdated academic conventions. It was a difficult but necessary realisation: being human doesn’t automatically make your writing better. And yet many critics of AI-generated texts would have us believe that it is the very flaws we have internalised, such as poor grammar, excessive verbosity, and vague engagement, that make writing human and valuable.

 

This is why the obsession with 'engagement markers' as the main test of authenticity is misleading. In good writing, especially in disciplines such as economics, business, law or public policy, clarity, structure, and logical flow often matter more than rhetorical flair. If an AI-generated draft avoids rhetorical questions or personal asides, this is not necessarily a weakness; it often results in a more direct and focussed text. The assumption that emotionally engaging language is always better ignores the different expectations of different academic disciplines. What is considered persuasive in a literary essay may be completely inappropriate in a technical research report.

 

Another omission in the argument is the role of the prompter. The AI does not decide on its own what tone to strike; it follows instructions. If it is asked to include rhetorical questions or to adopt a more conversational or analytical tone, it does so. The study’s criticism that ChatGPT failed to use a personal voice and interactive elements says more about the design of the prompts than about the capabilities of the tool. This is where instruction needs to change. Writing classes need to teach students how to create, revise, and collaborate using AI prompts. This does not mean that critical thinking is lost; it is enhanced. Students who know how to evaluate, refine, and build upon AI-generated texts are doing meaningful intellectual work. We're not lowering the bar; we're modernising the skills.
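
A concrete, if simplified, illustration (the prompts below are my own invention, not taken from Hyland's study): given the same task, two different instructions will pull the same model into very different registers.

```python
# Two prompts for the same task, showing that "engagement markers"
# are a function of the instructions, not of the tool itself.
task = "Write a 300-word reflection on inflation for first-year students."

plain_prompt = task + " Use a neutral, formal academic tone."

engaged_prompt = (
    task
    + " Write in a warm, conversational voice: open with a rhetorical"
      " question, include one personal aside, and address the reader"
      " directly as 'you'."
)
# Sent to the same model, these two prompts yield drafts with visibly
# different levels of the "human" features the study found missing.
```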

A recent study by the Higher Education Policy Institute (HEPI) in the UK revealed that 92% of university students use AI in some form, with 49% using it to start papers and projects, 48% to summarise long texts, and 44% to revise their writing. Furthermore, students who actively engaged with AI by modifying its suggestions produced better essays, with greater lexical sophistication and syntactic complexity. This active engagement underscores that AI is not a shortcut but a tool that, when used thoughtfully, can deepen understanding and enhance writing skills.

It's also worth asking why AI in writing causes more discomfort than AI in data analysis, mapping, or financial forecasting. No one questions the use of Excel in managing financial models or Stata in econometric analysis. These are tools that automate human work while preserving human judgment. Generative AI, used wisely, works the same way. It does not make human input superfluous; it merely speeds up the process of creating, organising, and refining. For many students, especially those from non-English-speaking backgrounds or under-resourced educational systems, AI can level the playing field by providing a cleaner, more structured starting point.

The claim that human writing is always superior is romantic, but untrue. Many of us have written texts that are grammatically poor, disorganized, or simply difficult to understand. AI, on the other hand, often produces clearer drafts that follow an academic structure more reliably. Of course, AI lacks originality if it is not guided, but the same is true of much student writing; careful revision and critical thinking are needed to improve both. This is not an argument for submitting AI-generated texts as they are. Rather, it is a call to treat AI as a partner in the writing process, not a shortcut around it.

Reflecting on this debate, I realise that much of the anxiety around AI stems from nostalgia. We confuse familiarity with excellence. But the writing habits many of us grew up with, cumbersome grammar, excessive length and jargon-heavy argument, are not standards to be preserved. They are symptoms of a system overdue for reform. The true power of AI lies in its ability to challenge these habits and force us to communicate more deliberately. Instead of fearing AI's so-called impersonality, we should teach students to build on its strengths while reintroducing their own voice and judgment.

We are not teaching students to surrender their minds to machines. We are preparing them to think critically in a world where the tools have evolved. That means knowing when to use AI, how to challenge it, how to add nuance, and how to edit its output into something that shows deeper understanding. Working alongside AI requires more thinking, not less.

The writing habits we've inherited are not sacred. They are not the gold standard simply because they are human. We need to stop missing the forest for the trees. AI is not here to replace the writer; it is here to make our writing stronger, clearer, and more focused, if only we let it.