Sunday, 10 August 2025

What the Calculator Panic of the 1980s Can Teach Us About AI Today

By Richard Sebaggala


In 1986, a group of American math teachers took to the streets holding signs that read, “Ban Calculators in Classrooms.” They feared that these small electronic devices would strip students of the ability to perform basic calculations. If a machine could handle addition, subtraction, multiplication, and division, what incentive would students have to learn those skills at all? At the time, the concern felt genuine and even reasonable.

With the benefit of hindsight, the story unfolded quite differently. Within a decade, calculators were not only accepted but actively encouraged in classrooms across many countries. Standardized exams began permitting their use, textbook problems were redesigned to incorporate them, and teachers found that students could tackle more complex, multi-step problems once freed from the grind of manual computation. Far from destroying mathematical thinking, calculators shifted the focus toward problem-solving, modeling, and a deeper grasp of underlying concepts.

 

Almost forty years later, the same conversation is happening, but the technology has changed. Artificial intelligence tools such as ChatGPT, Avidnote, and Gemini can now generate essays, solve problems, and summarize complex ideas in seconds. Today's concern is familiar: that students will stop thinking for themselves because the machine can do the thinking for them. The parallel with the calculator debate is striking. In the 1980s, the worry was that calculators would erase basic arithmetic skills; today, it is that AI will erode the capacity for critical and independent thought. In both cases, the tool itself is not the real problem. What matters is how it is introduced, how it is used, and how deeply it is woven into the learning process.

In economics, this recurring pattern is well understood through the study of general-purpose technologies: innovations such as electricity, the internet, and now AI, whose applications cut across multiple industries and fundamentally alter productivity potential. History shows that these technologies almost always meet initial resistance because they unsettle existing skills, workflows, and even the identities of entire professions. Yet, once institutions adjust and complementary innovations emerge, such as new teaching methods, updated regulations, or redesigned curricula, the long-run productivity gains become undeniable. In Africa, the mobile phone offers a clear example. Initially dismissed as a luxury, it became a platform for innovations like mobile money, which transformed financial inclusion, market access, and small business operations across the continent. The calculator did not diminish mathematical thinking; it reshaped it, shifting effort from mechanical tasks to higher-order reasoning. AI holds the same potential, but only if education systems are willing to reimagine how learning is structured around it.

 

When calculators entered the classroom, they prompted a shift in teaching and assessment. Teachers began creating problems where the calculator was useful, but understanding was still essential. Tests required not only the correct answer but also evidence of the reasoning behind it. The arrival of AI demands a similar change. Students can be taught to use AI for tasks such as brainstorming, structuring arguments, or refining drafts, but they should still be held accountable for evaluating and improving the output. Assessments can reward transparency in how AI is used and the quality of judgment applied to its suggestions.

This is where metacognition becomes essential. Metacognition is simply thinking about one's own thinking. In economics, we often speak of comparative advantage: doing what you do best while letting others handle the rest. AI shifts the boundaries of that calculation. The risk is that by outsourcing too much of our cognitive work, we weaken the very skills we need to make sense of the world. If universities fail to train students to integrate AI into their own reasoning, graduates may not only face economic disadvantages but may also experience a deeper sense of psychological displacement, feeling out of place in settings where AI competence is assumed.

Metacognition keeps us in control. It allows us to question the assumptions behind AI-generated answers, spot gaps in reasoning, align outputs with our goals, and know when to override automation in favor of deeper understanding. It is like applying the economist’s habit of examining incentives, not to markets, but to our own minds and to the machine’s mind.

Consider two graduate research students assigned to write a literature review. Both have access to the same AI tools. The first pastes the topic into the system, accepts the generated text without question, and drops it straight into the draft. The result is neat and coherent, with plenty of references, but some of the citations are fabricated, important regional studies are missing, and the structure is generic. Because the student never interrogates the output, the gaps remain. The supervisor flags the work as shallow and overly dependent on AI.

The second student uses AI to produce an initial outline and a list of possible sources. They then ask the tool follow-up questions: "What is this evidence based on? Are there African studies on the subject? Which perspectives might be missing?" They verify each reference, read key sources, and restructure the review to balance global theory with local findings. The final paper is richer, more original, and meets the highest academic standards. The difference lies in metacognition, not only thinking about one's own reasoning but also critically evaluating the machine's reasoning. Over time, this approach strengthens analytical skills and turns AI into a genuine thinking partner rather than a shortcut.

The real opportunity is to treat AI as a thinking accelerator. It can take over repetitive work like drafting, summarizing, and running quick computations so that human effort can be directed toward framing the right questions, challenging assumptions, and making judgments that depend on values and context. History shows that those who learn to work with transformative tools, rather than resist them, gain the advantage. The calculator era offers a clear lesson for our time: instead of banning the tool, or policing who has used it and who has not, we should teach the skill of using it wisely and of thinking about our thinking while we do so.

Monday, 4 August 2025

"You're Safe!": What This Joke Really Says About AI and the Future of Education

By Richard Sebaggala

Conversations about AI have become increasingly divided. Some see it as a breakthrough that will transform every sector, education included. Others still treat it as overblown or irrelevant to their day-to-day work. Most people are simply exhausted by the constant updates, ethical dilemmas, and uncertainty. This split has left many universities stuck, circling around the topic without moving forward in any meaningful way.

A recent WhatsApp exchange I saw was both humorous and unsettling: "Artificial intelligence cannot take your job if your job has never needed intelligence." The reply was, "I don't understand..." and the answer came back, "You're safe!" The joke's quiet truth is that if your work relies on knowledge, judgment, and problem-solving, then AI is already capable of doing parts of it. And the parts it replaces may be the very ones that once gave your job value.

For many of us, including lecturers, researchers, and analysts, our core productivity has come from how efficiently we produce or communicate knowledge. But AI is changing the way that knowledge is generated and shared. Tasks like reviewing literature, coding data, summarizing papers, and grading assignments are no longer things only humans can do. Tools like Elicit, Avidnote, and GPT-based platforms now handle many of these tasks faster, and in some cases, better.

Some universities are already moving ahead. Arizona State University has partnered with OpenAI to embed ChatGPT into coursework, research, and even administrative work. The University of Helsinki’s "Elements of AI" course has attracted learners from around the world and built a new foundation for digital literacy. These aren't theoretical exercises; they're practical steps that show what's possible when institutions stop hesitating.

I’ve seen individual lecturers using ChatGPT and Avidnote to draft student feedback, which frees up time for more direct engagement. Others are introducing AI tools like Perplexity and Avidnote to help students refine their research questions and build better arguments. These are not just efficiency hacks; they’re shifts in how academic work is done.

Yet many universities remain stuck in observation mode. Meanwhile, the labour market is already changing. Companies like Klarna and IBM have openly said that AI is helping them reduce staffing costs. When AI can write reports, summarise meetings, or process data in seconds, the demand for certain types of graduate jobs will shrink. If universities fail to update what they offer, the value of a degree may start to fall. We're already seeing signs of a skills revaluation in the market.

This shift isn’t without complications. AI also brings new problems that institutions can’t ignore. Equity is one of them. Access to reliable AI tools and internet connections is far from universal. If only well-funded institutions can afford high-quality access and training, the digital divide will only widen. Universities need to think about how they support all learners, not just the privileged few.

There’s also the question of academic integrity. If students can complete assignments using generative AI, then we need to rethink how we assess learning. What kinds of skills are we really measuring? It’s time to move away from assignments that test simple recall and toward those that build judgment, ethical reasoning, and the ability to engage with complexity.

Data privacy matters too. Many AI platforms store and learn from user input. That means student data could be exposed if universities aren’t careful. Before rolling out AI tools at scale, institutions need clear, transparent policies for how data is collected, stored, and protected.

And then there’s bias. AI tools reflect the data they’re trained on, and that data often carries hidden assumptions. Without proper understanding, students may mistake bias for truth. Educators have a role to play in teaching not just how to use these tools, but how to question them.

These are serious concerns, but they are not reasons to stall. They are reasons to move forward thoughtfully. Just as we had to learn how to teach with the internet and digital platforms, we now need to learn how to teach with AI. Delaying action only increases the cost of catching up later.

What matters most now is how we prepare students for the labour market they’re entering. The safest jobs will be those that rely on adaptability, creativity, and ethical thinking, traits that are harder to automate. Routine tasks will become commodities. What will set graduates apart is their ability to ask good questions, work across disciplines, and collaborate effectively with technology.

These changes are no longer hypothetical. They’re happening. Institutions that embrace this moment will continue to be relevant. Those that don’t may struggle to recover their footing when the changes become impossible to ignore.

Universities must lead, not lag. The time for think pieces and committee formation has passed. We need curriculum updates, collaborative investment in training, and national plans that ensure no institution is left behind. The early adopters will shape the new rules. Everyone else will follow or be left out.

That WhatsApp joke made us laugh, but its warning was real. AI is changing how the world defines intelligence and value. If education wants to stay meaningful, it has to change with it. We cannot afford to wait.

Sunday, 27 July 2025

AI and Africa’s Economy: Growth Simulations and the Policy Choices Ahead

By Richard Sebaggala



OpenAI's July 2025 Productivity Note made me wonder: if ChatGPT's productivity impact in developed economies is just a hint of what's possible, what could this mean for Africa? I particularly considered what might happen here in Uganda. Unlike earlier innovations like electricity or transistors, which took many years to spread through economies, AI is moving incredibly fast. ChatGPT, for instance, gained 100 million users in only two months, making it the fastest-adopted consumer technology in history. Moreover, while tools like the steam engine or electricity mainly expanded physical work, AI enhances thinking itself. This gives it immense potential for boosting productivity, but also carries the risk of leaving some people behind. As The Economist has observed, technologies that transform productivity rarely share benefits equally. Without intentional effort, AI could worsen existing inequalities instead of reducing them.
OpenAI's analysis certainly showcased AI's impressive capabilities. Globally, over 500 million people now use OpenAI tools, exchanging 2.5 billion messages every day. In the United States, ChatGPT has been a significant time-saver, helping teachers save nearly six hours weekly on administrative duties and state workers 95 minutes daily on routine tasks. Entrepreneurs are also launching startups much more quickly than before, and a significant 28% of employed U.S. adults now use ChatGPT at work, a big increase from just 8% in 2023. However, the report largely overlooked Africa, a continent facing distinct challenges such as lower internet access, less developed digital infrastructure, a larger informal economy, and systemic obstacles that slow down quick adoption. Despite this, Africa has shown with the mobile money revolution that it can bypass entire stages of development when the right technology emerges at the opportune moment.
Modeling Africa's AI Future: My Approach
To explore the potential impact, I created a simulation model. This model used the same productivity factors as OpenAI's analysis, but I adjusted them to fit Africa's unique economic conditions. My focus was on Sub-Saharan Africa, with a specific scenario developed for Uganda. I examined three different AI adoption levels: low (10–15% of the workforce), medium (30–40%), and high (60–70%). My starting point assumed 3% annual GDP growth without AI. I then added AI-driven growth premiums on top of that baseline: 0.5 percentage points for low adoption, 1.2 for medium, and 2 for high across Sub-Saharan Africa. For Uganda, I made slightly lower adjustments due to more significant infrastructure limitations.
Africa's Economic Boost: A Trillion-Dollar AI Opportunity
The potential impact of AI on Sub-Saharan Africa’s economy is remarkable. The region’s total economic output (GDP) is approximately $1.9 trillion. The projections show that by 2035, AI could add significant value depending on how widely it’s adopted:
  •  Low AI adoption could add an extra $150 billion to Sub-Saharan Africa’s economy.
  •  Medium AI adoption could boost the economy by an additional $360 billion.
  •  High AI adoption has the potential to add over $610 billion.
To put this into perspective, if Sub-Saharan Africa embraces AI widely, its economy could grow by an amount equivalent to Nigeria’s entire economy today. These figures don’t even include the even greater potential for rapid progress in areas like education, healthcare, and agriculture, where AI can help overcome long-standing challenges.
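For readers who want to check the arithmetic, below is a minimal sketch in Python of the compound-growth calculation these scenarios imply. It assumes the adoption uplifts are added to the 3% no-AI baseline as percentage points and compounded annually from 2024 to 2035; the code and variable names are illustrative reconstructions of the model described above, not part of OpenAI's analysis or a published specification.

```python
# Minimal sketch of the scenario arithmetic described above (assumptions mine):
# each adoption scenario adds its productivity uplift, in percentage points,
# to the 3% baseline growth rate and compounds it annually from 2024 to 2035.

BASE_GDP_SSA = 1.9e12    # Sub-Saharan Africa GDP, roughly USD 1.9 trillion
BASELINE_GROWTH = 0.03   # assumed annual GDP growth without AI
YEARS = 2035 - 2024      # 11-year horizon

# Growth uplifts by adoption scenario (percentage points over the baseline)
UPLIFTS = {"low": 0.005, "medium": 0.012, "high": 0.020}

def project(gdp: float, growth: float, years: int) -> float:
    """Compound a GDP figure forward at a constant annual growth rate."""
    return gdp * (1 + growth) ** years

no_ai_2035 = project(BASE_GDP_SSA, BASELINE_GROWTH, YEARS)

for scenario, uplift in UPLIFTS.items():
    with_ai_2035 = project(BASE_GDP_SSA, BASELINE_GROWTH + uplift, YEARS)
    extra = with_ai_2035 - no_ai_2035
    print(f"{scenario:>6} adoption: about ${extra / 1e9:,.0f} billion above the no-AI baseline")

# Prints roughly 144, 357 and 620 billion dollars, broadly matching the
# $150bn / $360bn / $610bn-plus figures quoted above.
```

The Uganda projections in the next section follow the same compounding logic, with the growth uplifts scaled down to reflect the country's tighter infrastructure constraints.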


Uganda's Economic Outlook: Significant Growth Potential

Uganda shows a similar promising trend, though on a smaller scale. Starting with a GDP of $49 billion in 2024, the model projects these outcomes by 2035, depending on the level of AI adoption:

  • Low AI adoption could see Uganda's GDP reach $65 billion.
  • Medium AI adoption could push the GDP to $71 billion.
  • High AI adoption could result in a GDP of nearly $77 billion.

This projected increase represents billions of dollars in new economic activity for Uganda. These additional resources could be invested in creating jobs, improving infrastructure, and enhancing social services across the country. Beyond the headline figures, jobs are also a crucial factor. Even if some jobs change due to AI, the analysis indicates that medium or high AI adoption would lead to an overall increase in available jobs. AI can free up workers from repetitive tasks, allowing them to take on more complex and valuable roles. However, without investing in training people for these new skills, this shift could worsen existing inequalities, particularly in rural areas and informal sectors.


AI's Impact Across Different Sectors

AI adoption won't affect all parts of the economy in the same way. Service industries are likely to gain the most, given that they rely heavily on knowledge work. Manufacturing will also see benefits. However, agriculture, which is Uganda's largest employer, will experience slower productivity improvements unless AI tools are specifically designed for small-scale farmers. This means developing things like precision farming applications and market information platforms. Because of this uneven impact, it's crucial to have policies that don't just focus on people working in cities but also extend AI's advantages to our agricultural areas.

 

Why Policy Decisions Matter for AI

The main message here is that Africa simply cannot afford to sit back and watch the AI revolution unfold. The productivity gains I modeled are not automatic; they depend entirely on the deliberate choices we make. For Uganda, and for Africa more broadly, five policy areas need our attention. First, AI literacy: introduce AI learning in schools, universities, and vocational training programs. Second, infrastructure: expand internet access and make devices more widely available, especially in rural areas. Third, sectoral integration: encourage the use of AI in vital sectors like agriculture, healthcare, and education, not just in technology companies. Fourth, inclusive safety nets: create support for workers whose jobs are affected by AI automation. Finally, good governance: develop rules for AI ethics and data use that are designed for African contexts.

 

The Bigger Picture for Africa

Adopting AI in Africa could be as transformative as mobile money was for making financial services accessible. But unlike mobile money, which was mainly driven by private companies, AI's benefits will require strong cooperation between the public and private sectors. The decisions Uganda and its neighboring countries make in the next five years will determine whether AI truly leads to inclusive growth or if it deepens existing inequalities. If we make the right moves, Africa could ride the wave of AI progress, overcoming limitations that have held us back for decades. If we don't, we risk falling further behind.



Saturday, 19 July 2025

 Artificial Intelligence and the Research Revolution: Lessons from History

By Richard Sebaggala

 


As an enthusiastic observer of AI developments, I recently had the opportunity to gain insights into the upcoming GPT-5 from the head of OpenAI. While existing models such as GPT-4o and its cousins, o1 and o3, impress with their advanced reasoning capabilities, the anticipated GPT-5 promises to be a true game-changer. Slated for release in July 2025, GPT-5 is not merely an upgrade; it's a leap forward that promises to unify advanced reasoning, memory, and multimodal processing into one coherent system. Imagine an AI model that can not only perform calculations but also browse the internet, perceive and interact with its environment, remember details over time, hold natural conversations, and perform complex logical tasks. This leap is revolutionary.

 

This reminds me of the second half of the 20th century, when the advent of statistical software like SPSS, SAS, and Stata marked a major shift in research methodology. These tools democratized data analysis and made sophisticated statistical techniques accessible to a wider range of scientists and researchers. This revolution not only increased productivity but also transformed the nature of research by enabling scientists to delve deeper into the data without needing to spend time on complex calculations.

 

Early adopters of these statistical tools found themselves at the forefront of their field, able to focus more on the interpretation of the data and less on the mechanics of the calculations. This shift not only increased productivity but also the quality and impact of their research. For example, psychologists using SPSS were able to replicate studies on cognitive behavior more quickly, which greatly accelerated the validation of new theories. Economists equipped with Stata’s robust econometric tools were able to analyze complex economic models with greater precision, leading to policy decisions that were deeply rooted in empirical evidence.

 

The AI revolution, led by technologies such as GPT-5, mirrors this historical development, but on a larger scale. AI goes beyond traditional statistical analysis by incorporating capabilities such as machine learning, natural language processing, and predictive analytics that open new dimensions of research potential. For example, AI can automate the tedious process of literature searches, predict trends from historical data, and suggest new research paths through predictive modeling. GPT-5’s expected one-million-token context window will allow it to handle entire books or datasets at once, making research synthesis and cross-domain integration faster and more insightful than ever before. These capabilities enable researchers to achieve more in less time and increase their academic output and influence.

 

In the realm of economics and beyond, the concept of "path dependency" states that early adopters of technology often secure a greater advantage over time. Those hesitant to adopt AI may soon find that they can't keep up in a world where AI is deeply embedded in work, research, and decision-making. The skepticism of some academics and policymakers, especially in countries like Uganda, toward AI could prove costly. As AI becomes more intuitive and indispensable, with models now able to act autonomously, remember prior tasks, and reason across modalities, those who delay its adoption risk losing valuable learning time and a competitive advantage.

 

Nonetheless, the statistical revolution transformed only one facet of the research process: statistical analysis. Researchers who had not embraced statistical tools could still succeed on the strength of other parts of the research craft, such as qualitative analysis. The impact of AI, however, is much broader. GPT-5 and similar systems are expected to transform every phase of the research lifecycle: from conceptualization, literature review, and question framing to data analysis, manuscript drafting, and even grant application writing. This comprehensive influence means that AI is not just an optional tool, but a fundamental aspect of modern research that could determine the survival and success of future research endeavors. This makes the use of AI not only beneficial but essential for those who wish to remain relevant and influential in their field.

 

On the cusp of the GPT-5 era, the message is clear: AI will not replace researchers. Instead, the researchers who use AI effectively will set the new standard and replace those who do not. It's not about machines taking over; it's about using their capabilities to augment our own. Just as statistical software once redefined the scope and depth of research, AI promises to redefine it again, only more profoundly. Unlike earlier models, GPT-5 is positioned to act as an intelligent research collaborator, able to draft, revise, interpret, and even manage tasks in real time. In the history of scientific research, those who use these tools skillfully will lead the next stage of discovery and innovation.

Tuesday, 15 July 2025

Where do you draw the line: embrace, fear, or confront the alien intelligence of AI?

By Richard Sebaggala

 



Last week, I had the privilege of speaking to some of the brightest students at Uganda Christian University. The Honours College had invited me to speak about artificial intelligence: its nature, misconceptions and implications for the future of learning, work and leadership. These were not typical students. Each of them ranked among the top academic performers in their respective degree programmes. Needless to say, the room was filled with energy, insight and thoughtful curiosity.

 

Among the many questions I was asked during the discussion, one stood out for its clarity and depth. A soft-spoken young woman sitting at the very front raised her hand and said, “It's clear from your presentation that you have great faith in AI. But where do you personally draw the line when using it?”

 

Her question was as sharp as it was sincere. I answered truthfully. Before I started using AI tools, I was already confident that I could write, research, analyse data and think strategically. These were areas I felt capable and competent in. But as I began to explore advanced AI systems, I realised how much more was possible, even in areas I thought I had mastered. Faced with this realisation, I had a choice: to feel threatened or to scale. I chose to scale. And there are days when I am amazed, even alarmed, at the extent to which AI has expanded my own capabilities. This is not an admission of hubris, but an acknowledgement of the fact that my own boundaries have shifted. AI has not replaced me. It has shown me what else I could be.

 

This exchange reminded me of an earlier essay I wrote, Rethinking Intelligence: The Economics of Cognitive Disruption, in which I argued that AI is not making humans obsolete. Rather, it is forcing us to redefine what it means to be skilful, intelligent and competent in a world where cognitive work is no longer the sole preserve of humans. AI is not simply a tool in the traditional sense. It is a cognitive collaborator, albeit a deeply unfamiliar one.

 

John Nosta’s recent article, AI’s Hidden Geometry of Thought, captures this reality with astonishing precision. He explains that systems like ChatGPT do not mimic human thought. They work with a completely different cognitive architecture, one that is not structured by emotions, intuition or conscious thought, but by statistical relationships within a 12,000-dimensional mathematical space. This is not a metaphor for intelligence. It is a fundamentally different kind of intelligence that functions without intention, consciousness or purpose and yet achieves remarkable results.

 

When I engage with models like ChatGPT, I am not interacting with something that “understands” me in any human sense. What it does is map language, probability and meaning into a high-dimensional field and navigate it with precision. It does not think. It makes predictions. But the predictions are often so accurate that they resemble thinking.

 

This observation recalls another article of mine, The Age of Amplified Analogies, in which I warned against interpreting AI through the lens of familiar metaphors. We often describe AI in human terms, as if it has a brain, learns like a child, or thinks like a colleague. But these analogies, while comforting, are misleading. A more appropriate analogy would be that of an alien intelligence, an “alien” form of cognition that masters our language without ever participating in our way of thinking.

 

This is perhaps the most confusing realisation. AI does not feel, does not reflect, and has no intuition. Yet it now outperforms us at tasks that we once thought required these very qualities, such as writing, diagnosing, and even simulating empathy. This is the change we need to embrace. As I explain in The AI Revolution: What Happens When Tools Start Predicting You, the disruptive power of AI is not just automation. It is prediction. These systems now predict our decisions, thoughts and behaviours with an accuracy that was previously unimaginable, not because they understand us, but because they have identified patterns that we ourselves are blind to.

 

The student’s question, "Where do you draw the line?", deserves more than a personal answer. It is a question that society as a whole needs to address. Because the central question is no longer whether AI will become like us. This question misses the point. The more pressing question is whether we are prepared to live with a form of intelligence that will never be human, but will become indispensable.

 

We need to move beyond the question of whether AI is as intelligent as we are. The better question is: what kind of intelligence is it, and how can we adapt to it? This is not convergence. It is divergence.

 

In conclusion, I gave the students a message that I carry with me: Don't be afraid of this intelligence. Learn to think with it. If you already have a skill, AI can improve it. If you are already strong, AI can extend your reach. But this is only possible if you stop seeing AI as a rival. It is not there to replace you. It is there to change you. And perhaps in this moment of change, the most human response is not to make AI more like us, but to allow it to expand our definition of what it means to be intelligent.

Monday, 7 July 2025

 The Ghost in the Machine? What the MIT Study Gets Wrong About Thinking with ChatGPT

By Richard Sebaggala


 

In June 2025, researchers from the MIT Media Lab published a study entitled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (Kosmyna et al., 2025). The study tracked 54 participants over four months as they completed SAT essay tasks in three different modes: using only their own thinking, using traditional search engines, or using ChatGPT. Using electroencephalography (EEG), the researchers found that while participants using ChatGPT wrote significantly faster, they showed significantly weaker neural connectivity, particularly in the alpha and theta bands, which are associated with attention, memory, and semantic processing.

 

The authors interpret this reduction as a sign of "cognitive debt," a condition in which reliance on AI tools weakens the brain's engagement with the task. ChatGPT users not only wrote more uniform and less original essays, but also showed significant memory deficits. More than 83% could not accurately recall key content just minutes after writing the essay, compared to only 11.1% of those who worked without help. The study raised particular concern for young people, whose developing brains may be more susceptible to long-term cognitive shifts caused by repeated reliance on aids.

 

This has caused concern in both academic and public discourse, fuelling fears that AI tools such as ChatGPT could blunt our minds and diminish our ability to think independently. This fear is reminiscent of a well-known philosophical metaphor: the "ghost in the machine," which raises the question of whether human consciousness and machine functions can coexist. In this metaphor, the concern is that the machine (AI) supplants the mind (human agency) and reduces thinking to something automatic, mechanical, and impersonal. But is this really the case? Or is AI more likely to become a dance partner that reshapes our thinking without eradicating the thinker?

 

We hear a lot about the risks of "decline of agency" associated with AI, the fear that our critical thinking skills will diminish and our cognitive independence will erode if we rely on these tools. This is a valid concern, and Cornelia Walther's recent article in Psychology Today rightly warns us that AI can go from being a helpful assistant to a crutch we can no longer do without. But that's not the whole story. From my practical experience, I know that once you discover areas where AI truly augments your competences, human agency does not diminish; it increases.

 

Reinterpreting cognitive activity

At this point, a more critical view is required. The interpretation of the study is based on the assumption that conventional, pre-AI markers of cognition, such as increased brain wave activity or strong memory performance, are the gold standard for learning. But in a world where intelligence is increasingly supported by digital systems, we need to ask whether the reduced neural activity reflects a cognitive decline or simply a shift in the distribution of thinking between humans and machines.

 

Throughout history, major technological advances that lightened the cognitive load, such as the calculator, GPS or search engines, have always been met with initial fears of "weakening" the human brain. But mankind has always adapted. Before the advent of GPS, for example, our brains had to store and retrieve detailed location-specific coordinates and mental maps in order to find cities. Today, with Google Maps, we no longer need to memorise every turn or landmark, so we can focus our cognitive energy on more comprehensive planning, decision-making or even creative thinking while travelling. The reduced reliance on spatial memory does not indicate cognitive decline, but shows how technology can reallocate mental resources. The real point isn't that we use less brain power, but that our brains simply work differently. For example, a lower number of alpha-band connections observed when writing with AI is probably not a sign of cognitive decline. Rather, it could indicate a redistribution of effort towards more strategic tasks, such as evaluating AI-generated results or refining prompts, activities that cannot be fully captured with current EEG scans alone.

 

Limitations of the MIT study

Equally important is the fact that the MIT study does not include baseline data on participants’ cognitive abilities. We do not know whether some participants were already more reliant on digital aids or whether their writing style, memory capacities, or learning preferences differed before the study began. Without this baseline, it is impossible to say whether the observed differences in the brain are caused by ChatGPT or simply correlate with individual differences. The study measures what happens during and shortly after the use of ChatGPT, but says little about how cognitive patterns develop with thoughtful, long-term use of AI tools.

 

Another important point of concern is the study’s focus on memory recall as a proxy for learning. In traditional educational systems, memory has long been a central measure of mastery. But in an information-rich world where AI comes into play, knowing how to access, review and apply knowledge is often more important than being able to recall it verbatim. The assumption that true learning only occurs when knowledge is encoded internally ignores that modern cognition now operates in a broader ecosystem of the "extended mind" that includes digital tools. The problem is not that students don't remember what they wrote with AI. The problem is the assumption that remembering is still the highest form of understanding.

 

Rethinking AI integration in education

To be fair, the concept of "cognitive debt" does have some merit. When learners or any other users passively use ChatGPT by copying and pasting without processing, real thinking suffers. But this is not a fault of the AI itself, but a fault in the way the AI is used. Instead of rejecting the use of AI, educators and institutions should focus on how to integrate AI into learning in a meaningful way. This means that they should teach prompt design, encourage critical reflection on AI results, and help students use tools like ChatGPT as thinking partners rather than crutches.

 

This discussion is particularly urgent for regions like Africa, where the integration of AI into education is still nascent. Misinterpreting studies like this one could reinforce hesitation and delay much-needed innovation. For educators and leaders who have yet to engage with AI, this kind of research might seem to confirm their doubts, when in fact it emphasises the need for better AI literacy rather than retreating.

 

The real question is not whether ChatGPT reduces brain activity. It’s whether we’re measuring the right kind of activity in the first place. Rather than judge lower EEG connectivity as a loss, we should be asking: are students getting better at navigating, questioning, and reconfiguring information in an AI-rich environment?

 

Moving forward

The MIT study raises valuable questions, but it should be a starting point for deeper, more nuanced investigations. What kind of cognition do we want to cultivate in the era of AI? What skills are most important when machines can instantly generate, summarise and retrieve information? And how do we equip learners, not just to avoid cognitive debt, but to add cognitive value through the strategic, reflective, and ethical use of AI?

We are not facing a cognitive collapse. We are facing a change in the way intelligence is organised. And it’s time for our metrics, assumptions, and teaching methods to catch up.

Thursday, 26 June 2025

Turning hours into gold: How generative AI can unlock Uganda’s productivity potential

 

By Richard Sebaggala



The world is currently experiencing a quiet revolution in the way work is done. Across industries and continents, generative AI tools like ChatGPT, Claude and Copilot are changing the way people do everyday tasks. A recent global survey presented by Visual Capitalist found that workers using AI can reduce the time it takes to complete their tasks by more than 60 percent. Writing a report, for example, fell from 80 minutes to 25, while fixing technical problems, which normally took almost two hours, dropped to 28 minutes. Even solving mathematical problems fell from 108 minutes to just 29. These are not just marginal improvements; they represent a complete change in what a single employee can accomplish in a day.

 

The survey also found that tasks that require deeper thinking and human judgment, such as critical thinking, time management or instructing others, saw dramatic increases in productivity. Time spent on critical thinking dropped from 102 to 27 minutes. The time required to instruct and manage employees was also reduced by almost 70 percent. This shows that AI is not only useful for programming or technical analysis, but also for teaching, planning, decision-making and communicating. When people are equipped with the right tools, they are able to produce much more in much less time.

 

While these gains are impressive in advanced economies, their potential is even greater in countries like Uganda. For decades, low productivity has held back development in many African countries. In sectors such as agriculture, education, small businesses and government, workers still spend large parts of their day doing slow, manual and repetitive tasks. Productivity levels remain far below the global average, and this gap continues to fuel inequality between the global North and South.

Uganda has recognized this challenge and is responding with a bold new vision. With its tenfold growth strategy, the country aims to increase its GDP from 50 billion to 500 billion dollars in just 15 years. The plan focuses on unlocking value in key sectors such as agriculture, tourism, minerals and oil and gas. However, for this vision to succeed, it is not enough to invest in industries alone. Uganda also needs to improve the way people work, and this is where AI can be a game changer.

 

Many people still think that AI is something reserved for big companies or tech firms. However, the most immediate impact in Africa could come from small, everyday businesses. Just recently, I had an experience in the city of Entebbe that brought this home to me. I wanted to take a photo at a small secretarial bureau that offers passport photos, typing and printing services. While I was waiting, I observed a young man helping a woman who had come in with a few handwritten pages. She was applying for a job as a teacher in a kindergarten and needed a typed CV and cover letter. The man patiently asked her questions, read through her notes, typed slowly, rephrased what she had said and tried to create a professional document.

 

As I watched, I was struck by how much time they were spending on something that generative AI could do in seconds. All the man had to do was take a photo of the handwritten pages or scan them, upload the images to ChatGPT and ask it to create a customized resume and cover letter. He could even include the name of the school to make the cover letter more specific. In less than five minutes, she would have gone home with polished, professional documents, and the man could have moved on to the next client. Instead, this one task took almost an hour.

 

This small example represents a much larger reality. Across Uganda, there are hundreds of thousands of people running small businesses like this secretarial bureau. They type, translate, write letters, prepare reports and plan budgets, often by hand or on outdated computers. Most of them don't realize that there is a faster, smarter way to do the same work. AI tools, especially chatbots and mobile-based platforms, can multiply their output without the need to hire more staff or buy expensive software. Time saved is money earned. In many cases, this means better service, more customers and more dignity at work. Personally, before I start a task, I now ask myself how much faster I could do it with AI.

 

In schools, AI can help teachers create lesson plans, grade assignments and design learning materials with just a few clicks. In government agencies, it can optimize reporting, organize data and improve decision-making. In agriculture, farmers can use mobile AI tools to diagnose plant diseases, find out about the weather or call up market prices in their local language. For young entrepreneurs, AI can help write business proposals, design logos, manage inventory and automate customer messaging.

 

Uganda has one of the youngest populations in the world. Our youth are curious about technology, innovative and willing to work. What many of them lack is not ambition, but access to tools that match their energy. Generative AI could completely change Uganda's productivity curve if it is widely adopted and made accessible through training and mobile-friendly platforms. This does not require billions of dollars or complex infrastructure. In many cases, awareness and basic digital skills are enough.

 

To seize this opportunity, Uganda needs to act thoughtfully. Schools and universities should teach students how to use AI tools as part of their education. Government employees should be trained to use AI in their daily tasks. Innovators should be supported to develop localized AI solutions that work in Ugandan languages and sectors. And, perhaps most importantly, the average citizen, like the secretarial worker, needs to see that AI is not a distant or abstract technology. It is a tool they can use today to work faster, serve better, and earn more.

 

If Uganda is serious about achieving its tenfold growth strategy, improving the way people work must be at the center of the journey. AI will not replace human labor; it will augment it. In a country where every minute counts, the difference between three hours and thirty minutes could be the difference between survival and success.