Friday, 19 September 2025

 

When more is not better: Rethinking rationality in the age of AI

By Richard Sebaggala (PhD)

Economists love simple assumptions, and one of the most enduring is the idea that more is better, or the non-satiation principle. More income, more production, more consumption: in our economics textbooks, a rational actor never rejects an additional unit of utility. By and large, this principle has proven to be reliable. Who would turn down more wealth, food or opportunity? However, there are exceptions. In monogamous marriages, “more” is rarely better and certainly not rational. Such humorous caveats aside, this assumption has informed much of our understanding of economic behaviour.

 

Economists refer to this principle as the monotonicity assumption, i.e. the idea that consumers always prefer more of a good over less. As Shon (2008) explains, monotonicity underpins key findings of microeconomics: utility maximisation takes individuals to the limit of their budget, and indifference curves cannot intersect. Even Gary Becker, who argued that monotonicity need not be explicitly assumed, concluded that rational agents behave as if “more is better” because they adjust their labour and consumption up to that point. In short, the discipline has long assumed that “more” is a safe rule of thumb for rational decision-making.
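For readers who prefer the textbook notation, monotonicity and its budget-exhaustion implication can be stated compactly. The following is a standard formalisation (with u a utility function, p prices, and m income), not a quotation from Shon (2008):

```latex
% Strong monotonicity: a bundle with at least as much of every good,
% and strictly more of at least one, is strictly preferred.
\[
x \ge y \ \text{and}\ x \ne y \;\Longrightarrow\; u(x) > u(y)
\]
% Consequence: a utility-maximising consumer exhausts the budget,
% so the constraint binds at the optimal bundle x*.
\[
\max_{x}\, u(x) \quad \text{subject to} \quad p \cdot x \le m
\qquad\Longrightarrow\qquad p \cdot x^{*} = m
\]
```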

 

Artificial intelligence poses a challenge to this axiom. While most people recognise its potential, many are quick to emphasise the risks of overreliance, focusing on the negative impacts and overlooking the benefits that come from deeper engagement. My own experience is different. The more I use AI, the better I get at applying it to complex problems that once seemed unsolvable. It sharpens my thinking, increases my productivity and reveals patterns that were previously difficult to recognise. However, the critics are often louder. A recent essay in the Harvard Crimson warned that students use ChatGPT in ways that weaken human relationships: they look for recipes there instead of calling their mothers, they consult ChatGPT to complete assignments instead of going to office hours, and they even lean on ChatGPT to find companionship. For the author, any additional use of AI diminishes the richness of human interaction.

 

This view highlights a paradox. A technology that clearly creates abundance also creates hesitation. Economics offers a few explanations. One is diminishing marginal utility. The first experience with AI can be liberating: it saves time and provides new insights. With repeated use, however, the benefits risk diminishing if users accept the results uncritically. Another is externalities. For an individual, using ChatGPT for a task seems rational: faster and more efficient. But if every student bypasses discussions with classmates or avoids professors’ office hours, the community loses opportunities for dialogue and deeper learning. The private benefit comes with a public price.

 

There is also the nature of the goods that are displaced. Economists often assume that goods are interchangeable, but AI shows the limits of this logic. It can reproduce an explanation or a recipe, but it cannot replace friendship, mentorship or the warmth of a shared conversation. These are relational goods whose value depends on their human origin. Finally, there is the issue of bounded rationality. Humans strive for more than efficiency; they seek belonging, trust and reflection. If students accept AI’s answers unquestioningly, what seems efficient in the short term undermines their judgement in the long term.

 

It is important to recognise these concerns, but it is equally important not to let them obscure the other side of the story. My own practice shows that regular, deliberate use of AI does not lead to dependency, but to competence. The more I engage with it, the better I get at formulating questions, interpreting results and applying them to real-world problems. The time previously spent on routine work is freed up for higher-order thinking. In this sense, increased use does not make me less thoughtful; it allows me to focus my thinking where it matters most. So, the paradox is not that more AI is harmful. The problem is unthinking use, which can crowd out the relational and cognitive goods we value. The solution lies in balance: using AI sufficiently to build capabilities while protecting spaces for human relationships and critical engagement.

 

The implications are far-reaching. If AI undermines reflection, we weaken human capital. If it suppresses interaction, we weaken social capital. Both are essential for long-term growth and social cohesion. However, if we use AI as a complement rather than a substitute, it can strengthen both. This is important not only at elite universities, but also in African classrooms where I teach. Here, AI could help close resource gaps and expand access to knowledge. But if students only see it as a shortcut, they will miss out on the deeper learning that builds resilience. Used wisely, however, AI can help unlock skills that our education systems have struggled to cultivate.

 

For this reason, I characterise my perspective as pragmatic. I do not ignore the risks, nor do I believe that technology alone guarantees progress. Instead, I recognise both sides: the fears of those who see AI undermining relationships, and the reality that regular, deliberate use will make me better at solving problems. The challenge for economists is to clarify what we mean by rationality. It is no longer enough to say that more is always better. Rationality in the age of AI requires attention to quality, depth and sustainability. We need to measure not only the efficiency of obtaining answers, but also the strength of the human and social capital we obtain in the process.

 

So yes, more is better, until it isn't. The most sensible decision today may be to put the machine aside and reach out to a colleague, a mentor or a friend. And when it's time to return to the machine, do so with sharper questions and clearer judgement. In this way, we can preserve the human while embracing the transformative. That, I believe, is how to think like an economist in the age of AI.

Sunday, 7 September 2025

 

Humans, Nature, and Machines: Will AI Create More Jobs Than It Replaces?

By Richard Sebaggala (PhD)

Economists have long debated whether new technologies create more jobs than they destroy. Each industrial revolution, from steam engines to electricity, sparked fears of mass unemployment, only for new industries and occupations to emerge. Artificial intelligence, however, feels different. It does not only automate physical tasks; it reaches into the cognitive space once thought uniquely human (Brynjolfsson & McAfee, 2014).

So far, the evidence suggests AI is not sweeping workers aside in large numbers. Instead, it is altering the composition of work by reshaping tasks rather than eliminating whole professions. Coders now refine AI-generated drafts instead of writing from scratch. Paralegals summarize less case law manually. Marketers polish content rather than produce the first draft. In this sense, AI resembles a new species entering an ecosystem. It does not destroy the entire environment at once but gradually reshapes niches and interactions (Acemoglu & Restrepo, 2019).

Where AI adds the most value is in partnership with people. In chess, teams of humans and AI working together often beat both the best human players and the best AI systems. The same pattern is emerging in business, law, and research: AI accelerates analysis and routine drafting, while humans provide judgment, context, and values (Big Think, 2025). I have seen this in my own work as a researcher. Recently, when reviewing a colleague’s draft paper, I began by reading it closely and noting my own independent observations, drawing on my research experience. I realized the paper stated too many objectives across the abstract, the introduction, and the conceptual framework; the moderating role was not reflected in the title but was smuggled into the theoretical discussion and methodology; and the case-study design did not align with the quantitative approach. These were my own reflections, grounded in my reading. Only afterwards did I turn to ChatGPT, asking it to check the validity of my comments, highlight further weaknesses, and frame the feedback in a structured way. The model confirmed my insights, sharpened the phrasing, and suggested revisions. In that process, the AI acted as a sparring partner rather than a substitute. My reasoning stayed intact, but my communication became clearer. This kind of human–machine cooperation illustrates why complementarities matter more than simple substitution.

I have also seen this dynamic in data analysis. When I begin with clear objectives and a dataset, AI tools can be very useful as a starting point. They can suggest methods for analysis, highlight possible weaknesses, and even recommend additional checks such as sensitivity tests or robustness tests. Some of these insights might have taken me much longer to discover on my own, and in some cases I might not have uncovered them at all. Yet the value lies not in letting the tool run the entire analysis, but in using its suggestions to sharpen my own approach. I have discovered that if you are proficient in data analysis using Stata, as I am, you can allow AI tools such as ChatGPT, Avidnote, or Julius to run analysis in Python, while staying in control by asking the AI to generate Stata do-files for each analysis. Since I already have the data, I can validate the results in Stata. The efficiency gains are significant: less time spent on routine coding, more time to ask deeper questions, and occasional exposure to advanced methods that the AI suggests from its wider knowledge base.
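To make that workflow concrete, here is a minimal sketch of the cross-checking loop, assuming a hypothetical dataset with an outcome y and regressors x1 and x2, and using pandas and statsmodels on the Python side; the variable names, file names, and model are illustrative rather than drawn from any particular study.

```python
# Illustrative only: hypothetical variables (y, x1, x2) and file names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Stand-in for a real dataset that would normally be loaded from file.
rng = np.random.default_rng(42)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 1.0 + 0.5 * df["x1"] - 0.3 * df["x2"] + rng.normal(scale=0.8, size=200)

# Run the analysis in Python with heteroskedasticity-robust standard errors.
model = smf.ols("y ~ x1 + x2", data=df).fit(cov_type="HC1")
print(model.summary())

# Write an equivalent Stata do-file so the same model can be re-estimated
# and validated in Stata against the same data.
do_file = """\
* validate_python_run.do -- re-estimate the Python model in Stata
use "analysis_data.dta", clear
regress y x1 x2, vce(robust)
"""
with open("validate_python_run.do", "w") as f:
    f.write(do_file)
```

Because Stata’s vce(robust) option and the HC1 covariance estimator in statsmodels apply the same small-sample correction, the coefficients and standard errors from the two runs should match up to rounding, which is what makes the do-file a genuine validation step rather than a formality.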

 

Nature reinforces the point. Disruption is rarely the end of the story. When new species enter an ecosystem, some niches disappear, but others open. Grasslands gave way to forests. Forests gave way to cultivated fields and cities. The same is true of labor markets. AI is closing some roles but creating others such as prompt engineers, AI auditors, ethicists, and data curators. The central economic question is not whether niches vanish, but whether workers are supported in adapting to new ones. Without adaptation, extinction occurs not of species, but of livelihoods (Acemoglu & Restrepo, 2019).

Some commentators imagine a post-work society, where intelligent machines carry most productive effort and people focus on creativity, care, or leisure. Keynes (1930) once speculated that technological progress would eventually reduce the working week to a fraction of what it was. More recent writers describe this possibility as cognitarism, an economy led by cognitive machines. Yet history shows that transitions are rarely smooth. Without preparation, displacement can outpace creation. That is why policy choices matter. Retraining programs, investments in AI literacy, experiments with shorter workweeks, and social safety nets can soften shocks and broaden opportunity. Just as ecosystems survive through diversity and resilience, economies need deliberate institutions to spread the benefits of transformation.

AI, then, is powerful but not destiny. Like natural forces, it can be guided, shaped, and managed. The real risk lies not in the technology itself but in neglecting to align human institutions, social values, and machine capabilities. If we approach AI as gardeners who prune, plant, and tend, we can cultivate a labor ecosystem that grows new abundance rather than fear. If we fail, the outcome may be scarcity and division.

History suggests that technology does not eliminate work; it transforms it. The challenge today is to ensure that transformation is inclusive and sustainable. Human ingenuity, like nature, adapts under pressure. Machines are the newest force in that story. The question is not whether AI will take all jobs, but whether we will design the future of work or leave it to evolve without guidance. My own practice of drafting first and using ChatGPT second reflects the broader lesson: societies must take the lead, with AI as an assistant, not a replacement.

References

Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3–30. https://doi.org/10.1257/jep.33.2.3

Big Think. (2025, September). Will AI create more jobs than it replaces? Big Think. https://bigthink.com/business/will-ai-create-more-jobs-than-it-replaces/

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.

Keynes, J. M. (1930). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 358–373). Macmillan.

 

Sunday, 31 August 2025

 Kia in the Classroom: The Economics of Boldness in Teaching with AI

 By Richard Sebaggala



On September 3, 2025, a lecture hall at Simon Fraser University will host a moment that feels closer to science fiction than to the routines of academic life. Students will gather expecting a professor at the podium, but instead will find two figures waiting. One is Steve DiPaola, a familiar human presence, and beside him is Kia, a three-dimensional artificial intelligence rendered with startling realism.

The digital figure meets the audience with a direct gaze, smiles at the right moment, and speaks in measured tones that carry the authority of an academic voice. The class is no longer a monologue delivered by a single lecturer but a dialogue between flesh and code, a human mind and its synthetic counterpart. For students who grew up with animated avatars and digital companions, Kia may not appear entirely alien. What makes the moment extraordinary is that it unfolds within a university classroom, one of the last places where knowledge has been carefully guarded by human authority.

The arrival of Kia is not a cautious step in educational technology but an unmistakable act of boldness. Around the world, universities hesitate to integrate AI openly, unsettled by fears of plagiarism, shallow assignments, or the erosion of genuine intellectual effort. DiPaola has chosen a different path. Rather than shielding his students from the technology, he has brought it into the centre of the classroom as a living demonstration. The decision transforms the lecture hall into a theatre of inquiry, where the question is not whether AI exists but whether it can belong at the core of teaching. Economists would call this innovation under criticism, a dynamic that has accompanied every major technological shift since the age of mechanization.

History shows that new tools are rarely welcomed without resistance. The typewriter was once distrusted, the calculator dismissed as the death of numeracy, and the personal computer regarded as a passing fad. Those who pressed forward despite the doubts gained more than a reputation for daring. They accumulated knowledge that others lacked, learning where the tools succeeded and where they fell short. Uncertainty became capital. DiPaola’s decision to place Kia in the classroom follows this tradition. This idea is far from new; economists have long studied the strategic role of bold moves. The concept of first-mover advantage, for instance, frames how being first in a market can confer durable benefits such as reputational surplus, learning gains, and control over resources. By facing skepticism now, he accepts reputational risk in exchange for insight. That willingness to trade risk for knowledge is what allows innovation to move forward.

What makes Kia disquieting is not the information it can process but the social space it inhabits. It gestures, pauses, and responds with the timing of a colleague. Professor DiPaola himself has admitted that, despite decades of teaching and research, he occasionally finds Kia explaining certain concepts more clearly than he can. That admission resonates with many of us who have discovered that AI sometimes performs better at tasks we once considered our strengths. The unsettling question follows: is Kia a substitute for the professor or a complement to him? If it substitutes, it competes with the teacher, offering lectures without fatigue, explanations without limit, and perhaps even performance with greater flair. If it complements, it enlarges the professor’s presence, leaving him to design, mentor, and evaluate while the AI carries the weight of repetition and performance.

DiPaola insists on the latter. Kia will not design the syllabus or grade assignments. Its role is that of a partner in dialogue, a provocateur, an intellectual sparring figure. The authority remains firmly human, while the AI performs more like a chorus in ancient drama: commenting, provoking, and enriching, but never directing. Economists would recognize this as the difference between substitution and complementarity. Calculators did not erase the work of teaching mathematics; they moved it toward problem-solving. Online databases did not make librarians unnecessary; they turned them into navigators of vast digital landscapes. In the same way, Kia does not erase the professor but reshapes the meaning of teaching.

If this experiment works, the classroom becomes more productive. Students gain a source of explanation that does not tire, while lectures acquire immediacy and theatrical power. A professor’s energy is finite, but a digital persona can sustain attention endlessly. Economists call this capital deepening: the process by which new tools increase the return on human effort. Just as tractors increased the yield of farmers, systems like Kia could raise the intellectual return of every hour spent in teaching. Productivity in education cannot be reduced to exam scores alone. It is better measured in comprehension that lasts and insights that endure. By animating concepts in real time, Kia may heighten those outcomes beyond what conventional lectures achieve.

The further horizon is less certain but more provocative. Other educators may attempt their own digital partners: an AI Socrates in philosophy, an AI judge in law, an AI diplomat in international relations. Universities may then institutionalize these figures, treating them as distinguishing assets, just as libraries or laboratories once defined reputation. “Come study with Professor X and EconAI” could become a marketing pitch. With time, the border between teaching and performance may fade. Lectures could evolve into choreographed dialogues where human and artificial voices weave together, and students may come to expect a form of intellectual theatre. The professor’s role would then shift decisively to mentorship, ethical judgment, and the cultivation of wisdom, qualities that resist automation.

The greater risk lies not in adopting such tools too early but in refusing them altogether. Universities that avoid experiments like Kia risk producing graduates unprepared for a world where artificial intelligence is embedded in every profession. Avoidance may seem prudent, yet it carries its own danger: irrelevance. The opportunity cost of inaction is high, which is what makes DiPaola’s decision consequential. By accepting visible dangers such as criticism, failure, or embarrassment, he seeks to prevent the greater invisible danger of an institution unprepared for its future.

The introduction of Kia will not end the debate about AI in education. Critics will argue that it reduces teaching to spectacle and weakens the authenticity of intellectual exchange. Supporters will answer that it enriches learning and mirrors the environment students will encounter in their lives and work. Both positions have weight, but what is certain is that the demonstration will alter the conversation. For the first time, a digital persona will stand on equal footing with a professor in a lecture, and the world will be forced to ask what that means.

The essential question is not whether Kia will surpass the professor but whether educators and universities are willing to design a partnership between human insight and artificial presence. History suggests that institutions willing to take that risk, to transform criticism into knowledge, are the ones that shape the trajectory of change. When Kia begins to speak before students, the trial will not only measure the capacity of an AI system. It will measure the courage of higher education itself.

Tuesday, 19 August 2025

From Scarcity to Abundance: Will Universities Survive the Age of AI?

By Richard Sebaggala


For centuries, higher education benefited from the scarcity of knowledge. Universities held the key to specialised information, and society paid a high price for the degrees and expertise that only these institutions could provide. Professors were the guardians of wisdom, lecture theatres the places where it was passed on, and libraries the guarded vaults of human progress. From an economic perspective, this was a textbook case of supply and demand: the supply of advanced knowledge was low, the demand from individuals and employers was high, and universities could command both prestige and price. Degrees acted as economic signals for scarce intellectual capital.

This monopoly has disappeared. Artificial intelligence now produces literature reviews in seconds, explains complex theories on demand, and even designs experiments or business strategies that used to be hidden in the minds of experts. The supply curve of knowledge has shifted dramatically outwards, reducing scarcity and lowering the “price” of access to information to almost zero. Knowledge is no longer scarce. What is scarce is the ability to integrate, apply, and scrutinise AI-produced knowledge. In economic terms, the new scarce commodity is interpretability: the human ability to assess, contextualise, and create value from a wealth of data. The survival of universities will depend not on guarding data, but on how well they manage to integrate AI into teaching, research, and public engagement, and that means faculty must lead the way.

 

Globally, the gap between student adoption and institutional readiness is widening. Nearly 80% of students are already using generative AI, but most have no structured support from their universities. Every semester without faculty readiness compounds what education strategist Dr Aviva Legatt calls “pedagogical infrastructure debt.” In economics, this resembles a rising cost curve: the longer an institution delays investing in AI capabilities, the higher the future cost of catching up, both financially and in terms of lost market share. We have seen this before. Learning management systems became universal, but they were used mainly for administration rather than to change pedagogy. MOOCs promised democratic access but often delivered little more than repackaged lectures with low completion rates. In both cases, the opportunity costs were high, as universities gained efficiency but lost innovation and competitive differentiation to external platforms. There is much more at stake with AI. This is not just about content delivery, but also about how the next generation thinks, decides, and solves problems, and whether universities can maintain their comparative advantage in training graduates who offer unique value in a labour market transformed by automation.

 

While many leaders in higher education remain cautious or indifferent, it's a different story at some universities in Uganda. At Victoria University, Vice-Chancellor Dr Lawrence Muganga urges students to embrace AI rather than fear it. He warns that by 2030, many tasks that humans are trained to do today will be replaced by machines, and that the most foolish advice anyone could give would be to tell students to avoid AI. Under his leadership, the university has made AI literacy mandatory, set up a state-of-the-art AI lab, and started developing localised AI tools for the African context. Muganga’s approach treats AI not as a threat, but as a foundation for employability, entrepreneurship, and innovation, a practical example of the faculty-driven integration that Legatt believes is essential. In economic terms, this is a case of strategic first-mover advantage: by investing early in AI capabilities, Victoria University sets its 'product' (the graduates) apart in a competitive education market and potentially increases its value in the labour market.

 

The economic significance could not be clearer. McKinsey estimates that AI could add up to $23 trillion a year to the global economy by 2040, with the biggest gains going to countries and sectors that can reskill quickly. For Africa, where youth unemployment is high, integrating AI under the guidance of educators is not optional; it is a competitive strategy. From a labour economics perspective, AI skills represent a form of human capital that yields high returns in terms of productivity and employability. From a macroeconomic perspective, widespread AI skills could shift a country’s production possibility frontier outwards so that more can be produced with the same inputs. Such integration can bridge the employability gap, stimulate local innovation, and ensure that AI tools reflect local languages, cultures, and realities, rather than importing solutions that don't fit.

 

The era of knowledge scarcity is over, and universities that cling to their old role as gatekeepers will be left behind by alternative providers and self-taught, AI-powered learners. Classical economics teaches that scarcity determines value. Higher education once had a price because it controlled access to a limited resource. Now that AI has flattened the supply curve of information, the equilibrium point has shifted. The price, in this case, the willingness to pay for traditional knowledge transfer, will fall unless universities offer something that the market still values. That “something” is the ability to produce graduates who can create and apply new knowledge in a way that AI cannot. The advantage no longer lies in possessing the knowledge, but in the ability to interpret, apply, and gain insights that AI alone cannot deliver. In other words, the comparative advantage of universities must now lie in cultivating the capacity for human judgement, which remains scarce even amid informational abundance. Faculties are the critical link that enables universities to move from monopolists in a scarce market to innovators in an abundant market. Globally, the warning signs are clear; locally, leaders like Muganga are proving what is possible. The question is whether others will follow before the opportunity passes.

Sunday, 10 August 2025

What the Calculator Panic of the 1980s Can Teach Us About AI Today

By Richard Sebaggala


In 1986, a group of American math teachers took to the streets holding signs that read, “Ban Calculators in Classrooms.” They feared that these small electronic devices would strip students of the ability to perform basic calculations. If a machine could handle addition, subtraction, multiplication, and division, what incentive would students have to learn those skills at all? At the time, the concern felt genuine and even reasonable.

With the benefit of hindsight, the story unfolded quite differently. Within a decade, calculators were not only accepted but actively encouraged in classrooms across many countries. Standardized exams began permitting their use, textbook problems were redesigned to incorporate them, and teachers found that students could tackle more complex, multi-step problems once freed from the grind of manual computation. Far from destroying mathematical thinking, calculators shifted the focus toward problem-solving, modeling, and a deeper grasp of underlying concepts.

 

Almost forty years later, the same conversation is happening, but the technology has changed. Artificial intelligence tools such as ChatGPT, Avidnote, and Gemini can now generate essays, solve problems, and summarize complex ideas in seconds. Today's concern is familiar: that students will stop thinking for themselves because the machine can do the thinking for them. The parallel with the calculator debate is striking. In the 1980s, the worry was that calculators would erase basic arithmetic skills; today, it is that AI will erode the capacity for critical and independent thought. In both cases, the tool itself is not the real problem. What matters is how it is introduced, how it is used, and how deeply it is woven into the learning process.

In economics, this recurring pattern is well understood through the study of general-purpose technologies, which are transformations such as electricity, the internet, and now AI, whose applications cut across multiple industries and fundamentally alter productivity potential. History shows that these technologies almost always meet initial resistance because they unsettle existing skills, workflows, and even the identities of entire professions. Yet, once institutions adjust and complementary innovations emerge, such as new teaching methods, updated regulations, or redesigned curricula, the long-run productivity gains become undeniable. In Africa, the mobile phone offers a clear example. Initially dismissed as a luxury, it became a platform for innovations like mobile money, which transformed financial inclusion, market access, and small business operations across the continent. The calculator did not diminish mathematical thinking; it reshaped it, shifting effort from mechanical tasks to higher-order reasoning. AI holds the same potential, but only if education systems are willing to reimagine how learning is structured around it.

 

When calculators entered the classroom, they prompted a shift in teaching and assessment. Teachers began creating problems where the calculator was useful, but understanding was still essential. Tests required not only the correct answer but also evidence of the reasoning behind it. The arrival of AI demands a similar change. Students can be taught to use AI for tasks such as brainstorming, structuring arguments, or refining drafts, but they should still be held accountable for evaluating and improving the output. Assessments can reward transparency in how AI is used and the quality of judgment applied to its suggestions.

This is where metacognition becomes essential. Metacognition is simply thinking about one's own thinking. In economics, we often speak of comparative advantage: doing what you do best while letting others handle the rest. AI shifts the boundaries of that calculation. The risk is that by outsourcing too much of our cognitive work, we weaken the very skills we need to make sense of the world. If universities fail to train students to integrate AI into their own reasoning, graduates may not only face economic disadvantages but may also experience a deeper sense of psychological displacement, feeling out of place in settings where AI competence is assumed.

Metacognition keeps us in control. It allows us to question the assumptions behind AI-generated answers, spot gaps in reasoning, align outputs with our goals, and know when to override automation in favor of deeper understanding. It is like applying the economist’s habit of examining incentives, not to markets, but to our own minds and to the machine’s mind.

Consider two graduate research students assigned to write a literature review. Both have access to the same AI tools. The first pastes the topic into the system, accepts the generated text without question, and drops it straight into the draft. The result is neat and coherent, with plenty of references, but some of the citations are fabricated, important regional studies are missing, and the structure is generic. Because the student never interrogates the output, the gaps remain. The supervisor flags the work as shallow and overly dependent on AI.

The second student uses AI to produce an initial outline and a list of possible sources. They then ask the tool follow-up questions: "What is this evidence based on? Are there African studies on the subject? Which perspectives might be missing?" They verify each reference, read key sources, and restructure the review to balance global theory with local findings. The final paper is richer, more original, and meets the highest academic standards. The difference lies in metacognition, not only thinking about one's own reasoning but also critically evaluating the machine's reasoning. Over time, this approach strengthens analytical skills and turns AI into a genuine thinking partner rather than a shortcut.

The real opportunity is to treat AI as a thinking accelerator. It can take over repetitive work like drafting, summarizing, and running quick computations so that human effort can be directed toward framing the right questions, challenging assumptions, and making judgments that depend on values and context. History shows that those who learn to work with transformative tools, rather than resist them, gain the advantage. The calculator era offers a clear lesson for our time: instead of banning the tool or policing who has used it, we should teach the skill of using it wisely and of thinking about our own thinking while we do so.

Monday, 4 August 2025

"You're Safe!": What This Joke Really Says About AI and the Future of Education

By Richard Sebaggala

Conversations about AI have become increasingly divided. Some see it as a breakthrough that will transform every sector, education included. Others still treat it as overblown or irrelevant to their day-to-day work. Most people are simply exhausted by the constant updates, ethical dilemmas, and uncertainty. This split has left many universities stuck, circling around the topic without moving forward in any meaningful way.

A recent WhatsApp exchange I saw was both humorous and unsettling: "Artificial intelligence cannot take your job if your job has never needed intelligence." The reply was, "I don't understand..." and the answer came back, "You're safe!" The joke's quiet truth is that if your work relies on knowledge, judgment, and problem-solving, then AI is already capable of doing parts of it. And the parts it replaces may be the very ones that once gave your job value.

For many of us, including lecturers, researchers, and analysts, our core productivity has come from how efficiently we produce or communicate knowledge. But AI is changing the way that knowledge is generated and shared. Tasks like reviewing literature, coding data, summarizing papers, and grading assignments are no longer things only humans can do. Tools like Elicit, Avidnote, and GPT-based platforms now handle many of these tasks faster and, in some cases, better.

Some universities are already moving ahead. Arizona State University has partnered with OpenAI to embed ChatGPT into coursework, research, and even administrative work. The University of Helsinki’s "Elements of AI" course has attracted learners from around the world and built a new foundation for digital literacy. These aren't theoretical exercises; they're practical steps that show what's possible when institutions stop hesitating.

I’ve seen individual lecturers using ChatGPT and Avidnote to draft student feedback, which frees up time for more direct engagement. Others are introducing AI tools like Perplexity and Avidnote to help students refine their research questions and build better arguments. These are not just efficiency hacks; they’re shifts in how academic work is done.

Yet many universities remain stuck in observation mode. Meanwhile, the labour market is already changing. Companies like Klarna and IBM have openly said that AI is helping them reduce staffing costs. When AI can write reports, summarise meetings, or process data in seconds, the demand for certain types of graduate jobs will shrink. If universities fail to update what they offer, the value of a degree may start to fall. We're already seeing signs of a skills revaluation in the market.

This shift isn’t without complications. AI also brings new problems that institutions can’t ignore. Equity is one of them. Access to reliable AI tools and internet connections is far from universal. If only well-funded institutions can afford high-quality access and training, the digital divide will only widen. Universities need to think about how they support all learners, not just the privileged few.

There’s also the question of academic integrity. If students can complete assignments using generative AI, then we need to rethink how we assess learning. What kinds of skills are we really measuring? It’s time to move away from assignments that test simple recall and toward those that build judgment, ethical reasoning, and the ability to engage with complexity.

Data privacy matters too. Many AI platforms store and learn from user input. That means student data could be exposed if universities aren’t careful. Before rolling out AI tools at scale, institutions need clear, transparent policies for how data is collected, stored, and protected.

And then there’s bias. AI tools reflect the data they’re trained on, and that data often carries hidden assumptions. Without proper understanding, students may mistake bias for truth. Educators have a role to play in teaching not just how to use these tools, but how to question them.

These are serious concerns, but they are not reasons to stall. They are reasons to move forward thoughtfully. Just as we had to learn how to teach with the internet and digital platforms, we now need to learn how to teach with AI. Delaying action only increases the cost of catching up later.

What matters most now is how we prepare students for the labour market they’re entering. The safest jobs will be those that rely on adaptability, creativity, and ethical thinking: traits that are harder to automate. Routine tasks will become commodities. What will set graduates apart is their ability to ask good questions, work across disciplines, and collaborate effectively with technology.

These changes are no longer hypothetical. They’re happening. Institutions that embrace this moment will continue to be relevant. Those that don’t may struggle to recover their footing when the changes become impossible to ignore.

Universities must lead, not lag. The time for think pieces and committee formation has passed. We need curriculum updates, collaborative investment in training, and national plans that ensure no institution is left behind. The early adopters will shape the new rules. Everyone else will follow or be left out.

That WhatsApp joke made us laugh, but its warning was real. AI is changing how the world defines intelligence and value. If education wants to stay meaningful, it has to change with it. We cannot afford to wait.

Sunday, 27 July 2025

AI and Africa’s Economy: Growth Simulations and the Policy Choices Ahead

By Richard Sebaggala



OpenAI's July 2025 Productivity Note made me wonder: if ChatGPT's productivity impact in developed economies is just a hint of what's possible, what could this mean for Africa? I particularly considered what might happen here in Uganda. Unlike earlier innovations like electricity or transistors, which took many years to spread through economies, AI is moving incredibly fast. ChatGPT, for instance, gained 100 million users in only two months, making it the quickest adopted consumer technology ever. Moreover, while tools like the steam engine or electricity mainly expanded physical work, AI enhances thinking itself. This gives it immense potential for boosting productivity, but also carries the risk of leaving some people behind. As The Economist has observed, technologies that transform productivity rarely share benefits equally. Without intentional effort, AI could worsen existing inequalities instead of reducing them.
OpenAI's analysis certainly showcased AI's impressive capabilities. Globally, over 500 million people now use OpenAI tools, exchanging 2.5 billion messages every day. In the United States, ChatGPT has been a significant time-saver, helping teachers save nearly six hours weekly on administrative duties and state workers 95 minutes daily on routine tasks. Entrepreneurs are also launching startups much more quickly than before, and a significant 28% of employed U.S. adults now use ChatGPT at work, a big increase from just 8% in 2023. However, the report largely overlooked Africa, a continent facing distinct challenges such as lower internet access, less developed digital infrastructure, a larger informal economy, and systemic obstacles that slow down rapid adoption. Despite this, Africa has shown with the mobile money revolution that it can bypass entire stages of development when the right technology emerges at the opportune moment.
Modeling Africa's AI Future: My Approach
To explore the potential impact, I created a simulation model. This model used the same productivity factors as OpenAI's analysis, but I adjusted them to fit Africa's unique economic conditions. My focus was on Sub-Saharan Africa, with a specific scenario developed for Uganda. I examined three different AI adoption levels: low (10–15% of the workforce), medium (30–40%), and high (60–70%). My starting point assumed 3% annual GDP growth without AI. I then added AI-driven boosts to that growth rate based on adoption: 0.5 percentage points for low, 1.2 for medium, and 2 for high adoption across Africa. For Uganda, I made slightly lower adjustments because of more significant infrastructure limitations.
Africa's Economic Boost: A Trillion-Dollar AI Opportunity
The potential impact of AI on Sub-Saharan Africa’s economy is remarkable. The region’s total economic output (GDP) is approximately $1.9 trillion. The projections show that by 2035, AI could add significant value depending on how widely it’s adopted:
  •  Low AI adoption could add an extra $150 billion to Sub-Saharan Africa’s economy.
  •  Medium AI adoption could boost the economy by an additional $360 billion.
  •  High AI adoption has the potential to add over $610 billion.
To put this into perspective, if Sub-Saharan Africa embraces AI widely, its economy could grow by an amount equivalent to Nigeria’s entire economy today. And these figures do not include the even greater potential for rapid progress in areas like education, healthcare, and agriculture, where AI can help overcome long-standing challenges.
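For readers who want to see the arithmetic behind these headline numbers, the sketch below roughly reproduces the regional figures under a few stated assumptions: compound growth from a 2024 base of $1.9 trillion, an 11-year horizon to 2035, a 3% baseline growth rate, and the adoption boosts described earlier. The original simulation (and the lower Uganda adjustments) may differ in detail.

```python
# Rough replication of the Sub-Saharan Africa scenarios described above.
# Assumptions (not taken from the original model file): compound annual
# growth, an 11-year horizon from a 2024 base of $1.9 trillion, a 3%
# baseline growth rate, and AI boosts added directly to that rate.
BASE_GDP_TRILLION = 1.9
BASELINE_GROWTH = 0.03
YEARS = 11  # 2024 -> 2035

SCENARIOS = {"low": 0.005, "medium": 0.012, "high": 0.020}

def gdp_in_2035(boost: float) -> float:
    """Project GDP under baseline growth plus an AI-driven boost."""
    return BASE_GDP_TRILLION * (1 + BASELINE_GROWTH + boost) ** YEARS

baseline_2035 = gdp_in_2035(0.0)
for name, boost in SCENARIOS.items():
    gain_billion = (gdp_in_2035(boost) - baseline_2035) * 1000
    print(f"{name:>6} adoption: ~${gain_billion:,.0f} billion above baseline by 2035")

# Prints roughly $144bn (low), $357bn (medium), and $620bn (high), in line
# with the ~$150bn, ~$360bn, and $610bn+ figures quoted in the post.
```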


Uganda's Economic Outlook: Significant Growth Potential

Uganda shows a similar promising trend, though on a smaller scale. Starting with a GDP of $49 billion in 2024, our model projects these outcomes by 2035 based on AI adoption:

  • Low AI adoption could see Uganda's GDP reach $65 billion.
  • Medium AI adoption could push the GDP to $71 billion.
  • High AI adoption could result in a GDP of nearly $77 billion.

This projected increase represents billions of dollars in new economic activity for Uganda. These additional resources could be invested in creating jobs, improving infrastructure, and enhancing social services across the country. Jobs matter beyond the headline figures: even if some jobs change because of AI, the analysis indicates that medium or high adoption would lead to an overall increase in available jobs. AI can free workers from repetitive tasks, allowing them to take on more complex and valuable roles. However, without investment in training people for these new skills, this shift could worsen existing inequalities, particularly in rural areas and informal sectors.


AI's Impact Across Different Sectors

AI adoption won't affect all parts of the economy in the same way. Service industries are likely to gain the most, given that they rely heavily on knowledge work. Manufacturing will also see benefits. However, agriculture, which is Uganda's largest employer, will experience slower productivity improvements unless AI tools are specifically designed for small-scale farmers. This means developing things like precision farming applications and market information platforms. Because of this uneven impact, it's crucial to have policies that don't just focus on people working in cities but also extend AI's advantages to our agricultural areas.

 

Why Policy Decisions Matter for AI

The main message here is that Africa simply cannot afford to sit back and watch the AI revolution unfold. The productivity gains I modeled are not automatic; they depend entirely on the deliberate choices we make. For Uganda, and for Africa more broadly, five key policy areas need our attention. First, AI literacy: introduce AI learning in schools, universities, and vocational training programs. Second, infrastructure: expand internet access and make devices more widely available, especially in rural areas. Third, sectoral integration: encourage the use of AI in vital sectors like agriculture, healthcare, and education, not just in technology companies. Fourth, inclusive safety nets: create support for workers who might be affected by AI automation. Finally, governance: develop rules for AI ethics and data use that are designed for African contexts.

 

The Bigger Picture for Africa

Adopting AI in Africa could be as transformative as mobile money was for making financial services accessible. But unlike mobile money, which was mainly driven by private companies, AI's benefits will require strong cooperation between the public and private sectors. The decisions Uganda and its neighboring countries make in the next five years will determine whether AI truly leads to inclusive growth or if it deepens existing inequalities. If we make the right moves, Africa could ride the wave of AI progress, overcoming limitations that have held us back for decades. If we don't, we risk falling further behind.



Saturday, 19 July 2025

 Artificial Intelligence and the Research Revolution: Lessons from History

By Richard Sebaggala

 


As an enthusiastic observer of AI developments, I recently had the opportunity to gain insights into the upcoming GPT-5 from the head of OpenAI. While existing models such as GPT-4o and its cousins, o1 and o3, impress with their advanced reasoning capabilities, the anticipated GPT-5 promises to be a true game-changer. Slated for release in July 2025, GPT-5 is not merely an upgrade; it is a leap forward that promises to unify advanced reasoning, memory, and multimodal processing into one coherent system. Imagine an AI model that can not only perform calculations but also browse the internet, perceive and interact with its environment, remember details over time, hold natural conversations, and carry out complex logical tasks. Such a leap would be revolutionary.

 

This reminds me of the second half of the 20th century, when the advent of statistical software like SPSS, SAS, and Stata marked a major shift in research methodology. These tools democratized data analysis and made sophisticated statistical techniques accessible to a wider range of scientists and researchers. This revolution not only increased productivity but also transformed the nature of research by enabling scientists to delve deeper into the data without needing to spend time on complex calculations.

 

Early adopters of these statistical tools found themselves at the forefront of their field, able to focus more on the interpretation of the data and less on the mechanics of the calculations. This shift not only increased productivity but also the quality and impact of their research. For example, psychologists using SPSS were able to replicate studies on cognitive behavior more quickly, which greatly accelerated the validation of new theories. Economists equipped with Stata’s robust econometric tools were able to analyze complex economic models with greater precision, leading to policy decisions that were deeply rooted in empirical evidence.

 

The AI revolution, led by technologies such as GPT-5, mirrors this historical development, but on a larger scale. AI goes beyond traditional statistical analysis by incorporating capabilities such as machine learning, natural language processing, and predictive analytics that open new dimensions of research potential. For example, AI can automate the tedious process of literature searches, predict trends from historical data, and suggest new research paths through predictive modeling. GPT-5’s expected one-million-token context window will allow it to handle entire books or datasets at once, making research synthesis and cross-domain integration faster and more insightful than ever before. These capabilities enable researchers to achieve more in less time and increase their academic output and influence.

 

In the realm of economics and beyond, the concept of "path dependency" states that early adopters of technology often secure a greater advantage over time. Those hesitant to adopt AI may soon find that they can't keep up in a world where AI is deeply embedded in work, research, and decision-making. The skepticism of some academics and policymakers, especially in countries like Uganda, toward AI could prove costly. As AI becomes more intuitive and indispensable, with models now able to act autonomously, remember prior tasks, and reason across modalities, those who delay its adoption risk losing valuable learning time and a competitive advantage.

 

Nonetheless, the statistical revolution transformed only one facet of the research process, statistical analysis, and researchers who had not embraced those tools could still succeed on the strength of other skills, such as qualitative analysis. The impact of AI, however, is much broader. GPT-5 and similar systems are expected to transform every phase of the research lifecycle: from conceptualization, literature review, and question framing to data analysis, manuscript drafting, and even grant application writing. This comprehensive influence means that AI is not just an optional tool, but a fundamental aspect of modern research that could determine the survival and success of future research endeavors. This makes the use of AI not only beneficial but essential for those who wish to remain relevant and influential in their field.

 

On the cusp of the GPT-5 era, the message is clear: AI will not replace researchers. Instead, the researchers who use AI effectively will set the new standard and replace those who do not. It's not about machines taking over; it's about using their capabilities to augment our own. Just as statistical software once redefined the scope and depth of research, AI promises to redefine it again, only more profoundly. Unlike earlier models, GPT-5 is positioned to act as an intelligent research collaborator, able to draft, revise, interpret, and even manage tasks in real time. In the history of scientific research, those who use these tools skillfully will lead the next stage of discovery and innovation.