Wednesday, 25 March 2026

 

AI Is Not Killing Creativity. It Is Changing Who Gets to Be Creative

By Sebaggala Richard (PhD)

 

It has become common to say that artificial intelligence is killing human creativity. You hear it in universities, in the media, and increasingly in professional spaces where people write, design, teach, or make things for a living. The fear is easy to understand. If a machine can generate an image, draft an essay, suggest a storyline, or mimic a style in seconds, then it can seem as though something deeply human is being pushed aside. Yet I think this debate often begins from the wrong starting point. The real issue is not whether AI can be creative in some abstract sense, nor simply whether it can outperform humans on selected tasks. The more interesting question is what happens when a tool makes creative production easier for ordinary people who may not have had the time, training, confidence, or support to do as much before.

Part of the confusion comes from the way creativity is discussed in public. It is often treated as though it were an all-or-nothing trait: either one is creative or one is not; either AI has creativity or it does not. But that is not how creativity works in practice. Creativity is unevenly distributed, like many other human capabilities. A small number of people operate at a very high level and produce work that surprises, unsettles, and sometimes shifts the direction of a field. Most people sit somewhere below that frontier. They may still have ideas, but they often struggle to develop them, express them clearly, or refine them into something useful. From that perspective, the arrival of AI does not necessarily mean that human creativity is being erased. It may instead mean that the baseline is shifting.

That possibility is becoming harder to ignore. Some recent studies suggest that large language models can outperform the average human on certain standardized creativity tasks. They can also produce short written pieces that judges rate quite highly in terms of novelty or quality. That matters, but it is only half the story. The same body of work also shows that the most creative human participants still tend to do better. So the evidence does not really support the dramatic claim that machines have overtaken human creativity. What it suggests instead is something more uneven: AI seems capable of lifting performance in the middle of the distribution without necessarily displacing excellence at the top.

This is a far more interesting result than either utopian or apocalyptic accounts allow. It suggests that AI may matter less as a replacement for great creators and more as a support tool for everyone else. For many people, the hardest part of creative work is not brilliance itself. It is getting started. It is producing the first draft, finding an angle, trying out alternatives, or pushing through the awkward stage where an idea still feels half-formed. These early stages are costly. They require time, effort, and a willingness to tolerate failure. AI lowers some of those costs. It helps people begin, and sometimes that is what matters most.

Economists should immediately recognize the logic here. When the cost of participating in an activity falls, more people are able to take part. That does not mean quality at the frontier disappears. It means the average level of participation and output can rise, even while the best performers remain distinct. We have seen versions of this before. Calculators did not end mathematics. Word processors did not end writing. Statistical software did not end serious empirical work. In each case, a tool removed some of the friction that made participation harder. The best people still stood out, but many more people could now do competent work than before. AI may be doing something similar for creativity.
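To make that distributional logic concrete, here is a minimal sketch in Python. The numbers are entirely invented for illustration and are not drawn from any of the studies mentioned above: a population of "creative output" scores receives an assist that helps weaker performers more than stronger ones, so the average rises while the top of the distribution barely moves.

import random

random.seed(42)

# Made-up baseline "creative output" scores for a large population
# (a toy distribution, not data from any study).
population = [random.gauss(50, 15) for _ in range(10_000)]

def with_assist(score, max_boost=20):
    # Toy assumption: the assist helps weaker performers more than
    # stronger ones, fading to nothing at the top of the scale.
    boost = max_boost * max(0.0, (100 - score) / 100)
    return min(100.0, score + boost)

assisted = [with_assist(s) for s in population]

def mean_and_top(scores):
    ordered = sorted(scores)
    mean = sum(ordered) / len(ordered)
    top = ordered[int(0.99 * len(ordered))]  # 99th percentile
    return mean, top

base_mean, base_top = mean_and_top(population)
new_mean, new_top = mean_and_top(assisted)

print(f"baseline:    mean = {base_mean:.1f}, 99th percentile = {base_top:.1f}")
print(f"with assist: mean = {new_mean:.1f}, 99th percentile = {new_top:.1f}")

The exact figures are arbitrary; the point is only that a tool can raise the middle of a distribution without redefining its upper tail.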

That does not mean the concerns are misplaced. There is good reason to worry that AI-generated outputs may start to sound alike, look alike, or flatten important differences in voice and style. There is also a real danger that people may rely on these systems too heavily and lose the patience required for deeper thinking. In some settings, especially where judgment is weak, AI may produce work that appears polished while remaining shallow. So this is not a simple story of progress. It is a story of gains mixed with risks.

Still, the strongest criticism of AI and creativity often assumes too much. It assumes that if machines can help with creative work, then human originality must necessarily decline. That conclusion does not follow automatically. A tool can improve average performance without replacing the highest forms of human achievement. In fact, one of the more striking patterns in recent evidence is that AI often seems to help weaker performers more than stronger ones. That point matters because it suggests the technology may be compressing part of the gap between those who already know how to express ideas well and those who do not.

For education, this matters a great deal. In many contexts, the biggest barrier is not lack of intelligence but lack of support. Students may have ideas but struggle to structure them. Young researchers may know what they want to study but fail to turn a rough interest into a coherent question. Professionals may have genuine insight but lack the time to write clearly and quickly. In such situations, AI may not create originality out of nowhere, but it can help convert weak beginnings into usable drafts, and that is not a trivial contribution.

The same may be true more broadly in lower-resource environments. In much of Africa, for example, the issue is often not that people lack imagination. It is that many work without the layers of institutional support that help refine raw ability into polished output. Good editors are scarce. Mentorship is uneven. Feedback is delayed. Access to high-quality learning materials is limited. Under those conditions, a tool that reduces the cost of drafting, revising, or exploring ideas may have wider effects than critics in richer contexts fully appreciate. Even here, however, caution is necessary. Access to a tool is not the same as capability. AI does not automatically democratize creativity simply because it is available. People still need judgment. They still need subject knowledge. They still need some ability to distinguish a promising idea from a plausible-sounding bad one. Without that, the technology can mislead just as easily as it can help.

So the choice is not between saying that AI is destroying creativity and saying that it is liberating creativity. Both claims are too blunt. The better view is that AI is changing the structure of creative work. It is making some parts easier, broadening some forms of participation, and making some kinds of output more accessible. At the same time, it may also be encouraging sameness, overconfidence, and a false impression of depth.

The real question, then, is not whether AI can replace the best human creators. At least for now, that is the wrong benchmark. The more useful question is whether AI is raising the floor for creative work, and what follows when more people are able to produce, express, and refine ideas than before. If that is what is happening, then the social significance of AI may lie less in producing masterpieces and more in widening participation. It may not remove the frontier of human creativity, but it may change who gets close enough to it to matter. That is not the death of creativity. It is a shift in its distribution.

 

Friday, 20 March 2026

 You Do Not Need Every AI Tool: A Lesson from Econometrics

By Sebaggala Richard (PhD)

 

Every few weeks a new artificial intelligence tool is introduced with the promise of transforming research, teaching, writing, coding, and analysis in academia. The pace of innovation is impressive, but it has also created a certain level of anxiety within universities. Students feel compelled to experiment with every new platform they encounter. Lecturers worry about keeping pace with rapidly changing technologies. Researchers sometimes feel that failure to adopt the latest tool may leave them behind.

 

The real problem, however, is not the abundance of AI tools. It is that in trying to use all of them, researchers risk fragmenting their attention and weakening the depth of their thinking.

 

In many cases, the response to this environment has been predictable. Instead of building deep competence with a small number of tools, people begin to accumulate platforms. They open accounts on multiple systems, experiment briefly with each one, and then move to the next new tool when it appears. The result is often the opposite of what they intended. Productivity declines rather than improves.

 

Whenever I observe this pattern today, I remember a lesson from my econometrics training many years ago. At the time, we were being introduced to statistical software packages such as Stata, EViews, and SPSS. These programs were widely used in universities and research institutions around the world, and for students beginning to learn applied econometrics the choice of software seemed overwhelming. Many of us were unsure which package we should invest time in learning.

 

Our lecturer offered a simple but memorable analogy. He told us that one does not need to drive every car in order to become a good driver. What matters is learning one vehicle thoroughly and understanding how it works. He then advised us that if we learned Stata properly, we would not miss much from the other packages, and that the skills we acquired would make it easier to understand any other software we might encounter later.

 

At the time the comment seemed like practical advice about software. With experience, however, it became clear that the point was much deeper. The lesson was about mastery and focus. In economics we often think about the efficient allocation of scarce resources. Attention is one of those scarce resources. When attention is spread across too many tools, both the quality of learning and overall productivity decline.

 

The current environment of artificial intelligence tools presents a similar challenge. A growing number of platforms now offer support for academic tasks such as summarizing literature, drafting text, generating code, analysing documents, and organizing research materials. Systems such as ChatGPT, Gemini, Claude, Perplexity, Elicit, Avidnote, ResearchRabbit, Scite, and NotebookLM have become increasingly visible in academic discussions. Each claims to provide significant advantages for research and knowledge work.

 

Students therefore frequently ask which of these tools they should learn. The question resembles the one we asked about econometrics software years ago. The answer is also similar. Researchers do not need to learn every available platform. What matters is developing a deep understanding of a small number of tools and learning how to use them effectively in an intellectual workflow.

 

When researchers attempt to use every tool available, several difficulties tend to emerge. The first is fragmentation of workflow. Instead of concentrating on the research problem itself, the researcher spends time switching between multiple systems. The second is superficial knowledge. Individuals may become familiar with the basic interface of several platforms without developing the skill required to use any of them effectively. The third is cognitive overload. Mental effort is directed toward managing tools rather than analysing data, developing arguments, or interpreting results.

 

There is, however, a deeper and less visible cost. When researchers constantly switch between AI systems, they do not only fragment their workflow; they also fragment their thinking. Each system structures responses differently, suggests particular framings, and nudges users toward specific ways of expressing ideas. Over time, this can weaken intellectual coherence. Instead of developing a consistent analytical voice, the researcher begins to adapt to the logic of whichever tool is being used at the moment.

 

Part of this confusion is reinforced by the strong competition currently taking place among major artificial intelligence developers. Large technology firms are investing heavily in AI systems and competing to build the most capable digital assistants. This has produced intense comparisons between leading platforms such as ChatGPT, Claude, and Gemini. Each system has particular strengths. Some are especially effective at analysing long documents, others integrate well with search engines or cloud services, and others perform strongly in coding and structured analysis.

 

For most academic researchers, however, the differences between these systems are less important than the discussion surrounding them might suggest. Modern AI models already possess capabilities that would have been considered remarkable only a few years ago. They can summarize academic papers, assist in structuring literature reviews, explain theoretical frameworks, generate programming scripts, and help refine academic writing. The critical issue is therefore not access to artificial intelligence tools but the ability to use them thoughtfully.

 

From an economic perspective, this behaviour reflects classic problems of bounded rationality and switching costs. Each new tool requires time to learn, cognitive effort to integrate, and attention to evaluate. When these costs are ignored, researchers over-invest in exploration and under-invest in mastery. The result is diminishing returns to additional tools and, in many cases, a decline in overall productivity.
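The point can be made concrete with a small back-of-the-envelope sketch in Python, using invented numbers purely for illustration: each additional tool contributes a smaller marginal benefit, while learning and switching impose a roughly constant cost per tool, so the net gain peaks at a handful of tools and then declines.

# Invented illustrative numbers: hours saved per month by the k-th tool adopted,
# and a rough fixed cost per tool for learning, maintaining, and switching.
marginal_benefit = [30, 15, 8, 4, 2, 1, 1, 1]
cost_per_tool = 5

for n in range(1, len(marginal_benefit) + 1):
    net_gain = sum(marginal_benefit[:n]) - cost_per_tool * n
    print(f"{n} tools: net gain = {net_gain} hours per month")

With these invented figures the net gain peaks at three tools; the precise numbers do not matter, only the shape of the curve.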

 

In practice, a focused combination of tools can already provide substantial support for academic work. Systems such as ChatGPT are particularly useful as intellectual companions during the research process. They can assist in refining research questions, clarifying conceptual frameworks, designing surveys, interpreting statistical output, and improving the structure of academic writing. When used carefully, such systems function less as automated generators of text and more as conversational partners that help researchers examine their reasoning.

 

Other platforms offer strengths in areas such as document analysis and information synthesis. Systems like Gemini are often helpful when researchers are working with large reports or multiple documents that need to be summarized and compared. Tools such as Claude have become known for their ability to handle very long texts and produce structured explanations. When used selectively, these capabilities can significantly reduce the time required to extract insights from extensive material.

 

The broader principle underlying this discussion is familiar in economics. Productivity does not necessarily increase with the number of technologies employed. It increases when individuals develop comparative advantage in the use of particular tools. A researcher who understands three systems deeply will usually work more efficiently than someone who attempts to use ten different platforms at once. Mastery compounds over time. Once the logic of AI interaction is understood, adapting to new tools becomes relatively straightforward.

 

This observation also has implications for universities. Institutions sometimes respond to technological change by attempting to introduce students to a large number of platforms. A more effective approach would focus on teaching core competencies. Students should learn the principles of AI literacy, critical engagement with algorithmic outputs, responsible and ethical use of AI, and disciplined integration of a few tools into their research workflow. The objective is not simply to familiarize students with technology but to help them think more effectively in an environment where intelligent systems are widely available.

 

Looking back, the lesson from my econometrics lecturer was not primarily about statistical software. It was about maintaining focus in a world that constantly presents new options. That insight remains highly relevant today. Artificial intelligence tools will continue to appear at a rapid pace, and debates about which system is superior will likely persist.

 

In the age of artificial intelligence, the constraint is no longer access to tools. It is the ability to think clearly while using them. The danger is not missing out on AI tools; it is becoming cognitively shallow while using them. Researchers who benefit most from these technologies will not be those who pursue every new platform, but those who develop disciplined, focused, and reflective ways of working with a few powerful tools.

 

In the same way that learning to drive one vehicle well provides the foundation for driving many others, mastering a few tools can provide the foundation for productive research in the age of artificial intelligence.

 

Sunday, 1 March 2026

 

Sequencing or Stagnation? Rethinking Africa’s Artificial Intelligence Strategy

By Sebaggala Richard (PhD)

 

In a recent Brookings commentary titled “Why Africa Should Sequence, Not Rush Into AI,” Mark-Alexandre Doumba argues that Africa’s greatest risk is not missing the AI revolution but joining it too early. Drawing on the work of Ricardo Hausmann and Dani Rodrik, the article cautions against what it describes as premature automation. The central concern is that without adequate digital infrastructure, data governance frameworks, and productive capabilities, rapid adoption of artificial intelligence could deepen dependency rather than accelerate structural transformation. It is a thoughtful intervention in an important policy debate and one that deserves serious engagement.


At the same time, the analogy underpinning the sequencing argument merits closer examination.  The thesis implicitly treats artificial intelligence as comparable to earlier industrial technologies such as factories, heavy manufacturing, or large-scale power infrastructure. In those historical periods, countries needed to accumulate domestic skills, supply chains, and institutional capacity before industrial investment could generate sustained returns. Where this foundation was weak, industrialization often produced enclaves with limited linkages to the broader economy.

 

Artificial intelligence operates in a different space. It does not primarily reorganize physical production; it reshapes how thinking and problem-solving are organized. It influences how research is conducted, how policies are drafted, how code is written, how diagnoses are made, and how information is processed. Its deployment is largely cloud-based and does not depend on ownership of heavy physical capital. More importantly, the use of AI tools itself contributes to skill formation. Individuals often develop competence through interaction, experimentation, and repeated application. Capability therefore grows partly through adoption rather than entirely before it.

 

This reality complicates the historical logic of waiting until foundations are fully consolidated. In earlier industrial waves, late entry sometimes allowed countries to observe pioneers, import mature technologies, and expand cautiously. In the current environment, the capability frontier moves quickly and continuously. Early adopters refine processes, accumulate institutional experience, and embed AI deeply into their systems. As experience compounds, catching up becomes more demanding.

 

The pattern is already visible at the individual level. Professionals who dismissed AI tools a few years ago often find that peers who experimented early have reorganized how they conduct research, prepare lectures, analyze data, and manage projects. The difference is not limited to marginal efficiency gains. It reflects changes in workflow, iteration speed, and analytical depth. When such shifts scale across institutions and economies, divergence becomes structural.

 

The labor market concerns raised in the sequencing argument are understandable. Automation can displace certain categories of routine work, particularly in service sectors. Yet many African economies have not developed large-scale industrial employment bases comparable to those that powered earlier development trajectories elsewhere. Informality remains widespread, and productivity gaps persist. In this context, the more pressing risk may not be premature deindustrialization but the failure to cultivate high-productivity knowledge and service sectors capable of absorbing a growing youth population.

 

Artificial intelligence should therefore be viewed not only as an automation technology but also as a productivity-enhancing instrument. It can strengthen agricultural advisory systems, support diagnostic processes in health care, enhance educational personalization, improve logistics coordination, and assist public administration. In environments where documentation remains paper-based and data fragmented, AI-assisted digitization and analysis can accelerate institutional modernization. In that sense, AI can contribute to building the very foundations that sequencing advocates consider prerequisites.

 

The concern about digital dependency is historically grounded. Africa’s experience with extractive development shows how exporting raw inputs while importing high-value outputs can entrench structural imbalances. A digital parallel could emerge if data is generated locally while algorithms, platforms, and standards are designed and controlled elsewhere.

 

However, dependency does not arise solely from early adoption. It can also result from disengagement. Global AI platforms will continue to expand regardless of cautious national strategies. Data ecosystems will evolve. Technical standards will consolidate. Countries that actively cultivate domestic competence are better positioned to negotiate terms, influence governance frameworks, and adapt systems to local realities. Sovereignty in the digital age depends not only on regulation but also on participation and expertise.

 

The labor dimension is equally nuanced. The relevant comparison is not between African workers and machines in isolation, but between workers who use AI effectively and those who do not. In global service markets, AI literacy is rapidly becoming a baseline expectation. Youth who master these tools strengthen their competitiveness in remote work, digital entrepreneurship, research support, and creative industries. Delayed exposure risks widening skill gaps that become increasingly difficult to close.

 

None of this diminishes the importance of governance, infrastructure, and regulatory design. Data protection regimes, interoperability standards, and digital public infrastructure remain essential pillars of a sustainable AI ecosystem. The question is whether these frameworks must be fully consolidated before meaningful adoption begins, or whether they can evolve alongside practical engagement. Institutional learning is often iterative. Policymakers refine regulatory approaches through exposure to real-world applications and emerging risks.

 

The strategic issue, then, is not whether Africa should move early or late. It is whether it will build the capacity to shape how AI is integrated into its economies and institutions. Artificial intelligence functions as a general-purpose technology that reshapes the production of knowledge and decision-making. Countries that embed it thoughtfully in education systems, research environments, entrepreneurial ecosystems, and public administration may realize productivity gains that conventional development models underestimate.

 

The debate should not be reduced to speed versus sequencing. It should focus on whether Africa approaches AI as a passive consumer or as an active capability builder. Postponement may appear prudent, but in a rapidly evolving technological landscape it carries opportunity costs that accumulate quietly yet persistently.

 

In this context, delay is not merely caution. It is a strategic position whose consequences deserve careful reflection.