Tuesday, 12 November 2024

The Economics of Generative AI Adoption: The risks of hesitation

 By Sebaggala Richard

 

Generative AI is revolutionizing professional fields, from writing and marketing to data analysis and design. However, the main obstacle to widespread integration is not technical challenges or ethical concerns; it is people's hesitation to embrace the technology. This hesitation goes beyond personal reservations and carries economic and professional risks that could see many fall behind in an era of AI-enhanced productivity. Inspired by Melanie Holly Pasch's article "Generative AI Isn't Coming for You—Your Reluctance to Adopt It Is," this article argues that the real threat to our careers is not AI itself but the reluctance to adopt it, and explores how that reluctance can affect careers and industries more profoundly than the technology ever will.

Many professionals are skeptical of generative AI, seeing it as a threat to their specialized skills and hard-earned expertise. This reaction is only natural. Pasch herself initially doubted that AI could replicate the creativity and precision of her work. Professionals who have honed their skills over years fear that AI could turn their expertise into a commodity. This resistance stems from a psychological barrier known as the "sunk cost fallacy": a tendency to cling to skills we have invested time in, which makes us resist tools that seem to diminish those skills. Yet resisting new tools like AI can mean being left behind while others use these innovations to accelerate their careers.

For those who resist AI, the opportunity cost is significant. Forgoing generative AI can mean missing out on productivity gains and falling behind the competition. Think of a writer who spends hours refining a draft that AI could complete in minutes, time that could instead go to creative or strategic work. This hesitation is not only a missed opportunity but also a lost competitive advantage. In academia, for example, generative AI can streamline the traditionally time-consuming tasks of literature research and synthesis. Tools like Avidnote allow researchers to efficiently organize and summarize literature, freeing them to focus on interpretation and insight. As AI evolves from a niche tool to a fundamental expectation, professionals who resist it will not only miss out on productivity gains but could also find their skills becoming less relevant. Over time, this could lead to professional obsolescence as AI-driven practices become the industry standard.

Generative AI has redefined expectations of efficiency and productivity. Today, it is no longer enough to do a task well. Professionals are expected to deliver faster and more innovative results. In this environment, adaptability is becoming a competitive advantage. Those who are willing to experiment with artificial intelligence not only gain efficiency and effectiveness, but also acquire skills that make them valuable assets in evolving work environments. Pasch’s own journey illustrates this shift. After embracing AI, she found that it did not replace her work, but enhanced it. Her adaptability expanded her expertise and allowed her to redefine her professional values.

A common misconception about AI is that it devalues human talent by making skills more accessible. On the contrary, AI increases professional value by taking over routine tasks and freeing up time for higher-value work. Pasch initially feared that AI would dilute her craft, but she later realized that it allows her to focus on strategic, high-impact tasks. AI allows professionals to raise standards and focus on creativity, strategy, and relationship-building: the elements that truly set individuals apart.

For those still hesitant to adopt AI, some practical strategies can ease the transition. Starting small is an effective way to get used to it: using AI initially for routine tasks such as generating ideas or summarizing documents helps you become familiar with its capabilities. Another important habit is formulating specific prompts, which produce more relevant and impactful AI-generated content. Recently, I discovered that you can share your initial thoughts with ChatGPT about what you want it to do, then ask the same AI to refine and improve the prompt before executing it. The results are impressive: this approach bridges the gap between skilled prompters and those less confident in communicating with AI, and it improves the quality of responses for everyone. Treating AI as a helpful assistant rather than a replacement can also encourage professionals to use it to tackle smaller tasks or overcome writer's block, freeing up time for more complex projects. Finally, do not use AI output "as is"; treat it as a starting point to be refined and personalized so that the final product reflects your own expertise.
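For readers who like to automate such habits, here is a minimal sketch of the two-step "refine the prompt, then run it" routine described above, scripted against a chat API rather than typed into the chat window. It assumes the official openai Python client (version 1 or later) with an API key set in the environment; the model name, the ask helper and the example request are illustrative placeholders, not part of any particular product or of Pasch's workflow.

```python
# A minimal sketch (not an official recipe) of the two-step prompt-refinement
# routine: ask the model to improve a rough prompt, then run the improved prompt.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model name; substitute any chat-capable model


def ask(prompt: str) -> str:
    """Send one user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: turn a rough idea into a sharper prompt.
rough_idea = "Summarise my draft literature review and point out the gaps."
refined_prompt = ask(
    "Rewrite the following rough request as a clear, specific prompt that "
    "will produce the most useful answer. Return only the improved prompt.\n\n"
    f"Rough request: {rough_idea}"
)

# Step 2: execute the refined prompt.
answer = ask(refined_prompt)

print("Refined prompt:\n", refined_prompt)
print("\nAnswer:\n", answer)
```

The same loop works just as well without any code: paste your rough request into the chat, ask for a better version of the prompt, then submit the improved prompt as a new message.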

The hesitation to adopt AI is not just an individual problem, but a general economic risk. Industries and individuals that are slow to integrate AI will struggle to remain globally competitive. As productivity and growth increasingly depend on AI integration, resistance could hinder progress in sectors and economies already struggling with global competition. Adaptability to AI will be critical to economic resilience. As AI technology evolves, expectations for workers, industries, and economies will change. Those who adapt to AI could gain a competitive advantage, while those who resist risk being left behind.

Generative AI is not here to replace jobs but to augment human talent and eliminate inefficiencies. The economic impact of using AI goes beyond personal productivity and affects the competitiveness and survival of the entire human production chain. Using AI means using tools that reduce routine work and allow individuals and organizations to devote time and resources to high-impact, strategic initiatives. This transition opens up opportunities for innovation, skills development, and economic growth that can be transformative at every level: individual, organizational, and societal.

Hesitation to adopt AI comes at an economic cost by slowing productivity gains and reducing competitiveness. Regions, industries, and professionals that resist AI forgo immediate efficiencies and miss the economic benefits that AI will bring over time. As more professionals and organizations integrate AI into their workflows, those who adopt it earlier can gain a first-mover advantage and establish themselves as leaders in their field. Early adopters are often better positioned to innovate, adapt to changing demands, and capitalize on the growing AI-driven economy.

The economics of AI adoption illustrate a broader principle: in an environment where technology is rapidly evolving, adaptability is no longer optional — it’s essential. The economic landscape is increasingly shaped by technological capabilities, and AI has set a new standard for what is achievable in terms of speed, depth, and scalability. By using generative AI, professionals can reduce costs, increase output, and improve the quality of their work, driving growth and resilience.

Pasch's experience shows that the shift from resistance to adaptability allows us to redefine professional value, unlock new levels of productivity, and open doors to creative and strategic activities that were previously limited by time. Ultimately, the economics of AI adoption show that it is not just a tool to maintain relevance, but an investment in future productivity and competitive advantage. Generative AI is here to multiply opportunities and enable individuals and organizations to drive growth, innovation, and economic resilience in a fast-paced, technology-driven world.

Friday, 8 November 2024

Efficiency Trumps Nostalgia: Why AI-Enhanced Search is Not the "Death of Search"

By Sebaggala Richard

 

For those who are not yet familiar with generative AI, I will first provide a brief overview. Since the launch of ChatGPT in November 2022, the landscape of online information has changed dramatically. Generative AI tools such as ChatGPT are designed to generate human-like responses based on large amounts of training data. Initially, critics pointed out that these tools lacked real-time information and instead relied on static, pre-existing data, occasionally leading to what experts call "hallucinations": moments where the AI generates inaccurate or outdated responses on newer topics. However, recent advances from companies such as OpenAI, Perplexity and Google have begun to address these limitations by integrating real-time search capabilities into their AI systems.

I was inspired to write this article after reading Matteo Wong's recent article "The Death of Search," in which he addresses a sense of nostalgia — an emotional longing for the past — that some feel towards traditional search methods. Wong's article reflects a longing for the familiar process of trawling through pages of links and information, where users were in control of their search rather than having answers instantly curated by an AI. But is this move away from traditional search really a loss, or could it be seen as an overdue improvement that will help us become more efficient and cognitively empowered?

 

Steve Jobs famously likened computers to "bicycles for our minds," tools that amplify our natural abilities (Markoff, 2011). Taking this metaphor further, AI could be seen as a "jet engine" for our minds, propelling us beyond the limits of our cognitive abilities. Whereas traditional search required a lot of mental energy to sift through links and refine keywords, AI-powered search engines streamline this journey, allowing us to access synthesized, relevant information almost instantaneously. Rather than lament the transition, we should recognize that AI elevates the concept of search to a higher level of cognitive empowerment.

From an economic perspective, search is not dying; it is evolving. This change is a well-known feature of technological progress, comparable to the transition from slide rules to calculators. Just as calculators simplified arithmetic, AI search tools streamline information retrieval and allow users to focus on knowledge rather than the mechanics of finding it. Efficiency is a resource, and AI-powered search engines maximize its value by reducing redundant tasks and enabling deeper engagement with relevant data. Those who mourn traditional search overlook the greater potential that AI brings: users can invest their time in critical thinking, analysis and innovation.

Indeed, AI-powered search mirrors the natural evolution of efficiency in our digital age. Every technological advance, from calculators to statistical software, has sparked similar fears of "loss," but these tools have consistently allowed users to move from mundane tasks to more meaningful pursuits. Calculators did not "kill" math; they enhanced it. Similarly, AI-driven search does not "kill" search; it optimizes it and provides a richer, results-driven experience. Why should we resist this development when it allows us to work smarter, not harder?

Wong's article suggests that traditional search, with its keyword queries and lists of links, held intrinsic value. But this view clings to a process, not an outcome. AI search engines curate information with unprecedented precision, allowing users to bypass irrelevant links and go straight to what they need. This change does not mean that we are losing access to knowledge, but that we are accessing it more efficiently.

Let's also tackle the notion that AI is a "replacement" for discovery. When calculators became indispensable, we didn't mourn the loss of slide rules — we celebrated the new ease and accuracy they brought us. In the same way, AI search enables users to engage with knowledge; it does not replace that engagement. The idea that information loses value simply because it is delivered quickly is false; the speed is a testament to AI's ability to serve us better.

In an age of information overload, AI-driven search is essential. The economics of efficiency teaches us that our resources, especially time and cognitive energy, are finite. Why shouldn't we embrace a tool that helps us make the most of them? We should see the rise of AI-powered search not as the "death" of an old system, but as the birth of a smarter, more effective one. To see how these AI advancements are improving our daily information handling, here are practical examples of how AI-powered search tools like ChatGPT are streamlining tasks and empowering users.

Efficiency in information retrieval and research: Traditional search engines require sifting through countless links, but AI-powered search can deliver summarized answers directly. For students and researchers, AI search tools like ChatGPT provide instant summaries and explanations, making research a more analytical, less repetitive process.

Personalized and real-time assistance: AI search tools with features like real-time data access and personalized help make learning adaptable. Users can ask ChatGPT to explain new concepts or retrieve up-to-date information such as news or stock prices, simplifying knowledge acquisition and improving decision-making.

These examples show that AI-driven search not only increases efficiency, but also changes the way we interact with information, allowing us to focus on deeper insights and applications.

In conclusion, while I approach the future of AI with measured optimism, recent advancements offer the most compelling glimpse yet into what may lie ahead. The rapid refinement of these tools shows us how transformative AI can be, not just for routine tasks, but for reshaping entire fields, from research to education. For those who have not yet embraced these tools, now is the time. Early adopters will gain a significant advantage, and those who hesitate will find it increasingly difficult to keep up. In a world where technological inequality often separates the developed from the developing world, the widespread adoption of AI is not just about progress, but also about equity. By encouraging early and widespread adoption, we can help bridge this technological divide and create more equal opportunities. By harnessing the potential of AI now, we can all move towards a more connected, informed and empowered future.

Tuesday, 5 November 2024

Economics of Choice: Generative AI Anxiety and Choice Overload

By Sebaggala Richard


Over the past six months, I have had the opportunity to promote the use of generative AI (GenAI) to researchers and students across Africa, introducing platforms such as Avidnote, ChatGPT, Elicit, Gemini and others. These tools are transforming research and learning. They offer features that can streamline literature reviews, enhance collaborative research and even automate aspects of data analysis and writing. Yet despite the transformative power of GenAI, I have noticed that many people, even those expected to be tech-savvy (students, academic staff and professionals at universities), have little familiarity with the wide range of GenAI tools now available.

This lack of familiarity is not due to indifference. Rather, it is due to a mixture of AI anxiety and a feeling of being overwhelmed by the sheer number of tools available to choose from. Generative AI is developing at an astonishing rate. New applications appear almost daily, each with unique capabilities, price points and skill requirements. This rapid development creates a landscape of "choice overload," where the abundance of options can lead to confusion, hesitation and ultimately inaction. As I thought about this, I was reminded of the economics of choice, a field that examines how we make decisions when faced with a multitude of options, and the emotional and cognitive impact of too many choices. Understanding this concept is essential if we are to effectively combat the fears associated with the adoption of GenAI.

The economics of choice and the Jam study

In the field of behavioral economics, one of the most illustrative studies on choice overload is the Jam Study (2000) by Sheena Iyengar and Mark Lepper, "When Choice Is Demotivating: Can One Desire Too Much of a Good Thing?" In this experiment, the researchers presented shoppers with two different displays of jam: one with 24 options and one with just six. Although more people were attracted to the large display, far fewer of them made purchases than those who were presented with only six options. This study illustrates a paradox: more options attract us, but they also lead to decision paralysis, which decreases our satisfaction and increases the likelihood that we will abandon the decision altogether.

Today’s GenAI landscape mirrors the Jam study on a much larger scale. GenAI tools promise endless possibilities — from boosting productivity to revolutionizing education. But for new users, choosing between a dozen tools that all seem to perform similar functions can feel as overwhelming as looking at a wall full of jam jars. This overabundance often leads to GenAI anxiety, where users, especially in developing regions, feel paralyzed by choice and afraid of missing out on the “perfect” tool.

Choice overload and AI anxiety

Anxiety surrounding new technologies is not unique to artificial intelligence. Throughout history, every technological wave has brought with it a mixture of excitement and trepidation. From the printing press to the Industrial Revolution to the advent of the internet, humanity has often felt uneasy about the rapid changes these technologies bring. However, while previous advances primarily impacted mechanical or technical aspects of life, such as the efficiency of production or information processing, AI touches something much more profound: human intelligence itself.

Unlike previous innovations, GenAI, for example, doesn’t just automate physical tasks or process data faster. It emulates and improves cognitive tasks such as understanding language, generating creative content and even making recommendations based on user input. This interaction with human intelligence adds a layer of complexity and personal relevance that can increase anxiety.

Furthermore, the speed at which GenAI is evolving is unprecedented. When ChatGPT was launched in November 2022, it marked a milestone in public AI participation. Since then, the scale of GenAI applications has expanded at an astonishing pace, with new tools, features and integrations emerging almost daily. This rapid pace has heightened anxiety about AI, especially in developing regions where access to training and resources can be limited. It’s not just about getting used to new machines or a faster computer, but about coming to grips with an evolving ecosystem of tools that have far-reaching implications for research, business and even personal decision-making.

In developing regions, the fear of GenAI is compounded by unique contextual challenges. A digital divide, limited technical support and budget constraints make it difficult for users to freely explore and adopt new AI technologies. Many researchers and students I've spoken to feel intimidated when it comes to AI, not only because they are unfamiliar with the technology, but also because they fear choosing the "wrong" tool. This fear deepens when one considers how much time and money are required to subscribe to these tools and become familiar with them. The misconceptions people have about generative AI, along with ethical challenges such as plagiarism, biases, and hallucinations (incorrect or misleading results that AI models generate), often create confusion around the technology's reliability and ethical use.

Imagine a postgraduate student in Africa who wants to use GenAI for their research. They may have heard of tools like ChatGPT for generating text, Avidnote for collaborative writing, Elicit for synthesizing research findings and Gemini for image creation. But each of these tools has its own learning curve, skill requirements and subscription costs. Without clear guidance, a student can easily become overwhelmed and not know where to start or which tool is most useful for their work. This sense of paralysis, of being overwhelmed by choice, inhibits the adoption of GenAI and prevents users from fully realizing the transformative potential that generative AI offers for research and innovation.

In the context of developing regions, this combination of choice overload and fear is not just a minor inconvenience, but a significant barrier to progress. By addressing these challenges thoughtfully and providing targeted support, we can pave the way for greater adoption of GenAI and ultimately enable users to harness the full potential of this revolutionary technology.

Practical lessons for tackling choice overload

To counteract choice overload, we can be guided by a few economic principles. One of these is the concept of opportunity cost: the realization that every decision involves a trade-off. In the context of GenAI, this means recognizing that choosing one particular tool may mean forgoing another. Rather than treating this as a loss, we can see it as an opportunity to focus on the tools that provide the most immediate benefit and to set aside more complex options for later exploration.

For many of the individuals and organizations I advise, I start with a simple, effective strategy: list the tasks that take up most of your time. If you're a student, this might be reviewing class notes, grasping complex concepts, or organizing study materials. If you're a researcher, it might mean identifying the most time-consuming aspects of research, such as summarizing literature, managing citations, or analyzing data. Once these tasks are identified, we can explore how GenAI can help streamline them. By starting with specific, time-intensive tasks, users can develop an understanding of GenAI's capabilities and gradually acquire the skills they need to use these tools effectively.

This targeted approach aligns well with the Jam study's findings: narrowing the scope and focusing on immediate, high-impact solutions can help users navigate the overwhelming landscape of AI options. For example, a small business owner could start by implementing a single AI-powered planning tool that offers clear benefits without being overly complex. This approach allows them to experience the benefits of GenAI in a manageable way before moving on to more advanced applications.

Another useful strategy, inspired by the Harvard Business Review, is to focus on the problem, not the tool. Before choosing a GenAI platform, I advise users to identify their most pressing challenge. For example, a researcher might have trouble managing citations, while a student might need help understanding dense study materials. By starting with a concrete problem, you can avoid the distraction of endless features and instead focus on an AI application that provides an immediate, practical benefit.

Conclusion

For GenAI to realize its full potential in developing regions, simplifying the adoption process is crucial. Universities and organizations could put together "starter" toolkits tailored to specific use cases, such as research or small business management, so that users can experiment more easily without feeling overwhelmed. Training, peer-to-peer support from those who have already gained AI competence, guided introductions, and local tech support would also go a long way towards reducing the fear of GenAI and helping users gain confidence and competence in using these tools.

In summary, the economics of choice teaches us that while options can be empowering, too many choices can also discourage decision-making and leave potential untapped. As the GenAI landscape continues to expand, developing a mindful, problem-centered approach to tool selection will allow us to overcome choice overload, tackle the fear of AI head-on, and make GenAI a powerful ally in economic development in Africa.

 

Saturday, 2 November 2024

 The Cost of Ignoring Dissent: Critical Reflections on Uganda’s UCDA Debate

By Sebaggala Richard


In the recent discussions around the proposed dissolution of the Uganda Coffee Development Authority (UCDA), I have had the opportunity to listen to various stakeholders: government officials, parliamentarians, academics, farmers, leaders of churches and traditional institutions, and esteemed Ugandans. One might expect that such a wide range of perspectives would influence the government's stance. However, the prevailing posture seems to be a resolute determination to pass the National Coffee (Amendment) Bill, 2024, with little evidence of openness to alternative viewpoints. This observation prompts us to examine more deeply the nature of our political and policy-making processes.

James Buchanan and Gordon Tullock’s seminal work, The Calculus of Consent (2004), offers valuable insights into this phenomenon. The authors examine the complexity of collective decision-making and emphasize the importance of reconciling individual preferences with collective choices. They argue that decisions are only truly representative if the process takes into account different points of view and strives for consensus. In the context of the UCDA debate, the apparent exclusion of dissenting opinions raises questions about the inclusivity and legitimacy of the decision-making process.

A recurring theme among government officials and politicians has been a 'know-it-all' attitude that effectively sidelines opposing views. This behavior is consistent with the observations of C. Wright Mills in The Power Elite (1956), in which he notes that elite groups often assume that they possess superior insight, which they use as justification for ignoring dissenting popular opinion. While it is true that some degree of elitism is common in many countries and that elite consensus can be beneficial, over-reliance on elite decision-making can have unintended, even harmful, consequences. In the case of Uganda's coffee sector, this attitude is particularly worrying, as the decision at stake could significantly affect the livelihoods of nearly 10 million Ugandans who depend on the sector.

In a mature political environment, one would expect an issue of this magnitude to invite open, inclusive debate, ensuring that different viewpoints are not only heard, but actively considered in the formulation of policy. Interestingly, in many economics and political science courses, students are taught that "doing nothing" is sometimes the wisest policy response, especially when it comes to "pockets of effectiveness." The premise here is that if effective results are already being achieved in certain institutions, forcing change may inadvertently harm what is working well. The adage "better the devil you know" is particularly relevant here. It suggests that in some cases it may be wiser to preserve a structure that works than to risk its success with reforms that have a history of failure in many countries.

The government's persistent push to dissolve the UCDA despite significant opposition provides an insight into the dynamics of public choice theory. This theory holds that policy makers, much like individuals in markets, often act out of self-interest. This can lead to decisions in which political or personal goals take precedence over the common good. As a result, public policy becomes a commodity, a tool to promote individual goals rather than the common good. This commodification may explain the insistence on dissolving the UCDA, as policymakers may be influenced by factors that are not fully aligned with the broader public interest. When policy is commoditized in this way, it can have serious consequences: reduced policy effectiveness, increased risk of policy failure, and erosion of public trust. To address these problems, mechanisms are needed to align the incentives of policy makers with the public good, ensuring that decisions truly meet people's needs and aspirations.

An apt example of the consequences of policy commodification is the Structural Adjustment Programs (SAPs) of the 1980s and 1990s. Mandated by international financial institutions, these programs required developing countries, including Uganda, to undertake far-reaching rationalization measures such as the privatization and dissolution of state-owned enterprises. In Uganda, as in many other countries, the government implemented these reforms despite widespread public concern and opposition, often ignoring dissenting voices.

The implications of this approach were profound. While the SAPs achieved certain macroeconomic goals, they also brought with them significant social challenges. The rapid privatization and downsizing of public services led to job losses, reduced access to essential services and increased poverty. Today, the proposal to dissolve the UCDA as part of ongoing rationalization efforts is reminiscent of those past reforms. Various stakeholders, including farmers, church leaders, politicians, academics and traditional leaders, are expressing concern about the potential negative impact on the coffee sector and related livelihoods. The adage "experience is the best teacher" seems to be all but disregarded in this context. I have read and heard Ugandans who were involved in the SAP reforms express concerns rooted in the hardships and lessons they learned through those experiences. They recall the reasons for setting up agencies such as the UCDA as safeguards for vulnerable sectors. However, the government's steadfast adherence to this policy despite significant opposition raises important questions about the inclusivity and responsiveness of the decision-making process.

Looking at Uganda's experience with SAPs, it is clear that the exclusion of public input can lead to policies that are economically rationalized but do not serve the broader public interest. When policy decisions prioritize economic metrics woven into political interests at the expense of societal wellbeing, they risk alienating the public they are supposed to benefit, undermining trust and leading to unintended negative outcomes. To avoid repeating the mistakes of the past, it is imperative that the government engages in genuine dialogue with all stakeholders. This will ensure that policies are not only economically viable, but also socially just and reflect the needs and aspirations of the people they are intended to serve.

Institutional dynamics play a crucial role in shaping policy outcomes. They are often driven by the tendency of organizations to pursue policies that mirror those of their supposedly successful counterparts, a concept known as institutional isomorphism. In the case of Uganda, the drive to streamline public entities, including the UCDA, may have been influenced by external pressures, similar to the SAPs. However, history shows that blanket approaches can be problematic and often act like square pegs in round holes. They tend not to take into account local contexts and the unique economic, social and cultural forces at play.

Should we continue to uncritically follow external advice, disregarding our own history, experience and the valuable insights of those directly affected? Rationalization is of course essential — no one disputes that it is important to improve efficiency. But meaningful reforms need to be tailored to Uganda’s unique socio-economic context and truly responsive to the voices of the people they will affect. The importance of contextualizing reforms cannot be overstated. Without it, we run the risk of repeating past mistakes that erode trust and compromise policy effectiveness.

The debate on the dissolution of the UCDA reminds us of the complexity of policy making and the importance of anchoring reforms both in the local context and in inclusive processes. In The Narrow Corridor: How Nations Struggle for Liberty, Acemoglu and Robinson (2019) argue that effective institutions emerge from a balance between state and society that evolves through constant negotiation and co-evolution, rather than from rigid, top-down reforms. They suggest that development is a journey along a "narrow corridor" where state and society must constantly adapt to each other. From this point of view, the dismantling of the UCDA should not be a unilateral decision. It must involve all stakeholders to ensure that it meets the needs of those who are most directly affected.

James Scott, in Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, highlights the dangers of centralized reform efforts that overlook local knowledge and informal systems. Scott's beautifully written book uncovers why states so often fail, sometimes catastrophically, in grand efforts to engineer their societies. He notes that development projects often fail when they ignore the practical ways in which people organize themselves and solve problems independently. In the case of Uganda, rushing to dissolve the UCDA without fully understanding the intricate relationships and practices within the coffee sector risks destroying the very systems that have driven the sector to the level of success now widely admired.

The foregoing reminds us that we need to reevaluate our decision-making frameworks to ensure that they are inclusive, transparent and aligned with the common good. By considering diverse perspectives and fostering genuine dialogue, Uganda can develop policies that not only address immediate challenges but also lay the foundation for a more equitable and prosperous future. With these considerations in mind, the government would do well to reflect on whether the dissolution of the UCDA is really in the best interest of the country. A thoughtful, inclusive approach could ultimately lead to a decision that is not only economically viable, but also socially responsible, one that takes into account the voices of those whose lives are most directly affected.