Don’t Blame the Tool: Rethinking Intellectual Effort and Ownership in the Age of AI
By Richard Sebaggala
I have spoken to countless academics, researchers, and students from diverse backgrounds who express a quiet but persistent anxiety about generative AI. They whisper their concerns, often overlooking a simple truth: AI is not magic. It is a statistical machine, a digital parrot that repeats patterns drawn from vast oceans of human data. It works by calculating probabilities. This is engineered brilliance, but brilliance rooted in prediction, not in original human thought.
As someone who has used and tested nearly every major AI tool on the market, I can say with confidence that generative AI is among the most powerful inventions of our time. It can translate, summarize, visualize, code, debug, compare alternatives, and analyze data—often faster and more accurately than we can. Yet despite this, many still ask: if machines can write, where does that leave us? Some feel displaced, sidelined, or even obsolete. But this anxiety often misses the mark. AI, no matter how advanced, is still just a tool. It is no different in principle from the tools we embraced long ago without hesitation.
There is a paradox worth reflecting on. We do not doubt the validity of a graph produced in Stata, nor do we question the output of SPSS when it generates regression results. Those of us in economics and the social sciences have always relied on statistical software to process data and help us see patterns we could not compute by hand. We have learned the syntax, interpreted the output, and confidently reported our findings. The real skill lies in knowing the command and making sense of the results. Why then do we hesitate when ChatGPT helps us structure an email, brainstorm ideas for a project, or draft a first version of a research abstract?
Part of the hesitation lies in what writing represents. Unlike regression or visualization, writing has always been treated as sacred, almost inseparable from human intellect and creativity. But writing is not simply typing words. It is thinking, choosing, constructing, and editing. The act of prompting—framing a question, guiding an argument, anticipating an answer—is itself a form of intellectual labor. Every useful response from AI begins with a human spark. The person crafting the prompt plays the same role as the one interpreting coefficients in a statistical table: they are the mind behind the tool. The machine’s output merely reflects the direction it was given.
When I began publishing essays on the economics of AI, some of my friends assumed I was not really doing the work. A few told me, half-jokingly, that it must be easy since I was "just using AI." What they missed is that the thoughts, the curiosity, the structure, and the point of view in each piece were mine. The tools I used helped me reach my conclusions faster, but the ideas still came from me. You can take away the interface, but not the thinking. What you read is the product of my experience and insight—not a machine’s.
Legal scholar Dr. Isaac Christopher Lubogo has explored the question of authorship in the age of AI. Should work produced with AI tools be credited to the machine or to the human using it? His view is clear. Authorship still lies with the person. AI is no different from a camera or a calculator. It enhances what we already know or imagine. It cannot create out of nothing. It can respond, imitate, and refine, but it does not dream, interpret emotion, or seek meaning in the way a human does.
Those of us trained in economics understand the phrase "garbage in, garbage out." A model, no matter how sophisticated, will only be useful if the assumptions behind it are sound. The same logic applies to AI. A vague prompt produces empty content. A well-formed prompt generates something coherent and often useful. But the credit for that value still belongs to the person who gave it purpose and direction.
Public suspicion toward AI today reminds me of historical fears about other technological advances. When the printing press emerged, it threatened the role of scribes. Calculators were said to ruin mental arithmetic. The internet was blamed for weakening critical thinking. And yet, all these innovations became instruments of empowerment. They liberated people from repetitive tasks and allowed them to focus on what truly matters. Generative AI is simply the next step in this long journey—if we dare to use it wisely.
What concerns me today is how good writing is being treated with suspicion. As journalist Miles Klee recently noted in Rolling Stone, tools like ChatGPT are trained on large datasets of professional writing. As a result, they tend to produce content that follows grammar rules quite well. Ironically, this precision has caused some readers to believe that anything too polished must have been generated by AI. In other words, a typo is now seen as proof of authenticity. If we continue down this path, we risk devaluing the effort and discipline that go into clear, compelling human writing.
And here lies the real danger. Once we start assuming that any well-crafted argument or clean paragraph is the work of a machine, we erase the labor, thought, and voice behind it. We do not just insult the writer. We also erode the principle that effort matters.
Thinking that AI will replace human intelligence is as mistaken as crediting a hammer for building a house. The tool amplifies our ability, but it does not imagine the blueprint. ChatGPT may help us write faster, but it cannot choose our ideas or shape our insights. It is still the person behind the screen who makes the final decision. If we learn to treat AI as an assistant rather than an author, we can start to see it more clearly for what it is—a tool, not a threat.
This is especially important for Africa. For too long, we have remained consumers of technology rather than innovators. Now is the time to change course. We can either watch others master this new wave or take our place in it. We can use AI to strengthen our research, refine our business strategies, and tell our own stories better. But we must first stop viewing the technology with suspicion and start seeing it as a partner in progress.
We already trust Excel for financial models. We do not feel guilty using Stata or R for statistical analysis. Grammarly has become a standard tool for editing. Prompting an AI to help us write or brainstorm should be no different. When used responsibly, it becomes a legitimate part of the thinking process.
History shows us that the future belongs to those who adapt. The pen did not disappear with the rise of the typewriter, and the typewriter did not vanish with the arrival of the keyboard. Each new medium simply extended our ability to express ourselves. AI is the latest medium. It will not speak for us, but it can help us speak more clearly—if we choose to use it that way.
So do not blame the tool. It is only as good as the hand—and the mind—that guides it.