Sunday, 19 October 2025

 

Don’t Blame the Hammer: How Poor Use of AI Tools Reveals Deeper Competence Gaps

By Richard Sebaggala (PhD)

When Deloitte was recently exposed for submitting a government report in Australia filled with fabricated citations produced by Azure OpenAI’s GPT-4o, the headlines quickly became accusatory. Commentators framed it as yet another failure of artificial intelligence, a cautionary tale about machines gone wrong. However, that interpretation misses the essence of what happened. AI did not fail; people did. The Deloitte incident is not evidence that the technology is unreliable, but that its users lacked the skill to use it responsibly. Like every tool humanity has invented, artificial intelligence merely amplifies the quality of its handler. It does not make one lazy, careless, or mediocre; it only exposes those qualities if they are already present.

 

Generative AI is a mirror. It reflects the discipline, understanding, and ethics of the person behind the keyboard. The Deloitte report was not the first time this mirror has revealed uncomfortable truths about modern knowledge work. Many professionals, consultants, and even academics have quietly adopted AI tools to draft documents, write proposals, or summarise literature, yet few invest time in learning the principles of proper prompting, verification, and validation. When errors emerge, the reflex is to blame the tool rather than acknowledge the absence of rigour. But blaming the hammer when the carpenter misses the nail has never built a better house.

The fear surrounding hallucinations (AI’s tendency to produce plausible but false information) has become the favourite defence of those unwilling to adapt. Yes, hallucination remains a legitimate limitation of large language models. These systems predict language patterns rather than verify factual accuracy, and early versions of ChatGPT frequently produced citations that did not exist. Yet the scale of that problem has fallen sharply. Current models generate fewer hallucinations, and most can be avoided through simple measures: specifying reliable data sources, directing the model to cite only verifiable material, and performing a quick cross-check on references. The issue is not that AI cannot think; it is that many users do not.
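
That cross-check need not be laborious. As a minimal sketch (assuming Python with the requests library, and references that carry DOIs; entries without a DOI still need a manual search), a few lines against the public Crossref API will flag any citation that does not resolve to a real record:

import requests

def doi_exists(doi: str) -> bool:
    # Crossref returns HTTP 200 for a registered DOI and 404 otherwise.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical reference list; the citation label and DOI are placeholders.
references = {
    "Smith and Jones (2021)": "10.1000/placeholder123",
}

for citation, doi in references.items():
    status = "found in Crossref" if doi_exists(doi) else "NOT FOUND - check manually"
    print(f"{citation}: {status}")

A reference that fails such a check is not necessarily fabricated, but it has earned a closer human look.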

 

I saw this firsthand while revising a research manuscript with a co-author. Several references generated in earlier drafts (while we were expanding our theoretical background) turned out to be incomplete or fabricated. Instead of blaming the tool, we treated the discovery as a test of our own academic discipline: cross-verifying every citation through Google Scholar and refining our theoretical background until it met publication standards. The experience reinforced a simple truth: AI is not a substitute for scholarly rigour; it is a magnifier of it.

In the process, I also discovered that when one identifies gaps or errors in AI-generated outputs and explicitly alerts the system, it responds with greater caution and precision. It not only corrects itself but often proposes credible alternative sources through targeted search. Over time, I have learned to instruct AI during prompting to be careful, critical, and to verify every fact or reference it cites. This practice has consistently improved the quality and reliability of its responses, turning AI from a speculative assistant into a more disciplined research partner.
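
In practice, those standing instructions can be written directly into the prompt itself. The sketch below shows one way to do so with the OpenAI Python SDK; the wording of the instruction and the model name are my illustrative choices, not a prescribed recipe:

from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

# Illustrative instruction: demand verification and make admitted uncertainty acceptable.
system_instruction = (
    "Be careful and critical. Cite only sources you can verify, give full "
    "bibliographic details for each, and say 'I cannot verify this' instead "
    "of inventing a reference."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": "Summarise the evidence on AI and productivity, with verifiable references."},
    ],
)

print(response.choices[0].message.content)

The output still has to be verified by a human, but an instruction that makes admitted uncertainty acceptable encourages the more cautious behaviour described above.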

This pattern is not new. Each technological leap in the history of work has produced the same anxiety and the same blame. When calculators arrived, some accountants abandoned mental arithmetic. When Excel spread through offices, others stopped understanding the logic of their formulas. When search engines became ubiquitous, some students stopped reading beyond the first page of results. Every generation confronts a moment when a tool reveals that the real deficit lies not in technology but in human effort. Artificial intelligence is simply the latest example.

The responsible use of AI therefore depends on three habits that serious professionals already practise: clarity, verification, and complementarity. Clarity means knowing exactly what one is asking for: just as an economist designs a model with clear variables and assumptions, a user must frame a prompt with precision and boundaries. Verification requires treating every AI output as a hypothesis, not a conclusion, and testing it against credible data or literature. Complementarity is the understanding that AI is a collaborator, not a substitute. The most capable researchers use it to draft, refine, and challenge their thinking, while maintaining ownership of judgement and interpretation. Those who surrender that judgement end up automating their own ignorance.

Refusing to learn how to work with AI will not preserve professional integrity; it will only ensure obsolescence. Every major innovation, from the printing press to the spreadsheet, initially appeared to threaten expertise but ultimately expanded it for those who embraced it. What AI changes is the return on competence. It increases the productivity of skilled workers far more than that of the unskilled, widening the gap between the thoughtful and the thoughtless. In economic terms, it shifts the production function upward for those who know how to use it and flattens it for those who do not.
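
To put that last point in the simplest stylised terms (the notation is my own sketch, not a formal model from the literature), write a knowledge worker’s output as

Y_i = A(s_i) \cdot f(K_i, L_i), \qquad A'(s_i) > 0,

where s_i is the worker’s skill in using the tool, A(s_i) is the productivity term that AI augments, and f(K_i, L_i) captures everything else. If AI raises A by more at higher levels of s, output grows fastest for the skilled user, while for the unskilled the curve barely moves: the same tool, very different slopes.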

This has important implications for universities, firms, and public institutions. Rather than issue blanket bans on AI, they should integrate AI literacy into education, professional training, and policy practice. Students must learn to interrogate information generated by machines. Analysts must learn to audit AI-assisted reports before submission. Organisations must cultivate a culture in which AI use is disclosed rather than concealed. Using AI is not unethical; misusing it is.

The Deloitte episode will not be the last. Other institutions will repeat it because they see AI as a shortcut rather than an instrument of discipline. Yet the lesson remains clear: AI is not a threat to competence; it is a test of it. The technology does not replace understanding; it exposes whether understanding exists in the first place. Those who master it will multiply their insight and efficiency; those who misuse it will multiply their mistakes.

In truth, artificial intelligence has simply revived an old economic principle: productivity gains follow learning. The faster we acquire the skills to use these tools well, the more valuable our human judgement becomes. Blaming the hammer for the bent nail may feel satisfying, but it changes nothing. The problem is not the hammer; it is the hand that never learned how to swing it. Every correction, every verified reference, every disciplined prompt is part of that learning curve. Each moment of alertness – when we question an output, verify a citation, or refine an instruction – makes both the user and the tool more intelligent. The technology will keep improving, but whether knowledge improves with it depends entirely on us.
