Saturday, 21 February 2026


The Architect, Not the Builder: Preserving Scholarly Judgment in the Age of AI

By Sebaggala Richard (PhD)

Last week I spoke to a group of researchers and PhD students about artificial intelligence in scholarly writing and literature review. The mood in the room was not defensive; most participants accepted that AI tools are already reshaping academic work, and the discussion was marked by curiosity and cautious optimism. Beneath that openness, however, lay a quieter concern that went beyond plagiarism or hallucinations. What unsettled many was a more fundamental question: in embracing AI, might we gradually outsource the habits of thinking that define scholarship?


This concern deserves serious attention because the central risk is not misconduct but the slow erosion of intellectual ownership. For doctoral students and early-career researchers, research is not simply the production of text; it is the development of judgment. It requires working through ambiguity, weighing competing explanations, and refining arguments until they can withstand scrutiny. Large language models make many parts of this process faster by summarizing articles, suggesting theoretical connections, and interpreting statistical output with impressive fluency. The results often look polished, yet polish should not be confused with understanding.


During the training, I demonstrated how AI can assist with drafting search strings, organizing literature into themes, suggesting model specifications, and clarifying the presentation of regression results. The tools proved useful, but throughout the session I emphasized that acceleration does not alter the underlying logic of research. A literature review still begins with a clearly defined question and proceeds through a transparent search strategy, systematic screening, careful comparison of findings, and verification of sources. While AI can help structure these steps, it cannot determine what counts as relevant evidence or where the conceptual gap lies. Those judgments remain the responsibility of the researcher.
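
To make that division of labour concrete, here is the kind of Boolean search string an AI assistant might draft at the first step; the topic, terms, and regions shown are illustrative rather than the ones used in the training:

    ("mobile money" OR "digital financial services")
    AND ("household welfare" OR consumption OR poverty)
    AND (Uganda OR "East Africa")

A draft like this saves time, but it decides nothing on its own: the researcher still chooses which databases to run it against, which synonyms genuinely belong, and which inclusion and exclusion criteria will govern the screening.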


The same boundary becomes even more important in empirical work. In our example using survey data, AI was permitted to suggest possible dependent and independent variables, outline potential models, and draft statistical syntax. It could recommend robustness checks and help structure the results section. It did not, however, choose the identification strategy, justify causal claims, test assumptions, or determine the substantive meaning of the findings in context. Model choice requires theoretical grounding, causal inference demands methodological reasoning, and interpretation depends on domain knowledge. Delegating these decisions would weaken the integrity of the research.
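
To show where that boundary fell, here is a minimal sketch of the kind of statistical syntax AI was allowed to draft, written in Python with statsmodels rather than the exact code from the session; the dataset, variable names, and the fixed-effects robustness check are hypothetical:

    # A baseline specification and one robustness check, of the kind an
    # AI assistant might draft. All names here are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # hypothetical survey extract

    # Draft baseline model with heteroskedasticity-robust standard errors
    baseline = smf.ols("welfare_index ~ mobile_money + educ + hh_size",
                       data=df).fit(cov_type="HC1")

    # Suggested robustness check: add regional fixed effects
    with_fe = smf.ols("welfare_index ~ mobile_money + educ + hh_size + C(region)",
                      data=df).fit(cov_type="HC1")

    print(baseline.summary())
    print(with_fe.summary())

    # What this code cannot supply: whether mobile_money is plausibly
    # exogenous, which identification strategy would justify a causal
    # reading, and what the coefficients mean in context.

Drafting syntax of this kind is precisely the mechanical layer AI handles well; everything that makes the regression defensible sits outside the code.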


Responsible use therefore begins with clarity about where assistance ends and authorship begins. Before turning to AI, researchers would do well to ask not whether its use is permitted but whether it enhances their reasoning or replaces it. There is a meaningful difference between asking AI to critique a draft and asking it to write the draft itself, just as there is a difference between using it to uncover blind spots and using it to construct an argument from scratch. Although these approaches may appear similar from the outside, they cultivate very different intellectual habits.


The discussion also revealed a broader cultural dimension, particularly relevant in many African academic settings where struggle is often equated with learning and difficulty is treated as evidence of seriousness. When processes become faster or more efficient, suspicion sometimes follows, as if reduced effort necessarily implies reduced rigor. AI unsettles this assumption. The ability to map literature more efficiently or clarify statistical syntax quickly does not automatically diminish depth or weaken econometric understanding. Hardship is not a prerequisite for rigor.


Struggle has value when it produces insight, but it adds little when it is purely mechanical. Manually formatting references does not deepen theoretical reasoning, nor does repeating routine coding steps automatically strengthen econometric judgment. Spending hours constructing search strings does not guarantee conceptual clarity. Some forms of difficulty are intellectually formative, while others persist simply because they have long been part of academic practice. The aim is not to preserve difficulty for its own sake but to preserve active and disciplined thinking.


In practice, thoughtful use of AI can strengthen learning. During the workshop, once some mechanical aspects of literature searching were streamlined, participants were able to devote more attention to substantive questions, such as why findings differed across contexts, where theoretical tensions remained unresolved, and how to sharpen the articulation of their research gaps. Automation, in this sense, freed cognitive space for higher-level analysis. A similar pattern emerged in empirical writing, where AI’s suggestions about alternative specifications or potential weaknesses created room to focus more carefully on identification, assumptions, and interpretation, leaving the intellectual core of the exercise intact.


A constructive approach is therefore to think independently first by framing the research problem, interpreting results on one’s own, and sketching the structure of the argument without assistance. AI can then be used to expand and test that thinking by identifying weaknesses, proposing alternative explanations, or improving clarity. The final step requires taking full responsibility for the work by verifying every citation, checking every claim, and ensuring that the argument reflects genuine understanding. A simple test helps clarify ownership: if AI were unavailable, could you defend your research question, theoretical framework, model specification, identification strategy, and interpretation of findings? If the answer is yes, automation has supported the work without undermining it; if not, further reflection is required before it can be considered truly your own.


A doctoral degree is not a document production exercise but a process of intellectual formation. AI can make writing more efficient, yet it cannot substitute for judgment. History provides perspective: calculators, statistical software, and digital databases were all met with resistance when first introduced, each innovation reducing effort in certain tasks and prompting concerns about declining standards. Research did not deteriorate; it evolved, shaped less by the technology itself than by the norms governing its use.


AI does not eliminate the need for careful thinking; it reduces some of the mechanical burdens that surround it. Whether scholarship becomes more superficial or more sophisticated in this environment will depend less on the capability of AI and more on the discipline of those who use it. Before generating text, it is worth pausing to ask whether the tool is being used to deepen reasoning or to bypass it. Responsible use is not about preserving hardship but about preserving judgment, and judgment remains, as it always has been, a human responsibility.
