Friday, January 9, 2026

IS YOUR RESEARCH ASSISTANT ACTUALLY SABOTAGING YOUR PAPER? THE HIDDEN RISK OF AI CHATBOTS

Mahtab Bashir
Islamabad

Experts from academia, technology, and policy have warned that the reflexive use of Artificial Intelligence (AI) chatbots is quietly undermining the integrity of research: despite their utility, these tools propagate serious inaccuracies, encourage intellectual complacency, and obscure the path to trustworthy scholarship.

A recent UK survey, the Higher Education Policy Institute’s 2025 Student Generative AI Survey, underscores a seismic shift in higher education, revealing that a staggering 92% of students now regularly use generative AI tools, a dramatic surge from 66% just a year prior.

This integration into daily academic life presents a pressing question for researchers and institutions alike: is AI a transformative boon for scholarship or a fundamental threat to intellectual rigour?

In conversations with this scribe, a broad coalition of professionals (educators, media figures, researchers, students, think tank officials, IT experts, and policymakers) shared a common fear. They worry that heavy reliance on AI-generated content could foster a generation skilled at compiling information but lacking the ability to analyse critically, synthesise concepts, or generate truly original ideas.

Dr. Tariq Banuri, former chairman of Pakistan’s Higher Education Commission (HEC) and an expert on climate change, stated that while AI is not new, the recent advent of Generative AI (GAI) tools marks a significant step toward human-like intelligence, or Artificial General Intelligence. He views this development as largely positive, though he acknowledges two categories of criticism: evaluative and apocalyptic.

Evaluative concerns, he explained, focus on the immediate difficulty of distinguishing between human creativity and machine-generated content. This poses challenges in areas like grading academic work or verifying digital evidence. His solution is to adapt our evaluation methods rather than reject the technology.

The apocalyptic criticism, while currently speculative, warns of a future where super-intelligent machines could surpass and endanger humanity, akin to dystopian science fiction. He noted this as a potential long-term issue but not an immediate reality.

Dr. Banuri described AI as a "stimulus technology" capable of boosting economic growth, productivity, and competitiveness. For Pakistan, he recommended two key policy responses: first, to stop disruptive internet shutdowns that drive businesses abroad, and second, to stimulate demand for GAI services, particularly in data generation and privacy.

While the private sector will naturally adopt GAI for efficiency, the government should modernise its own operations. Although some departments already use AI chatbots for drafting documents, resistance remains when it comes to enhancing transparency and reducing corruption, areas where Dr. Banuri urged proactive reform.

He emphasised that ethical guidelines for researchers, such as proper attribution of authorship, remain essential.

Dr. Fouzia Farooq, a TORCH Global Visiting Professor at the University of Oxford, UK, stated that acceptance of AI chatbots as research aids is essential for progress. She argued that instead of exhausting effort to prove the originality of their work, researchers must acknowledge the reality of AI and focus on showcasing their unique intellectual contribution beyond automated outputs.

She noted that the HEC has integrated AI detection into its anti-plagiarism policy, which has instilled caution among researchers. While this sets a baseline, Dr. Farooq believes regulatory measures can only go so far.

According to her, AI has revolutionised research methodologies, especially in fields like statistical analysis and data projection, but its integration into qualitative domains such as literature and art will take longer. The advent of AI, she said, has fundamentally shifted global research metrics, demanding an evolution in academic values and methods.

Dr. Farooq emphasised that with AI handling preliminary tasks, researchers should now focus on higher-order thinking and innovation. However, in societies like Pakistan, where research culture is underdeveloped, there is a risk of over-dependence: outsourcing entire projects to AI, a practice she called “very wrong.”

Her advice was to “make AI our slave, not our master,” acknowledging that while AI offers significant benefits, it also poses a danger of encouraging intellectual laziness, particularly in work-shy communities. 

A key challenge, she highlighted, is learning how to frame meaningful questions and interpret AI-generated answers critically. If the entire question-and-answer cycle becomes AI-mediated, research could become “lifeless and without experience.”

Dr. Ilhan Niaz, a senior professor at Quaid-i-Azam University, Islamabad, warned that for university students, AI chatbots have become an "ultimate cheat code." 

He cautioned that by allowing AI to generate assignments, leaving students only to "tweak" the output, we risk creating a generation that loses the fundamental skill of writing, a key activity for developing brainpower and critical thinking. This could trap students in a vicious cycle in which reduced intellectual effort leads to greater dependence on AI, progressively diminishing human analytical ability.

For academics, he observed, AI can similarly serve as a convenient tool for aggregating information and producing text that is then presented as original work. He described a dystopian vision of the near-future university: a place where professors use AI to create lectures, and students use AI to complete assignments, resulting in a hollow educational experience.

While acknowledging AI’s legitimate and powerful applications, such as in medical diagnostics and processing astronomical data, Dr. Niaz described it as the "lazy person’s dream come true" for everyday research and education. He raised concerns about the internet becoming flooded with indistinguishable AI-generated content.

His prescription is strict, sector-specific regulation. He argued that to prevent harm, AI use must be limited to fields where it demonstrably does more good than harm, such as medicine and data science, while being restricted in humanities disciplines like history and philosophy. He also called for formal "AI co-authorship" crediting mechanisms.

Dr. Niaz asserted that without treating AI as a carefully rationed public good aimed at collective well-being, sustainable and effective governance of the technology will be impossible. He noted that the HEC faces a significant challenge, as the current educational model is ill-equipped to adapt to AI’s disruptive presence.

Muhammad Humza Farooq, a faculty member of the IT department at Jazan University, Saudi Arabia, explained that because AI tools are now hard to spot and control, many universities are phasing out take-home essays. Instead, some are bringing back oral exams, where students have to talk through and defend their ideas right then and there.

The reason is straightforward: AI can write neat essays, but it can’t answer unexpected follow-up questions or show it really understands a topic when put on the spot. With an oral exam, it’s much tougher for students to use outside help.

He also pointed out that AI-detection software has proven unreliable, sometimes flagging original student work incorrectly. Face-to-face assessment removes this uncertainty by focusing on direct interaction.

Beyond academic integrity, supporters argue this method builds essential real-world skills, such as clear communication, critical thinking, and the ability to explain complex ideas on the spot.

Dr. Munawar Hussain, an expert in international relations and social media, noted that AI has introduced a degree of laziness into today's student population. In the past, students dedicated significant effort to tasks like reading books, analysing research papers, conducting interviews, and performing content analysis. Today, similar information is accessible with just one click.

He recognised the advantages: AI saves both time and money, minimising the need for extensive manual research. However, he highlighted serious downsides. First, the essential ability to work directly with raw data, verify facts, and discern patterns is being eroded. Second, researchers are growing less capable of producing original analysis, relying too heavily on AI-generated interpretations. Finally, although AI is useful for brainstorming early research ideas, excessive dependence on it curbs independent thinking.

Dr. Hussain summarised AI's influence as a "mixed bag", offering clear efficiencies while posing significant threats to the cultivation of critical and analytical skills.

A postdoctoral researcher at Vrije Universiteit Brussel, Belgium, Dr. Faheem Siddiqui, argued that AI is fundamentally reshaping academic research. He believes that while the technology presents a mix of advantages and disadvantages, the overall benefits are greater.

He pointed to two primary ways academics can use AI effectively. First, by automating repetitive tasks to boost productivity and free up time for more complex, intellectually demanding work. Second, he highlighted the importance of understanding that AI tools are trained on existing data and can be prone to providing biased or user-pleasing answers. Mastering "prompt engineering", the skill of crafting precise inputs, is therefore crucial for obtaining objective and useful results from these systems.
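
To make Dr. Siddiqui’s point concrete, here is a minimal, hypothetical sketch in Python of what “crafting precise inputs” can look like: the same request posed once vaguely and once with an explicit role, task, constraints, and answer format. Every name and constraint below is invented for illustration; no particular chatbot or API is assumed.

    # A minimal, hypothetical sketch of prompt engineering: the same request
    # posed vaguely, then assembled from explicit parts. No real chatbot API
    # is called; the helper only builds the text a researcher would submit.

    VAGUE_PROMPT = "Tell me about my survey results."

    def build_structured_prompt(role, task, constraints, answer_format):
        """Combine a role, a task, explicit constraints, and a required
        answer format into one precise prompt string."""
        constraint_lines = "\n".join(f"- {c}" for c in constraints)
        return (
            f"You are {role}.\n"
            f"Task: {task}\n"
            f"Constraints:\n{constraint_lines}\n"
            f"Answer format: {answer_format}"
        )

    STRUCTURED_PROMPT = build_structured_prompt(
        role="a statistician auditing survey data",
        task="List the three largest year-on-year changes in the table below.",
        constraints=[
            "Cite the exact rows you relied on.",
            "Reply 'insufficient data' rather than guessing.",
            "Do not tailor conclusions to please the requester.",
        ],
        answer_format="a numbered list, one finding per line",
    )

    print(STRUCTURED_PROMPT)  # the engineered prompt, ready to submit to a chatbot

The explicit constraints in the second form are what push back against the biased, user-pleasing answers he warns about.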

Dr. Siddiqui noted that AI is already widely encouraged in academia for brainstorming and drafting literature reviews and is heavily used for generating code. In education, while not yet mainstream, some forward-looking programmes are using AI to personalise learning.

He offered clear guidance for responsible use: researchers should utilise AI as an aid in their work and publications, but they must maintain ultimate responsibility for their final output. He warned against using AI in the peer review process, arguing that removing the essential human judgment from evaluating new research could be detrimental to scientific progress in the long term.

Usman Farooq, an Islamabad-based Information Technology expert, characterised AI as an "indispensable research assistant" for academics. For students and junior researchers, it functions as a "tireless, instant co-pilot," streamlining tasks like literature reviews, paper summarisation, coding, and idea generation, thereby accelerating workflows and clarifying complex subjects.

However, this efficiency is not without significant pitfalls. Farooq pointed to AI's capacity to "hallucinate," generating plausible yet entirely fabricated citations and information, which jeopardises the integrity of academic work. He further cautioned that excessive dependence on AI risks corroding the deep analytical reasoning that is the hallmark of genuine scholarship, potentially yielding results that are "superficially competent but intellectually hollow."

Dr. Saeed Ahmed Minhas, a media practitioner and expert on governance, AI, and security, acknowledged that AI has dramatically increased research speed, enabling faster literature reviews, data synthesis, and methodological testing, while boosting overall productivity.

However, he warned that without proper oversight, its unchecked use threatens to produce superficial research and standardised thinking, and to degrade critical analysis. The core problem, he argued, is not AI but the lack of accountability in how it is used. For society to truly benefit, regulatory bodies like the HEC must set strict standards for quality, transparency, and intellectual ownership, similar to existing plagiarism policies.

He emphasised that AI must serve strictly as an assistant in scholarly work, not as a replacement for human judgment. Institutions should implement clear policies requiring the disclosure of AI use, limiting its role to technical tasks like language editing or data organisation, and strictly prohibiting AI from constructing arguments or generating theories without human verification.

The HEC, he suggested, could take a leading role by developing AI-audit systems, training reviewers to identify AI-generated content, and embedding ethical guidelines into research assessment.

Dr. Minhas advocated for a hybrid scholarly model, where human researchers lead theoretical development, fieldwork, and critical interpretation, supported by AI for efficiency and scalability. He insisted that peer review, archival rigour, and methodological integrity must remain human-centred processes, with AI acting only as a tool to augment, not replace, research skill. This balanced approach, he said, maintains intellectual depth while adapting to new technologies.

Experts agreed that AI is a transformative but double-edged tool for research whose benefit depends entirely on human oversight. Moving forward, researchers must master this new skill, approaching AI with both excitement and sharp scrutiny, balancing its power with indispensable human judgment.

The writer is a journalist based in Islamabad and holds an MPhil in International Relations from Quaid-i-Azam University.
