The arrival of generative AI tools like ChatGPT is changing the way we teach and practise biostatistics and epidemiology. Tasks that once took hours, like coding analyses or searching for information, can now be completed within minutes by simply asking the right questions. This development brings many opportunities, but it also brings new challenges. One of the biggest risks is that students may rely too much on AI without properly questioning what it produces.
In this new environment, our responsibility as educators must shift. It is no longer enough to teach students how to use AI. We must now teach them how to think critically about AI outputs. We must train them to question, verify and improve what AI generates, not simply accept it as correct.
Why critical thinking is important
AI produces answers that often sound very convincing. However, sounding convincing is not the same as being right. AI tools are trained to predict the most likely words and patterns based on large amounts of data. They do not understand the meaning behind the information they provide. In biostatistics and epidemiology, where careful thinking about study design, assumptions and interpretation is vital, careless use of AI could easily lead to wrong conclusions.
This is why students must develop a critical and questioning attitude. Every output must be seen as something to be checked, not something to be believed blindly.
Recent academic work also supports this direction. Researchers have pointed out that users must develop what is now called “critical AI literacy”, meaning the ability to question and verify AI outputs rather than accept them passively (Ng, 2023; Mocanu et al., 2023). Although the terms differ, the message is the same: critical thinking remains essential when working with AI.
How to train critical thinking when using AI
Build a sceptical mindset
Students should be taught from the beginning that AI is only a tool, not a source of truth. It should be treated like a junior intern: helpful and fast, but not always right. They should learn to ask questions such as:
What assumptions are hidden in this output? Are the methods suggested suitable for the data and research question? Is anything important missing?
Simple exercises, like showing students examples of AI outputs with clear mistakes, can help build this habit.
Teach structured critical appraisal
To help students evaluate AI outputs properly, it is useful to give them a structured way of thinking. A good framework involves five main points:
First, methodological appropriateness
Students must check whether the AI suggested the correct statistical method or study design. For example, if the outcome is time to death, suggesting logistic regression instead of survival analysis would be wrong.
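To make this concrete, here is a minimal sketch in Python, using the lifelines package on simulated data with hypothetical column names, of the appropriate choice: a Cox proportional hazards model for a time-to-event outcome, rather than the logistic regression an AI might wrongly suggest.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: follow-up time in months, death indicator,
# and a binary exposure. All column names are illustrative only.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "time_months": rng.exponential(scale=24.0, size=n),
    "died": rng.integers(0, 2, size=n),
    "exposure": rng.integers(0, 2, size=n),
})

# Time-to-event outcome: fit a survival model (Cox proportional
# hazards), not logistic regression, which would discard follow-up
# time and mishandle censored observations.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="died")
cph.print_summary()
```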
Second, assumptions and preconditions
Every method has assumptions. Students must identify whether these assumptions are mentioned and whether they make sense. If assumptions are not stated, students must learn to recognise them and decide whether they are acceptable.
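As one illustration, here is a short sketch of two routine assumption checks for a simple linear regression, using statsmodels and scipy on simulated data. The variables are invented for the example, and this is not a complete diagnostic workflow.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical data for a simple linear regression.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Two routine checks an AI-generated answer might omit:
# 1. Approximate normality of the residuals (Shapiro-Wilk test).
print("Shapiro-Wilk:", stats.shapiro(fit.resid))
# 2. Constant residual variance (Breusch-Pagan test).
print("Breusch-Pagan:", het_breuschpagan(fit.resid, X))
```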
Third, completeness and relevance
Students should check whether the AI output missed important steps, variables or checks. For instance, has the AI forgotten to adjust for confounding factors? Is stratification by key variables missing?
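The difference such an omission makes can be shown with a small simulated example: a crude logistic model versus one adjusted for a hypothetical confounder (age). The data and variable names below are invented purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cohort in which age confounds the exposure-outcome
# association: age drives both the exposure and the outcome.
rng = np.random.default_rng(1)
n = 500
age = rng.normal(60.0, 10.0, n)
exposure = (rng.normal(size=n) + 0.05 * (age - 60.0) > 0).astype(int)
risk = 1.0 / (1.0 + np.exp(-(-5.0 + 0.07 * age + 0.3 * exposure)))
outcome = (rng.uniform(size=n) < risk).astype(int)
df = pd.DataFrame({"age": age, "exposure": exposure, "outcome": outcome})

# Crude model: where an incomplete AI answer might stop.
crude = smf.logit("outcome ~ exposure", data=df).fit(disp=0)
# Adjusted model: includes the confounder the crude model ignores.
adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=0)
print("Crude OR:   ", np.exp(crude.params["exposure"]))
print("Adjusted OR:", np.exp(adjusted.params["exposure"]))
```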
Fourth, logical and statistical coherence
The reasoning must be checked for soundness. Are the conclusions supported by the results? Is there any step that does not follow logically?
Fifth, source validation and evidence support
Students should verify any references or evidence provided. AI sometimes produces references that do not exist or that are outdated. Cross-checking with real sources is necessary.
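One classroom aid, offered as a rough sketch only: a small Python helper that asks doi.org whether a cited DOI resolves at all. A resolving DOI still does not prove the reference says what the AI claims, so students must always open and read the source itself.

```python
import requests

def doi_resolves(doi: str) -> bool:
    """Rough existence check: does doi.org redirect for this DOI?

    A resolving DOI does not prove the reference supports the claim;
    it only screens out identifiers that do not exist at all.
    """
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=10)
    return resp.status_code in (301, 302, 303)

print(doi_resolves("10.1000/obviously.fake.doi"))  # expected: False
```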
By using these five points, students can build a habit of structured checking, instead of relying on their instincts alone.
Encourage comparison and cross-verification
Students should not depend on one AI output. They should learn to ask the same question in different ways and compare the answers. They should also check against textbooks, lectures, or real research papers.
Practise reverse engineering
One effective exercise is to give students an AI-generated answer with hidden mistakes and ask them to find and correct the errors. This strengthens their ability to read carefully and think independently.
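A sketch of what such an exercise might look like, with the planted mistakes annotated here for the instructor's answer key (in the student version the comments would be removed):

```python
# A deliberately flawed "AI-style" answer for a find-the-errors exercise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
treated = rng.integers(0, 2, size=50)   # binary outcome: recovered yes/no
control = rng.integers(0, 2, size=50)

# Planted error 1: a t-test applied to a binary outcome; a chi-squared
# or Fisher's exact test on the 2x2 table would be appropriate.
t_stat, p_value = stats.ttest_ind(treated, control)

# Planted error 2: the significance logic is inverted, so p > 0.05
# is reported as a significant difference.
if p_value > 0.05:
    print("Significant difference between groups")
```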
Make students teach back to AI
Another good practice is to ask students to correct the AI. After finding an error, they should write a prompt that explains the mistake to the AI and asks for a better answer. Being able to explain an error clearly shows true understanding.
Why logical thinking in coding and analysis planning remains essential
Although AI can now generate code and suggest analysis steps, it does not replace the need for human logical thinking. Writing good analysis plans and coding correctly both require structured reasoning. Without this ability, students will not know how to guide AI properly, how to spot mistakes, or how to build reliable results from raw data.
Logical thinking in analysis means asking and answering step-by-step questions such as:
What is the research question? What are the variables and their roles? What is the right type of analysis for this question? What assumptions need to be checked? What is the correct order of steps?
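One way to make this habit concrete, sketched below with entirely hypothetical field names and placeholder values, is to have students write the plan down as a structured template before any analysis code is generated:

```python
# A minimal, illustrative analysis-plan template; every field name and
# example value is a hypothetical placeholder to be filled per study.
analysis_plan = {
    "research_question": "Does exposure X increase the risk of outcome Y?",
    "variables": {
        "outcome": "Y (binary)",
        "exposure": "X",
        "confounders": ["Z1", "Z2"],
    },
    "analysis_type": "logistic regression (binary outcome, cohort study)",
    "assumptions_to_check": [
        "linearity of continuous covariates on the logit scale",
        "no strong multicollinearity",
        "adequate events per variable",
    ],
    "steps": [
        "describe the data",
        "check assumptions",
        "fit the crude model",
        "fit the adjusted model",
        "run sensitivity analyses",
        "report results",
    ],
}

for i, step in enumerate(analysis_plan["steps"], start=1):
    print(f"Step {i}: {step}")
```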
If students lose this skill and depend only on AI, they will not be able to detect when AI suggests inappropriate methods, forgets a critical step, or builds a wrong model. Therefore, teaching logical thinking in data analysis planning and coding must stay an important part of the curriculum.
Logical planning and good coding are not simply technical skills. They reflect the student’s ability to reason clearly, to see the structure behind the problem, and to create a defensible path from data to answer. These are skills that no AI can replace.
Ethical use of generative AI and the need for transparency
Along with critical and logical thinking, students must also be trained to use generative AI tools ethically. They must understand that using AI does not remove their professional responsibility. If they rely on AI outputs for any part of their work, they must check those outputs, improve them where needed, and take ownership of the final product.
Students should also be taught about data privacy. Sensitive or identifiable information must never be shared with AI platforms, even during casual exploration or practice. Responsibility for patient confidentiality, research ethics, and academic honesty remains with the human user.
Another important point is transparency. Whenever AI tools are used to assist in study design, data analysis, writing or summarising, this use should be openly declared. Whether in academic assignments, published articles or professional reports, readers have the right to know how AI was involved in shaping the content. Full and honest declaration supports academic integrity, maintains trust, and shows respect for the standards of research and publication.
Students should be guided to include a simple statement such as:
“An AI tool was used to assist with [describe briefly], and the final content has been reviewed and verified by the author.”
By practising transparency from the beginning, students learn that AI is not something to hide, but something to use responsibly and openly.
Building a modern curriculum
To prepare students for this new reality, we must design courses that combine three strands: training in critical thinking when evaluating AI outputs, training in logical thinking for building analysis plans and writing code, and training in the ethical use and transparent declaration of AI assistance.
Students should be given real-world tasks where they must plan analyses from scratch, use AI as a helper but not as a leader, check every output carefully, and justify every step they take. They should also be trained to reflect on the choices they make, and on how to improve AI suggestions if they find them weak or incorrect.
By doing this, we can prepare future biostatisticians and epidemiologists who are not only technically skilled but also intellectually strong and ethically responsible.
A new way forward
Teaching students to use AI critically is not just a good idea. It is essential for the future. In biostatistics and epidemiology, where errors can affect public health and policy, we must prepare a new generation who can use AI wisely without losing their own judgement.
The best users of AI will not be those who follow it blindly, but those who can guide it with intelligence, knowledge and ethical care. Our role as teachers is to help students become leaders in the AI age, not followers.
References
Ng, W. (2023). Critical AI literacy: Toward empowering agency in an AI world. AI and Ethics, 3(1), 137–146. https://doi.org/10.1007/s43681-021-00065-5
Mocanu, E., Grzyb, B., & Liotta, A. (2023). Critical thinking in AI-assisted decision-making: Challenges and opportunities. Frontiers in Artificial Intelligence, 6, Article 1052289. https://doi.org/10.3389/frai.2023.1052289
Disclaimer
This article discusses the responsible use of generative AI tools in education and research. It is based on current understanding and practices as of 2025. Readers are encouraged to apply critical judgement, stay updated with evolving guidelines, and ensure compliance with their institutional, professional, and ethical standards.