  • Artificial Intelligence and the Purpose of Knowledge

    As someone who works in education, I often think about how AI is changing the way we learn and teach. Artificial intelligence has become part of our daily routine, from helping us write to generating art or analysing data. It makes things faster and more convenient, but I sometimes wonder if it also makes us forget what being human really means. Professor Osman Bakar, in his recent essay Artificial Intelligence and the Future of Creative Thinking: A Reflection from Islamic Perspective (2025), raises the same concern. He reminds us that the question is not how powerful AI can become, but how wisely we decide to use it.

    He writes that AI, like all forms of knowledge, carries both benefit and harm. It can stimulate creativity and make learning more accessible, but it can also weaken our capacity for deep thought, especially when we let machines do the thinking for us. He shares Sweden’s experience of moving education from printed textbooks to digital tools, which coincided with a decline in reading comprehension and overall student well-being. The lesson is clear: technology is useful, but it is not neutral. It shapes how we think and who we become.

    In Islam, knowledge is also never neutral. The Prophet Muhammad (peace be upon him) taught us to seek ‘ilm nafi‘, or beneficial knowledge, and to seek refuge from knowledge that brings no benefit. This means that knowledge becomes valuable only when it improves the human being, both morally and spiritually. Professor Osman argues that AI should be guided by this same principle. It must help us grow in wisdom and compassion, not just in productivity or speed.

    He also reminds us to keep AI in its proper place. The machine can process information, but it cannot determine what is good or right. Only humans, guided by intellect (‘aql) and spirit (ruh), can make that judgment. AI should therefore assist us in developing creativity and critical thinking, not replace them. If we rely too heavily on technology to think for us, we risk losing our sense of purpose and accountability.

    Another point he makes is about balance. While digital tools can enrich education, they should not completely replace traditional and physical forms of learning. Reading a printed book, having a real conversation, or reflecting quietly on what we have learned are still vital experiences that shape our character. Over-digitalisation may make learning more efficient, but it can also make it shallow. Without space for empathy, humility, and reflection, education loses its human soul.

    The heart of Professor Osman’s idea is the unity between intellect and spirituality. True creativity, he says, happens when the mind and the soul work together. Thinking without spirituality becomes cold and mechanical. Spirituality without thinking can become blind and directionless. When both are integrated, creativity becomes meaningful, ethical, and transformative. In that sense, AI can be a tool that helps us think better, as long as we use it with moral awareness and spiritual grounding.

    For Muslim educators, researchers, and students, this has real implications. We need to design AI applications that serve higher goals. AI should help us address issues that truly matter, such as improving public health, promoting justice, caring for the environment, and nurturing compassion. It should not exist simply to make us faster or wealthier. Ethical principles drawn from maqasid al-shariah (the objectives of Islamic law) should guide how we create and use technology, ensuring it protects life, intellect, faith, lineage, and property.

    At the end of his essay, Professor Osman quotes a hadith stating that the world will not end until no one remembers God. It is a profound reminder that remembrance of the Divine is the foundation of human existence. Without that remembrance, all our progress loses meaning. In the same way, if AI advances but humanity forgets its spiritual purpose, we will end up with brilliant machines and empty hearts.

    Perhaps the real question for our time is not how intelligent AI can become, but how wise we can remain while using it. Technology will continue to evolve, but our task is to ensure it serves what is good, just, and beneficial. As Professor Osman beautifully reminds us, knowledge must be both true and good. AI, too, must follow that path.

    So, as generative tools like ChatGPT become part of our daily thinking and writing, how can we really adapt them to nurture not only intelligence but also conscience and compassion?

    Reference

    Bakar, O. (2025). Artificial intelligence and the future of creative thinking: A reflection from Islamic perspective. In The Muslim 500, 2025 Edition. The Royal Islamic Strategic Studies Centre. https://themuslim500.com/2025-edition/guest-contributions-2025/artificial-intelligence-and-the-future-of-creative-thinking-a-reflection-from-islamic-perspective/

  • Training Critical Thinking and Logical Thinking in the Age of AI for Biostatistics and Epidemiology

    The arrival of generative AI tools like ChatGPT is changing the way we teach and practise biostatistics and epidemiology. Tasks that once took hours, like coding analyses or searching for information, can now be completed within minutes by simply asking the right questions. This development brings many opportunities, but it also brings new challenges. One of the biggest risks is that students may rely too much on AI without properly questioning what it produces.

    In this new environment, our responsibility as educators must shift. It is no longer enough to teach students how to use AI. We must now teach them how to think critically about AI outputs. We must train them to question, verify and improve what AI generates, not simply accept it as correct.

    Why critical thinking is important

    AI produces answers that often sound very convincing. However, sounding convincing is not the same as being right. AI tools are trained to predict the most likely words and patterns based on large amounts of data. They do not understand the meaning behind the information they provide. In biostatistics and epidemiology, where careful thinking about study design, assumptions and interpretation is vital, careless use of AI could easily lead to wrong conclusions.

    This is why students must develop a critical and questioning attitude. Every output must be seen as something to be checked, not something to be believed blindly.

    Recent academic work also supports this direction. Researchers have pointed out that users must develop what is now called “critical AI literacy”, meaning the ability to question and verify AI outputs rather than accept them passively (Ng, 2023; Mocanu, Grzyb, & Liotta, 2023). Although the terms differ, the message is the same: critical thinking remains essential when working with AI.

    How to train critical thinking when using AI

    Build a sceptical mindset

    Students should be taught from the beginning that AI is only a tool. It is not a source of truth. It should be treated like a junior intern: helpful and fast, but not always right. They should learn to ask questions such as:

    What assumptions are hidden in this output?
    Are the methods suggested suitable for the data and research question?
    Is anything important missing?

    Simple exercises, like showing students examples of AI outputs with clear mistakes, can help build this habit.

    Teach structured critical appraisal

    To help students evaluate AI outputs properly, it is useful to give them a structured way of thinking. A good framework involves five main points:

    First, methodological appropriateness

    Students must check whether the AI suggested the correct statistical method or study design. For example, if the outcome is time to death, suggesting logistic regression instead of survival analysis would be wrong (a short code sketch after these five points makes this check concrete).

    Second, assumptions and preconditions

    Every method has assumptions. Students must identify whether these assumptions are mentioned and whether they make sense. If assumptions are not stated, students must learn to recognise them and decide whether they are acceptable.

    Third, completeness and relevance

    Students should check whether the AI output missed important steps, variables or checks. For instance, has the AI forgotten to adjust for confounding factors? Is stratification by key variables missing?

    Fourth, logical and statistical coherence

    The reasoning must be checked for soundness. Are the conclusions supported by the results? Is there any step that does not follow logically?

    Fifth, source validation and evidence support

    Students should verify any references or evidence provided. AI sometimes produces references that do not exist or that are outdated. Cross-checking with real sources is necessary.

    By using these five points, students can build a habit of structured checking, instead of relying on their instincts alone.
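
    To make the first of these checks concrete, here is a minimal sketch in Python, assuming the pandas and lifelines libraries and an invented toy dataset, of the correction a student should be able to make when an AI suggests logistic regression for a time-to-death outcome.

    ```python
    # Minimal sketch: the outcome is time to death, so a survival model,
    # not logistic regression, is the appropriate choice.
    # The data below are invented purely for illustration.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "time": [2.0, 5.1, 3.4, 7.2, 1.1, 6.0, 4.3, 2.9],  # follow-up (years)
        "event": [1, 0, 1, 1, 0, 1, 0, 1],                 # 1 = died, 0 = censored
        "treatment": [0, 1, 0, 1, 0, 1, 1, 0],
    })

    # Wrong (a typical AI suggestion): logistic regression on `event`
    # ignores follow-up time and treats censored patients as non-events.

    # Right: a Cox proportional hazards model uses both time and censoring.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()  # hazard ratio for treatment; assumptions still to check
    ```

    The point is not the syntax but the habit: before running anything, students should ask whether the model matches the type of outcome.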

    Encourage comparison and cross-verification

    Students should not depend on one AI output. They should learn to ask the same question in different ways and compare the answers. They should also check against textbooks, lectures, or real research papers.

    Practise reverse engineering

    One effective exercise is to give students an AI-generated answer with hidden mistakes and ask them to find and correct the errors. This strengthens their ability to read carefully and think independently.
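
    As one hedged illustration of such an exercise, the snippet below imitates a plausible-looking AI answer, with the planted flaws noted in comments for the instructor's answer key. The simulated data and variable names are invented for the exercise.

    ```python
    # Answer key for a reverse-engineering exercise: an AI-style analysis
    # with planted flaws for students to find. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 200
    age = rng.uniform(40, 80, n)
    smoker = rng.binomial(1, (age - 30) / 60)  # older people smoke more here
    p = 1 / (1 + np.exp(-(-7 + 0.08 * age + 0.5 * smoker)))
    died = rng.binomial(1, p)
    df = pd.DataFrame({"died": died, "age": age, "smoker": smoker})

    # Flaw 1: the data are modelled immediately; no descriptive tables or
    # missingness checks come first.
    model = smf.logit("died ~ smoker", data=df).fit()

    # Flaw 2: age drives both smoking and death in these data (a confounder),
    # yet the model adjusts for nothing.
    print(model.summary())

    # Flaw 3: the accompanying write-up calls the resulting odds ratio a
    # risk ratio, overstating the effect because death is not rare here.
    ```

    Students who can name each flaw, explain why it matters, and rewrite the model have demonstrated exactly the careful reading the exercise is meant to build.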

    Make students teach back to AI

    Another good practice is to ask students to correct the AI. After finding an error, they should write a prompt that explains the mistake to the AI and asks for a better answer. Being able to explain an error clearly shows true understanding.

    Why logical thinking in coding and analysis planning remains essential

    Although AI can now generate code and suggest analysis steps, it does not replace the need for human logical thinking. Writing good analysis plans and coding correctly require structured reasoning. Without this ability, students will not know how to guide AI properly, how to spot mistakes, or how to build reliable results from raw data.

    Logical thinking in analysis means asking and answering step-by-step questions such as:

    What is the research question?
    What are the variables and their roles?
    What is the right type of analysis for this question?
    What assumptions need to be checked?
    What is the correct order of steps?
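
    One way to train this is to have students write the plan itself as comments before any analysis code, then fill in the steps in the stated order. Below is a minimal sketch in Python; the trial, variables, and data are invented for illustration.

    ```python
    # Analysis plan written first, as comments; everything below is invented.
    # 1. Research question: does drug A lower systolic blood pressure vs placebo?
    # 2. Variables and roles: outcome = sbp_change (continuous);
    #    exposure = group; covariate = baseline_sbp.
    # 3. Analysis type: linear regression (continuous outcome, baseline-adjusted).
    # 4. Assumptions to check: linearity, normal residuals, constant variance.
    # 5. Order of steps: describe -> check missingness -> fit -> diagnose -> report.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 120
    group = rng.integers(0, 2, n)              # 1 = drug A, 0 = placebo
    baseline_sbp = rng.normal(150, 12, n)
    sbp_change = -4 * group - 0.1 * (baseline_sbp - 150) + rng.normal(0, 8, n)
    df = pd.DataFrame({"sbp_change": sbp_change, "group": group,
                       "baseline_sbp": baseline_sbp})

    print(df.describe())                       # step 1: describe
    print(df.isna().sum())                     # step 2: check missingness
    fit = smf.ols("sbp_change ~ group + baseline_sbp", data=df).fit()  # step 3
    print(fit.summary())                       # steps 4-5: diagnose and report
    ```

    Because the plan exists before the code, a student can tell at a glance when an AI suggestion answers a different question or skips a step.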

    If students lose this skill and depend only on AI, they will not be able to detect when AI suggests inappropriate methods, forgets a critical step, or builds a wrong model. Therefore, teaching logical thinking in data analysis planning and coding must stay an important part of the curriculum.

    Logical planning and good coding are not simply technical skills. They reflect the student’s ability to reason clearly, to see the structure behind the problem, and to create a defensible path from data to answer. These are skills that no AI can replace.

    Ethical use of generative AI and the need for transparency

    Along with critical and logical thinking, students must also be trained to use generative AI tools ethically. They must understand that using AI does not remove their professional responsibility. If they rely on AI outputs for any part of their work, they must check those outputs, improve them where needed, and take ownership of the final product.

    Students should also be taught about data privacy. Sensitive or identifiable information must never be shared with AI platforms, even during casual exploration or practice. Responsibility for patient confidentiality, research ethics, and academic honesty remains with the human user.

    Another important point is transparency. Whenever AI tools are used to assist in study design, data analysis, writing or summarising, this use should be openly declared. Whether in academic assignments, published articles or professional reports, readers have the right to know how AI was involved in shaping the content. Full and honest declaration supports academic integrity, maintains trust, and shows respect for the standards of research and publication.

    Students should be guided to include a simple statement such as:

    “An AI tool was used to assist with [describe briefly], and the final content has been reviewed and verified by the author.”

    By practising transparency from the beginning, students learn that AI is not something to hide, but something to use responsibly and openly.

    Building a modern curriculum

    To prepare students for this new reality, we must design courses that combine:

    Training in critical thinking when using AI outputs
    Training in logical thinking for building analysis plans and writing code
    Training in ethical use and transparent declaration of AI assistance

    Students should be given real-world tasks where they must plan analyses from scratch, use AI as a helper but not as a leader, check every output carefully, and justify every step they take. They should also be trained to reflect on the choices they make, and on how to improve AI suggestions if they find them weak or incorrect.

    By doing this, we can prepare future biostatisticians and epidemiologists who are not only technically skilled but also intellectually strong and ethically responsible.

    A new way forward

    Teaching students to use AI critically is not just a good idea. It is essential for the future. In biostatistics and epidemiology, where errors can affect public health and policy, we must prepare a new generation who can use AI wisely without losing their own judgement.

    The best users of AI will not be those who follow it blindly, but those who can guide it with intelligence, knowledge and ethical care. Our role as teachers is to help students become leaders in the AI age, not followers.

    References

    Mocanu, E., Grzyb, B., & Liotta, A. (2023). Critical thinking in AI-assisted decision-making: Challenges and opportunities. Frontiers in Artificial Intelligence, 6, Article 1052289. https://doi.org/10.3389/frai.2023.1052289

    Ng, W. (2023). Critical AI literacy: Toward empowering agency in an AI world. AI and Ethics, 3(1), 137–146. https://doi.org/10.1007/s43681-021-00065-5

    Disclaimer

    This article discusses the responsible use of generative AI tools in education and research. It is based on current understanding and practices as of 2025. Readers are encouraged to apply critical judgement, stay updated with evolving guidelines, and ensure compliance with their institutional, professional, and ethical standards.

  • Integrating AI in Healthcare

    Artificial intelligence (AI) is reshaping healthcare by offering remarkable capabilities in diagnostics, decision-making, and patient care. Recent research published in JAMA Network Open demonstrated that large language models (LLMs), such as ChatGPT, can outperform human physicians in diagnostic tasks under controlled scenarios (Hswen & Rubin, 2024). This potential has sparked enthusiasm, yet concerns about ethical implications and limitations remain prominent. For Muslims, integrating AI with a tawhidic (unity-based) approach offers an opportunity to align healthcare practices with a divine purpose, emphasising the spiritual connection AI cannot replicate.

    The capabilities of AI in healthcare

    AI systems excel in tasks requiring large-scale data analysis, offering diagnostic insights, synthesising medical literature, and recommending treatments. LLMs have even displayed a surprising ability to simulate empathy in patient interactions. In fact, recent studies revealed that AI-generated responses were rated as more empathetic than those of human physicians in some cases (Hswen & Rubin, 2024). This demonstrates AI’s potential as a tool to support clinicians in delivering more effective and thoughtful care.

    However, AI lacks the moral agency and contextual understanding of human doctors. Machines can sound competent and compassionate, but they do not possess the lived experience or ethical consciousness required for genuine patient engagement. For Muslim clinicians, this underscores the need to approach care with the understanding that true healing combines technical expertise with spiritual accountability.

    Concerns and challenges of AI in healthcare

    While AI shows great promise, it also introduces risks. One major issue is hallucination—where AI generates false but convincing information. For example, in the JAMA Network Open trial, doctors using AI often misinterpreted its outputs because they did not fully understand its limitations (Hswen & Rubin, 2024).

    Ethical concerns around patient privacy, algorithmic bias, and the potential for over-reliance on AI are also significant. Without careful integration, AI could erode critical clinical skills, reducing the human aspect of medicine to mere transactional interactions. For Muslims, this disconnect from the soul underscores why technology must serve as a complement to human care, rather than a replacement.

    Steps to prevent hallucination in AI responses

    To minimise the risks of relying on hallucinated AI outputs, healthcare professionals should:

    1. Cross-Reference Outputs: Validate AI-generated insights against trusted clinical resources such as PubMed or established guidelines (a short sketch after this list shows one way to begin such a check).

    2. Request Citations: Ensure AI provides sources for its claims and scrutinise their accuracy.

    3. Use Clinical Judgement: Apply personal expertise to evaluate the plausibility of AI recommendations.

    4. Collaborate: Seek input from peers or subject matter experts when faced with critical decisions.
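
    As a small, hedged illustration of the first step, the sketch below queries PubMed through the public NCBI E-utilities search endpoint to see what literature actually exists on a claim. The search term is a placeholder, and the retrieved papers still need to be read and appraised.

    ```python
    # Minimal sketch of cross-referencing a claim against PubMed via the
    # public NCBI E-utilities search endpoint. The query is a placeholder.
    import json
    import urllib.parse
    import urllib.request

    term = "large language models diagnostic accuracy randomized trial"
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                     "retmode": "json"}))

    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)["esearchresult"]

    print(result["count"], "PubMed records match the query")
    print("First PMIDs:", result["idlist"][:5])  # read and appraise these papers
    ```

    A match count is only a starting point; clinical judgement and expert reading remain the real check.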

    These measures align with both scientific rigour and the Islamic principle of amanah (trustworthiness), ensuring that AI enhances, rather than jeopardises, patient care.

    Tawhidic approaches in medicine

    For Muslims, healthcare is not merely a technical practice but a sacred trust that aligns with the concept of tawhid, or the unity of creation under Allah. This approach integrates technical competence with spiritual accountability, bringing patients, doctors, and the healthcare system closer to the Creator.

    AI, no matter how advanced, cannot replicate the soul. It lacks the ability to embody true compassion, understand divine accountability, or guide patients towards spiritual healing. Therefore, a tawhidic approach to healthcare demands the presence of human doctors who can balance technical expertise with compassion, faith, and a sense of purpose rooted in serving Allah.

    A collaborative future

    AI’s role in healthcare should focus on enabling, not replacing, human physicians. As Dr. Chen pointed out, the future belongs to those who learn how to use AI effectively rather than those who resist it (Hswen & Rubin, 2024). By integrating AI responsibly, doctors can reclaim time for deeper patient connections and spiritual engagement, fostering a holistic approach to care.

    For Muslims, this responsibility is even greater, as healthcare becomes a means of ibadah (worship) when guided by tawhidic principles. AI may assist with efficiency, but the soul of medicine lies in human hands. Only a doctor can truly embody competence and compassion, ensuring that care not only heals the body but also brings solace to the spirit.

    References

    Chen, J., Goh, E., & Hswen, Y. (2024). An AI chatbot outperformed physicians and physicians plus AI in a trial—what does that mean? JAMA Network Open. https://doi.org/10.1001/jamanetworkopen.2024.40969

    Hswen, Y., & Rubin, R. (2024). AI in medicine: Medical news and perspectives. JAMA.