Tag: artificial intelligence

  • Artificial Intelligence and the Purpose of Knowledge

    As someone who works in education, I often think about how AI is changing the way we learn and teach. Artificial intelligence has become part of our daily routine, from helping us write to generating art or analysing data. It makes things faster and more convenient, but I sometimes wonder if it also makes us forget what being human really means. Professor Osman Bakar, in his recent essay Artificial Intelligence and the Future of Creative Thinking: A Reflection from Islamic Perspective (2025), raises the same concern. He reminds us that the question is not how powerful AI can become, but how wisely we decide to use it.

    He writes that AI, like all forms of knowledge, carries both benefit and harm. It can stimulate creativity and make learning more accessible, but it can also weaken our capacity for deep thought, especially when we let machines do the thinking for us. He shares Sweden’s experience of moving education from printed textbooks to digital tools, which coincided with a decline in reading comprehension and overall student well-being. The lesson is clear: technology is useful, but it is not neutral. It shapes how we think and who we become.

    In Islam, knowledge is also never neutral. The Prophet Muhammad (peace be upon him) taught us to seek ‘ilm nafi‘, or beneficial knowledge, and to seek refuge from knowledge that does not benefit. This means that knowledge becomes valuable only when it improves the human being, both morally and spiritually. Professor Osman argues that AI should be guided by this same principle. It must help us grow in wisdom and compassion, not just in productivity or speed.

    He also reminds us to keep AI in its proper place. The machine can process information, but it cannot determine what is good or right. Only humans, guided by intellect (‘aql) and spirit (ruh), can make that judgment. AI should therefore assist us in developing creativity and critical thinking, not replace them. If we rely too heavily on technology to think for us, we risk losing our sense of purpose and accountability.

    Another point he makes is about balance. While digital tools can enrich education, they should not completely replace traditional and physical forms of learning. Reading a printed book, having a real conversation, or reflecting quietly on what we have learned are still vital experiences that shape our character. Over-digitalisation may make learning more efficient, but it can also make it shallow. Without space for empathy, humility, and reflection, education loses its human soul.

    The heart of Professor Osman’s idea is the unity between intellect and spirituality. True creativity, he says, happens when the mind and the soul work together. Thinking without spirituality becomes cold and mechanical. Spirituality without thinking can become blind and directionless. When both are integrated, creativity becomes meaningful, ethical, and transformative. In that sense, AI can be a tool that helps us think better, as long as we use it with moral awareness and spiritual grounding.

    For Muslim educators, researchers, and students, this has real implications. We need to design AI applications that serve higher goals. AI should help us address issues that truly matter, such as improving public health, promoting justice, caring for the environment, and nurturing compassion. It should not exist simply to make us faster or wealthier. Ethical principles drawn from maqasid al-shariah (the objectives of Islamic law) should guide how we create and use technology, ensuring it protects life, intellect, faith, lineage, and property.

    At the end of his essay, Professor Osman quotes a hadith stating that the world will not end until no one remembers God. It is a profound reminder that remembrance of the Divine is the foundation of human existence. Without that remembrance, all our progress loses meaning. In the same way, if AI advances but humanity forgets its spiritual purpose, we will end up with brilliant machines and empty hearts.

    Perhaps the real question for our time is not how intelligent AI can become, but how wise we can remain while using it. Technology will continue to evolve, but our task is to ensure it serves what is good, just, and beneficial. As Professor Osman beautifully reminds us, knowledge must be both true and good. AI, too, must follow that path.

    So, as generative tools like ChatGPT become part of our daily thinking and writing, how can we really adapt them to nurture not only intelligence but also conscience and compassion?

    Reference

    Bakar, O. (2025). Artificial intelligence and the future of creative thinking: A reflection from Islamic perspective. In The Muslim 500, 2025 Edition. The Royal Islamic Strategic Studies Centre. https://themuslim500.com/2025-edition/guest-contributions-2025/artificial-intelligence-and-the-future-of-creative-thinking-a-reflection-from-islamic-perspective/

  • Statistics and Machine Learning in Public Health: When to Use What

    If you’re trained in epidemiology or biostatistics, you likely think in terms of models, inference, and evidence. Now, with machine learning entering the scene, you’re probably hearing about algorithms that can “predict” disease, “detect” outbreaks, and “learn” from data. But while ML offers exciting possibilities, it’s important to understand how it differs from classical statistical approaches—especially when public health decisions depend on more than just prediction.

    Let’s explore how statistics and machine learning differ—not just in technique, but in mindset, use case, and the all-important concept of causality.

    How They Think

    Statistics and machine learning begin with different goals.

    Statistics is built to answer questions like: Does exposure X cause outcome Y? It aims to explain relationships, test hypotheses, and estimate effect sizes. It relies on assumptions—like randomness, independence, and model structure—to ensure that findings reflect the real world, not just the sample at hand.

    Machine learning, in contrast, asks: Given this data, what outcome should I predict? It doesn’t aim to explain but to perform—minimising error and maximising predictive accuracy, even if the relationships are complex or difficult to interpret.

    That’s a major shift. While statistics seeks truth about the population, ML seeks performance in unseen data.

    How They Work

    Statistical methods are grounded in probability theory and estimation. They involve fitting models with interpretable parameters: coefficients, confidence intervals, p-values. The analyst usually specifies the form of the model in advance, guided by theory and prior evidence.
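
    To make that concrete, here is a minimal sketch of an inference-oriented workflow in Python using statsmodels (one possible choice of tool), with simulated data and invented variable names (exposure, age, outcome). The point is the kind of output it produces: effect estimates with confidence intervals and p-values.

    ```python
    # A minimal sketch: classical logistic regression for inference.
    # The dataset and variable names (exposure, age, outcome) are invented
    # for illustration; substitute your own epidemiological data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 1000
    df = pd.DataFrame({
        "exposure": rng.binomial(1, 0.3, n),   # e.g. smoking yes/no
        "age": rng.normal(50, 10, n),          # a confounder
    })
    # Simulate an outcome whose log-odds depend on exposure and age
    logit = -3 + 0.8 * df["exposure"] + 0.03 * df["age"]
    df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Fit a pre-specified model: the analyst chooses the functional form
    X = sm.add_constant(df[["exposure", "age"]])
    model = sm.Logit(df["outcome"], X).fit(disp=False)

    # Interpretable outputs: effect estimates with uncertainty
    print(model.summary())                     # coefficients, CIs, p-values
    print(np.exp(model.params["exposure"]))    # odds ratio for the exposure
    ```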

    Machine learning models are trained through algorithms, often using large datasets and iterative techniques to optimise performance. Models like decision trees, support vector machines, and random forests find patterns without assuming linearity or distribution. You don’t always know what the model is “looking at”—you just know if it works.
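
    By contrast, a prediction-oriented workflow looks something like the sketch below (scikit-learn, synthetic placeholder data): no model form is specified in advance, and success is judged by performance on held-out data rather than by effect estimates.

    ```python
    # A minimal sketch: a random forest judged purely on held-out
    # predictive performance. Data here are synthetic placeholders.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=2000, n_features=30,
                               n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # No model form specified in advance: the algorithm learns the patterns
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_train, y_train)

    # Evaluation is about performance on unseen data
    pred = clf.predict_proba(X_test)[:, 1]
    print("Held-out AUC:", roc_auc_score(y_test, pred))
    ```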

    There are also hybrid approaches—like regularised regression, ensemble models, and causal forests—that blend the logic of both.

    What They Do Well

    Statistics excels in clarity and rigour. It tells you not just whether something matters, but how much, and with what certainty. It’s ideally suited for:

    • Identifying risk factors
    • Estimating treatment effects
    • Designing policy interventions
    • Publishing findings with transparent reasoning

    Machine learning is best when:

    • Relationships are non-linear or unknown
    • You have many predictors and large datasets
    • You need fast, repeatable predictions (e.g. real-time risk scoring)
    • The goal is performance, not explanation

    In short, statistics helps you understand; ML helps you predict.

    Where They Fall Short

    Statistics can break down when data gets messy—especially when model assumptions are violated or the number of variables overwhelms the number of observations. It also isn’t built to handle unstructured data like images or free text.

    Machine learning’s biggest limitation is often overlooked: it doesn’t care about causality. A model may predict hospitalisation risk with 95% accuracy, but it doesn’t tell you why. It might rely on variables that are associated, not causal. Worse, it might act on misleading proxies that look predictive but don’t offer actionable insight.

    This matters deeply in public health. Predicting who dies is not the same as preventing death. Models that ignore cause can lead to misguided interventions or unjust decisions.

    Another weakness of ML is limited interpretability. Many powerful algorithms (like gradient boosting or neural networks) are “black boxes”—hard to explain and harder to justify in policy decisions. While newer tools like SHAP can improve transparency, they still fall short of the clarity offered by traditional statistical models.
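
    For illustration, here is a minimal sketch of how a SHAP explanation might be attached to a tree-based model. It assumes the shap package is installed, uses synthetic placeholder data, and is meant to show the workflow rather than a recommended analysis.

    ```python
    # A minimal sketch of post-hoc interpretability with SHAP.
    # Assumes the shap package is installed; data and model are
    # synthetic placeholders.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # TreeExplainer attributes each prediction to feature contributions
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])

    # Global view: which features drive the model's predictions overall
    shap.summary_plot(shap_values, X[:100])
    ```

    Even with such tools, the attributions describe what the model uses, not what causes the outcome, so they complement rather than replace statistical reasoning.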

    When to Use Each

    Use statistics when:

    • Your primary goal is inference or explanation
    • You need to estimate effects or support causal conclusions
    • You’re informing policy or making ethical decisions
    • You want results that are interpretable and reportable

    Use machine learning when:

    • Your primary goal is prediction or classification
    • You’re handling high-dimensional or complex data
    • You need scalable automation (e.g. early warning systems)
    • You can validate predictions with real-world data

    Most importantly, if causality matters, don’t rely solely on ML—use statistical thinking or causal ML techniques that explicitly model counterfactuals and assumptions.
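
    To show what “explicitly model counterfactuals” can look like in code, below is a toy sketch of the double machine learning idea: residualise both the treatment and the outcome on the covariates with flexible learners, then relate the residuals. It is illustrative only; real analyses should use dedicated implementations (such as those listed later in this post) and, crucially, defend the identifying assumptions.

    ```python
    # A toy sketch of the double machine learning idea. Illustrative only:
    # it assumes no unmeasured confounding, and data are simulated
    # placeholders with a known true effect of 2.0.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    n = 2000
    X = rng.normal(size=(n, 5))                   # covariates / confounders
    T = 0.5 * X[:, 0] + rng.normal(size=n)        # treatment depends on X
    Y = 2.0 * T + X[:, 0] + rng.normal(size=n)    # true effect of T is 2.0

    # Cross-fitted predictions of the outcome and the treatment from X
    y_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, Y, cv=5)
    t_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, T, cv=5)

    # Regress outcome residuals on treatment residuals to estimate the effect
    effect = LinearRegression().fit((T - t_hat).reshape(-1, 1), Y - y_hat)
    print("Estimated treatment effect:", effect.coef_[0])   # close to 2.0
    ```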

    What You Should Expect

    From statistics, expect:

    • Clear models with interpretable outputs
    • Transparent assumptions
    • The ability to test hypotheses and quantify uncertainty

    From machine learning, expect:

    • High performance with minimal assumptions
    • Useful predictions even when mechanisms are unknown
    • Some loss of interpretability (unless addressed deliberately)

    Just remember: good prediction doesn’t imply good understanding. And good models don’t always lead to good decisions—unless we interpret them wisely.

    A Path Forward for Epidemiologists and Biostatisticians

    Here’s the good news: your training in statistics and epidemiology is not a limitation—it’s your greatest asset. You already understand data, confounding, validity, and generalisability. You’re equipped to evaluate models critically and ask: Does this make sense? Is it actionable? Is it ethical?

    Start small. Try ML approaches that are extensions of what you know—like regularised logistic regression, decision trees, or ensemble methods. Explore tools like caret, tidymodels, or scikit-learn (a brief sketch follows the list below). And when you’re ready to dive deeper, look into causal ML methods like:

    • Targeted maximum likelihood estimation (TMLE)
    • Causal forests (grf)
    • Double machine learning (EconML)
    • DoWhy (for structural causal models)
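
    As one example of the “start small” advice above, here is a minimal sketch of L1-regularised (lasso) logistic regression in scikit-learn: the model is still a familiar logistic regression, but the penalty handles variable selection when predictors are numerous. The data are synthetic placeholders.

    ```python
    # A minimal sketch of a bridge between the two traditions:
    # L1-regularised logistic regression with the penalty strength
    # chosen by cross-validation. Data are synthetic placeholders.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, n_features=50,
                               n_informative=8, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(penalty="l1", solver="liblinear",
                             Cs=10, cv=5, random_state=0),
    )
    model.fit(X, y)

    # Sparse coefficients: most are shrunk to exactly zero
    coefs = model.named_steps["logisticregressioncv"].coef_[0]
    print("Non-zero predictors:", (coefs != 0).sum(), "of", len(coefs))
    ```

    Because the coefficients remain on the log-odds scale, the output is still interpretable in the way epidemiologists expect, which is what makes this a gentle first step.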

    The best analysts of the future won’t just be statisticians or ML engineers—they’ll be methodologically bilingual, able to switch between explanation and prediction as the question demands.

    Your role isn’t to replace one with the other, but to integrate both—so that public health remains not just data-driven, but wisely so.

  • Good and Evil of AI in Medicine: Where Is the Boundary?

    Artificial intelligence (AI) is rapidly transforming the field of medicine, offering unprecedented opportunities to improve healthcare delivery, diagnosis, and population health management. However, with its promise comes a risk of harm, particularly when AI systems are poorly designed, implemented without appropriate safeguards, or driven by commercial interests at the expense of public good. This paper explores what constitutes good and evil in medical AI, provides examples of both, and outlines ethical boundaries and practical steps to ensure that AI serves humanity.

    AI in medicine refers to systems designed to assist with tasks such as diagnosis, prognosis, treatment recommendations, and public health surveillance. The good in medical AI lies in its capacity to enhance human well-being, reduce inequalities, and improve healthcare efficiency. AI applications can support clinical decisions, automate routine tasks, and extend healthcare reach to underserved populations (Rajkomar, Dean, & Kohane, 2019). Conversely, the potential for evil emerges when AI contributes to harm, reinforces inequities, or undermines essential human values such as compassion, accountability, and justice. This harm may arise from biased algorithms, opaque decision-making processes, or commercial exploitation that prioritises profit over patient welfare.

    The Goods

    One of the clearest demonstrations of AI’s positive contribution to medicine is in the field of early disease detection. AI systems trained on medical images have been shown to accurately detect conditions such as diabetic retinopathy and tuberculosis. A pivotal study demonstrated that an autonomous AI system could safely and effectively identify diabetic retinopathy in primary care settings, enabling earlier referrals and potentially preventing vision loss (Abràmoff, Lavin, Birch, Shah, & Folk, 2018). In tuberculosis screening, AI-based chest X-ray interpretation tools have been used in high-burden countries to prioritise patients for further diagnostic testing, particularly in settings where human expertise is limited (Codlin et al., 2025). These applications help address gaps in healthcare access and reduce delays in diagnosis and treatment.

    AI has also supported public health surveillance, particularly during emergencies such as the COVID-19 pandemic. AI models combined data from health records, mobility patterns, and social media to predict outbreaks, identify hotspots, and inform targeted interventions. This contributed to more timely and effective public health responses and resource allocation (Bullock, Luccioni, Hoffmann, & Jeni, 2020).

    The Evils

    Despite these benefits, AI has also been linked to harms that can undermine trust and exacerbate health inequities. One of the most pressing concerns is algorithmic bias. AI systems trained on data that do not represent the diversity of patient populations may produce biased outcomes. For example, machine learning tools for dermatology developed primarily using images of lighter skin tones have been found to perform less accurately on darker skin. This can lead to missed or delayed diagnoses in patients from minority groups, reinforcing existing disparities (Adamson & Smith, 2018).

    Commercial exploitation of AI is another area of concern. The rush to monetise AI in medicine has sometimes led to the deployment of systems that are insufficiently transparent or accountable. Proprietary algorithms may operate as black boxes, with their decision-making processes hidden from both clinicians and patients. This opacity undermines informed consent and shared decision-making, and can make it difficult to challenge or review AI-driven recommendations (Char, Shah, & Magnus, 2018).

    Furthermore, there is a risk that excessive reliance on AI could erode the compassionate, human-centred aspects of healthcare. While AI can assist with routine tasks and reduce administrative burdens, it must not be seen as a replacement for human empathy and professional judgement. There is concern that as AI takes on a greater role, the patient-doctor relationship could become depersonalised, weakening one of the core foundations of medical practice (Panch, Szolovits, & Atun, 2019).

    Ethical Boundaries for Responsible AI

    To ensure that AI in medicine serves the common good rather than causes harm, clear ethical boundaries are needed. Transparency is essential. AI systems must be designed in ways that make their decision-making processes understandable and open to scrutiny. This is critical to maintaining trust, supporting informed consent, and enabling clinicians to integrate AI recommendations into their decision-making with confidence.

    Fairness must also be prioritised. Developers need to ensure that AI tools are designed to promote equity rather than exacerbate disparities. This involves using diverse training datasets, actively auditing algorithms for bias, and engaging with communities to understand their needs and perspectives. Bias mitigation should be a central part of AI development and deployment, not an afterthought.

    Accountability is another key principle. Developers, healthcare providers, and regulators share responsibility for ensuring that AI systems are safe, effective, and aligned with ethical principles. Regulatory frameworks should define standards for AI in healthcare and provide mechanisms for monitoring, evaluation, and redress when harm occurs (Char et al., 2018).

    Compassion must remain central to healthcare, even as AI systems become more common. AI should be designed and used to support, rather than replace, the human connection between healthcare professionals and patients. The ultimate goal should be to free clinicians from administrative burdens and allow them to focus on what matters most: the well-being of the people they serve (Topol, 2019).

    Towards Governance and Action

    The development and use of medical AI should be guided by comprehensive national or regional governance frameworks that balance the promotion of innovation with the protection of public interest. Such frameworks need to address issues including data privacy, transparency, bias mitigation, and equitable access. They should be shaped through collaboration between governments, healthcare professionals, technologists, and civil society to ensure that they are both robust and responsive to local contexts and needs.

    Education and capacity building are also essential. Healthcare professionals, public health experts, and policymakers must be equipped with the knowledge and skills needed to engage with AI critically and effectively. Training should address not only technical competencies but also the ethical, legal, and social implications of AI.

    Finally, ongoing research is needed to evaluate the real-world impact of AI in healthcare. This research should assess not only clinical outcomes but also equity, patient safety, and the preservation of humanistic values. It should inform continuous improvement of AI systems and the policies that govern their use (Morley, Floridi, Kinsey, & Elhalal, 2020).

    Conclusion

    AI has the potential to greatly enhance healthcare, improving efficiency, accuracy, and access. However, without appropriate safeguards, it also carries the risk of causing harm, deepening inequities, and eroding core human values. The boundary between good and evil in medical AI lies in how these technologies are designed, implemented, and governed. By upholding principles of transparency, fairness, accountability, and compassion, and by embedding these principles in governance frameworks and professional practice, it is possible to ensure that AI serves as a tool for good in medicine.

    References

    Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N., & Folk, J. C. (2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digital Medicine, 1, 39.

    Adamson, A. S., & Smith, A. (2018). Machine learning and health care disparities in dermatology. JAMA Dermatology, 154(11), 1247-1248.

    Bullock, J., Luccioni, A., Hoffmann, P. H., & Jeni, L. A. (2020). Mapping the landscape of artificial intelligence applications against COVID-19. Journal of Artificial Intelligence Research, 69, 807-845.

    Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing machine learning in health care – Addressing ethical challenges. New England Journal of Medicine, 378, 981-983.

    Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics, 21(2), E167-E179.

    Codlin, A. J., Dao, T. P., Vo, L. N. Q., Forse, R. J., Nadol, P., & Nguyen, V. N. (2025). Comparison of different Lunit INSIGHT CXR software versions when reading chest radiographs for tuberculosis. PLOS Digital Health, 4(4), e0000813.

    Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An overview of AI ethics tools, methods and research to translate principles into practices. AI & Society, 36, 59-71.

    Panch, T., Szolovits, P., & Atun, R. (2019). Artificial intelligence, machine learning and health systems. Journal of Global Health, 8(2), 020303.

    Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347-1358.

    Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.

  • Integrating AI in Healthcare

    Artificial intelligence (AI) is reshaping healthcare by offering remarkable capabilities in diagnostics, decision-making, and patient care. Recent research published in JAMA Network Open demonstrated that large language models (LLMs), such as ChatGPT, can outperform human physicians in diagnostic tasks under controlled scenarios (Hswen & Rubin, 2024). This potential has sparked enthusiasm, yet concerns about ethical implications and limitations remain prominent. For Muslims, integrating AI with a tawhidic (unity-based) approach offers an opportunity to align healthcare practices with a divine purpose, emphasising the spiritual connection AI cannot replicate.

    The capabilities of AI in healthcare

    AI systems excel in tasks requiring large-scale data analysis, offering diagnostic insights, synthesising medical literature, and recommending treatments. LLMs have even displayed a surprising ability to simulate empathy in patient interactions. In fact, recent studies revealed that AI-generated responses were rated as more empathetic than those of human physicians in some cases (Hswen & Rubin, 2024). This demonstrates AI’s potential as a tool to support clinicians in delivering more effective and thoughtful care.

    However, AI lacks the moral agency and contextual understanding of human doctors. Machines can sound competent and compassionate, but they do not possess the lived experience or ethical consciousness required for genuine patient engagement. For Muslim clinicians, this underscores the need to approach care with the understanding that true healing combines technical expertise with spiritual accountability.

    Concerns and challenges of AI in healthcare

    While AI shows great promise, it also introduces risks. One major issue is hallucination—where AI generates false but convincing information. For example, in the JAMA Network Open trial, doctors using AI often misinterpreted its outputs because they did not fully understand its limitations (Hswen & Rubin, 2024).

    Ethical concerns around patient privacy, algorithmic bias, and the potential for over-reliance on AI are also significant. Without careful integration, AI could erode critical clinical skills, reducing the human aspect of medicine to mere transactional interactions. For Muslims, this disconnect from the soul underscores why technology must serve as a complement to human care, rather than a replacement.

    Steps to prevent hallucination in AI responses

    To minimise the risks of relying on hallucinated AI outputs, healthcare professionals should:

    1. Cross-Reference Outputs: Validate AI-generated insights against trusted clinical resources such as PubMed or established guidelines.

    2. Request Citations: Ensure AI provides sources for its claims and scrutinise their accuracy.

    3. Use Clinical Judgment: Apply personal expertise to evaluate the plausibility of AI recommendations.

    4. Collaborate: Seek input from peers or subject matter experts when faced with critical decisions.

    These measures align with both scientific rigour and the Islamic principle of amanah (trustworthiness), ensuring that AI enhances, rather than jeopardises, patient care.

    Tawhidic approaches in medicine

    For Muslims, healthcare is not merely a technical practice but a sacred trust that aligns with the concept of tawhid, or the unity of creation under Allah. This approach integrates technical competence with spiritual accountability, bringing patients, doctors, and the healthcare system closer to the Creator.

    AI, no matter how advanced, cannot replicate the soul. It lacks the ability to embody true compassion, understand divine accountability, or guide patients towards spiritual healing. Therefore, a tawhidic approach to healthcare demands the presence of human doctors who can balance technical expertise with compassion, faith, and a sense of purpose rooted in serving Allah.

    A collaborative future

    AI’s role in healthcare should focus on enabling, not replacing, human physicians. As Dr. Chen pointed out, the future belongs to those who learn how to use AI effectively rather than those who resist it (Hswen & Rubin, 2024). By integrating AI responsibly, doctors can reclaim time for deeper patient connections and spiritual engagement, fostering a holistic approach to care.

    For Muslims, this responsibility is even greater, as healthcare becomes a means of ibadah (worship) when guided by tawhidic principles. AI may assist with efficiency, but the soul of medicine lies in human hands. Only a doctor can truly embody competence and compassion, ensuring that care not only heals the body but also brings solace to the spirit.

    References

    Chen, J., Goh, E., & Hswen, Y. (2024). An AI chatbot outperformed physicians and physicians plus AI in a trial—what does that mean? JAMA Network Open. https://doi.org/10.1001/jamanetworkopen.2024.40969

    Hswen, Y., & Rubin, R. (2024). AI in medicine: Medical news and perspectives. JAMA.

  • Using AI in Medicine and Preparing a Framework for Medical Education

    The integration of artificial intelligence (AI) in medicine is transforming healthcare, enabling advanced diagnostics, improved decision-making, and operational efficiencies. However, its application requires careful consideration to ensure that the essence of patient care—ethical responsibility and compassion—is maintained. Clear guidelines are essential to navigate this evolving landscape while simultaneously preparing medical professionals to harness AI effectively through education. As highlighted in a recent article by Hswen and Abbasi (2024), AI lacks emotional intelligence and fiduciary responsibility, which are critical in clinical decision-making. For example, while AI tools can enhance diagnostic accuracy, they cannot “worry” about a patient’s wellbeing or intuitively weigh the moral implications of medical choices.

    AI in medicine should always be viewed as a tool to supplement human expertise, not replace it. Tasks requiring moral agency, such as delivering bad news or making ethically complex decisions, must remain the responsibility of clinicians. Transparency is paramount in AI deployment, particularly in patient-facing applications. When patients interact with AI systems, it is ethically imperative that they are informed. Hswen and Abbasi caution against deceptive practices, noting that even unintentional opacity can erode trust. Additionally, the protection of sensitive data must remain a priority. Robust safeguards are needed to prevent unauthorised access or misuse of patient information.

    The increasing reliance on AI also sparks the need for a structured framework within medical education. Future clinicians must be equipped to understand, evaluate, and ethically apply AI tools in practice. This involves integrating core competencies such as algorithmic literacy, ethical awareness, and interdisciplinary collaboration into medical curricula. Scenario-based training, where students learn to interpret AI outputs alongside patient care, can provide practical insights. Furthermore, education must emphasise that while AI offers precision and efficiency, compassionate care and human connection remain irreplaceable aspects of medicine.

    The future of AI in healthcare extends beyond its current applications. Emerging technologies such as autonomous surgical systems, digital biomarkers, and brain-computer interfaces promise transformative potential. Future research should focus on areas such as personalising care through multi-omics data, integrating AI into lifestyle medicine, and using AI for preventive healthcare. Ethical considerations must guide these advancements. For instance, ensuring that AI systems address, rather than exacerbate, healthcare inequities is crucial. Transparency in algorithm design, patient consent, and cultural sensitivity are essential elements in this process.

    AI also holds promise for alleviating administrative burdens, enabling clinicians to dedicate more time to patient interaction. However, as Hswen and Abbasi observe, the unintended consequences of technology—such as increased clinician burnout due to overreliance on electronic systems—must not be overlooked. Efficiency should not come at the cost of quality care or meaningful clinician-patient relationships.

    In addition to enhancing clinical practice, AI can revolutionise medical education by enabling adaptive learning and immersive simulations. Generative AI and virtual reality platforms can provide personalised training environments, allowing students to practice high-stakes scenarios. However, these tools must be rigorously tested to ensure alignment with medical evidence and ethical standards. Collaborative research between educators and technologists will be vital to optimise the educational use of AI.

    The ethical integration of AI into healthcare requires a multidisciplinary approach, involving clinicians, data scientists, ethicists, and policymakers. As medicine evolves, guidelines and educational frameworks must ensure that technology serves humanity without undermining the moral fabric of care. By balancing innovation with compassion, we can prepare a future where AI enhances healthcare without compromising its core values.

    Disclaimer

    This article integrates insights from generative AI to enhance its development.