Tag: AI

  • Responsible Leadership in the Age of Popular Vote

    Introduction

    Modern democracies increasingly face a paradox. Leaders are elected through popular vote, yet popularity does not reliably translate into improved communities, functional cities, or stronger nations. Charismatic figures may win elections, dominate public discourse, and command loyal followings, but their tenure often leaves institutions weakened and public trust diminished. This tension forces a difficult question: is the failure one of leadership, or of society itself?

    This paper argues that leadership outcomes in democratic systems reflect not only the quality of leaders but also the moral, cognitive, and institutional maturity of society. Improving leadership therefore requires more than producing better individuals. It requires reshaping the conditions under which leadership is chosen, sustained, and constrained.

    Popularity is not leadership

    Leadership theory has long distinguished influence from responsibility. Popular leaders are often highly influential, but influence alone does not ensure meaningful outcomes. Transformational leadership theory explains how leaders inspire and mobilise followers through vision and emotional connection. Yet inspiration without ethical grounding, systems awareness, and delivery capability risks becoming performance rather than progress.

    The repeated failure of popular leaders to improve cities and nations suggests that charisma, while electorally powerful, is insufficient for governing complex societies. Leadership in complex systems demands moral restraint, competence, and institutional stewardship, qualities that are rarely captured by popularity alone.

    Values as the foundation of responsible leadership

    Before discussing voter behaviour or institutional constraints, it is necessary to address a more fundamental issue: values. Leadership does not emerge in a moral vacuum. Leaders act based on what they believe is right, acceptable, or negotiable. Likewise, societies choose leaders based on what they admire, tolerate, or excuse.

    Values therefore sit at the core of leadership quality. A leader with technical brilliance but weak values may deliver short-term gains while corroding trust, justice, and institutional integrity. Conversely, leaders grounded in strong values are more likely to exercise restraint, accept accountability, and prioritise long-term societal wellbeing over personal or political survival.

    From this perspective, nation-building is inseparable from values formation. Development is not merely economic or infrastructural. It is moral and civilisational.

    Values shape both leaders and voters

    People who believe in and act upon values tend to recognise those same values in leadership. Where honesty, justice, responsibility, and humility are socially respected, leaders who lack these traits struggle to sustain legitimacy. Where values are weak or selectively applied, leaders without integrity can still thrive, provided they remain entertaining, divisive, or symbolically reassuring.

    This explains why leadership reform cannot rely solely on replacing individuals. Societies that wish to be led by leaders with values must themselves value integrity, truthfulness, competence, and service. In this sense, leadership choice becomes a mirror of collective moral priorities.

    This is not a moral judgement on citizens. It is a sociological reality. People respond to norms that are consistently rewarded in their environment.

    A tawhidic perspective on values and leadership

    In Islam, values are not socially negotiated preferences. They are rooted in tawhid, the affirmation of the oneness of Allah, which unifies belief, ethics, and action. A tawhidic mind does not separate power from accountability, success from responsibility, or leadership from moral consequence.

    From this worldview, leadership is an amanah, a trust, not a personal entitlement. Authority is exercised with the consciousness that all actions are accountable beyond worldly institutions. Justice is not optional, truth is not strategic, and service to people is inseparable from obedience to Allah.

    When values flow from tawhid, leadership is restrained by moral consciousness even when institutional oversight is weak. Equally important, a society shaped by tawhidic values is less easily deceived by rhetoric, because it evaluates leaders not only by what they promise, but by how they act, decide, and govern.

    Thus, values in Islam are not abstract virtues. They are operational principles that shape governance, accountability, and public trust.

    Leadership outcomes depend on decision conditions, not voter character

    It is tempting to conclude that societies simply choose poorly. This framing is misleading. Behavioural science shows that individuals operate under bounded rationality. Faced with complex policy choices, people rely on emotional cues, identity alignment, familiarity, and trusted narratives. These are not moral shortcomings but cognitive adaptations to uncertainty and information overload.

    However, values influence which cues people trust. Where values are strong, emotional manipulation loses effectiveness. Where values are weak or fragmented, deception becomes easier. The quality of leadership choice is therefore shaped by both cognitive constraints and moral orientation.

    Institutions determine whether values are protected or eroded

    Strong institutions reinforce values by making ethical behaviour normal and misconduct costly. Weak institutions allow values to be overridden by expediency and personality. Over time, this erodes public expectations, creating a cycle where both leaders and citizens lower their standards.

    Institutions alone cannot create values, but they can protect them. Likewise, values alone cannot guarantee good leadership, but they provide the moral compass without which institutions become hollow.

    Civic maturity is cultivated, not innate

    The ability to evaluate leadership is learned. Civic maturity develops when societies normalise ethical reasoning, discuss trade-offs honestly, and expose manipulation without cynicism. Education, public discourse, and moral leadership all contribute to this maturation.

    In societies where values are continuously reinforced, leadership quality improves not through coercion, but through expectation.

    Conclusion

    It is accurate to say that people matter in a democratic system. It is incomplete to say that people simply need to change.

    Leadership quality emerges from the interaction between values, institutions, and public choice. In the absence of values, popularity becomes dangerous. In the absence of institutions, values become fragile. In the absence of informed citizens, both are easily undermined.

    From an Islamic perspective, strengthening leadership therefore begins with strengthening values grounded in tawhid. A society that believes and acts upon values will choose leaders with values, not perfectly, but consistently enough to change its trajectory.

    Ultimately, societies do not merely elect leaders. They cultivate them.

  • Good and Evil of AI in Medicine: Where Is the Boundary?

    Artificial intelligence (AI) is rapidly transforming the field of medicine, offering unprecedented opportunities to improve healthcare delivery, diagnosis, and population health management. However, with its promise comes a risk of harm, particularly when AI systems are poorly designed, implemented without appropriate safeguards, or driven by commercial interests at the expense of public good. This paper explores what constitutes good and evil in medical AI, provides examples of both, and outlines ethical boundaries and practical steps to ensure that AI serves humanity.

    AI in medicine refers to systems designed to assist with tasks such as diagnosis, prognosis, treatment recommendations, and public health surveillance. The good in medical AI lies in its capacity to enhance human well-being, reduce inequalities, and improve healthcare efficiency. AI applications can support clinical decisions, automate routine tasks, and extend healthcare reach to underserved populations (Rajkomar, Dean, & Kohane, 2019). Conversely, the potential for evil emerges when AI contributes to harm, reinforces inequities, or undermines essential human values such as compassion, accountability, and justice. This harm may arise from biased algorithms, opaque decision-making processes, or commercial exploitation that prioritises profit over patient welfare.

    The Goods

    One of the clearest demonstrations of AI’s positive contribution to medicine is in the field of early disease detection. AI systems trained on medical images have been shown to accurately detect conditions such as diabetic retinopathy and tuberculosis. A pivotal study demonstrated that an autonomous AI system could safely and effectively identify diabetic retinopathy in primary care settings, enabling earlier referrals and potentially preventing vision loss (Abràmoff, Lavin, Birch, Shah, & Folk, 2018). In tuberculosis screening, AI-based chest X-ray interpretation tools have been used in high-burden countries to prioritise patients for further diagnostic testing, particularly in settings where human expertise is limited (Codlin et al., 2025). These applications help address gaps in healthcare access and reduce delays in diagnosis and treatment.

    AI has also supported public health surveillance, particularly during emergencies such as the COVID-19 pandemic. AI models combined data from health records, mobility patterns, and social media to predict outbreaks, identify hotspots, and inform targeted interventions. This contributed to more timely and effective public health responses and resource allocation (Bullock, Luccioni, Hoffmann, & Jeni, 2020).

    The Evils

    Despite these benefits, AI has also been linked to harms that can undermine trust and exacerbate health inequities. One of the most pressing concerns is algorithmic bias. AI systems trained on data that do not represent the diversity of patient populations may produce biased outcomes. For example, machine learning tools for dermatology developed primarily using images of lighter skin tones have been found to perform less accurately on darker skin. This can lead to missed or delayed diagnoses in patients from minority groups, reinforcing existing disparities (Adamson & Smith, 2018).

    Commercial exploitation of AI is another area of concern. The rush to monetise AI in medicine has sometimes led to the deployment of systems that are insufficiently transparent or accountable. Proprietary algorithms may operate as black boxes, with their decision-making processes hidden from both clinicians and patients. This opacity undermines informed consent and shared decision-making, and can make it difficult to challenge or review AI-driven recommendations (Char, Shah, & Magnus, 2018).

    Furthermore, there is a risk that excessive reliance on AI could erode the compassionate, human-centred aspects of healthcare. While AI can assist with routine tasks and reduce administrative burdens, it must not be seen as a replacement for human empathy and professional judgement. There is concern that as AI takes on a greater role, the patient-doctor relationship could become depersonalised, weakening one of the core foundations of medical practice (Panch, Szolovits, & Atun, 2019).

    Ethical Boundaries for Responsible AI

    To ensure that AI in medicine serves the common good rather than causes harm, clear ethical boundaries are needed. Transparency is essential. AI systems must be designed in ways that make their decision-making processes understandable and open to scrutiny. This is critical to maintaining trust, supporting informed consent, and enabling clinicians to integrate AI recommendations into their decision-making with confidence.

    Fairness must also be prioritised. Developers need to ensure that AI tools are designed to promote equity rather than exacerbate disparities. This involves using diverse training datasets, actively auditing algorithms for bias, and engaging with communities to understand their needs and perspectives. Bias mitigation should be a central part of AI development and deployment, not an afterthought.
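    The auditing practice described above can be illustrated with a small sketch. Everything below is hypothetical: it assumes binary disease labels, a model's predictions, and a group attribute drawn from patient records, and simply compares accuracy and false-negative rates across groups. It is a minimal illustration of a disaggregated audit, not a substitute for a full fairness evaluation.

```python
# Illustrative bias audit: compare a model's error rates across patient groups.
# All data here is made up; in practice, predictions would come from the AI
# system under review and group labels from patient records.

def audit_by_group(y_true, y_pred, groups):
    """Return per-group sample size, accuracy, and false-negative rate."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        positives = [i for i in idx if y_true[i] == 1]
        missed = sum(y_pred[i] == 0 for i in positives)
        stats[g] = {
            "n": len(idx),
            "accuracy": correct / len(idx),
            "false_negative_rate": missed / len(positives) if positives else None,
        }
    return stats

# Hypothetical labels: 1 = disease present, 0 = absent.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

report = audit_by_group(y_true, y_pred, groups)
for g, s in sorted(report.items()):
    print(g, s)
```

    A gap such as the one this toy data produces, where group B's false-negative rate is twice group A's, is exactly the kind of disparity a routine audit should surface before deployment.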

    Accountability is another key principle. Developers, healthcare providers, and regulators share responsibility for ensuring that AI systems are safe, effective, and aligned with ethical principles. Regulatory frameworks should define standards for AI in healthcare and provide mechanisms for monitoring, evaluation, and redress when harm occurs (Char et al., 2018).

    Compassion must remain central to healthcare, even as AI systems become more common. AI should be designed and used to support, rather than replace, the human connection between healthcare professionals and patients. The ultimate goal should be to free clinicians from administrative burdens and allow them to focus on what matters most: the well-being of the people they serve (Topol, 2019).

    Towards Governance and Action

    The development and use of medical AI should be guided by comprehensive national or regional governance frameworks that balance the promotion of innovation with the protection of public interest. Such frameworks need to address issues including data privacy, transparency, bias mitigation, and equitable access. They should be shaped through collaboration between governments, healthcare professionals, technologists, and civil society to ensure that they are both robust and responsive to local contexts and needs.

    Education and capacity building are also essential. Healthcare professionals, public health experts, and policymakers must be equipped with the knowledge and skills needed to engage with AI critically and effectively. Training should address not only technical competencies but also the ethical, legal, and social implications of AI.

    Finally, ongoing research is needed to evaluate the real-world impact of AI in healthcare. This research should assess not only clinical outcomes but also equity, patient safety, and the preservation of humanistic values. It should inform continuous improvement of AI systems and the policies that govern their use (Morley, Floridi, Kinsey, & Elhalal, 2020).

    Conclusion

    AI has the potential to greatly enhance healthcare, improving efficiency, accuracy, and access. However, without appropriate safeguards, it also carries the risk of causing harm, deepening inequities, and eroding core human values. The boundary between good and evil in medical AI lies in how these technologies are designed, implemented, and governed. By upholding principles of transparency, fairness, accountability, and compassion, and by embedding these principles in governance frameworks and professional practice, it is possible to ensure that AI serves as a tool for good in medicine.

    References

    Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N., & Folk, J. C. (2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digital Medicine, 1, 39.

    Adamson, A. S., & Smith, A. (2018). Machine learning and health care disparities in dermatology. JAMA Dermatology, 154(11), 1247-1248.

    Bullock, J., Luccioni, A., Hoffmann, P. H., & Jeni, L. A. (2020). Mapping the landscape of artificial intelligence applications against COVID-19. Journal of Artificial Intelligence Research, 69, 807-845.

    Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing machine learning in health care – Addressing ethical challenges. New England Journal of Medicine, 378, 981-983.

    Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics, 21(2), E167-E179.

    Codlin, A. J., Dao, T. P., Vo, L. N. Q., Forse, R. J., Nadol, P., & Nguyen, V. N. (2025). Comparison of different Lunit INSIGHT CXR software versions when reading chest radiographs for tuberculosis. PLOS Digital Health, 4(4), e0000813.

    Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An overview of AI ethics tools, methods and research to translate principles into practices. AI & Society, 36, 59-71.

    Panch, T., Szolovits, P., & Atun, R. (2019). Artificial intelligence, machine learning and health systems. Journal of Global Health, 8(2), 020303.

    Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347-1358.

    Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.

  • Children and Adolescents of the Future

    The unpredictable future, shaped by a myriad of global challenges, has profound implications for child and adolescent health. The COVID-19 pandemic has not only exposed vulnerabilities in health systems but has also disrupted education and altered social norms, creating a cascade of effects that disproportionately impact younger populations. This disruption is compounded by ongoing issues such as climate change, technological advancements, and evolving societal structures, which together create a complex landscape of health risks for children and adolescents.

    The COVID-19 pandemic has highlighted significant inequities in access to education and healthcare, particularly for marginalized populations. School closures have exacerbated educational disparities, especially among children from low socioeconomic backgrounds who lack access to the digital tools necessary for remote learning (Lancker & Parolin, 2020). Research indicates that the digital divide—characterized by unequal access to technology—has widened during the pandemic, leading to significant gaps in educational engagement and achievement (Mathrani et al., 2021; Azubuike et al., 2021; Early & Hernandez, 2021). This divide not only affects academic performance but also has long-term implications for mental health, as children who are unable to engage in learning may experience increased anxiety and depression (Lancker & Parolin, 2020; Early & Hernandez, 2021). Furthermore, the pandemic has underscored the inadequacies of mental health services for children, as the prevalence of mental health issues among adolescents has risen sharply during this period (Ahmadipour et al., 2018).

    Climate change presents another critical challenge to child and adolescent health. The increasing frequency of extreme weather events and environmental degradation poses direct threats to physical health, including respiratory issues exacerbated by air pollution and the risks associated with natural disasters (Zaitsu et al., 2022; Maity et al., 2020). The Lancet Countdown on Health and Climate Change emphasizes the urgent need for policies that address these environmental risks, particularly for vulnerable populations like children (Zaitsu et al., 2022). Moreover, the intersection of climate change and health is further complicated by socioeconomic factors, as children from disadvantaged backgrounds are often the most affected by environmental hazards and have less access to healthcare resources (Early & Hernandez, 2021; Kuo-Hsun, 2021).

    Technological advancements, while offering new opportunities for learning and development, also introduce risks that can negatively impact child health. The rise of digital platforms has facilitated educational access but has also led to increased exposure to cyberbullying and harmful content, which can adversely affect mental health (Azubuike et al., 2021; Zhang, 2023). Additionally, the shift towards digital learning environments has highlighted the need for digital literacy and online safety education, as many children are ill-equipped to navigate these new challenges (Mathrani et al., 2021; Zhang, 2023). The potential for technology to exacerbate existing inequalities is a pressing concern, as children from lower socioeconomic backgrounds may not have the same access to digital resources, further entrenching disparities in health and education outcomes (Early & Hernandez, 2021; Kuo-Hsun, 2021).

    Addressing these multifaceted challenges requires a comprehensive approach that prioritizes the social determinants of health. This includes enhancing access to quality healthcare, particularly mental health services, for all children, especially those from marginalized communities (Ahmadipour et al., 2018). Furthermore, educational policies must aim to bridge the digital divide by ensuring equitable access to technology and integrating digital literacy into curricula (Mathrani et al., 2021; Zhang, 2023). Community programs that focus on preventing violence, abuse, and neglect are essential, as these social factors significantly influence mental and emotional health outcomes for children (Ahmadipour et al., 2018).

    Finally, climate action must be prioritized to mitigate the health impacts of environmental degradation, with a focus on improving air quality and reducing exposure to pollutants that disproportionately affect children (Zaitsu et al., 2022; Maity et al., 2020).

    In conclusion, the future of child and adolescent health is fraught with challenges, but these are not insurmountable. By addressing the root causes of health disparities through the lens of the social determinants of health, stakeholders can work towards building a safer, healthier, and more equitable future for younger generations.

    Collaborative efforts involving governments, communities, and global organizations are essential to implement sustainable solutions that prioritize the well-being of children and adolescents in an ever-changing world.

    References

    Ahmadipour, S., Mohammadzadeh, M., Mohsenzadeh, A., Birjandi, M., & Almasian, M. (2018). Screening for developmental disorders in 4 to 60 months old children in Iran (2015–2016). Journal of Pediatric Neurology, 17(01), 008-012. https://doi.org/10.1055/s-0037-1612620

    Azubuike, O., Adegboye, O., & Quadri, H. (2021). Who gets to learn in a pandemic? Exploring the digital divide in remote learning during the COVID-19 pandemic in Nigeria. International Journal of Educational Research Open, 2, 100022. https://doi.org/10.1016/j.ijedro.2020.100022

    Early, J., & Hernandez, A. (2021). Digital disenfranchisement and COVID-19: Broadband internet access as a social determinant of health. Health Promotion Practice, 22(5), 605-610. https://doi.org/10.1177/15248399211014490

    Kuo-Hsun, J. (2021). The digital divide at school and at home: A comparison between schools by socioeconomic level across 47 countries. International Journal of Comparative Sociology, 62(2), 115-140. https://doi.org/10.1177/00207152211023540

    Lancker, W., & Parolin, Z. (2020). COVID-19, school closures, and child poverty: A social crisis in the making. The Lancet Public Health, 5(5), e243-e244. https://doi.org/10.1016/s2468-2667(20)30084-0

    Maity, S., Sahu, T., & Sen, N. (2020). Panoramic view of digital education in COVID-19: A new explored avenue. Review of Education, 9(2), 405-423. https://doi.org/10.1002/rev3.3250

    Mathrani, A., Sarvesh, T., & Umer, R. (2021). Digital divide framework: Online learning in developing countries during the COVID-19 lockdown. Globalisation, Societies and Education, 20(5), 625-640. https://doi.org/10.1080/14767724.2021.1981253

    Zaitsu, M., Mizoguchi, T., Morita, S., Kawasaki, S., Iwanaga, A., & Matsuo, M. (2022). Developmental disorders in school children are related to allergic diseases. Pediatrics International, 64(1). https://doi.org/10.1111/ped.15358

    Zhang, X. (2023). The digital divide: Class and equality education. SHS Web of Conferences, 157, 04027. https://doi.org/10.1051/shsconf/202315704027