
Ethical Challenges of AI-based Medical Diagnosis and Forecasting


By: Andrew Chung, Biometry and Statistics ‘26


Talk of “AI is the future” is commonplace in this day and age, as it pertains to nearly every facet of our society, and medicine is no exception. From clinical applications to medical research, quantitative data analytics, and genomic sequencing, developments in AI and its subfields, such as machine learning (ML), have revolutionized how we tackle intricate healthcare problems: highly evolved computer systems and their accompanying algorithms automate tasks, mundane or sophisticated, that have traditionally been thought to require human intelligence.


Strides have already been made in the forecasting and diagnosis of diseases across numerous fields of human medicine – oncology, neurology, and epidemiology (e.g., COVID-19 diagnosis), to name a few. Most such efforts have reported exceptional classification accuracy (measured as raw percentages and recall/precision rates, as well as composite measures of predictive performance such as AUC-ROC or the F1 score) with striking efficiency (Ahsan et al., 2022). These AI models are continuously ‘fed’ real-world disease data and undergo rigorous training (incorporating meticulous algorithmic optimization and layers of cross-validation) that improves their diagnostic accuracy. Of course, no artificial model or human can detect conditions with 100% accuracy, since sweeping generalizations about patient populations may fail to account for each individual patient’s unique needs. There is little doubt, however, that such models will continue to improve with the continued influx of new data; accompanied by periodic optimization, they can “learn” to correct for outliers, decipher complex patterns, and process high volumes of data.
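
To make these metrics concrete, the sketch below shows how accuracy, precision, recall, F1, and AUC-ROC might be estimated for a binary “disease vs. no disease” classifier under cross-validation. It is purely illustrative: the dataset is synthetic and the scikit-learn pipeline is an assumption, not the setup used by any study cited here.

```python
# Illustrative sketch: evaluating a binary classifier with the metrics
# discussed above. Synthetic data stands in for real clinical records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

# Hypothetical stand-in for a curated, imbalanced clinical dataset.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

model = LogisticRegression(max_iter=1000)

# 5-fold stratified cross-validation: each fold's held-out patients are
# scored on accuracy, precision, recall, F1, and AUC-ROC.
scores = cross_validate(
    model, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring=["accuracy", "precision", "recall", "f1", "roc_auc"],
)

for metric in ["accuracy", "precision", "recall", "f1", "roc_auc"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.3f} ± {vals.std():.3f}")
```

Reporting the spread across folds, not just the mean, is what reveals whether a model’s headline accuracy is actually consistent.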


Notwithstanding the immense potential upsides of this data-guided approach to healthcare, ethical challenges abound. Large-scale training of conventional models not only tends to introduce susceptibility to outliers, but can also exacerbate biases (racial, gender, socioeconomic, etc.) that are difficult to quantify, and incurs a risk of data breaches through the selective publicization of confidential medical records. Basu et al. (2020) emphasize the marked vulnerability of algorithmic models to the random noise inherent in real-world data. The presence of outliers, however influential or numerous, can significantly hinder the consistency of most artificial models, as illustrated by the wide variation in accuracy observed among several ML algorithms tested on a skin cancer dataset (Masood & Al-Jumaily, 2013). The challenge lies in curating a resilient model whose performance does not sway with stochastic (random) noise. At present, continuous tuning of parameters and the introduction of relevant test data, both of which require manual intervention, remain the most reliable safeguards, notwithstanding recent developments toward self-corrective AI that may eventually reduce the need for human supervision.
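
A rough way to probe the noise sensitivity described above is to perturb a trained model’s held-out inputs and watch its accuracy degrade. The sketch below does exactly that on synthetic data; the noise levels, model choice, and data are all assumptions made for illustration.

```python
# Illustrative sketch: probing a model's sensitivity to stochastic noise by
# perturbing held-out features and recording the drop in accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Inject zero-mean Gaussian noise of growing standard deviation into the
# test features and compare against the clean (sigma = 0) baseline.
for sigma in [0.0, 0.5, 1.0, 2.0]:
    X_noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
    acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.3f}")
```

A model whose accuracy collapses at modest noise levels is exactly the kind of brittleness that currently demands manual tuning and fresh test data to detect.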


Another caveat of AI is the perpetuation of demographic bias. In a phenomenon often termed ‘AI bias,’ predictive models trained on strictly quantitative medical data are prone to reinforce harmful, derogatory biases that are implicit in the coded data but never explicitly addressed during training (Jackson, 2021). Data collected from humans – who carry their own personal biases and pre-established systematic notions – are inherently biased as well; artificial models, lacking the capacity for sentient thought or ethical reasoning, naturally acquire such biases in the form of quantitative patterns and can learn to discriminate along racial/ethnic, gender, or socioeconomic lines when making critical diagnostic, palliative, or academic decisions (Naik et al., 2022). Alongside societal efforts toward anti-racism, gender equality, and equal representation, it is paramount to critically examine the methods by which predictive models are trained, such as the use of sample blocking, sensitivity analyses of particular demographic factors, and more.
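
One simple form such an examination could take is a subgroup audit: after training, compare a model’s error rates across a demographic attribute. The sketch below is a minimal, assumption-laden illustration of that idea (the “group” label and data are entirely synthetic), not a full fairness analysis or any method prescribed by the cited authors.

```python
# Illustrative sketch: auditing per-group performance for a hypothetical
# binary demographic attribute "group" attached to synthetic patient records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
group = rng.integers(0, 2, size=len(y))  # hypothetical demographic label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Report accuracy and recall (sensitivity) separately for each subgroup;
# a large gap would flag a disparity worth investigating further.
for g in (0, 1):
    mask = g_te == g
    acc = accuracy_score(y_te[mask], pred[mask])
    rec = recall_score(y_te[mask], pred[mask])
    print(f"group {g}: accuracy={acc:.3f}  recall={rec:.3f}")
```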

The medical community faces the task of navigating these ethical crossroads while harnessing the immense potential of AI in healthcare. Continuous refinement of AI models, along with a tenable balance and genuine transparency among the medical community, AI developers, and policymakers, is vital. The integration of AI into healthcare holds immense promise, but ethical considerations such as outliers, biases, and privacy concerns cannot be ignored. In the foreseeable future, the medical community will find itself with the added responsibility of striking the right balance between technological advancement and ethical principles, a balance pivotal for a future where AI serves as a valuable ally in the pursuit of better health outcomes for the world.


References: 

  1. Ahsan, M. M., Luna, S. A., & Siddique, Z. (2022). Machine-Learning-Based Disease Diagnosis: A Comprehensive Review. Healthcare, 10(3), 541. https://doi.org/10.3390/healthcare10030541

  2. Basu, T., Menzer, O., & Engel-Wolf, S. (2020). The ethics of machine learning in medical sciences: Where do we stand today? Indian Journal of Dermatology, 65(5), 358. https://doi.org/10.4103/ijd.ijd_419_20

  3. Masood, A., & Al-Jumaily, A. (2013). Computer Aided Diagnostic Support System for Skin Cancer: A Review of Techniques and Algorithms. International Journal of Biomedical Imaging, 2013, 1–22. https://doi.org/10.1155/2013/323268

  4. Jackson, M. C. (2021). Artificial Intelligence & Algorithmic Bias: The Issues With Technology Reflecting History & Humans. Journal of Business & Technology Law, 16(2), 299. https://digitalcommons.law.umaryland.edu/jbtl/vol16/iss2/5

  5. Naik, N., Hameed, B. M. Z., Shetty, D. K., Swain, D., Shah, M., Paul, R., Aggarwal, K., Ibrahim, S., Patil, V., Smriti, K., Shetty, S., Prasad, B., Chłosta, P., & Somani, B. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? Frontiers in Surgery, 9. https://doi.org/10.3389/fsurg.2022.862322
