Authored by Lyla Saxena, Master in Health Administration '24
Art by Jenny Li '26
How do we balance the technological revolution in healthcare with health equity? Are these two endeavors in conflict? Recent discourse about the increasing use of diagnostic artificial intelligence (AI)-based technologies reveals that equity may be brushed aside both in the development of such tools and in their actual use, despite the increasing prioritization of equity in healthcare policy discussions. AI refers to “computational models that automate tasks typically performed by humans,” including machine learning algorithms, while healthcare AI specifically focuses on disease surveillance, resource allocation, diagnostics and treatment, the delivery of health services, and workflow [1]. Diagnostic AI-based technologies are trained on data and images from many patients with and without certain conditions and, based on this training, are used to diagnose new patients who resemble the data on which the algorithm was trained. For example, a colon cancer detection tool may be trained on many images of colon cancer in different patients and compare the colon of the patient being diagnosed to the training images to detect similar abnormalities. Examples of such technologies include Screenpoint and Therapixel, two breast cancer screening tools, and an Optum algorithm that identifies patients who need additional health care to prevent hospitalization [2].
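To make this train-then-match loop concrete, here is a minimal sketch in Python of how such a classifier is fit and then applied to new patients. The synthetic features, labels, and choice of logistic regression are illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row summarizes one imaging study (e.g., texture and density
# features extracted from a scan); label 1 = condition present, 0 = absent.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" means fitting the model to the labeled patients it is shown.
model = LogisticRegression().fit(X_train, y_train)

# The fitted model then flags new patients whose feature patterns resemble
# the positive cases it saw during training.
print("held-out accuracy:", model.score(X_test, y_test))
```

The key point for equity is visible in the first step: the model can only learn patterns present in `X` and `y`, so whoever is missing from the training data is invisible to the diagnosis.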
These diagnostic AI-based technologies can help ease and automate the process of patient care and reduce healthcare costs; however, they can also perpetuate bias and discrimination in medical care [3]. Because AI aims to replicate human cognition via data-driven analytics, it can also replicate human biases [4]. Bias reproduction starts at the outset of a technology’s development and continues throughout the product’s life cycle unless it is intentionally mitigated [5]. Bias in these tools generally results from the limited patient datasets and images on which they are trained (e.g., a breast cancer tool trained on a dataset composed of 83% White patients and 17% Black patients, then used on both demographics in practice, even though breast cancer is 46% more fatal for Black women) [6]. Misguided use of the tools can also promote bias. An algorithm developed by Optum to identify patients who need additional healthcare uses medical costs as its determinant [7]. This determinant ignores the fact that people of color use healthcare differently than White people: they receive less care due to mistrust of the medical community and limited access to healthcare services. Because the healthcare system spends less money caring for Black patients, ranking patients by medical costs will likely prevent the tool from identifying Black patients who actually need additional care. Adding another layer of complexity, the often proprietary nature of these technologies hampers efforts to investigate such biases.
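To see how a cost-based proxy goes wrong, consider a minimal simulation: two groups with identical underlying health needs, where one group incurs roughly 40% less cost for the same need. Every number here is invented for illustration and does not reflect Optum's actual model or data; the qualitative pattern is the one Obermeyer and colleagues documented [7].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B
need = rng.gamma(2.0, scale=1.0, size=n)        # true health need, same for both

# Assumption: group B generates ~40% less cost at the same level of need
# (less access, less utilization, mistrust of the medical system).
cost = need * np.where(group == 1, 0.6, 1.0)

# "Identify patients for extra care" = take the top 5% by the cost proxy.
flagged = cost >= np.quantile(cost, 0.95)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: {flagged[group == g].mean():.1%} flagged")
# Despite identical need distributions, group B is flagged far less often,
# because the proxy (cost) systematically understates its need.
```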
How are these biases mitigated? One might look to the Food and Drug Administration (FDA) to regulate such technologies, since it already regulates traditional medical devices. In September 2017, the FDA issued guidance asking traditional and AI medical device makers to publicly report the demographics of the populations on which their tools are trained. In January 2021, the FDA released an action plan highlighting the importance of multistakeholder involvement in the proposed regulatory framework and emphasizing a patient-centered approach to technology through transparency [8]. As of September 2021, the FDA had already approved 77 medical AI technologies, termed “Software as a Medical Device,” or SaMDs [9]. Yet only a fraction of medical AI technologies fall under this SaMD product classification, while many more are already used in the healthcare sector. Calls for the FDA to incorporate health equity into its regulation of AI-based medical technologies may therefore not reach the FDA-exempt, already-in-use medical algorithms that shape medical care and decisions, healthcare access, and resource allocation. To complement federal regulation, those who create the tools (developers) and those who use the tools (health systems and clinicians) must also ensure equity is at the center of their work, by training on inclusive datasets and images and by identifying when a chosen metric is a poor proxy for the diagnosis of interest.
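On the developer side, one low-tech safeguard is auditing a training set's demographics against the population the tool will actually serve before deployment. The sketch below is a minimal illustration with invented proportions and categories, not a prescribed FDA procedure; the 83/17 skew echoes the breast-cancer example above.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical training set skewed 83/17 toward White patients.
training_races = random.choices(["White", "Black"], weights=[0.83, 0.17], k=1000)

# Demographic mix of the patient population the tool will serve (invented).
target_mix = {"White": 0.60, "Black": 0.40}

counts = Counter(training_races)
for race, target in target_mix.items():
    actual = counts[race] / len(training_races)
    note = "  <-- under-represented" if actual < target else ""
    print(f"{race}: {actual:.0%} of training data vs {target:.0%} served{note}")
```

An audit like this does not remove bias by itself, but it surfaces the mismatch early enough for developers to collect more representative data, and it produces exactly the kind of demographic reporting the FDA's 2017 guidance asks for.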
Adding a new healthcare tool that can further reproduce problematic healthcare practices requires thoughtful consideration of its continued use, no matter how innovative the tool is. Diagnostic AI-based technologies lacking inclusive intention will entrench existing health inequities unless action is taken, and accountability established, at both the individual and federal levels to regulate these tools.
Works Cited
Thomasian, N. M., Eickhoff, C., & Adashi, E. Y. (2021). Advancing health equity with artificial intelligence. Journal of Public Health Policy, 42(4), 605. https://doi.org/10.1057/s41271-021-00319-5
Ross, C. (2020). From a small town in North Carolina to big-city hospitals, how software infuses racism into U.S. health care. Promise and Peril: How Artificial Intelligence Is Transforming Health Care, 90. https://www.statnews.com/promise-and-peril/; Ross, C. (2021). Could AI tools for breast cancer worsen disparities? Patchy public data in FDA filings fuel concern. Promise and Peril: How Artificial Intelligence Is Transforming Health Care, 107. https://www.statnews.com/promise-and-peril/
Sunarti, S., Fadzlul Rahman, F., Naufal, M., Risky, M., Febriyanto, K., & Masnina, R. (2021). Artificial intelligence in healthcare: Opportunities and risk for future. The 1st International Conference on Safety and Public Health, 35, S67. https://doi.org/10.1016/j.gaceta.2020.12.019
Thomasian, N. M., Eickhoff, C., & Adashi, E. Y. (2021). Advancing health equity with artificial intelligence. Journal of Public Health Policy, 42(4), 602. https://doi.org/10.1057/s41271-021-00319-5
Thomasian, N. M., Eickhoff, C., & Adashi, E. Y. (2021). Advancing health equity with artificial intelligence. Journal of Public Health Policy, 42(4), 603. https://doi.org/10.1057/s41271-021-00319-5
Ross, C. (2021). Could AI tools for breast cancer worsen disparities? Patchy public data in FDA filings fuel concern. Promise and Peril: How Artificial Intelligence Is Transforming Health Care, 103. https://www.statnews.com/promise-and-peril/
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447. https://doi.org/10.1126/science.aax2342; Ross, C. (2020). From a small town in North Carolina to big-city hospitals, how software infuses racism into U.S. health care. Promise and Peril: How Artificial Intelligence Is Transforming Health Care, 90-91. https://www.statnews.com/promise-and-peril/
U.S. Food & Drug Administration. (2021). Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (pp. 1–7). https://www.fda.gov/media/145022/download; Ross, C. (2021). Could AI tools for breast cancer worsen disparities? Patchy public data in FDA filings fuel concern. Promise and Peril: How Artificial Intelligence Is Transforming Health Care, 105. https://www.statnews.com/promise-and-peril/
Thomasian, N. M., Eickhoff, C., & Adashi, E. Y. (2021). Advancing health equity with artificial intelligence. Journal of Public Health Policy, 42(4), 603. https://doi.org/10.1057/s41271-021-00319-5