Mitigating bias in AI.
AI is only as good as the data it learns from, and historically, health care data has been plagued by inequities in how the data is collected, categorized, and disseminated. As AI becomes further entrenched in health care systems, these biases can become automated and invisible. AI is expected to play a larger role in diagnostic processes, treatment recommendations, and patient interactions through digital health platforms. If the AI's foundational data, categorization norms, and labeling systems are not rigorously evaluated and continuously updated for bias, the consequences could range from inequitable resource allocation to disparities in disease management and outcome predictions. Equally important, processes must be in place to prevent bias from arising within health care systems in the first place. Careful training and quality assurance standards can substantially reduce the risk of algorithmic bias.
Fortunately, many organizations are assisting with bias prevention, detection, and correction.
In 2022, the National Committee for Quality Assurance (NCQA) updated its Health Equity Strategy initiatives by adding a Health Plan Accreditation requirement. It directly addresses potential problems with bias in AI by mandating that health plans identify and mitigate potential biases in the segmentation and stratification of populations. Under the new NCQA Population Health Management standards (PHM2), health plans are instructed to employ rigorous validation and testing protocols for their AI systems, ensuring that the data is representative and the algorithms are transparent and fair.
Public health experts, technology providers, data analytics professionals, and others continue to work to mitigate AI bias in health care. This includes reviewing every stage of algorithm development: data entry and cleaning, algorithm and model choice, and implementation and dissemination of the results. Better training initiatives can also be highly effective in helping technology and health care professionals spot algorithmic bias before it becomes embedded in AI tools and systems.
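One common screening step in such reviews is checking whether a model's positive predictions are distributed very unevenly across demographic groups. The sketch below illustrates this with the "disparate impact ratio" heuristic (the four-fifths rule); the data, group labels, and 0.8 threshold are hypothetical illustrations, not requirements drawn from NCQA standards or any specific regulation.

```python
# Minimal sketch of a pre-deployment bias audit using the
# disparate impact ratio (four-fifths rule). All data here is
# hypothetical and for illustration only.

from collections import defaultdict


def disparate_impact_ratio(predictions, groups):
    """Return (ratio, per-group rates) for binary model outputs.

    predictions: iterable of 0/1 outputs (e.g., "flag for a care program")
    groups: iterable of group labels aligned with predictions
    The ratio is the lowest group's positive-prediction rate divided
    by the highest group's rate; values near 1.0 indicate parity.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates


# Hypothetical audit: a model flags patients for a care-management program.
preds = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact_ratio(preds, groups)
print(rates)                  # per-group selection rates
print(round(ratio, 2))        # 0.4 / 0.6 -> 0.67
if ratio < 0.8:               # common four-fifths screening threshold
    print("Potential disparate impact -- review before deployment")
```

A check like this is only a first-pass screen: it can surface a skew in outcomes, but it cannot explain whether the skew reflects biased training data, a flawed label definition, or a legitimate clinical difference, which is why the broader review of data collection and model choice described above remains necessary.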