The dangers of algorithm bias.

The definition of bias within artificial intelligence (AI) is pretty straightforward: PwC defines it as “AI that makes decisions that are systematically unfair to certain groups of people.”

Research has highlighted the profound impact of bias embedded in AI algorithms. A recent study by scientists at the University of Deusto in Spain reinforced the idea that AI “inherits” bias from humans. More alarming, the researchers' data demonstrated the reverse: humans also absorb bias from AI. Sadly, the effects of that bias can linger long after people stop using the algorithm.

While algorithmic bias is becoming a widespread problem as more people incorporate widely available tools like ChatGPT into their daily lives, nowhere is its impact more visible than in the health care industry. Providers, plans, and other health care organizations have been using machine learning algorithms for years, and unfortunately, we’ve already seen the effects of bias embedded in AI-driven systems and tools. For example:

  • A 2019 study published in Science reported racial bias within a clinical algorithm used by many hospitals. When providers used the algorithm to determine which patients needed care, Black patients had to be deemed much sicker than white patients to receive the same care.
  • An NPR story in 2023 highlighted research on sepsis in children at Duke University’s pediatric infectious diseases program. When algorithms informed care decisions, Duke doctors took longer to order blood tests for Hispanic children eventually diagnosed with sepsis than for white children.
  • An NIH analysis of AI tools used for dermatology demonstrated a nearly 50% risk of poor predictive outcomes because the underlying datasets were drawn primarily from white patients.
  • A 2022 story in Fierce Healthcare reported a Cornell study on Medicare/Medicaid payments that found counties with greater proportions of Black residents had significantly higher rates of uncompensated hospital care.

As more research and stories like these emerge, health equity experts, news media, and others are pressuring technology companies to better understand how machine learning algorithms promote negative stereotypes and perpetuate discrimination.

Impacts of AI bias on health care.

Biases in the data used to train algorithms can lead to models that perform well for specific populations but fail to capture the complexity and needs of diverse and marginalized groups. This, in turn, can result in delayed or insufficient care that can significantly impact health outcomes.

Algorithmic bias is equally concerning when models are used for predictive analytics for patient health outcomes, risk assessments, and resource allocation. If an algorithm is trained predominantly on data from one demographic, it may not accurately predict the needs or outcomes of another; when those results are inappropriately generalized, the recommendations it produces can be less effective or even harmful for underrepresented populations. This is problematic enough in care delivery, but the implications could be catastrophic when biased algorithms inform broader planning or policy decisions.
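To make this concrete, the short Python sketch below shows one way to audit a model's recall separately for each demographic group before deployment. Everything here is hypothetical: the predictions are synthetic, and the 0.05 disparity threshold is an arbitrary illustrative cutoff, not a clinical or regulatory standard.

    # Per-group recall audit: does the model miss more true cases in one
    # group than another? All data below is synthetic.
    from collections import defaultdict

    def recall_by_group(y_true, y_pred, groups):
        """Compute recall (sensitivity) separately for each group."""
        hits = defaultdict(int)       # true positives per group
        positives = defaultdict(int)  # actual positives per group
        for truth, pred, group in zip(y_true, y_pred, groups):
            if truth == 1:
                positives[group] += 1
                hits[group] += pred
        return {g: hits[g] / positives[g] for g in positives}

    # Hypothetical outputs, where 1 = "flag patient for extra care."
    y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    y_pred = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]

    rates = recall_by_group(y_true, y_pred, groups)
    print(rates)  # {'A': 1.0, 'B': 0.25} -- the model misses most group-B cases

    if max(rates.values()) - min(rates.values()) > 0.05:
        print("WARNING: recall differs sharply across groups; investigate.")

A gap like the one above would never surface in an aggregate accuracy number, which is exactly why per-group evaluation matters.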

If left unchecked, bias within AI health care systems and tools can lead to a multitude of adverse impacts:

  • Unchecked bias can exacerbate existing health care disparities, particularly for underrepresented communities. For example, if an algorithm incorrectly assesses risk or need based on biased data, it could lead to certain populations receiving less access to vital health care services or preventive measures.
  • Bias can result in misdiagnosis or less effective treatment recommendations. AI tools that aid in diagnosis might not perform equally well across different racial or ethnic groups, leading to poorer health outcomes for those the algorithm does not represent as well.
  • Patients may lose trust in health care systems if they perceive that the care they receive is influenced by bias. This is especially true for minority groups who have historically been subjected to systemic biases in health care.
  • Algorithms are increasingly used to inform decisions about resource distribution. When those algorithms are biased, the result can be inefficient or unfair allocation of health care resources, affecting the quality of patient care and operational efficiency.
  • Providers and health plans could face legal challenges if biased algorithms result in systematic discrimination. Moreover, such practices violate ethical standards of fairness and equality in medical care.
  • Biased algorithms can increase costs due to misdiagnoses, delayed diagnoses, unnecessary treatments, or failure to prevent disease progression. This can have significant financial impacts on patients, providers, and plans.

“Unchecked AI bias can perpetuate and deepen inequities in health care, leading to a cascade of negative consequences for individuals and the health care system. Addressing these issues proactively is critical to ensure that health care AI fulfills its potential to improve care and make it more accessible and effective for everyone.” —Anjoli Punjabi, MOBE Director of Program Health Outcomes and Health Equity

Mitigating bias in AI.

AI is only as good as the data it learns from, and historically, health care data has been plagued by inequities rooted in how the data is collected, categorized, and disseminated. As AI becomes further entrenched in health care systems, these biases can become automated and invisible. AI is expected to play a larger role in diagnostic processes, treatment recommendations, and patient interactions through digital health platforms. If the AI's foundational data, categorization norms, and labeling systems are not rigorously evaluated and continuously updated for bias, the consequences could range from inequitable resource allocation to disparities in disease management and outcome predictions. Equally important, health care organizations need processes that prevent new bias from being introduced in the first place; careful training and quality assurance standards can go a long way toward reducing the risk of algorithmic bias.
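As a concrete illustration of why labeling systems need this scrutiny, the sketch below revisits the proxy-label problem behind the 2019 Science study cited earlier: when "cost" stands in for "need," groups with historically poorer access to care look healthier than they are. All records and field names here are synthetic.

    # Training-label audit: among patients with comparable illness burden,
    # does the proxy label (annual cost) differ by group? Synthetic data.
    import statistics

    records = [
        # (group, chronic_conditions, annual_cost_usd)
        ("A", 3, 9200), ("A", 3, 8800), ("A", 3, 9600),
        ("B", 3, 5100), ("B", 3, 4700), ("B", 3, 5600),
    ]

    by_group = {}
    for group, _conditions, cost in records:
        by_group.setdefault(group, []).append(cost)

    for group, costs in sorted(by_group.items()):
        print(f"group {group}: mean cost label = ${statistics.mean(costs):,.0f}")

    # Comparable illness, very different labels: a model trained to predict
    # cost as a stand-in for need will systematically deprioritize group B.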

Fortunately, many organizations are assisting with bias prevention, detection, and correction.

In 2022, the National Committee for Quality Assurance (NCQA) updated its Health Equity Strategy initiatives to include a Health Plan Accreditation requirement that directly addresses potential problems with bias in AI by mandating that health plans identify and mitigate potential biases in the segmentation and stratification of populations. Within the new NCQA Population Health Management standards (PHM2), health plans are instructed to employ rigorous validation and testing protocols for their AI systems, ensuring the data is representative and the algorithms are transparent and fair.

Public health experts, technology providers, data analytics professionals, and others continue to work to mitigate AI bias in health care. This includes reviewing how algorithms are developed, from data entry and cleaning to algorithm and model choice to implementation and dissemination of the results. Better training initiatives can also be highly effective in helping technology and health care professionals spot algorithm bias before it becomes embedded in AI tools and systems.
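One simple check that fits the "implementation and dissemination" end of that pipeline is comparing selection rates across groups for a deployed model. The sketch below screens with a four-fifths ratio; note that this threshold is borrowed from US employment-discrimination practice and appears here purely as an illustrative heuristic, not a health care standard. The data is synthetic.

    # Post-deployment screen: how often does the model select each group
    # for a care program? Synthetic predictions; 1 = selected.
    def selection_rates(predictions, groups):
        totals, selected = {}, {}
        for pred, group in zip(predictions, groups):
            totals[group] = totals.get(group, 0) + 1
            selected[group] = selected.get(group, 0) + pred
        return {g: selected[g] / totals[g] for g in totals}

    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    rates = selection_rates(preds, groups)
    ratio = min(rates.values()) / max(rates.values())
    print(rates, f"impact ratio = {ratio:.2f}")
    if ratio < 0.8:  # four-fifths heuristic, illustrative only
        print("Selection rates diverge across groups; review the model.")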

MOBE’s Algorithm Bias Committee.

In 2023, MOBE formed a committee to identify and address algorithmic bias. This diverse group of data analysts, health equity experts, executives, and other MOBE professionals continues to discuss ways to detect, address, correct, and prevent AI bias as an organization and in our industry. While significant measures already exist, the MOBE team continuously pursues ways to mitigate potential AI bias, including:

  • Population data comparisons (a simple example follows this list)
  • Advanced developer training on algorithmic bias
  • Identification of algorithms that could demonstrate a higher risk of bias
  • Re-training processes to avoid introducing bias into existing algorithms, including new approaches to data labeling
  • Continuous improvement of governance and oversight processes aimed at recognizing, addressing, and preventing AI bias
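As a sketch of the first item on that list, a population data comparison can be as simple as checking whether the training sample's demographic mix matches the population the model will serve. The age buckets, counts, and 5-percentage-point flag below are all hypothetical.

    # Population data comparison: training-sample mix vs. served population.
    training_counts = {"18-34": 1200, "35-54": 2600, "55+": 6200}
    served_share    = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

    total = sum(training_counts.values())
    for bucket, count in training_counts.items():
        train_share = count / total
        gap = train_share - served_share[bucket]
        flag = "  <-- underrepresented" if gap < -0.05 else ""
        print(f"{bucket}: training {train_share:.0%} vs. served "
              f"{served_share[bucket]:.0%}{flag}")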

There is much work to be done as we learn better, more effective ways to reduce and remove bias from AI systems now and in the future. MOBE is committed to being a leader in the effort to provide fair, equitable health care for everyone.

“It has always been our philosophy at MOBE to do everything possible to prevent new bias within our algorithms. Our models exclude functions like race, ethnicity, language, and ZIP codes. We’re also smart about not perpetuating bias that might already exist in a dataset by avoiding training our models on any features we know have bias embedded within them.” —Travis Hoyt, Chief Information and Analytics Officer
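A minimal sketch of the feature-exclusion practice described in the quote might look like the following; the column names are hypothetical, and this illustrates the general approach rather than MOBE's actual pipeline.

    # Drop protected attributes (and known proxies such as ZIP code)
    # before any model sees the data. Field names are hypothetical.
    EXCLUDED_FEATURES = {"race", "ethnicity", "language", "zip_code"}

    def strip_protected(record: dict) -> dict:
        """Return a copy of the record with excluded features removed."""
        return {k: v for k, v in record.items() if k not in EXCLUDED_FEATURES}

    raw = {"age": 54, "zip_code": "55401", "language": "es",
           "a1c": 7.2, "race": "B", "med_adherence": 0.81}
    print(strip_protected(raw))
    # {'age': 54, 'a1c': 7.2, 'med_adherence': 0.81}

As the quote itself notes, exclusion alone is not enough, because remaining features can act as proxies for the excluded ones; that is why avoiding training on features with known embedded bias matters just as much.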
