In 2024, the Brookings Center for Technology Innovation launched The AI Equity Lab (“The Lab”) to explore interdisciplinary and cross-sector approaches for the responsible, ethical, and inclusive design of autonomous models in consequential issue areas, including health care, journalism, education, criminal justice, and financial services. To advance the goals of The Lab, interdisciplinary and cross-sector working groups were established and convened in the health care, education, and journalism sectors to develop purposeful and pragmatic solutions that enhance access to and adoption of artificial intelligence (AI), while considering the potential harms of discrimination and modeling errors.1 The Health and AI Working Group (“Working Group”) of The AI Equity Lab comprised 14 interdisciplinary and cross-sector health experts tasked with exploring key themes related to AI’s application in health and providing recommendations to ensure it is designed and deployed inclusively for underrepresented and medically vulnerable communities. Over the course of four online convenings, the Working Group focused on four key areas:
- AI opportunities and barriers related to underrepresented communities.
- Existing legislative and regulatory policies to protect patients from harmful and unethical uses of AI.
- Pragmatic programs and public policies to advance more inclusive AI models.
- Other pioneering actions and best practices to ensure responsible and ethical use of AI in health care.
The participating experts included technologists, health practitioners, primary and secondary care professionals, patient advocates, and industry leaders; they are referred to as Working Group Experts (WGEs) throughout the paper (see Appendix One for the list of participants). Upon completion of the four sessions, the WGEs identified strategies for framing how practitioners and technologists should engage with AI in health care and offered recommendations on ways the sector can meaningfully advance the design and distribution of more inclusive technologies. In particular, the group proposed that policymakers and key stakeholders consider the following strategies:
- Prioritize the accessibility of required infrastructure to implement various use cases of AI in health care and measure the opportunities and risks of adoption in more detail.
- Accelerate AI literacy and awareness among patients, clinicians, practitioners, and industry professionals by leveraging the public health workforce to help with messaging, particularly among medically underserved and vulnerable patients and communities.
- Ensure adequate patient representation across demographics, geography, and other attributes throughout the lifecycle of AI design and deployment so that models reflect patients’ unique health conditions, diagnoses, and communities.
- Evaluate the data infrastructure being used to train AI health models to avert differential treatment of patients in ways that perpetuate health disparities.
- Focus on explainability and transparency as strategic imperatives by sharing AI benefits, technical constraints, and explicit and implicit deficits in the training data.
- Advance research to gain a more comprehensive understanding of the opportunities and risks of health and AI.
- Institute governance frameworks and practices that advance a more comprehensive approach to health and AI.
The paper concludes with a set of questions that health and AI professionals should consider when designing, procuring, and implementing such solutions within their organizations and among patients: who is shaping the initial phase of the AI lifecycle, what data will be resourced, and who will decide on the governance structure and principles. As health and AI increasingly intersect to improve care delivery and research, it is essential that AI technologies are available to all.
Artificial intelligence (AI) in the health care sector is transforming service delivery, administration, and patient care. AI is being used for data analysis, diagnostics, treatment recommendations, and other aspects of health care delivery.2 AI has begun to revolutionize health care by improving efficiency, accuracy, accessibility, and outcomes. It offers significant breakthroughs and opportunities in the health sector and can improve access to quality care for medically vulnerable communities and patients through remote monitoring of micro-health data.3 By analyzing large datasets to identify patterns and insights, AI can enhance diagnostics, strengthen early disease detection, and design personalized treatment plans. Existing and emerging technologies also can help address health disparities by detecting and mitigating biases in health care datasets, algorithms, and care delivery models that do not accurately reflect the needs of the community.
AI in health care also poses risks, such as algorithmic bias and data privacy concerns.4 Ensuring AI is safe and effective requires designing, deploying, and using these technologies to benefit intended communities, in collaboration with partners and developers who understand the nuances of those impacted by AI or have relevant lived experiences. Medically vulnerable patients, communities, and even local health institutions are at risk of being left behind in the AI revolution due to lacking basic access to high-speed broadband, data, resources, and education. Biases baked into AI algorithms can perpetuate existing health disparities by producing inaccurate or unfair results for underserved groups, especially when specific populations are disproportionately represented in clinical trials and medical research. Lack of diversity at the onset of AI’s development can result in technologies that do not align with the needs of diverse populations or, in the most extreme cases, generate medical mistakes and/or profiling. Finally, data privacy and ownership concerns are particularly salient for underserved communities, who may be dubious of sharing their health information with AI companies or platforms.
For example, recent innovations in AI-enabled pulse oximeters have been shown to overestimate oxygen saturation in patients with darker skin tones due to interference from melanin.5 Despite decades of research highlighting this bias, these devices remain in the marketplace, leading to inadequate care for some Black patients.6 The results of these pulse oximeters also sparked interest from the U.S. Food and Drug Administration (FDA), which is now revising its standards to address these disparities.7
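As an illustrative sketch (not a method used by the Working Group), a disparity audit of a pulse-oximeter-style device could compare device readings against a clinical reference across patient groups. The function name, the 88%/92% "hidden hypoxemia" thresholds, and the sample data below are hypothetical, chosen to mirror the kind of per-group bias reported in the literature:

```python
from statistics import mean

def audit_oximeter_bias(readings):
    """Per-group mean bias (device SpO2 minus reference SaO2) and the rate of
    'hidden hypoxemia': reference SaO2 below 88% while the device reads 92%+.

    readings: list of dicts with keys 'group', 'spo2' (device), 'sao2' (reference).
    """
    groups = {}
    for r in readings:
        groups.setdefault(r["group"], []).append(r)
    report = {}
    for g, rows in groups.items():
        bias = mean(r["spo2"] - r["sao2"] for r in rows)
        hidden = sum(1 for r in rows if r["sao2"] < 88 and r["spo2"] >= 92)
        report[g] = {
            "mean_bias": round(bias, 2),
            "hidden_hypoxemia_rate": round(hidden / len(rows), 2),
        }
    return report

# Hypothetical paired readings for two patient groups.
sample = [
    {"group": "A", "spo2": 95, "sao2": 94},
    {"group": "A", "spo2": 93, "sao2": 92},
    {"group": "B", "spo2": 94, "sao2": 87},  # device overestimates saturation
    {"group": "B", "spo2": 96, "sao2": 91},
]
print(audit_oximeter_bias(sample))
```

A group whose mean bias is consistently positive, or whose hidden-hypoxemia rate is elevated, would flag exactly the kind of overestimation the FDA is now revisiting standards to catch.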
The data that trains AI models presents challenges in health care applications and monitoring. Many health care algorithms rely on data that underrepresents certain populations, leading to biased outcomes and amplified health disparities among patients of color. For example, studies have identified AI models that require patients of color to present with more severe symptoms than white patients to receive equivalent diagnoses or treatments, such as in cardiac surgery or kidney transplantation.8 To address these issues, the Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities have proposed guiding principles to mitigate bias, emphasizing transparency, community engagement, and accountability at all stages of algorithm development.9
These applications of AI in health amplify the opportunities of its use but also make more transparent the flaws and risks for medically marginalized communities and providers who increasingly rely on these technologies to deliver care.
The AI Equity Lab’s Health and AI Working Group Experts (WGEs) consist of a diverse group of stakeholders working in health care, clinical research, patient management, software, and AI development companies. These experts were selected for their breadth of experiences, interests, and engagement in the evolving AI ecosystem in the U.S. The WGEs are cross-sector industry leaders who also have embraced an interdisciplinary approach to health care delivery, patient engagement, and advocacy. A full list of WGEs is in Appendix One.
Four online sessions were conducted over a two-month period, from September 2024 to October 2024. The goals of the convenings were to identify opportunities and best practices for AI in health care, from administration to clinical services. The goals of each session were to:
- Session 1: Interrogate the relationship between AI and health equity, along with the opportunities and challenges to cultivate a more inclusive AI ecosystem.
- Session 2: Discuss the required governance structures, policies, and frameworks to promote inclusive and responsible AI.
- Session 3: Explore the infrastructure and resources needed to enable widespread application of AI technologies for all communities.
- Session 4: Understand the key stakeholder needs to facilitate inclusive AI policy, governance, education, development, and distribution.
Participant attendance at the four sessions was consistent, and the WGEs used the Chatham House Rule to ensure that the conversation was free-flowing and authentic. A moderator and research lead from the WGEs were also assigned to the group and were responsible for drafting the paper. Participants’ views and perspectives were independent of their affiliated organizations, and throughout the paper, statements from various members are shared without attribution.
It is worth noting that the WGEs did not aim to reach consensus on the issues. One of the primary goals of The Lab is to establish dialogue and potential collaboration among stakeholders, balancing socio-technical approaches. Thus, the findings of this paper are proposals and suggestions, and may not reflect uniform agreement on strategies for advancing more equitable design and deployment of health and AI.
What is inclusive AI in health care?
Overall, WGEs agreed that AI has the potential to advance health equity. Still, special consideration should be given to intentional and ethical approaches to enable inclusive AI design, distribution, and regulation. The group addressed these topics through a series of questions, which unearthed specific themes related to health that must lead the debate: access, trust, and education.
Participants acknowledged that health access remains a critical challenge for marginalized populations, especially those in medically vulnerable communities in rural and urban areas. Limited health access can exacerbate health disparities and adverse health outcomes. According to the Health Resources and Services Administration (HRSA), approximately 89% of U.S. counties have been designated as primary care Health Professional Shortage Areas (pc-HPSAs).10 More disturbingly, 77 million people (or about 24% of the U.S. population) face provider shortage issues.11 AI-enabled technologies can play a prominent role in increasing access to care for underserved communities by leveraging telemedicine, real-time remote monitoring, advanced diagnostics, language and communication tools, and personalized care technologies.12 As such, participants believed that manufacturers should more effectively collaborate with diverse developers and patients to co-design strategies that better align technologies with underserved communities’ needs. This would include identifying diverse training datasets to improve AI accuracy and reliability, devising inclusive data governance structures and policies prioritizing equitable outcomes, developing cost-effective AI technologies for underserved communities, and formulating culturally informed communication strategies to enhance AI transparency and explainability.
Trust is another pillar in seeking and optimizing care for all communities. Unfortunately, underrepresented communities have experienced trauma and adverse effects that impact their ability to fully trust the health sector. Notable historical events, such as the Tuskegee Syphilis Study, COVID-19 vaccine misinformation, and persistent health disparities among underrepresented communities, have continued to erode trust and confidence in the health sector.13 In the case of the Tuskegee experiment, Black men were recruited for the observation of untreated syphilis, yet they were not offered the promised treatment for syphilis even after it became widely available.14 WGEs expressed concern that the rapid proliferation of digital health and AI technologies in the marketplace has fostered overconfidence in the sector’s ability to mitigate the adverse outcomes associated with privacy, security, misuse, bias, and patient education.15
Minority health professionals, in particular, have become increasingly dubious of AI technologies and their ability to improve minority health due to a lack of diverse AI training datasets and a paucity of technology efficacy data.16 This has resulted in a reluctance among minority health professionals to recommend these tools to patients and caregivers.17 When AI technologies are embedded in health products—like pulse oximeters or other wearable technologies—and exhibit failures, patient and user confidence suffers greatly.18 Therefore, it is imperative that the makers of any AI-enabled product that interfaces with health care understand its intended audience, recognize the nuances of that community, interrogate the datasets training its AI models, and establish an evaluation structure for product accountability and patient safety.
WGEs expressed that AI technology manufacturers and start-up entrepreneurs should strive to meaningfully engage with underrepresented communities in all phases of AI—from design and distribution to evaluation. Technology manufacturers should leverage advisory boards and governance structures, community-based organizations, multicultural medical and patient advocacy associations, and other trusted leaders to develop strategies, tactics, and programs to enhance AI trust, confidence, and adoption. These collaborations could generate culturally informed community engagement strategies, grantmaking opportunities to build AI community capabilities, diverse datasets to improve AI accuracy and reliability, and data governance structures to evaluate AI policies, standards, and performance. Taken together or apart, WGEs found these efforts could lead to a material increase in AI trust and confidence in underrepresented and underserved communities.
We must educate and engage the community from the outset to build sustainable trust.
Working Group Expert
WGEs stated that building an AI-literate population is vital to enhancing public confidence, national competitiveness, workforce preparedness, privacy and security, and online safety. These educational efforts apply both to broad AI applications and use, and more specifically to health care, where the stakes are higher as the challenges become more clinically oriented. A deliberate and collaborative approach to designing a national AI educational campaign with key stakeholders from underrepresented communities could help address systemic misperceptions, misinformation, concerns, and fears related to AI. The 118th Congress introduced various bipartisan bills to advance AI literacy, which offer models for responsible dissemination of information (see Table 1).
Table 1
The various proposed bipartisan bills focused on public awareness could strengthen efforts to advance AI literacy and build trust, which can enhance AI accessibility to medically vulnerable communities, including those with deep-seated trust concerns in health care.
The role of data and AI governance
WGEs also discussed required governance structures, policies, and frameworks to promote inclusive and responsible AI. They leveraged their backgrounds, which span corporations, foundations, and universities, and include expertise in medicine, life sciences, data science, policy, and technology, to consider the real-world impact of developing AI without appropriate governance. Here, participants expressed concern over who in an organization ultimately decides what is “appropriate and responsible AI.” Further, developers’ awareness of regulatory and legislative frameworks is critical to developing responsible and ethical AI, as is understanding how to enact best practices that apply an equity lens throughout the lifecycle of the technology. For example, current literature suggests that equity-focused audits during AI model development would enhance fairness and equity in the development phase.19 Too often, concerns are not addressed until AI goes awry. Participants discussed several frameworks, focusing on those that espoused embedding inclusive approaches to AI development rather than simply adding them on.20
Public policies can help ensure that AI technologies are developed and deployed to advance health equity. WGEs discussed the impact and influence of federal actions, particularly the agencies with subject matter expertise in matters related to Medicare reimbursements, electronic health records compliance criteria, and data and privacy concerns. Without established legal precedent for determining harm, it can be difficult to quantify whether AI solutions are equitable in their development, deployment, and impact across different demographic groups, making transparency and documentation of scientific and social results key factors in evaluating the efficacy of AI models in health care. These challenges are echoed in the paper “Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare,” which highlights the complexities of ensuring equitable AI deployment across diverse demographic groups.21 The study underscores the need for robust ethical and legal frameworks to address biases, quantify harm, and establish accountability in AI-driven health care solutions.22 This and other articles are included in the bibliography of the paper, and more annotated references are included in Appendix Two.
WGEs suggested that AI developers and others in the industry should be encouraged to prioritize “social good” with equitable approaches embedded at the outset. Governments (both in the U.S. and abroad), health care organizations, and technology companies can collaborate to form AI governance structures that prioritize equitable health outcomes. They can also work to identify the tradeoffs and develop mitigation strategies throughout the AI lifecycle that can mitigate biases in training data to improve outcomes for patients and their communities. A more connected structure—whether internal or external to an organization—can also articulate a risk management framework for AI’s application in their health environment.
The NIST AI Risk Management Framework
In the Artificial Intelligence Risk Management Framework (AI RMF), the National Institute of Standards and Technology (NIST) proposed general risk management guidelines for AI, emphasizing safety, security and resilience, accountability and transparency, explainability and interpretability, privacy, and fairness.23 The agency could facilitate more responsible AI guardrails in health through additional suggested standards for AI design and use, with particular focus on the identification and mitigation of biases.24 NIST has already published some guidance on AI guardrails in health, especially in the area of data.25 These and future recommendations could form the basis for more concrete guidance and strategies for more purposeful AI.
AI has the potential to revolutionize health care, but it will not happen if the right stakeholders are not at the table.
Working Group Expert
Opportunities and challenges in health care and AI
When developing AI solutions, interested and vested stakeholders must be represented in initial conversations and considerations around AI’s use in health care. Those engaging in development, deployment, or policy will need to undertake specific efforts to engage underserved communities and those typically underrepresented in the technology ecosystem early in any process. Developing equitable frameworks for AI development necessitates involving multiple stakeholders across various demographics, including providers, users, patients, and developers. According to an article in Frontiers in Public Health,26 current debates include whether certain data types should remain open and accessible to ensure all health care stakeholders benefit from the insights. It further asserts that “principles should be implemented, with focus on fair and equitable benefit-sharing.”27 For the most part, AI-enabled models and tools tend to be developed from a top-down perspective, where technical and AI teams create solutions without input from the communities they are meant to serve, perpetuating inequities. More culturally and industry-relevant AI entails the engagement of local stakeholders, who can provide feedback and validate how technology benefits their communities, needs, and wants.
Patient-centered is not a catchphrase—it should be a strategic imperative that guides AI design and distribution.
Working Group Expert
Determining who those stakeholders are was at the heart of participant discussions. The WGEs took an inventory of digitally invisible groups, like children, seniors, caregivers, and parents, whose more deliberate inclusion in the design principles of health AI could reduce risks, maximize opportunities, and improve their care. Particularly at the AI design phase, too little space is made for patient groups who can advise and shape the intentions of AI developers and assist in the evaluation of the product. Further, the perceived lack of respect for and involvement of community-level experts in the health and AI community does a disservice to model development, especially when their lived experiences can greatly influence the direction of the product and service.
The process for making AI more accessible in marginalized communities also depends on how the medical community, and society at large, tackle digital infrastructure gaps and proximity to quality care institutions. Other rarely discussed areas include the energy and workforce infrastructure essential to the widespread use of AI technologies. Participants found that these infrastructure areas are critical catalysts for enhancing access to and use of AI, and that more support and resources will be needed to help marginalized communities benefit from AI tools. WGEs discussed that the examination of global models may yield useful guidance for domestic AI infrastructure, especially in the handling of patient data and the mitigation of online biases. Some international studies demonstrate that medical AI systems can propagate disparities based on race, gender, age, and other factors through subtle pathways beyond just training data biases.28 The lack of transparency into proprietary “black box” systems poses challenges for evaluating real-world safety, effectiveness, and equitable performance post-deployment.29
How health practitioners scale the use of AI technologies will determine the range of use cases with accessibility playing a central role in its dissemination. The next section goes into more detail on infrastructure and AI literacy, especially as it is deployed by the public health workforce to address disparities in AI use.
1. High-speed broadband, energy, and data infrastructures
Infrastructure plays an outsized role in enabling the development, distribution, and scalability of AI tools, particularly for underserved communities.30 WGEs called out the need for basic internet access. Before even delving into AI, it is imperative to prioritize closing the digital divide.31 When considering the deployment of telehealth, access to high-speed broadband is a requisite.32 Robust infrastructure, including networking systems, data ecosystems, hardware and software tools, and flexible regulatory frameworks will provide better access and use of AI health tools.
Experts also acknowledged that national and global networking infrastructure, including high-speed broadband, 5G, and Internet of Things (IoT) networks, is a critical enabler of the equitable and scalable distribution of AI technologies.33 High-speed broadband access is arguably the most critical conduit and condition for access, yet 2.6 billion people worldwide do not have it.34 5G, the most advanced wireless option, is viewed by many experts as the solution to access challenges due to its ability to improve speed and reduce latency. Augmenting 5G infrastructure requires strategic investments like lowering the cost of 5G devices and closing educational gaps. Furthermore, many communities lack access to mobile devices, laptops, and computers due to the high cost of devices and consumer data plans. The WGEs highlighted concerns that limited access to affordable, reliable broadband and computing devices will continue to disenfranchise underserved communities, preventing the expanded use of AI technologies to better manage their health and well-being.
Energy infrastructure, an often-overlooked component of the AI ecosystem, is essential to AI scalability. AI use is predicated on reliable and sustainable energy sources to power data centers and AI network systems, while sophisticated cooling systems manage the heat from high-performance computing.35 Underserved and low-resource communities continue to struggle with stable and sustainable energy sources, which limits AI adoption for individuals residing in these areas.36 Participants suggested that federal and state investments in underserved communities’ energy infrastructure would help increase access to digital health and AI technologies. Overall, energy considerations have largely centered on the development and expansion of data centers, which require enormous amounts of energy to power. Yet broadband connectivity, local digital infrastructure where medical institutions are located, and AI-enabled applications will all need energy to thrive in an ever-changing health care sector. Questions must be addressed regarding health institutions’ ability to scale AI technologies and patient adoption—particularly in rural areas—to benefit from these capabilities.
Another essential type of infrastructure is a more robust data ecosystem that supports the delivery of health through AI. WGEs expressed that data storage and processing tools, such as cloud and on-premises storage, data lakes, distributed databases, and big data platforms, are often cost-prohibitive for community-based organizations, nonprofits, and small providers seeking to store, analyze, and aggregate data.37 Patients and communities are increasingly deploying digital health technologies, including wearables, IoT-enabled technologies, sensors, and apps, to better manage their health and well-being.38 These devices collect real-time data that can inform health and wellness treatment programs and interventions customized to the individual user.39 Participants expressed that community-based organizations and nonprofits lacking access to the requisite data storage and processing tools will have to rely on third-party entities for data and insights that often lack cultural and local context. Further, WGEs found that affordable AI development and deployment tools and training should be available to assist communities with developing or co-developing culturally sensitive language models that device manufacturers can leverage to enhance efficacy. This is particularly true among local health clinics that may not have state-of-the-art facilities or high-speed bandwidth to engage AI applications in health care.
Rural health clinics are greatly affected when it comes to “blind spots” in data, according to a fact sheet by the National Telehealth Resource Center. The document emphasizes that rural clinics often lack the necessary resources to investigate and implement AI applications, underscoring the need for accessible and cost-effective solutions to enhance patient care in these settings.40 An article in PET Clinics highlighted the use of an “equity lens” in planning and evaluating AI applications to ensure implementation aligns with health equity goals and addresses disparities in AI-driven imaging.41 The WGEs debated whether adopting an implementation science approach could enhance the discussion on equitable AI development and yield sustainable, community-accessible results.
2. The importance of AI literacy and the public health workforce
Having an AI-literate workforce is essential to developing an effective and scalable AI ecosystem. WGEs stated that AI education should be fully integrated into secondary and post-secondary education. Various domestic and international institutions offer combined AI and health programs, such as three-month programs or AI scholar programs, where trainees collaborate with mentors to complete AI health care projects. Participants proposed developing a more comprehensive approach to AI health education, requiring a paradigm shift to better align interdisciplinary demands with modern AI and health care. Such an effort would necessitate a more adaptive approach to programmatic and course development to ensure learners from various backgrounds with non-traditional qualifications, including technical and non-technical, are considered throughout this process. These efforts could diversify the AI workforce and foster innovative ideas to develop inclusive AI technologies that work for all communities.42
The public health workforce is a vital component of care delivery in the U.S.43 It is composed of individuals who range from medical providers to health educators who can engage communities to reduce health disparities and promote more sustainable well-being.44 To maximize AI use, this group of professionals will require more readily available physical infrastructure, including hospitals, clinics, and provider practices in underserved areas, as well as training on AI to be more helpful and efficient in care delivery.
AI can help optimize facility design and placement through capacity forecasting, predictive modeling, and digital twins, establishing the requisite infrastructure for smart hospitals and clinics. That infrastructure could enable IoT and sensor technology to collect and analyze data to improve operational efficiency and building maintenance. The experts believe that these efficiencies and cost savings could help underserved health providers reinvest in care delivery and community health programs. However, they acknowledge the need for special efforts to address data privacy and security, interoperability issues, and the costs of acquiring and implementing AI infrastructure.
Access does not equal availability of health care tools enabled by AI.
Working Group Expert
Bias mitigation and health equity
Any conversation on health and AI would be incomplete without discussions on bias identification and bias mitigation. Bias has been a recurring theme throughout this paper, with health care being one of the most consequential areas for false outcomes or results. Data that trains AI models will be taken from medical records and medical studies, most of which are inherently filled with biases—whether innocuous or discriminatory.45 AI biases will also result from how questions are formed, interrogated, and evaluated.46 In health, AI systems may perform poorly for underrepresented demographics due to insufficient data from those populations or a lack of context or lived experience among AI companies or developers.47 AI models that learn through user interaction, especially those that utilize large language models (LLMs), can further exacerbate inequities if underserved communities are not adequately represented in the discourse, lack access to technologies, or do not feel comfortable using the medium to understand medical tests and other related inquiries. For example, AI systems that flag suspicious substance abuse activity may disproportionately target a particular patient demographic, not due to intentional discrimination, but because of biased training datasets that reflect unequal representation in health care utilization.48
The WGEs discussed the need for more effective approaches to bias mitigation in health AI and for strategies that promote health equity in the AI design and development process: enhancing the diversity of training datasets, establishing comprehensive evaluation mechanisms, and developing robust AI explainability (XAI) protocols that help clinicians and users better understand AI decisions.49 This could increase public confidence and trust in AI and broaden adoption, especially among underserved and underrepresented communities.
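One simple form such an explainability protocol can take is surfacing per-feature contributions of an interpretable model so a clinician can see why a score was produced. The feature names and weights below are hypothetical, not drawn from any deployed system; this is a minimal sketch of the idea, assuming a linear risk model:

```python
# Minimal XAI sketch for a hypothetical linear risk model: each feature's
# contribution (weight x value) is listed so the prediction is not opaque.

def explain(weights, patient):
    """Return per-feature contributions to the risk score, largest first."""
    contributions = {f: weights[f] * v for f, v in patient.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Invented weights and patient values, for illustration only.
weights = {"age": 0.02, "systolic_bp": 0.01, "prior_admissions": 0.30}
patient = {"age": 70, "systolic_bp": 150, "prior_admissions": 2}

for feature, contrib in explain(weights, patient):
    print(f"{feature}: {contrib:+.2f}")
```

For complex models, analogous rankings are typically produced with post-hoc methods such as permutation importance or SHAP rather than raw weights, but the clinician-facing output is the same in spirit.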
The inability to identify biases embedded in AI models—or to recognize them in context when deployed—is a critical concern for stakeholders who understand both the opportunities and costs of leveraging AI models in consequential areas, including complex diseases. To ensure AI effectively serves communities at greater risk of disease and national crises, it must account for populations that are underrepresented in clinical and health data. The WGEs proposed a series of recommendations, intended for health practitioners, AI developers, and the companies that use the technology, to advance fairness and equity in AI-driven health care.
Through rigorous and intentional debate, discussion, and wayfinding, WGEs developed recommendations for advancing health and AI. However, many depend on political will, funding, and efforts to close gaps in education, awareness, and policy.
In particular, the group proposes that policymakers and other stakeholders do the following:
Prioritize the accessibility of required infrastructure to implement various use cases of AI in health care by measuring the opportunities and risks in more detail.
Given the technical and structural complexities of addressing health equity in the context of AI development, any solutions designed without insight from the communities most impacted are susceptible to inaccuracy, rework, and misuse of critical resources. To ensure that the proposed measures are practical, sustainable, and implementable within the constraints of resources, technology, and policy environments, consideration must be given to opportunities for and barriers to implementation. Mitigating technological constraints—such as scalability, ethical consent, transparency, and privacy—is essential. It is recommended that AI manufacturers and health providers leverage diverse governance structures and advisory boards to provide key insights and context, ensuring technologies accurately reflect the needs of the target population. Furthermore, there should be minimum standards for training datasets to reflect the target population, giving all communities a fair and efficacious way to fully leverage AI technologies.
Accelerate AI literacy and awareness among patients, clinicians, practitioners, and industry professionals by leveraging the public health workforce to help with messaging, particularly among medically underserved and vulnerable patients and communities.
Implementation of AI in the health sciences requires an understanding of the local context, including the unique needs, resources, and barriers faced by urban, rural, low-resource, and marginalized communities. By conducting robust needs assessments, researchers can identify how broadband access, digital literacy, regulatory dynamics, and infrastructure gaps impact AI deployment in specific settings. It is recommended that AI manufacturers, health providers, and federal and state agencies intentionally collaborate with community members, community-based organizations, public health workers, and leaders from underrepresented communities to co-design educational programs and strategies to enhance AI literacy, awareness, and adoption. This approach can build trust and confidence in AI tools. Raising awareness and educating the public on how AI can improve preventive health practices and support care for chronic diseases and other health disparities could expand AI adoption. However, this requires training public health workers on AI tools and ensuring they have access to high-speed broadband and the necessary infrastructure to support patients in real time and on-site, preventing delays in care and management.
Ensure adequate patient representation in terms of demographics, region, and other attributes throughout the life cycle of AI design and deployment to ensure that models are representative of their unique health conditions, diagnoses, and communities.
More impactful AI will emerge only through deliberate strategic partnerships with organizations serving underrepresented populations, whose input supports culturally competent design and outreach methods, including language translation and relevant messaging that can come only from community experts. That is, community needs assessments should be conducted by people with lived experience who possess a keen understanding of the health conditions, specific diagnoses, and community health assets. WGEs recommend that organizations responsible for developing AI also be prepared to compensate community members for their specific expertise and to sponsor community education, information dissemination, and engagement activities that support a community feedback loop. There should also be data governance agreements to protect patient data and information specific to communities at risk of medical exploitation.
Evaluate the data infrastructure being used to train AI health models to avert differential treatment of patients in ways that perpetuate health disparities.
A comprehensive and nuanced evaluation of data collection methods and protocols, validation processes, and access for marginalized groups is necessary to mitigate bias and increase transparency. WGEs recommend formulating a robust evaluation rubric to assess these areas, along with criteria to evaluate the teams conducting fairness and ethical assessments. The criteria should address relevant technical expertise as well as direct experience with and knowledge of the intended audience. The teams should have access to the requisite tools to specifically address fairness and explainability, contributing to the development of more ethical, efficacious, and inclusive AI technologies.
Focus on explainability and transparency as a strategic imperative by sharing AI benefits, technical constraints, and deficits in the training datasets (when applicable).
Promoting the development and public release of AI governance scorecards—created by governance boards in collaboration with community leaders and liaisons—can enhance transparency and explainability. Listening sessions led by patient advocacy groups can further engage communities, uncover gaps in understanding, and raise awareness regarding AI’s technical limitations. It is recommended that medical device and product manufacturers collaborate with community stakeholders on a quarterly basis (or as determined by the governance board) to identify the most relevant data concerns and engagement requirements.
Advance research to gain a more comprehensive understanding of the opportunities and risks of health and AI.
AI has the potential to advance research in ways that are not yet fully understood. However, given the many unknown risks, it is imperative that researchers approach the development of these emerging technologies with specific concern for protecting the data and ultimately the lives of the people who will be impacted by the increased adoption of AI platforms. The WGEs recommend embedding ethics, performance standards, and responsible AI best practices into clinical education, research, and practice. This approach will help anticipate and address the unintended consequences of AI application in health care.
Institute governance frameworks and practices that advance a more comprehensive approach to health and AI.
AI governance structures and advisory boards with diverse representation can help AI manufacturers and health organizations assess AI design and distribution protocols to maximize adoption and utilization within underrepresented communities. For example, advisory boards can inform strategies to design AI applications for mobile devices to circumvent affordability concerns and the digital divide, which are critical considerations when developing an inclusive AI ecosystem. Further, it is suggested, though not required, that organizations examine the condition of health institutions’ and medical providers’ technological infrastructure to ensure this cohort possesses the capability to use and optimize AI technologies. Health institutions and medical providers are critically important to expanding patient use of AI technologies. Thus, it is crucial that underserved health institutions secure the resources required to maintain AI systems, including data storage, collection, analytics, and training infrastructure.
Additional questions to ask
As medical providers and others in the health ecosystem consider the effective use of AI to protect patient data and offer new breakthroughs in managing medical conditions, they should always ask a set of key questions prior to adopting AI models for data sorting and medical devices. WGEs outlined essential questions to consider throughout the lifecycle of AI use in particular cases or projects.
- Who is at the table representing their patient and practitioner communities when AI is being both designed and deployed?
This question can help address assumptions and values about marginalized patient populations by involving experts who understand and embody the lived experiences of the community. These experts can address the real concerns and fears of patients who have historically distrusted the health care system. This approach will encourage groups to explore digitally invisible populations in their models and target communities.
- What data will be used to train AI health models?
How organizations define and address “blind spots” in data will better inform developers, populations, and other stakeholders, while also protecting the integrity and efficacy of AI products for all communities. Additionally, organizations may need to determine if other actions, such as policies or standards, are required to evaluate the use of diverse datasets, including medical, financial, and social data. Implementing data protection policies will be crucial to maintaining trust in the process.
- What decisions will be made around the governance of models and data, and will they be made known to customers (e.g., medical providers, patients, community organizations)?
Ensuring that organizations lead with governance considerations is imperative to avert reputational risk, especially in health care. The AI model’s objectives should be closely aligned with the governance strategy and best practices. However, many organizations tend to establish these frameworks only after a product or service is deployed, which, in the health care sector, could lead to irreparable harm.
These questions complement the proposed recommendations and serve as a roadmap for more effective and ethical use of AI in health. Ultimately, the proposals in this paper focus on democratizing AI access and ensuring its availability to all patients, regardless of geography, health condition, economic status, or race.
The development and deployment of AI are complex, nuanced, and present undefined challenges. Efforts to ensure equity in health care are still evolving. However, the Working Group agrees that deliberate and ethical approaches must be applied at all stages of the AI lifecycle, with an adaptive perspective to emerging technological, social, and health changes, ensuring that new technologies benefit all communities, especially those that are underserved.
Michael Crawford, MBA, MHL
Michael Crawford, MBA, MHL, is the assistant vice president for strategy and innovation at Howard University’s Office of Health Affairs, founder and executive director of Howard University’s 1867 Health Innovations Project, and host and executive producer of the 21st Century Health podcast. Michael serves as a strategic advisor to the senior vice president of health affairs and CEO of Howard University Hospital Corporation and collaborates with medical science, health, academic, and HU Board leadership to advance Howard University’s academic, health, and innovation mission. Michael is also the founder and CEO of Digital HealthEX (DHEX), a global nonprofit organization designed to offer thought leadership, programs, solutions, and policy recommendations to facilitate digital health access, understanding, and adoption for all communities, especially the most vulnerable. DHEX seeks long-term solutions to complex challenges at the intersection of digital and conventional health that produce longstanding change and better health outcomes for all communities. Michael serves on numerous national and local committees, including the Consumer Technology Association AI Health Planning Council, the Robert Wood Johnson National Commission to Transform Public Health Data Systems, the America’s Essential Hospitals Innovation Committee, and the Advisory Board of National Minority Quality Forum. Michael served as the moderator for the Health and AI Working Group.
Malaika Simmons, MSHE
Malaika Simmons is the chief operating officer for NADPH, a data-driven nonprofit health research organization. She uses her background in research, psychology, and design thinking to promote empathy-based leadership. Malaika is committed to improving the health of underserved populations and more broadly supports these efforts through her organization, Momentology Media, LLC. Malaika fulfills her passion for eliminating economic and health disparities in marginalized communities by using her proprietary framework, The Momentology Method™, and through her doctoral studies in international psychology with a concentration on humanitarian organization mental health systems. She uses Momentology to empower women to own businesses, champion public health causes, and enter the corporate and political landscapes to affect change in their corner of the world. At Momentology Media, she promotes human-centered design principles to provide thought leadership, motivational speaking, empathy-based corporate training, and executive coaching services, helping people and teams increase performance and productivity through insight and impact. Malaika served as the co-moderator for the Health and AI Working Group.
Nicol Turner Lee, Ph.D.
Nicol Turner Lee, Ph.D., is a senior fellow in Governance Studies, the director of the Center for Technology Innovation, and co-editor of the TechTank blog and podcast at the Brookings Institution, a global think tank headquartered in Washington, D.C. Turner Lee’s research encompasses equitable access to technology across the U.S. and abroad. Her portfolio also includes leading research and public policy work focused on the identification and mitigation of online biases in artificial intelligence systems. She is the author of the book, “Digitally Invisible: How the Internet is Creating the New Underclass” (Brookings Press, 2024), and has appeared throughout various news media, testified before Congress and international governance bodies, and written extensively on tech and telecom issues. In 2022, she was recognized for distinguished career contributions by the American Sociological Association at their annual conference.
The AI Equity Lab at Brookings was launched to bring interdisciplinary conversations and solutions to high-risk AI applications. The main goal of The AI Equity Lab is to embolden the use of existing and emerging AI technologies that are inclusive, safe, interdisciplinary-tested, and purposefully designed. The Health and AI Working Group is one of many small convenings as part of the Brookings AI Equity Lab, which includes interdisciplinary and cross-sector experts that are addressing the positive and negative effects of AI from health administration to the provision of clinical services with special attention to underserved communities.
“AIM-AHEAD | Data Science at NIH.” Accessed January 27, 2025. https://datascience.nih.gov/artificial-intelligence/aim-ahead.
Alowais, Shuroug A., Sahar S. Alghamdi, Nada Alsuhebany, Tariq Alqahtani, Abdulrahman I. Alshaya, Sumaya N. Almohareb, Atheer Aldairem, et al. “Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice.” BMC Medical Education 23, no. 1 (September 22, 2023): 689. https://doi.org/10.1186/s12909-023-04698-z.
Anderson, Matt. “AI in Healthcare: Promising Future in Need of Improved Diversity.” SACS Meharry, April 6, 2023. https://sacsmeharry.org/blog/ai-in-healthcare-promising-future-in-need-of-improved-diversity/.
Backman, Isabella. “Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines.” Yale School of Medicine, December 21, 2023. https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/.
Berdahl, Carl Thomas, Lawrence Baker, Sean Mann, Osonde Osoba, and Federico Girosi. “Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review.” JMIR AI 2, no. 1 (February 7, 2023): e42936. https://doi.org/10.2196/42936.
Bhagat, Neet. “IoT in Healthcare | Remote Patient Monitoring with Connected Devices.” ACL Digital (blog), August 1, 2024. https://www.acldigital.com/blogs/iot-healthcare-revolutionizing-patient-care-through-connected-devices-2024.
Black, Emily, Rakshit Naidu, Rayid Ghani, Kit Rodolfa, Daniel Ho, and Hoda Heidari. “Toward Operationalizing Pipeline-Aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools.” In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1–11. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, 2023. https://doi.org/10.1145/3617694.3623259.
Buller, Robin. “Harnessing AI’s Potential to Lift Up Underserved Communities.” California Health Care Foundation (blog), September 17, 2024. https://www.chcf.org/blog/harnessing-ais-potential-lift-up-underserved-communities/.
CDC. “About The Untreated Syphilis Study at Tuskegee.” The U.S. Public Health Service Untreated Syphilis Study at Tuskegee, September 4, 2024. https://www.cdc.gov/tuskegee/about/index.html.
Chen, Richard J., Judy J. Wang, Drew F.K. Williamson, Tiffany Y. Chen, Jana Lipkova, Ming Y. Lu, Sharifa Sahai, and Faisal Mahmood. “Algorithm Fairness in Artificial Intelligence for Medicine and Healthcare.” Nature Biomedical Engineering 7, no. 6 (June 2023): 719–42. https://doi.org/10.1038/s41551-023-01056-8.
Chustecki, Margaret. “Benefits and Risks of AI in Health Care: Narrative Review.” Interactive Journal of Medical Research 13 (November 18, 2024): e53616. https://doi.org/10.2196/53616.
Clark, Cheryl R., Consuelo Hopkins Wilkins, Jorge A. Rodriguez, Anita M. Preininger, Joyce Harris, Spencer DesAutels, Hema Karunakaram, Kyu Rhee, David W. Bates, and Irene Dankwa-Mullan. “Health Care Equity in the Use of Advanced Analytics and Artificial Intelligence Technologies in Primary Care.” Journal of General Internal Medicine 36, no. 10 (October 2021): 3188–93. https://doi.org/10.1007/s11606-021-06846-x.
Cross, James L., Michael A. Choma, and John A. Onofrey. “Bias in Medical AI: Implications for Clinical Decision-Making.” PLOS Digital Health 3, no. 11 (November 7, 2024): e0000651. https://doi.org/10.1371/journal.pdig.0000651.
Economou-Zavlanos, Nicoleta J, Sophia Bessias, Michael P Cary Jr, Armando D Bedoya, Benjamin A Goldstein, John E Jelovsek, Cara L O’Brien, et al. “Translating Ethical and Quality Principles for the Effective, Safe and Fair Development, Deployment and Use of Artificial Intelligence Technologies in Healthcare.” Journal of the American Medical Informatics Association 31, no. 3 (March 1, 2024): 705–13. https://doi.org/10.1093/jamia/ocad221.
Ejike Innocent Nwankwo, Ebube Victor Emeihe, Mojeed Dayo Ajegbile, Janet Aderonke Olaboye, and Chukwudi Cosmos Maha. “Integrating Telemedicine and AI to Improve Healthcare Access in Rural Settings.” International Journal of Life Science Research Archive 7, no. 1 (August 30, 2024): 059–077. https://doi.org/10.53771/ijlsra.2024.7.1.0061.
Evans, R. S. “Electronic Health Records: Then, Now, and in the Future.” Yearbook of Medical Informatics, no. Suppl 1 (May 20, 2016): S48–61. https://doi.org/10.15265/IYS-2016-s006.
Gerke, Sara, Timo Minssen, and Glenn Cohen. “Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare.” Artificial Intelligence in Healthcare, 2020, 295–336. https://doi.org/10.1016/B978-0-12-818438-7.00012-5.
Hendricks-Sturrup, Rachele, Malaika Simmons, Shilo Anders, Kammarauche Aneni, Ellen Wright Clayton, Joseph Coco, Benjamin Collins, et al. “Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach.” JMIR AI 2, no. 1 (December 6, 2023): e52888. https://doi.org/10.2196/52888.
Ho, Calvin Wai-Loon. “Operationalizing ‘One Health’ as ‘One Digital Health’ Through a Global Framework That Emphasizes Fair and Equitable Sharing of Benefits From the Use of Artificial Intelligence and Related Digital Technologies.” Frontiers in Public Health 10 (May 3, 2022). https://doi.org/10.3389/fpubh.2022.768977.
Hostetter, Martha, and Sarah Klein. “Understanding and Ameliorating Medical Mistrust Among Black Americans.” The Commonwealth Fund, January 14, 2021. https://doi.org/10.26099/9grt-2b21.
Jones, Jeffery A., Lois Banks, Ilya Plotkin, Sunny Chanthavongsa, and Nathan Walker. “Profile of the Public Health Workforce: Registered TRAIN Learners in the United States.” American Journal of Public Health 105, no. Suppl 2 (April 2015): e30–36. https://doi.org/10.2105/AJPH.2014.302513.
Kim, Jee Young, Alifia Hasan, Katherine C. Kellogg, William Ratliff, Sara G. Murray, Harini Suresh, Alexandra Valladares, et al. “Development and Preliminary Testing of Health Equity Across the AI Lifecycle (HEAAL): A Framework for Healthcare Delivery Organizations to Mitigate the Risk of AI Solutions Worsening Health Inequities.” PLOS Digital Health 3, no. 5 (May 9, 2024): e0000390. https://doi.org/10.1371/journal.pdig.0000390.
Laein, Ghasem Dolatkhah. “Global Perspectives on Governing Healthcare AI: Prioritising Safety, Equity and Collaboration.” BMJ Leader, May 28, 2024, leader. https://doi.org/10.1136/leader-2023-000904.
Mulukuntla, Sarika, and Saigurudatta Pamulaparthyvenkata. “Realizing the Potential of AI in Improving Health Outcomes: Strategies for Effective Implementation.” ESP Journal of Engineering & Technology Advancements 2, no. 3 (December 14, 2024): 8. https://doi.org/10.56472/25832646/JETA-V2I3P108.
National Consortium of Telehealth Resource Centers. “Artificial Intelligence in Rural Health,” December 6, 2024. https://telehealthresourcecenter.org/resources/fact-sheets/artificial-intelligence-in-rural-health/.
Nooraie, Reza Yousefi, Patrick G. Lyons, Ana A. Baumann, and Babak Saboury. “Equitable Implementation of Artificial Intelligence in Medical Imaging: What Can Be Learned from Implementation Science?” PET Clinics 16, no. 4 (October 1, 2021): 643–53. https://doi.org/10.1016/j.cpet.2021.07.002.
Omiye, Jesutofunmi A., Jenna C. Lester, Simon Spichak, Veronica Rotemberg, and Roxana Daneshjou. “Large Language Models Propagate Race-Based Medicine.” Npj Digital Medicine 6, no. 1 (October 20, 2023): 1–4. https://doi.org/10.1038/s41746-023-00939-z.
Rajkomar, Alvin, Michaela Hardt, Michael D. Howell, Greg Corrado, and Marshall H. Chin. “Ensuring Fairness in Machine Learning to Advance Health Equity.” Annals of Internal Medicine 169, no. 12 (December 18, 2018): 866–72. https://doi.org/10.7326/M18-1990.
Rep. Blunt Rochester, Lisa [D-DE-At Large]. H.R.6791 – 118th Congress (2023-2024): Artificial Intelligence Literacy Act of 2023 (2023). https://www.congress.gov/bill/118th-congress/house-bill/6791/text.
Sen. Cantwell, Maria [D-WA]. S.4394 – 118th Congress (2023-2024): NSF AI Education Act of 2024 (2024). https://www.congress.gov/bill/118th-congress/senate-bill/4394/text.
Sen. Kelly, Mark [D-AZ]. S.4838 – 118th Congress (2023-2024): Consumers LEARN AI Act (2024). https://www.congress.gov/bill/118th-congress/senate-bill/4838/text.
Sen. Young, Todd. S.4596 – 118th Congress (2023-2024): Artificial Intelligence Public Awareness and Education Campaign Act (2024). https://www.congress.gov/bill/118th-congress/senate-bill/4596.
Sandalow, David, Zhiyuan Fan, and Mariah Frances Carter. “Can AI Transform the Power Sector?” Center on Global Energy Policy at Columbia University SIPA, December 2, 2024. https://www.energypolicy.columbia.edu/can-ai-transform-the-power-sector/.
Siala, Haytham, and Yichuan Wang. “SHIFTing Artificial Intelligence to Be Responsible in Healthcare: A Systematic Review.” Social Science & Medicine 296 (March 1, 2022): 114782. https://doi.org/10.1016/j.socscimed.2022.114782.
Spencer, Thomas, and Siddarth Singh. “What the Data Centre and AI Boom Could Mean for the Energy Sector – Analysis.” IEA, October 18, 2024. https://www.iea.org/commentaries/what-the-data-centre-and-ai-boom-could-mean-for-the-energy-sector.
Streeter, Robin A., John E. Snyder, Hayden Kepley, Anne L. Stahl, Tiandong Li, and Michelle M. Washko. “The Geographic Alignment of Primary Care Health Professional Shortage Areas with Markers for Social Determinants of Health.” Edited by Tayyab Ikram Shah. PLOS ONE 15, no. 4 (April 24, 2020): e0231443. https://doi.org/10.1371/journal.pone.0231443.
Tabassi, Elham. “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” Gaithersburg, MD: National Institute of Standards and Technology (U.S.), January 26, 2023. https://doi.org/10.6028/NIST.AI.100-1.
The Alan Turing Institute. “Action Needed to Reduce Bias in Medical Devices, Review Finds.” Accessed February 3, 2025. https://www.turing.ac.uk/news/action-needed-reduce-bias-medical-devices-review-finds.
Thomasian, Nicole M., Carsten Eickhoff, and Eli Y. Adashi. “Advancing Health Equity with Artificial Intelligence.” Journal of Public Health Policy 42, no. 4 (December 2021): 602–11. https://doi.org/10.1057/s41271-021-00319-5.
Turner Lee, Nicol. Digitally Invisible: How the Internet Is Creating the New Underclass. Rowman & Littlefield Publishers, Inc., 2024.
Turner Lee, Nicol. “Enabling Opportunities: 5G, the Internet of Things, and Communities of Color.” Brookings, January 19, 2019. https://www.brookings.edu/articles/enabling-opportunities-5g-the-internet-of-things-and-communities-of-color/.
Turner Lee, Nicol, Niam Yaraghi, and Samantha Lai. “The Roadmap to Telehealth Efficacy: Care, Health, and Digital Equities.” Brookings, July 25, 2022. https://www.brookings.edu/articles/the-roadmap-to-telehealth-efficacy-care-health-and-digital-equities/.
Winny, Annalies. “Pulse Oximeters’ Racial Bias.” Johns Hopkins Bloomberg School of Public Health, July 8, 2024. https://publichealth.jhu.edu/2024/pulse-oximeters-racial-bias.
Appendix One
Working group—Subject matter expert members
We would like to acknowledge the following individuals who graciously shared their time, experiences, insights, and expertise to this project:
- Michael Crawford, Assistant Vice President, Howard University, and Founder of the 1867 Health Innovations Project (Moderator)
- Malaika Simmons, Chief Operating Officer, NADPH, a data-driven nonprofit health research organization (Co-Moderator)
- Alan Balch, CEO, National Patient Advocate Foundation
- Heather Cole-Lewis, PhD, Global Head of Health Equity
- Renee Cummings, Professor of Practice in Data Science at University of Virginia
- Keon Gilbert, Fellow, The Brookings Institution
- Dorosella Green, Director of Health Equity, Takeda Pharmaceutical
- Alex John London, K&L Gates Professor of Ethics and Computational Technologies, Carnegie Mellon University
- April Mimms, Vice President, ForHims
- Laura Munro, Business Development, SAIC
- Guadalupe Pacheco, Director of Programs, National Hispanic Health Foundation
- Kumba Sennaar, PhD candidate at Brandeis University and CEO of William Kelly Consulting, LLC
- Prateek Sharma, MD, FASGE, President, American Society for Gastrointestinal Endoscopy
- Claire Sheahan, President and CEO, Alliance for Health Policy
- Cara Tenenbaum, Health Policy advocate and Principal, Strathmore Health Strategy
- Niam Yaraghi, Associate Professor at Miami Herbert Business School and Senior Fellow at the Brookings Institution
Appendix Two
Select references (annotated)
1. Thomasian, N. M., C. Eickhoff, and E. Y. Adashi. “Advancing Health Equity with Artificial Intelligence.”
Journal of Public Health Policy, 2021.
- Looks at how flaws in health care AI can widen existing health care disparities.
- Outlines need for robust regulatory frameworks, focusing on guiding AI applications in health to ensure they are fair and equitable. Suggests policy changes to mandate equity-focused audits during AI model development.
2. Berdahl, C. T., L. Baker, S. Mann, O. Osoba, and F. Girosi. “Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review.”
JMIR AI, 2023.
- Identifies major barriers and strategies to improve AI’s impact on health equity.
- Suggests integrating diverse demographic data to reduce bias in models and advocates for developing explainable AI to help clinicians understand AI decisions, improving trust and reducing potential disparities in health care.
3. Siala, H., and Y. Wang. “SHIFTing Artificial Intelligence to Be Responsible in Healthcare: A Systematic Review.”
Social Science & Medicine, 2022.
- Outlines how biases emerge in AI and identifies responsible AI principles for health care, such as transparency, accountability, and inclusivity.
- Recommends a “SHIFT” (Social, Health, Inclusivity, Fairness, Transparency) model to guide AI developers and health care providers in responsible AI implementation that addresses health care disparities.
4. Kim, J. Y., A. Hasan, K. C. Kellogg, and W. Ratliff. “Development and Preliminary Testing of Health Equity Across the AI Lifecycle (HEAAL): A Framework for Healthcare Delivery Organizations.”
PLOS Digital Health, 2024.
- Proposes the HEAAL framework, which focuses on applying health equity assessments at each stage of AI development, from data collection to real-world deployment.
- Framework encourages iterative assessments to identify and correct potential biases, helping organizations deliver equitable AI solutions.
5. Clark, C. R., C. H. Wilkins, and J. A. Rodriguez. “Health Care Equity in the Use of Advanced Analytics and Artificial Intelligence Technologies in Primary Care.”
Journal of General Internal Medicine, 2021.
- Establishes that advanced analytics and AI can be applied equitably in primary care.
- Highlights the importance of inclusive data collection practices and suggests collaborative partnerships with underrepresented communities to improve model relevance and reduce health disparities in primary care.
6. Rajkomar, A., M. Hardt, and M. D. Howell. “Ensuring Fairness in Machine Learning to Advance Health Equity.”
Annals of Internal Medicine, 2018.
- Discusses fairness in machine learning, stressing the risk of biased algorithms amplifying existing disparities.
- Provides examples of AI systems in use, examining how developers can implement fairness audits and regular evaluations to identify biases and prevent AI from reinforcing social inequities.
7. Mulukuntla, S., and S. Pamulaparthyvenkata. “Realizing the Potential of AI in Improving Health Outcomes: Strategies for Effective Implementation.”
ESP Journal of Engineering & Technology Advancements, 2024.
- Focuses on best practices for effective AI implementation, recommending strategies like community partnerships, iterative model testing, and stakeholder involvement.
- Emphasizes the importance of continuously assessing AI’s impact on various patient demographics to ensure equitable health care outcomes.
8. Ho, C. W. L. “One Digital Health Framework for Fair and Equitable AI Use in Public Health.”
Frontiers in Public Health, 2022.
- Introduces a global “One Digital Health” framework, advocating for fair and equitable sharing of AI benefits in public health.
- The framework emphasizes governance and oversight to ensure AI tools provide universal benefits, particularly for underserved populations in global health contexts.
9. Nooraie, R. Y., P. G. Lyons, A. A. Baumann, and B. Saboury. “Equitable Implementation of Artificial Intelligence in Medical Imaging: What Can Be Learned from Implementation Science?”
PET Clinics, 2021.
- Highlights implementation science principles as applied to AI in health care. The authors suggest integrating an “equity lens” during planning and evaluating AI applications, ensuring that implementation aligns with health care equity goals and addresses disparities in AI-driven imaging.
10. Laein, G. D. “Global Perspectives on Governing Healthcare AI: Prioritising Safety, Equity, and Collaboration.”
BMJ Leader, 2024.
- Discusses the need for effective governance frameworks that address algorithmic bias and promote safety and equity in health care AI. It offers a global perspective on ethical considerations, recommending international collaboration to establish equitable health care standards.
11. Lytle, K. S., S. Balu, M. E. Lipkin, and A. I. Shariff. “Translating Ethical and Quality Principles for Effective, Safe, and Fair AI in Healthcare.”
Journal of the American Medical Informatics Association, 2024.
- Examines frameworks to ensure the ethical and safe use of AI in health care. It highlights quality control principles, advocating for thorough testing on diverse patient groups before AI is deployed widely, and emphasizes continuous monitoring to maintain equitable care standards across populations.
12. Journal of Medical Internet Research AI, 2023.
- Recognizes the imperative to strengthen AI and ML literacy in underserved communities and build a more diverse AI and ML design and development workforce engaged in health research.
- Describes current efforts to put ethics and fairness at the forefront of AI and ML applications to build equity in biomedical research, education, and health care.
13. Black, E., R. Naidu, R. Ghani, K. Rodolfa, D. Ho, and H. Heidari. “Toward Tools.”
Association for Computing Machinery, 2022.
- Calls on the ML community to take a more holistic approach to fairness by systematically investigating the many design choices made throughout the ML pipeline and identifying interventions that target an issue’s root cause rather than its symptoms.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).