Hospitals are teaming up with university health technology experts to address one of the biggest challenges in adopting advanced artificial intelligence (AI) in healthcare: bias. The initiative, called VALID AI, was launched by AI digital health specialists Dennis Chornenky and Ashish Atreja at the University of California, Davis. Their goal is to establish industry standards for AI by developing tools that incorporate patients' "social vital signs," such as socioeconomic status and access to care, which are crucial for health outcomes.
VALID AI proposes creating an AI toolkit that gathers diverse data, better reflecting social determinants of health. With this toolkit, healthcare providers could more effectively link patients to community resources, ultimately improving healthcare outcomes. The initiative has gained the support of over 50 members, including leading institutions like New York-Presbyterian, Ochsner Health in Louisiana, and Boston Children’s Hospital.
The importance of this effort lies in addressing the inherent biases in AI systems, which often mirror human prejudices against marginalized groups. By collaborating with organizations to train algorithms that detect and correct bias, VALID AI aims to accelerate the responsible adoption of AI in healthcare. The initiative hopes to reduce disparities in access and diagnosis, making healthcare more equitable and efficient. According to Craig Kwiatkowski, CIO at Cedars-Sinai Medical Center, AI has the potential to quickly analyze vast amounts of health data, identifying disparities far more efficiently than human efforts alone.