Enhancing Models for Breast Cancer Risk Prediction and Bias Mitigation through Clinician-AI Collaboration
Focused Research Program
Our Focus
The Enhancing Models for Breast Cancer Risk Prediction and Bias Mitigation through Clinician-AI Collaboration FRP aims to identify hidden sources of biological and demographic bias in artificial intelligence (AI) models that predict breast cancer risk from mammography images. Building on recent progress in large language models (LLMs), the framework enables clinicians to interact with the AI model at a level that is not possible with current AI frameworks.
This Focused Research Program is co-sponsored by the Digital Health Initiative at the Hariri Institute for Computing, the School of Public Health Population Health Data Science Program, the Clinical and Translational Science Institute, and the Evans Center for Interdisciplinary Biomedical Research.
Research Team Leaders
- Clare Poynton, MD, PhD, Assistant Professor, Department of Radiology, BU Chobanian & Avedisian School of Medicine
- Kayhan Batmanghelich, PhD, Assistant Professor, Department of Electrical and Computer Engineering, BU
Research Thrusts
1. Develop a retrospective mammography dataset for auditing and enhancing AI models for breast cancer risk estimation
This thrust addresses a critical knowledge gap regarding how well existing clinical and AI risk models perform in racially diverse patient populations, which are underrepresented in many of the datasets used to train such models. The researchers will create a retrospective dataset of screening mammograms and breast cancer risk factors representative of the diverse Boston Medical Center (BMC) patient population, and will evaluate existing clinical and AI breast cancer risk models on it.
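As a rough illustration of what auditing a retrospective cohort for representativeness could look like (a minimal sketch, not the FRP's actual pipeline), the following Python snippet compares each demographic group's share in the cohort against its share in the source population. The column name and the target proportions are assumptions made for the example.

```python
# Minimal sketch: check whether a retrospective mammography cohort mirrors the
# demographic mix of the source population. Column names and target
# proportions are illustrative assumptions, not BMC figures.
import pandas as pd

def representativeness_report(cohort: pd.DataFrame,
                              population_proportions: dict[str, float],
                              group_col: str = "race_ethnicity") -> pd.DataFrame:
    """Compare each group's share in the cohort with its share in the population."""
    cohort_share = cohort[group_col].value_counts(normalize=True)
    rows = []
    for group, target in population_proportions.items():
        observed = float(cohort_share.get(group, 0.0))
        rows.append({"group": group,
                     "cohort_share": round(observed, 3),
                     "population_share": target,
                     "difference": round(observed - target, 3)})
    return pd.DataFrame(rows)

# Example usage with made-up proportions:
# report = representativeness_report(
#     cohort_df,
#     {"Black": 0.50, "White": 0.30, "Hispanic": 0.15, "Other": 0.05})
# print(report)
```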
2. Leverage large language models (LLMs) to enable clinicians to audit for bias, interact with, and intervene in the deep learning (DL) model for breast cancer risk prediction
The aim is to create a framework that translates a deep learning model’s internal mechanisms into understandable language, enabling clinicians to improve performance and address biases without writing code. The researchers will do this with the help of an LLM trained on medical text and radiology reports. Their “interpreter agent” identifies subpopulations of patients for whom the risk prediction underperforms and provides the radiologist with a plausible hypothesis for further inspection. The clinician’s feedback can then be incorporated back into the model.
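To make the idea concrete, here is a minimal Python sketch of the kind of subgroup audit an interpreter agent might run, assuming a table of per-patient risk scores and outcomes. The column names, the prompt wording, and the downstream LLM call are illustrative assumptions, not the FRP's implementation.

```python
# Minimal sketch of a subgroup audit: find the patient subgroup where the risk
# model's discrimination is weakest, then draft a natural-language prompt asking
# an LLM to propose hypotheses for a radiologist to review.
# Column names are assumptions; the LLM call itself is not shown.
import pandas as pd
from sklearn.metrics import roc_auc_score

def worst_subgroup_auc(df: pd.DataFrame, group_col: str,
                       label_col: str = "cancer_within_5y",
                       score_col: str = "predicted_risk"):
    """Return (group, AUC) for the subgroup with the lowest AUC."""
    aucs = {}
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() == 2:      # AUC needs both classes present
            aucs[group] = roc_auc_score(sub[label_col], sub[score_col])
    worst = min(aucs, key=aucs.get)
    return worst, aucs[worst]

def hypothesis_prompt(group: str, auc: float, group_col: str) -> str:
    """Draft a clinician-facing prompt summarizing the performance gap."""
    return (f"The mammography risk model's AUC drops to {auc:.2f} for patients with "
            f"{group_col} = '{group}'. Suggest plausible imaging or clinical factors "
            f"that could explain this gap, for review by a radiologist.")

# group, auc = worst_subgroup_auc(predictions_df, group_col="race_ethnicity")
# print(hypothesis_prompt(group, auc, "race_ethnicity"))
# The prompt would then be sent to a medically trained LLM (call not shown).
```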
3. Assess the calibration and discrimination of the improved AI model and compare its performance to that of established AI and clinical risk prediction models
The goal of this thrust is to compare the performance (i.e., discrimination and calibration) of the improved AI risk prediction model with that of established AI and clinical risk models, with a specific focus on racial minority groups that have been underrepresented in this area of research to date.
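For illustration, a minimal sketch of such a comparison (under the assumption that per-patient predictions from each model and observed outcomes are available) might compute discrimination as the subgroup AUC and calibration-in-the-large as the ratio of mean predicted to observed risk. The column names below are assumptions, not the study's analysis plan.

```python
# Minimal sketch: compare two risk models on discrimination (AUC) and
# calibration-in-the-large (expected/observed event ratio) within each
# demographic subgroup. Column names are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def compare_models(df: pd.DataFrame, model_cols: list[str],
                   label_col: str = "cancer_within_5y",
                   group_col: str = "race_ethnicity") -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        for model in model_cols:
            rows.append({
                "group": group,
                "model": model,
                # Discrimination: ability to rank cases above non-cases.
                "auc": roc_auc_score(sub[label_col], sub[model]),
                # Calibration-in-the-large: mean predicted vs. observed risk
                # (values near 1.0 indicate well-calibrated average risk).
                "expected_over_observed": sub[model].mean() / sub[label_col].mean(),
            })
    return pd.DataFrame(rows)

# summary = compare_models(test_df, model_cols=["improved_ai_risk", "clinical_risk"])
# print(summary.round(3))
```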
How to get involved?
For program-specific inquiries, please contact the FRP leaders, Clare Poynton or Kayhan Batmanghelich.
Faculty interested in submitting a Focused Research Program proposal are strongly encouraged to discuss their ideas with Yannis Paschalidis, director of the Hariri Institute for Computing.
To learn more about the Hariri Institute’s Focused Research Programs, visit here.