Racial, ethnic, and sex bias in large language model opioid recommendations for pain management.
Authors: Young CC, Enichen E, Rao A, Succi MD.
Affiliations: All authors: Harvard Medical School, Boston, MA, United States; Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Mass General Brigham, Boston, MA, United States. Succi MD additionally: Department of Radiology, Massachusetts General Hospital, Boston, MA, United States; Enterprise Radiology, Mass General Brigham, Boston, MA, United States.
Language: English
Source: Pain. 2024 Sep 06. Date of Electronic Publication: 2024 Sep 06.
DOI: 10.1097/j.pain.0000000000003388
Abstract: Understanding how large language model (LLM) recommendations vary with patient race/ethnicity provides insight into how LLMs may counter or compound bias in opioid prescription. Forty real-world patient cases with chief complaints of abdominal pain, back pain, headache, or musculoskeletal pain were sourced from the MIMIC-IV Note dataset and amended to include all combinations of race/ethnicity and sex. Large language models were instructed to provide a subjective pain rating and a comprehensive pain management recommendation. Univariate analyses were performed to evaluate the association between racial/ethnic group or sex and the specified outcome measures (subjective pain rating, opioid name, order, and dosage recommendations) suggested by 2 LLMs (GPT-4 and Gemini). Four hundred eighty patient cases were provided to each LLM, and responses included pharmacologic and nonpharmacologic interventions. Tramadol was the most frequently recommended weak opioid (55.4% of cases), and oxycodone was the most frequently recommended strong opioid (33.2% of cases). Relative to GPT-4, Gemini was more likely to rate a patient's pain as "severe" (OR: 0.57, 95% CI [0.54, 0.60]; P < 0.001), more likely to recommend strong opioids (OR: 2.05, 95% CI [1.59, 2.66]; P < 0.001), and more likely to recommend opioids later (OR: 1.41, 95% CI [1.22, 1.62]; P < 0.001). Race/ethnicity and sex did not influence LLM recommendations. This study suggests that LLMs do not preferentially recommend opioid treatment for one group over another. Given that prior research shows race-based disparities in pain perception and treatment by healthcare providers, LLMs may offer physicians a helpful tool to guide pain management and ensure equitable treatment across patient groups. (Copyright © 2024 International Association for the Study of Pain.)
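Note: The abstract reports odds ratios with 95% confidence intervals from univariate analyses but does not include the authors' analysis code. The following is a minimal sketch of how such an odds ratio could be estimated via logistic regression; the data, column names (`is_gemini`, `strong_opioid`), and effect sizes here are hypothetical placeholders, not the study's data or methods.

```python
# Hypothetical sketch: estimating an odds ratio (OR) with a 95% CI for a
# binary outcome (e.g., "strong opioid recommended") against a binary
# predictor (e.g., model = Gemini vs. GPT-4) using logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Placeholder long-format results: one row per (case, model) response.
# 480 responses per model, mirroring the study's case count.
df = pd.DataFrame({
    "is_gemini": np.repeat([0, 1], 480),                 # 0 = GPT-4, 1 = Gemini
    "strong_opioid": rng.binomial(1, 0.3, 960),          # placeholder outcome
})

# Fit logit(P(strong_opioid)) = b0 + b1 * is_gemini.
fit = smf.logit("strong_opioid ~ is_gemini", data=df).fit(disp=False)

# Exponentiating the coefficient and its CI bounds yields the OR and 95% CI.
or_est = np.exp(fit.params["is_gemini"])
ci_low, ci_high = np.exp(fit.conf_int().loc["is_gemini"])
print(f"OR {or_est:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], "
      f"P = {fit.pvalues['is_gemini']:.3g}")
```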
Database: MEDLINE