Abstract
Background: Large language models (LLMs), of which ChatGPT is the best known, are now available to patients seeking medical advice in various languages. However, the accuracy of the information used to train these models remains unknown.

Methods: Ten commonly asked questions regarding labor epidurals were translated from English to Spanish, and all 20 questions were entered into ChatGPT version 3.5. The answers were transcribed, and a survey was then sent to 10 bilingual, fellowship-trained obstetric anesthesiologists to assess the accuracy of the answers on a 5-point Likert scale.

Results: Overall, accuracy scores for the ChatGPT-generated answers in Spanish were lower than for the English answers, with a median score of 34 (IQR 33–36.5) versus 40.5 (IQR 39–44.3), respectively (P = 0.02). Answers to two questions scored significantly lower in Spanish: “Do epidurals prolong labor?” (2 (IQR 2–2.5) versus 4 (IQR 4–4.5); P = 0.03) and “Do epidurals increase the risk of needing cesarean delivery?” (3 (IQR 2–4) versus 4 (IQR 4–5); P = 0.03). There was strong agreement that answers to the question “Do epidurals cause autism?” were accurate in both Spanish and English.

Conclusion: ChatGPT-generated answers in Spanish to ten questions about labor epidurals scored lower for accuracy than answers generated in English, particularly regarding the effect of labor epidurals on labor course and mode of delivery. This disparity in ChatGPT-generated information may extend already-known health inequities among non-English-speaking patients and perpetuate misinformation.
| Field | Value |
|---|---|
| Original language | English (US) |
| Article number | 104290 |
| Journal | International Journal of Obstetric Anesthesia |
| Volume | 61 |
| DOIs | |
| State | Published - Feb 2025 |
Keywords
- ChatGPT
- English
- Healthcare disparities
- Labor epidurals
- Language
- Misinformation
- Spanish
ASJC Scopus subject areas
- Obstetrics and Gynecology
- Anesthesiology and Pain Medicine