ICGA-GPT: report generation and question answering for indocyanine green angiography images.

Author: Chen X, Zhang W, Zhao Z (School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China); Xu P, Zheng Y (State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University; Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science; Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China); Shi D (School of Optometry, The Hong Kong Polytechnic University; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; danli.shi@polyu.edu.hk); He M (School of Optometry, The Hong Kong Polytechnic University; Research Centre for SHARP Vision (RCSV), The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong, China)
Language: English
Source: The British journal of ophthalmology [Br J Ophthalmol] 2024 Sep 20; Vol. 108 (10), pp. 1450-1456. Date of Electronic Publication: 2024 Sep 20.
DOI: 10.1136/bjo-2023-324446
Abstract: Background: Indocyanine green angiography (ICGA) is vital for diagnosing chorioretinal diseases, but its interpretation and the accompanying patient communication require extensive expertise and time. We aimed to develop a bilingual ICGA report generation and question-answering (QA) system.
Methods: Our dataset comprised 213 129 ICGA images from 2919 participants. The system consisted of two stages: image-text alignment for report generation using a multimodal transformer architecture, followed by large language model (LLM)-based QA over the generated ICGA text reports and human-input questions. Performance was assessed using both quantitative metrics (including Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation-Longest Common Subsequence (ROUGE-L), Semantic Propositional Image Caption Evaluation (SPICE), accuracy, sensitivity, specificity, precision and F1 score) and subjective evaluation by three experienced ophthalmologists using 5-point scales (5 = highest quality).
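For readers unfamiliar with these metrics, the evaluation described above can be illustrated with standard open-source tooling. The paper does not release its evaluation code, so the sketch below, which uses nltk for BLEU and scikit-learn for the disease-level classification metrics, is an assumption-laden illustration rather than the authors' pipeline:

```python
# Illustrative sketch only: nltk and scikit-learn are assumed tooling,
# not the authors' actual evaluation code.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

def bleu_1_to_4(references, hypotheses):
    """BLEU-1..4 over tokenised reports.

    references: one list of reference token lists per generated report
    hypotheses: one generated token list per report
    """
    smooth = SmoothingFunction().method1  # guards against zero n-gram overlap
    return [
        corpus_bleu(references, hypotheses,
                    weights=tuple(1.0 / n for _ in range(n)),
                    smoothing_function=smooth)
        for n in (1, 2, 3, 4)
    ]

def disease_metrics(y_true, y_pred):
    """Binary metrics for one disease-related condition (labels in {0, 1})."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    precision, sensitivity, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
        "precision": precision,
        "f1": f1,
    }
```

In a setup like this, per-condition results would be macro-averaged across the 39 conditions to yield averages of the kind reported in the Results.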
Results: We produced 8757 ICGA reports covering 39 disease-related conditions after bilingual translation (66.7% English, 33.3% Chinese). For report generation, the ICGA-GPT model achieved BLEU-1 to BLEU-4 scores of 0.48, 0.44, 0.40 and 0.37, a CIDEr of 0.82, a ROUGE-L of 0.41 and a SPICE of 0.18. For disease-based metrics, the average specificity, accuracy, precision, sensitivity and F1 score were 0.98, 0.94, 0.70, 0.68 and 0.64, respectively. In assessing the quality of 50 images (100 reports), three ophthalmologists achieved substantial agreement (kappa=0.723 for completeness, kappa=0.738 for accuracy), with scores ranging from 3.20 to 3.55. In an interactive QA scenario involving 100 generated answers, the ophthalmologists gave scores of 4.24, 4.22 and 4.10, with good consistency (kappa=0.779).
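For context, chance-corrected agreement among three raters on an ordinal scale is commonly summarised with Fleiss' kappa. The abstract does not name the kappa variant or the software used, so the following sketch (using statsmodels, an assumption) is purely illustrative:

```python
# Sketch: Fleiss' kappa for three raters' 5-point quality scores.
# statsmodels is an assumed tool; the paper does not name its statistics package.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per rated report, one column per ophthalmologist, values 1-5.
ratings = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 4],
])  # toy data, not the study's actual ratings

table, _ = aggregate_raters(ratings)  # rows -> per-report category counts
print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")
```

By the widely used Landis-Koch convention, kappa values between 0.61 and 0.80 indicate substantial agreement, consistent with the values reported above.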
Conclusion: This study introduces ICGA-GPT, the first model for automated ICGA report generation and interactive QA, underscoring the potential of LLMs in assisting with automated ICGA image interpretation.
Competing Interests: None declared.
(© Author(s) (or their employer(s)) 2024. No commercial re-use. See rights and permissions. Published by BMJ.)
Database: MEDLINE