Developing Pretrained Language Models for Turkish Biomedical Domain
Author: | Hazal Turkmen, Oguz Dikenelli, Cenk Eraslan, Mehmet Cem Calli, Suha Sureyya Ozbek |
---|---|
Language: | English |
Year of publication: | 2022 |
Subject: | |
Description: | 10th IEEE International Conference on Healthcare Informatics (IEEE ICHI), June 11-14, 2022, Rochester, MN. Pretrained language models enhanced with in-domain corpora show impressive results in biomedical and clinical NLP tasks in English; however, there is minimal work in low-resource languages. This work introduces the BioBERTurk family, three pretrained Turkish models for biomedicine. To evaluate the models, we also introduce a labeled dataset for classifying radiology reports of CT exams. Our first model was initialized from BERTurk and further pretrained on a biomedical corpus. The second model likewise continues pretraining the general BERT model, but on a corpus of Ph.D. theses on radiology, to test the effect of task-related text. The final model combines the radiology and biomedical corpora with the BERTurk corpus and pretrains a BERT model from scratch (an illustrative continued-pretraining sketch follows this record). The F-scores of our models on radiology report classification are 92.99, 92.75, and 89.49, respectively. As far as we know, this is the first work to evaluate the effect of a small in-domain corpus in pretraining from scratch. We would like to acknowledge the support we received from the TensorFlow Research Cloud (TRC) team in providing access to TPUv3 units. |
Database: | OpenAIRE |
External link: |
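
The abstract describes continued pretraining of BERTurk on in-domain text before fine-tuning on radiology report classification. The sketch below is a minimal illustration of that continued-pretraining step using the Hugging Face transformers library; the checkpoint name `dbmdz/bert-base-turkish-cased` (the public BERTurk release), the corpus file `biomedical_corpus.txt`, and all hyperparameters are assumptions for illustration, not the authors' actual setup (the paper's pretraining ran on TPUv3 units).

```python
# Minimal sketch of continued masked-language-model (MLM) pretraining
# from a BERTurk checkpoint on an in-domain corpus. Illustrative only:
# file names and hyperparameters are placeholders, not the paper's values.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Public BERTurk checkpoint; assumed as the initialization point.
checkpoint = "dbmdz/bert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# "biomedical_corpus.txt" stands in for the Turkish biomedical text
# (or, for the second model, the radiology Ph.D. thesis corpus).
dataset = load_dataset("text", data_files={"train": "biomedical_corpus.txt"})

def tokenize(batch):
    # Truncate to BERT's standard 512-token limit.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT-style MLM objective: randomly mask 15% of input tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bioberturk-continued",
    per_device_train_batch_size=32,  # illustrative; actual values not given
    num_train_epochs=1,
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The resulting checkpoint would then be fine-tuned with a sequence-classification head on the labeled CT radiology reports to obtain F-scores like those reported above.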