Quantitative analysis of manual annotation of clinical text samples

Authors: José Antonio Miñarro-Giménez, Daniel Karlsson, Stefan Schulz, Ronald Cornet, Heike Dewenter, Kirstine Rosenbeck Gøeg, Sylvia Thun, Marie-Christine Jaulent
Contributors: APH - Methodology, APH - Quality of Care, Medical Informatics, APH - Digital Health, APH - Global Health
Language: English
Year of publication: 2019
Subject:
Source: International Journal of Medical Informatics, 123, 37-48. Elsevier Ireland Ltd
Miñarro-Giménez, J. A., Cornet, R., Jaulent, M. C., Dewenter, H., Thun, S., Gøeg, K. R., Karlsson, D. & Schulz, S. 2019, 'Quantitative analysis of manual annotation of clinical text samples', International Journal of Medical Informatics, vol. 123, pp. 37-48. https://doi.org/10.1016/j.ijmedinf.2018.12.011
ISSN: 1386-5056
DOI: 10.1016/j.ijmedinf.2018.12.011
Description:
BACKGROUND: Semantic interoperability of eHealth services within and across countries has been the main topic of several research projects. It is a key consideration for the European Commission in overcoming the complexity of making different health information systems work together. This paper describes a study within the EU-funded project ASSESS CT, which focuses on assessing the potential of SNOMED CT as a core reference terminology for semantic interoperability at the European level.
OBJECTIVE: This paper presents a quantitative analysis of the results obtained in ASSESS CT to determine the fitness of SNOMED CT for semantic interoperability.
METHODS: The quantitative analysis consists of concept coverage, term coverage, and inter-annotator agreement (IAA) analysis of the annotation experiments covering six European languages (English, Swedish, French, Dutch, German and Finnish) and three scenarios: (i) ADOPT, where only SNOMED CT was used by the annotators; (ii) ALTERNATIVE, where a fixed set of terminologies from UMLS, excluding SNOMED CT, was used; and (iii) ABSTAIN, where any terminologies available in the current national infrastructure of the annotators' country were used. For each language and each scenario, we configured the different terminology settings of the annotation experiments.
RESULTS: There was a positive correlation between the number of concepts in each terminology setting and its concept and term coverage values. Inter-annotator agreement was low, irrespective of the terminology setting.
CONCLUSIONS: No significant differences were found between the analyses for the three scenarios, but availability of SNOMED CT for the assessed language is associated with increased concept coverage. Terminology setting size and concept and term coverage correlate positively up to a limit beyond which additional concepts do not significantly affect coverage. The results did not confirm the hypothesis of an inverse correlation between concept coverage and IAA due to fewer available choices. The overall low IAA poses a challenge for interoperability and indicates the need for further research to assess whether consistent terminology implementation is possible across Europe, e.g., improving term coverage by adding localized versions of the selected terminologies, analysing causes of low inter-annotator agreement, and improving tooling and guidance for annotators. The much lower term coverage of the Swedish version of SNOMED CT compared with the English version, together with the similarly high concept coverage obtained with English and Swedish SNOMED CT, reflects SNOMED CT's relevance as a hub that connects user interface terminologies and serves a variety of user needs.
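The metrics named in the methods (concept coverage, term coverage, pairwise inter-annotator agreement) can be illustrated with a minimal sketch. The code below is not taken from the ASSESS CT study; the data layout, function names, and concept identifiers are hypothetical, and the agreement measure shown is plain observed agreement rather than the study's actual statistic.

```python
# Minimal, assumption-laden sketch of coverage and agreement calculations.
# Annotations are assumed to be (text_chunk, concept_id) pairs, with
# concept_id set to None when no concept could be assigned.
from itertools import combinations


def concept_coverage(annotations):
    """Fraction of chunks that were mapped to some concept."""
    if not annotations:
        return 0.0
    mapped = sum(1 for _, concept_id in annotations if concept_id is not None)
    return mapped / len(annotations)


def term_coverage(annotations, terminology_terms):
    """Fraction of chunks whose surface form matches a terminology term."""
    if not annotations:
        return 0.0
    matched = sum(1 for chunk, _ in annotations if chunk.lower() in terminology_terms)
    return matched / len(annotations)


def pairwise_agreement(annotator_maps):
    """Mean observed agreement over all annotator pairs on shared chunks."""
    scores = []
    for a, b in combinations(annotator_maps, 2):
        shared = set(a) & set(b)
        if shared:
            scores.append(sum(a[c] == b[c] for c in shared) / len(shared))
    return sum(scores) / len(scores) if scores else 0.0


# Toy example with placeholder concept identifiers (not real SNOMED CT codes).
annotations = [("chest pain", "C1"), ("shortness of breath", "C2"), ("malaise", None)]
terms = {"chest pain", "shortness of breath"}
annotators = [
    {"chest pain": "C1", "shortness of breath": "C2"},
    {"chest pain": "C1", "shortness of breath": "C3"},
]
print(concept_coverage(annotations))        # 0.67
print(term_coverage(annotations, terms))    # 0.67
print(pairwise_agreement(annotators))       # 0.5
```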
Database: OpenAIRE