Standardizing Extracted Data Using Automated Application of Controlled Vocabularies.

Author: Foster, Caroline1, Wignall, Jessica1, Kovach, Samuel1, Choksi, Neepa2, Allen, Dave2, Trgovcich, Joanne1, Rochester, Johanna R.1, Ceger, Patricia2, Daniel, Amber2, Hamm, Jon2, Truax, Jim2, Blake, Bevin3, McIntyre, Barry3, Sutherland, Vicki3, Stout, Matthew D.3, Kleinstreuer, Nicole4
Source: Environmental Health Perspectives. Feb 2024, Vol. 132, Issue 2, p. 027006-1 to 027006-13. 13 p.
Abstract:
BACKGROUND: Extraction of toxicological end points from primary sources is a central component of systematic reviews and human health risk assessments. To ensure optimal use of these data, consistent language should be used for end point descriptions. However, primary source language describing treatment-related end points can vary greatly, resulting in substantial manual effort to standardize extractions before the data are fit for use.
OBJECTIVES: To minimize these labor efforts, we applied an augmented intelligence approach and developed automated tools to support standardization of extracted information via application of preexisting controlled vocabularies.
METHODS: We created and applied a harmonized controlled vocabulary crosswalk, consisting of Unified Medical Language System (UMLS) codes, German Federal Institute for Risk Assessment (BfR) DevTox harmonized terms, and Organisation for Economic Co-operation and Development (OECD) end point vocabularies, to roughly 34,000 extractions from prenatal developmental toxicology studies conducted by the National Toxicology Program (NTP) and 6,400 extractions from European Chemicals Agency (ECHA) prenatal developmental toxicology studies, all recorded in the original study report language.
RESULTS: We automatically applied standardized controlled vocabulary terms to 75% of the NTP extracted end points and 57% of the ECHA extracted end points. Of all the standardized extracted end points, about half (51%) required manual review for potential extraneous matches or inaccuracies. Extracted end points that were not mapped to standardized terms tended to be too general or required human logic to find a good match. We estimate that this augmented intelligence approach saved >350 hours of manual effort and yielded valuable resources, including a controlled vocabulary crosswalk, organized related-terms lists, code for implementing an automated mapping workflow, and a computationally accessible dataset.
DISCUSSION: Augmenting manual efforts with automation tools increased the efficiency of producing a findable, accessible, interoperable, and reusable (FAIR) dataset of regulatory guideline studies. This open-source approach can be readily applied to other legacy developmental toxicology datasets, and the code design is customizable for other study types. [ABSTRACT FROM AUTHOR]
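The mapping step described in METHODS amounts to normalizing extracted end point text and looking it up in a harmonized crosswalk, with unmatched terms routed to manual review. The following is a minimal Python sketch of that idea; the crosswalk entries, placeholder UMLS code, endpoint strings, and normalization rules are illustrative assumptions, not the authors' actual code or vocabulary data.

```python
# Minimal sketch of dictionary-based controlled-vocabulary mapping, in the
# spirit of the workflow described in the abstract. All terms, codes, and
# normalization rules below are illustrative assumptions.
import re

def normalize(term: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before lookup."""
    term = re.sub(r"[^\w\s]", " ", term.lower())
    return re.sub(r"\s+", " ", term).strip()

# Hypothetical crosswalk rows linking a source term to UMLS, DevTox, and OECD
# vocabularies. A real crosswalk would be loaded from the published resource.
CROSSWALK = {
    normalize("fetal body weight, decreased"): {
        "umls_code": "C0000000",  # placeholder, not a real UMLS CUI
        "devtox_term": "fetal weight decreased",
        "oecd_term": "fetal body weight",
    },
}

def map_endpoints(endpoints, crosswalk=CROSSWALK):
    """Split extracted end points into automatic matches and terms that need
    manual review, mirroring the mapped/unmapped split in RESULTS."""
    mapped, unmapped = [], []
    for ep in endpoints:
        hit = crosswalk.get(normalize(ep))
        if hit:
            mapped.append({"extracted": ep, **hit})
        else:
            unmapped.append(ep)
    return mapped, unmapped

mapped, unmapped = map_endpoints(
    ["Fetal body weight, decreased", "supernumerary ribs"]
)
print(f"{len(mapped)} auto-mapped; {len(unmapped)} flagged for manual review")
```

In this sketch, exact lookup after normalization stands in for the paper's richer matching (e.g., its organized related-terms lists); anything that fails the lookup falls through to the human-review queue rather than being force-matched.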
Database: GreenFILE