Assessment of commercial NLP engines for medication information extraction from dictated clinical notes.
Author: Jagannathan V (MedQuist Inc., 235 High Street, Suite 213, Morgantown, WV 26505, USA; juggy@medquist.com); Mullett CJ; Arbogast JG; Halbritter KA; Yellapragada D; Regulapati S; Bandaru P
Language: English
Source: International Journal of Medical Informatics [Int J Med Inform] 2009 Apr; Vol. 78 (4), pp. 284-91. Date of Electronic Publication: 2008 Oct 05.
DOI: 10.1016/j.ijmedinf.2008.08.006
Abstract:
Purpose: We assessed the current state of commercial natural language processing (NLP) engines for their ability to extract medication information from textual clinical documents.
Methods: Two thousand de-identified discharge summaries and family practice notes were submitted to four commercial NLP engines with the request to extract all medication information. The four sets of returned results were combined to create an automated comparison standard, which was validated against a manual, physician-derived gold standard built from a subset of 100 reports. Once validated, each vendor's results for medication name, strength, route, and frequency were compared against this automated standard, and precision, recall, and F measures were calculated.
Results: Compared with the manual, physician-derived gold standard, the automated standard accurately captured medication names (F measure = 93.2%) but performed less well for strength (85.3%) and route (80.3%), and relatively poorly for dosing frequency (48.3%). Moderate variability was seen in the strengths of the four vendors. In an analysis comparing the two document types, the vendors performed better on the structured discharge summaries than on the clinic notes.
Conclusion: Although automated extraction may serve as the foundation for a manual review process, it is not ready to automate medication lists without human intervention.
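For context, the precision, recall, and F measures cited in the abstract are standard set-comparison metrics. The following minimal Python sketch is not taken from the study; the function name, the exact-string matching rule, and the example medication lists are illustrative assumptions showing one way such scores could be computed for a single vendor's extracted medication names against a gold-standard list.

```python
# Illustrative sketch (not from the paper): precision, recall, and F1 for one
# vendor's extracted medication names against a gold-standard list.
# Matching here is exact string comparison after normalization; the study's
# actual matching rules are not specified in this record.

def precision_recall_f(extracted, gold):
    """Return (precision, recall, F1) for two collections of name strings."""
    extracted_set = {name.strip().lower() for name in extracted}
    gold_set = {name.strip().lower() for name in gold}

    true_positives = len(extracted_set & gold_set)
    precision = true_positives / len(extracted_set) if extracted_set else 0.0
    recall = true_positives / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical example: vendor output vs. physician-derived gold standard
vendor_output = ["Lisinopril", "metformin", "aspirin"]
gold_standard = ["lisinopril", "metformin", "atorvastatin"]
print(precision_recall_f(vendor_output, gold_standard))  # (0.667, 0.667, 0.667)
```

In the study's design, the pooled output of all four engines served as the automated comparison standard, so a calculation like the one above would be run once per vendor and per field (name, strength, route, frequency) against that pooled standard.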
Database: MEDLINE