Revisiting the inter‐rater reliability of drug treatment assessments according to the STOPP/START criteria

Authors: Naldy Parodi López, Björn Belfrage, Anders Koldestam, Johan Lönnbro, Staffan A. Svensson, Susanna M. Wallerstedt
Year of publication: 2022
Subject:
Source: British Journal of Clinical Pharmacology. 89:832-842
ISSN: 1365-2125, 0306-5251
Description: The aim of this study was to revisit the inter-rater reliability of drug treatment assessments according to the Screening Tool of Older Persons' Prescriptions (STOPP)/Screening Tool to Alert to Right Treatment (START) criteria. Potentially inappropriate medications (PIMs) and potential prescribing omissions (PPOs) were independently identified by two physicians in two cohorts of older people (I: 200 hip fracture patients, median age 85 years, STOPP/START version 1; II: 302 primary care patients, median age 74 years, STOPP/START version 2). Kappa statistics were used to evaluate inter-rater agreement. In cohort I, a total of 782 PIMs/PPOs, related to 68 (78%) of the 87 criteria, were identified by at least one assessor; 500 (64%) of these were identified discordantly, that is, by one assessor but not the other. For four STOPP criteria, all PIMs (n = 9) were identified concordantly. In cohort II, 955 PIMs/PPOs, related to 80 (70%) of the 114 criteria, were identified, 614 (64%) of which were identified discordantly. For three STOPP criteria, all PIMs (n = 3) were identified concordantly. No START criterion with ≥1 identified PPO had fully concordant assessments. The kappa value for PIM/PPO identification was 0.52 in both cohorts. In cohort II, the kappa value was 0.37 when the criteria regarding influenza and pneumococcal vaccines were excluded. Further analysis of discordantly identified PIMs/PPOs revealed methodological aspects of importance, including the data source used and the wording of the criteria. When the STOPP/START criteria are applied in PIM/PPO research, reliability appears to be an issue that was not apparent in previous reliability studies.
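As a minimal sketch of the agreement metric reported above: the record states that kappa statistics were used, but it does not specify the kappa variant or the unit of analysis, so the example below assumes Cohen's kappa for two raters making binary PIM/PPO judgements, with entirely made-up toy data and illustrative variable names.

```python
# Illustrative only: Cohen's kappa for two raters (assumed variant; the
# study's exact kappa statistic and unit of analysis are not given here).
# 1 = PIM/PPO identified, 0 = not identified; the data are fabricated.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0]

# Cohen's kappa corrects observed agreement p_o for chance agreement p_e:
# kappa = (p_o - p_e) / (1 - p_e); sklearn computes this directly.
print(cohen_kappa_score(rater_a, rater_b))  # 0.5 for this toy data
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, so the reported values of 0.52 (both cohorts) and 0.37 (cohort II without the vaccine criteria) correspond to moderate and fair agreement, respectively, on common interpretation scales.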
Database: OpenAIRE