Abstract: |
Performance evaluation is one of the most critical components in ensuring the comprehensive development of e-learning in medical education (e-LMED). Although several studies evaluate performance in e-LMED, none has yet mapped the growing body of scientific knowledge and its evolutionary patterns, which would establish a solid foundation for investigating and quantifying the efficacy of performance evaluation in e-LMED. This study therefore aims to quantify scientific productivity, identify key terms, and analyze the extent of research collaboration in this domain. We searched the Scopus database using search terms informed by the PICOS model and retrieved 315 studies published between 1991 and 2022. Performance analysis, science mapping, network analysis, and visualization were carried out with the R bibliometrix package, its Biblioshiny interface, and the VOSviewer software. The findings reveal that authors are actively publishing and collaborating in this domain, which saw a sharp increase in publications in 2021. The top-performing countries, institutions, and journals are predominantly in the first world, which also produces most of the leading publications and collaborations. In addition, studies evaluating performance in e-LMED assessed constructs such as efficacy, knowledge gain, student perception, confidence level, acceptability, feasibility, usability, and willingness to recommend e-learning, mainly using pre-test/post-test experimental designs. This study can help researchers understand the current landscape of performance evaluation in e-LMED and serve as a foundation for investigating and quantifying its efficacy.
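As a rough illustration of the workflow named above (performance analysis, science mapping, and network visualization with the R bibliometrix package), the following is a minimal sketch, not the authors' actual script; the input file name scopus_export.csv and the parameter choices are hypothetical.

  # Minimal bibliometrix sketch: import Scopus records, run a performance
  # analysis, and build a keyword co-occurrence network.
  library(bibliometrix)

  # Import a Scopus CSV export into a bibliographic data frame
  M <- convert2df(file = "scopus_export.csv", dbsource = "scopus", format = "csv")

  # Performance analysis: productivity of authors, sources, and countries
  results <- biblioAnalysis(M, sep = ";")
  summary(results, k = 10)

  # Science mapping: keyword co-occurrence network and its visualization
  NetMatrix <- biblioNetwork(M, analysis = "co-occurrences",
                             network = "keywords", sep = ";")
  networkPlot(NetMatrix, n = 30, type = "fruchterman",
              Title = "Keyword co-occurrence network")

  # Interactive exploration via the Biblioshiny web interface
  # biblioshiny()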