Abstract: |
This experimental study evaluates the quality of post-edited texts, originally translated with computer-assisted translation (CAT) tools, against traditional human translation. Specifically, it compares post-editing (PE) with translation from scratch (TFS) in the context of Arabic-English translation, using the Phrase CAT tool. The main hypothesis posits that PE yields a final product of quality comparable to that of TFS. Participants' scores and error frequencies were evaluated with the American Translators Association framework for standardized error marking, comparing terminology, word choice, mistranslation, addition/omission, spelling, punctuation, case, inconsistency, style, and grammar across the two approaches. Data from nine professional Saudi translators showed that PE generally outperformed TFS in terminology, spelling, punctuation, and case, whereas TFS was stronger in consistency, style, grammar, and literal translation. Statistical analysis confirmed that overall error rates were similar: the difference in mean error counts between TFS and PE was not statistically significant, so the observed disparity likely resulted from chance rather than a substantive difference between the two groups. These results indicate that PE yields quality comparable to that of TFS, supporting the stated hypothesis. The implications highlight the need for CAT tool training and PE skills among translators to meet the demands of evolving translation technologies. Furthermore, this study underscores the importance of integrating PE training into translation curricula and organizing workshops to improve CAT tool usage.