Analyzing Two Automatic Latent Semantic Analysis (LSA) Assessment Methods (Inbuilt Rubric vs. Golden Summary) in Summaries Extracted from Expository Texts

Authors: José Ángel Martínez-Huertas, Olga Jastrzebska, Adrián Mencu, Jessica Moraleda, Ricardo Olmos, José Antonio León
Languages: English, Spanish
Year of publication: 2018
Source: Psicología Educativa: Revista de los Psicólogos de la Educación, Vol 24, Iss 2, p 85 (2018)
Document type: article
ISSN: 1135-755X, 2174-0526
DOI: 10.5093/psed2048a9
Abstract: The purpose of this study was to compare two automatic assessment methods using Latent Semantic Analysis (LSA): a novel LSA assessment method (Inbuilt Rubric) and a traditional LSA method (Golden Summary). Two conditions were analyzed for the Inbuilt Rubric method: the number of lexical descriptors needed to better accommodate an expert rubric (few vs. many) and a weighting function to penalize off-topic content included in the student summaries (weighted vs. non-weighted). One hundred and sixty-six students divided into two samples (81 undergraduates and 85 high school students) took part in this study. Students summarized two expository texts that differed in complexity (complex/easy) and length (1,300/500 words). Results showed that the Inbuilt Rubric method simulated human assessment better than the Golden Summary method in all cases. The similarity with human assessment was higher for Inbuilt Rubric (r = .78 and r = .79) than for Golden Summary (r = .67 and r = .47) in both texts. Moreover, the expert rubric was better accommodated into the Inbuilt Rubric method using few descriptors and the weighted function.
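To make the "Golden Summary" idea concrete: a minimal, hypothetical sketch (not the authors' implementation) of LSA-based summary scoring, in which documents are reduced to a latent semantic space via truncated SVD and a student summary is scored by its cosine similarity to an expert ("golden") summary. The term-document matrix and vectors below are invented toy data.

```python
import numpy as np

# Toy term-document matrix (terms x documents), hypothetical counts.
X = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 2, 1],
    [1, 0, 1, 2],
], dtype=float)

# Truncated SVD: keep k latent semantic dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Uk, sk = U[:, :k], s[:k]

def project(term_vector):
    """Fold a bag-of-words vector into the k-dimensional latent space."""
    return term_vector @ Uk / sk

def cosine(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

golden = np.array([2, 1, 0, 1, 1], dtype=float)   # expert "golden" summary
student = np.array([1, 1, 0, 0, 1], dtype=float)  # a student summary

# Higher score = closer to the expert summary in the latent space.
score = cosine(project(golden), project(student))
```

The Inbuilt Rubric method differs in that rubric descriptors, rather than a single reference summary, define the evaluation axes; the weighted variant additionally penalizes off-topic content.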
Database: Directory of Open Access Journals