Validating the English Reading Subtest of the Advanced Subjects Test

Author: Chang, Yi-Ju, 張邑如
Year of publication: 2017
Document type: Thesis (學位論文)
Description: 105
The present study took a close look at the construct validity of the vocabulary, cloze, and reading comprehension items in the English Subtest of the Advanced Subjects Test (AST). In particular, the study aimed to answer the following research questions:

(1) Does a panel of English subject experts' classification, based on Purpura's (2004) model of grammatical knowledge, fit the test-takers' responses to the vocabulary items in the English Subtest of the 2015 and 2016 AST?
(2) Does a panel of English subject experts' classification, based on Purpura's (2004) model of grammatical knowledge, fit the test-takers' responses to the cloze items in the English Subtest of the 2015 and 2016 AST?
(3) Does a panel of English subject experts' classification, based on the revised Bloom's Taxonomy (2001), fit the test-takers' responses to the reading comprehension items in the English Subtest of the 2015 and 2016 AST?

To answer these questions, the College Entrance Examination Center provided two sets of data, one per year. Each set contained the responses of 5,000 randomly selected test-takers to the ten vocabulary items, 25 multiple-choice cloze items, and 16 reading comprehension items of that year's AST English Subtest. For cross-sample validation, each set was further randomly split into two subsamples, each containing 2,250 test-takers' responses. For each of the two years, confirmatory factor analyses (CFAs) were performed on the two randomly split subsamples in Mplus, a program designed to handle dichotomously scored or categorical responses. Before the CFAs were run on each set of data, a panel of five experienced English teachers at tertiary institutes examined the vocabulary and cloze items and classified them into the five relevant language components proposed in Purpura's (2004) model of grammatical knowledge.
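The random-split step for cross-sample validation can be sketched in Python. The data below are hypothetical placeholders (100 records of 51 dichotomous item scores each, matching the 10 + 25 + 16 item structure), not the study's actual records, and the seed is arbitrary:

```python
import random

def split_for_cross_validation(responses, seed=42):
    """Randomly split response records into two equal-sized subsamples,
    e.g. for cross-sample validation of a CFA model."""
    shuffled = list(responses)             # copy; leave the original order intact
    random.Random(seed).shuffle(shuffled)  # seeded shuffle for reproducibility
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical illustration: each record is a vector of dichotomous item scores.
records = [[random.randint(0, 1) for _ in range(51)] for _ in range(100)]
subsample_a, subsample_b = split_for_cross_validation(records)
print(len(subsample_a), len(subsample_b))  # two halves of equal size
```

Each subsample is then analyzed separately, so that a model retained in one half can be checked against the other.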
Likewise, the five teachers were given a simplified version of the revised Bloom's Taxonomy (2001) to classify the reading comprehension items. Specifically, the study examined whether the test-takers' responses to the vocabulary, cloze, and reading comprehension items in the 2015 and 2016 AST English Subtests fit the five English teachers' classifications. Based on the teachers' examination and classifications, only two of the five components were identified in both the 2015 and 2016 vocabulary sections: Lexical Meaning and Cohesive Form/Meaning. However, the CFAs showed that the two-component models failed to fit the test-takers' responses; instead, the one-component model provided a better fit to the data, judging from the global model fit indices, the individual parameter estimates, and the principle of parsimony. Likewise, for the cloze section of the 2015 AST, three of the five components were identified: Lexical Meaning, Morpho-syntactic Form, and Cohesive Form/Meaning; for the 2016 AST, a fourth component, Morpho-syntactic Meaning, was also included in the teachers' classifications. However, the CFAs revealed that for both years these teacher-specified models failed to fit the responses to the cloze items. As with the vocabulary results, the one-component model provided a better fit to the test-takers' responses to the cloze sections. With respect to the reading comprehension items of the 2015 AST, only three of the 19 reading skills were identified: Interpreting, Inferring, and Summarizing; for the 2016 AST, Recognizing was also identified. Again, the CFAs showed that one-factor models best captured the data for the 2015 and 2016 AST reading comprehension sections.
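For concreteness, the competing specifications for a vocabulary section can be sketched in Mplus-style measurement syntax. The item labels v1–v10 and the assignment of items to components below are hypothetical illustrations, not the study's actual classification:

```text
! Two-component model following the teachers' classification
! (hypothetical assignment of the ten vocabulary items):
LexMean BY v1 v2 v3 v4 v5 v6;   ! Lexical Meaning
CohFM   BY v7 v8 v9 v10;        ! Cohesive Form/Meaning
LexMean WITH CohFM;             ! correlated components

! Competing one-component model:
General BY v1-v10;              ! one general vocabulary factor
```

Fit indices from the two runs (and the parsimony of the one-factor model) are then compared, as in the results reported above.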
Finally, the CFAs performed on the entire English Subtest suggested that, for both years, the vocabulary, cloze, and reading comprehension items together appeared to measure one general English language ability rather than a range of divisible language traits. Based on these findings, implications and suggestions were provided for future research as well as for language test constructors.
Database: Networked Digital Library of Theses & Dissertations