An annotation protocol for evaluative stance in discourse

Author: Laura Hidalgo-Downing, Paula Pérez-Sobrino, Laura Filardo-Lamas, Carmen Maíz Arévalo, Begoña Núñez Perucha, Alfonso Sánchez-Moya, Julia Williams Camus
Year of publication: 2020
Description: This paper is part of the work carried out in the funded research project Stance and subjectivity in discourse: towards an integrated model of the analysis of epistemicity, effectivity, evaluation and intersubjectivity from a critical discourse perspective (PGC2018-095798-B-I00). In this paper we propose a protocol for the annotation of evaluative stance across discourse types. We have used the protocol to annotate four 100,000-word corpora in English: opinion articles (The Guardian and The Times), science popularization in the press (The Guardian and The Times), political discourse (speeches delivered by British politicians) and fora on social issues (Reddit). The development of the protocol has gone through two main stages. The first stage consisted of a preliminary theoretical definition of the model of evaluative stance and its main categories, drawing on research on stance, evaluation and critical discourse analysis, together with methods for the identification of metaphoricity (Du Bois 2007, Martin and White 2005, Pragglejaz Group 2007, van Leeuwen 2008, Wodak and Meyer 2015, among others). The preliminary model was tested on samples of the corpora, after which the protocol underwent an initial refinement and revision. The second stage consisted of establishing a good degree of inter-rater reliability for the full annotation of the corpora. The inter-rater reliability procedure was carried out by three researchers (Hidalgo-Downing, Pérez-Sobrino, and Williams-Camus), who individually annotated samples from the corpora in four successive rounds. A joint discussion followed each round to resolve conflicting annotations and to refine the protocol for the ensuing round. The goal of this series of annotations was to determine whether inter-rater reliability in the identification of evaluative stance varied across researchers, rounds and genres.
The results of the inter-rater reliability tests show a consistent increase in the kappa scores for the value category (positive vs negative evaluation) and, to a lesser extent, for metaphoricity (although, in both cases, kappa scores showed moderate to high agreement). These rounds were complemented with two rounds of annotation of sample texts by the full team (all seven researchers participating in this project) to ensure a shared understanding and uniform application of the protocol's criteria for the annotation of the full corpora.
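To illustrate the kind of agreement statistic reported above, the following sketch computes Cohen's kappa for two annotators labelling the value category. The function, the labels ("pos"/"neg") and the sample annotations are all hypothetical, not taken from the project's data or tooling; the formula is the standard one, kappa = (observed agreement - chance agreement) / (1 - chance agreement).

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(ann_a) == len(ann_b) and ann_a, "need paired annotations"
    n = len(ann_a)
    # Observed agreement: proportion of items with identical labels.
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement: expected overlap given each rater's label frequencies.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    expected = sum(freq_a[lab] * freq_b[lab]
                   for lab in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical value-category annotations (positive vs negative evaluation).
rater1 = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "pos"]
rater2 = ["pos", "neg", "neg", "pos", "neg", "pos", "pos", "pos"]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.467
```

On this toy sample the two raters agree on 6 of 8 items (0.75 observed agreement), but kappa discounts the agreement expected by chance, yielding a lower score; conventional interpretive bands would call this moderate agreement.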
Database: OpenAIRE