Example-Based Machine Translation from Text to a Hierarchical Representation of Sign Language
Author: | Elise Bertin-Lemée, Annelies Braffort, Camille Challant, Claire Danet, Michael Filhol |
---|---|
Contributors: | Information, Langue Ecrite et Signée (ILES); Laboratoire Interdisciplinaire des Sciences du Numérique (LISN); Sciences et Technologies des Langues (STL); Institut National de Recherche en Informatique et en Automatique (Inria); CentraleSupélec; Université Paris-Saclay; Centre National de la Recherche Scientifique (CNRS) |
Language: | English |
Year of publication: | 2022 |
Subject: | |
Source: | Proceedings of the 24th Annual Conference of the European Association for Machine Translation (EAMT 2023), Jun 2023, Tampere, Finland, pp. 21-30 |
Description: | This article presents an original method for Text-to-Sign Translation. It compensates for data scarcity by using a domain-specific parallel corpus of alignments between text and hierarchical formal descriptions of Sign Language videos in AZee. Based on the detection of similarities in the source text, the proposed algorithm recursively exploits matches and substitutions of aligned segments to build multiple candidate translations for a novel statement, in a generative way. This helps preserve Sign Language structures for as long as possible before falling back on literal translations. The resulting translations take the form of AZee expressions, designed to be used as input to avatar synthesis systems. We present a test set tailored to showcase the method's potential for expressiveness and generation of idiomatic target language, as well as its observed limitations. This work finally opens prospects on how to evaluate translation and linguistic aspects, such as accuracy and grammatical fluency. |
Database: | OpenAIRE |
External link: |
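The description above outlines an algorithm that translates by recursively matching and substituting aligned segments from a parallel corpus. A minimal sketch of that matching-and-substitution idea, using a toy corpus of invented sentence/expression pairs (not the authors' actual AZee corpus or algorithm), might look like this:

```python
# Toy parallel "corpus": source text aligned with a formal target expression.
# All entries are hypothetical illustrations, not real AZee data.
CORPUS = {
    "the train leaves at noon": "depart(train, noon)",
    "the bus leaves at noon": "depart(bus, noon)",
    "noon": "noon",
    "midnight": "midnight",
}

def translate(text):
    """Translate `text` by exact corpus match; otherwise find a corpus
    example differing in exactly one segment, recursively translate that
    segment, and substitute it into the example's target expression."""
    if text in CORPUS:
        return CORPUS[text]
    words = text.split()
    for src, tgt in CORPUS.items():
        ex = src.split()
        if len(ex) != len(words):
            continue
        diffs = [i for i in range(len(words)) if words[i] != ex[i]]
        if len(diffs) == 1:
            i = diffs[0]
            sub = translate(words[i])  # recurse on the differing segment
            if sub is not None and ex[i] in tgt:
                return tgt.replace(ex[i], sub)
    return None  # a fallback (e.g. literal translation) would go here

print(translate("the train leaves at midnight"))  # → depart(train, midnight)
```

The sketch keeps the target-side structure of a known example intact and only replaces the segment that changed, which mirrors the abstract's point about preserving target-language structures before resorting to literal translation.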