Lex Rosetta
Authors: | Michał Araszkiewicz, Charlotte S. Alexander, Jaromír Šavelka, Aurore Clément Troussel, Hannes Westermann, David Restrepo Amariles, Karl Branting, Jakub Harašta, Matthias Grabmair, Tereza Novotná, Rajaa El Hamdani, Shiwanni Johnson, Elizabeth Chika Tippett, Sébastien Meeùs, Alexandra Ashley, Karim Benyekhlef, Mattia Falduti, Kevin D. Ashley, Jayla C. Grant |
Year of publication: | 2021 |
Subject: | FOS: Computer and information sciences; Computer Science - Computation and Language (cs.CL); Natural language processing; Artificial intelligence; Sentence; Sequence labeling; Annotation; Pooling; Context (language use); Transfer learning; Civil law (legal system); Criminal law |
Source: | ICAIL |
DOI: | 10.1145/3462757.3466149 |
Description: | In this paper, we examine the use of multilingual sentence embeddings to transfer predictive models for functional segmentation of adjudicatory decisions across jurisdictions, legal systems (common and civil law), languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic resources outside of their original context have significant potential benefits in AI & Law, because differences between legal systems, languages, or traditions often block wider adoption of research outcomes. We analyze the use of Language-Agnostic Sentence Representations in sequence labeling models using Gated Recurrent Units (GRUs) that are transferable across languages. To investigate transfer between different contexts, we developed an annotation scheme for functional segmentation of adjudicatory decisions. We found that models generalize beyond the contexts on which they were trained (e.g., a model trained on administrative decisions from the US can be applied to criminal law decisions from Italy). Further, we found that training the models on multiple contexts increases robustness and improves overall performance when evaluating on previously unseen contexts. Finally, we found that pooling the training data from all the contexts enhances the models' in-context performance. |
Database: | OpenAIRE |
External link: |
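
The description outlines a concrete pipeline: each sentence of a decision is mapped to a language-agnostic embedding (LASER), and a GRU-based sequence labeling model assigns a functional-segment type to every sentence, so that a model trained on decisions in one language and legal system can be applied to decisions in another. The following is a minimal sketch of such a model, assuming PyTorch and precomputed LASER-style sentence vectors; the embedding dimension, hidden size, label set, and all class and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a bidirectional GRU sequence labeller that
# assigns a functional-segment label to every sentence of a decision, operating on
# precomputed language-agnostic sentence embeddings (e.g., 1024-dim LASER vectors).
import torch
import torch.nn as nn

class SentenceGRUTagger(nn.Module):
    def __init__(self, emb_dim=1024, hidden_dim=256, num_labels=5):
        super().__init__()
        # The bidirectional GRU reads the whole decision as a sequence of sentence
        # vectors, so each sentence's label can depend on its surrounding context.
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, sentence_embeddings):   # (batch, num_sentences, emb_dim)
        hidden, _ = self.gru(sentence_embeddings)
        return self.classifier(hidden)        # (batch, num_sentences, num_labels)

# Usage: one decision with 30 sentences, embedded in a shared multilingual space.
model = SentenceGRUTagger()
decision = torch.randn(1, 30, 1024)           # stand-in for LASER sentence embeddings
label_logits = model(decision)                # per-sentence functional-type scores
predicted_labels = label_logits.argmax(dim=-1)
```

Because the input space is a shared multilingual embedding space, training data from multiple contexts can simply be pooled at this stage, which is the setup whose robustness and in-context gains the description reports.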