Cross-Domain Transfer of Generative Explanations Using Text-to-Text Models
Author: Mihhail Matskin, Amir H. Payberah, Anders Arpteg, Karl Fredrik Erliksson
Year of publication: 2021
Subject: Computer science, Artificial intelligence, Machine learning, Deep learning, Transfer of learning, Generative grammar, Transformer (machine learning model), Domain (software engineering), Task (project management), Quality (business), Architecture
Source: Natural Language Processing and Information Systems (NLDB), ISBN: 9783030805982
DOI: 10.1007/978-3-030-80599-9_8
Description: Deep learning models based on the Transformer architecture have achieved impressive state-of-the-art results, even surpassing human-level performance on various natural language processing tasks. However, these models remain opaque and hard to explain due to their vast complexity and size. This limits their adoption in highly regulated domains such as medicine and finance, and non-expert end-users often lack trust in them. In this paper, we show that by teaching a model to generate explanations alongside its predictions on a large annotated dataset, we can transfer this capability to a low-resource task in another domain. Our proposed three-step training procedure improves explanation quality by up to 7% and avoids sacrificing classification performance on the downstream task, while at the same time reducing the need for human annotations.
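The record contains no code; as a rough illustration of the idea in the abstract (a text-to-text model trained to emit a classification label together with a free-text explanation in a single decoded sequence), here is a minimal sketch using a Hugging Face T5 model. The model name, prompt format, and the toy training pair below are assumptions for illustration only, not the paper's actual setup or its three-step procedure.

```python
# Sketch: training a text-to-text model to generate a label plus an
# explanation in one output sequence. All names and strings below are
# illustrative assumptions, not taken from the paper.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"  # assumption: any T5-style text-to-text model
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# One (input, target) pair where the target joins the label and a
# human-written explanation, so the model learns to generate both.
source = ("explain nli premise: A man is playing a guitar. "
          "hypothesis: A person is making music.")
target = "entailment explanation: Playing a guitar is a way of making music."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# A single supervised step; in practice this would run over a large
# explanation-annotated corpus (e.g. e-SNLI) before transferring the
# capability to the low-resource target task.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()

# At inference time the model decodes label and explanation in one pass.
model.eval()
with torch.no_grad():
    generated = model.generate(**inputs, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```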
Database: OpenAIRE
External link: