Understanding BERT Rankers Under Distillation
Author: Luyu Gao, Jamie Callan, Zhuyun Dai
Language: English
Year of publication: 2020
Subject: FOS: Computer and information sciences; Computer Science - Information Retrieval (cs.IR); Computer Science - Machine Learning (cs.LG); information retrieval; machine learning; language model; matching; inference; computation; speedup; distillation; artificial intelligence
Source: ICTIR
Description: Deep language models such as BERT, pre-trained on large corpora, have given a substantial performance boost to state-of-the-art information retrieval ranking systems. The knowledge embedded in such models allows them to pick up complex matching signals between passages and queries. However, their high computation cost during inference limits their deployment in real-world search scenarios. In this paper, we study whether and how the knowledge for search within BERT can be transferred to a smaller ranker through distillation. Our experiments demonstrate that a proper distillation procedure is crucial: it yields up to a nine-fold speedup while preserving state-of-the-art ranking performance. (A minimal sketch of score distillation follows this record.)
Database: OpenAIRE
External link:
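
The description mentions transferring BERT's ranking knowledge to a smaller ranker through distillation. The sketch below illustrates one common form of that idea, score distillation against a frozen teacher, in PyTorch. The `TinyRanker` student architecture, the pointwise MSE objective, and all sizes and hyperparameters are illustrative assumptions, not the procedure studied in the paper.

```python
# Minimal sketch of score distillation for a passage ranker (assumed setup,
# not the paper's exact method). Teacher scores are treated as precomputed.
import torch
import torch.nn as nn

class TinyRanker(nn.Module):
    """Hypothetical small student: mean-pooled embeddings + MLP relevance scorer."""
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, query_ids, passage_ids):
        q = self.emb(query_ids)            # (batch, dim)
        p = self.emb(passage_ids)          # (batch, dim)
        return self.scorer(torch.cat([q, p], dim=-1)).squeeze(-1)  # (batch,)

def distill_step(student, optimizer, query_ids, passage_ids, teacher_scores):
    """One optimization step: regress student scores onto frozen teacher scores."""
    student_scores = student(query_ids, passage_ids)
    loss = nn.functional.mse_loss(student_scores, teacher_scores)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    student = TinyRanker()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    # Dummy batch: token-id tensors and precomputed teacher (e.g., BERT) scores.
    queries = torch.randint(0, 30522, (8, 16))
    passages = torch.randint(0, 30522, (8, 128))
    teacher_scores = torch.randn(8)
    print(distill_step(student, opt, queries, passages, teacher_scores))
```

Pointwise regression onto teacher scores is only one possible objective; listwise or pairwise soft-label losses over candidate passages are common alternatives, and which procedure is used matters for the speed/quality trade-off the paper reports.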