Predicting In-Game Actions from Interviews of NBA Players
Author: | Nadav Oved, Amir Feder, Roi Reichart |
---|---|
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences; Computation and Language (cs.CL); Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Linguistics and Language; Language and Linguistics; Computer Science Applications |
Source: | Computational Linguistics. 46:667-712 |
ISSN: | 1530-9312, 0891-2017 |
Description: | Sports competitions are widely researched in computer and social science, with the goal of understanding how players act under uncertainty. While there is an abundance of computational work on predicting player metrics from past performance, very few attempts have been made to incorporate out-of-game signals. Specifically, it was previously unclear whether linguistic signals gathered from players' interviews can add information that does not appear in performance metrics. To bridge that gap, we define text classification tasks of predicting deviations from the mean in NBA players' in-game actions, which are associated with strategic choices, player behavior, and risk, using their choice of language prior to the game. We collected a dataset of transcripts from key NBA players' pre-game interviews and their in-game performance metrics, totalling 5,226 interview-metric pairs. We design neural models for players' action prediction based on increasingly complex aspects of the language signals in their open-ended interviews. Our models can make their predictions based on the textual signal alone, or on a combination with signals from past-performance metrics. Our text-based models outperform strong baselines trained on performance metrics only, demonstrating the importance of language usage for action prediction. Moreover, the models that employ both textual input and past-performance metrics produced the best results. Finally, as neural networks are notoriously difficult to interpret, we propose a method for gaining further insight into what our models have learned. In particular, we present an LDA-based analysis, where we interpret model predictions in terms of correlated topics. We find that our best performing textual model is most associated with topics that are intuitively related to each prediction task, and that better models yield higher correlation with more informative topics. Comment: First two authors contributed equally.
To be published in the Computational Linguistics journal. Code is available at: https://github.com/nadavo/mood |
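The description above frames the prediction targets as deviations from each player's own mean. As a minimal sketch of that labeling scheme (the function name, data, and thresholding rule are illustrative assumptions, not taken from the paper):

```python
# Hypothetical sketch: derive binary "above/below own mean" labels
# from a player's per-game metric values, as in deviation-from-mean
# prediction tasks. Names and numbers are illustrative only.
from statistics import mean

def deviation_labels(metrics_by_player):
    """For each player, label each game 1 if the metric value exceeds
    that player's mean across the listed games, else 0."""
    labels = {}
    for player, values in metrics_by_player.items():
        m = mean(values)
        labels[player] = [1 if v > m else 0 for v in values]
    return labels

# Example: four games of a hypothetical metric (e.g. shot attempts).
games = {"Player A": [10, 14, 12, 20], "Player B": [5, 5, 6, 4]}
print(deviation_labels(games))
```

Normalizing per player in this way keeps the task about each player's deviation from their own baseline rather than about absolute skill differences between players.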
Database: | OpenAIRE |
External link: | |