Showing 1 - 10 of 131 for search: '"Aas, Kjersti"'
Published in:
Data Mining and Knowledge Discovery (2024)
Shapley values originated in cooperative game theory but are extensively used today as a model-agnostic explanation framework to explain predictions made by complex machine learning models in industry and academia. There are several algorithmic a
External link:
http://arxiv.org/abs/2305.09536
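As a minimal illustration of the Shapley framework this abstract refers to (a sketch of the classic definition, not the authors' algorithms), the exact Shapley value of a feature averages its marginal contribution over all coalitions of the remaining features:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: phi_i = sum over coalitions S not containing i of
    |S|! (n - |S| - 1)! / n!  *  (v(S ∪ {i}) - v(S))."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                weight = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                          / factorial(n_features))
                phi[i] += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy additive value function (hypothetical): a coalition is worth the sum of
# its members' weights, so each Shapley value recovers that member's weight.
weights = [3.0, 1.0, 2.0]
v = lambda S: sum(weights[j] for j in S)
print(shapley_values(v, 3))
```

The enumeration over all 2^(n-1) coalitions per feature is what makes exact Shapley computation intractable for many features, which motivates the approximation strategies such papers study.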
Authors:
Mancisidor, Rogelio A., Aas, Kjersti
Textual data from financial filings, e.g., the Management's Discussion & Analysis (MDA) section in Form 10-K, has been used to improve the prediction accuracy of bankruptcy models. In practice, however, we cannot obtain the MDA section for all public
External link:
http://arxiv.org/abs/2211.08405
Factor models have become a common and valued tool for understanding the risks associated with an investing strategy. In this report we describe Exabel's factor model, we quantify the fraction of the variability of the returns explained by the differ
External link:
http://arxiv.org/abs/2203.12408
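The core factor-model idea the abstract describes, quantifying the fraction of return variability explained by factors, can be sketched as an OLS regression of returns on factor return series (a toy illustration with synthetic data, not Exabel's model):

```python
import numpy as np

def variance_explained(returns, factors):
    """Regress returns on factor returns (plus intercept) and report R²,
    the fraction of return variance explained by the factors."""
    F = np.column_stack([np.ones(len(factors)), factors])  # add intercept column
    beta, *_ = np.linalg.lstsq(F, returns, rcond=None)      # OLS exposures
    resid = returns - F @ beta
    return 1 - resid.var() / returns.var()

# Synthetic example (all numbers hypothetical): two factors, known exposures.
rng = np.random.default_rng(2)
f = rng.normal(size=(250, 2))                               # factor return series
r = f @ np.array([0.5, -0.3]) + rng.normal(0, 0.05, 250)    # exposures + noise
print(round(variance_explained(r, f), 2))
```

With low idiosyncratic noise, as here, R² is close to 1; real equity returns are dominated by idiosyncratic risk and explain far less.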
Quantifying both historic and future volatility is key in portfolio risk management. This note presents and compares estimation strategies for volatility estimation in an estimation universe consisting of 28,629 unique companies from February 2010 to
External link:
http://arxiv.org/abs/2203.12402
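A common estimation strategy of the kind such comparisons include (a sketch, not necessarily one of the note's estimators) is an exponentially weighted moving average of squared returns; the decay λ = 0.94 below is the classic RiskMetrics daily value and an assumption here:

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA variance recursion: var_t = lam * var_{t-1} + (1 - lam) * r_t².
    Returns the annualised volatility after the last observation."""
    var = np.var(returns[:20]) if len(returns) >= 20 else np.var(returns)  # seed
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var * 252)  # annualise assuming 252 trading days

# Synthetic daily returns with ~1% daily volatility (hypothetical data).
rng = np.random.default_rng(0)
daily = rng.normal(0, 0.01, size=500)
print(round(ewma_volatility(daily), 3))  # annualised volatility estimate
```

The recursion weights recent observations more heavily, which is why EWMA estimators react faster to volatility regime changes than an equal-weighted rolling window.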
Published in:
Journal of Machine Learning Research 23 (2022) 1-51
Shapley values are today extensively used as a model-agnostic explanation framework to explain complex predictive machine learning models. Shapley values have desirable theoretical properties and a sound mathematical foundation in the field of cooper
External link:
http://arxiv.org/abs/2111.13507
We introduce MCCE: Monte Carlo sampling of valid and realistic Counterfactual Explanations for tabular data, a novel counterfactual explanation method that generates on-manifold, actionable and valid counterfactuals by modeling the joint distribution
External link:
http://arxiv.org/abs/2111.09790
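The sample-then-filter idea the MCCE abstract describes can be sketched roughly as follows (a toy illustration of the concept, not the authors' algorithm; the classifier, sampler, and numbers are all hypothetical):

```python
import numpy as np

def mcce_style_counterfactual(x, predict, sampler, fixed_idx, n_samples=10_000):
    """MCCE-style search: (1) draw candidates from a model of the joint data
    distribution, (2) keep valid ones (prediction flips, immutable features
    unchanged), (3) return the candidate closest to x in L1 distance."""
    cand = sampler(n_samples)                           # on-manifold samples
    cand[:, fixed_idx] = x[fixed_idx]                   # respect immutable features
    valid = cand[predict(cand) != predict(x[None])[0]]  # decision must flip
    if len(valid) == 0:
        return None
    return valid[np.argmin(np.abs(valid - x).sum(axis=1))]

# Toy setup: the classifier thresholds feature 1; feature 0 is immutable.
rng = np.random.default_rng(1)
predict = lambda X: (X[:, 1] > 0.5).astype(int)
sampler = lambda n: rng.normal(0, 1, size=(n, 2))
x = np.array([0.2, -1.0])                               # predicted class 0
cf = mcce_style_counterfactual(x, predict, sampler, fixed_idx=[0])
print(cf)  # feature 0 kept at 0.2, feature 1 nudged just past the boundary
```

Sampling from the joint distribution (here crudely stand-in Gaussian; MCCE fits a proper generative model) is what keeps the returned counterfactuals on-manifold rather than arbitrary perturbations.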
Deep generative models with latent variables have been used lately to learn joint representations and generative processes from multi-modal data. These two learning mechanisms can, however, conflict with each other and representations can fail to emb
External link:
http://arxiv.org/abs/2110.04616
Shapley values have established themselves as one of the most appropriate and theoretically sound frameworks for explaining predictions from complex machine learning models. The popularity of Shapley values in the explanation setting is probably due to it
External link:
http://arxiv.org/abs/2106.12228
Published in:
Neural Networks 169 (2024) 417-430
The original development of Shapley values for prediction explanation relied on the assumption that the features being described were independent. If the features in reality are dependent this may lead to incorrect explanations. Hence, there have rec
External link:
http://arxiv.org/abs/2102.06416
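To illustrate why feature dependence matters, here is a sketch in the spirit of the conditional approach this abstract alludes to, under the simplifying assumption of jointly Gaussian features (not the paper's code): when features are dependent, the off-coalition features should be sampled from their conditional, not marginal, distribution, and for a multivariate normal that conditional is available in closed form.

```python
import numpy as np

def conditional_gaussian(mu, cov, idx_S, x_S):
    """Conditional distribution of X_Sbar given X_S = x_S for a multivariate
    normal: mean mu_b + C_bS C_SS^{-1} (x_S - mu_S), cov C_bb - C_bS C_SS^{-1} C_Sb."""
    idx_S = np.asarray(idx_S)
    idx_b = np.array([i for i in range(len(mu)) if i not in set(idx_S)])
    c_SS = cov[np.ix_(idx_S, idx_S)]
    c_bS = cov[np.ix_(idx_b, idx_S)]
    c_bb = cov[np.ix_(idx_b, idx_b)]
    w = c_bS @ np.linalg.inv(c_SS)
    mu_cond = mu[idx_b] + w @ (x_S - mu[idx_S])
    cov_cond = c_bb - w @ c_bS.T
    return mu_cond, cov_cond

# Two strongly correlated features (hypothetical numbers), conditioning on x0 = 2.
mu = np.zeros(2)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
m, c = conditional_gaussian(mu, cov, [0], np.array([2.0]))
print(m, c)  # conditional mean 1.8, far from the marginal mean 0
```

Sampling the second feature around its marginal mean of 0 instead of the conditional mean 1.8 would evaluate the model on unrealistic inputs, which is exactly the failure mode of independence-based Shapley explanations.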