Showing 1 - 3 of 3
for search: '"Grushetsky, Alexander"'
Author:
Ravina, Walker, Sterling, Ethan, Oryeshko, Olexiy, Bell, Nathan, Zhuang, Honglei, Wang, Xuanhui, Wu, Yonghui, Grushetsky, Alexander
The goal of model distillation is to faithfully transfer teacher model knowledge to a model which is faster, more generalizable, more interpretable, or possesses other desirable characteristics. Human-readability is an important and desirable standard…
External link:
http://arxiv.org/abs/2101.08393
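For context, the snippet below is a minimal sketch of the generic teacher-student distillation objective (temperature-softened KL divergence); it is a standard illustration of the technique only, not the paper's method of distilling models into human-readable code, and all names in it are illustrative.

import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax with the usual max-subtraction for stability.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student output distributions.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1).mean()

# Toy usage: the student is pushed to match the teacher's soft predictions.
teacher = np.array([[4.0, 1.0, 0.1], [0.2, 3.5, 0.3]])
student = np.array([[2.5, 1.2, 0.3], [0.5, 2.8, 0.6]])
print(distillation_loss(teacher, student))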
Author:
Zhuang, Honglei, Wang, Xuanhui, Bendersky, Michael, Grushetsky, Alexander, Wu, Yonghui, Mitrichev, Petr, Sterling, Ethan, Bell, Nathan, Ravina, Walker, Qian, Hai
Interpretability of learning-to-rank models is a crucial yet relatively under-examined research area. Recent progress on interpretable ranking models largely focuses on generating post-hoc explanations for existing black-box ranking models, whereas the…
External link:
http://arxiv.org/abs/2005.02553
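The sketch below illustrates the additive structure behind GAM-style ranking scorers, where each feature contributes through its own sub-model and the score is their sum, so individual feature effects can be inspected in isolation. It is a structural illustration under assumed toy sub-networks, not a reproduction of the paper's neural ranking GAM architecture or training procedure.

import numpy as np

class TinyRankingGAM:
    def __init__(self, n_features, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        # One small sub-network per feature: scalar input -> hidden -> scalar contribution.
        self.w1 = rng.normal(size=(n_features, 1, hidden))
        self.w2 = rng.normal(size=(n_features, hidden, 1))

    def feature_contributions(self, x):
        # x: (n_items, n_features) -> additive per-feature contributions.
        contribs = []
        for j in range(x.shape[1]):
            h = np.tanh(x[:, j:j+1] @ self.w1[j])      # (n_items, hidden)
            contribs.append((h @ self.w2[j]).ravel())  # (n_items,)
        return np.stack(contribs, axis=1)              # (n_items, n_features)

    def score(self, x):
        # Additive model: the ranking score is the sum of per-feature terms.
        return self.feature_contributions(x).sum(axis=1)

items = np.random.rand(5, 3)          # 5 candidate items, 3 ranking features
gam = TinyRankingGAM(n_features=3)
print(np.argsort(-gam.score(items)))  # items in ranked order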
Author:
Ponomareva, Natalia, Radpour, Soroush, Hendry, Gilbert, Haykal, Salem, Colthurst, Thomas, Mitrichev, Petr, Grushetsky, Alexander
TF Boosted Trees (TFBT) is a new open-sourced framework for the distributed training of gradient boosted trees. It is based on TensorFlow, and its distinguishing features include a novel architecture, automatic loss differentiation, layer-by-layer boosting…
External link:
http://arxiv.org/abs/1710.11555
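As background on the technique itself, the following is a minimal sketch of plain gradient boosting with squared loss (each shallow tree is fit to the residuals of the current ensemble), using scikit-learn trees as stand-in weak learners. It is a generic illustration only, not the TFBT framework or its TensorFlow API.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbt(X, y, n_rounds=20, learning_rate=0.1, max_depth=3):
    # Start from a constant model, then repeatedly fit a shallow tree to the
    # residuals (the negative gradient of squared loss) and add it to the ensemble.
    prediction = np.full(len(y), y.mean())
    trees = []
    for _ in range(n_rounds):
        residuals = y - prediction
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return y.mean(), trees

def predict_gbt(base, trees, X, learning_rate=0.1):
    return base + learning_rate * sum(t.predict(X) for t in trees)

X = np.random.rand(200, 4)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * np.random.randn(200)
base, trees = fit_gbt(X, y)
print(np.mean((predict_gbt(base, trees, X) - y) ** 2))  # training MSE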