Showing 1 - 5 of 5 for search: '"Tiedemann, Stephen"'
Author:
Mauch, Lukas, Tiedemann, Stephen, Garcia, Javier Alonso, Cong, Bac Nguyen, Yoshiyama, Kazuki, Cardinaux, Fabien, Kemp, Thomas
Recently, predictor-based algorithms emerged as a promising approach for neural architecture search (NAS). For NAS, we typically have to calculate the validation accuracy of a large number of Deep Neural Networks (DNNs), which is computationally complex…
External link:
http://arxiv.org/abs/2011.12043
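The abstract above hinges on one idea: replace most of the expensive train-and-validate evaluations with a cheap learned predictor that ranks candidate architectures. A minimal, self-contained sketch of that idea follows; the toy architecture encoding, the `true_accuracy` stand-in, and the scikit-learn random forest surrogate are all illustrative assumptions, not the paper's actual setup.

```python
# Sketch of predictor-based NAS: fit a surrogate on a few exactly evaluated
# architectures, then rank the whole search space with the cheap surrogate.
import random
from sklearn.ensemble import RandomForestRegressor

def encode(arch):
    # Toy encoding: one integer per layer (e.g. a kernel-size choice).
    return list(arch)

def true_accuracy(arch):
    # Stand-in for the expensive train-and-validate step of real NAS.
    return 0.9 - 0.01 * sum((a - 3) ** 2 for a in arch) + random.gauss(0, 0.005)

random.seed(0)
search_space = [[random.randint(1, 5) for _ in range(4)] for _ in range(500)]

# Evaluate only a small subset exactly (the computationally complex part).
labelled = random.sample(search_space, 30)
X = [encode(a) for a in labelled]
y = [true_accuracy(a) for a in labelled]
predictor = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Rank all candidates with the predictor instead of training every DNN.
scores = predictor.predict([encode(a) for a in search_space])
best = max(zip(scores, search_space), key=lambda t: t[0])[1]
print("Top predicted architecture:", best)
```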
Author:
Cardinaux, Fabien, Uhlich, Stefan, Yoshiyama, Kazuki, Garcia, Javier Alonso, Mauch, Lukas, Tiedemann, Stephen, Kemp, Thomas, Nakamura, Akira
Operating deep neural networks (DNNs) on devices with limited resources requires the reduction of their memory as well as computational footprint. Popular reduction methods are network quantization or pruning, which either reduce the word length of the network parameters…
External link:
http://arxiv.org/abs/1911.04951
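The two reduction methods this abstract names, quantization and pruning, can be illustrated in a few lines of NumPy. The 4-bit word length and 50% sparsity below are arbitrary choices for illustration, not values from the paper.

```python
# Quantization shortens each weight's word length; magnitude pruning removes
# weights entirely. Toy demonstration on a random weight matrix.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

# Uniform 4-bit quantization: map weights onto 2**4 evenly spaced levels.
bits = 4
scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
w_quant = np.clip(np.round(w / scale),
                  -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale

# Magnitude pruning: zero out the 50% of weights with smallest magnitude.
threshold = np.quantile(np.abs(w), 0.5)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

print("max quantization error:", np.abs(w - w_quant).max())
print("sparsity after pruning:", float((w_pruned == 0).mean()))
```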
Author:
Uhlich, Stefan, Mauch, Lukas, Cardinaux, Fabien, Yoshiyama, Kazuki, Garcia, Javier Alonso, Tiedemann, Stephen, Kemp, Thomas, Nakamura, Akira
Efficient deep neural network (DNN) inference on mobile or embedded devices typically involves quantization of the network parameters and activations. In particular, mixed precision networks achieve better performance than networks with homogeneous bitwidth…
External link:
http://arxiv.org/abs/1905.11452
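A toy illustration of what "mixed precision" means here: per-layer bitwidths under the same average bit budget as a homogeneous network. The hand-picked bit assignments below are purely illustrative; the paper instead learns the bitwidths via a parametrization.

```python
# Compare per-layer (mixed) bitwidths against one shared (homogeneous)
# bitwidth at the same average budget of 5 bits per weight.
import numpy as np

def quantize(w, bits):
    # Symmetric uniform quantizer with 2**bits levels.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
layers = {"conv1": rng.normal(size=1000), "fc": rng.normal(size=1000)}

# Mixed precision: a sensitive layer keeps 8 bits, a robust one drops to 2.
mixed_bits = {"conv1": 8, "fc": 2}
for name, w in layers.items():
    err_mixed = np.abs(w - quantize(w, mixed_bits[name])).mean()
    err_homog = np.abs(w - quantize(w, 5)).mean()
    print(f"{name}: mixed={err_mixed:.4f}  homogeneous={err_homog:.4f}")
```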
Author:
Cardinaux, Fabien, Uhlich, Stefan, Yoshiyama, Kazuki, García, Javier Alonso, Tiedemann, Stephen, Kemp, Thomas, Nakamura, Akira
Operating deep neural networks on devices with limited resources requires the reduction of their memory footprints and computational requirements. In this paper we introduce a training method, called look-up table quantization, LUT-Q, which learns a dictionary and assigns each weight to one of the dictionary's values…
External link:
http://arxiv.org/abs/1811.05355
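The name "look-up table quantization" suggests the core mechanism: store a small dictionary of values plus a per-weight index into it. A rough sketch follows, using a plain k-means fit of the dictionary; the paper trains the dictionary jointly with the network, and `lut_quantize` is a hypothetical helper, not the paper's API.

```python
# Replace each weight by an entry of a small dictionary (look-up table),
# so only the dictionary values and per-weight indices need storing.
import numpy as np

def lut_quantize(w, k=4, iters=20):
    flat = w.ravel()
    lut = np.linspace(flat.min(), flat.max(), k)  # initial dictionary
    for _ in range(iters):
        # Assign every weight to its nearest dictionary value.
        idx = np.abs(flat[:, None] - lut[None, :]).argmin(axis=1)
        # Update each dictionary value to the mean of its weights (k-means).
        for j in range(k):
            if np.any(idx == j):
                lut[j] = flat[idx == j].mean()
    return lut, idx.reshape(w.shape)

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))
lut, idx = lut_quantize(w)
w_hat = lut[idx]  # reconstruction: one table look-up per weight
print("dictionary:", np.round(lut, 3))
print("mean abs error:", np.abs(w - w_hat).mean())
```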
Academic article
This result cannot be displayed to users who are not logged in; log in to view it.