Showing 1 - 2 of 2 for search: '"Eksarevskiy, Vadim"'
Author:
Maleki, Saeed, Musuvathi, Madan, Mytkowicz, Todd, Saarikivi, Olli, Xu, Tianju, Eksarevskiy, Vadim, Ekanayake, Jaliya, Barsoum, Emad
Stochastic gradient descent (SGD) is an inherently sequential training algorithm: computing the gradient at batch $i$ depends on the model parameters learned from batch $i-1$. Prior approaches that break this dependence do not honor it (e.g., summing the gradients of each batch) …
External link:
http://arxiv.org/abs/2006.02924
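The abstract above turns on the sequential dependence of SGD: the update for batch i is taken at the parameters produced by batch i-1, so the steps cannot naively run in parallel. A minimal Python sketch of that dependence (illustrative only, not the paper's parallelization method; the linear least-squares model, batch sizes, and learning rate are assumptions made for the example):

import numpy as np

def gradient(w, X, y):
    # Gradient of 0.5 * ||X @ w - y||^2 with respect to w.
    return X.T @ (X @ w - y)

def sequential_sgd(batches, w, lr=0.01):
    # The update for batch i is evaluated at the parameters w
    # produced by batch i-1 -- the dependence the abstract describes.
    for X, y in batches:
        w = w - lr * gradient(w, X, y)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
batches = []
for _ in range(100):
    X = rng.normal(size=(8, 2))  # 8 examples per batch
    batches.append((X, X @ true_w))

w = sequential_sgd(batches, np.zeros(2))
print(w)  # converges toward true_w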
Author:
Ahmed, Zeeshan, Amizadeh, Saeed, Bilenko, Mikhail, Carr, Rogan, Chin, Wei-Sheng, Dekel, Yael, Dupre, Xavier, Eksarevskiy, Vadim, Erhardt, Eric, Eseanu, Costin, Filipi, Senja, Finley, Tom, Goswami, Abhishek, Hoover, Monte, Inglis, Scott, Interlandi, Matteo, Katzenberger, Shon, Kazmi, Najeeb, Krivosheev, Gleb, Luferenko, Pete, Matantsev, Ivan, Matusevych, Sergiy, Moradi, Shahab, Nazirov, Gani, Ormont, Justin, Oshri, Gal, Pagnoni, Artidoro, Parmar, Jignesh, Roy, Prabhat, Shah, Sarthak, Siddiqui, Mohammad Zeeshan, Weimer, Markus, Zahirazami, Shauheen, Zhu, Yiwen
Machine Learning is transitioning from an art and science into a technology available to every developer. In the near future, every application on every platform will incorporate trained models to encode data-based decisions that would be impossible …
External link:
http://arxiv.org/abs/1905.05715