Showing 1 - 9 of 9 for search: '"Urmish Thakker"'
Published in:
IEEE Internet of Things Journal. 9:1-24
Federated learning (FL) is a distributed machine learning strategy that generates a global model by learning from multiple decentralized edge clients. FL enables on-device training, keeping the client’s local data private, and further, updating the global model…
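The aggregation loop this abstract describes can be made concrete with a minimal federated-averaging sketch. The linear model, client data, and every name below are illustrative assumptions, not taken from the paper.

import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    # On-device training: each client runs gradient descent on its own
    # data; only the updated weights ever leave the device.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(w_global, clients):
    # Server step (FedAvg): average client models, weighted by data size.
    n_total = sum(len(y) for _, y in clients)
    return sum((len(y) / n_total) * local_step(w_global.copy(), X, y)
               for X, y in clients)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []                      # three clients with private local data
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):               # communication rounds
    w = fed_avg(w, clients)
print(w)                          # approaches true_w; raw data stayed local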
Published in:
Federated and Transfer Learning ISBN: 9783031117473
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::17cd6f6001de881e994832ef62b7436d
https://doi.org/10.1007/978-3-031-11748-0_2
Author:
Urmish Thakker, Chu Zhou, Matthew Mattina, Jesse Beu, Ganesh Dasika, Dibakar Gope, Igor Fedorov
Published in:
ACM Journal on Emerging Technologies in Computing Systems. 17:1-18
Micro-controllers (MCUs) make up most of the processors in the world, with widespread applicability from automobiles to medical devices. The Internet of Things promises to enable these resource-constrained MCUs with machine learning algorithms to provide…
Published in:
AIChallengeIoT@SenSys
Recent trends have shown that deep learning models have become larger and more accurate at an increased computational cost, making them difficult to deploy for latency-constrained applications. Conditional execution methods address this increase in cost…
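One widely used conditional-execution pattern is early exit: cheap intermediate classifiers let confident inputs skip the remaining, more expensive layers. The sketch below is a generic illustration of that pattern under made-up sizes and names, not the specific method of the paper.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_forward(x, layers, exit_heads, threshold=0.9):
    # Run layers in order; after each one, a small exit head produces
    # class probabilities. If the head is confident enough, stop early
    # and skip the remaining layers entirely.
    h = x
    for layer, head in zip(layers, exit_heads):
        h = np.tanh(layer @ h)          # cheap stand-in for a real layer
        p = softmax(head @ h)
        if p.max() >= threshold:        # confident: exit now
            return p, True
    return p, False                      # fell through to the final head

rng = np.random.default_rng(1)
layers = [rng.normal(size=(16, 16)) * 0.5 for _ in range(4)]
heads = [rng.normal(size=(10, 16)) * 0.5 for _ in range(4)]
x = rng.normal(size=16)
probs, exited_early = early_exit_forward(x, layers, heads)
print(exited_early, probs.argmax())

The threshold is the accuracy/latency knob: raising it makes exits rarer but predictions more reliable.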
Published in:
AIChallengeIoT@SenSys
There has been a recent surge in interest in dynamic inference technologies which can reduce the cost of inference without sacrificing the accuracy of the model. These models are based on the assumption that not all parts of the output feature map…
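A common instance of this assumption is channel gating, where a lightweight gate decides, per input, which output channels are worth computing at all. The following sketch (1x1 convolutions, a made-up gate) illustrates the general idea only; it is not the paper's architecture.

import numpy as np

def gated_conv_channels(x, filters, gate_w, keep=0.5):
    # A tiny gate scores each output channel from pooled input
    # statistics; only the top-scoring channels are actually computed,
    # the rest stay zero and their cost is skipped.
    pooled = x.mean(axis=(1, 2))                 # (C_in,) input summary
    scores = gate_w @ pooled                     # one score per out-channel
    k = max(1, int(keep * len(filters)))
    active = np.argsort(scores)[-k:]             # channels worth computing
    out = np.zeros((len(filters),) + x.shape[1:])
    for c in active:                             # compute only active ones
        out[c] = sum(filters[c][i] * x[i] for i in range(x.shape[0]))
    return out

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 6, 6))                   # C_in=8 feature map
filters = rng.normal(size=(16, 8))               # 1x1 conv: C_out x C_in
gate_w = rng.normal(size=(16, 8))
y = gated_conv_channels(x, filters, gate_w)
print((np.abs(y).sum(axis=(1, 2)) > 0).sum(), "of", y.shape[0], "channels computed")

The keep ratio trades compute for fidelity; in practice the gate is trained jointly with the network rather than fixed as here.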
Sequence-model-based NLP applications can be large. Yet, many applications that benefit from them run on small devices with very limited compute and storage capabilities, while still having run-time constraints. As a result, there is a need for a compression technique…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::bb28ec4bdde46a856f6513e7d93cc3ec
Author:
Ganesh Dasika, Jesse Beu, Matthew Mattina, Chu Zhou, Dibakar Gope, Urmish Thakker, Igor Fedorov
Published in:
EMC2@NeurIPS
Recurrent Neural Networks (RNN) can be difficult to deploy on resource-constrained devices due to their size. As a result, there is a need for compression techniques that can significantly compress RNNs without negatively impacting task accuracy. This…
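Kronecker-product factorization is one well-known route to large RNN compression ratios; whether or not it is the technique of this particular paper, the sketch below shows why it saves both parameters and compute. All sizes are arbitrary.

import numpy as np

# A 256x256 recurrent weight matrix has 65,536 parameters. Replacing it
# with W = kron(A, B), A and B both 16x16, needs only 512 parameters,
# a 128x reduction for that matrix.
rng = np.random.default_rng(3)
A = rng.normal(size=(16, 16))
B = rng.normal(size=(16, 16))
h = rng.normal(size=256)              # hidden state

# Naive: materialize the full W (what a small device cannot afford).
W = np.kron(A, B)
y_full = W @ h

# Factored: with numpy's row-major reshape,
# kron(A, B) @ h equals vec(A @ H @ B.T) where H = h reshaped to 16x16,
# so W is never formed and the matmuls are 16x16 instead of 256x256.
H = h.reshape(16, 16)
y_fact = (A @ H @ B.T).reshape(-1)
print(np.allclose(y_full, y_fact))    # True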
Published in:
SenSys-ML
Recurrent Neural Networks (RNNs) break a time-series input (or a sentence) into multiple time-steps (or words) and process it one time-step (word) at a time. However, not all of these time-steps (words) need to be processed to determine the final output…
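The idea of not processing every time-step can be illustrated with a plain RNN cell that stops reading once its running prediction is confident. The threshold, sizes, and confidence rule here are illustrative assumptions, not the paper's exact mechanism.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_early_stop(steps, W_h, W_x, W_out, threshold=0.95):
    # Process the sequence one time-step at a time, but stop as soon as
    # the running prediction is confident; remaining steps are skipped.
    h = np.zeros(W_h.shape[0])
    for t, x in enumerate(steps):
        h = np.tanh(W_h @ h + W_x @ x)      # standard RNN cell
        p = softmax(W_out @ h)
        if p.max() >= threshold:            # confident enough: exit early
            return p, t + 1                 # steps actually processed
    return p, len(steps)

rng = np.random.default_rng(4)
W_h = rng.normal(size=(32, 32)) * 0.3
W_x = rng.normal(size=(32, 8)) * 0.3
W_out = rng.normal(size=(4, 32))
seq = rng.normal(size=(20, 8))              # 20 time-steps, 8 features each
probs, used = rnn_early_stop(seq, W_h, W_x, W_out)
print(f"processed {used}/20 steps, predicted class {probs.argmax()}")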
Published in:
2019 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC2).
Recurrent neural networks can be large and compute-intensive, yet many applications that benefit from RNNs run on small devices with very limited compute and storage capabilities while still having run-time constraints. As a result, there is a need for…