Showing 1 - 10 of 26 for search: '"Mohamed M. Sabry Aly"'
Published in:
IEEE Access, Vol 10, Pp 82144-82155 (2022)
Spin Transfer Torque Random Access Memory (STT-RAM) has garnered interest due to characteristics such as non-volatility, low leakage power, and high density. Its magnetic properties play a vital role in STT switching operations through therma…
External link:
https://doaj.org/article/9901382b30ac44dda5a5359158b8a10b
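The abstract snippet above is cut off where it mentions the thermal role of the magnetic properties; for orientation, the quantity conventionally used in STT-RAM discussions is the thermal stability factor of the magnetic tunnel junction (standard background, not a formula quoted from the indexed article):

```latex
% Thermal stability factor of a magnetic tunnel junction's free layer
% (standard STT-RAM background, not taken from the indexed article):
\Delta = \frac{E_b}{k_B T}
% E_b : energy barrier between the two magnetization states
% k_B : Boltzmann constant, T : absolute temperature
% A larger \Delta improves retention (non-volatility), but the critical
% current needed for STT switching grows with E_b, so write energy and
% retention must be traded off.
```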
Published in:
Emerging Computing: From Devices to Systems (ISBN 9789811674860)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::f8c7bf94919f33899dd450084cb71a12
https://doi.org/10.1007/978-981-16-7487-7_1
Author:
Umesh Chand, Mohamed M. Sabry Aly, Manohar Lal, Chen Chun-Kuei, Sonu Hooda, Shih-Hao Tsai, Zihang Fang, Hasita Veluri, Aaron Voon-Yew Thean
Published in:
2022 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits).
Author:
Chunyun Chen, Tianyi Zhang, Zehui Yu, Adithi Raghuraman, Shwetalaxmi Udayan, Jie Lin, Mohamed M. Sabry Aly
Published in:
2022 Design, Automation & Test in Europe Conference & Exhibition (DATE).
Published in:
2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC).
As Deep Neural Networks (DNNs) are usually overparameterized and have millions of weight parameters, it is challenging to deploy these large DNN models on resource-constrained hardware platforms, e.g., smartphones. Numerous network compression method…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ad91c34b190c59c1ce80e9df2e084933
Published in:
DAC
Power- and area-efficient deep neural network (DNN) designs are key in edge applications. Compact DNNs, via compression or quantization, enable such designs by significantly reducing memory footprint. Lossless entropy coding can further reduce the siz…
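As an illustration of the entropy-coding claim in the snippet above, the following minimal sketch (assumptions: NumPy, a hypothetical 4-bit quantized weight tensor with a peaked distribution; not code from the paper) estimates how far below the fixed 4-bit storage cost an ideal lossless entropy coder could go:

```python
# Minimal sketch: headroom that lossless entropy coding leaves over
# fixed-width storage of quantized DNN weights (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-bit quantized weights with a peaked (non-uniform)
# distribution, as quantization typically produces.
weights = rng.choice(16, size=100_000, p=np.r_[0.4, np.full(15, 0.04)])

counts = np.bincount(weights, minlength=16)
probs = counts / counts.sum()
entropy = -(probs[probs > 0] * np.log2(probs[probs > 0])).sum()  # bits/weight

raw_bits = 4.0  # fixed-length 4-bit storage
print(f"empirical entropy: {entropy:.2f} bits/weight")
print(f"ideal entropy-coding ratio vs 4-bit storage: {raw_bits / entropy:.2f}x")
```

The gap between the empirical entropy and the fixed bit width is exactly the headroom that a lossless coder such as Huffman or arithmetic coding can exploit.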
Author:
Mohamed M. Sabry Aly, Gage Hills, Max M. Shulaker, William Hwang, Andrew Bartolo, Subhasish Mitra, Mary Wootters, Yash H. Malviya, H.-S. Philip Wong, Tony F. Wu, Igor L. Markov
Published in:
Proceedings of the IEEE. 107:19-48
The world's appetite for analyzing massive amounts of structured and unstructured data has grown dramatically. The computational demands of these abundant-data applications, such as deep learning, far exceed the capabilities of today's computing syst…
Published in:
DCC
In this paper, we present a coding framework for deep convolutional neural network compression. Our approach utilizes classical coding theory and formulates the compression of deep convolutional neural networks as a rate-distortion optimization…
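For readers unfamiliar with the term, a generic rate-distortion formulation of model compression takes the Lagrangian form below (an illustrative form only; the paper's exact objective is not reproduced here):

```latex
% Generic rate-distortion Lagrangian for model compression
% (illustrative form, not the paper's exact objective):
\min_{\theta \in \Theta} \; D(\theta) + \lambda\, R(\theta)
% D(\theta): distortion, e.g. loss in task accuracy after compressing
%            the network with coding/quantization parameters \theta
% R(\theta): rate, i.e. the bits needed to store the encoded weights
% \lambda  : Lagrange multiplier trading model size against accuracy
```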
Author:
Subhasish Mitra, Tony F. Wu, Binh Quang Le, Mohamed M. Sabry Aly, Robert M. Radway, H.-S. Philip Wong, Yunfeng Xin, Elisa Vianello, Mary Wootters, Paul C. Jolly, Pascal Vivet, Etienne Nowak, Pulkit Tandon, Andrew Bartolo, Zainab F. Khan, Edith Beigne
Published in:
Nature Electronics
Nature Electronics, 2021, 4 (1), pp.71-80. ⟨10.1038/s41928-020-00515-3⟩
Hardware for deep neural network (DNN) inference often suffers from insufficient on-chip memory, thus requiring accesses to separate memory-only chips. Such off-chip memory accesses incur considerable costs in terms of energy and execution time. Fitt…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::bd2d9d4ac350a2de034b3a8b112d4e66
https://cea.hal.science/cea-03759925