Showing 1 - 10 of 141 results for search: '"Sze, Vivienne"'
Deep neural networks (DNNs) can deteriorate in accuracy when deployment data differs from training data. While performing online training at all timesteps can improve accuracy, it is computationally expensive. We propose DecTrain, a new algorithm that …
External link:
http://arxiv.org/abs/2410.02980
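The snippet above contrasts training at every timestep with a selective scheme. As a rough illustration of the general idea only (not DecTrain's actual decision mechanism, which the paper defines), here is a minimal Python sketch in which an expensive training step is gated by a cheap loss-trend proxy; all names, callables, and thresholds are hypothetical:

```python
def selective_online_training(stream, cheap_loss, train_step,
                              window=10, trigger=1.2):
    """Train online only when a cheap proxy signals drift, instead of
    training at every timestep. Purely illustrative: cheap_loss and
    train_step are hypothetical callables, not DecTrain's components."""
    recent = []        # sliding window of recent losses
    trained = 0
    for sample in stream:
        loss = cheap_loss(sample)          # inexpensive forward pass
        if recent and loss > trigger * (sum(recent) / len(recent)):
            train_step(sample)             # expensive backward pass
            trained += 1
        recent.append(loss)
        if len(recent) > window:
            recent.pop(0)
    return trained

# Toy usage: losses jump halfway through the stream, so training
# triggers only around the distribution shift.
losses = [1.0] * 20 + [3.0] * 20
count = selective_online_training(
    iter(range(40)),
    cheap_loss=lambda t: losses[t],
    train_step=lambda t: None,
)
print(count)  # trains on only a fraction of the 40 timesteps
```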
Latency and energy consumption are key metrics in the performance of deep neural network (DNN) accelerators. A significant factor contributing to latency and energy is data transfers. One method to reduce transfers of data is reusing data when multiple …
External link:
http://arxiv.org/abs/2409.13625
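To make the reuse idea concrete: if the same weights serve many inputs, fetching each weight once and holding it in a local buffer cuts transfers by the batch size. A back-of-the-envelope sketch with made-up dimensions, not figures from the paper:

```python
# Counting data transfers for an M x K weight matrix applied to a batch
# of N inputs. All numbers are illustrative.
M, K, N = 64, 64, 16

# Without reuse: weights are re-fetched from memory for every input.
fetches_no_reuse = M * K * N

# With reuse: each weight is fetched once and applied to all N inputs
# while it is held in a local register or buffer.
fetches_with_reuse = M * K

print(fetches_no_reuse, fetches_with_reuse)  # 65536 vs 4096
```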
Constructing a high-fidelity representation of the 3D scene using a monocular camera can enable a wide range of applications on mobile devices, such as micro-robots, smartphones, and AR/VR headsets. On these devices, memory is often limited in capacity …
External link:
http://arxiv.org/abs/2409.09295
Author:
Andrulis, Tanner, Chaudhry, Gohar Irfan, Suriyakumar, Vinith M., Emer, Joel S., Sze, Vivienne
Published in:
ISPASS 2024 pp. 307-309
Photonics is a promising technology to accelerate Deep Neural Networks as it can use optical interconnects to reduce data movement energy and it enables low-energy, high-throughput optical-analog computations. To realize these benefits in a full system …
External link:
http://arxiv.org/abs/2405.07266
Published in:
ISPASS 2024 pp. 10-23
Compute-In-Memory (CiM) is a promising solution to accelerate Deep Neural Networks (DNNs) as it can avoid energy-intensive DNN weight movement and use memory arrays to perform low-energy, high-density computations. These benefits have inspired …
External link:
http://arxiv.org/abs/2405.07259
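The core CiM idea named in the snippet is that weights stay resident in the memory array and the multiply-accumulates happen where they are stored, so the weights are never shipped to a separate datapath. A behavioral (not circuit-level) sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights are "resident" in the memory array and never move; inputs are
# broadcast along the rows, and each column yields one analog partial sum.
array_weights = rng.integers(-4, 4, size=(128, 16))  # 128 rows x 16 columns
activations   = rng.integers(0, 2, size=(128,))      # 1-bit inputs, broadcast

column_sums = activations @ array_weights  # 16 dot products computed "in place"
print(column_sums.shape)  # (16,)
```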
Analog Compute-in-Memory (CiM) accelerators use analog-digital converters (ADCs) to read the analog values that they compute. ADCs can consume significant energy and area, so architecture-level ADC decisions such as ADC resolution or number of ADCs can …
External link:
http://arxiv.org/abs/2404.06553
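Concretely, every analog column sum must pass through an ADC before digital post-processing, so ADC resolution trades accuracy against energy and area. A toy model of a B-bit ADC read with hypothetical value ranges, showing how quantization error shrinks as resolution grows:

```python
import numpy as np

def adc_read(analog_values, bits, full_scale):
    """Quantize analog column sums with a B-bit ADC over [-full_scale,
    full_scale). Lower resolution saves ADC energy/area but adds
    quantization error. Illustrative only."""
    levels = 2 ** bits
    step = (2 * full_scale) / levels
    codes = np.clip(np.round(analog_values / step),
                    -levels // 2, levels // 2 - 1)
    return codes * step

rng = np.random.default_rng(1)
sums = rng.normal(0, 32, size=1000)          # synthetic analog partial sums
for bits in (4, 6, 8):
    err = np.mean((adc_read(sums, bits, full_scale=128) - sums) ** 2)
    print(bits, "bits -> MSE", round(float(err), 3))
```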
Published in:
56th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO '23), 2023
Sparse tensor algebra is a challenging class of workloads to accelerate due to low arithmetic intensity and varying sparsity patterns. Prior sparse tensor algebra accelerators have explored tiling sparse data to increase exploitable data reuse and improve …
External link:
http://arxiv.org/abs/2310.00192
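One reason tiling sparse data is hard, as the snippet hints: with unstructured sparsity, the number of nonzeros per tile varies widely, so fixed-size tiles have uneven work and buffer occupancy. A small illustrative sketch (synthetic matrix, arbitrary tile size):

```python
import numpy as np

rng = np.random.default_rng(2)
A = (rng.random((8, 8)) < 0.2).astype(int)   # ~20% nonzero, unstructured
T = 4                                        # tile size (hypothetical)

# Count nonzeros per T x T tile; occupancy differs from tile to tile.
for i in range(0, A.shape[0], T):
    for j in range(0, A.shape[1], T):
        tile = A[i:i + T, j:j + T]
        print(f"tile ({i},{j}): {int(tile.sum())} nonzeros")
```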
Published in:
IEEE Transactions on Robotics 40 (2024) 1339-1355
Energy consumption of memory accesses dominates the compute energy in energy-constrained robots, which require a compact 3D map of the environment to achieve autonomy. Recent mapping frameworks have focused only on reducing the map size while incurring significant …
External link:
http://arxiv.org/abs/2306.03740
Author:
Wu, Yannan Nellie, Tsai, Po-An, Muralidharan, Saurav, Parashar, Angshuman, Sze, Vivienne, Emer, Joel S.
Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees. To offer a good trade-off between accuracy and hardware …
External link:
http://arxiv.org/abs/2305.12718
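For reference, "sparsity degree" here is simply the fraction of zero-valued elements, and it varies with its source (e.g. pruned weights vs. ReLU activations). A quick illustration with synthetic tensors; the names and percentages are made up:

```python
import numpy as np

def sparsity(tensor):
    """Fraction of exactly-zero elements, i.e. the sparsity degree."""
    return float(np.mean(tensor == 0))

rng = np.random.default_rng(3)
dense    = rng.normal(size=(64, 64))                         # ~0% zeros
pruned   = np.where(rng.random((64, 64)) < 0.7, 0.0, dense)  # ~70% zeros
relu_out = np.maximum(dense, 0)                              # ~50% zeros

for name, t in [("dense", dense), ("pruned", pruned), ("relu", relu_out)]:
    print(name, round(sparsity(t), 2))
```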