Showing 1 - 10 of 52 for search: '"Yongpan Liu"'
Author:
Yongpan Liu, Xueqing Li, Huazhong Yang, Mingyang Gu, Yu Wang, Vijaykrishnan Narayanan, Hongtao Zhong
Published in:
IEEE Transactions on Circuits and Systems II: Express Briefs. 67:3402-3406
This brief presents the concept of one-shot refresh (OSR) for dynamic memories. What distinguishes OSR from conventional row-by-row refresh operations is that OSR is able to refresh all rows of the entire array with just one single refresh…
Author:
Feng Zhang, Jianfeng Gao, Xinghua Wang, Yiming Yang, Tian Wang, Yongpan Liu, Fei Tan, Liran Li, Yiming Wang
Published in:
IEEE Transactions on Circuits and Systems II: Express Briefs. 67:1534-1538
To reduce the energy consumption and latency incurred by the von Neumann architecture, this brief develops a complete computing-in-memory (CIM) convolutional macro based on a ReRAM array for the convolutional layers of a LeNet-like convolutional neural network…
Published in:
IEEE Transactions on Electron Devices. 67:3010-3013
In this brief, an ultracompact and highly reliable physical unclonable function (PUF) is presented, based on mainstream resistive random access memory (RRAM) devices. With the entropy originating from the switching voltage between the high-resistance…
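The entropy-harvesting idea can be illustrated with a toy model. The sketch below is not the paper's circuit: the device count, set-voltage mean/sigma, and read-noise level are all made-up parameters. It only shows why comparing the process-varied switching voltages of two paired RRAM devices yields a response bit that is stable across repeated reads.

```python
import numpy as np

# Toy model, NOT the paper's circuit: each RRAM device is assigned a random
# set (switching) voltage from static process variation; a PUF bit compares
# two paired devices. The bit reproduces across reads because the variation
# is frozen at fabrication while per-read noise is much smaller.
rng = np.random.default_rng(42)
n_bits = 128
v_set = rng.normal(1.2, 0.05, size=(n_bits, 2))  # hypothetical mean/sigma (V)

def read_response(v_set, read_noise=0.005):
    """One noisy evaluation of all PUF bits (read_noise is an assumption)."""
    v = v_set + rng.normal(0.0, read_noise, v_set.shape)
    return (v[:, 0] > v[:, 1]).astype(int)

r1, r2 = read_response(v_set), read_response(v_set)
reliability = (r1 == r2).mean()   # fraction of bits that reproduce
print(f"reliability across two reads: {reliability:.1%}")
```

Only the few bits whose paired devices happen to have nearly identical switching voltages are at risk of flipping, which is why reliability stays high in this model.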
Author:
Dunqiu Wang, Jinxiao Zhang, Jie Liu, Zhihong Tu, Danxia Liu, Liangliang Huang, Huijun He, Yongpan Liu
Published in:
Water, Volume 13, Issue 2, p. 240 (2021)
In this study, a high-performance adsorbent, Co@AC, was prepared by loading cobalt ions (Co2+) onto activated carbon (AC) via solution impregnation and high-temperature calcination, and was used to remove atrazine from water. The preparation factors…
Published in:
ASP-DAC
Nowadays, deep neural networks (DNNs) play an important role in machine learning. Non-volatile computing-in-memory (nvCIM) has become a new architecture for optimizing the hardware performance and energy efficiency of DNNs. However, the existing nvCIM…
Published in:
ASP-DAC
Block-circulant compression is a popular technique for accelerating neural network inference. Although storage and computing costs can be reduced by transforming weights into block-circulant matrices, this method incurs uneven data distribution in the…
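The storage and compute savings behind block-circulant compression can be sketched in a few lines. This is a generic illustration, not the paper's accelerator: each b×b block of the weight matrix is a circulant matrix stored as a single length-b first column, and its matrix-vector product is computed via the FFT in O(b log b) instead of O(b²).

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x, via FFT.
    Only the b entries of c are stored instead of the full b*b block."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def block_circulant_matvec(blocks, x, b):
    """blocks[i][j] holds the first column of the (i, j) circulant block."""
    k = len(blocks)
    xs = x.reshape(k, b)
    return np.concatenate(
        [sum(circulant_matvec(blocks[i][j], xs[j]) for j in range(k))
         for i in range(k)])

# Verify against the explicit dense weight matrix
rng = np.random.default_rng(0)
b, k = 4, 2
blocks = [[rng.standard_normal(b) for _ in range(k)] for _ in range(k)]

def circulant(c):
    # Column j of a circulant matrix is its first column rotated down by j
    return np.column_stack([np.roll(c, s) for s in range(len(c))])

W = np.block([[circulant(blocks[i][j]) for j in range(k)] for i in range(k)])
x = rng.standard_normal(b * k)
assert np.allclose(W @ x, block_circulant_matvec(blocks, x, b))
```

The "uneven data distribution" issue the abstract mentions arises on real hardware from how these per-block FFT workloads map onto parallel compute units; the sketch above shows only the arithmetic equivalence.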
Published in:
A-SSCC
This paper proposes an algorithm and hardware co-design methodology to accelerate CNNs for pix2pix tasks. An importance map is introduced to train an activation-sparse CNN model, which can effectively reduce the computing cost and external data transfer…
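As a rough illustration of the activation-sparsity idea only: in the sketch below the importance map is a random stand-in rather than a trained one, and the 25% keep ratio is an arbitrary choice, not a figure from the paper.

```python
import numpy as np

# Illustrative sketch: a per-position importance map gates a ReLU feature
# map, so a sparsity-aware engine can skip the zeroed positions entirely.
rng = np.random.default_rng(0)
act = np.maximum(rng.standard_normal((8, 8)), 0)   # ReLU feature map
importance = rng.random((8, 8))                    # stand-in for a learned map
keep = importance > np.quantile(importance, 0.75)  # keep the top 25%
sparse_act = act * keep
skipped = 1.0 - keep.mean()   # fraction of positions whose MACs can be skipped
print(f"compute skipped: {skipped:.0%}")
```

The training-time trick in such schemes is to make the mask differentiable (or to co-train the map with the task loss) so the kept activations are the ones that matter for output quality.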
Author:
Xueqing Li, Zhuqing Yuan, Zhe Yuan, Jinshan Yue, Huazhong Yang, Songming Yu, Jingyu Wang, Yongpan Liu
Published in:
DAC
Recently, CNN-based methods have made remarkable progress in broad fields. Both network pruning algorithms and hardware accelerators have been introduced to accelerate CNNs. However, existing pruning algorithms have not fully studied pattern pruning…
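Pattern pruning itself can be sketched generically; the four-entry pattern library and the magnitude-based selection below are illustrative assumptions, not the paper's algorithm. The point is that every kernel keeps weights only at positions drawn from a small shared library, which is what makes the resulting sparsity hardware-friendly.

```python
import numpy as np

# Hypothetical library of 3x3 patterns, each keeping 4 weight positions.
# Real pattern-pruning work derives the library from the trained weights.
PATTERNS = [
    np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]]),
    np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]]),
    np.array([[0, 0, 0], [1, 1, 0], [1, 1, 0]]),
    np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]]),
]

def prune_kernel(k):
    """Keep the pattern that preserves the most weight magnitude."""
    scores = [np.abs(k * p).sum() for p in PATTERNS]
    return k * PATTERNS[int(np.argmax(scores))]

rng = np.random.default_rng(1)
kernel = rng.standard_normal((3, 3))
pruned = prune_kernel(kernel)
assert (pruned != 0).sum() <= 4   # at most 4 surviving weights per kernel
```

Because all kernels index into the same small pattern set, an accelerator can store one pattern ID per kernel and keep its compute pipeline regular, unlike unstructured sparsity.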
Author:
Yixiong Yang, Jinshan Yue, Xiulong Wu, Xueqing Li, Ruoyang Liu, Zhiting Lin, Huazhong Yang, Yongpan Liu, Xiaoyu Feng, Zhe Yuan
Published in:
ISSCC
Convolutional neural networks (CNNs) have become widely used in image signal processing, such as tracking, classification, and post-processing. Modern CNNs use millions of weights and activations, leading to critical challenges for both computation and…
Published in:
ASP-DAC
Ferroelectric FETs (FeFETs) have emerged as a promising multi-level-cell (MLC) nonvolatile memory (NVM) candidate for low-power applications. This originates from the advantages of both efficient memory access and intrinsic device-level in-memory computing…