Showing 1 - 10 of 16 for the search: '"Hanmin Park"'
Published in:
Aerospace, Vol 11, Iss 3, p 203 (2024)
This study introduces a fruit harvesting mechanism powered by a single motor, designed for integration with unmanned aerial vehicles (UAVs). The mechanism performs reciprocating motion by converting linear motion into rotational motion. Consequently, … (a generic kinematics sketch follows this record)
External link:
https://doaj.org/article/3b3703ebd8e742d9aae5cd867e6f3827
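The snippet above breaks off mid-sentence, so the paper's actual linkage cannot be reproduced here. As a purely illustrative note on how rotary motor motion couples to linear reciprocating motion, here is a minimal slider-crank kinematics sketch in Python (the dimensions and the function name are hypothetical, not from the paper):

```python
import math

def slider_crank_position(theta, r, l):
    """Slider displacement of an ideal slider-crank linkage.

    theta: crank angle in radians
    r: crank radius
    l: connecting-rod length (must satisfy l > r)
    """
    # Exact kinematics: x = r*cos(theta) + sqrt(l^2 - (r*sin(theta))^2)
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

# One motor revolution sweeps the slider back and forth once.
r, l = 0.02, 0.08  # hypothetical dimensions in metres
for deg in range(0, 361, 60):
    x = slider_crank_position(math.radians(deg), r, l)
    print(f"crank {deg:3d} deg -> slider at {x:.4f} m")
```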
Author:
Hanmin PARK
Published in:
Uisahak, Vol 29, Iss 1, Pp 43-80 (2020)
In 1886, cholera was prevalent nationwide in Joseon. At that time, the Joseon government had not yet officially put overhauled quarantine rules into effect. Thus, quarantine efforts to prevent cholera varied depending on each of the three …
External link:
https://doaj.org/article/cd5a6afb6ae0464bb9ba05fb0b3ec577
Published in:
Advanced Engineering Materials.
Published in:
IEEE Transactions on Computers. 71:1537-1550
A vast number of the activation values of DNNs are zeros due to ReLU (Rectified Linear Unit), one of the most common activation functions used in modern neural networks. Since ReLU outputs zero for all negative inputs, the inputs to ReLU do not …
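The snippet is cut off, but its stated premise (ReLU maps every negative input to zero, so activations are highly sparse) is easy to demonstrate; a minimal NumPy sketch, with hypothetical layer shapes:

```python
import numpy as np

def relu(x):
    # ReLU outputs zero for every negative input.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
pre_activations = rng.standard_normal((4, 1024))  # hypothetical layer outputs
activations = relu(pre_activations)

sparsity = np.mean(activations == 0.0)
print(f"fraction of zero activations: {sparsity:.2%}")  # ~50% for zero-mean inputs
```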
Author:
Hyuk-Jae Lee, Kiyoung Choi, Heesu Kim, Eojin Lee, Hanmin Park, Taehyun Kim, Jinho Lee, Soojung Ryu, Kwanheum Cho
Published in:
HPCA
In this paper, we present GradPIM, a processing-in-memory architecture which accelerates parameter updates of deep neural network training. As one of the processing-in-memory techniques that could be realized in the near future, we propose an incremental …
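GradPIM's DRAM-level design cannot be reconstructed from this snippet, but the operation it targets, the per-parameter update step of DNN training, has a well-known software form. A plain SGD-with-momentum sketch, purely illustrative (the function name and hyperparameters are assumptions, not from the paper):

```python
import numpy as np

def sgd_momentum_update(weights, grads, velocity, lr=0.01, momentum=0.9):
    # This memory-bound read-modify-write pass over large parameter
    # arrays is the kind of work a processing-in-memory design offloads.
    velocity *= momentum
    velocity += grads
    weights -= lr * velocity
    return weights, velocity

w = np.zeros(1_000_000, dtype=np.float32)  # hypothetical parameter tensor
v = np.zeros_like(w)
g = np.random.default_rng(1).standard_normal(w.shape).astype(np.float32)
w, v = sgd_momentum_update(w, g, v)
```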
Published in:
2020 International Conference on Electronics, Information, and Communication (ICEIC).
The training time of a deep neural network has grown to the point that the training process may take many days or even weeks on a single device. Further, since conventional devices such as CPUs and GPUs pursue generality in their use, it is inevitable that they …
Published in:
DAC
The training process of a deep neural network commonly consists of three phases: forward propagation, backward propagation, and weight update. In this paper, we propose a hardware architecture to accelerate the backward propagation. Our approach applies …
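The three phases the snippet names are the standard training loop. A minimal NumPy sketch of one step for a single linear layer with a squared-error loss (illustrative only; the paper accelerates the backward phase in hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1   # hypothetical weight matrix
x = rng.standard_normal((8, 4))         # batch of inputs
y = rng.standard_normal((8, 3))         # regression targets
lr = 0.1

# Phase 1: forward propagation
pred = x @ W

# Phase 2: backward propagation (gradients of 0.5*||pred - y||^2)
grad_pred = pred - y
grad_W = x.T @ grad_pred   # gradient w.r.t. the weights
grad_x = grad_pred @ W.T   # gradient passed on to the previous layer

# Phase 3: weight update
W -= lr * grad_W
```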
Author:
Hanmin Park, Kiyoung Choi
Published in:
ASP-DAC
The datapath bit-width of hardware accelerators for convolutional neural network (CNN) inference is generally chosen to be wide enough that they can process upcoming, unknown CNNs. Here we introduce the cell division technique, which is …
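The snippet stops before defining the technique, so what follows is not the paper's cell division scheme. It is only a standard arithmetic illustration of the underlying idea that a wide multiplier can be composed from, and hence in principle split into, narrower ones (function name and widths are assumptions):

```python
def split_mul_16x16_from_8x8(a, b):
    """Compose a 16x16-bit multiply from four 8x8-bit partial products.

    Hypothetical illustration: a wide datapath cell can be subdivided
    so that its narrow parts serve low-precision operands directly.
    """
    a_hi, a_lo = a >> 8, a & 0xFF
    b_hi, b_lo = b >> 8, b & 0xFF
    return ((a_hi * b_hi) << 16) + ((a_hi * b_lo + a_lo * b_hi) << 8) + a_lo * b_lo

assert split_mul_16x16_from_8x8(51234, 60001) == 51234 * 60001
```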
Published in:
ICCD
Dynamic fixed-point (DFP) is one of the most successful attempts to reduce bit-widths in training neural networks. It has been reported that DFP can reduce the bit-widths of the training operations to 16 bits, except for parameter update operations; …
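Dynamic fixed-point shares one exponent (scale) across a tensor and re-chooses it as value ranges drift during training. A minimal 16-bit quantization sketch, assuming a per-tensor exponent picked from the maximum magnitude (illustrative, not the paper's exact scheme):

```python
import numpy as np

def to_dfp16(x):
    """Quantize a tensor to 16-bit dynamic fixed-point.

    One exponent is chosen per tensor so the largest magnitude fits
    a signed 16-bit mantissa; the exponent adapts on every call.
    """
    max_abs = float(np.max(np.abs(x))) or 1.0
    exp = int(np.ceil(np.log2(max_abs))) - 15  # leave 15 bits of headroom
    scale = 2.0 ** exp
    q = np.clip(np.round(x / scale), -32768, 32767).astype(np.int16)
    return q, exp

def from_dfp16(q, exp):
    return q.astype(np.float32) * (2.0 ** exp)

x = np.random.default_rng(2).standard_normal(1000).astype(np.float32)
q, exp = to_dfp16(x)
print("max abs round-trip error:", np.max(np.abs(from_dfp16(q, exp) - x)))
```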
Author:
Hanmin Park, Kiyoung Choi
Published in:
IET Computers & Digital Techniques. 10:37-44
This study presents a technique called adaptively weighted round-robin (RR) arbitration for equality of service in a many-core network-on-chip. The authors concentrate on a network congested with various traffic patterns generated by the applications …
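The snippet is cut off before the adaptation is described; as a baseline only, a plain weighted round-robin arbiter can be sketched as below. The adaptive weighting of the paper is not reproduced, and the fixed weights here are an assumption:

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Serve requesters cyclically, up to weights[i] grants per turn.

    queues: one deque of pending requests per input port
    weights: grants allotted to each port per round (the paper adapts
             these at runtime; here they are fixed, as an assumption)
    """
    order = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(min(w, len(q))):
                order.append(q.popleft())
    return order

ports = [deque(f"p{i}.{j}" for j in range(4)) for i in range(3)]
print(weighted_round_robin(ports, weights=[1, 2, 1]))
```

Giving port 1 a weight of 2 lets it drain twice as fast per round, which is the basic lever an adaptive scheme would tune under congestion.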