Showing 1 - 10 of 29 results for search: '"Kim, Youngsok"'
Author:
Noh, Si Ung, Hong, Junguk, Lim, Chaemin, Park, Seongyeon, Kim, Jeehyun, Kim, Hanjun, Kim, Youngsok, Lee, Jinho
Recent dual in-line memory modules (DIMMs) are starting to support processing-in-memory (PIM) by associating their memory banks with processing elements (PEs), allowing applications to overcome the data movement bottleneck by offloading memory-intensive …
External link:
http://arxiv.org/abs/2404.08871
The recent rapid advance of Large Language Models (LLMs) is mainly driven by the increase in the number of parameters. This has led to substantial memory capacity requirements, necessitating the use of dozens of GPUs just to meet the capacity. One popular …
External link:
http://arxiv.org/abs/2403.06664
With the advance of genome sequencing technology, the lengths of deoxyribonucleic acid (DNA) sequencing results are rapidly increasing at lower prices than ever. However, the longer lengths come at the cost of a heavy computational burden on aligning …
External link:
http://arxiv.org/abs/2403.06478
Graph neural networks (GNNs) are one of the rapidly growing fields within deep learning. While many distributed GNN training frameworks have been proposed to increase the training throughput, they face three limitations when applied to multi-server clusters …
External link:
http://arxiv.org/abs/2311.06837
Training large deep neural network models is highly challenging due to their tremendous computational and memory requirements. Blockwise distillation provides one promising method towards faster convergence by splitting a large model into multiple smaller …
External link:
http://arxiv.org/abs/2301.12443
Author:
Song, Jaeyong, Yim, Jinkyu, Jung, Jaewon, Jang, Hongsun, Kim, Hyung-Jin, Kim, Youngsok, Lee, Jinho
In training of modern large natural language processing (NLP) models, it has become common practice to split models across multiple GPUs using 3D parallelism. This technique, however, suffers from a high overhead of inter-node communication. Compressing …
External link:
http://arxiv.org/abs/2301.09830
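The abstract above is cut off at "Compressing …"; one common remedy in this line of work is gradient compression before inter-node exchange. A minimal, hypothetical top-k sketch (not necessarily the paper's actual method) looks like:

```python
import numpy as np

def topk_compress(grad, k):
    # Keep only the k largest-magnitude entries of a gradient tensor,
    # sending their indices and values instead of the dense tensor.
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], grad.shape

def topk_decompress(idx, values, shape):
    # Rebuild a dense (sparse-filled) gradient from indices and values.
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)
```

In practice such schemes also accumulate the dropped residual locally so that small gradients are eventually transmitted rather than lost.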
Graph convolutional networks (GCNs) are becoming increasingly popular as they overcome the limited applicability of prior neural networks. A GCN takes as input an arbitrarily structured graph and executes a series of layers which exploit the graph's …
External link:
http://arxiv.org/abs/2301.10388
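The GCN layers mentioned above aggregate each node's features over its neighborhood before a linear transform. A minimal dense sketch of one such layer (a generic formulation, not this paper's specific design) is:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    # One GCN layer: symmetrically normalize the adjacency (with
    # self-loops), aggregate neighbor features, apply a linear
    # transform, then a ReLU nonlinearity.
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # D^-1/2 (A+I) D^-1/2
    return np.maximum(norm @ feats @ weight, 0)  # aggregate, transform, ReLU
```

Real accelerators exploit the fact that `norm` is extremely sparse and the feature matrices are dense, which is exactly the irregular-plus-regular mix that makes GCN hardware design challenging.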
Author:
Yoo, Mingi, Song, Jaeyong, Lee, Hyeyoon, Lee, Jounghoo, Kim, Namhyung, Kim, Youngsok, Lee, Jinho
Graph convolutional networks (GCNs) are becoming increasingly popular as they can process a wide variety of data formats that prior deep neural networks cannot easily support. One key challenge in designing hardware accelerators for GCNs is the vast …
External link:
http://arxiv.org/abs/2301.09813
Author:
Hong, Deokki, Choi, Kanghyun, Lee, Hye Yoon, Yu, Joonsang, Park, Noseong, Kim, Youngsok, Lee, Jinho
Co-exploration of an optimal neural architecture and its hardware accelerator is an approach of rising interest which addresses the computational cost problem, especially in low-profile systems. The large co-exploration space is often handled by adopting …
External link:
http://arxiv.org/abs/2301.09312
Author:
Park, Seongyeon, Kim, Hajin, Ahmad, Tanveer, Ahmed, Nauman, Al-Ars, Zaid, Hofstee, H. Peter, Kim, Youngsok, Lee, Jinho
Sequence alignment forms an important backbone in many sequencing applications. A commonly used strategy for sequence alignment is approximate string matching with a two-dimensional dynamic programming approach. Although some prior work has been …
External link:
http://arxiv.org/abs/2301.09310
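The two-dimensional dynamic programming approach named in the abstract above can be sketched as a standard edit-distance table (generic costs shown for illustration; the paper's actual scoring scheme may differ):

```python
def align(a, b, match=0, mismatch=1, gap=1):
    # Fill an (len(a)+1) x (len(b)+1) table where dp[i][j] is the
    # minimum cost of aligning a[:i] against b[:j].
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap                      # delete all of a[:i]
    for j in range(1, n + 1):
        dp[0][j] = j * gap                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = min(dp[i - 1][j - 1] + sub,   # match/substitute
                           dp[i - 1][j] + gap,       # delete from a
                           dp[i][j - 1] + gap)       # insert from b
    return dp[m][n]
```

The quadratic table is what makes long-read alignment so computationally heavy, which motivates the hardware acceleration studied in this line of work.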