Showing 1 - 10 of 83 for search: '"Hwansoo Han"'
Published in:
Sensors, Vol 21, Iss 7, p 2321 (2021)
To process data from IoT and wearable devices, analysis tasks are often offloaded to the cloud. As the amount of sensing data ever increases, optimizing the data analytics frameworks is critical to the performance of processing sensed data. A key ap
External link:
https://doaj.org/article/efed91505e0d4d4c9e73705ccee3c5f6
Author:
Hanwoong Jung, Hexiang Ji, Alexey Pushchin, Maxim Ostapenko, Wenlong Niu, Ilya Palachev, Yutian Qu, Pavel Fedin, Yuri Gribov, Heewoo Nam, Dongguen Lim, Hyunjun Kim, Joonho Song, Seungwon Lee, Hwansoo Han
Published in:
Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization.
Published in:
Journal of KIISE. 48:859-864
Author:
Minseop Jeong, Hwansoo Han
Published in:
Journal of KIISE. 48:479-485
Published in:
IEEE Transactions on Computers. 70:332-346
Mission-critical embedded systems simultaneously run multiple graphics-processing-unit (GPU) computing tasks with different criticality and timeliness requirements. Considerable research effort has been dedicated to supporting the preemptive priority
Published in:
CASES
Nvidia GPUs can oversubscribe CPU-side RAM, elevating GPU programmability and enabling the GPU to use data beyond the size of GPU memory. However, oversubscription incurs the overhead of constantly exchanging pages between the GPU and the CPU. To this pr
Published in:
CASES
GPU profilers have been successfully used to analyze bottlenecks and slowdowns in GPU programs. Several instrumentation tools for profiling GPU binaries have been introduced, but these tools give little consideration to GPU architectures. In this paper,
Author:
Hwansoo Han, Youseok Nam
Published in:
SMA
Recent cloud IDE services provide containers as development environments to users. Since users have little knowledge of the specific tasks to run and the computing resources required in their containers, it is difficult to decide exactly how many containers t
Published in:
SMA
In the era of data-parallel analytics, caching intermediate results is a key method of speeding up the framework. Existing frameworks apply various caching policies depending on run-time context or the programmer's decision. Since caching still le
Published in:
KIISE Transactions on Computing Practices. 24:352-357