Showing 1 - 10 of 1,214 for search: '"Lee Jaeho"'
Author:
Kim Sanmun, Shin Jeong Min, Lee Jaeho, Park Chanhyung, Lee Songju, Park Juho, Seo Dongjin, Park Sehong, Park Chan Y., Jang Min Seok
Published in:
Nanophotonics, Vol 10, Iss 18, Pp 4533-4541 (2021)
The optical properties of thin-film light emitting diodes (LEDs) are strongly dependent on their structures due to light interference inside the devices. However, the complexity of the design space grows exponentially with the number of design parameters… (the exponential growth is illustrated in the sketch below)
External link:
https://doaj.org/article/bc049cbc8f49424ab8dead14590449dc
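To make the "exponential design space" concrete: with k candidate values per design parameter and n parameters, exhaustive search must cover k^n structures. A minimal sketch using a hypothetical thickness grid, not the paper's actual parameterization:

```python
# Exhaustive thin-film design spaces grow as k**n: k candidate thicknesses
# per layer, n layers. The grid below is hypothetical.
from itertools import product

thickness_grid_nm = [20, 40, 60, 80, 100]  # k = 5 candidates per layer

for n_layers in (2, 4, 6, 8, 10):
    n_designs = len(thickness_grid_nm) ** n_layers
    print(f"{n_layers:2d} layers -> {n_designs:>12,} candidate stacks")

# Materializing the smallest case shows what one grid point looks like:
print(next(iter(product(thickness_grid_nm, repeat=2))))  # (20, 20)
```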
Neural fields are an emerging paradigm that represents data as continuous functions parameterized by neural networks. Despite many advantages, neural fields often have a high training cost, which prevents broader adoption. In this paper, we focus on… (a minimal neural-field sketch follows the link)
External link:
http://arxiv.org/abs/2410.04779
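For context on the paradigm: a neural field is a network trained to map coordinates to signal values, and the per-signal fitting loop is where the training cost lives. A minimal PyTorch sketch with an illustrative architecture, not the paper's method:

```python
# Minimal neural field: represent one 1-D signal as a continuous function
# f(x) parameterized by a small MLP; running this fitting loop per signal
# is the training cost referred to above.
import torch
import torch.nn as nn

field = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)  # input coordinates
y = torch.sin(3 * torch.pi * x)                  # signal values to fit

opt = torch.optim.Adam(field.parameters(), lr=1e-3)
for _ in range(2000):
    loss = ((field(x) - y) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")
```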
Author:
Ok, Hyunjong, Lee, Jaeho
Singing voice synthesis and conversion have emerged as significant subdomains of voice generation, leading to high demand for prompt-conditioned generation. Unlike common voice data, generating a singing voice requires an understanding of various…
External link:
http://arxiv.org/abs/2409.09866
Recent studies have identified that language models, pretrained on text-only datasets, often lack elementary visual knowledge, e.g., the colors of everyday objects. Motivated by this observation, we ask whether a similar shortcoming exists in… (see the probing sketch below)
External link:
http://arxiv.org/abs/2409.08199
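A quick way to see the kind of gap being probed is to query a text-only masked LM for object colors. This illustrative probe uses Hugging Face's fill-mask pipeline and is not the paper's evaluation protocol:

```python
# Probing a text-only LM for everyday visual knowledge (illustrative only).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for obj in ("a banana", "the sky", "grass"):
    preds = fill(f"The color of {obj} is [MASK].", top_k=3)
    print(obj, "->", [p["token_str"] for p in preds])
```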
Author:
Byun, Yuji, Lee, Jaeho
Low-rank adaptation (LoRA) is an attractive alternative to adapting full weights for the federated fine-tuning of large pretrained models, as it can significantly reduce the memory and communication burden. In principle, federated LoRA can provide an… (the aggregation subtlety is sketched below)
External link:
http://arxiv.org/abs/2406.17477
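As background on why federated LoRA is communication-efficient yet subtle to aggregate: each client ships only the low-rank factors, but averaging factors is not the same as averaging the implied weight updates. A numpy sketch with hypothetical dimensions and client count:

```python
# Federated LoRA sketch: each client trains low-rank factors (A, B) so its
# weight update is delta_W = B @ A, and ships only A and B to the server.
import numpy as np

d_out, d_in, rank, n_clients = 32, 32, 4, 3
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(rank, d_in)), rng.normal(size=(d_out, rank)))
           for _ in range(n_clients)]

# Communication saving: factor parameters vs a full dense update.
print(rank * (d_in + d_out), "params/client vs", d_in * d_out, "dense")

# Naive server aggregation averages the factors separately...
A_avg = np.mean([A for A, _ in clients], axis=0)
B_avg = np.mean([B for _, B in clients], axis=0)

# ...but mean(B_i @ A_i) != mean(B_i) @ mean(A_i) in general, so the
# aggregated update drifts from the true average of client updates.
exact = np.mean([B @ A for A, B in clients], axis=0)
print("aggregation gap:", np.linalg.norm(B_avg @ A_avg - exact))
```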
How can small-scale large language models (LLMs) efficiently utilize the supervision of larger LLMs to improve their generative quality? This question has been well studied in scenarios where there is no restriction on the number of LLM supervisions one can… (a budgeted-supervision sketch follows the link)
External link:
http://arxiv.org/abs/2406.18002
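One natural reading of "restricted LLM supervision" is a budgeted loop in which the small model spends its limited calls on its least-confident drafts. A sketch with hypothetical interfaces (small_model.generate/confidence and large_model.refine are assumptions, not an API from the paper):

```python
# Budgeted LLM supervision (sketch): spend a fixed number of large-model
# calls on the drafts the small model is least confident about.
def distill_with_budget(small_model, large_model, prompts, budget):
    results = []
    # Rank prompts by the small model's confidence, lowest first.
    ranked = sorted(prompts, key=lambda p: small_model.confidence(p))
    for prompt in ranked:
        draft = small_model.generate(prompt)
        if budget > 0:
            draft = large_model.refine(prompt, draft)  # one supervision call
            budget -= 1
        results.append((prompt, draft))
    return results
```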
Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization
This work suggests fundamentally rethinking the current practice of pruning large language models (LLMs). The way it is done is by divide and conquer: split the model into submodels, sequentially prune them, and reconstruct predictions of the dense… (see the reconstruction sketch below)
External link:
http://arxiv.org/abs/2406.15524
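The reconstruction step named in the title can be written, for one linear submodel with dense weights W and calibration inputs X, as minimizing ||WX − ŴX||_F over the sparse Ŵ. A numpy sketch of magnitude pruning followed by row-wise least-squares refitting, with toy sizes rather than the paper's procedure:

```python
# Reconstruction error minimization for one pruned linear layer (sketch):
# keep the magnitude-pruning mask, then refit the surviving weights so the
# sparse layer reproduces the dense layer's outputs on calibration inputs X.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))        # dense weights
X = rng.normal(size=(16, 128))      # calibration activations
mask = np.abs(W) >= np.quantile(np.abs(W), 0.5)  # keep top 50% of weights

Y = W @ X                           # dense outputs to reconstruct
W_sparse = np.zeros_like(W)
for i in range(W.shape[0]):         # row-wise least squares on kept entries
    keep = mask[i]
    W_sparse[i, keep] = np.linalg.lstsq(X[keep].T, Y[i], rcond=None)[0]

err_pruned = np.linalg.norm((W * mask) @ X - Y)   # mask only
err_recon = np.linalg.norm(W_sparse @ X - Y)      # mask + refit
print(err_pruned, ">", err_recon)   # refitting reduces the output error
```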
Despite recent advances in LLM quantization, activation quantization remains challenging due to activation outliers. Conventional remedies, e.g., mixing precisions across channels, introduce extra overhead and reduce the speedup… (the outlier effect is illustrated below)
External link:
http://arxiv.org/abs/2406.12016
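To see why outliers make activation quantization hard: a single large value inflates the per-tensor scale, crushing the resolution left for every other activation. A small numpy illustration (synthetic activations, not the paper's data):

```python
# Why activation outliers break low-bit quantization: one outlier stretches
# the int8 quantization step for the whole tensor.
import numpy as np

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).clip(-127, 127) * scale

rng = np.random.default_rng(0)
acts = rng.normal(size=1024)
acts_outlier = acts.copy()
acts_outlier[0] = 100.0             # a single outlier activation

for name, a in (("no outlier", acts), ("with outlier", acts_outlier)):
    err = np.mean((a - quantize_int8(a)) ** 2)
    print(f"{name}: quantization MSE {err:.6f}")
```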
Author:
Liu, Xiaoning, Wu, Zongwei, Li, Ao, Vasluianu, Florin-Alexandru, Zhang, Yulun, Gu, Shuhang, Zhang, Le, Zhu, Ce, Timofte, Radu, Jin, Zhi, Wu, Hongjun, Wang, Chenxi, Ling, Haitao, Cai, Yuanhao, Bian, Hao, Zheng, Yuxin, Lin, Jing, Yuille, Alan, Shao, Ben, Guo, Jin, Liu, Tianli, Wu, Mohao, Feng, Yixu, Hou, Shuo, Lin, Haotian, Zhu, Yu, Wu, Peng, Dong, Wei, Sun, Jinqiu, Zhang, Yanning, Yan, Qingsen, Zou, Wenbin, Yang, Weipeng, Li, Yunxiang, Wei, Qiaomu, Ye, Tian, Chen, Sixiang, Zhang, Zhao, Zhao, Suiyi, Wang, Bo, Luo, Yan, Zuo, Zhichao, Wang, Mingshen, Wang, Junhu, Wei, Yanyan, Sun, Xiaopeng, Gao, Yu, Huang, Jiancheng, Chen, Hongming, Chen, Xiang, Tang, Hui, Chen, Yuanbin, Zhou, Yuanbo, Dai, Xinwei, Qiu, Xintao, Deng, Wei, Gao, Qinquan, Tong, Tong, Li, Mingjia, Hu, Jin, He, Xinyu, Guo, Xiaojie, Sabarinathan, Uma, K, Sasithradevi, A, Bama, B Sathya, Roomi, S. Mohamed Mansoor, Srivatsav, V., Wang, Jinjuan, Sun, Long, Chen, Qiuying, Shao, Jiahong, Zhang, Yizhi, Conde, Marcos V., Feijoo, Daniel, Benito, Juan C., García, Alvaro, Lee, Jaeho, Kim, Seongwan, A, Sharif S M, Khujaev, Nodirkhuja, Tsoy, Roman, Murtaza, Ali, Khairuddin, Uswah, Faudzi, Ahmad 'Athif Mohd, Malagi, Sampada, Joshi, Amogh, Akalwadi, Nikhil, Desai, Chaitra, Tabib, Ramesh Ashok, Mudenagudi, Uma, Lian, Wenyi, Lian, Wenjing, Kalyanshetti, Jagadeesh, Aralikatti, Vijayalaxmi Ashok, Yashaswini, Palani, Upasi, Nitish, Hegde, Dikshit, Patil, Ujwala, C, Sujata, Yan, Xingzhuo, Hao, Wei, Fu, Minghan, choksy, Pooja, Sarvaiya, Anjali, Upla, Kishor, Raja, Kiran, Yan, Hailong, Zhang, Yunkai, Li, Baiang, Zhang, Jingyi, Zheng, Huan
This paper reviews the NTIRE 2024 low light image enhancement challenge, highlighting the proposed solutions and results. The aim of this challenge is to discover an effective network design or solution capable of generating brighter, clearer, and visually…
External link:
http://arxiv.org/abs/2404.14248
Author:
Luo, Jiajian, Lee, Jaeho
Published in:
J. Appl. Phys. 135, 244503 (2024)
Thermoelectric coolers (TECs) offer a promising solution for direct cooling of local hotspots and active thermal management in advanced electronic systems. However, TECs present significant trade-offs among spatial cooling, heating, and power consumption… (the standard lumped model behind these trade-offs is sketched below)
External link:
http://arxiv.org/abs/2404.13441
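The trade-off mentioned in the abstract follows from the textbook single-stage TEC model (a general reference model, not the paper's): cooling power Q_c = αIT_c − I²R/2 − KΔT competes with input power P = αIΔT + I²R as the drive current I rises. A small sweep with hypothetical module parameters:

```python
# Textbook single-stage TEC model: alpha (Seebeck coefficient), R (electrical
# resistance), K (thermal conductance). Parameter values are hypothetical.
def tec_cooling_power(alpha, R, K, I, T_c, dT):
    """Q_c = alpha*I*T_c - 0.5*I**2*R - K*dT (heat pumped from the cold side)."""
    return alpha * I * T_c - 0.5 * I**2 * R - K * dT

def tec_input_power(alpha, R, I, dT):
    """P = alpha*I*dT + I**2*R (electrical power drawn by the module)."""
    return alpha * I * dT + I**2 * R

alpha, R, K = 0.05, 2.0, 0.5        # V/K, ohm, W/K
T_c, dT = 300.0, 10.0               # cold-side temperature and delta-T in K
for I in (0.5, 1.0, 2.0, 4.0):      # sweep current to expose the trade-off
    Qc = tec_cooling_power(alpha, R, K, I, T_c, dT)
    P = tec_input_power(alpha, R, I, dT)
    print(f"I={I:.1f} A: Q_c={Qc:6.2f} W, COP={Qc / P:5.2f}")
```

Joule heating grows as I², while Peltier pumping grows only linearly in I, which is why the coefficient of performance (COP) collapses at high current in the sweep above.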