Showing 1 - 10 of 244 for search: '"Kim, Hyunjae"'
Author:
Sohn, Jiwoong, Park, Yein, Yoon, Chanwoong, Park, Sihyeon, Hwang, Hyeon, Sung, Mujeen, Kim, Hyunjae, Kang, Jaewoo
Large language models (LLMs) hold significant potential for applications in biomedicine, but they struggle with hallucinations and outdated knowledge. While retrieval-augmented generation (RAG) is generally employed to address these issues, it also ha…
External link:
http://arxiv.org/abs/2411.00300
Author:
Lee, Taewhoo, Yoon, Chanwoong, Jang, Kyochul, Lee, Donghyeon, Song, Minju, Kim, Hyunjae, Kang, Jaewoo
Recent advancements in large language models (LLMs) capable of processing extremely long texts highlight the need for a dedicated evaluation benchmark to assess their long-context capabilities. However, existing methods, like the needle-in-a-haystack…
External link:
http://arxiv.org/abs/2410.16848
Author:
Gilson, Aidan, Ai, Xuguang, Xie, Qianqian, Srinivasan, Sahana, Pushpanathan, Krithi, Singer, Maxwell B., Huang, Jimin, Kim, Hyunjae, Long, Erping, Wan, Peixing, Del Priore, Luciano V., Ohno-Machado, Lucila, Xu, Hua, Liu, Dianbo, Adelman, Ron A., Tham, Yih-Chung, Chen, Qingyu
Large Language Models (LLMs) are poised to revolutionize healthcare. Ophthalmology-specific LLMs remain scarce and underexplored. We introduced an open-source, specialized LLM for ophthalmology, termed Language Enhanced Model for Eye (LEME). LEME was…
External link:
http://arxiv.org/abs/2410.03740
Generative models have become widely used in biomedical entity linking (BioEL) due to their excellent performance and efficient memory usage. However, these models are usually trained only with positive samples--entities that match the input mention'…
External link:
http://arxiv.org/abs/2408.16493
Author:
Yin, Yu, Kim, Hyunjae, Xiao, Xiao, Wei, Chih Hsuan, Kang, Jaewoo, Lu, Zhiyong, Xu, Hua, Fang, Meng, Chen, Qingyu
Published in:
J. Biomed. Inform. 159 (2024) 104731
Training a neural network-based biomedical named entity recognition (BioNER) model usually requires extensive and costly human annotations. While several studies have employed multi-task learning with multiple BioNER datasets to reduce human effort,…
External link:
http://arxiv.org/abs/2406.10671
Author:
Choi, Donghee, Gim, Mogan, Park, Donghyeon, Sung, Mujeen, Kim, Hyunjae, Kang, Jaewoo, Choi, Jihun
Published in:
LREC-COLING 2024
This paper introduces CookingSense, a descriptive collection of knowledge assertions in the culinary domain extracted from various sources, including web data, scientific papers, and recipes, from which knowledge covering a broad range of aspects is…
External link:
http://arxiv.org/abs/2405.00523
Author:
Kim, Hyunjae, Hwang, Hyeon, Lee, Jiwoo, Park, Sihyeon, Kim, Dain, Lee, Taewhoo, Yoon, Chanwoong, Sohn, Jiwoong, Choi, Donghee, Kang, Jaewoo
While recent advancements in commercial large language models (LLMs) have shown promising results in medical tasks, their closed-source nature poses significant privacy and security concerns, hindering their widespread use in the medical field. Despite…
External link:
http://arxiv.org/abs/2404.00376
Author:
Kim, Hyunjae, Yoon, Seunghyun, Bui, Trung, Zhao, Handong, Tran, Quan, Dernoncourt, Franck, Kang, Jaewoo
Contrastive language-image pre-training (CLIP) models have demonstrated considerable success across various vision-language tasks, such as text-to-image retrieval, where the model is required to effectively process natural language input to produce a…
External link:
http://arxiv.org/abs/2402.15120
Author:
Kim, Gangwoo, Kim, Hajung, Ji, Lei, Bae, Seongsu, Kim, Chanhwi, Sung, Mujeen, Kim, Hyunjae, Yan, Kun, Chang, Eric, Kang, Jaewoo
In this paper, we introduce CheXOFA, a new pre-trained vision-language model (VLM) for the chest X-ray domain. Our model is initially pre-trained on various multimodal datasets within the general domain before being transferred to the chest X-ray dom…
External link:
http://arxiv.org/abs/2307.07409
Question answering (QA) models often rely on large-scale training datasets, which necessitates the development of a data generation framework to reduce the cost of manual annotations. Although several recent studies have aimed to generate synthetic q…
External link:
http://arxiv.org/abs/2302.01691