Showing 1 - 10 of 271 for search: '"Lee, Hyunjae"'
The accelerated failure time (AFT) model is widely used to analyze relationships between variables in the presence of censored observations. However, this model relies on some assumptions, such as the error distribution, which can lead to biased or in…
External link:
http://arxiv.org/abs/2402.02128
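For context, a common textbook formulation of the AFT model (notation assumed here, not taken from the paper) treats the log survival time as linear in the covariates with a parametric error term:

    \log T_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sigma\,\varepsilon_i,
    \qquad Y_i = \min(T_i, C_i), \quad \delta_i = \mathbf{1}\{T_i \le C_i\},

where T_i is the survival time, C_i the censoring time, \delta_i the event indicator, and \varepsilon_i an error term whose assumed distribution (e.g., normal or extreme-value) is precisely the kind of assumption that, when misspecified, produces the bias the abstract refers to.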
Bayesian optimization (BO) has contributed greatly to improving model performance by iteratively suggesting promising hyperparameter configurations based on observations from multiple training trials. However, only partial knowledge (i.e., the measur…
External link:
http://arxiv.org/abs/2304.12666
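A minimal sketch of the iterative suggest-and-observe loop that BO performs, assuming a Gaussian-process surrogate with expected improvement; the objective and all names below are illustrative placeholders, not the paper's code:

    # Bayesian optimization sketch: GP surrogate + expected improvement (EI).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):
        # Hypothetical stand-in for an expensive training trial (validation loss).
        return np.sin(3 * x) + 0.1 * x ** 2

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(3, 1))            # a few initial random trials
    y = np.array([objective(x[0]) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(10):                            # suggest-observe iterations
        gp.fit(X, y)
        cand = np.linspace(-2, 2, 200).reshape(-1, 1)
        mu, sigma = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sigma, 1e-9)  # EI in minimization form
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = cand[np.argmax(ei)]               # most promising configuration
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))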
Published in:
Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, Proceedings, Part II
Encoded representations from a pretrained deep learning model (e.g., BERT text embeddings, penultimate CNN layer activations of an image) convey a rich set of features beneficial for information retrieval. Embeddings for a particular modality of data…
External link:
http://arxiv.org/abs/2304.11095
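To make retrieval over such encoded representations concrete, here is a minimal cosine-similarity nearest-neighbor sketch in NumPy; the random 768-dimensional vectors are placeholders standing in for real encoder outputs:

    # Top-k retrieval by cosine similarity over precomputed embeddings.
    import numpy as np

    def retrieve(query_vec, corpus_vecs, k=5):
        q = query_vec / np.linalg.norm(query_vec)
        C = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
        scores = C @ q                             # cosine similarity per document
        return np.argsort(-scores)[:k]             # indices of the top-k matches

    corpus = np.random.randn(1000, 768)            # placeholder embedding matrix
    query = np.random.randn(768)                   # placeholder query embedding
    print(retrieve(query, corpus))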
Author:
Lee, Hyunjae
This paper describes Difference-aware Deep continuous prompt for Contrastive Sentence Embeddings (D2CSE), which learns sentence embeddings. Compared to state-of-the-art approaches, D2CSE computes sentence vectors that are exceptionally good at distinguishing a s…
External link:
http://arxiv.org/abs/2304.08991
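For reference, a sketch of the standard InfoNCE loss used across contrastive sentence-embedding methods; whether D2CSE uses exactly this objective is an assumption, and its difference-aware continuous prompts are not reproduced here:

    # InfoNCE contrastive loss: matching pairs sit on the diagonal.
    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.05):
        # z1, z2: (batch, dim) embeddings of two views of the same sentences.
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        sim = z1 @ z2.t() / temperature            # pairwise cosine similarities
        labels = torch.arange(z1.size(0))          # positives on the diagonal
        return F.cross_entropy(sim, labels)

    z1, z2 = torch.randn(8, 768), torch.randn(8, 768)  # placeholder embeddings
    print(info_nce(z1, z2))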
Despite the evolution of Convolutional Neural Networks (CNNs), their performance is surprisingly dependent on the choice of hyperparameters. However, it remains challenging to efficiently explore a large hyperparameter search space due to the long trai…
External link:
http://arxiv.org/abs/2209.12499
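To illustrate the scale of the search-space problem, here is a generic random-search sketch over a small CNN hyperparameter space; this is a common baseline, not the paper's method, and every name below is hypothetical:

    # Random search over a toy CNN hyperparameter space.
    import random

    space = {
        "lr": lambda: 10 ** random.uniform(-4, -1),
        "batch_size": lambda: random.choice([32, 64, 128, 256]),
        "dropout": lambda: random.uniform(0.0, 0.5),
    }

    def sample_config():
        return {name: draw() for name, draw in space.items()}

    def train_and_eval(cfg):
        # Hypothetical stand-in for a long CNN training run.
        return random.random()

    best_score, best_cfg = float("inf"), None
    for _ in range(20):                            # each trial is expensive in practice
        cfg = sample_config()
        score = train_and_eval(cfg)
        if score < best_score:
            best_score, best_cfg = score, cfg
    print(best_score, best_cfg)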
Enhancing Semantic Understanding with Self-supervised Methods for Abstractive Dialogue Summarization
Published in:
Proc. Interspeech 2021, 796-800 (2021)
Contextualized word embeddings can lead to state-of-the-art performance in natural language understanding. Recently, a pretrained deep contextualized text encoder such as BERT has shown its potential in improving natural language tasks, including ab…
External link:
http://arxiv.org/abs/2209.00278
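A minimal sketch of obtaining contextualized representations of dialogue turns from a pretrained BERT encoder via the Hugging Face transformers API; the abstractive summarizer and any self-supervised training objectives are omitted, and the example turns are made up:

    # Encode dialogue turns with pretrained BERT; one [CLS] vector per turn.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    enc = AutoModel.from_pretrained("bert-base-uncased")

    dialogue = ["Hi, how can I help?", "My order hasn't arrived yet."]
    batch = tok(dialogue, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch)
    cls_vectors = out.last_hidden_state[:, 0]      # (num_turns, hidden_size)
    print(cls_vectors.shape)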
Author:
Yi, Hoon, Kim, Hodam, Kim, Ka Ram, Kim, Ju Hyeon, Kim, Juhee, Lee, Hyunjae, Grewal, Sanjeet S., Freeman, William D., Yeo, Woon-Hong
Published in:
In Biosensors and Bioelectronics, 1 July 2024, Vol. 255
Author:
Lee, Hyunjae (lhj501@gachon.ac.kr), Kim, Gildong, Shon, Jingeun (shon@gachon.ac.kr)
Published in:
Energies (ISSN 1996-1073), April 2024, Vol. 17, Issue 8, p. 1967, 17 pp.
Published in:
In Maritime Transport Research, December 2024, Vol. 7
A Lite BERT (ALBERT) has been introduced to scale up deep bidirectional representation learning for natural languages. Due to the lack of pretrained ALBERT models for the Korean language, the best available practice is the multilingual model or resorting…
External link:
http://arxiv.org/abs/2101.11363
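The multilingual fallback the abstract mentions looks roughly like this with the transformers API; the model choice is illustrative, not a recommendation from the paper:

    # Load a multilingual encoder as a stand-in when no Korean ALBERT exists.
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModel.from_pretrained("bert-base-multilingual-cased")
    ids = tok("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
    emb = model(**ids).last_hidden_state           # contextual token embeddings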