Showing 1 - 5 of 5 for search: '"Eom, SooHwan"'
Open-vocabulary 3D instance segmentation transcends traditional closed-vocabulary methods by enabling the identification of both previously seen and unseen objects in real-world scenarios. It leverages a dual-modality approach, utilizing both 3D point …
External link:
http://arxiv.org/abs/2408.08591
Author:
Yoon, Eunseop, Yoon, Hee Suk, Eom, SooHwan, Han, Gunsoo, Nam, Daniel Wontae, Jo, Daejin, On, Kyoung-Woon, Hasegawa-Johnson, Mark A., Kim, Sungwoong, Yoo, Chang D.
Reinforcement Learning from Human Feedback (RLHF) leverages human preference data to train language models to align more closely with human essence. These human preference data, however, are labeled at the sequence level, creating a mismatch between …
External link:
http://arxiv.org/abs/2407.16574
Author:
Eom, SooHwan, Yoon, Eunseop, Yoon, Hee Suk, Kim, Chanwoo, Hasegawa-Johnson, Mark, Yoo, Chang D.
In Automatic Speech Recognition (ASR) systems, a recurring obstacle is the generation of narrowly focused output distributions. This phenomenon emerges as a side effect of Connectionist Temporal Classification (CTC), a robust sequence learning tool …
External link:
http://arxiv.org/abs/2403.11578
Author:
Yoon, Eunseop, Yoon, Hee Suk, Gowda, Dhananjaya, Eom, SooHwan, Kim, Daehyeok, Harvill, John, Gao, Heting, Hasegawa-Johnson, Mark, Kim, Chanwoo, Yoo, Chang D.
The Text-to-Text Transfer Transformer (T5) has recently been considered for Grapheme-to-Phoneme (G2P) transduction. As a follow-up, a tokenizer-free byte-level model based on T5, referred to as ByT5, recently gave promising results on word-level G2P …
External link:
http://arxiv.org/abs/2308.08442
Academic article
This result cannot be displayed to unauthenticated users.
You must log in to view the result.