Showing 1 - 3 of 3 for search: '"Ohmura, Junki"'
Instruction-tuned Large Language Models (LLMs) have achieved remarkable performance across various benchmark tasks. While providing instructions to LLMs for guiding their generations is user-friendly, assessing their instruction-following capabilities…
External link:
http://arxiv.org/abs/2406.16356
Author:
Takida, Yuhta, Shibuya, Takashi, Liao, WeiHsiang, Lai, Chieh-Hsin, Ohmura, Junki, Uesaka, Toshimitsu, Murata, Naoki, Takahashi, Shusuke, Kumakura, Toshiyuki, Mitsufuji, Yuki
One noted issue of vector-quantized variational autoencoder (VQ-VAE) is that the learned discrete representation uses only a fraction of the full capacity of the codebook, also known as codebook collapse. We hypothesize that the training scheme of VQ…
External link:
http://arxiv.org/abs/2205.07547
Author:
Ohmura, Junki, Eskenazi, Maxine
Dialog response ranking is used to rank response candidates by considering their relation to the dialog history. Although researchers have addressed this concept for open-domain dialogs, little attention has been focused on task-oriented dialogs. Fur…
External link:
http://arxiv.org/abs/1811.11430