Showing 1 - 10 of 148 for search: '"Sun, Shao-Hua"'
Author:
Hung, Yu-Kai, Huang, Yun-Chien, Su, Ting-Yu, Lin, Yen-Ting, Cheng, Lung-Pan, Wang, Bryan, Sun, Shao-Hua
Audience feedback is crucial for refining video content, yet it typically comes after publication, limiting creators' ability to make timely adjustments. To bridge this gap, we introduce SimTube, a generative AI system designed to simulate audience feedback…
External link:
http://arxiv.org/abs/2411.09577
Learning from observation (LfO) aims to imitate experts by learning from state-only demonstrations without requiring action labels. Existing adversarial imitation learning approaches learn a generator agent policy to produce state transitions that are…
External link:
http://arxiv.org/abs/2410.05429
Author:
Hiranaka, Ayano, Chen, Shang-Fu, Lai, Chieh-Hsin, Kim, Dongjun, Murata, Naoki, Shibuya, Takashi, Liao, Wei-Hsiang, Sun, Shao-Hua, Mitsufuji, Yuki
Controllable generation through Stable Diffusion (SD) fine-tuning aims to improve fidelity, safety, and alignment with human guidance. Existing reinforcement learning from human feedback methods usually rely on predefined heuristic reward functions or…
External link:
http://arxiv.org/abs/2410.05116
Programmatic reinforcement learning (PRL) has been explored for representing policies through programs as a means to achieve interpretability and generalization. Despite promising outcomes, current state-of-the-art PRL methods are hindered by sample…
External link:
http://arxiv.org/abs/2405.16450
Author:
Huang, Chun-Kai, Hsieh, Yi-Hsien, Chien, Ta-Jung, Chien, Li-Cheng, Sun, Shao-Hua, Su, Tung-Hung, Kao, Jia-Horng, Lin, Che
Multivariate time series (MTS) data, when sampled irregularly and asynchronously, often present extensive missing values. Conventional methodologies for MTS analysis tend to rely on temporal embeddings based on timestamps that necessitate subsequent…
External link:
http://arxiv.org/abs/2405.16557
Author:
Lai, Chun-Mao, Wang, Hsiang-Chun, Hsieh, Ping-Chun, Wang, Yu-Chiang Frank, Chen, Min-Hung, Sun, Shao-Hua
Imitation learning aims to learn a policy from observing expert demonstrations without access to reward signals from environments. Generative adversarial imitation learning (GAIL) formulates imitation learning as adversarial learning, employing a generator…
External link:
http://arxiv.org/abs/2405.16194
Large language models (LLMs) have shown exceptional proficiency in natural language processing but often fall short of generating creative and original responses to open-ended questions. To enhance LLM creativity, our key insight is to emulate the human…
External link:
http://arxiv.org/abs/2405.06373
Author:
Tseng, Liang-Hsuan, Hu, En-Pei, Chiang, Cheng-Han, Tseng, Yuan, Lee, Hung-yi, Lee, Lin-shan, Sun, Shao-Hua
Unsupervised automatic speech recognition (ASR) aims to learn the mapping between the speech signal and its corresponding textual transcription without the supervision of paired speech-text data. A word/phoneme in the speech signal is represented by…
External link:
http://arxiv.org/abs/2402.03988
Deep reinforcement learning (deep RL) excels in various domains but lacks generalizability and interpretability. On the other hand, programmatic RL methods (Trivedi et al., 2021; Liu et al., 2023) reformulate RL tasks as synthesizing interpretable programs…
External link:
http://arxiv.org/abs/2311.15960
Author:
Suwono, Nicholas Collin, Chen, Justin Chih-Yao, Hung, Tun Min, Huang, Ting-Hao Kenneth, Liao, I-Bin, Li, Yung-Hui, Ku, Lun-Wei, Sun, Shao-Hua
This work introduces a novel task, location-aware visual question generation (LocaVQG), which aims to generate engaging questions from data relevant to a particular geographical location. Specifically, we represent such location-aware information with…
External link:
http://arxiv.org/abs/2310.15129