Showing 101 - 110 of 2,467 for search: '"Wang, William"'
Author:
Hu, Xiyang, Chen, Xinchi, Qi, Peng, Kong, Deguang, Liu, Kunlun, Wang, William Yang, Huang, Zhiheng
Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages. We present an effective method to train multilingual IR systems when only English IR training data and some parallel corpora… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2210.06633
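As a rough illustration of the setup described in the entry above, the sketch below shows one generic way parallel corpora can supplement English-only IR training: an alignment loss on translation pairs combined with an English ranking loss. The encoder, loss weighting, and data variables are hypothetical, and this is not necessarily the actual method of arXiv:2210.06633.

```python
# Generic sketch: cross-lingual transfer for retrieval using parallel corpora.
# Not the specific method of arXiv:2210.06633; `encoder` is any multilingual
# text encoder returning a (batch, dim) embedding tensor.
import torch
import torch.nn.functional as F

def alignment_loss(encoder, src_sentences, tgt_sentences):
    """Pull embeddings of parallel (translation) sentence pairs together."""
    src_emb = F.normalize(encoder(src_sentences), dim=-1)
    tgt_emb = F.normalize(encoder(tgt_sentences), dim=-1)
    # Minimize 1 - cosine similarity of each translation pair.
    return (1.0 - (src_emb * tgt_emb).sum(dim=-1)).mean()

def ranking_loss(encoder, queries, positives, negatives, margin=0.2):
    """English-only triplet ranking loss for retrieval training."""
    q = F.normalize(encoder(queries), dim=-1)
    p = F.normalize(encoder(positives), dim=-1)
    n = F.normalize(encoder(negatives), dim=-1)
    pos_sim = (q * p).sum(dim=-1)
    neg_sim = (q * n).sum(dim=-1)
    return F.relu(margin - pos_sim + neg_sim).mean()

# total_loss = ranking_loss(...) + lambda_align * alignment_loss(...)
```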
Contrastive Language-Image Pretraining (CLIP) efficiently learns visual concepts by pre-training with natural language supervision. CLIP and its visual encoder have been explored on various vision and language tasks and achieve strong zero-shot or transfer… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2210.05836
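For context on the entry above, the snippet below sketches the standard CLIP-style contrastive objective: matched image and text embeddings are pulled together while mismatched pairs in the batch are pushed apart. It is a minimal illustration of contrastive language-image pretraining in general, not code from arXiv:2210.05836; the image and text embeddings are assumed to come from any pair of encoders.

```python
# Minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss.
# `image_emb` and `text_emb` are (batch, dim) embeddings of paired inputs.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # logits[i, j] = similarity between image i and text j.
    logits = image_emb @ text_emb.t() / temperature

    # Matching pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image
    return (loss_i2t + loss_t2i) / 2
```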
Is it possible to build a general and automatic natural language generation (NLG) evaluation metric? Existing learned metrics either perform unsatisfactorily or are restricted to tasks where large human rating data is already available. We introduce…
External link:
http://arxiv.org/abs/2210.05035
Author:
Zhu, Wanrong, Yan, An, Lu, Yujie, Xu, Wenda, Wang, Xin Eric, Eckstein, Miguel, Wang, William Yang
Recent advances in text-to-image synthesis make it possible to visualize machine imaginations for a given context. On the other hand, when generating text, human writers are gifted at creative visualization, which enhances their writings by forming…
External link:
http://arxiv.org/abs/2210.03765
With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching. The community is experiencing the shift of the challenge from how to model language…
External link:
http://arxiv.org/abs/2210.03849
Reinforcement learning (RL) agents have long sought to approach the efficiency of human learning. Humans are great observers who can learn by aggregating external knowledge from various sources, including observations from others' policies of attempt…
External link:
http://arxiv.org/abs/2210.03729
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data that involves multiple sub-components in a flexible and interpretable fashion. Here, we develop an approach that improves expressiveness…
External link:
http://arxiv.org/abs/2210.03728
Author:
Yu, Donghan, Zhang, Sheng, Ng, Patrick, Zhu, Henghui, Li, Alexander Hanbo, Wang, Jun, Hu, Yiqun, Wang, William, Wang, Zhiguo, Xiang, Bing
Question answering over knowledge bases (KBs) aims to answer natural language questions with factual information such as entities and relations in KBs. Previous methods either generate logical forms that can be executed over KBs to obtain final answers… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2210.00063
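To make the "execute a logical form over a KB" paradigm mentioned above concrete, here is a toy illustration: the KB is a set of (subject, relation, object) triples and a one-hop query is executed against it. The triples, relation names, and query form are hypothetical simplifications, not the formalism of arXiv:2210.00063.

```python
# Toy knowledge base as a set of (subject, relation, object) triples.
KB = {
    ("Barack_Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Barack_Obama", "spouse", "Michelle_Obama"),
}

def execute_one_hop(subject, relation, kb=KB):
    """Return every object o such that (subject, relation, o) is in the KB."""
    return {o for (s, r, o) in kb if s == subject and r == relation}

# "Where was Barack Obama born?" -> logical form (Barack_Obama, born_in, ?x)
print(execute_one_hop("Barack_Obama", "born_in"))  # {'Honolulu'}
```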
Author:
Lu, Yujie, Zhang, Huiliang, Nie, Ping, Feng, Weixi, Xu, Wenda, Wang, Xin Eric, Wang, William Yang
Vision-Language Navigation requires the agent to follow natural language instructions to reach a specific target. The large discrepancy between seen and unseen environments makes it challenging for the agent to generalize well. Previous studies propose…
External link:
http://arxiv.org/abs/2209.04725
Author:
Fu, Tsu-Jui, Li, Linjie, Gan, Zhe, Lin, Kevin, Wang, William Yang, Wang, Lijuan, Liu, Zicheng
Masked visual modeling (MVM) has recently been proven effective for visual pre-training. While similar reconstructive objectives on video inputs (e.g., masked frame modeling) have been explored in video-language (VidL) pre-training, previous studies… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2209.01540
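As a rough illustration of the masked visual modeling idea named in the entry above, the sketch below masks a random subset of video patch tokens and regresses the reconstruction against the original patches at the masked positions. The masking ratio, zero-replacement of masked tokens, and regression target are assumptions for illustration, not the exact setup of arXiv:2209.01540.

```python
# Minimal sketch of a masked visual modeling (MVM) objective on video patches.
# `encoder` and `decoder` are any callables mapping (batch, num_patches, dim)
# tensors to tensors of the same shape.
import torch
import torch.nn.functional as F

def masked_visual_modeling_loss(patch_tokens, encoder, decoder, mask_ratio=0.6):
    """patch_tokens: (batch, num_patches, dim) patch embeddings of video frames."""
    b, n, d = patch_tokens.shape

    # Randomly choose which patches to mask for each example in the batch.
    mask = torch.rand(b, n, device=patch_tokens.device) < mask_ratio

    # Replace masked patches with zeros (a learnable mask token in practice).
    visible = patch_tokens.masked_fill(mask.unsqueeze(-1), 0.0)

    # Encode the corrupted sequence and predict the original patch embeddings.
    reconstructed = decoder(encoder(visible))

    # Regress only the masked positions against the original patches.
    return F.mse_loss(reconstructed[mask], patch_tokens[mask])
```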