Showing 1 - 10 of 302 for search: '"FUNG, PASCALE"'
Author:
Ji, Ziwei, Chen, Delong, Ishii, Etsuko, Cahyawijaya, Samuel, Bang, Yejin, Wilie, Bryan, Fung, Pascale
The hallucination problem of Large Language Models (LLMs) significantly limits their reliability and trustworthiness. Humans have a self-awareness process that allows us to recognize what we don't know when faced with queries. Inspired by this, our…
External link:
http://arxiv.org/abs/2407.03282
The capability to reason from text is crucial for real-world NLP applications. Real-world scenarios often involve incomplete or evolving data. In response, individuals update their beliefs and understandings accordingly. However, most existing…
External link:
http://arxiv.org/abs/2406.19764
This paper establishes a formal information-theoretic framework for image captioning, conceptualizing captions as compressed linguistic representations that selectively encode semantic units in images. Our framework posits that good image captions…
External link:
http://arxiv.org/abs/2405.00485
Published in:
Computational Linguistics, Vol 46, Iss 2, Pp 249-255 (2020)
We introduce the Computational Linguistics special issue on Multilingual and Interlingual Semantic Representations for Natural Language Processing. We situate the special issue’s five articles in the context of our fast-changing field, explaining…
External link:
https://doaj.org/article/5971ced8b5a34a6dbf016e697e67de23
Author:
Cahyawijaya, Samuel, Chen, Delong, Bang, Yejin, Khalatbari, Leila, Wilie, Bryan, Ji, Ziwei, Ishii, Etsuko, Fung, Pascale
The widespread application of Large Language Models (LLMs) across various tasks and fields has necessitated the alignment of these models with human values and preferences. Given the various approaches to human value alignment, ranging from Reinforcement…
External link:
http://arxiv.org/abs/2404.07900
Author:
Cahyawijaya, Samuel, Lovenia, Holy, Koto, Fajri, Putri, Rifki Afina, Dave, Emmanuel, Lee, Jhonson, Shadieq, Nuur, Cenggoro, Wawan, Akbar, Salsabil Maulana, Mahendra, Muhammad Ihza, Putri, Dea Annisayanti, Wilie, Bryan, Winata, Genta Indra, Aji, Alham Fikri, Purwarianti, Ayu, Fung, Pascale
Large language models (LLMs) show remarkable human-like capability in various domains and languages. However, a notable quality gap arises in low-resource languages, e.g., Indonesian indigenous languages, rendering them ineffective and inefficient in…
External link:
http://arxiv.org/abs/2404.06138
We propose to measure political bias in LLMs by analyzing both the content and style of their generated content regarding political issues. Existing benchmarks and measures focus on gender and racial biases. However, political bias exists in LLMs and…
External link:
http://arxiv.org/abs/2403.18932
In-context learning (ICL) empowers large language models (LLMs) to perform diverse tasks in underrepresented languages using only short in-context information, offering a crucial avenue for narrowing the gap between high-resource and low-resource…
External link:
http://arxiv.org/abs/2403.16512
Transformer-based vision models typically tokenize images into fixed-size square patches as input units, which lacks the adaptability to image content and overlooks the inherent pixel grouping structure. Inspired by the subword tokenization widely adopted…
External link:
http://arxiv.org/abs/2402.14327
Author:
Kim, Jaehyung, Mao, Yuning, Hou, Rui, Yu, Hanchao, Liang, Davis, Fung, Pascale, Wang, Qifan, Feng, Fuli, Huang, Lifu, Khabsa, Madian
Fine-tuning pre-trained language models (LMs) has become the de facto standard in many NLP tasks. Nevertheless, fine-tuned LMs are still prone to robustness issues, such as adversarial robustness and model calibration. Several perspectives of robustness…
External link:
http://arxiv.org/abs/2312.04032