Showing 1 - 10 of 116 for search: '"Pan, Shimei"'
Author:
Zhang, Tao, Zeng, Ziqian, Xiao, Yuxiang, Zhuang, Huiping, Chen, Cen, Foulds, James, Pan, Shimei
Large Language Models (LLMs) are prone to generating content that exhibits gender biases, raising significant ethical concerns. Alignment, the process of fine-tuning LLMs to better align with desired behaviors, is recognized as an effective approach …
External link:
http://arxiv.org/abs/2406.13925
Large language models (LLMs) like ChatGPT demonstrate the remarkable progress of artificial intelligence. However, their tendency to hallucinate, i.e., to generate plausible but false information, poses a significant challenge. This issue is critical, as …
External link:
http://arxiv.org/abs/2403.01193
Speech data has rich acoustic and paralinguistic information with important cues for understanding a speaker's tone, emotion, and intent, yet traditional large language models such as BERT do not incorporate this information. There has been an increasing …
External link:
http://arxiv.org/abs/2311.07014
Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversation agents. However, these models suffer from "hallucinations," where the model generates false or fabricated information. Addressing this challenge …
External link:
http://arxiv.org/abs/2306.06085
We have developed a set of Python applications that use large language models to identify and analyze data from social media platforms relevant to a population of interest. Our pipeline begins with using OpenAI's GPT-3 to generate potential keywords …
External link:
http://arxiv.org/abs/2301.05198
Author:
Brath, Richard, Keim, Daniel, Knittel, Johannes, Pan, Shimei, Sommerauer, Pia, Strobelt, Hendrik
With a constant increase in learned parameters, modern neural language models become increasingly more powerful. Yet explaining these complex models' behavior remains a widely unsolved problem. In this paper, we discuss the role interactive visualization …
External link:
http://arxiv.org/abs/2301.04528
It is now well understood that machine learning models, trained on data without due care, often exhibit unfair and discriminatory behavior against certain populations. Traditional algorithmic fairness research has mainly focused on supervised learning …
External link:
http://arxiv.org/abs/2209.07044
When a human receives a prediction or recommended course of action from an intelligent agent, what additional information, beyond the prediction or recommendation itself, does the human require from the agent to decide whether to trust or reject the …
External link:
http://arxiv.org/abs/2205.02987
The importance of understanding and correcting algorithmic bias in machine learning (ML) has led to an increase in research on fairness in ML, which typically assumes that the underlying data is independent and identically distributed (IID). However, …
External link:
http://arxiv.org/abs/2202.07170
Author:
Wang, Clarice, Wang, Kathryn, Bian, Andrew, Islam, Rashidul, Keya, Kamrun Naher, Foulds, James, Pan, Shimei
Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research that aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain …
External link:
http://arxiv.org/abs/2106.07112