Showing 1 - 10 of 73 for search: '"Chen, Yida"'
While the biases of language models in production are extensively documented, the biases of their guardrails have been neglected. This paper studies how contextual information about the user influences the likelihood that an LLM will refuse to execute a…
External link:
http://arxiv.org/abs/2407.06866
Author:
Chen, Yida, Wu, Aoyu, DePodesta, Trevor, Yeh, Catherine, Li, Kenneth, Marin, Nicholas Castillo, Patel, Oam, Riecke, Jan, Raval, Shivam, Seow, Olivia, Wattenberg, Martin, Viégas, Fernanda
Conversational LLMs function as black-box systems, leaving users guessing about why they see the output they do. This lack of transparency is potentially problematic, especially given concerns around bias and truthfulness. To address this issue, we…
External link:
http://arxiv.org/abs/2406.07882
Recent work found high mutual information between the learned representations of large language models (LLMs) and the geospatial properties of their inputs, hinting at an emergent internal model of space. However, whether this internal space model has any…
External link:
http://arxiv.org/abs/2312.16257
Published in:
2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE). IEEE, 2023: 507-510
The complex background in soil images collected in the natural field environment affects subsequent machine-vision-based soil image recognition. Segmenting the central soil area from the soil image can eliminate the influence of the complex background…
External link:
http://arxiv.org/abs/2309.00817
Latent diffusion models (LDMs) exhibit an impressive ability to produce realistic images, yet the inner workings of these models remain mysterious. Even when trained purely on images without explicit depth information, they typically output coherent…
External link:
http://arxiv.org/abs/2306.05720
Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows…
External link:
http://arxiv.org/abs/2305.03210
When the input signals are correlated and both the input and output signals are contaminated by Gaussian noise, the total least squares normalized subband adaptive filter (TLS-NSAF) algorithm shows good performance. However, when it is disturbed…
External link:
http://arxiv.org/abs/2211.03283
Author:
Fang, Ze, Chen, Bo, Huang, Chengda, Yuan, Yifei, Luo, Yao, Wu, Liubin, Chen, Yida, Huang, Yuqing, Yang, Yu, Lin, Enping, Chen, Zhong
Published in:
In Analytica Chimica Acta 15 May 2024 1303
Author:
He, Xu, Chen, Mimi, Zhang, Xiongjinfu, Cheng, Xinyi, Chen, Yida, Shen, Hao, Yang, Huilin, Shi, Qin, Niu, Junjie
Published in:
In Fundamental Research February 2024
Author:
Chen, Bo, Wu, Liubin, Chen, Yida, Fang, Ze, Huang, Yuqing, Yang, Yu, Lin, Enping, Chen, Zhong
Published in:
In Journal of Magnetic Resonance October 2023 355