Showing 1 - 10 of 36,868
for search: '"Shieh, A"'
Large Language Models (LLMs) face safety concerns due to potential misuse by malicious users. Recent red-teaming efforts have identified adversarial suffixes capable of jailbreaking LLMs using the gradient-based search algorithm Greedy Coordinate
External link:
http://arxiv.org/abs/2408.14866
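The snippet above refers to a gradient-based greedy coordinate search for adversarial suffixes. A toy, hedged sketch of the coordinate-wise greedy idea follows; a string-matching surrogate loss and random candidate proposals stand in for the real algorithm's model gradients and log-likelihood objective, and every name below is illustrative:

```python
import random

# Toy greedy coordinate search: repeatedly pick one suffix position and
# keep any candidate token swap that lowers a surrogate loss.
VOCAB = list("abcdefghijklmnopqrstuvwxyz")
TARGET = "jailbreak"  # hypothetical objective: match this string


def loss(suffix):
    # Surrogate loss: number of positions that mismatch TARGET.
    # (The real attack minimizes a model-based loss instead.)
    return sum(a != b for a, b in zip(suffix, TARGET))


def greedy_coordinate_search(steps=200, seed=0):
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(len(TARGET))]
    for _ in range(steps):
        pos = rng.randrange(len(suffix))        # pick one coordinate
        for cand in rng.sample(VOCAB, 8):       # try a few candidate swaps
            trial = suffix.copy()
            trial[pos] = cand
            if loss(trial) < loss(suffix):      # keep strict improvements
                suffix = trial
        if loss(suffix) == 0:
            break
    return "".join(suffix)


print(greedy_coordinate_search())
```

Because each accepted swap strictly lowers the loss, the suffix only ever improves under the surrogate objective.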
Author:
Liu, Xiangyan, Lan, Bo, Hu, Zhiyuan, Liu, Yang, Zhang, Zhicheng, Wang, Fei, Shieh, Michael, Zhou, Wenmeng
Large Language Models (LLMs) excel in stand-alone code tasks like HumanEval and MBPP, but struggle with handling entire code repositories. This challenge has prompted research on enhancing LLM-codebase interaction at a repository scale. Current solut
External link:
http://arxiv.org/abs/2408.03910
Author:
Dhami, N. S., Balédent, V., Batistić, I., Bednarchuk, O., Kaczorowski, D., Itié, J. P., Shieh, S. R., Kumar, C. M. N., Utsumi, Y.
The antiferromagnetic intermetallic compound EuRhGe3 crystallizes in a non-centrosymmetric BaNiSn3-type (I4mm) structure. We studied its pressure-dependent crystal structure using synchrotron powder x-ray diffraction at room temperature. Our results sh
External link:
http://arxiv.org/abs/2408.00410
Author:
Shieh, Chung-Tsun, Tsai, Tzong-Mo
This research investigates the inverse spectral problem for the Sturm-Liouville operator with many frozen arguments. Under some assumptions, the authors obtain uniqueness theorems. At the end, a numerical simulation for the inverse problem
External link:
http://arxiv.org/abs/2407.14889
When LLMs are deployed in sensitive, human-facing settings, it is crucial that they do not output unsafe, biased, or privacy-violating outputs. For this reason, models are both trained and instructed to refuse to answer unsafe prompts such as "Tell m
External link:
http://arxiv.org/abs/2407.03232
We introduce a defense against adversarial attacks on LLMs utilizing self-evaluation. Our method requires no model fine-tuning, instead using pre-trained models to evaluate the inputs and outputs of a generator model, significantly reducing the cost
External link:
http://arxiv.org/abs/2407.03234
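The abstract above describes a defense in which a separate pre-trained model evaluates the generator's inputs and outputs, with no fine-tuning required. A minimal sketch of that wrapper pattern, assuming a toy keyword classifier as a stand-in for the pre-trained evaluator (the marker list, function names, and refusal text are all hypothetical):

```python
# Hedged sketch: route the generator's output through an evaluator
# before returning it; block responses the evaluator flags as unsafe.
UNSAFE_MARKERS = ("make a bomb", "steal credentials")  # toy stand-in list


def evaluator_is_safe(prompt: str, response: str) -> bool:
    # Stand-in for a pre-trained LM judging the (input, output) pair.
    text = (prompt + " " + response).lower()
    return not any(marker in text for marker in UNSAFE_MARKERS)


def defended_generate(prompt: str, generate) -> str:
    response = generate(prompt)
    if evaluator_is_safe(prompt, response):
        return response
    return "I can't help with that."  # refusal on failed evaluation


# Usage with a hypothetical echo-style generator:
echo = lambda p: f"Here is how to {p}"
print(defended_generate("bake bread", echo))    # passes evaluation
print(defended_generate("make a bomb", echo))   # blocked, refusal returned
```

The design keeps the generator untouched; only an extra evaluation pass is added at inference time, which matches the abstract's claim of avoiding fine-tuning costs.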
Author:
Lin, Yu-Hsiang, Shieh, Huang-Ting, Liu, Chih-Yu, Lee, Kuang-Ting, Chang, Hsiao-Cheng, Yang, Jing-Lun, Lin, Yu-Sheng
Extrapolation in Large Language Models (LLMs) for open-ended inquiry encounters two pivotal issues: (1) hallucination and (2) expensive training costs. These issues present challenges for LLMs in specialized domains and personalized data, requiring t
External link:
http://arxiv.org/abs/2405.12656
This project addresses the challenge of human motion prediction, a critical area for applications such as autonomous vehicle movement detection. Previous works have emphasized the need for low inference times to provide real-time performance for ap
External link:
http://arxiv.org/abs/2405.06088
The rapid emergence of generative Language Models (LMs) has led to growing concern about the impacts that their unexamined adoption may have on the social well-being of diverse user groups. Meanwhile, LMs are increasingly being adopted in K-20 school
External link:
http://arxiv.org/abs/2405.01740
Author:
Xie, Yuxi, Goyal, Anirudh, Zheng, Wenyue, Kan, Min-Yen, Lillicrap, Timothy P., Kawaguchi, Kenji, Shieh, Michael
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process inspired by the successful strategy employed by AlphaZero. Our work leverages Monte Carlo Tree Sea
External link:
http://arxiv.org/abs/2405.00451
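The abstract above pairs tree search with iterative preference learning. As a heavily simplified, hypothetical sketch of one ingredient, building (chosen, rejected) preference pairs from ranked rollouts; best-of-n sampling and a toy value function stand in for the actual Monte Carlo tree search and learned reward, and every name below is illustrative:

```python
import random


def value(response: str) -> float:
    # Stand-in value estimate; a real system would use a learned
    # value/reward model informed by tree-search statistics.
    return len(set(response))  # toy: reward token diversity


def preference_pair(prompt: str, sample, n: int = 4):
    # Sample n candidate continuations, rank them by the value
    # estimate, and keep best/worst as a (chosen, rejected) pair
    # for preference optimization.
    candidates = [sample(prompt) for _ in range(n)]
    ranked = sorted(candidates, key=value, reverse=True)
    return ranked[0], ranked[-1]


# Usage with a hypothetical random-string "policy":
rng = random.Random(0)
sample = lambda p: "".join(rng.choice("abcd") for _ in range(6))
chosen, rejected = preference_pair("question", sample)
print(chosen, rejected)
```

Each such pair could then feed a preference-learning objective (e.g., DPO-style training), repeated over iterations as the policy improves.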