Showing 1 - 10
of 438,379
for search: '"Safety issues"'
Author:
Park, Suvin1 (AUTHOR), Kim, Hee-Jin1 (AUTHOR), Won, Heehyun1 (AUTHOR), Lee, Hui-Eon2 (AUTHOR), Cho, Haerin1 (AUTHOR), Choi, Nam-Kyong1,2 (AUTHOR) nchoi@ewha.ac.kr
Published in:
PLoS ONE. 11/22/2024, Vol. 19 Issue 11, p1-13. 13p.
Author:
Huang, Mianqiu, Liu, Xiaoran, Zhou, Shaojun, Zhang, Mozhi, Tan, Chenkun, Wang, Pengyu, Guo, Qipeng, Xu, Zhe, Li, Linyang, Lei, Zhikai, Li, Linlin, Liu, Qun, Zhou, Yaqian, Qiu, Xipeng, Huang, Xuanjing
With the development of large language models (LLMs), the sequence length of these models continues to increase, drawing significant attention to long-context language models. However, the evaluation of these models has been primarily limited to their …
External link:
http://arxiv.org/abs/2411.06899
Author:
Zhou, Yujun, Yang, Jingdong, Guo, Kehan, Chen, Pin-Yu, Gao, Tian, Geyer, Werner, Moniz, Nuno, Chawla, Nitesh V, Zhang, Xiangliang
Laboratory accidents pose significant risks to human life and property, underscoring the importance of robust safety protocols. Despite advancements in safety training, laboratory personnel may still unknowingly engage in unsafe practices. With the i…
External link:
http://arxiv.org/abs/2410.14182
Author:
Farid, Farnaz, Ahamed, Farhad
The widespread use of AI technologies to generate digital content has led to increased misinformation and online harm. Deep fake technologies, a type of AI, make it easier to create convincing but fake content on social media, leading to various cybe…
External link:
http://arxiv.org/abs/2410.11856
Author:
Ye, Junjie, Li, Sixian, Li, Guanyu, Huang, Caishuang, Gao, Songyang, Wu, Yilong, Zhang, Qi, Gui, Tao, Huang, Xuanjing
Published in:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics 2024 (Volume 1: Long Papers)
Tool learning is widely acknowledged as a foundational approach for deploying large language models (LLMs) in real-world scenarios. While current research primarily emphasizes leveraging tools to augment LLMs, it frequently neglects emerging safety co…
External link:
http://arxiv.org/abs/2402.10753