Showing 1 - 10 of 151,932 for search: '"HELPFULNESS"'
Author:
Ceylan, Gizem (AUTHOR) gizem.ceylan@yale.edu, Diehl, Kristin (AUTHOR), Proserpio, Davide (AUTHOR)
Published in:
Journal of Marketing Research (JMR). Feb2024, Vol. 61 Issue 1, p5-26. 22p.
Author:
Briers, Barbara (AUTHOR) barbara.briers@uantwerpen.be, He, Xzavier (AUTHOR), Lamey, Lien (AUTHOR)
Published in:
Journal of Interactive Marketing. Aug2024, Vol. 59 Issue 3, p312-328. 17p.
Author:
Wang, Liang1 (AUTHOR) liangwang@sanyau.edu.cn, Che, Gaofeng2 (AUTHOR) chegaofeng@htu.edu.cn, Hu, Jiantuan3 (AUTHOR) cug.edu@163.com, Chen, Lin3 (AUTHOR) cug.edu@163.com
Published in:
Journal of Theoretical & Applied Electronic Commerce Research. Jun2024, Vol. 19 Issue 2, p1243-1266. 24p.
Author:
Labarta, Tobias, Kulicheva, Elizaveta, Froelian, Ronja, Geißler, Christian, Melman, Xenia, von Klitzing, Julian
Published in:
Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2156
Explainable Artificial Intelligence (XAI) is essential for building advanced machine learning-powered applications, especially in critical domains such as medical diagnostics or autonomous driving. Legal, business, and ethical requirements motivate u…
External link:
http://arxiv.org/abs/2410.11896
Author:
Mora, José-Domingo1 jmora@umassd.edu, Izadi, Anoosha1 aizadi@umassd.edu
Published in:
Journal of Electronic Commerce Research. 2024, Vol. 25 Issue 3, p171-190. 20p.
Author:
Fu, Ning1 (AUTHOR) ning.fu@csun.edu
Published in:
International Journal of Market Research. May2022, Vol. 64 Issue 3, p354-375. 22p. 1 Diagram, 7 Charts.
Author:
Kashyap, Rachita1 (AUTHOR) kashyap.rachita21@gmail.com, Kesharwani, Ankit2 (AUTHOR), Ponnam, Abhilash3 (AUTHOR)
Published in:
Electronic Commerce Research. Dec2023, Vol. 23 Issue 4, p2183-2216. 34p.
Author:
Wang, Yiru1 (AUTHOR), Kuchmaner, Christina A.2 (AUTHOR) kuchmanerc@duq.edu
Published in:
Journal of Marketing Theory & Practice. Summer2024, Vol. 32 Issue 3, p346-361. 16p.
Fine-tuning large language models (LLMs) on human preferences, typically through reinforcement learning from human feedback (RLHF), has proven successful in enhancing their capabilities. However, ensuring the safety of LLMs during the fine-tuning rem…
External link:
http://arxiv.org/abs/2408.15313
Large language models (LLMs) for code are typically trained to align with natural language instructions to closely follow their intentions and requirements. However, in many practical scenarios, it becomes increasingly challenging for these models to…
External link:
http://arxiv.org/abs/2407.02518