Showing 1 - 10 of 1,540 for search: '"CHEN Zhiyu"'
Author:
LIU Chunyan (刘春艳), CHEN Zhiyu (陈志宇)
Published in:
中西医结合护理, Vol 10, Iss 9, Pp 159-161 (2024)
Objective To investigate the general situation of the cooperative relationship between nursing staff and caregivers of elderly hospitalized patients, and analyze the influencing factors of the cooperative relationship between them, so as to provide g
External link:
https://doaj.org/article/7576595d3c0741ad8e586f8d6646bbb1
Published in:
Frontiers in Pharmacology, Vol 15 (2024)
Pyroptosis induced by oxidative stress is a significant contributor to mental health disorders, including depression. (+)-Catechin (CA), a polyphenolic compound prevalent in various food sources, has been substantiated by prior research to exhibit pot
External link:
https://doaj.org/article/0a4d4f5ba98648e39048714cc67b79ab
In e-commerce, high consideration search missions typically require careful and elaborate decision making, and involve a substantial research investment from customers. We consider the task of identifying High Consideration (HC) queries. Identifying
External link:
http://arxiv.org/abs/2410.13951
Author:
Zhang, Mian, Yang, Xianjun, Zhang, Xinlu, Labrum, Travis, Chiu, Jamie C., Eack, Shaun M., Fang, Fei, Wang, William Yang, Chen, Zhiyu Zoey
There is a significant gap between patient needs and available mental health support today. In this paper, we aim to thoroughly examine the potential of using Large Language Models (LLMs) to assist professional psychotherapy. To this end, we propose
External link:
http://arxiv.org/abs/2410.13218
Knowledge tracing is a technique that predicts students' future performance by analyzing their learning process through historical interactions with intelligent educational platforms, enabling a precise evaluation of their knowledge mastery. Recent s
External link:
http://arxiv.org/abs/2409.06745
While large language models (LLMs) have been thoroughly evaluated for deductive and inductive reasoning, their proficiency in abductive reasoning and holistic rule learning in interactive environments remains less explored. We introduce RULEARN, a no
External link:
http://arxiv.org/abs/2408.10455
Author:
Chen, Zhiyu, Wen, Ziyuan, Wan, Weier, Pakala, Akhil Reddy, Zou, Yiwei, Wei, Wei-Chen, Li, Zengyi, Chen, Yubei, Yang, Kaiyuan
Analog compute-in-memory (CIM) in static random-access memory (SRAM) is promising for accelerating deep learning inference by circumventing the memory wall and exploiting ultra-efficient analog low-precision arithmetic. Latest analog CIM designs atte
External link:
http://arxiv.org/abs/2407.12829
Author:
Senel, Lütfi Kerem, Fetahu, Besnik, Yoshida, Davis, Chen, Zhiyu, Castellucci, Giuseppe, Vedula, Nikhita, Choi, Jason, Malmasi, Shervin
Recommender systems are widely used to suggest engaging content, and Large Language Models (LLMs) have given rise to generative recommenders. Such systems can directly generate items, including for open-set tasks like question suggestion. While the w
External link:
http://arxiv.org/abs/2406.05255
Published in:
Di-san junyi daxue xuebao, Vol 42, Iss 6, Pp 624-631 (2020)
Objective To investigate the protective effect of troxerutin against spinal cord injury (SCI) in rats and explore the underlying mechanisms. Methods Sixty 8-week-old male SD rats were randomly divided into sham operation group, SCI group and troxerut
External link:
https://doaj.org/article/738081dbe0bf4b399f39a93a97d30b6b
Author:
Zhang, Xinlu, Chen, Zhiyu Zoey, Ye, Xi, Yang, Xianjun, Chen, Lichang, Wang, William Yang, Petzold, Linda Ruth
Instruction Fine-Tuning (IFT) significantly enhances the zero-shot capabilities of pretrained Large Language Models (LLMs). While coding data is known to boost reasoning abilities during LLM pretraining, its role in activating internal reasoning capa
External link:
http://arxiv.org/abs/2405.20535