Showing 1 - 10 of 193 for search: '"Li, Zehan"'
Author:
Li, Zehan, Hu, Yan, Lane, Scott, Selek, Salih, Shahani, Lokesh, Machado-Vieira, Rodrigo, Soares, Jair, Xu, Hua, Liu, Hongfang, Huang, Ming
Accurate identification and categorization of suicidal events can yield better suicide precautions, reducing operational burden and improving care quality in high-acuity psychiatric settings. Pre-trained language models offer promise for identifying…
External link:
http://arxiv.org/abs/2409.18878
Large language models (LLMs) are emerging as promising tools for mental health care, offering scalable support through their ability to generate human-like responses. However, the effectiveness of these models in clinical settings remains unclear. Th…
External link:
http://arxiv.org/abs/2408.11288
Retrieval-based code question answering seeks to match user queries in natural language to relevant code snippets. Previous approaches typically rely on pretraining models using crafted bi-modal and uni-modal datasets to align text and code represent…
External link:
http://arxiv.org/abs/2403.16702
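Once queries and snippets are embedded, the retrieval step described above reduces to similarity ranking. A minimal sketch, assuming dense vectors from some bi-modal encoder (the encoder itself is not shown; `rank_snippets` and the toy vectors are illustrative, not the paper's method):

```python
import numpy as np

def rank_snippets(query_vec, snippet_vecs):
    """Rank candidate code snippets by cosine similarity to the query
    embedding. The embeddings are placeholders for the output of a
    text/code encoder aligned by contrastive pretraining."""
    def unit(v):
        return v / np.linalg.norm(v)
    q = unit(query_vec)
    sims = np.array([unit(s) @ q for s in snippet_vecs])
    # Indices of snippets, best match first
    return np.argsort(-sims)
```

In practice the encoder would map both modalities into the same space; here the ranking logic alone is shown.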
This study aims to explore the best practices for utilizing GenAI as a programming tool, through a comparative analysis between GPT-4 and GLM-4. By evaluating prompting strategies at different levels of complexity, we identify that the simplest and strai…
External link:
http://arxiv.org/abs/2402.12782
Author:
Hua, Yining, Liu, Fenglin, Yang, Kailai, Li, Zehan, Na, Hongbin, Sheu, Yi-han, Zhou, Peilin, Moran, Lauren V., Ananiadou, Sophia, Beam, Andrew, Torous, John
The integration of large language models (LLMs) in mental health care is an emerging field. There is a need to systematically review the application outcomes and delineate the advantages and limitations in clinical settings. This review aims to provi…
External link:
http://arxiv.org/abs/2401.02984
Pre-trained language models (PLMs) have recently shown great success in the text representation field. However, the high computational cost and high-dimensional representation of PLMs pose significant challenges for practical applications. To make models…
External link:
http://arxiv.org/abs/2311.05472
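The compression problem this record describes can be illustrated with a generic baseline. PCA below is a stand-in for dimensionality reduction of PLM embeddings, not the method the paper itself proposes:

```python
import numpy as np

def reduce_dim(embeddings, k):
    """Project high-dimensional embeddings (n_samples x dim) down to
    k dimensions via PCA. A generic baseline, not the paper's method."""
    centered = embeddings - embeddings.mean(axis=0)
    # Right singular vectors of the centered data are the principal
    # directions; keep the top k and project onto them.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

Lower-dimensional vectors cut storage and similarity-search cost roughly in proportion to the reduction in `k`.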
Author:
Zhang, Xin, Li, Zehan, Zhang, Yanzhao, Long, Dingkun, Xie, Pengjun, Zhang, Meishan, Zhang, Min
In the large language model (LLM) revolution, embedding is a key component of various systems. For example, it is used to retrieve knowledge or memories for LLMs, to build content moderation filters, etc. As such cases span from English to other natu…
External link:
http://arxiv.org/abs/2310.08232
We present GTE, a general-purpose text embedding model trained with multi-stage contrastive learning. In line with recent advancements in unifying various NLP tasks into a single format, we train a unified text embedding model by employing contrastiv…
External link:
http://arxiv.org/abs/2308.03281
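A minimal sketch of the kind of contrastive objective behind such embedding models, written as an InfoNCE-style loss over one query, its positive, and a set of negatives (the function name and temperature value are illustrative, not GTE's exact training recipe):

```python
import numpy as np

def info_nce_loss(query_emb, pos_emb, neg_embs, temperature=0.05):
    """InfoNCE-style contrastive loss: pull the query toward its
    positive, push it away from negatives (in practice, in-batch
    negatives). Returns the softmax cross-entropy with the positive
    at index 0."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(query_emb, pos_emb)] +
                    [cos(query_emb, n) for n in neg_embs]) / temperature
    sims -= sims.max()  # numerical stability before exponentiating
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])
```

Multi-stage training typically applies such a loss first on large weakly paired corpora, then on smaller curated pairs.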
Recently, various studies have explored dense passage retrieval techniques employing pre-trained language models, among which the masked auto-encoder (MAE) pre-training architecture has emerged as the most promising. The conven…
External link:
http://arxiv.org/abs/2305.13197
Author:
Hu, Yan, Chen, Qingyu, Du, Jingcheng, Peng, Xueqing, Keloth, Vipina Kuttichi, Zuo, Xu, Zhou, Yujia, Li, Zehan, Jiang, Xiaoqian, Lu, Zhiyong, Roberts, Kirk, Xu, Hua
Objective: This study quantifies the capabilities of GPT-3.5 and GPT-4 for clinical named entity recognition (NER) tasks and proposes task-specific prompts to improve their performance. Materials and Methods: We evaluated these models on two clinical…
External link:
http://arxiv.org/abs/2303.16416
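The task-specific prompting idea above can be sketched as a simple template builder. The entity types and wording here are assumptions for illustration, not the study's actual prompts:

```python
def build_ner_prompt(text, entity_types=("problem", "treatment", "test")):
    """Construct an illustrative task-specific prompt for clinical NER
    with a chat LLM. Entity types and instructions are hypothetical."""
    types = ", ".join(entity_types)
    return (
        f"Extract all clinical entities of types [{types}] "
        f"from the note below.\n"
        f"Return one entity per line in the form <type>: <span>.\n\n"
        f"Note: {text}"
    )
```

Constraining the output format in the prompt makes the model's answers easier to parse and score against gold annotations.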