Showing 1 - 10 of 10 741 for the search: '"Phi, An"'
Author:
Nguyen, Thanh Tam; Ren, Zhao; Pham, Trinh; Nguyen, Phi Le; Yin, Hongzhi; Nguyen, Quoc Viet Hung
The rapid advancement of large language models (LLMs) and multimodal learning has transformed digital content creation and manipulation. Traditional visual editing tools require significant expertise, limiting accessibility. Recent strides in instruc…
External link:
http://arxiv.org/abs/2411.09955
In modern life, and particularly in the online space of Industry 4.0, emotions and moods are frequently conveyed through social media posts. The trend of sharing stories, thoughts, and feelings on these platforms generates a vast and p…
External link:
http://arxiv.org/abs/2411.04532
Federated Learning (FL) is a machine learning method for training on private data stored locally on distributed machines, without gathering that data in one place for centralized learning. Despite its promise, FL is prone to critical security risks. First…
External link:
http://arxiv.org/abs/2411.02773
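The entry above describes the core federated-learning loop: clients train on private data that never leaves their machines and share only model updates, which a server aggregates into a shared global model. As a minimal, hedged sketch of that loop (a generic FedAvg-style average, not the method proposed in any of the papers listed here; the linear model, synthetic client data, and size-weighted averaging are illustrative assumptions):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Server loop: broadcast global weights, collect local updates, average them."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        # Weighted average by client dataset size (standard FedAvg-style weighting).
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three clients, each holding a private dataset that is never sent to the server.
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.1 * rng.normal(size=50)
        clients.append((X, y))
    print("recovered weights:", federated_averaging(np.zeros(2), clients))
```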
Author:
Nguyen, Dac Thai; Nguyen, Trung Thanh; Nguyen, Huu Tien; Nguyen, Thanh Trung; Pham, Huy Hieu; Nguyen, Thanh Hung; Truong, Thao Nguyen; Nguyen, Phi Le
Positron Emission Tomography (PET) and Computed Tomography (CT) are essential for diagnosing, staging, and monitoring various diseases, particularly cancer. Despite their importance, the use of PET/CT systems is limited by the necessity for radioacti…
External link:
http://arxiv.org/abs/2410.21932
We explore a robust version of the barycenter problem among $n$ centered Gaussian probability measures, termed the Semi-Unbalanced Optimal Transport (SUOT)-based Barycenter, wherein the barycenter remains fixed while the others are relaxed using Kullback…
External link:
http://arxiv.org/abs/2410.08117
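The snippet above describes an objective in which the marginal constraint on the barycenter side is kept exact while the constraints toward the given measures are relaxed with a Kullback-Leibler penalty. A hedged sketch of the general form of such an objective (the notation, quadratic cost, and placement of the penalty are assumptions for illustration, not taken from the paper) is

$$
\min_{\mu}\ \sum_{i=1}^{n} \mathrm{SUOT}_{\tau}(\mu, \nu_i),
\qquad
\mathrm{SUOT}_{\tau}(\mu, \nu_i) = \min_{\substack{\pi \ge 0,\ \pi_1 = \mu}} \int \lVert x - y \rVert^2 \, d\pi(x, y) \;+\; \tau\, \mathrm{KL}(\pi_2 \,\Vert\, \nu_i),
$$

where $\nu_1, \dots, \nu_n$ are the given centered Gaussian measures, $\pi_1$ and $\pi_2$ are the marginals of the transport plan $\pi$, the constraint $\pi_1 = \mu$ keeps the barycenter side exact, and $\tau > 0$ controls how strongly the relaxed marginal must match each $\nu_i$.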
Author:
Nguyen, Manh Duong; Nguyen, Trung Thanh; Pham, Huy Hieu; Hoang, Trong Nghia; Nguyen, Phi Le; Huynh, Thanh Trung
Federated Learning (FL) is a method for training machine learning models using distributed data sources. It ensures privacy by allowing clients to collaboratively learn a shared global model while storing their data locally. However, a significant ch…
External link:
http://arxiv.org/abs/2410.03070
Author:
Nguyen, Minh Hieu; Nguyen, Huu Tien; Nguyen, Trung Thanh; Nguyen, Manh Duong; Hoang, Trong Nghia; Nguyen, Truong Thao; Nguyen, Phi Le
Federated Learning (FL) has emerged as a powerful paradigm for training machine learning models in a decentralized manner, preserving data privacy by keeping local data on clients. However, evaluating the robustness of these models against data pertu…
External link:
http://arxiv.org/abs/2410.03067
Author:
Ming, Yifei; Purushwalkam, Senthil; Pandit, Shrey; Ke, Zixuan; Nguyen, Xuan-Phi; Xiong, Caiming; Joty, Shafiq
Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust. Despite adv…
External link:
http://arxiv.org/abs/2410.03727
Large Language Models (LLMs) have demonstrated remarkable capabilities in handling long context inputs, but this comes at the cost of increased computational resources and latency. Our research introduces a novel approach for the long context bottlen…
External link:
http://arxiv.org/abs/2409.17422
Author:
Nguyen, Xuan-Phi; Pandit, Shrey; Purushwalkam, Senthil; Xu, Austin; Chen, Hailin; Ming, Yifei; Ke, Zixuan; Savarese, Silvio; Xiong, Caiming; Joty, Shafiq
Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI. The LLMs used in RAG applica…
External link:
http://arxiv.org/abs/2409.09916
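Both RAG-related entries above describe the same basic pipeline: retrieve context relevant to a query, prepend it to the prompt, and have the LLM generate an answer grounded in that context. A minimal, hedged sketch of that flow (the toy word-overlap retriever and the `generate` callback are illustrative placeholders, not the systems studied in these papers):

```python
from typing import Callable, List

def retrieve(query: str, corpus: List[str], top_k: int = 3) -> List[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def rag_answer(query: str, corpus: List[str], generate: Callable[[str], str]) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, corpus))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)  # `generate` stands in for any LLM call

if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower is located in Paris, France.",
        "Python is a popular programming language.",
        "Paris is the capital of France.",
    ]
    # Stand-in "LLM" that simply returns the augmented prompt, to show the injected context.
    echo_llm = lambda prompt: prompt
    print(rag_answer("Where is the Eiffel Tower?", corpus, echo_llm))
```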