Showing 1 - 10 of 157,529 for search: '"Hoang AT"'
While Multi-Object Tracking (MOT) has made substantial advancements, it remains limited by its heavy reliance on prior knowledge and its restriction to predefined categories. In contrast, Generic Multiple Object Tracking (GMOT), tracking multiple objects with similar …
External link:
http://arxiv.org/abs/2409.02490
Mapping and characterizing magnetic fields in the Rho Ophiuchus-A molecular cloud with SOFIA/HAWC$+$
Author:
Lê, Ngân, Tram, Le Ngoc, Karska, Agata, Hoang, Thiem, Diep, Pham Ngoc, Hanasz, Michał, Ngoc, Nguyen Bich, Phuong, Nguyen Thi, Menten, Karl M., Wyrowski, Friedrich, Nguyen, Dieu D., Hoang, Thuong Duc, Khang, Nguyen Minh
(abridged) Together with gravity, turbulence, and stellar feedback, magnetic fields (B-fields) are thought to play a critical role in the evolution of molecular clouds and star formation processes. We aim to map the morphology and measure the strength …
External link:
http://arxiv.org/abs/2408.17122
Author:
Doan, Khang T., Huynh, Bao G., Hoang, Dung T., Pham, Thuc D., Pham, Nhat H., Nguyen, Quan T. M., Vo, Bang Q., Hoang, Suong N.
In this report, we introduce Vintern-1B, a reliable 1-billion-parameter multimodal large language model (MLLM) for Vietnamese language tasks. By integrating the Qwen2-0.5B-Instruct language model with the InternViT-300M-448px visual model, Vintern-1B …
External link:
http://arxiv.org/abs/2408.12480
Author:
Hoang-Trong, Duong D., Tran, Khang, Trieu, Doan-An, Truong, Quan-Hao, Le, Van-Hoang, Phan, Ngoc-Loan
Creating a soft-Coulomb-type (SC) molecular potential within the single-active-electron approximation (SAE) is essential, since it allows solving time-dependent Schrödinger equations with fewer computational resources compared to other multielectron methods …
External link:
http://arxiv.org/abs/2408.12627
Author:
Nguyen, Huy-Son, Bui, Tuan-Nghia, Nguyen, Long-Hai, Manh-Hung, Hoang, Nguyen, Cam-Van Thi, Le, Hoang-Quynh, Le, Duc-Trong
Bundle recommendation aims to enhance business profitability and user convenience by suggesting a set of interconnected items. In real-world scenarios, leveraging the impact of asymmetric item affiliations is crucial for effective bundle modeling and …
External link:
http://arxiv.org/abs/2408.08906
Large language models (LLMs) have achieved remarkable success across various NLP tasks, yet their focus has predominantly been on English due to English-centric pre-training and limited multilingual data. While some multilingual LLMs claim to support …
External link:
http://arxiv.org/abs/2410.03115
Author:
Nguyen, Manh Duong, Nguyen, Trung Thanh, Pham, Huy Hieu, Hoang, Trong Nghia, Nguyen, Phi Le, Huynh, Thanh Trung
Federated Learning (FL) is a method for training machine learning models using distributed data sources. It ensures privacy by allowing clients to collaboratively learn a shared global model while storing their data locally. However, a significant challenge …
External link:
http://arxiv.org/abs/2410.03070
Author:
Nguyen, Minh Hieu, Nguyen, Huu Tien, Nguyen, Trung Thanh, Nguyen, Manh Duong, Hoang, Trong Nghia, Nguyen, Truong Thao, Nguyen, Phi Le
Federated Learning (FL) has emerged as a powerful paradigm for training machine learning models in a decentralized manner, preserving data privacy by keeping local data on clients. However, evaluating the robustness of these models against data perturbations …
External link:
http://arxiv.org/abs/2410.03067
Weighted estimates for a bilinear fractional integral operator and its commutator: A union condition
Author:
Hoang, Cong
The main theme of this paper is to give sufficient conditions for the weighted boundedness of the bilinear fractional integral operator $\mathsf{BI}_\alpha$. The proposed condition involves the union of multilinear Muckenhoupt-type conditions. …
External link:
http://arxiv.org/abs/2410.02889
Author:
Nguyen, Duy M. H., Diep, Nghiem T., Nguyen, Trung Q., Le, Hoang-Bao, Nguyen, Tai, Nguyen, Tien, Nguyen, TrungTin, Ho, Nhat, Xie, Pengtao, Wattenhofer, Roger, Zhou, James, Sonntag, Daniel, Niepert, Mathias
State-of-the-art medical multi-modal large language models (med-MLLMs), like LLaVA-Med or BioMedGPT, leverage instruction-following data in pre-training. However, those models primarily focus on scaling the model size and data volume to boost performance …
External link:
http://arxiv.org/abs/2410.02615