Showing 1 - 10 of 215 for search: '"Nguyen Quang H"'
Diffusion models have shown remarkable abilities in generating realistic and high-quality images from text prompts. However, a trained model remains a black box; little is known about the role of its components in exhibiting a concept such as objects …
External link:
http://arxiv.org/abs/2412.02542
Author:
Nguyen, Quang H., Hoang, Duy C., Decugis, Juliette, Manchanda, Saurav, Chawla, Nitesh V., Doan, Khoa D.
The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas. These LLMs come with different abilities and costs in terms of computation or pricing. Since the demand for each query … (see the sketch after the link below)
External link:
http://arxiv.org/abs/2407.10834
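The snippet above motivates routing each query to one of several LLMs that differ in capability and cost. Below is a minimal, hypothetical sketch of such a cost-aware router; it is not the method of the linked paper, and the model names, capability scores, prices, and the length-based difficulty heuristic are all made-up placeholders.

```python
# Hypothetical cost-aware LLM router: pick the cheapest model whose assumed
# capability covers the estimated difficulty of the query. All names, scores,
# and prices are illustrative placeholders, not values from the paper above.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: float   # assumed quality score in [0, 1]
    cost_per_1k: float  # assumed price per 1k tokens, arbitrary units

CANDIDATES = [
    Model("small-llm", 0.60, 0.1),
    Model("medium-llm", 0.80, 0.5),
    Model("large-llm", 0.95, 2.0),
]

def estimate_difficulty(query: str) -> float:
    """Toy proxy: treat longer queries as harder (capped at 1.0)."""
    return min(1.0, len(query.split()) / 100)

def route(query: str) -> Model:
    """Cheapest adequate model, or the strongest one if none is adequate."""
    needed = estimate_difficulty(query)
    adequate = [m for m in CANDIDATES if m.capability >= needed]
    if adequate:
        return min(adequate, key=lambda m: m.cost_per_1k)
    return max(CANDIDATES, key=lambda m: m.capability)

print(route("Summarize this abstract in one sentence.").name)  # -> small-llm
```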
Author:
Nguyen, Quang H., Ngoc-Hieu, Nguyen, Ta, The-Anh, Nguyen-Tang, Thanh, Wong, Kok-Seng, Thanh-Tung, Hoang, Doan, Khoa D.
Deep neural networks are vulnerable to backdoor attacks, a type of adversarial attack that poisons the training data to manipulate the behavior of models trained on such data. Clean-label attacks are a more stealthy form of backdoor attack that can … (see the poisoning sketch after the link below)
External link:
http://arxiv.org/abs/2407.10825
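The snippet above describes backdoor poisoning, where a small fraction of the training data is altered so that models trained on it react to a trigger. The sketch below is a generic illustration under assumed numpy image arrays; it is not the selective clean-label strategy of the linked paper, and the trigger shape, poison rate, and target class are arbitrary.

```python
# Generic backdoor-poisoning sketch (illustrative only). Assumes images are
# float numpy arrays in [0, 1] with shape (H, W, C); trigger, rate, and
# target class are arbitrary and not taken from the paper above.
import numpy as np

def add_trigger(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Stamp a small white square into the bottom-right corner."""
    out = image.copy()
    out[-size:, -size:, :] = 1.0
    return out

def poison_dataset(images, labels, target_class=0, rate=0.01,
                   clean_label=True, seed=0):
    """Poison a fraction of the training set with the trigger.

    clean_label=True keeps original labels and only touches samples that
    already belong to the target class (the stealthier clean-label setting);
    False also relabels poisoned samples (the classic dirty-label variant).
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    pool = np.flatnonzero(labels == target_class) if clean_label else np.arange(len(labels))
    if len(pool) == 0:
        return images, labels  # nothing to poison
    n_poison = min(len(pool), max(1, int(rate * len(labels))))
    for i in rng.choice(pool, size=n_poison, replace=False):
        images[i] = add_trigger(images[i])
        if not clean_label:
            labels[i] = target_class
    return images, labels

# Example with random stand-in data.
X = np.random.rand(100, 32, 32, 3)
y = np.random.randint(0, 10, size=100)
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=0, rate=0.05)
```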
Author:
Yang, Sze Jue, La, Chinh D., Nguyen, Quang H., Wong, Kok-Seng, Tran, Anh Tuan, Chan, Chee Seng, Doan, Khoa D.
Backdoor attacks, representing an emerging threat to the integrity of deep neural networks, have garnered significant attention due to their ability to compromise deep learning systems clandestinely. While numerous backdoor attacks occur within the d…
External link:
http://arxiv.org/abs/2312.03419
Author:
Hoang, Duy C., Nguyen, Quang H., Manchanda, Saurav, Peng, MinLong, Wong, Kok-Seng, Doan, Khoa D.
Despite outstanding performance in a variety of NLP tasks, recent studies have revealed that NLP models are vulnerable to adversarial attacks that slightly perturb the input to cause the models to misbehave. Among these attacks, adversarial word-level … (see the sketch after the link below)
External link:
http://arxiv.org/abs/2310.01452
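The snippet above refers to word-level adversarial attacks, which swap individual words for close substitutes until the model's prediction changes. The toy sketch below illustrates the idea with a hand-written synonym table and a stand-in keyword classifier; real attacks query a trained NLP model and draw candidates from embeddings or WordNet, and nothing here reproduces the linked paper's method.

```python
# Toy word-level adversarial attack (illustrative only): greedily substitute
# words with hand-picked synonyms until a stand-in classifier flips its label.
SYNONYMS = {
    "good": ["fine", "decent"],
    "great": ["fine", "okay"],
    "terrible": ["poor", "bad"],
    "movie": ["film", "picture"],
}

def toy_classifier(text: str) -> int:
    """Stand-in sentiment model: 1 if positive cue words outnumber negative ones."""
    positive = {"good", "great", "excellent"}
    negative = {"terrible", "poor", "bad"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return int(score > 0)

def word_level_attack(text: str) -> str:
    """Try single-word substitutions; return the first one that flips the label."""
    original = toy_classifier(text)
    words = text.split()
    for i, word in enumerate(words):
        for candidate in SYNONYMS.get(word.lower(), []):
            perturbed = " ".join(words[:i] + [candidate] + words[i + 1:])
            if toy_classifier(perturbed) != original:
                return perturbed
    return text  # no successful perturbation found

print(word_level_attack("a good movie"))  # -> "a fine movie"
```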
Recent works have shown that deep neural networks are vulnerable to adversarial examples, samples close to the original image that can nonetheless make the model misclassify. Even with access only to the model's output, an attacker can employ black-box attacks … (see the sketch after the link below)
External link:
http://arxiv.org/abs/2310.00567
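The snippet above concerns score-based black-box attacks, where the adversary only sees the model's output and searches for a perturbation by querying it repeatedly. The sketch below is a generic random-search illustration against a stand-in linear softmax "model"; it is not the attack or defense studied in the linked paper, and all shapes and hyperparameters are arbitrary.

```python
# Generic score-based black-box attack sketch (illustrative only): random
# single-pixel perturbations are kept whenever they lower the probability the
# stand-in "model" assigns to the true class. Only model outputs are used.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32 * 3))  # stand-in linear "model" weights

def model_probs(x: np.ndarray) -> np.ndarray:
    """Black-box interface: flattened image in, class probabilities out."""
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def black_box_attack(x, true_class, eps=0.05, max_queries=500):
    """Random search: keep perturbations that reduce the true-class score."""
    x_adv = x.copy()
    best = model_probs(x_adv)[true_class]
    for _ in range(max_queries):
        step = np.zeros_like(x_adv)
        step[rng.integers(x_adv.size)] = rng.choice([-eps, eps])
        candidate = np.clip(x_adv + step, 0.0, 1.0)
        score = model_probs(candidate)[true_class]
        if score < best:
            x_adv, best = candidate, score
        if model_probs(x_adv).argmax() != true_class:
            break  # label flipped using output-only access
    return x_adv

x = rng.random(32 * 32 * 3)
x_adv = black_box_attack(x, true_class=int(model_probs(x).argmax()))
```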
By injecting a small number of poisoned samples into the training set, backdoor attacks aim to make the victim model produce designed outputs on any input injected with pre-designed backdoors. In order to achieve a high attack success rate using as few …
External link:
http://arxiv.org/abs/2202.11203
Published in:
IJPHM (2023)
As the burden of respiratory diseases continues to fall on society worldwide, this paper proposes a high-quality and reliable dataset of human sounds for studying respiratory illnesses, including pneumonia and COVID-19. It consists of coughing, mouth …
External link:
http://arxiv.org/abs/2201.04581
Author:
Nguyen, Quang H., Ngo, Hoang H., Nguyen-Vo, Thanh-Hoang, Do, Trang T.T., Rahardja, Susanto, Nguyen, Binh P.
Published in:
In Computational and Structural Biotechnology Journal 2023 21:751-757
Academic article