Showing 1 - 10 of 323 results for search: '"Cheung, Ngai Man"'
Urbanization as a global trend has led to many environmental challenges, including the urban heat island (UHI) effect. The increase in temperature has a significant impact on the well-being of urban residents. Air temperature ($T_a$) at 2 m above the …
External link:
http://arxiv.org/abs/2412.13504
Author:
Cao, Tri, Trinh, Minh-Huy, Deng, Ailin, Nguyen, Quoc-Nam, Duong, Khoa, Cheung, Ngai-Man, Hooi, Bryan
Anomaly detection (AD) is a machine learning task that identifies anomalies by learning patterns from normal training data. In many real-world scenarios, anomalies vary in severity, from minor anomalies with little risk to severe abnormalities requir…
External link:
http://arxiv.org/abs/2411.14515
Recently, prompt learning has emerged as the state-of-the-art (SOTA) for fair text-to-image (T2I) generation. Specifically, this approach leverages readily available reference images to learn inclusive prompts for each target Sensitive Attribute (tSA) …
External link:
http://arxiv.org/abs/2410.18615
Skip connections are fundamental architecture designs for modern deep neural networks (DNNs) such as CNNs and ViTs. While they help improve model performance significantly, we identify a vulnerability associated with skip connections to Model Inversion …
External link:
http://arxiv.org/abs/2409.01696
Model Inversion (MI) is a type of privacy violation that focuses on reconstructing private training data through abusive exploitation of machine learning models. To defend against MI attacks, state-of-the-art (SOTA) MI defense methods rely on regular…
External link:
http://arxiv.org/abs/2409.01062
Published in:
CVPR 2024
Model Inversion (MI) attacks aim to reconstruct private training data by abusing access to machine learning models. Contemporary MI attacks have achieved impressive attack performance, posing serious threats to privacy. Meanwhile, all existing MI def…
External link:
http://arxiv.org/abs/2405.05588
We study universal deepfake detection. Our goal is to detect synthetic images from a range of generative AI approaches, particularly from emerging ones which are unseen during training of the deepfake detector. Universal deepfake detection requires o…
External link:
http://arxiv.org/abs/2401.06506
In a model inversion (MI) attack, an adversary abuses access to a machine learning (ML) model to infer and reconstruct private training data. Remarkable progress has been made in the white-box and black-box setups, where the adversary has access to t…
External link:
http://arxiv.org/abs/2310.19342
Recently, there has been increased interest in fair generative models. In this work, we conduct, for the first time, an in-depth study on fairness measurement, a critical component in gauging progress on fair generative models. We make three contributions …
External link:
http://arxiv.org/abs/2310.19297
Author:
Abdollahzadeh, Milad, Malekzadeh, Touba, Teo, Christopher T. H., Chandrasegaran, Keshigeyan, Liu, Guimeng, Cheung, Ngai-Man
In machine learning, generative modeling aims to learn to generate new data statistically similar to the training data distribution. In this paper, we survey learning generative models under limited data, few shots and zero shot, referred to as Gener…
External link:
http://arxiv.org/abs/2307.14397