Showing 1 - 10 of 24 for search: '"Erfani, Sarah Monazam"'
Published in:
Pattern Recognition, Vol. 156, 2024, Article No. 110758
High-fidelity digital human representations are increasingly in demand in the digital world, particularly for interactive telepresence, AR/VR, 3D graphics, and the rapidly evolving metaverse. Even though they work well in small spaces, conventional m…
External link:
http://arxiv.org/abs/2410.17741
Unlearnable examples (UEs) refer to training samples modified to be unlearnable to Deep Neural Networks (DNNs). These examples are usually generated by adding error-minimizing noises that can fool a DNN model into believing that there is nothing (no…
External link:
http://arxiv.org/abs/2402.02028
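The error-minimizing noise mentioned in the abstract above is usually found by optimizing a small, bounded perturbation so that the training loss on the perturbed sample is driven toward zero, telling the model there is "nothing left to learn". Below is a minimal PyTorch-style sketch of that inner noise-update step; `model`, the step sizes, and the budget `eps` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """One inner loop of error-MINIMIZING noise: perturb x so the training
    loss becomes as small as possible (the opposite of a PGD attack, which
    maximizes it). Illustrative sketch only."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()   # gradient descent on the loss w.r.t. the input
            delta.clamp_(-eps, eps)        # keep the noise imperceptible
    return delta.detach()
```

In the full unlearnable-examples procedure this inner step typically alternates with ordinary training of `model`, so the noise stays error-minimizing for the current model state.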
Author:
Huang, Hanxun, Campello, Ricardo J. G. B., Erfani, Sarah Monazam, Ma, Xingjun, Houle, Michael E., Bailey, James
Representations learned via self-supervised learning (SSL) can be susceptible to dimensional collapse, where the learned representation subspace is of extremely low dimensionality and thus fails to represent the full data distribution and modalities.
External link:
http://arxiv.org/abs/2401.10474
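Dimensional collapse can be made concrete with a simple diagnostic: compute the singular-value spectrum of a batch of representations and check how many directions carry non-negligible variance. The NumPy sketch below uses the entropy-based effective rank for this; it is a generic check, not the measure proposed in the paper above, and all names are illustrative.

```python
import numpy as np

def effective_rank(features):
    """Entropy-based effective rank of an (N, D) feature matrix.
    Values far below D indicate dimensional collapse."""
    features = features - features.mean(axis=0)        # center the batch
    s = np.linalg.svd(features, compute_uv=False)       # singular values
    p = s / s.sum()                                      # normalized spectrum
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))         # exp(spectral entropy)

# Example: a rank-deficient batch collapses to a low effective rank.
rng = np.random.default_rng(0)
collapsed = rng.normal(size=(512, 4)) @ rng.normal(size=(4, 128))
print(effective_rank(collapsed))   # close to 4, despite D = 128
```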
Backdoor attacks present a substantial security concern for deep learning models, especially those utilized in applications critical to safety and security. These attacks manipulate model behavior by embedding a hidden trigger during the training pha…
External link:
http://arxiv.org/abs/2401.03215
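The "hidden trigger embedded during the training phase" is most commonly realized as data poisoning in the BadNets style: a small fraction of training images receive a fixed patch and have their labels flipped to an attacker-chosen target class. The sketch below illustrates that poisoning step; the patch layout, poison rate, and function names are assumptions for illustration only.

```python
import torch

def poison_batch(images, labels, target_class=0, poison_rate=0.1, patch_size=3):
    """Stamp a white square into the corner of a random subset of images
    and relabel them to the target class (BadNets-style poisoning sketch)."""
    images, labels = images.clone(), labels.clone()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0   # trigger patch
    labels[idx] = target_class                          # attacker's label
    return images, labels
```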
Backdoor attacks have emerged as one of the major security threats to deep learning models as they can easily control the model's test-time predictions by pre-injecting a backdoor trigger into the model at training time. While backdoor attacks have b…
External link:
http://arxiv.org/abs/2211.07915
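Test-time control, as described above, follows directly from such poisoning: stamping the same trigger onto any input steers the prediction toward the target class, and attacks are usually scored by the attack success rate. A short, illustrative evaluation sketch, assuming the same corner-patch trigger and PyTorch-style `model` and `test_loader`:

```python
import torch

@torch.no_grad()
def attack_success_rate(model, test_loader, target_class=0, patch_size=3):
    """Fraction of (non-target) test images pushed to the target class
    once the trigger patch is applied. Illustrative sketch."""
    hits, total = 0, 0
    for x, y in test_loader:
        keep = y != target_class                  # skip images already in the target class
        x = x[keep].clone()
        if len(x) == 0:
            continue
        x[:, :, -patch_size:, -patch_size:] = 1.0  # same trigger as at training time
        pred = model(x).argmax(dim=1)
        hits += (pred == target_class).sum().item()
        total += len(x)
    return hits / max(total, 1)
```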
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of defense methods have been proposed to train adversarially robust DNNs, among which adversarial training has demonstrated promising results. However, despite pre…
External link:
http://arxiv.org/abs/2110.03825
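Adversarial training, the defense singled out above, replaces each clean batch with adversarial examples crafted on the fly, typically by projected gradient descent (PGD), and minimizes the loss on those. The condensed PyTorch sketch below shows one such training step; all hyperparameters and names are illustrative rather than those of the paper.

```python
import torch
import torch.nn.functional as F

def pgd_adversarial_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=7):
    """One adversarial-training step: craft PGD examples, then train on them."""
    # Inner maximization: find a loss-MAXIMIZING perturbation within the eps-ball.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)
    # Outer minimization: a standard optimizer step on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model((x + delta).detach().clamp(0, 1)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```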
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples/attacks, raising concerns about their reliability in safety-critical applications. A number of defense methods have been proposed to train robust DNNs resistant to adversa…
External link:
http://arxiv.org/abs/2104.10377
The volume of "free" data on the internet has been key to the current success of deep learning. However, it also raises privacy concerns about the unauthorized exploitation of personal data for training commercial models. It is thus crucial to develo…
External link:
http://arxiv.org/abs/2101.04898
Collaborative filtering is one of the most popular techniques in designing recommendation systems, and its most representative model, matrix factorization, has been widely used by researchers and the industry. However, this model suffers from the lac…
External link:
http://arxiv.org/abs/1908.01099
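Matrix factorization, the representative collaborative-filtering model named above, learns a low-dimensional vector for every user and every item so that their dot product approximates the observed rating. A tiny NumPy sketch fitted by SGD on (user, item, rating) triples is shown below; the embedding size, learning rate, and regularization are illustrative.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20):
    """Plain matrix factorization: r_ui ~ p_u . q_i, fit by SGD.
    `ratings` is a list of (user, item, rating) triples."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, k))   # user embeddings
    Q = 0.1 * rng.standard_normal((n_items, k))   # item embeddings
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])   # gradient step on the user factor
            Q[i] += lr * (err * P[u] - reg * Q[i])   # gradient step on the item factor
    return P, Q

# Example usage: predict user 0's rating of item 2.
P, Q = train_mf([(0, 1, 5.0), (0, 2, 3.0), (1, 2, 4.0)], n_users=2, n_items=3)
print(P[0] @ Q[2])
```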
Generative Adversarial Networks (GANs) are a powerful class of generative models. Despite their successes, the most appropriate choice of a GAN network architecture is still not well understood. GAN models for image synthesis have adopted a deep conv…
External link:
http://arxiv.org/abs/1905.02417
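The "deep convolutional" design referred to above is the DCGAN pattern: the generator upsamples a latent noise vector through transposed convolutions with batch normalization and ReLU, ending in a tanh. The compact PyTorch sketch below shows such a generator for 32x32 RGB output; it illustrates the standard pattern, not the architecture studied in the paper.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Latent vector z of shape (B, nz, 1, 1) -> 32x32 RGB image, DCGAN-style."""
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 4, 4, 1, 0, bias=False),       # 1x1  -> 4x4
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 8x8  -> 16x16
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # 16x16 -> 32x32
            nn.Tanh(),                                                   # pixel range [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# z = torch.randn(8, 100, 1, 1); imgs = DCGANGenerator()(z)  # -> (8, 3, 32, 32)
```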