Showing 1 - 10 of 41,976
for the search: '"decision boundary"'
Author:
Gomes, Inês, Teixeira, Luís F., van Rijn, Jan N., Soares, Carlos, Restivo, André, Cunha, Luís, Santos, Moisés
The increasing use of deep learning across various domains highlights the importance of understanding the decision-making processes of these black-box models. Recent research focusing on the decision boundaries of deep classifiers relies on generate…
External link:
http://arxiv.org/abs/2408.06302
Author:
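The snippet above concerns methods that probe a deep classifier's decision boundary. As a generic illustration only (not the paper's own method), one common way to locate a point near the boundary is bisection along the segment between two inputs with different predicted labels; the classifier `f` and the inputs here are hypothetical:

```python
import numpy as np

def boundary_point(f, x_pos, x_neg, iters=30):
    """Binary search between two inputs with different predicted labels
    to locate a point close to the decision boundary of classifier f."""
    for _ in range(iters):
        mid = (x_pos + x_neg) / 2
        if f(mid) == f(x_pos):
            x_pos = mid  # midpoint still on the positive side; move inward
        else:
            x_neg = mid  # midpoint crossed the boundary; tighten from the other side
    return (x_pos + x_neg) / 2
```

With a toy threshold classifier such as `f = lambda x: int(x[0] > 0.5)`, the search converges to the threshold itself.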
Hasegawa, Naoya, Sato, Issei
Real-world data distributions are often highly skewed. This has spurred a growing body of research on long-tailed recognition, aimed at addressing the imbalance in training classification models. Among the methods studied, multiplicative logit adjust…
External link:
http://arxiv.org/abs/2409.17582
Author:
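The snippet above refers to multiplicative logit adjustment for long-tailed recognition. As background only, here is a minimal sketch of the better-known additive variant, which subtracts scaled log class priors from the logits post hoc so rare classes are not systematically under-predicted; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def adjust_logits(logits, class_priors, tau=1.0):
    """Post-hoc additive logit adjustment: subtract scaled log-priors
    so that rare (low-prior) classes receive a relative boost."""
    return logits - tau * np.log(class_priors)
```

For equal raw logits and priors `[0.9, 0.1]`, the rare class ends up with the larger adjusted logit.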
Tang, Jiakai, Dai, Sunhao, Sun, Zexu, Chen, Xu, Xu, Jun, Yu, Wenhui, Hu, Lantao, Jiang, Peng, Li, Han
In recent years, graph contrastive learning (GCL) has received increasing attention in recommender systems due to its effectiveness in reducing bias caused by data sparsity. However, most existing GCL models rely on heuristic approaches and usually a…
External link:
http://arxiv.org/abs/2407.10184
Author:
Nie, Qiang, Fu, Weifu, Lin, Yuhuan, Li, Jialin, Zhou, Yifeng, Liu, Yong, Zhu, Lei, Wang, Chengjie
Instance-incremental learning (IIL) focuses on learning continually from data of the same classes. Compared to class-incremental learning (CIL), IIL is seldom explored because it suffers less from catastrophic forgetting (CF). However, besides r…
External link:
http://arxiv.org/abs/2406.03065
Author:
Dissanayake, Pasan, Dutta, Sanghamitra
Counterfactual explanations find ways of achieving a favorable model outcome with minimum input perturbation. However, counterfactual explanations can also be exploited to steal the model by strategically training a surrogate model to give similar pr…
External link:
http://arxiv.org/abs/2405.05369
Many machine learning models are susceptible to adversarial attacks, with decision-based black-box attacks representing the most critical threat in real-world applications. These attacks are extremely stealthy, generating adversarial examples using h…
External link:
http://arxiv.org/abs/2406.04998
Efforts to leverage deep learning models in low-resource regimes have led to numerous augmentation studies. However, the direct application of methods such as mixup and cutout to text data is limited due to their discrete characteristics. While meth…
External link:
http://arxiv.org/abs/2403.15512
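The snippet above notes that mixup does not transfer directly to discrete text. For context, here is a minimal sketch of standard mixup on continuous inputs, which forms a convex combination of two examples and their (one-hot) labels; the names are illustrative and the mixing coefficient is drawn from a Beta distribution as in the original formulation:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup augmentation: blend two training examples and their labels
    with a coefficient lam ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    x_mixed = lam * x1 + (1 - lam) * x2
    y_mixed = lam * y1 + (1 - lam) * y2
    return x_mixed, y_mixed
```

Because tokens are discrete symbols rather than points in a continuous space, this interpolation has no direct meaning for raw text, which is the limitation the abstract alludes to.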
Deep neural networks (DNNs) have revolutionized various industries, leading to the rise of Machine Learning as a Service (MLaaS). In this paradigm, well-trained models are typically deployed through APIs. However, DNNs are susceptible to backdoor att…
External link:
http://arxiv.org/abs/2402.17465
Machine learning has been adopted for efficient cooperative spectrum sensing. However, it incurs an additional security risk due to attacks leveraging adversarial machine learning to create malicious spectrum sensing values to deceive the fusion cent…
External link:
http://arxiv.org/abs/2402.08986