Showing 1 - 1 of 1 for search: '"Xia, Hanfeng"'
Backdoor attacks inject a small number of poisoned examples containing triggers into the training dataset. At inference time, a backdoored model maintains high accuracy on normal examples, yet when presented with inputs carrying the trigger, it misclassifies them into the attacker-chosen target label.
External link:
http://arxiv.org/abs/2406.16125
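The poisoning scheme described in the abstract can be sketched in a few lines. This is an illustrative minimal example, not the method of the linked paper: the function name, the white-square trigger patch, and the poisoning rate are all assumptions chosen for demonstration.

```python
# Minimal sketch of backdoor data poisoning (illustrative assumptions:
# image-shaped data, a 3x3 corner trigger, and a 5% poisoning rate).
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch on a fraction of the training images
    and relabel those examples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0  # 3x3 white-square trigger, bottom-right corner
        labels[i] = target_label   # relabel to the attacker-chosen class
    return images, labels, idx

# Usage: 100 synthetic 8x8 grayscale "images" with labels 0..9.
X = np.zeros((100, 8, 8), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=7, poison_rate=0.05)
```

A model trained on `Xp, yp` would learn to associate the trigger patch with class 7 while behaving normally on clean inputs, which is exactly why backdoored models retain high clean accuracy.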