Showing 1 - 10 of 96 results for the search: "Mao, Yuhao"
Training certifiably robust neural networks is an important but challenging task. While many algorithms for (deterministic) certified training have been proposed, they are often evaluated on different training schedules, certification methods, and …
External link:
http://arxiv.org/abs/2406.04848
Dynamic graph learning equips the edges with time attributes and allows multiple links between two nodes, which is a crucial technology for understanding evolving data scenarios like traffic prediction and recommendation systems. Existing works obtain …
External link:
http://arxiv.org/abs/2405.17473
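The snippet above describes the data structure behind dynamic graph learning: a temporal multigraph whose edges carry timestamps, with multiple links allowed between the same pair of nodes. A minimal sketch of such a container follows; the class and method names are illustrative, not taken from the paper.

```python
from collections import defaultdict
from bisect import insort, bisect_right

class TemporalMultigraph:
    """Toy temporal multigraph: parallel edges are allowed, each with a timestamp."""

    def __init__(self):
        # node -> list of (timestamp, neighbor) pairs, kept sorted by timestamp
        self.adj = defaultdict(list)

    def add_edge(self, u, v, t):
        # multiple (u, v) edges with different timestamps are all retained
        insort(self.adj[u], (t, v))
        insort(self.adj[v], (t, u))

    def neighbors_before(self, u, t):
        """Events strictly before time t, e.g. for building a node's interaction
        history in traffic-prediction or recommendation data."""
        events = self.adj[u]
        return events[:bisect_right(events, (t,))]

g = TemporalMultigraph()
g.add_edge("a", "b", t=1.0)
g.add_edge("a", "b", t=2.5)  # a second link between the same node pair
g.add_edge("a", "c", t=2.0)
print(g.neighbors_before("a", 2.5))  # [(1.0, 'b'), (2.0, 'c')]
```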
Author:
Balauca, Stefan, Müller, Mark Niklas, Mao, Yuhao, Baader, Maximilian, Fischer, Marc, Vechev, Martin
Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training, …
External link:
http://arxiv.org/abs/2403.07095
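This entry concerns bound computation with tight convex relaxations. As an illustration only (not the paper's method), the sketch below implements the standard "triangle" relaxation of a single unstable ReLU, the tightest convex region containing y = relu(x) when the pre-activation is bounded by l < 0 < u, and checks its soundness numerically.

```python
import numpy as np

def relu_triangle_bounds(l, u):
    """Triangle relaxation of y = relu(x) for pre-activation bounds l < 0 < u:
    lower constraints y >= 0 and y >= x (jointly y >= relu(x)), upper constraint
    the chord through (l, 0) and (u, u)."""
    slope = u / (u - l)
    upper = lambda x: slope * (x - l)     # chord: y <= u * (x - l) / (u - l)
    lower = lambda x: np.maximum(0.0, x)  # the combined lower constraints
    return lower, upper

# Sanity check: the relaxation encloses the exact ReLU everywhere on [l, u].
l, u = -1.0, 2.0
lower, upper = relu_triangle_bounds(l, u)
xs = np.linspace(l, u, 101)
relu = np.maximum(0.0, xs)
assert np.all(lower(xs) <= relu + 1e-9) and np.all(relu <= upper(xs) + 1e-9)
```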
Convex relaxations are a key component of training and certifying provably safe neural networks. However, despite substantial progress, a wide and poorly understood accuracy gap to standard networks remains, raising the question of whether this is due …
External link:
http://arxiv.org/abs/2311.04015
As robustness verification methods are becoming more precise, training certifiably robust neural networks is becoming ever more relevant. To this end, certified training methods compute and then optimize an upper bound on the worst-case loss over a region …
External link:
http://arxiv.org/abs/2306.10426
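The abstract sketches the core recipe of certified training: compute a sound upper bound on the worst-case loss over the perturbation region, then optimize that bound. Below is a minimal illustration using plain interval bound propagation (IBP) through Linear/ReLU layers; it is a generic sketch, not the specific method studied in the paper, and the tiny network and eps are arbitrary.

```python
import torch
import torch.nn as nn

def ibp_logit_bounds(model, x, eps):
    """Sound per-logit bounds over the l_inf ball of radius eps around x,
    via interval bound propagation through Linear/ReLU layers."""
    lo, hi = x - eps, x + eps
    for layer in model:
        if isinstance(layer, nn.Linear):
            # affine layer: propagate the center exactly, widen by |W| @ radius
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid, rad = layer(mid), rad @ layer.weight.abs().T
            lo, hi = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)  # ReLU is monotone
    return lo, hi

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x, y = torch.randn(2, 4), torch.tensor([0, 2])
lo, hi = ibp_logit_bounds(model, x, eps=0.1)
# Worst case inside the ball: true logit at its lower bound, all other logits
# at their upper bounds; the cross-entropy of these logits upper-bounds the
# worst-case loss and is the quantity certified training minimizes.
worst = hi.scatter(1, y.unsqueeze(1), lo.gather(1, y.unsqueeze(1)))
nn.functional.cross_entropy(worst, y).backward()  # gradients for training
```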
The increasing maturity of big data applications has led to a proliferation of models targeting the same objectives within the same scenarios and datasets. However, selecting the most suitable model that considers the model's features while taking specific …
External link:
http://arxiv.org/abs/2305.13634
Training certifiably robust neural networks remains a notoriously hard problem. On one side, adversarial training optimizes under-approximations of the worst-case loss, which leads to insufficient regularization for certification, while on the other, …
External link:
http://arxiv.org/abs/2305.04574
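This abstract contrasts the two regimes: adversarial training optimizes an under-approximation of the worst-case loss, certified training an over-approximation. The sketch below shows the under-approximating side, a standard PGD inner maximization; step count and step size are arbitrary, and this is not the method proposed in the paper.

```python
import torch
import torch.nn as nn

def pgd_loss(model, x, y, eps=0.1, steps=10, alpha=0.02):
    """Adversarial training's inner maximization: PGD searches for a bad point
    inside the l_inf ball, so the returned loss UNDER-approximates the true
    worst case (it finds concrete attacks, it never certifies a bound)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)       # project back into the ball
    return nn.functional.cross_entropy(model(x + delta.detach()), y)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x, y = torch.randn(2, 4), torch.tensor([0, 2])
pgd_loss(model, x, y).backward()  # gradients flow to the model parameters
```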
Author:
Gan, Yuyou, Mao, Yuhao, Zhang, Xuhong, Ji, Shouling, Pu, Yuwen, Han, Meng, Yin, Jianwei, Wang, Ting
Understanding the decision process of neural networks is hard. One vital method of explanation is to attribute the network's decision to pivotal features. Although many algorithms have been proposed, most of them solely improve faithfulness to the model. However, …
External link:
http://arxiv.org/abs/2209.01782
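The entry is about attributing a network's decision to pivotal input features. As a baseline illustration of feature attribution (deliberately the simplest variant, not the algorithm proposed in the paper), the sketch below computes gradient-times-input scores.

```python
import torch
import torch.nn as nn

def grad_times_input(model, x, target):
    """Attribute the target logit to input features: gradient of the logit
    w.r.t. the input, weighted elementwise by the input itself."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return (x.grad * x).detach()

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(1, 4)
target = model(x).argmax(dim=1).item()     # explain the predicted class
print(grad_times_input(model, x, target))  # one score per input feature
```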
Author:
Mao, Yuhao, Fu, Chong, Wang, Saizhuo, Ji, Shouling, Zhang, Xuhong, Liu, Zhenguang, Zhou, Jun, Liu, Alex X., Beyah, Raheem, Wang, Ting
One intriguing property of adversarial attacks is their "transferability" -- an adversarial example crafted with respect to one deep neural network (DNN) model is often found effective against other DNNs as well. Intensive research has been conducted …
External link:
http://arxiv.org/abs/2204.04063
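The abstract defines transferability: an adversarial example crafted against one DNN often fools another. The sketch below reproduces the basic setup, not the paper's study: a one-step FGSM attack is crafted on a surrogate network and evaluated on a separately initialized target network; both toy models and eps are illustrative.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """One-step FGSM adversarial examples crafted against `model`."""
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Craft perturbations w.r.t. the surrogate, then measure how well they
# transfer to an independently initialized target model.
surrogate = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
target = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
x_adv = fgsm(surrogate, x, y, eps=0.25)
acc = (target(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"target accuracy on transferred examples: {acc:.2f}")
```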
Published in:
Atmospheric Research, July 2024, 304