Showing 1 - 10
of 19
for search: '"Rawal, Ruchit"'
Author:
Rawal, Ruchit, Saifullah, Khalid, Basri, Ronen, Jacobs, David, Somepalli, Gowthami, Goldstein, Tom
Current datasets for long-form video understanding often fall short of providing genuine long-form comprehension challenges, as many tasks derived from these datasets can be successfully tackled by analyzing just one or a few random frames from a video…
External link:
http://arxiv.org/abs/2405.08813
Author:
Rawal, Ruchit, Toneva, Mariya
The rapid growth in natural language processing (NLP) research has led to numerous new models, outpacing our understanding of how they compare to established ones. One major reason for this difficulty is saturating benchmarks, which may not well reflect…
External link:
http://arxiv.org/abs/2311.04166
With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in…
External link:
http://arxiv.org/abs/2309.05132
The pretrain-finetune paradigm usually improves downstream performance over training a model from scratch on the same task, becoming commonplace across many areas of machine learning. While pretraining is empirically observed to be beneficial…
External link:
http://arxiv.org/abs/2307.06006
The high cost of acquiring and annotating samples has made the `few-shot' learning problem of prime importance. Existing works mainly focus on improving performance on clean data and overlook robustness concerns on data perturbed with adversarial…
External link:
http://arxiv.org/abs/2211.01598
Companies often safeguard their trained deep models (i.e., details of architecture, learnt weights, training details, etc.) from third-party users by exposing them only as black boxes through APIs. Moreover, they may not even provide access to…
External link:
http://arxiv.org/abs/2211.01579
Certified defense using randomized smoothing is a popular technique to provide robustness guarantees for deep neural networks against l2 adversarial attacks. Existing works use this technique to provably secure a pretrained non-robust model by training…
External link:
http://arxiv.org/abs/2210.08929
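The entry above describes randomized smoothing, where the smoothed classifier returns the majority vote of a base classifier over Gaussian-perturbed copies of the input. A minimal sketch of that prediction step, using a hypothetical toy base classifier rather than anything from the paper:

```python
import numpy as np

def base_classifier(x):
    # Hypothetical non-robust base classifier: a simple threshold rule.
    return int(x.sum() > 0)

def smoothed_classify(x, sigma=0.25, n=1000, seed=0):
    # Smoothed prediction: majority vote of the base classifier over
    # n Gaussian-noised copies of x (the core of randomized smoothing).
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    votes = [base_classifier(x + eps) for eps in noise]
    counts = np.bincount(votes, minlength=2)
    return int(np.argmax(counts))

x = np.array([0.3, 0.2, 0.1])   # clean input, base class 1 (sum = 0.6 > 0)
print(smoothed_classify(x))
```

The certified-robustness guarantee comes from bounding how much the vote fractions can change under any l2 perturbation of a given radius; this sketch shows only the voting mechanism.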
An adversarial attack perturbs an image with an imperceptible noise, leading to incorrect model prediction. Recently, a few works showed an inherent bias associated with such attacks (robustness bias), where certain subgroups in a dataset (e.g., based on class)…
External link:
http://arxiv.org/abs/2205.02604
Deep models are highly susceptible to adversarial attacks. Such attacks are carefully crafted imperceptible noises that can fool the network and can cause severe consequences when deployed. To counter them, the model requires training data for adversarial…
External link:
http://arxiv.org/abs/2204.01568
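The entry above describes adversarial attacks as carefully crafted imperceptible noise that flips a model's prediction. A minimal sketch of the standard fast gradient sign method (FGSM) on a hypothetical linear scorer — illustrative only, not the attack or model from the paper:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    # FGSM step: move each coordinate by eps in the direction that
    # increases the loss, i.e. x_adv = x + eps * sign(grad_x loss).
    return x + eps * np.sign(grad)

w = np.array([0.5, -1.0, 2.0])   # toy linear classifier: score = w @ x
x = np.array([1.0, 1.0, 1.0])    # clean input, score 1.5 (predicted positive)
grad = -w                        # gradient of a loss that penalizes a high score
x_adv = fgsm_perturb(x, grad, eps=0.8)
print(float(w @ x), float(w @ x_adv))  # small per-pixel change flips the sign
```

Even this tiny bounded perturbation (at most eps per coordinate) drives the score from positive to negative, which is the failure mode adversarial training aims to defend against.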
Learning modality-invariant features is central to the problem of Visible-Thermal cross-modal Person Re-identification (VT-ReID), where query and gallery images come from different modalities. Existing works implicitly align the modalities in pixel and…
External link:
http://arxiv.org/abs/2111.05059