Showing 1 - 7 of 7 for search: '"Agnihotri, Shashank"'
Not all learnable parameters (e.g., weights) contribute equally to a neural network's decision function. In fact, entire layers' parameters can sometimes be reset to random values with little to no impact on the model's decisions. We revisit earlier …
External link:
http://arxiv.org/abs/2410.14470
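The abstract above reports that whole layers can sometimes be re-initialized with little effect on a model's decisions. As a rough illustration (a sketch with assumed names, not the paper's method), the following PyTorch snippet re-initializes one named layer of a pretrained ResNet-18 so its contribution can be probed by comparing validation accuracy before and after; the layer name and the validation loader are placeholders.

import copy
import torch
import torchvision

def accuracy(model, loader, device="cpu"):
    # Fraction of correctly classified samples on a labelled loader.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

def reset_layer(model, layer_name):
    # Copy the model and give the named layer a fresh random initialization.
    probe = copy.deepcopy(model)
    layer = dict(probe.named_modules())[layer_name]
    layer.reset_parameters()
    return probe

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
probe = reset_layer(model, "layer3.0.conv1")   # hypothetical choice of layer
# Compare accuracy(model, val_loader) with accuracy(probe, val_loader) to
# estimate how much this layer contributes to the decision function
# (val_loader is a placeholder for your validation data).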
Image restoration networks are usually comprised of an encoder and a decoder, responsible for aggregating image content from noisy, distorted data and for restoring clean, undistorted images, respectively. Data aggregation as well as high-resolution image …
External link:
http://arxiv.org/abs/2406.07435
Pixel-wise predictions are required in a wide variety of tasks such as image restoration, image segmentation, or disparity estimation. Common models involve several stages of data resampling, in which the resolution of feature maps is first reduced …
External link:
http://arxiv.org/abs/2311.17524
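To make the resampling pipeline described in this entry concrete, here is a minimal sketch (a toy network, not one of the architectures studied in the paper) in which feature-map resolution is first reduced with strided convolutions and then restored with transposed convolutions to produce one prediction per pixel.

import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    # Toy pixel-wise prediction model: downsample, then upsample back.
    def __init__(self, in_ch=3, out_ch=1, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1),       # 1/2 resolution
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 2 * width, 3, stride=2, padding=1),   # 1/4 resolution
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1),  # back to 1/2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, out_ch, 4, stride=2, padding=1),     # full resolution
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(1, 3, 64, 64)
y = TinyEncoderDecoder()(x)   # shape (1, 1, 64, 64): one value per input pixel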
Authors:
Agnihotri, Shashank, Gandikota, Kanchana Vaishnavi, Grabinski, Julia, Chandramouli, Paramanand, Keuper, Margret
Following their success in visual recognition tasks, Vision Transformers (ViTs) are being increasingly employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate …
External link:
http://arxiv.org/abs/2307.13856
Authors:
Sommerhoff, Hendrik, Agnihotri, Shashank, Saleh, Mohamed, Moeller, Michael, Keuper, Margret, Kolb, Andreas
The success of deep learning is frequently described as the ability to train all parameters of a network on a specific application in an end-to-end fashion. Yet, several design choices on the camera level, including the pixel layout of the sensor, are …
External link:
http://arxiv.org/abs/2304.14736
While neural networks allow highly accurate predictions in many tasks, their lack of robustness towards even slight input perturbations often hampers their deployment. Adversarial attacks such as the seminal projected gradient descent (PGD) offer an …
External link:
http://arxiv.org/abs/2302.02213
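For context on the attack named in this entry: PGD iteratively perturbs an input along the sign of the loss gradient and projects the result back into an epsilon-ball around the original. Below is a minimal sketch of the standard L-infinity variant (hyperparameters, the model, and the loss are illustrative placeholders, not the setup used in the paper).

import torch

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Standard L-infinity PGD with a random start inside the eps-ball.
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Usage (placeholders): x_adv = pgd_attack(model, images, labels)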
While neural networks allow highly accurate predictions in many tasks, their lack of robustness towards even slight input perturbations hampers their deployment in many real-world applications. Recent research towards evaluating the robustness of neural …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d17111a6fea187e70dbb3becccf1e272