Showing 1 - 10 of 32 results for the search: '"Ravikumar, Deepak"'
Author:
Ravikumar, Deepak, Yeo, Alex, Zhu, Yiwen, Lakra, Aditya, Nagulapalli, Harsha, Ravindran, Santhosh Kumar, Suh, Steve, Dutta, Niharika, Fogarty, Andrew, Park, Yoonjae, Khushalani, Sumeet, Tarafdar, Arijit, Parekh, Kunal, Krishnan, Subru
Published in:
Proceedings of the VLDB Endowment, Vol. 17, No. 7, ISSN 2150-8097, 2024
The proliferation of big data and analytic workloads has driven the need for cloud compute and cluster-based job processing. With Apache Spark, users can process terabytes of data with ease using hundreds of parallel executors. At Microsoft, we aim at…
External link:
http://arxiv.org/abs/2411.11326
In this paper, we explore the properties of loss curvature with respect to input data in deep neural networks. The curvature of the loss with respect to the input (termed input loss curvature) is the trace of the Hessian of the loss with respect to the input. We…
External link:
http://arxiv.org/abs/2407.02747
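The quantity defined in the abstract above, the trace of the Hessian of the loss with respect to the input, can be sketched with Hutchinson's trace estimator, E[vᵀHv] = tr(H) for Rademacher probe vectors v. This is a generic illustration, not the paper's implementation: the Hessian is passed in explicitly as a matrix, whereas in practice it would be accessed only through autodiff Hessian-vector products.

```python
import numpy as np

def input_loss_curvature(hessian, n_samples=1000, rng=None):
    """Estimate tr(H) via Hutchinson's estimator.

    `hessian` is the (d, d) Hessian of the loss w.r.t. the input,
    given explicitly here purely for illustration.
    """
    rng = np.random.default_rng(rng)
    d = hessian.shape[0]
    est = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=d)  # Rademacher probe vector
        est += v @ hessian @ v               # one sample of v^T H v
    return est / n_samples

# Toy quadratic loss L(x) = 0.5 x^T A x has Hessian A, so the input
# loss curvature is tr(A); for a diagonal A the estimator is exact.
A = np.diag([1.0, 2.0, 3.0])
curvature = input_loss_curvature(A, n_samples=100)  # tr(A) = 6.0
```

For a non-diagonal Hessian the estimate is stochastic, and `n_samples` trades compute for variance.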
Compressed video action recognition classifies video samples by leveraging the different modalities in compressed videos, namely motion vectors, residuals, and intra-frames. For this purpose, three neural networks are deployed, each dedicated to…
External link:
http://arxiv.org/abs/2407.02713
Label corruption, where training samples have incorrect labels, can significantly degrade the performance of machine learning models. This corruption often arises from non-expert labeling or adversarial attacks. Acquiring large, perfectly labeled…
External link:
http://arxiv.org/abs/2403.08618
Deep Neural Nets (DNNs) have become a pervasive tool for solving many emerging problems. However, they tend to overfit to and memorize the training set. Memorization is of keen interest since it is closely related to several concepts such as…
External link:
http://arxiv.org/abs/2402.18726
Deep neural networks are over-parameterized and easily overfit the datasets they train on. In the extreme case, it has been shown that these networks can memorize a training set with fully randomized labels. We propose using the curvature of the loss…
External link:
http://arxiv.org/abs/2307.05831
Decentralized learning enables serverless training of deep neural networks (DNNs) in a distributed manner on multiple nodes. This allows for the use of large datasets, as well as the ability to train with a wide variety of data sources. However, one…
External link:
http://arxiv.org/abs/2304.04326
Author:
Ravikumar, Deepak, Roy, Kaushik
Out-of-Distribution (OoD) inputs are examples that do not belong to the true underlying distribution of the dataset. Research has shown that deep neural nets make confident mispredictions on OoD inputs. Therefore, it is critical to identify OoD inputs…
External link:
http://arxiv.org/abs/2205.03493
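The "confident mispredictions" problem described above can be illustrated with a common detection baseline, the maximum softmax probability (MSP) score: a low top-class probability suggests the input may be OoD. This is a generic sketch of that baseline, not the method of the paper linked above, and the threshold of 0.5 is an arbitrary illustrative choice.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability; low values suggest OoD."""
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    """Flag an input as OoD when the model's top-class confidence
    falls below an (illustrative) threshold."""
    return msp_score(logits) < threshold

in_dist_logits = np.array([6.0, 0.1, 0.2])  # one logit dominates: confident
ood_logits = np.array([1.1, 1.0, 0.9])      # near-uniform logits: uncertain
```

A key caveat, and part of the motivation for OoD research, is that networks can still produce high-confidence logits on OoD inputs, so MSP is a baseline rather than a solution.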
Deep neural networks have found widespread adoption in solving complex tasks ranging from image recognition to natural language processing. However, these networks make confident mispredictions when presented with data that does not belong to the…
External link:
http://arxiv.org/abs/2012.08398
Deep Learning models hold state-of-the-art performance in many fields, but their vulnerability to adversarial examples poses a threat to their ubiquitous deployment in practical settings. Additionally, adversarial inputs generated on one classifier…
External link:
http://arxiv.org/abs/2008.01524