Showing 1 - 10 of 1,271
for search: '"Tejaswini, P."'
While very successful at many downstream tasks, deep neural networks see limited application in real-life scenarios because of their susceptibility to domain shifts such as common corruptions and adversarial attacks. …
External link:
http://arxiv.org/abs/2411.19853
Author:
Medi, Tejaswini, Rampini, Arianna, Reddy, Pradyumna, Jayaraman, Pradeep Kumar, Keuper, Margret
Autoregressive (AR) models have achieved remarkable success in natural language and image generation, but their application to 3D shape modeling remains largely unexplored. Unlike diffusion models, AR models enable more efficient and controllable generation …
External link:
http://arxiv.org/abs/2411.19037
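The abstract above names autoregressive next-token generation as the core mechanism but is cut off before any detail of the 3D tokenization. As a minimal sketch of generic autoregressive sampling over a discrete token vocabulary (not the paper's shape model; the stand-in `next_token_logits`, vocabulary size, and sequence length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 512   # hypothetical codebook of discrete shape tokens
MAX_TOKENS = 64    # hypothetical sequence length for one shape

def next_token_logits(prefix):
    """Stand-in for a trained AR model; a real decoder would condition on the prefix."""
    return rng.normal(size=VOCAB_SIZE)

def sample_sequence(temperature=1.0):
    """Generic next-token sampling loop, the core of autoregressive generation."""
    tokens = []
    for _ in range(MAX_TOKENS):
        logits = next_token_logits(tokens) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    return tokens  # a learned decoder would map these back to geometry

print(sample_sequence()[:10])
```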
Deep neural networks are susceptible to adversarial attacks and common corruptions, which undermine their robustness. To enhance model resilience against such challenges, Adversarial Training (AT) has emerged as a prominent solution. …
External link:
http://arxiv.org/abs/2410.23142
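The abstract above names Adversarial Training (AT) as the defence under discussion. Below is a minimal sketch of the standard PGD-based AT loop (generic AT, not this paper's specific variant), assuming a PyTorch classifier and cross-entropy loss:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: worst-case perturbation inside an L-inf ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()          # ascend the loss
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One AT step: craft adversarial examples, then train on them instead of clean inputs."""
    model.eval()                                     # freeze BN statistics while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```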
Author:
Ko, Ching-Yun, Chen, Pin-Yu, Das, Payel, Mroueh, Youssef, Dan, Soham, Kollias, Georgios, Chaudhury, Subhajit, Pedapati, Tejaswini, Daniel, Luca
Reducing the likelihood of generating harmful and toxic output is an essential task when aligning large language models (LLMs). Existing methods mainly rely on training an external reward model (i.e., another language model) or fine-tuning the LLM …
External link:
http://arxiv.org/abs/2410.03818
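The abstract above mentions external reward models as one existing route to safer outputs; the snippet is cut off before the paper's own proposal. For context, a minimal sketch of reward-model reranking (best-of-n), where `generate` and `reward` are hypothetical placeholders for a sampling call and an external reward model:

```python
import random

def best_of_n(prompt, generate, reward, n=8):
    """Sample n candidate completions and keep the one the reward model scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward(prompt, c))

# Toy usage with stub functions (illustration only).
random.seed(0)
stub_generate = lambda p: random.choice(["a polite reply", "a toxic reply"])
stub_reward = lambda p, c: 1.0 if "polite" in c else -1.0
print(best_of_n("hello", stub_generate, stub_reward))
```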
Author:
Ashktorab, Zahra, Desmond, Michael, Pan, Qian, Johnson, James M., Cooper, Martin Santillan, Daly, Elizabeth M., Nair, Rahul, Pedapati, Tejaswini, Achintalwar, Swapnaja, Geyer, Werner
Evaluating large language model (LLM) outputs requires users to make critical judgments about the best outputs across various configurations. This process is costly and time-consuming given the large amount of data involved. LLMs are increasingly used as evaluators …
External link:
http://arxiv.org/abs/2410.00873
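The abstract above describes using LLMs themselves as evaluators of other LLMs' outputs. A minimal sketch of a pairwise judge call follows; the prompt wording and the `ask_llm` callable are assumptions, not the paper's actual protocol:

```python
JUDGE_PROMPT = """You are comparing two answers to the same question.
Question: {question}
Answer A: {a}
Answer B: {b}
Reply with exactly one letter, A or B, naming the better answer."""

def judge_pair(question, answer_a, answer_b, ask_llm):
    """Ask a judge LLM which candidate is better; `ask_llm(prompt) -> str` is a placeholder.

    Judging twice with the positions swapped is a common way to reduce position bias.
    """
    first = ask_llm(JUDGE_PROMPT.format(question=question, a=answer_a, b=answer_b)).strip()
    second = ask_llm(JUDGE_PROMPT.format(question=question, a=answer_b, b=answer_a)).strip()
    if first.startswith("A") and second.startswith("B"):
        return "A"
    if first.startswith("B") and second.startswith("A"):
        return "B"
    return "tie"  # the judge disagreed with itself once the positions were swapped
```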
Author:
Khatiwada, Aamod, Kokel, Harsha, Abdelaziz, Ibrahim, Chaudhury, Subhajit, Dolby, Julian, Hassanzadeh, Oktie, Huang, Zhenhan, Pedapati, Tejaswini, Samulowitz, Horst, Srinivas, Kavitha
Enterprises have a growing need to identify relevant tables in data lakes, e.g., tables that are unionable, joinable, or subsets of each other. Tabular neural models can be helpful for such data discovery tasks. In this paper, we present TabSketchFM, …
External link:
http://arxiv.org/abs/2407.01619
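TabSketchFM itself is a neural model and the abstract gives no architectural detail; as background for the same data-discovery question (spotting join/union candidates by column overlap), here is a minimal non-neural MinHash baseline, which is not the paper's method:

```python
import hashlib

def minhash_signature(values, num_hashes=64):
    """Tiny MinHash: columns with similar value sets get similar signatures."""
    sig = []
    for seed in range(num_hashes):
        hashes = (int(hashlib.md5(f"{seed}:{v}".encode()).hexdigest(), 16) for v in values)
        sig.append(min(hashes))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates the Jaccard overlap of the columns."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Columns from two different tables: high estimated overlap suggests a join or union candidate.
col_x = ["alice", "bob", "carol", "dave"]
col_y = ["alice", "bob", "carol", "erin"]
print(estimated_jaccard(minhash_signature(col_x), minhash_signature(col_y)))
```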
Estimating uncertainty or confidence in the responses of a model can be significant in evaluating trust, not only in the responses but also in the model as a whole. In this paper, we explore the problem of estimating confidence for responses of large language models …
External link:
http://arxiv.org/abs/2406.04370
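The abstract above poses the confidence-estimation problem without detailing the paper's estimator. One common baseline is sampling-based agreement (self-consistency); a minimal sketch, with `generate` as a placeholder for a stochastic LLM call:

```python
from collections import Counter

def agreement_confidence(prompt, generate, n=10):
    """Sample the model n times and use answer agreement as a rough confidence score.

    `generate(prompt)` stands in for a sampled LLM call (temperature > 0).
    Returns the majority answer and the fraction of samples that agreed with it.
    """
    answers = [generate(prompt).strip() for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n
```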
Medical image encryption can help preserve patient privacy. In this article, we present a chaotic system-based medical image encryption method that uses a diffusion and permutation architecture. The permutation is based on the plain image and a chaotic …
External link:
http://arxiv.org/abs/2406.07560
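The abstract above describes a permutation-diffusion design keyed by a chaotic system. A minimal sketch of that general pattern on a grayscale image follows, using a logistic map; the key handling is illustrative, and unlike the article's scheme the permutation here depends only on the chaotic sequence, not on the plain image:

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k); the seed x0 acts as the secret key."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def encrypt(img, key=0.3141592):
    """Permutation (scramble pixel positions) then diffusion (XOR with a chaotic keystream)."""
    flat = img.flatten()
    chaos = logistic_sequence(key, flat.size)
    perm = np.argsort(chaos)                       # key-dependent pixel permutation
    keystream = (chaos * 255).astype(np.uint8)
    return (flat[perm] ^ keystream).reshape(img.shape)

def decrypt(cipher, key=0.3141592):
    flat = cipher.flatten()
    chaos = logistic_sequence(key, flat.size)
    perm = np.argsort(chaos)
    keystream = (chaos * 255).astype(np.uint8)
    out = np.empty_like(flat)
    out[perm] = flat ^ keystream                   # undo diffusion, then the permutation
    return out.reshape(cipher.shape)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(img)), img)
```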
Neural architecture search (NAS) enables the automatic design of neural network models. However, training the candidates generated by the search algorithm for performance evaluation incurs considerable computational overhead. Our method, dubbed nasgr…
External link:
http://arxiv.org/abs/2405.01306
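The abstract above is cut off before fully naming or describing the method; the stated goal is to score NAS candidates without training them. For context, a minimal sketch of one generic training-free (zero-cost) proxy, the gradient norm of a single random mini-batch on an untrained PyTorch network, which is not necessarily the paper's measure:

```python
import torch
import torch.nn as nn

def gradnorm_proxy(model, input_shape=(8, 3, 32, 32), num_classes=10):
    """Cheap trainability proxy: gradient norm after one forward/backward pass on random data."""
    x = torch.randn(input_shape)
    y = torch.randint(0, num_classes, (input_shape[0],))
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    score = sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)
    model.zero_grad()
    return score

# Rank two hypothetical candidate architectures by proxy score instead of training both.
cand_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
cand_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
print(sorted([("A", gradnorm_proxy(cand_a)), ("B", gradnorm_proxy(cand_b))], key=lambda t: -t[1]))
```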
Author:
Dhurandhar, Amit, Pedapati, Tejaswini, Luss, Ronny, Dan, Soham, Lozano, Aurelie, Das, Payel, Kollias, Georgios
Transformer-based Language Models have become ubiquitous in Natural Language Processing (NLP) due to their impressive performance on various tasks. However, expensive training and inference remain a significant impediment to their widespread …
External link:
http://arxiv.org/abs/2404.01306