Showing 1 - 10 of 1,248 for the search: '"P. Kasa"'
Large Language Models (LLMs) have seen widespread adoption due to their remarkable natural language capabilities. However, when deploying them in real-world settings, it is important to align LLMs to generate texts according to acceptable human standards…
External link:
http://arxiv.org/abs/2407.06443
Mixture-Models is an open-source Python library for fitting Gaussian Mixture Models (GMM) and their variants, such as Parsimonious GMMs, Mixture of Factor Analyzers, MClust models, Mixture of Student's t distributions, etc. It streamlines the…
External link:
http://arxiv.org/abs/2402.10229
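As a quick orientation to this model family, here is a minimal sketch of fitting a two-component GMM with scikit-learn's GaussianMixture; this is a stand-in illustration, not the Mixture-Models library's own API (which is not shown in this entry).

```python
# Minimal GMM-fitting sketch using scikit-learn as a stand-in; the
# Mixture-Models library's own API is not shown in the entry above.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic 2-D clusters.
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.8, size=(200, 2)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X)
print(gmm.means_)          # estimated component means
print(gmm.predict(X[:5]))  # hard cluster assignments
```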
Conformal prediction (CP) enables machine learning models to output prediction sets with a guaranteed coverage rate, assuming exchangeable data. Unfortunately, the exchangeability assumption is frequently violated due to distribution shifts in practice…
External link:
http://arxiv.org/abs/2406.01416
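For context, the standard split conformal recipe looks roughly as follows (a textbook sketch under the exchangeability assumption, not this paper's contribution):

```python
# Split conformal prediction sketch: calibrate a score threshold so that
# prediction sets cover the true label with probability >= 1 - alpha,
# assuming the calibration and test data are exchangeable.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # A class enters the set when its score is within the threshold.
    return test_probs >= 1.0 - qhat  # boolean mask, shape (n_test, n_classes)
```

The coverage guarantee here holds only under exchangeability, which is exactly the assumption the abstract says is broken by distribution shift.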
Exploring Ordinality in Text Classification: A Comparative Study of Explicit and Implicit Techniques
Author:
Kasa, Siva Rajesh, Goel, Aniket, Gupta, Karan, Roychowdhury, Sumegh, Bhanushali, Anish, Pattisapu, Nikhil, Murthy, Prasanna Srinivasa
Ordinal Classification (OC) is a widely encountered challenge in Natural Language Processing (NLP), with applications in various domains such as sentiment analysis, rating prediction, and more. Previous approaches to tackling OC have primarily focused…
External link:
http://arxiv.org/abs/2405.11775
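One widely used explicit technique, shown here as a generic sketch in the spirit of Frank & Hall's cumulative encoding (not necessarily this paper's formulation), converts a K-level ordinal target into K-1 binary "is y > k?" targets:

```python
# Cumulative ("is y > k?") encoding of an ordinal target: a K-level label
# becomes K-1 monotone binary targets, preserving the label order.
import numpy as np

def ordinal_encode(y, num_levels):
    y = np.asarray(y)                       # labels in {0, ..., K-1}
    thresholds = np.arange(num_levels - 1)  # k = 0, ..., K-2
    return (y[:, None] > thresholds[None, :]).astype(np.float32)

print(ordinal_encode([0, 1, 3], num_levels=4))
# [[0. 0. 0.]
#  [1. 0. 0.]
#  [1. 1. 1.]]
```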
Author:
Gupta, Karan, Roychowdhury, Sumegh, Kasa, Siva Rajesh, Kasa, Santhosh Kumar, Bhanushali, Anish, Pattisapu, Nikhil, Murthy, Prasanna Srinivasa
In the In-Context Learning (ICL) setup, various forms of label bias can manifest. One such manifestation is majority label bias, which arises when the distribution of labeled examples in the in-context samples is skewed towards one or more specific…
External link:
http://arxiv.org/abs/2312.16549
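A simple way to probe this effect is to compare predictions for the same query under balanced versus skewed in-context label distributions. This is an illustrative sketch only; `query_llm` is a hypothetical stand-in for an actual model call.

```python
# Probe majority label bias: the same query, prompted with a balanced
# vs. a positively skewed set of in-context examples.
demos = [("great movie", "positive"), ("loved it", "positive"),
         ("boring plot", "negative"), ("waste of time", "negative")]

def build_prompt(shots, query):
    body = "\n".join(f"Review: {t}\nLabel: {y}" for t, y in shots)
    return f"{body}\nReview: {query}\nLabel:"

balanced = demos                                        # 2 positive, 2 negative
skewed = [d for d in demos if d[1] == "positive"] * 2   # 4 positive, 0 negative

query = "the acting was flat"
for name, shots in [("balanced", balanced), ("skewed", skewed)]:
    prompt = build_prompt(shots, query)
    # prediction = query_llm(prompt)  # hypothetical LLM call
    print(f"--- {name} ---\n{prompt}\n")
```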
Author:
Roychowdhury, Sumegh, Gupta, Karan, Kasa, Siva Rajesh, Murthy, Prasanna Srinivasa, Chandra, Alok
Published in:
NeurIPS 2023 - Workshop on Distribution Shifts
Pre-trained language models (PLMs) have seen tremendous success in text classification (TC) problems in the context of Natural Language Processing (NLP). In many real-world text classification tasks, the class definitions being learned do not remain…
External link:
http://arxiv.org/abs/2311.03320
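For orientation only, here is a generic PLM text-classifier setup with Hugging Face transformers; this illustrates the kind of model the abstract refers to, and re-fitting such a classifier when class definitions drift is the problem setting, not shown here:

```python
# Generic PLM text classifier (illustration of the model class studied,
# not the paper's method for handling changing class definitions).
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

inputs = tokenizer("the delivery was late", return_tensors="pt")
logits = model(**inputs).logits  # fine-tune / re-fit as definitions change
print(logits.shape)              # torch.Size([1, 2])
```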
Author:
Kasa, Kevin, Taylor, Graham W.
Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. Yet, its performance is known to degrade under distribution shift and long-tailed class distributions…
External link:
http://arxiv.org/abs/2307.01088
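A quick diagnostic for the failure mode described above (a sketch, not the paper's evaluation protocol) is to check coverage per class rather than marginally:

```python
# Per-class coverage of conformal prediction sets: marginal coverage can
# look fine while rare (tail) classes are badly under-covered.
import numpy as np

def per_class_coverage(pred_sets, labels, num_classes):
    # pred_sets: bool array (n, num_classes); labels: int array (n,).
    covered = pred_sets[np.arange(len(labels)), labels]
    return np.array([covered[labels == c].mean() if np.any(labels == c)
                     else np.nan for c in range(num_classes)])
```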
Author:
Kása, Zoltán
In randomly created structures (be they natural or artificial), ordered substructures very often exist. In this Hungarian-language scientific essay we present some such structures in graph theory, e.g. Rédei's theorem, Ramsey theory, T…
External link:
http://arxiv.org/abs/2112.02362
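Two of the named results, stated in their standard textbook form for reference (these statements are not quoted from the essay itself):

```latex
% Rédei's theorem: every tournament (orientation of a complete graph)
% on $n$ vertices contains a directed Hamiltonian path.
\textbf{R\'edei.} Every tournament on $n$ vertices has a directed
Hamiltonian path.

% A basic Ramsey-type fact, often written $R(3,3)=6$.
\textbf{Ramsey.} Every 2-colouring of the edges of $K_6$ contains a
monochromatic triangle; equivalently, $R(3,3) = 6$.
```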
Author:
Kasa, Siva Rajesh, Rajan, Vaibhav
Copulas provide a modular parameterization of multivariate distributions that decouples the modeling of marginals from the dependencies between them. The Gaussian Mixture Copula Model (GMCM) is a highly flexible copula that can model many kinds of multi-…
External link:
http://arxiv.org/abs/2010.14359
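The modular decoupling the abstract mentions is Sklar's theorem, stated here in its standard form for reference (the GMCM remark is a gloss on the abstract):

```latex
% Sklar's theorem: any joint CDF $F$ with marginals $F_1,\dots,F_d$
% factors through a copula $C$:
F(x_1, \dots, x_d) = C\bigl(F_1(x_1), \dots, F_d(x_d)\bigr).
% In a GMCM, $C$ is the copula induced by a Gaussian mixture, so the
% dependence structure can be multi-modal while the marginals stay free.
```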
We present the first deep-learning-based architecture for collective matrix tri-factorization (DCMTF) of arbitrary collections of matrices, also known as augmented multi-view data. DCMTF can be used for multi-way spectral clustering of heterogeneous…
External link:
http://arxiv.org/abs/2009.05805
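For orientation, the basic single-matrix form of tri-factorization that DCMTF generalizes (a standard statement, not DCMTF's actual neural objective):

```latex
% Matrix tri-factorization: approximate $X$ by three low-rank factors,
\min_{U, S, V} \; \lVert X - U S V^{\top} \rVert_F^2,
% where $U$ and $V$ represent row- and column-entity clusters and $S$
% their interactions; per the abstract, DCMTF learns such factors with a
% deep architecture, jointly across a collection of matrices that share
% entities.
```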