Showing 1 - 10 of 3,241 for search: '"A. Himabindu"'
Published in:
Electronic Journal of Plant Breeding, Vol 12, Iss 4, Pp 1261-1267 (2021)
A study was conducted at the Horticultural Research Station, Venkataramannagudem, to evaluate the variability of mango germplasm, to conserve the elite ones, and to identify superior genotypes using molecular markers for future crop improvement. Gene…
External link:
https://doaj.org/article/e504cfd2ef8d4fc0b72a62d929803a41
Large language models have emerged as powerful tools for general intelligence, showcasing advanced natural language processing capabilities that find applications across diverse domains. Despite their impressive performance, recent studies have highl…
External link:
http://arxiv.org/abs/2411.15382
With the growing complexity and capability of large language models, a need to understand model reasoning has emerged, often motivated by an underlying goal of controlling and aligning models. While numerous interpretability and steering methods have…
External link:
http://arxiv.org/abs/2411.04430
Data Attribution (DA) methods quantify the influence of individual training data points on model outputs and have broad applications such as explainability, data selection, and noisy label identification. However, existing DA methods are often comput…
External link:
http://arxiv.org/abs/2410.09940
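
To make concrete what it means to quantify the influence of individual training points, here is a minimal, self-contained sketch of one common gradient-based attribution heuristic (a TracIn-style dot product of per-example gradients), written in Python with NumPy. It is purely illustrative: the toy data, logistic model, and single-epoch SGD loop are assumptions made for this example, not the method of the paper linked above.

# Illustrative sketch of gradient-based data attribution (TracIn-style),
# shown only to make "influence of individual training points" concrete.
# NOT the method of arXiv:2410.09940; the data and model are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
X_train = rng.normal(size=(100, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(float)
x_test = rng.normal(size=5)
y_test = 1.0

w = np.zeros(5)  # logistic-regression weights


def grad_logloss(w, x, y):
    """Gradient of the logistic loss for a single example."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x


# One epoch of SGD; a point's influence is accumulated as the dot product
# of its gradient with the test-point gradient along the training trajectory.
lr = 0.1
influence = np.zeros(len(X_train))
for i, (x, y) in enumerate(zip(X_train, y_train)):
    g_train = grad_logloss(w, x, y)
    g_test = grad_logloss(w, x_test, y_test)
    influence[i] += lr * (g_train @ g_test)
    w -= lr * g_train  # SGD update

# Indices of the training points estimated as most helpful for this test example.
print(np.argsort(-influence)[:5])
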
Author:
Qi, Zhenting, Luo, Hongyin, Huang, Xuliang, Zhao, Zhuokai, Jiang, Yibo, Fan, Xiangjun, Lakkaraju, Himabindu, Glass, James
While large language models (LLMs) have shown exceptional capabilities in understanding complex queries and performing sophisticated tasks, their generalization abilities are often deeply entangled with memorization, necessitating more precise evalua…
External link:
http://arxiv.org/abs/2410.01769
Author:
Rawal, Kaivalya, Lakkaraju, Himabindu
This paper presents a novel technique for incorporating user input when learning and inferring user preferences. When trying to provide users of black-box machine learning models with actionable recourse, we often wish to incorporate their personal p…
External link:
http://arxiv.org/abs/2409.13940
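
As a rough illustration of cost-aware recourse, the toy Python snippet below searches for the cheapest single-feature change that flips a linear classifier's decision, where "cheapest" is weighted by user-supplied per-feature costs. The weights, costs, and single-feature restriction are invented for this example; it is not the technique of the paper above, which works with black-box models and learned preferences.

# Hedged illustration of cost-aware actionable recourse for a linear
# classifier: find a small feature change that flips the prediction,
# weighting changes by user-specified per-feature costs. Toy sketch only,
# not the method of arXiv:2409.13940.
import numpy as np

# Linear model: predict 1 if w @ x + b > 0.
w = np.array([1.5, -2.0, 0.5])
b = -0.2
x = np.array([0.1, 0.4, 0.3])          # user currently denied (score <= 0)
user_cost = np.array([1.0, 5.0, 2.0])  # user finds feature 1 hard to change

score = w @ x + b
assert score <= 0, "already approved"

# Cost of flipping the decision by changing only feature j:
# need delta_j = -score / w_j, at cost |delta_j| * user_cost[j].
deltas = -score / w
costs = np.abs(deltas) * user_cost
j = int(np.argmin(costs))

recourse = x.copy()
recourse[j] += deltas[j] * 1.01  # small margin past the decision boundary
print(f"change feature {j} by {deltas[j]:+.3f}; new score {w @ recourse + b:.3f}")
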
Predictive machine learning models are increasingly deployed in high-stakes contexts involving sensitive personal data; in these contexts, there is a trade-off between model explainability and data privacy. In this work, we push the boundari…
External link:
http://arxiv.org/abs/2407.17663
Do different generative image models secretly learn similar underlying representations? We investigate this by measuring the latent space similarity of four different models: VAEs, GANs, Normalizing Flows (NFs), and Diffusion Models (DMs). Our method…
External link:
http://arxiv.org/abs/2407.13449
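
For readers unfamiliar with latent-space comparison, one widely used representation-similarity measure is linear CKA (centered kernel alignment); the short NumPy sketch below computes it between two latent matrices. The random "VAE" and "diffusion" latents are placeholders, and CKA itself is only a stand-in here, not necessarily the similarity measure used in the paper above.

# Minimal sketch of comparing latent spaces with linear CKA
# (centered kernel alignment), a common representation-similarity metric.
# The latents below are random stand-ins, not outputs of real models.
import numpy as np


def linear_cka(Z1, Z2):
    """Linear CKA between two latent matrices of shape (n_samples, dim)."""
    Z1 = Z1 - Z1.mean(axis=0)
    Z2 = Z2 - Z2.mean(axis=0)
    cross = np.linalg.norm(Z1.T @ Z2, "fro") ** 2
    norm1 = np.linalg.norm(Z1.T @ Z1, "fro")
    norm2 = np.linalg.norm(Z2.T @ Z2, "fro")
    return cross / (norm1 * norm2)


rng = np.random.default_rng(0)
# Stand-ins for latents of the same images under two generative models,
# e.g. a VAE encoder and a diffusion model's intermediate features.
latents_vae = rng.normal(size=(256, 64))
latents_dm = latents_vae @ rng.normal(size=(64, 32)) + 0.1 * rng.normal(size=(256, 32))

print(f"CKA(VAE, DM) = {linear_cka(latents_vae, latents_dm):.3f}")
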
As Artificial Intelligence (AI) tools are increasingly employed in diverse real-world applications, there has been significant interest in regulating these tools. To this end, several regulatory frameworks have been introduced by different countries…
External link:
http://arxiv.org/abs/2407.08689
As Large Language Models (LLMs) are increasingly being employed in real-world applications in critical domains such as healthcare, it is important to ensure that the Chain-of-Thought (CoT) reasoning generated by these models faithfully captures their…
External link:
http://arxiv.org/abs/2406.10625