Showing 1 - 10 of 251 082 for search: '"A Arun"'
Author:
Gao, Rujun, Guo, Xiaosu, Li, Xiaodi, Narayanan, Arun Balajiee Lekshmi, Thomas, Naveen, Srinivasa, Arun R.
This study explores the feasibility of using large language models (LLMs), specifically GPT-4o (ChatGPT), for automated grading of conceptual questions in an undergraduate Mechanical Engineering course. We compared the grading performance of GPT-4o with …
External link:
http://arxiv.org/abs/2411.03659
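A minimal sketch of the kind of LLM grading loop described above. The rubric text, prompt wording, helper name grade_answer, and use of the OpenAI Chat Completions client are illustrative assumptions, not the study's actual pipeline:

```python
# Sketch only: grading one conceptual answer against a rubric with an LLM.
# The rubric, prompts, and model choice here are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Award 0-5 points. Full credit requires identifying the governing principle "
    "and justifying it; partial credit for incomplete reasoning."
)  # hypothetical rubric text

def grade_answer(question: str, student_answer: str, model: str = "gpt-4o") -> str:
    """Ask the model to grade one response and return its raw verdict."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep the grading as deterministic as possible
        messages=[
            {"role": "system",
             "content": "You are a strict grader for an undergraduate mechanical engineering course."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nQuestion:\n{question}\n\n"
                        f"Student answer:\n{student_answer}\n\n"
                        "Return a score and a one-sentence justification."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grade_answer("Why does a thin-walled pressure vessel tend to fail along its length?",
                       "Because hoop stress is twice the axial stress."))
```

In practice such model-assigned scores would then be compared against reference grades, which is the kind of comparison the study reports for GPT-4o.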
Published in:
Kerala Journal of Ophthalmology, Vol 34, Iss 2, Pp 157-160 (2022)
This case report represents an unusual presentation of ocular tuberculosis (TB). Ocular TB is rare, but it can be the first clinical manifestation of the disease. Here, we report a case of a 67-year-old male, a chronic smoker who presented with pain, …
External link:
https://doaj.org/article/0c778d8655a14a98ab02b4e066fe8587
Author:
Bose, Anjishnu, Paramekanti, Arun
Recent work has shown that the honeycomb lattice spin-$1/2$ $J_1$-$J_3$ XY model, with nearest-neighbor ferromagnetic exchange $J_1$ and frustration induced by third-neighbor antiferromagnetic exchange $J_3$, may be relevant to a wide range of cobalt …
External link:
http://arxiv.org/abs/2412.04544
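For reference, the model named in the abstract is conventionally written as the XY Hamiltonian on the honeycomb lattice with nearest-neighbor and third-neighbor couplings; the sign convention below (ferromagnetic $J_1<0$, antiferromagnetic $J_3>0$) follows the abstract's description and is otherwise an assumption:

$$ H = J_1 \sum_{\langle ij \rangle} \left( S_i^x S_j^x + S_i^y S_j^y \right) + J_3 \sum_{\langle\langle\langle ij \rangle\rangle\rangle} \left( S_i^x S_j^x + S_i^y S_j^y \right), \qquad J_1 < 0,\; J_3 > 0, $$

where $\langle ij\rangle$ runs over nearest-neighbor bonds and $\langle\langle\langle ij\rangle\rangle\rangle$ over the third-neighbor bonds that introduce the frustration.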
Published in:
Addepalli, S., Varun, Y., Suggala, A., Shanmugam, K. and Jain, P., Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts? In NeurIPS Safe Generative AI Workshop, 2024
Large Language Models (LLMs) are known to be susceptible to crafted adversarial attacks or jailbreaks that lead to the generation of objectionable content despite being aligned to human preferences using safety fine-tuning methods. While the large …
External link:
http://arxiv.org/abs/2412.03235
Author:
Varun, Yerram, Madhavan, Rahul, Addepalli, Sravanti, Suggala, Arun, Shanmugam, Karthikeyan, Jain, Prateek
Published in:
Varun, Y., Madhavan, R., Addepalli, S., Suggala, A., Shanmugam, K., & Jain, P. Time-Reversal Provides Unsupervised Feedback to LLMs. In The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024
Large Language Models (LLMs) are typically trained to predict in the forward direction of time. However, recent works have shown that prompting these models to look back and critique their own generations can produce useful feedback. Motivated by this, …
External link:
http://arxiv.org/abs/2412.02626
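The snippet only states that looking "backward" at a generation can yield useful feedback. One assumption-laden way to illustrate the idea is to re-rank candidate answers by how well they predict the original query in reverse; here an off-the-shelf forward LM (gpt2) and this particular scoring rule stand in for whatever reverse model and objective the paper actually uses:

```python
# Sketch only: re-rank candidate responses by the log-likelihood of the query
# given the response ("time-reversed" scoring). Model and criterion are stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def reverse_score(query: str, response: str) -> float:
    """Log-likelihood of the query tokens when conditioned on the response text."""
    prefix = tok(response, return_tensors="pt").input_ids      # response first ...
    target = tok(" " + query, return_tensors="pt").input_ids   # ... then the query
    ids = torch.cat([prefix, target], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)       # position t predicts token t+1
    query_positions = range(prefix.shape[1] - 1, ids.shape[1] - 1)
    return sum(logprobs[t, ids[0, t + 1]].item() for t in query_positions)

query = "What causes tides on Earth?"
candidates = ["The gravitational pull of the Moon and the Sun.",
              "Wind blowing over the ocean surface."]
print(max(candidates, key=lambda c: reverse_score(query, c)))  # pick the answer that best "explains" the query
```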
We provide a general method to convert a "primal" black-box algorithm for solving regularized convex-concave minimax optimization problems into an algorithm for solving the associated dual maximin optimization problem. Our method adds recursive regularization …
External link:
http://arxiv.org/abs/2412.02949
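As background for the primal/dual terminology in the snippet (a standard statement, not the paper's construction): for any $f(x, y)$,

$$ \max_{y \in \mathcal{Y}} \min_{x \in \mathcal{X}} f(x, y) \;\le\; \min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} f(x, y), $$

with equality when $f$ is continuous, convex in $x$ and concave in $y$, and $\mathcal{X}$, $\mathcal{Y}$ are convex with at least one of them compact (Sion's minimax theorem). The paper's contribution, per the snippet, is a black-box reduction that turns a solver for the regularized min-max side into one for the max-min side.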
Author:
Kolipaka, Varshita, Sinha, Akshit, Mishra, Debangan, Kumar, Sumit, Arun, Arvindh, Goel, Shashwat, Kumaraguru, Ponnurangam
Graph Neural Networks (GNNs) are increasingly being used for a variety of ML applications on graph data. As graph data does not follow the independently and identically distributed (i.i.d.) assumption, adversarial manipulations or incorrect data can …
External link:
http://arxiv.org/abs/2412.00789
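A toy numerical illustration of the non-i.i.d. point above; the graph, features, and mean-aggregation update are made up for illustration and are not the paper's method. A single round of neighborhood averaging already pulls a clean node's representation toward a corrupted neighbor's value:

```python
# Toy example: one round of mean-aggregation message passing spreads a corrupted
# node's feature to its neighbors, which is why bad graph data propagates.
import numpy as np

# Adjacency of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.ones((4, 1))   # clean scalar feature on every node
X[0] = 100.0          # node 0 is corrupted

# H = D^{-1} (A + I) X : average over each node's neighborhood (including itself)
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H = D_inv @ A_hat @ X

print(H.ravel())  # [50.5, 34.0, 1.0, 1.0]: node 1 is already dragged toward node 0's corruption
```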
In this paper, we evaluate the capability of large language models to conduct personalized phishing attacks and compare their performance with human experts and AI models from last year. We include four email groups with a combined total of 101 participants …
External link:
http://arxiv.org/abs/2412.00586
Many functionals of interest in statistics and machine learning can be written as minimizers of expected loss functions. Such functionals are called $M$-estimands, and can be estimated by $M$-estimators -- minimizers of empirical average losses. …
External link:
http://arxiv.org/abs/2411.17087
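Written out, the two objects named in the snippet are as follows, with loss $\ell$, parameter space $\Theta$, and data $Z_1,\dots,Z_n \sim P$ drawn i.i.d.:

$$ \theta^\star(P) = \arg\min_{\theta \in \Theta} \mathbb{E}_{Z \sim P}\big[\ell(\theta; Z)\big] \;\; \text{(the $M$-estimand)}, \qquad \hat{\theta}_n = \arg\min_{\theta \in \Theta} \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; Z_i) \;\; \text{(the $M$-estimator)}. $$

For example, $\ell(\theta; z) = (z - \theta)^2$ makes the $M$-estimand the population mean and the corresponding $M$-estimator the sample mean.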
Algorithmic agents are used in a variety of competitive decision settings, notably in making pricing decisions in contexts that range from online retail to residential home rentals. Business managers, algorithm designers, legal scholars, and regulators …
External link:
http://arxiv.org/abs/2411.16574