Showing 1 - 6 of 6
for search: '"Wadhwa, Manya"'
Author:
Sprague, Zayne, Yin, Fangcong, Rodriguez, Juan Diego, Jiang, Dongwei, Wadhwa, Manya, Singhal, Prasann, Zhao, Xinyu, Ye, Xi, Mahowald, Kyle, Durrett, Greg
Chain-of-thought (CoT) via prompting is the de facto method for eliciting reasoning capabilities from large language models (LLMs). But for what kinds of tasks is this extra "thinking" really helpful? To analyze this, we conducted a quantitative…
External link:
http://arxiv.org/abs/2409.12183
Recent work has explored the capability of large language models (LLMs) to identify and correct errors in LLM-generated responses. These refinement approaches frequently evaluate what sizes of models are able to do refinement for what problems, but…
External link:
http://arxiv.org/abs/2407.02397
The rise of large language models (LLMs) has brought a critical need for high-quality human-labeled data, particularly for processes like human feedback and evaluation. A common practice is to label data via consensus annotation over human judgments…
External link:
http://arxiv.org/abs/2305.14770
We describe our approach towards building an efficient predictive model to detect emotions for a group of people in an image. We have proposed that training a Convolutional Neural Network (CNN) model on the emotion heatmaps extracted from the image…
External link:
http://arxiv.org/abs/1710.01216
Author:
Agarwal, Akshay, Keshari, Rohit, Wadhwa, Manya, Vijh, Mansi, Parmar, Chandani, Singh, Richa, Vatsa, Mayank
Published in:
In Information Fusion January 2019 45:333-345
Published in:
ACM International Conference Proceeding Series; 12/18/2016, p. 1-8, 8 pp.