QUENCH: Measuring the gap between Indic and Non-Indic Contextual General Reasoning in LLMs
Author: | Khan, Mohammad Aflah, Yadav, Neemesh, Masud, Sarah, Akhtar, Md. Shad |
Publication Year: | 2024 |
Document Type: | Working Paper |
Description: | The rise of large language models (LLMs) has created a need for advanced benchmarking systems beyond traditional setups. To this end, we introduce QUENCH, a novel text-based English quizzing benchmark manually curated and transcribed from YouTube quiz videos. QUENCH contains masked entities and rationales that LLMs must predict via generation. Sitting at the intersection of geographical context and common-sense reasoning, QUENCH helps assess the world knowledge and deduction capabilities of LLMs in a zero-shot, open-domain quizzing setup. We perform an extensive evaluation across 7 LLMs and 4 metrics, investigating the influence of model size, prompting style, geographical context, and gold-labeled rationale generation. The benchmarking concludes with an analysis of the errors to which the LLMs are prone. Comment: 17 Pages, 6 Figures, 8 Tables, COLING 2025 |
Database: | arXiv |