How to Teach Programming in the AI Era? Using LLMs as a Teachable Agent for Debugging

Author: Ma, Qianou; Shen, Hua; Koedinger, Kenneth; Wu, Tongshuang
Year of publication: 2023
Subject:
Source: AIED 2024, LNAI 14829, pp. 1-16
Document type: Working Paper
DOI: 10.1007/978-3-031-64302-6_19
Description: Large Language Models (LLMs) now excel at generative tasks and can produce content at remarkable speed. However, they remain imperfect and still make various mistakes. In a computer science education context, as these models become widely adopted as "AI pair programmers," it is increasingly important to train students to evaluate and debug LLM-generated code. In this work, we introduce HypoCompass, a novel system that facilitates deliberate practice in debugging, in which human novices play the role of teaching assistants and help LLM-powered teachable agents debug code. We enable effective task delegation between students and LLMs in this learning-by-teaching environment: students focus on hypothesizing the causes of code errors, while adjacent skills such as code completion are offloaded to LLM agents. Our evaluations demonstrate that HypoCompass generates high-quality training materials (e.g., bugs and fixes) four times more efficiently than human counterparts, and significantly improves student debugging performance by 12% from pre-test to post-test.
Comment: 14 pages, 6 figures
Database: arXiv