Abstract: |
Recently, large language models (LLMs) have achieved remarkable advances in natural language processing tasks, including question answering. Despite their strong performance, however, these models often generate inaccurate or incomplete answers, revealing clear limitations. To overcome these limitations, we propose a novel approach that leverages ensemble diversity to improve the response quality of LLMs. We aim to mitigate the shortcomings of a single-LLM approach by exploiting the response diversity of multiple LLM methods, thereby generating more accurate and reliable answers. The approach is based primarily on parallel bagging, an ensemble-diversity technique. Furthermore, the proposed method can predict relative performance metrics by comparing its results with those of other LLM methods. In our experiments, EDG-A showed an average performance improvement of 11.5% over the RAG method and 16.5% over the DAG method.
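The parallel-bagging idea described above, querying several diverse LLM pipelines concurrently and aggregating their answers, can be sketched as follows. This is a minimal illustration, not the authors' EDG-A implementation: the model functions `llm_a`/`llm_b`/`llm_c` are hypothetical stand-ins for distinct LLM methods, and majority voting is one simple aggregation choice among many.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for diverse LLM pipelines (e.g., different
# prompting or retrieval strategies). Real implementations would call
# separate model endpoints.
def llm_a(question: str) -> str:
    return "Paris"

def llm_b(question: str) -> str:
    return "Paris"

def llm_c(question: str) -> str:
    return "Lyon"  # a dissenting (incorrect) response

def ensemble_answer(question, models):
    """Query all models in parallel (bagging-style) and return the
    majority answer, mitigating errors of any single model."""
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda m: m(question), models))
    # Majority vote over the diverse responses.
    return Counter(answers).most_common(1)[0][0]

print(ensemble_answer("What is the capital of France?",
                      [llm_a, llm_b, llm_c]))  # -> Paris
```

Here the ensemble's majority vote overrides the single incorrect response, which is the core mechanism by which response diversity improves reliability over any one model alone.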