Reproducibility of Experiments in Recommender Systems Evaluation
Author: | Nikolaos Polatidis, Stylianos Kapetanakis, Konstantinos Kosmidis, Elias Pimenidis |
Year of publication: | 2018 |
Subject: | Reproducibility, Replication (computing), Recommender systems, Machine learning, Artificial intelligence, Computer science |
Source: | IFIP Advances in Information and Communication Technology, ISBN: 9783319920061, AIAI, University of Brighton |
DOI: | 10.1007/978-3-319-92007-8_34 |
Description: | Recommender systems evaluation is usually based on predictive accuracy metrics, with better scores indicating recommendations of higher quality. However, comparing results is becoming increasingly difficult, since different recommendation frameworks exist and the design and implementation of experiments vary in their settings. Furthermore, there might be minor differences in algorithm implementations across frameworks. In this paper, we compare well-known recommendation algorithms using the same dataset, metrics, and overall settings; the results show that outcomes differ across frameworks even under identical settings. Hence, we propose standards that should be followed as guidelines to ensure the replication of experiments and the reproducibility of their results. |
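The predictive accuracy metrics the abstract refers to are typically MAE and RMSE over held-out ratings. The following is a minimal, hypothetical sketch (not the paper's code) of these two metrics; it also fixes the random seed, which is exactly the kind of experimental setting that must be reported for results to be reproducible across frameworks.

```python
# Hypothetical sketch of predictive accuracy metrics used in
# recommender systems evaluation (not taken from the paper).
import math
import random

def mae(actual, predicted):
    """Mean Absolute Error over paired rating lists."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error over paired rating lists."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

# Fixing the seed pins down the simulated ratings: rerunning this
# script yields identical metric values, illustrating one of the
# settings a reproducible experiment must document.
random.seed(42)
actual = [random.randint(1, 5) for _ in range(100)]
predicted = [min(5, max(1, a + random.choice([-1, 0, 0, 1]))) for a in actual]

print("MAE: ", round(mae(actual, predicted), 4))
print("RMSE:", round(rmse(actual, predicted), 4))
```

Even with metrics this simple, frameworks can diverge on details such as how unpredictable items are handled or how ties are broken, which is why the paper argues for explicit, shared evaluation standards.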
Database: | OpenAIRE |
External link: |