Improving the significance of benchmarks for Petri nets model checkers

Author: Hostettler, Steve Patrick; Linard, Alban; Marechal Marin, Alexis Ayar; Risoldi, Matteo
Language: English
Year of publication: 2010
Subject:
Source: Proceedings of the workshops APNOC and SUMo, Braga, June 22, 2010, pp. 97-111
Description: Benchmarking is a fundamental activity for rigorously quantifying the improvements of a new approach or tool over the state of the art. It generally consists of comparing the results of a given technique with those of more or less similar approaches. In the Petri nets community, such comparisons often center on model checking and/or state space computation performance. However, the choice of techniques to compare against is sometimes poorly justified. Moreover, benchmarks often lack context information, such as the exact model used or how to reproduce the results, which makes it difficult to draw conclusions from the comparisons. We conducted a survey among the Petri nets community to gather information about the formalisms and techniques in use. The survey revealed a unanimous interest in a common repository of benchmarks, and showed that existing efforts in this direction suffer from limitations that prevent their effectiveness. In this article we report the results of the survey and outline perspectives for improving Petri nets benchmark repositories.
Database: OpenAIRE