Description: |
Software is often released in multiple variants to address the needs of different customers or application scenarios. One frequent approach to creating new variants is clone-and-own, whose systematic support has gained considerable research interest in the last decade. However, only a few techniques have been evaluated in a realistic setting, due to a substantial lack of publicly available clone-and-own projects that could be used as experimental subjects. Instead, many studies use variants generated from software product lines for their evaluation. Unfortunately, the results might be biased, because variants generated from a single code base lack the unintentional divergences that clone-and-own would have introduced. In this paper, we report on ongoing work towards a more systematic investigation of threats to the external validity of such experimental results. Using n-way model matching as a representative technique for supporting clone-and-own, we assess the performance of state-of-the-art algorithms on variant sets exhibiting increasing degrees of divergence. We compile our observations into four hypotheses that are meant to serve as a basis for discussion and that need to be investigated in more detail in future research.