Description: |
In Constraint Programming, constraints are usually represented as predicates allowing or forbidding combinations of values. However, some algorithms can exploit a finer representation: error functions. By associating with each constraint type a function that evaluates the quality of an assignment, this representation extends the expressiveness of the regular Constraint Satisfaction Problem/Constrained Optimization Problem formalisms. Their usage comes at a price, though: it makes problem modeling significantly harder, since users must provide a set of error functions that are not always easy to define. Here, we propose a method to automatically learn an error function corresponding to a constraint, given its predicate version only. This is, to the best of our knowledge, the first attempt to automatically learn error functions for hard constraints. In this paper, we also give for the first time a formal definition of combinatorial problems with hard constraints represented by error functions. Our method learns error functions in a supervised fashion, trying to reproduce either the Hamming or the Manhattan distance, using a graph model we call Interpretable Compositional Networks. This model allows us to obtain interpretable results. We run experiments on 7 different constraints to show the versatility of our method. These experiments show that our system can learn functions that scale to high dimensions and can learn fairly good functions over incomplete spaces. We also show that the learned error functions can be used efficiently to represent constraints in several classic problems.
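To illustrate the kind of error function the abstract refers to (which the paper learns automatically, rather than writing by hand), here is a minimal hand-written Python sketch for the AllDifferent constraint. It is a hypothetical example, not the paper's learned function: it returns 0 exactly when the constraint is satisfied, and otherwise a Hamming-style distance, i.e. the minimal number of variables whose value must change to reach satisfaction (assuming the variable domains offer enough spare values).

```python
from collections import Counter

def all_different_error(assignment):
    """Hamming-style error function for AllDifferent.

    Returns 0 iff all values are pairwise distinct. Otherwise, for each
    value appearing k > 1 times, k - 1 variables must be reassigned, so
    the error is the sum of these counts (assuming domains are large
    enough that each reassigned variable can take a fresh value).
    """
    counts = Counter(assignment)
    return sum(k - 1 for k in counts.values() if k > 1)

# Satisfying assignment: error 0.
assert all_different_error([1, 2, 3, 4]) == 0
# Value 2 appears three times: two variables must change.
assert all_different_error([2, 2, 3, 2]) == 2
```

Unlike a predicate, which only answers true/false, such a function grades near-miss assignments, which is what lets local-search or optimization algorithms exploit it as a gradient toward satisfaction.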