Showing 1 - 10 of 299 for search: '"Medvet, Eric"'
The Problem-oriented AutoML in Clustering (PoAC) framework introduces a novel, flexible approach to automating clustering tasks by addressing the shortcomings of traditional AutoML solutions. Conventional methods often rely on predefined internal Clu…
External link:
http://arxiv.org/abs/2409.16218
Author:
Mami, Ciro Antonio, Coser, Andrea, Medvet, Eric, Boudewijn, Alexander T. P., Volpe, Marco, Whitworth, Michael, Svara, Borut, Sgroi, Gabriele, Panfilo, Daniele, Saccani, Sebastiano
Synthetic data generation has recently gained widespread attention as a more reliable alternative to traditional data anonymization. The involved methods are originally developed for image synthesis. Hence, their application to the typically tabular…
External link:
http://arxiv.org/abs/2211.16889
Modularity in robotics holds great potential. In principle, modular robots can be disassembled and reassembled into different robots, and possibly perform new tasks. Nevertheless, actually exploiting modularity is yet an unsolved problem: controllers u…
External link:
http://arxiv.org/abs/2204.06481
Voxel-based Soft Robots (VSRs) are a form of modular soft robots, composed of several deformable cubes, i.e., voxels. Each VSR is thus an ensemble of simple agents, namely the voxels, which must cooperate to give rise to the overall VSR behavior. Wit…
External link:
http://arxiv.org/abs/2204.02099
Interpretability can be critical for the safe and responsible use of machine learning models in high-stakes applications. So far, evolutionary computation (EC), in particular in the form of genetic programming (GP), represents a key enabler for the d…
External link:
http://arxiv.org/abs/2204.02046
Published in:
Neurocomputing, vol. 614, 21 January 2025
High-stakes applications require AI-generated models to be interpretable. Current algorithms for the synthesis of potentially interpretable models rely on objectives or regularization terms that represent interpretability only coarsely (e.g., model s…
External link:
http://arxiv.org/abs/2104.06060
Many risk-sensitive applications require Machine Learning (ML) models to be interpretable. Attempts to obtain interpretable models typically rely on tuning, by trial-and-error, hyper-parameters of model complexity that are only loosely related to int…
External link:
http://arxiv.org/abs/2004.11170
Voxel-based soft robots (VSRs) are aggregations of soft blocks whose design is amenable to optimization. We here present a software, 2D-VSR-Sim, for facilitating research concerning the optimization of VSRs' body and brain. The software, written in Ja…
External link:
http://arxiv.org/abs/2001.08617