Author: Carolin A. Rickert, Manuel Henkel, Oliver Lieleg
Language: English
Year of publication: 2023
Source: APL Machine Learning, Vol 1, Iss 1, Pp 016105-016105-14 (2023)
Document type: article
ISSN: 2770-9019
DOI: 10.1063/5.0118207
Description: With big datasets and highly efficient algorithms becoming increasingly available for many problem sets, rapid advancements and recent breakthroughs achieved in the field of machine learning encourage more and more scientific fields to make use of such computational data analysis. Still, for many research problems, the amount of data available for training a machine learning (ML) model is very limited. An important strategy to combat the problems arising from data sparsity is feature elimination, a method that aims at reducing the dimensionality of an input feature space. Most such strategies exclusively focus on analyzing pairwise correlations, or they eliminate features based on their relation to a selected output label or by optimizing performance measures of a certain ML model. However, those strategies do not necessarily remove redundant information from datasets and cannot be applied to certain situations, e.g., to unsupervised learning models. Neither of these limitations applies to the network-based, correlation-driven redundancy elimination (NETCORE) algorithm introduced here, where the size of a feature vector is reduced by considering both redundancy and elimination efficiency. The NETCORE algorithm is model-independent, does not require an output label, and is applicable to all kinds of correlation topographies within a dataset. Thus, this algorithm has the potential to be a highly beneficial preprocessing tool for various machine learning pipelines.
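To illustrate the general idea described in the abstract (treating features as nodes of a correlation network and eliminating redundant ones without any output label or model), the sketch below shows a minimal, hypothetical correlation-threshold approach with a greedy choice of representative features. It is not the published NETCORE implementation; the function name, the 0.9 threshold, and the greedy selection rule are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of correlation-network-based feature
# redundancy elimination. NOT the published NETCORE algorithm; threshold,
# names, and the greedy selection rule are assumptions for illustration.
import numpy as np
import pandas as pd


def reduce_features(df: pd.DataFrame, corr_threshold: float = 0.9) -> list:
    """Return a reduced feature list by collapsing highly correlated groups.

    Features are nodes of a graph; an edge links two features whose absolute
    Pearson correlation exceeds `corr_threshold`. The feature with the most
    edges (i.e., the one whose removal of neighbors eliminates the most
    redundancy) is kept and its neighbors are dropped, unsupervised and
    model-independent.
    """
    corr = df.corr().abs().values
    np.fill_diagonal(corr, 0.0)
    features = list(df.columns)
    adjacency = {f: set() for f in features}
    for i, fi in enumerate(features):
        for j, fj in enumerate(features):
            if i < j and corr[i, j] > corr_threshold:
                adjacency[fi].add(fj)
                adjacency[fj].add(fi)

    kept, removed = [], set()
    # Greedy pass: prefer features that represent many redundant neighbors,
    # so each kept feature absorbs as much redundancy as possible.
    for f in sorted(features, key=lambda f: len(adjacency[f]), reverse=True):
        if f in removed:
            continue
        kept.append(f)
        removed.update(adjacency[f])
    return kept


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    data = pd.DataFrame({
        "a": x,
        "b": x + 0.01 * rng.normal(size=200),  # nearly redundant with "a"
        "c": rng.normal(size=200),             # independent feature
    })
    print(reduce_features(data))  # keeps one of "a"/"b" plus "c"
```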
Database: Directory of Open Access Journals