Showing 1 - 10
of 71
for the search: '"minimisation du risque empirique"'
Published in:
RR-9508, INRIA, Centre Inria d'Université Côte d'Azur, Sophia Antipolis. 2023
The effect of the relative entropy asymmetry is analyzed in the empirical risk minimization with relative entropy regularization (ERM-RER) problem. A novel regularization is introduced, coined Type-II regularization, that allows for solutions to the…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______165::dac7de7a334a6f3b062b46763c43e0e9
https://hal.science/hal-04110899/file/RR9508.pdf
Author:
Papa, Guillaume
Published in:
Machine Learning [stat.ML]. Télécom ParisTech, 2018. English. ⟨NNT : 2018ENST0005⟩
In this manuscript, we present and study sampling strategies applied to problems in statistical learning. The goal is to address the issues that typically arise in a large-data context, when the number of observations and their dimensi…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::688bef6f02c9a59ccdc4f646d86a1433
https://pastel.archives-ouvertes.fr/tel-03209978/document
Author:
Yu, Jiaqian
This thesis addresses the problem of learning with non-modular loss functions. For prediction problems where several outputs are predicted simultaneously, viewing the result as a joint set of predictions…
External link:
http://www.theses.fr/2017SACLC012/document
Published in:
[Research Report] RR-9454, Inria. 2022
In this version, minor edits were made to correct typos.; The empirical risk minimization (ERM) problem with relative entropy regularization (ERM-RER) is investigated under the assumption that the reference measure is a σ-finite measure, and not nec…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::38a13dc1ba6f8d28a1616850d40a61be
https://hal.science/hal-03560072v5/document
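The abstract above stops mid-sentence; in standard treatments of ERM-RER, the solution is a Gibbs probability measure relative to the reference measure. A sketch of that solution follows; the notation is assumed for illustration and is not drawn from the record itself:

```latex
% Gibbs solution of the ERM-RER problem (illustrative sketch; symbols are assumptions):
%   Q       : sigma-finite reference measure
%   L_z     : empirical risk induced by the dataset z
%   lambda  : regularization factor, lambda > 0
\frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta)
  = \frac{\exp\!\bigl(-\tfrac{1}{\lambda}\,\mathsf{L}_{z}(\theta)\bigr)}
         {\displaystyle\int \exp\!\bigl(-\tfrac{1}{\lambda}\,\mathsf{L}_{z}(\nu)\bigr)\,\mathrm{d}Q(\nu)}
```

The σ-finiteness assumption highlighted in the abstract matters here: when Q is not a probability measure, the normalizing integral in the denominator is not automatically finite.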
Published in:
[Research Report] RR-9474, Inria. 2022, pp.22
An explicit expression for the sensitivity of the expected empirical risk (EER) induced by the Gibbs algorithm (GA) is presented in the context of supervised machine learning. The sensitivity is defined as the difference between the EER induced by the…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______165::968342e135d926a3cef5ca4f25b24680
https://hal.science/hal-03703628v3/file/V2-INRIA-RR9474.pdf
Author:
Achab, Mastane
Published in:
Machine Learning [stat.ML]. Institut Polytechnique de Paris, 2020. English. ⟨NNT : 2020IPPAT020⟩
This thesis is divided into two parts: the first part is on ranking and the second on risk-aware reinforcement learning. While binary classification is the flagship application of empirical risk minimization (ERM), the main paradigm of machine learning,…
External link:
https://explore.openaire.eu/search/publication?articleId=od______2592::dff9e3e90db5a29ade9379643e38ed22
https://tel.archives-ouvertes.fr/tel-03043749/document
Author:
Achab, Mastane
Published in:
Machine Learning [stat.ML]. Institut Polytechnique de Paris, 2020. English. ⟨NNT : 2020IPPAT020⟩
This thesis is divided into two parts: the first part is on ranking and the second on risk-aware reinforcement learning. While binary classification is the flagship application of empirical risk minimization (ERM), the main paradigm of machine learning,…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::dff9e3e90db5a29ade9379643e38ed22
https://tel.archives-ouvertes.fr/tel-03043749/document
Author:
Papa, Guillaume
Published in:
Machine Learning [stat.ML]. Télécom ParisTech, 2018. English. ⟨NNT : 2018ENST0005⟩
In this manuscript, we present and study sampling strategies applied to problems in statistical learning. The goal is to address the issues that typically arise in a large-data context, when the number of observations and their dimensi…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::9b51e5cdb0f3ddbc890c9a99517e3768
http://hdl.handle.net/20.500.12278/38648
Author:
Gazagnadou, Nidham
The considerable increase in the number of data points and features complicates the learning phase, which requires minimizing a loss function. Stochastic gradient descent (SGD) and its variance-reduced variants (SAGA, SVRG, MISO) are widely used to solve th…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______166::9c6ea8096df74ec0d3b21ec44f7eb5af
https://theses.hal.science/tel-03590678
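The record above concerns SGD-type solvers for loss minimization. As a minimal illustrative sketch of the baseline method those variants improve upon, here is plain SGD on a one-dimensional least-squares problem; every name and constant below is an assumption for illustration, not taken from the thesis:

```python
import random

def sgd_least_squares(data, lr=0.05, epochs=200, seed=0):
    """Fit w minimizing (1/n) * sum((w*x - y)^2) using one-sample gradients."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = rng.choice(data)          # draw one observation at random
        grad = 2.0 * (w * x - y) * x     # gradient of the single-sample loss
        w -= lr * grad                   # SGD step
    return w

# Synthetic data generated from y = 3*x (noise-free, so the iterates
# contract toward w = 3 on every step).
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w_hat = sgd_least_squares(data)
```

Variance-reduced methods such as SAGA and SVRG modify exactly the `grad` line: they correct the one-sample gradient with stored or periodically recomputed full-gradient information, trading memory or occasional full passes for a smaller gradient variance.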
Author:
Arlot, Sylvain
Published in:
Apprentissage statistique et données massives
Myriam Maumy-Bertrand; Gilbert Saporta; Christine Thomas-Agnan. Apprentissage statistique et données massives, Editions Technip, 2018, 9782710811824
This text is a tutorial on supervised statistical learning, from the mathematical point of view. We describe the general prediction problem and the two key examples of regression and binary classification. Then, we study two kinds of learning rules:…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::f6c0c6260514909241eb457c327bbabd
https://hal.archives-ouvertes.fr/hal-01485506/document