Showing 1 - 10 of 15 results for the search '"Heiss, Jakob"'
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence, 38(9) (2024) 9891-9900
We study the design of iterative combinatorial auctions (ICAs). The main challenge in this domain is that the bundle space grows exponentially in the number of items. To address this, several papers have recently proposed machine learning (ML)-based …
External link:
http://arxiv.org/abs/2308.10226
Published in:
Transactions on Machine Learning Research (TMLR) 2024
The Path-Dependent Neural Jump Ordinary Differential Equation (PD-NJ-ODE) is a model for predicting continuous-time stochastic processes with irregular and incomplete observations. In particular, the method learns optimal forecasts given irregularly …
External link:
http://arxiv.org/abs/2307.13147
Randomized neural networks (randomized NNs), where only the terminal layer's weights are optimized, constitute a powerful model class for reducing the computational time of training the neural network model. At the same time, these models generalize surprisingly …
External link:
http://arxiv.org/abs/2303.11454
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence Vol 37 (2023)
We study the combinatorial assignment domain, which includes combinatorial auctions and course allocation. The main challenge in this domain is that the bundle space grows exponentially in the number of items. To address this, several papers have recently …
External link:
http://arxiv.org/abs/2208.14698
In practice, multi-task learning (through learning features shared among tasks) is an essential property of deep neural networks (NNs). While infinite-width limits of NNs can provide good intuition for their generalization behavior, the well-known in…
External link:
http://arxiv.org/abs/2112.15577
Published in:
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence Main Track (2022). Pages 541-548
Many important resource allocation problems involve the combinatorial assignment of items, e.g., auctions or course allocation. Because the bundle space grows exponentially in the number of items, preference elicitation is a key challenge in these domains …
External link:
http://arxiv.org/abs/2109.15117
Published in:
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:8708-8758, 2022
We study methods for estimating model uncertainty for neural networks (NNs) in regression. To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data. We introduce five important desiderata regarding model uncertainty …
External link:
http://arxiv.org/abs/2102.13640
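The abstract above is truncated, so as one common baseline for model uncertainty in noiseless regression (not necessarily the method studied in this paper), an ensemble's disagreement can serve as an uncertainty estimate. This sketch uses small random-feature models purely for illustration; all names and constants are assumptions:

```python
import numpy as np

# toy noiseless data, deliberately scarce: a few samples of x -> x^2
x_train = np.array([-1.0, -0.3, 0.4, 1.0])
y_train = x_train ** 2

def fit_random_feature_model(seed, n_hidden=50, lam=1e-2):
    """One ensemble member: fixed random ReLU features + ridge output layer."""
    r = np.random.default_rng(seed)
    w = r.standard_normal(n_hidden)
    b = r.uniform(-1.5, 1.5, n_hidden)
    phi = lambda x: np.maximum(0.0, np.outer(x, w) + b)
    P = phi(x_train)
    theta = np.linalg.solve(P.T @ P + lam * np.eye(n_hidden), P.T @ y_train)
    return lambda x: phi(x) @ theta

# ensemble members differ only in their random first layers
models = [fit_random_feature_model(s) for s in range(10)]

def predict_with_uncertainty(x):
    """Ensemble mean as prediction, ensemble std as model-uncertainty proxy."""
    preds = np.stack([m(x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)
```

The intuition is that all members agree near the training points but disagree where the data does not constrain them, so the std grows away from the training region.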
In this paper, we consider one-dimensional (shallow) ReLU neural networks in which weights are chosen randomly and only the terminal layer is trained. First, we mathematically show that for such networks L2-regularized regression corresponds in function …
External link:
http://arxiv.org/abs/1911.02903
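The setup described in the abstract above (random, untrained first-layer weights; only the terminal layer trained with L2 regularization) amounts to ridge regression on fixed random ReLU features. A minimal sketch, with all names, distributions, and constants chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 200

# random, untrained first layer: weights and biases drawn once and then frozen
w = rng.standard_normal(n_hidden)
b = rng.uniform(-2.0, 2.0, n_hidden)

def features(x):
    """ReLU features of a 1-D input array; shape (len(x), n_hidden)."""
    return np.maximum(0.0, np.outer(x, w) + b)

# toy 1-D regression data
x_train = np.linspace(-1.0, 1.0, 20)
y_train = np.sin(3.0 * x_train)

# train only the terminal layer: L2-regularized least squares (ridge)
lam = 1e-3
Phi = features(x_train)
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_hidden), Phi.T @ y_train)

def predict(x):
    return features(x) @ theta
```

Because only the linear output layer is optimized, training reduces to solving one regularized linear system instead of running gradient descent over all parameters.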
In practice, multi-task learning (through learning features shared among tasks) is an essential property of deep neural networks (NNs). While infinite-width limits of NNs can provide good intuition for their generalization behavior, the well-known in…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d4f3c7f829449b7fec028ba8721377fa
https://hdl.handle.net/20.500.11850/550890
We prove in this paper that optimizing wide ReLU neural networks (NNs) with at least one hidden layer using l2-regularization on the parameters enforces multi-task learning due to representation-learning -- also in the limit of width to infinity. This …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d05e6fa7de7d15d8197b33e3f1b4cab6
https://hdl.handle.net/20.500.11850/522526