Showing 1 - 10 of 622 for search: '"Schmidhuber, J."'
Author:
Nivel, E., Thórisson, K. R., Steunebrink, B. R., Dindo, H., Pezzulo, G., Rodriguez, M., Hernandez, C., Ognibene, D., Schmidhuber, J., Sanz, R., Helgason, H. P., Chella, A., Jonsson, G. K.
We have designed a machine that becomes increasingly better at behaving in underspecified circumstances, in a goal-directed way, on the job, by modeling itself and its environment as experience accumulates. Based on principles of autocatalysis, endog…
External link:
http://arxiv.org/abs/1312.6764
Published in:
Information and Computation, Vol. 205, No. 2 (2007), 242-261
We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor $M$ from the true distribution $\mu$ by the algorithmic complexity of $\mu$. Here we assume…
External link:
http://arxiv.org/abs/cs/0701120
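For reference, the classical Solomonoff result this abstract builds on is usually stated as a cumulative squared-error bound. A hedged sketch of that standard form (not necessarily the exact statement proved in the paper):

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\Big[\sum_{x_t} \big(M(x_t \mid x_{<t}) - \mu(x_t \mid x_{<t})\big)^2\Big] \le \frac{\ln 2}{2}\, K(\mu),$$

where $K(\mu)$ denotes the algorithmic (prefix Kolmogorov) complexity of the computable measure $\mu$; since the right-hand side is finite, $M$'s predictions converge to $\mu$'s with $\mu$-probability one.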
We share our experience with the recently released WILDS benchmark, a collection of ten datasets dedicated to developing models and training strategies which are robust to domain shifts. Several experiments yield a couple of critical observations whi…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::bbc3fb4971c4d100d3cc3619ab431450
http://arxiv.org/abs/2112.15550
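As background on the benchmark discussed above, a minimal sketch of loading one WILDS dataset and iterating over a training split with the wilds Python package might look as follows; the dataset name, transform, and batch size are illustrative placeholders, and the exact API should be checked against the package documentation.

from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader
import torchvision.transforms as T

# Download one of the WILDS datasets (name chosen for illustration only)
dataset = get_dataset(dataset="camelyon17", download=True)

# In-distribution training split wrapped in a standard PyTorch-style loader
train_data = dataset.get_subset("train", transform=T.ToTensor())
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y, metadata in train_loader:
    # x: images, y: labels, metadata: domain annotations (e.g. which hospital)
    break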
Author:
Irie, Kazuki, Schmidhuber, Jürgen
The inputs and/or outputs of some neural nets are weight matrices of other neural nets. Indirect encodings or end-to-end compression of weight matrices could help to scale such approaches. Our goal is to open a discussion on this topic, starting with…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::98b0301eff829feef48552f82c845137
http://arxiv.org/abs/2112.15545
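To make the idea of neural nets whose inputs or outputs are weight matrices of other nets concrete, here is a minimal, hypothetical PyTorch sketch (layer sizes and names are illustrative, not the paper's architecture): a small generator maps a low-dimensional code to the weight matrix of a target linear layer, an indirect encoding of that layer's weights.

import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Tiny 'hypernetwork': its output is the weight matrix of another layer."""
    def __init__(self, code_dim, out_features, in_features):
        super().__init__()
        self.out_features = out_features
        self.in_features = in_features
        self.gen = nn.Linear(code_dim, out_features * in_features)

    def forward(self, z):
        # Map a low-dimensional code z to a full weight matrix
        return self.gen(z).view(self.out_features, self.in_features)

hyper = HyperNet(code_dim=8, out_features=4, in_features=16)
z = torch.randn(8)                       # compressed description of the weights
W = hyper(z)                             # generated 4 x 16 weight matrix
x = torch.randn(16)
y = torch.nn.functional.linear(x, W)     # apply the generated weights to an input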
Transformers with linearised attention ("linear Transformers") have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the '90s. However, the original FWP formulation is more general…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::18144b11403686fde8e99436c3f888f7
http://arxiv.org/abs/2106.06295
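The correspondence the abstract refers to can be illustrated with a short, simplified sketch (feature maps and normalisation omitted; tensor names are illustrative): in a linear Transformer, keys and values write rank-one (outer-product) updates into a fast weight matrix, and queries read that matrix out, which corresponds to the outer product-based FWP update the abstract mentions.

import torch

def linear_attention_as_fwp(q, k, v):
    # q, k, v: (seq_len, dim) query, key and value sequences
    seq_len, dim = q.shape
    W = torch.zeros(dim, dim)                # fast weight matrix, initially empty
    outputs = []
    for t in range(seq_len):
        W = W + torch.outer(v[t], k[t])      # write: rank-one outer-product update
        outputs.append(W @ q[t])             # read: query the fast weights
    return torch.stack(outputs)

q, k, v = (torch.randn(5, 8) for _ in range(3))
y = linear_attention_as_fwp(q, k, v)         # (5, 8) outputs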
Author:
Miladinović, Đorđe, Stanić, Aleksandar, Bauer, Stefan, Schmidhuber, Jürgen, Buhmann, Joachim M.
How to improve generative modeling by better exploiting spatial regularities and coherence in images? We introduce a novel neural network for building image generators (decoders) and apply it to variational autoencoders (VAEs). In our spatial depende…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8326c4d092510e370bfe4a7fe7dd870c
http://arxiv.org/abs/2103.08877
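For context, the decoder being proposed plugs into the standard variational autoencoder objective, the evidence lower bound (generic formulation, not the paper's specific spatial-dependency decoder):

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big),$$

where $p_\theta(x \mid z)$ is the image generator (decoder) whose internal structure the abstract proposes to change.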
Author:
Sajid, Noor, Faccio, Francesco, Da Costa, Lancelot, Parr, Thomas, Schmidhuber, Jürgen, Friston, Karl
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioural preferences when confronted wi…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::e0bff538ff871070e945428b12caeed5
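The underlying formal relation, in its generic Bayesian form (a sketch of the standard identity, not the paper's specific generative model), is that the posterior over generative-model parameters $\theta$ given observations $o$ depends on the prior:

$$p(\theta \mid o) = \frac{p(o \mid \theta)\, p(\theta)}{p(o)},$$

so two agents with the same likelihood $p(o \mid \theta)$ but different priors $p(\theta)$ reach different posteriors, and hence can exhibit different behavioural preferences in the same situation.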