Showing 1 - 10 of 8,827 for search: '"Bechtle"'
Some highlights of the physics case for running an $e^+e^-$ collider at 500 GeV and above are discussed with a particular emphasis on the experimental access to the Higgs potential via di-Higgs and (at sufficiently high energy) triple Higgs production…
External link:
http://arxiv.org/abs/2410.16191
Author:
Bechtle, Philip, Breton, Dominique, Canet, Carlos Orero, Desch, Klaus, Dreiner, Herbi, Freyermuth, Oliver, Gauld, Rhorry, Gruber, Markus, Gutiérrez, César Blanch, Hajjar, Hazem, Hamer, Matthias, Heinrichs, Jan-Eric, Irles, Adrian, Kaminski, Jochen, Klipphahn, Laney, Lupberger, Michael, Maalmi, Jihane, Pöschl, Roman, Richarz, Leonie, Schiffer, Tobias, Schwäbig, Patrick, Schürmann, Martin, Zerwas, Dirk
We present a proposal for a future light dark matter search experiment at the Electron Stretcher Accelerator ELSA in Bonn: Lohengrin. It employs the fixed-target missing-momentum-based technique for searching for dark-sector particles. The Lohengrin…
External link:
http://arxiv.org/abs/2410.10956
We introduce a method to study quantum entanglement at a future $e^+e^-$ Higgs factory (here the Future Circular Collider colliding $e^+$ and $e^-$ (FCC-ee) operating at $\sqrt{s}=240\,\mathrm{GeV}$) in the $\tau\tau$ final state. This method is focused…
External link:
http://arxiv.org/abs/2409.20239
Author:
Wulfmeier, Markus, Bloesch, Michael, Vieillard, Nino, Ahuja, Arun, Bornschein, Jorg, Huang, Sandy, Sokolov, Artem, Barnes, Matt, Desjardins, Guillaume, Bewley, Alex, Bechtle, Sarah Maria Elisabeth, Springenberg, Jost Tobias, Momchev, Nikola, Bachem, Olivier, Geist, Matthieu, Riedmiller, Martin
The majority of language model training builds on imitation learning. It covers pretraining, supervised fine-tuning, and affects the starting conditions for reinforcement learning from human feedback (RLHF). The simplicity and scalability of maximum…
External link:
http://arxiv.org/abs/2409.01369
Author:
Bruce, Jake, Dennis, Michael, Edwards, Ashley, Parker-Holder, Jack, Shi, Yuge, Hughes, Edward, Lai, Matthew, Mavalankar, Aditi, Steigerwald, Richie, Apps, Chris, Aytar, Yusuf, Bechtle, Sarah, Behbahani, Feryal, Chan, Stephanie, Heess, Nicolas, Gonzalez, Lucy, Osindero, Simon, Ozair, Sherjil, Reed, Scott, Zhang, Jingwei, Zolna, Konrad, Clune, Jeff, de Freitas, Nando, Singh, Satinder, Rocktäschel, Tim
We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text…
External link:
http://arxiv.org/abs/2402.15391
Author:
Springenberg, Jost Tobias, Abdolmaleki, Abbas, Zhang, Jingwei, Groth, Oliver, Bloesch, Michael, Lampe, Thomas, Brakel, Philemon, Bechtle, Sarah, Kapturowski, Steven, Hafner, Roland, Heess, Nicolas, Riedmiller, Martin
We show that offline actor-critic reinforcement learning can scale to large models - such as transformers - and follows similar scaling laws as supervised learning. We find that offline actor-critic algorithms can outperform strong, supervised, behavioral…
External link:
http://arxiv.org/abs/2402.05546
Author:
Lampe, Thomas, Abdolmaleki, Abbas, Bechtle, Sarah, Huang, Sandy H., Springenberg, Jost Tobias, Bloesch, Michael, Groth, Oliver, Hafner, Roland, Hertweck, Tim, Neunert, Michael, Wulfmeier, Markus, Zhang, Jingwei, Nori, Francesco, Heess, Nicolas, Riedmiller, Martin
Reinforcement learning solely from an agent's self-generated data is often believed to be infeasible for learning on real robots, due to the amount of data needed. However, if done right, agents learning from real data can be surprisingly efficient…
External link:
http://arxiv.org/abs/2312.11374
Contemporary artificial intelligence systems exhibit rapidly growing abilities accompanied by the growth of required resources, expansive datasets and corresponding investments into computing infrastructure. Although earlier successes predominantly…
External link:
http://arxiv.org/abs/2312.01939