Showing 1 - 10 of 14,547 for search: '"A, Ostermann"'
Author:
Rodriguez, Pedro Sales, Robinson, John M., Jepsen, Paul Niklas, He, Zhiyang, Duckering, Casey, Zhao, Chen, Wu, Kai-Hsin, Campo, Joseph, Bagnall, Kevin, Kwon, Minho, Karolyshyn, Thomas, Weinberg, Phillip, Cain, Madelyn, Evered, Simon J., Geim, Alexandra A., Kalinowski, Marcin, Li, Sophie H., Manovitz, Tom, Amato-Grill, Jesse, Basham, James I., Bernstein, Liane, Braverman, Boris, Bylinskii, Alexei, Choukri, Adam, DeAngelo, Robert, Fang, Fang, Fieweger, Connor, Frederick, Paige, Haines, David, Hamdan, Majd, Hammett, Julian, Hsu, Ning, Hu, Ming-Guang, Huber, Florian, Jia, Ningyuan, Kedar, Dhruv, Kornjača, Milan, Liu, Fangli, Long, John, Lopatin, Jonathan, Lopes, Pedro L. S., Luo, Xiu-Zhe, Macrì, Tommaso, Marković, Ognjen, Martínez-Martínez, Luis A., Meng, Xianmei, Ostermann, Stefan, Ostroumov, Evgeny, Paquette, David, Qiang, Zexuan, Shofman, Vadim, Singh, Anshuman, Singh, Manuj, Sinha, Nandan, Thoreen, Henry, Wan, Noel, Wang, Yiping, Waxman-Lenz, Daniel, Wong, Tak, Wurtz, Jonathan, Zhdanov, Andrii, Zheng, Laurent, Greiner, Markus, Keesling, Alexander, Gemelke, Nathan, Vuletić, Vladan, Kitagawa, Takuya, Wang, Sheng-Tao, Bluvstein, Dolev, Lukin, Mikhail D., Lukin, Alexander, Zhou, Hengyun, Cantú, Sergio H.
Realizing universal fault-tolerant quantum computation is a key goal in quantum information science. By encoding quantum information into logical qubits utilizing quantum error correcting codes, physical errors can be detected and corrected, enabling …
External link:
http://arxiv.org/abs/2412.15165
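The snippet above describes encoding quantum information redundantly so that physical errors can be detected and corrected. As a hedged illustration of that general idea only (a classical analogue, not the paper's neutral-atom architecture), the three-bit repetition code shows how redundancy exposes a single bit flip via parity checks:

```python
# Minimal sketch of error detection via redundancy: the classical
# 3-bit repetition code. Illustrative only; the paper itself concerns
# quantum error correcting codes on logical qubits.

def encode(bit):
    """Encode one logical bit into three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Parity checks between neighbouring bits localize an error."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Majority vote recovers the logical bit after at most one flip."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1                       # a single physical bit-flip error
assert syndrome(codeword) == (1, 0)    # nonzero syndrome: error detected
assert correct(codeword) == 1          # logical information survives
```

The quantum case replaces parity checks with stabilizer measurements, but the detect-and-correct structure is the same.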
Author:
Mahlau, Yannik, Schubert, Frederik, Bethmann, Konrad, Caspary, Reinhard, Lesina, Antonio Calà, Munderloh, Marco, Ostermann, Jörn, Rosenhahn, Bodo
We introduce an efficient open-source Python package for the inverse design of three-dimensional photonic nanostructures using the Finite-Difference Time-Domain (FDTD) method. Leveraging a flexible reverse-mode automatic differentiation implementation …
External link:
http://arxiv.org/abs/2412.12360
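The snippet above leans on reverse-mode automatic differentiation to make the FDTD simulation differentiable for inverse design. A toy tape-based sketch of the mechanism (illustrative only, unrelated to the package's actual API):

```python
# Minimal reverse-mode automatic differentiation: each Var records its
# parents and the local gradients, and backward() propagates the chain
# rule from the output back to every input.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        """Accumulate gradients along every path to the inputs."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(3.0), Var(2.0)
f = x * x + x * y        # f = x^2 + x*y
f.backward()
assert x.grad == 8.0     # df/dx = 2x + y
assert y.grad == 3.0     # df/dy = x
```

In an FDTD setting the same machinery differentiates through every time step of the field update, which is what makes gradient-based inverse design tractable.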
In this paper, we consider the application of exponential integrators to problems that are advection dominated, either on the entire or on a subset of the domain. In this context, we compare Leja and Krylov based methods to compute the action of exponential …
External link:
http://arxiv.org/abs/2410.12765
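The snippet above compares Leja and Krylov methods for applying matrix exponentials to vectors without ever forming exp(A). A hedged NumPy sketch of the underlying idea, using a plain truncated Taylor iteration (much simpler than the Leja or Krylov algorithms themselves, but built from the same matrix-vector products):

```python
import numpy as np

def expm_action(A, v, terms=30):
    """Approximate exp(A) @ v with a truncated Taylor series, using
    only matrix-vector products and never forming exp(A) densely."""
    result = v.copy()
    term = v.copy()
    for k in range(1, terms):
        term = A @ term / k      # k-th Taylor term: A^k v / k!
        result = result + term
    return result

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # generator of a rotation
v = np.array([1.0, 0.0])
approx = expm_action(A, v)
# exp(A) rotates v by 1 radian: exp(A) v = (cos 1, sin 1)
assert np.allclose(approx, [np.cos(1.0), np.sin(1.0)])
```

Leja and Krylov methods replace the naive Taylor polynomial with interpolation at Leja points or projection onto a Krylov subspace, which is what makes the approach robust for stiff, advection-dominated operators.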
This paper aims to delve into the rate-distortion-complexity trade-offs of modern neural video coding. Recent years have witnessed much research effort being focused on exploring the full potential of neural video coding. Conditional autoencoders have …
External link:
http://arxiv.org/abs/2410.03898
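The snippet above concerns rate-distortion trade-offs. A minimal sketch of how such a trade-off is typically scored, via the Lagrangian cost D + λR (the candidate operating points below are made up for illustration):

```python
# Hedged sketch of rate-distortion selection: among candidate
# (rate, distortion) operating points, pick the one minimizing the
# Lagrangian cost distortion + lambda * rate. Values are illustrative.

def best_operating_point(points, lmbda):
    """points: iterable of (rate_bpp, distortion_mse) pairs."""
    return min(points, key=lambda p: p[1] + lmbda * p[0])

candidates = [(0.05, 40.0), (0.10, 20.0), (0.30, 12.0), (0.90, 10.0)]
assert best_operating_point(candidates, lmbda=100.0) == (0.10, 20.0)
assert best_operating_point(candidates, lmbda=5.0) == (0.30, 12.0)
```

A larger λ penalizes bitrate more heavily and pushes the choice toward lower-rate points; complexity enters as a third axis the paper trades off against both.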
Contextualized embeddings based on large language models (LLMs) are available for various languages, but their coverage is often limited for lower resourced languages. Using LLMs for such languages is often difficult due to a high computational cost; …
External link:
http://arxiv.org/abs/2409.18193
The dual phosphorylation network provides an essential component of intracellular signaling, affecting the expression of phenotypes and cell metabolism. For particular choices of kinetic parameters, this system exhibits multistationarity, a property …
External link:
http://arxiv.org/abs/2409.16234
Author:
Donninger, Roland, Ostermann, Matthias
We consider corotational wave maps from Minkowski spacetime into the sphere and the equivariant Yang-Mills equation for all energy-supercritical dimensions. Both models have explicit self-similar finite time blowup solutions, which continue to exist …
External link:
http://arxiv.org/abs/2409.14733
In the era of high performing Large Language Models, researchers have widely acknowledged that contextual word representations are one of the key drivers in achieving top performances in downstream tasks. In this work, we investigate the degree of co…
External link:
http://arxiv.org/abs/2409.14097
Author:
Wang, Qianli, Anikina, Tatiana, Feldhus, Nils, Ostermann, Simon, Möller, Sebastian, Schmitt, Vera
Natural language explanations (NLEs) are vital for elucidating the reasoning behind large language model (LLM) decisions. Many techniques have been developed to generate NLEs using LLMs. However, like humans, LLMs might not always produce optimal NLE…
External link:
http://arxiv.org/abs/2409.07123
Prompt tuning is an efficient solution for training large language models (LLMs). However, current soft-prompt-based methods often sacrifice multi-task modularity, requiring the training process to be fully or partially repeated for each newly added …
External link:
http://arxiv.org/abs/2408.01119
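The snippet above concerns soft-prompt tuning, where small trainable prompt vectors are prepended to frozen token embeddings so that only a tiny per-task parameter set is trained. A hedged NumPy sketch of that setup (shapes and names are illustrative, not this paper's method):

```python
import numpy as np

# Illustrative soft-prompt setup: a trainable prompt matrix is
# prepended to frozen token embeddings; only the prompt is updated
# per task, while the LLM's weights stay shared across tasks.

rng = np.random.default_rng(0)
d_model, prompt_len, seq_len = 16, 4, 10

token_embeddings = rng.normal(size=(seq_len, d_model))   # frozen
soft_prompt = rng.normal(size=(prompt_len, d_model))     # trainable

model_input = np.concatenate([soft_prompt, token_embeddings], axis=0)
assert model_input.shape == (prompt_len + seq_len, d_model)

# Per task, only prompt_len * d_model parameters are trained.
assert soft_prompt.size == 64
```

The modularity concern in the snippet arises because a soft prompt trained for one task combination generally cannot be reused when a new task is added, motivating more compositional designs.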