Showing 1 - 10 of 688 for search: '"Fortuin, P."'
Author:
Flöge, Klemens, Udayakumar, Srisruthi, Sommer, Johanna, Piraud, Marie, Kesselheim, Stefan, Fortuin, Vincent, Günnemann, Stephan, van der Weg, Karel J, Gohlke, Holger, Bazarova, Alina, Merdivan, Erinc
Recent AI advances have enabled multi-modal systems to model and translate diverse information spaces. Extending beyond text and vision, we introduce OneProt, a multi-modal AI for proteins that integrates structural, sequence, alignment, and binding …
External link:
http://arxiv.org/abs/2411.04863
Deep neural network ensembles are powerful tools for uncertainty quantification, which have recently been re-interpreted from a Bayesian perspective. However, current methods inadequately leverage second-order information of the loss landscape, despite …
External link:
http://arxiv.org/abs/2411.01887
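The snippet above concerns Bayesian uncertainty quantification with deep ensembles. As a rough point of reference only (the paper's second-order, loss-landscape-aware method is not reproduced here), a plain deep ensemble for regression can be sketched in PyTorch; the data and network below are toy placeholders.

```python
import torch
import torch.nn as nn

# Toy 1D regression data (placeholder for a real dataset).
torch.manual_seed(0)
x = torch.linspace(-3, 3, 200).unsqueeze(-1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

def make_net():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

# Train several independently initialised networks (the "deep ensemble").
ensemble = []
for seed in range(5):
    torch.manual_seed(seed)
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    ensemble.append(net)

# Predictive mean and epistemic variance from disagreement across members.
with torch.no_grad():
    preds = torch.stack([net(x) for net in ensemble])  # (members, N, 1)
    mean, var = preds.mean(0), preds.var(0)
print(mean.shape, var.mean().item())
```

The spread across ensemble members is the epistemic uncertainty signal that Bayesian reinterpretations of ensembles build on.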
State-of-the-art computer vision tasks, like monocular depth estimation (MDE), rely heavily on large, modern Transformer-based architectures. However, their application in safety-critical domains demands reliable predictive performance and uncertainty …
External link:
http://arxiv.org/abs/2409.17085
Author:
van Ommen, H. B., van de Stolpe, G. L., Demetriou, N., Beukers, H. K. C., Yun, J., Fortuin, T. R. J., Iuliano, M., Montblanch, A. R. -P., Hanson, R., Taminiau, T. H.
The ability to sense and control nuclear spins near solid-state defects might enable a range of quantum technologies. Dynamically Decoupled Radio-Frequency (DDRF) control offers a high degree of design flexibility and long electron-spin coherence times …
External link:
http://arxiv.org/abs/2409.13610
Knowing which features of a multivariate time series to measure and when is a key task in medicine, wearables, and robotics. Better acquisition policies can reduce costs while maintaining or even improving the performance of downstream predictors. In …
External link:
http://arxiv.org/abs/2407.13429
Laplace approximations are popular techniques for endowing deep networks with epistemic uncertainty estimates as they can be applied without altering the predictions of the trained network, and they scale to large models and datasets. While the choice …
External link:
http://arxiv.org/abs/2407.13711
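The snippet above describes post-hoc Laplace approximations. A minimal sketch of the general recipe, assuming a last-layer, diagonal empirical-Fisher variant on a toy classifier (the specific design choices studied in the paper are not reproduced), could look like this:

```python
import torch
import torch.nn as nn

# Toy binary classifier; feature extractor and head stand in for a real network.
torch.manual_seed(0)
x = torch.randn(256, 2)
y = (x[:, 0] * x[:, 1] > 0).long()
features = nn.Sequential(nn.Linear(2, 32), nn.ReLU())
head = nn.Linear(32, 2)
model = nn.Sequential(features, head)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):  # ordinary MAP training; its predictions stay untouched
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()

# Post-hoc diagonal Laplace on the last layer: approximate the Hessian of the
# negative log-posterior by per-example squared gradients (empirical Fisher)
# plus the prior precision.
prior_precision = 1.0
fisher = torch.zeros(sum(p.numel() for p in head.parameters()))
for xi, yi in zip(x, y):
    model.zero_grad()
    nn.functional.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    fisher += torch.cat([p.grad.flatten() for p in head.parameters()]) ** 2
posterior_var = 1.0 / (fisher + prior_precision)

# Predictive distribution by sampling last-layer weights around the MAP estimate.
map_flat = torch.cat([p.detach().flatten() for p in head.parameters()])
with torch.no_grad():
    feats = features(x)
    probs = []
    for _ in range(30):
        w = map_flat + posterior_var.sqrt() * torch.randn_like(map_flat)
        W, b = w[:64].view(2, 32), w[64:]  # shapes of head = nn.Linear(32, 2)
        probs.append(torch.softmax(feats @ W.T + b, dim=-1))
print(torch.stack(probs).mean(0).shape)  # averaged predictive probabilities
```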
Author:
Kristiadi, Agustinus, Strieth-Kalthoff, Felix, Subramanian, Sriram Ganapathi, Fortuin, Vincent, Poupart, Pascal, Pleiss, Geoff
Bayesian optimization (BO) is an integral part of automated scientific discovery -- the so-called self-driving lab -- where human inputs are ideally minimal or at least non-blocking. However, scientists often have strong intuition, and thus human feedback …
External link:
http://arxiv.org/abs/2406.06459
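The snippet above refers to Bayesian optimization for self-driving labs. A bare-bones BO loop with a Gaussian-process surrogate and expected improvement, without the expert-feedback mechanism the paper discusses (the objective and search space here are hypothetical), might look like:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Black-box objective standing in for an expensive experiment (hypothetical).
def objective(x):
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))           # a few initial evaluations
y = objective(X).ravel()
candidates = np.linspace(-2, 2, 500).reshape(-1, 1)

for _ in range(15):                           # basic BO loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    # Expected-improvement acquisition; expert preferences could reweight this.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)], "best value:", y.max())
```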
Fine-tuned Large Language Models (LLMs) often suffer from overconfidence and poor calibration, particularly when fine-tuned on small datasets. To address these challenges, we propose a simple combination of Low-Rank Adaptation (LoRA) with Gaussian Stochastic Weight Averaging …
External link:
http://arxiv.org/abs/2405.03425
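The snippet above proposes combining LoRA with Gaussian Stochastic Weight Averaging. A toy sketch of the underlying idea, assuming a diagonal Gaussian over low-rank adapter weights collected during the tail of training and sampled from at test time (the LoRALinear layer and data are illustrative stand-ins, not the paper's LLM setup), might read:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (toy LoRA)."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.requires_grad_(False)
        self.A = nn.Parameter(0.01 * torch.randn(rank, d_in))
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

torch.manual_seed(0)
x = torch.randn(512, 16)
y = (x.sum(-1) > 0).long()
model = nn.Sequential(LoRALinear(16, 32), nn.ReLU(), nn.Linear(32, 2))
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=0.05)

# Gaussian SWA: track running first and second moments of the trainable
# (adapter + head) weights over the tail of fine-tuning.
mean = [torch.zeros_like(p) for p in trainable]
sq_mean = [torch.zeros_like(p) for p in trainable]
n = 0
for step in range(400):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()
    if step >= 200 and step % 10 == 0:  # collect late-training snapshots
        n += 1
        for m, s, p in zip(mean, sq_mean, trainable):
            m += (p.detach() - m) / n
            s += (p.detach() ** 2 - s) / n

# Diagonal posterior variance, then average predictions over weight samples.
var = [torch.clamp(s - m ** 2, min=1e-8) for m, s in zip(mean, sq_mean)]
probs = []
with torch.no_grad():
    for _ in range(10):
        for p, m, v in zip(trainable, mean, var):
            p.copy_(m + v.sqrt() * torch.randn_like(m))
        probs.append(torch.softmax(model(x), dim=-1))
print(torch.stack(probs).mean(0).shape)
```

Averaging predictions over weight samples, rather than committing to a single fine-tuned adapter, is what gives the calibration benefit the abstract alludes to.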
Author:
Manduchi, Laura, Pandey, Kushagra, Bamler, Robert, Cotterell, Ryan, Däubener, Sina, Fellenz, Sophie, Fischer, Asja, Gärtner, Thomas, Kirchler, Matthias, Kloft, Marius, Li, Yingzhen, Lippert, Christoph, de Melo, Gerard, Nalisnick, Eric, Ommer, Björn, Ranganath, Rajesh, Rudolph, Maja, Ullrich, Karen, Van den Broeck, Guy, Vogt, Julia E, Wang, Yixin, Wenzel, Florian, Wood, Frank, Mandt, Stephan, Fortuin, Vincent
The field of deep generative modeling has grown rapidly and consistently over the years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning paradigms, recent large-scale generative models …
External link:
http://arxiv.org/abs/2403.00025
Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naïvely deploy on consumer hardware. While much work has focused on diff…
External link:
http://arxiv.org/abs/2402.15978
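The snippet above concerns neural network sparsification. As a generic baseline illustration rather than the paper's approach, global magnitude pruning over the linear layers of a toy model can be written as:

```python
import torch
import torch.nn as nn

# Toy dense network standing in for a larger model.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def global_magnitude_prune(model, sparsity=0.8):
    """Zero out the smallest-magnitude weights across all linear layers."""
    weights = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    threshold = torch.quantile(all_mags, sparsity)
    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).float()
            w.mul_(mask)        # apply the sparsity mask in place
            masks.append(mask)  # keep masks to re-apply after any fine-tuning
    return masks

masks = global_magnitude_prune(model, sparsity=0.8)
kept = sum(int(m.sum()) for m in masks)
total = sum(m.numel() for m in masks)
print(f"kept {kept}/{total} weights ({kept / total:.0%})")
```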