Showing 1 - 10 of 4,360 results for the search: '"Seong, Joon"'
Membership inference attacks (MIA) attempt to determine whether a given data sample was part of a model's training set. MIA has become relevant in recent years, following the rapid development of large language models (LLMs). Many are concerned about… (a minimal loss-threshold attack is sketched below the link)
External link:
http://arxiv.org/abs/2411.00154
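
Since the abstract is cut off, a minimal sketch of the standard loss-threshold membership inference baseline may help as context. The synthetic loss distributions and the 1.0 threshold are hypothetical illustrations, not this paper's attack.

import numpy as np

def loss_threshold_mia(losses, threshold):
    # Predict "member" when the model's loss on a sample is below the
    # threshold: training samples tend to incur lower loss than unseen ones.
    return losses < threshold

rng = np.random.default_rng(0)
member_losses = rng.exponential(0.5, size=1000)     # hypothetical train-set losses
nonmember_losses = rng.exponential(1.5, size=1000)  # hypothetical held-out losses

tpr = loss_threshold_mia(member_losses, 1.0).mean()
fpr = loss_threshold_mia(nonmember_losses, 1.0).mean()
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")  # the attack works when TPR >> FPR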
Open-vocabulary segmentation (OVS) has gained attention for its ability to recognize a broader range of classes. However, OVS models show significant performance drops when applied to unseen domains beyond their original training data. Fine-tuning…
External link:
http://arxiv.org/abs/2410.11536
While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too much on formalism and solutionism, focusing more on mathematical soundness than on user needs. We propose an alternative to this bottom-up…
External link:
http://arxiv.org/abs/2409.16978
Training a diverse ensemble of models has several practical applications, such as providing candidates for model selection with better out-of-distribution (OOD) generalization and enabling the detection of OOD samples via Bayesian principles. An existing… (a toy disagreement-based OOD score is sketched below the link)
External link:
http://arxiv.org/abs/2409.16797
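
As context for the truncated abstract, here is a minimal sketch of how ensemble disagreement can serve as an OOD score. The member predictions are hypothetical, and the paper's actual training method is not reproduced here.

import numpy as np

def ensemble_disagreement(member_preds):
    # Fraction of ensemble members that disagree with the majority vote.
    # A diverse ensemble tends to disagree more on out-of-distribution inputs.
    votes = np.bincount(member_preds)
    return 1.0 - votes.max() / len(member_preds)

in_dist = np.array([2, 2, 2, 2, 2])  # hypothetical argmax predictions of 5 members
ood = np.array([0, 2, 1, 2, 0])
print(ensemble_disagreement(in_dist))  # 0.0 -> keep the prediction
print(ensemble_disagreement(ood))      # 0.6 -> flag as possibly OOD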
Error-correcting codes (ECCs) are indispensable for reliable transmission in communication systems. Recent advances in deep learning have catalyzed the exploration of ECC decoders based on neural networks. Among these, transformer-based neural… (a classical syndrome-decoding baseline is sketched below the link)
External link:
http://arxiv.org/abs/2405.01033
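
As context, a sketch of the classical syndrome decoder for the Hamming(7,4) code, the kind of algebraic baseline that neural and transformer-based decoders are trained to match or beat. This is textbook material, not the paper's architecture.

import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column i is i in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome_decode(received):
    # The syndrome H @ r (mod 2), read as a binary number, is the 1-based
    # position of a single flipped bit (0 means the word is a codeword).
    r = received.copy()
    s = H @ r % 2
    pos = 4 * s[0] + 2 * s[1] + s[2]
    if pos:
        r[pos - 1] ^= 1
    return r

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # valid: H @ c % 2 == 0
noisy = codeword.copy()
noisy[4] ^= 1                               # flip one bit on the channel
print(np.array_equal(syndrome_decode(noisy), codeword))  # True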
Retrieval-augmented generation (RAG) mitigates many problems of fully parametric language models, such as temporal degradation, hallucinations, and lack of grounding. In RAG, the model's knowledge can be updated from documents provided in context… (a minimal retrieval-and-prompt sketch follows the link)
External link:
http://arxiv.org/abs/2404.16032
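
A minimal sketch of the RAG pattern the abstract describes: retrieve documents, place them in context, and let the model answer from them. The toy lexical retriever and the prompt template are assumptions for illustration, not the paper's setup.

def retrieve(question, corpus, k=2):
    # Toy lexical retriever: rank documents by word overlap with the question.
    q = set(question.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_rag_prompt(question, documents):
    # Place retrieved documents in context so the model answers from them
    # rather than from possibly stale parametric knowledge.
    context = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(documents))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

corpus = ["The 2024 release renamed the flag to --fast.",   # hypothetical docs
          "Older versions used the flag --quick.",
          "The tool is configured via a TOML file."]
question = "Which flag enables fast mode?"
print(build_rag_prompt(question, retrieve(question, corpus)))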
It has recently been conjectured that the neural network solution sets reachable via stochastic gradient descent (SGD) are convex up to permutation invariances (Entezari et al., 2022). This means that a linear path can connect two independent solutions… (a loss-barrier sketch follows the link)
External link:
http://arxiv.org/abs/2403.07968
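
A sketch of the loss-barrier measurement behind the conjecture: evaluate the loss along the linear path between two solutions and report the rise above the straight line between the endpoint losses. The 1-D loss below is a hypothetical stand-in; for real networks the conjecture concerns the barrier after permuting one model's hidden units to align with the other's.

import numpy as np

def loss_barrier(loss_fn, theta_a, theta_b, n_points=101):
    # Loss along the path (1-a)*theta_a + a*theta_b, minus the linear baseline.
    alphas = np.linspace(0.0, 1.0, n_points)
    path = [loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas]
    base = [(1 - a) * path[0] + a * path[-1] for a in alphas]
    return max(p - b for p, b in zip(path, base))

# Two equally good minima at +1 and -1 (a sign symmetry, analogous to a
# permutation symmetry); the linear path between them crosses a barrier.
loss_fn = lambda theta: float(((theta ** 2 - 1.0) ** 2).sum())
print(loss_barrier(loss_fn, np.array([1.0]), np.array([-1.0])))  # 1.0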
As large language models (LLMs) are increasingly deployed in user-facing applications, building trust and maintaining safety by accurately quantifying a model's confidence in its predictions becomes ever more important. However, finding effective ways… (a token-probability confidence sketch follows the link)
External link:
http://arxiv.org/abs/2403.05973
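
As context, one simple confidence proxy: the length-normalized probability the model assigned to its own output tokens. The log-probabilities below are made up, and the paper's proposed method is not reproduced here.

import math

def sequence_confidence(token_logprobs):
    # Length-normalized sequence probability, exp(mean log p): a common,
    # if imperfect, proxy for a model's confidence in a generated answer.
    return math.exp(sum(token_logprobs) / len(token_logprobs))

confident = [-0.05, -0.10, -0.02]  # hypothetical per-token log-probabilities
hesitant = [-1.20, -2.30, -0.90]
print(f"{sequence_confidence(confident):.2f}")  # ~0.94
print(f"{sequence_confidence(hesitant):.2f}")   # ~0.23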
Uncertainty quantification, once a singular task, has evolved into a spectrum of tasks, including abstained prediction, out-of-distribution detection, and aleatoric uncertainty quantification. The latest goal is disentanglement: the construction of multiple… (an entropy-decomposition sketch follows the link)
External link:
http://arxiv.org/abs/2402.19460
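
A sketch of the entropy decomposition that motivates disentanglement: an ensemble's total predictive entropy splits into an aleatoric and an epistemic part, each consumed by a different downstream task. The ensemble outputs below are hypothetical.

import numpy as np

def disentangle(member_probs):
    # member_probs: (n_members, n_classes) softmax outputs for one input.
    # Total predictive entropy = aleatoric (expected entropy of individual
    # members) + epistemic (their mutual information).
    mean_p = member_probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()
    aleatoric = -(member_probs * np.log(member_probs + 1e-12)).sum(axis=1).mean()
    return {"total": total, "aleatoric": aleatoric, "epistemic": total - aleatoric}

# An input that all members agree is genuinely ambiguous:
ambiguous = np.array([[0.5, 0.5]] * 3)
print(disentangle(ambiguous))  # high aleatoric, near-zero epistemic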
Accurate uncertainty estimation is vital to trustworthy machine learning, yet uncertainties typically have to be learned anew for each task. This work introduces the first pretrained uncertainty modules for vision models. Similar to standard pretraining… (a toy feature-head sketch follows the link)
External link:
http://arxiv.org/abs/2402.16569
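
A toy sketch of the general pattern, assuming a small head on frozen backbone features that emits one uncertainty score per input. The weights here are random stand-ins for what would be pretrained; none of this is the paper's actual architecture.

import numpy as np

class UncertaintyHead:
    # A small module on top of frozen backbone features that outputs one
    # uncertainty score per input; pretrained once, then reused across tasks.
    def __init__(self, feat_dim, rng):
        self.w = rng.normal(scale=0.1, size=feat_dim)  # would be pretrained
        self.b = 0.0

    def __call__(self, features):
        z = features @ self.w + self.b
        return np.log1p(np.exp(z))  # softplus keeps the score positive

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 16))        # hypothetical frozen features
print(UncertaintyHead(16, rng)(features))  # one uncertainty score per input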