Showing 1 - 10 of 1,556 for the search: '"A, Fel"'
Author:
Colin, Julien; Goetschalckx, Lore; Fel, Thomas; Boutin, Victor; Gopal, Jay; Serre, Thomas; Oliver, Nuria
Much of the research on the interpretability of deep neural networks has focused on studying the visual features that maximally activate individual neurons. However, recent work has cast doubts on the usefulness of such local representations for understanding…
External link:
http://arxiv.org/abs/2411.03993
Author:
Moayeri, Mazda; Balachandran, Vidhisha; Chandrasekaran, Varun; Yousefi, Safoora; Fel, Thomas; Feizi, Soheil; Nushi, Besmira; Joshi, Neel; Vineet, Vibhav
With models getting stronger, evaluations have grown more complex, testing multiple skills in one benchmark and even in the same instance at once. However, skill-wise performance is obscured when inspecting aggregate accuracy, under-utilizing the rich…
External link:
http://arxiv.org/abs/2410.13826
Despite the growing use of deep neural networks in safety-critical decision-making, their inherent black-box nature hinders transparency and interpretability. Explainable AI (XAI) methods have thus emerged to understand a model's internal workings…
External link:
http://arxiv.org/abs/2410.01482
Published in:
Conference on Neural Information Processing Systems (NeurIPS), Dec 2024
Recent studies suggest that deep learning models' inductive bias towards favoring simpler features may be one of the sources of shortcut learning. Yet, there has been limited focus on understanding the complexity of the myriad features that models learn…
External link:
http://arxiv.org/abs/2407.06076
Author:
Fel, Leonid G.
We consider numerical semigroups $S_3 = \langle d_1, d_2, d_3 \rangle$, minimally generated by three positive integers. We revisit the Wilf question in $S_3$ and, making use of identities for degrees of syzygies of such semigroups, give a short proof of…
External link:
http://arxiv.org/abs/2406.13580
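For readers unfamiliar with the objects above: the Wilf question asks whether F + 1 <= e * n holds for every numerical semigroup S, where F is the Frobenius number (the largest integer not in S), e is the embedding dimension, and n is the number of elements of S below F. The Python sketch below only illustrates these definitions on a small three-generator example, not the paper's proof; the search bound is a crude over-estimate chosen for simplicity.

    from math import gcd
    from functools import reduce

    def wilf_data(gens):
        # Frobenius number F, embedding dimension e (assumed here: the given
        # generators are minimal), and n = #{s in S : s < F} for S = <gens>.
        assert reduce(gcd, gens) == 1, "need gcd 1 for a numerical semigroup"
        bound = reduce(lambda a, b: a * b, gens)  # crude upper bound on F
        in_s = [False] * (bound + 1)
        in_s[0] = True  # 0 is always in the semigroup
        for x in range(1, bound + 1):
            in_s[x] = any(x >= g and in_s[x - g] for g in gens)
        frob = max(x for x in range(bound + 1) if not in_s[x])
        return frob, len(gens), sum(in_s[:frob])

    # Example: S_3 = <5, 7, 11> has F = 13, e = 3, n = 6, so 14 <= 18 holds.
    F, e, n = wilf_data([5, 7, 11])
    print(F, e, n, F + 1 <= e * n)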
Author:
Boutin, Victor; Mukherji, Rishav; Agrawal, Aditya; Muzellec, Sabine; Fel, Thomas; Serre, Thomas; VanRullen, Rufin
Humans can effortlessly draw new categories from a single exemplar, a feat that has long posed a challenge for generative models. However, this gap has started to close with recent advances in diffusion models. This one-shot drawing task requires powerful…
External link:
http://arxiv.org/abs/2406.06079
Efforts to decode neural network vision models necessitate a comprehensive grasp of both the spatial and semantic facets governing feature responses within images. Most research has primarily centered around attribution methods, which provide explanations…
External link:
http://arxiv.org/abs/2402.10039
Deep-learning models can extract a rich assortment of features from data. Which features a model uses depends not only on predictivity -- how reliably a feature indicates training-set labels -- but also on availability -- how easily the…
External link:
http://arxiv.org/abs/2310.16228
Author:
Jarosław Kozak, Stanisław Fel
Published in:
Scientific Reports, Vol 14, Iss 1, Pp 1-10 (2024)
The article aims to determine the sociodemographic factors associated with the level of trust in artificial intelligence (AI), based on cross-sectional research conducted in late 2023 and early 2024 on a sample of 2098 students in Poland (108…
External link:
https://doaj.org/article/671994b7b3d941af8975d6db237e44c6
Attribution methods correspond to a class of explainability (XAI) methods that aim to assess how individual inputs contribute to a model's decision-making process. We have identified a significant limitation in one type of attribution method, known as…
External link:
http://arxiv.org/abs/2307.09591
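To make concrete what an attribution method computes (a generic sketch of the idea, not the specific method analyzed in the paper above), a minimal gradient-times-input saliency map in PyTorch could look as follows; the toy model and random input are placeholders chosen for this sketch.

    import torch

    def gradient_x_input(model, x, target_class):
        # First-order attribution: elementwise contribution of each input
        # component to the target-class score.
        x = x.clone().detach().requires_grad_(True)
        score = model(x)[0, target_class]  # scalar logit for one class
        score.backward()                   # fills x.grad with d(score)/dx
        return (x.grad * x).detach()       # gradient-times-input map

    # Placeholder model and input, just to show the call shape:
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.randn(1, 3, 32, 32)
    attribution = gradient_x_input(model, x, target_class=3)
    print(attribution.shape)  # matches the input: torch.Size([1, 3, 32, 32])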