Showing 1 - 10 of 1,519
for search: '"Noble, William"'
Standard multiple testing procedures are designed to report a list of discoveries, or suspected false null hypotheses, given the hypotheses' p-values or test scores. Recently there has been a growing interest in enhancing such procedures by combining …
External link:
http://arxiv.org/abs/2411.15771
Machine learning (ML) models are powerful tools for detecting complex patterns within data, yet their "black box" nature limits their interpretability, hindering their use in critical domains like healthcare and finance. To address this challenge, in …
External link:
http://arxiv.org/abs/2408.17016
The complexity of deep neural networks (DNNs) makes them powerful but also makes them challenging to interpret, hindering their applicability in error-intolerant domains. Existing methods attempt to reason about the internal mechanism of DNNs by identifying …
External link:
http://arxiv.org/abs/2309.15319
The competition-based approach to controlling the false discovery rate (FDR) recently rose to prominence when Barber and Candès, generalizing it to sequential hypothesis testing, used it as part of their knockoff filter. Control of the FDR implies that …
External link:
http://arxiv.org/abs/2302.11837
Author:
Zheng, Suchen, Thakkar, Nitya, Harris, Hannah L., Liu, Susanna, Zhang, Megan, Gerstein, Mark, Aiden, Erez Lieberman, Rowley, M. Jordan, Noble, William Stafford, Gürsoy, Gamze, Singh, Ritambhara
Published in:
In iScience 17 May 2024 27(5)
Author:
Mar, Daniel, Babenko, Ilona M., Zhang, Ran, Noble, William Stafford, Denisenko, Oleg, Vaisar, Tomas, Bomsztyk, Karol
Published in:
In Laboratory Investigation January 2024 104(1)
Recently, Barber and Candès laid the theoretical foundation for a general framework for false discovery rate (FDR) control based on the notion of "knockoffs." A closely related FDR control methodology has long been employed in the analysis of mass …
External link:
http://arxiv.org/abs/2011.11939
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier. Unfortunately, recent …
External link:
http://arxiv.org/abs/2002.00526
Academic article