Showing 1 - 10 of 127 for search: '"Lundberg, Scott"'
Author:
Bubeck, Sébastien, Chandrasekaran, Varun, Eldan, Ronen, Gehrke, Johannes, Horvitz, Eric, Kamar, Ece, Lee, Peter, Lee, Yin Tat, Li, Yuanzhi, Lundberg, Scott, Nori, Harsha, Palangi, Hamid, Ribeiro, Marco Tulio, Zhang, Yi
Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest mo…
External link:
http://arxiv.org/abs/2303.12712
Author:
Paranjape, Bhargavi, Lundberg, Scott, Singh, Sameer, Hajishirzi, Hannaneh, Zettlemoyer, Luke, Ribeiro, Marco Tulio
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings by generating intermediate chain of thought (CoT) reasoning steps. Further, each reasoning step can rely on external tools to support computation beyond the cor…
External link:
http://arxiv.org/abs/2303.09014
Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge. We introduce AdaVision, an interactive process for testin…
External link:
http://arxiv.org/abs/2212.02774
Current approaches for fixing systematic problems in NLP models (e.g. regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts. In contrast, humans often provide corrections to each other through natural…
External link:
http://arxiv.org/abs/2211.03318
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both a theoretical and computational standpoint. We disentangle this complexity into two factors: (1) the ap…
External link:
http://arxiv.org/abs/2207.07605
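To illustrate why Shapley-value estimation is computationally complex, the following self-contained sketch computes exact Shapley values by enumerating all feature coalitions, removing features by replacing them with a baseline value (one common convention; the paper above discusses several). The toy model and baseline are hypothetical, not taken from the paper; the enumeration is exponential in the number of features, which is precisely why practical methods rely on approximation.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values via coalition enumeration.

    Features outside a coalition S are replaced with their baseline
    value ('removal by replacement'). Cost grows as 2^(n-1) model
    evaluations per feature, hence exact computation is infeasible
    for more than a handful of features.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight for coalitions of this size.
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                in_S = set(S)
                with_i = [x[j] if j in in_S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in in_S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical additive toy model: for additive functions the Shapley
# value of each feature equals its individual contribution.
model = lambda x: 2.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
print(shapley_values(model, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))
# -> [2.0, 1.0, 0.0]
```

For the additive model above the attributions recover the coefficients exactly; for models with feature interactions, the coalition weights distribute each interaction's credit among the participating features.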
Local feature attribution methods are increasingly used to explain complex machine learning models. However, current methods are limited because they are extremely expensive to compute or are not capable of explaining a distributed series of models w…
External link:
http://arxiv.org/abs/2105.00108
Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can us…
External link:
http://arxiv.org/abs/2103.00370
Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another. We describe a new unified class of methods, removal-based explanations, that ar…
External link:
http://arxiv.org/abs/2011.14878
Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another. We examine the literature and find that many methods are based on a shared prin…
External link:
http://arxiv.org/abs/2011.03623
Many existing approaches for estimating feature importance are problematic because they ignore or hide dependencies among features. A causal graph, which encodes the relationships among input variables, can aid in assigning feature importance. Howeve…
External link:
http://arxiv.org/abs/2010.14592