Showing 1 - 10 of 114 for search: '"Naik, Mayur"'
Neurosymbolic learning has emerged as a promising paradigm to incorporate symbolic reasoning into deep learning models. However, existing frameworks are limited in scalability with respect to both the training data and the complexity of symbolic programs…
External link:
http://arxiv.org/abs/2410.03348
Concept-based interpretability methods offer a lens into the internals of foundation models by decomposing their embeddings into high-level concepts. These concept representations are most useful when they are compositional, meaning that the individual…
External link:
http://arxiv.org/abs/2406.18534
Authors:
Solko-Breslin, Alaia, Choi, Seewon, Li, Ziyang, Velingker, Neelay, Alur, Rajeev, Naik, Mayur, Wong, Eric
Many computational tasks can be naturally expressed as a composition of a DNN followed by a program written in a traditional programming language or an API call to an LLM. We call such composites "neural programs" and focus on the problem of learning…
External link:
http://arxiv.org/abs/2406.06246
Authors:
Wu, Yinjun, Keoliya, Mayank, Chen, Kan, Velingker, Neelay, Li, Ziyang, Getzen, Emily J, Long, Qi, Naik, Mayur, Parikh, Ravi B, Wong, Eric
Designing faithful yet accurate AI models is challenging, particularly in the field of individual treatment effect estimation (ITE). ITE prediction models deployed in critical settings such as healthcare should ideally be (i) accurate, and (ii) provi…
External link:
http://arxiv.org/abs/2406.00611
Software is prone to security vulnerabilities. Program analysis tools to detect them have limited effectiveness in practice due to their reliance on human-labeled specifications. Large language models (or LLMs) have shown impressive code generation capabilities…
External link:
http://arxiv.org/abs/2405.17238
While automated vulnerability detection techniques have made promising progress in detecting security vulnerabilities, their scalability and applicability remain challenging. The remarkable performance of Large Language Models (LLMs), such as GPT-4 a…
External link:
http://arxiv.org/abs/2311.16169
We introduce a novel approach for inferring natural preconditions from code. Our technique produces preconditions of high quality in terms of both correctness (modulo a test generator) and naturalness. Prior works generate preconditions from scratch…
External link:
http://arxiv.org/abs/2310.02154
Finding errors in machine learning applications requires a thorough exploration of their behavior over data. Existing approaches used by practitioners are often ad hoc and lack the abstractions needed to scale this process. We present TorchQL, a programming…
External link:
http://arxiv.org/abs/2308.06686
It is well known that real-world changes constituting distribution shift adversely affect model performance. How to characterize those changes in an interpretable manner is poorly understood. Existing techniques to address this problem take the form…
External link:
http://arxiv.org/abs/2305.16308
Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable…
External link:
http://arxiv.org/abs/2305.03742