Showing 1 - 10 of 72 for search: '"Ribeiro, Marco Tulio"'
Author:
Buçinca, Zana, Pham, Chau Minh, Jakesch, Maurice, Ribeiro, Marco Tulio, Olteanu, Alexandra, Amershi, Saleema
While demands for change and accountability for harmful AI consequences mount, foreseeing the downstream effects of deploying AI systems remains a challenging task. We developed AHA! (Anticipating Harms of AI), a generative framework to assist AI practitioners…
External link: http://arxiv.org/abs/2306.03280
Even when aggregate accuracy is high, state-of-the-art NLP models often fail systematically on specific subgroups of data, resulting in unfair outcomes and eroding user trust. Additional data collection may not help in addressing these weaknesses, as…
External link: http://arxiv.org/abs/2305.17804
Author:
Khani, Fereshte, Ribeiro, Marco Tulio
Despite substantial advancements, Natural Language Processing (NLP) models often require post-training adjustments to enforce business rules, rectify undesired behavior, and align with user values. These adjustments involve operationalizing "concepts"…
External link: http://arxiv.org/abs/2305.12219
Large language models are becoming increasingly pervasive and ubiquitous in society via deployment in sociotechnical systems. Yet these language models, be it for classification or generation, have been shown to be biased and behave irresponsibly…
External link: http://arxiv.org/abs/2304.09991
Author:
Bubeck, Sébastien, Chandrasekaran, Varun, Eldan, Ronen, Gehrke, Johannes, Horvitz, Eric, Kamar, Ece, Lee, Peter, Lee, Yin Tat, Li, Yuanzhi, Lundberg, Scott, Nori, Harsha, Palangi, Hamid, Ribeiro, Marco Tulio, Zhang, Yi
Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model…
External link: http://arxiv.org/abs/2303.12712
Author:
Paranjape, Bhargavi, Lundberg, Scott, Singh, Sameer, Hajishirzi, Hannaneh, Zettlemoyer, Luke, Ribeiro, Marco Tulio
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings by generating intermediate chain of thought (CoT) reasoning steps. Further, each reasoning step can rely on external tools to support computation beyond the core…
External link: http://arxiv.org/abs/2303.09014
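The idea this abstract describes, interleaving chain-of-thought reasoning steps with calls to external tools, can be sketched roughly as below. This is not the authors' implementation; the prompt format, the `[calc: ...]` tool syntax, and the `generate_step` placeholder are assumptions for illustration.

```python
import ast
import operator as op
import re

# Minimal safe arithmetic evaluator standing in for an external "calculator" tool.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calc(expr: str) -> str:
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expr, mode="eval").body))

TOOL_CALL = re.compile(r"\[calc:\s*([^\]]+)\]")

def solve(question: str, generate_step) -> str:
    """Interleave model-generated reasoning steps with tool execution.

    `generate_step(prompt)` is a placeholder for an LLM call that returns the
    next chain-of-thought line; it is an assumption, not a real API.
    """
    prompt = question
    for _ in range(8):  # cap the number of reasoning steps
        step = generate_step(prompt)
        # If the step requests the calculator, run it and splice the result in,
        # so later steps can condition on the tool output.
        step = TOOL_CALL.sub(lambda m: calc(m.group(1)), step)
        prompt += "\n" + step
        if step.startswith("Answer:"):
            return step
    return prompt
```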
The in-context learning capabilities of LLMs like GPT-3 allow annotators to customize an LLM to their specific tasks with a small number of examples. However, users tend to include only the most obvious patterns when crafting examples, resulting in…
External link: http://arxiv.org/abs/2302.07346
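For readers unfamiliar with the setup this abstract refers to, a minimal sketch of few-shot in-context prompting follows; the sentiment task, labels, and prompt format are illustrative assumptions, not taken from the paper.

```python
# Few-shot in-context learning: the task is specified purely by a handful of
# demonstrations prepended to the query string sent to the model.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this blender.", "negative"),
]

def build_prompt(query: str) -> str:
    lines = ["Classify the sentiment of each review."]
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt("The battery died after two days."))
# The resulting string would be sent to an LLM completion endpoint. If the
# demonstrations cover only the most obvious patterns, the induced behavior
# stays underspecified on edge cases, which is the concern the abstract raises.
```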
Author:
Ilharco, Gabriel, Ribeiro, Marco Tulio, Wortsman, Mitchell, Gururangan, Suchin, Schmidt, Ludwig, Hajishirzi, Hannaneh, Farhadi, Ali
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems. In this work, we propose a new paradigm…
External link: http://arxiv.org/abs/2212.04089
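The paradigm proposed in this paper is built on task vectors, the element-wise difference between fine-tuned and pre-trained weights. The sketch below is an illustrative reconstruction of that idea, not the authors' code, and the PyTorch loading details are assumptions.

```python
import torch

def task_vector(pretrained: torch.nn.Module, finetuned: torch.nn.Module) -> dict:
    """A task vector is the per-parameter difference between fine-tuned and
    pre-trained weights (restricted here to floating-point tensors)."""
    pre, fin = pretrained.state_dict(), finetuned.state_dict()
    return {k: fin[k] - pre[k] for k in pre if pre[k].is_floating_point()}

def apply_task_vectors(pretrained: torch.nn.Module, vectors, coeffs) -> dict:
    """Edit a model by adding scaled task vectors to the pre-trained weights:
    a positive coefficient adds a task, a negative one moves away from it,
    and summing several vectors combines tasks."""
    edited = {k: v.clone() for k, v in pretrained.state_dict().items()}
    for vec, c in zip(vectors, coeffs):
        for k, delta in vec.items():
            edited[k] = edited[k] + c * delta
    return edited  # load into a same-architecture model via load_state_dict
```

As a usage note, something like `apply_task_vectors(base, [task_vector(base, ft)], [-1.0])` would steer the base weights away from the fine-tuned behavior, under the assumptions above.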
Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge. We introduce AdaVision, an interactive process for testing…
External link: http://arxiv.org/abs/2212.02774
Current approaches for fixing systematic problems in NLP models (e.g. regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts. In contrast, humans often provide corrections to each other through natural language…
External link: http://arxiv.org/abs/2211.03318