Showing 1 - 10 of 852
for search: '"A. Brugere"'
Author:
Giang, Nguyen, Brugere, Ivan, Sharma, Shubham, Kariyappa, Sanjay, Nguyen, Anh Totti, Lecue, Freddy
Interpretability for Table Question Answering (Table QA) is critical, particularly in high-stakes industries like finance or healthcare. Although recent approaches using Large Language Models (LLMs) have significantly improved Table QA performance, …
External link:
http://arxiv.org/abs/2412.12386
Author:
Tang, Wenzhuo, Mao, Haitao, Dervovic, Danial, Brugere, Ivan, Mishra, Saumitra, Xie, Yuying, Tang, Jiliang
Models for natural language and images benefit from data scaling behavior: the more data fed into the model, the better they perform. This 'better with more' phenomenon enables the effectiveness of large-scale pre-training on vast amounts of data. However, …
External link:
http://arxiv.org/abs/2406.01899
Author:
Zmigrod, Ran, Wang, Dongsheng, Sibue, Mathieu, Pei, Yulong, Babkin, Petr, Brugere, Ivan, Liu, Xiaomo, Navarro, Nacho, Papadimitriou, Antony, Watson, William, Ma, Zhiqiang, Nourbakhsh, Armineh, Shah, Sameena
The field of visually rich document understanding (VRDU) aims to solve a multitude of well-researched NLP tasks in a multi-modal domain. Several datasets exist for research on specific tasks of VRDU such as document classification (DC), key entity extraction …
External link:
http://arxiv.org/abs/2404.04003
Author:
Lazri, Zachary McBride, Dervovic, Danial, Polychroniadou, Antigoni, Brugere, Ivan, Dachman-Soled, Dana, Wu, Min
Applications that deal with sensitive information may have restrictions placed on the data available to a machine learning (ML) classifier. For example, in some applications, a classifier may not have direct access to sensitive attributes, affecting …
External link:
http://arxiv.org/abs/2403.07724
Author:
Zhou, Yvonne, Liang, Mingyu, Brugere, Ivan, Dachman-Soled, Dana, Dervovic, Danial, Polychroniadou, Antigoni, Wu, Min
The growing use of machine learning (ML) has raised concerns that an ML model may reveal private information about an individual who has contributed to the training dataset. To prevent leakage of sensitive data, we consider using differentially-private …
External link:
http://arxiv.org/abs/2402.04375
Author:
Khanmohammadi, Reza, Kaur, Simerjot, Smiley, Charese H., Alhanai, Tuka, Brugere, Ivan, Nourbakhsh, Armineh, Ghassemi, Mohammad M.
This paper investigates the relationship between scientific innovation in biomedical sciences and its impact on industrial activities, focusing on how the historical impact and content of scientific papers influenced future funding and innovation …
External link:
http://arxiv.org/abs/2401.00942
Author:
Lazri, Zachary McBride, Brugere, Ivan, Tian, Xin, Dachman-Soled, Dana, Polychroniadou, Antigoni, Dervovic, Danial, Wu, Min
Increases in the deployment of machine learning algorithms for applications that deal with sensitive data have brought attention to the issue of fairness in machine learning. Many works have been devoted to applications that require different demographic …
External link:
http://arxiv.org/abs/2310.15097
(Directed) graphs with node attributes are a common type of data in various applications and there is a vast literature on developing metrics and efficient algorithms for comparing them. Recently, in the graph learning and optimization communities, …
External link:
http://arxiv.org/abs/2302.08621
Fair machine learning methods seek to train models that balance model performance across demographic subgroups defined over sensitive attributes like race and gender. Although sensitive attributes are typically assumed to be known during training, …
External link:
http://arxiv.org/abs/2302.01385
Similarity functions measure how comparable pairs of elements are, and play a key role in a wide variety of applications, e.g., notions of Individual Fairness abiding by the seminal paradigm of Dwork et al., as well as Clustering problems. However, …
External link:
http://arxiv.org/abs/2208.12731