Showing 1 - 10 of 10 for search: '"Belém, Catarina"'
Author:
Belem, Catarina G., Pezeshkpour, Pouya, Iso, Hayate, Maekawa, Seiji, Bhutani, Nikita, Hruschka, Estevam
Although many studies have investigated and reduced hallucinations in large language models (LLMs) for single-document tasks, research on hallucination in multi-document summarization (MDS) tasks remains largely unexplored. Specifically, it is unclear …
External link:
http://arxiv.org/abs/2410.13961
Uncertainty expressions such as "probably" or "highly unlikely" are pervasive in human language. While prior work has established that there is population-level agreement in terms of how humans interpret these expressions, there has been little …
External link:
http://arxiv.org/abs/2407.15814
Gender bias research has been pivotal in revealing undesirable behaviors in large language models, exposing serious gender stereotypes associated with occupations and emotions. A key observation in prior work is that models reinforce stereotypes as …
External link:
http://arxiv.org/abs/2405.00588
Author:
Steyvers, Mark, Tejeda, Heliodoro, Kumar, Aakriti, Belem, Catarina, Karny, Sheer, Hu, Xinyue, Mayer, Lukas, Smyth, Padhraic
For large language models (LLMs) to be trusted by humans they need to be well-calibrated in the sense that they can accurately assess and communicate how likely it is that their predictions are correct. Recent work has focused on the quality of …
External link:
http://arxiv.org/abs/2401.13835
Tabular data is prevalent in many high-stakes domains, such as financial services or public policy. Gradient Boosted Decision Trees (GBDT) are popular in these settings due to their scalability, performance, and low training cost. While fairness in …
External link:
http://arxiv.org/abs/2209.07850
In ML-aided decision-making tasks, such as fraud detection or medical diagnosis, the human-in-the-loop, usually a domain expert without technical ML knowledge, prefers high-level concept-based explanations instead of low-level explanations based on …
External link:
http://arxiv.org/abs/2104.12459
Published in:
2021 IEEE International Conference on Data Mining (ICDM)
Considerable research effort has been guided towards algorithmic fairness, but real-world adoption of bias reduction techniques is still scarce. Existing methods are either metric- or model-specific, require access to sensitive attributes at inference …
External link:
http://arxiv.org/abs/2103.12715
Author:
Jesus, Sérgio, Belém, Catarina, Balayan, Vladimir, Bento, João, Saleiro, Pedro, Bizarro, Pedro, Gama, João
There have been several research works proposing new Explainable AI (XAI) methods designed to generate model explanations having specific properties, or desiderata, such as fidelity, robustness, or human-interpretability. However, explanations are …
External link:
http://arxiv.org/abs/2101.08758
Machine Learning (ML) has been increasingly used to aid humans to make better and faster decisions. However, non-technical humans-in-the-loop struggle to comprehend the rationale behind model predictions, hindering trust in algorithmic decision-making …
External link:
http://arxiv.org/abs/2012.01932
Considerable research effort has been guided towards algorithmic fairness, but there is still no major breakthrough. In practice, an exhaustive search over all possible techniques and hyperparameters is needed to find optimal fairness-accuracy trade-offs …
External link:
http://arxiv.org/abs/2010.03665