Showing 1 - 10 of 19 for search: '"Zytek, Alexandra"'
In response to the demand for Explainable Artificial Intelligence (XAI), we investigate the use of Large Language Models (LLMs) to transform ML explanations into natural, human-readable narratives. Rather than directly explaining ML models using LLMs…
External link: http://arxiv.org/abs/2405.06064
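As a rough illustration of the idea in the entry above (rephrasing an existing ML explanation as a plain-language narrative rather than explaining the model with an LLM directly), here is a minimal sketch. The prompt wording, the `call_llm` placeholder, and the example feature contributions are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: formatting a feature-importance explanation as a narration
# request for an LLM. `call_llm` is a hypothetical placeholder for whatever
# LLM client is available; it is not an API from the paper.

def explanation_to_prompt(prediction, contributions):
    """Format (feature, value, contribution) triples into a narration request."""
    lines = [
        f"- {feature} = {value} contributed {contribution:+.2f}"
        for feature, value, contribution in contributions
    ]
    return (
        f"The model predicted: {prediction}. "
        "Rewrite the following feature contributions as a short, plain-language "
        "narrative for a non-technical decision maker:\n" + "\n".join(lines)
    )

def call_llm(prompt):
    # Placeholder: substitute a real LLM call here.
    return "(narrative would be generated by the LLM)"

if __name__ == "__main__":
    contributions = [
        ("income", 52000, 0.31),
        ("num_late_payments", 3, -0.42),
        ("account_age_years", 7, 0.08),
    ]
    print(call_llm(explanation_to_prompt("loan denied", contributions)))
```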
Users in many domains use machine learning (ML) predictions to help them make decisions. Effective ML-based decision-making often requires explanations of ML models and their predictions. While there are many algorithms that explain models, generating…
External link: http://arxiv.org/abs/2312.13084
Through past experiences deploying what we call usable ML (one step beyond explainable ML, including both explanations and other augmenting information) to real-world domains, we have learned three key lessons. First, many organizations are beginning…
External link: http://arxiv.org/abs/2312.02859
Authors: Zytek, Alexandra; Arnaldo, Ignacio; Liu, Dongyu; Berti-Equille, Laure; Veeramachaneni, Kalyan
Through extensive experience developing and explaining machine learning (ML) applications for real-world domains, we have learned that ML models are only as interpretable as their features. Even simple, highly interpretable model types such as regression…
External link: http://arxiv.org/abs/2202.11748
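To illustrate the point in the entry above that explanations inherit the readability of their features, here is a minimal sketch contrasting explanations shown on raw engineered columns versus on human-readable descriptions. The column names, weights, and mapping are hypothetical.

```python
# Minimal sketch: the same model weights shown on engineered column names
# versus on domain-readable labels. All names and values are illustrative only.

engineered_coefficients = {
    "feat_12_scaled": 0.8,
    "onehot_cat_3": -0.5,
}

# Hypothetical mapping from engineered columns back to domain terms.
readable_names = {
    "feat_12_scaled": "household income (standardized)",
    "onehot_cat_3": "housing type = rented",
}

for column, weight in engineered_coefficients.items():
    label = readable_names.get(column, column)
    print(f"{label}: weight {weight:+.1f}")
```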
Detecting anomalies in time-varying multivariate data is crucial in various industries for the predictive maintenance of equipment. Numerous machine learning (ML) algorithms have been proposed to support automated anomaly identification. However, …
External link: http://arxiv.org/abs/2112.05734
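The entry above concerns anomaly detection in time-varying multivariate data. As a generic baseline illustration, not the method proposed in that paper, here is a minimal rolling z-score sketch; the window size, threshold, and synthetic data are assumptions.

```python
# Minimal sketch: flag anomalies in multivariate time series with a per-signal
# rolling z-score. This is a common baseline, not the paper's approach.

import numpy as np

def rolling_zscore_anomalies(data, window=50, threshold=4.0):
    """data: array of shape (time, signals); returns a boolean anomaly mask."""
    anomalies = np.zeros(data.shape, dtype=bool)
    for t in range(window, data.shape[0]):
        ref = data[t - window:t]
        mean, std = ref.mean(axis=0), ref.std(axis=0) + 1e-9
        anomalies[t] = np.abs(data[t] - mean) / std > threshold
    return anomalies

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signals = rng.normal(size=(500, 3))
    signals[400, 1] += 10.0  # inject a spike into one sensor
    mask = rolling_zscore_anomalies(signals)
    print("anomalous (time, signal) indices:", np.argwhere(mask))
```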
Authors: Cheng, Furui; Liu, Dongyu; Du, Fan; Lin, Yanna; Zytek, Alexandra; Li, Haomin; Qu, Huamin; Veeramachaneni, Kalyan
Machine learning (ML) is increasingly applied to Electronic Health Records (EHRs) to solve clinical prediction tasks. Although many ML models perform promisingly, issues with model transparency and interpretability limit their adoption in clinical practice…
External link: http://arxiv.org/abs/2108.02550
Machine learning (ML) is being applied to a diverse and ever-growing set of domains. In many cases, domain experts - who often have no expertise in ML or data science - are asked to use ML predictions to make high-stakes decisions. Multiple ML usability…
External link: http://arxiv.org/abs/2103.02071
Published in: Transactions on Machine Learning Research, 2023
As machine learning (ML) models are increasingly being employed to assist human decision makers, it becomes critical to provide these decision makers with relevant inputs which can help them decide if and how to incorporate model predictions into the…
External link: http://arxiv.org/abs/2011.06167
Loss of thrust emergencies (e.g., induced by bird/drone strikes or fuel exhaustion) create the need for dynamic data-driven flight trajectory planning to advise pilots or control UAVs. While total loss of thrust trajectories to nearby airports can be…
External link: http://arxiv.org/abs/1711.00716
Authors: Zytek, Alexandra; Arnaldo, Ignacio; Liu, Dongyu; Berti-Equille, Laure; Veeramachaneni, Kalyan
Published in: ACM SIGKDD Explorations Newsletter. 24:1-13
Through extensive experience developing and explaining machine learning (ML) applications for real-world domains, we have learned that ML models are only as interpretable as their features. Even simple, highly interpretable model types such as regression…