Showing 1 - 10 of 20 for search: '"Kevin Baum"'
Author:
Kevin Baum, Joanna Bryson, Frank Dignum, Virginia Dignum, Marko Grobelnik, Holger Hoos, Morten Irgens, Paul Lukowicz, Catelijne Muller, Francesca Rossi, John Shawe-Taylor, Andreas Theodorou, Ricardo Vinuesa
Published in:
Frontiers in Computer Science, Vol 5 (2023)
External link:
https://doaj.org/article/5a0ce62813b04f6aaf30b83ad332d727
Published in:
Electronic Proceedings in Theoretical Computer Science, Vol 286, Iss Proc. CREST 2018, Pp 34-49 (2019)
We find ourselves surrounded by a rapidly increasing number of autonomous and semi-autonomous systems. Two grand challenges arise from this development: Machine Ethics and Machine Explainability. Machine Ethics, on the one hand, is concerned with beh
External link:
https://doaj.org/article/7c05853082214548a3a96dca608962a9
Designing trustworthy algorithmic decision-making systems is a central goal in system design. Additionally, it is crucial that external parties can adequately assess the trustworthiness of systems. Ultimately, this should lead to calibrated trust: tr
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::f73b511c71adc1d09c4fa2b5a32f0fe6
https://doi.org/10.31234/osf.io/qhwvx
Author:
Kevin Baum, Sarah Sterz
Published in:
The International Review of Information Ethics. 31
Informatics is the innovation driver of our time. From social media and artificial intelligence to autonomous cyber-physical systems: informatics-driven, digital products and services permeate our society in significant ways. Computer scientists, whe
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intel
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cea26f5ee3b8715ae117984959bd1bb8
Published in:
RE Workshops
System quality attributes like explainability, transparency, traceability, explicability, interpretability, understandability, and the like are given an increasing weight, both in research and in the industry. All of these attributes can be subsume
Published in:
RE Workshops
National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8849093ed058fe982e3c10a8fc028c3b
http://arxiv.org/abs/2108.07711
Author:
Nadine Schlicker, Kevin Baum, Markus Langer, Sonja Kristine Ötting, Cornelius J. König, Dieter Wallach
Advances in artificial intelligence contribute to increasing automation of decisions. In a healthcare-scheduling context, this study compares effects of decision agents and explanations for decisions on decision-recipients’ perceptions of justice.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e1090bf12b7a1406bf15594e533e1d61
Applicants seem to react negatively to artificial intelligence-based automated systems in personnel selection. This study investigates the impact of different pieces of information to alleviate applicant reactions in an automated interview setting. I
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7a215d640d65c5cb72ba555ff3fd9dba
https://hdl.handle.net/10419/240946
Published in:
RE
2019 IEEE 27th International Requirements Engineering Conference (RE)
Recent research efforts strive to aid in designing explainable systems. Nevertheless, a systematic and overarching approach to ensure explainability by design is still missing. Often it is not even clear what precisely is meant when demanding explain