Showing 1 - 10 of 55 for search: '"Klaes, Michael"'
Author:
Jöckel, Lisa, Kläs, Michael, Groß, Janek, Gerber, Pascal, Scholz, Markus, Eberle, Jonathan, Teschner, Marc, Seifert, Daniel, Hawkins, Richard, Molloy, John, Ottnad, Jens
Assurance Cases (ACs) are an established approach in safety engineering to argue quality claims in a structured way. In the context of quality assurance for Machine Learning (ML)-based software components, ACs are also being discussed and appear promising…
External link:
http://arxiv.org/abs/2312.04917
When systems use data-based models that are based on machine learning (ML), errors in their results cannot be ruled out. This is particularly critical if it remains unclear to the user how these models arrived at their decisions and if errors can have…
External link:
http://arxiv.org/abs/2311.05245
Generating context-specific data quality deficits is necessary to experimentally assess data quality of data-driven (artificial intelligence (AI) or machine learning (ML)) applications. In this paper we present badgers, an extensible open-source Python…
External link:
http://arxiv.org/abs/2307.04468
As the use of Artificial Intelligence (AI) components in cyber-physical systems is becoming more common, the need for reliable system architectures arises. While data-driven models excel at perception tasks, model outcomes are usually not dependable…
External link:
http://arxiv.org/abs/2305.14872
Author:
Adler, Rasmus, Klaes, Michael
The European Machinery Directive and related harmonized standards do consider that software is used to generate safety-relevant behavior of the machinery but do not consider all kinds of software. In particular, software based on machine learning (ML)…
External link:
http://arxiv.org/abs/2208.08198
Data-driven models (DDM) based on machine learning and other AI techniques play an important role in the perception of increasingly autonomous systems. Due to the merely implicit definition of their behavior mainly based on the data used for training…
External link:
http://arxiv.org/abs/2206.06838
In the future, AI will increasingly find its way into systems that can potentially cause physical harm to humans. For such safety-critical systems, it must be demonstrated that their residual risk does not exceed what is acceptable. This includes, in…
External link:
http://arxiv.org/abs/2202.05313
Outcomes of data-driven AI models cannot be assumed to always be correct. To estimate the uncertainty in these outcomes, the uncertainty wrapper framework has been proposed, which considers uncertainties related to model fit, input quality, and scope…
External link:
http://arxiv.org/abs/2201.03263
Author:
Heidrich, Jens, Kläs, Michael, Morgenstern, Andreas, Antonino, Pablo Oliveira, Trendowicz, Adam, Quante, Jochen, Grundler, Thomas
In recent years, the role and the importance of software in the automotive domain have changed dramatically. Being able to systematically evaluate and manage software quality is becoming even more crucial. In practice, however, we still find a largely…
External link:
http://arxiv.org/abs/2110.14301
Analytical quality assurance, especially testing, is an integral part of software-intensive system development. With the increased usage of Artificial Intelligence (AI) and Machine Learning (ML) as part of such systems, this becomes more difficult as…
External link:
http://arxiv.org/abs/2108.13837