Showing 1 - 10 of 42 results for search: '"Meggendorfer, Tobias"'
Author:
Andriushchenko, Roman, Bork, Alexander, Budde, Carlos E., Češka, Milan, Grover, Kush, Hahn, Ernst Moritz, Hartmanns, Arnd, Israelsen, Bryant, Jansen, Nils, Jeppson, Joshua, Junges, Sebastian, Köhl, Maximilian A., Könighofer, Bettina, Křetínský, Jan, Meggendorfer, Tobias, Parker, David, Pranger, Stefan, Quatmann, Tim, Ruijters, Enno, Taylor, Landon, Volk, Matthias, Weininger, Maximilian, Zhang, Zhen
The analysis of formal models that include quantitative aspects such as timing or probabilistic choices is performed by quantitative verification tools. Broad and mature tool support is available for computing basic properties such as expected reward…
External link:
http://arxiv.org/abs/2405.13583
Certified Policy Verification and Synthesis for MDPs under Distributional Reach-avoidance Properties
Markov Decision Processes (MDPs) are a classical model for decision making in the presence of uncertainty. Often they are viewed as state transformers with planning objectives defined with respect to paths over MDP states. An increasingly popular alternative…
External link:
http://arxiv.org/abs/2405.04015
We present version 2.0 of the Partial Exploration Tool (PET), a tool for verification of probabilistic systems. We extend the previous version by adding support for stochastic games, based on a recent unified framework for sound value iteration algorithms…
External link:
http://arxiv.org/abs/2405.03885
Markov decision processes (MDPs) are a fundamental model for decision making under uncertainty. They exhibit non-deterministic choice as well as probabilistic uncertainty. Traditionally, verification algorithms assume exact knowledge of the probabilities…
External link:
http://arxiv.org/abs/2404.05424
Author:
Brázdil, Tomáš, Chatterjee, Krishnendu, Chmelik, Martin, Forejt, Vojtěch, Křetínský, Jan, Kwiatkowska, Marta, Meggendorfer, Tobias, Parker, David, Ujma, Mateusz
We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space…
External link:
http://arxiv.org/abs/2403.09184
We consider bidding games, a class of two-player zero-sum graph games. The game proceeds as follows. Both players have bounded budgets. A token is placed on a vertex of a graph; in each turn the players simultaneously submit bids, and the…
External link:
http://arxiv.org/abs/2307.15218
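The turn structure described in this abstract can be sketched in a few lines. The sketch below assumes Richman-style first-price bidding (the higher bidder pays their bid to the opponent and moves the token); the concrete vertices, budgets, and tie-breaking rule are illustrative assumptions, as these details vary across the bidding-games literature.

```python
# Minimal sketch of one turn of a bidding game (assumptions: Richman-style
# bidding, ties broken in favor of player 0, winner pays the loser, so the
# total budget across both players is preserved).

def bidding_turn(budgets, bids, successors):
    """budgets: [b0, b1]; bids: [bid0, bid1] with bids[i] <= budgets[i];
    successors: the vertex each player would move the token to if they win.
    Returns the updated budgets and the token's new vertex."""
    winner = 0 if bids[0] >= bids[1] else 1
    loser = 1 - winner
    budgets = list(budgets)
    budgets[winner] -= bids[winner]  # winner pays their bid ...
    budgets[loser] += bids[winner]   # ... to the loser (Richman bidding)
    return budgets, successors[winner]

budgets, token = bidding_turn([5, 3], [2, 1], {0: "v1", 1: "v2"})
# player 0 wins with bid 2: budgets become [3, 5], token moves to "v1"
```

Note that under this payment scheme only the *ratio* of budgets matters, which is the standard observation that makes Richman bidding games analyzable.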
Entropic risk (ERisk) is an established risk measure in finance, quantifying risk by an exponential re-weighting of rewards. We study ERisk for the first time in the context of turn-based stochastic games with the total reward objective. This gives rise…
External link:
http://arxiv.org/abs/2307.06611
Markov decision processes can be viewed as transformers of probability distributions. While this view is useful from a practical standpoint to reason about trajectories of distributions, basic reachability and safety problems are known to be computationally…
External link:
http://arxiv.org/abs/2305.16796
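The "distribution transformer" view mentioned in this abstract amounts to matrix-vector multiplication: once a policy is fixed, the MDP induces a Markov chain with transition matrix P, and a distribution d over states evolves as d' = d·P. A minimal sketch with a made-up two-state chain (the matrix and step count are illustrative, not from the paper):

```python
# Sketch: evolving a state distribution through the Markov chain induced
# by fixing a policy in an MDP. P is row-stochastic: P[i][j] is the
# probability of moving from state i to state j.

def step(dist, P):
    """One step of the distribution transformer: d' = d * P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.0, 1.0]]   # state 1 is absorbing
d = [1.0, 0.0]     # start surely in state 0
for _ in range(3):
    d = step(d, P)
# after 3 steps, the mass remaining in state 0 is 0.9**3 = 0.729
```

Distributional reach-avoidance properties, as in the paper above, are then constraints on this trajectory of distributions rather than on individual paths of the chain.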
We provide a learning-based technique for guessing a winning strategy in a parity game originating from an LTL synthesis problem. A cheaply obtained guess can be useful in several applications. Not only can the guessed strategy be applied as best-effort…
External link:
http://arxiv.org/abs/2305.15109
A classic solution technique for Markov decision processes (MDPs) and stochastic games (SGs) is value iteration (VI). Due to its good practical performance, this approximative approach is typically preferred over exact techniques, even though no practical…
External link:
http://arxiv.org/abs/2304.09930
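Value iteration, as referred to in this last abstract, repeatedly applies the Bellman operator until successive value vectors stabilize; for maximal reachability in an MDP, each update maximizes over the available actions. A minimal sketch on a made-up three-state MDP (the model, action names, and threshold are illustrative assumptions, not the paper's algorithm):

```python
# Sketch of plain value iteration for maximal reachability in an MDP.
# mdp[s] maps each action to a list of (probability, successor) pairs.

def value_iteration(mdp, target, eps=1e-8):
    V = {s: (1.0 if s in target else 0.0) for s in mdp}
    while True:
        delta = 0.0
        for s in mdp:
            if s in target:
                continue
            # Bellman update: best action by expected successor value.
            new = max(sum(p * V[t] for p, t in succs)
                      for succs in mdp[s].values())
            delta = max(delta, abs(new - V[s]))
            V[s] = new
        if delta < eps:
            return V

mdp = {
    "s0":   {"a": [(0.5, "goal"), (0.5, "sink")],
             "b": [(1.0, "s0")]},
    "goal": {"a": [(1.0, "goal")]},
    "sink": {"a": [(1.0, "sink")]},
}
V = value_iteration(mdp, {"goal"})
# V["s0"] converges to 0.5 (action "a" reaches the goal with probability 1/2)
```

The fixed small-change stopping rule (`delta < eps`) used here is exactly the practically convenient but unsound criterion that the abstract alludes to: it gives no guarantee on the distance to the true values, which motivates sound variants such as interval/bounded value iteration.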