Coordinating randomized policies for increasing security of agent systems

Authors: Janusz Marecki, Sarit Kraus, Milind Tambe, Jonathan P. Pearce, Fernando Ordóñez, Praveen Paruchuri
Year of publication: 2009
Subject:
Source: Information Technology and Management. 10:67-79
ISSN: 1573-7667, 1385-951X
DOI: 10.1007/s10799-008-0047-9
Description: We consider the problem of providing decision support to a patrolling or security service in an adversarial domain. The idea is to create patrols that achieve a high level of coverage or reward while taking into account the presence of an adversary. We assume that the adversary can learn or observe the patrolling strategy and use this to its advantage. We follow two different approaches depending on what is known about the adversary. First, if there is no information about the adversary, we use a Markov Decision Process (MDP) to represent patrols and identify randomized solutions that minimize the information available to the adversary. This led to the development of the CRLP and BRLP algorithms for policy randomization of MDPs. Second, when there is partial information about the adversary, we decide on efficient patrols by solving Bayesian Stackelberg games. Here, the leader first commits to a patrolling strategy, and then an adversary, drawn from possibly many adversary types, selects its best response to the given patrol. We provide two efficient mixed-integer programming (MIP) formulations, DOBSS and ASAP, to solve this NP-hard problem. Our experimental results show the efficiency of these algorithms and illustrate how these techniques provide optimal and secure patrolling policies. These models have been applied in practice: DOBSS is at the heart of the ARMOR system currently deployed at Los Angeles International Airport (LAX), where it randomizes checkpoints on the roadways entering the airport and canine patrol routes within the airport terminals.
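To make the first idea concrete, the sketch below maximizes the entropy of a patrol distribution subject to a floor on expected reward, which is the intuition behind CRLP and BRLP. This is a minimal single-decision simplification under assumed inputs, not the authors' formulation: the paper works over MDP occupancy measures and keeps the problem tractable with linear-programming techniques, and the reward vector and threshold fraction beta here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected rewards for three patrol routes.
rewards = np.array([5.0, 3.0, 1.0])
beta = 0.8                      # assumed: retain at least 80% of the best deterministic reward
floor = beta * rewards.max()

def neg_entropy(p):
    # Negative Shannon entropy; minimizing it maximizes the patrol's unpredictability.
    return float(np.sum(p * np.log(p + 1e-12)))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},          # p is a probability distribution
    {"type": "ineq", "fun": lambda p: rewards @ p - floor},  # expected reward stays above the floor
]
p0 = np.ones_like(rewards) / rewards.size
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(1e-9, 1.0)] * rewards.size, constraints=constraints)
print("randomized patrol distribution:", np.round(res.x, 3))
print("expected reward:", rewards @ res.x, ">= floor:", floor)
```

Lowering beta trades expected reward for randomness; at beta = 1 the distribution collapses toward the single best route, while at beta = 0 it becomes uniform.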
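For the second approach, DOBSS and ASAP themselves are MIP formulations that handle multiple adversary types directly; as a simpler illustration of the underlying leader-follower optimization, the sketch below computes an optimal leader mixed strategy for a single adversary type with the classic multiple-LPs method: one LP per adversary pure strategy, each forcing that strategy to be the adversary's best response. The payoff matrices are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoffs: rows = leader (patroller) actions, cols = adversary actions.
R = np.array([[4.0, 1.0],   # leader payoff R[i, j]
              [2.0, 3.0]])
C = np.array([[1.0, 3.0],   # adversary payoff C[i, j]
              [4.0, 0.0]])
n, m = R.shape

best_val, best_x = -np.inf, None
for j in range(m):
    # Force adversary action j to be a best response to the leader's mix x:
    # (C[:, k] - C[:, j]) @ x <= 0 for every alternative action k.
    A_ub = (C - C[:, [j]]).T
    b_ub = np.zeros(m)
    # Maximize the leader's payoff R[:, j] @ x (linprog minimizes, so negate).
    res = linprog(-R[:, j], A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0.0, 1.0)] * n)
    if res.success and -res.fun > best_val:
        best_val, best_x = -res.fun, res.x

print("leader mixed strategy:", np.round(best_x, 3), "value:", round(best_val, 3))
```

With multiple adversary types, enumerating best-response combinations this way blows up exponentially; DOBSS instead encodes each type's best response with integer variables in a single MIP, which is what makes the Bayesian case tractable.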
Database: OpenAIRE