Adversary decision-making using Markov models

dc.contributor.author Andreas, Elizabeth
dc.contributor.author Dorismond, Jessica
dc.contributor.author Gamarra, Marco
dc.date.accessioned 2023-07-12T19:52:39Z
dc.date.available 2023-07-12T19:52:39Z
dc.date.issued 2023-06
dc.description.abstract This study conducts three experiments on adversary decision-making modeled as a graph. Each experiment has the overall goal of understanding how to exploit an adversary’s decision-making in order to obtain desired outcomes, as well as specific goals unique to each experiment. The first experiment models adversary decision-making using an absorbing Markov chain (AMC). A sensitivity analysis of states (nodes in the graph) and actions (edges in the graph) is conducted, which informs how downstream adversary decisions could be manipulated. The next experiment uses a Markov decision process (MDP). Assuming the adversary is initially blind to the rewards they will receive when they take an action, a Q-learning algorithm is used to determine the sequence of actions that maximizes the adversary’s rewards (called an optimum policy). This experiment gives insight into the possible decision-making of an adversary. Lastly, in the third experiment a two-player Markov game is developed, played by an agent (friend) and the adversary (foe). The agent’s goal is to decrease the overall rewards the adversary receives when it follows the optimum policy. All experiments are demonstrated using specific examples.
dc.identifier.citation Elizabeth Andreas, Jessica Dorismond, and Marco Gamarra "Adversary decision-making using Markov models", Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380I (12 June 2023); https://doi.org/10.1117/12.2655223
dc.identifier.issn 1996-756X
dc.identifier.uri https://scholarworks.montana.edu/handle/1/17960
dc.language.iso en_US
dc.publisher SPIE
dc.rights copyright SPIE 2023
dc.rights.uri https://spie.org/publications/contact-spie-publications/terms-of-use
dc.subject Markov models
dc.subject markov
dc.title Adversary decision-making using Markov models
dc.type Article
mus.citation.extentfirstpage 1
mus.citation.extentlastpage 21
mus.citation.journaltitle Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V
mus.data.thumbpage 12
mus.identifier.doi 10.1117/12.2655223
mus.relation.college College of Letters & Science
mus.relation.department Mathematical Sciences
mus.relation.university Montana State University - Bozeman
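
The abstract describes three techniques: an absorbing Markov chain sensitivity analysis, Q-learning over an MDP to recover the adversary's optimum policy, and a two-player Markov game. As a rough illustration of the second step only, the Python sketch below runs tabular Q-learning on a small, hypothetical graph-structured MDP; the states, edges, rewards, and hyperparameters are invented for illustration and are not taken from the paper.

# Minimal Q-learning sketch on a hypothetical graph-structured MDP
# (assumed example; not the authors' code or data).
import numpy as np

rng = np.random.default_rng(0)

actions = {           # edges available from each non-terminal node; node 3 is terminal
    0: [1, 2],
    1: [2, 3],
    2: [3],
}
rewards = {           # reward for traversing each edge (assumed values)
    (0, 1): 1.0, (0, 2): 0.5,
    (1, 2): 0.0, (1, 3): 2.0,
    (2, 3): 1.5,
}

gamma, alpha, eps = 0.9, 0.1, 0.2     # discount, learning rate, exploration rate
Q = np.zeros((4, 4))                  # Q[s, s'] = value of taking edge s -> s'

for _ in range(5000):                 # episodes, each starting at node 0
    s = 0
    while s in actions:               # run until the terminal node is reached
        nxt = actions[s]
        if rng.random() < eps:        # epsilon-greedy exploration
            a = nxt[rng.integers(len(nxt))]
        else:
            a = max(nxt, key=lambda sp: Q[s, sp])
        r = rewards[(s, a)]
        future = max((Q[a, sp] for sp in actions.get(a, [])), default=0.0)
        Q[s, a] += alpha * (r + gamma * future - Q[s, a])   # Q-learning update
        s = a

# Greedy policy recovered from the learned Q-values
policy = {s: max(nxt, key=lambda sp: Q[s, sp]) for s, nxt in actions.items()}
print(policy)   # e.g. {0: 1, 1: 3, 2: 3} for the rewards assumed above

The learned policy is what the paper calls the adversary's optimum policy; the friendly agent in the third experiment would then act to reduce the reward that this policy yields.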

Files

Original bundle

Name: andreas-markov-2023.pdf
Size: 295.02 KB
Format: Adobe Portable Document Format
Description: markov models

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission
Description: Copyright (c) 2002-2022, LYRASIS. All rights reserved.