Detecting and Tracking Hostile Plans in the Hats World

Detection and Tracking of Plans
Goals define plans; plans are then put into action, which results in observations:
Intentions (Goals) -> Plans -> Actions -> Observations
Given a stream of observations and some behavioral model of the agent that generates those observations:
– What are the most likely intentions of the agent?
– What kind of plan is the agent pursuing?
Plan recognition: the problem of inferring an agent's hidden state of plans or intentions based on their observable actions.

What is the Hats Simulator?
Plan recognition in a virtual domain:
– A simulation of a society in a box with about 100,000 agents (hats) that engage in individual and collective activities.
– Most of the agents are benign; very few are known adversaries (terrorists), and some are covert adversaries.
– Hats have capabilities, which are a set of elementary attributes that can be traded with other hats at meetings.
– Beacons are special locations or landmarks that are characterized by their vulnerabilities.
– When a group of adversarial agents (a task force) has the capabilities that match a beacon's vulnerabilities, the beacon is destroyed.

What is the Hats Simulator?
Each hat belongs to one or more organizations, which are not known and must be inferred from meetings:
– Each adversarial hat belongs to at least one adversarial organization.
– Each benign hat belongs to at least one benign organization and to no adversarial organizations.
When a meeting is planned, the list of participants is drawn from the set of hats belonging to the same organization:
– A meeting planned by an adversarial organization will consist only of adversarial agents.
– A meeting planned by a benign organization may consist of both benign and (covert) adversarial agents.
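The beacon-attack condition above (a task force whose pooled capabilities cover a beacon's vulnerabilities) can be sketched as follows. This is an illustrative toy model, not the simulator's actual API; the `Hat` class, `beacon_destroyed` function, and attribute names are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Hat:
    """A hypothetical agent with a set of elementary capability attributes."""
    name: str
    capabilities: set  # elementary attributes, tradable with other hats at meetings


def beacon_destroyed(task_force, vulnerabilities):
    """A beacon falls when the task force's pooled capabilities cover its vulnerabilities."""
    pooled = set()
    for hat in task_force:
        pooled |= hat.capabilities  # capabilities combine across the task force
    return vulnerabilities <= pooled


# Two adversarial hats jointly cover vulnerabilities {A, C} but not {A, D}.
task_force = [Hat("h1", {"A", "B"}), Hat("h2", {"C"})]
print(beacon_destroyed(task_force, {"A", "C"}))  # True
print(beacon_destroyed(task_force, {"A", "D"}))  # False
```

The point of the sketch is that no single hat needs all the capabilities; the planner only has to assemble a task force whose union matches the beacon's vulnerability set.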
Hierarchical Model for Hats
The generative planner chooses an organization to carry out the attack and the beacon to be attacked. A task force is then chosen, and the planner sets up an elaborate sequence of meetings for each task force member to acquire a capability.

The Goals of Hats
To find and neutralize the adversarial task force behind a beacon attack, given the record of meetings between hats.

Bayesian Framework
The state of the i-th agent at time t is characterized as x_t^i = (τ_t^i, ι_t^i, σ_t^i):
– Terrorist indicator: τ_t^i ∈ {0, 1}
– Intention to acquire a capability: ι_t^i ∈ {0, 1, 2, …, M}
– Actual capability carried: σ_t^i ∈ {1, 2, …, M}
The joint state of the system is given by x_t = (x_t^1, x_t^2, …, x_t^N).

Hidden Markov Model
Transition matrix: M(x, x'; y) = P(X_t = x | X_{t-1} = x', Y_{t-1} = y)
– The probability that the system will be in state X_t = x given that it was in state X_{t-1} = x' and produced the observation Y_{t-1} = y.
Emission model: Λ(y, x) = P(Y_t = y | X_t = x)
– The probability of seeing a particular observation in a particular state.

Bayesian Filtering
Filtering distribution (joint distribution over the hidden state at time t and the observations up to time t):
α_t(x, y_0^t) = P(X_t = x, Y_0^t = y_0^t), where y_0^t = (y_0, y_1, …, y_t)
Recursive update formula:
α_t(x, y_0^t) = Λ(y_t, x) Σ_{x'} M(x, x'; y_{t-1}) α_{t-1}(x', y_0^{t-1})
– Once a transition matrix and an emission model are specified, this equation can be used for state estimation.

Bayesian Guilt by Association Model
Estimates the group membership of agents based on observed meetings; meetings contain only two agents.
State of the system: x_t = (τ_t^1, τ_t^2, …, τ_t^N)
Probability of a meeting between agents of the same type: p ≥ 0.5
Probability of a meeting between agents of different types: 1 - p
Kronecker δ-function: δ(i, j) = 1 if i = j, δ(i, j) = 0 if i ≠ j

Bayesian Guilt by Association Model
Emission model: Λ(y = (i, j), x = (τ^1, …, τ^N)) = p^{δ(τ^i, τ^j)} (1 - p)^{1 - δ(τ^i, τ^j)}
Transition matrix: M(x, x'; y) = δ(x, x')
Recursive update formula: α_t(x, y_0^t) = Λ(y_t, x) α_{t-1}(x, y_0^{t-1})
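The guilt-by-association filter can be sketched for a toy system by enumerating the joint state explicitly, which is feasible only for very small N. The identity transition matrix M(x, x'; y) = δ(x, x') means group membership is static, so the recursive update reduces to multiplying the previous filtering distribution by the emission probability of each new meeting. The value p = 0.8, N = 3, and the meeting sequence below are illustrative assumptions, not values from the source.

```python
from itertools import product

N = 3    # number of hats (toy size; the joint state space is 2^N)
p = 0.8  # assumed probability that a meeting pairs same-type agents (p >= 0.5)


def emission(meeting, x):
    """Lambda(y = (i, j), x): p if tau_i == tau_j, else 1 - p."""
    i, j = meeting
    return p if x[i] == x[j] else 1.0 - p


# Uniform prior over joint states x = (tau_1, ..., tau_N), tau in {0, 1}.
states = list(product([0, 1], repeat=N))
alpha = {x: 1.0 / len(states) for x in states}

# With the identity transition M(x, x'; y) = delta(x, x'), the recursive
# update is alpha_t(x) = Lambda(y_t, x) * alpha_{t-1}(x); we renormalize
# each step to keep a posterior over states.
meetings = [(0, 1), (1, 2), (0, 1)]  # observed pairwise meetings
for y in meetings:
    alpha = {x: emission(y, x) * a for x, a in alpha.items()}
    z = sum(alpha.values())
    alpha = {x: a / z for x, a in alpha.items()}

# Posterior probability that hats 0 and 1 belong to the same group:
same_01 = sum(a for x, a in alpha.items() if x[0] == x[1])
print(round(same_01, 3))  # 0.941: two shared meetings make co-membership likely
```

Because the transition matrix is the identity, the order of meetings does not matter here; the posterior depends only on how many times each pair was observed together.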