KR: Propositional Logic - Syntax
• Syntactic components of propositional logic:
1. Symbols/atoms
(a) Logical constants true and false
(b) Propositional symbols P, Q, R, ...
2. Connectives
(a) ¬ (negation)
(b) ∧ (conjunction/and)
Arguments are conjuncts
(c) ∨ (disjunction/or)
Arguments are disjuncts
(d) ⇒ (or ⊃, →) (implication)
Arguments are premise/conclusion, antecedent/consequent
(e) ⇔ (or ≡) (equivalence/biconditional/iff)
3. Parentheses
• Precedence: () ¬ ∧ ∨ ⇒ ⇔
• Sentences, or well-formed formulas (wffs), are defined as one of
1. Propositional symbol
2. A negated sentence
3. Two sentences joined by a connective
4. A parenthesized sentence
• A literal is a symbol or negated symbol
• An atomic sentence is a symbol
– true and false are special symbols
• A complex sentence is a sentence containing connectives or parentheses
KR: Propositional Logic - Syntax
• Syntax (BNF):
Sentence        → AtomicSentence | ComplexSentence
AtomicSentence  → True | False | <Symbol>
Literal         → <Symbol> | ¬<Symbol>
ComplexSentence → (Sentence)
                | [Sentence]
                | ¬Sentence
                | Sentence ∧ Sentence
                | Sentence ∨ Sentence
                | Sentence ⇒ Sentence
                | Sentence ⇔ Sentence
KR: Propositional Logic - Semantics
• Semantics define the rules for determining the truth of a sentence wrt a given
model
• Symbols represent statements about the world (universe of discourse) that are
true or false
• Interpretations
– An interpretation is a mapping of the set of propositional symbols to {true,
false}
– Represents the truth value of the propositions in a possible world
– Given n propositions, there are 2^n models
∗ Each model represents a unique set of truth assignments to the symbols
• The truth values of true and false are fixed
• Complex sentence semantics defined by truth tables:
P Q | ¬P | P ∧ Q | P ∨ Q | P ⇒ Q | P ⇔ Q
T T |  F |   T   |   T   |   T   |   T
T F |  F |   F   |   T   |   F   |   F
F T |  T |   F   |   T   |   T   |   F
F F |  T |   F   |   F   |   T   |   T
KR: Propositional Logic - Proof Theory, Model Checking
• Logic is not only a language for representing knowledge; it also provides a way
of inferring new sentences from existing ones
• One approach is called model checking
– To determine whether KB |= α, where α is the sentence to be proved (i.e.,
is α true given the sentences in KB)
∗ Examine all models of the propositions in KB
∗ If α is true in all models for which KB is true, then KB |= α
– An alternative way of looking at this is that if KB |= α, then KB ⇒ α is
valid
• Manually checking:
1. Identify the propositions involved in α and in those axioms that can be used
to infer α from other propositions
2. Create a truth table that represents all interpretations of these propositions
3. Add to the table the facts and axioms that are in the KB pertinent to α
4. Determine the truth values for the columns representing the KB based on
each interpretation
5. Identify the models of KB - these are the rows for which the KB columns are
all true
6. KB |= α if α is true for every model of KB
KR: Propositional Logic - Proof Theory, Model Checking (2)
• Algorithm:
function TT-ENTAILS? (KB, alpha) returns Boolean
{
    symbols <- UNION(PROPOSITIONS(KB), PROPOSITIONS(alpha))
    return TT-CHECK-ALL(KB, alpha, symbols, {})
}

function TT-CHECK-ALL (KB, alpha, symbols, model) returns Boolean
{
    if (EMPTY?(symbols))
        if (PL-TRUE?(KB, model))
            return PL-TRUE?(alpha, model)
        else
            return TRUE
    else {
        P <- FIRST(symbols)
        rest <- REST(symbols)
        return TT-CHECK-ALL(KB, alpha, rest, UNION(model, {P = true}))
               AND
               TT-CHECK-ALL(KB, alpha, rest, UNION(model, {P = false}))
    }
}
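• A runnable sketch of the same check in Python (encoding sentences as Boolean functions over a model dict is my own choice):

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    # KB |= alpha iff alpha is true in every model in which KB is true
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False
    return True

# Example: KB = {P, P => Q}; does KB entail Q?
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
entailed = tt_entails(kb, lambda m: m["Q"], ["P", "Q"])   # True
```

The only model of this KB assigns both P and Q true, so Q is entailed.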
• Model checking is sound and complete
• The problem is that the number of models for n propositions is 2^n , which makes
this approach impractical
KR: Propositional Logic - Proof Theory, Inference
• A proof procedure consists of an inference rule and an algorithm for applying it
• An inference rule is a mechanical (syntactic) means of determining entailment
• Two sentences are logically equivalent if they are true in the same set of models
– Denoted α ≡ β
– An alternative definition for equivalence:
α ≡ β iff α |= β ∧ β |= α
– Standard equivalencies:
1. Commutativity of ∧:
(α ∧ β) ≡ (β ∧ α)
2. Commutativity of ∨:
(α ∨ β) ≡ (β ∨ α)
3. Associativity of ∧:
((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ))
4. Associativity of ∨:
((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ))
5. Double negation elimination:
¬(¬α) ≡ α
6. Contrapositive:
(α ⇒ β) ≡ (¬β ⇒ ¬α)
7. Implication elimination:
(α ⇒ β) ≡ (¬α ∨ β)
8. Biconditional elimination:
(α ⇔ β) ≡ ((α ⇒ β) ∧ (β ⇒ α))
9. De Morgan’s Laws:
¬(α ∨ β) ≡ (¬α ∧ ¬β)
¬(α ∧ β) ≡ (¬α ∨ ¬β)
10. Distributivity of ∧ over ∨:
(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ))
11. Distributivity of ∨ over ∧:
(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ))
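• Each standard equivalence can be verified by exhaustive model checking; a small Python sketch for two of them (the encoding of sentences as functions of a model is my own):

```python
from itertools import product

def equivalent(f, g, symbols):
    # Two sentences are equivalent iff they agree in every model
    for values in product([True, False], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if f(m) != g(m):
            return False
    return True

# De Morgan: ~(a | b) == ~a & ~b
demorgan = equivalent(lambda m: not (m["a"] or m["b"]),
                      lambda m: (not m["a"]) and (not m["b"]),
                      ["a", "b"])

# Distributivity of AND over OR: a & (b | c) == (a & b) | (a & c)
distrib = equivalent(lambda m: m["a"] and (m["b"] or m["c"]),
                     lambda m: (m["a"] and m["b"]) or (m["a"] and m["c"]),
                     ["a", "b", "c"])
```

This is exactly the truth-table soundness check mentioned later for inference rules.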
KR: Propositional Logic - Proof Theory, Inference (2)
• A sentence S is valid if it is true under all possible interpretations (all models)
– Such sentences are vacuous (provide no information)
– Also referred to as tautologies
• A sentence is satisfiable if it is true in some model
– The sentence is satisfied by that model
– Determining whether a sentence is satisfiable is called the SAT problem
• Relation between validity and satisfiability:
– Sentence α is valid iff ¬α is unsatisfiable
– Sentence α is satisfiable iff ¬α is not valid
– α |= β iff α ∧ ¬β is unsatisfiable
This corresponds to proof by contradiction
• The deduction theorem:
α |= β iff α ⇒ β is valid
– This means that if α ⇒ β is true in every model, then α |= β
• Inference rules
– A rule of inference captures an entire class of inferences - a pattern of
inferences that occurs frequently
– Once a rule is shown to be sound, it can be used without the need to reprove
it on each application
∗ An inference rule is sound if its conclusion is true every time its premise
is true
∗ Truth tables can be used to prove the soundness of an inference rule
– Rules are represented in one of two formats:
1.
premise
conclusion
2.
premise : conclusion
KR: Propositional Logic - Proof Theory, Inference (3)
– Propositional Logic Inference Rules:
1. Modus Ponens (Implication Elimination):
α, α ⇒ β
β
2. Modus Tollens:
α ⇒ β, ¬β
¬α
3. And elimination:
α1 ∧ α2 ∧ ... ∧ αn
αi
4. And introduction:
α1 , α2 , ..., αn
α1 ∧ α2 ∧ ... ∧ αn
5. Or introduction:
αi
α1 ∨ α2 ∨ ... ∨ αi ∨ ... ∨ αn
6. Unit resolution:
P1 ∨ P2 ∨ ... ∨ Pi ∨ ... ∨ Pn , ¬Pi
P1 ∨ P2 ∨ ... ∨ Pi−1 ∨ Pi+1 ∨ ... ∨ Pn
7. General resolution:
P1 ∨ P2 ∨ ... ∨ Pi ∨ ... ∨ Pn , Q1 ∨ Q2 ∨ ... ∨ ¬Pi ∨ ... ∨ Qm
P1 ∨ P2 ∨ ... ∨ Pi−1 ∨ Pi+1 ∨ ... ∨ Pn ∨ Q1 ∨ Q2 ∨ ... ∨ Qm
– Any of the equivalences noted previously can also be used as inference rules;
e.g., double negation elimination:
¬¬α
α
• Search can be used to find a proof using inference rules:
– Initial state: KB
– Actions: Inference rules, where the premise is matched against the sentences
in the KB
– Result: Add sentences that are the consequents of a rule
– Goal state: KB that contains the sentence to be proved
KR: Propositional Logic - Proof Theory, Resolution Intro
– Resolution is an alternative proof procedure to modus ponens
– The unit and general resolutions rules are listed above, and repeated here:
1. Unit resolution:
P1 ∨ P2 ∨ ... ∨ Pi ∨ ... ∨ Pn , ¬Pi
P1 ∨ P2 ∨ ... ∨ Pi−1 ∨ Pi+1 ∨ ... ∨ Pn
2. General resolution:
P1 ∨ P2 ∨ ... ∨ Pi ∨ ... ∨ Pn , Q1 ∨ Q2 ∨ ... ∨ ¬Pi ∨ ... ∨ Qm
P1 ∨ P2 ∨ ... ∨ Pi−1 ∨ Pi+1 ∨ ... ∨ Pn ∨ Q1 ∨ Q2 ∨ ... ∨ Qm
– Resolution makes the following assumptions:
∗ Based on clausal representation
· Clause is a disjunction of literals: P1 ∨ P2 ∨ ... ∨ Pn
∗ Each literal appears only once in a clause
· The process of eliminating duplicates is called factoring
P ∨P ≡P
∗ KB must be in conjunctive normal form (CNF)
· A KB in CNF consists of a conjunction of clauses
· Syntax (BNF):
CNFSentence → Clause ∧ Clause ∧ ... ∧ Clause
Clause      → Literal ∨ Literal ∨ ... ∨ Literal
Literal     → Symbol | ¬Symbol
Symbol      → P | Q | R | ...
· CNF conversion algorithm:
1. Eliminate biconditional α ⇔ β by rewriting as (α ⇒ β) ∧ (β ⇒ α)
2. Eliminate implication α ⇒ β by rewriting as ¬α ∨ β
3. Reduce the scope of negation by
(a) Eliminating double negations: rewrite ¬¬α as α
(b) Applying De Morgan's laws:
Rewrite ¬(α ∨ β) as ¬α ∧ ¬β
Rewrite ¬(α ∧ β) as ¬α ∨ ¬β
4. Distribute ∨ over ∧:
Rewrite α ∨ (β ∧ γ) as (α ∨ β) ∧ (α ∨ γ)
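· The conversion steps can be sketched in Python over a small syntax tree of nested tuples (the tags 'sym', 'not', 'and', 'or', 'imp', 'iff' are my own encoding):

```python
def eliminate_iff_imp(s):
    # Steps 1-2: remove <=> and => from the syntax tree
    tag = s[0]
    if tag == 'sym':
        return s
    if tag == 'not':
        return ('not', eliminate_iff_imp(s[1]))
    a, b = eliminate_iff_imp(s[1]), eliminate_iff_imp(s[2])
    if tag == 'iff':    # a <=> b  becomes  (~a | b) & (~b | a)
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    if tag == 'imp':    # a => b  becomes  ~a | b
        return ('or', ('not', a), b)
    return (tag, a, b)

def push_negation(s):
    # Step 3: reduce the scope of negation (double negation, De Morgan)
    if s[0] == 'not':
        inner = s[1]
        if inner[0] == 'not':
            return push_negation(inner[1])
        if inner[0] == 'and':
            return ('or', push_negation(('not', inner[1])),
                          push_negation(('not', inner[2])))
        if inner[0] == 'or':
            return ('and', push_negation(('not', inner[1])),
                           push_negation(('not', inner[2])))
        return s                      # negated symbol: already a literal
    if s[0] == 'sym':
        return s
    return (s[0], push_negation(s[1]), push_negation(s[2]))

def distribute(s):
    # Step 4: distribute | over &
    if s[0] == 'or':
        a, b = distribute(s[1]), distribute(s[2])
        if a[0] == 'and':
            return ('and', distribute(('or', a[1], b)),
                           distribute(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', distribute(('or', a, b[1])),
                           distribute(('or', a, b[2])))
        return ('or', a, b)
    if s[0] == 'and':
        return ('and', distribute(s[1]), distribute(s[2]))
    return s

def to_cnf(s):
    return distribute(push_negation(eliminate_iff_imp(s)))

P, Q = ('sym', 'P'), ('sym', 'Q')
```

For example, to_cnf(('not', ('and', P, Q))) yields ¬P ∨ ¬Q, as De Morgan's law requires.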
KR: Propositional Logic - Proof Theory, Resolution Proofs
– Algorithm overview
∗ Given: Set of sentences KB and sentence α that you want to prove
· (I.e., want to show KB |= α)
1. Convert KB to CNF
2. Negate α
3. Convert ¬α to CNF
4. Resolve pairs of clauses that contain complementary literals
5. Add results back into KB
6. Continue resolving until either
(a) The empty clause (denoted □) is generated, or
(b) Nothing left to resolve
∗ If the empty clause is produced, this means that
· ¬α is inconsistent with KB, and therefore KB ∧ ¬α is not satisfiable
· Thus KB |= α
∗ This is proof by contradiction
– Algorithm:
function PL-RESOLUTION (KB, alpha) returns Boolean
{
clauses <- CONVERT-TO-CNF(KB, NEGATE(alpha))
new <- {}
loop {
for (each Ci, Cj in clauses) {
resolvents <- PL-RESOLVE(Ci, Cj)
if (EMPTY-CLAUSE in resolvents)
return TRUE
new <- UNION(new, resolvents)
}
if (SUBSET(new, clauses))
return FALSE
clauses <- UNION(new, clauses)
}
}
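– A runnable Python sketch of the algorithm (representing clauses as frozensets of literals, with '~' marking negation, is my own encoding):

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(ci, cj):
    # All resolvents of two clauses; representing clauses as sets
    # makes factoring (duplicate elimination) automatic
    out = []
    for lit in ci:
        if negate(lit) in cj:
            out.append(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return out

def pl_resolution(kb_clauses, negated_goal_clauses):
    # Returns True iff the empty clause is derivable, i.e. KB |= goal
    clauses = set(kb_clauses) | set(negated_goal_clauses)
    while True:
        new = set()
        for ci, cj in combinations(clauses, 2):
            for r in resolve(ci, cj):
                if not r:            # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:           # nothing left to resolve
            return False
        clauses |= new

# Example KB: {P => Q, P} as clauses; prove Q by refuting ~Q
kb = [frozenset({'~P', 'Q'}), frozenset({'P'})]
```

Termination is guaranteed because the set of possible clauses over a finite symbol set is finite.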
KR: Propositional Logic - Proof Theory, Resolution Properties
• Resolution is complete
– Based on concept of resolution closure (RC)
– Given the set of clauses s, RC(s) is the set of clauses that can be generated
by repeatedly applying resolution to all clauses in s
– This is a finite set
– The Ground Resolution Theorem:
If a set of clauses s is unsatisfiable, then □ ∈ RC(s)
– This is proved by showing that if RC(s) does not contain □, then s is
satisfiable
∗ Construct a model for s as follows.
Given the k symbols P1 ... Pk:
for (i <- 1 to k)
    if (some clause c in RC(s) contains NOT(Pi), and all of c's other
        literals are false under the assignments to P1 ... Pi-1)
        Pi <- FALSE
    else
        Pi <- TRUE
∗ The assignment of truth values to P1 ...Pk results in a model for s
· If it didn't, then some assignment to Pi would cause some clause c to
become false
· For this to happen, all other literals in c must have been made false
by the assignments to P1 ...Pi−1
∗ So c is either (false ∨ false ∨ ... ∨ Pi ) or (false ∨ false ∨ ... ∨ ¬Pi )
∗ c can only be made false if both of these clauses are in RC(s)
∗ Since RC(s) is closed under resolution, their resolvent is in RC(s), and all
of its literals over P1 , ..., Pi−1 will be false
∗ This contradicts the assumption that the first falsified clause occurs at
step i
– Therefore, the above construction never falsifies a clause in RC(s) and thus
generates a model for RC(s), and thus for s
• Resolution is decidable
– If KB ⊭ α, then resolution will terminate without generating the empty
clause
KR: Propositional Logic - Proof Theory, Resolution Strategies
• Resolution places no restrictions on order or choice of arguments
• Want to be systematic and efficient
• Ordering strategies based on search type
– Let
1. Level 0 resolvents be the original KB clauses
2. Level k + 1 resolvents be those generated by resolving a level k resolvent
with one from level j ≤ k
– Strategy:
1. Breadth first search
∗ Generate all resolvents at level k before generating those at level k +1
2. Depth first search
∗ At level k, resolve resolvent generated in previous step with level j ≤ k
resolvent
• Refinement strategies
– Specify what kinds of clauses can be resolved
1. Unit preference
∗ One argument must be single literal
∗ Successively decreases length of clause
2. Set of support
∗ Identify a set of clauses as the set of support
∗ All resolutions must involve a clause from the set of support
∗ All resolvents are added to the set of support
∗ One approach initializes the set of support to ¬α
∗ The set of support is then augmented with descendants of ¬α
∗ γ2 is a descendant of γ1 if
(a) γ2 is a resolvent of γ1 and another clause, or
(b) γ2 is a resolvent of a descendant of γ1 and another clause
3. Linear input
∗ Every resolution combines an input clause with another clause
∗ Not refutation complete
4. Ancestry filtering
∗ One of the arguments must be a member of the original KB, or of ¬α
∗ Refutation complete
KR: Propositional Logic - Proof Theory, Resolution Issues
• Want to be able to choose actions based on KB
– I.e., want P ∧ Q ∧ ... ⇒ action
– Can only use resolution to ask Should action X be taken now?, and not
What action should be taken now?
• Must have proposition for each individual situation
– Cannot represent classes of situations
– Cannot represent classes of objects
• Cannot handle change
– Propositions may change over time
– May want to represent state of world at various times
– Increases the number of propositions dramatically
• No structure to propositions
– Propositions are simply symbols
– Two propositions can represent similar facts yet be denoted by symbols with
no obvious relation
• Resolution resolves one pair of complementary literals at a time:
– P ∨ ¬Q, ¬P ∨ Q does not generate □
– P ∨ ¬Q, ¬P ∨ Q : ¬Q ∨ Q : True
– P ∨ ¬Q, ¬P ∨ Q : ¬P ∨ P : True
KR: Propositional Logic - Proof Theory, Horn Clauses
• If the KB can be represented in certain restricted forms, there are alternative
proof procedures that are more efficient than resolution
• A definite clause is a disjunction of literals of which exactly one is positive;
e.g.
¬P ∨ ¬Q ∨ R
• A Horn clause is a disjunction of literals of which at most one is positive;
e.g.
¬P ∨ ¬Q ∨ R
or
¬P ∨ ¬Q ∨ ¬R
– Horn clauses are a superset of definite clauses
– Horn clauses are closed under resolution
• Horn clauses can be represented as implications:
¬P ∨ ¬Q ∨ R ≡ P ∧ Q ⇒ R
– An implication with a single positive literal on the right side is sometimes
referred to as Horn form
– The left side is called the body and the right side the head
– A Horn clause consisting of a single positive literal is called a fact
True ⇒ P, or just P
– A Horn clause with no positive literal is called a goal
P ⇒ False
• Syntax (BNF):
HornClauseForm     → DefiniteClauseForm | GoalClauseForm
DefiniteClauseForm → (Symbol ∧ Symbol ∧ ... ∧ Symbol) ⇒ Symbol
GoalClauseForm     → (Symbol ∧ Symbol ∧ ... ∧ Symbol) ⇒ False
KR: Propositional Logic - Proof Theory, Horn Clause Proofs
• Horn clause proofs are based on modus ponens
– Given P and P ⇒ Q, we can conclude Q
– This is just a special case of resolution
• Given a KB of Horn clauses and facts, and a literal you want to prove
1. Find Horn clauses whose bodies match facts in the KB
2. Their heads can be added as facts
3. Continue until either
(a) The goal is added
(b) No more facts can be added
KR: Propositional Logic - Proof Theory, Horn Clause Proofs (2)
• Algorithm:
function PL-FC-ENTAILS? (KB, q) returns Boolean
{
for (each clause c in KB) {
count[c] <- number of literals in premise
inferred[c] <- false
}
agenda <- facts in KB
while (!EMPTY(agenda)) {
p <- POP(agenda)
if (p == q)
return TRUE
if (inferred[p] == FALSE) {
inferred[p] <- TRUE
for (each clause c in KB where p in c.PREMISE) {
    count[c] <- count[c] - 1
    if (count[c] == 0)
        agenda <- agenda + c.CONCLUSION
}
}
}
return FALSE
}
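• A Python sketch of the same forward-chaining procedure (representing each definite clause as a (premises, head) pair is my own encoding; facts have empty premises):

```python
from collections import deque

def fc_entails(clauses, q):
    # clauses: list of (premises, head) pairs; a fact has an empty premise tuple
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(head for prem, head in clauses if not prem)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, head) in enumerate(clauses):
                if p in prem:
                    count[i] -= 1
                    if count[i] == 0:      # all premises now known true
                        agenda.append(head)
    return False

# Example KB: facts P, Q; rules (P & Q) => R and R => S
kb = [((), 'P'), ((), 'Q'), (('P', 'Q'), 'R'), (('R',), 'S')]
```

Running fc_entails(kb, 'S') fires (P ∧ Q) ⇒ R once both facts are processed, then R ⇒ S.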
– agenda is a queue that maintains symbols that are true but not yet processed
– count[] keeps track of how many premises of a Horn clause are not yet
known
∗ When count[c] reaches zero, the head of the clause is added to the agenda
– inferred[] prevents an inferred symbol from being added to agenda more
than once
KR: Propositional Logic - Proof Theory, Horn Clause Proofs (3)
• Properties:
– Since inference is based on modus ponens, it is sound
– It is complete
∗ At the end of execution, inferred[i] is true for every inferred symbol Pi ,
and false for the remaining symbols
∗ Every definite clause in the original KB is true in the model represented
by inferred[]
· If this weren't true, then for some a1 ∧ a2 ∧ ... ∧ ak ⇒ b, a1 ∧ a2 ∧ ... ∧ ak
must be true but b false
· But this would contradict the assumption that no more inferences can
be made
∗ Thus, the set of atomic sentences derived by the algorithm at the fixed
point must be a model of the original KB
∗ Any atomic sentence q entailed by KB must be true in all models of KB
∗ Hence, every entailed sentence q must be inferred by the algorithm
• An AND-OR graph can be used to represent a proof
KR: Propositional Logic - Model Checking Using the Davis-Putnam Algorithm
• This algorithm determines whether a sentence is satisfiable
• It does so by enumerating all possible worlds in a recursive, depth-first manner
• It uses a number of techniques to make the search more efficient
1. Early termination
– This is short-circuit evaluation - evaluate only as much of an expression
as needed
– In disjunctions, as soon as one disjunct is known to be true, stop
– In conjunctions, as soon as one conjunct is known to be false, stop
– This eliminates much processing
2. Pure symbol heuristic
– A pure symbol is one that always appears positive, or always negative
(never both), across all clauses
– In a model, assign such symbols the value that makes their literals true
∗ This speeds movement towards a viable model
– Note that as assignments are made, symbols that were not initially pure
can become pure (as clauses containing their complements are eliminated)
3. Unit clause heuristic
– Assign values to symbols in unit clauses before assigning symbols in complex
sentences
– Unit clauses must be true
– These assignments may cause other symbols to effectively become unit
clauses
∗ This cascade is called unit propagation
KR: Propositional Logic - Model Checking Using the Davis-Putnam
Algorithm (2)
• Algorithm:
function DPLL-SATISFIABLE? (s) returns Boolean
{
    clauses <- CONVERT-TO-CNF(s)
    symbols <- EXTRACT-SYMBOLS(s)
    return DPLL(clauses, symbols, {})
}

function DPLL (clauses, symbols, model) returns Boolean
{
    if (ALL-CLAUSES-TRUE(clauses, model))
        return TRUE
    if (FIND-FALSE-CLAUSE(clauses, model))
        return FALSE
    P, value <- FIND-PURE-SYMBOL(symbols, clauses, model)
    if (!NULL(P))
        return DPLL(clauses, symbols - P, UNION(model, {P = value}))
    P, value <- FIND-UNIT-CLAUSE(clauses, model)
    if (!NULL(P))
        return DPLL(clauses, symbols - P, UNION(model, {P = value}))
    P <- FIRST(symbols)
    rest <- REST(symbols)
    return (DPLL(clauses, rest, UNION(model, {P = true})) OR
            DPLL(clauses, rest, UNION(model, {P = false})))
}
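• A runnable Python sketch of DPLL (frozensets of '~'-prefixed literals are my own encoding; the pure symbol heuristic is omitted for brevity):

```python
def lit_value(lit, model):
    # None if the literal's symbol is unassigned in the partial model
    sym = lit.lstrip('~')
    if sym not in model:
        return None
    return model[sym] if not lit.startswith('~') else not model[sym]

def dpll(clauses, symbols, model):
    unknown = []
    for c in clauses:
        vals = [lit_value(l, model) for l in c]
        if any(v is True for v in vals):
            continue                      # early termination: clause already true
        if all(v is False for v in vals):
            return False                  # early termination: clause falsified
        unknown.append(c)
    if not unknown:
        return True                       # every clause satisfied
    # Unit clause heuristic: a not-yet-true clause with one free literal
    for c in unknown:
        free = [l for l in c if lit_value(l, model) is None]
        if len(free) == 1:
            lit = free[0]
            forced = dict(model)
            forced[lit.lstrip('~')] = not lit.startswith('~')
            return dpll(clauses, symbols, forced)
    # Otherwise branch on the first unassigned symbol
    p = next(s for s in symbols if s not in model)
    return (dpll(clauses, symbols, dict(model, **{p: True})) or
            dpll(clauses, symbols, dict(model, **{p: False})))
```

For example, dpll([{P, Q}, {~P}], ...) forces P = false via the unit clause, then Q = true.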
KR: Propositional Logic - Model Checking Using the Davis-Putnam
Algorithm (3)
• Additional techniques:
1. Component analysis
– A point may be reached where there are sets of sentences that share no
unassigned propositions
– Such disjoint sets are called components
– The advantage of identifying components is that each can be solved
independently of the others
2. Value and variable assignments
– Assign true before false
– Use the degree heuristic
∗ Select the variable (proposition) that appears in the greatest number
of sentences
∗ This attempts to reduce the branching factor of the search
3. Intelligent backtracking
– Identify conflicts (bad choices) and store them at the nodes at which they
appeared
– When a path dead-ends, the search can jump directly back to the ancestor
where the choice point occurred and try another branch
– Conflict clause learning remembers conflicts so they are not repeated
elsewhere in the search
4. Random restarts
– As with hill climbing, if the algorithm appears to be stalled, restart using
a different set of random choices
5. Clever indexing
– Use data structures that facilitate accessing symbols, clauses, etc. that
are useful
KR: Propositional Logic - Model Checking Using Local Algorithms
• The general tactic in this approach is to flip the values of symbols one at a
time to try to find a model
• Algorithm:
function WALKSAT (clauses, p, max-flips) returns a model, or failure
{
    symbols <- EXTRACT-SYMBOLS(clauses)
    model <- ASSIGN-VALUES(symbols)
    for (i <- 1 to max-flips) {
        if (SATISFIES(model, clauses))
            return model
        c <- EXTRACT-FALSE-CLAUSES(clauses, model)
        clause <- RANDOM-SELECT(c)
        s <- RANDOM-SELECT-SYMBOL(clause)
        probability <- GENERATE-RANDOM-PROBABILITY()
        if (probability > p) {
            INVERT-VALUE(s, model)
        } else {
            s <- SELECT-MAX-SYMBOL(clause, model)
            INVERT-VALUE(s, model)
        }
    }
    return FAILURE
}
– ASSIGN-VALUES(symbols) randomly assigns true/false to symbols
– SELECT-MAX-SYMBOL(clause, model) identifies the symbol in the clause whose
flipped value would maximize the number of satisfied clauses
• Note that failure does not mean that a model does not exist
– It simply means that finding one may require more attempts than specified
by max-flips
– But if max-flips were set to infinity and a solution did not exist, the
algorithm would never terminate
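• A Python sketch of the same local search (clause and model representations are my own; the random seed is fixed so runs are reproducible):

```python
import random

def satisfied(clause, model):
    # A literal 'P' is true when model['P'] is True; '~P' when it is False
    return any(model[l.lstrip('~')] != l.startswith('~') for l in clause)

def walksat(clauses, p=0.5, max_flips=10000, seed=0):
    rng = random.Random(seed)
    symbols = sorted({l.lstrip('~') for c in clauses for l in c})
    model = {s: rng.choice([True, False]) for s in symbols}
    for _ in range(max_flips):
        false_clauses = [c for c in clauses if not satisfied(c, model)]
        if not false_clauses:
            return model                          # found a satisfying model
        clause = rng.choice(false_clauses)
        members = sorted(l.lstrip('~') for l in clause)
        if rng.random() < p:                      # random-walk step
            s = rng.choice(members)
        else:                                     # greedy step
            def flips_satisfied(sym):
                model[sym] = not model[sym]       # tentatively flip
                n = sum(satisfied(c, model) for c in clauses)
                model[sym] = not model[sym]       # undo
                return n
            s = max(members, key=flips_satisfied)
        model[s] = not model[s]
    return None                                   # failure within max-flips
```

As in the pseudocode, returning None does not prove unsatisfiability.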
KR: Propositional Logic - Constraints and Satisfiability
• An underconstrained problem is one in which there is lots of flexibility in
assignments of values to variables
• An overconstrained problem is one in which there is little flexibility
– This results from many relations among variables, so that each assignment
depends on the values already assigned to related variables
• CNF_k(m, n) represents a CNF sentence that has m clauses and n symbols,
where each clause consists of k literals
E.g., ((P ∨ Q) ∧ (P ∨ ¬R) ∧ (¬P ∨ S) ∧ (T ∨ ¬S)) ∈ CNF_2(4, 5)
• As m increases, it becomes harder to find a model
• The satisfiability threshold conjecture:
For every k ≥ 3, there is a threshold ratio m/n = r_k such that, as
n → ∞, the probability that CNF_k(r·n, n) is satisfiable approaches 1
for r < r_k , and 0 for r > r_k .
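• The conjecture can be explored empirically; a small Python experiment (the generator and brute-force checker are my own, with deliberately tiny n):

```python
import random
from itertools import product

def random_kcnf(k, m, n, rng):
    # m clauses over n symbols (0..n-1); each clause has k distinct
    # symbols with random polarities; a literal is (symbol, polarity)
    return [frozenset((s, rng.choice([True, False]))
                      for s in rng.sample(range(n), k))
            for _ in range(m)]

def satisfiable(clauses, n):
    # Brute-force check over all 2^n assignments (tiny n only)
    for values in product([True, False], repeat=n):
        if all(any(values[s] == pol for s, pol in c) for c in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 8, 10
for r in (1, 3, 6):
    sat = sum(satisfiable(random_kcnf(3, r * n, n, rng), n)
              for _ in range(trials))
    print(f"m/n = {r}: {sat}/{trials} satisfiable")
```

The satisfiable fraction should fall as the ratio m/n grows past the 3-CNF threshold.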
KR: Propositional Logic - Agents and Representation
• To implement an agent using propositional logic, need
1. Basic facts that describe the world
2. Set of rules that describe what can be deduced from other facts
• These sentences will be specific to a given environment
• Since the agent is immersed in the world, and the world changes over time,
– Time-variant propositions must be interpreted in terms of a time component
– Such time-varying aspects of the world are called fluents
• Issues
1. Representing location-specific perceptual information
– If a percept is sensed in location (x, y), and a proposition P is associated
with that location as a result of the perception, we represent it as
AtAgent_x,y^t ⇒ (percept^t ⇔ P_x,y)
2. Transition model
– Need a transition model that represents changes in the world as a result
of the agent’s actions
– Basic representation:
(preconditions for action @ time t) ∧ (action @ time t) ⇒
(results @ time t + 1)
– The results include propositions made true by the action, as well as those
negated by the action
KR: Propositional Logic - Agents and Representation (2)
3. The frame problem
– An issue with the above is that it does not take into account those things
that remain unchanged by the action
∗ When an action takes the world from time t to time t + 1, the KB
will be updated with the changes made by the action, but only those
changes will be made
I.e., new propositions time-stamped with t + 1 will appear only for
those conditions affected by the action
∗ Conditions unaffected by the action will not be added to the KB with
updated time stamps
∗ This will preclude inferences as time elapses, since actions can only be
executed at time t if their preconditions are true at time t
– This is known as the frame problem
∗ The specific problem described above is the representational frame problem
∗ The inferential frame problem deals with projecting the results of a
plan
4. Solutions to the frame problem
(a) Frame axioms
– Frame axioms indicate those conditions that remain unchanged when
an action occurs
– General form:
Action^t ⇒ (P^t ⇔ P^{t+1})
– This expresses the fact that the condition represented by P is unchanged
(it holds at time t + 1 iff it held at time t) when action Action executes at
time t
– The problem is that there are a very large number of frame axioms
needed
∗ Given m actions and n fluents, O(mn) frame axioms are required
∗ Since the real world exhibits locality, actions only affect a small
number k of fluents, so the above becomes O(mk)
KR: Propositional Logic - Agents and Representation (3)
(b) Successor state axioms
– This approach focuses on the fluents, rather than the actions
– General form:
P^{t+1} ⇔ (Action^t ∨ (P^t ∧ ¬Action^t))
– This expresses the fact that the condition represented by P holds at
time t + 1 if it is the result of action Action at time t, or if it was true
at time t and the action did not occur
5. Qualification problem
– In the real world, there are many things that could prevent an action
from being carried out
– As discussed in problem formulation, it is up to the designer to decide
at what granularity a problem should be represented
– The qualification problem is concerned with identifying the preconditions
that must hold in order for an action to be triggered
KR: Propositional Logic - Hybrid Agents
• The hybrid agent model is a knowledge-based agent that combines search and
inference
• Overview:
– The agent maintains a KB and current plan, both of which are updated as
time passes
∗ Inference is used to determine safe rooms, etc. , using propositional logic
∗ A∗ search is used for route planning
– The KB initially contains only atemporal sentences
– On each time increment, a new percept sentence is added to the KB, along
with any time-dependent axioms
– Based on the current model, a plan is created using A∗ search
• Algorithm:
– Note that this algorithm is specific to the Wumpus World
– It does NOT represent a generic agent architecture
KR: Propositional Logic - Hybrid Agents (2)
function HYBRID-WUMPUS-AGENT (percept) returns action
{
TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
for (each sentence s in KB)
if (s applies to time t)
TELL(KB, s)
safe <- {[x, y]: ASK(KB, OKXYatT) = true}
if (ASK(KB, GlitterAtT) = true)
plan <- [Grab] + PLAN-ROUTE(current, {[1, 1]}, safe) + [Climb]
if(EMPTY(plan)) {
unvisited <- {[x, y]: ASK(KB, LXYatT) = false for all tPrime <= t}
plan <- PLAN-ROUTE(current, INTERSECT(unvisited, safe), safe)
}
if(EMPTY(plan) && ASK(KB, HaveArrowAtT) = true) {
possibleWumpus <- {[x, y]: ASK(KB, !WXY) = false}
plan <- PLAN-SHOT(current, possibleWumpus, safe)
}
if(EMPTY(plan)) {
not_unsafe <- {[x, y]: ASK(KB, !OKXYatT) = false}
plan <- PLAN-ROUTE(current, INTERSECT(unvisited, not_unsafe), safe)
}
if(EMPTY(plan))
plan <- PLAN-ROUTE(current, {[1, 1]}, safe) + [Climb]
action <- POP(plan)
TELL(KB, MAKE-ACTION-SENTENCE(action, t))
t <- t + 1
return action
}
function PLAN-ROUTE (current, goals, allowed) returns action sequence
{
problem <- ROUTE-PROBLEM(current, goals, allowed)
return ASTAR-GRAPH-SEARCH(problem)
}
KR: Propositional Logic - Hybrid Agents (3)
• Note that there is a hierarchy of built-in goals:
1. Get the gold
2. Find a safe unexplored room
3. Shoot the Wumpus
4. Risk an unsafe room
5. Exit
KR: Propositional Logic - Agents, Logical State Estimation
• Major problem with hybrid agent:
– As time elapses, inference takes longer and longer
– This is due to the fact that
1. The KB becomes larger and larger
2. Reasoning always starts at time 0
• To preclude this, can save the results of inference so the agent doesn’t have to
start reasoning from time zero every iteration
• Belief state can be used to represent past history and the current possible states
– Use a sentence, not explicit state:
P1 ∧ P2 ∧ (P3 ∨ P4 ∨ P5 ) ∧ P6 ∧ ...
– The individual conjuncts represent those facts that are true in all physical
states of the belief state at time t
– The disjunction represents those propositions that might be true in some
physical state of the belief state
– The disjunct represents the multiple state aspect of the belief state
• The main issue with this representation is size
– Given n fluents, there are 2^n physical states
– Since the belief state space is the power set of the physical state space, the
number of states in the belief state space is
2^(2^n)
KR: Propositional Logic - Agents, Logical State Estimation (2)
• One approach that can be used to estimate the belief state is to represent it as
1-CNF
– E.g., P1 ∧ P2 ∧ P3 ∧ ....
– Try to prove P and ¬P for each symbol (and for each atemporal symbol not
yet known), given the belief state at t − 1
– The conjunction of provable literals is the new belief state
– The previous belief state is discarded
– Some info may be lost
– Since
∗ the initial belief state is true, and
∗ each state generated from a true state is true,
the derived state is true, as it is derived from a sequence of true states,
each sequentially derived from its predecessor
– The physical states represented by a belief state include all physical states
that are possible given the agent's percept sequence
– The 1-CNF belief state acts as a conservative approximation of the exact
belief state
∗ It is a superset of the exact belief state
KR: Propositional Logic - Inference-based Agents
• This agent model uses inference exclusively (unlike the hybrid model, which
incorporates search)
• The basic steps:
1. Construct a sentence that includes
(a) init0 : A sentence that specifies the initial state
(b) transition1 , transition2 , ..., transitiont : Successor state axioms for all
actions up to time t
(c) An assertion that the goal is achieved at time t
2. Call SAT-SOLVER with the above as input
(a) If a model is found, the goal can be achieved
3. If a model is found, extract the actions that are true
(a) These represent the plan
• Algorithm:
function SATPLAN (init, transition, goal, tMax)
returns solution or failure
{
for (t <- 0 to tMax) {
    cnf <- TRANSLATE-TO-SAT(init, transition, goal, t)
model <- SAT-SOLVER(cnf)
if (!NULL(model))
return EXTRACT-SOLUTION(model)
}
return failure
}
– tMax is used to place an upper bound on the length of the solution
– Not applicable to partially observable environments
∗ Propositions that are not known would be set to whatever value required
to establish a model
KR: Propositional Logic - Inference-based Agents (2)
• It must be recognized that satisfiability (as used in the above algorithm) is
different from entailment (as embodied by ASK)
– The main issue - as noted above - is that if a proposition's truth value is
unspecified, a SAT solver will assign it whatever value is needed to
create a model
– The ramification of this is that propositions must be sufficiently constrained
so that their values exactly model the world
• Various issues in representation:
1. Locational info
(a) Need to represent the fact that something can only be in one place at
one time
(b) E.g., AtX1Y1^t ⇔ (¬AtX2Y2^t ∧ ¬AtX3Y3^t ∧ ... ∧ ¬AtXnYn^t)
2. Relational info
(a) Many real-world relationships preclude others
(b) E.g., if block R is on block G at time t, it cannot be on block B at time t
If the agent is facing west at time t, it cannot be facing east at time t
If the agent is wearing chainMail at time t, it cannot be wearing
plateArmor at time t
(c) Such situations are represented in the same manner as location info
above
3. Actions
(a) Actions must have sufficient preconditions specified to preclude their
simultaneous execution (unless some actions can be carried out in tandem)
(b) Simultaneous actions are prevented using exclusion axioms for every pair
of actions that cannot occur at the same time
(c) E.g., ¬Action1 ∨ ¬Action2
KR: Propositional Logic - Inference-based Agents (3)
• A typical KB will have many axioms that represent the same information, just
for different locations, times, or objects
– One would want a module that generates these automatically rather than
hand-coding them
• Evaluation
– Propositional logic can be used to represent decision-making in an agent
– The problem is that there must be a symbol for every fact that needs to be
represented, and an axiom for every situation
∗ E.g., We need an axiom to represent the fact that an agent can only be
at one place at one time for each distinct time, and a distinct proposition
for each location for each time!
– What propositional logic cannot do is represent the commonality of such
representations
– This leads to a proliferation of propositions and axioms - far too many for
practical use
– What is needed is a more expressive language