Proof Methods for Propositional Logic
Russell and Norvig Chapter 7

Logical equivalence
Two sentences are logically equivalent iff they are true in the same models: α ≡ β iff α ⊨ β and β ⊨ α
Validity and satisfiability
- A sentence is valid (a tautology) if it is true in all models
  e.g., True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
- Validity is connected to inference via the Deduction Theorem:
  KB ⊨ α if and only if (KB ⇒ α) is valid
- A sentence is satisfiable if it is true in some model
  e.g., A ∨ B
- A sentence is unsatisfiable if it is false in all models
  e.g., A ∧ ¬A
- Satisfiability is connected to inference via the following:
  KB ⊨ α if and only if (KB ∧ ¬α) is unsatisfiable
  (known as proof by contradiction)

Inference rules
- Modus Ponens:
  A ⇒ B, A
  --------
  B
  Example: "raining implies soggy courts", "raining"; infer "soggy courts"
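These definitions can be checked mechanically by enumerating models. A minimal sketch in Python; representing sentences as Python functions of a model is my own encoding, not the slides':

```python
from itertools import product

def all_models(symbols):
    """Every truth assignment over the given symbols."""
    for values in product([False, True], repeat=len(symbols)):
        yield dict(zip(symbols, values))

def is_valid(sentence, symbols):
    """Valid (a tautology): true in all models."""
    return all(sentence(m) for m in all_models(symbols))

def is_satisfiable(sentence, symbols):
    """Satisfiable: true in some model."""
    return any(sentence(m) for m in all_models(symbols))

def entails(kb, alpha, symbols):
    """KB |= alpha: alpha is true in every model where KB is true."""
    return all(alpha(m) for m in all_models(symbols) if kb(m))

# A v ~A is valid; A ^ ~A is unsatisfiable.
print(is_valid(lambda m: m["A"] or not m["A"], ["A"]))         # True
print(is_satisfiable(lambda m: m["A"] and not m["A"], ["A"]))  # False

# Deduction Theorem: KB |= alpha iff (KB => alpha) is valid.
kb = lambda m: (not m["A"] or m["B"]) and m["A"]   # (A => B) ^ A
alpha = lambda m: m["B"]
print(entails(kb, alpha, ["A", "B"]))                           # True
print(is_valid(lambda m: (not kb(m)) or alpha(m), ["A", "B"]))  # True
```

The last two lines illustrate the Deduction Theorem directly: entailment of α by KB coincides with validity of KB ⇒ α.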
Example
KB: {A ⇒ B, B ⇒ C, A}
Is C entailed? Yes:

Given      | Rule         | Inferred
A ⇒ B, A   | Modus Ponens | B
B ⇒ C, B   | Modus Ponens | C

Inference rules (cont.)
- Modus Tollens:
  A ⇒ B, ¬B
  ---------
  ¬A
  Example: "raining implies soggy courts", "courts not soggy"; infer "not raining"
- And-elimination:
  A ∧ B
  -----
  A
Reminder: The Wumpus World
- Performance measure: gold +1000, death -1000; -1 per step, -10 for using the arrow
- Environment:
  - Squares adjacent to the wumpus are smelly
  - Squares adjacent to a pit are breezy
  - Glitter iff gold is in the same square
  - Shooting kills the wumpus if you are facing it
  - Shooting uses up the only arrow
  - Grabbing picks up the gold if in the same square
- Sensors: Stench, Breeze, Glitter, Bump
- Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot

Inference in the wumpus world
Given:
1. ¬B1,1
2. B1,1 ⇔ (P1,2 ∨ P2,1)
Let's make some inferences:
1. (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)  (by definition of the biconditional)
2. (P1,2 ∨ P2,1) ⇒ B1,1  (And-elimination)
3. ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)  (equivalence with contrapositive)
4. ¬(P1,2 ∨ P2,1)  (Modus Ponens)
5. ¬P1,2 ∧ ¬P2,1  (De Morgan's rule)
6. ¬P1,2  (And-elimination)
7. ¬P2,1  (And-elimination)
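The conclusion of this derivation can be cross-checked by brute force: enumerate all models of the two premises and confirm that ¬P1,2 and ¬P2,1 hold in every one (a sanity check I've added, not part of the slides):

```python
from itertools import product

# B11: breeze in (1,1); P12, P21: pits in (1,2) and (2,1).
def models():
    for b11, p12, p21 in product([False, True], repeat=3):
        yield {"B11": b11, "P12": p12, "P21": p21}

# The premises: ~B1,1 and B1,1 <=> (P1,2 v P2,1).
def kb(m):
    return (not m["B11"]) and (m["B11"] == (m["P12"] or m["P21"]))

# In every model of the premises, both pits are absent,
# matching steps 6 and 7 of the derivation.
assert all(not m["P12"] and not m["P21"] for m in models() if kb(m))
print("KB entails ~P1,2 and ~P2,1")
```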
More inference
Recall that when we were at (2,1) we could not decide on a safe move, so we backtracked and explored (1,2), which yielded ¬B1,2. This yields ¬P2,2 ∧ ¬P1,3.
Now we can consider the implications of B2,1:
1. ¬P2,2, ¬P1,1
2. B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
3. B2,1 ⇒ (P1,1 ∨ P2,2 ∨ P3,1)  (biconditional elimination)
4. P1,1 ∨ P2,2 ∨ P3,1  (Modus Ponens)
5. P1,1 ∨ P3,1  (resolution rule)
6. P3,1  (resolution rule)

[Figure: two wumpus-world grids, (a) and (b), showing the agent's progress. Legend: A = Agent, B = Breeze, G = Glitter/Gold, OK = safe square, P = Pit, S = Stench, V = Visited, W = Wumpus.]

Resolution
The resolution rule: if there is a pit in (1,1) or (3,1), and it's not in (1,1), then it's in (3,1).
P1,1 ∨ P3,1, ¬P1,1
------------------
P3,1
Resolution
Unit Resolution inference rule:
l1 ∨ … ∨ lk,  m
-----------------------------
l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk
where li and m are complementary literals.

Full Resolution:
l1 ∨ … ∨ lk,  m1 ∨ … ∨ mn
-------------------------------------------------------------
l1 ∨ … ∨ li-1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj-1 ∨ mj+1 ∨ … ∨ mn
where li and mj are complementary literals.

Resolution is sound
For simplicity, consider clauses of length two:
l1 ∨ l2,  ¬l2 ∨ l3
------------------
l1 ∨ l3
To derive the soundness of resolution, consider the values l2 can take:
- If l2 is True, then since we know that ¬l2 ∨ l3 holds, it must be the case that l3 is True.
- If l2 is False, then since we know that l1 ∨ l2 holds, it must be the case that l1 is True.
Either way, the resolvent l1 ∨ l3 holds.
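The resolution rule itself is a few lines of code. A sketch, under the assumption that clauses are represented as frozensets of string literals, with '~' marking negation:

```python
def complement(lit):
    """'A' <-> '~A'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(ci, cj):
    """All resolvents of two clauses: for each complementary pair
    (li in ci, ~li in cj), drop the pair and union the remaining literals."""
    return [(ci - {lit}) | (cj - {complement(lit)})
            for lit in ci if complement(lit) in cj]

# Unit resolution: (l1 v l2), ~l2  |-  l1
print(resolve(frozenset({"l1", "l2"}), frozenset({"~l2"})))
# The soundness example: (l1 v l2), (~l2 v l3)  |-  (l1 v l3)
print(resolve(frozenset({"l1", "l2"}), frozenset({"~l2", "l3"})))
# Resolving A with ~A yields the empty clause.
print(resolve(frozenset({"A"}), frozenset({"~A"})))
```

Representing a clause as a set also gives the "flattening" of duplicate literals for free.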
Resolution
Properties of the resolution rule:
- Sound
- Complete
Resolution can be applied only to disjunctions of literals. How can it be complete? It turns out any knowledge base can be expressed as a conjunction of disjunctions (conjunctive normal form, CNF).
Example: (A ∨ ¬B) ∧ (B ∨ ¬C ∨ ¬D)

Conversion to CNF
B1,1 ⇔ (P1,2 ∨ P2,1)
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α):
   (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β:
   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. Move ¬ inwards using De Morgan's rules and double negation:
   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1)
4. Apply the distributive law (∨ over ∧) and flatten:
   (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
Converting to CNF
Every sentence can be converted to CNF:
1. Replace α ⇔ β with (α ⇒ β) ∧ (β ⇒ α)
2. Replace α ⇒ β with (¬α ∨ β)
3. Move ¬ "inward":
   - Replace ¬(¬α) with α
   - Replace ¬(α ∧ β) with (¬α ∨ ¬β)
   - Replace ¬(α ∨ β) with (¬α ∧ ¬β)
4. Replace (α ∨ (β ∧ γ)) with (α ∨ β) ∧ (α ∨ γ)
While converting expressions, note that
- ((α ∨ β) ∨ γ) is equivalent to (α ∨ β ∨ γ)
- ((α ∧ β) ∧ γ) is equivalent to (α ∧ β ∧ γ)

Converting to CNF (II)
Why does this algorithm work?
- Because ⇒ and ⇔ are eliminated
- Because ¬ is always directly attached to literals
- Because what is left must be ∧'s and ∨'s, and they can be distributed over each other to make CNF clauses

Using resolution
Even if our KB entails a sentence α, resolution is not guaranteed to produce α. To get around this we use proof by contradiction, i.e., we show that KB ∧ ¬α is unsatisfiable. Resolution is complete with respect to proof by contradiction.

Example of proof by resolution
KB = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1
in CNF: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1) ∧ ¬B1,1
α = ¬P1,2
Resolving the clauses of KB ∧ ¬α yields the empty clause, so KB ⊨ α. The empty clause is False (a disjunction is True only if at least one of its disjuncts is True).
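The four conversion steps translate directly into code. A sketch, under the assumption that sentences are nested tuples ('~', s), ('&', a, b), ('|', a, b), ('=>', a, b), ('<=>', a, b) over string symbols; this encoding is mine, not the slides':

```python
def eliminate_arrows(s):
    """Steps 1-2: rewrite <=> and => in terms of ~, &, |."""
    if isinstance(s, str):
        return s
    if s[0] == "~":
        return ("~", eliminate_arrows(s[1]))
    a, b = eliminate_arrows(s[1]), eliminate_arrows(s[2])
    if s[0] == "<=>":                       # (a => b) & (b => a), arrows removed
        return ("&", ("|", ("~", a), b), ("|", ("~", b), a))
    if s[0] == "=>":
        return ("|", ("~", a), b)
    return (s[0], a, b)

def push_not(s):
    """Step 3: move ~ inward with De Morgan and double negation."""
    if isinstance(s, str):
        return s
    if s[0] == "~":
        inner = s[1]
        if isinstance(inner, str):
            return s
        if inner[0] == "~":                 # ~(~a) -> a
            return push_not(inner[1])
        if inner[0] == "&":                 # ~(a & b) -> ~a | ~b
            return ("|", push_not(("~", inner[1])), push_not(("~", inner[2])))
        if inner[0] == "|":                 # ~(a | b) -> ~a & ~b
            return ("&", push_not(("~", inner[1])), push_not(("~", inner[2])))
    return (s[0], push_not(s[1]), push_not(s[2]))

def distribute(s):
    """Step 4: distribute | over & to reach CNF."""
    if isinstance(s, str) or s[0] == "~":
        return s
    a, b = distribute(s[1]), distribute(s[2])
    if s[0] == "|":
        if not isinstance(a, str) and a[0] == "&":
            return ("&", distribute(("|", a[1], b)), distribute(("|", a[2], b)))
        if not isinstance(b, str) and b[0] == "&":
            return ("&", distribute(("|", a, b[1])), distribute(("|", a, b[2])))
    return (s[0], a, b)

def to_cnf(s):
    return distribute(push_not(eliminate_arrows(s)))

# The worked example: B1,1 <=> (P1,2 v P2,1)
print(to_cnf(("<=>", "B11", ("|", "P12", "P21"))))
```

Up to associativity of ∧ and ∨, the result matches the three clauses derived on the previous slide.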
Automated Theorem Proving
How do we automate the inference process?
- Step 1: assume the negation of the consequent and add it to the knowledge base
- Step 2: convert the KB to CNF, i.e., a collection of disjunctive clauses
- Step 3: repeatedly apply resolution until:
  - it produces an empty clause (contradiction), in which case the consequent is proven, or
  - no more clauses can be resolved, in which case the consequent cannot be proven

Resolution Algorithm
function PL-RESOLUTION(KB, α) returns true or false
  inputs: KB, the knowledge base, a sentence in propositional logic
          α, the query, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of KB ∧ ¬α
  new ← {}
  loop do
    for each pair of clauses Ci, Cj in clauses do
      resolvents ← PL-RESOLVE(Ci, Cj)
      if resolvents contains the empty clause then return true
      new ← new ∪ resolvents
    if new ⊆ clauses then return false
    clauses ← clauses ∪ new
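The pseudocode above can be transcribed almost line for line. A sketch, assuming clauses are frozensets of string literals with '~' for negation, and assuming CNF conversion has already been done:

```python
from itertools import combinations

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def pl_resolve(ci, cj):
    """All resolvents of two clauses."""
    return [(ci - {l}) | (cj - {complement(l)})
            for l in ci if complement(l) in cj]

def pl_resolution(kb_clauses, neg_alpha_clauses):
    """True iff KB |= alpha, given the CNF clauses of KB and of ~alpha."""
    clauses = set(kb_clauses) | set(neg_alpha_clauses)
    new = set()
    while True:
        for ci, cj in combinations(clauses, 2):
            for r in pl_resolve(ci, cj):
                if not r:                    # empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:                   # saturated: alpha not provable
            return False
        clauses |= new

# The wumpus KB from the proof-by-resolution example, in CNF:
kb = [frozenset({"~B11", "P12", "P21"}), frozenset({"~P12", "B11"}),
      frozenset({"~P21", "B11"}), frozenset({"~B11"})]
print(pl_resolution(kb, [frozenset({"P12"})]))   # alpha = ~P12: True
print(pl_resolution(kb, [frozenset({"~P12"})]))  # alpha = P12: False
```

Since the number of distinct clauses over a finite set of symbols is finite, the saturation loop always terminates.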
Another example
- If it rains, I get wet.
- If I'm wet, I get mad.
- Given that I'm not mad, prove that it's not raining.

Inference for Horn clauses
Horn Form (special form of CNF):
- KB = conjunction of Horn clauses
- Horn clause = a propositional symbol, or (conjunction of symbols) ⇒ symbol
- Example: C ∧ (B ⇒ A) ∧ ((C ∧ D) ⇒ B)
Modus Ponens is a natural way to make inferences in Horn KBs:
α1, …, αn,  α1 ∧ … ∧ αn ⇒ β
---------------------------
β
Successive application of Modus Ponens leads to algorithms that are sound and complete, and run in linear time.
Forward chaining
Idea: fire any rule whose premises are satisfied in the KB, and add its conclusion to the KB, until the query is found.

Forward chaining example
[Figure: a sequence of slides steps through forward chaining on an example Horn KB.]
Forward chaining algorithm
function PL-FC-ENTAILS?(KB, q) returns true or false
  inputs: KB, the knowledge base, a set of propositional definite clauses
          q, the query, a propositional symbol
  count ← a table, where count[c] is the number of symbols in c's premise
  inferred ← a table, where inferred[s] is initially false for all symbols
  agenda ← a queue of symbols, initially the symbols known to be true in KB
  while agenda is not empty do
    p ← POP(agenda)
    if p = q then return true
    if inferred[p] = false then
      inferred[p] ← true
      for each clause c in KB where p is in c.PREMISE do
        decrement count[c]
        if count[c] = 0 then add c.CONCLUSION to agenda
  return false
Forward chaining is sound and complete for Horn KBs.
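PL-FC-ENTAILS? translates almost directly, assuming a definite clause is a (premise-list, conclusion) pair and a known fact has an empty premise list. The KB below is the slides' Horn example plus an extra fact D (my addition, so the chain can fire):

```python
from collections import deque

def pl_fc_entails(kb, q):
    """kb: list of (premises, conclusion) pairs; a fact is ([], symbol).
    Returns True iff q can be derived by forward chaining."""
    count = {i: len(prem) for i, (prem, _) in enumerate(kb)}
    inferred = set()
    agenda = deque(concl for prem, concl in kb if not prem)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p not in inferred:
            inferred.add(p)
            for i, (prem, concl) in enumerate(kb):
                if p in prem:
                    count[i] -= 1          # one fewer premise outstanding
                    if count[i] == 0:      # all premises proved: fire the rule
                        agenda.append(concl)
    return False

# C ^ (B => A) ^ ((C ^ D) => B), plus the extra fact D.
kb = [([], "C"), ([], "D"), (["B"], "A"), (["C", "D"], "B")]
print(pl_fc_entails(kb, "A"))   # True: C, D fire B; B fires A
print(pl_fc_entails(kb, "E"))   # False
```

Each symbol is processed at most once and each clause's counter is decremented at most once per premise symbol, which is what makes the algorithm linear time.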
Backward chaining
Idea: work backwards from the query q:
- check if q is known already, or
- prove by backward chaining all premises of some rule concluding q
Avoid loops: check if a new subgoal is already on the goal stack.
Avoid repeated work: check if a new subgoal
- has already been proved true, or
- has already failed.

Backward chaining example
[Figure: a sequence of slides steps through backward chaining on the same example KB.]
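A minimal recursive sketch of backward chaining over the same (premise-list, conclusion) clause representation; for brevity it implements only the loop check via the goal stack, not the caching of already-proved or already-failed subgoals:

```python
def pl_bc_entails(kb, q, goal_stack=frozenset()):
    """kb: list of (premises, conclusion) pairs; a fact is ([], symbol).
    Works backwards from q, proving all premises of some rule concluding q."""
    if q in goal_stack:                     # q is already a pending subgoal: loop
        return False
    for prem, concl in kb:
        if concl == q and all(pl_bc_entails(kb, p, goal_stack | {q})
                              for p in prem):
            return True                     # all premises proved (facts have none)
    return False

# Same Horn KB as before (with the extra fact D, my addition).
kb = [([], "C"), ([], "D"), (["B"], "A"), (["C", "D"], "B")]
print(pl_bc_entails(kb, "A"))   # True: A <= B <= C ^ D
print(pl_bc_entails(kb, "P"))   # False: no rule concludes P
```

Note how the search touches only clauses relevant to the goal, which is the source of the sub-linear behavior discussed below.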
Forward vs. backward chaining
- FC is data-driven: it may do lots of work that is irrelevant to the goal.
- BC is goal-driven, appropriate for problem-solving,
  e.g., What courses do I need to take to graduate? How do I get into a PhD program?
- The complexity of BC can be much less than linear in the size of the KB.
Efficient propositional inference
Two families of efficient algorithms for satisfiability:
- Complete backtracking search algorithms:
  - DPLL algorithm (Davis, Putnam, Logemann, Loveland)
- Local search algorithms:
  - WalkSAT algorithm:
    - Start with a random assignment
    - At each iteration, pick an unsatisfied clause and pick a symbol in the clause to flip; alternate between:
      - picking the symbol that minimizes the number of unsatisfied clauses
      - picking a random symbol
Is WalkSAT sound? Complete?

In the wumpus world
A wumpus-world agent using propositional logic:
¬P1,1
¬W1,1
Bx,y ⇔ (Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y)
Sx,y ⇔ (Wx,y+1 ∨ Wx,y-1 ∨ Wx+1,y ∨ Wx-1,y)
W1,1 ∨ W1,2 ∨ … ∨ W4,4
¬W1,1 ∨ ¬W1,2
¬W1,1 ∨ ¬W1,3
…
⇒ 64 distinct proposition symbols, 155 sentences
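WalkSAT itself fits in a few lines. A sketch, assuming clauses are sets of string literals with '~' for negation and p is the random-walk probability. It is sound in the sense that any model it returns really satisfies the clauses, but not complete: a None result does not prove unsatisfiability.

```python
import random

def walksat(clauses, p=0.5, max_flips=10000):
    """clauses: list of sets of string literals ('A' or '~A').
    Returns a satisfying model (dict symbol -> bool) or None."""
    symbols = sorted({l.lstrip("~") for c in clauses for l in c})
    model = {s: random.choice([True, False]) for s in symbols}

    def sat(clause):
        # A literal holds iff its symbol's value disagrees with the negation.
        return any(model[l.lstrip("~")] != l.startswith("~") for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return model                       # all clauses satisfied
        clause = random.choice(unsat)
        syms = sorted({l.lstrip("~") for l in clause})
        if random.random() < p:
            sym = random.choice(syms)          # random-walk step
        else:
            def broken(s):                     # unsatisfied count after flipping s
                model[s] = not model[s]
                n = sum(not sat(c) for c in clauses)
                model[s] = not model[s]        # undo the tentative flip
                return n
            sym = min(syms, key=broken)        # greedy step
        model[sym] = not model[sym]
    return None

random.seed(0)
m = walksat([{"A", "B"}, {"~A", "C"}, {"~C", "~B", "A"}])
print(m is not None)
```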
Summary
- Logical agents apply inference to a knowledge base to derive new information and make decisions.
- Basic concepts of logic:
  - syntax: formal structure of sentences
  - semantics: truth of sentences with respect to models
  - entailment: truth of one sentence given another
  - inference: deriving sentences from other sentences
  - soundness: derivations produce only entailed sentences
  - completeness: derivations can produce all entailed sentences
- Methods for inference:
  - Resolution is complete for propositional logic
  - Forward and backward chaining are linear-time and complete for Horn clauses
- General observation: propositional logic lacks expressive power.