The Chinese Room Argument Part II

Joe Lau
Philosophy
HKU
The issues
• Certain computations are sufficient for cognition (computational sufficiency).
  – Objection: the Chinese room argument
  – Evaluation: not valid.
• A more general argument:
  – The argument from syntax and semantics
The argument
• Computer programs are formal (syntactic).
• Human minds have mental contents (semantics).
• Syntax is neither constitutive of nor sufficient for semantics.
• Conclusion: Programs are neither constitutive of nor sufficient for minds.
Initial comments
The argument:
1. Programs are formal.
2. Minds have contents.
3. Formal syntax not enough for contents.
Conclusion:
4. Programs not enough for minds.

Comment #1: The argument is valid. That is, if the premises are true, the conclusion must also be true.
Comment #2: The second premise is obviously true. To have a mind, one must have mental states with content. Thoughts, beliefs, desires all have content (intentionality, aboutness). This is Brentano's "mark of the mental".
Comment #3: The Chinese room argument is supposed to provide independent support of premise #3.

So we have to decide whether premises 1 to 3 are true or not.
First premise: "Programs are formal"
• True in the sense that:
  – Symbols are defined independently of meaning.
  – Computational operations are defined without reference to the meaning of symbols.
• False in the sense that:
  – Programs cannot / do not contain meaningful symbols.
  – The function of symbols is to encode content!
Third premise: "Syntax not sufficient for semantics"
• Question: Do the symbols have meaning or not?
  – If so, then there is content / semantics.
  – The symbols in the Chinese room do have content.
  – Symbols in AI programs can have assigned content.
  – So programs with meaningful symbols might still be sufficient for minds.
What is "semantics" for Searle?
"Having the symbols by themselves … is not sufficient for having the semantics. Merely manipulating symbols is not enough to guarantee knowledge of what they mean."
• So having meaningful symbols in a system is not enough for mental content.
• The system must know what those symbols mean.
But why?
Response
• Mental representations (symbols) are used to explain intentional mental states.
  – E.g. X believes that P = X has a mental representation M of type B with content P.
  – X is not required to "understand" M.
• They cannot do that if they themselves have to be understood or interpreted.
  – Infinite regress otherwise.
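The functional picture above can be illustrated with a toy sketch. All names here (MentalRep, Agent, belief_box) are invented for illustration and are not part of any actual cognitive model: the point is only that a system can store representations whose content is assigned from outside, and operate on them purely formally, without itself "interpreting" them.

```python
# Toy sketch of the representational theory of belief (names are invented).

class MentalRep:
    """A symbol token; its 'content' is an externally assigned label,
    not something the system itself interprets."""
    def __init__(self, content):
        self.content = content

class Agent:
    def __init__(self):
        self.belief_box = []  # representations of type B (beliefs)

    def add_belief(self, rep):
        # A purely formal operation: store the token.
        # The agent is not required to "understand" rep.
        self.belief_box.append(rep)

    def believes(self, p):
        # "X believes that P" = X has a representation with content P
        # stored as a belief.
        return any(rep.content == p for rep in self.belief_box)

agent = Agent()
agent.add_belief(MentalRep("it is raining"))
print(agent.believes("it is raining"))   # True
print(agent.believes("it is snowing"))   # False
```

The sketch shows why no regress arises on this picture: `believes` is defined by the formal role of the stored token, so no further act of understanding inside the agent is needed.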
Summary
• Searle thinks that the symbols in a system must be understood / interpreted by the system to generate meaning / understanding.
• This begs the question against the thesis of computational sufficiency:
  – Understanding is having symbols that encode information in the right way.
  – The symbols do not require further understanding or interpretation.
Remaining issues
• Suppose formal operations on meaningful symbols can be sufficient for mental states.
• Q1: Where do the meanings of symbols come from?
• Q2: Can formal operations be sufficient to give symbols meaning?
Where does meaning come from?
• The meaning of words (linguistic meaning) depends on conventions governing their use.
• Words are "voluntary signs" (John Locke).
A different theory is needed
• The theory of linguistic meaning does not apply to mental representations:
  – There are no conventions governing the use of mental representations.
  – Presumably we cannot change the meanings of mental representations arbitrarily through conventions.