FUNCTIONALISM

Let M be some mental property. Then it seems:
- Something might lack M but behave M-ishly (puppets, fakers, … robots?).
- Something might have M without behaving M-ishly (Spartans with pain).
- Something without a brain might have (mental state) M (Martians, angels, … robots?).

F is a functional property: F is defined solely by its bearers' causal relations, manifested or dispositional; that is, by the bearer's "F-role" in a causal network.
(These causal relations might not be behavioral/environmental: "behaviorism" requires all of them to be.)
(These causal relations might not be teleological/evaluative "functions": "teleofunctionalism" permits these, even if they are not causal dispositions.)

Functionalism about M: o has M = o has the causal relations defining M = o plays the M-role.

So on functionalism, all it takes for o to have M is for o to play the M-role, no matter what o is ultimately made of (meat, soul, metal, …). (See the toy sketch after the excerpt below.)

From Terry Bisson, "They're Made Out of Meat":

"They're made out of meat."
"Meat?" …
"There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, [and] probed them all the way through. They're completely meat." …
"Thinking meat! You're asking me to believe in thinking meat!"
"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! … And they've been trying to get in touch with us for almost a hundred of their years." …
"We're supposed to talk to meat?"
"That's the idea. That's the message they're sending out by radio. 'Hello. Any[body] out there? Any[body] home?' That sort of thing."
"They actually do talk, then. They use words, ideas, concepts?"
"Oh, yes. Except they do it with meat."
"I thought you just told me they used radio."
"They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."
"Omigod. Singing meat. This is altogether too much. So what do you advise?" …
"Officially, we are required to contact, welcome, and log in any and all sentient races or multi-beings in [this] quadrant [of the universe], without prejudice, fear, or favor. Unofficially, I advise that we erase the records and forget the whole thing." …
"I agree one hundred percent. What's there to say? 'Hello, meat. How's it going?' But will this work? How many planets are we dealing with here?"
"Just one. They can travel to other planets in special meat containers, but they can't live on them. And being meat, they [can] only travel through C space. Which limits them to the speed of light and makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact."
"So we just pretend there's no one home in the universe. … Case closed. Any others? Anyone interesting on that side of the galaxy?"
"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotations ago, wants to be friendly again."
"They always come around."
"And why not? Imagine how unbearably, how unutterably cold the universe would be if one were all alone."
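The functionalist slogan that Bisson's story dramatizes (what matters is the role played, not the stuff playing it) can be put in programming terms. Here is a toy Python sketch, invented for this handout rather than taken from the lecture; the classes and the "pain-role" test are purely illustrative. Two realizers made of different "stuff" both play one and the same causal role.

class MeatCreature:
    """A carbon-based realizer of the pain-role (hypothetical)."""
    def __init__(self):
        self.in_pain = False

    def tissue_damage(self):
        # Typical cause of pain: being damaged puts the creature into the state.
        self.in_pain = True

    def behave(self):
        # Typical effects of pain: wincing and withdrawal.
        return "winces and withdraws" if self.in_pain else "carries on"


class MetalRobot:
    """A silicon-and-steel realizer of the very same causal role."""
    def __init__(self):
        self.in_pain = False

    def tissue_damage(self):
        self.in_pain = True

    def behave(self):
        return "winces and withdraws" if self.in_pain else "carries on"


def plays_pain_role(o):
    """Check only the causal relations: does damage lead to pain-typical behavior?"""
    o.tissue_damage()
    return o.behave() == "winces and withdraws"


print(plays_pain_role(MeatCreature()))  # True
print(plays_pain_role(MetalRobot()))    # True: same role, different stuff

The test inspects nothing about what the object is made of; on the functionalist picture sketched above, that is the only kind of test there is for having M.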
The Chinese Room Conclusion

Functionalism about M: o has M = o has the causal relations defining M = o plays the M-role.

Functionalism about mental properties entails what Searle calls "Strong AI": "an appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (MBP 1).
"One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on." (MBP 6)

Searle's Chinese Room

A "program" describes internal parts of an M-role, not behavioral/environmental parts.
All or virtually all of Searle's opponents would claim (with functionalists and behaviorists) that the behavioral/environmental parts are necessary for having M.
They (with functionalists but not behaviorists) would say that the internal parts (the program) are crucial to explaining the rest; they are the deepest parts of the M-role. (Searle: "what the machine and its program do [allegedly] explains the human ability to understand", MBP 4; "not the slightest reason has been given to suppose … even that they make a significant contribution to understanding", MBP 9.)
So Searle needs a bolder conclusion than that programs are not enough for M. He needs the conclusion that programs plus behavioral/environmental causal relations (I'll call these together "M-roles") are not enough for M.
(He could instead try to conclude that programs do not help much to explain M-roles, but that is not his strategy.)

The Chinese Room Argument

Searle's ultimate conclusion is that M-roles are not enough for M.
He imagines his opponent's best stab at an M-role and the program that helps explain it. He does not want merely to assume that a computer with that M-role could lack M; he wants to argue for that from less controversial premises.
So he needs an M (and his opponent's proposed M-role and program) for which these are less controversial:
(1) That Searle can have the program and the M-role but lack M.
(2) That no AI computer has anything relevant to M that he lacks. "[T]he computer has nothing more than I have in the case where I understand nothing" (MBP 8).
Searle needs M to be something one can introspect, or easily know whether one has, or at least know more easily about oneself than others typically know it about one. So M can't be some deeply unconscious mental property.
But Searle also does not mean to limit his conclusion to consciousness. He picks "cognitive states" like beliefs (also presumably meaning to rope in desires), and he means "understanding [some Chinese]" to be such a cognitive state.
So he wants to argue that having a program plus behavioral/environmental causal relations is not enough for understanding [any Chinese]. His argument is that in the Chinese Room case he has AI's best program and behavioral/environmental understanding-roles, yet does not understand any Chinese.

Understanding Understanding

Searle does not need to be precise about what "understanding [some Chinese]" means, and he does not need to resolve borderline cases. It is enough if there are some clear cases of understanding and of non-understanding.
Compare: it is clear that I am not one of the Rocky Mountains. You would merely be stalling if you challenged me to be precise about what "Rocky Mountain" means, or to resolve where a mountain ends and a valley begins.
Also, Searle does not need an explicit theory of what even roughly separates clear understanding from clear non-understanding. It is enough to insist that we apply the same standards to computers that we normally apply to ourselves or to one another, whatever those standards are.
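To fix ideas about the "program" side of the contrast, here is a deliberately crude Python sketch, invented for this handout and not drawn from Searle, of the kind of purely formal rule-following the room runs on. Whatever the right standards for understanding turn out to be, nothing in this procedure knows what the symbols are about: it pairs shapes with shapes.

# A crude stand-in for the room's rule books; the specific "rules" are invented.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # pairs one string of shapes with another
    "你会说中文吗": "会，一点点",
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output string the rule book pairs with the input.
    The procedure consults only the shapes of the symbols, never their meanings."""
    return RULE_BOOK.get(input_symbols, "请再说一遍")  # the default reply is also just a shape

print(chinese_room("你好吗"))  # fluent-looking output; no understanding anywhere in the code

Searle's imagined rule books are vastly more sophisticated than a lookup table, but the point carries over: the rules are defined over symbol shapes, not meanings.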
But here is a stab at what "understanding [linguistic item L]" means.
Understanding L = knowing the meaning of L (vs. having no opinion on the meaning, or just guessing the meaning).
Meaning of L = what L is about, plus the structure of L, plus the standard functions/uses of L.
Clearly knowing what L is about = knowing that L is about ____, if anything, which requires having a concept of ____ and linking it to L, which requires being able to discriminate ____ from non-____s, manipulate ____, imagine ____, and associate ____ with concepts of other obviously related things.
Clearly knowing the structure of L = knowing that L has meaningful parts L1 … Ln, if any, knowing that L's meaning depends on them, and in what ways.
Clearly knowing the functions of L = knowing that L-users typically use L to pursue goals G, if any goals are typical.
Searle is clearly correct that he does not understand any Chinese (in any of his cases).

Robots and Systems

Robot Reply: Searle doesn't understand (Chinese) L because Searle lacks the full understanding-L-role: he runs the right program for linguistic input/output, but not for the broader behavioral/environmental causal relations. He doesn't link L with an ability to discriminate ____, manipulate ____, or imagine ____.
But: even if we secretly hook up cameras and arms, Searle does not know L's meaning.
System Reply: Searle doesn't know the meaning of L, but Searle plus the room does.
But: if Searle doesn't know quantum mechanics, Searle plus the Grad Library doesn't either. And: even if Searle memorizes the program, he still doesn't know the meaning of L.
Robot-System Reply: How does Searle "internalize" the cameras and arms? If this is like multiple personalities, or demonic possession, or sleepwalking, then we wouldn't expect Searle to be able clearly to introspect whether L is understood.
Searle: "Let us suppose that I am totally blind because of damage to my visual cortex, but my photoreceptor cells work perfectly as transducers. Then let the robot use my photoreceptors as transducers …. What difference does it make? None at all …. I would still be blindly producing the input-output functions of vision without seeing anything." (Searle, "The Failures of Computationalism", 1993, 69-70)
So: Searle is blind, but unbeknownst to him the inputs from his eyes convert to Chinese; he merely imagines reading the rule books and merely imagines writing outputs, and unbeknownst to him these tug strings that control his otherwise paralyzed arms. Still, Searle would not be imagining ____ and linking it with L. He would be imagining rule books and paper, and linking those with L.

Real vs Virtual Machines

Remember that Searle needs these two premises to be not too controversial:
(1) That Searle can have the program and the M-role but lack M. (I have granted this.)
(2) That no AI computer has anything relevant to M that he lacks.
But there is a bit of functioning that AI computers have and that systems containing Searle lack: Searle-lessness! (Or better: more-basic-mind-lessness.)
A functionalist theory of M can say that what's needed for M is the direct rather than virtual running of a program: the running of a program not by the running of another program (or at least, not another mental program).
Compare: a genuine apple has to be grown by a tree, fulfilling its own natural functions.
An apple tree can be planted artificially (by a mind), or pruned artificially (by a mind); but if a delicious red thing is grafted atom-by-atom onto the apple tree, artificially, that is not a genuine apple.
Similarly for genuine understanding: a program can be planted in a computer (or a mind) artificially (by a mind), or pruned artificially; but if the program's steps are controlled artificially (by a mind or another program), that is not genuine understanding.
Linguistic aboutness seems typically to depend on deeper mental aboutness, and linguistic behavioral/evaluative functions/roles seem to depend on deeper mental cases. By contrast, those features of mental things seem typically not to depend on deeper cases. Aboutness is a symptom of mentality when it is deepest, when it does not depend on deeper aboutness.
Searle runs his program by seeing it, reading it, planning to obey it, and so on. After we (correctly) attribute these mental powers to Searle in the Chinese Room, we can explain the behavior without attributing further mental powers (e.g., understanding some Chinese). But a normal AI computer does not see, read, or plan to obey its program. Since we can't explain its behavior using these mental powers, we explain it better by attributing other mental powers instead (understanding some Chinese).
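The contrast between direct and virtual running of a program can also be put in programming terms. The sketch below is invented for this handout (the instruction names are arbitrary): the same computation run directly, and run "virtually" by a second program that reads each instruction and obeys it, roughly as Searle reads and obeys the rule books.

def add_directly(x, y):
    # Direct running: the machine simply performs the step.
    return x + y

# The same computation written as data: a list of instructions to be obeyed.
PROGRAM = [
    ("load", "x"),   # put the value of x in the accumulator
    ("add", "y"),    # add the value of y
    ("halt", None),  # stop and return the accumulator
]

def run_virtually(program, env):
    """An interpreter: a second program that reads each instruction and
    carries it out, as Searle reads and obeys his rule books."""
    acc = 0
    for op, arg in program:
        if op == "load":
            acc = env[arg]
        elif op == "add":
            acc += env[arg]
        elif op == "halt":
            return acc

print(add_directly(2, 3))                        # 5, run directly
print(run_virtually(PROGRAM, {"x": 2, "y": 3}))  # 5, run by way of another program

On the functionalist move above, only the first sort of running, running not mediated by another program (or at least, not another mental program), is a candidate for genuine understanding.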