
RE-READING THE PROBLEM OF CONSCIOUSNESS
by
RACHEL E. SCHNEEBAUM
Professor Joseph Cruz, Advisor
A thesis submitted in partial fulfillment
of the requirements for the
Degree of Bachelor of Arts with Honors
in Philosophy
WILLIAMS COLLEGE
Williamstown, Massachusetts
May 25, 2009
CONTENTS
Acknowledgments
Introduction: The Philosophical Problem(s) of Consciousness
Part I: Dislodging the Driving Cartesian Intuition
    I.i How Problematic is the Problem of Consciousness?
    I.ii Non-Phenomenal Experience
    I.iii The Non-Cartesian Mind
Part II: Considering the Competing Interpersonal Intuition
    II.i Why Literature?
    II.ii Reading Consciousness: An Empirical Account
        II.ii.i Reading at the Cognitive Level: Conceptual Metaphor Theory
        II.ii.ii Reading at the Neural Level: The Mirror Neuron System
    II.iii Creating Consciousness: A Narrative Account
Conclusion: The Hard Problem, Reconsidered
Bibliography
ACKNOWLEDGMENTS
During my first year at Williams, Professor Cruz presented my introductory cognitive
science class with the thought experiment that I had secretly harbored as my own personal
innovation for years: how can we know that, when two people are looking at something that they
both label “red,” they’re not really seeing different things? Despite shattering my illusion of
intellectual originality, Professor Cruz thus launched the academic trajectory that I have been
pursuing, with varying degrees of frustration and elation, for the past four years. His teaching
and conversation on these and related issues during my time here have allowed me to attain, if
not closure, a sense of progress and directed curiosity. Without his unfailing encouragement and
support, I never would have had the confidence to pursue my philosophical ambitions at this—or
the next—level, and I am exceedingly grateful for everything he has done for me.
Further thanks to Professor Mladenovic for serving as my second reader for this project.
Her knowledge and direction were invaluable, particularly in my research on the literature side
and my attempts to forge a connection between what might look like two very different topics.
Thanks to Professor Barry for listening to me talk in circles when I was too panicked to
remember what I was supposed to be doing, and for patiently helping me stay on track. Thanks
to the Williams philosophy department as a whole for introducing me to a subject I never
imagined that I’d study, and for providing a nurturing yet academically rigorous setting in which
I could discover how much I love it. Thanks to my friends for bringing me meals when I forgot
to leave the library and for uncomplainingly enduring my frequent, frantic rants on
consciousness. Finally, thanks to my family for never letting me forget that they support me,
always.
INTRODUCTION: THE PHILOSOPHICAL PROBLEM(S) OF CONSCIOUSNESS
Most of the philosophical and scientific theorizing surrounding consciousness seems to
be rooted in a particular set of intuitions—the same intuitions that led Descartes to draw a
distinction between the mind and the body, and the same intuitions that gave us a rich canon of
science fiction and philosophy of mind scenarios involving body swapping and brains in vats.
These are the intuitions that suggest that there is something epistemically and ontologically
special about the me-ness of my own experiences, something distinct from a scientific
explanation of the world. The philosophical and empirical work on consciousness is full of
arguments and thought experiments that appeal to this sort of intuition, to the extent that it is
difficult if not impossible to think about consciousness at all without invoking similar
assumptions. Many would argue that this is no accident, that the seemingly inescapable
prevalence of our intuitions demonstrates something fundamental about the way things are.
I cannot attempt to deny the existence and power of our deep-seated intuitions. However,
I do aim to discuss some of these intuitions in more detail in order to show that they do not
automatically entail what are often assumed to be their philosophical conclusions. Furthermore,
I aim to show that there is, in fact, a competing intuition that is equally present in our theorizing,
in our everyday relations with others, and in our ordinary understanding of ourselves. We have
an intuition that we can know how another person feels in a way that is not purely inferential
from our own first-person experiences. This intuition grounds much of our interactions in the
world and communications with other people, and it is particularly fundamental to the way we
read, understand, and talk about literature.
The existence of a competing intuition alone, though, is not enough on which to stake
conceptual upheaval. To arbitrate between powerful intuitions, we must show that existing
theories built upon one set of assumptions ignore or do not sufficiently account for further data
that some other theory explains elegantly. In Part I, I work to weaken the argumentative power
of the driving Cartesian intuition by contrasting its claims with empirical evidence. In Part II, I
advance an alternative account that more satisfactorily explains that evidence. Developing
cognitive science research in conceptual metaphor theory and embodied cognition seems to
provide an account of the way we read and understand literature that is strikingly compatible
with our competing intuition. Moreover, this sort of research and its conclusions seem able to
shed light on the inarguable attractiveness of our Cartesian intuitions and the resulting way in
which we ordinarily think about consciousness. I aim to demonstrate that, though the intuitions
traditionally leading to Cartesian conclusions cannot be denied, they are the result of a particular
metaphorical way of thinking rather than representative of any extant property in need of
explanation. I claim, then, that the so-called Hard Problem of consciousness is not a problem so much as it is the conceptual baggage of a physically contingent metaphorical narrative.
When writing about the problem of consciousness, one of the most difficult tasks seems
to be the initial identification of the subject matter. In order to make any sort of theoretical claim
about consciousness, one must first attempt to delineate what it is we are actually talking about
when we do so—a task that seems at once both obviously unnecessary and disconcertingly
difficult. Perhaps this difficulty of definition has something to do with the nature of the odd and
seemingly unique relationship we have with the phenomenon itself: as David Chalmers points
out, “Conscious experience is at once the most familiar thing in the world and the most
mysterious.”1 Consciousness is our “subjective experience,” it is our “inner life,” it is “raw
feels”; perhaps Thomas Nagel puts it most helpfully and succinctly when he writes that an
organism’s having consciousness “means, basically, that there is something it is like to be that
1 Chalmers, D. (1996), 3.
organism.”2 Often philosophers resort to iconic lists of examples to serve in the place of
definition: consciousness is the singular and particularly pleasing taste of chocolate, the way it
feels to see the color red, the sensation of sun on one’s skin or a crunchy leaf under one’s foot, or
that little shiver up the spine at the key change of one’s favorite song. Consciousness is the
subjective quality of experience: you already know what consciousness is, some might say,
because it is inescapable.
Perhaps because of this resistance to and difficulty involved in definition, though—the
epistemic assumptions, the poetic vagueness, the proliferation of confusions—the problems and
questions at stake in the philosophical discussion of consciousness become similarly diffuse and
ill-bounded. The philosopher’s tendency to refer offhandedly to “the problem of consciousness”
is a habit that is misleading in itself, as there seem to be many problems worthy of both
philosophical and scientific investigation importantly associated with consciousness. Depending
on one’s understanding of the problem or problems, the questions that one might ask and work to
explore are different.3 Thus, at least part of the reason that consciousness appears intractable has
to do with the literature itself, with the vagueness of its subject matter and its tendency to assume
one particular incipient question but then pursue and arrive at a “solution” to something entirely
different. Before beginning my own work on the topic, then, I will make an effort to sort
through some of the existing discussion in an attempt to separate out the kinds of questions that
are asked, the kinds of answers that are yielded by these questions, and the areas that remain.
Many philosophers, perhaps in an effort to pursue the philosophy of mind as “simply
fall[ing] into place as a chapter of psychology and hence of natural science,”4 work to account
for consciousness from a physicalist perspective. They proceed from the standpoint that
2 Nagel, T. (1974), 436.
3 Thus, for example, Tye, M. (1995) tracks ten problems of consciousness.
4 Quine, W.V.O. (1969), 82.
everything that exists in the world is physical, and thus that whatever it is we call
“consciousness” must be explainable by physical laws. This approach allows us to pursue an
understanding of consciousness in much the same way—or, at least, in a way that is motivated
by the same sorts of beliefs and assumptions—that we would engage any other object of
scientific enquiry. Because consciousness seems unavoidably linked to neural processes such as
perception and awareness, it seems reasonable from a physicalist perspective that the right place
to look for an explanation of consciousness is in the nervous system.
However, even this move is the result of and is accompanied by one of several implicit
philosophical assumptions, and thus there are underlying bifurcations of questions here beneath
the guise of a single aim. In order to attain a useful physical understanding, we must first
determine the appropriate level of explanation for consciousness: can we locate consciousness
within a single neuron, or a neural network? Is there a certain part of the brain responsible for
consciousness, or is it the result of a type of overarching brain state? Is it dependent on a
biological nervous system as such, or is it a functional property replicable in other substances?
Each of these questions has proven productive in scientific enquiry, leading philosophers and
scientists to hypothesize that consciousness is related to the binding of short-term memory due to
oscillations of a particular frequency,5 or to the mapping of neural representations of awareness,6
or to “a virtual mechanism distributed over brain circuits”7; that consciousness is one unified
higher-level brain state8; that consciousness is certainly explainable at the level of neuroscience,
though we might not know what that explanation looks like yet9; and so on.
5 Crick, F. & Koch, C. (1990).
6 Kanwisher, N. (2001).
7 Bisiach, E. (1992), 250.
8 Searle, J. (2000).
9 Churchland, P.S. (1996).
If understanding consciousness (or, at least, the state of the current controversy over
consciousness) were merely a matter of these sorts of theoretically empirical questions, though,
things would be far more manageable than they presently seem. Regardless of the answers to
these and other investigations, any physicalist account still runs up against the accusation that
something important has been left out, that whatever empirical explanation just provided had
nothing to do with consciousness itself. This is because, according to their detractors, such
physicalist explanations ignore or presuppose a solution to what David Chalmers and others have
designated the “Hard Problem,” which is essentially a persistent incarnation of the mind-body
problem. According to these Hard Problem theorists, physicalists do not and cannot account for
the qualitative components of our experiences. A complete account of consciousness must
explain why it is that these experiences seem fundamentally different in kind from anything else
that can be described within a physicalist explanation, and how it is that they nevertheless are
able to affect and be affected by obviously physical things such as nervous systems. These
qualitative components of experience, or qualia—the special “what it is like for me”-ness of our
interactions with the world—are the source of the Hard Problem.10
Qualia may initially seem an odd concept, and thus the consciousness literature is full of
thought experiments that aim at invoking our deep-seated intuitions about our own
phenomenology in order to make the notion clear. Consider the inverted spectrum thought
experiment, in which we imagine two individuals who are functionally and physically identical
but whose color qualia are inverted. When they both look at the sky, one has the experience that
I have when I see blue, and the other has the experience that I have when I see yellow (although,
of course, they both learned to call that color-experience “blue” and to react to it in the same
10 Several philosophers, though, have challenged Chalmers’ isolation of the “Hard” as opposed to other allegedly “easy” and strictly psychological problems (memory, perception, etc.), both those in favor of qualia and those opposed (c.f. Nagel, T. (1986); Churchland, P.S. (1996)).
way). The conceivability of this scenario—physical sameness and phenomenological
difference—implies that qualia are not physical or, at least, that they are conceivably non-physical.11 Another thought experiment asks us to imagine a brilliant neuroscientist named Mary
who specializes in visual perception and knows everything physical there is to know about color
vision, but who was brought up in a black-and-white room.12 When she is finally allowed to
leave the room and encounters colorful objects for the first time, we cannot help but imagine her
saying something such as, “Oh, so that’s what it’s like to see red.” Thus, Mary must have
learned something beyond the complete scientific understanding of the situation; and thus, any
physicalist explanation must be leaving something—qualia—out.
These sorts of thought experiments have led philosophers to identify several
characteristics that set qualia apart from other objects of philosophic and scientific study.13 First,
qualia are ineffable: though we can talk about “the taste of chocolate,” we cannot fully describe
our individual conscious experiences in a manner that would allow them to be experienced or
entirely understood by someone else. Second, qualia are intrinsic: individual qualia themselves
are non-relational and thus do not change depending on their interactions with other things, nor
can they be somehow simplified or reduced to something more basic or fundamental.14 Third,
qualia are private: it is impossible to access another individual’s conscious experiences or to
compare our own conscious experiences with someone else’s the way we might compare other
11 The inverted spectrum thought experiment’s argumentative force rests on the link between conceivability and logical possibility. Because we can conceive of the physical and the phenomenal as entirely separable in such cases, it is logically possible that they are separable and thus separate kinds of things. Similarly, even if one were to argue that an inverted spectrum is conceivable only in the case of brain surgery or other necessarily physical differences, the fact that even physically inverted qualia could conceivably have no functional or behavioral effect means that a purely functionalist account of qualia is incomplete. (For further discussion of and variations on the inverted spectrum, see e.g. Shoemaker, S. (1982), Block, N. (1990).)
12 Jackson, F. (1982).
13 The following formulation of the properties comes from Dennett, D. (1988); essentially the same properties are phrased slightly differently by others (see e.g. Xu, X. (2004)).
14 The intrinsicness of qualia is not always explicitly referenced as a key feature. However, it seems implicit in the thought experiments that often rely on our ability to identify a particular quale in isolation from the associations, memories, and so on that might be involved in an experiential situation.
objects of enquiry. Fourth, qualia are immediate: they are directly accessible in a way that
nothing else is, as everything else is mediated through conscious experiences. Qualia are
instantly attainable and intimately understood: in other words, having a conscious experience is
all there is to know about that experience. It is these properties that cause philosophers to resort
to despairing exclamations such as “Consciousness is what makes the mind-body problem really
intractable,”15 or, more daunting yet, “Consciousness is the biggest mystery. It may be the
largest outstanding obstacle in our quest for a scientific study of the universe.”16
Before we can think about physical explanations of consciousness, we must tackle the
question connecting such physical explanations with the phenomenal phenomenon itself. The
question becomes, as Searle puts it, “How exactly do brain processes cause conscious states and
how exactly are these states realized in brain structures?”17 Most researchers admit, after all, that
what they are after are the neural correlates of consciousness rather than “consciousness itself.”
Thus, it is the fact that one might continue to speak of such a “consciousness itself” that leaves
the question looming. Empirical explanations might present the illusion of having “explained
consciousness” or “solved the problem of consciousness,” while in actuality leaving explanations
riddled with what seem to be unacknowledged gaps. Even if physicalist explanations have not
actually “left something out,” as Levine claims18—that is, they have merely provided an alternate
epistemic path with which we might conceive of a single state—the fact that one cannot account
for qualia in traditional physical terms means that the physicalist explanation is inadequate. It is
this explanatory gap that encourages the proliferation of dualist explanations such as the one put
forward by Chalmers in The Conscious Mind. Thus, I will briefly outline his theory as an
15 Nagel, T. (1974), 435.
16 Chalmers, D. (1996), xi.
17 Searle, J. (2000), 2, my italics.
18 Levine, J. (1993).
example of the sorts of explanations that “take consciousness seriously.”19
Because the physicalist can never truly answer Nagel’s original question, because that
crucial qualitative component of consciousness has been left out, Chalmers claims that we must
instead turn to an entirely different sort of non-strictly physical explanation for consciousness.
He claims that there are two distinct aspects of the mind: the psychological mind, which can be
explained through processes and functional details, and the phenomenal mind, which
characterizes conscious experiences. According to Chalmers, “On the phenomenal concept,
mind is characterized by the way it feels; on the psychological concept, mind is characterized by
what it does.”20 All mental states and events can have psychological properties or phenomenal
properties or, in some cases, both. Cognitive concepts such as learning and memory can be
understood psychologically, in that it is possible to have a complete understanding of what these
terms entail from an understanding of the neurological and psychological properties involved,
whereas the difference between being in the mental state of seeing a blue chair versus that of
seeing a red chair is a phenomenal one. Pain seems to have both psychological and phenomenal
aspects to it.21 Because these aspects of mind are distinct and separable, Chalmers does not think
that working to collapse one of them into the other or to explain one within the other’s terms
would be a promising or even an intelligible project. Rather, his task is to account for the
features of and to explain the interactions between the psychological and the phenomenal.
Chalmers bases his division of the mind into these two aspects on an argument from
supervenience. He points out that a higher-level concept (concept B) is logically supervenient on
19 Chalmers, D. (1996), xii. I focus solely on Chalmers’ brand of dualism (what he calls “naturalistic dualism”) for the sake of space, though it is certainly not the only possible representative. Similarly, though the implication that every position on consciousness can be broken down into a physicalism/dualism debate is an obvious oversimplification, it is a useful and, I think, an accurate one. It is not my project to mount a direct defense against Chalmers’ version of dualism. Rather, I present it as an example of the sorts of unavoidable implications that result from our current way of thinking about consciousness—a way of thinking that I will later work to unseat.
20 Ibid., 11.
21 Ibid., 18.
a lower-level concept (concept A) if, assuming some conceivable alternate world contains every
property of A, B-properties must necessarily exist in that world as well.22 For example, we can
say that biological facts are logically supervenient on physical properties, because in an alternate
world which is physically identical to our own, such that every molecule and every occurrence in
space-time are indistinguishable from those in the actual world, identical biological entities and
processes would exist in that world, as well. Logical supervenience implies the existence of a
reductionist explanation of the higher-level concept, concept B, because it can necessarily be
explained in the terms of the lower-level concept, concept A. Thus, biology can be explained
and understood entirely from within the confines of physical properties. Conversely, a higher-level concept B is naturally supervenient on a lower-level concept A when B-properties and A-properties are completely correlative in the natural world, though neither set of properties can
entail the other.23 In the case of two naturally supervenient concepts, we can imagine an
alternate world in which the lower-level properties are identical to those of the actual world, but
the higher-level properties are absent or altered. Chalmers argues that consciousness cannot be
logically supervenient on physical properties, since we can conceive of a world of phenomenal
zombies, creatures who are physically identical to us but have no qualia. Thus, consciousness
cannot have the same relationship to the physical facts that biological facts have to chemistry.
Chalmers accounts for qualia within a system of property dualism, such that phenomenal
facts are naturally but not logically supervenient on physical facts. He postulates a system of
psychophysical laws in order to explain the interaction of the phenomenal and the physical:
“There is a system of laws that ensures that a given physical configuration will be accompanied
by a given experience, just as there are laws that dictate that a given physical object will
22 Ibid., 48.
23 Ibid., 36.
gravitationally affect others in a given way.”24 This explanation simply accepts that dualism is
the unavoidable result of claiming that consciousness “arises from” the physical, or that we can
find “neural correlates” of consciousness, or any other attempt to disguise a bit of hand-waving
and subject-changing as a true physicalist explanation. Thus, in an important sense, Chalmers is
right to insist that accepting a sort of dualism is the only way to take consciousness seriously.
However, the “victory” of dualism might not be as complete as Chalmers needs it to be,
as it merely preserves rather than solves the problem. He challenges the physicalist by
illustrating that empirical science cannot account for why seeing the color blue feels the way it
does—but his own account is no more successful in this respect. The fact that qualia can be
identified as non-physical properties that are related to the physical world in some reliable way
does not explain anything about why these phenomenal properties are the way they are. There is
also an important sense in which initial dualist assumptions dictate the terms of their inevitable
conclusion, so that the “victory” of dualism is circular, already written into the phenomenon that
we are working to explain. Chalmers acknowledges this, too, although he does not acknowledge
how problematic it is: “To take consciousness seriously is to accept just this: that there is
something interesting that needs explaining, over and above the performance of various
functions.”25 A non-question-begging explanation of consciousness has already been stipulated
to be one that addresses qualia, and qualia might easily be defined as “the part of experience that
is left out of a physicalist description.” Any debate on consciousness is already rigged by the
intuitive obviousness of its opening move. The only promising way to proceed, then, is to
challenge this opening move. Thus, I aim to show that the driving Cartesian intuition behind the
consciousness literature is not the solid foundation that can support such weighty implications.
24 Ibid., 170. Interestingly, Chalmers’ method of solving the mind-body problem is nearly identical to Descartes’; and I doubt Elisabeth of Bohemia would be any more impressed by the former than she was by the latter.
25 Ibid., 167-8.
PART I: DISLODGING THE DRIVING CARTESIAN INTUITION
Any attempt to account for consciousness in the philosophical literature begins and
proceeds in the same way: identifying the concept that needs explaining, and then working to fit
that concept into an extant understanding of the world or modifying the current understanding of
the world such that it includes the problematic concept. The reason for identifying and
beginning with qualia strikes philosophers as so obvious that it does not require justification: we
just have qualia, now how can we account for them? When pressed, the justification appears to
begin on epistemic grounds. Our access to our own conscious states seems fundamentally
different from our access to anything else that we can know, and that epistemological distinction
is enough to signal an ontological one. This is the Cartesian strategy, as Richard Rorty points
out: “one has to have the notion of ‘immediate awareness’, and to believe that the things we want
to know about […] are not things which we are immediately aware of. Once one believes all
this, one will have to grant the existence of a realm to contain the objects of immediate
awareness.”26
It seems, though, that without this particular intuition and its philosophical implications,
the Hard Problem of consciousness simply falls away. Thus, I will try to weaken these Cartesian
claims by approaching them from several slightly different but related directions. First, I argue
that the mind-body problem and the Hard Problem of consciousness more specifically are not
universally obviously intuitive. To this effect, I examine disparities between the philosophical
tradition’s assumptions about consciousness and the actual nature of our experiences, and show
that resulting confusions and contradictions are mirrored by differences across languages. Next,
I show that it is possible to provide a comprehensive account of the mind that does not implicate
26 Rorty, R. (1970b), 278.
qualia, and that any remaining intuitions we might harbor about something left out are confused
and contradictory rather than straightforward and obvious.
I.i How Problematic is the Problem of Consciousness?
Certain properties of consciousness are often referenced when making a case for its
specialness, such as its ubiquity, its continuity, and its singularity. However, there is an
alternative argument27 to the effect that “consciousness” as conceived by philosophers is very
different from the way it seems in our actual everyday experiences and interactions. When
pressed by a thought experiment or an appeal to intuition, we might affirm that we are always
conscious throughout waking life, or that our conscious experience is of something like a single
continuous field. As seemingly tautological as it initially appears, though, Jaynes is right when
he points out that “Consciousness is a much smaller part of our mental life than we are conscious
of, because we cannot be conscious of what we are not conscious of. […] It is like asking a
flashlight in a dark room to search around for something that does not have any light shining
upon it.”28 In fact, the vast majority of our mental functioning should be labeled unconscious or
non-conscious. For example, though we assume the continuity of our field of visual perception,
there is in fact a blind spot in our vision that is “filled in”29 by the brain, and which can be
discovered introspectively upon closer examination (staring at a central point and moving a card
into and out of the blind spot, for instance).30 Similarly, though we might assume that we are
conscious of an entire page while reading a book, there is actually only a very small point upon
which we can focus at a time; our eyes merely move quickly enough to present the illusion or the
27 See e.g. Jaynes, J. (1976); Wilkes, K. (1984, 1988, 1995); Dennett, D. (1991); Schwitzgebel, E. (2008).
28 Jaynes, J. (1976), 23.
29 I use this phrase for the sake of simplicity, despite its potentially problematic scientific and conceptual inaccuracy, as discussed in Dennett, D. (1991) (c.f. Churchland, P.S. & Ramachandran, V. (1993)).
30 See e.g. Jaynes, J. (1976); Dennett, D. (1991); Schwitzgebel, E. (2008).
assumption of continuity.31 Furthermore, as Jaynes points out, it might not even make sense to
speak of being conscious of any of the marks on the page. When reading, we are conscious of
“meaning” rather than sentences or words or ink marks, or even the allegedly conscious process
of reading itself.32 In fact, many of the ordinary activities in which we engage are actually
impeded by deliberate conscious attention to them: playing the piano, hitting a tennis ball,
typing, etc. This is in direct contrast to the philosophical tradition’s ordinary assumptions about
the continuity and ubiquity of our consciousness.
As these examples begin to demonstrate, the dichotomous distinction between
consciousness and non-consciousness seems far more confused and heterogeneous in both our
ordinary intuitions and empirical examples than is implied by the traditional literature.
Following Jaynes’s point, the question of whether reading is a conscious process no longer
seems so easily answerable. We have intuitions that could point us either way, and though we
could conceivably do some empirical tests on the neurological functioning involved in reading, a
conscious/non-conscious distinction would not be the sort of resultant information that would be
interesting or even relevant. Similarly, how do we attribute “consciousness” in the case of the
driver who, engrossed in conversation, cannot recall the route she took to arrive at her current
location, despite having successfully maneuvered around potential hazards and diligently obeyed
the rules of the road? What about dreaming, or sleepwalking, or fugue states?33 We can draw
the conscious/non-conscious line somewhere, if we must, but (as Dennett points out34) wherever
this line ends up is unavoidably arbitrary and cannot be fixed across all situations. Thus, the
31 Dennett, D. (1991).
32 Jaynes, J. (1976).
33 These and other examples are discussed in more detail in Wilkes, K. (1984).
34 Dennett, D. (1991).
desire to attribute some intrinsic property of homogeneity across all of these and other cases
begins to look like a massively misleading oversimplification that should not be taken seriously.
Clearly, though, the complications surrounding the issue and the amendments it requires do not imply that the issue—that is, the problem of consciousness or, more broadly, the mind-body problem—is an empty one. Maybe consciousness is a much smaller and
less fundamental component of our mental lives than we might have assumed, and maybe its
borders are not as nicely delineated as we might have liked, but the fact that it remains leaves the
discussion in play. We might be confused, perhaps, but there is something that we are confused
about. Moreover, that something is of universal import to a complete conception of what it
means to be human, and it is something that has haunted the entirety of the history of human
thought—or so it is usually claimed.
However, it has been pointed out35 that the problem of consciousness as currently
construed is a relatively modern development in the history of philosophy. According to Wilkes,
there is no word in ancient Greek that is even roughly translatable into our concepts of either
consciousness or mind.36 If correct, this seems to indicate that, despite the ancient Greeks’ rich
psychological and philosophical tradition, for them there simply was no mind-body problem, no
problem of consciousness, no discussion of qualia—or anything like them—at all.37 Instead,
philosophers described the world as purely physical, with gradations in complexity of structure
and perceptual capacities the only markers for determining the “categories” of life, rather than an
appeal to the presence or degree of conscious experience. Philosophers wrote about the
difference between sleeping and waking, rational thought, perception, sensation, and so on, but
there was no reference to anything like private introspection or the ineffably qualitative
35 See e.g. Matson, W. (1966); Jaynes, J. (1976); Wilkes, K. (1984, 1988, 1995).
36 Wilkes, K. (1984, 1988, 1995).
37 Wilkes, K. (1988), 19; also see Annas, J. (1992), 9, 59.
component of experience. Thus, Greek philosophy seems to indicate that it is possible to
develop a science and philosophy of “mind” without invoking the problem of consciousness, that
consciousness is not an undeniably extant and obviously intuitive property or problem, and that
the prospect of an eliminativist or reductivist position with respect to qualia is not as baldly
misguided, perversely silly, or downright dishonest as many philosophers tend to assume.
A similar position is defended by Julian Jaynes in his well-known The Origin of
Consciousness in the Breakdown of the Bicameral Mind, where he speculates that consciousness
is the product of a particular sort of narratization. Consciousness is always narratized, he claims,
not only in the sense that we are the perpetual protagonists of our own life stories, but also in that
we provide reasons and explanations for occurrences and the connections between them.
Consciousness is the conciliation of these narrations such that experiences fit together
cohesively. Moreover, he claims that this narratization is not something like a “component of
consciousness” that needs to be accounted for when making consciousness fit into a physical
system. Rather, it is a metaphorical feature entailed by the way we talk about consciousness.
This understanding of consciousness allows Jaynes to postulate that, before the development of
our modern way of speaking about consciousness, there existed a race of non-conscious humans.
These were not people who lacked a brain structure that we have since developed, and they were
not phenomenal zombies in the sense required by the philosophical thought experiments. This is
because, of course, they were not functionally identical to us. They would never have described
themselves as “conscious,” because the ability to describe oneself as conscious, with all the
metaphorical trappings that such a description entails, is consciousness.38
Like Wilkes, Jaynes points to the fact that consciousness-talk has not been a feature of
human language from its origins. According to Jaynes, there are no “mental” words or anything
38 In Part II, I develop an empirical account very much in the spirit of Jaynes’s theory of metaphor.
like them—consciousness, mind, introspection, etc.—mentioned in the Iliad.39 Rather, there are
merely references to behaviors, actions, and the speeches of gods.40 Jaynes’s conclusion based
on this observation is a stronger one than Wilkes’s. He claims that the ancient Greeks’ lack of
description of themselves as conscious means that they were not conscious. Jaynes speculates
that consciousness evolved after language, which means that we should not think of
consciousness as appearing somewhere along an evolutionary spectrum, existing in some
creatures but not in others. Rather, consciousness is a conceptual construct that could exist only
after we had developed the tools to make conceptualization possible via a form of narratization.
An obvious objection to Jaynes’s theory is that he has simply got things backwards:
clearly consciousness must have developed before language about it did. Consciousness seems
to be the sort of thing that can exist independently of its possessors’ ability to describe it. Thus,
one might argue that even though the word “consciousness” is not mentioned in the Iliad, we
cannot conclude therefore that Odysseus was not conscious. Perhaps the only implication we
can draw from Wilkes’s and Jaynes’s observations is that the ancient Greeks’ philosophy was
simply not sophisticated enough to observe and address a problem that was clearly there
regardless. Perhaps the Greeks, like modern materialists, were simply leaving something out.
After all, the fact that the ancient Greeks most likely had no word translatable into quantum
mechanics does not render insignificant current work in physics and the philosophy of science.
The important difference, however, is that much of the persuasive force of the problem of
consciousness lies in the fact that “if it is a problem at all, it is a glaringly obvious problem; or at
39 Jaynes makes this claim with a qualification: some mental words do appear, but only in sections hypothesized by other scholars to have been added much later in the poem’s long and varied history (see Jaynes, J. (1976), 81-82). Thus, the mental language is an overlay based on more modern interpretations, rather than representative of the characters’ or authors’ psychological states. Interestingly, B.F. Skinner (1989) makes a similar claim about the development of “cognitive” language from purely behavioral terms.
40 Jaynes, J. (1976), 69.
least it can be made so in a few words, to persons uncorrupted by philosophy.”41 If the problem
of consciousness is so problematic, and so obviously problematic, it seems suspect that “not just
Aristotle but the whole Greek nation”42 never once thought to mention it or anything like it.
There is another objection to the sorts of inferences from textual analysis that both
Wilkes and Jaynes make. They both imply that modern mentalistic readings of ancient Greek
texts have imposed a sort of temporal or cultural overlay based on a modern preoccupation with
consciousness, inferring cognitive language where none truly existed originally. However, it is
easy to see this argument reversed, such that a supporter of consciousness might accuse the
Jaynes/Wilkes/Skinner proponent of reading behavioristic sympathies into a translation where
none existed originally. This dispute cannot be settled from within the text itself. The fact that
there is a dispute, though, is noteworthy nevertheless, given the extent to which the
consciousness-centric reading is currently taken as given. Thus, the fact of this dispute
regardless of its result might lend more weight to the claim that Cartesian-based intuitions are
not the only motivating factors involved in the discussion. Furthermore, the fact that the dispute
seems difficult to arbitrate does not mean that such an arbitration is impossible. Rather than
relying on a particular text itself, we might trace the evolution of language over a range of texts,
tracking changes in the frequency of particular words and phrases before translation. This sort of
longitudinal study, though not entirely conclusive, might help to support claims about changes in
language over time that may or may not be attributed to a transformation in mentality.43
Still, even if we conclude that arguments from translation are unavoidably inconclusive,
and even if we assume that the ancient Greeks’ failure to recognize the problem of consciousness
or the mind-body problem was the result of a lack of sophistication or scientific advancement,
41 Matson, W. (1966), 94.
42 Ibid.
43 This is the sort of examination that Jaynes conducts—extremely speculatively—in Book II.
the accumulation of these and related claims continues to cast doubt on the intuitive primacy of
the problem of consciousness as currently construed. Similarly, Wilkes has pointed out that, in
English (and French and German), the word consciousness as it is used today did not appear until
the late 17th century.44 Even now, the term consciousness with all its modern philosophical
connotations is far from universally translatable. In Chinese, the word yìshì comes closest,
though it also seems to cover several additional meanings and connotations that consciousness
does not—it is perhaps closer to the word psychological—and leaves out the idea of non-linguistic but presumably conscious states in both humans and animals.45 Moreover, in Chinese,
“there seems little concern with incorrigibility, immediacy, the inner eye, private mental items on
an internal stage, knowledge by acquaintance rather than by descriptions,”46 or all those
properties that allegedly made consciousness “special” in the first place. Similarly, Croatian has
no word for mind that goes beyond something like mental or psychological, terms that have no
special epistemological or ontological connotations.47
It seems slightly disingenuous, of course, to make assertions about a philosophical
problem solely on the basis of an observed difficulty in translation. As Wilkes herself points out,
there is nothing particularly unique or revelatory about the fact that languages are different, and
that all languages include and lack words that do not exist or exist exclusively in other
languages.48 Still, if words such as consciousness or qualia really marked a fundamental
component of human existence and experience, it seems suspicious that a society could function
without any sort of reference to them in its language. Our exposed confusions about
consciousness—its conceptual incongruities and ill-definedness—are implicated in the
44 Wilkes, K. (1995).
45 Wilkes, K. (1988), 28.
46 Ibid., 28-29.
47 Ibid., 29.
48 Ibid., 16.
comparative ways that various languages refer to it, or to parts of it, or to nothing like it. The
observed non-universality of consciousness as a concept might indicate that our English word is
merely a rhetorical or conceptual construction. Qualia, then, might be described not as
fundamental properties but as conceptual fictions rooted in a particular cultural and linguistic
environment. To push this point a bit further: the problem of consciousness might be a self-created problem, developed by and grounded in the confusions inherent in its own allegedly
descriptive literature. Again, such observations alone are not sufficient to discredit qualia and to
render the mind-body problem obsolete. However, these sorts of observations should at least
lead us to hesitate before making metaphysical assertions about fundamental properties based
solely on the existence and persistence of intuitions that are not, upon closer examination, as
unquestionable as they might at one point have seemed. For the proponent of the difficulty of the Hard Problem of consciousness, these intuitions have to be absolutely unassailable. The fact
that they are not means that she has lost her firm foundation.
Though the Hard Problem seems to have lost its universality and its obviousness, at least
in theory, perhaps it still strikes us as difficult to deny something that seems so real and
undeniable, even if we acknowledge that the reasons for this undeniability might have more to do
with our culture than our qualia. Thus, in the next section I work to weaken this persistent
intuition even further by outlining and developing an account of experience without qualia, and
suggesting that any lingering temptation to maintain that something has been left out is the result
of conceptual confusion.
I.ii Non-Phenomenal Experience
In his essay, “Experience,” Robert Sharf contrasts our traditional understanding of first-person experience with evidence from the study of religious experiences that seems to contradict
our implicit Cartesian assumptions about immediate awareness. He claims that, in specific cases,
it might be misguided to look beyond constructed memories and reports of experiences for
something more originary or fundamental, some prior referent for experiential claims in “pure
consciousness.” In this section, I will develop Sharf’s argument against the phenomenal
component of religious experiences. I will then attempt to expand his argument, with the aid of
Dennett’s work on similar issues, and apply this broader understanding to an explanation of
phenomenal experience itself.
Sharf defines the philosophically relevant and potentially problematic understanding of
“experience”: “to ‘directly perceive,’ ‘observe,’ ‘be aware of,’ or ‘be conscious of.’ Here there
is a tendency to think of experience as a subjective ‘mental event’ or ‘inner process’ that avoids
public scrutiny.’ […] Experience is simply given to us in the immediacy of each moment of
perception.”49 This understanding of experience points to that fundamentally immediate and
indubitable component of consciousness, even if there is the possibility of deception regarding
the representation of a particular conscious experience. I may be wildly mistaken about the
objects of my consciousness, but I cannot be mistaken that I am conscious: “While the content of
experience may prove ambiguous or deceptive, the fact that I am experiencing something is
beyond question.”50 Experience, then, has an intrinsic and necessary phenomenal component to
it, and it makes sense to talk about this phenomenal component as something distinct and
separate from our second-order reflections on it or public reports about it. There seems, at first,
to be evidential support for this kind of understanding of experience within the literature on
religious experiences in particular. There are surprising similarities in accounts of religious
experiences across myriads of cultures, such that it seems natural to attribute all of these
49 Sharf, R. (1998), 104.
50 Ibid.
accounts to the same universal experience itself. Thus, the case of religious experiences seems to
be a particularly promising one from which to examine claims about phenomenology and their
relation to our philosophical and scientific claims about consciousness.
Sharf argues, though, that much of the evidence for a universal religious experience is not
as strong as is commonly assumed. There are fundamental technical issues with this assumption,
such as the observation that most religions do not rely on personal experience as a reference
point for practice, due to its “ambiguous epistemological status and essentially indeterminate
nature.”51 Similarly, most of our assumptions about experience-based religions come from
westernized/Christianized sources, such that their similarities and differences may be largely a
matter of socioculturally driven emphasis and exaggeration. Even if these emphases and
exaggerations are grounded in truth, though, the treatment of experience within religious
traditions—for example, Buddhism, Sharf’s exemplar of choice due to its valorization and
internal analysis of experience—forces us to reexamine the meaning of the concept itself.
Though Buddhist doctrines include specific accounts of distinct sorts of experiences, there is a
remarkable lack of agreement among Buddhist practitioners regarding those experiences
themselves in the sense of them as intrinsic, immediate, and indubitable events. Claims about
experiences are based on notably public criteria: the practices used, the behaviors exhibited, and
even the lineage of the individuals involved.52 Thus, if we cannot have any degree of certainty
or even reliability regarding phenomenal experience within a tradition that implicates an
elaborate system of supposedly verifiable and reproducible experiences, it seems that our
foundation for making less regulated experiential claims becomes at least equally problematic.
51 Ibid., 99.
52 Ibid., 107.
It is tempting, at this point, to acknowledge our Cartesian intuitions, to uphold our
confidence in the fact of experience if not in its particular form: “even if mystics and meditation
masters cannot always agree among themselves as to the designation or soteriological import of
their experiences, it is clear that something must be going on. Those Buddhist meditators are
clearly experiencing something.”53 To counteract this obviously intuitive desire, Sharf turns to
the less controversial example of experiential reports of alien abductions. In this case, like the
religious one, there are remarkable similarities in description across reported experiences, and
those individuals reporting the experiences seem to be utterly sincere in their beliefs. Still,
though we tend not to doubt the reports themselves, the beliefs held by the reporters, or even the
constructed memories driving the reports, we do find it reasonable to doubt the existence of the
alleged originary event, the actual phenomenal experience of alien abduction: “We suspect that
the abductees’ reports do not stem from actual alien encounters but that some other complex
historical, sociological, and psychological processes are at work. […] it is reasonable to assume
that the abductees’ memories do not faithfully represent actual historical occurrences.”54
Accounts of alien abduction prove that there can be a report of experience without “experience”
itself, or that the “experience” might be nothing prior to or other than its report. Perhaps, Sharf
claims, the same sort of thing is going on whenever an individual makes a claim about a
particular religious experience. Thus, he concludes, it is not helpful to study religious
experiences via an attempt to understand some particular originary and necessarily ineffable
phenomenology; rather, we should study the public texts, narratives, reports, and so on that we
have available.55
53 Ibid.
54 Ibid., 109.
55 Ibid., 111.
Of course, the defender of qualia might insist that this point merely defers the problem
rather than deflecting it. She might claim that there are qualitative components to beliefs and
memories as well, that it feels like something to remember an experience of alien abduction and
it feels like something to believe that that memory is the result of an actual experience, even if
there were no experience and the memory were false. In order for Sharf’s analogy to work,
though, we might view the alien abduction example as parallel to rather than strictly
representative of the broader phenomenological claim. Its important implication is not
necessarily the presence or absence of phenomenology at any or all levels of description, but
rather the idea that, when presented with an account of some originary event, there are instances
in which it seems natural that we look for underlying psychological work resulting in the report
of that event rather than anything resembling the event itself. Still, the possibility of this
counterargument points to two possible ways of accepting Sharf’s conclusion—one in which the
religious experience/alien abduction case marks an important departure from ordinary
phenomenological judgments, and one in which the absence of the originary event at one level
can be translated into an account of phenomenology more broadly. I will sketch both of these
readings, though I support the second.
The first, weaker way of reading Sharf might go something like this: Sharf is right:
reports of religious experiences, like reports of alien abductions, refer to no originary
phenomenal event. Thus, religious experiences are not real experiences, because experience by
definition entails this intrinsic phenomenal occurrence prior to its report. The presence or
absence of this event is what allows me to distinguish between real experiences, like tasting
coffee or distinguishing colors, and constructed ones, like religious experiences. Upon first
examination, this may seem to be the most natural response to Sharf’s conclusion. In fact, it
seems to be the sort of position that Sharf himself endorses when he writes, “I am not trying to
deny subjective experience. (Indeed, how would one do that?) I merely want to draw attention
to the way the concept functions in religious discourse.”56 If this explanation is correct, then
Sharf has simply identified and isolated a particular sort of experience and shown why it cannot
be treated in the same way that we treat our other more “ordinary” experiences. This
interpretation acknowledges the weight of the qualia-friendly counterargument, highlighting the
distinction between an originary event (e.g. an alleged alien abduction) and the secondary
phenomenological reconstruction of that event. On this reading, the only phenomenological
component of abduction lies in the current recollection and report, not in the originary event.
A stronger way of reading Sharf’s conclusion, though, contends that this distinction is not
a meaningful one. Such an interpretation might go something like this: Sharf is right: reports of
religious experiences, like reports of alien abductions, refer to no originary phenomenal event.
Thus, perhaps we should reexamine our current assumptions about experiences. Perhaps this
originary phenomenal event is a fiction in the case of ordinary taste and color sensations as
much as it is in the case of alien abductions and meditation-induced ‘pure consciousness.’
Religious experiences are just one example of experience as such, and experience as such seems
to have been entirely misconceived. This sort of interpretation relies on a strong version of the
previously sketched analogy response. The reply to the qualia counter-claim points out that, just
as the alleged originary event of an alien abduction can and should be disregarded in favor of the
psychological work that results in its report, so the new “originary event” of phenomenology
(what it feels like to remember the event and to believe it really happened) can be entirely
explained in terms of “easy problem” processes that generate the narrative report in question.
There is nothing substantively different about these two types of originary event (actual alien
56 Ibid., 113.
abduction or actual phenomenology) to warrant a disparity in our treatment of them. There is
merely a difference in time span between the alleged event and its report, and perhaps our
cultural willingness to deny gods and aliens before we deny qualia.
If this second reading is a better interpretation of Sharf, then we must be agnostic (to use
Dennett’s term) about others’—and our own—claims about conscious experiences. That is,
though reports about phenomenal experiences might be valuable tools in learning about the way
we interpret the world, we cannot simply assume that these reports refer to any actual object or
property that requires further study or explanation. Thus, we need a new understanding of the
concept of “experience,” one that does not implicate qualia and that accounts for the entirety of
our abilities and interactions with the world. The idea of an “experience-less” experience might
seem absurd, rooted as we are in the traditional philosophical narrative of qualia. Shaken free of
these assumptions and intuitions, though, it allows for a more comprehensive physicalist picture
of our mental functioning as continuous with the rest of the physical world and with our current
scientific understanding. Our quasi-“scientific” explanations of consciousness need no longer
refer, implicitly or explicitly, to the fundamental character of some necessarily non-objectively-observable component of private subjectivity—some sort of non-physical mental realm, or a
supervening structure of understanding, or the “producing” or “arising” of one thing
(phenomenal experience) from something entirely different (e.g. neural structures or brain
states). Rather, explanations can remain firmly in the physical realm, concerning only those
components of cognition already safely categorized as “easy problems”: perception, memory,
language. This means that if Sharf is right, there is no need to begin with qualia. Physicalists are
allowed a complete account of mentality without accusations of leaving something out. Using
Sharf’s account as representative of the broad strategy, then, we can see our way to a detailed
empirical model of the mind without qualia, such as the one pursued by Dennett (1991).
I.iii The Non-Cartesian Mind
In addition to its unavoidably dualistic depiction of consciousness, our driving Cartesian
intuition also results in a potentially misleading framework for thinking about and modeling the
mind. Because we generally think of consciousness as unified and continuous, and because we
know that the brain is constantly receiving countless simultaneous inputs of sensory information,
we ordinarily assume that there must be somewhere or something in the brain that is responsible
for decoding or binding or unifying all these disconnected data-streams. Even though most
neuroscientists no longer believe that there is a single brain area responsible for combining and
interpreting all sensory processes, there is still the seemingly irresistible tendency to assume that
this information must be bound together somehow. We ordinarily think that this unified stream
of information must be “presented” so that it can be perceived “in consciousness.” Whether or
not it maps onto neurological reality, this sort of intuitive characterization can often be
found implicit in the work of neuroscientists who do not explicitly take the
“control center” concept seriously. The prevalence of these assumptions is due not only to our
reported experience of continuity, but also to the fact that there seems to be a marked and rigidly
defined difference between the sensory information to which we have access and that which is
processed subconsciously.
Dennett refers to this sort of theoretical picture as the “Cartesian Theater,”57 because it is
this way of thinking about the mind that allows for several of our metaphorical
(mis)understandings of consciousness: consciousness as a stage upon which mental images and
57 Dennett, D. (1991), e.g. 39, 107.
thoughts and emotions perform, or consciousness as a spotlight that focuses its attention on
particular representations of sense data and thus brings them into consciousness. Qualia are the
actors upon the stage of consciousness, illuminated in turn by its spotlight. In addition to
providing an ideal atmosphere for the cultivation of qualia, the Cartesian Theater model carries
several conceptual implications, illustrated by the often confused and problematic ways in which
we tend to think about our experiences and the mental processes that “cause” them. First, it
reinforces the idea of a distinct and discoverable dividing line between
conscious thoughts and experiences and non-conscious ones: to be conscious is merely to be
interpreted by and presented to the Theater’s audience. Similarly, it necessitates this idea of an
“audience,” or an “intelligent” neural structure with the ability to interpret all sensory data. This
is at best a means of leaving the problem (“how does neural information become experience?”)
intact but deferring it from a person to a bit of neural tissue, and at worst a barely disguised
appeal to a homunculus. Finally, the metaphor of the stage or the spotlight and the resulting quasi-materialist theories are problematic in that they imply a continuous stream of information that
persists whether or not the “spotlight” is focused on it or the “audience” is “watching.”
Because our experience of continuity is not actually as continuous as we might initially
assume,58 and because the Cartesian Theater model does not appear to be an accurate or
workable functional or neurological account of the way the mind—and consciousness—works,
we have to look for alternative ways to make sense of both the way the world seems to us and the
way our nervous systems actually function. Dennett calls his version of such an alternative
picture the “Multiple Drafts” model,59 because rather than a constant stream of information, it
postulates myriads of simultaneous interpretive processes that, when “probed” (either by internal
58 Similarly, see Wilkes, K. (1984), p. 229-230, for an argument that the “stage” and “spotlight” metaphors are inconsistent not only with neurological functioning but also with instances of reported phenomenology.
59 Dennett, D. (1991), 111.
reflexive processes or external stimulation) result in a single stream of narrative information.
This account does not rely on the Cartesian-inspired picture of a “control center” in the brain to
which neural information is presented (“in consciousness”) and which produces a cohesive and
continuous flow of experiential information (to—whom?). Rather, it postulates a collection of
“stupid” lower-level systems that individually and interconnectedly process neural data, the
strongest of which are output as speech that is expressed either privately (as thought) or publicly.
Dennett’s model claims that there is no single “stream of consciousness,” no single cohesive
output of information within the brain that can become conscious at some threshold point (or
not). There is no need for any active “filling in” by the brain in order to produce a single
coherent output. Reports on our “conscious experiences,” then, are the products of this probe-and-response mechanism. Similarly, thinking is a sort of self-stimulated probe-and-response
system, which is why asking ourselves questions during the process of problem-solving is a
surprisingly fruitful practice. Importantly, the question of when something becomes conscious
is, under this system, incoherent, because consciousness just is this response to probing—Sharf’s
linguistic reports that take the place of religious “experiences.”
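To make the shape of this probe-and-response picture more vivid, it can be caricatured computationally. The following sketch is my own illustrative toy (written in Python), not anything Dennett himself provides; every name and number in it—DraftProcess, probe, the list of labels—is an invented placeholder, and the sketch is meant only to display the logic of the claim, not to model the brain:

    # Toy illustration only: many "stupid" parallel processes each maintain a
    # partial interpretation (a "draft") of incoming stimulation; a probe simply
    # reports whichever drafts happen to be strongest at that moment, producing
    # a single narrative after the fact rather than a pre-existing stream.
    import random

    class DraftProcess:
        def __init__(self, label):
            self.label = label        # e.g. a color-, motion-, or word-process
            self.content = None       # this process's current partial draft
            self.strength = 0.0       # how strongly the draft is active right now

        def revise(self, stimulus):
            # each low-level process revises its own draft independently
            self.content = f"{self.label}: {stimulus}"
            self.strength = random.random()

    def probe(processes, top_n=3):
        # a probe (a question, an act of "introspection") fixes a single report
        strongest = sorted(processes, key=lambda p: p.strength, reverse=True)[:top_n]
        return "; ".join(p.content for p in strongest if p.content)

    processes = [DraftProcess(label) for label in ["color", "motion", "sound", "word"]]
    for stimulus in ["red flash", "bell", "the word 'stop'"]:
        for p in processes:
            p.revise(stimulus)
    print(probe(processes))  # the "stream of consciousness" exists only as this report

The point of the caricature is only that the single “stream” exists nowhere in the system until a probe forces a report; the drafts are never collected in an inner theater beforehand.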
It is possible that this account still seems to be leaving something out or, worse,
problematically dishonest. This is because it relies heavily on language, which plays a crucial role
in the system by occupying input, output, and intermediary positions. Though the account seems to
allow language to serve all the meaningful and intentional roles in which we ordinarily assume it
operates, it does not account for how this non-Cartesian and purely mechanistic picture can
have anything to do with such concepts as “meaning” or “intentionality” at all. In fact, these
concepts seem to imply the existence of that perennially problematic audience or homunculus or
“intelligent” neural structure that can “intend” things, and “mean” them when it does. However,
rather than resorting to something like a “Central Meaner,”60 Dennett’s system again relies solely
on the individual and collective functioning of relatively low-level and “stupid” processes.
These processes, in response to linguistic input (just one particular kind of physical sensory data,
after all), produce words, phrases, and grammatical structures, while parallel processes impose
restrictions, obsessively check for bits of triggered data, etc., all according to neural systems of
interconnections that vary in strength and reinforcement. The resulting linguistic information is
output externally or internally, in which case we might have several competing “thoughts” which
are then re-subjected to competing and parallel processes of revision.
Dennett’s complete account of the mind/brain takes on a familiar and metaphorical
configuration: the brain as a computer, though a parallel-processing computer far more complex
than any we can currently—or, perhaps, ever—hope to create. According to this account, the
brain is a vastly complicated hardware system on which various software programs (systems of
reinforced connections between neurons) are implemented and run. These programs are
sometimes genetically implemented (in the case of “innate ideas” and instinctive cognitive
functions), but are most often developmentally and culturally “installed” and modified via the
workings of language. Such ubiquitous self-replicating cultural artifacts (e.g. writing, religion,
advertising jingles, veganism, the wheel—and, perhaps, consciousness) are called memes,61 and
they shape the structure and “meaning” of our language as well as the overall functioning of our
hardware systems. This is a vastly oversimplified condensation of Dennett’s position, but it
helps to highlight the most important objections to it as well as its potentially interesting
implications for a complete account of consciousness.
60 Ibid., 228.
61 For a more fully developed account of memes, see Dennett, D. (1991), p. 199-226; Dawkins, R. (1989), ch. 11.
There seem to be three distinct avenues for objection to Dennett’s model. First, one
might question part of his characterization of the brain on empirical grounds. For example,
Churchland and Ramachandran (1993) accuse him of “Reasoning more like a computer engineer
who knows a lot about the architectural details of the device in front of him than like a
neurobiologist who realizes how much is still to be learned about the brain”62 and find fault with
some of the neurological details of his theory, such as his interpretation of the way the brain
processes blind spots. However, regardless of the final neurological story, barring the discovery
of some dramatically divergent data, the empirical detail work—though certainly essential to
developing a comprehensive physicalist account of the mind—does not significantly affect the
ontological implications of the Multiple Drafts model. The occurrence of critiques such as
Churchland and Ramachandran’s indicates that if there are problems with Dennett’s account,
they are empirical problems, best exposed and corrected through empirical testing. This is
one of the crucial conclusions, of course, that Dennett is working to convey with his
heterophenomenological method.63
Second, one might take issue with Dennett’s reliance on the “mind as computer” model.
It should at least give us pause that, for as long as we have been talking about the mind, we have
been doing so in the metaphorical terms of our technology of the time, and that Dennett’s
account is no exception to this general trend. It seems likely that whatever actually goes on at a
physical level is far more continuous than any imposed metaphorical structure would allow, that
any boundaries and distinctions we might draw for the sake of comprehension exist solely in the
62 Churchland, P.S. & Ramachandran, V. (1993), 42.
63 That is, “the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological scruples of science” (Dennett, D. (1991), 72). Heterophenomenology involves applying the intentional stance (see e.g. Dennett, D. (1987)) in order to study consciousness through a combination of subjects’ phenomenological reports and more traditional empirical data. (Verbal reports are, after all, just another sort of physical, empirically accountable process.)
limitations of our understanding rather than in the physical structure of the brain itself. This
caveat is worth noting and worth reminding ourselves of when thinking about the brain as part of
an organic system, but—depending on one’s interpretation of the role of Dennett’s metaphor
system—it need not greatly affect the important implications of his claims.
There are at least three ways of interpreting Dennett’s hardware/software model, and the
apparent absurdity of the position varies with respect to one’s interpretation of choice. To take
Dennett at his most literal, we might read him as claiming that the brain actually is a vastly
complex parallel processing computer and that the mind actually is a collection of programs
running on the brain’s hardware, or, at least, that the brain/mind functions exactly like a
hardware/software system. Dennett seems to shrug off this reading as ridiculous,64 though at
times this appears to be the position attributed to him by Searle (1995). Alternatively, we might
interpret the computer model to have been presented as the best way to understand the mind,
even if it is not entirely literally accurate. In order to make conceptual sense of any purely
descriptive scientific data, we must interpret those data metaphorically; perfect literality, then, is
a theoretical impossibility.65 This interpretation is less obviously objectionable than the wholly
literal reading, but it faces the same sorts of critiques regarding the time-sensitivity of
technology-based models and an overconfidence in present-day Artificial Intelligence research.
The most charitable reading of Dennett claims that the intended function of his metaphor
system is not to literally represent reality or even to metaphorically mirror it as closely and
accurately as currently possible. Rather, its goal is merely to shift the focus of the current
64 “Here is a bad idea: our hominid ancestors needed to think in a more sophisticated, logical way, so natural selection gradually designed and installed a hard-wired von Neumann machine in the left (‘logical,’ ‘conscious’) hemisphere of the human cortex. […] although that might be logically possible, it has no biological plausibility at all—our ancestors might as easily have sprouted wings or been born with pistols in their hands. That is not how evolution does its work” (Dennett, D. (1991), 214-15).
65 I revisit the role of metaphor in conceptual theorizing in Part II.
conversation.66 This interpretation seems to be the most useful one from which to appreciate the
implications of Dennett’s ideas, and to build on them. Dennett himself admits that, in putting
forth the Multiple Drafts model to replace the Cartesian Theater, he has merely exchanged one
set of metaphors for another.67 The replacements are not just any metaphors, however: while
these new metaphors may not be literally descriptive, they seem to result in far less, or at least
less obviously problematic, philosophical baggage. Thus, Dennett is not arguing—as Searle
seems to suggest—that modern AI has discovered how to model the mind through parallel
processing computers and connectionist networks (though these attempts are certainly closer
than old computational systems). Rather, these developing insights in AI and cognitive science
research provide us with a new set of metaphors that help us avoid the metaphysical quandaries
fostered by previous metaphorical systems of understanding. Dennett’s model, then, can be seen
not as an ultimate solution but as a much-needed change of subject. The crucial implication to
draw from Dennett’s hypothesizing is that it is conceptually possible to account for everything
that we consider to be “intelligent” or “higher-order” or “conscious” or “intentional” within a
physicalistic framework, without the need to appeal to qualia at all. He does not simply accept
the traditional formulation of the conscious mind, minus the qualia bit. Rather, he maintains that
the traditional formulation, which allows for a “qualia bit” (or not), is incoherent.
Without the Cartesian Theater and its problematic accompanying metaphors, then, it
becomes easier to see how a complete account of the mind can, at least in theory, function
without qualia. However, it is difficult to avoid the lingering temptation to complain that this
account is still importantly missing the point. No matter how thoroughly and convincingly we
construct our phenomenology-free models of the mind, it seems that we must be deluding
66 I suspect that this must be close to the interpretation Dennett intended—see e.g. p. 219.
67 Dennett, D. (1991), 455.
ourselves if we think that such accounts truly eliminate the problem. The third objection, then, is
that familiar appeal to something’s being left out. Searle reminds us of the absurdity—the
“intellectual pathology”—of any position, such as Dennett’s, that attempts to explain away
qualia. He asks readers to pinch themselves and then deny that, upon doing so, it feels like
something. Of course, his readers’ inevitable confirmation of this statement indicates, for Searle,
that something in Dennett’s account has gone horribly wrong. Searle seems to be arguing that those
“qualia” Dennett shrugs off so easily cannot truly be qualia themselves but must rather be a sort of straw
man. He claims that one of Dennett’s tricks “is to describe the opposing view as relying on
‘ineffable’ entities. But there is nothing ineffable about the pain you feel when you pinch
yourself.”68 Though we might give up a belief in something as demonstrably unempirical as
“ineffable entities” in order to be good scientists, the claim that such silliness is all there is to
consciousness is either self-deception or delusion. Thus, one might argue, perhaps the fact that
there is no room for qualia in this Multiple Drafts model of the mind is evidence enough to
conclude that the model is wrong.
Dennett combats this conclusion by working to illustrate that, by attempting to cling to
qualia, we are merely clinging to confusions. He challenges the thought experiments’ appeals to
the defined properties of qualia, demonstrating that our intuitions about them are far more
confused and less consistent than the consciousness literature tends to assume. Dennett
maintains that we are making a misstep whenever we claim, seemingly innocuously enough, that
we can separate the qualitative component of experience—what it is like for me to taste orange
juice at time t—from the entirety of the physiological, psychological, and associative properties
that are involved. This assumption is problematic not because of its practical impossibility but
because it generates “the more fundamental mistake of supposing that there is such a residual
68 Searle, J. (1995).
property to take seriously.”69 The usual thought experiments exploit the implications of this
original misstep, setting up situations in which we think we can isolate qualia and thus conclude that
qualia exist. However, Dennett illustrates that qualia cannot actually have any of the special
characteristics that the thought experiments and the related arguments assume—that there is, in
fact, no coherent or conceivable property that the concept qualia tracks.
First, we cannot be infallible with respect to these alleged qualia. If we taste something
at time a and taste it again at time b, and the two experiences seem different to us, there are at
least two possible explanations for this phenomenon: either our particular tasting qualia involved
have shifted, or our memory has been altered, such that after the second tasting we misremember
the phenomenal character of the first event. If asked to describe the difference between these
taste sensations, we have to pick one of these options, which means that we could be incorrect.
After all, the only sense in which our assertions about qualia could be taken as infallible (or,
rather, incorrigible, in that they cannot be proven wrong70) is one so crippled that it might be
unrecognizable to its defenders: “it is an empty assertion, a mere demonstration that this is
how you fancy talking at this moment.”71 Worse for the defenders of qualia, there is no first-person way to discriminate between these possibilities. We cannot rely on intuition, as both
possibilities seem to have intuitive weight. This means that whatever dividing line we draw
between a particular quale and the influences of memory is unavoidably arbitrary. There must be
a dividing line, however, if qualia are to be intrinsic in the sense that they are unique properties
identifiable in isolation. Furthermore, the only way to determine which of these possibilities is
correct is to do some sort of third-person, observable testing: tasting something else and noting
69 Dennett, D. (1988), 45.
70 Rorty, R. (1970a). Also see Parsons, K. (1970) for a discussion of why we should not hold an incorrigibility theory with respect to qualia.
71 Dennett, D. (2002), 15.
consistency in judgments, or keeping track of tasting consistency over time, etc. Thus, qualia
cannot truly be immediately apprehensible, because their changes are best perceived not by
introspection but by third-person observation.72
Often, these hypothetical examples may seem to be changing the subject, but the
important observation is that “the subject” is far less easily bounded than is entailed by the way it
is usually described. There is nothing that can have all the properties that qualia allegedly have;
and without these properties, qualia are not qualia in the important sense. Dennett shows that the
“obviousness” of qualia is a construct of the thought experiments: “there is no secure foundation
in ordinary ‘folk psychology’ for a concept of qualia. We normally think in a confused and
potentially incoherent way about the way things seem to us.”73 Qualia, then, are unintelligible.
Without the support of the undeniable obviousness of intuition, they lose the driving force of the
argument for their existence. We can adopt a Sharfian understanding of experience without
worrying about leaving something out, without relapsing into the assertion that there must be
something more, something different. Consciousness just is a physical process of structure and
functioning—a system of recursive processes and resulting narrative reports expressed to others
and ourselves, private only in the uninteresting sense that they are not always produced vocally.
The Sharf/Dennett method of explanation and understanding forces us to fundamentally
redefine what we are talking about when we talk about experience. As Sharf points out, though,
this should not be phrased as an attempt to eliminate subjective experience—what would that
attempt even look like? Sharf acknowledges that an understanding such as the one he provides
strikes us as conceptually distasteful because it seems to be denying something fundamental,
something obviously apparent and undeniable, about what it is to be human. Similarly, Dennett
72 See Schwitzgebel, E. (2008) for an extended discussion of the unreliability of introspection for investigating our conscious experiences.
73 Dennett, D. (1988), 62.
tries to ease readers into a new way of thinking about these issues, because an outright denial of
something that strikes us as intrinsic to what we are causes that position to be viewed with
considerable suspicion. Philosophers’ thought experiments take advantage of the illusory clarity
of our intuitions and lead us to stake everything on impossible properties—so that, when those
properties are challenged, we cannot help feeling that something has gone massively wrong.
In this section, I have tried to be diagnostic, and I have tried to avoid ignoring or denying
anything important. Instead I have merely worked to show that we cannot simply take our
assumptions about our intuitions as given in the search for understanding. Upon further
examination, the Cartesian intuitions upon which most of the current controversy over
consciousness rests do not appear as solid or as unshakable as we might have assumed. While
this claim on its own is not enough to discredit a powerful way of thinking, the weakening of one
set of intuitions might open the discussion to another intuition that is equally prevalent in our
experience but largely ignored by the philosophical discussion of consciousness thus far.
Moreover, the above account of phenomenal experience does not render that oft-cited
“what it is like” question unanswerable: clearly we are able to provide extensive, detailed, varied
accounts of the nature of human experience. In order to understand “subjective” consciousness,
then, we need look no further than these accounts themselves—than the narrative through which
we live and understand our lives. In Part II, I use literary narrative to approach an understanding
of consciousness from a different perspective, one which owes more to a competing intuition
about our experiences than the Cartesian view that has previously dominated the discussion. I
aim to develop an empirically based conception of the mind supported by this competing
intuition that not only provides a physicalist account of consciousness, but also illuminates our
capacity for understanding literature.
PART II: CONSIDERING THE COMPETING INTERPERSONAL INTUITION
There is another powerful intuition that drives our interactions in the world: the intuition
that we can know what it is like to be another person. This intuition is supported by our ability
to empathize with others, by our tendency to make moral judgments by “putting ourselves in
others’ shoes,” by our almost reflexive reaction to respond to others’ pain as if it were our own.
These areas have sparked involved and sophisticated philosophical discussions of their own, and
a comprehensive treatment of them in context would go beyond the scope of this paper. There is
another outlet for our competing intuition, though, which has not yet been given much attention
as such and which seems uniquely suited to the discussion of consciousness: namely, the extent
to which these interpersonal intuitions motivate and are embodied by our ability to read and
understand literature.
Many have claimed that the power of literature lies in its ability to allow us to move
beyond our own limited perspective and to experience things from alternative and unfamiliar
points of view.74 Through literature, we have a particular sort of access to other consciousnesses
that strikes us as different from ordinary cases in which we interact with and “know” other
people. It is a more immediate-seeming sort of access, something like the “privileged access”
that, under the traditional view, we might have reserved for our own conscious experience. In
what follows, I claim that this understanding of literature is more than the result of an empty
platitude or a romantic and meaningless conceit. It seems that the observation captured here
tracks something important about the cognitive process involved in reading literature. Thus, a
74 E.g. E.M. Forster (1951): “What is wonderful about great literature is that it transforms the man who reads it towards the condition of the man who wrote”; Alexander Solzhenitsyn (1970): “the only substitute for an experience we ourselves have never lived through is art, literature”; Samuel Johnson (1781): “[In literature] new things are made familiar, and familiar things are made new”; Poulet, G. (1969): “Because of the strange invasion of my person by the thoughts of another, I am a self who is granted the experience of thinking thoughts foreign to him. I am the subject of thoughts other than my own. My consciousness behaves as though it were the consciousness of another.”
more comprehensive study of this cognitive process, particularly in the case of stream-of-consciousness literature, might provide us with a firmer starting point from which to approach a
non-Cartesian understanding of our own consciousness.
II.i Why Literature?
It may seem odd at first to approach the philosophical discussion of consciousness from
the perspective of the presentation of consciousness in literature. After all, literature might strike
us as quite explicitly different from our ordinary experiences both with our own consciousness
and with other people in that it is authored—that is, it is intentionally structured and presented to
readers in a particular way in order to achieve a particular effect.75 Even if our experience with
literature is different in kind from our “non-fictional” experience, though, this need not in itself
present a problem for the proposed use of literature as exploratory or explanatory tool. The
“authoredness” of literature might actually be a point in its favor—particularly in the case of
stream-of-consciousness literature, which fairly unambiguously aims to present to a reader what
it is like to be another human character. Literature is created by creatures like us and meant to be
understood by creatures like us. Thus, it acts as a convenient representation of the way we
experience mentality and the conditions to which we intuitively imagine that mentality must
conform. Moreover, literature seems to be uniquely both subjective and objective, in that it portrays
individual experiences in a manner that is replicable for multiple readers—and it has been argued
that the impenetrable subjective/objective distinction lies at the heart of the Hard Problem.76 We
might conceive of stream-of-consciousness literature, then, as an illustration of the necessary
75 I realize that “intentionally” is a loaded word in literary theory, but I do not need to make a controversial theoretical claim about the role of authorial intention in the interpretation of literature, here. All I need to point out is that the fact that we unavoidably understand a work of literature as “authored” means that we view it differently than we would an arbitrary collection of shapes identifiable as letters traced by a wave in the sand, or an unmediated conversation. (See e.g. Knapp, S. & Michaels, W. (1995) on intentionless meaning (p. 54); Carroll, N. (1995).)
76 See e.g. Xu, X. (2004), Nagel, T. (1974, 1986); c.f. Akins, K. (1993a, 1993b).
(and, perhaps, sufficient) conditions required to formulate a non-Cartesian response to Nagel’s
“what is it like” question. Admittedly, one might decide that literature marks a special case, that
the kind of access it tracks is never physically attainable outside of fiction (though I will
eventually argue that the difference between the literary case and the “non-fictional” first-person/intersubjective case is merely one of degree). However, regardless of our ultimate
conclusion in this respect, the point remains that literature can serve as a guide to understanding
our relation to our own consciousness within a non-Cartesian framework.
In this account, I focus on stream-of-consciousness literature, as it is the most obviously
explicit and deliberate textual representation of consciousness and thus the most helpful example
for the kind of point I want to make. Perhaps the most notable feature of stream-of-consciousness literature for my purposes is its presentation as a character’s direct thought. These
direct thoughts mark the same sort of internal monologue via which we characterize and narrate
our experiences, and it is access to this sort of internal monologue that seems closest to what,
under a Cartesian account, might have been referred to as “private access.” According to the
Sharf/Dennett conception of consciousness elaborated in Part I, consciousness is successive
linguistic responses to internal or external probing. If Sharf and Dennett are right about the
always-potentially-public narrative character of “internal” or “mental” processing, then our
direct thoughts simply are sentences such as the ones presented in stream-of-consciousness
fiction. Thinking is saying things to ourselves (this is, of course, that “silly theory that thinking
is talking to oneself”77 dismissed by Fodor in a one-line parenthetical remark—a theory that,
perhaps, has more plausibility than he is willing to entertain). Similarly, I argue, reading is
saying things to ourselves, except that the things we say to ourselves in this case are prompted
not by internal or environmental sensory stimulation but by semantic marks on a page.
77 Fodor, J. (1998), 10.
Repeating written words “inside our heads” and repeating the prompted words that make up our
internally sustained “stream of consciousness” are, essentially, the same processes. When we
read direct thoughts, then, those thoughts become our own in a notably literal sense. Poulet’s
quotation attains a new significance: “I am a self who is granted the experience of thinking
thoughts foreign to him. I am the subject of thoughts other than my own.”78 Our thoughts are
structured and strung together by an author, but this need not implicate any extra internal level of
translation from the written representation of another’s experience to our own internal interpretation or
understanding.
Before beginning my discussion, though, I offer two disclaimers. First, the narrowness of
subject resulting from the restriction of my account to stream-of-consciousness literature is due
solely to the specificity and constraints of my project79—I do not intend it to imply a claim about
the purpose or merit of one literary genre versus any other. I cannot pretend to be describing all
of literature, and I cannot pretend to be giving a comprehensive account of the complexities of
literary works and the complexities of our interactions with them. Second, and similarly, my
description of reading is admittedly an extremely reductive one. I focus on the process of
reading only at the cognitive and neural levels, and thus I do not address the interpretive and
interactive aspect of reading that has fueled countless works in literary studies. I am interested in
the way the cognitive and neural levels of description affect what goes on in literary
interpretation, though, and I occasionally make interpretive claims of my own based on the
effects of these levels.
78 Poulet, G. (1969), 56.
79 In fact, I do think that the same sort of argument can be expanded to account for our ability to read other types of literature as well, though that account is more complicated and longer than space allows (see Schneebaum, R. (In preparation)). Also see Palmer, A. (2004) for a discussion of how all literature can be seen as the “presentation of fictional mental functioning” (177).
Despite the relatively limited nature of my allocated subject matter, the variety and power
of the sorts of consciousness-narratives to which stream-of-consciousness literature provides
access are impressive:
But the silence weighs on me—the perpetual solicitation of the eye. The pressure is intermittent
and muffled. I distinguish too little and too vaguely. The bell is pressed and I do not ring or give
out irrelevant clamours all jangled. I am titillated inordinately by some splendor; the ruffled
crimson against the green lining; the march of pillars; the orange light behind the black, prickled
ears of the olive trees. Arrows of sensation strike from my spine, but without order.80
. . . I can see his face clean shaven Frseeeeeeeeeeeeeeeeeeeefrong that train again weeping tone
once in the dear deaead days beyondre call close my eyes breath my lips forward kiss sad look
eyes open piano ere oer the world the mists began I hate that istsbeg comes loves sweet
ssooooooong. …81
It was empty too, the pipes, the porcelain, the stained quiet walls, the throne of contemplation. I
had forgotten the glass, but I could hands can see cooling fingers invisible swan-throat where less
than Moses rod the glass touch tentative not to drumming lean cool throat drumming cooling the
metal the glass full overfull cooling the glass the fingers flushing sleep leaving the taste of
dampened sleep in the long silence of the throat …82
I will bind flowers in one garland and clasp them and present them—Oh! to whom? There is
some check in the flow of my being; a deep stream presses on some obstacle; it jerks; it tugs; some
knot in the centre resists. Oh, this is pain, this is anguish! I faint, I fail. Now my body thaws; I
am unsealed, I am incandescent. Now the stream pours in a deep tide fertilizing, opening the shut,
forcing the tight-folded, flooding free.83
My claim is that these passages, and others like them, are what consciousness is—one’s unique
and varied experiential narratives.
II.ii Reading Consciousness: An Empirical Account
In this section, I offer an empirical explanation of the processes involved in reading the
consciousnesses presented in stream-of-consciousness literature without relying on any Cartesian
misconceptions. This explanation will eventually allow me to show that the same processes are
involved in our own production of consciousness narratives. While there are innumerable
physical processes that occur during the act of reading a work of literary narrative, I focus on
80 Woolf, V. (2004), 103.
81 Joyce, J. (1998), 713.
82 Faulkner, W. (1990), 173-174.
83 Woolf, V. (2004), 35.
only a small chapter of the story. My account is limited to the space between perception of input
(text) and a reader’s physical response, and it is supported with empirical data from cognitive
and neural levels of description. These data suggest that we understand stream-of-consciousness
literature from a “first-person” perspective, by which we actually take on the consciousness of
the character described. Before developing this account in detail, I contrast it with its theoretical
opposition in the form of a more traditional “third-person” account of the reading process.
Perhaps the most immediately intuitive explanation for the cognitive processing involved
in understanding literary descriptions is that, when reading literature, we construct mental images
of the characters and scenes described.84 Under this account, when we read that “Lily, looking at
Minta being charming to Mr. Ramsey at the other end of the table, flinched for her exposed to
those fangs,”85 we effectively see her do so. This explanation may initially seem satisfying—
after all, we often express dismay when Hollywood’s rendition of our favorite literary character
looks different from our internal imagining. However, the fact that we give this intuitive
response does not demand that we originally possessed a complete mental image of that
character to which we could subsequently compare the actor. In fact, if pressed, we find it quite
difficult to fill in any details at all about our alleged mental images that are not explicitly
referenced in the text, such as the color of the character’s shirt.86 From the perspective of first-person reports, we must admit that reading a book simply is not like watching a film: we do not
constantly “see” what is going on in anything like a visual sense.
84 For a fuller discussion of the potential relation between semantic processing and mental imagery, see e.g. Yaxley, R. & Zwaan, R. (2007); Zwaan, R., Stanfield, R., & Yaxley, R. (2002). Also see Engelkamp, J. (1995); Richardson, D. et al. (2003).
85 Woolf, V. (2006), 83.
86 Dennett (1991) makes a similar point when he asks readers to imagine a purple cow and then to answer questions about the resulting mental image’s physical details (which, to readers’ surprise, we cannot do).
Furthermore, there is developing empirical evidence that mental “imagery” might not
actually involve “images.” Zenon Pylyshyn, for example, argues that what we call “mental
imagery” does not engage anything like visual perception. Rather, it involves processes of
linguistic representation and spatiotemporal physical manipulation.87 On a view like
Pylyshyn’s, we do not see a character; we “represent” that character with a set of language-like
entities and semantic associations.88 We do not see the position of troops on a fictional
battlefield or a map of a fictional country; we make use of the physical space in our own
environment to map distance and location. Cognitive details of the theory aside, our linguistic
conventions need not correspond in any meaningful way to the physical processes that underlie
them. The visual terms with which we talk about “imagining,” then, do not necessarily
illuminate anything about the process to which they refer.89 Thus, the mental imagery hypothesis
is both intuitively questionable and empirically empty.
Even if not explicitly referencing mental imagery, another potential account of the
cognitive process of reading involves an appeal to mental representations more broadly. Lisa
Zunshine, for example, articulates a cognitive theory of literary interpretation in which the act of
reading involves forming hierarchized levels of mental representations for the mental states and
levels of intentionality of the presented characters and character-interactions in a text.90
Zunshine claims that we always approach a text from a strictly third-person perspective, and that
part of the cognitive enjoyment derived from reading consists in our ability to keep our
“fictional” and “real world” mental representations separate: “The pleasure of being ‘tested’ by a
87 E.g. Pylyshyn, Z. (2002, 2003).
88 Ibid. An anti-imagery position need not be wedded to a language of thought account such as Pylyshyn’s, though. One might instead postulate something like a connectionist system of variably strengthened neural associations.
89 The fact that we do have such visual perception-centric linguistic conventions, though, might be interesting in itself—this point is addressed in more detail later on.
90 Zunshine, L. (2006).
fictional text—the pleasure of being aware, that is, that we are actively engaging our apparently
well-functioning Theory of Mind—is thus never completely free from the danger of allowing the
‘phantoms of imagination’ too strong a foothold in our view of our social world.”91 While
Zunshine’s account is restricted to our representations of characters’ mental states, the third-person theorist must maintain that this sort of approach will be able to explain the way we
represent other objects and relations of objects in the fictional world, as well. In order to make
sense of what we do when we read, our task might be, for example, to “give an exhaustive
account of the different representations that evolve during moment-by-moment reading.”92
In developing a representational theory, Gerrig and Egidi give a cursory and extremely
minimal list of the sorts of memory-based representations we must form in order to understand a
short passage from a Eudora Welty story: “Readers may retrieve from memory their own past
knowledge of restaurants to make sense of the general framing of the scene. They may retrieve
their knowledge of marital infidelity to understand how it might be marked on a woman’s face.
They may retrieve memories of social interactions to create a context. . . .”93 However, this sort
of representation-based account seems rife with problematic assumptions or gaps that a complete
account must fill in: for example, the representationalist must determine whether we develop a
stock representation of concepts referenced by literary works, or some sort of placeholder on
which we map details provided by the individual narrative; whether we modify a single
representation over time with the influence of additional inputs, or whether we constantly form
new representations with each read word. It seems, too, that this talk of “mental representation”
often becomes merely another means of referencing mental imagery—and if the
91 Ibid., 19. By “Theory of Mind,” she means a cognitively encoded system of rules used to ascribe mental states.
92 Gerrig, R. & Egidi, G. (2003), 35. This is only one of three potential goals of cognitive psychological theories of reading presented by the authors.
93 Ibid., 33-34.
representationalist wishes to maintain a distinction here, she must account for what these
representations are, exactly. Thus, the representationalist runs into trouble for what seems to be
the same reason that the Cartesian does: she wants to create and fill a gap between the physical
world and an individual’s reactions to and interactions with and manipulations of that world by
positing a sort of “internal copy” of the world that exists in some “internal space.” What that
copy consists in may be problematically vague and variant depending on one’s representational
theory of choice. The fundamental problem, though, is the initial assumption that there is such
an internal in-between.
The explicitly third-person perspective such an approach adopts toward literature, too,
seems misguided in itself. This sort of representational third-person strategy privileges the
Cartesian intuitions previously debunked. Moreover, this Cartesian strategy seems to be missing
something important about the reading process. It seems that, when reading, we just know how
characters feel without applying any explicit interpretive work94—reading literature is the
paradigm case of our interpersonal intuitions in action. In order to demonstrate that this claim is
less mysterious than the representationalist’s case, my goal in this section is to fill in the
empirical details of this “just knowing.”
I claim that we understand literature via a “first-person” process of embodied physical
simulation. Whereas the Cartesian third-person mental representation account starts to look
more questionable upon the examination of empirical data, the interpersonal first-person account
is supported by developing research in cognitive science and neuroscience. At the cognitive
level, I use research in conceptual metaphor theory to show that our ability to read and
understand abstract concepts is rooted in embodied sensorimotor processes. At the neural level, I
94 We may, of course, do some explicit interpretive work when examining our own readings of literature in order to theorize about them—but this is already a different topic.
fill in the cognitive account with hypotheses stemming from current mirror neuron research to
illustrate how this connection between input text and reader’s physical response might work.
Together, these empirical data suggest that reading a conscious narrative report involves taking
on (in an only slightly metaphorical sense) the embodiment of the character described.
Moreover, I claim that it is this act of embodiment in response to a produced conscious narrative
that constitutes the reader’s access to the character’s consciousness. Finally, I apply this
empirical account of reading literature to a conceptually continuous explanation of what goes on
in the case of our own conscious-narrative creation.
II.ii.i Reading at the Cognitive Level: Conceptual Metaphor Theory95
When we read a work of stream-of-consciousness literature, most of what we read is
metaphorical. This claim may appear to be an unwarranted overgeneralization; however, closer
examination of the language we use not only in literary but also in non-literary everyday cases
suggests that our language is built on a collection of conceptual metaphor structures that underlie
the way we think about abstract concepts. This is the fundamental hypothesis of conceptual
metaphor theory, as laid out by Lakoff and Johnson.96 They claim that the metaphors ubiquitous
in our language are not merely linguistic abstractions that have accumulated over time, existing
solely in poetry or elevated figurative language. Rather, metaphors are embodied in our
experiences with the world and constitute our everyday language about those experiences.
Lakoff and Johnson’s account develops an “integrated theory of primary metaphor” that explains
the sorts of conceptual metaphors that structure our understanding, the way in which these
95 Parts of this section have been adapted with substantial revisions from my final paper for COGS 493, “The Embodiment of Literary Meaning: An Application of Conceptual Metaphor Theory,” with the permission of the cognitive science program.
96 Lakoff, G. & Johnson, M. (1980, 1999). Also see Yu, N. (1998) for an explication and application of the theory.
metaphorical systems are formed via physical processes, a neurological account of this system of
associations, and a description of the way in which different metaphorical associations can be
combined and conflated to produce new complex metaphorical structures.97 These conceptual
metaphors can be understood as “embodied” in three ways: “First, the correlation arises out of
our embodied functioning in the world […] Second, the source domain of the metaphor comes
from the body’s sensorimotor system. Finally, the correlation is instantiated in the body via
neural connections.”98
According to conceptual metaphor theory, all abstract concepts, poetic metaphors, and
figurative language in general are derived from a collection of primary metaphorical structures:
for example, metaphors such as Important Is Big, Happy Is Up, Categories Are Containers, and
Similarity Is Closeness.99 The particular metaphorical structures that we use correspond to our
own nonmetaphorical experiences with the world.100 Thus, the primary metaphor Important Is
Big results from the childhood experience of big things, such as parents, being able to exert
influence over one’s experiences and being important to the fulfillment of one’s needs; the
primary metaphor Happy Is Up results from the psycho-physiological effects of being in an
upright position; and so on. Lakoff and Johnson postulate that this observed correlation between
primary metaphors and embodied experience is due to neural connections that are formed and
reinforced in childhood through processes of association, conflation, and differentiation.101 The
child who feels happy when standing in an upright position is not initially aware of a distinction
between concepts. She first learns to talk about physical things being “up” before using the word
97 Lakoff, G. & Johnson, M. (1999), 46-47.
98 Ibid., 54.
99 Ibid., 50-51. This is Lakoff and Johnson’s notation for conceptual metaphors: the target domain is in the subject position, the source domain is in the predicate nominal position, and “Is” represents the mapping between them (58).
100 Several more recent studies, too, seem to confirm this claim on a physical level: for example, Casasanto, D. (2008) has shown that we understand the abstract concept “similarity” in terms of physical proximity; Boroditsky, L. (2001) has demonstrated that we understand temporality via spatial metaphors along a vertical or horizontal axis.
101 Lakoff, G. & Johnson, M. (1999), 49.
in its metaphorical context. To test this theory empirically, Christopher Johnson102 studied the
acquisition of the primary metaphor Knowing Is Seeing within a detailed corpus of a child’s
language acquisition,103 and noticed that the child initially only used the word see in its literal
context (e.g. “I see the box”). Before using the word metaphorically (“I see what you’re
saying”), there was a documented period in which the child used it in contexts where both
knowing and seeing occur simultaneously (“Let’s see what’s in the box”). The fact that, in the
case of primary metaphors, the literal use of a word is always learned before the metaphorical
one seems to indicate that the metaphor is not merely a linguistic construction but is, rather, a
part of an embodied process of association.
Lakoff and Johnson also provide a version of a neurological theory, attributed to
Narayanan and Bailey,104 to account for this observation. They posit that frequent concurrences
of concepts (in the case of the metaphor More Is Up, for example, concurrences of increase in
the domains of verticality and quantity) create a system of activation between the relevant neural
networks such that, after some threshold frequency of repetition, activation of one domain (the
source domain) triggers activation of the other (the target domain). Lakoff and Johnson’s theory
claims that the source domain involves a sensorimotor operation while the target domain
involves a subjective experience or judgment.105 Understood this way, conceptual metaphor
theory itself appears to be founded on Cartesian assumptions. However, it seems that the theory
holds equally well if, rather than “subjective experience,” we characterize the target domain as
one of linguistic convention and response. The appeal to subjective experience, then, is not a
necessary component of the theory; it is merely a gratuitous Cartesian overlay on empirical data.
102 E.g. Johnson, C. (1997a, 1997b).
103 MacWhinney, B. (1995).
104 See e.g. Bailey, D. (1997); Narayanan, S. (1997a, 1997b); Bailey, D. et al. (1997).
105 Lakoff & Johnson (1999), 55.
The resulting primary metaphors—that is, sets of neural co-activations between sensorimotor and
linguistic systems—unavoidably structure our thoughts and experiences simply by means of our
existing in and interacting with the physical world.
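The logic of this co-activation story can be displayed with a similarly schematic toy. The sketch below is my own construction (again in Python), not Narayanan and Bailey’s actual modeling; the class name, the threshold value, and the domain labels are invented placeholders, and the sketch assumes only the bare claim that repeated concurrence forges a link which, once strong enough, lets the source domain carry the target domain with it:

    # Toy illustration only: repeated concurrent activation of a sensorimotor
    # source domain and a target domain strengthens a link; past a threshold,
    # activating the source alone activates the target (the primary metaphor).
    from collections import defaultdict

    class MetaphorNetwork:
        def __init__(self, threshold=5):
            self.cooccurrence = defaultdict(int)  # link strength per domain pair
            self.threshold = threshold

        def experience(self, source, target):
            # a childhood episode in which both domains are active at once
            self.cooccurrence[(source, target)] += 1

        def activate(self, source):
            # once a link is strong enough, the source carries the target with it
            return [target for (s, target), count in self.cooccurrence.items()
                    if s == source and count >= self.threshold]

    net = MetaphorNetwork()
    for _ in range(6):            # e.g. repeatedly watching a pile rise as more is added
        net.experience("UP", "MORE")
    print(net.activate("UP"))     # ['MORE'] -- the primary metaphor More Is Up

Nothing in the toy depends on the particular threshold; what matters is only that the link is built by repetition in embodied experience rather than by any prior perception of “similarity” between the domains.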
One important implication of conceptual metaphor theory is that the repeated
simultaneous activation that forges the neural connections underlying metaphorical structures in
development remains a live connection, one that allows us to use and understand novel metaphors
operating within the same primary metaphorical structure. Some empirical studies106 seem to
suggest that there is at least some degree of embodiment involved in many cases of abstract
conceptualization. Wilson and Gibbs (2007) showed that performing a literal physical action
allows individuals to understand the same word more quickly when it is used in a metaphorical
context. For example, making a grasping motion allowed participants to more easily understand
the metaphor, “grasp the concept.”107 If understanding metaphor involved a process of
abstracting away from a word’s literal meaning, then the grasping motion should have had no
effect on understanding (if anything, response time should have been longer, as there would have
been a stronger process of inhibition required to suppress the primed presence of the literal
meaning). Casasanto and Dijkstra have performed experiments in which participants were asked
to describe memories while moving marbles up and down—a physical process that in itself has
no psycho-physiological effects or linguistic connotations, yet still influenced the percentage of
negative and positive memories recalled based on direction of movement.108
Another component of conceptual metaphor theory that seems to support the hypothesis
that metaphors are live and embodied is the claim that primary metaphors establish the
106 See e.g. Gibbs, R. (2006); Casasanto, D. (2008); Casasanto, D. & Boroditsky, L. (2008); Casasanto, D. & Dijkstra, K. (In Review); Willems, R., Hagoort, P., & Casasanto, D. (In Review); Wilson, N. & Gibbs, R. (2007).
107 Wilson, N. & Gibbs, R. (2007).
108 Casasanto, D. & Dijkstra, K. (In Review).
conceptual structures (that is, the patterns of neural activation) that allow us to develop complex
and novel metaphors and, from them, to understand a variety of “abstract” concepts. One
traditional explanation for our ability to form novel metaphors states that we connect concepts
via their perceived linguistic or conceptual similarities, and then form metaphors that exploit
these similarities.109 However, this theory cannot account for the way that the structure of our
complex metaphorical systems and their resulting linguistic metaphors are mirrored by the literal
meanings of the abstract terms involved. For example, the complex metaphor Love Is A Journey
structures much of the way we talk about love and a love relationship’s component parts (The
Lovers Are Travelers; Their Common Life Goals Are Destinations; The Relationship Is A
Vehicle; Difficulties Are Impediments To Motion; etc.).110 The structure of the complex
metaphor actually influences the structure of our thinking about the concepts involved: “The
Love Is A Journey mapping does not just permit the use of travel words to speak of love. That
mapping allows forms of reasoning about travel to be used in reasoning about love.”111
If conceptual metaphor structures truly underlie our thinking, they must be involved in
the creation of metaphor in literature. After all, good literature is, in large part, the creation and
combination of novel metaphors within a conceptual system that must be intelligible to an
embodied reader. Thus, I claim that the process by which we read metaphor in literature—the
process of “understanding” these allegedly abstract concepts—involves a process of physical
embodiment and simulation.
The following passage comes from Virginia Woolf’s To the Lighthouse:
The boat was leaning, the water was sliced sharply and fell away in green cascades, in bubbles, in
cataracts. Cam looked down into the foam, into the sea with all its treasure in it, and its speed
109 Lakoff, G. & Turner, M. (1989), 123.
110 Lakoff & Johnson (1999), 64.
111 Ibid., 65.
hypnotized her, and the tie between her and James sagged a little. It slackened a little. She began
to think, How fast it goes. Where are we going? and the movement hypnotized her. . . .112
This passage relies on the conceptual metaphor structure in which relationships are physical ties
between individuals. A traditional theory of metaphor might explain that there is a certain
conceptual similarity in a literal tie between physical objects and a figurative tie between people.
However, if we examine this passage and the way it works more closely, it seems clear that
understanding the literal meaning of tie is necessary to achieve an understanding of the
individual metaphor and of the passage as a whole. Our ability to understand the implications of
a metaphorical tie sagging or slackening depends on our previous embodied experiences with
ropes and knots and the effects of adding or removing tension in our interactions with them.
Sometimes connections sag because the connected objects have been moved closer together;
sometimes they sag because there is a decrease in tension on the piece of rope, because one or
both ends of the connection are putting less energy into the bond; sometimes they sag because
one or both connecting knots have come loose and are about to unravel. There is no single
metaphorical “meaning” of this image; rather, each of these embodied interpretations plays into
our implicit understanding of the passage. Moreover, there are further associations between
ropes and knots and boats and movement through water that affect our reading of this scene—all
due to our experiences interacting with physical objects in the world—so that certain of these
implications become more urgent within the literal context of the passage.
The literal embodiment of metaphorical abstraction is crucial to our understanding of
literary works—not merely in this isolated instance, but as a fundamental component of what
literature is and how it functions. Moreover, unlike other literary-theoretical and cognitive-psychological accounts of literary interpretation, this theory claims that the process of reading
112. Woolf, V. (2006), 136.
literature involves “understanding” at an implicit level of sensorimotor stimulation. If
conceptual metaphors are accumulated via a process of simultaneous activation, then the
activation of the metaphorical “meaning” of a word or passage through the process of reading
should activate the sensorimotor system involved in that word’s literal function. Thus, there are
neural processes of association and inhibition at work in our comprehension of abstract concepts
and literary language: it is this low-level neurobiological work of association and activation that
allows for our seemingly implicit and immediate ability to read some highly metaphorical
passage and know what is going on.
II.ii.ii Reading at the Neural Level: The Mirror Neuron System
There is convergent evidence at the neurological level for this embodied way of linking
read text to physical activation. In action-description cases, literature provides a (literal or
metaphorical) report of the particular positions or movements of characters’ bodies. Given this
description of body-shapes as input, I hypothesize that we as readers shape our own bodies—or
“imagine” shaping our own bodies—in the same way. Of course, I do not mean to assert that,
when reading a book, we actually perform all the bodily motions described of the focal character,
nor do I mean to argue that we “consciously” imagine ourselves performing those actions in
place of the character. After all, anyone who has ever sat down to read a work of literature could
confirm with some confidence that this is not, in fact, anything like what happens during the
reading process. Instead, we might think about reading as analogous to social interactions with
other people. When we observe others, we tend to mimic them—adopting similar speaking
patterns, crossing our legs in the same manner, tilting our heads, moving our hands, and so on.
Again, it is apparent that we do not constantly mimic everything that other people do, just as we
clearly do not physically perform the actions described in literature. What seems to be at work,
then, is a system of activation and inhibition, such that reading stimulates in us potentials for
action rather than actions themselves.
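To fix the shape of this proposal, the following toy sketch (mine, and purely illustrative) models reading a word as exciting an associated motor representation while an inhibitory factor keeps the result below the threshold for overt movement. The word-motor pairings, the threshold, and the inhibition value are invented for the illustration; nothing in the mirror neuron literature specifies such quantities.

```python
# Toy sketch only: invented word-motor pairings and numerical parameters, meant
# to display the shape of "activation plus inhibition yields a potential for
# action rather than an action," not any actual neural quantities.

MOTOR_ASSOCIATIONS = {"leap": "leg extension", "grasp": "hand closure"}
ACTION_THRESHOLD = 1.0     # hypothetical level required for overt movement
READING_INHIBITION = 0.7   # hypothetical damping applied during reading

def read_word(word):
    """Return the primed motor representation, its residual (inhibited)
    activation, and whether an overt action would actually be produced."""
    motor = MOTOR_ASSOCIATIONS.get(word)
    if motor is None:
        return None, 0.0, False
    excitation = 1.2                                           # via learned association
    residual = round(excitation * (1.0 - READING_INHIBITION), 2)  # inhibitory gating
    overt_action = residual >= ACTION_THRESHOLD                # normally False
    return motor, residual, overt_action

print(read_word("leap"))  # ('leg extension', 0.36, False): primed, not performed
```

On this picture, the reader's state after encountering "leap" differs from her resting state, since the motor system is primed even though no movement occurs; that difference is all the hypothesis of subthreshold potentials for action requires.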
Developing evidence from neuroimaging research indicates that our mimicry abilities—
and by extension, I suggest, our reading abilities—might be due to the firing of a particular kind
of neuron or neural system that is activated both when an individual performs an action and
when an individual observes others performing the same action. Evidence for these “mirror
neurons” was first discovered in macaques, when experimenters noticed that certain neurons
fired during both performance and observation of the same motion.113 Gallese et al. (1996)
observed that, in addition to canonical neurons that fire in response to the observation of
graspable objects and those that fire during the monkeys’ performances of grasping motions,
some neurons produced electrical activity both when observing a human experimenter grasp an
object (not simply observing the graspable object itself) and when performing that same motion.
They noted, too, that different neurons fired in response to different sorts of grasping motions,
while others fired in response to hand-related actions.114 Thus, though specificity of target varied
across neurons and neuron groups, the crucial mirroring component was observable across
degrees of specialization.
Moreover, though our current noninvasive technology for measuring neural activation is
still limited,115 there is reason to hypothesize that a similar mirror neuron system exists in
humans as well.116 The area in the macaque brain where these mirror neurons have been located
113. See e.g. Gallese, V. et al. (1996); Gallese, V., Keysers, C., & Rizzolatti, G. (2004); Umiltà, M. et al. (2001); Rizzolatti, G. & Arbib, M. (1998); Iacoboni, M. & Dapretto, M. (2006). For the first documented mirror neuron study in macaques, see di Pellegrino, G. et al. (1992).
114. Gallese, V. et al. (1996).
115. See e.g. Dinstein, I. et al. (2008).
116. See e.g. Gallese, V., Keysers, C., & Rizzolatti, G. (2004); Rizzolatti, G. & Arbib, M. (1998); Kilner, J., Friston, K., & Frith, C. (2007); Iacoboni, M. & Dapretto, M. (2006).
is largely agreed to be the homolog of Broca’s area in the human brain.117 Fadiga et al. (1995)
used transcranial magnetic stimulation (TMS) on the motor cortexes of human subjects in order
to enable the measurement of the motor evoked potentials (MEPs) produced when those subjects
observed various conditions—three-dimensional objects, experimenters grasping the same
objects, experimenters tracing geometric shapes in the air, and dimming lights. MEPs are a
measure of electrical activity produced at a muscle cell in response to motor cortex activation:
their production indicates the relevant muscle’s readiness for action, even if the actual action is
inhibited. MEP production was significantly increased during the two testing conditions in
which motions were observed, and MEP patterns matched the patterns of muscle movement that
would be activated had the subject been performing the observed action.118 These data indicate
that the same neurons responsible for triggering a particular muscle movement are also activated
during the observation of that movement. More recent experiments have used fMRI technology
to image participants’ brains during action and observation. While neuroimaging does not allow
us the same level of neuron-specific fine-grainedness that we get from macaque data, it has
shown that certain brain areas thought to be involved in a mirror-neuron system not only exhibit
electrical activity during observation and action, but also exhibit the highest amount of activity
during imitation (observation and action) tasks.119 Similarly, it has been shown that the areas
potentially involved in a mirror neuron system exhibit activation selectively for actions which
the observer is physically capable of performing—thus, the human mirror neuron system might
be involved in watching a dancer (even if the observing individual herself does not have the
requisite skill to actually mimic the performer), but not when watching a dog barking.120
117. Rizzolatti, G. & Arbib, M. (1998), 189.
118. Fadiga, L. et al. (1995).
119. E.g. Iacoboni, M. & Dapretto, M. (2006).
120. Oberman, L. & Ramachandran, V. (2007).
This research indicates that whenever we see another human performing an action, part of
our nervous system that is responsible for the performance of that action in us is immediately
activated: according to Fadiga et al. (1995), “in the absence of movement or even of a voluntary
movement preparation […], the observation of an action automatically recruits neurons that
would normally be active when the subject executes that action.”121 If motor neurons are
involved in observation, this means that, in some sense, we do experience others’ physical states
merely in the act of watching them—not in a “subjective” or “qualitative” sense, of course, but in
terms of physically mirroring their embodied states. During everyday instances of human
observation and interaction, our bodies are continuously primed for action, even if there is no
outward visual sign of activation. Thus, the mirror neuron system must also involve an
inhibition process such that, despite the activation of the same neurons in an instance of
observation, resulting physical actions themselves are prevented. These neural processes of
activation and inhibition account for our ability to understand or empathize—that is, in a far
more literal rather than conceptual sense of “empathy”—with others in social situations without
the “conscious” (or implicit) application of a descriptive theory,122 and without actually
mimicking them. When we observe someone running from a pursuer or clutching her hand in
pain, we do not step back and wonder what it is like to be that individual performing that
action—our bodies have already taken on the position of the individual in question.
My hypothesized move from simulated activation-by-observation to simulated activation-by-reading, though, requires an additional step. Further research has shown that mirror neurons
may be involved in more “abstract” cases than simple one-to-one correspondences between
121. Fadiga, L. et al. (1995), 2610.
122. The debate between theory-theory and simulation theory remains a live one in cognitive science, and I do not mean to beg the question against the theory-theorist. It seems, though, that any fully comprehensive explanation of mental state ascription must be some sort of hybrid of the two, and that this particular point can be accounted for via neural simulation. For an account of such a hybrid theory with a simulationist bent, see Goldman, A. (2006).
identical physical actions. First, researchers have shown that some mirror neurons are activated
in response to the observation and performance of particular actions rather than simply particular
movements or particular objects involved in an action. Several experiments have isolated two
different kinds of mirror neurons, one of which fires in response to identical movements, while
the other fires in response to different movements that achieve the same goal. Thus, one group
of the latter sort of neurons was activated in macaques when the experimenter grasped a piece of
food in order to eat the food, regardless of whether the food was grasped with two fingers or with
the whole hand, while different neurons were activated when the food was grasped in order to
place it on the table.123 Similarly, experimenters have claimed that differing activations in
context show that mirror neurons are involved in the observation and simulation of intentions
rather than simple body-position—for example, different neurons are activated when a teacup is
grasped in exactly the same manner both during and after a meal (grasping in order to drink and
grasping in order to clean up, respectively).124 This finding suggests that goal-directed action is
recognized by low-level physical processes. Oberman and Ramachandran (2007) point out that
these embodied simulations might be part of the perceptual system itself, such that there is no
“pure perception” of others’ actions that are then interpreted but only simulation-perceptions of
intentionality.125 If mirror neurons are responsible for simulation at these levels of “abstraction,”
then it makes sense that they can be activated by the level of abstraction at which physical
descriptions in literature generally occur—that is, usually, the level of goal-directed actions
rather than detailed descriptions of physical movements and body positions.
Written words, though, are quite clearly not the sorts of things that a human observer
could embody; thus, they seem outside the realm of mirror neuron accessibility. Though our
123. See e.g. Iacoboni, M. & Dapretto, M. (2006); Iacoboni, M. et al. (2005).
124. Iacoboni, M. et al. (2001).
125. Oberman, L. & Ramachandran, V. (2007), 312. This hypothesis is originally attributed to Grush.
ability to read results from and thus implicates the same mirror neuron system involved in
mimicking action, there is an additional factor of cultural force involved in the reading case that
makes it less unavoidably immediate than the physical observation case. We are not innate
readers—rather, we are taught to read within a cultural context of reading.126 There is a certain
level of initially forced and repeatedly developed associations between read words and certain
physical states that is not required in the innate observation case. These physical states, though,
are the sorts of positions that human readers can embody and thus understand. Our cultural
history is one of determining intention from read words. Thus, our response to read text is not a
direct visual-observation-to-mimicking one—we clearly do not form our bodies into the shapes
of letters and punctuation marks. Rather, it is one of mirroring the goal-directed action culturally
and habitually associated with those letters and punctuation marks.
Furthermore, there have been studies linking mirror neuron systems to auditory
stimulation, which goes to support the idea that the functioning of these neurons is not restricted
solely to visual processing. Kohler et al. (2002) recorded mirror neuron activation in macaques
when the monkeys performed sound-producing actions (e.g. breaking a peanut, dropping a stick,
ripping a piece of paper) and when hearing the appropriate action-related sound. They
discovered neurons that were activated not only during physical performance and observation,
but also in response to auditory stimulation (without visual observation), though non-action-related sounds produced no response.127 These data suggest that we should think of the mirror neuron connection not as tracking perceived shape to MEP production but, rather, as tracking recognized
intentional state to MEP, regardless of the sensory nature of the input. Moreover, the location of
126. If anything, this marks a further point of similarity between reading and consciousness-creation. After all, according to those such as Jaynes and Wilkes, conscious narrative-production, like reading, is a relatively recent and highly culturally forced development in human history. (See Jaynes, J. (1976); Wilkes, K. (1984, 1988, 1995).)
127. Kohler, E. et al. (2002).
these mirror neurons in macaque brains and its hypothesized evolutionary link to Broca’s area in
the human brain has led researchers to speculate that mirror neurons were involved in the genesis
of our linguistic abilities, and that mirror neurons remain actively involved in our capacity for
linguistic communication.128 It has been claimed that mirror neurons are essential for gestural
communication in macaques,129 as their shared activation allows the monkeys to understand and
convey contextual goal-direction. Audiovisual mirror neurons, then, might be implicated in the
evolution of our linguistic abilities from such originary gestural communication, as they allow
auditory access to stimulations of action contents.130
The activation of mirror neurons in response to auditory stimuli suggests that it is not the
sensory nature of the input that is important from a neurological perspective, but rather whatever
internal connections and neural processes are activated in response to that input. Though visual
and auditory signals might seem different in kind from an introspective perspective, this
phenomenological claim about us need have no more bearing on what actually happens
internally than our “mental imagery” talk does on the neural processes involved in that case. I
claim, then, that an input of read text triggers the same neural connections that an input of verbal
semantic data would. In the case of linguistic information, after all, it is not the phenomenological claim to “picturedness” of the words that matters to our understanding
from an empirical or conceptual perspective. It seems plausible that, though different inputs are
involved in various cases—perceptual ones in the watching case versus linguistic ones in the
reading case—these inputs activate the same neural systems of simulation and inhibition.
Stream-of-consciousness narratives describe their characters’ movements: “I leap like one of
128. See e.g. Rizzolatti, G. & Arbib, M. (1998); Kohler, E. et al. (2002); Iacoboni, M. & Dapretto, M. (2006); Oberman, L. & Ramachandran, V. (2007).
129. Rizzolatti, G. & Arbib, M. (1998).
130. Kohler, E. et al. (2002), 848. Similarly, Tettamanti, M. et al. (2005) provide evidence that listening to action-related sentences activates the parts of the premotor cortex pertaining to the motoric coding of the relevant action.
those flames that run between the cracks of the earth; I move, I dance; I never cease to move and
dance. I move like the leaf that moved in the hedge as a child and frightened me.”131 The
linguistic input, “leap,” for example, activates the relevant neurons involved in producing a
leaping response, though that response itself is inhibited. In the same way that a character’s
direct thoughts become the reader’s thoughts, I hypothesize that a character’s described actions
become the reader’s actions or potentials for action (MEPs).
Given this simple simulationist theory of behavioral description, though, our ability to
understand descriptions of characters’ “internal” states looks more complicated. It seems more
difficult for mirror neurons to explain the reflexive simulation of fear in response to reading the
word, “frightened,” than the activation of a leaping motion in response to the read word, “leap.”
The application of a version of a third-person theory again presents itself as an intuitive
alternative to the simulationist picture. For example, we might adopt Zunshine’s claim that
reading about mentality involves an intentional act of deciphering different layers of represented
mental states within a text. Similarly, we might think of reading about human characters as an
explicit exercise of the application of Dennett’s “intentional stance.”132 Because characters in
literature are presented as “properly functioning” humans, we as readers can understand their
described sets of movements as involving mentality (and, conversely, attribute to read
descriptions of mentality the appropriately rational sorts of movements) rather than interpreting
them merely mechanistically. These sorts of third-person theories involve an extra step of
interpretation and abstraction in the perceptual process. Thus, there is a sense, under Dennett’s
construction, that one’s attribution of a particular stance is somehow “optional,” even if it is not
131. Woolf, V. (2004), 25.
132. See Dennett, D. (1987); also, many of the essays in Dennett, D. (1981).
necessarily a “conscious” decision on the observer’s part. Zunshine’s account, too, involves the
deliberate application of a theory to a read description as necessary for understanding mentality.
Mirror neurons, however, seem to allow for a more satisfactory account of the directness
or immediacy with which we perceive emotions and mentality in others, and to account for the
fact that we view and react differently to other humans and physically similar non-human
animals than we do to inanimate objects, plants, and vastly dissimilar creatures. This difference
does not lie within some sort of interpretive overlay which we learn to apply, but is instantiated
in our own physical makeup. Emotions are, fundamentally, certain physical behaviors and body
positions—thus, it follows that these actions and body positions allow for the same processes of
simulated activation and inhibition by which we mirror more explicitly physical acts. According
to mirror neuron theories, after all, “the mechanisms by which we understand an action, thought,
or emotion in another individual share an underlying neural circuitry with the mechanisms by
which we execute these actions, thoughts, or emotions ourselves.”133
Alvin Goldman (2006) cites research indicating that individuals with abnormal fear
responses are similarly disadvantaged when attempting to recognize fearful faces, though they
perform at a normal level when tested on other emotions.134 These data illustrate that the same
system is involved both in performance and observation in the case of emotion as well as motion
itself—and further activation-measuring and neuroimaging data from mirror neuron research in
both macaques and humans seem to confirm the simulationist account.135 These sorts of data
suggest that when we see a person in fear, the same neurons are activated that are responsible for
stimulating the muscles involved in producing a fear response in us. Thus, it seems that
133. Oberman, L. & Ramachandran, V. (2007), 310.
134. Goldman, A. (2006), esp. 6.1.
135. See e.g. Oberman, L. & Ramachandran, V. (2007); Gallese, V., Keysers, C., & Rizzolatti, G. (2004); Iacoboni, M. et al. (2001).
attributions of mentality are the products not of the higher-level application of a stance, but of
our nervous system itself: we see others’—and other characters’—emotions in the same way that
we “see” our own.
If both of these claims are true—that the same mirror neuron system responsible for
simulating action allows us to simulate emotion, and that activation of the action-simulation
mirror neuron system is what allows us to understand descriptions of physical actions in
literature—it seems an obvious next step to conclude that descriptions of emotions in literature
work the same way. Stream-of-consciousness narratives often reference “mental state” words: “I
walked straight up to you instead of circling round to avoid the shock of sensation as I used. But
it is only that I have taught my body to do a certain trick. Inwardly I am not taught; I fear, I hate,
I love, I envy and despise you, but I never join you happily.”136 While the mirror neuron
explanation is straightforward in the case of walking and circling (and joining, due to its
conceptual metaphor foundation), it seems more difficult to give a similar simulationist account
for the mental state words. In these cases, the reader’s mirror neuron system is not activated in
direct response to a physical action—there is no described physical action to which it could
respond. Instead, this particular linguistic input triggers an association between the read words
and the sorts of body-shapes that ordinarily accompany those words when applied to the reader
herself—and the relevant muscle-activation required to produce these body-shapes is triggered
by that association. Thus, we can understand our interpretations of “mental” descriptions as
running the same low-level sensorimotor simulation processes as the “strictly physical” ones.
This associative link between read emotion word and accompanying body shape should
be no more mysterious than it is in the physical case. There is nothing more abstract or
problematic about a connective leap from the read word “fear” to the physical activation of a fear
136. Woolf, V. (2004), 148.
response than a similar leap from the read word “walked” to the activation-potential produced in
the muscles involved in walking. Thus, a simple associative link is forged between a read word
(“fear”) and the mirror neurons ordinarily activated when that word is associated with the
reader’s own neuro-physical configuration. Again, the reader need not necessarily feel fearful
herself, as there are also inhibitory processes at work in response to neuronal activation.137
Literature is a special case, then, because all the correct and significant mental-state labels are
provided for us. It is only in the case of our own consciousness that we are given as much
relevant information for our mirror neuron systems to process, because we are the ones labeling
our own particular body-shapes and narratizing the intentionality through which we understand
our own actions. It seems, then, that the sort of access we have to a focal character’s mental
states is the same sort of access by which we “know” our own. Any remaining difference is
merely one of degree of available information.
II.iii Creating Consciousness: A Narrative Account
The interpersonal intuition-based account of reading literature just developed can be
summarized as follows: given read text as input, neural processes of association stimulate
potentials for action at the sensorimotor level. The particular invoked connection between input
word and output physiological state is the result of strengthened neural coactivations that can be
either literally descriptive or based on underlying conceptual metaphor structures. The reader’s
resulting neurological and physiological position, then, mirrors the neurological and
physiological position that the character must (hypothetically) occupy in order to produce the
conscious narrative in question. Thus, it is a reader’s access to that narrative and resulting
137. Notably, read emotions (and, sometimes, actions) often “leak through” inhibitory barriers, such that we share a character’s longing for a particular resolution or “internalize” a character’s complaints of thirst when she is lost in the desert. Rizzolatti, G. & Arbib, M. (1998) hypothesize that such “indicators” of action or emotion that are not entirely inhibited may serve as crucial cues in situations of social communication.
sensorimotor embodiment that constitutes the reader’s “taking on” a character’s consciousness.
This empirical account provides an explanatory foundation for what we mean by empathy, or by
understanding a character in a fictional narrative.
There remains some work to be done, though, in order to render plausible the claim that
this account has anything at all to do with what goes on in the case of our own consciousness-narrative production. The process of reading about a literary character seems entirely different
from that of actually being a conscious agent. In order to do the latter, of course, we merely need
to exist in the world, perceiving objects and reacting to those perceived objects in particular
ways. There is no narrator to direct the shapes of our bodies through detailed literal and
metaphorical descriptions; we simply do things and can, if questioned, subsequently talk about
them. I claim that the processes involved in the consciousness case and the processes involved
in the reading case are not only related but, in fact, the same—that one is merely the reversal of
the other. Thus, the consciousness case might be sketched as follows: in our interactions with
the world, the positions and movements of our bodies encode sensorimotor signals. These
sensorimotor data are input as neural signals and can activate “literal” or “metaphorical”
linguistic associations that are output as a conscious narrative. The same metaphor structures
shape the sorts of narratives that we are able to produce, and the same mirror neuron system is
activated in our narrative-production in response to input muscle activation. To begin with an
extremely simple case, we might imagine an individual who, when asked to reflect on her current
experience, reports, “I am holding a mug.” The position of the individual’s hand plus contextual
sensorimotor information produce neural activations, which trigger further activity across neural
connections that have been strengthened over time due to simultaneous or successive activation.
Some of these connections cause linguistic associations such that, when asked to describe her
experience, she is in a position to provide a particular report.
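The symmetry claimed here, on which narrating one’s own experience is the reading process run in reverse, can be pictured as a single table of learned associations traversed in either direction. The following sketch is again purely illustrative: the association pairs, the mug example, and the function names are invented stand-ins for the strengthened neural connections described above, not a model of how they are actually implemented.

```python
# Toy sketch only: one invented table of learned word-body associations,
# traversed forward (reading: text in, simulated body states out) and backward
# (narration: body states in, linguistic report out).

ASSOCIATIONS = [
    ("holding a mug", "hand curled around a warm cylinder"),
    ("leap", "legs extended, body briefly airborne"),
]

def read(text):
    """Reading direction: linguistic input activates sensorimotor potentials."""
    return [state for phrase, state in ASSOCIATIONS if phrase in text]

def narrate(body_states):
    """Narration direction: sensorimotor input activates a linguistic report."""
    phrases = [phrase for phrase, state in ASSOCIATIONS if state in body_states]
    return "I am " + " and ".join(phrases) if phrases else ""

print(read("she was holding a mug"))
# ['hand curled around a warm cylinder']
print(narrate(["hand curled around a warm cylinder"]))
# I am holding a mug
```

The point of the sketch is only that nothing new is needed for the second direction: the same learned pairings that let read text prime body states let body states prime a report.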
This account requires several points of clarification. First, although we are always
interacting as embodied creatures in the world, we are not always producing narrative reports in
response to those interactions. As previously recounted, most of our “experiences” are not
conscious. Moreover, the fact that we are not always conscious does not imply that this sort of
narrative report is continually being produced but does not always “become conscious” or “allow
us access to it,” or some similarly problematic Cartesian intuition-informed dispositional
phrasing. Embodied input does not always lead to linguistic output. Rather, conscious output is
generated only in response to internal or external stimulation. Thus, though the arc from body
position to linguistic report is the clearest illustration of my account of consciousness as
continuous with my account of reading literature, it might be less conceptually confusing to
characterize our body positions not as constantly involved in this sort of stream-of-consciousness
system, but as the potential data from which probing inputs elicit linguistic response.
Second, while it might seem conceivable that neural associations activated by the
particular grasping shape of one’s hand lead an individual to relate a narrative about grasping, it
is more difficult to see how body shapes could lead an individual to talk about what it feels like
to grasp the object in question. While my simulation story might account for one’s making the
assertion, “I am holding a mug,” it is less obvious how it might explain the subsequent
statements, “This mug is hot,” or “This mug is smooth,” for example. This worry stems from an
intuitive tendency to rely too much on our own introspective abilities rather than the fine-grainedness of our bodies’ calibration to their environment. Though it may seem to us that our
hands form the same shape when grasping a hot mug versus a cold one, in fact skin cells react
differently to different surface situations such as variations in temperature and texture. Even in
these sorts of seemingly subjective sensory descriptive cases, differences in body shape—if on a
microscopic rather than a macroscopic level—lead to differences in linguistic output. Perhaps it
is in part the apparent discrepancy between what looks like a rather limited range of physical
motions and a seemingly limitless range of linguistic expressions that leads us to postulate an
extra “mental” realm wherein the former can be reinterpreted in terms of the latter. Given the
complexity of the possibilities of our physical interactions with the world, though—even the
sheer number of involved cells, neurons, communications among systems, and so on—the
concept of direct correspondence seems, at least, conceivable, though the specific empirical
details might remain mysterious.
One might object, however, that this sort of appeal to the microscopic level looks
questionable in conjunction with my hypothesis regarding the alleged role of mirror neurons in
narrative-formation. If something cannot be observed, then it cannot be mimicked—thus, there
most likely are not mirror neurons activated in response to the imperceptible movements of skin
cells. A similar point holds not merely for this example but for the first-person case in general.
Simply by existing in the world, there are innumerable instances of neural stimulation that do not
directly activate the mirror neuron system yet contribute to our linguistic reports. All this means,
though, is that we are in a sense “better informed” about our own experiences than those of the
people we observe. However, this advantage constitutes “privileged access” only in a physically
contingent rather than in a metaphysically special sense.
Finally, the role of conceptual metaphor structures in our thinking might help to make
sense of the link between physical body positions and highly conceptual conscious narratives.
Examples of such conceptual narratives include allegedly a priori philosophical discussions, or
explanations of the nature of consciousness. Due to simultaneous processes of association in
learning, metaphorical and literally descriptive words can be generated by particular body shapes
and physical interactions with the environment. These metaphors, again, produce the conceptual
structures through which our ability to think about related topics is mediated. While this claim
may seem fairly straightforward in the literature case, it is more difficult to make sense of its
reversal in context. However, a conceptual metaphor account might help to elucidate the
underlying reasons for the way we talk about consciousness—for the inescapable intuitiveness of
our Cartesian intuitions—and this particular example can help to ground an understanding of the
way conceptual metaphor theory operates in our consciousness-narrative production.
According to some of the data referenced in support of conceptual metaphor theory,
again, people initially learn to talk about physically embodied concepts such as visual perception
before using the same words to describe abstract mental concepts such as knowledge. The
resulting metaphor is fundamentally grounded in its physical embodied meaning—there remains
an important conceptual link between one’s ability to perceive something and one’s knowledge
of that perceived thing. Thus, forged neural connections between “seeing” and “knowing” are
repeatedly strengthened. When we say things such as, “I see what you mean,” we go through a
procedure of physically simulating and inhibiting the neural processes involved in visual
perception. Conversely, the neural processes involved in physically seeing activate associative
connections to knowing-related words. Much of our mentalistic language is both physically and
conceptually linked to visual perception—which accounts for the earlier observation that many
of our linguistic conventions are perception-centric. Moreover, the embodied component of
conceptual metaphor theory might explain why mental imagery “feels” to us like visual
perception. This is not merely the result of arbitrary linguistic contrivance—we are, after all,
physically activating processes involved in vision when told to “image” something. Mental
imaging feels like seeing, then, because it is like seeing—just not in the way that mental imagery
theorists assume.
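The learning story assumed here, in which simultaneous activation forges and then strengthens the seeing/knowing connection, has roughly the shape of a Hebbian update. The following sketch is illustrative only; the learning rate, the starting weight, and the pairing of a single SEE node with a single KNOW node are invented simplifications of the conceptual metaphor account rather than claims about actual neural learning.

```python
# Toy Hebbian-style sketch: each episode in which seeing and coming-to-know are
# active together strengthens the SEE/KNOW link, so that activating one later
# partially activates the other. All numbers are invented for illustration.

link_strength = 0.1        # initial weak association between SEE and KNOW
LEARNING_RATE = 0.05

def co_activation_episode(seeing_active, knowing_active):
    """Strengthen the link whenever both are active together (Hebbian rule)."""
    global link_strength
    if seeing_active and knowing_active:
        link_strength += LEARNING_RATE * (1.0 - link_strength)  # bounded growth

for _ in range(50):              # many childhood episodes of seeing-and-knowing
    co_activation_episode(True, True)

# Later, activating "seeing" spreads activation to "knowing" (and vice versa)
# in proportion to the learned link strength.
print(round(link_strength, 2))   # approaches 1.0 with repeated co-activation
```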
Interestingly, the way we ordinarily talk about consciousness seems to implicate the same
visual perception-based metaphors. Jaynes makes this point particularly clearly138:
Consider the language we use to describe conscious processes. The most prominent group of
words used to describe mental events are visual. We ‘see’ solutions to problems, the best of
which may be ‘brilliant,’ and the person ‘brighter’ and ‘clear-headed’ as opposed to ‘dull,’
‘fuzzy-minded,’ or ‘obscure’ solutions. These words are all metaphors and the mind-space to
which they apply is a metaphor of actual space. In it we can ‘approach’ a problem, perhaps
from some ‘viewpoint,’ and ‘grapple’ with its difficulties, or seize together or ‘com-prehend’
parts of a problem, and so on, using metaphors of behavior to invent things to do in this
metaphored mind-space.
The extent and range of our metaphorical mental language implies underlying conceptual
metaphor structures. These metaphor structures (e.g. Knowing Is Seeing; Mind Is Physical
Space) allow for the formation of complex and novel metaphors such that the abstract case
activates the same neural connections and physical associations as the “literal” embodied case.
Our tendency to talk about knowing in terms of “seeing” things, then, allows us to talk about
“mental things” as things that can be seen, about a “seer” or internal observer, about a stage on
which the mental things might be observed, illuminating mental spotlights to allow for optimum
observability, and so on. This primary-metaphor origin can also make sense of some of the
novel-metaphor features of consciousness that Jaynes articulates, such as the claim that we can
only “see” things as spatialized, even conceptual things that do not have anything like a spatial
quality to them. For example, we “see” time as existing along a timeline from left to right.139
Again, these conceptual metaphors are not merely linguistic comparisons that allow us to
138. Jaynes, J. (1976), 55. Jaynes posited a metaphor-based account of consciousness years before conceptual metaphor theory was advanced as a hypothesis. His language is often obscure and his account is wholly speculative rather than rooted in experimental empirical data, but his point, I think, is similar to my own—see esp. ch. 2.
139. Ibid., 59-61. For more current empirical evidence in support of this particular claim, see e.g. Boroditsky, L. (2001); Casasanto, D. & Boroditsky, L. (2008).
describe what the experience of consciousness is like in the closest available manner given
limited linguistic resources. Rather, our thinking and speaking in terms of embodied primary
metaphors creates “similarity,” and the resulting semantically implied characterization of
consciousness as internal mind-space is all there is to that mind-space.
Thus, this empirically driven narrative-based account of consciousness creation can
explain the undeniable intuitiveness of our Cartesian intuitions. Due to learned associations in
childhood, we are able to talk about abstract “mental things”—the internal qualitative
components of our conscious experiences, for example—by simulating neural and sensorimotor
systems involved in physical processes such as visual perception and manipulation of objects
within space. These developed associations allow the same mentalistic metaphors to be output
as narrative report when the relevant neural and sensorimotor systems are activated in our
interactions with the physical environment. Moreover, the way we learn to talk about visual
perception, for example, encodes the primary conceptual metaphor structure which shapes the
formation of our novel metaphors about consciousness. Due to the embedded conceptual
structures entailed by the use of a particular language, this theory can account for the child who
intuitively finds herself wondering about the possibility of inverted spectrums despite a lack of
explicit schooling and reinforcement in the relevant philosophy. While a conceptual metaphor
theory approach allows us to recognize that our Cartesian intuitions truly are intuitive—they are,
in fact, explicably so—it also allows us to acknowledge that their obvious intuitive nature need
not automatically translate to their empirical or philosophical significance.
CONCLUSION: THE HARD PROBLEM, RECONSIDERED
The Hard Problem of consciousness, again, concerned that allegedly subjective quality of
experience that could not be captured by a purely physicalistic explanation—the property that
allows a creature to experience what it is like to be that creature. According to Hard Problem
theorists, this property is so special and so mysterious that its explanation requires a revision of
the fundamental laws of physics or an acceptance of some form of dualism. Books on the
subject tend to wax poetic at this point, championing consciousness as the fundamental character
of our humanity, deploring the fact that it is mystifying, difficult to understand, elusive. The
problem itself, though, seems equally elusive. Even Chalmers admits: “True, I cannot prove that
there is a further problem, precisely because I cannot prove that consciousness exists. We know
about consciousness more directly than we know about anything else, so ‘proof’ is
inappropriate.”140 In Part I, I have worked to show that the Hard Problem theorist cannot so
easily escape the evidentiary objection. There is, in fact, substantial “proof” that goes to show
that consciousness—as isolated and characterized by Chalmers et al.—does not exist, that our
knowledge of it is not special. An empirical account that treats literary understanding as a
window into our conscious-narrative production can explain not only the remnants of the original
explanandum but also the intuitive plausibility of the Hard Problem itself. Thus, in Part II I have
shown that the physicalist can provide a response to the “what it is like” component of what was
the Hard Problem.
140. Chalmers, D. (1996), xii-xiii.
In order to know what it is like to be a character in a work of stream-of-consciousness
literature, we are engaged in embodied processes of association, activation, simulation, and
inhibition. Our ability to read and understand mentality through literature is the result of low-level processes which rely on physical similarity between reader and character. Moreover, the
neural processes that are activated and inhibited during the process of reading and understanding
characters’ minds are the same processes that are activated when we physically perform the read
actions. Thus, the way that we know others’ minds through literature is the same way that we
create and thus know our own “conscious minds” through performing and narratizing physical
actions.
This understanding of consciousness-through-literature, then, might allow us to go back
and fill in some of the claims originally posited by Hard Problem theorists. The intuitive force of
Jackson’s knowledge argument as demonstrating the existence of qualia, for example, dissolves
entirely: of course Mary expresses a different narrative output when her physically embodied
sensory input and resulting neuro-physical configuration are different. Similarly, we might
account for what appears to be the “continuity” of our consciousness narratives in the same way
that we are able to read continuity in stream-of-consciousness literature—that is, by adherence to
particular patterns of association. When we have been reading about a particular character’s
consciousness for the course of a novel, we begin to “take on” the metaphorical and associative
patterns with which that character’s consciousness is created. Thus, when a character expresses
her thoughts about one thing, we in a sense anticipate141 the next thing that she will think about
or notice or comment on. Perhaps our own narrative reports, then, achieve their perceived
continuity in a similar way. Every bit of sensory information that we take in excites chains of
141. Though we need not be aware of this anticipation, of course, nor need we explicitly notice the patterns of associations that link a character’s thoughts and, similarly, our expectations.
associations—neural pathways that have been activated and strengthened in the past in response
to similar signals. Every report results from the same networks of connections, so that we
“expect” or “anticipate” our own reports in the same way that we do a character’s thoughts or
actions. Reports seem continuous because they do not surprise us. While we may not be
constantly narrating our experiences, our narrations themselves result from the same
“continuous” neural processes and continually reinforced connective structures.
The case of first-person stream-of-consciousness literature seems to be the most idealized
example of our access to others’ consciousness. In the ordinary everyday case, of course, this
ideal is never realized. Other people are not constantly narrating their thoughts for us, telling us
what they are looking at and when or in what order objects in their environments strike them as
interesting or important. Similarly, we rarely have access to the sort of explicitly or implicitly
detailed background histories novels provide about a character’s past experiences and
associations. Thus, we do not and cannot build the same sort of accurately tracking networks of
association for others’ experiences, as we do not have all the information that would activate and
reinforce the correct pathways. Still, there is nothing intrinsic about any of these details—
narratized experiences, background histories, networks of associations—that is inherently or
necessarily inaccessible or selectively restricted to the first-person individual case. Literature is
the clearest and most rigorously detailed example of how this can be so, but we can also imagine
an individual neurologically wired such that she must always vocalize those thoughts that would
constitute her internal “conscious” narrative. In this case, there is nothing that the individual in
question could know about her own consciousness that we, as observers, could not. Similarly, if
we were to spend enough time with this individual such that we could take on the same
associative frameworks through which she perceived and narrated information, we would
“expect” her narrations in the same way that we “expect” our own.
Moreover, the intuitive claim to primacy or originariness for our own consciousness is
itself a questionable one. Jaynes, for example, has speculated that individuals’ conceptions of
their own consciousnesses might have originated in the observation of difference in others: “in
the forced violent intermingling of peoples from different nations, different gods, the observation
that strangers, even though looking like oneself, spoke differently, had opposite opinions, and
behaved differently might lead to the supposition of something inside of them that was
different.”142 Thus, the combination of external sameness and behavioral difference suggests the
presence of internal difference. “We may first unconsciously (sic) suppose other
consciousnesses, and then infer our own by generalization.”143 Similarly, it has been
hypothesized that children learn to conceive of and attribute mentality to “self” due to primary
experiences with others. As Katherine Nelson puts it, a child is “embedded in and surrounded by
cultural narratives, narratives that underlie the ways that parents view their roles and the role of
the child within the family and larger community. The child then ‘grows into’ these
narratives.”144 These sorts of theories go to show that first-person primacy cannot simply be
assumed and left uncontested. Again, this is not to argue that there is nothing to the distinction
between self and other—clearly we are all different creatures with different sense organs and
different nervous systems that shape themselves differently according to different inputs and
outputs. I merely mean to point out that there is nothing metaphysically unique about the “self”
case, even when the subject in question is one’s own consciousness. The rigid distinction
142. Jaynes, J. (1976), 217.
143. Ibid.
144. Nelson, K. (2003), 22.
between first-person and third-person perspectives, as postulated by the tradition of Descartes,
Nagel, and Chalmers, cannot be preserved.
Again, literature provides a demonstration of the ultimately unsatisfactory nature of this
first-person/third-person distinction. The third-person side of the dichotomy does not provide
the appropriate perspective from which to read and understand mentality. Alex Burri, on the
other hand, has argued that Nagel is wrong to claim that our automatic initial stance is one of
first-person subjectivity, from which we then step back in attempts to attain a scientific “view
from nowhere.” Rather, according to Burri, we begin at a more central location along a spectrum
of perspective, and it is only through literature and other forms of art that we are able to attain a
purely subjective first-person stance.145 Burri’s approach is similarly problematic, though,
because even in reading literature, we never seem to achieve that pure subjectivity that Nagel and
Descartes require. Even when reading a first-person stream-of-consciousness novel, part of the
process of reading involves taking in information about and simulating other characters in the
focal character’s environment, as well as being influenced to some limited extent by our own
“non-fictional” environment and neuro-physical positioning at the time of reading. Perhaps the
best description of the position from which we read literature involves the evocation of
something like Depraz and Cosmelli’s “second-person methodology”: “Contrary to such opposed
polarities […] the ‘second person’ is but a plastic spectrum of interactions: it may even be
problematic to carry on calling it a ‘second person,’ as if we had to do with a separate and
isolated entity, whereas it rather corresponds to a relational dynamics in which we are
unavoidably immersed.”146 That is, rather than approaching literature from a single (third-person
or first-person) stance, the process involves taking in and outputting information from multiple
145. Burri, A. (2007); c.f. Nagel, T. (1986).
146. Depraz, N. & Cosmelli, D. (2003), 165.
levels and multiple sources. Even seemingly “pure” or “first-person” descriptions are always the
result of the same sorts of simulating processes developed previously.
In reading The Waves, for example, we come to “know” the consciousnesses of all six
characters. However, we know these characters not only through the “direct” information to
which we have access from each of their “first-person” perspectives, but also through the “thirdperson” information we are given regarding other characters’ complementary and contradictory
reactions to and treatment of each character, as well as each character’s perspective of the other
characters’ perspectives of themselves, observed interactions among others, and so on. Even this
characterization is slightly misleading, as at least part of this “third-person” information to which
other characters provide access is the result of “first-person” mirroring and simulation. Thus, the
“consciousness” that we construct for each character is far richer and more multifaceted than
anything we could hope to attain from either side of the first-/third-person distinction.
Consciousness in literature cannot be located in or categorized as the product of any particular
individual “point of view,” whether that point of view be a subjective Cartesian one or an
objective view from nowhere.147
Furthermore, this second-person system of relational dynamics seems to be a more
illuminating framework for describing the way in which we go about creating our own
consciousness than either a strictly first- or third-person theory would allow. After all, upon
further examination, it seems clear that we do not produce coherent and comprehensive narrative
reports about our own mentality resulting purely from first-person “internal” information. At the
147. Similarly, see Palmer, A. (2004) esp. 5.5 and Herman, D. (2003) for a discussion of literature and socially distributed cognition (though c.f. Vogeley, K. et al. (2004) for research on neural correlates of first- and third-person perspectives). In Schneebaum, R. (In preparation), I discuss the way in which presented information about other characters in non-strictly stream-of-consciousness literature allows access to consciousness through the same processes as information about a focal character in stream-of-consciousness texts. Thus, what seems to be a difference in kind regarding knowledge of self and other in literary (and, I argue, non-literary) contexts is merely a difference in degree of available information.
most basic level, such narratizing behavior requires an environmental context in which linguistic
stimulation and response is a live and active system, in which a particular language is commonly
understood, in which existing cultural narratives provide the requisite conceptual framework.
Furthermore, it seems clear that much of the way we understand and characterize ourselves is
due to our embodied interactions with others. Our narrative reports are shaped by the way that
others respond to us, by the way they talk about and to us, by the way we mirror and simulate
their physically expressed perceptions of and reactions to us. Thus, just as there is no purely
third-person or objective perspective from which objects and individuals can be described in
literature, there is no purely first-person perspective from which we can “know” our own
consciousness. The creation of a cohesive and continuous consciousness in both the literary and
the non-literary case relies on an embodied existence within a social context of intersubjective
relations.
BIBLIOGRAPHY
Akins, K. (1993a). A bat without qualities? In M. Davies & G.W. Humphreys (Eds.),
Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell. 258-273.
Akins, K. (1993b). What is it like to be boring and myopic? In B. Dahlbom (Ed.), Dennett and
his Critics: Demystifying Mind. Oxford: Blackwell. 124-160.
Annas, J.E. (1992). Hellenistic Philosophy of Mind. Berkeley: University of California Press.
Bailey, D. (1997). A computational model of embodiment in the acquisition of action verbs.
Ph.D. dissertation, Computer Science Division, EECS Department, University of
California, Berkeley.
Bailey, D., Feldman, J., Narayanan, S., & Lakoff, G. (1997). Modeling embodied lexical
development. In M.G. Shafto & P. Langley (Eds.), Proceedings of the Nineteenth Annual
Conference of the Cognitive Science Society. Mahwah: Erlbaum.
Bisiach, E. (1992). Understanding consciousness: clues from unilateral neglect and related
disorders. In A.D. Milner & M.D. Rugg (Eds.), The Neuropsychology of Consciousness.
Academic Press.
Block, N. (1990). Inverted earth. Philosophical Perspectives, 4, 53-79.
Boroditsky, L. (2001). Does language shape thought?: Mandarin and English speakers’
conceptions of time. Cognitive Psychology, 43, 1-22.
Burri, A. (2007). Art and the view from nowhere. In J. Gibson, W. Huemer, & L. Pocci (Eds.), A
Sense of the World: Essays on Fiction, Narrative, and Knowledge. New York: Routledge.
308-317.
Carroll, N. (1995). Art, intention, and conversation. In G. Iseminger (Ed.), Intention and
Interpretation. Philadelphia: Temple University Press. 97-131.
Casasanto, D. (2008). Similarity and proximity: When does close in space mean close in mind?
Memory and Cognition, 36, 1047-1056.
Casasanto, D. & Boroditsky, L. (2008). Time in the mind: Using space to think about time.
Cognition, 106, 579-593.
Casasanto, D. & Dijkstra, K. (In Review). Motor action and emotional memory.
Chalmers, D. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford
University Press.
Churchland, P.S. (1996). The hornswoggle problem. Journal of Consciousness Studies, 3, 402-408.
Churchland, P.S. & Ramachandran, V.S. (1993). Filling in: Why Dennett is wrong. In B.
Dahlbom (Ed.), Dennett and his Critics: Demystifying Mind. Oxford: Blackwell. 28-52.
Crick, F. & Koch, C. (1990). Toward a neurobiological theory of consciousness. Seminars in the
Neurosciences, 2, 263-275.
Dawkins, R. (1989). The Selfish Gene, New Edition. Oxford: Oxford University Press.
Dennett, D.C. (1981). Brainstorms. Cambridge, MA: MIT Press.
Dennett, D.C. (1987). The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, D.C. (1988). Quining qualia. In A.J. Marcel & E. Bisiach (Eds.), Consciousness in
Contemporary Science. Oxford: Clarendon. 42-77.
Dennett, D.C. (1991). Consciousness Explained. Boston: Little, Brown, & Co.
Dennett, D.C. (2002). How could I be wrong? How wrong could I be? Journal of Consciousness
Studies, 9 (5-6), 13-16.
Depraz, N. & Cosmelli, D. (2003). Empathy and openness: Practices of intersubjectivity at the
core of the science of consciousness. In E. Thompson (Ed.), The Problem of
Consciousness: New Essays in Phenomenological Philosophy of Mind. Calgary:
University of Calgary Press. 163-203.
Dinstein, I., Thomas, C., Behrmann, M., & Heeger, D.J. (2008). A mirror up to nature. Current
Biology, 18, R13-R18.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992). Understanding
motor events: A neurophysiological study. Experimental Brain Research, 91, 176-180.
Engelkamp, J. (1995). Visual imagery and enactment of actions in memory. British Journal of
Psychology, 86, 227-240.
Fadiga, L., Fogassi, L., Pavessi, G., & Rizzolatti, G. (1995). Motor facilitation during action
observation: A magnetic stimulation study. Journal of Neurophysiology, 73, 2608-2611.
Faulkner, W. (1990). The Sound and the Fury. New York: Vintage.
Fodor, J.A. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University
Press.
Forster, E.M. (1951). Two Cheers for Democracy. New York: Harcourt, Brace, & Co.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor
cortex. Brain, 119, 593-609.
Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of social
cognition. Trends in Cognitive Science, 8, 396-403.
Gallese, V. & Lakoff, G. (2005). The brain’s concepts: The role of the sensory-motor system in
conceptual knowledge. Cognitive Neuropsychology, 22, 455-479.
Gerrig, R.J. & Edigi, G. (2003). Cognitive psychological foundations of narrative experiences. In
D. Herman (Ed.), Narrative Theory and the Cognitive Sciences. Stanford: Center for the
Study of Language and Information. 33-55.
Gibbs, R.W. (2006). Metaphor interpretation as embodied simulation. Mind & Language, 21,
434-458.
Gibbs, R., Gould, J., & Andric, M. (2006). Imagining metaphorical actions: Embodied
simulations make the impossible plausible. Imagination, Cognition, & Personality, 25,
221-238.
Goldman, A.I. (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of
Mindreading. Oxford: Oxford University Press.
Herman, D. (2003). Stories as a tool for thinking. In D. Herman (Ed.), Narrative Theory and the
Cognitive Sciences. Stanford: Center for the Study of Language and Information. 163-192.
Iacoboni, M. & Dapretto, M. (2006). The mirror neuron system and the consequences of its
dysfunction. Nature Reviews Neuroscience, 7, 942-951.
Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J.C., & Rizzolatti, G.
(2005). Grasping the intentions of others with one’s own mirror neuron system. PLoS
Biology, 3, 529-535.
Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly, 32, 127-136.
Jaynes, J. (1976). The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston:
Houghton Mifflin.
Johnson, C. (1997a). Metaphor vs. conflation in the acquisition of polysemy: The case of SEE. In
M.K. Hiraga, C. Sinha, & S. Wilcox (Eds.), Cultural, Typological and Psychological
Issues in Cognitive Linguistics. Current Issues in Linguistic Theory 152. Amsterdam:
John Benjamins.
Johnson, C. (1997b). Learnability in the acquisition of multiple senses: SOURCE reconsidered.
In J. Moxley, J. Juge, & M. Juge (Eds.), Proceedings of the Twenty-Second Annual
Meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society.
Johnson, S. (1781). The Lives of the English Poets: And a Criticism of their Works. Wilson.
Joyce, J. (1998). Ulysses (J. Johnson, Ed.). Oxford: Oxford University Press.
Kanwisher, N. (2001). Neural events and perceptual awareness. Cognition, 79, 89-113.
Kilner, J.M., Friston, K.J., & Frith, C.D. (2007). Predictive coding: an account of the mirror
neuron system. Cognitive Processing, 8, 159-166.
Knapp, S. & Michaels, W.B. (1995). The impossibility of intentionless meaning. In G. Iseminger
(Ed.), Intention and Interpretation. Philadelphia: Temple University Press. 51-64.
Kohler, E., Keysers, C., Umiltà, M.A., Fogassi, L., Gallese, V., & Rizzolatti, G. (2002). Hearing
sounds, understanding actions: Action representation in mirror neurons. Science, 297,
846-848.
Lakoff, G. & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
Lakoff, G. & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and its
Challenge to Western Thought. Chicago: University of Chicago Press.
Lakoff, G. & Turner, M. (1989). More than Cool Reason: A Field Guide to Poetic Metaphor.
Chicago: University of Chicago Press.
Levine, J. (1993). On leaving out what it’s like. In M. Davies & G.W. Humphreys (Eds.),
Consciousness: Psychological and Philosophical Essays. Oxford: Blackwell. 121-136.
MacWhinney, B. (1995). The CHILDES Project: Tools for Analyzing Talk. Hillsdale, N.J.:
Erlbaum.
Matson, W.I. (1966). Why isn’t the mind-body problem ancient? In P.K. Feyerabend & G.
Maxwell (Eds.), Mind, Matter, and Method: Essays in Philosophy of Science in Honor of
Herbert Feigl. Minneapolis: University of Minnesota Press. 92-102.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435-450.
Nagel, T. (1986). The View from Nowhere. Oxford: Oxford University Press.
Narayanan, S. (1997a). Embodiment in language understanding: Sensory-motor representations
for metaphoric reasoning about event descriptions. Ph.D. dissertation, Department of
Computer Science, University of California, Berkeley.
Narayanan, S. (1997b). Talking the talk is like walking the walk: A computational model of
verbal aspect. In M.G. Shafto & P. Langley (Eds.), Proceedings of the Nineteenth Annual
Conference of the Cognitive Science Society. Mahwah: Erlbaum.
Nelson, K. (2003). Narrative and the emergence of a consciousness of self. In G.D. Fireman,
T.E. McVay, & O.J. Flanagan (Eds.), Narrative and Consciousness: Literature,
Psychology, and the Brain. Oxford: Oxford University Press. 17-36.
Oberman, L.M. & Ramachandran, V.S. (2007). The simulating social mind: The role of the
mirror neuron system and simulation in the social and communicative deficits of autism
spectrum disorders. Psychological Bulletin, 133, 310-327.
Palmer, A. (2004). Fictional Minds. Lincoln: University of Nebraska Press.
Parsons, K.P. (1970). Mistaking sensations. Philosophical Review, 79, 201-213.
Poulet, G. (1969). Phenomenology of reading. New Literary History, 1, 53-68.
Pylyshyn, Z.W. (2002). Mental imagery: In search of a theory. Behavioral and Brain Sciences,
25, 157-238.
Pylyshyn, Z.W. (2003). Return of the mental image: Are there really pictures in the brain?
Trends in Cognitive Sciences, 7, 113-118.
Quine, W.V.O. (1969). Epistemology naturalized. In Ontological Relativity and Other Essays.
New York: Columbia University Press. 69-90.
Richardson, D., Spivey, M., Barsalou, L., & McRae, K. (2003). Spatial representations activated
during real-time comprehension of verbs. Cognitive Science, 27, 767-780.
Rizzolatti, G. & Arbib, M.A. (1998). Language within our grasp. Trends in Neurosciences, 21,
188-194.
Rorty, R. (1970a). Incorrigibility as the mark of the mental. Journal of Philosophy, 67, 399-424.
Rorty, R. (1970b). Cartesian epistemology and changes in ontology. In J.E. Smith (Ed.),
Contemporary American Philosophy. London. 273-292.
Schneebaum, R.E. (In preparation). Intersubjectivity in literary consciousness and the problem of
other minds.
Schwitzgebel, E. (2008). The unreliability of naïve introspection. Philosophical Review, 117,
245-273.
Searle, J.R. (1995). The mystery of consciousness: Part II. The New York Review of Books, 42.
Searle, J.R. (2000). Consciousness. Annual Review of Neuroscience, 23, 557-578.
Sharf, R. (1998). Experience. In M. Taylor (Ed.), Critical Terms for Religious Studies. Chicago:
University of Chicago Press. 94-116.
Shoemaker, S. (1982). The inverted spectrum. Journal of Philosophy, 79, 357-381.
Skinner, B.F. (1989). The origins of cognitive thought. American Psychologist, 44, 13-18.
Solzhenitsyn, A. (1970). Nobel Prize lecture. Available online at (and cited from):
http://nobelprize.org/nobel_prizes/literature/laureates/1970/solzhenitsyn-lecture.html.
Tettamanti, M., Buccino, G., Saccuman, M.C., Gallese, V., Danna, M., Scifo, P., Fazio, F.,
Rizzolatti, G., Cappa, S.F., & Perani, D. (2005). Listening to action-related sentences
activates fronto-parietal motor circuits. Journal of Cognitive Neuroscience, 17, 273-281.
Tye, M. (1995). Ten Problems of Consciousness: A Representational Theory of the Phenomenal
Mind. Cambridge, MA: MIT Press.
Umiltà, M.A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Keysers, C., & Rizzolatti, G.
(2001). I know what you are doing: A neurophysiological study. Neuron, 31, 155-165.
Vogeley, K., May, M., Ritzl, A., Falkai, P., Zilles, K., & Fink, G.R. (2004). Neural correlates of
first-person perspective as one constituent of human self-consciousness. Journal of
Cognitive Neuroscience, 16, 817-827.
Wilkes, K.V. (1984). Is consciousness important? The British Journal for the Philosophy of
Science, 35, 223-243.
Wilkes, K.V. (1988). ——, yìshì, duh, um, and consciousness. In A.J. Marcel & E. Bisiach
(Eds.), Consciousness in Contemporary Science. Oxford: Clarendon. 16-41.
Wilkes, K.V. (1995). Losing consciousness. In T. Metzinger (Ed.), Conscious Experience.
Paderborn: Ferdinand Schöningh. 97-106.
Willems, R.W., Hagoort, P., & Casasanto, D. (In Review). Body-specific representations of
action verbs: Neural evidence from right- and left-handers.
Wilson, N.L. & Gibbs, R.W. (2007). Real and imagined body movement primes metaphor
comprehension. Cognitive Science, 31, 721-731.
Woolf, V. (2004). The Waves. London: Vintage.
Woolf, V. (2006). To the Lighthouse. Oxford: Oxford University Press.
Xu, X. (2004). Consciousness, subjectivity and physicalism. Philosophical Inquiry, 26, 21-39.
Yaxley, R.H. & Zwaan, R.A. (2007). Simulating visibility during language comprehension.
Cognition, 105, 229-236.
Yu, N. (1998). The Contemporary Theory of Metaphor: Perspectives from Chinese. Amsterdam:
Benjamins.
Zunshine, L. (2006). Why We Read Fiction: Theory of Mind and the Novel. Columbus: Ohio State
University Press.
Zwaan, R.A., Stanfield, R.A., & Yaxley, R.H. (2002). Language comprehenders mentally
represent the shapes of objects. Psychological Science, 13, 168-171.