
Chapter 2
The challenge of measuring consciousness
Morten Overgaard
© Oxford University Press, 2015

Introduction
Few things in human intellectual history have given rise to so many different theories, opinions, discussions, and academic frustrations as consciousness. While being incredibly complex, as this chapter will show, consciousness is not just an academic concept, accessible only to specialized scientists in a particular field, as is the case with many other complex topics in science, such as quantum particles or cell division. Consciousness is directly accessible to all living humans, possibly all living creatures, from the moment they wake up until they fall into dreamless sleep.
This chapter discusses some central definitions of consciousness and their relations to
different measures, i.e. their “operationalization.” As will be argued, introspection is involved in all kinds of measures of consciousness at some level. Accordingly, the chapter
examines different uses of introspection, and the relation between introspection and consciousness. Finally, some criteria for adequate measures of consciousness are discussed.
Measures and definitions
Although we have such intimate familiarity with consciousness, a definition of the concept does not follow automatically. From one perspective, one could care little about
definitions if the purpose is to have experimental investigations of consciousness. Definitional and methodological questions are separate issues: definitional issues are matters of
conceptual analysis and attempt to carve out non-circular descriptions with criteria that
make sure to include anything we wish to consider as conscious, while excluding anything we will not consider as such. Methodological issues deal with questions about the
validity of specific measures of consciousness and how these measures may relate to other
measures, such as measures of brain activity. At the same time, it seems very obvious that
the only way one may evaluate the validity of any measure of consciousness is by its relation to consciousness as such, i.e. its definition. I will not argue that we can only trust
experiments on consciousness when we have a formal, universally accepted definition
of consciousness. Empirical measures are often too crude to relate to minute conceptual
aspects, so insofar as definitional issues have no impact on which measures to apply,
there is no reason to wait for a potential final definition. So, in the following, a few crude
distinctions will be discussed.
In his article On a confusion about a function of consciousness (1995), Ned Block suggests
a distinction between “access-consciousness” (so-called A-consciousness) and “phenomenal consciousness” (so-called P-consciousness). A-consciousness refers to a state on the
basis of which a subject is able to reason, to have rational control of action and of speech.
P-consciousness refers to experiences (seeing the color red, thinking about rain, the sonar
sense of a bat, or whichever examples one might prefer). Both definitions refer to aspects of what we may mean when saying we are conscious of something, yet they are different aspects in an important sense. Whether the two concepts refer to actual, empirically different states is frequently debated (Block 2005; Kouider et al. 2010), but they certainly suggest different ways to measure consciousness. If one wishes to conduct experiments on, say, the neural basis of A-consciousness, the definition comes with behavioral, and thus third-person-accessible, features, although rather unspecific ones. If one, instead, looks for the neural basis of P-consciousness, one may end up with measures that overlap substantially with methods used to study A-consciousness, but one then needs a further argument for why those methods represent the first-person state in question, as this definition comes without any third-person-observable features.
Another widely accepted distinction is that between the contents and the levels of
consciousness (Hohwy 2009; Overgaard and Overgaard 2010). A- and P-consciousness
are both examples of contents, whereas typical examples of levels of consciousness are
coma or the vegetative state, sleep, or drug abuse. Methods to study levels of consciousness
would obviously differ from those relevant to study the contents, and involve even further
problems as one here cannot rely on behavioral measures or communication (Owen 2006;
Overgaard 2009). Although one should hypothesize some relation between the contents
and the levels of consciousness (Overgaard and Overgaard 2010; Bachmann 2012), these
discussions are outside the stringent focus on behavioral methods here.
These days, most consciousness researchers would agree that the concept of consciousness does not, a priori at least, refer to a particular psychological function or behavior, but
to the fact that we have subjective experiences such as the taste of good coffee, the sound of
a dog barking, or frustrating thoughts about consciousness (Chalmers 1996). Whereas the
examples one might give in order to illustrate the contents of consciousness may result in a
rather heterogeneous list, all examples share the one feature that they are subjective. Many
important historical attempts to define consciousness more precisely stress the subjective
aspect. For instance, Nagel (1974) argues that if there is something it is like for a bat to have a
sonar sense, then bats must be conscious creatures. Although most suggestions for precise
definitions of consciousness are controversial, there is good agreement that consciousness is
subjective in the sense that only the one person having the conscious experience, regardless
of its contents, has direct knowledge about it. The subject of an experience seems to have a
special kind of access to the particular content, different from the kind of access you can have
to the content of other people’s experience (e.g. when those other people describe their experiences). This core feature of consciousness is what makes it so scientifically challenging.
Challenges to the scientific study of consciousness
But why should subjectivity be a challenge to science? The reason is probably not to be sought at the level of concrete methodology or theory, but rather at a more general
or paradigmatic level of background assumptions. That is, even though there is far from
perfect agreement on how to define science and what constitutes good research, there are
a number of fundamental criteria that define when an observation is scientifically valid.
Such criteria are, however, most often implicit. Arguably, basic elements of our conception
of what constitutes “good science” can be traced to Galileo in what might be the historical
birth of a systematic natural science in 1632 (Bunge 1998). Such conceptions include the ideas that objects of scientific study must always be generally accessible from a “third-person perspective” (if only one person is able to observe a particular object, it cannot be accepted as a scientific object), and that scientific results must always be replicable, so that when the same (relevant) causal conditions are present at times A and B, the same effects must be observed.
Whereas one might add more such “basic conceptions of good science,” the first one
mentioned captures most of the problems. If scientific objects can only be those we
have “third-person access” to, why should we think we could ever have a science about
consciousness?
It seems that the only solution available is to associate conscious experience with particular kinds of third-person observables, typically particular kinds of behavior or, in experiments, “responses.” In the attempt to find neural correlates of consciousness (NCC),
for instance, neither consciousness nor the “neural processes” are directly observed. For
this reason, the actual correlations are between our measures of neural processes and
measures of consciousness (typically a particular conscious content such as seeing red or
thinking about a cup of coffee), as seen in Figure 2.1. If measures on both “sides” perfectly
match ontology (i.e. that the actual neural processes are exactly as the apparatus informs
the scientist, and that the actual experienced content is exactly as reported), the “proper”
NCC (or “pNCC”) can be reduced to its measure (or the “mNCC”). In other words, the mNCC has to fully represent the pNCC in a given experiment.

[Figure 2.1: a “Subjective state” is assessed by a “Subjective measure,” and a “Brain state” by an “Objective measure”; the pNCC is the relation between the two states, the mNCC the relation between the two measures.]
Fig. 2.1 The “proper NCC” (pNCC) is only identical to the measured NCC (mNCC) if measures fully represent the relevant states. (Reprinted from Consciousness and Cognition, 15(4), Morten Overgaard, Introspection in Science, pp. 629–633, figure 3a, Copyright (2006), with permission from Elsevier.)

In case some aspect of
the mNCC represents something else (an artefact), or in case the pNCC contains aspects
that are not represented in the mNCC, one obviously cannot derive a pNCC from that
experiment.
As we have no method to transform subjective experiences to third-person-accessible
information without losing the subjectiveness, our measures inevitably have to be indirect. This is, however, not very different from the case in many other scientific disciplines,
where one has no “direct” knowledge of molecules, genes, or radio waves, yet is fully able
to conduct experiments, create generally accepted and understood scientific explanations,
and predict future events. The problem may not be that a science of consciousness is, at
least in these regards, “special,” but rather that the scientific field is still a long way from
having standardized methods. As one example, some researchers repeatedly find prefrontal activations when subjects report being conscious of visually presented numbers (Del Cul et al. 2009), whereas others claim that re-entrant activity in occipital regions correlates better with consciousness when subjects report whether face icons look happy or sad (Jolij and Lamme 2005).
The two different claims are based on evidence from experiments that have applied
rather different experimental techniques, making a direct comparison complex. For instance, most experiments applying transcranial magnetic stimulation (TMS) over the occipital cortex at around 100 ms after stimulus onset show a disruption of visual consciousness, and may be used as evidence to suggest that the NCC for visual perception is
a re-entrant process to primary visual cortex. The same conclusion is often proposed by
research in blindsight, where patients with V1 lesions often report no conscious experience (Stoerig and Cowey 1997). Other experiments using change blindness or inattentional blindness paradigms typically demonstrate that the conscious noticing of a change
activates a frontoparietal network (Mack and Rock 1998). Interestingly, the research field often acts as if differences between NCC models can be resolved simply by doing more experiments, rather than by developing the methods that have given rise to the results, and thus also to the differences between them.
Even if “subjectiveness” seems a common denominator in different conceptions of consciousness, this does not in and of itself reveal how to operationalize consciousness ideally. One illustration of two operationalization options comes from a recent discussion by
Block (2011) and Cohen and Dennett (2011); see a more thorough discussion in Overgaard and Grünbaum (2012). The discussion centers on the classical Sperling experiment
(Sperling 1960). Here, subjects were only able to report the letters from one of three rows presented on a screen. However, with post-stimulus cueing, subjects could report whichever row they were asked about. Block believes that we experience seeing the entire display of
letters, yet we report only a limited amount, or, in other words, that conscious experience
“overflows” the cognitive functions involved in accessing and reporting the experience.
Cohen and Dennett, however, take a different point of departure in their interpretation
of Sperling, namely that conscious content must have a cognitive function. According to
their view, a person cannot be conscious of X but be principally unable to report about X
or be unable to use it for rational control of action. Against Block, they argue that it makes
no sense to ascribe consciousness of X to a subject if the subject denies seeing X. Going
along with this idea, it is natural to think that consciousness plays a cognitive role, and that
a subject is conscious of some information if it is used by the subject’s cognitive system in
a particular way.
The discussion is important because it shows a fundamental conflict in conceptions
about consciousness and, as a consequence, methods to study it. It seems mutually exclusive that consciousness overflows cognitive functions and that consciousness is identical
to a cognitive function. Both ideas may seem intuitively compelling, but either we accept
overflow but also accept that consciousness is not identical to a function (at least in a cognitive understanding) or we accept that consciousness is indeed a cognitive function, but
deny overflow. This debate cannot be resolved empirically, because it is essentially pre-empirical: it concerns questions that determine how to gather and interpret empirical data in the first place (Overgaard and Grünbaum 2012).
Accordingly, one approach will argue that consciousness is identical to or inherently
related to a particular cognitive function. The idea has the immediate advantage that operationalization becomes much more tangible, as one may use already established experimental paradigms to study consciousness. For example, if consciousness is fundamentally
associated with or identical to working memory, all measures of working memory will also
be measures of consciousness.
The opposite approach considers consciousness to be a state, a process, or a property
that is not identical to or deeply associated with some (other) cognitive state. By dissociating consciousness from cognitive capacities (Rees and Frith 2007), one will in most cases
stay with a subjective criterion as the only acceptable measure. As a consequence, any
measure that can be said to be about something other than subjective experience cannot
be applied.
The choice between the “cognitive” and “non-cognitive” approach (Overgaard and
Grünbaum 2012) is decisive for one’s criteria of consciousness, experimental methodology, and, as a necessary consequence, findings. Despite attempts by researchers on both
sides, the dispute between cognitive and non-cognitive theories of consciousness cannot
be settled by empirical evidence. As neither position can be stated in an empirically falsifiable manner, the debate cannot itself be resolved by empirical data. In the end, the decision about which approach to prefer is a matter of personal preference rather than of argument.
There are specific challenges associated with the two choices. If one decides not to
associate consciousness with any particular cognitive function, it is difficult to trust
any measure of consciousness. For instance, why should the cognitive functions involved in saying “I am conscious” be a valid measure of consciousness? Nevertheless,
in the absence of other measures, a “non-cognitive” assumption would typically lead to the use of subjective reports in some form. But when is a subjective report scientifically
trustworthy? Although we all have good intuitive ideas about the meaning of concepts
about subjective states (such as thoughts, feelings, or perceptions), their precision and
value as scientific concepts are debatable. Introspective reports, however, inevitably
make use of such concepts.
Associating consciousness with some cognitive function, e.g. in an attempt to get rid of some of the problems associated with introspection, is potentially circular. This seems a necessary consequence of the way in which such an association must be formed: by deciding which cognitive function to associate with consciousness. In order to make such a decision, one could employ at least two different strategies. One strategy could be to conduct
several experiments, correlating cognitive functions with consciousness. This strategy
would, however, need an independent measure of consciousness in order to make the
association. Were this measure the presence or absence of another cognitive function, it
would obviously lead to infinite regress (because this association with another cognitive
function, again, must be validated by yet other measures, etc.). Thus, the most plausible
measure would be introspective reports, and, consequently, the strategy would not be independent of introspection but carry along its strengths, weaknesses, and limitations. As
a different strategy, one could avoid experiments using introspection and just decide to
associate consciousness with, say, attention because it “feels like” the two phenomena occur together. This would, however, depend on the researcher’s own intuitions, which could hardly be based on anything other than his or her own introspection.
It would be difficult to argue why researchers’ introspection has any more scientific value
than the introspection of experimental subjects.
So, regardless of which kind of measure is preferred in a given experiment, introspection
seems an unavoidable condition at some point in consciousness research.
Introspection and access
Although introspection seems unavoidable regardless of methodological choice, it may
be brought to use in different ways. The most minimal use of introspection is arguably as
“inspiration” or “direction” for objective methods.
Arguably, the only reason why one would ever come up with the idea to investigate
“color experience” or “emotional experiences,” even with so-called objective methods, is
because we know these states by way of introspection. This knowledge, then, guides our
methodological choices.
For example, the process dissociation procedure (PDP) has been proposed as an objective method for examining the influence of unconscious processing as the method does
not rely on subjective reports but on a measured difference between performance in two
different tasks: the exclusion and the inclusion task (Jacoby 1991). The argument for the
procedure is that unconscious processes are supposed to be affected by a briefly presented
priming word, and in the inclusion task both unconscious and conscious processes will thus contribute to reporting a target word. In the exclusion task, however, unconscious processes will contribute to reporting the primed word, whereas conscious processes will attempt to avoid it. The relative contributions of conscious and unconscious processes may
thus be estimated by comparing performance in the two tasks. This kind of “objective
approach” avoids using verbal reports, but it does not avoid introspection in the minimal
sense. Without ideas about what conscious experience is, the experiment would make
no sense.
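The estimation logic behind the PDP comparison can be sketched with the independence equations commonly attributed to Jacoby (1991): Inclusion = C + (1 − C)·U and Exclusion = (1 − C)·U, where C and U are the conscious and unconscious contributions. The function name and the example probabilities below are illustrative, not from the chapter:

```python
# Sketch of the standard process dissociation estimates, under the usual
# assumption that conscious and unconscious influences act independently.

def process_dissociation(p_inclusion: float, p_exclusion: float) -> tuple[float, float]:
    """Estimate conscious (C) and unconscious (U) contributions.

    p_inclusion: probability of completing with the primed word when
                 instructed to use it (inclusion task).
    p_exclusion: probability of completing with the primed word when
                 instructed to avoid it (exclusion task).

    From Inclusion = C + (1 - C) * U and Exclusion = (1 - C) * U,
    it follows that C = Inclusion - Exclusion and U = Exclusion / (1 - C).
    """
    c = p_inclusion - p_exclusion
    u = p_exclusion / (1 - c) if c < 1 else 0.0
    return c, u

c, u = process_dissociation(0.60, 0.20)
print(round(c, 2), round(u, 2))  # prints: 0.4 0.33
```

Note that the estimate is only as good as the independence assumption: if conscious and unconscious processes interact, C and U no longer separate cleanly, which is one reason the procedure does not escape the conceptual issues discussed above.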
So-called subjective methods attempt to use introspection directly as a method or as
part of a method. For example, neurophenomenology typically tries to understand experience itself, rather than just cognitive or neural processes, by explicitly using reports
(Gallagher 2009).
Typically, objective methods study behavior or reports that are not about consciousness,
while subjective methods study reports that are. It is, however, far from always obvious what counts as an objective and what as a subjective method. As one example, “post-decision wagering” was introduced as an objective measure (Persaud et al. 2007). Here,
subjects place a monetary wager on the correctness of their own response to a stimulus,
the amount of which is considered a measure of how conscious they were of that stimulus.
However, since wagering subjects try to maximize their gain, and since the very idea is that
they, in order to place the wager, explicitly consult how conscious they were (which is why
it could be a measure of consciousness), it seems to have many features in common with
subjective measures.
Regardless of methodological choice, some of the same basic criteria seem necessary for a measure to be valid: it should be exhaustive, i.e. sensitive to all aspects of conscious experience, and exclusive, i.e. it should not measure anything unconscious—at a proper level of conceptual granularity. In the following, these three aspects (exhaustiveness, exclusiveness, and granularity) will be presented, after a discussion of introspection and its relation to measures of consciousness, as introspection of some kind is involved in all measures of consciousness.
Regardless of methodology, a measure of a conscious state is obviously not identical to
the conscious state itself, and, thus, seems to involve further states having access to the
conscious state.
Even though the report, as a verbal utterance, obviously is different from the “internal,”
subjective state, it is less clear whether the very act of introspecting also is such a separate
state. The issue is important as it potentially introduces a particular kind of complexity into the attempt to obtain a “pure” measure of consciousness, and, accordingly, into the chances of deriving a pNCC from an mNCC.
John Searle (1992) has argued that a conscious state can only be described in terms of
what the state represents, and, as a consequence, our awareness of a conscious state as
such is always just awareness of the very object represented by the conscious state itself.
Therefore no such distinction between being introspectively aware of a conscious state and
being in a conscious state exists. Fred Dretske (1995) has argued that we are never aware of our mental states themselves, although we can be aware of the fact that we have a mental state. Introspection can be seen as an instance of what he calls “displaced perception”: we come to know how much petrol is left in the car’s tank by looking at the gauge. In a similar way, we come to know that we are in a particular type of mental state by being aware of the objects represented by our mental states.
As has been pointed out by proponents of the higher-order thought theory of state consciousness (HOT), this critique might rest on a particular conception of what is meant by
“awareness of a mental state.” Thus it is possible to maintain that we never have experiences
of our mental states like the way we have experiences of things in the world—that is, by
having perceptual experiences—but that we nevertheless might be said to be aware of our
conscious mental states by having thoughts about them (Rosenthal 2000a).
According to this view, introspecting a conscious state is to have an accompanying
thought about that state, a thought that is itself conscious, attentive, and deliberate. From
the HOT perspective the basic distinction between conscious states and introspective
states is sustained: a subject’s having a conscious state is explained in terms of the subject’s having a second-order thought about a mental state of his or hers. This higher-order thought need not itself be conscious, although it might sometimes become so by the subject in turn having third-order thoughts about it. When this happens, the subject is
engaged in introspection (Rosenthal 2000b).
Whether HOT is a coherent explanation of conscious states and whether it is a coherent explanation of introspection are issues that can be kept apart. One may argue that HOT is a good explanation of how to understand the relation between conscious states and introspection
without committing to the view that it is a good explanation of consciousness.
As mentioned above, there has been much historical skepticism about a science based
on introspection. This skepticism is often presented as a historical disagreement between
early twentieth-century research groups, suggesting that introspection is “hopelessly unreliable.” Recent work has challenged this idea (Costall 2006, 2012), and re-examination
of laboratory records reveals that disagreements between, say, Würzburg and Cornell were
disagreements about interpretations of results, rather than the results themselves (Monson
and Hurlburt 1993). Skepticism about introspection, and the use of it, seems to have been
around for centuries.
Comte, for instance, argued that introspection is scientifically useless as there cannot be
identity between an observer and the observed object (Lyons 1986). Comte argued that
this would lead to an illogical “splitting” of consciousness in two. Many classical accounts
of introspection in psychology, based on James (1898), suggest that “online” introspection,
as an ongoing observation of current mental states, does not exist. Rather, James suggested
that all introspection is in fact retrospection—an inspection of memories of previous experiences. This interpretation of introspection can be taken as a response to Comte’s objection against a splitting of consciousness. In the light of the discussion between first-order and higher-order accounts of introspection, James’ perspective seems to support the HOT version, since the first-order variety, by contrast, seems fully compatible with an “ongoing” or “direct” introspecting act.
Very few experiments have been designed to directly test these questions. Marcel (1993)
demonstrated a dissociation between responses when using eye blinks, hand movements,
and verbal reports. The dissociation was shown in a blindsight patient as well as in normal
participants. When the patient and the participants were instructed to introspect, they gave
the most accurate reports when using eye blinks for “yes-reports,” less accurate when using
hand movements, and the least accurate when using verbal reports. The blindsight patient
could even reply “yes, I am aware of a light” while at the same time—during the same
stimulus trial—reporting “no” with hand gestures. This pattern was not present when the
patient was told to report non-introspectively. Overgaard and Sørensen (2004) expanded
on this experiment and showed that a dissociation between the response modes used by
Marcel (1993) was only found when instructing participants before showing a stimulus.
When the order of the instruction and stimulus was reversed, no dissociation was found.
This result of Overgaard and Sørensen (2004), that introspection changes the participants’ behavior only when the instruction is given prior to the stimulus, could be interpreted to indicate that introspection has an effect on perception rather than on retrospective memory processes. The interpretation, although supported by little evidence, can be taken to
go against James’ retrospection account, and seems fully compatible with the first-order
account of introspection.
Overgaard et al. (2006) conducted an event-related potentials (ERP) experiment attempting to contrast introspective and non-introspective conscious states. Subjects were asked to report the presence or absence of a gray dot in two different conditions—one in which they were asked to consider the stimulus as an “object on the screen,” and another in which they were to consider it “a content in their own experience.” The study found differences between conditions in early occipital components, later attention-related components, and even later post-perceptual components. Although the study has
not been replicated, a cautious interpretation could be that introspection seems to affect
“pre-perceptual,” “perceptual,” and “post-perceptual” processes. Such an interpretation does
not exclude retrospective elements but suggests that there is more to introspection than
retrospection alone.
Whether introspection is an online inspection of ongoing experiences, a retrospective
activity, or a combination of the two, all alternatives face challenges with regard to the validity of
subjective reports. The retrospection alternative is confronted with problems related to
the fallibility of memory. If our only access to our own conscious states is by way of trying
to remember them, this access is obviously far from “certain knowledge.” The “online version” allows for an introspective “observation” of experiences as they occur. Although the
actual report is still delayed in time, and therefore also confronted with memory issues, at
least the actual accessing of the experiences is not. However, here, it seems possible that the
act of introspection may change the experience itself. We have no good evidence to believe
that mental states are simply additive, i.e. that the presence of two simultaneous states is
identical to the “sum” of the two occurring in isolation. Accordingly, one may fear that a
science based on introspection may tell us a lot about introspective conscious states, but
nothing about non-introspective conscious states.
Exhaustiveness and exclusiveness of measures
Another discussion, parallel to the question of whether introspective access may change the
contents of experience, is the question of whether a measure of consciousness is exhaustive
and exclusive. One can reasonably demand of a good measure of consciousness that it
detects all relevant aspects of experience. Such “demands” have been referred to as exhaustiveness, and it is very likely that different measures differ in their degree of exhaustiveness (Overgaard and Timmermans 2010; Overgaard and Sandberg 2012). By a measure’s
exhaustiveness, it is typically meant that a measure or task should be sensitive to every bit
of relevant conscious content, so that we may avoid erroneously describing behavior as
resulting from unconscious mental processes (Timmermans et al. 2010).
By exclusiveness, one typically refers to the “flip side” of exhaustiveness. As a measure
of consciousness should measure all relevant experiences, it should exclusively measure
experiences, and thus be immune to influences from unconscious knowledge.
The issue is particularly relevant in debates about perception “in the total lack of conscious experience,” which has come to be the central topic of investigation in the attempt to
measure differences between consciousness and unconsciousness (and, thus, find the relevant contrasts to measure NCCs) (Hassin 2013). Here, the problem arises in cases where
subjects report no conscious awareness when in fact there may be some vague experience or sensation that is hard to express verbally. Such cases of poor exhaustiveness would
lead actual experiences to be misclassified as unconscious in the data analysis, and thus
misconstrue the measured NCC.
In cases of poor exclusiveness, subjects report irrelevant experiences or are influenced by entirely unconscious processes. For instance, when reporting the relevant content, subjects may be influenced by other experiences of, say, confidence, insecurity, or positive or negative emotions. In such cases, irrelevant content will be misclassified as relevant, and thus, in a different way, misconstrue the measured NCC.
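To make the misclassification logic concrete, the two failure modes can be sketched as a toy simulation. This is an illustrative sketch, not any established psychophysical model: the report criterion, the 0.2 cutoff for a "weak but real" experience, and the Gaussian report noise are all assumptions introduced here. A graded experience strength is reduced to a binary "seen"/"unseen" report; weak experiences that fall below the criterion get counted as unconscious (poor exhaustiveness), while report noise, standing in for confidence or emotion leaking into the report, can push trials without any relevant experience into "seen" reports (poor exclusiveness).

```python
import random

random.seed(1)  # reproducible toy data

def run_trials(n=10000, threshold=0.5, confidence_noise=0.0):
    """Toy model: each trial yields a graded experience strength in [0, 1].
    The subject gives a binary 'seen' report when experience plus report
    noise exceeds a criterion; 'unseen' trials are then treated as
    unconscious in the analysis, as in a contrastive NCC design."""
    missed = 0      # weak but real experience reported 'unseen' (poor exhaustiveness)
    false_seen = 0  # 'seen' report without relevant experience (poor exclusiveness)
    for _ in range(n):
        experience = random.random()                  # graded conscious content
        noise = random.gauss(0.0, confidence_noise)   # confidence/emotion leaking into the report
        reported_seen = experience + noise > threshold
        if experience > 0.2 and not reported_seen:
            missed += 1
        if experience <= 0.2 and reported_seen:
            false_seen += 1
    return missed / n, false_seen / n
```

With a noise-free report (`confidence_noise=0.0`), roughly a third of trials in this sketch carry a weak experience that the binary report misses, and every "seen" report is genuine; once report noise is added, "seen" reports begin to appear on trials with no relevant experience at all. Both errors end up in the contrast used to isolate the NCC.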
Conceptual granularity
Bechtel and Mundale (1999) argue that one central issue for the study of consciousness is that psychological as well as neural properties can be described at different "granularities." Psychological properties, they argue, are often described at a very "coarse" level of granularity, whereas neural properties are described at a much "finer" level.
Overgaard and Mogensen (2011) suggest that mental functions can be described at at least three different levels of analysis. At the most general level, we find "visual perception," "intention," "emotion," and the like. At a more "fine-grained" level, there are task- and domain-specific functions: there may be several "subtypes" of the general category "visual perception" that are specific to certain kinds of stimuli (faces, rectangles, the color purple) or kinds of representation. Finally, there are basic functions, operating as a kind of discrete system without any direct manifestation at a conscious or behavioral level.
Obviously, any measure of consciousness must somehow specify a "line of demarcation," i.e. exactly which subjective experiences are of relevance. Concepts about mental states at one level of analysis are of course not more or less precise than concepts at other levels. Yet concepts at one level may be confounded with concepts at other levels. One example could
be Benjamin Libet's famous experiments, arguably showing that the conscious experience of wanting to move is delayed by around 500 ms relative to the onset of the readiness potential in premotor cortex (Libet 1985). In these experiments, the subjects watched a round clock face with a moving dot. At some point, they were to initiate a voluntary movement and to note the location of the dot on the clock at that moment. The subjects were explicitly asked to monitor their own mental state in order to report the time of their first "awareness of the wish or urge to act" (Libet 2002, p. 292). Without questioning the result, which has been repeatedly replicated (Vinding et al. 2014), the interpretation may be challenged. In order to report the first awareness of a wish, a subject must apply some criterion for when such an experience is present. The apparent fact that nothing reported "as a wish" was subjectively present until 500 ms after the onset of the readiness potential does not entail that no relevant subjective experience was present 500 ms earlier, one that was subjectively different or that would have been classified differently under other criteria. In other words, had the experiment applied concepts at a different level of granularity, the results might have appeared different.
The earlier discussions in this chapter apply to the interpretation of this experiment as well. The method is clearly introspective, and, even disregarding questions of conceptual granularity, it is not obvious whether it is the conscious experience of wanting to move or the introspective access to it that is delayed by half a second.
Future directions
Recent decades have seen an impressive upsurge of research into consciousness. Today, the majority of this research takes one of two paths: a "philosophical" strategy, analyzing the conceptual connection between the notions of consciousness and physical matter, or a "cognitive neuroscience" strategy, applying some measure of consciousness to find empirical connections with measures of brain activity. Research from both strategies has added greatly to our understanding of the human mind and of neural circuitry, but, as is evident from the discussion in this chapter, we are still far from solid ideas about how to measure consciousness.
Given the great interest in consciousness, methodological obstacles may be among the primary challenges to achieving solutions to the mind–body problem. Some may even say they constitute the primary challenge, in that a scientific approach to subjective experience would appear much simpler if one had the perfect measure of consciousness. In the absence of an external, objective method to measure consciousness, how are we to know whether we have actually found an optimal measure? Even without a straightforward answer, interdisciplinary cooperation seems necessary, both to ensure that the operationalization of subjective experience in experiments captures the essence of what we mean by the concept and to identify all possible confounding factors.
Although the number of problems seems breathtaking, the potential gain is high. Were we one day to succeed in addressing all issues related to the measurement of consciousness, a solution to the age-old mind–body problem would seem much more within reach.
Acknowledgments
Morten Overgaard was supported by the European Research Council.
References
Bachmann, T. (2012) How to begin to overcome the ambiguity present in differentiation between
contents and levels of consciousness? Frontiers in Psychology: Consciousness Research, 3, 1–6.
Bechtel, W. and Mundale, J. (1999) Multiple realizability revisited: linking cognitive and neural states.
Philosophy of Science, 66, 175–207.
Block, N. (1995) On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18,
227–287.
Block, N. (2005) Two neural correlates of consciousness. Trends in Cognitive Sciences, 9, 46–52.
Block, N. (2011) Perceptual consciousness overflows cognitive access. Trends in Cognitive Sciences, 15,
567–575.
Bunge, M. (1998) Philosophy of Science. Transaction Publishers, Piscataway, New Jersey.
Chalmers, D. (1996) The Conscious Mind. Oxford University Press, Oxford.
Cohen, M. and Dennett, D. (2011) Consciousness cannot be separated from function. Trends in
Cognitive Sciences, 15, 358–364.
Costall, A. (2006) Introspectionism and the mythical origins of modern psychology. Consciousness and
Cognition, 15, 634–654.
Costall, A. (2012) Introspection and the myth of methodological behaviorism. In: J. Clegg (ed) Self-Observation in the Social Sciences. Transaction Publishers, Piscataway, New Jersey.
Del Cul, A., Dehaene, S., Reyes, P., Bravo, E., and Slachevsky, A. (2009) Causal role of prefrontal cortex
in the threshold for access to consciousness. Brain, 132, 2531–2540.
Dretske, F. (1995) Naturalizing the Mind. MIT Press, Cambridge, Massachusetts.
Gallagher, S. (2009) Neurophenomenology. In: T. Bayne, A. Cleeremans, and P. Wilken (eds) Oxford
Companion to Consciousness. Oxford University Press, Oxford.
Hassin, R. (2013) Yes it can—on the functional abilities of the human unconscious. Perspectives on
Psychological Science, 24, 2563–2568.
Hohwy, J. (2009) The neural correlates of consciousness: new experimental approaches needed?
Consciousness and Cognition, 18, 428–438.
Jacoby, L.L. (1991) A process dissociation framework: separating automatic from intentional uses of
memory. Journal of Memory and Language, 30(5), 513–541.
James, W. (1898) Principles of Psychology. Dover Publications, Mineola, New York.
Jolij, J. and Lamme, V. (2005) Repression of unconscious information by conscious processing: evidence
from affective blindsight induced by transcranial magnetic stimulation. Proceedings of the National
Academy of Sciences, 102, 10747–10751.
Kouider, S., de Gardelle, V., Sackur, J., and Dupoux, E. (2010) How rich is consciousness? The partial
awareness hypothesis. Trends in Cognitive Sciences, 14, 301–307.
Libet, B. (1985) Unconscious cerebral initiative and the role of conscious will in voluntary action.
Behavioral and Brain Sciences, 8, 529–566.
Libet, B. (2002) The timing of mental events: Libet’s experimental findings and their implications.
Consciousness and Cognition, 11, 291–299.
Lyons, W. (1986) The Disappearance of Introspection. MIT Press, Cambridge, Massachusetts.
Mack, A. and Rock, I. (1998) Inattentional Blindness. MIT Press, Cambridge, Massachusetts.
Marcel, A. (1993) Slippage in the unity of consciousness. In: G. Bock and J. Marsh (eds) Experimental
and Theoretical Studies of Consciousness. John Wiley and Sons, New York.
Monson, C. and Hurlburt, R. (1993) A comment to suspend the introspection controversy:
introspecting subjects did agree about imageless thought. In: R. Hurlburt (ed) Sampling Inner
Experience in Disturbed Affect. Plenum Press, New York.
Nagel, T. (1974) What is it like to be a bat? Philosophical Review, 83, 435–450.
Overgaard, M. (2006) Introspection in science. Consciousness and Cognition, 15, 629–633.
Overgaard, M. (2009) How can we know if patients in coma, vegetative state or minimally conscious
state are conscious? Progress in Brain Research, 177, 11–19.
Overgaard, M. and Grünbaum, T. (2012) Cognitive and non-cognitive conceptions of consciousness.
Trends in Cognitive Sciences, 16, 137.
Overgaard, M. and Mogensen, J. (2011) A framework for the study of multiple realizations: the
importance of levels of analysis. Frontiers in Psychology: Consciousness Research, 2, 1–10.
Overgaard, M. and Overgaard, R. (2010) Neural correlates of contents and levels of consciousness.
Frontiers in Psychology: Consciousness Research, 1, 1–3.
Overgaard, M. and Sandberg, K. (2012) Kinds of access: different methods for report reveal different
kinds of metacognitive access. Philosophical Transactions of the Royal Society of London—Series B:
Biological Sciences, 367, 1287–1296.
Overgaard, M. and Sørensen, T. (2004) Introspection distinct from first order experiences. Journal of
Consciousness Studies, 11, 77–95.
Overgaard, M. and Timmermans, B. (2010) How unconscious is subliminal perception? In: D.
Schmicking and S. Gallagher (eds) Handbook of Phenomenology and the Cognitive Sciences. Springer,
Heidelberg.
Overgaard, M., Koivisto, M., Sørensen, T., Vangkilde, S., and Revonsuo, A. (2006) The
electrophysiology of introspection. Consciousness and Cognition, 15, 662–672.
Owen, A. (2006) Detecting awareness in the vegetative state. Science, 313, 1402.
Persaud, N., McLeod, P., and Cowey, A. (2007) Post-decision wagering objectively measures awareness.
Nature Neuroscience, 10(2), 257–261.
Rees, G. and Frith, C. (2007) Methodologies for identifying the neural correlates of consciousness. In:
M. Velmans and S. Schneider (eds) The Blackwell Companion to Consciousness. Blackwell, Oxford.
Rosenthal, D. (2000a) Metacognition and higher-order thoughts. Consciousness and Cognition, 9,
231–242.
Rosenthal, D. (2000b) Introspection and self-interpretation. Philosophical Topics, 28, 201–233.
Searle, J. (1992) The Rediscovery of the Mind. MIT Press, Cambridge, Massachusetts.
Sperling, G. (1960) The information available in brief visual presentation. Psychological Monographs, 74,
1–29.
Stoerig, P. and Cowey, A. (1997) Blindsight in man and monkey. Brain, 120, 535–559.
Timmermans, B., Sandberg, K., Overgaard, M., and Cleeremans, A. (2010) Partial awareness
distinguishes between measuring conscious perception and conscious content. Consciousness and
Cognition, 19, 1081–1083.
Vinding, M., Jensen, M., and Overgaard, M. (2014) Distinct electrophysiological potentials for
intention in action and prior intention for action. Cortex, 50, 86–99.