
The Importance of Awareness
Neil Levy
[email protected]
Draft only: please do not cite without permission
There are, as Nicholas Sturgeon (1986) has pointed out, two basic responses to a
philosophical claim: 'oh yeah?' and 'so what?' In the rapidly growing literature on the role
that consciousness plays in morally responsible action, most of the reactions have been of
the first variety. The challenge has come largely from neuroscientists and from psychologists,
who have claimed that consciousness is epiphenomenal with regard to action and that
therefore it is not able to play a role in grounding moral responsibility (Libet 1999; Wegner
2002). The philosophers' response has, largely, been to deny the claim. Mele (2009) has
argued that the data gathered by the scientists do not support the claim that conscious
intentions are epiphenomenal; Dennett (1991; 2003) has argued that because the brain is a
distributed system, subjective reports of simultaneity cannot be relied upon; Flanagan (1996)
has questioned the ecological validity of the findings; Nahmias (2002) has argued that the
demonstration of a dissociation between the conscious feeling of willing and actually willing
does not entail that the first is not a reliable guide to, even a cause of, the second. And so on.
All these responses, if they succeed, allow us to continue to hold that consciousness is
necessary for moral responsibility, and that agents are morally responsible for some of their
actions.
Lately, however, an attack on the claim that consciousness is needed for moral responsibility
has come from a very different direction. This attack is not motivated by new data in the
sciences of the mind, but by armchair reflection on actual and possible cases. Moreover, the
challenge takes a different form: the worry is not about the timing of conscious states –
whether they come on the scene too late to play a role in grounding moral responsibility –
but about their contents. Consider three representative examples from the recent philosophical
literature.
Huck Finn, Nomy Arpaly (2002) argues, is not conscious of the reasons why he ought to
help Jim escape from slavery. Quite the contrary; he is conscious of what he takes to be
the moral requirement that he turn Jim in. Yet he aids Jim's escape, and does so for the
right reasons. He is morally praiseworthy for his action; therefore we need not be
conscious of the reasons which make our actions right in order to be praiseworthy for
them.
Agents who forget their friends' birthdays and therefore omit to offer them good wishes
cannot be conscious of the reasons that make their actions wrong, as Angela Smith
(2005) reminds us. Yet they can be blameworthy for their actions. Hence we need not be
conscious of the reasons which make our omissions wrong in order to be blameworthy
for them.
Ryland, a character in one of George Sher's examples (Sher 2009: 28), is too self-absorbed to notice that 'her rambling anecdote about a childless couple, a handicapped
person, and a financial failure is not well received by an audience that includes a childless
couple, a handicapped person, and a financial failure.' Obviously, Ryland was not aware
of the reasons that made her actions wrong, yet she is blameworthy for them. Hence we
need not be conscious of the reasons which make our action wrong in order to be
blameworthy for them.
All three philosophers deny that agents like these are responsible in virtue of some earlier
action or omission, with regard to which they were conscious. Rather, they are directly
responsible for what they did or failed to do. Obviously, what is at issue here is not that
consciousness arrives on the scene too late to ground moral responsibility. Huck may never
be conscious of the reasons that moved him; Ryland, and Smith's agent who forgets her
friend's birthday, are already responsible, prior to their becoming aware of the facts that
make their actions wrong. Since, on the views articulated by these philosophers, moral
responsibility does not require consciousness of the reasons for which we act, these
philosophers might respond to the claims of scientists like Libet and Wegner by shrugging
their shoulders and saying 'so what?'
Call this view the non-awareness view. Whether proponents of this view are entitled to
respond to the challenge from Libet and Wegner by shrugging their shoulders is not
immediately obvious. Since the contents of consciousness focused on by the non-awareness view are
different from the contents of consciousness at issue in the work of Wegner and Libet – the latter
focus on consciousness of the volition that is supposed to initiate action, while the non-awareness view focuses on consciousness of the reasons for or against the action – it would
take argument to show that the absence of need for the one entails the absence of need for
the other. Given the broad outlines of the accounts of moral responsibility at issue, however,
there is at least a prima facie case to think that a 'so what' might be warranted. Arpaly and
Smith (at least) defend views on which agents are morally responsible for actions that
express who they are as moral agents; since the expression relation (allegedly) does not
require consciousness of the reasons for which we act, it is hard to see why it would require
consciousness of the volition causing the action either.
Whether proponents of the non-awareness view are entitled to the 'so what' response is not
my concern here. I will not deal directly with issues about the timing of our mental states at
all. Instead, I want to present arguments in favour of the *awareness view (the reason for the
asterisk will shortly be explained), where the *awareness view is a rival of the non-awareness view.
Consciousness (in the sense I will define) is more central to our agency than its proponents suppose: if
it is not, quite, a necessary condition of moral responsibility, it is nevertheless typically
required, either at the time of the action or somewhere in the causal chain leading to it. In
developing this rival picture of moral agency, I will also help to motivate (perhaps pre-emptively) an answer to the 'so what' response. That is, I aim to show that consciousness
does matter to moral agency, and that we therefore have good reasons to worry whether, and
when, it comes on the scene.
I
'Consciousness' is not a univocal term. When philosophers talk about consciousness, it is
typically phenomenal consciousness that is meant, where a state is phenomenally conscious if
there is something it is like to have it; if, that is, it is a state with some kind of qualitative feel.
Inasmuch as that is the main target of philosophers of consciousness, attention to their
debates threatens to mislead, since that conception of consciousness is not the one at issue
here. Inasmuch as it is phenomenal consciousness which generates so many of the central
puzzles of philosophy of mind, such as the hard problem (Chalmers 1995), the fact that it is
not our concern is good news; it makes our problem more tractable. Here, we are concerned
with the accessibility of information, not with whether that information, or the subject‟s
relation to that information, has any phenomenal quality. Proponents of the *awareness view
assert, while their opponents deny, that agents (typically) need access to certain facts
concerning the moral significance of their actions in order to be morally responsible for their
actions.
What is at issue is therefore a certain kind of access to a certain kind of information. I shall take
these components in turn. What kind of access is required for moral responsibility? In
answering this question, I will have repeated recourse to my intuitions about moral
responsibility. Of course, what is at issue in the debate is precisely what is required for moral
responsibility: given that I am going to claim that a certain kind of access is required for
moral responsibility, proponents of the non-awareness view will hold that my intuitions are
wrong. However, my aim in this section is only to identify the kind of consciousness at issue
in the dispute, not to argue (seriously) for the claim that this kind of consciousness is
required for moral responsibility. I do not, therefore, beg any questions in having free
recourse to my intuitions about responsibility in identifying this kind of consciousness.
It would be extremely implausible to maintain that agents are responsible only for actions
the reasons for which they hold before their minds while they act: this kind of introspective
awareness is far too demanding. Suppose that Dr No intends to kill James Bond using an
elaborate machine he has designed. Operating the machinery might require so much
concentration that when Dr No implements his plan he is utterly absorbed in the technical
details and has no introspective awareness of the end toward which he is working. Yet Dr
No is clearly responsible for attempting to kill James Bond, and for killing him if (per
impossibile) he were to succeed.
Thus the awareness in question cannot be introspective consciousness. Might dispositional
awareness suffice to ground moral responsibility? Obviously, Dr No is dispositionally aware
of the end at which he aims. However, dispositional awareness is an extremely broad
category; too broad, I think, to play this role. Some of the content of which we are
dispositionally aware is relatively inaccessible and some is readily available: some might
require a great deal of effort to retrieve and some comes to mind unbidden, given the right
trigger. These differences are prima facie relevant to agents‟ moral responsibility. The relative
inaccessibility of information seems to correlate with the degree of moral responsibility of
the agent for failing to utilize it. Consider dementia patients. At least in the earlier stages of
the disease, their memories may still be dispositionally available, but it would require more
effort on their part or more cues in the environment to retrieve them than average. The fact that
such efforts and external cues are needed – that is, the fact of relative inaccessibility – seems
to diminish their responsibility for failing to recall and utilize their beliefs. More prosaically,
dispositional beliefs are often unavailable to ordinary agents on demand (the tip of the
tongue phenomenon is a dramatic illustration of this fact). Instead, for ordinary agents recall
of relatively inaccessible information may require a trigger. Since agents may not be able to
control the triggers which elicit such information (they often cannot know what the relevant
triggers are until they have recalled the information), retrieval of information may be beyond
their control. Since some agents are prima facie excused moral responsibility for not utilizing
information of which they are nevertheless dispositionally aware, it seems that dispositional
awareness alone isn't sufficient for moral responsibility.
When Dr No engages in the series of actions aimed at killing Bond, the goal toward which
he is working is not (merely) dispositionally available. It is occurrently active, playing a role in
guiding his behavior. Not all occurrent beliefs are also conscious beliefs, though all conscious
beliefs are occurrent. Is it sufficient that a state be occurrent for it to ground moral
responsibility; that is, is the agent aware enough of a state if it is an occurrent state? I don't
think so. Once again, occurrence is a category covering a lot of ground. Information that is
occurrent includes information of which the agent is currently aware, but also includes a
great deal besides. Any state that actually guides an agent's behavior is occurrent, but,
notoriously, states that guide behavior may be personally unavailable to the agent. Consider
Emily, who forgets that today is her wedding anniversary, and therefore makes plans to go
out with friends. Nevertheless, the knowledge that it is her anniversary guides some of her
behavior: without her realizing the reason for it, she might choose to wear the necklace her
husband gave her on their last anniversary, and choose it because it is their anniversary. It may
be that Emily is responsible for forgetting her wedding anniversary, but the mere fact that
some of her behavior is guided by the knowledge that it is her wedding anniversary does not
seem to establish that this is the case. The fact that the information is occurrent does not
seem to suffice to render her responsible for failing to use it in her planning.
Both dispositional availability and actual on-line occurrence are too broad to ground moral
responsibility: though some states in each category seem available enough to play this role,
others don't. Information that is available enough to the agent, I suggest, is personally available.
Personal availability is a category which cuts across the dispositionally available/occurrent
distinction: some information that is occurrent is not personally available, and some
information that is dispositionally available is not personally available, but some information
in each category is personally available. The information that guides Dr No's behavior is
occurrent, though not occurrently introspectively conscious. It is, however, personally
available to him; it is available for easy and relatively effortless recall (no special prompting
or cues are needed, and, unlike the dementia patient, it is constantly available to him). If Dr
No is interrupted and asked what he is doing, he could reply without hesitation (“I'm killing
you, Mr Bond”). Borrowing a term from Freud (1964), I will call information that is so
readily available that it requires little effort to retrieve and which is poised to guide behavior
even when it is not conscious, preconscious (preconscious information is thus occurrent). In
claiming that agents must be aware of the reasons for their actions in order to be responsible
for them, I am claiming that this information must either be conscious or preconscious;
information that is either conscious or preconscious is personally available to the agent. I will
say that when an agent has such information personally available to her, she is *aware of it.
My claim, then, is that agents need to be *aware of (some of) the reasons for which they act
in order to be (directly) morally responsible for an action (for stylistic reasons, I will
sometimes use the word 'conscious' and its cognates in what follows, rather than
*awareness. Unless otherwise stated, 'consciousness' should be understood as equivalent to
*awareness).
Of course, agents are rarely *aware of all their reasons for action. Unconscious processes
play a broad and deep role in guiding our actions, and much of the information processing
involved is inaccessible to us. Which reasons must agents be *aware of? I claim that agents
must be *aware of a sufficient subset of the facts that (they believe) render their action or
omission morally right or wrong; sufficient, that is, to warrant praise or blame. It is not
necessary to be *aware of all the morally relevant facts. Consider the Knave of Hearts. In
stealing the Queen's tarts, he not only deprived her of her lawful possession, he might also
have embarrassed her in front of the visiting Diamonds. We can blame him for his theft if
(assuming that the other conditions of moral responsibility are satisfied) he was *aware that
his action constituted theft; he need not be *aware that he would cause the Queen
embarrassment. The degree of praise or blame an agent merits is a function, in important
part, of the facts of which he is *aware. If the tarts were in fact prescribed by the Hearts'
doctor as a cure for the King's gout, and their absence entailed that the King suffered great
pain, the Knave might not be due any blame for this fact, if he was *unaware of it. He might
merit only the lesser degree of blame that attends being *aware that he was engaging in
theft.1
II
Now, the claim that *awareness is required for moral responsibility may actually seem too
obvious to be worth defending. Cases of culpable ignorance aside, it might seem obvious
that *awareness is required, for one or more of several reasons. One might think that
*awareness is needed for control over our actions, or because we are only responsible for
actions on desires we endorse, or because we are morally responsible only for those actions
which express our attitudes toward others, and such expression requires *awareness. But
these answers, tempting though they may be (and especially if we are antecedently
committed to an account of moral responsibility requiring control, or endorsement, or
expression), lose their obvious quality once we begin to spell them out.
Take control. It is traditional to distinguish two conditions, both of which an agent must
satisfy in order to be morally responsible for an action: an epistemic condition and a
voluntariness condition. Many philosophers hold that the voluntariness condition is a
control condition: agents are responsible only for what they control. From there, the step to
the conclusion that moral responsibility requires *awareness seems a small one: doesn't
control require *awareness? Surely I can only exercise control over states of affairs of which
I am aware, and, moreover, regarding which I am aware of how that state of affairs is
sensitive to my actions? I do not control what is going on behind me, even if I could control
it, when I don't know what's going on behind me. If I know what is going on behind me,
and, further, I could control what is going on behind me (perhaps by saying the right words),
I don't control what is going on behind me if I don't know and can't guess what words I
need to say. So control seems to require *awareness.
But this is too swift. Consider the fact that agents are praised for behaviours that are
responses to information of which they are apparently *unaware. Think of sporting or
musical performance. Sometimes events on the sporting field or on a stage unfold too
quickly for us to say that if the agent deserves praise for their actions, it is because they were
*aware of what they were doing. Instead, it seems that they become *aware of what they
have done at the same time as we do. Hence the oft noted ability of athletes and improvisers
to surprise themselves. Consider Sonny Rollins, for instance, widely recognized as one of the
most important tenor saxophonists in jazz history, on the experience of improvisation:
When I get on the stage, I don't think. I sort of let the subconscious take over. That's
what it's all about ... I surprise myself, occasionally, with something that might come out
that is striking. I don‟t really think about whatever else is happening. It just happens.
Or, in a very different field, listen to Ayrton Senna on the experience of being 'in the zone':
Suddenly I was nearly two seconds faster than anybody else, including my team mate
with the same car. And suddenly I realised that I was no longer driving the car
consciously. I was driving it by a kind of instinct, only I was in a different dimension. It
was like I was in a tunnel.
Rollins and Senna can very plausibly be regarded as reporting the experience that
Csíkszentmihályi (1990) calls 'flow', and flow is associated with an enhanced sense of
control. Of course this sense could be illusory, but in these cases that is extremely hard to
swallow. Surely Rollins' saxophone playing and Senna's driving are paradigms of control.
Since they are apparently *unaware of the kinds of information their skilful responses to
which make them (arguably) praiseworthy, *awareness does not seem to be required for
control.
There is a more general problem for arguments for the conclusion that *awareness is
necessary for the satisfaction of some condition (control, endorsement, or what have you)
that is in turn necessary for moral responsibility: it is hard to avoid begging the question.
Take endorsement. It may be true that I am responsible only for acting on a first-order
desire that I desire to be my will, as Frankfurt (1971) might have it; that is, it might be true
that I am responsible for acting only on first-order desires that are endorsed by higher-order
desires. But it would be question-begging to claim that this entails that I need to be *aware
of my first-order desire. We cannot point to the (alleged) necessity of endorsement to
establish the need for *awareness; surely one desire, of which I am *unaware, can stand in
the endorsement relation to another, and whether it can stand in that relationship does not
seem to depend on whether the other relatum is one of which the agent is *aware. I might
satisfy the conditions of a hierarchical approach to moral responsibility despite not being
*aware either that I endorse the desire on which I act, or of the desire upon which I act, or
both.
One might insist that the above ways of attempting to bring out why *awareness is required
for moral responsibility don't get at the heart of the matter. I am responsible for my actions,
one might say, only if I control them, or I endorse them. It is not enough, one might say,
that my behaviour be responsive to some piece of information; I must know that piece of
information; it is not that I have a desire that has as its content < that some other desire be
my will >; I must endorse that desire. As we shall see, I actually think that this line of
thought is a powerful one. But it is a line of thought that faces a large obstacle.
On the face of it, the objection – that it is not sufficient that I have a mental state that is
responsive to some piece of information, or that endorses one of my desires, because this
does not suffice to bring it about that I stand in the appropriate relation to that information
or that desire – commits us to a claim about persons that is deeply unattractive. It commits us
to the claim that persons are constituted by their personally available states alone. Now,
while the thesis that I am to be identified with these mental states might once have been
attractive, few philosophers are tempted by it today. Today we all recognize that all our
cognitive achievements, from the banal to the most exalted, are deeply dependent on
unconscious states. The unconscious orients us toward certain options and away from
others, rendering some features of a situation salient for us; without it we would face a
paralysing problem of combinatorial explosion. The unconscious does much of the work
even when we evaluate our options consciously. As I write, my unconscious takes care of the
grammar and a great deal of the sense too (“how do I know what I think till I see what I
say?”, as E.M. Forster is supposed to have asked). Given these facts, now accepted as
truisms by all of us, the claim that I am only my personally available states seems difficult to
swallow.
Yet I think that this thesis, or a closely related one, can be defended. I do not know whether
we ought to say that I am only my personally available states; I don't want to take sides on
the issue of personal identity. But so far as moral responsibility is concerned, I think that we
can defend the claim that the conscious self is the responsible self. We ought to identify the
morally responsible self with the deliberative perspective, I shall claim, and that perspective
is constituted by the states of which the agent is *aware. Moreover, and for closely related
reasons, I shall argue that agents‟ values are to be identified with those attitudes of which
they are *aware. I shall begin with this question.
III
(i) Values
Notoriously, the unconscious contains much of which we don't approve. For Freud, the
unconscious was (among other things) the repository of the repressed, of all the thoughts we
could not, or did not want to, acknowledge. Contemporary cognitive science has typically
been concerned with the cognitive unconscious, the unconscious as information processor.
However, it has also shown that the unconscious is a repository of beliefs, or (perhaps more
accurately) of dispositions toward beliefs that the subject may explicitly disavow and of
which she is sometimes unaware. Here I shall mention just one central line of research, work
on implicit associations. Implicit association tests have provided persuasive evidence that the
majority of Americans have racist and sexist attitudes, including many people whose sincerity
we have no good reason to doubt when they claim to be passionately opposed to
racism and sexism (Dasgupta 2004). A dramatic illustration of this claim comes from the fact
that Black Americans sometimes show a negative association with black faces; women with
female faces; and gays with homosexuals. We do not easily escape from the effects of
enculturation; it leaves its mark on the contents of the unconscious.
Further, it is difficult even for well-motivated agents to rid themselves effectively of these
attitudes. Attitudes revealed by the implicit association test can apparently be altered by
effort, but (1) our enculturated attitudes, acquired early and encoded in patterns of responses
that are deeply ingrained, are resistant to change, and (2) having explicit beliefs with a
contrary content is not sufficient to make a great deal of difference to our attitudes.2 Instead,
they are changed by means that resemble the ways in which they were acquired: by a gradual
and often unsuccessful process of acquiring new associations and learning new habits. The
fact that our attitudes are resistant to our judgment, and must be altered by nonrational
means, has long been recognized; it is this fact that lies behind Pascal's advice to the person
who cannot bring themselves to believe in God by reason alone: kneel and move your lips in
prayer; you will believe.3
Now, under certain circumstances, these unconscious beliefs will cause behaviour. Scores on
an implicit association test seem to be a better predictor of certain kinds of subtle racist
behavior than are our conscious attitudes towards other races (McConnell & Leibold 2001);
moreover, implicit attitudes can help explain certain uses of lethal force. Priming with Black
faces raises the likelihood that agents will identify ambiguous stimuli or non-gun tools as
guns (Payne 2001); this fact may partially explain why police are more likely to use deadly
force when confronted with Black suspects.
Why are our actions sometimes caused by attitudes we disavow? When there is a conflict
between our conscious and our unconscious attitudes, it takes effort to ensure that our
behavior is in line with our conscious beliefs and not our conflicting attitudes (effort we may
not be aware of expending). Cognitive resources are limited; we can expect that under a
variety of conditions (when the person is under stress, tired, or has had to expend effort
recently in this or other tasks) they will not be available to prevent unconscious associations
from significantly impacting on behavior.
That is to say that under a variety of conditions, agents' actions will reflect their unconscious
attitudes and not their conscious beliefs. Ought we to hold them directly responsible for
these actions? For proponents of the non-awareness view, our implicit attitudes are partially
constitutive of who we are, and therefore part of our real selves. Insofar as we are disposed
to behave in a particular manner, we give evidence that we value the state of affairs at which
the action is aimed. We see that state of affairs as reason-giving for us (Scanlon 2002: 177);
we judge it as “good in some way” (Smith 2005: 270). An agent's implicit values reflect his
evaluations, “even if he disapproves of, rejects, and controls them, and would eliminate them
if he could” (Scanlon 2002: 171). Because in acting on our implicit attitudes, we express our
values, we express who we are – even if only in part – and can therefore justifiably be held
responsible for our actions. I shall show that the claim that our implicit attitudes ought to be
regarded as an expression of (some of) our values should be resisted.
Attributing values to agents is always a tricky business. Though we ought to give some
weight to what agents (sincerely) say they value, it is clear that they are not infallible on the
question. They can be self-deceived, for instance, or simply lack access to their values. So
there is no reason, in principle, why agents cannot have values of which they are *unaware.
Even in cases in which we have no doubt that an agent's consciously affirmed values really
are hers, we can justifiably doubt whether they tell the whole story about her. So neither
sincere affirmation nor even veridical report serves to rule out the possibility of unconscious
values. However, attention to the kinds of facts that would lead us to doubt that the report is
the whole story will lead us to see why we ought to hesitate before we identify an agent‟s
implicit attitudes with her values. Though agents can have values of which they are
*unaware, we can only ascribe such values to them when they become *aware (typically
indirectly) of their attitudes; hence *awareness on the part of the agent is required for the
ascription of values to her.
We will hesitate to accept that an agent is sincere in her affirmation that she is, for example,
opposed to racism when this opposition fails to have its characteristic effects on the full
range of her behaviors; when, in some of the circumstances in which opposition to racism
should cause her to act in a particular way (or in one of a range of ways), she fails to act in
that way. The more widespread and the greater the divergence between the behavior that the
professed value should cause and actual behavior, the less credibility the claim will have. All
this is to say that values are manifested as bundles of dispositions: dispositions to assert
certain claims, to have certain emotional responses (to feel indignant when one's values are
violated, for instance), to use the value as a side-constraint in deliberation, and so on. When
we are confronted with an agent who sincerely professes that they hold a certain value but
who fails to act as if they held that value, we are confronted with a puzzle: the agent
manifests some of the dispositions associated with the value, but not others. In these cases,
and especially given the centrality of what we affirm to the dispositional stereotype, we have
good reason to deny that the agent should simply be identified with their implicit
attitudes. Perhaps we should say that this is a case of 'in-between valuing'.4 But there are
reasons to go further, and identify the agent more strongly with her consciously affirmed
values than with her unconsciously affirmed ones.
The broader the range of circumstances in which dispositions associated with a value are
manifested, the stronger our reason to attribute that value to the agent. Thus, we have
stronger reason to attribute the associated value to an agent who acts on an implicit attitude
if she is – perhaps despite herself – pleased by her behavior. Again, I do not want to deny
that cases which fit this description are possible. But conscious attitudes are unlike
unconscious ones in one important respect: conscious attitudes tend to cause the acquisition or
manifestation of other dispositions, within and beyond the dispositional stereotype associated
with that attitude. My being disposed, unconsciously, to A, or my A-ing, need not have any
effect on my other unconscious attitudes, no matter what their content. This is due to the
domain-specificity of the unconscious and its correlative relative lack of sensitivity to logical
relations, positive or negative. Unconscious attitudes, because they are relatively
encapsulated both from consciousness and from other unconscious attitudes, are not tested
for consistency. Conscious attitudes are quite different.
If I am disposed, consciously, to assert that p, I am also and automatically disposed to
manifest a range of associated dispositions (to be surprised that not-p; to use p as a premise
in reasoning; to be disturbed by my disposition to assert that q when – and of course only
when – my disposition to assert that q generates dispositions incompatible with (the
dispositions generated by) p, and so on). It is this generativity of conscious beliefs that ensures
that they are likely to be tested for consistency with one another. Typically, they do not
remain isolated from one another; instead, they generate dispositions which spread across
the domain of the mind and thereby run into conflicting dispositions.5 Conscious attitudes
have a similar sort of generativity (indeed, perhaps the generativity of attitudes of which I am
*aware is partially due to the fact that in becoming *aware of my values, I acquire a belief
about myself). Because attitudes of which I am *aware are personally available – constantly
available for easy and effortless recall – they tend to be activated automatically whenever
representations with an appropriately related content are online. Hence conscious activation
of an attitude tends to ripple throughout the agent's conscious attitudes.
Unconscious attitudes do generate further dispositions, but only when they intrude into
consciousness. When an agent acts on an attitude of whose content they are then
*unaware, the attitude enters *awareness by way of the action, thereby generating further
attitudes. Suppose I am disposed, unconsciously, to A, and A, but I take myself to have
conflicting values; my A-ing (or my becoming aware of my disposition to A) will typically
cause the manifestation of a range of dispositions associated with my professed value:
dismay at my A-ing, for instance.6 If I am also pleased about my A-ing, and *aware of that
fact, the conflict between my pleasure and my professed values will tend to intensify my
dismay, and perhaps cause feelings of guilt and shame as well. Insofar as we attribute values
to agents on the basis of their dispositions, but only consciously affirmed values tend to
cause this ramification of dispositions, we will have better reason to attribute the consciously
held values to agents than the unconsciously held. Since we attribute values to others and to
ourselves according to the degree to which we hew to the dispositional stereotype associated
with the value, but agents will tend to manifest a broader range of the relevant dispositions
when they are *aware of a value than when they are not, agents are usually correct in
identifying their values. The sheer breadth of concordant dispositions associated with
attitudes of which we are *aware, as contrasted with the relatively narrow range of
concordant dispositions generated by implicit attitudes, and the fact that this narrow range is
often offset by ramifying contrary dispositions, gives us strong reason to identify the agent‟s
values with what she takes her values to be.7
(ii) The Deliberative Perspective
Let us turn, now, to the claim that moral responsibility is intimately linked to the deliberative
perspective, and this perspective is bounded by the agent's *awareness. We have already seen
an important part of the reason why this is the case: it is only our conscious beliefs that can
be tested for consistency. But the search for consistency is closely linked to the ability to
deliberate at all: in trying to decide what to do, I seek the course of action that is most
consistent with my values and with my beliefs. Indeed, the search for all-things-considered
consistency might be said to be constitutive of deliberation. What we can do, in testing our
options, is limited by what we can consider, and we cannot consider whatever falls outside
the scope of our *awareness.
Of course, our unconscious attitudes play a significant role at every stage of our deliberation.
They make certain options salient to us and others pallid; they help to cause certain
considerations to come to mind and screen off others. But we lack introspective access to all
this essential activity; we are unaware of how it works and can have relatively little influence
over it. All this activity happens off stage, while deliberation happens within the spotlight.
We can only assess – for consistency and for plausibility – what happens on stage (and
offstage relatively little testing for consistency occurs). Moreover, only our conscious beliefs
are available to form the content of our reasons, and hence can form the content of the
intention upon which we settle by way of deliberation (Hurley 1997). We can only settle on a
course of action in order to X, for some conscious value of X. That is not to deny the point
stressed repeatedly by Arpaly (2002), that the best (third-personal) interpretation of one of
my actions might explain it by reference to a reason of which I was not conscious. But when
this is true, the unconscious reason does not form the content of the reason for which the
agent acts. Either it influences behavior which aims at a goal of which the agent is veridically
*aware – when, for instance, the agent acts in order to comply with the experimenter’s instruction (in
a blindsight experiment, for instance), or in order to say the first word that occurs to me but in doing
so is caused, by the unconscious prime, to select a particular word – or it acts by a kind of
dissimulation. This latter occurs when an unconscious belief causes a confabulatory reason
for action; for instance, when an agent acts in order to stand up to an aggressive bully – as he
sees it – but when he sees his interlocutor as aggressive only because of an implicit attitude
to black faces. When this occurs, the agent has been, as it were, deceived by their
unconscious; absent some reason to think that they are responsible for this deception, there
seems little reason to treat them as any more blameworthy for the action than if they had
been deceived by another into performing a similar action.
Only conscious information is available to form the content of our reasons for acting;
unconscious information is available only to guide and shape the content of our reasons for
action but cannot be our reason for action. Of course, it would be begging the question to
assert that consciousness matters because only conscious reasons are available to form the
content of conscious intentions, unless we had some reason to think that the consciousness
of an intention matters. But we do. First, the consciousness of an intention matters because
conscious intentions ramify, in the manner of conscious states generally: they automatically
generate a range of endorsing attitudes and appropriate dispositions (to be surprised if one is
accused of racism for standing up to a bully, for instance). Unconscious states may not
generate these second-order or entailed dispositions and beliefs, conscious or unconscious.
Second, unconscious states guide action (under normal conditions) only by either influencing
what enters into *awareness or by stealth; by causing the generation of a confabulatory
conscious intention. Only a conscious intention implicates the agent as a whole, by ramifying
through his or her mental states; and unconscious states owe their power to this same
phenomenon, bought by the generation of conscious states.
IV
Let us return to the phenomenon of flow, or, more generally, of highly skilled performance
without *awareness. I have claimed that only conscious beliefs generate a range of second-order and entailed or (more weakly) implied dispositions. Only conscious beliefs can thereby
be tested for consistency and form the very stuff of deliberation. I will now argue (in
apparent contradiction to my acknowledgement that the astonishing skills of someone like
Sonny Rollins are exhibited without *awareness) that the kind of generativity that we see
only with conscious attitudes is required for creativity.
Distinguish two kinds of creativity: local and innovative creativity. Local creativity is of the
kind exhibited by Rollins; by artists and athletes, and also, in some, but only some, aspects
of their craft, by writers. Local creativity depends on extensive training. The purpose of this
training is to acquire a range of local skills, perhaps even to build up what might be regarded
as a quasi-modular system, with its own proprietary database of information upon which it
draws and its own range of stock responses. The trained musician, say, has a range of scripts
she can draw upon at will, between which she can select, and which (as her training
progresses) she can break down, combine, and mix with other scripts. As the database
builds, as the range of scripts increases in number and as their complexity increases, her
responses to musical demands become more flexible and less predictable. Under the
pressure of performance, she may combine and mix and divide in ways she has never done
before, surprising even herself. She exhibits awesome skill, and produces something
genuinely novel (perhaps even, from a certain perspective, praiseworthy). But her skillful
performance is a product of domain-bound information processing. It is not the less
impressive for that.
The skilful musician may innovate within a domain, but she will rarely innovate (qua
improvising musician) in a more radically novel way: across domains, rather than within a single
domain. It is this kind of creativity that is genuinely innovative. And this kind of creativity,
genuinely innovative creativity, requires *awareness (or a great deal of luck). Since it requires
not merely the combination and interplay between different domains but (naturally enough)
appropriate combinations, innovative creativity requires that the domains be in contact with
one another in a way that is sensitive to their content. Since it is *awareness that makes
possible the generativity that puts different dispositions, and thereby different domains, in
contact with one another, *awareness is required for this kind of domain-general creativity.
Much of the moral life is more akin to domain-general creativity than to local creativity.
Moral action is sometimes routinized in a way that makes it possible to rely on domain-specific scripts, but this is more often in the blurred, though still important, area in which
morality shades into etiquette (consider holding open the door for the person behind you; or
apologizing when you bump someone). Large parts of the moral life require much more in
the way of creative, or at least domain-crossing, response, and this seems to be especially the
case with regard to actions for which we praise and blame. There are habitual thieves, just as
there are habitual liars, but most theft requires planning and biding one's time; similarly,
though we sometimes reflexively aid one another (for example by slamming on the brakes
when someone darts in front of the car), even most of those cases in which the agent is
praised for not having 'one thought too many' (Williams 1981) are far from routinized. The
creativity of the jazz improviser is impressive, and perhaps she deserves some praise for it.
But the praise is perhaps best thought of as domain-limited, just as the creativity is. We
praise it as good of its kind. Whereas innovative creativity reflects the person, thereby having a
special connection to her and justifying reactive attitudes directed to her (resentment,
indignation, anger, or gratitude, esteem, and so on), local creativity justifies only more
circumscribed responses, addressed not to the person but to the skill. It is, perhaps, a failure
to distinguish local and global responses that lies behind the discomfort many people express
at the discovery that an artist was also a reprehensible person. There is no inconsistency in
blaming the person and praising the artist; the reactive attitudes are directed at different
targets.
V
I have argued that only acts caused by reasons of which agents are *aware are acts for which
they are typically morally responsible. I have given two linked arguments for this claim. The
first is that only explicit attitudes constitute our values, because only our explicit attitudes
generate a broad range of concordant attitudes; implicit attitudes either fail to generate
much support except when they become conscious (directly, or, through their effects,
indirectly), or, if they are norm-discordant, they generate opposition rather than support when
we become *aware of them. Further, I have argued that the deliberative perspective, from
which the agent settles on the course of action to pursue, is bounded by her *awareness, and
that only options upon which she consciously settles can form the content of her reasons.
I have not argued that agents cannot have values of which they are *unaware. Rather, I have
suggested that for a value to be attributed to an agent, it must be broadly supported by
concordant values; in creatures like us, this typically occurs when the values are held with
*awareness. Agents may, I suppose, on occasion have attitudes of which they are *unaware, but
of which, when they become aware, they find themselves approving. This possibility should
not worry us overly. Given the difficulties in circumscribing the content of attitudes (ought
we to ascribe content narrowly or broadly; should we say that an agent believes that p or p
and its obvious entailments?), cases in which agents have an implicit attitude which fits this
description will typically be cases in which we had good reason to ascribe the attitude to the
agent in any case. There will be marginal cases, in which we are unsure what to say. And
perhaps there will be cases in which, due to gross self-deception, we can ascribe a value to an
agent on the basis of a broad range of dispositions even when she sincerely denies that she
holds that value. For the most part we ought to restrict an agent's values to what she is
*aware of.
NOTES
1 Agents also seem due some praise or blame in cases in which the beliefs of which they are *aware are actually
false, if in acting on those beliefs they express good or ill will. The fact that the Knave took himself to be
stealing might underwrite some degree of blame, even if the tarts were actually left for anyone who wanted
them.
2 Gendler (2009: 569) suggests that implicit attitudes are never changed; instead, the best we can hope for is to
bypass them.
3 Smith (2005; 2008) holds that agents are morally responsible for actions that reflect their evaluative activity;
we are therefore morally responsible for actions that are caused by our judgment-dependent attitudes, where an
attitude is judgment-dependent if it belongs to the set of attitudes that “generally reflect and are sensitive to our
(sometimes hasty, mistaken, or incomplete) judgments about what reasons we have, and they are generally
responsive to changes in these judgments” (Smith 2008: 370). But our implicit attitudes are not responsive to
our judgments. As Gendler says, “Beliefs change in response to changes in evidence; aliefs change in response
to changes in habit” (Gendler 2009: 566).
4 This expression is a nod to Eric Schwitzgebel's notion of in-between believing (Schwitzgebel 2001).
Schwitzgebel argues that a belief is just a bundle of dispositions, phenomenal and cognitive; agents who
manifest some of the dispositions central to the dispositional stereotype but not others are in-between
believers. It should be noted that my claim that we rightfully attribute values, to others and ourselves, in virtue
of the degree to which we manifest the dispositions associated with their stereotypes does not commit me to an
account of what values are parallel to Schwitzgebel's account of beliefs. Whatever values are, different agents
are committed to them to different degrees and this commitment will be reflected in the degree to which they
depart from the stereotype.
5 Gendler (2009) stresses the breadth of aliefs, a category which includes what I am (more noncommittally)
calling unconscious attitudes. As she puts it, a belief is a two-place relation (S believes that p), whereas an alief
'involves a relation between a subject and an entire associative repertoire', including 'affective states, behavioral
propensities, patterns of attentiveness, and the like' as well as some kind of relation to a content (Gendler
2009: 559). Though she is certainly right to suggest that such an attitude might be manifested in a range of
dispositions, it ought to be clear that we attribute full-fledged beliefs in virtue of a broader range of
dispositions. A belief is not manifested solely in virtue of a subject's relation to a content but also in virtue of the
range of dispositions emphasized by Schwitzgebel; the same seems to be true of attitudes of which we are *aware.
6 Cf Hookway 1981: “The state of believing that p is accompanied by a distinct belief state, the state of believing
that one believes that p. The second order belief has a range of behavioural manifestations, given the agent's
other beliefs, desires etc., which the first order belief alone lacks” (76).
7 Schwitzgebel seems to me to overlook the generativity of our conscious attitudes. He holds that when agents
have inconsistent implicit and explicit attitudes, they are simply in-between believers (Schwitzgebel, ms). But
because our conscious attitudes tend to generate supporting dispositions, we will typically be far better
identified with our explicit than with our implicit attitudes.
References
Arpaly, Nomy (2002) Unprincipled Virtue: An Inquiry Into Moral Agency. Oxford: Oxford University
Press.
Chalmers, David (1995) Facing Up to the Problem of Consciousness. Journal of Consciousness Studies 2:
200-19.
Csikszentmihalyi, Mihaly (1990) Flow: The Psychology of Optimal Experience. New York: Harper and Row.
Dasgupta, Nilanjana (2004) Implicit Ingroup Favoritism, Outgroup Favoritism, and their Behavioral
Manifestations. Social Justice Research 17: 143-168.
Dennett, Daniel (1991) Consciousness Explained. London: Penguin Books.
Dennett, Daniel (2003) Freedom Evolves. London: Allen Lane.
Flanagan, Owen (1996) Neuroscience, agency, and the meaning of life. In Flanagan, Self-Expressions
(Oxford: Oxford University Press), pp. 53-64.
Frankfurt, Harry (1971) Freedom of the Will and the Concept of a Person. Journal of Philosophy 68: 5–
20
Freud, Sigmund (1964) New Introductory Lectures on Psycho-Analysis. New York: W.W. Norton.
Gendler, Tamar (2009) Alief in Action (and Reaction). Mind & Language 23: 552-585.
Hookway, Christopher (1981) Conscious Belief and Deliberation. Proceedings of the Aristotelian Society
75: 75-89.
Hurley, Susan L. (1997) Non-Conceptual Self-Consciousness and Agency: Perspective and Access.
Communication and Cognition 30: 207-248.
Libet, Benjamin (1999) Do We Have Free Will? Journal of Consciousness Studies 6: 47-57.
McConnell, Allen R. and Leibold, Jill M. (2001) Relations among the Implicit Association Test,
discriminatory behavior, and explicit measures of racial attitudes. Journal of Experimental Social
Psychology 37: 435-442.
Nahmias, Eddy (2002) When consciousness matters: a critical review of Daniel Wegner's The illusion
of conscious will. Philosophical Psychology 15: 527-41.
Payne, Keith (2001) Prejudice and perception: the role of automatic and controlled
processes in misperceiving a weapon. Journal of Personality and Social Psychology 81:
181-192.
Scanlon, Thomas, M. (2002) Reasons and Passions. In Sarah Buss and Lee Overton (eds) Contours of
agency: essays on themes from Harry Frankfurt (Cambridge, Mass.: The MIT Press), pp. 165-183.
Schwitzgebel, Eric (2001) In-Between Believing. Philosophical Quarterly 51: 76-82.
Schwitzgebel, Eric (ms) Acting Contrary to Our Professed Beliefs, or The Gulf Between Occurrent
Judgment and Dispositional Belief.
Sher, George (2009) Who Knew? Responsibility Without Awareness. New York: Oxford University Press.
Smith, Angela (2005) Responsibility for Attitudes: Activity and Passivity in Mental Life. Ethics 115:
236–71
Smith, Angela (2008) Control, Responsibility, and Moral Assessment. Philosophical Studies 138: 367-392.
Sturgeon, Nicholas (1986) What Difference Does It Make Whether Moral Realism Is
True? The Southern Journal of Philosophy 24: 115-141.
Wegner, Daniel (2002) The illusion of conscious will. Cambridge, Mass.: The MIT Press.
Williams, Bernard (1981) Persons, Character, and Morality. In Williams, Moral Luck. (Cambridge:
Cambridge University Press), pp. 1-19.