Literacy, media and multimodality: a critical response

Cary Bazalgette and David Buckingham

Literacy, Volume 47, Number 2, July 2013

Copyright © 2012 UKLA. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.
Abstract
In recent years, literacy educators have increasingly
recognised the importance of addressing a broader
range of texts in the classroom. This article raises some
critical concerns about a particular approach to this issue that has been widely promoted in recent years –
the concept of ‘multimodality’. Multimodality theory
offers a broadly semiotic approach to analysing a range
of communicative forms. It has been widely taken up
by literacy educators, initially at an academic level,
and has begun to find its way into policy documents,
teacher education and professional development and
classroom practice. This article presents some criticisms, both of the theory itself and of the ways in which
it has been taken up within the wider context of curriculum change. It argues that, in its popular usage,
multimodality theory is being appropriated in a way
that merely reinforces a long-standing distinction between print and ‘non-print’ texts. This contributes in
particular to a continuing neglect of the specificity of
moving image media – media that are central to the
learning and everyday life experiences of young children. Drawing on recent classroom-based research, the
article concludes by offering some brief indications of
an alternative approach to these issues.
Key words: media, multimodality, critical literacy, digital literacy/ies, Early Years, new literacies, popular
culture
Introduction
In recent years, growing numbers of literacy educators
have come to recognise the importance of addressing a
broader range of texts in the classroom. Of course, media such as film, television, the press and advertising
have been a concern for progressive English teachers
for at least half a century (Greiner, 1955). Many English teachers in the United Kingdom are also teachers of Media Studies, whose existence as a separate,
optional examination subject dates back to the early
1970s. Media educators have also long argued for an
expanded conception of text and of literacy: the earliest
arguments for ‘media literacy’ (or alternatively film,
television or visual literacy) began to emerge in the
late 1970s, well before the term was taken up by New
Labour policy-makers (see Bazalgette, 1988; Buckingham, 1989; Great Britain, 2003). Although the history
of media education in primary schools is somewhat
more recent (see, for example, Bazalgette, 1989), the
case for including media texts under the broad rubric
of literacy has become much more generally accepted
in the past decade, not least because of the impact of
new digital media. This more inclusive view of literacy obviously reflects the growing social and cultural
importance of the modern media, as well as the continuing attempt to ensure that the curriculum remains
relevant to children’s changing experiences outside
school.
As media educators, we have been promoting these ideas for several decades, at the level of theory and research, policy advocacy and classroom practice. In this article, we want to raise some concerns
about a particular approach to these issues that has
been widely promoted in recent years – the concept of
‘multimodality’. Multimodality theory offers a broadly
semiotic approach to analysing most communicative
forms, including spoken and written language, still
and moving images, sound, music, gesture, body posture, movement and the use of space and so on (Jewitt,
2009; Kress, 2010; Kress and van Leeuwen, 2001). The
concept of multimodality has been widely taken up by
literacy educators, initially at an academic level (e.g.
Bearne and Wolstencroft, 2007; Fernandez-Cardenas,
2009; Narey, 2008). More recently, it has begun to find
its way into policy documents, teacher education and
professional development and classroom practice. Our
concern is with the ways in which some proponents
of the theory have sought to account for and prescribe classroom practice; and how these ideas have
been taken up within the politics of curriculum change.
Our contention is that, in its popular usage, the concept of multimodality is being appropriated in a way
that merely reinforces a long-standing distinction between print and ‘non-print’ texts. This contributes in
particular to schools’ continuing neglect of the specificity of moving image media – media that are central to the learning and everyday life experiences of
young children. Our response is certainly critical, even
polemical; but we write in the hope of provoking a
wider debate about issues that seem to have been
sidelined.
Defining the field
On one level, multimodality theory can be seen as an
extension of linguistics, of the kind foreseen in the
early 20th century by Ferdinand de Saussure (1995
[1916]) and C. S. Peirce (1931–1935). The possibility of extending linguistic concepts and methods of
analysis to visual and audio-visual texts was not fully
taken up until the 1950s and 1960s in France – and
not translated into English until the 1970s – in the work
of semiologists such as Roland Barthes and Christian
Metz. While Metz (e.g. 1973) focused primarily on the
cinema, Barthes ranged more promiscuously across
media such as advertising, photography and film, as
well as extending the approach to areas such as food,
toys, sport and fashion (e.g. Barthes, 1972). There are
theoretical differences between this structuralist form
of semiotics and the ‘social semiotic’ theory on which
multimodality theory is based (to be discussed below),
although in many respects the ambition remains the
same. For example, in his recent textbook on multimodality, Gunther Kress (2010) considers advertisements, street signs, children’s drawings, book illustrations, food packaging, web pages and a range of
other texts. Yet although Kress acknowledges the importance of computer games, film and television, he
largely ignores these media, and retains a central emphasis on the printed page. Some of his most recent
work, for example, looks at the changing relationship
between verbal and visual material in school textbooks
(Bezemer and Kress, 2008): he draws attention to the
fact that print and visual images need not be separately composed or separately read, but combine in
a single, multimodal communicative form – an argument that has much in common with Barthes’ much
earlier analysis of the role of written captions on
newspaper photographs and advertisements (Barthes,
1977).
However, Kress and his colleagues have also used
multimodality theory to analyse teaching and learning in schools, in areas such as Science and English
(Kress et al., 2001, 2005). The concept now informs
approaches to the teaching of literacy, especially at primary level (see, e.g., Bearne and Wolstencroft, 2007;
Neville, 2008). Inevitably, the theory has been simplified in order to make it usable by classroom teachers
and attract the attention of policy-makers with neither
the time nor the inclination to read academic tomes.
Yet these attempts to reach a wider audience with an
‘easier’ definition of the field can prove misleading.
Thus, David Machin’s (2007) Introduction to Multimodal
Analysis is described by its publisher as providing “a
groundbreaking approach to visual analysis” (see, e.g.,
http://www.whsmith.co.uk/CatalogAndSearch/ProductDetails.aspx?productID=9780340929384). This
appears to conflate the multimodal and the visual –
although in fact, while much of what multimodality
theorists deal with is visual, much of it is not, or combines the visual with other modes. In the education
sector, a further distortion has appeared in the process
of trying to make the ideas more concrete and graspable. For example, Bearne and Wolstonecroft (2007)
not only chose the title Visual Approaches to Teaching
Writing for their book subtitled ‘Multimodal literacy
5–11’ but also make the claim that:
“Many everyday texts are now ‘multimodal’ combining
words with moving images, sound, colour and a range of
photographic, drawn or digitally created visuals” (p. 1,
emphasis added).
A significant conceptual leap has been made here,
from multimodal analysis as a way of looking at texts,
to multimodal texts as a way of identifying and singling out apparently new kinds of text. The implication here is that, while ‘many’ everyday texts are
‘now multimodal’, there are many that are not, and
that in the past, texts were not multimodal. In fact,
multimodality theorists frequently insist that all texts
are and always have been multimodal – even print
texts, whose visual dimensions are apparent in aspects like the choice of fonts or the design of a
page (Kress, 2010); or even in the choice of either a
pencil or a pen for writing, a modal status distinction of which most children are keenly aware (Webb,
2011).
Nevertheless, a thriving industry has started to grow
around the idea of ‘multimodal texts’ (as distinct from
multimodal analysis). The previous UK government’s
National Strategies website offers this:
“Multimodal texts are now common on the Internet and
pupils are used to texts that use more than one method
of communication. All over the web there are short
films, animations and combinations of words, sounds
and images that convey ideas” (http://www.nationalstrategies.standards.dcsf.gov.uk/node/191938).
Here, multimodality seems to be reduced to a mere
aggregation of ‘methods of communication’ – which
is very different from the aim of multimodal analysis, which is to investigate how the interaction between modes can produce meanings that are more
than the sum of the parts. A further reductive approach can be seen in this Local Authority advice on
‘multimodal texts’, which simply equates them with
computer-based activity:
“ICT texts incorporating sound and images as well as
text can be a highly effective way of engaging children in
purposeful interactions with reading and writing” (http://www.eriding.net/english/multimodal_writing.shtml).
In addition to separating multimodal from print (presumably mono-modal?) texts, this advice implicitly allocates them a lower status. ‘Multimodal texts’ in this
scenario are just a way of getting children to do better
at reading and writing: by implication, they have little
intrinsic value. At the level of marketing to education,
terms such as ‘multimodal’, ‘multimedia’ and ‘digital’
seem to function merely as eye-catching ways to spice
up a promotion, rather than having any specific meanings, as in this advertising blurb from the publisher Scholastic:
“Multimodal texts: podcast – give your literacy lesson
some multimedia magic with our free digital text and
activities” (http://www.education.scholastic.co.uk/content/4902).
These examples demonstrate a confusion, not just of
terms but also of fundamental aims. Our many informal discussions and professional development sessions with teachers suggest that the adoption of the
term ‘multimodal texts’ as a way of encouraging them
to bring non-print texts into the classroom has backfired: many teachers are either thoroughly confused by
the term ‘multimodal’ or they interpret it (as Scholastic
does) as something to do with digital stuff and having
fun.
This confusion is exacerbated by the fact that, while
there may be a growing desire to bring non-print texts
into the classroom, there is also an anxiety about how
to justify this. There may be an underlying fear that
someone – parents, press, head teachers, school inspectors – may object to the apparent devaluing of print
texts that is implied if children spend time on films, TV
or video games in the classroom. ‘Multimodal texts’
sounds scientific and businesslike, and may be less
likely to attract the opprobrium of the right-wing press
in the way that ‘Media Studies’ so consistently does
(Laughey, 2010). In the current climate of testing and
league tables these anxieties are understandable, but
the contortions that are generated cause as many problems as they solve. The ultimate effect is to maintain
a problematic distinction between the proper texts that
are written or printed on paper and in books, and
the other texts, whether they are labelled ‘multimodal’,
‘digital’, ‘visual’ or ‘media’. This disregards the fact
that much of what falls into the ‘other’ category is actually written: websites, e-mail, e-books and SMS – not
to mention newspapers, advertisements and (on many
occasions) films, television and games – all use written
language. Of course, there are some interesting differences between words on paper and on screen, not least
relating to their cost and ease of distribution, but the
basic decoding skills required to make sense of them
are broadly the same.
We will refer later to this question of how, and whether,
the landscape of texts may be usefully categorised and
divided. Initially, however, we consider in a little more
detail where this term ‘multimodal’ comes from, what
kind of a theory it is based upon, and how useful it
really is to literacy teaching. While the term is being
increasingly widely used in some areas of education,
there seems to have been relatively little critique – and
in some cases, not even much acknowledgement – of
the theory on which it is based. Obviously, we cannot
offer a detailed discussion of the theory here; but in
the following section we aim to highlight a number of
broad critical points that we feel are in need of further
debate.
Multimodality theory
The so-called ‘linguistic turn’ in the human and social sciences, at least in the anglophone world, dates
back to the 1970s. At that time, structuralism and semiotics were widely proclaimed as all-encompassing theories that could be used to interpret a whole range of
social and cultural phenomena in terms of language.
Everything, it seemed, could be seen as a ‘text’ that
could be analysed and explained in linguistic terms:
from popular culture to fashion to food, and from politics to the operations of the unconscious mind, it was
all about language. And these languages could all be
understood as logical systems, with their own codes
and conventions and forms of grammar and syntax.
Aside from anything else, these developments empowered linguistics to see itself as some kind of master
discipline, offering a universal template that could be
placed over a vast range of cultural practices.
As a form of semiotics, multimodality theory represents the latest manifestation of this continuing project
– although it has emerged at a time where the dream
of such an all-encompassing theory has largely faded.
Yet this does not seem to have quelled the ambition:
for example, the promotional materials for Gunther
Kress’ most recent book Multimodality (2010) proclaim
that it will “bring all modes of meaning-making together under one theoretical roof” (http://routledge.customgateway.com/routledge-linguistics/multimodality/multimodality.html). What we are promised
is both a theory and a set of analytical tools that
can be applied in a scientific manner across seemingly disparate forms (or modes) of communication.
This is often combined with the argument that this
all-encompassing approach is now urgently needed
because digital technologies are making it easier to
combine many modes in one text.
However, the realisation that communication may involve a diversity of modes – visual, written, auditory,
musical, gestural and so on – is not new. There is a long
tradition of visual analysis within fields such as art history and film studies; and media educators have been
working with different modes and media for decades.
Standard film studies textbooks such as Bordwell and
Thompson’s Film Art: An Introduction (first published
in 1979) have inducted generations of students into
ways of analysing moving image texts, paying close
attention to the interaction between image and sound,
and the role of editing, for example. In schools, the detailed analysis of ‘media language’ (including aspects
such as bodily communication) has been a staple element of Media Studies curricula for many decades (see
Masterman, 1980).
These approaches to textual analysis are not, of course,
set in stone: they should be open to change and renewal. Yet unfortunately, much discussion of multimodality seems to take us little further than the recognition that there are indeed different modes, which
serve different functions, work in different ways, and
often operate in combination to generate meanings.
In some instances, the approach seems to veer into a
form of determinism that has much in common with
Marshall McLuhan’s ‘medium theory’ – crudely, the
notion that the means (or mode) of communication
determines the form of thought or of social life
(McLuhan, 1964). McLuhan’s famous dictum “the
medium is the message” might be translated into multimodality theory as “the mode is the message”. Thus,
it is claimed that orality, literacy and visual media in
themselves ‘afford’ different kinds of social relationships and social identities, irrespective of context or
purpose. For instance, Bezemer and Kress (2005) claim
that a changing balance or relationship between image
and text, for example in school textbooks, necessarily results in a different form of learning; while Kress et al.
(2005) assert that in the classroom, the use of typewritten rather than handwritten text, or video clips rather
than spoken text, in itself transforms the relationships
of authority between teachers and learners. The mode
apparently “shapes both what is to be learnt (e.g. the
curriculum) and how it is to be learnt (the pedagogic
practices involved)” (Jewitt and Kress, 2010, p. 349,
emphasis in original).
When it comes to analysing classroom practice, this
produces a peculiarly thin and generalised account.
The revelation that English teachers use visual imagery
and digital media in their teaching (as they have been
doing for many decades) sanctions a rather breathless account of the far-reaching changes in knowledge,
learning and identity that have apparently ensued as
a result (Jewitt and Kress, 2010). Changes in the balance and combination of modes, it is argued, are all it
takes to erode boundaries, unsettle existing practices
and forge new connections. Yet in the process, multimodality theorists barely address the actual content
of English teaching and the social and political contexts
in which teaching and learning take place. The fundamental historical transformations in English and literacy pedagogy – and the complexity and ambivalence
of those transformations – are largely reduced to questions of textuality.
A further difficulty here is in the theory’s account of
the process of meaning-making. One of the favoured
terms here is the concept of ‘design’, which appears to
imply a view of communication as a wholly rational,
controlled process. The individual ‘sign-maker’ sits
at the “multi-modal mixing desk” (Burn and Parker,
2003), making systematic choices about the mode that
will best suit his or her intended meaning. While
this might partly describe the process by which professional advertising agencies construct campaigns,
modal choices in everyday communication – especially
in the case of work created by children in classrooms
– are dictated by economics, power, convenience and
perhaps assessability, as much as by the suitability of
mode to content. The theory appears to ignore the haphazard and improvised nature of much human communication, as well as its emotional dimensions. It is as
if the scientific rationalism of the analyst has been vicariously transferred to the ordinary meaning-maker.
The notion of ‘design’ in its original usage in this context (New London Group, 1996) clearly applies to the
full range of communicative forms. However, the fact
that this term is drawn from the production processes
of print, illustration and graphics again reveals the theory’s inherent bias towards the printed page and ‘the
visual’. To describe the meaning-making processes involved in film production, for example, as ‘design’
severely limits our understanding of the innumerable
creative, logistical and economic decisions (and indeed
the many accidents or fortuitous discoveries) involved
in processes such as scripting, casting, performance,
set and costume design, musical composition, sound
design, special effects and the orchestration of all these
and more into a single timeline. It also cuts off consideration of the generic, institutional, technical, economic and historical dimensions of these choices.
Multimodality theory purports to be a social theory of
communication, and many of its key exponents are or
were advocates of the broader field of ‘social semiotics’
(Hodge and Kress, 1988). Social semiotics sought to
distinguish itself from previous semiotic approaches
by virtue of its concern with the lived reality of language use, as opposed to the abstract system or grammar that underlies it. This approach drew on “systemic functional linguistics” (especially the work of
Halliday, 1994) rather than the structuralist linguistics of de Saussure (1995 [1916]). Communication, from
this perspective, was socially motivated and situated,
not merely the manifestation of an abstract system or
grammar. Yet it is doubtful whether ‘social’ semiotics
or multimodality theory has ever escaped the formalism of structuralist semiotics; and as a social theory, it
often seems to do little more than gesture towards the
social dimensions of meaning-making.
Thus, in practice, multimodality theory appears to
sanction a rigidly formalistic approach to analysis.
Kress and van Leeuwen’s Grammar of Visual Design
(1996), for instance, proposes a way of reading visual imagery (such as advertisements and magazine
layouts) in which the material at the left is known,
whereas that at the right is new; the top is what might
be (the ideal), the bottom is what is (the real) and so
on. Needless to say, this approach works exceptionally well with the examples Kress and van Leeuwen
provide, but as is often the case, attempts to apply the
grammar to other examples do not work out so neatly.
This careful selection of examples that appear to prove
the case is characteristic of texts on multimodality (and
indeed linguistics more broadly): yet the principles on
which these examples are selected are hardly ever discussed. Here again, questions to do with content and
context are dealt with in very limited terms.
If we compare a multimodal analysis of a media form
such as advertising with the kinds of analysis practised in Media Studies, the limitations are immediately
apparent. Media Studies would require us to analyse
not only the text itself but also its production (working practices, institutional contexts, commercial strategies and so on), and the ways in which it is used and
interpreted by different audiences. By contrast, a social semiotic analysis typically infers the intentions of
the text’s producers and makes assumptions about its
meaning based simply on an analysis of the text itself.
Some writers on multimodality have noted the importance of developing “a political economy of transmedia signs” but have simply staked this out as a future
task (Lemke, 2009, p. 150). And while there may be an
in-principle recognition of the fact that readers interpret texts in diverse ways, there is no attempt to investigate this empirically. The text, it would seem, is the
be-all and end-all of meaning.
The place of moving image
One of the problems with the distinction between written and ‘multimodal’ texts is that it ignores the nature of people’s everyday textual practices and preferences. Multimodality theory, while it offers powerful
accounts of textuality, provides little insight into what
people actually do with texts in the contexts of their
everyday lives. There is a striking contrast here with
the more anthropological or sociological analysis practised in Media and Cultural Studies – and indeed with
the more situated approach of “new literacy studies”
(e.g. Street, 1995). For classroom teachers, this problem is compounded by the fact that they may feel that
they know little about their pupils’ textual practices,
because of the changes in communications technologies that have taken place in recent years. When they
seek help on this, they quickly encounter a popular
rhetoric about ‘digital natives’ and ‘Web 2.0’ that has
helped to build up a mythology about the power and
pervasiveness of new communications technologies –
although this is not an argument propounded within multimodality theory itself (Thomas, 2011). It is
commonly assumed that all children and young people
are incessantly texting each other, using social media
and playing computer games, and that these practices
have driven out everything else.
The reality is somewhat more nuanced. While Ofcom’s
annual series of “media literacy audits” may not tell
us much about what media literacy actually is, they
certainly provide a useful source of information about
changing trends in people’s textual practices and preferences. Of importance to primary school teachers are
the responses given by 5- to 11-year-olds when asked
what media technology they would miss most if it
was taken away. In 2011, 52 per cent of 5- to 7-year-olds identified television as their favourite media technology, with computer/console games coming a long
way behind at 25 per cent and other media practically
nowhere (Ofcom, 2011, p. 29). Forty-five per cent of 8- to 11-year-olds cited TV as their favourite, with only
20 per cent of this age group preferring games and 15
per cent naming the Internet (which means mainly social networking and virtual worlds). While these figures do show a gradual increase in interest in games
and online media, it is still extremely important to note
how public excitement and moral panics about digital technologies have tended to overlook the continuing importance of ‘old’ moving image media (television and film) in children’s formative years. We suspect that if Ofcom’s study looked at pre-schoolers, we
would see an even bigger preference for TV – and for
DVDs, which Ofcom does not ask about, since it does
not have responsibility for regulating them. Sheffield
University’s Digital Beginnings study showed that 59
per cent of children have started looking at TV by the
age of 6 months; and that by the age of two, 70 per cent
of children can (and probably do) turn on the TV set by
themselves (Marsh et al., 2005, p. 25).
It is thus worth giving some specific attention to the
role of moving image media in children’s literacy practices, especially since, as we have argued, multimodality theory offers little to help educators think about
the potential role of these media in the classroom.
For more than a century, but particularly since the
widespread take-up of television in the 1950s, moving image media have been enormously important to
young children; and this has been even more the case
since the domestic VCR made it possible for them to
view and re-view favourite bits of TV and film whenever they wanted or were allowed to. Yet most educators have continued to be distracted by public concerns
relating to the possible harmful effects of these media.
We would like to suggest a different approach to children’s moving image consumption. Given that children start to engage with moving image media in their
second year of life – often in contexts with little or no
adult mediation – they must have acquired some understanding of the complex multimodal characteristics
of these media well before they start school: if they had
not, they would not be able to enjoy them so much.
This should have immense implications for the early
stages of conventional literacy learning. Many tend
to assume that, because children learn to understand
films and TV at an early age, these media must be simple – as in statements such as “the visual nature of film
makes its devices more accessible to a wider range of
children” (Simpson, 2011). But we do not assume that
verbal language is simple just because children learn it
early in life.
Where multimodal analysis should help us is in identifying the complexity and distinctiveness of moving image media, and recognising that understanding
them must involve learning, even for very young children. Yet this is something that, in our view, it has
largely failed to do. A small number of theorists have
addressed this (Bateman and Schmidt, 2011; Burn and
Parker, 2003; Van Leeuwen, 1998) but the textbooks we
have referred to conspicuously neglect moving image
media such as television, film and computer games. By
contrast, there is a large body of work in Media Studies
that explores the nature of meaning-making in moving image media in considerable detail (e.g. Barker,
2000; Bordwell and Thompson, 1979; McKee, 2003). We
might identify three broad modes in operation here:
an image mode that includes sub-modes such as framing, movement, mise-en-scène, lighting, colour, graphics
and animation style; a sound mode that includes voice,
music, sound effects and silence, each of which can be
broken down again into a multiplicity of modes; and
a ‘performance’ mode that includes elements such as
expression, movement, speech, song, appearance and
costume.
However, there is a further, vitally important, mode
that is almost always overlooked: time, which includes
duration, rhythm, sequence and transitions. Time in
film and TV is different from the time required to read
a book or scan through a website, which is under our
control. Time in moving-image media is an essential
part of the repertoire of creative choices available to
the film-maker, in the same way that it is essential to
composers of music: changing the duration of a shot
or a transition, or altering the sequence of shots, affects
meaning just as much as changing the tempo of a piece
of music or changing a crotchet to a minim (for further
discussion see Bazalgette, 2011). Time in the reading
of print texts works in different ways, and here we
could make useful distinctions between reading time
and story time, and indeed between story and plot
(see, e.g. Genette, 1980). Yet it is this kind of complexity
that is lost when all non-print (or indeed ‘multimodal’)
texts are unthinkingly lumped together, whether for
facile reasons or for more ideologically charged ones,
such as defending the pre-eminence of print.
Our account thus far has focused on the problems of distinguishing between print and non-print texts, and of placing the latter under the ‘multimodal’ heading. As we have argued, multimodality theory itself has its limitations, but the simplified version currently available to most primary teachers has generated even more significant problems: it ignores the specificity of different types of non-print texts; neglects the fact that print texts are also multimodal; loses sight of the important commonalities between print and non-print texts; and imposes a false, technologically determined uniformity on non-print texts. We have argued that if texts need to be categorised, “print versus multimodal” is unhelpful. To illustrate why this is so, we have argued for particular attention to be given to moving-image media, not only because of their obvious cultural importance, but also because of their significant role in the very early cultural experiences of young children. Our argument is not that moving-image media are ‘superior’ to print, although it might well be proposed that film is ‘more multimodal’ than print, but simply that the important formal and institutional differences between these two forms are worth learning about and understanding.

We now want to explore the implications of these arguments for curriculum design and for pedagogy. We would argue that the ‘ages and stages’ models that currently govern curriculum and pedagogy are based on learning progression models and cultural hierarchies that are in turn grounded in print culture. But research – and an increasing body of anecdotal evidence – indicates that when children have opportunities to pay detailed critical attention to non-print texts such as films, notions of ‘ability’ may be disrupted and assumptions about ‘readiness’ have to be rethought. For example, in our recent research (Bazalgette and Dean, 2011), even children aged between 3 and 5 showed some understanding of, and interest in, concepts such as authorial intent, stylistic and generic expectations, and ‘reality status’ – all normally thought of as appropriate only at a much later stage. Similar findings in relation to creative work in animation have emerged in our other recent research (Bazalgette and Bearne, 2010).

In our other recent research in primary school classrooms (Buckingham et al., forthcoming), we have found that from the age of 6, many children are able to start addressing complex questions about the production, circulation and use of media texts such as television news or celebrity images. Such areas form a significant part of young children’s everyday cultural experiences outside school, yet they are typically deemed to

Beyond text: literacies and learning progression

These findings should prompt us to review some established assumptions about learning progression and literacy. The default response to the research findings described above tends to be an acceptance of film as a useful stimulus to traditional, print-based literacy learning – and no more than that. This is to ignore the gains in conceptual understanding that are achieved when print and moving-image texts are studied side by side, together with opportunities for creative work in both media. If a relatively sophisticated understanding of text can be achieved at a much earlier age than we have previously believed, what justifies the exclusive fixation on written text throughout the 5–14 literacy curriculum?
start addressing complex questions about the production, circulation and use of media texts such as television news or celebrity images. Such areas form a significant part of young children’s everyday cultural experiences outside school, yet they are typically deemed to
be appropriate for study only by much older children
(if at all). Our work includes many examples of children between the ages of 6 and 9 understanding the
motivations and working practices of media companies; critically analysing the selection and construction
of such texts and exploring how they are targeted at
particular audiences, and how they are actually read.
Copyright © 2012 UKLA
What this suggests to us is that, certainly by Key
Stage 2 (ages 7–11) and possibly earlier, the literacy
curriculum could be more ambitious. Teachers could
be encouraging learners to move towards more rigorous ways of understanding the contexts in which all
texts are produced, as well as realising and exploiting
new ways of making and circulating them. This would
mean moving on from learning about how meanings
are constructed and defined, towards understanding
how particular points of view can be conveyed, and
ultimately, how broader assumptions and ideologies
are sustained. It would include recognising and exploring the social, historical, economic, political and cultural forces that shape and determine the production
and consumption of texts and meanings.
The crucial point here is that – unlike multimodality
theory – these approaches do not remain at the level
of the text: they also look beyond the text, to consider
how texts are actually produced, circulated and used in
everyday life. They do not foreclose, but rather encourage, discussion of the complexity of these processes:
for example, they challenge simplistic understandings
of media power, and familiar stereotypes about how
different audiences use and interpret texts in all forms
of media. These activities may involve close textual
analysis – and in that respect, multimodality theory
can provide useful approaches that sit alongside more
established methods – but they also situate textual
analysis within a broader account of the social production of meaning. We would argue that this is as relevant to the study of older textual forms such as the
book as it is to newer ones.
Conclusion
Multimodality theory has been co-opted by literacy educators with the best of intentions, as a way of broadening the range of texts that primary school teachers
feel able to use in the classroom. But its inherent limitations, as well as some unfortunate oversimplifications,
have led to its recuperation and neutralisation. In an
ideal world, we ought to be able to characterise, select
and teach about a wide range of texts in the classroom
on the basis of many different theories and methods.
In this scenario, multimodality theory would be just
one of many approaches that might usefully inform
literacy learning and teaching. But multimodality theory has not been taken up by educators within such a
scenario. Rather, it has been taken up in a climate of
extreme political interference in education, when conservative forces in several countries are mounting a
cynical populist resistance to anything that smacks of
‘cultural relativism’. Much more deeply rooted educational and corporate ideologies continue to dominate
choices about what may be studied in primary schools,
the methods that should be used and the outcomes that
are to be expected. Teachers’ understandable confusion
about what multimodality is and why it might be important can only help to sustain the status quo. As the
debate in this area unfolds, we hope that the value and
limitations of multimodality theory will be more fully
recognised and critically addressed in the context of literacy teaching and learning.
Acknowledgements
The classroom research identified here was funded
by the Economic and Social Research Council UK as
part of the project ‘Developing Media Literacy: Towards a Model of Learning Progression’, 2009–2012.
The Persistence of Vision project described by Simpson (2011) was funded by the UK Film Council’s Film:
21st Century Literacy project. The ‘Reframing Literacy’ research on animation described in Bazalgette and
Bearne (2010) was funded by the Qualifications and
Curriculum Authority.
References
BARKER, M. (2000) From Antz to Titanic: Reinventing Film Analysis.
London: Pluto.
BARTHES, R. (1972) Mythologies. London: Cape (first published
1957).
BARTHES, R. (1977) ‘The photographic message’, in R. Barthes, Image – Music – Text. Glasgow: Fontana/Collins, pp. 38–41 (first published 1961).
BATEMAN, J. and SCHMIDT, K. H. (2011) Multimodal Film Analysis.
Abingdon: Routledge.
BAZALGETTE, C. (1988) ‘They changed the picture in the middle
of the fight: new kinds of literacy’, in M. Meek and C. Mills (eds.)
Language and Literacy in the Primary School. London: Falmer, pp.
211–224.
BAZALGETTE, C. (Ed.) (1989) Primary Media Education: A Curriculum Statement. London: British Film Institute.
BAZALGETTE, C. (2011) Rethinking texts. Available at
http://www.ukla.org/downloads/UKLA Chester International
Conference Papers 2011.pdf
BAZALGETTE, C. and BEARNE, E. (2010) Beyond Words: Developing
Children’s Responses to Multimodal Texts. Leicester: UKLA.
BAZALGETTE, C. and DEAN, G. (2011) Persistence of Vision Report. Available at http://themea.org/pov/volume-3-issue-2/persistence-of-vision
BEARNE, E. and WOLSTONECROFT, H. (2007) Visual Approaches
to Teaching Writing: Multimodal Literacy 5–11. London: Paul Chapman.
BEZEMER, J. and KRESS, G. (2008) Writing in multimodal texts: a
social semiotic account of designs for learning. Written Communication, 25.2, pp. 166–195.
BORDWELL, D. and THOMPSON, K. (1979) Film Art: An Introduction. Reading, MA: Addison Wesley.
BUCKINGHAM, D. (1989) Television literacy: a critique. Radical Philosophy, 51, pp. 12–25.
BUCKINGHAM, D., BURN, A., PARRY, B. and POWELL, M. (forthcoming) Developing Media Literacy: Culture, Creativity and Critique.
London: Routledge.
BURN, A. and PARKER, D. (2003) Analysing Media Texts. London:
Continuum Books.
FERNANDEZ-CARDENAS, J. M. (2009) Learning to Write Together: Multimodal Literacy, Knowledge and Discourse. Saarbrücken: VDM Verlag.
GENETTE, G. (1980) Narrative Discourse. Oxford: Blackwell.
GREAT BRITAIN (2003) Communications Act 2003. Chapter 21. London: The Stationery Office.
GREINER, G. (1955) Teaching Film. London: British Film Institute.
HALLIDAY, M. A. K. (1994) An Introduction to Functional Grammar,
2nd edn. London: Edward Arnold.
HODGE, R. and KRESS, G. (1988) Social Semiotics. Cambridge: Polity.
JEWITT, C. (Ed.) (2009) The Routledge Handbook of Multimodal Analysis. London: Routledge.
JEWITT, C. and KRESS, G. (2010) ‘Multimodality, literacy and school
English’, in D. Wyse, R. Andrews and J. Hoffman (Eds.) The Routledge International Encyclopaedia of English, Language and Literacy
Teaching. London: Routledge, pp. 342–353.
KRESS, G. (2010) Multimodality: Exploring Contemporary Methods of
Communication. London: Routledge.
KRESS, G., JEWITT, C., JONES, K., BOURNE, J., FRANKS, A. and
HARDCASTLE, J. (2005) English in Urban Classrooms. London:
Routledge.
KRESS, G., JEWITT, C., OGBORN, J. and TSATSARELIS, C. (2001)
Multimodal Teaching and Learning. London: Continuum.
KRESS, G. and VAN LEEUWEN, T. (1996) Reading Images: A Grammar
of Visual Design. London: Routledge.
KRESS, G. and VAN LEEUWEN, T. (2001) Multimodal Discourse. London: Arnold.
LAUGHEY, D. (2010) The Case for and Against Media Studies in the UK Press. Paper presented at the 'Media Literacy 2010' conference, London, 19–20 November.
LEMKE, J. (2009) ’Multimodality, identity, and time’, in C. Jewitt
(Ed.) The Routledge Handbook of Multimodal Analysis. Abingdon:
Routledge, pp. 140–150.
MACHIN, D. (2007) Introduction to Multimodal Analysis. London:
Bloomsbury Academic.
MARSH, J., BROOKS, G., HUGHES, J., RITCHIE, L., ROBERTS, S.
and WRIGHT, K. (2005) Digital Beginnings: Young Children’s Use of
Popular Culture, Media and New Technologies. Sheffield: University
of Sheffield.
MASTERMAN, L. (1980) Teaching about Television. London:
Macmillan.
MCKEE, A. (2003) Textual Analysis: A Beginner’s Guide. London: Sage.
MCLUHAN, M. (1964) Understanding Media: The Extensions of Man.
New York: McGraw Hill.
METZ, C. (1973) Methodological propositions for the analysis of
film. Screen, 14.1–2, pp. 89–101.
NAREY, M. (Ed.) (2008) Making Meaning: Constructing Multimodal
Perspectives of Language, Literacy, and Learning through Arts-Based
Early Childhood Education. New York: Springer.
NEVILLE, M. (2008) Teaching Multimodal Literacy Using the Learning
by Design Approach to Pedagogy. Melbourne: Common Ground.
NEW LONDON GROUP (1996) A pedagogy of multiliteracies: designing social futures. Harvard Educational Review, 66.1, pp. 60–
92.
OFCOM (2011) UK children's media literacy. Available at http://stakeholders.ofcom.org.uk/binaries/research/media-literacy/media-lit11/childrens.pdf
PEIRCE, C. S. (1931–1935) Collected Papers. Cambridge, MA: Harvard
University Press.
SAUSSURE, F. de (1995 [1916]) Course in General Linguistics, trans. R.
Harris. London: Duckworth.
SIMPSON, J. (2011) 'Challenges for transferability', in 'Children and teachers talking' – part of a report on the Persistence of Vision project. Available at http://themea.org/pov/volume-3-issue-2/persistence-of-vision/children-and-teachers-talking/
STREET, B. (1995) Social Literacies: Critical Approaches to Literacy in Development, Ethnography and Education. London:
Longman.
THOMAS, M. (Ed.) (2011) Deconstructing Digital Natives. London:
Routledge.
VAN LEEUWEN, T. (1998) Speech, Music, Sound. London: Macmillan.
WEBB, R. (2011) Is this Like Art/Literacy? Exploring Multimodality with Year 5 Pupils. Paper given at the 47th UKLA International Conference, Chester, 15–17 July.
CONTACT THE AUTHORS
Cary Bazalgette, London Knowledge Lab, Institute of Education, University of London, 23–29
Emerald Street, London WC1N 3QS, UK.
e-mail: [email protected]
David Buckingham, Media and Communications,
Loughborough University, Loughborough, UK.
e-mail: [email protected]