
RSE Lectures - Session 2002-2003
November 02 - October 03
Robert Burns and British Poetry.
British Academy Chatterton Lecture on English Poetry
From Chaos to the Indian Rope Trick
Chemical Constraints on Biological Evolution
A New Russian Revolution: Partnership with NATO
Part of the Edinburgh Lecture Series
Life on a Little Known Planet and Unsustainable Development
Joint RSE/ECRR/IoB Lecture
The Disappearing Computer
Science & Society Lecture
Public Transport and Public Expectations:
Can the Gaps Be Bridged?
I Cyborg
The Royal Society of Edinburgh and
Royal Academy of Engineering Joint Lecture
Cell Mediated Immunity in Virus Infections
Joint RSE/SABRI Lecture
O Brave New World?
The Union of England and Scotland in 1603
Joint Royal Society of Edinburgh and
British Academy Lecture
Genetics and Insurance: Can They Live Together?
The Fate of the Neanderthals
Goals, Greed and Governance
How Cancer Chemotherapy Works
Semiconductor Devices for Entertainment Robots
Joint RSE/SDI Lecture
The Bionic Man
Joint RSE/Heriot Watt Lecture
European Science in Difficulty
LECTURES
Professor Murray Pittock
Professor of Literature, University of Strathclyde
7 November 2002
Robert Burns and British Poetry.
British Academy Chatterton Lecture on English Poetry
Professor Pittock began by
outlining his theme: the argument
that since 1945, Burns’ reputation
has been confined by a critical
introspection. This is visible both
in the tradition of celebratory
anaphora in discussions of the poet
and in a definition of
Romanticism which has increasingly excluded him; even though
paradoxically the cult of his
personality places him squarely in
the Romanticist category of artist
as hero. His politics of vision and
prophetic role fit easily into the
Romanticist definition of MH
Abrams. On the other hand the
growing post-war interest in
imaginative and subjective
Romanticism, understandable in
terms of the history of the 1930s
and 1940s, has turned aside from
Burns as it has expanded interest
into Coleridge and Blake.
In his lecture, Professor Pittock
looked to uncover, through
dialogue, Burns’ relation to the
poetic concerns over the generations, and to argue that Burns
deserves to be free from the
introspection of class, language,
periodicity and theory which have
begun to erase him from British
poetry. Burns’ significance in
global culture remains out of all
proportion to this erasure; over
1000 societies are dedicated to
him, in at least 18 countries. He
has statues in three continents,
his books have been translated
3000 times into 51 languages
and over 200 editions of his work
have been published. Burns has
been compared to the leading
writers of Japanese haiku, and to
the national Bards of other
countries. His work has been set
to music by Haydn, Mendelssohn,
Shostakovitch and Britten.
This status does not differ greatly
from that accorded Burns by many
writers and pre-war critics. A
succession of pre-war critics
described Burns as a Bard and in
the late 1930s more articles were
published on Burns than on
Coleridge and Blake; he was on a
par with Byron. By the 1960s he
had sunk to a quarter of Coleridge’s total and half of Blake’s.
The decline continued unabated
despite occasional recognition.
Even after a bicentenary in 1996,
articles devoted to Burns had
shrunk to one sixth of those
accorded to Shelley, the least
popular of the six central English
Romanticist poets. This decline is
not uniform; he is still well
represented in anthologies and
dictionaries of quotations but is
virtually absent from textbooks
and works of reference. The 1993
Cambridge Companion to British
Romanticism cites him three
times, compared to twenty for
Southey and seventy for Blake.
The 1998 Clarendon Press
Literature of the Romantic Period
gives around twenty pages to the
six main Romantics, fourteen to
Clare, yet only two to Burns.
Why should Burns, demonstrably
a writer of global status, have
become British literature’s invisible
man? The most popular argument
is Burns’ use of unfamiliar
language. However, if this was a
barrier, why was Burns so popular
in British and American culture
before 1960? And why did his
work feature as a set book in
English schools until 1945?
Ironically, it is Burns’ highly varied
use of language itself which
appears to have distanced him
from the hieratic high cultural
activities of the poet as a theoretician of art, imagination and
language.
Burns’ decline may be more to do
with an aesthetic/theoretic
Romantic paradigm. Scottish
culture’s sometimes repetitive and
critically undemanding celebration
of its ‘national’ poet is no doubt
another cause for neglect in a
world increasingly in love with
novelty. Various attempts have
been made to separate Burns
from his natural relationship with
the English Romantics. The
overlooked Britishness of Romanticism lies at the root of both the
introspective neglect and introspective celebration of Burns.
Burns recognised his debt to a
broad British literary tradition
(Pope, Shenstone, Thomson,
Sterne) and his intellectual roots
in the age of sensibility have long
been noted. Furthermore, his
radical energy is evidenced in
writings such as The Prelude and
The Death of Wallace.
The complexity of his radical
energy can be seen in his 1787
letter comparing the Jacobites to
Milton’s devils “after their unhappy Culloden in Heaven…thrown
weltering on the fiery surge”. At
this time Burns was still involved
with Edinburgh’s remaining
Jacobite Club, at which he addressed A Birthday Ode to Charles
Edward on 31 December 1787.
Burns’s use of epigraphs is both
implicit and explicit. The Address
to the Deil may be headed by an
epigraph from Milton but its first
lines contain another submerged
epigraph: “O Thou, whatever title
suit thee! / Auld Hornie, Satan,
Nick, or Clootie”.
His addressed poems promote a
range of speakers who both frame
and intervene in their narratives
from the Devil himself to the sly
bard posing in folk naivety. Unlike
Wordsworth, Burns’ poetic voice
conflates with its subject, the
commentator as participant, the
agent as spectator. Both the
sympathy of the benevolent
spectator and the objective
correlative of the imagined
sensuality of nature are present in
an alliance of sentimental object
and Romantic subject. This inward
outwardness may explain the
flexibility of Burns’ register.
In poems of apparent folk naivety,
such as To a Louse, Burns’ consciousness of hygiene combines with
the radical energy of the louse,
whom the speaker repeatedly
appears to blame for its impertinence in infesting a member of
the upper middle class. Burns’
apparently simple language
conceals the density of his
allusion and conceit.
In To a Mouse, Burns combines
local event and the larger politics
of the sentimental era with a
universal stance suited to his
emerging prophetic status as a
“bard”, a term which he constructs to his own advantage. The
animal is many different things; a
sentimental object like Smart’s cat;
the inheritor of a tradition of
political fable reaching back to
Robert Henryson and before; an
avatar of the misery of the poet
and, on some level, an anticipation of the Wordsworthian
solitary, the victim of a changing
countryside. The local event is a
moment in the speaker’s life as a
tenant farmer, who is as poor as
the mouse he encounters.
It was this situation which nearly
brought Burns to emigration. He
wrote that he could almost
exchange lives at any time with
farmyard animals. The initial voice
in this poem is the register of the
rural poor but another register
supervenes, that of a benevolent
bystander of Enlightenment
theory. Elsewhere Burns compares
the oppression of the poor to cats
at a plundered mouse nest, his
standard English indicative
both of his sympathy and the
speech of the spectator.
Burns’ linguistic flexibility is the
key to a hybridity of experience
outwith and within a number of
dominant cultures, not only in
England but in Edinburgh, not
only national but social. Burns’
direct and indirect influence on
other poets who followed was
considerable. For example, John
Clare called him “imitable and
perfect” and developed Burns’
nature poetry and use of the
language of regional location for
his own purposes.
Burns’ cultural hybridity was
critical in denominating the scope
of British Romanticism which
drew so much of its strength from
the imagined recreation of the
familiar yet alien particular: leech
gatherer, mariner, Grecian Urn, the
“chartered streets of London”. In
this, Burns’ idea of the Bard is
important because for him the
familiar and alien were comprised
in himself as subject, not the
objects of his gaze. Burns
adopted the persona of the Bard
not as a ventriloquist but as a
means of hybridising his own
cultural origins to the literary
expectations of a wider audience.
Burns’ bard was only at the
margins a fatalistic doomed
figure; he is more centrally part of
the living community. Yet much as
he might claim to own “the
appellation of Scotch Bard” who
sought only “to please the
rustic”, Burns always aimed to do
more than this. In Tam o’ Shanter
he united the popular dialect poet
of local anecdote with the
detached satirical voice of written
culture, which is then inverted by
the orality it seeks to control. In
mediating the bardic ability to
speak both the language of
locality and that expected of the
more universal figure of the Noble
Savage, Burns adopted a variety of
linguistic registers, much more
sophisticated tools than the
predictable tone and oblique
narration of MacPherson’s Ossian
poetry, which in its own way also
sought to give the intensely
localised bard a universal appeal.
Burns’ Kilmarnock Edition of 1786
begins to emplace this notion of
the Bard.
One of the most remarkable
things about Burns criticism is
how often his playful characterisations of the bard and bardic rules
have been taken as authentic,
even autobiographical. Like the
much lesser poets of the 1890s,
Burns’ reputation is polluted by
biography.
In conclusion, Professor Pittock
examined Burns’ bardic voice in
The Cotter’s Saturday Night and
Tam o’ Shanter, chosen in order to
discern the unfamiliar in the
familiar works of Burns. The latter
begins as a written report of an
oral tale told about another tale,
then develops into a satire of the
genesis of the oral tale as a
fanciful product of alcohol and
lechery, which at the same time
conspires to celebrate the liberating quality of the secret life of the
locality, represented by the
witches’ freedom from control.
Professor Pittock argued that to
understand that what was, in
Burns’ time, called “polish” was
present in the self-conscious
nature of his bardic and imaginative vision is to begin once
again to give him his due, and to
deliver him from being that
humorous, parochial and ultimately naive figure, the
Ploughboy of the Western World.
Professor Tom Mullin
Professor of Physics, University of Manchester
and Director of the Manchester Centre for Nonlinear Dynamics
Monday 11 November 2002
From Chaos to the Indian Rope Trick
Professor Mullin introduced his
subject by explaining that the
principal impact of his talk would
be via demonstrations; a progression through the “simple”
pendulum, “not so simple”
pendulum, “excited” pendulum,
and “up-side-down” pendulum.
He began by examining the term
“deterministic chaos”, an apparently contradictory term, and
asked why disorder should arise in
a deterministic system, if no
randomness is added. The answer
is that non-linear systems can
show deterministic chaos.
The simple pendulum, the
movement of which is governed
by Newton’s equations of motion,
is part of a predictable system
and real-world examples include
the orbits of satellites and solar
eclipses. The period of this
pendulum is entirely predictable
and is determined by the angle
through which the pendulum
swings (amplitude); hence its use
in traditional clocks. Despite
being entirely predictable, the
system is non-linear because the
greater the amplitude, the greater
the period.
Other systems, again governed by
Newton’s equations, can exhibit
“chaos”. Among other definitions, chaos can be defined as
“apparent randomness”. This is a
fundamental issue because most
systems in the real world are non-linear (like the simple pendulum)
and have sufficient ‘freedom’ to
display chaos (unlike the simple
pendulum).
In his first demonstration of chaos
Professor Mullin split the simple
pendulum to produce a double
pendulum (in effect a second
pendulum hanging from the tip
of the first, i.e. a not so simple
pendulum). Every time the double
pendulum operates, it traces a
different path, although the final
resting position, hanging vertically, is always the same. The double
pendulum is a very simple system
but behaves in an apparently
random way.
Professor Mullin extended this
demonstration to a computer
screen to represent the motion of
a double pendulum in a geometric format. The screen showed the
path traced by two such pendulums, differing in their starting
position by only 1 part in 10^12.
After an initial period following
the same path, they soon embarked on entirely different paths;
predictability breaking down.
Complete predictability in such a
system is impossible. Given
infinite precision it would be
possible to predict the pendulum’s path, but infinite precision is
impossible.
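A minimal sketch of the kind of computation behind this on-screen comparison is given below. It is illustrative only and not the lecture's own program: it integrates the standard equal-mass double-pendulum equations for two starting states offset by about one part in 10^12 and prints how quickly the two solutions separate. All parameter values are assumptions chosen for illustration.

    # Illustrative sketch (assumed parameters): two double pendulums started
    # about one part in 10^12 apart, integrated with the standard equal-mass
    # planar double-pendulum equations.
    import numpy as np
    from scipy.integrate import solve_ivp

    G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

    def derivs(t, y):
        th1, w1, th2, w2 = y
        delta = th2 - th1
        den1 = (M1 + M2) * L1 - M2 * L1 * np.cos(delta) ** 2
        dw1 = (M2 * L1 * w1 ** 2 * np.sin(delta) * np.cos(delta)
               + M2 * G * np.sin(th2) * np.cos(delta)
               + M2 * L2 * w2 ** 2 * np.sin(delta)
               - (M1 + M2) * G * np.sin(th1)) / den1
        den2 = (L2 / L1) * den1
        dw2 = (-M2 * L2 * w2 ** 2 * np.sin(delta) * np.cos(delta)
               + (M1 + M2) * G * np.sin(th1) * np.cos(delta)
               - (M1 + M2) * L1 * w1 ** 2 * np.sin(delta)
               - (M1 + M2) * G * np.sin(th2)) / den2
        return [w1, dw1, w2, dw2]

    t_eval = np.linspace(0.0, 20.0, 2001)
    y0 = np.array([2.0, 0.0, 2.5, 0.0])           # angles (rad) and angular speeds
    y0b = y0 + np.array([1e-12, 0.0, 0.0, 0.0])   # tiny offset in the first angle

    a = solve_ivp(derivs, (0.0, 20.0), y0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
    b = solve_ivp(derivs, (0.0, 20.0), y0b, t_eval=t_eval, rtol=1e-10, atol=1e-12)

    # The separation grows roughly exponentially until it is as large as the
    # motion itself: the signature of deterministic chaos.
    sep = np.abs(a.y[0] - b.y[0])
    for k in range(0, 2001, 400):
        print(f"t = {t_eval[k]:5.1f} s   |difference in theta1| = {sep[k]:.2e} rad")

Run for longer, the separation saturates at the size of the motion itself, which is why no finite measurement precision gives long-term predictability.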
Drawing on the analogy of
weather forecasting, Professor
Mullin described how data, of
finite precision, is gathered from
various sources and used by a
computer to predict weather in
several days’ time. Typically a
prediction can be no more than a
few days because the data used
has limited precision. In further
explanation he demonstrated a
simple battery-driven toy (driven
by an eccentric motor) which
always traces a different path
when set in motion.
Professor Mullin then moved to
exciting pendulums. He demonstrated “parametric resonance”,
in which a simple pendulum,
excited to bob up and down, can
be made to swing regularly from
side to side. The frequency at
which it swings from side-to-side
is half that of the frequency of
excitation and its movement can
be represented geometrically as a
circle (known as an attractor).
If the frequency of excitation is
further reduced, the amplitude of
the side-to-side movement
becomes chaotic. In this situation
the attractor is no longer a circle
and at first glance its geometric
representation appears as a “ball
of wool”, i.e. random movements
within a confined area. However,
a plane taken through this pattern
displays structure, a feature
characteristic of chaos, but not
characteristic of noise. Hence the
attractor (known as a “strange
attractor”) has an element of
predictability. Professor Mullin
gave an audible demonstration of
the difference between chaos and
noise; noise is what is heard when
a radio is incorrectly tuned. In
contrast, chaos contains a repetitive quality.
Professor Mullin then demonstrated a computer simulation of two
excited pendulums, differing in
their starting position by 1 in 10^6.
As with the double pendulum, the
two excited pendulums began by
tracing the same path but then
diverged on entirely different
paths, again confined to a given
region.
He then asked whether these
properties have relevance outside
of pendulums. He demonstrated
a “forced buckling beam” in
which a thin steel beam, excited at
a given frequency, shakes from
side to side between two magnets. He used a laser to project
the beam’s path onto a screen and
showed that above a certain
frequency the beam’s movement
became chaotic.
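The "plane taken through this pattern" described above is what dynamicists call a Poincare section. The sketch below is a hedged illustration using a driven, damped pendulum with commonly used textbook parameter values rather than the lecture's apparatus: the motion is sampled once per forcing period, and the strobed points, when plotted, fall on a folded, structured set instead of filling the region the way noise would.

    # Assumed model and parameters: theta'' + b*theta' + sin(theta) = A*cos(omega*t)
    # with b = 0.5, A = 1.2, omega = 2/3 (a standard chaotic choice), sampled
    # once per forcing period to build a Poincare section.
    import numpy as np
    from scipy.integrate import solve_ivp

    b, A, omega = 0.5, 1.2, 2.0 / 3.0
    period = 2 * np.pi / omega
    n_strobe = 2000

    def rhs(t, y):
        theta, v = y
        return [v, -b * v - np.sin(theta) + A * np.cos(omega * t)]

    sol = solve_ivp(rhs, (0.0, n_strobe * period), [0.2, 0.0],
                    t_eval=np.arange(n_strobe) * period,
                    max_step=period / 20, rtol=1e-9, atol=1e-9)

    theta = (sol.y[0] + np.pi) % (2 * np.pi) - np.pi   # wrap angles into [-pi, pi)
    v = sol.y[1]

    # Drop the transient, then inspect (or plot) the strobe points: together
    # they trace the layered curve of a strange attractor, not a uniform blur.
    for th, vel in zip(theta[200:210], v[200:210]):
        print(f"theta = {th:+.3f} rad   dtheta/dt = {vel:+.3f} rad/s")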
Professor Mullin concluded his
talk by addressing upside-down
pendulums. His demonstration
consisted of a horizontal metal
beam fixed at one end; its other
end attached to a pendulum. The
beam was excited at its fixed end
by a motor and at high frequency
the pendulum was made to stand
upright. Professor Mullin credited
David Acheson of Jesus College,
Oxford, who first suggested the
extension of this principle to
multiple upside-down pendulums. In providing the world’s
first demonstration of four
standing upside-down pendulums Professor Mullin explained
that (in principle) any number of
pendulums could be made to
stand upside-down, given a high
enough frequency of excitation. A
piece of rope can be regarded as a
sequence of pendulums and,
given an infinite frequency and
zero amplitude, it too could be
made to stand upside-down.
Finally he demonstrated how a
wire (a length of curtain wire long
enough so that it does not
support its own weight) can be
made to stand upright when
excited at a given frequency.
Current theory suggests that the
frequency to create this effect
should be infinitely high, given
that the wire can be regarded as a
series of many pendulums. It is
not yet understood why the wire
can be made to stand upright at
relatively low frequencies.
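For a single rigid pendulum whose pivot vibrates vertically, the classical Kapitza estimate is that the inverted position becomes stable roughly when the product of drive amplitude and drive angular frequency exceeds sqrt(2gL). The snippet below is a back-of-envelope check with illustrative numbers, not the dimensions of the lecture's rig; the analysis for a chain of many linked pendulums, and for the curtain wire, is far harder, which is why the low observed frequencies remain puzzling.

    # Back-of-envelope check of the classical single-pendulum criterion for a
    # vertically vibrated pivot: stable (approximately) when a*omega > sqrt(2*g*L).
    # L and a below are assumed values for illustration only.
    import math

    g = 9.81      # m/s^2
    L = 0.15      # pendulum length, m (assumed)
    a = 0.005     # pivot vibration amplitude, m (assumed)

    omega_min = math.sqrt(2 * g * L) / a      # rad/s
    f_min = omega_min / (2 * math.pi)         # Hz

    print(f"L = {L} m, a = {a} m  ->  inverted state needs roughly "
          f"f > {f_min:.0f} Hz ({omega_min:.0f} rad/s)")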
Professor RJP Williams FRS
Inorganic Chemistry Laboratory, University of Oxford
2 December 2002
Chemical Constraints on Biological Evolution
Professor Williams introduced his
lecture by noting that life evolved
in a geological environment and
that he would be developing this
theme by examining the effect of
the coming of living organisms on
the chemistry of the environment.
Biologists have largely been
interested in macromolecules and
their interactions, for example
proteins and nucleic acids.
However this preoccupation with
dead molecules is difficult to
understand when the requirement
is to study living flow systems.
That is, the nucleic acid DNA
could not originate in the absence
of material to code. Present and
past research work has not
adequately emphasised the role of
the environment as the driving
force behind the evolution of
internal cellular mechanisms.
Cellular life began with prokaryotic organisms evolving in a
“reducing” environment, i.e. one
that is anaerobic, or lacking in
oxygen. Early life-forms reduced
oxidised carbon to such molecules
as sugars, proteins and nucleic
acids. In doing so they used the
hydrogen from gases such as H2S,
discarding sulphur as a
by-product. The hydrogen in
water was also used, and oxygen
discarded as a by-product. This
latter action was seen as a painful
mistake as the oxygen released
acted as a poison for organisms
suited to an anaerobic environment. As a result the oxygen in
the air changed the geological
nature of the earth and it is this
that drives evolution.
Is it that the only way for life
forms to survive in an oxygenated
environment was for new compartments to develop within the
cell? Thus, inside all would be
reducing and outside all would be
oxidising. Eventually this would
lead to the formation of multicellular organisms. This is driven by
processes depending on the use
of space rather than by considerations involving DNA. Dawkins’
blind watchmaker may well have
been blind, but he was also
placed in an unavoidable one-way
tunnel. The tunnel was the
environment, changed by the
excreted oxygen of the life-forms
within it.
Bioavailability of the elements
depends on such variables as their
individual solubilities and how in
turn these depend on their
chemical properties as well as on
environmental variables such as
temperature. The more soluble a
metal complex, the more available
it is to life. Sodium, magnesium,
calcium and potassium were very
common in the sea and these
were used by early life forms.
However, copper was relatively
unavailable as it had a very low
solubility in a reducing environment. Thus early life forms did not
use copper in their metabolic
processes. However as oxygen
levels increased, copper became
more and more available. At the
same time hydrogen sulphide
became increasingly oxidised (to
sulphate).
The entry and exit of inorganic
substances is controlled by a cell
mechanism, in which a cell
balances its internal environment
with equal numbers of positive
and negative charges.
As time went on, oxygen accumulated, ammonia was converted to
nitrogen and methane was
converted to carbon dioxide, and
life forms continued to adapt to a
changing environment. Living
cells began to develop internal
vesicles and organelles which
specialised in different types of
metabolic activity. Aerobic
biochemistry got under way and
additional elements became
involved in these new metabolic
processes. Cells began to come
together to form multicellular
organisms held together by
connective tissue. Copper, in
association with specific catalytic
proteins, found a role in the
formation of connective tissue. In
an oxidising environment,
available iron is present in very
low concentrations. One of the
main struggles of life is obtaining
sufficient iron to satisfy internal
oxidative processes. These
developed as oxygen levels
increased. Cells differentiated to
form internal vesicles which could
carry out oxidising chemistry
protected from the reducing
environment of the cytosol. Any
waste products formed in the
vesicles would not damage the
reducing chemistry carried out in
the cytosol. Thus cells with a
complex array of vesicles and
organelles developed in response
to changes in the external environment.
In his concluding remarks,
Professor Williams expressed his
dismay that the study of inorganic
chemistry - fundamental to the
understanding of life and evolution - has been removed from
undergraduate biology courses.
The Rt Hon Lord Robertson of Port Ellen
Secretary General of NATO
13 December 2002
A New Russian Revolution: Partnership with NATO
Part of the Edinburgh Lecture Series
“Ladies and Gentlemen, I have
just returned this week from
Moscow, where I opened a NATO-Russia conference on combating
terrorism – the second one of this
year. While I was there, I also held
talks with President Putin – the
fifth time we have met in the past
fourteen months.
What is striking about these
meetings is precisely that they
were not striking. No drama. No
fuss. No shoes being banged on
tables. Instead, pragmatic
discussions, in a friendly and
workmanlike atmosphere. In fact,
our thinking on certain issues has
grown so close that a Russian
newspaper, Izvestia, speculated
that the Russian Defence Minister
and I might share the same
speechwriter – which I assure you
is not the case.
As revolutions go, it has been a
quiet one. But it has been a
revolution nonetheless. To my
mind, the partnership between
NATO and Russia today marks the
end of a dark century for Europe –
a century which, in a very real
sense, began with the storming of
the Winter Palace in 1917, and
ended with the collapse of the
World Trade Center in September
2001.
The First World War and the
Bolshevik Revolution triggered
Russia’s mutation into the Soviet
Union. The Second World War
allowed Russia and the West to
join forces – temporarily – in the
face of a common threat, but
failed to resolve basic differences
in values and strategic philosophies.
After the war, the Iron Curtain fell
across Eastern Europe, as Winston
Churchill described so vividly. The
Cold War divided the continent,
and indeed the world, into two
massive armed camps: one
threatening to export its repressive model through intrigue or
violence; the other a group of
democracies determined to
protect their security and their
values.
The damage done to European
security during those long years
was massive. The threat of World
War III was a lens which distorted
our whole view of the world, and
allowed many of the security
challenges we face today to fester
and grow, while our energies were
diverted by the compelling task of
avoiding mutual annihilation.
Most dialogue between Russia
and the West took place at the
occasional high-pressure and
adversarial Summit meeting. And
of course, there was no question
of sharing the benefits of democracy and growing prosperity with
the countries of the Warsaw Pact –
including with the Soviet Union
and Russia herself.
The end of the Cold War opened
something of a Pandora’s box.
The fall of the Berlin Wall unleashed a flood of security
challenges that we were, frankly,
largely unprepared to face. But it
also released a great opportunity
– to unify Europe in security,
democracy and prosperity. And,
as an essential part of that
mission, to bring Russia in from
the cold, and into the European
family of nations.
Few people would have guessed,
in 1990, how integral a role NATO
would play in this process. After
all, NATO was certainly seen by
Russia as a threat, if not the
enemy. How could we possibly
envisage not only a trusting
dialogue between NATO and
Russia, but cooperation? Even
partnership? A decade ago, this
would have seemed to most
observers like Mission Impossible.
In fact, the NATO-Russia relationship did begin almost exactly 10
years ago – in NATO headquarters
in Brussels, on the evening of
December 20th 1991. And it was a
rather dramatic moment.
It took place at the first meeting
of the North Atlantic Cooperation
Council. NATO created this body,
usually called the NAC-C, almost
as soon as the Berlin Wall came
down. The NAC-C brought
together all the newly liberated
countries in Europe, together with
the Soviet Union, to sit around the
same table with NATO nations. It
was an unprecedented gathering.
It gave a first political voice to
peoples who for so long had not
had one. And it gave a first hint
of the role NATO would play, in
the coming years, in guiding Euro-Atlantic integration.
For all those reasons, that first
NAC-C meeting was full of drama
and history. But it soon got more
interesting yet.
At a certain point in the evening, a
messenger came into the room
and whispered in the ear of the
representative of the Soviet Union.
He excused himself and left the
room. A few minutes later, he
returned. He took his chair, and
asked for the microphone. He
announced that he could no
longer speak for the Soviet Union,
as the Soviet Union had, in the
past few minutes, dissolved. He
would henceforth represent only
Russia.
As you might imagine, the
meeting’s agenda was derailed.
But that moment opened up the
possibility of creating something
new in Europe. Where Russia was
no longer feared by its European
neighbours, but trusted. Where
mutual mistrust and recrimination
could be replaced by regular
dialogue and frank exchanges.
And where Russia and NATO
could cooperate in solving mutual
security challenges, rather than
simply challenging each other.
That was the beginning of the
revolution in NATO-Russia
relations. And throughout the
1990s, our practical cooperation
slowly deepened. First, in the
Balkans, where Russian soldiers
worked alongside NATO soldiers
in Bosnia to help keep the peace,
after the war came to an end in
1995.
This, alone, was an almost
unbelievable event. I still remember a photograph of a young
American NATO soldier shaking
hands with a young Russian
soldier in Sarajevo, as that mission
began. It illustrated the massive
potential for peace, if NATO and
Russia could only work together
towards that common goal.
Practical cooperation set the stage
for political relations. In 1997, we
signed the Founding Act on
relations between NATO and
Russia, and established the
Permanent Joint Council. In the
Permanent Joint Council or PJC,
Russia met with all the countries
of NATO to discuss common
security concerns, and to work
towards mutual understanding
and, where possible, cooperation.
This, too, was an historic development. For the first time, a
permanent, organic relationship
between Russia and her Western
partners was established. And
like our cooperation on the
ground, it offered the potential
for so much better cooperation in
future.
But this potential was not realised
immediately. On the contrary. Too
many Russian generals had
targeted NATO for too long to
accept that the Alliance had now
changed. For them, and for many
Russians still mired in Cold War
prejudices, NATO was still an
enemy, to be watched, and
perhaps grudgingly worked with,
but not trusted. And, to be
honest, there were some sitting
around the NATO table whose
views were a mirror image, based
on decades of mistrust.
To these people, whether on the
Russian side or in NATO, security in
Europe was still what we call a
“zero-sum” game. Any gain in
security for one country had to
mean a commensurate loss of
security for another country.
Which is why Russia protested so
bitterly against one of the most
positive developments in modern
European history: NATO’s enlargement.
To Alliance members, and to the
aspirant countries, NATO enlargement has always had one simple
purpose: to deepen and broaden
Euro-Atlantic security through
integration amongst democracies.
From our perspective, increased
stability and deepening democracy in Europe is of net benefit, even
to those countries not in the
Alliance.
But those Russians who still clung
to the “zero-sum” perspective
had a different word for enlargement: “encirclement”. Even
President Yeltsin - who played
such a key role in bringing the
Soviet era to an end - made his
opposition to enlargement very
clear. He protested bitterly. He
threatened vague “countermeasures”. And he drew
imaginary “red-lines” on the map,
designating those new democracies which, in his estimation,
Russia could never accept to join
the Alliance.
The message was a familiar one:
that Russia still viewed the West
with suspicion, and would try to
maintain a geographic buffer
zone beyond Russia’s borders.
A similarly outdated view was
demonstrated over another event
in the same year: the Kosovo
campaign. Despite our many
political declarations of partnership and shared values and
interests, the Russian leadership
still felt compelled to define itself
in opposition to the West,
regardless of what was manifestly
taking place on the ground.
What we saw as a compelling case
for military intervention in support
of humanitarian relief and
regional stability, they saw,
initially, as an attempt to extend
NATO’s geographic “sphere of
influence” – again, through the
out-dated “zero-sum” prism.
This attitude sparked the hasty
dash by a few poorly-equipped
Russian troops to seize the main
airport in Kosovo – a reckless
piece of brinkmanship in political
and military terms. And even
though it was clearly both
pointless and dangerous, it was
hailed in some circles in Russia as
a restoration of national pride.
There are more examples, but the
point is clear. Ten years after the
Cold War ended, the practical
foundations for NATO-Russia
cooperation were in place – but
the psychological foundations
were not. Our future cooperation
was a helpless hostage of Cold
War ghosts.
We needed a breakthrough. And
we got it. Two events, in particular, played a key role in taking our
relationship to a new level.
The first was Vladimir Putin
succeeding Boris Yeltsin as
President, on the first day of the
Millennium. A few weeks before
that I had met Russian Foreign
Minister Igor Ivanov for the first
time, in Istanbul, at a Summit
meeting of the Organisation for
Security and Cooperation in
Europe. He invited me to come to
Moscow in February 2000, where I
met President-elect Putin, also for
the first time.
That meeting was a real gamble
for the new President. After all,
he had only been in office a few
weeks, and one of his first
decisions was to crack the ice on
which his predecessor had put the
NATO-Russia relationship. It was
no surprise, then, that our first
meeting was cautious in tone and
in substance.
It was very bold, however, in
symbolism, considering how
difficult the previous year had
been. President Putin and I
agreed, in February 2000, to take
a “step-by-step” approach to
improving NATO-Russia relations.
What was really important was
that the show was back on the
road.
A tragic event a few months later
demonstrated the potential of our
cooperation. When the Kursk
submarine sank on August 12th,
2000, NATO immediately, that
same day, made an offer to help
try to rescue the sailors trapped
inside.
Soon after the accident, Russian
Admirals were in NATO Headquarters, working with their NATO
counterparts on potential solutions. In the end, there was no
way to save the sailors of the
Kursk. But the lesson was clear –
in times of crisis, ad-hoc cooperation wasn’t enough. We needed
more. And the importance of
making progress was understood
in Moscow as much as in Brussels.
My second meeting with President
Putin a year later, in February
2001, proved that we were on the
right path. Many of you will recall
that there was a furious international debate underway at the
time about US plans for missile
defence – and in particular,
whether these plans would
critically damage relations between Russia and the West.
My meeting with President Putin
turned what was a divisive debate
into a productive discussion. He
put forward a proposal on missile
defence that acknowledged that
we face a common threat; that
there was a military response to it;
and that we could cooperate in
addressing it.
This was already unprecedented
common ground. What was
equally significant was that he
handed that proposal to me, as
NATO Secretary General, rather
than to the United States or to the
other NATO nations. In doing so,
President Putin made it clear that
he acknowledged that NATO had
an important role to play in Euro-Atlantic security. And that he
intended to work with NATO, even
on controversial issues, rather
than trying to engage in a
counter-productive policy of
confrontation.
Two months later, in April 2001, I
met with President Bush in
Washington. And I predicted that
he and Putin would work well
together. Why, he asked?
I told him, because both came to
politics late in life. Because both
come to their capitals from
elsewhere — Bush from Texas,
Putin from St Petersburg.
Both Bush and Putin were exciting
major expectations, for change
and improvement, especially in
the economy. Both were managing big countries, with all the
challenges that this holds.
And both, in my experience, were
unlikely to accept the answer,
“But Mr. President, we always do
it this way.”
These predictions proved to be
correct.
So the first element of a fundamentally new relationship
between NATO and Russia was
already in place – a much more
pragmatic leadership in Moscow,
which saw the West as a Partner,
not a rival. But the real opportunity sprang, ironically enough,
from a real tragedy – September
11 2001.
The terrorist attacks in New York
and Washington did more than
just destroy buildings and kill
thousands of people – including,
by the way, nearly 100 Russian
citizens. They also created an
earthquake in international
relations – including relations
between NATO and Russia. They
made clear that today’s threats can
come from anywhere, and that
“spheres of influence” and other
traditional notions of geographic
security are irrelevant in the
modern world.
On September 12th, NATO invoked
its mutual defence clause for the
first time in its history. During the
Cold War, it was designed to be
invoked against a Soviet attack.
Now, it was invoked in response
to terrorism – the most vivid proof
to Russia, if any were needed, that
the Alliance truly had changed.
It also brought NATO and Russia
firmly onto the same side in the
fight against international
terrorism. It was clear, from the
moment of the attacks, that the
broadest possible coalition was
necessary to counter these
terrorists. It was also clear that
there was no more time for outdated fears. We needed a new
approach to security: cooperation
at all levels, across the full spectrum of security issues that we
actually face today.
I don’t mean to imply that last
year’s terrorist attacks led to a
fundamental change in direction
in the NATO-Russia relationship.
Many on both sides, not least
President Putin himself, had
already grasped the idea that we
must join forces if we are to
defeat terrorism, proliferation,
regional instability and the other
threats we all face today. But
September 11 made a real
breakthrough in our relations an
immediate necessity, rather than a
theoretical long-term goal.
Instead of asking, “How much cooperation can we tolerate”, we
began to ask, “How can we
achieve the full promise of
partnership — quickly”?
President Putin demonstrated
immediately that he understood
the importance of putting aside
old prejudices, and embracing
true, and immediate, cooperation.
With a heavy emphasis on
immediate: of all the leaders in
the world, President Putin was the
first to call President Bush on
September 11.
From that moment, Russia was a
staunch partner in the international response to the attacks.
Russia offered to open its airspace
to US war planes for the campaign
to topple the Taliban and rout Al-Qaida. Moscow also
demonstrated its openness by
having US and other Western
troops based in the Central Asian
Republics, an area Russia had
considered until recently to be her
exclusive area of influence.
And Moscow was willing to share
the most sensitive intelligence on
terrorism itself, and on the region
around Afghanistan – an area
they know well, through grim
experience.
This was more than just cooperation. It demonstrated a sea
change in the relationship
between Russia and the West. It
proved to NATO that President
Putin was serious about being a
true Partner in security. And it
proved to Russia that NATO, and
the West, were serious about
having Russia as a Partner in
facing new threats.
It was this breakthrough that led
to the creation, in May, of a
fundamentally new framework for
NATO-Russia cooperation. It is
called the NATO-Russia Council or
NRC. I cannot claim to be the
author of the initiative. Like all
success stories, it has many
godfathers. The Prime Ministers of
Britain, Canada and Italy, and the
US President, can all take some of
the credit. What is important is
not who initiated the NATO-Russia
Council but what it has already
achieved. The way it has done
business in its first six months
demonstrates that we truly have
achieved a revolution in NATO-Russia relations.
The seating arrangements alone
speak volumes. In the old PJC, a
cumbersome troika shared the
chair. We called it “19 plus one”.
Russia called it “19 versus one”.
In the new NATO-Russia Council,
there is no “19”, and no “1”. All
participants sit as equals, in
alphabetical order – great powers
and small powers together.
Russia sits between Spain and
Portugal, fully comfortable as one
of twenty participating nations.
We meet monthly, in NATO
Headquarters – a building that
was on the target list of every
Soviet nuclear missile commander.
And I - the Secretary General of
NATO - chair the meeting.
It is hard to exaggerate how much
of an advance this is. It proved
that Russia is now ready to take
her place as a full, equal and
trusting partner in Euro-Atlantic
security. And it shows that NATO’s
members are equally ready to take
that step.
The seismic change was vividly on
display in Pratica di Mare Airbase
near Rome on 28 May this year,
when NATO and Russia held the
first meeting of this new NATO-Russia Council, at the level of
Heads of State and Government.
In 20 days, Prime Minister Berlusconi had constructed a complete
Summit headquarters in grand
Italian style. But the real drama
came at the table itself – indeed,
by the table itself.
Here, around one table, were the
Presidents of the USA and Russia,
of France and Poland, the Chancellor of Germany, the British
Prime Minister, the Italian Prime
Minister, the Prime Ministers of
Iceland and Luxembourg and
others. Twenty of the key Euro-Atlantic leaders at one big table.
And here’s the history – they were
there not to carve up the world
like the assembled leaders at
Potsdam and at Yalta, but to unite
it. Unlike any gathering in
European or transatlantic history,
the great powers and a lot of
other like-minded countries were
launching a body to build lasting
cooperation and interaction
across a part of the world fractured and laid waste by the same
countries for centuries.
I have to say that being the
Chairman of such an assembly
was for me a moment of real
significance and of momentary
intimidation. More importantly,
that day changed the world for
ever.
We have made a quick start in
ensuring that this revolutionary
new relationship delivers substantial new security. First and
foremost, we have dramatically
deepened our cooperation in the
struggle against terrorism.
The NRC nations are completing
common assessments of specific
terrorist threats in the Euro-Atlantic Area. We are also
assessing much more closely the
terrorist threat to NATO and
Russian soldiers in their peacekeeping missions in the Balkans.
And as I mentioned, we have just
held the second NATO-Russia
conference on improving the
military role in combating terrorism. We looked at how best to
use the military’s unique assets
and capabilities to defend against
terrorist attacks, and against
attacks using weapons of mass
destruction. And we are looking
at how best to transform the
military to better address these
new threats.
Part of that transformation has to
cover purely technical or technological changes, such as buying
chemical and biological defence
kits. But the transformation must
go beyond the kit, to also change
the culture.
In Moscow, I took the opportunity
to stress to our Russian friends
the importance of proportionality
in responding to threats, and of
training the military to act also as
policeman and diplomat. I shared
with them the experience of so
many NATO countries: that a
political solution to conflict was
the only lasting solution. Blind,
brute force only turns political
opponents into future terrorists.
It was a tough message to pass in
Russia – but I could make my case,
at a high level, and be listened to,
because of the new character in
our partnership.
Of course, military reform goes
beyond preparing for terrorism. It
means fundamental adaptation:
to jettison out-dated Cold War
heavy metal armies, and to create
modern, light and flexible forces
that are trained and equipped to
meet 21st century threats.
NATO armies face this reform
challenge. Russia faces it in
spades. Which is why we are
exploring options for co-operating in this area as well – to share
best practices, and to see where
we can cooperate to make best
use of our collective resources.
Our new partnership extends to
many more areas. For example,
we are deepening our military-to-
military cooperation — including
talks about having Russian air
tankers refuelling NATO aircraft.
Imagine that idea, even just a few
years ago!
We are also laying the groundwork for future joint NATO-Russia
peacekeeping operations. We
have already agreed broad
political guidance for such future
missions. And we are discussing
holding a crisis management
exercise together in the coming
year.
We are deepening our cooperation on short range Missile
Defence, to better protect our
deployed forces against attack.
We are jointly assessing the threat
to Russia and NATO nations
posed by chemical, biological,
radiological weapons, and their
means of delivery. And we are
preparing to work together in the
event of such an attack, or indeed
in any civil emergency.
We held a joint exercise in
September, in Russia, where we
practised responding together to
a terrorist attack on a chemical
factory. This was truly a groundbreaking event. Fourteen
countries from across Europe,
including Russia, sent teams to
participate, along with the UN.
More than ten other countries
sent observers. And together, all
of these countries and international organisations practised
working together to help those
who might be injured in an attack,
control contamination, and
evacuate those at risk. This was a
truly new coalition, training
together to take on new threats.
We are also deepening our
cooperation on search and rescue
at sea. I have already mentioned
the Kursk disaster, and how it
sparked deeper cooperation
between us. Well, our Search and
Rescue Work Program already
includes Russian participation in
our exercises. And we aim to sign
a framework document on our
search and rescue at sea cooperation in the next few weeks.
I could go on, but you get the
picture. There truly has been a
revolution in NATO-Russia
relations. And to me, one of the
most vivid illustrations came from
our recent Summit in Prague - a
city once deep behind the Churchillian Iron Curtain.
In Prague, NATO invited seven
new democracies to begin
accession talks to join the Alliance. Before September 11th,
2001, Prague was foreseen by all
to be an “enlargement Summit”.
And to a great extent, it still was.
A Wall Street Journal article a few
days ago said that, by inviting
seven countries to join, “NATO
has achieved the greatest victory
in the five decades of its existence,
by finally erasing the effects of the
Ribbentrop-Molotov Pact and the
Yalta Agreement, which had
shackled Europe for half a
century.”
Three years earlier, as the previous
round of enlargement was
finalised, Russia, still furious over
the Kosovo crisis, shunned any
contact with NATO. By contrast,
two weeks ago, Russian Foreign
Minister Ivanov attended a NATO-Russia Council meeting in Prague,
on the margins of the NATO
Summit, the day after the invitations were issued. He offered a
glowing assessment, both in
public and in our closed-door
meeting, of the progress that had
been made in the NATO-Russia
Council in the past six months.
And then he hopped on Air Force
One, and rode back to Russia with
President Bush, who was warmly
received by President Putin. A
revolution indeed.
In 1917, Lenin said, “How can you
make a revolution without
executions?” And true to his call,
the Bolshevik revolution ushered
in one of the darkest eras in
modern European history. A
period in which Russia was
isolated from Europe, and during
which Europe was divided by
Russia.
That era is now finally over. And it
could not come too soon. In the
21st Century, it is simply impossible to preserve our security
against such new threats as
terrorism, the proliferation of
weapons of mass destruction, or
regional conflicts, without Russia.
In an increasingly globalised
world, we need the broadest
possible cooperation. And the
new NATO-Russia relationship has
created what has been missing for
almost a century: a strong security
bridge between Russia and her
partners in the West.
But the new NATO-Russia relationship has a benefit that is more
political than practical. It is also a
platform for Russia’s return from
the political and economic
isolation of the past century. With
her nuclear arsenal, her 11 time
zones, her 150 million citizens,
and her borders stretching from
the Caucasus through Central
Asia and the Far East, Russia’s fate
remains vital to the security of the
Euro-Atlantic community. Nothing could be of more long-term
benefit to our common security
than for Russia to take her rightful
place as a full, trusting and
trustworthy member of the Euro-Atlantic community. And we have
begun to make that vision a
reality.
Thank you.”
Professor John Lawton
Chief Executive of the Natural Environment Research Council
14 February 2003
Life on a Little Known Planet and Unsustainable Development
Joint RSE/ECRR/IoB Lecture
In this lecture, Professor Lawton
sought firstly to review what is not
yet known about diversity of life
on Earth and secondly to look at
what we do know about life’s
diversity and how we are impacting it, in the context of
unsustainability. He stressed the
undeniable conclusion that we are
not using it sustainably, with very
profound implications for our
lifestyles and economies.
Relatively little is known about the
range of species inhabiting Earth.
We know some groups, such as
birds, very well, but other groups,
such as beetles, not at all well. In
total we know of approximately
1.7 million species on the planet
but this is only a fraction of the
true picture; there could be
anywhere between six and 30
million different species, probably
around 12 million. Our view of
species diversity is often skewed
and it is worth remembering that
the vertebrates (animals, birds,
fish, amphibia and mammals)
make up only 2.7% of the total.
The vast majority comprises
insects and microscopic life-forms
of which we know very little.
Estimates of how many organisms
might inhabit Earth come from
studies looking at species diversity
in plots of rainforest. In one
particular study, several years of
work did not reveal any new bird
species. However 1% of the
butterfly species detected were
new, 80% of the beetles were
new, and 90% of nematode
species were new. Such studies
indicate that vertebrates make up
less than 1% of species diversity.
What are we doing to life on
Earth, and why does it matter? As
far as we know Earth is the only
planet in the universe with life on
it. However we are not treating it
carefully and there is an extinction
crisis underway. The fossil record
tells us that the species around
today make up between 2% and
4% of all the species which have
ever lived, i.e. 95% of species
which have ever lived are now
extinct. From the fossil record we
can also determine the expected
lifetime of a species, and therefore
calculate the underlying background extinction rate for the 600
million years for which we have a
fossil record. For the number of
species we have on Earth, the
average background rate equates
to one or two species becoming
extinct each year but human
impact alone has pushed up this
rate in the last 100 years from 1-2
to 10-100 species / year. In this
century it is likely that the rate will
increase to 1,000 or even 10,000
species / year. There have been
previous mass extinctions, but
each time resulting from natural
causes; vulcanism, meteorite
impact or climate change. This
time the cause is humans alone;
there are too many humans
making too much demand upon
Earth.
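As a rough, hedged illustration of where a background rate of this size comes from: in a steady state the extinction rate is roughly the number of species divided by the mean species lifespan. The 12 million total follows the lecture's estimate; the lifespans used below are illustrative assumptions, not figures from the lecture.

    # Rough steady-state arithmetic: background rate ~ species / mean lifespan.
    # 12 million species follows the lecture; the lifespans are assumptions.
    n_species = 12e6

    for lifespan_years in (5e6, 10e6):
        rate = n_species / lifespan_years              # extinctions per year
        print(f"mean lifespan {lifespan_years / 1e6:.0f} Myr  ->  "
              f"~{rate:.1f} extinctions per year")

    background = n_species / 10e6                      # ~1.2 per year
    for human_rate in (100, 10_000):
        print(f"{human_rate:>6} species/year is roughly "
              f"{human_rate / background:,.0f} times background")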
There are three ways of knowing
mass extinction is occurring. The
first comes from a group of
animals that we know a lot about:
birds. About 10% of the 10,000
bird species are seriously endangered and it is likely that 1,000
bird species will become extinct
unless we intervene in the next
10-50 years. Perhaps more
startling, up to 25%-40% of all
vertebrates will become extinct
over a similar time-scale. The
second is through the species-area relationship. This shows that
the bigger the island (or area of
habitat), the greater is the number
of species to be found. Therefore
if the rate of habitat reduction is
known, the species-area relationship can predict how many species
will be lost. Thirdly the international Red Data Book lists all
species under threat. Over the
next 50-100 years we stand to
lose at least 15-20% of all the
organisms on Earth, including
those we care most about. Unless
we are very careful, spectacular
animals such as the cheetah will
disappear in the lifetime of our
grandchildren.
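The species-area relationship referred to above is commonly written S = cA^z, with the exponent z often taken to be about 0.25; that value is an assumption here, not a figure from the lecture. On that form the fraction of species expected to persist when habitat shrinks is simply the remaining area fraction raised to the power z, as the sketch below shows.

    # Species-area relationship S = c * A**z: the fraction of species expected
    # to persist after habitat loss is (remaining area fraction)**z.
    # z = 0.25 is a commonly quoted value, assumed here for illustration.
    z = 0.25

    for habitat_left in (0.5, 0.25, 0.10):
        species_left = habitat_left ** z
        print(f"{habitat_left:.0%} of habitat left -> about {species_left:.0%} "
              f"of species persist ({1 - species_left:.0%} eventually lost)")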
There are five principal arguments
as to why we should be concerned
about loss of species. First, the
simple moral argument: all life
forms have a right to exist and in
our role as stewards of the planet
we should hand it on in good
shape. Second, they enrich our
lives and we should celebrate
them. Third, the utilitarian
reason: we will lose organisms
which are genuinely useful for
food and medicinal purposes.
Fourth, the multiple “canaries in
the coalmine” warning us that we
are not using the planet sustainably. Fifth, we know that
ecosystems that are “species
poor” do not appear to work as
well. Since we depend on
ecosystems’ services for water and
biomass production this represents a significant threat to
humans.
Political and economic systems
also have a role to play. Why are
we stupid enough to fish the
earth’s fisheries - a renewable
resource that could be harvested
indefinitely - to the verge of
extinction? The answer is simple:
money in the bank grows faster
than fish. Therefore under our
present system it is economically
rational (but biologically stupid) to
fish a population to extinction,
take the money and put it in the
bank. As the North Sea fishery
declined we gave fishermen
perverse subsidies to use even
bigger boats to catch even more
fish. Governments refused to
grasp the nettle and somehow
believed the laws of biology
would be suspended. When the
last cod is gone there is no
technology that will find another.
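A toy calculation makes the "money in the bank grows faster than fish" point concrete. It compares liquidating a stock today and banking the proceeds with harvesting only the stock's natural growth each year; every number in it is an illustrative assumption rather than a figure from the lecture.

    # Toy comparison behind "money in the bank grows faster than fish".
    # All numbers are assumed for illustration.
    stock_value = 100.0    # value of the whole fish stock today (arbitrary units)
    r = 0.07               # interest / discount rate per year (assumed)
    g = 0.03               # net biological growth rate of the stock (assumed)
    years = 30

    # Strategy 1: catch everything now and bank the proceeds.
    liquidate = stock_value * (1 + r) ** years

    # Strategy 2: harvest only the annual growth, banking each year's income;
    # the stock itself still exists at the end.
    sustain = sum(stock_value * g * (1 + r) ** (years - t)
                  for t in range(1, years + 1)) + stock_value

    print(f"after {years} years: liquidate now = {liquidate:.0f}, "
          f"harvest sustainably = {sustain:.0f}")
    # Because r > g here, liquidation wins financially, which is exactly the
    # perverse incentive described above.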
The following figures well
demonstrate the human impact
on Earth. Humans take 4% of
annual plant growth for food.
However to get this useful
material humans take a staggering 40% of primary production.
Most of the earth’s oceans are less
productive than the Sahara desert.
In areas where they are productive, between a third and half of
ocean primary production ends up
used by humans. This is why we
have a fisheries crisis.
Humans take 60% of all the
readily accessible fresh water. The
united impact of these three facts
is that the “ark” is roughly half
the size that it was before humans
arrived. The financial analogy is
that we are treating the Earth’s
natural capital as income. The way
we look after the planet makes
Enron look like a paradigm of
virtue. Even more concerning,
these numbers are growing
exponentially, with a doubling
time of between 30 and 50 years.
Therefore, at this rate, within the
lifetime of our grandchildren
humans will be taking 100% of
the planet’s resources. Clearly this
cannot happen and even politicians are now waking to this
alarming fact.
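A quick check of that arithmetic, using the 40% share of primary production and the 30 to 50 year doubling times quoted above:

    # Starting from ~40% of primary production and doubling every 30-50 years,
    # the 100% ceiling is reached within one or two doublings.
    current_share = 0.40

    for doubling_time in (30, 50):
        share, years = current_share, 0
        while share < 1.0:
            share *= 2
            years += doubling_time
        print(f"doubling every {doubling_time} years: passes 100% within ~{years} years")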
How can we redress the balance?
Species can be rescued, e.g. the
Californian Condor. To restore
the populations of all the 1000
endangered species in North
America would cost just $650
million, the same as three days of
war in Iraq. In addition, setting
aside just 10% of the planet’s
habitats could carry forward about
half of the planet’s species.
Furthermore, by concentrating on
biodiversity “hotspots”, significantly more than half could be
carried forward. These measures
will buy us time. We have a
window of opportunity, perhaps
25 years, to fix our accounting
systems, political systems, and
lifestyles. Otherwise sustainability
is ruled out.
The huge discrepancy between
rich and poor must be remedied,
as the world’s desperate poor will
make ever-greater demands upon
biodiversity until they are lifted
out of poverty and into a fairer
world. On a planet with environmental destruction, food and
water shortage, we will be unable
to prevent the pressure of refugees entering developed
countries.
Professor Paddy Nixon
Head of Global and Pervasive Computing Group,
University of Strathclyde
17 February 2003
The Disappearing Computer
Science & Society Lecture
In 1947 experts predicted only six
computers would be needed to
satisfy the world’s computing
needs. Yet in 2002 there were
200 million hosts on the Internet
alone. However, the market is
beginning to saturate and PC
sales are beginning to drop,
despite the fact that only 5% of
the world’s population use
computers.
In 1948 the first computer (‘The
Baby’) was built, in Manchester. It
performed 714 calculations/
second. In 1964 machines were
built capable of 3 million calculations/second. 1974 saw the
development of the mouse and
windows-based system, representing the first attempt to make
interaction between computer
and user more meaningful. In
1981 the first PC was released,
and in 1984 the Apple Macintosh
appeared; the latter machine set
the standard for interaction
between user and computer. In
2002 web-servers the size of a
10p piece were developed.
Throughout this fifty-year period
the defining trends have been:
increasing computing power,
decreasing size, and decreasing
cost.
Gordon Moore, co-founder of
Intel, predicted in
1970 that computing power
would double every 18 months.
Roughly speaking this has been
true to the present day, and
technological development in
molecular and quantum computing will probably ensure this trend
continues for the foreseeable
future. Four or five years from now
computers will have the computational power of an insect and by
2023 computers may have the
same computational power as the
human brain, and cost less than
one penny to produce. By 2050
the computational power of the
whole human race will be available on a single microprocessor.
There is a concurrent advance in
the development of communication technology; this gives rise to
Nielsen’s law, which suggests that
Internet access will increase by
50% per year.
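For a feel for the compounding involved, the short sketch below (purely illustrative arithmetic, not part of the lecture) works out the multiplication factors implied by a doubling every 18 months and by 50% growth per year:

    # Compound growth implied by the two rules of thumb above.
    def growth(factor, period_years, years):
        return factor ** (years / period_years)

    for years in (10, 20, 30):
        compute = growth(2.0, 1.5, years)   # doubling every 18 months
        access = growth(1.5, 1.0, years)    # +50% per year
        print(f"{years:2d} years: computing power x{compute:>12,.0f}   "
              f"Internet access x{access:>10,.0f}")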
In 1948 the first computer (‘The
Baby’) was built, in Manchester. It
performed 714 calculations/
second. In 1964 machines were
built capable of 3 million calculations/second. 1974 saw the
development of the mouse and
windows-based system, representing the first attempt to make
interaction between computer
and user more meaningful. In
1981 the first PC was released,
and in 1984 the Apple Macintosh
appeared; the latter machine set
the standard for interaction
between user and computer. In
2002 web-servers the size of a
10p piece were developed.
Throughout this fifty-year period
the defining trends have been:
increasing computing power,
decreasing size, and decreasing
cost.
But have users seen a corresponding increase in the speed of the
programmes they use, such as
Microsoft Word? We have not,
and the problem lies in trying to
funnel the enormous power of
today’s computers and bandwidth
through the traditional mouse
and keyboard system. We need to
change the way the user interacts
with the computer.
The next wave of development is not just in computing power but encompasses the social and interactive aspect, the networking aspect, and the physics underlying communication technology. The goal is an unobtrusive computer, rather than increasingly impressive graphics. The experiment currently being performed by Kevin Warwick, inserting chips under his skin, serves to show how the computer is disappearing in many ways and some commentators believe that in the next twenty years computers will recede into the background; so-called calm technology.

The task ahead involves building systems requiring much greater flexibility than those we currently experience. In Professor Nixon’s own area of research, temporal logic - observing what happens in a given time frame to understand how different events are grouped - one of the major hurdles is equivalence, i.e. how you determine if one thing is the same as another one.

An unobtrusive computer system could be in the background of an office environment for people of varying physical ability where many computing devices would be required to communicate in order to provide a people-centric environment. As an individual walked into this environment the computing facility would determine what information that individual should have access to and automatically relay any urgent information to them. The only way to make this service personalised to an individual is to recognise the context of that individual’s history and the environment in which they are located. The identity of the individual and their characteristics has to be observed, but at the same time remain anonymous. Issues of data ownership and data sharing present subtle challenges. In some respects they correspond to managing the identity and personal information of individuals. The instructions required to set up these rules must be in plain English, not programming language, e.g. “anyone from my company can look at these files”. Furthermore, the system should work on a global scale so that if an individual walks into an office in America, the system there should recognise the individual and react accordingly.

The research necessary to realise this vision involves advances in programming and user-interface and is being tackled by the Global Smart Spaces Initiative (GLOSS). GLOSS aims to investigate the barriers, both user-centred and technical, to the construction of flexible and powerful living and working environments for all European citizens. It aims to do this seamlessly, by integrating many services including application services, information and environments. It will do this by paying close attention to the interaction between user, space, device, and information.

In order to demonstrate the facilities that might be offered by such computing advances, Professor Nixon proceeded to show a cartoon animation of a fictional character travelling from Brussels to Paris. The character receives a message highlighting a nearby coffee shop that he might enjoy, and is able to write digital postcards upon the fabric of a café tablecloth. The kinds of devices which might relay this information could be similar to the tags attached to shop goods, which use radio frequency to start an alarm if taken illegally through the shop doors.

Professor Nixon concluded by stating his belief that this topic represented a new era in Computer Science that required radical rethinking of computer architecture, network infrastructure and the user interaction paradigms that computers currently use. He also reinforced the need to think very carefully about identity and privacy, as the implications of exposing all our data are immense. Engineers must incorporate this issue into their designs from the very start.
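Professor Nixon's example rule, "anyone from my company can look at these files", can be pictured as a very small attribute check. The sketch below is purely illustrative; the rule format, names and user record are invented and it is not GLOSS code.

    # Illustrative only: a toy access rule in the spirit of
    # "anyone from my company can look at these files".
    RULES = [
        {"resource": "project-files", "attribute": "company", "equals": "ExampleCo"},
    ]

    def may_read(user, resource):
        """Grant access if any rule for the resource matches a user attribute."""
        return any(rule["resource"] == resource and
                   user.get(rule["attribute"]) == rule["equals"]
                   for rule in RULES)

    visitor = {"name": "A. Visitor", "company": "ExampleCo"}
    print(may_read(visitor, "project-files"))  # True: same company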
Mr David Bayliss OBE FREng
Former Planning Director of London Transport
3 March 2003
Public Transport and Public Expectations:
Can the Gaps Be Bridged?
Mr Bayliss highlighted the
advantages of using public
transport, but conceded that there
were several valid reasons for
people not making greater use of
the services currently available.
These include: changes in the
places where people live and
work, increases in car availability,
shortcomings in transport pricing
and a number of attributes of
public transport systems themselves. Public transport networks
are sparser than the road network
used by cars, and buses and taxis
have to compete with cars and
lorries for congested road space,
resulting in slower and less
reliable journeys. They are also
disjointed and therefore require
interchange. The result is that
most public transport journeys
take longer and are less convenient than going by car. This
disadvantage is reinforced by
public transport prices rising
faster than motoring costs compounded by improvement in
the quality and reliability of cars.
Planning controls to limit dispersal would help but would take a
long time to have much effect.
Substantial improvements to bus
services would require a package
of traffic priorities, more modern
vehicles, safe and convenient
stops and stations, together with
better information and easier fare
and ticketing systems.
Light rail systems can boost the
appeal of public transport but are
cost-effective only along busy
travel corridors. Their effectiveness is improved when
co-ordinated with bus services
and integrated ticketing and
information services.
Metros have the greatest capacity
to serve the densest urban
corridors and whilst the opportunities for new lines are few
nationally, there is considerable
potential for expanding and
modernising the London Underground.
Similarly, improvements to
stations, modern trains, information and ticketing systems would
increase the appeal and use of
rail.
Mr. Bayliss then identified the
attributes of public transport that
people most want improved: reliability, frequency, fare structures, comfort, cleanliness and
personal safety.
Mr. Bayliss concluded his lecture
by stating that although new
technology and systems have
important roles to play in making
existing modes work better, the
scope for entirely new systems is
small.
However, new operational concepts such as demand-responsive bus/taxi systems and “mobility packages” combining car and public transport use are emerging and have the potential to close the gap between the two forms of transportation in some circumstances.

There is no “silver bullet” that can eliminate the present shortcomings of our public transport system, but there is a wide range of measures which, together with better co-ordination, can reduce the gap between expectations and reality. However, implementing these is a challenge to which we have not yet adequately responded.
Professor Kevin Warwick, IEE, FCGI
Professor of Cybernetics, University of Reading
18 March 2003
I Cyborg
The Royal Society of Edinburgh and
Royal Academy of Engineering Joint Lecture
“This lecture tells the story of the self-experimentation implant research carried out over the last few years.

The term Cyborg has been widely used in the world of science fiction, yet it aptly describes a field of research still in its infancy. The Oxford English Dictionary describes a Cyborg as ‘a person whose physical abilities are extended beyond normal human limitations by machine technology (as yet undeveloped)’. Meanwhile others see the class of Cyborgs (cybernetic organisms – part human, part machine) as including those with heart pacemakers or artificial hips, even those riding bicycles (Hayles, 1999 [3]). In this discussion however, the concept of a Cyborg is reserved for humans whose physical and/or mental abilities are extended by means of technology integral with the body.

One interesting feature of Cyborg research however is that technology developed can be considered in one of two ways. It can be seen either as potentially augmenting all humans, giving them abilities over and above those of other humans, or as helping those who have a physical or mental problem, such as a paralysis, to do things they otherwise would not be able to do. This dichotomy presents something of an ethical problem with regard to how far the research should be taken and whether it is a good thing or bad thing to ‘evolve’ humans in a technical, rather than biological, way.

Reasons for Experimenting
The primary question is why
should we want to extend human
abilities? Yet despite the success
of humans on earth, this is
something we have generally
always been trying to do. Indeed
it could be regarded as an
important part of what it means
to be human. We have obvious
physical limitations and in the last
few centuries in particular we have
employed technology to dig
tunnels, lift heavy loads, communicate instantly around the world,
accurately and rapidly repeat a
mundane task and, perhaps most
diversely of all, to enable us to fly.
But due to a finite, limited brain
size, humans also exhibit only a
small range of mental abilities.
Such a statement can though be
difficult for some humans to
accept, largely because of their
finite, limited brain size. By
comparing the human brain with
a machine (computer) brain,
however, one can witness distinctly different modes of operation
and, in some ways, advantages of
the machine in terms of its
performance.
Some of the machines’ ‘mental’
advantages have been put to
good use in recent years. For
example, a computer’s ability to
carry out millions of mathematical
calculations accurately, in the
same time it takes a human to do
one calculation inaccurately. Also,
the memory capabilities of a
networked computer are phenomenal in comparison to a human’s
memory. Surfing the web for a
host of information that the
human brain cannot hope to
retain has become commonplace.
Such mathematical and memory
abilities of machines have led to
considerable redefinitions of what
‘intelligence’ is all about and have
given rise to an ongoing controversy as to just what machine
intelligence is and what it might
be capable of (Warwick, 2001 [4]).
Technology has also been used to
improve on the human’s limited
range of senses, and to give us
some sort of picture of the world
around us that we do not have
any knowledge of in everyday life.
So now technology can give us
information about X-ray signals,
what’s going on in the infrared
spectrum or the ultraviolet and
even ultrasonic pictures of the
world around. In most cases such
signals are converted into visual
images that humans can understand.
Computers are nowadays also
employed to process data, to
‘think’, in many dimensions. One
reason for this is that human
brains have evolved to think in, at
most, three dimensions, perhaps
extending to four if time is
included as a dimension. Space
around us is, of course, not three-dimensional, as humans
categorise it, but quite simply can
be perceived in as many dimensions as one wishes. Machines
therefore have the capability of
understanding the world in a
much more complex, multidimensional, way in comparison to
humans. This multidimensionality
is an extremely powerful advantage for machine intelligence.
When one human communicates
either with a machine or with
another human, the human
brain’s relatively complex electrochemical signals are converted to
mechanical signals, sound waves
in speech or perhaps movement
with a keyboard. Realistically this
is a very slow, limited and
error-prone means of communication in comparison with direct
electronic signalling. Human
languages are, as a result, finite
coding systems that cannot
appropriately portray our
thoughts, wishes, feelings and
emotions. In particular, problems
arise due to the wide variety of
different languages and cultures
and the indirect relationships that
exist between them. Machine
communication is by comparison
tremendously powerful, partly
because it usually involves parallel
transmission, whereas human
communication is, by nature,
serial.
When witnessing the physical and
mental capabilities of machines, in
comparison with those of humans, some of which have just
been described, it is apparent that
in the physical case humans can
benefit from the technological
abilities by external implementation. In other words, we sit in cars
or on planes, but we don’t need
to become one with them. When
it comes to the mental possibilities, humans can also benefit, as
we already do in many cases, with
external cooperation. As examples, a telephone helps us
communicate or a computer
provides us with an external
memory source. But a much more
direct link up could offer us so
much more. For example, by
linking human and computer
brains together could it be
possible for us, in this Cyborg
form, to understand the world in
many dimensions? Might it also
be possible to directly tap the
mathematical and memory
capabilities of the machine
network? Why should the human
brain remember anything when a
machine brain can do it so much
better? What are the possibilities
for feeding other (non-human)
sensory information directly in?
What will a human brain make of
it? And perhaps most pertinent
of all, by linking the human brain
directly with a computer might it
be possible to communicate
directly person to machine and
person to person, purely by
electronic signals – a phenomenon
that could be regarded as thought
communication?
All of these questions, each one
of which is valid in its own way,
provide a powerful driving force
for scientific investigation,
especially as the technology is
now becoming available to enable
such studies. It is a challenge that
perhaps provides the ultimate
question for human scientists.
Can we technologically evolve
humans into a post-human,
Cyborg, state?
The 1998 Experiment
By the mid to late 1990s numerous science fiction stories had
been written about the possibilities of implanting technology into
humans to extend their capabilities. But also at this time several
eminent scientists started to
consider what might be achievable now that appropriate
technology had become available.
As an example, in 1997 Peter
Cochrane, who was then Head of
British Telecom’s Research Laboratories, wrote “Just a small piece of
silicon under the skin is all it
would take for us to enjoy the
freedom of no cards, passports or
keys. Put your hand out to the car
door, computer terminal, the food
you wish to purchase, and you
would be dealt with efficiently.
Think about it: total freedom; no
more plastic” (Cochrane, 1997 [1]).
Despite the predictions of such scientists, little or nothing had, perhaps surprisingly, been done with research in this direction. In particular no actual scientific tests or trials had been carried out by that time.

As a first step, on 24 August 1998 a silicon chip transponder was surgically implanted in my upper left arm. With this in place the main computer in the Cybernetics Building at Reading University was able to monitor my movements. The transponder, being approximately 2.5 cm long and encapsulated in glass, was in fact a Radio Frequency Identification Device. At various doorways in the building, large coils of wire within the doorframe provided a low power, radio frequency signal, which energised the small coil within the transponder. This in turn provided the current necessary for the transponder to transmit a uniquely coded signal, such that the computer could identify me. In this way signals were transmitted between my body and the computer – the reverse transmission also being possible.

In order to demonstrate the capabilities of an individual with a transponder implant, the door to my laboratory opened as I approached, the computer was aware of exactly what time I arrived at certain rooms and when I left, the corridor light came on automatically and a voice box in the entrance foyer of the cybernetics building welcomed my arrival each morning with “Hello Professor Warwick”. As far as we were concerned the experiment was successful, and hence the implant was removed 9 days after its insertion.

One reason for carrying out the experiment was to take a look at some of the ‘Big Brother’ tracking and monitoring issues. In fact, as a one off test, it was difficult for me to assess this. Personally I was quite happy with the implant in place, after all, doors were being opened and lights came on. It is therefore difficult to conclude anything with regard to the ‘Big Brother’ issues. If I did have to make some statement however it would be that if we feel we are gaining from more monitoring then probably we would go ahead with it all, i.e. we would gladly move into a ‘Big Brother’ world.
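The transponder system described above amounts to matching a uniquely coded ID against a list of known wearers and triggering building actions. A minimal sketch of that idea follows; the tag IDs and actions are invented and this is not the Reading University software.

    # Toy sketch of ID-based building automation, in the spirit of the
    # 1998 transponder experiment; all IDs and actions are invented.
    KNOWN_TAGS = {"0xA1B2C3": "Professor Warwick"}

    def on_tag_read(tag_id, doorway):
        person = KNOWN_TAGS.get(tag_id)
        if person is None:
            return f"{doorway}: unknown tag, door stays closed"
        # Log the time, open the door and greet the wearer.
        return f"{doorway}: open door, switch on lights, greet {person}"

    print(on_tag_read("0xA1B2C3", "cybernetics foyer"))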
One surprise was that mentally I
regarded the implant as being
part of my body. Subsequently, I
discovered that this feeling is
shared by those who have artificial
hips, heart pacemakers and
transplanted organs. However it
was clear that the implant only
had a limited functional use. The
signals it transmitted were not
affected by what was going on in
my body and any signals sent
from the computer to the implant
did not affect what was going on
in my body in any way. To achieve
anything along those lines we
needed something a lot more
sophisticated. Hence after
concluding the 1998 tests we
immediately set to work on a new
implant experiment.
The 2002 Experiment
On 14 March 2002, at the
Radcliffe Infirmary, Oxford, an
array of one hundred silicon
needle electrodes was surgically
implanted into the median nerve
fibres of my left arm. The array
itself measured 4mm x 4mm with
each of the one hundred electrodes being 1.5 mm in length.
The median nerve fascicle was
estimated to be approximately
4mm in diameter, hence the
electrodes penetrated well into
the fascicle.
A first incision was made centrally
over the median nerve at the wrist
and this extended to 4 cm
proximally. A second incision was
made 16 cm proximal to the wrist,
this incision itself extending
proximally for 2 cm. By means of
a tunnelling procedure, the two
incisions were connected, ultimately by means of a run of open
tubing. The array, with attached
wires, was then fed down the
tubing from the incision nearest
the elbow to that by the wrist.
Once the array and wires had
been successfully fed down the
tubing, the tubing was removed,
leaving the array sitting on top of
the exposed median nerve at the
point of the first (4 cm) incision.
The wire bundle then ran up the
inside of my arm to the second
incision, at which point it linked
to an electrical terminal pad which
remained external to my arm. The
array was then pneumatically
inserted into the radial side of the
median nerve under microscopic
control, the result being that the
electrodes penetrated well into
the fascicle.
With the array in position, acting
as a neural interface, it was
possible to transmit neural signals
directly from the peripheral
nervous system to a computer,
either by means of a hard wire
connection to the terminal pad or
through a radio transmitter
attached to the pad. It was also
possible to stimulate the nervous
system, via the same route,
sending current signals from the
computer to the array in order to
bring about artificial sensations
(Warwick et al., 2003 [6]). By this
means a variety of external devices
could be successfully operated
from neural signals and feedback
from such devices could be
employed to stimulate the
nervous system (Gasson et al., 2002 [2]).
The project was conducted in
association with the National
Spinal Injuries Centre at Stoke
Mandeville Hospital, Aylesbury.
One key aim was to see if the type
of implant used could be helpful
in allowing those with spinal
injuries, either to bring about
movements otherwise impossible
or at least to control technology,
which would, as a result, bring
about a considerable improvement in lifestyle. In an extreme
case the aim would be to implant
the same device directly into the
brain of a severely paralysed
individual to enable them to
control their local environment, to
some extent, by means of neural
signals – in popular terminology
to perhaps switch on lights or
drive their car just by thinking
about it. Our experiment of 2002
was therefore a first step in this
direction, and in that sense
provided an assessment of the
technology.
The electrodes allowed neural
signals to be detected from the
small collection of axons around
each electrode. As the majority of
signals of interest, e.g. motor
neural signals, occurred at frequencies below 3.5 kHz, low-pass
filters were used to remove the
effects of high-frequency extraneous noise. In this way distinct
motor neural signals could be
generated quite simply by making
controlled finger movements.
These signals were transmitted
immediately to the computer,
from where they could be employed to operate a variety of
technological implements.
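The filtering step just described can be sketched with standard signal-processing tools. The fragment below is a generic illustration only: a Butterworth low-pass filter with a 3.5 kHz cut-off applied to synthetic data, with the 25 kHz sampling rate and filter order assumed rather than taken from the experiment.

    # Generic illustration of low-pass filtering below 3.5 kHz;
    # the sampling rate and filter order are assumptions, and the
    # "recording" here is just synthetic noise.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 25_000                       # assumed sampling rate, Hz
    b, a = butter(4, 3_500, btype="low", fs=fs)

    raw = np.random.randn(fs)         # one second of stand-in data
    motor_band = filtfilt(b, a, raw)  # keep components below ~3.5 kHz
    print(motor_band.shape)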
In experiments to ascertain
suitable voltage/current relationships to stimulate the nervous
system, it was found that currents
below 80 µA had, in the first
instance, little perceivable effect.
Unfortunately, such results are not
fixed in time, due to the human
brain’s ability to firstly process out
initially unrecognised signals and
subsequently to gradually recognise stimulation signals more
readily as it adapts to the signals’
input. In order to realise this
current, voltages of 40 to 50 volts
were applied to the array electrodes. The exact voltage
depended on the electrical
resistance met by each individual
electrode, which, due to the
variability of the human body, was
not strictly the same from day to
day.
It was further found with stimulation experimentation that currents
above 100 µA had little extra
effect, the stimulation switching
mechanisms in the median nerve
fascicle exhibiting a non-linear,
thresholding characteristic. The
current was, in each case, applied
as a bi-phasic signal with 100 µs inter-signal break periods.
This signal waveform in fact
closely simulates the first harmonic of the motor neural signals
recorded.
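The stimulation waveform described, a bi-phasic current pulse with a 100 µs break between phases, can be sketched numerically. Only the 100 µs gap and the 80 to 100 µA operating range come from the text; the phase width, time step and exact amplitude below are assumptions for illustration.

    # Sketch of a bi-phasic, charge-balanced current pulse with a 100 us
    # gap between phases; phase width and sampling step are assumptions.
    import numpy as np

    dt = 10e-6            # time step, 10 us (assumed)
    phase = 200e-6        # duration of each phase, 200 us (assumed)
    gap = 100e-6          # inter-phase break from the text
    amplitude = 90e-6     # 90 uA, inside the 80-100 uA range discussed

    n_phase = int(phase / dt)
    n_gap = int(gap / dt)
    pulse = np.concatenate([
        np.full(n_phase, +amplitude),   # first phase
        np.zeros(n_gap),                # 100 us break
        np.full(n_phase, -amplitude),   # balancing opposite phase
    ])
    print(f"net charge: {pulse.sum() * dt:+.1e} C")  # ~0: charge balanced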
In the first stimulation tests, whilst
wearing a blindfold, a mean
correct identification of 70% was
achieved. In simple terms this
indicates that, without prior
warning, I could successfully
detect when a signal had been
injected, and when not, 7 times
out of 10 on average. But this
figure is somewhat misleading as
it would usually take a few sets of
tests to get my brain ‘into the
mood’ for an experimentation
session. Subsequently, after
about an hour of inputting
signals, my brain would appear to
‘get fed up’ and results would tail
off. Hence experimental sessions
usually lasted for an hour at most
with about one hour for alternative activities before the next
session commenced. Results from
the middle time period of a
session were frequently a lot
higher than the 70% average.
Towards the end of the entire
2002 implant experiment, which
concluded with its extraction on
18th June 2002, a mean perception rate of stimulation of over
95% was being achieved. Given
the nature of the tests being
carried out, as described in the
previous paragraph, what this in
reality meant was that, to all
intents and purposes, the recognition of stimulation was, by this
time, usually 100%. All sorts of
side effects were likely to disrupt a
pure 100% return though,
ranging from phantom signals, to
local mobile phone texting to, in
extreme cases, potential pickup
from local radio stations.
The applications carried out were
quite wide ranging (Gasson et al., 2002 [2]; Warwick, 2002 [5]) and
included the bi-directional control
of an articulated hand. The aim of
the hand, known as the SNAVE
hand, is to mimic the control
mechanisms apparent in a human
hand. Sensors in the fingertips
allow for the grip shape to be
adapted as well as for the applied
force to be modified as necessary.
In this way tension applied to an
object can be adjusted to avoid
slippage or to apply a force
appropriate to the object being
gripped.
In tests, during which I wore a
blindfold, the articulated hand’s
movements were controlled
directly from signals taken from
the implanted array, i.e. my motor
neural signals. Further to this,
sensory data was fed back via the
implant and the grip force was
recorded. The object of the
exercise was for me, without any
visual stimulus, to apply the
lightest touch to an object, just
sufficient for a very light grip. As
more force was applied to an
object, so the amount of neural
stimulation was increased. Over
the course of a two-week period, I
learnt to judge, to a very fine
detail, a force just sufficient to
grip an object.
On 20 May 2002 I visited Columbia University, New York City, and
an Internet link was set up
between the implant, in my arm in
New York, and the SNAVE hand,
which was still back in Reading
University in the UK. Signals from
the neural implant in the USA
were transmitted across the
Internet to control the remote
hand. Coupled with this, with
myself wearing a blindfold,
feedback information was sent
from the UK to the implant to
successfully stimulate my nervous
system in a series of trials. A
100% signal recognition rate was
achieved and the SNAVE hand was
controlled adequately despite the
apparent delay in signal transmission.
Data taken from the neural
implant was directly employed to
control the movement of an
electric wheelchair, by means of a
simple sequential state machine.
Neural signals were used to halt
the machine at a point related to
the chosen direction of travel –
forwards, backwards, left, and
right. In the first instance,
experiments involved selectively
processing signals from several of
the implant electrodes over time,
in order to realise direction
control.
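One way to picture the ‘simple sequential state machine’ is a controller that cycles through the four directions while a detected neural pulse halts the cycle and selects whichever option is currently offered. The sketch below is a schematic reading of that description, with invented timing and pulse detection.

    # Schematic sketch of sequential direction selection: the controller
    # cycles through options and a neural pulse selects the one currently
    # offered. Timing and the pulse source are invented for illustration.
    DIRECTIONS = ["forwards", "right", "backwards", "left"]

    def select_direction(neural_pulses):
        """neural_pulses: iterable of booleans, one per offered option."""
        for step, pulse_detected in enumerate(neural_pulses):
            offered = DIRECTIONS[step % len(DIRECTIONS)]
            if pulse_detected:
                return offered          # halt the cycle on this option
        return "stop"                   # no pulse seen: remain stationary

    print(select_direction([False, False, True]))  # -> "backwards"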
With only a small amount of
learning time, (about one hour),
reasonable drive control of the
wheelchair was achieved. For this
task however, a short-range digital
radio link was established between the implant and the
wheelchair’s driver-control
mechanism. The radio transmitter/receiver unit was worn on my
lower left arm, being housed in a
lightweight gauntlet. Extensive
trials were subsequently carried
out around a fairly cluttered
outdoor environment, with
considerable success.
Another application was the use
of neural stimulation to feed in
extra sensory input.
Two ultrasonic sensors were
positioned on the peak of a
baseball cap. The output from
these sensors was fed down to
the gauntlet, to bring about direct
neural stimulation. When an
object was positioned adjacent to
the sensors, the rate of stimulation was high. As the distance
between the object and the
sensors increased, the rate of
stimulation was reduced in a
linear fashion with regard to
distance. In this way I was able to
obtain a highly accurate ultrasonic
sense of distance.
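The ultrasonic sense described above maps distance linearly onto stimulation rate: nearby objects give rapid pulses, distant ones slow pulses. A minimal sketch of that mapping follows, with the maximum range and pulse-rate limits made up for illustration.

    # Sketch of the linear distance-to-stimulation-rate mapping; the
    # maximum range and the pulse-rate limits are made-up values.
    MAX_RANGE_M = 3.0      # beyond this, no stimulation (assumed)
    MAX_RATE_HZ = 50.0     # object touching the sensors (assumed)
    MIN_RATE_HZ = 2.0      # object at the edge of range (assumed)

    def stimulation_rate(distance_m):
        if distance_m >= MAX_RANGE_M:
            return 0.0
        fraction = 1.0 - distance_m / MAX_RANGE_M   # 1 when close, 0 when far
        return MIN_RATE_HZ + fraction * (MAX_RATE_HZ - MIN_RATE_HZ)

    for d in (0.1, 1.0, 2.5):
        print(f"{d} m -> {stimulation_rate(d):.1f} pulses/s")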
Tests were carried out in a normal
laboratory environment and, with
a blindfold on I was able to readily
navigate around objects in the
laboratory. My personal, albeit
one-off, experience was that my
brain adapted very quickly, within
a matter of minutes, to the new
sensory information it was
receiving. The pulses of current
being witnessed were clearly
directly linked to the distance of a
nearby object. Furthermore, when
an object was rapidly brought into
my ultrasonic ‘line of sight’ an
‘automatic’ recoil type response
was witnessed, causing my body
to back away from what could
have been a dangerous situation.
The final experiment of scientific
note involved the assistance of my
wife, Irena. She had two electrodes inserted into her median
nerve in, roughly speaking, the
same location as my own implant,
a process referred to as microneurography. Via one of the
electrodes in particular, motor
neural signal responses could be
witnessed. The output from the
electrodes was then linked directly
to a computer. In tests, signals
generated by my wife’s nervous
system were transmitted through
the computer in order to stimulate
my own nervous system, with the
process also being initiated in the
reverse direction. Effectively we
had brought about a direct
electrical connection between the
nervous systems of two individuals.
We then employed this link to
send motor neural signals directly
from person to person. So if Irena
generated three such signals, I
witnessed three signal stimulations on my own nervous system
and vice versa. In this way we had
successfully achieved a simple
radio telegraphic signalling
system between our nervous
systems. Clearly, with implants
positioned not in the peripheral
nervous system but directly in the
motor neural brain region, the
same type of signalling could be
regarded as the first, albeit
rudimentary, steps in thought
communication.
Conclusions So Far
The range of applications carried
out with the 2002 implant, a full
description of which is given in
Warwick, 2002 [5], gives rise to a
number of implications. With
implants subsequently positioned
in the motor neural brain region it
means we can look forward to a
variety of technological control
schemes purely initiated by
thought. For those who are
paralysed this should open up a
new world, with them being able
to switch on lights, make the
coffee and even drive a car – just
by thinking. Extra sensory input,
such as the ultrasonics employed
already, could also provide an
alternative sense for those who
are blind.
Issues of infection and rejection
were also high on the agenda
during the experimental period. It
can be reported that at no time
was any sign of infection witnessed. As regards rejection of
the implant however, results are
perhaps far more encouraging
than could have initially been
hoped for. When the implant was
removed, 96 days after implantation, no signs of rejection were
observed. Indeed fibrous scar
tissue had grown around the
implant itself, firmly pulling it
towards the median nerve bundle.
It appeared that the implant had
neither lifted nor tilted from the
nerve trunk and the electrodes
were still embedded.
One negative aspect to the trial was the gradual loss of electrodes, most likely due to mechanical wire breakdown at the point of exit from my arm. By the end of the 96-day study only three of the electrodes remained functional, all others having become open-circuit. Post-extraction examination indicated that the electrodes themselves appeared to be still intact and serviceable. However, the gradual decline in the number of channels still functioning was one of the main reasons that the experiment was brought to an end. Clearly, for long-term implantation, the mechanical design aspects will need to be looked at in detail.

Our research in this area has now been clearly refocused towards a potential brain implant, possibly in the motor neural area. However many decisions need to be taken in the meantime as to the exact positioning of implanted electrodes, the number and type of electrodes to be implanted and the extent of signals it is wished to investigate. High on the list of experiments to be carried out though are a series of tests involving thought communication. Necessarily this will involve the implantation of more than one individual other than myself, which may present ethical difficulties in attempting to bring it about.

The whole programme presents something of an ethical dilemma however. Very few would argue against the development of implants to help those who are paralysed to control their environment, including some aspects of their own bodily functions. Alternative senses for those who are blind or deaf would also be seen by most to be a good cause. But the use of such technology to upgrade humans, turning them into Cyborgs, presents a much more difficult problem. Who gets an implant and who doesn’t? Who controls their use? Indeed, should humans be allowed to upgrade their capabilities and become super humans? Humans themselves now have the potential to evolve their own destiny. It will be interesting how quickly and easily this will be brought about. I, for one, will be at the front of the queue”
References
1. P. Cochrane, ‘Tips for the Time Traveller’, Orion Business Books, 1997.
2. M. Gasson, B. Hutt, I. Goodhew, P. Kyberd and K. Warwick, ‘Bi-directional Human Machine Interface via Direct Neural Connection’, Proc. IEEE International Workshop on Robot and Human Interactive Communication, Berlin, pp. 265-270, Sept. 2002.
3. N. K. Hayles, ‘How We Became Posthuman’, University of Chicago Press, 1999.
4. K. Warwick, ‘QI: The Quest for Intelligence’, Piatkus, 2001.
5. K. Warwick, ‘I, Cyborg’, Century, 2002.
6. K. Warwick, M. Gasson, B. Hutt, I. Goodhew, P. Kyberd, B. Andrews, P. Teddy and A. Shad, ‘The Application of Implant Technology for Cybernetic Systems’, Archives of Neurology, to appear, 2003.
A full colour report of Professor Warwick’s lecture has already been published by the Society. ISBN No 0 902 198 68 8.
Professor C Doherty, FRS
The University of Melbourne, Australia
9 April 03
at Moredun Research Institute
Cell Mediated Immunity in Virus Infections
Joint RSE/SABRI Lecture
Speaker’s Abstract
The biology and role of the CD8+
“killer” T cell response was
discussed in the context of
recovery from virus infections.
The nature and durability of immune memory was considered in the context of viral vaccines, together with the limited protection conferred by the T cell recall response.
Dr Jenny Wormald
St Hilda’s College, Oxford
24 March 2003
O Brave New World?
The Union of England and Scotland in 1603
Joint Royal Society of Edinburgh and
British Academy Lecture
24 March 1603 witnessed a
stunning event: James VI became
James VI and I. So the British Isles
were at last united, under a king
with the wrong nationality, the
wrong accent, the wrong experience of kingship. Thus the English
saw it.
The Scots rejoiced - until they realised the level of English hostility to union, and began to worry about neglect. So those people who inhabited the ‘brave new world’ were timorous rather than courageous; and opportunities to make ‘Britain’ a major European power were missed. Who the leading players were, how they coped with the unpalatable challenge created in 1603, and why the ramshackle union survived, were the themes of this lecture.
Professor Angus MacDonald
Department of Actuarial Mathematics and Statistics
Heriot-Watt University
Monday 12 May 2003
Genetics and Insurance: Can They Live Together?
Advances in human genetics seem
to cause excitement and fear in
equal measure: new understanding of diseases and new
treatments, even gene therapy,
but also GM crops, cloning, and
the possibility of creating a new
‘genetic underclass’. These are
people who would be turned
away by insurance companies, or
charged unaffordable premiums,
because their genes would reveal
whether or not they would die
prematurely, or require expensive
medical treatment. Any kind of
‘underclass’ is a bad outcome.
On the other hand, as long as the NHS continues to provide universal health care, the decision to buy life, health or other kinds of insurance is made voluntarily, or largely so. An insurer has to ask, why does any particular applicant want to buy insurance? Is it genuine insurance against unforeseen events, or is that person in possession of information that suggests a greatly elevated risk, such as a diagnosis of cancer? The NHS would not work if people could opt out of paying taxes to fund it until they felt the need, and equally, private insurance does not work if it can be obtained after the event being insured against has become too likely to happen.

Insurance can cope quite well with everyday risks, however. The cost of life insurance may depend on someone’s age, sex, smoking habits and general health, but until the signs are so bad that the chance of premature death is excessive, this degree of variation neither creates any obvious ‘underclass’ nor leaves the insurance industry exposed to hidden information. In fact if social policy were to override commercial freedom, some of these factors could quite well be ignored: information that implies different insurance risks is no great threat unless it is so strong that it changes people’s behaviour. So the question is: will genetic testing, in future, reveal risks of illness and premature death much more extreme than knowledge of age, sex, smoking habits and general health?
The answer in most cases is likely
to be ‘no’. Many of the great
advances in future will be into the
genetic component of the major
killers like heart disease and most
cancers. Most likely, they will
uncover immensely complicated
networks of interacting gene
variants, environments and
lifestyles, within which the genetic
contribution will be hard to
isolate, and even harder to
measure. And, any important ones
that are identified ought to lead
to better health, which is hardly
an insurance problem.
However, that leaves aside those rare disorders where a defect in a single gene really does signal a very high chance of premature illness or death. These were discovered long before genetic testing became possible, because they were exactly the diseases that were seen to ‘run in families’, and in fact insurers have taken account of such ‘family histories’ for a very long time. To the extent that an ‘underclass’ exists, it is not new; it is just that it only recently acquired the ‘genetic’ label, and all the attention that that brings.

Actuarial models are mathematical models of the progression of diseases and the resulting mortality, and other aspects of a person’s ‘life history’ can be included too, such as when and why they choose to buy insurance. They allow us to quantify the costs of genetic information to individuals, in terms of possibly higher insurance premiums, and to insurers, in terms of being unaware of information about the true risk of illness and premature death. Broadly, they lead us to the conclusion that if insurers would agree to ignore the results of genetic tests (as they do just now, except for very large policies) it would have hardly any noticeable effect. Research into common illnesses is unlikely to find lots of clear-cut genetic risks to compete with smoking, poor diet and lack of exercise, and the single-gene disorders are rare enough that a mature insurance market could absorb any extra costs, which would be very small. So, such models show that the answer to our main question, most of the time, is ‘yes, genetics and insurance can live together’. If this should be a surprise, it is perhaps because arguments that proceed purely from philosophical considerations (abhorring discrimination) or purely from commercial considerations (abhorring interference in the free market) make it less obvious, rather than more obvious, where to find the pragmatic ground upon which they might meet.
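The point that rare single-gene disorders could be absorbed by a mature market can be illustrated with a deliberately toy calculation: pool a small high-risk group with standard lives and compare the break-even premium with and without that group. The probabilities and amounts below are invented and are not from the lecture.

    # Toy illustration (invented numbers): effect on a pooled one-year
    # premium of a small high-risk group that the insurer cannot identify.
    SUM_ASSURED = 100_000        # payout on death within the year
    q_standard = 0.001           # assumed one-year death probability
    q_high_risk = 0.02           # assumed probability for a rare disorder
    high_risk_fraction = 0.001   # 1 in 1,000 policyholders

    def break_even_premium(fraction_high):
        expected_q = (1 - fraction_high) * q_standard + fraction_high * q_high_risk
        return expected_q * SUM_ASSURED

    print(f"standard-only premium:            {break_even_premium(0.0):.2f}")
    print(f"premium with high-risk pooled in: {break_even_premium(high_risk_fraction):.2f}")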
However, does this pragmatism
solve a problem, or create a bigger
one? What about the person with
a non-genetic impairment that
means they might be excluded
from insurance? How should we
answer their question?
Professor Chris Stringer
Human Origins Group, The Natural History Museum, London
Monday 9 June 2003
The Fate of the Neanderthals
Chris Stringer holds an Individual
Merit Promotion in the Palaeontology Department of The Natural
History Museum, London, where
he has worked since 1973, and is
also Visiting Professor in the
Department of Geography at
Royal Holloway, University of
London. He is currently Head of
the Human Origins Programme at
the Natural History Museum and
also Director of the Leverhulme-funded Ancient Human
Occupation of Britain project. He
began his research on the relationship of Neanderthals and early
modern humans in Europe, but is
now best known for his work on
the “Out of Africa” theory
concerning the global development of Homo sapiens. This has
involved collaborations with
archaeologists, dating specialists
and geneticists in their attempts
at reconstructing the evolution
and dispersal of modern humans.
He has directed or co-directed
excavations at sites in England,
Wales, Turkey and Gibraltar, and is
now collaborating in fieldwork in
Morocco to find further evidence
of early human occupation there.
Chris has published over 200
scientific papers and has edited or
co-authored several books
including “In Search of the
Neanderthals” with Clive Gamble
(1993), and “African Exodus”
with Robin McKie (1997). Over its
5-year span, the ambitious
Ancient Human Occupation of
Britain project will reconstruct a
detailed history of when Britain
was occupied by early humans,
the nature of these early inhabitants, and what factors controlled
their presence or absence.
Professor Whiten then invited
Professor Stringer to deliver his
lecture entitled “The Fate of the
Neanderthals”.
The Neanderthals evolved in
Europe over at least 200,000 years
of the Pleistocene. But about
thirty five thousand years ago,
they were joined by early modern
humans, known as Cro-Magnons.
This period was also marked by
the major technological and
behavioural changes of the Upper
Palaeolithic (Upper Old Stone
Age), apparently coinciding with
the arrival of modern people. The
Neanderthals disappeared soon
afterwards, but the factors behind
their demise are still fiercely
debated. While some workers
argue that incoming early modern
populations genetically absorbed
them, other data suggest that
they became extinct. Explanations
for their extinction have ranged
from suggestions of disease or
warfare, through to economic
competition from early modern
humans, but most of these
hypotheses imply Neanderthal
adaptive inferiority.
More recently, with improved archaeological, dating and environmental evidence, it has been possible to examine this time period in greater detail. This has led to new ideas and a greater emphasis on palaeoclimatic or palaeoecological factors in Neanderthal extinction, as well as a recognition that Neanderthals apparently shared many elements of “modern” human behaviour. Increased knowledge of the vagaries of Europe’s climate over the past 100,000 years has been particularly influential. Cores from the Greenland icecap, from the floor of the North Atlantic, and from lakebeds in continental Europe, reveal remarkable, rapid, short-term oscillations in temperatures. These show how severe the effects could have been on both Neanderthals and Cro-Magnons in reducing environmental capacity to support populations of either type.

One recent view holds that the Neanderthals went extinct because they could not cope with the increasing open country environments of Europe around 30,000 years ago. The Cro-Magnons, who in this model were better adapted to the changing conditions, then simply colonised the vacant habitats. An alternative view is that extinction probably stemmed from various factors, including climatic instability and resource competition from Cro-Magnons. In particular, modelled data for the effect of millennial-scale climatic oscillations on the Neanderthals suggest that cumulative climatic stress could have played an important part in their extinction. Overall there was probably no single universal cause of Neanderthal extinction, which actually took place across western Eurasia over many millennia. But in Western Europe, increasing environmental instability probably both seriously reduced Neanderthal numbers and gave selective advantage to early modern populations with greater technological and social support for survival.
Professor Neil Hood CBE FRSE
Professor of Business Policy, University of Strathclyde,
Monday 1 September 2003
Goals, Greed and Governance
This lecture was set in the context
of a number of widely published
scandals that had emerged in the
business world over recent years.
These events in themselves,
together with a number of
important trends such as globalisation, had served to heighten
reputational risk and focus
attention on both public and
private morality. In total, they
raised questions as to whether
there were fundamental behavioural problems at the heart of the
economic system. In addressing
this subject, it was noted that it
had been of interest to some of
the early Fellows of the Society.
Both Adam Smith and David
Hume testified to the powerful
influence of avarice, yet alerted
their readers to the need to
govern it.
Professor Hood acknowledged that in the contemporary environment there was much interest in this topic – not least because of the negative public sentiment about business that some of the scandals had served to fuel. While concerns about the governance of business are not new, in each of the dimensions of goals, greed and governance there were new developments in recent years. In the area of goals, there is more pressure to meet short-term performance measures; greater stakeholder interest; more complexity in both business models and through the diversity of relationships and so on. The net effect is that there are more (and potentially more conflicting) goals to be achieved – the pursuit of which can put pressure on governance structures. These goals are subject to ever increasing forensic examination from shareholders. As regards greed, there is a perception that the interests of executives and shareholders are not always well reconciled – especially in the area of rewards. This is reflected in scepticism about the relative returns of senior executives and other stakeholders. Further, although implicit in concepts such as entrepreneurship, the role of personal avarice as an economic driver is less well understood. The challenge remains how this basic human motivation can be allowed to flourish within acceptable limits, and with due attention to both ethics and values. Finally, on the matter of governance, many changes are evident, including the adoption of different types of
voluntary and mandatory codes of
conduct. So extensive have been
these developments in some
countries that there are concerns
about the costs of compliance, the
role of self-regulation, and the
limits to governance. This in turn
leads some business interests to
view governance structures and
regulation as ever more intrusive
and capable of limiting corporate
development. On the other hand,
there are grounds for arguing that
governance alone will never
resolve the tensions between
goals and greed – not least
because of the low levels of trust
that prevail in some situations.
The final section of the lecture examined alternative behavioural outcomes to resolve these tensions. Most business leaders had approaches to goals, greed and governance that espoused enlightened self-interest. It was acknowledged that there were high costs to business of not being able to address these issues and resolve tensions between them. This in turn called for an ever more proactive business stance of reviewing goals and correcting perceptions of greed and excess. It was concluded that governance though essential was not enough, and that the responsibility of business included that of constantly reviewing its system of values.
Professor John F Smyth
Director, Cancer Research Centre, University of Edinburgh
6 October 2003
How Cancer Chemotherapy Works
Professor Smyth began by stating
that chemotherapy has been used
for the treatment of cancer for
over 50 years and is capable of
producing cures in some of the
rarer diseases and palliation for
many of the commoner forms of
malignancy. From the early
excitement of curing childhood
leukaemia, Hodgkin’s disease and
testicular cancer came expectation
of similar success in breast, lung
and colorectal cancer – but this
has not been realised. Disappointment has been tempered
however by recognising the value
of slowing the advance of cancer,
resulting in extension of good
quality life – the purpose of most
medical prescribing. For example
in breast cancer a recent 20-year
update has shown the persistent
advantage of chemotherapy
administered after surgery in more
than doubling survival from 22%
to 47%. In patients with colorectal cancer a recent analysis of
three separate trials including over
1500 patients has shown that
chemotherapy reduced mortality
by 22%. The enormous effort
expended on clinical research has
been matched by intensive
laboratory research to understand
how and why chemotherapy
works. Our present knowledge is
partial and almost counter
intuitive. It is remarkable that
such simple chemicals as alkylating agents (e.g. cyclophosphamide) or antimetabolites (e.g. methotrexate) can have
such useful results in patients.
We understand the chemistry but
the biology is still a partial
mystery.
Current research is focused on
applying molecular biology to the
development of more selective –
even individualised anti-cancer
medicines. Rapid progress in
understanding how cells signal
metabolic messages from the
surface to the nucleus to alter
protein expression has led to the
identification of new targets for
therapeutic design. Oncogenes
code for growth stimulants in
tumours and recently the first
highly selective drug which
prevents oncogenic expression
has been licensed for the treatment of chronic myeloid
leukaemia. The consequence of reciprocal translocation between chromosomes 9 and 22 is an oncogenic fusion protein (bcr-abl) that functions as a constitutively active tyrosine kinase in myeloblastic cells. The resulting
leukaemia is reversed by a synthetic, potent and specific inhibitor of
bcr-abl (Gleevec).
Tumour suppressor genes (TSGs) normally serve a housekeeping function to prevent tumours, but their loss – inherited or environmentally caused – allows tumour formation. In women with ovarian cancer, loss of a TSG on chromosome 11 has been shown to correlate with poor prognosis, and characterisation of the function of this gene is in progress with the aim of developing a “therapeutic” to reinstate the tumour suppressor effect. Such “gene therapy” offers realistic promise of developing much more selective anti-cancer therapies – for which individual patients will be selected on the basis of genetic phenotyping. The relative success of cancer chemotherapy over the past 25 years has been largely serendipitous. The future will be focused on exploiting the new scientific understanding of how cancer cells grow and what controls them. Professor Smyth ended with the thought that this is no more likely to “cure” malignancy than existing drugs, but therapy will be used in continuous (non-toxic) administration, to prolong useful life – reflecting an acceptance that, like most diseases, cancer is a chronic condition associated with ageing – from which there is eventually no escape!
Dr Tsugio Makimoto
Corporate Advisor, The Sony Corporation
7 October 2003
Semiconductor Devices for Entertainment Robots
Joint RSE/SDI Lecture
Professor Andrew Walker, Vice-President, welcomed Dr Makimoto
and his colleagues from the Sony
Corporation in Japan. He highlighted that the lecture was an
example of the society’s interactions with the wider international
community and introduced Dr
Tsugio Makimoto, Corporate
Adviser of the Sony Corporation,
in charge of semiconductor
technology.
Dr Makimoto was born in 1937; he studied for his first degree at Tokyo University, and then later completed a Masters at Stanford University. He returned to Tokyo to complete his PhD in 1971, and during that latter period he was working for Hitachi Ltd, and went on to rise through that company to become Senior Executive Managing Director in 1997. Two years later, he left Hitachi, joining Sony in the year 2000, initially in the role of Vice-President. Amongst his numerous contributions to the development of silicon technology, Dr Makimoto is credited with recognising the cyclical nature of the semiconductor chip industry, this being associated with the tension between customisation and standardisation. The UK magazine, Electronics Weekly, dubbed this “Makimoto’s wave”. He developed this concept further in his book, authored jointly with David Manners, called “Living with the Chip”, published in 1995. In 1997, Dr Makimoto was elected an IEEE fellow in recognition of his work on developing identity DRAMS, and new types of RISC processors, and in the same year he published a second book with David Manners on new trends in electronics, entitled “The Digital Nomad”.
Introduction: Robots Are
Coming
Dr Makimoto’s lecture was
illustrated with computer graphics. He began by describing an
entertainment robot as a toy with
cutting edge technology. He
showed examples of robots from
Robodex 2003, Japan, the largest
annual touring robotics exhibition
and explained that the 70,000
visitors who attended the event
gave a strong signal that the age
of the robot is coming.
Brief History
Dr Makimoto gave a brief history of robotics, from the origin of the word about 80 years ago to Isaac Asimov’s insight into the future; Joseph Engelberger, the “Father of Industrial Robots”, and his inventions between 1960 and 1970; and the latest robots, designed to coexist with human beings.
Sony’s Entertainment Robots
Dr Makimoto showed a picture of AIBO, a dog-like robot that took six years to develop. When AIBO went on sale on the Internet in 1999, all of the 3,000 units that had been produced, with a price of $2,000 per unit, were sold within 17 minutes. This was a symbolic event for Sony, signalling the beginning of the new age of entertainment robots.

Semiconductor Devices for Robots
Semiconductor devices – the chips, sensors and applicators – are important basic elements for robots, and they are all related to semiconductor technology. Besides intelligent chips, there are various other types of sensors used for humanoid robots; the SDR-4X for example has a total of about 80 sensors, which is a much larger number than AIBO, which has just 15. The larger number allows more motion performance and more sophisticated interaction with humans. He went on to explain the evolution of robot intelligence and the prediction that the level of robot intelligence will reach that of a monkey by 2020 and that of a human by 2040.

Future Prospects
The first example was Robocup, a long-term international research group, whose goal is to create a soccer team of robots which can beat a champion human team by 2050. Dr Makimoto emphasised that it is a very ambitious goal and explained that a great deal of new technologies will be required to meet this target, but in time they will also contribute to many other fields of robotics including construction, transportation, agriculture, etc.

The second example was R3, or Real Time Remote Robotics. This concept is a technology to enable us to virtually visit anywhere by staying in one place. He illustrated this by showing a conceptual picture of a mountain-climbing R3-type robot.

Other examples were the DARPA Grand Challenge, a race on 13 March 2004, where robot cars will navigate 300 miles of rugged terrain between Los Angeles and Las Vegas within 10 hours; and a Robonaut Project at NASA.

Conclusion
Dr Makimoto concluded by saying that the robot will become the most dynamic technology driver for our industry, creating a synergistic spiral effect between chips and robots and an exciting future ahead.
Professor Chris Toumazou
Imperial College London
16 October 2003
The Bionic Man
Joint RSE/Heriot Watt Lecture
Speaker’s Abstract
We are entering an exciting new
wave of technology inspired by
lifestyle, healthcare and the
environment. Professor Toumazou’s lecture showed how we can
enhance biological functions with
implantable microelectronics,
explore the natural analogue
physics of silicon technology to
replace biological behaviour, and
demonstrated how a new generation of analogue computing
results can bring significant power
savings in security and telecommunications applications.
Professor Etienne-Emile Baulieu
President, The French Academy of Sciences
27 October 2003
European Science in Difficulty
“Science is at the heart of society
and determines society, and plays
no less a role for those who have
been elected to take society’s
decisions. The present day is
perhaps no richer in artistic and
literary achievement than were
previous centuries, but science
moves so fast, progresses with
such feverish determination, that
it requires society to invent
radically new modes of organisation, which is no easy matter. We
need no reminder of the part
played by birth control in changing the activity of women in
today’s society, nor of how a
longer lease of life has modified
the economics of savings and
pensions, nor of how the internet reshapes international trading and dealing.
However, science is very much criticised. Scientists acknowledge that progress does not consist of classical scientific progress alone, but they also insist that fundamental research, at the core of any scientific process, will continue to develop. There should be no expectation of a plateau of scientific stability, or of a moratorium on change: that is a totally unrealistic hypothesis, and many a quiet conservative (and I sympathise with them) will regret it. Man invents, constantly seeks to know more, about the earth’s climate and its evolution, the neighbouring planets, or the possibility of prolonging life in good health and complete lucidity. This cannot be repressed. It is up to men, and to women, to their representatives, to their civilisations, to fashion this into happiness, to abide by these advances, and to forge the rules of life that turn them into steps forward for the human species.
In our countries specifically, there is at this point reason to fear that we shall succumb to the temptation of letting others take the lead, the United States in particular, and rest content with importing principles, patents, objects. Has scientific research become futile? Is this the true destiny of Europe, of its ideals, of its economy, of the continent that gave the world Newton, Darwin, Jenner, Lavoisier, Pasteur, Marie Curie?
Scientific Europe: a novelty
Each nation treats science differently. The United States, which “believes” in it, is currently piling on real pressure by endowing its research – including
fundamental research – with
colossal means of public and
private origin, in the universities
and the research institutes, in
industry, at all levels. Our students who have completed their
postdoctoral education across the
Atlantic are not returning. Fully
mature 30-40 year olds are not
finding here the working conditions which enable them to
express themselves: no independent structures, no laboratories
equipped in a modern manner, no
technical support staff, no
reasonable personal situations or
careers: there is no money for
that, or so little, and I am afraid
that we are becoming used to it.
Our young colleagues, some of the best, are leaving for countries which are only too happy to welcome them. European industry
is relocating its research services,
with their best personnel, mainly
to American universities where
they find the complementary and
necessary skills in fundamental
research, often paradoxically
abandoned in Europe. To the immediate intellectual loss will be added, tomorrow, formidable economic consequences, with patents taken out in America, and, the day after tomorrow, the loss of the best potential teachers.
Only the European dimension can enable us to attempt to reverse matters. We all congratulate ourselves on the success of the European Organisation for Nuclear Research, CERN, in Geneva, and on that of the European Molecular Biology Organisation, EMBO, in Heidelberg. The example is there: Europe can win if the course is well set and firmly held.
There is an urgency, as the representatives of the European nations recently recognised, in Lisbon and then in Barcelona, without yet going beyond good intentions. That is why I propose that we should debate the following essential points, put forward by a number of colleagues. Their spirit has essentially been accepted under the signatures of the presidents of the Royal Society and the French Académie des sciences, among ten European academies, and is also included in the report of an expert group officially established during the Danish EU presidency in December 2002, led by Federico Mayor and released on 15 December 2003.
1 It is necessary at least to double the budget allocated to research by the European Union. Furthermore, and this is a strong symbol, it would in my opinion be appropriate to request an exception in principle from the European stability pact in favour of investments in research, which, in addition to supplementing resources in each country, could provide a quasi-constitutional indication
of Europe’s confident approach
to human progress.
2 A redirection of European research, giving priority to fundamental research and the strengthening or creation of several outstanding supercentres and laboratories, with the ambition of becoming both the best and, at the same time, a very great cultural and economic force of attraction at world level.
3 A policy to train and put in place young scientists, not only those from our countries, which are now favoured, but also those from European regions which are still in difficulty and from the countries of the South. Their initial training courses must also be remunerated, and they must be guaranteed several years of work on their return, together with the necessary means for their research. It is totally inadmissible that in France our young researchers, 10 or 15 years after the Baccalaureate, should earn only 2,000 euros per month and do not even have a fixed-term contract: quite apart from their financial circumstances, it is a degrading situation.
Additional financing, fundamental research, an elitist policy which is also open to young researchers: all of that should come under the remit of an independent European Research Council, different from and complementary to the present institutional mechanisms of the European Union. It might also address the difficult problems resulting from the diversity of European university systems and their links with research in each country. The time has come to propose to Europe an important objective which is both achievable and a call to action, now, when the Constitution is being written. My hope is that scientific research may become this new frontier for young Europeans, and that the Royal Society and the French Academy will contribute to that.
We have to change. Let us
demonstrate that we know how
to ask the right questions and
define an objective and open
procedure for replying. There
must never be a preset answer. I
would like to add the slightly
strained smile of George Orwell:
“the enemy is the gramophone
mind, whether or not one agrees
with the record that is being
played at the moment ...”