Vulcan – Canada’s Neutron Science Legacy
Lives on
by Lynann Clapham
Dept. of Physics, Queen's University
Canada has a long and distinguished history in the
science of neutron scattering, beginning with Nobel Prize
winner Bertram Brockhouse. While working at Chalk River
in the 1950s, Dr. Brockhouse invented and perfected “neutron spectrometers” – instruments which use neutron beams
to probe the mysteries of solids. It is only fitting that the
Canada Foundation for Innovation (CFI) should continue
this legacy by awarding $15 million to build Vulcan – a state-of-the-art neutron diffractometer to be located at the most
advanced neutron source in the world – the $1.4 billion
Spallation Neutron Source (SNS) at Oak Ridge National
Laboratory in Tennessee. In this article we will first take a look at
neutrons and examine why scientists and engineers find
them useful. Next we will set our sights on the SNS – what
it is and how it works, and finally we will see how Canada
is contributing with the new Vulcan instrument.
Neutrons – and what they are good for….
A neutron is a neutral particle, one of the fundamental particles comprising more than half of all visible matter.
All neutrons have a magnetic moment, a spin, and a
wavelength that depends on how fast they are travelling.
Because neutrons don’t have a charge, they don’t interact
readily with matter. Their neutrality allows them to penetrate deep into a material, unlike X-rays or electrons, which are stopped near the surface. When a beam of
neutrons is projected at a sample, much of it passes through
but some neutrons will interact directly with the atomic
nuclei and ‘bounce’ away. This behaviour is called neutron
scattering. Scientists can use detectors to count these
scattered neutrons and determine their energy and scattering angle. Analysis of these details can reveal much about
the nature of a material sample, which can be anything from
a liquid crystal to a section of pipeline.
How do we make a neutron beam?
Neutron beams are made at neutron sources, and
traditionally those sources are nuclear reactors designed for
research purposes. The nuclear fission that occurs in the
reactor produces neutrons with energies around 1 MeV, but
this energy is too high for continuing the fission reaction or
for research studies. Therefore it is necessary to slow down
the neutrons (or reduce their energy) with a moderator.
Typical moderators are water, graphite, or (in the case of
Canadian reactors) heavy water. The neutrons collide with
the moderator nuclei and transfer some of their energy to them.
Eventually, after a number of collisions, the neutrons end up
with energies of less than one electron volt. These neutrons are
commonly referred to as "thermal neutrons" since their
Phys 13 News / Fall 2004
energy is similar to the thermal energy that particles have at
room temperature. The wavelength of a particle depends on
its energy, and many of these thermal neutrons have
wavelengths around 1-3 Angstroms (1 Angstrom = 10^-10 m). Since this wavelength is about the same as the atomic
spacing in a solid, thermal neutrons are ideal for materials
science studies.
There are a lot of neutrons bumping around in a
reactor, so first you need to get them out if you want to make
a beam. This is done by cutting a hole in the reactor wall. As
you can imagine, the neutrons coming out of the hole are
traveling in all directions, so those that are not going in the
desired beam direction are removed using shielding. Furthermore, the escaping neutrons have a range of energies,
so the beam is termed a ‘white’ beam (similar to ‘white’ light
which is made up of a range of energies). This white beam
is then sent through a number of devices which remove all
neutrons except those of a specific desired energy (wavelength). This monochromatic (single wavelength) neutron
beam is then ready for use in an experiment.
How do we use this monochromatic neutron beam in
an experiment?
One of the most useful things we can do with a
thermal neutron beam is neutron diffraction. This makes
use of Bragg’s diffraction law:
λ = 2d sin θ
where λ is the neutron wavelength, d is the spacing between
atomic planes in the material, and 2θ is the angle between
the incoming beam and the diffracted neutron. When a
neutron beam hits a sample, some of the neutrons are
diffracted according to Bragg’s Law. The wavelength λ of
the neutron is known, and we can use a detector to measure
the angle (2θ) at which the neutrons are diffracted. Therefore we can
use the equation above to determine “d” – the spacing
between planes of atoms in the sample. This atomic plane
spacing gives us very useful information – it can tell us what
phases and compounds are present in a material, and it can
also tell us how the material behaves under stress or in
response to temperature changes.
Next-generation neutron sources
The time it takes to run a neutron diffraction experiment depends on two things: 1) the neutron flux (essentially
the number of neutrons in the beam), and 2) the detector
instrumentation, which determines how efficiently the diffracted neutrons can be detected. In order to conduct bigger
and better experiments, scientists need both higher flux
neutron sources and improved instrumentation. First we will
consider the exciting new developments in neutron sources,
and then we will consider the instrumentation aspect when
we introduce the Vulcan instrument itself.
Figure 2 shows the development of neutron sources
from 1932 (when the neutron was discovered by Chadwick)
to the present. Research reactors (fission reactors) were
the first sources of neutron beams, and reactor improvements in the 1940s and 1950s led to a rapid increase in
neutron flux. The highest flux research reactor in the world
today is HFIR, the High Flux Isotope Reactor at Oak Ridge,
Tennessee, which has a peak flux of 2.6 × 10^15 neutrons/cm^2
per second. For comparison, the peak flux at the Chalk River
NRU reactor is ~1.0 × 10^15 n/cm^2 per second.
Fig. 2: Graph indicating how the thermal neutron flux at
neutron sources has increased since the neutron was discovered
by Chadwick in 1932. The data points indicate neutron flux at
specific neutron sources worldwide (note NRU and NRX are
Canadian reactors located at Chalk River).
Although the most common way to make a neutron
beam is with a nuclear reactor, the next generation sources
are based on an entirely different type of technology. The
newest and most powerful sources are not reactor-based,
but produce neutron beams by spallation. Spallation
sources are also known as particle-driven sources (Fig. 2)
or accelerator-based sources. So what is spallation? When
a fast particle such as a high-energy proton hits a heavy
atom nucleus (like mercury or tantalum) some neutrons are
‘spalled’ or knocked out. For every proton that hits the
nucleus, about 20 to 30 neutrons are expelled. This nuclear
reaction is called spallation.
Up until the mid 1980s, spallation sources could not
compete with the neutron flux produced by the high flux
research reactors like HFIR. However, dramatic advances
in accelerator technology in the 1990s made it possible to
produce protons with energies in the GeV
range, travelling at about 90% of the speed of light. These
powerful accelerators form the heart of the next generation
neutron sources. When it is completed in 2006, the SNS, or
Spallation Neutron Source, located at Oak Ridge, Tennessee, will be the most powerful neutron source on the planet
(Fig. 2). It is on this neutron source that Canada’s Vulcan
instrument will be located.
The Spallation Neutron Source (SNS)
Under construction since 1999, the SNS is an accelerator-based pulsed neutron source being built in Oak Ridge,
Tennessee, by the United States Department of Energy. It
will be completed in 2006 at an overall cost of $1.4 billion.
The end result will provide the global scientific community
with the most intense pulsed neutron beams in the world,
eventually channeled to 24 research instruments including
Vulcan.
So how does the SNS work? Well, earlier we found
out that in order to produce a useful neutron beam using
spallation you need: 1) some way to make a beam of protons,
2) a way of accelerating that proton beam to high energies,
and 3) a heavy metal ‘target’ for the protons to bombard.
Fig. 3 shows the overall layout of the SNS, which measures
about 2 km end-to-end. Each of the major components – the
front end, the linear accelerator (LinAc), the accumulator
ring and the mercury target – is a state-of-the-art system
being built by the groups indicated in Fig. 3. The role of each
of these components is outlined below.
Fig. 3: A drawing of the SNS from an aerial view. Each of the
major components is labelled, with the scientific group indicated
that is responsible for designing and building the equipment.
The distance from one end to the other is approximately 2 km.
The Front end systems: Lawrence Berkeley National Laboratory (LBNL) is responsible for designing and building the
SNS’s front-end system. Here an “ion source” produces
negative hydrogen (H-) ions—hydrogen with an additional
electron attached—that are formed into a beam and accelerated to an energy of 2.5 million electron volts (MeV). This
beam is shot into a large linear accelerator (LinAc).
The Linear Accelerator (LinAc): Los Alamos National
Laboratory (LANL) is responsible for designing and building the LinAc. The LinAc is actually three accelerators,
which in total accelerate the H- beam from 2.5 MeV to 1000
MeV (1 GeV). At 1 GeV the H- ions are travelling at only
slightly less than the speed of light. Just before the H- ions
are shot into the accumulator ring, they pass through a
‘stripper foil’ that strips the electrons from the (H-) ions,
making them into H+ (protons!).
The Accumulator Ring: Brookhaven National Laboratory
(BNL) is responsible for the accumulator ring structure.
This ring is a circular tunnel where magnets curve the path
of the protons to keep them traveling in a circle. Protons
from the LinAc are added to the ring continuously over
about 1200 turns. A gate is
then opened and all of the accumulated protons escape at once,
producing an intense pulse less than one millionth of a second
in duration (the time it takes to empty the ring). And all of this
happens 60 times per second!
The Mercury Target: Oak Ridge National Laboratory (ORNL) is
responsible for the design and construction of the liquid
mercury target, the first of its kind in the world. Mercury
was chosen as the target material for several reasons: (1) it
is not damaged by radiation, as a solid would be; (2) it has a
high atomic number, making it a source of numerous
neutrons (the average mercury nucleus has 120 neutrons
and 80 protons); and (3) each rapid high-energy pulse causes
a huge jump in target temperature along with significant shock
effects, and because it is a liquid, mercury can withstand these
extreme conditions better than a solid target.
Fig. 4: the SNS mercury spallation target unit and instrument
beam lines. Note this is a side view of the target, so only 5 beam
lines are shown.
The mercury target station is shown in Figure 4. The proton
beam enters the mercury target chamber from the left-hand
side. Twenty tonnes of mercury will cycle continuously through
the target chamber at pumping speeds of up to 30 litres per
second. Approximately 20-30 neutrons are released for
each incoming proton, and these neutrons are channeled to
beam lines containing scientific instruments such as Vulcan.
Vulcan!!
So finally we get to Vulcan itself! A total of 18 beam
lines will eventually come off of the mercury target. Fig. 5
shows the instrument layout in the target building (the target
itself is in the centre of the beam lines). The beam line
labeled “Engineering Diffractometer” is the one on which
Vulcan will be located.
So what exactly is Canada’s role in all this? Well, the
SNS itself is funded by the US Department of Energy, and
will cost almost $1.4 billion when it is completed in 2006. The
scientific community has identified 10 very important scientific instruments that should be built (the ones shown in
Figure 5), but since these instruments are very expensive the
base funding for the SNS will only cover the costs of 5 of
them. Therefore, the SNS is looking to other countries to
help fund the remaining scientific instruments. The timing happened to be perfect, since the Canadian Government recently announced an “International Access Award” through
its Canada Foundation for Innovation (CFI) funding body.
This award seemed tailor-made for funding a Canadian
scientific instrument at the SNS, and so a team of neutron
beam scientists from the Canadian Institute for Neutron
Scattering (CINS) applied for and won a $15 million CFI
grant to build Vulcan.
Fig. 5: the layout of scientific instruments in the SNS
target building. The mercury target is in the centre of
the beam lines.
Vulcan, illustrated in Figure 6, will be an engineering
diffractometer – specifically designed and built to analyze
engineering components and materials. It is named after the
Roman god of fire and metalworking. Neutron diffraction
with Vulcan will provide a method for determining stresses
and strains deep within a material. It will also be able to make
maps of the chemistry or microscopic structure of a sample.
Sample sizes will range from the very small – a few
millimeters across – to the very large – like sections of
pipeline! And even though the sample might be large,
Vulcan can make measurements very close together – less
than 0.1 mm between measurement points. Finally, measurements can be made so quickly with Vulcan that
experiments can be done in situ. This means that scientists
can make measurements to see how strain changes with
time – for example, you could make measurements on a full-size
operating aircraft engine as it warms up!
In addition to using a much more intense neutron
beam, Vulcan experiments at the SNS will be fundamentally
different from neutron diffraction experiments at a reactor.
Earlier in this article we said that at reactor-based neutron
sources you use a monochromatic (single wavelength)
neutron beam – so that you have a single value of wavelength
to use in Bragg’s diffraction equation – and you detect
neutrons diffracted at a specific angle. But to get a monochromatic
beam you throw away all of the neutrons with other
wavelengths – what a waste! In Vulcan experiments a
‘white’ neutron beam having a range of different wavelengths
will be used. Because you are using a lot more of the beam
you can run your experiments much more quickly.
So how do you use a white neutron beam in a Vulcan
diffraction experiment? Well, the critical point is that the
neutron beam comes in a pulse. You know exactly when it
hits your sample, and then you time how long it takes each
neutron to hit your detector bank – this is called the ‘Time
of Flight’, or TOF. The TOF is related to the energy of the
neutron (since the higher the energy, the faster it travels), and
the energy can be used to calculate the neutron wavelength!
So because we know the neutron wavelength (λ) as well as
where it hits on the detector bank (θ), we can once again use
Bragg’s equation to calculate the spacing of atomic planes.
However, because we are using all of the neutrons in the
white beam (rather than throwing most of them away to
make a monochromatic beam) we can do our measurements much more quickly.
Fig. 6: a computer rendering of the Vulcan instrument. The
detector banks are shown in blue, surrounding the sample
table. On the table is a stressing machine (purple) pulling on
a small sample.
When it is completed, Vulcan will provide a 30- to 50-fold
increase in the rate at which researchers obtain data. This
increase in performance will undoubtedly open up new
frontiers in engineering and materials science research.
Brockhouse would most certainly be proud of the way in
which Canada is continuing the neutron scattering legacy
he began over 50 years ago.
When we were the centre of the Universe
by John A. Deacon
Dept. of Physics, Curtin University, Australia
Our place in the universe was undisputed. Our world
was fixed in space and the heavens moved about us. There
was contentment and satisfaction for us all. What a
wonderful place the world was. We were the centre of the
universe. The stars at night were embedded on the inside
of a huge circular dome, which gently rotated about the
earth. By day the sun moved across the sky bringing light,
and making the trees green and the crops grow.
During the time of Aristotle (384 – 322 BC) the
sphere was considered to be the body with the most perfect
shape, and to have the highest degree of symmetry. The
gods would use this shape and so the motions of the heavenly
bodies such as the moon, sun, planets and stars were based
on spherical surfaces and circular paths. It all made sense.
The world consisted of a round central earth core,
which was covered mostly by a watery layer, and on top of
this was a layer of air. These were considered to be the
natural components of our immediate world. There were
believed to be four elements, namely earth, water, air and
fire. Everything about us would be made up of a combination of one or more of these elements. The natural state of
these elements was one of rest. If a stone was lifted and
released, it naturally fell to the ground because it had earth-like properties, and its place was at rest with the earth.
Smoke rose into the air because of its air-like nature. Water
ran down hill to join other bodies of water such as the sea or
rivers or lakes. Once again it all made good sense.
If the natural state of a body is at rest, according to
Aristotle, then what causes the continuous motion of the
heavenly bodies? Initially this was explained away by the
presence of gods who lived in the heavens above us. These
supreme beings looked after the motions of the sun, moon
and stars and kept it all running smoothly like some huge
clock. Such was the eminence of Aristotle that these ideas
prevailed until the Middle Ages. By this time, God and a
Host of Angels looked after the heavens. There was a
strong belief in the immutability and permanence of the
structure of the heavens.
In 1577 the appearance of a comet weakened the
concept of the immutability and permanence of the heavens.
Clearly the comet was not an atmospheric effect like
thunder and lightning. The position of the comet was further
away than the moon. Something in the heavens seemed
awry. In 1590 Galileo climbed the leaning tower of Pisa and
dropped simultaneously pairs of heavy “weights” of different value. According to Aristotle, the speed of a falling body
varies with its weight, the heavier body having the greater
speed. Galileo found no difference in the speeds and both
“weights” hit the ground together. The foundation of
science based on Aristotle was showing definite cracks.
What else might be wrong? It was all rather worrying.
In the 16th and 17th centuries Tycho Brahe, Johannes Kepler
and Isaac Newton destroyed the old scientific beliefs. The
earth is merely one of nine planets orbiting about the sun that,
in turn, is a medium sized star of the many billions in our
universe. Our entire solar system is drifting through space.
There is no spherical surface of stars. The stars are at
varying distances from the earth and generally are all
speeding away from us. A large meteor colliding with the
earth is unlikely, but by no means impossible. There is nothing special about us. So after 2000 years
we have a new order. We are not at the centre of the
universe. It is with some regret that the notion of the earth
and its inhabitants being at the centre of the universe has had
to be discarded.
Mr. Mach's Amazing Mechanics
by Paul Wesson
Dept. of Physics, University of Waterloo
It is one of those simple yet fundamental problems
that have puzzled people for centuries. Take a bucket filled
with water and set it spinning like a top. As you will see,
although the bucket tends to drag the water at the edge with
it, most of the contents stay put. Why? The problem has
been known to physicists since the 1600's and is generally
referred to as Newton's bucket experiment. It has an
appealing simplicity. You can do it at negligible cost in your
own backyard with a rusty pail - unlike most other modern
experiments, which tend to cost millions and require enormous laboratories with sophisticated particle accelerators.
There are even entire books about Newton's bucket. Why does the water tend to stand still?
Ernst Mach, a 19th-century Austrian physicist, was
the first to suggest an answer: that the mass of everything
on Earth, including you and me, is intimately connected with
the mass of everything else, even distant astronomical
objects. Matter simply "knows" that it should stay still with
respect to the rest of the stuff out there in a vast and ancient
Universe. Similarly, a particle with mass resists acceleration - it has inertia - because it is in some way "connected"
to the myriad objects in the cosmos. This concept is known
as Mach's Principle, and no one has ever been able to
construct a theory of the Universe that justifies it.
Einstein was clever and he thought a lot about
Mach's principle. It was his main inspiration for inventing
the theory that space-time is curved by massive bodies (not
necessarily opera singers), the theory we now know as
General Relativity. This mind-boggling intellectual tour de
force involves ten non-linear partial differential equations,
tensors based on the properties of curved space, and causes
loss of sleep in hordes of distracted students. The theory
works extremely well: the motion of the planet Mercury; the
bending and time delay of light passing near the Sun; the red
shift of radiation from our own and other stars; and the
gradual decay of the orbits of the neutron stars in a binary
pulsar system are all examples that have been seen to
confirm the predictions Einstein laid out in general relativity.
It all fits. And, at a time when society seems to be
quite insane, it is comforting to know that a person, albeit a
very special person, can achieve so much by just thinking.
Currently, General Relativity as a mathematical subject is
taught at most respectable universities. It is the stuff of not
only erudite academics, but of newspaper articles, TV and
videos. It has also, by personal experience, been the subject
of conversation between a truck driver and a hitch-hiker at
1:00 a.m. on a rainy highway in the northern hinterlands.
People like Relativity.
Disappointingly, however, Einstein never managed
to incorporate Mach's principle into his theory of General
Relativity. He admitted this after studying the idealized
version of Newton's bucket experiment known as the
Lense-Thirring effect, wherein physicists calculate the
effects inside and outside of a massive, rotating, hollow
shell. This drove Alby, as Einstein was known to his friends,
to search for an even more grandiose unified field theory
among other things. A well-regarded attempt to implement
Mach's Principle in a theory was made in the 1950's by
the English astrophysicist Dennis Sciama and published in
the Monthly Notices of the Royal Astronomical Society.
He wrote down equations designed to make the locally measured mass of a particle depend on the rest of the matter
in the universe, which is expanding and uniform. The
principle is simple and has applicability to other physics
subjects: The volume of space, and the effect of uniformly
distributed objects in it, scales as the distance cubed and
therefore outpowers local influences. Consequently, things
here are closely constant because of the uniformity out
there. The paper created a stir among theoreticians but was
never fully developed. One reason is that experimentals
have established the isotropy of mass (inertia) to better than
1 in 1021, a staggering level of uniformity, that indicates the
need for a new approach. Sciama promised that another
paper with more details was on the way. Unfortunately, it
never arrived. Sciama died in 1999 before he had successfully formalized his ideas.
A trio of cosmologists, myself among them, entered
the discussion in 2002. One English, one Chinese and one
Indian. Their views on theoretical physics were as diverse
as their origins and taste in food. The history of physics is
littered with anecdotal stories about how new theories have
been jotted down on serviettes in restaurants, and our
experience adds to the tally. Sitting in a chintzy Waterloo
restaurant, Hongyia Liu, originally of the Dalian University
of Technology in China, my colleague Sanjeev Seahra of the
University of Waterloo in Canada and I demonstrated that,
although Einstein never found it, there is a way to incorporate Mach's Principle into General Relativity. So, in order
to embroider the record:
Myself: "What if we formulate Mach's Principle in
terms of a wave, with the metric tensor a complex function
of the spacetime coordinates? We have lots of time, the
menu says all-you-can-eat fish and chips."
Hongyia Liu: "What if we just order a big bowl of
chop-suey?"
Sanjeev: "Nah. How about we get prawn vindaloo
with poppadoms, and then do the spacetime thingy?"
Several Pepto-Bismols and sheets of calculations
later, the results were apparent: massive international
indigestion and the proof that mass can be a wave. While
the serviette has long since been consigned to the trash, our
formal proof was published in 2002 (International Journal
of Modern Physics D, Vol. 11, p 1347).
Our argument starts from one of the well-known
basic tenets of general relativity: a particle that has mass and
is moving through space-time deforms the surrounding
space-time as it goes. The equation that describes this
deformation contains a numerical variable that is normally
what mathematicians call a "real" number. We, however,
took the controversial step of supposing it was "complex":
composed of real and imaginary parts, where the imaginary
part is a multiple of i, the square root of -1. Complex
numbers are indispensable in many areas of physics, for
example in Maxwell's electromagnetism. Electronic engineers also routinely use complex numbers to describe the
behaviour of their circuits. But in the theory of gravity, they are
not generally considered necessary.
Any fears that taking this theoretical step would lead
to a hideously complicated outcome quickly disappeared.
We found that everything worked out with remarkable
harmony. When our test particle moves through the "complex" space-time, what comes out, after working through
Einstein's equations, are physical quantities which are all
real. The imaginary parts of the complex numbers disappear.
Introducing complex numbers into general relativity
also forced us to alter the standard way of describing the
matter we were interested in. Instead of using an approximation in which we considered a large number of particles
and "smoothed" them out into a fluid, we concentrated on the
locality of one specific particle. Again, this step was
unconventional but turned out to give physically reasonable
results.
We also incorporated the quantum idea that our
particle can simultaneously be considered as a wave, with
a wavelength related to its momentum. This, in complex
space-time, changes the nature of our particle. It means that
the particle's mass can be thought of as extending throughout space-time as a wave, with the result that the global
geometry of space - its curvature throughout the Universe
- depends on the properties of that wave. With a description
of space-time that admits complex numbers, and without
violating Einstein's framework for relativity, we have an
explanation for Mach's suggestion. It is not insane to believe
that all matter on Earth could be linked to the stars, including
people and buckets of water.
Better yet, we believe our idea can be tested. Our
theory predicts a specific relationship between the mass of
a particle and the curvature of the space that it inhabits. We
are currently looking into the feasibility of observing the
effect in the properties of the space surrounding a hydrogen
atom.
Any successful proof of this idea could have even
more extraordinary implications. The effect of mass on
space-time curvature that we have proposed is calculated
from classical principles and works in four dimensions, the
three spatial dimensions and one of time. But the result is
similar to the proposed effects of certain theories, including
the various string theories for example, that aim to mesh
relativity with quantum mechanics and produce a "theory of
everything". These ideas invoke extra spatial dimensions
and if we do manage to observe the relation between spacetime curvature and mass that we have proposed, we think
our theory may be just the shadow of something that occurs
in many extra dimensions.
Thus does the scientific collaboration bear fruit, in
that it has made us realize that mass can be thought of as a
wave, its value as measured here being the result of the
propagation of many like-endowed objects distributed
throughout the length and breadth of the universe. Mr.
Mach may well have been right about his amazing mechanics. We could be tied to the quasars, and Newton's bucket
shows it.
Our conclusion is that Mach's principle may not only
be feasible, it may be rather important. The connection
between the atoms in our bodies and the atoms in a distant
star could have a fundamental part to play in our final
description of how the Universe works.
Configuring C++: A Simple Freeware
Scientific Programming Environment
by David Yevick
Dept. of Physics, University of Waterloo
In this article we’ll start our technical discussion of
scientific programming with a prescription for setting up a
free C++ programming and graphics environment from
which you can run the code in future articles in this series.
The material in this and future articles is largely contained
in my comprehensive but very concise textbook "A First
Course in Computational Physics and Object-Oriented
Programming with C++" (Cambridge University Press,
2004), which should be available by the time you read this. The
textbook also contains a CD-ROM with numerous programs
and tools to help you get started in learning C++ and
scientific programming and is packed with self-correcting
examples to study or work through.
Perhaps I should first state a few reasons of
variable scientific merit for why C++ is the best language for
potential physics students to learn:
(1) It looks good on your resume.
(2) Many scientific program libraries take years to port to a
new language; currently C++ versions of many legacy
FORTRAN programs exist, but few have been
translated into other languages.
(3) An extremely large number of libraries are available for
system services, such as communicating with other
computers or processors.
(4) While the architecture of C++ does not incorporate the
most modern innovations, the language does contain all key
object-oriented features that can be used to compartmentalize
and simplify program structure.
(5) C++ is mature, so a program written today that
employs standard libraries will compile and run essentially
forever on a wide range of operating systems and
processors.
Although there are numerous free compilers
available, we’ll focus on the Borland C++ compiler for
Windows since it’s the simplest to manipulate once set up
properly. If you’re sophisticated enough to be running Linux
you can use the gcc and the DISLIN version for gcc instead
and you’ll have more than enough knowledge from your
experiences battling the operating system to work through
the installation procedure without further instructions.
You can download the Borland C++ command line
compiler, currently version 5.5 (not the trial version of the
Enterprise product), after filling in some nuisance forms
from the website:
http://www.borland.com/products/downloads/ownload_cbuilder.html
To obtain a free but really comprehensive, precisely
coded scientific graphics package, you will also need to
download the Borland version of DISLIN available at
http://www.dislin.de/
Now install both of these products by navigating to
the directories you downloaded them into and clicking on
their icons. Install in the default directories (if you don’t
have space on your C: drive, replace C: by whatever drive
letter is best, but then don’t forget to replace C everywhere
by this letter in the instructions below).
At this point, nothing works, or as my father
succinctly puts it, “situation normal, all screwed up”. Being
careful to do everything exactly as written below (if you
have the textbook and the CD-ROM all this becomes much
simpler), begin by clicking on the Start button on the lower
left-hand corner of your screen. Either you will see an icon
labeled MS-DOS prompt or you will select All Programs
(in Windows XP) and then Accessories or navigate directly
to Accessories and then choose Command Prompt (or
MS-DOS prompt depending on your Windows version).
This opens up a blank window into which you can issue text
commands; typing help brings up a list of them. Type
C:
cd \borland\bcc55\bin
notepad ilink32.cfg
You’ll have to click the Yes button when prompted to create the file.
Enter the line
-L"C:\Borland\Bcc55\lib"
Save the file and exit notepad. Now type notepad bcc32.cfg
and enter
-I"C:\Borland\Bcc55\include"
-L"C:\Borland\Bcc55\lib"
and save this file as well. Exit notepad again. Finally type
cd C:\ and then notepad bpath.bat and type
set PATH=%PATH%;C:\borland\bcc55\bin;C:\dislin\win
set DISLIN=C:\dislin
and save this file (it should be located in the root of C:).
Finally, type
cd C:\dislin\win
notepad bcclink.bat
Find the lines
:COMP
@ set _ext=c
@ set _int=%_dislin%
@ if %_opt1% == -cpp set _ext=cpp
Replace ‘=c’ by ‘=cpp’ in the second line and cpp
by c (twice) in the last line and save the file.
You’re now located at the root directory of the C:
drive. Make a new directory for your program files and
generate a test program in this directory by typing:
mkdir progs
cd progs
notepad plottest.cpp
enter the lines
#include <iostream.h>
#include <dislin.h>

int main() {
    float x[3] = {1, 2, 3};
    float y[3] = {2, 4, 8};
    int numberOfPoints = 3;
    qplot(x, y, numberOfPoints);   // DISLIN quick-plot of y against x
    return 0;
}
You must have the line #include <iostream.h> at
the beginning of your file to run dislin. Now save the file and
exit notepad.
To run the program, type \bpath (you only have to do
this once for each new MS-DOS window that you open).
Do not forget the beginning backslash. Then
enter
bcclink -a plottest
You’ll see a graph of the x and y values you entered
into the program. If you later generate a program, e.g.
test.cpp, that doesn’t include graphics you should type
instead
bcc32 test
and then
test
In either case, if you have no additional changes to
make in the .cpp file and want to rerun the program, you only
need to type its name (plottest or test) or click on its window
icon from within My Computer.
In the next installments we’ll start talking about some
features of C++ and then look in more depth at key elements
of scientific programming.
New Physics? The Pioneer Spacecraft
Anomaly
by Guenter Scholz
Dept. of Physics, University of Waterloo
The essential problem is that the Pioneer 10 and 11
space probes are measurably displaced from where they
should be according to very accurate calculations that
include the combined gravitational pulls of our solar
system's Sun, planets and asteroids (see issue #92,
Oct. 1999, pg. 11 of Phys 13 news). The Pioneer
space probes were initially launched in 1972 and 1973
respectively to study the outer planets of our solar system.
Both spacecraft are now near the edge of our solar
system about 86 AU (13 billion km) from the Sun with
trajectories that can not be explained by conventional
physics. Similar to the radar that police constables use to
check the speed of motorists, researchers find that the
Doppler frequency of the microwave signals they have
been bouncing off the craft drifts at a constant, very
small rate, indicating that the probes are accelerating
toward the Sun at the constant, albeit extremely small,
rate of about 10⁻¹⁰ g. This tiny anomalous behaviour
in the probes’ trajectories was initially thought to be caused
by one or more mundane problems such as propulsion fuel
leakage, heat radiating from the craft, etc. Also under
serious consideration was the existence of a new planet ‘X’
beyond Pluto’s orbit. It has meanwhile become clear that
planet ‘X’ does not exist and that the other mechanisms also
do not appear to be responsible. The mystery acceleration
of the Pioneer spacecraft remains anomalous.
Attempts were subsequently made to test the Pioneer
anomaly using other spacecraft such as Galileo and the
Voyager probes that had meanwhile been launched, but
without success. Current deep-space missions now under
development, such as LISA (laser interferometer space
antenna) and JIMO (jupiter icy moons orbiter) will not be
designed to test the robustness of the anomaly.
[Figure: the Pioneer 10 spacecraft]
You may remember the plaque, shown below, that
was attached to Pioneer 10 to illustrate the origin of the
probe as well as other information about mankind.
To a number of researchers it now seems distinctly
possible that the Pioneer anomaly is an indication of new
physics, the origin of which could very well change our basic
understanding of nature. A somewhat similar situation
comes to mind when Gustav Kirchhoff, one of Max Planck’s
advisors at the University of Berlin, suggested to Planck,
then a young mathematics student with a passion for
theoretical physics, that there was this ‘little’ problem about
the wavelength distribution as a function of temperature in
the spectrum of radiated heat that didn’t agree with classical
physics predictions. For this characteristic spectrum
Kirchhoff had coined the name black-body radiation. Keep
in mind that most physicists in the latter half of the 19th
century thought that understanding nature was well in hand,
and that a physicist’s career outlook was simply to dot the
i’s and cross the t’s.
Theoretical proposals were advanced by a number
of researchers to explain the anomalous acceleration of
Pioneer (Phys 13 news, #92) and more recent experimental
results have meanwhile also supported some of these
predictions. We now know, for example, that the expansion
of the universe is accelerating and, as well, that there may
be variations in the values of the fundamental constants.
[Figure: the plaque on Pioneer 10, identifying mankind on Earth]

Present suggestions for Pioneer’s deceleration include:
(1) invisible ‘dark matter’, whose existence astronomers
had already postulated to explain the excess gravitational
forces observed on objects at very large scales.
(2) problems with Einstein’s general theory of relativity.
This is just fine with many theorists who already believe that
this is necessary in order to merge gravity with quantum
mechanics. Specifically, some of these theories suggest
that gravitational forces will increase at large distances or
small accelerations. Best of all, these theories would also
obviate the need for the pesky, invisible dark matter.
(3) a decrease in the spacecraft's mass at small accelerations
because of exotic quantum and/or cosmological forces.
Furthermore, beyond the Standard Model of particle physics,
String theory and/or supersymmetry in n-dimensional space
introduce new degrees of freedom along with possible
violations of space-time symmetries, for example the Lorentz
symmetry, that could result in very weak forces acting at
large scales.
Of course, sober consideration still places the most
likely cause of the anomalous acceleration of the Pioneer
spacecraft on the on-board systems; the question remains,
though: which one? All conceivable on-board sources have
already been systematically examined and appear to be ruled
out. The only other possibility is the existence of new physics.
Now, about seven years since Slava Turyshev and
John Anderson (at NASA Jet Propulsion Laboratory),
Michael Martin Nieto (at Los Alamos National Laboratory)
and their co-workers Philip Laing, Eunice Lau and Tony Liu
published the initial analysis of the anomalous deceleration
(Phys 13 news, #92), interest in the anomaly has spread and
the European Space Agency (ESA) appears willing to
investigate this puzzling situation. At its recent ‘Cosmic
Vision’ workshop in Paris, ESA announced a number of
experiments and missions that will test gravity in new ways
and, in particular, ESA will address the Pioneer anomaly
directly to see if indeed there exists an indication for new
physics. If so, this would be an extremely important
discovery. A better understanding of gravity would have a
profound effect on the future of physics and astronomy,
possibly on a par with the consequences of Planck
understanding the nature of the black-body radiation curve.
The above researchers, jointly with their ESA
colleagues at the University of Bremen, also plan to reanalyze
earlier trajectory data for indications of the acceleration
anomaly in earlier stages of Pioneer's path, when it was
closer to the Sun. This future mission by ESA will have
better accelerometers, improved launch techniques and
optical navigation, as well as improved probe design to
eliminate all on-board effects that might be able to mask
significant results. These efforts will produce the most
precisely tracked spacecraft ever launched into deep
space and, moreover, it will have a precision estimated
to be a thousand times finer than the anomalous
decelerating force itself.
Phys 13 News / Fall 2004
Beyond Silicon
by Rafik O. Loutfy
Xerox Research Centre of Canada*
For more than four decades, silicon has been the
backbone of the microelectronics industry. However, by
2010, manufacturing integrated circuits from silicon that can
handle the increased horsepower of the future will become
increasingly challenging. Furthermore, the silicon integrated
circuit is sometimes not suitable for the electronics that are
increasingly becoming embedded in everyday products.
Gordon Moore, co-founder of Intel Corp., predicted
in 1965 that the number of transistors per integrated circuit
– and therefore the power of computing devices using those
circuits – would double every 18 months. This theory,
dubbed "Moore's Law," was originally forecast to continue
through to 1975. The trend, however, has remained true for
longer. But, how much longer can it continue?
There are physical limits to how minuscule silicon
transistors can be made. At a molecular level, there comes
a point when the transistor will give off heat signals that will
overwhelm its own electronic signals, causing too much
"noise" to function efficiently. As well, such microscopically
small silicon transistors require an environment so sterile
that the cost of building facilities to produce them
is expected to skyrocket to prohibitive levels.
Organic materials (i.e. polymers, oligomers), the kind of
carbon chemistry that has long sustained life on earth, are
being investigated as future alternatives to silicon. The
fabrication of organic microprocessors can take place in
ambient environments, allowing the development of
carbon-based transistors by a low-cost printing process
similar to the one used in ink-jet printers. Silicon chips,
on the other hand, require high temperatures and
ultra-clean vacuum environments for fabrication.
Xerox, in partnership with Motorola and Dow, has
been working on developing a printed organic electronics
technology, known as Organic Thin Film Transistor (OTFT)
Fabrication. While reduced development cost is an exciting
element of this technology, perhaps even more exciting is
the flexibility it will bring. Organics are printable, structurally flexible and mechanically durable.
On the other hand, silicon is rigid and brittle, which
limits its potential applications. Even while silicon technology will continue to be commonly used for a number of
applications, like computer processors, printed organic electronics will allow intelligence to be integrated easily and
efficiently into a variety of unique applications.
Organic materials have conducting and light-emitting properties. But, today there remains the challenge that
they, like living creatures, age with time and are vulnerable
to breaking down due to oxidation. To counteract this, the
scientific community continues to gain a deeper understanding of the causes of aging on organic molecules and is
utilizing molecular chemistry techniques to overcome the
breakdown of organic material.
Initial applications for printable organic electronics
are expected to co-exist alongside silicon-based integrated
circuits. Each material has inherent advantages: while
organics are flexible and mechanically durable, silicon retains a faster switching speed. Because of this, the initial
intention of organic chips will be to create new markets
based on the dramatically lower cost of producing the chips
and the ability to make larger, more flexible chips than
silicon.
Smart cards (credit card-like ID security cards) could
be reduced from a cost that ranges between $6 and $10 to
less than $1 using affordable organic materials and
cost-effective printing technology.
Tiny radio-frequency (RF) tags can be developed
and used for product tracking, inventory control and other
applications, also at similarly low cost. While RF tag
technology has been around for decades, used in tagging
vehicles for toll highways and in tracking livestock and
government freight cargo, the cost must drop considerably
for it to receive wide-scale use in consumer and retail
applications.
Novel displays driven by carbon-based transistors
are being developed that will have a lower cost, as well as
being more lightweight and compact than existing LCD
display technology. Also, by using flexible organic circuits,
radical inventions like low-cost, electronic, reusable paper
can move to the next level of evolution by advancing to a
paper-like flexibility.
Ultimately, the expectation is that integrated circuits
made of organic TFTs will cost approximately $1 per square
metre. In comparison, silicon integrated circuits cost
between $30,000 and $500,000 per square metre, and
manufacturing a square metre of active-matrix liquid crystal
display costs between $6,000 and $10,000.
When you look at these figures, it's obvious that
organic microprocessors will be vastly more affordable than
existing technologies. The low cost of developing organic
circuits will drive universal acceptance of the technology
and make it practical for use in even the simplest of
consumer applications.
At the right cost, organic micro-processors could, for
example, be integrated into the plastic of a food container
and measure the state of its contents. Another possibility is
that affordable, flexible organic circuits could be embedded
into a plastic bag at the grocery store to warn of an
impending break or tear. Imagine a container that, through
embedded intelligence, sensors and communications capabilities, is able to let the refrigerator know it's time to dispose
of last week's leftovers or communicate to the microwave
that its contents have thawed.
In relation to pure laboratory applications, organic
electronics can be used to create sensors for detecting
chemical presence, humidity, etc. Organics can offer a
unique light source or display at low cost, and RF tags like
those previously discussed also have their place in a
research facility, among other settings.
The goal of silicon-based electronics has been to
develop integrated circuits with increasingly densely packed
transistors; however, this is not the case with organic
electronics. Initially, one of the benefits of organic electronics will be reduced cost when developing transistors over a
wide area, allowing them to be printed on to large objects to
introduce unique applications.
Ultimately, the flexibility, affordable nature and ability to cover large areas will create an abundance of never
before imagined applications and opportunities for organic
electronics.
Scientists at the Xerox Research Centre of Canada
are already gaining a detailed understanding of the causes
of breakdown in organic molecules and are now creating
long-lived organic semiconductors. Applications for carbon
semiconductors may be in production in as little as four
years.
The possibilities, like life, are limitless.
* Dr. Rafik O. Loutfy is vice-president and
head of the Xerox Research Centre of
Canada. Reprinted with permission of "Laboratory Focus" magazine, May 2002.
Laboratory Focus
4220 Steeles Ave. W.
Unit C15
Woodbridge, ON L4L 3S8
Canada
email: [email protected]
THE SIN BIN
A problem corner intended to stimulate reader
participation. The best valid solution to the problem
will merit a book prize. We will always provide a book
prize for the best student solution. Send your favourite
problems and solutions to our BINkeeper, Chris
O'Donovan, [email protected]
Editor's Note: It is with some sadness that I announce the
imminent retirement of our BINkeeper, John Vanderkooy,
from Phys 13 news. John's problems might have been
difficult to solve at times, but they were nevertheless always
entertaining and will be missed. We are grateful not only for
his diligence in providing us with so many problems over the
years, but also for the numerous articles he has penned,
often at the spur of the moment when I suggested an article
was needed. We wish him the very best in his future, more
leisure-filled years, no doubt exploring audio-related topics.
However, time does march on and the SIN BIN
sceptre has been handed to Chris O'Donovan, a lecturer in
our Department. Chris will in future also assume
administration of our SIN Exam, which is written in many
high schools across Canada. I know Chris will provide
thought-provoking questions that will allow us to continue
with our mental gymnastics. But first, below is one more
problem from me until Chris takes over next issue.

Problem 112

A block starts sliding from rest without friction
down a slope of height h to a level surface. Initially, of
course, the block has potential energy and no kinetic energy.
After reaching the bottom, the block has only kinetic energy
and no potential energy with respect to the level surface.
We know from energy conservation that mgh = mvf²/2,
so that the velocity of the body at the bottom of the hill will
be vf = √(2gh).

Now consider the same event from a reference
frame moving with the block’s final velocity, vf, to the right
relative to the level surface. We now find that in the
beginning the block has potential energy mgh and kinetic
energy mvf²/2, and at the end of the downhill slide both
energies vanish. Explain in your solution where the energy
disappeared to and why energy appears not to be conserved.

Problem 111 from the last issue:

1) A ‘necklace’ of batteries is constructed from 10 identical
1.5 V cells and all are electrically connected in a closed
loop. What voltage is measured between any group of
cells? e.g. as shown.

2) A board is placed on a log cut lengthwise in half as shown.
Tilting the board slightly will start it oscillating. In terms of
length and radius, (i) what is the frequency of stable
oscillation for a thin board, and (ii) what is the thickest
board that remains stable? You may want to make
reasonable approximations.
We received a correct solution from Norman Cowan
and partially correct solutions from Ali Sharafat and Chris
Curran.
The Solutions !
1) There will be zero volts between any group of batteries.
Consider, for example, that the current in the battery loop
will be equal to the total EMF, 15 V, divided by the total
resistance, 10Rint, of the cells:

i = Vtot / Rtot = 15 V / 10Rint

The internal voltage drop, Vint, in each battery will therefore
exactly cancel its EMF:

Vint = iRint = (15 V / 10Rint) Rint = 1.5 V
Let M be the mass of the plank. When the board is deflected
by a small angle θ the restoring torque, τ, is the force
perpendicular to the lever arm, s:

τ = s Mg cosθ = (R sinθ) Mg cosθ

For very small angles cosθ → 1 and sinθ → θ,
so that τ = RMg θ.
But the torque can also be equated with the moment of
inertia of the plank about its centre, ML²/12, multiplied by
its angular acceleration, which for simple harmonic
oscillation at angular displacement θ is ω²θ.
This immediately leads to:

RMg θ = (ML²/12) ω²θ

which simplifies to yield the desired answer, ω² = 12gR/L².
This solution disregards the fact that the center of mass of
the plank is moving (left-right and up-down), and that the
contact point is moving relative to the center of mass. A
detailed analysis of all these approximations does show that
for small-angle oscillations these “complications” can
indeed be neglected.
2ii) The plank becomes unstable if its thickness exceeds the
log’s diameter. If the radius of the log is R and the
thickness of the plank is t, then at equilibrium the height of
the plank’s centre is R + t/2. When the plank undergoes an
infinitesimal angular displacement θ from equilibrium, the
height of the plank’s centre above the log axis becomes

(R + t/2) cosθ + Rθ sinθ

For stable equilibrium, the new height should be
greater than R + t/2. Applying the limit θ → 0 and solving
the inequality provides the result t < 2R.

We received correct entries from Robert Bandurka, Chris
Edwards, Brenda Gerein, Margaret Scora, Ali Sharafat and
V. Srinivasan. The winner of our random draw for the book
prize is Paolo Violino from Bruino, Italy. Congratulations
Paolo! A copy of "Explorations: An Introduction to
Astronomy" by Thomas T. Arny has been mailed to you.

Phys 13 news is published four times a year by the
Physics Department of the University of Waterloo. Our
policy is to publish anything relevant to high school and
first-year university physics, or of interest to high school
physics teachers and their senior students. Letters, ideas,
and articles of general interest with respect to physics are
welcome by the editor. You can reach the editor by email
at: [email protected]. Alternatively you can send all
correspondence to:

Phys 13 news, Physics Department
University of Waterloo
200 University Avenue West
Waterloo, ON N2L 3G1

Editor: Guenter Scholz
Editorial Board: Tony Anderson, Robert Hill, Rohan
Jayasundera, Guenter Scholz, Russell Thompson and
David Yevick
Publisher: Judy McDonnell
Printing: Graphics Solutions, UW
CROSSWORD - ASTRONOMY
by Tony Anderson
(Down Clues):
2: "Let cop see" this instrument
3: "Ration aid" to give light
4: Ringed planet
5: See 42 across
6: “Soiled” planet?
7: Untruthful person
8: Without more this means forthwith
9: "Store aid" gives a minor planet
10: This effective for good value
11: Part of drill
14: Halley found one
20: Scottish affirmative
21: “Pat Len” for this heavenly body
24: This is black and sticky
26: "A leg oil" for this astronomer
29: Mischievous being
30: “I horse” around for a sock merchant
33: Old testament (abbrev)
34: Type of galaxy
36: See 1 across
37: This portends good or bad
38: Female horse
42: See 42 across
43: Canadian National (abbrev)
44: United Parcel Service (abbrev)
46: Historical period
49: Copies (abbrev)
(Across Clues):
1: (With 36 down) "Dull terrains test" the stuff
between stars
9: King beater
11: Bachelor of Arts (abbrev)
12: Keep this on to remain calm
13: Direct current (abbrev)
15: State of matter
16: 10¹² prefix
17: Disapproving sound
18: Off this is displaced
19: Backward “part” for mouse
22: Extra terrestrial (abbrev)
23: This board is right
25: British astronomer
27: Nine-lived animal
28: New York (abbrev)
30: Hot stuff!
31: Brilliant constellation
32: 6 down has only one of these
35: Long playing (abbrev)
36: Evil fate
39: Keep in this to be with the music
40: Never say this to persevere
41: United Nations (abbrev)
42: (With 42 down and 5 down). “Going by the bar”
explains the start of it all
43: “Set curl” for this star collection
45: For this means for a very long time
47: “Teen pun” gives this planet
48: National Research Council (abbrev)
50: Royal Academy (abbrev)
51: "Cool magic, Sol" describes this principle
Once you have solved the puzzle, use the letters
corresponding to the various symbols in the above grid (in
the usual order: left to right, top row first) to form the names
of five famous “astronomical” scientists:
Submit these five names (not the crossword) to Rohan for
a chance to win a book prize and certificate.
A draw for a book prize will be made from all correct
entries received before the end of January 2005. This
contest is open to all readers of Phys 13 news, and
submissions from students are especially welcomed. The
solution and winner’s name will be given in the next issue of
the magazine. Please include your full name, affiliation and
address with your solution.
MAIL:
R. Jayasundera, Dept. of Physics, University of
Waterloo, Waterloo, ON N2L 3G1 Canada
FAX:
(519) 746-8115 (attention of Rohan).
E-MAIL: [email protected]
Subscription Form for Phys 13 news
Name .............................................................................................................................................
Street.............................................................................................................................................................
City................................................................
Province/State..................................................
Country...........................................................
Postal/ZipCode.................................................
Rates: Four issues per year.

                        In Canada    In USA    Other
Annual Subscription     $12 Can      $15 US    $18 US
3 Year Subscription     $30 Can      $35 US    $40 US

7% GST included. GST Number R119260685
A limited number of reprints are available on request. A larger number of reprints (minimum of 25) can be provided at an
additional cost. Please inquire by email.
Make cheque or money order payable to Phys 13 news
Amount Enclosed ..................................
Please send your complete order form and await your next issue. If you really need a receipt or an invoice, add a service
charge of $1.00 to the amount of your subscription and check here. .............
Please Invoice ................
Please send receipt ................
Is this a new subscription or a renewal order? ................................
Return undeliverable Canadian addresses to:
Phys 13 news
University of Waterloo, Department of Physics
200 University Avenue West
Waterloo, ON N2L 3G1 CANADA
Return postage guaranteed