Chapter 3

Randomness and Quantum Mechanics
3.1 Introduction
Randomness (or unpredictability) of the outcomes of a repeating process plays an
essential role in many fields ranging from classical computer simulation to quantum computing and quantum communication. The methods to generate random
numbers rely either on the complexity of algorithms or on physical processes with chaotic properties. In principle, as long as random numbers are generated in a classical way, they can never be truly random even if they pass all statistical tests, since the so-called random bits can be predicted completely given enough information about the initial state and the rule of evolution. For applications such as cryptography, which place high demands on randomness, true random numbers are expected to ensure security during the information
exchange process. Quantum mechanics is often postulated to have an inherent irreducible randomness which can provide a recipe to generate true random numbers,
at least in principle. In this chapter, we discuss aspects of this so-called quantum
randomness. Many physical random number generators [24–27] based on quantum processes (such as the decay of radioactive nuclei, the splitting of single-photon beams, the polarization measurement of single photons, and the light-dark periods of a single trapped ion’s resonance fluorescence signal) have been built. Experimentally,
one always finds short-time correlations in the sequence. In order to eliminate the
correlation, physical quantum random number generators use an additional sampling method to extract random bits from the raw sequence. This is necessary for
the random sequences [24–27] to pass more statistical tests for random numbers.
This chapter is organized as follows. In section one, we give a general discussion of
random variables. Then we discuss how to check the randomness (or more practically, unpredictability) of a sequence of numbers. Finally, we discuss the randomness
in quantum theory. In section two, we describe an event-based deterministic simulation of a single photon beam splitter which produces completely deterministic
binary sequences. We illustrate how the DLM-based processing unit behaves like a
single particle 50:50 beam splitter and how to get binary sequences with different
randomness by randomly discarding some signals from the original deterministic
sequence with different probability and finally, we analyze the randomness of these
sequences. Conclusions are given in the last section.
3.1.1 Random variables and probability distribution
The concept of a random variable has various aspects. Let us take a definition of a
random variable as given by Wikipedia:
Definition: A random variable is an abstraction of the intuitive concept of chance
into the theoretical domains of mathematics, forming the foundations of probability
theory and mathematical statistics. Roughly, a random variable is defined as a
quantity whose values are random and to which a probability distribution is assigned.
More formally, a random variable is a measurable function from a sample space to
the measurable space of possible values of the variable.
From this definition, we can learn four things: 1) the random variable is an abstract concept in probability theory and mathematical statistics; 2) very often, a random variable is defined by a circular argument like “A random variable is defined
as a quantity whose values are random”; 3) the values of a random variable can
be assigned a probability distribution; 4) random variables involve two spaces, a sample space and a measurable space. More precisely, the sample space of a random variable, denoted by Ω = {λi} (i = 1, ..., N, where N is the total number of random events), consists of the set of random events associated with a random experiment and gives all the possible outcomes of that experiment. A random variable Xλ based on this random experiment can take the values {xi} corresponding to {λi}, and {xi} constitutes the measurement space.
Consider two examples to understand the meaning of these two spaces. The first example is the game of flipping a coin; the set of random events can be written as Ω = {λ0 = Heads, λ1 = Tails}. The random variable Xλ can take two values: Xλ0 = 0 (associated with heads) and Xλ1 = 1 (associated with tails). The second example is the game of flipping a coin 5 times, with the number of heads as the random variable. The corresponding set of random events is Ω = {0H, 1H, 2H, 3H, 4H, 5H}, and the random variable can take six values {0, 1, 2, 3, 4, 5} corresponding to the six different random events.
Given a sequence of random numbers generated by measuring a random variable in an experiment, we are interested in two things. From the “whole sequence” point of view, we are interested in the probability of the occurrence of each random event. From the “individual random-number” point of view, we are interested in the correlations between the occurrence of preceding numbers and subsequent numbers. Randomness, an objective abstract quantity, is often used to describe the correlation between consecutive numbers in a sequence. Nevertheless, any judgement of randomness made by people can only be subjective, which means that what appears random to one observer may not appear random to another. Nobody can say whether a sequence of numbers is truly random or not; we can only say whether it is unpredictable or not based on the knowledge we have. For instance, a sequence of pseudo-random bits generated by a computer is obviously deterministic to its designers, but it is unpredictable to all users who do not know the mechanism. The practical question of how to check the unpredictability of a sequence of random bits will be discussed in the next subsection.
Although the individual random numbers should be unpredictable, the behavior
of the whole random sequence can be precisely characterized by some probability
distribution. In this discussion we will only consider the simplest case in which
Ω is finite and countable. We may assign a number pi between 0 and 1 to each
random event λi to represent the probability of the occurrence of this event. We have

    \sum_{i=1}^{N} p_i = 1.
To illustrate the meaning of the probability distribution, we consider the two examples given above. For the first example it is reasonable to set p1 = p2 = 1/2 for an unbiased coin, since the two sides have equal chance to occur. The probabilities of the occurrence of the random events for the second example can be analyzed as follows: tossing a coin once, there are two possible results with equal chance to occur, so in total there are 2^5 possible results for tossing a coin five times. If k heads show up in 5 tosses, there are C_5^k possibilities. Therefore the probability of the occurrence of k heads equals P(X = x_k) = C_5^k / 2^5. For instance, the probability of 0 heads occurring equals 1/32. If we modify this game to “tossing a coin 100 times”, what does the probability distribution look like? In Fig. 3.1, the frequency distribution of a simple simulation of this process is plotted. The scattered points are the simulation data, while the curve is obtained by fitting to a Gaussian with mean 50.
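To make the binomial reasoning concrete, here is a minimal sketch in plain Python (the seed and the numbers of trials are arbitrary choices) comparing the exact probabilities C_5^k / 2^5 with simulated frequencies, and repeating the game with 100 tosses as in Fig. 3.1:

    import math
    import random
    from collections import Counter

    def binomial_pmf(n, k):
        # Exact probability of k heads in n tosses of a fair coin: C(n,k)/2^n.
        return math.comb(n, k) / 2**n

    def simulate(n_tosses, n_trials, rng):
        # Relative frequency of each head count over n_trials repetitions.
        counts = Counter(sum(rng.randint(0, 1) for _ in range(n_tosses))
                         for _ in range(n_trials))
        return {k: c / n_trials for k, c in sorted(counts.items())}

    rng = random.Random(42)
    print([round(binomial_pmf(5, k), 4) for k in range(6)])  # 1/32, 5/32, ...
    print(simulate(5, 100000, rng))                          # close to the above
    freqs = simulate(100, 20000, rng)                        # peaked near 50 heads
    print(max(freqs, key=freqs.get))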
To close this section, it might be necessary to clarify the difference between probability and frequency. Intuitively, the probability is often identified with the frequency of the occurrence of a random event; this is known as the frequency interpretation of probability. If we toss an unbiased coin N times, the number of occurrences of heads divided by the total number of tosses will be close to 1/2 when N is very large. In the same way, the result for the probability in the second example may be examined. On the other hand, in some cases it is impossible to carry out independent experiments and the probability cannot be identified with a quantity resulting from experiments. For instance, when two boxers fight each other, each of them may have a one-half probability to win; however, we cannot define this probability through a frequency.

Figure 3.1: The frequency distribution of a simulated random process of tossing a coin 100 times. The random variable is the number of heads occurring in 100 tosses; the set of random events is {0, 1, ..., 100}. The scattered squares are the simulation data, while the curve is obtained by a Gaussian fit with mean around 50, which means that the occurrence of 50 heads out of 100 tosses has the maximum frequency.
3.1.2 Shannon entropy and Kolmogorov complexity
In this section we will discuss how to check the randomness, or more practically,
the unpredictability of a sequence of numbers. For a sequence of random numbers
with finite length, we can apply a series of statistical tests to the sequence, with
each test aiming at a certain aspect. The more statistical tests [4, 28] a sequence of
numbers can pass, the more random the sequence should be. In this section we will
first give a brief introduction to the Shannon entropy, which characterizes the whole set of possible realizations, and then concentrate on the Kolmogorov complexity [29], which deals with the individual realizations in the sequence.
Table 3.1: Examples of random processes, sample spaces (λi), measurement spaces (xi), probabilities (P), distributions (p) and Shannon entropies (S).

Random process         (λi)              (xi)              P                  p          S
Toss a coin once       {H, T}            {0, 1}            PH = PT = 1/2      Uniform    0.693
Toss a coin 5 times    {0H, ..., 5H}     {0, 1, ..., 5}    Pk = C_5^k / 2^5   Binomial   1.52
Toss a die             {1, 2, ..., 6}    {1, 2, ..., 6}    Pxi = 1/6          Uniform    1.792

The Shannon entropy, a quantity from information theory, is based on the idea that the probability of the occurrence of each random event can be used to quantify the average randomness associated with a certain random variable. The Shannon entropy of a set of random events Ω is defined as

    S(Ω) = -\sum_{i=1}^{N} p_i \ln p_i,    (3.1)
where p_i denotes the probability of the occurrence of random event λi. In the last column of Table 3.1 we list the values of the Shannon entropy for these three examples. From the values listed in the table, and also from the definition of the Shannon entropy, we can see that the Shannon entropy increases as the number of random events in the sample space increases, and that a uniform distribution over all random events gives the largest Shannon entropy if the number of random events is fixed. Specifically, it is of interest to analyze the Shannon entropy of a random binary sequence. For one specific binary sequence, many different sets of random events can be associated with it, such as Ω1 = {0, 1}, Ω2 = {00, 01, 10, 11}, Ω3 = {000, 001, 010, 011, 100, 101, 110, 111} and so on. If all the random events are equally distributed in the sequence, the ideal values of the entropies corresponding to the three different sets are S(Ω1) = ln 2 = 0.693, S(Ω2) = ln 4 = 1.386, and S(Ω3) = ln 8 = 2.079. As the Shannon entropy gives the average randomness of a sequence, it describes a property of the whole sequence and says nothing about the individual random numbers.
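As an illustration, the block entropies S(Ω1), S(Ω2) and S(Ω3) of a binary sequence can be estimated with a few lines of Python; this is a sketch, assuming the sequence is given as a list of 0s and 1s:

    import math
    import random
    from collections import Counter

    def shannon_entropy(sequence, block_size=1):
        # Shannon entropy (in nats) of the distribution of non-overlapping
        # blocks of block_size symbols, Eq. (3.1).
        blocks = Counter(tuple(sequence[i:i + block_size])
                         for i in range(0, len(sequence) - block_size + 1, block_size))
        n = sum(blocks.values())
        return -sum((c / n) * math.log(c / n) for c in blocks.values())

    rng = random.Random(1)
    seq = [rng.randint(0, 1) for _ in range(30000)]
    for b in (1, 2, 3):
        # For a balanced random sequence these approach ln 2, ln 4, ln 8.
        print(b, round(shannon_entropy(seq, b), 3))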
In contrast, the Kolmogorov complexity aims at a description of the individual
realization of a sequence. Loosely speaking, Kolmogorov complexity of a sequence
of random numbers is defined as the length of the shortest program able to generate
it [29]. To determine the complexity of a sequence, one has to find out the rule that
can not only be used to regenerate the numbers in the sequence in precisely the
same order but can also be used to predict new numbers. If there is a very simple
rule to describe a sequence, then the corresponding program will be very short,
which means the sequence has a very low Kolmogorov complexity. Given a sequence
of random numbers, how do we determine the complexity of the sequence? Let us
consider two examples:
(a). 1, 4, 9, 16, ?
(b). 2, 3, 10, 15, 26, 35, ?
Find the rule and try to predict the next number. For sequence (a),
it is easy to see that next number will be 25, and the nth number will be n2 . We
can get the same answer in a different way. To solve such problems, we start from
the differences between consecutive numbers. The differences of this sequence form an arithmetic progression (3, 5, 7, 9, ...). We can also see that the numbers in this sequence obey the following rule:

    1,\quad 1+3,\quad 1+3+5,\quad 1+3+5+7,\quad \ldots,\quad \sum_{i=1}^{n} \bigl(2(i-1)+1\bigr) = n^2.
The length of the program used to generate sequence (a) will depend on the rule one finds and also on the programming language one uses. However, as long as a rule exists, there will be no significant difference in the program’s length. For sequence (b), again there are different ways to analyze it and to predict the next number, which is 50. The simplest one, found by comparison with sequence (a), is n^2 ± 1 (“+” for odd terms, “−” for even terms). Apparently, sequence (a) has a lower complexity than sequence (b).
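The point about program length can be made concrete: both rules compress into a few lines of code, so both sequences have a low Kolmogorov complexity. A sketch in Python:

    def seq_a(n):
        # nth term of sequence (a): 1, 4, 9, 16, ... -> n^2
        return n * n

    def seq_b(n):
        # nth term of sequence (b): 2, 3, 10, 15, ... -> n^2 + 1 (odd n), n^2 - 1 (even n)
        return n * n + (1 if n % 2 else -1)

    print([seq_a(n) for n in range(1, 6)])  # [1, 4, 9, 16, 25]
    print([seq_b(n) for n in range(1, 8)])  # [2, 3, 10, 15, 26, 35, 50]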
What about sequences for which one cannot find the rule? There are two possible reasons to consider. One is the incomplete knowledge of human beings: in this case, the only way to regenerate the numbers is to write them down one by one, and the Kolmogorov complexity is simply equal to the length of the sequence, which implies that the sequence is considered to be truly random. The other reason is that the sequence is too complicated for us to find the rule. This is the case for good pseudo-random numbers generated by a computer algorithm; for most practical purposes one can consider such a sequence as unpredictable or algorithmically random. A very
popular computer-based pseudo-random number generator, the linear congruential
generator, is defined by the recurrence relation [4]
    X_{n+1} = (a X_n + c) \bmod m,    (3.2)
where Xn+1 represents the “random” variable, a is the multiplier, c is the increment,
X0 is the seed, and m is the modulus. This formula can be used to produce a
very long sequence of numbers (the period can be at most m) that appear to be
random and may pass many statistical tests for randomness. Apparently, sequences
of pseudo random numbers have very low Kolmogorov complexities, however they
appear random enough for many applications.
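A minimal sketch of Eq. (3.2) in Python follows; the multiplier and increment shown here are the well-known “Numerical Recipes” constants and are only one of many possible choices:

    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        # Linear congruential generator: X_{n+1} = (a*X_n + c) mod m.
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(seed=12345)
    values = [next(gen) for _ in range(8)]
    bits = [v >> 31 for v in values]  # reduce each 32-bit value to its top bit
    print(values)
    print(bits)

The generator is completely determined by the seed: rerunning it with the same seed reproduces the sequence exactly, which is precisely why its Kolmogorov complexity is low.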
3.1.3 Randomness in quantum mechanics
To discuss the randomness in quantum mechanics, I would first like to give a brief
summary of the mathematical formalism of quantum theory, then give a concise introduction of two popular interpretations (statistical interpretation and Copenhagen
interpretation) of quantum theory, and finally address the question of how randomness
enters quantum theory.
Mathematical formalism
Here we follow [2] to give a summary of the mathematical formalism of quantum mechanics.
Axiom 1 A state is represented by a state operator ρ (also called density matrix) on
a Hilbert space (a complex vector space with inner product). It must be self-adjoint,
nonnegative, and of unit trace. Being a self-adjoint operator, ρ has a spectral
representation

    ρ = \sum_{i=1}^{n} ρ_i |ϕ_i⟩⟨ϕ_i|,    (3.3)

in terms of its eigenvalues ρ_i and orthonormal eigenvectors |ϕ_i⟩. Specifically, a pure state, defined by the condition ρ² = ρ, has only one nonzero eigenvalue and can be expressed by

    ρ = |ϕ_i⟩⟨ϕ_i|.    (3.4)
A general state is a combination of all possible eigenvectors and is commonly called a mixed state.
Axiom 2 An observable is represented by a self-adjoint operator R on a Hilbert space and has a spectral representation

    R = \sum_n r_n P_n,    (3.5)

where the numbers r_n are the eigenvalues of R. The P_n are orthogonal projection operators related to the orthonormal eigenvectors of R by

    P_n = \sum_a |a, r_n⟩⟨a, r_n|,    (3.6)

where the parameter a labels the degenerate eigenvectors belonging to the same eigenvalue of R. If all the eigenvectors are nondegenerate, then we have

    R = \sum_n r_n |r_n⟩⟨r_n|.
Axiom 3 The average value of an observable R in the state ρ is given by

    ⟨R⟩ = Tr(ρR).    (3.7)
For a pure state, Eq. (3.7) reduces to

    ⟨R⟩ = ⟨ϕ_i|R|ϕ_i⟩.    (3.8)
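Axioms 1–3 are easy to verify numerically. The following sketch (using numpy; the choice of observable and states is purely illustrative) computes Tr(ρR) for a pure and a mixed qubit state:

    import numpy as np

    # Pauli-z as the observable R; eigenvalues +1 and -1.
    R = np.array([[1, 0], [0, -1]], dtype=complex)

    # Pure state |phi> = (|0> + |1>)/sqrt(2) and its density matrix.
    phi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    rho_pure = np.outer(phi, phi.conj())

    print(np.trace(rho_pure @ R).real)   # Tr(rho R), Eq. (3.7): 0.0
    print((phi.conj() @ R @ phi).real)   # <phi|R|phi>, Eq. (3.8): 0.0

    # A mixed state: an equal mixture of the two eigenstates of R.
    rho_mixed = 0.5 * np.eye(2, dtype=complex)
    print(np.trace(rho_mixed @ R).real)  # 0.0, but rho_mixed @ rho_mixed != rho_mixed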
Axiom 4 The evolution of a quantum state is described by a unitary operator U .
In general, it can be written in the form
    ρ(t) = U ρ(t_0) U^{-1}.    (3.9)
Axioms 1-4 constitute the mathematical formalism of quantum theory which enables
us to define models and compute expectation values. To make contact with the
experimental observations of events, one introduces Axiom 5.
Axiom 5 The only values which an observable may take are its eigenvalues. In the
case of a pure state represented by the normalized vector |ϕ⟩, the probability to observe an eigenvalue r_n of R is

    P(r_n) = |⟨ϕ|r_n⟩|².    (3.10)
This is a generalization of Born’s famous postulate that the square of the wave function represents a probability density, i.e., of the probability interpretation of the wave function.
Interpretations of quantum theory
The different interpretations of quantum theory are most sharply distinguished by their interpretations of the concept of a quantum state; these differences are at the root of most of the controversy. The
statistical interpretation, viewed as a minimalist interpretation in that it claims to
make the fewest assumptions associated with the standard mathematical formalism,
assumes that a pure state (and hence also a general state) provides a description
of certain statistical properties of an ensemble of similarly prepared systems. The
most prominent advocate of the statistical interpretation, was A. Einstein whose
view is concisely expressed as follows [30]: “The attempt to conceive the quantumtheoretical description as the complete description of the individual systems leads to
unnatural theoretical interpretations, which become immediately unnecessary if one
accepts the interpretation that the description refers to ensembles of systems and not
to individual systems.” For example, an individual system may be a single particle.
Then the ensemble will be the conceptually infinite set of all such single particles
prepared similarly in some of their properties. In the statistical interpretation, the state function is not taken to be physically real, but is taken to be an abstract probability distribution over all the eigenvalues of the observable in the conceptually infinite ensemble. Consider the example of a die, the state of which can be expressed by the wave function

    |ϕ⟩ = (|1⟩ + |2⟩ + |3⟩ + |4⟩ + |5⟩ + |6⟩)/√6.    (3.11)
The statistical interpretation of this state predicts that if one throws the die many times (each time, before throwing, the die is prepared in the same state as described by Eq. (3.11)), one expects the 6 possible results to occur with the same probability (1/6). The uniform distribution of the probability of each possible result can easily be checked by doing the experiment, that is, by repeatedly throwing the same die. If the experiment is done only once, one can say nothing about the quantum state.
The Copenhagen interpretation assumes that a pure state provides a complete description of an individual system. For the same example of the die described by Eq. (3.11), the Copenhagen interpretation assumes that this wave function dictates the probability for the die to show any one of the 6 states following a throw. Each throw causes a change in the state of the die, known as wave-function collapse or the unpredictable, discontinuous reduction of the state vector. The reduction is “discontinuous” because the measurement procedure (that is, the wave-function collapse) involves an interaction between the system and the measurement apparatus or the person who carries out the measurement, and it is too difficult, or perhaps impossible, to define a new isolated system comprising the whole. As a result, the measurement procedure cannot be described by a unitary transformation. Therefore, in order to describe this procedure, a new postulate, known as the projection postulate, has to be introduced into the mathematical formalism of quantum theory.
Projection Postulate: Quantum measurements are described by measurement operators acting on the state vector of the system being measured. Consider a pure quantum state described by the normalized vector |ϕ⟩ immediately before the measurement, and let the observable to be measured on |ϕ⟩ be R, with eigenvalues r_n and corresponding eigenvectors |ϕ_n⟩. A collection of projection operators {R_n} related to the eigenvectors of R can be expressed as

    R_n = |ϕ_n⟩⟨ϕ_n|.    (3.12)

Then the probability that result r_n occurs is given by

    p(r_n) = ⟨ϕ|R_n†R_n|ϕ⟩ = |⟨ϕ|r_n⟩|²,    (3.13)

and the normalized state of the system after the measurement is

    |ϕ′⟩ = R_n|ϕ⟩ / |⟨ϕ|r_n⟩|.    (3.14)
Based on the Copenhagen interpretation, an individual measurement (throwing a die in
this case) does mean something in that the measurement causes a dramatic change
of the state vector used to describe the die. However, the Copenhagen interpretation
also admits that the reduction of the state vector is unpredictable, which implies
the randomness of the outcome of each individual measurement. Therefore there
are two things in common regardless of the interpretation of the mathematical formalism of quantum theory. One is that the outcome of a specific measurement is unpredictable by quantum theory; the other is that the probabilities (calculated according to Axiom 5) of all possible outcomes can only be obtained by repeating the same measurement procedure sufficiently many times. In conclusion, based on the current understanding of quantum theory, there is an intrinsic randomness in an individual measurement.
Accepting the postulated, intrinsic randomness of quantum phenomena, quantum
phenomena should be good sources for true random numbers. Thus it may be
of interest to consider the Kolmogorov complexity of quantum theory. Generally
speaking, the aim of a theory is to sum up, as concisely as possible, the observed phenomena. A good theory should describe not only the results already obtained but should also predict new ones. Evidently, we prefer a simple theory over a
complicated one. In line with the earlier discussion, a good theory therefore has a
low Kolmogorov complexity [29]. However, if we accept the Kolmogorov complexity
as a qualitative measure for the theory to be a good one, then, in the case of quantum
theory we face a kind of paradox.
On the one hand, according to this criterion, quantum theory is a good theory because it describes many experimental results concisely and very well. Thus, this
observation would lead us to attribute a low Kolmogorov complexity to quantum
theory. But, on the other hand, if we accept the postulated randomness, quantum
theory has maximum Kolmogorov complexity. Clearly, we cannot assign both a low and a high Kolmogorov complexity to the same theory; hence we face a kind of
paradox. In fact, this paradox is just another manifestation of the quantum measurement problem of quantum theory [1], and disappears if we accept that quantum
theory has nothing to say about individual events [2].
3.2 QRNG with single photon beam splitting

3.2.1 Description of the real QRNG
In this section we will give a brief introduction of a quantum random number generator (QRNG) [24], based on the splitting of a beam of photons with an optical 50:50
beam splitter or by measuring the polarization of single photons with a polarizing
beam splitter.
A schematic view of the principle of the quantum random number generator is shown in Fig. 3.2. The experimental setup consists of a single-photon source, a non-polarizing 50:50 beam splitter (Fig. 3.2(a)) or a polarizing beam splitter (Fig. 3.2(b)),
and two photon detectors at the output side. For the case of the 50:50 beam splitter
(BS), Fig. 3.2(a), each individual photon coming from the light source and traveling
through the beam splitter has equal probability to be found in either output of
the beam splitter.

Figure 3.2: Left: The experimental realization of two quantum random number generators. The experimental setup consists of a single photon source, a non-polarizing 50:50 beam splitter (a) or a polarizing beam splitter (b), and two photon detectors at the output side. Right (c): Processing of the signals detected by D1 and D2 by a toggle switch S.

For the case in which a polarizing beam splitter (PBS) is used,
Fig. 3.2(b), the photons are polarized at 45° before they enter the polarizing beam
splitter and then they have equal probability to be found in the H (horizontal)
polarization or V (vertical) polarization output of the polarizer. Anyhow, quantum
theory postulates for both cases that the individual “decisions” are truly random
and independent of each other. According to [24], this feature is implemented by
detecting the photons in the two output beams with single photon detectors and
combining the detection pulse in a toggle switch (S), which has two states, 0 and 1.
If detector D1 fires, the switch is flipped to state 0 and left in this state until a detection event in detector D2 occurs, which sets the switch to state 1, where it remains until an event in detector D1 happens and S is set to state 0 again, see Fig. 3.2(c). In the case that several detections occur in a row in the same detector, only the first detection event toggles the switch S into the corresponding state; the following detection events leave the switch unaltered. Consequently, the toggling of the switch between
its two states constitutes a binary signal with the randomness due to the transition
time between the two states.
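The toggle logic is easy to express in code. A minimal sketch (the detector labels and the initial state of S are illustrative assumptions, not taken from [24]):

    import random

    def toggle_switch(detector_events, initial_state=0):
        # D1 sets S to 0, D2 sets S to 1; repeated clicks of the same
        # detector leave S unaltered, so only transitions carry information.
        states = []
        state = initial_state
        for d in detector_events:
            state = 0 if d == 'D1' else 1
            states.append(state)
        return states

    rng = random.Random(7)
    clicks = [rng.choice(['D1', 'D2']) for _ in range(20)]
    print(clicks)
    print(toggle_switch(clicks))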
The autocorrelation time, related to the toggle rate R, was evaluated from traces of the random signal with different average toggle rates [24]. Then, in order to obtain independent and evenly distributed random numbers, a periodic sampling process was implemented with a period well above the autocorrelation time. It was observed that for a signal autocorrelation time of roughly 20 ns, a sampling rate of 1 MHz suffices for obtaining “good” random numbers.
3.2.2 Theoretical analysis
In this section we give a brief account of the quantum mechanical description of the
single photon beam splitting experiment. In the quantum mechanical description,
the source is assumed to emit single photons of which the state is described by
    |Ψin⟩ = (|R⟩ + |T⟩)/√2,    (3.15)

for Fig. 3.2(a) and

    |Ψin⟩ = (|H⟩ + |V⟩)/√2,    (3.16)
for Fig. 3.2(b), respectively. Here R and T denote reflection and transmission, while H and V denote horizontal and vertical polarization. The random processes used to generate random numbers refer to the measurement of either the position of the single photons (Fig. 3.2(a)) or the polarization of the single photons (Fig. 3.2(b)). According to the fundamental postulates of quantum mechanics, quantum theory can only give the expectation value of a dynamical variable in a certain quantum state. The outcome of an individual measurement of a dynamical variable is unpredictable, which implies that the collection of outcomes from a large number of such measurements can be regarded as a sequence of truly random numbers. The probabilities of the occurrence of the two alternative possibilities are given by Eqs. (3.15) and (3.16), but the outcome of an individual measurement is postulated to be random.
3.3 Statistical analysis of simulated binary sequences
The event-based simulation of the normal single-photon beam splitter (BS) has
been described in Chapter 2 and the event-based simulation of the polarizing beam
splitter (PBS) will be discussed in Chapter 4. In this section we only present the
statistical analysis of the binary sequences simulated according to the case shown in
Fig. 3.2(a).
3.3.1 Tests for randomness
In [24], some intuitive statistical tests were applied to analyze the randomness of the random data samples obtained from their QRNG. In this section we will use similar tests to check the randomness of the random bits generated from the event-based simulation of the single photon beam splitter. First of all, we give a brief introduction of these tests (a small sketch implementing them follows the list) and then show the test results.
Autocorrelation test: This test says something about the relation between bits at different times. For a truly random sequence, the autocorrelation is zero. The autocorrelation coefficient is defined by

    C(k) = \frac{\sum_{i=1}^{N-k} (x_i - \bar{x})(x_{i+k} - \bar{x})}{\sum_{i=1}^{N} (x_i - \bar{x})^2},    (3.17)

where {x_i}_{i=1}^{N} is a binary sequence of length N, \bar{x} is the mean value, and k denotes the time difference between two signals.
Mean test: This is the simplest test of randomness of a sequence of random bits. The mean value of all n-bit blocks should be (2^n − 1)/2. Specifically, the mean of 1-bit blocks (used to test the equidistribution of 0 and 1) is 0.5, and the mean of all 8-bit blocks should be 127.5.
Shannon entropy test: As explained in Section 3.1.2, this test is used to show the
average distribution of n-bit blocks of a data sample and the value of the Shannon
entropy is given by Eq. (3.1).
Monte-Carlo estimation of π: This is a nice way to demonstrate the use of a sequence of random numbers. The idea is to map a large amount of numbers onto points in a square of unit area. The fraction of points lying inside the circle inscribed in the square then estimates the circle’s area π/4, so four times this fraction gives an estimate of π.
Equal distribution of zeros and ones: To check the equidistribution of 0s and 1s, one can calculate the probability of 1 followed by 1 and of 1 followed by 0, denoted by P(1|1) and P(1|0), and the probability of 0 followed by 1 and of 0 followed by 0, denoted by P(0|1) and P(0|0).
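A minimal sketch of these tests, assuming the bits are given as a Python list of 0s and 1s (the block sizes and the coordinate resolution are illustrative choices):

    import math
    import random
    from collections import Counter

    def autocorrelation(x, k):
        # Autocorrelation coefficient C(k) of Eq. (3.17).
        m = sum(x) / len(x)
        num = sum((x[i] - m) * (x[i + k] - m) for i in range(len(x) - k))
        return num / sum((xi - m) ** 2 for xi in x)

    def byte_mean(bits):
        # Mean of all non-overlapping 8-bit blocks; ideally 127.5.
        vals = [int(''.join(map(str, bits[i:i + 8])), 2)
                for i in range(0, len(bits) - 7, 8)]
        return sum(vals) / len(vals)

    def entropy(bits, block=8):
        # Shannon entropy (nats) of the block distribution; ideally block*ln 2.
        blocks = Counter(tuple(bits[i:i + block])
                         for i in range(0, len(bits) - block + 1, block))
        n = sum(blocks.values())
        return -sum(c / n * math.log(c / n) for c in blocks.values())

    def pi_estimate(bits, res=16):
        # Pack bits into (x, y) points in the unit square; the fraction with
        # x^2 + y^2 <= 1 estimates pi/4.
        coords = [int(''.join(map(str, bits[i:i + res])), 2) / 2**res
                  for i in range(0, len(bits) - res + 1, res)]
        pts = list(zip(coords[::2], coords[1::2]))
        return 4.0 * sum(x * x + y * y <= 1.0 for x, y in pts) / len(pts)

    rng = random.Random(2009)
    bits = [rng.randint(0, 1) for _ in range(2**18)]
    print(autocorrelation(bits, 1), byte_mean(bits), entropy(bits), pi_estimate(bits))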
3.3.2 Statistical test of the simulation samples
Using the DLM network described in Chapter 2 and blocking input channel 1, we can simulate a device which generates binary sequences with different randomness: two extreme cases and some intermediate cases. After a brief description of the simulation procedure, some results of the statistical tests of the simulated binary sequences will be given.
We consider the following cases:
1. Pseudo-random. We modify the final step of the output stage LMout in the following way: it sends a message (y′_{0,n+1}, y′_{1,n+1}) = (z_{0,n+1}, z_{1,n+1}) through output channel 0 if z_{0,n+1}² + z_{1,n+1}² > r, where 0 < r < 1 is a uniform pseudo-random number. Otherwise LMout sends a message (y′_{2,n+1}, y′_{3,n+1}) = (z_{2,n+1}, z_{3,n+1}) through output channel 1. This simulation uses pseudo-random numbers generated by the computer. The autocorrelation for all time differences is expected to be close to zero, which means no correlation.
2. Completely deterministic. If the phase information carried by the incident photons is ψ0 = 0, then the output sequence is simply a repeated “01”. The autocorrelation oscillates between −1 and 1. The sequence is completely deterministic.
Figure 3.3: The autocorrelation coefficients of Eq. (3.17) as a function of the time difference k for three simulated binary samples with removal probabilities 0.1, 0.2 and 0.5 (triangles, bullets and stars, respectively). The original deterministic sequence is generated by a 50:50 single photon beam splitter which works in an event-by-event deterministic way. In the simulation, the total number of photons is N = 10^7 and α = 0.9. The autocorrelation coefficients are fitted by the exponential decay of Eq. (3.18); the continuous, dash-dotted and dashed lines represent the fits for the three cases.
3. Intermediate randomness. This numerical experiment is carried out as follows: let the input photons carry the message ψ0 = 0 and generate a deterministic sequence of 0s and 1s (case 2), then randomly remove signals with probability p to obtain sequences with different degrees of randomness.
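A sketch of this removal step, including a check against the theoretical pair probabilities derived later in this section (the sequence length and seed are arbitrary choices):

    import random
    from collections import Counter

    def thin_sequence(bits, p, rng):
        # Case 3: randomly remove each bit of the sequence with probability p.
        return [b for b in bits if rng.random() >= p]

    def pair_probs(bits):
        # Probability P(a|b) that symbol a is immediately followed by symbol b,
        # estimated over all consecutive pairs of the sequence.
        pairs = Counter(zip(bits, bits[1:]))
        n = sum(pairs.values())
        return {f"P({a}|{b})": pairs.get((a, b), 0) / n
                for a in (0, 1) for b in (0, 1)}

    rng = random.Random(3)
    det = [i % 2 for i in range(10**6)]  # the deterministic 0101... sequence (case 2)
    for p in (0.1, 0.5, 0.99):
        # Theory: P(1|1) = P(0|0) = p/(2(1+p)), P(1|0) = P(0|1) = 1/(2(1+p)).
        print(p, {k: round(v, 3) for k, v in pair_probs(thin_sequence(det, p, rng)).items()})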
In Fig. 3.3 we plot the autocorrelation coefficients of Eq. (3.17) as a function of the time difference k for three different reduced sequences with p = 0.1, 0.2, 0.5. We fit the autocorrelation coefficients by an exponential decay

    C′(k) = a e^{-k/b},    (3.18)

where a is a normalization constant and b represents the autocorrelation time [24]. The fitting curves are also shown in Fig. 3.3. The values of the fitting parameters a and b are listed in Table 3.2.
Table 3.2: Results of the correlation test. We list the fitting parameters a and b corresponding to 10%, 20% and 50% of the signals removed from the original deterministic sequence. The values of a show no big differences for the different shortened sequences, all being around 1, as can easily be derived from Eq. (3.18) for the extreme case k = 0, while the values of b decrease rapidly when the removal probability increases from 0.1 to 0.5. The parameter b corresponds to the autocorrelation time [24].

       p = 0.1          p = 0.2          p = 0.5
a      1.014 ± 0.004    1.001 ± 0.006    0.998 ± 0.003
b      4.865 ± 0.041    2.401 ± 0.028    0.898 ± 0.006

From Table 3.2 we see that the value of a does not depend significantly on the reduced sequence; it fluctuates around 1, as can easily be derived from Eq. (3.18) for the extreme case k = 0. However, the value of the autocorrelation time b decreases rapidly as the removal probability increases from 0.1 to 0.5.
This plot looks very similar to Fig. 4 in [24], in which the exponential decay of
the autocorrelation curve was explained in terms of a reduced detection efficiency.
In [24], to obtain independent and evenly distributed random numbers, a periodic
sampling was applied to the raw signal with the sampling period larger than the
autocorrelation time of the random signal.
To test the equidistribution, the four probabilities P(1|1), P(1|0), P(0|1) and P(0|0) were calculated. For the special binary sequences generated according to case 3, the theoretical predictions of these four probabilities are

    P(1|1) = P(0|0) = \frac{1}{2}\,\frac{p}{1+p},  \qquad  P(1|0) = P(0|1) = \frac{1}{2}\,\frac{1}{1+p},

where p denotes the removal probability. (These expressions follow by noting that, in the thinned “0101...” sequence, a surviving symbol is followed by an identical symbol only if the next surviving symbol lies at an even distance d, which happens with probability \sum_{d even} p^{d-1}(1-p) = p/(1+p).) The theoretical predictions of these four
probabilities for different removal probabilities p are listed in Table 3.3, while the values obtained from the simulated sequences are given in Table 3.4. It is clear that the simulation results are in good agreement with the theoretical predictions. When 10% of the signals are removed, the four probabilities differ considerably. However, in the last column, with 99% of the signals removed, the four values show very little difference, all being around 0.25, which indicates an equal distribution of the four cases.
Finally, we list the results of a few other statistical tests for the different binary sequences in Table 3.5. The first two rows give two mean values, the bit-mean and the byte-mean; they show no big differences, except for the byte-mean of case 2, the completely deterministic sequence, owing to the special structure of the sequence “10101010...”. Apparently, the mean values cannot differentiate the randomness very well. The Shannon entropy and the estimate of π are more sensitive to the changes in randomness of the different binary sequences.
Table 3.3: Theoretical prediction of the equidistribution for the simulated binary samples. We list the probabilities P(1|1), P(1|0), P(0|1) and P(0|0) for the six data samples (0, 10%, 20%, 50%, 80% and 99% of the signals removed from the original sequence). For low removal probability p the differences among these four probabilities are large, while for p = 0.99 the differences are rather small, which means a more or less equal distribution of the four cases: 1 followed by 1, 1 followed by 0, 0 followed by 1, and 0 followed by 0.

p                0       0.1     0.2     0.5     0.8     0.99
Ptheory(1|1)     0.000   0.045   0.083   0.167   0.222   0.249
Ptheory(1|0)     0.500   0.455   0.417   0.333   0.278   0.251
Ptheory(0|1)     0.500   0.455   0.417   0.333   0.278   0.251
Ptheory(0|0)     0.000   0.045   0.083   0.167   0.222   0.249
Table 3.4: Simulation results for the probabilities P(1|1), P(1|0), P(0|1) and P(0|0) for the six data samples (0, 10%, 20%, 50%, 80% and 99% of the signals removed from the original sequence).

p         0       0.1     0.2     0.5     0.8     0.99
P(1|1)    0.000   0.044   0.083   0.165   0.223   0.252
P(1|0)    0.500   0.455   0.418   0.334   0.280   0.251
P(0|1)    0.500   0.455   0.418   0.334   0.280   0.251
P(0|0)    0.000   0.045   0.083   0.166   0.216   0.247
Table 3.5: The results of the statistical tests (bit-mean value, byte-mean value, Shannon entropy and estimated π-value) for the different binary sequences. The values of bit-mean and byte-mean show no big difference across the sequences, while the Shannon entropy and the estimated π-value approach their expected values more and more closely as the removal probability increases.

                                 ----------------- Case 3 -----------------
             Case 1    Case 2    p=0.1    p=0.2    p=0.5    p=0.8    p=0.99
Bit-mean     0.498     0.500     0.500    0.500    0.500    0.502    0.503
Byte-mean    131.36    85.00     127.91   127.36   127.19   128.07   127.82
Entropy      7.916     2.137     4.068    5.557    7.431    7.924    7.980
π-value      3.080     4.000     3.829    3.694    3.419    3.253    3.131

3.4 Conclusion

In this chapter, we demonstrated that the single photon beam splitter can be simulated in an event-by-event deterministic way and that, after some extra mathematical
processing it can produce output binary sequences with different randomness, from
deterministic to non-deterministic. Some statistical tests have been applied to test
the randomness of the reprocessed binary sequences. We demonstrated that with
our event-by-event processing unit, the single photon beam splitting process can
be simulated very well. The randomness of the final binary sequences we obtained
depends both on the pseudo-random number generator and on the resampling.
In the experimental realization of quantum random number generators [24], time is the source of the randomness. We have implemented similar schemes using an event-based beam splitter (results not shown) but found that this is unnecessary for producing good-quality random sequences. This is in sharp contrast to the case of the Einstein-Podolsky-Rosen-Bohm experiments, in which time information is essential to determine the coincidence of a pair of photons, as will be discussed in Chapter 6.