
Double differential cross section for Drell-Yan production
of high-mass e+e− pairs in pp collisions at √s = 8 TeV
with the ATLAS experiment
by
Markus Zinser
Master's thesis in Physics
submitted to the Department of Physics, Mathematics and Computer Science (FB 08)
of the Johannes Gutenberg-Universität Mainz
on 7 August 2013
1st referee: Prof. Dr. Stefan Tapprogge
2nd referee: Prof. Dr. Achim Denig
I hereby declare that I wrote this thesis independently, that I used no sources or
aids other than those indicated, and that I marked all quotations as such.
Mainz, 7 August 2013
Markus Zinser
ETAP
Institut für Physik
Staudingerweg 7
Johannes Gutenberg-Universität
D-55099 Mainz
[email protected]
Contents
1. Introduction

2. Theoretical foundations
   2.1. The Standard Model of particle physics
        2.1.1. Overview of the fundamental particles and interactions
        2.1.2. Mathematical structure of the Standard Model
        2.1.3. The electroweak interaction
        2.1.4. The strong interaction
   2.2. Phenomenology of proton-proton collisions
        2.2.1. Structure of protons
        2.2.2. Determination of parton distribution functions
   2.3. Drell-Yan process
        2.3.1. Recent results

3. Theoretical predictions
   3.1. Physics simulation
   3.2. Theoretical tools
        3.2.1. MCFM
        3.2.2. FEWZ
        3.2.3. APPLgrid
   3.3. Cross section predictions
   3.4. Comparison between different parton distribution functions

4. The ATLAS experiment at the Large Hadron Collider
   4.1. Large Hadron Collider
   4.2. Overview of ATLAS
   4.3. The inner detector
        4.3.1. Pixel detector
        4.3.2. Semiconductor tracker
        4.3.3. Transition radiation tracker
   4.4. The calorimeter system
        4.4.1. Electromagnetic calorimeter
        4.4.2. Hadronic calorimeter
   4.5. The trigger system
   4.6. Data acquisition and processing
   4.7. Luminosity determination
   4.8. Detector simulation

5. Electrons in ATLAS
   5.1. Reconstruction
        5.1.1. Track reconstruction
        5.1.2. Electron reconstruction
   5.2. Identification
        5.2.1. Identification level “loose”
        5.2.2. Identification level “medium”
        5.2.3. Identification level “tight”
        5.2.4. Isolation

6. Monte Carlo simulation
   6.1. Simulated processes
        6.1.1. Drell-Yan process
        6.1.2. Top processes
        6.1.3. Diboson processes
        6.1.4. W process
   6.2. Correction of simulation
        6.2.1. Pile-up
        6.2.2. Energy smearing
        6.2.3. Efficiency corrections

7. Data and selection criteria
   7.1. Data
   7.2. Event selection
   7.3. Electron selection
   7.4. Energy correction
   7.5. Comparison with simulation

8. Background determination
   8.1. Simulation of background processes
        8.1.1. tt̄+tW background
        8.1.2. Diboson background
        8.1.3. Drell-Yan background
   8.2. Measurement of background processes
        8.2.1. Matrix method
        8.2.2. Measurement of the fake rate
        8.2.3. Measurement of the real electron efficiency
        8.2.4. Selection of the background
        8.2.5. Kinematic properties of the fake background
        8.2.6. Systematic uncertainties
        8.2.7. Summary

9. Comparison of signal and background with data
   9.1. Single electron properties
   9.2. Electron pair properties

10. Cross section measurement
    10.1. Resolution and binning
    10.2. Unfolding
          10.2.1. Differential cross section
          10.2.2. Efficiency and acceptance
          10.2.3. Correction factor C_DY
    10.3. Systematic uncertainties
          10.3.1. Systematic uncertainties on C_DY
          10.3.2. Systematic background uncertainties
          10.3.3. Discussion of systematic uncertainties

11. Results and interpretation of the Measurement
    11.1. Single differential cross section
    11.2. Double differential cross section
    11.3. HERAFitter
    11.4. Comparison with existing parton distribution functions
    11.5. Impact of the measurement on parton distribution functions

12. Summary and Outlook

A. Appendix

B. Bibliography

C. Danksagung
Abstract
Title of the thesis: Double differential cross section for Drell-Yan production of
e+e− pairs at high invariant masses in pp collisions at √s = 8 TeV with the ATLAS
experiment

A precise prediction of the processes at the Large Hadron Collider at CERN, where
protons collide at previously unreached center of mass energies, is essential in order
to perform precision tests of the Standard Model and to search for new physics
phenomena. The knowledge of the parton distribution functions (PDFs) of the
proton plays a key role in the precise prediction of these processes.

In this thesis the first measurement of the double differential cross section of the
process pp → Z/γ* + X → e+e− + X, at a proton-proton center of mass energy of
√s = 8 TeV, is presented as a function of the invariant mass and rapidity of the
e+e− pair. The measurement was performed in the invariant mass range from
116 GeV to 1500 GeV. The analyzed data were recorded by the ATLAS experiment
in 2012 and correspond to an integrated luminosity of 20.3 fb−1. The rapidity- and
mass-dependent cross section is expected to be sensitive to the PDFs at very high
values of the Bjorken-x scaling variable. In particular, sensitivity to the PDFs of
the antiquarks in the proton is expected, since these are only poorly constrained at
high values of x.

The Standard Model prediction for the expected number of e+e− pairs was
estimated using Monte Carlo simulations and data-driven methods. A main part of
this thesis deals with the further development and understanding of methods to
determine from data the background that arises when jets are misidentified as
e+/e− candidates. Several methods to determine this background were carried out
and yielded results in good agreement. The cross section was measured
one-dimensionally as a function of the invariant mass and two-dimensionally as a
function of rapidity and invariant mass, and the systematic uncertainties were
determined. The dominant uncertainties were the energy scale of the
electron/positron candidates, the measured reconstruction and identification
efficiencies of the electrons, and the uncertainty on the background determined
from data. The measured two-dimensional cross section was compared with theory
predictions based on different calculations and PDFs. Small deviations between
data and theory were observed, in particular in the mass region between 150 GeV
and 300 GeV. In addition, it was shown that, despite the small deviations between
data and theory, the measured cross section can be used to reduce the uncertainty
of the antiquark distribution at high x.
1. Introduction
The search for an understanding of the structure of matter has led, over the past
century, to the development of the Standard Model of elementary particle physics. It
describes the structure of matter in terms of fundamental building blocks and explains
the elementary processes of three of the four fundamental forces. The Standard Model
is a very powerful instrument and its predictions have been verified to the highest precision.
Even though the Standard Model is very successful, there are observations, e.g., dark
matter, which cannot be explained within the existing theories. Thus extensions of
the Standard Model are needed. Many existing theories predict the appearance of
new physics phenomena at energy scales not yet probed. The Large Hadron Collider (LHC), a proton-proton accelerator at CERN in Geneva, is a powerful machine
which makes it possible to search for new physics phenomena and to test the predictions of the
Standard Model at the highest energy scales reached so far. For these tests and measurements, precise predictions of the processes at the LHC are needed. To obtain
a high level of accuracy for these predictions, a very good understanding of the
structure of the proton is essential. In this context the knowledge of the parton
distribution functions of the proton plays a key role.
In this thesis the first measurement of the process pp → Z/γ* + X → e+e− + X with
the ATLAS experiment at a center of mass energy of the proton-proton collisions
of √s = 8 TeV is presented. The aim is the measurement of a double differential
cross section at high invariant masses (m_{e+e−} > 116 GeV) of the electron-positron
pair as a function of rapidity and invariant mass. Such a measurement can help to
improve the parton distribution functions of the proton at high momentum fractions
x. In particular, sensitivity to the PDFs of the antiquarks in the proton is expected,
since these are not well constrained at high values of x.
This thesis is structured as follows. Chapters 2 and 3 address the theoretical foundations and predictions needed for this measurement. Chapters 4 and 5 describe
the ATLAS experiment and how electrons and positrons are identified using the
detector. In chapter 6 the simulations used for this analysis are discussed, and in
chapter 7 the selection of electron-positron pairs from the data is presented. The
determination of the background processes is described in chapter 8; together with
the expectation for the signal process, these backgrounds are compared to the data
in chapter 9. Chapter 10 addresses the measurement of the double differential cross
section and the determination of its systematic uncertainties. The result is compared
to existing theory predictions in chapter 11. There, parton distribution functions
are also extracted using the results of the measurement, and the impact of the
measurement on the uncertainties of these distributions is discussed.
2. Theoretical foundations
In the first part of this chapter, a brief introduction into the Standard Model of
particle physics and its interactions is given. This is followed by a discussion of
the formalism which is needed to describe proton-proton (pp) collisions. Also the
extraction of the needed ingredients to predict the outcome of these collisions is
described, followed by a discussion of the Drell-Yan process. Throughout this thesis,
the convention ℏ = c = 1 is used; therefore masses and momenta are quoted in units
of energy, electron volts (eV).
2.1. The Standard Model of particle physics
2.1.1. Overview of the fundamental particles and interactions
The Standard Model of particle physics [1] is one of the most successful models
in physics and describes the dynamics and interactions of all currently known elementary particles. It can describe three of the four fundamental interactions very
precisely and has so far withstood every experimental test.
In our current understanding, matter is formed by point-like particles which can be
divided into two groups: fermions with spin 1/2 which form matter and bosons with
spin 1 which mediate the fundamental forces.
The three fundamental forces described by the Standard Model are the electromagnetic, the weak and the strong interaction. Gravitational interaction is not described
within the Standard Model, but its strength is negligible at the subatomic scale. The
electromagnetic interaction is mediated by the exchange of a massless photon (γ).
The photon couples to the electric charge of particles but does not carry an electric
charge by itself. The electromagnetic force has an infinite range, since the photon is
massless. The weak interaction is mediated by three different gauge bosons which
couple to the third component of the weak isospin T3. The W±-bosons carry positive
and negative electric charge and are the mediators of the charged current, which is
responsible for the β-decay of atomic nuclei. The Z-boson carries no electric charge
and is responsible for the neutral current. The three gauge bosons of the weak interaction are very heavy (mW ≈ 80.4 GeV, mZ ≈ 91.2 GeV), which makes the range of
the weak interaction very short. The strong interaction is mediated by the exchange
of eight different gluons (g) which couple to the so-called color charge. The color
charge occurs in three different types: red (r), green (g) and blue (b). Gluons carry
color charge themselves and as a result couple to each other. The coupling between
the gluons leads, despite the fact that they are massless, to a short range of the
5
2. Theoretical foundations
strong interaction. Table 2.1 lists again all gauge bosons of the Standard Model.
Interaction       Boson        Mass [GeV]   Corresponding charge
electromagnetic   photon (γ)   0            electric charge (e)
weak              W±           ≈ 80.4       weak isospin (T3)
weak              Z            ≈ 91.2       weak isospin (T3)
strong            gluon (g)    0            color charge (r, g, b)

Table 2.1.: Overview of the forces described by the Standard Model and their gauge
bosons.
Fermions can be divided into two groups, leptons and quarks, and three generations.
Leptons interact via the weak force, carry integer electric charge and, according to
this electric charge, also interact electromagnetically. They do not
undergo strong interactions, since they do not carry color charge. Electron (e), muon
(µ) and tau (τ) carry the electric charge Q/e = −1 and weak isospin T3 = −1/2 and thus interact both electromagnetically and weakly. Neutrinos carry no electric charge and
thus interact only weakly. Neutrinos are treated as massless particles in the Standard
Model, although neutrino oscillations prove that they have a non-vanishing mass [2].
Quarks can be separated into six different flavors. They carry a charge of Q/e =
+2/3 or Q/e = −1/3 and interact with all three forces, since they also carry a color
charge. The masses of the fermions rise with the generation and vary over many
orders of magnitude, from the keV range to ≈ 100 GeV. There are two
different definitions of the quark masses: the current quark mass, which is the mass
of the quark itself, and the constituent mass, which is the mass of the quark plus the
gluon field surrounding the quark. For the heavy quarks (c, b, t) these are almost the
same, whereas there are large differences for the light quarks (u, d, s). The current
masses of the light quarks are difficult to measure and thus have large uncertainties.
Every fermion exists as a particle as well as an antiparticle. The only difference
between these two is given by the additive quantum numbers which change the sign.
For example, the electron carries an electric charge of Q/e = −1 whereas its antiparticle, the positron, carries a charge Q/e = +1. The fermions of the second and
third generation can decay via the weak force into fermions of the lower generations.
Table 2.2 shows a listing of all leptons and quarks with their charges.
2.1.2. Mathematical structure of the Standard Model
The mathematical structure of the Standard Model is given by a gauge quantum
field theory [4]. All fundamental particles are described by quantum fields which
are defined at all points in space time. Fermions are described by fermion fields ψ,
also known as (Dirac-)spinor, and gauge bosons are described by vector fields Aµ .
The dynamics of the fundamental fields are determined by the Lagrangian density
Leptons
Generation   Name                Symbol   Color   T3     Q/e   Mass
1.           Electron            e−       No      −1/2   −1    0.511 MeV
             Electron neutrino   νe       No      +1/2    0    < 2 eV
2.           Muon                µ−       No      −1/2   −1    105.6 MeV
             Muon neutrino       νµ       No      +1/2    0    < 0.19 MeV
3.           Tau                 τ−       No      −1/2   −1    1776.8 MeV
             Tau neutrino        ντ       No      +1/2    0    < 18.2 MeV

Quarks
Generation   Name      Symbol   Color   T3     Q/e    Mass
1.           up        u        Yes     +1/2   +2/3   2.3 MeV
             down      d        Yes     −1/2   −1/3   4.8 MeV
2.           charm     c        Yes     +1/2   +2/3   1.3 GeV
             strange   s        Yes     −1/2   −1/3   95 MeV
3.           top       t        Yes     +1/2   +2/3   173.5 GeV
             bottom    b        Yes     −1/2   −1/3   4.2 GeV

Table 2.2.: Fermions of the Standard Model, divided into leptons and quarks.
Given are the name, the symbol, the charges and their masses [3]. The masses of
the particles are rounded and given without any uncertainties. For the light quarks
(u, d) the current mass is given. Antiparticles are not listed explicitly.
L (short Lagrangian). The Lagrangian for a free fermionic field is given by
    \mathcal{L} = \bar{\psi}(i\gamma^\mu \partial_\mu - m)\psi,    (2.1)
where γ µ are the gamma matrices and ψ̄ = ψ † γ 0 . Starting from this, the Lagrangian
L can be required to be gauge invariant under transformations of a specific symmetry
group. For instance, for the electromagnetic force the related symmetry is the
transformation under the group U(1), under which ψ transforms as

    \psi \rightarrow e^{i\alpha(x)}\, \psi,    (2.2)
where α(x) is a phase. If α(x) is a constant for all values of x, the symmetry is called
a global symmetry. If α(x) changes for different points in space time x, the symmetry
is called local. The Lagrangian can be made invariant under a local symmetry
transformation by introducing additional bosonic gauge fields which can then be
identified as mediators of the fundamental forces. The number of bosonic gauge
fields needed to be introduced is equal to the number of generators of the symmetry
group. The Lagrangian of all fundamental interactions of the Standard Model can
be derived by requiring L to be locally gauge invariant under transformations of an
appropriate symmetry group.
7
2. Theoretical foundations
2.1.3. The electroweak interaction
Historically the electromagnetic and weak interactions were treated as two separate
theories. A unification of these two theories, the electroweak theory, was developed
by Glashow, Salam and Weinberg [5, 6, 7].
Based on the observation that the weak interaction only couples to left-handed particles, the quantum number of the weak isospin T can be introduced. The weak forces
are now constructed in such a way that they only couple to the third component of
the isospin T3 . By exploiting the isospin formalism [8], left handed fermions can be
grouped into doublets with T = 1/2 and thus T3 = ±1/2. All right handed fermions
form a singlet with T = 0, T3 = 0 and as a result they do not undergo weak interactions. To describe the electromagnetic interaction, which couples to both left-handed
and right-handed particles, the weak hypercharge Yw is introduced. Analogous to
the Gell-Mann-Nishijima formula [9], the electric charge Q and the third component
of the weak isospin T3 can be related to the weak hypercharge Yw by:
    Q = T_3 + \frac{Y_w}{2}.    (2.3)
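Relation (2.3) can be checked numerically. The weak-hypercharge values used below are the standard electroweak assignments; they are not listed in the text above.

```python
from fractions import Fraction as F

# Numerical check of the Gell-Mann-Nishijima relation Q = T3 + Yw/2
# (eq. 2.3). The hypercharge values are the standard electroweak
# assignments for the left-handed doublets and a right-handed singlet.
fermions = {
    # name: (T3, Yw, Q/e)
    "nu_e (left-handed)":      (F(1, 2),  F(-1, 1), F(0, 1)),
    "electron (left-handed)":  (F(-1, 2), F(-1, 1), F(-1, 1)),
    "up (left-handed)":        (F(1, 2),  F(1, 3),  F(2, 3)),
    "down (left-handed)":      (F(-1, 2), F(1, 3),  F(-1, 3)),
    "electron (right-handed)": (F(0, 1),  F(-2, 1), F(-1, 1)),
}

for name, (t3, yw, q) in fermions.items():
    # eq. (2.3): the electric charge follows from T3 and Yw
    assert t3 + yw / 2 == q, name
    print(f"{name:25s} T3 = {str(t3):4s} Yw = {str(yw):4s} -> Q/e = {q}")
```

Note how the same relation covers both members of a doublet (same Yw, opposite T3) as well as the right-handed singlets (T3 = 0, so Q is fixed by Yw alone).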
The corresponding symmetry group related to the weak isospin is the group SU (2)L
which has three generators Ti = σi /2, given by the Pauli matrices σi . The three
bosonic vector fields corresponding to these generators are Wµ1 , Wµ2 and Wµ3 . L
in this context stands for “left-handed”. The symmetry group associated to the
weak hypercharge is the group U (1)Y which has one generator and thus one gauge
field Bµ . These two groups build up the symmetry group of the electroweak theory
SU (2)L × U (1)Y . The requirement of local gauge invariance under this symmetry
group leads to the following Lagrangian:
    \mathcal{L}_{EW} = \sum_j \bar{\psi}^L_j i\gamma^\mu D_\mu \psi^L_j + \sum_{j,\sigma} \bar{\psi}^R_{j\sigma} i\gamma^\mu D_\mu \psi^R_{j\sigma} - \frac{1}{4} W^i_{\mu\nu} W_i^{\mu\nu} - \frac{1}{4} B_{\mu\nu} B^{\mu\nu},    (2.4)
where j is the generation index, ψ L are the left-handed fermion fields and ψ R are the
right-handed fermion fields with the component for the flavor σ. Dµ is the covariant
derivative:
    D_\mu = \partial_\mu - ig\,\frac{\vec{\sigma}}{2}\cdot\vec{W}_\mu + ig'\,\frac{Y}{2} B_\mu    (2.5)
There are two coupling constants, g for SU(2)_L and g′ for U(1)_Y. The corresponding
field strength tensors are
    W^i_{\mu\nu} = \partial_\mu W^i_\nu - \partial_\nu W^i_\mu + g\,\epsilon_{ijk} W^j_\mu W^k_\nu, \qquad B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu    (2.6)
The symmetry group SU(2) is a non-Abelian group, which leads to a self-coupling
of the W fields. This is shown by the third term of the corresponding field strength
tensor, which couples these components. The physical mass eigenstates Wµ± can be
obtained via a linear combination of Wµ1 and Wµ2 :
    W^\pm_\mu = \frac{1}{\sqrt{2}}\left(W^1_\mu \mp i W^2_\mu\right)    (2.7)
and the Z-boson Zµ and photon field Aµ via a rotation of the fields Wµ3 and Bµ
about the weak mixing angle θW
    \begin{pmatrix} A_\mu \\ Z_\mu \end{pmatrix} = \begin{pmatrix} \cos\theta_W & \sin\theta_W \\ -\sin\theta_W & \cos\theta_W \end{pmatrix} \begin{pmatrix} B_\mu \\ W^3_\mu \end{pmatrix}    (2.8)
To fulfill the local gauge invariance, the fields W^1_µ, W^2_µ, W^3_µ and B_µ have to be massless. This is in contrast to the experimental observation. By using the mechanism
of spontaneous symmetry breaking, the W - and Z-boson can acquire mass while
the photon remains massless. This is done by introducing a single complex scalar
doublet field
    \Phi(x) = \begin{pmatrix} \phi^+(x) \\ \phi^0(x) \end{pmatrix},    (2.9)
called Higgs field [10], with its Lagrangian
    \mathcal{L}_H = (D_\mu \Phi)^\dagger (D^\mu \Phi) - V(\Phi),    (2.10)
where the potential V(Φ) is given by

    V(\Phi) = -\mu^2\, \Phi^\dagger\Phi + \frac{\lambda}{4}\, (\Phi^\dagger\Phi)^2.    (2.11)
The potential is invariant under the local gauge transformations of SU (2)L × U (1)Y .
It is constructed in such a way that, for µ² > 0 and λ > 0, V(Φ) has a degenerate
ground state Φ†Φ = 2µ²/λ = v² with a non-vanishing vacuum expectation value
v. The ground state ⟨Φ⟩ = (1/√2)(0, v)ᵀ can now be chosen in such a way that the
SU(2)_L × U(1)_Y symmetry is broken to U(1)_EM. If Φ is expanded around the
vacuum expectation value [11], it is found to have the following form:
    \Phi(x) \approx \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v + H(x) \end{pmatrix}.    (2.12)
The field H(x) describes a physical neutral scalar, called the Higgs-boson, with the
mass m_H = √2 µ. In July 2012 a new boson consistent with the Higgs-boson was observed
by ATLAS [12] and CMS [13] at the LHC. The three additional degrees of freedom
of Φ are absorbed, leading to mass terms for three out of four physical gauge bosons:
    M_{W^\pm} = \frac{1}{2} v g, \qquad M_Z = \frac{1}{2} v \sqrt{g^2 + g'^2}.    (2.13)
The photon remains massless. The ratio of the masses of the massive bosons can in
leading order (LO) be expressed as
    \cos\theta_W \approx \frac{M_W}{M_Z},    (2.14)
9
2. Theoretical foundations
and the relation of the coupling constants can be expressed as
    g \sin\theta_W = g' \cos\theta_W = e.    (2.15)
These relations can be tested within the Standard Model. Also the masses of the
fermions, which gauge invariance likewise requires to vanish, can be explained by a Yukawa
coupling to the scalar Higgs field.
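As a rough numerical illustration of relation (2.14), the weak mixing angle can be estimated from the approximate boson masses quoted above; this is a leading-order sketch, and higher-order corrections shift the effective mixing angle measured in experiments.

```python
import math

# LO estimate of the weak mixing angle from cos(theta_W) ≈ M_W / M_Z
# (eq. 2.14), using the approximate boson masses quoted in the text.
m_w, m_z = 80.4, 91.2  # GeV

cos_theta_w = m_w / m_z
sin2_theta_w = 1.0 - cos_theta_w**2

print(f"cos(theta_W)   ≈ {cos_theta_w:.3f}")   # ≈ 0.882
print(f"sin^2(theta_W) ≈ {sin2_theta_w:.3f}")  # ≈ 0.223
```

The resulting sin²θ_W ≈ 0.22 agrees with the order of magnitude of the measured value, which is what one can expect from a tree-level relation with rounded masses.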
2.1.4. The strong interaction
Quantum Chromodynamics (QCD) is the theory describing the strong interactions.
It is, like the electroweak theory, a gauge field theory that describes the strong
interactions of colored quarks and gluons. The corresponding symmetry group is
the SU(3)_C, which has N_C² − 1 = 8 generators¹, which can be represented by the
Gell-Mann matrices λ_i, i = 1,2,...,8. The requirement that L_QCD be locally gauge
invariant under transformations of the group SU(3)_C leads to the following Lagrangian
of QCD:
    \mathcal{L}_{QCD} = \sum_q \bar{\psi}_{q,a}\left(i\gamma^\mu (D_\mu)_{ab} - m_q \delta_{ab}\right)\psi_{q,b} - \frac{1}{4}\, G^A_{\mu\nu} G_A^{\mu\nu},    (2.16)
where repeated indices are summed over. ψq,a are the quark-fields of flavor q and
mass mq , with a color-index a or b that runs over all three colors. The covariant
derivative is given by
    (D_\mu)_{ab} = \partial_\mu \delta_{ab} + i g_s \frac{\lambda^C_{ab}}{2} A^C_\mu.    (2.17)
The gauge fields A^C_µ correspond to the gluon fields, with C running over all eight
kinds of gluons. The quantity g_s is the QCD coupling constant, which can be
redefined into an effective “fine-structure constant” for QCD by α_s = g_s²/(4π). It
is usual in the literature to also call α_s the coupling constant of QCD. Finally, the
gluon field strength tensor is given by
    G^A_{\mu\nu} = \partial_\mu A^A_\nu - \partial_\nu A^A_\mu - g_s f_{ABC}\, A^B_\mu A^C_\nu, \qquad [\lambda_A, \lambda_B] = i f_{ABC}\, \lambda_C,    (2.18)
where fABC are the structure constants of the SU (3)C group. The last term in the
gluon field tensor corresponds to a coupling between two gluon fields and thus to
the self-coupling of the gluons.
Feynman diagrams [14] are pictorial representations of the mathematical expressions
of the amplitudes of fundamental processes. Figure 2.1 shows the three fundamental Feynman vertices of Quantum Chromodynamics. Solid lines represent quarks,
whereas curly lines represent gluons. Feynman diagrams, as shown in figure 2.2 a)
and b), can be constructed from these fundamental vertices. From the Lagrangian
L_QCD, the Feynman rules can be determined to calculate the amplitude contributed by such a diagram. The cross section of a process can be calculated by first
¹ N_C = 3 is the number of color charges.
Figure 2.1.: Feynman diagrams representing the three fundamental interactions of Quantum Chromodynamics. The solid lines represent quarks, whereas the curly lines represent gluons.
summing up all contributing amplitudes and then using Fermi’s golden rule
[15] which connects the amplitudes to the cross section. Each possible Feynman
diagram contributing to a process has to be considered for the calculation of the
exact cross section. The first diagram in a) contributes to the cross section at order
g_s² (leading order, LO). The second diagram represents a higher-order correction
(next-to-leading order, NLO) and contributes at order g_s³. The second diagram in a)
called vacuum polarization) since they have the same final state as the leading order
diagram.
Figure 2.2.: Examples of LO and NLO Feynman diagrams. The solid lines represent
quarks, whereas the curly lines represent gluons. The first diagram in a) shows a LO
diagram, whereas the second one shows a real NLO correction. The two diagrams in b)
show virtual NLO corrections.
If the contribution σ (n) from all diagrams of the same order are calculated, the complete cross section of a process can be written down as an expansion in powers of
αs :
    \sigma = \sum_{n=1}^{A} \sigma^{(n)}\, \alpha_s^n,    (2.19)
where A is the highest order to which the coefficients σ^{(n)} are known. For an exact calculation all possible diagrams would be summed up.
When calculating a cross section there are virtual loop diagrams of higher order
in α_s, shown in figure 2.2 b), which lead to divergences in the calculation of σ^{(n)}. These
divergences occur during the integration over all possible momenta of the loop particles. To handle these divergences, a cutoff is introduced. With this cutoff, the
infinities are absorbed into the coupling constant α_s; the coupling constant is then said to be “renormalized”. This procedure is similar to the renormalization in QED,
where the actual bare electric charge is infinite but redefined in such a way that
it becomes finite. A heuristic explanation is that the infinite charge is screened by
charges coming from vacuum polarization in such a way that the measured charge
is finite. Renormalization leads, due to finite correction terms, to a dependency of
the coupling constant² on the scale of momentum transfer Q² and on an unphysical
renormalization scale µ_R². The cross section σ now also depends on µ_R due to the
dependency of αs . Since µR is an unphysical quantity, the physical result σ must be
independent of the choice of µR , which leads to the equation:
    \mu_R \frac{\mathrm{d}}{\mathrm{d}\mu_R}\, \sigma(\mu_R) = 0.    (2.20)
This equation holds exactly if σ(µR ) is calculated up to all orders. If this is applied
to a finite-order approximation, the numerical result will depend on the choice of the
unphysical scale µ_R. The dependency on the choice of µ_R decreases when higher
orders are calculated and can be interpreted as a theoretical uncertainty on the
knowledge of σ.
The coupling constant decreases for high values of Q2 (small distances) which leads
to quasi free quarks. This behavior is called “asymptotic freedom”. At small values
of Q² (large distances) the coupling constant increases. If α_s is of the order of unity,
observables can no longer be calculated as an expansion in powers of α_s. In LO
the dependency of α_s on Q² can be written as
    \alpha_s(Q^2) = \frac{\alpha_s(\mu^2)}{1 + \alpha_s(\mu^2)\, \beta_0 \ln\frac{Q^2}{\mu^2}}, \qquad \beta_0 = \frac{33 - 2N_f}{12\pi},    (2.21)
where µ² is a reference scale at which α_s is known. The factor β_0 is the leading-order
coefficient of the perturbative expansion of the β-function [16], which predicts the
running of α_s, and N_f is the number of quark flavors contributing at the scale Q².
The value of α_s at the scale of the mass of the Z-boson is α_s(M_Z²) = 0.1184 ± 0.0007
[3]. QCD at a scale where α_s is small enough to calculate observables perturbatively
is called perturbative QCD. The scale where α_s becomes greater than unity and
perturbative expansions start to diverge is called Λ_QCD ≈ 220 MeV.

² There are additional loop diagrams which lead to the running of the mass and magnetic moment
of the quarks.
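The leading-order running of eq. (2.21) can be evaluated directly, anchored at the α_s(M_Z²) value quoted above. The choice of N_f = 5 active flavors is an assumption, valid between the b- and t-quark thresholds.

```python
import math

# LO running of the strong coupling, eq. (2.21), anchored at
# alpha_s(M_Z^2) = 0.1184, the value quoted in the text.
M_Z = 91.1876  # GeV

def alpha_s_lo(q2, mu2=M_Z**2, alpha_ref=0.1184, nf=5):
    """Evaluate alpha_s(Q^2) at leading order, eq. (2.21)."""
    beta0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_ref / (1.0 + alpha_ref * beta0 * math.log(q2 / mu2))

for q in (10.0, M_Z, 1000.0):  # GeV
    print(f"alpha_s(Q = {q:7.1f} GeV) = {alpha_s_lo(q**2):.4f}")
# The coupling decreases with Q^2 (asymptotic freedom) and grows
# toward low scales, where the perturbative expansion breaks down.
```

By construction the reference value is reproduced exactly at Q = M_Z, while the logarithm drives the slow decrease toward higher scales.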
It is empirically found that the potential of QCD has a Coulomb behavior ∝ 1/r
at short distances and a linearly rising potential ∝ r at larger distances. Hence, if
one tries to separate two quarks, it is energetically favorable to produce a new
quark-antiquark (q q̄) pair out of the vacuum. These can then build new colorless
bound states, called hadrons. There are two kinds of hadrons: mesons, which
are a quark-antiquark state, and baryons, which are a three-quark state. The
process of building these colorless states is called hadronization. This feature of
QCD is called “confinement”, meaning that there are no free quarks and gluons.
If a highly energetic quark or gluon is produced, it can lose energy by radiating
additional gluons, down to an energy scale where confinement occurs and hadrons are
formed. This leads to a collimated shower of hadrons, which is also called a jet.
2.2. Phenomenology of proton-proton collisions
Baryons are made out of three valence quarks, which determine the quantum numbers of the hadron. The proton is a baryon made out of two u-quarks and one d-quark. These valence quarks exchange gluons which bind them together. During this exchange, several processes can occur. For instance, a gluon can split into a qq̄-pair. These dynamically changing quarks are called sea quarks, since they form a “sea” of qq̄-pairs. The valence quarks or a gluon itself can also radiate a gluon. Due to these processes the proton is a very dynamic structure. All objects in the proton, gluons, valence and sea quarks, are collectively called partons.
Unlike electron-positron collisions, protons do not collide and interact as a whole. Only the partons of the protons take part in the hard interaction, so not the full center-of-mass energy of the colliding protons is available. Figure 2.3 shows a schematic view of such a scattering process. Two protons A and B collide, and the partons a and b, which carry momentum fractions xa and xb of their protons, scatter in the hard scattering process with a cross section σ̂. The probability to find a parton with a given x inside the proton is parametrized by the parton distribution functions (PDFs) fa,b/A,B(xa,b).
2.2.1. Structure of protons
The hadron-hadron cross section for an inelastic hard scattering process cannot be calculated directly with perturbative QCD, since physics processes of all scales in Q² are involved. It was first pointed out by S.D. Drell and T.-M. Yan [17] that the
³ Using all flavors up to the b-quark.
2. Theoretical foundations
Figure 2.3.: Schematic view of a hard scattering process with a cross section σ̂. The
incoming protons are labeled with A and B, the scattered partons with the momentum fraction xa,b of the proton are labeled as a and b. The probability to find these partons at a given
momentum fraction x is parametrized by the parton distribution functions fa,b/A,B (xa,b ).
perturbatively calculable short-distance interactions and the non-perturbative long-distance interactions can be separated. The part calculable with perturbative QCD is given by the subprocess cross section σ̂, whereas the non-perturbative part has to be described by a function. These functions cannot be calculated and have to be extracted from measurements. They parametrize the probability to find a parton of a certain flavor at a certain momentum fraction x. The factorization theorem can then be used to calculate the proton-proton cross section σAB for a specific hard process σ̂ab→X:

$$\sigma_{AB} = \sum_{a,b} \int \mathrm{d}x_a\, \mathrm{d}x_b\, f_{a/A}(x_a)\, f_{b/B}(x_b)\, \hat{\sigma}_{ab\to X}(x_a, x_b). \qquad (2.22)$$
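The structure of eq. (2.22) can be made concrete with a small Monte Carlo sketch. The parton density `f_toy` and partonic cross section `sigma_hat_toy` below are invented toy functions for a single flavor combination, not real physics inputs:

```python
import random

def f_toy(x):
    """Toy parton density (illustrative only, not a real PDF)."""
    return x**-0.5 * (1 - x)**3

def sigma_hat_toy(xa, xb, s=8000.0**2):
    """Toy partonic cross section falling like 1/s_hat."""
    return 1.0 / (xa * xb * s)

def sigma_AB(n=100_000, seed=1):
    """Monte Carlo estimate of the factorization integral (eq. 2.22)
    for a single parton flavor combination, in arbitrary units."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        xa, xb = rng.uniform(1e-3, 1), rng.uniform(1e-3, 1)
        total += f_toy(xa) * f_toy(xb) * sigma_hat_toy(xa, xb)
    volume = (1 - 1e-3)**2
    return volume * total / n

print(sigma_AB())  # a single toy number; real predictions need real PDFs
```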
The PDFs fa,b/A,B(xa,b) have, besides the dependence on x, a dependence on the Q² value at which a certain process takes place. This can be seen in a heuristic way in figure 2.4. According to Heisenberg's uncertainty principle [18], a higher momentum transfer results in a higher spatial resolution. If the Q² of the process, which corresponds to a certain resolution, is below a certain Q²res, additional substructures cannot be resolved. If the momentum transfer is above this scale, additional processes can be resolved. This leads to a dependence of the PDFs on the momentum transfer Q², since in the latter case the probability to find additional partons is higher.
Figure 2.4.: Diagram of a gluon splitting into a quark-antiquark pair, which annihilates back into a gluon. The blue circle indicates the resolution corresponding to the Q² of the process. In the left case (Q² < Q²res) the quark-antiquark pair cannot be resolved; in the right case (Q² > Q²res) it can.
functions Pif (z), where 1 − z is the fraction of momentum carried by the emitted
parton. These splitting functions can be expressed as perturbative expansions:
$$P_{if}(z, \alpha_s) = P_{if}^{(0)}(z) + \frac{\alpha_s}{2\pi} P_{if}^{(1)}(z) + \dots \qquad (2.23)$$
They are currently calculated up to next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) [20]. The dependence of the parton distributions qi and g on Q² can be determined from the splitting functions using the DGLAP equations⁴:

$$\frac{\partial q_i(x, Q^2)}{\partial \log Q^2} = \frac{\alpha_s}{2\pi} \int_x^1 \frac{\mathrm{d}z}{z} \Big\{ \sum_j P_{q_i q_j}(z, \alpha_s)\, q_j\!\left(\tfrac{x}{z}, Q^2\right) + P_{q_i g}(z, \alpha_s)\, g\!\left(\tfrac{x}{z}, Q^2\right) \Big\} \qquad (2.24)$$

$$\frac{\partial g(x, Q^2)}{\partial \log Q^2} = \frac{\alpha_s}{2\pi} \int_x^1 \frac{\mathrm{d}z}{z} \Big\{ \sum_j P_{g q_j}(z, \alpha_s)\, q_j\!\left(\tfrac{x}{z}, Q^2\right) + P_{gg}(z, \alpha_s)\, g\!\left(\tfrac{x}{z}, Q^2\right) \Big\}.$$
The PDFs now depend on Q², and the factorization theorem has to be written as:

$$\sigma_{AB} = \sum_{a,b} \int \mathrm{d}x_a\, \mathrm{d}x_b\, f_{a/A}(x_a, \mu_F^2)\, f_{b/B}(x_b, \mu_F^2) \times \left[\hat{\sigma}_0 + \alpha_s(\mu_R^2)\,\hat{\sigma}_1 + \dots\right]_{ab \to X}. \qquad (2.25)$$
Here µF is the factorization scale, which can be thought of as the scale separating the long- and short-distance physics. The partonic cross section σ̂ is now also expressed as a perturbative expansion in αs. Formally, the cross section calculated to all orders of perturbation theory is independent of the choice of the unphysical parameters µR and µF. However, in the absence of a complete set of higher-order corrections, it is necessary to make a specific choice. Different choices lead to different numerical results, which is a reflection of the theoretical
⁴ Dokshitzer-Gribov-Lipatov-Altarelli-Parisi equations
uncertainty. For consistency, the partonic cross section and the splitting functions have to be of the same order in αs.
2.2.2. Determination of parton distribution functions
The full x dependence of the PDFs cannot be predicted by any known theory. Thus this dependence has to be extracted elsewhere, usually from global QCD fits to several measurements. Most important for the determination are results from deep inelastic scattering (DIS), where a proton is probed by a lepton. The most precise measurements of the proton were performed by the H1 [21] and ZEUS [22] experiments at the HERA accelerator. These measurements are predominantly at low x and cannot distinguish between quarks and antiquarks. There are also fixed-target DIS measurements, e.g. [23], which probe higher x. Jet data from collider experiments, e.g. [24], cover a broad range in x and Q² and are especially important for the high-x gluon distribution.
To extract the x dependence from these measurements, first a scale Q0 is chosen at which a generic functional form parametrizes the quark and gluon distributions:

$$F(x, Q_0^2) = A\, x^B (1-x)^C\, P(x; D, \dots). \qquad (2.26)$$

The parameters B and C are physically motivated but not sufficient to describe either the quark or the gluon distributions. Thus the term P(x; D, ...) is a suitable smooth function which adds more flexibility, depending on the number of parameters. The parametrization scale Q0 is often chosen in the range 1–2 GeV. This is above the region where αs is large, but below the region where the extracted PDF is used. The functional form with a set of starting parameters is evolved in Q² and convoluted with the partonic cross section to predict cross sections, which can be compared to the actual measurements. From the measured and calculated cross sections a χ² is calculated, and the starting parameters are deduced by minimizing this χ². Once these parameters are determined, the PDFs can be evolved from the parametrization scale to any Q² using the DGLAP equations.
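The fitting procedure just described can be sketched in miniature: pseudo-data are generated from an assumed parametrization of the form of eq. (2.26) (with P(x; D, ...) set to 1), and the starting parameters are recovered by χ² minimization. The crude grid search stands in for the full minimization machinery of a global QCD fit, and all numbers are illustrative:

```python
import random

def F(x, A, B, C):
    """Generic PDF parametrization at the starting scale Q0 (eq. 2.26),
    with the flexible term P(x; D, ...) set to 1 for simplicity."""
    return A * x**B * (1 - x)**C

# Toy "measurements": values of F at a few x points with 2% uncertainties,
# generated from assumed true parameters (illustrative only).
random.seed(0)
true = (1.0, -0.2, 3.0)
xs = [0.01, 0.05, 0.1, 0.3, 0.5, 0.7]
data = [(x, F(x, *true) * random.gauss(1, 0.02), 0.02 * F(x, *true)) for x in xs]

def chi2(params):
    return sum(((y - F(x, *params)) / err)**2 for x, y, err in data)

# Crude grid-search minimization standing in for a full chi^2 fit.
best = min(
    ((A, B, C) for A in [0.8 + 0.05 * i for i in range(9)]
               for B in [-0.4 + 0.05 * j for j in range(9)]
               for C in [2.0 + 0.25 * k for k in range(9)]),
    key=chi2,
)
print(best, chi2(best))  # parameters close to the assumed true values
```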
The extracted PDFs have uncertainties corresponding to the uncertainties of the measurements used in the global fit. These uncertainties can be propagated to experimental uncertainties on the deduced parametrization parameters. However, the propagation of these uncertainties to the PDFs cannot be done straightforwardly, since some of the parameters are highly correlated. To calculate sets of parameters which are uncorrelated and can be propagated directly, the Hessian method is often used [25]. In this method, the up and down variations of the parameters, at either 68% or 90% confidence level, are written into an n × n matrix, where n is the number of parameters. These can then be rotated into an orthogonal eigenvector basis. The result is 2n eigenvector sets (one set for the up and one for the down variation) which allow the uncorrelated propagation of the fit uncertainties. These eigenvector
sets for up and down variation can then be combined into an asymmetric uncertainty ∆X⁺max and ∆X⁻max on the PDF, or on an observable X computed with the PDF, using the following formulas:

$$\Delta X^{+}_{\max} = \sqrt{\sum_{i=1}^{2n} \left[\max(X_i^{+} - X_0,\; X_i^{-} - X_0,\; 0)\right]^2}$$

$$\Delta X^{-}_{\max} = \sqrt{\sum_{i=1}^{2n} \left[\max(X_0 - X_i^{+},\; X_0 - X_i^{-},\; 0)\right]^2}. \qquad (2.27)$$

∆X⁺ adds in quadrature the PDF uncertainty contributions that lead to an increase of the observable X, and ∆X⁻ those that lead to a decrease. The additions in quadrature are valid since the eigenvectors are given in an orthonormal basis.
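Equation (2.27) translates directly into code. The numbers below are hypothetical eigenvector variations of a cross section, not values from any real PDF set:

```python
import math

def hessian_asymmetric(X0, X_plus, X_minus):
    """Asymmetric Hessian PDF uncertainty on an observable X (eq. 2.27).
    X_plus[i] / X_minus[i] are the values of X for the up/down variation
    of eigenvector i; X0 is the central value."""
    up = math.sqrt(sum(max(xp - X0, xm - X0, 0.0)**2
                       for xp, xm in zip(X_plus, X_minus)))
    down = math.sqrt(sum(max(X0 - xp, X0 - xm, 0.0)**2
                         for xp, xm in zip(X_plus, X_minus)))
    return up, down

# Hypothetical eigenvector variations of a cross section (pb):
up, down = hessian_asymmetric(100.0, [103.0, 99.0, 101.0], [98.0, 102.0, 100.0])
print(up, down)  # → sqrt(14) ≈ 3.742 and sqrt(5) ≈ 2.236
```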
Additional uncertainties arise from the chosen parametrization at Q0 and the value
of αs used in the evolution. There are different approaches for the treatment of
these uncertainties.
The extraction of these PDFs is usually done by different groups of theorists and experimentalists specialized in this topic. The extracted PDFs are then made public at a certain order of αs, given by the order of the splitting functions used for the DGLAP evolution. Figure 2.5 shows the NNLO PDF with its corresponding uncertainties extracted by the MSTW group [26]. The distributions of quarks and gluons at Q² = 10 GeV² and Q² = 10⁴ GeV² are shown. At low values of x the distributions show a rise due to the rising contributions from the sea. At higher x, around ≈ 1/3, the u and d distributions have a peak which corresponds to the valence part of the quarks. At higher Q² these peaks become less significant and the sea part moves to higher values of x.
2.3. Drell-Yan process
The Drell-Yan process [17] is the production of a lepton pair ℓ⁺ℓ⁻ at a hadron collider by quark-antiquark annihilation. In the basic Drell-Yan process, the qq̄-pair annihilates into a virtual photon, qq̄ → γ* → ℓ⁺ℓ⁻. From now on this process is discussed for the case of a decay into an electron-positron pair. The cross section for this process can easily be obtained from the fundamental QED e⁺e⁻ → µ⁺µ⁻ cross section, with the addition of appropriate color and charge factors:

$$\hat{\sigma}(q\bar{q} \to \gamma^* \to e^+e^-) = \frac{4\pi\alpha^2}{3\hat{s}}\,\frac{1}{N_C}\, Q_q^2, \qquad (2.28)$$

where Qq is the charge of the quark, ŝ the squared center-of-mass energy of the incoming partons, and 1/NC = 1/3 a color factor, taking into account that only three color combinations are possible since the intermediate state has to be colorless.
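For orientation, eq. (2.28) can be evaluated numerically. The GeV⁻²-to-picobarn conversion factor below is the standard ħ²c² value; the chosen quark charge and energy are just example inputs:

```python
import math

GEV2_TO_PB = 3.894e8  # conversion: 1 GeV^-2 ≈ 3.894e8 pb (hbar^2 c^2)

def sigma_qqbar_to_ee(s_hat_gev2, q_charge, alpha=1.0 / 137.036):
    """LO partonic Drell-Yan cross section via a virtual photon (eq. 2.28),
    in pb. Z exchange and gamma*/Z interference are not included."""
    n_c = 3
    return (4 * math.pi * alpha**2 / (3 * s_hat_gev2)
            * q_charge**2 / n_c) * GEV2_TO_PB

# u-ubar annihilation at sqrt(s_hat) = 100 GeV:
print(sigma_qqbar_to_ee(100.0**2, 2.0 / 3.0))  # ≈ 1.3 pb
```

The steeply falling 1/m² behavior discussed below follows directly from the 1/ŝ factor.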
Figure 2.5.: The MSTW2008NNLO PDFs (68% C.L.) [26] times Bjorken-x for quarks and gluons, shown at a scale of Q² = 10 GeV² on the left and Q² = 10⁴ GeV² on the right. The uncertainty of the PDFs is indicated by an uncertainty band.
The partonic center-of-mass energy is equal to the virtuality of the photon and the invariant mass of the electron-positron pair:

$$\sqrt{\hat{s}} = m_{\gamma^*} = m_{e^+e^-} = \sqrt{(p_{e^+} + p_{e^-})^2}, \qquad (2.29)$$

where pe+ and pe− are the four-momenta of the positron and the electron, respectively. Hence, as a function of the invariant mass of the lepton pair, the cross section falls steeply, σ̂ ∝ 1/m²e+e−. If me+e− ≈ MZ, the process can also take place via the exchange of a Z-boson, qq̄ → Z → e⁺e⁻, leading to a Breit-Wigner resonance in the invariant mass spectrum near MZ. These two possible processes, the exchange of a virtual photon and the exchange of a Z-boson, interfere, leading to the process qq̄ → Z/γ* → e⁺e⁻.
The four-vectors of the incoming partons can be written as

$$p_a^{\mu} = \frac{\sqrt{s}}{2}(x_a, 0, 0, x_a), \qquad p_b^{\mu} = \frac{\sqrt{s}}{2}(x_b, 0, 0, -x_b), \qquad (2.30)$$

where s is the squared center-of-mass energy of the hadrons, which is related to the partonic quantity by ŝ = xa xb s. Using these four-vectors, the rapidity ye+e− of the
e⁺e⁻-pair can be expressed as

$$y_{e^+e^-} = \frac{1}{2}\log\left(\frac{x_a}{x_b}\right), \qquad (2.31)$$

and hence

$$x_a = \frac{m_{e^+e^-}}{\sqrt{s}}\, e^{y_{e^+e^-}}, \qquad x_b = \frac{m_{e^+e^-}}{\sqrt{s}}\, e^{-y_{e^+e^-}}. \qquad (2.32)$$
Thus different invariant masses me+e− and different rapidities ye+e− probe different values of the parton momentum fraction x. This formula is universal and valid for any final state. Figure 2.6 shows the relationship between the variables x and Q² and the kinematic variables corresponding to a final state of mass M produced with rapidity y. Also shown are the regions of phase space each experiment can reach.
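The kinematics of eq. (2.32) can be evaluated directly; the mass and rapidity below are example values for a √s = 8 TeV collider:

```python
import math

def parton_x(mass_gev, rapidity, sqrt_s_gev=8000.0):
    """Parton momentum fractions probed by a Drell-Yan pair of a given
    invariant mass and rapidity (eq. 2.32)."""
    ratio = mass_gev / sqrt_s_gev
    return ratio * math.exp(rapidity), ratio * math.exp(-rapidity)

# A 1 TeV pair at rapidity 2 probes one parton at very high x:
xa, xb = parton_x(1000.0, 2.0)
print(xa, xb)  # xa ≈ 0.92, xb ≈ 0.017
```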
Figure 2.6.: Graphical representation of the relationship between parton (x, Q²) variables and the kinematic variables corresponding to a final state of mass M produced with rapidity y at the LHC collider with √s = 7 TeV [27].
The DIS experiments have access to lower values of Q², where HERA probes lower values of x and the fixed-target experiments higher values of x. The kinematic plane for the LHC is shown for a center-of-mass energy of √s = 7 TeV. A broad range in both variables, x and Q², is covered by the LHC. The measurement of the Drell-Yan process at invariant masses above the Z-resonance (me+e− > 116 GeV) probes values of x > 10⁻², when going to higher rapidities even reaching values of approximately x = 1. Since Drell-Yan production requires an antiquark, a cross section measurement is especially sensitive to the ū- and d̄-distributions at higher x.
For the Drell-Yan process, the factorization and renormalization scales are usually set to the mass of the process, µR = µF = me+e−. This convention is also used for all theoretical calculations in this analysis.
2.3.1. Recent results
The Drell-Yan process has been measured at several hadron colliders, but the region above the Z-resonance was only measured by the experiments at the Tevatron collider⁵ and the LHC. The CDF experiment at the Tevatron has measured the double differential cross section, binned in invariant mass and rapidity, for the regions 66 GeV < me+e− < 116 GeV and me+e− > 116 GeV at √s = 1.8 TeV [28]. The measurement is in good agreement with NLO predictions, but was performed with only 108 pb⁻¹ and thus has quite low statistics. There are additional measurements of the differential cross section binned in rapidity in the region of the Z-resonance at √s = 1.96 TeV using 2.1 fb⁻¹ by the D0 experiment [29] and the CDF experiment [30]. The Tevatron experiments were able to reach invariant masses of approximately 500 GeV.
Due to the new kinematic regime at the LHC, the region above the Z-resonance is for the first time accessible to precise measurements. The CMS experiment has measured the invariant mass spectrum of the Drell-Yan process at √s = 7 TeV up to 600 GeV using 36 pb⁻¹ of data [31]. Additionally, there is a preliminary measurement of the differential cross section at √s = 7 TeV in two mass windows, from 120 to 200 GeV and from 200 to 1500 GeV, binned in rapidity, using 4.5 fb⁻¹ of data [32]. A measurement of the differential cross section at √s = 7 TeV binned in invariant mass up to 1.5 TeV using 4.9 fb⁻¹ of data has been performed by the ATLAS experiment [33]. The latter measurement is discussed in more detail in section 11.1 and compared to the outcome of this analysis.
⁵ The Tevatron is a proton-antiproton collider at Fermilab and was operated at √s = 1.8 TeV and √s = 1.96 TeV.
3. Theoretical predictions
In the following chapter the principle of simulating a physics event for a proton-proton collision is introduced. The theoretical tools used in this analysis are discussed in a second part. In the last part the theoretical prediction of the Drell-Yan cross section is discussed, as well as differences due to different PDFs.
3.1. Physics simulation
Physical processes, like Drell-Yan production, are simulated to predict the outcome of a measurement. The simulation is done on an event-by-event basis and can be separated into two steps: first the physics simulation of all involved particles, and second the detector response to the simulated particles (the latter is discussed separately in section 4.8). Here the physics simulation is discussed, which can be further divided into five main steps:
1. Hard process
2. Parton shower
3. Hadronization
4. Underlying event
5. Unstable particle decays
Figure 3.1 illustrates the different steps of the simulation, where the colors correspond to the steps listed above. At the beginning, the matrix element of the hard process is calculated. This involves the calculation of the probability distribution of the hard scattering process (like Drell-Yan) from perturbation theory. This probability distribution is then convoluted with the PDFs of the incoming partons. From the resulting probability distribution, four-vectors of the outgoing particles can be calculated using a random number generator. Due to this random generation process, programs performing this task are called Monte Carlo generators. Additional phase space restrictions can be imposed on the generation of the four-vectors of the particles. Depending on the Monte Carlo generator, the calculation of the hard process is done at LO or NLO. In all simulations used in ATLAS, additional real photon emission (final state radiation) of the outgoing particles is simulated using the program Photos [34]. The initial incoming and outgoing partons involved in the hard process are colored particles and thus can radiate further gluons. In case of an incoming parton
this is called initial state radiation (ISR), and in case of an outgoing parton final state radiation (FSR). In addition, gluons can split into qq̄-pairs. Also, since a colored parton is taken out of the initially uncolored proton for the hard process, the proton remnant is left in a colored state which can radiate gluons. These gluons can themselves radiate further gluons, which leads to an extended shower. These parton showers can be simulated step by step as an evolution in momentum transfer, starting from the momentum scale of the hard process downwards to a scale where perturbation theory breaks down. If a generator at NLO is used, an additional matching between matrix element and parton shower is needed, since the matrix element already includes Feynman diagrams for initial and final state radiation. At the scale where perturbation theory breaks down, hadronization models simulate the transition of colored particles into hadrons, which are in the end measured in the detector. Besides the hard process, additional interactions of other partons in the protons can occur. These lead to a so-called underlying event containing typically low-energy hadrons, which contaminate the hard process. In the end, many of the produced hadrons are not stable, and thus their decays have to be simulated.
There are several different generators available which can handle all or a part of the event generation steps. After the event generation, the detector response has to be simulated; this is described in chapter 4.8.
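The step-by-step shower evolution in momentum transfer can be caricatured with a toy emission chain. The dP ∝ α dQ²/Q² ansatz and the fixed effective coupling below are drastic simplifications and do not represent any real shower algorithm:

```python
import random

def toy_shower(q_start=100.0, q_cutoff=1.0, alpha_eff=0.12, seed=2):
    """Toy parton shower: generate successive emission scales as a Markov
    chain in momentum transfer Q, from the hard-process scale down to a
    cutoff where perturbation theory breaks down (illustrative only; real
    showers use Sudakov form factors with the full splitting functions)."""
    rng = random.Random(seed)
    q, emissions = q_start, []
    while True:
        # Sample the next, strictly lower scale from dP ~ alpha dQ^2/Q^2.
        q = q * rng.random() ** (1.0 / (2.0 * alpha_eff))
        if q < q_cutoff:
            break
        emissions.append(q)
    return emissions

print(toy_shower())  # ordered emission scales in GeV, highest first
```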
Figure 3.1.: Diagram showing the structure of a proton-proton collision, where the different colors indicate the different stages involved in the event generation [35].
3.2. Theoretical tools
3.2.1. MCFM
MCFM¹ [36] is a Monte Carlo program which gives NLO predictions for a range of processes at hadron colliders. In this thesis MCFM is used to calculate theory predictions of the Drell-Yan cross section using different PDFs. These predictions can be used to compare the differences between the cross sections in certain phase spaces due to the different PDFs used.
3.2.2. FEWZ
The best available theory prediction for the Drell-Yan process is currently based on a calculation done with FEWZ² [37]. This calculation includes electroweak NLO and QCD NNLO corrections and is done using the MSTW2008NNLO [26] PDF. The calculation also covers real W and Z radiation off electrons, calculated using MadGraph [38], and was provided by ATLAS [39].
Since the quarks in the proton carry electric charge, there are also photons in the proton. Due to these photons there is an additional production channel for e⁺e⁻-pairs, called photon-induced (PI) production, in which two photons from the two colliding protons produce an e⁺e⁻-pair (γγ → e⁺e⁻). For this process no Monte Carlo simulation is available. Thus, additive corrections covering the PI production were calculated with FEWZ. For the calculation of these corrections, the MRST2004qed [40] PDF at leading order, currently the only available PDF describing the photon content of the proton, was used. The measurement of the Drell-Yan process at high masses can actually be used to further constrain the photon PDF [41]. The corrections for PI processes were calculated and provided by ATLAS [39].
3.2.3. APPLgrid
The calculation of NLO cross sections is computationally intensive, and thus it is impractical to calculate a new cross section prediction for every single PDF. When doing PDF fits, a recalculation of the cross section with a different PDF is needed in every minimization step of the χ². Therefore a faster method of calculation is needed; a program which provides such a possibility is APPLgrid [42].
The Drell-Yan cross section is, according to formula 2.25, a convolution of the perturbatively calculable matrix elements dσ̂/dx of the hard process with the non-perturbative PDFs. Due to the factorization theorem, the perturbative matrix elements are independent of the PDFs. To reduce the time needed for a cross section calculation, the calculation can be split into two parts using the factorization theorem. First, the time-consuming generation of the partonic cross section is performed,
1
2
Monte Carlo for FeMtobarn processes
Fully Exclusive W and Z production
23
3. Theoretical predictions
and weights w are derived from the calculation and stored in a three-dimensional grid:

$$\frac{\mathrm{d}\hat{\sigma}^{(p)}_{ij}}{\mathrm{d}x}(x_1, x_2, \mu_R^2) \;\to\; w^{(p)(ij)}_{m,n,k}(x_{1m}, x_{2n}, \mu_k^2), \qquad (3.1)$$

where m, n and k are the indices of the three-dimensional grid, i, j the indices of the contributing flavors, p the order in αs of the process, and µ² = µ²R. The generation of the grid is done in two steps: in a first run the phase space of the grid is optimized, and in a second run it is filled with the weights w⁽ᵖ⁾⁽ⁱʲ⁾m,n,k(x1m, x2n, µ²k). This grid can then be convoluted with PDFs, assuming also µ² = µ²F, in the following way:
$$\frac{\mathrm{d}\sigma}{\mathrm{d}x} \;\to\; \sum_{p} \sum_{i,j} \sum_{m,n,k} w^{(p)(ij)}_{m,n,k} \left(\frac{\alpha_s(\mu_k^2)}{2\pi}\right)^{p} f_i(x_{1m}, \mu_k^2)\, f_j(x_{2n}, \mu_k^2). \qquad (3.2)$$
In this case fi,j(x1,2m, µ²k) is the representation of the used PDF on the grid. Since the integral in formula 2.25 has now been replaced by a sum over all points in the grid, the convolution with any strong coupling constant αs or any PDF can be calculated within milliseconds. It is additionally possible to vary the renormalization and factorization scales µR and µF in the convolution. This allows the study of the theoretical uncertainties which come with the choice of these unphysical parameters.
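The sum in eq. (3.2) can be sketched as a direct loop over a hypothetical, tiny weight grid. Real APPLgrid files store interpolation weights per flavor pair and perturbative order; this toy structure only mimics that layout:

```python
import math

def convolve_grid(weights, alpha_s, pdf_i, pdf_j, x_grid, mu2_grid):
    """Sketch of an APPLgrid-style convolution (eq. 3.2): the stored
    weights replace the slow NLO integration, so swapping in a new PDF
    costs only a sum over grid points. weights[p][m][n][k] are
    hypothetical precomputed weights for perturbative order p."""
    total = 0.0
    for p, w_p in enumerate(weights):
        for m, x1 in enumerate(x_grid):
            for n, x2 in enumerate(x_grid):
                for k, mu2 in enumerate(mu2_grid):
                    total += (w_p[m][n][k]
                              * (alpha_s(mu2) / (2 * math.pi))**p
                              * pdf_i(x1, mu2) * pdf_j(x2, mu2))
    return total

# Toy check: a single lowest-order grid with unit weights and flat inputs.
w = [[[[1.0], [1.0]], [[1.0], [1.0]]]]  # p=0 only, 2x2 in x, 1 scale bin
print(convolve_grid(w, lambda mu2: 0.118, lambda x, mu2: 1.0,
                    lambda x, mu2: 1.0, [0.01, 0.1], [8317.0]))  # → 4.0
```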
MCFM, connected to APPLgrid, was used to generate grids for the one-dimensional Drell-Yan cross section binned in invariant mass mee, and for the two-dimensional cross section binned in invariant mass mee and absolute rapidity |yee|.
3.3. Cross section predictions
Figure 3.2 shows the differential cross section prediction calculated with FEWZ, binned in invariant mass. The cross section falls steeply with the e⁺e⁻ mass, over six orders of magnitude from 116 GeV to 1500 GeV. The double differential cross section binned in absolute rapidity is shown for two invariant mass bins in figure 3.3. It can be seen that the cross section falls slowly towards higher rapidities; this behavior is stronger for higher invariant masses.
3.4. Comparison between different parton distribution
functions
Since the FEWZ calculation is only available for one PDF, it is not suitable for studying differences due to different PDFs. To study the differences between PDFs in different phase spaces, various PDFs were convoluted with the grids produced with MCFM and APPLgrid.
Figure 3.4 shows the ratio of different theoretical cross section predictions to the
Figure 3.2.: Differential cross section prediction, binned in invariant mass. The prediction was calculated with FEWZ using the MSTW2008NNLO PDF. Corrections for photon-induced processes and W/Z radiation are applied.
Figure 3.3.: Double differential cross section prediction, binned in absolute rapidity
and shown for two mass windows. The prediction was calculated with FEWZ using the
MSTW2008NNLO PDF. Corrections for W /Z radiation are applied but no PI corrections.
FEWZ prediction including all corrections. The cross sections are binned in invariant mass, from mee = 116 GeV up to mee = 1500 GeV. Shown as dashed lines are the predictions of FEWZ without the PI corrections and without the corrections for real W and Z radiation. In the first bin both corrections are below 1%. The effects become larger at higher invariant masses: in the last bin the W/Z radiation has an effect of 2% and the PI corrections of about 5%.
Also shown are predictions using the grid based on MCFM, convoluted with different PDFs. The same predictions are shown for the two-dimensional cross section, binned in invariant mass and absolute rapidity, in figure 3.5. Here the PI corrections for the FEWZ calculation are not included, because a rapidity dependence is expected which is not considered in the calculation of the corrections. Also shown in the figure is the MCFM prediction using the MSTW2008NLO PDF. Differences between both calculations can be interpreted as missing corrections, since the FEWZ calculation is also based on this PDF. For the one-dimensional cross section, where the FEWZ calculation includes the PI corrections, the differences above 200 GeV are of the order of 5%. In the two-dimensional case, the differences in the bin 116 GeV < mee < 150 GeV are of the order of 1%, except for the outermost bin, where they are about 5%. At higher invariant masses, between 500 GeV and 1500 GeV, the differences depend more on the rapidity: in the first bin the NNLO calculation predicts a cross section about 1% lower, whereas in the last rapidity bin the cross section is about 16% higher.
To study differences due to different PDFs, the MCFM predictions can be compared to each other. Predictions for the PDFs MSTW2008, NNPDF2.3 [43], Herapdf1.5 [44], ABM11 [45] and CT10 [46] are shown in figures 3.4 and 3.5. MSTW2008, NNPDF2.3 and CT10 show quite similar results, within about 3% at lower masses and 5% at higher masses. Larger deviations can be seen for ABM11 and Herapdf1.5: ABM11 predicts a 5% larger cross section than FEWZ in the first bin, and Herapdf1.5 a 9% higher cross section in the last invariant mass bin.
26
dσ / dσ
dmee
dmee
FEWZ MSTW
3.4. Comparison between different parton distribution functions
FEWZ NNLO MSTW2008 no PI corrections
FEWZ NNLO MSTW2008 no WZ radiation
MCFM NLO MSTW2008
1.1
MCFM NLO NNPDF2.3
MCFM NLO HERAPDF1.5
MCFM NLO ABM11
MCFM NLO CT10
1.05
1
0.95
0.9
116
200
300
400 500
1000
1500
mee [GeV]
Figure 3.4.: Ratio of different theory predictions for the one-dimensional Drell-Yan cross section, binned in invariant mass, to the FEWZ prediction including corrections for PI and W/Z radiation.
Figure 3.5.: Ratio of different theory predictions using MCFM for the two-dimensional Drell-Yan cross section, binned in invariant mass and absolute rapidity, to the FEWZ prediction. The FEWZ calculation includes corrections for W/Z radiation, but no PI corrections. The left side shows the bin from mee = 116 GeV to mee = 150 GeV, the right side the bin from mee = 500 GeV to mee = 1500 GeV.
4. The ATLAS experiment at the
Large Hadron Collider
In the following chapter the LHC is briefly described and the ATLAS detector with its components is introduced. The method used to determine the luminosity and the simulation of the detector response for a physics event are also discussed.
4.1. Large Hadron Collider
The Large Hadron Collider (LHC) [47] is a particle accelerator at CERN¹ in Geneva. It can be operated with two types of beams, proton beams and heavy-ion beams². The LHC is installed in a 27 km long tunnel which lies up to 175 m beneath the surface. The tunnel was originally built for LEP³. Protons are accelerated⁴ in preaccelerators up to an energy of 450 GeV and are then injected into the LHC. In the configuration used in the year 2012, the protons circulate in bunches, with 50 ns spacing between them, in opposite directions, and collide at a center-of-mass energy of up to √s = 8 TeV. The proton bunches are in turn grouped into larger bunch trains, which are separated by much larger distances than the bunches themselves. During the year 2011 the collisions took place at a center-of-mass energy of √s = 7 TeV. Luminosities up to L ≈ 7 · 10³³ cm⁻²s⁻¹ have been reached. Since early 2013 the LHC has been under construction to reach even higher center-of-mass energies of √s = 13 TeV in late 2014.
4.2. Overview of ATLAS
The ATLAS⁵ experiment [48] is one of the four main experiments⁶ at the LHC. It is a multi-purpose detector, built at one of the four interaction points. ATLAS was constructed to precisely measure electrons, photons, muons and jets in large kinematic regions, to allow tests of the Standard Model and searches for new particles. It consists of several layers of different detector systems, which
¹ Conseil Européen pour la Recherche Nucléaire
² Typically lead is used.
³ Large Electron Positron Collider
⁴ Alternatively, lead ions are also accelerated.
⁵ A Toroidal LHC ApparatuS
⁶ The four main experiments are ATLAS and CMS as multi-purpose experiments, ALICE, which is specialized in heavy-ion collisions, and LHCb, which is specialized to measure the decays of B-hadrons.
surround the beam axis; an overview is given in figure 4.1.
Figure 4.1.: Cut-away view of the ATLAS experiment. The dimensions of the detector
are 25 m in height and 44 m in length. The overall weight of the detector is approximately
7000 tonnes [48].
Coordinate system of ATLAS
The coordinate system used by ATLAS is a right handed Cartesian coordinate
system with its origin at the interaction point, where the protons collide. The
positive x-axis points towards the center of the LHC ring and the y-axis upwards
to the surface. Thus the z-axis points counter-clockwise along the beam axis. The
azimuthal angle φ is defined around the beam axis in the x-y plane. The range of
φ is going from −π to π with φ = 0 pointing towards the direction of the x-axis.
Hence the range 0 to π describes the upper half plane of the detector whereas −π to
0 describes the lower half plane. Instead of a polar angle θ, which is measured from
the positive z-axis, it is convenient to use the pseudorapidity η. It can be calculated
from θ using η = − ln (tan θ/2). The pseudorapidity has
the
advantage that it
E+pz
1
is for massless particles equal to the rapidity y = 2 ln E−pz , which is in good
approximation valid for electrons. Transverse momentum pT , transverse energy ET
and the missing transverse energy ETmiss are commonly measured in the x-y plane,
so pT = √(px² + py²) and ET = √(pT² + m²). For massless particles, pT and ET are
the same. The missing transverse momentum vector is given by the negative vector
sum of all reconstructed transverse momenta, pT^miss = −Σi pT,i^rec. The missing
transverse energy is then defined as ET^miss = |pT^miss|. In different contexts, the
distance ∆R in the η-φ plane is used, defined as ∆R = √(∆η² + ∆φ²).
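These coordinate definitions can be written down in a few lines. The following Python snippet is an illustration, not ATLAS software; it implements η, y and ∆R and checks numerically that η equals y for a massless particle.

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta measured from the +z axis."""
    return -math.log(math.tan(theta / 2.0))

def rapidity(E, pz):
    """y = 1/2 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the eta-phi plane, with phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

# For a massless particle (E = |p|), pseudorapidity equals rapidity:
theta = 0.5
p = 40.0                      # momentum in GeV (illustrative value)
assert abs(pseudorapidity(theta) - rapidity(p, p * math.cos(theta))) < 1e-9
```

Note the φ wrap-around in `delta_r`: since φ is an angle, the difference must be taken modulo 2π.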
Overview of the detector system
The inner detector is the tracking system of ATLAS (described in more detail in
section 4.3) and the closest detector to the beam axis. It has a coverage up to
|η| = 2.5 and consists of three sub-systems: first the pixel detector, followed by
the Semi Conductor Tracker (SCT) and the Transition Radiation Tracker (TRT).
A solenoidal magnetic field of 2 T makes it possible to measure the transverse
momentum of charged particles. The inner detector is additionally designed to
measure vertices and identify electrons.
Next come the electromagnetic and hadronic calorimeters, which measure the energy
of particles. As electromagnetic calorimeter, a liquid argon sampling calorimeter is
used up to |η| < 3.2 (a more detailed description is given in section 4.4). A scintillator
tile calorimeter is used as hadronic calorimeter covering the range |η| < 1.7. The
hadronic endcap calorimeters cover 1.5 < |η| < 3.2 and use, like the electromagnetic
counterparts, liquid argon technology. Finally, there is the liquid argon forward
calorimeter, covering the range 3.2 < |η| < 4.9, which is used for measuring both
electromagnetic and hadronic objects.
The calorimeter is surrounded by the muon spectrometer [49], which consists of
a toroid system [50], separated into a long barrel [51] and two inserted endcap magnets [52], and tracking chambers. The toroid system has an air core and
generates a magnetic field with strong bending power in a large volume within
a light and open structure. Additionally, there are three layers of tracking chambers.
These components of the muon spectrometer have a coverage up to |η| = 2.7 and
define the overall dimensions of the ATLAS experiment. Due to the open structure,
multiple-scattering effects are reduced, leading to a muon momentum resolution of
σpT /pT = 10% at pT = 1 TeV. The muon system also includes trigger chambers,
covering a range up to |η| = 2.4. The muon spectrometer is not described in more
detail, since muons are not studied in this analysis.
A three-stage trigger system (more detailed description in section 4.5) is used to
reduce the rate of pp collisions (≈ 400 MHz) to a rate which can be processed and
stored (≈ 200 Hz). To reduce this rate, the trigger system has to select events
which are of special interest. The first trigger stage, the Level-1 (L1) trigger, uses a
subset of the total detector information to decide whether to continue processing
an event or not. This already reduces the rate to approximately 75 kHz. The
subsequent two trigger stages are the Level-2 (L2) trigger and the event filter,
which reduce the rate further to the needed 200 Hz.
4.3. The inner detector
The inner detector [53] is the ATLAS tracking system and is shown in figure 4.2.
It consists of three subsystems which are mounted around the beam axis. The
superconducting solenoid [54], which produces the 2 T magnetic field needed for
the momentum measurement of charged particles, has a length of 5.3 m and a
diameter of 2.5 m. With the solenoidal magnetic field and the inner detector components, a momentum resolution of σpT /pT = 0.05% · pT [GeV] ⊕ 1% can be achieved.
The subsystems of the inner detector are described in more detail in the following.
Figure 4.2.: Cut-away view of the ATLAS inner detector [48].
4.3.1. Pixel detector
The pixel detector [55] is one of the two precision tracking detectors, with a coverage
of |η| < 2.5. It is the innermost layer of the inner detector, with a distance to the
beam axis of r = 50.5 mm. In the central region, three layers of silicon pixel modules
are cylindrically mounted around the beam axis, while in the endcap regions three discs
each are mounted perpendicular to the beam axis. Its purpose is the measurement
of particle tracks with a very high resolution, to reconstruct the interaction point
(primary vertex) and secondary vertices from the decay of long-lived particles. The
innermost layer of the pixel detector is called the b-layer because of its importance
for reconstructing the secondary vertices of decaying B-hadrons. The pixels used
have dimensions of 50 × 400 µm². The position resolution is 10 µm in the R-φ plane
and 115 µm in z(R) for the central (endcap) region. Due to this fine granularity,
around 80.4 million readout channels are needed.
4.3.2. Semi conductor tracker
The semiconductor tracker is mounted at a distance of 299 mm to 514 mm from the
beam axis and is the second layer of the inner detector. It is a silicon microstrip
detector covering the region |η| < 2.5. Eight strip layers are used, which are in the
central region joined to four layers of small-angle (40 mrad) stereo strips to allow
the measurement of both coordinates. It is designed such that each particle within its
coverage traverses all four double layers. In the endcap region, nine discs on
each side are installed, each using two radial layers of strips. The spatial resolution of
the SCT is 17 µm in the R-φ plane and 580 µm in z(R) for the central (endcap)
region. The total number of readout channels in the SCT is approximately 6.3
million.
4.3.3. Transition radiation tracker
The TRT is the third and last component of the tracking system which provides a
large number of hits (typically 36 per track). It consists of straw tubes with a
diameter of 4 mm and provides coverage up to |η| = 2.0. In the central region the
straw tubes are 144 cm long and parallel to the beam axis. In the endcap region the 37
cm long straws are arranged radially in wheels. The TRT provides R-φ information
for the determination of the transverse momentum with an accuracy of 130 µm. The
straws are surrounded by a transition medium; transition radiation is emitted when
charged particles traverse this medium. The intensity of the transition radiation is
proportional to the Lorentz factor γ = E/m. Electrons have a very small mass and
thus a large γ, so at high energies their transition radiation is above a characteristic
threshold. The intensity for heavy objects like hadrons is much weaker, and thus the
transition radiation can be used to identify electrons. The total number of readout
channels in the TRT is
approximately 351000.
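The mass dependence of the Lorentz factor, which drives the transition radiation intensity, can be illustrated with round numbers; the 20 GeV energy below is an illustrative choice, not a value from the text.

```python
# gamma = E/m in natural units (energies and masses in GeV).
M_ELECTRON = 0.000511   # GeV
M_PION     = 0.1396     # GeV, charged pion as a stand-in for hadrons

def lorentz_factor(energy, mass):
    return energy / mass

E = 20.0  # GeV, illustrative particle energy
gamma_e  = lorentz_factor(E, M_ELECTRON)
gamma_pi = lorentz_factor(E, M_PION)

# At equal energy the electron's gamma -- and with it the transition
# radiation intensity -- exceeds the pion's by more than two orders of
# magnitude, which is the basis of electron identification in the TRT.
assert gamma_e / gamma_pi > 100.0
```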
4.4. The calorimeter system
In ATLAS, the energy of particles (except µ and ν) is measured with sampling
calorimeters, in which layers of passive and active material alternate. When incident particles like electrons, hadrons or photons traverse the calorimeter, they
interact with its material. In the dense passive layers, these incident particles
produce particle showers. The deposited energy of these showers, also called clusters,
can be measured in the active layers and allows conclusions about the energy of the
incident particle.
Electromagnetic and hadronic showers differ, and thus there are separate calorimeters,
one for electrons and photons and one for hadrons. In electromagnetic calorimeters,
the initial particle interacts electromagnetically via pair production and by radiating
Bremsstrahlung: electrons radiate photons via Bremsstrahlung, which then undergo
pair production, while incident photons first undergo pair production. This leads to
more and more particles, which are finally stopped by ionization.
The initial energy E0 of the incident electron or positron decreases exponentially
with E(x) = E0 e−x/X0 until it is completely stopped. The parameter X0 is the
radiation length which is material dependent. The hadronic showering process is
dominated by a succession of inelastic hadronic interactions via the strong force. A
characteristic quantity for the length of a hadronic shower is the absorption length
λ. Hadronic showers are typically longer and broader than electromagnetic ones, and
thus the hadronic calorimeter is placed behind the electromagnetic one.
Figure 4.3 shows a cut-away view of the calorimeter system of ATLAS.
Figure 4.3.: Cut-away view of the ATLAS calorimeter system [48].
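The exponential degradation of the primary electron energy quoted above can be checked numerically. This is a toy illustration of E(x) = E0 e^(−x/X0) with an invented incident energy, not a shower simulation.

```python
import math

def remaining_energy(E0, x, X0):
    """E(x) = E0 * exp(-x / X0), with x and X0 in the same length units."""
    return E0 * math.exp(-x / X0)

E0 = 100.0  # GeV, illustrative incident electron energy
# After one radiation length the energy has dropped to 1/e of E0:
assert abs(remaining_energy(E0, 1.0, 1.0) - E0 / math.e) < 1e-9
# With a barrel depth of at least 22 X0 (section 4.4.1), the surviving
# fraction in this simple model is below 1e-9, i.e. longitudinal
# leakage out of the electromagnetic calorimeter is negligible:
assert remaining_energy(E0, 22.0, 1.0) / E0 < 1e-9
```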
4.4.1. Electromagnetic calorimeter
For the electromagnetic calorimeter of ATLAS [56], lead is used as an absorber
medium and liquid argon as an active medium. The electrodes to measure the energy
in the liquid argon and the lead absorbers are built in an accordion geometry in
order to provide complete and uniform coverage in φ. The thickness of the absorber
plates varies with η in such a way that the energy resolution is optimal [57]. The
electromagnetic calorimeter consists of four different regions. First, there is the
central part up to |η| = 1.475, called barrel calorimeter, which has a thickness of at
least 22X0 . In the region 1.375 < |η| < 3.2 there is the endcap calorimeter which
is again separated into the ’outer wheel’ 1.375 < |η| < 2.5 and the ’inner wheel’
2.5 < |η| < 3.2. The forward calorimeter, which is also used for the measurement of
hadrons, is in the region 3.2 < |η| < 4.9.
In the part of the calorimeter which is intended for precision measurements (|η| <
2.5), it is separated into three layers. Figure 4.4 shows the three layers and the
accordion geometry of the electromagnetic calorimeter. Upstream of the first layer,
in the range |η| < 1.8, there is the so-called presampler, an 11 mm thick layer of
liquid argon. Its purpose is to estimate the energy lost in front of the calorimeter.
The first layer has a granularity of 0.0031 × 0.0982 in η × φ. The cells are also
called “strips”, due to the fine segmentation in η. They allow to distinguish close-by
particles that enter the calorimeter, e.g., two photons from a π0 decay. The second
layer has a coarser granularity of 0.025 × 0.025 in η × φ. It has a thickness of 16X0
and is thus intended to measure the bulk of the energy. The third layer has an even
coarser granularity and serves to correct for the overlap of the energy deposition
with the following hadronic calorimeter. In the region |η| < 3.2, the electromagnetic
calorimeter has an energy resolution of σE /E = 10%/√E[GeV] ⊕ 0.7%.
Figure 4.4.: Sketch of a barrel module in which the different layers and the accordion
geometry are visible. Also shown is the granularity in η and φ of the cells of each of the
three layers [48].
4.4.2. Hadronic calorimeter
The hadronic tile calorimeter [58] is, like the electromagnetic one, a sampling
calorimeter. But instead of lead, iron is used as an absorber and scintillating tiles as
active material. The tile calorimeter is divided into three parts: the tile barrel
up to |η| = 1.0 and two extended barrels covering 0.8 < |η| < 1.7.
In the endcaps a liquid argon calorimeter is used as hadronic calorimeter. It is placed
behind the electromagnetic endcap calorimeter and uses the same cryostats for the
cooling of the liquid argon. The covered range is 1.5 < |η| < 3.2. The hadronic
calorimeter has a jet energy resolution of σE /E = 50%/√E[GeV] ⊕ 3%.
4.5. The trigger system
The ATLAS trigger system [59] is divided into three levels. The hardware-based
Level 1 (L1) [60] trigger performs a fast event selection by searching for objects
with high pT and large total or missing energy. It only uses data with reduced
granularity from the calorimeters and the muon system. Electromagnetic objects
are selected by the L1 trigger if the total transverse energy deposited in the
electromagnetic calorimeter in two neighboring cells of ∆η × ∆φ = 0.1 × 0.1 is
above a certain threshold. The first trigger level has about 2.5 µs for a decision and
reduces the event rate from about 400 MHz to 75 kHz. Regions of interest (RoI)
are defined by the L1 and seeded to the second trigger level, L2 [61]. The L2 uses
the full granularity and precision of all detector systems, but only in the regions
of interest defined by the L1. The L2 trigger also reconstructs tracks using
reconstruction algorithms (see section 5.1). The L2 trigger has a few milliseconds
for its decision and reduces the event rate to about 3 kHz. A further reduction
to the required rate of 200 Hz is done by the event filter (EF) which is seeded by
the decisions of the L2. The event filter reconstructs the complete event using all
available information and already applies several calibrations and corrections. The
events are sorted into different streams which correspond to the physics objects
triggering the event. For instance in this analysis the egamma-stream is used which
contains events with electrons and photons. For events which also pass the last
trigger level, the information of all sub-detectors is recorded.
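The rates quoted above imply the following per-level rejection factors; this is simple arithmetic on the approximate numbers from the text, not measured trigger performance.

```python
# Approximate rates of the ATLAS trigger chain as quoted in the text.
rates_hz = {"interactions": 400e6, "L1": 75e3, "L2": 3e3, "EF": 200.0}

rejection_l1 = rates_hz["interactions"] / rates_hz["L1"]   # ~5300
rejection_l2 = rates_hz["L1"] / rates_hz["L2"]             # 25
rejection_ef = rates_hz["L2"] / rates_hz["EF"]             # 15
total        = rates_hz["interactions"] / rates_hz["EF"]   # 2e6

# The per-level rejections multiply to the overall factor of two million:
assert total == 2e6
assert abs(rejection_l1 * rejection_l2 * rejection_ef / total - 1.0) < 1e-12
```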
4.6. Data acquisition and processing
Each detector component has an on-detector buffer pipeline, which allows the data
to be buffered during the L1 trigger decision. Once an event is accepted by the L1
trigger, the data from the pipelines are transferred off the detector. There the signals
are digitized and transferred to the data acquisition (DAQ) system. The first stage of
the DAQ system, the readout system, stores the data temporarily in local buffers.
The stored data of the RoIs are then solicited by the L2 trigger system. Events
selected by the L2 trigger are transferred to the event-building system, where the
whole event is assembled, and subsequently to the event filter for the final decision.
The information of accepted events is then stored in the so-called RAW data format
on magnetic tape in the CERN computer center.
The further processing and reprocessing happens in the LHC Computing Grid
[62, 63]. The Grid is a network of many computer clusters organized in several
levels, so-called Tiers. The Tier-0 is the CERN computer center, which applies
reconstruction algorithms (for electrons see section 5.1) and calibrations to the
data. The whole information on detector level is transformed into information on
object level in a data format called Event Summary Data (ESD). These ESD are
distributed to the Tier-1 centers, which are located around the world and provide
storage space for the data as well as additional processing power, e.g., for recalibration of the data. Additionally, a copy of the raw data is distributed among the
Tier-1 centers. From the ESD, the Analysis Object Data (AOD) are derived, which
only contain information about the specific physics objects needed for the analysis,
like electrons, muons, jets or photons. From the AODs a further extraction to the
Derived Physics Data (DPD) is done. The DPDs are transferred to the Tier-2 centers,
which provide processing power for physics analysis and Monte Carlo production.
Data needed for an analysis can be copied to local Tier-3 centers; examples of such
Tier-3 centers are the local maigrid and the mainzgrid, which is still under construction.
Data in the D3PD format, a special type of DPD, is used for this analysis. D3PDs
store the information in ROOT Ntuples, a commonly used data format in high energy
physics. The program ROOT [64] is a statistical analysis framework, which is also
used in this analysis. It provides the possibility of analyzing data and various ways to
visualize data in histograms. All histograms shown in this thesis were produced using
ROOT.
4.7. Luminosity determination
The luminosity L is a quantity which describes the performance of an accelerator.
The number of produced events N of a process is directly related to its cross section
σ and the luminosity integrated over time via N = σ ∫ L dt = σ Lint . For a pp
collider the luminosity can be determined by

L = Rinel / σinel ,    (4.1)

where Rinel is the rate of inelastic collisions and σinel is the pp inelastic cross section.
For a storage ring operating at a revolution frequency fr and with nb bunch pairs
colliding per revolution, the luminosity can be rewritten as

L = µ nb fr / σinel ,    (4.2)
where µ is the number of average inelastic interactions per bunch crossing. ATLAS
monitors the delivered luminosity by measuring µ with several detectors and several
different algorithms. These algorithms are for instance based on counting inelastic
events or the number of hits in the detector. When using different detectors and
algorithms, the measured µmeas has to be corrected with the efficiency of the detector
and algorithm, to obtain µ. The luminosity detectors are calibrated to the inelastic
cross section using beam-separation scans, also known as van der Meer (vdM) scans
[65]. Here the absolute luminosity can be inferred from direct measurements of the
beam parameters. The luminosity can be expressed in terms of machine parameters as

L = nb fr n1 n2 / (2π Σx Σy) ,    (4.3)

where n1 and n2 are the numbers of protons in beams one and two, and Σx and Σy
characterize the horizontal and vertical convolved beam widths. By separating the
beams in steps of known distance in a vdM scan, Σx and Σy can be directly measured.
A more detailed description of the methods and sub-detectors used for the luminosity
determination can be found in [66]. The systematic uncertainty of the determination,
obtained by comparing the results from the different sub-detectors and methods, is
2.8% for the 2012 data set [67].
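Equations (4.2) and (4.3) can be evaluated with round, illustrative machine parameters; the numbers below are 2012-like placeholders, not ATLAS measurements.

```python
import math

# Illustrative 2012-like machine parameters (round numbers):
F_R       = 11245.0   # LHC revolution frequency [Hz]
N_BUNCHES = 1380      # colliding bunch pairs per revolution

def lumi_from_mu(mu, n_b, f_r, sigma_inel):
    """Eq. (4.2): L = mu * n_b * f_r / sigma_inel."""
    return mu * n_b * f_r / sigma_inel

def lumi_vdm(n_b, f_r, n1, n2, sigma_x, sigma_y):
    """Eq. (4.3): L = n_b * f_r * n1 * n2 / (2 pi Sigma_x Sigma_y)."""
    return n_b * f_r * n1 * n2 / (2.0 * math.pi * sigma_x * sigma_y)

# mu = 20 interactions per crossing, sigma_inel = 73 mb = 73e-27 cm^2:
L_mu = lumi_from_mu(20.0, N_BUNCHES, F_R, 73e-27)
# 1.5e11 protons per bunch, convolved widths Sigma = 30 um = 3e-3 cm:
L_scan = lumi_vdm(N_BUNCHES, F_R, 1.5e11, 1.5e11, 3e-3, 3e-3)

# Both estimates come out at the 1e33 cm^-2 s^-1 scale quoted for
# 2011/2012 running.
assert 1e33 < L_mu < 1e34 and 1e33 < L_scan < 1e34
```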
4.8. Detector simulation
In section 3.1, the simulation of a physics event was discussed. This simulation was
independent of the detector. The detector itself and its response to the physics
event have to be simulated separately. The program GEANT4 [68] is used for the
detector simulation.
In a first step, GEANT4 simulates the path of the generated particles through the
detector. For this purpose, a detailed model of the detector is implemented, including
all details about the geometry and materials used as well as about the magnetic fields.
The interaction of the particles with the detector material is fully simulated.
Furthermore, additionally produced particles, like photons from Bremsstrahlung and
particles in electromagnetic or hadronic showers, are also propagated through the
detector. The result is a precise record of how much energy was deposited in which
part of the detector at which time.
In a second step, the response of the detector components to the deposited energy
and of the electronics of the readout system is simulated. In this step, effects like
calibrations or dead readout channels are also taken into account, to reproduce the
conditions present during data taking. The information is stored in the same way as
for data taking; additionally, truth information about the particles is added.
Whereas the physics simulation is comparatively fast, the simulation of the detector
takes significantly longer. For instance, the simulation of an event
pp → W ± + X → e± νe + X takes about 19 minutes [69].
5. Electrons in ATLAS
In the following chapter, first the reconstruction of tracks and electrons is discussed,
followed by the identification of electrons. Since electrons and positrons only differ
in the curvature of their tracks but otherwise have the same signature, positrons are
from now on also denoted as electrons in this thesis.
5.1. Reconstruction
5.1.1. Track reconstruction
The aim of track reconstruction is to reconstruct the path of a charged particle
through the inner detector. In the case of a muon, the path through the muon
spectrometer is also relevant. However, since this analysis concentrates on electrons,
muon reconstruction is not discussed further.
In a first step, hits in the pixel detector and first layer of the SCT are transformed
into three dimensional space points. The hits in the TRT are transformed into drift
circles using the timing information. Track seeds are formed from combinations of
space points in the three pixel layers and the first SCT layer. These seeds
are then extended up to the fourth layer of the SCT using a Kalman filter [70],
forming track candidates. The track candidates are fitted and extended by the
TRT hits. After all tracks are fitted, vertex finder algorithms are used to assign
the tracks to their primary vertices. After the vertex reconstruction, additional
algorithms search for secondary vertices and photon conversions. A more detailed
description of the track reconstruction is given in [71].
5.1.2. Electron reconstruction
Reconstruction of an electron candidate always starts from an energy deposition
(cluster) in the electromagnetic calorimeter. To search for such a cluster, a
sliding-window algorithm is used. The electromagnetic calorimeter is first divided
into an η-φ matrix with Nη = 200 and Nφ = 256, forming matrix elements with a
size of ∆η × ∆φ = 0.025 × 0.025. In a first step, a window of size 3 × 5,
in units of 0.025 × 0.025 in η × φ space, runs over the matrix and searches for an
energy deposition with a transverse energy above 2.5 GeV. In a second step, a track
is searched which matches the identified clusters. The distance between the track
impact point and the cluster center is required to satisfy |∆η| < 0.1. To account
for radiation losses due to Bremsstrahlung an asymmetric ∆φ cut is chosen. Track
impact point and cluster center have to have a ∆φ < 0.1 on the side where the
extrapolated track bends, and ∆φ < 0.05 on the other side. The cluster is discarded
as an electron candidate if no track is matched to it. If there is more than one
track matching to the cluster, the ones with hits in the pixel detector and SCT are
preferred and the one with the smallest ∆R to the cluster is chosen. After track
matching, the electron cluster is rebuilt using a 3 × 7 (5 × 5) window in the barrel
(endcap). The larger window in φ in the barrel, and the larger window in η in the
endcap region is chosen to account for radiation losses due to Bremsstrahlung. The
cluster energy is then determined by summing the estimated energy deposited in the
material before the electromagnetic calorimeter, the measured energy deposited in
the cluster, the estimated energy deposited outside the cluster and the estimated
energy deposited beyond the electromagnetic calorimeter. A more detailed description
of the electron reconstruction is given in [72].
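The sliding-window seed search described above can be sketched as follows. This toy version works on an invented tower grid, ignores the φ wrap-around and any calibration, and simply returns the highest-sum window above the seed threshold.

```python
# Toy sliding-window cluster search on an eta-phi tower grid
# (3 x 5 towers of 0.025 x 0.025, seed threshold 2.5 GeV in E_T).

def find_cluster(grid, win_eta=3, win_phi=5, threshold=2.5):
    """Return ((i, j), E_T) of the highest-sum window, or None if all
    windows are below the threshold."""
    n_eta, n_phi = len(grid), len(grid[0])
    best = None
    for i in range(n_eta - win_eta + 1):
        for j in range(n_phi - win_phi + 1):
            e_sum = sum(grid[i + a][j + b]
                        for a in range(win_eta) for b in range(win_phi))
            if best is None or e_sum > best[1]:
                best = ((i, j), e_sum)
    if best is None or best[1] < threshold:
        return None
    return best

# A quiet grid with a single 4 GeV deposit is found; an empty grid is not:
grid = [[0.0] * 10 for _ in range(10)]
grid[4][4] = 4.0
pos, et = find_cluster(grid)
assert et == 4.0
assert find_cluster([[0.0] * 10 for _ in range(10)]) is None
```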
5.2. Identification
A large part of the reconstructed electron candidates are not real electrons. This
background, which consists dominantly of jets, has to be rejected. While rejecting
jet events, it is necessary to make sure that a sufficient amount of real electrons is
kept. ATLAS provides an electron identification based on cuts on track and shower-shape
variables [72]. Three different identification levels, loose, medium and tight 1,
are defined. The cut values of the three identification levels are optimized in such a
way that a signal efficiency of 90% for loose, 80% for medium and 70% for tight is
achieved, whereas the background rejection increases from loose to tight. The cuts
of the three identification levels are briefly introduced and explained in the following.
A summary of all cuts of the three identification levels is given in table 5.1, where
some of the cut variables are also defined.
5.2.1. Identification level “loose”
The loose identification level imposes restrictions on the ratio between the transverse
energy in the electromagnetic and hadronic calorimeter to reject jets which would
cause a high energy deposition in the hadronic calorimeter. If the energy deposited
in the first layer of the electromagnetic calorimeter is more than 0.5% of the total
deposited energy, further cuts on the first layer deposition are imposed. The total
shower width in the first layer wstot is defined as
wstot = √( Σi Ei (i − imax)² / Σi Ei ) ,    (5.1)
1 loose, medium and tight are internally also called loose++, medium++ and tight++.
where i is the index of the strip in the first layer and imax the index of the strip
with the shower maximum. Typically wstot is defined summing over 20 strips in η.
Jets have broader showers than electrons and thus can be rejected by restricting
the shower width towards lower values. A jet can contain π0 mesons which decay
dominantly into two photons, leading to two nearby energy depositions in the electromagnetic calorimeter. To reject photons from such a decay π0 → γγ, one can search
for a second maximum in the energy deposition of the first layer. The quantity
Eratio is the difference between the highest and second highest energy deposition in
one of the strips, divided by their sum. If the difference between these energies is
below a certain value, the candidate is assumed to originate from a π0 decay and is
rejected.
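The two first-layer discriminants follow directly from their definitions. In this sketch the strip energies are invented, and Eratio simply uses the two highest strips rather than the two highest true local maxima.

```python
import math

def w_stot(strip_energies):
    """Total shower width over a row of first-layer strips (eq. 5.1)."""
    i_max = max(range(len(strip_energies)), key=strip_energies.__getitem__)
    e_tot = sum(strip_energies)
    return math.sqrt(sum(e * (i - i_max) ** 2
                         for i, e in enumerate(strip_energies)) / e_tot)

def e_ratio(strip_energies):
    """(E_1st - E_2nd) / (E_1st + E_2nd), here simplified to the two
    highest strips instead of the two highest local maxima."""
    e1, e2 = sorted(strip_energies, reverse=True)[:2]
    return (e1 - e2) / (e1 + e2)

narrow = [0.1, 1.0, 10.0, 1.0, 0.1]   # electron-like: one sharp maximum
broad  = [2.0, 3.0, 4.0, 3.0, 2.0]    # jet-like: wide energy profile
double = [0.1, 6.0, 0.5, 5.0, 0.1]    # pi0-like: two comparable maxima

assert w_stot(narrow) < w_stot(broad)     # electrons give the smaller width
assert e_ratio(narrow) > e_ratio(double)  # pi0 candidates give small Eratio
```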
In the second layer of the electromagnetic calorimeter, restrictions on the ratio Rη
between the energies deposited in a window of 3 × 7 cells to the window of 7 × 7
cells are imposed. By restricting the ratio to higher values, it is ensured that not a
broad symmetric shower is selected, as is typical for hadronic showers, but a shower
broad in φ, as expected due to radiated Bremsstrahlung2. A similar quantity, sensitive
to the same issue, is the lateral shower width wη,2. It is defined by
same issue, is the lateral shower width wη,2 . It is defined by
sP
P
2
Ei ηi
Ei ηi2
i
i
P
− P
,
(5.2)
wη,2 =
i Ei
i Ei
where i runs over the calorimeter cells and ηi denotes the η position of cell i. To
ensure the matching between the chosen track and the
cluster, it can be required that the distance of the impact point of the track and
the η of the cluster in the first layer is below a certain value. To ensure that the
track originates from a primary vertex, a cut is imposed on the transverse distance
between the track and vertex. The track is also required to have a sufficient amount
of hits in the pixel and SCT detector.
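Rη and the lateral width wη,2 (eq. 5.2) can likewise be sketched; all cell positions and energies below are invented for illustration.

```python
import math

def r_eta(e_3x7, e_7x7):
    """R_eta = E(3x7) / E(7x7) in the second calorimeter layer."""
    return e_3x7 / e_7x7

def w_eta2(cells):
    """Lateral shower width (eq. 5.2); cells = [(eta_i, E_i), ...].
    The max() guards against tiny negative values from rounding."""
    e_tot = sum(e for _, e in cells)
    mean  = sum(e * eta for eta, e in cells) / e_tot
    mean2 = sum(e * eta ** 2 for eta, e in cells) / e_tot
    return math.sqrt(max(mean2 - mean ** 2, 0.0))

# A shower contained in one cell has (numerically) zero lateral width:
assert w_eta2([(0.0125, 5.0)]) < 1e-6
# Spreading the same energy over neighbouring cells increases it:
spread = [(-0.025, 1.0), (0.0, 3.0), (0.025, 1.0)]
assert w_eta2(spread) > 0.01
```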
5.2.2. Identification level “medium”
The medium identification level imposes the same cuts as the loose identification
level, but with partially tighter restrictions. Additionally, to ensure that the shower
barycenter is in the second layer, the ratio between the energy in the third layer
and the complete cluster energy is restricted. This cut is only imposed on clusters
with pT lower than 80 GeV, since for growing pT the barycenter moves towards
the hadronic calorimeter. Electrons should cause transition radiation in the TRT
above a certain threshold. It is required that a sufficient amount of the TRT hits
are such high-threshold hits. To reject tracks which are coming from secondary
vertices or photon conversions, it is required that the associated track has a hit in
the first layer of the pixel detector.
2 The shower is expected to be broader in φ due to the radiated photons from Bremsstrahlung, which are measured close in φ to the electron cluster.
5.2.3. Identification level “tight”
The tight identification level imposes the same cuts as the medium level with again
partially tighter restrictions. To ensure that the track and the cluster belong to the
same physics object, a cut is made on the ratio of the measured momentum and the
measured energy. To tighten the matching between track and cluster, an additional
|∆φ| cut is imposed and to further constrain the track quality a minimum number
of hits in the TRT is required. Electron candidates which are flagged by a certain
algorithm as objects coming from a photon conversion are also rejected.
Besides looking for objects without a hit in the innermost layer of the pixel
detector, which indicates a photon that converted in the first layer or afterwards,
the algorithm searches for additional conversion vertices associated to the object.
A conversion vertex is a vertex with two oppositely charged tracks associated to it
which form a very low invariant mass and can therefore originate from a photon.
If the track associated to the electron candidate comes from such a conversion
vertex, or such a conversion vertex is near the track, the candidate is also flagged as
an electron from a photon conversion.
5.2.4. Isolation
Single electrons should produce a shower located in a rather small region, whereas
jets produce broader showers. The sum of the energy within a cone of radius ∆R
around the cluster center can be used to discriminate between isolated electrons,
e.g., from W or Z decays, and non-isolated electrons in jets, e.g., from meson decays.
Such an isolation requirement is not imposed by the three identification
levels and can be applied additionally to electron candidates.
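A cone-based isolation sum can be sketched as follows. The core-exclusion radius of 0.05 and all deposits below are invented for illustration; they are not ATLAS working-point values.

```python
import math

def cone_et_sum(cluster, deposits, cone=0.3, core=0.05):
    """Sum the E_T of deposits with core < Delta R < cone around the
    cluster axis; the core region excludes the electron's own energy."""
    def delta_r(a, b):
        dphi = (a[1] - b[1] + math.pi) % (2.0 * math.pi) - math.pi
        return math.hypot(a[0] - b[0], dphi)
    return sum(et for eta, phi, et in deposits
               if core < delta_r((eta, phi), cluster) < cone)

cluster = (0.0, 0.0)                          # (eta, phi) of the cluster
iso    = [(0.0, 0.0, 40.0), (1.2, 2.0, 5.0)]  # isolated: cone is empty
in_jet = [(0.0, 0.0, 40.0), (0.1, 0.1, 8.0), (-0.15, 0.05, 6.0)]

assert cone_et_sum(cluster, iso) == 0.0       # isolated electron
assert cone_et_sum(cluster, in_jet) == 14.0   # 8 + 6 GeV of nearby activity
```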
Category of selection cut / Explanation

Loose identification

Hadronic leakage
• Rhad,1 = ET,had,1 /ET,em , if |η2 | < 0.8 or |η2 | > 1.37
• Rhad = ET,had /ET,em , if 0.8 < |η2 | < 1.37
(ET,had(,1) is the energy (in the first layer) of the hadronic calorimeter and ET,em the energy in the electromagnetic calorimeter)

First layer of the electromagnetic calorimeter
If f1 = E1 /E > 0.005:
• wstot , absolute shower width
• Eratio = (E1st − E2nd )/(E1st + E2nd ), with Enth the nth highest energy in the cluster

Second layer of the electromagnetic calorimeter
• Rη = E3×7 /E7×7 , where Ex×y is the energy in a window of x×y cells
• Lateral shower width wη,2

Track quality
• |∆η1 | < 0.015, matching between track and cluster in the first calorimeter layer
• |d0 | < 5 mm, transverse impact parameter (transverse distance between track and assigned vertex)
• NSI + NSI,outliers ≥ 7, outlier hits are hits near the track which are not directly matched to it

Hits in the pixel detector
• NPix + NPix,outliers ≥ 1

Medium identification

Third layer of the electromagnetic calorimeter
• f3 = E3 /E

Transition radiation tracker
• RTRT = NTRT,high-threshold /NTRT , ratio between TRT hits above a certain threshold and all TRT hits

Track quality
• |∆η1 | < 0.005, matching between track and cluster in the first calorimeter layer

Pixel detector
• Hit in the first pixel layer

Tight identification

Transverse impact parameter
• |d0 | < 1 mm, transverse distance between track and assigned vertex

Agreement cluster and track
• Ratio of cluster energy to track momentum E/p
• Distance between cluster and track |∆φ| in second calorimeter layer

Transition radiation tracker
• Number of hits in TRT, NTRT

Photon conversion
• Exclude candidates which are tagged as conversion electrons
Table 5.1.: List and explanation of the identification cuts made for the three identification levels. The cut values of the variables depend, if not explicitly given, on η and/or pT . The cut values of cuts already introduced at a lower level partially become tighter towards a higher identification level. An integer subscript of a variable refers to the layer of the calorimeter, if not defined differently.
6. Monte Carlo simulation
Monte Carlo simulations are used in this analysis to predict several processes of
the Standard Model, in order to compare them with data or to calculate corrections
for detector effects. The simulation of processes like the Drell-Yan process can be
divided into two parts: the simulation of the physical process, which is described
in section 3.1, and the simulation of the detector response, which is described in
section 4.8. All Monte Carlo samples used are produced centrally by the ATLAS
collaboration.
In this chapter, all Monte Carlo simulations used are described, and it is explained
how these simulations are corrected for small differences with respect to data.
6.1. Simulated processes
In the following, the simulation of the signal process and of processes which can
lead to at least two electrons in the final state are described. Additionally, a sample
of a process which can lead to one electron is described; this sample is used later on
in background studies.
6.1.1. Drell-Yan process
The Drell-Yan process (pp → Z/γ ∗ + X → e+ e− + X) is the signal process in
this analysis. The simulation of this process is used to calculate efficiencies and
acceptances and to compare the theory with the measurement. Thus it is crucial
to have a very precise prediction for this process. Powheg [73] with the CT10
PDF is used as a generator for the matrix element of the hard scattering process.
Powheg provides matrix elements calculated at NLO in QCD. The modeling of
parton showers, hadronization and particle decays is done with Pythia8 [74].
At invariant masses higher than the Z-resonance, the Drell-Yan spectrum falls
steeply. Since there is only limited computing time for the generation of the
Monte Carlo, the process is split into ranges of the invariant mass of the
Z/γ ∗ to have sufficient statistics also at high invariant masses. For the dominant
region from 60 GeV to 120 GeV, a Monte Carlo sample was used which simulates
the process starting from 60 GeV; from this sample, only events generated in a
mass window between 60 GeV and 120 GeV were considered. To have
sufficient statistics, 14 different samples were used above 120 GeV up to 3000
GeV and an additional one above 3000 GeV. For these samples, events were
generated only in the given mass window; the samples are then joined together, weighted with
the associated cross section. A list of all Monte Carlo samples used can be found
in the appendix in table A.1.
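The joining of the mass-binned samples can be illustrated with a minimal sketch: each sample receives a per-event weight σ·L/N so that all samples are normalized to the same integrated luminosity. The sample boundaries, cross sections and event counts below are placeholders, not the values of the actual samples (those are listed in table A.1):

```python
# Hypothetical mass-binned samples: (m_low [GeV], m_high [GeV],
# cross section sigma [pb], number of generated events N).
samples = [
    (60.0,  120.0, 1.1e3, 1.0e7),
    (120.0, 180.0, 9.8,   5.0e5),
    (180.0, 250.0, 1.6,   5.0e5),
]

lumi = 20300.0  # integrated luminosity in pb^-1 (20.3 fb^-1)

def sample_weight(sigma, n_generated, lumi):
    """Per-event weight that normalizes a sample to the data luminosity."""
    return sigma * lumi / n_generated

weights = [sample_weight(s[2], s[3], lumi) for s in samples]
# Events from all samples can now be filled into one histogram,
# each entry weighted with its sample's weight, giving a smooth
# spectrum across the sample boundaries.
```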
To get a more precise prediction at NNLO, so-called k-factors are applied to
reweight the underlying cross section generated by Powheg from NLO to NNLO.
For this, mass-dependent k-factors to reweight the Monte Carlo to the FEWZ
prediction (see section 3.2.2) are obtained by a polynomial fit to the ratio between
the FEWZ and the Powheg cross section. The corrections cover real W /Z
radiation off electrons and are typically on the order of 3%. The fitted function was
provided by ATLAS [39].
In the same way, an additional k-factor to correct for PI processes is calculated and
fitted with respect to the FEWZ prediction without the photon-induced production
channel. This fitted function was also provided by ATLAS [39].
6.1.2. Top processes
tt̄ process
The decay of a top-antitop pair can, like the Drell-Yan process, lead to two real
electrons. Over 99.9% of the top and antitop quarks decay into b and b̄ under
emission of two W -bosons. These W -bosons can decay directly, or via τ -leptons,
into electrons. As a generator for this process, MC@NLO [75] was used. MC@NLO
provides, similar to Powheg, the matrix element of the hard scattering process at
NLO in QCD. The modeling of parton showers and hadronization is done with
Herwig [76] using the CT10 PDF. The Monte Carlo sample was filtered during
event generation for decays with at least one lepton. The efficiency of this filter
has to be multiplied with the cross section used by the Monte Carlo generator to
get the correct cross section of the sample. Positive and negative weights are
provided by MC@NLO, which have to be applied to get an NLO prediction. The
predicted tt̄ cross section for pp collisions at √s = 8 TeV is
σtt̄ = 252.89 +6.39/−8.64 (scale) +7.56/−7.30 (mt ) +11.67/−11.67 (PDF+αs )
pb. It has been calculated by ATLAS at NNLO in QCD using Hathor [77]. The
uncertainties correspond to the renormalization and factorization scale uncertainty,
a ±1 GeV variation of the top mass, and the PDF and αs uncertainty. To get a
better description of the tt̄ process, this cross section is used for the normalization.
A table with details about the sample can be found in the appendix in table A.2.
tW process
One possibility to produce a single top quark is the conversion of a b-quark from
the sea of the proton to a top quark under radiation of a W -boson. Thus a W -boson
and a top quark occur in the final state, which can then further decay into
two electrons. As a generator for this process, MC@NLO was used. The modeling
of parton showers and hadronization is done afterwards with Herwig using the
CT10 PDF. The predicted NNLO tW cross section for pp collisions at √s = 8 TeV
is σtW = 22.37 ± 1.52 pb [78]. To get a better description of the tW process, this
cross section is used for the normalization. A table with details about the sample
can be found in the appendix in table A.2.
6.1.3. Diboson processes
Processes relevant for this analysis are those where two Z-bosons, two W -bosons
or one Z- and one W -boson are produced. These W - and Z-bosons can then further
decay into electrons. The processes were generated at LO with Herwig using the
CTEQ6L1 PDF [79]. The samples were filtered for decays with at least one lepton.
Since the diboson spectrum falls steeply with invariant mass, two additional
mass-binned samples were produced. Here only events were generated where the
decay leads to at least two electrons which build up an invariant mass in a certain
window. If there are more than two electrons, the pair with the highest invariant
mass is chosen. The inclusive sample is used up to an invariant mass of 400 GeV,
a second sample from 400 GeV to 1000 GeV and a third sample above 1000 GeV.
The diboson cross sections for pp collisions at √s = 8 TeV are known up to NLO
[80]. These NLO cross sections were used to normalize the samples to get a better
description of the processes. A table with the cross sections used and details about
the samples used can be found in the appendix in table A.3.
6.1.4. W process
The decay of a W -boson can lead to one electron. The process was generated
with Powheg using the CT10 PDF. The modeling of the parton showers and
hadronization is done afterwards by Pythia8. Two samples are used, one with the
process W + → e+ νe and the other one with W − → e− ν̄e . The W cross section for pp
collisions at √s = 8 TeV is known up to NNLO [80]. These cross sections were used
to normalize the samples to get a better description of the process. A table with
the cross sections used and details about the samples can be found in the appendix
in table A.4.
6.2. Correction of simulation
Some properties in the simulation are modeled inaccurately. To get the best possible
match between simulation and data, these quantities are corrected in the simulation.
6.2.1. Pile-up
There are two different sources of pile-up. On the one hand, there is in-time pile-up,
which quantifies the number of additional inelastic collisions per event. A good
quantity for the in-time pile-up is the number of primary vertices1 nP V . Additionally there
1 Number of vertices with more than two tracks.
is the out-of-time pile-up, which consists of signals in the detector coming from
earlier crossings of the proton bunches. A good quantity for the out-of-time pile-up
is the number of interactions ⟨µ⟩ averaged over one bunch train and a luminosity
block. These quantities depend strongly on the settings of the LHC, such as the
number of protons in a bunch and the spacing between different proton bunches.
Since the physics simulation takes place before or during the time of data taking,
the parameters for the pile-up distribution of the final data are not known. Thus
approximate distributions are simulated which are meant to be matched to the actual
data. To adjust the simulation, every event is reweighted using a reweighting tool2
provided by ATLAS [81]. It has been found difficult to describe both distributions in
data equally well with MC, thus the reweighting was adjusted to better fit the nP V
distribution. Figure 6.1 shows the distribution of ⟨µ⟩ before and after reweighting.
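The reweighting step can be sketched as a histogram ratio in nP V ; the binning and inputs below are illustrative, while the actual analysis uses the ATLAS pile-up reweighting tool:

```python
import numpy as np

def pileup_weights(npv_data, npv_mc, nbins=40, rng=(0, 40)):
    """Per-bin weights data/MC for the number of primary vertices,
    used to reweight each simulated event."""
    h_data, edges = np.histogram(npv_data, bins=nbins, range=rng, density=True)
    h_mc, _       = np.histogram(npv_mc,   bins=nbins, range=rng, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        w = np.where(h_mc > 0, h_data / h_mc, 0.0)
    return w, edges

def event_weight(npv, weights, edges):
    """Look up the pile-up weight for one simulated event."""
    i = np.clip(np.searchsorted(edges, npv, side="right") - 1,
                0, len(weights) - 1)
    return weights[i]
```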
Figure 6.1.: The distribution of ⟨µ⟩ is shown on the left side before and on the right side
after reweighting.
6.2.2. Energy smearing
In the Monte Carlo simulation, a too optimistic energy resolution of the electromagnetic
calorimeter is assumed. For this reason the simulated energy is smeared by
a correction following a Gaussian distribution. The width of the Gaussian
distribution is determined by selecting a sample of Z → ee and J/ψ → ee candidates
and comparing the reconstructed width of the invariant mass distribution in data
and simulation. The determination of the energy smearing is done by the electron
performance group of ATLAS [82], which also provides a software tool3 which is
2 PileupReweighting-00-02-09
3 egammaAnalysisUtils-00-04-17/EnergyRescalerUpgrade
used in this analysis. The corrections are on the order of one percent, with slightly
higher corrections around the transition region between the detector barrel and the
detector endcaps.
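A minimal sketch of such a smearing follows, assuming a single additional Gaussian constant term of 1% (an illustrative value; the actual corrections are η-dependent and provided by the ATLAS tool):

```python
import random

def smear_energy(e_true, extra_constant=0.01, rng=random):
    """Smear the simulated electron energy with an additional Gaussian
    constant term (here 1%, an illustrative value) so that the simulated
    resolution matches the one observed in data."""
    return e_true * rng.gauss(1.0, extra_constant)
```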
6.2.3. Efficiency corrections
The probability to select a real electron in the analysis is the product of the
efficiencies of three main steps, namely the application of the trigger algorithms, the
reconstruction of the electron object and the specific electron identification criteria.
For these three steps, the efficiency in data and in simulation shows small
differences. To correct for these differences, scale factors are derived which are defined
as wSF = εdata /εM C , where ε is the efficiency of a certain identification step. The
efficiency in data, εdata , is measured in a sample of Z candidates which is obtained
using a so-called "tag and probe" method. In this method an electron candidate
with a very strict identification is selected and called tag. Then a second electron
candidate, called probe, is selected which builds with the tag a pair with an
invariant mass in a window around the Z-peak. With this probe the efficiency is
studied. This method provides a clean sample of probe electrons, since the region
of the Z-peak is dominated by real electrons. The efficiency in simulation is
measured simply by using the same tag and probe method on a Monte Carlo sample
simulating pp → Z/γ ∗ + X → e+ e− + X. All scale factor weights are derived by the
ATLAS electron performance group [83], which provides a tool4 used in this analysis.
The derived scale factor weights are binned in electron pT . They typically deviate
from one on the order of one percent and are applied as weights on a single-object basis.
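The tag-and-probe efficiency and the resulting scale-factor weight can be sketched as follows; the functions are illustrative, not the interface of the ATLAS tool:

```python
def tnp_efficiency(probes):
    """Tag-and-probe efficiency: fraction of probe electrons (selected in
    a Z-peak mass window against a tight tag) that pass the criterion.
    `probes` is a list of booleans: True if the probe passes."""
    return sum(probes) / len(probes)

def scale_factor(eff_data, eff_mc):
    """Per-electron scale factor w_SF = eps_data / eps_mc, applied as a
    weight to each simulated electron."""
    return eff_data / eff_mc

def event_sf(sf_per_electron):
    """An event with several selected electrons gets the product of the
    per-electron scale factors as its weight."""
    w = 1.0
    for sf in sf_per_electron:
        w *= sf
    return w
```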
Trigger scale factor
The scale factors for a certain trigger are measured by selecting a sample of
Z-candidates with a different trigger [84]. The efficiency that the probe fires the
trigger under study is then measured.
Reconstruction scale factor
For the measurement of the reconstruction scale factor it is assumed that the reconstruction efficiency of clusters in the electromagnetic calorimeter is 100%. Studies
have shown [72] that this is a good approximation. To measure the efficiency of
the track reconstruction and track-cluster matching, probes are chosen which are
reconstructed as a cluster in the electromagnetic calorimeter.
Identification and isolation scale factor
To measure the identification scale factor, the efficiency that the probe fulfills a
certain identification level is measured [83]. Additionally a scale factor is derived
4 ElectronEfficiencyCorrection-00-00-09
for the efficiency that an object, which fulfills a certain identification level, is also
isolated [84].
7. Data and selection criteria
In this chapter the data set used is discussed and the event and electron selection is
presented. Finally the selected data is compared to the Monte Carlo simulation of
the Drell-Yan process in the region of the Z-resonance.
7.1. Data
In this analysis the full 2012 data set delivered by the LHC and recorded by ATLAS
is used. The data taking period was from April 2012 to December 2012 and the
data set corresponds to a total integrated luminosity of 21.7 fb−1 . Figure 7.1 shows
the sum of the integrated luminosity by day delivered by the LHC and recorded by
ATLAS.
Figure 7.1.: Sum of the integrated luminosity delivered by the LHC (in total 23.3 fb−1 ,
shown in green) and recorded by ATLAS (in total 21.7 fb−1 , shown in yellow) by day for
data taking in 2012 [85].
7.2. Event selection
To reduce the amount of data to analyze and the amount of needed disk space,
the data set is preselected. The preselection requires that in an event at least two
objects are reconstructed as electron candidates where one has pT > 23 GeV and
the other pT > 14 GeV. The analysis is done on this preselected data set.
There are several quality criteria which have to be fulfilled for the data. For instance
there have to be stable beams and collisions at the LHC, the magnetic fields of the
detector have to be powered and the data acquisition has to work properly. Also all
important detector components for electrons, like tracking system, electromagnetic
and hadronic calorimeter and the trigger have to work properly.
The data is divided into large periods of some weeks where the run conditions were
the same. These long periods are labeled with letters from A to L in alphabetical
order1 . The data is then further divided into short periods of approximately one
minute, where the instantaneous luminosity is constant. These periods are also
called luminosity blocks. For these luminosity blocks there are lists (Good Runs
List) available, which store the blocks which can be used for physics analyses. All
events have to fulfill a trigger2 which requires at least two energy depositions in the
electromagnetic calorimeter which have ET > 35 GeV and ET > 25 GeV. For these
energy depositions, requirements on the shape of the shower and the leakage into
the hadronic calorimeter are imposed. After applying such a list and requiring such a
trigger for electrons, the integrated luminosity of the data set is reduced to 20.3 fb−1 .
Hence this is the number quoted as the luminosity of the data set. This is the trigger
with the lowest available pT thresholds that simultaneously imposes identification
criteria similar to the reconstruction criteria.
During data taking it can happen that, because of occurring problems, the trigger
system has to be restarted. In the luminosity block after such a restart there can
be incomplete events (where some detector information is missing from the event).
This very small fraction of events is flagged and not considered in the analysis. To
ensure the quality of an event it is required that at least one vertex with more than
two tracks is present. In addition, events are discarded where a noise burst was in
the electromagnetic or hadronic calorimeter. Such noise bursts could fake energy
depositions in the calorimeter and would make an accurate energy measurement of
an energy deposition impossible.
Table 7.1 shows the number of events remaining after each selection cut. It can be
seen that the trigger requirement strongly reduces the number of events to a
subset which is interesting for the analysis. The requirements to ensure the quality
of the triggered events reduce the number only very slightly.
1 In periods F and K no data for physics analyses was taken.
2 EF_g35_loose_g25_loose
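The Good Runs List selection described above can be sketched as a lookup of (run, luminosity block); the run numbers and block ranges below are hypothetical, and the real GRL is an XML file read by ATLAS tools:

```python
# Hypothetical Good Runs List: run number -> list of (first, last)
# luminosity-block ranges that are good for physics.
GOOD_RUNS = {
    200842: [(1, 120), (125, 400)],
    200863: [(10, 350)],
}

def passes_grl(run, lumi_block, grl=GOOD_RUNS):
    """True if the event's run and luminosity block are in the
    Good Runs List."""
    for first, last in grl.get(run, []):
        if first <= lumi_block <= last:
            return True
    return False
```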
Selection cut                                           Number of events
Event passes Good Runs List                             389741202
Trigger for two energy depositions
in the electromagnetic calorimeter                      41475359
Removal of events with incomplete
detector information                                    41475331
Event has at least one vertex
with more than two tracks                               41475220
Veto on noise burst in the
electromagnetic calorimeter                             41383931
Veto on noise burst in the
hadronic calorimeter                                    41383930
Table 7.1.: The table shows the number of events which remain after each selection cut.
Preselected data was used, where one electron candidate with pT > 23 GeV and one with
pT > 14 GeV was required.
7.3. Electron selection
In the selected events, pairs of electron candidates have to be found. Therefore,
several selection criteria are applied to the single electrons and to the pairs. These
selection criteria are chosen such that they reduce background from other
physics processes. Each pair of electron candidates consists of a leading and a
subleading candidate, where the leading candidate is the one with the higher pT and
the subleading one that with the lower pT .
First, all events are selected with at least two electron candidates reconstructed by
a reconstruction algorithm which first searches for an energy deposition in the
electromagnetic calorimeter and then searches for a track matching this energy
deposition. A more detailed description of the electron reconstruction can be found
in section 5.1. To have tracking information, it is required that the electron
candidates are detected in the central detector region |η| < 2.47. Additionally, the
transition region of the electromagnetic calorimeter, 1.37 < |η| < 1.52, is excluded,
since here the energy resolution is worse. With an object-quality check it is ensured
that the electron is measured in a region where the electromagnetic calorimeter was
working properly at that time. This excludes electron candidates in regions where,
for instance, some electronic device was broken or problems with the high-voltage
supply occurred. The pT cuts for the electron candidates are chosen to be 5 GeV
above the trigger requirements, i.e., leading candidate pT > 40 GeV, subleading candidate
pT > 30 GeV. These cuts are chosen to ensure that the trigger is fully efficient. To
reduce background, both electron candidates are first required to fulfill the medium
electron identification, described in section 5.2. The candidates are in addition
required to fulfill a calorimeter isolation. The cut value on the isolation is less strict
for the subleading candidate, since it most likely has lower pT because it radiated
bremsstrahlung, which leads to a worse calorimeter isolation. The cut values for
the isolation are described by linear functions depending on pT . The functions (see
table A.5 in the appendix) are chosen such that the cut has an efficiency
of 99%. No further requirements are made on the charge of the electron candidates,
since for very high transverse momentum the charge identification efficiency
degrades. For example, for an electron with pT = 1 TeV, the efficiency to reconstruct
the correct charge is decreased to 95% [86]. It is also very difficult to measure the
charge identification efficiency at high pT , and thus derived scale factors would come
with large systematic uncertainties. The pairs are required to have an invariant mass
of mee > 66 GeV. If there is more than one pair in an event, all combinations are
considered. This is the case in less than one per mille of the events. Table 7.2
shows the number of events with at least two electrons remaining after each selection
cut. The efficiency of each cut, studied in the signal Drell-Yan Monte Carlo, can be
seen in table 7.3. Here the relative fraction of electron pairs, coming from a Z-boson,
passing a selection step with respect to the previous selection step is given for two
ranges of invariant mass. It can be seen that the trigger, because of the high ET
thresholds, only selects about 35% of all Z-bosons in an invariant mass window from
66 − 116 GeV. In an invariant mass window of 500 − 600 GeV this efficiency rises
to 80%. The reconstruction, η and object quality cuts have efficiencies from 95%
up to 100% in both invariant mass windows. The pT cut for the electrons then shows
the same behavior as the trigger: in the window of 66 − 116 GeV it is about 70%
and rises to 93% for 500 − 600 GeV. The medium identification criteria have
efficiencies from 83% up to 90%, and the isolation cut is always above 97%.
The left plot in figure 7.2 shows the selection efficiency of the signal selection. The
efficiency was calculated using the Drell-Yan Monte Carlo and is binned in the invariant mass of the electron pair. In the range of the Z-resonance from 66 GeV to
116 GeV the selection efficiency is only on the order of 20%. This is because of the
large pT thresholds of the two electrons. The measurement of this analysis starts at
116 GeV, where the selection efficiency is about 30% and then rises with invariant
mass up to 65%. On the right side of figure 7.2, the yield of Z candidates per pb−1
is shown, as well as the integrated luminosity for the different data periods. The yield is
constant over all data periods, as expected if there are no time-dependent efficiency
losses.
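The pair selection described above can be sketched as follows; only the kinematic cuts are shown, the identification, isolation and object-quality requirements are assumed to be applied beforehand, and the dictionary representation of an electron is an illustrative assumption:

```python
import math

def invariant_mass(e1, e2):
    """Invariant mass of two (massless) electrons from pt, eta, phi (GeV)."""
    return math.sqrt(2.0 * e1["pt"] * e2["pt"]
                     * (math.cosh(e1["eta"] - e2["eta"])
                        - math.cos(e1["phi"] - e2["phi"])))

def in_acceptance(el):
    """Central region, excluding the calorimeter transition region."""
    aeta = abs(el["eta"])
    return aeta < 2.47 and not (1.37 < aeta < 1.52)

def select_pairs(electrons, m_min=66.0):
    """All electron pairs with leading pt > 40 GeV, subleading pt > 30 GeV
    and invariant mass above m_min; all combinations are considered."""
    cands = [e for e in electrons if in_acceptance(e)]
    pairs = []
    for i in range(len(cands)):
        for j in range(i + 1, len(cands)):
            lead, sub = sorted((cands[i], cands[j]),
                               key=lambda e: e["pt"], reverse=True)
            if lead["pt"] > 40.0 and sub["pt"] > 30.0:
                if invariant_mass(lead, sub) > m_min:
                    pairs.append((lead, sub))
    return pairs
```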
7.4. Energy correction
To further calibrate the reconstructed energy of the electrons, η-dependent corrections are applied to recalibrate the energy. The corrections are small and below one
Selection cut                                              Number of events
After event selection                                      41383930
At least two objects reconstructed as electron
candidates by a specific algorithm                         39847294
At least two electrons with |η| < 2.47,
which are not in transition region 1.37 < |η| < 1.52       38297174
At least two electrons fulfilling the object quality check 38217421
Leading electron pT > 40 GeV,
subleading electron pT > 30 GeV                            19453382
At least two electrons fulfilling the
medium identification                                      4570839
At least two electrons fulfilling the isolation
requirements                                               4525560
At least one electron pair has mee > 66 GeV                4504702
At least one electron pair has mee > 116 GeV               124934
Table 7.2.: The table shows the number of events with at least two electron candidates
remaining after each selection cut.
percent. They were obtained by selecting a sample of Z- and J/ψ-candidates. The
corrections are then derived by comparing the resonances in data and Monte Carlo
simulation. For the recalibration, corrections3 were used, which were obtained by the
electron performance group of ATLAS [82].
7.5. Comparison with simulation
In this section, the selected Z-candidates in the region 66 GeV < mee < 116 GeV are
compared with the Monte Carlo simulation of the Drell-Yan process; no background
processes are included. The simulation was scaled to the luminosity of the data set
with a factor LData /LM C .
Figure 7.3 shows the reconstructed invariant mass spectrum of the different
mass-filtered Drell-Yan Monte Carlo samples and their sum. The distribution is in
3 egammaAnalysisUtils-00-04-17/EnergyRescalerUpgrade
Selection cut                                           Efficiency               Efficiency
                                                        66 GeV < mee < 116 GeV   500 GeV < mee < 600 GeV
After trigger and event selection                       35.5%                    79.8%
At least two objects reconstructed
as electron candidates                                  99.2%                    99.8%
At least two electrons with |η| < 2.47,
which are not in transition region 1.37 < |η| < 1.52    95.5%                    97.4%
At least two electrons fulfilling
the object quality check                                99.7%                    99.9%
Leading electron pT > 40 GeV,
subleading electron pT > 30 GeV                         70.4%                    92.6%
At least two electrons fulfilling the
medium identification                                   83.0%                    89.7%
At least two electrons fulfilling the isolation
requirements                                            99.4%                    97.7%
At least one electron pair has mee > 66 GeV             99.97%                   100%
At least one electron pair has mee > 116 GeV            0.06%                    100%

Table 7.3.: The table shows the efficiency of each selection step with respect to the
previous one. The efficiencies were determined in the Drell-Yan signal Monte Carlo and
are given for two different truth invariant mass ranges.

Figure 7.2.: The left plot shows the selection efficiency (acceptance × efficiency) of the
signal selection as a function of mee . The efficiency was calculated using the Drell-Yan
Monte Carlo. The right plot shows in the upper half the amount of integrated luminosity
for each data period (A to L) and in the lower half the yield of Z-candidates per pb−1
over the different periods of data taking.
this case shown from 66 GeV and extended up to 2 TeV, to show how the separate
samples form a smooth spectrum up to high invariant masses. Starting from 66
GeV, the spectrum shows a kinematic turn-on due to the pT cuts. Around 91 GeV
the resonance of the Z-boson can be seen. In the region above the Z-resonance, the
spectrum falls steeply due to the photon exchange, which dominates in this range.
Figure 7.3.: This figure shows the reconstructed invariant mass spectrum of the
mass-filtered Drell-Yan samples, from 66 < M < 120 GeV up to M > 3000 GeV. The
simulations are scaled to the integrated luminosity of the data and the sum of all
simulations is scaled with an additional factor of ten.
Figure 7.4 shows the invariant mass distribution of the Z-candidates in the region
of 66 GeV < mee < 116 GeV. Here the Z-resonance can be seen, with a maximum
at about 91 GeV. Above 91 GeV data and simulation show good agreement, but
deviations of up to 20% can be seen in the low-mass tail of the resonance. This is
a known effect caused by missing material in the simulation of the detector and is
currently under investigation within ATLAS. The poor description does not influence
the analysis, since it is performed above 116 GeV. In figure 7.5 the properties of the
single electrons are shown: in the left column η, φ and pT of the leading electron,
and in the right column η, φ and pT of the subleading electron. The
η distribution shows a maximum at η = 0 and a slowly falling behavior towards
the positive and negative η-directions. The falling behavior is due to the non-linear
relation between η and θ. Actually there are more electrons in the more forward
directions, but a bin in η from 0.0 to −1.0 already covers all particles from 90◦ to
≈ 140◦ , whereas a bin from −1.0 to −2.0 only covers the range from ≈ 140◦ to
≈ 165◦ . This is also illustrated in the appendix in figure A.1.

Figure 7.4.: Distribution of the invariant mass of the full electron pair selection in the
region 66 GeV < mee < 116 GeV. Data and the Monte Carlo simulation of the Drell-Yan
process are shown. The simulation is scaled to the luminosity of the data.

In the region around
the transition region, a dip can be seen, caused by the exclusion of the transition
region. The two bins with 1.6 < |η| < 1.7 deviate by about 10% from the rest of the
bins. This deviation is caused by mis-modeling of the detector material in this region
and is currently under investigation within ATLAS. The φ distributions show that
the electrons are equally distributed from −π to π. The dip at the edges of the
distributions is an effect of the chosen binning. Overall the agreement between data
and simulation is good for both distributions. The pT distributions of the electrons
show a maximum at around 45 GeV, which is about half of the mass of the Z-boson.
The distributions fall steeply towards higher pT , with the distribution of the
subleading electron falling even faster. Data and simulation show good agreement
at low pT . For higher pT , both distributions start to deviate; the deviation is around
30% in the region of pT = 150 GeV. This is an effect of the poor description of the
pT of the Z-boson in the Monte Carlo simulation. The transverse momentum of the
Z-boson is generated via initial-state gluon emission, which is very model-dependent.
Therefore the prediction is difficult, and the pT of the Z-boson in the region of the
resonance is not well described by some Monte Carlo generators. This is a known
effect and causes the poor description of the single-electron distributions at higher
pT .
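The η–θ coverage argument made above can be checked numerically using θ = 2 arctan(e−η):

```python
import math

def theta_deg(eta):
    """Polar angle (degrees) corresponding to a pseudorapidity eta."""
    return math.degrees(2.0 * math.atan(math.exp(-eta)))

# eta = 0 corresponds to 90 degrees; the bin from 0.0 to -1.0 covers
# down to about 140 degrees, the bin from -1.0 to -2.0 only to about 165.
print(theta_deg(0.0))   # 90.0
print(theta_deg(-1.0))  # ≈ 139.6
print(theta_deg(-2.0))  # ≈ 164.6
```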
Figure 7.5.: Properties of the full single electron selection of pairs in the range 66 GeV
< mee < 116 GeV are shown and compared to the Drell-Yan Monte Carlo simulation.
The distributions of the leading electron η, φ and pT are shown in the left column. In the
right column the same distributions for the subleading electron are shown. The Drell-Yan
Monte Carlo was scaled to the luminosity of the data.
8. Background determination
There are processes besides the Drell-Yan process which also contribute to the
selected candidates. These processes are called background processes and are split into
two categories. The first category is background which consists of two real electrons
passing the signal selection criteria; its determination is described in the
first part of this chapter. In the second part, the determination of background coming
from events with only one or no real electron is described.
8.1. Simulation of background processes
Background which consists of at least two real electrons passing the signal selection
criteria is determined from Monte Carlo simulations of these background processes.
Details on the simulations were discussed in chapter 6. The simulated processes
do not only include decays to two electrons. Thus all simulations are filtered for
events with at least two real electrons coming from the particle decay, since the
estimation of background with one or no real electron is done by a different method.
This filter is applied at generator level and does not take into account kinematic
properties. The filtering reduces the background contribution by about 5%.
8.1.1. tt̄+tW background
The produced top and antitop quarks dominantly decay into b-quarks under emission
of W-bosons. These W-bosons can then decay further into electrons or τ-leptons,
leading to two electrons, two τ-leptons, or one τ-lepton and one electron. In the
case of a decay into τ-leptons, these can then decay further into electrons. Figure
8.1 shows the reconstructed invariant mass distribution of the top backgrounds.
The spectrum shows a kinematic turn-on up to around 150 GeV and falls steeply
thereafter. The contribution from tW is about 10% of the contribution from tt̄.
8.1.2. Diboson background
In case of the diboson process, the produced WW-, ZZ- or WZ-pairs can either
decay directly to electrons or indirectly via τ-leptons. A WW-process can thus lead
to at most two real electrons, a WZ-process to at most three electrons, and a
ZZ-process to at most four electrons.
Figure 8.2 shows the reconstructed invariant mass distribution of the different
mass-filtered diboson backgrounds. The spectrum shows a peak around mZ for processes
including a Z-boson. Above the Z-resonance the spectrum falls steeply. In this
region the largest contribution comes from WW, followed by WZ.
8.1.3. Drell-Yan background
There is also a background contribution from the Drell-Yan process itself, namely
the decay into two τ-leptons, Z/γ* → τ⁻τ⁺. This process can also lead to two
real electrons, if each of the two τ-leptons decays further into an electron. But since
a τ-lepton can only decay into an electron under emission of two neutrinos, these
electrons often have a much lower transverse momentum and thus do not pass
the pT cut of the selection. Because of this high pT cut and the low branching ratio
of about 3% for the decay into two electrons, the contribution of this process is
below one per mille and is thus not considered as a background.
Figure 8.1.: This figure shows the reconstructed invariant mass spectrum of the top
backgrounds tt̄ and tW. The simulations are scaled to the integrated luminosity of the
data and the sum of both simulations is scaled by an additional factor of two.
Figure 8.2.: This figure shows the reconstructed invariant mass spectrum of the
mass-filtered diboson backgrounds WW, ZZ and WZ. The simulations are scaled to the
integrated luminosity of the data and the sum of all simulations is scaled by an additional
factor of two.
8.2. Measurement of background processes
There is an additional background caused by objects which are not electrons but still
fulfill the identification criteria and thus enter the signal selection. These objects are
mainly electromagnetic objects contained in jets coming from radiation or particle
decays. This background is called fake background, since it is caused by objects
which fake the signature of an electron. There are two main sources: first, a
component where both electron signals are faked, which is highly dominated by events
with two jets, called di-jet events; second, a component where one real electron
occurs and one jet fakes the second electron signal. The latter is dominated by the
production of a W-boson in association with jets, where the W-boson can decay into
an electron.
There are two main reasons why the fake background is not estimated from Monte
Carlo. First, the rejection of processes like the di-jet process is very high, so the
available statistics of Monte Carlo samples would not be sufficient for an estimate;
at the same time, the contribution from this background is not negligible due to the
very large cross sections of these processes. Second, the probability for jets to fake
an electron signal is very difficult to model in Monte Carlo simulations, so such an
estimate could not be fully trusted. For these reasons the fake background is
estimated from data using a method called the matrix method or fake factor method.
8.2.1. Matrix method
The matrix method works with two levels of electron identification criteria. The
first is a loose identification level, chosen in this analysis to be the loose electron
identification without the cut on ∆η between the track measured in the inner
detector and the energy deposition measured in the electromagnetic calorimeter.
This is slightly stricter than the trigger requirement. The second is a tight
identification level, which is the same as the one used in the signal selection: the
medium electron identification plus an additional calorimeter isolation requirement.
Since the calorimeter isolation requirement differs for leading and subleading
electrons, the tight identification levels also differ slightly between the two. With
these two levels of identification a loose selection and a tight selection can be defined.
The different probabilities for real electrons and fake electrons in the loose selection
to make the transition into the tight selection can then be used to discriminate
between events with real electrons and events with fake electrons. The probabilities
for real electrons, r (called the real electron efficiency), and fake electrons, f (called the fake
rate) to make this transition from loose to tight are given by
\[
r = \frac{N^{\mathrm{real}}_{\mathrm{tight}}}{N^{\mathrm{real}}_{\mathrm{loose}}}
\qquad\text{and}\qquad
f = \frac{N^{\mathrm{fake}}_{\mathrm{tight}}}{N^{\mathrm{fake}}_{\mathrm{loose}}}\,,
\tag{8.1}
\]
where $N^{\mathrm{real}}_{\mathrm{loose}}$ and $N^{\mathrm{real}}_{\mathrm{tight}}$ are the numbers of real electrons in the loose or tight selection.
In the same way, $N^{\mathrm{fake}}_{\mathrm{loose}}$ and $N^{\mathrm{fake}}_{\mathrm{tight}}$ are the numbers of fake electrons in the loose or
tight selection. The measurement of these probabilities will be described in sections
8.2.2 and 8.2.3.
One can now separate all objects in the loose selection into objects which are real
electrons and objects which are fake electrons. Since the loose selection fully contains
the tight selection, the same separation can be done into objects which fulfill the
tight selection and objects which fail the tight selection. Since the selection is based
on pairs of electron candidates, the number of events with two real electrons in the
loose selection, $N_{RR}$, can be defined as signal. The events of the categories $N_{RF}$, $N_{FR}$
and $N_{FF}$, the numbers of events with at least one fake electron in the loose selection,
can be defined as background. In the following the first subscript will be
assigned to the leading object and the second subscript to the subleading object.
The categories NRR , NRF , NF R and NF F are quantities based on truth information
and thus cannot be measured. But in a similar way four measurable categories
of events can be defined using the two levels of identification, NT T , NT L , NLT and
NLL . Here the subscript T stands for objects in the loose and tight selection and the
subscript L stands for objects in the loose selection which fail the tight selection. It
is now possible to write down a matrix equation which connects the truth quantities
and measurable quantities:
\[
\begin{pmatrix} N_{TT} \\ N_{TL} \\ N_{LT} \\ N_{LL} \end{pmatrix}
= M \begin{pmatrix} N_{RR} \\ N_{RF} \\ N_{FR} \\ N_{FF} \end{pmatrix}
\tag{8.2}
\]
\[
M = \begin{pmatrix}
r_1 r_2 & r_1 f_2 & f_1 r_2 & f_1 f_2 \\
r_1(1-r_2) & r_1(1-f_2) & f_1(1-r_2) & f_1(1-f_2) \\
(1-r_1)r_2 & (1-r_1)f_2 & (1-f_1)r_2 & (1-f_1)f_2 \\
(1-r_1)(1-r_2) & (1-r_1)(1-f_2) & (1-f_1)(1-r_2) & (1-f_1)(1-f_2)
\end{pmatrix}
\tag{8.3}
\]
The probabilities $r_1$ and $f_1$ refer to the leading object and $r_2$ and $f_2$ to the subleading
one. The fake background, and thus the quantity of interest, is the part of $N_{TT}$ which
originates from a pair of objects with at least one fake, $N_{TT}^{\mathrm{fake}}$. This is described by
the first line of equation 8.2 and contains the inaccessible truth categories, e.g. $N_{RF}$.
\[
\begin{aligned}
N_{TT}^{e+\mathrm{jet}} &= r_1 f_2 N_{RF} + f_1 r_2 N_{FR} \\
N_{TT}^{\mathrm{di\text{-}jet}} &= f_1 f_2 N_{FF} \\
N_{TT}^{\mathrm{fake}} = N_{TT}^{e+\mathrm{jet}\,\&\,\mathrm{di\text{-}jet}} &= r_1 f_2 N_{RF} + f_1 r_2 N_{FR} + f_1 f_2 N_{FF}
\end{aligned}
\tag{8.4}
\]
By inverting the matrix 8.3, the truth variables can be expressed via measurable
quantities:
\[
\begin{pmatrix} N_{RR} \\ N_{RF} \\ N_{FR} \\ N_{FF} \end{pmatrix}
= M^{-1} \begin{pmatrix} N_{TT} \\ N_{TL} \\ N_{LT} \\ N_{LL} \end{pmatrix}
\tag{8.5}
\]
\[
M^{-1} = \frac{1}{(r_1-f_1)(r_2-f_2)}
\begin{pmatrix}
(f_1-1)(f_2-1) & (f_1-1)f_2 & f_1(f_2-1) & f_1 f_2 \\
(f_1-1)(1-r_2) & (1-f_1)r_2 & f_1(1-r_2) & -f_1 r_2 \\
(r_1-1)(1-f_2) & (1-r_1)f_2 & r_1(1-f_2) & -f_2 r_1 \\
(1-r_1)(1-r_2) & (r_1-1)r_2 & r_1(r_2-1) & r_1 r_2
\end{pmatrix}
\tag{8.6}
\]
The number of events containing at least one fake object is then given by:
\[
\begin{aligned}
N_{TT}^{e+\mathrm{jet}\,\&\,\mathrm{di\text{-}jet}} ={}& \alpha\, r_1 f_2 \left[(f_1-1)(1-r_2)N_{TT} + (1-f_1)r_2 N_{TL} + f_1(1-r_2)N_{LT} - f_1 r_2 N_{LL}\right] \\
&+ \alpha\, f_1 r_2 \left[(r_1-1)(1-f_2)N_{TT} + (1-r_1)f_2 N_{TL} + r_1(1-f_2)N_{LT} - r_1 f_2 N_{LL}\right] \\
&+ \alpha\, f_1 f_2 \left[(1-r_1)(1-r_2)N_{TT} + (r_1-1)r_2 N_{TL} + r_1(r_2-1)N_{LT} + r_1 r_2 N_{LL}\right]
\end{aligned}
\tag{8.7}
\]
\[
\begin{aligned}
={}& \alpha\left[r_1 f_2 (f_1-1)(1-r_2) + f_1 r_2 (r_1-1)(1-f_2) + f_1 f_2 (1-r_1)(1-r_2)\right] N_{TT} \\
&+ \alpha\, f_2 r_2 \left[r_1(1-f_1) + f_1(1-r_1) + f_1(r_1-1)\right] N_{TL} \\
&+ \alpha\, f_1 r_1 \left[f_2(1-r_2) + r_2(1-f_2) + f_2(r_2-1)\right] N_{LT} \\
&- \alpha\, f_1 f_2 r_1 r_2 N_{LL}
\end{aligned}
\tag{8.8}
\]
where
\[
\alpha = \frac{1}{(r_1-f_1)(r_2-f_2)}\,.
\tag{8.9}
\]
Since not only the absolute number of fake events $N_{TT}^{e+\mathrm{jet}\,\&\,\mathrm{di\text{-}jet}}$ can be calculated,
but also the number in a given bin, for example an invariant mass bin, this method
provides the possibility to predict any distribution of the fake background.
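As an illustration, the extraction in equations 8.2 to 8.8 can be sketched numerically: M of equation 8.3 is the Kronecker product of two per-object 2×2 matrices, so one can solve for the truth yields and sum the fake terms of equation 8.4. This is a minimal sketch, not the analysis code; the function names and the yields and rates in the usage line are invented for illustration.

```python
import numpy as np

def fake_background(n_meas, r1, f1, r2, f2):
    """Estimate the fake contribution to N_TT following equations 8.2-8.8.

    n_meas = (N_TT, N_TL, N_LT, N_LL); r_i, f_i as defined in equation 8.1.
    """
    def a(r, f):
        # 2x2 matrix for one object: rows (tight, fail-tight),
        # columns (real, fake)
        return np.array([[r, f], [1.0 - r, 1.0 - f]])

    # M of equation 8.3 is the Kronecker product of the per-object matrices
    m = np.kron(a(r1, f1), a(r2, f2))
    # Truth yields (N_RR, N_RF, N_FR, N_FF), equation 8.5
    n_rr, n_rf, n_fr, n_ff = np.linalg.solve(m, np.asarray(n_meas, dtype=float))
    # Fake part of N_TT, equation 8.4
    return r1 * f2 * n_rf + f1 * r2 * n_fr + f1 * f2 * n_ff

# Illustration only: invented yields and rates
n_fake = fake_background((1000.0, 150.0, 180.0, 60.0),
                         r1=0.95, f1=0.08, r2=0.93, f2=0.10)
```

A round trip (multiply invented truth yields by M, then run the extraction) reproduces the fake sum of equation 8.4 exactly, which is a quick self-consistency check of the inversion.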
Systematic variations of the matrix method
Second method
To simplify equation 8.8 the approximation $r_1 = r_2 = 1$ can be made. This assumes
that every real electron in the loose selection also enters the tight selection. Equation
8.2 then simplifies to
\[
\begin{pmatrix} N_{TT} \\ N_{TL} \\ N_{LT} \\ N_{LL} \end{pmatrix} =
\begin{pmatrix}
1 & f_2 & f_1 & f_1 f_2 \\
0 & 1-f_2 & 0 & f_1(1-f_2) \\
0 & 0 & 1-f_1 & (1-f_1)f_2 \\
0 & 0 & 0 & (1-f_1)(1-f_2)
\end{pmatrix}
\begin{pmatrix} N_{RR} \\ N_{RF} \\ N_{FR} \\ N_{FF} \end{pmatrix}.
\tag{8.10}
\]
Entries which accounted for real-electron contributions in the selection L (fail the
tight selection) now simplify to zero. Since the method no longer accounts for these
entries in $N_{TL}$, $N_{LT}$ and $N_{LL}$, these corrections have to be done with
Monte Carlo simulations: the contribution from processes with two real
electrons to $N_{TL}$, $N_{LT}$ and $N_{LL}$ can be subtracted. This only corrects for processes
with two real electrons falling into $N_{TL}$, $N_{LT}$ and $N_{LL}$, which corresponds to the
corrections done by the first row of the matrix. The case where an "RF" or "FR"
event enters $N_{LL}$ is not corrected and assumed to be negligible. Also corrections
where an "RF" event falls into the $N_{LT}$ category and vice versa are assumed to be
negligible. The equation for the background then simplifies to
\[
\begin{aligned}
N_{TT}^{e+\mathrm{jet}} &= F_2 N_{TL} + F_1 N_{LT} - 2 F_1 F_2 N_{LL} \\
N_{TT}^{\mathrm{di\text{-}jet}} &= F_1 F_2 N_{LL} \\
N_{TT}^{e+\mathrm{jet}\,\&\,\mathrm{di\text{-}jet}} &= F_2 N_{TL} + F_1 N_{LT} - F_1 F_2 N_{LL}\,,
\end{aligned}
\tag{8.11}
\]
where
\[
F_i = \frac{f_i}{1-f_i}
= \frac{N^{\mathrm{fake}}_{\mathrm{tight}}/N^{\mathrm{fake}}_{\mathrm{loose}}}{1 - N^{\mathrm{fake}}_{\mathrm{tight}}/N^{\mathrm{fake}}_{\mathrm{loose}}}
= \frac{N^{\mathrm{fake}}_{\mathrm{tight}}}{N^{\mathrm{fake}}_{\mathrm{loose}} - N^{\mathrm{fake}}_{\mathrm{tight}}}\,.
\tag{8.12}
\]
The quantity Fi is called fake factor. The following expression is valid, since the
tight selection is a subset of the loose selection:
\[
N^{\mathrm{fake}}_{\mathrm{loose}} - N^{\mathrm{fake}}_{\mathrm{tight}} = N^{\mathrm{fake}}_{\mathrm{fail\ tight}}\,.
\tag{8.13}
\]
The fake factor then simplifies to
\[
F_i^{FT} = \frac{N^{\mathrm{fake}}_{\mathrm{tight}}}{N^{\mathrm{fake}}_{\mathrm{fail\ tight}}}\,.
\tag{8.14}
\]
In terms of the analysis, $N^{\mathrm{fake}}_{\mathrm{fail\ tight}}$ means for a subleading object failing the medium
electron identification or the subleading isolation cut; for a leading object it means
failing the medium electron identification or the leading isolation cut. The fake
factor is calculated from the fake rate and therefore has to be measured on a sample
which contains true fakes. This can be achieved by selecting a background-enriched
control region in data. How this is done will be discussed in section 8.2.2.
Third method
The selection $N^{\mathrm{fake}}_{\mathrm{fail\ tight}}$, which is also the definition of the L selection in the measurable
quantities $N_{TL}$, $N_{LT}$ and $N_{LL}$, contains a contamination of real electrons, since it is
possible for a real electron to fail the medium identification or isolation requirement.
To get a cleaner set of fake objects, and thus smaller corrections from Monte Carlo,
the fake factor can furthermore be calculated with a subset of the fail-tight set,
\[
F_i^{FTM} = \frac{N^{\mathrm{fake}}_{\mathrm{tight}}}{N^{\mathrm{fake}}_{\mathrm{fail\ track\ match}}}\,,
\tag{8.15}
\]
where $N^{\mathrm{fake}}_{\mathrm{fail\ track\ match}}$ has to fail the $\Delta\eta$ cut between track and cluster of the medium
requirement. Since a cluster from a jet most likely has more than one track pointing
towards it, jets often fail this criterion, and thus this definition gives a cleaner sample
of fake objects. If the fake factor $F_i^{FTM}$ is applied to a measurable quantity like
$N_{TL}$, the definition of L also has to change from "pass loose selection but fail tight
selection" to "pass loose selection but fail medium track match". This method
assumes that the fraction of events which fail the track match is the same in the
sample where the fake factors are obtained and in the sample where they are applied.
This leads to three different matrix methods overall: first the default method using
equation 8.8, and additionally a method using the approximation $r_1 = r_2 = 1$ and
thus equation 8.11. The latter splits up further depending on whether the fake factor
from equation 8.12 is used or the fake factor based on a subset of the fail-tight
selection from equation 8.15.
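The simplified fake-factor variant (equations 8.11 and 8.12) is compact enough to sketch directly. This is an illustration under the r1 = r2 = 1 approximation; the function names and the yields and fake rates in the usage line are invented, and the Monte Carlo subtraction of real-electron contributions is assumed to have been applied to the inputs already.

```python
def fake_factor(f):
    """Fake factor F = f / (1 - f), equation 8.12."""
    return f / (1.0 - f)

def fake_background_ff(n_tl, n_lt, n_ll, f1, f2):
    """Fake-background estimate under the approximation r1 = r2 = 1
    (equation 8.11)."""
    F1, F2 = fake_factor(f1), fake_factor(f2)
    n_e_jet = F2 * n_tl + F1 * n_lt - 2.0 * F1 * F2 * n_ll
    n_dijet = F1 * F2 * n_ll
    # Sum of both components: F2*n_tl + F1*n_lt - F1*F2*n_ll
    return n_e_jet + n_dijet

# Illustration only: invented yields and fake rates
n_fake = fake_background_ff(150.0, 180.0, 60.0, f1=0.08, f2=0.10)
```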
8.2.2. Measurement of the fake rate
The fake rates and the fake factors have to be calculated from real fakes. Since fakes
are dominantly jets, two methods are performed which aim to obtain jet-enriched data
samples: one using single jet triggers, and the other using either single jet triggers
or the same trigger as in the signal selection. In these jet-enriched samples the fake
rates and fake factors are calculated. The different methods are discussed and the
resulting fake rates are compared.
Single object method
The default method is based on objects in events which fulfill a single jet trigger.
First, events are selected that fulfill such a trigger. Since jets are produced very
frequently at a hadron collider, it is not possible to record every event containing a
jet. To nevertheless be able to study these events, prescaled triggers exist which
record only a fraction of the events fulfilling them. These triggers apply different pT
requirements to the jet. Eleven different triggers¹ with pT requirements ranging from
25 GeV up to 360 GeV are used for this method. Around one out of 2 million events
is recorded if the triggering jet fulfills the pT > 25 GeV requirement; the higher
the pT requirement, the larger the fraction of recorded events. For jets
with pT > 360 GeV, every event fulfilling the trigger is recorded. Additionally,
all events have to fulfill the same quality requirements as in the signal selection,
discussed in section 7.2. In the following, the selection of potential electron objects
in these events is discussed.
The jets in the selected events are reconstructed with the anti-kt [87] algorithm with
a radius parameter of R = 0.4. Basic quality criteria² for jets are applied, like cuts
against background from cosmic muons, quality cuts for the hadronic calorimeter
and cuts on the fraction of energy in the electromagnetic calorimeter. The
reconstructed jet is then matched to a reconstructed electron candidate using a ∆R < 0.1
requirement, to ensure that the jet was at the same time reconstructed as an electron
candidate. Over 99% of all selected electron candidates are matched to a jet
candidate which fulfills the quality criteria. The matched electron candidate is then
used to measure the fake rate. All electron candidates are required to fulfill the same
selection cuts regarding reconstruction algorithm, object quality and phase space
as discussed in section 7.3. It is not required that the electron candidate is matched to the
jet which originally triggered the event.
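The ∆R matching between a jet and an electron candidate relies on the azimuthal difference being wrapped into [−π, π]. A minimal sketch of such a matching helper (the function names are ours, not from the analysis code):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt(deta^2 + dphi^2), with dphi wrapped into [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def matches_electron(jet_eta, jet_phi, ele_eta, ele_phi, cone=0.1):
    """True if the electron candidate lies within dR < cone of the jet."""
    return delta_r(jet_eta, jet_phi, ele_eta, ele_phi) < cone
```

The wrap-around matters near phi = ±π, where two close directions would otherwise appear almost 2π apart.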
In the selected events there are still electrons present, since the single jet trigger
has only very loose identification requirements. To get a jet-enriched sample these
electrons have to be suppressed with additional cuts. In addition, to suppress a
dilution from processes with two electrons, there is a veto on events with two reconstructed electron candidates fulfilling the medium electron identification. For the
specific suppression of the Drell-Yan process there is an additional veto on events
with two reconstructed electron candidates fulfilling the loose electron identification
which are within an invariant mass range |mee − 91 GeV| < 20 GeV. The decay
W → eν leads to an electron and a neutrino which leaves the detector undetected.
Thus the neutrino causes missing energy ETmiss in the transverse plane. To suppress
electrons from such decays there is a veto on events with ETmiss > 25 GeV.
In these jet-enriched events the objects of the categories $N^{\mathrm{fake}}_{\mathrm{loose}}$, $N^{\mathrm{fake}}_{\mathrm{tight,\ leading}}$,
$N^{\mathrm{fake}}_{\mathrm{tight,\ subleading}}$ and $N^{\mathrm{fake}}_{\mathrm{fail\ track\ match}}$ for the calculation of the fake rates and fake
factors are selected. Here $N^{\mathrm{fake}}_{\mathrm{tight,\ leading}}$ and $N^{\mathrm{fake}}_{\mathrm{tight,\ subleading}}$ are the selections applying the
leading or subleading isolation cut, respectively. For each trigger used a fake rate is calculated.

¹ EF_jX_a4tchad (X = 25, 35, 45, 55, 80, 110, 145, 180, 220, 280, 360); X corresponds to the pT cut.
² Medium jet cleaning.
The final fake rate f for all triggers is then the weighted average of all separate fake
rates,
\[
f = \frac{\sum_{i=1}^{n_{\mathrm{trig}}} f_i/\Delta f_i^2}{\sum_{i=1}^{n_{\mathrm{trig}}} 1/\Delta f_i^2}\,,
\qquad
\Delta f^2 = \frac{1}{\sum_{i=1}^{n_{\mathrm{trig}}} 1/\Delta f_i^2}\,.
\tag{8.16}
\]
∆fi is the statistical uncertainty of each fake rate and ∆f the statistical uncertainty
of the averaged fake rate. The same formula is used for the fake factors. The fake
rates and fake factors are discussed together with the result of the other method in
section 8.2.2.
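The per-trigger combination of equation 8.16 is an inverse-variance weighted average. A short sketch (the per-trigger rates and uncertainties in the usage line are invented example numbers):

```python
import math

def combine_fake_rates(rates, errors):
    """Inverse-variance weighted average of per-trigger fake rates f_i with
    statistical uncertainties Df_i (equation 8.16). Returns (f, Df)."""
    weights = [1.0 / e ** 2 for e in errors]
    f = sum(fi * w for fi, w in zip(rates, weights)) / sum(weights)
    df = math.sqrt(1.0 / sum(weights))
    return f, df

# Illustration only: invented per-trigger fake rates and uncertainties
f, df = combine_fake_rates([0.07, 0.09, 0.08], [0.010, 0.020, 0.015])
```

For equal uncertainties this reduces to the plain mean, with the uncertainty shrinking as 1/√n.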
Reverse tag and probe method
An additional method, the reverse tag and probe method, is used to measure the
fake rates and fake factors. The idea is to tag one object as a jet and then look for
a second object in the event, the probe, which is assumed to be also a jet. This is
done using two different triggers, first the same electron trigger used in the analysis
and second using the same jet triggers as in the single object method.
Electron trigger
The trigger used in this method is the same as used in the analysis, which requires
two energy depositions in the electromagnetic calorimeter with loose requirements
on the shower shape. All selected events have to fulfill the same selection as discussed
in section 7.2, and the objects selected as tag and probe have to fulfill the same
requirements on reconstruction algorithm, object quality and phase space as
discussed in section 7.3.
First in the selected events a tag object is required to have pT > 25 GeV and to
fulfill the loose electron identification without the cut on the difference ∆η between
η of the track and the energy deposition in the electromagnetic calorimeter. These
cuts are made to ensure that the tag could be one of the two objects triggering
the event. To tag the electron candidate as jet-like it is required to fail the ∆η
cut of the medium electron identification. In principle the ∆η cut of the loose
identification could also be used, but this would reduce the statistics, since its cut value
is looser. If a tag object is found in an event, all other reconstructed electron
candidates are considered as probes. In the selected events there are still dilutions
from real electrons. To suppress contributions from the Drell-Yan process, the tag and
probe objects are required to have the same charge and an invariant mass of
|mee − 91 GeV| > 20 GeV. To further suppress dilutions from decays of W-bosons
there is a veto on events with $E_T^{\mathrm{miss}}$ > 25 GeV. The selected probes are then divided
into the different categories for the calculation of the fake rates and fake factors:
$N^{\mathrm{fake}}_{\mathrm{loose}}$, $N^{\mathrm{fake}}_{\mathrm{tight,\ leading}}$, $N^{\mathrm{fake}}_{\mathrm{tight,\ subleading}}$ and $N^{\mathrm{fake}}_{\mathrm{fail\ track\ match}}$.
Since for the trigger used, every event is recorded, it is possible to use Monte Carlo
simulations to further study the remaining dilution from real electrons. Figure
8.3 shows the different categories and the contributions from processes with real
electrons, binned in pT. All four distributions fall steeply with pT. The Drell-Yan
process and electrons from W-decays cause the largest dilutions of the different
categories. In the tight selections the dilutions are of the order of 10% at low pT and
rise to around 30% at higher pT. For the loose selection the dilution is much lower,
of the order of 1%. In the selection $N^{\mathrm{fake}}_{\mathrm{fail\ track\ match}}$ the dilution reduces further to
below one per mille. For the calculation of the fake rates and fake factors the
estimated dilution events are subtracted.
Figure 8.3.: pT distributions of the selected objects for the calculations of fake rate and
fake factor. The tight selections for leading and subleading are shown in the upper row.
The lower left distribution is used for the fake rate fi and the fake factor FiF T . The bottom
right distribution is used for the fake factor FiF T M . For the selection the reverse tag and
probe method with the electron trigger was used. Also shown are the real electron dilutions
from Drell-Yan, W+jets, tt̄ and Diboson processes determined from MC.
Jet trigger
The reverse tag and probe method can also be performed using the same single jet
triggers as for the single object method. Since these triggers work on a single-object
basis, it is no longer required that the tag object fulfills the loose electron
identification without the ∆η cut. The requirement to tag an electron candidate as
jet-like is instead changed to simply failing the loose electron identification, which
also increases the statistics of the tags. Apart from the tag requirement, the
method is exactly the same as for the electron trigger. Again, as in the single
object method, a separate fake rate or fake factor is calculated for every trigger and
a final one is obtained using equation 8.16.
Comparison of all methods
Figure 8.4 shows the fake rates f1 and f2 of the three presented methods, binned
in pT and η. Binned in pT, all methods yield a fake rate of around 5% to 10%
for the leading object and a slightly higher rate of around 7% to 12% for the
subleading object. The fake rates differ because of the tighter isolation cut used for
the leading object. The differences between the methods are within about 3%
(absolute) at low pT. At higher pT the method using the electron trigger predicts a rather
constant behavior, whereas the methods using jet triggers predict a slightly falling
behavior. In contrast to pT, all methods show a strong dependence on η, which is the
same for positive and negative η. In the barrel region of the detector (|η| < 1.37) the
fake rates are quite constant with a small drop in the last bin before the transition
region. In the endcap region (|η| > 1.52) there are three regions with very different
fake rates. Directly after the transition region between barrel and endcap and up
to |η| < 2.01 there is still coverage from the transition radiation tracker and thus
nearly the same conditions as in the barrel region. This leads to a fake rate which
is a bit higher but of the same order as in the barrel region. The region |η| > 2.01 is
no longer covered by the transition radiation tracker, which leads to an increased
fake rate. The last region, |η| > 2.37, is additionally not covered by the innermost
pixel detector layer, leading to a further increase in the fake rate. The behavior
is the same for all methods, but the method using the electron trigger predicts
a more drastic increase of the fake rate in the endcap region than the methods
using jet triggers. Especially in the last bin of the leading fake rate there are large
differences of up to 10%. Motivated by this dependence on |η| and pT, the fake rates
are from now on binned in pT for four detector regions, |η| < 1.37, 1.52 < |η| < 2.01,
2.01 < |η| < 2.37 and 2.37 < |η| < 2.47. Figure 8.5 shows all three fake rates in
this binning. The agreement between all three methods is very good for the barrel
and the first endcap bin. For the last two endcap bins the methods using jet triggers
predict a falling behavior, whereas the method using the electron trigger predicts a
more or less constant fake rate. Similar results can be seen for the fake factors $F_i^{FT}$
and $F_i^{FTM}$, which are shown in the appendix in figures A.2 and A.3.
8.2.3. Measurement of the real electron efficiency
The real electron efficiency is defined as $r = N^{\mathrm{real}}_{\mathrm{tight}}/N^{\mathrm{real}}_{\mathrm{loose}}$ (equation 8.1). It has to be
determined on a sample of real electrons. Since the modeling of electrons in Monte
Carlo is good, the real electron efficiencies are determined from the mass-binned
Drell-Yan Monte Carlo. Although the modeling of electrons is good, scale factors
are used in order to correct the efficiencies for small differences.

Figure 8.4.: The fake rate $f_i$ binned in pT is shown for the leading object on the top
right plot and for the subleading object on the top left. The corresponding fake rates binned
in η are shown in the bottom row.
The derived real electron efficiencies for leading and subleading objects are shown
binned in pT separately for the barrel region (|η| < 1.37) and two endcap regions
(1.52 < |η| < 2.01 and 2.01 < |η| < 2.47) in figure 8.6. The last two endcap bins
used for the fake rate were combined for statistical reasons. The efficiency is in
the range of ≈ 91–96% and rises with pT.
For the fake rates the statistical uncertainty was calculated assuming the samples
are fully uncorrelated. But since the tight selection is fully contained in the loose
selection, they are in fact fully correlated. Assuming the samples were uncorrelated was a
good approximation for the fake rates, since these are small and thus the overlap
is also small. The real electron efficiency is close to 100%, so this approximation
no longer holds. To account for this, a binomial uncertainty was used:
\[
\Delta r^2 = \frac{r(1-r)}{N^{\mathrm{real}}_{\mathrm{loose}}}\,.
\tag{8.17}
\]
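The difference between the two error treatments can be sketched as follows; the counts in the usage lines are invented illustration values, not results of this analysis:

```python
import math

def eff_err_binomial(n_tight, n_loose):
    """Binomial uncertainty on r = n_tight/n_loose, equation 8.17."""
    r = n_tight / n_loose
    return math.sqrt(r * (1.0 - r) / n_loose)

def eff_err_uncorrelated(n_tight, n_loose):
    """Naive propagation treating the two Poisson counts as uncorrelated --
    acceptable for small ratios such as fake rates, not for r close to 1."""
    r = n_tight / n_loose
    return r * math.sqrt(1.0 / n_tight + 1.0 / n_loose)

# For r near 1 the naive estimate is far too large:
b = eff_err_binomial(950, 1000)      # small, since r(1-r) is small
u = eff_err_uncorrelated(950, 1000)  # several times larger than b
```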
Figure 8.5.: Comparison of the fake rates $f_i$ calculated with the three different methods
(tag and probe method using the electron trigger, tag and probe method using jet triggers,
and single object method using jet triggers). The upper row shows the fake rates for the
barrel region (|η| < 1.37). The corresponding fake rates for the endcap regions (1.52 < |η| <
2.01, 2.01 < |η| < 2.37 and 2.37 < |η| < 2.47) are shown from the second to the fourth
row. The fake rates for the leading object are shown on the left side and for the subleading
object on the right side.
Figure 8.6.: Real electron efficiencies determined from Drell-Yan MC and binned in pT
separately for the barrel and the two endcap regions. For leading electrons the efficiency
is shown on the left side and for subleading electrons on the right side.
8.2.4. Selection of the background
In this section the selection of the measurable quantities $N_{TT}$, $N_{TL}$, $N_{LT}$ and $N_{LL}$
and the determination of the background are described. The selection $N_{TT}$
corresponds to the number of pairs where both electron candidates fulfill the signal
selection, thus to the normal signal spectrum. In case the default method with the
fake rate $f_i$ or the method using fake factors $F_i^{FT}$ is used, the selections $N_{TL}$, $N_{LT}$
and $N_{LL}$ correspond to the number of pairs where the T-object fulfills the signal
selection and the L-object is in the loose selection but fails the signal selection. If
the fake factors $F_i^{FTM}$ are used, the L-object has to be in the loose selection but
fail the track match cut of the medium electron identification. Figure 8.7 shows the
distributions $N_{TL}$, $N_{LT}$ and $N_{LL}$ for the default fail-tight case, binned in the invariant
mass of the pair, without any fake factor weights applied. It can be seen, that the
NLL distribution shows a kinematic turn-on up to around 90 GeV due to the high
pT thresholds for the leading and subleading object. Above 90 GeV there is a strong
falling spectrum like expected for processes coming from two jets. The samples NT L
and NLT also show a kinematic turn-on and additionally a resonant structure in the
region of mZ . This is due to a high amount of real electrons in this region coming
from Z-decays, which cause a strong dilution. Above an invariant mass of 90 GeV,
the spectrum also shows a strongly falling behavior. The fail track match selection
is shown in the appendix in figure A.4.
Figure 8.7.: Distribution of N_TL, N_LT and N_LL of the fail tight selection. No fake rates,
real electron efficiencies or fake factors are applied.
To obtain the final background estimate, the selected categories have to be convolved with the fake rates and real electron efficiencies or fake factors following
equations 8.8 or 8.11. The convolution happens on a pair-by-pair basis, where each
pair is weighted according to the background formula of the method with a fake rate
fi(pT,i, |ηi|) and real electron efficiency ri(pT,i, |ηi|) or a fake factor Fi(pT,i, |ηi|).
Figure 8.8 shows all nine background estimates using the three different methods
for the background determination and the three different methods for measuring the
fake rates and fake factors. The background is binned in invariant mass, starting
from 66 GeV up to 2000 GeV. The methods using the fake rate fi
and the fake factor Fi^FT still show a peak at mZ, which is due to dilution from real
electrons and is not fully removed by the Monte Carlo corrections. The methods using the fake factor Fi^FTM have a lower real electron dilution and thus do not
show a peak structure. For this method a kinematic turn-on, with its maximum
around 90 GeV, can also be seen. Above the Z-peak all methods show a steeply
falling behavior. Around 400 GeV there is a kink in the invariant mass distribution,
which was further investigated and is discussed separately in the following section.
Differences between the methods are used as a systematic uncertainty and are discussed
in more detail later on.
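The pair-by-pair weighting can be sketched as follows. This is a minimal illustration assuming the common fake-factor form of the background formula, N_bkg = Σ_TL F2 + Σ_LT F1 − Σ_LL F1·F2 (the exact equations 8.8 and 8.11, including the real electron efficiencies ri, are defined earlier in the chapter); the function names and example numbers are hypothetical:

```python
def fake_factor(f):
    """Convert a fake rate f into a fake factor F = f / (1 - f)."""
    return f / (1.0 - f)

def pair_weight(category, F1, F2):
    """Per-pair weight entering the background histogram.

    category: 'TT', 'TL', 'LT' or 'LL' (T = passes the signal selection,
              L = loose but fails the signal selection).
    F1, F2:   fake factors of the leading / subleading object, looked up
              in (pT, |eta|) bins.
    """
    if category == "TL":    # leading tight, subleading loose-not-tight
        return F2
    if category == "LT":    # leading loose-not-tight, subleading tight
        return F1
    if category == "LL":    # both loose-not-tight: subtract double counting
        return -F1 * F2
    return 0.0              # TT pairs form the signal selection itself

# Example: a TL pair whose subleading object has a fake rate of 0.2
w = pair_weight("TL", fake_factor(0.25), fake_factor(0.2))
```

Each selected pair contributes its weight w to the invariant mass histogram of the background estimate; the negative LL weight removes the double counting of pairs with two fake objects.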
Figure 8.8.: Nine fake background estimates using the three different methods for the
background determination and three different methods for measuring the fake rates and
fake factors. The marker color represents the method used for the determination of the
fake rates or fake factors and the marker symbol represents the method used for the determination of the background.
8.2.5. Kinematic properties of the fake background
As mentioned in the previous section, all backgrounds show a kink in the shape
around 400 GeV. To clarify the reason for this change in the slope, the kinematic
properties of the default method were studied. Figure 8.9 shows on the upper left
the |∆η| distribution of the leading and subleading object, binned in mee. For larger
invariant masses, the electron pairs have a large opening angle |∆η| above 3.5.
This means that the high invariant mass is mostly produced by a large opening
angle, since the invariant mass of a pair can be generated either by two objects
with high pT or by two objects with a large opening angle |∆η|. On the upper
right, the |∆φ| distribution of the objects is shown, also binned in mee. The objects
are mainly emitted back to back in all regions of invariant mass. This is due to
momentum conservation in the transverse plane and shows that the two objects are
most likely correlated. The lower left distribution shows an η-φ map of the leading
object, the lower right distribution that of the subleading object. There, some
“hotspots” of the detector can be seen around (η, φ) = (1.5, −0.8) and (η, φ) = (0.25, 0.4).
This could be caused by a reduced identification power of the detector in these regions.
This seems to influence only the fake background estimation and thus the background
rejection. The same distributions for the event selection in data are shown in the
appendix in figure A.5. These show no visible effects in these regions. In addition,
it can be seen that the background is mainly distributed at very large values of |η|.
Figure 8.9.: Kinematic distributions of the fake background sample are shown. On the
upper left side the |∆η| distribution of the objects is shown vs. the invariant mass of the
objects. The same for |∆φ| is shown on the upper right side. In the lower row, an η-φ map
of the leading object is shown on the left side, and for the subleading object on the right side.
In figure 8.10, the ηlead vs. ηsublead distribution is shown on the left side and the
pT,lead vs. pT,sublead distribution on the right side, for different invariant mass bins.
For low invariant masses between 80 and 200 GeV, the two objects of a pair are mostly
in the same η direction and in the low-pT region around 40 GeV. At higher invariant
masses the objects are still dominantly in the low-pT region, around 60 GeV. The
invariant mass is then generated by a large opening angle ∆η. Considering two
objects with pT = 60 GeV, one of the objects hits the limit of |η| < 2.47 around an
invariant mass of 400 GeV. Since most of the objects are distributed around
pT = 60 GeV, this leads to the observed change in the slope of the invariant mass
distribution.
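This can be checked numerically with the massless two-body invariant mass, m² = 2 pT,1 pT,2 (cosh ∆η − cos ∆φ). The small sketch below (function names are illustrative) shows that for two back-to-back objects with pT = 60 GeV, reaching mee = 400 GeV already requires ∆η ≈ 3.7, not far from the maximal separation of 2 × 2.47 allowed by the acceptance:

```python
import math

def m_ee(pt1, pt2, deta, dphi):
    """Invariant mass of two massless objects:
    m^2 = 2 * pT1 * pT2 * (cosh(delta_eta) - cos(delta_phi))."""
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(deta) - math.cos(dphi)))

def deta_needed(m, pt1, pt2):
    """Pseudorapidity separation needed to reach mass m for two
    back-to-back (delta_phi = pi) massless objects."""
    return math.acosh(m * m / (2.0 * pt1 * pt2) - 1.0)

deta = deta_needed(400.0, 60.0, 60.0)       # ~3.75
m_check = m_ee(60.0, 60.0, deta, math.pi)   # recovers 400 GeV
```

Above this point, higher masses force one object towards (and beyond) the |η| < 2.47 limit, which produces the kink in the spectrum.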
8.2.6. Systematic uncertainties
In this section the systematic effects of the fake background estimation are studied.
Different sources are discussed in the following.
Figure 8.10.: On the left side the ηlead vs. ηsublead distribution of the fake background
sample is shown in different bins of invariant mass. The same is shown for pT on the
right side.
Systematic uncertainties of the methods
One possible source of systematic uncertainty is the choice of method itself, both for the
background estimate and for the fake rate measurement. The uncertainty of the real
electron efficiency ri was found to have a negligible effect on the background estimation. Nine different background estimates were calculated, using three different
methods for the fake rate and fake factor estimation and three different methods
for the background estimation itself. For the default background estimate, the
fake rates from the single object method using jet triggers were used to estimate
the background with the method using real efficiencies r and fake rates f. Figure
8.11 shows the ratio of the nine different background estimates with respect to the
default method. The ratio is binned in invariant mass, starting above the Z-peak at
116 GeV, and is shown up to 1500 GeV. In the first bins there are variations
of the background estimate of up to 18%. These become smaller for invariant masses
from 300 GeV up to 1000 GeV. The last bin shows very large fluctuations, which are
due to low statistics in this bin. To be conservative, the largest deviation of 18% is
taken as a systematic uncertainty for the whole invariant mass range.
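The 18% is thus the envelope of the method variations over the relevant bins. Schematically (the bin contents below are made-up placeholders, not the measured values):

```python
# Hypothetical per-bin background estimates (events) for the default method
# and two alternative method variations; all numbers are illustrative only.
default_est = [120.0, 45.0, 12.0, 3.0]
variants = [
    [135.0, 47.0, 12.5, 2.5],   # e.g. T&P electron trigger, r and f applied
    [110.0, 44.0, 11.0, 3.5],   # e.g. single object method, fake factors
]

# Largest relative deviation from the default over all variants and bins,
# taken as a flat systematic uncertainty for the whole mass range.
max_dev = max(
    abs(v / d - 1.0)
    for variant in variants
    for v, d in zip(variant, default_est)
)
```

Taking a single flat envelope is conservative: most bins deviate by much less than the extreme bin that sets the uncertainty.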
Figure 8.11.: Ratio of the final background estimate of all method variations to the
default method. The ratio starts at 116 GeV and ends at 1500 GeV. The marker color
represents the method used for the determination of the fake rates or fake factors and the
marker symbol represents the method used for the determination of the background.
Systematic effects of the fake rate
To study systematic effects of the default fake rate method, the cuts selecting the
jet-enriched region were modified. The ETmiss cut was varied by ±5 GeV and the
mass window around the Z-peak was varied by ±10 GeV. In addition, in one variation the veto on events with two objects fulfilling the medium identification was
turned off, and in another variation events with two loose objects were vetoed. All
variations were done separately. Figure 8.12 shows the ratio of the leading and
subleading fake rate variations to the corresponding default fake rate for all four
regions in η. The largest variations occur when changing the cut value for ETmiss.
The variations are somewhat larger at low pT and deviate by at most about 20%.
For the two tag and probe methods, the same studies were done, with the same
ETmiss cut and mass window variations. In addition, the pT requirement
of the tag object was changed to pT > 35 GeV, and the identification requirement
of the tag object was changed to failing the ∆η cut of medium or the isolation. In the
default case the tag object was only allowed to fail the ∆η cut. Figures A.7 and A.8
in the appendix show the ratio of the fake rate variations to the default. For these
methods, changing the jet-enriched region also leads to variations of the order of
20-30%.
The largest variations of the fake rate for the single object method, obtained when
changing the cuts defining the jet-enriched control region (the ETmiss cut), are propagated to the final background estimate and can be seen in figure 8.13, which shows the
ratio between the background estimate with the varied fake rate and the default
background estimate. Also shown is the background variation when shifting the fake
rates within their statistical uncertainties. This approach is very conservative, as the
statistical uncertainties are uncorrelated. Each of the variations leads to shifts of the
background of at most about 5%. Thus, for both the statistical
uncertainties and the cut variations, a 5% systematic uncertainty each is added for the
background estimate.
Composition of the data samples used
The fake background does not consist of only one category of objects, such as
light-flavor jets³. Various types of objects are able to fake the signature of an
electron. This can lead to a wrong background estimate if the fake rates for these
objects are different and the composition of the sample in which the fake rates are
obtained differs from the composition of the sample in which they are applied.
There are mainly three different types of objects. First of all light jets, which
contain electromagnetic objects from decays or radiation that lead to an
electromagnetic cluster with a track from the jet matching the cluster. These light
jets should dominantly consist of pions. Thus they contain π⁰'s, which decay to
two photons and cause electromagnetic clusters, and in addition charged pions which
³ Light flavor jets are all jets which do not originate from a c or b quark.
Figure 8.12.: Ratio of the systematic variations to the default fake rate fi. The upper
row shows on the left side the ratio for the leading fake rate in the barrel region (|η| < 1.37)
and on the right side the subleading fake rate in the barrel region. The corresponding fake
rates for the endcap regions (1.52 < |η| < 2.01, 2.01 < |η| < 2.37 and 2.37 < |η| < 2.47)
are shown from the second to the fourth row. The fake rates were calculated with the
single object method using jet triggers.
Figure 8.13.: Ratio of the final background estimate, using the different systematic fake
rate variations, to the default background estimate (selection applying ri and fi
from the single object method, ETmiss < 25 GeV). For the variations, fake rates with cuts
of ETmiss < 20 GeV and ETmiss < 35 GeV were used, since these lead to the largest
variations. The ratio is shown from 116 GeV to 1500 GeV. Also shown are the variations
due to the statistical uncertainty of the fake rate.
cause tracks. Another category of objects are heavy-flavor jets. The most important
contribution comes from jets originating from b-quarks, called b-jets, which contain
b-flavored hadrons. About 40% of the b-flavored hadrons decay weakly into leptons.
As a result, b-jets contain a rather large fraction of real electrons, which makes it more
likely that a b-jet fakes an electron signature. A third category are electrons coming
from the conversion of a photon. These can either come from a photon contained in a
jet, which then most likely comes from a π⁰ decay, or from a prompt photon. Since
these objects are real electrons and can be isolated, the fake rate should be highest
for these objects.
To study the fake rates and the composition of the samples, three categories of
objects were defined. First, objects which are tagged by an algorithm searching
for b-jets were selected. The b-quarks hadronize to b-hadrons, which have a relatively
large lifetime τ and thus travel a measurable distance⁴ in the detector. After this
distance the hadrons decay, causing a secondary vertex which is displaced with respect
to the primary vertex from the collision. B-tagging algorithms work mainly with
⁴ cτ ≈ 0.5 mm
this property and try to reconstruct a displaced vertex of a jet. To classify objects
as b-jets, the so-called MV1 algorithm [88] was used at a working point
with 70% efficiency for tagging a b-jet. This algorithm is based on a neural network that
combines information from three different tagging algorithms. The second category
consists of electrons which come from photon conversions. To tag such electrons,
another algorithm, which uses several different criteria, was used (see section 5.2).
It seems that objects tagged by this algorithm are not necessarily real electrons
from a photon conversion, but rather also light jets. This is reasonable, since a
conversion vertex can also easily occur in light-flavor jets. For example, photons
from π⁰ decays can lead to such conversion electrons. Nevertheless, this algorithm
can be used to subdivide the objects which are not b-tagged into two categories of
conversion-enhanced and conversion-reduced objects. For these three categories the
fake rates were measured.
Figure 8.14 shows the subleading fake rate measured with the single object method,
binned in |η|. It can be seen that the objects which are neither b-tagged nor
conversion-flagged have the highest fake rate, followed by the b-tagged objects and the
conversion-flagged ones. This is not intuitive, but can be understood since b-jets are
broader and thus fail the isolation cut more often. Only 13% of all b-tagged objects in
the loose selection pass the leading isolation criteria, whereas 22% of the light jets pass the
Figure 8.14.: Subleading fake rate f2 binned in |η| for three different categories of objects:
b-tagged objects, conversion-flagged objects which are not b-tagged and objects which are
not b-tagged and not conversion-flagged.
leading isolation. Conversions occur in material and thus most of the time in or
after the first layer of the pixel detector⁵. Since the medium electron identification
requires a hit in the first layer of the pixel detector, the amount of electrons from
conversions is strongly reduced. Additionally, the conversion-flagged objects more
often fail the ∆η requirement between track and cluster: if two tracks from a
conversion vertex are associated to the electromagnetic cluster, these have a larger
opening angle and thus do not point to the center of the shower. This leads to the
lowest fake rate of all categories and indicates that, as already presumed, not only
conversion electrons are flagged but also a rather large fraction of jets. This
impression was also supported by a study of event displays of conversion-flagged
objects.
Since the fake rates of the three categories are different, a further systematic effect
can occur if the relative contribution of these objects differs between the jet-enriched
sample in which the fake rates are measured and the sample in which they are applied.
Table 8.1 shows the relative contribution of the three categories in the loose selection
for the different fake rate methods. Table 8.2 shows the relative contribution of the
three categories for the number of objects in N_TL, N_LT and N_LL which fail the
tight selection. These are the objects to which the fake rates are applied.
                    Single object   TnP jet trigger   TnP electron trigger
b-tagged                 1.8%            4.4%                1.3%
!b + conv. flag         62.0%           63.0%               62.0%
!b + !conv. flag        36.2%           32.5%               36.6%

Table 8.1.: Relative contribution of the three categories in the loose selection N_loose^fake.
The relative contribution is given for the three different fake rate methods.
                     N_TL     N_LT    lead. N_LL   sublead. N_LL
b-tagged             2.1%     1.2%       1.3%          2.5%
!b + conv. flag     57.5%    64.8%      70.3%         60.8%
!b + !conv. flag    40.4%    34.0%      28.4%         36.7%

Table 8.2.: Percentage of the objects in the three categories in the fakeable object selection
with mee > 116 GeV. The values are given for the fail tight selection, which is the selection
where fake rates and fake factors are applied.
The fraction of b-tagged objects is very low in all three fake rate methods. The
single object method and the tag and probe method using the electron trigger both
predict a fraction below 2%. Only the tag and probe method using the jet trigger
has, with 4.4%, a slightly higher fraction.
⁵ There is also the possibility that the conversion occurs in the beam pipe.
For objects in the fail tight selection the fraction of
b-tagged objects is also always below 3%. Since the fractions of b-tagged objects
are very low and very similar in both samples, a systematic uncertainty arising from
this category seems to be negligible. For the objects which are not b-tagged, all three
fake rate methods contain 62-63% of conversion-flagged objects in the loose selection.
This large fraction is due to the tagging of jets as electrons from a conversion. The
rest of the objects are then neither b-tagged nor conversion-flagged. In the fail tight
selection, 57.5-70.3% of the objects are not b-tagged but conversion-flagged. Thus
there is a larger variation of ≈ 6% between the fractions predicted by the fake rate
methods and those in the fail tight selection. The fraction for the leading object is
always larger than the fraction for the subleading object.
To study whether the differences in the fraction of conversion-flagged objects in
the two samples can cause a systematic effect, further studies were made. Since
there is no effect from b-jets, the objects were from then on separated only into
conversion-flagged and not conversion-flagged objects. For these two categories, fake
rates were calculated in the same binning as used for the default ones. The
fake rate for conversion-flagged or non-conversion-flagged objects was then applied
depending on whether or not the failing object was conversion-flagged. This
was done again for all nine methods. The ratio of the final background estimates
with separated fake rates and fake factors to the default background estimate can
be seen in figure 8.15. Also shown is the mean of all variations when separating the
fake rates and fake factors, using the standard deviation as uncertainty. Compared
to figure 8.13, the methods assuming r = 1 seem to be less affected by separating
the fake rates than the method using both r and f. All methods agree with the
default method within 20%. The mean of all methods deviates from the default
background by at most about 5% in the first bin. Since there are larger changes
within the individual methods, but the mean of all methods is very stable, the
maximum variation of 5% of the mean is added as an additional systematic
uncertainty.
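Applying category-dependent fake rates amounts to a per-object lookup. A minimal sketch (bin edges, rate values and function names are illustrative placeholders, not the measured rates):

```python
import bisect

PT_EDGES = [25.0, 40.0, 60.0, 100.0, 300.0]   # hypothetical pT binning [GeV]
FAKE_RATES = {
    # category -> fake rate per pT bin (values made up for illustration)
    "conv_flagged": [0.05, 0.06, 0.08, 0.10],
    "not_flagged":  [0.15, 0.18, 0.22, 0.25],
}

def fake_rate(pt, conv_flagged):
    """Look up the fake rate for one object by pT bin and conversion flag.

    Objects below the first edge use the first bin; objects above the
    last edge use the last bin (overflow treatment).
    """
    category = "conv_flagged" if conv_flagged else "not_flagged"
    i = min(bisect.bisect_right(PT_EDGES, pt) - 1, len(PT_EDGES) - 2)
    return FAKE_RATES[category][max(i, 0)]
```

In the study above, the same lookup would additionally be binned in |η|, and the background formula then uses the rate matching each failing object's own category.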
8.2.7. Summary
The background was selected using the single object method for determining the fake
rates and the matrix method, which applies the real efficiency r and the fake rates f.
Figure 8.16 shows the invariant mass spectrum of the background estimate and its
systematic and statistical uncertainties in the region 116 GeV to 1500 GeV. Table 8.3
lists all systematic sources and the total uncertainty. To be conservative, the maximum
systematic uncertainty is taken for all bins. To obtain a total systematic uncertainty
for the fake background estimate, the uncertainties from the different sources are added
in quadrature. The overall total uncertainty of the fake background estimate is 20%.
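The quadrature sum of the individual sources in table 8.3 can be verified directly:

```python
import math

# Systematic sources: 18% (fake rate and background method),
# 5% (fake rate statistics), 5% (jet-enrichment cuts),
# 5% (fake background composition).
sources = [0.18, 0.05, 0.05, 0.05]
total = math.sqrt(sum(s * s for s in sources))   # ~0.1997, quoted as 20%
```

The 18% method uncertainty dominates; the three 5% sources together add less than 2% to the total.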
Figure 8.15.: The background estimations of all nine methods are shown when using
separate fake rates for conversion-flagged objects and non-conversion-flagged objects. The
background is shown as a ratio with respect to the current default method using a combined
fake rate. Also the mean of all variations is shown as central value and the standard
deviation as uncertainty.
Systematic source                       Systematic uncer.
Fake rate and background method               18%
Stat. uncer. fake rate                         5%
Cuts to obtain jet-enriched region             5%
Fake background composition                    5%
Total                                         20%

Table 8.3.: Systematic uncertainties of the fake background estimate and their sources.
Figure 8.16.: Di-jet and W +jet background estimate binned in invariant mass in the
range 116 GeV to 1500 GeV. The spectrum is shown at the top and the relative uncertainty
of each bin on the bottom.
9. Comparison of signal and background with data
In this chapter, the kinematic properties of the single electrons and of the electron
pairs in the selected data are compared to the signal and background expectation.
9.1. Single electron properties
Figure 9.1 shows the η, φ and pT distributions of the leading (left column) and
subleading (right column) electron candidates. All electron candidates belong to pairs
with an invariant mass above 116 GeV. The η and φ distributions show the same
behavior as in the region 66 GeV < mee < 116 GeV, see figure 7.5. The backgrounds
are stacked on top of each other and the ratio between data and expectation is shown.
The η distributions show a maximum around η = 0 and fall slowly towards larger
positive and negative values. The falling behavior is due to the non-linear dependence
of η on θ, as discussed in section 7.5 and illustrated in the appendix in figure A.1.
Between |η| = 1.37 and |η| = 1.52 the distributions show a dip, which is caused by
the exclusion of the transition region between barrel and endcap in the
electromagnetic calorimeter. The azimuthal angle φ is distributed uniformly. The
shape of both distributions is in good agreement with the expectation, but there is
an overall offset of about 3-4%. The pT distributions show a steeply falling spectrum
with a maximum around 60 GeV. The leading electron candidate with the highest pT
lies at around 780 GeV. Here, an overall offset of around 3-4% can also be seen, but
in contrast to the electron candidates in the region of the Z-peak, the simulation
seems to model the data better, so there are no large differences at higher pT.
The contribution of the background processes is of the order of a few percent. The
largest contribution is from the tt̄ and tW processes, followed by the di-jet and W+jet
background. The diboson background has the smallest contribution.
9.2. Electron pair properties
Figure 9.2 shows on the left side the pT^ee distribution. A steeply falling spectrum
with a maximum in the first bin from 0 to 20 GeV can be seen.
The maximum occurs near 0 since, at leading order, the initial quarks which produce
the Z/γ∗ have no transverse momentum. In the first bin there is a deviation of
around 6% between data and expectation; the rest of the spectrum is in very good
Figure 9.1.: Properties of the final single electron selection, which build a pair with
mee > 116 GeV, are shown and compared to the sum of the Drell-Yan Monte Carlo
simulation and the background expectation. In the left column, the distributions of the
leading electron η, φ and pT are shown. In the right column the same distributions for the
subleading electron are shown. The binning for leading and subleading pT is chosen to be
constant in √pT. Signal and background simulations were scaled to the luminosity of the
data.
agreement. On the right side, the φee distribution of the electron pair is shown. As
for the single electrons, φee is distributed uniformly from −π to π. Here again an
overall deviation of around 3-4% can be seen.
Figure 9.2.: On the left side the pT^ee distribution for the electron pairs is shown; on the
right side the φee distribution of the electron pairs. Electron pairs of the final selection with
mee > 116 GeV are shown and compared to the sum of signal and background expectation.
The binning for pT^ee is chosen to be constant in √pT. Signal and background simulations
were scaled to the luminosity of the data.
Figure 9.3 shows the rapidity yee distribution of the electron pairs with mee >
116 GeV. The data distribution is compared to the expected distribution. The
rapidity can be identified with the boost of the Z/γ ∗ along the beam axis. The
distribution has a maximum at yee = 0, slowly falling to higher positive and negative
values. Thus most Z/γ ∗ are produced with a small boost along the beam axis. The
distribution ends, similar to the η distributions of the single electrons, at ±2.47.
This is the maximum rapidity within the acceptance of the detector, which can only
be produced when both electrons are at η = 2.47 or η = −2.47 and back-to-back in
φ. Figure 9.4 shows the invariant mass distribution of the electron pairs for the final
selection. The distribution is shown from 66 GeV up to 2 TeV. Starting from 66 GeV,
all distributions show a kinematic turn-on due to the pT cuts. Around 91 GeV the
resonance of the Z-boson can be seen. This resonance also shows up in the diboson
background and in the di-jet and W+jet background. In the latter, the resonance is
an unphysical relic of real electron dilution and thus leads to a small deviation in
this region. In addition, as discussed in section 7.5, the low-mass tail of the
Z-resonance is poorly modeled in the Monte Carlo simulation. Both effects lead
to deviations of up to 11%. This deviation will not influence the actual measurement,
since it will start at 116 GeV. In the region above the Z-resonance, the spectrum
shows a strongly falling behavior. Data and expectation are in good agreement
around 116 GeV but then start to deviate. The deviation grows and reaches up to
7% around 250 GeV. From 300 GeV on, data and expectation agree again, although
the statistical fluctuations of the data are getting larger, which makes it impossible
Figure 9.3.: In this figure, the rapidity yee distribution of the electron pairs after the
final selection with mee > 116 GeV is shown. The distribution of the data is compared
to the sum of signal and background expectation. Signal and background simulations were
scaled to the luminosity of the data.
to see potential disagreements. The invariant mass spectrum above the Z-resonance
is also interesting for searches for new particles. Many theories predict new heavy
particles which behave as heavy partners of the Standard Model Z-boson and are
therefore called Z′-bosons. A search for such new particles was performed by ATLAS
[89], but no significant deviation from the Standard Model processes was found and
exclusion limits for several theory models were determined. Thus in this region the
cross section of the Drell-Yan process can be measured. The pair with the highest
invariant mass is at mee = 1542 GeV. An event display for this event can be seen in
figure 9.5. In the tracking system, tracks with pT > 5 GeV are shown and colored
depending on the originating vertex. The regions where the energy is deposited in
the electromagnetic calorimeter are colored in yellow. A histogram of the energy
deposition is shown in green. It can be seen that several tracks originate from the
collision point. There are two red tracks, originating from the same vertex which
point towards two clusters in the electromagnetic calorimeter. The pT of these two
objects is 584 GeV for the object in the upper detector half and 589 GeV for the
object in the lower half. The tracks are back-to-back in φ and have an opening angle
of roughly ∆η ≈ 1.5. The track which is in the lower detector half has no hits in the
TRT detector. This seems to be a display problem, since the reconstructed object
Figure 9.4.: In this figure the invariant mass mee distribution of the selected electron
pairs starting at mee = 66 GeV is shown. The distribution of the data is compared to
the sum of signal and background expectation. The bin width is chosen to be constant in
log mee . Signal and background simulations were scaled to the luminosity of the data.
has 41 hits in the TRT assigned to it.
Table 9.1 shows the number of selected events from all estimated processes in bins
of invariant mass. Shown are data, expected signal and background and the sum
of signal and background. Also shown is the statistical error, which is assumed to
be Gaussian. In the region of the Z-peak, 66-116 GeV, the Drell-Yan process
dominates. Due to WZ and ZZ events, the diboson contribution in this region is
larger than the contribution from the top backgrounds, as discussed in section 8.1.
The estimated fake background in this region is biased by real electron dilution
and can thus not be trusted. Above the Z-peak up to 500 GeV the top backgrounds
dominate, and above 500 GeV the fake backgrounds. In every mass window the
number of data events is a few percent above the expectation. This can also be
seen in all discussed distributions. The reason for this behavior is yet unknown.
Figure 9.5.: In this figure the event with the electron pair which has the highest invariant
mass of mee = 1542 GeV of the selected pairs is shown. On the upper left, the r-φ-plane and
on the lower left, the r-η-plane of the detector is shown. On the upper right, the energy
deposition in the electromagnetic calorimeter is shown in the φ-η-plane. Tracks with
pT > 5 GeV are shown and colored depending on the originating vertex. The event display
was made using ATLANTIS [90].
mee range [GeV]  | 66 - 116        | 116 - 150     | 150 - 200
-----------------+-----------------+---------------+--------------
Drell-Yan        | 4264582 ± 3801  | 64207 ± 263   | 24043 ± 118
Diboson          | 7396 ± 38       | 957 ± 15      | 740 ± 13
tt̄ & tW          | 6573 ± 55       | 3910 ± 41     | 3343 ± 38
Di-jet & W+jet   | 16688 ± 81      | 2070 ± 18     | 1363 ± 14
-----------------+-----------------+---------------+--------------
Total            | 4295239 ± 3802  | 71145 ± 267   | 29488 ± 126
Data             | 4380540         | 73295         | 30810

mee range [GeV]  | 200 - 300       | 300 - 500     | 500 - 1500
-----------------+-----------------+---------------+--------------
Drell-Yan        | 11251 ± 67      | 3243 ± 22     | 584 ± 3
Diboson          | 528 ± 11        | 214 ± 6       | 44.8 ± 0.8
tt̄ & tW          | 2291 ± 31       | 644 ± 17      | 66 ± 6
Di-jet & W+jet   | 968 ± 12        | 465 ± 9       | 101 ± 4
-----------------+-----------------+---------------+--------------
Total            | 15039 ± 75      | 4566 ± 30     | 796 ± 8
Data             | 15563           | 4629          | 833

Table 9.1.: Number of selected events from all estimated processes in bins of invariant
mass. Shown are data, expected signal and background, and the sum of signal and
background. Only the statistical error √N of the expectation is shown.
10. Cross section measurement
The following chapter describes the procedure of the cross section measurement.
First, the binnings are discussed. Then the unfolding procedure is described and
the systematic uncertainties on the cross sections are derived.
10.1. Resolution and binning
A sensible binning has to be chosen for the measurement of the differential cross
section. It is important to choose a binning which is coarse enough to have sufficient
statistics in every bin. In addition, the binning has to be coarser than the detector
resolution of the measured observable; otherwise bin migration effects become too
large and it becomes difficult to extract the cross section from the measurement
without large uncertainties. On the other hand, if the binning is too coarse,
information about the shape of the distribution is lost.
Figure 10.1 shows the relative resolution of the invariant mass on the left side and
of the absolute rapidity on the right side. The relative resolution is calculated with
respect to the truth mass at Born level. Born level in this context means that no
final state radiation of the electrons is considered.
Figure 10.1.: The resolution of the invariant mass is shown on the left side. The resolution of the absolute rapidity is shown on the right side. The resolution was determined
on Born level using the Drell-Yan simulation.
The relative invariant mass resolution is about 2.4% at mee = 116 GeV and
improves to 1.5% at mee = 1500 GeV. The improving relative invariant mass
resolution is due to the improving relative energy resolution at higher energies.
The relative resolution of the absolute rapidity is about 1.6% at |yee| = 0.0 and
improves to about 0.5% at higher |yee|. The rapidity of a Z depends on its
energy¹ and thus also on the energy of its decay products. This causes an improving
relative rapidity resolution at higher absolute rapidities, since there the average
energy of both electrons is larger (see figure A.9 in the appendix). One
of the electrons has to have |η| > 2.0 to build a pair which has a rapidity above
|yee| = 2.0. However, the η resolution, and thus the pz = pT sinh(η) resolution, gets
worse for these pairs, since no tracking information from the TRT is available for
electrons above |η| = 2.0. This causes, with respect to the previous bin, a slight
increase in the last absolute rapidity bin.
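The relations used here, y = 1/2 ln((E + pz)/(E − pz)) and pz = pT sinh(η), can be illustrated with a short sketch. All kinematic values below are hypothetical, and the electrons are treated as massless, for which the rapidity of a single electron equals its pseudorapidity:

```python
import math

def rapidity(E, pz):
    """Rapidity y = 1/2 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def electron(pT, eta):
    """Energy and longitudinal momentum of a (massless) electron
    from its transverse momentum and pseudorapidity."""
    pz = pT * math.sinh(eta)
    E = pT * math.cosh(eta)  # massless approximation: E = |p|
    return E, pz

# Pair rapidity: sum the four-momenta of both electrons.
# Hypothetical kinematics: both electrons at eta = 2.47,
# corresponding to the maximum pair rapidity within the acceptance.
E1, pz1 = electron(40.0, 2.47)
E2, pz2 = electron(40.0, 2.47)
y_pair = rapidity(E1 + E2, pz1 + pz2)
print(round(y_pair, 2))  # -> 2.47 for massless electrons at equal eta
```

This reproduces the statement above that the maximum pair rapidity of ±2.47 is reached when both electrons sit at the edge of the tracker acceptance.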
The one dimensional binning was chosen to be the same as used in the publication
of the same one dimensional measurement at √s = 7 TeV [33]:
mee = [116, 130, 150, 170, 190, 210, 230, 250, 300, 400, 500, 700, 1000, 1500] GeV.
The choice of this binning makes it easier to compare to the previous measurement.
The main result of this analysis is the two dimensional measurement, and thus it is
not important to further optimize the one dimensional binning. Figure 10.2 shows
on the left side the purity of this binning. The purity is defined as the fraction of
simulated events reconstructed in a given mee bin that have m^true_ee in the same bin,
and was determined using the Drell-Yan simulation. For the first bin the purity is
about 83%. The second bin is 6 GeV wider than the first and thus the purity rises
up to 87%. Since the bin width then stays constant up to 250 GeV, the purity drops
again down to about 82% due to the worse absolute energy resolution at higher mee.
Above 250 GeV the purity stays, besides small effects of the bin width, constant
and is always above 90%.
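The purity definition above can be sketched in a few lines; the toy (m_true, m_reco) pairs below are hypothetical, while the bin edges are the ones listed above:

```python
import bisect

# 1-dim invariant mass bin edges from the text (GeV)
edges = [116, 130, 150, 170, 190, 210, 230, 250, 300, 400, 500, 700, 1000, 1500]

def bin_index(m):
    """Index of the bin containing mass m, or None outside the range."""
    if not (edges[0] <= m < edges[-1]):
        return None
    return bisect.bisect_right(edges, m) - 1

def purity(events):
    """Purity per reconstructed bin: fraction of events reconstructed in a
    bin whose true (Born-level) mass falls into the same bin.
    `events` is a list of (m_true, m_reco) pairs in GeV."""
    reco = [0] * (len(edges) - 1)
    same = [0] * (len(edges) - 1)
    for m_true, m_reco in events:
        i = bin_index(m_reco)
        if i is None:
            continue
        reco[i] += 1
        if bin_index(m_true) == i:
            same[i] += 1
    return [s / r if r else None for s, r in zip(same, reco)]

# Hypothetical toy events: (m_true, m_reco)
toys = [(120, 122), (128, 131), (140, 139), (145, 151), (460, 470)]
print(purity(toys))
```

Events whose reconstructed mass migrates across a bin edge (such as the 128 → 131 GeV toy) lower the purity of the bin they land in.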
The two dimensional binning was chosen by hand in such a way that every bin has
sufficient statistics:
mee = [116, 150, 200, 300, 500, 1500] GeV × |yee| = [0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4]
The purity of this binning is overall better than that of the one dimensional binning,
since the binning in mee got much coarser. Figure 10.2 shows on the left side in
red the purity of this binning in mee. The purity is in the first bin about 93% and
rises up to 98% in the last bin. On the right side the purity of the rapidity
binning in the ranges 116 GeV < mee < 150 GeV and 300 GeV < mee < 500 GeV is
shown. The purity of the rapidity binning is constant for both invariant mass bins
and always above 97%, due to the very good angular resolution.
¹ y = 1/2 ln((E + pz)/(E − pz))
Figure 10.2.: The purity of the one and two dimensional invariant mass binning is
shown on the left side. The purity of the rapidity binning in an invariant mass range from
116 to 150 GeV and 300 to 500 GeV is shown on the right side. The purity is defined as the
fraction of simulated events reconstructed in a given mee bin that have m^true_ee in the same
bin and was determined on Born level using the Drell-Yan simulation.
10.2. Unfolding
10.2.1. Differential cross section
To determine a differential cross section, the measured signal spectra have to be
unfolded. In this thesis the differential cross section of the invariant mass mee and
absolute rapidity |yee | is calculated in the following way:
(dσ / (dmee d|yee|))_i = (Ndata,i − Nbkg,i) / (Lint Ai Ei ∆mee,i ∆|yee|i).   (10.1)
Ndata,i is the number of selected events and Nbkg,i the number of estimated
background events (see chapter 8) in a given bin i. To unfold the cross section for
efficiency and acceptance effects, bin-by-bin correction factors Ei and Ai are used,
respectively. For this analysis a bin-by-bin unfolding is sufficient, since the chosen
binning has a high purity and thus bin-migration effects are small. The effect of
using a different, Bayesian unfolding method was studied in the √s = 7 TeV
measurement and found to be small. A systematic uncertainty on the order of 1.5% was
added there due to small differences [33], which is neglected in this thesis. Finally, to get
the cross section, the unfolded number of signal events has to be divided by the
integrated luminosity of the dataset Lint and the widths ∆mee,i and ∆|yee|i of the
bins.
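Equation (10.1) amounts to a simple per-bin computation. A minimal sketch, where the event counts, correction factors and bin widths are hypothetical placeholders:

```python
# Bin-by-bin unfolded double differential cross section, eq. (10.1):
# (dsigma / dm_ee d|y_ee|)_i = (N_data,i - N_bkg,i) / (L_int * A_i * E_i * dm_i * dy_i)
def diff_xsec(n_data, n_bkg, lumi, acc, eff, dm, dy):
    return (n_data - n_bkg) / (lumi * acc * eff * dm * dy)

# Hypothetical bin: 1000 data events, 50 expected background events,
# L_int = 20 fb^-1 = 20000 pb^-1, A = 0.87, E = 0.75,
# bin widths of 34 GeV in mass and 0.4 in |y|.
xs = diff_xsec(1000, 50, 20000.0, 0.87, 0.75, 34.0, 0.4)
print(f"{xs:.6f} pb / GeV")
```

With the luminosity in pb^-1 and the mass bin width in GeV, the result comes out in pb per GeV per unit rapidity.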
10.2.2. Efficiency and acceptance
The number of selected events has to be corrected, since due to inefficiencies of the
detector, not every produced Drell-Yan event is measured. This efficiency correction
can be determined from the signal simulation and can, for a specific bin, be derived
99
10. Cross section measurement
with the following formula:
E = N^sim_sel,Σ / N^sim_gen,Σ.   (10.2)
N^sim_sel,Σ is the number of selected events simulated on detector level. This number is
valid for a given phase space Σ, which is defined by the fiducial region of the signal
selection:
|η| < 2.47, excluding 1.37 < |η| < 1.52,
pT(leading) > 40 GeV,
pT(subleading) > 30 GeV.
N^sim_gen,Σ is the number of simulated events in this phase space Σ. The efficiency covers
also the effect of bin migration, since for the numerator, the event is not required to
be generated and reconstructed in the same bin. Figure 10.3 shows on the top left,
the efficiency of the one dimensional mee binning, determined by using the signal
simulation. The efficiency starts at mee = 116 GeV at around 69% and then rises up
to 80%. The rising behavior has to be due to the medium identification efficiency,
since the isolation cut was chosen in such a way that the efficiency stays constant.
At higher invariant mass both electrons have on average higher energy. The relative
energy resolution gets better at higher energies and thus it is easier to cut on the
energy deposition in the calorimeter. This leads to a higher medium identification
efficiency. On the bottom left, the efficiency binned in rapidity for an invariant mass
slice of 300 to 500 GeV is shown. At |yee| = 0 the efficiency is around 80% and then
drops down to 71%. At higher rapidities, the two electrons are more likely to be
at higher |η| and thus measured in the endcaps of the electromagnetic calorimeter.
This leads to a falling behavior with |yee|, since in the endcaps there is more material
between the beam axis and the electromagnetic calorimeter and thus the identification
is more problematic.
The efficiency correction, as already discussed, refers to a given fiducial phase space
Σ. The calculated cross section is thereby only valid in this phase space. To give
a more convenient result, which is more independent of the detector geometry,
a phase space extrapolation to a more common fiducial region can be made via an
acceptance correction. The acceptance correction can also be determined from the
signal simulation and is given by:
A = N^sim_gen,Σ / N^sim_gen,Ω,   (10.3)
where N^sim_gen,Ω is the number of generated events in a phase space Ω to which the
cross section shall be extrapolated. Ω for this analysis is chosen to be:
|η| < 2.5,
pT(leading) > 40 GeV,
pT(subleading) > 30 GeV.
This includes the extrapolation over the transition region 1.37 < |η| < 1.52 to
have a continuous interval, and the extrapolation from |η| < 2.47 up to |η| < 2.5
for cosmetic reasons. A correction up to higher |η| and smaller pT would have,
mainly due to the chosen PDF, a stronger model dependency and would thus introduce
larger theoretical uncertainties. The acceptance can be seen in the right column of
figure 10.3. For the one dimensional mee binning, the acceptance stays constant at
around 87% up to 700 GeV. This is due to the extrapolation over the transition
region, which affects electrons in all ranges of invariant mass. Above 700 GeV the
acceptance rises slightly up to 89%. The reason is that the electrons in this region
are less affected by the η extrapolations, since at high invariant masses the average
η is at lower values (see also figure A.11 in the appendix). On the bottom, the
acceptance is shown for the rapidity binning. For |yee| = 0 the acceptance is about
92%. Both electrons are most likely in the central region of the detector and thus less
affected by the acceptance extrapolations. When going to higher |yee|, it is
more and more likely for one of the two electrons to be in the transition region. This
results in a minimum acceptance of 77% in the bin up to |yee| = 2.0. For rapidities
above 2.0, both electrons have to be at |η| above the transition region and are thus
only affected by the small extrapolation up to |η| = 2.5. Hence the acceptance goes
up again to 91% in the last bin.
Figure 10.3.: In the left column, the efficiency of the one dimensional mee binning (top)
and of the rapidity binning in the range 300 to 500 GeV (bottom) is shown. In the right
column, the acceptance is shown for the same binnings. Efficiency and acceptance were
determined on Born level using the Drell-Yan simulation.
10.2.3. Correction factor CDY
The efficiency and acceptance corrections can be combined into a common correction
factor:
CDY = A E = N^sim_sel,Σ / N^sim_gen,Ω.   (10.4)
This correction factor CDY is shown for the extrapolation to Born level in figure 10.4.
The dependency on mee is, as expected, the same as for the efficiency, with an offset
to smaller values and a slightly stronger increase at high mee due to the acceptance.
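How the efficiency, the acceptance and CDY (equations (10.2)-(10.4)) relate can be sketched with hypothetical simulated event counts:

```python
# Efficiency, acceptance and combined correction factor, eqs. (10.2)-(10.4):
#   E    = N_sel,Sigma / N_gen,Sigma  (selected / generated in fiducial region Sigma)
#   A    = N_gen,Sigma / N_gen,Omega  (extrapolation to the common region Omega)
#   C_DY = A * E = N_sel,Sigma / N_gen,Omega
def corrections(n_sel_sigma, n_gen_sigma, n_gen_omega):
    eff = n_sel_sigma / n_gen_sigma
    acc = n_gen_sigma / n_gen_omega
    return eff, acc, acc * eff

# Hypothetical simulated event counts in one m_ee bin:
eff, acc, c_dy = corrections(7500, 10000, 11500)
print(eff, round(acc, 4), round(c_dy, 4))
```

N_gen,Sigma cancels in the product, which is why CDY can be computed directly as the ratio of selected events in Sigma to generated events in Omega.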
Figure 10.4.: The correction factor CDY, correcting for efficiency and acceptance effects,
is shown for the one dimensional binning in mee. For the determination, the Drell-Yan
simulation on Born level was used.
The correction factor CDY is affected by the limited statistics of the signal sample
used to calculate it. For a perfect detector resolution, the statistical uncertainty
of CDY would be that of a binomial distribution, since in one bin N^sim_sel,Σ
is a subset of N^sim_gen,Ω. Due to the finite resolution, migration between bins occurs and
thus N^sim_gen,Ω no longer completely contains N^sim_sel,Σ. Assuming the uncertainty
of a Gaussian distribution would however be too conservative and would lead to a
too large uncertainty. Due to the rather small amount of migration there is still a
large correlation between numerator and denominator. To get the correct statistical
uncertainty, the calculation of CDY can be split into uncorrelated samples:
CDY = N^sim_sel,Σ / N^sim_gen,Ω = (Nstay + Ncome) / (Nstay + Nleave),   (10.5)
where Nstay is the number of events generated and reconstructed in a certain bin,
Ncome = N^sim_sel,Σ − Nstay are the events reconstructed in a certain bin, but generated
elsewhere, and Nleave = N^sim_gen,Ω − Nstay are the events generated in a certain bin, but
migrating out or failing the selection cuts. Following reference [91], the uncertainty
on CDY can then be expressed as:
(∆CDY)² = (N^sim_gen,Ω − N^sim_sel,Σ)² / (N^sim_gen,Ω)⁴ · (∆Nstay)²
        + 1 / (N^sim_gen,Ω)² · (∆Ncome)²
        + (N^sim_sel,Σ)² / (N^sim_gen,Ω)⁴ · (∆Nleave)².   (10.6)
10.3. Systematic uncertainties
There are three main sources of systematic uncertainties on the cross section.
First, there are systematic uncertainties on the bin-by-bin correction factor CDY,
coming from various sources. Second, systematic uncertainties arise from the
background estimation. These two sources are discussed in more detail in the
following. The third systematic uncertainty comes from the measurement of the
integrated luminosity, which currently has an uncertainty of 2.8%, as already
discussed in section 4.7.
10.3.1. Systematic uncertainties on CDY
To calculate systematic uncertainties on the correction factor CDY, different
variations of CDY were calculated by varying single sources of systematic uncertainty.
The resulting CDY was compared to the default one and the differences are quoted
as systematic uncertainty.
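This variation procedure can be sketched in a few lines; the nominal and varied CDY values below are hypothetical placeholders:

```python
# Sketch of the variation procedure: recompute C_DY with one systematic
# source shifted up and down, and quote the larger relative deviation from
# the nominal value as the systematic uncertainty on C_DY.
def systematic_on_c_dy(c_nominal, c_up, c_down):
    return max(abs(c_up - c_nominal), abs(c_down - c_nominal)) / c_nominal

# Hypothetical values for one m_ee bin:
rel = systematic_on_c_dy(c_nominal=0.690, c_up=0.702, c_down=0.681)
print(f"{100 * rel:.1f}%")
```

Since CDY enters the cross section linearly, this relative deviation translates directly into a relative uncertainty on the measured cross section.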
Reconstruction
The reconstruction scale factor [83] corrects for differences of the reconstruction
efficiency between data and simulation and enters only the numerator of CDY as a weight.
Due to this, the relative uncertainty on this scale factor directly translates into an
uncertainty on CDY. The systematic uncertainty given for the reconstruction scale factor
is added to and subtracted from the default value. The larger of the two variations is
quoted as systematic uncertainty on CDY.
Identification and isolation
The identification [83] and isolation [84] scale factors correct for differences of the
identification and isolation efficiency between data and simulation. These scale factors
enter, like the reconstruction scale factor, only the numerator of CDY as a weight,
and therefore the uncertainty propagates directly to CDY. The systematic
uncertainty given for the identification scale factor is again varied up (added to the
nominal value) and down (subtracted from the nominal value). For the isolation
scale factors the uncertainties are separated into a systematic and a statistical
part. Both are varied up and down separately, and the largest variation in CDY is
quoted as systematic uncertainty. This is a very conservative treatment for the
statistical part, since the statistical uncertainty is uncorrelated between all bins.
Energy scale
Corrections of the energy scale [82] are applied to data, but effects due to systematic
uncertainties of this rescaling are studied in the simulation. For this study the reconstructed energy is varied in the simulation, according to the systematic uncertainties
which are given for the energy rescaling. Rescaling of the energy of the electrons
leads to different invariant masses and thereby to bin migration in invariant mass.
This can distort the shape of the reconstructed invariant mass spectrum and thus
lead to differences in CDY . There are different sources of systematic uncertainties
given. First there is a systematic uncertainty due to the knowledge of the material
in the detector. To study this, the energy scale is reevaluated using a Monte Carlo
sample where the amount of material in the detector was changed according to its
systematic uncertainty. Differences in the energy scale are then quoted as systematic uncertainty of the material. Furthermore a systematic uncertainty due to the
method to extract the energy scales is given. The method uncertainty is dominantly
driven by uncertainties on the background estimation in the electron selection, which
is used to determine the corrections on the energy scale. Additionally there is an uncertainty due to the knowledge of the energy scale in the presampler detector, which
is used to correct for energy lost upstream of the active electromagnetic calorimeter
(see section 4.4.1). Also the statistical uncertainties of the energy rescaling are given.
All uncertainties are symmetric but do not lead to symmetric effects in CDY , since
varying the energy scale up has a larger effect on a strongly falling spectrum, due
to larger bin migrations. Because of this asymmetry, not the maximum deviation
from the up and down variations is used as systematic uncertainty on CDY , but the
average of the up and down variation. Of all systematic variations, the material
uncertainty is largest, followed by the presampler uncertainty.
Energy resolution
The smearing of the energy in the simulation, which corrects for the too
optimistically modeled energy resolution, has a systematic uncertainty. This, as for the
energy scale, can distort the reconstructed invariant mass spectrum and thus cause
differences in CDY. The degree of smearing is varied within its systematic
uncertainty; the largest
deviation of the up and down variations is then taken as the systematic uncertainty
on CDY .
Trigger
The trigger scale factor corrects for differences of the trigger efficiency between data
and simulation. This scale factor thus enters, like the other scale factors, only the
numerator of CDY as a weight, and therefore the uncertainty propagates directly to
CDY. The uncertainty is separated into a systematic and a statistical part. Both
parts are varied up and down separately, and the largest variation in CDY is quoted
as systematic uncertainty.
Monte Carlo modeling
A systematic uncertainty on CDY could occur if the simulation is modeled
incorrectly. In particular this concerns the modeling of the pileup. To cover possible
systematic effects, the reweighting to a realistic pileup distribution is turned off.
This is a very conservative treatment, but since this affects both numerator and
denominator, possible effects cancel to a large degree. Systematic uncertainties due
to the showering and hadronization models of a specific generator are not studied,
since no alternative Monte Carlo simulation is available yet.
Theory
The k-factors, which reweight the Powheg cross section to the cross section
predicted by FEWZ, have a systematic uncertainty which can affect CDY. The
systematic uncertainty is given separately for the k-factor to NNLO and for the part
which covers the photon induced processes. The uncertainty on the k-factor to NNLO
covers uncertainties coming from the chosen PDF and the uncertainty on αs. It
was found that the PDF uncertainties covered the differences to CT10, Herapdf1.5
and NNPDF2.3, but not the differences to ABM11. Thus an additional uncertainty
was added covering the envelope to ABM11. The uncertainty is given at a 90%
confidence level. The correction for photon induced processes has a systematic
uncertainty coming from differences when using two different quark mass schemes in
the calculation. Since the k-factors enter the numerator as well as the denominator,
differences cancel to a large part and the resulting uncertainty is thus small.
10.3.2. Systematic background uncertainties
To calculate systematic uncertainties on the cross section due to uncertainties on
the background estimation, the different backgrounds are varied according to their
estimated uncertainty, and the difference of the resulting cross section to the nominal
one is quoted as systematic uncertainty on the cross section. Also the statistical
uncertainties on the background estimation are quoted as systematic uncertainty
but treated separately.
Di-jet and W+Jet background
As discussed in section 8.1, the estimated uncertainty on the di-jet and W+jet
background is 20%. This assumes that the shape of the distribution is correct. Thus
the background was varied up and down by this uncertainty. Also the statistical
uncertainties were propagated by shifting the background prediction up and down
by this uncertainty. The larger of the up and down variations was taken as
systematic uncertainty.
tt̄ and tW background
The cross section for the tt̄ process is, as described in section 6.1.2,
σtt̄ = 252.89 +6.39/−8.64 (scale) +7.56/−7.30 (mt) ±11.67 (PDF+αs) pb at NNLO [77].
The given uncertainties of the different sources are added in quadrature to calculate a
total uncertainty. For the tW process the cross section is given by σtW = 22.37 ± 1.52 pb
[78]. According to these uncertainties on the cross sections, a systematic uncertainty of 6%
was assumed for the top backgrounds. This assumes that there are no differences in
the shape of the predicted distributions. Also the statistical uncertainties of both
backgrounds were propagated.
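The quadrature sum of the quoted tt̄ uncertainties can be checked in a few lines, using only the numbers quoted above:

```python
import math

# tt-bar NNLO cross section and its uncertainty sources [77], in pb
sigma = 252.89
up = [6.39, 7.56, 11.67]    # scale, m_t, PDF+alpha_s (upward)
down = [8.64, 7.30, 11.67]  # downward

# Add the sources in quadrature, separately for up and down
tot_up = math.sqrt(sum(u ** 2 for u in up))
tot_down = math.sqrt(sum(d ** 2 for d in down))
print(f"+{tot_up:.1f} / -{tot_down:.1f} pb  "
      f"(+{100 * tot_up / sigma:.1f}% / -{100 * tot_down / sigma:.1f}%)")
```

The total comes out at roughly 15-16 pb, i.e. about 6% of the cross section, consistent with the uncertainty assumed for the top backgrounds.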
Diboson background
The cross sections used to normalize the diboson background have a systematic
uncertainty of 5% [80]. This uncertainty is taken as the systematic uncertainty of
the diboson background estimation, assuming that there are no differences in the
shape of the predicted distributions. The background was shifted according to this
uncertainty and the differences in the cross section are quoted as systematic
uncertainty on the cross section. Also the statistical uncertainties were propagated.
10.3.3. Discussion of systematic uncertainties
Figure 10.5 shows the systematic uncertainties on CDY on the left side, and on the
cross section due to the background estimation on the right side. Since CDY enters
linearly into the calculation of the cross section, the relative uncertainty on CDY
can directly be translated into an uncertainty on the cross section.
The dominating systematic uncertainty on CDY comes from the systematic
uncertainty of the energy scale. It is around 2.6% at an invariant mass of 116 GeV
and rises up to 3.8% in the last bin. The second and third largest uncertainties
come from the identification and reconstruction scale factors and are about 1.8%
and 1.4%, respectively, constant over the whole invariant mass range. The uncertainty coming
Figure 10.5.: The systematic uncertainty on the one dimensional cross section, in percent,
from CDY is shown on the left side and from the background estimation on the right side.
The systematic uncertainties from the different sources are added in quadrature to a total
systematic uncertainty.
from the statistical uncertainty of the isolation scale factor is always below 1%,
except for the last two bins. In the second to last bin it is about 1.4% and then rises
up to 3.4% in the last bin. All other systematic uncertainties are below 1%, except
for the energy resolution uncertainty and the pileup uncertainty. These rise in the
region 200-300 GeV up to 1.2%. This is most likely due to statistical fluctuations,
since in this region the signal simulation has quite low statistics. The resulting total
uncertainty on CDY varies between 2.8% in the bin around 200 GeV and 5.7% in
the last bin.
At low invariant masses, the dominating systematic uncertainties on the cross section
due to the background estimation come from the systematic uncertainties of the
fake and top backgrounds. The systematic uncertainty of the top background rises
up to 1.2% in the bin 200-300 GeV and then drops again below 1% for the last bins.
The systematic uncertainty due to the di-jet and W+jet background rises up to
3.5% around 450 GeV and then begins to drop again, down to 2.4% in the last bin.
Both behaviors can be explained by the relative contribution of the backgrounds.
For invariant masses above 700 GeV, the statistical uncertainties of the top
and fake backgrounds start to contribute. These rise up to 2.8% in the last bin.
The systematic and statistical uncertainty due to the diboson background is always
below 0.5%.
Figures 10.6 and 10.7 show the same plots of the relative uncertainty binned in absolute rapidity for the five invariant mass bins of the two dimensional measurement.
The same dependence of the CDY uncertainty on invariant mass can be seen. Besides the energy scale uncertainty, no other uncertainty has a strong dependence
on the rapidity. The systematic uncertainty of the energy scale rises towards higher
rapidities, which correspond to electrons measured at higher |η|, where
the detector resolution is worse and thus the determination of the energy scale is
more difficult. In the highest mass bin, the uncertainty rises in the outermost
Figure 10.6.: Systematic uncertainties for the two dimensional cross section in percent,
for the invariant mass bins between 116 and 300 GeV. Systematic uncertainties of CDY
are shown on the left side and those of the cross section due to the background estimation
on the right side. The systematic uncertainties are separated into different sources which
are added in quadrature to a total systematic uncertainty.
Figure 10.7.: Systematic uncertainties for the two dimensional cross section in percent,
for the invariant mass bins between 300 and 1500 GeV. Systematic uncertainties of CDY
are shown on the left side and those of the cross section due to the background estimation
on the right side. The systematic uncertainties are separated into different sources which
are added in quadrature to a total systematic uncertainty.
rapidity bin up to 9%. The background uncertainty is still dominated by the top and
fake background uncertainties. In the low mass bins, the fake background is higher
in the outer rapidity bins and causes uncertainties of up to 1.2%. The uncertainty of
top processes contributes mostly in the central rapidity bins. Above an invariant
mass of 200 GeV, the uncertainty of the fake background contributes more strongly at
low rapidities. The largest uncertainty coming from the background occurs at higher
invariant masses in the first rapidity bin and reaches up to 7% for the bin 300 to 500
GeV. Tables with a detailed listing of all systematic uncertainties can be found in
the appendix, tables A.6 to A.11.
11. Results and interpretation of the
Measurement
In this chapter, the measured cross sections are presented and interpreted in terms
of their sensitivity to PDFs. In the first part the measured cross sections are shown and
discussed. In the second part a tool to compare the data to existing PDFs and to
extract PDFs is introduced, followed by a discussion of the comparison to Standard
Model predictions using various PDFs extracted by different groups, and by studies
of the impact of the new measurement on PDFs.
11.1. Single differential cross section
Figure 11.1 shows the measured single differential Drell-Yan cross section binned in
invariant mass of the electron pair, in the range 116 to 1500 GeV. The measured
cross section is shown with its statistical uncertainty, which is assumed to follow a
Gaussian distribution. The green band shows the total systematic uncertainty, also
including statistical uncertainties which do not come directly from the measured cross
section. The 2.8% luminosity uncertainty is not included in the systematic uncertainty. The cross section falls from 116 to 1500 GeV by six orders of magnitude.
The measurement is dominated by systematic uncertainties up to an invariant mass
of 700 GeV. The statistical uncertainty in the last two bins is larger than the systematic one. Also shown are different theory predictions for the cross section. First,
the blue triangles show the theory prediction calculated at NNLO with FEWZ using the MSTW2008NNLO PDF. This calculation also includes corrections which
account for photon induced processes and W /Z radiation [39]. The relative contribution of this process becomes larger at high invariant masses and amounts to
about 5% in the last bin. The prediction was obtained by reweighting the Powheg prediction to
the FEWZ prediction using a k-factor (see section 6.1.1). The uncertainties shown
on the FEWZ prediction cover the uncertainties of the PDF used, αs , scale and
differences to other PDFs, and are given at a 90% confidence level. In addition, three
theory predictions using three different PDFs, CT10, Herapdf1.5 and
NNPDF2.3, are shown. These were calculated at NLO using MCFM. The uncertainties on the MCFM predictions comprise the PDF uncertainties
and the scale uncertainty. For the PDF uncertainty either the 68% or
the 90% confidence level uncertainty was used, depending on which confidence level
was provided by the PDF group. To calculate the scale uncertainty, factorization
and renormalization scale were simultaneously multiplied by a factor 2 and 1/2
and the average deviation was added in quadrature to the PDF uncertainty. Also
shown is the ratio between the measurement and the different theory predictions.
For all theoretical predictions, the ratio theory/data is below unity. The deviation
between data and theory is most significant in the region 150 to 300 GeV. The largest
deviation in this region between data and the NNLO prediction is about 7% in the
bin 230 to 250 GeV. This is not covered by the statistical and systematic uncertainty
of the measurement, but data and theory are still in agreement when considering
the 90% confidence level theory uncertainty. Among the NLO predictions, Herapdf1.5
fits the measurement best. At low and very high masses Herapdf1.5 fits even
better than the NNLO prediction. Assuming an NNLO prediction would shift the theory prediction by about 4% in the correct direction (as seen for MSTW2008 in section
3.4), Herapdf1.5 would agree with the data within the statistical and systematic
uncertainties. The theory prediction using CT10 has the worst agreement with the
measurement.
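The scale-uncertainty prescription described above can be sketched as follows; the cross-section values are hypothetical placeholders, not the actual predictions:

```python
import math

def scale_plus_pdf_uncertainty(sigma_central, sigma_scale_x2, sigma_scale_half, pdf_unc):
    """Average absolute deviation of the two simultaneous scale variations,
    added in quadrature to the PDF uncertainty (all in the same units)."""
    scale_unc = 0.5 * (abs(sigma_scale_x2 - sigma_central)
                       + abs(sigma_scale_half - sigma_central))
    return math.sqrt(scale_unc ** 2 + pdf_unc ** 2)

# Hypothetical cross sections [pb/GeV] for one mass bin:
sigma = 0.100        # central scales (mu_F = mu_R)
sigma_x2 = 0.097     # both scales multiplied by 2
sigma_half = 0.104   # both scales multiplied by 1/2
pdf = 0.003          # PDF uncertainty
print(scale_plus_pdf_uncertainty(sigma, sigma_x2, sigma_half, pdf))
```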
Figure 11.1.: Fiducial Drell-Yan cross section binned in invariant mass of the electron
pair in the range 116 to 1500 GeV. The measured fiducial cross section is shown with its
statistical uncertainty. The green band shows the total systematic uncertainty, also
including statistical uncertainties which do not come directly from the measured cross section
and excluding the 2.8% luminosity uncertainty. The fiducial region is defined by |η| < 2.5,
pT (leading) > 40 GeV and pT (subleading) > 30 GeV. The theory prediction calculated at NNLO
with FEWZ using the MSTW2008NNLO PDF, including corrections for photon induced
processes, is shown. Also shown are the theory predictions calculated at NLO with MCFM
for three different PDFs: CT10, Herapdf1.5 and NNPDF2.3.
Figure 11.2 shows the measurement of the single differential cross section at √s = 7
TeV [33]. The cross section is shown from 116 GeV to 500 GeV with its statistical
and systematic uncertainties and is compared to different theory predictions. The
theory predictions were calculated in the same way as the FEWZ calculation for
√s = 8 TeV, also including corrections for photon induced production and W /Z
radiation. The behavior of theory relative to measurement is similar. Figure 11.3
shows the ratio between the measured cross section and the corresponding FEWZ
prediction using the MSTW2008 PDF for the measurements at √s = 8 TeV and
√s = 7 TeV in the range 116 to 1500 GeV. As uncertainty the statistical uncertainties of both measurements are shown. Both measurements show essentially the same
discrepancy between data and theory. Starting from 116 GeV, the disagreement
gets larger and is most significant, even larger for the √s = 7 TeV measurement, in
the region 200 to 300 GeV.
A table with the measured cross section and the corresponding statistical and
systematic uncertainties can be found in the appendix at A.12.
11.2. Double differential cross section
Figure 11.4 shows the measured two dimensional cross sections binned in absolute
rapidity |yee | of the electron pair in different ranges of invariant mass mee . The
measured cross section is shown with its statistical uncertainty, which is assumed to
follow a Gaussian distribution. The systematic uncertainty, also including all statistical uncertainties which do not come from the measured cross section, is shown
as a green band. Within a given invariant mass bin, the cross section falls slowly
towards higher values of rapidity. Up to an invariant mass of 300 GeV, the measurement
is dominated by its systematic uncertainties. Above 300 GeV, the statistical uncertainty is of the same order as the systematic one. The last invariant mass bin is
dominated by the statistical uncertainty. The same theory predictions as for the one
dimensional measurement are shown. The only difference is that the corrections for
the photon induced process are not applied, since the corrections are only available
for the dependence on invariant mass. Photon induced production is assumed to
be more dominant at higher rapidities, and since this dependence is not covered
by the calculation, the corrections are not applied to the NNLO prediction. As for
the one dimensional measurement, the ratio between the NNLO theory prediction
and data is below unity in nearly all bins. Only a few bins in the last two invariant
mass bins have a ratio above unity for some theory predictions. In the first invariant
mass bin the rapidity shape is in good agreement between theory and data. But
for the three bins between 150 and 500 GeV, the measurement has a higher cross
section in the second bin than in the first bin. This behavior is not predicted by any
of the theory cross sections. As for the one dimensional cross section, the NNLO
calculation using MSTW2008 and the NLO calculation using Herapdf1.5 are in
best agreement with the data.
Figure 11.2.: Fiducial Drell-Yan cross section at √s = 7 TeV [33], binned in invariant
mass of the electron pair in the range 116 to 500 GeV. Shown is the measured fiducial
cross section with its statistical uncertainty. The green bands show the systematic and
total uncertainty, excluding the 1.8% luminosity uncertainty. Different theory predictions,
calculated at NNLO with FEWZ using different PDFs (MSTW2008, HERAPDF1.5, CT10,
ABM11, NNPDF2.3), are shown. The predictions include corrections for photon induced
processes and W /Z radiation.
11.3. HERAFitter
HERAFitter [92, 93] is an open source project to extract the HERAPDF [44] from
the HERA measurements. It also provides the possibility to add one's own measurements
and test them for sensitivity to PDFs. The extraction of the PDFs is based on the
methodology described in section 2.2.2: A parametrization of the PDFs is introduced
at a starting scale Q0 . The PDF is then evolved to the Q2 scale corresponding to the
measurement and the parameters of the parametrization are deduced by minimizing
the χ2 between the data and the PDF prediction. The minimization of the χ2 is based on the standard
MINUIT program [94]. For the calculation of the χ2 , the statistical and uncorrelated
Figure 11.3.: Ratio between the measured cross section and the corresponding FEWZ
NNLO prediction using MSTW2008, for the measurements at √s = 8 TeV and √s = 7
TeV [33]. The uncertainties shown are the statistical uncertainties of the measurements.
systematic uncertainties as well as the correlated systematic uncertainties of the
measurements are considered.
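A minimal sketch of such a χ2 with one fully correlated systematic source, profiled through a nuisance parameter, is given below; the exact χ2 definitions used by HERAFitter are those of [93] and [95], and all function names and numbers here are hypothetical:

```python
import numpy as np

def profiled_chi2(data, theory, stat, corr_shift):
    """chi2 with one fully correlated systematic source, profiled analytically.

    data, theory : per-bin measured and predicted cross sections
    stat         : per-bin uncorrelated (statistical) uncertainty
    corr_shift   : per-bin effect of a 1-sigma shift of the correlated source
    Returns (chi2_min, best_shift), the shift being in units of sigma.
    """
    weighted_resid = (data - theory) / stat ** 2
    b = np.sum(corr_shift * weighted_resid) / (1.0 + np.sum((corr_shift / stat) ** 2))
    chi2 = np.sum(((data - theory - b * corr_shift) / stat) ** 2) + b ** 2
    return chi2, b

# Toy example: theory sits 2% below data; a correlated 2% source absorbs the
# offset almost completely, at the cost of the b^2 penalty term.
data = np.array([1.02, 2.04, 3.06])
theory = np.array([1.00, 2.00, 3.00])
stat = np.array([0.01, 0.02, 0.03])
shift = 0.02 * theory
chi2, b = profiled_chi2(data, theory, stat, shift)
```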
11.4. Comparison with existing parton distribution
functions
The HERAFitter framework is used to compare the two dimensional measurement
to theoretical Standard Model predictions based on different PDFs, allowing for
possible PDF discrimination by this measurement. For this comparison, the
measured cross section is implemented in HERAFitter. Statistical uncertainties are
treated as fully uncorrelated between all bins, whereas systematic uncertainties are
treated as fully correlated between all bins. The treatment of the systematic uncertainties as 100% correlated between all bins is an approximation, but since the bins
are certainly strongly correlated, the error introduced by this should be very small.
Different theory calculations are implemented against which the measurement can be
compared. The measurement is compared to the theory calculation using FEWZ
Figure 11.4.: Fiducial Drell-Yan cross section binned in invariant mass and absolute
rapidity of the electron pair in the range mee = 116 GeV to mee = 1500 GeV and |yee | = 0.0
to |yee | = 2.4. Shown is the measured fiducial cross section with its statistical uncertainty.
The green band shows the total systematic uncertainty, also including statistical uncertainties which do not come directly from the measured cross section and excluding the 2.8%
luminosity uncertainty. The theory prediction calculated at NNLO with FEWZ using the
MSTW2008NNLO PDF is shown. Also shown are the theory predictions calculated with
MCFM at NLO for three different PDFs: CT10, Herapdf1.5 and NNPDF2.3.
with MSTW2008NNLO including all corrections except the PI corrections. To
obtain theory predictions for further PDFs, the grids calculated with
MCFM at NLO can be used. To correct the grids for higher order corrections,
k-factors are calculated from the ratio of the FEWZ prediction to the MCFM
prediction convoluted with MSTW2008NNLO. By applying these k-factors, the
matrix elements of the hard scattering process, stored in the grids, are corrected for
NNLO QCD and NLO electroweak effects as well as for W /Z radiation. If a PDF
is convoluted with the NNLO corrected grid, the NNLO version of the PDF has to
be used. When using the pure MCFM grid, the NLO version of the PDF is
used.
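A bin-by-bin sketch of this k-factor correction is given below; the cross-section arrays are hypothetical, and in the analysis the NLO values come from convoluting the MCFM grids with the PDF:

```python
import numpy as np

# Hypothetical per-bin cross sections [pb]:
sigma_fewz_nnlo = np.array([10.5, 4.2, 1.1])  # FEWZ NNLO (incl. EW and W/Z radiation)
sigma_mcfm_nlo = np.array([10.0, 4.0, 1.0])   # MCFM NLO grid with MSTW2008NNLO

# Bin-by-bin k-factors from the ratio of the two predictions:
k_factors = sigma_fewz_nnlo / sigma_mcfm_nlo

# Applying the k-factors to an NLO prediction obtained with another NNLO PDF
# yields an approximate NNLO prediction for that PDF:
sigma_other_nlo = np.array([9.8, 3.9, 0.95])
sigma_other_nnlo = k_factors * sigma_other_nlo
```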
A measure of agreement can be obtained by calculating the χ2 for the measurement
and the different theory predictions. For each mass bin a χ2 is calculated, as well
as a correlated χ2 which contains the correlated uncertainties. Shifts of the correlated systematic uncertainties are introduced by minimizing the correlated χ2 . The
exact form of the χ2 -functions used can be found in [93] and, for the correlated χ2 ,
in appendix B of [95]. The number of degrees of freedom is defined by the binning.
Table 11.1 shows the resulting χ2 in each mass bin, the correlated χ2 and the sum
of all χ2 for different theory predictions.
mee [GeV]                       116-150  150-200  200-300  300-500  500-1500  Corr. χ2   Σ χ2
FEWZ                              10.02     9.18    16.56    21.21      4.67     12.87   74.51
MCFM NLO (no k-factor)
  MSTW2008                        14.15    21.85    26.89    23.05      5.80     16.04  107.78
MCFM NNLO (including k-factor)
  MSTW2008                        10.01     9.19    16.56    21.22      4.66     12.86   74.50
  HERAPDF1.5                       4.82     7.30    14.20    18.65      3.73      6.83   55.53
  CT10                            12.55     9.85    23.94    20.58      5.41     10.01   82.34
  NNPDF2.3                         8.02     8.25    18.86    18.06      4.46     13.19   70.84
  ABM11                            6.79     9.31    17.26    18.81      3.85      2.83   58.85

Table 11.1.: χ2 values obtained by comparing the measurement to different theory predictions. The results for the FEWZ prediction as well as for NLO and NNLO predictions
using different PDFs are given.
The comparison with the FEWZ prediction using MSTW2008NNLO shows, like
the visual comparison in the previous section, that there are some tensions between
the measured cross sections and the theory. Especially in the two bins from 200
GeV to 300 GeV and from 300 GeV to 500 GeV there is a large tension, with χ2 values
of 16.56 and 21.21.
Also shown is the NLO prediction obtained by using the default grid without applying k-factors, convoluted with the MSTW2008NLO PDF. The χ2 value is much
worse than for the FEWZ prediction; thus the NNLO corrections improve the
agreement between data and theory. Applying the k-factors leads, when using the
MSTW2008NNLO PDF, to basically (up to some differences in the last digit) the
same χ2 values as the FEWZ prediction. This is expected, since the k-factors
reweight the matrix elements of the hard scattering to the FEWZ prediction.
Using the NNLO predictions, the χ2 values for the various PDFs can be compared. All PDFs show a tension with the measurement, especially in the two bins from
200 GeV to 500 GeV. Apart from this tension, HERAPDF1.5 and ABM11, with
overall χ2 values of 55.53 and 58.85 respectively, show the best agreement with the measurement.
In the first two bins in particular, HERAPDF1.5 has the best agreement with the
measurement. They are followed by NNPDF2.3 and MSTW2008 with overall χ2 values
of 70.84 and 74.50. The worst agreement is found for the CT10 PDF with a χ2 of
82.34.
11.5. Impact of the measurement on parton
distribution functions
To test the sensitivity of the new measurement to PDFs, a QCD fit using the HERAFitter
framework is performed. The parametrization of the PDFs, chosen at a starting scale
of Q0^2 = 1.9 GeV^2, has the following form:

xf (x) = A x^B (1 − x)^C (1 + E x^2).    (11.1)
Parametrized are the valence distributions xuv and xdv , the gluon distribution xg,
and the u-type and d-type sea distributions xŪ and xD̄, where xŪ = xū and xD̄ = xd̄ + xs̄.
The normalization parameters Auv , Adv and Ag are constrained by the QCD sum rules,
such that counting and momentum conservation are preserved [1]. For the sea
distributions, the parameters BŪ and BD̄ are set equal: BŪ = BD̄ . Since the starting
scale Q0^2 is chosen to be below the charm mass mc , there is no charm distribution xc̄
present. The strange quark distribution is present, however, and it is assumed
that xs̄ = fs xD̄ at Q0^2. The fraction fs is chosen to be fs = 0.31, which is in
agreement with determinations of this fraction using fixed target DIS with neutrinos
[96, 97]. A recent publication from ATLAS using the W /Z data from the
2010 run indicates a value of fs = 0.5 [98], but since no sensitivity to the
parameter fs is expected, the default value is chosen. In addition, to ensure that the
sea distributions are equal at low x, AŪ = AD̄ (1 − fs ) is chosen. Besides Euv , all other
E parameters are fixed and set to 0. Motivated by the parametrization chosen by
the MSTW group [26], an additional term −A′ x^B′ (1 − x)^C′, where C′ is kept constant,
is added to the parametrization of the gluon PDF. This term allows for negative
gluon PDFs1 at low scales, which adds flexibility to the parametrization.
1
PDFs are not directly physical observables, so it is not unphysical to have negative values at the
parametrization scale.
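Equation (11.1) and the counting sum rule that fixes a valence normalization can be sketched numerically; the shape parameters below are illustrative, not the fitted values:

```python
import numpy as np

def xf(x, A, B, C, E=0.0):
    """Parametrization xf(x) = A x^B (1 - x)^C (1 + E x^2) of eq. (11.1)."""
    return A * x ** B * (1.0 - x) ** C * (1.0 + E * x ** 2)

def integrate(y, x):
    """Trapezoidal integration on a (possibly non-uniform) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative (not fitted) shape parameters for the u valence distribution:
B, C, E = 0.7, 4.0, 10.0
x = np.logspace(-6, 0, 20000)  # log-spaced grid down to x = 1e-6

# Counting sum rule: the integral of u_v(x) dx over [0, 1] equals 2, which
# fixes the normalization A_uv, with u_v(x) = xf(x) / x.
A_uv = 2.0 / integrate(x ** (B - 1.0) * (1.0 - x) ** C * (1.0 + E * x ** 2), x)

uv = xf(x, A_uv, B, C, E) / x
print(integrate(uv, x))  # approximately 2.0 (two u valence quarks)
```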
In the end there are 13 free parameters which have to be determined by the fit.
The strong coupling constant is chosen to be αs (MZ ) = 0.1176, which is the value
used for HERAPDF1.5. The heavy flavor scheme, which describes the change in the
contribution of heavy flavors with rising scale, is chosen to be the Thorne-Roberts
scheme [99, 100], which is used by the MSTW group.
For the fit the grids calculated with APPLgrid [42] are used and the derived k-factors are applied. The DGLAP evolution is done with the program QCDNUM
[101] at NNLO. The fit is based on the combination of the neutral and charged
current cross section measurements of both HERA experiments, H1 and ZEUS [102].
Added to these data is the two dimensional rapidity measurement from ATLAS
described in this thesis. The χ2 values of the fit can be found in table 11.2. The
χ2 /NDoF is very good but dominated by the very precise HERA data. The partial χ2
values of the different mass bins show, as for the comparison with the predictions
based on various PDFs, a tension especially in the two mass bins from 200 GeV to
500 GeV. The first two mass bins and the last mass bin are in good agreement.
This indicates that the observed deviations do not originate
from differences in the PDF. Table 11.3 shows the systematic shifts introduced by
the fit. It can be seen that the cross section is shifted down by around 2σ via the
identification and reconstruction uncertainties. The shift corresponds to an overall
renormalization of the data, since these systematic uncertainties are flat in invariant
mass and absolute rapidity. The choice of the shift seems to be arbitrary, since the
data is not shifted by the luminosity uncertainty. Thus this cannot be interpreted
as a tension between the reconstruction and identification scale factors and the data.
Figure 11.5 shows the valence plus sea PDFs and the gluon PDF, with and without
adding the new measurement, at the starting scale and at a scale of Q2 = 10000 GeV2 .
The valence plus sea PDFs U = uv + ū + c̄ and D = dv + d̄ + s̄ show the expected
behavior: a peak around x = 1/3, which gets smaller for higher Q2 , due to the
valence part, and a rising behavior at low x due to the sea part. It can be seen that
all changes for D and U when adding the new data are approximately within the
experimental uncertainties. The reduction of these uncertainties at higher x when
adding the Drell-Yan data is very small, since these PDFs are well constrained by the
HERA measurements. The gluon PDF shows at the starting scale the mentioned
negative values at low x. There is some small tension at higher x between the
central values when adding the Drell-Yan data, but this is a negligible effect, since
only experimental uncertainties are shown. The reduction of the uncertainties at
high x is again very small. Figure 11.6 shows in the same way the PDFs of the sea.
Here again all changes are approximately within the experimental uncertainties.
Some small tensions can be seen at higher x for the d-type quark distribution. As
expected from the Drell-Yan data, a reduction of the uncertainty at high x can be
seen. The reduction starts at around x ≈ 0.05, which is in good agreement with the
expected sensitivity (see section 2.3). The uncertainty of the u-type quarks is reduced
by about 25%, whereas the uncertainty of the d-type quarks is reduced by about 50%.
Dataset                                                        Partial χ2 / NData
HERA comb. ZEUS and H1 d2σNC(e−p)/dx/dQ2                          109.78 / 145
HERA comb. ZEUS and H1 d2σNC(e+p)/dx/dQ2                          304.20 / 337
HERA comb. ZEUS and H1 d2σCC(e−p)/dx/dQ2                           20.04 / 34
HERA comb. ZEUS and H1 d2σCC(e+p)/dx/dQ2                           29.36 / 34
d2σ(pp→Z/γ*(e+e−)+X)/dmee/d|yee|, 116 GeV < mee < 150 GeV           5.98 / 6
d2σ(pp→Z/γ*(e+e−)+X)/dmee/d|yee|, 150 GeV < mee < 200 GeV           8.01 / 6
d2σ(pp→Z/γ*(e+e−)+X)/dmee/d|yee|, 200 GeV < mee < 300 GeV          13.93 / 6
d2σ(pp→Z/γ*(e+e−)+X)/dmee/d|yee|, 300 GeV < mee < 500 GeV          21.34 / 6
d2σ(pp→Z/γ*(e+e−)+X)/dmee/d|yee|, 500 GeV < mee < 1500 GeV          3.39 / 6
Correlated χ2                                                      10.56
Total fit                                                          χ2 /NDoF = 526.57/567 = 0.929

Table 11.2.: Resulting total and partial χ2 values of the PDF fit. For the total fit the
number of degrees of freedom NDoF is given, and for the partial fits the number of fitted
data points NData .
Systematic uncertainty    Systematic shift
Identification            −2.24σ ± 0.71σ
Reconstruction            −1.92σ ± 0.82σ
Trigger                   −0.06σ ± 0.10σ
Isolation                 −0.02σ ± 0.10σ
Energy resolution         −0.05σ ± 0.90σ
Pileup                     0.62σ ± 0.67σ
Energy scale               0.57σ ± 0.41σ
Background                −0.23σ ± 0.73σ
Luminosity                −0.19σ ± 0.87σ

Table 11.3.: Shifts of the systematic uncertainties introduced by the fit and their uncertainties.
Figure 11.5.: xf (x,Q2 ) for the fitted PDFs with and without including the double
differential high mass Drell-Yan measurement. The PDFs are shown at the input scale of
Q2 = 1.9 GeV2 and at a scale of Q2 = 10000 GeV2 for the distributions of U , D and g.
The ratio of the fitted values is shown as a dashed line and the experimental uncertainties
are indicated by the band.
Figure 11.6.: xf (x,Q2 ) for the fitted PDFs with and without including the double
differential high mass Drell-Yan measurement. The PDFs are shown at the input scale of
Q2 = 1.9 GeV2 and at a scale of Q2 = 10000 GeV2 for the distributions of Ū and D̄.
The ratio of the fitted values is shown as a dashed line and the experimental uncertainties
are indicated by the band.
12. Summary and Outlook
A precise prediction of the processes at the Large Hadron Collider at CERN, where
protons collide at unprecedented center of mass energies, is essential for precise
tests of the Standard Model and for the search for new physics phenomena. The
knowledge of the parton distribution functions (PDFs) of the proton plays a key
role in the precise prediction of these processes.
In this thesis, the first measurement of the double differential cross section of the
process pp → Z/γ* + X → e+e− + X at a center-of-mass energy of √s = 8 TeV, as a
function of the invariant mass and rapidity of the e+e− pair, was presented. The
measurement covers an invariant mass range from me+e− = 116 GeV up to
me+e− = 1500 GeV. The analyzed data set was recorded by the ATLAS experiment in the
year 2012 and corresponds to an integrated luminosity of 20.3 fb⁻¹. The rapidity-
and mass-dependent cross section is expected to be sensitive to the PDFs at very
high values of the Bjorken-x scaling variable, in particular to the PDFs of the
antiquarks in the proton, since these are not well constrained at high values of x.
The expected number of e+e− pairs produced by Standard Model processes has been
carefully estimated using Monte Carlo simulations and data-driven methods. A main
part of this thesis addresses the further development and understanding of
data-driven methods to determine the fake background, which arises if one or both of
the e+/e− candidates are jets that are wrongly identified as electrons. Several
methods to determine this background were carried out and found to be in good
agreement. The cross section was measured single differentially as a function of the
invariant mass with a systematic uncertainty of 3.2%-7.3% and a statistical
uncertainty of 0.5%-16.9%. In the lowest mass bin of the double differential
measurement, the systematic uncertainties are in the range 3.0%-4.6% and the
statistical ones in the range 0.7%-1.6%. In the highest invariant mass bin, the
systematic uncertainties rise to 7.6%-13.1% and the statistical uncertainties to
5.7%-50%. The main contributions to the systematic uncertainty arise from the
uncertainty on the fake background, the uncertainty of the electron energy scale and
the measurement of the scale factors for the reconstruction and identification
efficiency. The measured cross section is compared to several theory predictions
using different calculations and PDFs. A small tension between data and theory is
seen, especially in the invariant mass range of 150 to 300 GeV. This tension was
already seen in an analysis performed at √s = 7 TeV [33]. It was additionally shown
that, despite this small tension between theory and data, the uncertainty on the
antiquark distributions at high x can be improved using the presented measurement.
The uncertainties used for the electron energy scale and the reconstruction and
identification scale factors were a preliminary estimate for the 2012 data set. These
uncertainties are expected to improve within the next few months by at least a factor
of two, which will significantly improve the precision of this measurement.
The main systematic uncertainty is then expected to come from the fake background
estimation. To determine whether the tension between theory and data is a physics
effect or originates from detector- or analysis-related effects, e.g. a wrong
electron calibration, a measurement using the decay Z/γ* → µ+µ− can be made and used
as an independent cross-check. Such a measurement is in progress; it will to a large
part have different sources of systematic uncertainties and nearly no background
contribution from fakes. Additionally, it will allow the e+e− and µ+µ− channels to be
combined to further reduce the uncertainties of the measurements.
In the year 2015 the LHC will provide collisions at a center-of-mass energy of
√s = 13 TeV. The gg luminosity will then increase, depending on the mass of the final
state, by a factor of about four, whereas the qq̄ luminosity will only increase by
approximately a factor of two [27]. Since tt̄ events are mainly produced via gluon
fusion, the cross section of this process will increase about twice as much [103] as
the cross section of the Drell-Yan process. This will double the tt̄ background to
about 30% in some ranges of the signal selection. To reduce the amount of tt̄
background, it might become necessary to impose additional requirements to reject it
(e.g. a small E_T^miss or a b-jet veto). Since a measurement at √s = 13 TeV would
take place at a higher Q², the x values covered by the mass range 116 to 1500 GeV
would be smaller by a factor of approximately two [27].
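At leading order, neglecting the transverse momenta of the partons, the momentum fractions probed by a Drell-Yan pair of mass m produced at rapidity y follow from x₁x₂s = m² and y = ½ ln(x₁/x₂). A minimal sketch of this standard relation (the function name is chosen for illustration):

```python
import math

def parton_x(mass_gev, rapidity, sqrt_s_gev):
    # Leading-order Drell-Yan kinematics: x1*x2*s = m^2 and
    # y = 0.5*ln(x1/x2) give x1,2 = (m/sqrt(s)) * exp(+-y).
    tau = mass_gev / sqrt_s_gev
    return tau * math.exp(rapidity), tau * math.exp(-rapidity)

# Central production (y = 0) at the upper mass limit of the measurement:
x1, x2 = parton_x(1500.0, 0.0, 8000.0)  # x1 = x2 = 0.1875 at 8 TeV
```

At 13 TeV the same invariant mass range corresponds to smaller x values, since τ = m/√s decreases.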
A. Appendix
Signature   mee [GeV]   MC run number   σBr [pb] (Powheg)   Nevt [k]   LMC [fb⁻¹]
Z → ee      60-         147806          1109.9              10000      9
Z → ee      120-180     129504          9.8460              500        51
Z → ee      180-250     129505          1.5710              100        64
Z → ee      250-400     129506          0.54920             100        182
Z → ee      400-600     129507          0.089660            100        1115
Z → ee      600-800     129508          0.015100            100        6623
Z → ee      800-1000    129509          0.003750            100        26667
Z → ee      1000-1250   129510          0.001293            100        77340
Z → ee      1250-1500   129511          0.0003577           100        279564
Z → ee      1500-1750   129512          0.0001123           100        890472
Z → ee      1750-2000   129513          0.00003838          100        2605524
Z → ee      2000-2250   129514          0.00001389          100        7199424
Z → ee      2250-2500   129515          0.000005226         100        19135094
Z → ee      2500-2750   129516          0.000002017         100        49578582
Z → ee      2750-3000   129517          0.0000007891        100        126726651
Z → ee      3000-       129518          0.0000005039        100        198452074
Table A.1.: Drell-Yan Monte Carlo samples used in the analysis. The first column
gives the mass range in which the Drell-Yan process was simulated, the second the
internal ATLAS run number. For each sample, the cross section times branching ratio
with which the Powheg generator produced the sample and the number of produced
events are given. In the last column, the integrated luminosity LMC = Nevt/(σBr) of
each sample is given.
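The last column follows directly from the generator cross section and the event count; since σBr is quoted in pb and LMC in fb⁻¹, a factor of 1000 enters in the conversion. A minimal sketch (the function name is illustrative):

```python
def sample_luminosity_fb(n_events, sigma_br_pb):
    # L_MC = N_evt / (sigma * Br). With sigma*Br in pb the result is
    # in pb^-1; dividing by 1000 converts it to fb^-1.
    return n_events / sigma_br_pb / 1000.0

# First and last rows of Table A.1:
sample_luminosity_fb(10_000_000, 1109.9)        # ~9 fb^-1
sample_luminosity_fb(100_000, 0.0000005039)     # ~198452074 fb^-1
```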
Signature   MC run number   σBr [pb] MC@NLO   σBr [pb] NNLO   F [%]    Nevt [k]   LMC [fb⁻¹]
tt̄ → ℓX    105200          208.13            252.89          54.26    15000      133
W t → X     105467          20.67             22.37           100.00   2000       97
Table A.2.: Top Monte Carlo samples used in the analysis. The first column gives the
signature, the second the internal ATLAS run number. For each sample, the cross
section times branching ratio with which the MC@NLO generator produced the sample is
given. Also given are σBr at NNLO, which was used for the normalization, the number
of produced events and the efficiency with which the sample was filtered. In the last
column, the integrated luminosity LMC = Nevt/(σBr) of each sample is given.
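The quoted LMC values of Table A.2 are numerically consistent with including the filter efficiency F together with the MC@NLO cross section, i.e. LMC = Nevt/(σBr·F); this reading is inferred from the numbers rather than stated explicitly. A sketch (function name illustrative):

```python
def sample_luminosity_fb(n_events, sigma_br_pb, filter_eff):
    # Equivalent luminosity of a filtered sample, assuming
    # L_MC = N_evt / (sigma * Br * F); sigma*Br in pb, result in fb^-1.
    return n_events / (sigma_br_pb * filter_eff) / 1000.0

sample_luminosity_fb(15_000_000, 208.13, 0.5426)  # ~133 fb^-1 (ttbar row)
sample_luminosity_fb(2_000_000, 20.67, 1.0)       # ~97 fb^-1 (Wt row)
```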
Signature    mee [GeV]   MC run number   σBr [pb] Herwig   σBr [pb] NLO   F [%]     Nevt [k]   LMC [fb⁻¹]
WW → ℓX      -           105985          32.501            56.829         38.21     2500       201
WW → eνeν    400-1000    180451          0.37892           0.66255        0.07      10         37701
WW → eνeν    1000-       180452          0.37895           0.66255        0.001     10         263887
ZZ → ℓX      -           105986          4.6914            7.3586         21.17     250        252
ZZ → ee      400-1000    180455          0.34574           0.54229        0.13      10         22249
ZZ → ee      1000-       180456          0.34574           0.54229        0.0029    10         997361
WZ → ℓX      -           105987          12.009            21.4778        30.55     1000       273
WZ → ee      400-1000    180453          0.46442           0.83060        0.3087    10         6975
WZ → ee      1000-       180454          0.46442           0.83060        0.0114    10         188879
Table A.3.: Diboson Monte Carlo samples used in the analysis. The first column gives
the signature, the second the mass range in which the diboson processes were
simulated, the third the internal ATLAS run number. For each sample, the cross
section times branching ratio with which the Herwig generator produced the sample is
given. Also given are σBr at NLO, which was used for the normalization, the number of
produced events and the efficiency with which the sample was filtered. In the last
column, the integrated luminosity LMC = Nevt/(σBr) of each sample is given.
Signature   MC run number   σBr [pb] Powheg   σBr [pb] NNLO   Nevt [k]   LMC [fb⁻¹]
W+ → eν     147800          6891.0            7073.8          23000      3.25
W− → eν     147803          4790.2            5016.2          17000      3.39
Table A.4.: W Monte Carlo samples used in the analysis. The first column gives the
signature, the second the internal ATLAS run number. For each sample, the cross
section times branching ratio with which the Powheg generator produced the sample is
given. Also given are σBr at NNLO, which was used for the normalization, and the
number of produced events. In the last column, the integrated luminosity
LMC = Nevt/(σBr) of each sample is given.
leading candidate      isolation [GeV] < 0.007 × pT [GeV] + 5 GeV
subleading candidate   isolation [GeV] < 0.022 × pT [GeV] + 6 GeV
Table A.5.: Linear functions used to calculate the cut values for the isolation.
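The sliding isolation requirement of Table A.5 can be expressed as a simple pT-dependent threshold. A sketch, with an illustrative function name:

```python
def passes_isolation(isolation_gev, pt_gev, leading):
    # pT-dependent isolation requirement from Table A.5:
    #   leading candidate:    isolation < 0.007 * pT + 5 GeV
    #   subleading candidate: isolation < 0.022 * pT + 6 GeV
    if leading:
        return isolation_gev < 0.007 * pt_gev + 5.0
    return isolation_gev < 0.022 * pt_gev + 6.0

passes_isolation(5.5, 100.0, leading=True)   # threshold 5.7 GeV -> passes
passes_isolation(8.0, 100.0, leading=False)  # threshold 8.2 GeV -> passes
```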
Figure A.1.: In this figure η = −ln(tan(θ/2)) is shown as a function of θ in the
range 0° to 180°.
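The relation plotted in Figure A.1 can be evaluated directly; a small sketch (function name illustrative):

```python
import math

def pseudorapidity(theta_deg):
    # eta = -ln(tan(theta/2)), with the polar angle theta in degrees.
    theta = math.radians(theta_deg)
    return -math.log(math.tan(theta / 2.0))
```

η vanishes at θ = 90° and diverges toward the beam axis; it is antisymmetric around 90°.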
Figure A.2.: Comparison of the fake factors F_i^FT calculated with the three different
methods (tag and probe method using the electron trigger, tag and probe method using jet
triggers and single object method using jet triggers). The upper row shows the fake factors
for the barrel region (|η| < 1.37). The corresponding fake factors for the endcap regions
(1.52 < |η| < 2.01, 2.01 < |η| < 2.37 and 2.37 < |η| < 2.47) are shown from the second to
the fourth row. The fake factors for the leading object are shown on the left side and for
the subleading object on the right side.
Figure A.3.: Comparison of the fake factors F_i^FTM calculated with the three different
methods (tag and probe method using the electron trigger, tag and probe method using jet
triggers and single object method using jet triggers). The upper row shows the fake factors
for the barrel region (|η| < 1.37). The corresponding fake factors for the endcap regions
(1.52 < |η| < 2.01, 2.01 < |η| < 2.37 and 2.37 < |η| < 2.47) are shown from the second to
the fourth row. The fake factors for the leading object are shown on the left side and for
the subleading object on the right side.
Figure A.4.: Distribution of N_TL, N_LT and N_LL for the fail track match selection. No
fake rates, real electron efficiencies or fake factors are applied.
Figure A.5.: Kinematic distributions of the event selection in data are shown. On the
upper left side the |Δη| distribution of the objects is shown vs. the invariant mass of
the objects. The same for |Δφ| is shown on the upper right side. In the lower row, an
η-φ map of the leading object is shown on the left side, and of the subleading object
on the right side.
Figure A.6.: On the left side the η_lead vs. η_sublead distribution of the event
selection in data is shown in different bins of invariant mass. The same is shown for
p_T on the right side.
Figure A.7.: Ratio of the default fake rate f_i and the variations. The upper row shows
on the left side the ratio for the leading fake rate in the barrel region (|η| < 1.37)
and on the right side the subleading fake rate in the barrel region. The corresponding
fake rates for the endcap regions (1.52 < |η| < 2.01, 2.01 < |η| < 2.37 and
2.37 < |η| < 2.47) are shown from the second to the fourth row. The fake rates were
calculated using the reverse tag and probe method with the electron trigger.
Figure A.8.: Ratio of the default fake rate f_i and the variations. The upper row shows
on the left side the ratio for the leading fake rate in the barrel region (|η| < 1.37)
and on the right side the subleading fake rate in the barrel region. The corresponding
fake rates for the endcap regions (1.52 < |η| < 2.01, 2.01 < |η| < 2.37 and
2.37 < |η| < 2.47) are shown from the second to the fourth row. The fake rates were
calculated using the reverse tag and probe method with jet triggers.
Figure A.9.: Average electron energy E of the selected pairs in the Drell-Yan
simulation as a function of the absolute rapidity |yee|.
Figure A.10.: Average electron |η| of the selected pairs in the Drell-Yan simulation
as a function of the absolute rapidity |yee|.
Figure A.11.: Average electron |η| of the selected pairs in the Drell-Yan simulation
as a function of the invariant mass of the electron pair.
mee^min - mee^max [GeV]    66-116   116-130  130-150  150-170  170-190  190-210  210-230
Data stat. [%]             0.0      0.5      0.6      0.8      1.0      1.2      1.5
Nbkg stat. [%]             0.0      0.1      0.1      0.2      0.3      0.4      0.5
CDY stat. [%]              0.1      0.5      0.4      0.5      0.6      0.8      0.9
Trigger stat. [%]          0.0      0.0      0.0      0.1      0.1      0.1      0.1
Iso. stat. [%]             0.0      0.0      0.0      0.1      0.1      0.1      0.2
Energy scale stat. [%]     0.1      0.4      0.2      0.2      0.1      0.2      0.2
Nbkg syst. [%]             0.1      0.6      0.9      1.2      1.5      1.7      1.9
Trigger syst. [%]          0.1      0.1      0.1      0.0      0.0      0.1      0.1
Reco. syst. [%]            1.3      1.3      1.3      1.3      1.3      1.3      1.3
Id. syst. [%]              1.9      1.8      1.8      1.8      1.8      1.8      1.8
Iso. syst. [%]             0.1      0.1      0.1      0.0      0.0      0.1      0.1
Trigger syst. [%]          0.0      0.0      0.0      0.0      0.0      0.0      0.0
Energy scale syst. [%]     1.0      2.6      2.0      1.8      1.7      1.4      1.6
Energy res. syst. [%]      0.0      0.4      0.2      0.5      0.3      0.1      1.1
MC modeling syst. [%]      0.5      0.0      0.8      0.4      0.3      0.7      0.3
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0      0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8      2.8

mee^min - mee^max [GeV]    230-250  250-300  300-400  400-500  500-700  700-1000  1000-1500
Data stat. [%]             1.7      1.5      1.7      3.0      4.0      7.7       16.9
Nbkg stat. [%]             0.7      0.6      0.7      1.2      1.4      2.3       3.9
CDY stat. [%]              1.1      0.5      0.6      0.6      0.4      0.3       0.2
Trigger stat. [%]          0.1      0.1      0.2      0.2      0.2      0.4       0.9
Iso. stat. [%]             0.2      0.2      0.3      0.5      0.7      1.4       3.4
Energy scale stat. [%]     0.1      0.2      0.2      0.1      0.2      0.2       0.2
Nbkg syst. [%]             2.1      2.3      2.9      3.8      3.4      3.1       2.5
Trigger syst. [%]          0.1      0.1      0.1      0.1      0.1      0.2       0.6
Reco. syst. [%]            1.3      1.3      1.3      1.3      1.3      1.3       1.3
Id. syst. [%]              1.8      1.8      1.8      1.8      1.8      1.8       1.8
Iso. syst. [%]             0.1      0.1      0.1      0.1      0.1      0.2       0.6
Trigger syst. [%]          0.1      0.1      0.1      0.2      0.2      0.3       0.3
Energy scale syst. [%]     1.7      1.7      2.2      2.8      2.5      3.2       3.7
Energy res. syst. [%]      1.1      0.2      0.1      0.2      0.1      0.1       0.1
MC modeling syst. [%]      1.2      0.4      0.2      0.0      0.1      0.1       0.1
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0       0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8       2.8
Table A.6.: Uncertainties of the one dimensional cross section measurement. The table
is separated into correlated systematic uncertainties, uncorrelated statistical uncertainties
and the luminosity uncertainty.
|yee^min| - |yee^max|      0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
Data stat. [%]             0.7      0.8      0.8      0.9      1.1      1.6
Nbkg stat. [%]             0.2      0.2      0.2      0.2      0.2      0.2
CDY stat. [%]              0.6      0.7      0.8      0.8      1.1      1.5
Trigger stat. [%]          0.0      0.0      0.0      0.0      0.0      0.0
Iso. stat. [%]             0.0      0.0      0.0      0.0      0.0      0.1
Energy scale stat. [%]     0.3      0.2      0.3      0.5      0.5      0.6
Nbkg syst. [%]             0.7      0.7      0.7      0.8      0.9      1.3
Trigger syst. [%]          0.1      0.1      0.1      0.1      0.1      0.0
Reco. syst. [%]            1.4      1.4      1.3      1.3      1.3      1.4
Id. syst. [%]              1.7      1.8      1.8      1.9      2.0      2.0
Iso. syst. [%]             0.1      0.1      0.1      0.1      0.1      0.0
Trigger syst. [%]          0.0      0.0      0.0      0.0      0.0      0.0
Energy scale syst. [%]     1.8      1.6      3.2      2.7      3.8      3.0
Energy res. syst. [%]      0.5      0.3      0.7      0.9      0.3      0.7
MC modeling syst. [%]      0.7      0.3      0.7      0.3      0.4      1.5
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8
Table A.7.: Uncertainties of the two dimensional cross section measurement in the bin
mee = 116 GeV to mee = 150 GeV. The table is separated into correlated systematic
uncertainties, uncorrelated statistical uncertainties and the luminosity uncertainty.
|yee^min| - |yee^max|      0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
Data stat. [%]             1.1      1.2      1.3      1.4      1.8      2.7
Nbkg stat. [%]             0.4      0.4      0.4      0.4      0.3      0.4
CDY stat. [%]              0.6      0.7      0.8      0.9      1.1      1.6
Trigger stat. [%]          0.1      0.1      0.1      0.1      0.1      0.1
Iso. stat. [%]             0.1      0.1      0.1      0.1      0.1      0.1
Energy scale stat. [%]     0.2      0.1      0.1      0.1      0.3      0.2
Nbkg syst. [%]             1.4      1.5      1.5      1.2      1.1      1.2
Trigger syst. [%]          0.0      0.0      0.1      0.0      0.0      0.0
Reco. syst. [%]            1.4      1.3      1.3      1.3      1.3      1.4
Id. syst. [%]              1.7      1.8      1.8      1.9      1.9      2.0
Iso. syst. [%]             0.0      0.0      0.1      0.0      0.0      0.0
Trigger syst. [%]          0.0      0.0      0.0      0.0      0.0      0.0
Energy scale syst. [%]     1.2      1.4      1.5      2.1      2.5      2.1
Energy res. syst. [%]      0.6      0.5      0.3      0.5      0.7      0.6
MC modeling syst. [%]      0.1      1.1      0.7      0.1      0.8      1.1
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8
Table A.8.: Uncertainties of the two dimensional cross section measurement in the bin
mee = 150 GeV to mee = 200 GeV. The table is separated into correlated systematic
uncertainties, uncorrelated statistical uncertainties and the luminosity uncertainty.
|yee^min| - |yee^max|      0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
Data stat. [%]             1.6      1.6      1.8      2.1      2.8      4.4
Nbkg stat. [%]             0.7      0.7      0.6      0.6      0.5      0.6
CDY stat. [%]              0.7      0.8      0.9      1.0      1.4      1.9
Trigger stat. [%]          0.1      0.1      0.1      0.1      0.1      0.2
Iso. stat. [%]             0.2      0.2      0.2      0.2      0.2      0.2
Energy scale stat. [%]     0.1      0.1      0.3      0.3      0.3      0.4
Nbkg syst. [%]             2.6      2.6      2.0      1.4      1.0      1.0
Trigger syst. [%]          0.1      0.1      0.1      0.1      0.1      0.1
Reco. syst. [%]            1.4      1.3      1.3      1.3      1.3      1.4
Id. syst. [%]              1.7      1.8      1.8      1.9      1.9      2.0
Iso. syst. [%]             0.1      0.1      0.1      0.1      0.1      0.1
Trigger syst. [%]          0.1      0.1      0.1      0.1      0.1      0.1
Energy scale syst. [%]     1.1      1.4      2.2      2.5      2.8      3.1
Energy res. syst. [%]      0.4      0.4      0.5      0.4      0.7      0.3
MC modeling syst. [%]      1.3      0.2      0.2      1.0      0.4      1.2
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8
Table A.9.: Uncertainties of the two dimensional cross section measurement in the bin
mee = 200 GeV to mee = 300 GeV. The table is separated into correlated systematic
uncertainties, uncorrelated statistical uncertainties and the luminosity uncertainty.
|yee^min| - |yee^max|      0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
Data stat. [%]             2.7      2.8      3.2      4.0      5.8      10.8
Nbkg stat. [%]             1.7      1.3      1.0      1.0      0.9      1.2
CDY stat. [%]              0.7      1.0      0.8      1.0      1.6      2.7
Trigger stat. [%]          0.2      0.2      0.2      0.2      0.2      0.2
Iso. stat. [%]             0.4      0.4      0.4      0.4      0.4      0.4
Energy scale stat. [%]     0.1      0.2      0.1      0.3      0.3      0.5
Nbkg syst. [%]             6.8      3.0      1.7      1.1      0.7      0.9
Trigger syst. [%]          0.1      0.1      0.1      0.1      0.1      0.1
Reco. syst. [%]            1.4      1.3      1.3      1.3      1.3      1.4
Id. syst. [%]              1.8      1.8      1.8      1.9      1.9      2.0
Iso. syst. [%]             0.1      0.1      0.1      0.1      0.1      0.1
Trigger syst. [%]          0.1      0.1      0.1      0.1      0.2      0.2
Energy scale syst. [%]     1.2      2.3      2.7      3.5      4.3      5.1
Energy res. syst. [%]      0.6      0.3      0.3      0.7      0.5      0.8
MC modeling syst. [%]      0.9      0.5      1.3      0.4      1.4      1.7
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8
Table A.10.: Uncertainties of the two dimensional cross section measurement in the
bin mee = 300 GeV to mee = 500 GeV. The table is separated into correlated systematic
uncertainties, uncorrelated statistical uncertainties and the luminosity uncertainty.
|yee^min| - |yee^max|      0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
Data stat. [%]             5.7      6.4      8.1      10.0     19.2     50.0
Nbkg stat. [%]             2.8      2.2      2.1      1.1      1.0      2.0
CDY stat. [%]              0.4      0.5      0.6      0.9      1.8      5.8
Trigger stat. [%]          0.3      0.3      0.3      0.2      0.2      0.2
Iso. stat. [%]             1.0      1.0      0.9      0.9      0.8      0.8
Energy scale stat. [%]     0.1      0.1      0.2      0.3      0.3      0.1
Nbkg syst. [%]             6.5      3.3      1.4      0.6      0.2      0.2
Trigger syst. [%]          0.2      0.2      0.1      0.1      0.1      0.1
Reco. syst. [%]            1.4      1.3      1.3      1.3      1.3      1.4
Id. syst. [%]              1.8      1.8      1.8      1.9      1.9      2.0
Iso. syst. [%]             0.2      0.2      0.1      0.1      0.1      0.1
Trigger syst. [%]          0.2      0.2      0.2      0.2      0.3      0.3
Energy scale syst. [%]     1.6      2.2      3.3      4.3      5.1      9.0
Energy res. syst. [%]      0.2      0.1      0.4      0.5      1.2      3.0
MC modeling syst. [%]      0.1      0.2      0.4      0.5      0.1      6.2
Theoretical syst. [%]      0.0      0.0      0.0      0.0      0.0      0.0
Lumi [%]                   2.8      2.8      2.8      2.8      2.8      2.8
Table A.11.: Uncertainties of the two dimensional cross section measurement in the bin
mee = 500 GeV to mee = 1500 GeV. The table is separated into correlated systematic
uncertainties, uncorrelated statistical uncertainties and the luminosity uncertainty.
mee^min - mee^max [GeV]    66-116   116-130  130-150  150-170  170-190
dσ/dmee [pb/GeV]           7.0      0.23     0.10     0.054    0.031
CDY                        0.61     0.61     0.64     0.65     0.66
Stat. err. [%]             0.0      0.5      0.6      0.8      1.0
Syst. err. [%]             2.6      3.6      3.3      3.2      3.3

mee^min - mee^max [GeV]    190-210  210-230  230-250  250-300  300-400
dσ/dmee [pb/GeV]           0.019    0.013    0.0089   0.0050   0.0018
CDY                        0.67     0.68     0.68     0.69     0.69
Stat. err. [%]             1.2      1.5      1.7      1.5      1.7
Syst. err. [%]             3.4      3.7      4.1      3.8      4.4

mee^min - mee^max [GeV]    400-500  500-700  700-1000  1000-1500
dσ/dmee [pb/GeV]           0.00056  0.00016  0.000030  0.0000038
CDY                        0.70     0.70     0.71      0.72
Stat. err. [%]             3.0      4.0      7.7       16.9
Syst. err. [%]             5.4      5.1      5.7       7.3
Table A.12.: The table lists the one dimensional cross section dσ/dmee, the
corresponding correction factor CDY and the statistical and total systematic
uncertainties. The total systematic uncertainty was calculated by adding the
uncertainties of all sources in quadrature. The statistical uncertainty includes only
the statistical uncertainty of the cross section measurement. Statistical
uncertainties of other sources are included in the systematic uncertainty.
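As stated in the caption, the total systematic uncertainty is the quadrature sum of the individual sources; combining it further with the statistical uncertainty works the same way. A sketch (function name illustrative):

```python
import math

def total_uncertainty(components):
    # Quadrature sum of independent uncertainty components (in %).
    return math.sqrt(sum(c * c for c in components))

# Purely for illustration, the highest mass bin (16.9% stat., 7.3% syst.)
# would give a combined uncertainty of about 18.4%:
total_uncertainty([16.9, 7.3])
```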
|yee^min| - |yee^max|            0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
d²σ/(dmee d|yee|) [pb/GeV]       0.082    0.082    0.078    0.073    0.050    0.022
CDY                              0.72     0.65     0.61     0.57     0.53     0.56
Stat. err. [%]                   0.7      0.8      0.8      0.9      1.1      1.6
Syst. err. [%]                   3.1      3.0      4.2      3.9      4.7      4.6
Table A.13.: The table lists the two dimensional cross section d²σ/(dmee d|yee|) in
the bin mee = 116 GeV to mee = 150 GeV, the corresponding correction factor CDY and
the statistical and total systematic uncertainties. The total systematic uncertainty
was calculated by adding the uncertainties of all sources in quadrature. The
statistical uncertainty includes only the statistical uncertainty of the cross
section measurement. Statistical uncertainties of other sources are included in the
systematic uncertainty.
|yee^min| - |yee^max|            0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
d²σ/(dmee d|yee|) [pb/GeV]       0.021    0.021    0.020    0.017    0.012    0.0050
CDY                              0.73     0.68     0.65     0.62     0.56     0.61
Stat. err. [%]                   1.1      1.2      1.3      1.4      1.8      2.7
Syst. err. [%]                   3.0      3.4      3.3      3.5      3.9      4.0
Table A.14.: The table lists the two dimensional cross section d²σ/(dmee d|yee|) in
the bin mee = 150 GeV to mee = 200 GeV, the corresponding correction factor CDY and
the statistical and total systematic uncertainties. The total systematic uncertainty
was calculated by adding the uncertainties of all sources in quadrature. The
statistical uncertainty includes only the statistical uncertainty of the cross
section measurement. Statistical uncertainties of other sources are included in the
systematic uncertainty.
|yee^min| - |yee^max|            0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
d²σ/(dmee d|yee|) [pb/GeV]       0.0047   0.0050   0.0046   0.0036   0.0025   0.00093
CDY                              0.73     0.71     0.67     0.65     0.59     0.63
Stat. err. [%]                   1.6      1.6      1.8      2.1      2.8      4.4
Syst. err. [%]                   4.0      3.8      3.9      4.0      4.1      4.7
Table A.15.: The table lists the two dimensional cross section d²σ/(dmee d|yee|) in
the bin mee = 200 GeV to mee = 300 GeV, the corresponding correction factor CDY and
the statistical and total systematic uncertainties. The total systematic uncertainty
was calculated by adding the uncertainties of all sources in quadrature. The
statistical uncertainty includes only the statistical uncertainty of the cross
section measurement. Statistical uncertainties of other sources are included in the
systematic uncertainty.
|yee^min| - |yee^max|            0.0-0.4  0.4-0.8  0.8-1.2  1.2-1.6  1.6-2.0  2.0-2.4
d²σ/(dmee d|yee|) [pb/GeV]       0.00067  0.00073  0.00069  0.00049  0.00029  0.000076
CDY                              0.73     0.73     0.69     0.66     0.59     0.64
Stat. err. [%]                   2.7      2.8      3.2      4.0      5.8      10.8
Syst. err. [%]                   7.6      4.7      4.4      4.7      5.5      6.7
Table A.16.: The table lists the two dimensional cross section d²σ/(dmee d|yee|) in
the bin mee = 300 GeV to mee = 500 GeV, the corresponding correction factor CDY and
the statistical and total systematic uncertainties. The total systematic uncertainty
was calculated by adding the uncertainties of all sources in quadrature. The
statistical uncertainty includes only the statistical uncertainty of the cross
section measurement. Statistical uncertainties of other sources are included in the
systematic uncertainty.
|yee^min| - |yee^max|            0.0-0.4   0.4-0.8   0.8-1.2   1.2-1.6   1.6-2.0    2.0-2.4
d²σ/(dmee d|yee|) [pb/GeV]       0.000032  0.000030  0.000022  0.000017  0.0000054  0.00000080
CDY                              0.74      0.73      0.70      0.66      0.60       0.61
Stat. err. [%]                   5.7       6.4       8.1       10.0      19.2       50.0
Syst. err. [%]                   7.6       5.2       4.9       5.3       6.2        13.1
Table A.17.: The table lists the two dimensional cross section d²σ/(dmee d|yee|) in
the bin mee = 500 GeV to mee = 1500 GeV, the corresponding correction factor CDY and
the statistical and total systematic uncertainties. The total systematic uncertainty
was calculated by adding the uncertainties of all sources in quadrature. The
statistical uncertainty includes only the statistical uncertainty of the cross
section measurement. Statistical uncertainties of other sources are included in the
systematic uncertainty.
B. Bibliography
[1] F. Halzen and A. Martin, Quarks and leptons: an introductory course in
modern particle physics, Wiley (1984).
[2] Super-Kamiokande Collaboration, Y. Fukuda et al., Evidence for oscillation of
atmospheric neutrinos, Phys.Rev.Lett. 81 (1998) 1562-1567,
arXiv:hep-ex/9807003 [hep-ex].
[3] Particle Data Group Collaboration, J. Beringer et al., Review of Particle
Physics (RPP), Phys.Rev. D86 (2012) 010001.
[4] M. E. Peskin and D. V. Schroeder, An Introduction To Quantum Field
Theory, Addison-Wesley Publishing Company (1995).
[5] S. Glashow, Partial Symmetries of Weak Interactions, Nucl.Phys. 22 (1961)
579–588.
[6] A. Salam, Weak and Electromagnetic Interactions, Conf.Proc. C680519
(1968) 367–377.
[7] S. Weinberg, A Model of Leptons, Phys.Rev.Lett. 19 (1967) 1264–1266.
[8] W. Heisenberg, Über den Bau der Atomkerne. I, Zeitschrift für Physik 77
(1932) no. 1-2. http://dx.doi.org/10.1007/BF01342433.
[9] D. Griffiths, Introduction to Elementary Particles.
Wiley, New York, 2008.
[10] P. W. Higgs, Broken symmetries, massless particles and gauge fields,
Phys.Lett. 12 (1964) 132–133.
[11] W. Hollik, Electroweak theory, J.Phys.Conf.Ser. 53 (2006) 7–43.
[12] ATLAS Collaboration, G. Aad et al., Observation of a new
particle in the search for the Standard Model Higgs boson with the
ATLAS detector at the LHC, Phys.Lett. B716 (2012) 1–29,
arXiv:1207.7214 [hep-ex].
[13] CMS Collaboration, S. Chatrchyan et al., Observation of a
new boson at a mass of 125 GeV with the CMS experiment at the LHC,
Phys.Lett. B716 (2012) 30–61, arXiv:1207.7235 [hep-ex].
[14] R. P. Feynman, The Theory of Positrons, Phys. Rev. 76 (1949).
[15] Orear, J. and Fermi, E., Nuclear Physics: A Course Given by Enrico Fermi
at the University of Chicago, University of Chicago Press (1950).
[16] CTEQ Collaboration, R. Brock et al., Handbook of perturbative
QCD; Version 1.1: September 1994, Rev. Mod. Phys. (1994).
[17] S. Drell and T.-M. Yan, Partons and their Applications at High-Energies,
Annals Phys. 66 (1971) 578.
[18] Heisenberg, W., Über den anschaulichen Inhalt der quantentheoretischen
Kinematik und Mechanik, Zeitschrift für Physik 43 (1927) no. 3-4,
172–198. http://dx.doi.org/10.1007/BF01397280.
[19] G. Altarelli and G. Parisi, Asymptotic Freedom in Parton Language,
Nucl.Phys. B126 (1977) 298.
[20] A. Vogt, S. Moch, and J. Vermaseren, The Three-loop splitting functions in
QCD: The Singlet case, Nucl.Phys. B691 (2004) 129–181,
arXiv:hep-ph/0404111 [hep-ph].
[21] H1 Collaboration, C. Adloff et al., Measurement of neutral and
charged current cross-sections in positron proton collisions at large
momentum transfer, Eur.Phys.J. C13 (2000) 609–639,
arXiv:hep-ex/9908059 [hep-ex].
[22] ZEUS Collaboration, S. Chekanov et al., Measurement of the
neutral current cross-section and F(2) structure function for deep
inelastic e + p scattering at HERA, Eur.Phys.J. C21 (2001) 443–471,
arXiv:hep-ex/0105090 [hep-ex].
[23] FNAL E866/NuSea Collaboration, R. Towell et al., Improved
measurement of the anti-d / anti-u asymmetry in the nucleon sea,
Phys.Rev. D64 (2001) 052002, arXiv:hep-ex/0103030 [hep-ex].
[24] ATLAS Collaboration, G. Aad et al., Measurement of inclusive
jet and dijet cross sections in proton-proton collisions at 7 TeV
centre-of-mass energy with the ATLAS detector, Eur.Phys.J. C71 (2011)
1512, arXiv:1009.5908 [hep-ex].
[25] J. Pumplin, D. Stump, R. Brock, D. Casey, J. Huston, et al., Uncertainties
of predictions from parton distribution functions. 2. The Hessian method,
Phys.Rev. D65 (2001) 014013, arXiv:hep-ph/0101032 [hep-ph].
[26] A. Martin, W. Stirling, R. Thorne, and G. Watt, Parton Distributions for
the LHC, Eur. Phys. J. C63 (2009) 189.
[27] W. Stirling, private communication.
[28] CDF Collaboration, A. Bodek, Measurement of dσ/dy
for high mass Drell-Yan e+e− pairs at CDF, Int.J.Mod.Phys. A16S1A
(2001) 262–264, arXiv:hep-ex/0009067 [hep-ex].
[29] D0 Collaboration, V. Abazov et al., Measurement of the shape
of the boson rapidity distribution for pp̄ → Z/γ* → e+e− + X
events produced at √s of 1.96 TeV, Phys.Rev. D76 (2007) 012003,
arXiv:hep-ex/0702025 [hep-ex].
[30] CDF Collaboration, T. A. Aaltonen et al., Measurement of
dσ/dy of Drell-Yan e+e− pairs in the Z Mass Region from pp̄ Collisions
at √s = 1.96 TeV, Phys.Lett. B692 (2010) 232–239, arXiv:0908.3914
[hep-ex].
[31] CMS Collaboration, S. Chatrchyan et al., Measurement of the
Drell-Yan Cross Section in pp Collisions at √s = 7 TeV, JHEP 1110
(2011) 007, arXiv:1108.0566 [hep-ex].
[32] Measurement of the differential and double-differential Drell-Yan cross
section in proton-proton collisions at 7 TeV , Tech. Rep.
CMS-PAS-SMP-13-003, CERN, Geneva, 2013.
[33] ATLAS Collaboration, G. Aad et al., Measurement of the
high-mass Drell–Yan differential cross-section in pp collisions at
√s = 7 TeV with the ATLAS detector, arXiv:1305.4192 [hep-ex].
[34] P. Golonka and Z. Was, PHOTOS Monte Carlo: A Precision tool for QED
corrections in Z and W decays, Eur.Phys.J. C45 (2006) 97–107,
arXiv:hep-ph/0506026 [hep-ph].
[35] M. H. Seymour and M. Marx, Monte Carlo Event Generators,
arXiv:1304.6677 [hep-ph].
[36] J. M. Campbell and R. Ellis, MCFM for the Tevatron and the LHC,
Nucl.Phys.Proc.Suppl. 205-206 (2010) 10–15, arXiv:1007.3492
[hep-ph].
[37] R. Gavin, Y. Li, and F. Petriello, FEWZ 2.0: A code for hadronic Z
production at next-to-next-to-leading order, arXiv:1011.3540 [hep-ph].
[38] J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, and T. Stelzer, MadGraph
5: Going Beyond, JHEP 1106 (2011) 128, arXiv:1106.0522 [hep-ph].
[39] Uta Klein et al., https://twiki.cern.ch/twiki/bin/viewauth/~AtlasProtected/Zprime2012Kfactors.
[40] A. D. Martin, R. G. Roberts, W. J. Stirling, and R. S. Thorne, Parton
distributions incorporating QED contributions, Eur. Phys. J. C39 (2005)
155–161, arXiv:hep-ph/0411040.
[41] S. Carrazza, Towards the determination of the photon parton distribution
function constrained by LHC data, arXiv:1307.1131 [hep-ph].
[42] T. Carli, D. Clements, A. Cooper-Sarkar, C. Gwenlan, G. P. Salam, et al., A
posteriori inclusion of parton density functions in NLO QCD final-state
calculations at hadron colliders: The APPLGRID Project, Eur.Phys.J.
C66 (2010) 503–524, arXiv:0911.2985 [hep-ph].
[43] R. D. Ball, V. Bertone, S. Carrazza, C. S. Deans, L. Del Debbio, et al.,
Parton distributions with LHC data, Nucl.Phys. B867 (2013) 244–289,
arXiv:1207.1303 [hep-ph].
[44] H1 and ZEUS Collaborations, F. Aaron et al., Combined
Measurement and QCD Analysis of the Inclusive e+- p Scattering Cross
Sections at HERA, JHEP 1001 (2010) 109, arXiv:0911.0884 [hep-ex].
[45] S. Alekhin, J. Blumlein, and S. Moch, Parton Distribution Functions and
Benchmark Cross Sections at NNLO, Phys.Rev. D86 (2012) 054009,
arXiv:1202.2281 [hep-ph].
[46] H.-L. Lai, M. Guzzi, J. Huston, Z. Li, P. M. Nadolsky, et al., New parton
distributions for collider physics, Phys.Rev. D82 (2010) 074024,
arXiv:1007.2241 [hep-ph].
[47] O. S. Brüning, P. Collier, P. Lebrun, S. Myers, R. Ostojic, J. Poole, and
P. Proudlock, LHC Design Report, CERN (2004).
[48] ATLAS Collaboration, G. Aad et al., The ATLAS Experiment
at the CERN Large Hadron Collider, JINST 3 (2008) S08003.
[49] ATLAS muon spectrometer: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1997.
[50] ATLAS magnet system: Technical Design Report, 1.
Technical Design Report ATLAS. CERN, Geneva, 1997.
[51] J. P. Badiou, J. Beltramelli, J. M. Maze, and J. Belorgey, ATLAS barrel
toroid: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1997.
Electronic version not available.
[52] ATLAS end-cap toroids: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1997.
Electronic version not available.
[53] S. Haywood, L. Rossi, R. Nickerson, and A. Romaniouk, ATLAS inner
detector: Technical Design Report, 2.
Technical Design Report ATLAS. CERN, Geneva, 1997.
[54] ATLAS central solenoid: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1997.
Electronic version not available.
[55] N. Wermes and G. Hallewel, ATLAS pixel detector: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1998.
[56] ATLAS liquid-argon calorimeter: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1996.
[57] ATLAS calorimeter performance: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1996.
[58] ATLAS tile calorimeter: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1996.
[59] ATLAS Collaboration, D. Casadei, Performance of the
ATLAS trigger system, J.Phys.Conf.Ser. 396 (2012) 012011.
[60] ATLAS level-1 trigger: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 1998.
[61] P. Jenni, M. Nessi, M. Nordberg, and K. Smith, ATLAS high-level trigger,
data-acquisition and controls: Technical Design Report.
Technical Design Report ATLAS. CERN, Geneva, 2003.
[62] I. Bird, K. Bos, N. Brook, D. Duellmann, C. Eck, et al., LHC computing
Grid. Technical design report.
[63] R. Jones, ATLAS computing and the GRID, Nucl.Instrum.Meth. A502
(2003) 372–375.
[64] R. Brun and F. Rademakers, ROOT: An object oriented data analysis
framework , Nucl.Instrum.Meth. A389 (1997) 81–86.
[65] V. Balagura, Notes on van der Meer Scan for Absolute Luminosity
Measurement, Nucl.Instrum.Meth. A654 (2011) 634–638,
arXiv:1103.1129 [physics.ins-det].
[66] ATLAS Collaboration, G. Aad et al., Improved luminosity
determination in pp collisions at √s = 7 TeV using the ATLAS
detector at the LHC, arXiv:1302.4393 [hep-ex].
[67] The Luminosity Group, Preliminary Luminosity Determination in pp
Collisions at √s = 8 TeV using the ATLAS Detector in 2012, Tech.
Rep. ATL-COM-LUM-2012-013, CERN, Geneva, Nov, 2012.
[68] GEANT4 Collaboration, S. Agostinelli et al., GEANT4: A Simulation
toolkit, Nucl.Instrum.Meth. A506 (2003) 250–303.
[69] ATLAS Collaboration, G. Aad et al., The ATLAS Simulation
Infrastructure, Eur.Phys.J. C70 (2010) 823–874, arXiv:1005.4568
[physics.ins-det].
[70] R. Fruhwirth, Application of Kalman filtering to track and vertex fitting,
Nucl.Instrum.Meth. A262 (1987) 444–450.
[71] ATLAS Collaboration, G. Aad et al., Expected Performance of
the ATLAS Experiment - Detector, Trigger and Physics,
arXiv:0901.0512 [hep-ex].
[72] Expected electron performance in the ATLAS experiment, Tech. Rep.
ATL-PHYS-PUB-2011-006, CERN, Geneva, Apr, 2011.
[73] S. Alioli, P. Nason, C. Oleari, and E. Re, A general framework for
implementing NLO calculations in shower Monte Carlo programs: the
POWHEG BOX, JHEP 1006 (2010) 043.
[74] T. Sjöstrand, S. Mrenna, and P. Z. Skands, A Brief Introduction to PYTHIA
8.1, Comput. Phys. Commun. 178 (2008) 852–867.
[75] S. Frixione and B. R. Webber, Matching NLO QCD computations and parton
shower simulations, JHEP 0206 (2002) 029.
[76] G. Corcella et al., HERWIG 6: an event generator for hadron emission
reactions with interfering gluons (including supersymmetric processes),
JHEP 0101 (2001) 010.
[77] M. Aliev et al., HATHOR – HAdronic Top and Heavy quarks crOss section
calculatoR, Comput. Phys. Commun. 182 (2011) 1034–1046.
[78] N. Kidonakis, Two-loop soft anomalous dimensions for single top quark
associated production with a W- or H-, Phys.Rev. D82 (2010) 054018,
arXiv:1005.4451 [hep-ph].
[79] J. Pumplin et al., New generation of parton distributions with uncertainties
from global QCD analysis, JHEP 07 (2002) 012.
[80] J. Butterworth, E. Dobson, U. Klein, B. Mellado Garcia, T. Nunnemann,
J. Qian, D. Rebuzzi, and R. Tanaka, Single Boson and Diboson
Production Cross Sections in pp Collisions at √s = 7 TeV, Tech. Rep.
ATL-COM-PHYS-2010-695, CERN, Geneva, Aug, 2010.
[81] Physics Analysis Tools, Pileup Reweighting, https://twiki.cern.ch/
twiki/bin/viewauth/~AtlasProtected/ExtendedPileupReweighting.
[82] Electron performance group, Energy scale and resolution,
https://twiki.cern.ch/twiki/bin/viewauth/~AtlasProtected/
EnergyScaleResolutionRecommendations.
[83] Electron performance group, Efficiency measurements,
https://twiki.cern.ch/twiki/bin/viewauth/~AtlasProtected/
EfficiencyMeasurements2012.
[84] Oleg Fedin, Victor Maleev, Evgeny Sedykh, Victor Solovyev,
https://indico.cern.ch/conferenceDisplay.py?confId=251134.
[85] ATLAS Luminosity Working Group, https://twiki.cern.ch/twiki/bin/view/~AtlasPublic/LuminosityPublicResults.
[86] ATLAS Collaboration, G. Aad et al., The ATLAS Experiment
at the CERN Large Hadron Collider, JINST 3 (2008) S08003.
[87] M. Cacciari, G. P. Salam, and G. Soyez, The Anti-k(t) jet clustering
algorithm, JHEP 0804 (2008) 063, arXiv:0802.1189 [hep-ph].
[88] Identification and Tagging of Double b-hadron jets with the ATLAS Detector ,
Tech. Rep. ATLAS-CONF-2012-100, CERN, Geneva, Jul, 2012.
[89] Search for high-mass dilepton resonances in 20 fb−1 of pp collisions at
√s = 8 TeV with the ATLAS experiment, Tech. Rep.
ATLAS-CONF-2013-017, CERN, Geneva, Mar, 2013.
[90] http://www.hep.ucl.ac.uk/atlas/atlantis/.
[91] B. Laforge and L. Schoeffel, Elements of statistical methods in high-energy
physics analyses, Nucl.Instrum.Meth. A394 (1997) 115–120.
[92] HERAFitter developers, https://www.herafitter.org/HERAFitter/.
[93] HERAFitter developers, HERAFitter manual.
[94] F. James and M. Roos, Minuit: A System for Function Minimization and
Analysis of the Parameter Errors and Correlations,
Comput.Phys.Commun. 10 (1975) 343–367.
[95] D. Stump, J. Pumplin, R. Brock, D. Casey, J. Huston, et al., Uncertainties
of predictions from parton distribution functions. 1. The Lagrange
multiplier method, Phys.Rev. D65 (2001) 014012,
arXiv:hep-ph/0101051 [hep-ph].
[96] NuTeV Collaboration, D. Mason et al., Measurement of the
Nucleon Strange-Antistrange Asymmetry at Next-to-Leading Order in
QCD from NuTeV Dimuon Data, Phys.Rev.Lett. 99 (2007) 192001.
[97] NuTeV Collaboration, M. Goncharov et al., Precise
measurement of dimuon production cross-sections in muon neutrino Fe
and muon anti-neutrino Fe deep inelastic scattering at the Tevatron,
Phys.Rev. D64 (2001) 112006, arXiv:hep-ex/0102049 [hep-ex].
[98] ATLAS Collaboration, G. Aad et al., Determination of the
strange quark density of the proton from ATLAS measurements of the
W → `ν and Z → `` cross sections, Phys.Rev.Lett. 109 (2012) 012001,
arXiv:1203.4051 [hep-ex].
[99] R. Thorne and R. Roberts, An Ordered analysis of heavy flavor production in
deep inelastic scattering, Phys.Rev. D57 (1998) 6871–6898,
arXiv:hep-ph/9709442 [hep-ph].
[100] R. Thorne, A Variable-flavor number scheme for NNLO, Phys.Rev. D73
(2006) 054019, arXiv:hep-ph/0601245 [hep-ph].
[101] M. Botje, QCDNUM: Fast QCD Evolution and Convolution,
Comput.Phys.Commun. 182 (2011) 490–532, arXiv:1005.1481
[hep-ph].
[102] H1 and ZEUS Collaborations, F. Aaron et al., Combined
Measurement and QCD Analysis of the Inclusive e+- p Scattering Cross
Sections at HERA, JHEP 1001 (2010) 109, arXiv:0911.0884 [hep-ex].
[103] https://twiki.cern.ch/twiki/bin/view/~Sandbox/
CrossSectionsCalculationTool.
C. Acknowledgements
I thank Prof. Dr. Stefan Tapprogge for giving me the opportunity to work on this
very interesting topic in my master's thesis, and for the excellent supervision and
support of this work. There are probably few research groups that offer comparable
opportunities in terms of supervision and trips to CERN, to conferences, and to
schools. In addition, I would like to thank him for the great support I have received
from him in the most varied of matters ever since my bachelor's thesis.
I further thank Prof. Dr. Achim Denig for acting as second referee.
A very special thanks naturally goes to Dr. Frank Ellinghaus for the brilliant,
intensive supervision and for his great patience. He never lost his composure when
I once again bombarded him with questions, some of them perhaps unnecessary.
I also want to thank him for his great support, without which it would not have
been possible for me to present my work within the collaboration.
Further thanks go to the whole working group for the pleasant working atmosphere,
and to all the people who were available at any time for smaller as well as larger
questions and discussions.
An especially big thank you also goes to my parents and my sister, who always
believed in me and supported me in every situation, and without whom I would
never have come this far. An additional thanks goes to my girlfriend Elena, who
was always very patient during the past year when, after all, a bit of work had to
be done on the weekend.