Systematic subtraction of initial-state collinear singularities

I Systematic subtraction of initial-state collinear singularities
I Event Generators and Cross Section Integrators
I Weighted and unweighted events
I Process- and observable-independent methods for the cancellation of
IR singularities at the NLO: subtraction and slicing
I Selected results available at NNLO
I Quark densities grow at small x and decrease at large x
with increasing Q2
Recall the master equations for PDF determination
σdata = f σth   =⇒   f = σdata/σth
σdata = (f1 f2) σth   =⇒   (f1 f2) = σdata/σth
PDFs are unphysical: they have a perturbative accuracy which is that
of σth – this is why we talk about LO, NLO, ... PDFs
This is also why we need perturbative computations ⇐⇒ CSIs
The above implies a (subtraction) scheme dependence. Popular ones
are MS and DIS – different schemes are related by a convolution with a
process-independent function
A given σdata will receive contributions from many different parton
combinations (Z hadroproduction: uū → Z, dd̄ → Z, ug → Zu, ...).
The problem: disentangle these contributions
Consider F2 in Neutral Current DIS
F₂NC(x, Q²) = x Σf ef² { (qf + q̄f) + αS [ Cf ⊗ (qf + q̄f) + Cg ⊗ g ] }

αS = αS(Q²) ,   f = flavours
The coefficient functions Ci have a perturbative expansion, and:
∂F₂(sing)/∂log Q² = αS [ Pqq ⊗ F₂(sing) + 2NF Pqg ⊗ g ] + O(αS²)
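The ⊗ appearing in F2 and in its evolution equation is the standard convolution, (f ⊗ g)(x) = ∫_x^1 (dz/z) f(z) g(x/z). A minimal numerical sketch (toy integrands and a simple midpoint rule on a log grid; not an evolution code):

```python
import math

def convolve(f, g, x, n=2000):
    """Convolution (f ⊗ g)(x) = ∫_x^1 dz/z f(z) g(x/z),
    evaluated with a midpoint rule on a grid uniform in log z."""
    lo = math.log(x)
    h = (0.0 - lo) / n
    total = 0.0
    for i in range(n):
        z = math.exp(lo + (i + 0.5) * h)
        total += f(z) * g(x / z) * h   # dz/z = d(log z)
    return total

# Toy check: f = g = 1  =>  (f ⊗ g)(x) = log(1/x)
approx = convolve(lambda z: 1.0, lambda y: 1.0, 0.1)
```

With realistic kernels and PDFs the same structure applies; only the integrands change.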
I DIS is the backbone of PDF determination; singlet easy
I At O(αS ), also determines the gluon – but small-x only
I Current determinations also use low-Q² DY data (→ ū − d̄), low-energy
µ+µ− data (→ s), y(W± → l±) @Tevatron (→ u/d slope), jet data
@Tevatron (→ large-x gluon)
What is the uncertainty affecting PDF determinations?
If we measure a scalar quantity x (e.g. σtot), the error we quote is the width of a Gaussian
which represents the probability distribution P(x) of such a quantity

⟨x⟩ = ∫ dx x P(x) ,   Err²x = ∫ dx (x − ⟨x⟩)² P(x)
In the case of PDFs, a scalar quantity is replaced by a set of functions =⇒ the
probability distribution is replaced by a probability functional
⟨σ⟩ = ∫ dμ[f] σ[f] P[f] ,   Err²σ = ∫ dμ[f] (σ[f] − ⟨σ[f]⟩)² P[f]
This approach has been followed by Giele, Kosower, Keller
I The problem is transformed into one with a finite number of dofs by
assuming a “prior”, generated with MC methods
I Convergence by comparison to data
I Very computing intensive; difficult to prove independence from priors
Not used in practice
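For concreteness, the functional averages above can be approximated by sampling: replace ∫ dμ[f] ... P[f] by an average over a finite ensemble of sampled PDFs. A toy sketch (the "PDFs" here are reduced to single normalisation factors, purely illustrative; this is not the Giele-Kosower-Keller implementation):

```python
import math, random

def pdf_error(sigma_of, ensemble):
    """Monte Carlo version of the functional averages: approximate
    the integrals over dμ[f] P[f] by an average over sampled PDFs."""
    values = [sigma_of(f) for f in ensemble]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

# Toy ensemble: each "PDF" is a normalisation factor scattered
# around 1 with a 10% spread (purely illustrative)
random.seed(0)
ensemble = [1.0 + 0.1 * random.gauss(0.0, 1.0) for _ in range(10000)]
mean, err = pdf_error(lambda f: 100.0 * f, ensemble)  # σ[f] linear in f
```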
CTEQ and MRST use the easier Hessian method: find the 1σ errors on the
parameters of the fit, and consider PDFs obtained by changing the
parameters by ±1σ
The problem has the dimensionality d of the parameter space: 15 (CTEQ) or 20 (MRST)
χ² = Σ_{i,j=1}^N (Di − Ti(a)) σ⁻¹ij (Dj − Tj(a)) = χ₀² + Δχ² + . . .

=⇒ Δχ² = Σ_{k,l=1}^d (ak − a⁰k) Hkl (al − a⁰l)

One then introduces T² > Δχ², diagonalises Hkl, and defines 2d PDF sets
S±k, which correspond to displacements ±√T along the direction of the kth
eigenvector. Then

(ΔO)² = (1/4) Σ_{k=1}^d [ O(S+k) − O(S−k) ]²

T is called Tolerance
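The Hessian master formula (ΔO)² = (1/4) Σ_k [O(S_k+) − O(S_k−)]² can be sketched in a few lines (hypothetical observable values, d = 2 for brevity):

```python
import math

def hessian_error(O_plus, O_minus):
    """Hessian master formula (ΔO)² = (1/4) Σ_k [O(S_k+) − O(S_k−)]²:
    the observable is recomputed with the 2d displaced PDF sets S_k±."""
    return 0.5 * math.sqrt(sum((op - om) ** 2
                               for op, om in zip(O_plus, O_minus)))

# Hypothetical observable values for d = 2 eigenvector directions
O_plus  = [10.3, 10.4]   # O(S_k+)
O_minus = [ 9.7,  9.6]   # O(S_k−)
dO = hessian_error(O_plus, O_minus)
```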
(Figures: fractional Hessian uncertainty of LHC luminosity functions, G−G and Q−Q̄ → W±, as a function of M (GeV), PDFs: CTEQ6; ratio of xg(x,Q²)/xg(x,Q²,MRST2001C) at Q² = 100 GeV² as a function of x, with the CTEQ6M Hessian uncertainty band)
I Inconsistencies between the data sets imply underestimation of errors
I This also implies that the Δχ² = 1 rule cannot be imposed: T is
arbitrary (CTEQ and MRST defaults differ)
I Theoretical uncertainties, bias from parametrizations not included
Summary on PDFs with errors
I CTEQ and MRST do have NLO PDFs with errors
I A “central” PDF set has 30 or 40 companion sets
I The central set is used to compute the central value of the
observable chosen
I This computation has to be repeated 30 or 40 times, with the
companion sets, to determine the uncertainty on the prediction
due to uncertainties on PDFs
Keep in mind that
I There is a hidden dependence on Tolerance
I The method would suggest parallelization (parton cross sections don’t
change), but this is usually not done =⇒ computing intensive
It may take a while before completing all the (N)NLO
computations needed for phenomenology...
But some issues cannot wait. The typical example is indeed that of many-jet
final states (a serious problem even for SM studies: W + 4j is a huge
background for tt̄ production)
Key observation: we are able to compute in a highly-automated manner
real-emission (ie tree level) amplitudes up to a very large number of
external legs (8 − 10)
=⇒ Keep only real contributions in fixed-order
computations
Cannot work! KLN theorem tells us that the result will diverge
Solution: avoid infrared divergences by cutting them out by hand
In hadronic collisions, this is typically equivalent to imposing
I ΔRij ≡ √( (ϕi − ϕj)² + (ηi − ηj)² ) ≥ Rcut
=⇒ will avoid final-state collinear divergences
I pTi ≥ pTcut
=⇒ will avoid initial-state collinear and soft divergences
The cut parameters Rcut and pT cut are arbitrary, and in general physical
observables will depend on them
On the other hand, it is sufficient to find values of the cut parameters
which do not affect the observables we aim to study. If such values do not
exist, this approach is simply bound to fail
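A sketch of such generation cuts, assuming partons are given as (pT, η, ϕ) triples; the threshold values are illustrative, not defaults of any particular code:

```python
import math

def passes_cuts(partons, r_cut=0.4, pt_cut=20.0):
    """Generation cuts of a tree-level computation: every parton must
    have pT ≥ pt_cut, and every pair must satisfy ΔR ≥ r_cut."""
    if any(pt < pt_cut for pt, _, _ in partons):
        return False                       # soft / initial-state collinear
    for i in range(len(partons)):
        for j in range(i + 1, len(partons)):
            dphi = abs(partons[i][2] - partons[j][2])
            dphi = min(dphi, 2.0 * math.pi - dphi)   # wrap azimuth
            deta = partons[i][1] - partons[j][1]
            if math.hypot(dphi, deta) < r_cut:
                return False               # final-state collinear
    return True
```

In practice one would vary r_cut and pt_cut and check the stability of the observables, as discussed above.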
CSIs implementing the solution above are known as Matrix Element
Generators – and one should actually add Tree-Level to their names
Lacking virtual corrections, MEGs are basically leading-order computations
for many-leg processes. As such, the scale dependence is that typical of a
LO result (ie very large)
Side effect: the matrix elements are bounded (the upper bound is likely to be
obtained by computing the MEs at ΔRij = Rcut and pT i = pT cut ), and
thus unweighted events can be obtained
Clearly, these unweighted events are biased by Rcut and pT cut , but
according to the strategy outlined above one should use them in such a way
that the bias will not affect the physics
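Unweighting by hit-or-miss rests only on the boundedness of the weight: a point with weight w is kept with probability w/w_max. A toy sketch (the phase-space generator and the matrix element are stand-in callables, not a real generator):

```python
import random

def unweight(sample_point, weight, w_max, n_events, seed=0):
    """Hit-or-miss unweighting: a phase-space point with weight w is kept
    with probability w / w_max, which only requires the bound w <= w_max.
    `sample_point` and `weight` stand in for the phase-space generator
    and the (cut) matrix element."""
    rng = random.Random(seed)
    events = []
    while len(events) < n_events:
        x = sample_point(rng)
        if rng.random() * w_max < weight(x):
            events.append(x)   # kept events all carry unit weight
    return events

# Toy weight w(x) = x on [0, 1], bounded by w_max = 1
evts = unweight(lambda rng: rng.random(), lambda x: x, 1.0, 20000)
```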
There are two classes of MEGs
Matrix element generators for specific processes
Feature a pre-defined list of partonic processes, for which phase-space
sampling is optimized
Here’s a non-exahustive list of codes
AcerMC
ALPGEN
GR@PPA
MadCUP
VECBOS
There are substantial differences in the number of processes simulated, and
in the techniques used to compute the matrix elements!
Phase-space sampling typically optimized process-by-process, to improve
unweighting efficiency
Matrix element generators for arbitrary processes
Compute the matrix elements for any process given in input by the user
(sort of automated matrix element generator authors...)
AMEGIC++
CompHEP
Grace
MadEvent/MadGraph
On average, the largest number of external legs is smaller than that
obtained with MEGs for specific processes. Beyond-SM capabilities are
being added to these codes
Phase-space sampling (where present) cannot be optimized
process-by-process. Adaptive importance sampling techniques
are used instead
Good agreement among codes. Capabilities will increase with computer power
Summary on Matrix Element Generators
I Parton level and LO
I Have unweighted (biased) events
I Up to 8-particle final states
I Physical results must be proved independent of unphysical cutoffs used
in the computations
Must be used to
I Improve predictions for shapes of many-jet observables
These codes can be included into Event Generators (see later)
MEGs: tails, large multiplicities
NLO/NNLO: rates, tails, small multiplicities

(Schematic: incoming hadrons → PDFs → hard subprocess → hadronization → outgoing hadrons; the hard subprocess is computed either at fixed order (exact in αS) or resummed (dominant contributions), and the two can be matched)
Want to have more details?
Les Houches Guidebook to Monte Carlo Generators
for Hadron Collider Physics
Editors: M.A. Dobbs1 , S. Frixione2 , E. Laenen3 , K. Tollefson4
Contributing Authors: H. Baer5 , E. Boos6 , B. Cox7 , M.A. Dobbs1 , R. Engel8 , S. Frixione2 , W. Giele9 ,
J. Huston4 , S. Ilyin6 , B. Kersevan10 , F. Krauss11 , Y. Kurihara12 , E. Laenen3 , L. Lönnblad13 ,
F. Maltoni14 , M. Mangano15 , S. Odaka12 , P. Richardson16 , A. Ryd17 , T. Sjöstrand13 , P. Skands13 ,
Z. Was18 , B.R. Webber19 , D. Zeppenfeld20
1 Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
2 INFN, Sezione di Genova, Via Dodecaneso 33, 16146 Genova, Italy
3 NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam, The Netherlands
4 Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1116, USA
5 Department of Physics, Florida State University, 511 Keen Building, Tallahassee, FL 32306-4350, USA
6 Moscow State University, Moscow, Russia
7 Dept of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL, U.K.
8 Institut für Kernphysik, Forschungszentrum Karlsruhe, Postfach 3640, D-76021 Karlsruhe, Germany
9 Fermi National Accelerator Laboratory, Batavia, IL 60510-500, USA
10 Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia; Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI-1000 Ljubljana, Slovenia
11 Institut für Theoretische Physik, TU Dresden, 01062 Dresden, Germany
12 KEK, Oho 1-1, Tsukuba, Ibaraki 305-0801, Japan
13 Department of Theoretical Physics, Lund University, S-223 62 Lund, Sweden
14 Centro Studi e Ricerche “Enrico Fermi”, via Panisperna, 89/A - 00184 Rome, Italy
15 CERN, CH-1211 Geneva 23, Switzerland
16 Institute for Particle Physics Phenomenology, University of Durham, DH1 3LE, U.K.
17 Caltech, 1200 E. California Bl., Pasadena CA 91125, USA
18 Institute of Nuclear Physics PAS, 31-342 Krakow, ul. Radzikowskiego 152, Poland
19 Cavendish Laboratory, Madingley Road, Cambridge CB3 0HE, U.K.
20 Department of Physics, University of Wisconsin, Madison, WI 53706, USA
Abstract
Recently the collider physics community has seen significant advances in the
formalisms and implementations of event generators. This review is a primer
of the methods commonly used for the simulation of high energy physics
events at particle colliders. We provide brief descriptions, references, and links
to the specific computer codes which implement the methods. The aim is to
provide an overview of the available tools, allowing the reader to ascertain
which tool is best for a particular application, but also making clear the limitations of each tool.
Compiled by the Working Group on Quantum ChromoDynamics and the Standard Model for the
Workshop “Physics at TeV Colliders”, Les Houches, France, May 2003.
March 4, 2004
Les Houches MC guidebook
(hep-ph/0403045). Here you
can find precise references to
and explanations about MEGs
(but not only about MEGs)
Problems with fixed-order CSIs
Compute pT (W ) at O(αS ) and O(αS2 )
Not exactly what you expect to see when LHC
is turned on...
We know from KLN theorem that the problem would be less severe (or absent) by considering the integrated cross section in
0 ≤ pT (W ) ≤ a few GeV
Still, it is very instructive to see what happens to the perturbative expansion
dσ/dpT(W) = Σn cn αSⁿ ,   cn = Σ_{m=0}^{2n} dn,m logᵐ( mW²/pT²(W) )
I Up to two logs per power of αS
I Logarithms grow large when pT (W ) → 0, and spoil the perturbative expansion
I This is the typical situation in a multi-scale problem: here pT(W) ≪ mW
Since all terms in the perturbative expansion are equally important, it
would seem that perturbation theory is useless in this case
Let’s understand where the large logs come from. Start at O(αS ): we have
pT (W ) = pT (g). For small pT (W ), the gluon is thus soft and/or collinear
dσ/dpT(W) = ∫ dΦWg δ(pT(W) − pT) M(qq̄ → Wg)

−→ (pT(W) → 0)
M(qq̄ → W) (αS/2π) ∫₀ (dpT/pT) δ(pT(W) − pT) ∫₀^{1 − pT/mW} dz CF (1 + z²)/(1 − z)

= (αS/2π) M(qq̄ → W) (1/pT(W)) [ A1 log( mW²/pT²(W) ) + B1 + O( pT(W)/mW ) ]

with
A1 = CF ,   B1 = −(3/2) CF
2
These values are entirely determined by the AP kernel Pqq .
Hence, you know the kernels, you know the logs
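One can check numerically that A1 and B1 are indeed encoded in Pqq: cutting the z integral at 1 − ε gives ∫₀^{1−ε} dz CF (1 + z²)/(1 − z) = CF [2 log(1/ε) − 3/2] + O(ε), whose log coefficient is A1 = CF and whose constant is B1 = −(3/2) CF. A quick verification sketch:

```python
import math

CF = 4.0 / 3.0

def pqq_integral(eps, n=200000):
    """Integral of C_F (1 + z^2)/(1 - z) over 0 <= z <= 1 - eps, by a
    midpoint rule. For small eps it approaches C_F [2 log(1/eps) - 3/2]:
    the log coefficient is A1 = C_F, the constant is B1 = -(3/2) C_F."""
    h = (1.0 - eps) / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        total += CF * (1.0 + z * z) / (1.0 - z)
    return total * h

eps = 1e-4
predicted = CF * (2.0 * math.log(1.0 / eps) - 1.5)   # A1 log(1/eps^2) + B1
```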
I Large logs result from soft and collinear emissions
I Soft and collinear emissions are universal
Strategy: include all perturbative orders in the computation, keeping only
the dominant logs
One can prove that this is doable in practice, provided that both dynamics
and kinematics factorize (ie can be expressed as products of simple building
blocks)
More often than not, factorized kinematics is obvious when introducing a
conjugate variable, e.g. in the case of pT (W )
δ p~T (W ) −
X
i
p~T i
!
=
=
"
!#
Z
d~b
exp i~b ·
2
(2π)
Z
Y
d~b
~b · p~T (W )
~b · p~T i
exp
i
exp
−i
(2π)2
i
p~T (W ) −
X
i
p~T i
Using the conjugate variable, the large logs change form

L = log( mW²/pT²(W) )   −→   L̃ = log( b² mW² )
For the record, here's the form that results upon including all orders

dσ/dpT²(W) ∝ M(qq̄ → W) ∫₀^∞ db b J₀(b pT(W)) e^G

G = − ∫_{b₀²/b²}^{mW²} (dμ²/μ²) [ A(αS(μ²)) log( mW²/μ² ) + B(αS(μ²)) ]
  = L̃ g₁(αS L̃) + Σ_{n=2}^∞ αSⁿ⁻² gn(αS L̃)

A(αS) = Σ_{n=1}^∞ (αS/π)ⁿ An ,   B(αS) = Σ_{n=1}^∞ (αS/π)ⁿ Bn
The exponent G tends to −∞, and the integrand is therefore damped (Sudakov
suppression): the cross section is finite at pT(W) = 0
A resummation is therefore just a re-organization of the perturbative series.
One starts from the fixed-order expression
σF O = f00 + αS (c12 L2 + c11 L + f10 )
+ αS2 (c24 L4 + c23 L3 + c22 L2 + . . . + f20 ) + . . .
When the logarithm L grows large, one rewrites this as

σres = exp[ L g1(αS L) + g2(αS L) + . . . ] ( f′00 + . . . )
The resummed expression can be systematically improved
g1 (ie A1 ) Leading Logs
g2 (ie A2 , B1 ) Next-to-Leading Logs
...
precisely as the perturbative expansion in αS can be improved by computing
the next contribution in αS
I Rule of thumb:
Soft and collinear emissions −→ double logs
Soft, large angle or collinear, hard emissions −→ single logs
I The precise nature of the logs depends however on the observable
studied
I Coefficients Ai and Bi are obtained from perturbative computations;
typically Nk LL ↔ Nk+1 LO (DIS, Drell-Yan)
I It is best to count the logs at the exponent. In such a way, the
condition for the validity of the expansion is αS L < 1
I If the exponent is expanded, gi ’s mix, and the condition for the validity
of the expansion is αS L2 < 1
I State of the art: NLL, some NNLL results available
Example: logs in QQ̄ production
1) Observable-dependent logs: depend strictly on the kinematics of the final state
(including cuts). Occur in specific regions of the phase space

Q = pT(Q)/mQ ,   pT(Q) ≫ mQ
Q = pT(QQ̄)/mQ ,   pT(QQ̄) ≃ 0
Q = 1 − Δφ(QQ̄)/π ,   Δφ(QQ̄) ≃ π

2) Observable-independent logs
Threshold logs: occur when the c.m. energy is small

Q = 1 − 4mQ²/ŝ ,   ŝ ≃ 4mQ²

Small-x logs: occur when the c.m. energy is large

Q = mQ²/ŝ ,   ŝ ≫ mQ²
pp → H (Bozzi, Catani, de Florian, Grazzini)
(Plot: pT(H) spectrum, comparing NNLL+NLO, NLL+LO, NLO, and LO curves)
Resums log mH /pT (H); matched to fixed-order
Integration over pT (H) returns the NNLO rate
Resummation effects visible up to large values of pT (H): this is partly
due to the constraint on the total rate
Roughly speaking, fixed-order and resummed computations apply to
complementary kinematic regions
I Large-pT tails =⇒ fixed-order
I Peaks =⇒ resummation
One can exploit the good features of each approach, and construct a
matched result
σmatched = σFO + ( σres − σFO|_{L→∞} ) G(Q)

L = log Q ,   G(x) = 1 + Σi ai xⁱ
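The matching prescription σmatched = σFO + (σres − σFO|_{L→∞}) G(Q) is itself a one-liner once the three ingredients are available; a sketch with the ingredients passed as callables (all names illustrative):

```python
def matched(sigma_fo, sigma_res, sigma_fo_logs, G, Q):
    """sigma_matched = sigma_FO + (sigma_res - sigma_FO|_{L->inf}) G(Q):
    keep the fixed-order result everywhere, add the resummation where the
    log L = log Q is large, and subtract the logs counted twice. All
    inputs are callables of Q; the names are illustrative."""
    return sigma_fo(Q) + (sigma_res(Q) - sigma_fo_logs(Q)) * G(Q)
```

If σres and σFO|_{L→∞} coincide (no residual logs) the fixed-order result is recovered, as it must be.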
Problems:
I Resummed computations are observable-specific
I They are also lengthy, tedious, and most of all error-prone
You are now in a better position to appreciate the improvement
A matched (NLL+NLO −→ FONLL) computation has been used: a
20% improvement (Cacciari, Greco, Nason)
A specific treatment of the fragmentation function is necessary for this
spectrum: a 45% improvement (Cacciari, Nason)
Tests a very large number of ideas used in pQCD computations
(factorization, fragmentation, fixed-order computations, resummation, matching)
There are alternative solutions to the problem posed by analytical
resummation
I (Semi-)numerical approach (CAESAR)
I Parton Shower Monte Carlos
CAESAR (Banfi, Salam, Zanderighi) has been developed in the past few years,
reproduces known analytical results, and computes results not available at
the analytical level
PSMCs, which are at the core of Event Generators, have enjoyed and will
keep on enjoying enormous success. They will keep us busy for the rest of
these lectures
The CAESAR approach
Given the observable O to be resummed, define a “simple” observable Os
which has the same LL structure as O, but is trivial to exponentiate:
dσ/d log O = ( dσs/d log Os ) F(O, Os)
F includes all the effects difficult to compute analytically (such as recoil),
and is computed numerically. NLL accuracy is achieved
Checks that O exponentiates
Works for e+ e− → 2j/3j, ep → 1j/2j, pp → jµ+ µ− /2j
Matched to fixed order in e+ e− and DIS, work in progress for hadronic
collisions
The code is public
Summary on resummed computations
I Reorganize the perturbative expansion: αS −→ αS Lk
I Don’t have unweighted events
I Can (and should always) be matched to fixed-order results
Must be used to
I Get sensible predictions for inclusive shapes at peaks
I Cross check the results of Event Generators
Why you should care: Event Generators perform the same kind of
computation for inclusive variables, but with smaller nominal accuracy
MEGs: tails, large multiplicities
NLO/NNLO: rates, tails, small multiplicities
Resummed: peaks, fully inclusive

(Same schematic as before: incoming hadrons → PDFs → hard subprocess, fixed-order or resummed, possibly matched → hadronization → outgoing hadrons)
Summary
Fixed order: lots at NLO, a few at NNLO
Highly-automated generation of tree-level diagrams
High-accuracy resummed computations available for a few
key observables
Resummed and fixed-order results are complementary
PDFs with errors must be considered for serious assessment
of systematics. Computing intensive
Progress being made in (semi)-numerical approaches to
loop computations, resummations
Event Generators
Recall that an Event Generator aims at giving a complete description of collision
processes
The core of Event Generators is the Parton Shower mechanism, which serves two main
purposes:
To provide estimates of higher-order corrections that are enhanced by
large kinematic logarithms
To generate high-multiplicity partonic states which can readily be
converted into the observed hadrons
The Parton Shower is built on the same concept as resummations:
logarithmically dominant contributions to the cross section are “universal”.
Power-suppressed and finite terms are neglected
Parton Showers are more flexible than (analytical or numerical) resummation results.
This comes at a price, since more approximations need to be made
The problem
I A lot of physics at the LHC will involve many-jet events,
and processes with large K factors
I Monte Carlos cannot give sensible descriptions of many-jet
events, and cannot compute K factors
I Although Monte Carlos should not be seen as discovery
tools, these issues must be addressed for a good
understanding of LHC physics
Event Generators in a nutshell
I Infinite number of dominant Feynman diagrams
Generate high-multiplicity parton final state: shower
I Models for hadronization, underlying event
Convert partons into incoming and outgoing hadrons
I PDG information embedded
Used to decay particles with correct branching ratios
Let’s discuss the Parton Shower
Before going into that, let me stress that the problem of the sensible
generation of the underlying event is a serious one, owing to
I its importance for all kinds of physics simulations
I the still-poor theoretical understanding of its mechanisms
The process of checking the predictions of the underlying-event models, and
of improving them, will start immediately after the LHC turns on
There is a lot of ongoing activity on this issue, which I won’t report
Let’s start by ignoring the problem of soft singularities
Collinear kinematics

(Figure: branching a → b + c, with opening angles Θb and Θc relative to the direction of a)

Work in axial gauges

z = Eb/Ea ,   t = ka² ,   Θ = Θb + Θc ,   Θb/Θc = (1−z)/z ,   Θ = (1/Ea) √( t/(z(1−z)) )

dσN+1 = dσN (dt/t) dz (dφ/2π) (αS/2π) |Kba(z)|²

dσ̄N+1 = dσ̄N (dt/t) dz (αS/2π) Pba(z)
as we already know from fixed-order and resummed computations
In the phase space, φ can be conveniently identified with the azimuthal
angle between the plane of branching and the polarization of a
It is easy to iterate the branching process (splittings are called branchings in
this context)
a(t) −→ b(z) + c ,   b(t′) −→ d(z′) + e

dσ̄N+2 = dσ̄N (dt/t) (dt′/t′) dz dz′ (αS/2π)² Pba(z) Pdb(z′)
This is a Markov process, ie a random process in which the probability of
the next step only depends on the present values of the random variables.
In formulae
τ1 < . . . < τn =⇒ P( x(τn) < xn | x(τn−1), . . . , x(τ1) ) = P( x(τn) < xn | x(τn−1) )
In our case, the probability of each branching depends on the type of
splitting (g → gg, ...), the virtuality t, and the energy fraction z
Following a given line in a branching tree, it is clear that enhanced
contributions will be due to the strongly-ordered region
Q² ≫ t1 ≫ t2 ≫ . . . ≫ tN ≫ Q₀²
σN ∝ σ0 αSᴺ ∫_{Q₀²}^{Q²} (dt1/t1) ∫_{Q₀²}^{t1} (dt2/t2) . . . ∫_{Q₀²}^{t_{N−1}} (dtN/tN) = σ0 (αSᴺ/N!) logᴺ( Q²/Q₀² )
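The (αS log)ᴺ/N! structure of the strongly-ordered region can be verified numerically by evaluating the nested integrals (recursive midpoint rule on a log grid; toy scales, purely illustrative):

```python
import math

def ordered_integral(N, Q2, Q02, n=60):
    """Nested strongly-ordered integral: dt1/t1 from Q0^2 to Q^2, then
    dt2/t2 from Q0^2 to t1, and so on, N levels deep. Evaluated
    recursively with a midpoint rule on a log grid."""
    def inner(level, upper):
        if level == 0:
            return 1.0
        lo, hi = math.log(Q02), math.log(upper)
        h = (hi - lo) / n
        total = 0.0
        for i in range(n):
            t = math.exp(lo + (i + 0.5) * h)   # dt/t = d(log t)
            total += inner(level - 1, t) * h
        return total
    return inner(N, Q2)

# Should reproduce log^N(Q^2/Q0^2) / N!
approx = ordered_integral(3, 1.0e4, 1.0)
exact = math.log(1.0e4) ** 3 / math.factorial(3)
```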
Denote by
Φa [E, Q2 ]
the ensemble of parton cascades initiated by a parton a of energy E
emerging from a hard process with scale Q2 . Also, denote by
∆a (Q21 , Q22 )
the probability that a does not branch for virtualities Q22 < t < Q21
With this, it is easy to write a formula that takes into account all the
branches in a branching tree:
Φa[E, Q²] = Δa(Q², Q₀²) Φa[E, Q₀²]
  + Σb ∫_{Q₀²}^{Q²} (dt/t) Δa(Q², t) ∫ dz (αS/2π) Pba(z) Φb[zE, t] Φc[(1−z)E, t]
which has an immediate pictorial representation
(Diagram: the cascade Φa equals the no-branching piece plus a sum over branchings a → b + c at scale t, with energy fractions z and 1 − z)
Now simply impose that no information is lost during the parton shower:
the sum of all the probabilities associated with the branchings of partons
must be one. Therefore
1 = Δa(Q², Q₀²) + ∫_{Q₀²}^{Q²} (dt/t) Δa(Q², t) Σb ∫ dz (αS/2π) Pba(z)
which can be solved:
Δa(Q², Q₀²) = exp[ − ∫_{Q₀²}^{Q²} (dt/t) Σb ∫ dz (αS/2π) Pba(z) ]
Note
I This Sudakov form factor looks familiar −→ resummation
I Some virtual corrections must be included, otherwise unitarity couldn’t
be imposed!
It’s clear that a Sudakov must appear: resummation and parton shower
described the same physics
The closed form of the Sudakov is easy to find. Derive the unitarity
condition wrt Q20
0 = dΔa(Q², Q₀²)/dQ₀² − (Pa/Q₀²) Δa(Q², Q₀²) ,   Pa = Σb ∫ dz (αS/2π) Pba(z)
The exponential form immediately follows, if one imposes the boundary
condition
∆a (Q2 , Q2 ) = 1
ie there’s no branching without evolution. Note that
Δa(Q², t) = Δa(Q², Q₀²) / Δa(t, Q₀²)
which is why the second argument of the Sudakov is typically forgotten
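A toy numerical evaluation of the Sudakov for q → qg, assuming a frozen coupling and the resolution limits Q₀²/t ≤ z ≤ 1 − Q₀²/t discussed just below (a sketch under those assumptions, not a shower implementation):

```python
import math

ALPHA_S = 0.2        # frozen coupling, purely for illustration
CF = 4.0 / 3.0

def pqq(z):
    """Unregularized q -> qg Altarelli-Parisi kernel."""
    return CF * (1.0 + z * z) / (1.0 - z)

def sudakov(Q2, Q02, n=200):
    """Delta_q(Q^2, Q0^2) = exp[-int dt/t int dz (alphaS/2pi) P_qq(z)],
    with the z integral cut at the resolution limits Q0^2/t and
    1 - Q0^2/t. Crude midpoint rules throughout."""
    lo, hi = math.log(Q02), math.log(Q2)
    h = (hi - lo) / n
    expo = 0.0
    for i in range(n):
        t = math.exp(lo + (i + 0.5) * h)
        zmin, zmax = Q02 / t, 1.0 - Q02 / t
        if zmax <= zmin:
            continue                      # branching not resolvable
        hz = (zmax - zmin) / n
        zint = sum(pqq(zmin + (j + 0.5) * hz) for j in range(n)) * hz
        expo += (ALPHA_S / (2.0 * math.pi)) * zint * h
    return math.exp(-expo)
```

As expected, Δ equals 1 for no evolution and decreases monotonically as the hard scale grows.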
At this point, one observes that Pa is actually divergent
Go back to the branching kinematics pa → pb + pc . If we do not neglect
the virtualities of b and c we find
pa² = pT²/(z(1−z)) + pb²/z + pc²/(1−z)

Partons b and c will do some further branching, unless their virtuality is too
small, say Q₀². By imposing pT > 0 we get

z(1−z) Q² ≥ (1−z) Q₀² + z Q₀²

(1/2)[ 1 − √(1 − 4Q₀²/Q²) ] ≤ z ≤ (1/2)[ 1 + √(1 − 4Q₀²/Q²) ]
=⇒   Q₀²/Q² ≲ z ≲ 1 − Q₀²/Q²
Physical meaning: if z is outside this range, one of the partons is too
soft to be resolved, and the kinematics is set equal to that without
branching
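The exact and approximate resolvable z ranges can be compared directly (toy scales, purely illustrative):

```python
import math

def z_range(Q2, Q02):
    """Resolvable-branching range from pT > 0:
    (1 - sqrt(1 - 4 Q0^2/Q^2))/2 <= z <= (1 + sqrt(1 - 4 Q0^2/Q^2))/2."""
    d = math.sqrt(1.0 - 4.0 * Q02 / Q2)
    return (1.0 - d) / 2.0, (1.0 + d) / 2.0

# For Q0^2 << Q^2 this approaches the quoted Q0^2/Q^2 <= z <= 1 - Q0^2/Q^2
zmin, zmax = z_range(1.0e6, 1.0)
```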
As a further refinement, we must give αS the possibility to run. In general,
its argument will be a function of Q2 and z
αS const   =⇒   Δ ∝ exp[ −κ log²( Q²/Q₀² ) ]
αS(Q²)    =⇒   Δ ∝ ( Q₀²/Q² )ᵃ [ αS(Q²) ]ᵇ
I For Q2 → ∞ the Sudakov must vanish
I The diagonal Altarelli Parisi kernels dominate. Gluons tend to radiate
more than quarks (CA vs CF )
Two-loop AP kernels have terms log(1 − z)/(1 − z). Such terms can
effectively be taken into account in the LL Sudakov by setting
αS( z(1−z) Q² ) ≡ αS( pT² )   =⇒   Δ ∝ [ αS(Q²) ]^{ c log(Q²/Q₀²) }

This form goes to zero faster than any power of 1/Q² (ie it is really
difficult not to emit starting from a very hard scale)
An alternative derivation stresses a point. Consider again the e+ e− → q q̄g
cross section
dσ ∼ dx1 dx2 CF ( x1² + x2² ) / ( (1 − x1)(1 − x2) )
Now change variables (x1 , x2 ) → (x3 , cos θ), with θ the angle between g
and q. We have
dσ ∼ dx3 d cos θ [ ( 2CF/(1 − cos²θ) ) ( 1 + (1 − x3)² )/x3 + O( (sin θ)⁰ ) ]
  = dx3 d cos θ [ 2 Pgq(x3)/(1 − cos²θ) + O( (sin θ)⁰ ) ]

Now it is easy to associate half of the radiation to the quark line, and the
other half to the antiquark line

2/(1 − cos²θ) = 1/(1 − cos θ) + 1/(1 + cos θ)
Using

θq = θ ,   θq̄ = π − θ

one gets

2 d cos θ / (1 − cos²θ) = d cos θq/(1 − cos θq) + d cos θq̄/(1 − cos θq̄) ≃ dθq²/θq² + dθq̄²/θq̄²

and therefore

dσ ∼ (dθq²/θq²) dx3 Pgq(x3) + (dθq̄²/θq̄²) dx3 Pgq(x3)
But, within the collinear approximation in which we are working, this is
equivalent to the expression of the collinear limit of the matrix elements
we have started from
Note in fact that we can define the following quantities with mass squared
dimensions
Q² = z(1−z) θ² E² ,   pT² = z²(1−z)² θ² E² ,   t̃ = θ² E²

and obtain

dQ²/Q² = dpT²/pT² = dt̃/t̃ = dθ²/θ²
This fact has a serious implication: in the collinear approximation, the
evolution scale which enters the parton shower is not uniquely defined
This is because the scales chosen above have the same angular behaviour,
provided that z is not too close to 0 or 1
In other words, we must stay away from the soft region
Double logs
From perturbative computations, we know that the soft/collinear region
will actually give the dominant contribution to the cross section
Choose t ∈ {Q², pT², t̃} and evaluate the integral

I = ∫ (dt/t) (dz/z)

We are interested in the soft behaviour, so we assume θ ∼ 1, and use the
definition of t to restrict the integration range in z. Thus, t has the
meaning of the hard scale of the current branching

t = Q²   =⇒   I ∼ (1/2) log²( t/E² )
t = pT²  =⇒   I ∼ (1/4) log²( t/E² )
t = t̃    =⇒   I ∼ log( t/E² ) log( Λ/E )