Naturalness and fine-tuning- Patrik Adlarson

Structural naturalness versus numerical naturalness
Naturalness plays an important role in particle physics. It is also related to one of the two usages of the word “fine-tuning”, which will be explained later. There is a more qualitative notion of naturalness, structural naturalness, and a more quantitative one, numerical naturalness.
Structural naturalness is naturalness in the sense of aesthetic beauty. It is often a guide when constructing new theories in physics. A definition from the late 1940s is “the subconscious reasoning which are called instinctive judgments.” An example of this kind of reasoning is Einstein’s remark about what would have happened if his theory of general relativity had been falsified: “Then I would have felt sorry for the dear Lord.” In his book Dreams of a Final Theory, Nobel laureate Steven Weinberg writes
The kind of beauty that we find in physical theories is of a very
limited sort. It is, as far as I have been able to capture it in words,
the beauty of simplicity and inevitability - the beauty of perfect
structure, the beauty of everything fitting together, of nothing being
changeable, of logical rigidity. It is a beauty that is spare and classic,
the sort we find in the Greek tragedies.1
Paul Dirac even stated that it is more important to have beauty in one's equations than to have them fit experiment.2 These quotes show a somewhat subjective, or at least intersubjective, usage of the word beauty. This does not mean that there is no such thing as beauty in physics, only that it is difficult to pin down and to use as a guide when constructing new theories.
Also, Dirac's comment perhaps overreaches in how much weight should be given to the concept of beauty. One reason is that beauty is also somewhat dependent on the philosophical influences of the time. Given an Aristotelian framework it is far more natural to have the Earth at the centre of the world. The arguments for this were that we would all have been thrown off by the tail wind of the Earth if it had been moving, and that we should observe some parallax of the stars; since no parallax was observed at the time, there could be two reasons for this:
1. the stars are too far away for us to notice such parallax effects, or
2. the Earth is the centre of the universe.
1 Weinberg, Steven, Dreams of a Final Theory, Vintage (1993), p. 119.
2 Dirac, Paul, The Evolution of the Physicist's Picture of Nature, Scientific American 208 (5) (1963).
We now know that the first alternative is true, but for the medieval mind the second alternative was more plausible.
Another reason for being cautious about Dirac's statement is that the concept of beauty/naturalness can sometimes lead in the wrong direction. Dirac developed the Large Number Hypothesis (Eddington was the first), which holds that any very large dimensionless number should be related to another large number, chosen to be the age of the universe. He found three dimensionless numbers which are all very close to 10^40 and stated that these ratios should remain approximately constant during the evolution of the universe.3 However, as it turns out, keeping these ratios constant means that other fundamental constants would have to vary in time, and this would essentially have meant that no life could have developed on Earth. A simpler solution could be an anthropic one: in order for self-conscious beings to be able to reflect on their situation, and given that life formed the way it did, the universe has to be as old and as large as it is; otherwise we would not be here to observe it.
Stating the hierarchy problem
However, a discrepancy between measured large numbers and what one expects can lead to problems. The hierarchy problem asks why the ratio of the strengths of the weak and gravitational forces is such a large number:

\[
\frac{G_F\,\hbar^2}{G_N\,c^2} \;\approx\; 10^{33}.
\]
The problem is made worse by the fact that one expects this ratio to be approximately unity. The value of G_F can be related to the Higgs mass via

\[
M_H \;=\; \sqrt{2\lambda}\;v \;=\; \sqrt{2\lambda}\left(\frac{1}{\sqrt{2}\,G_F}\right)^{1/2},
\]

which means that

\[
G_F \;=\; \frac{\sqrt{2}\,\lambda}{(m_H)^2} \;\propto\; \frac{1}{(m_H)^2}.
\]
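To see explicitly where the large number comes from, one can combine the relations above (which give G_F = 1/(√2 v²)) with G_N = ħc/M_Planck². This is only a sketch, in natural units, using numerical values not stated in the text (M_Planck ≈ 1.2×10^19 GeV, Higgs vacuum expectation value v ≈ 246 GeV):

\[
\frac{G_F\,\hbar^2}{G_N\,c^2}
\;=\; \frac{1}{\sqrt{2}}\,\frac{M_{\mathrm{Planck}}^{2}}{v^{2}}
\;=\; \sqrt{2}\,\lambda\,\frac{M_{\mathrm{Planck}}^{2}}{M_H^{2}}
\;\approx\; 10^{33},
\]

so the huge ratio is just the statement that the Planck mass is enormously larger than the weak scale.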
When considering the mass of the Higgs, one has the bare mass plus contributions to the mass from interactions with the vacuum. The contribution to the mass is given by

\[
\delta m_H^{2} \;=\; \kappa\,\Lambda^{2},
\]

where Λ is the maximum energy available to the virtual particles and κ denotes a proportionality factor.
3 These three were: the ratio of the size of the observable universe to the electron radius; the ratio of the electromagnetic to the gravitational force between the proton and the electron; and the square root of the number of protons in the visible universe.
A good analogy is to imagine a thermodynamic system of particles P which on average has a temperature T. If one inserts a new particle R at rest, then at time t = 0 its energy is equal to its rest mass, but as time goes to infinity it receives additional contributions from the particles P interacting with R until it is in thermal equilibrium, i.e. its energy is of the order of T, or more precisely κT with a proportionality factor κ of the order of 1/100. The analogy with the Higgs mass and the vacuum is that the contribution to the effective mass of the Higgs should be very large if one sets the maximum available energy to the Planck mass. The problem is then that if Λ is of the order of the Planck mass, the Higgs mass should also be of that order, i.e. set by G_N, and the ratio above should be of order 1.
There are, however, two ways to solve this problem. The first is that virtual particles at different energy scales cancel very precisely, or are finely tuned, so that the effective contribution becomes small. This means that if Λ is the Planck mass, then the proportionality factor κ must be less than 10^-32. This usage of the word fine-tuning is different from its other usage, where fine-tuning refers to the fact that different parameters seem optimized to make it possible for carbon-based life to occur. The three most commonly discussed explanations for this second type of fine-tuning are brute fact, multiverse and design.
The first usage of fine-tuning feels quite unnatural in the structural-naturalness sense (i.e. in the sense of beauty).
The other solution to the problem is that there is some reason to stop at a lower energy scale. If one sets κ to approximately 1/100, then Λ becomes a measure of our ignorance of how far the SM is applicable. This gives a scale of the order of a TeV, where one should expect some new physics to appear, as the sketch below illustrates.
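As a rough numerical sketch of the two options, the following uses inputs that are assumptions and not taken from the text: a Higgs mass of 125 GeV, a Planck mass of 1.2×10^19 GeV and a loop factor κ ≈ 1/(16π²).

```python
import math

# Rough sketch of the two ways out discussed above (all inputs are assumed values).
m_H = 125.0          # Higgs mass in GeV (assumed)
M_Planck = 1.22e19   # Planck mass in GeV (assumed)

# Option 1: cutoff at the Planck scale.
# Demanding kappa * Lambda^2 <= m_H^2 fixes how small kappa must be.
kappa_required = m_H**2 / M_Planck**2
print(f"kappa must be smaller than about {kappa_required:.0e}")  # ~1e-34, below the 1e-32 level quoted above

# Option 2: kappa is a generic loop factor of order 1/100.
# Demanding kappa * Lambda^2 <= m_H^2 then bounds the cutoff Lambda instead.
kappa_loop = 1.0 / (16.0 * math.pi**2)   # ~1/158
Lambda_max = m_H / math.sqrt(kappa_loop)
print(f"the cutoff must lie below about {Lambda_max/1e3:.1f} TeV")  # ~1.6 TeV
```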
Naturalness criterion
The naturalness criterion was first stated by 't Hooft in 1979, who writes
At any energy scale μ, a physical parameter or set of physical
parameters α_i(μ) is allowed to be very small only if the
replacement α_i(μ) = 0 would increase the symmetry of the
system.4
There are three premises underlying the naturalness criterion. The first is a belief in reductionism: that there exists a fundamental theory in which all (dimensionless) parameters are determined.
The second premise is the concept of symmetries. Symmetries act as protectors in the natural laws: if a parameter is zero because of a symmetry, it will remain zero even after all quantum corrections have been included.
4 't Hooft, G., Naturalness, Chiral Symmetry and Spontaneous Chiral Symmetry Breaking, lecture given at the Cargèse Summer Institute, 1979.
The photon is protected by a symmetry, namely gauge invariance, so that the quantum corrections coming from electron-positron pairs cannot generate a photon mass:
\[
\Pi^{\mu\nu}(0) \;=\; \int \frac{d^4k}{(2\pi)^4}\,
\mathrm{Tr}\!\left[(-ie\gamma^\mu)\,\frac{i(\slashed{k}+m_e)}{k^2-m_e^2}\,
(-ie\gamma^\nu)\,\frac{i(\slashed{k}+m_e)}{k^2-m_e^2}\right].
\]
Similarly, for the electron there exists a chiral symmetry which protects the electron mass from growing without limit:
\[
\Sigma(0) \;=\; \int \frac{d^4k}{(2\pi)^4}\,
(-ie\gamma^\mu)\,\frac{i(\slashed{k}+m_e)}{k^2-m_e^2}\,
(-ie\gamma^\nu)\,\frac{-ig_{\mu\nu}}{k^2}
\;\sim\; \text{log divergent}.
\]
Cut-off regularization gives a contribution to the electron mass of

\[
\delta m_e \;\sim\; \frac{3\alpha}{2\pi}\, m_e \log\frac{\Lambda}{m_e} \;\simeq\; 0.24\, m_e \quad \text{if } \Lambda \simeq M_{\mathrm{Planck}},
\]

and one sees that the contribution to the electron mass scales with the electron mass itself, growing only logarithmically with the cutoff.
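As a quick order-of-magnitude check of this statement: the coefficient 3α/2π and the inputs (α ≈ 1/137, m_e ≈ 0.511 MeV, Λ = M_Planck ≈ 1.2×10^19 GeV) are assumptions, so the result should only be compared in order of magnitude with the ≈ 0.24 m_e quoted above.

```python
import math

# Order-of-magnitude check: the cutoff-regularized electron self-energy is only
# logarithmically sensitive to Lambda, so even a Planck-scale cutoff barely shifts m_e.
alpha = 1.0 / 137.036        # fine-structure constant (assumed)
m_e = 0.511e-3               # electron mass in GeV (assumed)
Lambda = 1.22e19             # cutoff at the Planck mass in GeV (assumed)

delta_me = (3 * alpha / (2 * math.pi)) * m_e * math.log(Lambda / m_e)
print(f"delta m_e / m_e ~ {delta_me / m_e:.2f}")  # ~0.18, the same order as the 0.24 quoted above
```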
The reason for this is that massless particles of spin 1/2 have two degrees of freedom, while massive particles of spin 1/2 (exception: Majorana fermions) and higher have more than two. For massive states one may go into a reference frame where the particle is at rest, which requires an extra degree of freedom; going from a massive to a massless state therefore allows one to eliminate some degrees of freedom. For the Higgs, however, which is a spin-0 particle, no such symmetry exists to protect it. This is what gives the Higgs its large quantum corrections.
There are ways to protect massive spin-0 particles, but these go beyond the regular SM description.
The third premise is the use of effective field theories. The main point of an effective field theory is that the theory is valid only up to some energy scale. It describes the original theory in a truncated version, using local operators built only from the light degrees of freedom; the higher energies/smaller scales are summed up in a finite number of parameters. For instance, Chiral Perturbation Theory is an effective field theory of QCD and is valid up to about 1 GeV.
We already know that the Standard Model (SM) is an effective theory, since the SM does not take into account the effects of gravitation. These become important at the Planck scale; the question, however, is whether the SM breaks down already at the TeV scale.
If the naturalness criterion is correct one should expect some new physics at the
TeV scale. Technicolor may be one such option.
Example of naturalness in the past: success
The concept of naturalness has been successful in the past, and there are some examples of that. Take for instance the electromagnetic contribution to the pion mass splitting,

\[
M_{\pi^\pm}^{2} - M_{\pi^0}^{2} \;=\; \frac{3\alpha}{4\pi}\,\Lambda^{2}.
\]

From the experimentally known mass splitting, (35.5 MeV)^2, one needs either cancellations between different contributions or a cutoff scale of approximately 850 MeV. At 770 MeV the rho meson enters and modifies the electromagnetic contribution. This is a deduction after the fact; the next example is not.
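A quick check of the quoted cutoff; the value α ≈ 1/137 is assumed, while the (35.5 MeV)^2 splitting is taken from the text.

```python
import math

# Require the electromagnetic contribution (3*alpha/4*pi) * Lambda^2 not to exceed
# the measured pion mass-squared splitting, and solve for the cutoff Lambda.
alpha = 1.0 / 137.036        # fine-structure constant (assumed)
delta_M2 = 35.5**2           # measured mass-squared splitting in MeV^2 (from the text)

Lambda_max = math.sqrt(4 * math.pi / (3 * alpha) * delta_M2)
print(f"Lambda <~ {Lambda_max:.0f} MeV")  # ~850 MeV, just above the rho mass of 770 MeV
```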
When one considers the mass difference between the K-long and K-short states, it is given by

\[
\frac{M_{K_L^0} - M_{K_S^0}}{M_{K_L^0}} \;=\; \frac{G_F^{2} f_K^{2}}{6\pi^{2}}\,\sin^{2}\theta_C\,\Lambda^{2}.
\]

The experimentally measured value is 7×10^-15, which gives a cutoff scale of less than about 2 GeV. At 1.2 GeV the charm quark enters and modifies the behaviour.
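A similar check can be made for the kaon bound; here G_F ≈ 1.17×10^-5 GeV^-2, f_K ≈ 114 MeV (in the f_π ≈ 93 MeV normalization) and sin θ_C ≈ 0.22 are assumed inputs, so the result should only be read as reproducing the order of magnitude.

```python
import math

# Require the formula above not to exceed the measured relative mass difference
# and solve for the cutoff Lambda.
G_F = 1.166e-5               # Fermi constant in GeV^-2 (assumed)
f_K = 0.114                  # kaon decay constant in GeV (assumed normalization)
sin_thC = 0.22               # sine of the Cabibbo angle (assumed)
dM_over_M = 7e-15            # measured (M_KL - M_KS)/M_KL (from the text)

Lambda_max = math.sqrt(dM_over_M * 6 * math.pi**2 / (G_F**2 * f_K**2 * sin_thC**2))
print(f"Lambda <~ {Lambda_max:.1f} GeV")  # ~2 GeV, close to the charm threshold
```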
Doing the same for the Higgs, and including the particles that modify the Higgs mass the most, an effective-theory approach gives the contribution to the Higgs mass as

\[
\delta m_H^{2} \;=\; \frac{3 G_F}{4\sqrt{2}\,\pi^{2}}\,\bigl(4m_t^{2} - 2m_W^{2} - m_Z^{2} - m_H^{2}\bigr)\,\Lambda^{2}.
\]

If one sets the Higgs mass at its upper bound of 182 GeV (95% CL), some new physics should enter at less than about 1 TeV.
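Plugging in numbers reproduces the quoted scale; m_t ≈ 173 GeV, m_W ≈ 80 GeV, m_Z ≈ 91 GeV and G_F ≈ 1.17×10^-5 GeV^-2 are assumed values, while the 182 GeV Higgs mass is the bound used in the text.

```python
import math

# Demand |delta m_H^2| <= m_H^2 in the formula above and solve for the cutoff Lambda.
G_F = 1.166e-5                      # Fermi constant in GeV^-2 (assumed)
m_t, m_W, m_Z = 173.0, 80.4, 91.2   # top, W and Z masses in GeV (assumed)
m_H = 182.0                         # Higgs mass bound used in the text, in GeV

combo = 4 * m_t**2 - 2 * m_W**2 - m_Z**2 - m_H**2
Lambda_max = math.sqrt(m_H**2 * 4 * math.sqrt(2) * math.pi**2 / (3 * G_F * combo))
print(f"Lambda <~ {Lambda_max / 1e3:.1f} TeV")  # ~0.9 TeV, i.e. below about 1 TeV
```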
Example of naturalness in the past: failure
There are, however, examples where naturalness fails, most notably the cosmological constant. Quantum corrections to the cosmological constant indicate that the theoretical description of particle physics should fail at an energy scale below about 3 meV, yet no such thing is observed. This has led some, most notably Steven Weinberg, to consider an anthropic explanation for the small positive value of the cosmological constant.
Quantifying Naturalness beyond SM
Another definition of naturalness is that the observable properties should be
stable against minute variations of the fundamental parameters.5
5 Grinbaum, Alexei, Which fine-tuning arguments are fine?, arXiv:0903.4055v1, p. 6.
While waiting for LHC results to confirm or disconfirm different models, naturalness has been used as a way to measure how natural, or how likely to be true, a model is, especially for supersymmetric models. The idea is to compare the Z boson mass to the underlying parameters of the more fundamental theory. One may do this since the Z boson mass is equivalent, up to constants of order unity, to the Higgs mass or to the inverse square root of the Fermi constant. The new weak-scale theory should also be calculable. The required measure of fine-tuning is described by
\[
\Delta \;=\; \max_i \left| \frac{a_i}{M_Z^{2}}\,\frac{\partial M_Z^{2}(a_i)}{\partial a_i} \right|,
\]

which is the logarithmic variation of the function M_Z^2(a_i) with respect to a_i.
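To make the definition concrete, here is a small toy illustration; the function mz_squared below is purely hypothetical and only stands in for the model-dependent prediction M_Z^2(a_i).

```python
# Toy illustration of the fine-tuning measure Delta defined above.
def mz_squared(a):
    # Hypothetical stand-in for M_Z^2(a): a near-cancellation between two terms.
    return 100.0 * a**2 - 99.0 * a**3   # in GeV^2, purely illustrative

def delta_bg(f, a, eps=1e-6):
    """Logarithmic sensitivity |(a / f(a)) * df/da|, computed numerically."""
    df_da = (f(a * (1 + eps)) - f(a * (1 - eps))) / (2 * a * eps)
    return abs(a / f(a) * df_da)

print(delta_bg(mz_squared, 1.0))   # ~97: far above Delta = 10, so this toy point is finely tuned
```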
Δ was originally set to 10, which corresponds to a parameter tuning of no more than 10%. The choice of 10 was arbitrary and is a subjective/heuristic measure of what is natural. In fact, as the experimental constraints on physics beyond the SM were refined, the MSSM required Δ of about 20 in order to work. This was still considered reasonable.
However, as more and more experimental data accumulated, it became unnatural (pun intended) to use only the Z mass to assess naturalness, and other parameters were included as well. In 1994 a new measure of naturalness was introduced, given by
\[
\Delta_{AC} \;=\; \frac{\Delta_{BG}}{\overline{\Delta}_{BG}},
\]
where the bar denotes an average of Δ_BG over some sensible range of the parameters. Several other modifications have since been introduced in order to compare how unnatural different models are. One can see them as LHC forecasts, but as we know from meteorology, forecasts are not always accurate; hopefully we will therefore gain some insight into whether or not naturalness, framed in terms of the hierarchy problem, is a successful indicator of new physics around 1 TeV.
References:
Main source:
Giudice, G.F., Naturally Speaking: The Naturalness Criterion and Physics at the LHC, arXiv:0801.2562v2.
Other sources:
Enberg, Rikard, Lecture Notes.
Grinbaum, Alexei, Which fine-tuning arguments are fine?, arXiv:0903.4055v1.
Giudice, G.F., Theories for the Fermi scale, arXiv:0710.3294v2.
't Hooft, G., Naturalness, Chiral Symmetry and Spontaneous Chiral Symmetry Breaking, lecture given at the Cargèse Summer Institute, 1979. NATO Adv. Study Inst. Ser. B Phys. 59, 135 (1980).
Weinberg, Steven, Dreams of a Final Theory, Vintage, London, 1993.
Carr, Bernard (ed.), Universe or Multiverse?, Cambridge University Press, Cambridge, 2007.