IceCube Neutrino Observatory

Efficiency of Cuts in the Inverted Analysis
Ndirc > 13 (number of direct hits)
Ldirb > 170 (track length in meters)
|Smootallphit| < 0.250 (smoothness of hits along the track)
Medres < 4 (median resolution in degrees)
Likelihood ratio vs. zenith: horizontal events must be > 27.5 and vertical events must be > 65.7; the threshold is a linear function of zenith in between (a rough sketch follows).
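As a rough illustration only, here is a minimal Python sketch of a zenith-dependent threshold. The linear form and the two threshold values come from the slide, but the zenith angles I use for "horizontal" (90 degrees) and "vertical" (180 degrees) are my own assumption for illustration, not taken from the analysis.

    # Hedged sketch: likelihood-ratio threshold as a linear function of zenith.
    # Assumed endpoints: horizontal = 90 deg (threshold 27.5),
    # vertical = 180 deg (threshold 65.7). These angles are illustrative.
    def lratio_threshold(zenith_deg, horiz=90.0, vert=180.0):
        frac = (zenith_deg - horiz) / (vert - horiz)
        return 27.5 + frac * (65.7 - 27.5)

    def passes_lratio_cut(lratio, zenith_deg):
        return lratio > lratio_threshold(zenith_deg)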
How I calculated the efficiency (a minimal code sketch follows these steps):
1) Made an N-1 plot of the selected parameter (applied all cuts at my cut level except for the cut on the parameter I am studying).
2) Counted the number of events that passed and failed each cut.
3) efficiency = (# of events that pass the cut) / (total # of events), where the total is the number that passed plus the number that failed.
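As a minimal sketch of this bookkeeping (not the original analysis code), the efficiency for each parameter can be computed directly from the kept and cut counts in the tables below:

    # Minimal sketch of the N-1 efficiency calculation. All other cuts
    # are applied first; then events passing and failing the studied
    # cut are counted.
    def cut_efficiency(n_kept, n_cut):
        # total = events entering the cut = kept + cut
        return n_kept / (n_kept + n_cut)

    # Example (NDIRC, Nch < 100, data; numbers from the table below):
    # cut_efficiency(3.17e8, 1.06e7) -> 0.968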
(Plots will follow the numbers.)
NDIRC

                Nch < 100                   Nch >= 100
                data        dCorsika        data        dCorsika
  # cut         1.06E+007   1.81E+007       3.21E+005   1.63E+006
  # kept        3.17E+008   2.50E+008       6.93E+007   5.35E+007
  efficiency    0.968       0.933           0.995       0.970
LDIRB

                Nch < 100                   Nch >= 100
                data        dCorsika        data        dCorsika
  # cut         5.63E+007   3.35E+007       1.64E+007   8.10E+006
  # kept        3.17E+008   2.50E+008       6.93E+007   5.35E+007
  efficiency    0.849       0.882           0.809       0.868
Smootallphit

                Nch < 100                   Nch >= 100
                data        dCorsika        data        dCorsika
  # cut         1.42E+007   5.99E+006       2.27E+006   1.05E+006
  # kept        3.17E+008   2.50E+008       6.93E+007   5.35E+007
  efficiency    0.957       0.977           0.968       0.981
Median Resolution

                Nch < 100                   Nch >= 100
                data        dCorsika        data        dCorsika
  # cut         1.54E+006   8.84E+005       2.47E+005   1.42E+005
  # kept        3.17E+008   2.50E+008       6.93E+007   5.35E+007
  efficiency    0.995       0.996           0.996       0.997
The next pages contain the N-1 plots for each parameter. Each page contains 4 plots:
Nch < 100
Nch < 100, with the dCorsika normalized to have the same number of events as the data
Nch >= 100
Nch >= 100, with the dCorsika normalized to have the same number of events as the data
*The normalization factor needed is approximately 1.25.
[N-1 plots for the inverted analysis, one page per cut parameter; panels: Nch < 100, Normalized Nch < 100, Nch >= 100, Normalized Nch >= 100]
Now, take a look at the comparable plots for the upgoing
analysis.
(Sorry that the histograms don't have identical binning... I
can do them again if critical.)
[Upgoing N-1 plots, one page per cut parameter; panels: Nch < 100 and Normalized Nch < 100]
What I am working on...
If we are cutting on distributions that don't agree, then we are likely to get the normalization for the low Nch events wrong. What would happen to the normalization if we had gotten the Monte Carlo distribution incorrect?
Right now, I see two ways to approach this.
1) We could try to shift the MC to match the data.
* Using different ice models, for instance, could shift the Ndirc into better agreement. --> We decided this was a bad idea because it would send parameters like Nch out of agreement.
[Sketch: data and MC distributions with the cut and keep regions marked]
2) We could shift the Monte Carlo cut (but keep the data cut). Then we could see how this changes the overall normalization.
[Sketch: a distribution with the atmospheric MC cut and the data cut marked]
If the Ndirc peak is off by 20%, you can “shift” it higher (or shift
the cut lower) and see the effect on the normalization.
For MC: 1.2 * Ndirc > 13
This is the same as shifting the cut: Ndirc > 13 / 1.2 = 10.83
Since Ndirc is discrete: Ndirc >= 11
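A minimal sketch of this equivalence, using hypothetical per-event Ndirc values (only the 20% factor and the cut value come from the slide):

    # Minimal sketch: scaling MC Ndirc up by 1.2 before the Ndirc > 13
    # cut is the same as cutting the unscaled MC at Ndirc >= 11, while
    # the data cut stays at Ndirc > 13. The event values are hypothetical.
    import numpy as np

    def n_passing(ndirc, cut=13.0, scale=1.0):
        return int(np.sum(scale * np.asarray(ndirc) > cut))

    ndirc_mc = np.array([9, 10, 11, 12, 13, 14, 20])  # hypothetical MC
    assert n_passing(ndirc_mc, scale=1.2) == int(np.sum(ndirc_mc >= 11))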
We can compare what happens to the normalization at low
Nch if we pretend that we are working at an entirely different
quality cut level.
Ignore the data for a moment and pretend that the Level 7
central Bartol distribution is the truth for atmospheric neutrinos.
Count the number of events above and below the Nch cut at the other quality levels. Since I work at Level 7, consider Levels 5, 6, 7, 8, and 9.
           Bartol Min          Bartol Central      Bartol Max
  Level    <100      >100      <100      >100      <100      >100
  5        539.3     6.7       725.9     12.3      912.6     17.9
  6        463.8     5.2       623.1     9.7       782.5     14.3
  7        397.2     4.9       533.8     9.1       670.3     13.3
  8        247.9     3.7       331.7     6.9       415.6     10.0
  9        153.5     2.6       204.6     4.8       255.7     7.0
The same table, with the predicted signal (Nch >= 100) shown alongside:

           Bartol Min          Bartol Central      Bartol Max          Signal
  Level    <100      >100      <100      >100      <100      >100      >100
  5        539.3     6.7       725.9     12.3      912.6     17.9      82.6
  6        463.8     5.2       623.1     9.7       782.5     14.3      72.3
  7        397.2     4.9       533.8     9.1       670.3     13.3      68.4
  8        247.9     3.7       331.7     6.9       415.6     10.0      53.4
  9        153.5     2.6       204.6     4.8       255.7     7.0       39.6
Assuming the Bartol Central Level 7 is the truth, you can find the low Nch
normalization factor for each scenario:
5 levels * 3 fluxes = 15 scenarios
For each, you can then calculate a normalized number of background and
signal events.
Example: Bartol Max, Level 5
normalization = 533.8 / 912.6 = 0.585
normalized background = 0.585 * 17.9 = 10.5
normalized signal = 0.585 * 82.6 = 48.3
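As a minimal sketch, the full set of 15 scenarios can be worked through the same way, using only the numbers from the table above:

    # Minimal sketch: normalize each (flux, level) scenario to the
    # Bartol Central Level 7 low-Nch count, then scale its background
    # and signal predictions. All numbers come from the table above.
    TRUTH_LOW_NCH = 533.8  # Bartol Central, Level 7, Nch < 100

    # flux -> level -> (Nch < 100 count, Nch >= 100 background)
    counts = {
        "min":     {5: (539.3, 6.7),  6: (463.8, 5.2),  7: (397.2, 4.9),
                    8: (247.9, 3.7),  9: (153.5, 2.6)},
        "central": {5: (725.9, 12.3), 6: (623.1, 9.7),  7: (533.8, 9.1),
                    8: (331.7, 6.9),  9: (204.6, 4.8)},
        "max":     {5: (912.6, 17.9), 6: (782.5, 14.3), 7: (670.3, 13.3),
                    8: (415.6, 10.0), 9: (255.7, 7.0)},
    }
    signal = {5: 82.6, 6: 72.3, 7: 68.4, 8: 53.4, 9: 39.6}  # Nch >= 100

    for flux, levels in counts.items():
        for level, (low_nch, bgd) in levels.items():
            norm = TRUTH_LOW_NCH / low_nch
            print(f"{flux:7s} L{level}: bgd = {norm * bgd:4.1f}, "
                  f"sig = {norm * signal[level]:5.1f}")

    # e.g. "max     L5: bgd = 10.5, sig =  48.3"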
This may appear somewhat random, but the pattern is evident
on the next slide.
[Plot: Normalized Signal vs. Normalized Background for the 15 scenarios]
Assuming Bartol Central Level 7 is the truth...
[Plot: the same Normalized Signal vs. Normalized Background points, now labeled by cut level (Lv. 5 through Lv. 9)]
You start with a single prediction of the background and signal for the final sample.
[Sketch: a single point (Bartol Central) in the normalized signal vs. normalized background plane]
Uncertainties in the theoretical prediction of the atmospheric neutrino flux lead to a spread in the background values predicted in the final sample.
[Sketch: three points, Bartol Min, Bartol Central, and Bartol Max, spread in the normalized signal vs. normalized background plane]
Now normalize to the low Nch events. Despite having the lowest normalization factor, Bartol Max will still predict the highest normalized background; however, it will predict the lowest signal. (At Level 7, for example, Bartol Max has normalization 533.8 / 670.3 = 0.80, giving a normalized background of 10.6 but a normalized signal of only 54.5, compared with 9.1 and 68.4 for Bartol Central.)
[Sketch: the three points after normalization to the low Nch events]
Now assume there is a non-uniform, energy-dependent scale factor. The signal and background may then be shifted by different amounts (shown by the different sizes of the arrows).
[Sketch: the points shifted by arrows of different lengths]
Cut Levels 5, 6, and 7 (the circled region, with Level 7 being the blue line) show similar behavior. Because of the large gap, it seems that the cuts tighten dramatically between Levels 7 and 8. If I wanted to, I could add a cut level in that region.
I hope that our distributions (data vs. MC) do not disagree as much as Level 7 MC disagrees with Level 9 MC. If the data and MC show a disagreement similar to that between Level 6 MC and Level 7 MC (for instance), then it seems that we can constrain the range of signal and background.
[Plot: Normalized Signal vs. Normalized Background, with the Level 7 point marked and Levels 5-7 circled]
Using the 2003 files with modified OM sensitivity, I made this
plot of normalized background vs. normalized signal. Everything
is normalized assuming that Bartol central 100% OM sensitivity
is the truth.
[Plot: Normalized Signal vs. Normalized Background for 70%, 100%, and 130% OM sensitivity, each with Bartol min, central, and max points]
Albrecht asked me to check the space angle difference between the true and reconstructed tracks of the muons near the horizon in the inverted analysis.
Although my statistics are low (not as good as Newt's), I find that events that pass my final quality cuts (minus the Nch cut) are well reconstructed. The difference between the true and reconstructed angles is usually within 4 to 5 degrees.
*Obviously, my statistics are low. Unweighted, there are 107 events in this plot, but they are weighted up to be comparable in number to the 4-year data.