Audio-based Music Segmentation Using Multiple Features
Pedro Girão Antunes, David Martins de Matos, Isabel Trancoso
L2F/INESC-ID Lisboa
Rua Alves Redol, 9, 1000-029 Lisboa, Portugal
{pedro.girao,david.matos,isabel.trancoso}@l2f.inesc-id.pt
Abstract—Structural segmentation based on the musical audio signal is a growing area of research. It aims to segment a piece of music into structurally significant parts, or higher-level segments. Among many applications, it offers great potential for improving the acoustic and musicological modeling of a piece of music. This thesis describes a method for automatically determining the points of change in a piece of music, based on a two-dimensional representation, the Self Distance Matrix (SDM), and on the detection of audio onsets. The features used to compute the SDM are the MFCCs, the chromagram, and the rhythmogram, which are also combined together. The audio onsets are determined using distinct state-of-the-art methods, under the assumption that every moment of segment change must coincide with an audio onset. In essence, the SDM is used to determine which of the detected onsets are moments of segment change. To do so, a checkerboard kernel with radial smoothing is applied along the diagonal of the SDM, yielding a novelty-score function whose peaks are considered candidate instants. The selected instants are the audio onsets closest to the detected peaks. The method is implemented in Matlab using several toolboxes. Our results, obtained for a corpus of 50 songs, are comparable with the state of the art.
I. INTRODUCTION
The expansion of music in digital format, due to the growing efficiency of compression algorithms, led to the mass consumption of music. This phenomenon led to the creation of a new research field, so-called music information retrieval. Musical structural segmentation is one of its sub-fields and can be a starting point for a number of more complex tasks: music summarization, music analysis, music search, and genre classification. Segmentation can also assist in audio browsing: besides browsing an album through its songs, it becomes possible to browse a song through its segments.
Every piece of music has an overall plan or structure, called the form of the music. Musical forms span a great range of complexity. For example, most occidental pop music tends to be short and simple, often built upon repetition; on the other hand, classical music traditions around the world tend to encourage longer, more complex forms. Note that, from an abstract point of view, structure is closely related to the human perception of it. For instance, most occidental listeners can easily distinguish the verse from the chorus of a pop song, but will have trouble recognizing what is going on in a piece of traditional Chinese music. Furthermore, classical music forms may be difficult to recognize without the familiarity that comes from study or repeated hearings.
Regarding pop music, modern production techniques often use copy and paste to clone multiple segments of the same type, and even to clone components within a segment. This obviously facilitates the work of automatic segmentation, so good results are obtained for this kind of music.

Underlying segmentation and other sub-fields of music information retrieval is feature extraction, as the stream of audio samples itself does not directly provide the essential information. Ideally, extracting the information perceived by humans would be of the most interest: for example, the progression of harmonies, the melodic cadences, changes of instrumentation, the presence of drum fills, the presence of vocals, etc. In practice, these features can be roughly summarized in three musical dimensions: harmony (melody), timbre, and rhythm.

One of the most important breakthroughs in this area was the use of a two-dimensional representation of a musical audio signal proposed by Foote [8]: the Self Distance Matrix (SDM) (Figure 1), where ds is a distance function and v a feature vector:

SDM(i, j) = d_s(v_i, v_j), \quad i, j = 1, \dots, n \qquad (1)

Figure 1. SDM (40 MFCCs using Euclidean distance) for "Northern Sky" by Nick Drake.

A variety of approaches are based on information extracted from this matrix. Before exploring them, however, we turn to the features used to compute the matrix, which play a central role.
A. Features
The three musical dimensions referred to above play an essential role in the segmentation problem. According to experiments on the human perception of structural boundaries in popular music by Bruderer et al. [3], "global structure" (repetition, break), "change in timbre", "change in level", and "change in rhythm" represent the main perceptual cues responsible for the perception of boundaries in music. These cues are related to the referred musical dimensions. Therefore, in order to optimize the detection of such boundaries, the extracted features should ideally represent these perceptual cues.
Timbre can be defined as everything about a sound that is neither loudness nor pitch [6]; it is what differs between the same tone performed on an acoustic guitar and on a flute. Perceptually, as referred above, timbre is one of the most important dimensions in a piece of music. Its importance relative to other musical dimensions can be easily understood from the fact that anyone can recognize familiar instruments, even without conscious thought, and people are able to do it with much less effort and much more accuracy than recognizing harmonies or scales.
Figure 2. The three features represented in time: top, the 40 MFCCs; middle, the chromagram; bottom, the rhythmogram.
According to [33], Mel-frequency cepstral coefficients (MFCCs) are a good model for the perceptual timbre space. The MFCCs are therefore of central importance to the segmentation task, and so they are extensively used. An SDM computed from MFCCs generally produces a very good representation of the music structure. Figure 2 (top) represents the 40 MFCCs in time.
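As an illustration, the following is a minimal sketch (in Python, whereas the system itself is implemented in Matlab) of computing 40 MFCCs and the corresponding SDM; the file name is a placeholder, and librosa and SciPy stand in for the toolboxes actually used in this work.

    import librosa
    from scipy.spatial.distance import cdist

    y, sr = librosa.load("song.wav")                    # placeholder input file
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)  # (40, n_frames)
    V = mfcc.T                                          # one feature vector v_i per frame
    # SDM(i, j) = d_s(v_i, v_j), cf. Eq. (1); Figure 1 uses the Euclidean
    # distance, while the experiments in Section II use the Manhattan distance.
    sdm = cdist(V, V, metric="euclidean")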
Pitch-related features are also important. In popular music, cover versions, which usually preserve harmony and melody while using a different set of musical instruments, thus altering the timbre of the song, can usually be accurately recognized by listeners. In the context of music structure analysis, chroma features are the most powerful representation for describing harmonic information [18]. Their most important advantage is their robustness to changes in timbre. Chroma refers to the 12 traditional pitch classes; the chroma representation is therefore a 12-dimensional vector, where each dimension corresponds to the content of the signal in the respective pitch class (Figure 2).
Rhythmic features are among the least used in music structural analysis, despite the perceptual cue identified in the study by Bruderer et al., "change in rhythm". In fact, Paulus and Klapuri noted that rhythmic information, in addition to timbre and harmonic features, provides useful information for structure analysis [21]. The rhythmic content of a musical signal can be described with a rhythmogram, as introduced by Jensen [11]. It is comparable to a spectrogram, but instead of representing the frequency spectrum of the signal it represents its rhythmic content (Figure 2, bottom).
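For intuition, here is a minimal sketch of a rhythmogram-like representation in the spirit of [11]: the windowed autocorrelation of an onset-strength curve, one column per analysis position. The window, hop, and lag values are illustrative assumptions, not the parameters used in this work.

    import numpy as np

    def rhythmogram(onset_env, win=512, hop=128, max_lag=256):
        cols = []
        for start in range(0, len(onset_env) - win, hop):
            seg = onset_env[start:start + win]
            seg = seg - seg.mean()
            # Autocorrelation of the local onset-strength curve.
            ac = np.correlate(seg, seg, mode="full")[win - 1:win - 1 + max_lag]
            cols.append(ac / (ac[0] + 1e-12))   # normalize by zero-lag energy
        return np.array(cols).T                 # (lags, time): periodicities over time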
B. Approaches to the Segmentation Problem

Paulus et al. [20] suggested dividing segmentation methods into three main sets: homogeneity-based, repetition-based, and novelty-based approaches. Homogeneity-based approaches consider the musical audio signal to be a succession of states, where each state refers to some part of the signal; these methods rely mainly on clustering algorithms and can also be referred to as "state" approaches [1], [4], [13], [24], [27], [28]. Repetition-based approaches consider that there are sequences of events that are repeated several times in a given piece of music; they can also be referred to as "sequence" approaches [8], [10], [15], [26], [29]. The third set, the novelty-based approaches, addresses the problem of detecting musical boundaries, and can be seen as a front-end for either (or both) of the other two, as its main objective is to determine the instants of segment change [7], [11], [35], [34].
C. Objective
The goal of this work is to perform structural segmentation on audio stream files, that is, to identify the instants of segment change, the boundaries between segments. The computed boundaries are then compared with manually annotated ones in order to evaluate their quality.

The rest of this document is organized as follows: in the next section we present our approach; sections III and IV present the evaluation of the results and the conclusions, as well as suggestions for future work.
II. METHOD
Our method relies on two observations: the note onsets and the novelty score.
We used four different onset detectors: one taken from [27], which is based on a beat tracker; one by Rosão [29], based on the spectral flux [2]; and two others using the MIRToolbox function mironsets() [14], one using the envelope of the signal and the other also using the spectral flux.
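For illustration, the following shows one way to obtain a spectral-flux-style onset grid in Python (the work itself uses MIRToolbox's mironsets() and the detectors from [27] and [29]); the file name is a placeholder.

    import librosa

    y, sr = librosa.load("song.wav")                        # placeholder input file
    env = librosa.onset.onset_strength(y=y, sr=sr)          # spectral-flux-like envelope
    frames = librosa.onset.onset_detect(onset_envelope=env, sr=sr)
    onset_times = librosa.frames_to_time(frames, sr=sr)     # candidate grid, in seconds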
The note onsets are used to create a grid of candidate instants of change, under the assumption that a segment can only start at a note onset, which is generally the case. This yields a vector of candidate instants. The decision on which of these are actual instants of segment change is then left to the novelty-score peaks.
Figure 3. Checkerboard kernel (k = 96).

The novelty-score function is determined, as previously mentioned, by applying a checkerboard kernel with radial smoothing along the diagonal of the SDM (Figure 3). The score is derived as follows:

N(i) = \sum_{m=-k/2}^{k/2} \sum_{n=-k/2}^{k/2} \left| r\left(C_k(m, n), SDM(i+m, i+n)\right) \right| \qquad (2)

where Ck denotes a Gaussian-tapered checkerboard kernel of size k, radially symmetric and centered on (0, 0), |·| denotes the absolute value, and r denotes the correlation coefficient, computed as follows:

r = \frac{\sum_m \sum_n (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\sum_m \sum_n (A_{mn} - \bar{A})^2 \; \sum_m \sum_n (B_{mn} - \bar{B})^2}} \qquad (3)

A and B represent the Gaussian-tapered checkerboard kernel matrix and the corresponding subset of the SDM, respectively, and Ā and B̄ are their scalar means.

This computation of N(i) is slightly different from the one presented in [5], and produced better final results. This can be justified by the fact that the computation of the correlation takes into account the mean values of both matrices, thus eliminating eventual noise.

The novelty score is computed for several SDMs: using 40 MFCC coefficients, using the chromagram, using the rhythmogram, and combining these.
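For concreteness, the following Python sketch shows one reading of Eqs. (2) and (3): the Gaussian-tapered checkerboard kernel is correlated with the k × k submatrix centered at each point of the SDM diagonal. The taper width and boundary handling are assumptions, and the function names are illustrative (the original implementation is in Matlab).

    import numpy as np

    def checkerboard_kernel(k):
        # Coordinates centered on (0, 0); k is assumed even (e.g. k = 96).
        idx = np.arange(k) - k / 2 + 0.5
        taper = np.exp(-(idx / (0.4 * k)) ** 2)      # radial Gaussian taper (width assumed)
        gauss2d = np.outer(taper, taper)
        sign = np.outer(np.sign(idx), np.sign(idx))  # +1 on-diagonal blocks, -1 off-diagonal
        return gauss2d * sign

    def novelty_score(sdm, k=96):
        # Correlate the kernel with the k x k window around each diagonal point.
        n = sdm.shape[0]
        half = k // 2
        C = checkerboard_kernel(k)
        N = np.zeros(n)
        for i in range(half, n - half):
            window = sdm[i - half:i + half, i - half:i + half]
            r = np.corrcoef(C.ravel(), window.ravel())[0, 1]  # Pearson r, cf. Eq. (3)
            N[i] = abs(r)
        return N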
The MFCCs are computed using the Auditory Toolbox [32], the chroma using the Chroma Toolbox [19], [17], [18], and the rhythmogram using the four onset detectors presented above [11]. The SDMs are computed using the Manhattan distance. Every feature is collected using a variable window size, according to ws = 1/(bpm/60), where the bpm (beats per minute) value is determined using the function mirtempo(), also from MIRToolbox. The idea underlying variable-size windows is to have windows proportional to the structure of the music, preventing the features of sound events from spreading to other frames.
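As a worked example of the window rule, ws = 1/(bpm/60) simply converts the beat period to seconds; the tempo and sample-rate values below are hypothetical (in the actual system the tempo comes from mirtempo()).

    # Beat-proportional analysis window: ws = 1 / (bpm / 60) = 60 / bpm seconds.
    bpm = 120.0                  # hypothetical tempo; mirtempo() provides it in practice
    ws = 1.0 / (bpm / 60.0)      # 0.5 s per window at 120 bpm
    sr = 44100                   # assumed sample rate
    win_samples = int(ws * sr)   # window length in samples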
The features are combined in three ways: using the SVD (Singular Value Decomposition), by summing SDMs, and by intersection of peaks. The features are combined in groups of two and three, making four different combinations, except for the intersection of peaks, where the features are combined in three ways: taking the peaks of each feature in turn as reference and comparing them with those of the other two, each peak that is repeated at least once within a threshold of 1.5 s is kept.
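A minimal sketch of this peak-intersection rule follows, assuming peak positions in seconds; the function name and structure are illustrative.

    import numpy as np

    def intersect_peaks(ref_peaks, other1, other2, tol=1.5):
        # Keep a reference peak if it reappears, within tol seconds,
        # in at least one of the other two features' peak lists.
        other1, other2 = np.asarray(other1), np.asarray(other2)
        kept = []
        for p in ref_peaks:
            if np.any(np.abs(other1 - p) <= tol) or np.any(np.abs(other2 - p) <= tol):
                kept.append(p)
        return kept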
The sum of SDMs is done before the computation of the novelty-score function, as follows:

SDM(M+R) = \alpha\,SDM(M) + SDM(R) \qquad (4)

SDM(C+R) = \beta\,SDM(C) + SDM(R) \qquad (5)

SDM(M+C) = SDM(M) + \sigma\,SDM(C) \qquad (6)

SDM(M+C+R) = \alpha\,SDM(M) + \beta\,SDM(C) + SDM(R) \qquad (7)
where each SDM's respective features are given in brackets; M, C, and R stand for MFCC, chromagram, and rhythmogram, respectively. The coefficients α, β, and σ are computed as follows:

\alpha = \frac{\mathrm{mean}(SDM(R))}{\mathrm{mean}(SDM(M))} \qquad (8)

\beta = \frac{\mathrm{mean}(SDM(R))}{\mathrm{mean}(SDM(C))} \qquad (9)

\sigma = \frac{\mathrm{mean}(SDM(M))}{\mathrm{mean}(SDM(C))} \qquad (10)
Here, the operation mean() determines the mean value of a matrix. Its purpose is to balance the terms of the sum, giving approximately the same weight to each one.
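A sketch of the weighted sum for Eqs. (6)-(10) follows; the SDMs are assumed to be NumPy arrays of equal shape, and the function names are illustrative.

    import numpy as np

    def combine_mcr(sdm_m, sdm_c, sdm_r):
        # Scale factors equalize the mean magnitude of each SDM (Eqs. 8-9).
        alpha = np.mean(sdm_r) / np.mean(sdm_m)
        beta = np.mean(sdm_r) / np.mean(sdm_c)
        return alpha * sdm_m + beta * sdm_c + sdm_r   # Eq. (7)

    def combine_mc(sdm_m, sdm_c):
        sigma = np.mean(sdm_m) / np.mean(sdm_c)       # Eq. (10)
        return sdm_m + sigma * sdm_c                  # Eq. (6)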
Finally, the SVD is computed for a concatenated feature vector, combining features in groups of two and three. This creates new feature vectors, which are then used to compute the SDM and to run the remainder of the method.
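A sketch of one reading of this SVD combination: per-frame feature vectors are stacked, and the leading singular directions give a new, reduced feature vector per frame. The number of retained dimensions (d) is an assumption.

    import numpy as np

    def svd_combine(features, d=12):
        # features: list of arrays, each (dims_i, n_frames), e.g. [mfcc, chroma]
        X = np.vstack(features)                        # concatenated feature matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return np.diag(s[:d]) @ Vt[:d]                 # (d, n_frames) reduced features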
Afterward, the novelty-score peaks of the SDMs are adjusted to the onsets. The last stage eliminates segments shorter than a pre-defined threshold (6 s, in our case). This is accomplished by peak selection on the novelty score, that is, by deleting potential instants that are too close together. A novelty score with the selected peaks is presented in Figure 4.

Figure 4. Novelty-score peak selection. Green crosses represent the selected peaks and red dashed lines represent the groundtruth annotation.
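A sketch of this final stage follows, assuming peak and onset positions in seconds. Snapping to the nearest onset and the 6 s minimum length follow the text; keeping the earlier of two close boundaries is a simplification (the method selects among novelty peaks, presumably favoring the stronger one).

    import numpy as np

    def select_boundaries(peak_times, onset_times, min_len=6.0):
        onsets = np.asarray(onset_times)
        # Snap each novelty peak to its closest note onset.
        snapped = sorted({onsets[np.argmin(np.abs(onsets - p))] for p in peak_times})
        boundaries = []
        for t in snapped:
            # Drop boundaries that would create a segment shorter than min_len.
            if not boundaries or t - boundaries[-1] >= min_len:
                boundaries.append(t)
        return boundaries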
III. EVALUATION AND DISCUSSION
In order to evaluate the automatic segmentation algorithm, a manual groundtruth segmentation has to be produced, and a measure of accuracy computed. Following the most common approach, e.g. [27], the precision (P), recall (R), and F-measure (F) are calculated to evaluate the success of the method. GT and A denote the instants of change of the groundtruth and the automatically generated boundaries, respectively; w determines how far two boundaries can be apart while still counting as the same (we used w = 1.5 s); F is the harmonic mean of P and R. They are calculated as follows:

P = \frac{|A \cap_w GT|}{|A|}, \qquad R = \frac{|A \cap_w GT|}{|GT|} \qquad (11)
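A sketch of this evaluation follows, assuming boundary times in seconds. Counting matches from each side separately is one common way to realize |A ∩w GT|; the exact matching strategy (e.g. one-to-one pairing) is not specified in the text.

    import numpy as np

    def prf(auto, gt, w=1.5):
        auto, gt = np.asarray(auto), np.asarray(gt)
        hits_a = sum(np.any(np.abs(gt - a) <= w) for a in auto)  # detections that hit GT
        hits_g = sum(np.any(np.abs(auto - g) <= w) for g in gt)  # GT boundaries found
        P = hits_a / len(auto) if len(auto) else 0.0
        R = hits_g / len(gt) if len(gt) else 0.0
        F = 2 * P * R / (P + R) if (P + R) else 0.0              # harmonic mean
        return P, R, F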
The groundtruth segments used are a subset of those used by Peiszer [27], as well as the groundtruth annotations. The results for the full corpus are presented in Table I. They are divided into three groups: the first includes the features alone, the second introduces the note onsets, and the last presents the results of feature combination.
Contrary to our expectations, using note onsets does not bring any improvement. As shown in Table I, the average F-measure for the four onset detectors used was almost the same as without onsets; the introduction of the onsets had no influence on the final accuracy values. One hypothesis for this failure is that there are too many note onsets in time, so some selection would be advisable. However, tests using a selection window (similar to the one used for peak selection) varying from ws to 16ws showed no improvement in the final accuracy. Note that the rhythmogram is different for each set of note onsets used, which in fact influences its accuracy substantially. The first result presented for the rhythmogram uses the onsets computed by Rosão [29].

The last set of results in Table I derives from experiments using combined features, one of the goals of this work. Combining features was unsuccessful relative to the results obtained with the MFCCs alone. Although on average the mixture of features failed to improve the final accuracy, in some songs it obtained better results than the MFCC features alone. The problem is that the behavior of the mixture is hard to predict, and therefore hard to control.
The biggest problem with this kind of approach is determining whether or not the peaks of the novelty score are real boundaries. In general, every boundary is represented by a peak in the novelty-score curve. In some cases the larger peaks (global maxima) are proper boundaries; in general, however, smaller peaks (local maxima) also represent actual boundaries. So the choice is between allowing a large number of detections, which would make R large and P smaller, or limiting the number of detections (using the onsets, for instance, or a threshold on the novelty score, or the n largest peaks), which makes P and R tend to be closer. We consider the second option to be the better one; however, in general it is very hard to achieve. The inclusion of a dynamic threshold, using a running average, was tested with no performance improvement.
Nevertheless, these results can be considered satisfactory, namely considering that the algorithm is based only on information retrieved from the signal, that the corpus contains songs from various styles, and that the algorithm presents a high number of degrees of freedom, making it difficult to decide when to stop seeking a better result.
The results obtained are difficult to extend to other genres of music; even for pop alone, the success rate obtained with this corpus is not guaranteed, because the analyzed songs strongly influence the results. For instance, better results were obtained for subsets of this set of songs after a large number of tests and parameter adjustments; however, they did not lead to a mean performance improvement. This shows the importance of the songs analyzed: a different set of songs could lead to different results, and therefore the corpus used is of great importance. This is particularly relevant when comparing results between different works. Our work can be compared with [27] and [34]. The first, using a corpus of 109 songs, obtained P = 0.58, R = 0.77, F = 0.66, although using w = 3 s for evaluation. The second obtained at most P = 0.33, R = 0.46, F = 0.38, using a corpus of 100 songs. When using a groundtruth threshold of 3 s, our method obtains F = 0.577 for the best setup, which is close to the state-of-the-art results. When using 0.5 s, F = 0.208, also at the state of the art as documented in the MIREX 2010 results (http://nema.lis.illinois.edu/nema_out/mirex2010/results/struct/mirex10/summary.html).
IV. CONCLUSION AND FUTURE WORK
The features used are an attempt to represent the most important musical dimensions: timbre, represented by the MFCCs; the tonal space (melody and harmony), represented by the chromagram; and the rhythmic space, represented by the rhythmogram. In addition, we used mixtures of features and note onsets in an attempt to improve the final results. The goal of this section is to present the conclusions of the work, followed by suggestions for future work.
Since the main idea underlying the use of multiple features is that each dimension complements the others, mixing the features was the logical step. However, mixing features proved to be more difficult than expected. Three methods were tested: intersection of peaks, sum of SDMs, and the SVD. None achieved an improvement of the average results relative to the features alone; all resulted in worse mean accuracy. This led us to conclude that, with the novelty-score approach, the inclusion and mixture of more features is not advantageous. The MFCCs obtained the best results, as can be seen in Table I. This confirms that the MFCCs are of prime importance for the task of segmenting music, meaning that, on average, they encode the information that is most useful for detecting the instants of change between segments. This holds only on average because, as seen, in some cases the rhythmogram or the chroma encodes the most useful information. This was the reason that led us to experiment with ways of selecting the features according to the song.
Another effort to improve the final average results was made using the note onsets to select boundaries. This approach proved ineffective: on average, it did not change the final results.
Future work should focus on note onsets, not only in terms of detection but mostly in terms of selection. If a good selection of potential onset candidates is made, the use of such onsets should improve the final average results. In fact, the note onsets used did not worsen the results; the results barely changed. This probably means that the note onsets were not sufficiently selective with respect to the novelty-score peaks: they are too close to each other and in that sense do not influence the final result. Moreover, one should try to improve the adjustment of the analysis window to the song, to avoid features spreading to other frames. Additional work on feature combination is also necessary, namely by experimenting with different methods of dimensionality reduction.
REFERENCES

[1] J.-J. Aucouturier and M. Sandler. Segmentation of musical signals using hidden Markov models. In Proc. of 110th Audio Engineering Society Convention, Amsterdam, The Netherlands, May 2001.
[2] J. P. Bello, L. Daudet, S. Abdallah, C. Duxbury, M. Davies, and M. B. Sandler. A tutorial on onset detection in music signals. IEEE Transactions on Speech and Audio Processing, 2005.
[3] M. J. Bruderer, M. McKinney, and A. Kohlrausch. Structural boundary perception in popular music. In Proc. of 7th Intl. Conf. on Music Information Retrieval, pages 198–201, Victoria, B.C., Canada, Oct. 2006.
[4] S. Chu and B. Logan. Music summary using key phrases. Cambridge Research Laboratory, Technical Report Series, CRL 2000/1, Apr. 2000.
[5] R. B. Dannenberg and M. Goto. Music structure analysis from acoustic signals. Compaq Computer Corporation, Cambridge Research Laboratories, Apr. 2005.
[6] R. Erickson. Sound Structure in Music. University of California Press, Berkeley, Los Angeles, London, 1975.
[7] J. Foote. Automatic audio segmentation using a measure of audio novelty. In Proc. of IEEE Intl. Conf. on Multimedia and Expo, pages 452–455, New York, N.Y., USA, Aug. 2000.
[8] J. Foote. Visualizing music and audio using self-similarity. In Proc. of ACM Multimedia, pages 77–80, Orlando, Fla., USA, 1999.
[9] E. Gomez. Tonal Description of Music Audio Signals. PhD thesis, UPF Barcelona, 2006.
[10] M. Goto. A chorus-section detecting method for musical audio signals. In Proc. of IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, pages 437–440, Hong Kong, 2003.
[11] K. Jensen. Multiple scale music segmentation using rhythm, timbre, and harmony. EURASIP Journal on Advances in Signal Processing, 2007. Article ID 73205.
[12] F. Kaiser and T. Sikora. Music structure discovery in popular music using non-negative matrix factorization. In Proc. of 11th Intl. Society for Music Information Retrieval Conference, ISMIR, 2010.
[13] M. Levy and M. Sandler. Structural segmentation of musical audio by constrained clustering. IEEE Trans. on Audio, Speech, and Language Processing, 16(2):318–326, Feb. 2008.
[14] O. Lartillot. MIRToolbox 1.3.2 - User's Manual. University of Jyväskylä, Finland, Jan. 2011.
[15] L. Lu, M. Wang, and H.-J. Zhang. Repeating pattern discovery and structure analysis from acoustic music data. In Proc. of Workshop on Multimedia Information Retrieval, pages 275–282, New York, N.Y., USA, Oct. 2004.
[16] N. C. Maddage. Automatic structure detection for popular music. IEEE Multimedia, 13(1):65–77, Jan. 2006.
[17] M. Müller, S. Ewert, and S. Kreuzer. Making chroma features more robust to timbre changes. In Proc. of IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, pages 1877–1880, Taipei, Taiwan, Apr. 2009.
[18] M. Müller. Information Retrieval for Music and Motion. Springer, 2007. ISBN 978-3-540-74047-6.
[19] M. Müller and S. Ewert. Chroma Toolbox: Pitch, Chroma, CENS, CRP. Intl. Society for Music Information Retrieval, 2011.
[20] J. Paulus, M. Müller, and A. Klapuri. Audio-based music structure analysis. In Proc. of 11th Intl. Society for Music Information Retrieval Conference, ISMIR, 2010.
[21] J. Paulus and A. Klapuri. Acoustic features for music piece structure analysis. In Proc. of 11th Intl. Conf. on Digital Audio Effects, pages 309–312, Espoo, Finland, Sept. 2008.
[22] J. Paulus and A. Klapuri. Music structure analysis using a probabilistic fitness measure and a greedy search algorithm. IEEE Trans. on Audio, Speech, and Language Processing, 17(6):1159–1170, Aug. 2009.
[23] G. Peeters. Deriving musical structure from signal analysis for music audio summary generation: "sequence" and "state" approach. In Computer Music Modeling and Retrieval, LNCS vol. 2771, pages 143–166. Springer Berlin / Heidelberg, 2004.
[24] G. Peeters, A. La Burthe, and X. Rodet. Toward automatic music audio summary generation from signal analysis. In Proc. of 3rd Intl. Conf. on Music Information Retrieval, pages 94–100, Paris, France, Oct. 2002.
[25] G. Peeters. Sequence representation of music structure using higher-order similarity matrix and maximum-likelihood approach. In Proc. of 8th Intl. Conf. on Music Information Retrieval, pages 35–40, Vienna, Austria, Sept. 2007.
[26] G. Peeters and E. Deruty. Is music structure annotation multi-dimensional? A proposal for robust local music annotation. In Proc. of 3rd Workshop on Learning the Semantics of Audio Signals, pages 75–90, Graz, Austria, Dec. 2009.
[27] E. Peiszer. Automatic audio segmentation: Segment boundary and structure detection in popular music. Master's thesis, Vienna University of Technology, Vienna, Austria, Aug. 2007.
[28] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. of the IEEE, Vol. 77, No. 2, Feb. 1989.
[29] C. Rosão. Som em Java. ISCTE, 2011.
[30] C. Rosão and R. Ribeiro. Trends in onset detection. In Proc. of the 2011 Workshop on Open Source and Design of Communication, pages 75–81, 2011.
[31] Y. Shiu, H. Jeong, and C.-C. J. Kuo. Musical structure analysis using similarity matrix and dynamic programming. In Proc. of SPIE Vol. 6015 - Multimedia Systems and Applications VIII, pages 398–409, 2005.
[32] M. Slaney. Auditory Toolbox - version 2. Interval Research Corporation.
[33] H. Terasawa, M. Slaney, and J. Berger. The thirteen colors of timbre. In Proc. of 2005 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Platz, N.Y., USA, Oct. 2005.
[34] D. Turnbull, G. Lanckriet, E. Pampalk, and M. Goto. A supervised approach for detecting boundaries in music using difference features and boosting. ISMIR 2007.
[35] G. Tzanetakis and P. Cook. Multifeature audio segmentation for browsing and annotation. In Proc. of 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 103–106, New Platz, N.Y., USA, Oct. 1999.
[36] R. Zhou and G. Kankanhalli. Precise pitch profile feature extraction from musical audio for key detection. IEEE Trans. on Multimedia, Vol. 8, No. 3, Jun. 2006.
Table I
AVERAGE RESULTS FOR DIFFERENT SETUPS

Method setup                      Features      P      R      F
--------------------------------  -----------  -----  -----  -----
Without note onsets               MFCC         0.377  0.622  0.455
                                  Chromagram   0.210  0.402  0.268
                                  Rhythmogram  0.250  0.451  0.311
Using note onsets:
Rosão [29]                        MFCC         0.367  0.614  0.446
                                  Chromagram   0.207  0.389  0.263
                                  Rhythmogram  0.249  0.452  0.311
Peiszer [27]                      MFCC         0.362  0.597  0.437
                                  Chromagram   0.200  0.369  0.253
                                  Rhythmogram  0.207  0.395  0.263
mironsets()                       MFCC         0.370  0.608  0.445
                                  Chromagram   0.206  0.385  0.261
                                  Rhythmogram  0.180  0.336  0.228
mironsets() using Spectral Flux   MFCC         0.369  0.612  0.447
                                  Chromagram   0.204  0.379  0.259
                                  Rhythmogram  0.206  0.387  0.260
Features combination (without note onsets):
SVD                               M+R          0.216  0.365  0.260
                                  C+R          0.217  0.371  0.262
                                  M+C          0.280  0.482  0.343
                                  M+C+R        0.243  0.411  0.293
Sum of SDMs                       M+R          0.312  0.538  0.381
                                  C+R          0.233  0.538  0.297
                                  M+C          0.312  0.441  0.319
                                  M+C+R        0.233  0.484  0.340
Intersection of peaks             M+C+R        0.338  0.399  0.344
                                  C+R+M        0.268  0.348  0.287
                                  R+M+C        0.268  0.399  0.360