
Databased Intrinsic Weights of Indicators of Multi-Indicator
Systems and Performance Measures of Multivariate Rankings of
Systemic Objects
By G. P. Patil (1) and S. W. Joshi (2)
(1) Center for Statistical Ecology and Environmental Statistics, Department of
Statistics, The Pennsylvania State University, University Park, PA
(2) Department of Computer Science, Slippery Rock University, Slippery Rock,
PA 16057 USA
Based on the initial part of the inaugural keynote lecture given by the first author
at the International Conference on Recent Advances in Mathematics, Statistics,
and Computer Science held at the Central University of Bihar, Patna, Bihar,
India, during May 29-31, 2015.
This material is based upon work partially supported by the National Science
Foundation under Grant No. 0307010. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the agencies.
Technical Report Number
2015-0612 June 2015
Department of Statistics
The Pennsylvania State University
University Park, PA 16802
G. P. Patil Director
Distinguished Professor Emeritus
Tel: (814)883-2814, Fax: (814)863-7114
Email: [email protected]
http://www.stat.psu.edu/~gpp
http://www.stat.psu.edu/~hotspots
Databased Intrinsic Weights of Indicators of Multi-Indicator Systems and Performance
Measures of Multivariate Rankings of Systemic Objects *
G. P. Patil, Center for Statistical Ecology and Environmental Statistics, Department of Statistics,
Penn State University, University Park PA USA
Email address: [email protected] , [email protected]
S. W. Joshi, Department of Computer Science, Slippery Rock University, Slippery Rock, PA
USA
Email address: [email protected]
Abstract: In this paper, we discuss concepts, methods, and tools of partial order based
multivariate ranking of objects, leading to novel measures of performance of ranking
methods for given data sets/data matrices of objects and features (indicators). We also
develop novel intrinsic differential weights of relative importance of indicators, with
implications for their prioritization and subsequent selection status.
We also provide illustrative examples using a 25 x 3 data matrix with 25 objects and 3 indicators,
giving intrinsic relative weights of the indicators indicating their databased relative importance.
Further, we derive the rankings of the objects using different ranking methods constructing
multi-indicator object rank-scores, given by the weighted composite index, the comparability
weighted net superiority index, the MCMC-based weighted indicator cumulative rank frequency
distribution index, and the MCMC-based average rank index. Finally, the ranking performance
measures of these ranking methods are computed for the illustrative data matrices/data sets.
We conclude the paper with selected references and an extended bibliography.
1. Introduction: Multivariate nonparametric statistics, for purposes of inference on the
multivariate median, multivariate order statistics, and multivariate image reconstruction and
enhancement, is presently occupied with issues of multivariate ranking involving data depth
measures and affine invariance. See Donoho and Gasko (1992), Liu et al. (2006), Serfling (2006),
Mottonen et al. (1998), Zuo (2003), Zuo and Serfling (2000), Hardie and Arce (1990), and Tang et al. (1992).
Genome-wide association studies are presently occupied with issues of gene discoveries and
variable selection with oracle properties, among others. These issues involve multivariate ranking
of objects and variables/indicators. See Chiang et al. (2008), Phillips and Ghosh (2012),
Wittkowski et al. (2004, 2008, 2013), and Wittkowski and Song (2010).
Problems of this comparative nature are arising in various areas of inferential and exploratory
importance, such as surveillance geo-informatics, bio-geo-informatics, network assessments,
banking predictive analytics, drug development research, etc. See Bruggemann and Patil (2011),
Bruggemann et al. (2014), Myers and Patil (2012a, 2012b), Patil (2011, 2012), and Willett (2012).
*
Based on the initial part of the inaugural keynote lecture given by the first author at the
International Conference on Recent Advances in Mathematics, Statistics, and Computer Science
held at the Central University of Bihar, Patna, Bihar, India, during May 29-31, 2015.
As a result, an exciting field of study is emerging within the discipline of knowledge discovery,
data mining, and statistical learning: comparative knowledge discovery in multi-indicator
information fusion systems.
See Patil and Joshi (2014), Myers and Patil (2012a, 2012b), Bruggemann et al. (2014).
In this paper, we discuss concepts, methods, and tools of partial order based multivariate ranking
of objects, leading to novel measures of performance of ranking methods for data sets
(data matrices) of objects and their features (indicators). We also develop novel intrinsic
differential weights of relative importance of indicators, with implications for their
prioritization and subsequent selection status.
2. Preliminaries: To set the stage, we consider a multivariate data set in the form of an n x m data
matrix [xij], i = 1, 2, …, n; j = 1, 2, …, m, where the n rows correspond to n objects a1, a2, …, ai, …, an
and the m columns correspond to m indicators I1, I2, …, Ij, …, Im.
Objects may be entities, such as individuals, units, pixels, areas, regions, patients, genes, drugs,
documents, clients, products, or tools, with relevant characteristics/features as potential indicators
for some single or multiple outcomes, endpoints, concepts, or domains.
Indicators may be measurable characteristics / features of objects with common orientation
indicative of some un-measurable abstract/ latent concept for objects. For example, a larger
magnitude of an indicator will be indicative of a correspondingly larger magnitude of the latent
concept/ trait of an object. And vice versa.
As a simple example, consider the size of an individual as the abstract concept. Consider height,
weight, and volume of the individual as indicators of size, with an assumed common orientation of
positive monotonicity/positive correlations. Generally speaking, the larger the size, the larger the
indicator; the larger the indicator, the larger the size.
The three indicators/ indicator measurements may have three-dimensional elliptical distribution
with pairwise positive correlations.
We note that we thus have an m-dimensional data set consisting of n data points, with no
measurement column available for any response variable y.
The multivariate data set is usually a nonlinear partially ordered set (poset): not all pairs of
objects are comparable. For a two-indicator setup, the diagram in Figure 1 may be
suggestive. Interestingly, every object here induces its four quadrants, defined by the horizontal
and vertical lines passing through it. Every object in its first quadrant, where both indicators are
larger, is comparable to it, and together such objects are defined to make its "UpSet". Every object
in its third quadrant, where both indicators are smaller, is comparable to it, and together such
objects are defined to make its "DownSet". Clearly, every object in its second and fourth quadrants,
where the two indicators are in conflict, is incomparable to it.
Ranking amounts to linearizing the poset by ranking the objects with appropriate scalar
rank-scores consistent with the comparability in the data matrix. Rank-scores need to inherit the
comparabilities in the data set. Incomparable pairs are expected to become comparable in either
direction. We will see later that the UpSet and DownSet of an object help define a rank-score for
it.
Figure 1
3. Indicator relative importance weight vector: On which line is the linearized set to lie?
Without loss of generality, which axis passing through the origin is to be selected? What can be
said of the separations between successive objects when ranked? Projections on a ray through the
origin have been popular. The ray is determined by w = (w1, …, wm), where wj > 0 and the wj
sum to unity: a differential weight vector measuring the relative importance of the indicators for the
abstract concept. The projection is a fixed scalar multiple of what is popularly called the weighted
composite index with weight vector w.
The choice of w involves subjective trade-off/compensation among indicators. It becomes a sensitive
political issue among stakeholders. Reconciliation in view of data matrix evidence becomes a
practical challenge and a scientific/statistical opportunity.
Can we think of a databased w intrinsic to the data matrix? And relative to such a w and its
corresponding ray, can we think of alternative ways of computing appropriate rank-scores that
do not involve indicator trade-offs? And if we can think of several methods of rank-scores and
resultant rankings, is it possible to measure their individual performance levels to help find a best
method among them for the given data set? Interestingly, all of these are frontier questions that we
wish to address. And fortunately, we now have some initial answers that we wish to share on these
challenging issues of multivariate ranking, standing open over the past several decades.
4. Data Matrix based Intrinsic Differential Weight Vector wI for the Indicator Set to
Measure Relative Importance of Indicators:
In this paper, we will discuss, “Pairwise Object Comparisons, and Indicator Agreements among
Object Comparison Disagreements,” as a basis for the formulation.
Consider the multivariate zeta matrix: an n x n object-by-object comparability matrix. Each cell
entry is an m-variate bit (binary digit) string, such as 111…, 000…, or 101100…01, with the k-th bit
equal to 1 if ai >= aj on the k-th indicator, and 0 otherwise.
• A comparability cell has all 1's, or all 0's, in its bit string, indicating collective agreement
among the indicators.
• An incomparability cell has some 1's and some 0's in its bit string, indicating collective
disagreement among the indicators.
• For each incomparability cell, count for each indicator the number of agreements with the
collectivity of indicators. Add up for each indicator over all of the incomparability cells.
Normalize/unitize to give the intrinsic wI we are looking for.
• Incidentally, and importantly, this intrinsic wI also provides a powerful basis for
comparison and selection of indicators.
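The counting scheme in the bullets above can be sketched in code. The following is a minimal sketch; the function name is ours, and ties are handled with >= as a simplifying assumption, not the paper's convention:

```python
import numpy as np

def intrinsic_weights(X):
    """Per-indicator agreement totals over incomparability cells of the
    multivariate zeta matrix, normalized to the intrinsic weight vector wI."""
    n, m = X.shape
    A = np.zeros(m)                                # agreement totals A1..Am
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            bits = (X[i] >= X[j]).astype(int)      # m-variate bit of cell (i, j)
            if bits.all() or not bits.any():
                continue                           # comparability cell: skip
            ones = bits.sum()
            # each indicator agrees with every indicator sharing its bit value
            A += np.where(bits == 1, ones, m - ones)
    return A / A.sum()
```

For a hypothetical two-object, three-indicator matrix [[1, 2, 3], [2, 1, 1]], the single incomparable pair gives agreement totals (2, 4, 4) and hence weights (0.2, 0.4, 0.4).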
5. Conceptualizing and Computing the Performance Measure of a Ranking Method:
Consider the multivariate zeta matrix as before, but this time with each cell entry an (m+1)-variate
bit string: the first m bits as before, and the (m+1)-th bit corresponding to the ranking.
• For each incomparability cell, count for each indicator the agreement with the ranking.
Add up for each indicator over all the incomparability cells.
• Normalize/unitize to give the wR induced by the ranking R.
Define its performance measure PMR as the correlation (or a generalized correlation) corr(wI, wR).
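A minimal sketch of this computation follows; function names are ours, and the convention that rank 1 is the top object is an assumption carried over from the illustrative example later in the paper:

```python
import numpy as np

def ranking_induced_weights(X, rank):
    """wR induced by a ranking: for each incomparability cell, count each
    indicator's agreement with the ranking's (m+1)-th bit, then normalize."""
    n, m = X.shape
    A = np.zeros(m)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            bits = (X[i] >= X[j]).astype(int)
            if bits.all() or not bits.any():
                continue                            # only incomparability cells count
            r_bit = 1 if rank[i] < rank[j] else 0   # ranking's bit for cell (i, j)
            A += (bits == r_bit)
    return A / A.sum()

def performance_measure(wI, wR):
    """PMR = Pearson correlation between intrinsic and ranking-induced weights."""
    return np.corrcoef(wI, wR)[0, 1]
```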
6. Some Comparability Invariant/Partial Order based Ranking Methods:
Method 1: Weighted Composite Index for Rank-score: WCI.
See Bruggemann and Patil (2011), Patil and Joshi (2014).
Given a differential weight vector w = (w1, w2, …, wm) and the indicator values of an object
x = (x1, x2, …, xm), the weighted composite index for the object rank-score is given by the
weighted average of the indicator values, w1x1 + w2x2 + … + wmxm, which is equivalent to the
inner product
w.x = |w| |x| cos(w, x) = |w| × (projection of x on w).
For two objects 1 and 2 with indicator value vectors x1 and x2, writing d = x1 − x2, the sign of
w.d (w.d = 0, w.d > 0, or w.d < 0) determines whether their composite indexes are equal or
which of the two is larger.
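A small numerical sketch of the composite index and the role of the sign of w.d; the 4 x 2 data matrix and the weights here are hypothetical, chosen purely for illustration:

```python
import numpy as np

# hypothetical 4 x 2 data matrix and weight vector, for illustration only
X = np.array([[4.0, 5.0],
              [6.0, 3.0],
              [7.0, 7.0],
              [2.0, 2.0]])
w = np.array([0.6, 0.4])                    # differential weights, summing to 1

scores = X @ w                              # w.x for each object
ranks = (-scores).argsort().argsort() + 1   # rank 1 = largest composite index

# the sign of w.d for d = x1 - x2 decides which of two objects scores higher
d = X[0] - X[1]
assert (w @ d > 0) == (scores[0] > scores[1])
```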
An illustration in a two-indicator space:
Figure 2
Method 2: Comparability Weighted Net Superiority Index for Rank-score: CWNSI.
See Myers and Patil (2012a, 2012b), Patil (2015).
With the larger-the-better, smaller-the-worse protocol, the DownSet of an object x provides a
measure of its superiority and its UpSet provides a measure of its inferiority.
Let us define O(x) to be the cardinal size of the DownSet of x, and F(x) to be the cardinal size of
its UpSet, leading us to define
Rank-score(x) = [(O(x) − F(x))/(O(x) + F(x))] × [(O(x) + F(x))/(n − 1)] = Net Superiority × Comparability.
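A sketch of this rank-score, computing |DownSet| and |UpSet| by coordinate-wise comparison. Collapsing the product to (O − F)/(n − 1) is our simplification; it gives the same value and sidesteps 0/0 for fully incomparable objects:

```python
import numpy as np

def cwnsi_scores(X):
    """Comparability weighted net superiority rank-scores (sketch).
    O(x) = |DownSet(x)|, F(x) = |UpSet(x)|; the product
    net superiority x comparability collapses to (O - F)/(n - 1)."""
    n = X.shape[0]
    scores = []
    for i in range(n):
        O = sum((X[i] >= X[j]).all() for j in range(n) if j != i)  # |DownSet|
        F = sum((X[j] >= X[i]).all() for j in range(n) if j != i)  # |UpSet|
        scores.append((O - F) / (n - 1))
    return scores
```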
Method 3: MCMC based Average Rank for Rank-score: ARI.
See Bruggemann and Patil (2011).
This method attempts to construct a comparability invariant population of indicators/voters
from which the data matrix indicators/voters can be regarded as a random sample. For this
purpose, the method engages in an MCMC, sequentially producing comparability invariant
permutations of the objects, yielding what are called linear extensions, which assign ranks to the objects.
With the sequence of linear extensions, each object receives a sequence of ranks, giving a
sequence of its cumulative rank averages. The MCMC stops when all of the sequences of the
cumulative rank averages converge. The limiting rank averages of individual objects are then
defined to be their rank scores, providing the ranking of the objects due to this method.
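A Monte Carlo sketch in the spirit of this method, using a simple adjacent-transposition chain over linear extensions. The chain design, the convergence-free fixed step count, and the seed are illustrative stand-ins for the paper's MCMC, not its actual construction:

```python
import numpy as np

def average_rank_scores(X, steps=5000, seed=0):
    """Average-rank sketch: adjacent transpositions of incomparable
    neighbours walk over linear extensions of the poset, and each
    object's rank is averaged along the chain."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # any linear extension serves as a start; descending indicator sums work
    order = list(np.argsort(-X.sum(axis=1)))
    rank_totals = np.zeros(n)
    for _ in range(steps):
        k = int(rng.integers(n - 1))
        a, b = order[k], order[k + 1]
        incomparable = not ((X[a] >= X[b]).all() or (X[b] >= X[a]).all())
        if incomparable and rng.random() < 0.5:
            order[k], order[k + 1] = b, a   # result is still a linear extension
        for pos, obj in enumerate(order):
            rank_totals[obj] += pos + 1     # rank 1 = top of the order
    return rank_totals / steps
```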
Method 4: MCMC based Weighted Indicator Cumulative Rank Frequency Distribution for
Stochastic Rank-score: WICRFDI.
See Patil and Taillie (2004), Patil and Joshi (2014).
This method starts as Method 3 does, but instead of computing cumulative rank averages in the
course of the MCMC, it computes cumulative rank frequency distributions for each object, and the
MCMC stops when these cumulative rank frequency distributions converge for all objects. They
are comparability invariant under stochastic ranking and, in multiple steps, produce a ranking of
the objects.
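In the same spirit, a sketch that tallies cumulative rank frequency distributions along such a chain of linear extensions. Again the chain and step count are simplified stand-ins for the method's MCMC, and the final multi-step comparison of the distributions is not reproduced here:

```python
import numpy as np

def cumulative_rank_frequencies(X, steps=5000, seed=0):
    """Tally each object's rank frequencies along a chain of linear
    extensions; row i, column k of the result is the fraction of sampled
    extensions that place object i within the top k+1 ranks."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    order = list(np.argsort(-X.sum(axis=1)))   # a starting linear extension
    freq = np.zeros((n, n))
    for _ in range(steps):
        k = int(rng.integers(n - 1))
        a, b = order[k], order[k + 1]
        incomparable = not ((X[a] >= X[b]).all() or (X[b] >= X[a]).all())
        if incomparable and rng.random() < 0.5:
            order[k], order[k + 1] = b, a
        for pos, obj in enumerate(order):
            freq[obj, pos] += 1
    return freq.cumsum(axis=1) / steps
```

Objects whose cumulative curves dominate others' rank higher; where curves cross, the method applies the operator again in further steps, per Patil and Taillie (2004).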
7. Ranking Performance Measure: Illustrative Example
We will show the calculation for a 25 by 3 data matrix in some detail. First we need to compute
the databased intrinsic weights.
Consider the data matrix shown below in Table 1. Its Hasse diagram is in Figure 3.
Object    I1      I2      I3
a1        4.273   5.140   4.766
a2        4.630   4.601   4.645
a3        8.226   5.983   7.500
a4        7.044   5.885   7.247
a5        3.586   6.848   6.165
a6        5.245   6.643   4.699
a7        5.928   5.524   6.926
a8        6.237   6.966   8.543
a9        4.275   5.391   5.161
a10       6.109   6.775   7.527
a11       5.756   4.151   6.216
a12       5.410   4.639   3.901
a13       4.285   4.879   3.296
a14       4.602   5.137   4.369
a15       7.498   7.789   7.654
a16       5.065   5.243   6.806
a17       7.676   6.010   9.181
a18       6.503   6.261   4.105
a19       4.430   5.090   3.657
a20       3.783   4.204   3.280
a21       3.840   4.849   3.171
a22       3.335   5.249   2.570
a23       7.801   6.488   7.544
a24       5.580   5.140   4.371
a25       5.823   5.623   7.018
Table 1: Data Matrix
Figure 3: Hasse Diagram for Data Matrix in Table 1
The multivariate zeta matrix in its entirety is in Table 2. The multivariate zeta matrix has 25 main
columns for the 25 objects. Each main column is a group of four sub-columns. Of these, to compute
intrinsic weights we use at this time the three sub-columns with headings I1, I2, and I3 for the three
indicators. The fourth sub-column, with the heading r (for ranking), will be used in a later step to
compute the performance measure of a particular ranking of the objects. The matrix has 25 rows for
the 25 objects, as identified in the leftmost column. Entries in sub-columns are bits: 0 (zero) or 1 (one).
The entry in a sub-cell formed by the row corresponding to object ai and the sub-column
corresponding to indicator Ik for object aj is one if and only if the value of Ik for object ai is
greater than or equal to the value of Ik for object aj; otherwise it is zero. Object ai is comparable
with object aj if all entries in the group of three sub-columns are identical. If the entries are not all
identical and no indicator values for the two objects are equal, then the two objects are
incomparable. Cells corresponding to all pairs of incomparable objects are shaded in the
multivariate zeta matrix. Only these cells are used in the computation of intrinsic weights.
For each group of three sub-cells corresponding to a pair of incomparable objects, we count the
number of times the bit-value in a sub-cell occurs within the group of the three sub-cells. For the
current data matrix with three indicators this count is either 1 or 2. For each indicator such counts
are added up over all sub-cells corresponding to the indicator. The total for indicator Ij is denoted
by Aj, for agreement totals. For this illustrative example, we present the individual agreement counts
in Table 3. This table has 25 rows for the 25 objects and 25 main columns for the 25 objects. Each
main column has three sub-columns.
For example, the entries in the three sub-columns corresponding to the pair (a10, a3) are 1, 2, 2. This
is because the three entries in the corresponding sub-columns in Table 2 are 0, 1, 1. Here the I1-bit
agrees with itself only, and thus its agreement count is 1. The I2-bit agrees with the I3-bit, so the
agreement counts for I2 and I3 are 2 each. The matrix in Table 3 is symmetric, so we could have done
with either the upper or the lower triangular matrix. In fact, it may be mentioned here that in a
computer program much of the computation can be carried out without maintaining the matrices,
which are used here for clarity and conceptual purposes. Here A1, A2, and A3 are, respectively,
364, 344, and 412, adding to 1120, so that the intrinsic weights, which are proportional to A1, A2,
and A3, are 0.325, 0.307, and 0.368.
Remark: The multivariate zeta matrix (also the traditional zeta matrix) can be used to compute the
UpSets and DownSets needed to compute the ranking using CWNSI: for a given object, the number
of groups of sub-cells with all 1's in the column corresponding to the object (except the one
diagonally located) is the size of its UpSet, and the number of groups of sub-cells with all 1's in the
row corresponding to the object (except the one diagonally located) is the size of its DownSet.
To compute the performance measure of an arbitrary ranking of the objects, we need to see how
closely the individual indicators agree with the ranking. For a given pair (ai, aj), if both the
indicator and the ranking rank ai higher than aj, they agree; if both rank ai lower than aj, they also
agree. Otherwise the indicator and the ranking disagree. We keep the count of agreements in the
matrix shown in Table 4. For the specific example here, we use the ranking defined by the
weighted composite index based on the intrinsic weights (0.325, 0.307, 0.368) and assign higher
ranks (closer to 1) to objects with larger index scores.
These ranks are shown in Table 5.
Table 2: Multivariate Zeta Matrix
Table 4: Consensus Table
Object    I1      I2      I3      r
a1        4.273   5.140   4.766   17
a2        4.630   4.601   4.645   19
a3        8.226   5.983   7.500   5
a4        7.044   5.885   7.247   7
a5        3.586   6.848   6.165   12
a6        5.245   6.643   4.699   13
a7        5.928   5.524   6.926   9
a8        6.237   6.966   8.543   3
a9        4.275   5.391   5.161   16
a10       6.109   6.775   7.527   6
a11       5.756   4.151   6.216   14
a12       5.410   4.639   3.901   20
a13       4.285   4.879   3.296   22
a14       4.602   5.137   4.369   18
a15       7.498   7.789   7.654   2
a16       5.065   5.243   6.806   10
a17       7.676   6.010   9.181   1
a18       6.503   6.261   4.105   11
a19       4.430   5.090   3.657   21
a20       3.783   4.204   3.280   24
a21       3.840   4.849   3.171   23
a22       3.335   5.249   2.570   25
a23       7.801   6.488   7.544   4
a24       5.580   5.140   4.371   15
a25       5.823   5.623   7.018   8
Table 5: Ranks Induced by Intrinsic Weights
We use the fourth sub-column, with the heading r, in the group of four sub-columns in the matrix
shown in Table 2 to show how the ranking rates object ai compared with aj for all combinations of
pairs (ai, aj). If the ranking assigns ai a rank higher than it assigns aj, then the entry is 1; else it is 0.
The mutual agreement counts between the ranking method and the individual indicators are in Table 6.
The vector wR of the ranking method, obtained from the totals in the rightmost column, is (0.307,
0.253, 0.440), and the corresponding PMR is corr((0.325, 0.307, 0.368), (0.307, 0.253, 0.440)) =
0.99999.
It is of interest to use wR itself to construct a new composite index and measure the PMR of the
new ranking obtained this way. We can continue this process iteratively until two successive
rankings are identical. Without reproducing our calculations, we find that this iterative process
soon terminates with a stable ranking. Table 7 shows the iterative rankings.
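This iteration can be sketched as follows. The helper `induced_weights` restates the Section 5 computation; the function names, the rank-1-is-best convention, and the `max_iter` safeguard are our illustrative choices:

```python
import numpy as np

def induced_weights(X, rank):
    """wR from a ranking: indicator agreements with the ranking,
    counted over incomparability cells, then normalized."""
    n, m = X.shape
    A = np.zeros(m)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            bits = (X[i] >= X[j]).astype(int)
            if bits.all() or not bits.any():
                continue
            A += (bits == (1 if rank[i] < rank[j] else 0))
    return A / A.sum()

def iterate_to_stable_ranking(X, w0, max_iter=25):
    """Rank by the composite index, derive wR, re-rank, and stop when
    two successive rankings coincide."""
    w = np.asarray(w0, float)
    prev = None
    history = [w]
    while len(history) <= max_iter:
        rank = (-(X @ w)).argsort().argsort() + 1   # rank 1 = largest index
        if prev is not None and (rank == prev).all():
            break
        prev = rank
        w = induced_weights(X, rank)
        history.append(w)
    return w, prev, history
```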
It is also of interest to see the progression of the iterative sequence of wR graphically. Patil and
Joshi (2013) investigated equivalence classes of weight vectors with respect to ranking using the
composite index. For data matrices with three indicators, these equivalence classes can be shown
as a partition of the triangular plane w1 + w2 + w3 = 1 in three-dimensional Euclidean space. For
the present data matrix, the sequence of converging weight vectors is labeled by the letters P, Q,
R, S on the 'weight triangle', P being the initial and S the final vector (Figure 4). Points R and S
are very close to each other. The 112 intersecting lines divide the triangle into regions of
equivalent weight vectors, in the sense that composite indexes based on all weight vectors within
the same region produce identical rankings for the data matrix. The PMR for the final ranking is
given in Table 10.
Table 6: Agreements Between Ranking and Indicators
Iterated Ranking with Intrinsic Weights

Object   Indicator Values          Iterative Ranks
id       I1      I2      I3        0    1    2    3
a1       4.273   5.140   4.766     17   17   17   17
a2       4.630   4.601   4.645     19   19   18   18
a3       8.226   5.983   7.500     5    5    5    5
a4       7.044   5.885   7.247     7    7    7    7
a5       3.586   6.848   6.165     12   12   12   12
a6       5.245   6.643   4.699     13   14   14   14
a7       5.928   5.524   6.926     9    9    9    9
a8       6.237   6.966   8.543     3    3    3    3
a9       4.275   5.391   5.161     16   15   15   15
a10      6.109   6.775   7.527     6    6    6    6
a11      5.756   4.151   6.216     14   11   11   11
a12      5.410   4.639   3.901     20   20   20   20
a13      4.285   4.879   3.296     22   22   22   22
a14      4.602   5.137   4.369     18   18   19   19
a15      7.498   7.789   7.654     2    2    2    2
a16      5.065   5.243   6.806     10   10   10   10
a17      7.676   6.010   9.181     1    1    1    1
a18      6.503   6.261   4.105     11   13   13   13
a19      4.430   5.090   3.657     21   21   21   21
a20      3.783   4.204   3.280     24   24   24   24
a21      3.840   4.849   3.171     23   23   23   23
a22      3.335   5.249   2.570     25   25   25   25
a23      7.801   6.488   7.544     4    4    4    4
a24      5.580   5.140   4.371     15   16   16   16
a25      5.823   5.623   7.018     8    8    8    8
Table 7: Iterative Ranking Starting with Intrinsic Weights
Iteration#   Indicator Weights
0            0.325   0.307   0.368
1            0.307   0.253   0.440
2            0.298   0.245   0.457
3            0.302   0.239   0.459
Table 8: Iterative Intrinsic Weights
Figure 4: Regions of Equivalent Weights and
Convergence of Iterative Intrinsic Weights
Below we present the PMRs for all four methods discussed above for the 25 by 3 data matrix.
Table 9 contains the rankings by the methods and Table 10 contains the actual PMRs.
Ranks by various methods

Object   Method 1   Method 2   Method 3   Method 4
a1       17         17         20         17
a2       18         18.5       18         18
a3       5          4.5        4          5
a4       7          7          7          7
a5       12         12         12         10
a6       14         10         11         11
a7       9          8.5        8          9
a8       3          2          2          3
a9       15         15         15         14
a10      6          6          6          6
a11      11         16         16         15
a12      20         18.5       19         20
a13      22         23         23         22
a14      19         20         17         19
a15      2          1          1          4
a16      10         13         13         13
a17      1          4.5        5          1
a18      13         11         10         12
a19      21         22         21         21
a20      24         25         25         24
a21      23         24         24         25
a22      25         21         22         23
a23      4          3          3          2
a24      16         14         14         16
a25      8          8.5        9          8
Table 9:Rankings by Four Methods
Weights for Methods

Indicator            Intrinsic   Method 1   Method 2   Method 3   Method 4
I1                   0.325       0.302      0.305      0.329      0.291
I2                   0.307       0.239      0.307      0.292      0.274
I3                   0.368       0.459      0.388      0.379      0.434
PMR (corr. coeff.)   --          0.9999     0.9505     0.9886     0.9810
Table 10:Performance Measure for Rankings by Four Methods
8. Looking Forward. The illustrative example of this paper shows the potential for investigating a
variety of data matrices to examine computational and ranking patterns in the performance behavior
of the four ranking methods. It will also be worthwhile to investigate situations where the
features are variables, and not just indicators of common orientation. These situations are typical
in applications involving variously big data, and also in multivariate nonparametric statistics
involving multivariate ranking, the multivariate median, image reconstruction, etc.
References:
Bruggemann, R. and G. P. Patil. 2011. Ranking and Prioritization for Multi-indicator Systems:
Introduction to Partial Order Applications. Springer, New York. p 328.
Bruggemann, R., Carlsen, L., and J. Wittmann, Eds. 2014. Multi-indicator Systems and Modeling in
Partial Order. Springer, New York. p 437.
Chiang, A. Y., G. Li, Y. Ding, and M. D. Wang. 2008. A multivariate ranking procedure to
assess treatment effects. Technical Report, Eli Lilly and Company, Indianapolis, IN.
Diaconis, P. and R. L. Graham. 1977. Spearman’s footrule as a measure of disarray. JRSS B 39
262-268.
Donoho, D. L. and Gasko, M. 1992. Breakdown properties of location estimates based on
halfspace depth and projected outlyingness. Annals of Statistics 20 1803–1827.
Hardie, R. E. and Arce, G. R. 1990. Ranking in Rp and its use in multivariate image estimation.
SPIE volume 1247 Nonlinear Image Processing 13-27. Also in: IEEE Trans. On Circuits
and Systems for Video Technology. Vol. 1, No.2, June 1991 197-209.
Liu, R. Y., R. Serfling, and D. L. Souvaine. Eds. 2006. Data Depth: Robust Multivariate
Analysis, Computational Geometry and Applications (Dimacs Series in Discrete
Mathematics and Theoretical Computer Science)
Mottonen, J., Hettmansperger, T. P., Oja, H., and Tienari, J. 1998. On the efficiency of affine
invariant multivariate rank tests. Journal of Multivariate Analysis 66 118-132.
Myers, W. L. and G. P. Patil. 2012a. Multivariate Methods of Representing Relations in R for
Prioritization Purposes: Selective Scaling, Comparative Clustering, Collective Criteria,
and Sequenced Sets. Springer, New York. p 297.
Myers, W. L. and G. P. Patil. 2012b. Statistical Geoinformatics for Human Environment
Interface. CRC/ Chapman & Hall, New York. p 305.
Patil, G. P. 2011. Inaugural Keynote Address, UNEP Panel Workshop on Sustainability
Indicators, New Delhi, India.
Patil, G. P. 2012. Plenary Lecture, UNEP Panel Workshop on Green Economy Indicators,
Beijing, China.
Patil, G. P. 2012a. Invited Lecture on comparative knowledge discovery with partial order and
composite indicators ln multi-indicator information fusion systems. DIMACS Workshop
on Algorithmic Aspects of Information Fusion at Rutgers University
Patil, G. P. 2012b. Keynote Lecture on Partial Orders and Composite Indicators for Multivariate
Ranking in Multivariate Nonparametric Statistics at the International Workshop on
Partial Order Theory and Modeling held in Berlin, Germany.
Patil, G. P. 2015. Invited Keynote Inaugural Lecture on multivariate ranking in multi-indicator
systems: recent past, present, and near future. 2015 Annual Florida Chapter Meeting of
ASA, University of South Florida, Tampa, Florida.
Patil, G. P., W. L. Myers, and R. Bruggemann. 2014. Multivariate datasets for inference of order:
some considerations and explorations. In Multi-indicator Systems and Modeling in
Partial Order, R. Bruggemann, L. Carlsen, and J. Wittmann, Eds. Springer, New York. 13-46.
Patil, G. P. and S. W. Joshi. 2014. Comparative knowledge discovery with partial order and
composite indicators: multi-indicator systemic ranking, advocacy, and reconciliation.
In Multi-indicator Systems and Modeling in Partial Order, R. Bruggemann, L. Carlsen,
and J. Wittmann, Eds. Springer, New York. 107-146.
Patil, G. P. and C. Taillie. 2004. Multiple indicators, partially ordered sets, and linear extensions:
Multi-criterion ranking and prioritization. Environmental and Ecological Statistics
11:199-228.
Phillips, D. and D. Ghosh. 2012. A two-dimensional approach to large-scale simultaneous
hypothesis testing, using Voronoi tessellations. MS.
Serfling, R. 2006. Depth functions in nonparametric multivariate inference. DIMACS Series in
Discrete Mathematics and Theoretical Computer Science.
Tang, K. et al. 1992. Multivariate order statistic filters in color image processing. IEEE,
92CH3179-9/92 584-587.
Willett, P. 2012. Invited Lecture on fusing database rankings in similarity-based virtual screening.
DIMACS Workshop on Algorithmic Aspects of Information Fusion at Rutgers .
Wittkowski, K. M., E. Lee, et al. 2004. Combining several ordinal measures in clinical studies.
Statistics in Medicine 23(10): 1579-1592.
Wittkowski, K. M., V. Sonakya, et al. 2013. From single-SNP to wide-locus: genome-wide
association studies identifying functionally related genes and intragenic regions in small
sample studies. Pharmacogenomics 14(4): 391-401.
Wittkowski, K. M. and T. Song 2010. Nonparametric methods in molecular biology. Statistical
Methods in Molecular Biology. H. Bang, X. K. Zhou, H. L. Van Epps and M. Mazumdar. New
York, Springer: 105-154.
Wittkowski, K. M., T. Song, et al. 2008. U-Scores for Multivariate Data in Sports. Journal of
Quantitative Analysis in Sports 4(3): 7
Zuo, Y. 2003. Projection-based depth functions and associated medians. Ann Stat 31 1460-90.
Zuo, Y. and Serfling, R. 2000. General notions of statistical depth function. Ann Stat 28 461-82.