Learning the Parameters of a Multiple Criteria Sorting Method Based on a Majority Rule

Agnes Leroy¹, Vincent Mousseau², Marc Pirlot¹
¹ MATHRO, Faculté Polytechnique, U-Mons, Belgium ([email protected])
² LGI, Ecole Centrale Paris, France ([email protected])
ADT Conference, Rutgers, Oct. 26-28, 2011

Contents

1 Introduction
2 MR-Sort: a sorting method based on a majority rule
3 Learning a MR-Sort model
4 Empirical design and results
  Experiment 1: model retrieval
  Experiment 2: tolerance for error
  Experiment 3: idiosyncratic behavior
5 Conclusion

Sorting problem statement

Multicriteria sorting problem:
- alternatives evaluated on multiple criteria...
- ... to be assigned to (predefined) ordered categories,
- absolute rather than comparative evaluation.

Typical examples: granting bank credits, evaluating research projects, attributing merit grades to students, evaluating objects on an ordinal scale.

MR-Sort: a simplified Electre Tri method

1 Define the categories C1, ..., Cp+1 using limit profiles B = {b1, b2, ..., bp} on the criteria g1, ..., gm.
  (Diagram: categories C1, ..., Cp+1 delimited by profiles b0, b1, ..., bp, bp+1 on criteria g1, ..., gm.)
2 Compare a ∈ A to b1, ..., bp using a majority relation ≽.
3 Assign a to the highest category Ch such that a ≽ b_{h-1}.
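As an illustration, the assignment rule above can be sketched in a few lines of Python, using the additive representation of the majority relation introduced later in the talk (criterion weights summing to 1 and a majority threshold λ). All function and variable names here are hypothetical, not from the slides.

```python
# Hypothetical sketch of the MR-Sort assignment rule (names are illustrative).
# weights: one non-negative weight per criterion, summing to 1;
# lam: majority threshold in [0.5, 1];
# profiles: limit profiles b_1 <= ... <= b_p, each a vector of criterion values.

def outranks(a, b, weights, lam):
    """a is at least as good as profile b on a sufficient coalition of criteria."""
    return sum(w for ai, bi, w in zip(a, b, weights) if ai >= bi) >= lam

def mr_sort_assign(a, profiles, weights, lam):
    """Return a category index in 1..p+1: the highest C_h with a outranking b_{h-1}."""
    category = 1  # a always belongs at least to the lowest category C_1
    for h, b in enumerate(profiles, start=1):
        if outranks(a, b, weights, lam):
            category = h + 1  # a clears profile b_h, so it reaches at least C_{h+1}
    return category
```

For instance, with weights (0.3, 0.3, 0.4), λ = 0.6 and two profiles (10, 10, 10) and (15, 15, 15), the alternative (12, 16, 9) clears the first profile (coalition weight 0.6) but not the second, so it lands in the middle category.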
Learning an MR-Sort model from examples

Inference procedure:
- Input: assignment examples, i.e. alternatives that should be assigned to a given category.
- Output: values for the parameters of the sorting model.
- Inference procedure: an algorithm which learns the parameters of the sorting model from the assignment examples.

Our aim: an empirical investigation of the behavior of the inference procedure for the MR-Sort model.

Experimental questions

Q1 Model retrieval: when examples are assigned by a simulated MR-Sort model, does the learning method elicit a model "close" to the original one? What size of learning set is needed to obtain a "good approximation"?

Q2 Tolerance for error: when the learning set contains errors (5 to 15%), to what extent do these errors perturb the elicitation of the assignment model?

Q3 Idiosyncrasy: we assign examples using an additive value function and learn an MR-Sort model. Can we detect the change of model with the learning procedure? Can the models be discriminated on an empirical basis, i.e., on the sole evidence of assignment examples?
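The slides learn the parameters with a mixed-integer program (presented in the next sections); purely as an illustration of the inference procedure's input/output contract, here is a naive random-search sketch for the two-category case. Everything in it (the name fit_mr_sort, the fixed λ, the search budget) is a hypothetical toy, not the authors' method.

```python
import random

def fit_mr_sort(examples, n_criteria, lam=0.6, trials=2000, seed=0):
    """Toy inference procedure (not the authors' MIP): sample random weights and
    frontier profiles, keep the candidate re-assigning the most examples correctly.
    examples: list of (g, cat) pairs, g a vector of criterion values, cat in {1, 2}."""
    rng = random.Random(seed)

    def assign(g, b, w):
        # 2-category MR-Sort rule: upper category iff a sufficient coalition of
        # criteria reaches the frontier b
        return 2 if sum(wi for gi, bi, wi in zip(g, b, w) if gi >= bi) >= lam else 1

    lo = [min(g[i] for g, _ in examples) for i in range(n_criteria)]
    hi = [max(g[i] for g, _ in examples) for i in range(n_criteria)]
    best_model, best_score = None, -1
    for _ in range(trials):
        raw = [rng.random() for _ in range(n_criteria)]
        w = [r / sum(raw) for r in raw]                             # weights summing to 1
        b = [rng.uniform(lo[i], hi[i]) for i in range(n_criteria)]  # candidate frontier
        score = sum(assign(g, b, w) == cat for g, cat in examples)
        if score > best_score:
            best_model, best_score = (b, w), score
    return best_model, best_score
```

The score being maximized (number of correctly re-assigned examples) is the same quantity the MIP targets when the learning set is infeasible; the MIP, of course, searches the parameter space exactly rather than by sampling.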
MR-Sort [Bouyssou, Marchant 2007]

Notation:
- n criteria, N = {1, ..., n},
- Xi: the set of values on criterion i, X = ∏_{i=1}^{n} Xi,
- 2 categories: an ordered bipartition (X¹, X²) of X,
- for each criterion i, a partition (Xi¹, Xi²) of Xi,
- F: a family of "sufficient" coalitions of criteria (subsets of N).

Assignment rule: x = (x1, ..., xi, ..., xn) ∈ X² iff {i ∈ N | xi ∈ Xi²} ∈ F.

MR-Sort procedure

Further hypotheses: Xi ⊂ R, and the partitions (Xi¹, Xi²) of Xi (i ∈ N) are:
- compatible with the order on R,
- represented by bi, the smallest element of Xi².

Weights wi ≥ 0 (i ∈ N) and a threshold λ are defined such that
for F ⊆ N: F ∈ F iff Σ_{i∈F} wi ≥ λ (with Σ_{i∈N} wi = 1).

Learning a MR-Sort model

- Learning set A* = {a1, ..., aj, ..., a_na}, given as a bipartition (A*¹, A*²),
- two categories separated by a frontier b = (b1, ..., bn).

Define δij ∈ {0, 1}, for i ∈ N and aj ∈ A*, such that δij = 1 ⇔ gi(aj) ≥ bi and δij = 0 ⇔ gi(aj) < bi, linearized as
   M(δij − 1) ≤ gi(aj) − bi < M·δij,   δij ∈ {0, 1}.

Define cij such that cij = 0 ⇔ δij = 0 and cij = wi ⇔ δij = 1, where wi is the weight of criterion i:
   δij − 1 + wi ≤ cij ≤ δij   and   0 ≤ cij ≤ wi,
so that Σ_{i∈N} cij = Σ_{i: gi(aj) ≥ bi} wi.

Assignment constraints:
   aj → C2 ⇔ Σ_{i∈N} cij ≥ λ,
   aj → C1 ⇔ Σ_{i∈N} cij + ε ≤ λ.

Introducing slack variables:
   aj → C2 ⇔ Σ_{i∈N} cij = λ + α,
   aj → C1 ⇔ Σ_{i∈N} cij + ε + α = λ,
α ≥ 0 ⇔ the model conforms to the assignment examples.

Max α s.t.
the previous constraints.

The complete MIP:

   max α
   s.t.  Σ_{i∈N} cij + xj + ε = λ       ∀aj ∈ A*¹
         Σ_{i∈N} cij = λ + yj           ∀aj ∈ A*²
         α ≤ xj, α ≤ yj                 ∀aj ∈ A*
         cij ≤ wi                       ∀aj ∈ A*, ∀i ∈ N
         cij ≤ δij                      ∀aj ∈ A*, ∀i ∈ N
         cij ≥ δij − 1 + wi             ∀aj ∈ A*, ∀i ∈ N
         M·δij + ε ≥ gi(aj) − bi        ∀aj ∈ A*, ∀i ∈ N
         M(δij − 1) ≤ gi(aj) − bi       ∀aj ∈ A*, ∀i ∈ N
         Σ_{i∈N} wi = 1,  λ ∈ [0.5, 1]
         wi ∈ [0, 1]                    ∀i ∈ N
         cij ∈ [0, 1], δij ∈ {0, 1}     ∀aj ∈ A*, ∀i ∈ N
         xj, yj ∈ R                     ∀aj ∈ A*
         α ∈ R

Infeasible learning sets

Let γj = 1 if alternative aj is correctly assigned, γj = 0 otherwise:
   Σ_{i∈N} cij ≥ λ − M(1 − γj),   ∀aj ∈ A*²
   Σ_{i∈N} cij < λ + M(1 − γj),   ∀aj ∈ A*¹
Objective function: Max Σ_{aj∈A*} γj.

Empirical design

We run 10 instances of each of the problems obtained by varying the following parameters:
- two or three categories,
- three,
four, or five criteria,
- learning sets containing 10 to 100 alternatives.
MIPs are solved with CPLEX.

Experiment 1: model retrieval ("learnability")

Empirical questions: does the learning method elicit a model "close" to the original one? What size of learning set is needed to obtain a "good approximation"?

Varying the size of the model and the number of assignment examples, we evaluate:
- the % of assignment errors on a set of 10000 alternatives,
- the computing time.

Results:
- Assignment errors decrease as the learning set size increases.
- For a given learning set size, the % of errors increases with the number of parameters.
- To keep the error below 10%, we need 40 (resp. 70) examples for 2 (resp. 3) categories with 5 criteria.
- CPU time is below 10 s for 2 and 3 categories with up to 4 criteria and 100 examples; with 3 categories and 5 criteria it reaches 25 s for 100 examples, so CPU time can soon become unacceptable for larger sizes.
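Experiment 1's error measure (the % of assignment errors on a set of 10000 alternatives) amounts to estimating the disagreement rate between the original and the learned model. A minimal sketch for the two-category case, with hypothetical names and alternatives drawn uniformly on [0, 1] per criterion:

```python
import random

def disagreement_rate(model_a, model_b, n_criteria, n_test=10000, seed=1):
    """Estimate the proportion of alternatives assigned differently by two
    2-category MR-Sort models. Each model is (frontier b, weights w, threshold lam)."""
    rng = random.Random(seed)

    def assign(g, model):
        b, w, lam = model
        return 2 if sum(wi for gi, bi, wi in zip(g, b, w) if gi >= bi) >= lam else 1

    disagreements = 0
    for _ in range(n_test):
        g = [rng.random() for _ in range(n_criteria)]  # a random test alternative
        disagreements += assign(g, model_a) != assign(g, model_b)
    return disagreements / n_test
```

A model compared with itself gives 0% error; two models differing only in λ (e.g. one criterion suffices vs. all criteria required) disagree on every alternative that clears the frontier on some but not all criteria.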
Experiment 2: tolerance for errors

Empirical questions: to what extent do assignment errors perturb the elicitation of the assignment model? Are the corresponding elicitation programs harder to solve?

Varying the size of the learning sets, with 2 categories and 3 criteria, we evaluate:
- the maximal % of correct re-assignment of the examples,
- the % of assignment errors on a set of 10000 alternatives,
- the computing time.

Results:
- As the learning set size increases, the maximal % of correct re-assignment of examples decreases from a high value and asymptotically reaches a minimum; this asymptotic minimum corresponds to the % of correct assignments in the learning set.
- The % of assignment errors on a set of 10000 alternatives decreases as the learning set size increases.
- A limited number of errors does not strongly impact "learnability": for learning sets of more than 40 examples, errors only slightly deteriorate the ability of the model to restore the assignment of random alternatives.
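The noisy learning sets of this experiment can be produced by flipping the assigned category of a fixed proportion of the examples. A hypothetical helper for the two-category case (the slides do not specify their noise generator):

```python
import random

def corrupt(examples, error_rate, seed=0):
    """Flip the assigned category of roughly error_rate of the examples (2 categories).
    examples: list of (g, cat) pairs with cat in {1, 2}."""
    rng = random.Random(seed)
    noisy = []
    for g, cat in examples:
        if rng.random() < error_rate:   # introduce an assignment error
            cat = 3 - cat               # swap categories 1 and 2
        noisy.append((g, cat))
    return noisy
```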
Computing time in Experiment 2
- CPU time increases with the size of the learning set, for all proportions of errors.
- For large learning sets (more than 50 examples), the % of errors in the learning set significantly increases the CPU time.

Experiment 3: idiosyncratic behavior

Empirical questions: we define the learning set using an additive value function and learn an MR-Sort model. Can we detect the change of model with the learning procedure? Can the models be discriminated on an empirical basis, i.e., on the sole evidence of assignment examples?

Varying the size of the learning sets, with 2 or 3 categories, we evaluate:
- the maximal % of correct re-assignment of the examples,
- the computing time.

(Figures: re-assignment results for 2 categories and for 3 categories.)

Results:
- MR-Sort models are flexible enough to accommodate more than 95% (resp. 90%) of additive-sort examples with 2 (resp. 3) categories.
- It is therefore hard to detect, on the sole basis of assignment examples, which sorting model was used to generate the learning set.

Conclusion and further research

- Eliciting an MR-Sort model is highly demanding in terms of information.
- Parsimony in the choice of a model: the scarcity of available information should lead the analyst to avoid models involving many parameters.
- MR-Sort's expressive power is sufficient when the learning set contains a few dozen examples.
- Selecting informative assignment examples: developing a methodology for efficiently eliciting sorting or ranking models by learning.
- The experimental analysis of learning methods in MCDA is a subject that has not received enough attention so far.