On Constructing Efficient
Shared Decision Trees for
Multiple Packet Filters
Author:
Bo Zhang, T. S. Eugene Ng
Publisher:
IEEE INFOCOM 2010
Presenter:
Han-Chen Chen
Date:
2010/06/02
Outline
Introduction
Background
Construct Shared HyperCuts Decision Tree
Performance Evaluation
Introduction
Multiple packet filters serving different purposes may be deployed on a single physical router (e.g., firewalling, quality of service (QoS), virtual private networks (VPNs), load balancing, etc.).
The saved memory can be used to improve cache performance, to hold more packet filters, and to support more virtual routers.
In this paper, we will use the HyperCuts decision tree to represent
packet filters since it is one of the most efficient data structures
for performing packet filter matching.
Background (1/2)
Efficiency Metrics of the HyperCuts Decision Tree
1. Memory consumption
2. Average depth of leaf nodes: the average memory access time when searching.
3. Height of the tree: the worst-case memory access time when searching.
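A minimal sketch of computing these three metrics over a generic tree of nodes; the Node layout and per-node byte counts below are illustrative assumptions, not the exact HyperCuts memory model:

# Sketch: the three efficiency metrics for a decision tree.
class Node:
    def __init__(self, children=None, size_bytes=0):
        self.children = children or []   # empty list => leaf node
        self.size_bytes = size_bytes     # illustrative memory cost

def tree_metrics(root):
    total_bytes, leaf_depths, height = 0, [], 0
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        total_bytes += node.size_bytes
        height = max(height, depth)
        if not node.children:            # leaf: record its depth
            leaf_depths.append(depth)
        for child in node.children:
            stack.append((child, depth + 1))
    return {
        "memory": total_bytes,                                  # metric 1
        "avg_leaf_depth": sum(leaf_depths) / len(leaf_depths),  # metric 2
        "height": height,                                       # metric 3
    }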
Background (2/2)
Construct Shared HyperCuts Decision Tree (1/8)
1. Find subsets of packet filters whose shared HyperCuts decision tree is more efficient than a set of separate trees.
Given the pair-wise prediction on all possible pairs, a greedy heuristic algorithm classifies the packet filters into a number of shared HyperCuts decision trees.
2. Construct the shared HyperCuts decision tree for each subset. (A sketch of the overall pipeline follows.)
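A minimal sketch of this two-step pipeline; the clustering and tree-construction steps are passed in as functions, with concrete sketches of each appearing on the later slides:

def build_shared_trees(filters, predict_good_pair, cluster_fn, build_fn):
    # Step 1: group the filters into clusters using the pair-wise
    # good/bad predictions (e.g. the greedy pass sketched on slide 7/8).
    clusters = cluster_fn(filters, predict_good_pair)
    # Step 2: construct one shared HyperCuts tree per cluster
    # (e.g. the extended construction sketched on slide 8/8).
    return [build_fn(cluster) for cluster in clusters]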
Construct Shared HyperCuts Decision Tree (2/8)
1. Define good and bad pairs of packet filters.
Two packet filters form a "good" pair if their shared HyperCuts tree has lower memory usage and a lower average depth of leaf nodes than the two separate HyperCuts trees.
2. Use machine learning techniques to predict whether a pair of filters is good.
We use 3 types of machine learning techniques:
1. decision tree (DT)
2. generalized linear regression (GLR)
3. naive Bayes classifier (NBC)
Each technique takes as input factors affecting the efficiency of the shared tree, is trained on 10% of the packet filters, and then predicts good or bad for every packet filter pair (see the sketch below).
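As a concrete illustration, here is a minimal sketch using scikit-learn stand-ins for the three classifier families named above, with LogisticRegression playing the role of the generalized linear model; the feature matrix, labels, and the exact 10% split are synthetic placeholders, not the paper's data:

# Sketch: predicting good/bad filter pairs with stand-ins for the three
# classifier families on the slide. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((1000, 12))                 # one row of factors per filter pair
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # 1 = "good" pair, 0 = "bad" pair

# Train on 10% of the pairs, then predict the remaining 90%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.10, random_state=0)

for model in (DecisionTreeClassifier(),          # DT
              LogisticRegression(max_iter=1000), # GLR stand-in
              GaussianNB()):                     # NBC
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:",
          round(model.score(X_test, y_test), 2))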
Construct Shared HyperCuts Decision Tree (3/8)
Factors Affecting the Efficiency of the Shared Tree
1. Class-1 factors: simple statistical properties of a packet filter itself, including the size of the packet filter and the number of unique elements in each field.
2. Class-2 factors: characteristics of the constructed HyperCuts decision tree, including the memory consumption of the tree, the average depth of leaf nodes, the height of the tree, the number of leaf nodes, the number of internal nodes, and the total number of cuts on each field. (A sketch of extracting the Class-1 factors follows.)
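A minimal sketch of extracting the Class-1 factors for one filter; the five-field rule layout below is an illustrative assumption:

# Sketch: Class-1 factors for a single packet filter. A rule is modeled
# as a tuple of field values (src IP, dst IP, src port, dst port, proto).
def class1_factors(filter_rules):
    num_fields = len(filter_rules[0])
    return {
        "size": len(filter_rules),                  # rules in the filter
        "unique_per_field": [
            len({rule[j] for rule in filter_rules}) # distinct values in field j
            for j in range(num_fields)
        ],
    }

rules = [("10.0.0.0/8", "*", "*", "80", "tcp"),
         ("10.0.0.0/8", "*", "*", "443", "tcp"),
         ("0.0.0.0/0", "*", "*", "53", "udp")]
print(class1_factors(rules))
# {'size': 3, 'unique_per_field': [2, 1, 1, 3, 2]}

The Class-2 factors would come from the constructed tree itself, e.g. via the tree_metrics sketch on the Background slide.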
Construct Shared HyperCuts Decision Tree (4/8)
Construct Shared HyperCuts Decision Tree (5/8)
False positive rate (bad pairs mistakenly predicted to be good)
Construct Shared HyperCuts Decision Tree (6/8)
False negative rate (good pairs mistakenly predicted to be bad)
Construct Shared HyperCuts Decision Tree (7/8)
Clustering Packet Filters Based on Pair-wise Prediction
[Figure: packet filters A-H drawn as graph nodes with weighted edges from the pair-wise predictions; Sfilter holds the not-yet-clustered filters and Clusteri is the cluster being grown.]
Step 1: Sfilter: {A,C,D,E,F,G,H}, Clusteri: {B}
Step 2: Sfilter: {A,C,E,F,G,H}, Clusteri: {B,D}
Step 3: Sfilter: {A,E,F,G,H}, Clusteri: {B,C,D}
A sketch of this greedy pass follows.
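A minimal sketch of such a greedy pass over the predicted pair graph; the paper's exact selection rule is not reproduced on the slide, so seeding each cluster with the best-connected remaining filter and admitting filters that pair well with every current member is an assumption:

def greedy_clusters(filters, is_good_pair):
    remaining = set(filters)                    # the slide's Sfilter
    clusters = []
    while remaining:
        # Seed a new cluster with the filter having the most predicted
        # good pairs among the remaining filters.
        seed = max(remaining,
                   key=lambda f: sum(is_good_pair(f, g)
                                     for g in remaining if g != f))
        cluster = {seed}                        # the slide's Clusteri
        remaining.remove(seed)
        grew = True
        while grew:
            grew = False
            for f in list(remaining):
                # Move f into the cluster if it forms a good pair with
                # every filter already in the cluster.
                if all(is_good_pair(f, c) for c in cluster):
                    cluster.add(f)
                    remaining.remove(f)
                    grew = True
        clusters.append(cluster)
    return clusters

# With good pairs {B,C}, {B,D}, {C,D}, the pass matches the slide's
# outcome: B, C, D end up in one shared cluster, while A, E, F, G, H
# remain as singleton clusters.
good = {frozenset(p) for p in (("B", "C"), ("B", "D"), ("C", "D"))}
print(greedy_clusters("ABCDEFGH", lambda x, y: frozenset((x, y)) in good))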
Construct Shared HyperCuts Decision Tree (8/8)
We extend the original HyperCuts tree construction algorithm. If F1 and F2 share a HyperCuts decision tree:
1. Child limit: computed from N1 and N2, where N1 is the number of rules of F1 and N2 is the number of rules of F2.
2. Number of unique elements: u_j = (u1_j + u2_j) / 2, where u1_j and u2_j are the numbers of unique elements of F1 and F2 in dimension j.
3. The rest of the algorithm is the same as the original HyperCuts algorithm.
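A minimal sketch of the two adapted parameters. The original HyperCuts limits the number of children of a node holding N rules to spfac * sqrt(N); applying that form to the combined rule count N1 + N2 is an assumption here, since the slide does not reproduce the exact shared-tree formula:

import math

def child_limit(n1, n2, spfac=2.0):
    # n1: number of rules of F1 at this node; n2: same for F2.
    # Combining the counts as n1 + n2 is an assumed form.
    return spfac * math.sqrt(n1 + n2)

def merged_unique_elements(u1, u2):
    # Slide formula: u_j = (u1_j + u2_j) / 2 for each dimension j,
    # where u1[j] / u2[j] count the unique elements of F1 / F2 in field j.
    return [(a + b) / 2 for a, b in zip(u1, u2)]

print(child_limit(100, 44))                         # 2.0 * sqrt(144) = 24.0
print(merged_unique_elements([4, 2, 8], [2, 2, 4])) # [3.0, 2.0, 6.0]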
Performance Evaluation (1/4)
Memory consumption ratio: memory of the shared tree relative to the total memory of the two separate trees.
Average leaf depth ratio: average leaf depth of the shared tree relative to the separate trees.
Tree height ratio: height of the shared tree relative to the separate trees.
(A ratio below 1 means the shared tree is more efficient; see the sketch below.)
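A minimal sketch of these ratios, reusing the metric dictionaries from the tree_metrics sketch on the Background slide; the denominators (summed memory, the larger of the two depths and heights) are assumptions, since the slide does not reproduce the exact formulas:

def evaluation_ratios(shared, sep1, sep2):
    # shared, sep1, sep2: metric dicts as returned by tree_metrics.
    return {
        "memory_consumption_ratio":
            shared["memory"] / (sep1["memory"] + sep2["memory"]),
        "average_leaf_depth_ratio":
            shared["avg_leaf_depth"]
            / max(sep1["avg_leaf_depth"], sep2["avg_leaf_depth"]),
        "tree_height_ratio":
            shared["height"] / max(sep1["height"], sep2["height"]),
    }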
Performance Evaluation (2/4)
Performance Evaluation (3/4)
If we fix α at 1, we can reduce memory consumption by over 20% on average while increasing the average leaf depth by only 3%.
Performance Evaluation (4/4)
Computing time breakdown (in seconds) for each step in the proposed approach.