
1. Introduction
A mobile ad hoc network (MANET) consists of mobile nodes that are connected dynamically in an arbitrary manner.
These nodes function as routers that discover and maintain routes to other nodes in the network. When two nodes are within
each other's transmission range, they can exchange packets directly; otherwise, communication between non-neighbor
nodes requires multi-hop protocols [14]. Multicast communication, which delivers a data stream from a single source
to multiple destinations through routing protocols, is widely used in information-sharing applications such as conferencing,
emergency services, and law enforcement. In this paper, we undertake the development of multicast protocols in
MANETs.
The design of multicast protocols in MANETs has received considerable attention, for example the Ad hoc Multicast
Routing Protocol Utilizing Increasing ID Numbers (AMRIS) [?], the Core-Assisted Mesh Protocol (CAMP) [?], the Ad hoc
Multicast Routing Protocol (AMRoute) [?], and Differential Destination Multicast (DDM) [?]. AMRIS is an on-demand protocol that constructs a group-shared multicast tree. CAMP supports multicasting by creating a group-shared mesh. AMRoute combines the advantages of both tree-based and mesh-based approaches by employing
mesh links to construct the multicast tree. DDM is proposed in an attempt to minimize the overhead of
maintaining a multicast tree/mesh in a frequently fluctuating environment. These protocols
have their own characteristics and are discussed in [?].
A relatively large number of multicast protocols use a core-based group-shared tree as the routing topology, such
as Lightweight Adaptive Multicast (LAM) [?] and Gupta's distributed core selection and migration protocol [9], since
the core-based architecture scales better than the source-based tree architecture and has less forwarding overhead
than the mesh-based architecture. A core-based group-shared tree is rooted at a core node that is responsible for distributing
packets to and from the group members. In this method, two issues need to be addressed: the selection of the core and
the construction of the multicast tree. For the dynamically changing network topology of MANETs, Gupta [9] designs
adaptive core selection and migration methods for the multicast tree. Since MANETs have limited bandwidth, the
bandwidth consumed for multicasting a packet needs to be conserved, which leads to constructing a minimum-cost multicast
tree that spans all members. Traditionally, the bandwidth consumed by transmitting a packet via the tree is evaluated
by the total weight of the edges. Under the assumption that each edge has unit weight [9], the bandwidth is minimized
by constructing a minimum-edge multicast tree that spans all members.
In MANETs, sending a packet consumes the most power [3]. This leads us to evaluate the bandwidth by the total
weight of the sender nodes, which include the core and the other non-leaves, when the local broadcasting operation is used
to multicast a packet. Under the assumption that each sender node has unit weight, and when the core is a non-leaf,
constructing a minimum non-leaf multicast tree minimizes the bandwidth consumed by all broadcast packets
forwarded from the core. Lim [?] also takes the number of broadcast packets as the cost factor of a
multicast tree and proposes flooding methods that determine which receivers forward the packets. However, when
the majority of nodes in the network are not multicast group members, these methods waste much bandwidth forwarding the
packets to all nodes. Thus, transmitting a packet via the multicast tree without flooding avoids this unnecessary
bandwidth.
The rest of this paper is organized as follows. In the next section, some definitions for the multicast tree are given. We
then show that the problem of finding a minimum non-leaf multicast tree is harder than MAX-SNP problems and admits
no polynomial-time approximation scheme unless P = NP; thus, a heuristic is designed in Section 3. The simulation
results are reported in Section 4, and finally, Section 5 concludes this paper.
2. Definitions and Notations
A MANET can be represented as an undirected connected graph G. Each node in V(G), whose size is denoted
|V(G)|, represents a host, and each edge (u, w) in E(G) implies that the hosts u and w are neighbors that can
communicate with each other. The multicast tree can be represented as a subtree T of G that satisfies V(T) ⊆ V(G)
and E(T) ⊆ E(G). We denote by path_T(u, w) the path in T between nodes u and w. We classify the nodes of T into
two types, white and black, and the related definitions are described as follows.
Definition 1. Given a tree T, W(T) denotes the set of white nodes in T and B(T) denotes the set of black nodes in T, where
W(T) ∪ B(T) = V(T) and W(T) ∩ B(T) = ∅.
Definition 2. Given two graphs G and G', G' is an induced subgraph of G iff G' is a subgraph of G and, for any two
nodes u, v in V(G'), there is no edge (u, v) in E(G) \ E(G').
Definition 3. Given a graph G and a subtree T of G, let G_T be the subgraph of G induced by V(T).
Definition 4. Given a graph G and a subtree T of G, for each node v in V(T), let N(v) be its neighbor tree set in
G_T, N_w(v) be its neighbor white set in G_T, and N_b(v) be its neighbor black set in T, where
N(v) = {u | (v, u) ∈ E(G_T), u ∈ V(T)}, N_w(v) = {u | (v, u) ∈ E(G_T), u ∈ W(T)}, and N_b(v) = {u | (v, u) ∈ E(T), u ∈ B(T)}.
Definition 5. Given a graph G and a subtree T of G, for each node v in T we define the value of v as follows:
v.value = |N_w(v)|, if |N_b(v)| = 0;
v.value = |N_w(v)| + 1, if |N_b(v)| = 1;
v.value = |N_w(v)| + 2, if |N_b(v)| > 1.
An example for nodes’ values is depicted in Fig. 1.
Definition 6. Given a graph G and a subtree T of G, we define some notations for a node v ∈ V(T) as follows:
1. Candidate of node v: Node u ∈ N(v) ∪ {v} is the candidate of node v if (u.value > w.value) or (u.value ≧ w.value and u.id < w.id) for all w ∈ (N(v) ∪ {v}) \ {u}.
2. Determinant node v: Node v must meet either of the following two conditions:
(a) Node v is a determinant node if v.value ≧ 3 and v is the candidate of node w for all w ∈ N_w(v) ∪ {v}.
(b) Node v is explicitly set as a determinant node.
Take Fig. 1 as an example, where alphabetical labels indicate ascending order of network ids. In Fig. 1a,
node D is the candidate of node F since the values of nodes D, E, and F are the same but D.id is the smallest.
Node B is the candidate of node B since B.value is larger than the values of nodes A, C, D, and E. Because
B.value ≧ 3 and B is the candidate for nodes A, B, C, D, and E, it is a determinant node. In Fig. 1b, if node B is explicitly set as a
determinant node, it is a determinant node regardless of the other condition.
[Figure 1 appears here: example trees in panels (a) and (b). Legend: black node in the tree; white node in the tree; node not in the tree.]
Figure 1. An example for nodes’ values. (a) Node B has 4 neighbor white nodes and its value is 4. Other values in
sequence are {A, C, D, E, F} = {1, 1, 2, 2, 2}. (b) Node F has two neighbor white nodes and one edge incident to one
black node, and then its value is 3. The values of {A, B, C, D, E, G, H} = {1, 2, 1, 2, 2, 1, 1}.
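To make Definitions 4-6 concrete, the following Python sketch computes a node's value and its candidate from neighbor information only. It is a minimal illustration under our own assumptions (dictionary-based neighbor sets and integer node ids), not the paper's implementation; the helper names node_value, candidate_of, and is_determinant are hypothetical.

# A minimal sketch of Definitions 5 and 6 (assumptions: nodes are integer
# ids; n_white[v] and n_black[v] are the sets N_w(v) and N_b(v) of Def. 4).

def node_value(v, n_white, n_black):
    # Definition 5: |N_w(v)| plus 0, 1, or 2 depending on |N_b(v)|.
    extra = 0 if len(n_black[v]) == 0 else 1 if len(n_black[v]) == 1 else 2
    return len(n_white[v]) + extra

def candidate_of(v, neighbors, value):
    # Definition 6.1: the node in N(v) ∪ {v} with the largest value;
    # ties are broken by the lowest id.
    pool = set(neighbors[v]) | {v}
    return max(pool, key=lambda u: (value[u], -u))

def is_determinant(v, neighbors, n_white, value):
    # Definition 6.2(a): v.value >= 3 and v is the candidate of every node
    # in N_w(v) ∪ {v}.
    if value[v] < 3:
        return False
    return all(candidate_of(w, neighbors, value) == v
               for w in set(n_white[v]) | {v})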
3. Locally Minimum Non-Leaf Multicast Tree
The multicast group members are distributed in the network; thus a multicast tree, which spans all group
members, may contain some non-member nodes, called forwarding nodes. Under the assumption that each sender has
unit weight, the MNLMT (Minimum Non-Leaf Multicast Tree) that spans arbitrary group members over an
undirected network takes minimum cost to distribute a packet from any non-leaf node. If the MNLMT problem could
be solved in polynomial time, the problem of finding an MNLST (minimum non-leaf spanning tree) that spans all nodes
could also be solved in polynomial time, since it is a special case of the MNLMT problem. However, the MNLST
problem is equivalent to the MLST (Maximum Leaf Spanning Tree) problem, which is not only NP-complete [8] but also
MAX SNP-complete [7]. This reveals that the MNLMT problem is at least as hard as the MLST problem and has no (1+ε)-approximation scheme, unless P = NP. We therefore need a heuristic to solve this problem.
Reducing the construction time is necessary for group communication; thus, the goal of our heuristic is to
reduce the number of non-leaves of a shortest-path multicast tree rooted at the core node, instead of constructing a
minimum-cost multicast tree from the beginning. We propose a distributed algorithm to dynamically adjust the tree
as the network topology varies, and hence a long-lived multicast tree can stabilize its cost when the network topology
is stable. Section 3.1 describes our main idea for locally minimizing the number of non-leaves, and the distributed
algorithm is introduced in the following sections.
3.1 Idea
The number of non-leaves has a great impact on the cost of the multicast tree, and a lower cost leads to better
performance for multicasting packets. A non-leaf of a rooted tree is responsible for linking other nodes to keep the tree
connected. If a reduction can turn some non-leaves into leaves while preserving the tree property, it is useful as an
adjusting method. By the definition of a tree, when the number of nodes in a tree is greater than two, there is at
least one non-leaf in the neighbor zone of each tree node. Thus, if a node has many non-leaves in its neighbor zone, it is
possible to reduce the cost by linking to these neighbors on the basis of the tree properties, as in Ex. 1. It is therefore reasonable
to choose one tree node to minimize the cost locally by the reduction.
Ex. 1. In Fig. 2, nodes A, B, C, D, E, and F are all tree nodes, and C, D, E, and F are non-leaves in Fig. 2a. It is easy
to see that a reduction can apply to A: A connects to B, C, D, E, and F and deletes the edges (B, C), (C, D),
(D, E), and (E, F). This reduction gains the largest number of leaves, as in Fig. 2b.
[Figure 2 appears here: panels (a) and (b). Legend: node in the tree.]
Figure 2. Spanning tree. (a) Node A has four non-leaves in its neighbor zone. (b) Node A links to its neighbors on the
basis of tree properties and reduces the cost.
[Figure 3 appears here: panels (a), (b), and (c). Legend: node in the tree.]
Figure 3. Spanning tree optimized by the reduction. (a) A spanning tree. (b) The reduction applies to node A and lets D
become a leaf. (c) The reduction applies to node B, but it yields no profit.
We can determine one node to which the reduction is applied in order to locally minimize the cost. If this
determined node has many neighbor tree nodes, it has a higher probability of minimizing the number of non-leaves than
others. However, when a tree node becomes a leaf by the reduction, it is needless to set it again, as in Ex. 2. This
compels us to design a mechanism to prevent duplicate setting. Thus, we classify the tree nodes into two types,
black and white; initially, all nodes are white, and black nodes are then induced by the reduction. To prevent
duplicate setting, our reduction never adds a new edge incident to a black node, and Ex. 3 describes this
scenario. Two rules follow from this classification:
1. A white node is allowed to be linked with a new edge.
2. A black node is not allowed to be linked with a new edge.
These rules are helpful in a distributed setting, and Def. 1 of Section 2 gives the formal definition of this tree classification.
Ex. 2. In Fig. 3a, if we take the reduction on node A or B, that is, let A or B connect itself to as many neighbors as
it can, we can minimize the tree cost. When the reduction is applied to A first, it links itself to nodes C, D, G, and
H, and the edges (C, D) and (D, H) are erased to keep the tree loop-free. Later, if the reduction is applied to B, it links to nodes
C, E, and F, and edge (A, C) is erased. However, node C has already become a leaf by the reduction at node A, and then it is
needless to set C as a leaf again.
Ex. 3. In Fig. 4a, initially all nodes are white. When the reduction applies to node A, as in Fig. 4b, the nodes A, C, D,
G, and H then become black. Later, even if the reduction applies to node B, it cannot set node C as a leaf again.
[Figure 4 appears here: panels (a), (b), (c), and (d). Legend: black node in the tree; white node in the tree.]
Figure 4. Spanning tree optimized by the reduction with classification. (a) A spanning tree. (b) The reduction applies
to node A from (a). (c) The reduction applies to node B from (a). (d) The reduction applies to node A from (c).
Determining which node the reduction is applied to is the key factor in reducing the cost of the tree.
Take Fig. 4a as an example: when node A is determined, the cost of the tree is 4, as in Fig. 4b. When node B is
determined, the cost is 6, as in Fig. 4c. If node A is determined later in Fig. 4c, the cost is 4 in Fig. 4d, which shows
that node A is a better choice than node B in the beginning, since the former uses fewer reductions than the latter. The
choice of determined nodes affects the number of reductions, and thus we use a value to represent the
importance of each tree node and determine these nodes in descending order on demand.
The value of a tree node is based on the above classification, and the largest value implies the greatest
possibility of minimizing the cost locally. Since the rules of the classification state that no new edge is added
incident to a black node, the value of a tree node does not involve black nodes that have no tree edge to it. In Fig.
4b, node B has no tree edge incident to black node C, and we therefore only take the two neighbor white nodes E and F into
consideration for the value of B. The other rule of the classification indicates that the value of a tree node
must include the number of neighbor white nodes, since these nodes are connectible. We can then see that the
value of node B in Fig. 4b is 2. However, a value of 2 has little possibility of minimizing the number of non-leaves,
because such a node is usually used as a connector, like node B between nodes E and F in Fig. 4b. A value W that is
greater than 2 can reduce the cost by (W - 2) nodes, so the value 3 is the threshold. When a node's value is greater than or
equal to this threshold, it has a good chance of minimizing the number of non-leaves. In Fig. 4a, node B has value 3,
and if it is determined to do the reduction, the cost can be reduced by one, as in Fig. 4c. When a tree
node has tree edges incident to black nodes, these edges contribute at most 2 to its value, since these edges are used for
connection. In Fig. 4d, node B has three edges incident to black nodes, but it cannot minimize the cost by applying the
reduction; that is, it is suited to be a non-leaf but the reduction is useless for it. Thus, it is at most a connector with
value 2, which does not satisfy the threshold. We give the formal value definition in Section 2 and show how to merge the
reduction into a distributed algorithm in the next section.
3.2 Distributed Algorithm
The distributed algorithm runs at each node of the tree, which is a shortest-path multicast tree rooted at the core
node, and these nodes are responsible for locally adjusting this tree as the network topology varies. In a distributed
setting, each tree node only knows its neighbors; thus, our distributed algorithm only uses neighbor information to
determine which node the reduction should be applied to in order to locally minimize the cost. By the idea of the reduction, the
node with the largest value is the most likely to minimize the cost of the multicast tree. Thus, a node that is determined to do
the reduction, called a determinant node hereafter, has the locally largest value; that is, its value is the largest
in its neighbor zone and it has the greatest possibility of minimizing the cost. However, if some nodes apply the
reduction without negotiation, owing to their limited information, a cycle may be formed; an example is depicted in Ex. 4.
An election model is adopted in our distributed algorithm in an attempt to resolve this condition.
Ex. 4. In Fig. 4a, node C knows the values of its neighbor tree nodes, including A, B, and D, and realizes that it is not
a determinant node since the value of A is larger than its own, that is, 4 is larger than 3. Nodes A and B are then considered
determinant nodes by their locally maximal values. Suppose the reduction applies to nodes A and B simultaneously, that
is, node A links itself to nodes C, D, G, and H, and node B links itself to nodes C, E, and F. A closed path (cycle) {C,
A, G, F, B, C} is formed, since nodes A and B do not know about each other and it is thus hard for them to come to a
compromise.
In the election model, every tree node v has only one vote at a time, which it uses to choose a candidate node that has the largest value according to its
local knowledge. After this candidate becomes a determinant node, node v can elect the next candidate for
another reduction. The determinant node, which is determined to do the reduction, is allowed to link itself to all
neighbor white nodes; that is, it is voted for by itself and all its neighbor white nodes. Thus, this election model decides
which node can be the determinant node and avoids the unnecessary cycles. Ex. 5 describes this behavior precisely.
Ex. 5. By the election model, node C in Fig. 4a elects node A as its candidate, since the value of node A is the maximum
among the nodes in C's neighbor zone. It can be seen that node A is a determinant node, since it is elected by itself and
all its neighbor white nodes. At this time, the reduction can apply to node A, as in Fig. 4b. Later, although node B is voted for
by nodes C, E, and F, the reduction cannot apply to B, since B has no tree edge incident to black node C and the
value of B is then 2, which is less than the threshold value.
The election model may lead to a deadlock if a candidate is selected arbitrarily among neighbor tree
nodes with equal maximum values. Take a 4-clique as an example: when the clique nodes A, B, C, and D are all group
members, their candidates may be (B, C, D, A), respectively, since their values are the same. Thus, a circular wait is
formed and nothing will be done. An explicit choice of candidate avoids this deadlock. The mobile nodes in a
MANET have unique ids that can be used to meet this requirement: when some neighbor tree nodes have the same
maximum value, we choose the single node with the lowest id among them. A deadlock of the
election model is thus avoided thanks to the unambiguous ids.
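As a small illustration of this tie-break, the snippet below (reusing the candidate rule of Def. 6.1; the helper name elect and the example values are our own) shows that in a 4-clique with equal values every node elects the node with the lowest id, so no circular wait can form.

# Hypothetical check of the id tie-break in a 4-clique with equal values.
def elect(v, neighbors, value):
    pool = set(neighbors[v]) | {v}
    return max(pool, key=lambda u: (value[u], -u))   # lowest id wins ties

clique = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
value = {v: 3 for v in clique}
print([elect(v, clique, value) for v in clique])     # prints [1, 1, 1, 1]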
We can use the election model to choose the determinant nodes to which the reduction is applied to minimize
the cost of the multicast tree. A determinant node, which can link itself to neighbor white nodes directly, is responsible for
locally minimizing the tree topology. In other words, it must be a candidate of itself and must also be elected by all its neighbor
white nodes. After completing the reduction, if this determinant node remains responsible for the same job, it may still
reduce the cost in the dynamic case, because a white node may move into its neighbor zone. An example is Fig.
4b, where node B may move into the neighbor zone of node A; when A can link itself to node B and erase edge (B,
F), the cost is reduced to 3. To formalize these distributed operations, we strictly define the mentioned notions in
Def. 6 of Section 2.
Our distributed algorithm, which locally minimizes the cost via determinant nodes chosen by the above
election model, is described in Fig. 5. In Step 1, every tree node v is initialized. In Step 2, if a determinant node u is in
node v's neighbor zone, u can link to v to locally minimize the cost only when v is white. When node v is
linked, it must remove a correct edge to keep the tree loop-free, which is discussed in Section 3.4.3. The algorithm then goes to Step 3 to check whether
node v is a candidate of itself. If it is, it waits to become a determinant node, that is, it waits for its
neighbor white nodes to elect it. If the timer T1 expires, there may be some neighbor white nodes voting for
others, and v is not suited to be a determinant node at this time, so it goes back to Step 2 to check again.
Otherwise, once it becomes a determinant node, it uses the reduction to minimize the tree cost. If node v is not a
candidate of itself, it notifies its candidate that it may be linked in Step 4; that is, node v votes for its candidate. The
user-defined timers T1 and T2 prevent a node from blocking itself and are also employed to automatically
detect the changes of a fluctuating environment. In Step 5, node v has been determined and is allowed to use the reduction to
locally optimize the tree. Illustrative examples and some issues concerning our distributed algorithm are presented in the
following sections.
For each tree node v:
Step 1. Initially, node v is white. Go to the next step.
Step 2. If (node v is white) and (it can be linked by a determinant node in its neighbor zone) then
Node v is linked by that node and keeps the tree loop-free, and then becomes black.
Go to the next step.
Step 3. If (node v is a candidate of itself) then
It waits for some time T1 until it becomes a determinant node.
When the timer expires, go to Step 2.
When it becomes a determinant node, go to Step 5.
Else
Go to the next step.
Step 4. Notify node v's candidate node, and wait for some time T2.
When the timer expires, go to Step 2.
Step 5. Since node v has become a determinant node, it links itself to all neighbor white nodes and sets all
related nodes black. This determinant node will continue to be used to reduce the cost as the tree topology varies.
Figure 5. Distributed algorithm.
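For concreteness, the sketch below shows the reduction of Step 5 from a centralized point of view, on adjacency-set dictionaries G (the network) and T (the current tree); in the actual protocol the determinant node performs these link changes through messages, and the removed edge is found as described in Section 3.4.3. The function names reduce_at and tree_path, and the breadth-first search used to find the old tree path, are our own illustrative choices.

from collections import deque

# Centralized sketch of the reduction (Step 5 of Fig. 5). G and T map each
# node id to the set of its neighbors in the network and in the tree.
def tree_path(T, s, t):
    # Breadth-first search over the current tree edges; returns s ... t.
    parent = {s: None}
    queue = deque([s])
    while queue:
        x = queue.popleft()
        if x == t:
            break
        for y in T[x]:
            if y not in parent:
                parent[y] = x
                queue.append(y)
    path = [t]
    while path[-1] != s:
        path.append(parent[path[-1]])
    return path[::-1]

def reduce_at(v, G, T, color):
    # The determinant v links itself to every neighbor white tree node; each
    # new edge (v, u) replaces the old tree edge incident to u on path_T(v, u).
    for u in list(G[v]):
        if u not in T or color[u] != 'white':
            continue
        if u not in T[v]:
            w = tree_path(T, v, u)[-2]        # the tree neighbor of u on the path
            T[u].discard(w); T[w].discard(u)  # remove (u, w) to stay loop-free
            T[v].add(u); T[u].add(v)          # add the new edge (v, u)
        color[u] = 'black'                    # linked nodes become black
    color[v] = 'black'                        # the determinant itself becomes black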
3.3 Illustrative Examples
The following examples show how our algorithm works. The algorithm is applied to a static network in
Ex. 6, and the dynamic case is discussed in Ex. 7.
Ex. 6. In Fig. 6, our algorithm is applied to a spanning tree, and alphabetical labels indicate ascending order of
network ids. Suppose the joined nodes have been initialized and are all white in Fig. 6a. Node A is a determinant node
since it is elected by nodes B, C, D, G, and H, and its value is 5, which satisfies the threshold. Node N is also a
determinant node since its value is 4 and its network id is lower than that of node Q. These two nodes can use the reduction
to locally minimize the tree cost. Fig. 6b shows the multicast tree modified by the reduction. The values of the nodes have
changed, and 2 edges are added and 2 edges are erased to keep the tree loop-free. The values of {D, H, K, Q, R, S, U} = {3, 3, 3,
3, 3, 3, 3}, and D, K, and Q are determinant nodes at this time. In Fig. 6c, the values of the nodes change again, and no
value reaches the threshold value 3. The final graph is shown in Fig. 6d.
[Figure 6 appears here: panels (a), (b), (c), and (d). Legend: black node; white node.]
Figure 6. The tree optimized by our algorithm.
Ex. 7. Let us see how the distributed algorithm works with varying network topology in Fig. 7, where node A is the core
node. In Fig. 7a, node D approaches node C. In Fig. 7b, node D can see node C. In Fig. 7c, nodes B and D see each
other, and B.value is 3. By our distributed algorithm, the reduction is applied to node B in Fig. 7d.
[Figure 7 appears here: panels (a), (b), (c), and (d). Legend: black node in the tree; white node in the tree.]
Figure 7. The movement of nodes. (a) D moves. (b) A new edge (C, D) exists in the network. (c) A new edge (B, D)
exists in the network. (d) Optimize the tree.
3.4 Information Acquiring and Tree Maintenance
This section discusses some details of our distributed algorithm, such as how to acquire neighbor
information and how to maintain the rooted multicast tree.
3.4.1 Message Switching
Exchanging information in a MANET is a common way [15] to acquire new knowledge, since every mobile node
needs to periodically confirm with its neighbors. The distributed algorithm only needs to maintain or update its
neighbor information by doing the following:
1. Every tree node v exchanges its value with each node w in the open neighbor tree set N(v).
2. Each node keeps or updates the received information.
By this acquiring mechanism, when the value of a node changes, its neighbor nodes can obtain the new
information for the decisions of our distributed algorithm.
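A minimal sketch of this exchange is given below, assuming each node periodically sends a small hello message carrying its id and current value and stores what it hears in a neighbor table; the message format and names (Hello, neighbor_values, on_hello) are hypothetical.

# Hypothetical periodic value exchange (Section 3.4.1).
from dataclasses import dataclass

@dataclass
class Hello:
    sender_id: int
    value: int

def on_hello(neighbor_values, msg):
    # Keep or update the received information for later election decisions.
    neighbor_values[msg.sender_id] = msg.value

A node would broadcast Hello(its id, its current value) once per beacon interval and feed each received message to on_hello.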
3.4.2 The Structure of Rooted Tree
Our algorithm works on a multicast tree rooted at the core node, and thus we suppose that each tree node records
a Parent_Link to its parent node and Children_Links to its child nodes, and that these links are bidirectional. The
Parent_Link of the core is null. Packets are forwarded via the rooted multicast tree that is maintained by this link
structure. However, when a new edge is added by the reduction or the core moves, these links may be altered. We
address these problems in Section 3.4.3 and Section 3.4.4, respectively.
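The sketch below illustrates one possible representation of this link structure and how a data packet could be forwarded over it (to the parent and all children except the link it arrived on); the dictionary-based representation and the function forward are our own assumptions, not a prescribed format.

# Hypothetical representation of the rooted-tree links (Section 3.4.2).
# parent_link[v] is v's parent (None for the core); children_links[v] is the
# set of v's children. Both directions of each link are recorded.
def forward(v, packet, came_from, parent_link, children_links, send):
    # Forward a multicast packet over the tree, skipping the incoming link.
    for nxt in ({parent_link[v]} | children_links[v]) - {None, came_from}:
        send(nxt, packet)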
3.4.3 Loop Free
When a new edge is added, we must consider how to remove a correct edge to keep the tree loop-free. Given a graph
G and a tree T of G, let v, u ∈ V(T) be two distinct nodes with edge (v, u) ∈ E(G) but (v, u) ∉ E(T). Suppose that node v
links itself to node u. There must be a node w such that edge (u, w) is in path_T(u, v). When edge (v,
u) is added, we can remove edge (u, w) to keep the tree loop-free. How to find the edge (u, w) is the main concern, and we
propose two methods to solve it using the rooted tree structure mentioned above.
Loop Detection:
When node v links to node u, they can put each other's link into their Children_Links. Node v is responsible for sending a
special packet toward node u via the current topology, excluding edge (v, u). When node u receives the packet, it knows
which link the packet came from. If the packet comes from the parent of node u, it deletes the link with its parent
and sets its Parent_Link to node v. Otherwise, if the packet comes from a child of node u, it deletes the link with this
child w and notifies the nodes in path_T(w, v) to reset their Parent_Link. These situations are described in Ex. 8.
Ex. 8. In Fig. 8, Fig. 8a and Fig. 8c are two examples of the mentioned cases. In Fig. 8b, node u receives the packet from its
parent node w; then edge (u, w) is deleted and node u sets its Parent_Link to node v. Otherwise, in Fig. 8d,
node u receives the packet from its child node w; then edge (u, w) is deleted and nodes {w, v} set their
Parent_Links to nodes {v, u}, respectively.
[Figure 8 appears here: panels (a), (b), (c), and (d). Legend: white node in the tree.]
Figure 8. Example of loop detection. In these graphs, node v is a determinant node and edge (v, u) is added. (a) An
illustrative topology. (b) Node v sends a packet toward node u via this topology, excluding edge (v, u). (c) Another illustrative
topology. (d) A packet is sent as in (b).
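The following sketch shows the decision node u could take when the probe arrives, using the parent/children dictionaries of Section 3.4.2; the function name handle_probe and the way the old chain of Parent_Links from v up to w is collected and reversed are our own illustrative assumptions about one way to realize the rule above.

# Hypothetical handling, at node u, of the probe sent by determinant v over
# the old topology (Section 3.4.3). parent_link and children_links are the
# dictionaries of Section 3.4.2; came_from is the tree neighbor that
# delivered the probe to u.
def handle_probe(u, v, came_from, parent_link, children_links):
    if came_from == parent_link[u]:
        # Fig. 8b: the cycle runs through u's parent; cut that edge and
        # hang u under v via the new edge (v, u).
        children_links[came_from].discard(u)
        parent_link[u] = v
        children_links[v].add(u)
    else:
        # Fig. 8d: the cycle runs through child w of u; cut (u, w) and
        # re-orient the Parent_Links along the old path from v up to w.
        w = came_from
        children_links[u].discard(w)
        chain = [v]
        while chain[-1] != w:                 # old upward chain v -> ... -> w
            chain.append(parent_link[chain[-1]])
        prev = u                              # v now hangs under u via (v, u)
        for x in chain:
            old = parent_link[x]
            if old is not None:
                children_links[old].discard(x)
            parent_link[x] = prev
            children_links[prev].add(x)
            prev = x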
This loop detection method is suited to a highly fluctuating environment, since it needs no extra overhead to
maintain additional information on the tree. On the other hand, the other method, which is proposed
in Appendix A, can find the correct edge to remove in constant time but records some tree information that may need to be
updated as the network varies. The cost of the loop-free mechanism is the critical part of our algorithm, but this
cost is bounded because of the limited number of determinant nodes. Theorem 1 shows this result.
Theorem 1. Given a graph G and a subtree T of G, suppose the distributed algorithm applied to T uses r determinant
nodes to perform reductions. Let k denote the number of edge-adding operations and let n = |V(T)|. Then k < n - 1
whenever r > 0.
Proof
We prove this bound by contradiction. Suppose that k ≧ n - 1 edge-adding operations occur while
one or more determinant nodes are used. By our algorithm, at most n - 1 edges can be added, since this tree has
n - 1 edges and the added ones are never erased by the rules of the classification. Thus, k = n - 1.
Since our algorithm uses one or more determinant nodes in this spanning tree, in which all nodes are initialized at
the beginning, consider the determinant node v to which the reduction is first applied. Before the
reduction applies to v, there must be one or more tree edges between v and its neighbor white nodes, since the tree is
connected. Both end-points of each such edge are set black, so these edges are never erased by the rules of the
classification. If a total of k = n - 1 edges were added, this tree would have more than n - 1 edges, a contradiction. Hence
the bound k < n - 1 holds whenever r > 0. □
3.4.4 Core Moving
When the core moves, the direction of these links may need to change. We can use two simple methods to
achieve this goal:
1. The new core node is responsible for multicasting a special packet to notify the nodes of the direction of the root.
2. When the core moves, the nodes on the path between the old core and the new core reverse their parent links, as in the sketch below.
These two mechanisms are designed for the maintenance of the rooted multicast tree.
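A minimal sketch of the second method, again on the parent/children dictionaries of Section 3.4.2, is given below; the function name move_core is our own, and the real protocol would carry this out with notification messages along the path.

# Hypothetical re-rooting of the tree at new_core (Section 3.4.4): reverse
# the Parent_Links on the path from the new core up to the old core.
def move_core(new_core, parent_link, children_links):
    chain = []
    x = new_core
    while x is not None:                   # climb from new_core to the old core
        chain.append(x)
        x = parent_link[x]
    prev = None                            # the new core has no parent
    for x in chain:
        old = parent_link[x]
        if old is not None:
            children_links[old].discard(x)
        parent_link[x] = prev
        if prev is not None:
            children_links[prev].add(x)
        prev = x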
3.4.5 Disjointed Tree
With varying network topology, the multicast tree may become disjointed. Thanks to the rooted multicast tree, we can
detect when this happens via the Parent_Link. When the parent of a node v is lost, the sub-tree rooted at node v is
separated from the rooted tree. At this time, node v is responsible for reconnecting to the multicast tree by the shortest path
to the core, since our algorithm is based on a shortest-path multicast tree rooted at the core node. The new nodes of the
multicast tree can then use the distributed algorithm to locally reduce the number of non-leaves.
3.4.6 Fluctuant Networks
The network may vary with time, and the tree topology produced by our algorithm may no longer admit the
reduction because the majority of nodes are black. We provide two mechanisms to efficiently optimize the
fluctuating tree topology:
1. The multicast tree is periodically flushed and all tree nodes are initialized again.
2. When the majority of nodes in the neighbor zone of a tree node v are not the same as before, v can flush itself and all its neighbor tree nodes, and these nodes are initialized for local optimization in that sub-area of the tree.
By either of these two methods, the multicast tree can be locally optimized under varying network topology.
Appendix A. Another Method for Loop-Free
With the same preliminaries as in Section 3.4.3, we propose another method as an alternative to loop detection.
Loop Avoidance:
Loop avoidance prevents a cycle from forming when an edge is added to the tree topology. When an edge is to be
added, a correct edge must be chosen for removal to keep the tree loop-free. We use the NCA (Nearest Common Ancestor) to
achieve this goal, since it identifies the nearest common ancestor of any two nodes in the rooted
tree. In Fig. 9a, the spanning tree is rooted at node A. The NCA of nodes D and E is node B, and the NCA of nodes C
and D is node A. The distributed algorithm of [1] takes constant time to acquire the NCA of any two nodes,
and we use it as our basic operation.
When node v is willing to link to node u, we need to choose a node w such that, after edge (u, w) is removed,
adding edge (v, u) does not form a cycle and the tree remains connected. Such a node w can be found using the
NCA. Since nodes v and u are distinct, there are three situations for NCA(v, u), listed as follows:
1. NCA(v, u) ≠ v and NCA(v, u) ≠ u. This implies that NCA(v, u) = q with q ≠ v and q ≠ u. Since node q is the NCA of v and u, node q must be on the path from u to v, that is, node q is in path_T(u, v). Thus, the parent of node u is the wanted node w. Take Fig. 9b as an example, where node B is willing to link to node C, and NCA(B, C) = A with A ≠ B and A ≠ C. The parent of node C is node A; thus, if we remove link (C, A) and set link (C, B), no cycle is induced.
2. NCA(v, u) = v. This implies that node v is an ancestor of node u, and the parent of node u is the wanted node w, since each node has only one parent in the rooted tree. Take Fig. 9c as an example, where node B is willing to link to node F, and NCA(B, F) = B. The parent of node F is node D; thus, if we remove link (F, D) and set link (F, B), no cycle is induced.
3. NCA(v, u) = u. This implies that node u is an ancestor of node v, but we do not know which child of node u is the wanted node, since each tree node can have one or more children. However, there is exactly one child q of node u in path_T(u, v), where NCA(v, q) = q and v ≠ q, since node q is also an ancestor of node v. So this node q is the wanted node w. Take Fig. 9d as an example, where node E is willing to link to node B, and NCA(E, B) = B. The children of node B are nodes D and H. However, NCA(E, D) = D, which shows that node D is the wanted node. If we remove link (D, B) and set link (E, B), no cycle is induced. Besides, all nodes in path_T(D, E) change their Parent_Link to maintain the tree structure.
[Figure 9 appears here: spanning trees rooted at node A (core), panels (a), (b), (c), and (d). Legend: white node in the tree.]
Figure 9. Example of loop avoidance. (a) An illustrative topology. (b) Node B wants to link to node C. (c) Node B
wants to link to node F. (d) Node E wants to link to node B.
After this analysis, we can design an algorithm to choose a correct node w. The algorithm is listed in Fig. 10.
Input: nodes v and u
Output: node w
Step 1. If (NCA(v, u) != v) and (NCA(v, u) != u) then
w = Parent(u); return w;
Step 2. If (NCA(v, u) = v) then
w = Parent(u); return w;
Step 3. If (NCA(v, u) = u) then
Choose the child node w of u where NCA(v, w) = w;
return w;
Figure 10. Algorithm to choose correct node w.
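For illustration, the sketch below implements the choice of w in Python over a parent-pointer dictionary. Note that, instead of the constant-time NCA scheme of [1], it climbs parent pointers using node depths, which is our own simplification; the function names nca and choose_removal_endpoint are hypothetical.

# Hypothetical realization of Fig. 10 over parent_link (None for the core).
def depth(x, parent_link):
    d = 0
    while parent_link[x] is not None:
        x = parent_link[x]
        d += 1
    return d

def nca(a, b, parent_link):
    # Simple NCA by climbing parent pointers (not the O(1) scheme of [1]).
    da, db = depth(a, parent_link), depth(b, parent_link)
    while da > db:
        a = parent_link[a]; da -= 1
    while db > da:
        b = parent_link[b]; db -= 1
    while a != b:
        a, b = parent_link[a], parent_link[b]
    return a

def choose_removal_endpoint(v, u, parent_link):
    q = nca(v, u, parent_link)
    if q != u:
        # Steps 1 and 2 of Fig. 10: the parent of u lies on path_T(u, v).
        return parent_link[u]
    # Step 3: u is an ancestor of v; the wanted node is the child of u on
    # path_T(u, v), i.e. the ancestor of v whose parent is u.
    w = v
    while parent_link[w] != u:
        w = parent_link[w]
    return w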
A correct edge (u, w) can be chosen by this algorithm, which achieves loop avoidance. However, this method must
first run a labeling function that takes O(n) time, where n is the number of tree nodes, before the NCA can be queried. Thus, when
the tree topology varies, the labeling mechanism may need to run again before the next query.
The loop avoidance method can find this edge in constant time per query and is quicker than the loop detection method. However,
loop avoidance is suited to a rarely varying network topology, since the labeling function must be run again when the
topology varies. When the topology does not vary too fast, loop avoidance is the better choice because it saves the time needed to
keep the tree loop-free.