6.854 Advanced Algorithms
Lecture 10: October 12, 1999
Lecturer: David Karger
Scribes: Catalin Francu, Ovidiu Gheorghioiu, J. Ireland, Deepak Ramaswamy

10.1 Blocking Flows

10.1.1 Dinic's Algorithm
The shortest augmenting path algorithm described in Section 9.2.2 is inefficient because we throw away the information discovered during each search, and so we must start from scratch the next time a search for the shortest augmenting path is performed. Dinic's algorithm exploits the information found during the shortest path computation, which reduces the cost of each phase to O(mn), as we will see shortly.

Dinic's algorithm partitions residual networks into layered graphs. A layered graph is a graph organized into layers of vertices grouped by common distance to the sink. See Figure 10.1 for an example.
Figure 10.1: Example of a graph organized into layers.
Notice that a shortest augmenting path must move from layer to layer, since a path with an arc going to the same layer or to a layer already visited could be shortened.

We label each node v by its distance d(v) to the sink t. We can now make the following definitions:

Definition: An admissible arc (v, w) is an arc on a shortest path or, equivalently, an arc such that d(v) = d(w) + 1.

Definition: An admissible path is a path of admissible arcs.
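To make the labelling concrete, here is a small sketch (the function name and interface are ours, not from the lecture) that computes d(v) by a breadth-first search backwards from t and then collects the admissible arcs:

```python
from collections import deque

def admissible_arcs(n, arcs, t):
    """Label each vertex v with d(v), its distance to the sink t, and
    return the arcs (v, w) with d(v) = d(w) + 1.
    n: number of vertices; arcs: list of (v, w) pairs."""
    radj = [[] for _ in range(n)]      # reverse adjacency, for BFS from t
    for v, w in arcs:
        radj[w].append(v)
    d = [None] * n                     # None = cannot reach t
    d[t] = 0
    q = deque([t])
    while q:
        w = q.popleft()
        for v in radj[w]:
            if d[v] is None:
                d[v] = d[w] + 1
                q.append(v)
    # Admissible arcs step from one layer to the next-closer layer.
    return d, [(v, w) for v, w in arcs
               if d[v] is not None and d[w] is not None and d[v] == d[w] + 1]
```

Arcs within a layer (d(v) = d(w)) or going backwards are dropped, matching the observation that shortest paths move layer to layer.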
Our goal is to find, at each iteration of the algorithm, a flow that destroys all the shortest augmenting paths. Such a flow is called a blocking flow.

Definition: A blocking flow f is a flow that saturates at least one edge on every s-t admissible path.

Lemma: If f blocks s from t (saturating an arc in every s-t admissible path), then d(s, t) in the residual graph Gf increases.

Proof: By the definition of a blocking flow, no path remains from s to t that uses only forward (admissible) arcs. All other arcs, including the new residual arcs, are either sideways or backwards arcs. Hence any new shortest path must include at least one of these arcs, which means that the distance from s to t has increased.
10.1.2 Finding the Blocking Flow

A Special Case: Unit Capacity Graphs
If the algorithm saturates an edge, there is no need to create a residual edge, since that edge would be useless; we can simply throw the edge away. The algorithm is a depth-first search that starts from s and advances until:

- It reaches the sink, in which case it has found an augmenting path. It should then augment on this path and discard its edges.
- It gets stuck at a node v.
But how can it get stuck? Normally, since d(v) = k, there must be a path of length k from v to t. So the algorithm can only get stuck when it hits a node saturated by a previous path; that is, it only blocks on admissible arcs. What does the algorithm do when it blocks on such an edge?

- Retreat to the previous vertex to try another direction.
- When retreating, delete the edge, because the node it retreats from has no path to the sink.
- Vertices with no outgoing arcs can also be marked as blocked.
Analysis: We always remove an edge after examining it once, either by augmenting on it or by retreating over it. This yields a runtime of O(m) for one blocking flow. The entire algorithm has a complexity of O(mn).
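The advance/retreat/augment loop above can be sketched as follows (an illustrative implementation with our own interface; it deletes arcs both on retreat and on augmentation, so each arc is examined once):

```python
from collections import deque

def blocking_flow_unit(n, edges, s, t):
    """One blocking-flow phase on a unit-capacity graph.
    edges: list of (u, v) arcs of capacity one.
    Returns the value of the blocking flow (number of paths found)."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # Distance labels d(v) = distance to the sink, via BFS backwards from t.
    INF = float('inf')
    d = [INF] * n
    d[t] = 0
    q = deque([t])
    while q:
        v = q.popleft()
        for u in radj[v]:
            if d[u] == INF:
                d[u] = d[v] + 1
                q.append(u)

    # Keep only the admissible arcs, d(u) = d(v) + 1.
    adm = [[v for v in adj[u] if d[u] == d[v] + 1] for u in range(n)]

    flow = 0
    while True:
        path = [s]
        while path and path[-1] != t:
            u = path[-1]
            if adm[u]:
                path.append(adm[u][-1])   # advance on the last listed arc
            else:
                path.pop()                # stuck: retreat, and ...
                if path:
                    adm[path[-1]].pop()   # ... delete the arc retreated over
        if not path:
            return flow                   # s is blocked from t: done
        # Augment: capacities are one, so delete every arc on the path.
        for i in range(len(path) - 1):
            adm[path[i]].pop()
        flow += 1
```

Since every advance follows the last arc in a node's admissible list, that arc is exactly the one deleted on retreat or augmentation, giving the O(m) charge per phase.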
Why is this algorithm better? We already had an O(mn) algorithm for unit capacity graphs! This algorithm:

- has even nicer bounds for unit-capacity graphs,
- can be extended to general-capacity graphs,
- performs better in practice,
- leads to the current best techniques, the push-relabel algorithms.
10.1.3 Better unit bounds
These are not the tightest bounds we can give for our algorithm. Let's do a more accurate analysis.
Suppose we do k blocking flows. When we are done, the distance from the source to the sink is at least k. If we consider the maximum flow in the residual graph and decompose it into paths, there are at most m/k of these, since the paths are disjoint (unit-capacity graph), have length at least k, and there are m edges in the graph. Each path has unit capacity, so the maximum flow is at most m/k as well, so we will be done after at most m/k further augmentations.

This yields a total of m/k + k blocking-flow phases (each a breadth-first search plus O(m) work). Setting k = √m gives an upper bound of O(m√m) = O(m^{3/2}) for the algorithm.

Using a similar argument we can actually obtain an even tighter bound of O(mn^{2/3}).
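Written out (T(k) is our shorthand for the total work), the balancing step is:

```latex
% k blocking-flow phases at O(m) each, then at most m/k further
% augmentations, each found by an O(m) search:
T(k) = O\!\left( km + \frac{m}{k}\, m \right),
\qquad
k = \sqrt{m} \;\Longrightarrow\; T = O\!\left( m\sqrt{m} \right) = O\!\left( m^{3/2} \right).
```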
10.1.4 Bipartite matching
We can obtain even better results with this algorithm in the special case of a bipartite matching
network.
Here, both the initial and the residual graphs have the property that each vertex has either indegree 1 or outdegree 1, a property that is preserved by augmentations. Thus, each vertex has only one unit of flow flowing through it. Using the argument above: after k blocking flows, the remaining augmenting paths have length at least k, and each node is part of at most one path; hence the residual flow is at most n/k.

Setting k = √n yields an O(m√n) time bound for doing bipartite matching.
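For concreteness, here is a sketch of a matching algorithm built on exactly this idea, commonly known as Hopcroft-Karp (the code and interface are ours; it layers the graph by BFS from the free left vertices, then finds augmenting paths by DFS within the layers):

```python
from collections import deque

def hopcroft_karp(n_left, n_right, pairs):
    """Maximum bipartite matching via blocking flows, O(m * sqrt(n)).
    pairs: list of (u, v) edges, u in [0, n_left), v in [0, n_right).
    Returns the size of a maximum matching."""
    adj = [[] for _ in range(n_left)]
    for u, v in pairs:
        adj[u].append(v)
    INF = float('inf')
    match_l = [-1] * n_left    # partner of each left vertex, or -1 (free)
    match_r = [-1] * n_right   # partner of each right vertex, or -1 (free)

    def bfs():
        # Layer the graph: distance from the free left vertices along
        # alternating paths. Reports whether a free right vertex is reachable.
        dist = [INF] * n_left
        q = deque()
        for u in range(n_left):
            if match_l[u] == -1:
                dist[u] = 0
                q.append(u)
        found = False
        while q:
            u = q.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    found = True           # augmenting path exists
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1  # continue via the matched edge
                    q.append(w)
        return found, dist

    def dfs(u, dist):
        # Find one augmenting path inside the layered graph.
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w, dist)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF  # retreat: u is blocked for this phase
        return False

    matching = 0
    while True:
        found, dist = bfs()
        if not found:
            return matching
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u, dist):
                matching += 1
```

Each phase is one BFS plus one blocking-flow-style DFS pass, and by the argument above O(√n) phases suffice.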
10.1.5 General graphs
What breaks the algorithm in the general case?

Our previous argument, that only O(m) total time per blocking flow is required for retreats, advances, and blocks, remains valid. This time, however, we can only assume that each augmentation saturates one arc, so a blocking flow may require up to m augmentations. Since we do O(n) work to augment along a path, we must charge O(n) per edge per blocking flow. This gives us a bound of O(mn) per blocking flow, or O(mn^2) for the whole algorithm.
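Putting the pieces together, a sketch of the whole algorithm for general capacities looks like this (names and interface are ours; here dist measures distance from s rather than to t, which is equivalent with the inequality reversed):

```python
from collections import deque

def dinic(n, edges, s, t):
    """Dinic's algorithm for general capacities.
    n: number of vertices; edges: list of (u, v, capacity).
    Returns the maximum s-t flow value."""
    # Arc arrays: to[e], cap[e]; arc e ^ 1 is the residual of arc e.
    to, cap, head = [], [], [[] for _ in range(n)]
    for u, v, c in edges:
        head[u].append(len(to)); to.append(v); cap.append(c)
        head[v].append(len(to)); to.append(u); cap.append(0)

    def bfs():
        # Layer the residual graph by distance from s.
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for e in head[u]:
                if cap[e] > 0 and dist[to[e]] == -1:
                    dist[to[e]] = dist[u] + 1
                    q.append(to[e])
        return dist

    def dfs(u, limit, dist, it):
        # Advance/retreat DFS for one blocking flow; it[u] remembers which
        # arcs of u are already known to be blocked ("delete on retreat").
        if u == t:
            return limit
        while it[u] < len(head[u]):
            e = head[u][it[u]]
            v = to[e]
            if cap[e] > 0 and dist[v] == dist[u] + 1:
                pushed = dfs(v, min(limit, cap[e]), dist, it)
                if pushed:
                    cap[e] -= pushed
                    cap[e ^ 1] += pushed
                    return pushed
            it[u] += 1  # retreat: this arc is blocked for the phase
        return 0

    flow = 0
    while True:
        dist = bfs()
        if dist[t] == -1:
            return flow
        it = [0] * n
        while True:
            pushed = dfs(s, float('inf'), dist, it)
            if pushed == 0:
                break
            flow += pushed
```

The it[] pointers play the role of edge deletion: once an arc is found blocked it is never examined again within the phase, preserving the O(m) retreat/advance charge.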
10.1.6 Two fast BF algorithms
We can improve the above O(mn^2) bound by attacking the O(n) time per edge for path augmentation. The idea is to reuse previous augmenting work, instead of throwing it away after just one augmentation. This can be done by two different approaches: dynamic trees and scaling.
10.1.7 Data Structures - Dynamic Trees [Sleator and Tarjan]
Here we maintain pieces of the augmenting paths so that we can jump to the head of the current piece in one step. The structure must support:

- jumping to the head of the current piece,
- keeping track of the smallest edge on a path,
- decreasing the capacity of all edges on a path.
We maintain an in-forest of the augmentable edges. Initially all the vertices are isolated and the "current" vertex is the root of the tree containing the source s. We now describe the basic procedures.

Figure 10.2: In-forest
advance:

- Add the traversed edge to the forest. This is not trivial, as it may involve linking to another tree.
- Jump to the root of the linked tree.

retreat:

- Cut the edge that we retreat on from the forest. This separates the trees.

Figure 10.3: Advance may involve linking trees

augment:

- Find the minimum capacity on the s-t path in the current tree.
- Decrease all capacities by that amount. Cut any zero-capacity arcs that may result.
We claim that dynamic trees support these four operations in O(log n) time per operation. We show this below for the simpler case of dynamic paths.

Example: Dynamic Paths

We can maintain the vertex sequence of the path in a splay tree.

- Linking two trees corresponds to the "join" splay tree operation.
- Cutting (deleting) an edge corresponds to the "split" splay tree operation.
- To find the minimum capacity edge, we augment each node of the splay tree to record the smallest element in its subtree. (Each arc is associated with the node preceding it.)
- To make updates more efficient, we store Δ's: the true value at a node is the sum of the Δ's from the node to the root of the splay tree. These Δ's are easy to maintain under rotations.

All these splay tree operations take O(log n) time. Therefore the running time for Dinic's algorithm reduces to O(mn log n).
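The Δ idea can be seen in isolation in a static setting. The sketch below (our own illustration, a stand-in for the splay tree without link/cut) stores capacities in a segment tree where the true value of a leaf is its stored value plus the sum of the Δ's on its root path; both range add and range minimum then cost O(log n):

```python
import math

class RangeAddMin:
    """Segment tree with lazy deltas: range add and range minimum."""
    def __init__(self, values):
        self.n = len(values)
        self.mn = [0.0] * (4 * self.n)     # subtree min, excluding own delta
        self.delta = [0.0] * (4 * self.n)  # pending addition for the subtree
        self._build(1, 0, self.n - 1, values)

    def _build(self, node, lo, hi, values):
        if lo == hi:
            self.mn[node] = values[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, values)
        self._build(2 * node + 1, mid + 1, hi, values)
        self.mn[node] = min(self.mn[2 * node], self.mn[2 * node + 1])

    def add(self, l, r, x):          # add x to values[l..r]
        self._add(1, 0, self.n - 1, l, r, x)

    def minimum(self, l, r):         # min of values[l..r]
        return self._min(1, 0, self.n - 1, l, r)

    def _add(self, node, lo, hi, l, r, x):
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.delta[node] += x    # lazily record a delta for the subtree
            return
        mid = (lo + hi) // 2
        self._add(2 * node, lo, mid, l, r, x)
        self._add(2 * node + 1, mid + 1, hi, l, r, x)
        self.mn[node] = min(self.mn[2 * node] + self.delta[2 * node],
                            self.mn[2 * node + 1] + self.delta[2 * node + 1])

    def _min(self, node, lo, hi, l, r):
        if r < lo or hi < l:
            return math.inf
        if l <= lo and hi <= r:
            return self.mn[node] + self.delta[node]
        mid = (lo + hi) // 2
        best = min(self._min(2 * node, lo, mid, l, r),
                   self._min(2 * node + 1, mid + 1, hi, l, r))
        return best + self.delta[node]  # deltas accumulate on the root path
```

Dynamic trees combine this representation with splay-tree join and split so that the "path" can also be linked and cut.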
10.1.8 Scaling [Gabow]

(This algorithm was actually first discussed by Dinitz in '73, but it was published only in Russian, so, as is often the case in this area, an American discovered it independently much later.)
Scaling is a tool for converting general capacities to unit capacities. We make use of the fact that numbers are made up of bits, and that single bits are the unit case. Therefore, aside from the scaling overhead, we get the benefit of unit-capacity flows.

Idea

We start with the rounded-down values of the general capacities, solve for the maximum flow, and then shift back in the next bit.
- Capacities are represented by log u bits, where u is the maximum capacity. Therefore we may need to pad some capacities with extra zeroes.
- Start with all bits shifted out.
- Shift in one bit: double the flow, and also add one to some of the residual capacities.
- Find the maximum flow in the residual graph.

This algorithm takes O(mn log u) time.

Proof: In each phase, the shift increases the value of the residual maximum flow by at most m, since each of the m arcs gains at most one unit of capacity. The blocking flow algorithm spends O(n) time per augmentation, so it takes O(mn) time to determine the maximum flow in each phase. There are a total of log u phases.
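The phase structure can be sketched as follows (our own code; for brevity each phase is re-maximized with plain BFS augmenting paths rather than blocking flows, which weakens the constants but not the phase structure):

```python
from collections import deque

def scaling_max_flow(n, edges, s, t):
    """Maximum flow by bit scaling.
    edges: list of (u, v, capacity) with nonnegative integer capacities."""
    to, head, full = [], [[] for _ in range(n)], []
    for u, v, c in edges:
        head[u].append(len(to)); to.append(v); full.append(c)
        head[v].append(len(to)); to.append(u); full.append(0)

    cap = [0] * len(to)                    # scaled residual capacities
    bits = max((c for _, _, c in edges), default=0).bit_length()
    flow = 0
    for bit in reversed(range(bits)):      # log u phases
        # Shift in the next bit: residual capacities double (so the flow
        # found so far doubles too), and some arcs gain one extra unit.
        for e in range(len(to)):
            cap[e] = 2 * cap[e] + ((full[e] >> bit) & 1)
        flow *= 2
        # Re-maximize: the residual flow value is at most m per phase.
        while True:
            parent = [-1] * n              # arc used to reach each vertex
            parent[s] = -2
            q = deque([s])
            while q and parent[t] == -1:   # BFS for a residual s-t path
                u = q.popleft()
                for e in head[u]:
                    if cap[e] > 0 and parent[to[e]] == -1:
                        parent[to[e]] = e
                        q.append(to[e])
            if parent[t] == -1:
                break                      # this phase's flow is maximum
            push, v = float('inf'), t
            while v != s:                  # find the bottleneck capacity
                push = min(push, cap[parent[v]])
                v = to[parent[v] ^ 1]
            v = t
            while v != s:                  # push along the path
                cap[parent[v]] -= push
                cap[parent[v] ^ 1] += push
                v = to[parent[v] ^ 1]
            flow += push
    return flow
```

After the last phase the scaled capacities equal the true capacities, so the final flow is the true maximum flow.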