The Submodular Welfare Problem
Lecturer: Moran Feldman
Based on “Optimal Approximation for the Submodular
Welfare Problem in the Value Oracle Model”
By Jan Vondrák
Talk Outline
• Preliminaries and the problems
• Reducing to continuous problems
• Approximating the continuous problems
• Constructing an integral solution
• Summary
Combinatorial Auctions
Instance
• A set P of n players
• A set Q of m items
• A utility function wj: 2^Q → ℝ+ for each player.
Objective
• Let Qj ⊆ Q denote the set of items the jth player gets.
• The utility of the jth player is wj(Qj).
• Distribute the items among the players, maximizing the
sum of utilities.
Combinatorial Auction - Example
• Players: TAs
• Items: classrooms, markers, multimedia keys

Items                        TA with a presentation    TA without a presentation
No classroom                 0                         0
Classroom and multimedia     10                        1
Classroom and markers        3                         7
Only classroom               1                         1
Oracles
Obstacle
The utility functions wj have representations whose size is exponential in the number of items.
Solutions
• Considering a special class of utility functions with a polynomial-size representation.
• Accessing the utility functions through oracles (the solution we assume):
– Value Oracle – given a set Qj of items, returns the value wj(Qj). This is the model assumed in this talk.
– Demand Oracle – given an assignment p: Q → ℝ+ of prices to items, finds a set Qj maximizing wj(Qj) − Σ_{q∈Qj} p(q). A more powerful oracle.
Utility Functions
Assumptions about the           Approximation with a polynomial
utility functions:              number of oracle queries*:
Assuming nothing                None possible
Assuming monotone submodular    A (1 − 1/e)-approximation at best
* Under the value oracle model
Set Function Properties
Set Function Properties
Given a set function f: 2^S → ℝ+, we say that f is:
• Monotone – if f(A) ≤ f(B) for every A ⊆ B ⊆ S.
• Submodular – if f(A ∪ B) + f(A ∩ B) ≤ f(A) + f(B) for every A, B ⊆ S.
Say that wj is submodular, so what?
• Consider a player j receiving a set Qj of items.
• Let v be the additional value j would get if we assigned item q to j. Formally: v = wj(Qj ∪ {q}) − wj(Qj).
• Now assign to j an additional set of items Q'j.
• If we assigned item q to j now, j would get no more additional value than before. Formally: v ≥ wj(Qj ∪ Q'j ∪ {q}) − wj(Qj ∪ Q'j).
The utility of an item diminishes as the player gets more items!
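For intuition, here is a small Python sketch (my own illustration, not from the talk) of a coverage function, a standard example of a monotone submodular utility, together with a check of the diminishing-returns property above; the items and what they cover are made-up example data.

# A coverage utility: each item covers some ground elements, and the value of a
# set of items is the number of distinct elements it covers. Coverage functions
# are monotone and submodular.
covers = {                       # hypothetical example data
    "classroom": {"room"},
    "multimedia": {"room", "projector"},
    "markers": {"board"},
}

def w(items):
    """Value of a set of items = size of the union of what they cover."""
    covered = set()
    for item in items:
        covered |= covers[item]
    return len(covered)

def marginal(items, q):
    """Marginal value of adding item q to the set `items`."""
    return w(items | {q}) - w(items)

# Diminishing returns: the marginal value of "multimedia" can only drop
# once more items are already held.
assert marginal(set(), "multimedia") >= marginal({"classroom"}, "multimedia")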
The Submodular Welfare Problem (SWP)
Instance
• A set P of n players
• A set Q of m items
• A monotone submodular utility function wj: 2^Q → ℝ+ for each player, accessed via a value oracle.
Objective
• Find a partition Q1, Q2, …, Qn of Q maximizing the total utility:
Σ_{j=1}^n wj(Qj)
Matroid
Definition
An ordered pair (X, I), with I ⊆ 2^X (the sets in I are called “independent sets”), such that:
1. There is an independent set: ∅ ∈ I.
2. Monotonicity: A ⊆ B, B ∈ I ⇒ A ∈ I.
3. Augmentation: if A, B ∈ I and |A| < |B|, then there exists b ∈ B − A such that A ∪ {b} ∈ I.
Motivation:
A generalization of many well-known concepts:
• Given a vector space, the sets of linearly independent vectors form a matroid.
• Given a graph, the forest sub-graphs form a matroid.
Example – Forest Sub Graphs Matroid
[Figure: a graph on the vertices a, b, c, d, e, f]
X = {ab, bd, df, ef, ce, ca, bc, be, de}
I = {S ⊆ X | S is a forest}
Independent set example: S = {ab, bc, df, ef}
Property 1: ∅ ∈ I (the empty set of edges is trivially a forest).
Example – Forest Sub Graphs Matroid
[Figure: the same graph on the vertices a, b, c, d, e, f]
Property 2: Monotonicity
Removing edges from a forest leaves it a forest.
Property 3: Augmentation
Given two forests A, B with |A| < |B|, there is an edge e ∈ B such that A ∪ {e} is also a forest.
Why? No forest can have more than |A| edges inside the connected components of A, so some edge of B connects two different components of A; adding it to A creates no cycle.
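As a side illustration (my own sketch, not part of the original slides), the independence oracle of this matroid can be implemented with a union-find structure: a set of edges is independent exactly when adding them one by one never closes a cycle.

def is_forest(edges):
    """Independence oracle of the graphic matroid: True iff `edges`
    (a collection of 2-element tuples of vertices) contains no cycle."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # u and v already connected, so this edge closes a cycle
            return False
        parent[ru] = rv       # merge the two components
    return True

# The example from the slide: S = {ab, bc, df, ef} is independent.
print(is_forest([("a", "b"), ("b", "c"), ("d", "f"), ("e", "f")]))   # True
print(is_forest([("a", "b"), ("b", "c"), ("c", "a")]))               # False (triangle)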
Submodular Maximization Subject to a Matroid Constraint (SMSMC)
Instance
• A ground set X.
• A monotone submodular function f: 2^X → ℝ+, accessed via a value oracle.
• A matroid M = (X, I), accessed via a membership (independence) oracle.
Objective
• Find an independent set of maximum value: max_{S∈I} f(S).
Motivation
Generalizes known problems, for example:
• SWP (will be proven in a moment).
• Max k-Cover – given sets S1, S2, …, Sn, find k of them whose union is of maximum size.
• Multiple Knapsack (via an exponential reduction) – same as Knapsack, but several knapsacks are available.
Reduction
Theorem
SWP can be reduced to SMSMC.
The Reduction
• Ground set: X = P × Q.
• Given a set S ⊆ X, let Sj = {q ∈ Q | (j, q) ∈ S} and define f: 2^X → ℝ+ as:
f(S) = Σ_{j=1}^n wj(Sj)
• The matroid M = (X, I) enforces the restriction that an item may be assigned to at most one player:
I = {S ⊆ X | ∀q ∈ Q: |S ∩ (P × {q})| ≤ 1}
Corollary
From now on we can focus on SMSMC.
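A minimal Python sketch of this reduction (my own illustration; the utilities wj are assumed to be given as black-box value oracles over frozensets of items):

from itertools import product

def make_smsmc_instance(players, items, w):
    """Reduce an SWP instance to SMSMC.
    players, items: lists; w: dict mapping player -> value oracle over frozensets of items."""
    ground_set = list(product(players, items))          # X = P x Q

    def f(S):
        # f(S) = sum over players j of w_j(S_j), where S_j are the items paired with j in S.
        total = 0
        for j in players:
            S_j = frozenset(q for (p, q) in S if p == j)
            total += w[j](S_j)
        return total

    def independent(S):
        # Each item may be paired with at most one player.
        assigned = [q for (_, q) in S]
        return len(assigned) == len(set(assigned))

    return ground_set, f, independent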
Continuous Set Functions
Motivation
• Let X be a ground set, and consider y ∈ [0, 1]^X.
• Intuitively, we can think of y as selecting each item j ∈ X to the extent yj.
• We want to extend the properties of set functions to the continuous case.
Notation
• (x ∨ y)i = max{xi, yi}   (x ∨ y is a generalization of union)
• (x ∧ y)i = min{xi, yi}   (x ∧ y is a generalization of intersection)
Example:
x = (0.4, 0, 0.3, 0.6, 1)
y = (0.6, 1, 0.2, 0.2, 0.7)
x ∨ y = (0.6, 1, 0.3, 0.6, 1)
x ∧ y = (0.4, 0, 0.2, 0.2, 0.7)
Definitions
Let F: [0, 1]^X → ℝ. We say that F is:
• monotone if x ≤ y ⇒ F(x) ≤ F(y)
• submodular if F(x ∨ y) + F(x ∧ y) ≤ F(x) + F(y)
Smooth Monotone Submodularity
Definition
A function F: [0, 1]^X → ℝ is smooth monotone submodular if:
• F has second partial derivatives everywhere in its domain.
• For each j ∈ X, ∂F/∂yj ≥ 0 everywhere (F is monotone).
• For any i, j ∈ X (possibly i = j), ∂²F/(∂yi ∂yj) ≤ 0 everywhere (F is submodular: ∂F/∂yj is non-increasing with respect to yi).
Consequently, F is concave in all non-negative directions.
Extension by Expectation
Objective
Given a set function f, we want to extend it into a continuous function F.
• Extension: F should coincide with f on integral vectors.
• Properties preservation: F should be smooth monotone submodular, assuming f is monotone and submodular.
Extension
• For y ∈ [0, 1]^X, let us denote by ỹ the random set obtained from y by selecting each item j ∈ X independently with probability yj.
• Let f: 2^X → ℝ be a monotone submodular function.
• The canonical extension of f into a continuous function is:
F(y) = E[f(ỹ)] = Σ_{R⊆X} f(R) ∏_{i∈R} yi ∏_{i∉R} (1 − yi)
• Extension: if y is integral, ỹ contains exactly the items selected by y.
• Properties preservation: we need to prove that F is a smooth monotone submodular function.
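The sum ranges over all 2^|X| subsets, so in practice F(y) is estimated by sampling. A minimal sketch (my own, assuming a value oracle f over frozensets):

import random

def sample_round(y):
    """Draw the random set y~: include each element i independently with probability y[i]."""
    return frozenset(i for i, p in y.items() if random.random() < p)

def estimate_F(f, y, samples=1000):
    """Monte-Carlo estimate of the extension F(y) = E[f(y~)]."""
    return sum(f(sample_round(y)) for _ in range(samples)) / samples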
Properties of F
• The monotonicity requirement:
∂F/∂yj = Σ_{R⊆X, j∉R} f(R ∪ {j}) ∏_{i∈R} yi ∏_{i∉R, i≠j} (1 − yi) − Σ_{R⊆X, j∉R} f(R) ∏_{i∈R} yi ∏_{i∉R, i≠j} (1 − yi)
       = E[f(ỹ) | j ∈ ỹ] − E[f(ỹ) | j ∉ ỹ] ≥ 0
(follows from the monotonicity of f)
• The submodularity requirement for i ≠ j:
∂²F/(∂yi ∂yj) = E[f(ỹ) | i ∈ ỹ, j ∈ ỹ] − E[f(ỹ) | i ∉ ỹ, j ∈ ỹ] − E[f(ỹ) | i ∈ ỹ, j ∉ ỹ] + E[f(ỹ) | i ∉ ỹ, j ∉ ỹ] ≤ 0
(follows from the submodularity of f)
• The submodularity requirement for i = j:
∂²F/∂yj² = 0, since every term of the sum defining F is linear in yj (F is multilinear).
Matroid Polytopes
Definition
• For any set S ⊆ X we let 1S denote the characteristic vector of S in [0, 1]^X.
• Given a matroid M = (X, I), its matroid polytope P(M) is the set of convex combinations of the vectors:
{1S | S ∈ I}
Example
X = {a, b, c}
I = {∅, {a}, {b}, {c}, {a, b}, {b, c}}
[Figure: P(M) is the convex hull of the corresponding characteristic vectors (0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0) and (0,1,1).]
Matroid Polytopes - Properties
Definition
A polytope P ⊆ ℝ+^X is called down-monotone if for any 0 ≤ x ≤ y, y ∈ P ⇒ x ∈ P.
Lemma
For any matroid M, P(M) is down-monotone.
Proof
• Given 0 ≤ x ≤ y ∈ P(M), the following procedure gets us from y to x without leaving P(M).
• Procedure:
1. While there is a coordinate j with yj > xj:
2.   Find a set S with positive weight in the convex combination of y that contains j.
3.   Since M is a matroid, S' = S − {j} ∈ I.
4.   Continuously shift weight from S to S' in the convex combination, until yj = xj or S is completely removed.
Marginal Value
Definition
• Let f: 2^X → ℝ be a set function.
• Given a set S of items, the marginal value of item j is
fS(j) = f(S ∪ {j}) − f(S).
• f is submodular ⇔ the marginal value of every item j diminishes as more items are added to S.
What is it good for?
• Continuous optimization algorithms often use ∇F as guidance.
• Implicitly, the derivative ∂F/∂yj is used to estimate the importance of increasing yj.
• Analogously, we use E[fỹ(j)] to estimate the importance of increasing yj.
[Figure: at a fractional point y, items can have very different expected marginal values, e.g., E[fỹ(1)] low and E[fỹ(2)] high.]
The Continuous Greedy Algorithm
Input
Matroid M = (X, I), monotone submodular function f: 2^X → ℝ+.
Pseudo Code
1. Set δ ← 1/n² (n = |X|), t ← 0 and y(0) ← 0.
2. While t < 1 do:
3.   For each j, let ωj(t) be an estimate of E[fỹ(t)(j)] with an error of no more than OPT/n².
4.   Let s(t) be a maximum-weight independent set in M according to the weights ωj(t).
5.   Set y(t + δ) ← y(t) + δ·1s(t), t ← t + δ.
6. Return y(1).
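A compact Python rendering of this pseudo-code (my own sketch, not the paper's implementation): it assumes a value oracle f, an independence oracle, and a max_weight_independent_set routine like the greedy algorithm described later; the number of samples per marginal is a small placeholder rather than the n^5 used by the analysis.

import random

def continuous_greedy(X, f, independent, max_weight_independent_set, samples=100):
    """Sketch of the continuous greedy algorithm.
    X: list of elements; f: value oracle over frozensets;
    independent: membership oracle of the matroid;
    max_weight_independent_set: routine (weights, independent) -> set of elements."""
    n = len(X)
    delta = 1.0 / n ** 2
    y = {j: 0.0 for j in X}
    for _ in range(n ** 2):                       # 1/delta steps, so t runs from 0 to 1
        # Estimate the expected marginal value E[f_y~(j)] of every element by sampling.
        omega = {}
        for j in X:
            total = 0.0
            for _ in range(samples):
                R = frozenset(i for i in X if random.random() < y[i])
                total += f(R | {j}) - f(R)
            omega[j] = total / samples
        # Move by delta in the direction of the best independent set under these weights.
        s = max_weight_independent_set(omega, independent)
        for j in s:
            y[j] += delta
    return y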
Continuous Greedy Algorithm - Demonstration
[Figure: starting from y(0) = 0, the fractional solution advances in small steps δ·1s(t) inside the matroid polytope: y(0), y(0.01), y(0.02), y(0.03), y(0.04), … toward y(1).]
Next steps
• Analyzing the algorithm's performance.
• Surveying the implementation of the algorithm.
Lemma 1
Lemma 1
Let OPT = max_{S∈I} f(S), and consider any y ∈ [0, 1]^X. Then:
OPT ≤ F(y) + max_{S∈I} Σ_{j∈S} E[fỹ(j)]
Proof
• Consider a specific optimal solution O ∈ I, and any given realization of ỹ.
• By the monotonicity and submodularity of f we know that:
OPT = f(O) ≤ f(ỹ ∪ O) ≤ f(ỹ) + Σ_{j∈O} fỹ(j)
• By taking the expectation over ỹ we get:
OPT ≤ E[f(ỹ) + Σ_{j∈O} fỹ(j)] = E[f(ỹ)] + Σ_{j∈O} E[fỹ(j)]
    = F(y) + Σ_{j∈O} E[fỹ(j)] ≤ F(y) + max_{S∈I} Σ_{j∈S} E[fỹ(j)]
Lemma 2
Lemma 2
Let y be the fractional solution found by the Continuous Greedy Algorithm. Then:
F(y) = E[f(ỹ)] ≥ (1 − 1/e − o(1))·OPT
Proof
• Consider a specific time t, and let Δ(t) = y(t + δ) − y(t) = δ·1s(t).
• Let D(t) be a random set containing each item j independently with probability Δj(t). Then:
F(y(t + δ)) = E[f(ỹ(t + δ))] ≥ E[f(ỹ(t) ∪ D(t))]
by monotonicity, since:
Pr[j ∈ ỹ(t + δ)] = yj(t) + Δj(t) ≥ 1 − (1 − yj(t))(1 − Δj(t)) = Pr[j ∈ ỹ(t) ∪ D(t)]
Roadmap
• We want to lower bound the increase in F at time t.
• Taking δ small enough, we can ignore the contribution of realizations in which D(t) is not a singleton.
Lemma 2 (cont.)
Lower bounding the increase in F at time t
F(y(t + δ)) − F(y(t)) ≥ E[f(ỹ(t) ∪ D(t)) − f(ỹ(t))]
  ≥ Σ_{j∈s(t)} Pr[D(t) = {j}]·E[fỹ(t)(j)]            (considering only singleton sets D(t))
  = δ(1 − δ)^{|s(t)|−1} Σ_{j∈s(t)} E[fỹ(t)(j)]
  ≥ δ(1 − nδ) Σ_{j∈s(t)} E[fỹ(t)(j)]                  (by the inequality (1 − δ)^k ≥ 1 − kδ)
Taking advantage of the properties of s(t)
• We chose s(t) as the independent set maximizing Σ_{j∈s(t)} ωj(t).
• However, ωj(t) is only an estimate of E[fỹ(t)(j)], up to an error of OPT/n². Therefore:
Σ_{j∈s(t)} E[fỹ(t)(j)] ≥ max_{S∈I} Σ_{j∈S} E[fỹ(t)(j)] − OPT/n
Lemma 2 (cont.)
Continuing the bound derivation
F(y(t + δ)) − F(y(t)) ≥ δ(1 − nδ) Σ_{j∈s(t)} E[fỹ(t)(j)]
  ≥ δ(1 − nδ)·(max_{S∈I} Σ_{j∈S} E[fỹ(t)(j)] − OPT/n)
  ≥ δ(1 − 1/n)·(OPT − F(y(t)) − OPT/n)                (by Lemma 1, and nδ = 1/n)
  ≥ δ·(OPT' − F(y(t)))                                 (defining OPT' = (1 − 2/n)·OPT)
Corollary
OPT' − F(y(t + δ)) ≤ (1 − δ)·(OPT' − F(y(t)))
The distance to OPT' diminishes by a factor of (1 − δ) at each step.
Lemma 2 (cont.)
Wrapping up the proof
After all 1/δ steps have been performed we get:
OPT' − F(y(1)) ≤ (1 − δ)^{1/δ}·(OPT' − F(y(0))) ≤ (1/e)·(OPT' − F(y(0)))
F is always non-negative, therefore:
F(y) = F(y(1)) ≥ (1 − 1/e)·OPT' = (1 − 1/e)(1 − 2/n)·OPT ≥ (1 − 1/e − o(1))·OPT
Algorithm Analysis - Summary
Let y be the fractional solution returned by the Continuous
Greedy Algorithm.
• y is within P(M), since it is a convex combination of the 1/δ = n² characteristic vectors 1s(t) of independent sets of M (each with coefficient δ).
• The value of F in y is: F(y) ≥ (1 – 1/e – o(1))OPT
Continuous Greedy Algorithm - Implementation
Problem
Two steps of the Continuous Greedy Algorithm are not straightforward to implement:
• Calculating ωj(t), the estimate of E[fỹ(t)(j)], to an error of no more than OPT/n².
• Finding a maximum-weight independent set in M according to the weights ωj(t).
Implementing the first step (w.h.p.)
• Each time E[fỹ(t)(j)] has to be estimated:
– Perform n^5 independent samples of fỹ(t)(j).
– Use their average as the estimate ωj(t).
• Notice that fỹ(t)(j) ≤ f({j}) ≤ OPT (unless {j} ∉ I, in which case j can never be selected and may be ignored).
• Chernoff bound ⇒ the probability that |ωj(t) − E[fỹ(t)(j)]| > OPT/n² is exponentially small.
• Union bound ⇒ w.h.p., none of the n³ estimates made during the algorithm (n per each of the n² steps) is off by more than OPT/n².
Finding Maximal Weighted Independent Set
Instance
• Matroid M = (X, I)
• Weight function w on the elements of X
Objective
Find an independent set S ∈ I maximizing w(S).
The Greedy Algorithm
1. Sort the elements of X in non-increasing weight order: j1, j2, …, jn.
2. Start with S = ∅.
3. For k = 1 to n do:
4.   If S ∪ {jk} ∈ I, add jk to S.
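A direct Python sketch of this greedy (my own illustration, assuming the matroid is given by an independence-oracle function):

def max_weight_independent_set(weights, independent):
    """Greedy maximum-weight independent set in a matroid.
    weights: dict element -> weight; independent: oracle taking a set of elements."""
    S = set()
    for j in sorted(weights, key=weights.get, reverse=True):   # non-increasing weight order
        if independent(S | {j}):
            S.add(j)
    return S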
An old fellow
• Reconsider the forest sub-graphs matroid:
– The edges of the graph are elements of the matroid.
– The independent sets are forests.
• Rewriting the greedy algorithm in terms of this matroid yields:
Translated Greedy Algorithm
1. Sort the edges of the graph in non-increasing weight order: e1, e2, …, em.
2. Start with F = ∅.
3. For k = 1 to m do:
4.   If F ∪ {ek} does not contain a cycle, add ek to F.
This is exactly Kruskal's algorithm (run for maximum instead of minimum weight).
Greedy Algorithm - Correctness
Notation
Sk – The set S after the kth element is considered.
S0 – The set S before the first element is considered, i.e., ∅.
Lemma 3
For every 0 ≤ k ≤ n, Sk ⊆ Ok for some optimal set Ok, and j1, j2, …, jk ∉ Ok − Sk.
[Figure: Ok contains Sk (the elements already taken); the remaining elements of Ok are all elements not considered yet.]
Corollary
For k = n, Lemma 3 implies:
• Sn ⊆ On
• j1, j2, …, jn ∉ On − Sn ⇒ On − Sn = ∅ ⇒ On ⊆ Sn
Therefore the result of the algorithm (Sn) is the optimal set On.
Now it all boils down to proving lemma 3.
Lemma 3 – Proof
Proof Overview
• The proof is by induction on k.
• Induction base – for k = 0:
– ∅ = S0 ⊆ O0 for any optimal set O0.
– The second condition is vacuous: no elements have been considered yet.
Induction step
Prove the lemma for k, assuming it holds for k − 1:
• If jk is not inserted into S, set Ok = Ok−1; then:
– Sk = Sk−1 ⊆ Ok−1
– j1, j2, …, jk−1 ∉ Ok−1 − Sk−1 (by the induction hypothesis)
– jk ∉ Ok−1 − Sk−1 (because if jk ∈ Ok−1, then Sk−1 ∪ {jk} ⊆ Ok−1 ∈ I, contradicting Sk−1 ∪ {jk} ∉ I)
• If jk is inserted into S, then we need to construct a matching Ok.
Lemma 3 – Proof (cont.)
Construction of Ok
1. Initially, Ok ← Sk.
2. While |Ok| < |Ok−1| do:
3.   Find j ∈ Ok−1 − Ok such that Ok ∪ {j} ∈ I.
4.   Set Ok ← Ok ∪ {j}.
Construction Correctness
• Line 3 can be implemented because M is a matroid (augmentation property).
• At the end, Ok = Ok−1 ∪ {jk} − {jh} for some jh.
• Ok is an optimal set:
– jh ∈ Ok−1 − Sk−1 ⇒ h > k ⇒ w(jh) ≤ w(jk) ⇒ w(Ok−1) ≤ w(Ok)
– However, since we know that Ok−1 is a maximum-weight independent set: w(Ok−1) = w(Ok).
• j1, j2, …, jk ∉ Ok − Sk:
– Holds for j1, j2, …, jk−1 because Ok − Ok−1 = {jk}.
– Holds for jk because jk ∈ Sk.
Milestone
Achievement
We showed how to find a fractional solution y such that:
• F(y) ≥ (1 − 1/e − o(1))·OPT
• y ∈ P(M)
What’s next?
• We need to convert y into an integral solution.
• The integral solution must be the characteristic vector of
an independent set of M.
• The conversion should not decrease the value of the
solution significantly (in expectation).
Rounding in the Submodular Welfare Case
SWP to SMSMC Reduction - Reminder
• Ground set: X = P × Q.
• Given a set S ⊆ X, Sj is the set of items allocated to player j. The function f: 2^X → ℝ+ is defined as:
f(S) = Σ_{j=1}^n wj(Sj)
• The matroid M = (X, I) allows only solutions assigning each item to at most one player:
I = {S ⊆ X | ∀q ∈ Q: |S ∩ (P × {q})| ≤ 1}
Notation
• Let yij be the extent to which the jth item is allocated to the ith player in y, i.e., the value of the coordinate (i, j) of y.
• Since y ∈ P(M), we know that:
Σ_{i=1}^n yij ≤ 1
(each item is allocated to a total extent of at most 1)
Rounding Procedure
Procedure
• For each item j, randomly allocate it to a player; the probability of assigning it to the ith player is yij (if Σi yij < 1, with the remaining probability the item stays unallocated).
• This procedure is guaranteed to generate a valid integral solution, because each item is allocated to at most one player.
Value Preservation
• Let Ri be the set of items allocated to the ith player.
• Notice that Ri has the same distribution as the set of items allocated to the ith player by ỹ.
• The expected utility of the ith player is E[wi(Ri)].
• This is also the contribution of the ith player to F(y):
F(y) = E[Σ_{i=1}^n wi(Ri)] = Σ_{i=1}^n E[wi(Ri)]
Hence the expected value of the rounded solution equals F(y).
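A minimal sketch of this rounding step (my own illustration; here y is given as a nested dict y[i][j] of the fractional solution restricted to player/item pairs):

import random

def round_swp(players, items, y):
    """Randomized rounding for SWP.
    y[i][j] is the fractional extent to which item j is given to player i (sums to <= 1 over i).
    Returns a dict player -> set of allocated items."""
    allocation = {i: set() for i in players}
    for j in items:
        r = random.random()
        acc = 0.0
        for i in players:
            acc += y[i][j]
            if r < acc:                  # item j goes to player i with probability y[i][j]
                allocation[i].add(j)
                break
        # if r >= acc after the loop, item j stays unallocated (possible when sum_i y[i][j] < 1)
    return allocation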
Rounding in the General Case
• The rounding procedure presented crucially
depends on:
– The structure of the specific matroid.
– The linearity of F (in terms of the utility functions).
• In the general case we need to use a stronger
rounding method: Pipage rounding.
• Unlike randomized rounding, Pipage rounding
preserves constraints.
• Pipage rounding was first proposed by Ageev and Sviridenko in 2004.
Rank Functions and Tight Sets
Definition
• Consider a matroid M = (X, I).
• The rank function rM: 2^X → ℕ induced by M is:
rM(A) = max{|S| : S ⊆ A, S ∈ I}
• Informally, rM(A) maps each set A to the size of the largest independent set contained in it.
Lemma
The rank function induced by a matroid M is submodular.
Proof
• Consider two sets A, B ⊆ X.
• Let SA∩B be a maximal independent set in A ∩ B.
• Extend SA∩B to maximal independent sets SA and SB in A and B, respectively.
• Extend SA to a maximal independent set SA∪B in A ∪ B.
Rank Functions and Tight Sets (cont.)
Proof - Continuation
• SA∩B = SA ∩ SB (SA ∩ SB is an independent subset of A ∩ B containing the maximal set SA∩B).
• |SA∪B| ≤ |SA ∪ SB|, otherwise:
– |SA∪B − (SA − SA∩B)| > |SB|
– SA∪B − (SA − SA∩B) ⊆ B, otherwise SA is not maximal in A.
– Contradiction, because SB is a maximal independent set in B (all such sets have the same size).
• rM(A ∪ B) + rM(A ∩ B) = |SA∪B| + |SA∩B| ≤ |SA ∪ SB| + |SA ∩ SB| = |SA| + |SB| = rM(A) + rM(B).
Observation
Given a vector y ∈ P(M) and a set A ⊆ X:
Σ_{i∈A} yi ≤ rM(A)
Why? The inequality holds for the characteristic vector of every independent set; therefore it must also hold for every convex combination of such vectors.
Definition
Given a vector y ∈ P(M), a set A ⊆ X is tight if:
Σ_{i∈A} yi = rM(A)
Rank Functions and Tight Sets (cont.)
Lemma
Let A and B be two tight sets. Then:
• The intersection A ∩ B is a tight set.
• The union A ∪ B is a tight set.
Proof
• By the submodularity of rM:
rM(A ∩ B) + rM(A ∪ B) ≤ rM(A) + rM(B)
• It can be easily checked that:
Σ_{j∈A} yj + Σ_{j∈B} yj = Σ_{j∈A∩B} yj + Σ_{j∈A∪B} yj
• Since A and B are tight, this implies:
rM(A ∩ B) + rM(A ∪ B) ≤ Σ_{j∈A∩B} yj + Σ_{j∈A∪B} yj
• Due to the observation, Σ_{j∈A∩B} yj ≤ rM(A ∩ B) and Σ_{j∈A∪B} yj ≤ rM(A ∪ B); hence both must hold with equality:
rM(A ∩ B) = Σ_{j∈A∩B} yj  and  rM(A ∪ B) = Σ_{j∈A∪B} yj
Algorithm Preliminaries
Assumption
We assume X is a tight set under y. Otherwise:
• y ∈ P(M) is a convex combination of independent sets I1, I2, …, Ip.
• Replace each set Ik with a maximum-cardinality independent set containing it (its size must be rM(X)).
• The new vector is still in P(M), and now X is tight.
• Since F is monotone, F does not decrease.
Notation
• yij(ε) – the vector y with yi increased by ε and yj decreased by ε.
• yij+ = yij(εij+(y)), where εij+(y) = max{ε ≥ 0 | yij(ε) ∈ P(M)}.
• yij− = yij(εij−(y)), where εij−(y) = min{ε ≤ 0 | yij(ε) ∈ P(M)}.
The Pipage Rounding Algorithm
Input
• A matroid M = (X, I).
• A vector y ∈ P(M) such that X is tight.
Pseudo Code
1. While y is not integral do:
2.   Let A be a minimal tight set containing two elements i, j ∈ A such that yi, yj are fractional.
3.   If F(yij+) ≥ F(yij−)
4.     then y ← yij+
5.     else y ← yij−
6. Output y.
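A skeletal Python rendering of this loop (my own sketch, not from the talk). It assumes helper routines find_tight_pair (returning a pair i, j of fractional coordinates inside a minimal tight set), eps_plus and eps_minus (the step sizes εij+ and εij− defined above, computable with standard matroid machinery), and an (estimated) evaluator F:

def pipage_rounding(y, F, find_tight_pair, eps_plus, eps_minus, tol=1e-9):
    """Round a fractional y (dict element -> value) in P(M) to an integral point,
    without decreasing F (up to estimation error)."""
    y = dict(y)
    while any(tol < v < 1 - tol for v in y.values()):        # y is still fractional
        i, j = find_tight_pair(y)                            # fractional i, j in a minimal tight set
        e_plus, e_minus = eps_plus(y, i, j), eps_minus(y, i, j)
        y_plus = {**y, i: y[i] + e_plus, j: y[j] - e_plus}
        y_minus = {**y, i: y[i] + e_minus, j: y[j] - e_minus}
        y = y_plus if F(y_plus) >= F(y_minus) else y_minus   # keep the better endpoint
    return {k: round(v) for k, v in y.items()}               # snap away numerical noise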
Next steps
• Analyzing the algorithm's performance.
• Describing how to implement the algorithm (sketch).
Pipage Rounding is Well Defined
Aim
We need to show that there is always a valid set A, as long as y is
fractional.
Observation
If a tight set A contains a non-integral coordinate, then it contains at least two fractional coordinates, because:
rM(A) = Σ_{i∈A} yi is an integer.
Corollary
• All we need to show is that there is a tight set containing a fractional coordinate.
• Consider the set X:
– X is tight: the sum Σ_{i∈X} yi does not change throughout the algorithm (ε is added to one coordinate and subtracted from another).
– X contains all coordinates ⇒ whenever y is fractional, X is a tight set containing a fractional coordinate.
Pipage Rounding is a Rounding
Lemma
The algorithm converges to an integral solution within O(n²) iterations.
Proof
• For simplicity, assume that the algorithm always chooses a tight set A of minimum cardinality.
• Let A1, A2, …, An be the sets chosen by the algorithm in n consecutive iterations.
• Assume no coordinate of y becomes integral in the iterations corresponding to A1, A2, …, An−1; then:
– We will prove |A1| > |A2| > … > |An|.
– |An| ≥ 2 and |A1| ≤ n; together these give |A1| ≥ n + 1 > n, a contradiction.
• Therefore at least one additional coordinate of y must become integral within every n − 1 iterations, giving O(n²) iterations in total.
Pipage Rounding is a Rounding (cont.)
Aim
• Consider the iteration in which Ai was chosen, for some 1 ≤ i ≤ n − 1.
• We want to show |Ai+1| < |Ai|.
Proof
• The matroid polytope can be equivalently defined as:
P(M) = {y ≥ 0 | ∀S ⊆ X: Σ_{j∈S} yj ≤ rM(S)}
• Since no coordinate of y becomes integral, there must be a set B ⊆ X which becomes tight and prevents us from moving y any further.
• Either i ∈ B or j ∈ B, but not both; otherwise Σ_{k∈B} yk does not change and B could not stop the move.
• Consider Ai ∩ B:
– It is the intersection of two tight sets, therefore it is also tight.
– It contains a fractional coordinate (the one of i, j that belongs to B).
– |Ai+1| ≤ |Ai ∩ B| < |Ai| (Ai ∩ B misses one of i, j ∈ Ai, and Ai+1 is chosen of minimum cardinality).
Pipage Rounding Preserves the Value
Lemma
The value of F(y) does not decrease during the rounding.
Proof
• We need to show that at least one of the following holds:
– F(yij+) ≥ F(y)
– F(yij−) ≥ F(y)
• For that, it is enough to show that F is convex in the direction
(0, …, 0, 1, 0, …, 0, −1, 0, …, 0)
(1 in the ith place, −1 in the jth place): a convex function over a segment attains its maximum at one of the endpoints.
• Let us replace yi by yi + t and yj by yj − t in the definition of F.
• By the submodularity of f we get:
∂²F/∂t² = −2·( E[f(ỹ) | i, j ∈ ỹ] + E[f(ỹ) | i, j ∉ ỹ] − E[f(ỹ) | i ∈ ỹ, j ∉ ỹ] − E[f(ỹ) | i ∉ ỹ, j ∈ ỹ] ) ≥ 0
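A short derivation sketch of this inequality (my own, written in LaTeX notation), using only the fact that F is multilinear in each coordinate. Fixing the distribution of all other coordinates, write

\[
F(y) = A\,(1-y_i)(1-y_j) + B\,y_i(1-y_j) + C\,(1-y_i)\,y_j + D\,y_i y_j ,
\]

where A = E[f(ỹ) | i, j ∉ ỹ], B = E[f(ỹ) | i ∈ ỹ, j ∉ ỹ], C = E[f(ỹ) | i ∉ ỹ, j ∈ ỹ] and D = E[f(ỹ) | i, j ∈ ỹ]. Substituting yi + t and yj − t and differentiating twice in t gives

\[
\frac{d^2}{dt^2}\, F\bigl(y + t(e_i - e_j)\bigr) = 2\,(B + C - A - D) \ge 0 ,
\]

since submodularity of f gives D + A ≤ B + C.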
Pipage Rounding - Implementation
• The set A and the values εij+(y), εij−(y) can be computed in polynomial time using known methods.
• The values F(yij+) and F(yij-) can be approximated to an error
polynomially small in n, by averaging over a polynomial number of
samples:
– With enough samples, each pipage iteration loses only a (1 − o(1/n²)) factor of the value w.h.p.
– Hence the complete pipage rounding loses only a (1 − o(1)) factor w.h.p., because it makes only O(n²) iterations.
Rounding - Summary
• Using the pipage rounding algorithm we get a valid integral solution
for SMSMC.
• The approximation ratio is:
(1 − o(1))·(1 − 1/e − o(1)) = 1 − 1/e − o(1)
Concluding Remarks
• The approximation ratio can be improved to (1 – 1/e) using:
– More samples in the estimation of ωj(t).
– Setting δ = o(1/n³): more iterations.
– A more careful analysis.
• The rounding step is necessary:
– y ∈ P(M) is a convex combination of independent sets I1, I2, …, Ip.
– Since f is non-linear, it is possible for f(Ik) to be negligible in comparison to F(y) for all 1 ≤ k ≤ p.
• Consider the case of SWP in which all players have identical utility functions:
– The algorithm might end up assigning each item with equal
probability to each player.
– It can be shown that this is the best one can do in this case.
?