Alternating Direction Augmented Lagrangian
Algorithms for Convex Optimization
Donald Goldfarb
Joint work with Bo Huang, Shiqian Ma, Tony Qin, Katya Scheinberg,
Zaiwen Wen and Wotao Yin
Department of IEOR, Columbia University
BIRS Workshop on Sparse Statistics, Optimization and
Machine Learning
January 16-21 2011
Alternating direction augmented Lagrangian (ADAL)
methods
Alternating direction methods: go back to Peaceman, Rachford, Douglas, Gabay and Mercier, Glowinski and Marrocco, Lions and Mercier, Passty, and others
Augmented Lagrangian methods: Hestenes, Powell,
Rockafellar
Motivation for Studying ADAL Methods
Motivation:
Current optimization problems of interest in machine learning,
data mining, medical imaging, etc., have enormous numbers
of variables/constraints
Only first-order methods are practical
It is necessary to take advantage of the structure (e.g.,
sparsity) of the optimal solution
Part I: Introduction
(SUM-K)    min F(x) ≡ Σ_{i=1}^{K} f_i(x)
(SUM-2)    min F(x) ≡ f(x) + g(x)
Minimize the sum of convex functions
Assume the following problem is easy (a prox sketch is given below):
    min_x  τ f_i(x) + ½‖x − y‖²
Examples of f_i: ‖x‖₁, ‖x‖₂, ‖Ax − b‖², ‖X‖_*, −log det(X), ‖x‖_{1,2} ≡ Σ_{g∈G} ‖x_g‖₂, ...
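For several of these functions the assumed-easy subproblem min_x τ f_i(x) + ½‖x − y‖² has a closed-form solution (the proximal map of τ f_i). A minimal NumPy sketch of two such prox maps; the function names are placeholders, not from the talk:

```python
import numpy as np

def prox_l1(y, tau):
    """Soft-thresholding: solves min_x tau*||x||_1 + 0.5*||x - y||^2."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def prox_nuclear(Y, tau):
    """Matrix shrinkage: solves min_X tau*||X||_* + 0.5*||X - Y||_F^2
    by soft-thresholding the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```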
Examples
Compressed sensing (Lasso):
    min  ρ‖x‖₁ + ½‖Ax − b‖₂²
Matrix Rank Min:
    min  ρ‖X‖_* + ½‖A(X) − b‖₂²
Robust PCA:
    min_{X,Y}  ‖X‖_* + ρ‖Y‖₁ : X + Y = M
Sparse Inverse Covariance Selection:
    min  −log det(X) + ⟨Σ, X⟩ + ρ‖X‖₁
Group Lasso:
    min  ρ‖x‖_{1,2} + ½‖Ax − b‖₂²
Variable Splitting
(SUM-2)    min  f(x) + g(x)
Variable splitting:
    min  f(x) + g(y)   s.t.  x = y
Augmented Lagrangian function:
    L(x, y; λ) := f(x) + g(y) − ⟨λ, x − y⟩ + (1/2µ)‖x − y‖²
Augmented Lagrangian Method:
    (x^{k+1}, y^{k+1}) := arg min_{(x,y)} L(x, y; λ^k)
    λ^{k+1} := λ^k − (x^{k+1} − y^{k+1})/µ
Alternating Direction Augmented Lagrangian (ADAL)
L(x, y; λ) := f(x) + g(y) − ⟨λ, x − y⟩ + (1/2µ)‖x − y‖²
Solve the augmented Lagrangian subproblem alternately (a concrete sketch follows below):
    x^{k+1} := arg min_x L(x, y^k; λ^k)
    y^{k+1} := arg min_y L(x^{k+1}, y; λ^k)
    λ^{k+1} := λ^k − (x^{k+1} − y^{k+1})/µ
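A minimal sketch of this ADAL loop, assuming the Lasso splitting f(x) = ρ‖x‖₁ and g(y) = ½‖Ay − b‖² (the splitting, step size µ, iteration count, and all names here are illustrative choices, not from the talk):

```python
import numpy as np

def adal_lasso(A, b, rho, mu=1.0, iters=500):
    """Sketch of ADAL for min rho*||x||_1 + 0.5*||A x - b||^2
    via the splitting f(x) = rho*||x||_1, g(y) = 0.5*||A y - b||^2, x = y."""
    m, n = A.shape
    x = np.zeros(n); y = np.zeros(n); lam = np.zeros(n)
    Atb = A.T @ b
    # Form (A^T A + I/mu) once; it is reused in every y-subproblem.
    M = A.T @ A + np.eye(n) / mu
    for _ in range(iters):
        # x-step: min_x f(x) - <lam, x - y> + ||x - y||^2 / (2 mu)
        #         = soft-thresholding (prox of mu*rho*||.||_1) at y + mu*lam
        v = y + mu * lam
        x = np.sign(v) * np.maximum(np.abs(v) - mu * rho, 0.0)
        # y-step: min_y g(y) + <lam, y> + ||x - y||^2 / (2 mu)
        #         => (A^T A + I/mu) y = A^T b - lam + x/mu
        y = np.linalg.solve(M, Atb - lam + x / mu)
        # multiplier update
        lam = lam - (x - y) / mu
    return x, y
```

The x-step is a shrinkage and the y-step a linear solve, so the per-iteration cost is dominated by the solve with the fixed matrix A^T A + I/µ.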
Symmetric ADAL
L(x, y; λ) := f(x) + g(y) − ⟨λ, x − y⟩ + (1/2µ)‖x − y‖²
Symmetric version:
    x^{k+1} := arg min_x L(x, y^k; λ^k)
    λ^{k+1/2} := λ^k − (x^{k+1} − y^k)/µ
    y^{k+1} := arg min_y L(x^{k+1}, y; λ^{k+1/2})
    λ^{k+1} := λ^{k+1/2} − (x^{k+1} − y^{k+1})/µ
Optimality conditions lead to (assuming f and g are smooth)
    λ^{k+1/2} = ∇f(x^{k+1}),    λ^{k+1} = −∇g(y^{k+1})
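As a quick check of the first identity (not spelled out on the slide): the first-order optimality condition of the x-subproblem, with L as defined above, reads

```latex
0 = \nabla f(x^{k+1}) - \lambda^k + \tfrac{1}{\mu}\,(x^{k+1} - y^k)
\quad\Longrightarrow\quad
\nabla f(x^{k+1}) = \lambda^k - \tfrac{1}{\mu}\,(x^{k+1} - y^k) = \lambda^{k+1/2},
```

and the same argument applied to the y-subproblem (whose multiplier is λ^{k+1/2}) gives λ^{k+1} = −∇g(y^{k+1}).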
Alternating Linearization Method
(SUM-2)    min F(x) ≡ f(x) + g(x)
Define
    Q_g(u, v) := f(u) + g(v) + ⟨∇g(v), u − v⟩ + (1/2µ)‖u − v‖²
    Q_f(u, v) := f(u) + ⟨∇f(u), v − u⟩ + (1/2µ)‖u − v‖² + g(v)
Alternating Linearization Method (ALM)
    x^{k+1} := arg min_x Q_g(x, y^k)
    y^{k+1} := arg min_y Q_f(x^{k+1}, y)
Gauss-Seidel like algorithm
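Written in terms of proximal maps, minimizing Q_g(·, y) is a prox step on f taken at a gradient step on g, and minimizing Q_f(x, ·) is a prox step on g taken at a gradient step on f. A minimal generic sketch under that reading; the callable names are placeholders, and prox_f(v, mu) is assumed to return arg min_u µ f(u) + ½‖u − v‖²:

```python
def alm(prox_f, grad_f, prox_g, grad_g, x0, mu, iters=100):
    """Sketch of the alternating linearization method (ALM) for F = f + g.
    Each step keeps one function exact and linearizes the other, so it only
    needs the prox of one function and the gradient of the other."""
    y = x0.copy()
    for _ in range(iters):
        # x-step: minimize Q_g(x, y) = prox of mu*f at a gradient step on g
        x = prox_f(y - mu * grad_g(y), mu)
        # y-step: minimize Q_f(x, y) = prox of mu*g at a gradient step on f
        y = prox_g(x - mu * grad_f(x), mu)
    return y
```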
Key Lemma for Proof
F(x) := f(x) + g(x), where f and g are convex.
    Q_g(x, y) := f(x) + g(y) + ⟨∇g(y), x − y⟩ + (1/2µ)‖x − y‖²
    p_g(y) := arg min_x Q_g(x, y)
Key Lemma:
    2µ(F(x) − F(p_g(y))) ≥ ‖p_g(y) − x‖² − ‖y − x‖²
Complexity Bound for ALM
Theorem (Goldfarb, Ma and Scheinberg, 2009)
Assume ∇f and ∇g are Lipschitz continuous with constants L(f) and L(g). For µ ≤ 1/max{L(f), L(g)}, ALM satisfies
    F(y^k) − F(x*) ≤ ‖x⁰ − x*‖² / (4µk)
Therefore,
    convergence in objective value
    O(1/ε) iterations for an ε-optimal solution
The first complexity result for splitting and alternating direction type methods
Can we improve the complexity?
Can we extend this result to the ADAL method?
Optimal Gradient Methods
    min f(x)   (assuming ∇f is Lipschitz continuous)
ε-optimal solution: f(x) − f(x*) ≤ ε
Classical gradient method:
    x^k = x^{k−1} − τ_k ∇f(x^{k−1})
    Complexity O(1/ε)
Nesterov's acceleration technique (1983):
    x^k := y^{k−1} − τ_k ∇f(y^{k−1})
    y^k := x^k + ((k − 1)/(k + 2))(x^k − x^{k−1})
    Complexity O(1/√ε)
Optimal first-order method; best one can get
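A minimal sketch contrasting the two updates; grad, the step size tau, and the iteration count are placeholder inputs:

```python
import numpy as np

def gradient_descent(grad, x0, tau, iters):
    """Classical gradient method: O(1/eps) iterations."""
    x = x0.copy()
    for _ in range(iters):
        x = x - tau * grad(x)
    return x

def nesterov(grad, x0, tau, iters):
    """Nesterov's accelerated gradient method (1983): O(1/sqrt(eps)) iterations."""
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(1, iters + 1):
        x = y - tau * grad(y)
        # momentum step with weight (k - 1)/(k + 2), as on the slide
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        x_prev = x
    return x_prev
```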
ISTA and FISTA (Beck and Teboulle, 2009)
Assume g is smooth
    min  F(x) ≡ f(x) + g(x)
Fixed Point Algorithm (also called ISTA in compressed sensing)
    x^{k+1} := arg min_x Q_g(x, x^k)
or equivalently
    x^{k+1} := arg min_x  τ f(x) + ½‖x − (x^k − τ ∇g(x^k))‖²
g is never minimized
Iteration complexity: O(1/ε) for an ε-optimal solution (F(x^k) − F(x*) ≤ ε)
ISTA and FISTA (Beck and Teboulle, 2009)
    min  F(x) ≡ f(x) + g(x)
Fast ISTA (FISTA)
    x^k := arg min_x  τ f(x) + ½‖x − (y^k − τ ∇g(y^k))‖²
    t_{k+1} := (1 + √(1 + 4t_k²)) / 2
    y^{k+1} := x^k + ((t_k − 1)/t_{k+1})(x^k − x^{k−1})
Complexity O(1/√ε)
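A minimal FISTA sketch for the Lasso instance min ρ‖x‖₁ + ½‖Ax − b‖², where f(x) = ρ‖x‖₁ is the prox-friendly part and g is the smooth least-squares part; the step size τ ≤ 1/‖A‖₂² and all names are illustrative:

```python
import numpy as np

def fista_lasso(A, b, rho, tau, iters=500):
    """Sketch of FISTA for min rho*||x||_1 + 0.5*||A x - b||^2,
    with tau <= 1/L where L = ||A||_2^2 is the Lipschitz constant of grad g."""
    n = A.shape[1]
    x_prev = np.zeros(n)
    y = np.zeros(n)
    t = 1.0
    for _ in range(iters):
        # soft-threshold (prox of tau*rho*||.||_1) at a gradient step on g
        v = y - tau * (A.T @ (A @ y - b))
        x = np.sign(v) * np.maximum(np.abs(v) - tau * rho, 0.0)
        # momentum update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev
```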
Fast Alternating Linearization Method
ALM
    x^{k+1} := arg min_x Q_g(x, y^k)
    y^{k+1} := arg min_y Q_f(x^{k+1}, y)
Accelerate ALM in the same way as FISTA
Fast Alternating Linearization Method (FALM)
    x^k := arg min_x Q_g(x, z^k)
    y^k := arg min_y Q_f(x^k, y)
    w^k := (x^k + y^k)/2
    t_{k+1} := (1 + √(1 + 4t_k²)) / 2
    z^{k+1} := w^k + (1/t_{k+1}) (t_k(y^k − w^{k−1}) − (w^k − w^{k−1}))
Computational effort at each iteration is almost unchanged
Both f and g must be smooth; however, both are minimized
Complexity Bound for FALM
min F (x) ≡ f (x) + g (x)
Theorem (Goldfarb, Ma and Scheinberg, 2009)
Assume ∇f and ∇g are Lipschitz continuous with constants L(f) and L(g). For µ ≤ 1/max{L(f), L(g)}, FALM satisfies
    F(y^k) − F(x*) ≤ ‖x⁰ − x*‖² / (µ(k + 1)²)
Therefore,
    convergence in objective value
    O(1/√ε) iterations for an ε-optimal solution
    Optimal first-order method
ALM with skipping steps
At the k-th iteration of ALM-S:
    x^{k+1} := arg min_x L_µ(x, y^k; λ^k)
    If F(x^{k+1}) > L_µ(x^{k+1}, y^k; λ^k), then x^{k+1} := y^k
    y^{k+1} := arg min_y Q_f(x^{k+1}, y)
    λ^{k+1} := ∇f(x^{k+1}) − (x^{k+1} − y^{k+1})/µ
Note that only one function is required to be smooth.
Complexity Bound for ALM-S
Theorem (Goldfarb, Ma and Scheinberg, 2010)
Assume ∇f is Lipschitz continuous. For µ ≤ 1/L(f), the iterates y^k in ALM-S satisfy
    F(y^k) − F(x*) ≤ ‖x⁰ − x*‖² / (2µ(k + k_s)),   ∀k,
where k_s is the number of iterations up to the k-th for which F(x^{k+1}) ≤ L_µ(x^{k+1}, y^k; λ^k). Thus O(1/ε) iterations to obtain an ε-optimal solution.
A similar algorithm can be designed for FALM, with O(1/√ε) complexity and only one function required to be smooth.
Conjecture: Complexity result for ADAL and fast ADAL
(FADAL)
Theorem (Conjectured)
Assume both ∇f and ∇g are Lipschitz continuous. For µ ≤ 1/max{L(f), L(g)}, ADAL and FADAL need O(1/ε) and O(1/√ε) iterations, respectively, to obtain an ε-optimal solution.
No proof currently known.
Basis for possible proof
Let A := ∂f, B := ∂g, and define the operator
    S := (I − µA)(I + µA)^{−1}(I − µB)(I + µB)^{−1}
The k-th iteration of ALM can be written as
    v^{k+1} = S ∘ v^k,   where v^k = (I + µB)y^k for all k.
We can verify that at the k-th iteration of ADAL the following relation holds:
    v^{k+1} = ½(I + S) ∘ v^k
Fast Generalized Alternating Direction Augmented
Lagrangian (FGADAL)
Choose µ and a sequence θ_k = min{1, (4/ρ)/(k + 2)}, with 0 < ρ ≤ 2.
    x^k = arg min_x L_µ(x, y^k; λ^k)
    λ^{k+1/2} = λ^k − ((ρ − 1)/µ)(x^k − y^k)
    z^k = x^k + θ_k((ρ/2)θ_{k−1}^{−1} − 1)[x^k − x^{k−1} − (1 − ρ/2)(y^k − y^{k−1}) + (µλ^{k+1/2} − µλ^{k−1/2}) − (1 − ρ/2)(µλ^k − µλ^{k−1})]
    y^{k+1} = arg min_y L_µ(x^{k+1}, y; λ^{k+1/2})
    λ^{k+1} = λ^{k+1/2} − (1/µ)(x^{k+1} − y^{k+1})
If ρ = 2, FGADAL reduces to FALM.
If ρ = 1, FGADAL is a fast version of ADAL.
No proof of complexity currently known.
SUM-K
From P. L. Lions and B. Mercier's 1979 paper on operator splitting: generalization from 2 to K is possible, but the convergence proof for K ≥ 3 is difficult
    min  F(x) ≡ f(x) + g(x) + h(x)
Define
    Q_gh(u, v, w) := f(u) + g(v) + ⟨∇g(v), u − v⟩ + ‖u − v‖²/2µ + h(w) + ⟨∇h(w), u − w⟩ + ‖u − w‖²/2µ
    x^{k+1} := arg min_x Q_gh(x, y^k, z^k)
    y^{k+1} := arg min_y Q_fh(x^{k+1}, y, z^k)
    z^{k+1} := arg min_z Q_fg(x^{k+1}, y^{k+1}, z)
However, no complexity results for this Gauss-Seidel like algorithm!
MSA: A Jacobi Type Algorithm
    min  F(x) ≡ f(x) + g(x) + h(x)
Multiple Splitting Algorithm (MSA)
    x^{k+1} := arg min_x Q_gh(x, w^k, w^k)
    y^{k+1} := arg min_y Q_fh(w^k, y, w^k)
    z^{k+1} := arg min_z Q_fg(w^k, w^k, z)
    w^{k+1} := (x^{k+1} + y^{k+1} + z^{k+1})/3
Jacobi type algorithm
Can be done in parallel
We have a complexity result!
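Each MSA block reduces to a proximal step on one function taken at a gradient step on the sum of the other two, which is why the three updates are independent and can run in parallel. A minimal generic sketch under that reading; the callable names are placeholders, and prox_f(v, s) is assumed to return arg min_u s·f(u) + ½‖u − v‖²:

```python
def msa(prox_f, grad_f, prox_g, grad_g, prox_h, grad_h, w0, mu, iters=100):
    """Sketch of the multiple splitting algorithm (MSA) for F = f + g + h.
    Each block keeps one function exact (via its prox) and linearizes the
    other two at the current average w; the three blocks are independent."""
    w = w0.copy()
    for _ in range(iters):
        # Minimizing Q_gh(x, w, w) reduces to a prox of (mu/2)*f at a gradient
        # step on g + h, and similarly for the other two blocks.
        x = prox_f(w - 0.5 * mu * (grad_g(w) + grad_h(w)), 0.5 * mu)
        y = prox_g(w - 0.5 * mu * (grad_f(w) + grad_h(w)), 0.5 * mu)
        z = prox_h(w - 0.5 * mu * (grad_f(w) + grad_g(w)), 0.5 * mu)
        w = (x + y + z) / 3.0   # average the three estimates
    return w
```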
Complexity Bound for MSA
min F (x) ≡ f (x) + g (x) + h(x)
Theorem (Goldfarb and Ma, 2009)
Assume ∇f, ∇g and ∇h are Lipschitz continuous with constants L(f), L(g) and L(h). For µ ≤ 1/max{L(f), L(g), L(h)}, MSA satisfies
    min{F(x^k), F(y^k), F(z^k)} − F(x*) ≤ ‖x⁰ − x*‖² / (µk).
Therefore,
    convergence in objective value
    O(1/ε) iterations for an ε-optimal solution
Fast Multiple Splitting Algorithm
Fast Multiple Splitting Algorithm (FaMSA)
    x^k := arg min_x Q_gh(x, w_x^k, w_x^k)
    y^k := arg min_y Q_fh(w_y^k, y, w_y^k)
    z^k := arg min_z Q_fg(w_z^k, w_z^k, z)
    w^k := (x^k + y^k + z^k)/3
    t_{k+1} := (1 + √(1 + 4t_k²)) / 2
    w_x^{k+1} := w^k + (1/t_{k+1})[t_k(x^k − w^k) − (w^k − w^{k−1})]
    w_y^{k+1} := w^k + (1/t_{k+1})[t_k(y^k − w^k) − (w^k − w^{k−1})]
    w_z^{k+1} := w^k + (1/t_{k+1})[t_k(z^k − w^k) − (w^k − w^{k−1})]
Complexity Bound for FaMSA
min F (x) ≡ f (x) + g (x) + h(x)
Theorem (Goldfarb and Ma, 2009)
Assume ∇f, ∇g and ∇h are Lipschitz continuous with constants L(f), L(g) and L(h). For µ ≤ 1/max{L(f), L(g), L(h)}, FaMSA satisfies
    min{F(x^k), F(y^k), F(z^k)} − F(x*) ≤ 4‖x⁰ − x*‖² / (µ(k + 1)²)
Therefore,
    convergence in objective value
    O(1/√ε) iterations for an ε-optimal solution
    Optimal first-order method
Comparison of ALM/FALM and MSA/FaMSA
ALM/FALM
Gauss-Seidel like algorithms
expected to be faster than MSA/FaMSA since information from the current iteration is used
complexity results for (SUM-2); no results for (SUM-K) when K ≥ 3
only one function needs to be smooth
MSA/FaMSA
Jacobi like algorithms
can be done in parallel
complexity results for (SUM-K) for any K ≥ 2
Comparison on compressed sensing model with ρ = 0.01
solver    obj at k = 200   k = 500       k = 800       k = 1000      cpu (iter)*
FALM-S    9.726599e+4      9.341282e+4   9.182962e+4   9.121742e+4    24.3  (51)
FALM      9.516208e+4      9.186355e+4   9.073086e+4   9.028790e+4    23.1  (51)
FISTA     9.752858e+4      9.372093e+4   9.233719e+4   9.178455e+4    26.0  (69)
ALM-S     1.107103e+5      1.042869e+5   1.021905e+5   1.013128e+5   208.9 (531)
ALM       1.116683e+5      1.047410e+5   1.025611e+5   1.016589e+5   208.1 (581)
ISTA      1.079721e+5      1.040666e+5   1.025107e+5   1.018068e+5   196.8 (510)
SALSA     1.132676e+5      1.054600e+5   1.031346e+5   1.021898e+5   223.9 (663)
SADAL     1.068386e+5      1.021905e+5   1.004005e+5   9.961905e+4   113.5 (332)
* cpu (iter) to achieve F(x) ≤ 1.04e+5
Figure: comparison of the algorithms (objective value vs. iteration; panels compare ALM vs. ISTA, FALM vs. FISTA, FALM vs. ALM, and FALM-S vs. ALM-S)
Application: Robust PCA
Robust PCA:
    min_{X,Y ∈ R^{m×n}}  {rank(X) + ρ‖Y‖₀ : X + Y = M}
Recently it has been shown that, under suitable conditions on the rank of X and the sparsity of Y, and for ρ in a suitable range, this generally NP-hard problem can be solved by solving the convex optimization problem
    min_{X,Y ∈ R^{m×n}}  {‖X‖_* + ρ‖Y‖₁ : X + Y = M}
ALM and FALM for Robust PCA
Robust PCA: f(X) = ‖X‖_*,  g(Y) = ρ‖Y‖₁
    min_{X,Y ∈ R^{m×n}}  {‖X‖_* + ρ‖Y‖₁ : X + Y = M}
Subproblem wrt X (a matrix shrinkage operator, corresponds to an SVD):
    X^{k+1} := arg min_X  f(X) + g(Y^k) + ⟨γ_g(Y^k), M − X − Y^k⟩ + ‖X + Y^k − M‖_F² / 2µ
Subproblem wrt Y (a vector shrinkage operator):
    Y^{k+1} := arg min_Y  f(X^{k+1}) + ⟨γ_f(X^{k+1}), M − X^{k+1} − Y⟩ + ‖X^{k+1} + Y − M‖_F² / 2µ + g(Y)
Smoothed f(X) and g(Y): subgradients γ_f(X^k) = ∇f(X^k) and γ_g(Y^k) = ∇g(Y^k)
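A minimal sketch of these two shrinkage subproblems. Here the subgradients γ_f and γ_g are recovered from the prox optimality conditions of the previous subproblem (one convenient choice, in the spirit of the multiplier update in ALM-S; the talk's smoothed variant uses gradients of smoothed f and g instead), and all names and parameters are illustrative:

```python
import numpy as np

def svd_shrink(C, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values of C."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def alm_rpca(M, rho, mu, iters=100):
    """Sketch of the alternating iteration for
    min ||X||_* + rho*||Y||_1  s.t.  X + Y = M."""
    X = np.zeros_like(M)
    Y = np.zeros_like(M)
    gamma_g = np.zeros_like(M)          # a subgradient of rho*||.||_1 at Y
    for _ in range(iters):
        # X-subproblem: matrix shrinkage (one SVD per iteration)
        C = M - Y + mu * gamma_g
        X = svd_shrink(C, mu)
        gamma_f = (C - X) / mu          # lies in the subdifferential of ||.||_* at X
        # Y-subproblem: entrywise (vector) shrinkage
        D = M - X + mu * gamma_f
        Y = np.sign(D) * np.maximum(np.abs(D) - mu * rho, 0.0)
        gamma_g = (D - Y) / mu          # lies in the subdifferential of rho*||.||_1 at Y
    return X, Y
```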
Surveillance video
200 images with size 144 × 176, so M ∈ R25344×200
43 SVDs, CPU time: 04:03.
MATLAB code runs on a Dell Precision 670 workstation with an Intel Xeon(TM) 3.4 GHz CPU and 6 GB of RAM.
Surveillance video
300 images with size 130 × 160, so M ∈ R20800×300
45 SVDs, CPU time: 05:53.
Shadow and specularities removal from face images
65 images with size 200 × 200, so M ∈ R40000×65
42 SVDs, CPU time: 01:39
Video denoising
300 colored images with size 144 × 176, so M ∈ R25344×900
42 SVDs, CPU time: 01:00:18
ALM-S for Sparse Inverse Covariance Selection
SICS: f(X) = −log det(X) + ⟨Σ, X⟩,   g(Y) = ρ‖Y‖₁
Subproblem wrt X (corresponds to an eigenvalue decomposition):
    X^{k+1} := arg min_X  f(X) + g(Y^k) − ⟨Λ^k, X − Y^k⟩ + ‖X − Y^k‖_F² / 2µ
Subproblem wrt Y (a vector shrinkage operator):
    Y^{k+1} := arg min_Y  f(X^{k+1}) + ⟨∇f(X^{k+1}), Y − X^{k+1}⟩ + ‖Y − X^{k+1}‖_F² / 2µ + g(Y)
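The X-subproblem has a closed-form solution via one eigenvalue decomposition: its optimality condition X − µX^{−1} = Y^k − µ(Σ − Λ^k) is solved spectrally, and the Y-subproblem is an entrywise shrinkage at a gradient step on f. A minimal sketch of the two steps; the function and variable names are placeholders, not from the talk:

```python
import numpy as np

def sics_x_step(Y, Lam, Sigma, mu):
    """Sketch of the X-subproblem: minimize
    -log det(X) + <Sigma, X> - <Lam, X - Y> + ||X - Y||_F^2 / (2 mu).
    The optimality condition X - mu*X^{-1} = Y - mu*(Sigma - Lam) is solved
    in closed form via one eigenvalue decomposition."""
    W = Y - mu * (Sigma - Lam)
    d, Q = np.linalg.eigh((W + W.T) / 2.0)          # symmetrize for safety
    gamma = (d + np.sqrt(d * d + 4.0 * mu)) / 2.0   # positive roots of g^2 - d*g - mu = 0
    return (Q * gamma) @ Q.T

def sics_y_step(X, grad_fX, mu, rho):
    """Sketch of the Y-subproblem: entrywise shrinkage at X - mu*grad f(X),
    where grad f(X) = Sigma - inv(X)."""
    V = X - mu * grad_fX
    return np.sign(V) * np.maximum(np.abs(V) - mu * rho, 0.0)
```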
Numerical Results: Synthetic Data
Create data: generate a sparse matrix U ∈ R^{n×n} with nonzero entries equal to −1 or 1 with equal probability.
Compute S := (U U^⊤)^{−1} as the true covariance matrix; hence S^{−1} is sparse.
We then draw p = 5n i.i.d. vectors Y₁, ..., Y_p from the Gaussian distribution N(0, S) using the mvnrnd function in MATLAB, and use the sample covariance Σ := (1/p) Σ_{i=1}^{p} Y_i Y_i^⊤ in the SICS objective.
We compare ALM with PSM (Duchi et al., 2008) and VSM (Lu, 2009).
Termination: Dgap ≤ 10^{−3}
Numerical Results: Synthetic Data
              ALM                         PSM                         VSM
   n     iter    Dgap      CPU      iter    Dgap      CPU      iter    Dgap      CPU
ρ = 0.1
 200      300   8.70e-4      13     1682   9.99e-4      38      857   9.97e-4      37
 500      220   5.55e-4      84      861   9.98e-4     205      946   9.98e-4     377
1000      180   9.92e-4     433      292   9.91e-4     446      741   9.97e-4    1928
2000      200   6.13e-5    3110      349   1.12e-3    3759      915   1.00e-3   16085
ρ = 0.5
 200      140   9.80e-4       6     6106   1.00e-3     137     1000   9.99e-4      43
 500      100   1.69e-4      39      903   9.90e-4     212     1067   9.99e-4     425
1000      100   9.28e-4     247      489   9.80e-4     749     1039   9.95e-4    2709
2000      160   4.70e-4    2529      613   9.96e-4    6519     1640   9.99e-4   28779
ρ = 1.0
 200      180   4.63e-4       8     7536   1.00e-3     171     1296   9.96e-4      57
 500      140   4.14e-4      55     2099   9.96e-4     495     1015   9.97e-4     406
1000      160   3.19e-4     394      774   9.83e-4    1172     1310   9.97e-4    3426
2000      240   9.58e-4    3794     1158   9.35e-4   12310     2132   9.99e-4   37406
Numerical Results: Real Data
Data on gene expression networks (Li and Toh, 2010): (1) Lymph
node status; (2) Estrogen receptor; (3) Arabidopsis thaliana; (4)
Leukemia; (5) Hereditary breast cancer.
              ALM                         PSM                         VSM
   n     iter    Dgap      CPU      iter    Dgap      CPU      iter    Dgap      CPU
 587       60   9.41e-6      35      178   9.22e-4      64      467   9.78e-4     273
 692       80   6.13e-5      73      969   9.94e-4     531      953   9.52e-4     884
 834      100   7.26e-5     150      723   1.00e-3     662     1097   7.31e-4    1668
1255      120   6.69e-4     549     1405   9.89e-4    4041     1740   9.36e-4    8568
1869      160   5.59e-4    2158     1639   9.96e-4   14505     3587   9.93e-4   52978
Contributions and Future Work
Our contributions
New alternating direction augmented Lagrangian, alternating
linearization and multiple splitting methods
Optimal first-order methods
First complexity results for splitting and alternating direction
methods (including Peaceman-Rachford method)
Current and Future Work
Extension of ALM/FALM, MSA/FaMSA to constrained
problems
Line search variants
Extension of MSA/FaMSA to nonsmooth problems
Applications in many fields such as Medical Imaging, Machine
Learning, Model Selection, Optimal acquisition basis selection
(radar), etc.
Current Work: Constrained Problems
Stable Robust PCA (SRPCA): here the elements of the matrix M are assumed to be contaminated by noise.
    min_{X,Y ∈ R^{m×n}}  {rank(X) + ρ‖Y‖₀ : ‖X + Y − M‖_F ≤ σ}
As in the RPCA problem, under suitable conditions on the rank of X and the sparsity of Y, and for ρ in a suitable range, solving (SRPCA) can be accomplished by solving
    min_{X,Y ∈ R^{m×n}}  {‖X‖_* + ρ‖Y‖₁ : ‖X + Y − M‖_F ≤ σ}
We have developed ISTA/FISTA and ALM/FALM algorithms for SRPCA that require only a modest increase in work over that required to solve RPCA.
Overlapping Group Lasso: here the groups are allowed to overlap, resulting in additional linear constraints. We have developed ISTA/FISTA and ALM/FALM algorithms for this problem.
Current Work: FISTA with line search
Given µ₀ and 0 < β < 1. Cycle to find µ_k and t_k:
    x^k = arg min_y Q_f(y^k, y)
    Find the smallest i_k ≥ 0 such that µ_k = β^{i_k} µ₀ and F(x^k) ≤ Q_f(y^k, x^k)
    t_{k+1} := (1 + √(1 + 4θ_k t_k²)) / 2,   θ_k := µ_k/µ_{k+1},   so that µ_k t_k² ≥ µ_{k+1} t_{k+1}(t_{k+1} − 1)
    y^{k+1} := x^k + ((t_k − 1)/t_{k+1})(x^k − x^{k−1})
⇓
    F(x^k) − F(x*) ≤ ‖x⁰ − x*‖² / (2µ_k t_k²)
    µ_k t_k² ≥ βk²/(4L)  ⇒  F(x^k) − F(x*) ≤ 2L‖x⁰ − x*‖² / (βk²)
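A minimal sketch of FISTA with a backtracking line search on µ, for F = f + g with f smooth and g prox-friendly, matching the Q_f used on this slide. The sketch uses the standard non-increasing-µ backtracking (effectively taking θ_k = 1); the scheme above additionally rescales t_{k+1} by θ_k = µ_k/µ_{k+1}. All names are placeholders, and prox_g(v, mu) is assumed to return arg min_u µ g(u) + ½‖u − v‖²:

```python
import numpy as np

def fista_linesearch(f, grad_f, prox_g, x0, mu0, beta=0.5, iters=100):
    """Sketch of FISTA with backtracking: shrink mu until F(x) <= Q_f(y, x),
    where Q_f(y, x) = f(y) + <grad f(y), x - y> + ||x - y||^2/(2 mu) + g(x)."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    mu = mu0
    for _ in range(iters):
        gy = grad_f(y)
        fy = f(y)
        while True:
            x = prox_g(y - mu * gy, mu)
            diff = x - y
            # the g(x) terms cancel on both sides of F(x) <= Q_f(y, x)
            if f(x) <= fy + np.vdot(gy, diff) + np.vdot(diff, diff) / (2.0 * mu):
                break
            mu *= beta
        # momentum update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev
```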