EPSfun: a Matlab toolbox for implementing the simplified topological ε-algorithms and its applications

Michela Redivo-Zaglia (1), Claude Brezinski (2)

(1) Dipartimento di Matematica “Tullio Levi-Civita”, Università degli Studi di Padova, Italy
(2) Laboratoire Paul Painlevé, Université de Lille 1 - Sciences et Technologies, France

2 giorni di Algebra Lineare Numerica (Como, Italy), February 16-17, 2017
Outline
The scalar Shanks transformation and the ε-algorithm
Particular rules
The topological Shanks transformations and the (simplified)
topological ε-algorithms
The EPSFUN Matlab toolbox
Applications and numerical results
Shanks transformation and the ε–algorithm
Let (Sn) be a sequence of elements of a vector space E over a field
K (R or C) which converges to a limit S.
If the convergence is slow, it can be transformed, by a sequence
transformation, into a new sequence (or into a set of new
sequences) which, under some assumptions, converges faster to the
same limit.
When E is R or C, a well-known transformation of this kind is due to
Shanks (1955), and it can be implemented by the
scalar ε-algorithm of Wynn (1956).
We assume that the sequence (Sn) satisfies, for a fixed value of k,
the difference equation

a_0 (S_n − S) + · · · + a_k (S_{n+k} − S) = 0 ∈ E,   n = 0, 1, . . . ,

with a_0 a_k ≠ 0 and a_0 + · · · + a_k ≠ 0.
Shanks proposed a transformation able to define a new sequence
(ek (Sn )) such that
ek (Sn ) = S,
for all n.
This linear combination can be computed even if (Sn ) does not
satisfy the difference equation and, after having computed the
coefficients a0 , . . . , ak (now depending on k and n) and by adding
a normalization condition a0 + · · · + ak = 1, we set
ek (Sn ) = a0 Sn + · · · + ak Sn+k .
For computing recursively the e_k(S_n) we can use Wynn's scalar
ε-algorithm, whose rules are

ε_{−1}^{(n)} = 0,   n = 0, 1, . . . ,
ε_0^{(n)} = S_n,    n = 0, 1, . . . ,
ε_{k+1}^{(n)} = ε_{k−1}^{(n+1)} + (ε_k^{(n+1)} − ε_k^{(n)})^{−1},   k, n = 0, 1, . . . ,
and it holds

ε_{2k}^{(n)} = e_k(S_n),    ε_{2k+1}^{(n)} = 1/e_k(ΔS_n).

Thus only the quantities with an even lower index are of
interest (the quantities with an odd lower index are intermediate
computations).
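To make the rules concrete, here is a minimal Python sketch of Wynn's scalar ε-algorithm (an illustration of the formulas above, not the toolbox's SEAW routine), applied to the partial sums of log 2 = 1 − 1/2 + 1/3 − · · ·:

```python
import math

def wynn_epsilon(S, kmax):
    """Build columns eps_0, eps_1, ..., eps_kmax of Wynn's scalar
    epsilon-array for the sequence S (column eps_{-1} is all zeros)."""
    prev2 = [0.0] * len(S)          # column eps_{k-1}, initially eps_{-1}
    prev1 = list(S)                 # column eps_k, initially eps_0 = S_n
    cols = [prev1]
    for _ in range(kmax):
        # eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)})
        cur = [prev2[n + 1] + 1.0 / (prev1[n + 1] - prev1[n])
               for n in range(len(prev1) - 1)]
        prev2, prev1 = prev1, cur
        cols.append(cur)
    return cols

# Partial sums S_n of the alternating harmonic series (limit log 2)
S, s = [], 0.0
for i in range(1, 11):
    s += (-1) ** (i + 1) / i
    S.append(s)

cols = wynn_epsilon(S, 8)
# Only the even columns approximate the limit: eps_{2k}^{(n)} = e_k(S_n)
accelerated = cols[8][0]            # eps_8^{(0)} = e_4(S_0)
```

With only ten terms, the raw partial sums are accurate to a couple of digits, while ε_8^{(0)} gains several more.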
The ε-array
The ε's are put in a table, called the ε-array (each entry of column
k + 1 lies between two consecutive entries of column k in the usual
staggered layout):

ε_{−1}^{(0)} = 0   ε_0^{(0)} = S_0   ε_1^{(0)}   ε_2^{(0)}   ε_3^{(0)}   ε_4^{(0)}
ε_{−1}^{(1)} = 0   ε_0^{(1)} = S_1   ε_1^{(1)}   ε_2^{(1)}   ε_3^{(1)}
ε_{−1}^{(2)} = 0   ε_0^{(2)} = S_2   ε_1^{(2)}   ε_2^{(2)}
ε_{−1}^{(3)} = 0   ε_0^{(3)} = S_3   ε_1^{(3)}
ε_{−1}^{(4)} = 0   ε_0^{(4)} = S_4
The rhombus (or lozenge) scheme
The quantities related by the ε-algorithm form the so-called
rhombus scheme, where the quantity ε_{k+1}^{(n)} (in red on the slide) is
computed from the three quantities (in blue) around it:

                  ε_k^{(n)}
                       ↘
ε_{k−1}^{(n+1)}   →    ε_{k+1}^{(n)}
                       ↗
                  ε_k^{(n+1)}

The algorithm is implemented by ascending diagonals, and only
one vector and three auxiliary variables are needed.
The ε-array with MAXCOL=4
column 1
diagonal 1
(0)
ε0
diagonal 3
(2)
ε0
(0)
= S1
−→
(1)
ε1
(0)
ε2
↓
(1)
ε2
= S2
(2)
diagonal 4
(2)
ε2
= S3
(3)
(4)
(2)
ε3
(4)
diagonal 6
..
.
= S5
..
.
..
.
↓
↓
(2)
ε2
ε1
(0)
ε4
(1)
ε4
(3)
ε0 = S4
(5)
ε0
−→
ε3
ε1
diagonal 5
(0)
ε3
(1)
ε1
(3)
ε0
column 4
ε1
(1)
ε0
column 3
= S0
↓
diagonal 2
column 2
ε4
(3)
(4)
ε2
..
.
ε3
..
.
..
.
Particular rules

When an entry C of the ε-array is very large (C ≈ ∞, near a
singularity), its east neighbour E cannot be computed accurately by
the usual rule. Denote by N, W, C, S the entries around E as on a
compass:

        N
W       C       E
        S

Instead of computing E as

E = C + 1/(d − b)

(where b and d are the entries adjacent to C on the previous
column), we use Wynn's particular rule

E = r (1 + r/C)^{−1}

with

r = S(1 − S/C)^{−1} + N(1 − N/C)^{−1} − W(1 − W/C)^{−1}
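A minimal Python sketch of this particular rule (scalar entries; the names follow the compass layout above):

```python
def particular_rule(N, W, C, S):
    """Wynn's particular rule: compute the entry E of the epsilon-array
    from its neighbours N (north), W (west), C (center), S (south),
    in a way that remains stable when C is large (near a singularity)."""
    r = S / (1.0 - S / C) + N / (1.0 - N / C) - W / (1.0 - W / C)
    return r / (1.0 + r / C)

# Near a singularity C is huge and the rule tends to E = S + N - W,
# the classical singular rule used to jump over an isolated singularity.
E = particular_rule(2.0, 1.0, 1e15, 3.0)
```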
Particular rules: an example of isolated singularity

[Numerical ε-array (columns 1-4, diagonals 1-5): one entry ε_2 becomes infinite (an isolated singularity), and the particular rule allows the computation to jump over it and still obtain ε_4^{(0)} = 5 correctly.]
The topological ε-transformations
Let us consider the case where E is a general vector space, and we
assume again that the elements of the sequence Sn ∈ E satisfy the
difference equation
a_0 (S_n − S) + · · · + a_k (S_{n+k} − S) = 0 ∈ E,   n = 0, 1, . . .
Brezinski (1975) proposed new transformations called topological
ε-transformations.
For computing the coefficients a0 , . . . , ak he considered the relation
a0 ∆Sn + · · · + ak ∆Sn+k = 0.
Given y, an element of the dual space E ∗ of E (which means that
it is a linear functional), writing the relation from n to n + k − 1,
taking the duality product with y, and adding the normalization
condition he obtained a system of scalar relations.
Once the unknown coefficients have been computed, he defined
the first topological Shanks transformation by
ek (Sn ) = a0 Sn + · · · + ak Sn+k ,
and the second topological Shanks transformation by
ẽk (Sn ) = a0 Sn+k + · · · + ak Sn+2k .
By construction, ∀n, ek (Sn ) = ẽk (Sn ) = S if
∀n, a0 (Sn − S) + · · · + ak (Sn+k − S) = 0.
Each of these transformations can be implemented by recursive
algorithms named the Topological ε–algorithms (TEA).
However, these algorithms are quite complicated:
they possess two rules,
they require the storage of elements of E and of E ∗ ,
the duality product with y is recursively used in their rules.
Recently, simplified versions were obtained, called the
Simplified Topological ε-algorithms (STEA):
they possess only one recursive rule,
they require less storage than the initial algorithms, and only
elements of E,
the elements of the dual vector space E∗ no longer have to
be used in the recursive rules (only in their initializations),
numerical stability is improved (thanks to the particular rules of
Wynn).
The rule of the First Simplified Topological ε-algorithm,
denoted by STEA1, is

ε̂_{2k+2}^{(n)} = ε̂_{2k}^{(n+1)} + [ (ε_{2k+2}^{(n)} − ε_{2k}^{(n+1)}) / (ε_{2k}^{(n+1)} − ε_{2k}^{(n)}) ] (ε̂_{2k}^{(n+1)} − ε̂_{2k}^{(n)}),   k, n = 0, 1, . . . ,

with ε̂_0^{(n)} = S_n ∈ E, n = 0, 1, . . . Here the quantities ε̂ belong
to E, while the scalars ε_{2k}^{(n)} are computed by the scalar
ε-algorithm of Wynn, by setting the initial values

ε_0^{(n)} = ⟨y, S_n⟩,   n = 0, 1, . . .
A Second Simplified Topological ε-algorithm, denoted by
STEA2, also exists; its quantities are denoted ε̃_{2k}^{(n)}.
It holds

ε̂_{2k}^{(n)} = e_k(S_n),    ε̃_{2k}^{(n)} = ẽ_k(S_n).
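To make the interplay between the scalar and the vector quantities concrete, here is a minimal Python/NumPy sketch of STEA1 (an illustration, not the toolbox code; the variable names are ours). The scalar array is built by Wynn's ε-algorithm on ⟨y, S_n⟩, and the single STEA1 rule then produces the vector columns. As a check, a sequence with exactly two geometric modes must be transformed exactly to its limit by e_2:

```python
import numpy as np

def scalar_eps_even(t, kmax):
    """Even columns eps_0, eps_2, ..., eps_{2 kmax} of Wynn's scalar
    epsilon-array for the scalar sequence t."""
    prev2, prev1 = [0.0] * len(t), list(t)
    cols = [prev1]
    for _ in range(2 * kmax):
        cur = [prev2[n + 1] + 1.0 / (prev1[n + 1] - prev1[n])
               for n in range(len(prev1) - 1)]
        prev2, prev1 = prev1, cur
        cols.append(cur)
    return cols[::2]                      # keep eps_0, eps_2, eps_4, ...

def stea1(S, y, kmax):
    """First simplified topological epsilon-algorithm: returns the vector
    columns E[k][n] = eps-hat_{2k}^{(n)}, with E[0][n] = S_n."""
    t = [float(np.dot(y, s)) for s in S]  # initializations <y, S_n>
    sc = scalar_eps_even(t, kmax)         # sc[k][n] = scalar eps_{2k}^{(n)}
    E = [[np.asarray(s, dtype=float) for s in S]]
    for k in range(kmax):
        E.append([E[k][n + 1]
                  + (sc[k + 1][n] - sc[k][n + 1])
                  / (sc[k][n + 1] - sc[k][n])
                  * (E[k][n + 1] - E[k][n])
                  for n in range(len(E[k]) - 2)])
    return E

# A sequence with exactly two geometric modes: e_2(S_n) must equal the limit
lim = np.array([1.0, 2.0, 3.0])
u, v = np.array([1.0, 0.0, 0.5]), np.array([0.0, 1.0, -0.2])
S = [lim + 0.8 ** n * u + (-0.5) ** n * v for n in range(7)]
E = stea1(S, np.ones(3), 2)               # E[2][0] should equal lim
```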
TEA1 vs STEA1

[Diagram: TEA1 needs two rhombus rules, an odd rule computing ε_{2k+1}^{(n)} from ε_{2k−1}^{(n+1)}, ε_{2k}^{(n)}, ε_{2k}^{(n+1)}, and an even rule computing ε_{2k+2}^{(n)} from ε_{2k}^{(n+1)}, ε_{2k+1}^{(n)}, ε_{2k+1}^{(n+1)}; STEA1 needs a single rule producing ε_{2k+2}^{(n)} directly from ε_{2k}^{(n)} and ε_{2k}^{(n+1)}.]
Exploitation of the algorithms
The algorithms will be used in two different ways
Acceleration method (AM)
Restarted method (RM)
The Acceleration Method (AM) can be applied to the solution
of systems of linear and nonlinear equations, to the computation of
matrix functions, . . .

Choose 2k and x_0.
For n = 1, 2, . . .
    Compute x_n.
    Apply the STEA1 to x_0, x_1, x_2, . . . and compute the sequence
    of extrapolated values
        ε_0^{(0)} = x_0, ε_0^{(1)}, ε_2^{(0)}, ε_2^{(1)}, . . . , ε_{2k}^{(0)}, ε_{2k}^{(1)}, ε_{2k}^{(2)}, . . .
end
The Restarted Method (RM) is used for fixed point problems
in which we have to find the solution of linear or nonlinear
equations of the form

x = F(x)   or   f(x) = F(x) − x = 0   or   x = x + αf(x),

with F : R^m → R^m.

Choose 2k and x_0.
For i = 0, 1, . . . (cycles or outer iterations)
    Set u_0 = x_i
    For n = 1, . . . , 2k (basic or inner iterations)
        Compute u_n = F(u_{n−1})
    end
    Apply the STEA1 to u_0, . . . , u_{2k}
    Set x_{i+1} = ε_{2k}^{(0)}
end
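As an illustration of one RM cycle, here is a self-contained Python/NumPy sketch on a toy linear fixed-point problem of our own (not from the toolbox); the helpers implement Wynn's scalar ε-algorithm and the STEA1 rule. Since the map is linear in dimension m = 3 and we take k = m, a single cycle is already exact up to roundoff:

```python
import numpy as np

def scalar_eps_even(t, kmax):
    """Even columns eps_0, eps_2, ..., eps_{2 kmax} of Wynn's scalar array."""
    prev2, prev1 = [0.0] * len(t), list(t)
    cols = [prev1]
    for _ in range(2 * kmax):
        cur = [prev2[n + 1] + 1.0 / (prev1[n + 1] - prev1[n])
               for n in range(len(prev1) - 1)]
        prev2, prev1 = prev1, cur
        cols.append(cur)
    return cols[::2]

def stea1(S, y, kmax):
    """STEA1: vector columns E[k][n] = eps-hat_{2k}^{(n)}, E[0][n] = S_n."""
    sc = scalar_eps_even([float(np.dot(y, s)) for s in S], kmax)
    E = [[np.asarray(s, dtype=float) for s in S]]
    for k in range(kmax):
        E.append([E[k][n + 1]
                  + (sc[k + 1][n] - sc[k][n + 1]) / (sc[k][n + 1] - sc[k][n])
                  * (E[k][n + 1] - E[k][n])
                  for n in range(len(E[k]) - 2)])
    return E

# Toy fixed-point problem x = F(x) = Mx + b in R^3 (our own example)
M = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.1],
              [0.0, 0.2, 0.3]])
b = np.ones(3)
F = lambda x: M @ x + b
y = np.ones(3)                      # linear functional <y, .>
k = 3                               # k = m: Generalized Steffensen Method

def gsm_cycle(x):
    """One RM cycle: 2k basic iterations, then extrapolation."""
    u = [x]
    for _ in range(2 * k):
        u.append(F(u[-1]))          # inner iterations u_n = F(u_{n-1})
    return stea1(u, y, k)[k][0]     # new x = eps_{2k}^{(0)}

x1 = gsm_cycle(np.zeros(3))         # one cycle from x_0 = 0
```

For a nonlinear F the cycles would be repeated; here one cycle suffices because the extrapolation is exact for linear iterations when k equals the dimension.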
Generalized Steffensen Method (GSM)
When k = m (the dimension of the system) the RM is called
Generalized Steffensen Method (GSM) and, under some
assumptions, the sequence (xi ) of the vertices of the successive
ε–arrays asymptotically converges quadratically to the fixed
point x of F .
Remark: This method can also be applied when
F : Rm×s 7−→ Rm×s but the quadratic convergence of the GSM
has not yet been proved in this case.
References
C. Brezinski, M. Redivo–Zaglia, The simplified topological
ε–algorithms for accelerating sequences in a vector space,
SIAM J. Sci. Comput., 36 (2014) A2227–A2247.
C. Brezinski, M. Redivo–Zaglia, The simplified topological
ε–algorithms: software and applications, Numer. Algorithms
(2016), online version, DOI: 10.1007/s11075-016-0238-0.
A Matlab toolbox called EPSfun will be published with the paper:
EPSfun Matlab toolbox, na44 package in Netlib
(www.netlib.org/numeralgo/).
S. Gazzola, A. Karapiperi, Image reconstruction and
restoration using the simplified topological ε-algorithm,
Applied Mathematics and Computation (2016) 539-555.
The EPSFUN Matlab toolbox
STEAfun This directory contains all the STEA functions for
implementing the simplified topological ε-algorithms and the
SEAW function for the scalar ε-algorithm with particular rules
of Wynn.
templates This directory contains some examples of simple
scripts for implementing the AM and RM methods.
demo This directory contains demo scripts able to replicate all
the examples of the paper.
TEAEfun This directory contains the functions for
implementing the original topological ε-algorithms TEA and
the vector ε-algorithm of Wynn VEAW (for allowing possible
comparisons).
A detailed user guide (27 pages) is also provided.
Pseudocode AM

Initializations
    Set MAXCOL, TOL, NDIGIT, NBC
    Set y
Computations
    IFLAG ← 0
    EPSINI ← S_0
    EPSINIS ← ⟨y, S_0⟩
    First call of SEAW
    First call of STEA
    for n = 1 : NBC − 1
        EPSINI ← S_n
        EPSINIS ← ⟨y, S_n⟩
        Call SEAW
        Call STEA
        Output EPSINI
    end for n
Storage requirements

We suppose to have chosen a certain MAXCOL = 2k.
We have (without the storage of the original sequence):

algorithm   # elements of the ε-array   in spaces   # auxiliary elements
TEA1        3k                          E, E∗       3
TEA2        2k                          E, E∗       3
STEA1       2k                          E           2
STEA2       k                           E           2
Example - Nonlinear system

We consider the nonlinear system f(x) = 0 of dimension m = 10,

f_i(x) = m − Σ_{j=1}^{m} cos x_j + i (1 − cos x_i) − sin x_i,   i = 1, . . . , m,

whose solution is zero.
We take x0 = (1/(2n), . . . , 1/(2n))T , the basic method
x = x + αf (x) with α = 0.1, k = 2 for the AM and k = 10 for the
RM.
We obtain the results of the next Figure. They show that instability
occurs for the AM after iteration 75 (where the error attains
2.7 · 10^{−11}), and that the GSM achieves a much better precision
with fewer iterations (after 2 cycles the STEA1 has an
error of 4.62 · 10^{−14}, and the STEA2 an error of 3.85 · 10^{−14}).
[Figure: error histories for Example 2. Left: AM with STEA1 and STEA2 (‖x_ori − sol‖ and ‖eps − sol‖ for STEA1/STEA2, about 150 iterations). Right: GSM with STEA1 and STEA2 (‖x − sol‖ and ‖eps − sol‖, about 45 iterations).]
Example - Nonlinear system

We consider the nonlinear system

x_1 = x_1 x_2^3 / 2 − 1/2 + sin x_3
x_2 = (exp(1 + x_1 x_2) + 1)/2
x_3 = 1 − cos x_3 + x_1^4 − x_2,

whose solution is (−1, 1, 0)^T.
Starting from x0 = 0, we obtain the results of the next Figure
(left) for the AM with α = 0.2 and k = 4.
For the GSM with α = 0.1 and k = 3 the STEA2 gives better
results than the STEA1 as shown on the Figure (right).
[Figure: error histories for Example 3. Left: AM with STEA1 and STEA2 (about 300 iterations). Right: GSM with STEA1 and STEA2 (about 70 iterations).]
Example - Non-differentiable system

Consider now the non-differentiable system

|x_1 − 1| + x_2 − 1 = 0
x_2^2 + x_1 − 2 = 0.

It has two solutions:
(1, 1)^T, for which we start from x_0 = (1.3, 1.3)^T, with
α = −0.1 (First Figure - left),
and (−2, −2)^T, for which we start from x_0 = (−1, −1)^T, with
α = 0.1 (First Figure - right).
For the AM we took k = 3. We see that, for both solutions, the
AM works quite well.
The GSM with k = 2 achieves a precision of 10−15 in three cycles.
[Figures: error histories for Example 4. Top row: AM with STEA1 and STEA2 for the first (left) and second (right) solution, about 25 iterations each. Bottom row: GSM with STEA1 and STEA2 for the two solutions, 3 cycles each.]
Example - Nonlinear system

For the nonlinear system

Σ_{j=1}^{7} x_j − (x_i + e^{−x_i}) = 0,   i = 1, . . . , 7,

the solution is the vector with all components equal to
0.14427495072088622350.
Starting from x0 = (1, . . . , 1)T /10, α = −0.01 and with k = 3 for
the AM, we obtain the results of the following Figure.
Notice that the GSM with k = 7 achieves full precision
in only two iterations.
[Figure: error histories for Example 5. Left: AM with STEA1 and STEA2, about 30 iterations. Right: GSM with STEA1 and STEA2, 2 cycles.]
Example - Linear system
We consider the linear system AX = B, where A is the parter
matrix of dimension 5 divided by 3 (its spectral radius is 0.9054
and its condition number is 2.149), X is formed by the first two
columns of the pei matrix, and B is computed accordingly.
We perform the iterations
Xn+1 = (I − A)Xn + B
starting from X0 = 0.
The linear functional y associates to the matrix the sum of its
elements.
We obtain for the AM the results of next Figure, with k = 3 (left)
and k = 4 (right).
For k = 5 (that is for column 10 of the ε–array), the solution is
obtained with full precision in one iteration by the GSM as stated
by the theory.
[Figure: error histories for Example 6, AM with STEA1 and STEA2 (‖x − sol‖ and ‖eps − sol‖), about 50 iterations: k = 3 (left) and k = 4 (right).]
Example - Matrix equation

We now want to solve the matrix equation

X = Σ_{i=0}^{∞} A_i X^i.

We use the algorithm of Z.-Z. Bai (1997) which consists, for
n = 1, 2, . . ., in the iterations

Q_n = I − Σ_{i=1}^{∞} A_i X_{n−1}^{i−1},
B_n = 2B_{n−1} − B_{n−1} Q_n B_{n−1},
X_n = B_n A_0,

starting from a given A_0, B_0 = I and X_0 = B_0 A_0. He proved that
the sequence (X_n) converges to the minimal nonnegative solution
of the matrix equation.
We consider the numerical example treated by C.-H. Guo
(1999), where

A_0 = (4/3)(1 − p) ×
[ 0.05  0.1   0.2   0.3   0.1
  0.2   0.05  0.1   0.1   0.3
  0.1   0.2   0.3   0.05  0.1
  0.1   0.05  0.2   0.1   0.3
  0.3   0.1   0.1   0.2   0.05 ],

A_i = p^i A_0 for i = 1, 2, . . ., and p = 0.49.
The linear functional y used in the duality product corresponds to
the trace of the matrix.
The infinite sum in the computation of Qn was stopped after 100
terms.
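The iteration is easy to reproduce. The Python/NumPy sketch below uses a small 3 × 3 nonnegative example of our own (not Guo's matrix), with A_i = p^i A_0 and, as in the text, the infinite sums truncated after 100 terms:

```python
import numpy as np

p = 0.49
# Small nonnegative example of our own: the row sums of
# sum_i A_i = A0/(1-p) are < 1, so a minimal nonnegative solution exists
A0 = (1 - p) * np.array([[0.10, 0.20, 0.10],
                         [0.20, 0.10, 0.20],
                         [0.10, 0.10, 0.30]])
I = np.eye(3)
TERMS = 100                              # truncation of the infinite sums

def F(X):
    """F(X) = sum_{i>=0} A_i X^i with A_i = p^i A0 (truncated)."""
    acc, P = np.zeros_like(X), I.copy()  # P runs through (p X)^i
    for _ in range(TERMS):
        acc += A0 @ P
        P = P @ (p * X)
    return acc

# Bai's iteration: B0 = I, X0 = B0 A0
B, X = I.copy(), A0.copy()
for n in range(20):
    Q = I.copy()                         # Q_n = I - sum_{i>=1} A_i X^{i-1}
    P = I.copy()                         # P = X^{i-1}
    for i in range(1, TERMS):
        Q -= (p ** i) * (A0 @ P)
        P = P @ X
    B = 2 * B - B @ Q @ B                # Newton-type update of B ~ Q^{-1}
    X = B @ A0
```

After a few iterations the residual ‖X − F(X)‖ drops to roundoff level on this small example.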
The results are given in the next Figure with k = 3 for the AM,
and k = 5 for the GSM.
[Figure: residual histories for Example 8a (‖x − F(x)‖ and ‖eps − F(eps)‖ for STEA1/STEA2). Left: AM, about 35 iterations. Right: GSM, 2 cycles.]
Example - Matrix function

We want now to compute an approximation of

log(I + A) = A − A^2/2 + A^3/3 − A^4/4 + · · · + (−1)^{n+1} A^n/n + · · ·

where A is a real matrix.
We know that the series is convergent if ρ(A) < 1.
We will show that the topological ε-algorithm is able to accelerate
the convergence of the partial sums of this series and also . . . to
converge to an approximation of the value, when the series is
divergent!
We consider a random matrix B of dimension N = 50, we choose a
value r , and we define A = r × B/ρ(B).
We obtain for the AM the results of the next Figure (on the left
r = 0.9, k = 4, and on the right r = 1.2, k = 4). The linear
functional y is the trace of a matrix.
[Figure: ‖X − log(I + A)‖_2 and ‖eps − log(I + A)‖_2 for STEA1/STEA2, N = 50, AM. Left: r = 0.9, k = 4, about 35 iterations. Right: r = 1.2, k = 4, about 50 iterations.]
Example - √(I − C)

The binomial iteration for computing the square root of I − C,
where ρ(C) < 1, consists in the iterations

X_{n+1} = (C + X_n^2)/2,   n = 0, 1, . . . ,   X_0 = 0.
The sequence (X_n) converges linearly to X = I − (I − C)^{1/2}, and
X_n reproduces the series

(I − C)^{1/2} = Σ_{i=0}^{∞} (1/2 choose i) (−C)^i = I − Σ_{i=1}^{∞} α_i C^i,   α_i > 0,

up to and including the term C^n.
For C , we took the matrix moler of dimension 500 divided by
1.1 × 105 so that ρ(C ) = 0.9855. The AM gives the results of the
following Figure with k = 2 on the left and k = 4 on the right, for
the acceleration of the sequence (Xn ).
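A minimal Python/NumPy sketch of the binomial iteration itself, on a small symmetric matrix of our own choosing (the talk uses the moler matrix of dimension 500). Since X converges to I − (I − C)^{1/2}, the natural check is that (I − X)^2 reproduces I − C:

```python
import numpy as np

C = np.array([[0.5, 0.1],
              [0.1, 0.3]])      # symmetric, rho(C) < 1
I = np.eye(2)

# Binomial iteration X_{n+1} = (C + X_n^2)/2, X_0 = 0
X = np.zeros_like(C)
for n in range(100):
    X = 0.5 * (C + X @ X)

# X ~ I - (I - C)^{1/2}, so (I - X) @ (I - X) ~ I - C
```

The convergence is linear with a rate governed by ρ(C), which is exactly what the AM accelerates in this example.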
[Figure: error histories for Example 12, AM with STEA1 and STEA2 (‖x − sol‖ and ‖eps − sol‖), about 30 iterations: k = 2 (left) and k = 4 (right).]
Tensor ℓp-eigenvalues (with F. Tudisco)

Let us consider a square tensor t = (t_{i1,...,im}) ∈ R^{n×···×n} and the
map T : R^n → R^n defined by

T(x)_{i1} = Σ_{i2,...,im=1}^{n} t_{i1,...,im} x_{i2} · · · x_{im},   for i1 = 1, . . . , n,

where m is the order (or number of modes) of the tensor (that is,
the number of dimensions).

A vector v ∈ R^n is an ℓp-eigenvector of t with eigenvalue λ ∈ R if

T(v) = λ |v|^{p−1} sign(v)   (entry-wise operations).

Maximal eigenpair

An ℓp-eigenpair (λ, v) is maximal if

λ = v^T T(v) / ‖v‖_p^p = max_{x≠0} x^T T(x) / ‖x‖_p^p
Tensor ℓp-eigenvalues (with F. Tudisco)

Theorem (Perron-Frobenius)
If p > m, t_{i1,...,im} ≥ 0 and t is “irreducible”, then a maximal eigenpair
exists; it is unique and positive.

We want to evaluate the maximal eigenvalue, and we consider as
basic method the Shifted (non-linear) Power Method:

v_{k+1} = (T(v_k) + s |v_k|^{p−1} sign(v_k))^{p′−1},   1/p + 1/p′ = 1,  s > 0,

λ_{k+1} = v_{k+1}^T T(v_{k+1}) / ‖v_{k+1}‖_p^p.

We know that, if p > m, this method converges ∀s ∈ [0, 1) (not
necessarily to the maximal eigenpair).
But, if p > m, and the tensor is nonnegative and irreducible,
this method converges ∀s ∈ [0, 1) to the maximal eigenpair.
Two examples for tensor `p -eigenvalues
First example: t ∈ R3×3×3×3 symmetric real tensor (Kofidis,
Regalia, SIMAX, 2002)
[Figure: eigenvalue error vs. # iterations. Left: p = 2.000 < m, SEAW and PM with s = 0.0 and s = 0.2, about 20 iterations. Right: p = 4.100 > m, SEAW and PM with s = 0.0 and s = 0.1, about 15 iterations.]

In the scalar ε-algorithm SEAW we consider k = 2.
Two examples for tensor `p -eigenvalues
Second example: t = t(α) ∈ R^{3×3×3} multi-linear PageRank tensor
(nonnegative and irreducible), 0 < α < 1. The larger α, the better
the model, but the more difficult the problem (Gleich, Lim, Yu, SIMAX, 2015).
[Figure: eigenvalue error vs. # iterations, SEAW and PM with s = 0.0 and s = 0.2. Left: p = 2.000 < m, about 50 iterations. Right: p = 3.100 > m, about 15 iterations.]

α = 0.99, k = 2 in the scalar ε-algorithm SEAW.
Two examples for image reconstruction (S. Gazzola, A. Karapiperi)

First example: they used as basic method the Cimmino method.
They considered the Lena image of dimension m = 128, blurred it
(blur Matlab function from regtools of P. C. Hansen),
obtaining a matrix of dimension N = 16384, and added
white noise of level 10^{−2}.
In the following table we show the relative errors, comparing the
Cimmino method, the STEA2 (RM), and two possible restarts.
method      iterations        relative error
Cimmino     100               0.8074
STEA2       100               0.1177
STEA2       10 × 10           0.0913
STEA2       20 × 5            0.0841

[Images: original image; blurred, noisy image; Cimmino, 1000 iterations; STEA2, 20 × 5 iterations.]
Second example: they used as basic method the Landweber
method. They considered the satellite image (Restore Tools of
J. Nagy et al.) of dimension m = 256, obtaining a matrix of
dimension N = 65536, with a Spatially Invariant Atmospheric
Turbulence Blur and a noise level of 10^{−2}.
They compared STEA1 with different y vectors (RM, 20 × 5) and
also with the GMRES method (after iteration 20 the method
exhibits a semiconvergence behaviour and the error grows quickly).
[Figure: relative error histories. Left: Landweber vs. STEA_Landweber with y random, y = r_n, and y = x_n, 100 iterations. Right: Landweber, STEA_Landweber with y = r_n, and GMRES, 120 iterations.]

[Images: original image; blurred, noisy image; GMRES, 8 iterations; STEA1, 20 × 5 iterations.]
Conclusions
We think that our Matlab toolbox will be useful in trying to solve a
very wide range of problems, in a pretty easy way:
Acceleration of vector and matrix sequences
Solution of linear and nonlinear vector, matrix and tensor
equations
Computation of matrix functions
Solution of integral equations
Image reconstruction and restoration
...............
THANK YOU !