
BOOK OF ABSTRACTS
NLAA 2013
International Conference on
Numerical Linear Algebra and its Applications
15–18 January 2013
DEPARTMENT OF MATHEMATICS
INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI
INDIA
Acknowledgements
Professor B. V. Limaye, Emeritus Professor, IIT Bombay, has played a pivotal role in initiating
the event. The organizers would like to thank him for this and for his constant intellectual
and moral support. The organizers would also like to thank the following organizations for
sponsoring the event.
• Council of Scientific and Industrial Research (CSIR)
• International Mathematical Union - Committee for Developing Countries (IMU - CDC)
• Indian National Science Academy (INSA)
• Defence Research & Development Organisation (DRDO)
• Spoken Tutorial Project, National Mission on Education through ICT
• National Board for Higher Mathematics (NBHM)
• Oil India Limited
Organizers
Local Organizing Committee:

Prof. Rafikul Alam (Chairman), Department of Mathematics, IIT Guwahati
Dr. Shreemayee Bora (Secretary), Department of Mathematics, IIT Guwahati
Prof. S. N. Bora, Department of Mathematics, IIT Guwahati
Dr. Kalpesh Kapoor, Department of Mathematics, IIT Guwahati
Dr. G. Sajith, Department of Computer Science & Engg., IIT Guwahati
Prof. B. K. Sarma, Department of Mathematics, IIT Guwahati
Prof. R. K. Sinha, Department of Mathematics, IIT Guwahati

Scientific Committee:

Prof. Rafikul Alam, Department of Mathematics, IIT Guwahati
Prof. R. B. Bapat, ISI Delhi
Dr. Shreemayee Bora, Department of Mathematics, IIT Guwahati
Prof. B. V. Limaye, Department of Mathematics, IIT Bombay
Prof. Volker Mehrmann, Institute for Mathematics, TU Berlin
Prof. Harish Pillai, Department of Electrical Engineering, IIT Bombay
Dr. G. Sajith, Department of Computer Science & Engg., IIT Guwahati
Preface
Numerical Linear Algebra is concerned with the theory and practical aspects of computing
solutions of three fundamental problems which are at the core of science and engineering,
viz., linear systems of equations, least-squares problems and eigenvalue problems. Reliable
software for solving these problems is of utmost importance for most scientific processes, and
finite precision arithmetic poses significant challenges to the numerical accuracy of solutions.
Also, the rapid advances and changes in computer hardware make it necessary to keep
revisiting existing software for these problems so that it makes the most efficient use of the
available computing resources. Moreover, the need to design efficient mathematical models of
real-world phenomena constantly gives rise to challenging problems in the area. Meeting the
challenges in computing also leads to a better understanding of the theoretical aspects of these
problems and vice versa.
This conference envisages an opportunity for young researchers interested in such problems to
interact with some of the leading experts and active researchers in the area and to have a glimpse
of state-of-the-art research in the subject and its many applications. It also aims to encourage
young researchers to present their work to some of the most eminent researchers in the field.
Schedule
Contents

Acknowledgements
Organizers
Preface
Schedule

Entanglement of Multipartite Systems: Few Open Problems (Bibhas Adhikari)
Numerical Gradient Algorithms for Eigenvalue Calculations (Ayaz Ahmad)
Backward Errors for Eigenvalues and Eigenvectors of Structured Eigenvalue Problems (Sk Safique Ahmad)
Recycling BiCGSTAB (Kapil Ahuja)
Sensitivity Analysis of Rational Eigenvalue Problem (Rafikul Alam and Namita Behera)
Application of Structured Linearization for Efficient Computation of the H∞ Norm of a Transfer Matrix (Madhu N. Belur)
Linear and Nonlinear Matrix Equations Arising in Model Reduction (Peter Benner and Tobias Breiten)
Finite Element Model Updating: A Structured Inverse Eigenvalue Problem for Quadratic Matrix Pencil (Biswa Nath Datta)
Structured Eigenvalue Condition Numbers for Parameterized Quasiseparable Matrices (Froilán M. Dopico)
Structured Eigenvalue Problems – Structure-Preserving Algorithms, Structured Error Analysis (Heike Faßbender)
Comparative Study of Tridiagonalization Methods for Fast SVD (Bibek Kabi, Tapan Pradhan, Ramanarayan Mohanty and Aurobinda Routray)
Distance of a Pencil from Having a Zero at Infinity (Rachel Kalpana Kalaimani)
Extracting Eigenvalues and Eigenvectors of a QEVP in the form of Homogeneous Coordinates using Newton-Raphson Technique (Karuna Kalita, Aritra Sasmal and Seamus D Garvey)
The Separation of two Matrices and its Application in the Perturbation Theory of Eigenvalues and Invariant Subspaces (Michael Karow)
Computation Considerations for the Group Inverse for an Irreducible M-Matrix (Stephen Kirkland)
Conditioning of a Basis (Balmohan V. Limaye)
Robust Stability of Linear Delay Differential-Algebraic Systems (Vu Hoang Linh)
Minimal Indices of Singular Matrix Polynomials: Some Recent Perspectives (D. Steven Mackey)
Structured Backward Errors for Eigenvalues of Hermitian Pencils (Shreemayee Bora, Michael Karow, Christian Mehl and Punit Sharma)
Self-adjoint Differential Operators and Optimal Control (Peter Kunkel, Volker Mehrmann and Lena Scholz)
Matrix Functions with Specified Eigenvalues (Michael Karow, Daniel Kressner, Emre Mengi, Ivica Nakic and Ninoslav Truhar)
B†-splittings of Matrices (Debasisha Mishra)
Characterization and Construction of the Nearest Defective Matrix via Coalescence of Pseudospectral Components (Michael L. Overton, Rafikul Alam, Shreemayee Bora and Ralph Byers)
Linearizations of Matrix Polynomials in Bernstein Basis (D. Steven Mackey and Vasilije Perović)
A Linear Algebraic Look at Differential Riccati Equation and Algebraic Riccati (in)equality (H. K. Pillai)
A Regularization Based Method for Dynamic Optimization with High Index DAEs (Soumyendu Raha)
A Constructive Method for Obtaining a Preferred Basis from a Quasi-preferred Basis for M-matrices (Manideepa Saha and Sriparna Bandopadhyay)
An I/O Efficient Algorithm for Hessenberg Reduction (S. K. Mohanty and G. Sajith)
Sparse Description of Linear Systems and Application in Computed Tomography (R. Ramu Naidu, C. S. Sastry and J. Phanindra Varma)
Structured Backward Error of Approximate Eigenvalues of T-palindromic Polynomials (Punit Sharma, Shreemayee Bora, Michael Karow and Christian Mehl)
On the Numerical Solution of Large-scale Linear Matrix Equations (Valeria Simoncini)
Eigenvalues of Symmetric Interval Matrices Using a Single Step Eigen Perturbation Method (Sukhjit Singh and D. K. Gupta)
Nonnegative Generalized Inverses and Certain Subclasses of Singular Q-matrices (K. C. Sivakumar)
Accurate Eigenvalue Decomposition of Arrowhead Matrices and Applications (Ivan Slapnicar, Nevena Jakovcevic Stor and Jesse Barlow)
A Family of Iterative Methods for Computing the Moore–Penrose Generalized Inverse (Shwetabh Srivastava and D. K. Gupta)
Distance Problems for Hermitian Matrix Pencils – an ε-Pseudospectra Based Approach (Shreemayee Bora and Ravi Srivastava)
What's So Great About Krylov Subspaces? (David S. Watkins)
Author Index
[email protected]
Bibhas Adhikari
Centre of Excellence in Systems Science
IIT Rajasthan, Jodhpur, India.
+91 291 251 9006
+91 291 251 6823
Entanglement of Multipartite Systems: Few Open Problems
Bibhas Adhikari
The mystery of entanglement of multipartite systems is still not completely revealed, although
its usefulness is recognized in quantum information processing. In the axiomatic formulation of
quantum mechanics, a multipartite quantum state lies in the tensor product of complex Hilbert
spaces. Hence, developing linear algebraic tools for determining the entangled states has
become computationally challenging except for the 2 × 2 and 2 × 3 systems. This calls for
the development of graph representations of quantum states with a hope that graph theoretic
machinery could be applied to unfold the secrets of entanglement. In this talk, we discuss
a few open problems in linear and/or multilinear algebra, including a nonlinear singular value
problem that arises in formulating a geometric measure of entanglement. We also discuss
the graph representation of multipartite quantum states and a few open problems in the field of
combinatorial matrix theory.
[email protected]
Ayaz Ahmad
NIT Patna, Bihar 800005
8809493586
Numerical Gradient Algorithms for Eigenvalue Calculations
Ayaz Ahmad
In this paper a numerical algorithm, termed the double bracket algorithm,

$$H_{k+1} = e^{-\alpha_k [H_k, N]}\, H_k\, e^{\alpha_k [H_k, N]},$$

is proposed for computing the eigenvalues of an arbitrary symmetric matrix. For suitably
small α_k, termed time-steps, the algorithm is an approximation of the solution to the
continuous-time double bracket equation. An analysis of convergence behaviour showing
linear convergence to the desired limit points is presented. Associated with the main
algorithms presented for the computation of the eigenvalues or singular values of matrices are
algorithms evolving on Lie groups of orthogonal matrices which compute the full eigenspace
decomposition of given matrices.
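The iteration above is straightforward to experiment with. The following NumPy sketch is our own illustration, with a fixed step size α and a diagonal target matrix N, both hypothetical choices:

```python
import numpy as np
from scipy.linalg import expm

def double_bracket(H, N, alpha=0.01, iters=2000):
    """Double bracket iteration H_{k+1} = exp(-a[H,N]) H exp(a[H,N]).
    For symmetric H and diagonal N with distinct entries, H_k tends to a
    diagonal matrix carrying the eigenvalues of the initial H."""
    for _ in range(iters):
        C = H @ N - N @ H        # commutator [H, N], skew-symmetric here
        G = expm(alpha * C)      # orthogonal, since C is skew-symmetric
        H = G.T @ H @ G          # similarity transform: spectrum preserved
    return H

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H0 = (A + A.T) / 2
H = double_bracket(H0, np.diag(np.arange(5.0)))
print(np.sort(np.diag(H)))              # approximately the eigenvalues of H0
print(np.sort(np.linalg.eigvalsh(H0)))
```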
Sk Safique Ahmad
M block, IET DAVV Campus
Indian Institute of Technology Indore
Khandwa Road, Indore 452017, India.
[email protected]
+91 731 2438731
+91 7312364182
Backward Errors for Eigenvalues and Eigenvectors of Structured
Eigenvalue Problems
Sk Safique Ahmad
There are many applications of polynomial/rational eigenvalue problems in various directions;
a few of them will be highlighted along with their perturbation analysis. Some known results will
be discussed on the perturbation analysis of various structured eigenvalue problems, and then we
will highlight our results. Basically, in this talk we propose a general framework, discussed in
[4, 5, 6], for the structured perturbation analysis of several classes of structured matrix polynomials
in homogeneous form, including complex symmetric, skew-symmetric, Hermitian, skew-Hermitian,
even and odd matrix polynomials. We also discuss the structured backward error of
an approximate eigenpair of a structured homogeneous matrix polynomial with T-palindromic,
H-palindromic, T-antipalindromic, or H-antipalindromic structure. We introduce structured backward
errors for approximate eigenvalues and eigenvectors, and then construct minimal structured
perturbations such that an approximate eigenpair is an exact eigenpair of an appropriately
perturbed matrix polynomial. This work extends the previous work of [1, 2, 3, 7] on the
non-homogeneous case (we include infinite eigenvalues), and we show that the structured backward
errors improve the known unstructured backward errors. Further, we extend some of
these results to rational eigenvalue problems and construct minimal structured perturbations
such that an approximate eigenpair is an exact eigenpair of an appropriately perturbed rational
eigenvalue problem.
Note: This work is joint with Volker Mehrmann, Institut für Mathematik, Technische Universität Berlin, Germany.
References
[1] B. Adhikari. Backward perturbation and sensitivity analysis of structured polynomial
eigenvalue problem. PhD thesis, IIT Guwahati, Dept. of Mathematics, 2008.
[2] B. Adhikari and R. Alam. On backward errors of structured polynomial eigenproblems
solved by structure preserving linearizations. Linear Algebra Appl., 434:1989–2017, 2011.
[3] B. Adhikari and R. Alam. Structured backward errors and pseudospectra of structured
matrix pencils. SIAM J. Matrix Anal. Appl., 31:331–359, 2009.
[4] S. S. Ahmad and V. Mehrmann, Perturbation analysis for complex symmetric, skew
symmetric, even and odd matrix polynomials, Electr. Trans. Num. Anal., 38: 275–302,
2011.
[5] S. S. Ahmad and V. Mehrmann, Backward errors for eigenvalues and eigenvectors of Hermitian, skew-Hermitian, H-even, and H-odd matrix polynomials, Linear and Multilinear
Algebra, preprint, 2012.
[6] S. S. Ahmad and V. Mehrmann, Structured backward errors for structured nonlinear
eigenvalue problems, preprint 2012.
[7] X. G. Liu and Z. X. Wang, A note on the backward errors for Hermite eigenvalue problems. Appl. Math. Comput., 165:405–417, 2005.
[8] R. C. Li, W. W. Lin, and C. S. Wang, Structured backward error for palindromic polynomial
eigenvalue problems, Numer. Math., 116 (2010) 95–122.
Kapil Ahuja
Max Planck Institute for Dynamics of
Complex Technical Systems
Sandtorstr. 1, 39106 Magdeburg, Germany.
[email protected]
(+49) (0) 391 6110 349
(+49) (0) 391 6110 500
Recycling BiCGSTAB
Kapil Ahuja, Eric de Sturler and Peter Benner
Krylov subspace recycling is a technique for accelerating the convergence of iterative solvers
applied to sequences of linear systems. Based on this technique we have recently developed the Recycling BiCG algorithm.
We now generalize and extend this recycling theory to BiCGSTAB. We modify the BiCGSTAB
algorithm to use a recycle space, which is built from left and right approximate eigenvectors.
We test on three application areas. The first problem simulates crack propagation in a metal
plate using cohesive finite elements. The second example involves parametric model order
reduction of a butterfly gyroscope. The third and final example comes from acoustics. Experiments on these applications give promising results.
Namita Behera
Department of Mathematics
Indian Institute of Technology Guwahati
Guwahati 781039, India.
[email protected]
9508640369
Sensitivity Analysis of Rational Eigenvalue Problem
Rafikul Alam and Namita Behera
We discuss the sensitivity of the rational eigenvalue problem of the form
$R(\lambda) = \sum_{i=0}^{d} A_i \lambda^i + L(C - \lambda D)^{-1} U$.
In particular, we derive the condition number of simple eigenvalues of R. We also
discuss linearization of R(λ) and its effect on the sensitivity of eigenvalues of R.
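As a toy numerical illustration (entirely ours: random data and hypothetical sizes), λ is an eigenvalue of R precisely when R(λ) is singular, which can be probed through the smallest singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 4, 2, 3
A = [rng.standard_normal((n, n)) for _ in range(d + 1)]   # A_0, ..., A_d
L = rng.standard_normal((n, m))
U = rng.standard_normal((m, n))
C = rng.standard_normal((m, m))
D = np.eye(m)

def R(lam):
    """Evaluate R(lam) = sum_i A_i lam^i + L (C - lam D)^{-1} U."""
    poly = sum(Ai * lam**i for i, Ai in enumerate(A))
    return poly + L @ np.linalg.solve(C - lam * D, U)

# smallest singular value of R(0.7); it would be (numerically) zero
# if 0.7 were an eigenvalue of R
print(np.linalg.svd(R(0.7), compute_uv=False)[-1])
```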
Madhu N. Belur
Department of Electrical Engineering
Indian Institute of Technology Bombay
Powai, Mumbai 400 076, India.
[email protected]
+91 22 2576 7404
+91 22 2572 3707
Application of Structured Linearization for Efficient Computation of the
H∞ Norm of a Transfer Matrix
Madhu N. Belur
The H∞ -norm of a system’s transfer matrix plays a key role in control studies, for example, in
the context of robust control. The H∞ norm of a transfer matrix G(s) is defined as
kGkH∞ :=
sup
σmax (G(λ))
λ∈C, Real (λ)>0
where σmax (P ) is the maximum singular value of a constant matrix P . The norm exists if and
only if G is proper and has all its poles in the open left half complex plane. When it exists,
the procedure to compute the precise value of the norm γ involves a computationally intensive
procedure using an iteration on a parameter-dependent Hamiltonian matrix constructed from
G. The current procedure available in numerical computation packages like Scilab and Matlab
involves iteration on the γ value by checking whether the Hamiltonian matrix has or does not
have eigenvalues on the imaginary axis. Of course, within each such iteration on the γ value is
nested the iteration to compute the eigenvalues of a constant matrix. The precision to which the
H∞ norm is sought is specified through the tolerance band up to which the procedure checks
whether or not the Hamiltonian matrix has imaginary axis eigenvalues.
This work brings out the use of structured linearization of a Bezoutian polynomial matrix to
compute just once the generalized eigenvalues of a matrix pair and thus drastically reduces the
computation involved in calculating the H∞ norm of a transfer matrix. Moreover, the precision
in the calculated value of the norm is the machine precision: typically 10−16 .
The method we use first utilizes the key property that when the parameter γ is equal to the
H∞ norm, then the Hamiltonian matrix has repeated eigenvalues on the imaginary axis; this is
captured in the property that a certain polynomial p(γ, ω) and its derivative with respect to ω,
denoted by say q(γ, ω), are not coprime at this value of γ. Hence, we reconsider the bivariate
polynomials p(γ, ω) and q(γ, ω) as polynomials in ω with coefficients in γ: denoted respectively as pγ (ω) and qγ (ω). Determining coprimeness of two polynomials is straightforward:
the resultant of the two polynomials is zero if and only if they have a common factor.
We thus calculate the resultant R(γ) of the two polynomials pγ and qγ and compute the roots
of R(γ) to obtain all candidates γ that make pγ and qγ have a common root ω0 . Though
it is conceptually easier to calculate the resultant R(γ) as the determinant of the Sylvester
matrix, the computationally easier way is to construct the Bezoutian matrix B(γ) of the two
polynomials pγ and qγ and use the property that the coefficient matrices of the polynomial
matrix B(γ) are symmetric. The matrix pencil (A, E) obtained by the structured linearization
of B(γ) helps in calculating the generalized eigenvalues of (A, E) and thus the roots of the
resultant R(γ). This is elaborated in what follows.
With a slight abuse of notation, let the coefficient of $\omega^i$ in the polynomial $p_\gamma(\omega)$ be denoted by
$p_i(\gamma)$, and similarly so for $q_\gamma$:

$$p_\gamma(\omega) = p_0(\gamma) + \omega\,p_1(\gamma) + \cdots + \omega^N p_N(\gamma) \quad \text{and} \quad q_\gamma(\omega) = q_0(\gamma) + \omega\,q_1(\gamma) + \cdots + \omega^{N-1} q_{N-1}(\gamma),$$
and define their Bezoutian polynomial $b_\gamma(\zeta, \eta)$ and the Bezoutian matrix $B(\gamma)$ as follows:

$$b_\gamma(\zeta, \eta) := \frac{p_\gamma(\zeta)\, q_\gamma(\eta) - p_\gamma(\eta)\, q_\gamma(\zeta)}{\zeta - \eta} = \begin{bmatrix} 1 \\ \zeta \\ \vdots \\ \zeta^{N-1} \end{bmatrix}^{T} B(\gamma) \begin{bmatrix} 1 \\ \eta \\ \vdots \\ \eta^{N-1} \end{bmatrix},$$

with $B(\gamma) = B_0 + \gamma B_1 + \cdots + \gamma^m B_m$ defined to obtain the above equality. Clearly, $B(\gamma)$ is
a symmetric polynomial matrix, i.e., each of $B_0, \ldots, B_m$ is a constant real symmetric matrix.
Rather than determining the roots of det B(γ), we use the results in [2] to obtain a structured
linearization: define the symmetric matrices E and A as
$$E := \begin{bmatrix} B_m & & & \\ & -B_{m-2} & \cdots & -B_0 \\ & \vdots & \iddots & \\ & -B_0 & & 0 \end{bmatrix}, \qquad A := -\begin{bmatrix} B_{m-1} & \cdots & B_1 & B_0 \\ \vdots & \iddots & \iddots & \\ B_1 & B_0 & & \\ B_0 & & & 0 \end{bmatrix}.$$
The development in [2] helps conclude that the generalized eigenvalues of (A, E) are the roots
of det B(γ). Thus we obtain a finite number of candidates γ, one of which is $\|G\|_{H_\infty}$.
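The following sketch mimics this computation for a scalar stand-in of det B(γ) (our own toy code: a scalar polynomial instead of a symmetric block polynomial, and a Hankel-structured pencil of the type described above), recovering the roots as generalized eigenvalues:

```python
import numpy as np
from scipy.linalg import eig, hankel

def pencil_roots(b):
    """Roots of p(g) = b[0] + b[1]*g + ... + b[m]*g^m as generalized
    eigenvalues of a symmetric Hankel-structured pencil (A, E)."""
    m = len(b) - 1
    Y = hankel(b[m - 1::-1])          # Y[i,j] = b_{m-1-i-j}, zeros below
    E = np.zeros((m, m))
    E[0, 0] = b[m]
    if m > 1:
        E[1:, 1:] = -hankel(b[m - 2::-1])
    # det(gamma*E + Y) vanishes exactly at the roots of p, so take A = -Y
    return eig(-Y, E, right=False)

b = np.array([6.0, -5.0, 1.0])        # p(g) = (g - 2)(g - 3)
print(np.sort(pencil_roots(b)))       # approximately [2, 3]
```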
The above procedure, when implemented in Scilab and compared with the current procedure
available in Scilab (and Matlab), shows remarkable improvement in computation time and accuracy. In spite of the precision of the generalized eigenvalues being of the order of $10^{-16}$ while
that using the Hamiltonian matrix iteration is $10^{-8}$ (the default value), the time taken using
structured linearization is about 20 to 40 times less than for the currently available method. Moreover, this comparison becomes more favourable to the new method for larger orders of the
transfer matrix G(s). See [1] for an elaborate treatment of the procedure¹ and details of the
computational experiments.
¹ The author thanks Dr. C. Praagman, close collaboration with whom resulted in the new method and [1].
References
[1] M.N. Belur and C. Praagman, An efficient algorithm to compute the H-infinity norm, IEEE
Transactions on Automatic Control, vol. 56, no. 7, pages 1656-1660, 2011.
[2] D. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Structured polynomial eigenvalue
problems: good vibrations from good linearizations, SIAM Journal on Matrix Analysis and
Applications, vol. 28, no. 4, pages 1029-1051, 2006.
Peter Benner
Max Planck Institute for Dynamics of
Complex Technical Systems
Sandtorstr. 1
39106 Magdeburg, Germany.
[email protected]
+49 391 6110 450
+49 391 6110 453
Linear and Nonlinear Matrix Equations Arising in Model Reduction
Peter Benner and Tobias Breiten
System-theoretic model reduction methods like balanced truncation for linear state-space systems require the solution of certain linear or nonlinear matrix equations. In the linear case, these
are Lyapunov or algebraic Riccati equations. Generalizing these model reduction methods to
new system classes, variants of these matrix equations have to be solved. Primarily, we will
discuss the generalized matrix equations associated to bilinear and stochastic control systems,
where in addition to the Lyapunov operator, a positive operator appears in the formulation of
the equations. Due to the large-scale nature of these equations in the context of model order
reduction, we study possible low rank solution methods for these so-called bilinear Lyapunov
equations. We show that under certain assumptions one can expect a strong singular value
decay in the solution matrix allowing for low rank approximations. We further provide some
reasonable extensions of some of the most frequently used linear low rank solution techniques
such as the alternating direction implicit (ADI) iteration and the extended Krylov subspace
method. By means of some standard numerical examples used in the area of bilinear model
order reduction, we will show the efficiency of the new methods.
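For reference, the bilinear Lyapunov equations referred to above are commonly written in the literature as (notation ours, since the abstract fixes none):

$$A P + P A^{T} + \sum_{j=1}^{m} N_j P N_j^{T} + B B^{T} = 0,$$

where the $N_j$ carry the bilinear couplings; the map $X \mapsto \sum_j N_j X N_j^{T}$ is the positive operator that appears in addition to the Lyapunov operator.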
If time permits, we also discuss some new classes of nonlinear matrix equations arising in
special variants of balanced truncation.
Biswa Nath Datta, IEEE Fellow
Distinguished Research Professor
Northern Illinois University
DeKalb, Illinois 60115, USA
[email protected]
815 753 6759
815 753 1112
Finite Element Model Updating: A Structured Inverse Eigenvalue
Problem for Quadratic Matrix Pencil
Biswa Nath Datta
The finite element model updating problem is a special inverse eigenvalue problem for a
quadratic matrix pencil and arises in vibration industries in the context of designing automobiles, aircraft, spacecraft, and other structures. The problem is to update a very large theoretical finite
element model with more than a million degrees of freedom using only a few measured data
from a real-life structure. The model has to be updated in such a way that the measured eigenvalues and eigenvectors are incorporated into the model, the symmetry of the original model
is preserved and the eigenvalues and eigenvectors that do not participate in updating remain
unchanged. When the model has been updated this way, the updated model can be used for
future design with confidence. Finite Element Model Updating also has useful applications in
health monitoring and damage detection in structures, including bridges, buildings, highways,
and others.
Despite much research done on the problem by both academic and industrial researchers and
engineers, the problem has not been satisfactorily solved, and active research is still underway. There are many industrial solutions which are ad hoc in nature and often lack solid
mathematical foundations.
In this talk, I shall present a brief overview of the existing techniques and their practical difficulties along with the new developments within the last few years. The talk will conclude with
a few words on future research direction on this topic.
Froilán M. Dopico
ICMAT and Department of Mathematics
Universidad Carlos III de Madrid
Avda. de la Universidad 30,
28911 Leganés (Madrid), Spain
[email protected]
+34916249446
+34916249129
Structured Eigenvalue Condition Numbers for Parameterized
Quasiseparable Matrices
Froilán M. Dopico
Rank-structured matrices of different types appear in many applications and have received considerable attention in recent years from both theoretical and numerical points of view. In particular,
the development of fast algorithms for performing all tasks of Numerical Linear Algebra with
rank-structured matrices has been, and still is, a very active area of research. The key idea
for developing these fast algorithms is to represent n × n rank-structured matrices in terms of
O(n) parameters and to design algorithms that run directly on the parameters instead of on
the matrix entries. These fast algorithms therefore usually produce tiny backward errors in
the parameters, so it is natural to study the condition numbers of different quantities
with respect to perturbations of the parameters. However, no results on this topic have been
published in the literature, and the goal of this talk is to present a number of first results in this
area. To this purpose, we consider a very important class of rank-structured matrices: quasiseparable matrices. It is well known that n × n quasiseparable matrices can be represented
in terms of O(n) parameters or generators, but these generators are not unique. In this talk,
we develop for the first time eigenvalue condition numbers of quasiseparable matrices with
respect to tiny relative perturbations of the generators, and compare these condition numbers
for different specific sets of generators of the same matrix, with the aim of determining which
representation is preferable for eigenvalue computations.
Heike Faßbender
AG Numerik
Institut Computational Mathematics
TU Braunschweig, Fallersleber-Tor-Wall 23
38106 Braunschweig, Germany.
[email protected]
+49 531 391 7535
+49 531 391 8206
Structured Eigenvalue Problems – Structure-Preserving Algorithms,
Structured Error Analysis
Heike Faßbender
Many eigenvalue problems arising in practice are structured due to (physical) properties induced by the original problem. Structure can also be introduced by discretization and linearization techniques. Preserving this structure can help preserve physically relevant symmetries in
the eigenvalues of the matrix and may improve the accuracy and efficiency of an eigenvalue
computation. This is well-known for n × n real symmetric matrices A = AT . Every eigenvalue
is real and every right eigenvector is also a left eigenvector belonging to the same eigenvalue.
Many numerical methods, such as QR, Arnoldi and Jacobi-Davidson, automatically preserve
symmetric matrices (and hence compute only real eigenvalues); unavoidable round-off errors
cannot result in the computation of complex-valued eigenvalues. Algorithms tailored to symmetric matrices (e.g., divide and conquer or Lanczos methods) take much less computational
effort and sometimes achieve high relative accuracy in the eigenvalues and – having the right
representation of A at hand – even in the eigenvectors.
Another example is matrices for which the complex eigenvalues with nonzero real part theoretically appear in a pairing $\lambda, \bar{\lambda}, \lambda^{-1}, \bar{\lambda}^{-1}$. Using a general eigenvalue algorithm such as QR
or Arnoldi results here in computed eigenvalues which in general do not display this eigenvalue pairing any longer. This is due to the fact that each eigenvalue is subject to unstructured
rounding errors, so that each eigenvalue is altered in a slightly different way. When using a
structure-preserving algorithm this effect can be avoided, as the eigenvalue pairing is enforced
so that all four eigenvalues are subject to the same rounding errors.
This talk focuses on structure-preserving algorithms and structured error analysis such as structured condition numbers and backward errors presenting the state-of-the art for a few classes
of matrices.
Bibek Kabi
Department of Advanced Technology
and Development Center
IIT Kharagpur, Kharagpur, 721302, India.
[email protected]
09749713677
Comparative Study of Tridiagonalization Methods for Fast SVD
Bibek Kabi, Tapan Pradhan, Ramanarayan Mohanty and Aurobinda Routray
Singular Value Decomposition (SVD) is a powerful tool in digital signal and image processing applications. Eigenspace-based methods are useful for face recognition in image processing,
where a fast SVD of the image covariance matrix is computed. The SVD of dense symmetric matrices can be computed using either one-step or two-step iterative numerical algorithms. One-step
methods include the Jacobi and Hestenes algorithms, which generate accurate singular values
and vectors; however, the time of execution increases with the dimension of the matrices. Computation of the SVD of dense symmetric matrices via the two-step method consists of two stages: 1.
Bidiagonalization or Tridiagonalization and 2. Diagonalization. Tridiagonalization of the dense symmetric matrix is a significant step, and its performance affects the performance of the SVD.
Therefore, in this article a comparative analysis of available tridiagonalization methods has
been carried out to select the fastest and most accurate tridiagonalization method, to be used as the first
step for fast SVD computation. The Lanczos tridiagonalization method with partial orthogonalization, among the available tridiagonalization methods, is found to be the fastest and most accurate. Hence,
Lanczos tridiagonalization with partial orthogonalization, along with the Divide and Conquer algorithm, could be the fast SVD algorithm for computing the SVD of symmetric matrices.
For any dense real symmetric matrix $A_{n \times n}$, it is possible to find an orthogonal $Q_{n \times n}$ such that
$T = Q^T A Q$, where $T$ is a symmetric tridiagonal matrix. There are various tridiagonalization
methods like Givens rotation, Householder reflection and Lanczos method. Householder and
Givens methods are well known for tridiagonalization of a dense symmetric matrix. Givens
method reduces a dense symmetric matrix to its equivalent symmetric tridiagonal
form by applying a sequence of appropriately chosen elementary orthogonal transformation
matrices. Householder reflection overwrites A with $T = Q^T A Q$, where T is tridiagonal and Q
is an orthogonal matrix given by the product of Householder transformation matrices.
The Lanczos algorithm produces a symmetric tridiagonal matrix T and a set of Lanczos vectors
$q_j$, j = 1, ..., n, which form the columns of the orthogonal matrix Q. Lanczos uses a three-term
recurrence to reduce a symmetric matrix to its tridiagonal form. The Lanczos
algorithm without orthogonalization became popular for tridiagonalizing a matrix. However,
the loss of orthogonality among the Lanczos vectors complicates the relationship between A's singular values and those of T. To circumvent this problem, various orthogonalization schemes
have been introduced: full, selective, and partial orthogonalization. Orthogonalizing each
Lanczos vector against all previous vectors is called complete or full orthogonalization. At
each iteration the Lanczos vector is orthogonalized by the Gram-Schmidt process. However,
the full orthogonalization scheme proves to be expensive with respect to storage requirements and execution time. Therefore, the selective orthogonalization scheme has been introduced.
In this scheme, Lanczos vectors are orthogonalized against a few selected vectors, namely the
Ritz vectors which have nearly converged. However, one of the major drawbacks of selective orthogonalization is the factorization of $T_k$, which again involves the solution of a symmetric
tridiagonal singular value problem using algorithms like Divide and Conquer or QR factorization.
There are other drawbacks mentioned by S. Qiao in his work. To overcome these drawbacks,
partial orthogonalization has been introduced: it orthogonalizes the Lanczos vectors only to the
square root of the machine epsilon and removes the computation of Ritz vectors.
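A minimal dense sketch of the Lanczos three-term recurrence follows (our own code, shown with full reorthogonalization for simplicity; the partial scheme discussed above would reorthogonalize only when the estimated loss of orthogonality exceeds the square root of the machine epsilon):

```python
import numpy as np

def lanczos_tridiag(A):
    """Reduce symmetric A to tridiagonal T = Q^T A Q via the Lanczos
    three-term recurrence, with full (Gram-Schmidt) reorthogonalization."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    q = np.random.default_rng(0).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(n):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, Q

A = np.random.default_rng(1).standard_normal((6, 6))
A = (A + A.T) / 2
T, Q = lanczos_tridiag(A)
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A)))  # True
```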
A comparative study of various tridiagonalization methods has been presented based on
orthogonality error, factorization error and elapsed time, the first two named Oerror and
Ferror respectively. The orthogonality error was quite small in all cases; the factorization
error and elapsed time therefore form the basis of comparison. For symmetric matrices of
sizes 100 and 300, the Ferror and elapsed time (secs) obtained from the Givens method are
(1.4668e+003, 4.085310 secs) and (7.6239e+003, 1043.504051 secs) respectively. The results
for the same matrices obtained from the Householder transformation are (3.6013e+003,
1.101101 secs) and (3.1710e+004, 107.275468 secs) respectively. The Lanczos method with
partial orthogonalization produces (1.0501e−005, 0.049968 secs) and (6.1891e−005,
0.313961 secs). Hence, Lanczos with partial orthogonalization proves to be much more
effective in terms of errors and elapsed time. The elapsed times for large matrices of sizes
1000 and 1500 with partial orthogonalization are 12.424729 and 45.833286 secs respectively.
Hence, Lanczos with partial orthogonalization can be used for tridiagonalizing large image
covariance matrices in less time, ensuring fast SVD in digital signal and image processing
applications. Experimental results show that all the tridiagonalization schemes effectively
reduce a dense symmetric matrix to its tridiagonal form, with Lanczos with partial
orthogonalization being the most efficient. The Lanczos tridiagonalization with partial
orthogonalization scheme may be combined with the Divide and Conquer algorithm for the
computation of a fast SVD of dense symmetric matrices. A fixed-point format of the fast SVD
can be developed and implemented on embedded platforms like ARM, DSPs and FPGAs for
face and eye tracking and other digital signal and image processing applications.
References
[1] G. H. Golub and C. F. Van Loan, Matrix Computations, Third Edition, The Johns Hopkins
University Press, Baltimore and London, 1996.

[2] S. Qiao, Orthogonalization Techniques for the Lanczos Tridiagonalization of Complex
Symmetric Matrices, Proc. of SPIE Vol. 5559 (2004), 423–434.

[3] C. C. Paige, Error Analysis of the Lanczos Algorithm for Tridiagonalizing a Symmetric
Matrix, IMA Journal of Applied Mathematics, 18 (1976), 341–349.
Rachel Kalpana Kalaimani
212, Control & Computing Lab
Dept. of Electrical Engg.
Indian Institute of Technology Bombay
Mumbai 400076, India.
[email protected]
91 9920731380
Distance of a Pencil from Having a Zero at Infinity
Rachel Kalpana Kalaimani
Objective: For a given pair (E, A) such that the matrix pencil sE − A has no zeros at infinity,
compute a nearest perturbed pair (Ẽ, Ã) so that the pencil sẼ − Ã has a zero at infinity.
Introduction
We consider a linear time invariant system represented in the descriptor state space form as
follows:
E ẋ = Ax + Bu.
Assume that det(sE − A) ≢ 0. When E is singular, the free response of the system may
have impulsive modes. This corresponds to the matrix pencil sE − A 'losing rank at s = ∞'.
If this happens to be the case, the matrix pencil sE − A is said to have a zero at infinity.
More precisely, sE − A is said to have a zero at infinity if the degree of det(sE − A) is less
than rank E. (Refer to [1] for a precise definition of zeros at infinity.) An important
fact to be noted here is that the matrix E being singular is only a necessary condition for the
presence of zeros at infinity. However for the generalized eigenvalue problem, E being singular
is a necessary and sufficient condition for the pencil sE − A to have eigenvalues at infinity. We
follow the system theoretic notion about zeros at infinity as in this case the number of zeros at
infinity corresponds to the number of impulsive solutions in a dynamical system. Refer [2] for
more about this distinction.
Problem Statement
Given: E, A ∈ ℝ^{n×n} with rank E = r such that the pencil sE − A has no zero at infinity. Let
X := {(Ẽ, Ã) ∈ ℝ^{n×n} × ℝ^{n×n} | sẼ − Ã has a zero at infinity}. Find:

1. $\delta := \min_{(\tilde{E},\tilde{A}) \in X} \left( \|E - \tilde{E}\|_2 + \|A - \tilde{A}\|_2 \right)$.

2. A pair (Ẽ, Ã) that achieves the minimum δ.
Results
Since the problem is about perturbing only the coefficient matrices, it would be helpful to
characterize the condition for no zeros at infinity in terms of the coefficient matrices. The
following lemma does this.
Lemma 1. Consider a pair (E, A), E, A ∈ ℝ^{n×n}, with rank of E equal to r. Let M ∈ ℝ^{(n−r)×n}
with full row rank be such that ME = 0. Then sE − A has no zeros at infinity if and only if
dim(MA ker E) = n − rank E.

Thus the pencil sE − A has no zeros at infinity if and only if a smaller matrix derived from A is
nonsingular. The metric is defined by the 2-norm; hence, for a given nonsingular matrix, the
distance to the nearest singular matrix is the smallest singular value of that matrix.
This fact is crucially used in the main result, which is the following algorithm providing
the perturbation amount δ and a pair (Ẽ, Ã) as required.
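A short NumPy illustration (ours) of this fact about the 2-norm distance to singularity:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
U, s, Vt = np.linalg.svd(M)
# subtracting the smallest singular value along its singular pair
# yields a singular matrix at 2-norm distance exactly s[-1]
M_sing = M - s[-1] * np.outer(U[:, -1], Vt[-1])
print(s[-1], np.linalg.matrix_rank(M_sing))   # distance, rank n-1 = 4
```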
Algorithm 1. Input: (E, A) and r := rank E such that sE − A has no zeros at infinity.
Output: δ and (Ẽ, Ã).
Define $A_k := A([n-k+1 : n], [n-k+1 : n])$; this is the submatrix of A formed
by the last k rows and k columns of A. Let the nullity of E be t, i.e., t = n − r.

Step 1: Find the SVD of E: $UEV = E'$. Hence $E' = \operatorname{diag}(\sigma_1, \ldots, \sigma_r, 0, \ldots, 0)$, where
$\sigma_1 \geq \cdots \geq \sigma_r$ are the singular values of E. Then $A' := UAV$.

Step 2: Let k = 0 and min $= \sigma_t(A'_t)$, where $\sigma_t(A'_t)$ denotes the smallest singular value of $A'_t$.
Perform the following iteration.

For i = 0 to r − 1
  If $\sigma_{r-i}(E) >$ min
    Stop
  ElseIf $\sigma_{r-i}(E) + \sigma_{t+1+i}(A'_{t+1+i}) <$ min
    min $= \sigma_{r-i}(E) + \sigma_{t+1+i}(A'_{t+1+i})$
    k = i + 1
End.

Hence δ = min and $\tilde{E} = U^{-1} \operatorname{diag}(\sigma_1, \ldots, \sigma_{r-k}, 0, \ldots, 0)\, V^{-1}$.

Step 3: Find the singular values of $A'_{t+k}$ and let them be $\sigma'_1, \ldots, \sigma'_{t+k}$. Let $U_1$ and $V_1$ be the
matrices involved in computing the SVD of $A'_{t+k}$. Define $U_2$ and $V_2$ as follows:

$$U_2 = \begin{bmatrix} I_{r-k} & 0 \\ 0 & U_1 \end{bmatrix}, \qquad V_2 = \begin{bmatrix} I_{r-k} & 0 \\ 0 & V_1 \end{bmatrix}.$$

Then it follows that

$$U_2 A' V_2 = \begin{bmatrix} A'_{11} & A'_{12} \\ A'_{21} & \operatorname{diag}(\sigma'_1, \ldots, \sigma'_{t+k}) \end{bmatrix}, \quad \text{and let} \quad A_2 := \begin{bmatrix} A'_{11} & A'_{12} \\ A'_{21} & \operatorname{diag}(\sigma'_1, \ldots, \sigma'_{t+k-1}, 0) \end{bmatrix},$$

where $A'_{11}$, $A'_{12}$ and $A'_{21}$ are submatrices of $A'$. Therefore calculate $\tilde{A} = U^{-1} U_2^{-1} A_2 V_2^{-1} V^{-1}$.
References
[1] A.I.G. Vardulakis, “ Linear Multivariable Control, Algebraic Analysis and Synthesis
Methods”, John Wiley and Sons, Chichester, 1991.
[2] P. Van Dooren and P. Dewilde, "The eigenstructure of an arbitrary polynomial matrix: computational aspects", Linear Algebra and its Applications, vol. 50, pp. 545-579, 1983.
Karuna Kalita
Department of Mechanical Engineering
Indian Institute of Technology Guwahati
Guwahati 781039, India.
[email protected]
+91 (0)361 2582680
Extracting Eigenvalues and Eigenvectors of a QEVP in the form of
Homogeneous Coordinates using Newton-Raphson Technique
Karuna Kalita¹, Aritra Sasmal¹ and Seamus D. Garvey²

¹Department of Mechanical Engineering, Indian Institute of Technology Guwahati, Guwahati 781039, India.
²Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom
The equation of motion of a second order system is
$$M\ddot{q} + D\dot{q} + Kq = f$$
where {K, D, M} are the system stiffness, damping and mass matrices and their dimensions
are N × N . f is the vector of generalised forces of dimension N × 1 and q is the vector of
generalised displacements of dimension N × 1. {K, D, M} are not assumed to be symmetric
and the mass matrix may not be invertible.
Two systems {K0 , D0 , M0 } and {K1 , D1 , M1 } are related by a Structure-Preserving Equivalence [1] if there are two invertible (2N × 2N ) matrices {TL , TR } such that the Lancaster
Augmented Matrices (LAMs) of system {K0 , D0 , M0 } are transformed to become the LAMs
of {K1 , D1 , M1 } and the transformations will be
$T_L^T \mathbf{K}_0 T_R = \mathbf{K}_1$, $T_L^T \mathbf{D}_0 T_R = \mathbf{D}_1$ and $T_L^T \mathbf{M}_0 T_R = \mathbf{M}_1$.
The quadratic eigenvalue-eigenvector problem (QEVP) of interest, in homogeneous coordinates
form as described in [2], is

$$(k_i \mathbf{K} + d_i \mathbf{D} + m_i \mathbf{M})\, v_i = 0, \qquad w_i^T (k_i \mathbf{K} + d_i \mathbf{D} + m_i \mathbf{M}) = 0,$$

where $v_i$ and $w_i$ are the right and left eigenvectors respectively of dimension 2N × 2, and
$\mathbf{M}$, $\mathbf{D}$ and $\mathbf{K}$ are matrices of size 2N × 2N with the following structures:

$$\mathbf{M} := \begin{bmatrix} 0 & K \\ K & D \end{bmatrix}, \qquad \mathbf{D} := \begin{bmatrix} K & 0 \\ 0 & -M \end{bmatrix}, \qquad \mathbf{K} := \begin{bmatrix} -D & -M \\ -M & 0 \end{bmatrix}.$$
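A small NumPy check (our own construction and sign conventions, not necessarily those of [2]) shows how these LAMs chain together the eigenvalues of the underlying quadratic pencil λ²M + λD + K:

```python
import numpy as np

def lams(K, D, M):
    """Build the three Lancaster Augmented Matrices displayed above."""
    Z = np.zeros_like(K)
    bigM = np.block([[Z, K], [K, D]])
    bigD = np.block([[K, Z], [Z, -M]])
    bigK = np.block([[-D, -M], [-M, Z]])
    return bigK, bigD, bigM

rng = np.random.default_rng(3)
n = 3
K, D, M = (rng.standard_normal((n, n)) for _ in range(3))

# solve the QEVP (lam^2 M + lam D + K) x = 0 via a companion linearization
comp = np.block([[np.zeros((n, n)), np.eye(n)],
                 [-np.linalg.solve(M, K), -np.linalg.solve(M, D)]])
w, V = np.linalg.eig(comp)
lam, v = w[0], V[:, 0]                 # v has the form [x; lam*x]

bigK, bigD, bigM = lams(K, D, M)
# the LAMs satisfy bigM v = lam * bigD v and bigD v = lam * bigK v
print(np.allclose(bigM @ v, lam * (bigD @ v)),
      np.allclose(bigD @ v, lam * (bigK @ v)))   # True True
```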
In this paper a method for extracting eigenvalues and eigenvectors of a QEVP in the form of
homogeneous coordinates using Newton-Raphson technique is presented.
References
[1] S. D. Garvey, M. I. Friswell and U. Prells, Coordinate Transformations for Second Order
Systems. Part I: General Transformations, Journal of Sound and Vibration, volume 258,
issue 5, pages 885-909, 2002.
[2] S. D. Garvey, Basic Mathematical Ideas behind a Rayleigh Quotient Method for the QEVP
(Unpublished manuscript).
Michael Karow
Institut für Mathematik
Technische Universität Berlin
Straße des 17. Juni 136
10623 Berlin, Germany.
[email protected]
+49 3031425004
+49 3031421264
The Separation of two Matrices and its Application in the Perturbation
Theory of Eigenvalues and Invariant Subspaces
Michael Karow
We discuss the three definitions of the separation of two matrices given by Stewart, Varah and
Demmel. Then we use the separation in order to obtain an inclusion theorem for pseudospectra
of block triangular matrices. Furthermore, we present two perturbation bounds for invariant
subspaces and compare them with the classical bounds of Stewart and Demmel.
[email protected]
Stephen Kirkland
Stokes Professor, Hamilton Institute
National University of Ireland Maynooth
+353 (0)1 708 6797
Computation Considerations for the Group Inverse for an Irreducible
M-Matrix
Stephen Kirkland
M-matrices arise in numerous applied settings, and a certain generalised inverse – the group
inverse – of an irreducible M-matrix turns out to yield useful information. For example, perturbation theory for stationary distribution vectors of Markov chains, sensitivity analysis for
stable population vectors in mathematical ecology, and effective resistance in resistive electrical networks all rely on the use of the group inverse of an irreducible M-matrix.
How then can we compute such a group inverse? In this talk, we will survey several known
methods for computing the group inverse of an irreducible M-matrix, and then discuss some
sensitivity and perturbation results.
Balmohan V. Limaye
Department of Mathematics
Indian Institute of Technology Bombay
Powai, Mumbai 400076, India.
[email protected]
Conditioning of a Basis
Balmohan V. Limaye
We define condition numbers of a basis of a finite dimensional normed space in terms of the
norms of the basis elements and the distances of the basis elements from their complementary
subspaces. Optimal scaling strategies are determined. Condition numbers of a basis of an inner
product space can be calculated explicitly in terms of the diagonal entries of the corresponding
Gram matrix and of its inverse. We give estimates for the change in the condition numbers
when a given basis is transformed by a nonsingular matrix to another basis. Our condition
numbers arise naturally when we address the problem of computing the dual basis with the
given basis as the data.
Vu Hoang Linh
Faculty of Mathematics
Mechanics and Informatics
Vietnam National University - Hanoi
334, Nguyen Trai, Thanh Xuan, Hanoi,
Vietnam
[email protected]
+84 4 38581135
+84 4 38588817
Robust Stability of Linear Delay Differential-Algebraic Systems
Vu Hoang Linh
In this talk, we discuss the robust stability of linear time-invariant delay differential-algebraic
systems with respect to structured and admissible perturbations. To measure the distance to
instability, the concept of stability radii is used. First, existing results on stability radii of
differential-algebraic systems without delay are briefly summarized. Then, we show how some
of these results can be extended to the case of differential-algebraic systems with delay which is
more complicated than the non-delay case. As a main result, under certain structure conditions,
the formula of the (complex) stability radius is obtained. In addition, the asymptotic behaviour
of the stability radii of discretised systems is characterised as the stepsize tends to zero. The
talk is based on joint works with Nguyen Huu Du, Do Duc Thuan, and Volker Mehrmann.
D. Steven Mackey
Dept. of Mathematics
Western Michigan University
1903 W. Michigan Ave
Kalamazoo, MI 49008-5248
[email protected]
269 387 4539
269 387 4530
Minimal Indices of Singular Matrix Polynomials: Some Recent
Perspectives
D. Steven Mackey
In addition to eigenvalues, singular matrix polynomials also possess scalar invariants called
“minimal indices” that are significant in a number of application areas, such as control theory
and linear systems theory. In this talk I will discuss recent developments in our understanding
of several fundamental issues concerning minimal indices, starting with the very definition of
the concept. Next the question of how the minimal indices of a polynomial are related to those
of its linearizations is considered.
Finally I describe the “index sum theorem”, a fundamental relationship between the elementary
divisors and minimal indices of any matrix polynomial, and some of its consequences.
Christian Mehl
Institute of Mathematics
TU Berlin
MA 4-5, 10623 Berlin, Germany.
[email protected]
(+49) 30 314 25741
(+49) 30 314 79706
Structured Backward Errors for Eigenvalues of Hermitian Pencils
Shreemayee Bora, Michael Karow, Christian Mehl and Punit Sharma
In this talk, we consider the structured backward errors for eigenvalues of Hermitian pencils
or, in other words, the problem of finding the smallest Hermitian perturbation so that a given
value is an eigenvalue of the perturbed Hermitian pencil.
The answer is well known for the case that the eigenvalue is real, but in the case of nonreal
eigenvalues, only the structured backward error for eigenpairs has been considered so far, i.e.,
the problem of finding the smallest Hermitian perturbation so that a given pair is an eigenpair
of the perturbed Hermitian pencil.
In this talk, we give a complete answer to the question by reducing the problem to an eigenvalue
minimization problem for Hermitian matrices depending on two real parameters. We will see
that the structured backward error of complex nonreal eigenvalues may be significantly different
from the corresponding unstructured backward error, which is in contrast to the case of real
eigenvalues, where the structured and unstructured backward errors coincide.
Volker Mehrmann
Institut f. Mathematik, MA 4-5, Str. des 17.
Juni 136, D-10623 Berlin, Germany
[email protected]
+493031425736
+493031479706
Self-adjoint Differential Operators and Optimal Control
Peter Kunkel, Volker Mehrmann and Lena Scholz
Motivated by the structure which arises, e.g., in the necessary optimality boundary value problem of DAE constrained linear-quadratic optimal control, a special class of structured DAEs,
so-called self-adjoint DAEs, is studied in detail. It is analyzed when and how this structure is
actually associated with a self-conjugate operator. Local structure preserving condensed forms
under constant rank assumptions are developed that allow to study existence and uniqueness
of solutions. A structured global condensed form and structured reduced models based on
derivative arrays are developed as well. Furthermore, the relationship between DAEs with self-conjugate operator and Hamiltonian systems is analyzed, and it is characterized when there is
an underlying symplectic flow.
Emre Mengi
Koç University
Rumelifeneri Yolu 34450
Sariyer, Istanbul, Turkey.
[email protected]
+90 212 3381658
+90 212 3381559
Matrix Functions with Specified Eigenvalues
Michael Karow, Daniel Kressner, Emre Mengi, Ivica Nakic and Ninoslav Truhar
The main object we study is a matrix function depending on a parameter analytically. A singular value optimization characterization is derived for a nearest matrix function with prescribed
eigenvalues from such a matrix function with respect to the spectral norm. We start by considering the linear matrix pencil case, that was motivated by an inverse shape estimation problem.
The derivation for a linear pencil of the form L(λ) = A + λB extensively exploits the solution
space of an associated Sylvester equation of the form AX +BXC = 0, where C is upper triangular with diagonal entries selected from the prescribed eigenvalues. Kroneckerization of the
Sylvester equation yields a singular value optimization characterization. We obtain the singular
value optimization characterization for a matrix polynomial by means of a linearization of the
matrix polynomial, and applying the machinery for the linear pencil case. Finally, the more
general characterization for an analytic matrix function is obtained from a matrix polynomial
by incorporating polynomial interpolation into the derivation.
Many of the widely-studied distance problems fall into the scope of this work, e.g., distance
to instability, distance to uncontrollability, distance to a nearest defective matrix, and their
generalizations for matrix polynomials as well as analytic matrix functions.
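For concreteness, the Kroneckerization step uses the standard identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$, under which the Sylvester equation above becomes (our rendering):

$$\operatorname{vec}(AX + BXC) = \bigl(I \otimes A + C^{T} \otimes B\bigr)\operatorname{vec}(X) = 0,$$

so a nontrivial solution X exists exactly when $I \otimes A + C^{T} \otimes B$ is singular, which is the kind of singular value condition the optimization characterization builds on.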
[email protected]
Debasisha Mishra
Institute of Mathematics and Applications
Andharua, Bhubaneswar 751 003, India.
9337573138
B†-splittings of Matrices
Debasisha Mishra
A new type of matrix splitting, called B†-splittings ([1]), generalizing the notion of B-splittings
([2]), is introduced first. Then a characterization of the nonnegative Moore-Penrose inverse using
the proposed splitting is obtained. A convergence result and two comparison results for
B†-splittings as well as B-splittings are finally discussed.

The above work aims to solve a rectangular system of linear equations by an iterative method
using a matrix splitting. The problem then becomes an eigenvalue problem, i.e., finding the
spectral radius of the iteration scheme. The contents of the present work are from the author's
earlier work ([1] and [3]), written jointly with Dr. K. C. Sivakumar of IIT Madras. The same
technique can also be used to compute the Moore-Penrose inverse of a matrix.
References
[1] Mishra, Debasisha; Sivakumar, K. C., On Splittings of matrices and Nonnegative Generalized Inverses, Operators and Matrices 6 (1) (2012) 85-95.
[2] Peris, Josep E., A new characterization of inverse-positive matrices, Linear Algebra Appl.
154/156 (1991) 45-58.
[3] Mishra, Debasisha; Sivakumar, K. C., Comparison theorems for a subclass of proper
splittings of matrices, Appl. Math. Lett. 25 (2012) 2339-2343.
Michael L. Overton
Professor of Computer Science and Mathematics
Courant Institute of Mathematical Sciences
New York University, USA.
[email protected]
+1 212 998 3121
Characterization and Construction of the Nearest Defective Matrix via
Coalescence of Pseudospectral Components
Michael L. Overton, Rafikul Alam, Shreemayee Bora and Ralph Byers
Let A be a matrix with distinct eigenvalues and let w(A) be the distance from A to the set
of defective matrices (using either the 2-norm or the Frobenius norm). Define Λ_ε, the
ε-pseudospectrum of A, to be the set of points in the complex plane which are eigenvalues of
matrices A + E with ‖E‖ < ε, and let c(A) be the supremum of all ε with the property that Λ_ε
has n distinct components. Demmel and Wilkinson independently observed in the 1980s that
w(A) ≥ c(A), and equality was established for the 2-norm by Alam and Bora in 2005. We
give new results on the geometry of the pseudospectrum near points where first coalescence of
the components occurs, characterizing such points as the lowest generalized saddle point of the
smallest singular value of A − zI over z ∈ ℂ. One consequence is that w(A) = c(A) for the
Frobenius norm too, and another is the perhaps surprising result that the minimal distance is
attained by a defective matrix in all cases. Our results suggest a new computational approach to
approximating the nearest defective matrix by a variant of Newton’s method that is applicable
to both generic and nongeneric cases. Construction of the nearest defective matrix involves
some subtle numerical issues which we explain, and we present a simple backward error analysis showing that a certain singular vector residual measures how close the computed matrix
is to a truly defective matrix. Finally, we present a result giving lower bounds on the angles
of wedges contained in the pseudospectrum and emanating from generic coalescence points.
Several conjectures and questions remain open.
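A grid-based sketch (our own illustration, not the authors' Newton-based method) of the central quantity: the smallest singular value of A − zI over the complex plane, whose ε-sublevel sets are the pseudospectra Λ_ε and whose saddle heights between components indicate where coalescence occurs:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
xs = np.linspace(-4.0, 4.0, 120)
sig = np.array([[np.linalg.svd(A - (x + 1j * y) * np.eye(4),
                               compute_uv=False)[-1]
                 for x in xs] for y in xs])
# the set {z : sig(z) < eps} is Lambda_eps; as eps grows past the lowest
# saddle value of sig, two components of Lambda_eps first coalesce
print(sig.min(), sig.max())
```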
Vasilije Perović
Department of Mathematics
Western Michigan University
Kalamazoo, Michigan 49008
[email protected]
269 270 1900
269 387 4530
Linearizations of Matrix Polynomials in Bernstein Basis
D. Steven Mackey and Vasilije Perović
Bernstein polynomials were introduced just over a hundred years ago, and since then have
found numerous applications, most notably in computer aided geometric design [2]. Due to
their importance, a significant amount of research already exists for the scalar case. On the
other hand, matrix polynomials expressed in the Bernstein basis were studied only recently [1].
Two considerations led us to systematically study such matrix polynomials: the increasing use
of non-monomial bases in practice, and the desirable numerical properties of the Bernstein
basis.
For a matrix polynomial P (λ), the classical approach to solving the polynomial eigenvalue
problem P (λ)x = 0 is to first convert P into a matrix pencil L with the same finite and infinite elementary divisors, and then work with L. This method has been well studied when P
is expressed in the monomial basis. But what about the case when P (λ) is in the Bernstein
basis? It is important to avoid reformulating P (λ) into the monomial basis, since a change of
basis can introduce numerical errors that were not present in the original problem. Using novel
tools, we show how to work directly with P (λ) to generate large families of linearizations that
are also expressed in the Bernstein basis. We also construct analogs for the Bernstein basis of
the vector spaces of linearizations introduced in [4], and describe some of their basic properties. Connections with low bandwidth Fiedler-like linearizations, which could have numerical
impact, are also established.
Additionally, we extend the definitions of structured matrix polynomials in the monomial basis
to arbitrary polynomial bases. In the special case of the Bernstein basis, we derive spectral
pairings for certain classes of structured matrix polynomials and discuss the existence of structured linearizations. We illustrate how existing structure preserving eigenvalue algorithms for
structured pencils in the monomial basis can be adapted for use with structured pencils in the
Bernstein basis.
Finally, we note that several results from [3, 5] are readily obtained by specializing our techniques to the case of scalar polynomials expressed in the Bernstein basis.
References
[1] A. Amiraslani, R. M. Corless & P. Lancaster, Linearization of matrix polynomials expressed in polynomial bases, IMA Journal of Numerical Analysis, 29 (2009),
pp. 141–157.

[2] R. T. Farouki, The Bernstein polynomial basis: a centennial retrospective, Computer
Aided Geometric Design, 29 (2012), pp. 379–419.

[3] G. F. Jónsson & S. Vavasis, Solving polynomials with small leading coefficients,
SIAM J. Matrix Anal. Appl., 26 (2005), pp. 400–414.

[4] D. S. Mackey, N. Mackey, C. Mehl & V. Mehrmann, Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., 27 (2006), pp. 821–850.

[5] J. R. Winkler, A companion matrix resultant for Bernstein polynomials, Linear Algebra
Appl., 362 (2003), pp. 153–175.
H. K. Pillai
Department of Electrical Engineering
Indian Institute of Technology Bombay
Powai, Mumbai 400 076, India.
[email protected]
+91 22 2576 7424
+91 22 2576 8424
A Linear Algebraic Look at Differential Riccati Equation and Algebraic
Riccati (in)equality
H. K. Pillai
The differential Riccati equation (DRE) plays a central role in optimal control problems. Closely
associated to the DRE is the algebraic Riccati (in)equality (ARI/ARE), which is usually solved
nowadays using LMI techniques. I shall present a linear algebraic approach that provides
several interesting insights into solutions of DRE, ARI and ARE. I shall also discuss how these
insights may perhaps lead to more efficient algorithms for solving a spectrum of problems
arising in control theory and signal processing. These results have been obtained as joint work
with Sanand Athalye.
Soumyendu Raha
Supercomputer Education and
Research Centre
Indian Institute of Science, Bangalore.
[email protected]
+91 80 2293 2791
+91 80 2360 2648
A Regularization Based Method for Dynamic Optimization with High
Index DAEs
Soumyendu Raha
We describe a consistent numerical method for solving dynamic optimization problems involving high index DAEs via regularization of the Jacobian matrix of the method. In doing so, we
use Aronszajn's lemma to relate the discretization of the problem and the regularization.
The method is illustrated with examples.
[email protected]
Manideepa Saha
Department of Mathematics
IIT Guwahati, Guwahati 781039, India.
+91 9678000530
A Constructive Method for Obtaining a Preferred Basis from a
Quasi-preferred Basis for M -matrices
Manideepa Saha and Sriparna Bandopadhyay
An M-matrix has the form A = sI − B, where s ≥ ρ(B) ≥ 0 and B is an entrywise nonnegative
matrix. A quasi-preferred set is an ordered set of nonnegative vectors in which the positivity of the
entries of the vectors depends on the graph structure of A in a specified manner. A preferred
set is a quasi-preferred set such that the image of each vector under −A is a nonnegative
linear combination of the subsequent vectors, and the coefficients in the linear combinations
also depend entirely on the graph structure of A.

The existence of a preferred basis for the generalized eigenspace (i.e., the null space of A^n,
where n is the order of A) of an M-matrix A was proved by H. Schneider and D. Hershkowitz in
1988 [2]. But their proof was by the 'tracedown method', which is essentially induction on
the diagonal blocks of the Frobenius normal form of the matrix. As it is easier to obtain a
quasi-preferred basis than a preferred basis, we introduce a direct method for computing a
preferred basis when a quasi-preferred basis is given. We also present some special
properties of quasi-preferred bases of an M-matrix, and these properties help us arrive at the
constructive method for obtaining a preferred basis starting from a quasi-preferred basis. We
illustrate the whole procedure with the help of some typical examples, and conclude
by summarizing the whole procedure in the form of an algorithm.
References
[1] A. Berman and R. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 2nd ed., SIAM Publications, Philadelphia, PA, 1994.
[2] D. Hershkowitz and H. Schneider, On the generalized nullspace of M-matrices and Z-matrices, Linear Algebra Appl., 106 (1988), pp. 5–23.
[3] U. Rothblum, Algebraic eigenspaces of nonnegative matrices, Linear Algebra Appl., 12 (1975), pp. 281–292.
36
[email protected]
G. Sajith
Department of Computer Science
Indian Institute of Technology Guwahati
+91 8970100222
An I/O Efficient Algorithm for Hessenberg Reduction
S. K. Mohanty and G. Sajith
In traditional algorithm design, it is assumed that the main memory is infinite in size and
allows random uniform access to all its locations. This enables the designer to assume that all
the data fits in the main memory. (Thus, traditional algorithms are often called in-core.) Under
these assumptions, the performance of an algorithm is determined by the number of instructions executed, and the design goal is therefore to minimise this count.
These assumptions may not be valid while dealing with massive data sets because, in reality, the main memory is limited, and so the bulk of the data may have to be stored in slow but inexpensive secondary memory. The number of instructions executed is then no longer a reliable performance metric; the number of inputs/outputs (I/Os) executed is. I/Os are slow because of the large access times of secondary memories. While designing algorithms for large data sets, the goal is therefore to minimise the number of I/Os executed. The literature is rich in efficient out-of-core algorithms for matrix computation, but very few of them are designed on the external memory model of Aggarwal and Vitter or attempt to quantify their performance in terms of the number of I/Os performed.
This model, introduced by Aggarwal and Vitter, has a single processor and a two-level memory. It is assumed that the bulk of the data is kept in the secondary memory, which is permanent storage. The secondary memory is divided into blocks. An I/O is defined as the transfer of a block of data between the secondary memory and a volatile main memory, which is limited in size. The processor's clock period and the main memory access time are negligible compared to the secondary memory access time. The measure of performance of an algorithm is the number of I/Os it performs. Algorithms designed on this model are referred to as external memory algorithms. The model defines the following parameters: the size of the problem input (N), the size of the main memory (M), and the size of a disk block (B). They satisfy 1 ≤ B ≤ M < N.
It has been shown that, on this model, the number of I/Os needed to read (write) N contiguous items from (to) the disk is Scan(N) = Θ(N/B), and that the number of I/Os required to sort N items is Sort(N) = Θ((N/B) log_{M/B}(N/B)). For all realistic values of N, B, and M, Scan(N) < Sort(N) ≪ N.
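To make the bounds concrete, a small sketch evaluating the two quantities for one made-up choice of parameters:

    from math import log

    # Illustrative I/O counts on the Aggarwal-Vitter model for sample
    # parameter values N, M, B satisfying 1 <= B <= M < N.
    N, M, B = 10**9, 10**6, 10**3

    scan = N / B                              # Theta(N/B)
    sort = (N / B) * log(N / B, M / B)        # Theta((N/B) log_{M/B}(N/B))
    print(f"Scan(N) ~ {scan:.3e} I/Os")
    print(f"Sort(N) ~ {sort:.3e} I/Os  (both far below N = {N:.1e})")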
37
We study the problem of Hessenberg reduction on this model. We show that even the blocked variant of Hessenberg reduction [Gregorio Quintana-Ortí and Robert van de Geijn, Improving the performance of reduction to Hessenberg form, ACM Transactions on Mathematical Software, 32 (2006), no. 2, pp. 180–194], in spite of the 70% reduction it achieves in matrix-vector operations, has an I/O complexity of O(N^3/B). We propose a Hessenberg reduction algorithm with an I/O complexity that is asymptotically superior.
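For contrast with the out-of-core setting, the in-core factorization itself is available in standard libraries; a minimal sketch of the target factorization A = Q H Q^T (illustrative only, not the proposed I/O-efficient algorithm):

    import numpy as np
    from scipy.linalg import hessenberg

    # In-core baseline: reduce a random matrix to upper Hessenberg form.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    H, Q = hessenberg(A, calc_q=True)
    print(np.allclose(Q @ H @ Q.T, A))       # True
    print(np.allclose(np.tril(H, -2), 0))    # zero below the first subdiagonal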
38
[email protected]
C. S. Sastry
Indian Institute of Technology, Hyderabad
Hyderabad 502205, India.
+91 40 2301 6072
Sparse Description of Linear Systems and Application in Computed
Tomography
R. Ramu Naidu, C. S. Sastry and J. Phanindra Varma
In recent years, sparse representation has emerged as a powerful tool for efficiently processing data in non-traditional ways. This is mainly due to the fact that natural data of interest tend to have sparse representations in some basis. A wealth of recent optimization techniques [1] in applied mathematics, under the name of Compressive Sampling Theory (CST or CS theory), aims at providing sparse descriptions of such data in redundant bases. The present abstract briefly walks through current developments in sparse representation theory and shows how these developments can be used for applications in Computed Tomography.
A full-rank matrix A of size m × n (with m ≪ n) generates an underdetermined system of linear equations Ax = y having infinitely many solutions. The problem of finding the sparsest solution (that is, the x for which |{i : x_i ≠ 0}| ≪ n) has been answered positively and constructively through the following optimization problem:
min_α ‖α‖_1   subject to   Aα = y.        (1)
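Problem (1) is a linear program after the standard splitting −t ≤ α ≤ t; the following is a minimal sketch using scipy.optimize.linprog, with illustrative data and variable names of our choosing.

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """Solve min ||alpha||_1 s.t. A alpha = y as an LP, as in (1)."""
        m, n = A.shape
        # Variables z = [alpha; t]; minimize sum(t) with -t <= alpha <= t.
        c = np.concatenate([np.zeros(n), np.ones(n)])
        I = np.eye(n)
        A_ub = np.block([[I, -I], [-I, -I]])   # alpha - t <= 0, -alpha - t <= 0
        b_ub = np.zeros(2 * n)
        A_eq = np.hstack([A, np.zeros((m, n))])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        return res.x[:n]

    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 50))          # m << n
    alpha_true = np.zeros(50); alpha_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
    alpha_hat = basis_pursuit(A, A @ alpha_true)
    print(np.linalg.norm(alpha_hat - alpha_true))  # typically tiny for sparse alpha_true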
The need for sparse representation arises from the fact that several real-life applications demand the representation of data in terms of as few basis (frame) elements as possible. The elements, i.e. the columns of A, are called atoms, and the matrix they generate is called the dictionary. The current literature [1] indicates that CST has a number of possible applications in fields like data compression and medical imaging, to name a few. The developments of CST depend typically on sparsity and incoherence. Sparsity expresses the idea that the "information rate" of continuous-time data may be much smaller than suggested by its bandwidth, or that discrete-time data depend on a number of degrees of freedom that is much smaller than their (finite) length. On the other hand, incoherence extends the duality between the time and frequency contents of data.
The recent results in CST concern properties of redundant dictionaries and address issues such as: 1. what recovery conditions must the matrix A satisfy so that (1) provides the unique sparsest solution to Ax = y? 2. how are the sparsity of x and the size of y related? Candès et al. [1] introduced the Restricted Isometry Property (RIP) to establish the
39
theoretical guarantees for the stated sparse recovery. As verifying the RIP is very difficult, concepts such as widths of subsets and null space properties of A are being studied [1] to establish theoretical guarantees for sparse recovery.
Applications in Computed Tomography
The basic objective [2] in Computed Tomography (CT) is to obtain high-quality images from projection data acquired using different scanning geometries, such as parallel, fan, and cone or spiral cone beam, with as little exposure and as much efficiency as possible. In the following subsections, we outline how CT images can be reconstructed from incomplete parallel beam data and how CT images can be classified directly in the Radon domain.
Reconstruction of CT images from incomplete data
It is well known that the reconstruction problem in CT from projection data can be addressed by solving the matrix equation p = RI [2], where p and I are the projection data and the image, respectively, and R is the projection matrix. Several well-known methods, such as the Algebraic Reconstruction Technique (ART) [2], are in use for solving this matrix equation. Recent CST provides a route to faster reconstruction, especially when only limited and noisy projection data are available:
Ĩ = Ψ arg min_θ ( ‖p − RΨθ‖_2 + λ‖θ‖_1 ),        (2)
where Ψ is a suitable sparsifying basis for I (that is, I = ΨI_1, with I_1 being sparse).
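A common way to solve problems of the form (2) is iterative shrinkage-thresholding (ISTA); the following is a minimal sketch that, for brevity, takes Ψ = I and the squared residual (simplifying assumptions of ours, not from the abstract).

    import numpy as np

    def ista(R, p, lam, n_iter=500):
        """Minimal ISTA sketch for min_theta ||p - R theta||_2^2 + lam*||theta||_1,
        taking the sparsifying basis Psi = I for simplicity."""
        L = np.linalg.norm(R, 2) ** 2               # Lipschitz constant scale
        theta = np.zeros(R.shape[1])
        for _ in range(n_iter):
            g = theta + (R.T @ (p - R @ theta)) / L                     # gradient step
            theta = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)  # shrink
        return theta

    rng = np.random.default_rng(2)
    R = rng.standard_normal((30, 60))
    theta_true = np.zeros(60); theta_true[[5, 20]] = [2.0, -1.5]
    theta_hat = ista(R, R @ theta_true, lam=0.1)
    print(np.linalg.norm(theta_hat - theta_true))   # small (not zero, due to the l1 bias)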
Classification of CT images directly from projection data
The classification of CT images into classes possessing various degrees of abnormality is of relevance in medical imaging. Such a classification carried out directly in the Radon domain is a route to the automated classification of medical images, and it is free from reconstruction artifacts. Motivated by dictionary learning [1], we realize our objective of classifying CT images in the Radon domain through the following optimization problem:
min_{D_i, C_i, θ_l}  Σ_{j=1}^{n} Σ_{i=1}^{K} Σ_{x_j ∈ C_j} ( ‖Rx_j − Dα‖_2^2 + γ‖α‖_1 ),   where γ > 0.        (3)
40
Acknowledgement: One of the authors (CSS) is thankful to DST, Govt. of India for the
support (Ref: SR/FTP/ETA-54/2009) he received.
References
[1] Y. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications, Cambridge University Press, 2012.
[2] F. Natterer, Mathematical Methods in Image Reconstruction, SIAM, 2001.
41
Punit Sharma
Department of Mathematics
Indian Institute of Technology Guwahati
Guwahati 781039, India.
[email protected]
9957939372
Structured Backward Error of Approximate Eigenvalues of
T-palindromic Polynomials
Punit Sharma, Shreemayee Bora, Michael Karow and Christian Mehl
We study the backward error of approximate eigenvalues of T-palindromic polynomials with respect to structure-preserving perturbations that affect up to four coefficients. Expressions for the backward error are given for T-palindromic pencils and quadratic polynomials. The same are also obtained for real approximate eigenvalues of real T-palindromic cubic polynomials. Finally, it is shown that the analysis leads to a lower bound for the structured backward error of an approximate eigenvalue of any T-palindromic polynomial, which is also an upper bound on the unstructured backward error.
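For orientation, eigenvalues of T-palindromic problems occur in pairs (λ, 1/λ); a small sketch, with illustrative random data, verifying this for the pencil L(z) = zA + A^T:

    import numpy as np
    from scipy.linalg import eig

    # Roots of det(z*A + A^T) = 0 via the generalized problem -A^T x = z A x.
    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    lam = eig(-A.T, A, right=False)
    print(np.sort_complex(lam))
    print(np.sort_complex(1 / lam))   # same multiset, up to rounding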
42
Valeria Simoncini
Dipartimento di Matematica
Università di Bologna
Piazza di Porta San Donato, 5
40126 Bologna, Italy
[email protected]
On the Numerical Solution of Large-Scale Linear Matrix Equations
Valeria Simoncini
Linear matrix equations such as the Lyapunov and Sylvester equations play an important role
in the analysis of dynamical systems, in control theory, in eigenvalue computation, and in other
scientific and engineering application problems.
A variety of robust numerical methods exists for the solution of small-dimensional linear matrix equations, whereas the large-scale case still poses a great challenge.
In this talk we review several available methods, from classical ADI to recently developed
projection methods making use of “second generation” Krylov subspaces. Both algebraic and
computational aspects will be considered.
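As a small point of reference, the dense case is handled by standard library routines; a minimal sketch with illustrative data (the talk concerns the large-scale regime, where such dense solvers are no longer feasible):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Dense baseline for a small Lyapunov equation A X + X A^T = Q.
    A = np.array([[-2.0, 1.0], [0.0, -3.0]])
    Q = -np.eye(2)
    X = solve_continuous_lyapunov(A, Q)
    print(np.allclose(A @ X + X @ A.T, Q))   # True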
43
Sukhjit Singh
Department of Mathematics
Indian Institute of Technology Kharagpur
Kharagpur 721302, India.
[email protected]
+91 7872001816
Eigenvalues of Symmetric Interval Matrices Using a Single Step Eigen
Perturbation Method
Sukhjit Singh and D. K. Gupta
This paper deals with eigenvalue problems involving symmetric interval matrices. The deviation amplitude of the interval matrix is treated as a perturbation around the nominal value of the interval matrix. Using the concepts of interval analysis, interval eigenvalues are computed by an interval extension of the single-step eigenvalue perturbation method, an improved version of the multi-step perturbation method applied in a single step. Two numerical examples are worked out, and the results obtained are compared with those of existing methods. It is observed that our method is reliable and efficient and gives better results for all examples.
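For comparison only, a crude outer enclosure (not the paper's single-step perturbation method) follows from Weyl's inequality: for symmetric A in [Ac − D, Ac + D], each eigenvalue satisfies |λ_i(A) − λ_i(Ac)| ≤ ρ(D). A minimal sketch with made-up data:

    import numpy as np

    Ac = np.array([[4.0, 1.0], [1.0, 2.0]])       # midpoint matrix
    D  = np.array([[0.1, 0.05], [0.05, 0.1]])     # radius matrix, D >= 0
    rho = max(abs(np.linalg.eigvalsh(D)))         # spectral radius of D
    lam = np.linalg.eigvalsh(Ac)
    print([(l - rho, l + rho) for l in lam])      # crude interval eigenvalue bounds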
References
[1] G. Alefeld and G. Mayer, Interval analysis: theory and applications, Journal of Computational and Applied Mathematics, 121 (2000), pp. 421–464.
[2] S. S. A. Ravi, T. K. Kundra and B. C. Nakra, Single step eigen perturbation method for structural dynamic modification, Mechanics Research Communications, 22 (1995), pp. 363–369.
[3] Z. P. Qiu, S. Chen and I. Elishakoff, Bounds of eigenvalues for structures with an interval description of uncertain-but-non-random parameters, Chaos, Solitons and Fractals, 7 (1996), pp. 425–434.
[4] Z. P. Qiu, I. Elishakoff and James H. Starnes Jr, The bound set of possible eigenvalues of structures with uncertain but non-random parameters, Chaos, Solitons and Fractals, 7 (1996), pp. 1845–1857.
44
K. C. Sivakumar
Department of Mathematics
Indian Institute of Technology Madras
Chennai 600 036, India.
[email protected]
+91 44 22574622
Nonnegative Generalized Inverses and Certain Subclasses of Singular
Q-matrices
K.C. Sivakumar
The notion of Q-matrices is quite well understood in the theory of linear complementarity problems. In this article, the author considers three variations of Q-matrices that are typically applicable to singular matrices. The main result presents a relationship between these notions (for a Z-matrix) and the nonnegativity of the Moore–Penrose inverse of the matrix concerned.
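As a quick numerical illustration with a made-up matrix (not from the article), one can test the nonnegativity of the Moore–Penrose inverse of a singular Z-matrix; the check fails for this graph Laplacian, consistent with nonnegativity singling out special subclasses.

    import numpy as np

    A = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  2.0, -1.0],
                  [-1.0, -1.0,  2.0]])   # singular Z-matrix (row sums zero)
    Apinv = np.linalg.pinv(A)
    print(np.all(Apinv >= -1e-12))       # False here: A^+ has negative entries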
45
Ivan Slapnicar
University of Split
Faculty of Electrical Engineering
Mechanical Engg. and Naval Architecture
R. Boskovica 32, HR-21000 Split, Croatia.
[email protected]
Accurate Eigenvalue Decomposition of Arrowhead Matrices and
Applications
Ivan Slapnicar, Nevena Jakovcevic Stor and Jesse Barlow
We present a new and improved algorithm for solving the eigenvalue problem of a real symmetric arrowhead matrix. Under certain conditions, the algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n^2) operations. The algorithm is based on a shift-and-invert technique, where in some cases it may be necessary to compute only one element of the inverse of the shifted matrix with double precision arithmetic. Each eigenvalue and the corresponding eigenvector are computed separately, which makes the algorithm suitable when only part of the spectrum is required and for parallel computing. We also present applications to Hermitian arrowhead matrices, symmetric tridiagonal matrices and diagonal-plus-rank-one matrices, and give numerical examples.
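To fix the structure in question, a minimal sketch building a symmetric arrowhead matrix and diagonalizing it with a dense O(n^3) solver (a reference point only, not the abstract's O(n^2) algorithm; the data are illustrative):

    import numpy as np

    d = np.array([1.0, 2.0, 3.0])      # diagonal of the leading block
    z = np.array([0.5, -0.4, 0.3])     # border ("shaft") entries
    a = 4.0                            # corner element
    A = np.block([[np.diag(d), z[:, None]],
                  [z[None, :], np.array([[a]])]])
    w, V = np.linalg.eigh(A)
    print(w)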
46
Shwetabh Srivastava
Department of Mathematics
Indian Institute of Technology Kharagpur
Kharagpur 721302, India.
[email protected]
+91 8016367291
A Family of Iterative Methods for Computing the Moore–Penrose
Generalized Inverse
Shwetabh Srivastava and D. K. Gupta
Based on the quadratically convergent method proposed in [1], a family of iterative methods for computing the Moore–Penrose generalized inverse of a matrix is presented. Convergence analysis along with error estimates for the proposed methods is discussed. Theoretical proofs and numerical experiments show that these iterative methods are very effective.
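A classical example of such a quadratically convergent scheme is the Newton–Schulz iteration (shown here as a sketch for orientation; it is not necessarily the iteration of [1]):

    import numpy as np

    def newton_schulz_pinv(A, n_iter=50):
        """Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), converging
        quadratically to the Moore-Penrose inverse from the standard
        starting guess X_0 = A^T / (||A||_1 ||A||_inf)."""
        m, n = A.shape
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I = np.eye(m)
        for _ in range(n_iter):
            X = X @ (2 * I - A @ X)
        return X

    A = np.random.default_rng(4).standard_normal((5, 3))
    print(np.allclose(newton_schulz_pinv(A), np.linalg.pinv(A)))   # True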
References
[1] Marko D. Petković and Predrag S. Stanimirović, Iterative method for computing Moore–Penrose inverse based on Penrose equations, Journal of Computational and Applied Mathematics, 235 (2011), pp. 1604–1613.
[2] Haibin Chen and Yiju Wang, A family of higher-order convergent iterative methods for computing the Moore–Penrose inverse, Appl. Math. Comput., 218 (2011), pp. 4012–4016.
47
Ravi Srivastava
Department of Mathematics
IIT Guwahati, Guwahati 781039, India.
[email protected]
09678883441
Distance Problems for Hermitian Matrix Pencils - an ε-Pseudospectra Based Approach
Shreemayee Bora and Ravi Srivastava
Given a definite pencil L(z) = zA − B, we present a bisection-type algorithm for computing its Crawford number and a nearest Hermitian pencil that is not definite, with respect to the norm |||L||| = √(‖A‖^2 + ‖B‖^2), where ‖·‖ denotes the 2-norm.
We also provide numerical experiments that compare the proposed algorithm with similar algorithms proposed in [1], [2] and [3].
The same technique may also be used to find the distance from a definitizable pencil to a nearest Hermitian pencil that is not definitizable, with respect to the norm |||·|||.
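For intuition, the Crawford number admits the characterization c(A, B) = max_θ λ_min(A cos θ + B sin θ); a crude grid-search sketch of this quantity follows (illustrative only, and far less efficient than the bisection algorithm above):

    import numpy as np

    def crawford_grid(A, B, n=3600):
        """Crude grid estimate of c(A,B) = max_theta lambda_min(A cos t + B sin t);
        positive (up to grid error) iff the pencil is definite."""
        best = -np.inf
        for t in np.linspace(0, 2 * np.pi, n, endpoint=False):
            lmin = np.linalg.eigvalsh(np.cos(t) * A + np.sin(t) * B)[0]
            best = max(best, lmin)
        return best

    A = np.diag([2.0, 3.0]); B = np.diag([1.0, -1.0])
    print(crawford_grid(A, B))   # approx. sqrt(5) for this made-up pair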
References
[1] Chun-Hua Guo, Nicholas J. Higham and Françoise Tisseur, An improved arc algorithm for detecting definite Hermitian pairs, SIAM J. Matrix Anal. Appl., 31(3):1131–1151, 2009.
[2] Nicholas J. Higham, Françoise Tisseur and Paul M. Van Dooren, Detecting a definite Hermitian pair and a hyperbolic or elliptic quadratic eigenvalue problem, and associated nearness problems, Linear Algebra Appl., 351/352:455–474, 2002. Fourth special issue on linear systems and control.
[3] F. Uhlig, On computing the generalized Crawford number of a matrix, Linear Algebra Appl. (2011), doi:10.1016/j.laa.2011.06.024.
48
David S. Watkins
Department of Mathematics
Washington State University
Pullman, WA 99163-3113, USA.
[email protected]
001-509-335-7256
001-509-335-1188
What’s So Great About Krylov Subspaces?
David S. Watkins
Everybody knows that Krylov subspaces are great. The most popular algorithms for solving
linear systems and eigenvalue problems for large, sparse matrices are Krylov subspace methods. I hope to convince you that they are even greater than you realized. As usual, I will focus
on eigenvalue problems. Most general-purpose eigensystem algorithms, including John Francis’s implicitly-shifted QR algorithm, are based on the power method and its multi-dimensional
generalizations. We will discuss the synergy between the power method and Krylov subspaces,
and we will show that by introducing Krylov subspaces into the discussion sooner than usual,
we are able to get a simple, satisfying explanation of Francis’s and related algorithms that does
not rely on making a connection with the “explicit” QR algorithm.
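As a concrete reminder of the object in question, a bare-bones Arnoldi sketch (illustrative data) that builds an orthonormal basis of the Krylov subspace span{b, Ab, ..., A^{k-1}b} together with the small Hessenberg matrix whose eigenvalues (Ritz values) approximate those of A:

    import numpy as np

    def arnoldi(A, b, k):
        n = len(b)
        Q = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(k):
            w = A @ Q[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
        return Q, H

    rng = np.random.default_rng(5)
    A = rng.standard_normal((100, 100))
    Q, H = arnoldi(A, rng.standard_normal(100), 20)
    print(np.sort_complex(np.linalg.eigvals(H[:20, :20]))[-3:])   # a few Ritz values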
49
Author Index
Adhikari, Bibhas, 1
Ahmad, Ayaz, 2
Ahmad, Sk Safique, 3
Ahuja, Kapil, 5
Alam, Rafikul, 6, 31
Mohanty, Ramanarayan, 14
Mohanty, S. K., 37
Bandopadhyay, Sriparna, 36
Barlow, Jesse, 46
Behera, Namita, 6
Belur, Madhu N., 7
Benner, Peter, 5, 10
Bora, Shreemayee, 27, 31, 42, 48
Breiten, Tobias, 10
Byers, Ralph, 31
Overton, Michael L., 31
Datta, Biswa Nath, 11
Dopico, Froilán M., 12
Faßbender, Heike, 13
Garvey, Seamus D, 20
Gupta, D. K., 44, 47
Kabi, Bibek, 14
Kalaimani, Rachel Kalpana, 17
Kalita, Karuna, 20
Karow, Michael, 22, 27, 29, 42
Kirkland, Stephen, 23
Kressner, Daniel, 29
Kunkel, Peter, 28
Limaye, Balmohan V., 24
Linh, Vu Hoang, 25
Mackey, D. Steven, 26, 32
Mehl, Christian, 27, 42
Mehrmann, Volker, 28
Mengi, Emre, 29
Mishra, Debasisha, 30
Naidu, R. Ramu, 39
Nakic, Ivica, 29
Perović, Vasilije, 32
Pillai, H. K., 34
Pradhan, Tapan, 14
Raha, Soumyendu, 35
Routray, Aurobinda, 14
Saha, Manideepa, 36
Sajith, G., 37
Sasmal, Aritra, 20
Sastry, C. S., 39
Scholz, Lena, 28
Sharma, Punit, 27, 42
Simoncini, Valeria, 43
Singh, Sukhjit, 44
Sivakumar, K. C., 45
Slapnicar, Ivan, 46
Srivastava, Ravi, 48
Srivastava, Shwetabh, 47
Stor, Nevena Jakovcevic, 46
Sturler, Eric de, 5
Truhar, Ninoslav, 29
Varma, J. Phanindra, 39
Watkins, David S., 49
50