Matrix Approach: the Relationship among Lagrange, Monomial
and Newton Interpolations
Yi Shao
Abstract
Lagrange, Newton, and monomial interpolation are three well-known polynomial approximation schemes for functions. This research studies the relationship among Lagrange interpolation, Newton interpolation, and monomial interpolation; that is, it derives closed-form matrix formulas connecting any two of the interpolations.
1. Introduction
Lagrange interpolation, Newton interpolation, and monomial interpolation have attracted interest over the past few decades. In mathematics, interpolation is a method of constructing new data points within the range of a discrete set of known data points. In other words, given a set of (n+1) data points (x_0, y_0), (x_1, y_1), …, (x_n, y_n), we can find a polynomial P_n(x) of degree at most n that passes through every data point. The Newton basis, the Lagrange basis, and the monomial basis are the ones most commonly used to interpolate.
Given a set of (n+1) data points (x_0, y_0), (x_1, y_1), …, (x_n, y_n), where no two x_k are the same (an assumption in force throughout this paper), the interpolation polynomial in the monomial form is a linear combination of monomial basis polynomials

M(x) := Σ_{k=0}^{n} a_k m_k(x),

where the monomial basis polynomials are defined as

m_k(x) := x^k.

Thus the monomial interpolation polynomial can be written as

M(x) = a_0 + a_1 x + ⋯ + a_n x^n.
Given the same set of (n+1) data points (x_0, y_0), (x_1, y_1), …, (x_n, y_n), the interpolation polynomial in the Newton form is a linear combination of Newton basis polynomials

N(x) := Σ_{k=0}^{n} b_k n_k(x),

where the Newton basis polynomials are defined as

n_k(x) := ∏_{i=0}^{k−1} (x − x_i)

for k > 0, with n_0(x) = 1. Thus the Newton polynomial can be written as

N(x) = b_0 + b_1(x − x_0) + ⋯ + b_n(x − x_0)(x − x_1) ⋯ (x − x_{n−1}).
For the same set of (n+1) data points (x_0, y_0), (x_1, y_1), …, (x_k, y_k), …, (x_n, y_n), the interpolation polynomial in the Lagrange form is a linear combination

L̄(x) := Σ_{k=0}^{n} c̄_k l̄_k(x)

of Lagrange basis polynomials

l̄_k(x) := ∏_{0 ≤ m ≤ n, m ≠ k} (x − x_m)/(x_k − x_m)
        = (x − x_0)/(x_k − x_0) ⋯ (x − x_{k−1})/(x_k − x_{k−1}) · (x − x_{k+1})/(x_k − x_{k+1}) ⋯ (x − x_n)/(x_k − x_n),

where 0 ≤ k ≤ n.
For convenience, we define

L(x) := Σ_{k=0}^{n} c_k l_k(x),

where we absorb the denominator of each Lagrange basis polynomial, which is just a number, into the coefficient to avoid complexity in calculation. Thus we have

l_k(x) := ∏_{0 ≤ m ≤ n, m ≠ k} (x − x_m),

where 0 ≤ k ≤ n. Thus the Lagrange polynomial can be written as

L(x) = c_0(x − x_1)(x − x_2) ⋯ (x − x_n) + c_1(x − x_0)(x − x_2) ⋯ (x − x_n) + ⋯ + c_n(x − x_0)(x − x_1) ⋯ (x − x_{n−1}).

The relationship between c̄_k and c_k is

c̄_k = ( ∏_{0 ≤ m ≤ n, m ≠ k} 1/(x_k − x_m) ) · c_k.
2. Preliminary work
To simplify the exposition, we call the monomial basis Basis A: {1, x, x^2, …, x^n}, the Newton basis Basis B: {1, (x − x_0), (x − x_0)(x − x_1), …, (x − x_0)(x − x_1) ⋯ (x − x_{n−1})}, and the Lagrange basis Basis C: {(x − x_1)(x − x_2) ⋯ (x − x_n), (x − x_0)(x − x_2) ⋯ (x − x_n), …, (x − x_0)(x − x_1) ⋯ (x − x_{n−1})}. We will use these three bases throughout our research.
Monomial interpolation is the most familiar, but its coefficients are hard to compute. With Newton interpolation or Lagrange interpolation, the calculation is much easier. To illustrate this, consider the case n = 2. Given three data points (x_0, y_0), (x_1, y_1), (x_2, y_2), we have the following second-degree polynomials in the three bases.
For Basis A:

P_2(x) = a_0 + a_1 x + a_2 x^2.

We need to find the coefficients a_0, a_1, and a_2. Plugging in the three data points, we get the following system of three equations:

a_0 + a_1 x_0 + a_2 x_0^2 = y_0
a_0 + a_1 x_1 + a_2 x_1^2 = y_1
a_0 + a_1 x_2 + a_2 x_2^2 = y_2

We have to solve this system to get the coefficients, which amounts to inverting a Vandermonde matrix. That is difficult.
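As a quick numerical illustration (ours, not part of the paper's derivation), the Basis A system can be handed to a general linear solver; the nodes and values below are made up:

```python
import numpy as np

# Three hypothetical nodes and values (they happen to lie on 2 + x^2).
xs = np.array([0.0, 1.0, 3.0])
ys = np.array([2.0, 3.0, 11.0])

# Vandermonde matrix V[i, j] = xs[i]**j, so V @ [a0, a1, a2] = ys.
V = np.vander(xs, increasing=True)

# Solving the linear system yields the monomial coefficients a0, a1, a2.
a = np.linalg.solve(V, ys)
print(a)  # → [2. 0. 1.], i.e. P_2(x) = 2 + x^2
```

For larger n the Vandermonde matrix becomes badly conditioned, which is one practical reason the monomial coefficients are hard to compute this way.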
Now let us consider the same situation using Basis B:

P_2(x) = b_0 + b_1(x − x_0) + b_2(x − x_0)(x − x_1).

To solve this equation:
let x = x_0; then b_0 = y_0;
let x = x_1; then b_0 + b_1(x_1 − x_0) = y_1, so b_1 = (y_1 − y_0)/(x_1 − x_0);
let x = x_2; then b_0 + b_1(x_2 − x_0) + b_2(x_2 − x_0)(x_2 − x_1) = y_2, so

b_2 = (y_2 − y_0)/((x_2 − x_0)(x_2 − x_1)) − (y_1 − y_0)/((x_1 − x_0)(x_2 − x_1)).

As n gets bigger, the equations remain laborious to solve, but this is somewhat easier than the previous method.
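The triangular substitution above is exactly the divided-difference scheme; a minimal sketch (our own illustration, not code from the paper):

```python
def newton_coefficients(xs, ys):
    """Divided-difference table: returns b_0, ..., b_n for the Newton form."""
    b = list(ys)
    n = len(xs)
    # Each pass replaces b[i] with the next-higher-order divided difference,
    # working bottom-up so lower-order values are still available.
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (xs[i] - xs[i - level])
    return b

# Nodes and values chosen so the interpolant is 2 + x^2.
print(newton_coefficients([0.0, 1.0, 3.0], [2.0, 3.0, 11.0]))  # → [2.0, 1.0, 1.0]
```

The result encodes N(x) = 2 + 1·(x − 0) + 1·(x − 0)(x − 1), which expands back to 2 + x^2.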
For Basis C:

P_2(x) = c_0(x − x_1)(x − x_2) + c_1(x − x_0)(x − x_2) + c_2(x − x_0)(x − x_1).

To solve for the coefficients:
let x = x_0; then c_0 = y_0/((x_0 − x_1)(x_0 − x_2));
let x = x_1; then c_1 = y_1/((x_1 − x_0)(x_1 − x_2));
let x = x_2; then c_2 = y_2/((x_2 − x_0)(x_2 − x_1)).

Even as n gets bigger, it stays very easy to calculate the coefficients: each substitution produces one equation in one unknown, so we can easily find all c_k.
That is why we study the relationship among these three bases: if we can find the transition matrices among them, we can easily obtain the coefficients in one basis given those in another.
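The one-equation-per-node computation can be sketched as follows (again our own illustration):

```python
def lagrange_coefficients(xs, ys):
    """c_k = y_k / prod_{m != k} (x_k - x_m), the scaled Lagrange coefficients."""
    cs = []
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        denom = 1.0
        for m, xm in enumerate(xs):
            if m != k:
                denom *= xk - xm
        cs.append(yk / denom)
    return cs

# Same hypothetical data as before; each c_k comes from a single division.
print(lagrange_coefficients([0.0, 1.0, 3.0], [2.0, 3.0, 11.0]))
```

Each coefficient costs only n subtractions, n − 1 multiplications, and one division, independent of the others.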
In 2005, Shengliang Yang published a paper in Discrete Applied Mathematics. In [1], he found the transition matrix between the monomial basis and the Newton basis using the following two types of elementary polynomials.
Let n be a positive integer. For integers 1 ≤ r ≤ n + 1, the r-th elementary symmetric function on the variable set {x_0, x_1, …, x_n} is defined by

e_r(x_0, x_1, …, x_n) = Σ_{0 ≤ k_1 < k_2 < ⋯ < k_r ≤ n} x_{k_1} x_{k_2} ⋯ x_{k_r},

and the r-th complete symmetric function on the variable set {x_0, x_1, …, x_n} is defined by

h_r(x_0, x_1, …, x_n) = Σ_{0 ≤ k_1 ≤ k_2 ≤ ⋯ ≤ k_r ≤ n} x_{k_1} x_{k_2} ⋯ x_{k_r},

where e_0(x_0, x_1, …, x_n) = 1 and h_0(x_0, x_1, …, x_n) = 1.
Let R_n[x] be the vector space of polynomials in x over the real number field R of degree at most n, and let x_0, x_1, …, x_n be arbitrary real numbers. Then the sets B_1 = {1, x, x^2, …, x^n} and B_2 = {[x]_0, [x]_1, …, [x]_n} are both bases for R_n[x], where [x]_r = (x − x_0)(x − x_1) ⋯ (x − x_{r−1}) for 1 ≤ r ≤ n, and [x]_0 = 1.
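For concreteness, the two symmetric functions can be computed directly from their definitions; a brute-force sketch (ours, not from [1]):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(r, xs):
    """r-th elementary symmetric function: strictly increasing index tuples."""
    return sum(prod(t) for t in combinations(xs, r))

def h(r, xs):
    """r-th complete symmetric function: weakly increasing index tuples."""
    return sum(prod(t) for t in combinations_with_replacement(xs, r))

# e_2(1,2,3) = 1·2 + 1·3 + 2·3 = 11; h_2 also allows squares: 11 + 1 + 4 + 9 = 25.
print(e(2, [1, 2, 3]), h(2, [1, 2, 3]))  # → 11 25
```

The empty product convention (r = 0 yields 1) falls out automatically, matching e_0 = h_0 = 1 above.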
He found the following theorem. For m = 0, 1, …, n,

[x]_m = Σ_{k=0}^{m} (−1)^{m−k} e_{m−k}(x_0, x_1, …, x_{m−1}) x^k,

x^m = Σ_{k=0}^{m} h_{m−k}(x_0, x_1, …, x_{m−1}) [x]_k.
The (n+1) × (n+1) matrices Q_n = Q_n[x_0, x_1, …, x_{n−1}] and L_n = L_n[x_0, x_1, …, x_{n−1}] are defined as

Q_n(i, j) = { 1                                          if i = j,
            { (−1)^{i−j} e_{i−j}(x_0, x_1, …, x_{i−1})   if i > j ≥ 0,
            { 0                                          otherwise,

L_n(i, j) = { 1                              if i = j,
            { h_{i−j}(x_0, x_1, …, x_j)      if i > j ≥ 0,
            { 0                              otherwise.

The transition matrix from the basis B_1 = {1, x, x^2, …, x^n} of R_n[x] to the basis B_2 = {[x]_0, [x]_1, …, [x]_n} of R_n[x] is the matrix Q_n. The transition matrix from the basis B_2 back to the basis B_1 is the matrix L_n.
Since Shengliang Yang has already found the relationship between monomial interpolation and Newton interpolation, the focus of our research is to complete the relationships among these three bases.
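As a sanity check (our own, not from [1]), the matrices Q_n and L_n built from these formulas should be inverses of each other; the node values below are made up:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement
from math import prod

def e(r, xs):  # r-th elementary symmetric function
    return sum(prod(t) for t in combinations(xs, r))

def h(r, xs):  # r-th complete symmetric function
    return sum(prod(t) for t in combinations_with_replacement(xs, r))

nodes = [2.0, -1.0, 3.0]             # hypothetical x_0, x_1, x_2, so n = 3
N = len(nodes) + 1                   # the matrices are (n+1) x (n+1)

Q = np.eye(N)
L = np.eye(N)
for i in range(N):
    for j in range(i):
        Q[i, j] = (-1) ** (i - j) * e(i - j, nodes[:i])
        L[i, j] = h(i - j, nodes[:j + 1])

# The two transition matrices should invert each other.
print(np.allclose(Q @ L, np.eye(N)))  # → True
```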
3. Relationship between Newton and Lagrange interpolations
It is not difficult to find the transition matrix between Basis B and Basis C. Using the definition of a transition matrix, we find the following.
The transition matrix from Basis B to Basis C is the (n+1) × (n+1) matrix P_{C←B} whose entries are

P_{C←B}(i, j) = 1 / ∏_{j ≤ m ≤ n, m ≠ i} (x_i − x_m)   if i ≥ j,   and 0 otherwise.

For example, its first column is

( 1/((x_0 − x_1)(x_0 − x_2) ⋯ (x_0 − x_n)),  1/((x_1 − x_0)(x_1 − x_2) ⋯ (x_1 − x_n)),  …,  1/((x_n − x_0)(x_n − x_1) ⋯ (x_n − x_{n−1})) )^T,

its diagonal entries are 1/((x_i − x_{i+1})(x_i − x_{i+2}) ⋯ (x_i − x_n)), and its last diagonal entry is 1.
The inverse matrix of P_{C←B} is the transition matrix P_{B←C}, whose entries are

P_{B←C}(i, j) = ∏_{m=i+1}^{n} (x_j − x_m)   if i ≥ j,   and 0 otherwise.

For example, its first column is

( (x_0 − x_1)(x_0 − x_2) ⋯ (x_0 − x_n),  (x_0 − x_2)(x_0 − x_3) ⋯ (x_0 − x_n),  …,  (x_0 − x_{n−1})(x_0 − x_n),  (x_0 − x_n),  1 )^T.

Both of these matrices are lower triangular.
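These entry formulas can be checked numerically; a short sketch with made-up nodes (our own illustration):

```python
import numpy as np
from math import prod

xs = [0.0, 1.0, 3.0, -2.0]           # hypothetical nodes x_0..x_n, n = 3
N = len(xs)

# P_{C<-B}(i, j) = 1 / prod_{m=j..n, m != i} (x_i - x_m) for i >= j.
PCB = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        PCB[i, j] = 1.0 / prod(xs[i] - xs[m] for m in range(j, N) if m != i)

# P_{B<-C}(i, j) = prod_{m=i+1}^{n} (x_j - x_m) for i >= j.
PBC = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        PBC[i, j] = prod(xs[j] - xs[m] for m in range(i + 1, N))

# The two lower triangular matrices should be mutual inverses.
print(np.allclose(PCB @ PBC, np.eye(N)))  # → True
```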
4. Relationship between Lagrange interpolation and Monomial interpolation
We then use the definition of a transition matrix and find that the transition matrix from Basis A to Basis C is the (n+1) × (n+1) matrix P_{C←A} with entries

P_{C←A}(i, j) = x_i^j / ∏_{0 ≤ m ≤ n, m ≠ i} (x_i − x_m),   0 ≤ i, j ≤ n,

so that its i-th row is

( 1,  x_i,  x_i^2,  …,  x_i^{n−1},  x_i^n ) / ((x_i − x_0) ⋯ (x_i − x_{i−1})(x_i − x_{i+1}) ⋯ (x_i − x_n)).

But it is very hard to find the inverse of P_{C←A} directly, so we try to factorize this (n+1) × (n+1) matrix to make the inverse simple to find.
Ignoring the denominators of all entries of this matrix, we get the following matrix:

( 1   x_0   x_0^2   ⋯   x_0^n )
( 1   x_1   x_1^2   ⋯   x_1^n )
( ⋮    ⋮     ⋮            ⋮   )
( 1   x_n   x_n^2   ⋯   x_n^n )

This matrix has a very distinctive pattern: it is the transpose of the Vandermonde matrix.
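To see P_{C←A} in action, here is our own numerical check, reusing the hypothetical nodes from the n = 2 example: applying it to the monomial coefficients of 2 + x^2 should yield the scaled Lagrange coefficients c_k = y_k / ∏_{m ≠ k}(x_k − x_m).

```python
import numpy as np
from math import prod

xs = [0.0, 1.0, 3.0]                 # the nodes from the n = 2 example
N = len(xs)

# P_{C<-A}(i, j) = x_i**j / prod_{m != i} (x_i - x_m).
PCA = np.array([[xs[i] ** j / prod(xs[i] - xs[m] for m in range(N) if m != i)
                 for j in range(N)] for i in range(N)])

a = np.array([2.0, 0.0, 1.0])        # monomial coefficients of 2 + x^2
c = PCA @ a                          # scaled Lagrange coefficients c_0, c_1, c_2
print(c)
```

The output matches c_0 = 2/3, c_1 = −3/2, c_2 = 11/6 obtained by direct substitution in Section 2.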
In 2011, F. Soto-Eguibar and H. Moya-Cessa published a paper in Applied Mathematics & Information Sciences on the inverse of the Vandermonde matrix. In [2], they found recursive formulas for the entries of the inverse of a Vandermonde matrix.
An N × N matrix of the form V_{i,j} = λ_j^{i−1}, i = 1, 2, 3, …, N; j = 1, 2, 3, …, N, is said to be a Vandermonde matrix.
They write V^{−1} = (U′)^{−1}(L′)^{−1}. Denoting (U′)^{−1} as U, they found that U is an upper triangular matrix whose elements are

U_{i,j} = { 0                                     if i > j,
          { ∏_{k=1, k≠i}^{j} 1/(λ_i − λ_k)        otherwise.

The matrix U can be decomposed as the product of a diagonal matrix D and another upper triangular matrix W, and they found

D_{i,j} = { ∏_{k=1, k≠i}^{N} 1/(λ_i − λ_k)   if i = j,
          { 0                                if i ≠ j,

and

W_{i,j} = { ∏_{k=j+1, k≠i}^{N} (λ_i − λ_k)   if i ≤ j,
          { 0                                if i > j.

The matrix L = (L′)^{−1} is a lower triangular matrix, whose elements are

L_{i,j} = { 0                                  if i < j,
          { 1                                  if i = j,
          { L_{i−1,j−1} − λ_{i−1} L_{i−1,j}    for i = 2, 3, …, N; j = 2, 3, …, i − 1.

The inverse of the Vandermonde matrix can then be written as V^{−1} = DWL, which is an LU decomposition.
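The relation U = DW can be checked directly; a sketch with made-up λ values, with the paper's 1-based indices shifted to Python's 0-based indexing (our own illustration):

```python
import numpy as np
from math import prod

lam = [2.0, -1.0, 0.5, 3.0]          # hypothetical distinct lambda_1..lambda_N
N = len(lam)

U = np.zeros((N, N))
D = np.zeros((N, N))
W = np.zeros((N, N))
for i in range(N):
    # D is diagonal: product over all k != i.
    D[i, i] = prod(1.0 / (lam[i] - lam[k]) for k in range(N) if k != i)
    for j in range(i, N):
        # U_{ij} = prod_{k<=j, k != i} 1/(lam_i - lam_k)
        U[i, j] = prod(1.0 / (lam[i] - lam[k]) for k in range(j + 1) if k != i)
        # W_{ij} = prod_{k>j, k != i} (lam_i - lam_k)
        W[i, j] = prod(lam[i] - lam[k] for k in range(j + 1, N) if k != i)

print(np.allclose(U, D @ W))  # → True
```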
In our research, we found two other explicit ways to factorize our matrix P_{C←A}. Before we show the factorizations, we introduce the 1-banded matrix and the Pascal matrix.
The (n+1) × (n+1) Pascal matrix, denoted by P_n[x], is defined as

(P_n[x])_{i,j} = { C(j, i) x^{j−i}   if j ≥ i,
                 { 0                 otherwise,

for i, j = 0, 1, …, n, where C(j, i) denotes the binomial coefficient. The reduced generalized Pascal matrix, denoted by P̃_k[x], was defined by Cheon and Kim in [3] as the block matrix

P̃_k[x] = ( I_{n−k}   0      )
          ( 0         P_k[x] ).
Now we use the reduced Pascal matrix factorization technique to factorize P_{C←A}:

P_{C←A} = P_{C←B} · P̃_1[x_{n−1} − x_{n−2}] P̃_2[x_{n−2} − x_{n−3}] P̃_3[x_{n−3} − x_{n−4}] ⋯ P̃_{n−1}[x_1 − x_0] P̃_n[x_0],

where P_{C←B} is the lower triangular matrix of Section 3, with entries P_{C←B}(i, j) = 1/∏_{j ≤ m ≤ n, m ≠ i}(x_i − x_m) for i ≥ j and 0 otherwise.
Now we have our matrix factorized, and a factorized matrix is easy to invert: each reduced Pascal factor satisfies P̃_k[x]^{−1} = P̃_k[−x]. The inverse matrix of P_{C←A} is

P_{A←C} = P̃_n[−x_0] P̃_{n−1}[x_0 − x_1] ⋯ P̃_3[x_{n−4} − x_{n−3}] P̃_2[x_{n−3} − x_{n−2}] P̃_1[x_{n−2} − x_{n−1}] · P_{B←C},

where P_{B←C} is the lower triangular matrix of Section 3, with entries P_{B←C}(i, j) = ∏_{m=i+1}^{n}(x_j − x_m) for i ≥ j and 0 otherwise.
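A numerical check of the reduced Pascal factorization (our own sketch, with hypothetical nodes):

```python
import numpy as np
from math import comb, prod

def pascal(m, x):
    """(m+1) x (m+1) Pascal matrix: entry (i, j) = C(j, i) * x**(j - i) for j >= i."""
    P = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        for j in range(i, m + 1):
            P[i, j] = comb(j, i) * x ** (j - i)
    return P

def reduced_pascal(n, k, x):
    """(n+1) x (n+1) block matrix diag(I_{n-k}, P_k[x])."""
    M = np.eye(n + 1)
    M[n - k:, n - k:] = pascal(k, x)
    return M

xs = [0.0, 1.0, 3.0, -2.0]           # hypothetical nodes, n = 3
n = len(xs) - 1

# P_{C<-A}(i, j) = x_i**j / prod_{m != i}(x_i - x_m).
PCA = np.array([[xs[i] ** j / prod(xs[i] - xs[m] for m in range(n + 1) if m != i)
                 for j in range(n + 1)] for i in range(n + 1)])

# P_{C<-B}(i, j) = 1 / prod_{m=j..n, m != i}(x_i - x_m) for i >= j.
PCB = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(i + 1):
        PCB[i, j] = 1.0 / prod(xs[i] - xs[m] for m in range(j, n + 1) if m != i)

# Reduced Pascal factors: k = 1..n-1 uses x_{n-k} - x_{n-k-1}; k = n uses x_0.
F = np.eye(n + 1)
for k in range(1, n):
    F = F @ reduced_pascal(n, k, xs[n - k] - xs[n - k - 1])
F = F @ reduced_pascal(n, n, xs[0])

print(np.allclose(PCA, PCB @ F))  # → True
```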
Another way to factorize the matrix P_{C←A} is to use the 1-banded matrix technique; the diagonal entries of every 1-banded factor are all 1. For k = 1, 2, …, n, let D_k be the (n+1) × (n+1) 1-banded (upper bidiagonal) matrix with unit diagonal whose only other nonzero entries are

(D_k)_{i,i+1} = −x_{k−1}   for i = k − 1, k, …, n − 1.

The inverse of each D_k is upper triangular, equal to the identity in its first k − 1 rows and columns and filled with powers of x_{k−1} afterwards:

(D_k^{−1})_{i,j} = x_{k−1}^{j−i}   for j ≥ i ≥ k − 1.

With these factors we obtain

P_{C←A} = P_{C←B} · D_n^{−1} D_{n−1}^{−1} ⋯ D_2^{−1} D_1^{−1},

where P_{C←B} is again the lower triangular matrix of Section 3. For example, the last factor D_1^{−1} has (i, j) entry x_0^{j−i} for j ≥ i, so its first row is (1, x_0, x_0^2, …, x_0^{n−1}, x_0^n), while D_2^{−1} keeps its first row from the identity and carries the powers (1, x_1, x_1^2, …) in the rows below.
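A numerical check of the 1-banded factorization (our own sketch, with hypothetical nodes):

```python
import numpy as np
from math import prod

xs = [0.0, 1.0, 3.0, -2.0]           # hypothetical nodes, n = 3
n = len(xs) - 1

def D(k):
    """1-banded factor: unit diagonal, (i, i+1) entry -x_{k-1} for i >= k-1."""
    M = np.eye(n + 1)
    for i in range(k - 1, n):
        M[i, i + 1] = -xs[k - 1]
    return M

# P_{C<-A} and P_{C<-B} as before.
PCA = np.array([[xs[i] ** j / prod(xs[i] - xs[m] for m in range(n + 1) if m != i)
                 for j in range(n + 1)] for i in range(n + 1)])
PCB = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    for j in range(i + 1):
        PCB[i, j] = 1.0 / prod(xs[i] - xs[m] for m in range(j, n + 1) if m != i)

# P_{C<-A} = P_{C<-B} * D_n^{-1} * ... * D_1^{-1}
F = np.eye(n + 1)
for k in range(n, 0, -1):
    F = F @ np.linalg.inv(D(k))

print(np.allclose(PCA, PCB @ F))  # → True
```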
Thus the inverse matrix is

P_{A←C} = D_1 D_2 ⋯ D_{n−1} D_n · P_{B←C},

where each D_k is the 1-banded matrix defined above, with unit diagonal and superdiagonal entries (D_k)_{i,i+1} = −x_{k−1} for i ≥ k − 1, and P_{B←C} is the lower triangular matrix of Section 3 with entries P_{B←C}(i, j) = ∏_{m=i+1}^{n}(x_j − x_m) for i ≥ j and 0 otherwise.
Hence we have found two explicit ways to represent the inverse matrix of P_{C←A}. We believe that our results have a more transparent and structured form than the one obtained in [2].
Future work
We will complete the proofs of our results.
Acknowledgements
This work is supported by the University of St. Thomas’ Center for Applied Mathematics. The
author is grateful to the referee.
References
[1] Sheng-liang Yang, Notes on the LU factorization of Vandermonde matrix, Discrete Applied
Mathematics 146 (2005) 102-105.
[2] F. Soto-Eguibar, H. Moya-Cessa, Inverse of the Vandermonde and Vandermonde confluent
matrices, Applied Mathematics & Information Sciences, 5(3) (2011) 361-366.
[3] G.-S. Cheon, J.-S. Kim, Stirling matrix via Pascal matrix, Linear Algebra Appl. 329 (2001) 49-59.