Department of Applied Mathematics
National Sun Yat-sen University

Master Thesis

On the Convergence of the Radial Basis Collocation Schemes
for 1-D Eigenmodes of Elliptic Operators

Student: Yu-Tsun Chen
Advisor: Dr. Chieh-Sen Huang

July 2013
On the Convergence of the Radial Basis
Collocation Schemes for 1-D Eigenmodes
of Elliptic Operators
Yu-Tsun Chen
Department of Applied Mathematics
National Sun Yat-sen University
Kaohsiung, Taiwan 804
R.O.C.
July 29, 2013
Acknowledgments

Time flies; two years have passed in the blink of an eye. I am grateful for the opportunity to study at National Sun Yat-sen University, and I thank my teachers and classmates for their guidance and help over these two years, which allowed me to complete my studies smoothly.

First, I sincerely thank my advisor, Professor Chieh-Sen Huang. Whenever I was confused or ran into difficulties, whether in coursework or in this thesis, he always offered timely guidance and led me toward a way to solve the problem. Beyond academics, he also cared about my well-being, my prospects, and my career planning. From Professor Huang I learned not only the spirit of doing research but also an attitude: to face life, studies, and the future workplace actively, seriously, and responsibly, and to take responsibility for myself. I also thank Professors 呂宗澤, 李宗錂, and 黃宏財 for taking time out of their busy schedules to serve on my oral defense committee and help make this thesis more complete.

I also thank my good classmates 瀚文, 誌宏, and 遠菡, who helped me greatly with my coursework and discussed problems with me whenever I was stuck; occasionally we also went out for a meal or a movie, which relieved some of the academic pressure. Thanks as well to my good roommates 尚義 and 煥榮, who always lent a hand when I was unwell. And to the younger members of the team: although I had less time for the team during graduate school and my skills were not great, you were still willing to play ball with me whenever I most needed to relax and be encouraged.

Last and most importantly, I thank my family. Thank you for always supporting me; you knew this path might not be easy, yet you still stood by my decision and were my safe harbor whenever I felt frustrated. I want to share this joy and accomplishment with you. I will work even harder in the future. Thank you!

With my sincerest respect and gratitude.

Yu-Tsun Chen
July 2013
摘要 (Abstract in Chinese)

Chen et al. [3] proved that the approximate solution of the one-dimensional Poisson equation obtained by the method of fundamental solutions coupled with radial basis functions converges to the result computed with Lagrange interpolating polynomials, using the result of Driscoll and Fornberg [2]. A similar result can be obtained for the eigenmode problem of elliptic operators: the approximate solution obtained with radial basis functions converges to the one computed with the Lagrange interpolating polynomial as the basis. In this thesis, we compare the approximate solutions of the one-dimensional elliptic-operator eigenmode problem obtained with radial basis functions and with Lagrange interpolating polynomials, and conclude that the approximate solution obtained with radial basis functions converges to that of the Lagrange interpolating polynomial.

關鍵字 (Keywords): eigenmode problem, radial basis function, Lagrange interpolating polynomial, Poisson equation, elliptic operators.
Abstract

Chen et al. [3] showed that, for the 1-D Poisson equation, the approximate solution obtained by the method of fundamental solutions (MFS) coupled with radial basis functions (RBFs) converges to the Lagrange interpolating polynomial solution, using the result of Driscoll and Fornberg [2]. We show that a similar result can be obtained for the eigenmode problem of an elliptic operator; that is, the solution obtained by a radial basis collocation scheme converges to the solution whose underlying approximant is the Lagrange interpolating polynomial. In this thesis, we compare the approximate solutions of the 1-D elliptic eigenmode problem obtained with radial basis functions and with the Lagrange interpolating polynomial in a collocation scheme, and conclude that the solution obtained from the RBF collocation method converges to the solution using the Lagrange interpolating polynomial as approximant, in the sense of Driscoll and Fornberg [2].

Keywords: eigenmode problem, radial basis function, Lagrange interpolating polynomial, Poisson equation, elliptic operators.
Contents

1 Introduction
2 Radial Basis Function Collocation Method
  2.1 Radial Basis Functions
  2.2 Interpolation
  2.3 Solving One-Dimensional Eigenmode Problem of Poisson Equation Using Lagrange Interpolating Polynomial
  2.4 Solving One-Dimensional Eigenmode Problem of Poisson Equation Using RBF
3 Numerical Results
4 Conclusion
References
List of Tables

1 Some examples of radial basis functions.
2 Expansion coefficients for infinitely smooth RBFs.
3 Error of the first eigenvalue, |λ_1 − λ_1^N|, λ_1 = 1² × π², between RBF and Lagrange, as a function of ε and N.
4 First eigenvalue error between RBF and Lagrange, |λ_1^RBF − λ_1^Lagrange|.
List of Figures

1 The errors and the convergence rates of IMQ.
2 The errors and the convergence rates of Gaussian.
1 Introduction

The multiquadric (MQ) collocation method, pioneered by Kansa [4], is a highly efficient numerical method for solving partial differential equations (PDEs). Platte and Driscoll [5] also discussed using radial basis functions (RBFs) to compute eigenmodes of elliptic operators. They concluded from numerical results that the RBF method is superior to basic finite-element methods for the Laplacian operator in two dimensions.
To be precise, they studied the following eigenmode problem of elliptic operators. Given a linear elliptic second-order partial differential operator L and a bounded region Ω in R^n with boundary ∂Ω, they sought eigenpairs (λ, u) ∈ (C, C(Ω̄)) satisfying

\[ Lu + \lambda u = 0 \quad \text{in } \Omega, \tag{1.1} \]
\[ L_B u = 0 \quad \text{on } \partial\Omega, \tag{1.2} \]

where L_B is a linear boundary operator of the form

\[ L_B u = a u + b\,(\mathbf{n} \cdot \nabla u). \tag{1.3} \]

Here, a and b are given constants and n is the unit outward normal vector defined on the boundary. They used an interpolating RBF approximation of an eigenfunction of (1.1) and replaced the eigenvalue problem above with a finite-dimensional eigenvalue problem.
In the study of several types of RBF interpolation, such as MQ, Driscoll and Fornberg [2] proved that, in the 1-D case, the interpolant converges to the Lagrange interpolating polynomial as the basis functions become increasingly flat. For the 1-D Poisson equation, Chen et al. [3] proved that the approximation obtained by using increasingly flat RBFs in the method of fundamental solutions (MFS) reduces to a simple interpolation problem and that the solution converges in the sense of the Lagrange interpolating polynomial. It is the purpose of this thesis to show that similar results can be obtained for the eigenvalue problem (1.1); that is, the eigenvalues obtained by using the scheme in [5] with increasingly flat RBFs converge to those obtained using the Lagrange interpolating polynomial interpolant.

The thesis is organized as follows. The main numerical scheme is given in Section 2, numerical results are presented in Section 3, and we conclude in Section 4.
2 Radial Basis Function Collocation Method

2.1 Radial Basis Functions
A radial function is a multivariate function Φ : R^d → R in the sense that

\[ \Phi(x_1, \ldots, x_d) = \phi\big( \|(x_1, \ldots, x_d)\|_2 \big). \tag{2.1} \]

Here the Euclidean norm of x = (x_1, \ldots, x_d) is defined as

\[ \|x\| = \sqrt{\sum_{i=1}^{d} x_i^2} = r. \tag{2.2} \]
The RBF collocation method is an "element-free", or "meshless", technique for generating data-dependent spaces of multivariate functions. The spaces are spanned by shifted and scaled radial functions. The shifting is accomplished by using a set of scattered centers, y_1, \ldots, y_N in R^d, sometimes called basis points, as the origins of the RBFs. Functions are then reconstructed by trial functions that are linear combinations

\[ \tilde{u}(x) := \sum_{j=1}^{N} \alpha_j \Phi(x, y_j) = \sum_{j=1}^{N} \alpha_j \phi(\|x - y_j\|). \tag{2.3} \]
Some of the commonly used RBFs are given in Table 1. All of these RBFs can be scaled by a shape parameter c, or ε = 1/c, that controls the flatness (or steepness) of the RBF. This is done in such a way that φ(r) is replaced by φ(εr), or φ(r/c). For example, the MQ is scaled as φ = √((εr)² + 1), or φ = √((r/c)² + 1). The effect of the scaling is that as ε gets smaller, or c gets larger, the RBF becomes flatter. Further details on RBFs can be found in two excellent books, by Buhmann [1] and Wendland [7].

    Infinitely smooth RBFs
    RBF                          φ(r)
    Gaussian (GA)                e^{−r²}
    Multiquadric (MQ)            √(r² + 1)
    Inverse Multiquadric (IMQ)   1/√(r² + 1)
    Inverse Quadric (IQ)         1/(r² + 1)

Table 1: Some examples of radial basis functions.
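For concreteness, the scaled forms φ(εr) of the four RBFs in Table 1 can be written down directly. The following Python/NumPy sketch (ours, not part of the original thesis; the function names are arbitrary) evaluates each of them:

```python
import numpy as np

# Scaled versions phi(eps * r) of the RBFs in Table 1,
# with shape parameter eps = 1/c.
def gaussian(r, eps):              # GA:  exp(-(eps r)^2)
    return np.exp(-(eps * r) ** 2)

def multiquadric(r, eps):          # MQ:  sqrt((eps r)^2 + 1)
    return np.sqrt((eps * r) ** 2 + 1.0)

def inverse_multiquadric(r, eps):  # IMQ: 1 / sqrt((eps r)^2 + 1)
    return 1.0 / np.sqrt((eps * r) ** 2 + 1.0)

def inverse_quadric(r, eps):       # IQ:  1 / ((eps r)^2 + 1)
    return 1.0 / ((eps * r) ** 2 + 1.0)
```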
2.2 Interpolation
A typical interpolation problem has the following form: given scattered data points {y_j}_{j=1}^{N} and data values ũ_j = ũ(y_j), find an interpolant

\[ s(x) = \sum_{j=1}^{N} \alpha_j \phi(\|x - y_j\|). \tag{2.4} \]

The interpolation at the collocation points gives

\[ s(y_i) \equiv \sum_{j=1}^{N} \alpha_j \phi(\|y_i - y_j\|) = \tilde{u}_i, \qquad i = 1, \ldots, N. \tag{2.5} \]
This is summarized in a system of equations for the unknown coefficients α_j,

\[ A\,\Pi = \tilde{u}^I, \tag{2.6} \]

where A is an N × N matrix with elements A_{ij} = φ(‖y_i − y_j‖), Π = [α_1, α_2, \ldots, α_N]^T, and ũ^I = [ũ_1, ũ_2, \ldots, ũ_N]^T. The solvability of such a system, with distinct centers, was proven by Micchelli [6].
For a one-dimensional approximation, (2.4) becomes

\[ s(x, \epsilon) = \sum_{j=1}^{N} \alpha_j \phi(|x - y_j|, \epsilon), \tag{2.7} \]

where we have explicitly brought out the role of ε (= 1/c) in the approximation. Driscoll and Fornberg [2] pointed out that the RBFs presented in Table 1 belong to a class of infinitely smooth RBFs that can be expanded into a power series

\[ \phi(x, \epsilon) = a_0 + a_1 (\epsilon x)^2 + a_2 (\epsilon x)^4 + \cdots = \sum_{i=0}^{\infty} a_i (\epsilon x)^{2i}, \tag{2.8} \]
with the coefficients given in Table 2.

    RBF    Coefficients
    GA     a_i = (−1)^i / i!,                                            i = 0, 1, …
    MQ     a_0 = 1,  a_i = ((−1)^{i+1} / (2i)) ∏_{k=1}^{i−1} (2k−1)/(2k),  i = 1, 2, …
    IMQ    a_0 = 1,  a_i = (−1)^i ∏_{k=1}^{i} (2k−1)/(2k),                i = 1, 2, …
    IQ     a_i = (−1)^i,                                                 i = 0, 1, …

Table 2: Expansion coefficients for infinitely smooth RBFs.

They have also proven that, for this class, if the interpolation system defined by (2.5) and (2.6) is nonsingular, then the interpolant (2.7) satisfies

\[ s(x, \epsilon) = L_N(x) + O(\epsilon^2) \quad \text{as } \epsilon \to 0, \tag{2.9} \]

where L_N(x) is the Lagrange interpolating polynomial (of degree at most N − 1) with L_N(y_i) = u(y_i) on the nodes. The special cases of N ≤ 3 were explicitly demonstrated in [2].
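As an illustration of (2.9), the following sketch (ours; the nodes, data values, and test point are arbitrary choices) builds the 1-D IMQ interpolant by solving (2.6) and compares it with the Lagrange interpolating polynomial at a test point for decreasing ε. Only moderate values of ε are used, since for very small ε the system (2.6) becomes badly conditioned in floating-point arithmetic.

```python
import numpy as np

def imq(r, eps):
    # Inverse multiquadric from Table 1, scaled by the shape parameter eps
    return 1.0 / np.sqrt((eps * r) ** 2 + 1.0)

def rbf_interpolant(x, nodes, values, eps):
    # Solve (2.6), A alpha = u with A_ij = phi(|y_i - y_j|, eps),
    # then evaluate s(x, eps) = sum_j alpha_j phi(|x - y_j|, eps) as in (2.7).
    A = imq(np.abs(nodes[:, None] - nodes[None, :]), eps)
    alpha = np.linalg.solve(A, values)
    return imq(np.abs(x - nodes), eps) @ alpha

def lagrange_interpolant(x, nodes, values):
    # Direct evaluation of the Lagrange interpolating polynomial L_N(x)
    total = 0.0
    for k, yk in enumerate(nodes):
        Lk = np.prod([(x - ym) / (yk - ym) for m, ym in enumerate(nodes) if m != k])
        total += values[k] * Lk
    return total

nodes = np.array([0.1, 0.35, 0.6, 0.9])   # arbitrary scattered nodes
values = np.sin(np.pi * nodes)            # arbitrary data values
x = 0.47                                  # test point
for eps in [0.4, 0.2, 0.1, 0.05]:
    gap = abs(rbf_interpolant(x, nodes, values, eps) - lagrange_interpolant(x, nodes, values))
    print(eps, gap)   # the gap shrinks roughly by a factor of 4 when eps is halved
```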
We review the algorithm for solving the eigenmode problem (1.1) provided in [5]. The idea is to approximate the operator L by a matrix L_φ that also incorporates the boundary conditions. Let N = N_I + N_B, where N_I nodes are in the interior of the domain and N_B nodes are on the boundary. We denote the interpolation points by {y_i}_{i=1}^{N_I} ⊂ Ω and {y_i}_{i=N_I+1}^{N} ⊂ ∂Ω. We evaluate the RBF approximation (2.3) at the interior interpolation points and enforce the boundary condition to obtain

\[ \sum_{i=1}^{N} \alpha_i \,\phi(\|y_j - y_i\|) = \tilde{u}(y_j), \qquad j = 1, 2, \ldots, N_I, \tag{2.10} \]

\[ \sum_{i=1}^{N} \alpha_i \, L_B \phi(\|y_j - y_i\|) = 0, \qquad j = N_I + 1, N_I + 2, \ldots, N, \tag{2.11} \]
where L_B is applied to φ(x) and then evaluated. Similar to (2.6), this can be written in matrix form as

\[ \Pi = A^{-1} \begin{bmatrix} \tilde{u}^I \\ 0_{N_B \times 1} \end{bmatrix}, \tag{2.12} \]

where ũ^I ≡ [ũ(y_1), ũ(y_2), \ldots, ũ(y_{N_I})]^T, 0_{N_B×1} is the N_B × 1 zero vector,

\[ A \equiv \begin{bmatrix} A^I \\ B \end{bmatrix}, \qquad A^I \equiv \big[ \phi(\|y_i - y_j\|) \big]_{N_I \times N}, \qquad B \equiv \big[ L_B \phi(\|y_{N_I+i} - y_j\|) \big]_{N_B \times N}. \]
For interior points, by (1.1), we also have

\[ -\sum_{i=1}^{N} \alpha_i \, L\phi(\|y_j - y_i\|) = \lambda \tilde{u}(y_j), \qquad j = 1, 2, \ldots, N_I, \tag{2.13} \]

which can be written as

\[ L^I \Pi = \lambda \tilde{u}^I, \tag{2.14} \]

where L^I ≡ −[Lφ(‖y_j − y_i‖)]_{N_I × N}.
From (2.12) and (2.14), we have

\[ L_\phi \tilde{u}^I = \lambda \tilde{u}^I, \tag{2.15} \]

where L_φ is the N_I × N_I matrix given by

\[ L_\phi \equiv L^I A^{-1} \begin{bmatrix} I_{N_I \times N_I} \\ 0_{N_B \times N_I} \end{bmatrix}. \tag{2.16} \]

Here I_{N_I×N_I} is the identity matrix of order N_I. The RBF approximation of the eigenpair problem (1.1) is now the eigenvalue and eigenvector problem of the matrix L_φ.
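To make the construction concrete, here is a minimal Python/NumPy sketch (ours, not part of the original thesis) of the scheme (2.10)–(2.16) for the 1-D problem u'' + λu = 0 on [0, 1] with homogeneous Dirichlet boundary conditions (a = 1, b = 0 in (1.3)), using the Gaussian RBF. The node layout and the value of ε are our own choices; ε is kept moderate so that the linear algebra remains well conditioned in double precision.

```python
import numpy as np

eps = 5.0                        # shape parameter (moderate, to keep A well conditioned)

def phi(r):                      # Gaussian RBF from Table 1, scaled by eps
    return np.exp(-(eps * r) ** 2)

def lap_phi(r):                  # d^2/dx^2 of phi(eps|x - y|), as a function of r = |x - y|
    return (4.0 * eps**4 * r**2 - 2.0 * eps**2) * np.exp(-(eps * r) ** 2)

# Nodes on Omega = [0, 1]: N_I interior points followed by N_B = 2 boundary points.
NI, NB = 8, 2
nodes = np.concatenate([np.linspace(0.0, 1.0, NI + 2)[1:-1], [0.0, 1.0]])
R = np.abs(nodes[:, None] - nodes[None, :])     # pairwise distances |y_i - y_j|

# A = [A^I; B] from (2.12); with Dirichlet conditions L_B phi = phi,
# so the boundary rows B have the same form as the interior rows A^I.
A = phi(R)
LI = -lap_phi(R[:NI, :])                        # L^I from (2.14), with L the Laplacian

# L_phi = L^I A^{-1} [I; 0] as in (2.16)
E = np.vstack([np.eye(NI), np.zeros((NB, NI))])
Lphi = LI @ np.linalg.solve(A, E)

lam = np.sort(np.linalg.eigvals(Lphi).real)
print(lam[:3])                                  # smallest computed eigenvalues
print([(n * np.pi) ** 2 for n in (1, 2, 3)])    # exact values (n*pi)^2 for comparison
```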
2.3 Solving One-Dimensional Eigenmode Problem of Poisson Equation Using Lagrange Interpolating Polynomial
Without loss of generality, we only consider the case of Ω = [0, 1]. We choose interior points {y_j}_{j=1}^{N−2} and two boundary points {y_{N−1}, y_N} = {0, 1}, and {ũ(y_j)}_{j=1}^{N} are given values with u(y_{N−1}) = u(y_N) = 0. Then the Lagrange interpolating polynomial is defined as

\[ p(x) = \tilde{u}(y_1) L_1(x) + \cdots + \tilde{u}(y_{N-2}) L_{N-2}(x) = \sum_{k=1}^{N-2} \tilde{u}(y_k) L_k(x), \tag{2.17} \]

where

\[ L_k(x) = \frac{x(x-1)(x-y_1)(x-y_2)\cdots(x-y_{k-1})(x-y_{k+1})\cdots(x-y_{N-2})}{y_k(y_k-1)(y_k-y_1)(y_k-y_2)\cdots(y_k-y_{k-1})(y_k-y_{k+1})\cdots(y_k-y_{N-2})}. \]
Substituting {y_j}_{j=1}^{N−2} into (2.17) yields

\[ L \tilde{u}^I = \tilde{u}^I, \tag{2.18} \]

where ũ^I ≡ [ũ(y_1), ũ(y_2), \ldots, ũ(y_{N−2})]^T, and

\[ L \equiv \begin{bmatrix} L_1(y_1) & L_2(y_1) & \cdots & L_{N-2}(y_1) \\ L_1(y_2) & L_2(y_2) & \cdots & L_{N-2}(y_2) \\ \vdots & \vdots & & \vdots \\ L_1(y_{N-2}) & L_2(y_{N-2}) & \cdots & L_{N-2}(y_{N-2}) \end{bmatrix}. \tag{2.19} \]
Now we plug (2.17) into (1.1) and evaluate at {y_j}_{j=1}^{N−2}, which gives

\[ -L\!\left( \sum_{k=1}^{N-2} \tilde{u}(y_k) L_k(y_j) \right) = \lambda \tilde{u}_j. \tag{2.20} \]

We can write (2.20) in matrix form as

\[ L_\phi \tilde{u}^I = \lambda \tilde{u}^I, \tag{2.21} \]

where

\[ L_\phi \equiv - \begin{bmatrix} \Delta L_1(y_1) & \Delta L_2(y_1) & \cdots & \Delta L_{N-2}(y_1) \\ \Delta L_1(y_2) & \Delta L_2(y_2) & \cdots & \Delta L_{N-2}(y_2) \\ \vdots & \vdots & & \vdots \\ \Delta L_1(y_{N-2}) & \Delta L_2(y_{N-2}) & \cdots & \Delta L_{N-2}(y_{N-2}) \end{bmatrix}; \tag{2.22} \]

note that L is taken to be the Laplacian operator ∆ here.
For a simple case where N = 4, we obtain

\[ L_\phi = - \begin{bmatrix} \dfrac{2(3y_1 - y_2 - 1)}{y_1(y_1-1)(y_1-y_2)} & \dfrac{2(2y_1 - 1)}{y_2(y_2-1)(y_2-y_1)} \\[2ex] \dfrac{2(2y_2 - 1)}{y_1(y_1-1)(y_1-y_2)} & \dfrac{2(3y_2 - y_1 - 1)}{y_2(y_2-1)(y_2-y_1)} \end{bmatrix}. \tag{2.23} \]
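As a quick sanity check of (2.23), the sketch below (ours) evaluates L_φ for the equispaced choice y_1 = 1/3, y_2 = 2/3 (the node placement is an assumption made only for this illustration). It gives L_φ = [[18, −9], [−9, 18]], whose smallest eigenvalue is 9, so |π² − 9| ≈ 8.696 × 10⁻¹, which matches the N = 4 Lagrange value in Table 3, suggesting equispaced nodes were used there.

```python
import numpy as np

# Check of (2.23) with equispaced interior nodes (an assumption for illustration).
y1, y2 = 1.0 / 3.0, 2.0 / 3.0

Lphi = -np.array([
    [2*(3*y1 - y2 - 1) / (y1*(y1 - 1)*(y1 - y2)),
     2*(2*y1 - 1)      / (y2*(y2 - 1)*(y2 - y1))],
    [2*(2*y2 - 1)      / (y1*(y1 - 1)*(y1 - y2)),
     2*(3*y2 - y1 - 1) / (y2*(y2 - 1)*(y2 - y1))],
])

lam = np.sort(np.linalg.eigvals(Lphi).real)
print(Lphi)                              # [[18, -9], [-9, 18]]
print(lam[0], abs(np.pi**2 - lam[0]))    # 9.0 and ~0.8696
```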
2.4 Solving One-Dimensional Eigenmode Problem of Poisson Equation Using RBF
Similar to Section 2.2, we would like to expand our RBF as a power series of the reciprocal of the shape parameter, as in (2.7). Without loss of generality, we only consider the case of Ω = [0, 1]. We choose interior points {y_j}_{j=1}^{N−2} and two boundary points {y_{N−1}, y_N} = {0, 1}. Then the RBF reconstruction (2.3) can be written as

\[ \tilde{u}(x) = x(x-1) \sum_{j=1}^{N-2} \alpha_j \,\phi(\|x - y_j\|, \epsilon). \]

Then (2.6) can be written as

\[ \bar{A}(\epsilon)\,\Pi = \tilde{u}^I, \]

and (2.13) can be written as

\[ -\sum_{i=1}^{N-2} \alpha_i \, L\big( y_j(y_j - 1)\,\phi(\|y_j - y_i\|, \epsilon) \big) = \lambda \tilde{u}(y_j), \qquad j = 1, 2, \ldots, N-2. \]

Therefore L_φ in (2.15) is written as

\[ \bar{L}_\phi(\epsilon) \equiv \bar{L}^I(\epsilon)\,\bar{A}^{-1}(\epsilon), \]

where

\[ \bar{L}^I(\epsilon) \equiv -\big[ L\big( y_j(y_j - 1)\,\phi(\|y_j - y_i\|, \epsilon) \big) \big]_{(N-2) \times (N-2)}, \]

and

\[ \bar{A}(\epsilon) \equiv \big[ y_j(y_j - 1)\,\phi(\|y_j - y_i\|, \epsilon) \big]_{(N-2) \times (N-2)}. \tag{2.24} \]
For a simple case where N = 4, we obtain

\[ \bar{L}_\phi(\epsilon) = - \begin{bmatrix} \dfrac{2(3y_1 - y_2 - 1)}{y_1(y_1-1)(y_1-y_2)} + O(\epsilon^2) & \dfrac{2(2y_1 - 1)}{y_2(y_2-1)(y_2-y_1)} + O(\epsilon^2) \\[2ex] \dfrac{2(2y_2 - 1)}{y_1(y_1-1)(y_1-y_2)} + O(\epsilon^2) & \dfrac{2(3y_2 - y_1 - 1)}{y_2(y_2-1)(y_2-y_1)} + O(\epsilon^2) \end{bmatrix}. \tag{2.25} \]

Note that the L̄_φ(ε) above converges, as ε → 0, to the L_φ of Section 2.3.
Proposition 2.1. Comparing (2.25) with (2.23), every entry of L̄_φ(ε) converges to the corresponding entry of L_φ as ε tends to zero, and the convergence rate is O(ε²); that is,

\[ \lim_{\epsilon \to 0} \bar{L}_\phi(\epsilon) = L_\phi. \]
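A direct numerical check of Proposition 2.1 is easy to sketch. The code below (ours) assembles Ā(ε) and L̄^I(ε) for N = 4 with the IMQ and equispaced interior nodes (again an assumption for illustration only), forms L̄_φ(ε) = L̄^I(ε)Ā⁻¹(ε), and prints its maximum entrywise distance from the Lagrange matrix L_φ of (2.23); by the proposition, this distance should decrease roughly like ε².

```python
import numpy as np

def imq_derivs(s, eps):
    # IMQ phi(eps|x - y|) and its first and second x-derivatives, with s = x - y
    q = 1.0 + (eps * s) ** 2
    return q ** -0.5, -eps**2 * s * q ** -1.5, eps**2 * (2.0 * eps**2 * s**2 - 1.0) * q ** -2.5

y = np.array([1.0 / 3.0, 2.0 / 3.0])            # interior nodes y1, y2 (assumed equispaced)
Lphi = np.array([[18.0, -9.0], [-9.0, 18.0]])   # the matrix (2.23) for these nodes

S = y[:, None] - y[None, :]                     # s_{ji} = y_j - y_i
Yj = y[:, None]                                 # y_j, broadcast over the column index i
for eps in [1e-1, 1e-2, 1e-3]:
    phi, dphi, ddphi = imq_derivs(S, eps)
    Abar = Yj * (Yj - 1.0) * phi                # Abar(eps)_{ji} = y_j (y_j - 1) phi(|y_j - y_i|, eps)
    # Laplacian of the trial function x(x-1) phi(|x - y_i|, eps), evaluated at x = y_j
    LIbar = -(2.0 * phi + 2.0 * (2.0 * Yj - 1.0) * dphi + Yj * (Yj - 1.0) * ddphi)
    Lphibar = LIbar @ np.linalg.inv(Abar)       # Lbar_phi(eps) = LbarI(eps) Abar(eps)^{-1}
    print(eps, np.max(np.abs(Lphibar - Lphi)))  # decreases roughly like eps^2
```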
3 Numerical Results

We now give some numerical test results for

\[ Lu + \lambda u = 0 \quad \text{in } \Omega, \qquad L_B u = 0 \quad \text{on } \partial\Omega, \]

with ũ(x) = x(x−1) Σ_{j=1}^{N−2} α_j φ(‖x − y_j‖, ε). To verify our proposition, we use three types of radial basis functions, MQ, IMQ, and Gaussian, and compare the results with those of the Lagrange interpolating polynomial.
               N = 4         N = 6         N = 10        N = 20
    MQ
    ε = 10^-1  8.7460(−01)   5.0154(−02)   1.2613(−05)   2.7809(−18)
    ε = 10^-2  8.6965(−01)   5.1756(−02)   1.9606(−05)   1.3324(−16)
    ε = 10^-3  8.6960(−01)   5.1772(−02)   1.9683(−05)   1.3982(−16)
    ε = 10^-4  8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3988(−16)
    Lagrange   8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3989(−16)
    IMQ
    ε = 10^-1  8.6462(−01)   4.8167(−02)   1.0896(−05)   2.7503(−18)
    ε = 10^-2  8.6955(−01)   5.1736(−02)   1.9584(−05)   1.3259(−16)
    ε = 10^-3  8.6960(−01)   5.1771(−02)   1.9683(−05)   1.3981(−16)
    ε = 10^-4  8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3988(−16)
    Lagrange   8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3989(−16)
    GA
    ε = 10^-1  8.5962(−01)   5.0163(−02)   1.7929(−05)   9.3669(−17)
    ε = 10^-2  8.6950(−01)   5.1756(−02)   1.9666(−05)   1.3935(−16)
    ε = 10^-3  8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3988(−16)
    ε = 10^-4  8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3989(−16)
    Lagrange   8.6960(−01)   5.1772(−02)   1.9684(−05)   1.3989(−16)

Table 3: Error of the first eigenvalue, |λ_1 − λ_1^N|, λ_1 = 1² × π², between RBF and Lagrange, as a function of ε and N.
From Table 3, we can see that as ε becomes smaller, the error of the first eigenvalue for all three types of RBFs converges to that of the Lagrange results. We conclude that the eigenvalues obtained by using the scheme with increasingly flat RBFs converge to those obtained using the Lagrange interpolating polynomial interpolant.
               N = 4         N = 6         N = 10        N = 20
    MQ
    ε = 10^-1  4.9931(−03)   1.6182(−03)   7.0712(−06)   1.3711(−16)
    ε = 10^-2  4.9999(−05)   1.6210(−05)   7.7483(−08)   6.6408(−18)
    ε = 10^-3  5.0000(−07)   1.6210(−07)   7.7553(−10)   6.7578(−20)
    ε = 10^-4  5.0000(−09)   1.6210(−09)   7.7554(−12)   6.7589(−22)
    ε = 10^-5  5.0000(−11)   1.6210(−11)   7.7554(−14)   6.7589(−24)
    IMQ
    ε = 10^-1  4.9848(−03)   3.6052(−03)   8.7883(−06)   1.3714(−16)
    ε = 10^-2  4.9999(−05)   3.6468(−05)   1.0031(−07)   7.2926(−18)
    ε = 10^-3  5.0000(−07)   3.6472(−07)   1.0044(−09)   7.4359(−20)
    ε = 10^-4  5.0000(−09)   3.6472(−09)   1.0044(−11)   7.4373(−22)
    ε = 10^-5  5.0000(−11)   3.6472(−11)   1.0044(−13)   7.4373(−24)
    GA
    ε = 10^-1  9.9833(−03)   1.6087(−03)   1.7552(−06)   4.6217(−17)
    ε = 10^-2  9.9998(−05)   1.6209(−05)   1.8069(−08)   5.3876(−19)
    ε = 10^-3  1.0000(−06)   1.6210(−07)   1.8074(−10)   5.3959(−21)
    ε = 10^-4  1.0000(−08)   1.6210(−09)   1.8074(−12)   5.3960(−23)
    ε = 10^-5  1.0000(−10)   1.6210(−11)   1.8074(−14)   5.3960(−25)

Table 4: First eigenvalue error between RBF and Lagrange, |λ_1^RBF − λ_1^Lagrange|.
According to Table 4, the errors of the first eigenvalue for MQ and IMQ are very close, so we take only the IMQ and Gaussian numerical results for the data fitting, which yields Figure 1 and Figure 2.
[Figure: log(error) versus log(ε) for N = 4, 6, 10, 20.]
Figure 1: The errors and the convergence rates of IMQ.
[Figure: log(error) versus log(ε) for N = 4, 6, 10, 20.]
Figure 2: The errors and the convergence rates of Gaussian.
From Figure 1 and Figure 2, we note that there is a linear relationship between log(ε) and log(error), and the slope is two. Therefore, we conclude that the convergence rate of the first eigenvalue between the RBF and Lagrange interpolation methods is O(ε²), as stated in the proposition in Section 2.4.
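The slope-two behaviour can also be checked directly from the tabulated data. For instance, a least-squares fit of log₁₀(error) against log₁₀(ε) for the IMQ, N = 4 column of Table 4 (the sketch below is ours, in Python/NumPy) returns a slope of approximately 2.

```python
import numpy as np

eps = np.array([1e-1, 1e-2, 1e-3, 1e-4, 1e-5])
# IMQ, N = 4 column of Table 4: |lambda_1^RBF - lambda_1^Lagrange|
err = np.array([4.9848e-3, 4.9999e-5, 5.0000e-7, 5.0000e-9, 5.0000e-11])

slope = np.polyfit(np.log10(eps), np.log10(err), 1)[0]
print(slope)   # approximately 2, i.e. the observed convergence rate is O(eps^2)
```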
4 Conclusion

In this thesis, we discuss the convergence of radial basis collocation schemes for 1-D eigenmodes of elliptic operators. In Sections 2.1 and 2.2 we explain how to convert the radial basis function approximation of the eigenpair problem into the eigenvalue and eigenvector problem of the matrix L_φ.

In Sections 2.3 and 2.4, we describe the procedures for constructing the matrix L_φ with the Lagrange interpolating polynomial and L̄_φ(ε) with radial basis functions, and note that L̄_φ(ε) converges to L_φ as ε → 0; we summarize this in a proposition in Section 2.4.

In Section 3, we give numerical results to verify this inference. According to those results, we conclude that the eigenvalues obtained by using the scheme with increasingly flat RBFs converge to those obtained using the Lagrange interpolating polynomial interpolant, and the convergence rate is O(ε²).
References

[1] M. D. Buhmann. Radial Basis Functions: Theory and Implementations. Cambridge University Press, 2003.

[2] T. A. Driscoll and B. Fornberg. Interpolation in the limit of increasingly flat radial basis functions. Computers & Mathematics with Applications, 43:413–422, 2002.

[3] C. S. Chen, C.-S. Huang, and K. H. Lin. On the shape parameter of the MFS-MPS scheme. International Journal of Computational Methods, 10(2):1341006, 2013.

[4] E. J. Kansa. Multiquadrics—A scattered data approximation scheme with applications to computational fluid-dynamics, I. Computers & Mathematics with Applications, 19:127–145, 1990.

[5] R. B. Platte and T. A. Driscoll. Computing eigenmodes of elliptic operators using radial basis functions. Computers & Mathematics with Applications, 48:561–576, 2004.

[6] C. A. Micchelli. Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22, 1986.

[7] H. Wendland. Scattered Data Approximation. Cambridge University Press, 2005.