Dyadic Hermite interpolation on a rectangular mesh

Serge Dubuc¹, Jean-Louis Merrien²

¹ Département de mathématiques et de statistique, Université de Montréal, C.P. 6128 Succursale Centre-ville, Montréal (Québec), Canada H3C 3J7, email: [email protected]
² INSA de Rennes, 20 av. des Buttes de Coësmes, CS 14315, 35043 Rennes Cedex, France, email: [email protected]

Abstract: Given f and ∇f at the vertices of a rectangular mesh, we build an interpolating function f by a subdivision algorithm. The construction on each elementary rectangle is independent of any disjoint rectangle. From the Hermite data associated with the vertices of a rectangle R, the function f is defined on a dense subset of R. Sufficient conditions are found in order to extend f to a C¹ function. Moreover, infinite products and generalized radii of matrices are used to study the convergence to a C¹ function. This convergence depends on the five parameters introduced in the algorithm.
AMS subject classification: 41A05, 65D05
Keywords: Interpolation, Subdivision, Rectangular Mesh, Generalized Radii of Matrices.
1 Introduction
A classical method for constructing curves and surfaces in CAGD consists in binary subdivision algorithms. They are efficient tools which can be adapted to the computer.
Dubuc [5], Dyn et al. [6], then Deslauriers et al. [3, 4] have studied these methods to build interpolating curves and surfaces from Lagrange data, while Merrien [10, 11] introduced the case of Hermite interpolation. See also Dyn and Levin [7] for an analysis of general one-dimensional schemes.
In this paper, given f and ∇f at the vertices of a rectangular mesh, we define an algorithm HR1 building an interpolating C¹ function. The algorithm is local and the construction on a rectangle is independent of its neighbours. In order to get C¹ continuity across an edge, the construction depends only on the values of f and ∇f at the endpoints and on the length of this edge.
In Section 2, we describe the algorithm HR1 on a single rectangle R. We build f on a dense subset of R and give its first properties. As an example, we show that the Sibson-Thomson element [12] can be obtained by HR1. Section 3 is devoted to the proof of sufficient conditions for extending continuously to R a function built on a dense subset. In Sections 4 and 5, we present the matrix tools, especially generalized spectral radii, which are used to give a necessary and sufficient condition for convergence. Then in Section 6, we give examples depending on the parameters used in the algorithm. Finally, in Section 7, we produce a few illustrations.
2 Description and first properties of the algorithm HR1
Our purpose is to define a bivariate function f on a given rectangle R = I × J. We expect that this function will have continuous partial derivatives p = fx, q = fy. At the beginning of the construction, the only data that are known about these three functions f, p and q are their values at the vertices of R.
Before describing the surface z = f(x, y), we recall the univariate version of this construction, given by Merrien [10].
Suppose that we know the values of a function f and of its first derivative p = f′ at the endpoints of an interval I of ℝ. We proceed by induction on n. At step n (n ≥ 0), Pn is the regular partition of I into 2^n subintervals of equal length h = |I|/2^n. If a and b are two consecutive points of Pn, then we compute f and p at the midpoint a′ = (a + b)/2 according to the following scheme, which depends on two parameters α and β:

   f(a′) = [f(a) + f(b)]/2 + α h [p(b) − p(a)],
   p(a′) = (1 − β) [f(b) − f(a)]/h + β [p(a) + p(b)]/2.        (1)
By applying these formulae on ever finer partitions, f and p are defined on a dense set. Moreover, there are many values of (α, β) for which f and p are uniformly continuous on I, and when this occurs, p = f′. Merrien drew attention to two important choices of (α, β), which are the content of the next remarks.
Remark 1: If α = −1/8, β = −1/2, then f is the Hermite cubic interpolant.
Remark 2: If α = −1/8, β = −1, then f is the Hermite quadratic spline interpolant with one knot at the midpoint of I.
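For concreteness, here is a minimal sketch of one level of this univariate scheme on a uniform grid; the function name and the array-based representation are ours, and alpha, beta are the parameters of formula (1).

import numpy as np

def refine_1d(f, p, h, alpha, beta):
    """One step of the univariate Hermite scheme (1).

    f, p : values of f and p = f' on a uniform grid of spacing h.
    Returns the refined values on the grid of spacing h/2.
    """
    n = len(f)
    f2 = np.empty(2 * n - 1)
    p2 = np.empty(2 * n - 1)
    f2[::2], p2[::2] = f, p                      # old points are kept
    # midpoint rules of formula (1)
    f2[1::2] = 0.5 * (f[:-1] + f[1:]) + alpha * h * (p[1:] - p[:-1])
    p2[1::2] = (1 - beta) * (f[1:] - f[:-1]) / h + beta * 0.5 * (p[:-1] + p[1:])
    return f2, p2

# Example: data of g(x) = x**3 on [0, 1]; with alpha = -1/8, beta = -1/2
# the scheme produces the cubic Hermite interpolant (Remark 1), so x**3 is reproduced.
f, p, h = np.array([0.0, 1.0]), np.array([0.0, 3.0]), 1.0
for _ in range(5):
    f, p = refine_1d(f, p, h, alpha=-1/8, beta=-1/2)
    h /= 2
x = np.linspace(0, 1, len(f))
print(np.max(np.abs(f - x**3)))   # close to 0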
Let us come back to the rectangle and describe the algorithm HR1. The values of f, p, q at the vertices of R are specified and the construction depends on five parameters α, β, γ, δ, η.
For n = 0, 1, 2, ..., let us denote by Pn the regular partition of I into 2^n subintervals and by Qn the similar regular partition of J into 2^n subintervals. We proceed by induction on n and we assume that f, p, q are already known on the mesh Pn × Qn. Starting from these values, we define these functions on P_{n+1} × Q_{n+1}. Let h = |I|/2^n and k = |J|/2^n, and let (a, c) ∈ Pn × Qn not be on the north or east side of the initial rectangle. Define b = a + h, d = c + k. Then (a, c), (b, c), (b, d) and (a, d) are in Pn × Qn. Let a′ = (a + b)/2 and c′ = (c + d)/2. We have to define f, p and q at (a′, c), (a′, d), (a, c′), (b, c′) and (a′, c′) (see Figure 1).
Fig 1: Recursive computation of f. Old points: (a, c), (b, c), (b, d), (a, d); new points: (a′, c), (a′, d), (a, c′), (b, c′), (a′, c′).
At (a′, c) ∈ (P_{n+1} \ Pn) × Qn, and similarly at (a′, d):

   f(a′, c) = [f(a, c) + f(b, c)]/2 + α h [p(b, c) − p(a, c)],
   p(a′, c) = (1 − β) [f(b, c) − f(a, c)]/h + β [p(a, c) + p(b, c)]/2,        (2)
   q(a′, c) = [q(a, c) + q(b, c)]/2.
At (a, c′) ∈ Pn × (Q_{n+1} \ Qn), and similarly at (b, c′):

   f(a, c′) = [f(a, c) + f(a, d)]/2 + α k [q(a, d) − q(a, c)],
   p(a, c′) = [p(a, c) + p(a, d)]/2,        (3)
   q(a, c′) = (1 − β) [f(a, d) − f(a, c)]/k + β [q(a, c) + q(a, d)]/2.
At (a′, c′) ∈ (P_{n+1} \ Pn) × (Q_{n+1} \ Qn):

   f(a′, c′) = [f(a, c) + f(a, d) + f(b, c) + f(b, d)]/4
               + γ h [p(b, c) − p(a, c) + p(b, d) − p(a, d)]/2
               + γ k [q(a, d) − q(a, c) + q(b, d) − q(b, c)]/2,

   p(a′, c′) = (1 − δ) [f(b, c) − f(a, c) + f(b, d) − f(a, d)]/(2h)
               + δ [p(a, c) + p(a, d) + p(b, c) + p(b, d)]/4        (4)
               + η k [q(a, c) − q(a, d) + q(b, d) − q(b, c)]/h,

   q(a′, c′) = (1 − δ) [f(a, d) − f(a, c) + f(b, d) − f(b, c)]/(2k)
               + δ [q(a, c) + q(a, d) + q(b, c) + q(b, d)]/4
               + η h [p(a, c) − p(a, d) + p(b, d) − p(b, c)]/k.
Remark 3: If we used a tensor product construction to define f, p, q at (a′, c′), then fxy and fyx would appear as h tends to 0. We could suppose these derivatives to be zero at the vertices of the initial rectangle, but this is not compatible with our initial data and with the idea of getting a C¹ interpolant.
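As an illustration of how one step of the algorithm operates in practice, here is a minimal array-based sketch; the function name, the grid layout and the parameter names α, β, γ, δ, η are our choices, and the update rules follow formulae (2)-(4) as stated above.

import numpy as np

def hr1_step(F, P, Q, h, k, a, b, g, d, e):
    """One refinement step of HR1 (formulae (2)-(4)).

    F, P, Q : values of f, p = fx, q = fy on the current grid (same shape),
    h, k    : current mesh sizes in x and y,
    a, b    : parameters of the edge rules (alpha, beta),
    g, d, e : parameters of the centre rule (gamma, delta, eta).
    """
    m, n = F.shape                       # F[i, j] = f(x_i, y_j)
    F2 = np.empty((2*m - 1, 2*n - 1)); P2 = np.empty_like(F2); Q2 = np.empty_like(F2)
    F2[::2, ::2], P2[::2, ::2], Q2[::2, ::2] = F, P, Q          # old points are kept

    # horizontal edge midpoints (a', c): formula (2)
    F2[1::2, ::2] = 0.5*(F[:-1, :] + F[1:, :]) + a*h*(P[1:, :] - P[:-1, :])
    P2[1::2, ::2] = (1 - b)*(F[1:, :] - F[:-1, :])/h + b*0.5*(P[:-1, :] + P[1:, :])
    Q2[1::2, ::2] = 0.5*(Q[:-1, :] + Q[1:, :])

    # vertical edge midpoints (a, c'): formula (3)
    F2[::2, 1::2] = 0.5*(F[:, :-1] + F[:, 1:]) + a*k*(Q[:, 1:] - Q[:, :-1])
    P2[::2, 1::2] = 0.5*(P[:, :-1] + P[:, 1:])
    Q2[::2, 1::2] = (1 - b)*(F[:, 1:] - F[:, :-1])/k + b*0.5*(Q[:, :-1] + Q[:, 1:])

    # cell centres (a', c'): formula (4)
    Fs = F[:-1, :-1] + F[:-1, 1:] + F[1:, :-1] + F[1:, 1:]
    F2[1::2, 1::2] = (Fs/4
        + g*h*(P[1:, :-1] - P[:-1, :-1] + P[1:, 1:] - P[:-1, 1:])/2
        + g*k*(Q[:-1, 1:] - Q[:-1, :-1] + Q[1:, 1:] - Q[1:, :-1])/2)
    P2[1::2, 1::2] = ((1 - d)*(F[1:, :-1] - F[:-1, :-1] + F[1:, 1:] - F[:-1, 1:])/(2*h)
        + d*(P[:-1, :-1] + P[:-1, 1:] + P[1:, :-1] + P[1:, 1:])/4
        + e*k*(Q[:-1, :-1] - Q[:-1, 1:] + Q[1:, 1:] - Q[1:, :-1])/h)
    Q2[1::2, 1::2] = ((1 - d)*(F[:-1, 1:] - F[:-1, :-1] + F[1:, 1:] - F[1:, :-1])/(2*k)
        + d*(Q[:-1, :-1] + Q[:-1, 1:] + Q[1:, :-1] + Q[1:, 1:])/4
        + e*h*(P[:-1, :-1] - P[:-1, 1:] + P[1:, 1:] - P[1:, :-1])/k)
    return F2, P2, Q2

# Quick check: a bilinear function (here xy) is reproduced for any parameters (Proposition 4).
x = y = np.array([0.0, 1.0])
F = np.outer(x, y); P = np.outer(np.ones(2), y); Q = np.outer(x, np.ones(2))
F, P, Q = hr1_step(F, P, Q, 1.0, 1.0, a=-1/8, b=-1, g=-1/8, d=-1, e=-1/4)
x2 = np.linspace(0, 1, 3)
print(np.max(np.abs(F - np.outer(x2, x2))))   # ~0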
2.1 An example of Hermite dyadic interpolation
We recall the definition of the quadratic finite element of Sibson-Thomson [12] and we show that it can be constructed by Hermite dyadic interpolation. Let R be a rectangle whose vertices are A, B, C, D. We split R into 4 subrectangles arranged in a St-George pattern, by the two segments

   [(A + B)/2, (C + D)/2]   and   [(A + D)/2, (B + C)/2];

afterwards we split each subrectangle into a St-Andrew pattern. At the end of the subdivision, R is split into 16 triangular panels (see Figure 2). Sibson and Thomson have shown that for Hermite data (f, fx, fy) at the vertices of R, there exists a continuously differentiable function f defined on R, quadratic on each triangular panel, which interpolates the data. There is only one function which fulfils these conditions, provided it is also assumed that fy is linear on horizontal edges and fx is linear on vertical edges.
Fig 2: Sibson-Thomson subdivision of a rectangle (vertices A bottom-left, B bottom-right, C top-right, D top-left).
Proposition 1 The Sibson-Thomson solution to the Hermite problem on a rectangle R coincides with the solution given by the Hermite dyadic interpolation algorithm HR1, with parameters α = γ = −1/8, β = δ = −1, η = −1/4.

Proof: We assume that R = [u, u′] × [v, v′], and we set h = u′ − u, k = v′ − v. If f is the function defined by Sibson-Thomson, we set p = fx, q = fy. We need to prove that formulae (2)-(4) are satisfied for n = 0, 1, ...

Case n = 0.
Formula (2): On each edge of R, f is a quadratic spline with one knot at the midpoint of the edge. If A = (u, v), B = (u′, v), then f restricted to the segment [A, B] is a quadratic spline with one knot at the midpoint E = (A + B)/2. This fact and Remark 2 of Section 2 show that

   f(E) = [f(A) + f(B)]/2 − (h/8) [p(B) − p(A)],
   p(E) = 2 [f(B) − f(A)]/h − [p(A) + p(B)]/2.          (5)

By definition of the Sibson-Thomson element, the value of q at E is specified:

   q(E) = [q(A) + q(B)]/2.          (6)

Since the same argument can be carried out for the segment whose endpoints are (u, v′) and (u′, v′), Formula (2) has been proved for n = 0 with α = −1/8 and β = −1.

Formula (3): On each vertical edge of R, the same arguments can be carried out to prove this formula.

Formula (4): On the vertical segment [E, F] (where F = (C + D)/2), f is a quadratic spline with one knot at the midpoint G of the segment. So

   f(G) = [f(E) + f(F)]/2 − (k/8) [q(F) − q(E)],
   q(G) = 2 [f(F) − f(E)]/k − [q(E) + q(F)]/2.          (7)

But the values f(E), f(F), q(E), q(F) are already known (Formulae (5)-(6)). After expansion, it is found that

   f(G) = [f(A) + f(B) + f(C) + f(D)]/4
          − (h/16) [p(B) − p(A) + p(C) − p(D)]
          − (k/16) [q(C) + q(D) − q(A) − q(B)],
                                                        (8)
   q(G) = [f(C) + f(D) − f(A) − f(B)]/k
          − (h/(4k)) [p(A) − p(B) + p(C) − p(D)]
          − [q(A) + q(B) + q(C) + q(D)]/4.

The first and last parts of Formula (4) are thus proved, with γ = −1/8, δ = −1 and η = −1/4. One proceeds similarly on the horizontal segment [(A + D)/2, (B + C)/2] for the evaluation of p(G). The proof of Formulae (2)-(4) is complete for n = 0.

Case n > 0.
Let us consider an elementary rectangle Rn coming from the mesh of order n > 0. We assume that the vertices of Rn are An = (un, vn), Bn = (u′n, vn), Cn = (u′n, v′n), Dn = (un, v′n). The two diagonals of Rn split Rn into four disjoint triangles. The restriction of f to each of these triangles is a quadratic polynomial; the restrictions of p and of q to the same triangles are linear. This is a consequence of the geometry of the Sibson-Thomson subdivision of R. The following properties hold:
- on any side of Rn, f is a quadratic function, p and q are linear;
- the restriction of f to the vertical segment [(An + Bn)/2, (Cn + Dn)/2] is a quadratic spline with a unique node at the midpoint of the segment;
- the restriction of f to the horizontal segment [(An + Dn)/2, (Bn + Cn)/2] is a quadratic spline with a unique node at the midpoint of the segment.
From these properties, it follows that Formulae (2)-(4) are true for any n as for n = 0. ∎
2.2 Properties of the interpolating process
One of the main properties of the interpolating process is its self-similarity.

Proposition 2 Let R be a rectangle of the plane, and let R~ be one of the four smaller rectangles obtained after the bisection of the sides of R. Let us assume that {f, p, q} are the three functions that are produced by HR1 over the rectangle R. Then the three functions {f~, p~, q~} that are produced by HR1 over the rectangle R~ from the Hermite data f~ = f, p~ = p, q~ = q at each vertex of R~ are simply the restrictions of {f, p, q} to R~.

Another property of the dyadic Hermite interpolation scheme refers to its behaviour with respect to a change of scale.

Proposition 3 Let R be a rectangle, and let us assume that {f, p, q} is a triple of functions obtained by HR1 on R, with parameters α, β, γ, δ, η. Let us consider a change of coordinates (x, y) ↦ T(x, y) = (mx + b, ny + c) with m ≠ 0, n ≠ 0. If {f~, p~, q~} is the triple of functions obtained by HR1 on R~ = T(R), with the same parameters α, β, γ, δ, η, from the Hermite data f~ ∘ T = f, p~ ∘ T = p/m, q~ ∘ T = q/n at each initial vertex of R~, then f = f~ ∘ T, p = m p~ ∘ T, q = n q~ ∘ T on R.

Definition 1. A function g defined on R is said to be reproduced by HR1 on R if the function f coming from the scheme with Hermite data (f, fx, fy) = (g, gx, gy) at each vertex of R coincides with g.

The next question is how to choose the parameters α, β, γ, δ, η in order to reproduce some specific polynomials.

Proposition 4 Regardless of the values of α, β, γ, δ, η, any function of the form a + bx + cy + dxy is reproduced by HR1.

The main argument of the proof is that the sum of the weights in each formula (2)-(4) is equal to 1.

Proposition 5
- x² and y² are reproduced if and only if α = γ = −1/8;
- x², x³, y², y³ are reproduced if and only if α = γ = −1/8, β = δ = −1/2;
- x², x²y, xy², y² are reproduced if and only if α = γ = −1/8, η = −1/8.

Proof: We consider 6 distinct cases.
First case: x² is reproduced by HR1. According to the last proposition and by linearity, the polynomial P(x, y) = (x − a)(b − x) is also reproduced. So if f = P, p = Px, q = Py, then the first identity in Formula (2) with n = 0 is true and this implies that α = −1/8. If also the first identity in Formula (4) with n = 0 is used, then γ = −1/8.
Second case: x² and x³ are reproduced by HR1. The polynomial P(x, y) = (2x − a − b)³ is also reproduced. So if f = P, p = Px, q = Py, then the second identity in Formula (2) with n = 0 is true and this implies that β = −1/2. If also the second identity in Formula (4) with n = 0 is used, then δ = −1/2.
Third case: x² and x²y are reproduced by HR1. The polynomial P(x, y) = (x − a)(b − x)(2y − c − d) is also reproduced. So if f = P, p = Px, q = Py, then the last identity in Formula (4) with n = 0 is true and this implies that η = −1/8.
Fourth case: α = γ = −1/8. If f(x, y) = x², p(x, y) = fx(x, y) = 2x, q = fy(x, y) = 0, then Formulae (2)-(4) can be proved by induction on n. So x² is reproduced by HR1. Of course by symmetry, y² is also reproduced.
Fifth case: α = γ = −1/8, β = δ = −1/2. If f(x, y) = x³, p(x, y) = fx(x, y) = 3x², q(x, y) = fy(x, y) = 0, then Formulae (2)-(4) can be proved by induction on n. So x³, as well as x², is reproduced by HR1; by symmetry, y² and y³ are also reproduced.
Sixth case: α = γ = −1/8, η = −1/8. If f(x, y) = x²y, p(x, y) = fx(x, y) = 2xy, q = fy(x, y) = x², then again an induction on n is used for showing that x²y is reproduced. ∎
3 Sufficient conditions for continuous extension
From now on, we assume that R = [0, 1]². It follows that for n = 0, 1, 2, ..., Pn is the regular partition of I = [0, 1] into 2^n subintervals.

Definition 2. Let Rn = Pn × Pn, and let R∞ = ∪_{n≥0} Rn be the set of dyadic points of R. If φ is a function which is defined on R∞, we denote by Δn(φ) the largest increase of φ between two neighbours of the mesh Rn:

   Δn(φ) = max{ |φ(A) − φ(B)| : A, B ∈ Rn, ‖A − B‖ = 1/2^n }.

Theorem 6 If φ is defined on R∞ and if Σ_{n≥1} Δn(φ) < +∞, then φ has a continuous extension to R.
Proof: For φ defined on R∞, we introduce the sequence of functions φn defined on R as follows: φn is the unique function which, on each elementary subsquare of the mesh Rn, is of the form a + bx + cy + dxy and interpolates the values of φ at the vertices of the subsquare.
We will show that the following inequality holds:

   ‖φ_{n+1} − φn‖∞ ≤ 2 Δ_{n+1}(φ).

Each function δn = φ_{n+1} − φn is piecewise linear on any horizontal or vertical line, so ‖δn‖∞ = max{ |δn(x, y)| : (x, y) ∈ R_{n+1} }.
Let S be a subsquare whose vertices A, B, C, D belong to Rn, where AB is the lower horizontal side of length 1/2^n and CD is the upper horizontal side of length 1/2^n. We set E = (A + B)/2, F = (C + D)/2, G = (E + F)/2. Then
- φn(E) = [φ(A) + φ(B)]/2 since φn is linear on any side of S, and φ_{n+1}(E) = φ(E) by definition, so |δn(E)| ≤ Δ_{n+1}(φ);
- similarly |δn(F)| ≤ Δ_{n+1}(φ);
- φn(G) = [φn(E) + φn(F)]/2 = [φ(A) + φ(B) + φ(C) + φ(D)]/4 since φn is linear on any vertical segment inside S, and φ_{n+1}(G) = φ(G) by definition, so |δn(G)| ≤ 2 Δ_{n+1}(φ) since

   δn(G) = [φ(G) − φ(F)]/2 + [φ(F) − φ(C)]/4 + [φ(F) − φ(D)]/4
           + [φ(G) − φ(E)]/2 + [φ(E) − φ(A)]/4 + [φ(E) − φ(B)]/4.

It follows that ‖δn‖∞ ≤ 2 Δ_{n+1}(φ).
By construction, φn converges pointwise to φ on R∞. As a telescoping series, Σ_{n=0}^N δn = φ_{N+1} − φ0. The convergence of Σ_{n≥1} Δn(φ) and the Weierstrass criterion (for uniform convergence) show that the sequence φn converges uniformly to a continuous function on R which coincides with φ on R∞. ∎
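The inequality ‖φ_{n+1} − φn‖∞ ≤ 2 Δ_{n+1}(φ) used in this proof is easy to check numerically; the sketch below (function names ours) interpolates grid values bilinearly onto the next dyadic mesh and compares the maximal discrepancy with 2 Δ_{n+1}(φ).

import numpy as np

def delta(phi):
    """Largest increase of phi between neighbouring mesh points."""
    return max(np.max(np.abs(np.diff(phi, axis=0))),
               np.max(np.abs(np.diff(phi, axis=1))))

def bilinear_refine(phi):
    """Values of the piecewise bilinear interpolant of phi on the next dyadic mesh."""
    m, n = phi.shape
    out = np.empty((2*m - 1, 2*n - 1))
    out[::2, ::2] = phi
    out[1::2, ::2] = 0.5*(phi[:-1, :] + phi[1:, :])
    out[::2, 1::2] = 0.5*(phi[:, :-1] + phi[:, 1:])
    out[1::2, 1::2] = 0.25*(phi[:-1, :-1] + phi[:-1, 1:] + phi[1:, :-1] + phi[1:, 1:])
    return out

# Check  ||phi_{n+1} - phi_n||_inf <= 2 * Delta_{n+1}(phi)  on random grid data.
rng = np.random.default_rng(0)
coarse = rng.random((5, 5))                          # values of phi on R_2
fine = rng.random((9, 9)); fine[::2, ::2] = coarse   # values of phi on R_3, consistent on R_2
print(np.max(np.abs(fine - bilinear_refine(coarse))), "<=", 2*delta(fine))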
Now let us assume that f, p and q are three functions that are defined at the dyadic points of a rectangle R. We are looking for conditions on f ensuring that it has a continuously differentiable extension to R and that ∇f = (p, q) on R∞.
We introduce other bounds. Let h = 1/2^n and let En(f, p, q) be the largest of the following quantities:
- the numbers Δn(p), Δn(q);
- the numbers | [f(x + h, y) − f(x, y)]/h − [p(x, y) + p(x + h, y)]/2 |, where x, x + h, y ∈ Pn;
- the numbers | [f(x, y + h) − f(x, y)]/h − [q(x, y) + q(x, y + h)]/2 |, where x, y, y + h ∈ Pn.
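On grid data these quantities are straightforward to evaluate; the following sketch (function name ours) computes En(f, p, q) from the values of f, p, q on the mesh Rn.

import numpy as np

def E_n(F, P, Q, h):
    """Largest of Delta_n(p), Delta_n(q) and the divided-difference defects of f."""
    dP = max(np.max(np.abs(np.diff(P, axis=0))), np.max(np.abs(np.diff(P, axis=1))))
    dQ = max(np.max(np.abs(np.diff(Q, axis=0))), np.max(np.abs(np.diff(Q, axis=1))))
    # |[f(x+h,y)-f(x,y)]/h - [p(x,y)+p(x+h,y)]/2|  over horizontal edges
    ex = np.max(np.abs(np.diff(F, axis=0)/h - 0.5*(P[:-1, :] + P[1:, :])))
    # |[f(x,y+h)-f(x,y)]/h - [q(x,y)+q(x,y+h)]/2|  over vertical edges
    ey = np.max(np.abs(np.diff(F, axis=1)/h - 0.5*(Q[:, :-1] + Q[:, 1:])))
    return max(dP, dQ, ex, ey)

# Example: exact data of g(x, y) = x**2 * y on the mesh R_3 (h = 1/8).
n = 3; h = 1/2**n
x = np.arange(2**n + 1)*h; X, Y = np.meshgrid(x, x, indexing="ij")
print(E_n(X**2*Y, 2*X*Y, X**2, h))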
Definition 3. We say that the algorithm HR1 converges if
1. the functions (f, p, q) built on R∞ have continuous extensions to R,
2. the extension of f is continuously differentiable,
3. at each dyadic point of R, the functions satisfy ∇f = (p, q).

Theorem 7 If f, p, q are defined at the dyadic points of a rectangle R and if Σ_{n≥0} En(f, p, q) < +∞, then the algorithm HR1 converges.

Proof: According to Theorem 6, p and q have continuous extensions to R. These continuous extensions are unique, so without loss of generality, we can assume that p and q are defined on R and are continuous.
If E = max_n En(f, p, q), then Δn(f) ≤ (E + max(‖p‖∞, ‖q‖∞))/2^n. According to Theorem 6, f has a continuous extension to R. Therefore, we can assume that f is defined and is continuous on R.
By using repeatedly the inequalities

   | [f(x + h, y) − f(x, y)]/h − [p(x, y) + p(x + h, y)]/2 | ≤ En(f, p, q),

where x, x + h, y ∈ Pn and h = 1/2^n, it can be shown that

   (∀ x, x′ ∈ I)(∀ y ∈ J)   f(x′, y) − f(x, y) = ∫_x^{x′} p(t, y) dt,

hence fx = p. See Merrien [11] for details. Similarly, we obtain fy = q. ∎

Corollary 8 If there exist θ ∈ [0, 1[ and c ∈ ℝ+ such that En(f, p, q) ≤ c θ^n, then the algorithm HR1 converges.
4 Matrix representation of one step of the algorithm HR1
To study the differences introduced in Section 3, we shall use vectors in ℝ¹². For a square obtained at step n, with south-west vertex (x = i/2^n, y = j/2^n) and side length h = 1/2^n, we write:
   Un(x, y) = ( q(x+h, y) − q(x, y),
                p(x, y+h) − p(x, y),
                q(x+h, y+h) − q(x, y+h),
                p(x+h, y+h) − p(x+h, y),
                p(x+h, y+h) − p(x, y+h),
                q(x+h, y+h) − q(x+h, y),
                p(x+h, y) − p(x, y),
                q(x, y+h) − q(x, y),
                [f(x+h, y) − f(x, y)]/h − [p(x, y) + p(x+h, y)]/2,
                [f(x+h, y+h) − f(x+h, y)]/h − [q(x+h, y) + q(x+h, y+h)]/2,
                [f(x, y+h) − f(x, y)]/h − [q(x, y) + q(x, y+h)]/2,
                [f(x+h, y+h) − f(x, y+h)]/h − [p(x, y+h) + p(x+h, y+h)]/2 )^T;

then we have
Proposition 9

   U_{n+1}(x, y) = A^(1) Un(x, y),
   U_{n+1}(x + h/2, y) = A^(2) Un(x, y),
   U_{n+1}(x + h/2, y + h/2) = A^(3) Un(x, y),
   U_{n+1}(x, y + h/2) = A^(4) Un(x, y),

where A^(1), A^(2), A^(3), A^(4) are four matrices in ℝ^{12×12} depending only on the 5 parameters of the algorithm HR1.
Proof: With a computer algebra system, one can verify that each A^(i) has the block upper-triangular form

   A^(i) = [ A^(i)_{11}  A^(i)_{12}  A^(i)_{13} ]     [ A^(i)_{11}    *    ]
           [     0       A^(i)_{22}  A^(i)_{23} ]  =  [     0        B^(i) ]
           [     0       A^(i)_{32}  A^(i)_{33} ]

with A^(i)_{jk} ∈ ℝ^{4×4} and B^(i) ∈ ℝ^{8×8}. The entries of these blocks are explicit affine expressions in the five parameters α, β, γ, δ, η. One can obtain (A^(i)_{12}, A^(i)_{13}) for i ∈ {2, 3, 4} from (A^(1)_{12}, A^(1)_{13}) by permutations of rows and columns. ∎
We can immediately deduce:

Corollary 10

   Un(x, y) = A^(d1) A^(d2) ··· A^(dn) U0(0, 0)

with dk ∈ {1, 2, 3, 4}, k = 1, ..., n.

The convergence to 0 of (Un) is proved in the next two sections by studying the products of matrices A^(d1) A^(d2) ··· A^(dn).
5 Convergence and spectral radii of matrix
products
For the convergence of the above products of matrices, we shall use the same tools as in Merrien [11]. The problem will be a little more difficult: with the data D at the vertices of the initial square is associated a vector U0(0, 0), but an arbitrary vector U ∈ ℝ¹² does not necessarily have the form U0(0, 0).
We need definitions of spectral radii for products of matrices. Let Σ be a set of matrices of ℝ^{n×n}. If ‖·‖ is a norm on ℝ^n, the norm of a matrix M is sup_{‖X‖=1} ‖MX‖. Define:
- ρ(M), the spectral radius of a matrix M;
- ρ(Σ), the generalized spectral radius of Σ: ρ(Σ) = limsup_{k→+∞} (ρk(Σ))^{1/k}, where

      ρk(Σ) = sup{ ρ(M1 M2 ··· Mk) : Mi ∈ Σ, 1 ≤ i ≤ k };

- ρ̂(Σ), the joint spectral radius: ρ̂(Σ) = limsup_{k→+∞} (ρ̂k(Σ, ‖·‖))^{1/k}, where

      ρ̂k(Σ, ‖·‖) = sup{ ‖M1 M2 ··· Mk‖ : Mi ∈ Σ, 1 ≤ i ≤ k };

  remark here that ρ̂(Σ) is independent of the norm used;
- ρ̄(Σ) = inf_{‖·‖} sup{ ‖A‖ : A ∈ Σ }, the infimum being taken over all operator norms.
We shall use the results of Daubechies and Lagarias [2], completed by Berger and Wang [1], then by Elsner [8]: if Σ is a bounded set, then

   (ρk(Σ))^{1/k} ≤ ρ(Σ) = ρ̄(Σ) = ρ̂(Σ) ≤ (ρ̂k(Σ, ‖·‖))^{1/k}.
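These two inequalities also give a practical, if crude, way to bracket ρ(Σ) for a finite set Σ: spectral radii of k-fold products give lower bounds, norms of k-fold products give upper bounds (Gripenberg [9] describes much more efficient algorithms). A minimal sketch, with names of our choosing:

import itertools
import numpy as np

def bracket_jsr(matrices, k):
    """Bounds  (rho_k)^(1/k) <= rho(Sigma) <= (rho_hat_k)^(1/k)
    obtained from all products of length k (operator 2-norm for the upper bound)."""
    lo = up = 0.0
    for combo in itertools.product(matrices, repeat=k):
        prod = np.linalg.multi_dot(combo) if k > 1 else combo[0]
        lo = max(lo, np.max(np.abs(np.linalg.eigvals(prod))))
        up = max(up, np.linalg.norm(prod, 2))
    return lo**(1/k), up**(1/k)

# Example on two 2x2 matrices: the bracket tightens as k grows.
A = np.array([[0.6, 0.3], [0.0, 0.5]]); B = np.array([[0.5, 0.0], [0.4, 0.6]])
for k in (1, 2, 4, 8):
    print(k, bracket_jsr([A, B], k))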
We shall also need a lemma of Berger and Wang:

Lemma 11 Assume that the matrices M ∈ Σ are all block upper-triangular,

   M = [ M^(1)       *    ]
       [        ⋱         ]
       [   0        M^(l) ]

where the M^(j) are square matrices. Set Σ^(j) = {M^(j) : M ∈ Σ}; then

   ρ(Σ) = max( ρ(Σ^(1)), ..., ρ(Σ^(l)) ).

Lemma 12 Let Φ be the linear operator from ℝ¹² into ℝ¹² which transforms the 12 data (f(A), ∇f(A), ..., ∇f(D)) at the vertices of the initial square into U0(0, 0). If V is the subspace generated by V1 = (1, 0, 1, 0, 0, ..., 0)^T, V2 = (0, 1, 0, 1, 0, ..., 0)^T and V3 = (0, 1, 1, 0, 0, ..., 0)^T in ℝ¹², then:
1. V ⊕ Im(Φ) = ℝ¹²;
2. for i ∈ {1, 2, 3, 4}, A^(i)(Im(Φ)) ⊂ Im(Φ) and A^(i)(V) ⊂ V;
3. for i ∈ {1, 2, 3, 4}, ‖A^(i)|_V‖∞ = 1/2.

Proof: If Φ(D) = 0 then

   p(0,0) = p(1,0) = p(1,1) = p(0,1) = p,
   q(0,0) = q(1,0) = q(1,1) = q(0,1) = q,
   f(1,0) = f(0,0) + ph,  f(0,1) = f(0,0) + qh,  f(1,1) = f(0,0) + ph + qh.

This means that the data can be interpolated by a linear polynomial, and dim(Ker(Φ)) = 3, so that dim(Im(Φ)) = 9. Precisely, Im(Φ) is the subspace of vectors U = (x1, ..., x12) such that

   x1 − x3 + x6 − x8 = 0,
   x2 − x4 + x5 − x7 = 0,
   x9 + x10 − x11 − x12 + (1/2)(x1 + x3 − x2 − x4) = 0.

It is easy to see that Vi ∉ Im(Φ) and that (V1, V2, V3) is linearly independent, so that

   Im(Φ) ⊕ V = ℝ¹².

Clearly if U1 = A^(i) U0(0, 0) with i ∈ {1, 2, 3, 4} then U1 ∈ Im(Φ), so that A^(i)(Im(Φ)) ⊂ Im(Φ).
Now for i ∈ {1, 2, 3, 4}, A^(i) V1 = (1/2) V1, A^(i) V2 = (1/2) V2 and

   A^(1) V3 = (1/4) V3,  A^(2) V3 = (1/4)(V2 + V3),  A^(3) V3 = (1/4)(V1 + V2 + V3),  A^(4) V3 = (1/4)(V1 + V3),

hence A^(i)(V) ⊂ V.
If V = x1 V1 + x2 V2 + x3 V3 = (x1, x2 + x3, x1 + x3, x2, 0, ..., 0)^T ∈ V with ‖V‖∞ = 1, then A^(1) V = (x1/2, x2/2 + x3/4, x1/2 + x3/4, x2/2, 0, ..., 0)^T, so that

   ‖A^(1) V‖∞ ≤ max( |x1|/2, |x2|/4 + |x2 + x3|/4, |x1|/4 + |x1 + x3|/4, |x2|/2 ) ≤ 1/2,

with equality for V = V1. This gives ‖A^(1)|_V‖∞ = 1/2. The other norms are evaluated similarly. ∎
To use the spectral radii, we choose Σ = {A^(1), A^(2), A^(3), A^(4)}, where the matrices are defined in Proposition 9.

Proposition 13 The algorithm HR1 is convergent if and only if ρ(Σ) < 1.

Proof: Since ρ̄(Σ) = ρ(Σ) < 1, there exists an operator norm ‖·‖ for which μ = max(‖A^(i)‖, i = 1, 2, 3, 4) < 1. So for any vector

   Un(x, y) = A^(d1) A^(d2) ··· A^(dn) U0(0, 0)

we have ‖Un(x, y)‖ ≤ c μ^n and, since the norms are equivalent, ‖Un(x, y)‖∞ ≤ c′ μ^n. According to Theorem 7 and its Corollary, the algorithm HR1 converges.
Conversely, if the algorithm converges, we know that p, q and f are continuous on the square with fx = p and fy = q; moreover p and q are uniformly continuous. For any data D and the corresponding vector U0(0, 0), it is easy to prove, by using uniform continuity and Taylor expansions, that Un(x, y) tends to 0 as n tends to +∞, which can be summarized as ‖Un(x, y)‖∞ ≤ ε(n, U0) with lim_{n→+∞} ε(n, U0) = 0.
Let us choose a basis B = (V1, V2, V3, ..., V12), composed of vectors V adapted to the decomposition V ⊕ Im(Φ) = ℝ¹², where (V1, V2, V3) are defined in the preceding lemma; then

   ‖A^(d1) A^(d2) ··· A^(dn) Vi‖∞ ≤ 1/2^n   for i ∈ {1, 2, 3}.

And for any vector V of the basis B, we have proved that

   ‖A^(d1) A^(d2) ··· A^(dn) V‖∞ ≤ ε(n, V).

Now if U ∈ ℝ¹² with ‖U‖∞ = 1 and U = Σ_{i=1}^{12} λi Vi, then max |λi| is bounded independently of U. So that we have

   ‖A^(d1) A^(d2) ··· A^(dn) U‖∞ ≤ Σ_{i=1}^{12} |λi| ε(n, Vi) ≤ ε(n),   with lim_{n→+∞} ε(n) = 0,

and

   ‖A^(d1) A^(d2) ··· A^(dn)‖∞ ≤ ε(n).

There exists an integer k such that ε(k) < 1, therefore ρ̂k(Σ, ‖·‖∞) < 1. As ρ(Σ) ≤ (ρ̂k(Σ, ‖·‖∞))^{1/k}, we get the result. ∎
1
Set = fB (1) B (2) B (3) B (4) g.
Corollary 14 The algorithm HR1 is convergent if and only if () < 1:
0 ()
1
A
11
()
A. Using Lemma 11 we obtain:
Proof : We know that A = @
i
i
0 B ( i)
(2)
(3)
(4)
() = max((A(1)
11 A11 A11 A11 ) ( )):
(2)
(3)
(4)
1
A direct computation gives kA(11i) k = 12 , so that (A(1)
11 A11 A11 A11 ) 2 .
Now () < 1 if and only if () < 1. With the preceding proposition, we
get the result. 1
6 Necessary and/or sufficient conditions of convergence
This last necessary and sufficient condition is important, but the computation of the spectral radii is difficult. Gripenberg [9] gives algorithms to find an arbitrarily small interval that contains the joint spectral radius of a finite set of matrices, but we would like to find conditions of convergence depending on the parameters. Using again the inequalities

   (ρk(Σ))^{1/k} ≤ ρ(Σ) = ρ̄(Σ) = ρ̂(Σ) ≤ (ρ̂k(Σ, ‖·‖))^{1/k},

we immediately get:
- if there exists k such that ρk(Σ) > 1, then the algorithm HR1 diverges;
- if there exist an operator norm ‖·‖ and k such that ρ̂k(Σ, ‖·‖) < 1, then the algorithm converges.
We shall study ρ(Σ′), where Σ′ = {B^(1), B^(2), B^(3), B^(4)} with

   B^(i) = [ A^(i)_{22}  A^(i)_{23} ]
           [ A^(i)_{32}  A^(i)_{33} ].

Set Σj = {A^(1)_{jj}, A^(2)_{jj}, A^(3)_{jj}, A^(4)_{jj}} for j = 2 or j = 3.
Proposition 15 If α = γ = −1/8 and η = −1/4, then:
1. the algorithm HR1 converges if and only if ρ(Σ3) < 1;
2. for β = δ, the algorithm HR1 converges if and only if −3 < β < 1;
3. for β ≠ δ: if −3 < β < 1 and −3 < δ < 1, the algorithm HR1 converges; if β ∉ ]−3, 1[ or δ ∉ ]−5, 3[, the algorithm HR1 is not convergent, i.e. f ∉ C¹.
Proof: With the above conditions on the parameters, we obtain

   B^(i) = [ A^(i)_{22}  A^(i)_{23} ]
           [     0       A^(i)_{33} ],

i.e. A^(i)_{32} = 0 for i ∈ {1, 2, 3, 4}. Moreover the blocks A^(i)_{22} are constant 4×4 matrices, independent of β and δ, while the entries of the blocks A^(i)_{33} are among 0, (1 + β)/2, (1 + β)/4, (1 + δ)/2 and (1 + δ)/4.
Now, for all i ∈ {1, 2, 3, 4}, ‖A^(i)_{22}‖₂ = √2/2,

   ρ(A^(i)_{33}) = max( |1 + β|/2, |1 + δ|/4 )   and   ‖A^(i)_{33}‖∞ = max( |1 + β|/2, |1 + δ|/2 ).

Using the inequalities on spectral radii, we have ρ(Σ2) ≤ √2/2 and

   max( |1 + β|/2, |1 + δ|/4 ) ≤ ρ(Σ3) ≤ max( |1 + β|/2, |1 + δ|/2 ).        (*)

From Lemma 11, we know that ρ(Σ′) = max(ρ(Σ2), ρ(Σ3)), so that ρ(Σ′) < 1 if and only if ρ(Σ3) < 1.
If β = δ then, from the inequalities (*) above, ρ(Σ3) = |1 + β|/2, and ρ(Σ′) < 1 if and only if |1 + β|/2 < 1, which gives the second result.
The third result, with β ≠ δ, is a direct consequence of the inequalities (*). To compute ρ(Σ3), note that if we regard the A^(i)_{33} as operators written in the canonical basis {e1, e2, e3, e4} of ℝ⁴ and rewrite these operators in the basis {e1, e3, e2, e4}, it is easy to see that the matrices all take a block diagonal form, and using again Lemma 11,

   ρ(Σ3) = ρ({ M, M′ }),   where M = [ (1+β)/2, 0 ; (1+δ)/4, (1+δ)/4 ]  and  M′ = [ (1+β)/4, (1+β)/4 ; 0, (1+δ)/2 ].
Example 1: For the Sibson-Thomson element, α = γ = −1/8, β = δ = −1, η = −1/4; then A^(i)_{32} = 0, A^(i)_{33} = 0 and ρ(Σ′) ≤ √2/2. As already proved, the algorithm HR1 converges.
Proposition 16 If there exists an operator norm ‖·‖ on ℝ⁴ such that, for all i ∈ {1, 2, 3, 4},

   ‖A^(i)_{22}‖ < 1,   ‖A^(i)_{33}‖ < 1   and   ‖A^(i)_{23}‖ · ‖A^(i)_{32}‖ < (1 − ‖A^(i)_{22}‖)(1 − ‖A^(i)_{33}‖),

then the algorithm HR1 converges.
Proof: For V ∈ ℝ⁸, V = (X, Y)^T with X, Y ∈ ℝ⁴, define ‖V‖λ = ‖X‖ + λ‖Y‖, where λ ∈ ℝ+ is to be chosen. Then

   ‖B^(i) V‖λ = ‖A^(i)_{22} X + A^(i)_{23} Y‖ + λ ‖A^(i)_{32} X + A^(i)_{33} Y‖
             ≤ (‖A^(i)_{22}‖ + λ‖A^(i)_{32}‖) ‖X‖ + (‖A^(i)_{23}‖/λ + ‖A^(i)_{33}‖) λ‖Y‖
             ≤ max( ‖A^(i)_{22}‖ + λ‖A^(i)_{32}‖, ‖A^(i)_{23}‖/λ + ‖A^(i)_{33}‖ ) ‖V‖λ.

So ‖B^(i)‖λ < 1 as soon as ‖A^(i)_{22}‖ + λ‖A^(i)_{32}‖ < 1 and ‖A^(i)_{23}‖/λ + ‖A^(i)_{33}‖ < 1, which may be written

   λ ‖A^(i)_{32}‖ < 1 − ‖A^(i)_{22}‖   and   ‖A^(i)_{23}‖ < λ (1 − ‖A^(i)_{33}‖).

Now, using the hypothesis of the proposition, we are able to choose λ such that these inequalities hold. ∎
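In this proof the weight λ can be made explicit: writing n22, n23, n32, n33 for the four block norms (in the examples below they do not depend on i), any λ with n23/(1 − n33) < λ < (1 − n22)/n32 works, and such a λ exists precisely when n23·n32 < (1 − n22)(1 − n33). A small sketch, with names of our choosing:

def weighted_norm_lambda(n22, n23, n32, n33):
    """Return a weight lam such that  n22 + lam*n32 < 1  and  n23/lam + n33 < 1,
    or None if the hypothesis of Proposition 16 fails."""
    if n22 >= 1 or n33 >= 1 or n23 * n32 >= (1 - n22) * (1 - n33):
        return None
    low = n23 / (1 - n33)                                   # lower admissible bound for lam
    high = (1 - n22) / n32 if n32 > 0 else 2 * low + 1.0    # upper bound (unbounded if n32 = 0)
    return 0.5 * (low + high)

# Block norms of Example 2 below: ||A22|| = 1/2, ||A23|| = 2, ||A32|| = 1/8, ||A33|| = 0.
print(weighted_norm_lambda(0.5, 2.0, 0.125, 0.0))   # a valid weight (here 3.0)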
Example 2: We shall use the norm ‖·‖∞ on ℝ⁴. Let α = −1/16, γ = −1/8, β = δ = −1, η = 0; then

   ‖A^(i)_{22}‖∞ = 1/2,   ‖A^(i)_{23}‖∞ = 2,   ‖A^(i)_{32}‖∞ = 1/8,   ‖A^(i)_{33}‖∞ = 0,

so that (1 − ‖A^(i)_{22}‖∞)(1 − ‖A^(i)_{33}‖∞) − ‖A^(i)_{23}‖∞ · ‖A^(i)_{32}‖∞ = 0.25 > 0. The algorithm HR1 converges (see Figure 5).
Example 3: We shall suppose α = γ = −1/8 and β = δ, so that the algorithm depends on the two parameters β and η. We shall use the norm ‖·‖₂ on ℝ⁴. A direct computation gives, for all i, explicit expressions of ‖A^(i)_{22}‖₂, ‖A^(i)_{23}‖₂, ‖A^(i)_{32}‖₂ and ‖A^(i)_{33}‖₂ as functions of β and η; in particular

   ‖A^(i)_{32}‖₂ = |1/8 + η/2| √2   and   ‖A^(i)_{33}‖₂ = |1 + β| (√10 + √2)/8.

The condition ‖A^(i)_{22}‖₂ < 1 holds on an interval of values of β whose right endpoint is approximately 0.57. To get ‖A^(i)_{33}‖₂ < 1, we need −1 − (√10 − √2) < β < −1 + (√10 − √2), with √10 − √2 ≈ 1.74.
For β ∈ [−1.5, 0.2] and η ∈ [−0.3, −0.12], we have drawn the surface

   s(β, η) = (1 − ‖A^(i)_{22}‖₂)(1 − ‖A^(i)_{33}‖₂) − ‖A^(i)_{32}‖₂ · ‖A^(i)_{23}‖₂.

The algorithm HR1 converges if s(β, η) > 0 (see Figure 3 for s(β, η) and Figure 6 for the surfaces).
Fig 3: Graph of s(β, η).
7 Examples
On the square [0, 1]², we have interpolated the data

   f(0, 0) = 0,   f(1, 0) = −0.2,  f(0, 1) = 0.3,  f(1, 1) = 1,
   fx(0, 0) = −2, fx(1, 0) = −1,   fx(0, 1) = 0,   fx(1, 1) = 1,
   fy(0, 0) = 0,  fy(1, 0) = 1.2,  fy(0, 1) = 0,   fy(1, 1) = 2/3,

by the algorithm HR1. We stopped the process at n = 5, so that the values of f, fx, fy are evaluated at (2⁵ + 1) × (2⁵ + 1) points of [0, 1]². Then we have drawn the surfaces f, p, q and the level curves of f.
7.1 The Sibson-Thomson element
We choose α = γ = −1/8, β = δ = −1, η = −1/4. The functions fx and fy are linear on each subtriangle.
Fig 4: Graphs of f, fx, fy and level curves of f.
7.2 α = −1/16, γ = −1/8, β = δ = −1, η = 0
This is an illustration of Example 2. The functions fx and fy are continuous on [0, 1]² but irregular. On the sides x = 0 and x = 1, the function fx is linear, and it is piecewise linear on x = 1/2, ...; similarly for fy on y = 0, ...
Fig 5: Graphs of f, fx, fy and level curves of f (with η = 0).
7.3 α = γ = −1/8, β = δ = −0.6, η = −0.15
This is an illustration of Example 3. Here η = −0.15, and the functions fx, fy are less irregular than in the preceding example.
Fig 6: Graphs of f, fx, fy and level curves of f.
8 Conclusion
We considered dyadic Hermite interpolation on a rectangular mesh under the assumption that the process of interpolation is invariant under any symmetry of the original mesh. The simplest Hermite type subdivision schemes involve at most 5 parameters (α, β, γ, δ, η), and it is possible to check for which values of these parameters one gets an interpolating C¹ function for arbitrary Hermite data.
From a set of Hermite data, we got parametric surfaces f, fx and fy, where x ∈ [0, 1] and y ∈ [0, 1]. In contrast with tensor products, no second order mixed partial derivatives are used. We conclude by saying that other Hermite subdivision schemes can be considered. It is an open question to know whether our techniques can be extended to this situation.
References
[1] M. A. Berger and Y. Wang, Bounded semigroups of matrices, Linear Algebra Appl. 166 (1992) 21-27.
[2] I. Daubechies and J. C. Lagarias, Sets of matrices all infinite products of which converge, Linear Algebra Appl. 161 (1992) 227-263.
[3] G. Deslauriers, S. Dubuc, Interpolation dyadique, in: Fractals. Dimensions non entières et applications, Masson, Paris (1987) 44-55.
[4] G. Deslauriers, J. Dubois, S. Dubuc, Multidimensional iterative interpolation, Canad. J. Math. 43 (1991) 127-147.
[5] S. Dubuc, Interpolation through an iterative scheme, J. Math. Anal. Appl. 114 (1986) 185-204.
[6] N. Dyn, D. Levin, J. A. Gregory, A 4-point interpolatory subdivision scheme for curve design, Comput. Aided Geom. Design 4 (1987) 257-268.
[7] N. Dyn, D. Levin, Analysis of Hermite-type subdivision schemes, in: Approximation Theory VIII, Vol. 2: Wavelets and Multilevel Approximation, eds. C. K. Chui and L. L. Schumaker, World Scientific, Singapore (1995) 117-124.
[8] L. Elsner, The generalized spectral-radius theorem: an analytic-geometric proof, Linear Algebra Appl. 220 (1995) 151-159.
[9] G. Gripenberg, Computing the joint spectral radius, Linear Algebra Appl. 234 (1996) 43-60.
[10] J.-L. Merrien, A family of Hermite interpolants by bisection algorithms, Numerical Algorithms 2 (1992) 187-200.
[11] J.-L. Merrien, Interpolants d'Hermite C² obtenus par subdivision, to appear in M²AN.
[12] R. Sibson and G. D. Thomson, A seamed quadratic element for contouring, The Computer Journal 24 (1980) 378-382.