Some Notes on Two-Scale Difference Equations

Lothar Berg and Gerlind Plonka
Fachbereich Mathematik, Universität Rostock
D-18051 Rostock, Germany
Abstract: In this paper, we continue our considerations in [2] on two-scale difference equations, mainly with respect to continuous solutions. Moreover, we study refinable step functions and piecewise polynomials. Also, solutions with noncompact support are considered. New algorithms for the approximate computation of continuous solutions are derived.
Key words: Refinement equation, piecewise continuous solutions, spectral properties of coefficient matrices, new algorithms
AMS subject classifications: 39A10, 15A18, 65Q05
1. INTRODUCTION
A two-scale difference equation is a functional equation of the form

    \varphi(t/2) = \sum_{\nu=0}^{n} c_\nu \, \varphi(t - \nu),                (1.1)

where the c_\nu are given real or complex constants with c_0 c_n \neq 0 and n \geq 1. A function \varphi satisfying (1.1) for all real t is called refinable. Functional equations of type (1.1) arise especially in the construction of wavelets as well as in interpolating subdivision schemes. They are considered in a lot of papers (see e.g. [15, 6, 7, 3, 4, 11, 13, 2]).
If \varphi \in L_1(\mathbb{R}) is assumed, then \varphi necessarily has compact support contained in [0, n] (see [6]). In [2], we restricted ourselves to this case.
In the present paper, we continue our considerations in [2] on two-scale difference equations, mainly with respect to nonzero continuous solutions \varphi of (1.1).
Functional equations of types similar to (1.1) occur in physics and can be investigated with related methods. For example, the equation

    y(qt) = \frac{1}{4q} \left[ y(t+1) + y(t-1) + 2 y(t) \right] \qquad (0 < q < 1, \ t \in \mathbb{R}),

associated with the appearance of spatially chaotic structures in amorphous (glassy) materials, was intensively studied (see e.g. [1, 16, 9, 8]). Systems of such functional equations are useful for applications in probability theory [17] and in the theory of fractal objects (see e.g. [10]).
Usually, the Fourier transform is the main tool for solving this type of equation. Applying the Fourier transform \hat{\varphi}(u) = \int_{-\infty}^{\infty} \varphi(t) e^{-iut} \, dt to (1.1), we obtain

    \hat{\varphi}(2u) = P(e^{-iu}) \, \hat{\varphi}(u)                (1.2)

with the two-scale symbol (or refinement mask)

    P(z) := \frac{1}{2} \sum_{\nu=0}^{n} c_\nu z^\nu.                (1.3)
In [6], it is proved for real coefficients c_\nu that:
(i) if |P(1)| < 1 or P(1) = -1, then (1.1) has no L_1-solution;
(ii) if P(1) = 1, then it has at most one linearly independent L_1-solution;
(iii) if |P(1)| > 1, and if a nonzero L_1-solution \varphi exists, then P(1) = 2^m for some nonnegative integer m. If, in the last case, the coefficients c_\nu (\nu = 0, \ldots, n) are replaced by 2^{-m} c_\nu in (1.1), then the new two-scale difference equation possesses a nonzero integrable solution g such that \varphi is the m-th derivative of g:

    \varphi(x) = \frac{d^m}{dx^m} g(x) \quad a.e.
Hence, looking for compactly supported solutions, we can essentially restrict ourselves to the case P(1) = 1. Then repeated application of (1.2) yields

    \hat{\varphi}(u) = \prod_{j=1}^{\infty} P(e^{-iu/2^j}),                (1.4)

where we assume that \hat{\varphi}(0) = \int_0^n \varphi(x) \, dx = 1 (see e.g. [6]).
Later on, we shall also consider continuous solutions of (1.1) which vanish for t \leq 0 but are polynomials for t \geq n. In this case, P(1) = 2^m is possible also for negative integers m.
In the following, if we speak about a solution of (1.1), then we usually mean a continuous one, but not the always existing identically vanishing solution. Some considerations will be transferred to the case of piecewise continuous functions and to step functions (see Section 3).
Let us first assume that \varphi is a continuous and compactly supported solution of (1.1). Introducing the (n+1)-dimensional vector

    \Phi(t) = (\varphi(t), \varphi(t+1), \ldots, \varphi(t+n))^T                (1.5)

and the (n+1) \times (n+1) matrix A := (c_{2i-j}), i, j = 0, \ldots, n, where c_i = 0 for i < 0 and i > n, respectively,

    A := (c_{2i-j})_{i,j=0}^{n} = \begin{pmatrix}
    c_0 & 0 & 0 & \cdots & 0 \\
    c_2 & c_1 & c_0 & \cdots & 0 \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    0 & \cdots & c_n & c_{n-1} & c_{n-2} \\
    0 & \cdots & 0 & 0 & c_n
    \end{pmatrix},

(1.1) can be written in vector form

    \Phi(t/2) = A \, \Phi(t)                (1.6)

for -1 \leq t \leq 1. Note that A is a (1,2)-block Toeplitz matrix (see e.g. [19]), and that \Phi(t) has at most n nonvanishing entries: for t \leq 0 the first and for t \geq 0 the last component vanishes. If the coefficients c_\nu are symmetric, c_\nu = c_{n-\nu} (\nu = 0, \ldots, n), then it easily follows that \varphi(t) = \varphi(n-t). Equation (1.6) implies

    \Phi(2^{-k} t) = A^k \Phi(t)                (1.7)

and, for t = 0,

    \Phi(0) = A \, \Phi(0).                (1.8)

Since \Phi(0) \neq 0 for a nonzero continuous solution (cf. [7], Proposition 2.1), \Phi(0) is necessarily a right eigenvector of A corresponding to the eigenvalue 1.
By means of the shifting matrices

    V := \begin{pmatrix}
    0 & 1 & 0 & \cdots & 0 \\
    0 & 0 & 1 & \ddots & \vdots \\
    \vdots & & \ddots & \ddots & 0 \\
    0 & \cdots & & 0 & 1 \\
    0 & \cdots & & 0 & 0
    \end{pmatrix}, \qquad
    V^T = \begin{pmatrix}
    0 & 0 & \cdots & 0 & 0 \\
    1 & 0 & \cdots & & 0 \\
    0 & 1 & \ddots & & \vdots \\
    \vdots & & \ddots & 0 & 0 \\
    0 & \cdots & 0 & 1 & 0
    \end{pmatrix},

we have, for -1 \leq t \leq 1,

    \Phi(t+1) = V \Phi(t), \qquad \Phi(t-1) = V^T \Phi(t).                (1.9)

In view of V V^T = \mathrm{diag}(1, \ldots, 1, 0) and V^T V = \mathrm{diag}(0, 1, \ldots, 1), we obtain \Phi(t) = V V^T \Phi(t) for t \geq 0 and \Phi(t) = V^T V \Phi(t) for t \leq 0. Replacing t by t-1 in (1.6), it follows that

    \Phi\!\left(\frac{t-1}{2}\right) = A \, \Phi(t-1) = A V^T \Phi(t)

for 0 \leq t \leq 1, and hence, by (1.9),

    \Phi\!\left(\frac{t+1}{2}\right) = V \Phi\!\left(\frac{t-1}{2}\right) = V A V^T \Phi(t) \qquad (0 \leq t \leq 1).
Recursive computation of \varphi(t). For the calculation of a solution \varphi of (1.1) at dyadic rationals, it is convenient to introduce the following (n \times n)-submatrices of A,

    A_0 := (c_{2i-j-1})_{i,j=1}^{n}, \qquad A_1 := (c_{2i-j})_{i,j=1}^{n},

and the n-dimensional vector \tilde{\Phi}(t) = (\varphi(t), \varphi(t+1), \ldots, \varphi(t+n-1))^T (cf. e.g. [4, 15, 7, 13]). Then, starting with a suitable eigenvector \tilde{\Phi}(0) of A_0 (or with an eigenvector \tilde{\Phi}(1) of A_1) belonging to the eigenvalue 1, we have to apply consecutively the relations

    \tilde{\Phi}\!\left(\frac{t}{2}\right) = A_0 \tilde{\Phi}(t), \qquad \tilde{\Phi}\!\left(\frac{t+1}{2}\right) = A_1 \tilde{\Phi}(t).                (1.10)

In Section 2, we shall discuss the problem of how to choose \tilde{\Phi}(0) among the eigenvectors of A_0 if 1 is an eigenvalue of A_0 with multiplicity greater than 1.
After having calculated the values \varphi(2^{-j} l) (l, j \in \mathbb{N}) by means of (1.10), we can interpolate them by continuous spline functions

    \varphi_j(t) := \sum_{l=0}^{2^j n} \varphi(2^{-j} l) \, h(2^j t - l) \qquad (j = 0, 1, 2, \ldots),                (1.11)

where h(t) is the hat function

    h(t) := \begin{cases} 1 - |t|, & |t| \leq 1, \\ 0, & |t| > 1. \end{cases}

As shown in [6, 15], if there exists a nonzero continuous solution \varphi of (1.1), then \lim_{j \to \infty} \varphi_j = \varphi. Note that this dyadic interpolation method is different from the subdivision scheme considered e.g. in [3] and [5], pp. 207 ff. In particular, this method also applies if the integer translates of \varphi are linearly dependent, while the subdivision algorithm does not work in this case (see e.g. [3]).
At this point, we want to mention that \tilde{\Phi}(t) can also be computed at certain nondyadic values in a similar way. From (1.10), it follows that

    \tilde{\Phi}\!\left(\frac{t+1}{4}\right) = A_0 A_1 \tilde{\Phi}(t), \qquad \tilde{\Phi}\!\left(\frac{t+2}{4}\right) = A_1 A_0 \tilde{\Phi}(t),

and hence

    \tilde{\Phi}\!\left(\frac{1}{3}\right) = A_0 A_1 \tilde{\Phi}\!\left(\frac{1}{3}\right), \qquad \tilde{\Phi}\!\left(\frac{2}{3}\right) = A_1 A_0 \tilde{\Phi}\!\left(\frac{2}{3}\right).

This means that A_0 A_1 and A_1 A_0 must also have the eigenvalue 1, and the vectors \tilde{\Phi}(1/3) and \tilde{\Phi}(2/3) can be computed as eigenvectors. Generally, we find

    \tilde{\Phi}\!\left(\frac{i}{2^k - 1}\right) = A_{\epsilon_k} A_{\epsilon_{k-1}} \cdots A_{\epsilon_1} \tilde{\Phi}\!\left(\frac{i}{2^k - 1}\right)

for i = 1, \ldots, 2^k - 2, where the indices \epsilon_j \in \{0, 1\} are defined by the dyadic representation i = \sum_{j=1}^{k} \epsilon_j 2^{j-1}. If we restrict ourselves to simple eigenvalues, the corresponding eigenvectors are determined only up to a normalization factor. In order to find the correct factor, we can use Theorem 2.1 in [2], which gives \sum_{j=-\infty}^{\infty} \varphi(t + j) = \hat{\varphi}(0) = 1; this is possible in the case P(1) = 1 (see Formula (1.4)). Equivalently, the sum of all components of \tilde{\Phi}(i/(2^k - 1)) is equal to 1.
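The fixed-point property at t = 1/3 can be checked numerically. The sketch below again uses the illustrative hat-function mask c_0 = c_2 = 1/2, c_1 = 1 (an assumption for this example) and normalizes the eigenvector so that its components sum to 1, as described above.

```python
import numpy as np

# Submatrices for the illustrative hat mask c0 = c2 = 1/2, c1 = 1 (n = 2)
A0 = np.array([[0.5, 0.0], [0.5, 1.0]])   # (c_{2i-j-1})
A1 = np.array([[1.0, 0.5], [0.0, 0.5]])   # (c_{2i-j})

# Phi~(1/3) = A0 A1 Phi~(1/3): take the eigenvector of A0 A1 belonging to
# the eigenvalue 1 and normalize the component sum to 1
vals, vecs = np.linalg.eig(A0 @ A1)
v = vecs[:, np.argmin(abs(vals - 1.0))].real
v /= v.sum()
print(v)   # components (phi(1/3), phi(4/3)); the hat function gives (1/3, 2/3)
```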
Necessary and sufficient conditions for the existence of a nonzero continuous solution of (1.1) are already presented in [15] and [4], involving infinite products of the matrices A_0 and A_1. Further, if (1.1) is assumed to have a nonzero, compactly supported, integrable solution, some necessary conditions on the refinement mask P(z) can be given. In particular, P(z) necessarily contains a polynomial factor p(z), where all zeros of p(z) are roots of -1 of order 2^r (r \in \mathbb{N}) (see [2]). This polynomial factor p(z) can be seen as the refinement mask of a certain step function.
In Section 2, we investigate the structure of the (n+1) \times (n+1) matrix A by means of Jordan's normal form. The goal is to derive simple necessary conditions for the existence of nonzero continuous solutions of (1.1). In Section 3, we present a procedure for the construction of an arbitrary refinement mask of a refinable step function. Polynomial solutions of (1.1) are considered in Section 4. In particular, we show how an integral equation can be solved approximately. In Section 5, we introduce a new algorithm which is based on a factorization of the refinement mask P(z) into a simpler part \tilde{P}(z), being the refinement mask of a known function (e.g. a step function), and a remainder part Q(z),

    P(z) = \tilde{P}(z) \, Q(z).

Assuming that a (piecewise) differentiable solution \varphi_0 of (1.1) corresponding to \tilde{P}(z) is given explicitly, we get the desired solution of (1.2) with the original mask P(z) in the form

    \hat{\varphi}(u) = \hat{\varphi}_0(u) \prod_{k=1}^{\infty} Q(e^{-iu/2^k}).

Then \hat{\varphi}(u) can be recursively approximated by taking

    \hat{\varphi}_n(u) = Q(e^{-iu/2^n}) \, \hat{\varphi}_{n-1}(u),

and the corresponding original functions \varphi_n converge to a continuous (or even differentiable) solution \varphi of (1.1) if Q(z) behaves well. In particular, this new algorithm also works if the integer translates of \varphi are linearly dependent. Finally, in Section 6, we consider the case of continuous solutions of (1.1) with support [0, \infty) which are polynomials for t \geq n.
2. JORDAN'S NORMAL FORM
Now, we want to consider the matrix A in detail. Assuming the existence of a nonzero continuous solution \varphi of (1.1), we shall derive a set of simple necessary criteria on A.
Let A = W^{-1} J W be the decomposition of A into Jordan's normal form, where

    J := \begin{pmatrix} J_0 & & & \\ & J_1 & & \\ & & J_2 & \\ & & & J_3 \end{pmatrix}, \qquad
    W = \begin{pmatrix} W_0^T \\ W_1^T \\ W_2^T \\ W_3^T \end{pmatrix}, \qquad
    W^{-1} = (U_0, U_1, U_2, U_3).

Here, J_i contains the Jordan blocks with eigenvalues
\lambda = 0 for i = 0,
\lambda = 1 for i = 1,
|\lambda| < 1 and \lambda \neq 0 for i = 2, as well as
|\lambda| \geq 1 and \lambda \neq 1 for i = 3.
If such blocks do not appear, we consider the corresponding matrices as empty. Further, the W_i^T are the matrices consisting of the rows of W, and the U_i are the matrices consisting of the columns of U = W^{-1} corresponding to J_i. In particular, we have

    \sum_{i=0}^{3} U_i J_i W_i^T = A \qquad \text{and} \qquad \sum_{i=0}^{3} U_i W_i^T = I,

as well as

    W_i^T U_i = I, \qquad W_i^T U_j = 0

for i \neq j (i, j = 0, 1, 2, 3), where I and 0 denote identity matrices and zero matrices of suitable size, respectively. As shown in [2], Theorem 5.1, if (1.1) possesses a nonzero, continuous L_1-solution, then the Jordan block J_1 in the Jordan decomposition of A (belonging to the eigenvalue 1) is a nonempty identity matrix, i.e. J_1 = I. With the notations above, (1.7) can be written in the form

    \Phi(2^{-k} t) = U J^k W \Phi(t).                (2.1)

Furthermore, for i = 0, 1, 2, 3 it follows that

    W_i^T \Phi(2^{-k} t) = J_i^k W_i^T \Phi(t) \qquad (k \in \mathbb{N})                (2.2)

for -1 \leq t \leq 1.
Theorem 2.1 If \Phi is a nonzero, continuous L_1-solution of (1.6), then we have, for -1 \leq t \leq 1,

    W_0^T \Phi(t) = 0, \qquad W_3^T \Phi(t) = 0, \qquad W_2^T \Phi(0) = 0,                (2.3)

as well as

    W_1^T \Phi(t) = W_1^T \Phi(0).                (2.4)
Proof: The convergence of the left-hand side of (2.2) for k \to \infty and the divergence of J_3^k in all elements of the main diagonal imply the relation W_3^T \Phi(t) = 0.
If A has root vectors of height at most m corresponding to the eigenvalue 0, then J_0^m is a zero matrix, and (2.2) implies that W_0^T \Phi(t) = 0 for -2^{-m} \leq t \leq 2^{-m}. For m = 1, we obtain our assertion. In the case m > 1, we use the following argument: Considering the infinite matrix (c_{2i-j})_{i,j \geq 0}, it can be shown that each root vector of A corresponding to the eigenvalue 0 can be extended to a root vector of the infinite matrix corresponding to 0. The assertion W_0^T \Phi(t) = 0 then follows for all t \in [-1, 1] from [2].
Further, since J_1 = I, we get from (2.2), for i = 1,

    W_1^T \Phi(0) = W_1^T \Phi(t).

Finally, for i = 2, letting k \to \infty in Formula (2.2) yields W_2^T \Phi(0) = 0.
Corollary 2.2 If \Phi is a nonzero, piecewise continuous L_1-solution of (1.6), then for -1 \leq t \leq 1 we have (2.3) and, instead of (2.4),

    W_1^T \Phi(t) = W_1^T \Phi(+0) for t > 0, \qquad W_1^T \Phi(t) = W_1^T \Phi(-0) for t < 0,                (2.5)

where both \Phi(+0) and \Phi(-0) are right eigenvectors of A corresponding to the eigenvalue 1.
This assertion can be shown in the same manner as Theorem 2.1.
Example 2.3 Consider (1.1) with c_0 = c_1 = 1. Then, for arbitrary a \in \mathbb{R},

    \varphi(t) = \begin{cases} a, & t = 0, \\ 1, & t \in (0, 1), \\ 1 - a, & t = 1, \\ 0, & \text{otherwise}, \end{cases}

is a piecewise continuous solution of (1.1). The eigenvectors of A = I mentioned in Corollary 2.2 are \Phi(+0) = (1, 0)^T and \Phi(-0) = (0, 1)^T.
Applying Theorem 2.1, we have (for continuous solutions) from (1.6)

    \Phi\!\left(\frac{t}{2}\right) = A \Phi(t) = \sum_{i=0}^{3} U_i J_i W_i^T \Phi(t) = U_1 W_1^T \Phi(0) + U_2 J_2 W_2^T \Phi(t)                (2.6)

and, in particular for t = 0,

    \Phi(0) = U_1 W_1^T \Phi(0).                (2.7)

Hence,

    \Phi\!\left(\frac{t}{2}\right) = \Phi(0) + U_2 J_2 W_2^T \Phi(t).                (2.8)
A lot of papers dealing with two-scale difference equations (cf. e.g. [7, 12]) are especially interested in nonzero continuous solutions of (1.1) with linearly independent integer translates \varphi(t+l) (l \in \mathbb{Z}). Here, we say that the \varphi(t+l) are linearly independent if, for all finite linear combinations, \sum_l a_l \varphi(t+l) = 0 implies that a_l = 0 for all l. The linear independence (or at least L_2-stability (see e.g. [11])) of \varphi(t+l) (l \in \mathbb{Z}) is crucial for applications of \varphi as a scaling function in wavelet theory as well as for the convergence of the corresponding subdivision scheme (cf. [11]). In this case we have:

Lemma 2.4 Let (1.1) possess a nonzero continuous L_1-solution \varphi with linearly independent integer translates. Then both J_3 and J_0 are empty.

Proof: If \varphi satisfies the assumptions of the lemma, then the vectors \Phi(t) (t \in [-1, 1]) span the whole \mathbb{R}^{n+1}, i.e., we have

    \mathrm{span}\{\Phi(t) : t \in [-1, 1]\} = \mathbb{R}^{n+1}.

If W_0^T and W_3^T were not empty, this would be a contradiction to (2.3).
We recall that the condition of linear independence for \varphi is equivalent to the condition that the Casorati determinant

    \det \begin{pmatrix}
    \varphi(t_1) & \varphi(t_2) & \cdots & \varphi(t_n) \\
    \varphi(t_1 + 1) & \varphi(t_2 + 1) & \cdots & \varphi(t_n + 1) \\
    \vdots & \vdots & & \vdots \\
    \varphi(t_1 + n - 1) & \varphi(t_2 + n - 1) & \cdots & \varphi(t_n + n - 1)
    \end{pmatrix}

does not vanish for any t_k \in [0, 1] (k = 1, \ldots, n) with t_k \neq t_j for k \neq j (cf. [18]). This is satisfied if and only if the two conditions

(a) P(z) has no symmetric zeros in \mathbb{C} \setminus \{0\} (i.e., if z_0 \neq 0 is a zero, then -z_0 is not a zero);

(b) for any odd integer m > 1 and any primitive m-th root of unity \omega, there exists an integer d \geq 0 such that P(-\omega^{2^d}) \neq 0;

are satisfied (cf. [11, 12]).
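Condition (a) can be tested mechanically from the mask coefficients. The following sketch (an illustration, not an algorithm from the text) computes the zeros of P(z) numerically and looks for a symmetric pair z_0, -z_0; the two test masks are assumptions chosen for the demonstration.

```python
import numpy as np

def has_symmetric_zeros(coeffs, tol=1e-8):
    """Check condition (a): does P(z), given by coefficients in order of
    decreasing degree, possess zeros z0 != 0 and -z0 simultaneously?"""
    roots = np.roots(coeffs)
    roots = roots[abs(roots) > tol]          # ignore zeros at the origin
    return any(min(abs(r + roots)) < tol for r in roots)

# Hat-function mask P(z) = (1+z)^2/4: zeros {-1, -1}, no symmetric pair
assert not has_symmetric_zeros([0.25, 0.5, 0.25])
# Mask P(z) = (1+z^2)/2: zeros {i, -i} are symmetric, so the integer
# translates of the corresponding solution are linearly dependent
assert has_symmetric_zeros([0.5, 0.0, 0.5])
print("condition (a) checks passed")
```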
Computation of \Phi(0). For the application of the vector cascade algorithm (see the Introduction), we first need to compute \Phi(0). If the eigenvalue 1 of A is simple (as e.g. assumed in [7]), then u := U_1 is a vector (of length n+1), and \Phi(0) is necessarily determined by u up to normalization. But how should the initial vector \Phi(0) be chosen if \dim J_1 > 1? Introducing the vector

    x := W_1^T \Phi(0),                (2.9)

we find from (2.7) that \Phi(0) = U_1 x. Using (2.4), it follows that x = W_1^T \Phi(t), and by \Phi(1) = V \Phi(0) and \Phi(-1) = V^T \Phi(0), we obtain

    x = W_1^T V^T U_1 x = W_1^T V U_1 x.                (2.10)

A similar consideration can be carried out with respect to (2.3), yielding

    W_i^T V U_1 x = W_i^T V^T U_1 x = 0                (2.11)

for i = 0 and i = 3. Hence we have

Proposition 2.5 If (2.10)-(2.11) has only the trivial solution x, then (1.1) possesses only the trivial solution \varphi(t) \equiv 0 as continuous solution. If (1.1) possesses a nonzero continuous solution, then x in (2.9) satisfies (2.10) and (2.11).

The importance of the vector x lies in the fact that the initial vector \Phi(0) = U_1 x needed for the numerical computation of \Phi(t) is known if we know x. Computational observations lead us to the following

Conjecture. If a nonzero continuous solution of (1.1) exists, then x is already uniquely determined by the first equation in (2.10).
Examples 2.6 (i) Let (1.1) be given with the coefficients c_0 = c_6 = 1/2, c_3 = 1, c_1 = c_2 = c_4 = c_5 = 0. Then the corresponding coefficient matrix A possesses the double eigenvalue 1 with

    W_1^T = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}, \qquad
    U_1 = \frac{1}{6} \begin{pmatrix} 0 & 1 & 2 & 0 & 2 & 1 & 0 \\ 0 & -1 & -2 & 6 & -2 & -1 & 0 \end{pmatrix}^T.

Using (2.10), we obtain that

    x = \begin{pmatrix} 1 & 0 \\ 1/2 & -1/2 \end{pmatrix} x,

and hence x = (3, 1)^T up to normalization. By Proposition 2.5, only this choice of x can lead to a continuous solution of (1.1). The starting vector \Phi(0) is now given by \Phi(0) = (0, 1/3, 2/3, 1, 2/3, 1/3, 0)^T up to normalization. The corresponding solution is

    \varphi(t) = \begin{cases} t/3, & 0 \leq t \leq 3, \\ (6-t)/3, & 3 < t \leq 6, \\ 0, & \text{otherwise}. \end{cases}
(ii) Consider the two-scale difference equation corresponding to the refinement mask

    P(z) = \frac{1}{4} (z^3 + 1)^2 (z^5 + 1)^2 = \frac{1}{4} (z^{16} + 2z^{13} + 2z^{11} + z^{10} + 4z^8 + z^6 + 2z^5 + 2z^3 + 1).

Then the matrix A possesses the eigenvalue 1 with multiplicity 3, and we find

    W_1^T = \begin{pmatrix}
    \frac{1}{180}(20, 14, 17, 8, -1, 8, -4, -7, 8, -7, -4, 8, -1, 8, 17, 14, 20) \\
    \frac{1}{180}(30, 30, -60, 30, 30, -60, 30, 30, -60, 30, 30, -60, 30, 30, -60, 30, 30) \\
    \frac{1}{225}(0, 0, 45, -90, 45, 0, 0, 45, -90, 45, 0, 0, 45, -90, 45, 0, 0)
    \end{pmatrix},

    U_1 = \begin{pmatrix}
    0 & 1 & 2 & 3 & 2 & 1 & -2 & -4 & -6 & -4 & -2 & 1 & 2 & 3 & 2 & 1 & 0 \\
    0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -2 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0
    \end{pmatrix}^T.

The matrix W_1^T contains a 3-periodic vector in the second and a 5-periodic vector in the third line, whereas the first line is a linear combination of the other two and the eigenvector with the quadratic entries 3(j-8)^2 - 17 for j = 0, \ldots, 16. Equation (2.10) leads to

    x = \frac{1}{60} \begin{pmatrix} 60 & 5 & 3 \\ 0 & -30 & 0 \\ 0 & 0 & -15 \end{pmatrix} x,

and we find x = (1, 0, 0)^T. The starting vector \Phi(0) is then given by \Phi(0) = (0, 1, 2, 3, 2, 1, -2, -4, -6, -4, -2, 1, 2, 3, 2, 1, 0)^T up to normalization. As a solution, we obtain

    \varphi(t) = \begin{cases}
    0, & t \leq 0, \\
    t, & t \in [0, 3], \\
    6 - t, & t \in [3, 5], \\
    16 - 3t, & t \in [5, 6], \\
    10 - 2t, & t \in [6, 8], \\
    \varphi(16 - t), & t \geq 8.
    \end{cases}
3. REFINABLE STEP FUNCTIONS
In this section, we want to deal with the case of refinable step functions. The results obtained here can simply be carried over to piecewise polynomial solutions by integration and to integrable solutions by convolution. Observe that the case of refinable spline functions was also studied in [14]. However, this section has another intention. We show that for each compactly supported integrable solution \varphi of (1.1) the corresponding refinement mask P(z) contains a polynomial factor which itself can be seen as the refinement mask of a step function. In the second part of this section we give a general method for the construction of refinement masks corresponding to step functions.
First let us recall the following result (see [2], Theorems 3.3 and 3.4):

Theorem 3.1 ([2]) Assume that (1.1) with P(1) = 1, c_0 \neq 1, and c_n \neq 1 possesses a nonzero, Lebesgue-integrable, compactly supported solution \varphi. Then P(z) has a polynomial factor p(z) of the form

    p(z) = \frac{q(z^2)}{2 q(z)}.                (3.1)

Here q(z) is a polynomial of the same degree as p(z) and possesses a set S of zeros with the following property: S contains roots of unity with powers of 2 as root exponents, and it is closed with respect to the operation z \to z^2 (i.e., for z \in S it follows that z^2 \in S). Moreover, denoting the zero set of p(z) by R and introducing the set \tilde{S} of all square roots of elements of S,

    \tilde{S} := \{z : z^2 \in S\},

we have the relations

    R = \tilde{S} \setminus S, \qquad S = \{z^{2^j} : j \in \mathbb{N}, \ z \in R\}.
We want to show that the factor p(z) in (3.1) can be considered as a refinement mask of a special compactly supported step function. Observe that for nonzero step functions, c_0 = c_n = 1 is satisfied, since for t \to +0, (1.1) yields \varphi(+0) = c_0 \varphi(+0), for t \to n-0 we find \varphi(n-0) = c_n \varphi(n-0), and \varphi(+0) \neq 0, \varphi(n-0) \neq 0 must be satisfied. The values of \varphi(t) at integers t can be different from the one-sided limits \varphi(t+0) and \varphi(t-0) (see Example 2.3). Let

    \varphi(t) = \sum_{\nu=0}^{k} b_\nu \, \chi(t - \nu) \qquad (t \in \mathbb{R})                (3.2)

with b_0 b_k \neq 0 be a step function, where \chi(t) is the characteristic function of the interval [0, 1), i.e.,

    \chi(t) = \begin{cases} 1, & t \in [0, 1), \\ 0, & t \notin [0, 1). \end{cases}

Then we have:

Theorem 3.2 If \varphi of the form (3.2) is refinable, then the corresponding refinement mask P(z) is of the form 2P(z) = q(z^2)/q(z), where q(z) := (z - 1) \left( \sum_{\nu=0}^{k} b_\nu z^\nu \right) with the coefficients b_\nu as in (3.2). Moreover, the zero set \{z \in \mathbb{C} : q(z) = 0\} is closed with respect to the operation z \to z^2.
Proof: Observe that \hat{\chi}(u) = (1 - e^{-iu})/(iu). Hence,

    \hat{\chi}(2u) = \frac{1 + e^{-iu}}{2} \, \hat{\chi}(u),

i.e., \chi is refinable with the mask (1 + z)/2. Let

    \tilde{q}(z) = \sum_{\nu=0}^{k} b_\nu z^\nu,

such that q(z) = (z - 1) \tilde{q}(z). By Fourier transform of (3.2), we obtain

    \hat{\varphi}(u) = \tilde{q}(e^{-iu}) \, \frac{1 - e^{-iu}}{iu}.

Since \varphi is refinable, there is a corresponding refinement mask P(z) with \hat{\varphi}(2u) = P(e^{-iu}) \hat{\varphi}(u). Hence

    \tilde{q}(e^{-2iu}) \, \frac{1 - e^{-2iu}}{2iu} = P(e^{-iu}) \, \tilde{q}(e^{-iu}) \, \frac{1 - e^{-iu}}{iu}.

Putting z := e^{-iu} and remembering that q(z) := \tilde{q}(z)(z - 1), we find

    P(z) = \frac{1}{2} \, \frac{q(z^2)}{q(z)}.                (3.3)

Now, let q(z) = \prod_{\nu=0}^{k} (z - \lambda_\nu), where \lambda_0 = 1 since q(1) = 0. Then,

    2 P(z) = \frac{\prod_{\nu=0}^{k} (z - \sqrt{\lambda_\nu})(z + \sqrt{\lambda_\nu})}{\prod_{\nu=0}^{k} (z - \lambda_\nu)}.

Hence, for each \mu \in \{0, \ldots, k\}, there are \epsilon_\mu \in \{0, 1\} and \nu_\mu \in \{0, \ldots, k\} such that \lambda_\mu = (-1)^{\epsilon_\mu} \sqrt{\lambda_{\nu_\mu}}, that means \lambda_\mu^2 = \lambda_{\nu_\mu}. It follows that the set \{\lambda_0, \ldots, \lambda_k\} must be closed with respect to the operation z \to z^2, i.e.,

    \{\lambda_0, \ldots, \lambda_k\} = \{\lambda_\nu^{2^j} : j \in \mathbb{N}_0, \ \nu = 0, \ldots, k\}.
Remarks: 1. For piecewise polynomial refinable functions \varphi, similar relations for the corresponding refinement mask P(z) as in Theorem 3.2 can be observed (see [14]).
2. The zero set \{\lambda_\nu : \nu = 0, \ldots, k\} of q(z) in Theorem 3.2 has the following property: For each \nu, there exist integers m \geq 0 and l \geq 1 such that \lambda_\nu^{2^m} = \lambda_\nu^{2^{m+l}}. So \lambda_\nu^{2^m} is a (2^l - 1)-st root of unity and \lambda_\nu a (2^m (2^l - 1))-th root of unity. In view of Euler's theorem, we have 2^{\varphi(\kappa)} \equiv 1 \pmod{\kappa} for every odd \kappa, where here \varphi(\kappa) denotes the well-known Euler function (cf. e.g. [21]). Since every integer can be represented as 2^m \kappa with odd \kappa, it follows that roots of unity of arbitrary order 2^m \kappa can appear, with l at most equal to \varphi(\kappa).
3. Since only the quotient q(z^2)/q(z) is needed in Theorem 3.2, we can assume without loss of generality that \tilde{q}(1) = 1 as far as \tilde{q}(1) \neq 0. If indeed \tilde{q}(1) = 0, then we can find a factorization \tilde{q}(z) = (z - 1)^l \tilde{r}(z) with some l \in \mathbb{N} and with \tilde{r}(1) \neq 0 (and without loss of generality \tilde{r}(1) = 1). In this case, it follows for the refinement mask P(z) = q(z^2)/(2q(z)) that P(1) = 2^l.
How can one construct a polynomial q(z) such that 2P(z) = q(z^2)/q(z) is a polynomial? The zeros of such a q(z) can be obtained by the following procedure: We choose an arbitrary finite set I of positive rationals p_\nu/q_\nu containing 1 and consider

    C_I := \{e^{2\pi i p_\nu/q_\nu} : p_\nu/q_\nu \in I\}.

Then we form the closure \bar{C}_I of C_I such that

    \bar{C}_I := \{z^{2^j} : z \in C_I, \ j \in \mathbb{N}_0\} = C_{\bar{I}},

where \bar{I} contains all rationals p_\nu/q_\nu with e^{2\pi i p_\nu/q_\nu} \in \bar{C}_I. Note that \bar{C}_I is finite as well, since (e^{2\pi i p_\nu 2^j/q_\nu})_{j \in \mathbb{N}_0} is a sequence with at most q_\nu different entries. The polynomial with the zero set \bar{C}_I is the desired q(z), and by (3.3), we can compute the refinement mask P(z).
Example 3.3 Choose I := \{1, 1/4, 1/14\}, such that

    C_I = \{e^{2\pi i}, e^{2\pi i/4}, e^{2\pi i/14}\}.

Forming the closure \bar{C}_I, we find

    \bar{I} = \left\{ 1, \ \frac{1}{4}, \ \frac{1}{2}, \ \frac{1}{14}, \ \frac{1}{7}, \ \frac{2}{7}, \ \frac{4}{7} \right\},

so that the corresponding polynomial q(z) has degree 7. Hence, the zeros of the corresponding refinement mask P(z) = q(z^2)/(2q(z)) are given by the set of rationals

    K = \left\{ \frac{1}{8}, \ \frac{5}{8}, \ \frac{3}{4}, \ \frac{1}{28}, \ \frac{15}{28}, \ \frac{9}{14}, \ \frac{11}{14} \right\}.

These rationals are the numbers below the rationals of \bar{I} in the directed graphs in Figure 1, which represent squaring in the direction of the arrows.
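The closure procedure and Example 3.3 can be reproduced directly with exact rational arithmetic, representing each root of unity e^{2\pi i s} by its angle s modulo 1 (so that squaring becomes doubling mod 1, and 1 is represented by 0):

```python
from fractions import Fraction

def closure(I):
    """Close a set of angles (fractions of a full turn, mod 1) under
    doubling, which corresponds to the operation z -> z^2 on e^{2 pi i s}."""
    S = set(Fraction(x) % 1 for x in I)
    changed = True
    while changed:
        changed = False
        for s in list(S):
            d = (2 * s) % 1
            if d not in S:
                S.add(d)
                changed = True
    return S

def mask_zeros(S):
    """The set R = S~ \\ S of Theorem 3.1: square roots of elements of S
    (angles s/2 and s/2 + 1/2) that are not themselves in S."""
    roots = set()
    for s in S:
        roots.update({s / 2, (s / 2 + Fraction(1, 2)) % 1})
    return roots - S

I = [Fraction(1), Fraction(1, 4), Fraction(1, 14)]
S = closure(I)          # the closure I-bar of Example 3.3; deg q = len(S)
K = mask_zeros(S)       # angles of the zeros of P(z) = q(z^2)/(2 q(z))
print(sorted(S))
print(sorted(K))
```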
4. POLYNOMIAL SOLUTIONS
We assume now that (1.1) has a compactly supported, m-times continuously differentiable solution, i.e., \varphi \in C^m for some m \geq 1. Then A has the eigenvalues 2^{-\mu} (\mu = 0, \ldots, m) (cf. [7, 15]). Let w_\mu^T, u_\mu be corresponding left and right eigenvectors of A, respectively (in the case of simple eigenvalues). By (1.6), we find \Phi^{(\mu)}(t/2) = 2^\mu A \Phi^{(\mu)}(t) (\mu = 0, \ldots, m). Applying the second equality of (2.3) to \Phi^{(\nu)}(t) (instead of \Phi(t)), it follows for 0 \leq \mu < \nu \leq m that

    w_\mu^T \Phi^{(\nu)}(t) = 0 \qquad (t \in [-1, 1]).                (4.1)

Further, (2.4) yields

    w_\mu^T \Phi^{(\mu)}(t) = w_\mu^T \Phi^{(\mu)}(0) \qquad (t \in [-1, 1]).                (4.2)

Since, as before (see Theorem 2.1), w^T \Phi^{(\mu)}(0) = 0 for all row vectors w^T of W with w \neq w_\mu, we obtain \Phi^{(\mu)}(0) = u_\mu w_\mu^T \Phi^{(\mu)}(0), analogously as in (2.7). Moreover, by integration of (4.2), it follows that

    w_\mu^T \Phi^{(\mu-\nu)}(t) = \frac{t^\nu}{\nu!} \, w_\mu^T \Phi^{(\mu)}(0)

for \nu = 0, \ldots, \mu \leq m. In particular,

    w_\mu^T \Phi(t) = \frac{t^\mu}{\mu!} \, w_\mu^T \Phi^{(\mu)}(0).                (4.3)

If A also possesses the eigenvalue 2^{-m-1}, and if \varphi^{(m+1)}(t) is piecewise continuous, then we also have

    w_{m+1}^T \Phi(t) = \frac{t^{m+1}}{(m+1)!} \, w_{m+1}^T \Phi^{(m+1)}(+0)                (4.4)
for t > 0.

Figure 1: Directed graphs of the rationals p_\nu/q_\nu, in which the arrows represent the squaring operation; the zeros of the refinement mask of Example 3.3 are the numbers below the rationals of \bar{I}.

The equations (4.3) and (4.4) can be used for a further simplification of (2.8). If we have n linearly independent relations of this kind, then \varphi(t) is uniquely determined by them, and \varphi(t) is a polynomial spline.
Remarks: 1. In the case that 2^{-\mu} is a multiple eigenvalue, it is possible that w_\mu^T \Phi^{(\mu)}(0) = 0 for a certain w_\mu. This is a new explanation for the fact that also eigenvectors w of A corresponding to eigenvalues \lambda \neq 0 with |\lambda| < 1 can satisfy w^T \Phi(t) = 0 (see the first remarks in Section 2 of [2]).
2. According to Theorem 2.1, equations of the form (4.1) are also valid for left eigenvectors w^T corresponding to all eigenvalues \lambda \neq 2^{-\mu} with |\lambda| \geq 2^{-\mu} and \mu \leq m. By integration, we obtain w^T \Phi(t) = 0, since, by (2.3), w^T is orthogonal to all vectors \Phi^{(\nu)}(0) with 0 \leq \nu \leq \mu.
Example 4.1 For the refinement mask P(z) = \frac{1}{8}(1+z)^2(z^2+1) = \frac{1}{8}(z^4 + 2z^3 + 2z^2 + 2z + 1), we obtain

    A = \frac{1}{4} \begin{pmatrix}
    1 & 0 & 0 & 0 & 0 \\
    2 & 2 & 1 & 0 & 0 \\
    1 & 2 & 2 & 2 & 1 \\
    0 & 0 & 1 & 2 & 2 \\
    0 & 0 & 0 & 0 & 1
    \end{pmatrix}
    = U \begin{pmatrix}
    1 & 0 & 0 & 0 & 0 \\
    0 & 1/2 & 0 & 0 & 0 \\
    0 & 0 & 1/4 & 0 & 0 \\
    0 & 0 & 0 & 1/4 & 0 \\
    0 & 0 & 0 & 0 & 0
    \end{pmatrix} W^T

with

    W^T = \begin{pmatrix}
    1 & 1 & 1 & 1 & 1 \\
    1 & 1/2 & 0 & -1/2 & -1 \\
    1 & 0 & 0 & 0 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    1 & -1 & 1 & -1 & 1
    \end{pmatrix}, \qquad
    U = \frac{1}{4} \begin{pmatrix}
    0 & 0 & 4 & 0 & 0 \\
    1 & 4 & -4 & 4 & -1 \\
    2 & 0 & -4 & -4 & 2 \\
    1 & -4 & 4 & -4 & -1 \\
    0 & 0 & 0 & 4 & 0
    \end{pmatrix}.

Let W = (\omega_1, \omega_2, \omega_3, \omega_4, \omega_5) and U = (\upsilon_1, \upsilon_2, \upsilon_3, \upsilon_4, \upsilon_5), where \omega_\nu^T is the left eigenvector and \upsilon_\nu the right eigenvector of A corresponding to the eigenvalue \lambda_\nu (\nu = 1, \ldots, 5) with \lambda_1 = 1, \lambda_2 = 1/2, \lambda_3 = \lambda_4 = 1/4, \lambda_5 = 0. Assume that (1.1) has a nonzero, compactly supported, continuous solution with a piecewise continuous second derivative \varphi''(t). Let \Phi(t) be the vector (1.5), so that \Phi(t/2) = A \Phi(t) for t \in [-1, 1]. Consider \Phi''(t), satisfying \Phi''(t/2) = 4 A \Phi''(t) for t \in [0, 1] almost everywhere. Then \Phi''(+0) is a right eigenvector of A corresponding to the eigenvalue 1/4. Hence, \Phi''(+0) = C_0 \upsilon_3 = C_0 (1, -1, -1, 1, 0)^T with some constant C_0, since \varphi''(4+0) = 0, so that the other right eigenvector \upsilon_4 of A to the eigenvalue 1/4 falls out. Formulas (4.1) and (2.5) imply that \omega_1^T \Phi''(t) = \omega_2^T \Phi''(t) = 0 and \omega_3^T \Phi''(t) = \omega_3^T \Phi''(+0) = C_0, as well as \omega_4^T \Phi''(t) = \omega_4^T \Phi''(+0) = 0. Finally, the first equality in (2.3) yields \omega_5^T \Phi''(t) = 0. Hence, we have

    W^T \Phi''(t) = C_0 (0, 0, 1, 0, 0)^T,

leading to \Phi''(t) = C_0 U (0, 0, 1, 0, 0)^T = C_0 (1, -1, -1, 1, 0)^T. Further, \Phi'(0) and \Phi(0) are right eigenvectors of A corresponding to \lambda_2 = 1/2 and \lambda_1 = 1, respectively, i.e., \Phi'(0) = C_1 \upsilon_2 = C_1 (0, 1, 0, -1, 0)^T and \Phi(0) = 4 C_2 \upsilon_1 = C_2 (0, 1, 2, 1, 0)^T with some constants C_1, C_2. Thus, integration of \Phi''(t) leads to \Phi'(t) = (C_0 t, \ C_1 - C_0 t, \ -C_0 t, \ -C_1 + C_0 t, \ 0)^T and

    \Phi(t) = (C_0 t^2/2, \ C_2 + C_1 t - C_0 t^2/2, \ 2C_2 - C_0 t^2/2, \ C_2 - C_1 t + C_0 t^2/2, \ 0)^T.

Finally, the continuity of \varphi implies that C_0 = C_1 = 2C_2, such that \Phi(t) = C_2 (t^2, \ 1 + 2t - t^2, \ 2 - t^2, \ 1 - 2t + t^2, \ 0)^T. It can easily be checked that \Phi(t) really satisfies (1.6). Of course, there are simpler methods to calculate \Phi.
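The final check that \Phi(t) satisfies (1.6) can be done mechanically; the sketch below verifies \Phi(t/2) = A \Phi(t) on a grid in [0, 1] for the matrix and the vector of Example 4.1 (with C_2 = 1).

```python
import numpy as np

# Coefficient matrix of Example 4.1: c = (1, 2, 2, 2, 1)/4
A = 0.25 * np.array([[1, 0, 0, 0, 0],
                     [2, 2, 1, 0, 0],
                     [1, 2, 2, 2, 1],
                     [0, 0, 1, 2, 2],
                     [0, 0, 0, 0, 1]], dtype=float)

def Phi(t, C2=1.0):
    """Phi(t) = C2 (t^2, 1 + 2t - t^2, 2 - t^2, 1 - 2t + t^2, 0)^T."""
    return C2 * np.array([t**2, 1 + 2*t - t**2, 2 - t**2, 1 - 2*t + t**2, 0.0])

# The vector refinement equation (1.6): Phi(t/2) = A Phi(t) on [0, 1]
for t in np.linspace(0.0, 1.0, 21):
    assert np.allclose(Phi(t / 2), A @ Phi(t))
print("Phi(t/2) = A Phi(t) verified on [0, 1]")
```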
Polynomial solutions on the whole line \mathbb{R} are considered in Section 6.

Interval of constancy. There exist refinable functions which are polynomials in a certain interval but not in other intervals. We shall show this for the case of constant polynomials: Let \varphi be a nonzero, Lebesgue-integrable and compactly supported refinable function with the refinement mask P(z) = Q(z)(z+1), Q(1) = 1/2, which is normalized by \hat{\varphi}(0) = 1. Further, let \tilde{\varphi} be the refinable function with the mask \tilde{P}(z) = Q(z)(z^m + 1), m \geq 2, so that

    \tilde{\varphi}(t) = \sum_{\nu=0}^{m-1} \varphi(t - \nu)

according to Theorem 3.9 from [2] with l = 1. Obviously, Q(z) is a polynomial of degree n-1, \mathrm{supp} \, \varphi = [0, n], and we have \sum_{\nu=-\infty}^{\infty} \varphi(t - \nu) = \hat{\varphi}(0) = 1, hence \sum_{\nu=0}^{n-1} \varphi(t - \nu) = 1 for n-1 \leq t \leq n (see Theorem 2.1 in [2]). Hence, if n \leq m, then it follows that \sum_{\nu=0}^{m-1} \varphi(t - \nu) = 1 for n-1 \leq t \leq m, i.e., \tilde{\varphi}(t) = 1 for n-1 \leq t \leq m.
For example, let Q(z) := (2z+1)/6 and m = 2, and consider the compactly supported solutions \varphi(t) and \tilde{\varphi}(t) corresponding to the refinement masks P(z) = \frac{1}{6}(2z+1)(z+1) and \tilde{P}(z) = \frac{1}{6}(2z+1)(z^2+1). Then \mathrm{supp} \, \varphi = [0, 2], \mathrm{supp} \, \tilde{\varphi} = [0, 3] and, in particular, \tilde{\varphi}(t) = 1 for 1 \leq t \leq 2 (see Figure 2).
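The constancy interval can be checked numerically with the cascade recursion (1.10): for the mask P(z) = (2z+1)(z+1)/6, the sum of the components of \tilde{\Phi}(t) = (\varphi(t), \varphi(t+1))^T is exactly the value of the function with mask (2z+1)(z^2+1)/6 at t+1, and it stays equal to 1.

```python
import numpy as np

# Mask P(z) = (2z+1)(z+1)/6 of the example, i.e. c = (1/3, 1, 2/3), n = 2
c = np.array([1/3, 1.0, 2/3])
A0 = np.array([[c[0], 0.0], [c[2], c[1]]])   # (c_{2i-j-1})
A1 = np.array([[c[1], c[0]], [0.0, c[2]]])   # (c_{2i-j})

# Cascade (1.10), starting from the eigenvectors Phi~(0) = (phi(0), phi(1))
# = (0, 1) of A0 and Phi~(1) = (phi(1), phi(2)) = (1, 0) of A1
values = {0.0: np.array([0.0, 1.0]), 1.0: np.array([1.0, 0.0])}
for _ in range(8):
    new = {}
    for t, v in values.items():
        new[t / 2] = A0 @ v
        new[(t + 1) / 2] = A1 @ v
    values.update(new)

# phi(t) + phi(t+1) = 1 for t in [0, 1] is exactly the statement that the
# solution with mask (2z+1)(z^2+1)/6 equals 1 on [1, 2]
assert all(abs(v.sum() - 1.0) < 1e-9 for v in values.values())
print("constancy on [1, 2] confirmed at", len(values), "dyadic points")
```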
An integral equation. Two-scale difference equations also appear as approximations of certain integral equations. Let us consider the following problem:

    f\!\left(\frac{s}{2}\right) = 2 \int_{s-1}^{s} f(t) \, dt, \qquad \int_0^1 f(t) \, dt = 1, \qquad \mathrm{supp} \, f \subseteq [0, 1],                (4.5)

which was studied in [20] in differentiated form. A solution of (4.5) obviously satisfies f \in C^1(\mathbb{R}). A simple consequence of (4.5) is f(1/2) = 2. Applying the trapezoidal rule to the first integral, we find

    f\!\left(\frac{s}{2}\right) = \frac{1}{n} \left( f(s) + f(s-1) + 2 \sum_{\nu=1}^{n-1} f\!\left(s - \frac{\nu}{n}\right) \right).

Putting s = t/n and \varphi(t) = f(t/n), we obtain

    \varphi\!\left(\frac{t}{2}\right) = \frac{1}{n} \left( \varphi(t) + \varphi(t-n) + 2 \sum_{\nu=1}^{n-1} \varphi(t - \nu) \right).                (4.6)
Figure 2: Solution \tilde{\varphi}(t) of (1.1) with c_0 = c_2 = 1/3, c_1 = c_3 = 2/3.
This is a two-scale difference equation of type (1.1) with the coefficients c_0 = c_n = 1/n and c_\nu = 2/n for \nu = 1, \ldots, n-1. The corresponding refinement mask reads

    P(z) = \frac{1}{2n}(1 + z^n) + \frac{1}{n} \sum_{\nu=1}^{n-1} z^\nu = \frac{(1+z)(1-z^n)}{2n(1-z)},

and we have P(1) = 1. The symmetry property c_\nu = c_{n-\nu} implies that \varphi(t) = \varphi(n-t) (cf. the Introduction).
Observe that, for n = 4, this refinement mask coincides with that of Example 4.1. In the special case n = 2^k, we obtain

    P(z) = P_k(z) := \frac{1}{2^{k+1}} \, \frac{(1+z)(1-z^{2^k})}{1-z} = \frac{1}{2^{k+1}} (1+z)^2 (1+z^2)(1+z^4) \cdots (1+z^{2^{k-1}}).
Let \varphi_k(t) (k \geq 1) be a compactly supported solution of (1.1) with the refinement mask P_k(z). Then \mathrm{supp} \, \varphi_k = [0, 2^k]. The Fourier transform \hat{\varphi}_k can be given explicitly, since from

    \prod_{j=1}^{\infty} \frac{1 + e^{-iu 2^l/2^j}}{2} = \frac{1 - e^{-iu 2^l}}{2^l iu} \qquad (l \in \mathbb{N}_0)

it follows that

    \hat{\varphi}_k(u) = \left( \frac{1 - e^{-iu}}{iu} \right)^2 \frac{1 - e^{-2iu}}{2iu} \cdots \frac{1 - e^{-2^{k-1} iu}}{2^{k-1} iu}.

In the time domain, \varphi_k(t) can be interpreted as a convolution of (normalized) characteristic functions, namely

    \varphi_k(t) = \left( \chi_{[0,1]} \star \chi_{[0,1]} \star \tfrac{1}{2} \chi_{[0,2]} \star \cdots \star \tfrac{1}{2^{k-1}} \chi_{[0,2^{k-1}]} \right)(t).
Using the notion of the cardinal B-spline N_{k+1} of order k+1, defined by

    \hat{N}_{k+1}(u) = \left( \frac{1 - e^{-iu}}{iu} \right)^{k+1},

and the identity

    \frac{(1-z)^2 (1-z^2) \cdots (1-z^{2^{k-1}})}{(1-z)^{k+1}} = (1+z)(1+z+z^2+z^3) \cdots (1+z+\ldots+z^{2^{k-1}-1})
    = (1+z)^{k-1} (1+z^2)^{k-2} (1+z^4)^{k-3} \cdots (1+z^{2^{k-2}}),

it follows that

    \hat{\varphi}_k(u) = Q_k(e^{-iu}) \, \hat{N}_{k+1}(u)

with

    Q_k(z) = \left( \frac{1+z}{2} \right)^{k-1} \left( \frac{1+z^2}{2} \right)^{k-2} \left( \frac{1+z^4}{2} \right)^{k-3} \cdots \left( \frac{1+z^{2^{k-2}}}{2} \right) = \sum_{n=0}^{2^k - k - 1} a_n^k z^n.

Hence, with the just defined coefficients a_n^k, we have in the time domain

    \varphi_k(t) = \sum_{n=0}^{2^k - k - 1} a_n^k N_{k+1}(t - n).                (4.7)

In particular, \varphi_k \in C^{k-1}(\mathbb{R}). The functions f_k(s) = 2^k \varphi_k(2^k s) can be shown to converge to a solution of (4.5). Let us mention that (4.7) can also be considered as an example for Theorem 3.6 in [2].
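The coefficients a_n^k in (4.7) can be generated by multiplying out the factors of Q_k(z); the sketch below does this by coefficient convolution and checks the degree 2^k - k - 1 and the normalization Q_k(1) = 1.

```python
import numpy as np

def Qk(k):
    """Coefficients of Q_k(z) = ((1+z)/2)^{k-1} ((1+z^2)/2)^{k-2} ...
    ((1+z^{2^{k-2}})/2), in order of increasing degree."""
    q = np.array([1.0])
    for j in range(k - 1):                    # factor (1 + z^{2^j})/2, power k-1-j
        f = np.zeros(2**j + 1)
        f[0] = f[-1] = 0.5
        for _ in range(k - 1 - j):
            q = np.convolve(q, f)             # polynomial multiplication
    return q

for k in range(2, 7):
    a = Qk(k)
    assert len(a) - 1 == 2**k - k - 1         # deg Q_k = 2^k - k - 1
    assert abs(a.sum() - 1.0) < 1e-12         # Q_k(1) = 1
print("a^3 =", Qk(3))
```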
5. A NEW ALGORITHM
As is known, the subdivision algorithm works well only for solutions of (1.1) with linearly independent integer translates (cf. [11, 3]). Now, we want to propose a new algorithm, which is based on the fact that we can find a factorization of P(z) into polynomials

    P(z) = \tilde{P}(z) \, Q(z)                (5.1)

with \tilde{P}(1) = Q(1) = 1, where the solutions of (1.1) corresponding to \tilde{P}(z) can be explicitly computed, as e.g. in Theorem 3.1. Remember that, if (1.1) yields a solution with linearly independent (or stable) integer translates, then P(z) contains a factor of the form (1+z)^l with l \geq 1.
Assume now that P(z) factorizes as given in (5.1), and the solution \varphi_0 of (1.1) with the refinement mask \tilde{P}(z) is explicitly known. Further, let Q(z) be of the form Q(z) = \sum_{\nu=0}^{k} d_\nu z^\nu (k \geq 1). We define for m \geq 1

    \varphi_m(t) := \sum_{\nu=0}^{k} d_\nu \, \varphi_{m-1}\!\left(t - \frac{\nu}{2^m}\right)                (5.2)

or, in the Fourier domain,

    \hat{\varphi}_m(u) := Q(e^{-iu/2^m}) \, \hat{\varphi}_{m-1}(u) = \hat{\varphi}_0(u) \prod_{l=1}^{m} Q(e^{-iu/2^l}).                (5.3)
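A minimal numerical sketch of the iteration (5.2): it uses the mask of Example 4.1 factored as \tilde{P}(z) = ((1+z)/2)^2 (hat function \varphi_0) and Q(z) = (1+z^2)/2, i.e. d_0 = d_2 = 1/2. The grid resolution and the comparison with the exact solution of Example 4.1 are choices made for this illustration; note that \varphi_0 here is only piecewise differentiable, so this example lies outside the strict hypothesis of Theorem 5.1 below, but the iteration still converges for this mask.

```python
import numpy as np

J = 12                                    # grid resolution 2^-J
t = np.arange(0, 4 * 2**J + 1) / 2**J     # grid on [0, 4]
phi = np.maximum(1 - abs(t - 1), 0)       # phi_0 = hat function on [0, 2]

d = {0: 0.5, 2: 0.5}                      # Q(z) = (1 + z^2)/2
for m in range(1, J + 1):                 # phi_m(t) = sum_nu d_nu phi_{m-1}(t - nu/2^m)
    new = np.zeros_like(phi)
    for nu, dnu in d.items():
        shift = nu * 2**(J - m)           # nu/2^m in grid units
        new[shift:] += dnu * phi[:len(phi) - shift]
    phi = new

# Exact solution of Example 4.1, normalized by phi^(0) = 1 (i.e. C2 = 1/4)
exact = 0.25 * np.select(
    [t < 1, t < 2, t < 3, t >= 3],
    [t**2, 1 + 2*(t - 1) - (t - 1)**2, 2 - (t - 2)**2, (4 - t)**2])
print("max error:", abs(phi - exact).max())
```

The iteration is a pure sequence of shifted averages, so it applies unchanged even when the integer translates of the limit function are linearly dependent, which is exactly the case for this mask (condition (a) of Section 2 fails for (1+z^2)/2).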
Then we have:

Theorem 5.1 Let the refinement mask P(z) with P(1) = 1 be of the form (5.1), and let \varphi_0 \in C^r(\mathbb{R}) (r \geq 1) be a nonzero compactly supported solution of (1.1) with corresponding refinement mask \tilde{P}(z). Assume that Q(z) = \sum_{\nu=0}^{k} d_\nu z^\nu with D := \sum_{\nu=0}^{k} |d_\nu| satisfying

    2^{r-p-1} \leq D < 2^{r-p}                (5.4)

for some integer p with 0 \leq p < r. Then \varphi_m defined in (5.2) converges uniformly to a solution \varphi \in C^p(\mathbb{R}) of (1.1) with corresponding mask P(z), i.e.,

    \lim_{m \to \infty} \|\varphi_m - \varphi\|_\infty = 0.
Proof: Since $\varphi_0 \in C^r(\mathbb{R})$, it follows by (5.2) that $\varphi_m \in C^r(\mathbb{R})$ for $m = 1, 2, \ldots$. Observe that $\mathrm{supp}\,\varphi_0 = [0, n-k]$, since $\tilde{P}(z)$ has degree $n-k$. Then there are constants $M_l$ ($l = 0, \ldots, r$) such that
$$\|\varphi_0^{(l)}\|_\infty = \sup_{t\in\mathbb{R}} |\varphi_0^{(l)}(t)| \leq M_l \qquad (l = 0, \ldots, r).$$
By (5.2) and (5.4), we easily estimate $\|\varphi_m^{(l)}\|_\infty \leq D\,\|\varphi_{m-1}^{(l)}\|_\infty$, hence
$$\|\varphi_m^{(l)}\|_\infty \leq D^m M_l \qquad (5.5)$$
for all $m \geq 0$. Further, by $Q(1) = \sum_{\nu=0}^k d_\nu = 1$,
$$\varphi_m^{(l)}(t) - \varphi_{m-1}^{(l)}(t) = \sum_{\nu=0}^k d_\nu \Big(-\varphi_{m-1}^{(l)}(t) + \varphi_{m-1}^{(l)}\Big(t - \frac{\nu}{2^m}\Big)\Big),$$
such that for $l < r$
$$|\varphi_m^{(l)}(t) - \varphi_{m-1}^{(l)}(t)| \leq D \max_{0\leq\nu\leq k} \Big|\varphi_{m-1}^{(l)}(t) - \varphi_{m-1}^{(l)}\Big(t - \frac{\nu}{2^m}\Big)\Big| \leq D\,\frac{k}{2^m}\, |\varphi_{m-1}^{(l+1)}(\xi_{m-1,l+1})|$$
for some $\xi_{m-1,l+1} \in [t - \frac{k}{2^m},\, t]$. Hence, we get for $l < r$
$$|\varphi_m^{(l)}(t)| \leq |\varphi_0^{(l)}(t)| + \sum_{\mu=1}^m |\varphi_\mu^{(l)}(t) - \varphi_{\mu-1}^{(l)}(t)| \leq M_l + \sum_{\mu=1}^m D\,\frac{k}{2^\mu}\, |\varphi_{\mu-1}^{(l+1)}(\xi_{\mu-1,l+1})|. \qquad (5.6)$$
By downward induction, we show that for $l = r, r-1, \ldots, p+2$ there are constants $C_l$ not depending on $m$ such that
$$\|\varphi_m^{(l)}\|_\infty \leq C_l \Big(\frac{D}{2^{r-l}}\Big)^m \qquad (5.7)$$
is satisfied. For $l = r$, the assertion (5.7) follows by (5.5) with $C_r = M_r$. Now, supposing that (5.7) is satisfied for $l+1$ (instead of $l$) with a constant $C_{l+1}$ and $p+2 \leq l \leq r-1$, there exists a constant $C_l$ with
$$|\varphi_m^{(l)}(t)| \leq M_l + \sum_{\mu=1}^m D\,\frac{k}{2^\mu}\, \|\varphi_{\mu-1}^{(l+1)}\|_\infty \leq M_l + \sum_{\mu=1}^m D\,\frac{k}{2^\mu}\, C_{l+1} \Big(\frac{D}{2^{r-l-1}}\Big)^{\mu-1} = M_l + \frac{D k\, C_{l+1}}{2} \sum_{\mu=1}^m \Big(\frac{D}{2^{r-l}}\Big)^{\mu-1} \leq C_l \Big(\frac{D}{2^{r-l}}\Big)^m,$$
since $\frac{D}{2^{r-l}} \geq \frac{D}{2^{r-p-2}} > \frac{D}{2^{r-p-1}} \geq 1$ in view of (5.4), and (5.7) is proved. Analogously, we can show that (5.7) is also valid for $l = p+1$ if $D > 2^{r-p-1}$. We conclude in these cases that $\varphi_m^{(p)}$ is a Cauchy sequence, since by (5.6) and (5.7)
$$|\varphi_m^{(p)}(t) - \varphi_{m-1}^{(p)}(t)| \leq D\,\frac{k}{2^m}\, C_{p+1} \Big(\frac{D}{2^{r-p-1}}\Big)^{m-1} = \frac{kD}{2}\, C_{p+1} \Big(\frac{D}{2^{r-p}}\Big)^{m-1},$$
such that, by (5.4), $\lim_{m\to\infty} \|\varphi_m^{(p)} - \varphi_{m-1}^{(p)}\|_\infty = 0$. In the case $D = 2^{r-p-1}$ and $l = p+1$, we have
$$|\varphi_m^{(p+1)}(t)| \leq M_{p+1} + D\, C_{p+2}\, \frac{k}{2}\, m,$$
and hence
$$|\varphi_m^{(p)}(t) - \varphi_{m-1}^{(p)}(t)| \leq D\,\frac{k}{2^m}\, \|\varphi_{m-1}^{(p+1)}\|_\infty \leq \frac{kD}{2^m} \Big(M_{p+1} + D\, C_{p+2}\, \frac{k}{2}\, (m-1)\Big).$$
Thus, in any case, $\varphi_m^{(p)}$ converges uniformly to a function $\varphi^{(p)}$, and therefore $\varphi_m$ converges uniformly to $\varphi$. From (1.4) and (5.3), we see that the Fourier transform of $\varphi$ has the representation $\hat{\varphi}(u) = \hat{\varphi}_0(u)\, \hat{\eta}(u)$, where $\eta(t)$ is the solution of (1.1) with the refinement mask $Q(z)$, normalized by $\hat{\eta}(0) = 1$. Hence by (5.1), $\varphi(t)$ is the convolution $(\varphi_0 \star \eta)(t)$ and therefore a solution of the original two-scale difference equation (1.1) with the refinement mask $P(z)$.
Remark: In view of $Q(1) = 1$, we always have $D \geq 1$, and for $D = 1$ we have $p = r-1 \geq 0$. The assumption $\varphi_0 \in C^r$ in Theorem 5.1 can be relaxed: it suffices to assume that $\varphi_0^{(r)}(t)$ is a piecewise continuous function. Assume that $\varphi_0 \in C^{r-1}$ and that $\varphi_0^{(r)}$ is continuous up to a finite set of points $0 \leq t_1 < \ldots < t_d \leq n-k$, at which $\varphi_0^{(r)}$ can have jumps. Then we can find an $M_r$ with $|\varphi_0^{(r)}(t)| \leq M_r$ for $t \in [0, n-k]$, $t \neq t_j$ ($j = 1, \ldots, d$). Assume that $m$ satisfies $k\, 2^{-m} < \min_{2\leq j\leq d} |t_j - t_{j-1}|$. Thus, if $t$ is in the neighborhood of $t_j$, say $t - \frac{\nu}{2^m} < t_j < t$, we have
$$|\varphi_{m-1}^{(r-1)}(t) - \varphi_{m-1}^{(r-1)}(t - \tfrac{\nu}{2^m})| \leq |\varphi_{m-1}^{(r-1)}(t) - \varphi_{m-1}^{(r-1)}(t_j)| + |\varphi_{m-1}^{(r-1)}(t_j) - \varphi_{m-1}^{(r-1)}(t - \tfrac{\nu}{2^m})| \leq (t - t_j)\, |\varphi_{m-1}^{(r)}(\xi_{j,1})| + \Big(t_j + \frac{\nu}{2^m} - t\Big)\, |\varphi_{m-1}^{(r)}(\xi_{j,2})|,$$
such that
$$|\varphi_{m-1}^{(r-1)}(t) - \varphi_{m-1}^{(r-1)}(t - \tfrac{\nu}{2^m})| \leq \frac{k}{2^m}\, M_r,$$
as needed in (5.6) of the proof of Theorem 5.1.
6. SOLUTIONS WITH NONCOMPACT SUPPORT
Up to now, we have only considered solutions of (1.1) with compact support, where $2P(1) = \sum_{\alpha=0}^n c_\alpha = 2^k$ ($k \in \mathbb{N}$). Now we want to deal with the case of solutions with noncompact support and arbitrary integers $k \geq 0$. We obtain:

Theorem 6.1 If $2P(1) = 2^{-k}$ with $k \in \mathbb{N}$, $k \geq 0$, then equation (1.1) possesses a unique polynomial solution of the form
$$\varphi(t) = \sum_{\nu=0}^k x_\nu\, t^{k-\nu} \qquad (6.1)$$
for all $t \in \mathbb{R}$ with $x_0 = 1$.
Proof: Let $\varphi$ be of the form (6.1). This function is a solution of the refinement equation (1.1) if
$$\sum_{i=0}^k 2^{i-k} x_i\, t^{k-i} = \sum_{\nu=0}^k \sum_{\alpha=0}^n \sum_{j=0}^{k-\nu} c_\alpha\, x_\nu \binom{k-\nu}{j} (-\alpha)^{k-\nu-j}\, t^j.$$
Comparing the coefficients of $t^j$ with $j = k-i$, this equation is satisfied if
$$2^{i-k} x_i = \sum_{\alpha=0}^n \sum_{\nu=0}^i c_\alpha \binom{k-\nu}{k-i} (-\alpha)^{i-\nu}\, x_\nu.$$
For $i = 0$, this is an identity in view of $P(1) = 2^{-k-1}$. The remaining system can be written in the form
$$(2^i - 1)\, 2^{-k}\, x_i = \sum_{\nu=0}^{i-1} \sum_{\alpha=1}^n \binom{k-\nu}{k-i} x_\nu\, (-\alpha)^{i-\nu}\, c_\alpha. \qquad (6.2)$$
In view of $x_0 = 1$, it can be solved recursively for $i = 1, 2, \ldots, k$, so that the theorem is proved.
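The recursion (6.2) is straightforward to carry out numerically. The following sketch is our own illustration; the mask in the example ($n = 2$, $k = 2$, chosen so that $c_0 + c_1 + c_2 = 2^{-2}$) is made up, and the result is checked directly against the functional equation (1.1) at a few points.

```python
from math import comb

def polynomial_coeffs(c, k):
    """Solve (6.2) recursively: x_0 = 1 and, for i = 1, ..., k,
    (2^i - 1) 2^(-k) x_i = sum_{nu<i} sum_{alpha>=1} C(k-nu, k-i) x_nu (-alpha)^(i-nu) c_alpha,
    given the mask coefficients c = [c_0, ..., c_n] with sum(c) = 2**(-k)."""
    n = len(c) - 1
    x = [1.0]
    for i in range(1, k + 1):
        s = sum(comb(k - nu, k - i) * x[nu] * (-alpha) ** (i - nu) * c[alpha]
                for nu in range(i) for alpha in range(1, n + 1))
        x.append(2 ** k * s / (2 ** i - 1))
    return x

if __name__ == "__main__":
    # illustrative mask with n = 2, k = 2: c_0 + c_1 + c_2 = 2^(-2)
    k, c = 2, [0.1, 0.05, 0.1]
    x = polynomial_coeffs(c, k)
    p = lambda t: sum(x[v] * t ** (k - v) for v in range(k + 1))
    for t in (0.3, 1.7, -2.5):
        # p solves (1.1): p(t/2) = sum_alpha c_alpha p(t - alpha)
        assert abs(p(t / 2) - sum(c[a] * p(t - a) for a in range(3))) < 1e-12
```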
For $i = 1$, it follows that $x_1 = -k\, 2^k \sum_{\alpha=1}^n \alpha\, c_\alpha$. In the case $n = 1$, we can determine the dependence of the coefficients $x_\nu$ on $k$:
Lemma 6.2 If $n = 1$ and $2P(1) = c_0 + c_1 = 2^{-k}$ with $k \in \mathbb{N}$, $k \geq 0$, then equation (1.1) has a unique solution of the form (6.1) with $x_0 = 1$ and with
$$x_\nu = (-1)^\nu \binom{k}{\nu} \sum_{\mu=0}^\nu (2^k c_1)^\mu\, c_{\nu\mu}, \qquad (6.3)$$
where the coefficients $c_{\nu\mu}$ are determined recursively by $c_{00} = 1$, $c_{\nu 0} = 0$ for $\nu > 0$, and
$$c_{\nu,\mu+1} = \frac{1}{2^\nu - 1} \sum_{\kappa=\mu}^{\nu-1} \binom{\nu}{\kappa} c_{\kappa\mu} \qquad (6.4)$$
for $\nu > \mu$.
Proof: Inserting (6.3) into (6.2), we obtain
$$(2^i - 1) \binom{k}{i} \sum_{\mu=0}^i (2^k c_1)^\mu\, c_{i\mu} = 2^k c_1 \sum_{\nu=0}^{i-1} \binom{k-\nu}{k-i} \binom{k}{\nu} \sum_{\mu=0}^\nu (2^k c_1)^\mu\, c_{\nu\mu}.$$
In view of $\binom{k-\nu}{k-i}\binom{k}{\nu} = \binom{k}{i}\binom{i}{\nu}$, we find by comparison of the coefficients of $(2^k c_1)^{\mu+1}$ that $c_{\nu 0} = 0$ for $\nu > 0$ and
$$(2^i - 1)\, c_{i,\mu+1} = \sum_{\nu=\mu}^{i-1} \binom{i}{\nu} c_{\nu\mu},$$
i.e., (6.4). The assertion $c_{00} = 1$ follows from (6.3) and $x_0 = 1$.
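The triangular recursion (6.4) is easily tabulated; the following sketch (our own illustration) computes the $c_{\nu\mu}$ and confirms, e.g., that $c_{\nu 1} = 1/(2^\nu - 1)$.

```python
from math import comb

def c_table(nu_max):
    """Tabulate c_{nu,mu} from (6.4): c_00 = 1, c_{nu,0} = 0 for nu > 0, and
    c_{nu,mu+1} = (sum_{kappa=mu}^{nu-1} C(nu, kappa) c_{kappa,mu}) / (2^nu - 1)."""
    c = [[0.0] * (nu_max + 1) for _ in range(nu_max + 1)]
    c[0][0] = 1.0
    for nu in range(1, nu_max + 1):
        for mu in range(nu):                  # fills c[nu][mu + 1]
            c[nu][mu + 1] = sum(comb(nu, kap) * c[kap][mu]
                                for kap in range(mu, nu)) / (2 ** nu - 1)
    return c

if __name__ == "__main__":
    c = c_table(5)
    for nu in range(1, 6):
        assert abs(c[nu][1] - 1.0 / (2 ** nu - 1)) < 1e-12
```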
As special cases of (6.4), we obtain
$$c_{\nu 1} = \frac{1}{2^\nu - 1}, \qquad c_{\nu 2} = \frac{1}{2^\nu - 1} \sum_{\kappa=1}^{\nu-1} \binom{\nu}{\kappa} \frac{1}{2^\kappa - 1},$$
and the recursion formula
$$c_{\nu+1,\nu+1} = \frac{\nu+1}{2^{\nu+1} - 1}\, c_{\nu\nu}.$$
Moreover, being interested in solutions of (1.1) which vanish for $t \leq 0$ and which are polynomials for $t \geq n$, we find:
Theorem 6.3 Let $2P(1) = \sum_{\alpha=0}^n c_\alpha = 2^{-k}$ with $k \in \mathbb{N}$, $k \geq 0$, and let
$$\lambda := \sum_{\alpha=0}^n |c_\alpha| < 1 \qquad (6.5)$$
be satisfied. Then (1.1) has a unique continuous solution which vanishes identically for $t \leq 0$ and coincides with the polynomial solution (6.1) for $t \geq n$.
Proof: Let us introduce the operator $L$, defined by
$$L\varphi(t) := \begin{cases} \displaystyle\sum_{\alpha=0}^m c_\alpha\, \varphi(2t-\alpha) & \text{for } 0 \leq \dfrac{m}{2} \leq t \leq \dfrac{m+1}{2} \leq \dfrac{n}{2}, \\[2ex] \displaystyle\sum_{\alpha=0}^{m-n} c_\alpha\, p(2t-\alpha) + \sum_{\alpha=m-n+1}^n c_\alpha\, \varphi(2t-\alpha) & \text{for } \dfrac{n}{2} \leq \dfrac{m}{2} \leq t \leq \dfrac{m+1}{2} \leq n, \end{cases}$$
where $p(t)$ is the polynomial solution of (1.1) constructed in Theorem 6.1. First, we show the following: if $\varphi$ is a solution of (1.1) with $\varphi(t) = 0$ for $t \leq 0$ and $\varphi(t) = p(t)$ for $t \geq n$, then $\varphi(t) = L\varphi(t)$.

For $0 \leq \frac{m}{2} \leq t \leq \frac{m+1}{2} \leq \frac{n}{2}$, it follows that $\varphi(2t-\alpha) = 0$ for $\alpha > m$, implying
$$\varphi(t) = \sum_{\alpha=0}^n c_\alpha\, \varphi(2t-\alpha) = \sum_{\alpha=0}^m c_\alpha\, \varphi(2t-\alpha).$$
For $\frac{n}{2} \leq \frac{m}{2} \leq t \leq \frac{m+1}{2} \leq n$, we observe that $\varphi(2t-\alpha) = p(2t-\alpha)$ for $\alpha = 0, \ldots, m-n$, such that
$$\varphi(t) = \sum_{\alpha=0}^n c_\alpha\, \varphi(2t-\alpha) = \sum_{\alpha=0}^{m-n} c_\alpha\, p(2t-\alpha) + \sum_{\alpha=m-n+1}^n c_\alpha\, \varphi(2t-\alpha).$$
Hence, $\varphi(t) = L\varphi(t)$ in both cases.

Second, we see that the operator $L$ maps $C_*([0,n])$ into $C_*([0,n])$, where $C_*([0,n])$ denotes the set of continuous functions $\varphi(t)$ ($0 \leq t \leq n$) with $\varphi(0) = 0$ and $\varphi(n) = p(n)$. Taking the maximum norm, we have $\|L\varphi - L\varphi_0\| \leq \lambda\, \|\varphi - \varphi_0\|$ with $\lambda$ from (6.5) for arbitrary functions $\varphi$ and $\varphi_0$ in $C_*([0,n])$, so that $L$ is contractive. Hence, the assertion of the theorem follows from Banach's fixed point theorem.
If we consider the function
$$f(t) = \begin{cases} 0 & \text{for } 0 \leq t < \dfrac{n}{2}, \\[1.5ex] \displaystyle\sum_{\alpha=0}^{m-n} c_\alpha\, p(2t-\alpha) & \text{for } \dfrac{n}{2} \leq \dfrac{m}{2} \leq t \leq \dfrac{m+1}{2} \leq n, \end{cases}$$
and the linear operator $L_0$, defined by $L_0\varphi(t) = L\varphi(t) - f(t)$, we can write (1.1) in the form $\varphi(t) = L\varphi(t) = L_0\varphi(t) + f(t)$. The unique solution of this equation can be represented by Neumann's series
$$\varphi(t) = \sum_{m=0}^\infty L_0^m f(t),$$
though $f(t) \notin C_*([0,n])$. If the condition (6.5) is not satisfied, then we can attain it by integration of (1.1). Afterwards, the original solution can be found by differentiation of the integrated solution, but then it may be a distribution. Obviously, by $k+1$ further differentiations, one obtains a compactly supported (distributional) solution of an equation (1.1) with the usual condition $P(1) = 1$.
Example 6.4 Let us consider the case $n = 1$ with $c_0 = c_1 = 2^{-k-1}$. Then (1.1) has a solution
$$\varphi(t) = \begin{cases} 0 & \text{for } t \leq 0, \\ t^{k+1} & \text{for } 0 \leq t \leq 1, \\ t^{k+1} - (t-1)^{k+1} & \text{for } 1 \leq t, \end{cases}$$
which is normalized by $\varphi(1) = 1$. The monic polynomial solution $p(t)$ reads
$$p(t) = \frac{1}{k+1} \sum_{\nu=0}^k \binom{k+1}{\nu} (-1)^{k-\nu}\, t^\nu = \sum_{\nu=0}^k \binom{k}{\nu} \frac{(-1)^\nu}{\nu+1}\, t^{k-\nu}.$$
Comparing this result with (6.3), we obtain
$$\sum_{\mu=0}^\nu 2^{-\mu}\, c_{\nu\mu} = \frac{1}{\nu+1}.$$
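As a quick numerical check of Example 6.4 (with the illustrative value $k = 3$, our choice), both the piecewise function $\varphi$ and the monic polynomial $p$ satisfy $\varphi(t/2) = c_0\varphi(t) + c_1\varphi(t-1)$:

```python
from math import comb

k = 3
c0 = c1 = 2.0 ** (-k - 1)          # c_0 = c_1 = 2^(-k-1), so 2P(1) = 2^(-k)

def phi(t):
    """The solution of Example 6.4, normalized by phi(1) = 1."""
    if t <= 0.0:
        return 0.0
    if t <= 1.0:
        return t ** (k + 1)
    return t ** (k + 1) - (t - 1.0) ** (k + 1)

def p(t):
    """The monic polynomial solution of Example 6.4."""
    return sum(comb(k, v) * (-1) ** v / (v + 1) * t ** (k - v)
               for v in range(k + 1))

if __name__ == "__main__":
    for t in (0.2, 0.9, 1.5, 3.7, -0.4):
        assert abs(phi(t / 2) - (c0 * phi(t) + c1 * phi(t - 1))) < 1e-9
        assert abs(p(t / 2) - (c0 * p(t) + c1 * p(t - 1))) < 1e-9
```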
Numerical computation. Under the assumptions of Theorem 6.3, the continuous solution of (1.1) can be constructed numerically by an extension of the dyadic interpolation method (see e.g. [4]). For this purpose, we introduce the vectors
$$\tilde{\varphi}(t) = (\varphi(t), \ldots, \varphi(t+n-1))^T, \qquad q(t) = (p(t+n), \ldots, p(t+2n-1))^T,$$
and the matrices
$$A_0 = (c_{2i-j})_{0\leq i,j\leq n-1}, \qquad A_1 = (c_{2i-j})_{1\leq i,j\leq n},$$
which have already been used in the Introduction, and
$$B_0 = (c_{2i-j})_{0\leq i\leq n-1,\; n\leq j\leq 2n-1}, \qquad B_1 = (c_{2i-j})_{1\leq i\leq n,\; n+1\leq j\leq 2n}.$$
Then, generalizing (1.6), equation (1.1) with $0 \leq t \leq 1$ can be written in the form
$$\tilde{\varphi}\Big(\frac{t}{2}\Big) = A_0\, \tilde{\varphi}(t) + B_0\, q(t), \qquad \tilde{\varphi}\Big(\frac{t+1}{2}\Big) = A_1\, \tilde{\varphi}(t) + B_1\, q(t). \qquad (6.6)$$
According to (6.5), the matrix $A_0$ cannot have the eigenvalue 1. Hence, we can solve the first equation in (6.6) for $t = 0$ with respect to $\tilde{\varphi}(0)$ and obtain
$$\tilde{\varphi}(0) = (I - A_0)^{-1} B_0\, q(0).$$
This vector can be used as a start vector for our algorithm. Applying (6.6) successively, $\tilde{\varphi}(t)$ can be constructed at all dyadic rationals. The corresponding linear interpolatory splines converge in view of (6.5).
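The dyadic scheme (6.6) can be sketched as follows (our own illustration). As a check we use Example 6.4 with $k = 1$, i.e., $c_0 = c_1 = 1/4$ and $p(t) = t - 1/2$, for which the solution normalized as in Theorem 6.3 is $t^2/2$ on $[0, 1]$ (the normalization $\varphi(1) = 1$ of Example 6.4 differs by the factor $k+1$).

```python
import numpy as np

def dyadic_values(c, k, x, levels):
    """Construct the Theorem 6.3 solution at dyadic points of [0, 1] via (6.6).
    Returns a dict {t: (phi(t), ..., phi(t + n - 1))}; c = [c_0, ..., c_n] with
    sum(c) = 2**(-k) and sum(|c_a|) < 1, x = coefficients of p from (6.1)."""
    c = np.asarray(c, dtype=float)
    n = len(c) - 1
    cc = lambda m: c[m] if 0 <= m <= n else 0.0
    p = lambda t: sum(x[v] * t ** (k - v) for v in range(k + 1))
    q = lambda t: np.array([p(t + n + i) for i in range(n)])
    A0 = np.array([[cc(2 * i - j) for j in range(0, n)] for i in range(0, n)])
    A1 = np.array([[cc(2 * i - j) for j in range(1, n + 1)] for i in range(1, n + 1)])
    B0 = np.array([[cc(2 * i - j) for j in range(n, 2 * n)] for i in range(0, n)])
    B1 = np.array([[cc(2 * i - j) for j in range(n + 1, 2 * n + 1)] for i in range(1, n + 1)])
    # start vector: phi~(0) = (I - A0)^(-1) B0 q(0)
    vals = {0.0: np.linalg.solve(np.eye(n) - A0, B0 @ q(0.0))}
    for _ in range(levels):
        new = {}
        for t, v in vals.items():
            new[t / 2] = A0 @ v + B0 @ q(t)          # phi~(t/2)
            new[(t + 1) / 2] = A1 @ v + B1 @ q(t)    # phi~((t+1)/2)
        vals.update(new)
    return vals

if __name__ == "__main__":
    # Example 6.4 with k = 1: c_0 = c_1 = 1/4, p(t) = t - 1/2
    vals = dyadic_values([0.25, 0.25], 1, [1.0, -0.5], levels=5)
    for t, v in vals.items():
        assert abs(v[0] - t * t / 2.0) < 1e-12      # solution is t^2/2 on [0, 1]
```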
References

[1] K. Baron, A. Simon and P. Volkmann, Solutions d'une équation fonctionnelle dans l'espace des distributions tempérées, C. R. Acad. Sci. Paris 319, Série I (1994), 1249–1252.
[2] L. Berg and G. Plonka, Compactly supported solutions of two-scale difference equations, preprint, Universität Rostock, 1996.
[3] A. Cavaretta, W. Dahmen and C. A. Micchelli, Stationary Subdivision, Mem. Amer. Math. Soc. 453 (1991), 1–186.
[4] D. Colella and C. Heil, Characterizations of scaling functions, I. Continuous solutions, SIAM J. Matrix Anal. Appl. 15 (1994), 496–518.
[5] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.
[6] I. Daubechies and J. Lagarias, Two-scale difference equations: I. Global regularity of solutions, SIAM J. Math. Anal. 22 (1991), 1388–1410.
[7] I. Daubechies and J. Lagarias, Two-scale difference equations: II. Local regularity, infinite products of matrices and fractals, SIAM J. Math. Anal. 23 (1992), 1031–1079.
[8] G. Derfel and R. Schilling, Spatially chaotic configurations and functional equations with rescaling, preprint, 1995.
[9] Förg-Rob, On a problem of R. Schilling, Part I, Math. Pannonica 5 (1995), 29–65.
[10] R. Girgensohn, Nowhere differentiable solutions of a system of functional equations, Aequationes Math. 47 (1994), 89–59.
[11] R. Q. Jia, Refinable shift-invariant spaces: from splines to wavelets, in: Approximation Theory VIII (C. K. Chui and L. L. Schumaker, eds.), World Scientific Publishing Co., 1995, pp. 1–30.
[12] R. Q. Jia and J. Z. Wang, Stability and linear independence associated with wavelet decompositions, Proc. Amer. Math. Soc. 117 (1993), 1115–1124.
[13] K. S. Lau and J. Z. Wang, Characterization of Lp-solutions for the two-scale dilation equations, SIAM J. Math. Anal. 26 (1995), 1018–1046.
[14] W. Lawton, S. L. Lee and Z. Shen, Characterization of compactly supported refinable splines, Adv. Comput. Math. 3 (1995), 137–146.
[15] C. Micchelli and H. Prautzsch, Uniform refinement of curves, Linear Algebra Appl. 114/115 (1989), 841–870.
[16] J. Morawiec, On continuous solution of a problem of R. Schilling, Results Math. 27 (1995), 381–386.
[17] S. Paganoni Marzegalli, One-parameter system of functional equations, Aequationes Math. 47 (1994), 50–59.
[18] T. M. Rassias and J. Simsa, Finite Sums Decompositions in Mathematical Analysis, J. Wiley & Sons, Chichester, 1995.
[19] G. Strang, Eigenvalues of Toeplitz matrices with 1×2 blocks, Z. Angew. Math. Mech. 76 (1996), Suppl. 2, 37–39.
[20] W. Volk, Properties of subspaces generated by an infinitely often differentiable function and its translates, Z. Angew. Math. Mech. 76 (1996), Suppl. 1, 575–576.
[21] I. M. Winogradow, Elemente der Zahlentheorie, Berlin, 1955.