SELBERG POLYNOMIALS

Robert A. Mena, William R. Bridges, Eli L. Isaacson
Department of Mathematics
University of Wyoming
Laramie, WY 82071

and

Donald St. P. Richards¹
Department of Statistics
University of North Carolina
Chapel Hill, NC 27514

¹Supported by NSF grant MCS-8403381.
Table of Contents

Section 1: Introduction
Section 2: Translational Polynomials and the Differential Operators A(i,j)
Section 3: Reciprocal Translational Polynomials and the Subspace Decompositions
Section 4: Selberg Polynomials and Tactical Decompositions

Appendices
Appendix 1: Notation
Appendix 2: Homogeneous and Symmetric Polynomials
ABSTRACT

In the course of proving an important multivariate beta-type integral formula, Selberg (Norsk. Mat. Tidsskr. 26 (1944), 71-78) utilized the following properties of the discriminant polynomial p(t_1,...,t_n) = ∏_{i<j} (t_i - t_j)^2 in n real variables t_1,...,t_n: (1) p(t_1,...,t_n) is homogeneous of some (even) degree k; (2) p(t_1,...,t_n) = (t_1 t_2 ... t_n)^ℓ p(t_1^{-1},...,t_n^{-1}) for some integer ℓ; (3) p(t_1,...,t_n) is symmetric in t_1,...,t_n; and (4) p(t_1,...,t_n) = p(1-t_1,...,1-t_n). An arbitrary polynomial p(t_1,...,t_n) which satisfies (1)-(4) for compatible n, ℓ and k will be called a Selberg polynomial of type (n,ℓ,k). In this paper, we obtain a complete description of the Selberg polynomials.
Section 1: Introduction

In [8], A. Selberg evaluated an important multivariate beta-type integral involving the discriminant polynomial

    Δ(x_1,...,x_n) = ∏_{i<j} (x_i - x_j)^2.

He proved the

Theorem (1.1): Let r, s, z be complex numbers with Re r > 0, Re s > 0 and Re z > max{-1/n, -Re r/(n-1), -Re s/(n-1)}. Then

    ∫_0^1 ... ∫_0^1 Δ(x_1,...,x_n)^z ∏_{i=1}^n x_i^{r-1} (1-x_i)^{s-1} dx_i
        = ∏_{j=1}^n [Γ(r+(j-1)z) Γ(s+(j-1)z) Γ(jz+1)] / [Γ(r+s+(n+j-2)z) Γ(z+1)].

This result has been shown to include, as a limiting case, the Mehta-Dyson conjecture [5; p. 42]: if Re z > -1/n, then

    (2π)^{-n/2} ∫_{-∞}^{∞} ... ∫_{-∞}^{∞} Δ(x_1,...,x_n)^z exp(-(x_1^2+...+x_n^2)/2) dx_1 ... dx_n
        = ∏_{j=1}^n Γ(jz+1)/Γ(z+1).

Recently, Andrews [1], Askey [2], [3], Macdonald [4], Morris [6] and others have related Theorem (1.1) to such topics as basic hypergeometric series, orthogonal polynomials, the Dyson conjecture and the root systems of finite reflection groups.
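For small n, Theorem (1.1) can be checked numerically. The following sketch is our illustration, not part of the original paper; the routines dblquad and gamma are from scipy, and the test values r = s = 2, z = 1 are chosen arbitrarily. Both sides evaluate to 1/360.

```python
# Numerical sanity check of Selberg's formula (Theorem (1.1)) for n = 2.
# Assumed tools: scipy's dblquad for the double integral, scipy's gamma
# for the right-hand side; r = s = 2, z = 1 are arbitrary test values.
from scipy.integrate import dblquad
from scipy.special import gamma

n, r, s, z = 2, 2.0, 2.0, 1.0

# Integrand: Delta(x1,x2)^z * prod x_i^(r-1)(1-x_i)^(s-1), Delta = (x1-x2)^2.
def integrand(x2, x1):
    return ((x1 - x2) ** 2) ** z * (x1 * x2) ** (r - 1) * ((1 - x1) * (1 - x2)) ** (s - 1)

lhs, _ = dblquad(integrand, 0, 1, lambda x: 0, lambda x: 1)

rhs = 1.0
for j in range(1, n + 1):
    rhs *= (gamma(r + (j - 1) * z) * gamma(s + (j - 1) * z) * gamma(j * z + 1)
            / (gamma(r + s + (n + j - 2) * z) * gamma(z + 1)))

print(lhs, rhs)   # both approximately 1/360 = 0.0027778
```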
Selberg's ingenious proof of Theorem (1.1) (see also [6], [7]) utilized the following properties of the polynomial p(x) = Δ(x) (we write x = (x_1,...,x_n)):

(1.1) p(x) is homogeneous: p(tx) = t^k p(x) for some nonnegative integer k and any real number t.

(1.2) p(x) is translational: p(x+1) = p(x), where 1 = (1,1,...,1).

(1.3) p(x) is ℓ-reciprocal: p(x) = (x_1 x_2 ... x_n)^ℓ p(x^{-1}) for some nonnegative integer ℓ, where x^{-1} = (x_1^{-1},...,x_n^{-1}).

(1.4) p(x) is symmetric: p(σx) = p(x) for any permutation σ of {1,2,...,n}.

It may be easily checked that Δ(x) satisfies these four conditions with k = n(n-1) and ℓ = 2(n-1). As a natural generalization, we shall refer to any nontrivial polynomial satisfying (1.1)-(1.4) for compatible n, ℓ and k as a Selberg polynomial of type (n,ℓ,k). It is easy to check that the Selberg polynomials of type (n,ℓ,k) form a vector space; we denote it by S_k^ℓ(n).
Our aim in this paper is three-fold. We intend to (i) characterize the Selberg polynomials; (ii) determine the dimension of the vector space S_k^ℓ(n) (as well as the dimensions of some larger spaces consisting of polynomials typified by a subset of the properties (1.1)-(1.4)); and (iii) indicate how the Selberg polynomials can be constructed.
Before stating the main results, we make several remarks. First, property (1.2) is not the one used by Selberg. Instead he required

(1.2′) p(1-x) = p(x).

However, (1.2′) is more restrictive (in conjunction with (1.1)) than (1.2), as we shall show in Section 2. Further, we shall also consider polynomials which satisfy the more general reciprocal property:

    p(x) is ±ℓ-reciprocal, that is, (x_1 x_2 ... x_n)^ℓ p(x^{-1}) = ±p(x).

The corresponding vector spaces are denoted by S_k^{±ℓ}(n).

One of the main results is that, in some sense, (1.2) and (1.3) are not both needed here. Theorem (3.4) proves that, aside from certain obvious necessary conditions, homogeneous polynomials are translational if and only if they are reciprocal. This result requires some detailed machinery; we shall first show that a polynomial p(x) is translational if and only if ∂p(x) = 0, where ∂ = Σ_{i=1}^n ∂/∂x_i. Then, Theorem (3.4) follows from a detailed analysis of the action of powers of the operator ∂ on certain vector spaces of polynomials.
Our main result evaluates the dimension of S_k^ℓ(n); if par(n,ℓ,k) denotes the number of ordered partitions (Appendix 1) of k into at most n parts with no part exceeding ℓ, then we prove in Section 4 that when 2k = nℓ,

    dim S_k^ℓ(n) = par(n,ℓ,k) - par(n,ℓ,k-1),

and that S_k^ℓ(n) = (0) if 2k ≠ nℓ. To obtain this result we set up certain "tactical decompositions" of S_k^ℓ(n), which are partly motivated by the operator ∂.
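The quantity par(n,ℓ,k) is easy to tabulate by brute force. The following sketch is our illustration (the helper name par is ours): it counts the relevant partitions recursively and evaluates the dimension formula for the type (n, 2(n-1), n(n-1)) of the discriminant Δ.

```python
# par(n, l, k): number of partitions of k into at most n parts with no
# part exceeding l (the set of "ordered partitions" of Appendix 1).
from functools import lru_cache

@lru_cache(maxsize=None)
def par(n, l, k):
    if k == 0:
        return 1
    if n == 0:
        return 0
    # split on the largest part m (1 <= m <= min(l, k))
    return sum(par(n - 1, m, k - m) for m in range(1, min(l, k) + 1))

# dim S_k^l(n) = par(n,l,k) - par(n,l,k-1) when 2k = n*l; the pairs below
# are the types (n, 2(n-1)) of Delta for n = 2, 3, 4 and print 1, 1, 2.
for n, l in [(2, 2), (3, 4), (4, 6)]:
    k = n * l // 2
    print(n, l, k, par(n, l, k) - par(n, l, k - 1))
```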
We should also remark on some implications of the dimension formula. First, we clearly have dim S_k^ℓ(2) = 1, so that p(x_1,x_2) = (x_1-x_2)^2 is the only linearly independent Selberg polynomial. For n ≥ 4, however, dim S_k^ℓ(n) > 1, and we are naturally led to surmise that for increasingly large n there exists a plethora of integral formulas similar to Theorem (1.1). This is indeed the case, as is shown in [7], and we expect that in time these results will lead to hypergeometric integrals which are linearly independent of those in [2] and [3].

Finally, a word to the reader. We emphasize that the integers n, ℓ and k are fixed throughout. With this in mind, it is perhaps best if one peruses the appendices before reading Sections 3 and 4.
Section 2: Translational Polynomials and the Differential Operators A(i,j)

In this section we prove the main results concerning translational polynomials and their connection with the differential operators A(i,j). In particular, we give several characterizations of these polynomials. One of these characterizations forms the heart of an algorithm for computing a basis for the space of Selberg polynomials (see Appendix 3). In addition, we characterize the spaces T, T_k, T^ℓ, T_k^ℓ.
For completeness, we begin with the

Definition (2.1): The polynomial p in F[x] is translational if Tp = p, where T is the translation operator

    (Tp)(x) = p(x+1).

The vector space of translational polynomials is denoted by T.
The main tool that we use to characterize the translational polynomials is the

Lemma (2.2): Suppose p ∈ F[x]. Then p is translational if and only if p(x+t1) = p(x) for all t.

The condition that p be translational is slightly more general (in conjunction with homogeneity) than the condition imposed by Selberg. He required that p(1-x) = p(x). In fact, we have the

Lemma (2.1a): Suppose p is k-homogeneous.
(i) If p(1-x) = p(x), then k is even; if p(1-x) = -p(x), then k is odd. In either case, p is translational.
(ii) If p is translational, then p(1-x) = (-1)^k p(x).

Proof: Let p be k-homogeneous.
(i) If p(1-x) = ±p(x), then

    p(x+1) = p(1+x) = ±p(-x) = ±(-1)^k p(x).

Since the highest order terms of p(x+1) are the same as those of p(x), the result follows.
(ii) If p is translational, then

    p(1-x) = (-1)^k p(x-1) = (-1)^k p(x). QED

Proof of Lemma (2.2): Setting t = 1 shows that the condition is sufficient. To prove it is necessary, assume that p is translational and consider the polynomial

    q(t) := p(x+t1) - p(x).

Since p is translational, q(t) = 0 for t = 0, ±1, ±2, .... But the only polynomial with infinitely many zeros is identically zero. Hence, p(x+t1) = p(x). QED
With this Lemma we can prove our main characterization result.

Theorem (2.3): [Characterization of Translational Polynomials] Suppose p ∈ F[x]. Then the following statements are equivalent:
(i) p is translational;
(ii) p(x+t1) = p(x) for all t;
(iii) there is a polynomial q in F[x_1,...,x_{n-1}] for which p(x) = q(x_2-x_1,...,x_n-x_1);
(iv) each p_k in the homogeneous decomposition of p is translational;
(v) ∂p := (∂/∂x_1 + ... + ∂/∂x_n)p = 0.

Proof: Suppose p ∈ F[x]. Lemma (2.2) shows that (i)⇔(ii). We will show (ii)⇒(iii)⇒(iv) and (ii)⇔(v). Also, it is easy to see that (iii)⇒(i) and (iv)⇒(i).

So suppose (ii) is true. Set q(y_1,...,y_{n-1}) := p(0,y_1,...,y_{n-1}). Then

    q(x_2-x_1,...,x_n-x_1) = p(0, x_2-x_1,...,x_n-x_1) = p(x - x_1 1) = p(x).

So (ii) implies (iii).

Now suppose (iii) is true. Let the homogeneous decomposition of q be q = Σ_{k≥0} q_k. Then

    p(x) = Σ_{k≥0} q_k(x_2-x_1,...,x_n-x_1)

is a homogeneous decomposition of p. Since the decomposition is unique, p_k(x) = q_k(x_2-x_1,...,x_n-x_1) for each k. Thus, each p_k is translational since it satisfies (iii).

To show the equivalence of (ii) and (v), let q(t) := p(x+t1) and note that

    q′(t) = ∂p(x+t1).

Thus, if (ii) is true, then ∂p(x) = q′(0) = 0 since q is constant. If (v) is true, then q′(t) = ∂p(x+t1) = 0; so q is constant, and

    p(x+t1) = q(t) = q(0) = p(x). QED
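Condition (v) gives a mechanical test for translationality. The fragment below is an illustrative check of ours using sympy (not from the paper): it verifies that the discriminant Δ for n = 3 is annihilated by ∂, while the symmetric polynomial x_1 x_2 x_3 is not.

```python
# Check criterion (v) of Theorem (2.3): p is translational iff dp = 0,
# where d = d/dx1 + d/dx2 + d/dx3.  Illustration with sympy for n = 3.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
delta = ((x1 - x2) * (x1 - x3) * (x2 - x3)) ** 2   # discriminant polynomial

def dbar(p):
    return sp.expand(sp.diff(p, x1) + sp.diff(p, x2) + sp.diff(p, x3))

print(dbar(delta))          # 0: Delta is translational
print(dbar(x1 * x2 * x3))   # x1*x2 + x1*x3 + x2*x3: not translational
```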
We obtain immediately the

Theorem (2.4): [Characterization of T, T_k]
(i) T has the direct sum subspace decomposition T = ⊕_{k≥0} T_k.
(ii) T is isomorphic to F[x_1,...,x_{n-1}].
(iii) T_k is isomorphic to F_k[x_1,...,x_{n-1}]. In particular, dim T_k = \binom{n+k-2}{k}.
Moreover, if φ: F[x] → F[x_1,...,x_{n-1}] and ψ: F[x_1,...,x_{n-1}] → F[x] are given by

    (φp)(y_1,...,y_{n-1}) = p(0,y_1,...,y_{n-1}),
    (ψq)(x_1,...,x_n) = q(x_2-x_1,...,x_n-x_1),

then the restriction of φ to T (resp. T_k) provides the isomorphism in (ii) (resp. (iii)), and the restriction of ψ to F[x_1,...,x_{n-1}] (resp. F_k[x_1,...,x_{n-1}]) is its inverse.

Proof: (i) This follows immediately from the fact that p is translational if and only if each p_k in the homogeneous decomposition of p is translational.

(ii) This follows from the fact that p is translational if and only if p = ψq for some q in F[x_1,...,x_{n-1}]. We need also the fact that ψ is one-to-one. To show this, assume that ψq_1 = ψq_2. Then

    q_1(y_1,...,y_{n-1}) = (ψq_1)(0,y_1,...,y_{n-1}) = (ψq_2)(0,y_1,...,y_{n-1}) = q_2(y_1,...,y_{n-1}).

So q_1 = q_2, and ψ is one-to-one. Moreover, the restriction of φ to T is its inverse since for q in F[x_1,...,x_{n-1}]

    (φ(ψq))(y_1,...,y_{n-1}) = (ψq)(0,y_1,...,y_{n-1}) = q(y_1,...,y_{n-1}),

and for p in T

    (ψ(φp))(x) = p(0, x_2-x_1,...,x_n-x_1) = p(x - x_1 1) = p(x),

since p ∈ T. QED
The characterizations of T^ℓ and T_k^ℓ are not quite so simple. If we require that the polynomial p in Theorem (2.3) be ℓ-bounded, then the five conditions of the Theorem are still equivalent. Moreover, the polynomial q in condition (iii) (which is given by q = φ(p)) will also be ℓ-bounded. However, if q is ℓ-bounded, then p = ψ(q) need not be. (Consider the example: n = 3, ℓ = 1, q(y_1,y_2) = y_1 y_2, p(x_1,x_2,x_3) = (x_2-x_1)(x_3-x_1).) Therefore, φ is a one-to-one map from T^ℓ (resp. T_k^ℓ) into F^ℓ[x_1,...,x_{n-1}] (resp. F_k^ℓ[x_1,...,x_{n-1}]), but it is not necessarily onto.

We must therefore use a different approach to characterize the spaces T^ℓ, T_k^ℓ. The approach is provided by condition (v) of Theorem (2.3), from which we see that T^ℓ (resp. T_k^ℓ) is the null space of the restriction of the operator ∂ to F^ℓ[x] (resp. F_k^ℓ[x]). In addition, we see from condition (iv) that the characterization of T^ℓ reduces to that of T_k^ℓ. Therefore, the restrictions of the operator ∂ (and its powers) to the spaces F_k^ℓ[x], as well as the null spaces of these operators, will play a role in our characterization. Consequently, we make the

Definition (2.5): Let i, j be integers with 0 ≤ i ≤ j ≤ nℓ.
(i) The map A(i,j): F_j^ℓ[x] → F_i^ℓ[x] denotes the restriction of the operator (1/(j-i)!) ∂^{j-i} to F_j^ℓ[x].
(ii) C(i,j) denotes the range of A(i,j).
(iii) N(i,j) denotes the null space of A(i,j).
A basis for F_j^ℓ[x] is given by the monomials x^u as u ranges over P_j^ℓ. Therefore, the matrix representation of A(i,j) (which we also denote by A(i,j)) with respect to these bases has rows indexed by P_i^ℓ and columns indexed by P_j^ℓ. The actual matrix elements of A(i,j) are given in the

Lemma (2.6): Let i, j be non-negative integers with i ≤ j. If v ∈ P_i^ℓ and u ∈ P_j^ℓ, then the (v,u) element of A(i,j) is \binom{u}{v}.

Proof: We must determine the coefficient of x^v in (1/(j-i)!) ∂^{j-i} x^u. Now,

    ∂^s = s! Σ_{|α|=s} (1/α!) ∂^α

and

    ∂^α x^u = (u!/(u-α)!) x^{u-α} if α ≤ u, and 0 if α_k > u_k for some k,

so that

    (1/s!) ∂^s x^u = Σ_{β∈P_{j-s}^ℓ} \binom{u}{β} x^β    (β = u - α).

Taking s = j-i yields the result. QED
Theorem (2.7): [Characterization of ℓ-bounded, k-homogeneous, translational polynomials] Let k be an integer satisfying 0 ≤ k ≤ nℓ, and suppose p ∈ F_k^ℓ[x], say p = Σ_{u∈P_k^ℓ} a_u x^u. Then the following statements are equivalent:
(i) p is translational;
(ii) Σ_{u∈P_k^ℓ} \binom{u}{v} a_u = 0 for each v ∈ P_{k-1}^ℓ;
(iii) the vector [a_u] of coefficients of p is in the null space of the matrix A(k-1,k).

Proof: This is an immediate consequence of the Lemma provided k ≥ 1. For k = 0, the space F_0^ℓ[x] consists of the constant polynomials, and P_{-1}^ℓ is empty. In this case, condition (ii) holds vacuously, and the Theorem reduces to the obvious result that all constant polynomials are translational. QED
Remark (2.8): (i) We do not consider k > nℓ since the space F_k^ℓ[x] contains only the zero polynomial in this case.

(ii) The matrix elements of A(k-1,k) (1 ≤ k ≤ nℓ) are easily determined. In fact, if v ∈ P_{k-1}^ℓ and u ∈ P_k^ℓ, then

    \binom{u}{v} = u_j if u = v + e_j (1 ≤ j ≤ n), and 0 otherwise.

[Here, e_j is the standard basis vector all of whose components are 0 except for the j-th component, which is 1.] This fact, along with condition (iii) of the Theorem, may be used to develop an algorithm for constructing (a basis for) the Selberg polynomials.
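Remark (2.8ii) translates directly into an algorithm. The sketch below is our code (all names are ours): it assembles the matrix A(k-1,k) from the rule \binom{u}{v} = u_j when u = v + e_j, and reads off a basis of T_k^ℓ = N(k-1,k) as its null space. For n = ℓ = k = 2 it recovers the coefficient vector of (x_1-x_2)^2.

```python
# Basis for T_k^l (l-bounded, k-homogeneous, translational polynomials)
# via the null space of A(k-1,k), following Theorem (2.7)/Remark (2.8).
from itertools import product
from sympy import Matrix, symbols, prod

def multi_indices(n, l, k):
    """All u in P_k^l: 0 <= u_i <= l and |u| = k."""
    return [u for u in product(range(l + 1), repeat=n) if sum(u) == k]

def translational_basis(n, l, k):
    Pk, Pk1 = multi_indices(n, l, k), multi_indices(n, l, k - 1)
    col = {u: c for c, u in enumerate(Pk)}
    A = Matrix.zeros(len(Pk1), len(Pk))
    for r, v in enumerate(Pk1):
        for j in range(n):                    # u = v + e_j has entry u_j
            u = tuple(v[i] + (i == j) for i in range(n))
            if u in col:
                A[r, col[u]] = u[j]
    return Pk, A.nullspace()                  # coefficient vectors of a basis

n, l, k = 2, 2, 2
x = symbols(f'x1:{n + 1}')
Pk, basis = translational_basis(n, l, k)
for vec in basis:
    p = sum(vec[c] * prod(xi ** ui for xi, ui in zip(x, u))
            for c, u in enumerate(Pk))
    print(p.expand())                         # x1**2 - 2*x1*x2 + x2**2
```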
Theorem (2.9): [Characterization of the Spaces T^ℓ, T_k^ℓ]
(i) T^ℓ has the direct sum subspace decomposition

    T^ℓ = ⊕_{0≤2k≤nℓ} T_k^ℓ.

Moreover, dim T^ℓ = |P_{k_0}^ℓ|, where k_0 is the largest integer satisfying 2k_0 ≤ nℓ.
(ii) T_k^ℓ = N(k-1,k), the null space of A(k-1,k). Moreover,

    dim T_k^ℓ = |P_k^ℓ| - |P_{k-1}^ℓ| if 0 ≤ 2k ≤ nℓ, and 0 if 2k > nℓ.

The proof of this Theorem rests on the following result, which will be proved in stages below.

Theorem (2.10): Let k be an integer.
(i) If 0 ≤ 2k ≤ nℓ, then A(k-1,k) is onto.
(ii) If 2k > nℓ, then A(k-1,k) is one-to-one.

Proof of Theorem (2.9): We prove (ii) first and then (i).

(ii) We conclude from Theorem (2.7) that T_k^ℓ = N(k-1,k). Thus, from Theorem (2.10) we obtain

    dim T_k^ℓ = dim N(k-1,k) = dim F_k^ℓ[x] - dim F_{k-1}^ℓ[x] = |P_k^ℓ| - |P_{k-1}^ℓ| if 0 ≤ 2k ≤ nℓ,

and dim T_k^ℓ = 0 if 2k > nℓ.

(i) From condition (iv) of Theorem (2.3) we know that T^ℓ = ⊕_{k≥0} T_k^ℓ. Therefore, from (ii) we get

    T^ℓ = ⊕_{0≤2k≤nℓ} T_k^ℓ and dim T^ℓ = Σ_{0≤2k≤nℓ} dim T_k^ℓ = Σ_{0≤2k≤nℓ} (|P_k^ℓ| - |P_{k-1}^ℓ|).

This sum telescopes to yield the result since |P_{-1}^ℓ| = 0. QED
The remainder of this section is devoted to showing that A(k-1,k) is onto when 0 ≤ 2k ≤ nℓ and one-to-one when 2k > nℓ. In fact, we will prove the following generalization of Theorem (2.10).

Theorem (2.11): Let i, j be integers. [Here, k̄ := nℓ - k.]
(i) If 0 ≤ j̄ ≤ i ≤ j ≤ nℓ, then A(i,j) is one-to-one.
(ii) If 0 ≤ i ≤ j ≤ ī ≤ nℓ, then A(i,j) is onto.
In particular, if 0 ≤ j̄ ≤ j ≤ nℓ, then A(j̄,j) is invertible.

Theorem (2.10) follows easily upon setting i = k-1, j = k. Moreover, each operator A(i,j) with 0 ≤ i ≤ j ≤ nℓ is characterized in Theorem (2.11). The appearance of the reciprocal index ī = nℓ - i stems from the use of the reciprocal operators R_k in the proof.
Definition (2.12): The reciprocal operator R: F^ℓ[x] → F^ℓ[x] is given by

    (Rp)(x) = (x_1 x_2 ... x_n)^ℓ p(x^{-1}).

For each integer k satisfying 0 ≤ k ≤ nℓ, the map R_k: F_k^ℓ[x] → F_{nℓ-k}^ℓ[x] denotes the restriction of R to F_k^ℓ[x].

It is easily verified that each R_k is an isomorphism whose inverse is R_{k̄}. Consequently, R is also an isomorphism as well as its own inverse. Finally, we will use an inner product on F^ℓ[x] in which R_{k̄} is the adjoint of R_k and in which the monomials x^u are orthogonal.
Definition (2.13): For p, q ∈ F^ℓ[x], set

    <p,q> := (1/(ℓ!)^n) Σ_{u∈P^ℓ} (∂^u p)(0) (∂^{ū} Rq)(0).

Theorem (2.14): If p(x) = Σ_{u∈P^ℓ} a_u x^u and q(x) = Σ_{u∈P^ℓ} b_u x^u, then

    <p,q> = Σ_{u∈P^ℓ} \binom{ℓ}{u}^{-1} a_u b_u.

In particular, <,> is an inner product on F^ℓ[x] in which monomials are orthogonal. In fact,

    <x^u, x^v> = 0 if u ≠ v, and \binom{ℓ}{u}^{-1} if u = v.
Proof: We need check only the final formula since <,> is bilinear. Now,

    ∂^α x^u = (u!/(u-α)!) x^{u-α} if α ≤ u, and 0 otherwise,

and

    R x^v = x^{v̄},

so that for any w, v in P^ℓ we obtain

    (∂^w x^v)(0) = v! if w = v, and 0 otherwise.

Thus, all terms in <x^u, x^v> = (1/(ℓ!)^n) Σ_{w∈P^ℓ} (∂^w x^u)(0)(∂^{w̄} x^{v̄})(0) are 0 unless u = v. In this case, the one non-zero term arises from w = u, and we get

    <x^u, x^u> = u! ū!/(ℓ!)^n = ∏_{r=1}^n u_r!(ℓ-u_r)!/ℓ! = \binom{ℓ}{u}^{-1}. QED
Theorem (2.15)(i): For 0 ≤ k ≤ nℓ, R_k* = R_{k̄} = R_k^{-1}.

Proof: It suffices to show that

    <R_k x^u, x^v> = <x^u, R_{k̄} x^v>

for any u ∈ P_k^ℓ, v ∈ P_{k̄}^ℓ. But

    <R_k x^u, x^v> = <x^{ū}, x^v> = \binom{ℓ}{u}^{-1} if u + v = ℓ1, and 0 otherwise,

and

    <x^u, R_{k̄} x^v> = <x^u, x^{v̄}> = \binom{ℓ}{u}^{-1} if u + v = ℓ1, and 0 otherwise. QED
Theorem (2.15)(iii): For 0 ≤ i ≤ j ≤ k ≤ nℓ,

    A(i,j)A(j,k) = \binom{k-i}{k-j} A(i,k).

Proof: This follows from the definition since

    A(i,j)A(j,k) = (1/(j-i)!)(1/(k-j)!) ∂^{j-i} ∂^{k-j}
                 = [(k-i)!/((j-i)!(k-j)!)] (1/(k-i)!) ∂^{k-i}
                 = \binom{k-i}{k-j} A(i,k). QED
Theorem (2.15)(ii): For 0 ≤ i ≤ j ≤ nℓ,

    A(i,j)* = R_{j̄} A(j̄,ī) R_i.

Proof: We need show only that

    <A(i,j)x^u, x^v> = <x^u, R_{j̄} A(j̄,ī) R_i x^v>

for every v ∈ P_i^ℓ, u ∈ P_j^ℓ. First, we use Lemma (2.6) to obtain

    <A(i,j)x^u, x^v> = \binom{u}{v} \binom{ℓ}{v}^{-1}.

On the other hand, using part (i),

    <x^u, R_{j̄} A(j̄,ī) R_i x^v> = <R_j x^u, A(j̄,ī) x^{v̄}> = <x^{ū}, A(j̄,ī) x^{v̄}> = \binom{v̄}{ū} \binom{ℓ}{ū}^{-1}.

Since, componentwise, \binom{ℓ-v}{ℓ-u}/\binom{ℓ}{u} = \binom{u}{v}/\binom{ℓ}{v}, the two quantities agree, and the two operators are equal. QED
Lemma (2.16):
(i) If u, v are nonnegative integers, then

    \binom{x-u}{v} = Σ_{k=0}^{v} (-1)^k \binom{x-k}{v-k} \binom{u}{k}.

Here, if r and s are integers, then \binom{x-s}{r} denotes the polynomial

    \binom{x-s}{r} = (x-s)(x-s-1)...(x-s-(r-1))/r! if r ≥ 1; 1 if r = 0; 0 if r < 0.

(ii) If 0 ≤ i, j ≤ nℓ, and if v ∈ P_i^ℓ, u ∈ P_j^ℓ, then

    \binom{ū}{v} = Σ_{β∈P^ℓ} (-1)^{|β|} \binom{β̄}{v-β} \binom{u}{β},

and hence

    \binom{ū}{v} \binom{ℓ}{v}^{-1} = Σ_{β∈P^ℓ} (-1)^{|β|} \binom{v}{β} \binom{u}{β} \binom{ℓ}{β}^{-1}.

Proof: (i) If u = 0, both sides are \binom{x}{v}. If v = 0, both sides are 1. We proceed by induction on u simultaneously for all v. So assume the formula holds for some u ≥ 0 and all v; since the formula is correct for v = 0 (and any u), we may assume v ≥ 1. Using the identity

    \binom{x-u-1}{v} = \binom{x-u}{v} - \binom{x-u-1}{v-1},

which is valid for all v, together with the induction hypothesis (applied with x and with x-1), we obtain

    \binom{x-(u+1)}{v} = Σ_{k=0}^{v} (-1)^k \binom{x-k}{v-k} [\binom{u}{k} + \binom{u}{k-1}]
                       = Σ_{k=0}^{v} (-1)^k \binom{x-k}{v-k} \binom{u+1}{k}    [since \binom{u}{-1} = 0],

and the induction is complete.

(ii) If 0 ≤ v, u ≤ ℓ, then replacing x by ℓ in (i) gives

    \binom{ℓ-u}{v} = Σ_{k=0}^{v} (-1)^k \binom{ℓ-k}{v-k} \binom{u}{k}.

Multiplying the n equalities obtained by replacing u, v by u_r, v_r (1 ≤ r ≤ n) yields the first equality. The second equality follows from the simple identity

    \binom{ℓ-k}{v-k} \binom{ℓ}{v}^{-1} = \binom{v}{k} \binom{ℓ}{k}^{-1}. QED
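Identity (i) is easy to confirm symbolically; the following throwaway check (ours, using sympy) verifies it for 0 ≤ u, v ≤ 4.

```python
# Symbolic check of Lemma (2.16)(i):
#   C(x-u, v) = sum_{k=0}^{v} (-1)^k C(x-k, v-k) C(u, k)
import sympy as sp

x = sp.symbols('x')
for u in range(5):
    for v in range(5):
        lhs = sp.binomial(x - u, v)
        rhs = sum((-1) ** k * sp.binomial(x - k, v - k) * sp.binomial(u, k)
                  for k in range(v + 1))
        assert sp.expand(sp.expand_func(lhs - rhs)) == 0
print("Lemma (2.16)(i) verified for 0 <= u, v <= 4")
```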
Definition (2.17): For 0 ≤ i ≤ j ≤ nℓ, the maps

    H_i(j): F_j^ℓ[x] → F_j^ℓ[x],    N_j(i): F_i^ℓ[x] → F_i^ℓ[x]

are given by

    H_i(j) = A(i,j)*A(i,j),    N_j(i) = Σ_{k=0}^{i} [(-1)^k / \binom{j-k}{i-k}] H_k(i).

These maps are clearly hermitian.
Theorem (2.18):
(i) If 0 ≤ i ≤ j ≤ nℓ and i ≤ j̄, then

    A(i,j̄) R_j = N_j(i) A(i,j).

(ii) If 0 ≤ j̄ ≤ j ≤ nℓ, then

    N_j(j̄) A(j̄,j) = R_j.

In particular, A(j̄,j) is invertible.

Proof: (i) We show <x^v, A(i,j̄)R_j x^u> = <x^v, N_j(i)A(i,j)x^u> for each v ∈ P_i^ℓ, u ∈ P_j^ℓ. In fact,

    <x^v, A(i,j̄)R_j x^u> = <x^v, A(i,j̄)x^{ū}> = \binom{ū}{v} \binom{ℓ}{v}^{-1}    [Lemma (2.6)]
        = Σ_{k=0}^{i} (-1)^k Σ_{β∈P_k^ℓ} \binom{v}{β}\binom{u}{β}\binom{ℓ}{β}^{-1}    [Lemma (2.16ii); any term not satisfying β ≤ v is 0]
        = Σ_{k=0}^{i} (-1)^k <A(k,i)x^v, A(k,j)x^u>
        = Σ_{k=0}^{i} [(-1)^k / \binom{j-k}{i-k}] <A(k,i)x^v, A(k,i)A(i,j)x^u>    [by Theorem (2.15iii)]
        = Σ_{k=0}^{i} [(-1)^k / \binom{j-k}{i-k}] <A(k,i)*A(k,i)x^v, A(i,j)x^u>
        = <N_j(i)x^v, A(i,j)x^u>
        = <x^v, N_j(i)A(i,j)x^u>    [since N_j(i) is hermitian].

So the matrices are equal.

(ii) The formula is obtained from the formula in part (i) by replacing i by j̄ and noting that A(j̄,j̄) is the identity map. [Recall Definition (2.5i).] The invertibility of A(j̄,j) now follows from the fact that R_j is an isometry (Theorem (2.15i)). QED
The proof of Theorem (2.11) is now elementary. In fact, if j̄ ≤ i ≤ j, then

    A(j̄,i)A(i,j) = \binom{j-j̄}{j-i} A(j̄,j).

Hence, A(i,j) is one-to-one since A(j̄,j) is invertible. If i ≤ j ≤ ī, then

    A(i,j)A(j,ī) = \binom{ī-i}{ī-j} A(i,ī).

Hence, A(i,j) is onto for a similar reason.
Corollary (2.19): If 0 ≤ i ≤ j ≤ ī ≤ nℓ, then N_j(i) is an isomorphism of F_i^ℓ[x] onto itself.

Proof: R_j is onto by Theorem (2.15), and A(i,j̄) is onto by Theorem (2.11). Therefore, the formula

    (*) A(i,j̄)R_j = N_j(i)A(i,j)

from Theorem (2.18) shows that N_j(i) is onto. Consequently, N_j(i) is an isomorphism. Multiplying (*) on the left by N_{j̄}(i) gives

    N_{j̄}(i)N_j(i)A(i,j) = N_{j̄}(i)A(i,j̄)R_j = (A(i,j)R_{j̄})R_j = A(i,j),

since R_{j̄} is the inverse of R_j. But A(i,j) is onto (again by Theorem (2.11)), so N_{j̄}(i)N_j(i) is the identity. QED
Section 3: Reciprocal Translational Polynomials and the Subspace Decompositions

The fact that the reciprocal operators R_k are used in the previous section to help in the characterization of ℓ-bounded translational polynomials leads us to expect that there is a connection between such polynomials and reciprocal polynomials. In fact, one of the main results of this section (Theorem (3.4)) is that, aside from some obvious necessary conditions, homogeneous polynomials are reciprocal if and only if they are translational. To prove this, we will obtain an orthogonal subspace decomposition of F^ℓ[x] using the spaces C(i,j), N(i,j) (see Definition (2.5)), and then we will consider in detail the actions of the operators A(i,j) on this orthogonal decomposition. We end the section with some additional analysis of the operators H_i(j) (see Definition (2.17)). However, we begin the section with some elementary results on reciprocal polynomials.
Definition (3.1): The polynomial p is (+ℓ)-reciprocal if

    (Rp)(x) = p(x).

It is (-ℓ)-reciprocal if

    (Rp)(x) = -p(x).

In either case, p is called reciprocal. The vector space of (+ℓ)-reciprocal (resp. (-ℓ)-reciprocal) polynomials is denoted R^{+ℓ} (resp. R^{-ℓ}).

Since Rp is a polynomial if and only if p is ℓ-bounded, we restrict our attention to the spaces F^ℓ[x] and F_k^ℓ[x] (0 ≤ k ≤ nℓ). Naturally, R_k^{+ℓ} denotes the vector space of k-homogeneous, (+ℓ)-reciprocal polynomials, and R_k^{-ℓ} has a similar meaning.
Theorem (3.2): [Characterization of Reciprocal Polynomials] Suppose p ∈ F^ℓ[x] and p has the homogeneous decomposition p = Σ_{k=0}^{nℓ} p_k. Then the following statements are equivalent:
(i) p is (+ℓ)-reciprocal (resp. (-ℓ)-reciprocal);
(ii) for 0 ≤ k ≤ nℓ, Rp_k = p_{nℓ-k} (resp. Rp_k = -p_{nℓ-k});
(iii) for 0 ≤ k ≤ nℓ, R_k p_k = p_{nℓ-k} (resp. R_k p_k = -p_{nℓ-k});
(iv) if p = Σ_{u∈P^ℓ} a_u x^u, then a_{ū} = a_u (resp. a_{ū} = -a_u).

Moreover, if p = Σ_{u∈P_k^ℓ} a_u x^u, then p is (+ℓ)-reciprocal (resp. (-ℓ)-reciprocal) if and only if both 2k = nℓ and a_{ū} = a_u (resp. a_{ū} = -a_u).

Proof: This is a straightforward application of the appropriate definitions and the fact that Rp = Σ Rp_k is again a homogeneous decomposition. QED
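As a small sanity check of the "Moreover" clause (and of Theorem (3.4) below), the fragment below, our illustration in sympy, verifies that Δ = (x_1-x_2)^2, for which n = 2, ℓ = 2, k = 2 and 2k = nℓ, satisfies RΔ = (-1)^k Δ = Δ.

```python
# Check Theorem (3.2)/(3.4) for Delta = (x1 - x2)^2 with n = 2, l = 2, k = 2:
# R Delta = (x1*x2)^l * Delta(1/x1, 1/x2) should equal (-1)^k Delta = Delta.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
l = 2
delta = (x1 - x2) ** 2
Rdelta = sp.expand((x1 * x2) ** l
                   * delta.subs({x1: 1/x1, x2: 1/x2}, simultaneous=True))
print(sp.simplify(Rdelta - delta))   # 0: Delta is (+l)-reciprocal
```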
Theorem (3.2) leads immediately to the

Theorem (3.3): [Characterization of the Spaces R^{±ℓ}, R_k^{±ℓ}]
(i) R^{+ℓ}, R^{-ℓ} are isomorphic to the following direct sums:

    ⊕_{0≤2j<nℓ} F_j^ℓ[x], if nℓ is odd;
    (⊕_{j=0}^{k-1} F_j^ℓ[x]) ⊕ R_k^{±ℓ}, if nℓ = 2k is even.

(ii) Suppose 0 ≤ k ≤ nℓ. Then

    dim R_k^{±ℓ} = 0 if 2k ≠ nℓ;
    dim R_k^{±ℓ} = ½|P_k^ℓ| if 2k = nℓ, ℓ odd;
    dim R_k^{±ℓ} = ½(|P_k^ℓ| ± 1) if 2k = nℓ, ℓ even.

Proof: This is immediate from Theorem (3.2iv), pairing each coefficient a_u with a_{ū}. QED
We are now in a position to state the main result of this section; however, as in Section 2, we will carry out the proof in several stages.

Theorem (3.4): Suppose p is a non-zero polynomial in T_k^ℓ. Then p is reciprocal if and only if 2k = nℓ (i.e., k = k̄). Moreover, in this case,

    Rp = (-1)^k p,

so that T_k^ℓ consists entirely of (+ℓ)-reciprocal (resp. (-ℓ)-reciprocal) polynomials if k is even (resp. odd). In other words, if k is an integer satisfying 0 ≤ k ≤ nℓ, then

    R_k^{+ℓ} ∩ T_k^ℓ = (0) if 2k ≠ nℓ; = T_k^ℓ if 2k = nℓ, k even; = (0) if 2k = nℓ, k odd;
    R_k^{-ℓ} ∩ T_k^ℓ = (0) if 2k ≠ nℓ; = (0) if 2k = nℓ, k even; = T_k^ℓ if 2k = nℓ, k odd.

Moreover, the non-zero spaces have dimension |P_k^ℓ| - |P_{k-1}^ℓ|.
The first step towards the proof is a study of the spaces

    C(i,j) := range(A(i,j)) ⊆ F_i^ℓ[x],
    N(i,j) := null space(A(i,j)) ⊆ F_j^ℓ[x]

of Definition (2.5). We will also study the orthogonal complements C(i,j)^⊥, N(i,j)^⊥ taken in the respective spaces F_i^ℓ[x], F_j^ℓ[x] with respect to the inner product of Definition (2.13).

The second step consists of obtaining subspace decompositions of C(i,j) and N(i,j) as orthogonal direct sums. In the process, we will see how the operators A(i,j) act on the summands, and Theorem (3.4) will follow. The section will be concluded with a detailed analysis of these actions.
Theorem (3.5): [Properties of C(i,j), N(i,j)]
(i) If 0 ≤ j̄ ≤ i ≤ j ≤ nℓ, then

    dim C(i,j) = |P_j^ℓ|, dim N(i,j) = 0, dim N(i,j)^⊥ = |P_j^ℓ|, dim C(i,j)^⊥ = |P_i^ℓ| - |P_j^ℓ|.

(ii) If 0 ≤ i ≤ j ≤ ī ≤ nℓ, then

    dim C(i,j) = |P_i^ℓ|, dim C(i,j)^⊥ = 0, dim N(i,j) = |P_j^ℓ| - |P_i^ℓ|, dim N(i,j)^⊥ = |P_i^ℓ|.

Moreover, if 0 ≤ j̄ ≤ i ≤ j ≤ nℓ, then

    C(i,j) = N(j̄,i)^⊥.

Proof: The following formulas from linear algebra are standard. If A is a linear map from the vector space U to the vector space V, then

    dim U = dim range(A) + dim nullspace(A).

If U is an inner product space and W is a subspace, then

    dim W + dim W^⊥ = dim U.

The dimension formulas in (i) and (ii) now follow from Theorem (2.11), since A(i,j) is one-to-one in case (i) and onto in case (ii).

To show that C(i,j) = N(j̄,i)^⊥, we note that the dimensions are equal when j̄ ≤ i ≤ j. Thus, it suffices to show that C(i,j) ⊆ N(j̄,i)^⊥. So suppose A(i,j)p ∈ C(i,j) and q ∈ N(j̄,i). Then

    <A(i,j)p, q> = <p, A(i,j)*q>
                 = <p, R_{j̄} A(j̄,ī) R_i q>    (Theorem (2.15))
                 = <p, R_{j̄} N_i(j̄) A(j̄,i) q>    (Theorem (2.18))
                 = 0,

since A(j̄,i)q = 0, and the spaces are equal. QED
We are now ready to introduce the subspaces which comprise the orthogonal direct sums.

Definition (3.6): Suppose j, k are integers with k̄ ≤ k and j ≤ k. Then the subspaces V(j,k) are given by

    V(j,k) := A(j,k) N(k̄-1, k).

Lemma (3.7): [Properties of V(j,k)] Suppose k̄ ≤ k. Then
(i) V(j,k) = (0) for j < k̄ or k > nℓ.
(ii) V(k,k) = N(k̄-1,k), V(nℓ,nℓ) = F_{nℓ}^ℓ[x], and V(j,k) ⊆ F_j^ℓ[x] for j ≤ k.
(iii) dim V(j,k) = |P_k^ℓ| - |P_{k+1}^ℓ| for k̄ ≤ j ≤ k.
(iv) A(i,j)V(j,k) = V(i,k) for i ≤ j ≤ k. In particular, A(i,j)V(j,k) = (0) for i < k̄.
(v) V(j,k) = C(j,k) ∩ C(j,k+1)^⊥ = N(k̄,j)^⊥ ∩ N(k̄-1,j) for k̄ ≤ j ≤ k.
(vi) V(i,j) ⊥ V(i,k) for 0 ≤ k̄ < j̄ ≤ i ≤ j < k ≤ nℓ.
Proof: (i) If j < k̄, then V(j,k) = A(j,k)N(k̄-1,k) = (1/(k-j)!) ∂^{k-j} N(k̄-1,k) = (0), since k-j > k-(k̄-1). Also, N(k̄-1,k) = (0) if k > nℓ.

(ii) If j = k, then V(k,k) = N(k̄-1,k) since A(k,k) is the identity on F_k^ℓ[x]. For k = nℓ, this becomes V(nℓ,nℓ) = N(-1,nℓ) = F_{nℓ}^ℓ[x] since ∂^{nℓ+1} F_{nℓ}^ℓ[x] = (0). Finally, V(j,k) = A(j,k)N(k̄-1,k) ⊆ A(j,k)F_k^ℓ[x] ⊆ F_j^ℓ[x].

(iii) If k̄ ≤ j ≤ k, then A(j,k) is one-to-one by Theorem (2.11), so that

    dim V(j,k) = dim N(k̄-1,k) = |P_k^ℓ| - |P_{k̄-1}^ℓ|    (by Theorem (3.5))
               = |P_k^ℓ| - |P_{k+1}^ℓ|,

since k̄-1 = nℓ-(k+1).

(iv) A(i,j)V(j,k) = A(i,j)A(j,k)N(k̄-1,k) = const·A(i,k)N(k̄-1,k) = V(i,k). By part (i), this is (0) for i < k̄.

(v) Suppose k̄ ≤ j ≤ k. Then V(j,k) = A(j,k)N(k̄-1,k) ⊆ N(k̄-1,j) = C(j,k+1)^⊥ by Theorem (3.5). Since V(j,k) ⊆ C(j,k) and C(j,k+1) ⊆ C(j,k) (Theorem (3.5)), we get V(j,k) ⊆ C(j,k) ∩ C(j,k+1)^⊥ = C(j,k) ⊖ C(j,k+1). But the dimensions here are equal:

    dim V(j,k) = |P_k^ℓ| - |P_{k+1}^ℓ|,    dim C(j,k) ⊖ C(j,k+1) = |P_k^ℓ| - |P_{k+1}^ℓ|.

So V(j,k) = C(j,k) ∩ C(j,k+1)^⊥. The formula V(j,k) = N(k̄,j)^⊥ ∩ N(k̄-1,j) follows now from the identity C(j,k) = N(k̄,j)^⊥ (Theorem (3.5)).

(vi) This follows easily from the inclusions V(i,j) ⊆ C(i,j+1)^⊥ and V(i,k) ⊆ C(i,k) ⊆ C(i,j+1) (Theorem (3.5)). QED
Theorem (3.8): [Orthogonal Subspace Decompositions] The following subspace decompositions are orthogonal direct sums.
(i) F_i^ℓ[x] = ⊕_{k: k̄ ≤ i ≤ k} V(i,k) for 0 ≤ i ≤ nℓ.
(ii) C(i,j) = ⊕_{k: k̄ ≤ i ≤ j ≤ k} V(i,k) for 0 ≤ i ≤ j ≤ nℓ.
(iii) N(i,j) = ⊕_{k: i < k̄ ≤ j ≤ k} V(j,k) for 0 ≤ i ≤ j ≤ nℓ.

Proof: The proofs are all similar; we prove only (i). By Lemma (3.7), we have V(i,k) ⊆ F_i^ℓ[x], so ⊕_k V(i,k) ⊆ F_i^ℓ[x]. It suffices to show that the dimensions agree. Now,

    Σ_{k: k̄≤i≤k} dim V(i,k) = Σ_{k=ī}^{nℓ} (|P_k^ℓ| - |P_{k+1}^ℓ|) = |P_{ī}^ℓ| if i ≤ ī,

and

    Σ_{k: k̄≤i≤k} dim V(i,k) = Σ_{k=i}^{nℓ} (|P_k^ℓ| - |P_{k+1}^ℓ|) = |P_i^ℓ| if ī ≤ i

(the sums telescope, and |P_{nℓ+1}^ℓ| = 0). Since |P_{ī}^ℓ| = |P_i^ℓ|, the dimension is |P_i^ℓ| in either case. Since dim F_i^ℓ[x] = |P_i^ℓ|, the spaces are equal. QED

We are now in a position to prove Theorem (3.4). In fact, all that needs to be shown is that the restriction of (-1)^k R_k to T_k^ℓ = N(k-1,k) is the identity when k = k̄. This is contained in part (ii) of the next result.
Theorem (3.9): Suppose 0 ≤ k̄ ≤ i ≤ k ≤ nℓ. Then
(i) The restriction of R_i to V(i,k) is an isometric isomorphism onto V(ī,k).
(ii) The restriction of (-1)^{k̄} R_{k̄} A(k̄,k) to V(k,k) is the identity.

Proof: (i) It suffices to show that R_i C(i,j) = C(ī,j) for ī ≤ j. In fact, if this were true, then, since R_i is an isometry (Theorem (2.15i)) and V(i,k) = C(i,k) ∩ C(i,k+1)^⊥ (Lemma (3.7v)), we would have

    R_i V(i,k) = R_i C(i,k) ∩ (R_i C(i,k+1))^⊥ = C(ī,k) ∩ C(ī,k+1)^⊥ = V(ī,k).

We prove now the contention that R_i C(i,j) = C(ī,j). By Theorem (2.15ii) we have R_i A(i,j) = A(j̄,ī)* R_j, so that

    R_i C(i,j) = R_i A(i,j) F_j^ℓ[x] = A(j̄,ī)* R_j F_j^ℓ[x] = A(j̄,ī)* F_{j̄}^ℓ[x]
               = range(A(j̄,ī)*) = N(j̄,ī)^⊥ = C(ī,j)    [Theorem (3.5)].

(ii) Suppose 0 ≤ k̄ ≤ k ≤ nℓ. Then R_k = N_k(k̄)A(k̄,k) by Theorem (2.18ii). Therefore, for p ∈ V(k,k) = N(k̄-1,k) we obtain

    R_k p = N_k(k̄)A(k̄,k)p = Σ_{j=0}^{k̄} [(-1)^j / \binom{k-j}{k̄-j}] A(j,k̄)*A(j,k̄)A(k̄,k)p = (-1)^{k̄} A(k̄,k)p,

since A(j,k̄)A(k̄,k)p = const·A(j,k)p = 0 for j < k̄. Consequently,

    (-1)^{k̄} R_{k̄} A(k̄,k)p = R_{k̄} R_k p = p. QED
In the remainder of this section we describe the action of the operators A(i,j) on the spaces V(j,k), and conclude from this the structure of the operators H_i(j) on F_j^ℓ[x]. The computation made in the proof of Theorem (3.9ii) will be needed again; we record it as the

Lemma (3.10): Suppose 0 ≤ k̄ ≤ i ≤ k ≤ nℓ. If p ∈ V(k,k), then

    N_i(k̄)A(k̄,k)p = (-1)^{k̄} A(k̄,k)p.

Proof: As in the proof of Theorem (3.9ii), every term of N_i(k̄) except the j = k̄ term annihilates A(k̄,k)p. QED
Theorem (3.11): [Action of A(i,j): V(j,k) → V(i,k)] Suppose 0 ≤ k̄ ≤ i ≤ j ≤ k ≤ nℓ. Then

    <A(i,j)p, A(i,j)p> = \binom{k-i}{k-j}\binom{j-k̄}{i-k̄} <p,p> for all p ∈ V(j,k).

Proof: We consider first the case j = k. So suppose p ∈ V(k,k) and 0 ≤ k̄ ≤ i ≤ k ≤ nℓ. Then

    <A(i,k)p, A(i,k)p> = <p, A(i,k)*A(i,k)p>
        = <p, R_{k̄} A(k̄,ī) R_i A(i,k)p>    [Theorem (2.15ii)]
        = <p, R_{k̄} N_i(k̄) A(k̄,i) A(i,k)p>    [Theorem (2.18i)]
        = \binom{k-k̄}{k-i} <p, R_{k̄} N_i(k̄) A(k̄,k)p>    [Theorem (2.15iii)]
        = (-1)^{k̄} \binom{k-k̄}{k-i} <p, R_{k̄} A(k̄,k)p>    [Lemma (3.10)]
        = \binom{k-k̄}{k-i} <p,p>    [Theorem (3.9ii)].

So the result is true when j = k.

To prove the general result, assume p ∈ V(j,k). Then p = A(j,k)q for some q ∈ V(k,k). Therefore,

    <p,p> = <A(j,k)q, A(j,k)q> = \binom{k-k̄}{k-j} <q,q>

by the first part of the proof, and so

    <A(i,j)p, A(i,j)p> = <A(i,j)A(j,k)q, A(i,j)A(j,k)q>
        = \binom{k-i}{k-j}^2 <A(i,k)q, A(i,k)q>    [Theorem (2.15iii)]
        = \binom{k-i}{k-j}^2 \binom{k-k̄}{k-i} <q,q>    [the case j = k]
        = {\binom{k-i}{k-j}^2 \binom{k-k̄}{k-i} / \binom{k-k̄}{k-j}} <p,p>
        = \binom{k-i}{k-j}\binom{j-k̄}{i-k̄} <p,p>. QED
We digress slightly to discuss properties of maps which exhibit the behavior seen in the previous Theorem. Recall that a linear map A from the vector space V to itself is scalar if A = aI for some scalar a. To generalize this notion to a linear map A between two vector spaces V and W, we require that they be inner product spaces.

Definition (3.12): Suppose V and W are inner product spaces. The linear transformation A: V → W is called scalar (with respect to the corresponding inner products) if there is a scalar a so that for all v ∈ V,

    <Av,Av>_W = a <v,v>_V,

where <·,·>_W and <·,·>_V are the inner products on W and V respectively.

It is easy to see that if A: V → W is scalar, then either a = 0 and A = 0, or a > 0 and A is one-to-one. Unfortunately, this notion of a scalar map is not very restrictive in the following sense: if A is any one-to-one linear map between vector spaces, it is possible to determine inner products on the spaces with respect to which A will be scalar. On the other hand, in any given problem the knowledge that a map is scalar helps to determine its structure.

We give now several equivalent characterizations of scalar maps.

Theorem (3.13): [Characterizations of Scalar Maps] Suppose V and W are inner product spaces and A: V → W is a linear transformation. Then the following conditions are equivalent:
(i) There is a scalar a so that <Av,Av> = a<v,v> for all v ∈ V.
(ii) There is a scalar a so that <Au,Av> = a<u,v> for all u, v ∈ V.
(iii) <u,v> = 0 implies <Au,Av> = 0 for all u, v ∈ V.
(iv) A*Av is a multiple of v for each v ∈ V.
(v) There is a scalar a so that A*A = aI; that is, A*A: V → V is scalar in the original sense.
Moreover, the scalar a in (i), (ii) and (v) is the same.
Proof: We will show (i)⇒(ii)⇒(iii)⇒(iv)⇒(v)⇒(i). The polarization identity shows that (i)⇒(ii) with the same a. The implication (ii)⇒(iii) is trivial.

Now assume (iii) is true and v ∈ V. If v = 0, then A*Av is clearly a multiple of v. So assume v ≠ 0. Let

    w := A*Av - (<A*Av, v>/<v,v>) v.

Direct calculation shows <w,v> = 0. On the other hand, if <u,v> = 0, then from (iii) we get

    <w,u> = <A*Av,u> - (<A*Av,v>/<v,v>)<v,u> = <Av,Au> - 0 = 0.

Therefore, w is orthogonal to every vector in V, so that w = 0. Hence, (iii)⇒(iv).

To prove that (iv) implies (v), we will show more generally that if B: V → V is a linear map such that Bv is a multiple of v for each v ∈ V, then B = aI for some scalar a. So let v be a nonzero element of V, and define a by Bv = av. If u ∈ V and u is a multiple of v, then clearly Bu = au. On the other hand, if u ∈ V and u is independent of v with Bu = βu and B(u+v) = γ(u+v), then

    av + βu = Bv + Bu = B(v+u) = γ(v+u) = γv + γu,

so that a = γ = β. Hence, B = aI. Finally, it is clear that (v) implies (i) with the same a. QED
By Theorem (3.11) we see that the maps A(i,j): V(j,k) → V(i,k) are scalar. Therefore, we can apply characterization (v) above to the maps H_i(j) = A(i,j)*A(i,j): F_j^ℓ[x] → F_j^ℓ[x] to determine their structure completely.

Theorem (3.14): [The Operators H_i(j)] Suppose 0 ≤ i ≤ j ≤ nℓ.
(i) For 0 ≤ k̄ ≤ i ≤ j ≤ k ≤ nℓ, the restriction of H_i(j) to V(j,k) is scalar with multiple \binom{k-i}{k-j}\binom{j-k̄}{i-k̄}. In particular, the eigenvalues of H_i(j) are \binom{k-i}{k-j}\binom{j-k̄}{i-k̄}, with multiplicity |P_k^ℓ| - |P_{k+1}^ℓ|.
(ii) If 0 ≤ k ≤ j, then H_i(j)H_k(j) = H_k(j)H_i(j).

Proof: We know that H_i(j): F_j^ℓ[x] → F_j^ℓ[x] and F_j^ℓ[x] = ⊕_{k: k̄≤j≤k} V(j,k) by Theorem (3.8i). Both results then follow from Theorems (3.11) and (3.13). QED
Section 4: Selberg Polynomials and Tactical Decompositions

We have characterized the polynomials which satisfy the first three of the four conditions imposed by Selberg. In this section we complete the characterization by handling the condition of symmetry. This will require the partitioning of the matrix A(k-1,k) into a "tactical decomposition" (see Definition (4.4)). A related matrix B(k-1,k) is determined whose null space consists of the vectors of coefficients of Selberg polynomials. The relation between A and B will allow us to determine the dimension of this null space.
Definition (4.1): The polynomial p ∈ F[x] is symmetric if

    (σp)(x) = p(x)

for all permutations σ ∈ Π_n, where (σp)(x) := p(σx) and σx := (x_{σ(1)},...,x_{σ(n)}).

The vector space of symmetric polynomials is denoted Sym and is spanned by the polynomials

    Σ_{α∈O(u)} x^α,    u ∈ P⃗,

where P⃗ denotes the set of ordered partitions (Appendix 2) and O(u) is the orbit of u:

    O(u) := {σu: σ ∈ Π_n}.

We think of P⃗ as an index set for Sym. The ℓ-bounded and k-homogeneous subspaces of Sym are denoted as usual by Sym^ℓ, Sym_k, Sym_k^ℓ and are indexed by the sets P⃗^ℓ, P⃗_k, P⃗_k^ℓ, respectively.

The symmetric Selberg space S_k^ℓ is given by

    S_k^ℓ := T_k^ℓ ∩ R ∩ Sym.

By Theorem (3.4) we have: If 2k ≠ nℓ, then S_k^ℓ = (0). If 2k = nℓ, then S_k^ℓ = T_k^ℓ ∩ Sym. Moreover, if k is even, then S_k^ℓ consists of (+ℓ)-reciprocal polynomials, while if k is odd, then S_k^ℓ consists of (-ℓ)-reciprocal polynomials. Consequently, we need to determine which polynomials in T_k^ℓ are also symmetric.
Theorem (4.2): [Characterization of S_k^ℓ] Suppose 2k = nℓ and p ∈ Sym_k^ℓ, say p = Σ_{u∈P_k^ℓ} a_u x^u. Then p ∈ S_k^ℓ if and only if

    Σ_{u∈P⃗_k^ℓ} b_{vu} a_u = 0 for all v ∈ P⃗_{k-1}^ℓ,

where

    b_{vu} := Σ_{α∈O(u)} \binom{α}{v}.

Proof: Assume p ∈ Sym_k^ℓ. By Theorems (2.7) and (3.4), p is translational and reciprocal (hence in S_k^ℓ) if and only if

    Σ_{u∈P_k^ℓ} \binom{u}{v} a_u = 0 for all v ∈ P_{k-1}^ℓ.

But for σ ∈ Π_n we have σP_k^ℓ = P_k^ℓ, \binom{σw}{σv} = \binom{w}{v}, and a_{σw} = a_w (since p is symmetric), so that

    Σ_{u∈P_k^ℓ} \binom{u}{σv} a_u = Σ_{μ∈P_k^ℓ} \binom{σμ}{σv} a_{σμ} = Σ_{μ∈P_k^ℓ} \binom{μ}{v} a_μ    (with u = σμ).

Thus, p ∈ S_k^ℓ if and only if

    Σ_{u∈P_k^ℓ} \binom{u}{v} a_u = 0 for all v ∈ P⃗_{k-1}^ℓ.

However,

    Σ_{u∈P_k^ℓ} \binom{u}{v} a_u = Σ_{u∈P⃗_k^ℓ} (Σ_{α∈O(u)} \binom{α}{v}) a_u    (since a_α = a_u for α ∈ O(u))
                                 = Σ_{u∈P⃗_k^ℓ} b_{vu} a_u,

and the result follows. QED
This result leads us to consider the matrices B(i,j) for i ≤ j, whose rows and columns are indexed by P⃗_i^ℓ and P⃗_j^ℓ respectively, and whose (v,u)-element is given by

    b_{vu} = Σ_{α∈O(u)} \binom{α}{v}.

From Theorem (4.2) we see that S_k^ℓ is determined in an obvious way from the null space of B(k-1,k).
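Carrying this out is mechanical. The sketch below is our code (the helper names are ours): it builds B(k-1,k) from the orbit sums b_{vu} and computes dim S_k^ℓ as the dimension of its null space. For the types (2,2), (3,4) and (4,6) of Δ it prints 1, 1 and 2, in agreement with the dimension formula proved below.

```python
# Dimension of the symmetric Selberg space S_k^l via the matrix B(k-1,k)
# of Theorem (4.2); b_{vu} = sum over the orbit O(u) of binom(alpha, v).
from itertools import product, permutations
from math import comb, prod
from sympy import Matrix

def ordered_partitions(n, l, k):   # P->_k^l: weakly decreasing multi-indices
    return [u for u in product(range(l + 1), repeat=n)
            if sum(u) == k and all(u[i] >= u[i + 1] for i in range(n - 1))]

def binom_vec(a, v):               # binom(a, v) = prod_r binom(a_r, v_r)
    return prod(comb(ai, vi) for ai, vi in zip(a, v))

def dim_S(n, l):
    k = n * l // 2                 # assumes 2k = n*l
    Pk = ordered_partitions(n, l, k)
    Pk1 = ordered_partitions(n, l, k - 1)
    B = Matrix([[sum(binom_vec(a, v) for a in set(permutations(u)))
                 for u in Pk] for v in Pk1])
    return len(Pk) - B.rank()      # dimension of the null space of B

for n, l in [(2, 2), (3, 4), (4, 6)]:
    print(n, l, dim_S(n, l))       # prints 1, 1, 2
```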
The main result is the

Theorem (4.3): [Dimension of S_k^ℓ] If 2k = nℓ, then B(k-1,k) has full row rank. In particular,

    dim S_k^ℓ = |P⃗_k^ℓ| - |P⃗_{k-1}^ℓ| = par(n,ℓ,k) - par(n,ℓ,k-1).

As in the previous sections, this result is proven in stages.
We will use the following notion.

Definition (4.4): Let A = [A_{ij}] (1 ≤ i ≤ s, 1 ≤ j ≤ t) be a partitioned matrix. The partition is a tactical decomposition if each A_{ij} has constant row sum b_{ij} and constant column sum c_{ij}. The s×t matrices B := [b_{ij}] and C := [c_{ij}] are the row- and column-sum matrices of the decomposition.
Lemma (4.5): Let [A_{ij}] be an s×t tactical decomposition of A.
(i) If A has full row rank, then so does B.
(ii) If A has full column rank, then so does C.

Note: A has full row rank if A^T is one-to-one, and it has full column rank if A is one-to-one.

Proof of Lemma (4.5): Let [A_{ij}] be an s×t tactical decomposition of A with row-sum matrix B and column-sum matrix C. We prove only (ii); (i) follows by taking transposes, since [A_{ji}^T] is a t×s tactical decomposition of A^T with row-sum matrix C^T and column-sum matrix B^T.

Let A_{ij} be e_i × f_j. Adding the elements of A_{ij} by rows and by columns gives

    e_i b_{ij} = c_{ij} f_j,    or    EB = CF,

where E := diag(e_1,...,e_s) and F := diag(f_1,...,f_t). Since E and F are invertible, C is one-to-one if and only if B is. But B[b_1,...,b_t]^T = 0 if and only if

    A[b_1,...,b_1, b_2,...,b_2, ..., b_t,...,b_t]^T = 0,

where each b_j is repeated f_j times. It follows easily that if A is one-to-one, then so is B (and hence C). QED
To complete the proof of Theorem (4.3), we need show only that B(k-1,k) is the row-sum matrix of a tactical decomposition of A(k-1,k). The point about full row rank will then follow from Theorem (3.5ii) (which shows that A(k-1,k) has full row rank when k = k̄) by applying Lemma (4.5).

To partition A(i,j), we order the elements of P⃗_i^ℓ and P⃗_j^ℓ lexicographically, and then we order the orbits O(u) and O(v) (u ∈ P⃗_j^ℓ, v ∈ P⃗_i^ℓ) lexicographically. We obtain

    A(i,j) = [A_{vu}(i,j)],

where, for u ∈ P⃗_j^ℓ and v ∈ P⃗_i^ℓ, A_{vu}(i,j) denotes the block of A(i,j) whose rows are indexed by O(v) and whose columns are indexed by O(u).

Theorem (4.6): [Tactical Decomposition of A(i,j)] The partition A(i,j) = [A_{vu}(i,j)] is a tactical decomposition whose row-sum matrix is B(i,j).

Proof: A typical row of A_{vu} is indexed by an element σv of O(v), and its sum is

    Σ_{α∈O(u)} \binom{α}{σv} = Σ_{α∈O(u)} \binom{σ^{-1}α}{v} = Σ_{α∈O(u)} \binom{α}{v} = b_{vu},

independently of the row chosen. The column sums are handled similarly. QED
Appendix 1: Notation

Throughout this paper n and ℓ denote fixed integers satisfying n ≥ 2 and ℓ ≥ 1.

I. n-tuples and operations on them

(1) 0 = (0,0,...,0); 1 = (1,1,...,1).
(2) u = (u_1,u_2,...,u_n); x = (x_1,x_2,...,x_n).
(3) tx = (tx_1, tx_2,...,tx_n); t1 = (t,t,...,t).
(4) ū = ℓ1 - u = (ℓ-u_1, ℓ-u_2,...,ℓ-u_n).
(5) |u| = u_1 + u_2 + ... + u_n.
(6) u! = u_1! u_2! ... u_n!    (! denotes the ordinary factorial).
(7) \binom{u}{v} = \binom{u_1}{v_1}\binom{u_2}{v_2}...\binom{u_n}{v_n}    (each factor is an ordinary binomial coefficient).
II. Polynomial Properties

All polynomials are in F[x] = F[x_1, x_2,...,x_n] (see Subsection III below) unless otherwise specified. Let p ∈ F[x] and k, ℓ be nonnegative integers. Then

(1) p is ℓ-bounded if the largest power to which any x_i occurs in p is at most ℓ.
(2) p is translational if p(x+1) = p(x).
(3) p is k-homogeneous if p(tx) = t^k p(x).
(4) p is (+ℓ)-reciprocal if

    (x_1 x_2 ... x_n)^ℓ p(1/x_1, 1/x_2,...,1/x_n) = p(x);

p is (-ℓ)-reciprocal if

    (x_1 x_2 ... x_n)^ℓ p(1/x_1, 1/x_2,...,1/x_n) = -p(x).

In either case, p is ℓ-reciprocal.
(5) p is symmetric⁺ if p(σ(x)) = p(x) for all σ ∈ Π_n; p is symmetric⁻ if p(σ(x)) = (-1)^σ p(x) for all σ ∈ Π_n. In either case, p is symmetric. [Here, (-1)^σ is +1 (resp. -1) if σ is an even (resp. odd) permutation.]
III. Spaces and Sets

(1) F[x_1,...,x_m] denotes the vector space over the real field F of all polynomials in m variables with coefficients in F. (F[x] = F[x_1,...,x_n].)
(2) F^ℓ[x_1,...,x_m] denotes the subspace of F[x_1,...,x_m] consisting of all ℓ-bounded polynomials.
(3) F_k[x_1,...,x_m] denotes the subspace of F[x_1,...,x_m] consisting of all k-homogeneous polynomials.
(4) F_k^ℓ[x_1,...,x_m] denotes the subspace of F[x_1,...,x_m] consisting of all ℓ-bounded, k-homogeneous polynomials.
(5) Let Q be a subspace of F[x_1,...,x_m]. Then the subspaces Q^ℓ, Q_k, Q_k^ℓ of Q are defined by

    Q^ℓ = Q ∩ F^ℓ[x_1,...,x_m],
    Q_k = Q ∩ F_k[x_1,...,x_m],
    Q_k^ℓ = Q ∩ F_k^ℓ[x_1,...,x_m].

(6) T denotes the subspace of F[x] consisting of all translational polynomials. (T^ℓ, T_k, T_k^ℓ.)
(7) R denotes the subspace of F[x] consisting of all ℓ-reciprocal polynomials. (R^{±ℓ}, R_k^{±ℓ}.)
(8) S denotes the subspace of F[x] consisting of all symmetric polynomials. (S^ℓ, S_k, S_k^ℓ.)
(9) C(i,j) denotes the range of A(i,j) for 0 ≤ i ≤ j ≤ nℓ.
(10) N(i,j) denotes the null space of A(i,j) for 0 ≤ i ≤ j ≤ nℓ.
(11) V(nℓ,nℓ) = F_{nℓ}^ℓ[x];

    V(k,k) = C(k,k+1)^⊥ for k̄ ≤ k < nℓ;
    V(i,j) = A(i,j)V(j,j) for 0 ≤ i < j ≤ nℓ, j̄ ≤ j.

(12) Let E be a subset of ℕ^n = {u = (u_1,...,u_n): u_i ∈ ℕ for 1 ≤ i ≤ n}. Then the subsets E^ℓ, E_k, E_k^ℓ of E are defined by

    E^ℓ = {u ∈ E: u_i ≤ ℓ for 1 ≤ i ≤ n},
    E_k = {u ∈ E: |u| = u_1 + u_2 + ... + u_n = k},
    E_k^ℓ = {u ∈ E: |u| = k and u_i ≤ ℓ for 1 ≤ i ≤ n}.

(13) P = {u ∈ ℕ^n: u_i ≥ 0 for all i}. (P^ℓ, P_k, P_k^ℓ.) [These are the unordered partitions.]
(14) P⃗ = {u ∈ ℕ^n: u_1 ≥ u_2 ≥ ... ≥ u_n ≥ 0}. (P⃗^ℓ, P⃗_k, P⃗_k^ℓ.) [These are the ordered partitions.]
(15) Π_n denotes the group of permutations of the set {1,2,...,n}.
IV. Operators

(1) ∂_i = ∂/∂x_i for 1 ≤ i ≤ n.
(2) ∂⃗ = (∂_1, ∂_2,...,∂_n).
(3) ∂ = ∂_1 + ∂_2 + ... + ∂_n.
(4) ∂^u = ∂_1^{u_1} ∂_2^{u_2} ... ∂_n^{u_n}, u = (u_1,...,u_n).
(5) A(i,j): F_j^ℓ[x] → F_i^ℓ[x], for i ≤ j, is defined by

    A(i,j)p = (1/(j-i)!) ∂^{j-i} p.

(6) R: F^ℓ[x] → F^ℓ[x] is defined by

    (Rp)(x) = (x_1 x_2 ... x_n)^ℓ p(1/x_1, 1/x_2,...,1/x_n).

(7) R_k: F_k^ℓ[x] → F_{nℓ-k}^ℓ[x] is the restriction of R to F_k^ℓ[x].
(8) <,> is the inner product on F^ℓ[x] defined by

    <p,q> = (1/(ℓ!)^n) Σ_{u∈P^ℓ} (∂^u p)(0) (∂^{ū} Rq)(0);

the monomials x^u are then orthogonal, with <x^u,x^u> = \binom{ℓ}{u}^{-1} (see Theorem (2.14)).
(9) S^⊥ denotes the orthogonal complement (with respect to <,>) in F_k^ℓ[x] of the subset S of F_k^ℓ[x].
(10) A(i,j)*: F_i^ℓ[x] → F_j^ℓ[x] is the adjoint (with respect to <,>) of A(i,j): F_j^ℓ[x] → F_i^ℓ[x].
(11) H_k(i) = A(k,i)*A(k,i).
(12) N_j(i): F_i^ℓ[x] → F_i^ℓ[x], for 0 ≤ i ≤ j ≤ nℓ, is defined by

    N_j(i) = Σ_{k=0}^{i} [(-1)^k / \binom{j-k}{i-k}] H_k(i).

(13) σx = (x_{σ(1)}, x_{σ(2)},...,x_{σ(n)}) and (σp)(x) = p(σx) for p ∈ F[x], σ ∈ Π_n.

V. Miscellaneous

(1) k̄ = nℓ - k for 0 ≤ k ≤ nℓ.
(2) |E| denotes the cardinality of the set E.
Appendix 2: Homogeneous and Symmetric Polynomials

The purpose of this appendix is to summarize some basic facts about polynomials and to set some notation.

Recall that the vector space over the field F of all polynomials in n variables with coefficients in F is denoted by F[x] (where x = (x_1,...,x_n)). A basis for the space is given by the monomials

    x^u := x_1^{u_1} x_2^{u_2} ... x_n^{u_n},

where u = (u_1,...,u_n) is a multi-index in the index set

    P := {(u_1,...,u_n): u_i ∈ ℕ, u_i ≥ 0 (1 ≤ i ≤ n)}.

Thus, a typical polynomial p(x) in F[x] has a unique representation

    p(x) = Σ_{u∈P} a_u x^u,

where all but finitely many of the coefficients a_u are 0. The spaces F_k[x], F^ℓ[x], and F_k^ℓ[x] are the subspaces of F[x] obtained by restricting the multi-indices to lie in the index sets

    P_k := {u ∈ P: |u| = u_1 + ... + u_n = k},
    P^ℓ := {u ∈ P: u_i ≤ ℓ (1 ≤ i ≤ n)},
    P_k^ℓ := P_k ∩ P^ℓ,

respectively. The polynomials in F^ℓ[x] are called ℓ-bounded. We note that P_k^ℓ = ∅ if k > nℓ.
Definition: Let k be a non-negative integer. The polynomial p in F[x] is k-homogeneous (or homogeneous of degree k) if

    p(ax) = a^k p(x) for every scalar a.

Theorem: Let p be a polynomial in F[x], say p = Σ_{u∈P} a_u x^u. Then
(1) p is k-homogeneous if and only if p is in F_k[x].
(2) p has a unique decomposition

    p = Σ_{k≥0} p_k,

where each p_k is k-homogeneous. In fact, p_k = Σ_{u∈P_k} a_u x^u. We call this decomposition the homogeneous decomposition of p.
(3) p is in F^ℓ[x] if and only if each p_k is in F^ℓ[x].

Theorem: Let k, ℓ be non-negative integers. Then
(1) dim F_k[x] = |P_k| = \binom{n+k-1}{k}.
(2) dim F^ℓ[x] = |P^ℓ| = (ℓ+1)^n.
(3) dim F_k^ℓ[x] = |P_k^ℓ|. In particular, F_k^ℓ[x] contains only the zero polynomial if k > nℓ.
Symmetric and Alternating Polynomials

To each permutation σ in Π_n (the group of permutations on {1,2,...,n}) and each polynomial p in F[x], we can associate a new polynomial σp in F[x] defined by

    (σp)(x) := p(σx),

where σx := (x_{σ(1)},...,x_{σ(n)}). This operation can clearly be restricted to each of the subspaces F_k[x], F^ℓ[x], F_k^ℓ[x].

Definition: The polynomial p in F[x] is symmetric if

    (σp)(x) = p(x) for each σ in Π_n.

The polynomial is alternating if

    (σp)(x) = (-1)^σ p(x) for each σ in Π_n.

Here, (-1)^σ is +1 (resp. -1) if σ is an even (resp. odd) permutation.

Theorem: Let p be a polynomial in F[x], say p = Σ_{u∈P} a_u x^u. Then
(1) p is symmetric (resp. alternating) if and only if each p_k in the homogeneous decomposition of p is symmetric (resp. alternating).
(2) p is symmetric if and only if a_{σu} = a_u for each σ in Π_n.
(3) p is alternating if and only if a_{σu} = (-1)^σ a_u for each σ in Π_n.

Theorem: Let Sym (resp. Alt) denote the subspace of F[x] consisting of symmetric (resp. alternating) polynomials. Then
(1) A basis for Sym is the set of orbit sums Σ_{α∈O(u)} x^α with multi-indices u in the index set P⃗.
(2) A basis for Alt is the set of alternating sums Σ_{σ∈Π_n} (-1)^σ x^{σu} with multi-indices u in the index set

    P⃗^> := {u ∈ P: u_1 > u_2 > ... > u_n}.

Similar results hold for Sym_k, Sym^ℓ, Sym_k^ℓ and Alt_k, Alt^ℓ, Alt_k^ℓ.
Acknowledgement. We owe much to Dennis Stanton. The initial stimulus which led to this paper arose during conversations with him. Further, he provided us with the first example of a Selberg polynomial which was distinct from Δ(x).
REFERENCES

1. ANDREWS, G.E., Notes on the Dyson conjecture, SIAM J. Math. Anal., 11 (1980), 787-792.
2. ASKEY, R., Some basic hypergeometric extensions of integrals of Selberg and Andrews, SIAM J. Math. Anal., 11 (1980), 938-951.
3. ASKEY, R., Computer algebra and definite integrals, preprint.
4. MACDONALD, I.G., Some conjectures for root systems and finite reflection groups, SIAM J. Math. Anal., 13 (1982), 988-1007.
5. MEHTA, M.L., Random Matrices and the Statistical Theory of Energy Levels, Academic Press, New York, 1967.
6. MORRIS, W.G. II, Constant term identities for finite and affine root systems: conjectures and theorems, Ph.D. thesis, University of Wisconsin-Madison, 1982.
7. RICHARDS, D. St. P., Integrals of Selberg polynomials, preprint.
8. SELBERG, A., Bemerkninger om et multipelt integral, Norsk. Mat. Tidsskr., 26 (1944), 71-78.