REDUCED-PARAMETER MODELS FOR ANALYSIS OF
CAPTURE-RECAPTURE DATA FROM ONE- AND TWO-AGE
CLASS OPEN POPULATIONS
A report of research for U.S.F.W.S. Research Work Order, Unit
Cooperative Agreement No. 14-16-0009-1522, Work Order #4.
Mimeograph Series No. 1675
Cavell Brownie
Department of Statistics
North Carolina State University
Raleigh, NC
27695-8203
Introduction
Management of any animal population requires some knowledge of population size and of survival and reproductive rates, the parameters that determine changes in population size. Estimation of population size, survival rate and reproductive rate is thus an extremely important component of wildlife management and research programs.
Efficient methods for estimating survival rate from banding and
band recovery data were developed by statisticians and published as
early as 1970-71.
However, these methods saw very little use until the development of computer programs (ESTIMATE and BROWNIE) to compute these estimates and the publication of the associated monograph by Brownie et al. (1978). Now, largely because of this computer software, these methods are widely used by biologists.
Population size, survival rate and reproductive rate can all be estimated using data from capture-recapture and capture-resighting experiments. A general model for estimation using data from a single age class was published in 1965 (Jolly 1965, Seber 1965). Computer programs are available to provide estimates under this model, and the model has thus seen some use by biologists. Age-specific models (Pollock 1981) and reduced-parameter single-age models (Jolly 1982) have been developed but have seen virtually no use because of the absence of readily available computer software.
The following material provides the information needed to develop computer routines for implementing reduced-parameter models which assume either constant survival or constant survival and capture rates. This is presented in two parts. Part 1 relates to the models appropriate when data are available for one age class only, and Part 2 describes the models for data from young and adults.
The statistical theory underlying the development of these methods is outlined in Jolly (1982) and in Brownie, Hines and Nichols (in preparation).
NOTATION

The notation used is close to that of Jolly (1982) and Seber's text. The correspondence between this notation and symbols in the first page of output from program JOLLY follows.

Jolly (1982)        Program JOLLY    Meaning
m_i                 NM               # of marked animals in sample i
u_i                 NU               # of unmarked animals in sample i
n_i = m_i + u_i     NN               total # caught in sample i
R_i                 NS               # released from sample i
r_i                 R                # caught in sample i and later recaptured
z_i                 Z                # caught before and after, but not in, sample i
p_i                 P                capture probability
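For implementation, the per-sample summary statistics above can be carried in a simple record. This is only an illustrative sketch (the names are mine, not from program JOLLY, and Python is used throughout these sketches for concreteness):

```python
from dataclasses import dataclass

@dataclass
class SampleStats:
    """Summary statistics for sample i, in the notation of Jolly (1982)."""
    m: int   # NM: marked animals caught in sample i
    u: int   # NU: unmarked animals caught in sample i
    R: int   # NS: number released from sample i
    r: int   # R:  number caught in sample i and later recaptured
    z: int   # Z:  caught before and after, but not in, sample i

    @property
    def n(self) -> int:
        """NN = NM + NU: total number caught in sample i."""
        return self.m + self.u
```

A routine computing the estimates below would then take a list of such records, one per sampling occasion.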
Some additional notation needed is

s      the number of sampling occasions
d_i    the number of marked animals captured at i but not released (d_i <= m_i)
R._i   the number of animals first caught at i and released (so that m_i - d_i = R_i - R._i)
t_i    the number of time units between samples i and i+1
φ      the constant survival rate per unit of time, so that φ_i = φ^{t_i} = survival rate between samples i and i+1
q_i    = 1 - p_i
χ_s    = 1, and 1 - χ_i for i=1,2,...,s-1.

Here 1 - χ_i = probability an animal alive just after sample i is subsequently recaptured.
MODEL B ALGORITHM

Necessary input

s, m_i, u_i, n_i, R_i, r_i, z_i, t_i, d_i, R._i,

plus "starting values" or initial estimates φ^0 and p_i^0, i=2,...,s.
We should consider two methods for obtaining these starting values.
OPTION 1: First compute (in the main program) the Jolly-Seber estimates

φ̂_i, i=1,...,s-2, and p̂_i, i=2,...,s-1.

Then the starting values are

p_i^0 = p̂_i, i=2,...,s-1,

p_s^0 = ( Σ_{i=2}^{s-1} p̂_i ) / (s-2),

with φ^0 obtained analogously as the average of the φ̂_i. Pass these starting values to the Model B algorithm.

This will usually be the best procedure, except in the case of sparse data sets where problems are encountered in computing the Jolly-Seber estimates φ̂_i, p̂_i. For such data sets, it may be possible to get Model B (or D) estimates. In fact, this is one of the reasons for developing the Model B algorithm. The following option would therefore be useful.
OPTION 2: Before computing the Jolly-Seber estimates (or tests), call the Model B algorithm with starting values φ^0, p_2^0,...,p_s^0 supplied by the user in some way (e.g., read in with the data?).
Printed output

Model B estimates φ̂, p̂_i (φ̂^{t_i} if desired), standard errors, and correlations or covariances. Also, N̂_i, B̂_i (M̂_i?), standard errors, and correlations or covariances. Goodness of fit test to Model B.

Output which need not be printed out

1 - χ̂_i, i=1,...,s-1, and q̂_i, i=2,...,s-1.

These are needed for the test of Model B versus the more general Jolly-Seber Model, and the test of Model D versus Model B.
Computations

First, the iterative procedure is carried out to produce maximum likelihood (ML) estimates of φ, χ_1,...,χ_{s-1}. From these estimates (and with χ_s = 1), we obtain p̂_i and M̂_i, i=2,...,s. Lastly, N̂_i, B̂_i are obtained.

Some notation

Recall χ_s = 1 (by definition). Given starting values φ^0, p_2^0,...,p_s^0, calculate

q_i^0 = 1 - p_i^0, i=2,...,s,

and the 1 - χ_i^0, i=1,...,s-1, computed recursively, working backwards from χ_s^0 = 1:

1 - χ_{s-1}^0 = (φ^0)^{t_{s-1}} (1 - q_s^0 χ_s^0), etc., to 1 - χ_1^0.

Then

θ^0 = (φ^0, χ_1^0, ..., χ_{s-1}^0)'

is an s x 1 vector of starting values for the iteration procedure.
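The backward recursion for the starting values can be sketched as follows (a minimal illustration under the recursion 1 - χ_i = φ^{t_i}(1 - q_{i+1} χ_{i+1}); the function name is mine):

```python
def chi_start(phi0, q0, t, s):
    """Backward recursion for starting values of chi:
    chi_s = 1, and 1 - chi_i = phi^{t_i} * (1 - q_{i+1} * chi_{i+1}),
    for i = s-1, ..., 1.

    q0 maps i -> q_i^0 = 1 - p_i^0 (i = 2..s); t maps i -> t_i (i = 1..s-1).
    Returns a dict mapping i -> chi_i^0 (i = 1..s).
    """
    chi = {s: 1.0}
    for i in range(s - 1, 0, -1):
        chi[i] = 1.0 - phi0 ** t[i] * (1.0 - q0[i + 1] * chi[i + 1])
    return chi

# With s = 3, unit time steps, phi0 = 0.8 and q_i^0 = 0.5 for every sample:
chi = chi_start(0.8, {2: 0.5, 3: 0.5}, {1: 1, 2: 1}, 3)
```

Here chi[2] = 1 - 0.8(1 - 0.5) = 0.6 and chi[1] = 1 - 0.8(1 - 0.5 * 0.6) = 0.44.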
θ^n represents the corresponding vector at the nth iteration, n=1,2,3,...

θ_i^n is the ith element of θ^n.

Δ^n is an s x 1 vector of first order partial derivatives, evaluated using elements in θ^n, n=0,1,2,...

Δ_i^n is the ith element of Δ^n.

H^n is an s x s matrix, involving second order partials, evaluated using elements in θ^n, n=0,1,2,...

V^n = (H^n)^{-1} is related to the variance-covariance matrix.

V_{ij}^n is the element in the ith row, jth column of V^n.

Explicit formulas for computing Δ^0, H^0, θ^1, etc., are given below, but first the basic iteration scheme is outlined.
ITERATION SCHEME

Compute Δ^0 and H^0 using the starting values in θ^0 (i.e., φ^0, χ_1^0,...,χ_{s-1}^0, and p_2^0,...,p_s^0).

Compute V^0 = (H^0)^{-1}.

(NOTE that this involves inverting a symmetric matrix and requires an efficient and numerically accurate algorithm.)
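As a sketch of that inversion step (assuming NumPy is available; the report does not prescribe a particular routine), a Cholesky-based solve is both efficient and accurate when H is symmetric positive definite, as an information matrix typically is near the maximum:

```python
import numpy as np

def invert_symmetric(H):
    """Invert a symmetric positive-definite matrix via Cholesky factorization.

    More stable than a general-purpose inverse for well-conditioned H;
    raises LinAlgError if H is not positive definite.
    """
    L = np.linalg.cholesky(H)            # H = L L'
    identity = np.eye(H.shape[0])
    Y = np.linalg.solve(L, identity)     # solve L Y = I
    return np.linalg.solve(L.T, Y)       # solve L' V = Y, so V = H^{-1}

H = np.array([[4.0, 1.0], [1.0, 3.0]])
V = invert_symmetric(H)
```

For poorly conditioned H (sparse data sets), a solve against Δ at each step is preferable to forming the explicit inverse, which is then needed only once at convergence.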
1st iteration

Compute θ^1 = θ^0 + V^0 Δ^0 in matrix notation. Specifically,

θ_1^1 = φ^1 = φ^0 + Σ_{j=1}^{s} V_{1j}^0 Δ_j^0

θ_i^1 = χ_{i-1}^1 = χ_{i-1}^0 + Σ_{j=1}^{s} V_{ij}^0 Δ_j^0,   i=2,...,s.

Using θ^1 and θ^0, check for convergence (see below). If the convergence criterion is not satisfied, compute p_2^1,...,p_s^1 using elements in θ^1, by

p_i = [ (1 - χ_{i-1}) - φ^{t_{i-1}} (1 - χ_i) ] / ( φ^{t_{i-1}} χ_i ).

Use these to compute Δ^1 and H^1. Compute V^1.
2nd iteration

Compute θ^2 = θ^1 + V^1 Δ^1 and check for convergence using θ^2 and θ^1.

nth iteration

Continue iterating in this way until the convergence criterion is satisfied. That is, at the nth iteration, n=1,2,..., compute θ^n = θ^{n-1} + V^{n-1} Δ^{n-1}, and convergence is declared if

|θ_i^n - θ_i^{n-1}| < 10^{-5}   for all i, i=1,2,...,s.

If convergence occurs at the nth iteration, θ^{n-1} is the vector of ML estimates and V^{n-1} the estimated variance-covariance matrix for the parameters φ, χ_1,...,χ_{s-1}. MLE's and variances for p_2,...,p_s are obtained from θ^{n-1}, V^{n-1} (see below).

If the convergence criterion is not met at the nth iteration (i.e., |θ_i^n - θ_i^{n-1}| >= 10^{-5} for any i), then compute Δ^n, H^n, V^n and go to the (n+1)st iteration.

NOTE: The convergence criterion 10^{-5} is arbitrary. I could not find a program listing for ESTIMATE or BROWNIE to see what was used there. The number of iterations should be limited to 20 or 25, and nonconvergence declared if this would be exceeded.
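The scheme above is a Newton-type update, θ^{n+1} = θ^n + V^n Δ^n. A minimal sketch (function names are mine; `grad` and `hess` stand for the Δ and H formulas given in the sections that follow):

```python
import numpy as np

def iterate(theta0, grad, hess, tol=1e-5, max_iter=25):
    """Newton-type iteration: theta_new = theta + H(theta)^{-1} grad(theta).

    grad(theta): the score vector (the Delta formulas).
    hess(theta): H, the negative matrix of second partials.
    Returns (estimates, variance-covariance matrix, converged flag);
    nonconvergence is declared after max_iter iterations.
    """
    theta = np.asarray(theta0, dtype=float)
    V = np.linalg.inv(hess(theta))
    for _ in range(max_iter):
        theta_new = theta + V @ grad(theta)
        if np.max(np.abs(theta_new - theta)) < tol:
            V = np.linalg.inv(hess(theta_new))
            return theta_new, V, True
        theta = theta_new
        V = np.linalg.inv(hess(theta))
    return theta, V, False

# Toy quadratic log likelihood with maximum at (1, 2):
grad = lambda th: -2.0 * (th - np.array([1.0, 2.0]))
hess = lambda th: 2.0 * np.eye(2)
est, V, ok = iterate([0.0, 0.0], grad, hess)
```

For the real problem, `grad` and `hess` would re-evaluate the p_i, q_i from the current θ before applying the formulas below.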
Formulas for computing Δ^n

If L represents the log likelihood function under Model B, the elements of Δ are the first order partials ∂L/∂θ_i. Thus,

Δ_1 = ∂L/∂φ, and Δ_i = ∂L/∂χ_{i-1}, i=2,...,s, given below.

Δ_1 = (1/φ) Σ_{i=2}^{s-1} t_{i-1} { z_i/χ_i - ... - m_i(1 - χ_i)/... }   [OMIT the sum term if s=3]

Δ_i = -(1/χ_{i-1}) [ q_{i-1} m_{i-1}/p_{i-1} - z_{i-1} + R_{i-1} - r_{i-1} ]
      - (1/(φ^{t_{i-1}} q_i χ_i)) [ q_i m_i/p_i - z_i ],   i=3,...,s-1

Δ_s = -(1/χ_{s-1}) [ q_{s-1} m_{s-1}/p_{s-1} - z_{s-1} + R_{s-1} - r_{s-1} ] - .../(1 - χ_{s-1})

To compute the elements of Δ^0, replace φ, p_i, q_i, χ_i in the above expressions for Δ_i with φ^0, p_i^0, q_i^0, χ_i^0, respectively. Similarly, Δ^1 is computed using φ^1, p_i^1, q_i^1, χ_i^1 (produced after the 1st iteration), and so on for Δ^n, n=2,3,...
Formulas for computing H^n

The elements of H correspond to -∂²L/∂θ_i∂θ_j, so H is of dimension s x s and is symmetric, i.e., H_ij = H_ji. Formulas for elements in the diagonal and upper triangle are given below; the rest are obtained by symmetry.

Elements in row 1 are:

H_11 = (1/φ²) Σ_{i=2}^{s-1} t_{i-1}² { ... }   [OMIT the sum term if s=3]

H_1j = ...,   j=3,...,s-1

H_1s = - t_{s-2} m_{s-1} (1 - q_{s-1} χ_{s-1}) / ( φ² p_{s-1} χ_{s-1}² )

Rows 2 to s:

H_ii = ...,   i=3,...,s-1   [OMIT if s=3]

H_ss = ... / χ_{s-1}²

H_{i,i+1} = ...,   i=2,3,...,s-1

H_ij = 0,   j > i+1;  i=2,...,s-2.

Elements of H^n, n=0,1,2,..., are evaluated by substituting φ^n, χ_i^n, p_i^n, q_i^n for the parameters φ, χ_i, p_i, q_i in the above expressions for H_ij. V^n is computed by inverting the symmetric matrix H^n.
Printing Model B estimates

[Question: Do we need to print (φ̂)^{t_1},...,(φ̂)^{t_{s-1}} in addition to φ̂?]

After convergence, θ̂ contains the MLE's φ̂, χ̂_1,...,χ̂_{s-1}, and V the estimated variances-covariances. That is,

V_11 = Var(φ̂),
V_{i+1,i+1} = Var(χ̂_i),   i=1,...,s-1,
V_{1,i+1} = Cov(φ̂, χ̂_i),   i=1,...,s-1.

From θ̂ and V, obtain

p̂_i = [ (1 - χ̂_{i-1}) - φ̂^{t_{i-1}} (1 - χ̂_i) ] / ( φ̂^{t_{i-1}} χ̂_i ),   i=2,...,s.

(Note for p̂_s, define χ̂_s = 1 and the above formula works.)

Var(p̂_i) and the covariances which must be computed are given below in terms of the elements of V, and with φ^{t_i} written as φ_i (recall φ̂_i = (φ̂)^{t_i}). Each is evaluated by substituting χ̂_i for χ_i, q̂_i for q_i, etc., and V_11 for Var(φ̂), etc.

Var(p̂_i) = ...,   i=2,...,s-1   (1)

Var(p̂_s) = ...   (2)

Cov(p̂_i, p̂_j) = ...,   2 <= i,j < s   (3)

Cov(p̂_i, p̂_s) = ...,   i=2,...,s-1   (4)

Cov(p̂_j, p̂_i) = Cov(p̂_i, p̂_j)   (5)

Cov(φ̂, p̂_s) = -(t_{s-1} p̂_s / φ̂) [ V_11 - V_{1s}/χ̂_{s-1} ]   (6)

NOTE: Cov(p̂_i, p̂_i) = Var(p̂_i), so that (3) with j=i, and (1), should yield the same numerical result for a given value of i.
Model B Estimates of M_i, N_i, B_i, Variances and Covariances

M̂_i = (m_i + z_i) / (1 - q̂_i χ̂_i),   i=2,...,s   (7a)

M̂_{i+1} = φ̂_i { M̂_i - m_i + R_i },   i=2,...,s-1   (7b)

Û_i = u_i / p̂_i,   i=2,...,s   (8)

N̂_i = Û_i + M̂_i,   i=2,...,s   (9)

B̂_i = ...,   i=2,...,s-1   (10)

NOTE: Define z_s = 0 and, as before, χ_s = 1, so that (7a) holds for i=s.

To compute variances/covariances for M̂_i, N̂_i, B̂_i, first compute the matrices X and W defined below, replacing the parameters φ, φ_i, q_i, χ_i, etc. by the estimates φ̂, φ̂_i, q̂_i, χ̂_i, etc.

Define

α_kj = 1   if k=j,
α_kj = Π_{ℓ=k+1}^{j} q_ℓ φ_ℓ   if k<j,

for j=1,...,s-1 and k=1,...,j.

Row 1 of the s by (s-1) matrix X is given by (11), j=2,...,s-1, and (12). Rows 2 to s of X are

X_ij = 0,   j=1,...,i-2;  i=3,...,s,

with the remaining elements, j=i,i+1,...,s-1; i=2,...,s-1, given by (13), which also writes out X for the example s=4.

Next, compute W = VX by matrix multiplication, so that W is s by (s-1). Elements of W are denoted W_ij in the formulae below:

Var(M̂_i) = ...,   i=2,...,s   (14)

Cov(M̂_i, M̂_j) = ...,   j=i+1,...,s;  i=2,...,s-1   (15)

Cov(p̂_i, M̂_j) = ...,   i=2,...,s-1;  j=2,...,s   (16)

Cov(p̂_s, M̂_j) = ...,   j=2,...,s   (17)

The variances and covariances in (1) through (10), and (14) through (17), are used in computing Var(N̂_i), Var(B̂_i), and covariances. It is probably not necessary to print these out.

Cov(N̂_i, N̂_j) = Cov(M̂_i, M̂_j) - (Û_i/p̂_i) Cov(p̂_i, M̂_j) - (Û_j/p̂_j) Cov(p̂_j, M̂_i)
                 + (Û_i Û_j / (p̂_i p̂_j)) Cov(p̂_i, p̂_j),   i=2,...,s-1;  j=i+1,...,s   (18)

Var(N̂_i) = ...,   i=2,...,s   (19)

Var(B̂_i) = ...,   i=2,...,s-1   (20)

Cov(B̂_{i-1}, B̂_i) = ...,   i=3,...,s-1   (21)   [OMIT if s=3]

Cov(B̂_i, B̂_j) = ...,   i=2,...,s-3;  j=i+2,...,s-1   (22)   [OMIT if s=3 or 4]
MODEL D ALGORITHM

Necessary input

As for Model B, with starting values via OPTION 1 or OPTION 2.

OPTION 1: Starting values are

φ^0 = Model B estimate φ̂,

p^0 = ( 1/(s-1) ) Σ_{i=2}^{s} p̂_i,

where φ̂ and p̂_2,...,p̂_s are the Model B estimates.

OPTION 2: Read in values for φ^0, p^0.

Printed output

Model D estimates φ̂ and p̂ (φ̂^{t_i} if desired), standard errors, and covariances. Also, Model D estimates N̂_i, B̂_i (M̂_i?), standard errors, covariances.

Computations

The iterative procedure produces ML estimates φ̂ and p̂, and the (estimated) variance-covariance matrix V (dimension 2 by 2). At each iteration, it is necessary to compute

q = 1 - p, and the χ_i, i=1,...,s-1 (with χ_s = 1 as before),

Y_1 = 0;  Y_{i+1} = φ_i { q Y_i + (R_i - r_i)/χ_i },   i+1 = 2,...,s,   (23)

together with (24), i+1 = 2,...,s, and the α_kj, j=1,...,s-1, k=1,...,j, as defined for Model B above.
Iteration Scheme

Iteration proceeds as for Model B but with the vector of starting values θ (2 x 1), and Δ (2 x 1) and H (2 x 2) as defined below.

Formulas for computing Δ

Δ_1 = ∂L/∂φ = ...   (25)

Δ_2 = ∂L/∂p = ...   (26)

At the nth step, Δ^n is computed by substituting the current values φ^n, p^n, χ_i^n, Y_i^n into the above formulas.

Formulas for computing H (2 x 2)

H_11 = ...   (27)

H_12 = H_21 = ...   (28)

H_22 = Σ_{i=2}^{s} ...   (29)

At the nth step, compute H^n by substituting the current values φ^n, p^n, χ_i^n, Y_i^n, respectively, in formulas (27), (28), (29). Also, compute V^n = (H^n)^{-1}, the inverse of the symmetric matrix H^n.
Printing Model D estimates

After convergence, θ̂ contains the Model D estimates φ̂, p̂, and V the variance-covariance matrix. That is,

φ̂ = θ̂_1,  p̂ = θ̂_2,

Var(φ̂) = V_11,  Var(p̂) = V_22,  Cov(φ̂, p̂) = V_12.

Print estimates, standard errors and covariance.

Model D estimates of M_i, N_i, B_i, variances and covariances

M̂_i = ...,   i=2,...,s   (30)

M̂_{i+1} = ...,   i=2,...,s-1   (31)

Û_i = ...,   i=2,...,s   (32)

N̂_i = ...,   i=2,...,s   (33)

B̂_i = ...,   i=2,...,s-1   (34)

These have the same forms as (7a)-(10) for Model B, with φ̂_i = φ̂^{t_i} and p̂_i = p̂ throughout.
To obtain variances/covariances for M̂_i, N̂_i, B̂_i, first compute the matrices X (2 by s-1) and W (2 by s-1) = VX.

Row 1 of X:

X_{1,j-1} = ...,   j=2,...,s  (or j-1 = 1,...,s-1)   (35)

Row 2 of X:

X_{2,j-1} = ...,   j=3,...,s  (or j-1 = 2,...,s-1)   (36)

Compute W = VX by matrix multiplication, and the variances below, which involve the elements of W and V, and S_i, P_i defined as follows. Compute

S_i = (1/φ) Σ_{k=i}^{s-1} t_k α_{i-1,k-1} (1 - χ_k),   i=2,...,s-1;  S_s = 0   (37)

P_i = χ_i + q φ Σ_{k=i}^{s-1} α_{ik} χ_{k+1},   i=2,...,s-1;  P_s = 1   (38)

Var(M̂_i) = ...,   i=2,...,s   (39)

Cov(M̂_i, M̂_j) = ... + [ S_j M̂_j / (1 - q̂χ̂_j) ] V_12 - [ P_j M̂_j / (1 - q̂χ̂_j) ] V_22,
                 i<j,  i=2,...,s-1,  j=i+1,...,s   (40)

[Note that (40) with i=j should give the same numerical result as (39) for the same value of i.]

Cov(p̂, M̂_i) = ...,   i=2,...,s   (41)

The variances and covariances in (39), (40), (41), and in V, are used to compute Var(N̂_i), Var(B̂_i), and covariances.

Var(N̂_i) = ...,   i=2,...,s   (42)

Cov(N̂_i, N̂_j) = Cov(M̂_i, M̂_j) - (Û_i/p̂) Cov(p̂, M̂_j) - (Û_j/p̂) Cov(p̂, M̂_i)
                 + (Û_i Û_j / p̂²) V_22,   i=2,...,s-1,  j=i+1,...,s   (43)

Var(B̂_i) = ...,   i=2,...,s-1   (44)   [OMIT if s=3]

Cov(B̂_{i-1}, B̂_i) = ...,   i=3,...,s-1   (45)

Cov(B̂_i, B̂_j) = ...,   i<j-1,  i=2,...,s-3,  j=i+2,...,s-1   (46)   [OMIT if s=3 or 4]
TESTING BETWEEN MODELS A, B AND D

There are several test statistics which could be used. Jolly (1982) presents two statistics [equations (47), (48)] which are based on the proportions r_i/R_i, i=1,...,s-1. However, the proportions m_i/(m_i+z_i) also provide information about fit of the models, and this information is not utilized in Jolly's test statistics. I would like to compute and print separately the components corresponding to these two proportions, to see how much difference the second component makes, especially for the data in Jolly's example. For the general user, however, only the sum of the two components need be printed.

Jolly presents two different types of test statistics, one for comparing B with A (eqn. 47), the other for comparing D with B (eqn. 48). The latter is based on the likelihood ratio; the former is based more directly on a chi-square statistic. The two are equivalent in large samples, and there is no overwhelming statistical argument supporting the use of one type rather than the other. We can either compute the test statistics as Jolly does (but with the second component included), or we can compute a likelihood ratio test statistic in both cases.

The instructions for programming these tests are therefore preliminary. They will enable us to compare results with those of Jolly. Later it will be necessary to make some changes to produce a version for general use.
Test of Model D vs Model B

Notation

χ̂_iB and χ̂_iD are the Model B and Model D estimates of χ_i, i=1,...,s-1, respectively. p̂_iB and p̂_iD are the Model B and Model D estimates, respectively, of p_i, i=2,...,s-1.

Compute

L_1 = -2 Σ_{i=1}^{s-1} { r_i log_e[ (1-χ̂_iD)/(1-χ̂_iB) ] + (R_i - r_i) log_e[ χ̂_iD/χ̂_iB ] }   (47)

L_2 = -2 Σ_{i=2}^{s-1} { m_i log_e[ p̂_iD/p̂_iB ] + z_i log_e[ (1-p̂_iD)/(1-p̂_iB) ] }   (48)

Print out L_1, L_2 and

"Total chisquare =" (= L_1 + L_2)
"Degrees of freedom =" (= s-2)
"Probability =" (computed in usual way).
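The "Probability" line is the upper-tail chi-square probability of the total. As a sketch, the closed form below holds for even degrees of freedom (for general df one would use an incomplete-gamma routine, e.g. scipy.stats.chi2.sf; the function name here is mine):

```python
import math

def chisq_upper_tail(x, df):
    """Upper-tail probability P(X > x) for a chi-square variate on df
    degrees of freedom. The closed form used here requires even df:
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!
    """
    if df % 2 != 0:
        raise ValueError("closed form shown only for even df")
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= (x / 2.0) / k        # running (x/2)^k / k!
        total += term
    return math.exp(-x / 2.0) * total

# 12.59 is near the 5% critical value for 6 degrees of freedom
# (e.g., a D vs B test with s = 8 samples, so s - 2 = 6 df):
p = chisq_upper_tail(12.59, 6)
```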
Test of Model D vs Model A

Compute two test statistics, print out both.

(i) Compute

L_1 = -2 Σ_{i=1}^{s-1} { r_i log_e[ R_i(1-χ̂_iD)/r_i ] + (R_i - r_i) log_e[ R_i χ̂_iD/(R_i - r_i) ] }   (49)

L_2 = -2 Σ_{i=2}^{s-1} { m_i log_e[ (m_i+z_i)p̂_iD/m_i ] + z_i log_e[ (m_i+z_i)(1-p̂_iD)/z_i ] }   (50)

Print out L_1, L_2 and

"Total chi square" = L_1 + L_2
"Degrees of freedom" = 2s-5
"Probability" (computed in usual way).

(ii) Compute

T_{1i} = [ r_i - R_i(1-χ̂_iD) ]² / ( R_i χ̂_iD (1-χ̂_iD) ),   i=1,...,s-1   (51)

T_{2i} = [ m_i - (m_i+z_i) p̂_iD ]² / ( (m_i+z_i) p̂_iD (1-p̂_iD) ),   i=2,...,s-1   (52)

Print out individual chi-square values T_11,...,T_{1,s-1} and T_22,...,T_{2,s-1}. Also print out T_1 = Σ T_{1i}, T_2 = Σ T_{2i}, and

"Total chi square" = T_1 + T_2
"Degrees of freedom" = 2s-5
"Probability" (computed in usual way).
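The Pearson components (51) and (52) can be sketched directly from their definitions (function names are mine; each compares an observed count with its expectation under the fitted model):

```python
def t1_component(r_i, R_i, chi_iD):
    """Component (51): observed r_i vs expected R_i * (1 - chi_iD)."""
    expected = R_i * (1.0 - chi_iD)
    return (r_i - expected) ** 2 / (R_i * chi_iD * (1.0 - chi_iD))

def t2_component(m_i, z_i, p_iD):
    """Component (52): observed m_i vs expected (m_i + z_i) * p_iD."""
    n = m_i + z_i
    expected = n * p_iD
    return (m_i - expected) ** 2 / (n * p_iD * (1.0 - p_iD))
```

For example, with R_i = 100, r_i = 60 and χ̂_iD = 0.5, the expected value is 50 and the component is 100/25 = 4.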
Test of Model B vs Model A (Omit if s=3)

Compute two test statistics as for the test of D vs A.

(i) Compute L_1 as in (49) but with χ̂_iB in place of χ̂_iD. Compute L_2 as in (50) but with p̂_iB in place of p̂_iD. Print out L_1, L_2 and (in the usual format) the total chi-square = L_1 + L_2, with degrees of freedom = s-3, and probability.

(ii) Compute T_{1i} as in (51) but with χ̂_iB in place of χ̂_iD. Compute T_{2i} as in (52) but with p̂_iB in place of p̂_iD. Compute

T_1 = Σ_{i=1}^{s-1} T_{1i},  T_2 = Σ_{i=2}^{s-1} T_{2i}.

Print out individual chi-square values T_11,...,T_{1,s-1}, T_22,...,T_{2,s-1}; also, T_1, T_2 and total chi-square = T_1 + T_2, with degrees of freedom = s-3.
Comments on Structure of Program

The order in which computations are carried out for models A (i.e., the Jolly-Seber model), B and D should depend on the data set to be analyzed. For "good" data sets, the best way to proceed is to do the computations for model A first, and use the Jolly-Seber or model A estimates φ̂_i, p̂_i to get starting values for the model B algorithm. For poor data sets where some summary statistics are zero, it may be better to start with the simplest model (model D) then proceed to B then A. This leads to the following possibilities:

OPTION 1: (for good data sets--should be the default?)

(i) First compute Jolly-Seber estimates (model A estimates).

(ii) Use these to get starting values for the model B algorithm (see the model B instructions).

(iii) Use model B estimates φ̂, p̂_i to get starting values for the model D algorithm (see the model D instructions).

(iv) Proceed to tests.

OPTION 2: (for poor data sets)

(i) Begin with the model D algorithm using starting values φ^0, p^0 read in with the data.

(ii) Use the model D estimates φ̂, p̂ to get starting values for model B (i.e., φ̂ from model D is passed as the initial value φ^0, and p̂ from model D is passed as p_i^0, i=2,...,s, to the model B algorithm). This is instead of reading in φ^0 and p_i^0 as initial values for Model B as I had suggested earlier.

(iii) Proceed to Jolly-Seber (model A) computations and tests, if possible.
Part 2

COMPUTER ALGORITHMS FOR MODELS B2 AND D2

To be incorporated in program JOLLYAGE

Reduced parameter models for the two-age class case.

With age-dependent models, the time for an animal to mature from the first to the second age-class must be the same as (or simply related to) the period between successive bandings. Thus, as in Pollock (1981), the period between bandings and the time spent in age-class 0 are both assumed to be one year in the models considered here. That is, t_i = 1 for i=1,...,s-1, so that for the models with constant survival, φ_i = φ^{t_i} = φ, i=1,...,s-1. I cannot think of a useful way to generalize and allow variable t_i in the two-age-class models.
Outline of models considered

Model A2 - variable or time-specific survival for adults and young, variable capture rates.
 - age-dependent generalization of the Jolly-Seber Model.
 - same structure as Pollock's (1981) model, but with M_i viewed as variables, not fixed parameters.
 - estimable survival and capture rate parameters are the φ_i^a, φ_i^y and p_i.

Model B2 - constant survival for adults and young, variable capture rates.
 - age-dependent generalization of Jolly's (1982) Model B.
 - estimable survival and capture rate parameters are φ^a, φ^y, p_2,...,p_s.

Model D2 - constant survival for adults and young, constant capture rates.
 - age-dependent generalization of Jolly's (1982) Model D.
 - estimable survival and capture rate parameters are φ^a, φ^y, p.

Notation: Because of the complexity of formulae below, it seemed less confusing to use superscripts "a" and "y", instead of a prime or "0" or "1", to denote age dependence. The relationship between notation here, that in Pollock (1981), and the JOLLYAGE output is indicated below (but note that I have a question concerning the equivalence of NB(I) and z_i).
Here      Pollock (1981)    Program JOLLYAGE (page 1 of output)
n_i^a     n_i               NN(I)
n_i^y     n_i'              NN'(I)
m_i       m_i               NM(I)
z_i       ? ?               NB(I)
R_i^a     R_i               S(I)
R_i^y     R_i'              S'(I)
r_i^a     r_i               R(I)
r_i^y     r_i'              R'(I)
N_i^a     N_i               N(I)
M_i       M_i + M_i'        M(I)
φ_i^a     φ_i               PHI(I)
φ_i^y     φ_i'              PHI'(I)
p_i       p_i               P(I)

and z_{i+1} = z_i + r_i^a + r_i^y - m_{i+1},   i=2,...,s-2, with z_s = 0.
Additional notation (where s is the number of samples)

u_i^a = no. of unmarked adults caught at i
U_i^a = no. of unmarked adults present just before sample i

Define M_1 = 0 and z_s = 0.
Input for Model B2 algorithm

s, and the summary statistics listed above (m_i, z_i, u_i^a, R_i^a, R_i^y, r_i^a, r_i^y), plus starting values for φ^a, φ^y, p_2,...,p_s.

OPTION 1: These starting values can be computed by averaging the Pollock (1981) estimates φ̂_i^1 and φ̂_i^0, and from

M̂_i = m_i + R_i^a z_i / r_i^a,   i=2,...,s-1.

OPTION 2: Starting values for φ^a, φ^y, p_2,...,p_s can be obtained from the Model D2 estimates of constant φ^a, φ^y and p (to be described later).
Computations

The iterative procedure is carried out to produce ML estimates φ̂^a, χ̂_1^a,...,χ̂_{s-1}^a, φ̂^y. From these estimates,

p̂_i = [ (1 - χ̂_{i-1}^a) - φ̂^a (1 - χ̂_i^a) ] / ( φ̂^a χ̂_i^a ),   i=2,...,s,

is obtained, with q̂_i = 1 - p̂_i.

Given starting values φ^{a,0}, φ^{y,0} and p_2^0,...,p_s^0, calculate q_i^0 = 1 - p_i^0, i=2,...,s, and then obtain starting values for 1 - χ_i^a and 1 - χ_i^y, i=1,...,s-1, using the formulas

1 - χ_i^a = φ^a [ 1 - q_{i+1} χ_{i+1}^a ],
1 - χ_i^y = φ^y [ 1 - q_{i+1} χ_{i+1}^a ],

where χ_s^a = 1.

Iteration is carried out as for Model B in program JOLLY, but with θ, Δ, H and V as defined below. The vector of estimates θ ((s+1) x 1) contains the elements

θ = (φ^a, χ_1^a, ..., χ_{s-1}^a, φ^y)'.

At each step of the iteration, elements in θ are used to calculate updated values for p_i, q_i, i=2,...,s, and 1 - χ_i^y, i=1,...,s-1. These are used to re-evaluate Δ ((s+1) x 1), H ((s+1) x (s+1)) and V ((s+1) x (s+1)) = H^{-1}, using the formulas below.
Elements of Δ ((s+1) x 1)

Δ_1 = ∂L/∂φ^a = (1/φ^a) Σ_{i=2}^{s-1} { z_i/χ_i^a - ... }   (1)

Δ_2 = ...   (2)

Δ_i = -(1/χ_{i-1}^a) [ q_i m_i/p_i - z_i + R_i^a - r_i^a ] - ...   (3)

Δ_{i+1} = ...,   i+1=3,...,s   (4)

Δ_{s+1} = ∂L/∂φ^y = (1/φ^y) Σ_{i=1}^{s-1} { ... R_i^y ... / χ_i^y }

Elements of H ((s+1) x (s+1))

Row 1:

H_11 = ...   (5)

H_12 = ...   (6)

H_{1,i+1} = ...,   i+1=3,...,s   (7)

H_{1,s+1} = -(1/φ^y) Σ_{i=1}^{s-1} ... R_i^y ... / χ_i^y   (8)

Rows 2 to s:

H_ii = ...   (9)

H_{i,i+1} = ...,   i+1=3,...,s;  i=2,...,s-1   (10)

H_ij = 0,   i+1 < j <= s   (11), (12)

H_{i,s+1} = ... R_{i-1}^y ... / (χ_{i-1}^a χ_{i-1}^y),   i=2,...,s   (13)

Row s+1:

H_{s+1,s+1} = (1/(φ^y)²) Σ_{i=1}^{s-1} ...   (14)

Elements H_ij for which i>j are obtained from the above by symmetry, i.e., H_ij = H_ji, i=2,...,s, j=1,...,i-1.
Model B2 estimates

After convergence, the elements of θ̂ are φ̂^a, χ̂_1^a,...,χ̂_{s-1}^a, φ̂^y, and

V_11 = Var(φ̂^a),  V_{s+1,s+1} = Var(φ̂^y),

with the remaining diagonal elements the variances of the χ̂_i^a. It may not be necessary to print out the χ̂_i^a (or 1 - χ̂_i^a) and their variances/covariances. However, the p̂_i and their variances/covariances should be calculated (as below) and printed.

Var(p̂_i) = ...,   i=2,...,s-1   (15)

Cov(p̂_i, p̂_j) = ...,   2 <= i,j < s   (16)

Cov(p̂_i, p̂_s) = ...,   i=2,...,s-1   (17)

Var(p̂_s) = ...   (18)

Cov(φ̂^a, p̂_i) = ...,   i=2,...,s-1, with a separate expression for i=s   (19)

Cov(φ̂^y, p̂_i) = ...,   i=2,...,s-1, with a separate expression for i=s   (20)
Model B2 estimates of M_i, N_i^a, B_i^a

Note that M_i^y = 0 by definition, M_i is actually M_i^a, and N_i^y, B_i^y are not estimable.

M̂_i = (m_i + z_i) / (1 - q̂_i χ̂_i^a),   i=2,...,s

Û_i^a = u_i^a / p̂_i,   i=2,...,s

N̂_i^a = Û_i^a + M̂_i,   i=2,...,s

B̂_i^a = ...,   i=2,...,s-1

To obtain variances/covariances for M̂_i, N̂_i^a, B̂_i^a, compute the matrices X and W defined below. Let

α_ij = 1   if i=j, j=1,...,s-1,
α_ij = (φ^a)^{j-i} q_{i+1} ··· q_j = (φ^a)^{j-i} Π_{k=i+1}^{j} q_k   if i<j, j=2,...,s-1, i=1,...,j.

Elements of X ((s+1) by (s-1)):

Row 1: X_1j = ...,   j=2,...,s-1, with a separate expression for j=1.

Rows 2 to s: X_{i,i-1} = ...,   i=3,...,s;  X_ij = ...,   i=3,...,s-1, j=i,...,s-1;  and X_ij = 0 for j <= i-2, i=3,...,s.

Row s+1: X_{s+1,j} = ... Σ_{i=1}^{j} R_i^y α_ij ... ,   j=1,...,s-1.

Next, compute W = VX by matrix multiplication. In terms of the elements of V and W:

Var(M̂_i) = ...,   i=2,...,s

Cov(M̂_i, M̂_j) = ...,   i<j,  i=2,...,s-1,  j=i+1,...,s

Cov(p̂_i, M̂_j) = ...,   i=2,...,s-1,  j=2,...,s

Var(N̂_i^a) = Var(M̂_i) + (u_i^a/p̂_i) [ q̂_i + (u_i^a/p̂_i) Var(p̂_i) - 2 Cov(p̂_i, M̂_i) ],   i=2,...,s

Cov(N̂_i^a, N̂_j^a) = Cov(M̂_i, M̂_j) - (u_i^a/p̂_i) Cov(p̂_i, M̂_j) - (u_j^a/p̂_j) Cov(p̂_j, M̂_i)
                     + (u_i^a u_j^a / (p̂_i p̂_j)) Cov(p̂_i, p̂_j),   i=2,...,s-1;  j=i+1,...,s

Var(B̂_i^a) = ...,   i=2,...,s-1

Cov(B̂_{i-1}^a, B̂_i^a) = ...,   i=3,...,s-1   [OMIT if s=3]

Cov(B̂_i^a, B̂_j^a) = ... + R_i^y R_j^y V_{s+1,s+1},   i=2,...,s-3,  j=i+2,...,s-1   [OMIT if s=3 or 4]
Model D2

Necessary Input

Same as for Model B2, plus starting values for φ^a, φ^y and p, via one of several options.

OPTION 1. Compute starting values for φ^a, φ^y and p by averaging the corresponding Model A2 estimates.

OPTION 2. Use the Model B2 estimates φ̂^a, φ̂^y, and an average of the p̂_i, as starting values.

OPTION 3. Read in starting values for φ^a, φ^y and p.
COMPUTATIONS

The iterative procedure produces ML estimates φ̂^a, φ̂^y and p̂, and the corresponding estimated variance-covariance matrix V (dimension 3 x 3). At each iteration, it is necessary to compute

q = 1 - p,

1 - χ_i^a and 1 - χ_i^y, i=1,...,s-1 (with χ_s^a = 1 as usual),

δ_i = Σ_{k=i}^{s-1} (φ^a q)^{k-i} [ 1 - χ_k^a ],   i=1,...,s-1  (= φ^a p if i=s-1),  and  δ_s = 0.

Iteration Scheme

Iteration proceeds as for Models B, D, B2, with θ, Δ, H and V as defined below.

Elements of θ (3 x 1):

θ = (φ^a, p, φ^y)'.
Elements of Δ (3 x 1)

Δ_1 = ∂L/∂φ^a = (1/φ^a) Σ_{i=1}^{s-1} [ ... - (R_i^a - r_i^a)/χ_i^a ... ]

Δ_2 = ∂L/∂p = (1/pq) Σ_{i=1}^{s-1} [ ... ]

Δ_3 = ∂L/∂φ^y = (1/φ^y) Σ_{i=1}^{s-1} [ R_i^y - (R_i^y - r_i^y)/χ_i^y ]

Elements of H (3 x 3)

H_11 = Σ_{i=1}^{s-1} ...

H_22 = ...

H_33 = (1/(φ^y)²) Σ_{i=1}^{s-1} ...

H_12 = H_21 = -(1/pq) Σ_{i=1}^{s-1} ...

H_13 and H_23 are given by similar sums; H_31 and H_32 are obtained from H_ji = H_ij.

V (3 x 3)

As before, V is obtained by inverting H, i.e., V = H^{-1}.
Printing out estimates for Model D2

After convergence, the elements of θ̂ are φ̂^a, p̂, φ̂^y, and V is the estimated variance-covariance matrix for φ̂^a, p̂, φ̂^y. Thus,

V_11 = Var(φ̂^a),  V_22 = Var(p̂),  V_33 = Var(φ̂^y),

V_12 = Cov(φ̂^a, p̂),  V_23 = Cov(p̂, φ̂^y),  V_13 = Cov(φ̂^a, φ̂^y).

These estimates, standard errors and covariances (correlations?) should be printed.
Estimates of M_i, N_i^a, B_i^a

M̂_i = (m_i + z_i) / (1 - q̂ χ̂_i^a),   i=2,...,s

Û_i^a = u_i^a / p̂,   i=2,...,s

N̂_i^a = Û_i^a + M̂_i,   i=2,...,s

B̂_i^a = Û_{i+1}^a - φ̂^a q̂ Û_i^a + R_i^y φ̂^y,   i=2,...,s-1

Corresponding variances and covariances are obtained using the matrix W = VX (3 by s-1), where X is defined below.

Matrix X (3 by s-1)

Row 1: X_1j = ... Σ_{i=1}^{j} (q φ^a)^{j-i} [ ... δ_i M_i ... ],   j=1,...,s-1

Row 2: X_2j = -q χ_{j+1}^a Σ_{i=1}^{j} (q φ^a)^{j-i} [ ... δ_i M_i / (1 - q χ_i^a) ... ],   j=1,...,s-1
(the i=1 term vanishes)

Row 3: X_3j = ...,   j=1,...,s-1

Matrix W (3 by s-1)

Compute W = VX by matrix multiplication. Also compute

P_i = δ_i - ...,   i=2,...,s  (recall δ_s = 0).

Then,

Var(M̂_i) = ... + P_i² M̂_i ... V_22 ...,   i=2,...,s

Cov(M̂_i, M̂_j) = ... + [ q δ_j M̂_j / (φ̂^a (1 - q χ̂_j^a)) ] V_11 ... + P_j M̂_j V_12 ...,
                 for i<j,  i=2,...,s-1,  j=i+1,...,s.

Note that in Cov(M̂_i, M̂_s) all terms involving δ_s will be 0.

Var(N̂_i^a) = ...,   i=2,...,s

Cov(p̂, M̂_i) = ...,   i=2,...,s
Cov(N̂_i^a, N̂_j^a) = Cov(M̂_i, M̂_j) - (U_i^a/p̂) Cov(p̂, M̂_j) - (U_j^a/p̂) Cov(p̂, M̂_i)
                     + (U_i^a U_j^a / p̂²) V_22,   i=2,...,s-1,  j=i+1,...,s

Var(B̂_i^a) = ... + (q U_i^a)² V_11 + ... V_22 + (R_i^y)² V_33 + ... V_12 + ... V_13 + ... V_23,
             i=2,...,s-1

Cov(B̂_{i-1}^a, B̂_i^a) = ...,   i=3,...,s-1   [OMIT if s=3]

Cov(B̂_i^a, B̂_j^a) = ...,   i=2,...,s-3,  j=i+2,...,s-1   [OMIT if s=3 or 4]
Note that formulae for B̂_i^a, variances and covariances have been included for Models B2 and D2, but are not given in Pollock (1981). Pollock (1981) states that B_i^a refers to recruitment of adults through immigration only [page 523 (first paragraph)]. However, the estimators B̂_i^a defined above seem to include recruitment through survival of unmarked young in year i. If the B̂_i^a do not seem to be meaningful quantities, then the program need not compute the B̂_i^a, variances and covariances.
Testing between Models D2, B2 and A2

(i) Test of Model B2 versus Model A2

Let χ̂_{i,B2}^a and χ̂_{i,B2}^y be the Model B2 estimates of χ_i^a and χ_i^y, respectively, i=1,...,s-1, and let

p̂_{i,B2} = p̂_i / (1 - q̂_i χ̂_{i,B2}^a),   i=2,...,s-1,

where p̂_i, q̂_i are the Model B2 estimates. Compute

T_{1i}^a = [ r_i^a - R_i^a(1-χ̂_{i,B2}^a) ]² / ( R_i^a χ̂_{i,B2}^a (1-χ̂_{i,B2}^a) ),   i=1,...,s-1

T_{2i} = [ m_i - (m_i+z_i) p̂_{i,B2} ]² / ( (m_i+z_i) p̂_{i,B2} (1-p̂_{i,B2}) ),   i=2,...,s-1

T_{1i}^y = [ r_i^y - R_i^y(1-χ̂_{i,B2}^y) ]² / ( R_i^y χ̂_{i,B2}^y (1-χ̂_{i,B2}^y) ),   i=1,...,s-1

Print out the individual chi-square values T_11^a,...,T_{1,s-1}^a, T_22,...,T_{2,s-1}, and T_11^y,...,T_{1,s-1}^y. Also print out

T_1^a = Σ_{i=1}^{s-1} T_{1i}^a,  T_2 = Σ_{i=2}^{s-1} T_{2i},  T_1^y = Σ_{i=1}^{s-1} T_{1i}^y,

and

"Total chi-square" = T_1^a + T_2 + T_1^y
"Degrees of freedom" = 2s-5
"Probability" (computed in usual way).

(ii) Model D2 versus Model A2

Let χ̂_{i,D2}^a and χ̂_{i,D2}^y be the Model D2 estimates of χ_i^a and χ_i^y, respectively, and let

p̂_{i,D2} = p̂ / (1 - q̂ χ̂_{i,D2}^a),   i=2,...,s-1.

Compute T_{1i}^a (i=1,...,s-1), T_{2i} (i=2,...,s-1) and T_{1i}^y (i=1,...,s-1) as in (i), but with D2 estimates in place of B2 estimates. Print out the individual values T_11^a,...,T_{1,s-1}^a, T_22,...,T_{2,s-1}, T_11^y,...,T_{1,s-1}^y, and the totals T_1^a, T_2, T_1^y. Also print out

"Total chi-square" = T_1^a + T_2 + T_1^y
"Degrees of freedom" = 3s-7
"Probability" (computed in usual way).

(iii) Model D2 versus Model B2

Compute

L_1^a = -2 Σ_{i=1}^{s-1} { r_i^a log_e[ (1-χ̂_{i,D2}^a)/(1-χ̂_{i,B2}^a) ] + [R_i^a - r_i^a] log_e[ χ̂_{i,D2}^a/χ̂_{i,B2}^a ] }

L_2 = -2 Σ_{i=2}^{s-1} { m_i log_e[ p̂_{i,D2}/p̂_{i,B2} ] + z_i log_e[ (1-p̂_{i,D2})/(1-p̂_{i,B2}) ] }

L_1^y = -2 Σ_{i=1}^{s-1} { r_i^y log_e[ (1-χ̂_{i,D2}^y)/(1-χ̂_{i,B2}^y) ] + [R_i^y - r_i^y] log_e[ χ̂_{i,D2}^y/χ̂_{i,B2}^y ] }

Print out the individual components L_11^a,...,L_{1,s-1}^a, L_22,...,L_{2,s-1}, and L_11^y,...,L_{1,s-1}^y. Also print

"Total chi-square" = L_1^a + L_2 + L_1^y
"Degrees of freedom" = s-2
"Probability" (computed in usual way).
Checking for Small Expectations

In carrying out these tests, before computing individual T_ij or L_ij values, it will be necessary to check for small expectations as follows:

(i) B2 versus A2. Check for values < 2 among:

R_i^a χ̂_{i,B2}^a,  R_i^a(1-χ̂_{i,B2}^a),  R_i^a - r_i^a,  r_i^a,   i=1,...,s-1

R_i^y χ̂_{i,B2}^y,  R_i^y(1-χ̂_{i,B2}^y),  R_i^y - r_i^y,  r_i^y,   i=1,...,s-1

(m_i+z_i) p̂_{i,B2},  (m_i+z_i)(1-p̂_{i,B2}),  m_i,  z_i,   i=2,...,s-1

(ii) D2 versus A2. As for B2 versus A2 above, but replacing B2 estimates in the formulae with D2 estimates (i.e., replacing χ̂_{i,B2}^a with χ̂_{i,D2}^a, etc.).

(iii) D2 versus B2. Check for values < 2 among:

R_i^a χ̂_{i,D2}^a,  R_i^a(1-χ̂_{i,D2}^a),  R_i^a χ̂_{i,B2}^a,  R_i^a(1-χ̂_{i,B2}^a),   i=1,...,s-1

R_i^y χ̂_{i,D2}^y,  R_i^y(1-χ̂_{i,D2}^y),  R_i^y χ̂_{i,B2}^y,  R_i^y(1-χ̂_{i,B2}^y),   i=1,...,s-1

(m_i+z_i) p̂_{i,D2},  (m_i+z_i) p̂_{i,B2},  (m_i+z_i)(1-p̂_{i,D2}),  (m_i+z_i)(1-p̂_{i,B2}),   i=2,...,s-1

When expectations < 2 are found, it will be necessary to pool before computing the T_ij or L_ij value, for example, as described for testing between models D, B and A. Alternatively, the component T_ij or L_ij could be omitted entirely (and a degree of freedom subtracted from the degrees of freedom for the total chi-square).
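The pooling step can be sketched as follows (illustrative only; it merges adjacent cells until each pooled expectation reaches 2, with a trailing undersized remainder folded into the last pooled cell):

```python
def pool_small_cells(observed, expected, minimum=2.0):
    """Pool adjacent (observed, expected) cells so that every pooled
    expectation is at least `minimum`. Returns the pooled lists; any
    leftover undersized cells at the end are merged into the last cell.
    """
    obs_out, exp_out = [], []
    o_acc = e_acc = 0.0
    for o, e in zip(observed, expected):
        o_acc += o
        e_acc += e
        if e_acc >= minimum:
            obs_out.append(o_acc)
            exp_out.append(e_acc)
            o_acc = e_acc = 0.0
    if e_acc > 0.0 and obs_out:
        obs_out[-1] += o_acc
        exp_out[-1] += e_acc
    return obs_out, exp_out

obs, exp = pool_small_cells([3, 1, 0, 5], [2.5, 1.2, 0.5, 4.0])
```

Each pooling of two cells into one should be matched by subtracting a degree of freedom from the total, as noted above.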
Goodness of fit tests

(i) Test of fit to Model B2

"Test of fit" chi-square = chi-square for B2 versus A2 + chi-square for test of fit to A2.

df for test of fit to B2 = df for B2 vs A2 + df for test of fit to A2.

(ii) Test of fit to Model D2

"Test of fit" chi-square = chi-square for D2 vs A2 + chi-square for test of fit to A2.

df for test of fit to D2 = df for D2 vs A2 + df for test of fit to A2.
Literature Cited

Brownie, C., Anderson, D. R., Burnham, K. P., and Robson, D. S. (1978). Statistical Inference from Band Recovery Data - A Handbook. Resource Publication No. 131. Washington, DC: Fish and Wildlife Service, United States Department of the Interior.

Jolly, G. M. (1965). Explicit estimates from capture-recapture data with both death and immigration--stochastic model. Biometrika 52, 225-247.

Jolly, G. M. (1982). Mark-recapture models with parameters constant in time. Biometrics 38, 301-321.

Pollock, K. H. (1981). Capture-recapture models allowing for age-dependent survival and capture rates. Biometrics 37, 521-529.

Seber, G. A. F. (1965). A note on the multiple-recapture census. Biometrika 52, 249-259.