Conditionally Unbiased Bounded Influence Robust Regression, with
Applications to Generalized Linear Models

H. R. Künsch¹, L. A. Stefanski², R. J. Carroll³

¹Seminar für Statistik, ETH-Zentrum, CH-8092 Zürich, Switzerland.
²Department of Statistics, North Carolina State University, Raleigh, NC 27695
(USA). Research supported by the National Science Foundation.
³Department of Statistics, University of North Carolina, Chapel Hill, NC 27514
(USA). Research supported by the Air Force Office of Scientific Research.

ABSTRACT
We propose a class of robust regression estimators with bounded influence
and conditionally unbiased estimating functions given the design. Optimal
estimators are found within this class. Applications are made to
generalized linear models. An example applying logistic regression to food
stamp data is discussed.
Key Words and Phrases: Robust regression, Bounded influence, Asymptotic
bias, Generalized linear models, Linear regression.
1. INTRODUCTION

In this paper we study robust estimation in a model with explanatory
variables X. The (Xᵢ, Yᵢ) are assumed to be independent and identically
distributed with joint distribution P_θ(dy|x)F(dx). We consider
M-estimators θ̂ₙ defined implicitly by the equation

    Σᵢ₌₁ⁿ ψ(Yᵢ, Xᵢ, θ̂ₙ) = 0.                                        (1.1)

In (1.1), θ belongs to a subset Ω of ℝᵖ and ψ takes values in ℝᵖ.
Usually, ψ is required to be Fisher-consistent, i.e.,

    ∫∫ ψ(y, x, θ) P_θ(dy|x) F(dx) = 0    for all θ,                 (1.2)

which implies consistency in the usual sense under weak regularity
conditions. In this paper we require a stronger form of (1.2):

    ∫ ψ(y, x, θ) P_θ(dy|x) = 0    for all x and all θ,              (1.3)

which we will call conditional Fisher-consistency.
In the linear model with symmetric errors, essentially all
Fisher-consistent estimators which are optimal in some sense automatically
satisfy (1.3). This is not the case with asymmetric errors or for
generalized linear models. Stefanski et al. (1986) have investigated
robust estimators satisfying (1.2) (with F the empirical distribution
function of the {Xᵢ}) but not (1.3). However, conditional
Fisher-consistency is an appealing concept because it does not involve the
distribution of the explanatory variables {Xᵢ}, which is independent of
the parameter of interest. Moreover, these estimators have the advantages
of being computationally simpler in certain cases (Section 3) and less
affected by the estimation of nuisance parameters (Section 4).
Recall some general definitions and results from robust statistics (see
Hampel et al., 1986). The influence function of an M-estimator is

    IC_ψ(y, x, θ) = D_ψ⁻¹(θ) ψ(y, x, θ),                            (1.4)

where

    D_ψ(θ) = − (∂/∂ξ) ∫∫ ψ(y, x, ξ) P_θ(dy|x) F(dx) |_{ξ=θ}.       (1.5)

Under regularity conditions, N^{1/2}(θ̂_N − θ) is asymptotically normal
with covariance matrix

    D_ψ⁻¹(θ) W_ψ(θ) D_ψ⁻ᵀ(θ) = V(ψ) = V(ψ, θ),                     (1.6)

where

    W_ψ(θ) = E[ψ(θ) ψ(θ)ᵀ].                                         (1.7)
Finally we consider the self-standardized influence

    s(ψ)² = sup_{y,x} ψᵀ W_ψ⁻¹ ψ
          = sup_{y,x} sup_{λ≠0} (λᵀψ)² / (λᵀ W_ψ λ),                (1.8)

which measures the maximal influence an observation y, x can have on a
linear combination of interest, standardized by the standard deviation of
the estimation of this linear combination. Integrating (1.8) shows that
s(ψ)² ≥ p. For other measures of influence, see for example Giltinan et
al. (1986).

The usual goal of bounded influence estimation is to minimize the
asymptotic covariance (1.6) subject to a bound on the measure of
influence, in this case (1.8). As in Giltinan et al. (1986), optimal
estimators satisfying this goal depend on the measure of influence.
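As an illustration of (1.8), the sketch below approximates s(ψ)² for a
Mallows-type simple-regression score with a Huber ψ; the model, weight
function and tuning constant are our own illustrative choices, not from the
text. W_ψ is estimated by Monte Carlo, and the supremum over y at each x is
taken analytically, since |ψ| is largest where the Huber function attains ±b.

```python
import numpy as np

rng = np.random.default_rng(0)
b = 1.345

# Mallows score for simple regression at the true parameter (0, 1):
# psi(y, x) = w(x) * H_b(y - x) * (1, x)^T  with  w(x) = (1 + x^2)^(-1/2)
x = rng.normal(size=200_000)
y = x + rng.normal(size=x.size)
w = 1.0 / np.sqrt(1.0 + x ** 2)
r = np.clip(y - x, -b, b)                       # Huber function H_b
psi = (w * r)[:, None] * np.column_stack([np.ones_like(x), x])

W = psi.T @ psi / len(x)       # Monte Carlo estimate of W_psi = E[psi psi^T]
Winv = np.linalg.inv(W)

# sup over y at fixed x is attained where H_b = +/- b; then scan a grid of x
grid = np.linspace(-6.0, 6.0, 241)
vals = [(b / np.sqrt(1.0 + gx ** 2)) ** 2
        * (np.array([1.0, gx]) @ Winv @ np.array([1.0, gx])) for gx in grid]
s2 = float(max(vals))
```

Because w(x) downweights large x, the supremum stabilizes as |x| grows, and
the computed value comfortably exceeds the integrated lower bound p = 2.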
2. LINEAR REGRESSION WITH ASYMMETRIC ERRORS

The purpose of this section is to show that the differences between (1.2)
and (1.3) are relevant also in linear regression if the errors are not
symmetric. We consider the model

    Yᵢ = θ₀ + xᵢᵀθ₁ + uᵢ,                                           (2.1)

where the {uᵢ} are independent and identically distributed with common
density g having no point of symmetry. The vector θ₁ is of length (p−1).
The usual design matrix is assumed to be of full rank. The most important
proposals for M-estimators in linear regression are the Mallows and
Schweppe forms:

    ψ_M(y, x, θ) = w(x) ψ(y − θ₀ − xᵀθ₁) (1, xᵀ)ᵀ;                  (2.2)

    ψ_S(y, x, θ) = w(x) ψ((y − θ₀ − xᵀθ₁) / w(x)) (1, xᵀ)ᵀ.         (2.3)

Here w(x) is a scalar weight function and ψ is a scalar function. Without
additional assumptions on g, θ₀ is not identifiable, but the following
result gives conditions for consistency of θ₁. For related results, see
Carroll (1979).
THEOREM 2.1
i) For the Mallows form (2.2), there is a constant τ such that
ψ_M(y, x, θ₀+τ, θ₁) satisfies (1.3).
ii) If the xᵢ are symmetrically distributed with center c ∈ ℝ^{p−1} and if
w(x−c) = w(−x+c), then there is a constant τ such that the Schweppe form
ψ_S(y, x, θ₀+τ, θ₁) satisfies (1.2), but in general not (1.3).
PROOF: Part i) is obvious by defining τ as the solution of
∫ ψ(u−τ) g(u) du = 0. In case ii) define τ as the solution of

    ∫∫ w(x) ψ((u−τ)/w(x)) g(u) F(dx) du = 0.

This is just (1.2) for the first component of ψ_S. For the other
components we observe that

    ∫∫ xⱼ w(x) ψ((u−τ)/w(x)) g(u) F(dx) du
        = ∫∫ (xⱼ − cⱼ) w(x) ψ((u−τ)/w(x)) g(u) F(dx) du = 0

by symmetry, so (1.2) holds. Equation (1.3) does not hold in general,
because the solution τ of

    ∫ ψ((u−τ)/w(x)) g(u) du = 0

depends in most cases on x.                                            □

Thus, the Mallows form has the advantage of being a conditionally unbiased
estimating equation even when the {xᵢ} or {uᵢ} are not symmetrically
distributed.
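The x-dependence of τ in the Schweppe case is easy to see numerically. The
sketch below is illustrative only: the exponential error density, the Huber
ψ with b = 1, and the two weight values are our own choices, not from the
text. It solves ∫ ψ((u−τ)/w) g(u) du = 0 by bisection for two values of
w(x).

```python
import math

def huber(t, b=1.0):
    # Huber psi function: linear in the center, clipped at +/- b
    return max(-b, min(b, t))

def mean_score(tau, w, du=1e-3, umax=40.0):
    # Riemann-sum approximation of E[psi((u - tau)/w)] for u ~ Exp(1),
    # an asymmetric error density g(u) = exp(-u), u >= 0
    total, u = 0.0, 0.5 * du
    while u < umax:
        total += huber((u - tau) / w) * math.exp(-u) * du
        u += du
    return total

def solve_tau(w):
    # mean_score is nonincreasing in tau, so bisection applies
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mean_score(mid, w) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau_small, tau_large = solve_tau(0.5), solve_tau(2.0)
```

Both values solve their defining equations, but they differ (roughly 0.75
versus 0.95 here), so the centering constant of the Schweppe form depends
on x through w(x); the Mallows analogue ∫ ψ(u−τ) g(u) du = 0 does not
involve w(x) at all.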
3. GENERALIZED LINEAR MODELS

We consider the canonical form of generalized linear models, where

    P_θ(dy|x) = exp{y xᵀθ − G(xᵀθ) − S(y)} μ(dy).                   (3.1)

If g is the derivative of G, the likelihood score function is

    ℓ(y, x, θ) = {y − g(xᵀθ)} x.                                     (3.2)

Because E_θ[y|x] = g(xᵀθ), the score (3.2) is conditionally unbiased, but
its influence is unbounded, i.e., s²(ℓ) = ∞. Following a general principle
for constructing optimal robust estimators minimizing V(ψ) in some sense
(Hampel et al. (1986), Section 4.3), we consider the score functions

    ψ_cond(y, x, θ, B) = d(y, x, θ, B) w_b(y, x, θ, B) x,            (3.3)

where, with a = b/(xᵀB⁻¹x)^{1/2},

    d(y, x, θ, B) = y − g(xᵀθ) − c(xᵀθ, a),
    w_b(y, x, θ, B) = H_a(|d(y, x, θ, B)|) / |d(y, x, θ, B)|
                    = min(1, a/|d(y, x, θ, B)|),

and H_a(d) = max(−a, min(a, d)) is the Huber function. We work within the
Schweppe form, although related results are obtainable for the Mallows
form as in Stefanski et al. (1986). The weight in (3.3) factors into two
parts: the first depends only on x, through a = b/(xᵀB⁻¹x)^{1/2}; the
other depends only on d(y, x, θ, B). The major change from Stefanski et
al. (1986) is the centering function c. The function c and the matrix B in
(3.3) will be chosen so that the conditions (1.3) and s(ψ_cond) = b are
satisfied.

By the definition of ψ_cond, (1.3) holds if and only if for all β and all
a > 0,

    ∫ (y − g(β) − c(β, a)) w_a(|y − g(β) − c(β, a)|)
          exp(yβ − G(β) − S(y)) μ(dy) = 0,                           (3.4)

where w_a(|d|) = H_a(|d|)/|d|.
First we discuss the existence of a solution to (3.4).
LEMMA 3.1
For any a > 0 and β, there is a solution c = c(β, a) to (3.4).

PROOF: For fixed y, β, a, the function

    c ↦ (y − g(β) − c) w_a(|y − g(β) − c|)

is continuous, bounded and monotone nonincreasing in c with limits ±a.
Hence the existence follows from dominated convergence and the
intermediate value theorem.                                            □
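Lemma 3.1 also licenses the obvious numerical method: the left-hand side
of (3.4) is monotone in c, so bisection applies. Below is a minimal sketch
for canonical Poisson regression, an illustrative choice of model on our
part (G(β) = e^β, μ counting measure on the nonnegative integers, so the
integral in (3.4) is a sum).

```python
import math

def huber(t, a):
    # Huber function H_a: identity in the center, clipped at +/- a
    return max(-a, min(a, t))

def poisson_pmf(lam, ymax):
    # pmf values p(0..ymax), computed recursively: p(y) = p(y-1) * lam / y
    p = [math.exp(-lam)]
    for y in range(1, ymax + 1):
        p.append(p[-1] * lam / y)
    return p

def c_poisson(beta, a, ymax=400):
    # Solve (3.4) for the canonical Poisson model by bisection in c;
    # the left-hand side is nonincreasing in c (Lemma 3.1)
    lam = math.exp(beta)          # g(beta) = G'(beta) = exp(beta)
    p = poisson_pmf(lam, ymax)

    def lhs(c):
        return sum(huber(y - lam - c, a) * p[y] for y in range(ymax + 1))

    lo, hi = -a - lam, float(ymax)     # lhs(lo) = a > 0 and lhs(hi) = -a < 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, a = 1.0, 1.5
c = c_poisson(beta, a)
lam = math.exp(beta)
pmf = poisson_pmf(lam, 400)
residual = sum(huber(y - lam - c, a) * pmf[y] for y in range(401))
```

Because the Poisson distribution is skewed to the right, the Huberization
clips the upper tail more heavily and the solution c comes out slightly
negative here.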
A practical advantage here is that often the function c can be calculated
in closed or almost closed form. This is particularly important compared
to (1.2), where c is a vector (depending only on θ) whose computation is
quite difficult; see Stefanski et al. (1986), Section 2.4.
Here are two examples where c(β, a) can be calculated.

Example 3.1 : Logistic Regression

With p = g(β) = e^β/(1 + e^β) and q = 1 − p, it is easily checked that

    c(β, a) =  ap/q − p    if β < 0 and a < q,
               q − aq/p    if β > 0 and a < p,
               0           otherwise

satisfies (3.4).
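For the Bernoulli model the integral in (3.4) is a two-point sum, so the
conditional unbiasedness of this formula can be checked directly; the
following quick verification sketch is ours, not part of the text.

```python
import math

def huber(t, a):
    return max(-a, min(a, t))

def c_logistic(beta, a):
    # the closed form of Example 3.1
    p = 1.0 / (1.0 + math.exp(-beta))
    q = 1.0 - p
    if beta < 0 and a < q:
        return a * p / q - p
    if beta > 0 and a < p:
        return q - a * q / p
    return 0.0

def lhs_34(beta, a):
    # left-hand side of (3.4) for y in {0, 1}: p*H_a(1-p-c) + q*H_a(-p-c)
    p = 1.0 / (1.0 + math.exp(-beta))
    q = 1.0 - p
    c = c_logistic(beta, a)
    return p * huber(1.0 - p - c, a) + q * huber(-p - c, a)

checks = [abs(lhs_34(beta, a))
          for beta in (-2.0, -0.7, 0.3, 1.5)
          for a in (0.15, 0.4, 0.8, 2.0)]
```

All combinations return zero up to rounding, covering the one-sided
clipping cases (a < q or a < p) as well as the unclipped case c = 0.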
Example 3.2 : Negative Exponential Regression

Here G(β) = −ln(−β), S(y) = 0 and β < 0, and μ is Lebesgue measure on
[0, ∞). Two cases occur. If a is large, the Huberization in ψ_cond is
one-sided (for large y's only); for small a's both large and small y's
will be Huberized. It can be checked by straightforward calculations that
the cutting point between the two cases is given by the equation
e^{2βa} = 1 + βa, so that βa ≈ −0.797. In the former case c(β, a) equals
−β⁻¹ times the smaller solution of exp(x + βa − 1) = x, and in the latter
case c(β, a) = −β⁻¹(1 + log(βa/(exp(βa) − exp(−βa)))).
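The nontrivial root of the cutting-point equation is easy to confirm
numerically (a quick check of ours, not from the text):

```python
import math

def f(t):
    # cutting-point equation of Example 3.2, written as f(t) = 0 with t = beta*a
    return math.exp(2.0 * t) - 1.0 - t

# f(-1) > 0 and f(-0.5) < 0, and f is decreasing on this interval,
# so the nontrivial negative root lies between -1 and -0.5
lo, hi = -1.0, -0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
```

Bisection gives t = βa ≈ −0.797, matching the value quoted above.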
Turn now to the matrix B. It follows from the definition of ψ_cond that
s(ψ_cond) = b provided

    E_θ[ψ_cond(y, x, θ, B) ψ_cond(y, x, θ, B)ᵀ] = B.                 (3.5)

Equation (3.5) is used to define B = B(θ). Note that B(θ) depends also on
the distribution F of the explanatory variables. A necessary condition for
(3.5) is b² ≥ p, but we do not know if it is also sufficient.
We have the following optimality result for ψ_cond.

THEOREM 3.1
Suppose that for a given b, (3.5) has a solution B(θ). Then ψ_cond
minimizes tr(V(ψ) V(ψ_cond)⁻¹) among all ψ satisfying (1.3) and

    sup_{(y,x)} IC_ψᵀ V(ψ_cond)⁻¹ IC_ψ ≤ b².

Theorem 3.1 is a corollary of the following analogue to Theorem 1 of
Stefanski et al. (1986). Note that Theorem 3.2 below applies to any kind
of model with explanatory variables.
THEOREM 3.2
Let ℓ(y, x, θ) be the likelihood score function. Define the score function

    ψ_cond(y, x, θ) = (ℓ − c) min(1, b/{(ℓ − c)ᵀ B⁻¹ (ℓ − c)}^{1/2}),   (3.6)

where c = c(x, θ) and B = B(θ) are assumed to exist and satisfy

    E(ψ_cond(y, x, θ) | x) = 0,
    E{ψ_cond(y, x, θ) ψ_cond(y, x, θ)ᵀ} = B.

Then (3.6) minimizes tr(V(ψ) V(ψ_cond)⁻¹) among all ψ satisfying (1.3) and

    sup_{(y,x)} IC_ψᵀ V(ψ_cond)⁻¹ IC_ψ ≤ b².

With the exception of multiplication by a constant matrix, ψ_cond is
unique almost surely.                                                  □
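The clipping in (3.6) does exactly what the side condition requires: the
self-standardized norm of ψ_cond never exceeds b. A small numerical
illustration follows; the random score vectors and the positive definite
matrix standing in for B(θ) are purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
p, b = 3, 2.5

A = rng.normal(size=(p, p))
B = A @ A.T + p * np.eye(p)        # some positive definite matrix, a stand-in for B(theta)
Binv = np.linalg.inv(B)

def psi_cond(r, b, Binv):
    # (3.6) with r = l - c: shrink r so that r' Binv r <= b^2
    norm = float(np.sqrt(r @ Binv @ r))
    return r * min(1.0, b / norm)

R = rng.normal(scale=3.0, size=(1000, p))    # stand-ins for centered scores l - c
raw = [float(np.sqrt(r @ Binv @ r)) for r in R]
clipped = [float(np.sqrt(psi_cond(r, b, Binv) @ Binv @ psi_cond(r, b, Binv)))
           for r in R]
```

Vectors already inside the bound are left unchanged, while the rest are
shrunk radially onto the ellipsoid {r : rᵀB⁻¹r = b²}.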
PROOF OF THEOREM 3.2
The proof is almost identical to that of Theorem 1 in Stefanski et al.
(1986), once one notes that for any conditionally unbiased score function
ψ,

    E[c(x, θ)ᵀ ψ(y, x, θ)] = E{c(x, θ)ᵀ E(ψ(y, x, θ) | x)} = 0.     □

The computational complexity of the conditionally unbiased estimator is
not particular to the model (3.1). For instance, if we have a generalized
linear model with arbitrary link function h, we have to replace
d(y, x, θ, B) in (3.3) by

    h'(xᵀθ) {y − g(h(xᵀθ)) − c(h(xᵀθ), b/((xᵀB⁻¹x)^{1/2} |h'(xᵀθ)|))},

where c(β, a) is still defined by (3.4).
In applications, the distribution F of the {xᵢ} is unknown. It is common
to replace F by its empirical distribution. For (1.1) and (3.5), this
means that we solve

    Σᵢ₌₁ᴺ ψ_cond(yᵢ, xᵢ, θ̂, B̂) = 0,                                 (3.7)

    N⁻¹ Σᵢ₌₁ᴺ xᵢ xᵢᵀ v(xᵢᵀθ̂, b/(xᵢᵀB̂⁻¹xᵢ)^{1/2}) = B̂,              (3.8)

where

    v(β, a) = ∫ (y − g(β) − c(β, a))² w(y, β, a)²
                  exp(yβ − G(β) − S(y)) μ(dy),                       (3.9)

    w(y, β, a) = H_a(|y − g(β) − c(β, a)|) / |y − g(β) − c(β, a)|.   (3.10)

In many applications, improved protection against outliers through higher
breakdown points could be achieved by the use of redescending ψ functions;
see Rousseeuw (1984). In equations (3.3), (3.4) and (3.10) the Huber
function can be replaced by any of the redescenders, such as Hampel's
three part function or the Tukey biweight. The calculation of c(β, a) is
of the same complexity as with the Huber function. The breakdown
properties of such estimates remain to be studied.
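A minimal sketch of solving (3.7)-(3.8) for logistic regression follows,
using the closed-form c(β, a) of Example 3.1. Everything below is an
illustrative choice of ours, not a prescription from the text: the
simulated data, the scoring step for θ̂, and the simple fixed-point update
for B̂. For the Bernoulli model, v(β, a) of (3.9) reduces to a two-term
sum.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def c_logistic(beta, a):
    # closed-form centering constant of Example 3.1, vectorized
    p = sigmoid(beta); q = 1.0 - p
    c = np.where((beta < 0) & (a < q), a * p / q - p, 0.0)
    c = np.where((beta > 0) & (a < p), q - a * q / p, c)
    return c

def v_bernoulli(beta, a):
    # v(beta, a) of (3.9) for the Bernoulli model: the integral is a two-term sum
    p = sigmoid(beta); q = 1.0 - p
    c = c_logistic(beta, a)
    return p * np.clip(1.0 - p - c, -a, a) ** 2 + q * np.clip(-p - c, -a, a) ** 2

# simulated data (hypothetical): intercept plus one covariate; b^2 >= p = 2
n, b = 500, 4.0
z = rng.normal(size=n)
X = np.column_stack([np.ones(n), z])
y = (rng.uniform(size=n) < sigmoid(0.5 - 1.0 * z)).astype(float)

theta, B = np.zeros(2), np.eye(2)
for _ in range(200):
    a = b / np.sqrt(np.einsum("ij,jk,ik->i", X, np.linalg.inv(B), X))
    eta = X @ theta
    d = y - sigmoid(eta) - c_logistic(eta, a)
    grad = (np.clip(d, -a, a)[:, None] * X).sum(axis=0)   # left side of (3.7)
    # scoring step for theta: curvature taken from the unclipped part only
    w = (np.abs(d) < a) * sigmoid(eta) * (1.0 - sigmoid(eta))
    theta = theta + np.linalg.solve((X * w[:, None]).T @ X, grad)
    B = (X * v_bernoulli(eta, a)[:, None]).T @ X / n      # fixed-point update of (3.8)

# residuals of the two estimating equations at the solution
a = b / np.sqrt(np.einsum("ij,jk,ik->i", X, np.linalg.inv(B), X))
eta = X @ theta
score = (np.clip(y - sigmoid(eta) - c_logistic(eta, a), -a, a)[:, None] * X).sum(axis=0)
Bgap = float(np.abs((X * v_bernoulli(eta, a)[:, None]).T @ X / n - B).max())
```

With the mild bound b = 4 the clipping affects only the most extreme
design points, and the alternating updates settle quickly; for small b, a
damped step or a more careful iteration may be needed.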
4. THE EFFECT OF ESTIMATING THE MATRIX B

In the last section we derived the estimator defined by (3.7)-(3.8) as an
approximation to the optimal estimator which uses ψ_cond(y, x, θ, B(θ)).
We may consider (3.7) and (3.8) as an M-estimator for both θ and a
nuisance parameter B. The ψ-function defining this M-estimator is
(ψ_condᵀ(y, x, θ, B), χᵀ(x, θ, B))ᵀ, where

    χ(x, θ, B) = x xᵀ v(xᵀθ, b/(xᵀB⁻¹x)^{1/2}) − B.

The influence function of this estimator is (compare (1.4) and (1.5))

    IC_{ψ,χ}(y, x, θ, B) = D_{ψ,χ}⁻¹(θ) (ψ_condᵀ(y, x, θ, B), χᵀ(x, θ, B))ᵀ,   (4.1)

where D_{ψ,χ}(θ) is the block matrix of derivatives

    D_{ψ,χ}(θ) = − [ (∂/∂ξ) E_θ[ψ_cond(y, x, ξ, A)]   (∂/∂A) E_θ[ψ_cond(y, x, θ, A)] ]
                   [ (∂/∂ξ) E_θ[χ(x, ξ, A)]           (∂/∂A) E_θ[χ(x, θ, A)]       ]   (4.2)

evaluated at ξ = θ, A = B(θ).
By the definition of ψ_cond and c(β, a) in (3.3) and (3.4),
ψ_cond(y, x, θ, A) satisfies (1.3) for arbitrary A. Hence
E_θ[ψ_cond(y, x, θ, A)] = 0 for all A, and the upper right block of
D_{ψ,χ} is zero. This means that the θ part of the influence function for
(3.7) and (3.8) is equal to

    {− (∂/∂ξ) E_θ[ψ_cond(y, x, ξ, B)] |_{ξ=θ}}⁻¹ ψ_cond(y, x, θ, B).   (4.3)

On the other hand, the influence function for the optimal
ψ_cond(y, x, θ, B(θ)) is also equal to (4.3), because by the same argument
the derivative of E_θ[ψ_cond] with respect to B vanishes, so that

    D_ψ(θ) = − (∂/∂ξ) E_θ[ψ_cond(y, x, ξ, B(ξ))] |_{ξ=θ}
           = − (∂/∂ξ) E_θ[ψ_cond(y, x, ξ, B)] |_{ξ=θ, B=B(θ)}.
We have thus shown:

THEOREM 4.1
The θ part of the influence function in the case that θ and B are
estimated simultaneously by (3.7) and (3.8) is the same as the influence
function in the case that θ alone is estimated using the optimal
ψ_cond(y, x, θ, B(θ)). In particular, the asymptotic covariance matrix of
θ̂_N is the same in both cases.                                        □
REMARKS :

i) θ̂ₙ and B̂ₙ are not asymptotically independent: E_θ[ψ_cond χᵀ] = 0 by
(1.3), but (∂/∂β) E_θ[χ(x, β, B)] |_{β=θ} ≠ 0 in general.

ii) Because in linear regression with symmetric errors χ does not depend
on θ, an analogue to Theorem 4.1 is obvious. In addition, estimation of
the scale of the errors does not change the asymptotic covariance either,
and θ̂ₙ is asymptotically independent of all nuisance parameters.

iii) From the finite sample interpretation of the influence function,
(4.3) means the following: to the first order of approximation, the change
in θ̂_N caused by adding or deleting an observation (x, y) is

    − [ Σᵢ₌₁ᴺ (∂/∂β) ψ_cond(yᵢ, xᵢ, β, B̂_N) |_{β=θ̂_N} ]⁻¹ ψ_cond(y, x, θ̂_N, B̂_N),

i.e. the change in B̂_N has approximately no effect on the change in θ̂_N.
In this sense the estimator (3.7)-(3.8) is reasonably stable.

iv) For the Fisher-consistent estimator (2.12)-(2.13) of Stefanski et al.
(1986), there is no analogue of Theorem 4.1. The θ part of the influence
function is in general a linear combination of ψ_BI, E_θ[ψ_BI | x] and
E_θ[ψ_BI ψ_BIᵀ | x] − B, because all blocks in D are in general different
from zero.
5. EXAMPLES

We illustrate the conditional Fisher-consistent estimators on two examples
of logistic regression, the first considered by Stefanski et al. (1986).

Example 5.1 : The Food Stamp Data

The response is whether or not one participates in the federal food stamp
program. Two dichotomous predictors are available: tenancy and
supplemental income. An additional predictor is log(1 + monthly income).
There were 150 observations, with 24 participating in the program. One
observation, #5, is known to be highly influential on the ordinary
analysis, with another, #66, slightly less influential.

In Table 1, we list the results of this analysis. We used the Huber ψ
function with two values of b; in one case we used the conditionally
unbiased score function and in the other the biased score function with
c(β, a) ≡ 0. The latter choice was used by Stefanski et al. (1986) because
of numerical problems in enforcing unconditional Fisher consistency (1.2).
We also list the analysis one would obtain if one used a Hampel ψ function
with bend points (6, 14, 32), see Hampel et al. (1986, pages 66-67). For
computational convenience, rather than solving (3.4), in defining the
function c(β, a) we have chosen to use the same formula as in Example 3.1,
which should not affect the results too severely.

In this example, the major difference is not among the biased and
conditional Fisher-consistent estimates, but rather among the choices of b
(maximum likelihood corresponds to b = ∞). With decreasing b, the
importance of supplemental income as well as the weights for cases #5 and
#66 decrease, and the importance of log(1 + monthly income) increases.
Example 5.2 : Skin Vaso-constriction Data

These data have appeared elsewhere in the context of robustness, see
Pregibon (1981). The response is the occurrence of vaso-constriction in
the skin of the digits and is regressed on the logarithms of the rate and
volume of air inspired. In our work, we took Rate = 0.30 for observation
#32, see Pregibon (1981).

This data set is inherently unstable. As previous authors have shown, once
one deletes observations #4 and #18, almost perfect discrimination is
possible, in which case even computing the maximum likelihood estimates is
delicate.

Our analyses are listed in Table 2. We used the Huber weight function with
various values of b. When we tried to use the Hampel ψ function,
observations 4 and 18 were immediately given weight 0.

A biased analysis uses a Huber function with b = 6.408, but with
c(β, a) ≡ 0. When we used this value of b in our conditional
Fisher-consistent score, observations 4 and 18 are barely downweighted and
the resulting analysis looks very much like the usual likelihood analysis.
As we move b to 5.5 and 5.0, observations 4 and 18 are given very little
weight, and parameter estimates and standard errors change dramatically.
In this example too, the value of b seems to be the most important choice,
and we recommend to successively decrease b and see how estimates,
standard errors and weights change.
6. CONCLUSIONS

Conditionally unbiased score functions are appealing because their
definition does not depend on the distribution of the predictors {xᵢ}. In
addition, in the context of robustness, conditionally unbiased score
functions are often far easier to construct, and the resulting estimators
have an analogous optimality theory to that already developed for
unconditionally unbiased score functions. Although ignoring the bias and
setting c ≡ 0 turned out not to matter much in the examples considered,
one can construct situations where this bias is large. With our estimator,
we avoid this problem with little additional complexity.
REFERENCES

Carroll, R. J. (1979). Estimating variances of robust estimators when the
errors are asymmetric. Journal of the American Statistical Association 74,
674-679.

Giltinan, D. M., Carroll, R. J. & Ruppert, D. (1986). Some new methods for
weighted regression when there are possible outliers. Technometrics 28,
219-230.

Hampel, F. R., Ronchetti, E., Rousseeuw, P. J. & Stahel, W. (1986). Robust
Statistics: The Approach Based on Influence Functions. John Wiley & Sons,
New York.

McCullagh, P. & Nelder, J. A. (1983). Generalized Linear Models. Chapman &
Hall, New York and London.

Pregibon, D. (1981). Logistic regression diagnostics. Annals of Statistics
9, 705-724.

Rousseeuw, P. J. (1984). Least median of squares regression. Journal of
the American Statistical Association 79, 871-880.

Stefanski, L. A., Carroll, R. J. & Ruppert, D. (1986). Optimally bounded
score functions for generalized linear models with applications to
logistic regression. Biometrika 73, 413-425.
TABLE 1

This is a reanalysis of the food stamp data. For selected observations,
the weights w_b in equation (3.3) are computed. Standard errors are in
parentheses. MI = monthly income.

                              Huber(7)     Huber(7)     Huber(5.5)   Hampel(6,14,32)
                   MLE        c(β,a)≡0     Cond.Unb.    Cond.Unb.    Cond.Unb.
Intercept          0.93        4.26         4.51         5.49         6.00
                  (1.62)      (2.55)       (2.54)       (2.66)       (2.76)
Tenancy           -1.85       -1.85        -1.78        -1.76        -1.80
                  ( .53)      ( .54)       ( .54)       ( .51)       ( .54)
Supplemental       0.90        0.75         0.74         0.62         0.70
Income            ( .50)      ( .52)       ( .51)       ( .52)       ( .52)
Log(1+MI)         -0.33       -0.89        -0.93        -1.10        -1.18
                  ( .27)      ( .43)       ( .43)       ( .45)       ( .47)
Weights:
  #5               1.00        0.16         0.13         0.0          --
  #66              1.00        0.41         0.54         0.21         0.76
TABLE 2

This is a reanalysis of the skin vasoconstriction data. For selected
observations, the weights w_b in equation (3.3) are computed. Standard
errors are in parentheses.

                              Huber(6.41)   Huber(6.41)   Huber(5.5)
                   MLE        c(β,a)≡0      Cond.Unb.     Cond.Unb.
Intercept         -2.92       -5.71         -2.98         -6.41
                  (1.29)      (2.45)        (1.35)        (2.84)
log(volume)        5.22        9.13          5.27          9.98
                  (1.93)      (3.73)        (1.93)        (4.38)
log(rate)          4.63        8.09          4.67          8.85
                  (1.79)      (3.31)        (1.86)        (3.82)
Weights:
  #4               1.00        0.38          >.80          0.25
  #18              1.00        0.44          >.80          0.29