
Testing for structural change in binary choice
models with autocorrelation
Laurent L. Pauwels
The University of Sydney
Felix Chan
Curtin University
Johnathan Wongsosaputro
The University of Sydney
Preliminary draft: February 2012
Abstract
While there is a vast literature concerning structural change tests for linear time series models, the literature on such tests in the context of
binary choice models is somewhat sparse. More importantly, empirical
studies in macroeconomics have applied standard tests for structural
changes in probit models even though these tests were developed for
linear regression models. As such, the validity and performance of
these tests for binary choice models are unknown. This paper carries
out a simulation analysis of the size and power of the Andrews (1993)
‘supremum’ LM, LR and Wald tests in the context of binary choice
models with varying levels of autocorrelation. It is found that the tests
exhibit greater size distortion, have lower power, and are slightly less
precise in identifying the breakpoint than in the linear model. Bootstrapping is also considered as an alternative approach to obtaining
critical values, and though it reduces the size distortion in finite samples, it is unable to accommodate the distortion associated with high
autocorrelation.
Key words: Binary choice models, Probit models, Structural change, Autocorrelation, Simulations
1 Introduction
A structural change refers to a shift in the parameters of a model of interest.
When the conditional relationship between the dependent and explanatory
variables changes, estimates of model coefficients are inaccurate across different regimes. Typically, structural change tests are used to detect whether
such change has occurred in linear models. Important structural change tests
for empirical work are the ‘supremum’ test by Andrews (1993) and the associated ‘average’ and ‘exponential’ tests of Andrews and Ploberger (1994), which generalise the work of Chow (1960) and Quandt (1960).1 Although
Andrews (1993) derives their limiting distribution, the quantiles of the exact
limiting distributions for such tests generally do not have a closed form, and
have to be approximated.
The important question is: what provides the best possible approximation to the finite sample distribution of such tests? Andrews (1993) and
Andrews (2003) approximate the finite-sample distribution through simulations, Hansen (1997) provides parametric approximations, and more recently,
Estrella (2003) uses a method based on the Newton-Raphson algorithm to
calculate exact p-values for tests. These different methods to approximate
finite sample distribution of tests often suffer from random error or simulation bias, which could affect the size and power of the tests. In light of the shortcomings of asymptotic approximations, Diebold and Chen (1996) evaluate
the sizes of the structural change tests in linear models with autocorrelation,
comparing both asymptotic approximation and bootstrapped critical values.
Other work studying the properties of the Andrews (1993) and related tests
in linear models include Yang (2001) and Hansen (2000).
Although the examination of the finite sample properties of structural change tests has mostly been restricted to linear models, several empirical studies have used such tests in binary choice models. These include Estrella et al. (2003) as well as Kauppi (2010), both of which use Andrews (1993) to test the stability of the predictive relationship between U.S. recessions and their determinants, such as the yield curve. The presence of structural changes
can lead to substantial changes in the estimated probabilities of the U.S.
recession occurring. Furthermore, business cycles tend to exhibit high levels of persistence, as discussed by Kauppi and Saikkonen (2008), which led Chauvet and Potter (2005) to question the performance of the Andrews (1993) test in the presence of autocorrelated errors.
This paper studies the properties of structural change tests with an unknown breakpoint in binary choice models relevant to studying business cycles among other issues, comparing the asymptotic approximation by Andrews (1993) and parametric bootstrapping of the finite sample distribution.

1 Extensions of these tests to multiple unknown breakpoints in linear models include Bai and Perron (1998) and Bai (1999).
To the authors’ knowledge, no such study has been carried out in the context of non-linear models, with the notable exception of Hoyo et al. (2005), which examines the performance of the Andrews (1993)
‘supremum’ test in non-linear and dynamic models. The paper considers the
size and power of the three Andrews (1993) ‘supremum’ tests when applied
to probit models with different levels of autocorrelation and varying sample
sizes using a simulation-based approach similar to the work by Diebold and
Chen (1996) in the linear model. In addition to the size of the tests, power
is also considered under several different alternative specifications, as well as
three different levels of data trimming. One method to accommodate autocorrelation is investigated, namely the use of a robust heteroscedasticity-and-autocorrelation-consistent (HAC) variance-covariance estimator.
This paper focuses mainly on the binary probit model as it appears to
be more commonly used in empirical studies that involve testing for structural changes. Estrella et al. (2003) investigate the predictive power of the
yield curve in predicting recessions and inflationary pressure, the latter two
having a binary structure. They test for a structural change in the conditional relationship using the Andrews (1993) ‘supremum’ test. Chauvet and
Potter (2005) consider a similar problem in a binary probit model, but use
Bayesian techniques to test for a structural change instead. The binary logit
model is also explored. However, since the logit’s results and conclusions are
comparable to the probit case, they are not included in this paper and are
available in a “supplement” paper.
This paper’s main findings can be summarised as follows. The simulations show that the three tests are undersized in small samples and converge to the nominal size in large samples (T > 100), but more slowly than in the linear case. The size distortion also worsens as autocorrelation increases for all three tests. Bootstrapping helps the tests to exhibit an empirical size closer to the nominal size. The relationship between the ‘supremum’ statistics, W ≥ LR ≥ LM, observed in linear models by Diebold and Chen (1996) does not seem to hold in the binary probit model. There does not seem to be a clear ordering of magnitude of the three tests in the probit case, but it appears that the ‘supremum’ LR ≥ LM and that the Wald test is mostly smaller than the other two. The tests’ power increases with the sample size but is much lower than in the linear model. The power of the tests is less affected by autocorrelation than the size.
The rest of the paper is organised as follows. Section 2 sets out the
motivations, the models and the tests considered. Section 3 outlines the simulation specifications for the size of the structural change ‘supremum’ test and
presents the results. Section 4 presents and discusses simulation results for
the power of the test. Section 5 concludes and suggests possible avenues of
further study.
2 Structural change in the probit model

2.1 Time series binary choice models
There are many instances in macroeconomics and finance where binary choice models are used to analyse and forecast recurrent events over time. Binary random variables are well suited to describing the recurrent patterns occurring in a dependent variable of interest. Some examples include business cycles, bull and bear stock markets (financial cycles), financial crises, “Hot” and “Cold” IPO markets, and booms and slumps of commodity or real estate markets. See, for example, Pagan and Harding (2011) and Harding and Pagan (2011) for a discussion.
One of the most common applications of such time series binary choice models is forecasting U.S. recessions. Chauvet and Potter (2010) set up a model for U.S. business cycles to forecast the probability of recessions as follows:

$$y_t^* = x_t'\beta + \varepsilon_t, \qquad \varepsilon_t \mid x_t \sim \text{i.i.d. } N(0,1) \tag{1}$$

with

$$y_t = \begin{cases} 1 & \text{if } y_t^* < 0 \\ 0 & \text{if } y_t^* \geq 0 \end{cases} \tag{2}$$
where yt∗ represents the state of the economy, as measured by the NBER
Business Cycle Dating Committee. yt takes the value 0 if the observation is an
expansion and 1 if it is a recession. xt typically contains coincident indicators
including: production, sales, income and employment. Alternatively, the
interest rate spread or the yield-curve can also be employed as a predictor
of U.S. recessions (see Chauvet and Potter, 2005, or Kauppi and Saikkonen,
2008, for example).
It is common practice in empirical work using probit models to assume
parameter stability and independent errors. However, business cycles often
exhibit high levels of persistence as shown among others by Kauppi and
Saikkonen (2008). Chauvet and Potter (2005) state that the effect of not modelling autocorrelated errors on recession probabilities is potentially large. As such, εt in (1) can be further specified as

$$\varepsilon_t = \rho \varepsilon_{t-1} + \nu_t, \qquad \nu_t \sim N(0,1), \tag{3}$$
in the case of first order autocorrelation, where |ρ| < 1. Hence, Chauvet and Potter (2010) deal with the possibility of autocorrelated errors by allowing the latent variable, yt∗, to follow a first order autoregressive process

$$y_t^* = x_t'\beta + \theta y_{t-1}^* + \varepsilon_t \tag{4}$$

where |θ| < 1.
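The data generating process in (1)-(3) can be sketched as follows. This is an illustrative Python sketch, not the authors’ code; the intercept is fixed at zero for simplicity, while the simulation design described later (Section 3.1) uses xt ∼ N(0, 4) and β = 1.

```python
import numpy as np

def simulate_probit_ar1(T, beta=1.0, rho=0.5, seed=0):
    """Simulate the probit DGP of equations (1)-(3) with AR(1) errors.

    Illustrative sketch only: intercept fixed at zero, x_t ~ N(0, 4)
    as in the paper's simulation design.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, size=T)        # std dev 2, so Var(x_t) = 4
    nu = rng.normal(0.0, 1.0, size=T)       # nu_t ~ N(0, 1)
    eps = np.empty(T)
    eps[0] = nu[0]
    for t in range(1, T):                   # eq. (3): eps_t = rho*eps_{t-1} + nu_t
        eps[t] = rho * eps[t - 1] + nu[t]
    y_star = x * beta + eps                 # eq. (1): latent variable
    y = (y_star < 0).astype(int)            # eq. (2): y_t = 1 if y_t* < 0
    return y, x

y, x = simulate_probit_ar1(200, rho=0.9)
```

With high ρ the simulated series exhibits long runs of identical yt values, the persistence pattern discussed above.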
Over the last few decades, forecasts of recessions using these coincident
variables may have been compromised by structural changes in the U.S. economy. A change in the stability of U.S business cycle fluctuations can affect
the frequency, duration and probability of future recessions and expansions.
Koop and Potter (2000) and McConnell and Perez-Quiros (2000) among others find a substantial decline in the amplitude of the US business cycle since
the mid-1980s. Furthermore, Chauvet and Potter (2010) estimate several specifications of the probit model with Bayesian methods and present evidence in favour of recurrent structural changes in U.S. business cycles.
Similar questions have been raised about the stability of the predictive
relationship between U.S business cycles and the yield-curve. For example,
Estrella et al. (2003) and Kauppi (2010) test the stability of the predictive
power of the yield curve in similar probit models, using the seminal work by
Andrews (1993). The Andrews (1993) tests have the advantage of versatility, allowing for an endogenous breakpoint and remaining valid under mild regularity conditions in a wide variety of parametric models.
2.2 Testing for structural change
In model (1), the null hypothesis of no structural change is
H0 : β = β0 for all t ≥ 1 for some β0 ∈ B ⊂ Rp .
whilst the alternative of a single structural change at an unknown date is

$$H_1(\pi): \beta = \begin{cases} \beta_1(\pi) & \text{for } t = 1, \ldots, T\pi \\ \beta_2(\pi) & \text{for } t = T\pi + 1, \ldots, T \end{cases}$$

where π ∈ (0, 1) denotes the proportion of observations before the change point in the unrestricted model. The location of the breakpoint is then Tπ.
Under the alternative hypothesis, the unrestricted model is defined as

$$y_t^* = \begin{cases} x_t'\beta_1 + \varepsilon_t & \text{for } t = 1, \ldots, T\pi \\ x_t'\beta_2 + \varepsilon_t & \text{for } t = T\pi + 1, \ldots, T \end{cases} \tag{5}$$

with

$$y_t = \begin{cases} 1 & \text{if } y_t^* < 0 \\ 0 & \text{if } y_t^* \geq 0 \end{cases}$$
as before. The Andrews (1993) tests rely on a very general set of assumptions that remain valid for a broad class of estimators including maximum
likelihood, generalised method of moments and several robust forms of least
squares. The probit model in (1) is estimated using maximum likelihood, as is common practice in the literature. The log-likelihood function to be maximised in the binary probit model (1) for the full sample can be written as
$$\ell(\beta) = \sum_{t=1}^{T} \ln\left[1 - \Phi(x_t'\beta)\right] 1(y_t = 0) + \ln \Phi(x_t'\beta)\, 1(y_t = 1), \tag{6}$$
where 1(·) is an indicator function and the probability of observing 1 or 0 is defined as

$$\Pr(y_t = 1 \mid x_t) = \Phi(x_t'\beta) \quad \text{and} \quad \Pr(y_t = 0 \mid x_t) = 1 - \Pr(y_t = 1 \mid x_t),$$
where φ and Φ are the standard Gaussian density and distribution functions
respectively. The log-likelihood before and after the structural change is
defined as
$$\ell_1(\cdot) = \sum_{t=1}^{\pi T} \ell_t(\cdot) \quad \text{and} \quad \ell_2(\cdot) = \sum_{t=\pi T + 1}^{T} \ell_t(\cdot).$$
The log-likelihood of the unrestricted model is `(·) = `1 (·) + `2 (·). These
log-likelihoods can be evaluated at the maximum likelihood estimates for the
pre-break and post-break samples (β̂1 and β̂2), as well as for the restricted model or the full sample (β̂).
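A minimal sketch of evaluating (6) and its split-sample pieces, assuming the convention stated above, Pr(yt = 1 | xt) = Φ(x′tβ), and a single regressor without a constant:

```python
import numpy as np
from scipy.stats import norm

def probit_loglik(beta, y, x):
    """Eq. (6): sum of ln[1 - Phi(x'b)]*1(y=0) + ln[Phi(x'b)]*1(y=1)."""
    p = np.clip(norm.cdf(x * beta), 1e-12, 1 - 1e-12)  # guard against log(0)
    return np.sum(np.where(y == 1, np.log(p), np.log(1 - p)))

def split_loglik(beta1, beta2, y, x, pi):
    """l1(b1) + l2(b2): log-likelihood split at break fraction pi."""
    k = int(len(y) * pi)
    return probit_loglik(beta1, y[:k], x[:k]) + probit_loglik(beta2, y[k:], x[k:])
```

Evaluating both pieces at the same β recovers the full-sample value, i.e. ℓ(·) = ℓ1(·) + ℓ2(·) as stated above.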
Andrews (1993) proposes three ‘supremum’ tests for identifying a single
structural change at an unknown location. The test statistics are the supremum of the Wald (WT ), Likelihood Ratio (LRT ) and Lagrange Multiplier
(LMT ) statistics defined as
$$\sup_{\pi \in \Pi} W_T(\pi), \quad \sup_{\pi \in \Pi} LM_T(\pi) \quad \text{and} \quad \sup_{\pi \in \Pi} LR_T(\pi)$$
where Π is the search region defined a priori, with Π bounded away from 0 and 1 to ensure that the test statistics converge in distribution. Although Andrews (1993) arbitrarily recommends
trimming the data by 15% on both ends, such that Π = [0.15, 0.85], it has also
been suggested in Bai and Perron (2004) that using a higher trimming could
lead to power gains. The three statistics are computed at every candidate breakpoint within the pre-defined region, Π, before obtaining their suprema, and are defined as follows:

$$W_T(\pi) = \left[\hat\beta_1(\pi) - \hat\beta_2(\pi)\right]' \left[\hat\Sigma_{\hat\beta_1(\pi),\hat\beta_2(\pi)}\right]^{-1} \left[\hat\beta_1(\pi) - \hat\beta_2(\pi)\right], \tag{7}$$

$$LM_T(\pi) = \frac{1}{\pi(1-\pi)} \left(\frac{\partial \ell_1(\hat\beta(\pi))}{\partial \beta}\right)' \left[\hat\Sigma_{\hat\beta(\pi)}\right]^{-1} \left(\frac{\partial \ell_1(\hat\beta(\pi))}{\partial \beta}\right), \tag{8}$$

$$LR_T(\pi) = 2\left[\ell_1(\hat\beta_1(\pi)) + \ell_2(\hat\beta_2(\pi)) - \ell(\hat\beta(\pi))\right], \tag{9}$$
where $\hat\beta_1(\pi)$ and $\hat\beta_2(\pi)$ are the maximum likelihood estimates before and after the breakpoint, and $\partial \ell_1(\hat\beta(\pi))/\partial \beta$ is the first derivative of the log-likelihood prior to the breakpoint evaluated at the maximum likelihood estimate for
the restricted model. Practically, the choice of which test to use usually depends on ease of computation, since all three test statistics are asymptotically
equivalent. The sup LM test tends to be the most computationally efficient since it only requires the restricted model to be estimated. Both Estrella
et al. (2003) and Kauppi (2010) use the sup LM test for this reason.
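As an illustration of how statistic (9) is searched over Π, the following sketch computes sup LR for a single-regressor probit by brute force. The scalar optimiser and its bounds stand in for a proper multivariate MLE and are purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def loglik(b, y, x):
    # eq. (6) for a single slope coefficient, no constant
    p = np.clip(norm.cdf(x * b), 1e-12, 1 - 1e-12)
    return np.sum(np.where(y == 1, np.log(p), np.log(1 - p)))

def mle(y, x):
    # one-parameter probit MLE; the bounds are an illustrative safeguard
    return minimize_scalar(lambda b: -loglik(b, y, x),
                           bounds=(-10, 10), method="bounded").x

def sup_lr(y, x, trim=0.15):
    """Supremum over Pi = [trim, 1 - trim] of LR_T(pi) in eq. (9)."""
    T = len(y)
    l_r = loglik(mle(y, x), y, x)           # restricted (full-sample) fit
    stats = []
    for k in range(int(trim * T), int((1 - trim) * T) + 1):
        l1 = loglik(mle(y[:k], x[:k]), y[:k], x[:k])
        l2 = loglik(mle(y[k:], x[k:]), y[k:], x[k:])
        stats.append(2.0 * (l1 + l2 - l_r))
    return max(stats)
```

Because the unrestricted model nests the restricted one, each LR value is non-negative up to optimiser tolerance.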
In the simple case when there is no serial correlation, $\hat\Sigma_{\hat\beta_1(\pi),\hat\beta_2(\pi)}$ is defined as the inverse of the sum of the Hessian matrices evaluated at the pre- and post-breakpoint estimates, respectively. As noted in Andrews (1993, p. 835), the asymptotic variance-covariance matrix of $\sqrt{T}\left(\hat\beta_1(\pi) - \hat\beta_2(\pi)\right)$ does not contain a covariance term since the proportion of observations adjacent to the breakpoint approaches zero as $T \to \infty$. On the other hand, $\hat\Sigma_{\hat\beta(\pi)}$ is the inverse of the Hessian matrix evaluated at the maximum likelihood estimate of the restricted model.
When there is serial correlation, due for example to the high level of persistence in business cycles discussed in Section 2.1, the variance-covariance matrices ($\hat\Sigma_{\hat\beta_1(\pi),\hat\beta_2(\pi)}$ or $\hat\Sigma_{\hat\beta(\pi)}$) have to be modified accordingly. This can be achieved by including an appropriate kernel estimator. For example, Estrella and Rodrigues (1998) derive a consistent estimator of the variance-covariance matrix for probit models, while Estrella et al. (2003) use the Newey and West (1987) estimator, and Kauppi (2010) uses a Parzen kernel. A discussion pertaining to the choice of kernel and its bandwidth parameter can also be found in Andrews (1991). Alternatively, it is also possible to model the persistence directly in the model of interest, as shown in equation (4) for the business cycle application.
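A generic Bartlett-kernel (Newey-West) long-run variance sketch for the score contributions, shown only to illustrate how such a kernel estimator enters; it is not the specific probit estimator of Estrella and Rodrigues (1998), and the lag length L is a user choice in the spirit of Andrews (1991):

```python
import numpy as np

def newey_west(scores, L):
    """Newey-West HAC estimate of the long-run variance of score
    contributions (a T x k array), using Bartlett weights up to lag L."""
    T, k = scores.shape
    s = scores - scores.mean(axis=0)
    omega = s.T @ s / T                      # lag-0 term
    for j in range(1, L + 1):
        w = 1.0 - j / (L + 1.0)              # Bartlett kernel weight
        gamma = s[j:].T @ s[:-j] / T         # lag-j autocovariance
        omega += w * (gamma + gamma.T)       # add lag-j and its transpose
    return omega
```

The Bartlett weights guarantee a positive semi-definite estimate, which is why this form is popular inside test-statistic variance matrices.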
2.3 Large and finite sample properties of the tests
The limiting distribution of the Andrews (1993) ‘supremum’ tests requires a
mild set of assumptions, and this result is directly applicable to binary choice
models such as the probit model. Assumption 1 specifies standard conditions
such as smoothness, full rank, weak asymptotic dependence, asymptotic covariance stationarity, and consistency of estimates. Andrews’s assumption 3
requires the variance-covariance matrix to be estimated consistently, which
holds in probit models as shown by Estrella and Rodrigues (1998). Under
these assumptions, theorem 3 of Andrews (1993) states that the three test
statistics converge in distribution to the square of a tied-down Bessel process.
Andrews (1993) publishes the critical values of the tests, which are updated
in a corrigendum in Andrews (2003).
The finite sample properties of the three tests in non-linear models, however, are lacking in the literature. While it is well known that LM ≤ LR ≤ W in linear generalised least squares models (see Engle, 1984 and Breusch, 1979), such a relationship is not always observed in non-linear models. Furthermore,
the rates of convergence of the three tests are also likely to be different for
non-linear models. As a result, the choice of test to be used should not be decided purely based on computational convenience without first ascertaining
their relative performances in finite samples.
The finite sample properties of the Andrews (1993) tests have been studied extensively in the literature for the linear model. Diebold and Chen (1996) provide one of the early contributions comparing the performance of
asymptotic and bootstrap-based approximations to the finite sample distributions of the Andrews (1993) tests for structural change. The simulation
results are presented for linear models with autocorrelated errors. They affirm the conjecture stated in Quandt (1958) that the usual χ2 critical values
ordinarily used for the Chow (1960) test are inappropriate for ‘supremum’
tests. They further observe that the size distortion of the Andrews (1993)
test statistics under small sample sizes and high degrees of autocorrelation
in linear models can be accommodated through bootstrapping, but the case
for probit models has not been investigated thus far, and is one of the aims
of this paper.
3 Simulation analysis of test size

3.1 Design
Equations (1) - (3) form the model used in the simulation analysis.2 There
is one explanatory variable generated as xt ∼ N(0, 4) and the model also
contains a constant. The coefficient on that variable is set to β = 1. The
simulations are conducted with autoregressive parameter ρ ranging from 0 to 0.9 in increments of 0.1, and further include ρ = 0.95 and 0.99 to investigate the performance of the tests in extreme cases.3 The sample sizes range from T = 50 to 500 in increments of 50, as well as T = 10 and 30. The most commonly used
nominal test sizes α = 1%, 5%, 10% are analysed as well as trimming regions
Π = [0.15, 0.85], [0.10, 0.90], [0.05, 0.95].
In an approach that mirrors Diebold and Chen (1996), the model is estimated without accounting for the autocorrelation in the error process in order
to assess its impact on the properties of the tests. Although the Andrews
(1993) tests are able to accommodate heteroscedasticity-and-autocorrelationconsistent (HAC) estimation methods, Diebold and Chen (1996) show that
the sizes of the tests exhibit considerable robustness to distortion even when
autocorrelation is not accounted for. This robustness is a result of the better
adequacy of bootstrap approximation to finite sample distributions compared
to asymptotic approximation. This Monte Carlo study also compares both
types of approximation.
Bootstrapping based on resampling the residuals, as done by Diebold and
Chen (1996) for linear models, is not possible for probit models. However,
since the distribution of the probit errors is assumed to be N(0, 1), it is possible to do parametric bootstrapping, where a new set of errors is redrawn from the same distribution as the data generating process. The new set of errors is then used to regenerate a new set of dependent variables based on the model under the null, using the estimated parameters and the explanatory variables obtained from the data.4 Alternatively, Albanese and Knott (1994) propose an empirical bootstrap procedure for the logit and probit models that involves resampling the dependent and explanatory variables instead of the residuals. Although this approach is interesting as it can be extended to block-bootstrapping, it may not be the most appropriate
2 Simulations with the logit specification were also conducted and delivered comparable results to the probit specification. Hence, only the probit model is discussed. The logit results are available in a “supplement” paper.
3 The standardised case when νt ∼ N(0, 1 − ρ2) instead of νt ∼ N(0, 1) has also been investigated, and the results were comparable in either case. The results are available in a “supplement” paper by the same authors.
4 See Appendix A for details.
avenue when considering small samples and high level of persistence. Hence,
this approach is left as a further avenue for research.
The simulation is implemented with the following steps:
Step 1. Generate xt and εt as described earlier and generate yt as specified
in equations (1) and (2) for t = 1, . . . , T .
Step 2. Estimate the model with maximum likelihood to obtain β̂ and compute supπ∈Π W , supπ∈Π LR, and supπ∈Π LM , as in section 2.2.
Step 3. Repeat steps 1 to 2, N = 1000 times.
Step 4. Compute the percentage of rejections of supπ∈Π W , supπ∈Π LR, and
supπ∈Π LM when compared to both the asymptotic critical values in
Andrews (2003) and the bootstrapped critical values. The resulting
percentage is the empirical test size or α̂.
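Steps 1 to 4 and the parametric bootstrap described above can be sketched as follows. This is a self-contained, single-regressor illustration rather than the authors’ MATLAB implementation; the helper names and the scalar optimiser are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def loglik(b, y, x):
    p = np.clip(norm.cdf(x * b), 1e-12, 1 - 1e-12)
    return np.sum(np.where(y == 1, np.log(p), np.log(1 - p)))

def mle(y, x):
    return minimize_scalar(lambda b: -loglik(b, y, x),
                           bounds=(-10, 10), method="bounded").x

def sup_lr(y, x, trim=0.15):
    T = len(y)
    l_r = loglik(mle(y, x), y, x)
    return max(2.0 * (loglik(mle(y[:k], x[:k]), y[:k], x[:k])
                      + loglik(mle(y[k:], x[k:]), y[k:], x[k:]) - l_r)
               for k in range(int(trim * T), int((1 - trim) * T) + 1))

def bootstrap_critical_value(y, x, B=99, alpha=0.10, seed=0):
    """Parametric bootstrap: fit beta under the null, redraw N(0, 1)
    errors, regenerate y from the null model, recompute the statistic,
    and take the (1 - alpha) quantile of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    b_hat = mle(y, x)                        # estimate under the null
    stats = []
    for _ in range(B):
        eps = rng.normal(0.0, 1.0, size=len(y))
        y_b = (x * b_hat + eps < 0).astype(int)   # eq. (2) coding
        stats.append(sup_lr(y_b, x))
    return float(np.quantile(stats, 1 - alpha))
```

The observed statistic is then compared against this quantile in the same way as against the asymptotic critical values of Andrews (2003).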
The results are summarised using response surfaces, similarly to Diebold and Chen (1996).5 The response surfaces plot the size distortion (α̂ − α) as a function of the sample size (T) and persistence parameter (ρ) when the nominal test size is fixed at 10%. Diebold and Chen (1996) regress the size distortions of the tests on third-order expansions of the simulation parameters, T−1, ρ, and α, including their powers and cross-products, selecting statistically significant variables to draw the response surface. Surface regressions reduce the computational burden by allowing fewer simulations to be carried out, and have also been used in MacKinnon (1996) and Bai and Perron (2003), among others. Initially, the experiments followed the surface regression approach, but it was found that this often yielded large residuals. Hence, unlike Diebold and Chen (1996), the simulations are carried out for every point on the surface plot instead of using surface regressions. In turn, this leads to more accurate surface plots.
The simulations are carried out using MATLAB R2009b, with simulation time approximately equal to 1.2 minutes per observation, such that a
simulation of sample size T = 100 requires around two hours of computing
time.
3.2 Results
This paper follows the notation used in Diebold and Chen (1996) when
referring to each test. AsySupLM , AsySupLR, and AsySupW refer to
the tests that make use of Andrews (2003) asymptotic critical values, while
5 Tables with all of the test size results are also provided in the appendix.
BootSupLM, BootSupLR, and BootSupW refer to the tests that use bootstrap critical values. All the simulations were also conducted for a linear model with an AR(1) error term, mirroring the binary choice approach. The
results are not reported here as they were all consistent with the findings of
Diebold and Chen (1996).
The response surfaces for the size distortion of the three tests are presented for the 15% trimming level and at the 10% level of statistical significance in Figure 1. The corresponding tables for the size of the three tests
at the 10% statistical significance level and for different trimming levels are
available in the Appendix.
The main observations are:
1. The sizes of AsySupW, AsySupLR, and AsySupLM converge towards the nominal size as the sample size increases, but the convergence appears to be slower than observed in the linear model.
2. The relationship SupW ≥ SupLR ≥ SupLM, as observed in linear models (see Diebold and Chen, 1996, p.231), does not hold in the binary probit model. There does not seem to be a clear ordering of magnitude of the three tests. However, it seems that mostly AsySupLR ≥ AsySupLM and that AsySupW is mostly smaller than the other two. In small samples, all three tests are undersized.
3. When ρ < 0.5, the BootSup tests usually exhibit an empirical size closer to the nominal size than their AsySup counterparts for small samples. Hence, the bootstrap distribution is a better approximation to
the finite-sample distribution of the test statistics as compared to the
asymptotic approximation.
4. High autocorrelation where ρ > 0.7 leads to extremely large size distortions that worsen as the sample size increases.
5. Bootstrapping does not appear to reduce the size distortion in larger
samples when ρ > 0.4.
6. Unlike in the linear model, BootSupW , BootSupLR, and BootSupLM
have sizes that are similar but not identical. In particular, BootSupW
has the lowest size distortion in small samples.
7. As seen from Table 6, the BootSupLM critical value is ∞ for 5%
trimming when T = 10 because no observations are trimmed in this
case. As shown in Andrews (1993), this causes the limiting distribution
to diverge.
Figure 1: Size distortion response surfaces for the probit model. Panels: (a) AsySupLM, (b) BootSupLM, (c) AsySupLR, (d) BootSupLR, (e) AsySupW, (f) BootSupW.
The particularly poor finite-sample performance of AsySupW could be
due to the way that the test is defined. As is pointed out in Section 2.2, the
variance of β̂1 − β̂2 is computed additively without regard to the covariance
term. While Andrews (1993) correctly points out that the covariance term
approaches zero asymptotically, this could still have a substantial impact on
the test statistic for smaller sample sizes.
The changing order of magnitude between AsySupLR and AsySupLM most likely occurs because of the non-linear nature of the probit model. In contrast, Diebold and Chen (1996) show that the Andrews (1993) statistics have the following relationships in linear models:

$$LR = T \log\left(1 + \frac{W}{T}\right), \qquad LM = \frac{W}{1 + W/T},$$

which do not hold in non-linear models. One potential explanation for why the ordering between AsySupLR and AsySupLM seems to be preserved is that
they are both constructed using likelihood functions whereas the AsySupW
relies on the ML estimators. This in turn implies that the three BootSup
statistics will also not be identical, as observed in point 6.
The size distortion observed under high levels of autocorrelation occurs
due to a violation of the assumptions of the tests. The Andrews (1993) tests
assume weak asymptotic dependence. When the level of autocorrelation is
fairly low, the assumption holds and the tests exhibit low size distortion when
T is sufficiently large. The assumption is violated when ρ increases further,
and the tests do not converge to their limiting distribution, leading to size
distortion. The autocorrelation builds up in the residuals as T increases,
and this manifests itself in the probit model as persistent strings of zeroes
or ones in the dependent variable, which the test incorrectly attributes to
the presence of a structural change. In contrast, when T is small, there
is less impact on size since the autocorrelation is not able to build up as
much. This explains the increasing size distortion in larger samples under
high autocorrelation. It remains to be seen whether this size distortion can be mitigated by accounting for the autocorrelation using appropriate kernel bandwidths, as demonstrated for probit models by Estrella and Rodrigues (1998). This is tackled in the next section.
The same conclusions described above are observed when the significance
levels are changed to 5% and 1% instead of the 10% level and when the level
of trimming on both ends is changed to 10% and 5% from 15%. Furthermore,
there is little improvement in the size of the tests when the level of trimming
is increased (see Tables 1 to 3) for relatively large T > 100. This contrasts
with the findings of Bai and Perron (2004), who suggest that increasing the
level of trimming can reduce size distortions in linear models when the data
contains high persistence. They further find that a trimming level of 20%
is often sufficient to reduce size distortions to manageable levels. However,
note that it does seem that for sample size T ≤ 100, increased trimming does
improve the size of the test.
Overall, the simulations suggest that the Andrews (1993) ‘supremum’
tests are sensitive to autocorrelation when applied to binary probit models,
more so than their linear counterparts. Furthermore, while bootstrapping can
deal with size distortion brought about by small sample sizes, it is unable to
adequately deal with the distortion associated with high autocorrelation.
4 Power of the test

4.1 Design
As in section 3.1, the simulations rely on (1) - (3), xt ∼ N(0, 4) and values for
ρ, T and Π are as before. In order to assess the power of the tests, a single structural break located at Tπ = T/2 is introduced, as in (5). The pre-break value is β1 = 1 and the post-break value is β2 = 2 (see footnote 6).
The simulations also include the case of two structural breaks at π1 = 0.3
and π2 = 0.7. The unrestricted model in this case is:

$$y_t^* = \begin{cases} x_t'\beta_1 + \varepsilon_t & \text{for } t = 1, \ldots, T\pi_1 \\ x_t'\beta_2 + \varepsilon_t & \text{for } t = T\pi_1 + 1, \ldots, T\pi_2 \\ x_t'\beta_3 + \varepsilon_t & \text{for } t = T\pi_2 + 1, \ldots, T \end{cases}$$
where the coefficients are

    Spec.   β1    β2    β3
    A       1     1.5   2
    B       1     1.5   1
Specifications A and B allow comparing the effects of consecutive structural breaks in the same direction and in opposite directions. In addition, specification B also has a regime-switching interpretation, since the first and third subsamples have the same coefficient. The levels of autocorrelation
and sample sizes simulated are the same as when testing for size.
The effect of different values of β on the probit model can be seen from the CDF of the normal distribution, Φ(x′tβ).
6 The experiments were also run for the case when β1 = 1 and β2 = 1.5 pre- and post-break respectively. The results are available in a “supplement” paper.
As β increases (decreases) in magnitude, the probability of observing yt = 1
increases (decreases) when xt is positive. This is usually observed as longer
strings of zeroes or ones in the binary dependent variable. In contrast, a
change in β is observed in the linear model as a direct change in the magnitude of yt given xt .
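A quick numerical illustration of this effect, using toy x values; the probabilities follow directly from Pr(yt = 1 | xt) = Φ(x′tβ) as defined in Section 2.2:

```python
import numpy as np
from scipy.stats import norm

x = np.array([-1.0, 0.0, 1.0])      # toy regressor values
for beta in (1.0, 2.0):
    # Doubling beta pushes Phi(x*beta) towards 0 or 1 away from x = 0,
    # e.g. Phi(1) ~ 0.841 versus Phi(2) ~ 0.977 at x = 1.
    print(beta, norm.cdf(x * beta).round(3))
```

This is why a break in β shows up in the binary series only as longer runs of zeroes or ones rather than as a level shift.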
4.2 Results
The following observations are with regards to the power of the test statistics
when the DGP contains a single structural break:
1. The power of all three test statistics increases with T, but all three are less powerful than their counterparts in the linear model (see Appendix).
2. The three BootSup statistics show greater power than AsySup for small
sample sizes (T ≤ 100), which is expected since the three AsySup tests
are undersized.
3. A structural break of larger magnitude leads to an increase in the power
of the tests.
4. Higher trimming does not always increase power, unlike in the linear
models, as shown in Tables 7 to 9 in the appendix.
5. The mean position of the identified breakpoint is generally close to the true breakpoint at the centre, but not as close as in the linear models. Also, the breakpoints identified by the three tests are often far apart. AsySupLR seems to be the closest to identifying the actual breakpoint most of the time. Furthermore, the tests lose precision when the autocorrelation increases.7

Figure 2: Power under a single break in the probit model. Panels: (a) AsySupLM, (b) BootSupLM, (c) AsySupLR, (d) BootSupLR, (e) AsySupW, (f) BootSupW.
6. Table 7 shows that AsySupLM has 100% rejections when the sample
size is 10 with 5% trimming. This occurs because no observations are
trimmed in this case, which means that the limiting distribution does
not converge, thereby invalidating the asymptotic critical values. This
is affirmed in Table 6, where AsySupLM has ∞ as its critical value
when T = 10.
Caution must be applied when interpreting power in cases where the sizes
of the test statistics are known to be greatly distorted. In particular, the
diminishing power gains in probit models arising from increased trimming
when the sample size is large and the autocorrelation is high are most likely
due to the size distortion dominating any genuine power gains.
The following observations concern the power of the test statistics when the
DGP contains two structural breaks:
1. As in the linear model, power increases with T in the probit models;
however, whereas power falls with ρ in the linear model, the effect
of ρ in the probit models is difficult to interpret due to the substantial
size distortions.
2. As expected, when the structural breaks are both in the same direction,
the test statistics exhibit greater power than when the second break
goes in the opposite direction and reverts to the initial parameter value,
β1 = β3 = 1. In fact, the latter case yields the lowest power of all the
simulations. This holds for both linear and probit models, and
matches the conclusions of Bai and Perron (2004), as well as Prodan
(2008).
3. The average estimated breakpoint is located between the two true breakpoints. The simulation output suggests that the estimated breakpoints
are almost equally concentrated around each of the two actual breakpoints.
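The location result above can be checked with a toy experiment. The sketch below is a linear mean-shift stand-in for the paper's probit design, with illustrative break dates: it generates two same-direction breaks and reports where a single-break Wald scan places the estimated breakpoint.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
tau1, tau2 = 100, 200               # two illustrative break dates
mu = np.where(np.arange(T) < tau1, 0.0,
              np.where(np.arange(T) < tau2, 1.5, 3.0))
y = mu + rng.normal(size=T)         # two same-direction breaks in the mean

# Single-break Wald scan over the trimmed range; the argmax is the
# estimated (single) breakpoint
trim = int(0.15 * T)
best_w, best_k = -np.inf, None
for k in range(trim, T - trim):
    m1, m2 = y[:k].mean(), y[k:].mean()
    s2 = np.concatenate([y[:k] - m1, y[k:] - m2]).var()
    w = (m1 - m2) ** 2 / (s2 * (1.0 / k + 1.0 / (T - k)))
    if w > best_w:
        best_w, best_k = w, k

# With two breaks in the same direction, a single estimated breakpoint
# tends to fall between the two true break dates
print(best_k, best_w)
```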
5 Concluding comments
The Andrews (1993) ‘supremum’ test statistics are commonly used in empirical studies to detect the presence of a structural change in both linear and
non-linear models. While the properties of the test statistics in linear models
have been well researched, the literature available for non-linear models
is somewhat sparse. Previous studies suggest that the tests work well under
standard i.i.d. assumptions in large samples when applied to linear models,
and that bootstrapping can improve the performance of the tests when
conditions are more adverse.

7 Results are available in a supplementary paper.

Figure 3: Power under 2 breaks: β1 = 1, β2 = 1.5, β3 = 2. Panels: (a) AsySupLM, (b) BootSupLM, (c) AsySupLR, (d) BootSupLR, (e) AsySupW, (f) BootSupW.

Figure 4: Power under 2 breaks: β1 = 1, β2 = 1.5, β3 = 1. Panels: (a) AsySupLM, (b) BootSupLM, (c) AsySupLR, (d) BootSupLR, (e) AsySupW, (f) BootSupW.
The results presented in this paper concur in part with those studies, but
further suggest that the performance of both the asymptotic and bootstrap
tests in probit models deteriorates in the presence of autocorrelation in the
residuals. In particular, the tests dramatically over-reject when the degree
of autocorrelation is high, are substantially less powerful, and become less
precise in identifying the location of the breakpoint.
This research lays the groundwork for several possible avenues for further
study. An important extension of this paper would be to determine the
effectiveness of various kernel bandwidths in accounting for autocorrelation
in the residuals of the probit model, in order to establish whether the size
distortions can indeed be reduced to a manageable level while maintaining
decent power. An equally important study would be to further investigate
the properties of the Bai and Perron (1998) multiple-break tests in order to
establish the validity of the limiting distributions and asymptotic critical
values for non-linear models. Further work could also investigate the effect
of negative autocorrelation in the residuals, as well as the performance of
the tests in detecting partial breaks.
Finally, the ability of the tests to locate the position and not just the
presence of the breakpoint should also be assessed. Bai and Perron (1998)
do propose a method to calculate confidence intervals in a linear application
of their tests, which would naturally also apply to the Andrews (1993) tests
in the linear model but not necessarily in non-linear models.
Acknowledgements
The authors would like to thank Andrey Vasnev, Michael McAleer, Richard
Gerlach and Tommaso Proietti for their insightful contributions, as well as
the participants of the MODSIM 2011 conference.
A Tables
Table 1: Size of the Sup-LM test under different trimming levels at 10% significance (α = 0.10)

              Linear AsySupLM       Linear BootSup        Probit AsySupLM       Probit BootSupLM
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.020 0.008 0.003     0.102 0.109 0.109     0.013 0.010 0.999     0.134 0.139 0.000
       30     0.055 0.052 0.038     0.103 0.110 0.105     0.028 0.047 0.083     0.110 0.100 0.117
       50     0.062 0.057 0.045     0.107 0.115 0.115     0.046 0.063 0.083     0.130 0.118 0.107
       100    0.069 0.068 0.061     0.091 0.096 0.089     0.061 0.069 0.083     0.084 0.074 0.064
       200    0.073 0.075 0.072     0.108 0.105 0.112     0.077 0.091 0.110     0.100 0.111 0.111
       350    0.084 0.081 0.076     0.094 0.090 0.093     0.066 0.066 0.088     0.088 0.090 0.080
       500    0.078 0.081 0.072     0.088 0.091 0.089     0.081 0.086 0.098     0.096 0.105 0.102
0.4    10     0.010 0.007 0.002     0.101 0.100 0.100     0.015 0.010 1.000     0.133 0.137 0.000
       30     0.044 0.047 0.039     0.123 0.117 0.125     0.041 0.052 0.073     0.119 0.113 0.091
       50     0.064 0.063 0.067     0.094 0.100 0.109     0.054 0.066 0.090     0.090 0.082 0.097
       100    0.075 0.067 0.069     0.095 0.098 0.101     0.077 0.088 0.110     0.106 0.119 0.110
       200    0.086 0.088 0.084     0.110 0.123 0.130     0.086 0.091 0.112     0.128 0.119 0.124
       350    0.094 0.086 0.087     0.101 0.099 0.100     0.085 0.091 0.101     0.117 0.115 0.104
       500    0.091 0.090 0.086     0.102 0.098 0.107     0.095 0.090 0.110     0.127 0.113 0.110
0.7    10     0.014 0.008 0.003     0.111 0.103 0.103     0.022 0.018 1.000     0.131 0.125 0.000
       30     0.060 0.056 0.053     0.117 0.121 0.126     0.055 0.062 0.072     0.158 0.140 0.140
       50     0.066 0.061 0.065     0.111 0.121 0.124     0.068 0.079 0.100     0.139 0.125 0.113
       100    0.074 0.075 0.074     0.131 0.124 0.127     0.100 0.103 0.109     0.133 0.128 0.108
       200    0.079 0.073 0.083     0.095 0.102 0.106     0.137 0.152 0.178     0.193 0.204 0.205
       350    0.065 0.061 0.057     0.070 0.074 0.082     0.142 0.159 0.173     0.165 0.172 0.174
       500    0.096 0.093 0.101     0.091 0.091 0.093     0.155 0.157 0.159     0.195 0.202 0.172
0.9    10     0.024 0.014 0.005     0.163 0.148 0.148     0.032 0.029 1.000     0.147 0.139 0.000
       30     0.074 0.075 0.069     0.130 0.138 0.137     0.091 0.094 0.103     0.171 0.169 0.146
       50     0.081 0.078 0.088     0.121 0.129 0.142     0.177 0.171 0.181     0.280 0.262 0.222
       100    0.092 0.095 0.103     0.132 0.136 0.156     0.223 0.225 0.219     0.301 0.289 0.230
       200    0.107 0.121 0.130     0.127 0.145 0.155     0.288 0.293 0.288     0.322 0.332 0.298
       350    0.088 0.089 0.108     0.118 0.119 0.137     0.360 0.349 0.343     0.395 0.379 0.351
       500    0.098 0.106 0.110     0.091 0.101 0.107     0.382 0.395 0.380     0.390 0.415 0.386
0.99   10     0.018 0.013 0.004     0.172 0.170 0.170     0.037 0.032 0.999     0.143 0.134 0.000
       30     0.090 0.086 0.083     0.156 0.166 0.178     0.147 0.146 0.149     0.247 0.251 0.211
       50     0.113 0.120 0.134     0.169 0.188 0.219     0.242 0.221 0.213     0.324 0.310 0.256
       100    0.133 0.141 0.149     0.162 0.171 0.202     0.404 0.392 0.352     0.429 0.415 0.373
       200    0.139 0.130 0.134     0.158 0.163 0.179     0.527 0.533 0.527     0.554 0.563 0.558
       350    0.127 0.128 0.139     0.159 0.166 0.171     0.652 0.657 0.655     0.670 0.682 0.678
       500    0.113 0.127 0.146     0.120 0.129 0.154     0.710 0.731 0.747     0.724 0.753 0.772
Table 2: Size of the Sup-LR test under different trimming levels at 10% significance (α = 0.10)

              Linear AsySupLR       Linear BootSup        Probit AsySupLR       Probit BootSupLR
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.093 0.094 0.081     0.102 0.109 0.109     0.013 0.010 0.008     0.127 0.123 0.123
       30     0.085 0.079 0.071     0.103 0.110 0.105     0.047 0.039 0.039     0.091 0.098 0.103
       50     0.081 0.073 0.068     0.107 0.115 0.115     0.058 0.049 0.045     0.130 0.130 0.134
       100    0.077 0.075 0.075     0.091 0.096 0.089     0.095 0.076 0.059     0.100 0.098 0.092
       200    0.080 0.080 0.078     0.108 0.105 0.112     0.104 0.114 0.098     0.104 0.108 0.109
       350    0.088 0.088 0.080     0.094 0.090 0.093     0.086 0.091 0.101     0.090 0.091 0.099
       500    0.078 0.081 0.076     0.088 0.091 0.089     0.091 0.100 0.115     0.089 0.106 0.104
0.4    10     0.096 0.090 0.068     0.101 0.100 0.100     0.021 0.020 0.019     0.121 0.129 0.129
       30     0.082 0.080 0.072     0.123 0.117 0.125     0.058 0.059 0.056     0.122 0.120 0.127
       50     0.079 0.078 0.083     0.094 0.100 0.109     0.093 0.076 0.073     0.099 0.103 0.107
       100    0.085 0.076 0.076     0.095 0.098 0.101     0.101 0.091 0.083     0.132 0.128 0.128
       200    0.090 0.098 0.092     0.110 0.123 0.130     0.117 0.120 0.107     0.117 0.122 0.115
       350    0.101 0.090 0.089     0.101 0.099 0.100     0.107 0.106 0.107     0.120 0.125 0.121
       500    0.094 0.091 0.092     0.102 0.098 0.107     0.111 0.109 0.110     0.121 0.121 0.129
0.7    10     0.118 0.101 0.085     0.111 0.103 0.103     0.031 0.026 0.023     0.124 0.128 0.128
       30     0.089 0.090 0.094     0.117 0.121 0.126     0.078 0.073 0.068     0.136 0.136 0.141
       50     0.085 0.090 0.083     0.111 0.121 0.124     0.130 0.108 0.093     0.153 0.153 0.152
       100    0.080 0.082 0.089     0.131 0.124 0.127     0.155 0.141 0.124     0.144 0.143 0.147
       200    0.081 0.077 0.089     0.095 0.102 0.106     0.166 0.173 0.159     0.202 0.200 0.205
       350    0.068 0.065 0.063     0.070 0.074 0.082     0.161 0.161 0.188     0.160 0.162 0.186
       500    0.097 0.095 0.102     0.091 0.091 0.093     0.161 0.176 0.180     0.196 0.205 0.196
0.9    10     0.165 0.143 0.114     0.163 0.148 0.148     0.062 0.055 0.045     0.147 0.143 0.143
       30     0.103 0.099 0.102     0.130 0.138 0.137     0.165 0.147 0.129     0.185 0.187 0.176
       50     0.101 0.099 0.107     0.121 0.129 0.142     0.260 0.237 0.210     0.271 0.262 0.248
       100    0.097 0.106 0.116     0.132 0.136 0.156     0.282 0.293 0.264     0.295 0.298 0.296
       200    0.116 0.127 0.142     0.127 0.145 0.155     0.336 0.353 0.365     0.349 0.368 0.366
       350    0.092 0.094 0.109     0.118 0.119 0.137     0.403 0.403 0.416     0.415 0.434 0.436
       500    0.100 0.108 0.113     0.091 0.101 0.107     0.411 0.427 0.430     0.428 0.459 0.443
0.99   10     0.161 0.137 0.110     0.172 0.170 0.170     0.070 0.066 0.048     0.142 0.126 0.126
       30     0.131 0.127 0.125     0.156 0.166 0.178     0.270 0.249 0.214     0.270 0.265 0.259
       50     0.133 0.141 0.159     0.169 0.188 0.219     0.355 0.350 0.316     0.321 0.315 0.317
       100    0.137 0.151 0.166     0.162 0.171 0.202     0.490 0.510 0.513     0.469 0.471 0.451
       200    0.142 0.136 0.145     0.158 0.163 0.179     0.574 0.588 0.613     0.571 0.587 0.573
       350    0.129 0.133 0.144     0.159 0.166 0.171     0.677 0.695 0.733     0.671 0.687 0.717
       500    0.116 0.128 0.150     0.120 0.129 0.154     0.731 0.767 0.791     0.733 0.771 0.806
Table 3: Size of the Sup-Wald test under different trimming levels at 10% significance (α = 0.10)

              Linear AsySupWald     Linear BootSup        Probit AsySupWald     Probit BootSupWald
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.188 0.191 0.170     0.102 0.109 0.109     0.000 0.000 0.000     0.109 0.109 0.109
       30     0.104 0.109 0.102     0.103 0.110 0.105     0.006 0.007 0.007     0.105 0.114 0.110
       50     0.101 0.101 0.089     0.107 0.115 0.115     0.016 0.016 0.016     0.123 0.116 0.112
       100    0.083 0.089 0.081     0.091 0.096 0.089     0.019 0.019 0.021     0.088 0.078 0.071
       200    0.085 0.087 0.087     0.108 0.105 0.112     0.049 0.052 0.054     0.099 0.105 0.107
       350    0.091 0.090 0.086     0.094 0.090 0.093     0.042 0.053 0.051     0.074 0.086 0.084
       500    0.080 0.081 0.076     0.088 0.091 0.089     0.055 0.059 0.060     0.091 0.096 0.092
0.4    10     0.195 0.196 0.172     0.101 0.100 0.100     0.000 0.000 0.000     0.113 0.109 0.109
       30     0.120 0.117 0.112     0.123 0.117 0.125     0.001 0.001 0.001     0.093 0.105 0.101
       50     0.089 0.098 0.099     0.094 0.100 0.109     0.007 0.007 0.006     0.079 0.088 0.091
       100    0.091 0.086 0.091     0.095 0.098 0.101     0.026 0.025 0.024     0.111 0.111 0.116
       200    0.098 0.105 0.102     0.110 0.123 0.130     0.050 0.042 0.042     0.135 0.125 0.114
       350    0.102 0.094 0.093     0.101 0.099 0.100     0.055 0.067 0.062     0.118 0.130 0.126
       500    0.095 0.096 0.095     0.102 0.098 0.107     0.070 0.060 0.065     0.123 0.110 0.123
0.7    10     0.228 0.228 0.201     0.111 0.103 0.103     0.000 0.000 0.000     0.099 0.105 0.105
       30     0.117 0.121 0.128     0.117 0.121 0.126     0.001 0.001 0.001     0.121 0.122 0.123
       50     0.105 0.113 0.113     0.111 0.121 0.124     0.016 0.015 0.014     0.142 0.138 0.124
       100    0.092 0.095 0.099     0.131 0.124 0.127     0.046 0.046 0.047     0.146 0.127 0.136
       200    0.083 0.088 0.093     0.095 0.102 0.106     0.090 0.090 0.088     0.194 0.205 0.201
       350    0.069 0.066 0.067     0.070 0.074 0.082     0.117 0.107 0.107     0.166 0.173 0.180
       500    0.099 0.097 0.104     0.091 0.091 0.093     0.123 0.128 0.120     0.196 0.204 0.195
0.9    10     0.259 0.257 0.234     0.163 0.148 0.148     0.000 0.000 0.000     0.104 0.097 0.097
       30     0.129 0.139 0.139     0.130 0.138 0.137     0.012 0.008 0.005     0.167 0.164 0.166
       50     0.115 0.121 0.129     0.121 0.129 0.142     0.058 0.052 0.041     0.263 0.254 0.249
       100    0.105 0.114 0.126     0.132 0.136 0.156     0.151 0.140 0.129     0.285 0.293 0.292
       200    0.122 0.131 0.144     0.127 0.145 0.155     0.231 0.228 0.209     0.337 0.347 0.339
       350    0.093 0.098 0.112     0.118 0.119 0.137     0.328 0.312 0.285     0.410 0.417 0.394
       500    0.107 0.112 0.114     0.091 0.101 0.107     0.363 0.359 0.340     0.411 0.435 0.433
0.99   10     0.278 0.266 0.241     0.172 0.170 0.170     0.000 0.000 0.000     0.100 0.099 0.099
       30     0.160 0.166 0.178     0.156 0.166 0.178     0.027 0.019 0.010     0.227 0.233 0.232
       50     0.155 0.164 0.181     0.169 0.188 0.219     0.123 0.107 0.092     0.311 0.304 0.296
       100    0.160 0.160 0.181     0.162 0.171 0.202     0.307 0.280 0.237     0.416 0.422 0.423
       200    0.148 0.150 0.151     0.158 0.163 0.179     0.506 0.483 0.443     0.573 0.576 0.583
       350    0.133 0.134 0.149     0.159 0.166 0.171     0.649 0.647 0.617     0.674 0.677 0.667
       500    0.118 0.129 0.153     0.120 0.129 0.154     0.713 0.727 0.716     0.742 0.759 0.771
Table 4: Bootstrap critical values for 15% trimming and 10% significance (α = 0.10)

              Linear Model           Probit Model
ρ      T      LM     LR     Wald     LM     LR     Wald
0      10     5.01   6.95   10.03    2.31   3.66   0.78
       30     5.77   6.40   7.14     4.54   5.82   2.36
       50     6.05   6.45   6.88     4.84   5.84   3.02
       100    6.45   6.67   6.90     6.13   7.02   4.50
       200    6.40   6.50   6.61     6.18   7.11   5.39
       350    6.92   6.99   7.06     6.42   7.03   5.97
       500    6.90   6.95   7.00     6.67   7.21   6.12
0.4    10     5.06   7.05   10.24    2.62   4.00   0.97
       30     5.68   6.30   7.01     4.63   5.72   2.46
       50     6.17   6.59   7.04     5.76   6.74   3.65
       100    6.44   6.65   6.88     5.95   6.63   4.52
       200    6.50   6.61   6.72     5.89   7.11   5.10
       350    7.02   7.09   7.16     6.40   6.82   5.79
       500    6.89   6.94   6.99     6.44   6.85   6.04
0.7    10     5.12   7.18   10.50    3.04   4.51   1.25
       30     5.75   6.38   7.11     4.37   5.90   2.86
       50     6.12   6.53   6.97     5.53   6.73   3.73
       100    6.04   6.23   6.43     6.18   7.33   4.86
       200    6.50   6.61   6.72     5.96   6.50   5.28
       350    6.94   7.01   7.08     6.73   7.14   6.16
       500    7.23   7.29   7.34     6.35   6.51   5.96
0.9    10     5.12   7.18   10.50    3.57   5.12   1.53
       30     5.75   6.38   7.11     5.31   6.78   3.53
       50     6.12   6.53   6.97     5.48   6.99   4.31
       100    6.04   6.23   6.43     5.98   7.04   5.28
       200    6.79   6.91   7.03     6.61   6.92   5.96
       350    6.48   6.54   6.60     6.72   6.98   6.36
       500    7.23   7.29   7.34     6.97   6.92   6.59
0.99   10     4.99   6.91   9.96     3.85   5.39   1.70
       30     5.81   6.46   7.21     5.67   7.11   3.98
       50     5.89   6.27   6.68     5.88   7.74   4.86
       100    6.55   6.78   7.01     6.84   7.36   6.01
       200    6.60   6.71   6.83     6.60   7.14   6.36
       350    6.48   6.54   6.60     6.82   7.23   6.82
       500    6.96   7.01   7.05     6.85   6.91   6.66
Table 5: Bootstrap critical values for 10% trimming and 10% significance (α = 0.10)

              Linear Model           Probit Model
ρ      T      LM     LR     Wald     LM     LR     Wald
0      10     5.12   7.17   10.49    2.69   4.15   0.85
       30     6.02   6.72   7.53     5.68   5.94   2.57
       50     6.31   6.74   7.22     5.66   5.91   3.32
       100    6.95   7.20   7.46     7.33   7.11   4.95
       200    6.81   6.93   7.05     6.99   7.74   5.71
       350    7.36   7.44   7.52     7.14   7.59   6.42
       500    7.16   7.21   7.26     7.11   7.51   6.48
0.4    10     5.22   7.38   10.91    2.89   4.28   1.07
       30     6.05   6.75   7.57     5.46   5.77   2.65
       50     6.49   6.96   7.46     6.79   6.90   3.94
       100    6.79   7.03   7.29     6.48   6.79   4.88
       200    6.83   6.94   7.07     6.70   7.56   5.39
       350    7.30   7.37   7.45     6.83   7.22   5.99
       500    7.29   7.35   7.40     7.11   7.31   6.51
0.7    10     5.28   7.50   11.17    3.45   4.83   1.28
       30     6.05   6.76   7.58     5.31   6.07   3.04
       50     6.39   6.84   7.33     6.28   6.89   3.97
       100    6.51   6.73   6.96     6.81   7.55   5.27
       200    6.94   7.06   7.19     6.44   7.06   5.48
       350    7.22   7.29   7.37     7.24   7.58   6.51
       500    7.66   7.72   7.78     6.87   7.10   6.28
0.9    10     5.28   7.50   11.17    3.88   5.37   1.61
       30     6.05   6.76   7.58     5.84   7.00   3.66
       50     6.39   6.84   7.33     5.99   7.25   4.50
       100    6.51   6.73   6.96     6.54   7.46   5.43
       200    7.02   7.14   7.27     7.01   7.42   6.16
       350    6.89   6.95   7.02     7.28   7.31   6.54
       500    7.66   7.72   7.78     7.29   7.18   6.79
0.99   10     5.08   7.10   10.33    4.16   5.67   1.72
       30     6.04   6.74   7.56     6.03   7.29   4.00
       50     6.17   6.58   7.03     6.28   8.00   4.98
       100    6.78   7.02   7.27     7.32   8.13   6.08
       200    7.01   7.14   7.27     7.02   7.61   6.57
       350    6.89   6.95   7.02     7.17   7.74   7.16
       500    7.42   7.48   7.53     7.12   7.48   7.05
Table 6: Bootstrap critical values for 5% trimming and 10% significance (α = 0.10)

              Linear Model           Probit Model
ρ      T      LM     LR     Wald     LM     LR     Wald
0      10     5.12   7.17   10.49    ∞      4.15   0.85
       30     6.36   7.15   8.07     6.80   6.02   2.70
       50     6.63   7.12   7.65     7.21   6.04   3.65
       100    7.38   7.67   7.97     8.81   7.21   5.50
       200    7.17   7.30   7.44     8.05   7.91   6.17
       350    7.72   7.81   7.90     8.56   8.21   6.90
       500    7.77   7.83   7.89     7.93   8.48   6.97
0.4    10     5.22   7.38   10.91    ∞      4.28   1.07
       30     6.24   7.00   7.88     7.02   6.09   2.86
       50     6.89   7.42   8.00     7.80   7.10   4.17
       100    7.29   7.56   7.86     8.17   6.90   5.14
       200    7.28   7.41   7.55     7.75   7.91   6.12
       350    7.80   7.89   7.98     7.93   7.83   6.46
       500    7.72   7.78   7.84     8.10   7.77   6.86
0.7    10     5.28   7.50   11.17    ∞      4.83   1.28
       30     6.43   7.24   8.19     6.27   6.30   3.12
       50     6.76   7.26   7.81     7.45   7.07   4.22
       100    6.86   7.10   7.36     8.22   7.64   5.51
       200    7.44   7.58   7.72     7.48   7.54   5.86
       350    7.63   7.71   7.80     8.11   8.14   6.74
       500    8.24   8.31   8.38     7.89   7.98   6.82
0.9    10     5.28   7.50   11.17    ∞      5.37   1.61
       30     6.43   7.24   8.19     7.00   7.20   3.67
       50     6.76   7.26   7.81     7.15   7.56   4.56
       100    6.86   7.10   7.36     7.89   7.75   5.58
       200    7.55   7.70   7.85     7.92   8.11   6.55
       350    7.35   7.42   7.50     8.03   7.83   6.99
       500    8.24   8.31   8.38     8.04   7.96   7.09
0.99   10     5.08   7.10   10.33    ∞      5.67   1.72
       30     6.42   7.23   8.17     6.96   7.38   4.02
       50     6.47   6.93   7.44     7.45   8.11   5.07
       100    7.15   7.42   7.70     7.90   8.81   6.10
       200    7.38   7.52   7.66     7.68   8.67   6.59
       350    7.35   7.42   7.50     7.81   8.27   7.40
       500    7.92   7.99   8.05     7.62   7.94   7.35
B Power of Andrews (1993) tests

B.1 β1 = 1, β2 = 2
Table 7: Power of the Sup-LM test under different trimming levels at 10% significance when β1 = 1, β2 = 2 (α = 0.10)

              Linear AsySupLM       Linear BootSup        Probit AsySupLM       Probit BootSupLM
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.184 0.125 0.063     0.552 0.540 0.540     0.000 0.000 1.000     0.141 0.169 0.000
       30     0.970 0.967 0.956     0.991 0.991 0.988     0.022 0.046 0.063     0.152 0.154 0.117
       50     1.000 1.000 1.000     1.000 1.000 1.000     0.083 0.110 0.129     0.185 0.162 0.144
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.211 0.215 0.228     0.295 0.232 0.184
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.501 0.477 0.445     0.599 0.543 0.387
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.810 0.792 0.765     0.857 0.837 0.756
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.916 0.906 0.891     0.939 0.915 0.891
0.4    10     0.179 0.117 0.056     0.561 0.549 0.549     0.001 0.001 1.000     0.073 0.095 0.000
       30     0.953 0.939 0.923     0.984 0.971 0.963     0.041 0.062 0.076     0.170 0.156 0.126
       50     0.997 0.996 0.996     0.998 0.997 0.997     0.092 0.116 0.137     0.212 0.199 0.147
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.246 0.241 0.252     0.322 0.296 0.230
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.537 0.507 0.478     0.617 0.585 0.468
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.806 0.785 0.759     0.844 0.824 0.754
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.939 0.932 0.913     0.946 0.943 0.919
0.7    10     0.154 0.108 0.049     0.446 0.418 0.418     0.003 0.004 0.999     0.106 0.114 0.000
       30     0.829 0.803 0.776     0.877 0.867 0.856     0.052 0.073 0.083     0.207 0.182 0.158
       50     0.981 0.971 0.965     0.985 0.984 0.981     0.119 0.137 0.150     0.238 0.191 0.178
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.312 0.304 0.298     0.430 0.406 0.317
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.606 0.580 0.557     0.708 0.659 0.566
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.855 0.830 0.812     0.883 0.870 0.833
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.946 0.943 0.938     0.953 0.950 0.941
0.9    10     0.115 0.076 0.035     0.343 0.337 0.337     0.012 0.009 1.000     0.121 0.135 0.000
       30     0.579 0.557 0.513     0.705 0.693 0.675     0.064 0.067 0.066     0.190 0.146 0.094
       50     0.776 0.760 0.740     0.847 0.831 0.826     0.153 0.149 0.146     0.229 0.229 0.161
       100    0.964 0.960 0.951     0.980 0.978 0.970     0.403 0.387 0.366     0.470 0.422 0.360
       200    0.999 0.999 1.000     0.999 0.999 1.000     0.664 0.634 0.616     0.715 0.693 0.643
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.872 0.867 0.851     0.897 0.888 0.877
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.937 0.928 0.921     0.943 0.943 0.933
0.99   10     0.085 0.056 0.034     0.339 0.328 0.328     0.015 0.012 1.000     0.101 0.093 0.000
       30     0.412 0.383 0.357     0.511 0.515 0.497     0.087 0.078 0.090     0.182 0.162 0.132
       50     0.449 0.438 0.402     0.539 0.547 0.546     0.184 0.176 0.163     0.272 0.273 0.232
       100    0.566 0.546 0.521     0.628 0.584 0.579     0.408 0.390 0.363     0.456 0.439 0.408
       200    0.668 0.649 0.641     0.701 0.689 0.682     0.612 0.600 0.566     0.643 0.628 0.592
       350    0.806 0.792 0.780     0.805 0.796 0.789     0.774 0.796 0.779     0.810 0.824 0.802
       500    0.876 0.865 0.850     0.885 0.874 0.859     0.856 0.870 0.868     0.867 0.880 0.878
Table 8: Power of the Sup-LR test under different trimming levels at 10% significance when β1 = 1, β2 = 2 (α = 0.10)

              Linear AsySupLR       Linear BootSup        Probit AsySupLR       Probit BootSupLR
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.562 0.536 0.501     0.552 0.540 0.540     0.006 0.011 0.009     0.149 0.157 0.157
       30     0.985 0.977 0.971     0.991 0.991 0.988     0.063 0.053 0.043     0.173 0.180 0.181
       50     1.000 1.000 1.000     1.000 1.000 1.000     0.151 0.133 0.106     0.236 0.228 0.231
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.319 0.281 0.247     0.328 0.324 0.323
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.602 0.572 0.526     0.597 0.577 0.563
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.857 0.835 0.812     0.853 0.824 0.816
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.929 0.924 0.911     0.928 0.924 0.906
0.4    10     0.535 0.507 0.464     0.561 0.549 0.549     0.008 0.014 0.011     0.108 0.107 0.107
       30     0.965 0.958 0.953     0.984 0.971 0.963     0.075 0.067 0.056     0.174 0.176 0.171
       50     0.997 0.997 0.996     0.998 0.997 0.997     0.169 0.143 0.116     0.223 0.228 0.232
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.343 0.306 0.262     0.331 0.333 0.322
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.630 0.604 0.546     0.633 0.597 0.580
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.847 0.830 0.806     0.856 0.826 0.807
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.946 0.941 0.930     0.946 0.940 0.927
0.7    10     0.443 0.418 0.386     0.446 0.418 0.418     0.030 0.027 0.021     0.128 0.130 0.130
       30     0.864 0.850 0.832     0.877 0.867 0.856     0.114 0.103 0.091     0.186 0.193 0.191
       50     0.984 0.981 0.971     0.985 0.984 0.981     0.197 0.172 0.158     0.250 0.257 0.232
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.418 0.383 0.345     0.422 0.425 0.404
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.674 0.646 0.614     0.684 0.650 0.648
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.877 0.868 0.851     0.882 0.870 0.834
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.956 0.955 0.947     0.961 0.955 0.947
0.9    10     0.347 0.323 0.289     0.343 0.337 0.337     0.032 0.025 0.017     0.136 0.144 0.144
       30     0.647 0.624 0.588     0.705 0.693 0.675     0.157 0.144 0.121     0.221 0.209 0.204
       50     0.807 0.790 0.778     0.847 0.831 0.826     0.263 0.238 0.208     0.272 0.270 0.252
       100    0.969 0.962 0.959     0.980 0.978 0.970     0.509 0.486 0.446     0.534 0.526 0.509
       200    0.999 0.999 1.000     0.999 0.999 1.000     0.732 0.721 0.700     0.737 0.728 0.703
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.886 0.881 0.886     0.890 0.886 0.893
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.943 0.939 0.938     0.946 0.939 0.941
0.99   10     0.342 0.319 0.284     0.339 0.328 0.328     0.041 0.037 0.030     0.120 0.113 0.113
       30     0.456 0.443 0.433     0.511 0.515 0.497     0.182 0.162 0.145     0.205 0.197 0.198
       50     0.485 0.480 0.464     0.539 0.547 0.546     0.322 0.298 0.270     0.367 0.353 0.341
       100    0.580 0.564 0.548     0.628 0.584 0.579     0.517 0.517 0.508     0.518 0.511 0.485
       200    0.677 0.660 0.650     0.701 0.689 0.682     0.654 0.671 0.684     0.654 0.671 0.667
       350    0.812 0.796 0.784     0.805 0.796 0.789     0.789 0.825 0.836     0.795 0.828 0.840
       500    0.879 0.866 0.854     0.885 0.874 0.859     0.865 0.885 0.893     0.872 0.888 0.896
Table 9: Power of the Sup-Wald test under different trimming levels at 10% significance when β1 = 1, β2 = 2 (α = 0.10)

              Linear AsySupWald     Linear BootSup        Probit AsySupWald     Probit BootSupWald
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.698 0.697 0.672     0.552 0.540 0.540     0.000 0.000 0.000     0.075 0.079 0.079
       30     0.991 0.990 0.986     0.991 0.991 0.988     0.006 0.008 0.009     0.124 0.132 0.119
       50     1.000 1.000 1.000     1.000 1.000 1.000     0.016 0.018 0.018     0.135 0.136 0.137
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.085 0.088 0.090     0.223 0.203 0.198
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.303 0.282 0.245     0.569 0.530 0.407
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.758 0.715 0.674     0.855 0.846 0.809
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.898 0.877 0.863     0.945 0.921 0.900
0.4    10     0.678 0.676 0.651     0.561 0.549 0.549     0.000 0.000 0.000     0.075 0.082 0.082
       30     0.981 0.971 0.964     0.984 0.971 0.963     0.001 0.001 0.002     0.123 0.138 0.138
       50     0.997 0.997 0.997     0.998 0.997 0.997     0.024 0.025 0.026     0.186 0.194 0.177
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.096 0.098 0.091     0.302 0.253 0.217
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.374 0.327 0.286     0.614 0.572 0.540
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.747 0.723 0.684     0.843 0.834 0.795
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.925 0.912 0.899     0.950 0.946 0.935
0.7    10     0.606 0.592 0.565     0.446 0.418 0.418     0.000 0.000 0.000     0.080 0.085 0.085
       30     0.881 0.877 0.864     0.877 0.867 0.856     0.007 0.008 0.009     0.159 0.174 0.173
       50     0.985 0.984 0.981     0.985 0.984 0.981     0.013 0.014 0.012     0.228 0.197 0.206
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.120 0.111 0.092     0.417 0.374 0.356
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.490 0.440 0.401     0.686 0.664 0.630
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.809 0.788 0.764     0.896 0.886 0.857
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.941 0.927 0.912     0.957 0.958 0.960
0.9    10     0.511 0.482 0.448     0.343 0.337 0.337     0.000 0.000 0.000     0.103 0.111 0.111
       30     0.697 0.675 0.659     0.705 0.693 0.675     0.007 0.007 0.007     0.169 0.172 0.175
       50     0.829 0.821 0.807     0.847 0.831 0.826     0.050 0.044 0.036     0.238 0.233 0.220
       100    0.973 0.967 0.961     0.980 0.978 0.970     0.254 0.233 0.208     0.484 0.446 0.421
       200    0.999 0.999 1.000     0.999 0.999 1.000     0.601 0.560 0.536     0.724 0.723 0.687
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.854 0.846 0.826     0.896 0.894 0.891
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.927 0.921 0.914     0.944 0.947 0.940
0.99   10     0.475 0.462 0.436     0.339 0.328 0.328     0.001 0.001 0.001     0.096 0.082 0.082
       30     0.513 0.495 0.480     0.511 0.515 0.497     0.014 0.009 0.005     0.160 0.156 0.155
       50     0.521 0.515 0.506     0.539 0.547 0.546     0.073 0.060 0.046     0.279 0.266 0.265
       100    0.596 0.583 0.567     0.628 0.584 0.579     0.317 0.286 0.245     0.447 0.439 0.441
       200    0.685 0.668 0.656     0.701 0.689 0.682     0.578 0.556 0.511     0.660 0.654 0.636
       350    0.814 0.796 0.789     0.805 0.796 0.789     0.765 0.768 0.747     0.808 0.824 0.814
       500    0.881 0.867 0.857     0.885 0.874 0.859     0.853 0.862 0.851     0.867 0.878 0.889
B.2 β1 = 1, β2 = 1.5, β3 = 2
Table 10: Power of the Sup-LM test under different trimming levels at 10% significance when β1 = 1, β2 = 1.5, β3 = 2 (α = 0.10)

              Linear AsySupLM       Linear BootSup        Probit AsySupLM       Probit BootSupLM
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.090 0.065 0.030     0.380 0.356 0.356     0.001 0.000 1.000     0.149 0.163 0.000
       30     0.869 0.842 0.794     0.935 0.929 0.916     0.032 0.061 0.076     0.144 0.149 0.107
       50     0.984 0.982 0.975     0.991 0.991 0.988     0.094 0.123 0.138     0.231 0.226 0.185
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.218 0.235 0.254     0.305 0.289 0.233
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.428 0.429 0.429     0.502 0.479 0.410
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.664 0.647 0.637     0.734 0.718 0.657
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.786 0.774 0.754     0.821 0.768 0.739
0.4    10     0.084 0.053 0.020     0.379 0.380 0.380     0.002 0.002 1.000     0.090 0.114 0.000
       30     0.798 0.770 0.723     0.893 0.895 0.870     0.043 0.070 0.084     0.159 0.169 0.153
       50     0.971 0.965 0.957     0.990 0.988 0.980     0.102 0.126 0.149     0.225 0.180 0.171
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.231 0.238 0.255     0.315 0.292 0.233
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.454 0.465 0.454     0.508 0.470 0.432
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.665 0.643 0.626     0.714 0.672 0.641
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.802 0.795 0.784     0.830 0.809 0.771
0.7    10     0.084 0.057 0.031     0.337 0.319 0.319     0.003 0.004 0.999     0.115 0.123 0.000
       30     0.660 0.630 0.588     0.774 0.767 0.753     0.052 0.076 0.096     0.153 0.161 0.126
       50     0.870 0.852 0.827     0.922 0.918 0.888     0.118 0.143 0.160     0.205 0.194 0.165
       100    0.998 0.997 0.996     0.998 0.998 0.999     0.281 0.289 0.297     0.367 0.364 0.296
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.493 0.487 0.475     0.584 0.554 0.497
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.733 0.714 0.710     0.783 0.779 0.730
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.849 0.827 0.802     0.857 0.838 0.798
0.9    10     0.061 0.038 0.017     0.259 0.245 0.245     0.015 0.010 0.999     0.110 0.132 0.000
       30     0.424 0.395 0.345     0.521 0.527 0.520     0.054 0.065 0.071     0.135 0.116 0.106
       50     0.621 0.596 0.573     0.713 0.708 0.693     0.145 0.143 0.137     0.249 0.235 0.167
       100    0.863 0.842 0.823     0.904 0.893 0.881     0.360 0.354 0.340     0.399 0.372 0.353
       200    0.984 0.982 0.977     0.987 0.984 0.983     0.563 0.549 0.540     0.624 0.605 0.575
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.758 0.738 0.724     0.800 0.789 0.755
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.861 0.843 0.823     0.879 0.869 0.851
0.99   10     0.061 0.038 0.019     0.277 0.274 0.274     0.015 0.012 1.000     0.111 0.106 0.000
       30     0.312 0.288 0.271     0.430 0.427 0.415     0.098 0.108 0.107     0.150 0.141 0.114
       50     0.355 0.346 0.347     0.456 0.457 0.466     0.203 0.198 0.181     0.279 0.266 0.244
       100    0.444 0.432 0.409     0.513 0.475 0.466     0.392 0.370 0.345     0.458 0.413 0.383
       200    0.538 0.519 0.507     0.574 0.561 0.564     0.634 0.639 0.606     0.698 0.688 0.665
       350    0.666 0.651 0.630     0.709 0.702 0.673     0.792 0.786 0.776     0.801 0.787 0.779
       500    0.776 0.760 0.732     0.777 0.772 0.764     0.867 0.882 0.866     0.870 0.888 0.887
Table 11: Power of the Sup-LR test under different trimming levels at 10% significance when β1 = 1, β2 = 1.5, β3 = 2 (α = 0.10)

              Linear AsySupLR       Linear BootSup        Probit AsySupLR       Probit BootSupLR
ρ      T      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05      0.15  0.10  0.05
0      10     0.385 0.350 0.307     0.380 0.356 0.356     0.006 0.011 0.009     0.145 0.154 0.154
       30     0.912 0.893 0.872     0.935 0.929 0.916     0.059 0.063 0.050     0.172 0.183 0.158
       50     0.988 0.986 0.983     0.991 0.991 0.988     0.116 0.094 0.082     0.193 0.204 0.208
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.242 0.224 0.199     0.245 0.252 0.247
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.454 0.438 0.415     0.450 0.429 0.436
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.660 0.654 0.609     0.683 0.638 0.618
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.804 0.786 0.764     0.822 0.784 0.761
0.4    10     0.358 0.338 0.303     0.379 0.380 0.380     0.009 0.015 0.012     0.107 0.115 0.115
       30     0.858 0.837 0.807     0.893 0.895 0.870     0.064 0.067 0.055     0.159 0.166 0.169
       50     0.980 0.974 0.967     0.990 0.988 0.980     0.134 0.115 0.102     0.199 0.193 0.192
       100    1.000 1.000 1.000     1.000 1.000 1.000     0.253 0.218 0.181     0.259 0.259 0.260
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.488 0.464 0.422     0.448 0.440 0.438
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.701 0.679 0.653     0.687 0.665 0.652
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.810 0.793 0.768     0.817 0.810 0.757
0.7    10     0.334 0.319 0.282     0.337 0.319 0.319     0.031 0.030 0.020     0.135 0.128 0.128
       30     0.725 0.703 0.685     0.774 0.767 0.753     0.081 0.068 0.061     0.157 0.161 0.163
       50     0.895 0.878 0.857     0.922 0.918 0.888     0.166 0.146 0.119     0.219 0.219 0.220
       100    0.998 0.998 0.997     0.998 0.998 0.999     0.319 0.286 0.256     0.320 0.318 0.300
       200    1.000 1.000 1.000     1.000 1.000 1.000     0.532 0.504 0.467     0.582 0.526 0.524
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.748 0.730 0.712     0.756 0.757 0.711
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.865 0.843 0.832     0.844 0.835 0.813
0.9    10     0.262 0.236 0.206     0.259 0.245 0.245     0.030 0.023 0.017     0.119 0.136 0.136
       30     0.480 0.471 0.450     0.521 0.527 0.520     0.126 0.115 0.095     0.166 0.171 0.162
       50     0.668 0.646 0.622     0.713 0.708 0.693     0.223 0.201 0.176     0.271 0.262 0.244
       100    0.875 0.860 0.833     0.904 0.893 0.881     0.438 0.409 0.376     0.443 0.441 0.426
       200    0.984 0.982 0.978     0.987 0.984 0.983     0.612 0.614 0.600     0.637 0.612 0.607
       350    1.000 1.000 1.000     1.000 1.000 1.000     0.792 0.790 0.763     0.801 0.804 0.766
       500    1.000 1.000 1.000     1.000 1.000 1.000     0.873 0.868 0.864     0.888 0.875 0.858
0.99   10     0.283 0.260 0.225     0.277 0.274 0.274     0.040 0.038 0.031     0.131 0.132 0.132
       30     0.371 0.367 0.353     0.430 0.427 0.415     0.168 0.149 0.125     0.197 0.181 0.180
       50     0.392 0.389 0.377     0.456 0.457 0.466     0.292 0.285 0.255     0.315 0.315 0.303
       100    0.466 0.454 0.432     0.513 0.475 0.466     0.491 0.497 0.488     0.491 0.499 0.486
       200    0.549 0.534 0.524     0.574 0.561 0.564     0.684 0.712 0.718     0.707 0.712 0.709
       350    0.675 0.655 0.637     0.709 0.702 0.673     0.789 0.812 0.827     0.776 0.812 0.820
       500    0.776 0.764 0.738     0.777 0.772 0.764     0.875 0.891 0.907     0.862 0.891 0.910
Table 12: Power of the Sup-Wald test under different trimming levels at 10% significance (α = 0.10) when β1 = 1, β2 = 1.5, β3 = 2. Each row reports rejection frequencies for T = 10, 30, 50, 100, 200, 350, 500; the trimming level is in parentheses.

ρ = 0
  Linear AsySupWald  (0.15): 0.552 0.935 0.991 1.000 1.000 1.000 1.000
  Linear AsySupWald  (0.10): 0.543 0.926 0.991 1.000 1.000 1.000 1.000
  Linear AsySupWald  (0.05): 0.509 0.912 0.986 1.000 1.000 1.000 1.000
  Linear BootSupWald (0.15): 0.380 0.935 0.991 1.000 1.000 1.000 1.000
  Linear BootSupWald (0.10): 0.356 0.929 0.991 1.000 1.000 1.000 1.000
  Linear BootSupWald (0.05): 0.356 0.916 0.988 1.000 1.000 1.000 1.000
  Probit AsySupWald  (0.15): 0.000 0.006 0.017 0.070 0.308 0.568 0.733
  Probit AsySupWald  (0.10): 0.000 0.006 0.017 0.076 0.297 0.555 0.714
  Probit AsySupWald  (0.05): 0.000 0.006 0.018 0.079 0.282 0.514 0.691
  Probit BootSupWald (0.15): 0.101 0.113 0.223 0.214 0.444 0.711 0.828
  Probit BootSupWald (0.10): 0.096 0.122 0.207 0.202 0.424 0.646 0.770
  Probit BootSupWald (0.05): 0.096 0.136 0.199 0.188 0.392 0.595 0.701

ρ = 0.4
  Linear AsySupWald  (0.15): 0.535 0.896 0.987 1.000 1.000 1.000 1.000
  Linear AsySupWald  (0.10): 0.532 0.888 0.980 1.000 1.000 1.000 1.000
  Linear AsySupWald  (0.05): 0.501 0.867 0.974 1.000 1.000 1.000 1.000
  Linear BootSupWald (0.15): 0.379 0.893 0.990 1.000 1.000 1.000 1.000
  Linear BootSupWald (0.10): 0.380 0.895 0.988 1.000 1.000 1.000 1.000
  Linear BootSupWald (0.05): 0.380 0.870 0.980 1.000 1.000 1.000 1.000
  Probit AsySupWald  (0.15): 0.000 0.006 0.018 0.094 0.329 0.590 0.756
  Probit AsySupWald  (0.10): 0.000 0.006 0.020 0.098 0.310 0.568 0.733
  Probit AsySupWald  (0.05): 0.000 0.006 0.019 0.097 0.301 0.539 0.716
  Probit BootSupWald (0.15): 0.067 0.149 0.187 0.275 0.458 0.698 0.827
  Probit BootSupWald (0.10): 0.081 0.142 0.185 0.253 0.436 0.647 0.802
  Probit BootSupWald (0.05): 0.081 0.150 0.161 0.219 0.397 0.597 0.788

ρ = 0.7
  Linear AsySupWald  (0.15): 0.491 0.774 0.921 0.998 1.000 1.000 1.000
  Linear AsySupWald  (0.10): 0.480 0.764 0.901 0.998 1.000 1.000 1.000
  Linear AsySupWald  (0.05): 0.440 0.739 0.884 0.999 1.000 1.000 1.000
  Linear BootSupWald (0.15): 0.337 0.774 0.922 0.998 1.000 1.000 1.000
  Linear BootSupWald (0.10): 0.319 0.767 0.918 0.998 1.000 1.000 1.000
  Linear BootSupWald (0.05): 0.319 0.753 0.888 0.999 1.000 1.000 1.000
  Probit AsySupWald  (0.15): 0.000 0.007 0.018 0.125 0.388 0.655 0.810
  Probit AsySupWald  (0.10): 0.000 0.007 0.021 0.118 0.366 0.644 0.783
  Probit AsySupWald  (0.05): 0.000 0.007 0.016 0.107 0.352 0.617 0.766
  Probit BootSupWald (0.15): 0.090 0.160 0.193 0.349 0.573 0.778 0.866
  Probit BootSupWald (0.10): 0.106 0.146 0.202 0.332 0.556 0.769 0.848
  Probit BootSupWald (0.05): 0.106 0.145 0.185 0.297 0.537 0.753 0.816

ρ = 0.9
  Linear AsySupWald  (0.15): 0.411 0.538 0.704 0.878 0.984 1.000 1.000
  Linear AsySupWald  (0.10): 0.391 0.529 0.685 0.873 0.982 1.000 1.000
  Linear AsySupWald  (0.05): 0.364 0.509 0.670 0.853 0.978 1.000 1.000
  Linear BootSupWald (0.15): 0.259 0.521 0.713 0.904 0.987 1.000 1.000
  Linear BootSupWald (0.10): 0.245 0.527 0.708 0.893 0.984 1.000 1.000
  Linear BootSupWald (0.05): 0.245 0.520 0.693 0.881 0.983 1.000 1.000
  Probit AsySupWald  (0.15): 0.000 0.004 0.032 0.215 0.478 0.724 0.839
  Probit AsySupWald  (0.10): 0.000 0.003 0.035 0.210 0.446 0.699 0.827
  Probit AsySupWald  (0.05): 0.000 0.002 0.030 0.191 0.426 0.673 0.809
  Probit BootSupWald (0.15): 0.094 0.146 0.250 0.399 0.628 0.796 0.884
  Probit BootSupWald (0.10): 0.111 0.133 0.238 0.384 0.623 0.790 0.883
  Probit BootSupWald (0.05): 0.111 0.129 0.217 0.366 0.611 0.769 0.880

ρ = 0.99
  Linear AsySupWald  (0.15): 0.435 0.424 0.422 0.482 0.553 0.680 0.777
  Linear AsySupWald  (0.10): 0.418 0.413 0.423 0.473 0.540 0.665 0.767
  Linear AsySupWald  (0.05): 0.393 0.419 0.429 0.451 0.532 0.645 0.740
  Linear BootSupWald (0.15): 0.277 0.430 0.456 0.513 0.574 0.709 0.777
  Linear BootSupWald (0.10): 0.274 0.427 0.457 0.475 0.561 0.702 0.772
  Linear BootSupWald (0.05): 0.274 0.415 0.466 0.466 0.564 0.673 0.764
  Probit AsySupWald  (0.15): 0.001 0.015 0.090 0.302 0.595 0.775 0.857
  Probit AsySupWald  (0.10): 0.001 0.011 0.070 0.281 0.569 0.766 0.864
  Probit AsySupWald  (0.05): 0.001 0.009 0.056 0.235 0.531 0.738 0.849
  Probit BootSupWald (0.15): 0.101 0.164 0.270 0.444 0.700 0.794 0.865
  Probit BootSupWald (0.10): 0.086 0.161 0.256 0.433 0.702 0.809 0.889
  Probit BootSupWald (0.05): 0.086 0.162 0.250 0.440 0.686 0.797 0.896
B.3 β1 = 1, β2 = 1.5, β3 = 1
Table 13: Power of the Sup-LM test under different trimming levels at 10% significance (α = 0.10) when β1 = 1, β2 = 1.5, β3 = 1. Each row reports rejection frequencies for T = 10, 30, 50, 100, 200, 350, 500; the trimming level is in parentheses.

ρ = 0
  Linear AsySupLM  (0.15): 0.017 0.209 0.392 0.813 0.991 1.000 1.000
  Linear AsySupLM  (0.10): 0.010 0.188 0.345 0.775 0.988 1.000 1.000
  Linear AsySupLM  (0.05): 0.005 0.152 0.300 0.727 0.982 1.000 1.000
  Linear BootSupLM (0.15): 0.119 0.333 0.541 0.859 0.993 1.000 1.000
  Linear BootSupLM (0.10): 0.117 0.330 0.520 0.842 0.991 1.000 1.000
  Linear BootSupLM (0.05): 0.117 0.305 0.477 0.817 0.988 1.000 1.000
  Probit AsySupLM  (0.15): 0.015 0.063 0.096 0.161 0.239 0.298 0.370
  Probit AsySupLM  (0.10): 0.012 0.077 0.130 0.190 0.248 0.314 0.376
  Probit AsySupLM  (0.05): 1.000 0.116 0.161 0.224 0.268 0.335 0.389
  Probit BootSupLM (0.15): 0.178 0.173 0.223 0.242 0.306 0.358 0.430
  Probit BootSupLM (0.10): 0.193 0.158 0.215 0.226 0.313 0.377 0.433
  Probit BootSupLM (0.05): 0.000 0.147 0.169 0.215 0.257 0.352 0.387

ρ = 0.4
  Linear AsySupLM  (0.15): 0.013 0.188 0.361 0.707 0.982 1.000 1.000
  Linear AsySupLM  (0.10): 0.004 0.169 0.319 0.672 0.975 1.000 1.000
  Linear AsySupLM  (0.05): 0.002 0.138 0.264 0.612 0.957 0.999 1.000
  Linear BootSupLM (0.15): 0.140 0.342 0.520 0.800 0.985 1.000 1.000
  Linear BootSupLM (0.10): 0.148 0.305 0.477 0.763 0.984 1.000 1.000
  Linear BootSupLM (0.05): 0.148 0.273 0.440 0.741 0.976 1.000 1.000
  Probit AsySupLM  (0.15): 0.017 0.062 0.100 0.164 0.231 0.330 0.383
  Probit AsySupLM  (0.10): 0.015 0.085 0.128 0.197 0.240 0.337 0.406
  Probit AsySupLM  (0.05): 1.000 0.107 0.156 0.242 0.282 0.349 0.420
  Probit BootSupLM (0.15): 0.145 0.195 0.200 0.239 0.286 0.374 0.454
  Probit BootSupLM (0.10): 0.142 0.180 0.183 0.241 0.258 0.388 0.468
  Probit BootSupLM (0.05): 0.000 0.134 0.161 0.242 0.252 0.377 0.442

ρ = 0.7
  Linear AsySupLM  (0.15): 0.024 0.140 0.245 0.494 0.867 0.989 1.000
  Linear AsySupLM  (0.10): 0.016 0.120 0.217 0.457 0.836 0.984 1.000
  Linear AsySupLM  (0.05): 0.004 0.094 0.193 0.416 0.803 0.969 1.000
  Linear BootSupLM (0.15): 0.157 0.227 0.310 0.601 0.908 0.989 1.000
  Linear BootSupLM (0.10): 0.142 0.219 0.317 0.570 0.892 0.986 1.000
  Linear BootSupLM (0.05): 0.142 0.214 0.298 0.543 0.884 0.985 1.000
  Probit AsySupLM  (0.15): 0.021 0.090 0.127 0.208 0.280 0.417 0.457
  Probit AsySupLM  (0.10): 0.017 0.104 0.164 0.226 0.298 0.438 0.471
  Probit AsySupLM  (0.05): 0.999 0.133 0.182 0.242 0.331 0.462 0.471
  Probit BootSupLM (0.15): 0.173 0.231 0.230 0.297 0.338 0.442 0.502
  Probit BootSupLM (0.10): 0.163 0.217 0.218 0.285 0.351 0.449 0.514
  Probit BootSupLM (0.05): 0.000 0.194 0.222 0.259 0.354 0.444 0.520

ρ = 0.9
  Linear AsySupLM  (0.15): 0.018 0.117 0.156 0.263 0.469 0.709 0.892
  Linear AsySupLM  (0.10): 0.011 0.102 0.142 0.254 0.449 0.670 0.864
  Linear AsySupLM  (0.05): 0.004 0.091 0.131 0.221 0.428 0.621 0.820
  Linear BootSupLM (0.15): 0.120 0.209 0.253 0.342 0.546 0.770 0.923
  Linear BootSupLM (0.10): 0.114 0.204 0.230 0.314 0.517 0.742 0.905
  Linear BootSupLM (0.05): 0.114 0.205 0.223 0.305 0.489 0.714 0.870
  Probit AsySupLM  (0.15): 0.039 0.124 0.212 0.335 0.446 0.534 0.629
  Probit AsySupLM  (0.10): 0.031 0.139 0.224 0.347 0.447 0.550 0.634
  Probit AsySupLM  (0.05): 1.000 0.145 0.237 0.350 0.456 0.552 0.643
  Probit BootSupLM (0.15): 0.190 0.229 0.291 0.409 0.483 0.601 0.660
  Probit BootSupLM (0.10): 0.183 0.201 0.292 0.401 0.464 0.573 0.644
  Probit BootSupLM (0.05): 0.000 0.183 0.245 0.354 0.456 0.543 0.633

ρ = 0.99
  Linear AsySupLM  (0.15): 0.029 0.141 0.131 0.159 0.182 0.222 0.267
  Linear AsySupLM  (0.10): 0.015 0.127 0.137 0.154 0.189 0.214 0.270
  Linear AsySupLM  (0.05): 0.007 0.126 0.161 0.160 0.202 0.219 0.269
  Linear BootSupLM (0.15): 0.156 0.233 0.198 0.214 0.215 0.265 0.311
  Linear BootSupLM (0.10): 0.153 0.241 0.207 0.205 0.220 0.261 0.290
  Linear BootSupLM (0.05): 0.153 0.252 0.245 0.212 0.235 0.262 0.282
  Probit AsySupLM  (0.15): 0.054 0.179 0.274 0.428 0.591 0.703 0.771
  Probit AsySupLM  (0.10): 0.050 0.173 0.277 0.412 0.590 0.693 0.781
  Probit AsySupLM  (0.05): 1.000 0.186 0.271 0.406 0.577 0.688 0.777
  Probit BootSupLM (0.15): 0.207 0.283 0.362 0.454 0.597 0.715 0.776
  Probit BootSupLM (0.10): 0.193 0.265 0.332 0.435 0.601 0.711 0.787
  Probit BootSupLM (0.05): 0.000 0.241 0.327 0.411 0.595 0.704 0.770
Table 14: Power of the Sup-LR test under different trimming levels at 10% significance (α = 0.10) when β1 = 1, β2 = 1.5, β3 = 1. Each row reports rejection frequencies for T = 10, 30, 50, 100, 200, 350, 500; the trimming level is in parentheses.

ρ = 0
  Linear AsySupLR  (0.15): 0.121 0.277 0.448 0.837 0.992 1.000 1.000
  Linear AsySupLR  (0.10): 0.117 0.266 0.409 0.802 0.988 1.000 1.000
  Linear AsySupLR  (0.05): 0.093 0.231 0.367 0.758 0.982 1.000 1.000
  Linear BootSupLR (0.15): 0.119 0.333 0.541 0.859 0.993 1.000 1.000
  Linear BootSupLR (0.10): 0.117 0.330 0.520 0.842 0.991 1.000 1.000
  Linear BootSupLR (0.05): 0.117 0.305 0.477 0.817 0.988 1.000 1.000
  Probit AsySupLR  (0.15): 0.014 0.059 0.082 0.120 0.142 0.202 0.273
  Probit AsySupLR  (0.10): 0.014 0.056 0.065 0.112 0.141 0.187 0.252
  Probit AsySupLR  (0.05): 0.011 0.053 0.063 0.098 0.118 0.188 0.241
  Probit BootSupLR (0.15): 0.169 0.111 0.133 0.136 0.155 0.218 0.297
  Probit BootSupLR (0.10): 0.171 0.120 0.141 0.142 0.136 0.220 0.272
  Probit BootSupLR (0.05): 0.171 0.124 0.156 0.137 0.137 0.196 0.255

ρ = 0.4
  Linear AsySupLR  (0.15): 0.133 0.256 0.411 0.733 0.984 1.000 1.000
  Linear AsySupLR  (0.10): 0.115 0.245 0.385 0.695 0.977 1.000 1.000
  Linear AsySupLR  (0.05): 0.095 0.202 0.334 0.649 0.968 1.000 1.000
  Linear BootSupLR (0.15): 0.140 0.342 0.520 0.800 0.985 1.000 1.000
  Linear BootSupLR (0.10): 0.148 0.305 0.477 0.763 0.984 1.000 1.000
  Linear BootSupLR (0.05): 0.148 0.273 0.440 0.741 0.976 1.000 1.000
  Probit AsySupLR  (0.15): 0.020 0.071 0.074 0.122 0.165 0.234 0.270
  Probit AsySupLR  (0.10): 0.023 0.071 0.069 0.113 0.158 0.223 0.263
  Probit AsySupLR  (0.05): 0.016 0.066 0.064 0.095 0.147 0.212 0.253
  Probit BootSupLR (0.15): 0.140 0.128 0.109 0.124 0.150 0.265 0.304
  Probit BootSupLR (0.10): 0.144 0.128 0.117 0.129 0.155 0.237 0.288
  Probit BootSupLR (0.05): 0.144 0.124 0.120 0.141 0.162 0.214 0.278

ρ = 0.7
  Linear AsySupLR  (0.15): 0.157 0.200 0.279 0.517 0.877 0.990 1.000
  Linear AsySupLR  (0.10): 0.143 0.178 0.268 0.489 0.843 0.985 1.000
  Linear AsySupLR  (0.05): 0.119 0.159 0.235 0.446 0.813 0.972 1.000
  Linear BootSupLR (0.15): 0.157 0.227 0.310 0.601 0.908 0.989 1.000
  Linear BootSupLR (0.10): 0.142 0.219 0.317 0.570 0.892 0.986 1.000
  Linear BootSupLR (0.05): 0.142 0.214 0.298 0.543 0.884 0.985 1.000
  Probit AsySupLR  (0.15): 0.042 0.108 0.114 0.159 0.214 0.339 0.351
  Probit AsySupLR  (0.10): 0.037 0.096 0.101 0.146 0.218 0.330 0.337
  Probit AsySupLR  (0.05): 0.029 0.081 0.103 0.139 0.196 0.315 0.322
  Probit BootSupLR (0.15): 0.151 0.170 0.144 0.152 0.218 0.353 0.370
  Probit BootSupLR (0.10): 0.127 0.177 0.142 0.157 0.219 0.322 0.361
  Probit BootSupLR (0.05): 0.127 0.185 0.152 0.159 0.223 0.315 0.326

ρ = 0.9
  Linear AsySupLR  (0.15): 0.123 0.166 0.183 0.282 0.483 0.715 0.897
  Linear AsySupLR  (0.10): 0.110 0.151 0.182 0.275 0.457 0.681 0.868
  Linear AsySupLR  (0.05): 0.094 0.136 0.167 0.248 0.434 0.627 0.829
  Linear BootSupLR (0.15): 0.120 0.209 0.253 0.342 0.546 0.770 0.923
  Linear BootSupLR (0.10): 0.114 0.204 0.230 0.314 0.517 0.742 0.905
  Linear BootSupLR (0.05): 0.114 0.205 0.223 0.305 0.489 0.714 0.870
  Probit AsySupLR  (0.15): 0.064 0.153 0.225 0.320 0.414 0.485 0.578
  Probit AsySupLR  (0.10): 0.052 0.153 0.209 0.320 0.415 0.482 0.579
  Probit AsySupLR  (0.05): 0.045 0.129 0.193 0.291 0.407 0.497 0.575
  Probit BootSupLR (0.15): 0.169 0.199 0.221 0.327 0.406 0.501 0.590
  Probit BootSupLR (0.10): 0.162 0.185 0.222 0.322 0.406 0.489 0.591
  Probit BootSupLR (0.05): 0.162 0.185 0.204 0.299 0.380 0.488 0.568

ρ = 0.99
  Linear AsySupLR  (0.15): 0.159 0.187 0.161 0.177 0.192 0.229 0.272
  Linear AsySupLR  (0.10): 0.146 0.176 0.155 0.167 0.193 0.219 0.273
  Linear AsySupLR  (0.05): 0.121 0.177 0.188 0.172 0.208 0.225 0.272
  Linear BootSupLR (0.15): 0.156 0.233 0.198 0.214 0.215 0.265 0.311
  Linear BootSupLR (0.10): 0.153 0.241 0.207 0.205 0.220 0.261 0.290
  Linear BootSupLR (0.05): 0.153 0.252 0.245 0.212 0.235 0.262 0.282
  Probit AsySupLR  (0.15): 0.094 0.245 0.366 0.494 0.626 0.715 0.769
  Probit AsySupLR  (0.10): 0.085 0.218 0.360 0.511 0.649 0.729 0.796
  Probit AsySupLR  (0.05): 0.069 0.191 0.326 0.497 0.667 0.737 0.813
  Probit BootSupLR (0.15): 0.196 0.252 0.338 0.458 0.621 0.715 0.766
  Probit BootSupLR (0.10): 0.177 0.254 0.338 0.439 0.632 0.729 0.788
  Probit BootSupLR (0.05): 0.177 0.256 0.323 0.424 0.639 0.736 0.795
Table 15: Power of the Sup-Wald test under different trimming levels at 10% significance (α = 0.10) when β1 = 1, β2 = 1.5, β3 = 1. Each row reports rejection frequencies for T = 10, 30, 50, 100, 200, 350, 500; the trimming level is in parentheses.

ρ = 0
  Linear AsySupWald  (0.15): 0.236 0.334 0.499 0.848 0.993 1.000 1.000
  Linear AsySupWald  (0.10): 0.234 0.316 0.469 0.829 0.989 1.000 1.000
  Linear AsySupWald  (0.05): 0.209 0.301 0.426 0.787 0.984 1.000 1.000
  Linear BootSupWald (0.15): 0.119 0.333 0.541 0.859 0.993 1.000 1.000
  Linear BootSupWald (0.10): 0.117 0.330 0.520 0.842 0.991 1.000 1.000
  Linear BootSupWald (0.05): 0.117 0.305 0.477 0.817 0.988 1.000 1.000
  Probit AsySupWald  (0.15): 0.000 0.005 0.008 0.048 0.120 0.209 0.291
  Probit AsySupWald  (0.10): 0.000 0.006 0.010 0.049 0.122 0.206 0.284
  Probit AsySupWald  (0.05): 0.000 0.006 0.013 0.053 0.122 0.202 0.276
  Probit BootSupWald (0.15): 0.124 0.170 0.195 0.244 0.263 0.341 0.398
  Probit BootSupWald (0.10): 0.127 0.172 0.178 0.223 0.279 0.310 0.405
  Probit BootSupWald (0.05): 0.127 0.164 0.163 0.182 0.233 0.296 0.371

ρ = 0.4
  Linear AsySupWald  (0.15): 0.239 0.326 0.458 0.761 0.984 1.000 1.000
  Linear AsySupWald  (0.10): 0.238 0.292 0.429 0.724 0.979 1.000 1.000
  Linear AsySupWald  (0.05): 0.218 0.276 0.396 0.689 0.973 1.000 1.000
  Linear BootSupWald (0.15): 0.140 0.342 0.520 0.800 0.985 1.000 1.000
  Linear BootSupWald (0.10): 0.148 0.305 0.477 0.763 0.984 1.000 1.000
  Linear BootSupWald (0.05): 0.148 0.273 0.440 0.741 0.976 1.000 1.000
  Probit AsySupWald  (0.15): 0.000 0.004 0.021 0.046 0.134 0.237 0.302
  Probit AsySupWald  (0.10): 0.000 0.003 0.019 0.049 0.128 0.239 0.302
  Probit AsySupWald  (0.05): 0.000 0.004 0.020 0.047 0.129 0.220 0.295
  Probit BootSupWald (0.15): 0.101 0.150 0.186 0.207 0.243 0.358 0.415
  Probit BootSupWald (0.10): 0.110 0.172 0.180 0.220 0.232 0.353 0.454
  Probit BootSupWald (0.05): 0.110 0.171 0.179 0.211 0.230 0.336 0.413

ρ = 0.7
  Linear AsySupWald  (0.15): 0.271 0.254 0.322 0.547 0.882 0.990 1.000
  Linear AsySupWald  (0.10): 0.262 0.238 0.315 0.511 0.859 0.986 1.000
  Linear AsySupWald  (0.05): 0.240 0.232 0.290 0.475 0.825 0.974 1.000
  Linear BootSupWald (0.15): 0.157 0.227 0.310 0.601 0.908 0.989 1.000
  Linear BootSupWald (0.10): 0.142 0.219 0.317 0.570 0.892 0.986 1.000
  Linear BootSupWald (0.05): 0.142 0.214 0.298 0.543 0.884 0.985 1.000
  Probit AsySupWald  (0.15): 0.000 0.009 0.017 0.088 0.192 0.347 0.382
  Probit AsySupWald  (0.10): 0.000 0.008 0.017 0.091 0.178 0.330 0.377
  Probit AsySupWald  (0.05): 0.000 0.009 0.016 0.086 0.172 0.333 0.360
  Probit BootSupWald (0.15): 0.103 0.192 0.216 0.292 0.316 0.429 0.478
  Probit BootSupWald (0.10): 0.108 0.191 0.222 0.281 0.331 0.428 0.509
  Probit BootSupWald (0.05): 0.108 0.186 0.246 0.271 0.350 0.445 0.490

ρ = 0.9
  Linear AsySupWald  (0.15): 0.261 0.199 0.222 0.300 0.496 0.723 0.898
  Linear AsySupWald  (0.10): 0.247 0.194 0.207 0.290 0.469 0.692 0.873
  Linear AsySupWald  (0.05): 0.210 0.190 0.207 0.278 0.449 0.638 0.836
  Linear BootSupWald (0.15): 0.120 0.209 0.253 0.342 0.546 0.770 0.923
  Linear BootSupWald (0.10): 0.114 0.204 0.230 0.314 0.517 0.742 0.905
  Linear BootSupWald (0.05): 0.114 0.205 0.223 0.305 0.489 0.714 0.870
  Probit AsySupWald  (0.15): 0.000 0.015 0.076 0.223 0.375 0.480 0.585
  Probit AsySupWald  (0.10): 0.000 0.013 0.062 0.209 0.367 0.476 0.576
  Probit AsySupWald  (0.05): 0.000 0.010 0.052 0.185 0.347 0.460 0.570
  Probit BootSupWald (0.15): 0.128 0.244 0.306 0.404 0.457 0.563 0.643
  Probit BootSupWald (0.10): 0.128 0.223 0.309 0.402 0.475 0.549 0.635
  Probit BootSupWald (0.05): 0.128 0.218 0.301 0.379 0.484 0.567 0.650

ρ = 0.99
  Linear AsySupWald  (0.15): 0.273 0.236 0.182 0.187 0.200 0.231 0.275
  Linear AsySupWald  (0.10): 0.263 0.234 0.186 0.187 0.197 0.225 0.276
  Linear AsySupWald  (0.05): 0.231 0.232 0.213 0.186 0.218 0.229 0.279
  Linear BootSupWald (0.15): 0.156 0.233 0.198 0.214 0.215 0.265 0.311
  Linear BootSupWald (0.10): 0.153 0.241 0.207 0.205 0.220 0.261 0.290
  Linear BootSupWald (0.05): 0.153 0.252 0.245 0.212 0.235 0.262 0.282
  Probit AsySupWald  (0.15): 0.001 0.026 0.136 0.358 0.565 0.694 0.760
  Probit AsySupWald  (0.10): 0.001 0.020 0.114 0.335 0.551 0.684 0.775
  Probit AsySupWald  (0.05): 0.001 0.011 0.096 0.291 0.517 0.646 0.752
  Probit BootSupWald (0.15): 0.121 0.269 0.353 0.431 0.596 0.717 0.768
  Probit BootSupWald (0.10): 0.107 0.266 0.346 0.440 0.606 0.716 0.793
  Probit BootSupWald (0.05): 0.107 0.266 0.347 0.446 0.610 0.717 0.791
References
Albanese, M. and Knott, M. (1994). Bootstrapping latent variable models for binary response. The British Journal of Mathematical and Statistical Psychology, 47:235–246.

Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica, 59(3):817–858.

Andrews, D. W. K. (1993). Tests for parameter instability and structural change with unknown change point. Econometrica, 61(4):821–856.

Andrews, D. W. K. (2003). Tests for parameter instability and structural change with unknown change point: A corrigendum. Econometrica, 71(1):395–397.

Andrews, D. W. K. and Ploberger, W. (1994). Optimal tests when a nuisance parameter is present only under the alternative. Econometrica, 62(6):1383–1414.

Bai, J. (1999). Likelihood ratio tests for multiple structural changes. Journal of Econometrics, 91(2):299–323.

Bai, J. and Perron, P. (1998). Estimating and testing linear models with multiple structural changes. Econometrica, 66(1):47–78.

Bai, J. and Perron, P. (2003). Computation and analysis of multiple structural change models. Journal of Applied Econometrics, 18(1):1–22.

Bai, J. and Perron, P. (2004). Multiple structural change models: A simulation analysis. Unpublished manuscript.

Breusch, T. S. (1979). Conflict among criteria for testing hypotheses: Extensions and comments. Econometrica, 47(1):203–207.

Chauvet, M. and Potter, S. (2005). Forecasting recessions using the yield curve. Journal of Forecasting, 24(2):77–103.

Chauvet, M. and Potter, S. (2010). Business cycle monitoring with structural changes. International Journal of Forecasting, 26(4):777–793.

Chow, G. (1960). Tests of equality between sets of coefficients in two linear regressions. Econometrica, 28:591–605.

Diebold, F. X. and Chen, C. (1996). Testing structural stability with endogenous breakpoint: A size comparison of analytic and bootstrap procedures. Journal of Econometrics, 70(1):221–241.

Engle, R. F. (1984). Wald, likelihood ratio, and Lagrange multiplier tests in econometrics. In Griliches, Z. and Intriligator, M. D., editors, Handbook of Econometrics, volume 2, chapter 13, pages 775–826. Elsevier.

Estrella, A. (2003). Critical values and p values of Bessel process distributions: Computation and application to structural break tests. Econometric Theory, 19(6):1128–1143.

Estrella, A. and Rodrigues, A. P. (1998). Consistent covariance matrix estimation in probit models with autocorrelated errors. Staff Reports 39, Federal Reserve Bank of New York.

Estrella, A., Rodrigues, A. P., and Schich, S. (2003). How stable is the predictive power of the yield curve? Evidence from Germany and the United States. The Review of Economics and Statistics, 85(3):629–644.

Hansen, B. E. (1997). Approximate asymptotic p values for structural-change tests. Journal of Business & Economic Statistics, 15:60–67.

Hansen, B. E. (2000). Testing for structural change in conditional models. Journal of Econometrics, 97(1):93–115.

Harding, D. and Pagan, A. R. (2011). An econometric analysis of some models for constructed binary time series. Journal of Business & Economic Statistics, 29(1):86–95.

Hoyo, J., Llorente, J., and Rivero, C. (2005). Testing for breaks in non-linear and dynamic models. In Econometric Study Group.

Kauppi, H. (2010). Yield-curve based probability forecasts of U.S. recessions: Stability and dynamics. Discussion Papers 57, Aboa Centre for Economics.

Kauppi, H. and Saikkonen, P. (2008). Predicting U.S. recessions with dynamic binary response models. The Review of Economics and Statistics, 90(4):777–791.

Koop, G. and Potter, S. (2000). Nonlinearity, structural breaks or outliers in economic time series? In Barnett, W. A. et al., editors, Nonlinear Econometric Modeling in Time Series Analysis, pages 61–78. Cambridge University Press.

MacKinnon, J. G. (1996). Numerical distribution functions for unit root and cointegration tests. Journal of Applied Econometrics, 11(6):601–618.

McConnell, M. M. and Perez-Quiros, G. (2000). Output fluctuations in the United States: What has changed since the early 1980's? American Economic Review, 90(5):1464–1476.

Newey, W. K. and West, K. D. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55(3):703–708.

Pagan, A. and Harding, D. (2011). Econometric analysis and prediction of recurrent events. NCER Working Paper Series 75, National Centre for Econometric Research.

Prodan, R. (2008). Potential pitfalls in determining multiple structural changes with an application to purchasing power parity. Journal of Business & Economic Statistics, 26:50–65.

Quandt, R. (1958). The estimation of the parameters of a linear regression system obeying two separate regimes. Journal of the American Statistical Association, 53:873–880.

Quandt, R. E. (1960). Tests of the hypothesis that a linear regression system obeys two separate regimes. Journal of the American Statistical Association, 55(290):324–330.

Yang, J. (2001). Structural change tests under regression misspecifications. Economics Letters, 70(3):311–317.