Control charts with predetermined sampling intervals

J. Rodrigues Dias
Department of Mathematics, University of Évora, Évora, Portugal, and
Paulo Infante
Department of Mathematics, University of Évora, Évora, Portugal
Abstract
Purpose – The aim of this study is to investigate a new sampling methodology previously proposed for
systems with a known lifetime distribution: the Predetermined Sampling Intervals (PSI) method.
Design/methodology/approach – The methodology is defined on the basis of the system's cumulative hazard rate, and is compared with other approaches, particularly those whose parameters may change in real time, taking into account current sample information.
Findings – For different lifetime distributions, the results obtained for adjusted average time to signal
(AATS) using a control chart for the sample mean are presented and analysed. They demonstrate the high
degree of statistical performance of this sampling procedure, particularly when used in systems with an
increasing failure rate distribution.
Practical implications – This PSI method is important from a quality and reliability management point
of view.
Originality/value – This methodology involves a process by which sampling instants are obtained at the beginning of the process to be controlled. This new approach also allows for statistical comparison with other sampling schemes, which is a novel feature.
Keywords Control charts, Predetermined sampling intervals, PSI method, Failure rate, Sampling
methods, Statistical quality control
Paper Type Research paper
Introduction
Control charts, introduced by Shewhart around 1930, are powerful tools that enable changes in process parameters to be detected easily as production processes are monitored. The usual practice when using a Shewhart control chart is for samples of fixed size (typically about five observations) to be taken periodically, with the control limit coefficient kept constant (often equal to 3).
In recent years, the design of control charts has followed a new direction. Chart parameters
are allowed to change during the production process rather than being maintained fixed with the
purpose of improving their statistical performance. Such charts can be classified, from an
implementation point of view, into two categories: charts with parameters that are fixed but not
constant for the duration of the monitoring operation, and charts for which at least one of the
parameters is allowed to change in real time, taking into account current sample information.
The latter are called adaptive charts. A great deal of the research carried out on this type of
chart has considered the case of a variable sampling interval, where sample size and the control
limit coefficient are nevertheless fixed. Reynolds et al. (1988) were the first investigators to
consider a variable sampling interval (VSI) control chart for the monitoring of the process mean.
Since then, a large number of researchers have studied the statistical properties of this type of
chart. Both Prabhu et al. (1993) and Costa (1994) independently proposed a chart with a variable
sample size (VSS), and studied its properties and performance; the sampling interval and control
limit coefficient for such a chart are fixed. Prabhu et al. (1994) proposed a combined adaptive
control chart with a variable sampling interval and sample size (VSSI). Costa (1999) proposed a
chart with the three parameters as variable (VP). Rodrigues Dias (1999a,b), meanwhile, proposed
a simple and interesting methodology (NSI – Normal Sampling Intervals method) based on the
density function of the standard normal variable for obtaining different sampling intervals, and
Infante (2004) analysed its statistical properties and compared it with other adaptive sampling
methods. In Rodrigues Dias (2006) and Rodrigues Dias (2007a, b), other results from an economic point of view are presented.
A few researchers have proposed control charts with parameters that are fixed but not
constant for the duration of the monitoring operation. Chart parameter values are obtained at the beginning of the process to be controlled and are not updated during operations. The main purpose is to improve process control properties in cases where the lifetime distribution has an increasing failure rate, rather than the exponential distribution, which is the one most widely considered in previous studies. All such studies dealing with the design of control charts
adopt an economic approach, in which an appropriate cost function is formulated and optimised
with respect to sampling interval, sample size and control limit coefficient. Banerjee and Rahim
(1988), assuming that the system lifetime follows a Weibull distribution, analysed a model in
which sampling interval is a predetermined parameter while sample size and control limit
coefficient remain constant for the duration of the process. They discuss numerical results and
the sensitivity of design to Weibull parameters. Rahim and Banerjee (1993) considered a general
increasing failure rate distribution. The generalized model allows for the termination of a
production cycle at a certain time instant, even if no failure has been detected up until that point.
Parkhideh and Case (1989) considered a model in which all three chart parameters are allowed to
change over time; their design methodology comprises six decision variables. Ohta and Rahim
(1997), assuming a Weibull lifetime distribution, proposed an alternative methodology with a
simplified design for reducing the number of decision variables from six to three, and provided
numerical examples showing how their model achieved improved performance as compared with
that developed by Parkhideh and Case (1989). Tagaras (1997, 1998) presented surveys of
research carried out on static and dynamic methods.
In this paper, a new methodology proposed by Rodrigues Dias (2002) for obtaining sampling instants is considered, defined on the basis of the system's cumulative hazard rate. As in
Banerjee and Rahim (1988) and Rahim and Banerjee (1993), the probability of a process shift in
a sampling interval, given that no shift has occurred before the start of the interval, is constant
for all intervals. However, this new approach allows for statistical comparison with other
sampling schemes, which is a novel feature.
In the next section, this methodology is presented and some of its statistical properties are
analysed. Following this, considering Weibull and Burr XII distributions, results obtained for
adjusted average time to signal (AATS) using X control charts are presented and analysed. The
new method here considered is compared with the periodic scheme and other adaptive sampling
methods for different mean changes. The conclusion is drawn that statistical performance is
better when the probability of the shift being detected decreases and when the average number of
samples analysed in the in-control state decreases. Such results become more marked as failure
rate increases. Finally, some concluding remarks are made.
It is interesting to note the importance of the results here presented (with great reductions in
AATS in some cases), in particular when we compare them with the corresponding results
concerning other methods considered in the literature. On the other hand, it is important to highlight the practical application of the new PSI method here considered. In fact, if the distribution of system lifetime is known, which is usually the case (at least approximately), then at the beginning of the sampling process we can set the instants at which samples are taken from the process. This is not the case when adaptive sampling methods are considered. Obviously, this is a great
advantage of this PSI method in terms of quality and reliability management. Some work is
being done to obtain a simple solution (for ∆H, later considered) in order to minimize the total
expected cost, considering costs associated with sampling, imperfect operation and false alarms.
The new Predetermined Sampling Intervals (PSI) Method
Let us consider T as system lifetime, that is, T is a random variable that represents the time
before the occurrence of an assignable cause. The hazard rate of the system is given by:
h(t) = f(t) / R(t)    (1)
where f(t) is the density function and R(t) is the reliability function, defined as R(t) = 1 − F(t), where F(t) is the distribution function of T. The cumulative hazard rate can be defined as:
H(t) = ∫₀ᵗ h(u)du = − ln R(t)    (2)
Rodrigues Dias (1987), considering H(T) as a random variable, shows that H(T) follows an exponential distribution with mean and variance equal to one. The graphical representation of H(t) is a straight line if the hazard rate is constant (exponential distribution), a concave curve if the hazard rate is decreasing, and a convex curve if it is increasing. This result, which is obvious from the definition, is important for the geometric interpretation of this approach.
According to this approach, sampling instants are determined in such a way that the cumulative hazard rate between any two consecutive inspections is constant, that is, the probability of a process shift during a sampling interval, given that no shift occurs before the start of the interval, is constant for all intervals. Sampling instants t_k satisfy:
H(t_k) = k∆H, k = 1, 2, …
R(t_k) = exp(−k∆H)
t_k = R⁻¹[exp(−k∆H)]; t_0 = 0    (3)
This last expression enables sampling instants to be obtained for any system with a known invertible reliability function. Otherwise, sampling instants can still be obtained if the inverse reliability function can be evaluated by numerical methods.
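To illustrate how expression (3) can be applied when no closed form for the inverse reliability function is available, the following Python sketch (an illustration, not part of the original paper; the Weibull parameters are assumed example values) obtains sampling instants by bisection on a decreasing reliability function:

```python
import math

def psi_instants(R, delta_H, n_samples, t_max=1e7):
    """Sampling instants t_k solving R(t_k) = exp(-k*delta_H), eq. (3),
    found by bisection on a strictly decreasing reliability function R."""
    instants = []
    for k in range(1, n_samples + 1):
        target = math.exp(-k * delta_H)
        lo, hi = 0.0, t_max
        for _ in range(200):  # bisection: shrink [lo, hi] around the root
            mid = 0.5 * (lo + hi)
            if R(mid) > target:
                lo = mid
            else:
                hi = mid
        instants.append(0.5 * (lo + hi))
    return instants

# Illustrative Weibull lifetime (assumed alpha, beta); here the closed form
# t_k = alpha*(k*delta_H)**(1/beta) exists, so the bisection can be checked.
alpha, beta = 1000.0, 2.0
R_weibull = lambda t: math.exp(-((t / alpha) ** beta))
t = psi_instants(R_weibull, delta_H=0.001, n_samples=5)
```

With this increasing-failure-rate example, the computed intervals between consecutive instants shrink, as the method intends.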
Let us now consider G as the time from the process shift until the first sample taken after
the shift. Generically, the expected value of G is given by:
E(G) = ∑_{k=1}^{∞} t_k [F(t_k) − F(t_{k−1})] − E(T)    (4)
In view of (3), we obtain:
F(t_k) − F(t_{k−1}) = exp(−k∆H)[exp(∆H) − 1]
E(G) = ∑_{k=1}^{∞} R⁻¹[exp(−k∆H)] exp(−k∆H)[exp(∆H) − 1] − E(T)    (5)
Let F be the time from the process shift until a sample point falls outside the control limits
(adjusted time to signal). The expected value of F (AATS) is given by (Rodrigues Dias, 2002):
E(F) = [1 − exp(−∆H)] ∑_{k=1}^{∞} t_k exp[−(k − 1)∆H] − E(T) +
       + [1 − exp(−∆H)] [q / (q − exp(−∆H))] ∑_{k=1}^{∞} (t_{k+1} − t_k)[q^k − exp(−k∆H)]    (6)
where tk is given by (3) and q is the probability of the shift not being detected.
Let us now consider N as the number of samples from the start of the process until the
first sample is taken after the shift occurs. We have:
E(N) = ∑_{k=0}^{∞} (k + 1) ∫_{t_k}^{t_{k+1}} f(t)dt = ∑_{k=0}^{∞} R(t_k)    (7)
Considering (3), it can be written:
E(N) = 1 / [1 − exp(−∆H)]    (8)
This sampling scheme, defined above by means of ∆H, enables correspondence to be
established with a periodic sampling scheme with a sampling interval equal to P, in view of the
analogy between period P and constant increase ∆H.
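Since (3) makes R(t_k) = exp(−k∆H), the sum in (7) is a geometric series, which is exactly what (8) states. A quick numerical check in Python (with an assumed illustrative value of ∆H, not from the paper) confirms the identity:

```python
import math

# Under eq. (3), R(t_k) = exp(-k*delta_H), so the sum in eq. (7) is geometric
delta_H = 0.01   # assumed illustrative value
s = sum(math.exp(-k * delta_H) for k in range(0, 20000))
expected = 1.0 / (1.0 - math.exp(-delta_H))   # eq. (8)
```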
Within the context of quality control, this new approach is based on the intuitively simple
idea that less frequent sampling should be carried out when the hazard rate is low and,
conversely, more frequent sampling should be carried out when the hazard rate is high.
According to this method, if the failure rate increases (or decreases) then sampling intervals decrease (or increase), as can be seen in Rodrigues Dias (1987).
The implementation of this method is simple, because it only requires the determination of the parameter ∆H. As sampling instants are scheduled at the beginning of process control, the quality management team knows exactly when to sample, and this knowledge makes personnel and machine management more efficient.
By adopting an economic approach, ∆H could be obtained in such a way that an
appropriate cost function is minimised, as when the sampling interval in the fixed sampling
scheme is obtained.
Comparison of the new method with other procedures
In this paper, it is assumed that the quality characteristic of the production process is normally
distributed, with mean µ0 and standard deviation σ0. At some time in the future, as a result of the
occurrence of an assignable cause, the mean shifts to µ1 = µ0 ± λσ0, λ > 0. When a sample of size n is taken, the sample mean is computed and plotted on a Shewhart X control chart and, if it falls outside the control limits given by µ0 ± 3σ0/√n, then the process is regarded as being out of control. It is assumed here that observations are independent.
In order to compare the performance of different procedures in terms of their AATS, the usual procedure is to match their in-control performances. This can be accomplished by designing the charts in such a way that the average sample size and the average sampling interval, when µ = µ0, equal the fixed sample size and the fixed sampling interval, respectively. With the control limits kept fixed, the average number of false alarms and the average number of items sampled while the process is in control are then matched as well.
Let us now assume that system lifetime T follows a Weibull distribution, with scale
parameter α and shape parameter β. The reliability function is:
  t β 
R ( t ) = exp−   , t ≥ 0
  α  
(9)
According to the new methodology, using (3) and carrying out algebraic simplifications,
the following sampling instants can be obtained:
t k = α (k∆H ) , k = 1, 2, ...
1/ β
(10)
If a Burr XII distribution is considered (Zimmer et al., 1998), then the reliability function is given by:
R(t) = 1 / [1 + (t/S)^c]^v, t ≥ 0; S, c, v > 0    (11)
where c and v are shape parameters and S is a scale parameter.
Using (3), the following expression for sampling instants is obtained:
t_k = S {[1 / exp(−k∆H)]^{1/v} − 1}^{1/c}, k = 1, 2, …    (12)
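As an illustrative sketch, expressions (10) and (12) translate directly into code; the parameter values in the example below are assumed for demonstration only:

```python
import math

def weibull_instants(alpha, beta, dH, n):
    # Eq. (10): t_k = alpha * (k*dH)**(1/beta)
    return [alpha * (k * dH) ** (1.0 / beta) for k in range(1, n + 1)]

def burr_instants(S, c, v, dH, n):
    # Eq. (12), rewritten: t_k = S * (exp(k*dH/v) - 1)**(1/c)
    return [S * math.expm1(k * dH / v) ** (1.0 / c) for k in range(1, n + 1)]

# Assumed example parameters (not prescribed by the paper)
tw = weibull_instants(alpha=1119.8, beta=3.0, dH=0.001, n=10)
tb = burr_instants(S=826.99, c=3.0, v=1.0, dH=0.001, n=10)
```

In both cases the instants can be verified against the defining property (3), R(t_k) = exp(−k∆H).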
Considering different values for the shape parameter of the Weibull distribution and two
different combinations for the parameters of the Burr distribution, let us now compare the AATS
of the PSI method with other sampling schemes. In order to do this, let us consider different
values for mean shift parameter λ.
Comparison with the periodic sampling scheme
∆H is obtained in such a way that, as long as the process is in control, the average number of samples for the PSI method matches the average number of samples for the periodic sampling scheme. Thus:
∆H = − ln[1 − 1 / ∑_{k=0}^{∞} R(kP)]    (13)
where P represents the periodic sampling interval.
Rodrigues Dias (1987), assuming a context of perfect inspections, shows that ∆H can be evaluated in a simpler way, obtaining the approximation:
∆H ≅ P / E(T)    (14)
which is an important relationship, since it enables ∆H to be obtained very simply and does not depend on the lifetime distribution. At the same time, it is an excellent approximation: for example, when P=1 and E(T)=1000, for the lifetime distributions considered in this paper, the relative error is less than 0.00001%; even in the limit situation with P=100 and E(T)=1000, the relative error is less than 0.1%. Rodrigues Dias (1987) studied this approximation and others obtained from it.
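The quality of approximation (14) can be checked numerically. The Python sketch below (illustrative, not from the paper) takes a Weibull lifetime with E(T)=1000 and β=3, as used in this paper, and compares the exact ∆H from (13) with P/E(T):

```python
import math

beta = 3.0
mean_T = 1000.0
alpha = mean_T / math.gamma(1.0 + 1.0 / beta)  # scale so that E(T) = 1000
P = 1.0
R = lambda t: math.exp(-((t / alpha) ** beta))

# Exact dH from eq. (13): sum R(kP) until the terms are negligible
s, k = 0.0, 0
while True:
    term = R(k * P)
    s += term
    if term < 1e-15:
        break
    k += 1

dH_exact = -math.log(1.0 - 1.0 / s)
dH_approx = P / mean_T                          # eq. (14)
rel_err = abs(dH_approx - dH_exact) / dH_exact
```

For these values the relative error is tiny, consistent with the approximation being excellent when P is small relative to E(T).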
Let us consider E(Fe) as AATS using the periodic method (equal sampling intervals) and
E(Fd) as AATS using the PSI method (different sampling intervals). Comparing the two methods
in terms of AATS, we can consider:
Qd = {[E(Fe) − E(Fd)] / E(Fe)} × 100%    (15)
and Qd can thus be taken as a measure of the relative reduction in AATS achieved by using the PSI method.
Results obtained for a range of possible mean shifts and different lifetime distributions are
presented in Table I. For the Weibull distribution, different values are considered for the shape
parameter corresponding to five systems with increasing failure rates (β>1), one system with a
constant failure rate (β=1) and one system with a decreasing failure rate (β<1). For the Burr
distribution, let us consider two cases: in case A1, let us assume that c=3, v=1 and S=826.99,
which corresponds to a system where hazard increases, reaches a maximum, and then decreases,
much the same as the hazard for the lognormal distribution; in case A2, let us assume that
c=4.8737, v=6.15784 and S=1551.09, which is an approximation to the normal distribution. In
all cases it is assumed that E(T)=1000, n=5 and P=1.
It can easily be concluded that the adoption of the PSI method leads to great reductions in
AATS when the system shows an increasing failure rate. These are all the greater as the shape
parameter of the Weibull distribution increases, and in some cases such reductions are sharp.
At the same time, this approach performs best when the probability of the shift being detected is small. Moreover, the AATS obtained using the PSI method is always less than the AATS obtained using the periodic method, which is not the case with any adaptive sampling method.
Lifetime distribution           λ:    0.25    0.50    0.75    1.00    1.25    1.50    1.75    2.00    3.00
Weibull, β=0.8         E(Fd)        140.70   33.54   10.33    4.01    1.89    1.07    0.72    0.58    0.50
                       Qd (%)        -6.1    -2.0    -0.7    -0.3    -0.1    -0.1    -0.1     0.0     0.0
Weibull, β=1.0         E(Fd)        132.66   32.90   10.26    4.00    1.89    1.07    0.72    0.58    0.50
                       Qd (%)         0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0
Weibull, β=2.0         E(Fd)         97.24   27.85    9.32    3.76    1.81    1.04    0.70    0.57    0.49
                       Qd (%)        26.7    15.4     9.2     5.9     4.0     2.9     2.2     1.7     1.5
Weibull, β=3.0         E(Fd)         74.94   23.12    8.12    3.38    1.66    0.96    0.66    0.54    0.47
                       Qd (%)        43.5    29.7    20.9    15.4    11.9     9.5     7.9     6.9     6.2
Weibull, β=4.0         E(Fd)         60.64   19.53    7.07    3.01    1.50    0.88    0.61    0.50    0.44
                       Qd (%)        54.3    40.6    31.1    24.7    20.4    17.2    14.9    13.5    12.5
Weibull, β=5.0         E(Fd)         50.83   16.84    6.22    2.69    1.36    0.81    0.56    0.46    0.40
                       Qd (%)        61.7    48.8    39.4    32.8    28.0    24.5    21.9    20.2    19.1
Weibull, β=7.0         E(Fd)         38.34   13.13    4.98    2.19    1.13    0.68    0.48    0.39    0.35
                       Qd (%)        71.1    60.1    51.5    45.1    40.3    36.6    33.8    32.0    30.8
Burr, c=4.8737,        E(Fd)         53.72   17.47    6.40    2.75    1.39    0.82    0.57    0.47    0.41
v=6.15784 (A2)         Qd (%)        59.5    46.9    37.6    31.2    26.5    23.1    20.6    19.0    17.9
Burr, c=3, v=1 (A1)    E(Fd)         93.55   25.91    8.70    3.54    1.72    0.99    0.68    0.55    0.48
                       Qd (%)        29.5    21.2    15.1    11.1     8.3     6.2     4.6     3.6     2.8
Table I - Values of Qd and E(Fd) for different lifetime distributions and different mean
changes, where n=5.
For the Burr distribution cases, the adoption of the PSI method also leads to great reductions in AATS. Although these reductions are much more marked in case A2, the fact that Qd>0 for systems with lifetime distribution A1 shows that this method is efficient not only in systems with markedly increasing failure rates.
Finally, when system lifetime shows a decreasing failure rate, this method is slower than the periodic method at detecting small shifts, while its efficacy is similar for moderate and large shifts; the PSI method is therefore not recommended in such cases. However, PSI performance will probably differ under other control conditions.
Since PSI method performance as compared with the periodic scheme increases when the
probability of a shift being detected decreases, Qd increases when sample size decreases. Thus, in
processes with high sampling costs, the implementation of the PSI method can lead to significant
cost reductions. AATS reductions for different sample sizes and different mean changes are
presented in Table II for a Weibull lifetime distribution with β=4.
       λ:    0.25    0.50    1.00    1.50    2.00    3.00
n=1         62.2    55.9    43.2    33.7    27.1    19.1
n=3         57.7    46.3    30.4    21.7    16.7    12.7
n=5         54.3    40.6    24.7    17.2    13.5    12.5
n=8         50.3    35.1    20.1    14.0    12.6    12.5
n=15        43.9    27.8    15.0    12.6    12.5    12.5
Table II - Values of Qd for different sample sizes and different mean changes,
for a Weibull lifetime distribution where β=4.
PSI method performance was also analysed for different relations between E(T) and the sampling period P used in the periodic scheme. Considering E(T)=1000 and a Weibull lifetime distribution where β=3, Qd was obtained for different P values and different mean changes; the results are presented in Table III. It may be concluded that the performance of the PSI approach, as compared with the periodic sampling method, increases as P increases, since fewer samples are inspected during the in-control period. Given that processes with very high sampling costs involve large sampling periods, it may be concluded that implementing the PSI method in such situations leads to significant savings.
       λ:    0.25    0.50    1.00    1.50    2.00    3.00
P=0.1       22.6    14.6     7.3     4.5     3.2     2.9
P=1         43.5    29.7    15.4     9.5     6.9     6.2
P=5         62.0    45.9    25.5    16.0    11.6    10.5
P=10        69.8    53.8    31.3    19.9    14.5    13.1
P=20        76.7    61.9    37.9    24.6    18.0    16.3
P=50        84.4    72.0    47.8    32.1    23.8    21.6
P=100       88.8    78.7    55.8    38.7    29.0    26.4
Table III - Values of Qd for different sampling periods and different mean changes,
for a Weibull lifetime distribution where β=3 and E(T)=1000.
Comparison with various adaptive sampling schemes
In this section, the PSI method is compared in terms of AATS with other previously developed
adaptive schemes: the VSI, VSS, VSSI, VP and NSI adaptive methods, each under the same
conditions during the in-control period, that is, the average number of false alarms is the same
for all sampling schemes, as well as the average number of samples and the average number of
items inspected.
With the VSI procedure, using only two sampling intervals, the area within the control
limits is divided by threshold limits into two regions: a small sampling interval d1 is used if a
sample mean falls outside the threshold limits and a large sampling interval d2 is used if a sample
mean falls within these limits (Reynolds et al., 1988).
Let us consider here the most widely used VSS procedure, with two different sample sizes. The size of each sample depends on what is observed in the preceding sample. If the sample mean falls within a region adjacent to the control limits then the next sample size n2 is large, while if the sample mean falls within a region adjacent to µ0 then the next sample size n1 is small (Prabhu et al., 1993; Costa, 1994).
The VSSI procedure alternates a long sampling interval d2 with a small sample size n1 and a short sampling interval d1 with a large sample size n2, the former combination being used when the sample mean is close to the target process mean, that is, when the sample mean falls within the central region (Prabhu et al., 1994).
The variable parameter adaptive scheme (VP) is characterised by two sampling intervals, two sample sizes and two control limit coefficients. X values are plotted on a chart with warning and control limits given, respectively, by µ0 ± Wi σ0/√ni and µ0 ± Li σ0/√ni, with L1 > L0 > L2 and W1 > W2, where i=1 if the last sample is small and i=2 if the last sample is large, and where L0 is usually equal to 3 (Costa, 1999).
Rodrigues Dias (1999a,b) proposed a new adaptive NSI (Normal Sampling Intervals)
method. Let t_i be the current sampling instant and x̄_i be the mean value of the ith sample (falling within the control limits). The next sampling instant is then given by:
t_{i+1} = t_i + k φ(u_i)    (16)
with
u_i = (x̄_i − µ0) / (σ0/√n); t_0 = 0, t_1 = k φ(0), −L < u_i < L    (17)
where φ(u) is the density function of the standard normal distribution, n is the sample size, L is the control limit coefficient and k is a convenient scale constant (Infante and Rodrigues Dias, 2003; Infante, 2004; Rodrigues Dias, 2006; Rodrigues Dias, 2007a, b).
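As a hedged illustration (not code from the paper), the NSI update (16)-(17) reduces to a single expression; the scale constant k=3.5354 used below is the value listed for the NSI scheme in Table IV, while the remaining arguments are assumed example inputs:

```python
import math

def nsi_interval(xbar, mu0, sigma0, n, k):
    """Next NSI sampling interval k*phi(u_i), eqs. (16)-(17): intervals are
    long when the sample mean is near the target and shrink as it drifts
    toward the control limits."""
    u = (xbar - mu0) / (sigma0 / math.sqrt(n))
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return k * phi

# First interval t_1 = k*phi(0); k = 3.5354 as in Table IV
t1 = nsi_interval(0.0, 0.0, 1.0, 5, 3.5354)
```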
In order to compare these methods with the PSI method in terms of AATS, let us again consider Qd, given by (15), where E(Fd) is now the AATS when the adaptive scheme is used. Qd is thus a measure of the relative reduction in AATS, compared with the periodic sampling scheme, when an adaptive method is used, so that all the methods considered can be compared.
The different sampling schemes considered in this paper are presented in Table IV.
Sampling Method    Parameters
VSI (a)      (d1, d2)=(0.015, 2.0); W=0.6665
VSI (b)      (d1, d2)=(0.015, 1.5); W=0.9572
VSI (c)      (d1, d2)=(0.015, 1.2); W=1.3689
VSS (a)      (n1, n2)=(2, 25); W=1.5032
VSS (b)      (n1, n2)=(3, 15); W=1.3757
VSS (c)      (n1, n2)=(4, 9); W=1.2754
VSSI (a)     (d1, d2)=(0.015, 1.394); (n1, n2)=(1, 15); W=1.0633
VSSI (b)     (d1, d2)=(0.015, 1.281); (n1, n2)=(3, 12); W=1.2151
VSSI (c)     (d1, d2)=(0.015, 1.328); (n1, n2)=(4, 8); W=1.1454
VP (a)       (d1, d2)=(0.015, 1.394); (n1, n2)=(1, 15); (L1, L2)=(6.0000, 2.5953); (W1, W2)=(1.0676, 1.0527)
VP (b)       (d1, d2)=(0.015, 1.281); (n1, n2)=(3, 12); (L1, L2)=(6.0000, 2.5078); (W1, W2)=(1.2206, 1.1961)
VP (c)       (d1, d2)=(0.015, 1.328); (n1, n2)=(4, 8); (L1, L2)=(6.0000, 2.5491); (W1, W2)=(1.1504, 1.1309)
NSI          k=3.5354
Table IV – Values of the parameters used for the adaptive methods.
In our opinion, the values used for the adaptive parameters are reasonable, since a wide range of shifts is covered, which suits many practical applications. Moreover, these values have been used to carry out similar comparisons in past studies.
The results obtained for a range of possible mean changes are presented in Table V.
                             λ:  0.125  0.250  0.500  0.750  1.000  1.250  1.500  1.750  2.000  3.000
Adaptive sampling schemes
NSI                               1.8    7.0   24.2   42.7   53.6   51.8   37.5   16.7   -1.1  -15.1
VSI (a)                           3.0   11.6   37.3   58.4   61.2   42.2    5.1  -38.4  -72.6  -98.5
VSI (b)                           2.6   10.0   33.4   55.2   63.3   53.2   27.4   -4.4  -29.9  -49.2
VSI (c)                           1.8    7.2   25.4   45.8   58.0   55.7   39.0   15.4   -4.3  -19.7
VSS (a)                           4.7   34.3   74.2   67.4   42.9    5.6  -37.1  -70.7  -81.4  -18.6
VSS (b)                           2.0   17.6   58.7   66.0   52.9   27.2   -4.7  -28.7  -34.2   -2.3
VSS (c)                           0.7    6.3   30.5   45.5   43.1   29.1   10.0   -4.9   -9.0   -0.2
VSSI (a)                          5.5   31.4   74.1   71.4   46.4   10.2  -29.3  -60.2  -73.2  -47.7
VSSI (b)                          3.6   21.0   66.0   77.7   69.1   51.5   28.4    4.0  -14.5  -27.7
VSSI (c)                          2.8   13.7   49.7   71.4   71.2   56.9   33.1    5.8  -15.8  -32.3
VP (a)                           33.5   62.4   80.6   71.8   46.2    9.7  -29.9  -61.0  -74.1  -49.4
VP (b)                           24.7   52.7   77.7   79.3   69.0   51.2   27.7    2.8  -16.3  -30.1
VP (c)                           12.9   35.1   66.4   76.2   71.7   56.5   32.2    4.3  -18.0  -33.8
Predetermined Sampling Intervals (PSI)
Weibull, β=2.0                   33.9   26.7   15.4    9.2    5.9    4.0    2.9    2.2    1.7    1.5
Weibull, β=3.0                   51.3   43.5   29.7   20.9   15.4   11.9    9.5    7.9    6.9    6.2
Weibull, β=4.0                   62.7   54.3   40.6   31.1   24.7   20.4   17.2   14.9   13.5   12.5
Weibull, β=5.0                   68.2   61.7   48.8   39.4   32.8   28.0   24.5   21.9   20.2   19.1
Weibull, β=7.0                   76.5   71.1   60.1   51.5   45.1   40.3   36.6   33.8   32.0   30.8
Burr, c=4.8737, v=6.15784        65.9   59.5   46.9   37.6   31.2   26.5   23.1   20.6   19.0   17.9
Burr, c=3, v=1                   32.6   29.5   21.2   15.1   11.1    8.3    6.2    4.6    3.6    2.8
Table V – Values of Qd for different adaptive sampling schemes and different
mean changes, where n=5.
The following conclusions may be drawn:
a) The PSI method X chart, as compared with any adaptive chart, enables faster detection of very small shifts (λ<0.25) and large shifts (λ≥2). When the process-failure mechanism follows a Weibull model with β≥3 or the Burr A2 model, considering a shift λ=0.25, the PSI chart is slower only as compared with the VP chart.
b) The PSI method X chart, as compared with the VSI and NSI adaptive sampling interval methods, performs best for λ≤0.25 and λ≥2. When λ=0.5, the PSI method performs better than the other methods for systems with a Weibull lifetime distribution with shape parameter β≥4 or with the Burr A2 distribution. It performs better than VSI(a), VSI(b) and VSI(c) when λ>1.5 (for systems with lifetime distribution A2 and Weibull with β≥4), and better than the NSI method (for systems with lifetime distribution A2 and Weibull with β≥5).
c) The PSI method chart is the only one which shows better performance than the periodic
chart for all mean changes.
d) Based on the results obtained and the conclusions set out above, the PSI approach
appears to provide a valuable alternative to adaptive sampling schemes where a variety
of mean changes are likely to occur during the process.
Conclusions
In this paper, a new methodology, the Predetermined Sampling Intervals (PSI) method, proposed by Rodrigues Dias (2002) for obtaining sampling instants at the beginning of the process to be controlled, for any lifetime distribution with a closed-form reliability function, is considered. This methodology enables a statistical comparison to be carried out with other sampling schemes, something which has not previously been done. On the other hand, it is easy to implement and it
facilitates quality management.
Considering lifetime distributions with different failure rates and a wide range of mean changes, it may be concluded that, as compared with the periodic sampling method, reductions in the adjusted average time to signal become more marked with an increase in failure rate. These reductions increase when the probability of the shift being detected decreases, that is, the new method performs better for small shifts.
On the basis of sensitivity analysis, it may be concluded that the PSI method performs
better when the average number of samples analysed in the in-control state decreases. The
implementation of the method could lead to great savings in the case of processes where the cost
of sampling is high.
With the exception of the case of a system with a decreasing failure rate, it may be
concluded that this method is always better than the periodic sampling scheme, which is not the
case with any adaptive sampling scheme. As compared with adaptive sampling methods, it may
be concluded that the PSI method is faster in terms of the detection of very small shifts and large
shifts. At the same time, because it shows a good level of performance for all mean changes and
because its implementation is potentially very simple (if system lifetime distribution is known
and the inverse of the reliability function can be obtained), the new method provides a valuable
alternative to adaptive sampling schemes, particularly in the case of processes in which a variety
of mean changes are likely to occur.
It is the belief of the authors that the idea on which this method is based could be adapted to other situations, including sampling problems in other contexts. Finally, combining the Normal Sampling Intervals (NSI) method with the Predetermined Sampling Intervals (PSI) method in what could be called a combined adaptive sampling scheme can produce very positive results, as shown by Infante (2004) and Infante and Rodrigues Dias (2004), in which a new sampling method combining the two is proposed and analysed.
References
Banerjee, P. K. and Rahim, M. A. (1988), “Economic design of X control charts under Weibull
shock models”, Technometrics, Vol. 30, pp. 407-414.
Costa, A. (1994), “ X Chart with variable sample size”, Journal of Quality Technology, Vol. 26,
pp. 155-163.
Costa, A. (1999), “ X Charts with variable parameters”, Journal of Quality Technology, Vol.31,
pp. 408-416.
Infante, P. (2004), “Sampling methods in quality control”, PhD Thesis, University of Évora, in
Portuguese.
Infante, P. and Rodrigues Dias, J. (2003), “Robustez de um novo método dinâmico de
amostragem em controlo de qualidade”, in Brito, P., Figueiredo, A., Sousa, F., Teles, P.
and Rosado, F. (Ed.), Literacia e Estatística, Portuguese Statistical Society (SPE)
Editions, pp. 345-360, in Portuguese.
Infante, P. and Rodrigues Dias, J. (2004), “Esquema combinado de amostragem em controlo de
qualidade com intervalos predefinidos adaptáveis”, in Rodrigues, P., Rebelo, E. and
Rosado, F. (Ed.), Estatística com Acaso e Necessidade, Portuguese Statistical Society
(SPE) Editions, pp. 335-347, in Portuguese.
Ohta, H. and Rahim, M. A. (1997), “A dynamic economic model for an X control chart design”,
IIE Transactions, Vol. 29, pp. 481-486.
Parkhideh, B. and Case, K. E. (1989), “The economic design of a dynamic X control chart”, IIE
Transactions, Vol. 21, pp. 313-323.
Prabhu, S. S., Runger, G. C. and Keats, J. B. (1993), “An adaptive sample size X chart”,
International Journal of Production Research, Vol. 31, pp. 2895-2909.
Prabhu, S. S., Montgomery, D. C. and Runger, G. C. (1994), “A combined adaptive sample size
and sampling interval X control scheme”, Journal of Quality Technology, Vol. 27, pp.
74-83.
Rahim, M. A. and Banerjee, P. K. (1993), “A generalized model for the economic design of
X control charts for production systems with increasing failure rate and early
replacement”, Naval Research Logistics, Vol. 40, pp. 787-809.
Reynolds, M. R., Jr., Amin, R. W., Arnold, J. C. and Nachlas, J. A. (1988), “X charts with
variable sampling intervals”, Technometrics, Vol. 30, pp. 181-192.
Rodrigues Dias, J. (1987), “Systems Inspection Policies”, PhD Thesis, University of Évora, in
Portuguese.
Rodrigues Dias, J. (1999a), “Analysis of a new method to obtain different sampling intervals in
statistical quality control”, in Proceedings of the IV Congreso Galego de Estatística e
Investigación de Opéracions, Universidade de Santiago de Compostela, pp. 155-158.
Rodrigues Dias, J. (1999b), “A new method for obtaining different sampling intervals in
statistical quality control”, University of Évora, 18 pp.
Rodrigues Dias, J. (2002), “Sampling in quality control using different and predetermined
intervals: a new approach”, in Proceedings of the Joclad 2002, 10 pp, in Portuguese.
Rodrigues Dias, J. (2006), “A new simple optimal solution for an intuitive sampling method:
an economic-statistical quality control model”, submitted.
Rodrigues Dias, J. (2007a), “Some optimal results for an economic adaptive sampling quality
control model”, in Silva, H. B., Matos, J. (Ed.), Proceedings of JOCLAD 2007, Instituto
de Engenharia do Porto, ISBN: 978-972-8688-50-9, pp. 101-105.
Rodrigues Dias, J. (2007b), “New Results in Economic Statistical Quality Control”. Economic
Quality Control, Vol. 22, No.1, pp. 41-54.
Tagaras, G. (1997), “Economic design of a time-varying and adaptive control charts”, in Rahim,
M. A. and Al-Sultan, K. S. (Ed.), Optimisation in quality control, Kluwer Academic
Publishers, Boston, pp. 145-173.
Tagaras, G. (1998), “A survey of recent developments in the design of adaptive control charts”,
Journal of Quality Technology, Vol. 30, pp. 212-231.
Zimmer, W. J., Keats, J. B. and Wang, F. K. (1998), “The Burr XII distribution in reliability
analysis”, Journal of Quality Technology, Vol. 30, pp. 386-394.
About the authors
J. Rodrigues Dias is Associate Professor with Habilitation in the Department of Mathematics,
University of Évora, Évora, Portugal.
Paulo Infante is Assistant Professor in the Department of Mathematics, University of Évora,
Évora, Portugal.
J. Rodrigues Dias is the corresponding author and can be contacted at: [email protected]