MAINTENANCE ORIENTED OPTIMAL DESIGN OF ACCELERATED
DEGRADATION TESTING
A Dissertation by
Murad Hamada
Master of Science, Wichita State University, 1998
Bachelor of Science, Kansas State University, 1996
Submitted to the Department of Industrial and Manufacturing Engineering
and the faculty of the Graduate School of
Wichita State University
in partial fulfillment of
the requirements for the degree of
Doctor of Philosophy
December 2006
MAINTENANCE ORIENTED OPTIMAL DESIGN OF ACCELERATED
DEGRADATION TESTING
We have examined the final copy of this dissertation for form and content, and recommend that
it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
with a major in Industrial Engineering.
_________________________________
Haitao Liao, Committee Co-chair
_________________________________
Gamal Weheba, Committee Co-chair
We have read this dissertation and recommend its acceptance:
_________________________________
Don Malzahn, Committee Member
_________________________________
Krishna Krishnan, Committee Member
_________________________________
Hamid Lankarani, Committee Member
Accepted for the College of Engineering
_________________________________
Zulma Toro-Ramos, Dean
Accepted for the Graduate School
_________________________________
Susan Kovar, Dean
DEDICATION
To whom I owe it all, my parents Mohammed and Aysheh, for their guidance, support, and
prayers, and to my beautiful wife, Reem, whom I love dearly; without her being there for me
day and night, I would not have been here
To my brothers Nimer, Ashraf, Bandar and Ahmad
for their continuous encouragement
To my beautiful children, Sami and Leen
ACKNOWLEDGEMENTS
I would like to thank my advisors, Dr. Haitao Liao and Dr. Gamal Weheba, for their
thoughtful, patient guidance and support. Special thanks are due to Cessna Aircraft Company,
represented by the leadership team, my managers and my colleagues for their support. I would
also like to extend my gratitude to members of my committee, Dr. Don Malzahn, Dr. Krishna
Krishnan, and Dr. Hamid Lankarani, for their helpful comments and suggestions on all stages of
this dissertation.
ABSTRACT
In this dissertation, the problem of using accelerated degradation testing data for
reliability estimation is studied and demonstrated. Both simulation and analytical approaches have
been investigated. With the simulation approach, which generates a large number of degradation
paths, the reliability of the product can be estimated using an empirical formulation. This approach
is general and offers great flexibility in estimating the reliability of a product regardless of the
functional form of the degradation paths. However, it is time-consuming and sometimes cannot
provide efficient and accurate reliability estimates. Alternatively, the analytical approach may
provide closed-form expressions for reliability estimates for specific degradation process
models. If the model fits, this approach is more accurate and efficient than the simulation
approach. More importantly, when a closed-form solution exists, the optimal design of testing
plans can be formulated and solved with the objective of improving the accuracy of either the
reliability estimate or the associated economic loss.
In addition to the statistical study of reliability estimation, the optimal design of
Accelerated Degradation Testing (ADT) plans has been investigated extensively. The objectives
considered include minimizing the variance of a single reliability estimate for a given maintenance
requirement, minimizing the weighted variances of multiple reliability estimates, and minimizing
the weighted economic loss associated with the reliability estimates considering multiple
maintenance requirements. This work is the first study in the literature on the optimal
design of testing plans that considers maintenance requirements. By determining the optimal
settings of decision variables, such as the stress levels in the ADT experiments, the improvements
in these objectives have been demonstrated using numerical examples. It can be seen that the
novel methodology developed in this dissertation can significantly reduce the uncertainty of
certain indices associated with the reliability estimates. This work is important in the area of
reliability engineering as it indicates an efficient way of conducting accelerated degradation
testing to verify a product's reliability and make management decisions under limited testing
resources.
TABLE OF CONTENTS

Chapter                                                                         Page

1  INTRODUCTION .......................................................... 1
   1.1  Current Industry Situation and Outlook ............................ 1
   1.2  Goals of Dissertation ............................................. 4
   1.3  Outline of Dissertation ........................................... 8

2  LITERATURE REVIEW ..................................................... 9
   2.1  The History of Reliability Engineering ............................ 9
   2.2  The Definition of Reliability Engineering ......................... 13
   2.3  Reliability Modeling, Testing and Design .......................... 15
   2.4  Degradation Modeling .............................................. 19
   2.5  Design of Degradation Test Plans .................................. 35
   2.6  Maintenance Relating to Reliability ............................... 38
   2.7  The Loss Function ................................................. 50

3  DEGRADATION MODELING AND EFFECTS OF TEST PLANS ........................ 54
   3.1  Reliability Estimates Based on Degradation Data ................... 54
   3.2  Reliability Estimation Based on Varying Stress Levels ............. 58
   3.3  Summary and Discussion ............................................ 71

4  MAINTENANCE-ORIENTED OPTIMAL DESIGN OF ADT PLANS ...................... 73
   4.1  Analytical Method for Reliability Estimation ...................... 76
   4.2  Maximum Likelihood Estimates of Model Parameters from Degradation Data .. 78
   4.3  Derivation of the Variance of Reliability Estimate ................ 80
   4.4  Hypothesis Testing ................................................ 85
   4.5  Confidence Intervals of Reliability Function ...................... 86
   4.6  Minimization of Variance of Reliability Estimate .................. 87
   4.7  Minimization of Variance of the Overall Loss ...................... 88
   4.8  Summary and Discussion ............................................ 91

5  APPLICATIONS OF DEGRADATION MODELING .................................. 93
   5.1  MLEs of Model Parameters from Degradation Data – Validation Procedure .. 93
   5.2  Variance of Reliability Estimate – Numerical Example .............. 95
   5.3  Hypothesis Testing – Numerical Example ............................ 101
   5.4  Confidence Intervals of Reliability Function – Numerical Example .. 104
   5.5  Minimization of Variance of Reliability Estimate – Numerical Example .. 106
   5.6  Minimization of Variance of the Overall Loss – Numerical Example .. 111
   5.7  Summary and Discussion ............................................ 122

6  CONCLUSIONS AND FUTURE RESEARCH ....................................... 122
   6.1  Conclusions ....................................................... 123
   6.2  Future Research ................................................... 124

REFERENCES ............................................................... 126

APPENDIXES ............................................................... 133

CURRICULUM VITA .......................................................... 156
LIST OF TABLES

Table                                                                           Page

3.1     Different Distribution/Values Used in the First Degradation Path Simulation ..... 59
3.2     Different Distribution/Values Used in the Second Degradation Path Simulation ... 63
3.3     β̂1 and β̂2 Averages from the First Simulation under Stress 1 .................. 64
3.4     β̂1 and β̂2 Averages from the First Simulation under Stress 2 .................. 65
3.5     β̂1 and β̂2 Averages from the First Simulation under Stress 3 .................. 65
3.6     β̂1 and β̂2 Averages from the Second Simulation under Stress 1 ................. 66
3.7     β̂1 and β̂2 Averages from the Second Simulation under Stress 2 ................. 67
3.8     β̂1 and β̂2 Averages from the Second Simulation under Stress 3 ................. 67
3.9     Summary of β̂1 and β̂2 Values (First Simulation) ............................... 68
3.10    Summary of β̂1 and β̂2 Values (Second Simulation) .............................. 69
5.1(a)  LED Light Intensity Degradation Data at 40 mA .................................. 94
5.1(b)  LED Light Intensity Degradation Data at 35 mA .................................. 94
5.2     Maximum Likelihood Estimates of the Model Parameters ........................... 95
5.3     Reliability Estimates, ∂R̂/∂â0, ∂R̂/∂â1, and ∂R̂/∂σ̂ over Time ................ 99
5.4     Variances of Reliability Estimates over Time ................................... 100
5.5     The Mean and the Variance of the Reliability Estimate over Time ................ 102
5.6     Significance Levels Calculated over Time ....................................... 103
5.7     Upper and Lower Confidence Intervals over Time (Pilot Testing Plan) ............ 104
5.8     Optimal and Non-Optimal Variances of Reliability Estimates over Time ........... 10
5.9     Upper and Lower Confidence Intervals over Time (Optimal Testing Plan) .......... 109
5.10    Loss Function Formulations ..................................................... 111
5.11    Loss Function Results – Non-Optimal Settings ................................... 113
5.12    Loss Function Results – Optimal Settings ....................................... 114
5.13    Variance of the Total Loss Using the Non-Optimal Stress Settings ............... 115
5.14    Variance of the Total Loss Using the Optimal Stress Settings ................... 116
5.15    Loss Function Results: Optimal Settings, Multiple Maintenance Requirements ..... 119
5.16    Variance of the Weighted Sum of the Total Loss Using the Optimal Stress Settings .. 119
LIST OF FIGURES

Figure                                                                          Page

1.1     Traditional Flow of Development vs. Proposed Flow of Development of Maintenance Requirements .. 7
3.1     Example of Five Degradation Paths .............................................. 55
3.2     (a through f) Reliability Estimates R̂(t) for Various Failure Thresholds, Df ... 56
3.3     Reliability Estimates for Different Failure Thresholds ......................... 57
3.4     A Sample of 7 Degradation Paths from the 10th Run in Simulation 1 .............. 60
3.5     (a through j) Parameter Estimates in Simulation 1, where the true parameters β1 ~ NORM(1, 0.5), β2 ~ NORM(1, 0.5), and ε ~ NORM(0, 1) .. 62
3.6     (a and b) Comparison of the Estimation Accuracy for Parameter β̂1 .............. 70
3.7     Comparison of the Estimation Accuracy for Parameter β̂2 ........................ 71
4.1     Relationship between the Reliability Function and Degradation Process .......... 77
4.2     Illustration of the Loss Function over Time .................................... 89
5.1     Reliability Estimates over Time ................................................ 98
5.2     Non-Optimal Variances of Reliability Estimates over Time ....................... 101
5.3     Reliability Estimates and Confidence Intervals over Time (Pilot Testing Plan) .. 105
5.4     Optimal and Non-Optimal Variances of Reliability Estimates over Time ........... 108
5.5     Reliability Estimates and Confidence Intervals over Time (Optimal Testing Plan) .. 110
5.6     Loss Functions at Maintenance Requirement T = 1,000 hrs and Reliability Requirement R0(T) = 0.65 .. 112
5.7     Illustration of the Loss Function Results – Non-Optimal Settings ............... 113
5.8     Illustration of the Loss Function Results – Optimal Settings ................... 114
5.9     Illustration of the Variance of the Loss Function Results – Non-Optimal Settings .. 116
5.10    Illustration of the Variance of the Loss Function Results – Optimal Settings ... 117
5.11    Illustration of the Loss Function Results for the Weighted Sum of Maintenance Requirements – Optimal Settings .. 120
5.12    Illustration of the Variance of the Weighted Sum of the Loss Function Results – Optimal Settings .. 121
I-1     (a through j) Parameter Estimates for Simulation 1, where β1 ~ NORM(1.5, 0.5), β2 ~ NORM(1.5, 0.5), ε ~ NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs .. 138
I-2     (a through j) Parameter Estimates for Simulation 1, where β1 ~ NORM(2, 0.5), β2 ~ NORM(2, 0.5), ε ~ NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs .. 140
I-3     (a through j) Parameter Estimates for Simulation 2, where β1 ~ NORM(1.25, 0.5), β2 ~ NORM(1.25, 0.5), ε ~ NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs .. 142
I-4     (a through j) Parameter Estimates for Simulation 2, where β1 ~ NORM(1.55, 0.5), β2 ~ NORM(1.55, 0.5), ε ~ NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs .. 144
I-5     (a through j) Parameter Estimates for Simulation 2, where β1 ~ NORM(2.25, 0.5), β2 ~ NORM(2.25, 0.5), ε ~ NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs .. 146
II.1    Loss Functions at Maintenance Requirement T = 1,500 hrs and Reliability Requirement R0(T) = 0.55 .. 147
II.2    Loss Functions at Maintenance Requirement T = 2,000 hrs and Reliability Requirement R0(T) = 0.47 .. 148
II.3    Loss Functions at Maintenance Requirement T = 2,500 hrs and Reliability Requirement R0(T) = 0.41 .. 149
II.4    Loss Functions at Maintenance Requirement T = 3,000 hrs and Reliability Requirement R0(T) = 0.36 .. 150
II.5    Loss Functions at Maintenance Requirement T = 3,500 hrs and Reliability Requirement R0(T) = 0.31 .. 151
II.6    Loss Functions at Maintenance Requirement T = 4,000 hrs and Reliability Requirement R0(T) = 0.28 .. 152
II.7    Loss Functions at Maintenance Requirement T = 4,500 hrs and Reliability Requirement R0(T) = 0.25 .. 153
II.8    Loss Functions at Maintenance Requirement T = 5,000 hrs and Reliability Requirement R0(T) = 0.23 .. 154
LIST OF ABBREVIATIONS

ADT         Accelerated Degradation Testing
CAA         Civil Aviation Authority
cdf         Cumulative Distribution Function
DOD         Department of Defense
DOE         Design of Experiments
FAA         Federal Aviation Administration
FMEA        Failure Mode and Effect Analysis
HCI         Hot Carrier Induced
LED         Light Emitting Diode
LINEX       Linear Exponential
LSL         Lower Specification Limit
MIL-HDBK    Military Handbook
MIL-STD     Military Standard
MFOP        Maintenance Free Operating Period
MLE         Maximum Likelihood Estimate
MTBF        Mean Time between Failures
MTTF        Mean Time to Failure
NPRALS      Non-parametric Regression Accelerated Life-Stress
OEM         Original Equipment Manufacturer
p.d.f.      Probability Density Function
USL         Upper Specification Limit
CHAPTER 1
INTRODUCTION
1.1 Current Industry Situation and Outlook
The demand for air travel both within the U.S. and between the U.S. and the rest of the
world declined sharply in 2002, forcing both U.S. carriers and other carriers around the world to
cut scheduled flights. Worldwide travel demand, both local and international, also declined
during much of 2002, forcing world airlines to also adjust schedules downward. Business and
corporate aviation continues to be a bright spot for the general aviation industry. Increased
growth in fractional ownership companies and corporate flying has continued to expand the
market for business and corporate jet aircraft, though at reduced annual numbers.
Current forecasts assume that business use of general aviation aircraft will expand at a
more rapid pace than that for personal/sport use. The business/corporate side of general aviation
should continue to expand based on the security restrictions imposed on flying by commercial
aircraft.
Security concerns for corporate staff combined with convenience and schedule
flexibility have made fractional and corporate aircraft ownership, as well as on-demand charter
flights, viable alternatives to travel on commercial flights (FAA Aerospace Forecast, Fiscal Years
2003-2014).
Although the future ahead of general aviation manufacturers is bright, it has its own
challenges. The manufacturers are facing strong pressures to develop newer, higher technology
aircraft in record time. In addition, there are competitive pressures to improve productivity,
reliability, and overall quality.
With more and more OEMs getting into the world of
manufacturing general aviation airplanes, the emphasis is being placed more and more on
improving the reliability and quality of the airplanes that are being manufactured.
The business environment in the twenty-first century will be characterized by intense
global competition. To survive and grow in such a competitive environment, manufacturers will
continuously be challenged to design, develop, test and manufacture higher reliability products in
ever-shorter time periods, and at the same time, at a lower cost.
Manufacturers are realizing that in order for them to stay competitive in today’s market
and to actually survive the ever-increasing competitive market of manufacturing general aviation
aircraft, they need to control the cost of ownership or the life cycle cost of the aircraft, which is
driven by the entire life of the aircraft, from conception to disposal.
This realization is driving manufacturers to focus on understanding the performance and
the degradation characteristics of their products by spending more time and money modeling the
performance of the products and analyzing the results. Manufacturers are performing more tests
on computers, in labs, and on test benches during the conceptual stages of the design, rather than
when the product (aircraft) is actually operational following customer sale/delivery.
Aircraft designs tend to stretch technology beyond its limits. The operators want to carry
higher payloads using less fuel often at higher speeds, and, of course, they want to keep their
costs to a minimum. Depending on how one measures this, air travel can be shown to be the
safest mode of transport. Unfortunately, it is also probably the least survivable and the most
newsworthy when something does go wrong. This means that a great deal of emphasis is placed
on safety and reliability of the aircraft design.
The study of reliability is important on a global basis. Industries are following more
advanced reliability-engineering practices to stay ahead of world competition. Manufacturers
across all industries need to be knowledgeable about and institute reliability engineering
programs, Kececioglu (1991).
Over the past 30 years, aircraft, electronic, and automotive
industries have been increasingly applying results taken from reliability studies to optimize
maintenance and replacement decisions, Inozu and Perakis (1991).
The overall objective of design for reliability is to ensure that the final product will be
both economically reliable and reliably safe. Economically reliable means that the product’s
observed reliability has been established with consideration of life cycle costs. These costs
include the acquisition cost, scheduled maintenance cost, unscheduled maintenance cost and the
cost of failures. Reliably safe requires designing sufficient reliability into the product to ensure
that the time to failure is within an acceptable limit, Ebeling (1997).
However, designing sufficient reliability into the product to ensure that the time to failure
is within an acceptable limit is not always assured and can have devastating effects. As an
example, the electricity grid exhibited a serious reliability problem in the blackout of the
northeastern United States in 2003. Also, NASA missions have experienced well-publicized
failures in recent years that are attributable to poor reliability.
Current reliability technology, based on statistics and probability theory, relies largely on
life data information. A large part of efforts in reliability activities was spent to obtain and
interpret the data. Reliability prediction and demonstration are two typical examples, O'Connor
(1993).
Consequently, reliability cannot be largely and proactively built in, and critical
failure modes cannot be efficiently eliminated at the design stage due to the lack of such
relationships, Blanks (1994). In current practice, it is not uncommon that, to achieve a certain
level of reliability, numerous investigations and studies are made on product design, functional
prototypes, etc. As a result, the time to market of new products is usually too long, and because
of that, reliability testing gets overlooked.
Traditionally, reliability has focused on the stages of design, implementation
(development of manufacturing, assembly, and distribution processes), operations
(manufacturing, assembly, and distribution processes), and maintenance. The quantitative
methods that have been developed to increase reliability in these stages have had a huge impact
on system reliability and mission success. But even higher levels of reliability are possible if the
scope of analysis is widened to include all stages of the life cycle and additional interactions
between system components.
Reliability, design, and manufacturing engineers have developed a large array of methods
and tools for producing reliable products. These methods and tools have played a critical role in
the achievement of high reliability at low cost and in a short time. However, due to changes in
way that new product-concepts are being developed and brought to market, the usual methods
and tools used for design-for-reliability are unable to fully meet the challenges in the new
business environment, Meeker and Hamada (1995).
1.2 Goals of Dissertation
There is a relationship between degradation testing and the prediction of maintenance
intervals through time-to-failure distributions. Therefore, since degradation testing can be used
to estimate time-to-failure distributions, and time-to-failure distributions can be used in
establishing preventive maintenance intervals, it is important to define the relationship between
degradation testing and the time-to-failure distribution.
The difficulty with any kind of maintenance activity, when considering the economic life
of a product, is that the OEMs and the operators have little quantitative assurance that a
maintenance strategy will be more economical prior to implementation. In other
words, the OEMs do not know when parts will fail, how long the failures will last, or what type
of failure will occur before the actual failure occurs. Therefore, the OEMs do not know what
spare parts to keep in stock to support their customers.
Operators, on the other hand, are not sure when a component will fail, how often it
will fail, for how long, or what type of failure it will be until the actual failure occurs.
Therefore, operators do not know whether they should replace a component prior to the end of its
life and forfeit its potential remaining life, or not replace the component and risk the potential of a
catastrophic failure. It should be noted that the most common routine maintenance is visual
inspection of the aircraft prior to scheduled departure (known as a walk around) by pilots and
mechanics to ensure there are no obvious problems (e.g., cracks, caps and hatches not locked and
shut, or even pieces missing).
In many cases however, OEMs and operators must follow a maintenance program that is
mandated and approved by a regulatory authority such as the Federal Aviation Administration
(FAA), Civil Aviation Authority (CAA), and so on. These regulatory agencies pay no attention
to the economically reliable side of maintenance but focus on what they feel is the reliably safe side
of maintenance.
There are many factors unaddressed by basic reliability analysis and unrelated to time to
failure that strongly influence economic risk performance. These factors are both internal and
external to a particular product or process. These factors include the flexibility to reconfigure a
process or alter the products in a way that minimizes the current failure of a system. Other
factors that influence economic risk performance include the availability of stored spare parts
that may be consumed or sold into the product market when a particular component is needed as
well as the availability and costs of maintenance resources.
Consideration of these factors led to the conclusion that reliability modeling can only be
one element of an analysis designed to assess the economic risk of failure and incorporate it into
operational and maintenance decision-making. Therefore, the discussion in this dissertation will
take the economic risk of failure into account but will only address the impact of reliability
degradation on maintenance from a single-element point of view.
This dissertation will focus on how OEMs and operators can benefit from the use of
degradation testing in predicting an optimal preventive maintenance and scheduled
maintenance frequency for components that have new designs and that have not been field tested
to determine the optimal maintenance frequency. Is performing a traditional test to examine the
wear-out characteristics of components the most cost-effective way? Or can this information be
determined using the results from degradation testing?
Figure 1.1 represents the traditional flow of development of maintenance requirements
where typically maintenance requirements are an afterthought.
In the traditional flow of
development, manufacturers typically start out by coming up with a new idea
for a new product. A team of engineers is then tasked with determining what the reliability
requirements are for that new product. After the reliability requirements are set, testing is
conducted to make sure that the component is capable of achieving the reliability requirement.
At that point, the maintenance requirements are established based on the existing reliability
requirements.
The proposed approach that is used in this dissertation, also shown in Figure 1.1, is
different in the sense that the maintenance requirements are set by the customers after the new
product idea is established. The reliability requirements are then established to make sure that
the maintenance requirements are met and finally testing is conducted to make sure that the
required reliability is substantiated.
Traditional flow of development: New Product → Set Reliability Requirements → Reliability Testing → Customer Maintenance Requirement

Proposed flow of development: New Product → Customer Maintenance Requirement → Set Reliability Requirements → Reliability Testing

Figure 1.1: Traditional Flow of Development vs. Proposed Flow of Development of
Maintenance Requirements
This dissertation will attempt to prove that the maintenance requirements can be used to
drive the optimal design of degradation testing plans, which is the reverse of what has been done
in the literature. We will also attempt to prove that this procedure has an economic advantage over
traditional methods by only performing the tests that are required to maintain the product rather
than testing the product beyond its economic life.
1.3 Outline of Dissertation
The remainder of this dissertation will be structured as follows: Chapter 2 covers a brief
history of what reliability is, how it started and how it has transformed over the years to become
the science that we know today. Chapter 2 also provides a detailed review of the research
relevant to the topic of this dissertation.
Chapter 3 describes a degradation model that is used in this dissertation to demonstrate
the impact of stress and duration of the test on the reliability estimate. Finally, Chapter 3
presents a simulation study on the application of the degradation model with varying stress levels
and varying test durations.
Chapter 4 outlines the contributions of this dissertation by introducing the proposed
model and the derivations of that model and other relevant measures of reliability estimation.
Chapter 4 introduces the notation of the Wiener process and three different testing plans.
Chapter 5 illustrates several examples to demonstrate the application of the theory of the
proposed model as well as the applicability of the proposed testing plans. Finally, Chapter 6
draws conclusions from this dissertation and provides recommendations on how this research can
be extended.
CHAPTER 2
LITERATURE REVIEW
2.1 The History of Reliability Engineering
A quantitative and formal approach to reliability grew out of the demands of modern
technology, and particularly out of the experience in World War II with complex military
systems, Barlow and Proschan (1967). During that time, electronic tubes were by far the most
unreliable component used in electronic systems. The high failure rates of the systems resulted
in reduced availability and increased costs. This led to various studies and ad hoc groups whose
purpose was to identify ways that their reliability, and the reliability of the systems in which they
operated, could be improved.
The 1950s saw the advent of reliability engineering as an independent discipline. It
became clear that the emerging discipline of Reliability Engineering was actually an
interdisciplinary science that required combining concepts from mathematics, physics,
chemistry, and engineering to achieve the goal of higher reliability.
One group during that time period recommended that there needed to be better reliability
data collected from the field, better components developed, quantitative reliability requirements
established, reliability verified by tests before full-scale production and a permanent committee
established to guide the reliability discipline, Denson (1998).
In response to these recommendations, the US Department of Defense and the electronics
industry established a committee, called the Advisory Group on Reliability of Electronic
Equipment. The charter of this committee was to identify actions that could be taken to provide
more reliable electronic equipment. The committee summarized the investigations on how to
improve reliability in a report that was the basis for the development of the MIL-STD-781
standard, Reliability Qualification and Production Approval Tests.
One major task at that time was the identification of root causes of field failures and
determination of corrective actions. This task required engineers to have a strong background in
the physics and the mechanics of the components and systems being studied. The needs of
modern technology, especially the complex systems used in military and space programs, led to
a quantitative approach based on mathematical modeling and analysis, Blischke and Murthy
(2003).
This era resulted in a variety of efforts to improve reliability through data collection and
design, establishment of reliability programs and organizations, symposiums and technical
journals devoted to reliability and quality engineering, statistical techniques development such as
using the Weibull distribution and US military handbooks that provided guidelines on the
development and application of electronic components and equipment.
Since component-level reliability analysis conventions have their background in the
military and space industries, the components used in these applications were clearly safety
critical. Therefore, it was necessary to create qualification criteria and reliability prediction
methods and hence, another major task was the specification of quantitative reliability
requirements. This task led to the desire to have the means of measuring reliability so that the
probability of success over mission time could be estimated. Also, it was this task that promoted
the marriage between reliability engineering and the science of probability and statistics.
Reliability engineering has become a very fast growing and important field in consumer
and capital goods industries, in space and defense industries, and in NASA and the Department
of Defense (DOD) agencies, Kececioglu (1991).
This growth has been motivated by the
increased complexity and sophistication of systems, public awareness, and profit considerations
resulting from the high cost of failures, their repairs, and warranty programs, Ebeling (1997).
In the 1960s, many statistical methods were developed and refined for reliability testing
and estimation. In addition to academic success, this decade also witnessed the success of
reliability applications in NASA programs and private industries. Numerous unprecedented
stories, such as the Apollo Program and the launching of the first commercial satellite, in which
reliability engineering was considered an essential part, opened a new era for Reliability
Engineering.
The most famous handbook, MIL-HDBK-217, Reliability Prediction of Electronic
Equipment, was introduced during this time period. Once issued, this handbook quickly became
the standard by which reliability predictions were performed. The MIL-HDBK-217 standard has
since been revised and updated several times and is still considered a major reference in the
military and in industry. It played an irreplaceable role in the advancement of reliability engineering.
The 1970s brought a much better understanding of the physics of failures. Both military
and consumer industries had suffered from low reliability of silicon-integrated devices since their
invention. With the increase of the integration density of the silicon-integrated devices, high
failure rates became more painful. This challenge attracted many efforts and resulted in an
important realization that failure analysis was a powerful tool for the improvement of reliability.
Consequently, the view shared by many professionals, that the reliability discipline was
only a subject dealing with the application of probability and statistics, started to change.
Meanwhile, a new view started to become more widely accepted that the reliability discipline
was an engineering subject utilizing many tools including probability and statistics. This change
laid a milestone in the development of reliability engineering and led to significant improvement
of many electronic products, especially since the major focus of that time period was on
electronic components.
Following the introduction of the military handbooks in the early 1970s, the handbook
models were updated on average every 6 years, and the models became overly pessimistic.
Therefore, in 1994, the U.S. Military Specifications and Standards Reform initiative led to the
cancellation of many military specifications and standards, Perry (1994). Since then and due to
the abandoning of the military handbooks that had been providing clear guidelines and
requirements on acceptable reliability levels, reliability in the marketplace today has been
driven by the consumers of electronic components.
The trends of the 1970s with regard to the physics of failure continued into the early 1980s.
Failure modes and mechanisms of many products were investigated extensively and became
well understood; as a result, reliability testing and screening based on the understanding of failure
mechanisms became much more meaningful and efficient. In addition, due to effective work
in failure analysis, design improvement, and reliability assurance, the reliability of many products
reached a new level at which many problems that used to be encountered before were mitigated.
During the 1980s, software-reliability problems started to become prominent because failures
and malfunctions of hardware due to software defects were increasingly common. To address
these issues, reliability engineers and scientists developed a large array of methods and models
for software reliability measurement and prediction, such as fault-tolerant design and best
practices.
Software continued to be the main driver of system reliability developments throughout
the 1990s. This trend will certainly remain in the early years of the 21st century. Nevertheless,
efforts spent on software reliability fall far behind those spent on hardware reliability. The
evolution of new technology is driving consumers to replace existing products with new ones in
shorter periods of time and in many cases prior to the end of life of the old products. This
consumer behavior has shifted the focus of manufacturers from the reliability of products to the
shortening of product development cycles and has caused some major issues, especially in areas
where high reliability is required.
In the area of hardware reliability, the continuous increase of product reliability has
raised challenges to reliability practitioners. The time and resources needed to test products of
high reliability continued to increase and became unaffordable in the 1990s. This challenge
inspired researchers to develop more efficient methods for evaluating reliability, and it also made
them focus on defining mission success by the requirements identified by the customers rather
than as the manufacturers say it should be.
2.2 The Definition of Reliability Engineering
Immediately following its emergence as a technical discipline just after World
War I, reliability was used to compare the operational safety of airplanes. Reliability was then
measured as the number of accidents per hour of flight time, Rausand and Hoyland (2004). Reliability
engineering was considered equivalent to applied probability theory and statistics. Nowadays,
reliability research has been clearly sub-divided into smaller entities and research topics may be
divided by the methodology that applies: mathematics-based approaches have a long history,
especially in reliability analysis of large systems, while physics-based approaches are being
introduced especially in component-level studies.
Reliability theory is a body of ideas, mathematical models, and methods directed to
predict, estimate, understand, and optimize the lifespan distribution of systems and their
components, Barlow and Proschan (1967). The term reliability is defined as the probability that
a component or a system will perform a required function for a given period of time when used
under stated operating conditions, Ebeling (1997). This definition has its roots in the military
standard MIL-STD-721C and is still widely used by many authors and researchers today.
ISO, however, has a different and more general definition of what the term reliability
means. ISO describes reliability as the ability of an item to perform a required function, under
given environmental and operational conditions and for a stated period of time (ISO 8402).
Other measures include the Maintenance Free Operating Period (MFOP), which allows a period of
operation during which an item will be able to carry out all its assigned missions, without the
operator being restricted in any way due to system faults or limitations, with the minimum of
maintenance, Kumar, Knezevic, and Crocker (1999).
There are also as many ways to measure reliability as there are different ways to define it.
The most widely used measures of reliability are the Mean Time to Failure (MTTF) and the Mean
Time between Failures (MTBF), each of which is the mean, or expected value, of a probability
distribution. One key issue regarding reliability measurement/estimation is that no one can
calculate the exact period of time for which a component will work without a failure. The only
thing that any model can do is to calculate the probability of the component working without
failure for a period of time.
Many systems are designed to be operated for specified periods of time. If there is no
benefit to having the system last longer than what the customers need, then it may be a waste of
resources to over-design the system’s capabilities. But in many situations, this is not the case.
For example, the Mars rovers Spirit and Opportunity were scheduled for 90-day missions, and
those missions lasted much longer than that. In this case, the scientific benefits of this extra
system life have been tremendous.
The basic assumption is that when randomly selecting a large sample from a very large
population, the sample will possess the same properties and behavior as the total population. It is
important to understand that such a description explains what happens when a large number of
components are put into operation. The resulting reliability calculations have no meaning when
applied to a single item, Goldberg (1981).
To express this relationship mathematically, the continuous random variable T is defined
as the time to failure of a system or a component where T ≥ 0. Then the reliability can be
expressed as:
F(t) = Pr{T < t}

R(t) = 1 − F(t) = Pr{T ≥ t}

where F(t) is defined as the probability that a failure occurs before time t, Ebeling (1997).
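To make the definition concrete, the following Python snippet (an illustrative sketch, not part of the original text; the exponential distribution and failure rate are assumed purely for the example) computes R(t) in closed form and checks it against an empirical estimate from simulated failure times:

    import numpy as np

    # Illustrative example: exponential time-to-failure with an assumed rate lam.
    lam = 1e-3   # hypothetical failure rate (failures per hour)
    t = 1000.0   # mission time in hours

    # Closed-form reliability: R(t) = 1 - F(t) = exp(-lam * t)
    R_closed = np.exp(-lam * t)

    # Empirical check: fraction of simulated units surviving past t
    rng = np.random.default_rng(0)
    lifetimes = rng.exponential(scale=1.0 / lam, size=100_000)
    R_empirical = np.mean(lifetimes >= t)

    print(f"R({t:.0f} h): closed-form = {R_closed:.4f}, empirical = {R_empirical:.4f}")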
There are several other definitions for reliability estimation in literature, but in general
most of the scientists and researchers describe reliability in terms of performance without failure
under stated conditions for a specified period of time.
2.3 Reliability Modeling, Testing and Design
The design of system components, including architecture, hardware, and software, has
been studied extensively. Lifetime modeling, physics of failure modeling, and other physical
modeling techniques can be used to create more reliable systems. Obviously, the more accurate
and precise the models are, the better reliability can be modeled. Because of the complexity
of many of the systems being designed today in domains such as medical devices, aerospace, and
consumer electronics, system-level models are increasingly needed to predict the many failure
modes that can occur.
In industry, the concept of design reliability is not very widely accepted and
implemented. Many large-scale companies have established their design reliability procedures
and documented best practices. In most cases, however, design reliability is evaluated and
estimated without considering manufacturing and maintenance constraints. In other words, the
interactions between reliability design, manufacturability and maintainability have not been
widely understood.
Reliability is affected by activities throughout the system life cycle. At the earlier stages,
the greatest effects come from organizational policies and decisions stemming from the
organization’s understanding of the market, customer’s needs, and the capabilities of the supply
chain. While the data that goes into these decisions is improving, it is still the source of
significant error. The design stages are governed by physical modeling techniques and
probabilistic risk assessment. Reliability interventions at the latter stages depend on
maintenance models and intervention strategies.
At design stage, reliability is usually expected to be very high through the use of a
number of tools, such as design of experiments (DOE), and failure mode and effect analysis
(FMEA). The high reliability observed in design, however, usually cannot be seen in the field
because it frequently cannot be fully assured in manufacturing due to the lack of
manufacturability of the design. Process variation and its effects on reliability are not considered
in current design reliability programs. Reactively, reliability screening is sometimes employed
to precipitate early failures so as to improve field reliability.
In recent years, design of experiments has been extended and Taguchi's robust
design methodology has also been modified to proactively assure reliability, Yang and Yang
(1998). In order to find the root causes of failures, reliability and product engineers have spent
more effort investigating the mechanisms of product failures, leading to considerable
improvement in product reliability, Ebel (1998).
Traditional life tests that record only time-to-failure are frequently conducted to assess
reliability of products. When using life testing experiments, most units have a very long life
under the normal use conditions. Therefore, by the time the experiment is completed and an
estimate of reliability is obtained, the results will be outdated. This test method is also of low
efficiency in the evaluation of high-reliability products because few or no failures could be
observed at low test stress levels. Traditional reliability analysis based on few or no life data
would generate misleading conclusions. Therefore, there is a strong need for developing more
efficient methods for reliability analysis. To overcome these issues, accelerated life testing was
introduced, Meeker and Escobar (1998).
Reliability growth testing and accelerated life testing have been used to test a design over
the long term in a short time period. Reliability growth testing involves subjecting the design to
the extremes expected during normal use to assess how the design responds. Accelerated life
testing involves subjecting the design to stresses in excess of normal use to simulate a longer
time period than is feasible during testing. Highly accelerated life testing and screening have
been spreading throughout the industry over the past few years and in spite of the difficulties in
evaluating products that have high reliability, these test methods have proven more useful than
the traditional life tests.
Meeker and Escobar (1993) briefly reviewed the statistical issues of accelerated life tests.
They gave some guidelines and recommendations of how to plan and improve accelerated test
procedures.
In addition, they gave an overview of available and developing methods for
planning and analyzing accelerated test programs. In accelerated life testing, a certain number of
units are subjected to stresses that are higher than the normal conditions. Because of the higher
values of stress, the units should fail sooner than usual. The short lives of the units under high
stress conditions are then used to estimate the expected life of the units under the normal use
conditions. This is done by postulating some relationships between failure or degradation and
levels of stress, then extrapolating from high stress to normal stress.
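As a brief sketch of this extrapolation step (the Arrhenius model and all parameter values below are assumptions chosen for illustration, not results from this dissertation), life observed at an elevated temperature can be mapped back to the use condition through an acceleration factor:

    import numpy as np

    # Illustrative Arrhenius acceleration model, a common choice for
    # temperature-driven failure mechanisms; parameter values are hypothetical.
    k_B = 8.617e-5                      # Boltzmann constant (eV/K)
    E_a = 0.7                           # assumed activation energy (eV)
    T_use, T_stress = 323.15, 398.15    # use and stress temperatures in K (50 C, 125 C)

    # Acceleration factor: how much faster the unit ages at the stress temperature
    AF = np.exp((E_a / k_B) * (1.0 / T_use - 1.0 / T_stress))

    # A hypothetical mean life of 500 hours observed under stress translates to:
    mean_life_use = 500.0 * AF
    print(f"Acceleration factor = {AF:.1f}; projected mean life at use = {mean_life_use:.0f} h")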
Given that products are more frequently being designed with higher reliability and
developed in a shorter amount of time, it is often not possible to test new designs to failure under
normal operating conditions. In some cases, it is possible to infer the reliability behavior of unfailed test samples with only the accumulated test time information and assumptions about the
distribution.
Another option in this situation is the use of degradation analysis. Degradation analysis
involves the measurement and extrapolation of degradation or performance data that can be
directly related to the presumed failure of the product in question. Many failure mechanisms can
be directly linked to the degradation of part of the product and degradation analysis allows the
user to extrapolate to an assumed failure time based on the measurements of degradation or
performance over time.
2.4 Degradation Modeling
Component degradation modeling, developed to understand the aging process of a
component, can have many useful applications and potential advantages. Two common
assumptions typically made when performing degradation modeling are the following:
A parameter can be measured over time that drifts monotonically, either upwards or
downwards, towards a specified failure threshold, Df; when it reaches this failure threshold,
failure occurs. The drift is linear over time with a slope, or rate of degradation, that depends
on the relevant stress the unit is operating under and also on the (random) characteristics of the unit
being measured.
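A minimal simulation sketch of these two assumptions (all parameter values below are hypothetical) gives each unit its own random linear drift rate and records when each path first crosses the threshold Df:

    import numpy as np

    rng = np.random.default_rng(42)

    D_f = 10.0             # assumed failure threshold
    n_units = 5            # number of simulated units
    t = np.arange(0, 101)  # inspection times

    # Each unit drifts linearly with its own random slope (unit-to-unit
    # variability), plus small measurement noise around the path.
    slopes = rng.normal(loc=0.15, scale=0.04, size=n_units)
    for i, b in enumerate(slopes):
        path = b * t + rng.normal(0.0, 0.2, size=t.size)
        crossed = np.nonzero(path >= D_f)[0]
        t_fail = t[crossed[0]] if crossed.size else None
        print(f"unit {i}: slope = {b:.3f}, first crossing of D_f at t = {t_fail}")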
To predict reliability based on degradation modeling, we need to clearly understand the
failure mechanism. In order to do that, there must be an underlying model to characterize
propagation of the degradation path. The model parameters or coefficients can then be thought
of as random variables for individuals within the population. For many parts and systems, the
specific failure mechanisms are unknown or there are many competing failure mechanisms.
One of the most important advantages of degradation modeling is that multiple
degradation measurements can be recorded on each individual unit within the population.
Therefore, it is not necessary to wait for a failure to occur to obtain data and start analyzing it. In
comparison to traditional life testing, even under accelerated conditions, degradation
modeling will yield more useful data sooner. For highly reliable parts or systems, where
failures are rare or take a long time to occur, this may be the only possible approach.
There are many examples of failure mechanisms where there is some defined degradation
path and reliability prediction based on degradation modeling is a viable approach for reliability
prediction. For example, fatigue crack growth, as an indication of structural degradation, has
long been studied in degradation analysis. Bogdanoff (1984) presented an extensive analysis of
fatigue crack growth data and investigated the impact of initial crack length on the modeling of
fatigue crack growth. Lu and Meeker (1993) analyzed crack propagation data they identified the
crack length, as the degradation measure and the degradation path was expressed as a function of
initial crack length, number of stress cycles, and other empirically determined model parameters.
The failure criterion was defined as a crack reaching a pre-determined length.
In some cases, it is possible to directly measure the degradation over time, as with the
crack size or even the wear of brake pads. In other cases, direct measurement of degradation
might not be possible without invasive or destructive measurement techniques that would
directly affect the subsequent performance of the product. In such cases, the degradation of the
product can be estimated through the measurement of certain performance characteristics, such
as using resistance to gauge the degradation of a dielectric material.
In either case, however, it is necessary to be able to define a level of degradation or
performance at which a failure is said to have occurred. With this failure level of performance
defined, it is a relatively simple matter to use basic mathematical models to extrapolate the
performance measurements over time to the point where the failure is said to occur. Once these
have been determined, it is merely a matter of analyzing the extrapolated failure times like
conventional time-to-failure data.
Once the level of failure is defined, the degradation for multiple units over time needs to
be measured. As with conventional reliability data, the amount of certainty in the results is
directly related to the number of units being evaluated. The performance or degradation of these
units needs to be measured over time, either continuously or at predetermined intervals. Once
this information has been recorded, the next task is to extrapolate the performance measurements
to the defined failure level in order to estimate the failure time using either linear, exponential,
power or logarithmic model to perform the extrapolation. These models have the following
forms:

Linear: y = a·t + b

Exponential: y = b·e^(a·t)

Power: y = b·t^a

Logarithmic: y = a·ln(t) + b
where y represents the performance, t represents time, and a and b are model parameters to be
solved for. Once the model parameters a and b are estimated for each sample i, a time t can be
extrapolated that corresponds to the defined level of failure y. The computed t can then be
used as the time-to-failure. As with any sort of extrapolation, it is important not to extrapolate
too far beyond the actual range of data in order to avoid modeling errors.
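For instance, the following sketch (with hypothetical measurements and the linear model) fits a and b by least squares and solves for the time at which the fitted path reaches the defined failure level:

    import numpy as np

    # Hypothetical degradation measurements for one unit: performance y over time t
    t = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
    y = np.array([0.1, 2.2, 3.9, 6.1, 8.0])

    y_fail = 10.0  # defined level of degradation at which failure is said to occur

    # Fit the linear model y = a*t + b by least squares
    a, b = np.polyfit(t, y, deg=1)

    # Extrapolate: solve y_fail = a*t + b for the pseudo failure time
    t_fail = (y_fail - b) / a
    print(f"a = {a:.4f}, b = {b:.3f}, extrapolated time-to-failure = {t_fail:.0f}")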
For many products, it is possible to measure performance characteristics and record
degradation data over time. These measurements may contain fairly credible, accurate and
useful information about product reliability. Reliability evaluation based on such information
can be efficient. For some high-reliability products, however, the rate of performance
degradation under use conditions is so slow that it is difficult to make useful inferences about
reliability in a reasonable amount of time. In such cases, products are often subjected to elevated
stress levels in order to accelerate the degradation process. Physical relationships or regression
models can then be used to describe degradation rate as a function of the stress and extrapolate
an estimate of the time-to-failure distribution at design stress.
2.4.1 Natural Degradation
In the literature, there are several papers studying the relationship between natural
performance degradation and reliability, and using degradation data to evaluate reliability of
products. Nelson (1990) proposed a method of using degradation data to estimate the life
distribution and established the relationship between breakdown voltage and exposure time at
design temperature.
Meeker and Hamada (1995) summarized the statistical tools and concepts that are used
for the rapid development and evaluation of high-reliability products. They introduced a generic
model for degradation failure and the relationship between degradation and failure was
discussed. They pointed out that degradation data can provide more information than is
available from traditional time-to-failure data, especially in applications where few or no failures
are anticipated.
Yang and Xue (1996) used an independent-increment random process to model degradation
paths and estimate reliability of components and systems based on degradation data. They
introduced state trees to conduct system reliability analysis and the analysis of variance and
design of experiment techniques to assess the criticality of product parameters or components to
performance degradation.
Tseng, Hamada and Chiao (1995) conducted a case study of using degradation data to
improve fluorescent lamp reliability. Lu, Park and Yang (1997) proposed a model that utilizes
random regression coefficients and a non-constant standard deviation to analyze linear
degradation data. They estimated the model parameters by the maximum likelihood method and
considered a special type of degradation called hot carrier induced (HCI) degradation, which
develops gradually and changes the performance of metal-oxide semiconductors.
Bagdonavicius and Nikulin (2000) considered the estimation of degradation models with
and without covariates. They also used the maximum likelihood method to estimate model
parameters. The maximum-likelihood estimator (MLE) is a method of parameter estimation
involving the maximization of the likelihood function. The best parameter estimates are
obtained by determining the parameter values that maximize the value of the likelihood function
for a particular data set. The method of maximum likelihood provides one solution to the
problem of estimation. The maximum-likelihood estimator has several desirable properties: the
estimator is efficient in the sense that there is no estimator with smaller variance, the estimator
approaches the true population parameter as the number of observations increases and the
distribution of deviations of the estimator from the population parameter approaches a normal
distribution for large numbers of observations.
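A minimal numerical sketch of maximum likelihood estimation (using synthetic normal data, not the dissertation's degradation data) maximizes the log-likelihood by minimizing its negative:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    data = rng.normal(loc=5.0, scale=2.0, size=500)  # synthetic observations

    def neg_log_likelihood(params):
        # Negative log-likelihood of a normal model N(mu, sigma^2);
        # sigma is parameterized as exp(log_sigma) to keep it positive.
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        n = data.size
        return (0.5 * n * np.log(2.0 * np.pi * sigma**2)
                + np.sum((data - mu) ** 2) / (2.0 * sigma**2))

    res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")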
Gertsbackh and Kordonskiy (1969) presented a simple linear model with random
intercept and random slope, and showed that the associated time-to-failure follows the Bernstein
distribution. Tseng, Hamada and Chiao (1995) used a simple linear regression to model
luminosity degradation, which is a quality characteristic of fluorescent lamps. Seber and Wild (1989)
provided theories and applications of nonlinear regression approach including nonlinear models
with dependent errors and useful classes of growth, compartment and multiphase models.
Many researchers have developed different perspectives and modeling approaches to
reliability prediction. Sethuraman and Young (1986) developed a cumulative damage threshold
crossing model. Under this model, an item consists of a large number of components that suffer
damage at regular moments of time. Failure occurs as soon as the maximum cumulative damage
to some component crosses a certain threshold. Time-to-failure data are used to estimate the
model parameters.
Lu, Lu and Kolarik (2001) developed a technique for predicting system performance
reliability in real-time considering multiple failure modes. The technique includes on-line
multivariate monitoring and forecasting of performance measures and conditional performance
reliability estimates. The performance measures are treated as a multivariate time series and a
state-space approach is used to model the multivariate time series. The predicted mean vectors
and covariance matrix of performance measures are used for the assessment of system reliability.
The technique provides a means to forecast and evaluate the performance degradation of an
individual system in a dynamic environment in real-time.
Abdel-Hameed (1992) introduced failure models of devices subject to deterioration
(degradation). He discussed the properties of different classes of life distributions (such as
increasing failure rate, increasing failure rate average, etc.) under various threshold distributions.
Pascual and Meeker (1999) used a random fatigue-limit model to describe the variation in fatigue
life and the unit-to-unit variation in the fatigue limit. Bogdanoff (1984) presented an analysis of
fatigue crack growth data and investigated the impact of a fixed versus a variable initial crack
length on the modeling of fatigue crack growth.
Wu and Shao (1999) studied the asymptotic properties of the least squares estimators for
the degradation measurements under a nonlinear mixed-effects model. Lu and Meeker (1993)
used a regression model for the analysis of degradation data at a fixed level of stress (i.e., no
acceleration) to estimate a time-to-failure distribution. They assumed that, for each unit in a
random sample of n units, degradation measurements, x, will be taken at pre-specified times t1, t2, ..., ts, generally until x crosses a pre-specified critical level D or until time ts, whichever comes first. The sample degradation path of the ith unit at time tj is given by

xij = f(tj; φ, θi) + εij,

where εij is the measurement error.
Degradation path models often include terms that are nonlinear in the parameters. The
parameters are divided into two types: fixed-effects parameters φ that are common for all units,
and random-effects parameters θ i representing individual unit characteristics. Lu and Meeker
(1993) assumed that the random-effects parameters are characterized by a multivariate
distribution function G(⋅) which may depend on some unknown parameters that must be
estimated from the data. If this were extended to accelerated degradation testing, then one or more of the fixed-effects parameters or the parameters of G(⋅) would depend on stress.
Lu and Meeker (1993) reviewed accelerated degradation testing, modeling, planning and
analysis. They developed statistical methods for using degradation measures to estimate a time-to-failure distribution for a broad class of degradation models. They used a ‘two-stage’ method
to estimate the mixed-effect path model parameters. They applied Monte Carlo simulation to
compute an estimate of the distribution function of the time-to-failure and they suggested
bootstrap methods for setting confidence intervals. These can be used with a much more general
and practical class of degradation models. They assumed that the random effect parameter
follows a multivariate normal distribution.
The Monte Carlo simulation varies the measured quantities randomly in ways that
represent the experimental uncertainties, and the calculations leading to the final answer are
repeated with these artificial quantities.
This is done repeatedly, and the variances and
covariances in the resulting final answers are calculated. In cases where the final answer might
depend on non-linear fits to the input data, Monte Carlo simulation may be the only feasible way
of determining the uncertainty in the final result.
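As a schematic illustration of this procedure (a sketch under assumed values, not code from the cited works), the following Python fragment propagates measurement uncertainty through a non-linear fit by repeatedly perturbing the inputs and refitting:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical measured degradation data with known measurement error
    t = np.linspace(0, 10, 20)
    y_meas = 2.0 * np.exp(0.15 * t) + np.random.default_rng(0).normal(0, 0.2, t.size)
    sigma_meas = 0.2

    def model(t, a, b):
        return a * np.exp(b * t)

    # Repeatedly perturb the measurements and refit the non-linear model
    estimates = []
    rng = np.random.default_rng(1)
    for _ in range(1000):
        y_art = y_meas + rng.normal(0, sigma_meas, t.size)
        params, _ = curve_fit(model, t, y_art, p0=[1.0, 0.1])
        estimates.append(params)

    # Variances and covariances of the fitted parameters across replications
    print(np.cov(np.array(estimates).T))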
The variance and the covariance are obtained using the Fisher information matrix as described in Nelson (1982). The following is an illustration of the Fisher information matrix concept. Given a statistical model {fX(x|θ)} of a random vector X, the Fisher information matrix, I, is the variance of the score function U; that is,

I = Var(U).

If there is only one parameter involved, then I is simply called the Fisher information, or the information of {fX(x|θ)}. If {fX(x|θ)} belongs to the exponential family, then I = E(Uᵀ U). Furthermore, with some regularity conditions imposed, I = −E(∂U/∂θ).
As an example, the normal distribution, N(μ, σ²), belongs to the exponential family and its log-likelihood function ℓ(θ|x) is

ℓ(θ|x) = −(1/2) ln(2πσ²) − (x − μ)² / (2σ²),

where θ = (μ, σ²). The score function U(θ) is then given by

U(θ) = ( (x − μ)/σ² ,  (x − μ)²/(2σ⁴) − 1/(2σ²) ).

Taking the derivative with respect to θ, we have

∂U/∂θ = [ −1/σ²          −(x − μ)/σ⁴
          −(x − μ)/σ⁴     1/(2σ⁴) − (x − μ)²/σ⁶ ].

Therefore, the Fisher information matrix I is

I = −E(∂U/∂θ) = [ 1/σ²    0
                  0        1/(2σ⁴) ].
In a linear regression model with constant variance σ², it can be shown that the Fisher information matrix I is

I = (1/σ²) XᵀX,

where X is the design matrix of the regression model.
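As a small numerical illustration (with a hypothetical design matrix), the following Python sketch forms this Fisher information matrix for a simple linear degradation model and inverts it to obtain the asymptotic variances of the coefficient estimators:

    import numpy as np

    # Hypothetical design: intercept plus inspection times t = 1..10
    t = np.arange(1, 11)
    X = np.column_stack([np.ones_like(t), t]).astype(float)
    sigma2 = 1.0  # assumed constant error variance

    # Fisher information matrix I = (1/sigma^2) * X^T X
    I = X.T @ X / sigma2

    # Asymptotic covariance matrix of the estimators is the inverse of I
    cov = np.linalg.inv(I)
    print(np.sqrt(np.diag(cov)))  # standard errors of intercept and slope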
Lu, Park, and Yang (1997) proposed a linear degradation path model with random
intercept and slope that are normally distributed with correlation. Moreover, an error term with
non-constant standard deviation is also included for the replicated degradation data. The basic
idea under degradation path models is to limit the sample space of the degradation process and
assume all sample functions admit the same functional form but with different parameters. This
assumption however is quite restrictive when the patterns of some sample degradation paths are
inconsistent with the others, possibly due to inherent variations within an individual unit.
In field applications, this situation is naturally encountered due to intensive or even slight
variation of environmental stresses. Based on optimal fuzzy clustering, Wu and Tsai (2000) treated those special units (paths) as “outliers” and assigned smaller weights to them while estimating model parameters. Their procedure provides a much tighter confidence interval for the time-to-failure distribution than the original two-stage method, which does not treat those units as “outliers”.
Linear degradation can be used in some simple wear processes (e.g., aircraft tire wear), whereas in some engineering applications the degradation measure may also be represented by a linear function of time after a simple transformation. For example, the percentage loss of transconductance of MOSFET devices, which is related to failure, has a linear relationship with the logarithm of time t. Among linear regression models, the simplest model assumes the intercept of the model to be a constant.
Zuo, Jiang and Yam (1999) introduced three approaches for reliability modeling of
continuous state devices. The three approaches were based on a random process model, the
general path model and the multiple linear regression models, respectively. They also proposed
a mixture model that can be used to model both catastrophic failures and degradation failures. In
general, degradation path models are usually nonlinear functions of time and sometimes
linearization is infeasible.
2.4.2 Accelerated Degradation Testing (ADT) Models
Nelson (1990) pointed out that accelerated degradation tests have some advantages over
accelerated life tests. In accelerated degradation tests, we can analyze the data more quickly, and
therefore can draw faster conclusions. He briefly surveyed the degradation behavior of various
products and materials subject to degradation, accelerated degradation testing models and
inference procedures, and also presented basic accelerated degradation models under constant
stress.
Meeker and Escobar (1998) reviewed some available references that describe the
applications of ADT models. They proposed mathematical models to analyze ADT data and
suggested methods for estimating failure time distributions, and their confidence intervals.
Meeker and Luvalle (1995) presented the results of a humidity-accelerated life test of conductive anodic filament failures on printed circuit boards. They derived a
They derived a
statistical distribution for the time to failure using approximate chemical models. They also used
the maximum likelihood method to estimate the model parameters. Tang and Chang (1995)
predicted the reliability of power supplies by examining the degradation process of DC output.
The degradation process was accelerated by subjecting the power supplies to elevated
temperature. They used regression methods to study the relationship between the parameters and
the associated stress levels.
Tyoskin and Sonkina (1997) developed a model for predicting reliability based on small
sample size by using degradation measurements. Their study suggested a way for assessing
reliability in situations where products are very expensive and thus only few samples can be
tested. Tseng and Yu (1997) proposed a termination rule for degradation experiments, which is
used to determine how long the degradation test of light emitting diodes should last. Yang and
Yang (1998) developed a method for rapidly evaluating reliability using degradation data. The
rapid evaluation is achieved by accelerating degradation process and tightening failure criteria.
The method was successfully used to assess the reliability of infrared light emitting diodes.
Lu and Meeker (1993) used a regression model for the analysis of degradation data to
estimate a time-to-failure distribution. They assumed that, for each unit in a random sample of n
units, degradation measurements, D will be taken at pre-specified times: t1, t2,.., ts, generally until
D crosses a pre-specified critical level or failure threshold Df or until time ts, whichever comes
first. Degradation path models often include terms that are nonlinear in the parameters that are
being estimated. These parameters can be divided into two types: the fixed-effects parameters
that are common for all units, and the random-effects parameters representing individual unit
characteristics. Lu and Meeker (1993) used Monte Carlo simulations to compute an estimate of
the distribution function for the time-to-failure and they suggested parametric bootstrap methods
for setting confidence intervals.
Lu and Meeker (1993) proposed the following general assumptions about the manner in which degradation testing and measurement should be conducted: sample units are randomly selected from a population or production process; random measurement errors are independent across time and units; sample units are tested in a particular homogeneous environment (e.g., the same constant temperature); and the measurement (or inspection) times are pre-specified, the same across all the test units, and may or may not be equally spaced in time. The last assumption is used for constructing confidence intervals for the time-to-failure distribution via the bootstrap simulation method.
Elsayed (1996) provided a brief review of the degradation models and classified ADT models into two types: physics-statistics-based models and statistics-based models. Furthermore, he classified statistics-based models into two categories: parametric models and nonparametric models. The following review will proceed in accordance with this classification.
2.4.2.1 Physics-Statistics Based Models
Nelson (1981) analyzed the degradation of an insulation material at different stress levels.
He assumed that the temperature is the only acceleration factor that determines the degradation
profile over time and presented a relationship among the absolute temperature, the median
breakdown voltage and time.
He then estimated the lifetime distribution based on the
performance degradation model.
Carey and Koenig (1991) used the degradation data from a temperature-accelerated life test to estimate the reliability of a family of integrated logic devices, a component of a generation of submarine cables, at the normal operating condition. They assumed that the maximum propagation time delay (maximum degradation) and the absolute temperature are related by the Arrhenius law. They utilized the maximum likelihood estimator to estimate the parameters of the Arrhenius relation, which is used for predicting the maximum degradation at the normal operating condition.
Whitmore and Schenkelberg (1997) modeled the accelerated degradation process by a
Brownian motion with a time scale transformation. The model incorporated the Arrhenius law
for high stress testing. Inference methods for the model parameters based on ADT data were
presented. Meeker, Escobar and Lu (1998) described accelerated degradation models that relate
to physical failure mechanisms. They modeled the acceleration using models that describe the
effect that temperature has on the rate of a failure-causing chemical reaction. They used the Arrhenius model to describe this relationship and used the maximum likelihood method to
estimate model parameters. Confidence intervals for time-to-failure distribution were obtained
by simulation-based methods.
The Arrhenius model is one of the earliest and most successful acceleration models used in accelerated life testing to establish a relationship between absolute temperature and reliability (time to failure). It was originally developed by the Swedish chemist Svante Arrhenius to describe the relationship between temperature and the rates of chemical reactions. This empirically based model takes the form

tf = A × exp{ ΔH / (k × T) },

with T denoting the temperature measured in degrees Kelvin (273.16 + degrees Celsius) at the point when the failure process takes place, and k denoting Boltzmann's constant (8.617 × 10⁻⁵ eV/K). The constant A is a scaling factor that drops out when calculating acceleration factors, and ΔH denotes the activation energy, which is the critical parameter in the model. The value of ΔH depends on the failure mechanism and the materials involved, and typically ranges from 0.3 or 0.4 up to 1.5, or even higher. Acceleration factors between two temperatures increase exponentially as ΔH increases.
The acceleration factor between a higher temperature T2 and a lower temperature T1 is given by

AF = exp{ (ΔH / k) × [ (1/T1) − (1/T2) ] }.

Using the value of k given above, this can be written in terms of T in degrees Celsius as

AF = exp{ ΔH × 11605 × [ 1/(T1 + 273.16) − 1/(T2 + 273.16) ] }.
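As a quick numerical check of the Celsius form (with hypothetical temperatures and activation energy), the acceleration factor can be computed as follows:

    import math

    def acceleration_factor(delta_h, t1_c, t2_c):
        """Arrhenius acceleration factor between a lower temperature t1_c
        and a higher temperature t2_c, both in degrees Celsius."""
        return math.exp(delta_h * 11605.0 *
                        (1.0 / (t1_c + 273.16) - 1.0 / (t2_c + 273.16)))

    # Hypothetical example: Delta H = 0.7 eV, use condition 25 C, test at 85 C
    print(acceleration_factor(0.7, 25.0, 85.0))  # roughly 96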
⎩
Chang (1993) presented a generalized Eyring model to describe the dependence of
performance aging on accelerated stresses in a power supply. The tests considered involve
multiple measurements in a two-way design. The mean time to failure of the power supply at the
normal operating condition was estimated.
2.4.2.2 Statistics Based Models
The statistics-based models consist of parametric models and non-parametric models.
The parametric models assume that the degradation path of a unit follows a specific functional
form with random parameters, or the degradation measure follows an assumed distribution with
time-dependent parameters.
Moreover, these models assume that there is only a scaling
transformation of the degradation paths or the degradation measure distributions at different
stress levels but their forms remain unchanged. The non-parametric models relax the assumption
about the form of the degradation paths or distribution of degradation and establish them in a
non-parametric way. The non-parametric models have greater flexibility in contrast to the
parametric regression models, but they may not have explicit physical meaning.
2.4.2.3 Parametric Models
Based on the degradation paths, Crk (2000) extended the methodology of the general
degradation path approach to the development of the multivariate, multiple regression analysis of
function parameters with respect to applied stresses. Parametric models have two components: a parametric distribution for the lifetime of a unit, and an assumed relationship between one or more of the parameters of the parametric distribution and the stress.
Tang and Chang (1995) modeled nondestructive accelerated degradation data as a collection of stochastic processes for which the parameters depend on the stress levels. The model adopts the independent increment concept by assuming the incremental degradation within a time interval Δt is a random variable with mean μiΔt and variance σi²Δt. The constants μi and σi² are the parameters under the ith stress level, which are linked with the applied stresses by a linear regression approach. The actual degradation path is the summation of these increments, whose first passage time to a threshold level D follows the Birnbaum-Saunders distribution when D >> μiΔt. If the independent increment is s-normally distributed, then an inverse Gaussian distribution is used, as it is a statistically more accurate model, as discussed by Bhattacharyya and Fries (1982) and Desmond (1986).
Whitmore and Schenkelberg (1997) found that the Wiener process with linear drift is very useful as a degradation model. They assumed that the degradation Y(t), conditioned on the position τ, can be modeled as a Gaussian process with a mean component m(t) and a diffusion parameter σ², i.e.,

Y(t) = m(t; β) + σ × W(t),

where W(t) is the standard Wiener process with the following properties:
• W(0) = 0, where W(t) ∈ (−∞, ∞)
• stationary and independent increments, i.e., W(ti + Δt) − W(ti) ~ N(0, Δt)
• W(t) ~ N(0, t)
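As an illustrative sketch (the drift, diffusion and threshold values below are assumptions, not those of the cited study), a Wiener degradation path with linear drift can be simulated by accumulating independent normal increments:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n_steps = 0.1, 100
    beta, sigma = 1.2, 0.5      # assumed drift and diffusion parameters
    d_f = 10.0                  # assumed failure threshold

    # Y(t) = beta * t + sigma * W(t), built from N(0, dt) increments
    t = np.arange(1, n_steps + 1) * dt
    increments = beta * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_steps)
    path = np.cumsum(increments)

    # First passage time to the threshold, if the path crosses it
    crossed = np.nonzero(path >= d_f)[0]
    print(t[crossed[0]] if crossed.size else "no crossing in this run")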
Among the approaches to degradation modeling by Brownian motion, Doksum and Hoyland (1992) discussed ADT models for the variable-stress case and introduced a flexible class of models based on the concept of accumulated decay. The variable stresses considered are simple-step-stress, multiple-step-stress and progressive stress. The proposed model is a time-transformed Brownian motion with drift, which assumes that a certain deterministic stress level imposes the same scaling effect on the drift and Brownian motion terms.
Pieper, Domine and Kurth (1997) proposed a different model for the first passage time distribution under the simple-step-stress condition. They also discussed an interesting extension in which the time change point is a random variable. However, the expression for the first passage probability density in this case cannot be obtained in an explicit form.
2.4.2.4 Non-parametric Models
Shiau and Lin (1999) presented a Non-parametric Regression Accelerated Life-stress
(NPRALS) model for some groups of accelerated degradation curves.
They assumed that
various stress levels only influence the degradation rate, but not the shape of the degradation
curve.
An algorithm was proposed to estimate the components of NPRALS, such as the
acceleration factor. By investigating the relationship between the acceleration factors and the
stress levels, the mean time to failure estimate of the product under the normal condition is
obtained.
The non-parametric regression models bear the degradation-path-free property in contrast
to the parametric models. They relax the specification of the form of the degradation path and perform much better than parametric models when the assumed path function is far from the truth in the parametric modeling. However, non-parametric models require more data to obtain the same accuracy as that of the parametric models, assuming that the parametric models are correct. In
other words, the efficiency of non-parametric models is relatively low. Moreover, the time
scaling assumption is important since it is required for predicting the form of degradation curve
under normal operating conditions, but this assumption is rather weak. Moreover, to utilize the
non-parametric regression model, the span of degradation curve under normal condition has to be
covered by that of the accelerated degradation data after time scaling, and ADT must be
conducted until test units fail.
Another non-parametric approach is to utilize the degradation hazard function. Eghbali and Elsayed (1999) developed this approach, based on a new concept called the degradation hazard function, for the analysis of degradation data.
It is assumed that the degradation hazard function can be written in terms of two separable
functions: time and degradation measure. The stress covariates can be incorporated in the model
so that this approach can be used to model the results of the ADT. The effect of the degradation
phenomenon on the product performance can be expressed by a random variable, x, called
degradation measure.
2.5 Design of Degradation Test Plans
Designing a degradation experiment involves a degradation model, some constraints, an optimization criterion, such as the asymptotic variance of the mean time to failure (MTTF) estimate, the variance of the reliability estimate or the variance of the estimated 100pth percentile of the lifetime distribution, and some decision variables. Even though the optimization problem may be feasible, the obtained optimum test plan cannot correct the bias of a degradation model; therefore, a test plan is inappropriate if the degradation model is not accurate. There are two major types of degradation test plans:
2.5.1 Constant Stress Degradation Test Plans
Tseng and Yu (1997) proposed an intuitively appealing method for choosing the time to
terminate a degradation test by analyzing the asymptotic convergence property of MTTF
estimate but the termination rule is approximate since no constraint has been considered.
Boulanger and Escobar (1994) presented a method to determine the stress levels, sample size at
each level and observation times. However, their method is discussed under a predetermined
termination time.
Park and Yum (1997) developed an optimal accelerated degradation test plan under the
assumptions of destructive testing and the simple constant rate relationship between the stress
and the product performance. By solving a constrained nonlinear programming problem, the
stress levels, the proportion of test units allocated to each stress level and the inspection times are
determined such that the asymptotic variance of the MLE of the MTTF at the normal operating
conditions is minimized. MLEs are utilized to obtain the model parameters. The model is
applied to the accelerated degradation test data of Light Emitting Diode (LED) subject to
accelerated temperature and current to predict reliability at normal operating conditions.
Yu and Tseng (1999) designed an optimal degradation experiment by considering the
constraint of the total experimental cost. They made the assumption that the degradation path can be transformed to a simple form. The optimal decision variables, namely the sample size, inspection frequency and termination time, are determined by minimizing the variance of the estimated 100pth percentile of the lifetime distribution. As an application, Yu and Chiao (2002) designed an
optimal degradation experiment for improving LED reliability.
Wu and Chang (2002) investigated the nonlinear mixed-effects model and proposed a step-by-step enumeration algorithm to determine the optimal sample size, inspection frequency and termination time under a cost constraint. The variance of the estimator of a percentile of the
failure time distribution was minimized. They also studied the sensitivity of the optimal plan to
the changes of model parameters and cost. Their investigation showed that the optimal solution
can be slightly sensitive to the changes in the values of model parameters.
2.5.2 Variable Stress Degradation Test Plans
Since conducting a constant-stress accelerated degradation test can be too costly, it may not be very applicable for assessing the lifetime distribution of a newly developed product, because typically only a few test units are available. To overcome this difficulty, a variable-stress (e.g., step-stress) accelerated degradation test can be carried out. Tseng and
Wen (2000) provided an illustration of statistical inference procedure for a step-stress ADT using
a case study of LEDs.
In the literature, however, variable-stress degradation test plans are rare. Tang, Yang and
Xie (2004) investigated planning of an optimum step-stress accelerated degradation test
experiment where the test stress is increased in steps from a lower stress to a higher stress
throughout the test. Based on the maximum likelihood theory, the asymptotic variance of the MTTF estimate at the normal operating conditions is then derived and used as a constraint instead of an objective function. The optimum testing plan, which minimizes the testing cost, gives the optimal sample size, number of inspections at each stress level and number of total inspections.
2.6 Maintenance Relating to Reliability
Many new technological systems, especially those using advanced and innovative
technologies to achieve improved economic performance, include components that are designed
with data collected under conditions not directly representative of the operating parameters
prevalent in the new design and its operation. The performance of these components may be
critical to achieving the design lifetime at the expected operating conditions, and, thereby,
reaping the anticipated economic rewards of the new design.
Premature degradation of the critical components can incur considerable cost due to
repair or premature replacement and a general loss of confidence in the new system; these can
easily result in a net economic loss. This possibility of an economic loss can dictate specific maintenance activities on these components. The aim of these maintenance activities is to facilitate operational decisions that will mitigate economic loss; for example, whether the unit should continue to operate as planned or whether modifications in the unit or its operation should be introduced.
Maintenance is often overlooked as an area to improve reliability. As a matter of fact, maintenance in many cases is viewed as doing more harm to reliability performance than good, especially because it is believed that maintenance activities will take the component back to the start of the bathtub curve cycle, where the infant mortality rate is higher. So what is the optimal time to perform maintenance on a part to prevent it from entering the wear-out phase of the bathtub curve?
Moreover, the current generations of operation strategies, such as lean, and Six-Sigma are
forcing organizations to reduce inventory levels to enable faster response to changing demands
in the marketplace. The effect of these production strategies, from a maintenance viewpoint, is
that system downtime is more costly to the organization, and therefore companies strive to maximize the uptime of their equipment and minimize their inventory levels. At the same time, the number of products that are returned to retailers has increased significantly over the past 20 years, leading to higher customer service and reverse logistics costs. The costs of inadequate reliability are tremendous. Bhote and Bhote (2004) reported that warranty costs for the big three automakers alone reach $6 billion per year. So while the goal of minimizing inventory levels may seem worthwhile, the warranty costs associated with it can be high.
The frequency and duration of unplanned failures are critical drivers of the economic
performance of a process. Nevertheless, the relationship between reliability and the economic
risk of failures is seldom straightforward. Even fairly sophisticated models generally provide
limited insights into the bottom-line economic performance of the product or process.
Nachlas and Cassady (1999) emphasized that there is an optimal balance between
preventive maintenance actions and corrective maintenance actions.
Increased preventive maintenance will reduce the number of corrective maintenance actions required; however, the growing cost of an aggressive preventive maintenance policy will at some point begin to outweigh the savings in corrective maintenance actions. The evaluation of preventive maintenance and
corrective maintenance policies must be considered simultaneously in order to maximize overall
system performance.
The impact of maintenance actions has been addressed in a large volume of failure-time-based maintenance models, varying from perfect maintenance or replacement (Sahin and Polatoglu, 1998) to imperfect maintenance (Wang and Pham, 1996). However, the existing maintenance models for degrading systems have been limited to perfect maintenance actions (Buerenguer, Grall, Dieulle and Roussignol, 2003), which restore the system to an as-good-as-new state without considering the residual damage over time. Furthermore, most, if not all, of the
maintenance models consider a cost objective function which includes different cost elements
such as inspection cost, maintenance cost, failure cost and replacement cost. However, the cost
minimization objectives are usually sensitive to the uncertainty in cost estimates, whereas the periods of time a maintained system spends in operation (uptime) and under maintenance (downtime) can be measured more accurately.
In today’s environment many OEMs and operators use the knowledge of past component
or system behavior and the components critical to continued system operation as the basis for
specifying preventive maintenance intervals. However, some optimization algorithms have been
used to determine the optimal intervals at which preventive maintenance should be performed.
2.6.1 Preventive Maintenance
Preventive maintenance consists of scheduled actions performed on components,
subsystems or systems before failure occurs to promote continuous system operation. Preventive
maintenance is used to increase the mean time to failure by performing maintenance activities at
regular intervals. It includes periodic repair or replacement of worn components, oil changes,
lubrication, partial and complete overhauls at specified periods, and more. However, because of
the probabilistic distribution of failure timing, these intervals will always be too frequent for
some failures and too infrequent for others. Newer electronics and communications technologies enable and enhance strategies such as preventive (predictive) maintenance. Rather than waiting for a failure to occur, models can be used to identify trends that indicate a failure is likely. The occurrence and cost of system failures are reduced when conditions that may lead to failures are corrected early.
Long-term effects and cost comparisons generally indicate that performing preventive
maintenance is more cost-effective than only performing corrective maintenance or repairs upon
system failure. This is because the costs associated with preventive maintenance actions are
typically far less than those associated with a loss of operational time due to the system failure.
Wang and Pham (1996) indicated that imperfect preventive maintenance can restore a deteriorating component (or system) to a younger age and reduce the system's overall failure rate.
Canfield (1986) proposed a periodic preventive maintenance policy that is assumed to slow the rate of system degradation, while the hazard rate remains monotonically increasing. Chan and Shaw (1993) also studied the reduction of the failure rate after performing preventive maintenance. Malik (1979) proposed the improvement factor model to measure the restoration of the age and failure rate of a system after performing a preventive maintenance action. Jayabalan and Chaudhuri (1992) also used the improvement factor method to investigate the restoration effect on the age of a system after a preventive maintenance action. Most of the preventive maintenance models with the improvement factor in the literature assume that the improvement factor is constant. Lie and Chun (1986) considered the improvement factor as a variable; yet, some parameters are not well defined. Yang, Lin and Cheng (2003) also proposed an improvement factor which is a function of the number of preventive maintenance operations performed and the cost of preventive maintenance.
Many preventive maintenance models proposed for deteriorating systems typically aim to determine the optimum interval between preventive maintenance activities and the optimum number of preventive maintenance operations before replacing the system. This is done by
minimizing the expected average cost over a finite or infinite time span. Nakagawa (1986)
considered periodic and sequential preventive maintenance policies for the system with minimal
repair at failure and provided the optimal policies by minimizing the expected cost rates.
For the past thirty years, there has been an increasing interest in the study of the
preventive maintenance of components. McCall (1965) and Pierskalla and Voelker (1976)
provided surveys of early preventive maintenance models. Subsequent papers by Sherif and
Smith (1981), Valdez-Flores and Feldman (1989), and Murdock (1995) provided a more current
summary of the existing literature.
Several authors have explored the situation in which preventive maintenance actions do
not completely renew a component. Lie and Chun (1986) considered the case in which there are
two types of preventive maintenance actions, simple preventive maintenance and preventive
replacement. Under a simple preventive maintenance action, the failure rate of the component is
reduced to a level between the failure rate just prior to failure and that of a new component. This
type of simple preventive maintenance is also discussed by Malik (1979) and is called the
improvement factor model.
Preventive replacement provides complete renewal of the
component through replacement. The average cost rate per cycle is minimized to determine the
optimal number of simple preventive maintenance actions performed before a preventive
replacement. This model also provides for two types of corrective maintenance actions, minimal
repair and corrective replacement.
Periodic inspection consists of equipment checks that can be performed on standby and
online components as required. Periodic inspections are necessary because not all failures are
self-announcing. During an inspection, a technician records equipment deterioration so that
plans can be made to replace or repair worn parts before they cause system failure. Thus,
periodic inspections help to ensure that preventive maintenance intervals are scheduled
appropriately.
2.6.2 Corrective Maintenance
Corrective maintenance consists of the actions taken to return a failed component or a
system to satisfactory operational condition. This action usually involves diagnosis of the
problem by a maintenance technician or repair team, repair or replacement of the failed
components that are responsible for the overall system failure, and the verification that the repair
or the corrective actions taken have returned the system to successful operational condition.
Since the time at which failures will occur is unknown, the corrective maintenance
actions cannot be scheduled or planned for with a high degree of accuracy. Therefore, each time
a corrective maintenance action is necessary, there is a time associated with completing it. This time is
known as downtime, and is defined as the length of time a component, or a system is not
performing its required operation. Factors affecting the length of downtime include the physical
characteristics of the system, environmental factors, human factors, materials and more. Because
of all these factors, the time required to return an item to a satisfactory operational condition is
not constant and hence a distribution can be fitted to model the times to repair.
Such a distribution is called the repair or downtime distribution. It includes the time it
takes to successfully diagnose the cause of failure, the time it takes to get the parts necessary to
perform the repair, the time it takes to access the failed parts, the time it takes to repair the failed
parts or replace them with other parts, the time it takes to return the system to operating status,
the time it takes to verify that the system is functioning successfully and the time it takes to
return the system to normal operation.
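As an illustrative sketch (the repair times below are hypothetical), such a downtime distribution can be fitted to observed times to repair, here using a lognormal form:

    import numpy as np
    from scipy import stats

    # Hypothetical observed repair times in hours
    repair_times = np.array([2.1, 3.4, 1.8, 5.2, 2.9, 4.1, 6.3, 2.5, 3.8, 4.7])

    # Fit a lognormal repair (downtime) distribution, fixing location at zero
    shape, loc, scale = stats.lognorm.fit(repair_times, floc=0)

    # Probability that a repair is completed within 4 hours
    print(stats.lognorm.cdf(4.0, shape, loc=loc, scale=scale))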
The issue is not whether a repair/overhaul or replacement takes place; rather, the issue is
the relative age of the component after repair or replacement.
For example, if a failed
component is replaced by a new component, it is considered the same as if the component was
repaired/overhauled to an “as good as new” condition. Corrective maintenance actions can be
categorized as follows: Perfect Repair, Minimal Repair, Imperfect Repair, and General Repair.
2.6.2.1 Perfect Repair
Perfect repair models assume that after a corrective maintenance action the component is restored to an “as good as new” condition. The perfect repair assumption is reasonable if failed
components are replaced by new and identical components or if the repair/overhaul procedure is
detailed and thorough enough to eliminate nearly all of the aging effects. Models that assume
perfect repair are able to take advantage of basic renewal theory and queuing theory results as
presented by Barlow and Proschan (1975), Cox (1962), Daigle (1992), and Clarke and Disney
(1985), to name a few, but there are many other textbooks that relate to both of these theories.
Many of the current models assume perfect repair at some point for analytical purposes, and models that rely heavily on the perfect repair assumption are well established in the literature.
Nevertheless, there are some shortcomings to using a perfect repair or renewal model. Ascher and Feingold (1984) discussed the fact that if the modeled system actually is represented
by a number of components, then the repair or replacement of any one component will rarely
renew the entire system. They also discussed that the remaining components that have not failed
have accumulated some wear on them due to an aging process. In addition, the repair of one
component is not always completely performed and other components may be damaged in the
repair effort.
Economically, it may be unreasonable to perform a complete repair or replacement at
every failure. Commonly, components are repaired to a state that is likely to hold the system
operational until the next overhaul is due, or a component may be replaced by a used component
rather than a new component. Minimal repair models were some of the first models to address
the fact that alternatives to the perfect repair model are necessary.
2.6.2.2 Minimal Repair
The concept of minimal repair was first introduced by Barlow and Hunter (1960) and
discussed again in Barlow and Proschan (1967). Also, the survey paper by Valdez-Flores and Feldman (1989) devotes a section to the discussion of minimal repair models. Under
minimal repair, a component is repaired to a state that is stochastically identical to the state just
prior to failure. In other words, the failure rate is identical before and after a maintenance action,
but is different between a new component and after the maintenance action. Justification for the
minimal repair assumption is commonly expressed in one of two ways. First, a conscious
decision may be made to repair a system just enough to restore operation in an effort to avoid the
time or material required to renew the system. Second, the system may actually represent a large number of components; as discussed above, Ascher and Feingold (1984) indicated that the repair or replacement of a single failed component may have little effect on the state of the overall system when the state of all other components has not been altered or renewed.
Barlow and Hunter (1960) defined a model in the context of a block replacement policy,
where a component is replaced at regular intervals, and any failures are only minimally repaired.
Sandve and Aven (1999) constructed several minimal repair models to minimize the long run
expected cost per time unit under three preventive maintenance policies. The first policy is a
traditional block replacement policy. The second policy allows for replacement instead of
minimal repair if a component failure occurs close enough to the planned replacement time, and
the third policy requires a decision to be made at each component failure. Sheu (1993) proposed a model that determines the optimal number of minimal repairs until replacement, rather than using an age or block replacement policy. The cost model used expresses costs as a function of component age. Several synonyms for minimal repair exist in the literature, such as “bad-as-old” (Ascher, 1968), “restoration process” (Bassin, 1969), and “age-persistence” (Balaban, 1978).
2.6.2.3 Imperfect Repair
In the literature, the term imperfect repair has taken a broad meaning. Some authors have
used imperfect repair synonymously with minimal repair, while most authors have defined
imperfect repair to include policies with some mixture of minimal repair and perfect repair.
Pham and Wang (1996) addressed both imperfect preventive maintenance and corrective
maintenance models.
The first imperfect repair model was developed by Brown and Proschan (1983) and is referred to as the (p, q) model. The (p, q) model is built on the assumption that upon failure a component undergoes perfect repair with probability p, and undergoes minimal repair with probability q = 1 − p. The distribution of the time between perfect repairs is given and is shown to preserve certain characteristics of the original life distribution of the component. The failure rate of the (p, q) model is shown to be a multiple, p, of the original failure rate. Wang and Pham (1996) used the (p, q) rule to model imperfect preventive maintenance.
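As an illustrative sketch of the (p, q) rule (assuming a hypothetical Weibull baseline life distribution, which is not specified in the cited works), the following Python fragment simulates successive failures of a component whose age is reset to zero with probability p (perfect repair) and left unchanged with probability q = 1 − p (minimal repair):

    import numpy as np

    rng = np.random.default_rng(42)
    eta, beta, p = 100.0, 2.0, 0.3   # assumed Weibull scale/shape and repair probability

    def next_failure_age(age):
        """Sample the age at the next failure given survival to `age`,
        using the Weibull cumulative hazard H(t) = (t/eta)**beta."""
        h = (age / eta) ** beta - np.log(rng.uniform())
        return eta * h ** (1.0 / beta)

    virtual_age, clock, failures = 0.0, 0.0, []
    for _ in range(10):
        new_age = next_failure_age(virtual_age)
        clock += new_age - virtual_age        # operating time elapsed to this failure
        failures.append(clock)
        # Perfect repair (prob. p) resets the age; minimal repair keeps it
        virtual_age = 0.0 if rng.uniform() < p else new_age

    print(failures)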
This basic model was extended by Block, Borges and Savits (1985) by expressing the values p and q as functions of the age of the component; it is referred to as the (p(t), q(t)) model. Again, the distribution of the time between perfect repairs is given along with the associated failure rate. In both models, repair times are assumed negligible. Iyer (1992) assumed that repair times for the (p(t), q(t)) model are non-negligible and obtained downtime results.
2.6.2.4 General Repair
Recently, general repair models have been discussed as the most generally applicable
imperfect repair corrective maintenance model that includes perfect repair and minimal repair as
special cases. The distinct feature of a general repair model is that every repair effort can render
the state of the component anywhere in the continuum from just prior to failure to a complete
renewal. Some general repair models provide for the cases in which repair efforts potentially
make the state of the component worse than just prior to failure, or better than renewal. These
characteristics of general repair models are what separate them from those models discussed in
the imperfect repair section.
As an example, the concept of virtual age which was first introduced by Kijima,
Morimura, and Suzuki (1988) allows repair actions to bring the state of the component to a value
somewhere between completely new and just prior to failure. Replacements are made according
to a block replacement policy. As another example, Malik (1979) introduced the concept of the improvement factor model, in which the improvement factor refers to the degree to which preventive maintenance actions reduce the failure rate of a component. This model is unique,
since most of the models in the literature focus on chronological time, operational age, or virtual
age to measure the deterioration of a component. A common assumption is that the failure rate is
strictly increasing with time. This provides a one-to-one correspondence between any time scale
and the failure rate between maintenance actions.
Lie and Chun (1986) used the improvement factor approach to model the effect of an imperfect preventive maintenance action within a more complex maintenance environment than is considered by Malik (1979). The improvement factor in this model is treated as a decision variable and is chosen using a cost model. The improvement factor renders the failure rate somewhere between unchanged and that of a new component. In addition, the optimal number of imperfect preventive maintenance actions before replacement is obtained. Although the improvement factor model has been used in the context of preventive maintenance, it has obvious potential for use in general repair models.
An improvement factor model simply adds or subtracts a certain amount of age from the
accumulated age of the component to model the effects of a repair action. Stadje and Zuckerman
(1991) treated the degree of repair as a decision variable that is between the minimal repair and
perfect repair cases. A cost model was developed to determine the optimal degree of repair at
failure and a decision theory approach was used to identify the optimal type of strategy.
2.6.3 Condition Based Maintenance
In the literature, most of the research available on preventive maintenance and replacement models assumes underlying failure time distributions (Grall, Dieulle, Buerenguer and Roussignol, 2002). Recently, condition-based maintenance policies have received increasing attention. This type of maintenance policy takes into account updated risks of failure, and suggests system inspection and maintenance actions only if they are necessary based on the currently observed system state. The importance of such policies has been amplified by the availability of accurate sensors that can continuously provide performance indicators and process parameters at low cost. For example, a major manufacturer of elevators for high-rise buildings continuously monitors the braking systems, and the acceleration and deceleration of its elevators, globally. Albin
and Chao (1992) considered a series system including a multi-state degrading component.
Preventive replacement is carried out if its degradation state, modeled as a continuously
monitored Markov chain, exceeds a threshold level. Chen, Chen, and Yuan (2003) also used a Markov chain to model maintained systems subject to degradation.
These models treat the degradation process as a multi-state system in order to simplify its mathematical representation. However, the classification of the multiple states is usually arbitrary, and using the transition probabilities among those states may not be elaborate enough to improve the effectiveness of maintenance. A more realistic modeling scheme is to treat the system as a continuous-state system. However, because of mathematical complexity, maintenance policies regarding continuous-state systems are rare. Grall, Dieulle, Buerenguer and Roussignol (2002) studied the inspection-maintenance strategy for maintaining a single unit exhibiting continuous-time degradation, which was modeled by a Gamma process.
2.7 The Loss Function
The loss function quantifies the loss associated with any departure that may occur from the target. Several loss functions have been studied: the symmetric loss function, the asymmetric linear or squared error loss function, and the Linear Exponential (LINEX) loss function.
2.7.1 Symmetric Loss Function
This loss function was modeled by many authors including Taguchi. Taguchi, Elsayed
and Hsiang (1989) identified the loss as the cost that is incurred by society when the consumer
uses a product whose performance characteristics differ from the target. The concept of social
loss was a departure from the traditional thinking. Traditionally, performance is viewed as a step
function where a product is either good or bad. This view assumes a product is uniformly good
between the specifications (LSL the lower specification and USL the upper specification).
Taguchi, Elsayed and Hsiang (1989) characterized this cost or loss as a quadratic function. This loss reduces to zero when the production process manufactures at exactly the target value, and it increases quadratically as the process moves away from the target. Taguchi, Elsayed and Hsiang (1989) imposed a quadratic loss function of the form L(y) = k(y − T)² within operational limits, where y is the response value, T is the target value, and k is the monetary constant, or the loss per unit due to deviation from the target.
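As a small worked example (the dollar figures are hypothetical), the constant k can be set from the loss incurred at a known deviation, and the quadratic loss evaluated for a few response values:

    def quadratic_loss(y, target, k):
        """Taguchi quadratic loss L(y) = k * (y - target)**2."""
        return k * (y - target) ** 2

    # Suppose a deviation of 2 units from target costs $50, so k = 50 / 2**2 = 12.5
    k = 50.0 / 2.0 ** 2
    for y in [10.0, 11.0, 12.0]:               # target T = 10
        print(y, quadratic_loss(y, 10.0, k))   # losses: 0.0, 12.5, 50.0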
2.7.2 Asymmetric Loss Function
Various authors have postulated viable formulations of asymmetric loss criteria and have
studied the impact of asymmetric loss functions from several different perspectives. Wu and
Tang (1998) and Maghsoodloo and Li (2000) have considered tolerance design with asymmetric
loss functions, while Moorhead and Wu (1998) have analyzed the effect of this type of loss
function on parameter design. Ladany (1995) presented a solution to the problem of setting the
optimal target of a production process prior to starting the process under a constant asymmetric
loss function.
2.7.3 Linear Exponential (LINEX) Loss Function
This function is a special type of asymmetric loss function; it rises approximately exponentially on one side of zero and approximately linearly on the other side (Soliman, 2002).
Varian (1975) motivated the use of the LINEX loss function on the basis of an example in which
there was a natural imbalance in the economic results of estimation errors of the same
magnitude. He argued that LINEX loss was a rational way to formulate the consequences of
estimation errors in real estate assessment.
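A commonly used form of the LINEX loss, which the sketch below adopts as an assumption since the sources above do not write it out, is L(Δ) = b[exp(aΔ) − aΔ − 1] for an estimation error Δ; the asymmetry is evident numerically:

    import math

    def linex_loss(delta, a=1.0, b=1.0):
        """LINEX loss: approximately exponential for a*delta > 0,
        approximately linear for a*delta < 0."""
        return b * (math.exp(a * delta) - a * delta - 1.0)

    # Overestimation (delta = +1) is penalized far more than underestimation (-1)
    print(linex_loss(1.0))    # ~0.718
    print(linex_loss(-1.0))   # ~0.368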
Basu and Ebrahimi (1991) determined an expression for the LINEX estimator of the
survival function of a system having a Type II censored exponential lifetime. Using simulated
data, Basu and Thompson (1992) and Thompson and Basu (1993) obtained LINEX estimates of
the reliability of simple stress-strength systems. Pandey, Singh and Srivastava (1994) studied the
problem of estimating the shape parameter of a Pareto distribution using a LINEX loss function.
Sharma, Rathi and Namita (2004) stated that reliability characteristics, which are typically the main drivers for establishing maintenance requirements, are analyzed to meet assigned reliability goals; in many applications, however, the consequences of overestimating reliability are considered to be much more severe than those associated with underestimating it. For example, Basu and Ebrahimi (1991) discussed that in estimating reliability and failure rate functions, an overestimate is usually much more serious than an underestimate; in this case the use of a symmetric loss function might be inappropriate.
In engineering contexts, it is often the case that the overestimation of system reliability
can result in a higher level of risk of unanticipated and catastrophic failures; one would prefer to
err on the conservative side in such problems. Feynman (1987) stated that in the space shuttle disaster, management overestimated the average life, or reliability, of the solid fuel rocket boosters. This is an example where an asymmetric loss function might be more appropriate.
The remaining chapters of this dissertation are structured as follows:
Chapter 3 describes a degradation model that is used in this dissertation to demonstrate
the impact of stress and duration of the test on the reliability estimate. Chapter 4 outlines the
contributions of this dissertation by introducing the proposed model and the derivations of that
model and other relevant measures of reliability estimation.
Chapter 5 illustrates several examples to demonstrate the application of the theory of the proposed model as well as the applicability of the proposed testing plans. Chapter 6 draws conclusions from this dissertation and provides recommendations on how this research can be extended.
CHAPTER 3
DEGRADATION MODELING AND EFFECTS OF TEST PLANS
This chapter consists of two main sections. Section 1 provides an example of reliability estimation based on degradation data, and Section 2 demonstrates the reliability estimation procedure using a simulation of the general path model with multiple varying stress levels and multiple runs at different times.
3.1 Reliability Estimates Based on Degradation Data
In this section, an example of reliability estimation based on degradation paths without considering stresses is provided. Let D(t) be the component degradation at time t, and Df be the failure threshold. The associated reliability function then equals the probability that the degradation D(t) is less than or equal to Df, i.e., R(t) = Pr(D(t) ≤ Df).

More specifically, the degradation model D(t) = β1 + β2t + ε is utilized for demonstration, where β1 and β2 are the parameters to be estimated and ε is an error term. Both β1 and β2 are assumed to follow the normal distribution, and ε is also assumed to follow the normal distribution. For demonstration, 2,000 degradation paths are generated. More specifically, both β1 and β2 follow the normal distribution N(1.5, 0.5), and 2,000 pairs of values are randomly generated for β1 and β2, together with 2,000 values of ε following the normal distribution N(0, 1).
Figure 3.1 gives an example of 5 degradation paths, which show the shape of most of the degradation paths over time.

Figure 3.1: Example of Five Degradation Paths
The next step is to determine the number of degradation paths crossing the failure threshold Df at each specific t, where Df = 5, 6, 7, 8, 9 and 10. After determining the number of degradation paths crossing a specific failure threshold Df at each specific ti, the reliability estimate R̂(t) at the different failure threshold levels is calculated using the following equation:

R̂(t) = 1 − (number of paths crossing Df at time t) / N,

where N is the total number of degradation paths; in this demonstration, N = 2,000.
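A minimal Python sketch of this simulation (following the distributions stated above, with 0.5 read as the variance in the N(μ, σ²) notation; the implementation details are illustrative rather than the original code) is:

    import numpy as np

    rng = np.random.default_rng(0)
    N, times = 2000, np.arange(0, 11)

    # Generate 2,000 linear degradation paths D(t) = b1 + b2*t + eps
    b1 = rng.normal(1.5, np.sqrt(0.5), N)
    b2 = rng.normal(1.5, np.sqrt(0.5), N)
    eps = rng.normal(0.0, 1.0, N)
    paths = b1[:, None] + b2[:, None] * times[None, :] + eps[:, None]

    # Empirical reliability: fraction of paths that have not yet crossed Df
    for d_f in [5, 6, 7, 8, 9, 10]:
        crossed = np.maximum.accumulate(paths >= d_f, axis=1)  # once failed, stays failed
        r_hat = 1.0 - crossed.mean(axis=0)
        print(d_f, np.round(r_hat, 2))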
Figure 3.2 (a through f) illustrates the associated reliability estimates R̂(t) over time for various failure thresholds Df.
[Figure 3.2, panels (a) through (f): plots of the reliability estimate R̂(t) versus time (t = 0 to 10) for failure thresholds Df = 5, 6, 7, 8, 9 and 10, respectively.]
Figure 3.2 (a through f): Reliability Estimates Rˆ (t ) for Various Failure Thresholds, D f
It can be seen from the figures that as the failure threshold level increases, the reliability estimate increases as well, which is due to the longer time period required to reach that higher failure threshold level. The other conclusion that can be drawn from the previous analysis is that the time to stop a particular test depends on the threshold level that is specified. This can be better explained by combining Figures 3.2 (a) through (f) into one plot, as shown in Figure 3.3.
Figure 3.3: Reliability Estimates for Different Failure Thresholds
In Figure 3.3, it is clear that the reliability estimate of ≈ 28% obtained when the failure threshold equals 7 at time t = 8 is very similar to the reliability estimate obtained when the failure threshold equals 9 at time t = 10, which means that the test will have to run longer when the failure threshold is 9 in order to demonstrate a reliability estimate of ≈ 28%.
3.2 Reliability Estimation Based on Varying Stress Levels
In the previous section, it was shown that the degradation paths vary in relation to the test time. It was also shown that as the failure threshold level increases, the reliability estimate increases as well, which is also dependent on the test time; and finally, it was shown that the time to stop a particular test depends on the threshold level that is specified in the analysis or provided by the customer.
In this section, stress is considered as a factor affecting the same degradation path that was used in the previous section, such that

Dz(t) = β1 + β2ti + ε,

where Dz(t) is the degradation path at time t under stress level Z, β1 and β2 are the stress-dependent parameters that have to be estimated, and ε is a randomly generated error term.

Suppose that a failure occurs when Dz(t) ≥ Df, where Df is the failure threshold, and that the actual degradation path of a particular unit is given by Dz(t) = β1 + β2ti + ε, where β1 and β2 are random variables that follow a normal distribution N(μ, σ²) in which μ is dependent on the stress level Z such that μ = a0Z + a1, and σ² is not dependent on stress. For the purposes of this analysis, and for simplicity of illustration, a0, a1 and σ² are all assumed to equal 0.5, and therefore β1 ~ N((0.5 × Z + 0.5), 0.5) and β2 ~ N((0.5 × Z + 0.5), 0.5). The error term ε is also a random variable that follows a normal distribution N(μ, σ²) in which μ and σ² are not dependent on the stress level Z. For the purposes of this analysis, and for simplicity of illustration, the assumption is made that μ = 0 and σ² = 1, and therefore ε ~ N(0, 1).
Two simulations to estimate the reliability based on varying stress levels are performed, and the accuracy of the two simulations is then compared. Each simulation is performed at three different stress levels Z. In both simulations, 50 random variates of β₁ ~ N(0.5Z + 0.5, 0.5), β₂ ~ N(0.5Z + 0.5, 0.5), and ε ~ N(0, 1) are generated at each stress level Z₁, Z₂, and Z₃.
In the first simulation, the stress levels used are Z₁ = 1, Z₂ = 2, and Z₃ = 3. Since μ = a₀Z + a₁ with a₀ = a₁ = 0.5, the corresponding values of μ are μ₁ = 1, μ₂ = 1.5, and μ₃ = 2. In the second simulation, the stress levels used are Z₁ = 1.5, Z₂ = 2.1, and Z₃ = 3.5; the corresponding values of μ are μ₁ = 1.25, μ₂ = 1.55, and μ₃ = 2.25. Clearly, the different stress levels affect the distributions of β₁ and β₂.

After generating 50 random variates as described above, 50 degradation paths D_Z(t) = β₁ + β₂tᵢ + ε are calculated for each tᵢ, where tᵢ is the time from i = 1 to 10. This procedure is repeated 10 times, with each replication considered a run.

Therefore, for the first simulation, the degradation path D_Z(t) = β₁ + β₂tᵢ + ε is calculated, where the data for β₁, β₂, and ε are randomly generated from the normal distributions identified in Table 3.1.
Table 3.1: Different Distributions/Values Used in the First Degradation Path Simulation

            β₁              β₂              ε            tᵢ
DZ1(t)   NORM(1, 0.5)    NORM(1, 0.5)    NORM(0, 1)   1, 2…10
DZ2(t)   NORM(1.5, 0.5)  NORM(1.5, 0.5)  NORM(0, 1)   1, 2…10
DZ3(t)   NORM(2, 0.5)    NORM(2, 0.5)    NORM(0, 1)   1, 2…10
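A short sketch of this generation procedure is given below, assuming NORM(m, v) in Table 3.1 denotes a normal distribution with mean m and variance v (an interpretation of the table's notation).

```python
import numpy as np

# Sketch of the simulation in Tables 3.1/3.2: 10 runs of 50 paths at each
# stress level, with mu = a0*Z + a1 driving the b1, b2 distributions.
rng = np.random.default_rng(seed=2)
t = np.arange(1, 11)                       # t_i, i = 1..10
n_paths, n_runs = 50, 10

def simulate_runs(Z, a0=0.5, a1=0.5, var=0.5):
    mu = a0 * Z + a1                       # stress-dependent mean of b1, b2
    runs = []
    for _ in range(n_runs):
        b1 = rng.normal(mu, np.sqrt(var), n_paths)
        b2 = rng.normal(mu, np.sqrt(var), n_paths)
        eps = rng.normal(0.0, 1.0, (n_paths, t.size))
        runs.append(b1[:, None] + b2[:, None] * t + eps)
    return runs                            # list of 10 arrays, each (50, 10)

sim1 = {Z: simulate_runs(Z) for Z in (1, 2, 3)}        # first simulation
sim2 = {Z: simulate_runs(Z) for Z in (1.5, 2.1, 3.5)}  # second simulation
```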
Figure 3.4 illustrates a sample of 7 degradation paths from the 10th run of the first simulation, where β₁ ~ NORM(1, 0.5), β₂ ~ NORM(1, 0.5), ε ~ NORM(0, 1), and tᵢ is the time from i = 1 to 10.
[Figure 3.4: A Sample of 7 Degradation Paths from the 10th Run in Simulation 1. Degradation paths (y-axis, 0–18) versus time t = 1–10 (x-axis).]
After computing the 50 degradation paths for all 10 runs of the first simulation, the parameter estimates for β₁ and β₂ are computed. This is done by computing the parameter estimates for each run and then averaging them over all the runs at each stress level, yielding 3 parameter estimates per simulation, one at each stress level. For simplicity, the parameter estimates are computed using Minitab.
The 50 parameter estimates of β₁ and β₂ are plotted for each run; therefore, there are 10 figures depicting the estimates of β₁ and β₂ at each stress level (30 parameter-estimate figures per simulation). For the first simulation, there are 10 figures for DZ1(t), 10 for DZ2(t), and 10 for DZ3(t), and the same holds for the second simulation at its stress levels. Figure 3.5 (a through j) below illustrates the parameter estimates for simulation 1, where β₁ ~ NORM(1, 0.5), β₂ ~ NORM(1, 0.5), ε ~ NORM(0, 1), and tᵢ is the time from i = 1 to 10 for each of the 10 runs. The remaining figures at the other stress levels for the first simulation, and all the figures for the second simulation, are placed in Appendix I.
[Figure 3.5 (a through j): Parameter Estimates in Simulation 1, where the true parameters are β₁ ~ NORM(1, 0.5), β₂ ~ NORM(1, 0.5), and ε ~ NORM(0, 1). Panels (a)–(j) plot the β₂ estimates (y-axis) against the β₁ estimates (x-axis) for runs 1 through 10 of DZ1.]
In the second simulation, a similar analysis is performed. The stress levels used are Z₁ = 1.5, Z₂ = 2.1, and Z₃ = 3.5. Since μ = a₀Z + a₁ with a₀ = a₁ = 0.5, the corresponding values of μ are μ₁ = 1.25, μ₂ = 1.55, and μ₃ = 2.25; again, the different stress levels affect the distributions of β₁ and β₂.

After generating 50 random variates as described above, 50 degradation paths D_Z(t) = β₁ + β₂tᵢ + ε are calculated for each tᵢ, where tᵢ is the time from i = 1 to 10. This procedure is repeated 10 times, with each replication considered a run. Therefore, for the second simulation, the degradation paths are calculated per Table 3.2, where the data for β₁, β₂, and ε are randomly generated from the normal distributions identified in Table 3.2 below.
Table 3.2: Different Distributions/Values Used in the Second Degradation Path Simulation

            β₁               β₂               ε            tᵢ
DZ1(t)   NORM(1.25, 0.5)  NORM(1.25, 0.5)  NORM(0, 1)   1, 2…10
DZ2(t)   NORM(1.55, 0.5)  NORM(1.55, 0.5)  NORM(0, 1)   1, 2…10
DZ3(t)   NORM(2.25, 0.5)  NORM(2.25, 0.5)  NORM(0, 1)   1, 2…10
After computing the 50 degradation paths for all 10 runs of the second simulation, the parameter estimates for β₁ and β₂ are computed in the same way: the estimates are obtained for each run and then averaged over all the runs at each stress level, yielding 3 parameter estimates per simulation, one at each stress level. As before, the parameter estimates are computed using Minitab.
After all the parameter estimates are computed per Table 3.1 and Table 3.2 and the plots of the parameter estimates are generated, the averages of the parameter estimates β̂₁ and β̂₂ over all 50 degradation paths are calculated for each run using the following equation:

$$\hat{\beta} = \frac{\sum_{i=1}^{50} \hat{\beta}_i}{50}$$

The denominator of 50 simply reflects the choice to generate 50 degradation paths; any other number of paths could be used.
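A sketch of this per-run step is shown below, assuming an ordinary least-squares fit (the kind of regression Minitab would produce) of D = β₁ + β₂t to each path, followed by the 50-path average; the simulated run is an illustrative stand-in.

```python
import numpy as np

# Sketch: per-path OLS estimates of (b1, b2), then the run-level average.
rng = np.random.default_rng(seed=5)
t = np.arange(1, 11)
X = np.column_stack([np.ones_like(t, dtype=float), t])  # design matrix [1, t]

paths = (rng.normal(1.0, np.sqrt(0.5), 50)[:, None]
         + rng.normal(1.0, np.sqrt(0.5), 50)[:, None] * t
         + rng.normal(0.0, 1.0, (50, 10)))              # one run at Z = 1

coefs, *_ = np.linalg.lstsq(X, paths.T, rcond=None)     # one OLS fit per path
b1_hat, b2_hat = coefs                                  # each has 50 entries
print(b1_hat.mean(), b2_hat.mean())                     # run-level averages
```

Averaging these run-level values over the 10 runs reproduces the kind of quantities summarized in Tables 3.3 through 3.8.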
Table 3.3: β̂₁ and β̂₂ Averages from the First Simulation under Stress 1, DZ1(t) ~ NORM(1, 0.5)

Run     β̂₁        β̂₂
1       0.99513   1.11007
2       0.95600   1.00214
3       1.00677   1.11843
4       1.02439   1.04147
5       0.99460   1.10181
6       0.99368   1.15171
7       0.95910   1.04019
8       1.03628   1.09743
9       1.00544   0.99809
10      1.00193   1.03854
It is important to obtain the average of the parameter estimates β̂₁ and β̂₂ over all 10 runs. This is done using the following equation:

$$\bar{\hat{\beta}} = \frac{\sum_{run=1}^{10} \hat{\beta}_{run}}{10}$$

The averages of the parameter estimates summarized in Table 3.3, for the first simulation at the first stress level DZ1(t) ~ NORM(1, 0.5), are β̂₁ = 0.99733 and β̂₂ = 1.06999. This information is used later to fit a regression line showing which simulation generated results that best match the assumptions made, and under what stress levels.
Table 3.4: β̂₁ and β̂₂ Averages from the First Simulation under Stress 2, DZ2(t) ~ NORM(1.5, 0.5)

Run     β̂₁        β̂₂
1       1.47971   1.56213
2       1.54079   1.42846
3       1.56437   1.52494
4       1.43548   1.61542
5       1.45248   1.60228
6       1.48266   1.53831
7       1.55049   1.51491
8       1.40170   1.51999
9       1.51286   1.59605
10      1.45932   1.44133
Using the same averaging equation over the 10 runs, the averages of the parameter estimates summarized in Table 3.4, for the first simulation at the second stress level DZ2(t) ~ NORM(1.5, 0.5), are β̂₁ = 1.48799 and β̂₂ = 1.53438. This information is likewise used later in fitting the regression lines.
Table 3.5: β̂₁ and β̂₂ Averages from the First Simulation under Stress 3, DZ3(t) ~ NORM(2, 0.5)

Run     β̂₁        β̂₂
1       2.20429   2.01193
2       2.03535   2.14055
3       1.91805   2.01215
4       1.90811   1.99985
5       1.81842   2.11305
6       2.03415   2.06160
7       1.98591   2.13057
8       2.12433   2.03099
9       1.96502   2.10492
10      2.01384   2.05102
Using the same averaging equation, the averages of the parameter estimates summarized in Table 3.5, for the first simulation at the third stress level DZ3(t) ~ NORM(2, 0.5), are β̂₁ = 2.00075 and β̂₂ = 2.06566.
Table 3.6: β̂₁ and β̂₂ Averages from the Second Simulation under Stress 1, DZ1(t) ~ NORM(1.25, 0.5)

Run     β̂₁        β̂₂
1       1.29085   1.30222
2       1.10115   1.31714
3       1.26095   1.24461
4       1.32566   1.24107
5       1.14020   1.28040
6       1.25155   1.33690
7       1.32898   1.30607
8       1.29308   1.30867
9       1.26207   1.23719
10      1.36213   1.38841
Using the same averaging equation, the averages of the parameter estimates summarized in Table 3.6, for the second simulation at the first stress level DZ1(t) ~ NORM(1.25, 0.5), are β̂₁ = 1.26166 and β̂₂ = 1.29627.
Table 3.7: β̂₁ and β̂₂ Averages from the Second Simulation under Stress 2, DZ2(t) ~ NORM(1.55, 0.5)

Run     β̂₁        β̂₂
1       1.51762   1.49149
2       1.56656   1.52619
3       1.48992   1.67822
4       1.62073   1.71418
5       1.42140   1.62906
6       1.65545   1.52878
7       1.50863   1.64274
8       1.53141   1.62650
9       1.46266   1.57202
10      1.47465   1.68173
Using the same averaging equation, the averages of the parameter estimates summarized in Table 3.7, for the second simulation at the second stress level DZ2(t) ~ NORM(1.55, 0.5), are β̂₁ = 1.52490 and β̂₂ = 1.60909.
Table 3.8: β̂₁ and β̂₂ Averages from the Second Simulation under Stress 3, DZ3(t) ~ NORM(2.25, 0.5)

Run     β̂₁        β̂₂
1       2.25540   2.32147
2       2.33897   2.33249
3       2.38584   2.30391
4       2.13792   2.45792
5       2.27622   2.25709
6       2.21392   2.25780
7       2.23794   2.24575
8       2.09227   2.23095
9       2.31007   2.21387
10      2.23381   2.32544
Using the same averaging equation, the averages of the parameter estimates summarized in Table 3.8, for the second simulation at the third stress level DZ3(t) ~ NORM(2.25, 0.5), are β̂₁ = 2.24823 and β̂₂ = 2.29467. Table 3.9 summarizes the averaged values of β̂₁ and β̂₂ at all three stress levels for the first simulation, and Table 3.10 summarizes the corresponding values for the second simulation.
Table 3.9: Summary of β̂₁ and β̂₂ Values (First Simulation)

First Simulation   DZ1(t) Norm(1.00, 0.5)   DZ2(t) Norm(1.50, 0.5)   DZ3(t) Norm(2.00, 0.5)
β̂₁                0.99733                  1.48799                  2.00075
β̂₂                1.06999                  1.53438                  2.06566
Table 3.10: Summary of β̂₁ and β̂₂ Values (Second Simulation)

Second Simulation  DZ1(t) Norm(1.25, 0.5)   DZ2(t) Norm(1.55, 0.5)   DZ3(t) Norm(2.25, 0.5)
β̂₁                1.26166                  1.52490                  2.24823
β̂₂                1.29627                  1.60909                  2.29467
The values summarized in Table 3.9 and Table 3.10 are plotted to show the parameter estimates β̂₁ and β̂₂ obtained in each simulation at the different stress levels. A total of 4 figures are generated: two comparing the results of the first and second simulations for parameter estimate β̂₁, and two comparing the results of the first and second simulations for parameter estimate β̂₂. Each figure is examined, and a regression line equation and R-squared value are computed and compared. Figure 3.6 (a and b) shows a comparison of the estimation accuracy for parameter β̂₁.
[Figure 3.6 (a and b): Comparison of the Estimation Accuracy for Parameter β̂₁. (a) Regression line for β̂₁, 1st simulation; (b) regression line for β̂₁, 2nd simulation.]
Comparing the two panels, Figure 3.6 (a) for the first simulation shows a better fit of the regression line for β̂₁ than Figure 3.6 (b). It is important to note that the regression line equation in Figure 3.6 (a) has values of a₀ and a₁ very close to those initially assumed in the equation used for the mean, μ = a₀Z + a₁. Recalling that a₀ = a₁ were assumed to equal 0.5, the model estimated in Figure 3.6 (a) is the better model.
Figure 3.7 (a and b) shows a comparison of the estimation accuracy for parameter β̂₂.

[Figure 3.7 (a and b): Comparison of the Estimation Accuracy for Parameter β̂₂. (a) Regression line for β̂₂, 1st simulation; (b) regression line for β̂₂, 2nd simulation.]
Similar conclusions can be drawn here: Figure 3.7 (a) for the first simulation shows a better fit of the regression line for β̂₂ than Figure 3.7 (b). Again, the regression line equation in Figure 3.7 (a) has values of a₀ and a₁ very close to those initially assumed in the equation used for the mean, μ = a₀Z + a₁; recalling that a₀ = a₁ were assumed to equal 0.5, the model estimated in Figure 3.7 (a) is the better model.
3.3 Summary and Discussion
The analysis and simulation performed in this chapter, demonstrated in Figures 3.6 and 3.7, clearly indicate that stress has a direct effect on degradation and that varying the stress levels has a varying impact on the reliability. The simulation also demonstrated that different settings of the degradation model (i.e., stress, number of units tested, etc.) lead to differences in the estimation accuracy of the model. For simple path models, the reliability can be estimated or expressed in closed form as a function of the basic path parameters. With acceleration, however, the reliability estimation depends on the level of the acceleration variables.
CHAPTER 4
MAINTENANCE-ORIENTED OPTIMAL DESIGN OF ADT PLANS
To maintain a product in a timely and effective fashion, it is necessary to estimate the failure time distribution of the product. The previous chapter has shown how to use degradation testing to estimate failure time distributions, and how the different settings of a degradation test affect the estimation accuracy. Once the failure time distribution has been determined, it is straightforward to determine preventive maintenance schedules, such as the optimal preventive maintenance intervals and the preventive maintenance threshold, in order to ensure the reliability of the product under normal operation. However, in many engineering applications, maintenance schedules are predetermined by management and customer concerns and/or by a variety of mandatory regulations.

In the literature, little work has been done on the optimal design of testing plans that considers a variety of maintenance requirements, especially through accelerated degradation testing.
To resolve this problem, the remainder of this dissertation focuses on how manufacturers and customers can benefit from the use of degradation testing in establishing confidence intervals of reliability estimates, as well as the economic loss associated with existing maintenance programs. To improve the accuracy of the reliability estimates and the associated economic losses, it is important to use scientifically designed testing plans.

This problem is investigated in this chapter. More specifically, it is shown that maintenance requirements can be used to drive the optimal design of accelerated degradation testing plans under a variety of constraints. This is the reverse of what has been done in the literature, where maintenance requirements are typically established based on the results of the testing, not the other way around. For example, given a required maintenance interval of T hours for a component, the optimal design of the accelerated degradation testing plan is influenced by the reliability of that component to be proven and by the loss associated with the maintenance requirement.
In this chapter, the Wiener process (Whitmore and Schenkelberg, 1997) is used to model the degradation process of a component. The reliability of the component at a specific maintenance requirement T, i.e., R̂(T), is estimated analytically instead of by simulation. In order to speed up the degradation process of the component, it is assumed that the stress levels used in the test are higher than the design stress Z₀. Moreover, the test must end before the maximal test duration T_max imposed by limited test resources.

The variances of the reliability estimate are derived using the Fisher information matrix approach. Afterwards, a hypothesis test is performed to check whether the reliability of the product exceeds the reliability requirement before the scheduled maintenance, i.e., to test the hypothesis that R̂_true(T) ≥ R₀(T), where R̂_true(T) and R₀(T) are the reliability estimate and the required reliability, respectively, at the maintenance time of T hours.
As for the optimal design of testing plans, two types of design problems are investigated. The first optimization problem minimizes the variance of the reliability estimate R̂_true(T) at the required maintenance point of T hours. The goal is to obtain the most accurate reliability estimate using limited testing resources. The mathematical formulation of this optimal design problem can be expressed as:

$$\min \; \mathrm{Var}\left(\hat{R}_{true}(T; Z_1, Z_2, T_1, T_2)\right)$$

subject to:

$$Z_0 < Z_1 < Z_2 < Z_H$$
$$T_1, T_2 < T_{max}$$

where Z_H is the highest stress level allowed, T_max is the maximal test duration for the experiment, Z₁ and Z₂ are the stress levels applied in the ADT experiment, and T₁ and T₂ are the testing durations at stress levels Z₁ and Z₂, respectively. It was shown in Chapter 3 that the accuracy of reliability estimates depends on the settings of the experiment. As a result, the variance of the reliability estimate can be minimized by finding the optimal settings of the decision variables Z₁, Z₂, T₁, and T₂.
The second optimal design of testing plans is formulated to minimize the variance of the overall loss function based on the degradation test. The ultimate goal is to obtain the most accurate estimate of the economic loss incurred, using limited testing resources, by optimally setting the decision variables of the experiment. The mathematical formulation can be expressed as:

$$\min \; \mathrm{Var}\left(loss;\, Z_1, Z_2, T_1, T_2, R_0(T)\right)$$

subject to:

$$Z_0 < Z_1 < Z_2 < Z_H$$
$$T_1, T_2 < T_{max}$$

where R₀(T) is the required reliability at the maintenance time T. This formulation can be extended to a more general case involving more than one maintenance requirement (from several customers, for instance):

$$\min \; \sum_{i=1}^{k} \omega_i \, \mathrm{Var}\left(loss;\, Z_1, Z_2, T_1, T_2, R_0(T_i)\right)$$

subject to:

$$Z_0 < Z_1 < Z_2 < Z_H$$
$$T_1, T_2 < T_{max}$$

where R₀(Tᵢ) is the required reliability at the maintenance time Tᵢ, and ωᵢ is the relative weight of the i-th requirement.
4.1 Analytical Method for Reliability Estimation
In Chapter 3, estimating the reliability of a component using a simulation approach was demonstrated. That approach is generic and can be used with any degradation model, but it is extremely time consuming, especially as the model becomes more complex. In some situations, reliability functions can be derived analytically for specific degradation models. In this section, the approximate reliability function associated with the Wiener process degradation model is derived; this model is utilized throughout the remainder of this dissertation.

Let D(t) be the degradation measure at time t, D_f be the failure threshold, and T (hrs) be the associated failure time. In addition, it is assumed that failure occurs the first time D(t) ≥ D_f. The formal definition of a degradation-related failure is:

$$T = \inf_{t \ge 0} \{\, t : D(t) \ge D_f \,\}$$
One of the most widely used degradation models is the Wiener process model:

$$D(t) = \beta_1 t + \sigma W(t), \qquad t \ge 0$$

where β₁ is the drift parameter, σ is the diffusion parameter, and W(t) is the standard Brownian motion process with the independent-increment property. When deriving the approximate reliability function associated with the Wiener process degradation model, stress levels are not considered at first. From the definition of the Wiener process, D(t) at time t is normally distributed with p.d.f. Norm(β₁t, σ²t).
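A minimal sketch of simulating such Wiener degradation paths is shown below; the drift, diffusion, and threshold values are illustrative placeholders. The independent-increment property makes simulation straightforward: each increment is an independent normal draw.

```python
import numpy as np

# Sketch: Wiener-process degradation D(t) = b1*t + sigma*W(t), simulated
# on a grid via independent increments dD ~ Normal(b1*dt, sigma^2*dt).
rng = np.random.default_rng(seed=3)
b1, sigma, dt, n_steps, n_paths = 0.4, 0.25, 0.1, 1000, 500

increments = rng.normal(b1 * dt, sigma * np.sqrt(dt), (n_paths, n_steps))
D = np.cumsum(increments, axis=1)           # D(t) sampled on the grid

Df = 5.0                                    # illustrative failure threshold
crossed = np.maximum.accumulate(D >= Df, axis=1)
R_empirical = 1.0 - crossed.mean(axis=0)    # compare with the analytical R(t) below
```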
[Figure 4.1: Relationship between the Reliability Function and the Degradation Process]

Figure 4.1 illustrates the reliability function associated with the distribution of the degradation process, where the shaded area represents the reliability of the component at time t. From the relationship between the reliability function and the degradation process, i.e., R(t) = Pr(D(t) ≤ D_f), the associated reliability function can be approximately expressed as:

$$R(t) = \Phi\left(\frac{D_f - \mu(t)}{\sigma(t)}\right) = \Phi\left(\frac{D_f - \beta_1 t}{\sigma \sqrt{t}}\right)$$

where β₁ >> σ². The probability of failure at time t can be expressed as F(t) = Pr(D(t) ≥ D_f) = 1 − R(t).
A similar derivation can be carried out when stress is taken into account, yielding the reliability function at different operating or testing conditions. Generally, the reliability function can be expressed as:

$$R_Z(t) = \Pr(D_Z(t) \le D_f)$$

where D_Z(t) ~ Norm(β₁(Z)t, σ²t) depends on the stress level Z. By modeling β₁(Z) as a function of the applied stresses, the associated reliability function can be approximated by:

$$R_Z(t) = \Phi\left(\frac{D_f - \beta_1(Z)\, t}{\sigma \sqrt{t}}\right) \tag{4.1}$$
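A short sketch of evaluating equation (4.1) is given below; the power-law drift β₁(Z) = a₀Z^{a₁} anticipates Section 4.2, and every numeric value is an illustrative placeholder.

```python
import numpy as np
from scipy.stats import norm

# Sketch of equation (4.1): R_Z(t) = Phi((Df - b1(Z)*t) / (sigma*sqrt(t)))
def reliability(t, Z, a0, a1, sigma, Df):
    t = np.asarray(t, dtype=float)
    return norm.cdf((Df - a0 * Z**a1 * t) / (sigma * np.sqrt(t)))

t = np.linspace(1.0, 100.0, 25)
print(reliability(t, Z=2.0, a0=0.01, a1=1.5, sigma=0.05, Df=5.0))
```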
Before the reliability function and other reliability indices can be estimated, the model parameters of the degradation process need to be estimated first. These parameter estimates are derived in the following section.
4.2 Maximum Likelihood Estimates of Model Parameters from Degradation Data
To estimate the model parameters, the maximum likelihood approach is utilized. As discussed in Chapter 2, the MLEs of the parameters are obtained by maximizing the likelihood function of the observed data. This approach has many desirable properties: the estimator is efficient in the sense that no estimator has smaller variance, the estimate approaches the true value as the number of observations increases, and the distribution of the deviations of the estimator from the true parameter is asymptotically normal.

To start, no stress level is involved. Let D_{i,j} be the degradation measure of unit i at time j. Considering the independent-increment property of the Wiener process and the independence of the test units, the likelihood function of the observed degradation data can be expressed as:
$$L(Data \mid \beta_1, \sigma^2) = \prod_{i=1}^{m} \prod_{j=1}^{n_i - 1} \frac{1}{\sqrt{2\pi (t_{i,j+1} - t_{i,j})}\,\sigma} \exp\left(-\frac{\left(D_{i,j+1} - D_{i,j} - \beta_1 (t_{i,j+1} - t_{i,j})\right)^2}{2\sigma^2 (t_{i,j+1} - t_{i,j})}\right)$$

and the log-likelihood function is:

$$l = \ln L(Data \mid \beta_1, \sigma^2) = -\sum_{i=1}^{m} \sum_{j=1}^{n_i - 1} \left( \ln\sqrt{2\pi (t_{i,j+1} - t_{i,j})} + \ln\sigma + \frac{\left(D_{i,j+1} - D_{i,j} - \beta_1 (t_{i,j+1} - t_{i,j})\right)^2}{2\sigma^2 (t_{i,j+1} - t_{i,j})} \right)$$

To obtain the MLEs of the model parameters, a numerical method can be applied to maximize the log-likelihood function.
When modeling ADT data where stresses are considered, the degradation process is distributed as D_Z(t) ~ Norm(β₁(Z)t, σ²t), where the drift parameter β₁(Z) depends on the stress. Let D_{k,i,j} be the degradation measurement of component i at time j under stress level Z_k, m be the number of stress levels, n_k be the number of units tested under stress Z_k, and m_{ki} be the number of measurements of unit i under stress Z_k. Considering the independent-increment property of the Wiener process and the independence of the test units, the likelihood function of the observed ADT data with the stress levels can be expressed as:

$$L(Data \mid \beta_1(z), \sigma^2) = \prod_{k=1}^{m} \prod_{i=1}^{n_k} \prod_{j=1}^{m_{ki} - 1} \frac{1}{\sqrt{2\pi (t_{k,i,j+1} - t_{k,i,j})}\,\sigma} \exp\left(-\frac{\left(D_{k,i,j+1} - D_{k,i,j} - \beta_1(z_k)(t_{k,i,j+1} - t_{k,i,j})\right)^2}{2\sigma^2 (t_{k,i,j+1} - t_{k,i,j})}\right)$$
The log-likelihood function is:

$$l = \ln L(Data \mid \beta_1(z), \sigma^2) = -\sum_{k=1}^{m} \sum_{i=1}^{n_k} \sum_{j=1}^{m_{ki} - 1} \left( \ln\sqrt{2\pi (t_{k,i,j+1} - t_{k,i,j})} + \ln\sigma + \frac{\left(D_{k,i,j+1} - D_{k,i,j} - \beta_1(z_k)(t_{k,i,j+1} - t_{k,i,j})\right)^2}{2\sigma^2 (t_{k,i,j+1} - t_{k,i,j})} \right) \tag{4.2}$$
By specifying the functional form of β₁(Z), such as β₁(Z) = a₀Z^{a₁}, the MLEs of the model parameters a₀, a₁, and σ can be obtained using a numerical method that maximizes the log-likelihood function in equation (4.2). The reliability function at a given stress level, or at regular operating conditions, is then estimated by substituting the given stress level and the parameter estimates into the expression for the reliability function. By the invariance property of the MLE, the resulting reliability estimate is the MLE of the reliability function. It is important to note that the power-law model β₁(Z) = a₀Z^{a₁} is used hereafter.
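A sketch of this numerical maximization is given below, assuming the power-law drift and a synthetic data set; the data generator, starting values, and parameter scales are illustrative assumptions, not quantities from the text.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: maximize the log-likelihood (4.2) with b1(Z) = a0*Z**a1.
rng = np.random.default_rng(seed=4)

def make_unit(Z, a0=0.01, a1=1.5, sigma=0.05):
    """Simulate one unit's readings at times 0, 50, ..., 250 (placeholder)."""
    times = np.arange(0.0, 260.0, 50.0)
    dt = np.diff(times)
    dD = rng.normal(a0 * Z**a1 * dt, sigma * np.sqrt(dt))
    return Z, times, np.concatenate([[0.0], np.cumsum(dD)])

data = [make_unit(Z) for Z in (35.0,) * 10 + (40.0,) * 9]

def neg_loglik(theta):
    a0, a1, log_sigma = theta                  # log-parametrize sigma > 0
    sigma = np.exp(log_sigma)
    nll = 0.0
    for Z, times, D in data:
        dt, dD = np.diff(times), np.diff(D)
        mu = a0 * Z**a1 * dt                   # mean of each Wiener increment
        nll += np.sum(np.log(np.sqrt(2 * np.pi * dt)) + np.log(sigma)
                      + (dD - mu) ** 2 / (2 * sigma**2 * dt))
    return nll

res = minimize(neg_loglik, x0=[0.005, 1.0, np.log(0.1)], method="Nelder-Mead")
a0_hat, a1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```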
4.3 Derivation of the Variance of Reliability Estimate
It is important to quantify the uncertainty of the parameter estimates and of the resulting reliability estimate. Several statistical approaches may be applied, such as the Fisher information approach. The Fisher information matrix can be utilized to obtain the variances and covariances of the ML estimates of the model parameters, as well as the variance of any other reliability index of interest. For instance, the covariance matrix can be utilized to calculate the asymptotic variance of the reliability estimate using the δ method:

$$\mathrm{Var}(\hat{R}) = \left[\frac{\partial \hat{R}}{\partial a_0}, \frac{\partial \hat{R}}{\partial a_1}, \frac{\partial \hat{R}}{\partial \sigma}\right] \times \mathrm{Cov}(\hat{\theta}) \times \left[\frac{\partial \hat{R}}{\partial a_0}, \frac{\partial \hat{R}}{\partial a_1}, \frac{\partial \hat{R}}{\partial \sigma}\right]^T \tag{4.3}$$

In order to estimate the asymptotic variance of the reliability estimate, it is necessary to estimate Cov(θ̂), where θ = [a₀, a₁, σ²], as well as the values of [∂R̂/∂a₀, ∂R̂/∂a₁, ∂R̂/∂σ]. These steps are investigated in detail in the next sections.
4.3.1 Covariance of Model Parameter Estimates
From ML theory, the covariance matrix of the model parameter estimates can be evaluated as:

$$\mathrm{Cov}(\hat{\theta}) = F_0^{-1} = \begin{bmatrix} \mathrm{Var}(\hat{a}_0) & \mathrm{Cov}(\hat{a}_0, \hat{a}_1) & \mathrm{Cov}(\hat{a}_0, \hat{\sigma}) \\ & \mathrm{Var}(\hat{a}_1) & \mathrm{Cov}(\hat{a}_1, \hat{\sigma}) \\ \text{Symmetric} & & \mathrm{Var}(\hat{\sigma}) \end{bmatrix} \tag{4.4}$$
which is the inverse of the Fisher information matrix F₀, expressed as:

$$F_0 = \begin{bmatrix} E\left[-\frac{\partial^2 l}{\partial \hat{a}_0^2}\right] & E\left[-\frac{\partial^2 l}{\partial \hat{a}_0 \partial \hat{a}_1}\right] & E\left[-\frac{\partial^2 l}{\partial \hat{a}_0 \partial \hat{\sigma}}\right] \\ & E\left[-\frac{\partial^2 l}{\partial \hat{a}_1^2}\right] & E\left[-\frac{\partial^2 l}{\partial \hat{a}_1 \partial \hat{\sigma}}\right] \\ \text{Symmetric} & & E\left[-\frac{\partial^2 l}{\partial \hat{\sigma}^2}\right] \end{bmatrix} \tag{4.5}$$

evaluated at the MLEs θ̂ = [â₀, â₁, σ̂²], where E denotes the expectation over the independent random observations. The Fisher information matrix can be derived as follows.
After substituting β₁(Z) = a₀Z^{a₁} into equation (4.2), the log-likelihood function of the observed accelerated degradation data with the stress levels becomes:

$$l = -\sum_{k=1}^{m} \sum_{i=1}^{n_k} \sum_{j=1}^{m_{ki}-1} \left( \ln\sqrt{2\pi (t_{k,i,j+1} - t_{k,i,j})} + \ln\sigma + \frac{\left(D_{k,i,j+1} - D_{k,i,j} - a_0 z_k^{a_1} (t_{k,i,j+1} - t_{k,i,j})\right)^2}{2\sigma^2 (t_{k,i,j+1} - t_{k,i,j})} \right) \tag{4.6}$$
The entries of the Fisher information matrix are calculated by taking the second partial derivatives of the log-likelihood function with respect to â₀, â₁, and σ̂. For compactness, write Δt_{kij} = t_{k,i,j+1} − t_{k,i,j} and ΔD_{kij} = D_{k,i,j+1} − D_{k,i,j}, and note that E[ΔD_{kij}] = â₀Z_k^{â₁}Δt_{kij}.
For â₀:

$$\frac{\partial l}{\partial \hat{a}_0} = \sum_{k=1}^{m}\sum_{i=1}^{n_k}\sum_{j=1}^{m_{ki}-1} \frac{\left(\Delta D_{kij} - \hat{a}_0 Z_k^{\hat{a}_1}\Delta t_{kij}\right) Z_k^{\hat{a}_1}}{\hat{\sigma}^2}, \qquad \frac{\partial^2 l}{\partial \hat{a}_0^2} = -\sum_{k,i,j} \frac{Z_k^{2\hat{a}_1}\Delta t_{kij}}{\hat{\sigma}^2}$$

so that

$$E\left[-\frac{\partial^2 l}{\partial \hat{a}_0^2}\right] = \sum_{k=1}^{m}\sum_{i=1}^{n_k}\sum_{j=1}^{m_{ki}-1} \frac{Z_k^{2\hat{a}_1}\,\Delta t_{kij}}{\hat{\sigma}^2}.$$

For â₁:

$$\frac{\partial l}{\partial \hat{a}_1} = \sum_{k,i,j} \frac{\left(\Delta D_{kij} - \hat{a}_0 Z_k^{\hat{a}_1}\Delta t_{kij}\right) \hat{a}_0 Z_k^{\hat{a}_1}\ln Z_k}{\hat{\sigma}^2}, \qquad E\left[-\frac{\partial^2 l}{\partial \hat{a}_1^2}\right] = \sum_{k,i,j} \frac{\hat{a}_0^2 Z_k^{2\hat{a}_1}(\ln Z_k)^2\,\Delta t_{kij}}{\hat{\sigma}^2}.$$

For σ̂:

$$\frac{\partial l}{\partial \hat{\sigma}} = -\sum_{k,i,j}\left(\frac{1}{\hat{\sigma}} - \frac{\left(\Delta D_{kij} - \hat{a}_0 Z_k^{\hat{a}_1}\Delta t_{kij}\right)^2}{\hat{\sigma}^3\,\Delta t_{kij}}\right), \qquad \frac{\partial^2 l}{\partial \hat{\sigma}^2} = -\sum_{k,i,j}\left(-\frac{1}{\hat{\sigma}^2} + \frac{3\left(\Delta D_{kij} - \hat{a}_0 Z_k^{\hat{a}_1}\Delta t_{kij}\right)^2}{\hat{\sigma}^4\,\Delta t_{kij}}\right)$$

and, since E[(ΔD_{kij} − â₀Z_k^{â₁}Δt_{kij})²] = σ̂²Δt_{kij},

$$E\left[-\frac{\partial^2 l}{\partial \hat{\sigma}^2}\right] = \sum_{k=1}^{m}\sum_{i=1}^{n_k}\sum_{j=1}^{m_{ki}-1} \frac{2}{\hat{\sigma}^2}.$$

For the cross term â₀â₁:

$$E\left[-\frac{\partial^2 l}{\partial \hat{a}_0 \partial \hat{a}_1}\right] = \sum_{k,i,j} \frac{\hat{a}_0 Z_k^{2\hat{a}_1}\ln(Z_k)\,\Delta t_{kij}}{\hat{\sigma}^2}.$$

For the cross terms â₀σ̂ and â₁σ̂, the mixed second derivatives are linear in (ΔD_{kij} − â₀Z_k^{â₁}Δt_{kij}), whose expectation is zero; therefore

$$E\left[-\frac{\partial^2 l}{\partial \hat{a}_0 \partial \hat{\sigma}}\right] = 0, \qquad E\left[-\frac{\partial^2 l}{\partial \hat{a}_1 \partial \hat{\sigma}}\right] = 0.$$
After substituting all the entries into the Fisher information matrix, equation (4.5), F₀ can be expressed as:

$$F_0 = \begin{bmatrix} \sum\limits_{k,i,j} \dfrac{Z_k^{2\hat{a}_1}\Delta t_{kij}}{\hat{\sigma}^2} & \sum\limits_{k,i,j} \dfrac{\hat{a}_0 Z_k^{2\hat{a}_1}\ln(Z_k)\,\Delta t_{kij}}{\hat{\sigma}^2} & 0 \\ & \sum\limits_{k,i,j} \dfrac{\hat{a}_0^2 Z_k^{2\hat{a}_1}(\ln Z_k)^2\,\Delta t_{kij}}{\hat{\sigma}^2} & 0 \\ \text{Symmetric} & & \sum\limits_{k,i,j} \dfrac{2}{\hat{\sigma}^2} \end{bmatrix}$$

where each sum runs over k = 1…m, i = 1…n_k, and j = 1…m_{ki}−1. This expression is later substituted into equation (4.3) to estimate the asymptotic variance of the reliability estimate.
4.3.2 Estimation of the First Derivatives of Reliability Estimate
By substituting β₁(Z) = a₀Z^{a₁} into equation (4.1), the reliability function becomes:

$$R_Z(t) = \Phi\left(\frac{D_f - a_0 Z^{a_1} t}{\sigma\sqrt{t}}\right)$$

where Φ(·) is the CDF (cumulative distribution function) of the standard normal distribution. To estimate the asymptotic variance of the reliability estimate, the first derivatives of the reliability estimate must be evaluated.
Taking the first derivatives of the reliability estimate with respect to θ = [a₀, a₁, σ] yields:

$$\frac{\partial \hat{R}}{\partial \hat{a}_0} = \phi\left(\frac{D_f - \hat{a}_0 Z^{\hat{a}_1} t}{\hat{\sigma}\sqrt{t}}\right) \times \left(\frac{-Z^{\hat{a}_1} t}{\hat{\sigma}\sqrt{t}}\right) \tag{4.7}$$

$$\frac{\partial \hat{R}}{\partial \hat{a}_1} = \phi\left(\frac{D_f - \hat{a}_0 Z^{\hat{a}_1} t}{\hat{\sigma}\sqrt{t}}\right) \times \left(\frac{-\hat{a}_0 Z^{\hat{a}_1} (\ln Z)\, t}{\hat{\sigma}\sqrt{t}}\right) \tag{4.8}$$

$$\frac{\partial \hat{R}}{\partial \hat{\sigma}} = \phi\left(\frac{D_f - \hat{a}_0 Z^{\hat{a}_1} t}{\hat{\sigma}\sqrt{t}}\right) \times \left(\frac{\hat{a}_0 Z^{\hat{a}_1} t}{\hat{\sigma}^2 \sqrt{t}}\right) \tag{4.9}$$

where φ(·) is the p.d.f. (probability density function) of the standard normal distribution. By substituting the first derivatives in equations (4.7), (4.8), and (4.9), along with F₀, into equation (4.3), the asymptotic variance of the reliability estimate can be determined.
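The δ-method step in equation (4.3) reduces to a gradient-covariance-gradient product, as the sketch below shows; the helper is generic, with parameter order θ = [a₀, a₁, σ], and any numbers fed to it would come from the fitted model.

```python
import numpy as np
from scipy.stats import norm

# Sketch of equation (4.3): Var(R_hat) = g' Cov(theta) g, with g the
# gradient of R_hat from equations (4.7)-(4.9).
def grad_R(t, Z, a0, a1, sigma, Df):
    u = (Df - a0 * Z**a1 * t) / (sigma * np.sqrt(t))
    dens = norm.pdf(u)
    dR_da0 = dens * (-(Z**a1) * t / (sigma * np.sqrt(t)))        # eq. (4.7)
    dR_da1 = dens * (-a0 * Z**a1 * np.log(Z) * t
                     / (sigma * np.sqrt(t)))                     # eq. (4.8)
    dR_dsig = dens * (a0 * Z**a1 * t / (sigma**2 * np.sqrt(t)))  # eq. (4.9)
    return np.array([dR_da0, dR_da1, dR_dsig])

def var_R(t, Z, theta, cov, Df):
    g = grad_R(t, Z, *theta, Df)
    return g @ cov @ g
```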
4.4 Hypothesis Testing
In many industrial applications, it is essential to demonstrate that a reliability index of a product, such as its reliability, mean life, or failure rate, is better or worse than a specified value, especially when the maintenance schedule has already been determined. In this section, a method for comparing the reliability estimate of a component with a specified target value under a specific maintenance requirement is presented.

More specifically, a hypothesis test is used to examine whether the reliability estimate of the component is greater than or less than the required reliability for the predetermined maintenance interval T. The null hypothesis H₀ states that the product reliability at T meets the minimal requirement:

$$H_0: \hat{R}_{true}(T) \ge R_0(T)$$

and the alternative H₁ states that the null hypothesis is not true, i.e., the product reliability at T is less than the minimal requirement:

$$H_1: \hat{R}_{true}(T) < R_0(T).$$
By assuming that R̂_true(T) follows a lognormal distribution with parameters μ_{ln R̂} and Var(ln R̂), R̂_true(T) can be expressed as:

$$\hat{R}_{true}(T) = \exp\left(\mu_{\ln \hat{R}_{true}} + \frac{\mathrm{var}\left(\ln \hat{R}_{true}\right)}{2}\right)$$

and

$$\mathrm{var}\left(\hat{R}_{true}\right) = \left(\exp\left(\mathrm{var}\left(\ln \hat{R}_{true}\right)\right) - 1\right) \times \left(\hat{R}_{true}\right)^2$$

where R̂_true(T) and Var(R̂_true(T)) are obtained as in the sections above using the ML approach. Therefore,

$$\mathrm{var}\left(\ln \hat{R}_{true}\right) = \ln\left[\frac{\mathrm{var}\left(\hat{R}_{true}\right)}{\hat{R}_{true}^2} + 1\right] \tag{4.10}$$

$$\mu_{\ln \hat{R}_{true}} = \ln \hat{R}_{true} - \frac{\mathrm{var}\left(\ln \hat{R}_{true}\right)}{2} \tag{4.11}$$

The hypothesis test can then be conducted by computing the z value and using normal distribution tables to determine the significance level α for rejecting or accepting the null hypothesis. Specifically, the z value is obtained from:

$$z = \frac{\ln \hat{R}_{true}(T) - \ln R_0(T)}{\sqrt{\mathrm{var}\left(\ln \hat{R}_{true}(T)\right)}} \tag{4.12}$$
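A minimal sketch of this test is shown below, assuming R_hat and its variance come from the ML fit and δ method above; the numeric inputs are placeholders, and with this form of (4.12) a large negative z (small p) favors H₁.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the one-sided test in Section 4.4 via eqs. (4.10)-(4.12).
def one_sided_test(R_hat, var_R_hat, R0):
    var_ln = np.log(var_R_hat / R_hat**2 + 1.0)    # eq. (4.10)
    mu_ln = np.log(R_hat) - var_ln / 2.0           # eq. (4.11)
    z = (mu_ln - np.log(R0)) / np.sqrt(var_ln)     # eq. (4.12)
    p = norm.cdf(z)        # small p supports H1: R_true(T) < R0(T)
    return z, p

z, p = one_sided_test(R_hat=0.83, var_R_hat=0.017, R0=0.63)
```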
4.5 Confidence Intervals of Reliability Function
In many applications, a point estimate of product reliability is not enough; it is important to calculate confidence intervals for the reliability. The confidence interval measures the uncertainty associated with the reliability estimate, and it can serve as the objective function in the optimal design of testing plans to obtain the optimal settings of the testing parameters. Assuming the reliability estimate follows a lognormal distribution, the confidence interval can be expressed as:

$$R(t; Z_0) \in \left[\frac{\hat{R}(t; Z_0)/w}{1 - \hat{R}(t; Z_0) + \hat{R}(t; Z_0)/w},\; \frac{\hat{R}(t; Z_0)\, w}{1 - \hat{R}(t; Z_0) + \hat{R}(t; Z_0)\, w}\right] \tag{4.13}$$

where

$$w = \exp\left(\frac{z_{1-\alpha/2}\, \sqrt{\mathrm{Var}\left(\hat{R}(t, Z_0)\right)}}{\hat{R}(t, Z_0)\left(1 - \hat{R}(t, Z_0)\right)}\right) \tag{4.14}$$

4.6 Minimization of Variance of Reliability Estimate

The accuracy of a reliability estimate based on an ADT model depends on the settings of the ADT experiment. The optimal design of an ADT plan consists of an objective function, several constraints, and experimental parameters such as the stress levels, the sample allocation ratio at each stress level, the inspection frequency, and the test termination time. The mathematical formulation can be expressed as:

$$\min \; \mathrm{Var}\left(\hat{R}_{true}(T; Z_1, Z_2, T_1, T_2)\right)$$

subject to:

$$Z_0 < Z_1 < Z_2 < Z_H$$
$$T_1, T_2 < T_{max}$$

where Var(R̂_true), which may be obtained through equation (4.3), is a function of the maintenance requirement T as well as of the decision variables of the ADT experiment.

This is a nonlinear optimization problem that can be solved numerically. In this dissertation, an optimization program based on the Excel solver is utilized, as sketched below. By determining the optimal settings of the ADT experiment, the variance of the reliability estimate is minimized, which also directly tightens the confidence intervals of the reliability function described above.
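The sketch below mirrors that solver setup in Python; the mapping from a plan (Z₁, Z₂, T₁, T₂) to Var(R̂_true(T)) would come from the Fisher matrix of Section 4.3, so a toy surrogate with the same qualitative behavior stands in for it here, and all bounds are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the Section 4.6 test-plan optimization under the stated
# constraints. var_R_of_plan is a hypothetical surrogate objective.
Z0, ZH, Tmax = 1.0, 5.0, 1000.0

def var_R_of_plan(x):
    Z1, Z2, T1, T2 = x
    # toy surrogate: more stress separation and longer tests -> less variance
    return 1.0 / (1e-3 + (Z2 - Z1) ** 2 * (T1 + T2))

cons = [
    {"type": "ineq", "fun": lambda x: x[0] - Z0},      # Z0 < Z1
    {"type": "ineq", "fun": lambda x: x[1] - x[0]},    # Z1 < Z2
    {"type": "ineq", "fun": lambda x: ZH - x[1]},      # Z2 < ZH
    {"type": "ineq", "fun": lambda x: Tmax - x[2]},    # T1 < Tmax
    {"type": "ineq", "fun": lambda x: Tmax - x[3]},    # T2 < Tmax
]
res = minimize(var_R_of_plan, x0=[2.0, 3.0, 500.0, 500.0],
               constraints=cons, method="SLSQP")
```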
4.7 Minimization of Variance of the Overall Loss
Generally, each product or process performance characteristic has a target or nominal value (e.g., the required reliability). Therefore, one widely used objective in the optimal design of testing plans is to reduce the variability around that target value. Beyond this objective, it is also important to estimate the loss caused by reliability estimation error, especially when maintenance requirements are considered. Figure 4.2 illustrates that the loss function can be calculated for each reliability estimate at each maintenance time interval.

[Figure 4.2: Illustration of the Loss Function over Time. The reliability estimate over time is shown together with a normal distribution around the estimate and the associated loss.]
Specifically, the expected total loss for each required maintenance interval can be calculated as:

$$E(loss) = \int_0^1 Loss\left(\hat{R}(T)\right) \times \phi\left(\hat{R}(T), \mathrm{Var}\left(\hat{R}(T)\right)\right) d\hat{R}(T) \tag{4.15}$$

where Loss(R̂(T)) is the loss function specified by the manufacturer, φ(·) is the p.d.f. of the standard normal distribution, and

$$\phi\left(\hat{R}(T), \mathrm{Var}\left(\hat{R}(T)\right)\right) = \phi\left(\frac{R_0 - \hat{R}(T)}{\sqrt{\mathrm{Var}\,\hat{R}(T)}}\right).$$
Similarly, the variance of the total loss for each required maintenance interval can be calculated as:

$$\mathrm{Var}(loss) = \int_0^1 \left( Loss\left(\hat{R}(T)\right) - E[loss] \right)^2 \times \phi\left(\hat{R}(T), \mathrm{Var}\left(\hat{R}(T)\right)\right) d\hat{R}(T) \tag{4.16}$$

In this dissertation, the expectation and variance of the total loss are calculated numerically using Excel.
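A Python sketch of the same grid-based computation is given below. The asymmetric quadratic loss (penalizing over-performance relative to the target differently from under-performance, in the spirit of the discussion that follows) and all parameter values are illustrative assumptions, and the weight follows (4.15) as written, without a normalizing constant.

```python
import numpy as np
from scipy.stats import norm

# Sketch of evaluating (4.15) and (4.16) on a grid over R_hat(T).
def loss_moments(R0, R_hat_var, c_over=4.0, c_under=1.0, n=2001):
    r = np.linspace(0.0, 1.0, n)
    dr = r[1] - r[0]
    weight = norm.pdf((R0 - r) / np.sqrt(R_hat_var))    # phi(.) per (4.15)
    loss = np.where(r > R0, c_over, c_under) * (r - R0) ** 2
    e_loss = np.sum(loss * weight) * dr                 # eq. (4.15)
    v_loss = np.sum((loss - e_loss) ** 2 * weight) * dr # eq. (4.16)
    return e_loss, v_loss

e_loss, v_loss = loss_moments(R0=0.9, R_hat_var=0.02)
```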
To improve the estimation accuracy of the total loss, the optimal design of testing plans can be performed so as to minimize the variance of the total loss. To do this, it is assumed that the target value is the reliability at the maintenance requirement, and that the loss increases as the product's reliability deviates from that requirement (i.e., over-design or under-design). If a product performs reliably beyond the requirement, it is a loss to the manufacturer; otherwise, it is a loss to the customer, and ultimately to the manufacturer as well, in the form of either losing customers or paying unexpectedly high warranty costs.

One of the practical challenges in evaluating the reliability loss is selecting an appropriate loss function, since the expected loss may vary as the maintenance requirement changes. As discussed in Chapter 2, a symmetric loss function may not be realistic for estimating the reliability loss, because underestimating the reliability of a product is not as costly as overestimating it.
Moreover, in many industrial applications, different maintenance requirements for the same product may be imposed by different customers. Therefore, instead of reducing the variance of the estimated loss for each maintenance requirement separately, multiple requirements need to be considered simultaneously. In other words, a tradeoff must be made when improving the accuracy of the estimated total losses for all customers. This can be done through the optimal design of testing plans that minimizes the variance of the estimated overall reliability loss.
In order to improve the accuracy of the estimated overall loss, the optimization problem that minimizes the variance of the overall loss based on the degradation tests is investigated. The general mathematical formulation can be expressed as:

$$\min \; \sum_{i=1}^{k} \omega_i \, \mathrm{Var}\left(loss;\, Z_1, Z_2, T_1, T_2, R_0(T_i)\right)$$

subject to:

$$Z_0 < Z_1 < Z_2 < Z_H$$
$$T_1, T_2 < T_{max}$$

where R₀(Tᵢ) is the required reliability at the maintenance time Tᵢ, k is the number of requirements, and ωᵢ is the relative weight of the i-th requirement. The objective is to minimize the weighted sum of the variance estimates over all the maintenance requirements simultaneously, rather than minimizing the variance estimate for each maintenance requirement separately. When k = 1, a single maintenance requirement is considered. This is a nonlinear optimization problem that can be solved using a numerical method, as illustrated below.
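The weighted objective can be assembled as in the sketch below; the requirement list, weights, and the Var(loss) surrogate are hypothetical placeholders.

```python
# Sketch of the weighted multi-requirement objective.
requirements = [(0.95, 0.6), (0.90, 0.4)]   # hypothetical (R0(T_i), w_i)

def var_loss_of_plan(x, R0):                # toy surrogate for Var(loss)
    Z1, Z2, T1, T2 = x
    return R0 / (1e-3 + (Z2 - Z1) ** 2 * (T1 + T2))

def weighted_objective(x):                  # sum_i w_i * Var(loss; ..., R0(T_i))
    return sum(w * var_loss_of_plan(x, R0) for R0, w in requirements)

# weighted_objective can be passed to scipy.optimize.minimize with the
# same stress/duration constraints used in the Section 4.6 sketch.
```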
4.8 Summary and Discussion
In this chapter, it was shown that maintenance requirements can be used to drive the optimal design of accelerated degradation testing plans under a variety of constraints. The Wiener process was used to model the degradation process of the component, and the reliability of the component at a specific maintenance requirement T, i.e., R̂(T), was estimated analytically instead of by simulation. The variances of the reliability estimate were derived using the Fisher information matrix approach. Afterwards, a hypothesis test was formulated to check whether the reliability of the product exceeds the reliability requirement before the scheduled maintenance. Three types of optimal designs of testing plans were investigated. Numerical examples demonstrating all the modeling approaches described in this chapter are presented in Chapter 5.
CHAPTER 5
APPLICATIONS OF DEGRADATION MODELING
The modeling approaches developed in Chapter 4 are as follows:
1) Estimating the parameters of the degradation model using the Wiener process model
2) Determining the variance of the reliability estimate
3) Performing a hypothesis testing procedure
4) Determining the confidence intervals of the reliability estimates
5) Optimizing test plans to minimize the variance of the reliability estimate
6) Optimizing test plans to minimize the variance of the estimated overall loss
7) Optimizing test plans to minimize the weighted sum of variances of the estimated overall loss

In this chapter, numerical examples using ADT data are presented to illustrate each of these approaches.
5.1 MLEs of Model Parameters from Degradation Data – Validation Procedure
In this example, a data set of LED light intensity degradation, collected from an experiment given in Eghbali (2000), is used to demonstrate how to obtain the parameter estimates. In the experiment, electric current is the accelerating stress, applied at two levels (I₁ = 40 mA and I₂ = 35 mA). Nine LED units are tested under the I₁ = 40 mA stress level, and ten LED units are tested under the I₂ = 35 mA stress level. The degradation of light intensity, in lux (lumen/m²), observed in the ADT experiment is transformed by taking the negative natural logarithm and by shifting the initial degradation measurements of all test units to a common initial degradation value x₀ = −5.0; the degradation threshold is D_f = −4.3. The transformed degradation data are presented in Table 5.1 (a and b). To model the degradation process, the Wiener process model introduced in Chapter 4 is utilized.

Before the reliability function and other reliability indices can be estimated, the model parameters of the degradation process must be estimated. The maximum likelihood approach given in Chapter 4 is utilized in this section to estimate the parameters θ = [a₀, a₁, σ²] of the degradation process.
Table 5.1 (a): LED Light Intensity Degradation Data at 40 mA

Unit #   50 hrs    100 hrs   150 hrs   200 hrs   250 hrs
1        -4.6995   -4.4568   -4.3583   -4.1734   -3.99
2        -4.5853   -4.1105   -3.7381   -3.5268   -3.3326
3        -4.4918   -4.0063   -3.6119   -3.4022   -3.2968
4        -4.566    -4.1605   -3.8304   -3.5544   -3.1773
5        -4.32     -3.912    -3.65     -3.297    -2.4853
6        -4.6152   -4.2759   -3.9528   -3.6652   -3.5268
7        -4.6886   -3.9686   -3.6382   -3.4022   -3.3668
8        -4.2336   -3.8077   -3.4673   -3.2189   -3.1773
9        -4.2759   -3.8491   -3.2571   -2.9957   -1.9456
Table 5.1 (b): LED Light Intensity Degradation Data at 35 mA

Unit #   50 hrs    100 hrs   150 hrs   200 hrs   250 hrs
1        -4.7444   -4.5756   -4.4654   -4.2904   -4.2336
2        -4.6777   -4.4654   -4.2759   -4.1414   -4.0924
3        -4.6994   -4.4397   -4.32     -4.1605   -4.023
4        -4.6052   -4.3982   -4.0804   -3.7593   -3.6652
5        -4.6565   -4.3583   -3.9686   -3.4022   -2.9957
6        -4.9337   -4.6671   -4.4918   -4.1605   -4.0924
7        -4.4654   -4.0924   -3.8922   -3.612    -3.5268
8        -4.6254   -4.2064   -3.9528   -3.5544   -3.4641
9        -4.4397   -4.0286   -3.8077   -3.4022   -3.3326
10       -4.382    -3.99     -3.6889   -3.2571   -3.0922
To obtain the maximum likelihood estimates of the model parameters, a numerical method is applied to maximize the log-likelihood function in equation (4.2). In the Wiener process model discussed in Chapter 4, the drift β₁(Z) depends on the stress; for instance, if β₁ = a₀Z^{a₁}, the MLE of β₁(Z) is obtained by substituting the MLEs of a₀ and a₁. Similarly, the reliability function at a given stress level, or at regular operating conditions, is obtained by substituting the given stress level and the parameter estimates in Table 5.2 into the expression for the reliability function in equation (4.1). By the invariance property of the MLE, this result is the MLE of the reliability function.
Table 5.2: Maximum Likelihood Estimates of the Model Parameters

â₀         â₁     σ̂²         log-likelihood
3.82E-06   2.02   0.000562   246.12526
5.2 Variance of Reliability Estimate – Numerical Example
In order to obtain the variances and covariances of the maximum likelihood estimates, the Fisher information matrix is calculated. The covariance matrix is utilized to estimate the asymptotic variance of the reliability estimate, as derived in equation (4.3). To estimate this asymptotic variance using the δ method, Cov(θ̂) and [∂R̂/∂a₀, ∂R̂/∂a₁, ∂R̂/∂σ] are estimated independently, where θ = [a₀, a₁, σ²].
5.2.1 Estimation of Covariance of Model Parameters
From ML theory, the covariance matrix of the model parameter estimates was derived in equation (4.4) as the inverse of the Fisher information matrix F₀, which was expressed in equation (4.5) based on the likelihood function; it is evaluated at the MLEs θ̂ = [â₀, â₁, σ̂²], with E denoting the expectation over the independent random observations. The entries of the Fisher information matrix were calculated by taking the second derivatives of equation (4.6) with respect to â₀, â₁, σ̂, â₀â₁, â₀σ̂, and â₁σ̂, with the following results:
$$E\left[-\frac{\partial^2 l}{\partial \hat{a}_0^2}\right] = 1.55542\times 10^{13}, \qquad E\left[-\frac{\partial^2 l}{\partial \hat{a}_1^2}\right] = 2998.7415, \qquad E\left[-\frac{\partial^2 l}{\partial \hat{\sigma}^2}\right] = 270684.28,$$

$$E\left[-\frac{\partial^2 l}{\partial \hat{a}_0 \partial \hat{a}_1}\right] = 215935529.8, \qquad E\left[-\frac{\partial^2 l}{\partial \hat{a}_0 \partial \hat{\sigma}}\right] = E\left[-\frac{\partial^2 l}{\partial \hat{a}_1 \partial \hat{\sigma}}\right] = 0.$$
After substituting all the expectations obtained above into the Fisher information matrix, equation (4.5), F₀ can be expressed as:

$$F_0 = \begin{bmatrix} 1.55542\times 10^{13} & 215935529.8 & 0 \\ 215935529.8 & 2998.7415 & 0 \\ 0 & 0 & 270684.28 \end{bmatrix}$$

and the corresponding covariance matrix of the model parameter estimates, the inverse of F₀, is:

$$\mathrm{Cov}(\hat{\theta}) = F_0^{-1} = \begin{bmatrix} 1.99893\times 10^{-10} & -1.43940\times 10^{-5} & 0 \\ -1.43940\times 10^{-5} & 1.036828499 & 0 \\ 0 & 0 & 3.7\times 10^{-6} \end{bmatrix}$$

This Cov(θ̂) is later substituted into equation (4.3) to estimate the asymptotic variance of the reliability estimate, as demonstrated in the following section.
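As a quick numerical check of the inversion step, the printed F₀ can be inverted directly; small discrepancies from the printed covariance reflect rounding of the Fisher entries shown above.

```python
import numpy as np

# Invert the reported Fisher matrix and compare with the stated covariance.
F0 = np.array([[1.55542e13, 215935529.8, 0.0],
               [215935529.8, 2998.7415,  0.0],
               [0.0,         0.0,        270684.28]])
cov = np.linalg.inv(F0)
print(cov)   # diagonal: Var(a0_hat), Var(a1_hat), Var(sigma_hat)
```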
5.2.2 Estimation of the First Derivatives of Reliability Estimate
The first partial derivatives ∂R̂/∂â₀, ∂R̂/∂â₁, and ∂R̂/∂σ̂ were derived in Chapter 4. In order to calculate their values at different points in time, the reliability estimates must be calculated first. The reliability estimates under the normal operating condition Z₀ = 10 mA are obtained by evaluating the CDF at (D_f − â₀Z^{â₁}t)/(σ̂√t) for the given failure threshold D_f = −4.3. The resulting reliability estimates at different points in time are illustrated in Figure 5.1 as well as in Table 5.3.
[Figure 5.1: Reliability Estimates over Time. The reliability estimate (y-axis, 0.0–1.0) declines over time from 100 to 13,000 hrs (x-axis).]
The reliability estimates obtained above, together with the parameter estimates in Table 5.2, are used in equations (4.7), (4.8), and (4.9) to compute the values of ∂R̂/∂â₀, ∂R̂/∂â₁, and ∂R̂/∂σ̂ at different points in time. These values are shown in Table 5.3.
Table 5.3: Reliability Estimates, ∂R̂/∂â₀, ∂R̂/∂â₁, and ∂R̂/∂σ̂ over Time, with u = (D_f − â₀Z^{â₁}t)/(σ̂√t)

T (hrs)   u       Reliability Estimate   ∂R̂/∂â₀       ∂R̂/∂â₁   ∂R̂/∂σ̂
100       2.79    0.9973                 -363.17       0.00     0.06
500       0.94    0.8276                 -25,168.44    -0.22    4.05
1,000     0.40    0.6561                 -51,298.15    -0.45    8.26
1,500     0.11    0.5442                 -67,694.88    -0.60    10.91
2,000     -0.09   0.4633                 -78,316.49    -0.69    12.62
2,500     -0.25   0.4010                 -85,212.65    -0.75    13.73
3,000     -0.38   0.3511                 -89,531.21    -0.79    14.42
3,500     -0.50   0.3098                 -91,985.29    -0.81    14.82
4,000     -0.60   0.2751                 -93,052.74    -0.82    14.99
4,500     -0.69   0.2455                 -93,071.44    -0.82    14.99
5,000     -0.77   0.2200                 -92,289.75    -0.81    14.87
5,500     -0.85   0.1977                 -90,895.52    -0.80    14.64
6,000     -0.92   0.1782                 -89,033.88    -0.78    14.34
6,500     -0.99   0.1610                 -86,818.68    -0.76    13.99
7,000     -1.06   0.1457                 -84,340.33    -0.74    13.59
7,500     -1.12   0.1321                 -81,671.23    -0.72    13.16
8,000     -1.18   0.1200                 -78,869.67    -0.69    12.71
8,500     -1.23   0.1091                 -75,982.81    -0.67    12.24
9,000     -1.29   0.0994                 -73,048.81    -0.64    11.77
9,500     -1.34   0.0906                 -70,098.60    -0.62    11.29
10,000    -1.39   0.0826                 -67,157.17    -0.59    10.82
10,500    -1.44   0.0755                 -64,244.64    -0.56    10.35
11,000    -1.48   0.0690                 -61,377.11    -0.54    9.89
11,500    -1.53   0.0631                 -58,567.34    -0.51    9.44
12,000    -1.57   0.0577                 -55,825.33    -0.49    8.99
12,500    -1.62   0.0529                 -53,158.76    -0.47    8.56
13,000    -1.66   0.0485                 -50,573.40    -0.44    8.15

The calculated values of ∂R̂/∂â₀, ∂R̂/∂â₁, and ∂R̂/∂σ̂, along with the Cov(θ̂) obtained in Section 5.2.1, are substituted into equation (4.3) to determine the asymptotic variance of the reliability estimate. Table 5.4 gives the asymptotic variance of the reliability estimate over time, computed using the δ method per equation (4.3).
The variances of the reliability estimate calculated in Table 5.4 are illustrated in Figure 5.2. It can be noted that the variance of the reliability estimate increases with time up to a certain point and then begins to decrease.
Table 5.4: Variances of Reliability Estimates over Time, with u = (D_f − â₀Z^{â₁}t)/(σ̂√t)

T (hrs)   u       Non-Optimal Variance of Reliability Estimate
100       2.79    3.56694E-06
500       0.94    0.017131156
1,000     0.40    0.0711668
1,500     0.11    0.123932643
2,000     -0.09   0.165874841
2,500     -0.25   0.196373204
3,000     -0.38   0.21678188
3,500     -0.50   0.228828905
4,000     -0.60   0.234170642
4,500     -0.69   0.234264739
5,000     -0.77   0.230346174
5,500     -0.85   0.223439064
6,000     -0.92   0.214380213
6,500     -0.99   0.203845191
7,000     -1.06   0.192373276
7,500     -1.12   0.180389931
8,000     -1.18   0.168226416
8,500     -1.23   0.15613662
9,000     -1.29   0.144311317
9,500     -1.34   0.132890149
10,000    -1.39   0.121971642
10,500    -1.44   0.111621511
11,000    -1.48   0.101879536
11,500    -1.53   0.09276521
12,000    -1.57   0.084282369
12,500    -1.62   0.076422959
13,000    -1.66   0.069170094
[Figure 5.2: Non-Optimal Variances of Reliability Estimates over Time. The variance of the reliability estimate (y-axis, 0–0.25) rises to a peak near 4,500 hrs and then declines over 0–13,000 hrs (x-axis).]
5.3 Hypothesis Testing - Numerical Example
In this section, the reliability estimates obtained in Table 5.3 are compared with a specified target value, where the target value is assumed to be driven by the maintenance requirement (the reliability value for a given maintenance interval). As described in Chapter 4, the null hypothesis is H₀: R̂_true(T) ≥ R₀(T), and the alternative is H₁: R̂_true(T) < R₀(T).

In Chapter 4, R̂_true(T) was assumed to follow a lognormal distribution with parameters μ_{ln R̂} and Var(ln R̂). The mean μ_{ln R̂} and the variance Var(ln R̂) of the ML reliability estimate are obtained using equations (4.10) and (4.11); their values are shown in Table 5.5.
Table 5.5: The Mean and the Variance of the Reliability Estimate over Time

T (hrs)   Reliability   ln of Reliability   Reliability   Variance of ln        Mean of ln
          Requirement   Requirement         Estimate      Reliability Estimate  Reliability Estimate
100       0.99          -0.0101             0.9973        0.0000                -0.0027
500       0.63          -0.4620             0.8276        0.0247                -0.2016
1,000     0.3           -1.2040             0.6561        0.1530                -0.4979
1,500     0.17          -1.7720             0.5442        0.3496                -0.7833
2,000     0.1           -2.3026             0.4633        0.5725                -1.0556
2,500     0.06          -2.8134             0.4010        0.7980                -1.3127
3,000     0.04          -3.2189             0.3511        1.0149                -1.5543
3,500     0.025         -3.6889             0.3098        1.2190                -1.7813
4,000     0.015         -4.1997             0.2751        1.4094                -1.9952
4,500     0.013         -4.3428             0.2455        1.5864                -2.1975
5,000     0.01          -4.6052             0.2200        1.7510                -2.3898
5,500     0.005         -5.2983             0.1977        1.9045                -2.5732
6,000     0.005         -5.2983             0.1782        2.0479                -2.7488
6,500     0.004         -5.5215             0.1610        2.1823                -2.9177
7,000     0.003         -5.8091             0.1457        2.3086                -3.0805
7,500     0.002         -6.2146             0.1321        2.4278                -3.2379
8,000     0.001         -6.9078             0.1200        2.5404                -3.3905
8,500     0.001         -6.9078             0.1091        2.6471                -3.5388
9,000     0.001         -6.9078             0.0994        2.7485                -3.6833
9,500     0.001         -6.9078             0.0906        2.8450                -3.8242
10,000    0.001         -6.9078             0.0826        2.9372                -3.9619
10,500    0.0005        -7.6009             0.0755        3.0253                -4.0967
11,000    0.0005        -7.6009             0.0690        3.1097                -4.2289
11,500    0.0005        -7.6009             0.0631        3.1907                -4.3585
12,000    0.0005        -7.6009             0.0577        3.2685                -4.4859
12,500    0.0001        -9.2103             0.0529        3.3434                -4.6111
13,000    0.0001        -9.2103             0.0485        3.4156                -4.7345
The hypothesis testing is conducted by obtaining the z value for each time interval using equation (4.12), incorporating the mean and the variance of the true reliability obtained above. After the z values are calculated, the standard normal distribution tables are utilized to determine the significance level α, the probability that the true reliability falls below the requirement. Each α is then compared against a 5% threshold, i.e., a 95% confidence benchmark, to decide whether the null hypothesis can be rejected in favor of the alternative.
Table 5.6 illustrates the results of this methodology.
Table 5.6: Significance Levels Calculated over Time

T (hrs)    Reliability Requirement    Reliability Estimate    z          Significance Level α
100        0.99      0.9973    -3.8941    0.0000
500        0.63      0.8276    -1.6571    0.0487
1,000      0.3       0.6561    -1.8052    0.0355
1,500      0.17      0.5442    -1.6719    0.0473
2,000      0.1       0.4633    -1.6479    0.0497
2,500      0.06      0.4010    -1.6799    0.0465
3,000      0.04      0.3511    -1.6524    0.0492
3,500      0.025     0.3098    -1.7278    0.0420
4,000      0.015     0.2751    -1.8570    0.0317
4,500      0.013     0.2455    -1.7033    0.0443
5,000      0.01      0.2200    -1.6742    0.0470
5,500      0.005     0.1977    -1.9747    0.0242
6,000      0.005     0.1782    -1.7816    0.0374
6,500      0.004     0.1610    -1.7626    0.0390
7,000      0.003     0.1457    -1.7959    0.0363
7,500      0.002     0.1321    -1.9105    0.0280
8,000      0.001     0.1200    -2.2068    0.0137
8,500      0.001     0.1091    -2.0706    0.0192
9,000      0.001     0.0994    -1.9450    0.0259
9,500      0.001     0.0906    -1.8281    0.0338
10,000     0.001     0.0826    -1.7189    0.0428
10,500     0.0005    0.0755    -2.0147    0.0220
11,000     0.0005    0.0690    -1.9122    0.0279
11,500     0.0005    0.0631    -1.8152    0.0347
12,000     0.0005    0.0577    -1.7230    0.0424
12,500     0.0001    0.0529    -2.5153    0.0059
13,000     0.0001    0.0485    -2.4219    0.0077
It can be noted that, at each time interval and using the 95% confidence benchmark, the null hypothesis cannot be rejected. This means that the reliability estimate is significantly higher than the required reliability at each time interval.
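The test for a single time interval can be reproduced with a short sketch. The standardized-difference form below is an assumption about equation (4.12), chosen because it matches the tabulated values; the inputs for T = 500 hrs come from Tables 5.5 and 5.6.

    # z-test of H0: R_true(T) >= R0(T) under the lognormal model for R_hat.
    import math
    from scipy.stats import norm

    R0 = 0.63            # reliability requirement at T = 500 hrs
    mu_lnR = -0.2016     # mean of ln R_hat (Table 5.5)
    var_lnR = 0.0247     # variance of ln R_hat (Table 5.5)

    z = (math.log(R0) - mu_lnR) / math.sqrt(var_lnR)
    alpha = norm.cdf(z)  # probability the true reliability falls below R0

    print(z, alpha)      # roughly -1.657 and 0.0487, matching Table 5.6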
5.4 Confidence Intervals of Reliability Function
The next step is to calculate the confidence intervals of reliability estimates to measure
the uncertainty associated with the reliability estimate.
The reliability requirement at the
required maintenance interval is assumed to be a constant, i.e., R0 (T ) = 0.99. As discussed in
Chapter 4, the reliability estimate is assumed to follow a lognormal distribution and the
confidence intervals of reliability estimates are calculated using equations (4.13 and 4.14). The
results of the upper and the lower confidence intervals over time are illustrated in Table 5.7.
Table 5.7: Upper and Lower Confidence Intervals over Time (Pilot Testing Plan)

T (hrs)    Reliability Estimate    w             Lower Confidence    Upper Confidence
100        0.9973    3.2100        0.9915    0.9992
500        0.8276    4.5219        0.5149    0.9560
1,000      0.6561    6.9924        0.2144    0.9303
1,500      0.5442    10.3235       0.1036    0.9249
2,000      0.4633    14.7930       0.0551    0.9274
2,500      0.4010    20.7905       0.0312    0.9330
3,000      0.3511    28.8376       0.0184    0.9398
3,500      0.3098    39.6324       0.0112    0.9468
4,000      0.2751    54.1105       0.0070    0.9536
4,500      0.2455    73.5274       0.0044    0.9599
5,000      0.2200    99.5697       0.0028    0.9656
5,500      0.1977    134.5053      0.0018    0.9707
6,000      0.1782    181.3868      0.0012    0.9752
6,500      0.1610    244.3264      0.0008    0.9791
7,000      0.1457    328.8683      0.0005    0.9825
7,500      0.1321    442.4945      0.0003    0.9854
8,000      0.1200    595.3100      0.0002    0.9878
8,500      0.1091    800.9740      0.0002    0.9899
9,000      0.0994    1077.9642     0.0001    0.9917
9,500      0.0906    1451.2971     0.0001    0.9931
10,000     0.0826    1954.8676     0.0000    0.9944
10,500     0.0755    2634.6335     0.0000    0.9954
11,000     0.0690    3552.9541     0.0000    0.9962
11,500     0.0631    4794.5023     0.0000    0.9969
12,000     0.0577    6474.3261     0.0000    0.9975
12,500     0.0529    8748.8467     0.0000    0.9980
13,000     0.0485    11830.8692    0.0000    0.9983
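The bounds in Table 5.7 can be reproduced numerically. Equations (4.13) and (4.14) are not restated here; the logit-style interval below is an assumed form that reproduces the tabulated w and bounds, shown for T = 500 hrs.

    # Confidence bounds for the reliability estimate; the interval form and
    # the normal quantile z = 1.645 are assumptions that match Table 5.7.
    import math

    R = 0.8276           # reliability estimate at T = 500 hrs
    var_R = 0.017131     # variance of the estimate (Table 5.4)
    z = 1.645            # assumed standard normal quantile

    w = math.exp(z * math.sqrt(var_R) / (R * (1.0 - R)))
    lower = R / (R + (1.0 - R) * w)   # about 0.515
    upper = R / (R + (1.0 - R) / w)   # about 0.956

    print(w, lower, upper)            # w is about 4.52, matching Table 5.7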
The results of the reliability estimates that were shown in Figure 5.1 along with
confidence intervals of the reliability estimate that were calculated at different points in time per
Table 5.7 are illustrated in Figure 5.3.
[Figure: the reliability estimate with lower and upper confidence bounds plotted against time (hrs); series: Reli Estimate, Lower Confidence, Upper Confidence.]
Figure 5.3: Reliability Estimates and Confidence Intervals over Time (Pilot Testing Plan)
It can be noted that the upper and lower confidence intervals appear to be very wide, which leads to the conclusion that the pilot testing plan can be optimized further to come up with tighter confidence intervals.
5.5 Minimization of Variance of Reliability Estimate – Numerical Example
The accuracy of the reliability prediction based on an ADT model depends on a well-designed ADT test plan, and it is very clear from Figure 5.3 that there appears to be an opportunity to optimize the design of the testing plan. This optimization can be done by minimizing the variance around the reliability estimate. The mathematical formulation that was introduced in Chapter 4 is expressed as:
Min Var(R̂true(T; Z1, Z2, T1, T2))

Subject to:
Z0 < Z1 < Z2 < ZH
T1, T2 < Tmax

where Var(R̂true), which may be obtained through equation (4.3), is a function of the maintenance requirement T as well as the decision variables of the ADT experiment. In order to minimize the variance around the reliability estimate, the maximum likelihood estimation approach is utilized and a numerical method using Microsoft Excel is applied to find the optimal stress levels for the test design.
It is essential to recall that in the LED light intensity degradation experiment, the current is the accelerating stress at two levels (I1 = 40mA) and (I2 = 35mA). By using the maximum likelihood estimation approach to minimize the variance of the reliability estimate, the current at two levels (I1 = 40mA) and (I2 = 35mA) turned out not to be the optimal setting. The analysis concluded that the accelerating stresses of (I1 = 45mA) and (I2 = 29.219mA) are the optimal settings that minimize the variance around the reliability estimate.
These optimal settings caused a substantial reduction in the variance of the reliability estimate, as can be seen from Table 5.8 and Figure 5.4, where the calculated variances using the optimal and the non optimal stress levels are illustrated.
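The stress-level search can be carried out with any constrained numerical optimizer; the dissertation used Excel. The sketch below uses SciPy with a toy convex surrogate in place of the equation (4.3) variance; the surrogate is constructed so its minimizer matches the reported optimum, purely for illustration.

    # Minimize a stand-in for Var(R_hat(T; I1, I2)) over the two currents.
    import numpy as np
    from scipy.optimize import minimize

    Z0, ZH = 20.0, 50.0                  # hypothetical admissible range (mA)

    def var_R(z):
        # Toy surrogate; replace with the model-based asymptotic variance.
        i1, i2 = z
        return (i1 - 45.0) ** 2 + (i2 - 29.219) ** 2 + 1.0

    res = minimize(
        var_R,
        x0=np.array([40.0, 35.0]),       # pilot-plan currents
        method="SLSQP",
        bounds=[(Z0, ZH), (Z0, ZH)],
        # keep the higher stress strictly above the lower one
        constraints=[{"type": "ineq", "fun": lambda z: z[0] - z[1]}],
    )
    print(res.x)                         # about [45.0, 29.219]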
Table 5.8: Optimal and Non Optimal Variances of Reliability Estimates over Time

T (hrs)    Non Optimal Variance of Reliability Estimate    Optimal Variance of Reliability Estimate
100        3.56694E-06    6.08486E-07
500        0.017131156    0.002922416
1,000      0.0711668      0.012140395
1,500      0.123932643    0.021141757
2,000      0.165874841    0.028296706
2,500      0.196373204    0.033499443
3,000      0.21678188     0.036980974
3,500      0.228828905    0.039036084
4,000      0.234170642    0.039947335
4,500      0.234264739    0.039963387
5,000      0.230346174    0.039294916
5,500      0.223439064    0.038116627
6,000      0.214380213    0.036571272
6,500      0.203845191    0.034774095
7,000      0.192373276    0.032817093
7,500      0.180389931    0.030772845
8,000      0.168226416    0.028697863
8,500      0.15613662     0.026635456
9,000      0.144311317    0.024618169
9,500      0.132890149    0.022669824
10,000     0.121971642    0.020807228
10,500     0.111621511    0.019041592
11,000     0.101879536    0.017379702
11,500     0.09276521     0.015824883
12,000     0.084282369    0.014377789
12,500     0.076422959    0.013037046
13,000     0.069170094    0.011799775
[Figure: the non optimal and optimal variances of the reliability estimate plotted against time (hrs); values per Table 5.8.]
Figure 5.4: Optimal and Non Optimal Variances of Reliability Estimates over Time
The optimal settings of (I1 = 45mA) and (I2 = 29.219mA) are used to recalculate the confidence intervals of reliability estimates to measure the uncertainty associated with the reliability estimate at the optimal settings. The reliability requirement at the required maintenance interval is again assumed to be a constant, i.e., R0(T) = 0.99. As discussed in Chapter 4, the reliability estimate is assumed to follow a lognormal distribution, and the confidence intervals of reliability estimates are calculated using equations (4.13 and 4.14). The results of the upper and the lower confidence intervals of the reliability estimate over time at the optimum stress levels are illustrated in Table 5.9.
Table 5.9: Upper and Lower Confidence Intervals over Time (Optimal Testing Plan)

T (hrs)    Reliability Estimate    w          Lower Confidence    Upper Confidence
100        0.9973    1.6188     0.9957    0.9983
500        0.8276    1.8649     0.7202    0.8995
1,000      0.6561    2.2328     0.4608    0.8099
1,500      0.5442    2.6226     0.3128    0.7579
2,000      0.4633    3.0427     0.2210    0.7243
2,500      0.4010    3.5020     0.1605    0.7010
3,000      0.3511    4.0087     0.1189    0.6844
3,500      0.3098    4.5713     0.0894    0.6724
4,000      0.2751    5.1987     0.0680    0.6637
4,500      0.2455    5.9006     0.0523    0.6576
5,000      0.2200    6.6877     0.0405    0.6535
5,500      0.1977    7.5723     0.0315    0.6511
6,000      0.1782    8.5677     0.0247    0.6501
6,500      0.1610    9.6893     0.0194    0.6502
7,000      0.1457    10.9546    0.0153    0.6514
7,500      0.1321    12.3831    0.0121    0.6534
8,000      0.1200    13.9972    0.0096    0.6562
8,500      0.1091    15.8223    0.0077    0.6596
9,000      0.0994    17.8873    0.0061    0.6637
9,500      0.0906    20.2250    0.0049    0.6682
10,000     0.0826    22.8727    0.0039    0.6732
10,500     0.0755    25.8730    0.0031    0.6786
11,000     0.0690    29.2743    0.0025    0.6844
11,500     0.0631    33.1317    0.0020    0.6905
12,000     0.0577    37.5080    0.0016    0.6969
12,500     0.0529    42.4746    0.0013    0.7035
13,000     0.0485    48.1131    0.0011    0.7102
The results of the reliability estimates that were shown in Figure 5.1 along with
confidence intervals of the reliability estimate at the optimum stress levels that were calculated at
different points in time per Table 5.9 are illustrated in Figure 5.5.
[Figure: the reliability estimate with lower and upper confidence bounds plotted against time (hrs); series: Reli Estimate, Lower Confidence, Upper Confidence.]
Figure 5.5: Reliability Estimates and Confidence Intervals over Time (Optimal Testing Plan)
Comparing Figure 5.3 (non optimal settings) with Figure 5.5 (optimal settings), it can be clearly seen that the confidence intervals in Figure 5.5 are much tighter around the reliability estimate than those in Figure 5.3. In other words, the optimal settings of the applied stresses resulted in much tighter confidence intervals around the reliability estimate.
5.6 Minimization of Variance of the Overall Loss – Numerical Example
After illustrating the methodology for reducing the variability around the target value in the previous section, it is essential to be able to estimate the expected total loss that is associated with the reliability estimation, especially when maintenance requirements are considered.
Recall that in Chapter 4, Loss(R̂(T)) was defined as the loss function specified by the manufacturer. To illustrate the concepts that were introduced in Section 4.7, four different loss functions, Loss(R̂(T)), are generated per Table 5.10 below:
Table 5.10: Loss Functions Formulations

     Loss Function                          X ≤ R0(T)                X > R0(T)
1    Symmetric Linear Loss Function         y = R0(T) − X            y = X − R0(T)
2    Asymmetric Linear Loss Function        y = R0(T) − X            y = 0.5(X − R0(T))
3    Symmetric Quadratic Loss Function      y = 0.5(X − R0(T))^2     y = 0.5(X − R0(T))^2
4    Asymmetric Quadratic Loss Function     y = 0.5(X − R0(T))^2     y = 0.25(X − R0(T))^2
where R0 (T ) is the reliability requirement at the maintenance requirement time T (hrs ) and X
is the true reliability of the product to be estimated. These formulations are used to simulate
different loss functions at different maintenance requirements, T = 1,000, 1,500, 2,000, 2,500,
3,000, 3,500, 4,000, 4,500, 5,000 hrs and at different reliability requirements, R0 (T ) = 0.65, 0.55,
0.47, 0.41, 0.36, 0.31, 0.28, 0.25, 0.23 respectively.
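The four formulations can be evaluated directly. The sketch below implements the Table 5.10 losses as reconstructed above and estimates the expected loss by Monte Carlo under the lognormal model; sampling in place of equation (4.15) is an assumption for illustration, and the loss forms follow the reconstruction of Table 5.10.

    # Loss functions of Table 5.10 and a Monte Carlo expected loss.
    import numpy as np

    rng = np.random.default_rng(0)

    def sym_linear(x, r0):
        return np.abs(x - r0)

    def asym_linear(x, r0):
        return np.where(x <= r0, r0 - x, 0.5 * (x - r0))

    def sym_quadratic(x, r0):
        return 0.5 * (x - r0) ** 2

    def asym_quadratic(x, r0):
        return np.where(x <= r0, 0.5 * (x - r0) ** 2, 0.25 * (x - r0) ** 2)

    # Example: T = 1,000 hrs, R0 = 0.65; ln X ~ Normal(mu, var) per Table 5.5
    mu, var = -0.4979, 0.1530
    x = np.clip(rng.lognormal(mu, np.sqrt(var), 200_000), 0.0, 1.0)

    for f in (sym_linear, asym_linear, sym_quadratic, asym_quadratic):
        loss = f(x, 0.65)
        print(f.__name__, loss.mean(), loss.var())   # expected loss, variance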
An example of the four loss function formulations at maintenance requirement T = 1,000 hrs and reliability requirement R0(T) = 0.65 is illustrated in Figure 5.6. The loss function formulations at the other maintenance requirements and reliability requirements are placed in Appendix II.
[Figure: four panels plotting loss against reliability: L(i) Symmetric Linear, L(i) Asymmetric Linear, L(i) Symmetric Quadratic, and L(i) Asymmetric Quadratic.]
Figure 5.6: Loss Functions at Maintenance Requirement, T = 1,000 hrs, and Reliability Requirements, R0(T) = 0.65
The expected total loss for each required maintenance interval is calculated using
equation (4.15) and the variance of the total loss for each required maintenance interval is
calculated using equation (4.16). Both the expected total loss and the variance of the total loss
are calculated numerically using Microsoft Excel software for both the optimal and the non
optimal stress settings that were discussed in the previous sections.
Table 5.11 and Figure 5.7 illustrate the results of the loss function formulations described in Table 5.10 at the non optimal stress levels.
Table 5.11: Loss Function Results: Non Optimal Settings

T (hrs)    Required Reliability    Reliability Estimate    Non Optimal Symmetric Linear    Non Optimal Asymmetric Linear    Non Optimal Symmetric Quadratic    Non Optimal Asymmetric Quadratic
1,000    0.65    0.6561    0.04325681    0.03476451    0.005921     0.00500295
1,500    0.55    0.5442    0.06333218    0.0495199     0.0096258    0.00771122
2,000    0.47    0.4633    0.07077947    0.05190823    0.0110137    0.00794739
2,500    0.41    0.4010    0.0739043     0.0510226     0.0119576    0.0078425
3,000    0.36    0.3511    0.0755901     0.04934277    0.0128956    0.00781121
3,500    0.31    0.3098    0.07728791    0.04754028    0.0141454    0.00798652
4,000    0.28    0.2751    0.07737377    0.04615071    0.0147612    0.00807769
4,500    0.25    0.2455    0.07736091    0.0447528     0.0154036    0.00821098
5,000    0.23    0.2200    0.07607811    0.04326136    0.0154955    0.00815228
[Figure: expected loss plotted against time (hrs) for the four non optimal loss function series of Table 5.11.]
Figure 5.7: Illustration of the Loss Function Results – Non Optimal Settings
Table 5.12 and Figure 5.8 illustrate the results of the loss function formulations described in Table 5.10 at the optimal stress levels.
Table 5.12: Loss Function Results: Optimal Settings, Single Maintenance Requirement

T (hrs)    Required Reliability    Reliability Estimate    Optimal Symmetric Linear    Optimal Asymmetric Linear    Optimal Symmetric Quadratic    Optimal Asymmetric Quadratic
1,000    0.65    0.6561    0.00966198    0.00708575    0.0006645    0.00048513
1,500    0.55    0.5442    0.01681347    0.01283465    0.0015238    0.00117041
2,000    0.47    0.4633    0.02228852    0.01695442    0.0023027    0.00175367
2,500    0.41    0.4010    0.02553417    0.01928489    0.002772     0.00207292
3,000    0.36    0.3511    0.02677418    0.01984409    0.0029367    0.00211972
3,500    0.31    0.3098    0.02673032    0.01897027    0.0029406    0.00198392
4,000    0.28    0.2751    0.02581009    0.01809424    0.0028       0.00184411
4,500    0.25    0.2455    0.02451193    0.01676951    0.0026286    0.00166754
5,000    0.23    0.2200    0.02501981    0.01625595    0.0027936    0.00166751
[Figure: expected loss plotted against maintenance interval (hr) for the four optimal loss function series of Table 5.12.]
Figure 5.8: Illustration of the Loss Function Results – Optimal Settings
Comparing Figure 5.7 (Loss Function Results – Non Optimal Settings) with Figure 5.8 (Loss Function Results – Optimal Settings) shows that the loss associated with the optimal stress settings is less than the loss associated with the non-optimal stress settings. This leads to the conclusion that optimizing the settings of the ADT experiment is worth the effort, since the total loss is reduced when the settings that impact the ADT experiment are optimized.
Table 5.13 and Figure 5.9 illustrate the results of the variance of the loss function formulations described in Table 5.10 at the non optimal stress levels of (I1 = 40mA) and (I2 = 35mA).
Table 5.13: Variance of the Total Loss Using the Non Optimal Stress Settings

T (hrs)    Required Reliability    Reliability Estimate    Non Optimal Variance Symmetric Linear    Non Optimal Variance Asymmetric Linear    Non Optimal Variance Symmetric Quadratic    Non Optimal Variance Asymmetric Quadratic
1,000    0.65    0.6561    0.00043938    0.00028408    8.2675E-06    5.9092E-06
1,500    0.55    0.5442    0.00117214    0.00071681    2.7136E-05    1.742E-05
2,000    0.47    0.4633    0.00156889    0.00084362    3.8049E-05    1.9806E-05
2,500    0.41    0.4010    0.00174371    0.00083064    4.5727E-05    1.9656E-05
3,000    0.36    0.3511    0.00182254    0.000776      5.3151E-05    1.9486E-05
3,500    0.31    0.3098    0.00188493    0.00071259    6.3287E-05    2.016E-05
4,000    0.28    0.2751    0.00185788    0.0006604     6.7794E-05    2.0288E-05
4,500    0.25    0.2455    0.00181916    0.00060826    7.2324E-05    2.0539E-05
5,000    0.23    0.2200    0.00171764    0.00055491    7.1471E-05    1.9772E-05
[Figure: variance of loss plotted against maintenance interval (hr) for the four non optimal variance series of Table 5.13.]
Figure 5.9: Illustration of the Variance of the Loss Function Results – Non Optimal Settings
Table 5.14 and Figure 5.10 illustrate the results of the variance of the loss function formulations described in Table 5.10 at the optimal stress levels of (I1 = 45mA) and (I2 = 29.219mA).
Table 5.14: Variance of the Total Loss Using the Optimal Stress Settings

T (hrs)    Required Reliability    Reliability Estimate    Optimal Variance Symmetric Linear    Optimal Variance Asymmetric Linear    Optimal Variance Symmetric Quadratic    Optimal Variance Asymmetric Quadratic
1,000    0.65    0.6561    9.9081E-06    5.3311E-06    4.7372E-08    2.5257E-08
1,500    0.55    0.5442    3.9946E-05    2.3274E-05    3.3075E-07    1.9511E-07
2,000    0.47    0.4633    8.1293E-05    4.7029E-05    8.7355E-07    5.0656E-07
2,500    0.41    0.4010    0.00011499    6.5564E-05    1.3633E-06    7.6195E-07
3,000    0.36    0.3511    0.00013031    7.1525E-05    1.5766E-06    8.2049E-07
3,500    0.31    0.3098    0.00013013    6.5478E-05    1.5841E-06    7.2002E-07
4,000    0.28    0.2751    0.00011938    5.8588E-05    1.4137E-06    6.1201E-07
4,500    0.25    0.2455    0.00010472    4.8925E-05    1.2123E-06    4.8678E-07
5,000    0.23    0.2200    0.00010912    4.6001E-05    1.3697E-06    4.8722E-07
[Figure: variance of loss plotted against maintenance interval (hr) for the four optimal variance series of Table 5.14.]
Figure 5.10: Illustration of the Variance of the Loss Function Results – Optimal Settings
Comparing Figure 5.9 (Variances of Total Loss – Non Optimal Settings) with Figure 5.10 (Variances of Total Loss – Optimal Settings) shows that the variances of total loss associated with the optimal stress settings are less than those associated with the non-optimal stress settings. This leads to the conclusion that optimizing the settings of the ADT experiment is worth the effort, since the variances of total loss are reduced when the settings that impact the ADT experiment are optimized.
To take this conclusion a step further, and as was discussed in Chapter 4, the next step is to minimize the variance of the estimated weighted sum of the overall reliability loss rather than minimizing the variance of each estimated reliability loss individually.
The mathematical formulation that was introduced in Chapter 4 is expressed as:

Min Σ(i = 1 to k) ωi Var(loss; Z1, Z2, T1, T2, R0(Ti))

Subject to:
Z0 < Z1 < Z2 < ZH
T1, T2 < Tmax
where R0 (Ti ) is the required reliability at the maintenance time Ti , k is the number of
requirements, and ωi is the relative weight of the ith requirement.
In order to minimize the variance of the estimated overall reliability loss, the maximum likelihood estimation approach is utilized and a numerical method using Microsoft Excel is applied to find the stress levels that minimize the estimated overall reliability loss for the test design.
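A sketch of the weighted-sum objective follows; the per-requirement variance function is a toy stand-in for equation (4.16), and the equal weights are hypothetical.

    # Weighted sum of the loss variances over the maintenance requirements.
    import numpy as np

    T_req = np.array([1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000])
    weights = np.full(T_req.size, 1.0 / T_req.size)   # hypothetical equal weights

    def toy_var_loss(z, t):
        # Stand-in; replace with Var(loss) from equation (4.16) at stresses z.
        i1, i2 = z
        return ((i1 - 45.0) ** 2 + (i2 - 26.329) ** 2 + 1.0) * (t / 5000.0)

    def weighted_objective(z):
        return sum(w * toy_var_loss(z, t) for w, t in zip(weights, T_req))

    print(weighted_objective((40.0, 35.0)))   # objective at the pilot plan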
It is essential to recall that in the LED light intensity degradation experiment, the current
is the accelerating stress at two levels (I 1 = 40mA) and (I 2 = 35mA) . These two stress levels
were concluded not to be the optimal settings of the ADT experiment. By using the maximum
likelihood estimates approach to minimize the variance of each reliability estimate individually,
it was concluded that the accelerating stresses of (I 1 = 45mA) and (I 2 = 29.219mA) are the
optimal settings that can minimize the variance around the reliability estimate when considering
the variance of each reliability estimate individually.
When using the maximum likelihood estimation approach to minimize the weighted sum of the variances of the reliability estimates, it was concluded that (I1 = 45mA) and (I2 = 29.219mA) are not the optimal settings of the ADT experiment. Instead, it was discovered that (I1 = 45mA) and (I2 = 26.329mA) are the optimal settings that minimize the weighted sum of the variances of the reliability estimates.
Utilizing a similar approach to the one above, the expected total loss for each required maintenance interval is calculated using equation (4.15), and the variance of the total loss for each required maintenance interval is calculated using equation (4.16), this time at the optimal settings obtained by minimizing the weighted sum of the variances of the reliability estimates. Both the expected total loss and the variance of the total loss are calculated using Microsoft Excel.
Table 5.15 and Figure 5.11 illustrate the results of the loss functions per the formulations described in Table 5.10 at the optimal stress levels calculated from minimizing the weighted sum of the variances of the reliability estimate.
Table 5.15: Loss Function Results: Optimal Settings, Multiple Maintenance Requirements

T (hrs)    Required Reliability    Reliability Estimate    Weighted Sum of Loss Optimal Symmetric Linear    Weighted Sum of Loss Optimal Asymmetric Linear    Weighted Sum of Loss Optimal Symmetric Quadratic    Weighted Sum of Loss Optimal Asymmetric Quadratic
1,000    0.65    0.6561    0.009170439    0.006719249    0.0006151     0.00044837
1,500    0.55    0.5442    0.015959358    0.012185287    0.00141069    0.00108353
2,000    0.47    0.4633    0.021194494    0.016135834    0.00214052    0.0016325
2,500    0.41    0.4010    0.024367857    0.018444798    0.00259248    0.00194629
3,000    0.36    0.3511    0.025632224    0.019065225    0.00275696    0.00200221
3,500    0.31    0.3098    0.025634589    0.01827161     0.00276171    0.00187676
4,000    0.28    0.2751    0.024780265    0.017466472    0.00262948    0.00174659
4,500    0.25    0.2455    0.023540414    0.01620204     0.00246514    0.00157777
5,000    0.23    0.2200    0.021987594    0.015032225    0.00224146    0.00141525
[Figure: expected loss plotted against maintenance interval (hr) for the four weighted-sum optimal loss series of Table 5.15.]
Figure 5.11: Illustration of the Loss Function Results for the Weighted Sum of Maintenance Requirements – Optimal Settings
Table 5.16 and Figure 5.12 illustrate the results of the variance of the loss functions per the formulations described in Table 5.10 at the optimal stress levels calculated from minimizing the weighted sum of the variances of the reliability estimate.
Table 5.16: Variance of the Weighted Sum of the Total Loss Using the Optimal Stress Settings

T (hrs)    Required Reliability    Reliability Estimate    Weighted Sum of Variance Optimal Symmetric Linear    Weighted Sum of Variance Optimal Asymmetric Linear    Weighted Sum of Variance Optimal Symmetric Quadratic    Weighted Sum of Variance Optimal Asymmetric Quadratic
1,000    0.65    0.6561    8.68529E-06    4.66487E-06    3.9504E-08    2.1E-08
1,500    0.55    0.5442    3.50307E-05    2.04186E-05    2.7598E-07    1.628E-07
2,000    0.47    0.4633    7.15946E-05    4.14889E-05    7.3538E-07    4.2765E-07
2,500    0.41    0.4010    0.00010215     5.85008E-05    1.1634E-06    6.5531E-07
3,000    0.36    0.3511    0.000116678    6.45024E-05    1.3577E-06    7.1533E-07
3,500    0.31    0.3098    0.000117087    5.94278E-05    1.3671E-06    6.3045E-07
4,000    0.28    0.2751    0.000107764    5.34636E-05    1.2209E-06    5.3767E-07
4,500    0.25    0.2455    9.46439E-05    4.47549E-05    1.0448E-06    4.2706E-07
5,000    0.23    0.2200    7.96984E-05    3.71698E-05    8.342E-07     3.3169E-07
[Figure: variance of loss plotted against maintenance interval (hr) for the four weighted-sum optimal variance series of Table 5.16.]
Figure 5.12: Illustration of the Variance of the Weighted Sum of the Loss Function Results – Optimal Settings
It can be noted that minimizing the weighted sum of the variances of the reliability estimates results in a lower overall loss at the optimal settings than minimizing each variance of the reliability estimate individually, as can be seen from Tables 5.15 and 5.16 and Figures 5.11 and 5.12.
5.7 Summary and Discussion
In summary, Chapter 5 demonstrated examples of the concepts and models introduced in Chapter 4. These examples serve as validations of the proposed models as well as illustrations of their usefulness. This research can be extended in many directions, and the discussion of those extensions is the focus of Chapter 6.
CHAPTER 6
CONCLUSIONS AND FUTURE RESEARCH
6.1 Conclusions
In this dissertation, the problem of using accelerated degradation testing data for reliability estimation is studied and demonstrated. Simulation and analytical approaches have been investigated. By simulation, which generates a large number of degradation paths, the reliability of the product can be estimated using an empirical formulation. This approach is general and has great flexibility in estimating the reliability of a product regardless of the functional form of the degradation paths. However, it is time-consuming and sometimes cannot provide efficient and accurate reliability estimates. Alternatively, the analytical approach may provide closed-form expressions for reliability estimates for specific degradation process models. If the model fits, this approach is more accurate and efficient than the simulation approach. More importantly, when the closed-form solution exists, the optimal design of testing plans can be formulated and solved with the objective of improving either the accuracy of the reliability estimate or the accuracy of the economic loss estimate.
In addition to the statistical study of the reliability estimation, the optimal design of ADT plans has been investigated extensively. The objectives considered include minimizing the variance of a single reliability estimate for the maintenance requirement, minimizing the weighted variances of multiple reliability estimates, and minimizing the weighted economic loss associated with the reliability estimates considering multiple maintenance requirements. In the literature, this work is the first study regarding the optimal design of testing plans that considers maintenance requirements. By determining the optimal setting of decision variables such as the stress levels in the ADT experiments, the improvements in these objectives have been demonstrated using numerical examples. It can be seen that the novel methodology developed in this dissertation can significantly reduce the uncertainty of certain indices associated with the reliability estimates. This work is important in the area of reliability engineering as it indicates an efficient way of conducting accelerated degradation testing to verify the product's reliability and make management decisions under limited testing resources.
6.2 Future Research
In future research, the work that has been done in this dissertation can be extended in
several directions. One direction is to consider the optimal design of testing plans when multiple
acceleration stresses are involved in ADT experiments. This problem becomes more difficult to
solve when more flexibility in changing the test duration and the total number of degradation
measurements for each stress level is given. It is expected that an efficient algorithm will be
developed to provide optimal solutions when different objective functions are considered.
Another direction for extension is to consider various preventive maintenance policies, such as the block replacement policy and the age replacement policy, as well as various imperfect maintenance policies. When different maintenance policies and effects of maintenance actions are considered, more realistic and complex problems regarding the optimal design of testing plans need to be derived and solved. For instance, to reduce the management risk of performing such maintenance, a long-run cost rate, denoted by Â(R̂(t, z)), can be defined as follows:
Â(R̂(T, z)) = [C0 × (1 − R̂(T, z)) + C1 × R̂(T, z)] / [T × R̂(T, z) + ∫0^T t × f̂(t, z) dt]
where C0 is the repair cost, C1 is the preventive maintenance (PM) cost, and T is the preventive maintenance interval (the maintenance requirement). By considering the uncertainty of the reliability estimate R̂(t, z) obtained through ADT experiments, the uncertainty of the long-run cost rate, measured by its variance, can be expressed as:

Var[Â] = Var{ [C0 × (1 − R̂(T, z)) + C1 × R̂(T, z)] / [T × R̂(T, z) + ∫0^T t × f̂(t, z) dt] }
This variance can be used as the objective function when designing the optimal settings of an ADT experiment. This problem is practical and especially useful for experiment planners who are interested in an accurate estimate of the long-run cost associated with a product to which a predetermined maintenance strategy is attached.
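For concreteness, the cost-rate expression can be evaluated for any fitted lifetime model. The sketch below assumes a Weibull model and hypothetical costs; it is an illustration of the formula above, not a result from the dissertation.

    # Long-run cost rate A(R_hat(T, z)) under an assumed Weibull model.
    import numpy as np
    from scipy.integrate import quad

    C0, C1 = 5000.0, 800.0        # hypothetical repair and PM costs
    shape, scale = 2.0, 9000.0    # hypothetical Weibull parameters

    def R(t):
        return np.exp(-(t / scale) ** shape)   # reliability function

    def f(t):
        return (shape / scale) * (t / scale) ** (shape - 1) * R(t)  # density

    def cost_rate(T):
        cycle = T * R(T) + quad(lambda t: t * f(t), 0.0, T)[0]      # E[cycle length]
        return (C0 * (1.0 - R(T)) + C1 * R(T)) / cycle

    print(cost_rate(4500.0))      # cost per hour at a 4,500-hr PM interval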
APPENDIXES
APPENDIX I: PARAMETER ESTIMATES – CHAPTER 3 SIMULATION
Figure I-1 (a through j) below illustrates the parameter estimates for Simulation 1, at the second stress level, where β1 = NORM(1.5, 0.5), β2 = NORM(1.5, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 runs.
[Panels (a) through (j): scatter plots of the β2 estimate versus the β1 estimate for Runs 1 through 10 at the second stress level (Dz2).]
Figure I-1 (a through j): Parameter Estimates for Simulation 1, where β1 = NORM(1.5, 0.5), β2 = NORM(1.5, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs.
Figure I-2 (a through j) below illustrates the parameter estimates for Simulation 1, at the third stress level, where β1 = NORM(2, 0.5), β2 = NORM(2, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 runs.
[Panels (a) through (j): scatter plots of the β2 estimate versus the β1 estimate for Runs 1 through 10 at the third stress level (Dz3).]
Figure I-2 (a through j): Parameter Estimates for Simulation 1, where β1 = NORM(2, 0.5), β2 = NORM(2, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs.
Figure I-3 (a through j) below illustrates the parameter estimates for Simulation 2, at the first stress level, where β1 = NORM(1.25, 0.5), β2 = NORM(1.25, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 runs.
[Panels (a) through (j): scatter plots of the β2 estimate versus the β1 estimate for Runs 1 through 10 at the first stress level (Dz1).]
Figure I-3 (a through j): Parameter Estimates for Simulation 2, where β1 = NORM(1.25, 0.5), β2 = NORM(1.25, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs.
Figure I-4 (a through j) below illustrates the parameter estimates for Simulation 2, at the second stress level, where β1 = NORM(1.55, 0.5), β2 = NORM(1.55, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 runs.
[Panels (a) through (j): scatter plots of the β2 estimate versus the β1 estimate for Runs 1 through 10 at the second stress level (Dz2).]
Figure I-4 (a through j): Parameter Estimates for Simulation 2, where β1 = NORM(1.55, 0.5), β2 = NORM(1.55, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 Runs.
Figure I-5 (a through j) below illustrates the parameter estimates for Simulation 2, at the third stress level, where β1 = NORM(2.25, 0.5), β2 = NORM(2.25, 0.5), ε = NORM(0, 1), and ti is the time from i = 1 to 10 for each of the 10 runs.
[Figure I-5, panels (a) through (j): scatter plots of the β2 estimates versus the β1 estimates for Runs 1 through 10 at the third stress level, Dz3.]
Figure I-5 (a through j): Parameter estimates for Simulation 2 at the third stress level, where β1 ~ NORM(2.25, 0.5), β2 ~ NORM(2.25, 0.5), ε ~ NORM(0, 1), and t_i is the time from i = 1 to 10 for each of the 10 runs.
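For readers who want to reproduce panels of this kind, the short Python sketch below generates one batch of runs per stress level and re-fits the parameters. It is a minimal sketch under one stated assumption: that each degradation path follows the linear form y(t_i) = β1 + β2 t_i + ε, matching the two-parameter structure named in the captions; the exact path model, replication counts, and fitting procedure are those defined in the main text, and all names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(2006)  # illustrative seed

    def estimate_run(mu, sigma=0.5, n_times=10):
        """Draw beta1, beta2 ~ NORM(mu, sigma), simulate one noisy path over
        t = 1..10 with eps ~ NORM(0, 1), and return the least-squares
        estimates (beta1_hat, beta2_hat)."""
        t = np.arange(1.0, n_times + 1.0)
        beta1, beta2 = rng.normal(mu, sigma, size=2)
        y = beta1 + beta2 * t + rng.normal(0.0, 1.0, size=n_times)  # ASSUMED linear path
        X = np.column_stack([np.ones_like(t), t])
        beta1_hat, beta2_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        return beta1_hat, beta2_hat

    # One batch of 10 runs per stress level; the means follow the captions:
    # Dz1 -> 1.25, Dz2 -> 1.55, Dz3 -> 2.25.
    for level, mu in [("Dz1", 1.25), ("Dz2", 1.55), ("Dz3", 2.25)]:
        estimates = [estimate_run(mu) for _ in range(10)]
        print(level, [(round(b1, 2), round(b2, 2)) for b1, b2 in estimates])

Scatter-plotting the β2 estimates against the β1 estimates, one panel per run across repeated batches, yields figures of the kind shown above.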
APPENDIX II: OTHER LOSS FUNCTION FORMULATIONS PER CHAPTER 5
Figure II.1 illustrates four loss function formulations at the maintenance requirement T = 1,500 hrs and the reliability requirement R0(T) = 0.55.
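For orientation, one way to write the four shapes, with R denoting the estimated reliability at T, R0(T) the requirement, and k, k1, k2 illustrative loss coefficients (the exact formulations and constants are those derived in Chapter 5), is:

\begin{align*}
L_{\text{sym,lin}}(R)   &= k\,\lvert R - R_0(T)\rvert, \\
L_{\text{asym,lin}}(R)  &= \begin{cases} k_1\,\bigl(R_0(T) - R\bigr), & R < R_0(T), \\ k_2\,\bigl(R - R_0(T)\bigr), & R \ge R_0(T), \end{cases} \\
L_{\text{sym,quad}}(R)  &= k\,\bigl(R - R_0(T)\bigr)^{2}, \\
L_{\text{asym,quad}}(R) &= \begin{cases} k_1\,\bigl(R_0(T) - R\bigr)^{2}, & R < R_0(T), \\ k_2\,\bigl(R - R_0(T)\bigr)^{2}, & R \ge R_0(T). \end{cases}
\end{align*}

The asymmetric forms penalize falling short of the requirement differently from exceeding it, which is the distinction visible between the left and right panels of each figure.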
[Figure II.1: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 1,500 hrs.]
Figure II.1: Loss Functions at Maintenance Requirement T = 1,500 hrs and Reliability Requirement R0(T) = 0.55
Figure II.2 illustrates four loss function formulations at the maintenance requirement T = 2,000 hrs and the reliability requirement R0(T) = 0.47.
[Figure II.2: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 2,000 hrs.]
Figure II.2: Loss Functions at Maintenance Requirement T = 2,000 hrs and Reliability Requirement R0(T) = 0.47
Figure II.3 illustrates four loss function formulations at the maintenance requirement T = 2,500 hrs and the reliability requirement R0(T) = 0.41.
[Figure II.3: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 2,500 hrs.]
Figure II.3: Loss Functions at Maintenance Requirement T = 2,500 hrs and Reliability Requirement R0(T) = 0.41
Figure II.4 illustrates four loss function formulations at the maintenance requirement T = 3,000 hrs and the reliability requirement R0(T) = 0.36.
[Figure II.4: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 3,000 hrs.]
Figure II.4: Loss Functions at Maintenance Requirement T = 3,000 hrs and Reliability Requirement R0(T) = 0.36
Figure II.5 illustrates four loss function formulations at the maintenance requirement T = 3,500 hrs and the reliability requirement R0(T) = 0.31.
[Figure II.5: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 3,500 hrs.]
Figure II.5: Loss Functions at Maintenance Requirement T = 3,500 hrs and Reliability Requirement R0(T) = 0.31
Figure II.6 illustrates four loss function formulations at the maintenance requirement T = 4,000 hrs and the reliability requirement R0(T) = 0.28.
[Figure II.6: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 4,000 hrs.]
Figure II.6: Loss Functions at Maintenance Requirement T = 4,000 hrs and Reliability Requirement R0(T) = 0.28
Figure II.7 illustrates four loss function formulations at the maintenance requirement T = 4,500 hrs and the reliability requirement R0(T) = 0.25.
[Figure II.7: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 4,500 hrs.]
Figure II.7: Loss Functions at Maintenance Requirement T = 4,500 hrs and Reliability Requirement R0(T) = 0.25
Figure II.8 illustrates four loss function formulations at the maintenance requirement T = 5,000 hrs and the reliability requirement R0(T) = 0.23.
[Figure II.8: four panels plotting loss L(i) against reliability (symmetric linear, asymmetric linear, symmetric quadratic, and asymmetric quadratic) at 5,000 hrs.]
Figure II.8: Loss Functions at Maintenance Requirement T = 5,000 hrs and Reliability Requirement R0(T) = 0.23