
Software Reliability
and
Maintainability
Prof. K.K. Aggarwal
Vice Chancellor
G.G.S. Indraprastha University
Kashmere Gate, Delhi, India
Page 1 of 87
USES OF SOFTWARE
ENGINEERING STUDIES
1. To evaluate software engineering technology quantitatively.
2. To evaluate development status during
the test phases.
3. To monitor the operational performance
of software.
4. To enrich the insight into the software
product and software development
process.
Page 2 of 87
INTRODUCTION
• The three most significant needs of software are:
– Time of delivery,
– Level of quality required, and
– Cost
• While it is easy to quantify schedule and cost,
quantification of quality has been more difficult.
• Reliability is the most important characteristic of
Software Quality.
• Reliability is the probability that the software will
work without “failure” for a specified period of time
in a specified “environment”.
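Under the common simplifying assumption of a constant failure intensity λ, this probability of failure-free operation over a period τ is R(τ) = exp(−λτ); a minimal sketch with illustrative numbers:

```python
import math

def reliability(failure_intensity, period):
    """Probability of failure-free operation for `period` time units,
    assuming a constant failure intensity (failures per unit time)."""
    return math.exp(-failure_intensity * period)

# e.g. 0.01 failures/hr over a 10-hour mission
print(round(reliability(0.01, 10), 3))  # → 0.905
```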
Page 3 of 87
RELIABILITY APPROACHES
The Developer-Oriented Approach
• Attempts to count the faults found by counting either failures or repairs.
• Even a correct enumeration of faults is not a good status indicator.
Page 4 of 87
The User-Oriented Approach
• Software reliability:
– Relates to operation rather than design of the program
– Is dynamic rather than static
– Is easily associated with costs
– Is suitable for examining the significance of trends, for setting objectives, and for scheduling to meet those objectives
Page 5 of 87
FAULT
• A fault is the defect in the program that, when executed under particular conditions, causes a failure.
– A fault can cause more than one failure
– It is a property of the program
– It is created when a programmer makes an error
• A fault can be defined as a defective, missing, or extra instruction or set of related instructions that is the cause of one or more actual or potential failures.
Page 6 of 87
The number of faults in the software is the
difference between the number introduced and
the number removed.
– Faults are introduced when the code is being developed by programmers.
– The process of fault removal introduces some new faults.
– The fault removal resulting from execution depends on the occurrence of the associated failure.
– Faults can also be found without execution.
Page 7 of 87
FAILURE
• Defined as departure of operations from
requirements.
• A software failure must occur during
execution of a program. Potential failures
do not count.
• Documentation faults are not to be
counted.
• Requirements are somewhat subject to
interpretation.
• Requirements and failures can be
considered as “positive” and “negative”
specifications.
Page 8 of 87
• The definition of failure is really project
specific and must be established in
consultation with customer.
• The process of establishing the requirements for a system also involves a consideration of the operational profile.
• The failure process and hence software
reliability is directly dependent on the
environment or the operational profile.
• Operational Profile is the set of input
states that the program can execute along
with the probabilities with which they will
occur.
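An operational profile can be sketched as a mapping from input states to occurrence probabilities; states A and B and their probabilities come from the figure on the next slide, while state C and its probability are hypothetical filler so that the profile sums to 1:

```python
import random

# Operational profile: input state -> probability of occurrence.
# "A" and "B" are from the figure; "C" is a hypothetical remainder.
operational_profile = {
    "A": 0.12,
    "B": 0.08,
    "C": 0.80,
}

# The probabilities of all input states must sum to 1
assert abs(sum(operational_profile.values()) - 1.0) < 1e-9

# Draw test inputs in proportion to expected field usage
states = list(operational_profile)
weights = list(operational_profile.values())
sample = random.choices(states, weights=weights, k=1000)
```

Sampling test inputs with these weights exercises the program the way the field will, which is what makes the reliability estimate meaningful.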
Page 9 of 87
[Figure: an input space containing input state A (PA = 0.12) and input state B (PB = 0.08), and the corresponding portion of the operational profile: probability of occurrence (0.05 to 0.15) plotted against input state.]
Page 10 of 87
FAILURE BEHAVIOUR
• It is affected by:
1. The number of faults in the software being executed, and
2. The execution environment or operational profile of execution.
• Failure occurrence in time can be characterized by:
i) time of failure,
ii) time interval between failures,
iii) cumulative failures experienced up to a given time, and
iv) failures experienced in a time interval.
• All four of the foregoing quantities are random variables.
Page 11 of 87
Probability Distribution at Two Different Times

Value of random variable      Probability
(failures in time period)   Time = 1 hour   Time = 5 hours
 0                              0.10            0.01
 1                              0.18            0.02
 2                              0.22            0.03
 3                              0.16            0.04
 4                              0.11            0.05
 5                              0.08            0.07
 6                              0.05            0.09
 7                              0.04            0.12
 8                              0.03            0.16
 9                              0.02            0.13
10                              0.01            0.10
11                              0               0.07
12                              0               0.05
13                              0               0.03
14                              0               0.02
15                              0               0.01
Mean failures                   3.04            7.77
Page 12 of 87
[Figure: the mean value function (mean failures experienced versus time, with values at Time = 1 hr and Time = 5 hr) and the failure intensity function (failures/hr versus time).]
The variation can be expressed as:
– Mean Value Function: represents the average cumulative failures associated with each time point.
– Failure Intensity Function: represents the rate of change of the mean value function.
Page 13 of 87
Example Program:

1.  read (a,b,c);
2.  if a <> 0 then begin
3.    d := b*b - 5*a*c;
4.    X := 0;
5.    if d > 0 then
6.      X := (-b + trunc(sqrt(d))) div (2*a)
    end
    else
7.    X := -c div b;
8.  if (a*X*X + b*X + c = 0) then
9.    writeln (X, ' is an integer solution')
    else
10.   writeln ('There is no integer solution')
Page 14 of 87
• This program displays an integral solution to
the quadratic equation ax2 + bx + c = 0 for
integral values of a, b and c.
• Assume the ranges of the inputs:
a: 0 to 10 (11 values)
b: -5 to 5 (11 values)
c: -20 to 20 (41 values)
Even then, the total number of input states is T = 11 × 11 × 41 = 4,961.
• The program has a fault at line 3.
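A Python transcription of the example program may help in tracing the data sets that follow; Pascal's trunc(sqrt(d)) is rendered with math.isqrt and div with integer division, and the line-3 fault (5*a*c instead of 4*a*c) is kept:

```python
import math

def solve(a, b, c):
    """Transcription of the example program, fault included."""
    if a != 0:
        d = b * b - 5 * a * c  # faulty discriminant (line 3): should be 4*a*c
        x = 0                  # line 4
        if d > 0:              # line 5
            x = (-b + math.isqrt(d)) // (2 * a)  # line 6
    else:
        x = -c // b            # line 7
    if a * x * x + b * x + c == 0:               # line 8
        return f"{x} is an integer solution"
    return "There is no integer solution"

print(solve(0, 3, 6))     # → -2 is an integer solution (fault not executed)
print(solve(3, 2, 0))     # → 0 is an integer solution (fault executed, no infection)
print(solve(1, -1, -12))  # → 4 is an integer solution (infection cancelled by isqrt)
```

The first three data sets of the following slides reproduce exactly; the fourth set's behaviour depends on the arithmetic details at location 6.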
Page 15 of 87
Set 1:

Location   a   b   c   d    x
1          0   3   6   **   **
7                      **   -2
8                      **   -2
9                      **   -2
Output: -2 is an integer solution

The value a = 0 causes the selection of a path that does not include location 3. No failure possible.
Page 16 of 87
Set 2:

Location   a   b   c   d    x
1          3   2   0   **   **
2                      **   **
3                      4    **
6                      4    0
8                      4    0
9                      4    0
Output: 0 is an integer solution

The fault is reached, but the computation proceeds just as if there were no fault, because c = 0. No infection.
Page 17 of 87
Set 3:

Location   a   b    c     d    x
1          1   -1   -12   **   **
2                         **   **
3                         61   **
6                         61   4
8                         61   4
9                         61   4
Output: 4 is an integer solution

The fault infects the succeeding data state: d = 61 instead of 49. The error propagates to location 6, where it is cancelled by the integer square-root calculation (the correct value, 7, is computed). No failure again.
Page 18 of 87
Set 4:

Location   a    b   c     d     x
1          10   0   -10   **    **
2                         **    **
3                         500   **
4                         500   **
6                         500   1.1
10                        500   1.1
Output: There is no integer solution

Here the fault is executed and it infects the data state. Also, the data-state error propagates to the output: a failure.
Page 19 of 87
• Each computation falls into one of the four
categories.
¾ the fault is not executed
¾ the fault is executed but does not infect any data
state
¾ some data states are infected, but the output is
nonetheless correct
¾ data infection does cause an incorrect output
Page 20 of 87
COST IMPACT OF SOFTWARE
DEFECTS
• The obvious benefit of formal technical reviews (FTRs) is the early discovery of software defects, so that each defect is corrected prior to the next step.
• It is indicated that design activities
introduce between 50 and 65 percent of all
errors.
• FTRs have been shown to be up to 75
percent effective.
Page 21 of 87
• The review process thus substantially reduces the cost of subsequent steps in the development and maintenance phases.
• If an error uncovered during design costs
1.0 monetary unit to correct, the same
error just before testing commences may
cost 6.5 units; during testing, 15 units;
and after release, 67 units.
Page 22 of 87
DEFECT AMPLIFICATION & REMOVAL
Defect Amplification Model
[Figure: each development step takes the errors passed through from the previous step, amplifies some of them (1 : x), and adds newly generated errors; defect detection is applied with a given percent efficiency for error detection, and the remaining errors are passed to the next step.]
Page 23 of 87
Defect Amplification – No Reviews
[Figure: defect flow without reviews through Preliminary Design, Detailed Design, Code/Unit Test, Integration Test, Validation Test, and System Test.]
Page 24 of 87
Defect Amplification – Reviews Conducted
[Figure: the same defect flow with reviews conducted at the Preliminary Design, Detailed Design, and Code/Unit Test steps.]
Page 25 of 87
COST COMPARISONS

Errors found           Number   Unit Cost   Total
Reviews conducted
  During design          22       1.5         33
  Before test            36       6.5        234
  During test            21      15          315
  After release           3      67          201
                                             783
No reviews conducted
  Before test            22       6.5        143
  During test            82      15         1230
  After release          12      67          804
                                            2177
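The table's arithmetic can be checked directly; note that the with-reviews column sums to 783 monetary units (22 × 1.5 + 36 × 6.5 + 21 × 15 + 3 × 67), not 793:

```python
# Unit repair cost by phase (monetary units) and error counts from the table
UNIT_COST = {"during design": 1.5, "before test": 6.5,
             "during test": 15, "after release": 67}

WITH_REVIEWS = {"during design": 22, "before test": 36,
                "during test": 21, "after release": 3}
NO_REVIEWS = {"before test": 22, "during test": 82, "after release": 12}

def total_cost(errors_found):
    """Sum over phases of (errors found) x (unit repair cost)."""
    return sum(n * UNIT_COST[phase] for phase, n in errors_found.items())

print(total_cost(WITH_REVIEWS))  # → 783.0
print(total_cost(NO_REVIEWS))    # → 2177.0
```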
Page 26 of 87
OPERATIONAL PROFILE & TEST
PROFILE
Operational Profile
• It is desirable to use the concept of
operational profile even if we can draw it
only approximately.
• In the absence of the use of operational
profile, we are using flat operational
profile.
• The environment or operational profile of
a program is established by enumerating
the possible input states and their
probabilities of occurrence.
Page 27 of 87
• It is possible to perform a grouping or
equivalence partitioning of the input space
and select only one input state from each
group with a probability equal to the
probability (total) of occurrence of all states
in the group.
• Some systems operate in several characteristic operational modes, and the proportion of time spent in each mode varies from installation to installation.
• Here it may be desirable to determine the reliability for each mode. An example is a telephone switching system, which may operate in either a business-customer mode or a residential-customer mode.
Page 28 of 87
• Operational mode A has a wide variety of
types of calls such as conference calls, credit
card calls and international calls. Operational
mode B has fewer of the special calls, hence
the operational profile shows more of a peak.
[Figure: probability of occurrence Pk plotted against input state k; operational mode B shows more of a peak than operational mode A.]
Page 29 of 87
TEST PROFILE
• Software systems are subjected to extensive
testing prior to release. The testing phase of
the software development life cycle requires a
judicious choice of a test suite.
• Exhaustive testing is practically impossible even for the most trivial applications.
• We suggest using an operational profile
approach for designing an optimal testing
strategy.
• If M is the total number of all possible inputs,
let us assume we can choose only T < < M
test cases for cost considerations.
Page 30 of 87
• The optimum test strategy is the one that yields the greatest reduction in failure intensity per unit of test cost. This would be accomplished if we select these T inputs such that the failure intensity reduces most rapidly with respect to test execution time.
• With a knowledge of the operational profile and the test profile, we can quantify test effectiveness.
[Figure: probability of occurrence versus input state, showing the operational profile over all M input states and the test profile concentrated on the T selected states.]
Page 31 of 87
SOFTWARE RELIABILITY
MODELS
General Characteristics
• A software reliability model has the form of a random process that describes the behaviour of failures with time. Specification of the model generally includes specification of a function of time, such as the mean value function or the failure intensity.
• A software reliability model describes software failures as a random process, which is characterized in terms of either the times of failure or the number of failures at fixed times.
Page 32 of 87
• Let M(t) be a random process representing the number of failures experienced by time t. Then μ(t), the mean value function, is defined as:
μ(t) = E[M(t)]
• The failure intensity function is then defined as:
λ(t) = dμ(t)/dt
Page 33 of 87
GOOD MODEL
Good software reliability model has several
important characteristics :
1. Gives good predictions of future failure behaviour
2. Has parameters that are easy to measure
3. Is based on sound assumptions
4. Is widely applicable
5. Is simple
6. Is insensitive to noise
Page 34 of 87
• A good model considerably enhances communication on a project. The advantages are significant even if the projections are made only with limited accuracy.
• Developing a practically useful model may require several person-years, but its application requires only a small fraction of project resources.
• For research investigations a range of models may be applied, but for real projects the application of more than one or two models is conceptually and economically impractical.
Page 35 of 87
RELIABILITY MODELS

 #   Model                        Criteria                Result
                              1   2   3   4   5   6
 1   Jelinski-Moranda          y   y   -   y   y   y       y
 2   Weibull                   y   n   y   y   n   n       n
 3   Duane Model               -   y   y   y   y   y       y
 4   Rayleigh Model            n   y   n   -   y   n       n
 5   Shick-Wolverton           n   y   n   n   y   n       n
 6   Musa Basic Model          y   y   -   y   y   y       y
 7   Goel-Okumoto              y   y   -   y   y   y       y
 8   Bayesian Jelinski-Moranda y   n   y   y   n   y       -
 9   Littlewood Model          y   n   y   y   n   y       -
10   Bayesian Littlewood       y   n   y   y   n   y       -
11   Keiller-Littlewood        y   n   y   y   n   y       -
12   Littlewood-Verrall        y   n   y   y   n   y       -
13   Schneidewind Model        y   y   -   y   y   y       y
14   Musa-Okumoto (LP)         y   y   y   y   y   y       y
15   Littlewood NHPP           y   n   y   y   n   y       -
Page 36 of 87
Musa’s MODEL
λ(μ) = λ0 [1 - μ/ν0]
EXAMPLE
Assume that a program will experience 100 failures in infinite time. It has now experienced 50. The initial failure intensity was 10 failures/cpu hr.
- The current failure intensity?
λ(50) = 10 (1 - 50/100) = 5 failures/cpu hr
- The decrement of failure intensity per failure?
dλ/dμ = -(λ0/ν0) = -0.1 per cpu hr
Page 37 of 87
[Figure: the failure intensity function plotted against mean failures experienced, μ.]
Observing the relationship between the failures and failure intensity, we can derive:
μ(τ) = ν0 (1 - exp(-(λ0/ν0) τ))
Page 38 of 87
[Figure: mean failures experienced versus execution time.]
Page 39 of 87
The failure intensity as a function of execution time can now be expressed as
λ(τ) = λ0 exp(-(λ0/ν0) τ)
EXAMPLE (CONTD.)
The failure intensity at 10 cpu hr?
λ(10) = 3.68 failures/cpu hr
At 100 cpu hr?
λ(100) = 0.000454 failures/cpu hr
Page 40 of 87
Derived Quantities
Assume that you have chosen a failure
intensity objective for the software product
being developed. Suppose some portion of
the failures are being removed through
correction of their associated faults.
Then we can use the objective & the present
value of failure intensity to determine the
additional expected number of failures that
must be experienced to reach that objective.
For the basic model,
Δμ = (ν0/λ0) (λp - λf)
Page 41 of 87
Additional Failures to Reach the Failure Intensity Objective
The expected number of failures that will be experienced between a present failure intensity of 3.68 failures/cpu hr and an objective of 0.000454 failures/cpu hr?
Δμ = (100/10) (3.68 - 0.000454) ≈ 37 failures
Page 42 of 87
Similarly, we can determine the additional execution time required to reach the failure intensity objective:
Δτ = (ν0/λ0) ln(λp/λf)
Example (Contd.)
Calculate the cpu time required to reach a failure intensity objective of 0.000454 failures/cpu hr if the present value of failure intensity is 3.68 failures/cpu hr.
Δτ = (100/10) ln(3.68/0.000454)
= 90 cpu hr
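The basic-model relations used in this and the preceding examples can be collected into a small sketch; the function names are mine, and λ0 = 10 failures/cpu-hr and ν0 = 100 failures are the example's parameters:

```python
import math

lam0 = 10.0   # initial failure intensity, failures/cpu-hr
nu0 = 100.0   # total expected failures in infinite time

def intensity_at(mu):
    """lambda(mu) = lam0 * (1 - mu/nu0)"""
    return lam0 * (1 - mu / nu0)

def intensity_at_time(tau):
    """lambda(tau) = lam0 * exp(-(lam0/nu0) * tau)"""
    return lam0 * math.exp(-(lam0 / nu0) * tau)

def extra_failures(lam_p, lam_f):
    """Delta-mu = (nu0/lam0) * (lam_p - lam_f)"""
    return (nu0 / lam0) * (lam_p - lam_f)

def extra_time(lam_p, lam_f):
    """Delta-tau = (nu0/lam0) * ln(lam_p/lam_f)"""
    return (nu0 / lam0) * math.log(lam_p / lam_f)

print(intensity_at(50))                       # → 5.0  failures/cpu-hr
print(round(intensity_at_time(10), 2))        # → 3.68 failures/cpu-hr
print(round(extra_failures(3.68, 0.000454)))  # → 37   failures
print(round(extra_time(3.68, 0.000454)))      # → 90   cpu-hr
```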
Page 43 of 87
APPLICATION OF MUSA MODEL
• Evaluating the cost effectiveness of a design review.
• 50,000 source instructions.
• Previous experience indicates 8 faults/1000 instructions and an initial failure intensity of 10 failures/hour.
• Required failure intensity is 1 failure/10 hr.
• We need 5 person-hours of effort per hour of computer time, and a total of 8 person-hours per failure for failure identification and correction.
• In addition to testing time, the computer is required for ½ hour per failure.
Page 44 of 87
• Loaded salary is $100/hour.
• Computer time is priced at $1000/hour.
• It is found that design reviews reduce the fault figure to 6 faults/1000 instructions.
• Design reviews require 5 meetings attended by 6 persons on the average for 10 hours.
Page 45 of 87
[Figure: failure intensity λ versus mean failures μ for cases A and B, falling from λ0A and λ0B to the objective λf at μA (ν0A) and μB (ν0B).]
Page 46 of 87
[Figure: failure intensity λ versus time for the two cases, reaching λf at times tB and tA.]
Page 47 of 87
CASE A (WITHOUT REVIEWS)
λ0A = 10 failures/hr
ν0A = 50,000 × 8/1000 = 400 failures
μA = 396 failures
tA = 184 hrs
Total person hrs = 8 × 396 + 5 × 184 = 3168 + 920 = 4088 hrs
Total comp. hrs = 184 + ½ (396) = 382 comp. hrs
Total cost = $4088 × 100 + 382 × 1000 = $790,800
Page 48 of 87
CASE B (WITH REVIEWS)
λ0B = 10 × ¾ = 7.5 failures/hr
ν0B = 50,000 × 6/1000 = 300 failures
μB = 296 failures
tB = 173 hrs
Total person hrs = 8 × 296 + 5 × 173 = 2368 + 865 = 3233 hrs
Total comp. hrs = 173 + ½ (296) = 321 comp. hrs
Total cost = $3233 × 100 + 321 × 1000 = $644,300
Page 49 of 87
Total review effort = 5 × 6 × 10 = 300 person-hrs
Review cost = 300 × $100 = $30,000
Total cost = $644,300 + $30,000 = $674,300
• EFFICACY OF REVIEWS IS ESTABLISHED
Page 50 of 87
LIFE CYCLE COST OPTIMISATION
• The basis for optimization is the
assumption that reliability improvement is
obtained by more extensive testing, which
of course affects costs and schedules.
• Costs and Schedules for other phases are
assumed to be constant.
• The part of the development cost due to testing decreases with higher failure intensity (F.I.) objectives, while operational cost increases. Thus total cost has a minimum.
Page 51 of 87
Life Cycle Cost Optimization
[Figure: cost versus the failure intensity at the start of system test; system-test cost falls and operational cost rises with a higher failure intensity objective, so total cost has a minimum.]
Page 52 of 87
Software Maintenance
• Every piece of software must be maintained:
– To correct errors, if any
– To adapt the s/w to an ever-changing environment
– To improve efficiency
Page 53 of 87
CATEGORIES
1) CORRECTIVE MAINTENANCE
– Refers to modification initiated by defects in the software.
– Emergency fixes are known as PATCHING; they bring increased program complexity and unforeseen ripple effects.
Page 54 of 87
2) ADAPTIVE MAINTENANCE
It includes changing the software to match changes in the ever-changing environment: work patterns, business rules, h/w platform, s/w platform, and govt. policies.
Page 55 of 87
3) PERFECTIVE MAINTENANCE
It means improving processing efficiency or performance, or restructuring the software to improve changeability: expansion or enhancement of existing system functionality, and improvement in computational efficiency.
Page 56 of 87
Distribution of Efforts among Three Categories
[Chart: Corrective 17%, Adaptive 18%, Perfective 65%.]
Page 57 of 87
Distribution of efforts among various causes
[Chart: Enhancements for Users 41.8%, Data Env. Adaption 17.3%, Emergency Debugging 12.4%, Routine Debugging 9.3%, H/W & O.S. Change 6.2%, Documentation Improvement 5.5%, Code Efficiency Improvement 4.0%, Others 3.4%.]
Page 58 of 87
Factors affecting Maintainability
1. Average Cyclomatic Complexity (ACC)
CC = e - n + 2P
ACC = average of the CC of all modules
2. Readability of Source Code (RSC)
CR = LOC/LOM
3. Documentation Quality (DOQ)
Judged through the Fog Index
4. Understandability of Software (UOS)
Based on the use of symbols in the source code and other supporting documents
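As a sketch of the first factor, ACC can be computed from per-module control-flow graphs using CC = e - n + 2P; the edge and node counts below are hypothetical:

```python
def cyclomatic(e, n, p=1):
    """McCabe cyclomatic complexity CC = e - n + 2P for a control-flow
    graph with e edges, n nodes, and P connected components."""
    return e - n + 2 * p

# Hypothetical modules, each given as (edges, nodes)
modules = [(9, 8), (12, 8), (7, 6)]          # CC values: 3, 6, 3

acc = sum(cyclomatic(e, n) for e, n in modules) / len(modules)
print(acc)  # → 4.0
```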
Page 59 of 87
Proposed Fuzzy Model
[Diagram: the four inputs ACC, RSC, DOQ, and UOS feed a fuzzification module; an inference engine consults a knowledge base (data base and rule base); a defuzzification module produces the maintainability output.]
Page 60 of 87
Input Variable ACC
[Figure: membership functions low, av, and high over ACC, with break points at 3, 8, and 13.]
Page 61 of 87
Input Variable RSC
[Figure: membership functions poor, avg, and good over CR, with break points between 0 and 8 (at about 4, 5, 6, and 7).]
Page 62 of 87
Input Variable DOQ
• FOG = 0.4 × [(# of words / # of sentences) + %age of words with 3 or more syllables]
[Figure: membership functions low, med, and high over the Fog Index, with break points between 9 and 18 (at about 10, 12, and 16).]
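The Fog Index formula translates directly into code; the word, sentence, and complex-word counts below are illustrative:

```python
def fog_index(words, sentences, pct_complex):
    """Gunning Fog Index: 0.4 * (average sentence length
    + percentage of words with 3 or more syllables)."""
    return 0.4 * (words / sentences + pct_complex)

# e.g. a documentation sample: 180 words in 12 sentences, 15% complex words
print(fog_index(180, 12, 15))  # → 12.0
```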
Page 63 of 87
Input Variable UOS
[Figure: membership functions less, moderate, and more over the number of symbols, with break points at 500, 700, and 900.]
Page 64 of 87
Output Variable Maintainability
[Figure: membership functions v_poor, poor, avg, good, and very_good over the range 0 to 8, with break points at 2, 4, and 6.]
Page 65 of 87
Rule Base for Fuzzy Model
[Figure: the complete rule base of 81 rules (Rule 1 to Rule 81), laid out as a grid over the membership functions of DOQ (low/med/high) and UOS (less/moderate/more), in three bands corresponding to ACC being low, av, and high; within each band, sub-areas correspond to RSC being good, avg, and poor. Each cell gives the output membership function for that input combination, e.g. Rule 1 gives v_good and Rule 81 gives v_poor.]
Page 66 of 87
Rule Base Representation
• Whenever a fuzzy model is to be
simulated, the rule base is usually stored
as
• If (ACC is low) and (RSC is good) and
(DOQ is high) and (UOS is more) then
(maintainability is v_good)
• If (ACC is av) and (RSC is good) and (DOQ
is high) and (UOS is more) then
(maintainability is v_good)
• If (ACC is high) and (RSC is poor) and
(DOQ is high) and (UOS is more) then
(maintainability is good)
Page 67 of 87
Problems with Existing
Representation
• Takes a lot of storage space
• Takes more time to find the rules that fire for a given set of inputs
Page 68 of 87
New Representation of Rule Base
• No need to store membership function of
each input in the rule
• Only output membership function is
stored
• A relation is established between every
combination of membership functions of
inputs and corresponding applicable rule
number with the help of weightages
• Outputs must be stored in a specific order
of rule numbers
Page 69 of 87
Working of New Representation
Total_Rules = ∏ (i = 1 to N) M(i)
• The weightages of the membership functions of every input are defined as:
W(1,1) = 1
W(i,j) = W(i,j-1) + W(i,1), if j > 1
W(i,1) = W(i-1, M(i-1)), if i > 1
• Here i represents the i-th input, j the j-th membership function of the i-th input, and M(i) the number of membership functions of input i.
Page 70 of 87
How to Find Rule Numbers (getting fired)
• For a particular combination of membership functions of the inputs, the rule number that will get fired is found as:
Rule_number = Σ (i = 1 to N) W(i,k) - Σ (i = 1 to N) W(i,1) + 1
where k is the index of the membership function chosen for input i.
Page 71 of 87
Fuzzy Model of S/W Maintainability

R.No   O/p       R.No   O/p       R.No   O/p
1      v_good    2      v_good    3      good
4      v_good    5      good      6      avg
7      avg       8      avg       9      poor
10     good      11     avg       12     avg
…      …         …      …         …      …
79     v_poor    80     v_poor    81     v_poor
Page 72 of 87
Fuzzy Model of S/w Maintainability
Consider the values of inputs such that
ACC belongs to low, RSC belongs to avg
and poor, DOQ belongs to med, and UOS
belongs to more and moderate.
The total number of rules that will get fired corresponding to this input set will be 2 × 2 = 4.
Page 73 of 87
Fuzzy Model of S/w Maintainability
Rule_Number 1 = 1 (weightage of low of ACC)
              + 6 (weightage of avg of RSC)
              + 18 (weightage of med of DOQ)
              + 27 (weightage of more of UOS)
              - 40 (= 1 + 3 + 9 + 27) + 1 = 13
Rule_Number 2 = 1 + 9 + 18 + 27 - 40 + 1 = 16
Rule_Number 3 = 1 + 6 + 18 + 54 - 40 + 1 = 40
Rule_Number 4 = 1 + 9 + 18 + 54 - 40 + 1 = 43
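The weightage recurrences and the rule-number formula can be checked mechanically; this sketch rebuilds the weight table for four inputs with three membership functions each, then reproduces the four rule numbers of the worked example (membership-function indices are 0-based, with the order chosen to match the example's weightages):

```python
def weight_table(m):
    """Weightages W[i][j] per the recurrences: W[0][0] = 1,
    W[i][j] = W[i][j-1] + W[i][0], and W[i][0] = W[i-1][m[i-1]-1]."""
    w = [[1]]
    for j in range(1, m[0]):
        w[0].append(w[0][j - 1] + w[0][0])
    for i in range(1, len(m)):
        first = w[i - 1][m[i - 1] - 1]   # last weight of the previous input
        row = [first]
        for j in range(1, m[i]):
            row.append(row[j - 1] + first)
        w.append(row)
    return w

def rule_number(w, combo):
    """Rule fired by a combination of membership-function indices."""
    return sum(w[i][k] for i, k in enumerate(combo)) - sum(r[0] for r in w) + 1

W = weight_table([3, 3, 3, 3])  # ACC, RSC, DOQ, UOS: 3 membership functions each
print(W)  # → [[1, 2, 3], [3, 6, 9], [9, 18, 27], [27, 54, 81]]
# ACC weight 1, RSC weight 6 (avg), DOQ weight 18 (med), UOS weight 27 (more)
print(rule_number(W, (0, 1, 1, 0)))  # → 13
```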
Page 74 of 87
Fuzzy Model of S/w Maintainability

        Membership   Membership   Membership
        Function 1   Function 2   Function 3
ACC          1            2            3
RSC          3            6            9
DOQ          9           18           27
UOS         27           54           81
Page 75 of 87
Fuzzy Model of S/w Maintainability
• Assume ACC= 2, RSC= 5.5, DOQ=15,
UOS= 350.
• ACC = 2 belongs to fuzzy set low with
membership grade of 1
• RSC = 5.5 belongs to fuzzy set good with
membership grade of 0.25 and to fuzzy set
avg with membership grade of 0.5
• DOQ = 15 belongs to fuzzy set med with membership grade of 1
• UOS = 350 belongs to fuzzy set more with
membership grade of 1
Page 76 of 87
Fuzzy Model of S/w Maintainability
• With these input values we find that rule
numbers 10 and 13 fire.
• During composition of these rules we get the following:
Min (1, 0.25, 1, 1) = 0.25
Min (1, 0.5, 1, 1) = 0.5
• When these two rules are implicated, the first rule gives the maintainability value v_good to an extent of 0.25 and the second gives the value good to an extent of 0.5.
Page 77 of 87
Output Computation
[Figure: the output membership functions clipped at 0.25 (v_good) and 0.5 (good) over the maintainability range 2 to 6.]
Page 78 of 87
Defuzzification
The defuzzified (crisp) value of maintainability = 3.2
Page 79 of 87
Maintenance Time for Typical Projects

Project   ACC     RSC    DOQ     UOS   Maintainability   Average corrective
Number                                                   maint-time
1          8.48   3.80   11.31   355        3.61              11.30
2         11.60   7.73   14.83   528        7.37              21.70
3         12.62   5.67   10.56   492        5.11              18.30
4          5.39   8.32   12.14   567        6.81              18.00
5         14.51   8.90   12.30   363        8.00              21.10
6          7.50   7.42    8.86   390        4.56              16.10
7         10.73   9.22   12.45   451        7.07              17.90
8          9.11   6.90   13.28   479        6.00              17.20
Page 80 of 87
[Figure: ACC and maint-time plotted against project numbers 1 to 8.]
Plot of ACC versus maint-time
Page 81 of 87
[Figure: RSC and maint-time plotted against project numbers 1 to 8.]
Plot of RSC versus maint-time
Page 82 of 87
[Figure: DOQ and maint-time plotted against project numbers 1 to 8.]
Plot of DOQ versus maint-time
Page 83 of 87
[Figure: UOS and maint-time plotted against project numbers 1 to 8.]
Plot of UOS versus maint-time
Page 84 of 87
[Figure: maintainability and maint-time plotted against project numbers 1 to 8.]
Plot of values of maintainability versus maint-time
Page 85 of 87
CONCLUSION
• Model measures software maintainability
based on four important aspects of
software – ACC, RSC, DOQ, and UOS
• Fuzzy approach used to integrate these
four aspects
• A new efficient representation of a rule
base has been proposed
• The output can help software project managers judge the maintenance effort the software will need
Page 86 of 87
Thank you
Page 87 of 87