Politehnica University of Timisoara
Mobile Computing, Sensors Network and Embedded Systems Laboratory
Embedded systems testing
Testing design techniques
instructor: Razvan BOGDAN
Outlines
1. The test development process
2. Categories of Test Design Techniques
3. Specification-based or Black-box Techniques
4. Structure-based or White-box Techniques
5. Experience-based Techniques
6. Choosing Test Techniques
THE TEST DEVELOPMENT PROCESS
• Deriving test cases from requirements
• Designing test cases must be a controlled process.
• Test cases can be created in a formal way or in an informal way, depending on
the project constraints and on the maturity of the process that is in use.
[Diagram: the test object and the requirements on the test object deliver test requirements and
test criteria, from which test cases (test case 1 … test case 6) are derived and grouped into
test scenarios]
THE TEST DEVELOPMENT PROCESS
• Traceability
• Tests should be traceable: which test case was included in the test portfolio
based on which requirement?
• The consequences of changes in the requirements on the tests to be made can
be identified directly.
• Traceability also helps to determine requirement coverage.
THE TEST DEVELOPMENT PROCESS
• Definitions
• Test object:
The subject to be examined: a document or a piece of software in the software
development process.
• Test condition:
An item or an event: a function, a transaction, a quality criterion or an element in the
system.
• Test criteria:
The test object has to conform to the test criteria in order to pass the test.
• Test execution schedule:
A scheme for the execution of the test procedures. The test procedures are included in the
test execution schedule in their context and in the order in which they are to be executed.
THE TEST DEVELOPMENT PROCESS
• Test case description according to IEEE 829
• Preconditions: situation previous to test execution or characteristics of the test
object before conducting the test case.
• Input values: description of the input data on the test object.
• Expected results: output data that the test object is expected to produce.
• Post conditions: characteristics of the test object after test execution,
description of its situation after the test.
• Dependencies: order of execution of the test cases, reason for dependencies.
• Distinct identification: Id or key in order to link, for example, an error report to
the test case where it appeared.
• Requirements: characteristics of the test object that the test case will examine.
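For illustration only, the fields listed above could be captured in a simple record type. The struct layout and field names below are assumptions of this sketch, not something prescribed by IEEE 829:

/* Illustrative record mirroring the IEEE 829 test case description fields.
   Field names and types are assumptions of this sketch, not a standard API. */
typedef struct {
    const char *id;             /* distinct identification (key) */
    const char *requirements;   /* requirement(s) the test case examines */
    const char *preconditions;  /* state of the test object before execution */
    const char *input_values;   /* input data applied to the test object */
    const char *expected;       /* expected results / output data */
    const char *postconditions; /* state of the test object after execution */
    const char *dependencies;   /* execution order constraints, if any */
} TestCase;

/* Hypothetical example entry, e.g. for the percentage-value example used later. */
static const TestCase tc_example = {
    .id             = "TC-001",
    .requirements   = "REQ-PERCENT-01: accept integer percentages 0..100",
    .preconditions  = "input field is enabled and empty",
    .input_values   = "x = 10",
    .expected       = "value accepted, grey bar displayed",
    .postconditions = "value stored, no error message shown",
    .dependencies   = "none",
};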
THE TEST DEVELOPMENT PROCESS
• Combining test cases
• Test cases may be combined to test suites and test scenarios.
• A test procedure specification: defines the sequence of actions for the execution
of a single test case or a test suite. It is a script or screenplay for the test
describing the steps, handlings and/or activities required for test execution.
• With the use of adequate tools, test suites can be coded to be executed
automatically.
• The (dynamic) test plan states the sequence of tests planned, who is to do them
and when. Constraints that have to be considered are priorities, availability of
resources, test infrastructure etc.
• The test execution schedule defines the order of execution of test
procedures and automated test scripts, used for prioritization, regression tests,
etc.
THE TEST DEVELOPMENT PROCESS
• Summary
•Test cases and test suites are derived from the test object requirements or
characteristics
•Components of a test case description are:
• key/Id
• input values
• pre-conditions
• expected results
• post-conditions
• dependencies
• requirements out of which the test case was derived
Outlines
1. The test development process
2. Categories of Test Design Techniques
3. Specification-based or Black-box Techniques
4. Structure-based or White-box Techniques
5. Experience-based Techniques
6. Choosing Test Techniques
CATEGORIES OF TEST DESIGN TECHNIQUES
• Black box and white box testing
• Dynamic testing
  • Black-box techniques: equivalence partitioning, boundary value analysis, state
    transition testing, decision tables, use case based testing
  • White-box techniques: statement coverage, branch coverage, condition coverage,
    path coverage
  • Experience-based techniques
• Static testing (analytical QA)
  • Reviews / walkthroughs
  • Control flow analysis, data flow analysis, compiler metrics/analyzers
•Dynamic testing is divided into two categories/groups
•The grouping is done on the basis of the method used to derive test cases
•Every group has its own set of methods for designing test cases
CATEGORIES OF TEST DESIGN TECHNIQUES
• Specification-based or Black-box Techniques
• The tester looks at the test object as a black box
• Internal structure of the test object is irrelevant or unknown
• Test cases are derived/selected based on specification analysis (functional
and non-functional) of a component system
• Testing of input/output behaviour
• Functionality is the focus of attention!
• Black-box technique is also called functional testing or specification oriented
testing
Black-box test case design: test cases are derived from the specification of the test object;
the internal program structure is irrelevant; the test exercises selected combinations of
input/output data.
CATEGORIES OF TEST DESIGN TECHNIQUES
• Structure-based or White-box Techniques
• The tester knows the internal structure of the program/code
•i.e. component hierarchy, control flow, data flow, etc.
•Test cases are selected on the basis of internal program code/program structure
•During testing, interference with the test execution is possible
• Structure of the program is the focus of attention!
•White-box technique is also called structure-based testing or control flow based testing
White-box test case design: test cases are derived from the program structure; the test process
can be controlled externally; the control flow within the test object is analysed during test
execution.
CATEGORIES OF TEST DESIGN TECHNIQUES
• Categories of test design methods – overview
• Specification-based methods
  • Test cases are selected in accordance with the functional model (specification) of the software.
  • The coverage of the specification can be measured (e.g. which percentage of the
    specification is covered by test cases).
• Structure-based methods
  • The internal structure of the test object is used to design test cases
    (code/statements, menus, calls, etc.)
  • The coverage percentage is measured and used as a basis for creating additional
    test cases
• Experience-based methods
  • Knowledge and experience about the test object and its environment are
    the basis for designing test cases
  • Knowledge and experience about possible weak spots, probable errors
    and former errors are used to determine test cases.
CATEGORIES OF TEST DESIGN TECHNIQUES
• Summary
• Test cases can be designed using different methods
•If specification functionality is the focus of testing, the methods used are called
specification oriented methods or black-box methods.
•If the internal structure of an object is investigated, the methods used are called
structure oriented methods or white-box methods.
•Experience based methods use knowledge and skills of the personnel involved in
test case design.
Outlines
1. The test development process
2. Categories of Test Design Techniques
3. Specification-based or Black-box Techniques
4. Structure-based or White-box Techniques
5. Experience-based Techniques
6. Choosing Test Techniques
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Overview
• The following black-box methods will be explained in detail:
• Equivalence partitioning
• Boundary value analysis
• Decision table testing and cause-effect graphing
• State transition testing
• Use case testing
• These are the most important and most popular methods
• Other black box methods are for instance
• Statistical testing
• Pairwise testing
• Smoke testing
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• General
• Functional testing is targeted at verifying the correctness and the
completeness of a function
•Are all specified functions available within the module?
•Do executed functions give correct results?
• The execution of the test cases should be done without high redundancy,
but nevertheless comprehensive
•Test as little as possible but
•Test as much as necessary
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning
• Equivalence class partitioning is what most testers do intuitively: they
divide the possible values into classes. Hereby they look at
• Input values of a program (usual use of EC-method)
• Output values of a program (rarely used EC-method)
• The range of defined values is grouped into equivalence classes, for
which the following rules apply:
• All values, for which a common behaviour of the program is expected, are
grouped together in one equivalence class
• Equivalence classes may not overlap and may not contain any gaps
• Equivalence classes may contain a range of values:
(e.g. 0<x<10) or a single value (e.g. x = “Yes”)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – valid and invalid
•The equivalence classes of each variable (element) may be divided further
• Valid EC: all values within the definition range are combined into one
equivalence class, if they are handled identically by the test object
• Invalid EC: we distinguish two cases for values outside of the definition
range:
• Values with a correct format but outside of the value range can be combined
into one or more equivalence classes
• Values with a wrong format generally build a separate EC
•Tests are performed using a single representative from each EC
• For every other value from the EC the same behaviour as for the chosen value is
expected
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example
• Equivalence classes are chosen for valid and invalid inputs
•If a value x is defined as 0 ≤ x ≤ 100, then we can initially identify three
equivalence classes:
1. x < 0         (invalid input values)
2. 0 ≤ x ≤ 100   (valid input values)
3. x > 100       (invalid input values)
• Further invalid EC can be defined, containing, but not limited to:
• Non-numerical inputs
• Numbers too big or too small
• Non-supported format for numbers
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – input variables
• All input values (elements) of the test object are identified, e.g.
•Fields of a GUI (e.g. system test)
•Parameters of a function (e.g. component test)
• A range for each input value is defined
•this range defines the sum of all valid Equivalence Classes (vEC)
•Invalid equivalence classes (iEC) result from the values outside of this
range
•values that are expected to be handled differently (known or suspected) are
assigned to a separate equivalence class.
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example
•A program expects a percentage value according to the following
requirements:
•Only integer values are allowed
•0 is the valid lower boundary of the range
•100 is the valid upper boundary of the range
•Valid are all the numbers from 0 to 100; invalid are all the negative numbers,
all numbers greater than 100, all decimal numbers and all non-numerical values
(e.g. “fred”)
•one valid equivalence class: 0 ≤ x ≤ 100
•1st invalid equivalence class: x < 0
•2nd invalid equivalence class: x > 100
•3rd invalid equivalence class: x is not an integer
•4th invalid equivalence class: x is non-numeric (n.n.)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example
•The percentage value will now be displayed in a bar chart. The following
additional requirements apply (both boundary values included):
•Values between 0 and 15: grey bar
•Values between 16 and 50: green bar
•Values between 51 and 85: yellow bar
•Values between 86 and 100: red bar
•Now there are four valid equivalence classes instead of one:
•1st valid equivalence class: 0 ≤ x ≤ 15
•2nd valid equivalence class: 16 ≤ x ≤ 50
•3rd valid equivalence class: 51 ≤ x ≤ 85
•4th valid equivalence class: 86 ≤ x ≤ 100
[Number line: <0 | 0–15 | 16–50 | 51–85 | 86–100 | >100]
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – picking representatives
• On the last step, one representative of each EC is determined as well as the
result to be expected for it
Variable                     Equivalence class     Representative
Percentage value (valid)     EC1: 0 ≤ x ≤ 15       +10
                             EC2: 16 ≤ x ≤ 50      +20
                             EC3: 51 ≤ x ≤ 85      +80
                             EC4: 86 ≤ x ≤ 100     +90
Percentage value (invalid)   EC5: x < 0            -10
                             EC6: x > 100          +200
                             EC7: x no integer     1.5
                             EC8: x non-numeric    fred
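To show how these classes and representatives relate to code under test, a possible (assumed) implementation of the bar-chart mapping with a small driver over the integer representatives might look as follows; the function and enum names are hypothetical:

#include <stdio.h>

typedef enum { BAR_GREY, BAR_GREEN, BAR_YELLOW, BAR_RED, BAR_INVALID } Bar;

/* Maps an integer percentage to a bar colour according to the specification:
   0..15 grey, 16..50 green, 51..85 yellow, 86..100 red; everything else invalid. */
static Bar bar_for(int x) {
    if (x < 0 || x > 100) return BAR_INVALID;   /* EC5, EC6 */
    if (x <= 15)  return BAR_GREY;              /* EC1 */
    if (x <= 50)  return BAR_GREEN;             /* EC2 */
    if (x <= 85)  return BAR_YELLOW;            /* EC3 */
    return BAR_RED;                             /* EC4 */
}

int main(void) {
    /* One representative per equivalence class; EC7 (1.5) and EC8 ("fred")
       are not representable as int and would be tested at the parsing layer. */
    const int reps[] = { 10, 20, 80, 90, -10, 200 };
    for (int i = 0; i < (int)(sizeof reps / sizeof reps[0]); i++)
        printf("x = %4d -> class %d\n", reps[i], (int)bar_for(reps[i]));
    return 0;
}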
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Exercise: Equivalence Partitioning
• From a given specification (online shop) please extract:
•All input values
•Equivalence classes for each of the input values
•Valid equivalence classes
•Invalid equivalence classes
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example 2/1
• Analysing the specification
•A piece of code computes the price of a product based on its value, a discount in
% and shipping costs (6, 9 or 12 EURO, depending on shipping mode)
•Assumptions:
  •Value of goods is given as a positive number with 2 decimal places
  •Discount is a percentage value without decimal places, between 0% and 100%
  •Shipping costs can only be 6, 9 or 12

Variable         Equivalence class              Status    Representative
Value of goods   EC11: x ≥ 0                    valid     1000.00
                 EC12: x < 0                    invalid   -1000.00
                 EC13: x non-numerical value    invalid   fred
Discount         EC21: 0% ≤ x ≤ 100%            valid     10%
                 EC22: x < 0%                   invalid   -10%
                 EC23: x > 100%                 invalid   200%
                 EC24: x non-numerical value    invalid   fred
Shipping costs   EC31: x = 6                    valid     6
                 EC32: x = 9                    valid     9
                 EC33: x = 12                   valid     12
                 EC34: x ∉ {6, 9, 12}           invalid   4
                 EC35: x non-numerical value    invalid   fred
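For orientation, a possible implementation of the function under test might look roughly as follows; the price formula, the function name total_price and the "negative return value means invalid input" convention are assumptions of this sketch, not part of the specification above:

#include <stdio.h>

/* Sketch of the function under test in example 2: price from value of goods,
   discount in percent and shipping costs (6, 9 or 12 EUR). */
static double total_price(double value, int discount_percent, int shipping)
{
    if (value < 0.0)                                      return -1.0;  /* EC12 */
    if (discount_percent < 0 || discount_percent > 100)   return -1.0;  /* EC22, EC23 */
    if (shipping != 6 && shipping != 9 && shipping != 12) return -1.0;  /* EC34 */
    return value * (100 - discount_percent) / 100.0 + shipping;         /* valid ECs */
}

int main(void)
{
    /* Representatives of EC11/EC21/EC31 -> 906.00 under the assumed formula. */
    printf("%.2f\n", total_price(1000.00, 10, 6));
    return 0;
}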
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example 2/2
• Test cases for valid EC:
•Valid equivalence classes provide the following combinations or test cases: T01, T02 and T03
Test case   Value of goods    Discount      Shipping costs
T01         1000.00 (EC11)    10% (EC21)    6  (EC31)
T02         1000.00 (EC11)    10% (EC21)    9  (EC32)
T03         1000.00 (EC11)    10% (EC21)    12 (EC33)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example 2/3
• Test cases for invalid EC:
•The following test cases were created using the invalid EC, each in combination with valid
ECs of other elements:
Test case   Invalid EC tested                     Value of goods   Discount   Shipping costs
T04         EC12 (value of goods < 0)             -1000.00         10%        6
T05         EC13 (value of goods non-numerical)   fred             10%        6
T06         EC22 (discount < 0%)                  1000.00          -10%       6
T07         EC23 (discount > 100%)                1000.00          200%       6
T08         EC24 (discount non-numerical)         1000.00          fred       6
T09         EC34 (shipping not in {6, 9, 12})     1000.00          10%        4
T10         EC35 (shipping non-numerical)         1000.00          10%        fred
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – example 2/4
• All test cases (overview):
•10 test cases are derived: 3 positive (valid values) and 7 negative (invalid values) test
cases:

Test case   Value of goods   Discount   Shipping costs   Status
T01         1000.00          10%        6                valid values
T02         1000.00          10%        9                valid values
T03         1000.00          10%        12               valid values
T04         -1000.00         10%        6                invalid value of goods
T05         fred             10%        6                invalid value of goods
T06         1000.00          -10%       6                invalid discount
T07         1000.00          200%       6                invalid discount
T08         1000.00          fred       6                invalid discount
T09         1000.00          10%        4                invalid shipping costs
T10         1000.00          10%        fred             invalid shipping costs
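These test cases lend themselves to a table-driven test. The sketch below reuses the total_price() function assumed in the earlier sketch, so the expected values follow from that assumed formula; the non-numerical representatives ("fred") cannot be expressed as numbers here and would have to be exercised at the input-parsing layer instead:

#include <stdio.h>
#include <math.h>

/* Same assumed computation as in the earlier sketch, not the real shop code. */
static double total_price(double value, int discount, int shipping) {
    if (value < 0.0 || discount < 0 || discount > 100 ||
        (shipping != 6 && shipping != 9 && shipping != 12))
        return -1.0;                               /* invalid input */
    return value * (100 - discount) / 100.0 + shipping;
}

struct Row { const char *id; double value; int discount; int shipping; double expected; };

int main(void) {
    /* One row per numeric test case; expected values assume the formula above. */
    const struct Row rows[] = {
        { "T01",  1000.00,  10,  6, 906.0 },
        { "T02",  1000.00,  10,  9, 909.0 },
        { "T03",  1000.00,  10, 12, 912.0 },
        { "T04", -1000.00,  10,  6,  -1.0 },   /* EC12 */
        { "T06",  1000.00, -10,  6,  -1.0 },   /* EC22 */
        { "T07",  1000.00, 200,  6,  -1.0 },   /* EC23 */
        { "T09",  1000.00,  10,  4,  -1.0 },   /* EC34 */
    };
    int failed = 0;
    for (int i = 0; i < (int)(sizeof rows / sizeof rows[0]); i++) {
        double got = total_price(rows[i].value, rows[i].discount, rows[i].shipping);
        if (fabs(got - rows[i].expected) > 1e-9) {
            printf("%s FAILED: got %.2f, expected %.2f\n", rows[i].id, got, rows[i].expected);
            failed++;
        }
    }
    printf("%d test case(s) failed\n", failed);
    return failed != 0;
}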
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – output based
•Equivalence classes can also be built, based on the expected output values
•The method is used in analogy, applied to the output values
•The variable (element) is thus the output (for example, a field value on
the GUI)
•Equivalence classes are built for all possible outputs defined
•A representative is determined for each equivalence class of output values
•The input leading to the representative output value is then determined
•This causes higher cost and effort, because the input values have to be
derived backwards from the given outputs
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – in general /1
•Partitioning
•The quality of the test depends on how precisely the
variables/elements are partitioned into equivalence classes
•ECs that were not identified carry the risk of overlooking possible
defects, since the representatives used do not cover all possibilities
•Test cases
•Equivalence class method provides test cases for which a
representative still has to be chosen
•Test data combinations are selected by defining the representative or
representatives of each equivalence class
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – in general /2
•Choosing representatives
•Any value within EC can be a representative. Optimal are:
•Typical values (used often)
•Problem values (suspected failures)
•Boundary values (on the edge of the EC)
•Representatives of valid EC may be combined
•Representatives of invalid EC may not be combined
•Representatives of invalid EC may only be combined with valid
representatives of other EC
•For test cases, representatives of invalid EC should always be combined with
the same values of valid EC of the other elements (standard combinations)
•Choosing representatives assumes that the function within the program uses
comparison operations
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – coverage
• Equivalence class coverage can be used as exit criteria to end testing
activities
•EC coverage = (Number of ECs tested / Number of ECs defined) * 100%
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – transition
•The transition from the specification or definition of functionality to the
creation of equivalence classes
•Often a difficult task due to the lack of precise and complete
documentation
•Boundaries that are not defined or missing descriptions make it difficult to
define equivalence classes
•Often, contact with the customer is needed to complete information
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Equivalence Partitioning – benefits
• Benefits
•Systematic test case design method, i.e. a defined coverage can be achieved
with a minimum number of test cases
•Partitioning value ranges in equivalence classes on the basis of
specifications covers the functional requirements
•Prioritizing equivalence classes can be used to prioritize test cases
(Inputs that are used rarely are to be tested last)
•Test of known exceptions is covered by test cases on the basis of negative
equivalence classes
•Equivalence partitioning is applicable at all levels of testing
•Can be used to achieve input and output coverage goals
•Can be applied to human input or via interfaces to a system or interface
parameters in integration testing
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Boundary analysis /1
•Boundary analysis extends equivalence class partitioning by introducing a
rule for the choice of representatives
•The edge values of the equivalence class are to be tested intensively
•Why put more attention to the edges?
•Often, the boundaries of value ranges are not well defined or lead to
different interpretations
•Checking if the boundaries were programmed correctly
•Please note:
•Experience shows that errors occur very frequently at the boundaries of
value ranges!
•Boundary value analysis can be applied at all test levels. It is easy to apply,
and its defect-finding capability is high when detailed specifications are available.
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Boundary analysis /2
•Boundary analysis assumes that:
•The equivalence class consists of a continuous value range (not a single value or a
group of discrete values)
•Boundaries can be defined for the value range
•As an extension to equivalence class partitioning, boundary value analysis is a
method suggesting the choice of representatives
•Equivalence class partitioning:
•Examines one (typical) value of the equivalence class
•Boundary analysis
•Examines the boundaries and the neighbouring values
•Uses the following scheme for a value range "bottom value ≤ x ≤ top value":
  bottom value − δ, bottom value, bottom value + δ,
  top value − δ, top value, top value + δ
  where δ is the smallest step defined for the value (for example 1 for integer values)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Defining boundary values
•The basic scheme can only be applied when the value range has been defined
accordingly
•In this case, additional testing is needed for a value in the middle of the value
range
•If the EC is defined as single numerical value, for example x=5, the
neighbouring value will be used as well
•The representatives (of the class and its neighbouring values) are: 4, 5 and 6
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Boundary analysis for invalid EC
•Boundary values make little sense for invalid equivalence classes
•Representatives of the invalid EC at the boundary to a valid EC are already
covered through the basic scheme
•For value ranges defined as a set of discrete values, generally no boundaries can
be defined
•For example: single, married, divorced, widowed
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Boundary analysis example 3a
•Example 3a:
•Value range for a discount in %: 0.00 ≤ x≤100.00
•Definition of EC
•3 classes:
•EC: x <0
•EC: 0.00 ≤ x≤100.00
•EC: x > 100
•Boundary analysis
•Extends the representatives of the 2nd (valid) EC to:
•-0.01; 0.00; 0.01; 99.99; 100.00; 100.01
•Please note
•Instead of one representative for the valid EC, there are now six
representatives (four valid and two invalid)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Boundary analysis example 3b
•Basic scheme: choose three values to be tested – the exact boundary and two
neighbouring values (within and outside the EC)
•Alternative point of view: since the boundary value belongs to the EC, only two
values are needed for testing: one within and one outside the EC
•Example 3b
•value range for a discount in %: 0.00≤ x ≤ 100.00
•valid EC: 0.00≤ x ≤ 100.00
•Boundary analysis additional representatives are: -0.01; 0.00; 100.00; 100.01
•0.01 – same behaviour as 0.00
•99.99 – same behaviour as 100.00
• A programming error caused by a wrong comparison operator will be found with
the two boundary values (see the sketch below)
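A small sketch of why these boundary values matter: with a typical comparison-operator fault ('<' instead of '≤' at the upper boundary), only the representative exactly on the boundary reveals the difference. Both functions are hypothetical stand-ins for the discount check:

#include <stdio.h>

/* Intended behaviour: accept 0.00 <= x <= 100.00. The defective version uses '<'
   instead of '<=' at the upper boundary - a typical comparison-operator fault. */
static int is_valid_discount_correct(double x)   { return x >= 0.00 && x <= 100.00; }
static int is_valid_discount_defective(double x) { return x >= 0.00 && x <  100.00; }

int main(void) {
    /* Boundary-value representatives from example 3b. */
    const double reps[] = { -0.01, 0.00, 100.00, 100.01 };
    for (int i = 0; i < 4; i++) {
        int ok  = is_valid_discount_correct(reps[i]);
        int def = is_valid_discount_defective(reps[i]);
        printf("x = %6.2f  expected %d  defective version %d%s\n",
               reps[i], ok, def, ok != def ? "   <-- defect revealed" : "");
    }
    return 0;
}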
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /1
•Equivalence class partitioning and boundary analysis deal with isolated
input conditions
•However, an input condition may have an effect only in combination with
other input conditions
•All previously described methods do not take into account the effects of
dependencies and combinations
•Using the full set of combination of all input equivalence classes usually
leads to a very high number of test cases (test case explosion)
•With the help of cause-and-effect graphs and the decision tables derived
from them, the amount of possible combinations can be reduced
systematically to a subset
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /2
•A cause-and-effect diagram uses a formal language
•A cause-and-effect diagram is created by translating the (mostly informal)
specification of a test object into a formal language
•The test object is subject to a certain number of effects, which are traced back to
their respective causes
•Elements / symbols:
•Assertion (if cause A – then effect E)
•Negation (if cause A – then no effect E)
•Or (if cause A or B – then effect E)
•And (if cause A and B – then effect E)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /3
•Other elements of a cause-and-effect diagram:
•Exclusive (Either cause A or cause B)
•Inclusive (At least one of two causes: A or B)
•One and only one (One and exactly one of two causes: A or B)
•Required (If cause A then also cause B)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /4
•Example 5: Online-Banking
•The user has identified himself via account number and PIN. If the account has
sufficient coverage, he is able to make a transferal. To do this, he must enter the
correct details of the recipient and a valid TAN.
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /5
•Example 5: Online-Banking
Preconditions (Causes)   T01   T02   T03   T04
Enough coverage          Yes   No    -     -
Correct recipient        Yes   -     No    -
Valid TAN                Yes   -     -     No

Activities (Effects)     T01   T02   T03   T04
Do transferal            Yes   No    No    No
Mark TAN as used         Yes   No    No    No
Deny transferal          No    Yes   Yes   No
Request TAN again        No    No    No    Yes
•Each table column represents a test case
•Creating a decision table:
•Choose an effect
•Trace back along the diagram to identify the cause
•Each combination of causes represents a column of the decision table (a test case)
•Identical combinations of causes leading to different effects may be merged to form a single
test case (a data-driven sketch of the table above follows below)
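One way to keep such a decision table close to the tests is to encode it as data, one column per test case. The struct below is an illustrative sketch: the field names and the don't-care encoding are assumptions, and the columns T01–T04 are taken from the reconstructed table above:

#include <stdio.h>

/* Causes and effects of the online-banking example, one column per test case.
   1 = yes, 0 = no, -1 = "don't care" for causes. Names are assumptions of this sketch. */
typedef struct {
    const char *id;
    int enough_coverage, correct_recipient, valid_tan;          /* causes */
    int do_transfer, mark_tan_used, deny_transfer, request_tan; /* expected effects */
} Column;

static const Column table[] = {
    { "T01",  1,  1,  1,   1, 1, 0, 0 },
    { "T02",  0, -1, -1,   0, 0, 1, 0 },
    { "T03", -1,  0, -1,   0, 0, 1, 0 },
    { "T04", -1, -1,  0,   0, 0, 0, 1 },
};

int main(void) {
    for (int i = 0; i < (int)(sizeof table / sizeof table[0]); i++)
        printf("%s: transfer=%d deny=%d request_tan=%d\n",
               table[i].id, table[i].do_transfer,
               table[i].deny_transfer, table[i].request_tan);
    return 0;
}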
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /6
•Practical use
•The specification is divided into manageable parts, thus leading to
decision tables of a practical size
•It is difficult to deduce boundary values out of a cause-and-effect diagram
or decision table
•It is recommended to combine test cases derived from decision tables with
values derived from boundary analysis
•The number of causes and effects that are examined determines the
complexity of the cause-and-effect diagram: for n preconditions that may be
true or false, 2^n test cases can be created
•On systems of larger size, this method is only manageable with tool support
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Decision table testing /7
•Benefits
•Systematical identification of input combinations (combined causes) that
might not be found using other methods
•Test cases are easily derived from the decision table
•Easy to determine sufficient test case coverage, e.g. at least one test case
created for each column of the decision table
•The number of test cases can be reduced by systematically merging columns
of the decision table
•Drawbacks
•Setting up a large number of causes leads to complex and extensive results
•Thus, many errors can occur when applying this method
•This makes it necessary to use a tool
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• State Transition Testing /1
•Many methods only take into account the system behaviour in terms of input
data and output data
•Different states that a test object might take on are not taken into account
•For example, results of actions that happened in the past – actions that caused
the test object to be in a certain internal state
•The different states that a test object can take on are modelled using state
transition diagrams
•State transition analysis is used to define state transition based test cases
•State transition testing is much used within embedded software industry
and technical automation in general
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• State Transition Testing /2
•To determine test cases using a
state transition diagram, a
transition tree is constructed:
•The initial state is the root of
the tree
•For every state that may be
reached from the initial state,
a node is created which is
connected to the root by a
branch
•This operation is repeated
and comes to an end if
•The state of the node is an
end state (a leaf of the
tree)
Or
•The same node with the
same state is already part
of the tree
[State transition diagram: states "unborn", "single", "married", "divorced", "widowed" and
"dead"; events: "be single" (unborn → single), "to marry" (single/divorced/widowed → married),
"getting divorced" (married → divorced), "d.o.p." = death of partner (married → widowed),
"to die" (→ dead)]
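A compact sketch of how this state machine could be coded and exercised with an event sequence taken from the transition tree; the enum values, the next() function and the explicit ERROR state are assumptions of this sketch:

#include <stdio.h>

typedef enum { UNBORN, SINGLE, MARRIED, DIVORCED, WIDOWED, DEAD, ERROR } State;
typedef enum { EV_BE_SINGLE, EV_MARRY, EV_DIVORCE, EV_DOP, EV_DIE } Event; /* d.o.p. = death of partner */

/* Transition function for the marital-status example. Any event that is not
   defined for the current state leads to ERROR (used for negative tests). */
static State next(State s, Event e) {
    switch (s) {
    case UNBORN:   return e == EV_BE_SINGLE ? SINGLE : ERROR;
    case SINGLE:   return e == EV_MARRY ? MARRIED : e == EV_DIE ? DEAD : ERROR;
    case MARRIED:  return e == EV_DIVORCE ? DIVORCED :
                          e == EV_DOP     ? WIDOWED  :
                          e == EV_DIE     ? DEAD     : ERROR;
    case DIVORCED: return e == EV_MARRY ? MARRIED : e == EV_DIE ? DEAD : ERROR;
    case WIDOWED:  return e == EV_MARRY ? MARRIED : e == EV_DIE ? DEAD : ERROR;
    default:       return ERROR;  /* DEAD is an end state, ERROR is absorbing */
    }
}

int main(void) {
    /* One path of the transition tree: unborn -> single -> married -> dead. */
    const Event seq[] = { EV_BE_SINGLE, EV_MARRY, EV_DIE };
    State s = UNBORN;
    for (int i = 0; i < 3; i++) s = next(s, seq[i]);
    printf("end state = %d (expected %d = DEAD)\n", (int)s, (int)DEAD);
    return s != DEAD;
}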
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• State Transition Testing /3
•Every path from the root to a leaf then represents a test case
for state transition testing
•The state transition tree for this example leads to the
following six test cases:
Test case   State 1   State 2   State 3   State 4    State 5    End state
1           unborn    single    dead                            dead
2           unborn    single    married   dead                  dead
3           unborn    single    married   widowed    dead       dead
4           unborn    single    married   widowed    married    married
5           unborn    single    married   divorced   dead       dead
6           unborn    single    married   divorced   married    married
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
[Extended state transition diagram: the diagram above is extended with an "error" state;
invalid events (e.g. "getting divorced" or "d.o.p." in states where they are not defined)
lead to "error". Event "d.o.p." = death of partner]
• The transition tree of our
example may now be
extended using invalid
transitions (negative test
cases, robustness testing).
• Example: two possible
invalid transitions – there
are more
• Impossible transitions
between states can not be
tested.
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• State Transition Testing – Summary
•Test exit criteria
•Every state has to be entered at least once
•Every transition has to be executed at least once
•Benefits/Drawbacks of this method
•Good testing method for test objects that can be described as state
machines
•Good testing method to test classes, but only if the object life cycle is available
•Very often, states are rather complex, i.e. a lot of parameters are necessary
to describe the state
•Designing test cases and analysing test results can be difficult and time
consuming in these cases
•Only covering all states does not guarantee complete test coverage
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Use Case Testing /1
•Test cases are derived directly from the use cases of the test object
•The test object is seen as a system interacting with actors
•A use case describes the interaction of all involved actors leading to an
end result of the system
•Every use case has pre-conditions that have to be met in order to execute
the use case (the test case) successfully
•Every use case has post-conditions describing the system after execution
of the use case (the test case)
•Use cases are elements of the Unified Modelling Language UML*
•Use case diagrams are one of 13 different types of diagrams used by UML
•A use case diagram is a diagram describing a behaviour, it does not
describe the sequence of events
•It shows the system reaction from the viewpoint of a user
* UML is a non-proprietary specification language for object modelling
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Use Case Testing /2
•Example of a simple use case diagram (source: Wikipedia)
• The diagram on the left describes the
functionality of a sample Restaurant
System.
• Use cases are represented by ovals
and the actors are represented by
stick figures
• The Patron actor can Eat Food, Pay for
Food, or Drink Wine.
• Only the Chef actor can Prepare
Food. Note that both the Patron and
the Cashier are involved in the Pay for
Food use case.
• The box defines the boundaries of
the Restaurant System, i.e. the use
cases shown are part of the system
being modelled, the actors are not.
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Use Case Testing /3
•Every use case describes a certain task (user-system-interaction)
•Use-case descriptions include, but are not limited to:
•Pre-conditions
•Expected results/System behaviour
•Post-conditions
•These descriptive elements are also used to define the corresponding test
cases.
•Every use case may be used as the basis for a test case.
•Every alternative within the diagram corresponds to a separate test case.
•Typically, the information provided with a use case is not detailed enough to
define test cases directly. Additional data (input data, expected results) is
needed to make up the test case.
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Use Case Testing –Summary
•Benefits
•Well suited for acceptance testing and system testing, because each use case
describes a user scenario to be tested
•Useful for designing acceptance tests with customer/user participation
•Well suited if system specifications are available in UML
•May be combined with other specification-based test techniques
•Drawbacks
•No derivation of additional test cases beyond the information provided by the
use case
•Therefore, this method should be used only combined with other methods of
systematic test case design
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Specification-based or Black-box techniques- General
conclusions /1
•Testing system functionality is the main goal of black box testing
•Therefore, the test result depends on the quality of the system
specification (e.g. completeness, missing or wrong specifications lead to
bad test cases)
•If specifications are wrong, tests will be wrong too. Testing is only
performed for described functions. Missing specification of required
functionality will not be discovered during testing.
•If the test object holds functions that have not been specified, they will
not be examined
•Such superfluous functions may cause problems in the area of
stability and security (e.g. software for automated teller machines)
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Specification-based or Black-box techniques- General conclusions
/2
•In spite of these drawbacks, functional testing is still the most important
testing activity
•Black-box methods are always used in testing
•The drawbacks can be compensated using additional methods of test case
design, e.g. white box testing or experience based testing
SPECIFICATION-BASED OR BLACK-BOX
TECHNIQUES
• Specification-based or Black-box techniques - Summary
•Black-box methods:
•Equivalence class partitioning
•Boundary analysis
•Cause Effect graphing and decision tables
•State-transition testing
•Use case based testing
•Black-box testing verifies the specified functions: if functions are not
specified, they are not tested.
•Additional code (i.e. code that should not be there) cannot be detected using
black box testing
Outlines
1. The test development process
2. Categories of Test Design Techniques
3. Specification-based or Black-box Techniques
4. Structure-based or White-box Techniques
5. Experience-based Techniques
6. Choosing Test Techniques
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Structure-based or white box techniques /1
•The following techniques will be explained in detail:
•Statement Testing and Coverage
•Decision Testing and Coverage (=branch)
•Condition Testing and Coverage
•Path Testing and Coverage
•Remark
•These techniques represent the most important and the most widely used
dynamic testing techniques. They relate to the static analysis techniques
which were described earlier.
•Other white box testing techniques include but are not limited to:
•LCSAJ (Linear Code Sequence and Jump)
•Data flow based techniques
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Structure-based or white box techniques /2
•It is based on an identified structure of the software or the system
•Component-level: the structure of a software component, i.e.
statements, decisions, branches, distinct paths
•Integration-level: the structure may be a call tree (a diagram in which
modules call other modules)
•System-level: the structure may be a menu structure, business process or
web page structure
•Three code-related structural test design techniques for code coverage
based on statements, branches and decisions.
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Structure-based or white box techniques -Tools /1
•During white-box testing, the program under test is executed, same as in
black box testing. Both make up dynamic testing.
•Theory states that all parts of a program should be executed at least once
during testing.
•The rate of coverage of the program is measured using tools (e.g. coverage
analysers):
•Code instrumentation is performed in order to count execution paths, i.e.
counters are inserted into the program code of the test object
•These counters are initialized holding the value of zero, each execution
path increments the respective counter.
•Counters that remain zero after testing indicate program parts that have
not been executed.
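A rough, hand-written illustration of what such instrumentation does (real coverage analysers insert the counters automatically; the block numbering and the stand-in for f(i) are assumptions of this sketch):

#include <stdio.h>

static unsigned hits[4];   /* one counter per instrumented code block */

/* Hand-instrumented version of the example fragment used later in this section. */
static void segment(int i, int k) {
    hits[0]++;                       /* block 0: function entry */
    if (i > 0) {
        hits[1]++;                   /* block 1: outer if taken */
        int j = 2 * i;               /* stands in for j = f(i) */
        if (j > 10) {
            hits[2]++;               /* block 2: inner if taken */
            while (k > 10) {
                hits[3]++;           /* block 3: loop body */
                k--;
            }
        }
    }
}

int main(void) {
    segment(6, 12);                  /* one test case that reaches all blocks */
    for (int b = 0; b < 4; b++)
        printf("block %d executed %u time(s)%s\n", b, hits[b],
               hits[b] == 0 ? "   <-- not covered" : "");
    return 0;
}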
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Structure-based or white box techniques -Tools /2
•White-box techniques need the support of tools in many areas, for example:
•Test case specification
•Automatically generating a control flow graph from program source
code
•Test execution
•Tools to monitor and control the program flow inside the test objects
•Tool support ensures the quality of the tests and increases efficiency
•Because of the complexity of the necessary measures for white box
testing, manual test execution is
•Time consuming, resource consuming
•Difficult to implement and prone to errors
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• The main types of coverage
•Statement Testing and Coverage
•The percentage of executable statements that have been exercised by the
test cases
•Can also be applied to modules, classes, menu points etc.
•Decision Testing and Coverage (= branch coverage)
•The percentage of decision outcomes that have been exercised by the
test cases
•Path Testing and Coverage
•The percentage of execution paths that have been exercised by the test
cases
•Condition Testing and Coverage
•The percentage of all single condition outcomes independently affecting
a decision outcome that have been exercised by the test cases
•Condition coverage comes in various degrees, e.g. single, multiple and
minimal multiple condition coverage.
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Statement Testing and Coverage
•The statement of the program code is the focus of attention
•What test cases are necessary in order to execute all (or a certain
percentage) of the existing code statements?
•Basis of this investigation is the control flow graph
•All instructions are represented as nodes and the control flow between
the instructions is represented as an edge (arrow)
•Multiple instructions are combined in a single node if they can only be
executed in one particular sequence
•Aim of the test (test exit criteria) is to achieve the coverage of selected
percentage of all statements, called the statement coverage (C0 code
coverage).
•Statement coverage (C0) = Number of executed statements / Total number of
statements * 100%
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Statement Testing and Coverage – Example 1/1
•We are assessing the following segment of program code, which is
represented by the control flow graph (see right side):
if (i > 0) {
    j = f(i);
    if (j > 10) {
        while (k > 10) {
            …
        }
    }
}
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Statement Testing and Coverage – Example 1 /2
•Consider the program represented by the control flow
graph on the right
•Contains two if-statements and a loop (do-while) inside the
second if-statement
•There are three different “routes” through the program
segment
•The first if-statement allows two directions
•The right-hand direction of the first if-statement is divided
again using the second if-statement
•All the statements of this program can be reached using
the route to the right
•A single test case will be enough to reach 100% statement
coverage
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Statement Testing and Coverage – Example 2
•In this example the graph is slightly more complex:
•The program contains two if-statements and a loop
(inside one if-statement)
•Four different “routes” lead through this program segment
•The first if-statement allows two directions
•In both branches of the if-statement another if-statement allows for again two different directions
•For a 100% statement coverage four test cases are
needed
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Statement Testing and Coverage – General Conclusions
•The measurement of coverage is done using specifically designed tools
•These tools are called Coverage Analysis Tools or Coverage Analysers
•Benefits/drawbacks of this method
•Dead code, that is, code made up of statements that are never executed,
will be discovered
•If there is dead code within a program, a 100% coverage cannot be
achieved
•Missing instructions, that is, code which is necessary in order to fulfil the
specification, cannot be detected
•Testing is only done with respect to the executed statements: can all code
be reached/executed?
•Missing code cannot be detected using white box test techniques
(coverage analysis)
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Decision Testing and Coverage*
•Instead of statements, decision coverage focuses on the control flow within
a program segment (not the nodes, but the edges of a control flow graph)
•All edges of the control flow graph have to be covered at least once
•Which test cases are necessary to cover each edge of the control flow
graph at least once?
• Aim of this test (test exit criteria) is to achieve the coverage of a selected
percentage of all decisions , called the decision coverage (C1 code
coverage)
• Decision coverage (C1) = Number of executed decisions / Total number of all
decisions * 100%
Which is synonymous to:
• Branch coverage (C1) = Number of covered branches / Total number of all
branches * 100%
* Also referred to as “branch coverage”
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Decision Testing and Coverage – Example 1
•The control flow graph on the right represents the
program segment to be inspected
•Three different “routes” lead through the graph of
this program segment
•The first if-statement leads to two different
directions
•One path of the first if-statement is divided again in two
different paths, one of which holds a loop.
•All edges can only be reached via a combination of three
possible paths
•Three test cases are needed to achieve a decision
coverage of 100 %
•Only using the two directions on the right, nine out of
ten edges can be covered (C1 -value=90%)
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Decision Testing and Coverage – Example 2
•In this example the graph is slightly more complex
•Four different “routes” lead through the graph of this
program segment
•The first if-statement allows two directions.
•In both branches of the if-statement another if-statement allows again for two different directions
•In this example, the loop is not counted as an
additional decision
•For a 100% decision coverage four test cases are
needed
•In this example, the same set of test cases is also
required for 100% statement coverage!
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Decision Testing and Coverage – General Conclusions
•Achieving 100% decision coverage requires at least as many test cases as
100% statement coverage – in most cases more
•A 100% decision coverage always includes a 100% statement coverage
•In most cases edges are covered multiple times
•Drawbacks:
•Missing statements cannot be detected
•Not sufficient to test complex conditions
•Not sufficient to test loops extensively
•No consideration of dependencies between loops
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Statement Testing and Coverage and Decision Testing and
Coverage
•Both methods relate to the paths through the control flow graph
•They differ in the amount of test cases necessary to achieve 100%
coverage
•Only the final result of a condition is considered, although the resulting
condition can be made up of several atomic conditions
•The condition If((a>2) OR (b<6)) may only be true or false
•Which path of the program is executed depends only on the final outcome
of the combined condition
•Failures due to a wrong implementation of parts of a combined condition
may not be detected
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Condition Testing and Coverage
•The complexity of a condition that is made up of several atomic conditions is
taken into account
•An atomic condition cannot be divided further into smaller condition
statements
•This method aims at finding defects resulting from the implementation of
multiple conditions(combined conditions)
•Multiple conditions are made up of atomic conditions which are combined
using logical operators like OR, AND, XOR, etc. Example: ((a>2)OR(b<6))
•Atomic conditions do not contain logical operators but only relational
operators and the NOT-operator (=, >, <, etc.)
•There are three types of condition coverage
•Simple condition coverage
•Multiple condition coverage
•Minimal multiple condition coverage
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Simple Condition Testing and Coverage
•Every atomic sub-condition of a combined condition statement has to take at
least once the logical values true as well as false
Example 2
Consider the following condition: a > 2 OR b < 6
Test cases for simple condition coverage could be, for example:
  a = 3 (true),  b = 7 (false)  ->  a > 2 OR b < 6 (true)
  a = 1 (false), b = 5 (true)   ->  a > 2 OR b < 6 (true)
• This example is used to explain condition coverage, using a multiple
  condition expression.
• With only two test cases, simple condition coverage can be achieved
  • Each sub-condition has taken on the value true and the value false.
• However, the combined result is true in both cases
  • true OR false = true
  • false OR true = true
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Multiple Condition Testing and Coverage
• All combinations that can be created using permutation of the atomic
sub conditions must be part of the tests.
• This example is used to explain condition coverage, using a multiple
  condition expression.
Example 2
Consider the following condition: a > 2 OR b < 6
Test cases for multiple condition coverage could be, for example:
  a = 3 (true),  b = 7 (false)  ->  a > 2 OR b < 6 (true)
  a = 3 (true),  b = 5 (true)   ->  a > 2 OR b < 6 (true)
  a = 1 (false), b = 5 (true)   ->  a > 2 OR b < 6 (true)
  a = 1 (false), b = 7 (false)  ->  a > 2 OR b < 6 (false)
• With four test cases, multiple condition coverage can be achieved
  • All possible combinations of true and false were created.
  • All possible results of the multiple condition were achieved.
• The number of test cases increases exponentially:
  • n = number of atomic conditions
  • 2^n = number of test cases
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Minimal Multiple Condition Coverage (defined c.c.)
• All combinations that can be created using the logical results of the sub-conditions
  must be part of the test, but only those where changing the outcome of one
  sub-condition changes the result of the combined condition
Example 2
Consider the following condition: a > 2 OR b < 6
Test cases for multiple condition coverage could be, for example:
  a = 3 (true),  b = 7 (false)  ->  a > 2 OR b < 6 (true)
  a = 3 (true),  b = 5 (true)   ->  a > 2 OR b < 6 (true)
  a = 1 (false), b = 5 (true)   ->  a > 2 OR b < 6 (true)
  a = 1 (false), b = 7 (false)  ->  a > 2 OR b < 6 (false)
• This example is used to explain condition coverage, using a multiple condition
  expression.
• For three out of the four test cases, the change of a sub-condition changes the
  overall result
  • Only for case no. 2 (true OR true = true) does the change of a single
    sub-condition not result in a change of the overall condition. This test case
    can be omitted!
• The number of test cases can be reduced to a value between n+1 and 2^n
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
All complex decisions must be tested – the minimal multiple condition
coverage is a suitable method to achieve this goal.
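The three condition-coverage variants can be compared directly on the example condition a > 2 OR b < 6. The test vectors below are the ones derived on the preceding slides; the decision() function and the driver are only an illustrative sketch:

#include <stdio.h>

static int decision(int a, int b) { return (a > 2) || (b < 6); }

static void run(const char *name, const int (*tc)[2], int n) {
    printf("%s:\n", name);
    for (int i = 0; i < n; i++)
        printf("  a=%d (%s), b=%d (%s) -> %s\n",
               tc[i][0], tc[i][0] > 2 ? "true" : "false",
               tc[i][1], tc[i][1] < 6 ? "true" : "false",
               decision(tc[i][0], tc[i][1]) ? "true" : "false");
}

int main(void) {
    const int simple[][2]   = { {3, 7}, {1, 5} };                  /* simple condition coverage */
    const int multiple[][2] = { {3, 7}, {3, 5}, {1, 5}, {1, 7} };  /* all 2^n combinations */
    const int minimal[][2]  = { {3, 7}, {1, 5}, {1, 7} };          /* minimal multiple (case 2 omitted) */
    run("simple condition coverage", simple, 2);
    run("multiple condition coverage", multiple, 4);
    run("minimal multiple condition coverage", minimal, 3);
    return 0;
}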
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Path Testing and Coverage /1
•Path coverage focuses on the execution of all possible paths through a
program
•A path is a combination of program segments (in a control flow graph: an
alternating sequence of nodes and edges)
•For decision coverage, a single pass through a loop is sufficient. For path
coverage there are additional test cases:
•One test case not entering the loop
•Additional test cases for the possible numbers of loop executions
•This may easily lead to a very high number of test cases
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Path Testing and Coverage /2
•Focus of the coverage analysis is the control flow graph
•Statements are nodes
•Control flow is represented by the edges
•Every path is a unique way from the beginning to the end of the control
flow graph
•The aim of this test(test exit criteria) is to reach a defined path coverage
percentage:
Path coverage = Number of covered paths / Total number of paths * 100%
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Path Testing and Coverage – Example 1
• The control flow graph on the right represents
the program segment to be inspected. It
contains three if-statements
• Three different paths leading through the graph
of this program segment achieve full decision
coverage.
• However, five different possible paths may be
executed
• Five test cases are required to achieve 100%
path coverage
• Only two are needed for 100% C0 coverage; three are
needed for 100% C1 coverage
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Path Testing and Coverage – Example 2
• The control flow graph on the right represents
the program segment to be inspected. It
contains two if-statements and a loop inside the
second if-statement
• Three different paths leading through the graph
of this program segment achieve full decision
coverage.
• Four different paths are possible, if the loop is
executed twice
• Every increment of the loop counter adds a
new test case.
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Path Testing and Coverage – General conclusions
• 100% path coverage can only be achieved for very simple programs
• A single loop can lead to a test case explosion: every possible number of loop
executions constitutes a new test case
• Theoretically, an indefinite number of paths is possible
• Path coverage is much more comprehensive than statement or decision
coverage
• Every possible path through the program is executed
• 100% path coverage includes 100% decision coverage, which again
includes 100% statement coverage
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Exercise: White-box Techniques
• Using a control flow graph (part 1) and pseudo code (part 2) the
minimum number of test cases has to be determined to achieve 100%
coverage of:
• Statements
• Branches/decisions
• Paths
STRUCTURE-BASED OR WHITE-BOX
TECHNIQUES
• Summary
• White box and black box methods are dynamic methods, the test object is
executed during test
• White-box methods comprise:
• Statement coverage
• Decision coverage
• Path coverage
• Condition coverage(single, multiple, minimum multiple)
• Only existing code can be tested. If functions are missing this fact cannot
be discovered. Dead and superfluous code, however, can be discovered
using white box testing.
• White box methods are used mainly in lower test levels like component
testing or integration testing
• The methods differ in their intensity of test (test depth)
• Depending on the method, the number of test cases differ
Outlines
1. The test development process
2. Categories of Test Design Techniques
3. Specification-based or Black-box Techniques
4. Structure-based or White-box Techniques
5. Experience-based Techniques
6. Choosing Test Techniques
EXPERIENCE-BASED TECHNIQUES
• Definition of Experience-based Techniques
  • Practice of creating test cases without a clear methodical approach, based on the
    intuition and experience of the tester.
  • Test cases are based on intuition and experience
    • Where have errors accumulated in the past?
    • Where does software often fail?
[Overview diagram: among the dynamic techniques (black-box: equivalence partitioning,
boundary value analysis, state transition testing, decision tables, use case based testing;
white-box: statement, branch, condition and path coverage) and the static techniques
(reviews/walkthroughs, analytical QA: control flow analysis, data flow analysis, compiler
metrics/analyzers), the experience-based techniques are highlighted.]
EXPERIENCE-BASED TECHNIQUES
• Fundamentals
• Experience based testing is also called intuitive testing and
includes: error guessing (weak point oriented testing) and
exploratory testing (iterative testing based on gained knowledge
about the system)
• Mostly applied in order to complement other, more formally
created test cases
• Does not meet the criteria for systematical testing
• Often produces additional test cases that might not be created
with other practices, for example:
• Testing a leap year after 2060 (known problems of the past)
• Empty sets within input values (a similar application has had
errors on this)
EXPERIENCE-BASED TECHNIQUES
• Test Case Design
• The tester must have applicable experience or knowledge
• Intuition – Where can defects be hiding?
• Intuition characterizes a good tester
• Experience – What defects were encountered where in the past?
• Knowledge based on experience
• An alternative is to set up a list of recurring defects
• Knowledge / Awareness – Where are specific defects expected?
• Specific details of the project are incorporated
• Where will defects be made due to time pressure and
complexity ?
• Are inexperienced programmers involved?
EXPERIENCE-BASED TECHNIQUES
• Intuitive Test Case Design –possible sources
• Test results and practical experience with similar systems
• Possibly a predecessor of the software or another system with
similar functionality
• User experience
• Exchange of experience with the system as a user
• Focus on deployment
• What parts of the system will be used the most?
• Development problems
• Are there any weak points resulting from difficulties in the
development process?
EXPERIENCE-BASED TECHNIQUES
• Error Guessing in practice
• Check defect lists
• List possible defects
• Weight factors depending on risk and probability of
occurrence
• Test case design
• Creating test cases aimed at producing the defects on the list
• Prioritizing test cases by their risk value
• Update the defect list along testing
• Iterative procedure
• A structured collection of experience is useful when repeating
the procedure in future projects
EXPERIENCE-BASED TECHNIQUES
• Exploratory Testing
• Test case design procedure especially suitable when the
information basis is weakly structured
• Also useful when time for testing is scarce
• Procedure:
• Examine the single parts of the test object
• Execute few test cases, exclusively on the parts to be tested,
applying error guessing
• Analyze results, develop a rough model of how the test object
functions
• Iteration: design new test cases applying the knowledge
recently acquired
• Thereby focusing on conspicuous areas and on exploring further
characteristics of the test object
• Capture tools may be useful for logging test activities
EXPERIENCE-BASED TECHNIQUES
• Exploratory Testing – Principles
• Choose small objects and/or concentrate on particular aspects of a test
object
• A single iteration should not take more than 2 hours
• The results of the iteration form the information basis of the following
iteration
• Additional test cases are derived from the particular test situation
• Modelling takes place during testing
• Test design, test execution, test logging and learning, based on a test
charter containing test objectives are concurrent and carried out
within time-boxes
• Preparing further tests
• Herewith, knowledge can be gained to support the appropriate
choice of test case design methods
EXPERIENCE-BASED TECHNIQUES
• Experience-based versus Specification-based Techniques
• Intuitive test case design is a good complement to systematical
approaches
• It should still be treated as a complementary activity
• It cannot give proof of completeness – the number of test cases can
vary considerably
• Tests are executed in the same way as with systematically defined
test cases
• The difference is the way in which the test cases were
designed/identified
• Through intuitive testing defects can be detected that may not be
found through systematical testing methods
EXPERIENCE-BASED TECHNIQUES
• Summary
• Experience-based techniques complement systematic
techniques for determining test cases
• They depend strongly on the individual ability of the tester
• Error guessing and exploratory testing are two of the more widely
used experience-based techniques
Outlines
1. The test development process
2. Categories of Test Design Techniques
3. Specification-based or Black-box Techniques
4. Structure-based or White-box Techniques
5. Experience-based Techniques
6. Choosing Test Techniques
CHOOSING TEST TECHNIQUES
• Criteria for choosing the appropriate Test Techniques / 1
• State of information about the test object
• Can white-box tests be made at all (source code available)?
• Is there sufficient specification material to define black-box tests, or are
explorative tests needed to start with?
• Predominant test goals
• Are functional tests explicitly requested?
• Which non-functional test are needed?
• Are structural tests needed to attain the test goals?
• Risk aspects
• Is serious damage expected from hidden defects?
• How high is the frequency of usage of the test object?
• Are there contractual or legal standards on test execution and test coverage that
have to be met?
CHOOSING TEST TECHNIQUES
• Criteria for choosing the appropriate Test Techniques / 2
• Project preconditions
• How much time and which staff are planned for testing?
• How high is the risk that testing might not be completed as planned?
• Which software development methods are used?
• What are the weak points of the project process?
• Characteristics of the test object
• What possibilities for testing does the test object offer?
• What is the availability of the test object?
• Contractual and client requirements
• Were there any specific agreements made with the client / originator of the
project about the test procedures?
• What documents are to be handed over at the time of deployment of the
system?
CHOOSING TEST TECHNIQUES
• Criteria for choosing the appropriate Test Techniques / 3
• Best practice
• Which approaches have proven to be appropriate on similar
structures?
• What experience was gained with which approaches in the past?
• Test levels
• At which test levels should tests be done?
• Further criteria should be applied depending on the specific
situation!
CHOOSING TEST TECHNIQUES
• Different interests cause different test design approaches
• Interest of the project manager:
• To create software of ordered quality
• meeting time and budget restrictions
• Interests of the client / initiator of the project
• To receive software of best quality (functionality, reliability,
usability, efficiency, portability and maintainability)
• Meeting time and budget restrictions
• Interests of the test manager:
• Sufficient and intensive testing / adequate deployment of the
required techniques, from the testing point of view
• To assess the quality level that the project has reached
• To allocate and use the resources planned for testing in an optimal
way
CHOOSING TEST TECHNIQUES
• Summary
• Testers generally use a combination of test techniques including
process, rule and data-driven techniques to ensure adequate
coverage of the object under test
• Criteria for choosing the appropriate test case design approach:
• Test basis (information basis about the test objects)
• Testing goals (what conclusions are to be reached through testing)
• Risk aspects
• Project framework / preconditions
• Contractual / client requirements