Software Testing

Software Testing
- Objectives and principles
- Techniques
- Process
- Object-oriented testing
- Test workbenches and frameworks
Lecture Objectives

Understand:
- Software testing objectives and principles
- Testing techniques – black-box and white-box
- Testing process – unit and integration
- Object-oriented testing
- Test workbenches and frameworks
Can We Exhaustively Test Software?

[Figure: control-flow graph from point A to point B, containing loops that each execute fewer than 8 times]

There are 250 billion unique paths between A and B. If each set of possible data is used, and a single run takes 1 millisecond to execute, it would take 8 years to test all paths.
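As a quick check: 250 billion paths at 1 ms per run is 2.5 × 10^8 seconds, and a year is about 3.15 × 10^7 seconds, so the full run would indeed take roughly 8 years.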

Can We Test All Types of Software Bugs?

- Software testing is mainly suited to faults that consistently reveal themselves under well-defined conditions.
- Testers do encounter failures they cannot reproduce:
  - Under seemingly identical conditions, the actions that a test case specifies can sometimes, but not always, lead to a failure.
  - Software engineers sometimes refer to faults with this property as Mandelbugs (an allusion to Benoit Mandelbrot, a leading researcher in fractal geometry).
- Example: the software fault in the Patriot missile defense system responsible for the Scud incident in Dhahran:
  - To project a target's trajectory, the weapons control computer required its velocity and the time as real values.
  - The system, however, kept time internally as an integer, counting tenths of seconds and storing them in a 24-bit register.
  - The necessary conversion into a real value caused imprecision in the calculated range where a detected target was expected next.
  - For a given target velocity, these inaccuracies were proportional to the length of time the system had been running continuously.
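The mechanism can be sketched in a few lines. This is illustrative only, not the actual Patriot code: Java doubles stand in for the weapons computer's arithmetic, and the only faithful detail is the 24-bit truncation of 0.1 (209715/2^21), the documented source of the drift.

public class ClockDrift {
    public static void main(String[] args) {
        long hours = 100;                              // continuous uptime
        long ticks = hours * 3600 * 10;                // clock ticks, in tenths of a second
        // 0.1 has no exact binary representation; truncated to 24 bits it
        // becomes 209715 / 2^21, which is low by about 0.000000095.
        double tenthIn24Bits = 209715.0 / 2097152.0;
        double computedTime = ticks * tenthIn24Bits;   // time as the system saw it (s)
        double actualTime = ticks * 0.1;               // true elapsed time (s)
        System.out.printf("Drift after %d hours: %.4f seconds%n",
                hours, actualTime - computedTime);     // about 0.34 s
    }
}

A third of a second is enough for a target at Scud speeds to move several hundred metres, outside the window in which the system looked for it.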
Testing Objectives

- "Software testing can show the presence of bugs, but it can never show their absence." Therefore:
  - Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
  - A good test case is one that has a high probability of finding an error.
  - A successful test is one that uncovers an error.
Testing Principles

- All tests should be traceable to customer requirements.
- Tests should be planned long before testing begins.
- The Pareto principle applies to software testing: typically, some 80 percent of the errors can be traced to 20 percent of the components.
- Testing should begin "in the small" and progress toward testing "in the large".
- Exhaustive testing is not possible.
- To be most effective, testing should be conducted by an independent third party.
Test Case Design

- Testing must be planned and performed systematically, not ad hoc or at random.
- Testing can be performed in two ways:
  1. Knowing the specified function that a product has been designed to perform – black-box testing.
  2. Knowing the internal workings of the product and testing to ensure all parts are exercised adequately – white-box testing.
Black-box Testing

- An approach to testing where the program is considered as a "black box".
- The program test cases are based on the system specification.
- Test planning can begin early in the software process.
Equivalence Partitioning

- Divide the input domain into classes of data from which test cases can be derived.
- Strives to define a test that uncovers classes of errors, reducing the total number of test cases required.
Example…

- Specifications for a DBMS state that the product must handle any number of records between 1 and 16,383 (2^14 − 1).
- If the system can handle 34 records and 14,870 records, then it will probably work fine for, say, 8,252 records.
- If the system works for any one test case in the range (1..16,383), then it will probably work for any other test case in the range.
- The range (1..16,383) constitutes an equivalence class: any one member is as good a test case as any other member of the class.
…Example

- The range (1..16,383) defines three different equivalence classes:
  - Equivalence class 1: fewer than 1 record
  - Equivalence class 2: between 1 and 16,383 records
  - Equivalence class 3: more than 16,383 records
Boundary Value Analysis

- A technique that leads to the selection of test cases that exercise bounding values.
- Selecting a test case on or just to one side of the boundary of an equivalence class increases the probability of detecting a fault.

"Bugs lurk in corners and congregate at boundaries"
DBMS Example

- Test case 1: 0 records – member of equivalence class 1 (and adjacent to a boundary value)
- Test case 2: 1 record – boundary value
- Test case 3: 2 records – adjacent to a boundary value
- Test case 4: 723 records – member of equivalence class 2
- Test case 5: 16,382 records – adjacent to a boundary value
- Test case 6: 16,383 records – boundary value
- Test case 7: 16,384 records – member of equivalence class 3 (and adjacent to a boundary value)
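These seven cases translate directly into unit tests. A minimal JUnit 3-style sketch, in which load is a hypothetical helper standing in for the real DBMS call (here stubbed to the specified behaviour):

import junit.framework.TestCase;

public class RecordLimitTest extends TestCase {
    private static final int MAX = 16383;               // 2^14 - 1, from the spec

    // Hypothetical stand-in: true if the DBMS accepts n records.
    private boolean load(int n) { return n >= 1 && n <= MAX; }

    public void testZeroRecords()    { assertFalse(load(0)); }        // class 1, adjacent to boundary
    public void testOneRecord()      { assertTrue(load(1)); }         // boundary value
    public void testTwoRecords()     { assertTrue(load(2)); }         // adjacent to boundary
    public void testNominalAmount()  { assertTrue(load(723)); }       // class 2
    public void testMaxMinusOne()    { assertTrue(load(MAX - 1)); }   // adjacent to boundary
    public void testMax()            { assertTrue(load(MAX)); }       // boundary value
    public void testMaxPlusOne()     { assertFalse(load(MAX + 1)); }  // class 3, adjacent to boundary
}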
White-box Testing

- A test case design method that uses the control structure of the procedural design to derive test cases.
- Can derive tests that:
  - guarantee all independent paths have been exercised at least once
  - exercise all logical decisions on their true and false sides
  - execute all loops at their boundaries and within operational bounds
  - exercise internal data structures to ensure validity
Basis Path Testing

- Proposed by Tom McCabe.
- Uses the cyclomatic complexity measure as a guide for defining a basis set of execution paths.
- Test cases derived to exercise the basis set are guaranteed to execute every statement at least once.
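A reminder of how the measure is computed: for a flowgraph with E edges, N nodes and P predicate (decision) nodes, V(G) = E − N + 2, or equivalently V(G) = P + 1.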
Independent Paths

- CC = 5, so there are 5 independent paths:
  1. a, c, f
  2. a, d, c, f
  3. a, b, e, f
  4. a, b, e, a, …
  5. a, b, e, b, e, …

[Figure: example flowgraph with nodes a–f]
The Flowgraph

- Before the cyclomatic complexity can be calculated, and the paths determined, the flowgraph must be created.
- This is done by translating the source code into flowgraph notation, with a standard pattern for each construct: sequence, if, while, until and case.
Example

PROCEDURE average;
INTERFACE RETURNS average, total.input, total.valid;
INTERFACE ACCEPTS value, minimum, maximum;
TYPE value[1:100] IS SCALAR ARRAY;
TYPE average, total.input, total.valid,
     minimum, maximum, sum IS SCALAR;
TYPE i IS INTEGER;
i = 1;                                                (node 1)
total.input = total.valid = 0;
sum = 0;                                              (node 2)
DO WHILE value[i] <> -999 AND total.input < 100       (node 3)
  increment total.input by 1;                         (node 4)
  IF value[i] >= minimum AND value[i] <= maximum      (node 5)
    THEN increment total.valid by 1;                  (node 6)
         sum = sum + value[i]
    ELSE skip                                         (node 7)
  ENDIF
  increment i by 1;                                   (node 8)
ENDDO                                                 (node 9)
IF total.valid > 0                                    (node 10)
  THEN average = sum / total.valid;                   (node 11)
  ELSE average = -999;                                (node 12)
ENDIF                                                 (node 13)
END average
…Example

[Figure: flowgraph for average, nodes 1–13]

Determine:
1. The cyclomatic complexity
2. The independent paths
Condition Testing

- Exercises the logical conditions contained within a program module.
- Types of errors found include:
  - Boolean operator errors (OR, AND, NOT)
  - Boolean variable errors
  - Boolean parenthesis errors
  - Relational operator errors (>, <, =, !=, …)
  - Arithmetic expression errors
Loop Testing

- Focuses exclusively on the validity of loop constructs.
- Four types of loop can be defined: simple, nested, concatenated and unstructured.

Loop Types

[Figure: the four loop types – simple, nested, concatenated, unstructured]
Simple Loops

- Where n is the maximum number of passes, the following tests can be applied (see the sketch below):
  - skip the loop entirely
  - only one pass
  - two passes
  - m passes (where m < n)
  - n−1, n and n+1 passes
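A minimal JUnit 3-style sketch of these pass counts, where sumFirst is a hypothetical routine whose loop makes k passes over an array of n elements:

import junit.framework.TestCase;

public class SimpleLoopTest extends TestCase {
    // Hypothetical code under test: a loop that makes k passes over v.
    static int sumFirst(int[] v, int k) {
        int sum = 0;
        for (int i = 0; i < k; i++) sum += v[i];
        return sum;
    }

    private final int[] v = { 1, 1, 1, 1, 1 };   // n = 5 maximum passes
    private final int n = v.length;

    public void testSkipLoopEntirely() { assertEquals(0, sumFirst(v, 0)); }
    public void testOnePass()          { assertEquals(1, sumFirst(v, 1)); }
    public void testTwoPasses()        { assertEquals(2, sumFirst(v, 2)); }
    public void testMPasses()          { assertEquals(3, sumFirst(v, 3)); }   // m < n
    public void testNMinusOnePasses()  { assertEquals(4, sumFirst(v, n - 1)); }
    public void testNPasses()          { assertEquals(5, sumFirst(v, n)); }

    public void testNPlusOnePasses() {            // n+1 passes overrun the data
        try {
            sumFirst(v, n + 1);
            fail("expected ArrayIndexOutOfBoundsException");
        } catch (ArrayIndexOutOfBoundsException expected) { /* boundary fault exposed */ }
    }
}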
Nested Loops

- If the approach for simple loops were simply extended, the number of possible tests would grow geometrically – impractical.
- Instead:
  - Start at the innermost loop; set all other loops to minimum values.
  - Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum loop-counter values. Add other tests for out-of-range or excluded values.
  - Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops at 'typical' values.
  - Continue until all loops have been tested.
Concatenated Loops

- Test as simple loops, provided each loop is independent.
- If two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, test them as nested loops.
Unstructured Loops

- Unstructured loops cannot be tested effectively.
- They reflect very bad practice and should be redesigned.
The Tester

Who does the testing?
a) Developer
b) Member of the development team
c) SQA
d) All of the above
Independent Test Group

- Strictly speaking, testing should be performed by an independent group (SQA or a third party).
- Members of the development team are inclined to be more interested in meeting the rapidly approaching due date.
- The developer of the code is prone to test "gently".
- Remember that the objective is to find errors, not to complete the tests without finding any (because they are always there!).
Successful Testing

- The success of testing can be measured by applying a simple metric, defect removal efficiency:

  DRE = Errors / (Errors + Defects)

  where errors are problems found before delivery and defects are problems found after delivery.

- So as defect removal efficiency approaches 1.0, the process approaches perfection.
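For example, if testing finds 90 errors before delivery and users later report 10 defects, DRE = 90 / (90 + 10) = 0.9.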
The Testing Process

- Unit testing
  - Testing of individual program components
  - Often performed by the component developer
  - Tests often derived from the developer's experience!
  - Increased productivity is possible with an xUnit framework
- Integration testing
  - Testing of groups of components integrated to create a system or sub-system
  - The responsibility of an independent testing team
  - Tests are based on a system specification
Testing Phases

- Unit testing – performed by the software developer
- Integration testing – performed by the development team, SQA or an independent test group
Integration Testing

- Tests complete systems or subsystems composed of integrated components.
- Integration testing should be black-box testing, with tests derived from the specification.
- The main difficulty is localizing errors; incremental integration testing reduces this problem.
Incremental Integration Testing

[Figure: incremental integration. Test sequence 1 exercises components A and B with tests T1–T3; sequence 2 adds component C and repeats T1–T3 plus a new test T4; sequence 3 adds component D and repeats the earlier tests plus T5.]
Approaches to Integration Testing

- Top-down testing: start with the high-level system and integrate from the top down, replacing individual components with stubs where appropriate.
- Bottom-up testing: integrate individual components in levels until the complete system is created.
- In practice, most integration involves a combination of these strategies.
Top-down Testing

[Figure: top-down testing sequence – level 1 is tested first with level-2 stubs in place; level 2 is then integrated and tested using level-3 stubs, and so on down the hierarchy.]
Bottom-up Testing

[Figure: bottom-up testing sequence – level N components are tested first using test drivers; the drivers are then replaced by the level N−1 components, which are tested in turn.]
Which is Best?

In bottom-up testing:
- Test harnesses must be constructed, and this takes time.
- Integration errors are found later rather than earlier.
- System-level design flaws that could require major reconstruction are found last.
- There is no visible, working system until the last stage, so it is harder to demonstrate progress to clients.
Interface Testing

- Takes place when modules or sub-systems are integrated to create larger systems.
- The objectives are to detect faults due to interface errors or invalid assumptions about interfaces.
- Particularly important for object-oriented development, as objects are defined by their interfaces.
Interface Testing

[Figure: test cases applied at the interfaces between integrated sub-systems A, B and C]
Interface Types

- Parameter interfaces: data is passed from one procedure to another.
- Shared memory interfaces: a block of memory is shared between procedures.
- Procedural interfaces: a sub-system encapsulates a set of procedures to be called by other sub-systems.
- Message passing interfaces: sub-systems request services from other sub-systems.
Interface Errors

- Interface misuse: a calling component calls another component and makes an error in its use of the interface, e.g. parameters in the wrong order.
- Interface misunderstanding: a calling component embeds incorrect assumptions about the behaviour of the called component.
- Timing errors: the called and the calling component operate at different speeds, and out-of-date information is accessed.
Interface Testing Guidelines

- Design tests so that parameters to a called procedure are at the extreme ends of their ranges.
- Always test pointer parameters with null pointers.
- Use stress testing in message passing systems.
- In shared memory systems, vary the order in which components are activated.
- Design tests which cause the component to fail.
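As an illustration, a JUnit 3-style sketch that probes a hypothetical find operation with a null pointer and with range extremes; the stub's failure behaviour is an assumption, not taken from any real interface:

import junit.framework.TestCase;

public class InterfaceProbeTest extends TestCase {
    // Hypothetical called procedure: returns the index of key in data, or -1.
    static int find(int[] data, int key) {
        if (data == null) throw new IllegalArgumentException("data is null");
        for (int i = 0; i < data.length; i++)
            if (data[i] == key) return i;
        return -1;
    }

    public void testNullPointerParameter() {     // guideline: always test with null
        try {
            find(null, 0);
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) { /* defined failure mode */ }
    }

    public void testParameterRangeExtremes() {   // guideline: extreme ends of ranges
        int[] data = { Integer.MIN_VALUE, 0, Integer.MAX_VALUE };
        assertEquals(0, find(data, Integer.MIN_VALUE));
        assertEquals(2, find(data, Integer.MAX_VALUE));
    }
}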
Stress Testing

- Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light.
- Stressing the system also tests its failure behaviour: systems should not fail catastrophically, and stress testing checks for unacceptable loss of service or data.
- Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.
Object-Oriented Testing

- The components to be tested are object classes that are instantiated as objects.
- These are larger grain than individual functions, so approaches to white-box testing have to be extended.
- There is no obvious 'top' to the system for top-down integration and testing.
Testing Levels

- Test object classes
- Test clusters of cooperating objects
- Test the complete OO system
Object Class Testing

- Complete test coverage of a class involves:
  - testing all operations associated with an object
  - setting and interrogating all object attributes
  - exercising the object in all possible states
- Inheritance makes it more difficult to design object class tests, as the information to be tested is not localized.
Object Integration

- Levels of integration are less distinct in object-oriented systems.
- Cluster testing is concerned with integrating and testing clusters of cooperating objects.
- Identify clusters using knowledge of the operation of objects and the system features that are implemented by these clusters.
Approaches to Cluster Testing

- Use-case or scenario testing: testing is based on user interactions with the system; it has the advantage that it tests system features as experienced by users.
- Thread testing: a thread consists of all the classes needed to respond to a single external input; each class is unit tested, and then the thread set is exercised.
- Object interaction testing: tests sequences of object interactions that stop when an object operation does not call on services from another object.
- Uses-based testing: begins by testing classes that use few or no server classes; next, classes that use the first group are tested, followed by classes that use the second group, and so on.
Scenario-Based Testing

- Identify scenarios from use cases and supplement these with interaction diagrams that show the objects involved in the scenario.
- Consider the scenario in the weather station system where a report is generated.

[Figure: Collect Weather Data use case]
Weather Station Testing

- Thread of methods executed:
  - CommsController:request → WeatherStation:report → WeatherData:summarize
- Inputs and outputs:
  - Input of a report request, with an associated acknowledgement, and a final output of a report.
  - Can be tested by creating raw data and ensuring that it is summarized properly.
  - Use the same raw data to test the WeatherData object.
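The last two points can be sketched as a unit test. The minimal WeatherData below (addReading, summarize) is a hypothetical stand-in for the real class, written inline so the example is self-contained:

import junit.framework.TestCase;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the real WeatherData class.
class WeatherData {
    private final List<Double> readings = new ArrayList<Double>();
    void addReading(double t) { readings.add(t); }
    double[] summarize() {                        // returns {min, max, mean}
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY, sum = 0;
        for (double t : readings) {
            min = Math.min(min, t);
            max = Math.max(max, t);
            sum += t;
        }
        return new double[] { min, max, sum / readings.size() };
    }
}

public class WeatherDataTest extends TestCase {
    public void testSummarizeKnownRawData() {
        WeatherData data = new WeatherData();
        data.addReading(10.0);                    // raw data with a known summary
        data.addReading(20.0);
        data.addReading(30.0);
        double[] s = data.summarize();
        assertEquals(10.0, s[0], 1e-9);           // minimum
        assertEquals(30.0, s[1], 1e-9);           // maximum
        assertEquals(20.0, s[2], 1e-9);           // mean
    }
}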
OO Testing: Myths & Reality

- Inheritance means never having to say you are sorry.
- Reuse means never having to say you are sorry.
- Black-box testing is sufficient.
Implications of Inheritance

- Myth: specializing from tested superclasses means subclasses will be correct.
- Reality: subclasses create new ways to misuse inherited features.
  - Different test cases are needed for each context.
  - Inherited methods need to be retested, even if unchanged.
Implications of Reuse

- Myth: reusing a tested class means that the behavior of the server object is trustworthy.
- Reality: every new usage provides ways to misuse a server.
  - Even if many server objects of a given class function correctly, nothing prevents a new client class from using one incorrectly.
  - We can't automatically trust a server because it performs correctly for one client.
Implication of Encapsulation

- Myth: white-box testing violates encapsulation; surely black-box testing (of class interfaces) is sufficient.
- Reality:
  - Studies indicate that "thorough" black-box testing sometimes exercises only a third of the code.
  - Black-box testing exercises all specified behaviors, but what about unspecified behaviors?
  - We need to examine the implementation.
And What About Polymorphism?

- Each possible binding of a polymorphic component requires a separate test, and probably a separate test case (see the sketch below).
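For instance, with a hypothetical Shape hierarchy, a client that calls area() through a Shape reference needs one test per concrete binding:

abstract class Shape { abstract double area(); }

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

public class TotalAreaTest extends junit.framework.TestCase {
    // Client code under test: calls area() via dynamic dispatch.
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public void testSquareBinding() {
        assertEquals(4.0, totalArea(new Shape[] { new Square(2) }), 1e-9);
    }

    public void testCircleBinding() {
        assertEquals(Math.PI, totalArea(new Shape[] { new Circle(1) }), 1e-9);
    }

    public void testMixedBindings() {   // both bindings through the same call site
        assertEquals(4.0 + Math.PI,
                     totalArea(new Shape[] { new Square(2), new Circle(1) }), 1e-9);
    }
}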
Testing Workbenches

- Testing is an expensive process phase; testing workbenches provide a range of tools to reduce the time required and the total testing costs.
- Most testing workbenches are open systems, because testing needs are organization-specific.
- They can be difficult to integrate with closed design and analysis workbenches.
A Testing Workbench

[Figure: a testing workbench. A test data generator derives test data from the specification; a test manager runs the program being tested against that data; an oracle produces test predictions; a file comparator checks the test results against the predictions and feeds a report generator, which produces the test results report; a dynamic analyser produces an execution report; a simulator stands in for the execution environment.]
Workbench Components

- Test manager: manages the running of program tests.
- Test data generator: selects test data from a database, or uses patterns to generate random data of the correct form.
- Oracle: predicts expected results (may be a previous version or a prototype).
- File comparator: compares the results of the oracle and the program, or of the program and a previous version (regression testing).
- Dynamic analyzer: counts the number of times each statement is executed during a test.
- Simulator: simulates the environment (target platform, user interaction, etc.).
xUnit Framework

- Developed by Kent Beck.
- Makes object-oriented unit testing more accessible.
- Freeware versions available for most object-oriented languages:
  www.xprogramming.com/software.htm

[Screenshots: jUnit after a "successful" run and after an "unsuccessful" run]
Simple Guide to Using xUnit

1. Subclass the TestCase class for the object under test.
   - Ensure the test class has scope over the object under test.
2. Add a test method to the test class for each method to be tested.
   - An xUnit test method is an ordinary method without parameters.
3. Code the test case in the test method:
   - create the objects necessary for the test (the fixture)
   - exercise the objects in the fixture
   - verify the result.
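Putting the three steps together, a minimal JUnit 3-style sketch; Counter is a hypothetical class under test, shown inline for completeness:

import junit.framework.TestCase;

// Hypothetical class under test.
class Counter {
    private int value = 0;
    void increment() { value++; }
    int value() { return value; }
}

// Step 1: subclass TestCase, with scope over the object under test.
public class CounterTest extends TestCase {
    private Counter counter;               // the fixture

    protected void setUp() {               // runs before each test method
        counter = new Counter();           // step 3a: create the fixture
    }

    // Step 2: one ordinary, parameterless test method per method under test.
    public void testIncrement() {
        counter.increment();               // step 3b: exercise the fixture
        assertEquals(1, counter.value());  // step 3c: verify the result
    }
}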
Key Points

- Exhaustive testing is not possible.
- Testing must be done systematically, using black-box and white-box testing techniques.
- Testing must be done at both the unit and integration levels.
- Object-oriented programming offers its own challenges for testing.
- Testing workbenches and frameworks can help with the testing process.
References

- M. Grottke and K.S. Trivedi. "Fighting Bugs: Remove, Retry, Replicate, and Rejuvenate." IEEE Computer, February 2007, pp. 107–109.
- R. Pressman. Software Engineering: A Practitioner's Approach, 6th ed. New York, NY: McGraw-Hill, 2004.
- I. Sommerville. Software Engineering, 6th ed. New York, NY: Addison-Wesley, 2000.