Testing Strategies

Software Verification and Validation
Objectives

 Testing's allegiance is to the customer, not the development team
 Test ALL documents
 Intent: find defects
• A good test is one with a high probability of finding an error
• A successful test is one that uncovers an error
 Prevent defect migration (apply testing early in the lifecycle)
 Develop and use test tools
 Use trained, skilled people for testing

People

 Creatively destructive
 Hunt for errors (not for the people who create them)
 Trained for testing
 Roughly 25% of development time is spent on testing, but teachers spend only about 5% of their time teaching it
Software Verification and Validation

 Verification – Are we building the product right?
• The set of tasks that ensure the software correctly implements a specific function
 Validation – Are we building the right product?
• A different set of tasks that ensure the software built is traceable to customer requirements
Methods for Verification
Method       | Formality     | Presenter  | # of people | Preparation    | Data/report   | Advantages        | Disadvantages
Formal Proof | Formal        | None       | Team        | Yes            | Yes (a proof) | Very effective    | Requires trained mathematicians
Inspections  | Formal        | Not author | 3-6         | Yes            | Yes           | Effective         | Short-term cost
Walkthrough  | Informal      | Anyone     | Larger #s   | Presenter only | ?             | Familiarizes many | Fewer errors found
Buddy Check  | Very informal | None       | 1 or 2      | None           | No            | Inexpensive       | Fewer errors found
Verification: Correctness Proofs

 Testing can uncover errors, but it cannot prove program correctness
 Manual correctness proofs
• Use mathematical induction and predicate calculus
• Feasible only for small programs
• The proofs themselves can contain errors
 Automated correctness proofs
• A macro-compiler produces a symbolic representation of the software
• Built on predicate calculus and AI theory
• Limited to certain types of applications
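To make "mathematical induction and predicate calculus" concrete, here is a minimal loop-invariant proof sketch (my own example, not from the slides) for a loop that sums the first n integers:

```latex
% Program:  s := 0;  i := 0;
%           while i < n do  i := i + 1;  s := s + i  od
% Claim: on exit, $s = \sum_{k=1}^{n} k = n(n+1)/2$.
% Loop invariant:
\[
  I(i,s) \;\equiv\; s = \frac{i(i+1)}{2} \;\wedge\; 0 \le i \le n
\]
% Base case: before the loop, $i = 0$ and $s = 0 = \frac{0 \cdot 1}{2}$, so $I$ holds.
% Inductive step: assume $I(i,s)$ and $i < n$; one iteration yields
\[
  s' = s + (i+1) = \frac{i(i+1)}{2} + (i+1) = \frac{(i+1)(i+2)}{2},
\]
% so $I(i+1, s')$ holds again. The loop exits with $i = n$,
% hence $s = n(n+1)/2$.  (QED)
```

Even this tiny proof has several steps that could be botched, which is why the approach scales poorly beyond small programs.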
Strategies and Techniques for Validation

 Well-defined activities
• Low-level: unit testing, integration testing
• High-level: usability, functional, system, and acceptance testing
• Regression testing – test unchanged code AGAIN after changes elsewhere
 Testware
• Black box – tests derived from requirements and functional specs, with no knowledge of structure or code
• White box – tests based on the internal code
Test Information Flow
(Figure: the software configuration (specs, code) and the test configuration (plan, cases, expected results) feed Testing; test results are compared with expected results in an Evaluation step; errors go to Debugging, which produces corrections; error-rate data drives a Reliability model, yielding a predicted reliability.)
TESTING STRATEGIES
Software Testing Steps

(Figure: unit testing → integration testing → validation testing → system testing.)
Unit Testing

 Evaluate
• Module interface – number of input parameters equals number of arguments; types and order match; input-only arguments are not altered; global variable definitions are consistent; constraints are passed along (e.g., the max size of an array)
• Local data structures – improper or inconsistent declaration, erroneous initialization, incorrect variable names, inconsistent data types, underflow, overflow, address exceptions, global data
• File structure – file attributes correct, OPEN statements correct, format specs match I/O statements, buffer size = record size, files opened before use, EOF handled, I/O errors handled, textual errors in output
• White box tests and coverage
• Most common errors (computations) – arithmetic precedence, mixed-mode operations, incorrect initialization, precision inaccuracy, incorrect symbolic representation of an expression (a precedence sketch follows)
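A minimal sketch of an arithmetic-precedence fault (hypothetical function, not from the slides) and the unit test that catches it:

```python
def average(a, b):
    # BUG: division binds tighter than addition,
    # so this computes a + (b / 2), not (a + b) / 2.
    return a + b / 2

def average_fixed(a, b):
    # Correct: parenthesize before dividing.
    return (a + b) / 2

# A simple unit test exposes the fault: average(2, 4) returns 4.0, not 3.0.
assert average_fixed(2, 4) == 3.0
assert average(2, 4) != 3.0   # demonstrates the precedence bug
```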
More Unit Testing

 Comparisons
• Comparisons of different data types; incorrect logical operators or precedence; expecting equality where precision error makes it unlikely (see the sketch below); improper or nonexistent loop termination
 Antibugging
• Cleanly terminate processing or reroute when an error occurs (often incorporated, but not usually tested)
 Error handling
• Error descriptions should be intelligible and provide enough information; the error noted should match the error actually encountered; error handling should occur BEFORE system intervention
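A minimal sketch (my example) of the "expectation of equality when precision error" pitfall:

```python
import math

# Floating-point arithmetic accumulates rounding error,
# so exact equality tests are unreliable.
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False: total is 0.9999999999999999

# Compare within a tolerance instead.
print(math.isclose(total, 1.0))  # True
```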
Integration Testing

 Data can be lost across an interface
 One module can have an inadvertent effect on another
 Imprecision can accumulate across module boundaries
 Shared global variables can cause problems
Testing at Different Levels

 Bottom-up integration
• Uses drivers at many different levels
• Disadvantage: interface problems appear later in the process
 Top-down integration
• Uses stubs
• Test the main logic first, then add modules and retest
• Disadvantage: planned lower levels may be impossible to write
 Either way, the entire package still needs to be validated (a stub/driver sketch follows the figures)
(Figures: top-down approach; bottom-up approach.)
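A minimal sketch of the stub and driver idea (the module names here are hypothetical, chosen only for illustration):

```python
# Top-down integration: test the high-level logic first, replacing
# an unwritten lower-level module with a stub that returns a canned answer.
def fetch_rate_stub(currency):
    return 2.0          # stub: canned value instead of a real lookup

def convert(amount, currency, fetch_rate=fetch_rate_stub):
    return amount * fetch_rate(currency)   # main logic under test

assert convert(100, "EUR") == 200.0

# Bottom-up integration: test a low-level module first by calling it
# from a throwaway driver.
def parse_record(line):
    name, value = line.split(",")
    return name, int(value)

def driver():
    # Driver: exercises parse_record directly with sample inputs.
    assert parse_record("width,42") == ("width", 42)

driver()
```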
Validation Testing

 Uses black box tests to demonstrate conformity to requirements
 Deviations found at this stage can rarely be fixed before the scheduled completion date
 Configuration review: check all deliverables for the support of maintenance
 Alpha testing – at the developer's site, by the customer
 Beta testing – at the customer's site; results are reported back to the developer
System Testing

The software is incorporated into a larger computer-based system.
 Recovery tests – force the system to fail and verify that recovery is performed within the required time
 Security tests – the tester plays the role of someone who breaks in, attacks the system, and penetrates the database
 Stress tests – abnormal frequency, volume, or quantity of inputs
 Performance tests – timings and resource utilization
TESTING TECHNIQUES
Black Box Testing

 Interface testing
• Unit interfaces and I/O interfaces
 Equivalence partitioning
• Input classes: generate one test point for each input class
• Output classes: generate one test point that yields a result in each output class
• Subdivide classes into subclasses (the middle of a range and its two extremes)
• Error-handling routines / exception output conditions (input beyond the accepted range)
 Boundary value analysis
• Test at the boundaries of a range (a partitioning sketch follows this list)
 Functional testing
• Functions within certain mathematical classes can be distinguished by their values on a small number of points
• Example: f(x) = y if x > 0; y − 1 if x ≤ 0
• Extends to functions of more than one variable
 Random inputs
 Cause-effect graphing
 Comparison testing
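A minimal sketch of equivalence partitioning plus boundary value analysis (the grading function and its ranges are hypothetical):

```python
def grade(score):
    # Hypothetical unit under test: maps 0-100 to pass/fail,
    # rejecting out-of-range input.
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Equivalence classes: invalid-low, fail (0-59), pass (60-100), invalid-high.
# One point from the middle of each valid class:
assert grade(30) == "fail"
assert grade(80) == "pass"

# Boundary values: the edges of each range.
for s, expected in [(0, "fail"), (59, "fail"), (60, "pass"), (100, "pass")]:
    assert grade(s) == expected

# Error-handling classes: input beyond the accepted range.
for bad in (-1, 101):
    try:
        grade(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```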
White Box Testing

 Based on program structure
 Coverage metrics for thoroughness
• Statement coverage
• Branch coverage (each condition takes both true and false outcomes)
• Path coverage (every combination of true/false outcomes)
  • Impractical: there are far too many paths
  • Eliminate infeasible paths; in the fragment below, the then-then path is infeasible:

if y < 0:
    x = y - 1    # here x is negative, so the next test cannot succeed
else:
    x = y + 1
if x > 0:
    ...          # then-then path is infeasible

  • Missing paths ("special cases") can hinge on a single input data point (e.g., y = 0)
More on Path Coverage

• Coincidental correctness:

read x
y = x + 2
write y

versus the faulty version y = x * 2. With input x = 2, both programs output 4, so they cannot be distinguished; one test point per path is insufficient.
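A quick demonstration (my example) that a second test point per path resolves the ambiguity:

```python
correct = lambda x: x + 2
faulty  = lambda x: x * 2

print(correct(2) == faulty(2))   # True  - x = 2 is coincidentally correct
print(correct(3) == faulty(3))   # False - x = 3 exposes the fault
```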
Paths

A control structure whose loop may execute up to 20 times has on the order of 10^14 possible paths. At one test per millisecond, exhaustive path testing would take 10^14 ms = 10^11 s, or about 3,170 years.
More White Box Testing

 Data flow coverage
 Mutation analysis
• Create mutant programs (small syntactic changes to the original)
• For each test, see whether each mutant gives output identical to the original:
  • If yes, the mutant is live
  • If no, the mutant is killed
• Display the set of live mutants; if all mutants are killed, we gain confidence in the test set
• Example mutations (a sketch follows):
  • Replace one constant or variable with another
  • Replace one arithmetic operator with another
  • Similarly for relational and logical operators
  • Delete a statement
  • Change an increment
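A minimal mutation-analysis sketch (the function and mutants are hypothetical):

```python
def original(a, b):
    return a + b

# Two mutants, each differing from the original by one operator.
def mutant_mul(a, b):      # arithmetic operator replaced: + -> *
    return a * b

def mutant_sub(a, b):      # arithmetic operator replaced: + -> -
    return a - b

tests = [(2, 2)]           # a deliberately weak test set

for mutant in (mutant_mul, mutant_sub):
    killed = any(original(a, b) != mutant(a, b) for a, b in tests)
    print(mutant.__name__, "killed" if killed else "LIVE")

# With only (2, 2), mutant_mul survives (2 + 2 == 2 * 2): the live mutant
# tells us the test set is too weak. Adding the test (1, 2) kills it.
```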
Tools

 Static analyzers – take no actual inputs; like compilers, they check syntax and flag unreachable code and undefined references
 Code auditors
 Assertion processors
 Test file/data generators
 Test verifiers
 Test harnesses
 Output comparators
 Simulators
 Data flow analyzers
 Symbolic execution systems (verifiers)
Reliability

 Based on the observed error rate
 Based on internal characteristics of the program (complexity, number of operands, number of operators)
 Error seeding: seed the software with known errors and compare the number of seeded errors detected with the number of actual errors detected; this evaluates the power of the tests (a worked estimate follows)
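The usual way to turn seeding into a number (the standard error-seeding estimator, often attributed to Mills; the slides stop short of stating it): if S errors are seeded, s of them are found, and n real errors are found, then, assuming seeded and real errors are equally detectable, the detection rate s/S also estimates the fraction of real errors found, so the total number of real errors is roughly n·S/s.

```python
def estimate_total_errors(seeded, seeded_found, real_found):
    # Assumes seeded and indigenous errors are equally easy to detect,
    # so seeded_found / seeded estimates the fraction of real errors found.
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# Example: 20 errors seeded; testing finds 15 of them plus 30 real errors.
# Estimated total real errors: 30 / (15/20) = 40, i.e. ~10 still latent.
print(estimate_total_errors(20, 15, 30))   # 40.0
```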