Software Testing and Reliability
Testing Real-Time Systems
Aditya P. Mathur
Purdue University
May 19-23, 2003
Guidant Corporation, Minneapolis/St. Paul, MN
Graduate Assistants: Ramkumar Natarajan and Baskar Sridharan
Last update: April 17, 2003
Learning objectives
- Issues in testing real-time systems
- A methodology for testing real-time systems
- Tools for testing real-time systems
Software Testing and Reliability Aditya P. Mathur 2002
References
- Vincent Encontre, "Testing Embedded Systems: Do You Have the GuTs for It?", Rational, June 2001, http://www.rational.com/products/whitepapers/final_encontre_testing.jsp
- CodeTest Vision for Embedded Software, Applied Microsystems Corporation, http://www.amc.com/products/codetest/
- G-Cover Object Code Analyzer, Green Hills Software Inc., http://www.ghs.com/products/safety_critical/gcover.html
- DO-178B, http://www.rtca.org and http://www.windriver.com/products/html/do178b.html
The RTCA DO-178B Guidelines
- RTCA's DO-178B, accepted by the FAA, offers guidelines for the development of airborne systems and equipment software.
- The guidelines in DO-178B impose constraints on the software development process so that the resulting system is safe.
- Most RTOS tool vendors have accepted the guidelines in DO-178B and begun to offer tool support.
Levels of Criticality: A, B
- Level A: Software whose anomalous behavior, as shown by the system safety assessment process, would cause or contribute to a failure of system function resulting in a catastrophic failure condition for the aircraft.
- Level B: Software whose anomalous behavior, as shown by the system safety assessment process, would cause or contribute to a failure of system function resulting in a hazardous/severe-major failure condition for the aircraft.
- Guidant products: Level A or B?
Levels of Criticality: C, D, E
- Level C: Software whose anomalous behavior, as shown by the system safety assessment process, would cause or contribute to a failure of system function resulting in a major failure condition for the aircraft.
- Level D: Software whose anomalous behavior, as shown by the system safety assessment process, would cause or contribute to a failure of system function resulting in a minor failure condition for the aircraft.
- Level E: Software whose anomalous behavior, as shown by the system safety assessment process, would cause or contribute to a failure of system function with no effect on aircraft operational capability or pilot workload.
Guidelines Related to Testing
- Test procedures are correct.
- Test results are correct.
- Compliance with the Software Verification Plan.
- Test coverage of high-level requirements.
- Test coverage of low-level requirements.
- Test coverage of object code.
- Test coverage: statement, decision, and modified condition/decision (MC/DC).
- Test coverage: data coupling and control coupling.
Issues in Testing Real-Time Systems
- Separation of development and execution platforms.
- Variety of execution platforms (combinations of hardware and OS, e.g., Z80/QNX, PowerPC/Tornado).
- Tight resource and timing constraints.
- Emerging quality and certification standards.
Test Methodology: Unit Testing
- Identify the module to be tested. This module becomes the module under test (MuT).
- Prepare the MuT for testing:
  - Isolate the MuT from its environment.
  - Make the MuT an independently executable module by adding a stub and a test driver.
- Generate tests.
[Figure: test harness — a test driver and a stub surround the MuT; the stub replaces the rest of the application.]
Issues in Unit Testing
- How to generate tests?
- What to observe inside the MuT?
  - Returned parameters.
  - Internal state, such as values of global variables, coverage, or control flow for verification against a UML sequence diagram.
- How to observe?
  - Use coverage tools or debuggers.
  - Insert probes.
- When to stop testing?
Test Methodology: Integration Testing
- Combine multiple MuTs into a larger software component.
- Build a test harness.
- Look for the correctness of interactions amongst the various MuTs in the component. Again, UML sequence diagrams can be useful in validating the interactions against the design.
Test Methodology: System (Unit) Testing
- The MuT is now a complete software application.
- The test is executed under the RTOS.
- Communication amongst the individual modules might be via the RTOS.
- Observation is done using probes; the data so collected is analyzed on the host platform.
- The test driver could now be a simulator: it brings the system to a desired initial state and then generates and sends a sequence of test messages to the system.
- Usually one does grey-box testing at this stage, but more could be done.
Test Methodology: System Integration Testing
- Integrate the other software components of the end application. These might be off-the-shelf components.
- Test the integrated system.
Test Methodology: System Validation Testing
- For the overall embedded system, perform the following:
  - Check if the system meets the end-user functional requirements.
  - Perform non-functional testing, including load testing, robustness testing, and performance testing.
  - Check for conformance with any interoperability requirements.
Run-time Analysis
[Figure: run-time analysis tool chain — source passes through a compiler and assembler/linker to produce an executable that runs on the target platform; an RTA server collects data from the target and feeds analysis tools and a GUI on the host.]
WindRiver and Agilent offer real-time trace tools that work with Agilent's logic analyzers.
Applicability of Test Techniques to Real-Time Systems
- All techniques for test-data generation and coverage analysis covered in this course are applicable when testing real-time applications.
- Due to the embedded nature of such applications, one generally needs to use special tools that operate in a client-server mode: the server runs on the host machine and the client in the target environment.
- The client provides data gathered during the execution of the embedded application; the server analyzes it and presents it to the tester.
Tools for Testing Real-Time Systems
- Rational Test RealTime: code coverage; execution sequencing; unit testing; integration testing; checking assertions for C++ classes; testing C threads, tasks, and processes; evaluating the UML model; requirement-to-test traceability; plug-in for MathWorks Simulink.
- Applied Microsystems CodeTest: a suite of several tools; performance measurement; tracing of execution history; finding memory leaks; code coverage measurement; supports QNX.
- Green Hills Software G-Cover: coverage analysis at the object-code level; indirect source-code coverage information.
- WindRiver RTA: detects memory leaks; identifies hot spots; per-task profiling.
Summary
- Issues in testing real-time systems
- Test tool architecture
- Test methodology