Bernard Nsabimana – Task 4: Comparison of Testing Strategies (draft)

Comparison of Testing Strategies
Provide a logical comparison, with substantial details, of white box testing, black box testing,
unit testing, and integration testing, and how they can be used to verify the application.
 Software testing is a process used to identify the correctness, completeness, and quality of
developed computer software. It includes a set of activities conducted with the intent of finding
errors in software so that they can be corrected before the product is released to the end users.
 In simple words, software testing is an activity to check whether the actual results match
the expected results and to ensure that the software system is defect free.
 Why is testing important?
 On April 26, 1994, a China Airlines Airbus A300 crashed due to a software bug, killing 264 people.
 Software bugs can potentially cause monetary and human loss; history is full of such examples.
Read more at http://www.guru99.com/software-testing-introductionimportance.html#uh7b2Z3BOHZBSEJK.99
https://www.atlassian.com/software-testing
Build a Solid Software Test Strategy
An effective testing strategy includes automated, manual, and exploratory tests to efficiently reduce
risk and tighten release cycles. Tests come in several flavors:

Unit tests validate the smallest components of the system, ensuring they handle known inputs and
outputs correctly. Unit test the individual classes in your application to verify they work under expected,
boundary, and negative cases.

Integration tests exercise an entire subsystem and ensure that a set of components play nicely
together.

Functional tests verify end-to-end scenarios that your users will engage in.
So why bother with unit and integration tests if functional tests hit the whole system? Two reasons:
test performance and speed of recovery. Functional tests tend to be slower to run, so use unit tests
at compile time as your sanity check. And when an integration test fails, it pinpoints the bug's
location better than a functional test, making it faster for developers to diagnose and fix. A healthy
strategy requires tests at all levels of the technology stack to ensure each part, as well as the system
as a whole, works correctly.
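To make the unit level concrete, here is a minimal sketch of such a test, assuming JUnit 4; the small Discount class and all names are invented for illustration. It covers an expected case, the two boundary cases, and a negative case, as described above:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class DiscountTest {

        // Hypothetical unit under test: applies a percentage discount to a price.
        static class Discount {
            double apply(double price, int percent) {
                if (percent < 0 || percent > 100) {
                    throw new IllegalArgumentException("percent out of range");
                }
                return price * (100 - percent) / 100.0;
            }
        }

        private final Discount discount = new Discount();

        @Test
        public void testExpectedCase() {
            // 10% off 100.0 should be 90.0 (delta needed for floating point).
            assertEquals(90.0, discount.apply(100.0, 10), 1e-9);
        }

        @Test
        public void testBoundaryCases() {
            assertEquals(100.0, discount.apply(100.0, 0), 1e-9);   // lower bound
            assertEquals(0.0, discount.apply(100.0, 100), 1e-9);   // upper bound
        }

        @Test(expected = IllegalArgumentException.class)
        public void testNegativeCase() {
            discount.apply(100.0, 101);   // invalid input must be rejected
        }
    }

Because such tests run in milliseconds, they can serve as the compile-time sanity check mentioned above.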
Test Strategy
The purpose of testing is to find defects, not to pass easy tests. A test strategy tells you which types of
testing seem best to do, the proposed sequence in which to perform them, and the optimum amount of
effort to put into each test objective to make your testing most effective. A test strategy is based on the
prioritized requirements and any other available information about what is important to the customers. Because
you will always face time and resource constraints, a test strategy faces up to this reality and tells you how to make
the best use of whatever resources you do have to locate most of the worst defects. Without a test strategy, you are
apt to waste your time on less fruitful testing and miss using some of your most powerful testing options. You should
create the test strategy at about the middle of the design phase, as soon as the requirements have settled down.
A test strategy is an outline that describes the testing approach of the software development cycle. It is
created to inform project managers, testers, and developers about some key issues of the testing
process. This includes the testing objective, methods of testing new functions, total time and resources
required for the project, and the testing environment.
Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which
types of test are to be performed, and which entry and exit criteria apply. They are created based on
development design documents. System design documents are primarily used and occasionally,
conceptual design documents may be referred to. Design documents describe the functionality of the
software to be enabled in the upcoming release. For every stage of development design, a corresponding
test strategy should be created to test the new feature sets.
Definition
Black Box Testing is a software testing method in which the internal structure/design/implementation of
the item being tested is NOT known to the tester.
Definition - What does Black Box Testing mean?
Black box testing is a software testing technique that focuses on the analysis of software
functionality, versus internal system mechanisms. Black box testing was developed as a method of
analyzing client requirements, specifications and high-level design strategies.
A black box software tester selects a set of valid and invalid input and code execution conditions and
checks for valid output responses.
Black box testing is also known as functional testing.
Techopedia explains Black Box Testing
A search engine is a simple example of an application subject to routine black box testing. A search
engine user enters text in a Web browser's search bar. The search engine then locates and retrieves
related user data results (output).
Black box testing advantages include:
 Simplicity: facilitates testing of high-level designs and complex applications.
 Conserves resources: testers focus on software functionality.
 Test cases: focusing on software functionality facilitates quick test case development.
 Provides flexibility: specific programming knowledge is not required.
Black box testing also has certain disadvantages, as follows:
 Test case/script design and maintenance may be problematic because black box testing tools depend on known inputs.
 Graphical user interface (GUI) changes may break test scripts.
 Testing only covers application functions.
What is Black Box Testing?
Black box testing is a software testing technique in which the functionality of the software under
test (SUT) is tested without looking at the internal code structure, implementation details, or
knowledge of the internal paths of the software. This type of testing is based entirely on
the software requirements and specifications.
Read more at http://www.guru99.com/black-box-testing.html#ioxo5sFxMQ7eJLSi.99
Also known as functional testing: a software testing technique whereby the internal workings of the item being tested
are not known by the tester. For example, in a black box test on a software design, the tester only knows the inputs
and what the expected outcomes should be, not how the program arrives at those outputs. The tester never
examines the programming code and does not need any further knowledge of the program other than its
specifications.
The advantages of this type of testing include:
The test is unbiased because the designer and the tester are independent of each other.
The tester does not need knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
Test cases can be designed as soon as the specifications are complete.
The disadvantages of this type of testing include:
The test can be redundant if the software designer has already run a test case.
The test cases are difficult to design.
Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many
program paths will go untested.
For a complete software examination, both white box and black box tests are required.
Definition - What does Functional Testing mean?
Functional testing is a software testing process used within software development in which software
is tested to ensure that it conforms with all requirements. Functional testing is a way of checking
software to ensure that it has all the required functionality that's specified within its functional
requirements.
Techopedia explains Functional Testing
Functional testing is primarily used to verify that a piece of software provides the same output
as required by the end-user or business. Typically, functional testing involves evaluating and
comparing each software function with the business requirements. Software is tested by providing it
with some related input so that the output can be evaluated to see how it conforms, relates or varies
compared to its base requirements. Moreover, functional testing also checks the software for
usability, such as by ensuring that the navigational functions are working as required.
Some functional testing techniques include smoke testing, white box testing, black box testing, unit
testing and user acceptance testing.
In black box testing we just focus on the inputs and outputs of the software system, without
bothering about internal knowledge of the software program.
The black box here can be any software system you want to test: for example, an operating system
like Windows, a website like Google, a database like Oracle, or even your own custom application.
Under black box testing, you can test these applications by just focusing on the inputs and outputs,
without knowing their internal code implementation.
Black box testing – steps
Here are the generic steps followed to carry out any type of black box testing:
 Initially, the requirements and specifications of the system are examined.
 The tester chooses valid inputs (positive test scenario) to check whether the SUT processes them correctly.
 Some invalid inputs (negative test scenario) are also chosen to verify that the SUT is able to detect them.
 The tester determines the expected outputs for all those inputs.
 The software tester constructs test cases with the selected inputs.
 The test cases are executed.
 The software tester compares the actual outputs with the expected outputs.
 Defects, if any, are fixed and re-tested.
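A minimal sketch of those steps in code, assuming JUnit 4; validateAge is a hypothetical SUT whose specification (accept ages 18 to 60, reject everything else) is all the tester is allowed to know:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class AgeValidatorBlackBoxTest {

        // The SUT is used strictly through its public contract; the tester
        // never looks at how it is implemented (shown inline only for brevity).
        static boolean validateAge(int age) {
            return age >= 18 && age <= 60;
        }

        @Test
        public void validInputsAreAccepted() {          // positive test scenario
            assertTrue(validateAge(18));
            assertTrue(validateAge(35));
            assertTrue(validateAge(60));
        }

        @Test
        public void invalidInputsAreRejected() {        // negative test scenario
            assertFalse(validateAge(17));
            assertFalse(validateAge(61));
            assertFalse(validateAge(-5));
        }
    }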
Types of Black Box Testing
There are many types of black box testing, but the following are the prominent ones:
 Functional testing – this black box testing type is related to the functional requirements of a system; it is done by software testers.
 Non-functional testing – this type of black box testing is not related to testing of specific functionality, but to non-functional requirements such as performance, scalability, and usability.
 Regression testing – regression testing is done after code fixes, upgrades, or any other system maintenance to check that the new code has not affected the existing code.
Tools used for Black Box Testing:
The tools used for black box testing largely depend on the type of black box testing you are doing.
For functional/regression tests you can use QTP.
For non-functional tests you can use LoadRunner.
Black box testing strategy:
The following are the prominent test strategies among the many used in black box testing:
 Equivalence class testing: used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage.
 Boundary value testing: focused on the values at boundaries. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases, and is most suitable for systems where the input lies within certain ranges.
 Decision table testing: a decision table puts causes and their effects in a matrix. Each column contains a unique combination.
Read more at http://www.guru99.com/black-box-testing.html#ioxo5sFxMQ7eJLSi.99
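For example, an input field specified to accept values from 18 to 60 yields three equivalence classes (below 18, 18 to 60, above 60) and the boundary values 17, 18, 60, 61. A sketch of both strategies, assuming JUnit 4 and a hypothetical isEligible function:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class EligibilityPartitionTest {

        static boolean isEligible(int age) {   // hypothetical SUT
            return age >= 18 && age <= 60;
        }

        @Test
        public void oneRepresentativePerEquivalenceClass() {
            assertFalse(isEligible(10));   // class 1: below the valid range
            assertTrue(isEligible(40));    // class 2: inside the valid range
            assertFalse(isEligible(75));   // class 3: above the valid range
        }

        @Test
        public void valuesAtAndAroundTheBoundaries() {
            assertFalse(isEligible(17));   // just below the lower boundary
            assertTrue(isEligible(18));    // lower boundary
            assertTrue(isEligible(60));    // upper boundary
            assertFalse(isEligible(61));   // just above the upper boundary
        }
    }

One representative per class plus the four boundary values gives reasonable coverage from only seven inputs, which is the point of both techniques.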
Black Box Testing and the Software Development Life Cycle (SDLC)
Black box testing has its own life cycle, called the Software Test Life Cycle (STLC), which maps to every stage of the SDLC:
 Requirement – the initial stage of the SDLC, in which requirements are gathered. Software testers also take part in this stage.
 Test Planning & Analysis – the testing types applicable to the project are determined, and a test plan is created that identifies possible project risks and their mitigation.
 Design – test cases/scripts are created on the basis of the software requirement documents.
 Test Execution – the prepared test cases are executed. Bugs, if any, are fixed and re-tested.
Read more at http://www.guru99.com/black-box-testing.html#ioxo5sFxMQ7eJLSi.99
Definition - What does White-Box Testing mean?
White-box testing is a methodology used to ensure and validate the internal framework,
mechanisms, objects and components of a software application. White-box testing verifies code
according to design specifications and uncovers application vulnerabilities.
White-box testing is also known as transparent box testing, clear box testing, structural testing and
glass box testing. Glass box and clear box indicate that internal mechanisms are visible to a
software engineering team.
Techopedia explains White-Box Testing
During white-box testing, code is run with preselected input values for the validation of preselected
output values. White-box testing often involves writing software code stubs and drivers.
White-box testing advantages include:
 Enables test case reusability and delivers greater stability
 Facilitates code optimization
 Facilitates finding the locations of hidden errors in early phases of development
 Facilitates effective application testing
 Removes unnecessary lines of code
Disadvantages include:
 Requires a skilled tester with internal structure knowledge
 Time consuming
 High costs
 Code bit validation is difficult
White-box testing complements unit testing, integration testing and regression testing.
White Box Testing is a software testing method in which the internal structure/design/implementation of
the item being tested is known to the tester.
While white box testing (unit testing) validates the internal structure and working of your
software code, the main focus of black box testing is on the validation of your functional
requirements.
To conduct white box testing, knowledge of the underlying programming language is essential. Present-day
software systems use a variety of programming languages and technologies, and it is not
possible to know all of them. Black box testing provides abstraction from code and focuses the testing
effort on the software system's behaviour.
Also, software systems are not developed in a single chunk; development is broken down into
different modules. Black box testing facilitates testing the communication amongst
modules (integration testing).
If you push code fixes into your live software system, a complete system check (black box
regression tests) becomes essential.
White box testing nonetheless has its own merits and helps detect many internal errors which may
degrade system performance.
Read more at http://www.guru99.com/black-box-testing.html#ioxo5sFxMQ7eJLSi.99
Also known as glass box, structural, clear box and open box testing: a software testing technique whereby explicit
knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing,
white box testing uses specific knowledge of the programming code to examine outputs. The test is accurate only if the
tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended
goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
For a complete software examination, both white box and black box tests are required.
White-box testing (also known as clear box testing, glass box testing, transparent box testing,
and structural testing) is a method of testing software that tests internal structures or workings of an
application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal
perspective of the system, as well as programming skills, are used to design test cases. The tester
chooses inputs to exercise paths through the code and determine the appropriate outputs. This is
analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the software
testing process, it is usually done at the unit level. It can test paths within a unit, paths between units
during integration, and between subsystems during a system–level test. Though this method of test
design can uncover many errors or problems, it might not detect unimplemented parts of the specification
or missing requirements.
White-box test design techniques include:
 Control flow testing
 Data flow testing
 Branch testing
 Path testing
 Statement coverage
 Decision coverage
Overview
White-box testing is a method of testing the application at the level of the source code. The test cases are
derived through the use of the design techniques mentioned above: control flow testing, data flow testing,
branch testing, path testing, statement coverage and decision coverage, as well as modified
condition/decision coverage. White-box testing is the use of these techniques as guidelines to create an
error-free environment by examining any fragile code. These white-box testing techniques are the
building blocks of white-box testing, whose essence is the careful testing of the application at the source
code level to prevent any hidden errors later on.[1] These different techniques exercise every visible path
of the source code to minimize errors and create an error-free environment. The whole point of white-box
testing is the ability to know which line of the code is being executed and to be able to identify what the
correct output should be.[1]
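As a small illustration of designing tests from the code's structure (a sketch assuming JUnit 4; max3 is an invented example, not from the cited source), the inputs below are chosen so that each of the two branches is both taken and not taken:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class Max3WhiteBoxTest {

        // Unit under test; its branch structure drives the test design.
        static int max3(int a, int b, int c) {
            int max = a;
            if (b > max) max = b;   // branch 1: taken / not taken
            if (c > max) max = c;   // branch 2: taken / not taken
            return max;
        }

        @Test
        public void everyBranchOutcomeIsExercised() {
            assertEquals(3, max3(3, 2, 1)); // neither branch taken
            assertEquals(3, max3(2, 3, 1)); // branch 1 taken only
            assertEquals(3, max3(2, 1, 3)); // branch 2 taken only
            assertEquals(3, max3(1, 2, 3)); // both branches taken
        }
    }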
Levels
1. Unit testing. White-box testing is done during unit testing to ensure that the code works as
intended, before any integration happens with previously tested code. White-box testing during
unit testing catches defects early on, before the code is integrated with the rest of the
application, and therefore prevents errors later on.[1]
2. Integration testing. White-box tests at this level are written to test the interactions of the
interfaces with each other. Unit-level testing made sure that each piece of code was tested and
working in an isolated environment; integration examines the correctness of the behaviour in an
open environment, using white-box testing for any interactions of interfaces that are known to
the programmer.[1]
3. Regression testing. White-box testing during regression testing is the use of recycled white-box
test cases at the unit and integration testing levels.[1]
Basic procedure
White-box testing's basic procedure requires understanding the source code you are testing at
a deep level. The programmer must have a deep understanding of the application
to know what kinds of test cases to create so that every visible path is exercised for testing. Once the
source code is understood, it can be analyzed so that test cases can be created. These are
the three basic steps that white-box testing takes in order to create test cases:
1. Input: involves different types of requirements, functional specifications, detailed design
documents, proper source code, and security specifications.[2] This is the preparation stage of white-box testing, to lay out all of the basic information.
2. Processing unit: involves performing risk analysis to guide the whole testing process, preparing a proper
test plan, executing test cases, and communicating results.[2] This is the phase of building test cases to
make sure they thoroughly test the application, with the given results recorded accordingly.
3. Output: preparing a final report that encompasses all of the above preparations and results.[2]
Definition - What does Unit Test mean?
A unit test is a software development life cycle (SDLC) component in which a comprehensive testing
procedure is individually applied to the smallest parts of a software program for fitness or desired
operation.
Techopedia explains Unit Test
A unit test is a quality measurement and evaluation procedure applied in most enterprise software
development activities. Generally, a unit test evaluates how the software code complies with the overall
objective of the software/application/program and how its fitness affects other smaller units. Unit
tests may be performed manually – by one or more developers – or through an automated software
solution.
When tested, each unit is isolated from the primary program or interface. Unit tests are typically
performed after development and prior to publishing, thus facilitating integration and early problem
detection. The size or scope of a unit varies by programming language, software application and
testing objectives.
Levels
Black box testing is mainly applicable to the higher levels of testing (system testing and acceptance
testing), while white box testing is mainly applicable to the lower levels (unit testing and integration testing).
There are four levels of software testing: Unit >> Integration >> System >> Acceptance
1. Unit Testing is a level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as designed.
2. Integration Testing is a level of the software testing process where individual units are combined and tested
as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
3. System Testing is a level of the software testing process where a complete, integrated system/software is
tested. The purpose of this test is to evaluate the system's compliance with the specified requirements.
4. Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The
purpose of this test is to evaluate the system's compliance with the business requirements and assess whether
it is acceptable for delivery.
Black box vs. white box at a glance:
Responsibility – black box: generally independent software testers; white box: generally software developers.
Programming knowledge – black box: not required; white box: required.
Implementation knowledge – black box: not required; white box: required.
Basis for test cases – black box: requirement specifications; white box: detailed design.
Verification: The process of evaluating software to determine whether the products of a given
development phase satisfy the conditions imposed at the start of that phase.
Validation: The process of evaluating software during or at the end of the development process to
determine whether it satisfies specified requirements.
Testing is the process of analysing a software item to detect the differences between existing and
required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.
A test is an activity in which a system or component is executed under specified conditions, the results are
observed or recorded, and an evaluation is made of some aspect of the system.
Software testing is the process of executing a program with the intention of finding errors in the code.
Software testing
– is an investigation conducted to provide stakeholders with information about the quality of the software
product or service under test;
– provides an objective, independent view of the software to allow the business to appreciate and
understand the risks of software implementation.
Unit Testing
DEFINITION
Unit Testing is a level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that each unit of the software performs as
designed.
Unit testing of software applications is done during the development (coding) of an application.
The objective of unit testing is to isolate a section of code and verify its correctness. In procedural
programming, a unit may be an individual function or procedure.
The goal of unit testing is to isolate each part of the program and show that the individual parts are
correct. Unit testing is usually performed by the developer.
Read more at http://www.guru99.com/unit-testing.html#RrokpIc3AWAOmPOT.99
Unit testing best practices
 Unit test cases should be independent. In case of any enhancement or change in requirements, unit test cases should not be affected.
 Test only one unit of code at a time.
 Follow clear and consistent naming conventions for your unit tests.
 In case of a change in the code of any module, ensure there is a corresponding unit test case for the module, and that the module passes the tests, before changing the implementation.
 Bugs identified during unit testing must be fixed before proceeding to the next phase in the SDLC.
 Adopt a "test as you code" approach: the more code you write without testing, the more paths you have to check for errors.
Read more at http://www.guru99.com/unit-testing.html#RrokpIc3AWAOmPOT.99
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a single
output. In procedural programming a unit may be an individual program, function, procedure, etc. In
object-oriented programming, the smallest unit is a method, which may belong to a base/super class,
abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be
discouraged as there will probably be many individual units within that module.)
Unit testing frameworks, drivers, stubs and mock or fake objects are used to assist in unit testing.
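A minimal sketch of the fake-object idea, assuming JUnit 4; Clock and Greeter are hypothetical names invented for illustration. The unit is isolated from the real system clock by handing it a hand-rolled fake:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class GreeterTest {

        interface Clock { int hourOfDay(); }          // seam used for isolation

        static class Greeter {                        // unit under test
            private final Clock clock;
            Greeter(Clock clock) { this.clock = clock; }
            String greet() {
                return clock.hourOfDay() < 12 ? "Good morning" : "Good afternoon";
            }
        }

        @Test
        public void morningGreeting() {
            Greeter greeter = new Greeter(() -> 9);   // fake clock: always 9 a.m.
            assertEquals("Good morning", greeter.greet());
        }

        @Test
        public void afternoonGreeting() {
            Greeter greeter = new Greeter(() -> 15);  // fake clock: always 3 p.m.
            assertEquals("Good afternoon", greeter.greet());
        }
    }

The same seam would accept a stub, a framework-generated mock, or the real clock in production; the test never depends on the actual time of day.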
METHOD
Unit Testing is performed by using the White Box Testing method.
When is it performed?
Unit Testing is the first level of testing and is performed prior to Integration Testing.
Who performs it?
Unit Testing is normally performed by software developers themselves or their peers. In rare cases it may
also be performed by independent software testers.
TASKS
 Unit Test Plan: prepare, review, rework, baseline
 Unit Test Cases/Scripts: prepare, review, rework, baseline
 Unit Test: perform
BENEFITS
 Unit testing increases confidence in changing/maintaining code. If good unit tests are written and if they are run every time any code is changed, the likelihood of any defects due to the change being promptly caught is very high. If unit testing is not in place, the most one can do is hope for the best and wait till the test results at higher levels of testing are out. Also, if code is already made less interdependent to make unit testing possible, the unintended impact of changes to any code is smaller.
 Code is more reusable. In order to make unit testing possible, code needs to be modular. This means that code is easier to reuse.
 Development is faster. How? If you do not have unit testing in place, you write your code and perform that fuzzy 'developer test' (you set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code, and hope that you are all set). If you have unit testing in place, you write the test, write the code, and run the tests. Writing tests takes time, but that time is compensated by the time it takes to run them: test runs take very little time, since you need not fire up the GUI and provide all those inputs. And, of course, unit tests are more reliable than 'developer tests'. Development is faster in the long run too. How? The effort required to find and fix defects found during unit testing is peanuts in comparison to those found during system testing or acceptance testing.
 The cost of fixing a defect detected during unit testing is less than that of defects detected at higher levels. Compare the cost (time, effort, destruction, humiliation) of a defect detected during acceptance testing, or, say, when the software is live.
 Debugging is easy. When a test fails, only the latest changes need to be debugged. With testing at higher levels, changes made over the span of several days/weeks/months need to be debugged.
 Code is more reliable. Why? I think there is no need to explain this to a sane person.
TIPS
 Find a tool/framework for your language.
 Do not create test cases for everything: some will be handled by themselves. Instead, focus on the tests that impact the behavior of the system.
 Isolate the development environment from the test environment.
 Use test data that is close to that of production.
 Before fixing a defect, write a test that exposes the defect. Why? First, you will later be able to catch the defect if you do not fix it properly. Second, your test suite is now more comprehensive. Third, you will most probably be too lazy to write the test after you have already fixed the defect.
 Write test cases that are independent of each other. For example, if a class depends on a database, do not write a case that interacts with the database to test the class. Instead, create an abstract interface around that database connection and implement that interface with a mock object.
 Aim at covering all paths through the unit. Pay particular attention to loop conditions.
 Make sure you are using a version control system to keep track of your code as well as your test cases.
 In addition to writing cases to verify the behavior, write cases to ensure the performance of the code.
 Perform unit tests continuously and frequently.
ONE MORE REASON
Let's say you have a program comprising two units, and the only test you perform is system testing (you
skip unit and integration testing). During testing, you find a bug. Now, how will you determine the cause of
the problem?
 Is the bug due to an error in unit 1?
 Is the bug due to an error in unit 2?
 Is the bug due to errors in both units?
 Is the bug due to an error in the interface between the units?
 Is the bug due to an error in the test or test case?
Unit testing is often neglected but it is, in fact, the most important level of testing.
Integration Testing
Definition - What does Integration Testing mean?
Integration testing is a software testing methodology used to test individual software components or
units of code to verify interaction between various software components and detect interface defects.
Components are tested as a single group or organized in an iterative manner. After the integration
testing has been performed on the components, they are readily available for system testing.
Techopedia explains Integration Testing
Integration is a key software development life cycle (SDLC) strategy. Generally, small software
systems are integrated and tested in a single phase, whereas larger systems involve several
integration phases to build a complete system, such as integrating modules into low-level
subsystems for integration with larger subsystems. Integration testing encompasses all aspects of a
software system's performance, functionality and reliability.
Most unit-tested software systems are composed of integrated components that are tested for error
isolation due to grouping. Module details are presumed accurate, but prior to integration testing,
each module is separately tested via partial component implementation, also known as a stub.
The three main integration testing strategies are as follows:
 Big Bang: involves integrating the modules to build a complete software system. This is considered a high-risk approach because it requires proper documentation to prevent failure.
 Bottom-Up: involves low-level component testing, followed by high-level components. Testing continues until all hierarchical components are tested. Bottom-up testing facilitates efficient error detection.
 Top-Down: involves testing the top integrated modules first. Subsystems are tested individually. Top-down testing facilitates detection of lost module branch links.
DEFINITION
Integration Testing is a level of the software testing process where individual units are combined and
tested as a group.
In Integration Testing, individual software modules are integrated logically and tested as a group.
A typical software project consists of multiple software modules, coded by different
programmers. Integration testing focuses on checking data communication amongst these modules.
Read more at http://www.guru99.com/integration-testing.html#srjGL3EhHdGggbJc.99
Need for Integration Testing:
Although each software module is unit tested, defects still exist, for various reasons:
 A module is generally designed by an individual software developer, whose understanding and programming logic may differ from that of other programmers. Integration testing becomes necessary to verify that the software modules work in unity.
 At the time of module development, there are wide chances of changes in requirements by the clients. These new requirements may not be unit tested, and hence integration testing becomes necessary.
 Interfaces of the software modules with the database could be erroneous.
 External hardware interfaces, if any, could be erroneous.
 Inadequate exception handling could cause issues.
Read more at http://www.guru99.com/integration-testing.html#srjGL3EhHdGggbJc.99
Best Practices/Guidelines for Integration Testing
 First determine the integration test strategy that could be adopted, and later prepare the test cases and test data accordingly.
 Study the architecture design of the application and identify the critical modules. These need to be tested on priority.
 Obtain the interface designs from the architectural team and create test cases to verify all of the interfaces in detail. Interfaces to the database/external hardware/software applications must be tested in detail.
 After the test cases, it's the test data which plays the critical role.
 Always have the mock data prepared prior to executing. Do not select test data while executing the test cases.
Read more at http://www.guru99.com/integration-testing.html#srjGL3EhHdGggbJc.99
The purpose of this level of testing is to expose faults in the interaction between integrated units.
Test drivers and test stubs are used to assist in Integration Testing.
Note: the definition of a unit is debatable, and it could mean any of the following:
1. the smallest testable part of a piece of software
2. a 'module', which could consist of many of (1)
3. a 'component', which could consist of many of (2)
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge
and the ballpoint are produced separately and unit tested separately. When two or more units are ready,
they are assembled and Integration Testing is performed. For example, whether the cap fits into the body
or not.
METHOD
Any of Black Box Testing, White Box Testing, and Gray Box Testing methods can be used. Normally, the
method depends on your definition of ‘unit’.
TASKS
 Integration Test Plan: prepare, review, rework, baseline
 Integration Test Cases/Scripts: prepare, review, rework, baseline
 Integration Test: perform
When is Integration Testing performed?
Integration Testing is performed after Unit Testing and before System Testing.
Who performs Integration Testing?
Either Developers themselves or independent Testers perform Integration Testing.
APPROACHES
 Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang integration testing and system testing? Well, the former tests only the interactions between the units, while the latter tests the entire system.
 Top Down is an approach to Integration Testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when top-down development is followed. Test stubs are needed to simulate lower-level units, which may not be available during the initial phases.
 Bottom Up is an approach to Integration Testing where bottom-level units are tested first and upper-level units step by step after that. This approach is taken when bottom-up development is followed. Test drivers are needed to simulate higher-level units, which may not be available during the initial phases.
 Sandwich/Hybrid is an approach to Integration Testing which combines the Top Down and Bottom Up approaches.
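The following sketch (plain Java with JUnit 4 assumed; OrderService and PaymentGateway are invented names) shows the mechanics behind these approaches: a stub stands in for a lower-level unit that is not ready yet, while the test itself plays the role of a driver when exercising a lower-level unit directly:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class IntegrationApproachTest {

        interface PaymentGateway { boolean charge(double amount); } // lower-level unit

        static class OrderService {                                 // upper-level unit
            private final PaymentGateway gateway;
            OrderService(PaymentGateway gateway) { this.gateway = gateway; }
            String placeOrder(double amount) {
                return gateway.charge(amount) ? "CONFIRMED" : "REJECTED";
            }
        }

        // Top-down: the real gateway is not ready yet, so a stub simulates it
        // with canned behaviour.
        @Test
        public void topDownWithStub() {
            PaymentGateway stub = amount -> amount <= 500;
            OrderService service = new OrderService(stub);
            assertEquals("CONFIRMED", service.placeOrder(100));
            assertEquals("REJECTED", service.placeOrder(1000));
        }

        // Bottom-up: this test method acts as the driver, exercising the
        // lower-level unit directly (a simple stand-in implementation here).
        @Test
        public void bottomUpWithDriver() {
            PaymentGateway lowerUnit = amount -> amount > 0;
            assertTrue(lowerUnit.charge(50));
            assertFalse(lowerUnit.charge(-1));
        }
    }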
TIPS
 Ensure that you have a proper detail design document where the interactions between each unit are clearly defined. In fact, you will not be able to perform Integration Testing without this information.
 Ensure that you have a robust software configuration management system in place. Or else, you will have a tough time tracking the right version of each unit, especially if the number of units to be integrated is huge.
 Make sure that each unit is unit tested before you start Integration Testing.
 As far as possible, automate your tests, especially when you use the Top Down or Bottom Up approach, since regression testing is important each time you integrate a unit, and manual regression testing can be inefficient.
Seven basic principles governing software testing:
Summary of the Seven Testing Principles
Principle 1 – Testing shows the presence of defects
Principle 2 – Exhaustive testing is impossible
Principle 3 – Early testing
Principle 4 – Defect clustering
Principle 5 – Pesticide paradox
Principle 6 – Testing is context dependent
Principle 7 – Absence of errors is a fallacy
Read more at http://www.guru99.com/software-testing-seven-principles.html#qVI43ih5gsL6RtKr.99
 Consider a scenario where you are moving a file from folder A to folder B. Think of all the possible ways you can test this.
 Apart from the usual scenarios, you can also test the following conditions:
 Trying to move the file when it is open
 You do not have the security rights to paste the file in folder B
 Folder B is on a shared drive and storage capacity is full
 Folder B already has a file with the same name; in fact, the list is endless
 Or suppose you have 15 input fields to test, each having 5 possible values; the number of combinations to be tested would be 5^15.
 If you were to test all the possible combinations, project EXECUTION TIME & COSTS would rise exponentially.
 Hence, one of the testing principles states that EXHAUSTIVE testing is not possible. Instead, we need an optimal amount of testing based on the risk assessment of the application.
 And the million dollar question is: how do you determine this risk?
 To answer this, let's do an exercise.
 In your opinion, which operation is most likely to cause your operating system to fail?
 I am sure most of you would have guessed: opening 10 different applications all at the same time.
 So if you were testing this operating system, you would realize that defects are likely to be found in multi-tasking, which needs to be tested thoroughly. This brings us to our next principle, Defect Clustering, which states that a small number of modules contain most of the defects detected.
 By experience you can identify such risky modules. But this approach has its own problems:
 If the same tests are repeated over and over again, eventually the same test cases will no longer find new bugs.
 This is another principle of testing, called the "Pesticide Paradox".
 To overcome this, the test cases need to be regularly reviewed & revised, adding new & different test cases to help find more defects.
 But even after all this sweat & hard work in testing, you can never claim your product is bug-free. To drive home this point, let's see this video of the public launch of Windows 98.
 Do you think a company like MICROSOFT would not have tested their OS thoroughly, and would risk their reputation just to see their OS crash during its public launch?
 Hence, a testing principle states that Testing shows the presence of defects, i.e. software testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness.
 But what if you work extra hard, taking all precautions, and make your software product 99% bug-free, and the software still does not meet the needs & requirements of the clients?
 This leads us to our next principle, which states that Absence of Error is a Fallacy: finding and fixing defects does not help if the system build is unusable and does not fulfill the users' needs & requirements.
 To fix this problem, the next principle of testing states Early Testing: testing should start as early as possible in the Software Development Life Cycle, so that any defects in the requirements or design phase are captured as well. More on this principle in a later training tutorial.
 And the last principle of testing states that Testing is context dependent, which basically means that the way you test an e-commerce site will be different from the way you test a commercial off-the-shelf application.
Read more at http://www.guru99.com/software-testing-seven-principles.html#qVI43ih5gsL6RtKr.99

1. Software Testing Strategies

2. Strategic Approach to Testing - 1 Testing begins at the component level
and works outward toward the integration of the entire computer-based
system. Different testing techniques are appropriate at different points in
time. The developer of the software conducts testing and may be assisted
by independent test groups for large projects. The role of the independent
tester is to remove the conflict of interest inherent when the builder is
testing his or her own product.

3. Strategic Approach to Testing - 2 Testing and debugging are different
activities. Debugging must be accommodated in any testing strategy. Need
to consider verification issues: are we building the product right? Need to
consider validation issues: are we building the right product?

4. Strategic Testing Issues - 1 Specify product requirements in a
quantifiable manner before testing starts. Specify testing objectives
explicitly. Identify the user classes of the software and develop a profile for
each. Develop a test plan that emphasizes rapid cycle testing.

5. Strategic Testing Issues - 2 Build robust software that is designed to test
itself (e.g. use anti-bugging). Use effective formal reviews as a filter prior to
testing. Conduct formal technical reviews to assess the test strategy and
test cases.

6. Stages of Testing: module or unit testing, integration testing, function
testing, performance testing, acceptance testing, installation testing.

7. Unit Testing: program reviews, formal verification, testing the program
itself, black box and white box testing.

8. Black Box or White Box? Factors to consider: the maximum number of
logic paths (determines whether white box testing is feasible), the nature of
the input data, the amount of computation involved, and the complexity of
the algorithms.

9. Unit Testing Details Interfaces tested for proper information flow. Local
data are examined to ensure that integrity is maintained. Boundary
conditions are tested. Basis path testing should be used. All error handling
paths should be tested. Drivers and/or stubs need to be developed to test
incomplete software.

10. Generating Test Data. Ideally you want to test every permutation of valid
and invalid inputs; equivalence partitioning is often required to reduce
otherwise infinite test case sets. Every possible input belongs to one of the
equivalence classes. No input belongs to more than one class. Each point
is representative of its class.
Bernard Nsabimana
Task4
Comparison of Testing Strategies

11. Regression Testing. Check for defects propagated to other modules by
changes made to the existing program. A representative sample of existing test
cases is used to exercise all software functions, plus additional test cases
focusing on software functions likely to be affected by the change, and test
cases that focus on the changed software components.

12. Integration Testing: bottom-up testing (test harness), top-down
testing (stubs), modified top-down testing (test levels independently), big
bang, sandwich testing.

13. Top-Down Integration Testing. The main program is used as a test driver, and
stubs are substituted for components directly subordinate to it. Subordinate
stubs are replaced one at a time with real components (following a depth-first
or breadth-first approach). Tests are conducted as each component is
integrated. On completion of each set of tests, another stub is replaced
with a real component. Regression testing may be used to ensure that new
errors are not introduced.

14. Bottom-Up Integration Testing Low level components are combined in
clusters that perform a specific software function. A driver (control
program) is written to coordinate test case input and output. The cluster is
tested. Drivers are removed and clusters are combined moving upward in
the program structure.

15. Comparison of integration strategies:
                               Bottom-Up   Top-Down   Big Bang   Sandwich
Integration                    Early       Early      –          Early
Time to get working program    Late        Early      Late       Early
Drivers needed                 Yes         No         Yes        Yes
Stubs needed                   No          Yes        Yes        Yes
Parallelism                    Medium      Low        High       Medium
Test specification             Easy        Hard       Easy       Medium
Product control seq.           Easy        Hard       Easy       Hard
(The Big Bang value in the Integration row was lost in the source.)

1. UNIT TESTING

2. Unit testing is a level of the software testing process where individual
units/components of a software/system are tested. The purpose is to
validate that each unit of the software performs as designed.

3. UNIT TESTING a method by which individual units of source code are
tested to determine if they are fit for use concerned with functional
correctness and completeness of individual program units typically
written and run by software developers to ensure that code meets its
design and behaves as intended. Its goal is to isolate each part of the
program and show that the individual parts are correct.

4. What is unit testing concerned with? Functional correctness and
completeness; error handling; checking input values (parameters);
correctness of output data (return values); optimizing algorithm and
performance.

5. Types of testing Black box testing – (application interface, internal
module interface and input/output description) White box testing- function
executed and checked Gray box testing - test cases, risks assessments
and test methods

6. Traditional Testing vs Unit Testing

7. Traditional Testing Test the system as a whole Individual
components rarely tested Errors go undetected Isolation of errors
difficult to track down

8. Traditional Testing Strategies Print Statements Use of Debugger
Debugger Expressions Test Scripts

9. Unit Testing Each part tested individually All components tested at
least once Errors picked up earlier Scope is smaller, easier to fix errors

10. Unit Testing Ideals Isolatable Repeatable Automatable Easy to
Write

11. Why Unit Test? Faster Debugging Faster Development Better
Design Excellent Regression Tool Reduce Future Cost

12. BENEFITS Unit testing allows the programmer to refactor code at a
later date, and make sure the module still works correctly. By testing the
parts of a program first and then testing the sum of its parts, integration
testing becomes much easier. Unit testing provides a sort of living
documentation of the system.

13. GUIDELINES Keep unit tests small and fast  Ideallythe entire test
suite should be executed before every code check in. Keeping the tests
fast reduce the development turnaround time. Unit tests should be fully
automated and non-interactive  The test suite is normally executed on a
regular basis and must be fully automated to be useful. If the results
require manual inspection the tests are not proper unit tests.

14. GUIDELINES Make unit tests simple to run  Configure the
development environment so that single tests and test suites can be run by
a single command or a one button click. Measure the tests 
Applycoverage analysis to the test runs so that it is possible to read the
exact execution coverage and investigate which parts of the code is
executed and not.

15. GUIDELINES Fix failing tests immediately  Each developer should be
responsible for making sure a new test runs successfully upon check in,
and that all existing tests runs successfully upon code check in. If a test
fails as part of a regular test execution the entire team should drop what
they are currently doing and make sure the problem gets fixed.

16. GUIDELINES Keep testing at unit level  Unittesting is about testing
classes. There should be one test class per ordinary class and the class
behaviour should be tested in isolation. Avoid the temptation to test an
entire work-flow using a unit testing framework, as such tests are slow and
hard to maintain. Work-flow testing may have its place, but it is not unit
testing and it must be set up and executed independently.

17. GUIDELINES Start off simple  One simple test is infinitely better than
no tests at all. A simple test class will establish the target class test
framework, it will verify the presence and correctness of both the build
environment, the unit testing environment, the execution environment and
the coverage analysis tool, and it will prove that the target class is part of
the assembly and that it can be accessed.

18. GUIDELINES Keep tests independent  Toensure testing robustness
and simplify maintenance, tests should never rely on other tests nor should
they depend on the ordering in which tests are executed. Name tests
properly  Make sure each test method test one distinct feature of the class
being tested and name the test methods accordingly. The typical naming
convention is test[what] such As testSaveAs(), testAddListener(),
testDeleteProperty() etc.

19. GUIDELINES Keep tests close to the class being tested  If the class
to test is Foo the test class should be called FooTest (not TestFoo) and
kept in the same package (directory) as Foo. Keeping test classes in
separate directory trees makes them harder to access and maintain. Make
sure the build environment is configured so that the test classes doesnt
make its way into production libraries or executables.

20. GUIDELINES Test public API  Unittesting can be defined as testing
classes through their public API. Some testing tools makes it possible to
test private content of a class, but this should be avoided as it makes the
test more verbose and much harder to maintain. If there is private content
that seems to need explicit testing, consider refactoring it into public
methods in utility classes instead. But do this to improve the general
design, not to aid testing.

21. GUIDELINES Think black-box  Actas a 3rd party class consumer,
and test if the class fulfills its requirements. And try to tear it apart. Think
white-box  Afterall, the test programmer also wrote the class being tested,
and extra effort should be put into testing the most complex logic.

22. GUIDELINES Test the trivial cases too  It is sometimes
recommended that all non-trivial cases should be tested and that trivial
methods like simple setters and getters can be omitted. However, there are
several reasons why trivial cases should be tested too:  Trivialis hard to
define. It may mean different things to different people.  From a black-box
perspective there is no way to know which part of the code is trivial.  The
trivial cases can contain errors too, often as a result of copy-paste
operations:

23. GUIDELINES Focus on execution coverage first  Differentiate
between execution coverage and actual test coverage. The initial goal of a
test should be to ensure high execution coverage. This will ensure that the
code is actually executed on some input parameters. When this is in place,
the test coverage should be improved. Note that actual test coverage
cannot be easily measured (and is always close to 0% anyway).

24. GUIDELINES Cover boundary cases  Make sure the parameter
boundary cases are covered. For numbers, test negatives, 0, positive,
smallest, largest, NaN, infin ity, etc. For strings test empty string, single
character string, non-ASCII string, multi-MB strings etc. For collections test
empty, one, first, last, etc. For dates, test January 1, February 29,
December 31 etc. The class being tested will suggest the boundary cases
in each specific case. The point is to make sure as many as possible of
these are tested properly as these cases are the prime candidates for
errors.

25. GUIDELINES Provide a random generator  When the boundary
cases are covered, a simple way to improve test coverage further is to
generate random parameters so that the tests can be executed with
different input every time. To achieve this, provide a simple utility class that
generates random values of the base types like doubles, integers, strings,
dates etc. The generator should produce values from the entire domain of
each type.
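A sketch of such a utility in plain Java (the class name and the domains shown are illustrative):

    import java.util.Random;

    // Illustrative random-value generator for test parameters.
    public class RandomTestData {

        private static final Random RANDOM = new Random();

        public static int anyInt() {
            return RANDOM.nextInt();                    // full int domain
        }

        public static double anyDouble() {
            // Mix ordinary values with widely varying magnitudes.
            return RANDOM.nextDouble() * Math.pow(10, RANDOM.nextInt(19) - 9);
        }

        public static String anyString(int maxLength) {
            int length = RANDOM.nextInt(maxLength + 1); // includes the empty string
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                sb.append((char) (RANDOM.nextInt(0x7F - 0x20) + 0x20)); // printable ASCII
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(anyInt() + " | " + anyDouble() + " | " + anyString(12));
        }
    }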

26. GUIDELINES Test each feature once  When being in testing mode it
is sometimes tempting to assert on "everything" in every test. This should
be avoided as it makes maintenance harder. Test exactly the feature
indicated by the name of the test method. As for ordinary code, it is a goal
to keep the amount of test code as low as possible.

27. GUIDELINES Use explicit asserts  Always prefer assertEquals(a, b)
to assertTrue(a == b) (and likewise) as the former will give more useful
information of what exactly is wrong if the test fails. This is in particular
important in combination with random value parameters as described
above when the input values are not known in advance.
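In JUnit 4 terms (assumed here; computeAnswer is a hypothetical, deliberately buggy unit used only to show the two failure messages), the difference looks like this:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class ExplicitAssertTest {

        static int computeAnswer() { return 41; }   // hypothetical buggy unit

        @Test
        public void explicitAssertShowsBothValues() {
            // Fails with: expected:<42> but was:<41> - the wrong value is visible.
            assertEquals(42, computeAnswer());
        }

        @Test
        public void booleanAssertHidesTheActualValue() {
            // Fails with only a bare AssertionError - no hint of the actual value.
            assertTrue(computeAnswer() == 42);
        }
    }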

28. GUIDELINES Provide negative tests  Negative tests intentionally
misuse the code and verify robustness and appropriate error handling.
Design code with testing in mind  Writingand maintaining unit tests are
costly, and minimizing public API and reducing cyclomatic complexity in the
code are ways to reduce this cost and make high-coverage test code faster
to write and easier to maintain.

29. GUIDELINES Dont connect to predefined external resources 
Unittests should be written without explicit knowledge of the environment
context in which they are executed so that they can be run anywhere at
anytime. In order to provide required resources for a test these resources
should instead be made available by the test itself.

30. GUIDELINES Know the cost of testing  Not writing unit tests is costly,
but writing unit tests is costly too. There is a trade-off between the two, and
in terms of execution coverage the typical industry standard is at about
80%. Prioritize testing  Unit testing is a typical bottom-up process, and if
there is not enough resources to test all parts of a system priority should be
put on the lower levels first.

31. GUIDELINES Prepare test code for failures  Ifthe first assertion is
false, the code crashes in the subsequent statement and none of the
Bernard Nsabimana
Task4
Comparison of Testing Strategies
remaining tests will be executed. Always prepare for test failure so that the
failure of a single test doesnt bring down the entire test suite execution.

32. GUIDELINES Write tests to reproduce bugs  When a bug is reported,
write a test to reproduce the bug (i.e. a failing test) and use this test as a
success criteria when fixing the code. Know the limitations  Unit tests can
never prove the correctness of code.
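A sketch of that workflow, assuming JUnit 4; the rounding bug and the name Bug1234RegressionTest are invented for illustration. The test fails while the bug exists, becomes the success criterion for the fix, and then stays on as a regression guard:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class Bug1234RegressionTest {

        // Hypothetical reported bug: percentages were truncated instead of rounded.
        static long percentOf(long value, long total) {
            return Math.round(100.0 * value / total);  // the fix; before: 100 * value / total
        }

        @Test
        public void reproducesReportedRoundingBug() {
            // With the old truncating code this returned 66 and the test failed.
            assertEquals(67, percentOf(2, 3));
        }
    }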

33. Unit Testing Techniques: Structural, Functional & Error based
Techniques Structural Techniques:  It is a White box testing technique that
uses an internal perspective of the system to design test cases based on
internal structure. It requires programming skills to identify all paths through
the software. The tester chooses test case inputs to exercise paths through
the code and determines the appropriate outputs.

34. The major structural techniques are: Statement testing – a test strategy in
which each statement of a program is executed at least once. Branch
testing – testing in which all branches in the program source code are
tested at least once. Path testing – testing in which all paths in the
program source code are tested at least once. Condition testing –
condition testing allows the programmer to determine the path through a
program by selectively executing code based on the comparison of a
value. Expression testing – testing in which the application is tested for
different values of a regular expression.

35. Unit Testing Techniques: functional testing techniques. These are black
box testing techniques which test the functionality of the application.

36. Some functional testing techniques: Input domain testing – this
technique concentrates on the size and type of every input object, in
terms of boundary value analysis and equivalence classes. Boundary
value – boundary value analysis is a software testing design technique in
which tests are designed to include representatives of boundary values.
Syntax checking – a technique used to check the syntax of
the application. Equivalence partitioning – a software testing
technique that divides the input data of a software unit into partitions of data
from which test cases can be derived.

37. Unit Testing Techniques: error-based techniques. The best person to
know the defects in his code is the person who designed it.

38. A few of the error-based techniques: Fault seeding – techniques whereby
known defects are put into the code and tested until they
are all found. Mutation testing – done by mutating certain
statements in your source code and checking whether your test code is able to
find the errors; mutation testing is very expensive to run, especially on very
large applications. Historical test data – this technique calculates the
priority of each test case using historical information from the previous
executions of the test case.
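To make the mutation idea concrete, here is a hand-made sketch in plain Java with JUnit 4 assumed (mutation tools such as PIT generate the mutants automatically; all names here are invented): the mutant flips >= to >, and a test at the boundary value 18 distinguishes, and therefore "kills", the mutant:

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class MutationSketchTest {

        static boolean canVoteOriginal(int age) { return age >= 18; }

        // The same method with one mutated operator (>= became >).
        static boolean canVoteMutant(int age) { return age > 18; }

        @Test
        public void boundaryTestAtEighteen() {
            assertTrue(canVoteOriginal(18));  // passes against the original code
            // Run against the mutant, the same assertion fails (18 > 18 is false),
            // i.e. this test case "kills" the mutant:
            assertFalse(canVoteMutant(18));
        }
    }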
References:
http://softwaretestingfundamentals.com/differences-between-black-box-testing-and-white-box-testing/
http://agile.csc.ncsu.edu/SEMaterials/BlackBox.pdf
http://thesai.org/Downloads/Volume3No6/Paper%203A%20Comparative%20Study%20of%20White%20Box,%20Black%20Box%20and%20Grey%20Box%20Testing%20Techniques.pdf
http://www.slideshare.net/wdelzein/black-white-box-testing
http://searchsecurity.techtarget.com/tip/Black-box-and-white-box-testing-Which-is-best
http://www.sersc.org/journals/IJSEIA/vol5_no3_2011/1.pdf
http://www.cs.colostate.edu/~malaiya/structbbox2.pdf
http://www.guru99.com/black-box-testing.html