CHAPTER 2
THEORETICAL FRAMEWORK
2.1 Basic Theory
2.1.1 Definition of System
According to Bagad (2009, pp. 1-3), a system is defined as a group of interrelated
components organized with a purpose. The characteristics of a system are:
1. A system can be either probabilistic or deterministic in nature.
2. Systems often have multiple goals.
3. Systems often consist of subsystems.
4. Subsystems send and receive data from each other.
5. Subsystems may be open or closed.
6. A system always exists and functions within an environment.
Figure 2.1 Model of a System
Source: Bagad (2009, pp. 1-3)
2.1.2 Definition of Information System
According to Rainer & Cegielski (2010, p. W-9), an information system is a
process that collects, processes, stores, analyzes, and disseminates information for a
specific purpose; most ISs are computerized.
According to Hall (2011, p. 772), an information system is a set of formal
procedures by which data are collected, processed into information, and distributed
to users.
2.1.3 Definition of Enterprise Resource Planning (ERP)
According to Leon (2008, p. 29), ERP is a set of tools and processes that
integrates departments and functions across a company into one computer system.
According to Ray (2011, p. 4), ERP is an integrated information system built on
a centralized database and having a common computing platform that helps in the
effective usage of an enterprise’s resources and facilitates the flow of information
between all business functions of the enterprise (and with external stakeholders).
2.1.4 Definition of Testing
According to Farooq (2011, p. 741), testing is a process of verifying and
validating that a software application or program meets the business and technical
requirements that guided its design and development and works as expected; it also
identifies important errors or flaws in the application, categorized by severity level,
that must be fixed.
According to Black (2009, p. 1), testing is a process to quantify the quality of
software development. There are three key questions that should be asked before
testing:
1. What might you test?
2. What should you test?
3. What can you test?
There are two aspects of the testing procedure:
1. Test Granularity
Test granularity refers to the fineness or coarseness of a test’s focus.
There are three techniques for granularity testing:
• Structural (White Box): structural tests (white-box tests) find bugs in low-level structural elements such as lines of code, database schemas, chips, subassemblies, and interfaces.
• Behavioral (Black Box): behavioral tests (black-box tests) find bugs in high-level operations, such as major features, operational profiles, and customer scenarios.
• Live (Alpha, Beta, or Acceptance): live tests involve putting customers, content experts, early adopters, and other end users in front of the system.
Figure 2.2 The test granularity spectrum and owners
Source: Black (2009, p. 2)
2. Test Phases
Test phases are the steps of the testing process. Several commonly
known phases are:
• Unit testing focuses on an individual piece of code.
• Component or subsystem testing focuses on the constituent pieces of the system.
• Integration or product testing focuses on the relationships and interfaces between pairs of components and groups of components in the system under test.
• String testing focuses on problems in typical usage scripts and customer operational strings.
• System testing encompasses the entire system, fully integrated. Sometimes, as in installation and usability testing, these tests look at the system from a customer or end-user point of view.
• Acceptance or user acceptance testing aims to demonstrate that the system meets requirements. This phase of testing is common in contractual situations, when successful completion of acceptance tests obligates a buyer to accept a system.
• Pilot testing checks the ability of the assembly line to mass-produce the finished system.
Figure 2.3 The test execution period for various test phases in a development
project
Source: Black (2009, p. 9)
2.2 Topic Related Theory
2.2.1 Activity Diagram
Activity diagrams can be used to describe any business processes done by
people in an organization. However, they are also used to describe processes that
include both manual and automated system activities, so they can be used to define a
use case. (Satzinger, Jackson, & Burd, 2010, p. 242)
The benefit of creating an activity diagram is that it is more visual and makes it
easier to understand the overall flow of activity. (Satzinger, Jackson, & Burd, 2010,
p. 250)
An activity is a sequence of operations performed by the system from start to
end, and each activity can transform data or a process. (Azizi, 2011, p. 80)
2.2.2 Failure Mode and Effects Analysis (FMEA)
FMEA is a technique for understanding and prioritizing possible failure modes
(or quality risks) in system functions, features, attributes, behaviors, components, and
interfaces. (Black, 2009, p. 32).
Figure 2.4 Failure Mode and Effects Analysis
Source: (Black, 2009, p. 33)
1) System Function or Feature
In most rows, you enter a concise description of a system function. If the
entry represents a category, you must break it down into more specific
functions or features in subsequent rows. (Black, 2009, p. 32)
2) Potential Failure Mode(s) - Quality Risk(s)
In the Potential Failure Mode(s) - Quality Risk(s) column, for each specific
function or feature (but not for the category itself), you identify the ways you
might encounter a failure. These are quality risks associated with the loss of a
specific system function. Each specific function or feature can have multiple
failure modes. (Black, 2009, p. 32)
3) Potential Effect(s) of Failure
In the Potential Effect(s) of Failure column, you list how each failure
mode can affect the user, in one or more ways. These entries are kept general
rather than trying to anticipate every possible unpleasant outcome. (Black,
2009, p. 33)
4) Critical?
In the Critical? column you indicate whether the potential effect has
critical consequences for the user. Is the product feature or function
completely unusable if this failure mode occurs? (Black, 2009, p. 33)
5) Severity
In the Severity column, you capture the effect of the failure (immediate or
delayed) on the system. (Black, 2009, p. 33). This example uses a scale from 1
to 5 as follows:
1. Loss of data, hardware damage, or a safety issue
2. Loss of functionality with no workaround
3. Loss of functionality with a workaround
4. Partial loss of functionality
5. Cosmetic or trivial
6) Potential Cause(s) of Failure
In the Potential Cause(s) of Failure column, you list possible factors that
might trigger the failure—for example, operating-system error, user error, or
normal use. (Black, 2009, p. 33)
7) Priority
In the Priority column, you rate the effect of failure on users, customers,
or operators. (Black, 2009, p. 34). This example uses a scale from 1 to 5, as
follows:
1. Complete loss of system value
2. Unacceptable loss of system value
3. Possibly acceptable reduction in system value
4. Acceptable reduction in system value
5. Negligible reduction in system value
8) Detection Method(s)
In the Detection Method(s) column, you list a currently existing method
or procedure, such as development activities or vendor testing, that can find
the problem before it affects users, excluding any future actions (such as
creating and executing test suites) you might perform to catch it. (Black, 2009,
p. 34)
9) Likelihood
In the Likelihood column, you have a number that represents the
vulnerability of the system, in terms of: a) existence in the product (e.g., based
on technical risk factors such as complexity and defect history); b) escape
from the current development process; and c) intrusion on user operations.
(Black, 2009, p. 34). This example uses the following 1-to-5 scale:
1. Certain to affect all users
2. Likely to impact some users
3. Possible impact on some users
4. Limited impact to few users
5. Unimaginable in actual usage
10) RPN (Risk Priority Number)
As with the informal technique, the RPN (Risk Priority Number) column
tells you how important it is to test this particular failure mode. The RPN is
the product of the severity, the priority, and the likelihood. Because this
example used values from 1 to 5 for all three of these parameters, the RPN
ranges from 1 to 125. (Black, 2009, p. 34)
11) Recommended Action
The Recommended Action column contains one or more simple action
items for each potential effect to reduce the related risk (which pushes the risk
priority number toward 125). (Black, 2009, p. 34)
12) Who/When
The Who/When? column indicates who is responsible for each
recommended action and when they are responsible for it (for example, in
which test phase). (Black, 2009, p. 35)
13) References
The References column provides references for more information about
the quality risk. Usually this involves product specifications, a requirements
document, and the like. (Black, 2009, p. 35)
14) Action Results
The Action Results columns allow you to record the influence of the
actions taken on the priority, severity, likelihood, and RPN values. You will
use these columns after you have implemented your tests, not during the initial
FMEA. (Black, 2009, p. 35)
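The RPN arithmetic described above can be sketched in a few lines (a hypothetical illustration: the functions, failure modes, and ratings are invented; only the severity * priority * likelihood product comes from the source):

```python
# Sketch of FMEA Risk Priority Number (RPN) calculation.
# RPN = severity * priority * likelihood; with 1-5 scales, RPN ranges 1-125.
# On Black's scales a LOWER rating means a WORSE outcome, so low-RPN rows
# are tested first.

failure_modes = [
    # (function, failure mode, severity, priority, likelihood) -- invented values
    ("Sales Order", "order total miscalculated", 2, 1, 3),
    ("Login", "password field accepts blanks", 3, 2, 2),
    ("Report", "misaligned column header", 5, 5, 4),
]

def rpn(severity, priority, likelihood):
    """Risk priority number: the product of the three 1-5 ratings."""
    return severity * priority * likelihood

# Sort ascending: the smallest RPN marks the most important failure mode to test.
ranked = sorted(failure_modes, key=lambda row: rpn(*row[2:]))
for function, mode, s, p, l in ranked:
    print(f"RPN {rpn(s, p, l):3d}  {function}: {mode}")
```

On these example values, the miscalculated order total (RPN 6) is tested before the cosmetic report issue (RPN 100), matching the scale descriptions in the Severity and Priority columns above.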
2.2.3 Test Plan
The Test Plan level highlights test plans associated with the functional area to
which it belongs. This is important for future capabilities, since we’ll want the
tracking tool to create a new test plan from an existing test plan by adding new test
cases. Such incremental addition of test cases can occur, for example, in response to
triaging a customer bug or as we grow our automated test set. (Black, 2009, p. 224)
Figure 2.5 Test Plan Template
Source: (Black, 2009, p. 52)
a) Setting
It describes where testing is intended to be performed and the way the
organizations doing the testing relate to the rest of the organization. The
description might be as simple as ‘‘our test lab.’’ (Black, 2009, p. 54)
b) Quality Risk
This section summarizes the quality risks document and communicates it as
part of the plan. It cross-references the test strategy and the test
environments against the various risk categories. (Black, 2009, p. 56)
c) Entry criteria
Entry criteria spell out what must happen to allow a system to move into a
particular test phase. (Black, 2009, p. 58). These criteria should address questions
such as the following:
i. Are the necessary documentation, design, and requirements information available to allow testers to operate the system and judge correct behavior?
ii. Is the system ready for delivery, in whatever form is appropriate for the test phase in question?
iii. Are the supporting utilities, accessories, and prerequisites available in forms that testers can use?
iv. Is the system at the appropriate level of quality? Such a question usually implies that some or all of a previous test phase has been successfully completed, although it could refer to the extent to which code review issues have been handled. Passing a smoke test is another frequent measure of sufficient quality to enter a test phase.
v. Is the test environment (lab, hardware, software, and system administration support) ready?
d) Continuation criteria
Continuation criteria define those conditions and situations that must
prevail in the testing process to allow testing to continue effectively and
efficiently. (Black, 2009, p. 59).
e) Exit criteria
Exit criteria address the issue of how to determine when the project has
completed testing. For example, one exit criterion might be that all the planned
test cases and the regression tests have been run. (Black, 2009, p. 60).
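Entry, continuation, and exit criteria like those above behave as boolean checklists: a phase may begin only when every entry criterion holds, and testing may stop only when every exit criterion holds. A sketch of that gating logic (the criterion wordings paraphrase the questions above; the truth values are invented):

```python
# Phase-gate sketch: each criterion is a named boolean, and a gate opens only
# when every criterion in its checklist is satisfied.

entry_criteria = {
    "documentation and requirements available": True,
    "system delivered in testable form": True,
    "supporting utilities and prerequisites ready": True,
    "smoke test passed": False,          # example: this gate is still closed
    "test environment ready": True,
}

exit_criteria = {
    "all planned test cases run": False,
    "regression tests run": False,
}

def gate_open(criteria):
    """True only when every criterion in the checklist is satisfied."""
    return all(criteria.values())

print("enter phase:", gate_open(entry_criteria))
print("exit phase:", gate_open(exit_criteria))
```

Here the failed smoke test keeps the entry gate closed, which mirrors criterion iv above: the phase cannot begin until the previous phase's quality bar is met.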
ANSI/IEEE Standard 829-1983 describes a test plan as: “A document
describing the scope, approach, resources, and schedule of intended testing activities.
It identifies test items, the features to be tested, the testing tasks, who will do each
task, and any risks requiring contingency planning.” (Ammann & Offutt, 2008, p.
225). The purpose of a test plan is to define the strategies, scope of testing,
philosophy, test exit and entrance criteria, and test tools that will be used.
(Ammann & Offutt, 2008, p. 226).
A test plan is a document consisting of different test cases designed for
different testing objects and different testing attributes. The test plan is a matrix of
tests and test cases listed in order of execution. (Agarwal et al., 2010, p. 178).
The purpose of test planning therefore is to put together a plan which will
deliver the right tests, in the right order, to discover as many of the issues with the
software as time and budget allow. (Jenkins, 2008, p. 20).
2.2.4 Test Case
A test case is a sequence of steps, substeps, and other actions, performed
serially, in parallel, or in some combination, that creates the desired test conditions
that the test case is designed to evaluate. In some styles of documentation,
particularly IEEE 829, these elements are referred to as test specifications and test
procedures. (Black, 2009, p. 610)
Figure 2.6 Test Case Template
Source: (Black, 2009, p. 93)
A test case is composed of the test case values, expected results, prefix values,
and postfix values necessary for a complete execution and evaluation of the software
under test. (Ammann & Offutt, 2008, p. 15). Test case values are the input values
necessary to complete some execution of the software under test. (Ammann &
Offutt, 2008, p. 14). Prefix values are any inputs necessary to put the software into
the appropriate state to receive the test case values. (Ammann & Offutt, 2008, p. 15).
Postfix values are any inputs that need to be sent to the software after the test case
values are sent. (Ammann & Offutt, 2008, p. 15).
A test case is a set of instructions designed to discover a particular type of error
or defect in the software system by inducing a failure. The goal of the selected test
cases is to ensure that there are no errors in the program and, if there are, to reveal
them immediately. (Agarwal et al., 2010, p. 179).
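Ammann & Offutt's decomposition can be sketched as a small data structure (a hypothetical illustration: the field names follow their terminology, while the stack example and its values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One complete test case, per Ammann & Offutt's decomposition."""
    prefix_values: list      # inputs that put the software into the right state
    test_case_values: list   # the inputs actually under evaluation
    expected_result: object  # what a correct implementation should produce
    postfix_values: list = field(default_factory=list)  # clean-up inputs sent afterwards

# Invented example: testing a stack's pop() after two pushes.
tc = TestCase(
    prefix_values=["push 1", "push 2"],  # reach the state "stack holds [1, 2]"
    test_case_values=["pop"],            # the operation under test
    expected_result=2,                   # the last value pushed comes off first
    postfix_values=["clear"],            # restore a clean state for the next test
)
print(tc)
```

The point of the split is that only `test_case_values` exercise the behavior under test; the prefix and postfix inputs exist purely to set up and tear down the required state.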
2.2.5 Test Scenario
Test scenarios are similar to use cases, which describe the system as a whole
during requirements definition. Test scenarios define the end-to-end flow of how a
user will use the application. They help identify the complete flow, with the
different actors and transactions that occur while working with the application and
using the system; scenario writing makes test case writing easier, so that all test
cases can be written. Both valid and invalid scenarios must be considered, and the
system response must be traced back to requirements in each event. Any point
where a scenario cannot be completed, or where an assumption is required while
writing it, indicates a gap in the requirements. (Limaye, 2009, p. 358)
A test scenario provides a complete test for a major function or use case for a
system and consists of one or more test descriptions. Each test description supports a
subset of major functionality, as defined by the System Requirements Specification
(SyRS) and/or the Software Requirements Specification (SRS).
A test description consists of one or more test procedures. A test procedure is a
documented step-by-step process that supports a subset of major functionality, as
described by the scenario. (Texas Department of Information Resources, 2008, p. 2)
2.2.6 Test Suite
A test suite is a collection of tests used to validate the behavior of a product.
The scope of a test suite varies from organization to organization; there may be
several test suites for a particular product, for example. In most cases, however, a
test suite is a high-level concept, grouping together hundreds or thousands of tests
related by what they are intended to test. (Team, 2008)
A test suite often contains detailed instructions or goals for each collection of
test cases and information on the system configuration to be used during testing.
(Indumathi, 2010, p. 614)
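The suite-as-grouping idea can be sketched as a small mapping from suite names to their test cases and configuration notes (the suite names, case IDs, and configurations are invented, not taken from the source):

```python
# A test suite as a named grouping of related test cases, each with the system
# configuration to be used when the suite runs, in the spirit of the
# definitions above.

test_suites = {
    "Sales":     {"config": "client 100, test database",
                  "cases": ["SO-01", "SO-02", "SO-03"]},
    "Inventory": {"config": "warehouse W1 seeded with stock",
                  "cases": ["INV-01", "INV-02"]},
}

total_cases = sum(len(suite["cases"]) for suite in test_suites.values())
print(f"{len(test_suites)} suites, {total_cases} cases")
```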
2.2.7 Test Cycle
A test cycle is a partial or total execution of all the test suites planned for a
given test phase as part of that phase. A test phase involves at least one cycle
(usually more) through all the designated test suites. Test cycles are usually
associated with a single release of the system under test, such as a build of software
or a motherboard. Generally, new test releases occur during a test phase, triggering
another test cycle. (Black, 2009, p. 610)
Figure 2.7 Test Cycle
Source: (Black, 2009, p. 123)
There are four ways to spread tests across cycles:
1) Assigning an aggregate risk priority number to each test suite in
advance, based on the risk priority number for each test case, and then
running the test suites in a way that favors the higher-priority tests. It’s
called the static priority approach. (Black, 2009, p. 122)
Figure 2.8 Test Selection Using Static Priority Approach
Source: (Black, 2009, p. 124)
2) Assigning the risk priority numbers to each test suite in advance, but
adjusting them dynamically as each test cycle begins, and then running
the test suites in priority order. It’s called the dynamic priority approach.
(Black, 2009, p. 122)
Figure 2.9 Test Selection using Dynamic Priority Approach
Source: (Black, 2009, p. 126)
Test prioritization tries to order tests for execution, so the chances of
early detection of faults during retesting of the modified system are
increased. The goal is to increase the likelihood of revealing faults earlier
during execution of the prioritized test suite. (Petrus et al., 2013, p. 206)
Test suite minimization using dynamic interaction patterns identifies the
reduced test cases that provide the best coverage of the requirements and
a minimized cost for executing the deliverables. (Petrus et al., 2013, p.
206)
3) Randomly distributing the test suites across the test cycles. It’s called the
shotgunning approach. (Black, 2009, p. 122)
Figure 2.10 Test Selection Using Shotgunning Approach
Source: (Black, 2009, p. 128)
4) Running the entire set of test suites straight through as many times as
possible (definitely more than once). It’s called the railroading approach.
(Black, 2009, p. 122)
Figure 2.11 Test Selection Using Railroading Approach
Source: (Black, 2009, p. 129)
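The static priority approach (the first of the four above) can be sketched as computing an aggregate risk priority number per suite once, in advance, and then running suites in that fixed order (a hypothetical illustration: the suite names, case RPNs, and the min-based aggregation rule are all assumptions):

```python
# Static priority approach: each suite gets an aggregate RPN up front (here,
# the minimum RPN of its cases, since on Black's scales a lower RPN means
# higher risk), and every cycle runs the suites in that fixed order.

suites = {
    # suite name -> RPNs of the test cases it contains (invented values)
    "Order Entry": [6, 12, 40],
    "Reporting":   [75, 100],
    "Inventory":   [10, 25, 60],
}

def aggregate_rpn(case_rpns):
    """Aggregate a suite's risk as its most severe (lowest) case RPN."""
    return min(case_rpns)

# Fixed execution order favoring the highest-risk (lowest aggregate RPN) suites.
static_order = sorted(suites, key=lambda name: aggregate_rpn(suites[name]))
print(static_order)
```

The dynamic priority approach would differ only in recomputing these aggregates at the start of each cycle instead of fixing them once.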
2.2.8 Testing Execution
Testing is a process of technical investigation intended to reveal quality-related
information about the product with respect to the context in which it is intended to
operate. This includes the process of executing a program or application with the
intent of finding errors. (Petrus et al., 2013, p. 205)
This portion of the test plan addresses important factors affecting test
execution. For example, in order to run tests, you often need to receive items from
the outside world, primarily resources (or funding for those resources) and systems to
test. In the course of running tests, you will gather data that you must track, analyze,
and report to your team, your peers, and your managers. In addition, you will run
through distinct test cycles in each test phase. (Black, 2009, p. 62)
1. Resources
This section is used to identify the key participants in the test effort and
their roles in testing, along with any other resources not identified
elsewhere in the plan. (Black, 2009, p. 62)
2. Test Case and Bug Tracking
This section deals with the systems used to manage and track test cases
and bugs. Test case tracking refers to the spreadsheet, database, or tool
used to manage all the test cases in the test suites and to track progress
through those tests. (Black, 2009, p. 64)
3. Bug Isolation and Classification
This section of the test plan explains the degree to which the test team
intends to isolate bugs and to classify bug reports. Isolating a bug means
experimenting with the system under test in an effort to find connected
variables, causal or otherwise. (Black, 2009, p. 64)
4. Test Release Management
One of the major interfaces between the overall project and testing
occurs when new revisions, builds, and components are submitted to the
test team for testing. In the absence of a predefined plan for this, the
hand-off point can degrade into absolute chaos. Each release of the
program must be identified in order to make the testing process easier,
as the test team can then identify which version of the software contains
which bugs. (Black, 2009, p. 65)
5. Test Cycles
A test cycle means running one, some, or all of the test suites planned for
a given test phase. Generally, new test releases occur during a test phase,
triggering another test cycle. (Black, 2009, p. 68)
6. Test Hours
A timetable of shifts and assigned team members for testing at specific
hours, used to make the testing process more effective and to set a clear
limit on hours so that testing does not exceed the targeted time. (Black,
2009, p. 69)
2.2.9 Test Tracking Spreadsheet
In its most basic form, a test tracking spreadsheet is a to-do list, with the added
capability of status tracking (Black, 2009, p. 199).
Figure 2.12 Test Tracking Spreadsheet
Source: (Black, 2009, p. 220)
The explanations of the rollup columns are as follows:
a) Queue. The test case is ready to run and is assigned to a tester for execution
in this test pass. (Black, 2009, p. 209)
b) In Progress. The test is currently running and will probably continue to do
so for a while. If a test takes less than a day, it can be marked In Progress,
since test cases are tracked daily. (Black, 2009, p. 209)
c) Block. Some condition, such as a missing piece of functionality or a lack of
a necessary component in the test environment, prevents the test from
running. (Black, 2009, p. 209)
d) Skip. The test team decided to skip the test for this pass, typically because
it is a relatively low priority. (Black, 2009, p. 209)
e) Pass. The test case ran to completion and the tester observed only expected
results, states, and behaviors. (Black, 2009, p. 209)
f) Fail. The tester observed unexpected results, states, or behaviors that call
into question the quality of the system with respect to the objective of the
test. The tester reported one or more bugs. (Black, 2009, p. 209)
g) Warn. In one or more ways, the testers observed unexpected results, states,
or behaviors, but the underlying quality of the system with respect to the
objective of the test is not seriously compromised. (Black, 2009, p. 209)
h) Closed. After the test was marked Fail or Warn in the first cycle of a test
pass, the next test release included a fix to the bug(s) that afflicted this test
case. Closed status means that the test is already done. (Black, 2009, p. 209)
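The to-do-list character of the tracking spreadsheet, with one status per test case and a status rollup, can be sketched as follows (the test names and statuses are invented; only the status vocabulary comes from the list above):

```python
# Minimal test tracking "spreadsheet": one row per test case, one status
# column drawn from the rollup statuses above, plus a count of each status.

from collections import Counter

STATUSES = {"Queue", "In Progress", "Block", "Skip", "Pass", "Fail", "Warn", "Closed"}

rows = [
    {"test": "Create sales order", "status": "Pass"},
    {"test": "Post goods issue",   "status": "Fail"},
    {"test": "Print invoice",      "status": "Queue"},
    {"test": "Month-end close",    "status": "Warn"},
]

# Every row must carry one of the defined statuses.
assert all(row["status"] in STATUSES for row in rows)

rollup = Counter(row["status"] for row in rows)
# Pass, Fail, Warn, and Closed all represent executed tests.
executed = rollup["Pass"] + rollup["Fail"] + rollup["Warn"] + rollup["Closed"]
print(dict(rollup), f"executed={executed}")
```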
2.2.10 Bug Tracking Database
A bug-tracking database facilitates clear communication about defects through well-written, standardized bug reports (Black, 2009, p. 147).
Based on Kolluri (2012, p. 31), bug tracking is indispensable for any system
that has to perform well. It is a tool that facilitates faster bug fixing and ensures the
quality of the software being developed or used. These systems are widely used and
treated as essential repositories that help in finding the status of bugs and resolving
them quickly, so that the progress of the project can be ascertained.
Figure 2.13 Design for a Basic Bug Tracking Database
Source: (Black, 2009, p. 155)
The aim of reporting problems is to bring them to the attention of the
appropriate people, who will then cause the most important bugs to be fixed, or will
at least attempt to have them fixed. (Black, 2009, p. 158). A bug report should go
through an identifiable life cycle, with clear ownership at each phase or state in its
life cycle. The appropriate life cycle for your organization might vary:
a) Review. When a tester enters a new bug report in the bug-tracking
database, the bug-tracking database holds it for review before it becomes
visible outside the test team. If non-testers can report bugs directly into
the system, then the managers of those non-testers should determine the
review process for those non-tester bug reports. (Black, 2009, p. 158)
b) Rejected. If the reviewer decides that a report needs significant rework
(either more research or information, or improved wording), the reviewer
rejects the report. This effectively sends the report back to the tester, who
can then submit a revised report for another review. The appropriate
project team members can also reject a bug report after approval by the
reviewer. (Black, 2009, p. 159)
c) Open. If the tester has fully characterized and isolated the problem, the
reviewer opens the report, making it visible to the world as a known
bug. (Black, 2009, p. 159)
d) Assigned. The appropriate project team members assign it to the
appropriate development manager, who in turn assigns the bug to a
particular developer for repair. (Black, 2009, p. 159)
e) Test. Once development provides a fix for the problem, it enters a test
state. The bug fix comes to the test organization for confirmation testing
(which ensures that the proposed fix completely resolves the bug as
reported) and regression testing (which addresses the question of
whether the fix has introduced new problems as a side effect). (Black,
2009, p. 159)
f) Reopened. If the fix fails confirmation testing, the tester reopens the bug
report. If the fix passes confirmation testing but fails regression testing,
the tester opens a new bug report. (Black, 2009, p. 159)
g) Closed. If the fix passes confirmation testing, the tester closes the bug
report. (Black, 2009, p. 159)
h) Deferred. If appropriate project team members decide that the problem is
real but choose either to assign a low priority to the bug or to schedule
the fix for a subsequent release, the bug report is deferred. Note that the
project team can defer a bug at any point in its life cycle. (Black, 2009,
p. 159)
i) Cancelled. If appropriate project team members decide that the problem
is not real, but rather is a false positive, the bug report is cancelled. Note
that the project team can cancel a bug at any point in its life cycle.
(Black, 2009, p. 159)
Figure 2.14 Bug Cycle
Source: (Black, 2009, p. 160)
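The life cycle above is effectively a small state machine, and the allowed transitions can be sketched as a lookup table (a hypothetical illustration: the transition set is one reading of Black's description, not a table from the source):

```python
# Bug report life cycle as a transition table, following the states above.
# Deferred and Cancelled are reachable from any point in the life cycle, so
# they are added to every active state's successors below.

TRANSITIONS = {
    "Review":    {"Rejected", "Open"},
    "Rejected":  {"Review"},             # the tester resubmits a revised report
    "Open":      {"Assigned", "Rejected"},
    "Assigned":  {"Test"},
    "Test":      {"Closed", "Reopened"}, # fix confirmed, or confirmation failed
    "Reopened":  {"Assigned"},
    "Closed":    set(),
    "Deferred":  {"Open"},               # a deferred bug may be taken up again
    "Cancelled": set(),
}
for state in ("Review", "Rejected", "Open", "Assigned", "Test", "Reopened"):
    TRANSITIONS[state] |= {"Deferred", "Cancelled"}

def can_move(current, nxt):
    """True if a bug report may move from state `current` to state `nxt`."""
    return nxt in TRANSITIONS[current]

print(can_move("Test", "Closed"), can_move("Closed", "Open"))
```

Encoding the cycle as a table like this is one way a bug-tracking database can enforce clear ownership: any status change not in the table is rejected.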
2.2.11 Bug Report
A bug report is a technical document that describes the various symptoms or
failure modes associated with a single bug. A good bug report provides the project
management team the information they need to decide when and whether to fix a
problem. A good bug report also captures the information a programmer will need
to fix and debug the problem. (Black, 2009, p. 146)
Figure 2.15 Bug Report
Source: (Black, 2009, p. 156)
2.3 Framework
To simplify and show the connections between the processes in this thesis, a
framework was made, which is depicted in Figure 2.16.
The framework starts with understanding the Mobiz ERP system, since it had
never been studied before; its functions for the general daily business processes
were exercised during the internship to understand well what the system is able
to do.
Understanding PT. Anugrah Busana Indah’s business processes is done in order
to find out whether the system can cover all of the business processes. The
business processes are explained in chapter 3, subchapter 3.2 Current Business
Process. This understanding is needed to determine whether the Mobiz ERP
system can support the company.
To match the business processes stated in subchapter 3.2, subchapter 3.3 of
chapter 3 links each business process in PT. Anugrah Busana Indah with the
functions provided in the Mobiz ERP System. Based on these mappings, if some
functions are missing from the Mobiz ERP System, modifications will be made.
These modifications are stated in subchapter 3.4 System Modification. The
requirement and modification analysis is provided by PT. M-One.
From these requirements, an FMEA (Failure Mode and Effects Analysis) is made
to define which system functions in the Mobiz ERP System are going to be tested and
to prioritize the testing sequence. Based on priority, the FMEA shows which
system functions need to be tested; it also tells what the impacts are if a
specific function does not work well.
After the features of the system that need to be tested are identified, the
next step is to develop a test plan. The test plan describes the initial states and
processes required for proper testing. It defines the whole testing process: how to
begin the test, how to end the test, how the testing is going to be done, the
hardware and connection prerequisites for starting the test, and other conditions
such as how long a specific test may take.
Next, test cases and test scenarios are made based on the test plan. The
scenarios define what can happen in each feature so that all possibilities are
covered in the testing. Test cases are made based on the test scenarios, since a
test case shows the sequence of testing steps, and that sequence can only be
identified once all scenarios are known.
After the test cases and scenarios are made, a test cycle is developed. The
test cycle shows the results of testing each modification during a certain period
of time. It also defines which features are going to be tested in a period of time
and how much time is allotted to each test.
After the initial plans are developed, the testing is executed. Bugs found
throughout the testing are reported to the programmers so they can be fixed
immediately. Each bug found is stored in the bug tracking database, from which a
bug tracking spreadsheet can also be produced. The bug tracking spreadsheet is
used as a to-do list and to track bug status. A bug report is also produced as a
detailed explanation of the found bugs. The final result of this thesis is a
determination of whether the application, along with the modifications, is ready to
be deployed.
Figure 2.16 Framework