How Silk Central™ brings flexibility to agile development
The name ‘agile development’ is perhaps slightly misleading, as agile development is by its very nature a carefully structured environment of rigorous procedures. However, even within this context there is room for innovation in the testing sphere. This whitepaper gives a short introduction to how Borland Silk Central™ can be effectively deployed within an agile development environment.
The first part of this paper describes the agile development
approach and its possible peculiarities for testing. The second
describes how the Borland Silk Central™ development team uses
Silk Central™ together with a Continuous Integration tool in their
agile SCRUM® development environment to ensure a thorough
process with satisfactory outcomes.
What is Agile Software Development?
“Agile software development is a group of
software development methods based on
iterative and incremental development,
where requirements and solutions evolve
through collaboration between
self-organizing, cross-functional teams” 1
The difference between agile development and other iterative and incremental development approaches, such as IBM Rational’s RUP® or Boehm’s spiral model, lies in how the process is broken down into short, self-contained development cycles (so-called iterations) and in the cross-functional structure of the development teams. This structure allows for a complete process, from implementation via testing through to documentation, within one such iteration, resulting in the immediate availability of fully operational software with immediate additional value for customers.
Agile requirements
Within the agile software development arena, requirements are referred to by different names, depending on their size and the effort involved. ‘Goal story’ or ‘epic’ describes a whole theme, ‘requirements’ describe sub-parts of such a theme, and ‘user stories’ refer to sub-parts of a requirement.
A user story is the minimum unit visible outside of the
development team. Once allocated to an iteration, it is
broken down into tasks by the team for better management. If one task cannot be finished within an iteration,
the user story is incomplete and therefore not part of
the iteration end product, with a resultant compromise
on customer value.
Therefore, user stories have to fulfill certain criteria to
allow the team to finish them in time and produce as
much customer value as possible. Agile development
guru Mike Cohn believes a good user story has to be:2
• Independent
• Negotiable
• Valuable to users or customers
• Estimatable
• Small
• Testable

The last bullet point is the one we are going to focus on in this whitepaper. We shall answer questions like ‘How is testing done in an agile development process in general?’ and ’How can Silk Central™ support you in your testing to achieve the most customer value?’

This short-clocked approach allows more efficient interaction with customers, a faster, more flexible response to changes and, ultimately, higher customer satisfaction levels.
Incremental functionality
The ultimate goal of each iteration must be fully operational software that delivers all the required functionality from the user stories within the iteration, in a tested and documented state. Since each user story describes just one piece of the functionality of a requirement, which in turn contains only part of the whole theme, functionality grows incrementally, with each iteration building on the results of the previous one, as shown in simplified form in the following graphic.
It is important to consider that incremental development
sometimes means the creation of interim functionality
that changes several times before the final delivery.
Figure 1: Agile Development
Figure 2 illustrates functionality from one iteration
being replaced, refined and extended as the iterations
progress before finally achieving new functionality.
So – what does that mean for our testing assets?
Agile testing and the team
Testing is one of the necessary steps towards finishing a user story: no testing means no contribution to the overall iteration result and therefore no additional customer value. Consequently, the agile team invests a significant part of its time within an iteration in the creation and maintenance of effective tests.
Each team member creates or enhances tests according to his or her role in the team to ensure the highest level of test coverage, and therefore quality, from beginning to end. This means developers immediately cover new functionality with unit tests, where applicable, and testers create manual and automated functional and performance tests. Hence, the functionality we have created, the tests being created for it, and the team members’ functional expertise all mesh together like cog wheels; the more efficient this process becomes, the higher the resulting quality, customer satisfaction, and overall success of the project.
As you can see, tests and the functionality created
have a strong relationship. And so, as the functionality
changes incrementally, tests must behave in a similar
way. Clearly, the success of agile development stands
and falls with the team, and in the remainder of this
paper, we consider how Silk Central™ can support agile
teams to create more effective testing regimes that
ultimately deliver better, more robust products that
achieve higher customer satisfaction. We will do this by examining how it is used by one such agile team: the team developing Silk Central™.
Silk Central™ and SCRUM
The Silk Central™ team uses Scrum, a specific form of
an agile development process, based on defined
methods and predefined roles.
Figure 3: SCRUM Process3
This illustrates the basic concepts and some of the
involved components. These include:
• Product Backlog – a collection of all goal stories/requirements/user stories to be implemented across releases/iterations.
• Sprint (Iteration) Backlog – a selection of user stories that have been planned for a specific iteration.
• Sprint (Iteration) – the timeframe wherein the planned user stories are implemented, tested and documented in an incremental and iterative way. The Silk Central™ development team uses an iteration length of 14 days. The two intertwined circles describe the iterative process per day and per sprint.
• Working increment of the software – how the iteration translates into additional functionality and customer value.
Release, Product Backlog, User Stories
and their relation to Requirements in
Silk Central™
The Silk Central™ development team uses a commercial agile project management tool for managing the
work for each release and iteration. This tool is tightly
integrated with Silk Central™ via the open plug-in API
for synchronizing all the product backlog information.
Figure 2: Iterations
Figure 4: User Stories
The prioritized product backlog lists goal stories, requirements and user stories in descending order of priority, clearly indicating the importance of specific functionality to be implemented in a specific version/release. Figure 4 shows the number 4 goal story for Silk Central™ version 12.0 with its associated requirements and user stories.
The team estimates goal stories, requirements and user stories, indicating the required effort for implementing, testing and documenting the appropriate functionality. This estimate is usually very high-level at the goal story level and at the beginning of a project, and becomes more granular at the user story level and as the project progresses. This process of evolution can cause functionality to slip to the next version, but ongoing backlog prioritization ensures that important functionality always rises to the top and can be delivered as planned.
User stories are implemented during sprints (iterations). Consequently, a sprint backlog is created which matches the resource availability for the given timeframe (i.e. the duration of the iteration) with the estimated effort for a selection of user stories. This is the work the team expects to complete in the coming iteration.
The “SilkCentral Test Manager” project, the trunk, contains all the information of previous releases as well as that of the current release, where ‘information’ means all the backlog information together with the associated tests and results. The baseline project ‘SilkCentral Test Manager 12.0‘, for example, is a branch that contains all the information of the release backlog of Silk Central™ 12.0 and of the previous releases.
Silk Central™ simplifies the planning and scheduling process by filtering the requirements, using the synchronized information, to show the user stories for a specific version and iteration. Figure 6 highlights the filter definition, and Figure 7 shows the resulting requirements structure for the iteration backlog of iteration 16 of Silk Central™ 12.0 in Silk Central’s requirements area.
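The iteration filter described above can be sketched as a simple predicate over the synchronized backlog items. This is a minimal illustration only: the field names (`type`, `version`, `iteration`) and item data are assumptions, not Silk Central’s actual filter schema.

```python
# Hypothetical backlog items as synchronized from the agile planning tool;
# the field names are illustrative, not Silk Central's real schema.
requirements = [
    {"name": "US-101", "type": "user story", "version": "12.0", "iteration": 16},
    {"name": "US-102", "type": "user story", "version": "12.0", "iteration": 15},
    {"name": "GS-4",   "type": "goal story", "version": "12.0", "iteration": None},
]

def iteration_backlog(items, version, iteration):
    """Select the user stories planned for one specific version and iteration."""
    return [
        item for item in items
        if item["type"] == "user story"
        and item["version"] == version
        and item["iteration"] == iteration
    ]

# Only US-101 is planned for iteration 16 of version 12.0
print([i["name"] for i in iteration_backlog(requirements, "12.0", 16)])
```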
As mentioned above, all the backlog information (goal stories, requirements, user stories, their estimates and their schedules in terms of releases and iterations) is synchronized with Silk Central™, where each release is treated as an increment to all previous releases. The project ‘SilkCentral Test Manager‘, shown in the screenshot below, is the working project that grows with each release.

A baseline is created after each release, representing the end-state of the release: compare that to the trunk and branches in a source control system.
Figure 6: Silk Central Filter
Figure 7: Silk Central Requirement Structure
A set of manual tests covering these specific user stories can easily be created from this structure:
Figure 8: Silk Central User Stories
Figure 8 illustrates how manual tests have been added for the new user stories in sprint 16. All other user stories that were carried over from sprint 15 will be tested with their already created manual tests.
Manual iteration testing
When discussing tests for user stories, it is important
to consider that a user story often represents just one
increment toward some specific functionality.
Hence, the functionality might not be fully finished and is likely to change again in a subsequent iteration. Therefore, these manual tests are essentially ‘throw-away tests‘, executed only once to verify the implementation of a specific piece of functionality at a specific point in time.
The team uses the Manual Execution Planning area introduced in Silk Central™ 12.0 both for executing the manual tests of a specific iteration and for tracking their progress. This planning area allows them to define most efficiently which tests should be executed by which tester, at which point in time and in which environments: one iteration equals one manual test cycle within the same timeframe.
The whole team is assigned to this test cycle, and all the manual tests that should be executed in this iteration can easily be selected by reusing the filter created for the current iteration in the requirements area. The following graphic, Figure 9: Silk Central - Filter for Iteration, shows the test filter definition, and Figure 10: Silk Central Manual Execution Planning illustrates how this filter is then applied in the Manual Execution Planning area to select the tests.
Figure 11: Silk Central - Sprint 16 Overview
Figure 9: Silk Central - Filter for Iteration
The team can track all this information in the ‘Manual Tests assigned to me’ panel on the personal dashboard, which shows all currently unfinished tests that are assigned to a specific tester or to the ‘No specific tester’ area. Besides this tabular/textual tracking of progress, a variety of visual progress representations are also available. These are summarized below:
Burn down chart
Figure 10: Silk Central Manual Execution Planning
Tests can then easily be dragged and dropped to the
appropriate test cycle to schedule them for the current
iteration. The following example (Figure 11) shows
Sprint 16 of Silk Central™ 12.0 which was scheduled
from 9 February to 21 February. Of the 11 tests
assigned, 10 have been completed by the end of the iteration.
A burn down chart is an illustration of the work left to do versus the time left to do it. Figure 12 shows a testing burn down chart at the top of a test cycle. It compares actual progress (bars) against an ideal testing progress (line); the bars show the number of open tests per day. In Figure 12, nothing was tested until day 4, testing started on day 5, the testing effort was on track for the first time on day 7, and testing continued until the end of the iteration.
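The two series of such a chart can be reconstructed with a few lines of code. This is a minimal sketch: the daily numbers below are invented for illustration and are not data exported from Silk Central.

```python
def burn_down(total_tests, open_per_day):
    """Pair the actual number of open tests per day with an ideal linear
    burn down from total_tests on day 1 to zero on the last day."""
    days = len(open_per_day)
    ideal = [total_tests * (days - 1 - d) / (days - 1) for d in range(days)]
    return list(zip(open_per_day, ideal))

# Invented example: 11 tests over a 10-day sprint, nothing tested until day 4
actual = [11, 11, 11, 11, 9, 7, 3, 2, 1, 0]
for day, (open_tests, ideal) in enumerate(burn_down(11, actual), start=1):
    status = "on track" if open_tests <= ideal else "behind"
    print(f"day {day:2}: {open_tests:2} open, ideal {ideal:4.1f} -> {status}")
```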
All the tests are assigned to the ‘No specific tester‘ area
and not directly to a specific tester – highlighting the
fact that testing is the responsibility of the whole team
and not of a single person. Allocating tests to the ‘No
specific tester‘ area ensures that all testing activities
are visible to the whole team.
Figure 12: Silk Central Burn Down Chart
Testing cycle progress
The personal dashboard displays a ’Testing Cycle Progress‘ panel, much like an agile burn up chart. At the beginning of a test cycle all tests are ’Not Executed‘ and, as the iteration progresses, they are increasingly replaced by completed tests. In the graphic below, progress is illustrated by the blue area and completed tests by the green area. At the end of the iteration all ’Not Executed‘ tests have been replaced by completed ones.
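The panel’s figures amount to simple counting over test statuses. The statuses and numbers below are invented for illustration; they do not reflect Silk Central’s internal status model.

```python
from collections import Counter

# Invented test statuses for one day within the test cycle
statuses = ["Passed", "Failed", "Not Executed", "Passed", "Not Executed"]

counts = Counter(statuses)
completed = counts["Passed"] + counts["Failed"]   # executed tests, any outcome
remaining = counts["Not Executed"]                # still to be run
total = len(statuses)
print(f"completed {completed}/{total}, remaining {remaining}")
```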
At the end of an iteration the team closes the test cycle, which finishes all the remaining open tests.

Although primarily ‘throw-away‘ tests, the manual tests play an important role, as they run in parallel with the automated unit tests. This ensures a basic level of customer-facing quality, in terms of usability and functionality, and is therefore a first step towards the iteration goal of implemented, tested and documented user stories. These manual tests also support the creation of automated tests by identifying and exploring the newly added functionality and how it interacts with the existing functionality.
Continuous Automated Testing
The Silk Central™ development team puts a strong focus on automated testing, as it would not be possible to maintain that high level of quality with manual testing alone. Each iteration result is deployed on the internal production system used by other product teams, so the result of an iteration represents not just new functionality but also an extended automated test set covering the newly added functionality.
Figure 13: Silk Central - Testing Cycle Progress
Testing cycle results summary
Figure 14 illustrates progress in a similar way. The ‘Testing Cycle Result Summary‘ panel puts the focus on the testers, with work allocated per tester and the status and quantity of tests described as ‘not started’, ‘in progress’ and ‘finished’. As the whole team works on, or tracks, these tests, there is only one entry, for ‘No specific tester‘, as shown on the screenshot below (Figure 14).
Figure 14: Silk Central Testing Cycle Results Summary
The team relies on a mix of different automated test
types. The mix includes JUnit for automated unit
testing, Silk4J for automated functional testing and Silk
Performer for automated performance testing. They
serve different purposes and allow different execution
intervals, as continuous execution will find defects as
quickly as possible and deliver the greatest benefit.
A Continuous Integration (CI) tool, Jenkins4, is used for efficient continuous execution. This tool is tightly integrated into the development and testing environments, allowing tests to be executed against builds of the source code created at different points in time.
One of these points in time is every check-in to the source control system by any developer, which triggers the creation of the build artifacts by the CI system. The successful creation of build artifacts triggers specific execution plans, the ‘CI Tests‘, in Silk Central™ to verify that the build is not broken. The execution information, which consists of the tests to run and where to get the application under test (AUT), is then distributed to the execution servers. These fetch the build artifacts, run the tests and upload the results back to Silk Central™, which informs the team via e-mail notification if tests fail, so that a broken build shows up as early as possible.
This whole CI Testing sequence is visualized in Figure
15 – Continuous Integration Testing:
Figure 16 – Daily Build Testing
Using this approach for every daily build, the CI setup allows approximately 7,500 test executions on different configurations.
Figure 15 – Continuous Integration Testing
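The CI testing sequence can also be sketched as plain control flow. Everything in this sketch is a stand-in: the function names, plan names and result format are invented for illustration and do not correspond to the actual Jenkins or Silk Central APIs.

```python
def run_ci_tests(build_artifacts, test_plan, execution_servers, notify):
    """Sketch of the CI sequence: distribute tests to execution servers,
    collect the results and notify the team if anything failed."""
    results = []
    for server in execution_servers:
        # Each execution server fetches the artifacts, runs its share of the
        # tests against the application under test (AUT) and reports back.
        results.extend(server(build_artifacts, test_plan))
    failed = [r for r in results if not r["passed"]]
    if failed:
        # Notify as early as possible that the build is broken
        notify(f"{len(failed)} CI test(s) failed for {build_artifacts}")
    return failed

# Stand-in execution server that fails one hypothetical smoke test
def fake_server(artifacts, plan):
    return [{"test": name, "passed": name != "login_smoke"} for name in plan]

failures = run_ci_tests("build-1234.zip", ["login_smoke", "search_smoke"],
                        [fake_server], notify=print)
```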
Beyond verifying the check-ins several times per day, every evening a daily build is created, which is then tested overnight with an extensive set of tests. Again the CI system creates the build artifacts, and from these the setup is created. If the setup creation succeeds, specific execution plans are triggered in Silk Central™. This results in tests being distributed to specific execution servers, which now also represent different configurations: different combinations of an installation language, a browser, a database management system and a webserver.
Consequently, the tests are run in parallel on different environments: with an English, German or Japanese Silk Central™ installation, running the latest Internet Explorer or Firefox, with Oracle 10g/11g or Microsoft SQL Server 2005/2008 as the database backend, and using Tomcat or IIS. The different execution servers again fetch the build artifacts, install the software, run the tests and return the test results to Silk Central™, which then automatically informs the team via e-mail if tests fail.
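The size of such a configuration matrix follows directly from a Cartesian product of the dimensions listed above. The exact matrix the team runs may differ from this reconstruction:

```python
from itertools import product

# Configuration dimensions as described in the text
languages  = ["English", "German", "Japanese"]
browsers   = ["Internet Explorer", "Firefox"]
databases  = ["Oracle 10g", "Oracle 11g", "SQL Server 2005", "SQL Server 2008"]
webservers = ["Tomcat", "IIS"]

# Every combination of language, browser, database and webserver
configurations = list(product(languages, browsers, databases, webservers))
print(len(configurations))  # 3 * 2 * 4 * 2 = 48 combinations
```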
The team uses Silk Central’s code coverage capability to assess how well the added source code is tested. Figure 17 illustrates how you can drill down from the ‘package’ level, via ‘classes’, to the ‘methods’ level to reveal those classes and methods that are not yet covered.
Figure 17: Silk Central Code Coverage
Iteration Review
A meeting called an ‘Iteration Review’ is held at the end of each iteration, at which the team presents the result of the iteration to all stakeholders and interested parties. This presentation includes a demonstration of how the planned user stories were implemented, tested and documented. Silk Central™ contributes a useful overview of the testing aspect by filtering the requirements area to the specific iteration and showing the document view with the test coverage information.
Figure 18: Silk Central - Iteration Review Test Coverage
In the following screenshot, Figure 18, Iteration 16 of Silk Central™ 12.0 is shown once more. Here you can see which requirements are covered by testing and which tests were successfully executed. Consequently, test results for the iteration backlog are available with a single click, and users can be confident that the user stories have been implemented, tested and documented.
Next release
Once all iterations have been completed and the product has been released, the team creates a baseline of the working project for saving the state of the release, as shown in Figure 19: Silk Central Project Releases. At this point all Source Control profiles are changed to point to the appropriate code branch.

Figure 19: Silk Central Project Releases

With this approach the team is later able to run all the tests for patch/hotfix verifications with the same code base and test base as was available at the time of the release. They will also immediately be able to see if the fixed code had any side effects on other areas of the product release.

The new release backlog is then added to the working project ’SilkCentral Test Manager‘ and the release cycle with all its iterations starts again.

Summary
This whitepaper has described how the Silk Central™ development team in Linz uses Silk Central™ to support its agile development approach. The team has been working according to SCRUM since 2007 and has gradually refined and improved its agile approach, using Silk Central™ as the central hub. The team has consequently enjoyed the following benefits:

• Clear definition of what has to be achieved and tested in a given timeframe
• Coverage of user stories with different types of tests from the first minutes of an iteration, giving immediate control over quality
• ‘In time’ feedback on every development commit through tight CI tool integration, and therefore the ability to work very efficiently and effectively
• Daily build testing (~7,500 different tests on several different environments), so the whole product, and how the new parts integrate with the old ones, can be tested to immediately reveal any negative impacts
• A central place to track all test executions and their results on the basis of builds, and therefore to monitor how quality changes over time
• A defined path for moving manual tests to automated (functional) ones, incrementally increasing the set of automated (functional) tests
• The facility to provide an iteration drop every fortnight on the internal Silk Central™ production server and to receive valuable feedback on the interim release state

And so have the stakeholders:

• Costly defects are found very early in the lifecycle, reducing the likelihood of surprises at the end of iterations/releases.
• There is a central place where all the information around test coverage, test progress, test results and code quality is brought together via meaningful reports and dashboard panels, providing valuable quality insights.
• There is a central information hub where users can check whether a user story has been tested or not, and therefore whether it can be accepted at the end of an iteration.
• An iteration drop is provided every two weeks, giving valuable feedback on the interim release state.
By using a ‘real-life’ working example we have demonstrated that Silk Central™ has everything it takes to support any development process and to increase the efficiency and productivity of a development team. This reduces the time-to-market and saves costs. This is best illustrated by Figure 20: Silk Central Development Team Solution, a graphical summary of how the different parts interact with each other in one iteration. Silk Central™ is an indispensable resource for testers and developers, and one that has proved itself time and time again to be a cost-effective tool that delivers real value as well as a more robust product.
1 Wikipedia - http://en.wikipedia.org/wiki/Agile_software_development
2 User Stories Applied, Mike Cohn, 2004, Page 17
3 http://en.wikipedia.org/wiki/Scrum_%28development%29
4 http://jenkins-ci.org/
Figure 20: Silk Central Development Team Solution
© 2012 Micro Focus Limited.
All rights reserved. MICRO FOCUS, the Micro Focus logo, among others, are trademarks or registered trademarks of Micro Focus
Limited or its subsidiaries or affiliated companies in the United Kingdom, United States and other countries. All other marks are the
property of their respective owners.
112422_WP_HC (11/12)