
Dynamic Software Tracking
J. Erik Hemdal
Robert L. Galen
BOSCON 2004 – Track C4
Agenda
● Introduction
● Project Goals
● Measures from Defect Data
● Triage and Workflow
● Conclusions
● Questions and Answers
Why are we here?
● Discuss the Big Four elements you need to control for a successful project
● Examine a variety of defect-data measures
  – For what they tell us
  – For how they can be defeated
● Show how you can manage defects so that you can stay in control of your project and workload – especially in the endgame close to release points
The Big Four Elements
● Have stable, agreed-on goals
● Know the quality level and quality trends of your software
● Keep your team healthy
● Always know how much work remains and how much time is left
Understanding & Influencing the Goals
● The formal requirements
● Unstated requirements
● Expectations from customer, sponsor, team
● Negotiation and balance are the key
  – Features
  – Specific delivery timing
  – Level of quality (hot-button issues)
  – Essential vs. “nice-to-have” elements
Understanding & Influencing the Goals (cont.)
● Understanding the overall project plan
  – Development phasing (building to features, stability, and completeness)
  – Understanding the methodology (waterfall, incremental, agile/extreme, RUP)
  – Test phasing (how many passes, reduction of change, entrance & exit criteria)
● Key milestones
  – Delivered phases
  – Code freeze / complete
  – Beta tests
  – Final release
Understanding & Influencing the Goals (cont.)
● Release criteria are the critical guide for releasing software. They should cover:
  – Scope
  – Quality
  – Time
  – Team
● Good release criteria should:
  1. Define success
  2. Help you learn what’s important for the project
  3. Be drafted and thoroughly reviewed
  4. Be “SMART” (specific, measurable, attainable, realistic, and trackable)
  5. Assist in gaining consensus
  6. Serve as a goal and guide throughout triage and release
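To make the “SMART” point concrete, here is a minimal Python sketch, not from the talk itself: the criteria, field names, and thresholds are invented for illustration. It encodes release criteria as measurable checks evaluated against a current project snapshot.

```python
# Hypothetical sketch: release criteria as measurable, trackable checks.
# Criteria text, metric names, and thresholds are illustrative only.

RELEASE_CRITERIA = [
    ("No open Class A (showstopper) defects", lambda m: m["open_class_a"] == 0),
    ("Fewer than 10 open Class B defects",    lambda m: m["open_class_b"] < 10),
    ("All planned test passes executed",      lambda m: m["test_passes_done"] >= m["test_passes_planned"]),
    ("All beta feedback triaged",             lambda m: m["untriaged_beta_reports"] == 0),
]

def evaluate_release(metrics: dict) -> bool:
    """Print each criterion's status and return True only if all are met."""
    all_met = True
    for description, check in RELEASE_CRITERIA:
        met = check(metrics)
        all_met = all_met and met
        print(f"[{'PASS' if met else 'FAIL'}] {description}")
    return all_met

if __name__ == "__main__":
    snapshot = {"open_class_a": 0, "open_class_b": 12,
                "test_passes_done": 3, "test_passes_planned": 3,
                "untriaged_beta_reports": 0}
    print("Ready to release:", evaluate_release(snapshot))
```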
Software Measures from Defect Data
● Good defect tracking provides insight
  – Into the quality of the product
  – Into the performance and health of the team
  – Into the amount of remaining work and rework
● Can be a hot-button issue because of misuse
● New way of thinking: a “defect” means “something to change”. More things to change means more work.
● Changes stand between you and Done!
Common Defect Measures
Measure – Illustrates
● Defect counts or defects/KLOC – Defect density; a predictor of future defect trends and overall product quality
● Defects per unit of testing time – Testing workflow and productivity
● Defect count over time – Time distribution of defects, looking for downward trends
● # found, # fixed, # remaining over time – Overall maturity; release/ship readiness
● # high, medium, low severity defects – Overall maturity; release/ship readiness
● Phase-containment of defects – Root cause; the true cost of quality
● Fix hours per defect – Cost per defect; predicts development repair efficiency
● Defects by state counts – Defect workflow in time; potential bottlenecks
Defects/KLOC
[Chart: Defects per KLOC by Module – defect density (y-axis, 0–120) by Module ID (E, A, C, D, B, AVG)]
● Uses:
  – Indicate the overall quality of the code
  – Guide to trouble spots
● Depends on:
  – Definition of a KLOC
  – Modular architecture with visible granularity
● Defeat by:
  – Attacking the KLOC
  – Blaming requirements
  – Writing longer code
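A minimal sketch of the defects/KLOC calculation, assuming you can pull a defect count and a line-of-code count per module; the module names and figures below are invented.

```python
# Illustrative defect-density (defects per KLOC) calculation by module.
defects_by_module = {"A": 42, "B": 7, "C": 31, "D": 18, "E": 55}
loc_by_module     = {"A": 12000, "B": 4000, "C": 9000, "D": 6500, "E": 5000}

def defects_per_kloc(defects: int, loc: int) -> float:
    """Defect density normalized to thousands of lines of code."""
    return defects / (loc / 1000.0)

density = {m: defects_per_kloc(defects_by_module[m], loc_by_module[m])
           for m in defects_by_module}
average = sum(density.values()) / len(density)

for module, d in sorted(density.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- possible trouble spot" if d > average else ""
    print(f"Module {module}: {d:.1f} defects/KLOC{flag}")
print(f"Average: {average:.1f} defects/KLOC")
```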
Defects/Unit of Test Time
● Uses:
  – Overall level of test “productivity”
  – Check test strategies or test “wearout”
● Depends on:
  – Factors testers usually don't control
● Defeat by:
  – Untestable/unreachable code
  – Cutting test time
● Graphical display – similar to Defects/KLOC
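A similar sketch for defects per unit of test time, assuming the tracking system records test hours and defects found per week; the numbers are invented. A sharp drop in the rate can mean either improving quality or test “wearout”, so read it against the test strategy in use.

```python
# Illustrative defects-per-test-hour calculation for a series of test weeks.
test_sessions = [
    {"week": 1, "test_hours": 40, "defects_found": 28},
    {"week": 2, "test_hours": 36, "defects_found": 19},
    {"week": 3, "test_hours": 42, "defects_found": 6},
]

for s in test_sessions:
    rate = s["defects_found"] / s["test_hours"]
    print(f"Week {s['week']}: {rate:.2f} defects per test hour")
```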
Defect Counts over Time
[Chart: New Defects by Week – defect count (y-axis, 0–40) by week number (weeks 1–8)]
● Uses:
  – Adjust test sequencing and scheduling
  – Can indicate significant problems in test
● Depends on:
  – Overall coordination
● Defeat by:
  – Interrupting testing
  – Finding significant defects
  – Missing functionality
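A sketch of the weekly new-defect count, assuming each defect record carries the week in which it was reported; the records are invented.

```python
# Illustrative weekly count of newly reported defects from raw defect records.
from collections import Counter

defects = [  # each record: (defect id, week the defect was reported)
    ("D-101", 1), ("D-102", 1), ("D-103", 2), ("D-104", 2),
    ("D-105", 2), ("D-106", 3), ("D-107", 5),
]

new_per_week = Counter(week for _id, week in defects)
for week in sorted(new_per_week):
    print(f"Week {week}: {new_per_week[week]} new defects")
```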
# Found, # Fixed and # Open
[Chart: Found vs. Fixed Defects – defect count (y-axis, 0–120) by week number (weeks 1–12); series: New, Closed, Cum-New, Cum-Closed]
● Uses:
  – Suggests when to ship
● Depends on:
  – Reliable, repeatable test capability
  – Change control
● Defeat by:
  – Curtailing test
  – Many small changes
  – Late changes
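A sketch of the found-vs-fixed bookkeeping, assuming weekly found and closed counts are available (the numbers are invented); the open backlog is simply cumulative found minus cumulative closed.

```python
# Illustrative found-vs-fixed tracking: cumulative curves and the open backlog.
new_per_week    = [12, 18, 22, 19, 15, 11, 8, 5]   # defects found each week
closed_per_week = [ 4,  9, 14, 18, 17, 14, 12, 9]  # defects fixed/closed each week

cum_new = cum_closed = 0
for week, (found, closed) in enumerate(zip(new_per_week, closed_per_week), start=1):
    cum_new += found
    cum_closed += closed
    open_backlog = cum_new - cum_closed
    print(f"Week {week}: cum found {cum_new}, cum closed {cum_closed}, open {open_backlog}")

# When the cumulative curves converge and the open backlog approaches zero,
# the product is maturing toward a shippable state.
```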
# High/Med/Low Severity Defects
[Chart: Defect Counts by Class – defect count (y-axis, 0–65) by week number (weeks 1–5); series: Class A, Class B, Class C]
● Use:
  – Indicates overall readiness of a product
● Depends on:
  – Proper triage
  – Effective criteria
● Defeat by:
  – Management fiat
  – Subtle negotiation
  – Peer pressure
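A sketch of the severity breakdown over time, assuming each open defect carries a class (A being the most severe); the weekly snapshots are invented.

```python
# Illustrative severity breakdown of open defects over time (Class A = most severe).
from collections import Counter

open_defects_by_week = {
    1: ["A", "A", "B", "B", "B", "C", "C", "C", "C"],
    2: ["A", "B", "B", "C", "C", "C"],
    3: ["B", "C", "C", "C"],
}

for week, severities in sorted(open_defects_by_week.items()):
    counts = Counter(severities)
    print(f"Week {week}: A={counts['A']}, B={counts['B']}, C={counts['C']}")

# A shrinking Class A/B count is the readiness signal; a sudden drop produced
# only by reclassifying defects downward is the "defeat" to watch for.
```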
Phase-Containment of Defects
[Chart: When Found vs. When Caused – defect count per phase when caused (Reqts., Design, Code, Test, Integration), grouped by phase when found]
● Uses:
  – Identify process problems
  – Justify process improvement
● Depends on:
  – A defined phase-gate process
● Defeat by:
  – Lack of commitment to the process
  – Lack of time to analyze raw defect data
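A sketch of a phase-containment calculation from a caused-vs-found matrix; the counts and the subset of phases shown are invented.

```python
# Illustrative phase-containment calculation from a caused-vs-found matrix.
# Rows = phase where the defect was introduced; columns = phase where it was found.
PHASES = ["Reqts", "Design", "Code", "Test", "Integration"]
found_vs_caused = {
    # caused:  found in Reqts, Design, Code, Test, Integration
    "Reqts":   [5, 3, 2, 8, 1],
    "Design":  [0, 7, 4, 9, 2],
    "Code":    [0, 0, 12, 25, 6],
}

for caused, found_counts in found_vs_caused.items():
    total = sum(found_counts)
    caught_in_phase = found_counts[PHASES.index(caused)]
    containment = caught_in_phase / total if total else 0.0
    print(f"{caused}: {containment:.0%} of its defects caught in the same phase")

# Low containment in early phases points at review/inspection gaps and helps
# justify process improvement where escaped defects are most expensive.
```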
Average Fix Hours per Defect
● Uses:
  – Illustrate the cost of poor quality
  – Provide data for repair estimates
● Depends on:
  – Ability to capture the data
  – A high-trust culture within the development team
● Defeat by:
  – Misusing the data
● Similar measures are possible for build, test, and review time
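A sketch of the average-fix-effort calculation, assuming fix hours are recorded on each closed defect; the records are invented.

```python
# Illustrative average-fix-effort calculation from closed defect records.
closed_defects = [
    {"id": "D-210", "severity": "A", "fix_hours": 16.0},
    {"id": "D-214", "severity": "B", "fix_hours": 6.5},
    {"id": "D-220", "severity": "B", "fix_hours": 4.0},
    {"id": "D-231", "severity": "C", "fix_hours": 1.5},
]

total_hours = sum(d["fix_hours"] for d in closed_defects)
average = total_hours / len(closed_defects)
print(f"Average fix effort: {average:.1f} hours per defect "
      f"({total_hours:.1f} hours across {len(closed_defects)} defects)")

# The same shape of calculation works for build, test, or review time per defect.
```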
Defects By Status Histogram
[Chart: Defects by State – count of defects (y-axis, 0–18) by week number (weeks 1–4); series: New, Assigned, Open, Closed]
● Uses:
  – Show team bottlenecks
  – Gauge project status and product maturity
● Depends on:
  – Consistent and reliable state-update activity
● Defeat by:
  – Gamesmanship about defect status
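A sketch of the state histogram, assuming a weekly snapshot of each defect's workflow state; the snapshots are invented.

```python
# Illustrative state histogram: count defects in each workflow state per week.
from collections import Counter

weekly_snapshots = {
    1: ["New", "New", "New", "Assigned", "Open"],
    2: ["New", "Assigned", "Assigned", "Open", "Open", "Closed"],
    3: ["Assigned", "Open", "Open", "Open", "Open", "Closed", "Closed"],
}

for week, states in sorted(weekly_snapshots.items()):
    counts = Counter(states)
    summary = ", ".join(f"{s}={counts[s]}" for s in ("New", "Assigned", "Open", "Closed"))
    print(f"Week {week}: {summary}")

# A state whose count keeps growing week over week (e.g. "Open") is a
# candidate bottleneck in the team's repair workflow.
```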
Dynamic Tracking
● Defect Triage
● Workflow Management
● Defect Packaging
Defect Triage
● Manage three elements
  – Severity: How serious is this defect?
  – Priority: How soon should this be fixed?
  – Impact: What is the effect on team and customer?
● Define a triage team and procedure early
  – Include QA, developers, project office, support
  – Set up the “drill”
  – Update defect reports with triage decisions – who, what, why
● Map triage policy to the overall release plan
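As an illustration only (the field names are invented, not from the talk or any particular defect tracking system), a triage decision could be captured as a small record attached to the defect report, preserving the who, what, and why.

```python
# Hypothetical triage record capturing severity, priority, impact, and the
# who/what/why of the decision directly on the defect report.
from dataclasses import dataclass

@dataclass
class TriageDecision:
    defect_id: str
    severity: str      # how serious is this defect?
    priority: str      # how soon should it be fixed?
    impact: str        # effect on team and customer
    decided_by: str    # who made the call (QA, dev, project office, support)
    rationale: str     # why - so the decision survives into the endgame

decision = TriageDecision(
    defect_id="D-342",
    severity="B",
    priority="Next package",
    impact="Workaround exists; blocks one beta customer scenario",
    decided_by="Triage team, week 6",
    rationale="Defer to release candidate; fix is isolated to the reporting module",
)
print(decision)
```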
Workflow Management
● Don’t just let work happen based on priority or chance – rather, proactively manage repairs!
● Pre-analyze scope, level of difficulty, and potential impact – prior to assigning major repairs
● Each engineer has an assigned “work queue” or to-do list that is managed from the defect tracking system
● Leverage the DTS for reporting and workflow management
Workflow Management (cont.)
● Create guidelines: 5-10 work items per engineer
  – 1 or 2 high-priority defects
  – 3-4 moderate-priority defects
  – 1-2 defects to investigate/analyze
● Good idea: focus on your “Top 10” issues, then stop and analyze
● Reallocate judiciously; avoid churning
● Create engineer profiles to help understand strengths and skills
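A rough sketch of checking a single engineer's work queue against the guideline above; the priority and kind labels are invented, and the thresholds simply restate the guideline.

```python
# Illustrative check of the per-engineer work-queue guideline
# (roughly 5-10 items: 1-2 high priority, 3-4 moderate, 1-2 to investigate).
def queue_warnings(queue: list[dict]) -> list[str]:
    """Return a list of warnings for a single engineer's work queue."""
    warnings = []
    high     = sum(1 for d in queue if d["priority"] == "high")
    moderate = sum(1 for d in queue if d["priority"] == "moderate")
    analyze  = sum(1 for d in queue if d["kind"] == "investigate")
    if not 5 <= len(queue) <= 10:
        warnings.append(f"queue has {len(queue)} items (guideline: 5-10)")
    if high > 2:
        warnings.append(f"{high} high-priority defects (guideline: 1-2)")
    if moderate > 4:
        warnings.append(f"{moderate} moderate-priority defects (guideline: 3-4)")
    if analyze > 2:
        warnings.append(f"{analyze} investigation items (guideline: 1-2)")
    return warnings

queue = [{"priority": "high", "kind": "fix"}] * 3 + \
        [{"priority": "moderate", "kind": "fix"}] * 4 + \
        [{"priority": "low", "kind": "investigate"}] * 2
for w in queue_warnings(queue):
    print("Warning:", w)
```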
Defect Packaging
● Schedule a series of code drops or “packages” for testing
  – First release
  – Updates; new functions
  – “Critical fix” package
  – Release candidate
  – Final release
● Each should have a primary goal
● Testing involves high fixed costs; packages give you the most value (fixed defects) in each cycle
● Don't waste the “power of the package”
Tying Things Together
● Supporting the Goals
● Maintaining the Level of Quality
● Watching the Health of the Team
● Knowing What's Left to be Done
Supporting the Goals
● Effective defect triage helps you maintain your focus on the goals of the project
● Triage directs your team's effort to the most important work, balancing the demands of all stakeholders
● Defect data can flag roadblocks and slowdowns before they derail the project
Maintaining the Level of Quality
● Defect measures and reports can give you a direct reading on the state of your project
  – How many defects there are
  – How defects affect the product and project
  – Where extra design, test, or review effort is warranted
  – When it's appropriate to release (or not to release)
Watching the Health of the Team
● Defect data helps here by –
  – Signaling overloaded and frustrated team members
  – Capturing and documenting key decisions and key changes
  – Communicating changes that WILL NOT be made
  – Preventing unnecessary work
Knowing What's Left to be Done
● Defect tracking data helps here by indicating –
  – How many items must be changed
  – How long these changes will take
  – How much they will cost
  – What will be unfinished when we stop
Questions?
References
● A. Allison, "Meaningful Metrics," The Software Testing & Quality Engineering Magazine (May/Jun 2001)
● R. Black, Managing the Testing Process, 2nd Edition, New York, NY: John Wiley & Sons, 2002
● R. Galen, Mastering the Software Project Endgame, New York, NY: Dorset House, 2004 (forthcoming)
● C. Necaise, "Managing the Endgame," The Software Testing & Quality Engineering Magazine (Jan/Feb 2000)
● J. Rothman, "Release Criteria: Is This Software Done?" The Software Testing & Quality Engineering Magazine (Mar/Apr 2002)
● J. Rothman, "Managing Projects: Release Criteria, or Is It Ready to Ship," Newsletter Vol. 1, No. 2 (1999)
● B. Schoor, "Managing Quality During the Endgame," presentation from the schoorconsulting.com website and the PSQT/PSTT 2002 North conference, www.softdim.com (2002)
Contact Information
Erik Hemdal – Independent Consultant and Instructor
Bob Galen – EMC2 Corp. & RGalen Consulting Group, LLC
[email protected]
[email protected]
[email protected]
www.rgalen.com