Practical ways to measure and track the progress of Agile projects
Ensuring that key data is visible in an Agile environment
Dave Browett – March 2013

I am a Project Manager!
• These slides are a summary of experience collected over many years – more recently at Micro Focus
• I've been managing Agile projects in various capacities since approx. 2003
• This is not an Agile "primer" – I'm assuming that you all have a basic knowledge of Agile
• Some of this is not for the Agile purist – I have included an appropriate warning…

Scrum is good for team communications

But what about inter-team communications?

What about communications with key stakeholders?

Challenges
• Should we be attempting to standardise these reports/communications?
  – Self-organised teams vs consistent metrics across teams
• Providing senior management/stakeholders with high-level reports that allow
  – Progress against a release to be easily understood
  – Any "at risk" items to be flagged as early as possible
  – Any key dependencies/issues to be raised as early as possible

Self-organised teams vs consistent metrics across teams
Should we standardise?
  – Communication of issues, dependencies etc.
  – Metrics
    • Iteration length
    • Velocity
Typically, where there is more than one Scrum team, any issues between them can be raised and resolved at a "Scrum of scrums". How frequently these need to happen will differ from project to project – the key thing is to provide this information in a timely fashion so that it minimises the impact on other teams' iterations.

Challenge – providing senior management/stakeholders with high-level reports

Where are we?
• "Our velocity is 40"
• "We've done 280 story points"
• "We've done 7 out of 9 iterations"
• "We've spent 4,000 man-hours"
• Only when we know the TOTAL MUST-HAVE payload can we use the above information to report how we're doing and predict what to expect...

Where are we? (charts)
Each chart plots story points delivered (280 points after 7 of 9 iterations, velocity = 40, so roughly 360 points by the end of iteration 9) against the MUST HAVE payload line:
• Must Have Payload = 300: Looking Good!
• Must Have Payload = 200: Easy!
• Must Have Payload = 400: Challenged!

Payload Calculation – predictable delivery (chart)
With the payload estimated somewhere between MMF − y and MMF + y, and velocity somewhere between v1 and v2, the best and worst case delivery falls within a predictable zone where the velocity lines cross the payload band (a worked sketch of this calculation appears below).

Predictable Velocity
• Teams need to be able to deliver a predictable number of story points for each iteration
• Obviously this number may vary from iteration to iteration depending on sickness/holiday etc., but the key principle is that the team commit to, and deliver, a number of story points that is related to their performance in previous iterations.
• "See-sawing" velocity is a warning sign – it could mean that the team are
  – over-committing
  – not estimating or looking ahead sufficiently
  – not producing releasable software within the iteration
• Beware of "iceberg agile"...
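To make the payload calculation concrete, here is a minimal sketch (in Python, which is not part of the original slides) of the best/worst case projection described above. The velocity range of 35–45 around the quoted velocity of 40, and the function and variable names, are my own illustrative assumptions.

```python
import math

def forecast_iterations(must_have_payload, completed_points, v_low, v_high):
    """Project best/worst case iterations left to finish the MUST HAVE payload.

    must_have_payload -- total MUST HAVE story points for the release
    completed_points  -- story points delivered so far
    v_low, v_high     -- pessimistic and optimistic velocity per iteration
    """
    remaining = max(must_have_payload - completed_points, 0)
    best_case = math.ceil(remaining / v_high)   # higher velocity, fewer iterations
    worst_case = math.ceil(remaining / v_low)   # lower velocity, more iterations
    return best_case, worst_case

# Figures from the charts above: 280 points done after 7 of 9 iterations,
# velocity around 40.  A payload of 200 is easy, 300 looks good, and 400 is
# challenged because only 2 iterations remain.
for payload in (200, 300, 400):
    best, worst = forecast_iterations(payload, completed_points=280,
                                      v_low=35, v_high=45)
    print(f"MUST HAVE payload {payload}: {best}-{worst} more iterations needed")
```

Comparing the worst case against the iterations actually remaining gives the "Easy! / Looking Good! / Challenged!" reading of the charts.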
Beware of "Iceberg Agile"!
• A key aspect of Agile is transparency.
• Every iteration is displayed, and each story and the tasks within it are updated, to show a picture of the state of the iteration that is as accurate and "up to the minute" as possible.
• This transparency will build trust at all levels –
  – The Scrum team can show progress both in terms of achieved velocity and demonstrable features
  – Stakeholders/managers can take confidence from a regular cadence that provides demonstrable features
• But if your team is doing "Iceberg Agile" this breaks down – not all the planned stories will be completed, and the reported velocity will be lower than expected or will swing from low to high…

Beware of "Iceberg Agile"! (continued)
• The team may still believe they are on track: the carried-over stories couldn't be done in a single iteration, the work is still being done, and it will all come together one or two iterations down the line...
• But the transparency has been lost – the work done on these carried-over stories is hard to estimate, and the stories can't be demonstrated in the review before they have been finished!
• If your team is doing "Iceberg Agile" you will typically see only part of what was planned being demonstrated in the review; a significant amount of work will be under the surface – difficult to assess in terms of progress and not able to be demoed.
• Carried-over stories should be the exception rather than the rule, and how a story will be demoed should become a key consideration when assessing and accepting stories (in fact, if you are wondering whether you have one story or two, it's good practice to think about how you're going to demo the feature).

Beware of "Iceberg Agile"! (continued)
• Teams that do "Iceberg Agile" will typically
  – Carry over several stories as common practice
  – Be unable to demonstrate all planned stories
  – Have a velocity that see-saws as the credit for carried-over stories gets re-allocated one or two iterations down the line
• These teams will suffer from a lack of transparency, and consequently it is difficult to predict what they are capable of consistently achieving. Be aware, and try to avoid your team doing "Iceberg Agile"!

Velocity calculations across multiple teams – Agile Purist Warning!
• Strictly, you can't simply "add up" story points across teams (because each team is likely to have different measures)
• Then again – surely it doesn't make sense to have wildly different story point measures across teams… (find an Agile purist near you and discuss!)
• So – perhaps the pragmatic solution is to ensure that story point measures across teams are of the same order
• Assuming the principle above is held, these payload calculations can be used for an entire project across several teams – as a "high-level indication" (a rough sketch of such a roll-up follows below)

Bear in mind also…
• Estimation is always an inexact science!
• Beware of false precision, e.g. "37.5sp remaining"
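On the pragmatic assumption above – that story point scales across teams are of the same order – a cross-team roll-up might look like the sketch below (Python; the team names and figures are invented). Rounding the result to a coarse step is one way to avoid the "37.5sp remaining" style of false precision.

```python
def rolled_up_remaining(teams, round_to=10):
    """Rough cross-team roll-up of remaining MUST HAVE story points.

    Only a high-level indication, and only meaningful if the teams'
    story point scales are broadly comparable.

    teams    -- list of dicts with per-team 'must_have' and 'done' points
    round_to -- coarse rounding step, to avoid reporting false precision
    """
    remaining = sum(max(t["must_have"] - t["done"], 0) for t in teams)
    return round(remaining / round_to) * round_to

teams = [
    {"name": "Team A", "must_have": 300, "done": 280},
    {"name": "Team B", "must_have": 220, "done": 150},
    {"name": "Team C", "must_have": 180, "done": 177},
]
# Reports roughly 90 points remaining rather than a spuriously exact 93.
print(f"~{rolled_up_remaining(teams)} MUST HAVE story points remaining across teams")
```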
Possible actions for a challenged project
• Increase resource – although adding resource to a team is likely to *reduce* its velocity in the short term, and bringing in a new team is also likely to require ramp-up/familiarisation.
• We can increase velocity on new features by reducing velocity on other things…
  – Undertake a business review of "critical defects"
  – Temporary relaxation of Service Levels/SLAs
• Reduce payload – business review of Must Have features

Team workload categories (diagram)
A team's total velocity is split across New Features, Maintenance, Enhancements and Technical Debt (V new features + V maintenance + V enhancements + V tech debt). To maximise velocity on new features we need to reduce velocity in the other areas.

Managing Payload – a balanced release (charts)
Each chart stacks the payload (estimated MUST HAVE, then non MUST HAVE) against two capacity lines:
• Maximum story points achievable based on average total velocity – payload above this line is in the AT RISK region
• Maximum story points achievable taking maintenance into account – payload between the two lines is in the CAUTION region; payload below this line is not threatened
Example 1 – Well balanced release
Example 2 – Under-committed release (slack)
Example 3 – Over-committed release
Example 4 – Release with non MUST HAVE payload at risk, with an additional CAUTION indicator
(A small classification sketch follows below.)
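The balancing charts above can be read as a simple classification: compare each payload level against the capacity line with and without the maintenance share. The sketch below (Python) is my own encoding of that reading; the region names mirror the charts, but the function names and the example figures are assumptions, not part of the original material.

```python
def classify_payload(points, total_capacity, maintenance):
    """Place a cumulative payload level into the regions of the charts above.

    points         -- cumulative story points up to this payload level
    total_capacity -- maximum points achievable from average total velocity
    maintenance    -- points expected to be consumed by maintenance work
    """
    if points > total_capacity:
        return "AT RISK"        # above the total-velocity line
    if points > total_capacity - maintenance:
        return "CAUTION"        # only fits if maintenance is squeezed
    return "NOT THREATENED"     # fits even after maintenance is paid for

def classify_release(must_have, non_must_have, total_capacity, maintenance):
    """Summarise a release in the spirit of Examples 1-4 above."""
    return {
        "MUST HAVE payload": classify_payload(
            must_have, total_capacity, maintenance),
        "Total payload": classify_payload(
            must_have + non_must_have, total_capacity, maintenance),
    }

# Invented figures: 400 points of total capacity, 80 expected for maintenance.
# The MUST HAVE payload fits, but the stretch (non MUST HAVE) items land in
# the CAUTION region -- roughly the picture in Example 4.
print(classify_release(must_have=250, non_must_have=90,
                       total_capacity=400, maintenance=80))
```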
Takeaways
• Think about the data that your teams provide to other Scrum teams and stakeholders
  – Timely and accurate
  – Impact of impediments
  – Think about the % spent on maintenance as capacity which could be diverted to developing new features
• Understand the importance of teams committing to an iteration and having a measurable team velocity that allows forecasts to be made
• Clearly identifying MUST HAVE items makes a payload more realistic and achievable, with the stretch goal more clearly defined
• Beware of the signs of "iceberg agile"
• Beware of false precision when providing estimates
• Try to classify your release using the release balancing concept as early as you can, once the MUST HAVE payload is sufficiently defined
• Classic project management activities – such as calculating the critical path and understanding dependencies – are still needed in the Agile world!

Thank you – any questions?
My blog on WordPress: http://davebrowettagile.wordpress.com/
[email protected]