Month: March 2012
A few weeks ago the iTropics team took a detailed look at the journey of a medium-sized feature through our development process. One objective was to calculate the task's Cycle Time, a metric from Lean defined as the length of time a feature takes to go from the start of development to "done done", i.e. successfully through QA. (The definition could also be expanded to cover specification through to deployed-into-production.) I had another objective too: to highlight a specific form of waste in the system known as "hand-off" waste, meaning the delays that occur when one functional role has completed their part and the feature is waiting for the next functional role to begin theirs. In most organisational processes, a large part of the total time a task takes to complete is actually idle time. Unfortunately the Scrum model of software development has no mechanism for highlighting this waste, so the problem is rarely addressed.
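To make the metric concrete, here is a minimal sketch (my own illustration with hypothetical dates and state names, not the team's actual tracking) of computing Cycle Time and hand-off idle time from a feature's state-change log:

```python
# Hypothetical state-change log for one feature: (state entered, date).
# "waiting" marks the hand-off gaps between functional roles.
from datetime import date

history = [
    ("spec",    date(2012, 2, 6)),
    ("waiting", date(2012, 2, 8)),
    ("dev",     date(2012, 2, 13)),
    ("waiting", date(2012, 2, 17)),
    ("qa",      date(2012, 2, 21)),
    ("done",    date(2012, 2, 23)),
]

# Cycle Time: first state entered until "done done".
cycle_time = (history[-1][1] - history[0][1]).days

# Hand-off waste: total days spent in "waiting" states.
idle_days = sum(
    (history[i + 1][1] - history[i][1]).days
    for i in range(len(history) - 1)
    if history[i][0] == "waiting"
)

print(cycle_time, idle_days)  # 17 days elapsed, 9 of them idle
```

Even this toy log shows the pattern described below: roughly half the elapsed time is spent in hand-off queues rather than in value-adding work.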
So I manually tracked this feature as it went through specification to development to QA (looping back a few times along the way), and the results were quite interesting, though I must say I wasn't surprised by the amount of hand-off waste we discovered.
Above is what the board looked like after we'd talked it through. The numbers are the days in February; the red letters mark the days on which value-adding activity was undertaken (spec, dev, QA).
What we found was that the feature took from the 6th February, when we had a meeting with Contiki about the requirements of the feature, until the 29th February, when it was deployed (in actual fact work may have continued past that date, but I stopped counting as we were close enough!). As this was only a medium-sized feature, that duration clearly leaves room for improvement. On further inspection and discussion we found a delay of a week between talking to Contiki and finalizing a specification, another week's delay while deployment issues were sorted out, and further intermittent delays due to a few QA rejections (the specific reasons for these delays aren't relevant here). All in all, only around 7 or 8 days of actual work across spec/dev/QA were needed, but the feature still took 23 calendar days (or 17 working days) to turn around. That means that for over 50% of this task's total duration, nothing was being done on it! That seems like a problem worth solving.
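The arithmetic above can be reconstructed in a few lines. This is my own sketch, not the team's tooling; the dates and the 7-8 days of value-adding work come straight from the paragraph, with 7.5 used as the midpoint:

```python
from datetime import date, timedelta

start = date(2012, 2, 6)   # requirements meeting with Contiki
end = date(2012, 2, 29)    # deployed

calendar_days = (end - start).days  # the 23 "real days"
working_days = sum(
    1 for i in range(calendar_days)
    if (start + timedelta(days=i)).weekday() < 5  # Mon-Fri only
)  # the 17 working days

value_adding_days = 7.5  # midpoint of the estimated 7-8 days of spec/dev/QA
flow_efficiency = value_adding_days / working_days

print(f"{calendar_days} calendar days, {working_days} working days")
print(f"value-adding ~{flow_efficiency:.0%}, idle ~{1 - flow_efficiency:.0%}")
```

The value-adding fraction works out at roughly 44% of working days, i.e. the feature sat idle for over half of its elapsed time, which is the "over 50%" figure quoted above.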
I believe most of this waste is simply due to a lack of focus on the individual features as they go through the system, or what Lean describes as too much work-in-progress. Most of the time, an individual who multi-tasks takes longer to do each task than if they'd done the tasks one at a time. In the same way, when a development team works on too many features at once, each one takes longer to complete than if there were fewer features in progress, principally because there is more waiting time between functional roles while they complete work on all the other items in flight. Research also shows that increased work-in-progress in a system negatively affects quality.
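The relationship between work-in-progress and cycle time described here is often stated as Little's Law, a standard queueing result used in Lean (not something from this post): average cycle time = average WIP / average throughput. A tiny sketch with purely hypothetical numbers:

```python
def avg_cycle_time_days(wip_items: float, throughput_per_day: float) -> float:
    """Little's Law: average time an item spends in the system."""
    return wip_items / throughput_per_day

# Hypothetical team finishing one feature every two days (throughput 0.5/day).
# With the same throughput, halving the WIP halves each feature's cycle time:
print(avg_cycle_time_days(10, 0.5))  # 10 features in flight -> 20 days each
print(avg_cycle_time_days(5, 0.5))   # 5 features in flight -> 10 days each
```

In other words, if the team's rate of finishing work stays constant, starting fewer features at once directly shortens how long each one takes to turn around.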
Another interesting observation I made later was that the 50%+ waste time appears to be independent of the size of the feature. We have yet to find out whether this feature is a typical case, but if it is, it means that when we estimate story sizes we are only factoring in less than 50% of the activity involved, and the hand-off waste is ignored. Now you might say the hand-off waste would probably be consistent across all sizes of stories, and you'd probably be right (although I do suspect there is a fair bit of variation, which will throw off individual estimates), but isn't it a bit stupid to put as much time into estimation as we do, only to leave over 50% of the time actually taken to complete a feature up to chance?
The team agreed that this analytical tool is a useful way to make waste visible and thus put us in a position to start doing something about it. I may well manually track a few more stories this way, but we are hoping that Greenhopper will let us gather these measurements automatically and fuel our retrospectives with ideas for improving the Cycle Time of our features.
A few weeks ago Caplin Systems hosted an "agile/ux safari", which is basically the software development industry's equivalent of a factory-floor tour. As Caplin take UX very seriously there was a lot of focus on this side of things, with design walls, empathy maps and persona development all very prominent in their office space in Houndsditch, London. However, I was more interested in the plain old development side of things, and here are my key takeaways from the night, along with why each one interested me:
- Caplin deliver in 2-week sprints, with a customer demo after every 10 work days; In iTropics we want to go from 4 weeks to 2 weeks as soon as we can.
- They don’t do planning poker, as they ‘have a rough idea’ of how much work they can do in a fortnight and find the time spent planning to be wasteful; This is a trade-off I believe in, as the time won back from these sometimes lengthy meetings is very probably worth any slight loss in estimation accuracy at the size of story that we work with.
- They have a sprint board that tracks work across a day-to-day value stream, as well as a higher-level ‘phase’ board that tracks major deliverables at a project level across sprints; We’ve started experimenting with this in iTropics, as the project-level roadmap was being lost and, along with it, the vision for the team and suite of products.
- Finally, they have loads of UX/UCD work happening before features are given to the development teams to build; This is something we are working towards (albeit in very small increments!) at the moment.
Thanks to Caplin Systems for opening their doors to us and showing their working methods, and thanks to Johanna Kollman and the Agile UX Meetup group for organising!
Some photos of the evening: