My software development team at work recently decided to dispense with our Kanban board and move back to Scrum (long story). The result is that we are now working from a ‘vanilla’ Scrum board and I can no longer measure cycle time, see all the work in progress, easily spot impediments/bottlenecks, or enjoy any of the other benefits that come with a visually mapped value stream.
Wanting to continue deepening my understanding of Kanban, as well as trying it out in different scenarios, I thought I’d give Personal Kanban a try. Jim Benson wrote the book, which I’d be tempted to describe as possibly the GTD of the Lean world. Jim argues that the usual to-do list lacks context, and doesn’t provide feedback or communicate progress nearly as well as a Personal Kanban (PK) board does.
So I’ve created a PK board and put it up in my home office (aka the bedroom):
It’s decidedly low-tech. I have four columns: Backlog, Next Up, In Progress and Done. That’s the value stream mapped. I have four corresponding boxes at the bottom of every work-item card, in which I note the date that the card moved into each column. That’s cycle time sorted. I’ve divided work items into categories like ‘Presentation’, ‘Book’, ‘Blog-post’, etc. That’s the similar-sizing of work sorted (albeit using the hack of categorizing the work, much like t-shirt sizing does for Kanban software teams). Natural space on the board is limited to around 3-4 cards, so that’s a first stab at limiting work-in-progress sorted. I plan to retrospect every 6 weeks or so and record the data (eg. average time to read a book), so that’s the continuous improvement trigger sorted.
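With a date in each of those four boxes, the cycle-time bookkeeping is just date arithmetic. Here’s a minimal sketch in Python, using an entirely made-up card (the title, category and dates are illustrative, not from my actual board):

```python
from datetime import date

# Hypothetical card, mirroring the four dated boxes at the bottom
# of a work-item card: the date it entered each column.
card = {
    "title": "Read 'Personal Kanban'",
    "category": "Book",
    "Backlog": date(2012, 3, 1),
    "Next Up": date(2012, 3, 10),
    "In Progress": date(2012, 3, 12),
    "Done": date(2012, 3, 30),
}

def cycle_time(card):
    """Days from starting work ('In Progress') to reaching 'Done'."""
    return (card["Done"] - card["In Progress"]).days

def lead_time(card):
    """Days from entering the Backlog to reaching 'Done'."""
    return (card["Done"] - card["Backlog"]).days

print(cycle_time(card))  # 18
print(lead_time(card))   # 29
```

Averaging these per category at each retrospective is what gives numbers like “average time to read a book”.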
I can already report that it helps me focus, by forcing me to decide what is important to work on and limiting those options. The power here is as much in selecting what you WON’T spend time on as what you will. With strict WIP limits, you know that the only way you can start reading that next book you’re excited about getting into is by finishing the previous one – a potentially powerful hack. Anything that makes me START less and FINISH more is definitely a good thing.
I hope to report back a lot more in a few more weeks. I’m especially interested to discover the drawbacks of a system like this. So stay tuned!
I could’ve written this post at least 6 months ago, if not earlier, as it was really late last year / early this year that I began to cotton on to a new area of expertise opening up in the world of Software Development: that of Product Management.
Being the Dev Team Leader of a team that has really good engineering but is pretty light on product management skills led me to tweet a quote from Marty Cagan’s book “Inspired: How to Create Products Customers Love” earlier this year:
“It doesn’t matter how good your engineering team is if they are not given something worthwhile to build.”
Increasingly these days I think the problem is (generally) becoming less about building stuff that “doesn’t break” and much more about building relevant, dare I say useful, software. In the world of enterprise it is still Project Managers and Business Analysts that occupy this space, and these skillsets don’t seem to include the ones that help us determine whether our users are delighted or our stakeholders are enfranchised (eg. Analytics / Customer Feedback, Stakeholder Prioritization models, UX/Usability skills, Agile product development skills like MVP, etc).
I think part of the problem is also wrapped up in the configuration of the present-day large organisation (at least in mine), where the PM/BAs do not act as champions of the product but rather as relatively disenfranchised interfaces between the developers and the ‘business’ users, who for far too long we have mistakenly believed know how to build software applications. This configuration doesn’t allow the PM/BA to really own the product, or a vision for the product. Nobody shoulders the actual responsibility for user satisfaction, and the result is an incoherent or indeed non-existent product strategy, and generally little value delivered over time.
I now see two parts to a successful software team. The first is the software delivery part: good engineers, a good engineering ethic (TDD, refactoring / tech-debt payoff, etc), a Continuous Delivery capability enabling fast feedback, and so on. The input here is a requirement (probably a story or epic in Agile terms); the output is a piece of working software in production, with hopefully a very short duration of time between the two. The second part is the product discovery part – “What should we build?”. The inputs here are things like customer feedback, both quantitative (analytics) and qualitative (surveys), the vision for the product (from a Product Management team with a strong sense of what should/could work!), and of course any other stakeholders you might have (people that have a stake in the product but might not use it directly – eg. call-centre agents that get more demand when the website goes down). The output here is, of course, a new requirement/epic/story.
These two parts are as critical as each other. A cracking product manager might as well go home rather than be frustrated by a team of developers that repeatedly churn out broken software untimeously. Likewise, customers see little value in a cracking software development team churning out irrelevant features.
I recommend the reading of two books that talk a lot about this space:
Marty Cagan is part of the Silicon Valley Product Group, and his site has a lot more information about Product Management. This book succinctly summarises what a good Product Manager is and isn’t, and provides a lot of practical advice for getting going with better product management practices immediately.
The second book is also hugely popular and focuses on how Lean-principles-inspired product development results in products that matter – must-read material for any aspiring Product Manager! Eric Ries also blogs at startuplessonslearned.com.
These two books will fast-track your introduction to this area, and allow you to start thinking about whether the activities undertaken by those telling you what to build are really the ones that uncover what users want.
I’m looking forward to watching how the Product Management space evolves and discovers more about how we can incorporate insight gained from better knowledge of our users into the development of ever more useful software!
A few weeks ago the iTropics team took a detailed look at the journey of a medium-sized feature going through the system of development. One objective was to calculate this task’s Cycle Time, a metric from Lean that is defined as the length of time a feature takes to go from beginning development to “done done”, or QA’d successfully. This definition could also be expanded to cover specification through to deployed-into-production. I also had another objective though, and this was to highlight a specific form of waste in the system known as “hand-off” waste: the delays that occur when one functional role has completed their part and the feature is awaiting the next functional role to begin theirs. For most processes in organisations, it is common to find that a large part of the total time a task takes to complete is actually idle time. Unfortunately the Scrum model of software development does not have any mechanisms to highlight this waste, and thus the problem is not very frequently addressed.
So I manually tracked this feature as it went through specification to development to QA (and looping back a few times) and the results were quite interesting, though I must say I wasn’t surprised by the amount of hand-off waste we discovered.
Above is what the board looked like after we’d spoken about it. The numbers are the days in February, the red letters are the days where value-adding activity was undertaken (spec, dev, qa).
What we found was that the feature took from the 6th of February, when we had a meeting with Contiki about the requirements of the feature, until the 29th of February, when it was deployed (in actual fact work might have continued past that date, but I stopped counting as we were close enough!). As this was only a medium-sized feature, it is pretty clear that this duration is open to improvement. On further inspection and discussion we found that there was a delay of a week between talking to Contiki and finalizing a specification, a delay of another week while deployment issues were sorted out, and further intermittent delay due to a few QA rejections (the specific reasons for these delays aren’t relevant here). All in all, only around 7 or 8 days of actual work across Spec/Dev/QA were needed, but the feature still took 23 real days (or 17 working days) to turn around. That means that for over 50% of this task’s total duration, nothing was getting done on it! That seems like a problem worth solving.
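The arithmetic behind those percentages (what the Lean literature often calls flow efficiency: value-adding time divided by total elapsed time) is trivial to reproduce. A sketch in Python, using the dates from the board and taking 8 value-adding days as the upper estimate:

```python
from datetime import date, timedelta

start = date(2012, 2, 6)   # requirements meeting with Contiki
end = date(2012, 2, 29)    # deployed (where I stopped counting)
value_adding_days = 8      # actual spec/dev/qa days observed on the board

calendar_days = (end - start).days  # 23 "real" days

# Working days in the interval, counting the start day but not the end
# (Monday=0 .. Friday=4 in Python's weekday() convention).
working_days = sum(
    1 for i in range(calendar_days)
    if (start + timedelta(days=i)).weekday() < 5
)  # 17

flow_efficiency = value_adding_days / working_days
print(f"{flow_efficiency:.0%} value-adding, {1 - flow_efficiency:.0%} idle")
# 47% value-adding, 53% idle
```

Even with the generous 8-day estimate of actual work, less than half the elapsed working time added value.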
I believe most of this waste is simply due to a lack of focus on the individual features as they go through the system, or what Lean describes as too much work-in-progress. Most of the time, an individual who multi-tasks takes longer to complete each task than if they’d done the tasks one at a time. In the same way, when a development team works on too many features at the same time, each one takes longer to complete than if fewer features were in progress, principally because there is more waiting time between functional roles while they complete work on all the other items in flight. Research also shows that increased work-in-progress in a system negatively affects quality.
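The Lean literature usually formalises this relationship as Little’s Law: average cycle time equals average work-in-progress divided by average throughput. A toy illustration with made-up numbers (these are not measurements from our board):

```python
def average_cycle_time(avg_wip, throughput_per_week):
    """Little's Law: average cycle time = average WIP / average throughput."""
    return avg_wip / throughput_per_week

# Illustrative figures: a team finishing 2 features a week.
print(average_cycle_time(12, 2.0))  # 6.0 weeks per feature
print(average_cycle_time(6, 2.0))   # 3.0 weeks: halving WIP halves cycle time
```

If throughput stays roughly constant, the only lever left for shortening cycle time is carrying fewer items at once, which is exactly what WIP limits enforce.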
Another interesting observation I made later was that the 50%+ waste time is agnostic to the size of the feature. We have yet to find out whether this feature is a typical case, but if it is, it means that when we estimate story sizes we are only factoring in less than 50% of the elapsed time, and the hand-off waste is ignored. Now you might say the hand-off waste would probably be consistent across all sizes of stories, and you’d probably be right (although I do suspect there is a fair bit of variation, which will throw off individual estimates), but isn’t it a bit silly to put as much time into estimation as we do, when over 50% of the time actually taken to complete the feature is left up to chance?
The team agreed that this analytical tool is a useful way to make waste visible, putting us in a position to start doing something about it. I might well manually track a few more stories this way, but we are hoping that GreenHopper will let us gather these measurements automatically and fuel our retrospectives with ideas for improving the Cycle Time of our features.
A few weeks ago Caplin Systems hosted an “agile/ux safari”, which is basically the equivalent in the software development industry of a factory-floor tour. As Caplin take UX very seriously there was a lot of focus on this side of things, with design walls, empathy maps and persona development all very prominent in their office space in Houndsditch, London. However I was more interested in the plain old development side of things and here were my key takeaways of the night, alongside why I was particularly interested in them:
- Caplin deliver in 2-week sprints, with a customer demo after every 10 work days; In iTropics we want to go from 4 weeks to 2 weeks as soon as we can.
- They don’t do planning poker, as they ‘have a rough idea’ of how much work they can do in a fortnight and find the planning time wasteful; This is a trade-off I believe in, as the time won back from these sometimes-long meetings is very probably worth any slight loss in estimation accuracy at the size of story that we work with.
- They have a sprint board that tracks work across a day-to-day value stream, as well as a higher-level ‘phase’ board that tracks major deliverables at a project level across sprints; We’ve started experimenting with this in iTropics, as the project-level roadmap is being lost and, along with it, the vision for the team and suite of products.
- Finally, they have loads of UX/UCD work happening before features are given to the development teams to build; This is something we are working towards (albeit in very small increments!) at the moment.
Thanks to Caplin Systems for opening their doors to us and showing their working methods, and thanks to Johanna Kollman and the Agile UX Meetup group for organising!
Some photos of the evening:
On the 7th and 8th of February 2012 I attended the Travel Technology Europe 2012 show at Earl’s Court. As its name suggests, this is a conference that focuses on the technology involved in the travel industry. A few of the big technology-driven travel companies were there, including Expedia and Travelport, but there was also a huge variety of smaller businesses occupying a variety of market spaces, including everything from web-based analytics to ‘data aggregators’, aka companies that are exclusively in the business of collating and providing product (tour) data from and to a huge network of online travel companies. This blog post summarizes my experiences and takeaways at the show.
As I was a little late I joined in on the end of an EasyJet presentation on how their analytics package is driving more personalized campaigns to a more targeted user base than before, with very successful results. For example, if someone from the UK visits their holiday home in the south of France every Easter, EasyJet now has the platform to email that user in the run-up to Easter offering cheap flights. It was also interesting to note that the EasyJet website is powered by SiteCore, the same enterprise CMS that Trafalgar uses for their website.
Directly after the EasyJet talk, I attended a panel discussion chaired by Kevin May, operator of travel technology website Tnooz (www.tnooz.com – a website I will definitely be reading more of in future). Kevin had three panellists – John Watton (Director of Brand & Marketing for EAN, the Expedia Affiliate Network); Alex Bainbridge from TourCMS, and Kevin O’Sullivan, a developer who won the last THack (read on to see what a THack is) by building an Android application that you could issue voice commands to, like “Flights to Paris”, and it would use various travel APIs to conduct the relevant search. The panel discussion was entitled “What THack did to the industry” and Kevin elaborated on a series of THacks conducted around the world and what the outcomes were: a variety of sites and apps that integrated travel data to solve a (travel) business need. A THack is a developer day or weekend where developers come together to ‘build something cool’ using APIs made exclusively available to them without any commercial barriers to entry. Apparently THack 2011 was where Expedia announced that they were making their APIs publicly accessible, and today any developer can sign up and start building applications interacting with Expedia services. The overall feeling of the panel participants was that APIs are the future and that travel companies, especially tour operators with data available for their own products, need to make this data accessible via an API or risk being left behind by those operators that do. Hopefully iTropics can scratch this itch for the Travel Corporation with our new API version 3! Expedia also recommended creating a developer portal so that developers could ‘self-serve’ accessing the APIs, and it was interesting to see that they have partnered with Mashery, an API management solution provider, to help make this happen.
One interesting comment from the panel was that developers no longer need to wait for business agreements before developing products; they can even lead on commercial opportunities by proving the technology first (eg. mashups or integrations), with the business falling in behind a successful proof-of-concept to get the commercials in place.
There were also a number of exhibitors at the show. One was AdInsight, a company that solves an interesting problem. An online shopper might click a banner advertisement or a Google AdWord, storing information in their cookie that may be relevant to a website’s online media campaign. But that user may then pick up the phone to call a number on the website, preferring to talk to someone. At that point all online activity is left behind, as there is no link between that phone-in customer and their online activity. What AdInsight’s analytics package does is dynamically display a specific phone number to each user coming to the site, which uniquely identifies them. When they call in, they are routed through AdInsight’s phone network, which has ALSO collected information from the user’s cookie (including which phone number they are seeing on the website) and is thus able to link the calling customer to their online activity, including which online media campaign (eg. Facebook promotion) drove them to the site. The result is a comprehensive view of the calling customer’s online activity, enabling accurate tracking of online media campaigns but also allowing call-centre staff to contextualize their conversation with that client based on their surfing history (“Oh I see you were looking at the web page for European Whirlwind – we have a special on that this week!”).
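For the technically minded, the core trick (often called dynamic number insertion) can be sketched in a few lines: allocate a unique number per visiting session, and keep the reverse mapping for inbound calls. This is my own guess at the mechanism with invented names and a fictional UK number range, not AdInsight’s actual implementation:

```python
# A finite pool of trackable phone numbers (fictional UK drama range).
number_pool = iter([f"020 7946 0{n:03d}" for n in range(1000)])

session_to_number = {}  # session/cookie id -> phone number shown on the page
number_to_session = {}  # phone number -> session/cookie id

def render_phone_number(session_id):
    """Called when the page is served: allocate (or reuse) this visitor's number."""
    if session_id not in session_to_number:
        number = next(number_pool)
        session_to_number[session_id] = number
        number_to_session[number] = session_id
    return session_to_number[session_id]

def handle_inbound_call(dialled_number):
    """Called when the phone rings: recover the caller's online session, if any."""
    return number_to_session.get(dialled_number)

shown = render_phone_number("cookie-abc123")
assert handle_inbound_call(shown) == "cookie-abc123"
```

A real system would also have to recycle numbers after sessions expire, since the pool is finite, but the session-to-number mapping is the essence of linking a caller to their browsing history.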
Next I saw a presentation from Distribute Travel (specialists in travel API feeds) and Net Effect (a network of several hundred travel websites, and integrators of multiple APIs to feed those sites with bookable products) entitled “Turning the Data Tap”. The presenters expounded on the virtues of having an API. They advised that travel companies should even make their product data APIs open (publicly accessible), and let consumers/developers decide how to use that product data, as this enables the maximum amount of innovation. Indeed, developers or companies consuming your product data may surprise you with the new ways that they sell your products!
The presentations are available in Prezi format (quite cool) here:
All in all it was great to see so much technology maturity and innovation happening in the travel/technology sector. This is certainly a very exciting space to be in right now, and there are seemingly huge commercial opportunities if we are strategic about our technology development choices going forward. I recommend adding www.tnooz.com to your reading list to keep up!
I just killed my Google+ account and along the way I was presented with this screen:
To which I wrote the following:
Not adding any value to my life – my friends are just sharing links from the web; nobody is commenting on how awesome their holiday was, uploading photos, creating events or generally being ‘social’. The UI tech is great by the way, but the UX is not compelling in the slightest.
IMO Google+ needed to launch locally (like FB) and spread organically, with people joining because their friends were already on it and being SOCIAL (not just sharing internet links). Instead it launched to the entire world simultaneously, who promptly all arrived to be greeted by…not much at all.
Oh and circles are way too much bother for most people 😉
Why do you think Google+ failed?
A few months ago, in my continuing quest to find a personal finance manager (UK equivalent of Mint.com) I registered on http://www.moneydashboard.com. After getting past the initial shock/horror of Silverlight, I tried to perform some basic tasks but found the site cryptic to figure out and difficult to use throughout subsequent attempts.
After forgetting about the site, I received an email a couple of weeks ago informing me of all their whizz-bang new features. Unfortunately for them I took it as a reminder to unsubscribe, and requested that they delete all my info. Today I received a confirmation email apologising for the delay and asking for any feedback. Turns out I was in a feedback kind of mood…
- The fact that this email is delayed by nearly two weeks, and that my initial email seemingly went to three separate mailboxes, makes me think that Money Dashboard (MD) has loads of operational inefficiencies that will ultimately drag on all aspects of the user experience of the product. Someone, probably middle-management, should be focusing less on the work and more on the system of work over there. Simplifying the system and eliminating confusion (like which support mailbox to watch) will allow people to focus on customer service and user experience of MD.
- Silverlight was the wrong technology decision. I’m guessing MD have a) heard it before and b) believe they can’t do much about it now (but MD can).
- Even factoring out Silverlight, the user experience of the site was poor. I recommend investing in more UX design people and starting the development process with them (rather than with the salespeople or business analysts or whoever is driving development now).
Was I too harsh? Probably. Am I sick of half-assed tech out there? Definitely.