Kanbaning your Scrum (2 of 3)
John Coleman looks at the benefit of optimizing flow even for more complicated work.
You might remember the first post, where we optimized the amount of work in progress for simple/obvious work. Now let's make it a bit more complicated. As I mentioned, I ran this simulation with teams around the world over a number of years.
I used Mike Burrows' Featureban (https://www.agendashift.com/featureban) with Andy Carmichael's additional design, plus some tuning. Each "day", team members flipped coins: heads was a good day, tails not so good, and people learned to benefit from one of the tails rules, which allowed moving an item from Selected into Build. On a given day a PBI could move into Selected and Build with heads or tails, and then, with two heads, could move through Build and Test into Done. So we know that each PBI takes two to three days if there is no waiting. Towards the end we seemed to get reliable, with a two-day cycle time, and it seemed pointless to continue: it was obvious from that point that we would stay consistent as long as we did not change policies.
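To make the dynamics concrete, here is a rough Python sketch of the effect. This is a simplified toy loosely inspired by the coin-flip mechanics above, not the official Featureban rules: I've assumed heads advances the oldest in-progress PBI one column (each PBI needs two advances to reach Done) and tails pulls a new PBI into progress unless a WiP limit blocks the pull.

```python
import random

def simulate(days=100, members=4, wip_limit=None, seed=7):
    """Toy coin-flip flow: each day every team member flips a coin.
    Heads advances the oldest in-progress PBI one column (two advances
    needed: Build, then Test, then Done). Tails pulls a new PBI into
    progress, unless a WiP limit blocks the pull. Returns the average
    cycle time (days a finished PBI spent in progress)."""
    random.seed(seed)
    in_progress = []   # each item is [advances_still_needed, age_in_days]
    cycle_times = []
    for _ in range(days):
        for item in in_progress:
            item[1] += 1               # everything in progress ages a day
        for _ in range(members):
            if random.random() < 0.5:  # heads: work the oldest item
                if in_progress:
                    oldest = max(in_progress, key=lambda it: it[1])
                    oldest[0] -= 1
                    if oldest[0] == 0:
                        cycle_times.append(oldest[1])
                        in_progress.remove(oldest)
            elif wip_limit is None or len(in_progress) < wip_limit:
                in_progress.append([2, 0])  # tails: start something new
    return sum(cycle_times) / len(cycle_times) if cycle_times else 0.0

print(f"unlimited WiP: {simulate(wip_limit=None):.1f} days average cycle time")
print(f"WiP limit 4:   {simulate(wip_limit=4):.1f} days average cycle time")
```

Even in this crude sketch, the unlimited-WiP team keeps starting work faster than it finishes, so in-progress items age and average cycle time balloons, while the WiP-limited team stays fast and stable.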
You might rightly say this is not realistic: every item taking two to three days. So a complication was added to the simulation: features consisted of a variable number of PBIs, with inter-dependencies between features. Andy Carmichael also added a cost of delay profile and numbers per feature to make it interesting. I am grateful to Andy for this twist.
Features with a variable number of PBIs, dependencies, and (shock, horror) classes of service :). The last column is the cost of delay per day; CLUD is the first day the cost of delay per day kicks in, and CLAD is the last day before the cost of delay burns the remaining value. If the intangible Feature 12 was delivered to done (all PBIs delivered to done, plus one day to deploy; sometimes 2-5 days in other simulations), team members who got tails could toss again (usually I activated this rule only if I had an extra 15 minutes). Features 16-20 would change unexpectedly for a "black swan farming" effect (bad and/or good). Sometimes we would change the features to be relevant to the industry. Time did not permit black swan farming in this particular simulation instance (see the experience report at http://blackswanfarming.com/), but it was attempted in many previous instances.
*CLUD = "CoD Low Up-to Date", i.e. there is no advantage in implementing before this point.
**CLAD = "CoD Low After Date", i.e. there is no point implementing after this point, as no value remains.
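One hedged reading of how such a profile translates delivery dates into money lost is sketched below. The function and all its numbers are hypothetical illustrations, not values from the simulation: no cost accrues before CLUD, cost accrues per day between CLUD and delivery, and delivering after CLAD forfeits the feature's entire value.

```python
def delay_cost(delivery_day, cod_per_day, clud, clad, feature_value):
    """Hypothetical reading of a CLUD/CLAD cost-of-delay profile:
    - before CLUD: no advantage in delivering earlier, so no delay cost
    - between CLUD and CLAD: cost accrues per day past CLUD (capped at
      the feature's value)
    - after CLAD: no value remains, so the whole feature value is lost."""
    if delivery_day <= clud:
        return 0.0
    if delivery_day > clad:
        return float(feature_value)   # value fully burned
    return min((delivery_day - clud) * cod_per_day, feature_value)

# Illustration: CoD 100/day, CLUD day 10, CLAD day 30, feature worth 2000
print(delay_cost(5, 100, 10, 30, 2000))    # delivered before CLUD
print(delay_cost(15, 100, 10, 30, 2000))   # 5 days past CLUD
print(delay_cost(31, 100, 10, 30, 2000))   # past CLAD: total loss
```

Profiles like this are what make ordering interesting: two features with the same CoD per day can demand very different treatment depending on how close their CLAD dates loom.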
It was interesting that in all cases (given time) where PBI cycle time reduced, that reduction was passed on to reduce feature end-to-end customer cycle time too. In the earlier simulations, teams were given a horrible setup with existing work in progress, including a poorly selected feature (feature 4), so teams naturally wanted to cancel that feature for now. Since cancelled work would mess up the stats, I let the Product Owner decide, in consultation with teams, which features to order high up the Product Backlog. It took longer and it removed excuses, so it was worth it. Daniel Vacanti's approach, according to his book "Actionable Agile Metrics for Predictability" (see https://www.actionableagile.com/publications/), would have been to move cancelled items to done with an annotation of "cancelled", and I could have filtered those out, I guess. For the clearest learning outcomes, the choice was to keep it simple. This is a departure from reality, because work does get cancelled. I might look at that in future simulations using Daniel's Actionable Agile software at https://www.actionableagile.com/, which not only allows filtering but also has Monte Carlo capability. Let's leave Monte Carlo for another post.
We even had some good/bad "black swans" (features 16 to 20) in the previous simulations (a feature predicted to deliver millions of value actually generated 50c/50p, or one predicted at 20k of value earned 5m), highlighting the effect of making smaller bets. Black Swan Farming (see the experience report at http://blackswanfarming.com/) is akin to placing a small amount of money on many tables (features) in a casino to see which bets are performing better, instead of putting down big bets on only a few tables (features). This seems similar to "taking a bite" in Large-Scale Scrum (discovering complexity and value by doing a tiny piece of work that cross-cuts specialist layers).
In this simulation, the audience had already done their Professional Scrum with Kanban curriculum, which featured GetKanban, and we had additional time to run this advanced Featureban simulation. There was a method to my madness: I wanted to see if teams would revert to type as soon as I put pressure on them to compete again in class after the GetKanban learnings, similar to the "Squirrel Burger" moment in the Professional Scrum Master class. The attendees seemed to have listened to and synthesized the PSK content; it was obvious to me that they were smart enough to understand that regardless of how features get ordered/ranked (even using "class of service" :)), once work started, the management of aged work was a major policy. Before PSK was ever announced, in one of my previous Scrum with Kanban classes, a really smart person in a UK bank came up with the policy: "if something is older than 4 days, let's move it regardless of trying to finish features; after all, we decided to start it, so we should finish it and avoid work rotting while in progress".
Andy and I introduced a rule that I enforced more strictly over the last 30–40 simulations or so: a PBI cannot move through two in-progress columns in one day, as we are trying to get this simulation as close to reality as possible. One could also argue that this is the reality. Here is an example where we did not implement this rule:
The simulations I ran leave me in no doubt that "Little's Flaw killer policies" improve the performance of Scrum for complicated knowledge work by up to 10x (with unlimited WiP, the timeboxes merely prevented the push system from worsening further).
Here is a smoother example:
The unpredictable nature of complex work upsets the delivery of value, but that delivery can still be improved by limiting WiP through swarming, collaboration, and the sharing/pairing/mobbing of work. In the next experiments in H2 2018, complex work will be looked into in more detail, and I will revert with real-but-anonymized data / simulation data for complex work in due course. Logically, given that Scrum makes impediments visible and is based on the three pillars of transparency, inspection, and adaptation, I expect Professional Scrum with Kanban to improve the performance of Scrum with complex work as well. Complex work by its nature has unknown unknowns and is therefore unpredictable, yet I expect good/bad news to arise sooner from complex work due to increased team performance from Professional Scrum with Kanban. Let's see…
Meanwhile, check out Daniel Vacanti's case study for Kanban at Siemens Health Services at https://www.infoq.com/articles/kanban-siemens-health-services, where a reduction of end-to-end customer cycle time for complex work was validated.
You will miss a trick if you do not optimize the flow of work via WiP limits. Don't be fooled by temporarily rising throughput without WiP limits. WiP is a leading indicator of cycle time. Cycle time can be a leading indicator for throughput, but there can also be a tradeoff between cycle time and throughput, for example when you deliberately incur "flow debt" (borrowing work time from other items to prioritize one or more items in progress, thus causing relative ageing and disrupting the assumptions of Little's Law; see Daniel Vacanti's Little's Flaw video at https://vimeo.com/52683659).
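As a back-of-the-envelope check on why WiP leads cycle time, Little's Law relates the two through throughput. The numbers below are illustrative only, and the relationship holds only while the law's assumptions hold, which is exactly what flow debt quietly breaks:

```python
def average_cycle_time(avg_wip, avg_throughput_per_day):
    """Little's Law: average cycle time = average WiP / average throughput.
    Valid only for a stable system: arrival rate matches departure rate
    and items are not abandoned mid-flow ("Little's Flaw" is what happens
    when these assumptions are quietly violated)."""
    return avg_wip / avg_throughput_per_day

# Illustration: 12 items in progress finishing at 3 per day averages
# 4 days of cycle time; halve the WiP at the same throughput and the
# average cycle time halves too.
print(average_cycle_time(12, 3))
print(average_cycle_time(6, 3))
```

The arithmetic is trivial, which is the point: if you want shorter cycle times and throughput is roughly fixed by team capacity, the lever you actually control is WiP.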
Visualize your work for sure, but go the extra mile to optimize the flow of value. It leads to a calmer and more sustainable work environment, and I'd be surprised if it didn't improve quality. Don't underestimate Professional Scrum with Kanban; it adds horsepower to your Professional Scrum.
Get an advantage on the competition internally and externally. See our training offerings at https://ace.works/TrainingEvents.