Latest Presentation – 21st Century Portfolio Management

I recently spoke at the LKCE conference in Vienna on 21st Century Portfolio Management. The talk was recorded and is available here: http://vimeo.com/52546904. It’s about an hour long.

I’ve now presented this material in Madrid, Boston, Tokyo, Vienna, Utrecht, and at various clients around Australia – and each time the content of the talk has generated a good amount of interest.

The feedback I have been getting, in person after each talk, is that there isn’t a lot out there (in books, articles, blog posts, guidance from agile consultancies, etc.) on Agile at the portfolio level and beyond, and that much of what I talk about is treated as undiscussable in most organisations.

My shorter Boston talk, recorded back in May, has generated over 1000 views (the next most watched being Steve Denning, David Anderson and Don Reinertsen with a few hundred each), which backs this interest up.

The good news is that in Australia we are actually doing what I talk about i.e. it’s not just theory. I hope to publish more on that (and the results) in the future.

Lean Software Management BBC Worldwide Case Study

Dr Peter Middleton and I have had our “Lean Software Management: BBC Worldwide Case Study” paper accepted by the IEEE Transactions on Engineering Management. It will be published in the February 2012 issue. The paper was edited by Dr Jeffrey K. Liker, author of The Toyota Way.

You can download a copy of the paper prior to its publication here.

I believe it will be one of the most significant papers in Software Engineering this decade.

David Anderson

Abstract

This case study examines how the lean ideas behind the Toyota production system can be applied to software project management. It is a detailed investigation of the performance of a nine-person software development team employed by BBC Worldwide based in London. The data collected in 2009 involved direct observations of the development team, the kanban boards, the daily stand-up meetings, semi-structured interviews with a wide variety of staff, and statistical analysis.

The evidence shows that over the 12-month period, lead time to deliver software improved by 37%, consistency of delivery rose by 47%, and defects reported by customers fell 24%.

The significance of this work is showing that the use of lean methods including visual management, team-based problem solving, smaller batch sizes, and statistical process control can improve software development. It also summarizes key differences between agile and lean approaches to software development. The conclusion is that the performance of the software development team was improved by adopting a lean approach. The faster delivery with a focus on creating the highest value to the customer also reduced both technical and market risks. The drawback is that it may not fit well with existing corporate standards.

Value Delivered

The paper doesn’t include the increase in business value delivered over the period of study. This was due to confidentiality agreements. What I can say is that during the period of study, the digital assets produced rose by hundreds of thousands of hours of content, a 610% increase in valuable assets output by software products written by the team.

Authors

Peter Middleton received the M.B.A. degree from the University of Ulster, Northern Ireland, in 1987, and the Ph.D. degree in software engineering from Imperial College, London, U.K., in 1998.

He is currently a Senior Lecturer in computer science at Queen’s University Belfast, Northern Ireland. He is the coauthor of the book Lean Software Strategies, published in 2005, and the editor of a book of case studies on applied systems thinking, Delivering Public Services that Work, published in 2010. His research interests include combining systems thinking with lean software development to help organizations significantly improve their performance.

David Joyce is a Systems Thinker and Agile practitioner with 20 years of software development experience, including 12 years of technical team management and coaching. In recent years, David has led both onshore and offshore teams and successfully led an internet video startup from inception to launch. More recently David has coached teams on Lean, Kanban and Systems Thinking at BBC Worldwide in the U.K. He is a Principal Consultant at ThoughtWorks.

Mr. Joyce was awarded the Lean SSC Brickell Key award for outstanding achievement and leadership.

Programme Level Kanban

I was recently asked

I’m looking for pointers and experience of running programmes with Agile, particularly topics such as:
- team structures
- communication and coordination processes

Rather than Mike Cohn’s Scrum of Scrums, my answer is to use a master Kanban board to visualise the progress of the projects within a programme, which in turn will naturally enhance the communication and coordination process.

The programme board has cards for each of the sub-projects or sub-feature sets (MMFs) only. The detail for each of these is broken out on each team’s Kanban board, not on the programme-level board.

This approach visualises what is going on at a higher level, and enables the various representatives from each of the sub teams to collaborate, understand what is coming their way that could affect them, and facilitate synchronisation.

A daily standup is still held, but the rhythm is around:

  1. what is blocking your team, or about to block another team
  2. what work is in progress
  3. bottlenecks (either current or impending)
  4. are priorities clear on what gets pulled next
  5. what needs to be expedited

The standups still run from right to left on the board, in other words upstream: from what is about to be released all the way back to analysis.

Each team records Lead Time and Cycle Time in elapsed working days (some sub-project teams may still use points, but augment these with LT and CT). This enables those at the programme level to compare teams. Sub-teams with longer Lead Times are asked whether they need more resources or assistance in removing bottlenecks, or whether we need to go and work on the system conditions. The only caveat is keeping managers in check so that they don’t start using these metrics for pointing fingers at “slow” teams rather than as continual improvement opportunities.
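As a sketch of the kind of comparison meant here (the team names and numbers are entirely made up), the programme level can simply flag sub-teams whose mean Lead Time sits above the programme mean, as candidates for an assistance conversation rather than blame:

```python
# Hypothetical per-team mean lead times, in elapsed working days
team_lead_times = {"Team A": 9, "Team B": 21, "Team C": 12}

programme_mean = sum(team_lead_times.values()) / len(team_lead_times)

# Teams above the programme mean get offered help (resources, bottleneck
# removal, work on system conditions) - not finger-pointing
needs_conversation = [team for team, lt in team_lead_times.items()
                      if lt > programme_mean]
```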

As Dr Peter Middleton says, the usual constraint on programme visualisation is the ability of the human mind to handle complexity. This is why tools struggle: you end up seeing every individual work item for all the sub-projects, which is too complex. There is no need for one gigantic board or tool visualising everything in fine detail.

Journey to Systemic Improvement – Lean eXchange presentation

Today I gave a talk at the UK Lean eXchange entitled Journey to Systemic Improvement.

My slides can be found here.

Note it is a media-rich presentation, so the PDF is almost 50MB!

A video recording of the presentation and our second running of the Red Bead Experiment will soon be available.

Kanban Results

Over the past year our Kanban teams have been striving to reduce the following:

  • Lead Time – the time it takes from a customer request to when it is delivered
  • Development time – the time it takes from entering the Ready For Development queue to when it is handed off to QA
  • Engineering time – the time it takes from entering the Ready For Engineering queue to when it has passed QA, left Engineering, and is ready for UAT
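These three measures can be computed from the date stamps recorded as a card moves across the board. A minimal sketch in Python, using working days as the unit (the card fields and dates are hypothetical, not our actual data):

```python
import numpy as np

def working_days(start, end):
    """Elapsed working days (Mon-Fri) between two ISO dates, end exclusive."""
    return int(np.busday_count(start, end))

# Hypothetical card with a date stamp recorded at each stage
card = {
    "requested":     "2009-09-01",  # customer request received
    "ready_for_eng": "2009-09-02",  # entered Ready For Engineering queue
    "ready_for_dev": "2009-09-03",  # entered Ready For Development queue
    "handed_to_qa":  "2009-09-10",  # development done, handed off to QA
    "passed_qa":     "2009-09-14",  # passed QA, ready for UAT
    "delivered":     "2009-09-16",  # delivered to the customer
}

lead_time = working_days(card["requested"], card["delivered"])
dev_time  = working_days(card["ready_for_dev"], card["handed_to_qa"])
eng_time  = working_days(card["ready_for_eng"], card["passed_qa"])
```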

We have seen improved results through various means: working on the system; talking about blockers first in the standup; actively assigning, escalating and removing blockers; recognising and reducing bottlenecks; retrospectives; improving our process by separating common cause problems from special cause problems; using MMFs and component stories and tasks; implementing Kaizen; implementing classes of service; and highlighting items that have been on the board for too long, to name but a few. The results are depicted below in Statistical Process Control charts using data taken from our largest Kanban team.

Note the links in the above paragraph link to other areas of this blog that describe in more detail how each of these have been achieved. You can click on each of the charts below to see a larger version.

Lead Time

Lead time has reduced from a mean of 22 days to 14 days over the past year. There is a consistent downward trend, with the majority of the most recent items under the mean. Each of the outliers was shown to be special cause. The periods on the charts have been split from 2008 until our financial year end (April 2009), and from July 2009 until October 2009.

Lead Time Oct 09
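The control limits on charts like these can be derived with the standard XmR (individuals) calculation: the centre line is the mean, and the natural process limits sit 2.66 average moving ranges either side of it. A sketch with made-up lead times, not our actual data:

```python
def xmr_limits(values):
    """Individuals (XmR) chart: centre line and natural process limits."""
    n = len(values)
    mean = sum(values) / n
    # Average moving range between consecutive observations
    mr_bar = sum(abs(a - b) for a, b in zip(values, values[1:])) / (n - 1)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for n = 2)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Hypothetical lead times (working days) for recent work items
lead_times = [14, 12, 17, 13, 15, 11, 16, 14]
centre, lcl, ucl = xmr_limits(lead_times)

# Points outside [lcl, ucl] are candidates for special-cause investigation
outliers = [x for x in lead_times if not lcl <= x <= ucl]
```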

Development Time

Development time has reduced from a mean of 9 days to 3 days over the past year. There is a consistent downward trend. This portion of the value stream was directly under the team’s control and not subject to delays from 3rd parties or upstream or downstream parties. The major factor in reducing development time has been to limit work in process. The periods on the charts have been split from 2008 until our financial year end (April 2009), and from July 2009 until October 2009.

Dev Time Oct 09

Engineering Time

Engineering time has reduced from a mean of 11 days to 8 days over the past year. Once again there is a downward trend; however, there are more outliers that required investigation. Some of the outliers were proved to be special cause, but the majority were down to waiting for 3rd parties to complete their development and QA, something the team actively worked to reduce. The periods on the charts have been split from 2008 until our financial year end (April 2009), and from July 2009 until October 2009.

Engineering Time Oct 09

Throughput

We class throughput as the number of items released, and would expect an upward trend as the code base is decoupled, work items are broken into MMFs, and cycle time reduces. The chart below shows this upward trend in the number of releases per month. Note that we are subject to release freezes, hence the drop from December to February when the freezes were imposed.

Releases Oct 09

Bugs Per Week

We need to ensure that the reduction in lead and cycle times, and the increase in throughput, are not at the expense of quality. The chart below shows that the number of live bugs is within statistical control, and since July we are actually seeing a reduction.

Bugs per week Oct 09

 

There are now several follow-on posts to this original post:

http://leanandkanban.wordpress.com/2009/10/25/kanban-results-feedback/

http://leanandkanban.wordpress.com/2009/10/26/kanban-results-part-2/

 

Design Team Kanban Evolution 2

I blogged previously that our Design team were looking to modify their Kanban board.

Here is a picture of their new board. Note the following:

  • An Express lane for items that need to be expedited through the system
  • Swimlanes for each specialisation
  • Limiting work in progress by limiting Avatar tokens
  • Limiting queues by only allowing a certain number of slots for cards
  • Making blockers visible with pink Post-its stuck to the blocked item, and their own swimlane.

new design Kanban board

Another Kanban Board Example

As I have mentioned previously we have many teams using a Kanban System.

Here is an example of a Kanban board from one of our product teams. Note that they use the following:

  • A star above each lane where a date stamp is required, which will be used to produce metrics (lead time & cycle time) to help the team improve. This team literally uses a stamp to record the date on the card; it gives a nice satisfying thump each time you stamp a card.
  • A No Entry cross sign to limit work in progress. For example, you can only fit 2 cards in the Ready for Review stage, so there is no need to write the number 2 above that stage.
  • A Feature (MMF) input queue containing the next MMF ready to pull.
  • A Feature (MMF) currently in progress state. Our other teams are using swimlanes for this, but this is quite a small team, so it will typically only have 1 MMF in progress at any one time.
  • White cards to denote packaged releases, containing release number and other release info, with the related cards pinned behind.
  • Avatars depicting who is working on what item.

kanban board 2

Design Team Kanban Evolution

Our Design team had a meeting today to discuss how they can improve the value they get from their Kanban implementation. They identified the following 4 areas where they would like to see benefits or improvements:

  • Better visibility of upcoming work
  • Getting things out the door quicker
  • Reduction in overhead from the board for smaller tasks
  • Limits on the number of things that are in progress

They decided that they need to improve their tracking to help identify weaknesses against the areas above, and that they need to simplify the board.

Their current board and suggested changes are below.

design kanban board 1

kanban-board

Classes of Service and Policies

David Anderson writes

Traditional project management treats every item homogeneously. Kanban allows us to break the triple constraint and optimize delivery based on risk. A Kanban card can have a class of service, an indicator that speaks to the risk associated with that feature.

For each item we can control its priority and speed of flow via pull-based prioritisation decisions made according to the Cost of Delay.

Classes of service are typically defined based on business impact. Defining them is a context-specific activity that results in a set of service levels unique and differentiating to a line of business. Some businesses may choose to delineate several classes of service based on the cost of delay; up to 3 classifications could be quite reasonable, depending on the funds put at risk through delay.

Classes of service will be unique to your project; however, here are some examples:

  • Expedite (or “Silver Bullet”)
  • Does the feature need to be delivered by a certain date?
  • Standard e.g. First In First Out (FIFO).
  • Intangible
  • Is it a nice to have?
  • Is it chargeable?
  • “Google Time”

A fixed delivery date could be regulatory or seasonal (for example, Christmas); Goldratt talks about this as self-expediting. When looking at fixed delivery dates, pull the oldest item first. Keep a certain portion of standard class items on the board, as these can be cannon fodder for higher class items; they should have a low cost of delay. Intangible items may include production bug fix requests, usability improvements, branding and design changes, and the like. These may come with their own classes of service; for example, bug fixes may be given different classes of service based on severity. It’s recommended to have between 3 and 6 classes of service.

You can have Kanban limits not just on each state on the Kanban board but also on each of the classes of service, for example a limit on non-chargeable work.

Classes of service work through the use of colour to delineate the class, together with simple prioritisation policies that anyone can use to make a properly risk-aligned prioritisation decision, in the field, on any given day, often without any management intervention or supervision.

Policies can include: prioritisation, limits, time and risk constraints, order, colours and annotations. As a good guideline, you should look to have no more than 6 policies per class of service.

Policies will be unique to your project; however, here are some examples:

  • Are there any fixed delivery items that need to be pulled into the Kanban system now? If so, pull these in preference to other items, regardless of priority
  • If a request meets a certain criteria then it gets a faster class of service on the board
  • Expedite / fast track prerequisites
  • Only 1 expedite request on the whole board
  • At least 4 standard items
  • If total WIP is 12 and we have a policy that 50% will be high priority, then we want to ensure that 6 items are high priority.
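Policies like these can be encoded so that the "what gets pulled next" decision is mechanical. A minimal sketch (the class names, card representation, and precedence order are illustrative assumptions, not a prescribed implementation):

```python
def next_to_pull(backlog, board):
    """Pick the next card to pull, applying example class-of-service policies.

    Cards are (name, class_of_service) tuples; backlog is in age order,
    oldest first, so each scan naturally pulls the oldest matching item.
    """
    on_board = [cos for _, cos in board]

    # Only 1 expedite request allowed on the whole board
    if on_board.count("expedite") == 0:
        for card in backlog:
            if card[1] == "expedite":
                return card

    # Fixed delivery date items are pulled in preference, oldest first
    for card in backlog:
        if card[1] == "fixed_date":
            return card

    # Keep at least 4 standard items on the board
    if on_board.count("standard") < 4:
        for card in backlog:
            if card[1] == "standard":
                return card

    # Otherwise plain FIFO
    return backlog[0] if backlog else None
```

For example, with two standard items on the board, a fixed-date item in the backlog is pulled ahead of an intangible one; with no fixed-date item waiting, a standard item is pulled to keep the standard count up.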

Work items should flow through the system in a risk optimal fashion, and the result should be risk optimal releases of software, which maximise business value and minimise cost of delay penalties.

Tactical business pressure is dealt with via classes of service. If something is needed faster it is processed with a higher class of service. Ensuring that all the really important things are on time enables a far better conversation with the customer and maximises their satisfaction.

Label each of your backlog items with a class of service, either all upfront, in batches, or just in time.
You can even use classes of service to help manage shared or scarce resources.

Look to report by class of service:

  • Work in Process
  • Cumulative Flow
  • Report on age of items (that are on the board)
  • Due Date performance vs SLA as a %

Corey Ladas writes

If you look at the cycle time histogram in Benjamin Mitchell’s article, you see two (or possibly three) very distinct groups of work items. That’s exactly the kind of information I’d hope to extract out of such a study.

What do the work items in those clusters have in common? There are your service classes.

Implicit Limits

Our teams are using avatars to sign up to an item of work. If the item is paired, then 2 avatars are placed next to it.

This approach has the following benefits:

The Kanban principle of “make it visible”

  • Who is working on what is already visible, so it doesn’t need to be discussed at standups
  • Items with no one attached are questioned at the standup

Limiting WIP

  • A fixed number of avatars is available per person. This implicitly limits the items they can work on: when someone has attached all of their avatars to work items, they can’t take on any more!

avatar
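The avatar mechanism amounts to a fixed token pool per person. A minimal sketch (the names and the two-token limit are made up for illustration):

```python
class Avatars:
    """Each person gets a fixed number of avatar tokens; attaching one
    to a work item implicitly limits their personal WIP."""

    def __init__(self, people, tokens_per_person=2):
        self.free = {person: tokens_per_person for person in people}
        self.assigned = {}  # item -> list of people working on it

    def sign_up(self, person, item):
        if self.free[person] == 0:
            raise ValueError(f"{person} has no avatars left - finish something first")
        self.free[person] -= 1
        self.assigned.setdefault(item, []).append(person)

    def finish(self, item):
        # Completing an item returns everyone's avatars to their pool
        for person in self.assigned.pop(item, []):
            self.free[person] += 1

board = Avatars(["Ann", "Bob"])
board.sign_up("Ann", "story-1")
board.sign_up("Bob", "story-1")  # pairing: two avatars on one item
board.sign_up("Ann", "story-2")
# Ann now has no free avatars, so she cannot pull a third item
```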