Creating Useful Measures

Getting managers curious is one of the starting points of Systems Thinking. In our organisation I have presented internally on the subject many times (informally, during pre-arranged lunchtime sessions, and at senior-level briefings), sent round countless articles, organised discussions with organisations that have implemented and benefited from Systems Thinking, and arranged for external speakers to present on the subject: Barry Wrighton from Vanguard and Richard Durnell from Thoughtworks Australia.

Following on from these activities one of our senior managers recently proposed:

Although we have a number of metrics currently in place we should look to start again, with a greater focus on measuring improvements as experienced by people within the organisation or even by its customers. I suggest that managers and teams work together to consider what measures would help them understand their actions, understand the results, help them identify opportunities for improvement, and measure the impact of their actions. This is a different approach to our metrics thus far, which had been determined, analysed and owned by a fairly distant management group.

From curiosity to a normative loop experience*

Following this, we piloted the creation of new useful measures within one of our product delivery teams, who had been working together on a product for just under two years.

To start with, I did some digging into the current metrics in place. Most were either project- or code-centric:

  • Within program budget (Red, Amber, Green)
  • Within project budget (Red, Amber, Green)
  • On time (Red, Amber, Green)
  • Velocity
  • % unit test coverage (Target: >50)
  • Relational cohesion (Target: >1 and <5)
  • Afferent coupling at assembly level (Target: <15)
  • Afferent coupling at type level (Target: <5)
  • % methods with less than 10 lines of code (Target: <10)
  • LCOM Henderson-Sellers (LCOMHS) (Target: <0.8)
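
To give a flavour of what the more code-centric of these look like in practice, here is a minimal sketch of the Henderson-Sellers lack-of-cohesion calculation; the helper function and the example class are hypothetical, and in reality a static analysis tool such as NDepend works this out from the compiled code.

```python
# Minimal sketch of LCOM Henderson-Sellers: (M - sum(MF)/F) / (M - 1),
# where M = number of methods, F = number of instance fields, and MF = number
# of methods that use a given field. Values near 1 or above suggest poor cohesion.

def lcom_hs(field_access, method_count):
    """field_access maps each instance field name to the set of methods that use it."""
    field_count = len(field_access)
    if method_count <= 1 or field_count == 0:
        return 0.0
    avg_methods_per_field = sum(len(methods) for methods in field_access.values()) / field_count
    return (method_count - avg_methods_per_field) / (method_count - 1)

# A hypothetical class with 4 methods and 2 instance fields
accesses = {
    "price": {"total", "apply_discount"},
    "quantity": {"total"},
}
print(round(lcom_hs(accesses, method_count=4), 2))  # 0.83, which would breach the <0.8 target
```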

Although valuable, these measures have no real connection with operational improvement. What was needed was a completely new set of measures created from a quite different perspective: useful measures relating to what matters to our customers. These new metrics should tell everybody how well we are doing from the customer’s perspective.

Step 1 – Define the Purpose

To start, we need to understand and define the purpose from the customer’s perspective; clarity of purpose and measures that relate to purpose are essential to improvement.

To help us in defining purpose we:

  1. Talked to our customers!
  2. Studied demand
  3. Looked at the company values
  4. Talked to the product owner and stakeholders
  5. Looked at previous business cases and presentations
  6. Reviewed organisational business units’ goals

How do you study purpose? Go out into the system and study demand, what matters to your customer. Look at Value Demand vs Failure Demand, go and talk to your customers!

Jeremy Cox – Vanguard

With this information we then held a session with the team to define:

  1. Who are our customers?
  2. What is the purpose of our product or service from the customer perspective?

This particular team are regarded as highly successful, have worked together for quite some time, and have produced high-quality software, yet interestingly not many within the team had ever thought about either of these questions. There were a number of business cases in place and a number of metrics purporting to show they were successful, but each was from an internal perspective rather than a customer perspective.

If you find that despite lots of measures you don’t really know much about what matters to customers, then it can be a powerful starting place for change. Clarity of purpose and measures that relate to purpose are prerequisites to learning and improvement.
John Seddon

To define the purpose, the team brainstormed, wrote keywords on Post-it notes, grouped these into common themes, and from there formulated the purpose.

Once the purpose had been defined, we gathered feedback from various stakeholders and customers to ensure it matched their impressions.

Step 2 – Define Measures

As our senior manager had observed, measures are usually set by management who are often distant from the work. Our new measures should instead be defined by the whole team, the people actually doing the work.

Measures that are useful in respect of purpose should be in the hands of those doing the work. When people have clarity of purpose and measures in their hands that relate to that purpose, they understand what is happening where they work and are able to contribute more to improving it. This results in greater control and flexibility.

John Seddon

We held a second session where the team:

  1. Reviewed the current measures
  2. Defined new measures relating to the new purpose, for example:
    • Number of D****** orders
    • Number of T****** orders
    • Number of orders per cost
    • End to end time to fulfill customer demand
    • Exploitation of i****** c******
    (Note: some of the above have been obfuscated because they are commercially sensitive.)

  3. Defined useful internal improvement measures (a rough sketch of how some of these might be derived follows this list)
    • Money spent vs expected ROI
    • Lead Time (from idea to live)
    • Cycle Time (from the time the team starts work to ready for live)
    • # Live releases
    • # Failed releases
    • # Orders per month (throughput)
    • # Live defects per week (Failure demand)
    • # Support calls per week (Failure demand)
    • # Change requests (Failure demand)
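
As a rough illustration of how some of these internal measures might be derived from work-tracking data, here is a minimal sketch; the work-item fields, dates and demand categories are all made up for the example.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical work-item record; real data would come from the team's tracking tool.
@dataclass
class WorkItem:
    idea: date      # when the idea/request was raised
    started: date   # when the team started work on it
    ready: date     # when it was ready for live
    live: date      # when it actually went live
    demand: str     # "value" or "failure" (live defect, support call, change request)

items = [
    WorkItem(date(2011, 1, 3), date(2011, 1, 17), date(2011, 2, 2), date(2011, 2, 4), "value"),
    WorkItem(date(2011, 1, 10), date(2011, 1, 24), date(2011, 2, 9), date(2011, 2, 11), "failure"),
    WorkItem(date(2011, 2, 1), date(2011, 2, 7), date(2011, 2, 23), date(2011, 2, 25), "value"),
]

lead_times = [(i.live - i.idea).days for i in items]       # Lead Time: idea to live
cycle_times = [(i.ready - i.started).days for i in items]  # Cycle Time: work started to ready for live
failure_demand = sum(1 for i in items if i.demand == "failure")

print(f"Average lead time:  {sum(lead_times) / len(lead_times):.1f} days")
print(f"Average cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
print(f"Failure demand:     {failure_demand} of {len(items)} items")
```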

It was agreed that these measures would be plotted on control charts over time. Measures taken over time become more reliable and more predictable, and the impact of changes can be seen more clearly.

You can see an example of a control chart below. The chart shows end-to-end times (from a customer perspective) for a service. The data has been split after a process change, and again after an IT system was introduced. This uses real operational performance data, which is quite different from the normal IT measurements (on time and on budget), and lets us observe the real impact of change over time.
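
As a rough sketch of how such a chart can be produced, here is a minimal XmR (individuals) control chart; the end-to-end times are made up, and in practice each segment (before and after a change) would get its own mean and limits.

```python
import matplotlib.pyplot as plt

# Illustrative end-to-end times in days, not the real operational data
times = [12, 15, 11, 14, 18, 13, 16, 12, 17, 14, 13, 15]

mean = sum(times) / len(times)
# Average moving range between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(times, times[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Natural process limits for an XmR chart: mean +/- 2.66 * average moving range
upper = mean + 2.66 * avg_mr
lower = max(0.0, mean - 2.66 * avg_mr)

plt.plot(times, marker="o", label="End to end time (days)")
plt.axhline(mean, linestyle="--", label=f"Mean {mean:.1f}")
plt.axhline(upper, linestyle=":", label=f"Upper limit {upper:.1f}")
plt.axhline(lower, linestyle=":", label=f"Lower limit {lower:.1f}")
plt.xlabel("Completed customer request")
plt.ylabel("Days")
plt.legend()
plt.show()
```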

Step 3 – Implement

We are now on a real journey of continual improvement. The team have created measures relating to customer purpose, and these measures are in their hands. Any future work will be analysed against these measures and progress regularly studied. It has also been decided that, following the success of this pilot, all other teams should adopt the same approach.

It’s also worth noting that our new measures will, for the time being, co-exist with the existing measures. Over time we will retire the old measures as people realise they are becoming less and less useful. It’s important to note that managers at various levels are likely to cling to old targets and be perturbed by the thought of losing cherished measures. They need to experience the normative loop* and discover for themselves why the new measures are better for the customer and the organisation’s service.

Step 4 – Study

When setting measures you may not get it right first time, so review your data regularly: is Failure Demand dropping? Are end-to-end times reducing? Are your customers happier? Measures will evolve as you do.

Further information

A useful checklist from Vanguard:

  1. What is the purpose? (from the customer’s point of view)
  2. What are the current measures in use?
  3. Can you point to current measures of improvement?
  4. Do these current measures relate to purpose or do they encourage an internal perspective?
  5. How do current work measures impede workflow?
  6. What measures would be useful to assess achievement of a customer-focused purpose?
  7. How could these be used by the people who do the work?

Jeremy Cox of Vanguard ran a “making measures work for you” workshop at a recent Vanguard network day; I blogged about it here.

Mary Poppendieck and Tom Poppendieck also have an excerpt from their Leading Lean Software Development book on defining purpose.

I have also produced some results of internal improvement measures (plotted in control charts).

* Normative loop – people have to be able to “see” and “experience” for themselves

9 thoughts on “Creating Useful Measures”

    • Yes, using Vanguard’s Capchart tool. There are other tools on the market that can also generate SPC charts, and Vanguard also have a free web-based tool on their site.

  1. It’s nice to read something thoughtful and constructive from someone in the Systems Thinking movement (as opposed to those who just criticize and hurl insults).

    Absolutely, start with the customer and define your purpose. This is a core lean mindset and there’s helpful overlap in the lean approach and systems thinking.

  2. Excellent point of view. I recently worked with a “measurement guru” who kept asking “how are you going to measure that?” before we had even defined what we wanted to do or why.

    I think the idea of defining the customers and the purpose of the measures is a far better approach. I also really like the idea of asking questions about the measures currently in place.

    What I would add is really just a couple of questions:
    – What decisions are we going to make using these measures, and how will the decision makers be able to use the information?
    – What behaviors or decisions might the measures encourage that we don’t want? For example, measuring quality might tempt us to hold off deploying good-enough solutions in order to perfect them, or measuring velocity might encourage short cuts if not matched with a quality measure.
    – Following on from those two questions: what feedback do the decision makers have for us about improving the usefulness of the measures to them? Did the measures help us make good decisions? How do we mitigate the risk of behaviors or decisions based on these measures that are not actually aligned to our purpose?

  3. Nice to see that we share the same approach. I would like to know what the definition of velocity is in your measures; I see this term more and more, but everybody uses a different meaning.

    • By velocity I was referring specifically to “the number of story points a team completed in the last iteration”. But on reflection I was using the term loosely to refer to “the amount of work we got done in a specific period of time, which then predicts how much we can probably get done next time”.

      The reason it is useful on a project is that you can work out how long the project will take to finish everything in its backlog if it keeps going at the same pace. Oddly though, some teams treat it as a goal, reporting proudly that they are now working on more widgets each week than ever before … rather than using it as a way to plan future work.

      But as my grandmother would have said “I don’t care how much you are working on, I care how long I have to wait until I get what I want”.
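
      (For illustration, a tiny sketch of that forecasting use with made-up numbers; nothing here comes from a real backlog.)

      ```python
      # Illustrative only: forecasting from velocity, using made-up numbers
      backlog_points = 240              # story points remaining in the backlog
      recent_velocities = [28, 32, 30]  # points completed in the last three iterations

      average_velocity = sum(recent_velocities) / len(recent_velocities)
      iterations_remaining = backlog_points / average_velocity

      print(f"Average velocity: {average_velocity:.0f} points per iteration")
      print(f"About {iterations_remaining:.1f} iterations to clear the backlog at the current pace")
      ```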

  4. David,

    Excellent post. I appreciate the idea of using customer input and asking “why” before you start measuring.

    I am working in a large distributed software team where the software is our end product, so the software development process is, in effect, our “production” process.

    Do you have any suggestions for metrics that would help track and improve the software development process prior to launching to production? I’ve used defects per story point, but I question how meaningful any measure of defects really is when using Agile.

    • Good “internal” team metrics I have used are:

      Lead Time (in our case from request to delivery)
      Cycle Time (in our case from date work started to release ready)
      # change requests (although it was working software, it didn’t meet the recipient’s needs)
      # live defects

      However, all of these still won’t help a team understand whether what they have delivered is of real value to the recipient. This is why I feel we should understand the purpose from the customer perspective and define useful measures that relate to that purpose; any software released should then show evidence that we are helping to achieve that purpose. This can be difficult for customers we never see or get to talk to (e.g. website development), but I like A/B testing as an example of evidence-based results of our output in those cases.

  5. David,

    This post is another excellent example of you pushing the envelope when it comes to applying agile techniques in a measured, mature way.

    I’d like to see a little more of a concrete example of what ended up on your Post-it notes, and how you classified your work and applied it to different metrics.

    I think many of us from the agile purist world have a ton to learn from those such as yourself who are applying systems thinking to knowledge work; can’t wait to read more.
