Taking repetition to task

This originally appeared on my old blog in March 2010.

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then my observation is that ‘tasks’ are the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder: hasn’t anyone else noticed that counting the effort of completed tasks on burn-down/up charts runs counter to the principle that we measure progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than working software

Tasks are, however, very useful for teams at first. As with anything else we learn how to do, working it out on paper can help us until we can discard the paper and do the workings in our heads. What I’ve noticed, though, is that most teams I’ve worked with continue to write and estimate tasks long after the practice has stopped being useful or relevant to them.

For example, there comes a time for many teams when tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process, or are you just doing it because the process says you should?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient: meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria”, they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they describe how the user interacts with the product rather than what the user needs to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.
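
As a minimal sketch, those two example-style criteria could be expressed directly as executable checks. The Profile class, its validation rule and the pytest checks below are hypothetical, purely for illustration:

    import pytest


    class ValidationError(Exception):
        """Raised when a profile fails validation on save."""


    class Profile:
        """Hypothetical stand-in for a profile that requires a business sector."""

        def __init__(self, name, business_sector=""):
            self.name = name
            self.business_sector = business_sector
            self.saved = False

        def save(self):
            # "Business sector must be mandatory", expressed as behaviour:
            # saving with a blank business sector is rejected.
            if not self.business_sector.strip():
                raise ValidationError("business sector is mandatory")
            self.saved = True


    def test_should_create_a_new_profile():
        profile = Profile(name="Acme Ltd", business_sector="Retail")
        profile.save()
        assert profile.saved


    def test_profile_cannot_be_saved_with_blank_business_sector():
        profile = Profile(name="Acme Ltd", business_sector="")
        with pytest.raises(ValidationError):
            profile.save()

Each check that moves from failing to passing is another thin vertical slice of the story delivered, which is the kind of progress worth counting.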

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria and make sure they are more like examples and less like instructions. Once that’s achieved, consider slicing each criterion (or scenario) horizontally with tasks, rather than slicing the story. Pretty soon, you’ll find that you don’t need tasks anymore and you can simply measure progress in terms of the new capabilities you add to your product.

Updated 30-02-2010: I’ve inserted a new opening paragraph referencing an article on slicing user stories, to add some background to the vertical slicing metaphor. I’ve provided a link, but I’m not sure who first came up with the metaphor.

  • Michael Bolton

    Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

    It doesn't make sense to measure progress with passing tests, whether you call them checks or not. It might make sense to do that if the development of a software product were linear, or a set of pegs that you insert into a board. But it isn't.

    A passing test (and in particular, a passing check) tells you that a product is capable of producing a correct result in a specific, highly controlled set of conditions. It tells you that the product can do something. It doesn't tell you, or even warn you, of terrible problems in the product of which the check is unaware. For that, you need exploration. As J.B. Rainsberger said once, a green bar doesn't tell you you're done; it tells you that you're ready for a real tester to kick the snot out of it.

    It does make sense, though, to measure progress in terms of completed features, which I think is what you're saying, where “completed” means that the feature has been developed and checked and tested to the degree that a person has deemed it acceptable. I see great risk if the person has delegated that decision to checks alone, especially when the checks have been devised entirely in advance of the programming work on the feature.

    —Michael B.

  • Thanks for that, Michael.

    I agree that completed features are ultimately a far better measure of progress than just tests.

    My point, though, was only that it makes *more* sense to track passing tests than effort expended on 'tasks'.

    What I had in mind was the typical measure of progress: tasks on a task-board and effort expended on a burn-up/down chart… I think a step in the right direction would be to place less emphasis on those things and measure something that gives us a better indication of progress. I think passing tests are a better indication of progress than tasks, just as miles travelled/remaining on a journey are a better indication of progress than the amount of fuel burned/remaining. Useful as the latter is, it isn't an indication of progress.

    I wasn't saying it was the best and only way of measuring progress 🙂

    I've not worked on any projects where automated checks are implemented entirely in advance of programming the feature… if people are doing that, they're not doing xDD (TDD/ATDD/BDD). These are iterative and incremental learning processes and, in my experience, do not work well if you try to specify all the checks in advance.

    Thanks again for the comment.