
Scenario-Oriented vs. Rules-Oriented Acceptance Criteria

Acceptance Criteria, Scenarios and Acceptance Tests are, in my experience, often a source of confusion.

Such confusion results in questions like the one recently asked of Rachel Davies: “When to write story tests” (story tests are sometimes also known as “Acceptance Tests” or, in BDD parlance, “Scenarios”).

In her answer, Rachel highlighted that:

“…acceptance criteria and example scenarios are a bit like the chicken and the egg – it’s not always clear which comes first so iterate!”

In that article, Rachel distinguishes between acceptance criteria and example scenarios by reference to Liz Keogh’s blog post on the subject of “Acceptance Criteria vs. Scenarios”:

where she explains that acceptance criteria are general rules covering system behaviour from which executable examples (Scenarios) can be derived

Seeing the acceptance criteria as rules and the scenarios as something else is one way of looking at it. There’s another way too…

Instead of Acceptance Criteria that are rules and Acceptance Tests that are scenarios… I often find it useful to arrive at Acceptance Criteria that are scenarios, of which the Acceptance Tests are just a more detailed expression.

I.e. Scenario-Oriented Acceptance Criteria.

Expressing the Intent of the Story

Many teams I encounter, especially newer ones, place higher importance on the things we label “acceptance criteria” than on the things we label “acceptance tests” or “scenarios”.

By this I mean that such a team might ultimately determine whether the intent of the story was fulfilled by evaluating the product against these rules-oriented criteria. Worse still, I’ve observed that some teams also have:

  • The overhead of checking that these rules-oriented criteria are fulfilled in addition to the scenario-oriented acceptance tests
  • Teams working in silos, where BAs take sole responsibility for criteria and testers take sole responsibility for scenarios
  • A desire to have traceability from the acceptance tests back to the acceptance criteria
  • Rules expressed in two places – the criteria and the code
  • Criteria treated as the specification, rather than using specification by example

If we take the approach of not distinguishing between acceptance criteria and acceptance tests (i.e. see the acceptance tests as the acceptance criteria) then we can encourage teams away from these kinds of problems.

I’m not saying it solves the problem – it’s just easier, in my experience, to move the team towards collaboratively arriving at a shared understanding of the intent of the story this way.

To do this, we need to iteratively evolve our acceptance criteria from rules-oriented to scenario-oriented criteria.

Acceptance Criteria: From Rules-Oriented to Scenario-Oriented

Let’s say we had this user story:

As a snapshot photographer
I want some photo albums to be private
So that I have a backup of my personal photos online

And let’s say the product owner first expresses the Acceptance Criteria as rules:

  • Must have a way to set the privacy of photo albums
  • Private albums must be visible to me
  • Private albums must not be visible to others

We might discuss those to identify scenarios (as vertical slices through the above criteria):

  • Create new private album
  • Make public album private
  • Make private album public

Here, many teams would retain both acceptance criteria and scenarios. That’s ok, but I’d only do that if we collectively understood that the information from these two will come together in the acceptance tests… and that whatever we agree in those acceptance tests supersedes the previously discussed criteria.

This understanding tends to occur in highly collaborative teams. In newer teams, where disciplines (Business Analysts, Testers, Developers) are working more independently, this tends to be harder to achieve.

In those situations (and often even in highly collaborative teams), I’d continue the discussion, iterating over the rules-oriented criteria to arrive at scenario-oriented criteria – carrying over any information captured in the rules and discarding the original rules-oriented criteria as we go. This might leave me with the following scenarios:

  • Create new private album (visible to me, not to others)
  • Make public album private (visible to me, not to others)
  • Make private album public (visible to me and to others)

Which, with a subtle change to the wording, become the acceptance criteria. I.e. the story is complete when we:

  • Can create new private album (visible to me, not to others)
  • Can make public album private (visible to me, not to others)
  • Can make private album public (visible to me and to others)

Once the story is ‘in-play’ (e.g. during a time-box/iteration/sprint) I’d elaborate these one at a time, implementing just enough to get each one passing as I go. By doing this we might arrive at:

Scenario: Can create new private album
When I create a new private album called "Weekend in Brighton"
Then I should be able to see the album
And others should not be able to see it

Scenario: Can make public album private
Given there is a public album called "Friday Night"
When I make that album private
Then I should be able to see the album
And others should not be able to see it

Scenario: Can make private album public
Given there is a private album called "Friday Night"
When I make that album public
Then I should be able to see the album
And others should be able to see it
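
To give a flavour of the “just enough to get each one passing” step, here’s a minimal sketch of how the first scenario might be wired up as an executable acceptance test using Python’s behave library. The application object (context.app) and the user handles are hypothetical stand-ins, assumed to be created in behave’s environment.py – they aren’t part of the story itself:

# steps/private_albums_steps.py – a sketch, not a definitive implementation
from behave import when, then

@when('I create a new private album called "{name}"')
def create_private_album(context, name):
    # context.app and context.me are assumed to be set up in environment.py
    context.album = context.app.create_album(name, owner=context.me, private=True)

@then('I should be able to see the album')
def owner_can_see_album(context):
    assert context.album in context.app.albums_visible_to(context.me)

@then('others should not be able to see it')
def others_cannot_see_album(context):
    # behave treats the "And ..." step as a continuation of the Then step
    assert context.album not in context.app.albums_visible_to(context.other_user)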

Because these acceptance tests are an elaboration of the scenario-oriented acceptance criteria, there’s no implied need to also check the implementation against the original rules-oriented criteria.

Agreeing that this is how we’ll work encourages more collaborative working – at least between the Business Analysts and Testers.

These new scenario-oriented criteria become the only means of determining whether the intent of the story has been fulfilled, avoiding much of the confusion that can occur when we have rules-oriented acceptance criteria as well as separate acceptance-test scenarios.

In short, more often than not, I find these things much easier when the criteria and the scenarios are essentially one and the same.

Why not give it a try?

Old Favourite: Taking Repetition To Task

This originally appeared on my old blog on 16th March 2010…

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then my observation is that ‘tasks’ are the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder: hasn’t anyone else noticed that counting the effort of completed tasks on burn-down/up charts runs counter to the value of measuring progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than with working software

Tasks are, however, very useful for teams at first. Just like anything else we learn how to do, doing the workings on paper can often help us until we’re ready to discard the paper and do them in our heads. What I’ve noticed, though, is that most teams I’ve worked with continue to write and estimate tasks long after the practice has stopped being useful or relevant to them.

For example, there comes a time for many teams where tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process or are you just doing it because the process says you should do it?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient: meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria” they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they are talking about how the user interacts rather than what they need to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.
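
As a rough sketch (my own illustration, not Jason’s), the second criterion above is already close to an executable example. Here’s how it might look as an automated check – Profile, ValidationError and profile_service are invented names standing in for whatever the real product provides:

import pytest

# Hypothetical imports – stand-ins for the real product code
from myapp.profiles import Profile, ValidationError, profile_service

def test_profile_cannot_be_saved_with_blank_business_sector():
    profile = Profile(name="Example Ltd", business_sector="")
    # Saving should be rejected because "business sector" is mandatory
    with pytest.raises(ValidationError):
        profile_service.save(profile)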

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria and make sure they are more like examples and less like instructions. Once that’s achieved, consider slicing each criterion (or scenario) horizontally with the tasks, rather than slicing the story. Pretty soon, you’ll find that you don’t need tasks any more and you can simply measure progress in terms of the new capabilities you add to your product.


Boredom: a Testing Smell?

This first appeared on my old blog in June 2005.

Somebody I know who was doing some (unscripted) testing spoke of being bored the other day… I have always found boredom to be a sign that something is wrong.

I believe, as Kaner et al. have said, that testing is a brain-engaged activity. If that is the case, why would I ever be bored?

Borrowing the Smell metaphor… I would say that boredom is a Bad Testing Smell. If it isn’t a bad smell in itself, it is certainly a whiff of an underlying one.

On the rare occasion that I find I am bored, I’d ask myself:

  1. Am I testing this area more than I need to? (if so, why?)
  2. Am I losing concentration because my brain is tired? Do I need a break?
  3. Is what I am doing so repetitive that perhaps it should be automated?
  4. Is there a better way of testing this feature?

If I answer “yes” to any of these, I know that perhaps I need to do something differently. Whether that is feasible in a given context might be a different story.

Update 9th July 2010: This applies to scripted testing as much as it does to exploratory testing. Generally, I’ve found that boredom during scripted testing arises because we’re asking a human to do something we should be asking a computer to do… The main barrier, in my experience, is that automated tests are harder to write than manual scripted tests – so I look for ways to make automated tests as easy to write as manual scripted ones. Usually, I can achieve this with a little effort, and I’ve found it makes a huge difference.
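
As a hypothetical illustration of what I mean (all names here are invented, and a real suite might sit on a driver such as Selenium WebDriver): hiding the low-level automation behind intention-revealing helpers lets an automated check read almost like its manual scripted equivalent.

# A sketch of intention-revealing helpers over a low-level driver.
# "browser" stands in for a real driver object supplied by the test framework;
# select_option, click and text_of are invented driver calls for this sketch.

class ProfilePage:
    def __init__(self, browser):
        self.browser = browser

    def select_business_sector(self, sector):
        self.browser.select_option("business-sector", sector)

    def save(self):
        self.browser.click("save-profile")

    def error_message(self):
        return self.browser.text_of("error-panel")

def test_blank_business_sector_is_rejected(browser):
    # Reads much like the manual script step:
    # "Leave business sector blank, save, expect an error"
    page = ProfilePage(browser)
    page.select_business_sector("")
    page.save()
    assert "Business sector is required" in page.error_message()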