Old Favourites – antonymarcano.com/blog – Thinking through writing… on innovation, business, technology and more

Old Favourite: Radio by Example (26th February 2014)

This first appeared on my old blog in 2008, under the title "Specification by Example". I've made a couple of minor refinements to the text and I've added the image below from the 2009 "GOOS" book.

Internet radio has come a long way since it first began. Long gone are the days when you had to seek out an online station that streamed a preselected playlist of mostly music you liked – or several stations, for those with more eclectic taste. Instead, services like Pandora.com and Last.fm create a personalised radio station that matches each user's own taste.

These personalised internet-radio stations are far more sophisticated than simply asking you which genres and styles you like. In fact, when they first started you didn't specify that at all. When Last.fm began, you provided an example of a song that you liked. Software analysed that song, finding common aspects of the music's 'DNA' – genre, tempo, how melodic it is and countless other sound characteristics held in their database. From this, it created a user-specific internet radio station matching the user's taste. As you listened, you gave feedback saying which songs you loved and which ones you hated. Your future play-lists were refined with each piece of feedback you provided – feedback that is, in itself, more examples of songs that do and don't match your taste. As a result, you'd find yourself listening to songs you'd not otherwise have discovered.
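Purely as an illustration of that 'radio by example' idea – and emphatically not how Last.fm or Pandora actually work – here's a toy sketch in Python. Every name and number in it is invented: songs are reduced to a handful of made-up 'DNA' values, and the next track is chosen by similarity to the examples you've loved and distance from the ones you've hated.

```python
# Toy "radio by example": all names and numbers here are made up.
from math import dist

# Hypothetical song 'DNA': (tempo, energy, melodicness), normalised 0..1.
CATALOGUE = {
    "Song A": (0.8, 0.9, 0.3),
    "Song B": (0.4, 0.3, 0.9),
    "Song C": (0.7, 0.8, 0.4),
    "Song D": (0.3, 0.2, 0.8),
}

def next_track(liked, disliked, already_played):
    """Pick the unplayed song nearest the liked examples and
    furthest from the disliked ones (lower score is better)."""
    def score(features):
        pull = min(dist(features, CATALOGUE[s]) for s in liked)
        push = min(dist(features, CATALOGUE[s]) for s in disliked) if disliked else 0.0
        return pull - push

    candidates = [s for s in CATALOGUE if s not in already_played]
    return min(candidates, key=lambda s: score(CATALOGUE[s]))

# Seed the 'station' with one example song...
print(next_track(liked={"Song A"}, disliked=set(), already_played={"Song A"}))
# ...then refine the playlist with feedback: more examples of what you do and don't like.
print(next_track(liked={"Song A"}, disliked={"Song C"}, already_played={"Song A", "Song C"}))
```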

That is Specification by Example! The product (in the above, your personal playlist) evolves by inferring the desired rules and characteristics from concrete examples.

And, as several people have been saying for years… Test-Driven Development is just that – Specification by Example – applied to software development. Instead of examples of songs, we provide examples of what we'd like the product, or part thereof, to do. These examples take the form of tests. It just so happens that the evolution of the software product is performed by humans, unlike the internet playlist, which is evolved by a machine.

Write a failing acceptance test, then cycle through writing a failing unit test, making it pass and refactoring, repeating until the acceptance test passes.
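For anyone who hasn't seen that double loop written down, here's a minimal sketch using pytest. The Shop class and its greeting feature are entirely hypothetical, and the listing shows the end state – in real TDD the tests come first and fail before any production code exists:

```python
# A hypothetical walk around the double loop, runnable with pytest.

# Outer loop: a failing acceptance test describing what the product should do.
def test_acceptance_returning_customer_is_greeted_by_name():
    shop = Shop()
    shop.register_customer("Alice")
    assert shop.greeting_for("Alice") == "Welcome back, Alice!"

# Inner loop: smaller failing unit tests drive out the pieces needed to make
# the acceptance test pass, refactoring after each green bar.
def test_unit_unknown_customer_gets_generic_greeting():
    assert Shop().greeting_for("Bob") == "Welcome!"

# Just enough production code to make the current examples pass.
class Shop:
    def __init__(self):
        self._customers = set()

    def register_customer(self, name):
        self._customers.add(name)

    def greeting_for(self, name):
        return f"Welcome back, {name}!" if name in self._customers else "Welcome!"
```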

Old Favourite – More Sharks and Delaying Critical Mass (7th June 2011)

This article originally featured on my old blog on 19th January 2010.

In a previous post I talked about Critical Mass of software. I showed how an ever-increasing cost of change resulted in it becoming more economical to completely rewrite the system than to enhance and maintain the original.

I explained how this could be avoided by using practices that sustain a consistent and flat cost of change. I also mentioned that you could defer reaching critical mass. Some teams find it difficult to get the time to do this because “the business” always has “more important” or “higher-value” things on their backlog.

What are the implications of reaching critical mass? Well, depending on what the software does and whether the rewritten version still has to do all of those things, it could cost millions… or more.

Presented with the situation that the product could reach critical mass within a year – costing millions to replace – do you think “the business” would start to think it is worth investing in reducing the cost of change? Obviously, I would advise teams to avoid this situation in the first place:

The reality for many teams I've encountered is that they don't feel empowered to push back on the business's demands for the next feature in half the time it takes to do it properly. If we could present the impact of that choice – shortening the time to Critical Mass and bringing about costs to replace the software far in excess of the value gained by delivering that feature a month early – then perhaps the business would be better informed.

A big visible chart on the wall, showing the estimated point at which Critical Mass will be reached, could play a big role in getting the business more interested in sustainable change, or at least inspire a conversation on the subject.

Convincing "the business" to invest in some remedial refactoring – or to allow, say, three times as long for each new feature so that there is room for remedial refactoring as well as refactoring-as-we-go – might be easier if we could represent this idea visually:

The problem with this idea is: how do we credibly determine when Critical Mass will be reached? Many experienced practitioners could probably estimate it reasonably accurately on gut feel alone, but that would be torn to pieces by many product managers. I haven't solved this problem yet, because doing so would require a mathematical model backed by empirical data.

Perhaps someone has already done this, or perhaps someone will take inspiration from the ideas in these blog posts and find a solution before I do – maybe it's even an idea for a future PhD I might undertake. If someone else does the work first, they're welcome to (with due attribution where applicable, of course 😉).

I think it is possible, however, to come up with a simple model based on the average complexity per unit of value (assuming that the team is estimating value and complexity). Trending this, and keeping an estimate of a complete re-write up to date, might allow the simple charts above to be maintained alongside release-level burn-down charts.
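To make that slightly more concrete, here is a rough sketch of the kind of model I have in mind, with entirely made-up numbers – a naive linear trend in effort-per-point, projected forward until the cost of a release's worth of change overtakes a rewrite estimate. A credible version would need the empirical data mentioned above:

```python
# Hypothetical sketch: trend the average effort per point of value delivered
# each release and project when a release's worth of change overtakes a rewrite.
effort_per_point = [1.0, 1.2, 1.5, 1.9, 2.4, 3.0]  # invented historical trend
rewrite_estimate = 40.0                             # invented, in the same effort units
points_per_release = 10

# Naive linear fit of how effort-per-point grows per release.
n = len(effort_per_point)
growth = (effort_per_point[-1] - effort_per_point[0]) / (n - 1)

release = n
cost_of_release = effort_per_point[-1] * points_per_release
while cost_of_release < rewrite_estimate:
    release += 1
    cost_of_release = (effort_per_point[-1] + growth * (release - n)) * points_per_release

print(f"On this trend, Critical Mass is projected around release {release}")
```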

These ideas are still in their infancy for me. Has this problem been solved already? If not, I encourage others to explore my hypothesis and help take it from just an interesting idea to something more useful.

Old Favourite – Sharks, Debts, Critical Mass and other reasons to Sustain Quality (7th June 2011)

This article originally featured on my old blog on 18th January 2010.

A while back I tweeted about critical mass of software:

Critical Mass of Code – past which the changeability of the code is infeasible, requiring that it be completely rewritten.

An elaboration of this might be:

Critical Mass of Software: the state of a software system when the cost of changing it (enhancement or correcting defects) is less economical than re-writing it.

This graph illustrates a hypothetical project where the cost of change increases over time (the shape of which reminds me of a thresher shark):

Note: The cost of a rewrite gradually grows at first, to account for the delay between starting the re-write and achieving the same amount of functionality as the original version of the software at the same point in time. The gap between the cost-of-change and the cost-of-rewrite begins to decrease as the time it takes to implement each feature of comparable benefit in the original version grows.

It was just one of those thoughts that popped into my head. I soon discovered that critical mass of software is not a new idea.

One opportunity that I feel organisations miss out on is that this can be used as a financial justification for repaying technical debt, thus delaying the point at which the software reaches its Critical Mass – or avoiding it altogether.

Maintaining quality combined with practices that keep the effort involved in completing each feature from growing over time can defer critical mass indefinitely:

Note: The cost of a rewrite is slightly less at first assuming that we are doing a little extra work to ensure that we get the infrastructure we need up and running to support sustainable change (e.g. continuous integration etc.)

This flat cost of change curve is one of the motivations of many agile practices and software craftsmanship techniques, such as continuous integration, test-driven-development, behaviour driven development and the continuous refactoring inherent in the latter two. A flat cost-of-change trend is good for the business, as they have more predictable expenditure.

The behaviour I’ve noticed with some teams is to knowingly accrue technical debt to shorten time to a near-future delivery in order to capitalise on an opportunity and then pay it back soon after:

This certainly defers reaching critical mass but it doesn’t prevent it.

Unfortunately, I see more teams taking on technical debt and not repaying it (for various reasons, often blamed on “the business” putting pressure on delivery dates).

If we had a way of estimating the point at which we’d reach critical mass, we’d be able to compare the benefit of repaying technical debt now in order to avoid or delay the cost of a rewrite… or even better, demonstrate the value of achieving a flatter cost of change.
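As a purely hypothetical illustration of the kind of comparison I mean – every number below is invented – a back-of-an-envelope model might look something like this:

```python
# Hypothetical comparison: repay technical debt now and keep a flat cost of
# change, versus letting the cost of change compound until a rewrite is forced.
releases_ahead = 10
flat_cost = 10.0           # cost per release once the debt is repaid (invented)
repay_debt_now = 30.0      # one-off remedial refactoring effort (invented)
growth_per_release = 0.25  # compounding growth in the cost of change (invented)
rewrite_cost = 50.0        # estimated cost of a complete rewrite (invented)

# Option A: repay the debt now, then enjoy a flat cost of change.
option_a = repay_debt_now + flat_cost * releases_ahead

# Option B: defer repayment; each release costs more until Critical Mass forces a rewrite.
option_b = 0.0
cost = flat_cost
for _ in range(releases_ahead):
    if cost >= rewrite_cost:   # Critical Mass: changing it now costs more than rewriting it
        option_b += rewrite_cost
        cost = flat_cost       # assume the rewrite restores a flat cost of change
    option_b += cost
    cost *= 1 + growth_per_release

print(f"Repay now: {option_a:.0f} units of effort; defer and rewrite: {option_b:.0f} units")
```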

I'm not sure that we have the data or the means of predicting it, but perhaps these illustrations of a hypothetical scenario will help you explain why it matters to sustain constant quality, to automate repeatable tests (or checks, as some prefer to call them) wherever technically possible and, on those occasions when we do make a conscious choice to compromise quality to capitalise on an opportunity, to repay the debt soon after.

One company I know reached this point a couple of years ago and embarked on a department-wide transformation to grow their skills in agile methods, so that they could not only re-write the software but also avoid ever reaching critical mass again. A bold move, but one that paid off within a year. Even so, they'll need to avoid complacency as time goes on and not forget the lessons of the past. Let's hope they do.

Old Favourite – Developer Race-Tech: Continuous Testing (6th June 2011)

The original version of this article appeared on my old blog on 28th April 2010. This version has had some edits…

Gearboxes in competitive motor racing are designed to shift as fast as possible. A competitive race-car has computer-controlled, hydraulically activated gear shifts that change up or down faster than you can blink (literally)! In Formula One, each shift is so fast that the gearbox systems have been dubbed "seamless shift" gearboxes, changing gear in around two to four hundredths of a second (0.02–0.04s). Compare that to the circa one-to-two-second gear-shift a competent driver takes to manually de-clutch, change gear and re-clutch on a road car. Even automatic gearboxes on road cars can't keep pace with the rapid gear changes a race car delivers.

Competitive Edge

The race car is only saving fractions of a second with each accelerated up-shift, but the race-driver changes gear so often per lap and there are so many laps in a race that these fractions of a second can add up to a vital and significant lead over their competitors.[1]

Set aside, for a moment, the lap-time improvement that comes from spending at least half a second more on the power per up-shift than with a manual box… The modern F1 driver also has much more to do in the cockpit than the comparative tedium of changing gear [2].

Developer race-tech

Now, in motor racing, such gearboxes are taken for granted. No serious Formula One racing team would put their car on the track without one.

In a world where a lot of software development is done for companies competing in the economic equivalent of a race, it seems that development teams aren't taking full advantage of their own metaphorical millisecond gear-shifts.

There are tools that are a developer's equivalent of a racing gearbox. The simplest example is background compilation. Another is refactoring tools that make widespread changes with a simple key-stroke: what once took minutes or hours with global find & replace, or even scripting, we can now complete in seconds. IDEs that automatically complete class and method names are yet another example. These are tools many of us take for granted, just as the racing driver takes the automated gear-shift for granted.

Falling behind

But there are other places where we are still not taking full advantage of the technology. I still encounter surprisingly few people using continuous testing tools, where tests are run automatically in the background every time a file is saved – and not just the test I'm working on, but all my tests. There are a few tools available for this, including Infinitest and ZenTest.

Despite the availability of such tools, I still see many developers using keyboard shortcuts to execute tests. Or, worse, reaching for the mouse, right-clicking, navigating another menu level down and running the tests with a left-click – or, worse still, forgetting to run them altogether. I want rapid feedback with minimal friction in obtaining it. Much like background compilation, unit tests should just run, and run often, alerting us when a problem occurs. The easier they are to run, the more likely we are to run them… and how much easier could it be than it happening automagically each time we save a file?
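To give a flavour of what 'automagically' means – and this is just a bare-bones sketch, not how Infinitest or ZenTest are implemented – at its crudest a continuous test runner can be little more than a loop that notices saved files and re-runs the suite:

```python
# Bare-bones stand-in for a continuous testing tool: poll for changed source
# files and re-run the whole suite whenever anything is saved.
# (Real tools are much smarter about selecting only the affected tests.)
import subprocess
import time
from pathlib import Path

WATCHED = Path("src")                       # hypothetical source directory
TEST_COMMAND = ["python", "-m", "pytest", "-q"]

def snapshot():
    return {p: p.stat().st_mtime for p in WATCHED.rglob("*.py")}

def watch(poll_seconds=1.0):
    seen = snapshot()
    while True:
        time.sleep(poll_seconds)
        current = snapshot()
        if current != seen:                 # something was saved, added or deleted
            seen = current
            subprocess.run(TEST_COMMAND)    # feedback arrives without being asked for

if __name__ == "__main__":
    watch()
```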

Need for Speed

Try it for a while and, like me, you may find that even keyboard-shortcut-executed tests start to feel like being stuck in traffic rather than in a competitive race to the finish… Or maybe it's just me and my need for speed.

 

The video is of me on the Silverstone Southern Circuit (part of the F1 GP circuit) taking my UK National Class B Racing Licence Test in a Nissan R35 GTR – approx. 480+bhp, 0–62mph (0–100km/h) in 3.5 seconds – using its double-clutch, paddle-operated gear-shift, which provides gear changes in about 0.2 of a second. I passed… despite being very cautious on the brakes… and was noticeably quicker than the other person doing their test, which you see towards the end as I pass them 🙂

[1] At the Monaco GP, an F1 driver will change gear around 50-60 times in one lap. Over 78 laps, this adds up to as many as 4680 gear changes over the whole race (up and down-shifts).

[2] The comparative tedium of de-clutching, shifting and re-engaging the clutch is something that a race-driver should not need to worry about as they approach 200mph while adjusting the brake-balance lever, switching to a new fuel-air mixture setting, pressing the KERS boost button and/or deactivating the Drag Reduction System.

 

Old Favourite: Feature Injection User Stories on a Business Value Theme (24th March 2011)

This originally appeared on my old blog in May 2010.

Feature Injection, an approach to Agile Business Analysis created by Chris Matts, is a much misunderstood thing. It is a way of combining several techniques to understand just enough of a business problem to start expressing solutions to it. It provides specific techniques to incrementally and iteratively comprehend each of the following:

  • The business value sought (the why)
  • The problem domain (what specifically needs solving to deliver that value)
  • The resulting roles, incentives and product capabilities (the solution)

Basically, it helps us to evolve everyone’s understanding of the business-need as we (by other means) also evolve the implementation of the product.

Now, before you say "it's nothing new", Chris Matts will be the first to agree. He's taken his experience of various techniques, combined several of them that often work well together, evolved that into an approach to agile business analysis and given it a name.

I wish I could explain all of this now, but we’ll have to save that for another day. This post is intended to try to explain the relationship between Feature Injection and User Stories.

I’ve seen several examples of user stories taking some inspiration from Feature Injection, however, I’m not sure any of them do Feature Injection enough justice.

A common story

One of the problems with how User Stories are often applied is that people focus on the role, then the capability, and only then try to work out what the benefit is. This is quite natural, as Udi Dahan explains nicely:

Users ultimately dictate solutions to us, as a delta from the previous set of solutions we've delivered them. That's just human psychology – writer's block when looking at a blank page, as compared to the ease with which we provide 'constructive criticism' on somebody else's work.

This is something that other aspects of Feature Injection actually help you to solve… but again, today I’m just focusing on user stories…

The tendency of people to dictate solutions, rather than describe the problem that needs solving, has led some to emphasise that we should put the benefit of the story first. For example, let's say a fictional printer manufacturer consistently entices 3% of everyone they e-mail, reminding them to check their ink levels, to purchase print consumables. In this situation, some might illustrate the point by taking a story like this…


As the PrintCo marketing manager
I want customers to register their e-mail addresses
So that we increase the sales of our print consumables

And changing it to this:


In order to increase the number of sales of our print consumables
As a marketing manager
I want customers to register their e-mail addresses

The intent behind this shuffling around is to get people to think about the problem in this order: the business value first, then the stakeholder, then what the stakeholder thinks will deliver the value.

But, the story is talking about a stakeholder… In my experience, this doesn’t get the best value from the user story approach.

Stakeholder ‘stories’ or User Stories?

I’ve found that user stories are most useful when communicating to the team if they encourage a conversation around who the user is, what capability the user needs and why it’s important to the user (could that be why they are called “user stories”?). This helps us to understand what user experience they need and what capability will make that possible.

This template, brought to us by Rachel Davies and Tim McKinnon’s 2001 XP Day session “Tuning XP”, helps us with that:


As <some role>
I want <some capability>
So that <some benefit>

If we discuss this solely from the business stakeholder's point of view, the team doesn't get as much understanding of what is needed by the user or why the user will need this capability. If the capability summarises what the product will enable or do for the user, it's a lot easier to see what needs to change in the product. The benefit should be the answer to "what's in it for the user?" or "why would they want the product to do that?" (exploring the persona of the user is helpful in finding this out).

So, taking the earlier PrintCo example, an alternative way of writing the user story would be like this:


As a PrintCo Customer
I want to be prompted to share my e-mail address with PrintCo
So that I can avoid those times when I need to print something but the ink is empty

Ah… but now we’re back where we started – where’s the business value? How do we get people to think about the business value first and then identify the roles and capabilities?

Let’s start again

Ideally, we want the benefit of driving the features from the business value and the increased understanding of what we’re implementing by understanding the user in our conversations around a user story.

 

To get to the right place, we need to start again with our example. Imagine that, instead of the above, we’d taken a different approach.

Recall that the business value or goal was:

Increase PrintCo printer cartridge sales by increasing our customer mailing list

Then, we could consider what behaviour we want to encourage and for whom. For example, we want the Customer to share their e-mail address with us online.

So, what will encourage that behaviour… What’s in it for our users? Into our business value statement, we “inject” the beginnings of a capability (or feature) that will help us deliver the value:


In order that PrintCo printer cartridge sales go up by growing the mailing list:
As a PrintCo Customer
I want <something>
So that an e-mail reminder helps me to avoid those times when I need to print something but the ink is empty

Now we’re ready to talk about the special <something> that we don’t currently have that will give the user the benefit they want:


In order that PrintCo printer cartridge sales go up by growing the mailing list:
As a PrintCo Customer
I want to be prompted for my e-mail address
So that an e-mail reminder can help me avoid those times when I need to print something but the ink is empty

We also want the Customer Services Rep to be encouraged to ask customers for their e-mail address when dealing with customers over the phone. So, we inject some more features:

In order that PrintCo printer cartridge sales go up by growing the mailing list:
  As a PrintCo Customer
  I want to be prompted for my e-mail address
  So that an e-mail reminder can help me avoid those times when
    I need to print something but the ink is empty

  As a PrintCo Customer Services Rep
  I need <something>
  So that I increase the points I get for e-mail captures, improving my
    position on the high-scores display

  As a PrintCo Customer Services Rep
  I need <something>
  So that I can see that the points I earn for e-mail captures affect my
    position on the high-scores display

And now we want to know what that special something is going to be for our Customer Services Rep:

In order that PrintCo printer cartridge sales go up by growing the mailing list:
  As a PrintCo Customer
  I want to be prompted for my e-mail address
  So that an e-mail reminder can help me avoid those times when I need
    to print something but the ink is empty

  As a PrintCo Customer Services Rep
  I need to be prompted to ask for the customer’s e-mail address
  So that I increase the points I get for e-mail captures, improving my
    position on the high-scores display

  As a PrintCo Customer Services Rep
  I need e-mail captures shown as one of the columns on the high-scores display
  So that I can see that the points I earn for e-mail captures affect my
    position on the high-scores display

And, if it is important that you capture the key stakeholder:


In order that the Marketing Manager grows PrintCo printer cartridge sales by growing the mailing list…

Or, you don't even need to use 'In order to'… you can word it any way you like:


Goal: the Marketing Manager wants to grow PrintCo printer cartridge sales by growing the mailing list…

The most important thing here is not the wording. It's that we find a way of keeping the business value in the conversation and, when we're talking about the capabilities, we focus on the user. What I've described is just one way of achieving that.

It’s like a “theme” for the stories

Themes have been suggested by many as a way of grouping user stories. In this case, the ‘In order to…’ can be thought of as a theme for the stories – i.e. the theme is described in terms of business value. Stories, described in terms of the user capability and incentive/context, are injected into the “theme” as we identify roles and capabilities that deliver the business value.

Notice that I thought about the "As a…" and "So that…" aspects of the user story first and then summarised the capability, yet the order in which I expressed them remained as normal. As Rachel Davies once highlighted to me, just because we think of these things in one order, that may not be the best order for a discussion with the people who will implement it.

Another way of putting this is that the “As a… I want… So that…” is optimised for the conversation that helps the product owner(s) communicate what is required to the rest of the team, not for the order in which we generally discover the information.

In Summary

As you can see, Feature Injection doesn't encourage you to simply move the "So that" to the beginning and reword it with "In order to". I can see why people do that: one possible reason is that the stakeholder intent has been mashed into a user story, which loses the purpose of a user story. I can also see why that happens – there wasn't an obvious place to think or talk about stakeholder intent. Business-value-focused themes give us that.

I hope this helps clarify a little about Feature Injection and how it relates to user stories… but remember, this is only the tip of the Feature Injection iceberg. Actually it’s just one of the outcomes of Feature Injection. Watch this space for more.

Old Favourite: Adaptive Budgets? "Pull" the other one! (13th December 2010)

This was originally posted on my old blog on 10th April 2010.

Recently, I wrote about my views on using and estimating with task-cards. I highlighted that tracking progress with burn-up/down charts showing effort completed/remaining is not a true measure of progress, especially if we subscribe to the idea that we measure progress with working-software.

I also highlighted how tasks “horizontally slice” a “vertically sliced” story.

This inspired an article on infoq by Mark Levison. Since its publication, there have been several comments.

There are some specific points I’ll be answering on infoq, but some general points are more easily answered here…

True measures of progress
One of the key points I was trying to make in my first post (linked to above) is that if “working software” is our measure of progress then tracking task completion is not consistent with that.

One of the problems task-hours are trying to solve is to provide a visible indication of progress. It’s also something that many people find easier to understand. The idea of relative-points estimation seems to baffle many people when they first encounter it.

In my original post (linked above) I referenced an approach, blogged about by Jason Gorman, outlining a solution to providing a visible indication of progress that is compatible with the idea of measuring progress with working software. It also works more seamlessly with the solution of measuring throughput trends of the team (e.g. with story points) so that the team can estimate how much can be done in the next iteration.

Value Points & Task Points
Value points were mentioned in the infoq comments. Value points are solving a different problem and don’t help the team estimate what can be done. The value of a story isn’t an indication of its complexity. Value, combined with complexity points, can help determine priorities. Arlo Belshey explains some interesting views on this.

Prioritisation is, in my view, one of the few reasons to place some sort of estimate on value and complexity. Unfortunately, such estimates often seem to be used to establish a contract with the team for when something will be done, and to apply pressure when the estimates turn out to be inaccurate.

Non-estimating pull systems
My preference is to use such estimates only for prioritisation… after that, things just take as long as they take. There is some benefit in tracking accuracy trends so that we can learn from them; however, spending too much time on these things simply slows down the process of getting things done. Interestingly, I have not yet seen the business be quite so passionate about evaluating whether their value estimates were accurate once the product makes it into the real world. Funny, that.

Instead, I prefer to leave estimation behind once we've prioritised things, and to pull stories or customer-valued work items through the system without using previous estimates to predict their completion date. Given the way most budgets are determined and allocated, this idea isn't compatible with how much of the business world works. For pull systems that eliminate estimation as a means of predicting the future (such as Kanban) to work, we need a more adaptive approach to budgetary spend. Business needs to find a way to adapt budget allocation more frequently – perhaps as frequently as monthly, perhaps more frequently still. The business now needs to look at how it can respond to change rather than focus on following a budget-plan.

Business-world: now it’s your turn
Budget holders now need to be more agile. We, those who evolve the implementation of the ideas of the business, have responded to their demands to be able to respond to rapidly changing markets and provide that competitive edge. For the more mature implementation teams, the hindrance now is no longer how we make the products, but the business culture of inflexible and predictive budget allocation.

Old Favourite: Taking Repetition To Task (13th December 2010)

This originally appeared on my old blog on 16th March 2010…

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user-stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then my observation is that 'tasks' are the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder, hasn’t anyone else noticed that the idea of counting the effort of completed tasks on burn-down/up charts is counter to the value that we measure progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks, because:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than working software

Tasks are, however, very useful for teams at first. Just like anything else we learn how to do, working it out on paper can help us until we're ready to discard the paper and do the workings in our heads. What I've noticed, though, is that most teams I've worked with continue to write and estimate tasks long after the practice is useful or relevant to them.

For example, there comes a time for many teams where tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process or are you just doing it because the process says you should do it?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient, meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria” they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they are talking about how the user interacts rather than what they need to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.
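To make the distinction concrete, those two example-style criteria might end up as executable checks along these lines (the Profile class and its fields are hypothetical, invented purely for illustration):

```python
# The two example-style criteria above, expressed as executable checks (pytest).
# Profile and its fields are hypothetical; only the shape of the examples matters.
import pytest

class Profile:
    def __init__(self, name, business_sector):
        self.name = name
        self.business_sector = business_sector

    def save(self, repository):
        if not self.business_sector:
            raise ValueError("business sector is mandatory")
        repository.append(self)

def test_creates_a_new_profile():
    saved_profiles = []
    Profile("Acme Ltd", business_sector="Manufacturing").save(saved_profiles)
    assert len(saved_profiles) == 1

def test_profile_cannot_be_saved_with_blank_business_sector():
    with pytest.raises(ValueError):
        Profile("Acme Ltd", business_sector="").save([])
```

As each of these examples goes from failing to passing, the product has gained a new capability – which is exactly the kind of progress worth measuring.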

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria and make sure they are more like examples and less like instructions. Once that's achieved, consider slicing each criterion (or scenario) horizontally with the tasks, rather than slicing the story. Pretty soon, you'll find that you don't need tasks any more and can simply measure progress in terms of the new capabilities you add to your product.

