Category Archives: Agile

Old Favourite: Radio by Example

This first appeared on my old blog in 2008 under the title “Specification by Example”. I’ve made a couple of minor refinements to the text and I’ve added the image below from the 2009 “GOOS” book.

Internet radio has come a long way since it first began. Long gone are the days when you had to seek out an online station that streamed a preselected playlist of mostly music you liked, or several stations for those with more eclectic taste. Instead, services like Pandora.com and Last.fm create a personalised radio station that matches the user’s own taste.

These personalised internet-radio stations are far more sophisticated than simply specifying which genres and styles you like. In fact, when Last.fm first started you didn’t do that at all: you provided an example of a song that you liked. Software analysed the song, finding common aspects of the music’s ‘DNA’ – genre, tempo, how melodic it is, and countless other sound characteristics in its database. From this it created a user-specific internet radio station matching the user’s taste. As you listen, you give feedback, saying which songs you love and which ones you hate, and your future play-lists are refined with each piece of feedback you provide. This feedback is, in itself, more examples of songs that do and don’t match your taste. As a result, you find yourself listening to songs you’d not otherwise have discovered.

That is Specification by Example! The product (in the above, your personal playlist) evolves by inferring the desired rules and characteristics from concrete examples.

And, as several people have been saying for years… Test Driven Development is just that – Specification by Example – applied to software development. Instead of examples of songs, we provide examples of what we’d like the product, or part of it, to do. These examples take the form of tests. It just so happens that the evolution of the software product is performed by humans, unlike the internet playlist, which is evolved by a machine.

Write a failing acceptance test, then cycle over writing a failing unit test, making it pass and refactoring, repeating until the acceptance test passes.
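
In code, those examples are simply tests. Here’s a minimal, hypothetical JUnit 4 sketch of the idea – the Station class, its API and the song names are my own illustrative assumptions, not something from the original post. The test methods are the ‘examples’; the tiny implementation below them is just enough code to make those examples pass, grown through the cycle described above:

import static org.junit.Assert.*;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.junit.Test;

public class PersonalStationExamples {

    enum Feedback { LOVE, HATE }

    // Example 1: feedback shapes the play-list - a loved song is preferred.
    @Test
    public void playsTheLovedSongBeforeTheHatedOne() {
        Station station = new Station();
        station.feedback("Song A", Feedback.LOVE);
        station.feedback("Song B", Feedback.HATE);
        assertEquals("Song A", station.nextSong(Arrays.asList("Song A", "Song B")));
    }

    // Example 2: a hated song should never come up again.
    @Test
    public void neverReplaysAHatedSong() {
        Station station = new Station();
        station.feedback("Song A", Feedback.LOVE);
        station.feedback("Song B", Feedback.HATE);
        assertFalse(station.playlist().contains("Song B"));
    }

    // Just enough implementation to satisfy the examples above (hypothetical).
    static class Station {
        private final Set<String> loved = new HashSet<>();
        private final Set<String> hated = new HashSet<>();

        void feedback(String song, Feedback feedback) {
            if (feedback == Feedback.LOVE) loved.add(song); else hated.add(song);
        }

        // Picks the first candidate the listener hasn't rejected.
        String nextSong(List<String> candidates) {
            for (String song : candidates) {
                if (!hated.contains(song)) return song;
            }
            return null;
        }

        // The current play-list: everything loved, nothing hated.
        List<String> playlist() {
            List<String> songs = new ArrayList<>(loved);
            songs.removeAll(hated);
            return songs;
        }
    }
}

Each new example either passes, or drives the next small change to the implementation – exactly how the radio station refines your playlist from each new song you rate.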

Slow is Smooth–Smooth is Fast

A few years ago, to explain what goes wrong for many organisations that “go agile”, I wrote an article called “Putting the Kart before the Horse” [1].

Go-karting is where most of the current Formula One racing drivers first learned the basics of race-craft. Antony Marcano, a former kart racer himself, recounts a father-and-son racing experience that helps him explain what goes wrong for many organizations that adopt Scrum as their first attempt to “go agile.”

In this article I explained how novice drivers try to go as fast as possible rather than trying to master the racing line:

At the open practice session my son and I attended, novice drivers seemed to approach their first lap as if it really were as easy as experienced drivers make it look. They floored the throttle and flew down the straight only to crash—painfully and expensively—into the barrier at the first hairpin turn.

I went on to liken this to how many teams adopt agile…

Organizations starting their transition to agile methods with Scrum are much like the novice drivers trying to go full throttle before learning how to take the corners. Before being able to truly bend and change their software with speed and ease, they fly into short iterations and frequent incremental product delivery—crashing painfully into the barrier of hard-to-change software.

It was my son’s first experience in a go-kart. We started by driving slowly with him following me to learn the racing line. We gradually went faster and faster. Soon, we were the quickest drivers on the track.

Since then, he has gone on to advance further and further in his racing. Back then, we were in slow, 35mph go-karts. This year he started racing for a team in race-tuned karts (see photo). The team’s driver-coach watched him during his first few attempts in his new high-performance machine (70+ mph, super-car acceleration). While he was reasonably quick, he wasn’t smooth.


Damani Marcano driving his 2013 Mach1 Kart for MLC Motorsport

His coach told him to slow down: “whatever speed you can do while still driving on the ideal racing line is the speed you should be going”. My son went out again. Initially slower than before, his speed progressively built… we watched his lap-times improve dramatically.

Yet again, we relearned this lesson: slow is smooth–smooth is fast.

For some reason, I see teams who are novices in agile development make the same mistake – often without feeling like they have a choice. Kanban might make it easier to start from where you are and gradually improve, but it’s ok if you’ve decided to start with Scrum. When starting with Scrum, start slow, focus on doing fewer things well first, then progressively improve. On a course I gave recently, one of the attendees asked: “but how do you get management and marketing to back off enough to allow us to slow down?”

My answer: avoid going faster than you can. If you feel compelled to go faster, make it a business decision. Help them understand that whatever time you save now risks costing you more in the future, and help them understand all the effort you’ll spend repairing the damage that going too fast will cause. Pretty soon you’ll be spending so much time fixing problems and struggling to adapt hard-to-change code that there will be little capacity left to add new features. Essentially, you’ll be incurring “technical debt”.

As a general rule, I always try to start anything new slow and steady. I only go as fast as I know I can while delivering predictably and sustainably. I’m mindful of business pressures, but experience has shown me that by focusing on “making things right” we can deliver smoothly, build trust and progressively become faster.

—————–

[1] You can read the original article here: Putting the Kart before the Horse? – published in Better Software Magazine, 2009

My son’s story is continued here: http://damanimarcano.com

It is updated here: https://www.facebook.com/DKMRacing

And here: https://twitter.com/DKMRacing


UPDATE [11-12-2013]: Damani Marcano finished his first ever season 3rd in the Hoddesdon Kart Club Championship Junior Rotax Formula at Rye House, the track that started Lewis Hamilton’s career. This achievement is significant since, as a novice license holder, Damani had to spend the first half of the season starting every heat from the back of the grid. Find out what happens next season by following his story on Twitter and/or Facebook. Who knows, by adding to his ‘likes’ / ‘followers’ you may become a part of his story by helping him become more attractive to a potential sponsor.

Making the right things vs. Making things right

Recently, David Anderson wrote of how he disagrees with the idea of focusing more on ‘doing the right thing’ than ‘doing things right’.

…we’ve heard a number of leaders in the Agile community promoting the idea that you should focus on doing the right thing – discovering what customers really want and need – rather than focusing on building and deploying working software

David disagrees because…

“doing things right” creates the circumstances to enable deferred commitment. Deferred commitment enables better predictability and faster delivery and a virtuous cycle ensues. Deferred commitment forces hard selection decisions and creates a focus on how selection is done. Combined with the limited WIP of a kanban system, deferred commitment raises awareness of the scarce resource of engineering capability. This in turn focuses attention on how to best utilize that scarce resource by “doing the right thing.”

There is another reason why focusing on “doing things right” is a good idea. One of the very principles behind many agile/lean approaches is iteration: allowing us to make the wrong thing, see that it is wrong, and more rapidly figure out how to make it right. The ability to do that quickly, in an easy-to-evolve product, is a significant aspect of agile approaches. “Doing things right”, as David points out, is the path to making this rapid evolution viable.

One way this is relevant is when we give the customer what they say they want so that they can then see what they really need. The ability to iterate quickly and predictably, as David clarifies, builds trust and leads to getting the right thing faster.

So, I entirely agree with David; however, I would phrase it in a slightly different way. To me, there is a distinction between “doing things” and “making things”.

David is talking more about what I would describe as “making the right things” vs. “making things right”. I think of “doing” as a layer below this, where “making things” is more about what we do and “doing things” is more about how we do it.

doing the right things -> doing things right -> making things right -> making the right things

For example, using TDD might be “doing the right thing”, but perhaps the practitioner isn’t very experienced in it and isn’t using many of the techniques that give the most benefit for the effort put in. While they might be effective by “doing the right thing” (TDD in this case), they are not as efficient as they could be because they’re not yet “doing TDD right”.

So, if you ever hear me talk of “doing the right things” before “doing things right” then it’s because of this distinction in my mind between “doing” and “making”.

Start with “making things right” and you’ll get better at “making the right things”.

To improve at “making things right”…

Start with “doing the right things” and you’ll get better at “doing things right“.

—————————

Footnote:

Barry Boehm is the first thinker I saw use the phrase “building the right thing” vs. “building things right”, in his 1981 book Software Engineering Economics. I use the term “making” because it’s a more general term with fewer implications about “how” we create a product. This is partly because developing software products with agility doesn’t feel like “building” to me.

Software Pottery

“Building software” is a phrase I hear used time and again. Indeed, many do still think of software development as akin to construction. Modern (agile) software development doesn’t feel like that to me. There was a time when, like the construction architect, we had to increment and iterate on paper and in models. The cost of computing was too high to do such iteration inside the machine. The cost difference between making a small change in our understanding and reflecting that change in the actual product was too great. The cost of change meant that we had to treat the creation of software like constructing buildings or bridges. But those times are, or should be, in the past.

To me, modern (agile) software development feels more like pottery. But, instead of ‘throwing clay’ we throw our consciousness, in the form of electrons flowing over silicon, onto a spinning wheel of user-story implementation cycles.

Picture of a pot being created and evolved on a pottery wheel.

Source: http://www.flickr.com/photos/kellinahandbasket/2183799236

We evolve our understanding of what our customer wants with each conversation. We reflect that understanding by evolving the product’s shape as we touch the outer surface with each new automated acceptance test (or example scenario). We evolve its structure as we touch the inner surface with each new unit test (or unit specification). We keep it supple as we moisten it with each refactoring. Our customer looks on, gives us feedback and we start the next revolution of the software development wheel. Never letting the product dry and harden; keeping it flexible, malleable, soft(ware).

There is nothing that feels like construction here. I never feel like I’m “building” something. Much more like I’m evolving my understanding, evolving the product, evolving my understanding again and so on.

For these reasons I am not comfortable with referring to this process as “building”. I’m also not comfortable calling it “throwing” (as in pottery).

I am very comfortable calling it “development”. I use the word “develop” in terms of its literal meaning:

to bring into being or activity; generate; evolve.
-dictionary.com

From the first thought of a new idea, through each conversation along the way, to each revolution of the software development wheel, I develop products.

I don’t “build”.

I create. I make. I shape. I evolve.

—————————
Footnote: I’m not aware of anyone drawing this analogy before. If you have heard or read someone explain it in this way before now, please let me know.

Knowledge-work is serendipity-work

Serendipity is core to the success of modern companies. While some may not have articulated their insights as facilitating serendipity, that is certainly what they were doing, even if they didn’t realise it.

Enabling Serendipity

For example, Google’s 20% time, where employees can spend 20% of their time working on anything they like, has resulted in numerous “happy accidents” where pet projects have matured into fully fledged revenue-enhancing products, such as Gmail.

Charles Leadbeater talks of the “we think” phenomenon, where innovation occurs simply by people having access to products that give them flexibility in how they use them. This philosophy taps into the free-thinking collective consciousness of the social web. The birth of Twitter is the perfect example. Hashtags, “@” mentions and retweets are all customer innovations that were subsequently added to the Twitter experience we know and love today. It’s highly likely that “@” mentions were a significant factor in boosting user engagement, and hashtags are now one of Twitter’s revenue sources.

Countless more examples exist but how is this achieved and what does this have to do with serendipity?

Adrian Cockcroft of Netflix explains that you can’t force innovation; you have to get out of the way of innovation. This applies as much to what we enable people to do with products, as in the Twitter story, as to what people do with their 20% (or more) time.

Facilitating Serendipity

It’s not just the organisation that needs to embrace serendipity, the working environment does too. In “Designs for Working” Malcolm Gladwell highlights:

“Who, after all, has a direct interest in creating diverse, vital spaces that foster creativity and serendipity? Employers do.”

He uses Greenwich Village, where opportunities for people to meet by happenstance are rife, as a model for the ideal work environment. Many of his ideas can be found in the forward-thinking companies I’ve already mentioned that are breaking all the rules and being so successful as a result.

Much innovation in what we produce, or even how we produce it, might never have happened if it had required a typical business case or if the working environment had been more like a factory line or made up of closed offices.

The Serendipity Economy

Daniel Rasmus, the person who coined the phrase “the serendipity economy”, captures the problem with your traditional business case so well when he says:

“If the only economics you have to work with come from the industrial age, then everything looks like a factory.”

Rasmus highlights, in “Welcome to the Serendipity Economy”, that the benefits of knowledge, ideas and experimentation are not immediate, not predictable within a specific time-frame, and not quantifiable with traditional measures of productivity or return on investment. He highlights the importance of serendipity in innovation and the power of social networking in accelerating the process.

I was talking with one client the other day about their budget approvals process. It was so heavyweight that several increments of the product could have been developed with the money spent on documenting theoretical feasibility and commercial viability.

Exploiting Serendipity

Instead, companies would be better off adopting a lean start-up model for new projects. Ensure that employees have social networking tools at their disposal. Share the ideas. Create a simple sign-up page that’s sent out to employees or even the general public. Get feedback. Provide “seed-funding” to the projects that inspire the most passionate responses. Get the first few features built. See how hard or easy it is. If it works out, you’ll have empirical data on which to base future estimates and you’ll have the beginnings of the product (maybe something releasable).

All this can be done with less than many large companies would have spent on writing business cases with hypothesised technical-feasibility and unproven commercial-viability. Yes, some ideas will flop, but that experience could be the very thing that inspires your biggest success.

Google, Twitter, Netflix and more show us that investing in serendipity (and talent) is key to the modern organisation’s success. It will take a leap of faith that many feel they cannot afford to take in these difficult economic times. But, to survive and thrive in the future, you have to ask if it’s a leap you can afford to not take.

GhostDriver – Headless Selenium RemoteWebDriver

GhostDriver is a Selenium server that can be connected to via a RemoteWebDriver.

It runs completely headlessly on PhantomJS but still behaves (mostly) like a typical WebKit browser. By avoiding the overhead of actually rendering the page in a GUI, it can run standard Selenium tests noticeably faster – and on a headless build server too.

This is especially beneficial if you want to use continuous testing IDE plugins like Infinitest or JUnitMax or simply to encourage developers to run end-to-end tests more frequently.
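
Connecting to GhostDriver looks just like connecting to any other remote Selenium server. Here’s a minimal sketch in Java – the port number and the assumption that GhostDriver advertises itself under the browser name “phantomjs” are mine, so check the project README for how to launch it and which port it listens on:

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GhostDriverSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumes a GhostDriver/PhantomJS instance is already listening locally.
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setBrowserName("phantomjs"); // assumption - see lead-in above

        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:8080"), capabilities);
        try {
            driver.get("http://www.google.com");
            System.out.println("Loaded page with title: " + driver.getTitle());
        } finally {
            driver.quit(); // always release the remote session
        }
    }
}

Because it’s a standard RemoteWebDriver, existing Selenium tests should run against it unchanged – which is what makes it attractive for headless build servers and continuous testing plugins.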

GhostDriver is written on top of PhantomJS, a headless WebKit browser which uses QtWebKit.

It’s already 35%-50% faster than ChromeDriver, despite the fact that QtWebKit uses the standard WebKit JavaScript engine. QtWebKit is moving to the V8 JS engine (as used in Chrome), which means it’s only going to get faster…

It’s still a work in progress and hasn’t implemented the entire Selenium wire protocol yet, but it’s certainly one to watch.

Presentation at SeleniumConf 2012: http://www.youtube.com/watch?v=wqxkKIC2HDY

Find it on GitHub: https://github.com/detro/ghostdriver/

Implemented by: Ivan De Marino

This is not a manifesto: Valuing Throughput over Utilisation

In a previous article, This is not a manifesto, I expressed the values I hold as a software development team member. Today, I’m going to talk about the first of these values.

Before I do, I’d like to say what I mean by “software development team”. I mean a cross-discipline team with the combined skills to deliver a software product – product owner, user experience, business analysts, programmers, testers, dev-ops, etc.

A common problem

Many teams I encounter will, at least in the beginning, have team-members who specialise in a single role. Each team-member will rarely, if ever, step outside their job description [1]. This can cause a problem.

Many teams find themselves in a situation where some team members have little to do on the current stories. At this point, the team has some choices. It can focus the under-utilised people on:

  1. Future user-stories, working on tasks most relevant to their job title
  2. Another team, working on tasks most relevant to their job title (e.g. ‘matrix management’)
  3. Current stories, taking on tasks that will take the story to completion sooner, even if it’s more relevant to someone else’s job title
  4. Things that will make the more utilised people get through their work faster, today and in the future.

More often than not, I find teams taking option 1. I think that people choose this option because it feels like it should increase the throughput of the team – i.e. the amount of new features they can add to the product. In fact, it has the opposite effect.

Little’s law

Let’s consider a team that is working in time-boxes or ‘sprints’ (à la Scrum) and measures its throughput with ‘velocity’ (story-points per sprint). Points are accrued as each user-story is completed – i.e. coded, tested and product-owner validated.

Fig.1: A system through which a series of balls are being processed. Each ball represents one story point. There is spare capacity in the team.

Fig.2: The same system, with the spare capacity used up – even though it does nothing to increase throughput.

This particular team is able to complete an average of 10 points per sprint. Let’s say this is due to a bottleneck in the process – one that limits the amount of testing that can be completed during the sprint. This is illustrated in fig.1, where each ball is a story point.

Little’s law [2] tells us that the amount of time it takes to complete an item of work (cycle-time) is:

    cycle-time = Work in Progress (WIP) / Throughput

Which we can read as:

    cycle-time = story points in progress / velocity

In this simplified example, WIP is 10 points and throughput is 10 points per sprint. The average cycle time of a story is therefore 10/10 = 1 sprint. Notice, however, there’s all that spare capacity.

Let’s say this spare capacity is developers. So, the team starts taking on more user-stories from the backlog – increasing the number of story points in progress to 20 points (fig.2).

Because the bottleneck remains, velocity remains the same – 10 points per sprint. The amount of flexibility that the team has, however, is now reduced because the average time required to get a story to completion has increased from 1 sprint to 20/10 = 2 sprints [3].

Often this will take the form of testers working a sprint behind the developers. Worse still, over 10 sprints, the team still only completes 100 story points (as they would have before) but is left with a lot of unfinished work. This ‘inventory’ of unfinished work carries overheads which can, ultimately, reduce the velocity of the team.

The impact of filling capacity in this way has yet another effect.

Latency effect

As the developers get through more stories, a queue of stories that are “ready for testing” will build up. Testers working through these will, at least, have questions for the developer(s) or even find some defects. The developer, having moved on from that story, is now deep into the code and context of another story. When the tester has questions, or finds defects, relating to a story that the developer had worked on, say a week ago, then the developer has to reload their understanding of this old code and context in order to answer any questions or fix any defects. This context-switching carries significant overheads [4].

The end result is that the effort required to complete a story increases due to the repeated context switching, therefore reducing velocity.

So, not only does filling the capacity with more work fail to increase throughput, it adds costly context-switching overheads – ultimately slowing everything down.

This phenomenon is not unique to teams using fixed-length time-boxes, such as sprints. Strictly following Kanban avoids this problem, but what I’ve seen is some teams creating a ‘ready for testing’ queue – so that developers can start work on the next story. This has the same latency effect and turns a process designed for continuous flow into a batch and queue process. But, I digress.

What to do with the spare capacity?

The simple answer is to look at the whole approach and determine what is slowing things down. In the example above, I’d be wondering what’s slowing the testing down. Many of the things that hinder testing can be addressed by changing how we do things ‘upstream’.

Are lots of defects being found, causing the testers to spend more time investigating and reproducing them? Can we get the testers involved earlier? Can any predictable tests be defined up front so that developers can make sure the code passes them before it even gets to the testers (e.g. Behaviour Driven Development)?

Are the testers manually regression testing every sprint? Could the developers help by automating more of that? Are the testers having to perform repetitive tasks to get to the part of the user-journey they’re actually testing? Can that be automated by the developers?

Is there anything else that is impacting them? Test data set-up and maintenance, product owner availability to answer questions? Anything else?

Addressing any of these issues is likely to speed up the testing process and increase throughput of the entire team as a result. One solution is to put these types of tasks onto the product backlog. This is fine but if we assign point values to them it can give a skewed view of velocity. Or rather, velocity is no longer a measure of throughput. You won’t be able to see if these types of tasks are actually improving things unless you are also measuring what proportion of the points are delivering new product capabilities.

The only good reason I can think of for story-pointing these throughput-enhancing tasks is if your focus is utilisation – i.e. maximising the number of story-points in progress. Personally, I care more about measuring and improving throughput. By doing so, we get the right utilisation for free and a faster, more capable team.

Up next: Valuing Effectiveness over Efficiency.

Footnotes:

[1] “Lessons Learned in Close Quarters Battle” illustrates how stepping outside our job descriptions can move the team through each story more quickly, using special-forces room-clearing as an analogy.

[2] Little’s law (PDF) – see the section “Evolution of Little’s Law in Operations Management”, which references Hopp and Spearman’s observation about throughput (TH), work-in-progress (WIP) and cycle-time (CT) – i.e. TH = WIP/CT. And therefore we can say CT = WIP/TH.

[3] Little’s law also illustrates that if we reduce the work in progress to one quarter, then the cycle time for each story reduces to one quarter of a sprint. We’ll still get through 10 points per sprint, but stories will be completed more as a continuous stream throughout the sprint rather than all at the end.

[4] “The Multi-Tasking Myth” by Jeff Atwood, talks about multi-tasking across projects and pulls together several resources to illustrate the impact of multitasking at various levels. This applies when multi-tasking across stories.

Acknowledgements:

I’d like to say a special thank you to my fellow RiverGliders – Andy Palmer and James Martin – for the feedback that helped me refine this article.

This is not a manifesto

Recently, I had cause to ponder the values I hold as a member of a software development team. Values that, alongside other values I hold, drive my choices and behaviours. They sit behind the things I do and how I do them. They underlie my thinking when considering how we can improve as a team and as an individual contributing to that team – for example, during retrospectives. This is not a manifesto of what I will value. This is an articulation of the things I do value.

In what I do and how I seek to improve as a team member, I have found that I value:

Throughput over Utilisation
Effectiveness over Efficiency
Advancement over Speed
Quality over Quantity

While there is value in the items on the right, I care about the items on the left more.

Note that I say I ‘care’ about the items on the left more. I actually do care. I know that I care about these things more because it actually annoys me when I’m asked to focus on the items on the right (probably because I know they will come at the expense of the items on the left). I also know because I have a genuinely positive feeling when I’m asked for the items on the left (probably because I know that, over time, we’ll get the items on the right for free).

So, what do I mean by these ‘buzzwords’? Stick around and I’ll tell you in a series of upcoming blog posts. Some of these apply in ways that go beyond the obvious.

Acknowledgements:

You probably noticed that I used the same format as the Agile Manifesto. This is because it happens to provide a structure that I believe most clearly and concisely communicates the idea.

I’d like to say a special thank you to Andy Palmer and Matt Roadnight for the awesome conversations that helped me find the right words to express these values and ideas.

Scenario-Oriented vs. Rules-Oriented Acceptance Criteria

Acceptance Criteria, Scenarios, Acceptance Tests are, in my experience, often a source of confusion.

Such confusion results in questions like the one asked of Rachel Davies recently, i.e. “When to write story tests” (sometimes also known as “Acceptance Tests” or in BDD parlance “Scenarios”).

In her answer, Rachel highlighted that:

…acceptance criteria and example scenarios are a bit like the chicken and the egg – it’s not always clear which comes first so iterate!”

In that article, Rachel distinguishes between acceptance criteria and example scenarios by reference to Liz Keogh’s blog post on the subject of “Acceptance Criteria vs. Scenarios”:

where she explains that acceptance criteria are general rules covering system behaviour from which executable examples (Scenarios) can be derived

Seeing the acceptance criteria as rules and the scenarios as something else is one way of looking at it. There’s another way too…

Instead of Acceptance Criteria that are rules and Acceptance Tests that are scenarios… I often find it useful to arrive at Acceptance Criteria that are scenarios, of which the Acceptance Tests are just a more detailed expression.

I.e. Scenario-Oriented Acceptance Criteria.

Expressing the Intent of the Story

Many teams I encounter, especially newer ones, place higher importance on the things we label “acceptance criteria” than on the things we label “acceptance tests” or “scenarios”.

By this I mean that such a team might ultimately determine whether the intent of the story was fulfilled by evaluating the product against these rules-oriented criteria. Worse still, I’ve observed some teams also have:

  • Overheads of checking that these rule-based criteria are fulfilled as well as the scenario-oriented acceptance tests
  • Teams working in silos where BAs take sole responsibility for criteria and testers take sole responsibility for scenarios
  • A desire to have traceability from the acceptance tests back to the acceptance criteria
  • Rules expressed as rules in two places – the criteria and the code
  • Criteria treated as the specification, rather than using specification by example

If we take the approach of not distinguishing between acceptance criteria and acceptance tests (i.e. see the acceptance tests as the acceptance criteria) then we can encourage teams away from these kinds of problems.

I’m not saying it solves the problem – it’s just easier, in my experience, to move the team towards collaboratively arriving at a shared understanding of the intent of the story this way.

To do this, we need to iteratively evolve our acceptance criteria from rules-oriented to scenario-oriented criteria.

Acceptance Criteria: from rules-oriented to scenario-oriented:

Let’s say we had this user story:

As a snapshot photographer
I want some photo albums to be private
So that I have a backup of my personal photos online

And let’s say the product owner first expresses the Acceptance Criteria as rules:

  • Must have a way to set the privacy of photo albums
  • Private albums must be visible to me
  • Private albums must not be visible to others

We might discuss those to identify scenarios (as vertical slices through the above criteria):

  • Create new private album
  • Make public album private
  • Make private album public

Here, many teams would retain both acceptance criteria and scenarios. That’s ok, but I’d only do that if we collectively understood that the information from these two will come together in the acceptance tests… and that whatever we agree in those acceptance tests supersedes the previously discussed criteria.

This understanding tends to occur in highly collaborative teams. In newer teams, where disciplines (Business Analysts, Testers, Developers) are working more independently this tends to be harder to achieve.

In those situations (and often even in highly collaborative teams) I’d continue the discussion to iterate over the rules-oriented criteria to arrive at scenario-oriented criteria, carrying over any information captured in the rules, discarding the original rules-based criteria as we go. This might leave me with the following scenarios:

  • Create new private album (visible to me, not to others)
  • Make public album private (visible to me, not to others)
  • Make private album public (visible to me, & others)

Which, with a subtle change to the wording, become the acceptance criteria. I.e. the story is complete when we:

  • Can create new private album (visible to me, not to others)
  • Can make public album private (visible to me, not to others)
  • Can make private album public (visible to me, & others)

Once the story is ‘in-play’ (e.g. during a time-box/iteration/sprint) I’d elaborate these one at a time, implementing just enough to get each one passing as I go. By doing this we might arrive at:

Scenario: Can Create new private album
When I create a new private album called "Weekend in Brighton"
Then I should be able to see the album
And others should not be able to see it

Scenario: Can Make public album private
Given there is a public album called "Friday Night"
When I make that album private
Then I should be able to see the album
And others should not be able to see it

Scenario: Can Make private album public
Given there is a private album called "Friday Night"
When I make that album public
Then I should be able to see the album
And others should be able to see it
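
To show how these scenario-oriented criteria elaborate into executable acceptance tests, here is a self-contained sketch in Java (JUnit 4). The PhotoAlbums class and its API are my own illustrative assumptions – a tiny in-memory model standing in for the real product – so treat this as a shape, not an implementation:

import static org.junit.Assert.*;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class PrivateAlbumsAcceptanceTest {

    @Test
    public void canCreateNewPrivateAlbum() {
        PhotoAlbums albums = new PhotoAlbums("me");
        albums.createAlbum("Weekend in Brighton", true);
        assertTrue(albums.isVisibleTo("me", "Weekend in Brighton"));
        assertFalse(albums.isVisibleTo("someone else", "Weekend in Brighton"));
    }

    @Test
    public void canMakePublicAlbumPrivate() {
        PhotoAlbums albums = new PhotoAlbums("me");
        albums.createAlbum("Friday Night", false);
        albums.setPrivacy("Friday Night", true);
        assertTrue(albums.isVisibleTo("me", "Friday Night"));
        assertFalse(albums.isVisibleTo("someone else", "Friday Night"));
    }

    @Test
    public void canMakePrivateAlbumPublic() {
        PhotoAlbums albums = new PhotoAlbums("me");
        albums.createAlbum("Friday Night", true);
        albums.setPrivacy("Friday Night", false);
        assertTrue(albums.isVisibleTo("me", "Friday Night"));
        assertTrue(albums.isVisibleTo("someone else", "Friday Night"));
    }

    // Hypothetical in-memory model - just enough to make the tests run.
    static class PhotoAlbums {
        private final String owner;
        private final Map<String, Boolean> privateByName = new HashMap<>();

        PhotoAlbums(String owner) { this.owner = owner; }

        void createAlbum(String name, boolean isPrivate) { privateByName.put(name, isPrivate); }

        void setPrivacy(String name, boolean isPrivate) { privateByName.put(name, isPrivate); }

        // The owner sees everything; others see only public albums.
        boolean isVisibleTo(String user, String name) {
            return user.equals(owner) || !privateByName.get(name);
        }
    }
}

In a real product these tests would drive the application through its UI or an API (perhaps via the Given/When/Then steps above), but the point stands: the acceptance tests are simply a more detailed expression of the scenario-oriented criteria.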

Because the acceptance tests are an elaboration of the scenario-oriented acceptance criteria, there’s no implied need to also check the implementation against the original ‘rules-oriented’ criteria.

Agreeing that this is how we’ll work encourages more collaborative working – at least between the Business Analysts and Testers.

These new scenario-based criteria become the only means of determining whether the intent of the story has been fulfilled, avoiding much of the confusion that can occur when we have rules-oriented acceptance criteria as well as separate acceptance test scenarios.

In short, more often than not, I find these things much easier when the criteria and the scenarios are essentially one and the same.

Why not give it a try?

Sushi, Hibachi and Other Ways of Serving Software Delicacies

Let’s say you own a restaurant. Quite a large restaurant. You’ve hired a manager to run the place for you because you are about to take the fruits of your success and invest in opening three more around the country. You leave the manager with a budget to make any improvements he sees fit.

After being away for a few weeks, you return to your restaurant. It’s very busy thanks to the great reputation you had built for it. The kitchen has some new state of the art equipment and you can see that there are a lot of chefs and kitchen-staff preparing meals furiously. Obviously, with a restaurant this full you have to prepare a lot of meals.

Too many chefs?

But then, you notice that there are only a few service staff taking orders and delivering the meals to the customers. Tables are waiting a long time for their meals. You notice the service staff spending a lot of their time taking meals back to the kitchen because they have gone cold waiting so long to be served. You also overhear several customers complaining that the meal they received was not the one they ordered, increasing the demand for the preparation of more meals. You call the manager over and ask him what’s going on… he says “we need to invest more in the kitchen”. You ask why. He tells you “because there are so many meals to prepare and we can’t keep up”.

At this point, it’s quite obvious that the constraint is not a lack of investment in the kitchen. The under-investment is in taking the right orders and getting the meals to the customers. To solve this, I think most people would increase the investment in getting the orders right and in how the meals get to the customers – not invest more in the kitchen.

Obvious, right? Despite this, time and again, I see organisations happy to grow their investment in more programmers writing more code and fixing more bugs, while all but ignoring the end-to-end process of finding out what people want and taking those needs through to working capabilities available to the customer. Business analysis, user-experience, testing and operations all seem to suffer from under-investment. If it takes you four weeks to write the code for several new features and then another four weeks for that code to become capabilities available to your customer, hiring more programmers to write more code is not going to make things any better.

Sushi and Hibachi

So, how would you advise the restaurant? In the short term, you can get some of the chefs to serve customers while you hire more service staff. Another solution is to invest in infrastructure that brings the meals closer to the customer, such as in hibachi restaurants where the chef takes the order and cooks the meal at the table or in sushi restaurants where meals are automatically transported via conveyor belt.

In software teams, we can bring the preparation of features closer to our customers. We can involve a cross-functional team in the conversation with the customer (like the chef at the customer’s table). We can also automate a large proportion of the manual effort of getting features from a programmer’s terminal into the hands of the customer (as with the conveyor belt). And, there is so much we can automate.

Integration, scripted test execution, deployment and release management can largely be automated. Since speeding up these tasks has the potential to increase the throughput of an entire development organisation, it seems appropriate to invest in these activities as we would in automating any other significant business activity – such as stock-control or invoice processing.

Cooking for ourselves

Why not ask a good number of our programmers to take some time out from automating the rest of the business and automate more of our own part of the business? Taking this approach seems to be the exception rather than the rule. I see organisations with one or two dev-ops folks trying to create automated builds, and a similar number of testers, with next to no programming experience, trying to automate a key part of the business – regression testing – and then asking them all to keep up with an army of developers churning out new code and testers churning out more scripted tests.

Serving up the capabilities

Instead, let’s stop looking so closely at one part of the process and look more at the whole system of getting from concept to capability. Let’s invest in automating for ourselves as much as we automate for others – and let’s dedicate some of our best people to doing so.

In short, it’s not just about the speed at which we cook up new features. It’s as much about the speed at which we can serve new, working capabilities to our customers.