Sushi, Hibachi and Other Ways of Serving Software Delicacies

Let’s say you own a restaurant. Quite a large restaurant. You’ve hired a manager to run the place for you because you are about to take the fruits of your success and invest in opening three more around the country. You leave the manager with a budget to make any improvements he sees fit.

After being away for a few weeks, you return to your restaurant. It’s very busy thanks to the great reputation you had built for it. The kitchen has some new state-of-the-art equipment and you can see that there are a lot of chefs and kitchen staff preparing meals furiously. Obviously, with a restaurant this full you have to prepare a lot of meals.

Too many chefs?

But then, you notice that there are only a few service staff taking orders and delivering the meals to the customers. Tables are waiting a long time for their meals. You notice the service staff spending a lot of their time taking meals back to the kitchen because they have gone cold waiting so long to be served. You also overhear several customers complaining that the meal they received was not the one they ordered, increasing the demand for the preparation of more meals. You call the manager over and ask him what’s going on… he says “we need to invest more in the kitchen”. You ask why. He tells you “because there are so many meals to prepare and we can’t keep up”.

At this point, it’s quite obvious that the constraint is not a lack of investment in the kitchen. The real under-investment is in taking the right orders and getting the meals to the customers. To solve this, I think most people would increase the investment in getting the orders right and in getting the meals to the customer, not invest more in the kitchen.

Obvious, right? Despite this, time and again, I see organisations happy to grow their investment in more programmers writing more code and fixing more bugs, while all but ignoring the end-to-end process of finding out what people want and taking those needs through to working capabilities available to the customer. Business analysis, user-experience, testing and operations all seem to suffer from under-investment. If it takes you four weeks to write the code for several new features and then another four weeks for that code to become capabilities available to your customer, hiring more programmers to write more code is not going to make things any better.

Sushi and Hibachi

So, how would you advise the restaurant? In the short term, you can get some of the chefs to serve customers while you hire more service staff. Another solution is to invest in infrastructure that brings the meals closer to the customer, such as in hibachi restaurants where the chef takes the order and cooks the meal at the table or in sushi restaurants where meals are automatically transported via conveyor belt.

In software teams, we can bring the preparation of features closer to our customers. We can involve a cross-functional team in the conversation with the customer (like the chef at the customer’s table). We can also automate a large proportion of the manual effort of getting features from a programmer’s terminal into the hands of the customer (like the conveyor belt). And there is so much we can automate.

Integration, scripted test execution, deployment and release management can largely be automated. Since speeding up these tasks has the potential to increase the throughput of an entire development organisation, it seems appropriate to invest in these activities as we would in automating any other significant business activity – such as stock-control or invoice processing.
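
To make this concrete, here is a minimal sketch of the idea in Python. The `make build`, `pytest` and `deploy.sh` commands are purely illustrative placeholders for whatever your project actually uses, and a real team would typically have a build server run something like this on every change rather than relying on anyone to run it by hand.

```python
#!/usr/bin/env python3
"""A minimal, hypothetical delivery pipeline: integrate, test, deploy.

The commands below are placeholders; the point is only that these
steps are scriptable end to end, not that this is 'the' pipeline.
"""
import subprocess
import sys

# Each step is an ordinary shell command; the names are illustrative only.
PIPELINE = [
    ("integrate", ["make", "build"]),        # compile and package the latest code
    ("test", ["pytest", "tests/"]),          # run the scripted checks automatically
    ("deploy", ["./deploy.sh", "staging"]),  # push the build to an environment
]


def run_pipeline() -> int:
    """Run each step in order, stopping at the first failure."""
    for name, command in PIPELINE:
        print(f"--- {name}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Pipeline stopped: '{name}' failed.", file=sys.stderr)
            return result.returncode
    print("All steps passed; release candidate ready.")
    return 0


if __name__ == "__main__":
    sys.exit(run_pipeline())
```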

Cooking for ourselves

Why not ask a good number of our programmers to take some time out from automating the rest of the business and automate more of our own part of the business? Taking this approach seems to be the exception rather than the rule. I see organisations with one or two dev-ops folks trying to create automated builds, and a similar number of testers, with next to no programming experience, trying to automate a key part of the business – regression testing. And then asking them all to keep up with an army of developers churning out new code and testers churning out more scripted tests.

Serving up the capabilities

Instead, let’s stop looking so closely at one part of the process and look more at the whole system of getting from concept to capability. Let’s invest in automating for ourselves as much as we automate for others – and let’s dedicate some of our best people to doing so.

In short, it’s not just about the speed at which we cook up new features. It’s as much about the speed at which we can serve new, working capabilities to our customers.

Old Favourite: QA / Testing – what’s the difference?

Software is one of the few industries that lumps testing and QA under one banner. It’s one of those things where common misuse of a term results in the community changing its meaning… this happens in mainstream language all the time.

Testing something is actually more analogous to quality control, or QC (although, strictly speaking, it isn’t quality control).

QA is more concerned with the process – collecting information about the performance of the process in order to determine whether we are ‘assuring’ (or, more realistically, increasing the probability of) quality. Statistical information about problems found in the product (during quality control) is just one of many pieces of information useful to someone concerned with QA (which really should be the whole project team).

In short:

QC helps us answer the question ‘does our product work?’

QA helps us answer the question ‘does our process work?’

Unfortunately, in the software industry, all too many teams don’t realise their process doesn’t work until the testers find all the ways in which the product doesn’t work… maybe that’s why software testing has come to be known as QA.

This article originally appeared on my old blog in July 2008. I had already elaborated on this topic in the article “What’s in a word” in Better Software Magazine in March 2008.

Taking repetition to task

This originally appeared on my old blog in March 2010.

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user-stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then my observation is that ‘tasks’ are the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder: hasn’t anyone else noticed that counting the effort of completed tasks on burn-down/up charts runs counter to the value that we measure progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than working software

Tasks are, however, very useful for teams at first. Just like anything else we learn how to do, learning how to do it on paper can often help us then discard the paper and do the workings in our heads. However, what I’ve noticed is that most teams I’ve worked with continue to write and estimate tasks long after the practice is useful or relevant to them.

For example, there comes a time for many teams where tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process or are you just doing it because the process says you should do it?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient, meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria”, they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they describe how the user interacts rather than what they need to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.
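
To illustrate, purely as a sketch (the product behind these criteria isn’t specified here), criteria written as examples map naturally onto executable checks. In a Python project they might look something like the following, where `save_profile` and `ValidationError` are hypothetical stand-ins for the real product code:

```python
# A sketch of the two example-style criteria as executable checks.
# save_profile and ValidationError are hypothetical stand-ins for
# whatever the real product provides; only the shape matters here.
import pytest


class ValidationError(Exception):
    """Raised when a profile fails validation."""


_saved_profiles = []


def save_profile(name: str, business_sector: str) -> dict:
    """Save a profile; the business sector is mandatory."""
    if not business_sector.strip():
        raise ValidationError("business sector is mandatory")
    profile = {"name": name, "business_sector": business_sector}
    _saved_profiles.append(profile)
    return profile


def test_should_create_a_new_profile():
    profile = save_profile("Acme Ltd", "Catering")
    assert profile in _saved_profiles


def test_profile_cannot_be_saved_with_blank_business_sector():
    with pytest.raises(ValidationError):
        save_profile("Acme Ltd", "")
```

Each criterion that passes marks another thin, vertical slice of the story done.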

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria, make sure they are more like examples and less like instructions. Once that’s achieved, consider slicing each criterion (or scenario) horizontally with the tasks rather than the story. Pretty soon, you’ll find that you don’t need tasks anymore and you can simply measure progress in terms of the new capabilities you add to your product.

Updated 30-02-2010: I’ve inserted a new paragraph as the opening paragraph referencing an article on slicing user-stories to add some background to the vertical slicing metaphor. I’ve provided a link but I’m not sure who first came up with the metaphor.