Old Favourites – antonymarcano.com/blog – Thinking through writing… on innovation, business, technology and more

Old Favourite: QA / Testing – what's the difference?

Software is one of the few industries that lumps testing and QA under one banner. It's one of those things where common misuse of a term results in the community changing its meaning… this happens in mainstream language all the time.

Testing something is actually more analogous to quality control, or QC (although it isn't quality control).

QA is more concerned with the process – collecting information about the performance of the process in order to determine whether we are 'assuring' (or, more realistically, increasing the probability of) quality. Statistical information about problems found in the product (during quality control) is just one of many pieces of information useful to someone concerned with QA (which really should be the whole project team).

In short:

QC helps us answer the question ‘does our product work?’

QA helps us answer the question ‘does our process work?’

Unfortunately, in the software industry, all too many teams don’t realise their process doesn’t work until the testers find all the ways in which the product doesn’t work… maybe that’s why software testing has come to be known as QA.

This article originally appeared on my old blog in July 2008. I had already elaborated on this topic in the article “What’s in a word” in Better Software Magazine in March 2008.

Old Favourite: Expected Exceptions

This first appeared on my old blog in November 2008.

I’ve decided that I don’t like typical patterns for testing exceptions. I decided this a while ago as far as “Expected Exception” attributes/annotations are concerned and stuck with the traditional try/catch approach (I’ll explain why in a minute). Now, I’ve decided I don’t like the typical “try/fail or catch” approach and have started using a subtle evolution of it.

First, let me explain why I don’t like Expected Exception attributes/annotations. The final nail in the coffin of this approach was hammered home for me when working with Liz Keogh a while back.

Here is an example in Java of the expected exception pattern (for brevity I won't include assertions for e.getMessage()):

@Test(expected=BeyondMyExpertiseException.class)
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();

	expert.howDoIRestore(thisCheckpoint);
}

So, there's the obvious problem that it is only implicit which method threw that exception (the test relies on me knowing that none of the other steps can throw it), and we want our tests to communicate information explicitly, yes? But the insight that Liz shared with me is that it also changes the flow of information (ok, I'm paraphrasing now) compared to a test that doesn't expect an exception.

In a 'positive' test, the flow of information expressed to the reader is what I need -> what I do -> what I expect. In an expected-exception test, this changes to what I expect -> what I need -> what I do. The latter just doesn't flow very well, and because it's different to your positive tests there's an overhead for the reader (me, or someone else later on) in processing this shift in structure… I've found that such tests just don't jump out at me when I'm scanning the tests…

Since then, despite fashion, I committed to the old-fashioned way of testing for exceptions – "try/fail or catch":

@Test
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();

	try {
		expert.howDoIRestore(thisCheckpoint);
		fail("Should have thrown " +
			BeyondMyExpertiseException.class.getSimpleName());
	} catch (BeyondMyExpertiseException e) {
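		// expected – catching it (and doing nothing) is what lets the test pass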
	}
}

Ok, I accept, it looks more cluttered by comparison, but the flow of information makes more sense and I make it explicit that the expert.howDoIRestore(thisCheckpoint) call is the one that should have thrown the exception. (Note: the idea here is not to reduce how much you type but to make the test more expressive.) The "try/fail or catch" approach has a weakness, though: its failure message only tells you what was expected when your code throws no exception at all… if your code throws a different exception, the failure trace just tells you what exception was actually thrown; it doesn't tell you what exception was expected. So, here is a slightly different way of writing it:

@Test
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();
	try {
		expert.howDoIRestore(thisCheckpoint);
		fail();
	} catch (Exception e) {
		assertThat(e,is(instanceOf(
				BeyondMyExpertiseException.class)));
	}
}

Notice that I’m only catching Exception now, not BeyondMyExpertiseException. This still feels a little jumbled… Because my assertion is inside the catch block, I have to have the fail() method call just after the call that should throw the exception. Hmmm… don’t like that… Instead, this makes more sense:

@Test
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();
	Exception thrownException = null;

	try {
		expert.howDoIRestore(thisCheckpoint);
	} catch (Exception e) {
		thrownException = e;
	}
	assertThat(thrownException,is(instanceOf(
				BeyondMyExpertiseException.class)));
}

Giving this failure trace when it fails:

java.lang.AssertionError:
Expected: is an instance of com.testingreflections.atdd.expertise.misunderstanding.BeyondMyExpertiseException
     got: <java.lang.UnsupportedOperationException>

So, for those who want to type as little as possible, this isn’t for you… But if you want tests that drive out your exception handling to be more expressive, then this is an alternative to the usual “try/fail or catch” approach. I think that perhaps there’s an even better way of doing this… maybe next I’ll see how approximating closures with an anonymous class might help improve the readability of this… Let me know if you know of a better way.
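As a taster, here's a minimal sketch of that anonymous-class idea (approximating a closure, in pre-Java-8 style). To be clear, this is an illustration, not a library feature: the CodeBlock interface and the expectException helper are names I've invented for the sketch, not part of JUnit.

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;

// Invented for this sketch: a one-method interface so the code under
// test can be passed around like a closure.
public interface CodeBlock {
	void run() throws Exception;
}

// Invented helper: runs the block, captures anything thrown and
// asserts its type, keeping the assertion out of the test's main flow.
public static void expectException(
		Class<? extends Exception> expectedType, CodeBlock block) {
	Exception thrownException = null;
	try {
		block.run();
	} catch (Exception e) {
		thrownException = e;
	}
	assertThat(thrownException, is(instanceOf(expectedType)));
}

The test then makes the call that should throw explicit, at the cost of some anonymous-class noise:

@Test
public void shouldComplainWhenNotAClass() throws Exception {
	final DomainExpert expert = new DomainExpert();
	final String thisCheckpoint = nameOfSomeInterface();

	expectException(BeyondMyExpertiseException.class, new CodeBlock() {
		public void run() throws Exception {
			expert.howDoIRestore(thisCheckpoint);
		}
	});
}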

Boredom: a Testing Smell?

This first appeared on my old blog in June 2005.

Somebody I know who was doing some (unscripted) testing spoke of being bored the other day… I have always found boredom to be a sign that something is wrong.

I believe, as has been said by Kaner et al, that testing is a brain-engaged activity. If that is the case, why would I ever be bored?

Borrowing the Smell metaphor… I would say that boredom is a Bad Testing Smell. If it isn't a bad smell in itself, it is a whiff of an underlying bad smell for sure.

If on the rare occasion I find that I am bored, I’d ask myself:

  1. Am I testing this area more than I need to? (if so, why?)
  2. Am I losing concentration because my brain is tired? Do I need a break?
  3. Is what I am doing so repetitive that perhaps it should be automated?
  4. Is there a better way of testing this feature?

If I answer "yes" to any of these, I know that perhaps I need to do something differently. Whether that is feasible in a given context might be a different story.

Update 9th July 2010: This applies to scripted testing as much as it does to exploratory testing. Generally, I’ve found that boredom during scripted testing is because we’re asking a human to do something we should be asking a computer to do… The main barrier to this, in my experience, is that it is harder to write the automated tests than to write manual scripted tests… so, I look for ways to make automated tests as easy to write as manual scripted ones. Usually, I can achieve this with a little effort and have found it makes a huge difference.
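To illustrate (PostcodeLookup and its townFor method are invented names for this sketch), a manual script of near-identical steps like "enter postcode X, check town Y is shown" can collapse into one table-driven automated check that is no harder to write than the manual version:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PostcodeLookupTest {

	// One table of examples replaces a stack of near-identical manual
	// steps; adding a row is cheaper than executing another manual step.
	private final String[][] examples = {
			{ "SW1A 1AA", "London" },
			{ "M1 1AE", "Manchester" },
			{ "B2 4QA", "Birmingham" },
	};

	@Test
	public void shouldFindTheTownForEachKnownPostcode() throws Exception {
		PostcodeLookup lookup = new PostcodeLookup();
		for (String[] example : examples) {
			assertEquals(example[1], lookup.townFor(example[0]));
		}
	}
}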

Taking repetition to task

This originally appeared on my old blog in March 2010.

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user-stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then my observation is that 'tasks' are the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder, hasn’t anyone else noticed that the idea of counting the effort of completed tasks on burn-down/up charts is counter to the value that we measure progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks, because:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than working software

Tasks are, however, very useful for teams at first. Just like anything else we learn how to do, learning how to do it on paper can often help us then discard the paper and do the workings in our heads. However, what I’ve noticed is that most teams I’ve worked with continue to write and estimate tasks long after the practice is useful or relevant to them.

For example, there comes a time for many teams where tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process or are you just doing it because the process says you should do it?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient, meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria” they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they are talking about how the user interacts rather than what they need to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria, make sure they are more like examples and less like instructions. Once that’s achieved, consider slicing each criterion (or scenario) horizontally with the tasks rather than the story. Pretty soon, you’ll find that you don’t need tasks anymore and you can simply measure progress in terms of the new capabilities you add to your product.
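To make that concrete, here's a rough sketch of the second criterion above as an executable example, using the exception-testing style from the earlier post (Profile, save() and MandatoryFieldException are invented names for illustration):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
import org.junit.Test;

public class ProfileTest {

	@Test
	public void profileCannotBeSavedWithBlankBusinessSector() throws Exception {
		// The example states the behaviour we want, not the UI
		// mechanics (no mention of drop-downs or links).
		Profile profile = new Profile();
		profile.setBusinessSector("");

		Exception thrownException = null;
		try {
			profile.save();
		} catch (Exception e) {
			thrownException = e;
		}
		assertThat(thrownException, is(instanceOf(MandatoryFieldException.class)));
	}
}

As each example like this passes, we've added a new working capability – progress measured in working software rather than completed tasks.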

Updated 30-02-2010: I've inserted a new opening paragraph referencing an article on slicing user-stories, to add some background to the vertical-slicing metaphor. I've provided a link but I'm not sure who first came up with the metaphor.

From Scrum to Kanban – good and bad reasons to switch…

This originally appeared on my old blog in February 2009.

There are, IMHO, some good reasons and some bad reasons to consider switching from Scrum to Kanban… or for considering Kanban over Scrum as a starting point for ‘going Agile’ (so to speak)…

‘Good’ reasons for considering Kanban are…

  • Wanting/needing more visibility of specific development process constraints (bottlenecks) than Scrum gives you (Scrum shouts “there’s a problem!”, Kanban points at where the problem is)
  • Kanban can avoid the waste of stories not quite filling a Scrum sprint (although finishing 'early' can allow teams to make improvements they might not otherwise have afforded themselves)
  • Kanban can focus teams on vertical stories from the outset whereas new Scrum teams seem to start with horizontal slicing.

‘Bad’ reasons to choose Kanban over Scrum are…

  • Wanting to say you are “Agile” without really changing your development process
  • Wanting to hide the rigidity and brittleness of the software your development process produces (which Scrum is exposing) behind Kanban words like 'cadence'
  • Hiding the impact of speculative design behind Kanban work-items when, in Scrum, it fails because the work never seems to fit into a sprint and the story spills over multiple sprints

(by speculative design I mean implementing more architecture than is necessary for the current valued work-item)

For a team with legacy development practices, producing legacy code for which truly incremental and iterative development simply isn't realistic yet, but which wants gradual and continuous improvement… I think Kanban is perhaps the better place to start. Your first 'work-item' may take 3 months… but it's an honest 3 months! The trick is to make continuous improvements that gradually increase the tempo of your delivery.

If a team needs to suffer the pain – the pain that comes from seeing that, no matter how hard you try, you simply can't fit the implementation of even the smallest feature into one month – before it realises it has a problem… then maybe Scrum is the better place to start.

Whichever you choose, I hope you choose the right approach for you, for the right reasons 😉

MARTA – Risk Management… beyond mitigation

Originally posted on my old blog in November 2009:

In a previous rant about the misuse of the term mitigate in the context of risk management I listed the following strategies (I call them MARTA) for managing a given risk:

  • Mitigate – Reduce the severity of its impact
  • Avoid – Don’t do the thing that makes the risk possible
  • Reduce – Make the risk less likely to happen
  • Transfer – Move the impact of the problem to another party (e.g. take out insurance, or outsource with penalties for failure)
  • Accept – Do nothing or set aside budget to cope with the impact

I recently found myself having to explain this and used the analogy of crossing a busy road with fast-moving cars. What’s the risk? Well, you might get hit by a car.

This will probably be more useful if you take a moment to think of a busy road with fast moving traffic that you know of and then use each of the above strategies to identify different ways of managing the risk. What factors would be significant in deciding on which strategy (or combination of strategies) was the way to go?

Ok, now that you’ve had a chance to think about it, here is what I came up with:

  • Mitigate – Walk down the street until I can find a section of the road where there is a 20mph speed restriction. (This is mitigation because I’m not necessarily making it any less likely that I’m hit, but if I am hit the ‘impact’ is reduced – i.e. I’ll probably live – albeit with injury).
  • Avoid – I could simply not cross the street, by deciding that whatever is on the other side simply isn’t that important or I could use an underground subway (which of course has other risks associated with it depending on the area you’re in).
  • Reduce – Find a stretch of road where there are fewer cars – reducing the probability of being hit by a car.
  • Transfer – Get someone else to cross the street, maybe someone more skilled at crossing the road than me.
  • Accept – Now, if it was a busy street, I wouldn’t ‘accept’ the risk. But, if the road allowed for lots of visibility and there were very few cars and there were speed bumps slowing the traffic down to 10mph then I might just accept the risk.

The person I explained this to found this to be a useful exercise in understanding my views on risk management – beyond mitigation. Hope you find this way of explaining it useful too.
