All posts by AntonyMarcano

Co-founder of RiverGlide and an agile software development coach, trainer, speaker and consultant with fifteen years' multi-sector experience, Antony has spent 11+ years in agile software development employing Scrum, Extreme Programming (XP) and more. He has diverse skills but is best known for his abilities with test-driven (or example-driven) methods, software testing and techniques for baking quality into software.

My Tack on Effective Change

One of the key characteristics of how we coach our clients’ teams is that we help them start from where they are and introduce small, frequent changes that help them progressively achieve their goals.

Each incremental change is driven by a problem the teams recognise they are facing. We then help to find a small change they are happy to introduce as an experiment. This change must be a solution or a step towards a solution to the identified problem. Sometimes this experiment will be limited to a single iteration. Sometimes it’s limited to a single user-story in a given iteration.

Based on the outcome of the experiment, the team decides what to do next. They may continue the experiment, introduce the change beyond the limitations of the experiment or even abandon that change and try something else.

This has two distinguishing factors.

1. Solving Problems: We focus on solving problems not introducing solutions

2. Continuous Improvement: Delivery continues, becoming increasingly productive

1. Solving Problems: For example, we’ve often been contacted to ‘introduce TDD’ or ‘transition a team to Scrum’. Usually, we find that behind these solutions are problems that our client instinctively understands are there but articulates as these kinds of solutions. Sometimes they want to reduce the amount of re-work, or get early visibility of achievable scope within a time-constraint. Other times it’s a solution for solution’s sake.

By solving the problems the people in the organisation recognise as holding them back, they are bought into each change. Sometimes our job is to help them recognise the problems. Retrospectives are a great way of achieving that.

2. Continuous improvement: When we help an organisation to solve the problems they face, they often expect large and significant change. Instead, we help them make a continuous series of small, team-driven changes. This allows them to continue to deliver, with the net effect of gradual and continuous improvement in productivity.

We generally find ourselves taking this approach because for most of our clients, big-bang change is not practical simply due to the impact on productivity. We can understand this using the Satir Change Model (explained very well by Steven Smith).

In this simplified and slightly adapted representation, you can see that with a large change there is a period of net-negative productivity. Once the improvement starts to take hold, performance increases take productivity back into the black.

On rare occasions when an organisation’s software reaches critical mass (it’s more expensive to change than to replace) big-bang changes might be necessary. We’ve worked with a client that had exactly that problem. They took the brave step of ceasing delivery while they redefined how they did things and coached their teams in how to do it. This worked out positively for them. After about 6 months of intensive coaching and 3 months of implementation they delivered a revenue generating system to replace something that had been previously worked on for 18 months and had remained incomplete.

Most organisations would find it difficult to take that step and need to continue to deliver whilst transitioning from how they work today to how they want to work in the future. It is for this reason that we find ourselves employing small incremental changes, eliminating the things that add friction to their process – one problem at a time. Doing this with small experiments helps the organisation limit the risk and achieve continuous delivery and continuous improvement.

A way of understanding how this works is with the same Satir Change Model. If we take the curve shown above illustrating big-bang change, shrink it so that each change represents a much smaller alteration to working practices and then repeat, what we end up with is a much healthier looking series of changes.

Yes, each change can slow productivity temporarily but the impact is dramatically reduced and quickly offset by the improvements gained.

The area under the graph represents the output delivered. If the changes are small enough, there is never any period of net-negative output.

Another reason this works well is that it reduces the amount of cultural resistance that you have to deal with. Instead of a lot of resistance from many people for complex reasons, you tend to have only a few people resisting the change. Because the change is driven by a need to solve a recognised problem you have more people in favour of the change. Because change is frequent, people get used to change. In fact, they become experts at it and no longer resist the mere idea of change.

Commercially, it also gives you much more flexibility and allows you to adapt your vision continuously as you learn more.

Julian Davies, a developer at one of our clients, likened this to sailing. In sailing, to travel from point A to point B you do so in a series of direction changes, called tacking. This is necessary when point B is against the wind.

You could, theoretically, do this in two long legs with one direction change. However, this leaves you vulnerable to changes in wind-direction, potentially taking you significantly, perhaps even catastrophically, off course. Tacking is generally done with a series of small diagonal legs, where each leg allows you to adapt to the changing wind. These frequent opportunities to read the context, i.e. the wind and currents, mean that we will only ever be slightly off course, with plenty of opportunity to recover.

Julian likened it to agile development in general but I think it applies equally to what I’m talking about here today.

With big-bang changes, what happens when we’re in the deepest trough of the curve and an opportunity arises? It is much more difficult to change course. With small changes we can put the latest experiment on hold to capitalise on that opportunity. When the wind changes on our projects, say when someone leaves or half the team is ill with a winter flu bug, we can quickly respond to those changes.

Acknowledgements: Thanks to Andy Palmer for reviewing this article and providing the snappy title.

You’re almost cuking it…

In “You’re Cuking it Wrong”, Jonas Nicklas shows several examples of bad scenarios (or acceptance tests, whichever term you prefer) and demonstrates better approaches. This is an excellent post on common mistakes made when writing example scenarios with Cucumber.

I think, however, he could have gone further in one case. One of his examples of a bad scenario looks like this:

Scenario: Adding a subpage
Given I am logged in
Given a microsite with a Home page
When I click the Add Subpage button
And I fill in "Gallery" for "Title" within "#document_form_container"
And I press "Ok" within ".ui-dialog-buttonpane"
Then I should see /Gallery/ within "#documents"

(Dude – yep, seen these… I agree… not good). He goes on to suggest it should really look like this:

Scenario: Adding a subpage
Given I am logged in
Given a microsite with a home page
When I press "Add subpage"
And I fill in "Title" with "Gallery"
And I press "Ok"
Then I should see a document called "Gallery"

This is a massive improvement. It keeps the specifics that inform the reader and give them some context (like filling in the “Title” with “Gallery”) and takes the scenario further away from the implementation. Although it gets closer to expressing customer intent, I think it could go further. At the moment, it is describing the ‘what’ and some of the ‘how’. This example makes complete sense if what we are exploring is the design of the UI. I’ve not found these scenarios to be a good place to do that, however.

Instead, in these specifications we want our examples to illustrate the customer intent. The ‘what’ not the ‘how’. There are other places we can capture the ‘how’ – i.e. in the step methods.

Instead, I would write it like this:

Scenario: Adding a subpage
Given a microsite with a home page
Given I am logged in
When I Add a Subpage with a Title of "Gallery"
Then I should see a document called "Gallery"

I’ve removed all the “Tasks” and left only “Activities”. This leaves the user experience completely open. This ensures that when there are UI changes, I only change the code that performs the tasks (clicking, pressing, etc.) and my scenarios evaluate whether the customer intent is still fulfilled – without having to go back and change a lot of files.
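
To make this concrete, here is a rough sketch of the kind of step method I mean. Jonas’s post is about the Ruby Cucumber, but the same split works in any language; this sketch uses Cucumber-JVM in Java, and the MicrositeUi helper is something I have invented for illustration – it stands for whatever code actually drives your UI:

import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;   // package name varies with your Cucumber-JVM version

import static org.junit.Assert.assertTrue;

// Hypothetical task-level helper: the only place that knows about buttons, fields and dialogs
interface MicrositeUi {
    void clickAddSubpage();
    void fillInTitle(String title);
    void pressOk();
    boolean showsDocumentCalled(String name);
}

public class SubpageSteps {

    private final MicrositeUi ui;

    public SubpageSteps(MicrositeUi ui) {
        this.ui = ui;
    }

    // Activity-level step: expresses the ‘what’ and delegates the ‘how’ to the helper
    @When("^I Add a Subpage with a Title of \"([^\"]*)\"$")
    public void iAddASubpageWithATitleOf(String title) {
        ui.clickAddSubpage();
        ui.fillInTitle(title);
        ui.pressOk();
    }

    @Then("^I should see a document called \"([^\"]*)\"$")
    public void iShouldSeeADocumentCalled(String name) {
        assertTrue(ui.showsDocumentCalled(name));
    }
}

If the “Add Subpage” button later becomes a menu item, only the code behind MicrositeUi changes; the scenario and these step methods stay exactly as they are.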

Otherwise – great post Jonas 🙂

Old Favourite: Expected Exceptions

This first appeared on my old blog in November 2008.

I’ve decided that I don’t like typical patterns for testing exceptions. I decided this a while ago as far as “Expected Exception” attributes/annotations are concerned and stuck with the traditional try/catch approach (I’ll explain why in a minute). Now, I’ve decided I don’t like the typical “try/fail or catch” approach and have started using a subtle evolution of it.

First, let me explain why I don’t like Expected Exception attributes/annotations. The final nail in the coffin of this approach was hammered home for me when working with Liz Keogh a while back.

Here is an example in Java of the typical expected-exception pattern (for brevity I won’t include assertions for e.getMessage()):


@Test(expected=BeyondMyExpertiseException.class)
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();

	expert.howDoIRestore(thisCheckpoint);
}


So, apart from the obvious fact that it is only implicit which method threw that exception (because I know that none of the other steps can throw that exception) – and we want our tests to communicate information explicitly, yes? – the insight that Liz shared with me is that it changes the flow of information (OK, I’m paraphrasing now) compared to a test that doesn’t expect an exception.

In a ‘positive’ test, the flow of information expressed to the reader is: what I need -> what I do -> what I expect. In an expected-exception test, this changes to: what I expect -> what I need -> what I do. The latter just doesn’t flow very well, and because it’s different from your positive tests there’s an overhead involved for the reader (me, or someone else later on) in processing this shift in structure… I’ve found that such tests just don’t jump out at me when I’m scanning the tests…

Since then, despite fashion, I committed to the old-fashioned way of testing for exceptions – “try/fail or catch”:


@Test
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();

	try {
		expert.howDoIRestore(thisCheckpoint);
		fail("Should have thrown " +
			BeyondMyExpertiseException.class.getSimpleName());
	} catch (BeyondMyExpertiseException e) {
	}
}


OK, I accept, it looks more cluttered by comparison, but the flow of information makes more sense and I make it explicit that the expert.howDoIRestore(thisCheckpoint) method call is the one that should have thrown the exception. (Note: the idea here is not to reduce how much you type but to make the test more expressive.) The fail() in the “try/fail or catch” approach only helps when your code doesn’t throw an exception at all… If your code throws a different exception, the failure trace just tells you what exception was actually thrown; it doesn’t tell you what exception was expected. So, here is a slightly different way of writing it:


@Test
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();
	try {
		expert.howDoIRestore(thisCheckpoint);
		fail();
	} catch (Exception e) {
		assertThat(e,is(instanceOf(
				BeyondMyExpertiseException.class)));
	}
}


Notice that I’m only catching Exception now, not BeyondMyExpertiseException. This still feels a little jumbled… Because my assertion is inside the catch block, I have to have the fail() method call just after the call that should throw the exception. Hmmm… don’t like that… Instead, this makes more sense:


@Test
public void shouldComplainWhenNotAClass() throws Exception {
	DomainExpert expert = new DomainExpert();
	String thisCheckpoint = nameOfSomeInterface();
	Exception thrownException = null;

	try {
		expert.howDoIRestore(thisCheckpoint);
	} catch (Exception e) {
		thrownException = e;
	}
	assertThat(thrownException,is(instanceOf(
				BeyondMyExpertiseException.class)));
}


Giving this failure trace when it fails:


java.lang.AssertionError:
Expected: is an instance of
com.testingreflections.atdd.expertise.
    misunderstanding.BeyondMyExpertiseException
        got: < java.lang.UnsupportedOperationException >


So, for those who want to type as little as possible, this isn’t for you… But if you want tests that drive out your exception handling to be more expressive, then this is an alternative to the usual “try/fail or catch” approach. I think that perhaps there’s an even better way of doing this… maybe next I’ll see how approximating closures with an anonymous class might help improve the readability of this… Let me know if you know of a better way.
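
For what it’s worth, here is a minimal sketch of what I mean by approximating a closure with an anonymous class – the ExceptionAssert and Attempt names are just ones I’ve made up for illustration:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;

public class ExceptionAssert {

    // The ‘closure’: wraps the single call that we expect to throw
    public interface Attempt {
        void attempt() throws Exception;
    }

    // Runs the attempt, captures whatever was thrown and asserts its type
    public static void assertThrows(Class<? extends Exception> expected, Attempt attempt) {
        Exception thrownException = null;
        try {
            attempt.attempt();
        } catch (Exception e) {
            thrownException = e;
        }
        assertThat(thrownException, is(instanceOf(expected)));
    }
}

The test then reads:

@Test
public void shouldComplainWhenNotAClass() throws Exception {
    final DomainExpert expert = new DomainExpert();
    final String thisCheckpoint = nameOfSomeInterface();

    ExceptionAssert.assertThrows(BeyondMyExpertiseException.class,
        new ExceptionAssert.Attempt() {
            public void attempt() throws Exception {
                expert.howDoIRestore(thisCheckpoint);
            }
        });
}

All of the try/catch noise moves into the helper and the test goes back to reading need -> do -> expect.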

Being a youDevise Developer – Week 2

This week I got to work on a nice cross-section of things. Over four days I paired with three different people on three different features: a new and interesting reports tool and a couple of features on the main product. Before I worked on anything, the guys kept being very apologetic about the code and how hard it was going to be to work with… but it really wasn’t as bad as they made out. It was quite obvious which code was more recent – it was cleaner and better tested. But even the slightly older code wasn’t as bad as they said it would be.
One challenge I’d previously highlighted did show up: the fact that it’s hard to find which of the end-to-end tests and integration tests you need to run when adding a new feature. When a full suite of integration tests takes 10 minutes to complete, it can be discouraging to run them before you make any changes. A solution that Justin and I had found came in handy. Instead of a 10-minute integration-test run, we got to the point where we could run only the relevant tests for the feature we were working on – which took 50 seconds. We’re not trying to apply this to everything straight away – we’re finding the relevant tests for the feature we’re working on and applying the solution to those as we go.

The solution in question is to use categories to ‘tag’ tests with a feature name and for those categories to be runnable. So, we can tag all the acceptance tests and integration tests relevant to users, say setting up email alerts, and run the ‘category’ as a suite. This is explained in this video about Runnable Categories.
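
I won’t repeat the video here, but to give a flavour of the mechanism: JUnit 4’s categories are one way to get this kind of runnable ‘tag’ (the video may show a slightly different, home-grown variant, and the class names below are invented for illustration – each class would live in its own file):

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// A marker interface acts as the ‘tag’ for a feature
public interface EmailAlerts {}

// Tag the acceptance and integration tests that relate to the feature
@Category(EmailAlerts.class)
public class EmailAlertSetupTest {
    @Test
    public void sendsAnAlertWhenOneIsConfigured() {
        // ... drive the app and check that an alert goes out ...
    }
}

// The runnable ‘category’: runs only the tests tagged with EmailAlerts
@RunWith(Categories.class)
@IncludeCategory(EmailAlerts.class)
@SuiteClasses({ EmailAlertSetupTest.class /* , other candidate test classes */ })
public class EmailAlertsSuite {}

Running the suite gives you just the slice of tests relevant to the feature instead of the full ten-minute run.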

Another good week! Everyone is open to trying new things and very capable.

I’m on holiday for the next week, so my next post on this thread will be in around two weeks. I’m going to miss being there this coming week – but I am human and I do need to have a holiday.

Being a youDevise Developer – Week 1

In my previous post, I gave the background to me spending the next month or two as a developer on a youDevise product.

I’ve just completed my first ‘official’ week working with them. It was one of the smoothest inductions I’ve ever experienced, if not the smoothest. I arrived and was shown a desk to work at. There was a welcome letter in front of a dual-screen developer machine with Ubuntu installed. The letter told me everything I needed to get logged in and access e-mail, along with wiki links telling me where to find the rest of the information I needed to configure the machine for the project I was about to work on.

Their CEO sent out an introduction e-mail telling everyone about me and others who started that day on the development team and in other non-technical departments. I also experienced a warm welcome from the team and was assigned a youDevise mentor to help me settle in.
My first few days involved several presentations – mostly demonstrative – introducing me to youDevise products and their business model. Yet I still got to work on code almost immediately.

I was very lucky to get to pair with their summer intern, Marius Cobzarenco, on a part of an all-new reporting capability for one of their products. I am so impressed with Marius. He left Cambridge University only three weeks ago and I have to say he is one of the most competent graduates that I’ve ever met! He has a level of technical competence that rivals many much more experienced developers and he has a passion and aptitude for learning that is rare. He became competent in Behaviour Driven Development after working with me for only an hour or two and fell in love with the approach. I was so impressed with him that I decided to sponsor his attendance at a TDD workshop this weekend with Jason Gorman.

Marius is a reflection of the high standards youDevise sets for itself. Whenever I’m there, I never feel like the smartest person in the room.

Speaking of smart people, other youDevisers I’ve also had the opportunity to work with include Joe Schmetzer and Stephen Siard.

Joe is a great guy – extraordinarily capable yet incredibly humble. Joe really knows his stuff! I’m looking forward to sharing with him, but mostly, learning from him. Stephen – who I have secretly, in my mind, nicknamed ‘the professor’ simply because his intellect is especially humbling for me – is another who I have been very lucky to work with. He has a way with algorithms that is tantamount to wizardry – but he is far too scientific to be called “the wizard” 🙂

These guys stand alongside other similarly impressive and diverse individuals that I’ll mention in future posts – each with their unique talents.

Next week, we start a new iteration. I’ll be seeing what it’s like working with some of their legacy code. Based on what I’ve seen so far, at least I know they recognise where they have legacy and that they are passionate about writing tests and cleaning the code up as they go.

Being a youDevise Developer – Introduction

youDevise has been one of my clients for a couple of years, introduced to me by Steve Freeman. You can see my influence in various places – I got them started with Root Cause Analysis (which they took and evolved an entire process around) and they took the ideas I shared in JNarrate and implemented them in their own Narrative framework.

Their CTO has been one of the most vocal advocates of RiverGlide, the company I created with Andy Palmer last year.

youDevise are enjoying a period of growth, driven by demand for their services. They have been hiring developers and testers but are struggling to keep up with demand due to their high standards – they value quality over quantity. About six weeks ago they retained my services for two days a month of coaching and consulting. We got talking about the challenges of finding the right talent for their organisation. Then it occurred to them that perhaps I could help, so they asked me if I would consider working for them as a developer in between the coaching and consulting days.

This was a great opportunity for both of us. During our relationship so far, I’d only seen things from the outside. My visits were never more than 1-2 days at a time and involved working with several people looking at slices of problems they wanted to solve or goals they wanted to achieve. I only ever had anecdotal information – never first-hand experience of working in their environment. First-hand experience will give me insights that will contribute significantly to the value I can deliver when coaching their teams.

For me, it meant I would get to work as a team member on a real project. I like to do a tour of duty working on a commercial project for real for at least 3 months out of every 12-18. It keeps my skills sharp and keeps the advice I give as a consultant honest.

So, here we are. For the next month or two I’ll be working 4 days per week for youDevise as a consultant-developer, 2 days a month as a coach/consultant leaving me a couple of days a month to spend with other clients and on other projects.

Monsters, Names, Pot-Roast & The Waterfall Model

“Antony” (without the ‘H’) is the anglicised version of Antonius. In Victorian times (there or thereabouts, I’m guessing), among those wishing to appear oh so intelligent, gossip spread that the spelling of “Antony” was wrong… For, so they would say, it is born of the Greek word “anthos” (meaning “flower”) – oh dear… so many poor children with misspelt names…

Despite this being completely wrong, the world forgot my name’s Etruscan origin and spelt it with an ‘H’… This misinformation established itself through the eras, so much so that, today, the de-facto spelling is “Anthony”. It has even found its way into the American pronunciation of the name: “An-thon-ee”.

Waterfall development has something in common with this story… somehow, through misinformation, what it once was has been warped, into something else.

The key difference is that Waterfall is now increasingly represented as it was originally intended. Unfortunately for me, my name is not…


Monsters & Legends

Some might think that the Waterfall Model is an approach to software development, first explained (but not named) in Winston Royce’s 1970 paper “Managing the Development of Large Software Systems” (PDF), but they could be wrong…

Somehow, it seems to have become something else… it became the way (many) people thought software should be developed… the norm for software ‘professionals’. Years of anecdotal failure followed and Waterfall became a legend – told time and again much like a scary camp-fire story… The enemy of effective software development… A monster that will consume all the resources it can, spewing out nothing but documentation, rarely concluding in working software – at best 20% of the time.

This negative view of what was once the de-facto approach to software development is actually far closer to Royce’s original words on Waterfall than many seem to know…


The Truth & Technology

In his original paper, Royce shows a progression of activities that came to be known as the waterfall model.

What we rarely hear of is Royce’s original words on the subject:

“…the implementation described above is risky and invites failure.”

Further to this, Royce goes on to explain that the reason this cannot work is that there are too many things we cannot analyse up-front:

“The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance.”

He explains that we need feedback loops. He goes on to warn of (a conservative) 100% overrun in schedule and costs:

“…invariably a major redesign is required. A simple octal patch or redo of some isolated code will not fix these kinds of difficulties. The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required. In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.”

Some of Royce’s strategies, like “Involve the customer” and obtaining early feedback, have lived on in modern (Agile) methodologies. Beyond that, we should remember that his specific recommendations on how to solve the problems of a waterfall model were all based on the technology of the time.


Pot-Roast & The Cost of Change

In 1970, computing was much more expensive than it is today. In those days, changing software was far more expensive than changing pictures and words on paper. It was also much harder to express your design in a human-friendly way in the programming languages of that time. As a result of these and other factors, documentation was a major part of how Royce tried to solve the inherent problems of the Waterfall model.


Technology, tools & thinking have moved on and our documentation no longer needs to be static. It lives. It can breath. The specification can automatically verify that the implementation does what we said it should do (e.g. as in BDD Specs or ATDD/TDD Tests). Modern programming languages allow us to express the design and our understanding of the domain far more clearly, negating the need to first detail our thoughts on paper in natural language. We simply don’t have to cut the ends off that pot-roast anymore.


Only now, at the end…

Waterfall, thanks to the popularity of Agile, has gone from something I was shown at school as “how software is developed” to being seen in the light in which it was originally presented – how software should not be implemented.

This is despite those who still profess the legitimacy of Waterfall and those still shocked and surprised when they hear of Royce’s own words against the monster he unintentionally created.

As for my name, I hold out little hope for change. I doubt that the world will use my name as it was originally intended and so I have resigned myself to needing two domain names… one with an ‘H’ in it, and the correct one without – I wonder which one brought you here.

Boredom: a Testing Smell?

This first appeared on my old blog in June 2005.

Somebody I know who was doing some (unscripted) testing spoke of being bored the other day… I have always found boredom to be a sign that something is wrong.

I believe, as has been said by Kaner et al, that testing is a brain-engaged activity. If that is the case, why would I ever be bored?

Borrowing the Smell metaphor… I would say that boredom is a Bad Testing Smell. If it isn’t a bad smell, it is a whiff of an underlying bad-smell for sure.

If on the rare occasion I find that I am bored, I’d ask myself:

  1. Am I testing this area more than I need to? (if so, why?)
  2. Am I losing concentration because my brain is tired? Do I need a break?
  3. Is what I am doing so repetitive that perhaps it should be automated?
  4. Is there a better way of testing this feature?

If I answer “yes” to any of these, I know that perhaps I need to do something differently. Whether that is feasible in a given context, might be a different story.

Update 9th July 2010: This applies to scripted testing as much as it does to exploratory testing. Generally, I’ve found that boredom during scripted testing is because we’re asking a human to do something we should be asking a computer to do… The main barrier to this, in my experience, is that it is harder to write the automated tests than to write manual scripted tests… so, I look for ways to make automated tests as easy to write as manual scripted ones. Usually, I can achieve this with a little effort and have found it makes a huge difference.

Taking repetition to task

This originally appeared on my old blog in March 2010.

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user-stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then I would say my observation of ‘tasks’ is that they are used as the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder, hasn’t anyone else noticed that the idea of counting the effort of completed tasks on burn-down/up charts is counter to the value that we measure progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks, because:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than working software

Tasks are, however, very useful for teams at first. Just like anything else we learn how to do, learning how to do it on paper can often help us then discard the paper and do the workings in our heads. However, what I’ve noticed is that most teams I’ve worked with continue to write and estimate tasks long after the practice is useful or relevant to them.

For example, there comes a time for many teams where tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process or are you just doing it because the process says you should do it?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient, meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria” they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they are talking about how the user interacts rather than what they need to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria, make sure they are more like examples and less like instructions. Once that’s achieved, consider slicing each criterion (or scenario) horizontally with the tasks rather than the story. Pretty soon, you’ll find that you don’t need tasks anymore and you can simply measure progress in terms of the new capabilities you add to your product.

Updated 30-02-2010: I’ve inserted a new paragraph as the opening paragraph referencing an article on slicing user-stories to add some background to the vertical slicing metaphor. I’ve provided a link but I’m not sure who first came up with the metaphor.

From Scrum to Kanban – good and bad reasons to switch…

This originally appeared on my old blog in February 2009.

There are, IMHO, some good reasons and some bad reasons to consider switching from Scrum to Kanban… or for considering Kanban over Scrum as a starting point for ‘going Agile’ (so to speak)…

‘Good’ reasons for considering Kanban are…

  • Wanting/needing more visibility of specific development process constraints (bottlenecks) than Scrum gives you (Scrum shouts “there’s a problem!”, Kanban points at where the problem is)
  • Kanban can avoid the waste of stories not filling a Scrum sprint (although finishing ‘early’ can allow teams to make improvements they might not otherwise have afforded themselves)
  • Kanban can focus teams on vertical stories from the outset whereas new Scrum teams seem to start with horizontal slicing.

‘Bad’ reasons to choose Kanban over Scrum are…

  • Wanting to say you are “Agile” without really changing your development process
  • Because using Scrum is exposing the rigidity and brittleness of the software your development process produces, and you want to hide that behind Kanban words like ‘cadence’
  • Hiding the impact of speculative design behind Kanban work-items, when in Scrum it fails because the work never seems to fit into a sprint, spilling the story over multiple sprints

(by speculative design I mean implementing more architecture than is necessary for the current valued work-item)

For a team that has legacy development practices, producing legacy code for which it simply isn’t realistic to do incremental and iterative development, but which wants gradual and continuous improvement… I think Kanban is perhaps the better place to start. Your first ‘work-item’ may take 3 months… but it’s an honest 3 months! The trick is to make continuous improvements that gradually increase the tempo of your delivery.

If a team needs to suffer the pain – the pain that comes from seeing that, no matter how hard you try, you simply can’t fit the implementation of even the smallest feature into one month – before it realises it has a problem… then maybe Scrum is the better place to start.

Whichever you choose, I hope you choose the right approach for you, for the right reasons 😉