Slow is Smooth–Smooth is Fast (21 Aug 2013)

A few years ago, to explain what goes wrong for many organisations that “go agile”, I wrote an article called “Putting the Kart before the Horse” [1].

Go-karting is where most of the current Formula One racing drivers first learned the basics of race-craft. Antony Marcano, a former kart racer himself, recounts a father-and-son racing experience that helps him explain what goes wrong for many organizations that adopt Scrum as their first attempt to “go agile.”

In this article I explained how novice drivers try to go as fast as possible rather than trying to master the racing line:

At the open practice session my son and I attended, novice drivers seemed to approach their first lap as if it really were as easy as experienced drivers make it look. They floored the throttle and flew down the straight only to crash—painfully and expensively—into the barrier at the first hairpin turn.

I went on to liken this to how many teams adopt agile…

Organizations starting their transition to agile methods with Scrum are much like the novice drivers trying to go full throttle before learning how to take the corners. Before being able to truly bend and change their software with speed and ease, they fly into short iterations and frequent incremental product delivery—crashing painfully into the barrier of hard-to-change software.

It was my son’s first experience in a go-kart. We started by driving slowly with him following me to learn the racing line. We gradually went faster and faster. Soon, we were the quickest drivers on the track.

Since then, he has gone on to advance further and further in his racing. Back then, we were in slow, 35 mph go-karts. This year he started racing for a team in race-tuned karts (see photo). The team’s driver-coach watched him on his first few attempts in his new high-performance machine (70+ mph, super-car acceleration). While he was reasonably quick, he wasn’t smooth.

Photo: Damani Marcano driving his 2013 Mach1 Kart for MLC Motorsport

His coach told him to slow down: “whatever speed you can do while still driving on the ideal racing line is the speed you should be going”. My son went out again. Initially slower than before, his speed progressively built… and we watched his lap-times improve dramatically.

Yet again, we relearned this lesson: slow is smooth–smooth is fast.

For some reason, I see teams that are new to agile development make the same mistake – though they don’t always feel they have a choice. Kanban might make it easier to start from where you are and gradually improve, but it’s OK if you’ve decided to start with Scrum. When starting with Scrum, start slow, focus on doing fewer things well first, then progressively improve. On a course I gave recently, one of the attendees asked, “But how do you get management and marketing to back off enough to allow us to slow down?”

My answer: avoid going faster than you can. If you feel compelled to go faster, make it a business decision. Help them understand the risk that whatever time you save now will cost you more in the future – all the effort you’ll spend repairing the damage that going too fast will cause. Pretty soon you’ll be spending so much time fixing problems and struggling to adapt hard-to-change code that there will be little capacity left to add new features. Essentially, you are incurring a “Technical Debt”.

As a general rule, I always try to start something new, slow and steady. I only go as fast as I know I can while predictably and sustainably delivering. I’m mindful of business pressures but experience has shown me that by focusing on “making things right” we can deliver smoothly, build trust and progressively become faster.

—————–

[1] You can read the original article here: Putting the Kart before the Horse? – published in Better Software Magazine, 2009

My son’s story is continued here: http://damanimarcano.com

It is updated here: https://www.facebook.com/DKMRacing

And here: https://twitter.com/DKMRacing

And here (updated 03-02-2014)…

UPDATE [11-12-2013]: Damani Marcano finished his first ever season 3rd in the Hoddesdon Kart Club Championship Junior Rotax Formula at Rye House, the track that started Lewis Hamilton’s career. This achievement is significant since, as a novice license holder, Damani had to spend the first half of the season starting every heat from the back of the grid. Find out what happens next season by following his story on Twitter and/or Facebook. Who knows, by adding to his ‘likes’ / ‘followers’ you may become a part of his story by helping him become more attractive to a potential sponsor.

This is not a manifesto: Valuing Throughput over Utilisation (11 Feb 2012)

In a previous article, This is not a manifesto, I expressed the values I hold as a software development team member. Today, I’m going to talk about the first of these values.

Before I do, I’d like to say what I mean by “software development team”. I mean a cross-discipline team with the combined skills to deliver a software product – product owner, user experience, business analysts, programmers, testers, dev-ops, etc.

A common problem

Many teams I encounter will, at least in the beginning, have team-members who specialise in a single role. Each team-member will rarely, if ever, step outside their job description [1]. This can cause a problem.

Many teams find themselves in a situation where some team members have little to do on the current stories. At this point, the team has some choices. They can focus the under-utilised people on:

  1. Future user-stories, working on tasks most relevant to their job title
  2. Another team, working on tasks most relevant to their job title (e.g. ‘matrix management’)
  3. Current stories, taking on tasks that will take the story to completion sooner, even if it’s more relevant to someone else’s job title
  4. Things that will make the more utilised people get through their work faster, today and in the future.

More often than not, I find teams taking option 1. I think that people choose this option because it feels like it should increase the throughput of the team – i.e. the amount of new features they can add to the product. In fact, it has the opposite effect.

Little’s law

Let’s consider a team that is working in time-boxes or ‘sprints’ (à la Scrum) and measures its throughput with ‘velocity’ (story-points per sprint). Points are accrued as each user-story is completed – i.e. coded, tested and product-owner validated.

This particular team is able to complete an average of 10 points per sprint. Let’s say this is due to a bottleneck in the process – one that limits the amount of testing that can be completed during the sprint. This is illustrated in fig.1, where each ball is a story point.

 

Fig.1: A system through which a series of balls is processed; each ball represents one story point. There is spare capacity in the team.

Fig.2: The same system with the spare capacity used up, even though this does nothing to increase throughput.

Little’s law [2] tells us that the amount of time it takes to complete an item of work (cycle-time) is:

    Cycle Time = Work in Progress (WIP) / Throughput

Which we can read as:

    Cycle Time = story points in progress / velocity

In this simplified example, WIP is 10 points and throughput is 10 points per sprint. The average cycle time of a story is therefore 10/10 = 1 sprint. Notice, however, there’s all that spare capacity.

Let’s say this spare capacity is developers. So, the team starts taking on more user-stories from the backlog – increasing the number of story points in progress to 20 points (fig.2).

Because the bottleneck remains, velocity remains the same – 10 points per sprint. The amount of flexibility that the team has, however, is now reduced because the average time required to get a story to completion has increased from 1 sprint to 20/10 = 2 sprints [3].
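To make the arithmetic concrete, here is a minimal sketch of Little’s law applied to both scenarios (in Python; my own illustration, not from the original article):

    # Little's law: cycle time = work in progress / throughput.
    def average_cycle_time(wip_points: float, velocity: float) -> float:
        return wip_points / velocity

    # The bottleneck caps velocity at 10 points per sprint in both cases.
    print(average_cycle_time(10, 10))  # 1.0 sprint  (fig.1)
    print(average_cycle_time(20, 10))  # 2.0 sprints (fig.2: more WIP, same throughput)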

Often this will take the form of testers working a sprint behind the developers. Worse still, over 10 sprints, the team still only completes 100 story points (as they would have before) but is left with a lot of unfinished work. This ‘inventory’ of unfinished work carries overheads which can, ultimately, reduce the velocity of the team.

The impact of filling capacity in this way has yet another effect.

Latency effect

As the developers get through more stories, a queue of stories that are “ready for testing” will build up. Testers working through these will, at least, have questions for the developer(s) or even find some defects. The developer, having moved on from that story, is now deep into the code and context of another story. When the tester has questions, or finds defects, relating to a story that the developer had worked on, say a week ago, then the developer has to reload their understanding of this old code and context in order to answer any questions or fix any defects. This context-switching carries significant overheads [4].

The end result is that the effort required to complete a story increases due to the repeated context switching, therefore reducing velocity.

So, not only does filling the capacity with more work fail to increase throughput, it adds costly context-switching overheads – ultimately slowing everything down.

This phenomenon is not unique to teams using fixed-length time-boxes, such as sprints. Strictly following Kanban avoids this problem, but what I’ve seen is some teams creating a ‘ready for testing’ queue – so that developers can start work on the next story. This has the same latency effect and turns a process designed for continuous flow into a batch and queue process. But, I digress.

What to do with the spare capacity?

The simple answer is to look at the whole approach and determine what is slowing things down. In the example above, I’d be wondering what’s slowing the testing down. Many of the things that hinder testing can be addressed by changing how we do things ‘upstream’.

Are lots of defects being found, causing the testers to spend more time investigating and reproducing them? Can we get the testers involved earlier? Can any predictable tests be defined up front so that developers can make sure the code passes them before it even gets to the testers (e.g. Behaviour Driven Development)?

Are the testers manually regression testing every sprint? Could the developers help by automating more of that? Are the testers having to perform repetitive tasks to get to the part of the user-journey they’re actually testing? Can that be automated by the developers?

Is there anything else that is impacting them? Test data set-up and maintenance, product owner availability to answer questions? Anything else?

Addressing any of these issues is likely to speed up the testing process and increase throughput of the entire team as a result. One solution is to put these types of tasks onto the product backlog. This is fine but if we assign point values to them it can give a skewed view of velocity. Or rather, velocity is no longer a measure of throughput. You won’t be able to see if these types of tasks are actually improving things unless you are also measuring what proportion of the points are delivering new product capabilities.
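For example, here is a sketch of the kind of measurement I mean – splitting the points completed each sprint into those that delivered new capabilities and those that enhanced throughput (the data shape and story names are my own assumptions):

    # Sketch: keep 'new capability' throughput visible even when
    # throughput-enhancing tasks are also being pointed. Illustrative data.
    sprint_points = [
        {"story": "checkout flow", "points": 5, "new_capability": True},
        {"story": "automate regression pack", "points": 3, "new_capability": False},
        {"story": "saved baskets", "points": 5, "new_capability": True},
    ]

    total = sum(s["points"] for s in sprint_points)
    features = sum(s["points"] for s in sprint_points if s["new_capability"])
    print(f"{total} pts completed; {features} pts ({features / total:.0%}) were new capabilities")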

The only good reason I can think of for story-pointing these throughput-enhancing tasks is if your focus is utilisation – i.e. maximising the number of story-points in progress. Personally, I care more about measuring and improving throughput. By doing so, we get the right utilisation for free and a faster, more capable team.

Up next: Valuing Effectiveness over Efficiency.

Footnotes:

[1] “Lessons Learned in Close Quarters Battle” illustrates how stepping outside our job descriptions can move the team through each story more quickly, using special-forces room-clearing as an analogy.

[2] Little’s law (PDF) – see the section “Evolution of Little’s Law in Operations Management”, which references Hopp and Spearman’s observation about throughput (TH), work-in-progress (WIP) and cycle-time (CT) – i.e. TH = WIP/CT, and therefore CT = WIP/TH.

[3] Little’s law also illustrates that if we reduce the work in progress to one quarter, then the cycle time for each story reduces to one quarter of a sprint. We’ll still get through 10 points per sprint, but stories will be completed more as a continuous stream throughout the sprint rather than all at the end.

[4] “The Multi-Tasking Myth” by Jeff Atwood, talks about multi-tasking across projects and pulls together several resources to illustrate the impact of multitasking at various levels. This applies when multi-tasking across stories.

Acknowledgements:

I’d like to say a special thank you to my fellow RiverGliders – Andy Palmer and James Martin – for the feedback that helped me refine this article.

Sushi, Hibachi and Other Ways of Serving Software Delicacies (1 Jul 2011)

Let’s say you own a restaurant. Quite a large restaurant. You’ve hired a manager to run the place for you because you are about to take the fruits of your success and invest in opening three more around the country. You leave the manager with a budget to make any improvements he sees fit.

After being away for a few weeks, you return to your restaurant. It’s very busy thanks to the great reputation you built for it. The kitchen has some new state-of-the-art equipment and you can see a lot of chefs and kitchen-staff preparing meals furiously. Obviously, with a restaurant this full, you have to prepare a lot of meals.

Too many chefs?

But then, you notice that there are only a few service staff taking orders and delivering the meals to the customers. Tables are waiting a long time for their meals. You notice the service staff spending a lot of their time taking meals back to the kitchen because they have gone cold waiting so long to be served. You also overhear several customers complaining that the meal they received was not the one they ordered, increasing the demand for the preparation of more meals. You call the manager over and ask him what’s going on… he says “we need to invest more in the kitchen”. You wonder why. He tells you “because there are so many meals to prepare and we can’t keep up”.

At this point, it’s quite obvious that the constraint is not a lack of investment in the kitchen; the under-investment is in taking the right orders and getting the meals to the customers. To solve this, I think most people would invest in getting the orders right and getting the meals to the customers, not put more into the kitchen.

Obvious, right? Despite this, time and again, I see organisations happy to grow their investment in more programmers writing more code and fixing more bugs, while all but ignoring the end-to-end process of finding out what people want and taking those needs through to working capabilities available to the customer. Business analysis, user-experience, testing and operations all seem to suffer from under-investment. If it takes you four weeks to write the code for several new features and then another four weeks for that code to become capabilities available to your customer, hiring more programmers to write more code is not going to make things any better.

Sushi and Hibachi

So, how would you advise the restaurant? In the short term, you can get some of the chefs to serve customers while you hire more service staff. Another solution is to invest in infrastructure that brings the meals closer to the customer, such as in hibachi restaurants where the chef takes the order and cooks the meal at the table or in sushi restaurants where meals are automatically transported via conveyor belt.

In software teams, we can bring the preparation of features closer to our customers. We can involve a cross-functional team in the conversation with the customer (such as the chef being at the customer’s table). We can also automate a large proportion of the manual effort of getting features from a programmer’s terminal into the hands of the customer (as in with the conveyor belt). And, there is so much we can automate.

Integration, scripted test execution, deployment and release management can largely be automated. Since speeding up these tasks has the potential to increase the throughput of an entire development organisation, it seems appropriate to invest in these activities as we would in automating any other significant business activity – such as stock-control or invoice processing.
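As a sketch of what that automation investment might look like at its simplest – a ‘conveyor belt’ that takes a feature from commit to a releasable state without manual effort (the stage commands below are placeholders, not from any real project):

    # Sketch of a minimal delivery 'conveyor belt'. Each stage's command
    # is a placeholder for whatever your project actually uses.
    import subprocess, sys

    PIPELINE = [
        ("integrate", ["make", "build"]),
        ("scripted tests", ["make", "test"]),
        ("deploy to staging", ["make", "deploy-staging"]),
    ]

    for stage, command in PIPELINE:
        print(f"stage: {stage}")
        if subprocess.run(command).returncode != 0:
            sys.exit(f"pipeline stopped at: {stage}")
    print("ready to release")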

Cooking for ourselves

Why not ask a good number of our programmers to take some time out from automating the rest of the business and automate more of our own part of the business? Taking this approach seems to be the exception rather than the rule. I see organisations having one or two dev-ops folks trying to create automated builds, and a similar number of testers, with next to no programming experience, trying to automate a key part of the business – regression testing. And then asking them all to keep up with an army of developers churning out new code and testers churning out more scripted tests.

Serving up the capabilities

Instead, let’s stop looking so closely at one part of the process and look more at the whole system of getting from concept to capability. Let’s invest in automating for ourselves as much as we automate for others – and let’s dedicate some of our best people to doing so.

In short, it’s not just about the speed at which we cook up new features. It’s as much about the speed at which we can serve new, working capabilities to our customers.

To hack, or not to hack (7 Jun 2011)

Recently, Engadget reported that Nokia had decided to abandon MeeGo in favour of Windows Phone. What was interesting is the reported process used to make this decision:

Elop drew out what he knew about the plans for MeeGo on a whiteboard, with a different color marker for the products being developed, their target date for introduction, and the current levels of bugs in each product. Soon the whiteboard was filled with color, and the news was not good: At its current pace, Nokia was on track to introduce only three MeeGo-driven models before 2014 – far too slow to keep the company in the game.

This brought to mind some blog posts I wrote in early 2010 (now copied to this blog – see previous two entries). I talked about ‘critical mass’…

Critical Mass of Software: the state of a software system when the cost of changing it (enhancement or correcting defects) is less economical than re-writing it.

In the case of Nokia and MeeGo, there isn’t enough information to know if that’s what had happened, but it does suggest, at least, that it was more economical to choose an alternative third-party OS than to continue with MeeGo. Integrating Windows Phone is going to come at some cost, of course, but clearly they decided it was less than the cost of sticking with their current strategy.

This is an example of modern markets demanding more sustainable, adaptive and agile product development cycles, and a company recognising this and acting on it.

It is also a demonstration of how an ever-growing cost of change (e.g. through the number of bugs consuming resources that would otherwise be adding new value) is really a deferral of cost – i.e. technical debt.

There are many arguments for ‘prudent’ acceptance of technical debt… This story, I think, is an example of where it wasn’t obvious when that debt was no longer prudent until it was too late for MeeGo as a product.

And this is one of the problems with some lean concepts applied to new products and start-ups. The advice is often to get feedback from the market quickly by hacking the product together and release as quickly as possible. The trouble is, at what point do we take a step back and change to more sustainable practices? Probably around the time the product is selling… but then we’re under pressure from competitors who have seen our innovations and want to copy them… so we’re tempted to keep hacking out more features only to find that the faster we hack the slower we go.

There is no magic answer to the question “when do we stop hacking and start building sustainably?” We might wait until we’re making enough money to hire more people to do remedial work.

As soon as we find we’re enhancing a feature, it might mean that it’s a key part of our product – one we’re going to keep. We might then treat any code we touch during that enhancement as legacy code, building in the cost of remedial work into the enhancement.

Some would argue that it’s faster overall to never hack things in and to commit to sustainable product development practices from the get-go. This probably applies in many cases, but especially to large, complex products like operating systems.

Whatever the strategy used, there will come a point where you can’t hack in any more features. The question is: will you decide when that happens, or will the cost of change decide for you?

Old Favourite – More Sharks and Delaying Critical Mass (7 Jun 2011)

This article originally featured on my old blog on 19th January 2010.

In a previous post I talked about Critical Mass of software. I showed how an ever-increasing cost of change resulted in it becoming more economical to completely rewrite the system than to enhance and maintain the original.

I explained how this could be avoided by using practices that sustain a consistent and flat cost of change. I also mentioned that you could defer reaching critical mass. Some teams find it difficult to get the time to do this because “the business” always has “more important” or “higher-value” things on their backlog.

What are the implications of reaching critical mass? Well, depending on what the software does and whether the rewritten version still has to do all of those things, it could cost millions… or more.

Presented with the situation that the product could reach critical mass within a year – costing millions to replace – do you think “the business” would start to see it as worth investing in reducing the cost of change? Obviously, I would advise teams to avoid this situation in the first place.

The reality for many teams I’ve encountered is that they don’t feel empowered to push back on the business’ demands for that next feature in half the time it takes to do it properly. If we could present back the impact of that choice – a shorter time to Critical Mass, and costs to replace the software far in excess of the value gained by delivering that feature a month early – then perhaps the business would be better informed.

A big visible chart on the wall, showing the estimated point at which Critical Mass was reached could play a big role in getting the business more interested in sustainable change, or at least inspire a conversation on the subject.

Convincing “the business” to invest in some remedial refactoring – or to allow three times as long for each feature, making room for enough remedial refactoring alongside the refactoring-as-we-go for the new feature – might be easier if we could represent this idea visually.

The problem with this idea is: how do we credibly determine when Critical Mass is going to be reached? Many experienced practitioners could probably estimate this reasonably accurately on gut feel alone, but such an estimate would be torn to pieces by many product managers. I’ve not solved this problem yet, because doing so would require a mathematical model backed by empirical data.

Perhaps someone will take inspiration from the ideas in these blog posts and find a solution. Perhaps someone has already done this, or maybe it’s an idea for a future PhD I might undertake. Or maybe someone else will take up that work before I do. If so, they’re welcome to (with due attribution where applicable, of course ;-)

I think it is possible, however, to come up with a simple model based on the average complexity per unit of value (assuming that the team is estimating value and complexity). Trending this, and keeping an estimate of a complete re-write up to date, might allow the simple charts above to be maintained alongside release-level burn-down charts.
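As a very rough sketch of such a model (entirely my own illustration, with made-up numbers – assuming the team records an average cost per story point each release and keeps a re-write estimate up to date):

    # Rough sketch: fit a linear trend to the cost per story point and
    # project the release at which changing the system costs more per
    # point than a re-write would. Illustrative numbers only.
    cost_per_point = [1.0, 1.2, 1.5, 1.9, 2.4]  # effort-days per point, per release
    rewrite_cost = 400.0                         # current re-write estimate, effort-days
    points_in_product = 100                      # value the re-write must reproduce

    rewrite_cost_per_point = rewrite_cost / points_in_product  # 4.0
    growth = (cost_per_point[-1] - cost_per_point[0]) / (len(cost_per_point) - 1)

    release, cost = len(cost_per_point), cost_per_point[-1]
    while cost < rewrite_cost_per_point:
        release += 1
        cost += growth
    print(f"Critical Mass projected around release {release}")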

These ideas are still in their infancy for me. Has this problem been solved already? If not, I encourage others to explore my hypothesis and help take it from just an interesting idea to something more useful.

Old Favourite – Sharks, Debts, Critical Mass and other reasons to Sustain Quality (7 Jun 2011)

This article originally featured on my old blog on 18th January 2010.

A while back I tweeted about critical mass of software:

Critical Mass of Code – past which the changeability of the code is infeasible, requiring that it be completely rewritten.

An elaboration of this might be:

Critical Mass of Software: the state of a software system when the cost of changing it (enhancement or correcting defects) is less economical than re-writing it.

This graph illustrates a hypothetical project where the cost of change increases over time (the shape of which reminds me of a thresher shark):

Note: The cost of a rewrite gradually grows at first, to account for the delay between starting the re-write and achieving the same amount of functionality as the original version of the software at the same point in time. The gap between the cost-of-change and the cost-of-rewrite begins to decrease as the time it takes to implement each feature of comparable benefit in the original version grows.

It was just one of those thoughts that popped into my head. I soon discovered that critical mass of software is not a new idea.

One opportunity that I feel organisations miss out on is that this can be used as a financial justification for repaying technical debt, thus preventing or delaying the point at which software reaches its Critical Mass, or avoiding it altogether.

Maintaining quality combined with practices that keep the effort involved in completing each feature from growing over time can defer critical mass indefinitely:

Note: The cost of a rewrite is slightly less at first assuming that we are doing a little extra work to ensure that we get the infrastructure we need up and running to support sustainable change (e.g. continuous integration etc.)

This flat cost of change curve is one of the motivations of many agile practices and software craftsmanship techniques, such as continuous integration, test-driven-development, behaviour driven development and the continuous refactoring inherent in the latter two. A flat cost-of-change trend is good for the business, as they have more predictable expenditure.

The behaviour I’ve noticed with some teams is to knowingly accrue technical debt to shorten time to a near-future delivery in order to capitalise on an opportunity and then pay it back soon after:

This certainly defers reaching critical mass but it doesn’t prevent it.

Unfortunately, I see more teams taking on technical debt and not repaying it (for various reasons, often blamed on “the business” putting pressure on delivery dates).

If we had a way of estimating the point at which we’d reach critical mass, we’d be able to compare the benefit of repaying technical debt now in order to avoid or delay the cost of a rewrite… or even better, demonstrate the value of achieving a flatter cost of change.

I’m not sure that we have the data or the means of predicting it, but perhaps these illustrations of a hypothetical scenario will help you explain why we should sustain constant quality, automate repeatable tests (or checking, as some prefer to call it) wherever technically possible and, on those occasions when we do make a conscious choice to compromise quality to capitalise on an opportunity, repay the debt soon after.

One company I know reached this point a couple of years ago and embarked on a department wide transformation to grow their skills in agile methods so that they not only re-wrote the software but could also avoid ever reaching critical mass again. A bold move, but one that paid off within a year. Even for them, they’ll need to avoid complacency as time goes on and not forget the lessons of the past. Let’s hope they do.

What do special forces teams have in common with agile teams? (30 May 2011)

In 2008, I wrote an article for Better Software Magazine:

“Few would think that Special Forces tactics bear any relation to software project teams. But Antony Marcano draws a surprising parallel between the dynamics of modern Special Forces “room-clearing” methods and the dynamics of modern software development teams.”

My thinking has moved on slightly since then. Recently, several people have expressed an interest in the article.

So, here is an updated version, based on my more recent thinking and with some additional detail that had to be omitted from the original due to word-count restrictions of publishing in printed form…

Please note – The comparison with military and police armed forces is purely metaphorical and specific to the message I hope emerges about roles and titles.

Lessons Learned in Close Quarters Battle

I’m at the door, heart pounding so loud I’m sure everyone can hear it. Four silhouettes are flat to the wall. I’m point man, stuck to the edge of the door we’re about to breach. The team lead whispers through a covert radio mike,  “Alpha 1 in position.” Silence… tense anticipation… then, after a thirty-second eternity, my earpiece crackles as control responds, “Standby… Standby… Strike! Strike! Strike!”

The method of entry (MOE) man, on the opposite edge of the door frame, kicks in the door; behind me, the team lead throws in a stun grenade. BANG! As point man, I enter immediately—followed by number two—and five action-packed seconds and three targets later, we’re able to call out “ROOM CLEAR!”

Photo: Antony training with his Airsoft team

Instantly, we move to the next room. The MOE man, being nearest to it, immediately becomes point man. The man on rear cover follows, taking up the MOE position. Meanwhile, the original number two and I are exiting the room.

I’m the first out and, seeing that the number two position on the next door is yet to be filled, I fall in behind the point man, becoming team lead. I am now responsible for calling the next breach and for throwing in the next flash-bang. The former number-two drops in behind me to provide rear cover. Each of us instantly switches roles, and with a deafening bang, the next door is breached without hesitation.

This might sound like a scene from an action movie but, no, it was a close quarters battle (CQB) training session I attended with the rest of The Foundation:Special Operations Group—a top UK Airsoft team. Airsoft is a skirmishing sport similar to paintball but with more realistic looking equipment and no mess.

In this training session, run by a private security contractor, we were practicing the latest CQB techniques taught to police and military armed forces. In CQB situations, armed forces don’t have time to shuffle around to get team members into the position that an individual’s job title might dictate. Lives are at stake (or points in Airsoft). The team must be able to adapt instantly; there can be no waste in the process.

 

Fixed Roles are accompanied by Waste

Historically, CQB dynamic entry (room clearing) was, and still is in places, taught with fixed roles. Instead of each person reading the situation and falling into the position that maximises the flow of the team from room to room, each person had a fixed role.

Prior to practicing the dynamic role-switching, or ‘read system’, we also tried the fixed-role system. This was noticeably slower. Instead of the team-members outside the room falling straight into position on the next room, they had to wait for numbers 1 and 2 (and sometimes number 3) to exit the first room and get into position covering the next door. Only then could the MOE specialist move to the door, allowing the last team-member to move up and continue in their role providing rear cover.

Waiting for each specialist to get into position can take 10–30 seconds longer per room – several minutes of wasted time when you have a whole series of rooms to clear. This fixed-role system not only puts the same person at the greatest risk, as point-man on every room (as was highlighted by a British Army CQB instructor I trained with last year), but also reduces your flexibility and can even halt the operation—especially when one of your team-members is injured (or worse).

If everyone is a specialist, you need more people in those specialist roles to deal with the risk that one of them could be taken out at any time. We don’t have quite this concern in software teams, however, people do fall ill or need time off work for personal reasons.

 

We Still Need Experts

Indeed, there are experts within a dynamic team. For example, our MOE expert might tackle the especially tricky entries or advise the team on entry tactics before the game, but we are all capable of MOE to some degree. Each of us is capable of dynamically switching roles, quickly adapting to changing circumstances. This is a perfect example of a truly cross-functional team of generalising specialists [1].

Skills Demands are Constantly Changing

This dynamic role switching is almost the opposite of your typical software organisation where each person has a job title that fixes his role—business analyst, programmer, tester. These job titles make complete sense in phased-development approaches (e.g. waterfall) where the work is divided up as if it were a production line: Business analysts pass the outcome of business analysis to developers who pass the result of development on to testers and so on. These job titles make less sense when using agile approaches that integrate these activities so tightly that they are all but inseparable.

More significantly, however, the balance of skills needed during each iteration (or time-box) and even for each user-story fluctuates depending on the nature of the features being implemented. One change may involve more refactoring (changing internal and not external behaviour) and be largely protected by pre-existing automated tests. Another change may involve limited coding and much more exploratory testing. There are endless variations on how the emphasis on each skill-area will change as the work flows through our value-stream.

Like the Airsoft (or Special Forces) team going from one room to the next, software teams must seamlessly go from one story (or feature or business-value-increment) to the next. It’s simply too wasteful for progress to be halted while we wait for a fixed-role specialist to finish his previous task.

Everyone on the team must constantly adapt so that we continue to maintain a constant and efficient flow of value… Fulfilling our shared responsibility of frequently delivering working software.

History Repeating?

An increasing number of organisations seem to recognize this, creating a demand for multi-skilled people. Interestingly, however, history seems to be repeating itself!

Up to the mid 1990s, high- and low-level software design was performed by systems analysts and then coded by programmers. Perhaps due to the demands of rapid application development, the roles later combined and the multi-skilled analyst-programmer emerged. Subsequently, this became the norm and analyst-programmers were thereafter known only as “developers.”

Today the title of developer-tester (or tester-developer) is emerging in response to the flexibility demanded by agile teams—kind of an analyst-programmer-tester. As the uptake of agile methods grows, this demand is only going to rise! If history indeed repeats itself, the developer-tester may, too, become the norm—negating the need for the “tester” suffix that differentiates them. Yes, the “software tester” job title, one of the few remaining titles derived from phased-development methods of old, could suffer the same fate as Ye Olde Systems Analyst.

This wouldn’t mean that software testing as a discipline will disappear altogether—just that many of the testers and developers of today will need to leave their comfort zones to become the developers of tomorrow.

Acquire ‘Tags’ or Badges – not a Job Title

Fixed, specialised job titles alone could be part of the problem. Maybe we need to stop linking specific roles to job titles. So often at conferences people ask the audience, “put your hand up if you are… analysts… programmers… testers… designers…”

I notice that many people will only put their hand up when one of these is called. You may notice me put my hand up for each of these roles. I think of these labels more like tags than titles. I feel I can tag myself with all of these roles and switch between them (to varying degrees of competency) as the situation demands.

Maybe they’re not so much like tags as badges. In some forces and certainly in the cadets and even among scouts and brownies, as you demonstrate new skills, you are entitled to wear new badges.

The difficulty with fixed-role thinking is that for many, their very identity is attached to their job titles. The challenge becomes showing people why they would want to broaden their thinking. I’m not sure I have time to address that in this particular post.

Maximise (your) Value

I can tell you what’s in it for me, personally. The tasks most relevant to my job-title may not be relevant to the team’s current story or goals. By dynamically switching roles, I am able to do the task that is the most relevant and valuable to the team at that time, within the context of the project’s goals, rather than just the tasks of most relevance to my job title. I am then able to accelerate the speed at which the team can clear each story, as the CQB team clears each room.

I recognise that this isn’t always immediately practical. Some teams are very protective over access to source code… but this is a matter of trust. There are many ways you can earn this trust… It may be that you need to demonstrate your abilities to manage your own git, mercurial or subversion repository before you’ll be given access to the relevant project repositories. It may be that you can demonstrate that you are able to build an open-source code-base using similar technology on your own. You may simply need to be able to talk about the right things while asking another team-member with a different area of expertise for help – demonstrating that you’ve at least understood the basics. So, one of the things outside your job-title that may add value to the team might be learning how to do something that the team will soon need doing.

So, hold on to your job titles and stick to your job descriptions if you choose. Personally, I’ve chosen to develop my individuality, embrace new skills and acquire new ‘tags’ & badges—bringing to the team more value, flexibility, throughput and opportunity.

 

References: [1] Ambler, Scott. “Generalizing Specialists: Improving Your IT Career Skills.” Agile Modeling, 2006. www.agilemodeling.com/essays/generalizingSpecialists.htm

To find out more about Airsoft or to take your colleagues on a similar journey to this article, check out http://firefight.co.uk. Tell them that this article sent you their way… they know who I am and have already taken one of my client’s teams on a similar journey.

Old Favourite: Adaptive Budgets? “Pull” the other one! (13 Dec 2010)

This was originally posted on my old blog on 10th April 2010.

Recently, I wrote about my views on using and estimating with task-cards. I highlighted that tracking progress with burn-up/down charts showing effort completed/remaining is not a true measure of progress, especially if we subscribe to the idea that we measure progress with working-software.

I also highlighted how tasks “horizontally slice” a “vertically sliced” story.

This inspired an article on InfoQ by Mark Levison. Since its publication, there have been several comments.

There are some specific points I’ll be answering on InfoQ, but some general points are more easily answered here…

True measures of progress
One of the key points I was trying to make in my first post (linked to above) is that if “working software” is our measure of progress then tracking task completion is not consistent with that.

One of the problems task-hours are trying to solve is to provide a visible indication of progress. It’s also something that many people find easier to understand. The idea of relative-points estimation seems to baffle many people when they first encounter it.

In my original post (linked above) I referenced an approach, blogged about by Jason Gorman, outlining a solution to providing a visible indication of progress that is compatible with the idea of measuring progress with working software. It also works more seamlessly with the solution of measuring throughput trends of the team (e.g. with story points) so that the team can estimate how much can be done in the next iteration.

Value Points & Task Points
Value points were mentioned in the InfoQ comments. Value points solve a different problem and don’t help the team estimate what can be done. The value of a story isn’t an indication of its complexity. Value, combined with complexity points, can help determine priorities. Arlo Belshee explains some interesting views on this.
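As an illustration of that combination (a sketch of my own, with made-up numbers – not a description of Arlo’s approach), stories can be ordered by estimated value per point of complexity:

    # Sketch: combine value and complexity estimates for prioritisation
    # by ordering stories on value per point of complexity.
    backlog = [
        {"story": "one-click reorder", "value": 8, "complexity": 5},
        {"story": "export to CSV", "value": 3, "complexity": 1},
        {"story": "redesign settings page", "value": 2, "complexity": 3},
    ]

    for s in sorted(backlog, key=lambda s: s["value"] / s["complexity"], reverse=True):
        print(f"{s['story']}: {s['value'] / s['complexity']:.1f} value per complexity point")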

Prioritisation is, in my view, one of the few reasons to place some sort of estimate on value & complexity. Unfortunately, it seems to be often used to establish a contract with the team for when something will be done and apply pressure when the estimates turn out to be inaccurate.

Non-estimating pull systems
My preference is to use such estimates only for prioritisation… after that, things just take as long as they take. There is some benefit in tracking accuracy trends so that we can learn from them, however, spending too much time on these things simply slows down the process of getting things done. Interestingly, I have not yet seen the business be quite so passionate about evaluating whether their value estimates were accurate when the product makes it to the real world. Funny that.

Instead, I prefer to leave estimation behind once we’ve prioritised things and pull stories or customer valued work items through the system without using previous estimates to predict their completion date. With the way that most budgets are determined and allocated, this idea isn’t compatible with the way much of the business world works. For pull systems that eliminate estimation as a means of predicting the future to work (such as Kanban), we need a more adaptive approach to budgetary spend. Business needs to find a way to adapt budget allocation more frequently, perhaps as frequently as monthly, perhaps more frequently still. The business now needs to look at how it can respond to change rather than focus on following a budget-plan.

Business-world: now it’s your turn
Budget holders now need to be more agile. We, those who evolve the implementation of the ideas of the business, have responded to their demands to be able to respond to rapidly changing markets and provide that competitive edge. For the more mature implementation teams, the hindrance now is no longer how we make the products, but the business culture of inflexible and predictive budget allocation.

Old Favourite: Taking Repetition To Task (13 Dec 2010)

This originally appeared on my old blog on 16th March 2010…

Others have talked about the virtues of stories as vertical slices of a problem (end-to-end capabilities) rather than horizontal slices (system layers or components). So, if we slice the problem with user stories, how do we slice the user-stories themselves?

If, as I sometimes say, acceptance tests (a.k.a. examples/scenarios/acceptance-criteria) are the knife with which we slice a story into even thinner vertical slices, then my observation is that ‘tasks’ are the knife used to cut a story into horizontal slices. This feels wrong…

Sometimes I also wonder, hasn’t anyone else noticed that the idea of counting the effort of completed tasks on burn-down/up charts is counter to the value that we measure progress only with working software? Surely it makes more sense to measure progress with passing tests (or “checks” – whichever you prefer).

These are two of the reasons I’ve never felt very comfortable with tasks, because:

  • they’re often applied in such a way that the story is sliced horizontally
  • they encourage measuring progress in a less meaningful way than working software

Tasks are, however, very useful for teams at first. Just like anything else we learn how to do, learning how to do it on paper can often help us then discard the paper and do the workings in our heads. However, what I’ve noticed is that most teams I’ve worked with continue to write and estimate tasks long after the practice is useful or relevant to them.

For example, there comes a time for many teams where tasks become repetitive. “Add x to the Model”, “Change View”… and so on. Is this adding value to the process or are you just doing it because the process says you should do it?

Simply finding that your tasks are repetitive doesn’t mean the team is ready to stop using them. There is another important ingredient, meaningful acceptance criteria (scenarios / acceptance-tests / examples).

I often see stories with acceptance criteria such as:

  • Must have a link to save the profile
  • Must have a drop down to select business sector
  • Business sector must be mandatory

Although these are “acceptance criteria” they aren’t what we mean by acceptance criteria in the context of user stories. Firstly, they are talking about how the user interacts rather than what they need to achieve (I’ve talked about this before). Secondly, they aren’t examples. What we want are the variations that alter the behaviour or response of the product:

  • Should create a new profile
  • Profile cannot be saved with blank “business sector”

As our product fulfils each of these criteria, we are making progress. Jason Gorman illustrates one way of approaching this.

So, if you are using tasks, consider an alternative approach. First, look at your acceptance criteria, make sure they are more like examples and less like instructions. Once that’s achieved, consider slicing each criterion (or scenario) horizontally with the tasks rather than the story. Pretty soon, you’ll find that you don’t need tasks anymore and you can simply measure progress in terms of the new capabilities you add to your product.
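To make that concrete, here is a minimal sketch (my own, in Python with pytest; the myproduct module and create_profile API are hypothetical) of the two examples above expressed as executable checks – progress is then simply the number of examples that pass:

    # Each acceptance example becomes an executable check, so progress is
    # measured by passing examples. 'myproduct' is a hypothetical module.
    import pytest
    from myproduct import create_profile, ValidationError

    def test_should_create_a_new_profile():
        profile = create_profile(name="Ada", business_sector="Finance")
        assert profile.saved

    def test_profile_cannot_be_saved_with_blank_business_sector():
        with pytest.raises(ValidationError):
            create_profile(name="Ada", business_sector="")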


Old Favourite: QA / Testing – what’s the difference? (19 Nov 2010)

Software is one of the few industries that lumps testing and QA under one banner. It’s one of those things where common misuse of a term results in the community changing its meaning… this happens in mainstream language all the time.

Testing something is actually more analogous to quality control, or QC [although it isn’t quality control].

QA is more concerned with the process – collecting information about the performance of the process in order to determine whether we are ‘assuring’ (or, more realistically, increasing the probability of) quality. Statistical information about problems found in the product (during quality control) is just one of many pieces of information useful to someone concerned with QA (which really should be the whole project team).

In short:

QC helps us answer the question ‘does our product work?’

QA helps us answer the question ‘does our process work?’

Unfortunately, in the software industry, all too many teams don’t realise their process doesn’t work until the testers find all the ways in which the product doesn’t work… maybe that’s why software testing has come to be known as QA.

This article originally appeared on my old blog in July 2008. I had already elaborated on this topic in the article “What’s in a word” in Better Software Magazine in March 2008.
