To hack, or not to hack

Recently, Engadget reported that Nokia had decided to abandon MeeGo in favour of Windows Phone. What was interesting was the reported process used to make this decision:

Elop drew out what he knew about the plans for MeeGo on a whiteboard, with a different color marker for the products being developed, their target date for introduction, and the current levels of bugs in each product. Soon the whiteboard was filled with color, and the news was not good: At its current pace, Nokia was on track to introduce only three MeeGo-driven models before 2014 – far too slow to keep the company in the game.

This brought to mind some blog posts I wrote in early 2010 (now copied to this blog – see previous two entries). I talked about ‘critical mass’…

Critical Mass of Software: the state of a software system when the cost of changing it (enhancement or correcting defects) is less economical than re-writing it.

In the case of Nokia and MeeGo, there isn’t enough information to know whether that is what happened, but it does suggest, at least, that it was more economical to choose an alternative third-party OS than to continue with MeeGo. Integrating Windows Phone will come at some cost, of course, but clearly Nokia decided it was less than the cost of sticking with its current strategy.

This is an example of modern markets demanding more sustainable, adaptive and agile product development cycles, and a company recognising this and acting on it.

It is also a demonstration of how an ever-growing cost of change (e.g. through the number of bugs consuming resources that would otherwise be adding new value) is really a deferral of cost – i.e. technical debt.

There are many arguments for ‘prudent’ acceptance of technical debt… This story, I think, is an example of where it wasn’t obvious when that debt was no longer prudent until it was too late for MeeGo as a product.

And this is one of the problems with some lean concepts applied to new products and start-ups. The advice is often to get feedback from the market quickly by hacking the product together and release as quickly as possible. The trouble is, at what point do we take a step back and change to more sustainable practices? Probably around the time the product is selling… but then we’re under pressure from competitors who have seen our innovations and want to copy them… so we’re tempted to keep hacking out more features only to find that the faster we hack the slower we go.

There is no magic answer to the question “when do we stop hacking and start sustainably building?” We might wait until we’re making enough money to hire more people to do remedial work.

As soon as we find we’re enhancing a feature, it might mean that it’s a key part of our product – one we’re going to keep. We might then treat any code we touch during that enhancement as legacy code, building the cost of remedial work into the enhancement.

Some would argue that it’s faster overall to never hack things in and to commit to sustainable product development practices from the get-go. This probably applies in many cases, but especially to large, complex products like operating systems.

Whatever the strategy used, there will come a point where you can’t hack in any more features. The question is, will you decide when that happens or will the cost of change decide for you?



  • Hi Antony, would be interesting to know how you see the forked Kanban approach fitting here.

  • Sorry it’s taken so long to get back to you on this Salim. Somehow I missed this comment.

    Indeed, that gives us a formalised approach.

    It still requires the discipline to put an experiment that has worked into the backlog in order to repay the debt. This is not immune to the pressures I outlined in the article above.

    Kent Beck talked of how he went about writing JUnitMax, a commercial IDE plugin that continuously ran unit tests in the background. He didn’t write any tests for it in the first month because he wanted to get feedback from people as quickly as possible [1]. He, of course, had the discipline to go back to it later. He was also ‘the business’ and ‘the developer’ and had all the understanding necessary to generally make the right choice… sustainable or disposable.

    When we were working on, which is now on hold indefinitely [2], James Martin started out in much the same way as Kent Beck did. Everything we were doing was an experiment. He soon encountered a situation where progress on evolving an experiment became very difficult, which meant slowing down in order to speed up again: writing tests and refactoring, because it was no longer safe to progress that experiment otherwise.

    The problem arises when you add an experimental feature where the feedback means you evolve it again and again and again. Suddenly, you realise that it works the way your customers want it to. Should we then write tests and refactor it? Will the demand for the next experiment allow us to do this? This can be solved using continuous rewriting [3], but there may come a point where having unit tests and well-factored code will make each evolution faster and faster. So, having some guidance around when to have disposable experiments and when to have evolving experiments (where we write tests and refactor) would be useful for many. For me, if an experiment has resulted in feedback such that I’m evolving it, it should go through the sustainable ‘backlog’ route in the forked Kanban approach. If I have no basis for an idea then it would go down the disposable-experiment route.

    There is also the challenge of untangling experimental code from the core code. If, however, you have evolved a sustainable approach to feature toggling then this is much easier to do and will improve the speed at which you can migrate experimental code to become part of the product’s core or ditch the code altogether.
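    A minimal sketch of the kind of feature toggling described above (all names here are illustrative, not from the article): experimental code paths sit behind named flags, so migrating an experiment into the core, or ditching it, comes down to flipping one entry and deleting one branch.

```python
class FeatureToggles:
    """Registry of named flags guarding experimental code paths."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name):
        # Unknown flags default to off, so experiments are opt-in.
        return self._flags.get(name, False)

    def enable(self, name):
        self._flags[name] = True


toggles = FeatureToggles({"new_pricing_experiment": False})


def stable_price(cart):
    # Sustainable core code: tested and refactored.
    return sum(cart)


def experimental_price(cart):
    # Disposable experiment, isolated behind the toggle: here, a
    # hypothetical 10% discount being trialled with some users.
    return round(sum(cart) * 0.9, 2)


def price(cart):
    # The only place core code touches the experiment is this branch,
    # which keeps the experimental code easy to untangle later.
    if toggles.is_enabled("new_pricing_experiment"):
        return experimental_price(cart)
    return stable_price(cart)
```

    With `price([10, 20])` returning the stable total until the flag is enabled, promoting the experiment means deleting `stable_price` and the branch; ditching it means deleting `experimental_price` and the flag.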

    This all ignores the value of trying to express our understanding as tests which in itself can evolve our ideas or quickly highlight the gaps in our thinking.

    The Forked Kanban approach is a nice way to visualise and distinguish between user stories that are a certain refinement to an existing product and those that are a potentially disposable experiment. In XP this was done with ‘spike’ stories [4]. The only difference is that the code for a spike was often thrown away, or retained temporarily for reference purposes only. In Kent Beck’s story, he was essentially ‘spiking’ but actually released the code.

    In a world where I have feature toggling I’d treat each user story as an experiment. I’d be more inclined to not have a fork and just place the experimental lane to the left so that there is a single, linear path. I might choose different wording…

    | ideas / needs | developing | learning (active experiment) | removing / improving | review / accept |

    I’d also want this to be supported by a continuous delivery pipeline.

    I’d then make a judgement call on where I needed tests and where I should be refactoring in the earlier stages and, for any item that we decide to improve, agree higher expectations on tests and internal code quality.

    In short, I can see the value of the forked Kanban approach. It doesn’t necessarily solve the problems I’ve outlined in the article, but it does help the team see which things probably need a sustainable approach vs those things that can be treated as potentially disposable. Either way, to be most successful, it requires a fair amount of maturity in both the people and the tech.

    How do you see the forked Kanban approach? (Given that a lot of time has passed since your comment)

    [1] Kent Beck’s JUnitMax story:

    [2] – no more:

    [3] Continuous Rewriting:

    [4] Spike solution: