PatchSpace Blog – Ash Moran

The Pocket Calculator Kata (2013-03-05)

This is just a short post to link to The Pocket Calculator Kata.

The aim of this kata is to implement (most of) the features of the Casio SL-1100TV 10-digit calculator. Pretty much everything except its "stylish & cool design", in fact.

A full test suite is provided, and all tests are in the same simple format: press these buttons, expect this display output. The examples are in Gherkin syntax, as used by Cucumber, SpecFlow, etc., but there's absolutely no requirement to use any of these tools. You could translate the scenarios into NUnit tests if you like, or anything else you enjoy working with.
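A scenario in that style might look something like this (the exact step wording and expected display format here are my illustration, not copied from the kata's actual suite):

```gherkin
Scenario: Adding two numbers
  When I press "2 + 3 ="
  Then the display shows "5"
```

Because the format is so uniform, translating it to any other testing tool is mostly mechanical.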

There's almost no maths in this kata; the problem is almost entirely one of modelling the state of the calculator. There are quite a few features altogether, so just pick the ones you're interested in. Be warned: it looks deceptively simple! Don't be afraid to stop and think if you get stuck – there may be an easier route through the features than the first one you try.
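To make the state-modelling point concrete, here is a minimal sketch of one possible starting shape in Ruby. Everything here is an assumption for illustration – the names (Calculator, #press, #display) and the design are not part of the kata, and a real solution needs far more state (decimal points, memory, overflow, etc.):

```ruby
# A deliberately tiny sketch: buttons in, display string out.
# Only digits, "+" and "=" are handled.
class Calculator
  attr_reader :display

  def initialize
    @display = "0"
    @accumulator = 0
    @pending_op = nil
    @start_new_number = true
  end

  # Accepts button presses one at a time, mirroring the scenario format
  def press(*keys)
    keys.each { |key| handle(key) }
    self
  end

  private

  def handle(key)
    case key
    when "0".."9"
      @display = @start_new_number ? key : @display + key
      @start_new_number = false
    when "+"
      @accumulator = @display.to_i
      @pending_op = :+
      @start_new_number = true
    when "="
      @display = @accumulator.public_send(@pending_op, @display.to_i).to_s if @pending_op
      @pending_op = nil
      @start_new_number = true
    end
  end
end
```

Even this toy version shows where the difficulty lives: almost all the code is bookkeeping about what state the calculator is in, not arithmetic. For example, `Calculator.new.press("2", "+", "3", "=").display` gives `"5"`.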

The full kata description is on GitHub: https://github.com/patchspace/katas/tree/master/pocket_calculator

If you have any comments or would like to ask any questions about this kata, feel free to email me (Ash) at ash.moran@patchspace.co.uk. If you have a solution you'd like to share, I'll happily add a link to it in this blog post.

Parsing for Fun and Profit (slides & code) (2013-02-23)

Here are the slides and code for my talk Parsing for Fun and Profit given at North West Ruby User Group (Manchester, UK) in February 2013. Slides first, then a guided tour of the code at the bottom of this post.

Slides

Parsing for Fun and Profit from PatchSpace Ltd

Talk summary

The talk is a very brief overview of what parsing is, why you'd want to do it and what you can do with a parser. The goal is not to go into detail about parsing or any of the tools used, or to show any "parsing best practices", just to show that building a simple language application is more accessible than you might think.

The specific examples are done by writing Parsing Expression Grammars in Treetop. I show how to build a grammar one rule at a time by incrementally building up a suite of examples in RSpec, taking the Arithmetic sample grammar from the Treetop gem. As an example of a more complete language application with a more complex grammar, I show how to build a syntax highlighter for a simple subset of Ruby, which turns source code into marked-up HTML.
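For a flavour of the approach, a grammar built up this way might end up looking like the sketch below. This is a simplified illustration in Treetop's grammar notation, not the actual Arithmetic sample shipped with the gem (which handles whitespace and more operators):

```treetop
grammar Arithmetic
  # Ordered choice ("/") is the PEG way of saying "try this first"
  rule additive
    multitive '+' additive / multitive
  end

  rule multitive
    primary '*' multitive / primary
  end

  rule primary
    '(' additive ')' / number
  end

  rule number
    [0-9]+
  end
end
```

The one-rule-at-a-time rhythm is: write a failing RSpec example for the smallest rule (`number`), make it pass, then add the next rule up, and so on.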

Code highlights

The source code is available at https://github.com/patchspace/parsing_for_fun_and_profit. You may like to browse all the example source, but here are some highlights:

  • arithmetic_parser_spec.rb is the example we worked through semi-live, and shows how you can build a grammar one rule at a time
  • simple_ruby.treetop is a Treetop grammar for a very small subset of Ruby - it's by no means production quality, but it's expressive enough for our demo purposes
  • simple_ruby_parser_spec.rb shows how you can build a complex grammar by inspecting a simplified version of the parse tree that Treetop generates
  • simple_ruby_parser.rb contains the code that generates these simplified syntax trees
  • spec_helper.rb shows how to get helpful error messages from Treetop - it relies on the tree simplifier in the SimpleRubyParser node classes
  • bin/rb2html is our little syntax highlighter application - it only takes about 20 lines of code!

Most of the code is commented to explain why it's done the way it is.

Contact

If you have any questions or comments, feel free to contact me at ash.moran@patchspace.co.uk.

How to answer "What is your greatest weakness?" (2012-02-22)

Humans are creatures of habit, and creatures of ritual. These habits and rituals are comforting to us, and give a sense of structure to our lives and how we behave. But these rituals can crystallise, and we often work through them so religiously and mechanically that to an outsider it might well appear that the ritual works us through it, rather than the reverse. Sometimes they can take on a distinctly pathological character, as thinking about the true purpose of the activity stops, and the ritual starts serving some other end.

One activity I believe is in danger of being so far ritualised, if it hasn’t been already, is the job interview. This is generally structured as being beckoned into a room with several (relatively) senior staff members. They will shake your hand, the purpose of which is usually to determine your chances of defeating a stone crab in an arm wrestling contest. Then there is a brief moment where the interview panel forms 90% of their opinions about you. After a short pause, when any awkward smiles have subsided, and it has been confirmed that you are not, in fact, a hipster, the panel launches into the main event: the questions. Now the focus is squarely on the least important person in the room: you, the candidate.

The questions

The questions start innocently enough, usually with something mundane and autobiographical. In the modern era, this often requires you to recall the order you listed the events of your life on LinkedIn. But as the questions unroll, progress through the interview begins to resemble a life-threatening run through an increasingly deadly gauntlet. You may be faced first with the leg-piercingly sharp but nevertheless predictable floor spikes of What do you know about our company?. No sweat! You read the company website, after all. You do know they’re in the insurance business, right?

A little further down the platform you encounter a pair of menacing swinging axes, out of sync and leaving perilously little opportunity to slip through. What else is engraved on them but Why do you want to leave your current job? Restrain yourself with the negatives, build up the positives (just enough!) and… dive right through!

Almost at the end now, just one more challenge. What’s this in front of you? A cold sweat breaks out on your brow. Before you rotates a giant wooden column, from which swing deadly morning stars, interspersed with serrated blades that leap out at all heights. The sound of cold, hard steel slicing the air makes you weak at the knees. You’ve reached: The Death Column of What Is Your Greatest Weakness?

The man behind the curtain

It’s at the point of What is your greatest weakness? that I believe most interviews unhinge from reality. Because, as quality guru W Edwards Deming pointed out decades ago: most variation is in the system, and a bad system will defeat a good person every time.

To ask what an individual’s greatest weakness is during an interview to decide whether they should join an organisation is nonsense. The candidate will have many strengths and weaknesses, but the only ones that matter are the ones that become relevant once he is embedded as an employee in his new team. He may think his greatest weakness is that he’s too shy, which may be of no consequence if he’ll be working on his own a lot, or if the team includes an especially empathic and nurturing colleague already. Another candidate may feel she’s unduly prone to procrastinate. But again, this may not be a problem at all, because she’ll be joining as a developer in a team that pair-programs extensively and is extremely diligent about daily standups. Quite likely something completely unexpected will turn out to be a problem. The interviewer thinks: did we remember to mention that the team is all Chinese and only half of them speak English?

A case of unexpected situational weakness happened to me recently. I’d been discussing doing some management work at a company, where I expected to mainly be dealing with process matters. Determining appropriate metrics, ensuring team members were communicating the right information, focusing test coverage across existing code – these were things on my mind. Then as the conversation progressed, it became apparent I might have to lead by example with some TDD practices. I became acutely alert – this is a .Net shop! As of writing this, I haven’t worked in a .Net shop for several years, and while I know some C#, I’m in no way qualified to demonstrate the latest testing techniques to an inexperienced team. Suddenly, something that had not been even a slight concern to me for over four years – my knowledge of .Net – presented the risk of being a major weakness.

As it happens, further discussion established that my rustiness with .Net tooling wasn’t a problem. I wouldn’t be needed to demonstrate technical mastery to achieve a useful purpose. And there would, in any case, be people on hand with more knowledge of this while I took the time to learn. But it drove home a real risk in my current skill-set that could become a weakness in future situations similar to this.

Your weakest link

The level you achieve as you try to winch up the obstacles in your new job will be determined, just as with a physical chain, by your weakest link. But the work in a software company – whether you’re a developer, a business analyst, a manager, or a tester – is no simple cargo-hauling. It’s complex work: you need a repertoire of skills, you need to know when to play them, and you must realise that everybody else in the team is doing the same. So your weakest link will be determined as much by the system you’re in as anything about you personally.

Your weakest link may well be hidden from you, simply by the filters you use to see the world. As Goldratt pointed out in The Choice, one of the biggest obstacles to thinking clearly is believing that we know. This is not any individual’s fault. As humans, we are innately subject to a long list of cognitive biases. For the case in point, we all seem fairly well shielded from the reality that our weaknesses are brought out more by the systems and situations we find ourselves in than anything inherent in each of us. For all the motivational posters and exhortations of “there is no I in TEAM”, we still subconsciously take an analytic, reductionist attitude to the world. If we didn’t, we wouldn’t be asking questions like What is your greatest weakness?

A disclaimer is needed here, as it’s not always the case that the system creates the weakest link. There still exist some people who are so spectacularly anti-social, so spectacularly arrogant, so spectacularly lazy, or so spectacular in some other special way, that they will become the weakest link in almost any situation. I’d be very surprised if more than one in twenty people in an organisation fell even close to this category, however. They do exist, but they are the exception to the rule. The rest are merely in the wrong place.

Time to ask for your money back

The astute reader may have noticed that by this point I haven’t actually described how you should answer the question What is your greatest weakness? The reason is that to do so would be to commit a subtle failure of logical dogfooding: the “correct” answer will be determined more by your situation than by anything about the question itself.

The questions you hear in an interview will reveal a lot about the mindset of the organisation. While they are mercifully rare, some firms do run interviews like the gauntlet described above – the principle being that they hire anyone who makes it out alive. If so, it’s likely that they’re primarily testing your ability to dodge flying blades. Maybe a clever twist on the (vomit-inducing) “I’m a perfectionist” or the (mutually destructive) “I’m a workaholic, I never go home on time”, is what they want: after all, there will be many more knives coming your way if you land the job.

Far more likely – and you should always apply Hanlon’s Razor – is that the questions have been merely cargo-culted in from the pool of ritual questions. The interviewer may have recently read the latest “Top 20 Questions to Ask in An Interview” posts. (If your interviewer reads too much Hacker News, they may have got everything they know from “Top N Ways to do X” posts.) In this situation you have more hope. If you are dealing with genuine and intelligent people, being able to move from a me! me! me! perspective to a system-level perspective could well make you shine out from the crowd, as this mindset is currently still rare. Equally, the biases and filters could kick in, and you might just blur into the background.

The problems with many interview questions run very deep, flowing as they do from our mindset of ritual reductionism. The ideas here may not be immediately useful to you in your next interview situation, but hopefully they will let you challenge the basis of these questions by seeing the systems involved. If you’d like to learn more about this mode of thinking, I highly recommend Goldratt’s The Choice, which is specifically written about thinking clearly in everyday problems like this. (This is not an affiliate link.)

Thanks for reading

Do you agree? Do you disagree? How have you seen people’s actual weaknesses play out, compared to their professed ones? Maybe you have a lot of experience as either a hirer or hiree, and have an opinion on this question, or others.

If you have any thoughts, I’d love to hear them. I’m sure many people reading this have more experience on one side of this fence or the other.

My name is Ash Moran. I’m a software developer and agile coach, and owner of PatchSpace Ltd (Twitter). If you have any feedback, questions, or would like to know more about my services, feel free to contact me at ash.moran@patchspace.co.uk, or continue the discussion in the comments.

Coffee Shop Kanban: Is your dev team a Starbucks or a Costa? (2012-02-13)

The Kanban software methodology is gaining adoption, and is often seen as an alternative to Scrum. It is less prescriptive than Scrum, which makes it easier to start using immediately. But from a learning and improving perspective, there’s a hidden value in Kanban. It works in an analogous way to many everyday systems, like supermarket checkouts, carwashes, and restaurants – and because of that, we can improve our software development by using these as metaphors.

My favourite example of a kanban system is related to something close to every developer’s heart: coffee. Specifically, coffee shops. They make an excellent example because they have most of the properties of Kanban and Lean in software, but also because some coffee shops do it really well, while others do it really, really badly.

For the benefit of readers abroad who may not have visited the UK, Costa is a chain of coffee shops which, in my experience at least, has service of… variable quality. Feel free to substitute your own alternative establishment of choice as you read.

What is Kanban, really?

Kanban is about managing queues of work in an economically effective way. That is, given not everything can be done at once, how do we make the process as profitable and healthy as possible? (Tip: pretending everything can be done at once is an excellent way to cause huge queues of work.) When you’re waiting to pay for your shopping at a supermarket, to have your car cleaned in a carwash, or for the waiter to take your order in a restaurant, you’re in a queue. You arrive (randomly), give your order (which is variable) and wait for the work to get processed (which is also variable).

The core stated properties of a Kanban system according to the Kanban methodology (as defined in David Anderson’s Kanban) are:

  • Visualise work
  • Limit work in process
  • Measure and manage flow
  • Make process policies explicit
  • Use models to recognise improvement opportunities

I’m going to focus here on visualisation and limiting work in process, as they are the easiest things to see, and are high leverage points for a team new to Kanban. I’ll also look at managing the process, but analogies have to be drawn more carefully here.

Is coffee fuel for work, or the way you work?

When you’re in a hurry, you probably think of a coffee shop like this: you wait to get served, you give your order, you wait for your drinks, you pay, and you leave.

But to think about it in Lean terms, we need a clearer model, one of the whole system. What happens from the coffee shop’s point of view is this:

  • You, the customer, arrive at the customer queue, and you do so randomly (at this point, you wait)
  • A barista takes your order, a list of random drinks – that is a batch of work to be done, of variable size, complexity and value
  • Your order goes in a queue (at this point, not only are you waiting, but so are the drinks)
  • One or more baristas make your drinks, that is, the batch of work gets processed (you’re still waiting)
  • A barista hands you your order, that is, the completed batch of work

Before we go on, just reflect how similar this is to software:

  • The client / business turns up with a “new idea”
  • The client describes some work they want you to do (which will be of variable size, complexity and value)
  • You put the request in your backlog
  • At some point, one or more developers become free and turn the request into working code (which will take a variable amount of time)
  • You deploy the software/deliver the code, etc

So if you can accept that an index card describing the new feature “View product listings by category” is more or less the same as an empty coffee cup waiting for a drink, the two processes become coherent.

The flow of coffee

Let’s look at some “real world coffee”, and what happens when the service is good and bad.

First of all you have to turn up at the queue for the counter. Simple. Or is it? How many times have you walked into a coffee shop and seen a long queue moving slowly, while staff were chatting and gossiping, and walked out? How many times have you seen an overloaded counter and wondered which end of the line to join? By comparison, in the software world, how well-defined is the process for clients to request work? Are they seen to quickly, or do they have to wait unduly?

When you get to the counter, you have to place your order. What happens here? Is the order written down, or does the barista remember it in their head? Imagine you place an order for one large mocha with sprinkles, two lattes (one large, one small but with an extra shot of coffee) and a double caramel macchiato. How confident are you that they remembered your order exactly? Mistakes here are expensive, as putting milk in a black coffee means not only do you have to wait for an extra drink, but so does the rest of the queue. Note, though, that the cost of the coffee is irrelevant, but the cost of a lost customer is significant. Being able to confirm the order (and hence quality of service) upfront is of economic benefit.

So having asked for your drinks, your order is now in a queue. It’s in a state of waiting from the point you finish describing the drinks you want, to the point a barista starts making them. How do coffee shops deal with the order? Let’s apply the Kanban principles to explore this.

Is the work visualised?

In software, when we “visualise work”, we often write it on a Post-It note or an index card, or put it in some online system that can create a suitable graphical display. That is, we create tokens, or kanbans. We only do this because otherwise the work would be completely invisible – there are no tangible artefacts in knowledge work. Once work is visualised as a kanban, it can be more easily managed according to Lean principles, such as limiting the number of them being worked on at any given time.

Imagine time was frozen at the moment you completed your order. What would happen to your order if you removed that barista from the shop, and resumed time? In Starbucks, every order gets written on a piece of paper, attached to a mug (forming a kanban), and put in a space for unfulfilled orders. My experience in Costa is that the order is usually in the barista’s head. Think about your software team: could you remove a person after they took any client request, and still be confident that the work would (somehow) still get done? Or do you suffer from complaints about work that was promised but forgotten about? Be aware of the limitations of the metaphor here, however, as the work items in software development are incomparably more complex than coffee orders. But next time someone forgets your chocolate sprinkles, ask yourself why.

Often the barista that fulfils an order will be a different one from the one that took it. In Starbucks, this makes no difference, as the order was written down, and anybody can pick it up. (Again, software requirements are too complex for this.) In Costa, I’ve usually seen orders transferred verbally. (Ironically, software requirements are too complex for this too.) At busy times, the person at the till will shout across the order to the person nearest the coffee machine, possibly while they’re still making the last order. Written down in a consistent way vs shouted to a busy member of staff. Go on, tell me… exactly how well would you expect this to work? Watch the communication. Watch how many orders are repeated. Now think about your office. When did you last see someone call across to an overworked developer, “Bob, can you just …?”

Is the work in process limited?

There’s a gotcha in most coffee shops: the coffee machine is the bottleneck for significant portions of the day. When the shop gets busy, you can usually find staff available to take and deliver orders, but the drinks will be backed up waiting to be made. For this reason, if the staff take on too many orders, the orders just sit around unfulfilled. What are the consequences of this?

If those drink orders are held in memory, they can degrade, increasing the number of mistakes. These mistakes have to be corrected, which adds more (re)work into the system, compounding the problem. As contention over the coffee machine increases, staff can end up literally stepping on each other’s feet. The situation is no different in software (except perhaps for the injured toes) and can create the illusion that more staff are needed. This is a fallacy I analyse in Why You Shouldn’t Hire More Developers.
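The mechanics of a WIP limit are simple enough to sketch in a few lines of Ruby. This is purely illustrative – the names (KanbanColumn, #pull, #complete) are mine, not from any kanban tool – but it captures the one rule that matters: new work cannot be pulled into a column that is already at its limit.

```ruby
# A minimal sketch of an explicit work-in-process limit.
class KanbanColumn
  class WipLimitExceeded < StandardError; end

  attr_reader :name, :wip_limit

  def initialize(name, wip_limit:)
    @name = name
    @wip_limit = wip_limit
    @cards = []
  end

  # Pulling new work is only allowed while under the limit –
  # this is the policy made explicit, rather than left tacit
  def pull(card)
    if @cards.size >= @wip_limit
      raise WipLimitExceeded, "#{@name} is at its limit of #{@wip_limit}"
    end
    @cards << card
    card
  end

  def complete(card)
    @cards.delete(card)
  end

  def cards
    @cards.dup
  end
end
```

With a limit of two, a third order is refused rather than silently queued in somebody's head – the refusal is the signal that the bottleneck needs attention, not more input.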

Is the process managed explicitly?

Tacit policies cause many problems. They increase conflict, as different people will apply different policies to the same problem. They cause unnecessary (bad) variation, as the same problem will be solved in different ways by different people. They cause indecision, as unnecessary thought is applied to routine work; but at the same time they also cause rework (to correct bad variation) as inappropriate policies are applied hastily.

Now, I’ve never worked in either Starbucks or Costa, so I can’t personally vouch for the level of either of their policy management. But I do know that if something is done inconsistently, it’s safe to say there’s either no policy, or no management. If something is done consistently, there’s at the least a culture for that pattern of behaviour. And if something useful is done consistently well, there’s probably a well-managed policy.

Coffee shops are easy to analyse for some sorts of process policy. Drinks preparation is the easiest, as you can observe it directly. Starbucks is meticulous in the way it prepares drinks. My experience of Costa drinks has actually been fairly consistent too. Indie coffee shops usually have a lot more variation. Some make drinks in a very consistent way. There’s one I’ve been to, however, that would serve pretty much any dark brown liquid in any suitably-sized vessel and call it “coffee”. Variation like this can be very expensive, as customers will drift elsewhere, to where they know they will get what they ordered.

In software development we have it much harder, as the work is much more variable. But we can make many valuable things part of an explicit policy. When regular information on the current situation is important, we can institute a policy of daily standup meetings. When avoiding putting the team in a state of constant overwork is important, we can limit the amount of work in process by some rule(s). When keeping software in a deployable state is important, we can have a policy that the continuous integration server must be running and always be green.

There’s one Starbucks policy I began to infer which I’ll draw attention to, as it sparked a long discussion between me and a friend. When a large queue of potential customers starts to form in front of the till, and there’s a bottleneck at the coffee machine, the last barista will not leave the till and go to make drinks. Instead, they call further down the line to collect more orders, even though those orders can’t yet be fulfilled, which means customers have to wait a long time for their drinks. Why do they do this? I believe what they are intending to do is known in Theory of Constraints terms as exploiting the market, as the ultimate constraint on a coffee shop is not the staff or machines, it’s how many customers walk through the door. Starbucks has an explicitly managed, reliable process for creating a flow of drinks, which makes full use of the coffee machine bottleneck without overloading the staff. They know that no matter how many orders are taken, they will all be made in a timely, accurate manner. Starbucks has this level of process capability. They know where the constraints are in their business, they know how to exploit them, and they know how to manage the flow of work through them. Do you?

Summary

Throughout this article I’ve given a lot of examples based on my experience in Costa. This may give an unfair bias, as in fairness, the service in most chain coffee shops I’ve been to sucks. Usually it’s not from lack of effort or care from the staff either: they simply don’t know any better, and their managers don’t teach them. (Nor do the managers know any better, and their managers don’t teach them either – I can only assume this goes all the way up to the top.)

Software teams are full of motivated, independent-thinking individuals, who take enough pride in their work that they fight against a system that hinders them. On that basis, even though the work is a lot more demanding, I believe software can provide a much better service than coffee shops. But along the way, we can learn a lot from the way they are run.

The purpose of this article is to give you a metaphor for software development that you can study in real life and in real time. Take a trip to a coffee shop, to many if possible, and watch how they operate. Study the behaviour of customers in the queues, the way the process is managed, and how failure is handled. Try to find where the principles of coffee shop flow apply to software, and where they don’t.

The real question is: Do you want your development team to be a Starbucks or a Costa?

Further reading

To be able to analyse any sort of process, you need to understand the rules of the game you’re playing. When you understand the rules, you can learn how they interact, and how they form a cohesive system. By far the best reference for the rules of software development (and all sorts of product development) is The Principles of Product Development Flow by Don Reinertsen. It’s not a beginner’s book, being quite technical in some places. But if you want to further your understanding of software development processes, you owe it to yourself to read this. (This is not an affiliate link.)

Thanks for reading

My name is Ash Moran. I’m a software developer and agile coach, and owner of PatchSpace Ltd (Twitter). If you have any feedback, questions, or would like to know more about my services, feel free to contact me at ash.moran@patchspace.co.uk, or continue the discussion in the comments.

Why You Shouldn't Hire More Developers (2012-02-03)

Juan Palacio (@juan_palacio) has kindly translated this article into Spanish as Por qué no necesitas contratar más programadores. Offers of translations to other languages are gratefully received.

The smouldering situation

You’re the lead developer in a team of five. You’re all burnt out. Each of you is in the office from early morning until late into the evening, trying to hack away at the relentlessly growing backlog. In fact, evenings are better for work, because in the day you’re swamped with bug reports and operational issues, and developers rarely get time to work on “new” features. Marketing just signed up a new client with a whole load of new feature requests, and you barely have time to speak to the clients already in production. What’s more, you’re losing more and more of your time in the day to meetings, trying to get the situation under control. You’re probably pulling your hair out thinking “we just don’t have enough time to do all this!”. You need to do something – or so you think.

On the back of an envelope you sketch out the situation. Your 5 developers are putting in at least 60 hours a week each, that’s 300 developer-hours a week in total. Out of that 300 hours, you estimate you’re spending 50 hours a week bug-fixing, 30 hours a week on ops issues and 20 hours a week on meetings. 100 hours a week before you get to the new features – that’s a third of your time! But look at the backlog: since the new client came on board, it’s not going down, it’s going up!

The conclusion is obvious: there’s just too much work on. You’re already all working overtime, so you need more people. If you had just two new developers, they could handle the bug fixes and ops issues, and still have time to chip away at the backlog. (Well, they’d only be paid for 40 hours a week each, but they’ll soon pick up the corporate culture of going the extra mile, right?) The solution is now even more obvious: you go to your line manager and ask to start recruiting. Right?

Wrong. The very last thing you want to do in this situation is hire more developers.

The new hire stokes the fire

Your first new developer, Alice, starts on a Monday. She ties up another developer for the whole first day getting her machine partially installed. Tuesday morning she goes off on her own because everyone is in a meeting, but then has to spend the afternoon unpicking what she did because it turns out you use a custom build of one tool. Bob knew this, but forgot to document it because some time last month he was … called into a meeting at short notice. Wednesday you set Alice loose on an easy bug-fix task. It takes her all day as she learns to navigate the code, but she commits it and moves on. Thursday and Friday she spends trying to implement one of the easy features in the backlog, but over half that time is spent with another developer, because an old bug got in the way first. (It might be in the bug tracker, but since it hit 200 open tickets, nobody really checks it any more.) Anyway, a week goes by, and the work goes out in a Friday evening deploy.

There is a weekend. You’ve all learnt to turn off over the weekend; overtime hasn’t crept in that far yet.

Monday is chaos. In fixing the first bug she tackled, Alice changed something she thought was an error, but was actually an obscure edge case of a business rule. Nobody reviewed it because they lost enough time helping her get set up, and everybody knew it was easy anyway! So the deploy is rolled back. Conversation quickly reveals that the feature Alice committed on Friday was designed on top of her misunderstanding of the business rules. Now someone in the team has to do a thorough code review. Even without counting the hours attached to this, it’s clear the team is significantly behind, and in a large or complex code base, there is no reason to believe this will improve soon.

Is this all Alice’s fault? Could she have tried harder? Or is the system at fault?

The bottleneck

What is throttling the performance of this company? It’s clear it’s not in marketing – they’re bringing in clients quicker than the software can be rolled out. And it’s not in analysis – the requirements are building up faster than the developers can turn them into code. (We’ll assume for now that these requirements are actually effective.) It’s not even operations – a week’s work went out on Friday evening, and even if it broke the business rules, it was operational. That leaves us with the bottleneck squarely in development. So if development is the bottleneck, why was it wrong to start hiring developers, to increase the capacity of this overstretched skill?

The assumptions behind hiring

To explain this situation I’m going to make explicit some of the tacit assumptions that often underlie hiring in an overworked team. This is quite a crucial point, as much of the shared mindset in software organisations is tacit, and influences decisions without ever being held accountable. It is comparable to the difference between invisible work and work visualised on, for example, a kanban board. (Note that even organisations that use kanban boards often have other, unvisualised work.) The following is not an exhaustive list, but it will serve the point. Many teams act as if the following are true:

  • Developers are fungible
  • Productivity is proportional to developer-hours
  • Fixing bugs is valuable
  • The requirements are all necessary

Developers are fungible

Tom DeMarco calls this the Myth of the Fungible Resource (in Slack). Many factory and warehouse jobs are largely fungible, in that the time to bring someone up to full productivity is inconsequential (hours or days). This is not true of development, where even if a new hire knows the programming language, framework and even the generic business domain, it will still take a long time for tacit knowledge of the codebase to flow into his head.

I don’t think developers actually believe they are fungible (at least, none I’ve met would say so), yet I’ve seen teams hiring as if this assumption was valid. Any time you act as if a new developer working alone will immediately increase team productivity, you are acting as if it was true. This tacit assumption contradicts what most developers will explicitly say about the nature of their work. In a contradiction, at best one side is right.

Productivity is proportional to developer-hours

There are two forms of this assumption: first, the idea that a developer working a 10-hour day will be 25% more productive than a developer working an 8-hour day; second, the idea that a team of 10 developers is 25% more productive than a team of 8.

To address the first, remember that the nature of software development is creating new knowledge, which I explained previously in the post Why Can’t Developers Estimate Time?. One consequence of this is that development is a creative task that involves constantly making logical decisions. (For example, is it time to break up this long block of code? To use XML or JSON? To replace the application framework?) As explained in the article Do You Suffer From Decision Fatigue?, the human brain has a limited capacity to make these types of choice, and once tired, it will take shortcuts. The feeling of “I just want to go home” may be causing you to introduce bugs. Using overtime as evidence the team has too little capacity is therefore in contradiction to what scientific studies show. That one side of this contradiction paints a picture of developer heroism does not make it any more true.

The second form of the productivity-time assumption is based on the idea that the productivity of a team scales linearly. This is not true, for the simple reason that the cost of managing a team grows not with the number of people involved but with the number of communication paths between them. Compare, for example, how easy it is to get 50 people to pass a ball down a line, versus getting even 5 people to agree on the menu for a meal in a Chinese restaurant.
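The communication-path arithmetic is worth making concrete. In a fully-connected team, the number of pairwise communication paths is n(n−1)/2, so paths grow much faster than headcount — a quick sketch in plain Ruby (illustrative numbers only):

```ruby
# Pairwise communication paths in a fully-connected team: n(n-1)/2
def communication_paths(team_size)
  team_size * (team_size - 1) / 2
end

# 25% more people, but ~60% more communication paths
puts communication_paths(8)   # => 28
puts communication_paths(10)  # => 45
```

Going from 8 to 10 developers adds 25% more people but roughly 60% more possible conversations, which is one reason the team does not get 25% more productive.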

Fixing bugs is valuable

Bugs are, by definition, something the system was not intended to do. There are times when nobody knows if an idea will work (this is the realm of the Lean Startup). But there are many, many defects in the world where the developers had, at the time they wrote the bug into the system, the knowledge to determine the behaviour was wrong, yet for some reason they didn’t. Imagine you’ve taken your otherwise immaculate car in to have the brakes replaced, and when you drive it afterwards it starts pulling to one side. Exactly how much value would you see in having the wheels re-aligned, even if it was done for free?

When these sorts of bugs are fixed, what is actually happening is not work, but rework. The developer must load the knowledge of that bit of code into his head, including the requirements, the way it is implemented, the dependencies it has, and then make the change. Even in the case where the bug fix is purely the addition of code, and not changing existing code, the developer must still repeat the process of understanding the subsystem to make that addition. When a new developer is doing this, they may have to learn from scratch a whole area of code, along with any tacit knowledge required for it, and then cross their fingers they don’t break anything (if a test suite catches a bug here, that knowledge has already been made explicit). If bug-fixing is waste, fixing bugs introduced while bug-fixing is doubly so. I call it whack-a-mole development, a term I’m deeply saddened hasn’t caught on yet.

If your team is spending any significant amount of time fixing bugs, it has much more capacity than you realise. That’s not to say it’s an easy reserve to tap into, but it is there. The attitude that bugs are inevitable is harmful, as it will give strength to the tacit assumption that fixing bugs is valuable.

The requirements are all necessary

I’ve saved this for last as it has a different nature to the other assumptions: it necessarily involves decisions made outside the development team. Unless the team is making the software entirely for itself, someone else will be involved in specifying the development work being done. If it turns out that 30% of the features in your software are unused or unnecessary, then at least 30% of the development time is pure waste. (It may be more, due to the complexity of managing the larger codebase, and the waste due to bugs in the surplus code.) However, as many teams are contractually obliged to deliver a fixed spec without reference to the value of the features in that spec, this may be a difficult source of waste to fix. Because of this, I won’t say much more about it. It is in any case usually easier to get someone to clean out their garden shed if you can show you can keep your bedroom tidy first.

The reality of the busy team

Remember that we came here because Alice was brought in to increase the capacity of an “overworked” team. Yet we’ve seen that the assumptions underlying the need to hire her were false:

  • The team is not running at full capacity, it is spending at least 25% of its time on rework and avoidable maintenance, even taking into account overtime
  • The team is not even producing maximum quality given the existing skills of the team, because some of the bugs were introduced due to developers being fatigued and over-stressed
  • Alice can’t be brought in to give immediate relief, because the communication overhead actually reduces productivity, at least in the short term

A note on team sizes

You may hold a valid reservation about my bold statement that you shouldn’t hire more developers: increasing capacity isn’t the only reason you may want to do so. A very valid one is redundancy, as very small teams are vulnerable to Murphy’s Law. If your only developer is run over by a bus, your project is in immediate jeopardy. (It was in jeopardy before, it just took a bus to show it.) Then again, it is possible to have a team of 10 devastated by a single errant bus incident, if the team has formed knowledge silos.

Christopher Allen’s article The Dunbar Number as a Limit to Group Sizes explains some of the consequences of various team sizes.

Small team sizes may be less of a risk than they appear though. In my personal experience, developers are very rarely run over by buses. And they very rarely leave because of pay. But developers do very frequently leave because of unsatisfactory working conditions. If you’re the manager of a situation like the one in the story, one of them has probably told you so, in as many words.

There’s another situation in which you might want to increase the size of your team: when the person you’re bringing in has the knowledge and experience to help improve everyone else’s effectiveness. In this case, though, their responsibilities will have to extend far beyond pure development.

What to do

The first thing is to step back and check if you’re trying to solve a problem fundamentally caused by systematic waste by throwing more effort at it. This is akin to putting more sailors on water-bailing duty when the ship’s engineer should be welding the hull shut. Fred Brooks stated Brooks’s Law over thirty years ago: “Adding manpower to a late software project makes it later”. Please don’t ignore the past unless you want to turn your office into a historical re-enactment. I’ve had someone personally tell me “We have a perfect graph showing velocity going down as we started adding more people!”.

Improving the productivity of a software team is hard. It involves understanding the business, the team, the history, the obstacles blocking progress. It is a complex, context-sensitive problem. This being a blog post, one already in need of a TL;DR summary, I’ll just point you in the direction of a suitable body of knowledge, and suggest you read The Goal.

We see the world filtered by the metaphors we hold. The Goal (by Eli Goldratt) shows how our common assumptions blind us to the real causes of the problems we face every day. It has sold millions of copies, has been used in thousands of corporations, and is taught in hundreds of colleges and universities. The Goal is the archetypal book on how to focus on what matters. It will take you only a couple of days to read, and will teach you to see the real source of bottlenecks in your organisation. (This is not an affiliate link.)

I’ll end with a rule of thumb though: when faced with a situation like the one described above, try to exploit what you already have before throwing more effort and money at the problem. You’ll often realise you can be more effective with the people and resources you already have, once you discover the real reason things are going wrong.

Thanks for reading

My name is Ash Moran. I’m a software developer and agile coach, and owner of PatchSpace Ltd (Twitter). If you have any feedback, questions, or would like to know more about my services, feel free to contact me at ash.moran@patchspace.co.uk, or continue the discussion in the comments.

]]>
Ash Moran
tag:blog.patchspace.co.uk,2013:Post/424850 2012-02-02T11:16:17Z 2013-10-08T16:52:47Z Speedy TDD with Rails (the wrong way)

Here are the slides for a presentation I gave at the Sheffield Ruby User Group for ShRUG 26: Speedy TDD with Rails.

To summarise: this is a report of my efforts to increase the TDD feedback speed using Rails (3.1 at the time). It is based on work at a client, in a team of 3. All the work was done within the constraint that I couldn’t make major architectural changes as I was not going to be a long-term maintainer of the code. I was also the only member of the team applying TDD, so all changes had to be made in an unobtrusive way. This is why the presentation is subtitled the wrong way: the best I achieved within these constraints was a set of hacks to make TDD in Rails bearable, if not enjoyable.

I go into a little more detail below.

Speedy TDD with Rails
View more presentations from PatchSpace Ltd

The death of the feedback loop

TDD in Rails has recently become hampered by the fact that between Rails 2 and 3, the boot time for Rails went up to 10 or 20s on many development machines. This means that a naive TDD cycle of [write test / run failing test / change code / run passing test] easily takes 30 seconds, minimum. If you’re used to strict TDD, and running tests on every change, this means it can literally take an hour to do what would otherwise take a few minutes. It is basically impossible to do TDD in an unmodified Rails environment.

Preloading Rails

Spin offers one type of solution, by preloading as much of Rails as possible and only booting, on demand, the parts of the environment modified during a TDD cycle. Unfortunately, it only saves a few seconds, and doesn’t really change the quality of the TDD cycle. Spork is able to preload more, but only by extensively monkey-patching Rails. My own experience, one shared by members of the ShRUG audience, is that this can introduce so many subtle bugs that the time you might save in the TDD loop is lost fixing unexpected weirdness. On this basis, I consider pre-forking an ineffective strategy to improve TDD cycle time. And as someone else has already pointed out, it solves the wrong problem anyway.

Persistent test environment

Another strategy I tried, which only works with browser integration tests, is to keep a persistent Rails environment running and turn on code reloading, as used in the development environment. Guard::Rails helps here. While code reloading in Rails is also a hack, it’s a more reliable and better-understood one than (metaprogramming-“optimised”) pre-forking. The downside is that because both Cucumber and RSpec expect to be run only once during each Rails process lifetime, you have to run the tests in a separate process. In my case, I was using RSpec to drive Capybara in one process, with a separate Guard::Rails-managed process running the app. For want of an application service layer, I controlled application state by making a second connection to the application database from the RSpec process, and using the Mongoid models directly. While all of this leads to slow tests, it’s still (ironically) faster than running a controller test.
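For reference, the two-process half of this setup can be sketched with Capybara’s standard configuration options (a hedged fragment, not the exact code from the project — the port and driver choice are assumptions):

```ruby
# spec/spec_helper.rb (sketch) — drive a separately-running,
# Guard::Rails-managed app instead of booting Rails in-process
require "capybara/rspec"

Capybara.run_server     = false                    # don't boot the app in the spec process
Capybara.app_host       = "http://localhost:3000"  # assumed Guard::Rails port
Capybara.default_driver = :selenium                # any driver that talks to a real server
```

With this in place, the spec process stays lightweight; only the app process pays the Rails boot cost, and code reloading keeps it current between runs.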

Isolating components

The only strategy I had any significant success with was to break up the tests based on their dependencies, and only load Rails where necessary. However, the options for this turned out to be quite limited. Mongoid is quite straightforward to break out. Testing Mongoid models doesn’t give you unit tests (they’re still Mongoid integration tests), but Mongoid only takes a second or two to load and connect to the database, which is an order of magnitude faster than Rails. Other parts of the app will be more or less separable on the basis of their dependencies. For example, I had some luck initially testing Draper decorator objects, until an upgrade to Draper introduced a direct dependency on ActionController we couldn’t remove without monkey-patching.
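As an illustration of the Mongoid break-out, a minimal helper that loads Mongoid without Rails might look something like this (a sketch only: the file paths and model name are hypothetical, and it assumes Mongoid 3’s `Mongoid.load!` signature):

```ruby
# spec/mongoid_helper.rb (sketch) — load Mongoid, not Rails
require "mongoid"

# Point Mongoid at the same config the Rails app uses (path assumed)
Mongoid.load!(File.expand_path("../../config/mongoid.yml", __FILE__), :test)

# Require only the models under test, not the whole app
require File.expand_path("../../app/models/order", __FILE__)  # hypothetical model
```

Model specs then require this helper instead of the Rails-booting `spec_helper.rb`, cutting the per-run startup from tens of seconds to one or two.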

Conclusion

This last obstacle was what finally formed my opinion: the Rails community, on aggregate, either does not value TDD, or seriously underestimates the level of feedback it can provide. Whatever the Rails core team values enough to let the boot time reach its current epic size, it is not TDD. And many of the gems I use on this project (including Draper, Devise, CanCan) are not designed to work in a way that enables easy testing in isolation. This is not to say they aren’t thoroughly tested, or that they weren’t developed TDD themselves, but they do not facilitate TDD for their users. I do not believe that any significant proportion of the Rails community is trying to break down dependencies in such a way that gives inherently fast TDD, although I hope to be proved wrong. Gary Bernhardt is one exception, Kevin Rutherford is another. And if you can make ShRUG in February, you’ll see that Tom Crayford is a third, when he gives his talk Isolation vs Rails: More Fastererer Speedy Testing Mk II Edition.

]]>
Ash Moran
tag:blog.patchspace.co.uk,2013:Post/424851 2011-10-12T12:29:55Z 2013-10-08T16:52:47Z The Agile Waiting Game

This post was originally written as a guest post on the MagRails Conference blog. You can find it here.

Agile practices (pair programming, TDD, continuous integration, etc) and methodologies (Scrum, XP etc) are intended to increase the productivity of teams that adopt them. In one sense you can think of it as “team + practice = improved team”. The bit that gets the most focus is “practice”; what we will explore here is the nature of “+”.

Filling up

If you want to take a bath, initially the bath will be empty. You make an intervention to fix this undesirable state: put the bath plug in the plughole and turn the taps on. Is the bath ready now? I mean, now? Unless you’re reading this over a modem from 1983 and it took you an hour to get this far, the answer is no: the bath hasn’t had time to fill yet. There’s a delay between turning the taps on and the bath being full, somewhere in the region of 10 minutes. This is significant: it means you can go and make a cup of tea without fear of flooding the house. It also means that if the bath isn’t full in half an hour, something is wrong. (Perhaps the plug is leaking slightly, or the flow through the taps is too slow.) Whatever happens, we intuitively know how to use this delay to manage “bath + running taps = full bath”.

The same idea is equally – if not more – important in software teams. Without a sense of the delay between the original state and the desired state given a certain intervention, it is easy to over- or under-manage.

Say your team currently integrates and releases on a 6-month cycle, at the end of which usually follows a month of mad scrambling and tail-chasing to deploy the code. You want to improve this, so you intervene by introducing fortnightly iterations, where code is supposed to be fully integrated at the end of each iteration. If after a month of this (2 iterations), the team’s productivity is down, changes are not being fully integrated, and not all integrated features are fully working, has the intervention failed? At this stage, there isn’t really enough evidence to suggest this. An understanding of the nature of the intervention and its inherent delays helps here.

Moving to frequent iterations reduces the batch size of work being integrated. A manual integration process twice a year may be fine; the same process every 2 weeks will show enormous waste. Therefore the team will need to automate its integration process, but this will take time. This effort will take capacity away from existing development, so in the short term you would expect measured productivity to go down. (I’ll ignore any debate on actual productivity for the purpose of this example.) Also, with more rapid integration, team members will also be forced to communicate more frequently, and meetings will need to be made more focused and resolve conflict faster.

If 3 months later, the same pain is still being felt, has the intervention failed? Now the answer looks more like yes: it should not take most teams three months to achieve significant improvements in a deployment process. However, the real answer is context-specific, or “it depends” as it’s known in the consulting trade.

There is no need to pick just one or the other of the two (arbitrary) time-points: reacting at 1 month or 3 months with no information in between is still less than ideal. When filling a bath, you don’t need to wait until the end to see if it’s ever going to fill: by checking occasionally, you can see if it is filling as expected. You may be able to see water leaking as it becomes half-full, or you may see it filling slowly and realise you only half-opened the taps. This extra feedback lets you manage more effectively. But if the bath takes 15 minutes to fill under ideal conditions, and you’re only 10 minutes in, don’t expect it to be full yet.

You wouldn’t blame a bath for not filling quickly enough, so don’t blame a team for not improving fast enough, unless they are actually goofing off. Slow progress may be due to a new practice coming into conflict with existing policies or mindsets (which you should watch for), but even under ideal conditions will take time.

Draining out

The reverse situation is also important. When you take the plug out of a bath, it takes time to empty. Imagine you filled the bath and got in, but half an hour later – to your bemusement – you realise the water level has dropped significantly. What gives? It may be that there’s a tiny crack you didn’t notice when filling it, which has allowed water to seep out. Again, the inherent delay and your own perception means it took a while for the problem to become apparent.

Software development teams can have leaks too. Any time a practice which maintains long-term productivity is dropped or reduced, the team’s performance will also start to fall – but it will take time for the problem to appear. Reducing TDD will not cause the defect rate to rise sharply, nor the ease of adding features to fall sharply. Instead, what will happen is that 6 months down the line, customers are complaining about the increasing number of bugs, and developers are complaining about the increasing difficulty of fixing them. The delays between different people’s responses will not be uniform, either, so watch for the early complainers – far from being a nuisance, they’re the system giving you a hint of what the future holds.

There’s a particularly insidious situation that can emerge when practices are dropped, known in systems thinking as “shifting the burden”. Here, some team members will pick up the slack, increasing the delay before the fundamental problem becomes apparent – which it will do, dramatically.

Again, you wouldn’t pull the bath plug out of a bath and exclaim “The bath isn’t empty! We didn’t need the bath plug after all!”, so don’t let a team reduce or drop practices it knows to maintain productivity, only to act surprised when things fall apart later.

A few delays

Start looking for delays in your team and software development process. As a starter, here are some examples:

  • learning a new language or framework
  • changing the practices in use (including learning a new one)
  • changing the members of a team
  • changing suppliers
  • taking on or dropping clients
  • learning to look for delays

If you’re too busy in the day to think about the delays around you, think about it next time you’re running a bath. What are your thoughts? Feel free to leave your comments below. I’m happy to wait for them.

My name is Ash Moran. I’m a software developer and agile coach, and owner of PatchSpace Ltd (Twitter). If you have any feedback, questions, or would like to know more about my services, feel free to contact me at ash.moran@patchspace.co.uk.

]]>
Ash Moran
tag:blog.patchspace.co.uk,2013:Post/424852 2011-05-05T13:44:47Z 2024-04-07T20:29:28Z Never Let the Bottleneck Monitor Itself

There's an important concept I use - the origin of which I will talk about later - which applies pretty much everywhere I go. Namely: at any point in time, in any organisation (or process, or system) I'm involved in, there will be a vanishingly small number of bottlenecks - that is, the people, teams, machines or otherwise that limit the output of the whole organisation. In fact, there's usually only one.

Child's play

This idea is very easy to show in a simple, linear system. Imagine you're getting a group of children to make paper aeroplanes (let's call them Andy, Bob and Claire). The steps are as follows:

  • the first child, Andy, folds the paper in half
  • the second child, Bob, folds the wings
  • the third child, Claire, tests that they can fly (by throwing them)

Let's assume that the children can do their activity at the following rate:

  • Andy can fold paper at 10 planes/minute
  • Bob can fold wings at 5 planes/minute
  • Claire can test-fly planes at 20 planes/minute

What will happen over time if all the children work as fast as they can? If the answer doesn't leap out immediately, watch this short cartoon animation of people at work. (Really! I guarantee you'll be hooked.)

[As an aside: If you're having trouble working out why Andy and Claire aren't helping Bob with the wing-folding, you haven't spent enough time in corporate environments. Instead, imagine that the task is not making paper planes, but completing a piece of complex, tedious ISO9001 documentation. Alternatively, imagine that the children are being paid piecework for their activities, or are routinely whipped for not meeting a production quota. All the preceding situations should produce the relevant bottleneck suitable for our discussion.]

So how much is the group of children capable of producing overall? Because the slowest child (Bob) is only working through 5 planes/minute, the group as a whole can only make 5 planes/minute. Any excess planes Andy makes will pile up (as "inventory"), while Claire will be spending a lot of time twiddling her thumbs. This, in the simplest sense, is a bottleneck.
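The arithmetic behind this is worth making explicit: in a linear pipeline, overall throughput is the minimum of the stage rates. A toy sketch in Ruby, using the children and rates above:

```ruby
# Throughput of a linear pipeline is limited by its slowest stage
def pipeline_throughput(rates)
  rates.min_by { |_stage, rate| rate }  # => [slowest stage, its rate]
end

rates = { andy: 10, bob: 5, claire: 20 }  # planes/minute
bottleneck, throughput = pipeline_throughput(rates)
puts "Bottleneck: #{bottleneck} at #{throughput} planes/minute"
# => Bottleneck: bob at 5 planes/minute
# Andy piles up inventory at (10 - 5) = 5 planes/minute;
# Claire is idle for (20 - 5)/20 = 75% of her time.
```

Note that improving Andy or Claire changes nothing: only raising Bob’s rate raises the minimum, and hence the output of the whole group.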

What happens if the bottleneck stops working? If you watched the video, you know the answer. Andy has enough extra capacity to build up a few spare planes, so he can take a break now and again. Claire can sprint to catch up. But Bob is different: he can’t sprint to catch up with Andy, and he can’t produce any excess to keep a buffer for Claire. In short, there’s a lot of pressure on Bob to work as efficiently as possible.

Bottlenecks in a software organisation

I should point out that software teams are different from groups of children - although given the way some are managed, not everyone appears to believe so. And software organisations have more complex workflows than this - although people attempting strict adherence to the mythical Waterfall methodology might lead you to believe otherwise. If the bottleneck in a development team is not under pressure from others, they will more than likely be under pressure from themselves to do well. Also, software development is a specialised skill, and even moving roles in the same company has a long lead time due to the amount of tacit knowledge involved.

Now, people are strange things. In some sense, we can be abstracted as "resources" - I don't advise it though, if you want to make many friends. But as Tom DeMarco points out in Slack [amzn]: we are not fungible. That is, you can't easily swap one person out for another, in all but the most simplistic situations - making paper planes, perhaps. Our lack of fungibility stems from a more fundamental issue of dealing with people: we are not mechanical parts, we are complex parts of a complex system. We have an influence back out on the people giving us work that, unlike machinery, goes way beyond failing due to irregular maintenance, or topping out at a capacity limit. Both can be useful analogies, though - as most importantly, we experience stress.

Having described bottlenecks, the pressure on them, the way to look at them in a more complex organisation (and a few caveats to the model) - it's now time to answer an important question: who should be responsible for monitoring the health of the bottleneck?

The Golden Rule

You must never let the bottleneck monitor itself

Why is this?

You may have already seen the pieces. Let's recap. A bottleneck:

  • is responsible for the throughput of the whole organisation (or team)
  • is under intense pressure, either from itself, or from elsewhere in the organisation
  • can't (immediately) get outside assistance, either because the skills are too specialised, or the organisation doesn't have the capability to accommodate high demand on it

To understand what happens when the bottleneck tries to monitor itself, let's take a simplified system. Let's pick a single individual, and let's call him Bob, Bob senior.

Bob could be a freelancer, a single worker in a department (perhaps with a specialised skill, left to his own devices), the lone manager of a department, or any other similar situation. Fundamentally, he is on his own in some way. And there's an important condition: Bob is a bottleneck. If Bob slacks, goofs off, or makes a hash of things, everybody loses. Quite possibly, nobody cares about our Bob unless he screws up.

Bob has two responsibilities: he must get his work done, and he must somehow ensure that his own work is managed. How should Bob manage his time? He has two conflicting requirements. Like everybody, he's in an evolving world, so he either has to sink or swim. (Although, like Deming said: It is not necessary to change. Survival is not mandatory.) Assuming he wants to swim, he must:

  • do his work at maximum efficiency
  • manage himself to ensure he spends his time effectively, and improve the way he works

(Aside: most organisations don't attempt to measure effectiveness - they measure efficiency at best, in terms of staff utilisation. As we've seen, keeping everyone 100% busy is not a condition for maximum throughput. Eli Goldratt's The Goal [amzn] has a lot more to say about this.)

The core conflict of a bottleneck

So our protagonist Bob is burdened with this conflict, and he must make the best of it. What happens if he chooses the route of doing? Then he will do more, but it will not necessarily be his best. He may make errors of commission - doing things that didn't need to be done, because he didn't stop to think if they could be eliminated. Or he may make errors of omission - missing things he could have seen had he stopped to think about the bigger picture. Both of these cry for more effective management. But Bob is effectively his own manager, so what happens if he chooses the route of management? Now something more insidious takes place. For every moment he spends managing himself, he is not doing his own work. Remember: Bob is a bottleneck, so he is under intense pressure to work efficiently, and hopefully effectively. Every moment he spends managing himself, improving himself, he is not doing.

What happens when a bottleneck is not doing? This is easy: as we've seen, the whole organisation loses. It is as true in a complex software organisation as it is with children making paper planes. And when Bob is making the whole organisation lose, he is putting himself under more pressure: either from someone else, or from himself, or possibly both. (Although he will probably just see it as work piling up faster.)

What happens when people are put under pressure? They become stressed. And what happens when people are stressed? This is complex, but a simple yet plausible explanation lies again in Slack: pressure up to a point increases productivity, but beyond that, causes it to degrade.

This is an interesting situation. It means that if Bob does not attempt to manage himself, he will cause his own productivity to degrade in the long term - relative to the rest of the world around him - by not improving. But if he does attempt to manage himself, he will cause his productivity to degrade in the short term, by spending his capacity on management, not action. So our poor friend Bob is in a bind: he loses capacity either way.

Well, Bob is not one to accept defeat. He wants to be the best he can, bottleneck or not. So he makes a resolution (in his spare time, possibly a random thought from his subconscious) to manage himself as effectively as possible. But what is this activity? It's meta-management - the management of management, a form of management in itself. So in order to manage himself better (perhaps to spend more time doing), he must increase the amount of time he spends managing. And decrease the amount of time he spends doing. And so, increase the pressure he has on himself. And how does he resolve this? Well, he must choose to spend more time either:

  • doing - to achieve more; or
  • managing - to be more effective

Does this look familiar? We have a circle! And it has teeth. They often do, because the virtuous ones usually take careful planning, while the vicious ones lie waiting for us everywhere.

Hope for the future

So what can we do about this? Well, this is where creativity is needed, so the following are only a few suggestions.

  • Greater appreciation of bottlenecks: Bottlenecks are a concept we all understand intuitively, but don't always consciously look out for. Seeing a few extreme cases (and they occur in day-to-day life) of what can go wrong when a bottleneck's time is wasted can help us focus on the importance of the situation.
  • Protecting retrospectives: Allocating a percentage of a bottleneck's time to regular, continual improvement is essential. Nobody should be allowed to consider themselves "too busy" to review their progress, for therein lies a death spiral. Equally, they must trust their managers to know when their reflection and improvement is sufficient.
  • Peer review: I've tried this personally, and it can be highly effective. Gathering opinions from impartial people on a regular basis can stop you veering off on a tangent. It is also naturally timeboxed, as it can be done in down-time, and you must return the favour.
  • Making work more visual: It's easy for anyone to get overworked, but without a simple way to see what people are working on, bottlenecks may end up taking on tasks that are wasteful, redundant, or could be done better by someone else. Someone else needs to be involved in this, though.
  • Creating a culture of slack: If we can accept there are only so many hours a week that any individual can work on one thing, we'll be less inclined to push for more and more work when what is really needed is rest and review.

These are mostly steps to exploit the bottleneck. ("Exploit" in this sense does not mean to depersonalise them in the same way "resource" often does, merely to not waste their time.) Most of the suggestions above are about time- and energy-management.

    I've concentrated on what happens when the bottleneck is a single person. There's a lot more to be said about the bottleneck team, as opposed to the bottleneck individual, but most of the same concepts apply - they just have a more complex human element.

    In all cases, though, the situation can be addressed without command-and-control attitude, or hostile criticism. The bottlenecks in most organisations are people, and the most effective way to waste time at a bottleneck person is to treat them as anything but. In fact, I suspect a lot of waste of bottleneck individuals' time stems precisely from the fact that we don't take into account the human element of pressure and stress.

    Further reading

    If you'd like to learn more about bottlenecks, the ideas I use come from the Theory of Constraints. There's a wealth of information out there, but I recommend reading the original business novel The Goal [amzn] (already mentioned), which explains how to identify and exploit bottlenecks. Its sequel It's Not Luck [amzn] explains why even complex organisations have few real bottlenecks, and explores different types. I've referenced Slack [amzn] twice in this post - it's not about bottlenecks, but a lot of the ideas about trying to run over-capacity are highly relevant. If you're interested in more references or further explanation, please ask in the comments.

    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424853 2011-04-08T12:57:00Z 2024-04-07T20:29:28Z Why Can't Developers Estimate Time?

    A few interesting points came up on a mailing list thread I was involved in. Here are a few of them. The original comments are presented as sub-headers / quoted blocks, with my response below. This isn't a thorough look at the issues involved, but what I thought were relevant responses. Note: I've done some editing to improve the flow and to clarify a few things.

    Why can't developers estimate time?

    We can't estimate the time for any individual task in software development because the nature of the work is creating new knowledge.

    The goal of software development is to automate processes. Once a process is automated, it can be run repeatedly, and in most cases, in a predictable time. Source code is like a manufacturing blueprint, the computer is like a manufacturing plant, the inputs (data) are like raw materials, and the outputs (data) are like finished goods. To use another analogy, the reason Starbucks makes drinks so quickly and repeatably is because they invested a lot of time in the design of the process, which was (and is, ongoing) a complex and expensive task. Individual Starbucks franchises don't have to re-discover this process, they just buy the blueprint. I'll leave it as an exercise to the reader to infer my opinion of the Costa coffee-making process.

    It's not actually always a problem that development time is unpredictable, because the flipside is that so is the value returned. A successful piece of software can make or save vastly more than its cost. Tom DeMarco argues for focussing on the high value projects for exactly this reason. Note that this does require a value-generation mindset, rather than the currently-prevalent cost-control mindset. This is a non-trivial problem.

    By far the best explanation I've read of variability and how to exploit it for value is Don Reinertsen's Principles of Product Development Flow, which is pretty much the adopted "PatchSpace Bible" for day-to-day process management. And when I say "by far the best", I mean by an order of magnitude above pretty much everything else I've read, apart from the Theory of Constraints literature.

    Here is the data from my last development project. (Histogram generated in R with 5-hour buckets: the horizontal axis shows the duration in hours for the user stories - 0-5 hours, 5-10 hours, etc; the vertical axis is the number of stories that took that duration). We worked in 90 minute intervals and journaled the work on Wave, so we knew task durations to a pretty fine resolution. (We did this for both client communication and billing purposes.) The result: our development times were about as predictable as radioactive decay, but they were very consistently radioactive. Correlation with estimates was so poor I refused to estimate individual tasks, as it would have been wilfully misleading, but we had enough data to make sensible aggregates.
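To illustrate that shape, here is a sketch in Ruby with simulated data - not the original journal numbers, and the 7.5-hour mean is an assumption. It draws durations from an exponential distribution (the distribution behind radioactive decay) and buckets them into the same 5-hour bins:

```ruby
# Sketch only: simulated data, not the real project journal.
# Draw task durations from an exponential distribution and bucket
# into 5-hour bins, as in the histogram described above.

def sample_duration(mean_hours, rng)
  -mean_hours * Math.log(1 - rng.rand)  # inverse-CDF exponential sampling
end

rng = Random.new(42)  # fixed seed, repeatable output
durations = Array.new(100) { sample_duration(7.5, rng) }  # assumed 7.5h mean

histogram = durations.group_by { |d| (d / 5).floor * 5 }
histogram.sort.each do |bucket, stories|
  puts format("%2d-%2d hours: %s", bucket, bucket + 5, "*" * stories.size)
end
```

For this mean, each successive bucket holds roughly half as many stories as the one before - consistently radioactive, but useless for predicting any individual story.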

    Rule of thumb: take the estimates of a developer, double it and add a bit

    The double-and-add-a-bit rule is interesting. When managers do this, how often are tasks completed early? We generally pay much more attention to overruns than underruns. If a team is not completing half of its tasks early, it is padding the estimates, and that means trading development cycle time for project schedule. Cycle time is usually much more valuable than predictability, as it means getting to market sooner. Again, see Reinertsen's work, the numbers can come out an order of magnitude apart.
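One quick way to check that heuristic is to count how many tasks come in under their estimate. A minimal sketch - the estimate/actual pairs below are invented for illustration:

```ruby
# Invented numbers, purely illustrative.
tasks = [
  { estimate: 8,  actual: 11 },
  { estimate: 6,  actual: 13 },
  { estimate: 10, actual: 7 },
  { estimate: 4,  actual: 5 },
]

early = tasks.count { |t| t[:actual] < t[:estimate] }
puts "#{early} of #{tasks.size} tasks finished early"
# prints "1 of 4 tasks finished early"
```

By the argument above, an early-completion rate well below a half suggests the estimates carry little information, padded or not.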

    Also, this is the basis for Critical Chain project management, which halves the "safe" estimates to condense the timescale, and puts the remaining time (padding on individual tasks) at the end, as a "project buffer". This means that Parkinson's Law doesn't cause individual tasks to expand unduly. I'm unconvinced that Critical Chain is an appropriate method for software though, as the actual content of development work can change significantly, as feedback and learning improves the plan.

    People in general just make shit up

    It's not just developers that are bad with estimates either. Everyone at some point is just winging it because it's something they've never done before and won't be able to successfully make a judgement until they have.

    As a community we need to get away from this. If we don't know, we don't know, and we need to say it. Clients who see regular progress on tasks they were made aware were risky (and chose to invest in) have much more trust in their team than clients whose teams make shit up. It's true! Srsly. Don't just take my word for it, though - read David Anderson's Kanban.

    Estimating is a very important skill and should be taught more in junior dev roles

I propose an alternative: what we need to do is teach junior devs the meaning of done. If estimation problems are bad enough, finding out at some indeterminate point in the future that something went out unfinished (possibly in a rush to meet a commitment … I mean - estimate!) blows not only that estimate out of the water, but the schedule of all the current work in process too. This is very common, and can cause a significant loss of a development team's capacity.

    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424854 2011-03-19T15:17:00Z 2024-04-07T20:29:27Z The Price of a Specialist Skill

Having a rare skill means that a small increase in demand may translate to a large increase in the rate your work is valued at, the rate you can charge. If so, focussing on that specialism should lead to greater profits, right? Maybe it can, but there are other forces at play. To demonstrate this, here is the story of a (fictional) thatcher and his business. Disclaimer: I've never been involved in the thatching business, but I've seen a roof thatched slowly, day by day. And believe me, it takes a long time.

    The Story of the Thatcher and the Cottage-Owner

    The thatcher lives in the countryside. He doesn't spend all his time thatching, in fact, most of his work is joinery for a local farm. But his own house is thatched, and he has kept his thatching skills fresh.

The houses nearby now mainly have tiled roofs, the mark of industrialisation increasingly stamped over the houses. But one cottage at least still has a thatched roof, and it inevitably needs repair from time to time. The cottage-owner has not lived there all his life, and is certainly no expert on thatching.

One day, a storm damages the roof of the cottage: part of the surface is blown off, and some parts look susceptible to rain. The cottage-owner realises he needs his roof fixed, and goes out looking for someone to help. The first people he finds are tilers: tilers are plentiful and local in this landscape.

    The first tiler replies: "We can't fix a thatched roof. We'd have to tile this for you. We'd have to start from scratch, and it will cost a lot of money." The cottage-owner goes to find another.

    The second tiler replies: "Sorry, this roof is thatched. We can remove the thatching and tile it, but it will cost a lot of money." The cottage-owner goes off again.

    The next person the cottage-owner stumbles across is our thatcher. The thatcher looks at the roof and says, "Yes, I see the surface damage." He investigates a bit further: "Let me look inside … ah, some damage from rodents." At this point he uses his own judgment to conclude that he has investigated the situation to an appropriate level, and presents his findings: "It will not be easy to fix, but I have the skills, and this is my day rate. It is moderate, if not cheap." The cottage-owner decides that the thatcher is the best person to help.

The thatcher makes progress on the roof, and for a while, everything is fine. Then, as he tears up the damaged thatching, he realises why part of the roof got so badly damaged in the storm: the beams underneath were rotten. He describes this situation to the cottage-owner, and explains that he'll need to hire help from the farm to fix it. "I know how to do this, but your roof will collapse soon if it is not taken care of."

"But… I can't afford to pay more, I was hoping to be able to patch up the damaged part of the roof," the distressed cottage-owner explains.

And so, the thatcher is caught in a dilemma: does he continue to apply his specialist skill, which is now revealed to be suddenly less valuable than before (the amount of work has increased, but not the money paying for it); or does he tell the cottage-owner that he would be better off having his roof removed and rebuilt (possibly tiled), and lose this business forever? There is not even the possibility of finding another thatcher, because there are no more in the area. The cottage-owner, it turns out, does not want to "waste" the money invested so far, and so the thatcher continues.

The thatcher hires two farm workers, without whose help he could not hope to fix damage of this scale. Slowly but surely, he rebuilds and rethatches the roof, encountering more underlying damage along the way. Eventually it is complete, finished to the best of the thatcher's ability, but at great expense to him, and at great sacrifice to his joinery work.

What is the problem-solving thinking behind this, and what is the real economic situation?

    The goal of the cottage-owner was to stop his roof leaking in, even if he believed his goal to be to rethatch the roof. The goal of the thatcher was to provide a means to stop the roof leaking in, which in his case he could do by thatching. We'll assume that the goal of the thatcher was not simply to rethatch a roof, as much deeper problems emerge when the goal of both supplier and customer is to apply solutions in search of a goal. (You could re-think this in a less utilitarian way if you include the aesthetic value of a thatched roof.)

The problem the cottage-owner observed was storm damage. The problem the thatcher observed was more complex, due to some underlying damage. The problem neither saw, nor could see, was structural damage. (Please forgive me if the thatching example here is tenuous, as I disclaim knowledge of the technical skills involved!)

    The initial chosen solution was to rethatch the roof. This made economic sense until the point when deeper structural problems were found. Here, the thatcher is now well aware of the dilemma: to abandon or to subsidise his customer. But what he misses is the application of the sunk cost fallacy: the cottage-owner not wanting to "waste" the time and money invested.

    And therein, I believe, lies the true price of a specialist skill: that without being able to refer your customer to alternative providers, it may become at some point economically rational to abandon them, but to do so would significantly impact the customer's situation, and may trigger arguably irrational, but (due to their origin in human nature) unavoidable negative consequences. The thatcher would certainly have good reason to believe that a half-finished thatched roof in the area would not do much good for his reputation as a thatcher.

    What solutions can be applied to improve the situation?

Some of these problems can be tackled more-or-less directly. If the thatcher had ensured that the cottage-owner understood that the true total cost could be as high as having half the roof rethatched, then rebuilt from scratch, the risk of being financially squeezed would have been removed; but due to our habit of treating estimates as commitments, this may have been just as likely, or more so, to push the cottage-owner to having his roof tiled, even if that would have been a more expensive and less desirable solution. Sadly, our current educational system does a poor job of teaching essential knowledge of statistical variation, so it falls upon every software developer to educate their clients on the matter.

    Then there is tackling the sunk-cost fallacy. Unfortunately, while that is sometimes easy, it can be as hard to break in programming as it is in poker. More significantly, breaking your own logical fallacies is limited only by your own willingness to challenge your own assumptions; helping your customers break their assumptions is another level harder, and not something they will always want to pay for, even if they need it. (From experience, this is not an uncommon situation.)

    What are your thoughts?

    Have you been in this situation? Does any of it resonate? Are there faulty or unstated assumptions in the story which means it can easily play out a different way? Have you encountered other problems, or do you have other solutions? All ideas are welcome below.

    Updates

    I have edited the conversation around the initial investigation ("The next person the cottage-owner stumbles across is our thatcher…") to clarify some of the terms of the arrangement (the meaning is unchanged).
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424855 2011-03-08T16:30:00Z 2013-10-08T16:52:47Z Systems Thinking Sheffield 2: Why Won't My Car Start?

    These are the slides for the presentation and interactive session at Systems Thinking Sheffield 2, held in February 2011 at the GIST Lab.

    Note that the slides were prepared quite quickly, which means some of the examples are not as tight as they could be. Also, the output of the "story of the hosed monkeys" interactive tree-drawing session isn't included. I need to write a separate post about that one, as it raises interesting points both about behaviour in organisations and how to model it. (If you'd like to know more about this, please request it in the comments.)

    This is the first time I've tried to present these ideas in this format, so I learned a lot. A few key points:

• Many people's instinctive reaction, when figuring out why a situation plays out the way it does, is to gather facts, rather than to ask "why do we see this?" and challenge assumptions. That, I suspect, is because we think primarily by pattern matching, rather than analysis.
    • It's easier to introduce logic trees by presenting a partially-complete one (and they're all partially complete) and having people raise informal objections, than to teach by building one from scratch.
    • People value the emphasis on externalising and de-personalising problems, and questioning, rather than directly criticising, logic. I included a reference to the Agile Retrospective Prime Directive, which went down well even with a largely non-software audience.

    If you have any questions, please feel free to comment. I want to refine my presentation of logic trees over time. Many people are put off them at first, but everyone who has humoured me long enough to draw one said afterwards that they found the activity valuable.

    Why Won't My Car Start?
    View more presentations from PatchSpace Ltd
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424856 2009-12-17T02:32:00Z 2024-04-07T20:28:12Z The Mars Lander (without integration tests) in Ruby

At the Agile 2009 Conference in August, J B Rainsberger gave a talk called Integration Tests are a Scam, which you can watch on video. The session is well worth watching. While it's long, and takes a while to get to the core issues, it's a very thorough analysis of the costs of slow test runs and (failed) attempts to enumerate all application behaviour from too high a level.

    J.B. wrote an example of how focused tests can be used to detect integration issues in the blog post Surely we need integration tests for the Mars rover!. The example is worked through in pseudo-code. I find it hard to read extensive pieces of code, so I turned it into a coding exercise. Here is the pseudo-code translated into Ruby, with comments about the order of how it was built up (more interesting than replaying the SCM patches).

    J.B.'s methodical approach to collaboration/contract tests is simple and powerful. The Mars Lander example makes a good concrete example. I highly recommend working through it in your language of choice; I learned a lot.

    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424857 2009-12-15T21:51:00Z 2013-10-08T16:52:47Z Customer Input and the Russian Doll of Software Development

    While replying to a mailing list post, I realised I was doing a terrible job of articulating where I thought the value of communication from customers is in the software development cycle(s).


    The start of the thread was "is it normal for customers to have no contact with developers?". I said this is a terrible thing, and customers should always be able to talk to developers. This is simplistic - so I refined it to saying that having a primary point of contact before the developers is not a bad thing. This is unclear - so I tried to refine it, and in the process decided it was time to dust off OmniGraffle.

As an initial attempt, the model is that software development is a set of nested cycles, each of which involves specifying the problem in such a way you can test the solution, developing something to meet that specification, and refactoring to improve design and understanding.

Now, a few unintended things spring out of this, but let's tackle the initial problem - where should customers provide input? My current position is that the maximum value of customer input is during test case preparation, as identifying what problem to solve is almost always the hardest part of software. At the other stages, the focus is technical, and, with possible exceptions, customer input is of little value.

I once worked with a guy who constantly sat down by and badgered developers when they were trying to work. Little of his input was useful, and much of it caused delay and multi-tasking. A good deal of it was blue sky daydreaming that probably had no benefit in the next 6 months, at least. It doesn't have to be this bad, but incoming communication that interrupts developers and is not part of a feedback loop is waste, in my experience.

    However, developer contact with the customer is of immense value, as the ability to clarify and mine for insight enables simplification of code and reduces rework.

    The line is thicker between the inner-most development and the customer because my experience is that when developers have questions during the coding phase, it's often about unexpected costs stemming from technical limitations. These tradeoff conversations enable economic decisions about what is feasible, rather than a build-at-all-cost mentality (yet another issue of fixed-price contracts).

    I'm making no claims that this is generally applicable, and counter-examples are welcome.

    What is a Market Test?

Watch out - the following is more conjecture than fact. I am only now going through the first iteration of customer development, so my opinion in relation to Lean Startup matters should not be given much (if any) weight.

    I coined the term Market Test because I couldn't think of anything better to represent the idea that what you fundamentally need is to specify something that will sell (or be used, if it's free/internal etc). An example of what I have in mind is an analytics system that monitors signup rate. It's analogous to the Customer Development Engineering on slide 23 of The Lean Startup slides (Eric Ries and Steve Blank). I nested it because I'm naturally uneasy about any segmentation or conflict (read "discordant" rather than "antagonistic") introduced into a development cycle. The idea that Customer Development Engineering is segmented from or in conflict with development may be a misinterpretation; it may be presented that way for visual impact.

    But: A team that can't invalidate its own assumptions is lacking a core self-improvement skill.

    How many cycles are there?

    It has been pointed out to me that the cycles in this diagram are all fundamentally the same, and that each one falls out of the next. Exactly what needs to be done at each layer is a technicality. That means the diagram simplifies to this:

    And let's put the customer inside the process FTW. This implies that all you need to develop valuable software is:
    • someone capable of identifying/proposing a problem and proposing a solution
    • an iterative development process that incorporates self-improvement
• a development team capable of applying this recursively to solve problems at all levels
    Which fits with another idle thought I have at the moment - that the only valuable design principles are those that apply recursively - but that one needs to be worked out first.

    Comments welcome. Especially any that explain how I ended up at this conclusion simply by asking "where should customers provide input?".

    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424858 2009-12-14T23:07:00Z 2013-10-08T16:52:47Z Testing Software is not Expensive - It's Free

    A common criticism of (aka excuse for not doing) test-driven-development is that it's too expensive in terms of developer time. Critics who take this position usually point to the time developers spend writing test cases, which at first seems like a sensible observation. There are (at least) two problems with this.

    First - the same people that label TDD as waste are often people who will happily spend - or allow their staff to spend - hours or days at a time in a debugger. Testing to find defects is waste.

    Second - and more importantly - writing test cases is not the same as running them to test the software.

    At some point, somebody has an idea. They say, I have this problem (for our purposes here, we'll assume they know this exactly), and if I can write a program to do these things, then my problem will be solved. That person has the ultimate test case for the as-yet-unwritten software: if it behaves how they want it to, their problem will go away.

Now, a developer takes over, and turns this conversation into a set of ideas about what the code should do to implement this behaviour. At the very least, having written something, he should run it and inspect what it does, to verify that it behaves as he expects. (Some don't even do that much...) This is simple manual testing. But note two things:
    • if he doesn't think about what it must do, he has zero chance of designing the right solution
    • if he doesn't test his assumptions about the software's behaviour, he pushes errors downstream, where they become slower and more expensive to correct
    Now given that the developer here must think about what he is doing - the most effective way to think about it is to express it in an unambiguous form. A form that something stupid and mindless can understand - say, a computer. If he can specify the problem in a way a computer can understand, the only source of error is in getting this spec right in the first place. But fortunately, as he's thinking about what he's doing, this is usually not a large source of errors. (If it is, you have a bigger problem on your hands.)

    How does our developer know if the computer has understood the spec for this code? The only way is to make the computer able to verify the program against the spec. Otherwise, the spec is about as useful as a stray Word file, such as a signed-off requirements document. We want booleans here. Flashing lights. Possibly red and green.

When this developer runs his spec program against his solution program, the computer is doing what he should do anyway before releasing it to his customer. The only difference is it can do it many orders of magnitude faster than he can. So fast, in fact, that it is effectively instant. How much does it cost to fire off the test run? A few seconds of developer time. Or, if you're using an automatic test runner, exactly nothing.

Up to this point, we've established two things:
    • writing test cases is the process of formalising a spec so that a computer can be employed for testing
    • running tests is effectively free
    But, how free?

    Information

    The inspiration for this post came from chapter 4 of Don Reinertsen's Managing the Design Factory (It's All About Information). The purpose of this chapter is to explain ways to efficiently generate valuable information. The examples in the chapter are largely from circuit engineering, but even there, there exists a continuum. From page 76:
    [Testing costs] could be twice as high with four iterations instead of two. This means that when testing costs dominate the economics we should concentrate on quality per iteration. We do not want to incur extra, expensive trials when the cost of a trial is high. In contrast, when testing costs are lower, we will get to higher quality faster by using multiple iterations.
So, if testing software is essentially free, how many iterations should we have? The answer is hinted at on page 74: this is an economic order quantity (Wikipedia) problem in disguise[1]. Out of sheer laziness to get an equation editor working, I'll reuse the slightly arcane, CC-licensed Wikipedia equation, reproduced here in text form:

Q* = √(2DC / H)
    Where:
    • Q* is the optimal order quantity - how many tests you should batch before you start a test run
    • C is the order cost - the cost of a test run
    • D is the rate at which the product is demanded - arguably requests for features (this is not explained in MtDF, presumably because you can demand features arbitrarily fast) 
    • H is the holding cost - the cost of running tests late in development, when change is more expensive

    The key, though, is that if C, the cost of running tests, is at or near zero, and H, the cost of making changes late is high (and every developer's experience is that tracking down bugs in old code is much harder than in freshly-written code) the optimal batch size of tests to hold is also at or near 0. Which in reality means:
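The arithmetic can be sketched in a few lines of Ruby - the numbers for D, C and H below are invented purely for illustration:

```ruby
# Q* = sqrt(2DC / H): the optimal number of tests to batch before a run.
def optimal_batch_size(demand_rate, test_run_cost, holding_cost)
  Math.sqrt(2 * demand_rate * test_run_cost / holding_cost.to_f)
end

# Expensive (manual) test runs favour large batches...
puts optimal_batch_size(50, 8.0, 2.0)   # => 20.0
# ...but as the cost of a run approaches zero, so does the optimal batch
puts optimal_batch_size(50, 0.01, 2.0)  # well under a single test
```

Which is exactly the conclusion: with automated tests, batch nothing - run everything on every change.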

    You should strive to keep the cost of testing software at effectively 0,
    and to run all your tests every time you make a change

    If you've done TDD for a while, you'll know this intuitively. But expressing it in terms of existing economic models, already in use in other forms of engineering, puts it on solid ground.

    I'll leave it open to interpretation exactly what I include in the scope of a "test", but that will be touched on in my next post. And if you doubt just how free software testing can be, take inspiration from IMVU's continuous deployment: Doing the impossible fifty times a day.

    [1] You mean you didn't spot it either? :)
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424859 2009-12-02T22:31:00Z 2013-10-08T16:52:47Z GeekUp Sheffield 20: Elephants in the Meeting Room

    Here are the slides from the GeekUp Sheffield 20 presentation: Elephants in the Meeting Room

    Elephants In The Meeting Room
    View more presentations from PatchSpace Ltd
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424860 2009-10-15T22:30:00Z 2013-10-08T16:52:47Z NWRUG October 2009: Uses & Abuses of Mocks & Stubs

    These are the slides for the NWRUG presentation on mocks, from July 2009.

Note that most of the slides were written in the middle of the night, and I didn't have much time to trim them down. And I didn't get to beta test them on a real live human being. So the presentation goes on a bit long, and some things look a bit strange without me there explaining them. I've corrected the slide that I noticed was spectacularly wrong (ie, the spec didn't even pass), but otherwise it's as presented.

    Also my opinions on some things may have changed since, so consider this an archive…

    From Specification To Success
    View more presentations from PatchSpace Ltd
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424861 2009-07-16T22:13:00Z 2013-10-08T16:52:47Z NWRUG July 2009: darcs

    These are the slides for the NWRUG presentation on darcs, from July 2009.

    NWRUG July 2009 - Darcs
    View more presentations from PatchSpace Ltd
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424862 2008-10-01T22:30:00Z 2013-10-08T16:52:47Z GeekUp Sheffield 6: From Specification to Success

    These are the slides for the GeekUp Sheffield presentation on developing software with user stories.

    The structure of the huddle was like this:

    • Intro - 10 mins
    • Audience writing stories - 10 mins
    • Audience prioritising - 15 mins (after it overran)
    • Break for coding - 45 mins (there was another talk here which gave me just enough time to code up the top-voted feature)
    • Demo of Cucumber, Celerity, RSpec using the code from the break - 15 mins (for full details and links, grab the slides).
    From Specification To Success
    View more presentations from PatchSpace Ltd
    ]]>
    Ash Moran
    tag:blog.patchspace.co.uk,2013:Post/424863 2008-06-04T22:30:00Z 2013-10-08T16:52:47Z GeekUp Sheffield 2: Encouraging Agile Discipline

    These are the slides for the GeekUp Sheffield presentation on encouraging discipline in software teams, from June 2008. The nature of the session was a huddle, so the slides are brief, and most of the value was in the discussion after.

    Encouraging Agile Discipline
    View more presentations from PatchSpace Ltd
    ]]>
    Ash Moran