Netflix – Less Is More

I know, I know, I said I’d write part 3 of the BDD post. I promise I’ll get to it.

But recently I started watching a new Netflix show that got me thinking. It’s called Castlevania, and it completely appeals to the Dungeons and Dragons-playing geek in me. Everyone should watch it.

But what I thought was interesting was how the show aired. I was told about this show by my friend, he was complaining that as he was just getting into it, IT ENDED.

Netflix has trained us to binge our TV; we can’t wait week to week anymore. They drop whole seasons at once, and they even autoplay the next episode. That’s what I thought was interesting about Castlevania: they only made 4 episodes.

Now recently Netflix has been cancelling some of their shows. So I thought that maybe only releasing 4 episodes of this was a kind of taster. I found a cool quote from the CEO of Netflix, Reed Hastings, on cancelling shows. He said, “I’m always pushing the content team. We have to take more risk. You have to try more crazy things. Because we should have a higher cancel rate overall.” This is amazing lean thinking: keep trying new things, fail often and learn from it.

After watching the show, I totally agreed with my friend. So I took to Facebook to see what everyone else was thinking. I found this:

[Screenshot of the Facebook thread: fans demanding more of the show, and the announcement that 8 more episodes are coming]

Loads of people demanding more of the show. Don’t believe me? Give it a google. Check out the Reddit page, or the Rotten Tomatoes reviews. Now I am not claiming to understand why there were only 4 episodes. I know that kind of animation is hard to produce and takes a lot of time. But after reading the CEO’s comments, I would like to think this is a minimum viable product of sorts. Get your users addicted to binge-watching TV, then only release a few episodes. Then just wait for feedback. If nobody replies, you have only wasted the time it takes to make 4 episodes. But if people demand more, well, you can start working on more.

It looks like it worked. As you can see from the screenshot above, Netflix is making 8 more episodes. So what can we learn from this?

Fast feedback is the key. Are we building a big feature right now? Does it really need everything? Would it make more sense to build something simple that just shows the basic idea? It doesn’t need to look pretty (my designer friends will hate me for that). Once it shows off something new, it can be used for feedback. All the rest can be done with the customer who is interested. Maybe the cool new feature you are building is actually not going to sell; it would be great to find that out early.

Let me know what you think.


Where To Start Part 2

This is the second part of my original where to start with BDD post, which can be found here:

Alright, so now our team is a bit more collaborative. We have a multifunctional team with some developers, a tester and a BA. We are also visualizing our work and limiting it, so work is being finished and not just started. Now we are in a position to introduce BDD.

So at this point we should have some form of Kanban board. Work starts in the “To Do” lane and continues to the “In Progress” lane. Something interesting to think about: are there many bugs being found in the testing lane? In the last post I talked about developers not picking up new work until all the bugs found in the testing lane are fixed. But how can we prevent those bugs in the first place?


This is where BDD fits in nicely. For those that don’t know, it stands for behaviour driven development. I have previously talked about what it is and why I love it. It allows for a conversation before any code is written. Personally I have found most bugs come from missing or misunderstood requirements. So if we can agree on acceptance criteria together, there will be less confusion in requirements and fewer bugs.

The easiest way to start is by adding a new lane to your board. After “To Do” and before “In Progress” add “Discuss”. So what happens in this new lane? The answer is simply a conversation. This is where the 3 amigos fit in. A tester, a business analyst and the developer who has just picked up the story. The most important aspect of BDD is this conversation. We can now talk about the requirements, and the expected behaviour of the software. This conversation should remove confusion in what the value of the story is.

This is easier if we start with just a user story and no acceptance criteria. The AC will be written as a group, and each person will bring their own unique viewpoint. If acceptance criteria already exist, confirmation bias can make us miss something, as it is easier to just not think and accept what is already written down.

What I have found works is specification by example. So start by trying to capture the behaviour using examples. John Ferguson Smart recently ran a workshop in which he talked about example mapping. I loved it and his ideas can be found here:

Using examples can also be a great judge of story size. Usually if there are more than 4 examples I recommend splitting the story. Keeping stories nice and small reduces complexity and helps get fast feedback as they take less time to get to the customer.

BOOM, we now have examples. Now it’s time to add a new lane; call it “Distil”. This is where we turn our examples into automatable acceptance criteria. The examples become “Given, When, Then” steps, which can then be automated as failing tests. The story can then progress into the development lane and be started.
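To make that “Distil” step a little more concrete, here is a minimal sketch in Python. The story, the shipping rule and the `shipping_cost` function are all hypothetical, and teams would usually bind the steps with a BDD tool such as Cucumber or SpecFlow, but the shape of the failing acceptance test is the same:

```python
# Hypothetical story: "Orders over 50 euro ship for free."
# One example from the Discuss lane, distilled into Given/When/Then.

def shipping_cost(order_total):
    """Hypothetical code under test; written to make the test below pass."""
    return 0 if order_total > 50 else 5

def test_free_shipping_over_threshold():
    # Given a customer with an order worth 60 euro
    order_total = 60
    # When the shipping cost is calculated
    cost = shipping_cost(order_total)
    # Then shipping is free
    assert cost == 0
```

The test starts life failing (before `shipping_cost` exists) and pulls the implementation into existence, which is exactly why the story can only enter development once the examples are distilled.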

To make this a little more clear, in my next post, I am going to go through a practice story.

So, like Led Zeppelin famously sang, we don’t want any “communication breakdown” driving us insane. Give this a try and let me know how you get on.


Test Bash Belfast – Afternoon Session


One thing I enjoyed was that lunch was provided. This means nobody had to leave and we got to meet and talk to fellow test-bashers. My favourite part of any conference or meetup is meeting folks. You can learn from talking to fellow testers just as much if not more than from some talks.

On to the afternoon’s talks.

Shift Left, Shift Right, And Improve The Centre – Augusto Evangelisti (@augeva)

Gus started off by saying it was OK to sleep during his talk as he had the after-lunch death slot. Of course, Gus is way too energetic for anyone to fall asleep.
He started out by busting some myths about continuous delivery (CD), such as:

  • CD does not work for complex things
  • CD teams have buggy software
  • CD can only work in non regulated industries

Gus went on to mention various companies such as Facebook who proved these wrong. We tend to complain that our software is too complicated for this to work with, but we would be lying if we said we are more complex than Facebook.

A team should be able to envision, analyze, monitor, and support software. Teams can do this in three main areas: left, right and the centre.

Shift Left:
Reduce the complexity of the code. Chunks of code should be small; Gus said about the size of your head. This makes them easier to review and maintain.
Using BDD as a collaboration tool will help keep work focused and simple. This leads nicely into test automation. Pair programming can help reduce complexity and ties in well with code reviews. Another cool idea was mob programming; maybe try it for one or two stories and see how it works. Of course, impact mapping is also a brilliant way to question the value of features before any code is written. Gus then mentioned improving testability but, thanks to Rob from the morning, did not have to delve into the subject. Another good shift-left activity mentioned was WIP limits, which allow a team to focus on tasks and actually start getting work finished, not just started.

Improve The Centre:
The centre is usually where we live as testers. We are used to exploratory testing and getting bugs found and fixed. But to improve here, Gus says, we can teach others. We can pair up with our teammates and teach exploratory testing. We can help everyone think like a tester, and at the same time improve our own dev and analysis skills. Pair exploratory testing is a great way to do this. Even showing some testing in the demo can help explain how we think and help share ideas.

Shift Right:
Right shifting is all about finding the value in what we do. This is where we engage the customer. Gus suggests monitoring the customers’ use of our software. We can then see what is used and what doesn’t work. This allows us to use customer feedback to design new products. A great way of doing this is by using canary releases. For those who don’t know what this is, here is Martin Fowler’s explanation: “Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.”
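The core of that idea can be sketched in a few lines of Python. This is illustrative only: the user IDs, the 5% figure and the version names are invented, and real canary routing normally happens at the load balancer or deployment layer rather than in application code:

```python
import hashlib

def in_canary(user_id, percent=5):
    # Hash the user ID into a stable bucket from 0 to 99, so the same
    # user always sees the same version for the whole rollout.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def choose_version(user_id):
    # A small subset of users gets the new release; everyone else stays
    # on the stable one until the canary looks healthy.
    return "new-release" if in_canary(user_id) else "stable-release"
```

If monitoring shows problems for the canary users, only that small subset was affected and the rollout can be stopped before everyone sees the change.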

As testers how can we help do this? Gus answered by saying we need to develop three core skills. These are:

  • Active listening
  • Empathy
  • Influencing

A Test Pyramid Heresy – John Ferguson Smart (@wakaleo)

John says that test automation is like any other tool. It is either a benefit or a hazard. So we need to ask ourselves, how much are our tests worth? Three questions we can use to figure that out are, why, what and how.

  • Why are we doing this?
  • What are we trying to test?
  • How are we going to test it?

Without asking these questions, the testing pyramid can very easily turn into the testing ice cream cone. Don’t use a web test when a service or unit test could do the same job. My favourite point John made was about writing unit tests. Writing unit tests after the code is written is largely a waste of time: unit tests written after the code will perfectly test that the code is doing the wrong thing. If we write the unit tests first, we can guide how the code is written.
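John’s test-first point can be sketched like this (the `price_with_discount` function and its 10% rule are made up purely for illustration):

```python
# Step 1 (red): write the test first. It describes the behaviour we
# want before any implementation exists, so it guides the design.
def test_discount_applies_over_100():
    assert price_with_discount(120) == 108  # 10% off above the threshold
    assert price_with_discount(80) == 80    # no discount below it

# Step 2 (green): write the simplest code that makes the test pass.
# Written the other way round, the test would merely rubber-stamp
# whatever the code already does, right or wrong.
def price_with_discount(total, threshold=100, rate=0.10):
    return total * (1 - rate) if total > threshold else total
```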

Tests come from different sources. There are three main areas:
Business tests – Typically acceptance criteria and created using BDD (feature mapping/example mapping).
QA tests – Usually exploratory and found manually.
Developer tests – These are usually unit and integration level tests.

Instead of focusing on writing loads of tests, start by writing examples of how the API should work. Tests should have three main roles:

  1. Discovery – Using tools like impact mapping to help figure out what features are valuable.
  2. Describe – BDD tools to help describe and document the software.
  3. Demonstrate – Tests should show the code does what you expected it to do.

Testing In Production – Jon Hare-Winton (@jonhw87)

My first thought when I heard this title was, this man is insane. Testing on production is pure madness. But I decided to put my bias aside and see if there was some method to the madness.

Jon started by saying we do most of our work very far away from where our users are. Test environments are not an accurate portrayal of live. OK, he has a point here. Jon then asked what’s wrong with our test environments. We test something, make a mess, don’t clean it up and move on. This leaves us with a really messy area to test in. They are lower in importance than our dev environments, so they don’t get the same level of maintenance. They are inaccurate and full of weird test data. Are they really a good environment for using the software the same way as our customers?

Feature toggling was Jon’s recommendation. What has worked for them is to toggle something off for the customer, but start testing it on live. Jon said they started slowly: just manual testing, just a little bit. Eventually the automation was run on hidden live environments. Once the feature is tested on live, it can be toggled on, and the customer gets to see and use it.
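A minimal sketch of that toggle idea, assuming an in-process flag dictionary, a made-up flag name and a tester allow-list (real systems would normally use a feature-flag service rather than a dict):

```python
# Flags default to off, so customers cannot see the feature while it
# is still being tested on live.
FLAGS = {"new_checkout": False}

def is_enabled(flag, user=None, testers=("qa_team",)):
    # Testers can exercise the hidden feature in production; customers
    # only see it once the flag is flipped to True.
    if user in testers:
        return True
    return FLAGS.get(flag, False)
```

Toggling the feature on for everyone is then just a one-line change of the flag value, with no redeploy of the code itself.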

I was sold on the idea. I now want to try it. Testing on production, maybe not such an insane idea after all.

The End

This is where my notes ended. My head was about to explode with all the information I had taken in over the day. I decided to just listen for the last two talks. So I must apologise to Sharon McGee and Simon Tomes (@simon_tomes) for not having any notes.

Luckily, though, it has taken me so long to get this second post out that the talks have since been put online. I highly recommend everyone get over to the Dojo and watch the talks, especially the last two; they were fantastic.
They can be found here:

You can also look out for me messing up a 99-second talk by not being able to make a solid point in 99 seconds (shocking, I know).

Roll on Test Bash Dublin, watch this space for more details.

Test Bash Belfast – Morning Session


I am just back from the Ministry of Testing’s “Test Bash” in Belfast. First I want to say how fantastic it was, I really enjoyed the day.

Here are my notes from the day. These are just some points I jotted down, not everything from the day, so if anyone thinks I missed something cool, please share it with me. 

Cynefin For Testers – Liz Keogh (@lunivore):

Liz opened the day with a talk about different levels of work complexity. Liz says we estimate work into 5 different areas of complexity.

5. Nobody has ever done this.
4. Somebody outside the organisation has done it, but we are not sure if we can do it.
3. Somebody inside the organisation has done it before.
2. Someone in our team has done it before.
1. We all know how to do it.

We insist on doing the 1s and 2s first as we like things we know how to do. What we should be doing is the 5s and 4s, failing and learning from them. So don’t insist on doing work we already know how to do.
Another great point was that humans LOVE seeing patterns, even when they are not there.
An example Liz used was this:
1  2    4  5
What is missing from the above list?
Well following the pattern we get, 1  2  3  4  5. But if we tried to break the pattern or ask for some context we may get something else like, 1  2  :  4  5.
My main takeaway from this talk was to work on the more complex things first, fail and learn. Also try and break pattern matching even though as humans we love patterns.

Tested By Monkeys – Jeremias Rößler (@roesslerj)

I learnt a new term from this talk and loved it: banana software. When you buy a banana it is usually green and ripens when you take it home. The same can be said of software: when it is released it is unripe, and it ripens as our customers start finding bugs. Which leads on to another great point: when is a bug a bug?
Jeremias answered with “when it is not a feature”, which is a brilliant point. He went on to say that if your customer is used to your software already, you can’t change it. This means that even if we find a bug, it may be being used as a feature by a customer; if we “fix” it, we may be breaking our software for that customer.
So regression testing is NOT about trying to find new bugs, but checking that the software works the way it is currently working. So even if the software is wrong, it should be the same wrong every time.
This is where we use the monkey. The monkey can do the boring regression checking for us, and leave us to do more fun things, like exploratory testing or questioning business value.
What I took from this talk was that we should be automating the boring checks, and that just because we think something is a bug, our customers may not. This talk will be given at Agile Testing Days; if you are lucky enough to be heading to it, I highly recommend checking it out.

The Automated Acceptance Testing Paradox – Mark Winteringham (@2bittester)

Mark asked a valuable question – how do we know what is and is not acceptable in our acceptance tests?
WebDriver is a powerful tool but cannot test UI design. For example, if a website loaded with no CSS or JavaScript, the acceptance tests would still pass. So what do our acceptance tests mean to us as a team?
We should not use our tests for validating, but for guiding. A wonderful quote from Mark, “a failed check is an invitation to explore”.
So if the check fails have discussions, explore the area. Is the check failing for a correct reason? Is it even valid anymore?
What if a developer is working on an area of the code and the tests are still passing? Usually we celebrate this – fantastic, we have not broken anything – but what if the checks are not correct and thus of no value? Maybe we have made a wrong assumption; we can use these checks as a guide to ask these questions.
Tools do not replace us, they support us.
My main takeaways, be aware of your assumptions. User acceptance checks are guides, and other testing activities are required.

A Tale Of Testability – Rob Meaney (@RobMeaney)

Rob’s message was clear: let’s see what can happen if we design software with testability in mind. What are your current testing problems? Bring them to your team and fix them together.
Some questions to keep in mind: how can we make this software easier to test? How can we design it to be easier to test?
Look at everything: boundaries, all our internal dependencies, hardware, etc.
Get into exploratory testing: learn, design, execute and get feedback.
To me, Rob’s key points were collaboration and removing the poor bored zombie tester. Involve him in your design and prevent problems.

Testing So You Can Move On – Nicola Owens (@NicolaO55)

I am not a consultant, so I think some of the advice in this talk, whilst good, was lost on me. Nicola focused on getting a company into a comfortable place with their testing, so you can leave having helped.
Some points I did take from this were around communication skills: asking the right questions to leave somewhere with good testing practices.
What will make your life easier when I am gone?
Would you like help with that?
Try to earn the trust of folks who don’t usually like testing.
Great points to think about.

This takes us up to lunch. I am aware that there is a lot to digest here, so I will leave the afternoon talks to another post.

Where To Start

Recently at a Ministry of Testing meetup I got asked an interesting question: “How do I start BDD?”

To me this is interesting, as it depends on what level your teams are currently at. Visualization is the key to starting any kind of agility, so that is where I would suggest you start.

What happens to the work? Is there a backlog of cards sitting in some online tracking tool? If so, that’s a great place to start. Get a big whiteboard and stick it beside the team. Add some columns that represent the work a team does with a user story. We can start simple; the board will probably read something like: Next, Develop, Test, Ready for Production, Done.


Having the board beside the team will really help show blockers that can get lost in online tools. This allows the team to run a morning standup focusing on the stories as they move through the board. In the standup start with the closest story to done and work through the board. This hammers home the importance of getting work finished instead of starting new work.

After this I think you may see a lot of work in progress (WIP). There will probably be work queuing up in test, and developers working on multiple stories. So this is where we introduce a WIP limit. The Kanban idea of “stop starting, start finishing” works really well, and a WIP limit is where you will notice this. The idea behind WIP limits is to encourage teamwork instead of work being passed off to different people. A good place to start is having a WIP limit set to the number of developers. If there are 4 devs on the team, then start with a WIP limit of 4. This means that no other stories can be worked on until those 4 are done. This encourages pairing amongst team members: as the dev is no longer handing over work to a tester, they can work together to test something. This stops a queue of work forming in test and helps eliminate blockers.
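The WIP-limit rule can be expressed as a toy model (illustrative only; a physical board enforces this by convention rather than code, and the lane name and limit here are examples):

```python
class Lane:
    """A toy Kanban lane that refuses new cards once it hits its WIP limit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        # "Stop starting, start finishing": no new work may enter the
        # lane until something already in it is done.
        if len(self.cards) >= self.wip_limit:
            raise RuntimeError(
                "%s is at its WIP limit of %d; finish a card before "
                "starting a new one" % (self.name, self.wip_limit))
        self.cards.append(card)

    def finish(self, card):
        self.cards.remove(card)
```

With a limit of 4 and 4 devs, pulling a fifth story simply isn’t allowed, which is what nudges people towards pairing on the work already in flight.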

At this point you may be thinking you are not using all the capacity in the team. When a tester is working on a story, surely a developer can be working on a new one? Well, this leads me to ask: what happens when a bug is found in test? Another card is spun up, and now the developer has the original story, the new story and the bug to worry about. This means some firefighting starts to happen, or low-priority bugs are deemed not important enough, and we end up going live with bugs.

It may seem counterproductive, but less is more. If the developer is only allowed to work on one story at a time, then bugs raised will be addressed straight away. It also means we no longer need a bug tracking tool; I have used red sticky notes on the story to show a bug.

So now as a team we can focus on getting stories done instead of starting new work and never getting anything fully finished. Have we done anything too stressful? Not really, just visualizing our work and limiting our work in progress. There is probably enough here to start with. I am also aware I never answered the original “how to start BDD” question, but this is really laying the foundations. In my next blog I will show how easy it is to add BDD on top of what we have done here.

Paul McCartney understood the value of working as a team in his frog song: “Win or lose, sink or swim… We all stand together”


P.S. This all came from a conversation at Ministry of Testing Dublin’s last meetup. I highly encourage everyone to check out @ministryoftest and find a meetup in your area.
For all the latest Dublin meetups check here:

On Safe Ground

I recently watched the David Bowie documentary “Five Years”. The documentary focused on five key years in his life and showed how he kept changing his sound and style. It was told through interviews with people he worked with and featured tonnes of his quotes. I have quoted and used Bowie before in this blog, but something he once said really hit home with me. The quote was used to describe why he kept changing; he said: “the minute you know you’re on safe ground, you are dead”.

This seems a little bit intense, but what about Bowie wasn’t? It made me think about feeling safe and getting comfortable in a working environment. Now, there is nothing wrong with being comfortable in what you do; it means you are good at it. But it has happened to me in the past: once I start feeling safe, I can get lazy. Also, just because something is working does not mean it can’t be improved. Taking Bowie as an example, he could have easily coasted, but no, right up until he died he was experimenting and trying new ideas.

So what can testers do to keep their days interesting?

The answer is to become T-shaped. For those of you who have not yet come across the term, the following is my understanding of what I strive to be as a tester.


What does being T-shaped involve? Well, if you take a quick look at the picture, you will see a person with a core skill and two arms’ worth of others. Your core skill is whatever you enjoy doing the most: developing, testing, or analysing business requirements. The T-shaped part comes from the arms. Having an arm for each other skill makes you a more well-rounded team member.

As a tester if I have some understanding of development it makes testing easier but also more interesting. This is also true of having a business understanding. No matter what your role is on the team, it is GREAT to know what you are doing is delivering customer value. It also really helps if you understand the customer’s needs, this allows you to think like the person using the software. For a tester this makes our jobs easier as we will know how the software is going to be used. This knowledge will make a developer’s life easier as well. There are usually a million different ways to create something, knowing its use up front allows us to pick the fastest, smartest way to get a working solution.

Give it a try, you will probably find your teammates will be happy that someone is helping them with their workload. You may also find it makes your day to day work more interesting and you will no longer be coasting on safe ground.

I will leave you with a quote from Soundgarden’s Chris Cornell, “Arms held out, in your Jesus Christ pose”.

What Chris did not realise is that he was talking about being t-shaped.


Just Keep Failing

Over the holidays I started reading Bruce Springsteen’s autobiography. It’s a very honest and interesting read; I highly recommend it. Any fans of the Boss know he is sitting on a mountain of unreleased material, and the book explains why. Once he was given his album budget and recording time, he would go straight to the studio with the band and record songs until they ran out of money. He put it this way: “We kept on, exhausting ourselves in the process, but exhaustion has always been my friend and I don’t mind going there. Near the bottom of its fathomless pit I usually find results. We failed until we didn’t.”


Now I am NOT suggesting pushing people to the point of breaking exhaustion. It may work for a band on a budget, but it would kill the spirit of an office. What I did find interesting is that last line: “We failed until we didn’t”. That is exactly how humans learn to do anything. So why would we punish a team or person for trying something and failing at it? We need to build an environment where it is safe to fail. Try something; if it doesn’t work, find out why, learn from it, and try something else.

For example, I have always enjoyed creating and changing Agile boards. We can add a swimlane to try something new. If it works, GREAT, let’s keep it and move on. If it doesn’t work, why not? Are we missing something else? Often you find out a lot about your team and ways of working by failing at something.

How do we capture these failures and learn from them? Regular retrospectives. If, as a team, we can meet up and figure out what works and what doesn’t, then we can keep improving. I have seen this work for a lot of things. Maybe a team wants to try a new tool? They can set it up and test it over a week or two, then have a retro afterwards to see the pros and cons of using it. I have found this not only improves the overall product but allows teams to feel empowered and to keep discovering and learning new things.

Making sure your team members are not afraid to try something is really important. We want our teams to feel like David Bowie, “No, I won’t feel afraid, I won’t be afraid anymore”


Atari: Or How I Learned To Stop Putting Pressure On Teams And Trust People


I recently watched the documentary Atari: Game Over. For those who have yet to watch it, it’s about the Atari games company and the rumours surrounding the dumping of the ET game. I would highly recommend everyone watch it. What I found most interesting was the company’s attitude towards its staff. There are interviews with old staff members, and it seems Atari was the cool place to work at the time. Why was this? They left their staff pretty much to their own devices. There was nearly an “anything goes” attitude, once the work got done. This allowed people to have fun and try new things. What happened? Work got done! Not only that, but the developers made some of the most popular and best-selling games of the time. Some of these games took 6 months, some took a year, but the finished product always seemed to sell well. Along comes the ET film and, with it, a great idea for a new Atari game.


This is where the pressure increased. Suddenly this game was needed for the Christmas market. Rights for the game were finally obtained in July, giving the developer (Howard Scott Warshaw) only 5 and a half weeks to make the game. This is a big difference from the 6 to 12 months he had to create some of Atari’s best sellers. The game is regarded in a lot of reviews as the worst game ever made. Personally, I think that’s a bit harsh, knowing the developer and the deadlines, but it was the beginning of the end for Atari. What can we learn from this? Well, a lot of people would say that the game should have been better. There were some bugs in the finished product, making the game an overall frustrating experience for the user. I firmly believe that, given the same amount of time as previous games, ET could have been much better and a massive hit. The pressure put on the developer to create a game under such a deadline was a massive mistake. The more pressure, the easier it becomes to cut corners. The first thing that usually goes is testing, hence you get a bad user experience: exactly what happened with the ET game.

Sadly, I have seen this happen to teams I have worked on. Best practices go out the window once there is some pressure: “I don’t have time to write tests, this needs to be live”, or “I was told to release this card anyway and we can spin up another card to fix the bugs”. This tends to happen when teams are judged on how many cards they release per week. Suddenly it’s a race to get cards done by Friday. The answer is to focus on the value being released to the customer, not how many cards we get done. Pressure tends to come from above, but if we are focusing on value and each card delivers something usable to the customer, then we can build up trust, and this will help to remove the pressure. I, of course, don’t know the details around the ET game, but I imagine had a user been put in front of it before it was released, a lot of the bugs would have been found, and it could have possibly saved a chunk of money.

Queen and David Bowie understood what happens with too much pressure. We don’t want to be “watching some good friends screaming ‘Let me out'”


Teams should be empowered to work and trusted to do their jobs. If there is pressure, try not to let the teams see it. Fix it by allowing the customer to see the product early.

Misheard Lyrics 2

A few posts back, I talked about misheard lyrics and how they can apply to assuming our customers’ needs. After a conversation with a friend, I realised I had missed another aspect. What if we had a non-English speaker mishearing English lyrics? This could change the meaning of the song in a whole different way.

I think this works much like people with different skill sets in a team. So, we have the developer who mishears the lyrics and a tester, whose first language is not English, hearing something else. Possibly, we get test cases assuming one thing and software that does something completely different. Sometimes we even get people hearing something they like and ignoring anything else. Paul Simon summed this up nicely in his song, The Boxer, “A man hears what he wants to hear and disregards the rest”


So how do we fix this?
The answer is Behaviour Driven Development (BDD). So what is BDD?

BDD was created by Dan North as an evolution of test driven development, a place to start writing tests. It relates to how the software should work, or behave: when a user clicks a button, what exactly will happen, or what does the user expect to happen?

How do we find out these answers? Through conversations with business stakeholders and end users. What BDD allows for is these conversations: it lets developers, testers and business stakeholders talk before any code is written. This would stop the misunderstanding of the lyrics. It would even allow all the different meanings of the lyrics to be addressed. Everyone gets their viewpoint heard, and all it costs is the price of a conversation.

I’ve seen it save hours of waste in misunderstood requirements. Once the conversation has been had, then acceptance test driven development can start, and everyone is on the same page. Don’t believe me? Try it for yourself! Start small, then tell me all about the results.

Why Ask Why?


My dad used to work in the main Lab in Guinness. He told me a funny story, which I will share.
When he started in 1977, the guys he worked with did what was called a KBOS test. KBOS stood for something; he thinks maybe it was “potassium” related. The reason he can’t remember is that the writing on the test had faded, and all that was left was the initials “KBOS”.

But every day, without fail, someone in the lab would do this test. They would check the potassium levels in some samples. It was such a big deal that other labs would send over samples. All the results would then get recorded in a book and also sent over to another lab. The lab they worked in was horrible: an old run-down building full of rats and cockroaches. So eventually, in 1988, they got to move labs.
In the move, they were trying to streamline the lab, making sure only the important tests and equipment were moved. Eventually someone remembered the super-important KBOS test. At this point someone asked, “What does KBOS mean anyway?” Nobody could answer.

My dad had been doing the test since he started, as had, it turned out, a number of people for years. Nobody knew why. So they asked the other lab they sent the results to, and they checked their books. Yes, they had results from other tests to make sure Guinness was not polluting Dublin, and right there beside them were indeed the KBOS results. But they realised that they NEVER USED the results for anything. It seems somebody, years before, had needed to check it for a few weeks, and the test was then passed on to everyone else who started. So for 11 years this test was just done. Years of time wasted on a test whose results nobody ever even looked at. The guys in the lab had a good laugh about it and moved on.

I think it shows the importance of asking “Why?”. Years ago, had someone questioned the KBOS test, they could have saved hundreds of man-hours spent doing it and recording the results. So, when you get asked to do something – to develop a new feature, or even to define and break one down – do you ask “Why?” Do you question the business value? Sometimes you will find that the work is being done just because someone said “Let’s do it.” There could be no real reason for it, no value to your customers. You could be wasting time doing something that will not meet their needs.

After all, in the words of Paul Simon, aren’t we all “just trying to keep my customers satisfied”?