Test Bash Belfast – Morning Session


I am just back from the Ministry of Testing’s “Test Bash” in Belfast. First, I want to say how fantastic it was; I really enjoyed the day.

Here are my notes from the day. These are just some points I jotted down, not everything from the day, so if anyone thinks I missed something cool, please share it with me. 

Cynefin For Testers – Liz Keogh (@lunivore)

Liz opened the day with a talk about different levels of work complexity. She explained that we estimate work across five different areas of complexity:

5. Nobody has ever done this.
4. Somebody outside the organisation has done it, but we are not sure if we can do it.
3. Somebody inside the organisation has done it before.
2. Someone in our team has done it before.
1. We all know how to do it.

We insist on doing the 1s and 2s first because we like work we already know how to do. What we should be doing is tackling the 5s and 4s first, failing at them and learning from them. So don’t insist on starting with work we already know how to do.
Another great point was humans LOVE seeing patterns even if they are not there.
An example Liz used was this:
1  2    4  5
What is missing from the above list?
Well, following the pattern, we get 1  2  3  4  5. But if we tried to break the pattern, or asked for some context, we might get something else, like 1  2  :  4  5.
My main takeaway from this talk was to work on the more complex things first, then fail and learn. Also, try to break pattern matching, even though as humans we love patterns.

Tested By Monkeys – Jeremias Rößler (@roesslerj)

I learnt a new term from this talk and loved it: banana software. When you buy a banana it is usually green and ripens when you take it home. The same can be said of software: when it is released it is unripe, and it ripens as our customers start finding bugs. Which leads on to another great point: when is a bug a bug?
Jeremias answered with “when it is not a feature”. Which is a brilliant point. He went on to say that if your customers are used to your software, you can’t change it. Even if we find a bug, a customer may be relying on it as a feature; if we “fix” it, we may break our software for that customer.
So regression testing is NOT about finding new bugs, but about checking that the software works the way it currently works. Even if the software is wrong, it should be the same wrong every time.
This is where we use the monkey. The monkey can do the boring regression checking for us, and leave us to do more fun things, like exploratory testing or questioning business value.
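The “same wrong every time” idea can be sketched as a golden-master check. This is not Jeremias’s actual tool, just a toy illustration in Python; the `format_price` function and its rounding quirk are invented for the example. A seeded monkey generates repeatable random inputs, we record whatever the software currently does (right or wrong), and later runs only flag behaviour that has *changed*:

```python
import random

# Hypothetical system under test: the output may even be "wrong",
# as long as it is the same wrong every time.
def format_price(pence):
    # Deliberately naive formatting -- possibly a bug, possibly a "feature".
    return f"£{pence // 100}.{pence % 100:02d}"

def monkey_inputs(seed, count=100):
    # A fixed seed makes the monkey's random inputs repeatable.
    rng = random.Random(seed)
    return [rng.randint(0, 100_000) for _ in range(count)]

def record_golden_master(seed):
    # First run: record whatever the software currently does.
    return {p: format_price(p) for p in monkey_inputs(seed)}

def check_regression(seed, golden):
    # Later runs: flag anything that differs from the recorded behaviour.
    return [p for p in monkey_inputs(seed) if format_price(p) != golden[p]]

golden = record_golden_master(seed=42)
assert check_regression(seed=42, golden=golden) == []
```

The monkey never decides whether `format_price` is correct; it only shouts when the behaviour drifts, leaving the judgement calls to us.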
What I took from this talk was that we should be automating the boring checks and just because we think something is a bug our customers may not. This talk will be given at Agile Testing Days, if you are lucky enough to be heading to it, I highly recommend checking it out. 

The Automated Acceptance Testing Paradox – Mark Winteringham (@2bittester)

Mark asked a valuable question – how do we know what is and is not acceptable in our acceptance tests?
WebDriver is a powerful tool, but it cannot test UI design. For example, if a website loads with no CSS or JavaScript, the acceptance tests will still pass. So what do our acceptance tests mean to us as a team?
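To see why such a check passes on an unstyled page: WebDriver itself needs a real browser, so this little sketch uses Python’s standard-library `html.parser` instead to make the same point. The page and the button id are invented for the example; the “acceptance check” only looks at the markup, so the complete absence of CSS and JavaScript changes nothing:

```python
from html.parser import HTMLParser

# A bare page: no stylesheet, no script, just markup.
PAGE = """
<html><body>
  <button id="login">Log in</button>
</body></html>
"""

class ButtonFinder(HTMLParser):
    """Finds the text label of the button with id="login"."""
    def __init__(self):
        super().__init__()
        self.in_button = False
        self.label = None

    def handle_starttag(self, tag, attrs):
        if tag == "button" and ("id", "login") in attrs:
            self.in_button = True

    def handle_data(self, data):
        if self.in_button and self.label is None:
            self.label = data.strip()

    def handle_endtag(self, tag):
        if tag == "button":
            self.in_button = False

finder = ButtonFinder()
finder.feed(PAGE)
# The "acceptance check": the login button exists with the right label.
assert finder.label == "Log in"  # passes, though the page is unstyled
```

The check is green, yet a user would see an unstyled, possibly unusable page, which is exactly why a passing check is not the same as an acceptable product.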
We should not use our tests for validating, but for guiding. A wonderful quote from Mark, “a failed check is an invitation to explore”.
So if the check fails have discussions, explore the area. Is the check failing for a correct reason? Is it even valid anymore?
What if a developer is working on an area of the code, and the tests are still passing? Usually we celebrate this – fantastic, we have not broken anything – but what if the checks are not correct, and thus of no value? Maybe we have made a wrong assumption; we can use these checks as a guide to ask those questions.
Tools do not replace us, they support us.
My main takeaways: be aware of your assumptions, treat acceptance checks as guides, and remember that other testing activities are still required.

A Tale Of Testability – Rob Meaney (@RobMeaney)

Rob’s message was clear: let’s see what can happen when we design software with testability in mind. What are your current testing problems? Bring them to your team and fix them together.
Some questions to keep in mind, how can we make this software easier to test? How can we design it to be easier to test?
Look at everything, boundaries, all our internal dependencies, hardware, etc.
Get into exploratory testing, learn, design, execute and get feedback.
To me, Rob’s key points were collaboration and rescuing the poor, bored zombie tester: involve them in your design and prevent problems.

Testing So You Can Move On – Nicola Owens (@NicolaO55)

I am not a consultant, so some of the advice in this talk, whilst good, was lost on me. Nicola focused on getting a company to a comfortable place with their testing, so that you can leave knowing you have helped.
Some points I did take from this were around communication skills: asking the right questions so you leave somewhere with good testing practices in place.
What will make your life easier when I am gone?
Would you like help with that?
Try to earn the trust of folks who don’t usually like testing.
Great points to think about.

That takes us up to lunch. I am aware there is a lot to digest here, so I will leave the afternoon session to another post.