
The 5S Methodology Applied to Testing

This morning I finally got round to starting a new book, one I've been saying I'll read for a while. Actually, that could apply to two books, as I've also started reading a non-work-related book, Game of Thrones, which I thought might fill some time before the series starts up again. Game of Thrones is amazing, you should most definitely read it. However, I digress; the book that prompted this blog post is Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin.

Having only started reading it this morning, there's already a specific excerpt that caught my attention and made me want to write a blog post (and apologies, it's been a while). The book talks about the 5S methodology and applies it to coding principles, and whilst that's still useful for this blog, I wanted to try to apply it to testing principles. So here goes...

For those not familiar with the 5S methodology, it consists of five Japanese words, listed here (from Wikipedia): seiri, seiton, seiso, seiketsu and shitsuke.

Now the fun part, applying these to some solid testing principles:

  • Seiri (Sort)

How can this be a testing principle, you may ask? Well, it can be used to explain how test cases should be visible. They should have relevant naming: the test/check should clearly state what it is testing/checking, for both automated checks and manual tests. All too often I have seen test case names that are a single word, with no real description of what the test is doing or how it's to be run. In some cases this may be okay (though I can't really think of one), but not when we are talking about acceptance tests that other people may be required to run. (Clean Code also talks about naming conventions for variables etc.; I'll probably write more about this at a later date.)
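As a quick sketch of the difference (the login function, class and scenario names here are invented purely for illustration):

```python
import unittest


# Hypothetical function under test, invented for this example.
def login(username, password):
    if password != "s3cret":
        return "Invalid username or password"
    return "Welcome, " + username


class LoginTests(unittest.TestCase):
    # Poor: "test_1" gives no clue what behaviour is being checked,
    # so a failure report tells the reader nothing.
    def test_1(self):
        self.assertEqual(login("alice", "wrong"),
                         "Invalid username or password")

    # Better: the name states the scenario and the expected outcome,
    # so a failing run reads like a specification.
    def test_login_with_invalid_password_returns_error_message(self):
        self.assertEqual(login("alice", "wrong"),
                         "Invalid username or password")
```

Both checks are identical; only the second one tells the next tester what it's for.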

  • Seiton (Systematic Arrangement)

Tests should be where one expects to find them. This applies to both automated checks and manual tests: automated checks should ideally live with the solution/code they are testing, and manual tests should be organised in a way that "makes sense", easy to find and structured accordingly. It's about making other testers' lives easier when they wish to view test cases; it should be instinctive where to find the tests for a certain piece of software.
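As a hypothetical sketch (the folder names are invented), a layout where tests mirror the code they exercise might look like:

```
src/
  checkout/                     # production code
tests/
  checkout/
    test_checkout_totals.py     # automated checks, alongside the code they test
docs/
  manual-tests/
    checkout/                   # manual test cases, mirroring the same structure
```

The point isn't this particular layout; it's that anyone looking for the checkout tests can guess where they live without asking.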

  • Seiso (Shine)

Test cases and checks should be kept up to date. Whether automated checks or manual tests, they need to be run regularly; failing tests should be fixed and out-of-date tests removed. Apply the boy scout principle, "Always leave the campground cleaner than you found it". Applied to our tests, that means refactoring broken/failing tests, and we can only do this if they are run regularly. If they're not run regularly, we're no longer refactoring a handful of tests/checks, but probably fixing a large number of them.
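As an invented illustration of the boy scout rule applied to a single check: a test that hard-codes a value that drifts out of date will quietly rot, and regular runs are what surface it early enough to be a small fix (the function and notice text here are hypothetical):

```python
import datetime


# Hypothetical function under test, invented for this example.
def copyright_notice():
    return "(c) {} Example Ltd".format(datetime.date.today().year)


# Before: this check passed the year it was written, then silently rotted.
# def test_copyright_notice():
#     assert copyright_notice() == "(c) 2013 Example Ltd"

# After the clean-up: the expectation is derived rather than hard-coded,
# so the check stays green without annual maintenance.
def test_copyright_notice():
    year = datetime.date.today().year
    assert copyright_notice() == "(c) {} Example Ltd".format(year)


test_copyright_notice()
```

Caught on the first failing nightly run, this is a one-line refactor; caught a year later among dozens of similar failures, it's an afternoon of archaeology.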


  • Seiketsu (Standardisation)

This can be applied to the tests/checks themselves (ensuring standard naming conventions are used), but it also applies to toolsets. When there are multiple teams on a project, it's important that they use the same approaches for testing. This goes for automated checks (e.g. teams shouldn't be using different tools to automate UI checks) as well as for storing test cases and reporting bugs. Otherwise it becomes unmanageable: if one team starts using Google Docs for tests (don't get me started on that one) and another team is using Microsoft Test Manager, how can anyone have a reasonable idea of what coverage exists where across the project?
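One lightweight way to enforce at least the naming side of this is a shared test-runner configuration checked into a common repository. As a hypothetical sketch (the option values are illustrative, assuming teams run their checks with pytest):

```ini
; Hypothetical shared pytest configuration; every team inherits the
; same discovery rules and naming convention instead of inventing their own.
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
addopts = --strict-markers
```

A file like this doesn't solve the Google-Docs-versus-Test-Manager problem, but it does mean every team's automated checks are found, named and run the same way.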

  • Shitsuke (Sustain)

This is about ensuring all of the above are maintained: being able to reflect on the work you have done, ensuring it is up to standard and meets the principles you employ, and of course having the discipline to do all of this. Don't be too proud; admit mistakes and work on improving them so they don't happen again. We are all human, we all make mistakes; grow from them and make sure you learn from them.

So there we have it, the 5S Methodology applied to testing. I'm sure you could interpret them differently, if you have/can, please comment. I'm interested in hearing what people think.

As I delve further into the book, I'm sure it will prompt more blog posts, so you have that to look forward (?) to :)




