
Test Iceberg of Automation

I'm sure you've all heard of the automated testing pyramid. I'll describe it briefly here, but you can read all about it in Mike Cohn's original write-up.



It's essentially a strategy that shows a good-practice ratio of Acceptance Tests (generally UI-level) to Integration Tests to Unit Tests, and here it is in a simple form.

It suggests covering your testing with roughly 10% acceptance tests, 20% integration tests and 70% unit tests. Why is that, you may ask? The primary focus here is Return on Investment: by finding bugs/breakages at the unit-test level you are finding cheap bugs, as unit tests are quick to run and easy to maintain, whereas acceptance tests, whilst having value, are harder to maintain and take longer to run.
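
To make that cost difference concrete, here's a minimal sketch of a unit test, assuming NUnit; PriceCalculator is a hypothetical class, defined inline purely to keep the example self-contained:

    using NUnit.Framework;

    // A hypothetical piece of production logic, stood up inline for the sketch.
    public class PriceCalculator
    {
        public decimal ApplyDiscount(decimal price, decimal rate)
        {
            return price * (1 - rate);
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_TenPercent_ReducesPrice()
        {
            var calculator = new PriceCalculator();

            var discounted = calculator.ApplyDiscount(100m, 0.10m);

            // A failure here points straight at the discount logic:
            // no browser, no environment, no test data to set up.
            Assert.AreEqual(90m, discounted);
        }
    }

A test like this runs in milliseconds and needs nothing but the code under test, which is exactly why bugs caught at this level are the cheap ones.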

Obviously, it's not a strict ratio, but I think it's good practice to try to live by it.

However, I digress. The main point of this post is to put another spin on the automation pyramid, one that is possibly more QA-centric, since I don't often see (rightly or wrongly) QA getting involved in the creation of unit or integration tests. I feel you can visualise the ratio in an easier way, and that is in the form of an iceberg.



Interesting Fact: only around 11% of an iceberg is actually visible...

Now if we apply that interesting fact to automated testing (I guess you can see where I am going with this), we can say that the 11% we can see as QA is the acceptance tests: tests that in general have been driven by QA, and that will often fall into our domain to create and maintain.

The remaining 89% is more dev-focused, in that unit tests and integration tests are generally maintained by the developers, not the QA department (at least in my experience). This allows the QA team to work on the acceptance tests and get them running effectively, leaving the unit and integration tests to the developers (although it is definitely wise to get involved as much as you can).
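
By way of contrast with the unit test sketched earlier, here is what one of those QA-owned acceptance tests might look like, assuming Selenium WebDriver and NUnit; the URL and element ids are hypothetical placeholders:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class LoginAcceptanceTests
    {
        private IWebDriver _driver;

        [SetUp]
        public void StartBrowser()
        {
            // A real browser is spun up for every test, which is a large part
            // of why acceptance tests are slower and dearer to maintain.
            _driver = new ChromeDriver();
        }

        [Test]
        public void ValidUser_SeesWelcomeMessage()
        {
            _driver.Navigate().GoToUrl("https://example.com/login"); // hypothetical URL
            _driver.FindElement(By.Id("username")).SendKeys("qa.user");
            _driver.FindElement(By.Id("password")).SendKeys("correct-password");
            _driver.FindElement(By.Id("login-button")).Click();

            Assert.IsTrue(_driver.FindElement(By.Id("welcome-message")).Displayed);
        }

        [TearDown]
        public void StopBrowser()
        {
            _driver.Quit();
        }
    }

Driving a real browser end to end is what gives these tests their value, as they exercise the system the way a user does, but it is also what makes them the slow, visible tip of the iceberg rather than the bulk beneath the water.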

Please don't think I'm saying that unit tests aren't important (quite the opposite, as my previous blog posts will attest). QA need to be aware of which unit and integration tests are being run, as this will influence what is run as part of the acceptance tests; you only have to look at the Titanic to see what happened when the rest of the iceberg was ignored.

Comments

  1. Nice post Gareth. I've been working the same iceberg metaphor around in my mind for a while now.

    I hadn't limited the metaphor to just automation though - in my opinion, the iceberg is a metaphor for the test strategy.

    Every member of the development team should be aware of just how much iceberg there is, both above & below the water - i.e. the entire test strategy of the development team.

    I really should get my thoughts onto paper for some open criticism.

    Thanks for the post Gareth,

    Duncs

  2. One thing to note is that Mike Cohn has stated that, because of the types of tests and where they sit in the pyramid structure (which is what gives it its pyramid shape), you will get this distribution.

    He doesn't necessarily say this is the way it should be as common practice; he states that it comes out this way because at the lower levels you have more 'atomic' tests (unit/code-level tests and asserts) in order to get coverage of the code.

    The benefit is finding issues/defects sooner and reducing the amount of churn/rework (lower cost ratio in comparison to later on down the road to release).

    The whole purpose IMO of the pyramid is to get testing going earlier and to leverage the tools at hand (xUnit harnesses and automation). Acceptance-level tests are valuable; don't disregard them because of the smaller percentage. With proper framework architecture and planning, these tests will be easier to build and maintain. They have value as they represent the final integration of the system with the user.

    By following the suggestions of the pyramid you start at the micro level and work towards the macro. This provides greater coverage and exercising of the software under test. By using automation you leverage a tool/machine to aid in the execution of those tests and get efficiency gains as a benefit. That is what Mike is really pushing.

    Jim Hazen

