
How to decide what and when to automate tests?



We all know that repetitive manual testing can be boring at times, but unfortunately it remains a necessity for some aspects of testing.

One thing I love, and something that certainly reduces the load of manual testing, is automated testing, whether at the service level, through an API, or especially for the Web UI. Whenever new testing work comes along, QA regularly asks the same question: do we want to automate this?

When deciding what tests we should automate I feel that it's important to answer some questions:
  • Will this test form a part of the regression pack for the application?
  • Will this test be run multiple times during the development process?
  • Can the same level of testing be achieved by automating this test?
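These three questions can be captured as a minimal decision sketch. The function name, parameters, and the threshold below are purely illustrative, not a formal policy:

```python
def should_automate(in_regression_pack: bool,
                    run_count_estimate: int,
                    automatable_to_same_level: bool) -> bool:
    """Rough heuristic combining the three questions above."""
    # Question 3 acts as a gate: if automation can't achieve the same
    # level of testing, the test stays manual regardless of the rest.
    if not automatable_to_same_level:
        return False
    # Questions 1 and 2: regression-pack membership or repeated runs
    # both justify the up-front cost of automating.
    return in_regression_pack or run_count_estimate > 1
```

For example, a one-off test that automation could cover (`should_automate(False, 1, True)`) would stay manual, while anything destined for the regression pack would be automated.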
I'll tackle the first question, as it's the most basic and the easiest to answer. If a test is to form part of a regression pack, then yes, it should be automated. It will save time in the future and offer more assurance when releasing future versions of the software.

As for the second question, if a test will be run multiple times then it makes sense to automate it and reduce the effort each run takes. This is especially valuable for a test that checks whether a bug has been fixed: to verify that subsequent builds do not reintroduce the bug, automate the test and, if at all possible, run it as part of the build process so these issues are caught early.
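As a hedged illustration of pinning a fixed bug (the function and the off-by-one bug are invented for the example), a small regression test that fails the build if the bug ever comes back:

```python
def paginate(items, page_size):
    """Split items into pages of at most page_size.

    A hypothetical earlier version had an off-by-one bug that dropped
    the final partial page; this is the fixed behaviour.
    """
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_final_partial_page_is_not_dropped():
    # Regression test: 5 items with a page size of 2 must yield three
    # pages, including the trailing partial page [5].
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
```

Wired into the build (for instance via a test runner such as pytest), this test runs on every build, so the fix is verified continuously rather than once.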

Finally, some aspects of manual testing cannot easily be automated, for instance checking the location of an element on a web page in Web UI testing. The naked eye can easily notice if an element is misplaced or if something renders incorrectly (e.g. text overlapping other text). Because of this, I tend to shy away from automated cross-browser testing at the moment... However...

Google has an interesting piece of software that monitors the top 1000 pages in search results when testing new versions of Chrome. It detects any variations between the version under test and previous versions, and emails developers to let them know. It is even clever enough to account for dynamic content, for instance on bbc.co.uk, where ever-changing news articles dynamically build the front page. While I understand that something like this is extremely complex and possibly overkill for some applications, it is an extremely impressive piece of software that I would love to see in action one day!

So whilst automation is an extremely effective tool to have, there will always be some element of manual testing to go along with it. This manual testing doesn't have to be scripted, far from it; it can take the form of Exploratory Testing (more on that in future posts). As time goes on, I am sure more effective ways will emerge for performing cross-browser testing and ensuring elements are displayed correctly on the front end. None of this hinders the effectiveness of automating tests at the service or API level, as their requests and responses are structured in a way that isn't going to change over time, so I find you can achieve 100% coverage of service- and API-level tests in an automation test suite.
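A sketch of what an API-level check can look like. The endpoint payload is invented, and a canned dictionary stands in for a real HTTP call so the example is self-contained:

```python
# Canned response standing in for e.g. requests.get(url).json().
canned_response = {
    "id": 42,
    "status": "ACTIVE",
    "items": [{"sku": "ABC-1", "qty": 2}],
}

# The structured contract the test relies on: field name -> expected type.
EXPECTED_SHAPE = {"id": int, "status": str, "items": list}

def check_response_shape(body, shape):
    """Assert every expected field is present with the expected type.

    Because the request/response structure is stable, a check like this
    rarely needs maintenance once written.
    """
    for field, expected_type in shape.items():
        assert field in body, f"missing field: {field}"
        assert isinstance(body[field], expected_type), (
            f"{field} should be a {expected_type.__name__}"
        )

check_response_shape(canned_response, EXPECTED_SHAPE)
```

In a real suite, the canned dictionary would be replaced by the live response body; the shape check itself stays the same.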

There is also the benefit that the time saved by automating a test can be spent tackling more important and complex test cases during the development of an application. So automation not only reduces the testing effort on regression, it increases the effectiveness of the testing effort going forward.

It also lends itself to application ownership. Ownership is often only evident during development, but in reality it should extend for as long as the application is in use, as these tests will live for the lifecycle of the product.

In this post we have only really talked about Acceptance Tests; in future posts we will discuss the importance of Unit Tests and Integration Tests.
