
Considerations when creating automated tests

We recently released our automated regression pack, which has been worked on over the past few months, to a number of teams. The pack tests legacy code and contains a large number of tests.

As a bit of background, a number of teams are working on new solutions whilst some are still working on legacy code. With this in mind, we put together an email with a list of guidelines for creating new tests that need to be added to this regression pack. I figured these are quite broad and should apply to any organisation, so I thought they would make an interesting blog post...

So here goes. When creating automated tests, it's important to consider and adhere to the following:

- Think about data. The tests need to retrieve or set the data they need without any manual intervention, which makes them more robust and easier to run unattended (see the first sketch after this list).
- The tests need to be idempotent. Each test should be standalone and must not affect any other test; that way you won't get random failures when the tests happen to run in a particular order.
- The tests should execute and pass both when run in isolation and when run in sequence.
- There should be no dependencies between tests, and it should not matter in which order they are run.
- The tests should also run on international sites. If a test doesn't apply to one or more sites, use the appropriate tagging (see the tagging sketch after this list). Tests shouldn't be concerned with language etc.; if the element is the same across sites, the test should still be able to run and find the element.
- The tests shouldn't be flaky. Random failures shouldn't happen.
- The tests should be able to run on any development environment. There should be no third-party dependencies that are unavailable in a development environment; this way teams will get the most value out of the tests.
- The tests should follow some form of existing structure (if one is in place). We used the Page Object model, and as such we had a structure that suited this pattern (see the page object sketch after this list).
- The tests shouldn't take long to run. This can be subjective, but if an automated test takes longer than doing the same check manually, try to come up with a better solution.
- Try to reuse code wherever possible; do not duplicate it.
- Existing page objects should also be used where possible.
- Changes to shared packages should be run past others first, since they affect every team's tests.
- Avoid testing too many things at once in a single test, and avoid tests that are “too long”. Ideally, when a scenario fails, you want to know immediately what went wrong.
- Try to keep scenarios within system boundaries. For example, if your test needs some products in the bag, don't let your test do that through the UI. Do it via the DB (see the seeding sketch after this list).
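
To make the data and independence points concrete, here's a minimal sketch using SpecFlow hooks, in the style we use in our pack. CustomerApi is a made-up helper name standing in for whatever your system actually uses to create and delete records; the point is that every scenario creates, and cleans up, its own uniquely named data, so no scenario depends on another or on manual setup.

```csharp
using System;
using TechTalk.SpecFlow;

[Binding]
public class ScenarioDataHooks
{
    // Each scenario gets its own uniquely named customer, so no two
    // scenarios ever compete for the same data, whatever order they run in.
    // CustomerApi is a hypothetical helper for illustration only.
    [BeforeScenario]
    public void CreateScenarioData()
    {
        var customerName = "test-customer-" + Guid.NewGuid();
        CustomerApi.Create(customerName);
        ScenarioContext.Current["customerName"] = customerName;
    }

    // Clean up afterwards so a scenario leaves no trace for the next one.
    [AfterScenario]
    public void RemoveScenarioData()
    {
        CustomerApi.Delete((string)ScenarioContext.Current["customerName"]);
    }
}
```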
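For the tagging point, here's one way to exclude site-specific tests, assuming NUnit is the underlying test framework (the UkOnly category name is invented for illustration). With SpecFlow's NUnit provider, @tags on a scenario end up as test categories like this anyway.

```csharp
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    // "UkOnly" is an invented category name; the runner can then exclude
    // the test on sites where the feature doesn't exist, for example:
    //   nunit3-console --where "cat != UkOnly" MyTests.dll
    [Test]
    [Category("UkOnly")]
    public void GiftVoucherFieldIsShownAtCheckout()
    {
        // Site-specific steps and assertions would live here.
    }
}
```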
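On structure and reuse, here's a minimal page object sketch with Selenium WebDriver. The page name and locators are invented, but the shape is the point: element lookups live in one place, and tests call an intention-revealing method instead of duplicating selectors.

```csharp
using OpenQA.Selenium;

// A minimal page object: tests talk to this class rather than to raw
// selectors, so a markup change only needs fixing in one place.
// The locators here are invented for illustration.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public void LogInAs(string username, string password)
    {
        _driver.FindElement(By.Id("username")).SendKeys(username);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("login-button")).Click();
    }
}
```

A test then just does new LoginPage(driver).LogInAs("user", "pass"), and every other test reuses the same object rather than repeating the locators.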
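Finally, on keeping scenarios within system boundaries, here's a sketch of seeding the bag straight into the database instead of clicking through the UI. The connection string, table, and column names are all made up, so substitute your own schema; setup done this way is faster and won't fail just because an unrelated page changed.

```csharp
using System.Data.SqlClient;

public static class BagSeeder
{
    // Table and column names are invented for illustration; use whatever
    // your schema actually looks like.
    public static void AddProductToBag(string connectionString, int customerId, int productId)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand(
                "INSERT INTO Bag (CustomerId, ProductId, Quantity) VALUES (@customerId, @productId, 1)",
                connection))
            {
                command.Parameters.AddWithValue("@customerId", customerId);
                command.Parameters.AddWithValue("@productId", productId);
                command.ExecuteNonQuery();
            }
        }
    }
}
```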

So that's pretty much it :)  

If you can think of anything else that I might have missed off, anything you think needs to be considered when creating an effective and reliable set of automated tests, please let me know!
