We recently released our automated regression pack, which we've been working on over the past few months, to a number of teams. The pack targets legacy code and contains a large number of tests.
As a bit of background, a number of teams are working on new solutions whilst some are still working on legacy code. With this in mind, we put together an email with a list of guidelines for creating new tests to be added to this regression pack. I figured these guidelines are broad enough to apply to any organisation, so I thought they would make an interesting blog post...
So here goes: when creating automated tests, it's important to consider and adhere to the following:
- Think about data. The tests need to retrieve or set up the data they need without any manual intervention, which makes them more robust and easier to run unattended (see the data-seeding sketch after this list).
- The tests need to be idempotent and self-contained. If each test stands alone and does not affect other tests, you will not get random failures when the tests happen to run in a particular order (see the isolation sketch after this list).
- The tests should execute and pass both when run in isolation and when run in sequence.
- There should be no dependencies between tests, and it should not matter in which order they are run.
- The tests should also run on international sites. If a test doesn't apply to one or more sites, use the appropriate tagging (see the tagging sketch after this list). The tests shouldn't be concerned with language; if the element is the same, the test should still be able to run and find it.
- The tests shouldn't be flaky. Random failures shouldn't happen.
- The tests should be able to run in any development environment. There should be no third-party dependencies that aren't available in every development environment; this way teams will get the most value out of the tests.
- The tests should follow the existing structure (if one is in place). We used the Page Object model, and as such we had a structure that suited it (see the page object sketch after this list).
- The tests shouldn't take long to run. This can be subjective, but if a test takes longer than performing the same check manually, try to come up with a better solution.
- Try to reuse code wherever possible; do not duplicate it.
- Existing Page Objects should also be used where possible.
- Changes to shared packages should be run by others first.
- Avoid testing too many things at once in one test, or tests that are “too long”. Ideally, when a scenario fails, you want to know immediately what went wrong.
- Try to keep scenarios within system boundaries. For example, if your test needs some products in the bag, don't add them through the UI; do it via the DB (see the seeding sketch after this list).
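To illustrate the data point, here's a minimal sketch of a test that provisions its own data, assuming pytest. The `FakeCustomerApi` stand-in and all names are invented for illustration; in reality this would call your seeding API or database:

```python
import pytest

class FakeCustomerApi:
    """In-memory stand-in for a real data-seeding API - invented for illustration."""
    def __init__(self):
        self.customers = {}
        self.next_id = 1

    def create(self, country):
        cid = self.next_id
        self.next_id += 1
        self.customers[cid] = {"country": country}
        return cid

    def delete(self, cid):
        self.customers.pop(cid, None)

api = FakeCustomerApi()

@pytest.fixture
def customer_id():
    # The test provisions exactly the data it needs - no manual setup.
    cid = api.create(country="GB")
    yield cid
    # Teardown removes the data so nothing leaks into other tests.
    api.delete(cid)

def test_new_customer_is_created_for_expected_site(customer_id):
    assert api.customers[customer_id]["country"] == "GB"
```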
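On isolation and ordering, the classic trap is shared state. Here's a sketch of the anti-pattern and the fix; the `Bag` class is an invented stand-in for the real shopping bag:

```python
class Bag:
    """Minimal stand-in for the real shopping bag - for illustration only."""
    def __init__(self):
        self.items = []

    def add(self, sku, price):
        self.items.append((sku, price))

    def total(self):
        return sum(price for _, price in self.items)

# Anti-pattern: module-level shared state couples tests together.
shared_bag = Bag()

def test_add_product_to_bag():
    shared_bag.add("SKU-123", 9.99)    # leaks state into later tests

def test_bag_total_depends_on_order():
    assert shared_bag.total() == 9.99  # only passes if run after the test above

# Better: each test constructs its own state, so order never matters.
def test_bag_total_independent():
    bag = Bag()
    bag.add("SKU-123", 9.99)
    assert bag.total() == 9.99
```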
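For the tagging point, here's one way it could look with pytest markers; the `SITE` variable and site codes are invented for illustration:

```python
import pytest

SITE = "de"  # in practice this would come from config or an environment variable

uk_only = pytest.mark.skipif(SITE != "uk", reason="UK-specific behaviour")

@uk_only
def test_gift_card_checkout():
    # Only meaningful on the UK site, so it is tagged rather than deleted.
    ...

def test_search_returns_results():
    # Site-agnostic: locates elements by stable ids, not by translated text,
    # so the same test runs unchanged on every international site.
    ...
```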
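As a sketch of the Page Object structure, here's what the pattern looks like with Selenium; the locators, URL, and page class are invented. The point is that tests talk to pages, not raw elements:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page object: owns the locators and interactions for one page."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # placeholder URL
        LoginPage(driver).login("user", "secret")
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```

If a locator changes, only the page object needs updating, not every test that uses it.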
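And for the system-boundaries point, a sketch of seeding the bag directly at the data layer; the table and column names are invented, with sqlite3 standing in for the real database:

```python
import sqlite3  # stand-in for the real database driver

def add_product_to_bag(conn, customer_id, sku):
    # Setup happens at the data layer: one INSERT instead of many UI steps.
    conn.execute(
        "INSERT INTO bag_items (customer_id, sku, quantity) VALUES (?, ?, ?)",
        (customer_id, sku, 1),
    )
    conn.commit()

def test_checkout_sees_seeded_bag_item():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE bag_items (customer_id, sku, quantity)")
    add_product_to_bag(conn, customer_id=42, sku="SKU-123")
    # Only checkout itself - the behaviour under test - would go through the UI.
    rows = conn.execute("SELECT sku FROM bag_items WHERE customer_id = 42").fetchall()
    assert rows == [("SKU-123",)]
```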
So that's pretty much it :)
If you can think of anything else that I might have missed, anything you think needs to be considered when creating an effective and reliable set of automated tests, please let me know!