
Small change? Test everything!

As QAs, our job is to ensure quality. However, all too often I hear about a small change for which the testing a QA says is needed is massive. QAs have a tendency to say "test everything" when they don't fully understand the change, when with a few questions we could isolate the change to a specific system and come up with an appropriate ten-minute test strategy.

Unfortunately, I think this often happens because the QA is scared to ask exactly what the change is and which systems are affected, and in all honesty no one should be afraid to ask when they don't understand something. On the flip side, whoever you ask, you shouldn't take their response as gospel; do some investigation work yourself until you fully understand the risks and the effects the change will have.

I've experienced a number of scenarios where I've questioned the amount or type of testing being completed on a task. For example, a database change will very rarely (if ever) require cross-browser testing, and a small change (e.g. adding a link to a page) in one part of a system will not require regression testing of the whole system. Yet all too often this is exactly what is done, and unfortunately it is not challenged enough.
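To make that concrete, a change like "add a link to a page" can often be covered by one targeted check rather than a whole-system regression run. The sketch below is only an illustration, assuming a hypothetical rendered page fragment; it uses Python's standard `html.parser` to assert the new link is present:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

# Hypothetical fragment; in a real test this would come from the page
# under test (e.g. via an HTTP request or a WebDriver page source).
page = '<nav><a href="/home">Home</a><a href="/help">Help</a></nav>'

collector = LinkCollector()
collector.feed(page)
assert "/help" in collector.links  # the newly added link is present
```

A focused check like this runs in seconds and scopes the testing to exactly what changed, which is the point being made above.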

There need to be clear lines about what is in scope for testing and what is out of scope, and about the risks (if any) associated with not testing. The risk of not cross-browser testing a database change is negligible (depending on the change, of course).

And I am not just talking about functional testing: a lot of the time, non-functional testing, such as performance testing, is performed when it's not necessary. To rectify this we need to talk to other stakeholders and to developers. Don't be afraid to not understand something at first; only through asking questions will we learn.

What can we do to rectify this?

We as QAs need to be a lot smarter about what we test and how we test it, or else we will get a reputation for being slow workers, for not testing efficiently, or for being inconsistent. If one person says to test something one way, and a person on another team says to test something similar another way, it makes everyone look bad. Consistency across teams will improve the perception of QA across the IT department; as I've mentioned before, this is an area that is often lacking, be it rightly or wrongly, and we should do all we can to improve it.

Comments

  1. Lol, well said. There's a tendency to want to test everything without meaningful justification or quantifying the risk that may be introduced by the change. In my view it's a lack of leadership, and frankly individuals choosing to take the path of least resistance; there's also a lack of strong leadership in a lot of QA departments. Frankly QAs are the butt of way too many jokes :-) "reputation for being slow workers, for not testing efficiently, or get labelled as inconsistent". It's easy to see why people think QAs are lazy, slow or even thick. :-) But there are exceptions out there.....

    Replies
    1. I also think a lot of it comes down to things like I said in my previous post, that 50% of QA people shouldn't really be in QA, and that there can be a very big fear of change. When systems are tightly coupled it's difficult to fully understand the implications, especially when no one else on your team fully understands them either. It would be lovely if systems weren't so coupled together; it would make testing more straightforward and releases easier! One can dream, right!?

  2. This is where developers should take the responsibility of ensuring that the change doesn't break any existing unit tests (if any have been written). If no unit tests exist, then at least ensure they write some for the change. More often than not the answer from developers is that it's not possible to unit test the change, which is a poor excuse.

    I agree that you need time set aside to assess the change and then figure out what knock-on effects the change has on other systems.

    You also need to identify whether the change affects core, high-risk areas of the system, such as payment or product selection (add to basket). That will more often than not dictate the depth of your testing effort.

    I'm a big fan of automated tests, which, if they exist, will cover a lot of your bases.
    Lastly, performance tests are often forgotten, and I've seen releases rolled back because a change brought the site to a standstill when there had been no planning around ensuring system performance.

    Replies
    1. I don't think it should only come from developers; we as a QA team need to ask questions and do some investigation ourselves, walking through the changes and the systems affected. That should help drive the test plan. We shouldn't just accept "test everything" from developers (a mistake I have made in the past).

      I like the high-risk-area approach: all too often testing is squeezed, and by identifying the high-risk areas we can focus our testing effort on them when appropriate.

      Performance tests are often forgotten about, but they are also just bolted on to projects willy-nilly, and sometimes the results are ignored!! :p


