
Are you a Tester or a QA Engineer?

When I first started out in the software QA world, my job title was Test Analyst, a title I disagreed with: we weren't just testing software. In my eyes we were doing far more than that, and still are. I often hear people talk about software testing and QA in the same sentence, as if they were much the same thing.

They are not the same thing!!

By QA I mean, as I'm sure you're aware, Quality Assurance. Testing can form a part of that QA as an activity to help ensure quality, but they are not and should never be classified as the same thing.

Testing, to me, is the physical act of ensuring that software works correctly and that tests pass. QA is more about ensuring quality in the product: not just through testing the code, but through discovery, requirements gathering, test case design and test case review.

As a QA we shouldn't just accept requirements as they are written, and we shouldn't just accept design documents; we need to provide a quality gate throughout the life of a project. Testing code can help ensure quality, but it should by no means be treated as a guarantee, or as the only piece of Quality Assurance that anybody does on a project.

A prime example: if you're not questioning the requirements or the designs, you can still write test cases against them, and those tests may well pass, but they're not necessarily delivering much quality to the business or ensuring a quality product.
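
To make that concrete, here's a deliberately toy sketch in Python (the discount requirement, the function and the figures are all invented for illustration). Both checks below pass against the requirement exactly as written, but if nobody has questioned that wording, the passing tests say very little about whether the business is getting what it actually wanted:

# Toy example: tests written straight from the written requirement.
# Requirement as documented: "Orders over £100 get a 10% discount."
# Suppose the business actually wanted "£100 or more", but nobody questioned the wording.

def apply_discount(order_total):
    """Implements the requirement exactly as written: strictly over 100."""
    if order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_discount_applied_over_100():
    # This check mirrors the document, so it passes...
    assert apply_discount(150.00) == 135.00

def test_no_discount_at_or_below_100():
    # ...and so does this one, even though a customer spending exactly £100
    # gets no discount, which is not what the business intended.
    assert apply_discount(100.00) == 100.00

if __name__ == "__main__":
    test_discount_applied_over_100()
    test_no_discount_at_or_below_100()
    print("All checks pass, but the product still misses the real need.")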


Comments

  1. Hi, Gareth...

    I'd like to point out a couple of things with which I disagree.

    To me, testing is not "the physical act of ensuring that software works correctly." Nothing can do that. The space of valid inputs is (typically, at least) intractably large; the space of invalid inputs is infinitely large; the potential variation in timing is infinitely large; the set of possible states of programs running concurrently to yours is infinite; the potential variation in sequences is infinite; and the potential sources of error in the test are effectively infinite too.

    At best, testing can suggest (not ensure) that the product CAN work, not that it DOES work, and certainly not that it WILL work. Testing can suggest (but not ensure) that the product doesn't work. In either case, it takes judgement by a skilled human to decide whether the test provides useful information, and it is that human who determines how to apply that information. Neither the test nor the testing does that on its own.

    http://www.developsense.com/blog/2014/01/very-short-blog-posts-11-passing-test-cases/

    I would characterize the work of "testing the code,...discovery, requirements gathering, test case design and test case review" as precisely within the scope of the work of a tester. Comparing the code to some document, whether done by human or machine, is something that we call "checking".

    http://www.satisfice.com/blog/archives/856

    Meanwhile, a "quality assurance person" questioning the requirements and questioning the design doesn't make the product better, any more than an investigative reporter makes society better when she investigates agricultural practices or apparent miscarriages of justice.

    http://www.developsense.com/blog/2010/05/testers-get-out-of-the-quality-assurance-business/

    Until someone with the authority to make changes makes a change, nothing happens. The product doesn't get better for having been tested. The product gets better for having been tested and fixed, and neither testers nor "quality assurance" people do that.

    Replies
    1. Hey,

      Thanks for commenting.

      Perhaps I should have expanded "ensuring the software works correctly" to "ensuring the software works as expected against requirements"; to me, that is what testing is all about. It will, as you say, suggest that it CAN work, but also that it DOES work as expected against the requirements. So, as you say, the judgement of a skilled human against documentation (be it COAs against a PBI, or requirements in any shape or form) is an essential part of this process.

      I like your thoughts around the whole team being part of QA, and you're right, but as a tester (or even QA) we need to encourage quality within the team (as should everyone). All too often developers don't think about the bigger picture, and things can get missed. I suppose the key point is to work as a team to deliver quality; however, we as testers need to help ensure that quality is met, through our test cases and through our discussions and communication with others.

      A "QA person" questioning the requirements and questioning the design in my experience does make the product better, but as you say, it depends on who you question the code or question the requirements to and how you do it. The product does get better by having been tested providing results are fed back and acted upon as you say, and as you said in one of your comments on the last post, we provide information to people who can ensure quality.

      So actually, I'm not sure how that contradicts my original post, but I definitely feel there needs to be a follow-up on it. :)

      Thanks again.

  2. Hi Gareth,

    I think Mr Bolton was just using your blog to promote his own personal interest rather than providing any real feedback.

    I understand what you're saying! Testing is one of the tools to ensure quality, not the only one, and not the same as QA. Testing focuses on what's wrong: bugs, failures and so on. QA focuses on how to make the product superior. There is a difference, because you can add quality without testing. QA can start as early as the requirements stage, whereas testing has to wait until the code has been delivered into an appropriate environment.

    Nice Blog, keep up the good work
