
QAs are like referees: people only talk about them when they make mistakes

In a lot of companies, and in a lot of articles I read online, people say that QA is underestimated and that good QA doesn't get noticed enough, and they talk about it like it's a bad thing.

However, I strongly believe that good QA doesn't need to be credited. The fact that people aren't talking about QA is a good thing. It's not just that no news is good news; it's far more than that.

Let me give you the example of referees in football. If a referee has a good game, he's not going to be talked about; a good referee is one who goes unnoticed, doesn't make any bad decisions, and lets the game flow well. Nobody is saying that the referee missed a blatant penalty or wrongly sent someone off.


To liken this to QA: if everything goes well on a project, the product is released without any bugs, and the quality of the software is of a high standard, then it's not very often that someone will say the QA was great. I think it comes down to the assumption that software should be perfect with minimal effort. However, we know this isn't the case :)

Very rarely, a referee will do a great piece of refereeing, like here when Phil Dowd allowed Sunderland to play advantage and then brought the play back and awarded a penalty against Cardiff when no advantage was gained. This was heralded by Gus Poyet (the Sunderland manager) as "the best decision I've ever seen from a referee".

Just like the above, people will sometimes hail QA and say what a great job they are doing. But I suppose what I'm trying to say is that we should be happy that nobody is talking about us; it's when people start saying that QA missed a bug that we should be worried.

Comments

  1. Disclaimer: I'm a ref and a tester.

    I don't think we should be happy about this. The youth league I ref for has a real problem getting enough refs, as parents and coaches think they can treat them like crap.
    The same argument can apply to test/QA: if no one is writing about them and what a good job they do, then (1) the pay will be crap and (2) no one will want to do the job; they will want to be players or managers or coaches.

    1. I don't think recognition is a QA-only problem; it affects devs as well.

      How many games have you come off thinking to yourself that you refereed that game to a high standard and done well? Do you expect anybody else to say well done? Surely the satisfaction comes from yourself, as with the satisfaction of releasing software that users will find beneficial and that is bug-free. That, to me, is good enough. Sure, it's nice when people say well done, but it's not the norm, and if it were the norm, then surely it would lose its meaning?

      I'd much rather not hear anything about the QA on a project rather than hearing comments about how it was done poorly, which I'm sure is the same for refereeing?

      We do need to attract people to QA, and even make the work that we do more visible, that will help improve the pay, and will help people want to do the job, not necessarily pats on the back when things go well?


