
Should a failed test on CI automatically raise a bug?

I got into work this morning, was looking at some CI builds, and had a thought: it might be useful to have a bug raised automatically (through an API) whenever a test running on CI failed. I tweeted about it to get people's opinions...
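For context, the idea was roughly this: a post-build step inspects the test results and, if anything failed, calls the bug tracker's REST API to open an issue. Below is a minimal sketch in Python of that idea; the endpoint URL, token variable, and payload fields are hypothetical stand-ins for whatever tracker you use, not a real API.

    # Hypothetical sketch only: raise one bug per failed CI build via a
    # bug tracker's REST API. The endpoint and fields are made-up placeholders.
    import os
    import sys

    import requests

    TRACKER_URL = "https://tracker.example.com/api/issues"  # hypothetical endpoint

    def raise_bug(build_id, failed_tests):
        """POST a single issue summarising the failed build."""
        payload = {
            "title": "CI build {} failed ({} tests)".format(build_id, len(failed_tests)),
            "description": "Failed tests:\n" + "\n".join(failed_tests),
            "labels": ["ci-failure", "auto-raised"],
        }
        response = requests.post(
            TRACKER_URL,
            json=payload,
            headers={"Authorization": "Bearer " + os.environ["TRACKER_TOKEN"]},
            timeout=10,
        )
        response.raise_for_status()

    if __name__ == "__main__":
        # e.g. the CI server invokes: python raise_bug.py <build-id> <failed-test-name>...
        raise_bug(sys.argv[1], sys.argv[2:])

Even this tiny sketch forces a design decision (one bug per failing test, or one per build?), which hints at the noise problem discussed below.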



I got a few responses on Twitter, and one in particular made me rethink whether this was a good idea at all:


It could easily lead to an incredible amount of noise if we raise a bug against every failed test, and besides, 1 failed test != 1 bug: if 3 automated tests fail on CI, we don't necessarily need 3 bugs. It could also (as my reply states) devalue what a bug actually is, meaning that when a tester raises a genuine bug it might get ignored or lost in all the noise.

I also asked our internal QA Slack channel; the responses were informative, and again helped steer me away from this potentially noisy and crazy idea.



Both of these points are extremely valid: if fixing a broken test is already the number one priority, why bother creating a bug? Surely the visibility of a red cross next to a test run is enough to get people working on it? Which is very true.

Another factor to consider is the type of test. If a unit test fails, do we need a bug? Most definitely not. There's more of an argument for one if an acceptance test fails, though after today's discussion I don't think there is!

So, what started out as a blog post about CI and bugs has left me with 3 insightful thoughts:

1 - The focus of this post: a failed test on CI most definitely does not mean a new bug needs to be raised, whether that bug would be created manually or automatically. Doing so would devalue bugs, it would mean people wouldn't necessarily talk about the underlying issues, and 1 failed test != 1 new bug.

2 - Asking people for feedback is invaluable; it can help shape your opinion and give you a quick sanity check before you head off down a rabbit hole!

3 - Discussions like this are exactly why I wanted an internal QA community: a place to share ideas and get feedback.


