
Unit Tests? Integration Tests? Acceptance Tests? What do they mean?

I'm currently working with a new team who haven't really worked in an Agile way before, and who don't have much experience of the types of testing you can do on an application, so in preparation I tried to come up with simple definitions of the types of tests above.

I thought it would make a good blog post, as it's something that I would undoubtedly find useful at a future point... So here goes:

A Unit Test is a simple test that exercises a single piece of code or logic. When it fails, it tells you exactly which piece of code is broken.
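As a sketch of what that looks like in practice, here is a minimal unit test in Python. The `apply_discount` function and its test cases are purely illustrative, not from a real codebase:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount (illustrative example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - (price * percent / 100)

class ApplyDiscountTests(unittest.TestCase):
    # Each test exercises one piece of logic, so a failure points
    # straight at the broken code.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Note there are no databases, no browsers, no other systems involved: just one function and its expected behaviour.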

An Integration Test is a test that exercises the combination of different pieces of an application. When it fails, it tells you that your systems are not working together the way you thought they would.
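For example, a test that runs our code against a real (in-memory) database checks the pieces together rather than in isolation. The `UserStore` class here is a made-up example, but SQLite and `unittest` are real:

```python
import sqlite3
import unittest

class UserStore:
    """Thin wrapper around a SQLite database (illustrative example)."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

class UserStoreIntegrationTests(unittest.TestCase):
    # This test exercises our code *and* a real database together;
    # a failure means the pieces don't cooperate as we assumed.
    def test_add_then_count(self):
        store = UserStore(sqlite3.connect(":memory:"))
        store.add("alice")
        store.add("bob")
        self.assertEqual(store.count(), 2)

if __name__ == "__main__":
    unittest.main()
```

If this fails, the bug could be in our SQL, our wrapper, or our assumptions about the database, which is exactly the kind of failure a unit test on its own won't catch.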

An Acceptance Test is a test that checks the software does what the customer or user of the software expects. When it fails, it tells you that your application is not doing what the customer or user thought it would do, or even what it should do.
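Acceptance tests are usually written from the user's point of view, often in a Given/When/Then style. Here is a hedged sketch in plain Python; the `ShoppingApp` class stands in for a real application and is entirely hypothetical:

```python
import unittest

class ShoppingApp:
    """A tiny stand-in for the real application (hypothetical example)."""
    def __init__(self):
        self.basket = []

    def add_to_basket(self, item):
        self.basket.append(item)

    def checkout(self):
        if not self.basket:
            raise RuntimeError("basket is empty")
        return "Order placed for {} item(s)".format(len(self.basket))

class CheckoutAcceptanceTest(unittest.TestCase):
    # Written from the user's point of view: can they complete the
    # journey they care about, end to end?
    def test_user_can_buy_an_item(self):
        app = ShoppingApp()                     # Given a user on the shop
        app.add_to_basket("book")               # When they add an item
        message = app.checkout()                # And check out
        self.assertIn("Order placed", message)  # Then the order succeeds

if __name__ == "__main__":
    unittest.main()
```

In a real project this journey would more likely be expressed in a tool like SpecFlow or Cucumber and driven through the UI, but the shape of the test is the same: one user goal, verified end to end.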

These are quick, simple and dirty definitions of the different types of testing you might come across in a project. There are more, but these are the ones I am going through with the team, so they are the ones that have made it into this blog post!

Feel free to agree/disagree/add more...

Comments

  1. Your definitions are, more or less, what I would have said. But there is a key distinction that needs to be identified, I think, between Acceptance Testing, and all the rest.

    Instead of the traditional testing "pyramid", think instead of a modern rail or vehicle bridge.

    All the "lower" forms of testing cover the "vertical" stacks: unit, functional, and even some forms of integration testing are basically the tests that ensure the pillars or pylons of the bridge are sound.

    Acceptance Testing, however, covers the "horizontal plane". It is designed to be sure of one basic goal: can the user cross the bridge? Can he get from point A to point B, consistently?

    Why is this distinction important? Well, because it helps to better understand what we mean when we say something is "covered".

    Staying with the metaphor, I can write a Gherkin spec covering a user journey across that bridge that passes consistently for months. What does that tell us with any certainty about the underlying bridge supports? Only that they managed to keep the bridge up while I crossed it.

    But without unit, functional, and integration tests, the Gherkin specs can't know if any particular pillar has hairline cracks in the concrete, or that a faulty girder bolt has sheared and is putting extra stress on the suspension cables, or that debris is building up around the base, which will eventually rot the connecting beams.

    And why is all this important? Because a lot of people point to that old testing "pyramid" and complain about "duplication of effort", not realizing that you're testing *two different things*. The user journey, and the application, are fundamentally two different things. And the testing must reflect that. So yes, it's possible that some unit tests are exercising the same piece of code as a Gherkin spec, but they're doing it *under different conditions*, in different contexts, with different goals in mind. What those folks who complain about duplication are missing, is that testing is not a linear activity, and that product quality is not one-dimensional.

