
Posts

Showing posts with the label Acceptance Tests

Working with Test Cases in TFS and MTM

Where I work we use TFS and MTM, and there are a number of pain points around them: they're slow, and they can be difficult to work with if you're not used to the UI. Those are two things that, for the time being, I can't help with. However, there was one grievance I could do something about: passing a test in MTM doesn't update the Test Case in TFS. I can understand why this is, as an Acceptance Test in TFS and a Test in MTM are two different things, in that an Acceptance Test in TFS can be run on multiple configurations inside MTM, so why would a passed test in MTM update the Test in TFS? This meant that the testers would have to export the tests to Excel and perform a mass update to pass the TFS test cases, which was a bit of a pain and unnecessary. I did some research and found other people had the same problem, so I thought how great it would be if we could use the TFS API to update all the test cases against the PBI to "Passed" just by inputting the PBI number. The h...
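The excerpt above is cut off, but the idea it describes can be sketched with the TFS client API. This is a minimal, hedged illustration only, not the post's actual code: it assumes the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client assemblies, and the server URL, collection name and the state value that "Passed" maps to are all placeholders you would adapt to your own process template.

```csharp
// Sketch: walk the work-item links on a PBI and update every linked
// Test Case. Server URL and state values are illustrative placeholders.
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class PassTestCasesForPbi
{
    static void Main(string[] args)
    {
        int pbiId = int.Parse(args[0]); // the PBI number the tester types in

        var tfs = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tfs.GetService<WorkItemStore>();

        var pbi = store.GetWorkItem(pbiId);
        foreach (WorkItemLink link in pbi.WorkItemLinks)
        {
            var target = store.GetWorkItem(link.TargetId);
            if (target.Type.Name == "Test Case")
            {
                // "Closed" stands in for whatever "Passed" maps to
                // in your team's process template.
                target.State = "Closed";
                target.Save();
            }
        }
    }
}
```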

Dealing with Selenium WebDriver Driver.Quit crashes (Where chromedriver.exe is left open)

We recently came across a problem with Selenium not quitting the WebDriver, which would then lock a file that was needed on the build server to run the builds. We were using Driver.Quit(), but this sometimes failed and would leave chromedriver.exe running. I looked around and found this was a common issue that many people were having. We (I say we, as we came to the solution through paired programming) came up with the following, which encapsulates the Driver.Quit() inside a task; if that task takes longer than 10 seconds, it cleans up any processes started by the current process. In the case of the issue on the build server, it would kill any process started by NUnit.

[AfterTestRun]
public static void AfterTestRun()
{
    var nativeDriverQuit = Task.Factory.StartNew(() => Driver.Quit());
    if (!nativeDriverQuit.Wait(TimeSpan.Fr...
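The excerpt truncates mid-snippet, so here is a minimal sketch of how the full pattern might look. This is my reconstruction, not the post's exact code: Driver is assumed to be the test project's own WebDriver wrapper, and KillChildProcesses is a hypothetical helper that uses WMI (System.Management), which makes it Windows-only.

```csharp
// Sketch: wrap Driver.Quit() in a task; if it hangs for more than
// 10 seconds, kill any processes spawned by the current process
// (e.g. an orphaned chromedriver.exe).
using System;
using System.Diagnostics;
using System.Management; // add a reference to System.Management.dll
using System.Threading.Tasks;

public static class DriverCleanup
{
    // In SpecFlow this would carry the [AfterTestRun] binding attribute.
    public static void AfterTestRun()
    {
        var nativeDriverQuit = Task.Factory.StartNew(() => Driver.Quit());
        if (!nativeDriverQuit.Wait(TimeSpan.FromSeconds(10)))
        {
            // Driver.Quit() hung: fall back to killing our children.
            KillChildProcesses(Process.GetCurrentProcess().Id);
        }
    }

    static void KillChildProcesses(int parentId)
    {
        // WMI query for processes whose parent is this process.
        var searcher = new ManagementObjectSearcher(
            "SELECT ProcessId FROM Win32_Process WHERE ParentProcessId = "
            + parentId);
        foreach (ManagementObject mo in searcher.Get())
        {
            try
            {
                Process.GetProcessById(
                    Convert.ToInt32(mo["ProcessId"])).Kill();
            }
            catch (ArgumentException)
            {
                // The process already exited between the query and Kill().
            }
        }
    }
}
```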

Test Iceberg of Automation

I'm sure you've all heard of the automated testing pyramid; I'll describe it briefly, but you can read all about it here. It's essentially a strategy that shows a good-practice ratio of Acceptance Tests (generally UI) to Integration Tests to Unit Tests. In its simple form, it states that a good ratio is to have your testing covered with 10% acceptance tests, 20% integration tests and 70% unit tests. Why is that, you may ask? The primary focus of this is Return on Investment: by finding bugs/breakages at the unit test level you are finding cheap bugs, as unit tests are quick to run and easy to maintain, whereas acceptance tests, whilst having value, are harder to maintain and take longer to run. Obviously it's not a strict ratio, but I think it's a good practice to try and live by. However, I digress; the main point of this post is to put another spin on the automation triangle, one that is possibly more QA-centric than the automation ...

Advantages of using Test Management tools

Before I start talking about test management tools, let me clarify what I mean by the term... I am not talking about your office Excel program where you store your test cases. I'm talking about bespoke test management tools: your Quality Centers or Microsoft Test Manager. In the strict sense of the term, Microsoft Excel can be used as a test management tool, but heck, so could Notepad if used in the right way. For the sake of this blog post I am talking about bespoke test management tools. Firstly, what test tools are out there? There are many more today than when I first started in QA over 5 years ago. When I started, the market was primarily dominated by a tool called Quality Center, which ran in a browser (only IE, unfortunately) and was hosted on a server. Nowadays its market share has somewhat dwindled, and there are some new kids on the block. One of the more popular tools is Microsoft Test Manager, it...

Using BDD and gherkinising your Acceptance Tests

In my post Testing of Automated Tests, I mention a BDD framework which involves using BDD to drive your acceptance tests. BDD stands for Behaviour Driven Development. One effective method of writing BDD tests is a format known as the Gherkin language, which consists of Givens, Whens and Thens. The main advantage of the Gherkin language is that it's readable by the business, and in an ideal world it forms part of the Conditions of Acceptance around a PBI. Also, using the Visual Studio plugin SpecFlow, you can integrate your Gherkinised COAs into your solution with feature files and then drive the automated tests; however, for this post I will focus solely on how to effectively gherkinise your acceptance tests. A feature file consists of a feature outline, which details what the feature file is testing, followed by scenarios and examples (parameters). The BDD scenarios are made up of a Given, When, Then: effectively an initial state (Given), an action (W...
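To make the structure concrete, here is a small hypothetical feature file in the shape described above: a feature outline, a scenario written as Given/When/Then, and an Examples table supplying the parameters. The search scenario is borrowed from elsewhere on this blog; the example terms are made up.

```gherkin
Feature: Site search
  As a visitor
  I want to search the site
  So that I can find relevant content

Scenario Outline: Search for a generic term
  Given I am on the homepage
  And I have entered <searchterm> into the search box
  When I click search
  Then search results for <searchterm> are displayed

Examples:
  | searchterm |
  | cheese     |
  | testing    |
```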

Testing of Automated Tests?

I've often mentioned throughout my blog how important automated tests are. However, automated tests can also give false positives... This is where paired programming and testing of the automated tests come into play. The example I am going to use employs the Page Object model with BDD, more of which can be read about here and here, until I get round to creating a new blog post on it...

Scenario Outline: Search for generic term
  Given I am on the homepage
  And I have entered <searchterm> into the search box
  When I click search
  Then search results for <searchterm> are displayed

Let me explain anyway: if we have an assertion in the step definition for the Then step (written in C# using NUnit) that does the following:

[Then(@"search results for (.*) are displayed")]
public void SearchResultsForAreDisplayed(string searchTerm)
{
    Assert.I...
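The snippet above is truncated, so here is a sketch of what such a Then step definition might look like as a SpecFlow binding with an NUnit assertion. SearchResultsPage is a hypothetical page object, not from the post; the comment notes the kind of false positive the post is warning about.

```csharp
// Sketch of a SpecFlow step definition with an NUnit assertion.
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    [Then(@"search results for (.*) are displayed")]
    public void SearchResultsForAreDisplayed(string searchTerm)
    {
        // A weak assertion (e.g. "the term appears somewhere on the page")
        // can pass even when the page says "no results for X" -- the kind
        // of false positive this post warns about -- so assert on a
        // specific element such as the results header.
        Assert.IsTrue(
            SearchResultsPage.HeaderText.Contains(searchTerm),
            "Expected the results header to mention '" + searchTerm + "'");
    }
}
```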

How to decide what and when to automate tests?

We all know that repetitive manual testing can be, and at times is, boring... but unfortunately it's a necessity for some aspects of testing. One thing that I love, and which sure enough reduces the load of manual testing, is automated testing, be it at the service level, through an API, or especially WebUI testing. Whenever any testing comes along, there is a question that is regularly asked by QA: do we want to automate this? When deciding what tests we should automate, I feel it's important to answer some questions:

- Will this test form a part of the regression pack for the application?
- Will this test be run multiple times during the development process?
- Can the same level of testing be achieved by automating this test?

I'll tackle the first question, as it's the most basic and the easiest to answer. If a test is to form a part of a regression pack, then yes, it should be automated. The reason being that it will save time in the fut...