
Working with Test Cases in TFS and MTM

Where I work we use TFS and MTM, and there are a number of pain points around them: they're slow, and they can be difficult to work with if you're not used to the UI. Those are two things that, for the time being, I unfortunately can't help with. However, there was one other grievance I could do something about: passing a test in MTM doesn't update the Test Case in TFS.

I can understand why this is: an Acceptance Test in TFS and a Test in MTM are two different things. An Acceptance Test in TFS can be run against multiple configurations inside MTM, so why would a single passed run in MTM update the Test Case in TFS?

This meant that the testers would have to export the tests to Excel and perform a mass update to pass the TFS Test Cases, which was a bit of a pain and felt unnecessary.

I did some research and found other people had the same problem, so I thought it would be great if we could use the TFS API to update all the Test Cases linked to a PBI to "Passed" just by inputting the PBI number.

The hardest part was coming up with the query that would bring back the library of Acceptance Tests, but thankfully I could build the query in TFS, grab the underlying query code (the WIQL), and drop that straight into the program.
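To give an idea of the shape of it, here's a minimal sketch of that lookup, assuming the standard "Tested By" link between PBIs and Test Cases. The server URL is a placeholder and TestCaseFinder is just a name I've made up for illustration, not the code from our actual tool:

    using System;
    using System.Linq;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    static class TestCaseFinder
    {
        // Returns the ids of Test Cases linked to the given PBI via
        // the "Tested By" link. The collection URL is a placeholder.
        public static int[] GetLinkedTestCaseIds(int pbiId)
        {
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://yourtfsserver:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            var wiql = string.Format(@"
                SELECT [System.Id]
                FROM WorkItemLinks
                WHERE [Source].[System.Id] = {0}
                  AND [System.Links.LinkType] = 'Microsoft.VSTS.Common.TestedBy-Forward'
                  AND [Target].[System.WorkItemType] = 'Test Case'
                MODE (MustContain)", pbiId);

            // Link queries are run with RunLinkQuery rather than Query
            var links = new Query(store, wiql).RunLinkQuery();

            // The first row of a link query is the source item itself
            // (SourceId == 0 and TargetId == pbiId), so keep only the
            // actual linked targets.
            return links.Where(l => l.SourceId != 0)
                        .Select(l => l.TargetId)
                        .ToArray();
        }
    }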

I figured the simplest approach would be a console application: the tester enters the PBI they wish to update, and away it goes and does the magic, writing each test that is updated to the window and noting the overall success at the end.
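The skeleton of the console app is then about as simple as it sounds. Again this is a sketch rather than the real thing: TestCaseUpdater.UpdateTestCases is a hypothetical stand-in for the bespoke update logic, covered next.

    using System;

    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Enter the PBI number to update: ");
            int pbiId;
            if (!int.TryParse(Console.ReadLine(), out pbiId))
            {
                Console.WriteLine("That isn't a valid PBI number.");
                return;
            }

            // Find the linked Test Cases and bulk-update them,
            // writing each updated test to the window as we go.
            int updated = TestCaseUpdater.UpdateTestCases(pbiId);

            Console.WriteLine("Done - {0} test case(s) set to Passed.", updated);
        }
    }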

I have sent this round to the teams and it's proving very useful. I've also added an exception: if a test is already marked as Failed then it isn't updated, as resolving a failure should be a manual process, and I wouldn't want this bulk update to change Test Cases it's not meant to.
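That guard is just a state check before the save. One more sketch, this time assuming the Test Case states are literally "Passed" and "Failed" as described above; whether you can set them directly like this depends on the state transitions your process template allows.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    static class TestCaseUpdater
    {
        public static int UpdateTestCases(int pbiId)
        {
            var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                new Uri("http://yourtfsserver:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            int updated = 0;
            foreach (int id in TestCaseFinder.GetLinkedTestCaseIds(pbiId))
            {
                WorkItem testCase = store.GetWorkItem(id);

                // Don't touch tests that have already failed - resolving
                // those should stay a manual process.
                if (testCase.State == "Failed")
                {
                    Console.WriteLine("Skipping {0} ({1}) - already Failed.", id, testCase.Title);
                    continue;
                }

                testCase.State = "Passed";
                testCase.Save();
                Console.WriteLine("Updated {0} ({1}) to Passed.", id, testCase.Title);
                updated++;
            }
            return updated;
        }
    }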


I would share the actual source code, but it's very bespoke to the setup where I work, so it wouldn't necessarily be useful. I just thought it would be worth letting people know that it can be done, even though I understand the reasons why it isn't there by default.
