
Engineering on Legacy Code

A recent project I was on involved testing a lot of legacy code; in fact, I think it was all legacy code! So I thought I'd write about the challenges involved and what you should look out for.

Firstly, let me define what I mean by legacy code. I have seen definitions stating that any code without unit tests can be classed as legacy. Whilst this is true, I also like to think of legacy code as code that isn't being refactored or improved upon; it is what it is. To quote Ronseal, "it does exactly what it says on the tin".

The problem with the Ronseal analogy is: what happens if you can't find the tin? Or you can't make sense of the tin? This brings me onto the first challenge. If it is legacy code and there is no supporting documentation around how it works or what certain features are for, our lives as testers (and developers) become difficult. We have to ask questions about what certain things do, and more often than not the person we ask won't know either. This tripped me up in the project's first release, and I'm not too ashamed to admit it: we had to roll back the release due to a bug caused by us not truly understanding a legacy feature. This was a good lesson. We learnt from it, and we were far more cautious and inquisitive about future releases, making sure we understood everything.

Which brings me onto the next challenge/tip: make sure you understand everything around the legacy code that you are testing. If there's documentation, read it; if there are questions that need answering, ask them. There is no such thing as a stupid question! All of this will help drive your testing and help you decide what to test and how to test it.

Another challenge when testing legacy code is that you are often limited by what has been developed in the past. For instance, we wanted to performance test an internal application, but we had no scripts for it; performance testing of this application had never been considered necessary, until now. The problem was that we needed it, as we were increasing the amount of data returned by certain calls. We didn't have time for anyone to develop full performance tests, so we decided to perform the testing at a lower level, by timing the stored procedures (sprocs) that retrieved and set the data. This gave us enough confidence, and was relatively quick and easy to do.
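As a rough illustration of that lower-level approach, here's a minimal sketch of timing a stored procedure call directly from C#. The connection string, sproc name, and parameter are hypothetical stand-ins rather than the real project's details, and the loop is deliberately crude compared with a proper performance test.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class SprocTimer
    {
        static void Main()
        {
            // Hypothetical connection details and sproc name, for illustration only.
            const string connectionString = "Server=.;Database=LegacyDb;Integrated Security=true;";
            const string sprocName = "dbo.GetCustomerOrders";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Time several runs; the first run often includes query plan compilation.
                for (var run = 1; run <= 5; run++)
                {
                    using (var command = new SqlCommand(sprocName, connection))
                    {
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.AddWithValue("@CustomerId", 42); // hypothetical parameter

                        var stopwatch = Stopwatch.StartNew();
                        using (var reader = command.ExecuteReader())
                        {
                            // Drain the results so the timing covers retrieving the data,
                            // not just kicking off the query.
                            while (reader.Read()) { }
                        }
                        stopwatch.Stop();

                        Console.WriteLine($"Run {run}: {sprocName} took {stopwatch.ElapsedMilliseconds} ms");
                    }
                }
            }
        }
    }

Running each call several times and draining the result set keeps the numbers honest: the first execution tends to pay for plan compilation, and the timing should cover fetching the data, not just starting the query.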

Finally, with no unit tests on the code and no automated tests that worked, we were forced to do more manual testing than I would perhaps have liked.
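As an aside, one way to start chipping away at that manual burden is a characterization test: rather than asserting what the code should do, you pin down what it currently does, so any change in behaviour shows up before a release. Below is a minimal sketch using NUnit; the legacy class, method, and expected value are all hypothetical.

    using NUnit.Framework;

    [TestFixture]
    public class LegacyDiscountTests
    {
        // Characterization test: the expected value below was captured by running
        // the existing code once and recording its output. It documents current
        // behaviour, not necessarily correct behaviour.
        [Test]
        public void CalculateDiscount_MatchesCurrentBehaviour()
        {
            var calculator = new LegacyDiscountCalculator(); // hypothetical legacy class

            var result = calculator.CalculateDiscount(250.00m);

            // If this fails, behaviour has changed; work out whether the change
            // was intended before trusting either the old or the new result.
            Assert.AreEqual(25.00m, result);
        }
    }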

Despite the above challenges, we successfully released the project on time. A lot of this was down to how we managed the releases, releasing small pieces in quick succession. For instance, had we released big bang and then found the bug that caused the first release to be rolled back, we would have had to roll back everything, which would not have been fun!


So there you have it: a few challenges that I came across when testing on a legacy system. What challenges can you think of? This post started with the title "Testing on legacy code", but if you replace the word testing with developing, a lot of the points still hold true; it's not just about testing but engineering on legacy code. I know you can make the case that the above is everything you should be trying to achieve when testing any code, but when testing legacy code, these points are even more important.
