
QA Vision for the next 12 months

I was recently asked about my vision for QA over the next 12 months: where would I like QA to be, and how am I planning on achieving it...

I thought I'd write down where I want QA to be and document the progress over the next year or so, hopefully achieving most, if not all, of what I want.

Firstly, a big problem where I am currently is performance testing. We hand the system over at the end of a project to a third party, who run performance tests on it and come back with results. There are a number of issues with this, the main one being that we leave something incredibly important right to the end of a project, so any issues found are extremely difficult to fix. So the first thing I want to do is embed performance testing right into the sprint, and actually try to do it in an Agile way. I read a blog post here and really want to try to achieve that. We have the tooling to do it in house, so why wouldn't we? Sure, it will require some help from experts at the start, but eventually we should be able to bring it entirely in house.

The benefits of this are fewer shocks at the end of a project, a smoother release process and a quicker time to release, as there is no longer a two-week period needed for performance tests to run. To achieve this we will need better acceptance criteria around performance, and education around best coding practices, but this is a big goal and I really want to achieve it. I would argue this is the highest priority of everything I want to achieve over the next 12 months.
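To make that a bit more concrete, here is a rough sketch (in Java, purely for illustration; the endpoint, user numbers and threshold are all made up, and it is not the tooling we will actually use) of the kind of lightweight check a team could run inside the sprint: fire a burst of concurrent requests at an endpoint and fail the build if the 95th percentile response time creeps above the agreed acceptance criterion.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SearchPerformanceCheck {

    // Hypothetical values; in practice these would come from the story's acceptance criteria.
    private static final String ENDPOINT = "https://preprod.example.com/search?q=jeans";
    private static final int CONCURRENT_USERS = 20;
    private static final int REQUESTS_PER_USER = 10;
    private static final long P95_THRESHOLD_MS = 800;

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_USERS);
        List<Future<List<Long>>> futures = new ArrayList<>();

        // Each simulated user fires a series of requests and records how long each one took.
        for (int u = 0; u < CONCURRENT_USERS; u++) {
            futures.add(pool.submit(() -> {
                List<Long> timings = new ArrayList<>();
                for (int r = 0; r < REQUESTS_PER_USER; r++) {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT)).GET().build();
                    long start = System.nanoTime();
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    timings.add((System.nanoTime() - start) / 1_000_000);
                }
                return timings;
            }));
        }

        // Gather every recorded response time, then work out the 95th percentile.
        List<Long> allTimings = new ArrayList<>();
        for (Future<List<Long>> future : futures) {
            allTimings.addAll(future.get());
        }
        pool.shutdown();

        Collections.sort(allTimings);
        long p95 = allTimings.get((int) Math.ceil(allTimings.size() * 0.95) - 1);
        System.out.println("95th percentile response time: " + p95 + "ms");

        // Failing loudly means a CI job can treat a slow build exactly like a failing test.
        if (p95 > P95_THRESHOLD_MS) {
            throw new AssertionError("p95 of " + p95 + "ms exceeds the agreed " + P95_THRESHOLD_MS + "ms");
        }
    }
}
```

The tool itself doesn't really matter here; the point is that performance becomes a pass/fail check the team owns every sprint, rather than a report that arrives from a third party at the end of the project.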

Next up is to have some form of induction process for new QAs; currently there isn't one. New QAs are put into teams and there is no induction covering system architecture, QA processes or the automation framework, nothing at all. I want to rectify this. The problem is that it is time consuming and can vary slightly from team to team, but the goal is to make it as generic as possible whilst still giving huge value to new members. The advantage is that new QA members can hit the ground running quicker, with less time wasted asking questions and waiting for answers, because all the information will be in the induction pack they complete.

We do a lot of releases, but as of now there is no automated test pack for a release. We are in the process of rectifying this by creating an automated deployment test pack that can be run in production and pre-production and will verify the core functionality of the website. The goal is for teams to run this test pack as part of CI on a nightly basis, the benefits being that teams will have confidence that their code is working as it should, and the actual deployment and release will be quicker and hopefully smoother. This will decrease the time taken to deploy new code, which is essential if we are to achieve more regular releases.
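As a sketch of what that pack might contain (the locators and URLs below are placeholders, not our real ones), it could be a small set of Selenium smoke tests that take the environment's base URL from configuration, so the identical pack runs against pre-production and production:

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class DeploymentSmokeTests {

    // The same pack runs against pre-production or production depending on this system property.
    private static final String BASE_URL = System.getProperty("baseUrl", "https://preprod.example.com");
    private static WebDriver driver;

    @BeforeClass
    public static void startBrowser() {
        driver = new ChromeDriver();
    }

    @AfterClass
    public static void stopBrowser() {
        driver.quit();
    }

    @Test
    public void homePageLoads() {
        driver.get(BASE_URL);
        assertFalse("Home page should load with a title", driver.getTitle().isEmpty());
    }

    @Test
    public void searchReturnsResults() {
        // "search" and "product-tile" are invented locators for illustration only.
        driver.get(BASE_URL);
        driver.findElement(By.name("search")).sendKeys("jeans");
        driver.findElement(By.name("search")).submit();
        assertTrue("Search should return at least one product",
                driver.findElements(By.className("product-tile")).size() > 0);
    }
}
```

The nightly CI job would then just run the same pack with the appropriate base URL, for example `mvn test -DbaseUrl=https://www.example.com`.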

I also feel that we unfortunately have no clear process for security testing. We have done security testing in the past, but I feel the approach needs to be documented and a clear partnership established with the third party. We need to manage this properly, so we don't end up in a similar position to the one we are currently in with our performance testing.

I would also like to work on a clear career development plan with the QA members in the teams, similar to what I've documented here, but made official, so it's clear which skills are strong and which skills QA members need to work on to progress to the next level. I feel this would also give visibility of the areas we are lacking in as a QA community, so we can look at addressing those weaknesses.

I also want mobile automation to be used and adopted by all the teams, so that the Android and iOS applications and the mobile website each have some form of automated test pack that can be run. We know the tooling we want to use; it's just a matter of setting up the framework so that tests can easily be created and added by the teams. (FYI, the toolset is going to be Espresso for Android and KIF for iOS.)
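To give a flavour of the Android side once the framework is in place, a minimal Espresso test might look something like this (the activity and view ids are invented for the example, not taken from our apps):

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class SearchSmokeTest {

    // MainActivity is a stand-in for whatever activity the app launches into.
    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void searchingReturnsResults() {
        // search_field, search_button and results_list are hypothetical view ids.
        onView(withId(R.id.search_field)).perform(typeText("jeans"), closeSoftKeyboard());
        onView(withId(R.id.search_button)).perform(click());
        onView(withId(R.id.results_list)).check(matches(isDisplayed()));
    }
}
```

The KIF tests on iOS would follow the same pattern, driving the app through its accessibility labels and asserting on what the user actually sees.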

Finally, I wish to develop a strong culture of Research & Development: a place where QAs can work on individual projects that will ultimately benefit the team. I'm not entirely sure how to get this started, but a simple way is to have regular meetings where people can chat about things they think would be good for their team(s). Then there's also going to conferences and the like, speaking to other people and finding out what they have worked on and what has gone well and not so well. Maybe, come the end of the 12 months, we could even host an external QA event for the public, getting people speaking and realising that ASOS isn't just about fashion, but about the technologies that help deliver it to multiple platforms.

These are the main points I wish to achieve. I'm sure others will be added over time, but I'm positive there is enough there to keep me and others busy implementing the above! This obviously needs buy-in from everybody involved, but I strongly believe that if we achieve the above we will have a very strong QA department, and one that is fun and challenging to work in.

I will definitely keep you all posted. Do you have a QA vision for the next year? What do you wish to achieve with your work?

