Unicom - 12th NextGen Testing Conference

I recently (though it seems a long time ago now, as I've only just got round to writing this!) had the pleasure of attending the Unicom - 12th NextGen Testing Conference in London. I was lucky in that I won free tickets, and a free conference is definitely appealing! :)

I thought it would be good to write a blog post detailing the conference: what I learnt, what I got out of it, and whether I enjoyed it...

The conference itself was chaired by Donald Firesmith, who was over from Pittsburgh and was also giving a talk on the Many Types of Testing & Testing Philosophies.

Many Types of Testing & Testing Philosophies

This was a very interesting talk, and possibly one of my favourites. It opened my eyes a bit: I've been in testing for 8 years, but there were some types of testing in there that I had not heard of, and even some that I knew of that weren't included. I had a chat with Donald about those afterwards and we agreed they should be in there: Localisation testing (ensuring the correct content is delivered based on localisation, and even personalisation) and Persona based testing (taking on the role of a user and acting in that way).

He also made a statement that was a bit scary, but probably very true:

"Testers are aware of only a minority of the types of testing, and test managers/leads even less"

This is scary because, well, it's true. I'm more hands-off than I used to be now that I've become a lead, and it's important for me to stay up to date with the different testing methodologies and philosophies that are used in testing today. Reading blogs and articles online is one way of staying up to date, but it's when you put them into use that you really learn. It's definitely something that I want to work on, and I'll hopefully get the chance over the next few months by working more closely with the teams and senior testers.

The next talk was also very interesting, and was given by Colin Deady, a Test Manager at Capita IT.

Behaviour Driven Development - You can deliver Zero known Defect releases

This talk was, as the title suggests, about how BDD can help deliver zero defect releases. It's around the team mindset and signing up to deliver quality software: whilst BDD can help deliver zero "known" defect releases (notice the "known"), it's a whole lot of other things that together deliver quality software.

Things like the team signing up to fix defects within certain time frames, and signing up to review and write BDD scenarios. Interestingly, imposing a zero defects mentality can in fact kill motivation: you can't force a team to deliver zero known defects. As mentioned above, it's about empowering the team and giving them control over how they deliver zero known defects.
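To give a flavour of what signing up to write BDD scenarios actually involves, here's a minimal sketch of a Gherkin scenario with SpecFlow step bindings in C#. To be clear, this isn't from Colin's talk; the feature, the FakeAuth helper and all the names are invented purely for illustration:

    // Login.feature - a hypothetical Gherkin scenario:
    //
    //   Feature: Login
    //     Scenario: Registered user can log in
    //       Given a registered user "alice@example.com"
    //       When they log in with a valid password
    //       Then they should see their account dashboard

    // LoginSteps.cs - the matching SpecFlow step bindings:
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    [Binding]
    public class LoginSteps
    {
        private string _email;
        private bool _loggedIn;

        [Given(@"a registered user ""(.*)""")]
        public void GivenARegisteredUser(string email) => _email = email;

        [When(@"they log in with a valid password")]
        public void WhenTheyLogIn() => _loggedIn = FakeAuth.LogIn(_email, "correct-password");

        [Then(@"they should see their account dashboard")]
        public void ThenTheySeeTheDashboard() => Assert.IsTrue(_loggedIn);
    }

    // Stand-in for the real system under test.
    public static class FakeAuth
    {
        public static bool LogIn(string email, string password) =>
            email.Contains("@") && password == "correct-password";
    }

The value isn't really in the code itself; it's in the whole team reviewing and agreeing the scenarios before the production code gets written.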

There was also a round table session where we could choose to sit at one of the following tables:

- Test Automation
- BDD
- Agile
- Testing in DevOps

And, ashamedly, some others that I can't remember. I chose to sit at the Test Automation table, and it was definitely very interesting hearing where people are on their automation journey. I call it a journey, but it seems like a journey that will never end. I was pleased to be in a position to offer guidance to others and help them avoid the mistakes that I've read about and seen happen time and time again. In particular, there were two people from Kent University who were talking about making changes to legacy systems, systems that have zero unit test coverage. I told them about the Boy Scout Principle: you leave the campsite better than you found it. Applied to their problem, that means when you refactor some legacy code or touch it, make it better: add some unit tests and so on. That is one sure-fire way of improving the quality of your code, slowly but steadily.
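To make that concrete, here's a rough sketch (all the names and numbers are invented) of the first step I'd suggest: before refactoring a legacy method, pin its current behaviour down with a couple of characterisation tests, so the refactor can't silently change it:

    using NUnit.Framework;

    // A hypothetical legacy method with zero test coverage that we need to touch.
    public static class LegacyPricing
    {
        public static decimal ApplyDiscount(decimal price, int quantity)
        {
            if (quantity >= 10)
                return price * 0.9m; // bulk discount
            return price;
        }
    }

    [TestFixture]
    public class LegacyPricingTests
    {
        // Characterisation tests: they document what the code does *today*,
        // not what we think it should do.
        [TestCase(100, 9, 100)]  // below the bulk threshold: no discount
        [TestCase(100, 10, 90)]  // at the threshold: 10% off
        public void ApplyDiscount_MatchesCurrentBehaviour(decimal price, int quantity, decimal expected)
        {
            Assert.AreEqual(expected, LegacyPricing.ApplyDiscount(price, quantity));
        }
    }

Do that every time you touch the code and the coverage builds up exactly where it's needed most.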

The other interesting talk was presented by the amazing Dot Graham. I got talking to her at lunch and found out that I had read one of her books, so that was pretty cool!

It Seemed a Good Idea at the Time - Intelligent Mistakes in Test Automation

I enjoyed this talk. A lot of it I was already aware of, but she did point me in the direction of TestAutomationPatterns.org, a website that I didn't know existed and have subsequently spent a lot of time reading. It's definitely worth a look: it highlights a lot of the common mistakes that are made when people attempt test automation, and I think a number of people learned a lot in this session.

The most interesting part of this was how people measure ROI on test automation. People responded with the usual answers, like "more time to do other forms of testing", when in fact ROI is defined by Wikipedia as:

Return on investment, or ROI, is the most common profitability ratio. There are several ways to determine ROI, but the most frequently used method is to divide net profit by total assets. So if your net profit is $100,000 and your total assets are $300,000, your ROI would be .33 or 33 percent.

That stumped almost everyone, I think. It's extremely difficult to quantify the Return on Investment of test automation, yet "What's the ROI?" is a common question that anyone looking at investing in automation will be asked, and the answer itself is very difficult to measure!
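If you do want to put a number on it, the nearest translation I can think of is to treat the manual test effort you no longer spend as the "profit" and the cost of building and maintaining the automation as the "investment". All of the figures below are invented purely for illustration:

    using System;

    // Back-of-the-envelope ROI for test automation. Every number here is
    // an assumption; substitute your own team's figures.
    public static class AutomationRoi
    {
        public static void Main()
        {
            decimal buildCost = 20000m;         // one-off cost to build the suite
            decimal maintenancePerYear = 2000m; // ongoing upkeep
            decimal manualCostPerRun = 500m;    // manual regression run it replaces
            int runsPerYear = 50;

            decimal saved = manualCostPerRun * runsPerYear;    // 25,000
            decimal invested = buildCost + maintenancePerYear; // 22,000
            decimal roi = (saved - invested) / invested;       // net gain / investment

            Console.WriteLine($"First-year ROI: {roi:P1}"); // roughly 13.6%
        }
    }

Even then the numbers are soft, because faster feedback and fewer escaped defects are real benefits that don't fit neatly into the formula, which is exactly why the question stumped the room.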

The final talk that I'm going to mention was presented by Raji Bhamidipati and was one that I was particularly looking forward to...

Pair testing in an Agile Team

We're all aware of pair programming and how it helps, or at least "can help", deliver good quality code. One thing that isn't mentioned as often is paired testing. I was particularly interested in this talk because we had discussed paired testing in a community meeting and even done the following exercise from Tasty Cupcakes: Pairing for Non Developers, so I was interested to see what other people were doing when it came to pairing.

Raji didn't disappoint. She mentioned the benefits of paired testing: complementary skillsets that work well together and keep both people engaged. Obviously some people are not going to pair well together; if people don't get on then understandably they will not benefit from this approach. It's also important, and Raji mentioned this, to keep both people engaged. Popcorn pairing is good for this: one person is the driver while the other navigates and makes notes. If both people are not engaged, then it can be a waste of time.

That said, and with my experience of the pairing exercise mentioned earlier, it's definitely something that I recommend. And not just paired testing, but pairing with developers to help write code and spot bugs as the code is being written. Most things in life are better when you do them with someone else, and testing/engineering/developing is definitely one of those things!

Conclusion


All in all it was a good conference. Whilst I didn't learn as much as I thought I might, it definitely reinforced what I already knew, made me think about certain aspects of test automation especially, and opened my eyes a bit towards pairing and the different types of testing that there are. Perhaps most importantly, it has made me want to present at a conference in the future. I spoke to Rob Lambert and mentioned that one of the struggles I have is finding something to talk about. He gave me some good advice: talk about my past experiences. And sure enough, I've found something I want to talk about, so watch out!
