FOUR common QA Automated Testing mistakes we’ll help you avoid

Once you’ve decided to go down the road of automated testing, be aware that it’s still not the time to high-five everyone you pass in the office corridor. Not until you’ve read this article, anyway…

That feeling of accomplishment and a job well done may still be fresh in your mind, but the reality is that there is still some serious planning to do if you want to implement QA Automated Testing successfully.

We at Galil Software have seen and managed various implementations of Automated Testing, and we’re here to assist you with whichever implementation you decide to take on, whether that’s with or without us (but surely you wouldn’t want to miss out on all that industry knowledge – right?). So we’ve come up with a list of the FOUR most common mistakes to avoid when implementing QA Automated Testing.

1. Planning and Organizing

OK, so once you’re ready to rock and roll, it’s hugely important, in our opinion, to plan and organize your testing correctly.

You may well go with huge numbers of scripts for full-on automation (but see Point 3 below), but there are TWO things we really should point out that will help you and your team:

– Identify risky areas: If you don’t allocate the right testing resources to the areas you know really need testing, you’re going to waste time AND money. How do you know which areas are risky? Ask. Ask everybody you can, from developers to your technical writers to your customer reps. You can also look at previous bug reports for an indication of risk.

– Be open to change: You don’t have to stick rigidly to your testing plan. We all know the software industry can be extremely dynamic, and the testing team is usually at the impact zone of any last-minute changes. Just be aware that things may change – sometimes for the better, sometimes not. But hey, your customers/board have spoken, so accept those changes with a smile…

2. Failure to Embrace Technology Changes

As we touched on in the previous point, failure to adapt to change is a recipe for trouble. Failure to adapt to changes in technology can be even more critical, as over-reliance on any one particular technology can be dangerous for an organization, especially in fast-moving industries like mobile.

We’d highly recommend two things: 1) make the most of the stable technology components you have access to – file systems, command lines, and APIs – so that you minimize the impact of short-term technological shifts, and 2) document your scripts, which will make any changes that much easier to adapt to (just be aware that over-reliance on scripting isn’t such a good thing; see the following point).
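To illustrate both suggestions, here is a minimal, hypothetical sketch: the check goes through stable interfaces (a command line and the file system) rather than UI internals, and the script documents its own intent. The `export_and_verify` name is our own, and a `sys.executable -c` command stands in for a product’s real CLI:

```python
import json
import pathlib
import subprocess
import sys
import tempfile

def export_and_verify(out_path: pathlib.Path) -> bool:
    """Export a report via a CLI and verify the result on the file system.

    Command-line interfaces and file formats tend to change far less often
    than screen layouts, so checks built on them survive UI redesigns.
    Here `sys.executable -c` is a stand-in for the product's actual CLI.
    """
    subprocess.run(
        [sys.executable, "-c",
         "import json, sys; json.dump({'report_id': 1}, open(sys.argv[1], 'w'))",
         str(out_path)],
        check=True,  # fail loudly if the "export" command itself fails
    )
    # Verify through the file system: parse the exported file and check it.
    data = json.loads(out_path.read_text())
    return "report_id" in data

with tempfile.TemporaryDirectory() as tmp:
    ok = export_and_verify(pathlib.Path(tmp) / "report.json")
```

The same check would keep working across UI redesigns as long as the command line and file format stay stable.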

3. Over-Reliance on Scripting

It does make some kind of sense, at least initially, to think that if you script all those thousands of test cases, you’ll save a ton of time.

However, in our experience, not ALL test cases need automating. And without setting some sort of criteria, the cost-effectiveness of implementing automation may well be lost.

Therefore we’d suggest you set criteria to determine which test cases need automating and which should be automated first.

Decide on criteria of:

– Labor: how time-intensive is the test case, and will it even be needed again?

– Reliability: if results need to be highly accurate, automation can certainly help eradicate the human-error factor.

– Repeatability: just how often will the test case need to be repeated? Obviously, the higher the number, the stronger the case for automation.
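As a rough illustration, the three criteria above could be combined into a simple ranking score. All names, weights, and sample cases below are our own illustrative assumptions, not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    labor_hours: float      # Labor: manual effort per run
    needs_precision: bool   # Reliability: human error must be ruled out
    runs_per_release: int   # Repeatability: expected repetitions

def automation_score(tc: TestCase) -> float:
    """Higher score = better candidate for automation."""
    # Manual cost saved: effort per run times how often it runs.
    score = tc.labor_hours * tc.runs_per_release
    if tc.needs_precision:
        score *= 1.5  # illustrative weight for reliability-sensitive cases
    return score

cases = [
    TestCase("login smoke test", labor_hours=0.2,
             needs_precision=False, runs_per_release=50),
    TestCase("yearly data migration", labor_hours=4.0,
             needs_precision=True, runs_per_release=1),
]
ranked = sorted(cases, key=automation_score, reverse=True)
```

Here the frequently repeated smoke test outranks the labor-intensive but one-off migration check, which matches the Repeatability criterion above.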

4. Failure to Analyze Results Properly

For a truly successful implementation of QA Automation, it’s not enough just to worry about executing the tests – we highly recommend investing time and energy in analyzing the results and reports.

Implementing a good result-reporting component will help immensely, particularly in reporting bugs. For example, test results of simply Pass or Fail are not enough for a true diagnosis of any issues that arise during testing.

Implementing a scale of severity is a major asset in identifying critical bugs and interpreting test results. This can prove especially beneficial for test cases where, for example, correct data inputs are not critical: even if the testing tool detects input errors, the test should continue until it comes across a critical bug.
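As a sketch of that idea, here is a hypothetical runner that records every result with a severity and only aborts the run on a critical failure. The `Severity` and `run_suite` names are our own, not any particular framework’s API:

```python
from enum import IntEnum

class Severity(IntEnum):
    MINOR = 1
    MAJOR = 2
    CRITICAL = 3

def run_suite(steps):
    """Run (name, check, severity) steps; stop only at the first CRITICAL failure."""
    results = []
    for name, check, severity in steps:
        passed = check()
        results.append((name, passed, severity))
        if not passed and severity is Severity.CRITICAL:
            break  # a critical bug invalidates the rest of the run
    return results

steps = [
    ("optional field format", lambda: False, Severity.MINOR),    # logged, run continues
    ("core calculation",      lambda: True,  Severity.CRITICAL),
    ("save record",           lambda: False, Severity.CRITICAL), # run stops here
    ("send receipt email",    lambda: True,  Severity.MAJOR),    # never reached
]
results = run_suite(steps)
```

The minor input error is recorded without halting the run, so the report shows both the cosmetic issue and the critical bug that actually stopped the test.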

Feel free to get in touch with us with any questions!
