The biggest lie told by test automation tool vendors: Record & forget

While working with some of my clients, I realized how easy it is to push a test automation tool into an organization. Managers are suckers for productivity tools, and I include myself. If you can prove to me that $x improves my team's productivity by y%, there you go, you've sold me. No questions asked! Right? Test automation tools fall into the same category: easy to showcase and sell, but hard to implement.

Why are they hard to implement? And what do you really need to know before you acquire a new tool or plan your test automation strategy?

Fact #1: Your newly recorded automated test cases are not 100% reliable and will fail occasionally, even when the application hasn't changed.

There can be many reasons behind such random failures (let’s say we’re using Selenium):

1. The page hasn't loaded in time and the script fires first.

2. A slower (or faster) backend response.

3. XPath locators changing because of AJAX calls.

And several more. The sketch below shows one common way to guard against the first two.
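Just to make the timing problems concrete, here is a minimal Python/Selenium sketch of the usual countermeasure, an explicit wait. The URL and the login-btn element ID are hypothetical placeholders; the point is that the script gives the page up to 10 seconds to produce a clickable element instead of assuming it is already there.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

# Wait up to 10 seconds for the element to become clickable instead of
# assuming it already exists; this absorbs slow page loads, slow backends,
# and AJAX re-renders.
wait = WebDriverWait(driver, 10)
login_button = wait.until(EC.element_to_be_clickable((By.ID, "login-btn")))
login_button.click()

driver.quit()

Explicit waits won't save you from locator changes (reason #3), but they remove the timing-related share of random failures.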

How do you fix this? I faced a similar problem many years ago, and what I did was a little bit hacky but driven by perfectionism (something I try really hard to avoid these days).

OK, I doubt anything below this sentence makes sense economically, but it's a good story:

First, I wanted to quantify how many failures occurred per 100 or 1,000 runs (you really don't need to do this, seriously, unless you just enjoy it).
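For what it's worth, the measurement itself can be a few lines of Python. This is a rough sketch, not how I did it back then: run_login_test is a hypothetical stand-in for any scripted check that raises an exception when it fails.

def measure_flakiness(test_fn, runs=100):
    # Run the same test repeatedly and return the fraction of runs that failed.
    failures = 0
    for _ in range(runs):
        try:
            test_fn()
        except Exception:
            failures += 1
    return failures / runs

# Example usage (hypothetical test function):
# rate = measure_flakiness(run_login_test, runs=100)
# print(f"Failure rate: {rate:.1%}")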

Second, after determining the failure rate, I had to speculate about what was breaking my scripts. I could have made changes one by one and waited 100 or more runs after each, but that would have taken a lot of time. So instead of going in order, I created 2-3 independent fixes and left them all running. A few hours went by with the fixes under test, and after 3 or 4 hours one of them worked: the error rate was down to 0 per 100.

So ends a very unproductive and impractical fix for a common test automation problem.

A more practical approach: if you have a lot of such random failures, run each test case twice before you call it a confirmed fail.
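Here is a minimal sketch of that rule, again assuming the test is a Python callable that raises on failure: a failure is only confirmed when every attempt fails.

def run_with_retry(test_fn, attempts=2):
    # Returns True if the test passed in any attempt; False means a confirmed fail.
    last_error = None
    for _ in range(attempts):
        try:
            test_fn()
            return True  # passed at least once, so not a confirmed fail
        except Exception as exc:
            last_error = exc  # remember the error and retry
    print(f"Confirmed failure: {last_error}")
    return False

If your suite runs under pytest, the pytest-rerunfailures plugin does much the same thing with its --reruns option, so you don't have to hand-roll the wrapper.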

Fact #2: You will spend at least 3x as much time maintaining these test cases as you spent writing them. So plan accordingly.

How do you tackle test automation issues? Any interesting automation problems you've solved? Feel free to comment.

About the Author

Abhimanyu is the founder of Test Collab, a test case management tool. Test Collab makes your testing more productive and efficient by enabling teams to collaborate in real time.

  • nielsbrinch

    In Nine Circle we write a basic, superficial test of everything and then only expand into more detailed test cases to handle specific issues that we know are especially error prone. That way we keep the number of test cases to maintain as low as possible, which allows us to keep them fully updated at all times.

    Complete test coverage is practically unobtainable when considering the maintenance cost and should not be a declared goal.