DevOps through the eyes of a QA

The popular term DevOps describes a union of “departments”: a set of processes and methods that promotes a new way of thinking about communication and collaboration between them.

The main feature of DevOps is its strong advocacy of automation and monitoring at all stages of software development: integration, testing, release, and infrastructure management. DevOps aims to provide shorter development cycles, increased deployment frequency, and more secure releases, in close alignment with business objectives.

A DevOps engineer will have one hand in infrastructure and the other in development and, in some cases, a hand in quality assurance (QA) as well. It is, in essence, a combination of infra, dev, and quality.

In short, a DevOps engineer must act as an agent of change, integrating development and operations. That requires investing in knowledge and constant upskilling.


Let’s discuss DevOps in more detail:

It’s about shared responsibilities and building things together. To become a DevOps-enabled team, there must be a cultural shift that lets the whole team work as a single entity. In fact, the whole team should be involved. In one of my previous organizations, we QAs did the production deployments: it was QA’s responsibility to release the product to production. After each release, we monitored logs in Kibana, Sentry, Datadog, etc. to check whether any new errors had appeared and whether previously known errors were resolved.

DevOps is a way of working together: communication, collaboration, and cooperation across multidisciplinary teams.


How do you define DevOps in an organization?


Moving to DevOps is a change process, and you need to handle the change management. I would like to share the questions my last CTO raised when we decided to shift to DevOps: “What are we going to do?” “Why should we do it that way?” “What benefits will we get out of it?” Without a proper understanding of these, DevOps will remain a blind spot for the team members.

It is easier to introduce DevOps when that introduction is divided into smaller parts, alternating between cultural changes and technical changes. Instead of one big change, there will be a small cultural change, followed by a small technical change, followed by another cultural change, and so on. That way, teams will never feel that everything has suddenly changed; instead, they will feel that changes have occurred at a more natural pace.


Integrating Teams


The key is collaboration between these teams, which rests on four main axes: Culture, Automation, Evaluation, and Sharing.

Culture: collaboration / end of divisions / healthy relationship / behavior change;

Automation: deployment / control / monitoring / configuration management;

Evaluation: metrics / measurements / performance / logs;

Sharing: feedback / good communication.

This improvement in the relationship and collaboration increases efficiency and reduces the production risks associated with frequent changes.



Role of QA in a DevOps-enabled Team:


Continuous testing is the key practice in a DevOps-enabled team. QA should know which tests to execute after each feature change. After code is committed, the automated tests should run in the CI pipeline, and there should be proper metrics to track the results. With proper monitoring in place, errors are detected at an early stage, which limits the damage they can cause.
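As a rough sketch of “knowing which tests to execute after each feature change,” the snippet below maps changed modules to the test suites a CI pipeline should run. The module names, suite paths, and mapping are hypothetical, not taken from any real pipeline.

```python
# Minimal sketch: pick which automated suites to run in CI based on the
# modules touched by a commit. The mapping below is entirely hypothetical.
TEST_MAP = {
    "billing": ["tests/billing", "tests/smoke"],
    "auth": ["tests/auth", "tests/smoke"],
}

def suites_for_change(changed_modules):
    """Return the de-duplicated, ordered list of suites to execute."""
    suites = []
    for module in changed_modules:
        # Unknown modules still get a smoke run as a safety net.
        for suite in TEST_MAP.get(module, ["tests/smoke"]):
            if suite not in suites:
                suites.append(suite)
    return suites

print(suites_for_change(["billing", "auth"]))
# -> ['tests/billing', 'tests/smoke', 'tests/auth']
```

In a real pipeline the mapping would typically be derived from code ownership or coverage data rather than hand-maintained, but the selection logic stays this simple.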

QA should work closely with the Dev and Ops teams to set up automated build and deploy processes. These are the people in charge of creating the test environment for automated runs, which plays a vital role in integrating automated tests into the development cycle under continuous integration.

In the CI phase, automation adds value through the automation framework and the automated scripts written for each feature. The parts automation misses are covered by manual smoke and ad-hoc tests on the particular build or commit. There shouldn’t be a divide between manual and automated testing; both play key roles.

The principle of continuous testing calls for closer collaboration between QA, Devs, and Ops throughout the DevOps lifecycle. Along with that, it’s our responsibility as QA to verify that the correct coverage is achieved, based on the risk profile or other model-based strategies.

GitLab Test Management now available!

I am happy to announce that we have launched GitLab integration with Test Collab. Now all your failed test cases can be reported to GitLab as bugs in a single click, so you can focus your efforts on quality assurance rather than switching between different tools.

Once you have set up the GitLab integration, you can also link requirements from GitLab with test cases in Test Collab for better traceability and coverage analysis.

You can do the following with the GitLab integration:


– Link GitLab requirements or user stories to your test cases

– Create test executions on the basis of GitLab requirements or user stories

– Automatically push failed test cases to GitLab as bugs

– Automatically close GitLab bugs once the failed test case passes

– Generate test coverage reports for GitLab user stories or requirements


I hope you will find it useful. Try it out yourself with our free trial.


Test Collab outage 8 January 2020, Issue rectified

We have just recovered our hosted application from an outage. As some of you might have noticed, there was a lag between creating a test case and that test case showing up on the test case management page. This was due to unexpected replication lag between the database slave and master servers. The issue was identified and rectified within 2 hours of the first incident report from a client. We apologize for any inconvenience.

Should you get your testers certified?

Tester certifications have long been a subject of debate. There are some points to consider before settling it:

  • Why do you need this?
  • Will this prove to be a paradigm shift for your organization?
  • Are the testers on your team ready for this?
  • Is it going to be a costly affair, and is it really worth investing the time and money?

Are these concerns eating at your mind? Then read on as we analyze the need by answering a few simple questions…

Continue reading


Using source code visualizations as a coverage map for testing

I’m not a big fan of tracing or linking dead-text requirements documents back to test cases unless it is absolutely required. This got me thinking: what else can be used as a reference map for testing?

We could all use some sort of map while testing exploratorily. Doing some searches, I randomly stumbled across this post by Fred Beringer, and it struck me that source code visualization can be really useful in exploratory testing. The main problem with exploratory testing is that we could miss critical areas of the application. But what if we had such a map?

Continue reading

When to stop testing or stop documenting?

As product managers, every now and then we have to decide whether to continue testing a feature or move on. This doesn’t just apply to testing efforts, but also to test case coverage and documentation, i.e. whether to continue writing more test cases for a particular feature or move on to the next.

How do you decide in such cases?

Maybe we can borrow a concept or two from behavioural psychology: maximizing and satisficing.

Maximizing is when you’re trying to do as much as possible within the given means.
Strictly in psychology terms, maximization is a style of decision-making characterized by seeking the best option through an exhaustive search of the alternatives.

Satisficing is when you’re trying to do just well enough to be satisfied (an MVP for ideas and decisions).
In psychology terms, satisficing is a style of decision-making in which you explore options only until you find one that is good enough or acceptable.

So which one to use when?

Look at how often your customers spend time on this feature. If it’s, say, 80%–90% of their time, that certainly calls for maximizer behavior, and you want to put everything you have in your arsenal into it, in terms of both testing and documenting.

If the feature is only occasionally used, you can get away with satisficing. You just need to do enough to know it works as desired, nothing more!

Obviously this principle can be applied to all kinds of optimization problems: at work, and in life in general.


Free test management plans launched for Test Collab

Today we have released free plans for Test Collab. These free-forever plans will help development and quality assurance teams improve their test management without ever worrying about payments or subscriptions!

It has always been our mission to make quality assurance affordable, but we realized that there are not many free test management tools available, especially cloud-based ones. There are a few open-source test management tools that you can download and host on your own server, but that quickly becomes cumbersome and complicated. Our free plans run in the cloud, so your team can focus on testing rather than managing servers and software installations.

We have also made testing more affordable for bigger teams as we have launched $10/user pricing for large teams.

If you have an active free trial or an expired account, you can find the option to downgrade/upgrade your account on the ‘Update Plan’ page. There are, however, some limits on free plans, which are listed on our pricing page. Sign up now for the free plan.



GDPR Compliance update

We are in the process of rolling out an organization-wide update (internal and external) to comply with the upcoming GDPR deadline. Test Collab will be fully compliant with GDPR before the 25 May 2018 deadline.

If you have any questions or suggestions regarding GDPR compliance, reach out to us via the ‘Contact us’ link below.

New reports: one more step towards test intelligence

We’ve always been fascinated by extracting useful insights from data. Over the last 9 months we’ve been reviewing a lot of our clients at Test Collab: their problems, possible solutions, and whether we can positively impact their productivity given the data they have.

Each and every test case, the result of its every execution, the time spent, and who assigned or executed each test: all of this gave us a lot of data to work with.

There are potentially a lot of useful insights that can be extracted from so much data. With this release we’ve only scratched the surface, and there is a lot more to come. For now, we believe you’re going to love these new reports.

You’ll find quite a few new charts on the project overview page, the milestone view page, and the new test case status report page (under the Reports tab). It would be unfair not to mention some of them here, given how useful they can be:

1. Test Case Last Run Statuses

Want to know the overall health of your project? Just look at this chart and you'll have a high-level understanding of how good your project is, testing-wise. This report shows you the last state of all your test cases.
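A minimal sketch of how such a “last run status” summary could be computed, assuming a simple (case_id, timestamp, status) execution log rather than Test Collab’s actual data model:

```python
# Sketch: derive each test case's most recent status from an execution
# log, then count cases per status. Log format is an assumption.
from collections import Counter

def last_run_statuses(executions):
    """executions: iterable of (case_id, timestamp, status) tuples."""
    latest = {}
    for case_id, ts, status in executions:
        # Keep only the most recent execution per case.
        if case_id not in latest or ts > latest[case_id][0]:
            latest[case_id] = (ts, status)
    return Counter(status for _, status in latest.values())

log = [
    ("TC-1", 1, "failed"), ("TC-1", 2, "passed"),
    ("TC-2", 1, "passed"), ("TC-3", 0, "unexecuted"),
]
print(last_run_statuses(log))  # Counter({'passed': 2, 'unexecuted': 1})
```

Note that TC-1’s earlier failure is ignored: only the latest execution counts towards project health in this view.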

Pro-tip: As you add new features, your unexecuted test cases will pile up. When working on a new project, keep an eye on 'unexecuted' test cases and schedule execution of such cases every few days. Alternatively, drill down suite-wise to see the statuses in further detail.

2. Time spent on test cases

This is one of my personal favourites. Over time, some test cases come to take a lot of your developers' / testers' time. This chart will help you locate such outliers and analyse why. You can plot multiple time metrics for the time spent on each case: average, overall, maximum, minimum, etc.
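As an illustration, here is a sketch of these per-case time metrics, assuming simple (case_id, seconds) run records rather than Test Collab’s actual schema:

```python
# Sketch: aggregate time-spent metrics per test case from
# (case_id, seconds) execution records. Data format is an assumption.
from statistics import mean

def time_metrics(records):
    """Return avg/max/min/total seconds spent per test case."""
    per_case = {}
    for case_id, seconds in records:
        per_case.setdefault(case_id, []).append(seconds)
    return {
        case: {"avg": mean(times), "max": max(times),
               "min": min(times), "total": sum(times)}
        for case, times in per_case.items()
    }

runs = [("TC-1", 60), ("TC-1", 120), ("TC-2", 30)]
print(time_metrics(runs)["TC-1"])
```

Outliers are then simply the cases whose "avg" or "max" sits far above the rest.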

Pro-tip #1: Use this chart to locate outlier test cases, then run a 5-whys analysis on what took them so long.

Pro-tip #2: Alternatively, you can use this data to decide which test cases should be automated before the rest.

3. Error prone Test Cases

This is a distribution chart of the failure rates of your test cases. It is highly useful when you want to pinpoint the troublesome cases in your project. If you think testing all cases all the time is a good strategy, this chart will be an eye-opener.
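The underlying metric is easy to sketch: a failure rate per case, with a threshold to flag the error-prone ones. The 0.5 threshold, data format, and case names below are made up for illustration:

```python
# Sketch: failure rate per test case, flagging cases above a threshold.
def failure_rates(results):
    """results: dict of case_id -> list of 'passed'/'failed' outcomes."""
    return {case: sum(r == "failed" for r in runs) / len(runs)
            for case, runs in results.items()}

def error_prone(results, threshold=0.5):
    """Case ids whose failure rate is at or above the threshold."""
    return sorted(case for case, rate in failure_rates(results).items()
                  if rate >= threshold)

history = {
    "TC-1": ["failed", "failed", "passed"],
    "TC-2": ["passed", "passed"],
    "TC-3": ["failed", "passed"],
}
print(error_prone(history))  # ['TC-1', 'TC-3']
```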

Sample use case: When developing new features, schedule testing of cases with high overall failure rates at a relatively early stage. This gives your testers and developers more time to find the cause of such failures, resulting in fewer surprises on release day.

Pro-tip: We've observed that cases with high failure rates are often a sign of either outdated test case documentation or some big underlying problem. Pay special attention to cases above a threshold failure rate.

4. Cases passed by suites

This heatmap chart shows all the test suites in your project, color-coded by the percentage of test cases passed: the greener the suite, the better its passing percentage. The area of each block represents how many test cases the suite contains relative to the other suites: larger area = more cases in the suite.
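The two numbers behind each heatmap block, pass percentage and relative size, can be sketched from a simple (suite, last_status) list. The suite names and data shape here are invented for illustration:

```python
# Sketch: per-suite case count and pass percentage, the two quantities
# the heatmap encodes as color and area. Input format is an assumption.
def suite_summary(cases):
    """cases: iterable of (suite, last_status) pairs."""
    stats = {}
    for suite, status in cases:
        total, passed = stats.get(suite, (0, 0))
        stats[suite] = (total + 1, passed + (status == "passed"))
    return {suite: {"cases": total, "pass_pct": round(100 * passed / total)}
            for suite, (total, passed) in stats.items()}

data = [("Login", "passed"), ("Login", "failed"),
        ("Checkout", "passed"), ("Checkout", "passed")]
print(suite_summary(data))
# {'Login': {'cases': 2, 'pass_pct': 50}, 'Checkout': {'cases': 2, 'pass_pct': 100}}
```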

Pro-tip: Sometimes a single module or set of features negatively impacts your project while the other modules function as expected. This chart will help you locate such problem areas and act on them. Start by paying attention to the light green regions and find out what's lowering the score: is it development, testing, or documentation?

5. Milestone burndown

This is useful when you're doing sprints or deadline-driven releases. Quickly see the tasks left as a burndown over the timeline, with your team's ideal vs. actual effort. You'll also get instant feedback as a team on how your efforts are contributing towards the end goal, and how fast.

Several other new reports aren't mentioned here but are released with this version: test cases assigned vs. unassigned, defects reported over time, test run results over time, and some new metrics.

As mentioned above, this is just the beginning of a larger milestone. If you have ideas for new reports, charts, widgets, metrics, etc., please get in touch and let us know.
