Should you get your testers certified?

Tester certification has long been a subject of debate. There are a few points to consider before settling it:

  • Why do you need it?
  • Will this prove to be a paradigm shift for your organization?
  • Are the testers in your team ready for this? 
  • Is it going to be a costly affair and is it really worth investing time and money?

Are these concerns eating at you? Then read on as we analyze the need by answering a few simple questions…


Using source code visualizations as a coverage map for testing

I’m not a big fan of tracing or linking dead-text requirements documents back to test cases unless it is absolutely required. That got me thinking: what else could be used as a reference map for testing?

We could all use some sort of map while testing exploratorily. While searching around, I stumbled across this post by Fred Beringer, and it struck me that source code visualization can be genuinely useful in exploratory testing. The main risk in exploratory testing is that we might miss critical areas of the application. But what if we had such a map?


When to stop testing or stop documenting?

As product managers, every now and then we have to decide whether to keep testing a feature or move on. This doesn’t just apply to testing effort; it also applies to test case coverage and documentation, i.e. whether to keep writing test cases for a particular feature or move on to the next.

How do you decide in such cases?

Maybe we can borrow a concept or two from behavioural psychology: maximizing and satisficing.

Maximizing is when you’re trying to do as much as possible within your given means.
Strictly in psychological terms, maximization is a style of decision-making characterized by seeking the best option through an exhaustive search of the alternatives.

Satisficing is when you’re trying to do just well enough to be satisfied (an MVP for ideas and decisions).
In psychological terms, satisficing is a style of decision-making in which you explore options only until you find one that is good enough or acceptable.

So which one to use when?

Look at how much time your customers spend on the feature. If it accounts for, say, 80–90% of their usage, it certainly calls for maximizer behavior, and you want to throw everything in your arsenal at it in terms of testing and documentation.

If the feature is only occasionally used, you can get away with satisficing. You just need to do enough so that you know it works as desired – nothing more!

Obviously this principle can be applied to all kinds of optimization problems, at work and in life in general.
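
To make the heuristic concrete, here is a minimal sketch in JavaScript; the 0.8 cut-off and all names are illustrative, not a prescription:

// Minimal sketch of the maximize-vs-satisfice heuristic above.
// usageShare: fraction of customer time spent on this feature (0 to 1).
// The 0.8 threshold mirrors the 80-90% rule of thumb and is illustrative.
function testingStrategy(usageShare) {
  return usageShare >= 0.8 ? 'maximize' : 'satisfice';
}

console.log(testingStrategy(0.85)); // 'maximize'  -> exhaustive testing and docs
console.log(testingStrategy(0.10)); // 'satisfice' -> just enough coverage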


Free test management plans launched for Test Collab

Today we have released free plans for Test Collab. These free-forever plans will help development and quality assurance teams improve their test management without ever worrying about payments or subscriptions!

It has always been our mission to make quality assurance affordable, but we realized that there are not many free test management tools available, especially cloud-based ones. There are a few open source test management tools you can download and host on your own server, but that quickly becomes cumbersome and complicated. Our free plans run in the cloud, so your team can focus on testing rather than managing servers and software installations.

We have also made testing more affordable for bigger teams by launching $10/user pricing for large teams.

If you have an active free trial or an expired account, you can find the option to downgrade or upgrade your account on the ‘Update Plan’ page. The free plans do have some limits, which are listed on our pricing page. Sign up now for a free plan.


GDPR Compliance update

We are in the process of rolling out an organization-wide update (internal and external) to comply with the upcoming GDPR deadline. Test Collab will be fully compliant with GDPR before the 25 May 2018 deadline.

If you have any questions or suggestions regarding GDPR compliance, reach out to us via the ‘Contact us’ link below.

New reports: one more step towards test intelligence

We’ve always been fascinated by extracting useful insights from data. Over the last nine months we’ve been reviewing many of our clients at Test Collab: their problems, possible solutions, and whether we can positively impact their productivity given the data they have.


Every test case, the result of each of its executions, the time spent, and who assigned or executed it: all of this gave us a lot of data to work with.


There are potentially a lot of useful insights to be extracted from so much data. With this release we’ve only scratched the surface, and there is a lot more to come. For now, we believe you’re going to love these new reports.


You’ll find quite a few new charts on the project overview page, the milestone view page and the new test case status report page (under the Reports tab). It would be unfair not to mention some of them here, given how useful they can be:

1. Test Case Last Run Statuses

Want to know the overall health of your project? Just look at this chart and you'll have a high-level understanding of how your project is doing, testing-wise. This report shows you the last known state of each of your test cases.

Pro-tip: As you add new features, the number of unexecuted test cases will go up. When working on a new project, keep an eye on 'unexecuted' test cases and schedule executions for them every few days. Alternatively, drill down suite by suite to see statuses in further detail.

2. Time spent on test cases

This is one of my personal favourites. Over time, some test cases end up taking a lot of your developers' and testers' time. This chart will help you locate such outliers and analyse why. You can plot multiple time metrics for each case: average, overall, maximum, minimum, etc.

Pro-tip #1: Use this chart to locate outlier test cases, then run a 5-whys analysis into what takes them so long.

Pro-tip #2: Alternatively, you can use this data to decide which test cases should be automated first.

3. Error prone Test Cases

This is a distribution chart of the failure rates of your test cases. It is highly useful when you want to pinpoint the troublesome cases in your project. If you think testing all cases all the time is a good strategy, this chart will be an eye-opener.

Sample use case: When developing new features, schedule testing of cases with high overall failure rates at a relatively early stage. This gives your testers and developers more time to find the cause of failures, resulting in fewer surprises on release day.

Pro-tip: We've observed that cases with high failure rates are often a sign of either outdated test case documentation or a big underlying problem. Pay special attention to cases above a threshold failure rate.
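
To make the metric concrete, here is a minimal sketch of how a per-case failure rate could be computed from raw execution results. The data shape, names and the 0.3 threshold are illustrative assumptions, not Test Collab's actual implementation:

// Minimal sketch: per-case failure rate from raw execution results.
// The data shape and the 0.3 threshold are illustrative assumptions.
const executions = [
  { caseId: 'TC-1', status: 'failed' },
  { caseId: 'TC-1', status: 'passed' },
  { caseId: 'TC-2', status: 'passed' },
  { caseId: 'TC-2', status: 'passed' },
];

function failureRates(executions) {
  const stats = {};
  for (const { caseId, status } of executions) {
    stats[caseId] = stats[caseId] || { runs: 0, failures: 0 };
    stats[caseId].runs += 1;
    if (status === 'failed') stats[caseId].failures += 1;
  }
  return Object.keys(stats).map(caseId => ({
    caseId,
    failureRate: stats[caseId].failures / stats[caseId].runs,
  }));
}

// Flag cases above an assumed threshold failure rate of 0.3.
const troublesome = failureRates(executions).filter(c => c.failureRate > 0.3);
console.log(troublesome); // [ { caseId: 'TC-1', failureRate: 0.5 } ]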

4. Cases passed by suites

This heatmap chart shows all the test suites of your project, colour-coded by the percentage of test cases passed: the greener the suite, the better its pass rate. The area of each block represents how many test cases the suite contains relative to the other suites: the larger the area, the more cases in the suite.

Pro-tip: Sometimes a single module or set of features negatively impacts your project while the other modules function as expected. This chart will help you locate such problem areas and act on them. Start by paying attention to light green regions and find out what's lowering the score: is it development, testing or documentation?

5. Milestone burndown

This is useful when you're doing sprints or deadline-driven releases. Quickly see the tasks remaining as a burndown over the timeline, with your team's ideal vs. actual effort. You also get instant feedback, as a team, on how your efforts are contributing towards the end goal, and how fast.
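
As a rough illustration (an assumed linear model, not necessarily how Test Collab computes it), the ideal line of a burndown is a straight interpolation from the total work down to zero across the milestone, plotted against the actual work remaining:

// Rough sketch of a burndown's ideal line (assumed linear model).
function idealRemaining(totalTasks, totalDays, day) {
  return totalTasks * (1 - day / totalDays);
}

// Hypothetical counts of tasks left on days 0..4 of a 10-day milestone.
const actualRemaining = [40, 38, 33, 30, 24];
actualRemaining.forEach(function (actual, day) {
  const ideal = idealRemaining(40, 10, day);
  console.log('day ' + day + ': ideal ' + ideal.toFixed(1) + ', actual ' + actual);
});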

Several other new reports aren't mentioned here but are released with this version: test cases assigned vs. unassigned, defects reported over time, test run results over time, and some new metrics.

As mentioned above, this is just the beginning of a large milestone. If you have ideas for new reports, charts, widgets, metrics, etc., please get in touch and let us know.

Test Collab outage 5 December 2017: Issue rectified

As some of you may have noticed, we had an outage for quite a few hours. The issue was caused by a manual error on a production server on December 5 at 22:00. We were able to bring systems back up on December 6 at 03:11:20 IST. We'll be working on new measures to make sure such errors do not happen again.

Test Collab’s new version: JIRA Cloud bi-directional integration, automatic screenshot upload and 4 new integrations

We are proud to announce the launch of Test Collab 1.16. This release aims to improve testers’ productivity and adds new integrations with your favorite tools.


JIRA Cloud bi-directional Integration

Test Collab can now be used inside your JIRA Cloud instance: you can create and manage test cases and test executions directly from JIRA. The JIRA Cloud plugin can be obtained from the JIRA marketplace.
If you are a JIRA self-hosted user, check this listing.


Automatic Screenshot Upload

This feature is going to save testers a lot of time. When a bug is encountered during testing, a tester usually has to attach a screenshot to the failed test case along with the necessary information about the error. We’ve automated part of that workflow: attaching and annotating a screenshot is now a single key-press. Forget about saving the file to a directory and then attaching it to your case; just press Print Screen and it’s uploaded automatically.
Check out this video below to see it in action:


New Integrations

We’ve also released support for new integrations: Trello, Asana, YouTrack and Team Foundation Server. You can see the list of all our integrations.

Would you like to suggest new ideas for our next release? Please get in touch.

Test Driven development for API-first apps with Postman & Newman

Test-driven development, as you may already know, is the simple practice of arranging your dev workflow so that your tests determine the flow of development. In simple words, you write the tests first and then develop the actual code that passes those tests. Here are some benefits of this approach, in case you’re wondering.

API-first means developing your API first (preferably with a well-developed API framework) and any UI or client later. There are immense benefits to this approach, since API endpoints are much easier to test: in most cases all you have to document is the input and the output, whereas with monoliths, as you know, it’s not as simple.

Another benefit both of these strategies offer comes from the separation of concerns principle: since you’re dealing with the backend only, future debugging and deployment become easier.

If until now you’ve spent a lot of time on monoliths, you can understand how painfully impractical it can be to use test-driven development. However, once you move into API development, things change, and wonderful things start to happen when you choose TDD.

With tools like Postman it gets even easier, as we’ll explore shortly.

To Do

We’ll combine test-driven development principles with an API-first strategy and focus on a workflow that is much faster than traditional monolith-style development.

We’ll also look at enabling collaboration within the team (via SCM), so everyone can contribute.

About Postman & Newman

Postman is a simple tool that lets you make HTTP requests with various parameters and save them into collections which you can refer to later or trigger again.

Newman is another tool from the Postman developers: a command-line runner for Postman collections.

You’ll need to install both of these tools before moving forward.
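
Postman is a desktop app you download from the Postman website; Newman is distributed as an npm package, so assuming you already have Node.js, installing it globally looks like this:

npm install -g newman
newman --version   # verify the install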

Workflow

Our workflow breaks down into the following steps:

1. Creating requests (the equivalent of writing test cases)
2. Automating these requests for the CLI (so they can run on build servers)
3. A file structure for the requests we create (to enable collaboration)
4. Bonus: documentation generation from test cases

#1 Creating some requests using Postman (Test definition)

Since we’re talking test-driven development here, let’s first create a request (or test) in Postman along with our expected result. We don’t have an API yet, but let’s do it anyway, since we need to define the behavior first.

We’ll first create a collection in Postman so the API can be shared with the team later. In that collection, we’ll make some requests.

[Screenshot: test driven development - step 1]

You can repeat these steps to document multiple endpoints all at once or one by one.
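
Each request can also carry its expected result as a test script, written in JavaScript on the request's Tests tab in Postman. A minimal sketch, assuming the /say-hello endpoint we build later in this post:

// Postman "Tests" tab script: assert status and body for this request.
pm.test("responds with 200 OK", function () {
    pm.response.to.have.status(200);
});

pm.test("returns the expected greeting", function () {
    pm.expect(pm.response.text()).to.eql("Hello World!");
});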

#2 Automating requests with Newman

Now that we have some requests we’d like to execute, let’s see how to automate them in our development environment. Right-click the collection and export the whole collection to a JSON file.

Your API collection can be exported to a JSON file.

This lets you keep your source control workflow and team collaboration intact.

Alternatively, you can generate a shareable link to your collection, which is what we’re going to use for speed. Go to Share Collection in the context menu, then Collection Link.

Now let’s run this collection with the following command:

newman run <your collection json file or url>

[Screenshot: newman run output showing the 2 failures]

Since we have no API running yet, the 2 failures are clearly shown.

Now let’s develop our sample API and run this again:

We’ll use some simple Node/Express code here:

const express = require('express')
const app = express()

app.get('/say-hello', function (req, res) {
  res.send('Hello World!')
})

// Listen on the same port the log message reports.
app.listen(8080, function () {
  console.log('Example app listening on port 8080!')
})

And run it with

node server.js

Then run newman again:

[Screenshot: newman run output after starting the API]

Note that I’m using a URL as my JSON collection; in a typical development environment this would be a file in your local filesystem.
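
On a build server you would point Newman at the committed collection file instead. Newman exits with a non-zero code when any assertion fails, which is what makes it usable in CI. The file names below are illustrative; the flags are standard Newman options:

newman run tests/collection.json --reporters cli,junit --reporter-junit-export results.xml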

#3 Committing your work

Now you have the API code plus the JSON collection; both are plain files which can be committed to any repository. You can pick any directory structure that works for you. For my example, I’ll choose:

/api (actual application)

/tests (for my collection file)

The good thing about having your collection file in source control is that all of your team members can see how to call each API endpoint.
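
Assuming the layout above (the paths are just this example's choice), committing looks like any other change:

git add api tests
git commit -m "Add say-hello endpoint and its Postman collection"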

#4 Bonus: API doc generation from test cases

Another benefit is automatic documentation generation, and Postman does a great job here. Each collection you upload to the Postman server automatically gets a documentation page like this:

[Screenshot: an auto-generated documentation page for the collection]

Apart from that, each collection file can be fed into a monitoring agent when you deploy to production. Monitoring is available in the Postman Pro edition, but you can set this up yourself too.

Conclusion

Tools like Postman and Newman combine GUI and command-line interfaces so that APIs can be developed easily. Together they make TDD and team collaboration easier without leaving our standard source control tools.
