Category Archives for "Testing Tips"


Repurposing test cases with Tags / DRY in software testing

I’ve discussed DRY in software testing before, in the context of reusing test cases across projects. Today I’m going to discuss a similar topic: reusing, or repurposing, test cases within a project itself.

During your software development life cycle, there are several phases where your team tests the application. Let’s see how a typical development cycle works before checking out the problem (it’s somewhat similar to what we do for Test Collab):

  1. During development, developers execute a small set of tests to make sure their changes don’t break the build before committing code. (Smoke tests)
  2. A continuous build verifies that all the code is fine by executing some specific tests. (Some functional tests)
  3. Testers execute a few tests manually at the end of the day to make sure the application is running as it should. (All functional tests)
  4. Then there’s a staging/testing server, where all the automated testing is carried out. (All functional tests)
  5. Before every release, manual load testing is done to make sure the version will hold up after it is released. (Load tests)
  6. After every release, specific tests are executed every 7 days to make sure your application is running online. (Production monitoring tests)

Now, this may or may not resemble your software development lifecycle; it’s just there to show you how testing, or software QA, has multiple contexts.

Do you notice something wrong with the cycle above? It’s one of the most common issues among QA teams. Each testing context is treated as a container or category for test cases, and when some test case, say Test Case #1, also needs to act as a load test, it is copied into the ‘Load test’ container as-is. This is absolutely wrong. What happens is you end up with hundreds or thousands of test cases which are often duplicates of each other. A test case should not be hard-coded to a single context if it is required in other contexts too.

For example, a test case which verifies “User registration” in my app might be treated as a ‘smoke’ test and at the same time a ‘production monitoring’ test. Similarly, I might add more information to this same test case so it can also act as a ‘load’ test.

So what is the correct way?

Treat each testing context as a “Tag”. Tags are not new to the blogging world, but they also make sense in testing and here’s how:

So by applying tags we’ve avoided creating 3 new duplicates. Now one single test can belong to multiple testing scenarios or contexts:
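The idea is easy to sketch in code. Below is a minimal, hypothetical model of “context as tag” in plain Python; the tag names and test bodies are illustrative, and Test Collab’s own tagging is of course a UI feature, not this code:

```python
# A minimal sketch of "context as tag". One test carries several tags
# instead of being duplicated into one container per context.

TESTS = {}

def tag(*contexts):
    """Register a test function under one or more context tags."""
    def wrap(fn):
        TESTS[fn] = set(contexts)
        return fn
    return wrap

@tag("smoke", "production-monitoring")
def test_user_registration():
    assert "@" in "alice@example.com"

@tag("load")
def test_bulk_signup():
    assert all("@" in e for e in ["a@x.com", "b@x.com"])

def run(context):
    """Execute every test tagged with the given context; return their names."""
    selected = [fn for fn, tags in TESTS.items() if context in tags]
    for fn in selected:
        fn()
    return [fn.__name__ for fn in selected]
```

Here `run("smoke")` and `run("production-monitoring")` both pick up `test_user_registration` without it ever being duplicated; automation tools offer the same idea, e.g. pytest markers selected with `pytest -m smoke`.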

A few days back, we released the tagging feature, which was long awaited by some of our customers. I hope this article helps you get the best out of it.


How to Write Test Cases for Your Quality Assurance Process

A software tester should have a working understanding of the steps needed to “test” a software program’s functionality. In software engineering, a software tester uses a structured set of procedures to execute what is known as a “test case”.

To “execute” a test case, the tester follows a sequence of procedures to work through any kinks or faulty behavior within the application, so the software functions as intended.

These procedures, also known as “testing practices”, help the software tester write a set of instructions for the test case with a desired result: that the application’s output is satisfactory.

A test case, or test script, is a single step or a group of steps a software tester writes to demonstrate how an application functions. The tester writes these steps to map out how a software program will behave when it is executed.

To ensure all requirements of any application are met, there are two types of test scripts the software tester can use to begin the testing practice. The first type of test script is known as a “formal test case,” which uses “individual test scripts” that are conducted in two stages: a positive test and a negative test. The second type of test script is called an “informal test case.”

An informal test case, or “scenario test”, tests how an application functions in hypothetical events, or scenarios: the software tester creates a set of postulated “what if” situations that can conclude with a varied array of outcomes, positive or negative. From these outcomes, the tester can choose which scenarios support an application’s effective function.

When writing the steps for a test script, software testers should consider how they will write the scripts and where the scripts’ intended destination will be.

Software testers can design test scripts that are larger, containing a greater number of steps with more detailed descriptions. An end location (e.g. a spreadsheet, database, or word document) for archival and later retrieval is necessary, and should be decided within the test planning stage.

Writing a test case

Well-designed test cases comprise three sections:

  • Inputs
  • Outputs
  • Order of Execution

Inputs include data entered from the keyboard or other interfacing devices, data culled from databases or files, and data from the environment in which the system executes.

The state of the system’s environment at the moment the data is introduced, and any data from interfacing systems, are considered additional sources of input.

Outputs, or displayed data (e.g. words visible on a computer’s screen), include data transferred to interfacing systems and external devices, and data written to databases or files.

The order of execution, i.e. how a test case design unfolds, follows one of two styles:

  • Cascading test cases: one test case builds upon another; the first test case completes and leaves the system’s environment ready for a second test case to execute, then a third, and so on.
  • Independent test cases: tests which function singly and do not rely on previous or subsequent test cases for execution.

Choosing and constructing good test cases gives the software tester the opportunity to find more defects or errors in an application while using fewer resources.

To write a test case, after reviewing the test design parameters above, adopt a hierarchy, or structured set of steps, to form a detailed “use case.”

A “use case” will utilize, or denote, the specific steps a software tester implements when writing a test case.

The steps to write a case are:

  • Establish or generate a complete set of test scripts (positive and negative) and/or scenarios (informal test scripts).
  • For each scenario, identify one or more test cases.
  • For each test case, identify the exact processes or conditions which cause the application to execute.
  • For each test case, identify the data values to test.
  • Determine the pass/fail result for each test case/script.

The first test case can be used as a benchmark for subsequent test cases; a test summary of the pass/fail results is recommended. This summary is a detailed brief of the test case’s pass/fail outcome.

Parameters which may be used for test cases:

  • test case description
  • test case identification (assigned name, or assigned number)
  • test increment or step
  • test category / test suite
  • test case author(s)
  • a check mark or check box denoting whether the test can be automated
  • actual results and/or expected results, pass/fail result(s)
  • remarks
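As a sketch, the parameters above can be captured in a simple record like this; the field names are illustrative, not a fixed standard:

```python
from dataclasses import dataclass

# One test case as a record whose fields mirror the parameter list above.
@dataclass
class TestCase:
    identifier: str        # assigned name or number
    description: str
    steps: list            # test increments / steps
    suite: str             # test category / test suite
    authors: list
    automatable: bool      # can the test be automated?
    expected_result: str
    actual_result: str = ""
    passed: bool = False   # pass/fail result
    remarks: str = ""

case = TestCase(
    identifier="TC-042",
    description="User registration accepts a valid email",
    steps=["Open signup form", "Enter a valid email", "Submit"],
    suite="Registration",
    authors=["R. Tester"],
    automatable=True,
    expected_result="Account created",
)
# After execution, record the actual result and derive pass/fail.
case.actual_result = "Account created"
case.passed = case.actual_result == case.expected_result
```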

Hope that makes things clear for first-time test case writers.


Screencast: Test Automation with Test Collab

OK, first things first: what’s the main problem with most test automation techniques? They don’t offer a good interface, so you don’t have a good workflow around them for executing tests and extracting meaningful information from them. Most of the time these tests end up forgotten in the build process, or grow stale without ever being updated. Nor are they integrated well with your manual testing process. Writing an automated test case is easy; the difficult part is collecting insights and meaningful data from those tests over time. See this screencast to find out how Test Collab can solve your test automation worries. (A small note: this is my first screencast, so please excuse the bad voice quality.)

To enable this feature in your account, go to Settings > General Settings, then check ‘Enable Test Automation’ and hit ‘Save’. Refresh once to see the ‘Remote Executors’ link under ‘Settings’ tab.

Test Management in Reusable architecture

If you’re using a reusable codebase or share some common libraries across your multiple projects, you know how difficult it can get to manage test cases for such projects. Suppose you have an individual library or component, say ‘Billing’ which is used in two projects: ‘Project A’ and ‘Project B’.

So the test cases belonging to the ‘Billing’ component always need to be copied into the test cases of ‘Project A’ and ‘Project B’. Most of the time, teams are happy doing this manually, but that’s the wrong approach, and perhaps worst when your components are always evolving. Why? Because as a component evolves, its test cases grow in number and get updated. If you’ve copied such a component’s test cases into other projects manually, you’ll need to maintain those changes inside each project too. Sooner or later it becomes very difficult to keep the list of test cases updated in the projects using the component, and those projects begin to suffer.

So how do you fix it?

We’ve solved this problem in our test management tool with a feature called ‘Linking of Test Suites’. A test suite is not coupled to a single project but can be coupled to multiple projects. In technical terms, the entity relation is:

Project has and belongs to many Test Suites, and vice versa, as opposed to what older test management tools still follow:

Project has many Test Suites.
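That entity relation can be sketched in SQL (here via Python’s sqlite3). The table and column names below are illustrative, not Test Collab’s actual schema:

```python
import sqlite3

# Many-to-many: projects <-> test suites, through a join table.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE test_suites (id INTEGER PRIMARY KEY, name TEXT);
    -- Join table: a suite can be linked into any number of projects.
    CREATE TABLE projects_test_suites (
        project_id    INTEGER REFERENCES projects(id),
        test_suite_id INTEGER REFERENCES test_suites(id)
    );
""")
db.executemany("INSERT INTO projects VALUES (?, ?)",
               [(1, "Project A"), (2, "Project B")])
db.execute("INSERT INTO test_suites VALUES (1, 'Billing')")
# Link the single 'Billing' suite into both projects; nothing is copied.
db.executemany("INSERT INTO projects_test_suites VALUES (?, ?)",
               [(1, 1), (2, 1)])

# Both projects see the same suite row, so an update to 'Billing'
# is instantly visible everywhere it is linked.
rows = db.execute("""
    SELECT p.name FROM projects p
    JOIN projects_test_suites ps ON ps.project_id = p.id
    WHERE ps.test_suite_id = 1 ORDER BY p.id
""").fetchall()
```

The older one-to-many model would instead put a `project_id` column directly on `test_suites`, which is exactly what forces the copying described above.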

How does this work?

Here’s how linking of test suites works:

  1. First you create a project which represents your entire reusable layer or a reusable component. This can be called ‘Core’ or ‘Reusable Layer’ or whatever you call it.
  2. Create test suites, and then test cases inside this project. (Remember, these are the test cases which will be linked across multiple child projects.)
  3. Now create a new child project and go to ‘Test Suites & Cases’ > ‘Import’. Select the desired test suites and press the ‘Link’ button.
  4. You’ve now created a common link between the parent and child projects, which means any new test cases created in your ‘Billing’ test suite will automatically show up in ‘Project A’, as will any changes to existing test cases.

Note that the ‘Link’ button acts differently from ‘Copy’. While ‘Copy’ completely clones the selected test suites into the destination project, ‘Link’ merely shares the test suites with the destination project. The ‘Copy’ feature can still be used when required.

There are several other goodies in Test Collab. Create your free trial account today and see for yourself.

Capturing a test case as soon as it’s born

ABC Company is a hypothetical profitable small sized software development company with 30 employees. Their workflow is pretty simple:
1. Capture requirements
2. Plan
3. Code
4. Test
5. Pack/Deploy
They do pretty well with all these processes. Their last project was a big success, and now they are getting ready for a new one. During phase 2, planning, Robert, a junior developer, was reading the requirements and came across a ‘CSV Upload’ feature. He concluded that it should handle both types of CSV files, one with headers and one without, so both kinds of files should be checked during testing. He raises it with his senior developer, who agrees and says he’ll keep it in mind. But as we all know, our minds are terrible at remembering things; the release date drew near and the software was marked ready to ship. A day before delivery, the buyer is shown a full-featured demo of the product. The not-so-computer-savvy buyer quickly makes a CSV file in Excel, without headers, to try the great CSV Upload feature, and immediately finds that the result makes no sense. Add 4–5 instances like this and they’ll surely delay your project by a day or two.
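Incidentally, the header question Robert spotted is easy to probe in code. As a sketch (the function name and sample data are illustrative), Python’s `csv.Sniffer` can guess whether a file’s first row is a header, though it is only a heuristic and not a substitute for testing both cases:

```python
import csv
import io

def parse_upload(text):
    """Return data rows, skipping the first row if it looks like a header."""
    has_header = csv.Sniffer().has_header(text)
    rows = list(csv.reader(io.StringIO(text)))
    return rows[1:] if has_header else rows

with_header = "id,email\n1,ann@x.com\n2,bob@x.com\n"
without_header = "1,ann@x.com\n2,bob@x.com\n"
# Both uploads yield the same data rows either way.
```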

So the point is: do you see how a ‘test case’ plays an important role, and how important it is to capture a test case as soon as it is born in a team member’s brain? We think it’s very important. It is while working on requirements, planning, and coding that you are deepest in the project and have the most insight into it. That’s exactly when you should start thinking about test cases. But Robert didn’t do it. Why? Because he had no platform for recording the piece of information in his brain where others could see it and act on it. So what do people like Robert do when they don’t know what to do? Sometimes they talk, and sometimes they forget about it; even if they talk, chances are the information is still lost, because it isn’t in a system others rely on. There’s also another article on why you should involve everybody in your team in testing.

Now, hypothetically, let’s say that meanwhile at ABC Company they discussed the problem and figured out they needed some sort of control over, and management of, testing. They started using Test Collab. Every team member was instructed to post a test case as soon as they thought of it. At the end of each phase, seniors would review the reported test cases and give a go-ahead. Before testing, senior developers would arrange the tests in sequential order and assign them to testers. They also integrated their bug tracking tool with Test Collab, which meant that the instant a failure was reported, the whole development team would be informed of the failed test. The CEO and other managers go through the stats produced by Test Collab and can easily tell if testing is slowly starting to take more time, or if the staging instance of the software suddenly has a lot of failures.

Testing will automatically become more interesting and fun once the process is standardized.

Do you already capture test cases right after they’re born? Let us know.

Test Metrics to die for

Yes, there are a few of them you would want to die for. Here are some examples from our Test Collab application, in order of importance (though the order varies for different types of organizations):
1. Average time taken per test execution: This tells you the efficiency of your testers, and a sudden rise or fall can be bad news for software managers.
Why is it important?
Keeping a close eye on this helps managers know if something in the project has changed abnormally, if there’s suddenly a communication problem in the team, or if testers simply aren’t working productively.

2. Defects Reported: Total number of failures/defects reported in the project since its creation.
Why is it important?
Every defect reported in your test case management tool is a cost which must be paid before you deliver. Staying in close touch with this metric keeps you connected with the project’s target date and budget, and helps you make smarter decisions when the need arises. A sudden rise in this metric means your project’s health is declining and your developers might not be doing a good job.

3. Average success % per execution: number of passes / total test cases executed * 100 (ideal value: 100%)
Why is it important?
This metric tells you what percentage of cases pass and fail on average. A high value indicates the project is going great and there is good communication between developers and testers. A rising value indicates the team is improving and paying more attention to quality. A declining or low value is a bad sign, and that’s when a manager should jump in and take the necessary steps to fix it. Set a minimum acceptable value for your organization and tell your developers to maintain it. If they don’t, jump in to find out why.

4. Test Cases: Total number of test cases in a project.
Why is it important?
A greater number of test cases usually suggests the team has worked well at identifying what to test, from an end-user’s perspective, in a particular feature or in the whole project. A large number early in development, with a slow rise as development continues, also suggests the project is heading in a good direction.

5. Test Executions: Total number of test executions in a project. Don’t forget that each execution contains a set of tests.
Why is it important?
At a low level, this metric tells you how much testing has been performed on a project, not in terms of time but in total number of instances. Comparing this value across your projects can suggest whether more or less effort was spent on testing. And sometimes, depending on a project’s outcome, you can tell whether this worked in your favor or against you (even though a project can fail for various reasons).

6. Time spent on testing: Total time spent on testing in a project.
Why is it important?
For very obvious reasons: this tells you how much time you’re spending on testing (excluding your test management time, though). You can compare this with your planned cost of testing, or of the whole project. If you find you’re spending 60% of a project’s allotted time on testing, you can focus on improving your testing process and thereby reduce that time.

There are several other important metrics which you should not forget:
1. Defects per size: defects detected / system size
2. Cost to report a defect: cost of testing / number of defects reported
3. Defects detected in testing: defects detected in testing / total system defects
4. Defects detected in production: defects detected in production / system size
5. Quality of testing: defects found during testing / (defects found during testing + acceptance defects found after delivery) * 100
6. Effectiveness of testing to business: loss due to problems / total resources processed by the system

This list covers some of the major standard testing metrics; your organization may track several others. So which metrics do you die for? We’d appreciate your input.
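The ratio-style formulas above are straightforward to compute as plain functions. A sketch, with illustrative figures; feed in the numbers from your own tool:

```python
def avg_success_pct(passes, total_executed):
    """Metric 3: number of passes / total test cases executed * 100."""
    return passes / total_executed * 100

def cost_to_report_defect(testing_cost, defects_reported):
    """Cost of testing divided by the number of defects reported."""
    return testing_cost / defects_reported

def quality_of_testing(found_in_testing, found_after_delivery):
    """Share of all defects caught before delivery, as a percentage."""
    return found_in_testing / (found_in_testing + found_after_delivery) * 100

print(avg_success_pct(90, 100))         # 90.0
print(cost_to_report_defect(1000, 50))  # 20.0
print(quality_of_testing(45, 5))        # 90.0
```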

P.S. See also how we do it in Test Collab:

Involve everyone on your team in the testing process

Your testing process includes creating new test cases, executing tests, and compiling the stats into important business conclusions, and everyone can contribute, be it your developer, designer, tester, manager, or CEO. In some companies, I’ve observed that maintaining and executing tests is left entirely to testers. Don’t be one of them.

I think you should involve everyone in your testing process, and here’s why:

1. More members mean more test cases. It also means more time to execute them, but prioritising tests will overcome that issue.

2. Developers alone can’t lay down all the test cases. For UI testing, it’s equally important for a designer to submit test cases which should be checked from time to time.

3. Every team member has a different set of interests where they are at their best. Let them think about what tests should be performed in those areas.

4. An idea for a test case can strike any team member’s brain at any time. If you have a ready-to-use platform, they will use it, which will add to the product’s quality. Also, see a hypothetical case study on this.

5. Everyone sees things differently, so if you get more team members to execute tests, you will see different types of feedback coming in.

In the end, what matters is your commitment to testing. But once you start using Test Collab, it automatically encourages the whole team to take part in the testing process, and tries to make testing a habit in your organization.