1

Test Collab 1.2 launched with new deployment options: VirtualBox and middleware package

With release 1.2, we are introducing a self-hosted trial version in middleware and virtual machine formats. Until now, the trial was available only on our hosted platform; now you can try Test Collab on your own infrastructure for 30 days. To choose the right package for you, please visit our new trial page.

Apart from that, we’ve introduced some new features and bug fixes. Here’s the complete change log:

  • 4397 (Bug): Hierarchy of test suites not shown correctly on the test case add page
  • 4358 (Documentation): Multiple user integration not documented
  • 4331 (Documentation): Document the JIRA custom fields integration process
  • 4024 (Bug): Deleted execution’s name missing in activity on production
  • 4023 (Bug): In production, attachments are always private irrespective of settings
  • 3983 (Bug): No permission for requirement
  • 3981 (Bug): Incomplete info on activities
  • 3979 (Bug): ‘Filter By Requirements’ did not work for limited-rights users
  • 3975 (Bug): Traceability matrix could be seen even when user had ‘No Access’ to the requirements module
  • 3927 (Feature): Automatic redirection to the requested page after user has logged in
  • 3913 (Bug): Names of deleted milestones and executions not shown in the activity list
  • 3845 (Bug): Custom field not shown with issue created on JIRA
  • 3809 (Bug): Warnings related to resources, especially images
  • 3807 (Enhancement): Suites drop-down structure on the upload CSV page
  • 3804 (Bug): Suite name same as a deleted suite is not allowed
  • 3664 (Bug): Suite not shown expanded when its case was being viewed while navigating the suites dashboard
  • 3643 (Bug): Editing a test or a suite makes it the last child of its parent suite
  • 3642 (Bug): Changing the suite of a test case doesn’t update the sort order on the test case
  • 3580 (Bug): Breadcrumb missing on the milestone view page
  • 3549 (Feature): Ability to send emails to all administrators
  • 3305 (Feature): Copy test case functionality

Dear Software Teams: Using Spreadsheets for testing is a crime against productivity

I remember that a few weeks after our launch (in mid 2011), a potential customer emailed us saying he needed to convince his team that using Test Collab would be better than using Excel for their software projects. Then we received a few more emails like this and found out that many corporations and big companies were stuck with Excel as a testing tool; it appeared they had never even looked for a specialized tool.

Here’s a thought: why not use a specialized tool instead, like Test Collab? Otherwise you’ll spend hundreds of hours maintaining a large sheet with thousands of records that becomes obsolete after a few months. Choosing such a cumbersome solution has only two possible outcomes: (a) no one in the QA team will want to use it, because it’s ineffective and wastes their time; or (b) the spreadsheet will become so ugly and unwieldy as the project grows that no one will be able to use it, even if they want to.

So, want a reason to ditch spreadsheets (or Excel) right now? I’ll give you eight:

  • It was not made for test management. Some might argue that it can be repurposed to do almost anything, but good luck first figuring that out, then writing rules for the team and getting everyone on the same page.
  • No user management or roles: you cannot define permissions granularly without being an Excel nerd. Even if you can, it’s simply not worth it considering the hours you’ll spend on development and maintenance.
  • Categorization, tagging, and reusing data across projects will never be possible. Even an Excel nerd can’t help you here.
  • Real test execution and tracking of elapsed time are not possible. A tester can never feel the execution running in a spreadsheet, because that is what he’s working on: a sheet. With software like ours, testers get a real sense of activity while tests are executing; a sheet, on the other hand, is just a place for the end result to go.
  • No integration with your defect manager: with a dedicated tool, testers can push defects from test failures to your issue manager in real time. Can your spreadsheet do that? I’m guessing not.
  • No data organization.
  • No out-of-the-box metrics: we provide every metric we gather from your testing data. Sure, spreadsheets can do that, but not without several hours of formula writing.
  • No API for custom rules and logic.

The point is: using a test management solution is a crucial factor in software development; without one in place, you might be losing out without even knowing it.

And what’s worse than spreadsheets? Not managing your tests at all.


Repurposing test cases with Tags / DRY in software testing

I’ve discussed DRY in software testing earlier, which involved reusing test cases across projects. Today I’m going to discuss a similar topic: reusing and repurposing test cases within a project itself.

During your software development life cycle, there are several phases where your team tests the application. Let’s see how a typical development cycle works before checking out the problem (it’s somewhat similar to what we do for Test Collab):

  1. During development, a developer executes a small set of tests to make sure their changes don’t break the build before committing the code. (Smoke tests)
  2. A continuous build verifies that all the code is fine by executing some specific tests. (Some functional tests)
  3. A tester executes a few tests manually at the end of the day to make sure the application is running as it should. (All functional tests)
  4. Then there’s a staging/testing server, where all the automated testing is carried out. (All functional tests)
  5. Before every release, manual load testing is done to make sure this version will hold up once released. (Load tests)
  6. After every release, specific tests are executed every 7 days to make sure your application is running online. (Production monitoring tests)

Now, this may or may not resemble your software development lifecycle; it’s just there to show how testing, or software QA, has multiple contexts.

Do you notice something wrong with the cycle above? It’s the most common issue among QA teams. Each testing context is treated as a container or category for test cases, and when some test case, say Test Case #1, also needs to act as a load test, it is copied into the ‘Load test’ container as-is. This is absolutely wrong. What happens is that you end up with hundreds or thousands of test cases which are often duplicates of each other. A test case should not be hard-coded to a single context if it is required in other contexts too.

For example, a test case which verifies “User registration” in my app might be treated as a smoke test and at the same time a production monitoring test. Similarly, I might add more information to this same test case so it can also act as a load test.

So what is the correct way?

Treat each testing context as a “tag”. Tags are not new to the blogging world, but they also make sense in testing, and here’s how:

Instead of copying a test case into each context, you attach the relevant context tags to the one test case. By applying tags we’ve avoided creating three new duplicates: one single test can now belong to multiple testing scenarios or contexts, as the sketch below shows.
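
To make this concrete, here is a minimal sketch of the idea in Python. The test names and tags are hypothetical, and this illustrates the tagging model rather than Test Collab’s internals: one test case record carries several context tags, so no copy is ever made.

```python
# One test case, many context tags: no duplicate per context.
test_cases = {
    "User registration": {"smoke", "functional", "production-monitoring"},
    "CSV upload": {"functional", "load"},
}

def tests_for_context(tag):
    """All test cases that should run in a given testing context."""
    return [name for name, tags in test_cases.items() if tag in tags]

print(tests_for_context("smoke"))  # ['User registration']
print(tests_for_context("load"))   # ['CSV upload']
```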

A few days back, we released the tagging feature, which was long awaited by some of our customers. I hope this article helps you get the best out of it.


How to Write Test Cases for Your Quality Assurance Process

A software tester should have a working understanding of the steps needed to test a software program’s functionality. In software engineering, a software tester uses a structured set of procedures to execute what is known as a “test case”.

To “execute” a test case, the tester writes an ordered series of steps to work through any kinks or ineffective behavior within the application, so the software functions as intended.

These procedures, also known as testing practices, help the software tester write a set of instructions for the test case with a desired result: that the application’s output is satisfactory.

A test case, or test script, is a single increment or a group of increments a software tester writes to demonstrate how an application functions. The tester writes these increments, or “steps”, to map out how a software program will behave when it is executed.

To ensure all requirements of an application are met, there are two types of test scripts the software tester can use to begin the testing process. The first type is known as a “formal test case”, which uses individual test scripts conducted in two stages: a positive test and a negative test. The second type is called an “informal test case”.

An informal test case, or “scenario testing”, tests how an application functions in hypothetical events, or scenarios: the software tester creates a complex theorem, a set of postulated ideas, or “what if” situations that can conclude with a varied array of outcomes, positive or negative. From these outcomes, the tester can determine which scenarios support the application’s effective functioning.

When writing the steps for a test script, software testers should consider how they will write the scripts and where the intended destination for the scripts will be.

Software testers can design test scripts that are larger, containing a greater number of increments with more detailed descriptions. An end location (e.g., a spreadsheet, database, or word document) for archival and later retrieval is necessary and should be decided during the test planning stage.

Writing a test case

Well-designed test cases comprise three sections:

  • Inputs
  • Outputs
  • Order of Execution

Inputs include keyboard-entry data, for example: data (information) entered or typed in from interfacing devices. Data culled from databases or files, and data from the environment where the system executes, are other types of input.

The environment of the system at the data’s introduction and any data from interfacing systems are considered additional sources of originating input.

Outputs include displayed data (e.g., words visible on a computer’s screen), data transferred to interfacing systems and external devices, and data written to databases or files.

The order of execution, that is, how a test case’s design plays out, follows one of two styles:

  • Cascading test cases: one test case builds upon another; the first test case completes and leaves the system’s environment ready for a second test case to execute, then a third, and so on.
  • Independent test cases: tests which function singly and do not rely on previous or subsequent test cases to execute.
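
To make the distinction concrete, here is a minimal sketch of the two styles in Python, using pytest conventions; the shopping-cart example is purely hypothetical.

```python
import pytest

# Hypothetical application under test: a simple in-memory cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

# Independent test cases: each test builds its own fresh environment.
@pytest.fixture
def cart():
    return Cart()

def test_empty_cart_has_no_items(cart):
    assert cart.items == []

def test_add_single_item(cart):
    cart.add("book")
    assert cart.items == ["book"]

# Cascading test cases: one shared cart, where each test leaves the
# environment ready for the next step (so the order matters).
shared_cart = Cart()

def test_step1_add_first_item():
    shared_cart.add("book")
    assert len(shared_cart.items) == 1

def test_step2_add_second_item():
    # Relies on step 1 having executed first.
    shared_cart.add("pen")
    assert len(shared_cart.items) == 2
```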

The choice and construction of good test cases give the software tester the opportunity to find more defects or errors in an application while using fewer resources.

To write a test case, after reviewing the parameters for test design above, adopt a hierarchy, or structured set of steps, to form a detailed “use case”.

A “use case” denotes the specific steps a software tester implements when writing a test case.

The steps to write a case are:

  • Establish or generate a complete set of test scripts (positive and negative) and/or scenarios (informal test scripts).
  • For each scenario, identify one or more test cases.
  • For each test case, identify the exact processes or conditions that cause the application to execute.
  • For each test case, identify the data values to test.
  • Determine the pass/fail criteria for each test case or script.

The first test case can be used as a benchmark for subsequent test cases; a test summary of the pass/fail results is recommended. This summary is a detailed brief of the test case’s pass/fail outcome.

Parameters which may be recorded for each test case (see the sketch after this list):

  • test case identification (assigned name or number)
  • test case description and author(s)
  • test increment or step
  • test category / test suite
  • a check mark or check box denoting whether the test can be automated
  • expected results and/or actual results, pass/fail result(s)
  • remarks
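
For illustration, here is a minimal sketch of how such a record could be modeled in Python; the field names simply mirror the parameters above and are an assumption, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    """A single test case record using the parameters listed above."""
    identifier: str                # assigned name or number
    description: str
    suite: str                     # test category / test suite
    authors: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    can_be_automated: bool = False
    expected_result: str = ""
    actual_result: str = ""
    passed: Optional[bool] = None  # None until the test is executed
    remarks: str = ""

# Example: a login test recorded before execution.
case = TestCase(
    identifier="TC-001",
    description="Verify a user can log in with valid credentials",
    suite="Authentication",
    authors=["Alice"],
    steps=["Open the login page", "Enter valid credentials", "Press 'Log in'"],
    expected_result="User lands on the dashboard",
)
```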

Hopefully that makes things clear for first-time test case writers.


Screencast: Test Automation with Test Collab

OK, first things first: what’s the main problem with most test automation techniques? They don’t offer a good interface, so you don’t have a good workflow around them for executing tests and extracting meaningful information from them. Most of the time, these tests end up forgotten in the build process, or grow stale over time without ever being updated. Nor are they integrated well with your manual testing process. Writing an automated test case is easy; the difficult part is collecting insights and meaningful data from those tests over time. See this screencast to find out how Test Collab can solve your test automation worries. (A small note: this is my first screencast, so please excuse the poor voice quality.)

To enable this feature in your account, go to Settings > General Settings, check ‘Enable Test Automation’, and hit ‘Save’. Refresh once to see the ‘Remote Executors’ link under the ‘Settings’ tab.
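
For a rough idea of what test automation around a managed workflow can look like, here is a purely illustrative sketch: a script runs an automated suite and posts the outcome back to a test management server so results show up alongside manual runs. The URL, token, and payload below are hypothetical placeholders, not Test Collab’s actual API; consult the ‘Remote Executors’ documentation for the real interface.

```python
import json
import subprocess
import urllib.request

API_URL = "https://example.com/api/executions"  # hypothetical endpoint
API_TOKEN = "your-api-token"                    # hypothetical credential

def run_suite(command):
    """Run the automated suite; True means every test passed."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0

def report(test_case_id, passed):
    """Post the result back to the test management server."""
    payload = json.dumps({"test_case": test_case_id,
                          "status": "pass" if passed else "fail"}).encode()
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_TOKEN},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    passed = run_suite(["pytest", "tests/smoke"])
    report(test_case_id=42, passed=passed)
```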

Multiple user integration with your issue manager now possible!

Until now, it was possible to integrate a single user of your issue manager with Test Collab. So it wasn’t possible to know which user actually reported a failure by looking at the defect in your issue manager, since the API was using a single user’s credentials. This week we have released a new feature which lets all your team members store their own credentials with Test Collab, so that when they report a new failure, the reporter (in your defect manager) will automatically be shown as the person who actually executed that test.

To configure this feature, simply follow these steps:

  1. Log in to your account, then go to your edit profile page.
  2. You will notice a link at the bottom of this edit profile page.
  3. Clicking that link will take you to a page where you can specify your login credentials for your issue manager profile. These credentials will be stored with your Test Collab profile. The fields displayed on this page depend on the type of issue manager you use. Some issue managers, like JIRA and FogBugz, do not require an API key; rather, they need a username and password. The page will show username and password fields automatically if that’s the case.
  4. After clicking the ‘link account’ button you should see a message like ‘credentials stored successfully’. That means your issue manager account is now linked with your Test Collab profile.

Please note: unlike our integration process, this feature does not validate the specified issue manager credentials. That means you’ll have to make sure you enter the correct credentials, otherwise new issues won’t be created when the user tries reporting a test failure. If a user’s profile is not linked to the issue manager, the default user will be used as the reporter for the defect. The default user is the one you configured on the issue manager integration page. As an administrator, you can also go to the users index and link profiles one by one.

Thanks to Amy Wan of Graph Net Health for reporting this enhancement. We hope you’ll enjoy this new feature. Thanks for reading the post.

Test management in a reusable architecture

If you’re using a reusable codebase or sharing common libraries across multiple projects, you know how difficult it can get to manage test cases for such projects. Suppose you have an individual library or component, say ‘Billing’, which is used in two projects: ‘Project A’ and ‘Project B’.

So the test cases which belong to the ‘Billing’ component will always need to be copied into the test cases of ‘Project A’ and ‘Project B’. Most of the time, teams are happy doing this manually. But that’s the wrong approach, and maybe the worst one if your components are always evolving. Why? Because as an individual component evolves, its test cases grow in number and get updated. If you’ve copied the test cases of such components into other projects manually, you’ll need to maintain those changes inside each project too. Sooner or later, it becomes very difficult to keep the list of test cases updated in the projects using this component, and the project begins to suffer.

So how do you fix it up?

We’ve solved this problem in our test management tool with a feature called ‘Linking of Test Suites’. A test suite is not coupled to a single project but can be coupled to multiple projects. In technical terms, the entity relation is:

Project has and belongs to many Test Suites, and vice versa,

as opposed to what old test management tools still follow:

Project has many Test Suites.
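
A minimal sketch of the difference in Python (the project and suite names are hypothetical): the old model pins each suite to exactly one project through a single reference, while the linked model uses a join relation so one suite can belong to many projects.

```python
# Old model: each suite references exactly one project (one-to-many),
# so 'Billing' would have to be duplicated for 'Project B'.
suite_project = {"Billing": "Project A"}

# Linked model: a join relation between projects and suites
# (many-to-many), so 'Billing' is shared rather than duplicated.
project_suites = {
    ("Project A", "Billing"),
    ("Project A", "Checkout"),
    ("Project B", "Billing"),
}

def suites_of(project):
    """All test suites linked to a project."""
    return {s for (p, s) in project_suites if p == project}

def projects_of(suite):
    """All projects a test suite is linked into."""
    return {p for (p, s) in project_suites if s == suite}

print(suites_of("Project A"))  # {'Billing', 'Checkout'} (order may vary)
print(projects_of("Billing"))  # {'Project A', 'Project B'} (order may vary)
```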

How does this work?

Here’s how linking of test suites works:

  1. First, create a project which represents your entire reusable layer or a reusable component. This can be called ‘Core’, ‘Reusable Layer’, or whatever you like.
  2. Create test suites and then test cases inside this project. (Remember, these are the test cases which will be linked across multiple child projects.)
  3. Now create a new child project and go to ‘Test Suites & Cases’ > ‘Import’. Select the desired test suites and press the ‘Link’ button.
  4. You’ve now created a common link between the parent and child projects, which means any new test cases created in your ‘Billing’ test suite will automatically show up in ‘Project A’, and so will any test case changes.

Note that the ‘Link’ button acts differently from ‘Copy’. While ‘Copy’ completely clones the selected test suites into the destination project, ‘Link’ merely allows test suites to be shared with the destination project. So the ‘Copy’ feature can still be used when required.

There are several other goodies on offer in Test Collab. Create your free trial account today and see for yourself.

Test management in different industries

One benefit I enjoy about working in the IT industry is the ample availability of information on the net, be it know-how, tutorials, wisdom, or anything else; many small businesses in other industries lack this. For example, I’ve been visiting a lot of blogs lately on testing/QC to find out what small businesses in other industries are using for their test management. It seems there is no industry-standard tool, so leaders in these industries have their own in-house solutions (Excel spreadsheets or others). Maybe the reason is the unavailability of suitable tools. For the same reason, we’ve made our test case management tool easy to use across various other industries. Our primary focus is still on the IT industry, but features like custom fields in test cases mean you can tailor our tool to your needs without the overhead of development costs.

So which other industries can use test management tools?

Basically, any organization which develops a product knows it needs strict test management. Some examples:

  • Electronics
  • Manufacturing
  • Architecture & Design
  • Consulting
  • Automotive
  • Education
Of course, this is just the tip of the iceberg!

If you’re from a non-IT industry, please share your experience. We’d like to know about any challenges you faced while managing your QC process, or any other facts which can enlighten others.

Our first design revision and 80th day of development

I’m so happy that we have made so much progress on this project. The many hours of discussing, coding, and debating are finally showing results. Today we integrated the new design, and it’s very easy on the eyes. Here are some screenshots:

Today is also the 80th day of development. Still more than a month to go and a lot of work to do…

Capturing a test case as soon as it’s born

ABC Company is a hypothetical, profitable, small-sized software development company with 30 employees. Their workflow is pretty simple:

  1. Capture requirements
  2. Plan
  3. Code
  4. Test
  5. Pack/Deploy

They do pretty well with all of these processes. Their last project was a big success and now they are getting ready for a new project. While they were in Phase 2, planning, their junior developer Robert was reading the requirements and came across the ‘CSV Upload’ feature. He concluded that this feature should handle both types of CSV files, one with headers and one without, so both kinds of files should be checked while testing. He checks with his senior developer, who agrees and says he’ll keep that in mind. But as we all know, our minds are terrible at remembering things; the release date came near and the software was marked ready to be shipped. Now, a day before delivery, the buyer is shown a full-featured demo of the product. The not-so-computer-savvy buyer quickly makes a CSV file in Excel without the headers to check the great CSV Upload feature, and suddenly finds that it doesn’t behave sensibly. Add four or five instances like this and they’ll surely delay your project by a day or two.

So the point is: do you see how a ‘test case’ plays an important role? And beyond that, how important it is to capture a test case as soon as it is born in a team member’s brain? We think it’s very important. It is while working on requirements, planning, and coding that you are deepest in the project and have the most insight into it. That’s exactly when you should start thinking about test cases. But Robert didn’t do it. Why? Because he had no platform in which to record the piece of information he had in his brain, where others could see it and act on it. So what do people like Robert do when they don’t know what to do? Sometimes they talk about it, and sometimes they forget it. Even if they talk, chances are that piece of information is still lost, because it isn’t in a system others rely on. There’s also another article on why you should involve everybody in your team in testing.

Now, hypothetically, let’s say that meanwhile ABC Company discussed things and figured out that they need some sort of control or management over testing. They started using Test Collab. Every team member was instructed to post a test case as soon as they thought of it. At the end of each phase, seniors would review the reported test cases and give a go-ahead. Before testing, senior developers would arrange tests in sequential order and assign them to testers. They also integrated their bug tracking tool with Test Collab, which meant that the instant a failure was reported, the whole development team would be informed of the failed test. The CEO and other managers can go through the stats produced by Test Collab and easily tell if their testing is gradually starting to take more time, or if the staging instance of the software they have deployed suddenly has a lot of failures.

Testing will automatically become more interesting and fun once the process is standardized.

Do you already capture test cases right after they’re born? Let us know.