Test Collab

Test Collab 1.5 launched, introducing test case versioning, Active Directory integration, and custom fields for test cases


We are happy to announce the release of Test Collab v1.5. This version introduces test case versioning, Active Directory integration, and custom fields for test cases. The complete changelog for 1.5 is as follows:

#6007 Feature: Executor name should be shown on test case index of bi-integration panel
#5959 Bug: CSV Export has issues with special characters
#5943 Feature: Encrypt external passwords and api keys before storing in database
#5942 Feature: Test case versioning of custom fields
#5915 Bug: Autocomplete for suite dashboard and execution assign fails if regular expression special characters are present
#5900 Feature: Revision note field on test case edit form
#5840 Feature: Test case versioning
#4641 Feature: Execution start and end timestamps
#3366 Feature: View own execution permission
#3306 Feature: LDAP and Active Directory integration for user authentication
#2958 Requirement: Custom fields for test cases
#2897 Requirement: How a role assigned to a user can be changed?
#2704 Bug: Order of test cases in Traceability matrix
#3848 Feature: Xml import of test cases and suites
#4605 Bug: Editing project level test automation settings (parameters) aren’t reflected immediately on ‘test automate’ page
#5056 Bug: All tags cannot be removed from a test case
#4543 Bug: If there are no test suites and cases in a project then clicking Search or Reset buttons on suites manage redirect me to project’s dashboard
#4511 Bug: Checkbox are shown on all columns of tree
#3459 Bug: At the time of editing a test execution, related (though archived) milestone should be shown selected
#1944 Bug: user should not be able to delete a role which is already assigned to some users
#5774 Feature: Improve import processing screen
#5727 Feature: Suite Hierarchy structure on copy from other project
#5567 Feature: Unfuddle custom fields with select type and their option ids
#5021 Feature: Change all flash charts to JS
#4800 Feature: Test run templates
#4617 Feature: Implement new responsive select list on export page
#4353 Feature: Improve import from other project
#4316 Feature: Reusable steps after import (copy) should expand as separate steps
#4027 Feature: Option to copy test suite
#5250 Enhancement: ajax url should be independent of siteUrl in the config
#2957 Requirement: Export execution report
#4618 Enhancement: Suites dashboard state should be sharable.
#4520 Enhancement: Automatically set the selected suite as parent when redirecting to add test case page from suites manage page
#4519 Enhancement: Auto complete on suites combo on add test case
#4476 Enhancement: Sorted tree structure for suites selection for various pages
#4475 Enhancement: Sorted tree structure for suites on Export page
#4462 Enhancement: When a limited rights user is to be shown “No Access” message, can we redirect him to last used (referer) page in place of the dashboard or project’s overview page ?
#4444 Enhancement: Better responsive select box for large list
#3635 Enhancement: Title of newly copied test case should be unique
#3927 Feature: Automatic redirection to the requested page after user has logged in

JIRA 6 plugin for two-way integration with Test Collab


Many of you asked for it, so here it is. We are sorry for the little delay in this release, but it is now here and ready to use. Visit the Downloads page.
For those of you who missed our little demo of two-way integration, please take a look at our previous blog post on two-way integration. Note that we used Redmine to prepare the video demonstration; the same features and functionality are available in the JIRA 5 and 6 versions.
We hope you'll like this release, and as always, for any support, queries, and feedback, feel free to reach out.

Introducing two-way integration: test case management inside issue manager


Now you can quickly create test cases, assign test executions, and manage software testing from within your issue manager. We have released two-way integration plugins which make all of this possible. No need to log in to multiple systems: just save the Test Collab API key once in your issue manager and you're done.
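To make the workflow concrete, here is a minimal sketch of what an API-key-based call from inside an issue manager plugin could look like. The base URL, endpoint path, and payload fields are hypothetical illustrations, not Test Collab's documented API:

```python
# Hypothetical sketch of an API-key-based integration call.
# The base URL, endpoint path, and payload fields below are
# illustrative assumptions, not Test Collab's documented API.
import requests

API_KEY = "your-test-collab-api-key"                  # stored once in the issue manager
BASE_URL = "https://yourcompany.testcollab.com/api"   # hypothetical base URL

def create_test_case(project_id, title, steps):
    """Create a test case in Test Collab from inside an issue manager plugin."""
    response = requests.post(
        f"{BASE_URL}/projects/{project_id}/testcases",
        headers={"Authorization": f"Token {API_KEY}"},
        json={"title": title, "steps": steps},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example: file a test case for a bug report without leaving the issue tracker.
case = create_test_case(42, "Login fails with expired password",
                        ["Open login page", "Enter expired credentials", "Submit"])
```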

The video below shows how this feature connects Redmine and Test Collab; the same is available for other issue managers too:

To know more, visit the help page for two-way integration.

Test Automation for Windows and Linux


Imagine being able to track your manual and automated test results from the same place. Sounds cool, right? With Test Collab, not only can you do exactly that, but you can also assign tests to a human or a machine with just a few clicks, without wrestling with a dozen APIs or writing custom code. The bottom line is:

  1. You create tests and store them in your test case manager so other team members can run them.
  2. A few of these tests are automated too.
  3. But there is no easy way to keep track of automated results for each test. Okay, maybe your build server does that, but then you have to check test results in two different places: the test case manager for manual tests and the build server for automated tests.
  4. And to run manual tests, you assign an execution to your team. Similarly, you trigger a new build on your build server to run automated tests.

We know this is unproductive, so we bring you our new, improved remote executor.

For those who haven't read about remote executors yet, here's a quick summary: Test Collab Remote Executor turns your machine into a test slave which is used to run automated tests. It posts all the necessary information produced while testing to the Test Collab server for analysis. Check out the how-to screencast here (slightly outdated); it isn't the best explanation, but it will help you in a basic way. For further help, you can always reach our support team.
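Conceptually, a remote executor is a small agent loop: it asks the server for the next automated test assigned to the machine, runs the configured command, and posts the results back. Here is a minimal sketch of that idea; the endpoints, field names, and polling protocol are assumptions for illustration, not Test Collab's actual implementation:

```python
# Minimal sketch of a remote-executor-style agent loop.
# Endpoints, field names, and the polling protocol are hypothetical
# illustrations of the concept, not Test Collab's actual implementation.
import subprocess
import time
import requests

SERVER = "https://yourcompany.testcollab.com/api"  # hypothetical server URL
API_KEY = "executor-api-key"                       # hypothetical credential

def poll_and_run():
    # Ask the server for the next automated test assigned to this machine.
    job = requests.get(f"{SERVER}/executor/next-job",
                       headers={"Authorization": f"Token {API_KEY}"},
                       timeout=10).json()
    if not job:
        return
    # Run the test command locally and capture its output and exit code.
    result = subprocess.run(job["command"], shell=True,
                            capture_output=True, text=True)
    # Post everything back to the server for analysis.
    requests.post(f"{SERVER}/executor/results",
                  headers={"Authorization": f"Token {API_KEY}"},
                  json={"job_id": job["id"],
                        "passed": result.returncode == 0,
                        "output": result.stdout + result.stderr},
                  timeout=10)

if __name__ == "__main__":
    while True:
        poll_and_run()
        time.sleep(30)  # poll every 30 seconds
```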

We launched this quite a while back, but we have now revamped the whole thing and added support for the Linux platform too. This is the first stable release of our remote executor. Download it here.

Introducing new licensing options: site license and free for non-profits


We've introduced two new types of licenses for Test Collab: site licenses (i.e., no restriction on the number of users) and free licenses for non-profit groups.

The site license is available for both self-hosted and hosted instances. For a hosted site license, we deploy your account on a dedicated instance which is dynamically and automatically scaled as per your needs; you won't ever have to worry about storage, space, memory, and so on. Read more about the new pricing options.

Free licenses are meant for non-profit groups and are issued on a case-by-case basis. Please contact us at support@testcollab.com with your organization and registration details, along with a basic description of where you intend to use Test Collab. Not-for-profit open source projects are welcome too.

Test Collab 1.2 launched!


With the current release, 1.2, we introduce a trial version for self-hosted installations in middleware and virtual machine formats. So far the trial was available only on our hosted platform, but now you can try Test Collab on your own infrastructure for 30 days. To choose the right package for you, please go to our new trial page.

Apart from that, we've introduced some new features and bug fixes. Here's the complete changelog:

#4397 Bug: Hierarchy of test suites not shown correctly on test case add page
#4358 Documentation: Multiple user integration not documented
#4331 Documentation: Document JIRA custom fields integration process
#4024 Bug: Deleted execution's name missing on activity on production
#4023 Bug: In production, attachments are always private irrespective of settings
#3983 Bug: No permission for requirement
#3981 Bug: Incomplete info on activities
#3979 Bug: 'Filter By Requirements' did not work for limited rights user
#3975 Bug: Traceability Matrix could be seen even when user had 'No Access' to requirements module
#3927 Feature: Automatic redirection to the requested page after user has logged in
#3913 Bug: Names of deleted milestones and executions not shown in activity list
#3845 Bug: Custom field not shown with issue created on JIRA
#3809 Bug: Warnings related to resources, especially images
#3807 Enhancement: Suites drop-down's structure on upload CSV page
#3804 Bug: Suite name same as deleted suite is not allowed
#3664 Bug: Suite not shown expanded when its case was being viewed while navigating the suites dashboard
#3643 Bug: Editing a test or a suite makes it the last child of its parent suite
#3642 Bug: Changing the suite of a test case doesn't update sort order on the test case
#3580 Bug: Breadcrumb missing on milestone view page
#3549 Feature: Ability to send emails to all administrators
#3305 Feature: Copy test case functionality

Dear Software Teams: Using Spreadsheets for testing is a crime against productivity


I remember, a few weeks after our launch (in mid 2011), a potential customer emailed us saying that he needed to convince his team that using Test Collab would be better than using Excel for their software projects. Then we received a few more emails like this and found out that many corporations and big companies were stuck with Excel as a testing tool; it appeared they had never even looked for a specialized tool.

Here's a thought: why not use a specialized tool instead, like Test Collab? Otherwise you'll spend hundreds of hours maintaining a large sheet with thousands of records which becomes obsolete after a few months. Choosing such a cumbersome solution has only two possible outcomes: (a) no one in the QA team will want to use it, because it's ineffective and wastes their time; or (b) the spreadsheet will become so ugly and annoying as the project grows that no one will be able to use it, even if they want to.

So, want a reason to ditch spreadsheets (or Excel) right now? I’ll give you eight:

  • It was not made for test management. Some might argue that it can be molded to do anything; good luck first 'molding' it, then writing rules for the team and getting everyone on the same page.
  • No user management or roles: you cannot define permissions granularly without being an Excel nerd. Even if you can, it's simply not worth it considering the hours you'll spend on development and maintenance.
  • Categorization, tagging, and reusing data across projects will never be possible. Even an Excel nerd can't help you here.
  • Real test execution and tracking of elapsed time are not possible. A tester can never feel an execution running in a spreadsheet, because all he's working on is a sheet. With software like ours, testers get a real sense of activity while tests are executing; a sheet, on the other hand, is just a place for the end result to go.
  • Integrating with your defect manager: with a real tool, testers can push defects from test failures to your issue manager in real time. Can your spreadsheet do that? I'm guessing not.
  • No data organization.
  • No out-of-the-box metrics: we provide every metric we gather from your testing data. Sure, spreadsheets can do that, but not without several hours of formula writing.
  • No API for custom rules and logic.

The point is: using a test management solution is a crucial factor in software development; without one in place, you might be losing without even knowing it.

And what’s worse than spreadsheets? Not managing your tests at all.

Repurposing test cases with Tags / DRY in software testing


I've discussed DRY in software testing before, which covered reusing test cases across projects; today I'm going to discuss a similar topic: reusing, or repurposing, test cases within a project itself.

During your software development life cycle, there are several phases where your team tests the application. Before looking at the problem, let's see how a typical development cycle works (it's somewhat similar to what we do for Test Collab):

  1. During development, a developer executes a small set of tests to make sure their changes don't break the build before committing the code. (Smoke tests)
  2. A continuous build verifies that all the code is fine by executing some specific tests. (Some functional tests)
  3. A tester executes a few tests manually at the end of the day to make sure the application is running as it should. (All functional tests)
  4. Then there's a staging/testing server, where all the automated testing is carried out. (All functional tests)
  5. Before every release, manual load testing is done to make sure the version will hold up after release. (Load tests)
  6. After every release, specific tests are executed every 7 days to make sure your application is running online. (Production monitoring tests)

Now, this may or may not resemble your software development lifecycle; it's just there to show you that testing, or software QA, has multiple contexts.

Do you notice something wrong with the workflow above? It's the most common issue among QA teams. Each testing context is treated as a container or category for test cases, and when some test case, say Test Case #1, also needs to act as a 'load test', it is copied into the 'Load test' container as-is. This is absolutely wrong. What happens is that you end up with hundreds or thousands of test cases which are often duplicates of each other. A test case should not be hard-coded to a single context if it is required in other contexts too.

For example, a test case which verifies 'user registration' in my app might be treated as a 'smoke' test and, at the same time, a 'production monitoring' test. Similarly, I might add more information to this same test case so it can also act as a 'load test'.

So what is the correct way?

Treat each testing context as a "tag". Tags are not new to the blogging world, but they also make sense in testing, and here's how.

By applying tags instead of copying test cases between containers, we've avoided creating three new duplicates. Now one single test can belong to multiple testing scenarios or contexts, as the sketch below shows.
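To make the idea concrete, here is a minimal sketch of context-as-tag in code; the data structures and tag names are illustrative only, not Test Collab's internals:

```python
# Minimal sketch of "testing context as a tag": one test case record,
# many contexts. The structures and tag names are illustrative only.
test_cases = [
    {"id": 1, "title": "User registration",
     "tags": {"smoke", "production-monitoring", "load"}},
    {"id": 2, "title": "Password reset email",
     "tags": {"functional"}},
    {"id": 3, "title": "Checkout with saved card",
     "tags": {"smoke", "functional"}},
]

def tests_for(context):
    """Select every test case that participates in a given testing context."""
    return [tc for tc in test_cases if context in tc["tags"]]

# One test case, no duplicates, reused across contexts:
print([tc["title"] for tc in tests_for("smoke")])
# -> ['User registration', 'Checkout with saved card']
print([tc["title"] for tc in tests_for("load")])
# -> ['User registration']
```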

A few days back, we released the tagging feature, which was long awaited by some of our customers. I hope this article helps you get the best out of it.

How to write a test case? For dummies


A software tester should have a working understanding of the steps needed to "test" a software program's functionality. In software engineering, a software tester uses a structured set of procedures to execute what is known as a "test case".

To "execute" a test case, the tester writes a sequence of steps to sort out any kinks or ineffective directions within the application, so that the software functions as intended.

These procedures, also known as "testing practices", help the software tester write a set of instructions for the test case with a desired result: that the application's output is satisfactory.

A test case, or test script, is a single increment or a group of increments that a software tester writes to demonstrate how an application functions. The tester writes these increments, or "steps", to map out how a software program will behave when it is executed.

To ensure all requirements of an application are met, there are two types of test scripts the software tester can use to begin the testing practice. The first type is known as a "formal test case", which uses individual test scripts conducted in two stages: a positive test and a negative test. The second type is called an "informal test case".

An informal test case, or "scenario test", checks how an application functions in hypothetical events, or scenarios: the software tester creates a complex theorem, a set of postulated ideas, or "what if" situations that can conclude with a varied array of outcomes, positive or negative. From these outcomes, the tester can choose which scenarios support the application's effective function.
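As an illustration of the positive/negative split in a formal test case, here is a minimal sketch using Python's unittest against a hypothetical validate_username function; the function and its rules are assumptions made for the example:

```python
# Minimal sketch of a formal test case's two stages: a positive test
# (valid input accepted) and a negative test (invalid input rejected).
# validate_username and its rules are hypothetical, for illustration only.
import unittest

def validate_username(name):
    """Accept 3-20 character alphanumeric usernames (assumed rule)."""
    return name.isalnum() and 3 <= len(name) <= 20

class TestUsernameValidation(unittest.TestCase):
    def test_positive_valid_username_is_accepted(self):
        self.assertTrue(validate_username("alice42"))

    def test_negative_invalid_username_is_rejected(self):
        self.assertFalse(validate_username("a"))         # too short
        self.assertFalse(validate_username("bad name"))  # space not allowed

if __name__ == "__main__":
    unittest.main()
```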

When writing the steps for a test script, software testers should consider how they will write the scripts and where the intended destination for the scripts will be.

Software testers can design test scripts that are larger, containing a greater number of increments with more detailed descriptions. An end location (e.g., a spreadsheet, database, or Word document) for archival and later retrieval is necessary and should be included in the test planning stage.

Writing a test case

Well-designed test cases consist of three sections:

  • Inputs
  • Outputs
  • Order of Execution

Inputs include data entered from interfacing devices (keyboard entry, for example), data culled from databases or files, and data from the environment in which the system executes.

The state of the system's environment when the data is introduced, and any data arriving from interfacing systems, are additional sources of input.

Outputs include displayed data (i.e., words visible on a computer's screen), data transferred to interfacing systems and external devices, and data written to databases or files.

The order of execution, i.e. how the test cases are run, falls into one of two styles:

  • Cascading test cases: one test case builds on another; the first test case completes and leaves the system's environment ready for the second test case to execute, then a third, and so on.
  • Independent test cases: tests which function singly and do not rely on previous or subsequent test cases for execution (sketched below).
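Putting the three sections together, here is a minimal sketch of independent test cases written as input/expected-output records with a tiny runner; the data rows and the add function under test are hypothetical illustrations:

```python
# Minimal sketch of the three sections of a test case (inputs, expected
# outputs, order of execution) as independent, data-driven checks.
# The function under test and the data rows are hypothetical.
def add(a, b):
    return a + b

# Each row is one independent test case: inputs plus the expected output.
cases = [
    {"inputs": (2, 3), "expected": 5},
    {"inputs": (-1, 1), "expected": 0},
    {"inputs": (0, 0), "expected": 0},
]

# Independent order of execution: each case runs on its own,
# regardless of what any earlier case did.
for i, case in enumerate(cases, start=1):
    actual = add(*case["inputs"])
    verdict = "pass" if actual == case["expected"] else "fail"
    print(f"case {i}: inputs={case['inputs']} expected={case['expected']} "
          f"actual={actual} -> {verdict}")
```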

Choosing and constructing good test cases gives the software tester the opportunity to discover a greater number of defects or errors in an application while using fewer resources.

To write a test case, after reviewing the parameters for test design above, adopt a hierarchy, or structured set of steps, to form a detailed "use case".

A "use case" denotes the specific steps a software tester implements when writing a test case.

The steps to write a test case are:

  • Establish or generate a complete set of test scripts (positive and negative) and/or scenarios (informal test scripts).
  • For each scenario, identify one or more test cases.
  • For each test case, identify the exact processes or conditions which cause the application to execute.
  • For each test case, identify the values of the data to test.
  • Determine pass/fail for each test case/script.

The first test case can be used as a benchmark for subsequent test cases; a test summary of the pass/fail results is recommended. This summary is a detailed brief of each test case's pass/fail outcome.

Parameters which may be recorded for each test case (a sketch of such a record follows the list):

  • test case description
  • test case identification (assigned name or number)
  • test increment or step
  • test category / test suite
  • test case author(s)
  • a check mark or checkbox denoting whether the test can be automated
  • actual and/or expected results, pass/fail result(s)
  • remarks
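As a sketch of how these parameters might be recorded, here is an illustrative test case record as a Python dataclass; the field names simply mirror the list above and are an example schema, not a prescribed format:

```python
# Illustrative test case record mirroring the parameter list above.
# Field names are an example schema, not a prescribed format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    identifier: str                              # assigned name or number
    description: str
    author: str
    suite: str                                   # test category / test suite
    steps: List[str] = field(default_factory=list)  # test increments
    automatable: bool = False                    # whether the test can be automated
    expected_result: str = ""
    actual_result: str = ""
    passed: Optional[bool] = None                # None until executed
    remarks: str = ""

case = TestCase(
    identifier="TC-001",
    description="Verify login with valid credentials",
    author="Jane Tester",
    suite="Authentication",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    automatable=True,
    expected_result="User lands on the dashboard",
)
```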

Hope that makes things clear for first-time test case writers.

Screencast: Test Automation with Test Collab


OK, first things first: what's the main problem with most test automation techniques? They do not offer a very good interface, so you don't have a good workflow around them for executing them and extracting meaningful information; most of the time these tests end up forgotten in the build process, or grow stale over time without ever being updated. Nor are they integrated well with your manual testing process. Writing an automated test case is easy; the difficult part is collecting insights and meaningful data from it over time. See this screencast to find out how Test Collab can solve your test automation worries. (Small note: this is my first screencast, so please excuse the poor voice quality.)

To enable this feature in your account, go to Settings > General Settings, check 'Enable Test Automation', and hit 'Save'. Refresh once to see the 'Remote Executors' link under the 'Settings' tab.
