Test Collab

JIRA 6 plugin for two-way integration with Test Collab

Announcements

Many of you asked for it, so here it is. We’re sorry for the slight delay in this release, but it is now here and ready to use. Visit the Downloads page.
For those of you who missed our little demo of two-way integration, please take a look at our previous blog post on two-way integration. Note that we used Redmine to prepare the video demonstration; the same features and functionality are available for JIRA 5 and 6.
We hope you’ll like this release. As always, for any support, queries, or feedback, feel free to reach out.

Introducing two-way integration: test case management inside issue manager

Uncategorized

Now you can quickly create test cases, assign test executions, and manage software testing from within your issue manager. We have released two-way integration plugins that make all of this possible. No need to log in to multiple systems: just save the Test Collab API key once in your issue manager and you’re done.
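
For the curious, here is a minimal sketch of what such a plugin does behind the scenes once the key is saved. This is an illustration only; the endpoint, parameter, and field names below are hypothetical, not Test Collab’s actual API:

    # Hypothetical sketch: an issue-manager plugin calling Test Collab
    # with the stored API key. Endpoint and field names are made up.
    import requests

    TEST_COLLAB_URL = "https://yourcompany.testcollab.com"  # assumed instance URL
    API_KEY = "the-key-you-saved-once"                      # stored in plugin settings

    def create_test_case(project_id, title, steps):
        """Create a test case in Test Collab from inside the issue manager."""
        response = requests.post(
            f"{TEST_COLLAB_URL}/api/testcases",  # hypothetical endpoint
            params={"api_key": API_KEY},
            json={"project": project_id, "title": title, "steps": steps},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()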

The video below shows how this feature connects Redmine and Test Collab; the same is available for other issue managers too:

To learn more, visit the help page for two-way integration.

Test Automation for Windows and Linux

Announcements, Testing Tips

Imagine being able to track your manual and automated test results from the same place. Sounds cool, right? With Test Collab you can not only do exactly that but also assign tests to a human or a machine with just a few clicks, without wrestling with a dozen APIs or writing custom code. The bottom line is:

  1. You create tests and store them in your test case manager so other team members can run them.
  2. A few of these tests are automated too.
  3. But there is no easy way to keep track of the automated results for each test. Okay, maybe your build server does that, but then you have to check test results in two different places: the test case manager for manual tests and the build server for automated tests.
  4. And to run manual tests, you assign an execution to your team; similarly, you trigger a new build on your build server to run the automated tests.

We know this is unproductive, so we bring you our new, improved remote executor.

For people who haven’t read about remote executors yet, here’s a quick summary: Test Collab Remote Executor turns your machine into a test slave that runs automated tests. It posts all the necessary information produced during testing to the Test Collab server for analysis. Check out the how-to screencast here (a little outdated); it isn’t the best explanation, but it will help you get the basics. For further help, you can always reach our support team.
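
Conceptually, a remote executor is just a polling loop: fetch an assigned test, run it, post the result back. Here is a rough sketch of that shape in Python; the endpoints and payload fields are invented for illustration and are not the executor’s real protocol:

    # Conceptual sketch of a remote-executor loop. Endpoints and payload
    # fields are invented for illustration.
    import subprocess
    import time

    import requests

    SERVER = "https://yourcompany.testcollab.com"  # assumed server URL
    API_KEY = "executor-api-key"                   # assumed credential

    def poll_and_run():
        while True:
            # Ask the server whether an automated test is assigned to this machine.
            job = requests.get(f"{SERVER}/api/executor/next",
                               params={"api_key": API_KEY}, timeout=10).json()
            if not job:
                time.sleep(30)  # nothing to do; poll again later
                continue
            # Run the test command locally and capture its output.
            result = subprocess.run(job["command"], shell=True,
                                    capture_output=True, text=True)
            # Post the outcome back to the server for analysis.
            requests.post(f"{SERVER}/api/executor/result",
                          params={"api_key": API_KEY},
                          json={"test_id": job["test_id"],
                                "passed": result.returncode == 0,
                                "output": result.stdout + result.stderr},
                          timeout=10)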

We launched this quite a while back, but we have now revamped the whole thing and added support for the Linux platform too. This is the first stable release of our remote executor. Download it here.

Introducing new licensing options: site license and free for non-profits

Uncategorized

We’ve introduced two new types of licenses for Test Collab: site licenses (i.e. no restrictions on the number of users) and free licenses for non-profit groups.

The site license is available for both self-hosted and hosted instances. For a hosted site license, we deploy your account on a dedicated instance that is dynamically and automatically scaled as per your needs. You won’t ever have to worry about storage, space, memory, etc. Read more about the new pricing options.

Free licenses are meant for non-profit groups and are issued on a case-by-case basis. Please contact us at support@testcollab.com with your organization and registration details, along with a basic description of where you intend to use Test Collab. Not-for-profit open source projects are also welcome.

Test Collab 1.2 launched!

Announcements

With the current release, 1.2, we introduce a trial version for self-hosted installations, in both middleware and virtual machine formats. So far the trial was available only on our hosted platform, but now you can try Test Collab on your own infrastructure for 30 days. To choose the right package for you, please go to our new trial page.

Apart from that, we’ve introduced some new features and bug fixes. Here’s the complete changelog:

  • 4397 (Bug): Hierarchy of test suites not shown correctly on the test case add page
  • 4358 (Documentation): Multiple user integration not documented
  • 4331 (Documentation): Document the JIRA custom fields integration process
  • 4024 (Bug): Deleted execution’s name missing from activity on production
  • 4023 (Bug): In production, attachments are always private irrespective of settings
  • 3983 (Bug): No permission for requirement
  • 3981 (Bug): Incomplete info on activities
  • 3979 (Bug): ‘Filter By Requirements’ did not work for limited-rights users
  • 3975 (Bug): Traceability matrix could be seen even when a user had ‘No Access’ to the requirements module
  • 3927 (Feature): Automatic redirection to the requested page after the user has logged in
  • 3913 (Bug): Names of deleted milestones and executions not shown in the activity list
  • 3845 (Bug): Custom field not shown with issue created on JIRA
  • 3809 (Bug): Warnings related to resources, especially images
  • 3807 (Enhancement): Suites drop-down’s structure on the upload CSV page
  • 3804 (Bug): Suite name same as a deleted suite’s is not allowed
  • 3664 (Bug): Suite not shown expanded when one of its cases was being viewed while navigating the suites dashboard
  • 3643 (Bug): Editing a test or a suite makes it the last child of its parent suite
  • 3642 (Bug): Changing the suite of a test case doesn’t update the sort order on the test case
  • 3580 (Bug): Breadcrumb missing on the milestone view page
  • 3549 (Feature): Ability to send emails to all administrators
  • 3305 (Feature): Copy test case functionality

Dear Software Teams: Using Spreadsheets for testing is a crime against productivity

Testing Tips

I remember, a few weeks after our launch (in mid-2011), a potential customer emailed us saying that he needed to convince his team that using Test Collab would be better than using Excel for their software projects. We then received a few more emails like this and found that many corporations and big companies were stuck with Excel as a testing tool; it appeared they had never even looked for a specialized tool.

Here’s a thought: why not use a specialized tool instead, like Test Collab? Otherwise you’ll spend hundreds of hours maintaining a large sheet with thousands of records that becomes obsolete after a few months. Choosing such a cumbersome solution has only two possible outcomes: (a) no one in the QA team will want to use it, because it’s ineffective and wastes their time; or (b) the spreadsheet will become so ugly and unwieldy as the project grows that no one will be able to use it, even if they want to.

So, want a reason to ditch spreadsheets (or Excel) right now? I’ll give you eight:

  • It was not made for test management. Some might argue that it can be molded to do anything; good luck first ‘molding’ it, then writing rules for the team and getting everyone on the same page.
  • No user management or roles: you cannot define permissions granularly without being an Excel nerd. Even if you do, it’s simply not worth it considering the hours you’ll spend on development and maintenance.
  • Categorization, tagging, and reusing data across projects will never be possible. Even an Excel nerd can’t help you here.
  • Real test execution and tracking of elapsed time are not possible. A tester can never feel an execution running in a spreadsheet, because a sheet is just a place for the end result to go. With software like ours, testers get a real sense of activity while tests are executing.
  • Integrating with your defect manager: testers can push defects from test failures to your issue manager in real time. Can your spreadsheet do that? I’m guessing not.
  • No data organization.
  • No out-of-the-box metrics: we provide every metric we can gather from your testing data. Sure, spreadsheets can do that, but not without several hours of formula writing.
  • No API for custom rules and logic.

The point is: using a test management solution is a crucial factor in software development; without one in place, you might be losing out without even knowing it.

And what’s worse than spreadsheets? Not managing your tests at all.

Repurposing test cases with Tags / DRY in software testing

Testing Tips

I’ve discussed DRY in software testing before, in the context of reusing test cases across projects; today I’m going to discuss a similar topic: reusing and repurposing test cases within a single project.

During your software development life cycle, there are several phases in which your team tests the application. Let’s see how a typical development cycle works before looking at the problem (it’s roughly what we do for Test Collab):

  1. During development, developers execute a small set of tests to make sure their changes don’t break the build before committing code. (Smoke tests)
  2. A continuous build verifies that all the code is fine by executing some specific tests. (Some functional tests)
  3. A tester executes a few tests manually at the end of the day to make sure the application is running as it should. (All functional tests)
  4. Then there’s a staging/testing server, where all the automated testing is carried out. (All functional tests)
  5. Before every release, manual load testing is done to make sure the version will hold up after release. (Load tests)
  6. After every release, specific tests are executed every 7 days to make sure the application is running fine in production. (Production monitoring tests)

Now, this may or may not resemble your software development lifecycle; it’s just there to show that testing, or software QA, has multiple contexts.

Do you notice something wrong with the picture above? It’s the most common issue among QA teams. Each testing context is treated as a container or category for test cases, and when some test case, say Test Case #1, also needs to act as a ‘load test’, it is copied into the ‘load test’ container as-is. This is absolutely wrong: you end up with hundreds or thousands of test cases that are often duplicates of each other. A test case should not be hard-coded to a single context if it is required in other contexts too.

For example, a test case that verifies “user registration” in my app might be treated as a ‘smoke’ test and, at the same time, a ‘production monitoring’ test. Similarly, I might add more information to this same test case so it can also act as a ‘load test’.

So what is the correct way?

Treat each testing context as a “tag”. Tags are nothing new in the blogging world, but they also make sense in testing, and here’s how:

By applying tags, we’ve avoided creating three new duplicates. Now one single test can belong to multiple testing scenarios or contexts.
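
In code terms, tagging is just a many-to-many relationship between tests and contexts. Here is a toy Python sketch of the idea (the test names and tags are invented for illustration):

    # Toy illustration: one test, many contexts, no duplicates.
    tests = [
        {"name": "User registration", "tags": {"smoke", "production-monitoring", "load"}},
        {"name": "Password reset", "tags": {"functional"}},
        {"name": "Checkout", "tags": {"smoke", "functional", "load"}},
    ]

    def tests_for(context):
        """Select every test that belongs to a given testing context."""
        return [t["name"] for t in tests if context in t["tags"]]

    print(tests_for("smoke"))  # ['User registration', 'Checkout']
    print(tests_for("load"))   # ['User registration', 'Checkout']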

A few days back, we released the tagging feature, which was long awaited by some of our customers. I hope this article helps you get the best out of it.

How to write a test case? For dummies

Testing Tips

A software tester should have a working understanding of the steps needed to “test” a software program’s functionality. In software engineering, a software tester uses a structured set of procedures to execute what is known as a “test case”.

To “execute” a test case, the tester writes a sequence of steps designed to uncover any kinks or ineffective behavior within the application, so that the software functions as intended.

These procedures, also known as “testing practices”, help the software tester write a set of instructions for the test case with a desired result: that the application’s output is satisfactory.

A test case, or test script, is a single step or a sequence of steps a software tester writes to demonstrate how an application functions. With these steps, the tester maps out how a software program will behave when it is executed.

To ensure all requirements of an application are met, there are two types of test scripts the software tester can use to begin the testing practice. The first type is known as a “formal test case”, which uses individual test scripts conducted in two stages: a positive test and a negative test. The second type is called an “informal test case”.

An informal test case, or “scenario test”, checks how an application functions in hypothetical events or scenarios: the software tester creates a complex theorem, a set of postulated ideas, or “what if” situations that can conclude with a varied array of outcomes, positive or negative. From these outcomes, the tester can identify which scenarios support the application’s effective functioning.

When writing the steps for a test script, software testers should consider how they will write the scripts and where the scripts’ intended destination will be.

Software testers can design test scripts that are larger, containing a greater number of steps with more detailed descriptions. An end location (e.g., a spreadsheet, database, or word document) for archival and later retrieval is necessary and should be decided during the test-planning stage.

Writing a test case

Well-designed test cases consist of three sections:

  • Inputs
  • Outputs
  • Order of Execution

Inputs include data entered from interfacing devices (keyboard entry, for example), data culled from databases or files, and data from the environment in which the system executes.

The state of the system’s environment when the data is introduced, and any data from interfacing systems, are considered additional sources of input.

Outputs include displayed data (i.e. words visible on a computer’s screen), data transferred to interfacing systems and external devices, and data written to databases or files.

The order of execution, i.e. how a test case is designed to run, comes in one of two styles (see the sketch after this list):

  • Cascading test cases: one test case builds on another; the first test case completes and leaves the system’s environment ready for a second test case to execute, then a third, and so on.
  • Independent test cases: tests that function singly and do not rely on previous or subsequent test cases for execution.
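
To make the three sections concrete, here is a minimal sketch in Python, assuming a simple dictionary-based test case format; the field names and the stand-in application are illustrative only:

    # Minimal sketch of a test case with the three sections above.
    # The structure and field names are illustrative, not a standard.
    login_test = {
        "inputs": {"username": "alice", "password": "correct-horse"},
        "expected_output": "Welcome, alice",
        "order": "independent",  # or "cascading" if it sets up a later test
    }

    def run(test, system_under_test):
        """Execute one test case and report pass or fail."""
        actual = system_under_test(**test["inputs"])
        return "pass" if actual == test["expected_output"] else "fail"

    # A trivial stand-in for the application being tested.
    def fake_login(username, password):
        return f"Welcome, {username}" if password == "correct-horse" else "Access denied"

    print(run(login_test, fake_login))  # pass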

Choosing and constructing good test cases gives the software tester the opportunity to discern a greater number of defects or errors in an application while using fewer resources.

To write a test case, after reviewing the design parameters above, adopt a hierarchy, or structured set of steps, to form a detailed “use case”.

A “use case” denotes the specific steps a software tester implements when writing a test case.

The steps to write a test case are:

  • Establish or generate a complete set of test scripts (positive and negative) and/or scenarios (informal test scripts).
  • For each scenario, identify one or more test cases.
  • For each test case, identify the exact processes or conditions that cause the application to execute.
  • For each test case, identify the data values to test.
  • Determine pass/fail criteria for each test case/script.

The first test case can be used as a benchmark for subsequent test cases; a test summary of the pass/fail results is recommended. This summary is a detailed brief of each test case’s pass/fail outcome.
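
As a worked example of those steps, here is one scenario, “user login”, turned into a positive and a negative test case as a runnable Python sketch; fake_login is a stand-in for the real application:

    # Worked example: the "user login" scenario as one positive and one
    # negative test case. fake_login stands in for the real application.
    import unittest

    def fake_login(username, password):
        return password == "s3cret"

    class LoginScenario(unittest.TestCase):
        def test_valid_credentials_accepted(self):    # positive test
            self.assertTrue(fake_login("alice", "s3cret"))

        def test_invalid_credentials_rejected(self):  # negative test
            self.assertFalse(fake_login("alice", "wrong"))

    if __name__ == "__main__":
        unittest.main()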

Parameters that may be recorded for a test case include:

  • test case description
  • test case identification (assigned name or number)
  • test increment or step
  • test category / test suite
  • test case author(s)
  • check mark or check boxes denoting whether the test can be automated
  • actual results and/or expected results, pass/fail result(s)
  • remarks

Hope that makes things clear for first-time test case writers.

Screencast: Test Automation with Test Collab

Announcements, Testing Tips

Okay, first things first: what is the main problem with most test automation techniques? They don’t offer a good interface, so you don’t have a good workflow around them for executing tests and extracting meaningful information; most of the time these tests end up forgotten in the build process or grow stale without ever being updated. Nor are they well integrated with your manual testing process. Writing an automated test case is easy; the difficult part is collecting insights and meaningful data from it over time. See this screencast to find out how Test Collab can solve your test automation worries: (a small note: this is my first screencast, so please excuse the poor voice quality)

To enable this feature in your account, go to Settings > General Settings, check ‘Enable Test Automation’, and hit ‘Save’. Refresh once to see the ‘Remote Executors’ link under the ‘Settings’ tab.

Multiple user integration with your issue manager now possible!

Announcements

Until now it was only possible to integrate a single user of your issue manager with a Test Collab profile. As a result, it wasn’t possible to tell which user actually reported a failure by looking at the defect in your issue manager, since the API was using a single user’s credentials. This week we have released a new feature that lets all your team members store their own credentials with Test Collab, so that when they report a new failure, the reporter (in your defect manager) is automatically shown as the person who actually executed that test.

To configure this feature, simply follow these steps:

  1. Log in to your account, then go to edit profile.

  2. You will notice a link at the bottom of this edit profile page.
  3. Clicking that link will take you to a page where you can specify the login credentials for your issue manager profile. These credentials will be stored with your Test Collab profile. The fields displayed on this page depend on the type of issue manager you use: some issue managers, like JIRA, FogBugz, and others, do not require an API key but rather a username and password, and the page will show username and password fields automatically if that’s the case.
  4. After clicking the ‘link account’ button you should see a message like ‘credentials stored successfully’. That means your issue manager account is now linked with your Test Collab profile.

Please note: unlike our integration process, this feature does not validate the issue manager credentials you specify. That means you’ll have to make sure you enter correct credentials, otherwise new issues won’t be created when the user tries reporting a test failure. If a user’s profile is not linked to the issue manager, the default user is used as the reporter for the defect; the default user is the one you configured on the issue manager integration page. As an administrator you can also go to the users index and link profiles one by one.
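
In code terms, the reporter-selection rule described above boils down to a simple fallback; the names here are illustrative, not Test Collab internals:

    # Illustrative sketch of the fallback: use the executing user's linked
    # issue-manager credentials if present, otherwise the default user
    # configured on the integration page. All names are made up.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class User:
        name: str
        issue_manager_credentials: Optional[dict] = None

    def reporter_credentials(executing_user: User, default_user: User) -> dict:
        """Pick whose credentials are used when a failure is pushed as a defect."""
        if executing_user.issue_manager_credentials:
            return executing_user.issue_manager_credentials
        return default_user.issue_manager_credentials

    linked = User("amy", {"username": "amy", "password": "***"})
    default = User("integration-default", {"username": "bot", "password": "***"})
    print(reporter_credentials(linked, default))  # amy's own credentials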

Thanks to Amy Wan of Graph Net Health for suggesting this enhancement. We hope you’ll enjoy this new feature. Thanks for reading!
