How to Write Test Cases for Your Quality Assurance Process
A software tester should have a working understanding of the steps needed to test a software program's functionality. In software engineering, a tester follows a structured set of procedures to execute what is known as a "test case".
To execute a test case, the tester writes a sequence of steps designed to uncover any defects or ineffective instructions within the application, so that the software functions as intended.
These procedures, also known as "testing practices," help the tester write a set of instructions for the test case with a desired result: that the application's output is satisfactory.
A test case, or test script, is a single step or a group of steps a software tester writes to demonstrate how an application functions. The tester writes these steps to map out how the software will behave when it is executed.
To ensure all requirements of an application are met, there are two types of test scripts the software tester can use. The first is the "formal test case," which uses individual test scripts conducted in two stages: a positive test and a negative test. The second is the "informal test case."
An informal test case, or "scenario test," checks how an application behaves in hypothetical events, or scenarios: the tester constructs a set of postulated "what if" situations that can conclude with a range of outcomes, positive or negative. From those outcomes, the tester can identify which scenarios support the application's correct function.
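As a rough sketch, a positive and a negative test might look like the following. The `divide` function and its behavior are assumptions for illustration, not part of any specific application:

```python
# Hypothetical function under test (assumed for illustration).
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Positive test: valid input should produce the expected output.
def test_divide_positive():
    assert divide(10, 2) == 5

# Negative test: invalid input should be rejected, not silently accepted.
def test_divide_negative():
    try:
        divide(1, 0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass  # the application handled the bad input as intended

test_divide_positive()
test_divide_negative()
print("both tests passed")
```

Together, the two tests cover both stages of a formal test case: the positive test confirms correct behavior on good input, and the negative test confirms the application fails safely on bad input.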
When writing the steps for a test script, software testers should consider how they will write the scripts and where the scripts will ultimately be stored.
Testers can design larger test scripts that contain more steps with more detailed descriptions. A destination (e.g., a spreadsheet, database, or word-processing document) for archiving and later retrieving the scripts is necessary and should be decided during the test planning stage.
Writing a test case
Well-designed test cases comprise three sections:
- Inputs
- Outputs
- Order of execution
Inputs include data entered from interfacing devices (keyboard entry, for example), data culled from databases or files, and data from the environment in which the system executes.
The state of the system when the data is introduced, and any data from interfacing systems, are additional sources of input.
Outputs include displayed data (i.e., words visible on a computer's screen), data transferred to interfacing systems or external devices, and data written to databases or files.
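One common way to pair inputs with their expected outputs is a table-driven test. The `to_upper` function and its data rows below are assumptions chosen only to illustrate the pattern:

```python
# Hypothetical function under test.
def to_upper(text):
    return text.upper()

# Each row pairs one input with its expected output.
cases = [
    ("hello", "HELLO"),
    ("Test", "TEST"),
    ("", ""),
]

for given, expected in cases:
    actual = to_upper(given)
    assert actual == expected, f"{given!r}: expected {expected!r}, got {actual!r}"
print("all cases passed")
```

Keeping inputs and outputs side by side in a table makes it easy to add new rows later without touching the test logic.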
The order of execution, how a test case design proceeds, follows one of two styles:
- Cascading test cases: one test case builds on another; when the first completes, it leaves the system's environment ready for a second test case to execute, then a third, and so on.
- Independent test cases: tests that run on their own and do not rely on previous or subsequent test cases to execute.
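The two styles above can be sketched as follows; the in-memory `db` dictionary and user record are hypothetical stand-ins for a real system environment:

```python
# Independent test: builds its own fresh environment and cleans up after itself.
def test_create_user_independent():
    db = {}  # fresh state used by this test only
    db["alice"] = {"active": True}
    assert db["alice"]["active"]

# Cascading steps: each step leaves the environment ready for the next.
def test_cascade():
    db = {}
    # Step 1: create the record.
    db["alice"] = {"active": False}
    assert "alice" in db
    # Step 2: depends on step 1 having completed successfully.
    db["alice"]["active"] = True
    assert db["alice"]["active"]

test_create_user_independent()
test_cascade()
print("independent and cascading examples passed")
```

Independent tests can run in any order (or in parallel); cascading tests save setup work but fail as a chain if an early step breaks.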
Choosing and constructing good test cases lets the software tester uncover more defects or errors in an application while using fewer resources.
To write a test case, after reviewing the design parameters above, adopt a hierarchy, or structured set of steps, to form a detailed "use case."
A "use case" denotes the specific steps a software tester follows when writing a test case.
The steps to write a case are:
- Establish or generate a complete set of test scripts (positive and negative) and/or scenarios (informal test scripts).
- For each scenario, identify one or more test cases.
- For each test case, identify the exact processes or conditions that cause the application to execute.
- For each test case, identify the data values to test.
- Determine the pass/fail result for each test case or script.
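The steps above can be sketched end to end in code. The `login` function, the scenario name, and the credential data are all hypothetical, chosen only to show scenarios expanding into test cases with conditions, data, and pass/fail verdicts:

```python
# Each scenario holds one or more test cases; each case names its
# triggering condition, its test data, and its expected result.
scenarios = {
    "login": [
        {"condition": "valid credentials", "data": ("alice", "s3cret"), "expected": True},
        {"condition": "wrong password",    "data": ("alice", "wrong"),  "expected": False},
    ],
}

# Hypothetical function under test.
def login(user, password):
    return (user, password) == ("alice", "s3cret")

# Execute every case and record a pass/fail verdict per condition.
results = {}
for name, cases in scenarios.items():
    for case in cases:
        outcome = login(*case["data"])
        results[case["condition"]] = "pass" if outcome == case["expected"] else "fail"

print(results)
```

The `results` dictionary doubles as the raw material for a pass/fail test summary.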
The first test case can serve as a benchmark for subsequent test cases; a test summary of the pass/fail results is recommended. This summary is a detailed brief of each test case's pass/fail outcome.
Parameters that may be used for test cases:
- test case description
- test case identification (assigned name, or assigned number)
- test increment or step
- test category / test suite
- author(s)
- a check mark or check box denoting whether the test can be automated
- expected results, actual results, and pass/fail result(s)
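As a minimal sketch, the parameters above could be captured in a record like the following; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

# A minimal test-case record covering the parameters listed above.
@dataclass
class TestCase:
    case_id: str                  # assigned name or number
    description: str
    author: str
    suite: str                    # test category / test suite
    steps: list = field(default_factory=list)  # test increments
    automatable: bool = False     # check box: can the test be automated?
    expected_result: str = ""
    actual_result: str = ""

    @property
    def passed(self):
        return self.actual_result == self.expected_result

tc = TestCase("TC-001", "Login with valid credentials", "QA team", "auth",
              steps=["open login page", "enter credentials", "submit"],
              expected_result="dashboard shown", actual_result="dashboard shown")
print(tc.case_id, "pass" if tc.passed else "fail")
```

A structured record like this is easy to export to the spreadsheet or database chosen as the archival destination during planning.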
Hope that makes things clear for first-time test case writers.