Triaged Tester

March 20, 2009

Test cases for Black box

Filed under: Black Box Testing,Test case,Tips — Triaged Tester @ 9:34 am

For Black box testing, the test cases can be generated by using all or a combination of the techniques below.

Graph Based : Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
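A quick sketch of the idea in Python (the object and relationship names here are made up for illustration): model objects as nodes and relationships as edges, then derive one test per edge so every relationship gets exercised.

```python
# Hypothetical object graph: (source object, relationship, target object).
relationships = [
    ("NewFile", "opens", "DocumentWindow"),
    ("DocumentWindow", "saves_to", "SavedFile"),
    ("SavedFile", "appears_in", "RecentList"),
]

# One test case per edge gives edge coverage of the object graph.
test_cases = [
    f"verify that {src} '{rel}' {dst}" for src, rel, dst in relationships
]

for tc in test_cases:
    print(tc)
```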

Error Guessing – Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation.

Boundary Value Analysis (BVA) is a test data selection technique (a Functional Testing technique) where the extreme values are chosen. Boundary values include the maximum, the minimum, values just inside/outside the boundaries, typical values, and error values. The hope is that if a system works correctly for these special values, then it will work correctly for all values in between.
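Here is a minimal BVA sketch in Python. The function and its 18..65 range are invented purely for illustration; the point is which values get picked.

```python
# Hypothetical validator: accepts ages in the inclusive range 18..65.
def validate_age(age):
    return 18 <= age <= 65

# BVA picks the boundaries and their neighbours, plus a typical value.
cases = {
    17: False,  # just outside the lower boundary
    18: True,   # minimum
    19: True,   # just inside the lower boundary
    40: True,   # typical value
    64: True,   # just inside the upper boundary
    65: True,   # maximum
    66: False,  # just outside the upper boundary
}

for age, expected in cases.items():
    assert validate_age(age) == expected, f"failed at boundary value {age}"
```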

Equivalence partitioning is a testing method that divides the input domain of a program into classes of data from which test cases can be derived.
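A sketch of equivalence partitioning, reusing the same hypothetical 18..65 age check: split the input domain into classes and test one representative from each, on the assumption that any member of a class behaves like the others.

```python
# Hypothetical validator from the BVA example above.
def validate_age(age):
    return 18 <= age <= 65

# Partition the input domain: one invalid class below, one valid class,
# one invalid class above. Each class lists a few members and the
# expected result for the whole class.
partitions = {
    "below_range": ([-5, 0, 17], False),
    "in_range":    ([18, 40, 65], True),
    "above_range": ([66, 120], False),
}

# One representative per class is enough to exercise each partition.
for name, (members, expected) in partitions.items():
    representative = members[len(members) // 2]
    assert validate_age(representative) == expected, name
```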

Comparison Testing – There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. These independent versions form the basis of a black box testing technique called Comparison testing or back-to-back testing.
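A back-to-back sketch in Python: run two implementations on the same inputs and flag any disagreement. The two sort routines below are stand-ins for independently developed versions of the same specification.

```python
# "Version A": relies on the standard library.
def version_a(xs):
    return sorted(xs)

# "Version B": an independently written insertion sort.
def version_b(xs):
    out = list(xs)
    for i in range(len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

# Back-to-back: feed both versions identical inputs and compare outputs.
inputs = [[], [1], [3, 1, 2], [5, 5, 1], list(range(10, 0, -1))]
for case in inputs:
    a, b = version_a(case), version_b(case)
    assert a == b, f"versions disagree on {case}: {a} vs {b}"
```

Any input on which the versions disagree points at a defect in at least one of them.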

The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitable small set of test cases (from a large number of possibilities).
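A small pairwise sketch: with three two-level factors (the factor names and levels below are made up), exhaustive testing needs 8 runs, but the classic L4(2³) orthogonal array covers every pair of levels in just 4.

```python
# Hypothetical factors, each with two levels.
factors = {
    "os": ["win", "linux"],
    "browser": ["ff", "chrome"],
    "locale": ["en", "de"],
}

# L4(2^3) orthogonal array: each row picks a level index per factor.
# Every pair of columns contains all four level combinations.
l4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Expand the array rows into concrete test runs.
runs = [
    {name: levels[i] for (name, levels), i in zip(factors.items(), row)}
    for row in l4
]

for run in runs:
    print(run)
```

Four runs instead of eight, yet every pairwise interaction of levels still appears at least once.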


January 19, 2009

Test cases – types of results

Filed under: Black Box Testing,Test case — Triaged Tester @ 5:47 am

·         A Test Case can return a PASS, FAIL or CHECK.

·         A status of CHECK indicates that the test has been run, but validation is yet to be done.

·         A Test Case cannot return any other states.

·         If a Test Case times out, the test harness should report it as a FAIL.

·         The test harness should identify an additional state of a Test Case as “SKIPPED”.

·         A status of “SKIPPED” should occur only when a Test Case does *not* get a chance to return a result.

·         Optionally, a test case can also return a status of NA (not applicable), but only if that test case is now obsolete relative to the current release and was missed during the analysis at test suite creation.

·         The test reporting infrastructure should be able to report on PASS, FAIL, CHECK, SKIPPED and NA.

·         A status of CHECK must eventually transform to either PASS or FAIL.
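The rules above can be sketched as a small Python enum. The names mirror the list; the `finalize` helper and its signature are my own invention to show the "CHECK must become PASS or FAIL" rule.

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    CHECK = "check"      # test ran, but validation is still pending
    SKIPPED = "skipped"  # test never got a chance to return a result
    NA = "na"            # obsolete for the current release

def finalize(status, validated_ok=None):
    """Resolve a CHECK into PASS or FAIL once validation is done."""
    if status is Result.CHECK:
        if validated_ok is None:
            raise ValueError("CHECK must eventually become PASS or FAIL")
        return Result.PASS if validated_ok else Result.FAIL
    return status

# A CHECK resolves according to the validation outcome; other states pass through.
assert finalize(Result.CHECK, validated_ok=True) is Result.PASS
assert finalize(Result.SKIPPED) is Result.SKIPPED
```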

