Triaged Tester

March 19, 2009

Testing Types & Testing Techniques

Filed under: Black Box Testing,General,Terminology — Triaged Tester @ 9:27 am

Testing types deal with what aspect of the software is to be tested, while testing techniques deal with how a specific part of the software is to be tested.

That is, a testing type tells us whether we are testing the function or the structure of the software. In other words, we may test each function of the software to see whether it is operational, or we may test the internal components of the software to check that their workings match the specification.

A testing technique, on the other hand, describes the methods to be applied, or the calculations to be done, to test a particular feature of the software (sometimes we test the interfaces, sometimes the segments, sometimes the loops, etc.).
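To make the distinction concrete, here is a minimal Python sketch (the `discount` function and its inputs are invented for illustration). The first pair of checks is a function-oriented (black-box) test that looks only at observable behaviour; the second pair is a structure-oriented (white-box) test whose inputs were chosen by looking at the internal branch boundary.

```python
# A hypothetical function under test.
def discount(total):
    """Return the discount for an order total."""
    if total >= 100:
        return total * 0.10
    return 0.0

# Function (black-box) testing: check observable behaviour only.
assert discount(200) == 20.0
assert discount(50) == 0.0

# Structural (white-box) testing: inputs chosen to exercise the
# internal branch boundary at total == 100.
assert discount(100) == 10.0
assert discount(99.99) == 0.0
```

The same feature is being tested in both cases; only the way the test cases were derived differs.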

March 11, 2009

How to Implement Test Automation Framework Methodology

Filed under: Automation,Checklist,Strategies,Tips — Triaged Tester @ 6:10 am

1. Identification of the Scope of Testing: company oriented, product oriented or project oriented.
2. Identification of the Needs of Testing: identify the types of testing (e.g. FT, web services) and the applications / modules to be tested.
3. Identification of the Requirements of Testing: find out the nature of the requirements, identify the type of actions for each requirement and identify the high-priority requirements.
4. Evaluation of the Test Automation Tool: prepare an evaluation checklist, identify the candidate tools available, do a sample run, rate and select the tool, then implement it and train the team.
5. Identification of the Actions to be Automated: actions, validations and requirements supported by the tool.
6. Design of the Test Automation Framework: framework guidelines, validations, actions involved, systems involved, tool extensibility support, custom messages and UML documentation.
7. Design of the Input Data Bank: identification of the types of input file, categorization and design of file prototypes.
8. Development of the Automation Framework: development of scripts based upon the framework design (driver scripts, worker scripts, record / playback, screen / window / transaction, action / keyword and data driven).
9. Population of the Input Data Bank: different types of data input, population of data from different data sources, manual input of data and parent-child data hierarchy.
10. Configuration of the Schedulers: identify scheduler requirements and configure the schedulers.
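Steps 7–9 come together in the keyword-driven pattern that step 8 names: a driver script reads rows from the input data bank and dispatches each keyword to a worker script. A minimal Python sketch of that shape (the worker names, keywords and data rows here are invented for illustration):

```python
# Hypothetical worker scripts: each one implements a single keyword.
def login(user, password):
    return f"logged in as {user}"

def search(term):
    return f"searched for {term}"

# Keyword-to-worker lookup table used by the driver.
WORKERS = {"Login": login, "Search": search}

def run(test_rows):
    """Driver script: dispatch each data-bank row to its worker."""
    results = []
    for keyword, *args in test_rows:
        worker = WORKERS[keyword]
        results.append(worker(*args))
    return results

# Rows as they might come from the input data bank (step 9).
data_bank = [
    ("Login", "alice", "secret"),
    ("Search", "coverage"),
]
print(run(data_bank))  # ['logged in as alice', 'searched for coverage']
```

Because the driver is generic, new test cases are added by populating the data bank rather than by writing new scripts, which is the main payoff of this framework design.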

February 11, 2009

Code coverage – The strategy

Filed under: Code Coverage,Strategies — Triaged Tester @ 1:05 pm

Here’s how I approach my job, and how coverage helps.


1. I divide the code under test into three categories.

·         High-risk code could cause severe damage (wipe out data, give non-obviously wrong answers that might cost a user a lot of money), or has many users (so the cost of even minor bugs is multiplied), or seems likely to have many mistakes whose costs will add up (it implements a tricky algorithm, or it talks to an ill-defined and poorly understood interface, or I've already found an unusual number of problems in it).

·         Low-risk code is unlikely to have bugs important enough to stop or delay a shipment, even when all the bugs are summed together. They would be annoyance bugs in inessential features, ones with simple and obvious workarounds.

·         Medium-risk code is somewhere in between. Bugs here would not be individually critical, but having too many of them would cause a schedule slip. There's good reason to find and fix them as soon – and as cheaply – as possible. But there are diminishing returns here – time spent doing a more thorough job might better be spent on other tasks.


Clearly, these are not hard-and-fast categories. I have no algorithm that takes in code and spits out “high”, “medium”, or “low”. The categories blend together, of course, and where debatable code lands probably doesn’t matter much. I also oversimplify by treating risk as monolithic. In reality, some medium-risk code might be high risk with respect to certain types of failures, and I would tailor the type of testing to the blend of risks.


2. I test the high-risk code thoroughly. I spend most of the remaining time testing the medium-risk code. I don’t intentionally test the low-risk code. I might hope that it gets exercised incidentally by tests that target higher-risk code, but I will not make more than a trivial, offhand effort to cause that to happen.
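The effort split in steps 1 and 2 can be sketched as a simple budget allocation. A minimal Python example, where the module names, risk ratings and 70/30 weighting are all invented for illustration:

```python
# Hypothetical risk ratings for the modules under test.
modules = {
    "billing": "high",         # wrong answers cost users money
    "report_export": "medium",
    "splash_screen": "low",
}

# Spend most of the budget on high-risk code, the rest on medium,
# and deliberately none on low-risk code.
WEIGHTS = {"high": 0.7, "medium": 0.3, "low": 0.0}

def allocate(budget_hours):
    """Split a test-time budget across modules by risk category."""
    per_module = {}
    for risk, weight in WEIGHTS.items():
        count = sum(1 for r in modules.values() if r == risk)
        per_module[risk] = budget_hours * weight / count if count else 0.0
    return {m: per_module[r] for m, r in modules.items()}

print(allocate(40))
# {'billing': 28.0, 'report_export': 12.0, 'splash_screen': 0.0}
```

The exact weights matter less than the shape: low-risk code gets zero planned hours, matching the policy of only exercising it incidentally.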


3. When nearing the end of a testing effort (or some milestone within it), I’ll check coverage.

·         Since high-risk code is tested thoroughly, I expect good coverage and I handle missed coverage as described earlier.

·         I expect lower coverage for medium-risk code. I will scan the detailed coverage log relatively quickly, checking it to see whether I overlooked something – whether the missed coverage suggests cases that I’d really rather a customer weren’t the first person to try. I won’t spend any more time handling coverage results than I would for thorough testing (even though there’s more missed coverage to handle).

·         The coverage for low-risk code is pretty uninteresting. My curiosity might be piqued if a particular routine, say, was never entered. I might consider whether there’s an easy way to quickly try it out, but I won’t do more. So, again, coverage serves its purpose: I spend a little time using it to find omissions in my test design.
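The three bullets above amount to applying a different coverage bar per risk tier. A minimal Python sketch of that review step; the file names, coverage figures and thresholds are invented for illustration:

```python
# Hypothetical per-file coverage results (fraction of lines hit)
# and risk ratings from the earlier categorization.
coverage = {"billing.py": 0.96, "report_export.py": 0.55, "splash.py": 0.30}
risk = {"billing.py": "high", "report_export.py": "medium", "splash.py": "low"}

# A stricter bar for high-risk code, a looser one for medium,
# and no bar at all for low-risk code.
THRESHOLDS = {"high": 0.90, "medium": 0.60}

def needs_review(coverage, risk):
    """Return files whose missed coverage is worth investigating."""
    flagged = []
    for path, fraction in coverage.items():
        threshold = THRESHOLDS.get(risk[path])  # None for low risk
        if threshold is not None and fraction < threshold:
            flagged.append(path)
    return flagged

print(needs_review(coverage, risk))  # ['report_export.py']
```

Low-risk files never get flagged, which mirrors treating their coverage as merely a curiosity rather than a work item.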
