Triaged Tester

March 16, 2009

Test Activities during phases

Filed under: Black Box Testing,Checklist,General,Guidelines,Test Plan,Tips — Triaged Tester @ 8:51 am

Test activities vary with the development model and with the type of project. Here is a generic list of activities to perform; each can be started at any point once enough data is available.

1. Requirements Phase:

  • Invest in analysis at the beginning of the project
  • Start developing the test set during requirements analysis
  • Analyse the requirements for correctness, consistency and completeness

2. Design Phase:

  • Analysis of design to check its completeness and consistency
  • Analysis of design to check whether it satisfies the requirements
  • Generation of test data based on design
  • Setting up of test bed

3. Programming/Coding Phase:

  • Check the code for consistency with the design
  • Perform testing in an organized manner: buddy testing, feature testing, integration testing, system testing, etc.
  • Use available tools
  • Apply stress to the program
  • Test one thing at a time
  • Measure test coverage

4. Maintenance Phase:

  • Retest/Regress
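The maintenance-phase retest/regress step can be sketched as a small regression check: rerun the same stored inputs through the changed code and compare against baseline outputs captured from the previous release. This is a minimal Python sketch; the `discount` function and the baseline values are illustrative, not from the post.

```python
def discount(price, rate):
    """Function under maintenance: apply a percentage discount."""
    return round(price * (1 - rate), 2)

# Baseline captured from the previous release: input args -> expected output.
baseline = {
    (100.0, 0.10): 90.0,
    (59.99, 0.25): 44.99,
    (10.0, 0.0): 10.0,
}

def regress(func, baseline):
    """Return the cases whose output changed since the baseline."""
    return [args for args, expected in baseline.items()
            if func(*args) != expected]

failures = regress(discount, baseline)
print("regressions:", failures)  # an empty list means no regressions
```

In practice the baseline would be stored test results from the last shipped version, and any non-empty result flags behavior that changed during maintenance.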

March 11, 2009

How to Implement Test Automation Framework Methodology

Filed under: Automation,Checklist,Strategies,Tips — Triaged Tester @ 6:10 am

1. Identification of the scope of testing: company-oriented, product-oriented or project-oriented.
2. Identification of the needs of testing: identify the types of testing (e.g. FT, web services) and the applications/modules to be tested.
3. Identification of the requirements of testing: find out the nature of the requirements, identify the type of actions for each requirement and identify the high-priority requirements.
4. Evaluation of the test automation tool: prepare an evaluation checklist, identify the candidate tools available, do a sample run, rate and select the tool, then implement it and train the team.
5. Identification of the actions to be automated: actions, validations and requirements supported by the tool.
6. Design of the test automation framework: framework guidelines, validations, actions involved, systems involved, tool extensibility support, custom messages and UML documentation.
7. Design of the input data bank: identification of the types of input file, categorization and design of file prototypes.
8. Development of the automation framework: scripts based on the framework design, driver scripts, worker scripts, record/playback, screen/window/transaction, and action/keyword and data-driven approaches.
9. Population of the input data bank: different types of data input, population of data from different data sources, manual input of data and parent-child data hierarchy.
10. Configuration of the schedulers: identify scheduler requirements and configure the schedulers.
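The driver-script/worker-script split in steps 6–8 can be sketched in a few lines: the input data bank supplies rows of (keyword, arguments), and the driver dispatches each row to a worker function. The keywords and workers below are illustrative placeholders, not part of any particular tool.

```python
def open_app(name):
    """Worker script: launch the application under test (stubbed)."""
    return f"opened {name}"

def enter_text(field, value):
    """Worker script: type a value into a field (stubbed)."""
    return f"{field}={value}"

# Keyword -> worker mapping; new actions are added here, not in the driver.
WORKERS = {
    "open_app": open_app,
    "enter_text": enter_text,
}

def run(test_table):
    """Driver script: execute each keyword row and collect the results."""
    log = []
    for keyword, args in test_table:
        log.append(WORKERS[keyword](*args))
    return log

# A tiny data-driven test table, as it might come from the input data bank.
table = [
    ("open_app", ("calculator",)),
    ("enter_text", ("display", "2+2")),
]
print(run(table))
```

The design choice is that test cases become data (tables of keywords) rather than code, so non-programmers can add cases by editing the data bank.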

March 9, 2009

Recommended tools

Filed under: Automation,Checklist,Guidelines,Tips,Tool — Triaged Tester @ 1:23 pm
For each functionality: a description and representative tools.

  • Functional testing: Record-and-playback tools with scripting support aid in automating the functional testing of online applications. Representative tools: WinRunner, Rational Robot, SilkTest and QARun; tools like CA-Verify can be used in the mainframe environment.
  • Test management: Manage the test effort. Representative tool: TestDirector.
  • Test coverage analyzer: Reports from the tool provide coverage data per unit, such as function, program and application. Representative tool: Rational PureCoverage.
  • File comparators: Verify regression test results by comparing results from the original and changed applications. Representative tool: Comparex (from Sterling Software).
  • Load testing: Performance and scalability testing. Representative tools: LoadRunner, Performance Studio, SilkPerformer and QALoad.
  • Run-time error checking: Detect hard-to-find run-time errors, memory leaks, etc. Representative tool: Rational Purify.
  • Debugging tools: Simplify isolating and fixing errors. Representative tools: Xpediter and ViaSoft (mainframe applications), VisualAge debuggers, and the many debuggers that ship with development kits.
  • Test bed generator: Aids in preparing test data by analyzing program flows and conditional statements. Representative tool: CA-Datamacs.
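The file-comparator row above describes diffing the output of the original run against the output of the changed run. The same idea can be shown with only `difflib` from the Python standard library; the sample outputs are made up for illustration.

```python
import difflib

# Output captured from the original application vs. the changed one.
original = ["total=100", "tax=8", "grand=108"]
changed = ["total=100", "tax=9", "grand=109"]

diff = list(difflib.unified_diff(original, changed,
                                 fromfile="baseline", tofile="current",
                                 lineterm=""))
for line in diff:
    print(line)

# Keep only the added/removed lines; a non-empty list flags a regression
# (or an intended change whose baseline needs updating).
changed_lines = [l for l in diff
                 if l.startswith(("+", "-"))
                 and not l.startswith(("+++", "---"))]
```

Commercial comparators add masking of expected differences (dates, IDs) on top of this basic line diff.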

February 19, 2009

Test Metric

Filed under: Checklist,Guidelines,Metrics — Triaged Tester @ 1:09 pm
For each test metric: its definition, its purpose and how to calculate it.

Number of remarks
  • Definition: The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows undesired behavior; it may or may not result in a software or documentation change.
  • Purpose: One of the earliest indicators available once testing commences; gives an initial indication of the stability of the software.
  • How to calculate: The total number of remarks found.

Number of defects
  • Definition: The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications.
  • Purpose: A more meaningful measure of the stability and reliability of the software than the number of remarks, since duplicate and rejected remarks have been eliminated.
  • How to calculate: Count only the remarks that resulted in modifying the software or the documentation.

Remark status
  • Definition: The status of a remark; the exact statuses depend on the defect-tracking tool, but broadly they are: to be solved (logged by the test engineer and waiting to be taken over by a software engineer), to be retested (solved by the developer and waiting to be retested by the test engineer) and closed (retested by the test engineer and approved).
  • Purpose: Tracks progress in entering, solving and retesting remarks: how many are logged, solved, waiting to be resolved and retested.
  • How to calculate: Normally obtained directly from the defect-tracking system, based on the remark status.

Defect severity
  • Definition: The severity level of a defect indicates its potential business impact for the end user (business impact = effect on the end user × frequency of occurrence).
  • Purpose: Indicates the quality of the product under test: many high-severity defects mean low product quality, and vice versa. At the end of a phase this supports the release decision, based on the number of defects and their severity levels.
  • How to calculate: Every defect carries a severity level; broadly, these are Critical, Serious, Medium and Low.

Defect severity index
  • Definition: An index representing the average severity of the defects.
  • Purpose: A direct measurement of product quality, specifically reliability, fault tolerance and stability.
  • How to calculate: Assign a number to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply each defect by its severity number, add the totals, and divide by the total number of defects.

Time to find a defect
  • Definition: The effort required to find a defect.
  • Purpose: Shows how fast defects are being found; indicates the correlation between the test effort and the number of defects found.
  • How to calculate: Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.

Time to solve a defect
  • Definition: The effort required to resolve a defect (diagnosis and correction).
  • Purpose: Indicates the maintainability of the product and can be used to estimate projected maintenance costs.
  • How to calculate: Divide the hours spent on diagnosis and correction by the number of defects resolved during the same period.

Test coverage
  • Definition: The extent to which testing covers the product's complete functionality.
  • Purpose: Indicates the completeness of the testing, not its effectiveness; can be used as a criterion to stop testing.
  • How to calculate: Coverage can be measured against requirements, a functional topic list, business flows, use cases, etc.: the number of items covered vs. the total number of items.

Test case effectiveness
  • Definition: The extent to which test cases are able to find defects.
  • Purpose: Indicates the effectiveness of the test cases and the stability of the software.
  • How to calculate: The ratio of the number of test cases that resulted in logged remarks to the total number of test cases.

Defects/KLOC
  • Definition: The number of defects per 1,000 lines of code.
  • Purpose: Indicates the quality of the product under test; can be used as a basis for estimating defects to be addressed in the next phase or version.
  • How to calculate: The ratio of the number of defects found to the total number of lines of code, in thousands.

Workload capacity ratio
  • Definition: The ratio of the planned workload to the gross capacity for the total test project or phase.
  • Purpose: Helps detect estimation and planning issues; also serves as an input for estimating similar projects.
  • How to calculate: Usually computed at the beginning of the phase or project. Workload is the number of tasks multiplied by their norm times; gross capacity is the planned working time; the ratio is workload divided by gross capacity.

Test planning performance
  • Definition: The planned value related to the actual value.
  • Purpose: Shows how well estimation was done.
  • How to calculate: The ratio of the actual effort spent to the planned effort.

Test effort percentage
  • Definition: Test effort is the amount of work spent, in hours, days or weeks; overall project effort is divided among the project phases: requirements, design, coding, testing and so on.
  • Purpose: The effort spent on testing, relative to the effort spent on development activities, indicates the level of investment in testing; also usable for estimating similar future projects.
  • How to calculate: Divide the overall test effort by the total project effort.

Defect category
  • Definition: An attribute of the defect relating it to a quality attribute of the product, such as functionality, usability, documentation, performance, installation or internationalization.
  • Purpose: Provides insight into the different quality attributes of the product.
  • How to calculate: Divide the defects belonging to a particular category by the total number of defects.

Should be found in which phase
  • Definition: An attribute of the defect indicating the phase in which the remark should have been found.
  • Purpose: Shows whether the right defects are being found in the right phase, as described in the test strategy; indicates the percentage of defects migrating into subsequent test phases.
  • How to calculate: Count the defects that should have been found in previous test phases.

Residual defect density
  • Definition: An estimate of the number of defects that may remain unresolved in the released product.
  • Purpose: The goal is a defect level acceptable to the clients; defects are removed in each test phase so that few remain.
  • How to calculate: This is a tricky issue. Released products have a basis for estimation; for new versions, industry standards coupled with project specifics form the basis.

Defect remark ratio
  • Definition: The ratio of the number of remarks that resulted in software modification to the total number of remarks.
  • Purpose: Indicates the level of understanding between the test engineers and the software engineers about the product, and indirectly the test effectiveness.
  • How to calculate: The number of remarks that resulted in software modification vs. the total number of logged remarks; valid per test type, during and at the end of test phases.

Valid remark ratio
  • Definition: The percentage of valid remarks during a certain period, where valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release.
  • Purpose: Indicates the efficiency of the test process.
  • How to calculate: The ratio of the total number of valid remarks to the total number of remarks found.

Bad fix ratio
  • Definition: The percentage of resolved remarks whose fixes introduced new defects while resolving existing ones.
  • Purpose: Indicates the effectiveness of the defect-resolution process, and indirectly the maintainability of the software.
  • How to calculate: The ratio of the total number of bad fixes to the total number of resolved defects; can be calculated per test type, test phase or time period.

Defect removal efficiency
  • Definition: The number of defects removed per time unit (hours/days/weeks).
  • Purpose: Indicates the efficiency of the defect-removal methods, and indirectly the quality of the product.
  • How to calculate: Divide the effort spent on defect detection, defect resolution and retesting by the number of remarks; calculated per test type, during and across test phases.

Phase yield
  • Definition: The number of defects found during a phase of the development life cycle vs. the number of defects estimated at the start of the phase.
  • Purpose: Shows the effectiveness of defect removal; a direct measurement of product quality; can be used to estimate the number of defects for the next phase.
  • How to calculate: The ratio of the number of defects found to the total number of estimated defects; usable during a phase and at its end.

Backlog development
  • Definition: The number of remarks that are yet to be resolved by the development team.
  • Purpose: Indicates how well the software engineers are coping with the testing effort.
  • How to calculate: Count the remarks that remain to be resolved.

Backlog testing
  • Definition: The number of resolved remarks that are yet to be retested by the test team.
  • Purpose: Indicates how well the test engineers are coping with the development effort.
  • How to calculate: Count the remarks that have been resolved but not yet retested.

Scope changes
  • Definition: The number of changes that were made to the test scope.
  • Purpose: Indicates requirements stability or volatility, as well as process stability.
  • How to calculate: The ratio of the number of changed items in the test scope to the total number of items.
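Three of the formulas above (defect severity index, test case effectiveness and defects/KLOC) are plain arithmetic and can be sketched directly; the sample numbers below are made up for illustration.

```python
# Severity weights as defined for the defect severity index:
# 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low).
SEVERITY_WEIGHT = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

# Hypothetical defect list for one test phase.
defects = ["Critical", "Serious", "Serious", "Medium", "Low"]

# Defect severity index: weighted sum of defects / number of defects.
severity_index = sum(SEVERITY_WEIGHT[s] for s in defects) / len(defects)

# Test case effectiveness: cases that logged remarks / total cases.
effectiveness = 40 / 200  # 40 of 200 test cases found something

# Defects per KLOC: defects / (lines of code / 1000).
defects_per_kloc = len(defects) / (12500 / 1000)

print(severity_index, effectiveness, defects_per_kloc)
# -> 2.6 0.2 0.4
```

With these numbers the product sits between "Medium" and "Serious" on average, which is the kind of reading the severity index is meant to give at release time.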

February 10, 2009

Test Automation Methodology

Filed under: Automation,Checklist,Deliverables,Guidelines,Tips — Triaged Tester @ 1:03 pm




1. Identification of Automation Objectives / Test Requirements Study

  • Understand the objectives of automation in order to design an effective test architecture
  • Understand the test requirements/application requirements for automation

Deliverable: Test Automation Strategy Document

2. Identification of Tool

  • Normally done in the proposal stage itself
  • Apply the eTest Center tool-selection process to arrive at the best fit for the given automation objectives

3. Script Planning & Design

  • Design of the test architecture
  • Identification of reusable functions
  • Identification of required libraries
  • Identification of test initialization parameters
  • Identification of all the scripts
  • Preparation of the test design document, applying naming conventions
  • Test data planning

Deliverable: Design Document

4. Test Environment Setup

  • Hardware
  • Software: application, browsers
  • Test repositories
  • Version control repositories
  • Tool setup

5. Development of Libraries

  • Development of libraries
  • Debugging/testing of libraries

Phased delivery, if applicable

6. Development of Scripts

  • Development of scripts
  • Debugging/testing of scripts

Phased delivery, if applicable

7. Development of Test Suites

  • Integration of scripts into test suites
  • Debugging/testing of suites

Deliverable: Test Suites & Libraries

8. Deployment/Testing of Scripts

  • Deployment of test suites

Deliverables: Test Run Results, Defect Reports, Test Report
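The "integration of scripts into test suites" step can be sketched with the standard Python unittest module: individual automated scripts become test cases, and a builder function assembles them into one deployable suite. The two scripts here are trivial placeholders.

```python
import unittest

class LoginScript(unittest.TestCase):
    """One automated script, wrapped as a test case (placeholder check)."""
    def test_login(self):
        self.assertTrue(True)

class SearchScript(unittest.TestCase):
    """Another automated script (placeholder check)."""
    def test_search(self):
        self.assertEqual("abc".upper(), "ABC")

def build_suite():
    """Integrate the individual scripts into one test suite."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(LoginScript))
    suite.addTests(loader.loadTestsFromTestCase(SearchScript))
    return suite

# Deployment step: run the integrated suite and collect the results
# that feed the test run results / defect reports.
result = unittest.TextTestRunner(verbosity=0).run(build_suite())
print("run:", result.testsRun, "failures:", len(result.failures))
```

In a real framework each script would exercise the application through the chosen tool; the suite object is what the scheduler from the deployment step would invoke.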
