Triaged Tester

February 25, 2009

Metrics for Evaluating System Testing

Filed under: Metrics,Quality,Tips — Triaged Tester @ 1:13 pm
Tags:

Metric = Formula

Test Coverage = Number of units tested / total size of the system, where size is measured in KLOC (thousands of lines of code) or FP (function points)

Number of tests per unit size = Number of test cases per KLOC or FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving a rating on a scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity is measured in two ways:

Test Planning Productivity = No. of test cases designed / Actual effort for design and documentation

Test Execution Productivity = No. of test cycles executed / Actual effort for testing

February 19, 2009

Test Metric

Filed under: Checklist,Guidelines,Metrics — Triaged Tester @ 1:09 pm
Tags:
PRODUCT
Each metric below is described by its definition, its purpose, and how to calculate it.

Number of remarks
Definition: The total number of remarks found in a given time period, phase or test type. A remark is a claim made by a test engineer that the application shows undesired behavior; it may or may not result in a software or documentation change.
Purpose: One of the earliest indicators to measure once testing commences; gives an initial indication of the stability of the software.
How to calculate: Total number of remarks found.

Number of defects
Definition: The total number of remarks found in a given time period, phase or test type that resulted in software or documentation modifications.
Purpose: A more meaningful way of assessing the stability and reliability of the software than the number of remarks.
How to calculate: Count only remarks that resulted in modifying the software or the documentation, after duplicate and rejected remarks have been eliminated.

Remark status
Definition: The status of a remark; the exact values depend on the defect-tracking tool used. Broadly, the following statuses are available: To be solved (logged by the test engineer and waiting to be taken up by the software engineer), To be retested (solved by the developer and waiting to be retested by the test engineer), Closed (retested by the test engineer and approved).
Purpose: Tracks progress in entering, solving and retesting remarks; shows how many remarks are logged, solved, waiting to be resolved and retested.
How to calculate: Normally obtained directly from the defect-tracking system, based on the remark status.

Defect severity
Definition: The severity level of a defect indicates its potential business impact for the end user (business impact = effect on the end user x frequency of occurrence).
Purpose: Indicates the quality of the product under test: many high-severity defects mean low product quality, and vice versa. At the end of a phase, this information helps make the release decision.
How to calculate: Every defect carries a severity level; broadly, these are Critical, Serious, Medium and Low.

Defect severity index
Definition: An index representing the average severity of the defects.
Purpose: Provides a direct measurement of the quality of the product, specifically reliability, fault tolerance and stability.
How to calculate: Assign a number to each severity level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply each defect by its severity number, add the totals, then divide by the total number of defects.

Time to find a defect
Definition: The effort required to find a defect.
Purpose: Shows how fast defects are being found; indicates the correlation between test effort and the number of defects found.
How to calculate: Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.

Time to solve a defect
Definition: The effort required to resolve a defect (diagnosis and correction).
Purpose: Indicates the maintainability of the product; can be used to estimate projected maintenance costs.
How to calculate: Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period.

Test coverage
Definition: The extent to which testing covers the product’s complete functionality.
Purpose: Indicates the completeness of the testing, not its effectiveness; can be used as a criterion to stop testing.
How to calculate: Coverage can be measured against requirements, a functional topic list, business flows, use cases, etc. Divide the number of items covered by the total number of items.

Test case effectiveness
Definition: The extent to which test cases are able to find defects.
Purpose: Indicates the effectiveness of the test cases and the stability of the software.
How to calculate: Ratio of the number of test cases that resulted in logged remarks to the total number of test cases.

Defects/KLOC
Definition: The number of defects per 1,000 lines of code.
Purpose: Indicates the quality of the product under test; can serve as a basis for estimating defects in the next phase or version.
How to calculate: Ratio of the number of defects found to the total number of lines of code, in thousands.
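The defect severity index is just a weighted average. A small Python sketch, assuming the 4/3/2/1 weights described above (the label strings are illustrative):

```python
# Severity weights as described above: Critical=4, Serious=3, Medium=2, Low=1
SEVERITY_WEIGHT = {"critical": 4, "serious": 3, "medium": 2, "low": 1}

def defect_severity_index(defects):
    """Weighted average severity of a list of defect severity labels."""
    total_weight = sum(SEVERITY_WEIGHT[d] for d in defects)
    return total_weight / len(defects)

defects = ["critical", "serious", "serious", "medium", "low", "low"]
print(defect_severity_index(defects))  # (4+3+3+2+1+1)/6, roughly 2.33
```

A rising index over successive test cycles suggests the remaining defects are getting more serious, not less.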
PROJECT
Workload capacity ratio
Definition: Ratio of the planned workload to the gross capacity for the total test project or phase.
Purpose: Helps detect issues related to estimation and planning; also serves as input for estimating similar projects.
How to calculate: Usually computed at the beginning of a phase or project. Workload is the number of tasks multiplied by their norm times; gross capacity is the planned working time. The ratio is workload divided by gross capacity.

Test planning performance
Definition: The planned effort compared to the actual effort.
Purpose: Shows how well estimation was done.
How to calculate: Ratio of the actual effort spent to the planned effort.

Test effort percentage
Definition: Test effort is the amount of work spent, in hours, days or weeks. Overall project effort is divided among the phases of the project: requirements, design, coding, testing and so on.
Purpose: The effort spent on testing relative to the development activities indicates the level of investment in testing; it can also be used to estimate similar projects in the future.
How to calculate: Divide the overall test effort by the total project effort.

Defect category
Definition: An attribute of the defect relating it to a quality attribute of the product, such as functionality, usability, documentation, performance, installation or internationalization.
Purpose: Provides insight into the different quality attributes of the product.
How to calculate: Divide the number of defects in a particular category by the total number of defects.
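The defect category breakdown can be computed directly from a defect log. A short Python sketch (the category names are illustrative):

```python
from collections import Counter

def defect_category_breakdown(defects):
    """Fraction of defects falling into each quality-attribute category."""
    counts = Counter(defects)
    total = len(defects)
    return {category: n / total for category, n in counts.items()}

log = ["functionality", "functionality", "usability", "performance"]
print(defect_category_breakdown(log))
# {'functionality': 0.5, 'usability': 0.25, 'performance': 0.25}
```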
PROCESS
Should be found in which phase
Definition: An attribute of the defect indicating in which phase the remark should have been found.
Purpose: Shows whether the right defects are found in the right phase, as described in the test strategy; indicates the percentage of defects that migrate into subsequent test phases.
How to calculate: Count the number of defects that should have been found in previous test phases.

Residual defect density
Definition: An estimate of the number of defects that remain unresolved in the released product.
Purpose: The goal is to reach a defect level acceptable to the clients; defects are removed in each test phase so that few remain.
How to calculate: This is a tricky issue. Released products provide a basis for estimation; for new versions, industry standards coupled with project specifics form the basis.

Defect remark ratio
Definition: Ratio of the number of remarks that resulted in software modification to the total number of remarks.
Purpose: Indicates the level of understanding between the test engineers and the software engineers about the product, and indirectly indicates test effectiveness.
How to calculate: Divide the number of remarks that resulted in software modification by the total number of logged remarks. Valid for each test type, during and at the end of test phases.

Valid remark ratio
Definition: Percentage of valid remarks during a certain period. Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release.
Purpose: Indicates the efficiency of the test process.
How to calculate: Ratio of the total number of valid remarks to the total number of remarks found.

Bad fix ratio
Definition: Percentage of resolved remarks whose fixes created new defects.
Purpose: Indicates the effectiveness of the defect-resolution process, and indirectly the maintainability of the software.
How to calculate: Ratio of the total number of bad fixes to the total number of resolved defects. Can be calculated per test type, test phase or time period.

Defect removal efficiency
Definition: The number of defects removed per unit of time (hours, days or weeks).
Purpose: Indicates the efficiency of the defect removal methods, and indirectly measures the quality of the product.
How to calculate: Divide the effort spent on defect detection, defect resolution and retesting by the number of remarks. Calculated per test type, during and across test phases.

Phase yield
Definition: The number of defects found during a phase of the development life cycle versus the number of defects estimated at the start of the phase.
Purpose: Shows the effectiveness of defect removal and directly measures product quality; can be used to estimate the number of defects for the next phase.
How to calculate: Divide the number of defects found by the total number of estimated defects. Can be used during a phase and at its end.

Backlog development
Definition: The number of remarks that are yet to be resolved by the development team.
Purpose: Indicates how well the software engineers are coping with the testing effort.
How to calculate: Count the remarks that remain to be resolved.

Backlog testing
Definition: The number of resolved remarks that are yet to be retested by the test team.
Purpose: Indicates how well the test engineers are coping with the development effort.
How to calculate: Count the remarks that have been resolved but not yet retested.

Scope changes
Definition: The number of changes made to the test scope.
Purpose: Indicates requirements stability or volatility, as well as process stability.
How to calculate: Ratio of the number of changed items in the test scope to the total number of items.
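Two of the process ratios above, sketched in Python (the example figures are invented for illustration):

```python
def valid_remark_ratio(defects, duplicates, deferred, total_remarks):
    """Valid remarks = defects + duplicates + remarks deferred to a later release."""
    return (defects + duplicates + deferred) / total_remarks

def bad_fix_ratio(bad_fixes, resolved_defects):
    """Fraction of fixes that introduced new defects while resolving old ones."""
    return bad_fixes / resolved_defects

# Example: 100 remarks logged, of which 80 became defects, 10 were duplicates,
# 5 were deferred; 6 of 120 resolved defects came back as bad fixes.
print(valid_remark_ratio(defects=80, duplicates=10, deferred=5, total_remarks=100))  # 0.95
print(bad_fix_ratio(bad_fixes=6, resolved_defects=120))  # 0.05
```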

February 18, 2009

Quality Factors

Filed under: Metrics,Quality — Triaged Tester @ 1:08 pm
Tags: ,

Some sample quality factors are listed below. These quality factors form the basis for measurement paradigms.

  • Correctness
  • Reliability
  • Testability
  • Flexibility
  • Usability
  • Portability
  • Interoperability
  • Efficiency
  • Integrity
  • Maintainability
  • Reusability
  • Survivability

A whole lot of “ility” factors there 🙂

February 17, 2009

Use of Metrics

Filed under: Metrics,Tips — Triaged Tester @ 1:08 pm
Tags: ,

What Can Measures Do for You?

  • Facilitate estimation
  • Identify risky areas
  • Measure testing status
  • Measure/predict product quality
  • Measure test effectiveness
  • Identify training opportunities
  • Identify process improvement opportunities
  • Provide “meters” to flag actions

February 12, 2009

Software Measurement

Filed under: General,Metrics — Triaged Tester @ 1:05 pm
Tags:

Bill Hetzel once said, “It’s easy to get numbers; what is hard is to know they are right and to understand what they mean.”

So what is Software Measurement?

      It’s “quantified observation” about any aspect of software (product, process or project)

There are lots and lots of measures. Primitive measures include:

  • Number of staff assigned to project A
  • Pages of requirements specifications
  • Hours worked to accomplish change request X
  • Number of operational failures in system Y this year
  • Lines of code in program Z

Some computed measures include:

  • Defects per 1,000 lines of code in program A
  • Productivity in function points delivered by person B
  • Quality score for project C
  • Average cigarette consumption per line of code
  • Accuracy of hours worked per week (±20%)

But what makes a good measure is that it should be simple, objective, easily collected, robust and valid.
