Triaged Tester

March 19, 2009

Testing Types & Testing Techniques

Filed under: Black Box Testing,General,Terminology — Triaged Tester @ 9:27 am

Testing types deal with what aspect of the computer software would be tested, while testing techniques deal with how a specific part of the software would be tested.

That is, testing types mean whether we are testing the function or the structure of the software. In other words, we may test each function of the software to see if it is operational or we may test the internal components of the software to check if its internal workings are according to specification.

On the other hand, ‘testing technique’ means what methods or ways will be applied, or what calculations will be done, to test a particular feature of the software (sometimes we test the interfaces, sometimes the segments, sometimes the loops, etc.).
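To make the distinction concrete, here is a minimal, hypothetical Python sketch (the function and test values are made up): the first check exercises the function purely through its inputs and outputs (a functional testing type), while the remaining checks target a specific internal structure, the loop boundary, which is a structural testing technique.

```python
# Hypothetical function under test: returns the largest value in a list.
def largest(values):
    result = values[0]
    for v in values[1:]:      # internal loop we may want to exercise
        if v > result:
            result = v
    return result

# Functional (black-box) check: only inputs and outputs matter.
assert largest([3, 7, 2]) == 7

# Structural (white-box) checks: chosen to exercise the loop
# zero times, once, and many times -- a loop-testing technique.
assert largest([5]) == 5               # loop body never runs
assert largest([5, 9]) == 9            # loop body runs once
assert largest([1, 4, 4, 2, 8]) == 8   # loop body runs many times
```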


March 18, 2009

Stop Testing

Filed under: Black Box Testing,General,Test Management — Triaged Tester @ 9:14 am

Time and again, the most important question that haunts a tester is: when do you stop testing?

Well, there is no right or wrong answer to this. But you can definitely agree on when to stop testing using these items:

1. All high priority bugs are fixed

2. The bug convergence shows good results

3. ZBB (Zero Bug Bounce) has been achieved

4. The testing budget is exhausted

5. The project duration is completed 🙂

6. The risk in the project is under an acceptable limit

Practically, item #6 is the main and most acceptable criterion for stopping testing. Now, what risks need to be monitored to answer this? I would go with test coverage, the number of test cycles, and the priority of open bugs.
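As a purely illustrative sketch of how these criteria and risk indicators might be combined into a stop/continue decision, here is a small Python function. The field names and thresholds are hypothetical, not a standard exit-criteria formula.

```python
# Illustrative sketch only -- thresholds and field names are hypothetical,
# not part of any standard exit-criteria formula.
def should_stop_testing(metrics):
    """Return True when the listed exit criteria are all satisfied."""
    return (
        metrics["open_high_priority_bugs"] == 0     # item 1
        and metrics["new_bugs_last_cycle"] <= 2     # item 2: convergence
        and metrics["zero_bug_bounce_reached"]      # item 3
        and metrics["test_coverage_pct"] >= 85      # risk: coverage
        and metrics["test_cycles_completed"] >= 3   # risk: cycles run
    )

print(should_stop_testing({
    "open_high_priority_bugs": 0,
    "new_bugs_last_cycle": 1,
    "zero_bug_bounce_reached": True,
    "test_coverage_pct": 90,
    "test_cycles_completed": 4,
}))  # -> True
```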

March 16, 2009

Test Activities during phases

Filed under: Black Box Testing,Checklist,General,Guidelines,Test Plan,Tips — Triaged Tester @ 8:51 am

Test activities vary with the development model and the type of project, so here is a generic list of items that need to be done. You can always carry them out at any point when enough data is available.

1. Requirements Phase:

  • Invest in analysis at the beginning of the project
  • Start developing the test set during the requirements analysis phase
  • Analyse the correctness, consistency and completeness of the requirements

2. Design Phase:

  • Analysis of design to check its completeness and consistency
  • Analysis of design to check whether it satisfies the requirements
  • Generation of test data based on design
  • Setting up of test bed

3. Programming/Coding Phase:

  • Check code for consistency with design
  • Perform testing in an organized manner – buddy testing, feature testing, integration testing, system testing, etc.
  • Use available tools
  • Apply stress to the program
  • Test one thing at a time
  • Measure test coverage

4. Maintenance Phase:

  • Retest/regress (see the sketch below)
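As a minimal sketch of the retest/regress step, the snippet below shows a small regression suite using only the Python standard library. The function and the recorded expected values are hypothetical: the idea is simply to re-run the cases that passed before the maintenance change alongside a new case covering the reported defect.

```python
# Minimal regression suite sketch (standard library only); the function
# and the recorded expected values are hypothetical examples.
import unittest

def apply_discount(price, percent):
    """Function that was patched during maintenance."""
    return round(price * (1 - percent / 100), 2)

class RegressionTests(unittest.TestCase):
    def test_previous_behaviour_still_holds(self):
        # Cases that passed before the change are re-run unchanged.
        self.assertEqual(apply_discount(100.0, 10), 90.0)
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_fix_for_reported_bug(self):
        # New case covering the defect that triggered the maintenance work.
        self.assertEqual(apply_discount(50.0, 100), 0.0)

if __name__ == "__main__":
    unittest.main()
```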

February 12, 2009

Software Measurement

Filed under: General,Metrics — Triaged Tester @ 1:05 pm
Tags:

Bill Hetzel once said, “It’s easy to get numbers; what is hard is to know they are right and understand what they mean.”

So what is Software Measurement?

      It’s “quantified observation” about any aspect of software (product, process or project)

There are lots and lots of measures. Primitive measures include: number of staff assigned to project A, pages of requirements specifications, hours worked to accomplish change request X, number of operational failures in system Y this year, and lines of code in program Z.

Some computed measures include: defects per 1,000 lines of code in program A, productivity in function points delivered by person B, quality score for project C, average cigarette consumption per line of code, and accuracy of hours worked per week (±20%).
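To show how primitive counts combine into computed measures, here is a tiny Python sketch; the raw numbers are invented for illustration.

```python
# Hypothetical raw numbers -- the point is only how primitive counts
# combine into computed measures.
defects_found = 42
lines_of_code = 12_500
function_points_delivered = 96
person_months = 8

defects_per_kloc = defects_found / (lines_of_code / 1000)
productivity_fp_per_month = function_points_delivered / person_months

print(f"Defects per KLOC: {defects_per_kloc:.2f}")                            # 3.36
print(f"Function points per person-month: {productivity_fp_per_month:.1f}")   # 12.0
```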


But what makes a good measure is that it should be simple, objective, easily collected, robust, and valid.

January 24, 2009

Performance Testing

Filed under: General,Performance — Triaged Tester @ 7:12 am

Performance tests deal with how well a set of functions or operations works, rather than whether it works, which is the concern of feature tests. Whether a performance test passes or fails depends on whether the specific set of functions or operations delivers the functionality within the required performance bounds. The performance release criteria of the product have to be validated with tests categorized as performance tests.


Since performance tests need to reveal the true performance characteristics of the product, the test code that measures the performance needs to be clean and lean. In other words, one needs to be careful not to introduce any overhead that is not related to the product’s true functionality. Validation and error checking need to be done in a way that does not affect the true performance measurement of the product.
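A minimal sketch of keeping the timed region lean: only the operation under test sits inside the measured window, and validation runs after the clock has stopped. The function name and payload are stand-ins, not part of any real product.

```python
# Sketch of keeping the timed region lean: validation happens outside the
# measured window, so only the operation under test is timed.
# 'operation_under_test' is a stand-in for whatever product call is measured.
import time

def operation_under_test(payload):
    return sorted(payload)          # placeholder for the real product call

payload = list(range(100_000, 0, -1))

start = time.perf_counter()
result = operation_under_test(payload)   # only this call is measured
elapsed = time.perf_counter() - start

# Validation and error checks are done after the clock has stopped.
assert result[0] == 1 and result[-1] == 100_000

print(f"Elapsed: {elapsed * 1000:.2f} ms")
```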


Reporting on a performance test is far more demanding than reporting on a feature test. A report on the performance results not only needs to state whether the performance tests pass or fail against a specific set of criteria, but also needs to contain enough detail for a developer or reader to understand what the performance characteristics are. The analysis of results and interpretation of data are of significant importance to a performance test and need to be handled with great care.


A performance test needs to be clear on:

1.)  What is the objective of the test? What is it for, and why do you want to do it?

2.)  What are the key primitives/actions that the test targets?

3.)  How is it related to release criteria/customer impact?

4.)  What are the performance metrics that are meaningful to this operation, e.g. latency, throughput? (A computation sketch follows this list.)

5.)  What bottlenecks should the tests identify?

6.)  What is the clear context under which this test is conducted? Single user or multiple concurrent users?

7.)  Resource utilization: CPU, disk I/Os, memory consumption, network throughput/usage…

8.)  What are the environment parameters in which the results were obtained? Large customer set or data center? What hardware was used?
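Once a run is complete, the raw timings still have to be turned into the metrics the report needs, such as the latency and throughput mentioned in item 4. Here is a small Python sketch of that computation; the sample latencies and run duration are made up.

```python
# Sketch of turning raw timings into the metrics a performance report needs
# (latency percentiles and throughput); the sample data is made up.
import statistics

latencies_ms = [12.1, 11.8, 13.4, 12.6, 55.2, 12.0, 12.9, 13.1, 12.4, 12.7]
total_duration_s = 2.0           # wall-clock time of the whole run
requests = len(latencies_ms)

report = {
    "mean_ms": statistics.mean(latencies_ms),
    "p95_ms": sorted(latencies_ms)[int(0.95 * (requests - 1))],
    "max_ms": max(latencies_ms),
    "throughput_rps": requests / total_duration_s,
}

for metric, value in report.items():
    print(f"{metric}: {value:.2f}")
```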

