Triaged Tester

January 27, 2009

Stress Testing

Filed under: Stress — Triaged Tester @ 7:16 am
Tags:

Stress testing is targeted at finding issues with the stability and robustness of the product, such as access violations (AVs), resource leaks, deadlocks, and various other issues/inconsistencies that show up under stress/load conditions.

 

Most of the time, stress tests run in combination with other diagnostic utilities such as a debugger, page heap, fault injection tools/instrumentation, a performance monitor, and any other tracing and logging tools deemed useful for debugging and analysis. These provide crucial information for developers and testers when analyzing the final outcome of the run.

 

Feature validation is an important part of stress, though it is sometimes not done as much as it should be. As a good practice, every stress run should include feature validation to ensure the system behaves consistently under load.
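As a loose illustration, feature validation can be folded directly into the stress loop rather than deferred to the end of the run. The sketch below is a minimal Python example; `do_operation` and `validate_state` are hypothetical placeholders for a product-specific action and its consistency check.

```python
import concurrent.futures

# Hypothetical hooks into the product under test -- replace with real primitives.
def do_operation(worker_id, iteration):
    """Drive one unit of load against the product."""
    ...

def validate_state(worker_id, iteration):
    """Check that the feature still behaves correctly under load."""
    return True  # return False (or raise) when an inconsistency is found

def stress_worker(worker_id, iterations=10_000, validate_every=100):
    failures = []
    for i in range(iterations):
        do_operation(worker_id, i)
        # Feature validation happens inside the stress loop itself,
        # not only after the run is over.
        if i % validate_every == 0 and not validate_state(worker_id, i):
            failures.append((worker_id, i))
    return failures

def run_stress(workers=8):
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        per_worker = list(pool.map(stress_worker, range(workers)))
    return [f for worker_failures in per_worker for f in worker_failures]
```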

 

Stress tests should also focus on negative conditions. Systems tend to work fine when everything is tested under valid conditions and tend to become unstable when error paths are hit. It is very important that these error conditions are included in the stress scenarios.
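For instance, a stress mix can deliberately weight invalid inputs and fault scenarios alongside the valid operations. A rough sketch, where the action names and the `dispatch` table are hypothetical:

```python
import random

# Hypothetical action mix: most iterations exercise valid operations, but a
# deliberate fraction drives error paths (bad input, missing resource, etc.).
ACTIONS = [
    ("valid_write",        0.70),
    ("write_invalid_data", 0.10),   # expect a clean rejection, not a crash
    ("read_missing_key",   0.10),   # expect a well-defined "not found" error
    ("disconnect_midway",  0.10),   # expect recovery, no leak or deadlock
]

def pick_action():
    names, weights = zip(*ACTIONS)
    return random.choices(names, weights=weights, k=1)[0]

def run_mixed_stress(iterations, dispatch):
    """dispatch maps an action name to a callable that performs it."""
    for _ in range(iterations):
        action = pick_action()
        try:
            dispatch[action]()
        except Exception as exc:
            # An unexpected exception type on an error path is itself a bug:
            # error paths should fail gracefully, not destabilize the system.
            print(f"unexpected failure in {action}: {exc!r}")
```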

 

Stress tests need to be clear on:

1.)  What is the objective of the test? What is it for, and why do you want to run it?

2.)  What is the key primitive/action being targeted, and which code path does it cover?

3.)  How does it relate to release criteria/customer impact? What is the duration of the runs?

4.)  Stability of the runs, and issues/bugs: AVs, leaks, deadlocks, data integrity issues, or any other feature inconsistency that occurred during the stress?

5.)  Not often done, but it is also a good practice to report the performance patterns observed during the stress, though the perf numbers obtained under stress are tainted by the instrumentation tools used.

6.)  What are the locations/links to dumps, perf logs, trace files …, or simply the offending call stack…

7.)  What is the clear context under which this test is conducted? Single user or multiple concurrent users?

8.)  Perf counters recording resource utilization during the stress run: CPU, disk I/Os, memory consumption, network throughput/usage… (a recording sketch follows this list)

9.)  The environment parameters in which the results were obtained? Large customer sets/data center?

10.)  What is the hardware used?
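For point 8 above, recording resource utilization for the duration of the run can be as simple as a background sampler writing counters to a CSV. A minimal sketch, assuming the third-party psutil package is available; on a Windows stress lab the same data would typically come from Performance Monitor counter logs instead.

```python
import csv
import time

import psutil  # third-party; on Windows, PerfMon counter logs are a common alternative

def sample_counters(path, interval_s=5, duration_s=3600):
    """Append one row of resource-utilization counters every interval_s seconds."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "cpu_percent", "mem_used_bytes",
                         "disk_read_bytes", "disk_write_bytes",
                         "net_sent_bytes", "net_recv_bytes"])
        end = time.time() + duration_s
        while time.time() < end:
            disk = psutil.disk_io_counters()
            net = psutil.net_io_counters()
            writer.writerow([time.time(),
                             psutil.cpu_percent(interval=None),
                             psutil.virtual_memory().used,
                             disk.read_bytes, disk.write_bytes,
                             net.bytes_sent, net.bytes_recv])
            f.flush()
            time.sleep(interval_s)
```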

January 26, 2009

Test Bed Architecture

Filed under: Black Box Testing,Test Bed — Triaged Tester @ 7:15 am
Tags:

Typical test bed architecture of test teams in SDLC

[Diagram: typical test bed architecture]

January 25, 2009

Automation – Do’s & Don’ts

Filed under: Automation — Triaged Tester @ 7:14 am
Tags:

On average, a majority of the test cases in a given project are automated, so proper thought should be given to the general practices involved in automation. Automation in the current infrastructure involves a test case manager, a test harness, logging infrastructure, and lab hardware. Good automation requires that these components be as robust as possible. It also requires test code developers to follow the best practices outlined here.

 

Don’ts:

Do not develop tests specific to the hardware.

Do not use custom harnesses unless approved by your lead.

Tests should not require a reboot of the machine.

Avoid restarting services.

Avoid sleep statements in test code.

Test Cases should not result in false failures or passes.

Do not implement the harness’s functionality in your test code.

No hard coded dependency on external resources.

 

Do’s:

Validate that the tests run on a typical lab machine.

Use the templates provided by the operations team for test development.

Develop with the harness provided by the operations team.

Use the layered approach. Driver and functionality components.

Cases should be self-verifiable.

Cases should behave deterministically.

Cases should use setup and cleanup functions.

Random data generation should produce predictable/same values for the same seed (see the sketch after this list).

Ensure that Setup function can be called consecutively without any side effects.

Tests should run on all supported hardware.

Tests should always return a result.
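A minimal sketch of a few of these practices in Python: seeded random data for reproducibility, an idempotent setup, polling with a timeout instead of sleeping, and a self-verifying case that always returns a result. The harness result constants and the `store` interface are hypothetical.

```python
import random
import time
from pathlib import Path

PASS, FAIL = "Pass", "Fail"  # hypothetical harness result constants

def setup(workdir: Path):
    """Idempotent setup: safe to call repeatedly with no side effects."""
    workdir.mkdir(parents=True, exist_ok=True)

def wait_for(condition, timeout_s=30.0, poll_s=0.5):
    """Poll with a timeout instead of a fixed sleep; return as soon as the condition holds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_s)
    return False

def test_roundtrip(store, seed=1234):
    # `store` is a hypothetical product interface used purely for illustration.
    # The same seed always yields the same data, so failures reproduce exactly.
    rng = random.Random(seed)
    payload = bytes(rng.getrandbits(8) for _ in range(1024))
    store.put("key", payload)
    if not wait_for(lambda: store.exists("key")):
        return FAIL                      # always return a result, never hang silently
    return PASS if store.get("key") == payload else FAIL   # self-verifying
```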

January 24, 2009

Performance Testing

Filed under: General,Performance — Triaged Tester @ 7:12 am
Tags:

Performance tests deal with how well a set of functions or operations works, rather than whether it works correctly, as feature tests do. Whether a performance test passes or fails depends on whether the specific set of functions or operations delivers the functionality in a performant way. The performance release criteria of the product have to be validated with tests categorized as performance tests.

 

Since performance tests need to reveal the true performance characteristics of the product, the test code that measures performance needs to be clean and lean. In other words, one needs to be careful not to introduce any overhead that is not related to the product’s true functionality. Validation and error checking need to be done in a way that does not affect the true performance measurement of the product.
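One common way to keep the measurement lean is to time only the operation itself and keep warm-up, validation, and error checking outside the timed region. A minimal sketch, with `operation` and `validate_result` as hypothetical placeholders:

```python
import time

def measure(operation, validate_result, warmup=5, iterations=100):
    """Time only the operation; warm-up and validation stay outside the timed region."""
    for _ in range(warmup):
        operation()                          # warm caches before measuring

    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        result = operation()                 # only the product work is timed
        samples.append(time.perf_counter() - start)
        validate_result(result)              # correctness check, excluded from timing
    return samples
```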

 

Reporting on a performance test is far more demanding than on a feature test. A report on the performance results needs not only to state whether the tests pass or fail against a specific set of criteria, but also to include enough detail for a developer or reader to understand the performance characteristics. Results analysis and data interpretation are of significant importance to a performance test and need to be handled with great care.

 

Performance tests need to be clear on:

1.)  What is the objective of the test? What is it for, and why do you want to run it?

2.)  What are the key primitives/actions that the test targets?

3.)  How is it related to release criteria/customer impact?

4.)  What are the performance metrics that are meaningful to this operation, e.g. latency, throughput? (A summarization sketch follows this list.)

5.)  What bottlenecks would be identified for the tests?

6.)  What is the clear context under which this test is conducted? Single user or multiple concurrent users?

7.)  Resource utilization: CPU, disk I/Os, memory consumption, network throughput/usage…

8.)  The environment parameters in which the results were obtained? Large customer sets/data center?

9.)  What is the hardware used?
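For point 4 above, latency and throughput can be summarized from the same raw per-operation samples; percentiles usually matter more than the mean for release criteria. A minimal sketch, assuming `samples` is a list of per-operation latencies in seconds (for example, from the measurement sketch above):

```python
import statistics

def summarize(samples):
    """Reduce per-operation latencies (in seconds) to the metrics usually reported."""
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return {
        "count": len(samples),
        "mean_ms": statistics.mean(samples) * 1000,
        "p95_ms": p95 * 1000,
        "max_ms": max(samples) * 1000,
        # Single-stream throughput; for concurrent runs divide by wall-clock time instead.
        "throughput_ops_per_s": len(samples) / sum(samples),
    }
```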

January 23, 2009

Test Team deliverables

Filed under: Deliverables — Triaged Tester @ 7:11 am
Tags:

The specific deliverables may vary from project to project. Below is just a guideline.

 

Product Definition Process: Early Product Planning – Plan out what we want

  • If not currently delivering a product, QA should be developing knowledge about
    • New Technologies the product will be based on
    • Customer segment to be addressed
  • Requirements: Nail down product and project requirements (Functional specifications)
    • High Level QA Plan (testing practices, tools, approach, standards) – QA Mgr
    • High Level Resource estimations based on Feature lists – QA Mgr
    • Review Test Plan template and propose any changes
  • Design:  Design the architecture and testing of the product (Design Documents and Test Plans)
    • Test Specs complete, reviewed and signed off
    • Bottoms up test schedules created and reviewed
    • BVTs and Automation designed
    • Security Testing planned
    • Globalization/Localization Testing planned
    • Performance / Scale Testing planned

  • Implementation: Code the product/application (Code Complete)
    • BVTs created and automated
    • All BVTs pass 100% on 100% of features
    • Basic end-to-end scenarios pass
    • Test cases defined
    • Release Criteria Created and Signed off
  • Verification: Stabilize the product (Stabilization/Technical Preview)
    • Test automation complete
    • Security test pass
    • Globalization / Localization Validation
    • First Performance and Soak Validation
    • Evaluate Code Coverage and add Cases where needed
    • Customer Support / help and Evaluation of problems found in field
  • Evaluate the product (Beta)
    • Review Test Coverage and enhance test sets where needed
    • Expansion of platform on which automation is run
    • Large Scale Deployment Validation
    • Design change Requests
      • Speclette Reviews
      • Test Planlette creation
      • Cost Estimation
      • Approval
      • Test Execution
    • Customer / Support and Evaluation of problems found in field
    • Documentation Reviews
  • Release Candidate:  Produce Release candidate and drop to end users
    • Customer Support and Evaluation of problems found in field
    • Continued automation
  • Ship the product (RTM, RTW, RTS)
    • Release Validation
    • Prepare for Sustained Engineering Handoffs
  • End of Life Process Post-RTM – Clean up and distribution.
    • Automation handoff to Sustained Engineering
    • Support Sustained Engineering
      • Regression support for enhanced feature request
      • Problem identification for problems found in field 
