Triaged Tester

March 18, 2009

Stop Testing

Filed under: Black Box Testing,General,Test Management — Triaged Tester @ 9:14 am

Time and again, the most important question that haunts a tester is: when do you stop testing?

Well, there is no right or wrong answer to this. But you can definitely decide when to stop testing using these criteria:

1. All high-priority bugs are fixed

2. The bug convergence chart shows a good trend

3. ZBB (Zero Bug Bounce) has been achieved

4. The testing budget is exhausted

5. The project duration is completed 🙂

6. The risk in the project is within acceptable limits

Practically, item #6 is the main and most widely accepted criterion for stopping testing. So which risks need to be monitored to answer it? I would go with test coverage, the number of test cycles, and the priority of open bugs.
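The three risk items above can be combined into a single stop/continue check. This is a hypothetical sketch, not a formula from the post; the threshold values and function name are invented for illustration.

```python
# Illustrative sketch: deciding whether testing can stop based on the
# three risk items above. Thresholds here are made-up examples.

def can_stop_testing(open_priority1_bugs, coverage_pct, test_cycles_run,
                     min_coverage=85.0, min_cycles=3):
    """Return (decision, reasons): True only when every risk item is
    within its acceptable limit."""
    reasons = []
    if open_priority1_bugs > 0:
        reasons.append(f"{open_priority1_bugs} high-priority bug(s) still open")
    if coverage_pct < min_coverage:
        reasons.append(f"coverage {coverage_pct}% below target {min_coverage}%")
    if test_cycles_run < min_cycles:
        reasons.append(f"only {test_cycles_run} of {min_cycles} test cycles run")
    return (not reasons, reasons)

ok, why = can_stop_testing(open_priority1_bugs=0, coverage_pct=92.0,
                           test_cycles_run=3)
print(ok)   # all risk items within acceptable limits
```

In practice each team would plug in its own release criteria; the point is that the decision is a checklist, not a gut feeling.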


March 17, 2009

Deliverables @ various phases

Filed under: Black Box Testing,Deliverables,Test Management — Triaged Tester @ 9:10 am

This diagram does not depict when and where the test plan and test strategy documents are generated. Ideally, these documents are ready before you begin the test activities.

[Diagram: test deliverables at each phase]

February 4, 2009

Logging – Do’s & Don’ts

Filed under: Automation,Checklist,Guidelines — Triaged Tester @ 7:22 am

Logging is imperative for the successful release of any product. It is required to ensure that enough test cases are passing and that the release criteria are being met. Without proper logging, there is no way of determining whether a component meets the prescribed QA needs. The logging infrastructure has to be robust.

Error reporting is one of the crucial aspects of logging for test cases. Errors, warnings and informational messages should be context sensitive and point to the source of the problem in the test code. They should also be as unique as possible, so that a wildcard search on a result can point to the test suite and the test case the error message belongs to.
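One simple way to get that uniqueness is to stamp every error with its owning suite and case. A minimal sketch, assuming a suite/case naming scheme of my own invention (the bracketed format is illustrative, not a standard):

```python
# Hypothetical helper: prefix every error with suite and case so a
# wildcard search on the log points straight back to the owning test case.

def format_error(suite, case, message):
    """Build a context-sensitive, searchable error line."""
    return f"[{suite}::{case}] ERROR: {message}"

line = format_error("LoginSuite", "test_bad_password",
                    "expected HTTP 401, got 200")
print(line)
```

Searching the aggregated logs for `LoginSuite::test_bad_password` then lands on exactly one test case.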

Don'ts:

  • Use custom logging only in conjunction with the logging provided by the operations team.
  • Do not log multiple passes/failures per test case.
  • Do not instantiate your own version of the Logging object provided by the operations team.
  • Do not log "No repro on rerun" explanations.
  • Do not log a failure in place of a warning. A failing test case should report only one failure.

Do's:

  • Do use the logging infrastructure provided by the operations team.
  • Do use only one Pass or one Fail per test case.
  • Tests should always log a result per test case.
  • Do use multiple warnings instead of failures.
  • Log as much information as needed for easier troubleshooting.
  • Log the state of your objects for easier troubleshooting.
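The one-result-per-case and warnings-are-not-failures rules above can be enforced in code rather than by convention. A hedged sketch; the class and method names are invented, not from any real logging library:

```python
# Illustrative per-test-case result log that enforces the rules above:
# exactly one Pass or Fail per case, and warnings never become failures.

class TestCaseLog:
    def __init__(self, name):
        self.name = name
        self.result = None      # exactly one Pass/Fail allowed
        self.warnings = []

    def warn(self, message):
        # Warnings are recorded but never counted as failures.
        self.warnings.append(message)

    def set_result(self, passed):
        if self.result is not None:
            raise RuntimeError(f"{self.name}: result already logged")
        self.result = "Pass" if passed else "Fail"

log = TestCaseLog("test_checkout")
log.warn("response slower than expected")
log.set_result(True)
print(log.result, len(log.warnings))
```

A second `set_result` call raises, so a test that accidentally logs twice fails loudly at authoring time instead of corrupting the pass/fail counts.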

February 3, 2009

UI Automation – Do’s & Don’ts

Filed under: Automation,Checklist,Guidelines,Tips — Triaged Tester @ 7:19 am

Do's:

  • Explicitly set focus to the window where the test case is expected to input data.
  • Capture screenshots on failures; they help in debugging.
  • Point to dev resource files for localized strings in a UI app. This helps a lot in pseudo-localization testing.
  • Leave the UI in a known state after a test case finishes executing. E.g., even if the test case only exercises the first screen of a wizard, close the wizard by canceling out of it after that screen.
  • Validate the object created through the UI against the one stored in the backend system.
  • Log every valid UI operation performed in the test case.
  • Use resource files for all data that must be entered in the UI.
  • Localize the usage of the UI automation tool as much as possible to avoid deep dependencies on one tool. E.g., encapsulate the tool inside a small framework for your UI automation and write test cases against that framework rather than the tool directly. This decouples the tool from the test cases to a large extent: if the team decides to use a new tool, only the framework needs to be rewritten; the bulk of the test code remains unchanged.
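The encapsulation idea in the last point can be sketched as a thin facade. Everything here is invented for illustration: `FakeTool` stands in for whatever third-party UI automation tool the team actually uses, and `UiFramework` is the hypothetical in-house layer.

```python
# Sketch of decoupling test cases from the UI automation tool: only the
# facade (UiFramework) knows the tool's API; tests never touch it directly.

class FakeTool:
    """Stand-in for a third-party UI automation tool."""
    def send_keys(self, selector, text):
        return f"typed '{text}' into {selector}"

    def press(self, selector):
        return f"clicked {selector}"

class UiFramework:
    """The only layer that uses the tool directly. Swapping tools means
    rewriting this class, not the test cases."""
    def __init__(self, tool):
        self._tool = tool

    def type_text(self, field, text):
        return self._tool.send_keys(field, text)

    def click(self, button):
        return self._tool.press(button)

ui = UiFramework(FakeTool())
print(ui.click("#submit"))
```

Test cases call `ui.click(...)` and `ui.type_text(...)`; if the team adopts a new tool, only `UiFramework` changes.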

Don'ts:

  • Do not log multiple times.
  • Do not sleep after every UI operation.
  • Do not use unusually long timeouts for every major operation.
  • Fail immediately after an unexpected screen.
  • Do not re-launch the UI application for every test case, as that is a time-consuming process.
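Instead of sleeping after every UI operation, a common alternative is to poll for the condition you are waiting on. A minimal sketch, assuming an invented `wait_until` helper; the timeout values are illustrative:

```python
# Sketch of polling for a condition instead of fixed sleeps: the wait
# returns as soon as the condition holds, so fast runs are not padded
# with dead time, while slow runs still get the full timeout.

import time

def wait_until(condition, timeout=5.0, poll=0.05):
    """Poll `condition` until it returns True or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return False

# Example: a "window" flag that becomes ready as soon as it is opened.
state = {"ready": False}

def open_window():
    state["ready"] = True

open_window()
print(wait_until(lambda: state["ready"]))
```

The same helper replaces "unusually long timeouts for every major operation": each wait gets a sensible ceiling but finishes the instant the UI is actually ready.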
