Triaged Tester

January 6, 2009

Test Automation code review guidelines

Filed under: Automation — Triaged Tester @ 1:50 pm


Duplicate code – code that is identical, nearly identical, or closely similar to code elsewhere.

  • Creating 2 separate test case methods for similar test conditions such as min/max values, nulls, or other boundary conditions for parameters. Instead, extract the common logic into a method that both test cases can call
  • Testing the same functionality shipped in 2 different releases by rewriting existing test cases just for the new release. Existing test code should be written so that it works across multiple releases
  • If an existing test case changes from one release to another, pre-compile directives can be employed. If the test case becomes fundamentally different, a new test case should be written
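The first bullet can be sketched as follows, assuming the MSTest framework's Assert (as the post uses elsewhere); IsValidQuantity and its 1–100 range are hypothetical stand-ins for the method under test:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class QuantityTests
{
    // Hypothetical method under test: accepts quantities from 1 to 100.
    public static bool IsValidQuantity(int? value) =>
        value.HasValue && value.Value >= 1 && value.Value <= 100;

    // The common logic both boundary tests share, extracted into one helper.
    private static void VerifyQuantity(int? input, bool expected) =>
        Assert.AreEqual(expected, IsValidQuantity(input),
            "IsValidQuantity(" + (input?.ToString() ?? "null") + ")");

    [TestMethod]
    public void MinBoundary()
    {
        VerifyQuantity(0, false);  // just below the minimum
        VerifyQuantity(1, true);   // the minimum itself
    }

    [TestMethod]
    public void MaxBoundary()
    {
        VerifyQuantity(100, true);  // the maximum itself
        VerifyQuantity(101, false); // just above the maximum
    }

    [TestMethod]
    public void NullParameter()
    {
        VerifyQuantity(null, false);
    }
}
```

Each boundary test then reads as one line per condition, and a change to the verification logic happens in exactly one place.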

Duplicating functionality – Re-writing existing libraries is expensive, both to write and to maintain.

  • Creating separate random or project data generation code when a new product revision is released. Instead, data generation libraries should be re-used across projects as much as possible
  • Writing separate releases of library code to handle multiple different namespaces of the same product. Pre-processor directives, reflection, or generics should be used to make code independent of project namespaces. This applies specifically to code which will be used to test multiple releases of the product
  • Re-writing functions whose functionality is already partly implemented by other methods should be avoided
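One way to keep library code independent of a product namespace is to resolve types by name at run time; a minimal sketch using only the standard reflection API, where the type names are illustrative:

```csharp
using System;
using System.Reflection;

// Resolves product types by full name at run time, so the same library
// code can drive, say, ProductV1.Widget or ProductV2.Widget without
// recompiling against a specific release.
public static class ProductTypeResolver
{
    public static object CreateInstance(Assembly productAssembly, string fullTypeName)
    {
        Type type = productAssembly.GetType(fullTypeName, throwOnError: true);
        return Activator.CreateInstance(type);
    }
}
```

Generics with a type parameter supplied by the release-specific test project achieve the same decoupling at compile time.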

Incorrect Exception handling – “swallowing” exceptions is the worst practice, but even logging alone is bad. In general, test code should not need exception handling; the underlying test harness deals with unhandled exceptions automatically. The exception is when you have significant information to add to the exception message, in which case the original exception should be passed as the InnerException.
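The one legitimate pattern looks roughly like this; ProductApi is a hypothetical stand-in for the code under test:

```csharp
using System;

// Stand-in for the product code being exercised.
internal static class ProductApi
{
    public static void Submit(string orderId)
    {
        if (string.IsNullOrEmpty(orderId))
            throw new ArgumentException("orderId must not be empty.", nameof(orderId));
    }
}

public static class OrderTestHelpers
{
    // Catch only to add significant context; the original exception is
    // preserved as the InnerException so the harness (and anyone reading
    // the log) still sees the root cause.
    public static void SubmitTestOrder(string orderId)
    {
        try
        {
            ProductApi.Submit(orderId);
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException(
                "Submitting test order '" + orderId + "' failed.", ex);
        }
    }
}
```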

System resource leaks – This is especially common with objects such as SqlConnection and SqlCommand. The best practice is to wrap them in a “using” statement; anything that implements IDisposable should be handled this way

  • Any resources used should be opened pessimistically, i.e. in such a way that they are automatically closed once usage is complete. This includes opening files in read-only mode unless read/write access is required, and closing database connections immediately after use
  • In the case of unmanaged pointers, system resources must be released explicitly. A good design approach is to implement a SafeHandle-derived class to manage this for you; it forces you to implement the ReleaseHandle code, which can include memory clean-up
  • Unmanaged code should be avoided, and C# used, as much as possible. If it is unavoidable, its usage should be called out and clearly documented
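The “using” pattern above, sketched with a file instead of a database so the example stays self-contained; the same shape applies unchanged to SqlConnection and SqlCommand:

```csharp
using System.IO;

public static class FixtureFiles
{
    // Open read-only (pessimistically): the test only consumes the fixture,
    // and both IDisposable objects are released even if an assertion throws
    // partway through the test.
    public static string ReadFixture(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open,
                                           FileAccess.Read, FileShare.Read))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }
}
```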

Non-use of verification functionality – Test code should not throw exceptions; it should use Assert.* methods, falling back to Assert.Fail() if nothing else fits. Don’t hesitate to incorporate Assert.* calls into your libraries – some of the most effective test cases delegate all verifications to library methods. Every test case should rely on some form of verification.

  • Verifications should be performed for all possible fields of the object being tested. This should include core fields and also audit fields such as created and updated dates
  • Verifications should be bundled together for an entire object and implemented hierarchically. For example, if a customer object has an emails collection, there should be a separate verification method for the email collection, called from the customer verification method
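The customer/emails example from the second bullet might look like this, assuming MSTest's Assert; the Customer type and its fields are hypothetical:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical object under test.
public class Customer
{
    public int Id;
    public string Name;
    public System.DateTime CreatedDate; // audit field
    public List<string> Emails = new List<string>();
}

public static class Verify
{
    // Top-level verifier covers every field, including audit fields,
    // and delegates the collection to its own verifier.
    public static void Customer(Customer expected, Customer actual)
    {
        Assert.AreEqual(expected.Id, actual.Id, "Customer.Id");
        Assert.AreEqual(expected.Name, actual.Name, "Customer.Name");
        Assert.AreEqual(expected.CreatedDate, actual.CreatedDate, "Customer.CreatedDate");
        Emails(expected.Emails, actual.Emails);
    }

    public static void Emails(IList<string> expected, IList<string> actual)
    {
        Assert.AreEqual(expected.Count, actual.Count, "Emails.Count");
        for (int i = 0; i < expected.Count; i++)
            Assert.AreEqual(expected[i], actual[i], "Emails[" + i + "]");
    }
}
```

Any test that touches a customer then calls Verify.Customer once, and new fields get verified everywhere by updating a single method.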

Magic numbers and the like – Hard-coded values can become a maintenance nightmare down the road. Centralize these settings somewhere in your code instead.

  • All configuration and other data values should be centralized. Initially, they can be exposed by a separate “Settings” class with static fields or methods whose values are inline. Later, this class can be modified to get the values from a different source, such as a configuration file
  • Settings such as web service URLs, database connection strings, and assembly names/paths are other examples
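A minimal Settings class of the kind described; the URLs and values are placeholders:

```csharp
public static class Settings
{
    // Inline values for now; later these getters can be redirected to a
    // configuration file without touching any call sites.
    public static string WebServiceUrl => "http://localhost:8080/OrderService.svc";

    public static string ConnectionString =>
        "Server=(local);Database=TestDb;Integrated Security=true";

    public static int RequestTimeoutSeconds => 30;
}
```

Test code refers only to Settings.WebServiceUrl and friends, so the magic values live in exactly one place.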

Automation handover – Large pieces of automation are often developed by one Tester who has everything configured correctly on his or her machine. Oftentimes that Tester doesn’t realize that the code cannot be compiled by another Tester who gets it from source control. The usual cause is references to binaries hard-coded to a local machine path, which should be avoided using the practices below. The projects should also contain enough documentation that another Tester could easily figure out what to do with them; a summary document in the solution or code comments can help significantly here.

  • All paths should use relative paths instead of absolute paths. Use Reflection to determine the execution directory and configuration paths
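A common way to do this with reflection on the executing assembly; the "Config" subfolder name is an assumption for illustration:

```csharp
using System.IO;
using System.Reflection;

public static class TestPaths
{
    // The directory the test assembly actually runs from, regardless of
    // which machine compiled or deployed it.
    public static string ExecutionDirectory =>
        Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);

    // Resolve configuration files relative to the execution directory
    // instead of hard-coding a local machine path.
    public static string ConfigFile(string fileName) =>
        Path.Combine(ExecutionDirectory, "Config", fileName);
}
```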

Each project should have a ReadMe file. This file should list any external dependencies, URLs, special set-up instructions, and configuration values that need to be changed for a local machine. Testers can generate a .chm file to give it an extra professional look, especially in cases where an adopting tester will use it.

