Triaged Tester

State Machine-based test automation framework (SMAF)

Key words:

State machine, finite state machine, model-based testing, graph theory, state changing, automation, harness, test driver, test generator, decision module, implementation gateway.


Model-based testing allows large numbers of test cases to be generated from a description of the behavior of the system under test. Given the same description and test runner, many variations of scenarios can be exercised and large areas of the application under test can be covered, leading to a more effective and more efficient testing process.

Current State:

Approach 1 is the typical hands-on tester, manually running all tests from the keyboard. Hands-on testing is common throughout the industry today; it provides immediate benefits, but in the long run it is tedious for the tester and expensive for the company.

Approach 2 practices what we call “static test automation.” Static automation scripts exercise the same sequence of commands in the same order every time. These scripts are costly to maintain when the application changes, and while the tests are repeatable, they always perform the same commands and therefore rarely find new bugs.

Approach 3 operates closer to the cutting edge of automated testing. These “random” test programs are called dumb monkeys because they essentially bang on the keyboard aimlessly. They come up with unusual test action sequences and find many crashing bugs, but it is hard to direct them to the specific parts of the application you want tested, and since they don’t know what they are doing, they miss obvious failures in the application.

Approach 4 combines the other testers’ approaches with a type of intelligent test automation called “model-based testing.” Model-based testing doesn’t record test sequences word for word as static test automation does, nor does it bang away at the keyboard blindly. Model-based tests use a description of the application’s behavior to determine what actions are possible and what outcome is expected.

This automation generates new test sequences endlessly, adapts well to changes in the application, can be run on many machines at once, and can run day and night.

Direct benefits of such a model

  • Programmatic
  • Efficient coverage
  • Tests both what you expect and what you don’t
  • Nimble, rapid development, since inputs that lead to known fault areas can be excluded to reduce repeated failures
  • Resistant to the pesticide paradox, in which repeated tests lose their power to find new bugs
  • Finds both crashing and non-crashing bugs
  • A lasting investment in the application under test

Indirect benefits

  • Improved specs
  • More nimble test automation
  • Better relations with developers, fostering a sense of working together rather than against each other
  • Attracts and retains high-quality SDETs

Disadvantages of Model-Based Testing

  • Requires that testers be able to program.
  • Requires effort to develop the model.

An introduction to Model-Based Testing

The main premise behind model-based testing is to create a model, a representation of the behavior of the system under test. One way to describe the system behavior is through variables called operational modes. Operational modes dictate when certain inputs are applicable, and how the system reacts when inputs are applied under different circumstances. This information is encapsulated in the model, which is represented by a finite state transition table. In this context, a state is a valid combination of values of operational modes. Invalid combinations of these values represent situations that are physically impossible for the system under test, and are therefore excluded. Each entry in the state transition table consists of an initial state, an input that is executed, and an ending state that describes the condition of the system after the input is applied.

The model is created by taking any valid combination of values of operational modes as the initial state. All inputs that are applicable from this state are then listed, along with the new state of the software after the input is applied. As an analogy, think of a light switch that can be turned on or off. When the light is on, it can’t be turned on again; similarly, when the light is already off, it can’t be turned off.
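As a minimal sketch (the state and input names here are illustrative, not part of any particular framework), the light-switch behavior can be written down directly as a state transition table and queried for applicable inputs:

```python
# Each entry of the state transition table is a tuple:
# (initial state, input, ending state).
LIGHT_SWITCH_MODEL = [
    ("OFF", "turn_on", "ON"),   # turning the light on only applies when it is off
    ("ON", "turn_off", "OFF"),  # turning the light off only applies when it is on
]

def applicable_inputs(model, state):
    """List the inputs that are valid from a given state."""
    return [inp for (start, inp, _end) in model if start == state]

def next_state(model, state, inp):
    """Look up the ending state after applying an input."""
    for start, i, end in model:
        if start == state and i == inp:
            return end
    raise ValueError(f"input {inp!r} is not applicable in state {state!r}")
```

From state "OFF" the only applicable input is "turn_on", mirroring the analogy: a light that is off cannot be turned off again.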

By repeating this process for all states, a state transition table is formed that describes in great detail how the system under test behaves under all circumstances that are physically possible within the confines of the areas being modeled.

Extending the model-based philosophy to software

The process of developing model-based software test automation consists of the following steps:

  1. Exploring the system under test by developing a model, or map, of the testable parts of the application.
  2. Domain definition: enumerating the system inputs and determining the verification points in the model.
  3. Developing the model itself.
  4. Generating test cases by traversing the model.
  5. Executing and evaluating the test cases.
  6. Noting the bugs the model missed and improving the model to catch them (perhaps building in some intelligence).
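Assuming a trivial light-switch model (all names below are hypothetical, for illustration only), steps 3 through 5 can be sketched end to end: build the model, generate a test case by walking it, then execute and evaluate each step against the state the model predicts:

```python
import random

# Step 3: the model -- a state transition table for a simple light switch,
# keyed by (state, input) and mapping to the ending state.
MODEL = {("OFF", "turn_on"): "ON", ("ON", "turn_off"): "OFF"}

# Step 4: generate a test case by walking the model at random.
def generate_test_case(model, start, length, rng):
    state, sequence = start, []
    for _ in range(length):
        inputs = [inp for (s, inp) in model if s == state]
        inp = rng.choice(inputs)
        sequence.append((state, inp, model[(state, inp)]))
        state = model[(state, inp)]
    return sequence

# Step 5: execute the sequence against the system under test and
# evaluate each step against the model's expected ending state.
class LightSwitch:                     # a toy stand-in "system under test"
    def __init__(self):
        self.state = "OFF"
    def apply(self, inp):
        self.state = "ON" if inp == "turn_on" else "OFF"

def run(sequence):
    sut = LightSwitch()
    for _start, inp, expected in sequence:
        sut.apply(inp)
        assert sut.state == expected, f"oracle failed after {inp}"
    return True
```

With a seeded random generator the walk is reproducible, so `run(generate_test_case(MODEL, "OFF", 10, random.Random(0)))` exercises the same ten-step sequence every time.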

Proposed approach for the harness

Problem statement

Create a model-based, intelligent automation harness that makes testing cheaper, faster, and better.

Approach considerations

Address disadvantage #2 of model-based testing: the effort needed to develop the model.

Creating a state model

Model-based testing solves these problems by providing a description of the behavior, or model, of the system under test. A separate component then uses the model to generate test cases. Finally, the test cases are passed to a test driver or test harness, which is essentially a module that can apply the test cases to the system under test.

Given the same model and test driver, large numbers of test cases can be generated for various areas of testing focus, such as stress testing, regression testing, and verifying that a version of the software has basic functionality. It is also possible to find the most time-efficient test case that provides maximum coverage of the model. As an added benefit, when the behavior of the system under test changes, the model can easily be updated and it is once again possible to generate entire new sets of valid test cases.

(A)   Model: State machines are the heart of the model, but linking them to the test driver by hand is cumbersome. A dedicated editor saves time and effort; it also enforces coherency in the model by applying a set of rules defining legal actions. I am looking at WWF (state transition workflows) to achieve this.

(B)   Test case generator

The following algorithms can be considered for generating test cases

  • The Chinese Postman algorithm is the most efficient way to traverse each link in the model. Speaking from a testing point of view, this will be the shortest test sequence that will provide complete coverage of the entire model. An interesting variation is called the State-changing Chinese Postman algorithm, which looks only for those links that lead to different states (i.e. it ignores self-loops).
  • The Capacitated Chinese Postman algorithm can be used to distribute lengthy test sequences evenly across machines.
  • The Shortest Path First algorithm starts from the initial state and incrementally looks for all paths of length 2, 3, 4, and so on. This is essentially a breadth-first search.
  • The Most Likely First algorithm treats the graph as a Markov chain. All links are assigned probabilities and the paths with higher probabilities will be executed first. This enables the automation to be directed to certain areas of interest.
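As a sketch of the Shortest Path First idea (the graph, states, and input names are invented for illustration), paths of increasing length can be enumerated with a simple queue:

```python
from collections import deque

# Edges of a small model graph: (start state, input, end state).
EDGES = [
    ("A", "go_b", "B"),
    ("A", "go_c", "C"),
    ("B", "back", "A"),
    ("C", "back", "A"),
]

def shortest_paths_first(edges, start, max_length):
    """Enumerate input sequences in order of increasing length.

    A FIFO queue yields all length-1 paths before any length-2 path,
    and so on -- the breadth-first flavor of path generation.
    """
    paths = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if len(path) == max_length:
            continue  # don't extend paths that already hit the cap
        for s, inp, end in edges:
            if s == state:
                new_path = path + [inp]
                paths.append(new_path)
                queue.append((end, new_path))
    return paths
```

The Most Likely First variant would replace the FIFO queue with a priority queue ordered by the product of the link probabilities along each path.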

(C)   Test driver: here is an outline of a possible implementation.

The decision module gets a test sequence generated from the graph by any of the algorithms above. It reads the test sequence input by input, determines which action is to be applied next, and calls the function in the implementation module that performs that input. The implementation module logs the action it is about to perform and then executes that input on the system under test. Next, it verifies whether the system under test reacted correctly to the input. Since the model accurately describes what is supposed to happen after an input is applied, oracles can be implemented at any level of sophistication.
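A minimal sketch of this decision module / implementation module split (all class and input names are invented for illustration, driving a toy two-state system under test):

```python
class ToggleSUT:
    """A toy stand-in for the system under test: a two-state toggle."""
    def __init__(self):
        self.state = "OFF"
    def apply(self, inp):
        self.state = "ON" if inp == "press_on" else "OFF"

class ImplementationModule:
    """Applies each input to the system under test and verifies the
    result against the ending state predicted by the model (the oracle)."""
    def __init__(self, sut, log):
        self.sut, self.log = sut, log

    def execute(self, inp, expected_state):
        self.log.append(inp)  # log before acting, so a crash still leaves a trace
        self.sut.apply(inp)   # drive the system under test
        if self.sut.state != expected_state:
            raise AssertionError(
                f"after {inp!r}: expected {expected_state!r}, got {self.sut.state!r}")

def decision_module(test_sequence, impl):
    """Reads the test sequence input by input and dispatches each one."""
    for inp, expected_state in test_sequence:
        impl.execute(inp, expected_state)
```

Because the decision logic runs at runtime rather than being hard-coded, the same driver can execute any sequence the generator produces.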

A test harness designed in this particular way is able to deal with any input sequence because its decision logic is dynamic. In other words, rather than always executing the same actions in the same order each time, it decides at runtime what input to apply to the system under test. Moreover, reproducing a bug is simply a matter of feeding the execution log of the failing run back in as the input sequence.
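Since the implementation module logs each action before performing it, replaying a failure amounts to re-feeding that log as the next test sequence (the class and input names below are hypothetical):

```python
class Toggle:
    """A toy system under test used only for illustration."""
    def __init__(self):
        self.state = "OFF"
    def apply(self, inp):
        self.state = "ON" if inp == "turn_on" else "OFF"

def replay(execution_log, sut):
    """Feed a recorded execution log back in as the input sequence,
    reproducing the exact run that caused or revealed a failure."""
    for inp in execution_log:
        sut.apply(inp)
    return sut.state
```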


It can take significant effort to understand and model an application. And it can be difficult to leave the easy path of hands-on testing or static automation long enough to invest time thinking about how to test that application. The rewards, however, are great:

  • Model-based testing creates flexible, useful test automation from practically the first day of development.
  • Models are simple to modify, so model-based tests are economical to maintain over the life of a project.
  • Models can generate innumerable test sequences tailored to your needs.
  • Models allow you to get more testing accomplished in a shorter amount of time because a test generator can create and verify test sequences around the clock on multiple machines.
  • Model-based testing can supplement other forms of testing, and can perform tests that aren’t practical under other test approaches.

You and I know that software testing is no fairy tale, and that happily-ever-afters are never guaranteed. But adding model-based intelligence to your testing is a powerful tool to help you find your way toward your own happy ending.


