Triaged Tester

January 2, 2009

Testing Types

Filed under: General — Triaged Tester @ 11:15 am

The beginning of wisdom is the definition of terms.     – Plato

Base Class | Specialized Class | Comments | Pos/Neg test % | Execute When?
Regression Prevention Testing Acceptance Testing   Acceptance testing occurs after successful build verification testing, but before a full test pass.
(1) Tests that verify a build is in a known stable state.
(2) Quick set of tests to verify that all major functionality is working as designed in the functional specification and is acceptable for running a full test pass.
100/0 Every “official” build, after Level 0 BVTs complete
Functional Testing Accessibility Testing Testing that verifies products and information technologies are accessible and usable by all people, including those with disabilities. 99/1 Once per product cycle, full test pass
Other testing Ad Hoc Testing Regardless of focus, ad hoc testing intends to find bugs.  Ad hoc is usually destructive or negative path testing and almost never focuses on the positive path, except for a quick sanity check of new code.
(1) Directed Ad Hoc testing is when a specific sub system or feature or scenario is to be focused on during the testing.
(2) Undirected Ad Hoc testing is when a tester is given full license to party on any part of the feature set/system to look for bugs.
20/80 As often as possible
Functional Testing Beta Testing A pre-released version of software sent to customer sites for evaluation and feedback.  Sometimes it is a production beta, meaning that the software is deployed, is “live,” and people are using it for their daily routines.  Sometimes it is a developer beta or a feedback beta, both of which look for customer input but are not supported in production. 95/5 1 or 2 times per product cycle
Functional Testing Black Box Testing Testing that treats the component under test as a black box.  That is, testing based on an input-output response model: certain inputs will produce certain outputs.  (Testing done without knowledge of the structure or design of the code.) 50/50 All the time
Functional Testing Boundary Testing Testing the boundaries of the specified feature.  The cookbook for boundary tests is (a) lower limit - 1, (b) lower limit, (c) lower limit + 1, (d) upper limit - 1, (e) upper limit, (f) upper limit + 1.  Boundary tests are interesting because these are areas where code is often not written in a manner to protect itself from these inputs; “off by 1” errors are usually found with boundary tests.  For instance, if you were testing an unsigned 16-bit integer field (valid range 0 to 65535), the boundary tests would be:
-1 – Negative path, expect a handled exception
0 – Positive path, should work
1 – Positive path, should work
65534 – Positive path, should work
65535 – Positive path, should work
65536 – Negative path, expect a handled exception
50/50 All the time
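The boundary cookbook above can be sketched as a small test loop.  A minimal sketch, assuming a hypothetical `set_quantity` feature with a 0–65535 valid range (both the function and the range are invented for illustration):

```python
LOWER, UPPER = 0, 65535

def set_quantity(value):
    """Hypothetical feature under test: rejects out-of-range input."""
    if not LOWER <= value <= UPPER:
        raise ValueError(f"quantity {value} out of range")
    return value

def run_boundary_tests():
    """Apply the six-value cookbook and record each outcome."""
    results = {}
    for value in (LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1):
        try:
            set_quantity(value)
            results[value] = "pass"      # positive path
        except ValueError:
            results[value] = "rejected"  # negative path, handled exception
    return results
```

Only the two values just outside the range should land on the negative path; an "off by 1" bug in the feature would show up as a mismatch at 0, 65535, or their neighbors.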
Build Process Testing Build Verification Tests (BVT) The primary purpose of these tests is to verify the overall integrity of the build and to determine that the build under test meets a minimum stability and quality standard to warrant further testing.  If the tests pass at less than 100%, the build is rejected. 100/0 *Sometimes* before code is checked in (depends on scope of the code change); after the rolling build completes.
Functional Testing Business Cycle Testing   Business Cycle Testing should emulate the activities performed over time.  A period should be identified, such as one year, and transactions and activities that would occur during a year’s period should be executed.  This includes all daily, weekly, monthly cycles and events that are date sensitive. 80/20 once per milestone
Functional Testing Component Testing Component tests are the core tests used to ensure that a component is fulfilling its requirements.  These tests include tests for the ideal code paths using boundary values, as well as tests for unexpected code paths such as invalid inputs, bad data, and unexpected user inputs.  Most of these tests will occur at the API level.  However, depending upon the purpose of the component, some GUI testing may also be done. 80/20 All the time
Configuration Testing Configuration Testing (1) Configuration testing verifies the operation of the target-of-test on different software and hardware configurations.  In most production environments, the particular hardware specifications for the client workstations, network connections, and database servers vary.  Client workstations may have different software loaded (e.g. applications, drivers, etc.), and at any one time many different combinations may be active and using different resources.
(2) Testing of the product across the supported operating systems, file handlers, and browsers (for that matter, any software or hardware components of the platform on which the software is intended to execute).
80/20 full pass once per milestone
Scenario Testing Deployment Testing The process of setting up and configuring a multi-machine topology based on customer requirements.  Common to server-based teams.  May involve Active Directory, Exchange servers, SQL servers, Web servers, etc. 80/20 full pass once per milestone
Functional Testing Documentation Testing Documentation testing is focused on the user education documentation and/or the SDK documentation.  The verification involves grammatical correctness, as well as verification that step-by-step instructions lead to the correct state, and that source code examples compile and do the right thing. 100/0 1 or 2 times per product cycle
Other testing Dogfood Testing Dogfood testing is all about using the product under test as a customer would.  It tends to focus on creating scenarios where the product is required to do your work during the day. 90/10 ???
Other testing Dumb Monkey Testing Dumb Monkeys are usually data-driven automation that tests the system in a random manner, with no knowledge of state.  Variable data is provided to the monkey and it just bangs away until it hits a hard stop or is killed by the user.  Also known as bug hunters. 0/100 All the time
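A dumb monkey can be sketched as a loop that draws random values from a data pool and records any unhandled exception.  The target function `parse_first_char` is invented here, with a deliberate bug for the monkey to find:

```python
import random

def dumb_monkey(target, data_pool, iterations=1000, seed=0):
    """Stateless monkey: feed random values from data_pool to target,
    recording every unhandled exception until the budget runs out."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        value = rng.choice(data_pool)
        try:
            target(value)
        except Exception as exc:  # any escape is a potential bug
            crashes.append((value, type(exc).__name__))
    return crashes

def parse_first_char(s):
    """Invented target with a latent bug: crashes on empty input."""
    return s[0]

found = dumb_monkey(parse_first_char, ["abc", "", "x", "42"])
```

Because the monkey has no model of state, it cannot say *why* a crash happened, only which input triggered it; that triage is left to the tester.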
Configuration Testing Environment Testing See Configuration Testing    
Regression Prevention Testing Exit Criteria (a.k.a. Milestone Exit Criteria).
The team-wide goals to be reached by the end of the milestone.  These usually include a list of certain features to be completed, a list of work item deliverables, and a certain level of quality, which is usually measured by active bug count and could include a code coverage measurement, a pass % for a full test suite, etc.
90/10 once per milestone
Other testing Exploratory Testing See Ad Hoc testing    
Other testing Failure Testing See Recovery Testing    
Functional Testing Functional Testing Tests designed to verify the features perform as outlined in the functional specification. 
Includes user interface, backend, error handling, performance, stress, accessibility, and complete functionality testing.
A functional test is any test that is designed to test software functionality.  Functional testing of the target-of-test should focus on any requirements for test that can be traced directly to use cases (or business functions), and business rules.  The goals of these tests are to verify proper data acceptance, processing, and retrieval, and the appropriate implementation of the business rules.  
80/20 All the time
Trustworthy Computing Testing Fuzz Testing Fuzz testing is a method of finding software security holes by feeding purposely invalid and ill-formed data as input to program interfaces (input files, network ports, APIs, etc.).     
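A minimal fuzzing sketch in the spirit of the definition above: mutate valid input into ill-formed variants and watch for anything other than a deliberate rejection.  The mutator and the ValueError-means-graceful-rejection convention are assumptions for illustration, not any real fuzzing tool:

```python
import random

def mutate(data: bytes, rng: random.Random, flips: int = 8) -> bytes:
    """Make an ill-formed variant of valid input by flipping random bytes."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(parser, seed_input: bytes, rounds: int = 500) -> list:
    """Feed mutated inputs to the parser; record anything other than a
    deliberate ValueError rejection as a potential security hole."""
    rng = random.Random(1234)
    failures = []
    for _ in range(rounds):
        sample = mutate(seed_input, rng)
        try:
            parser(sample)
        except ValueError:
            pass  # graceful rejection of bad input is the desired behavior
        except Exception as exc:
            failures.append((sample, type(exc).__name__))
    return failures
```

Real fuzzers add coverage feedback and smarter mutation strategies, but the core loop is this simple.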
International Testing Globalization Testing Globalization is the process of designing and implementing product development or content (including text and non-text elements) so that it can accommodate the cultural and linguistic conventions in any market (locale) without modification to the core source code. Globalization testing focuses on ensuring that the product supports national conventions used in various locales and accommodates the input, storage, and transmission of textual data regardless of the language or encoding method. 80/20 Once per milestone, full test pass run on INTL platforms
Functional Testing Happy Path Testing See Positive Path Testing 100/0
Other testing Injection Testing Allows testers to modify the executable image; can be transparent or alter the behavior.
Some uses:
(a) Simulating APIs in order to perform regression runs, which can significantly help reduce hardware count, setup time, repro time, fix-verification time, and regression testing time.
(b) Failing classes of APIs (such as memory allocators, network, registry, synchronization, and file access) in order to expose unhandled exceptions, such as access violations, under severe and uncommon conditions.
(c) Testing the recovery features of software by intentionally causing crashes and verifying that the software gracefully recovers from failures.
(d) Increasing code coverage by making it easier to force difficult code paths.
(e) Instrumenting a binary for code coverage analysis.
50/50 Here and there
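The API-failing use above can be sketched at its simplest by swapping in a fake dependency whose failure is injected on demand.  `FakeDisk` and `save_report` are hypothetical names invented for this sketch:

```python
class FakeDisk:
    """Hypothetical stand-in for a file-access API whose failures
    can be injected on demand."""
    def __init__(self, fail=False):
        self.fail = fail

    def write(self, path, data):
        if self.fail:
            raise OSError("disk full (injected)")
        return len(data)

def save_report(disk, text):
    """Hypothetical caller under test: must handle the failure gracefully."""
    try:
        disk.write("/tmp/report.txt", text)
        return "saved"
    except OSError:
        return "retry-later"  # graceful recovery instead of a crash
```

Injecting the failure exercises an error path that would otherwise require a genuinely full disk to reach, which is the point of the technique.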
Functional Testing Installation Testing (1) Testing to verify that the install functions as expected and that the uninstall returns the system to a previous state.  Tests may be manual, automated, or both.
(2) The process of evaluating whether or not the software can be loaded onto target machine(s) and made to work according to specification.  Installation testing has two purposes.  The first is to ensure that the software can be installed under different conditions, such as a new installation, an upgrade, and a complete or custom installation, and under normal and abnormal conditions.  The second purpose is to verify that, once installed, the software operates correctly (the second part is functional testing).
80/20 Every build
Functional Testing Integration Testing Testing specifically aimed at verifying the various subsystems work together to satisfy common user scenarios.  The tests verify that the components work together as a solution at the product level.  
100/0 Every build
Other testing Limit Testing See Boundary Testing    
Performance Testing Load Testing See Stress Testing    
International Testing Localization Testing  Localization is the process of adapting a product or content to meet the language, cultural expectations, and geopolitical requirements of a specific geographic region. Localization testing focuses only on changes that have been made to the core product to adapt it to the specific target market.  Almost always specific to User Interface (and not APIs). 100/0 Once per milestone, full test pass run on INTL platforms
Performance Testing Memory Leak testing Testing focused on looking for memory leaks.  Usually done via repetitive test suites, or with stress test suites.  Memory leaks lead to system reliability issues. 100/0 every so often
Other testing Model Based Testing A model is a description of a system’s behavior.  Model based testing leverages a system model to generate test cases.  Models can be of several forms, the most common is a finite state model.  The model serves as the oracle and from it, tests can be generated.  Models include states, actions, and transitions and are represented by a cyclic graph. 50/50 Feature Code Complete
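A finite-state model and a simple test-case generator can be sketched as follows; the media-player states and actions are invented for illustration, and the model (not the product) serves as the oracle for each walk's expected final state:

```python
# Transition table: (state, action) -> next state.  This dict *is* the model.
MODEL = {
    ("stopped", "play"):  "playing",
    ("playing", "pause"): "paused",
    ("playing", "stop"):  "stopped",
    ("paused",  "play"):  "playing",
    ("paused",  "stop"):  "stopped",
}

def generate_walks(start="stopped", depth=3):
    """Enumerate every legal action sequence of length `depth`; each
    walk is a generated test case: (actions, expected final state)."""
    walks = [([], start)]
    for _ in range(depth):
        nxt = []
        for actions, state in walks:
            for (s, a), target in MODEL.items():
                if s == state:
                    nxt.append((actions + [a], target))
        walks = nxt
    return walks
```

Each generated walk would then be replayed against the real system, comparing the system's observed state to the model's prediction.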
Performance Testing Multi-user Testing Multiple security contexts concurrently accessing the same data, or concurrently performing closely related operations.  It is related to Load/Stress testing, but is more end user scenario focused. Multi-threaded automation is often used to achieve this. 80/20 Once per milestone
Functional Testing Negative Path Testing Negative path tests focus on invalid input for the specified feature.  These tests are specifically intended to test exception paths and verify graceful error handling. 0/100 Every day
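A negative path sketch, assuming a hypothetical `parse_port` function whose contract is to reject invalid input with a handled exception rather than crash:

```python
def parse_port(text: str) -> int:
    """Hypothetical feature: parse a TCP port, rejecting anything
    outside 1..65535 with a handled ValueError."""
    try:
        port = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {text!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"out of range: {port}")
    return port

def expect_rejection(value) -> bool:
    """True when a negative-path input is rejected gracefully."""
    try:
        parse_port(value)
        return False  # invalid input was accepted: a bug
    except ValueError:
        return True   # graceful, expected failure
```

The test passes only when the error path fires; on the negative path, an *accepted* input is the failure.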
Trustworthy Computing Testing Penetration Testing Testing that attempts to breach the perimeter security of a system in order to get to protected data within the system.    
Performance Testing Performance Testing Testing to evaluate the ability of the system to meet pre-defined performance requirements.  Performance may be measured as time for selected processes to complete (end-user latency), as the number of transactions that can be completed in a specific unit of time, or as the number of MCycles required for a specific transaction (which makes it easy to change hardware and still have valid perf data that can be compared).  Performance testing also focuses on identifying bottlenecks in the system.  A system can only perform as fast as its slowest component(s).  Disk I/O, network I/O, memory pressure, CPU utilization, disk fragmentation, swapping, and paging are areas that performance testing may focus on to isolate such bottlenecks.
80/20 Every week, starting in M1
Functional Testing Positive Path Testing Positive path tests focus on valid input for the specified feature.  These are tests specifically intended to work using valid input data. 100/0 Every day
Trustworthy Computing Testing Privacy testing Testing to verify that end users' private data is always under their control.  For instance, Internet Explorer warns you when you navigate away from an HTTPS server, telling the user that the data they submit is no longer encrypted. 100/0 Once per milestone
Other testing Recovery Testing The process of evaluating whether or not a product responds to errors and failures as specified, usually by returning to a known, usable state.  Recovery testing ensures that the target-of-test can successfully recover from a variety of hardware, software, or network malfunctions without undue loss of data or data integrity. 50/50 Once per milestone
Regression Prevention Testing Regression Testing   Regular execution of a suite of tests to ensure that ongoing work on the product has not inadvertently resulted in a degradation of functionality or a re-introduction of bugs.  All bugs found, fixed and resolved will, if possible and practical; have an automated regression test written for it.   100/0 Every build
Trustworthy Computing Testing Reliability testing This consists of tests that help measure the robustness of a component using metrics that are measurable, specific to the root functionality of the component, and testable/simulatable.  For these metrics, the idea is to determine what a customer's rate for the operation(s) would be and extrapolate out to, say, 9 months' worth of operations, or 3x the peak rate (based on what is doable; if the operation is resource intensive, this may be scaled back).  As an example, if you have a service (say, Messenger) whose root function is the ability to send and display messages, a reliability metric could be the number of messages sent/received without a single reboot.  This is testable and measurable.  The numbers would be based on actual data or a best guess made with PM/dev involvement: say a heavy-usage customer averages one message every 10 minutes over an 8-hour day; 3x that is roughly one message every 3 minutes, or about 20 msgs/hour * 2000 work hours = 40,000 msgs/year.  Given the 7/10/14/21-day longhaul milestones for b1/b2/rc/rtm, that would equate to 40K * 7/21 ≈ 13K for b1, 40K * 10/21 ≈ 20K for b2, etc.  Note this can also be measured as an ops rate per day, tracking the number of days that coincide with the longhaul metrics.

80/20 once a milestone
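The messenger arithmetic above can be reproduced in a few lines (the 20 msgs/hour figure follows the post's own rounding up from one message every ~3 minutes):

```python
base_rate_per_hour = 60 / 10                 # one message every 10 minutes
peak_rate_per_hour = 3 * base_rate_per_hour  # 3x peak: ~one every 3 minutes
msgs_per_hour = 20                           # rounded up, as in the text
msgs_per_year = msgs_per_hour * 2000         # 2000 work hours -> 40,000 msgs

# Scale the yearly target to each longhaul milestone's share of 21 days.
longhaul_days = {"b1": 7, "b2": 10, "rc": 14, "rtm": 21}
targets = {m: round(msgs_per_year * d / 21) for m, d in longhaul_days.items()}
# b1 ~= 13,333 (the post rounds to 13K); b2 ~= 19,048 (rounded to 20K)
```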
Functional Testing Requirements Testing The process of verifying that features in product code match the specification for those features.  Requires rigorous specification process and constant iteration on the spec as the design may change over time. 100/0 For every feature spec
Trustworthy Computing Testing Security Testing   The process of attempting to break the security features set up to protect information.  Security testing focuses on two key areas of security: application-level security, including access to the data or business functions, and system-level security, including logging into / remote access to the system.
Application-level security ensures that, based upon the desired security, actors are restricted to specific functions/use cases or are limited in the data that is available to them.  If there is security at the data level, testing ensures that a user of one “type” can see all customer information, including financial data, while a user of another type sees only the demographic data for the same client.  System-level security ensures that only those actors granted access to the system are capable of accessing the applications, and only through the appropriate gateways.  It also covers on-the-wire security (the inability to tamper with or view data sent on the wire).
50/50 Couple times per milestone
Other testing Smart Monkey Testing Testing that is aware of system state and/or can make decisions based on rules and state (Genetic Algorithms) See Model Based Testing. 50/50  
Performance Testing Stress Testing   Evaluates the product’s ability to perform under peak load conditions.  Stress testing is a type of performance test implemented and executed to find errors due to low resources or competition for resources.  
Exercising the feature or product under a load (e.g. large numbers of concurrent users or large transaction volume).
Load testing is a performance test which subjects the target-of-test to varying workloads to measure and evaluate the performance behaviors and ability of the target-of-test to continue to function properly under these different workloads.   The goal of load testing is to determine and ensure that the system functions properly beyond the expected maximum workload.  
80/20 once per milestone
Scenario Testing System Testing  The process of evaluating the whole product deployed in a real-world simulation.  80/20 Once per milestone
Trustworthy Computing Testing Threat Modeling Threat modeling is the process of deducing potential attacks on the code base that can lead to one of several threats within Trustworthy Computing, summarized by the acronym STRIDE:
Spoofing of user identity
Tampering
Repudiation – threats associated with users (malicious or otherwise) who can deny a wrongdoing without any way to prove otherwise
Information Disclosure (privacy breach)
Denial of Service
Elevation of privilege
0/100 Once per milestone
Functional Testing Unhappy Path testing See Negative Path Testing 0/100
Functional Testing Unit Tests A set of tests, usually written by the development staff, that developers are required to run prior to checking in changes to the source code control system.  They target the smallest unit that can be isolated and executed for testing (a class or group of classes, typically).
A test, typically written by the development staff, for the purpose of verifying that a feature or portion of a feature functions as intended.
50/50 Every checkin
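A minimal unit test sketch using Python's standard unittest module; `normalize_tag` is a hypothetical unit under test, invented for illustration:

```python
import unittest

def normalize_tag(tag: str) -> str:
    """Hypothetical unit under test: the smallest isolatable piece."""
    return tag.strip().lower()

class NormalizeTagTests(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_tag("  BVT "), "bvt")

    def test_clean_input_is_unchanged(self):
        self.assertEqual(normalize_tag("regression"), "regression")

# A developer would run the suite before check-in, e.g.:
#   python -m unittest this_module
```

Keeping the suite fast is what makes the "run before every check-in" policy stick.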
Functional Testing User Interface Testing UI tests individually exploit all menus, text boxes, check boxes, etc., and errors.  UI tests also utilize these items in combination in various ways to ensure that the integration between the features exists and is correct. 80/20 M2+, wait until close to visual freeze, then every day.
Performance Testing Volume Testing  See Stress Testing 80/20  
Functional Testing White Box Testing Testing that is done with the knowledge of the internal structure and design of the code 50/50 Every day