Triaged Tester

March 19, 2009

Testing Types & Testing Techniques

Filed under: Black Box Testing,General,Terminology — Triaged Tester @ 9:27 am

Testing types deal with what aspect of the computer software would be tested, while testing techniques deal with how a specific part of the software would be tested.

That is, a testing type indicates whether we are testing the function or the structure of the software. In other words, we may test each function of the software to see if it is operational, or we may test the internal components of the software to check that its internal workings are according to specification.

On the other hand, ‘testing technique’ means the methods or ways that would be applied, or the calculations that would be done, to test a particular feature of the software (sometimes we test the interfaces, sometimes the segments, sometimes the loops, etc.).
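
To make the distinction concrete, here is a minimal sketch in Java. The discount function and its 100.0 threshold are made up for illustration. The first check is a testing type at work (functional, black-box: is the feature operational?), while the remaining checks show a testing technique (boundary-value analysis) deciding how the inputs are chosen.

class DiscountExample {
    // Hypothetical function under test: returns the discount rate for an order total.
    static double discount(double total) {
        return total >= 100.0 ? 0.10 : 0.0;
    }

    public static void main(String[] args) {
        // Testing TYPE (functional, black-box): is the feature operational?
        check(discount(150.0) == 0.10, "large orders get 10% off");

        // Testing TECHNIQUE (boundary-value analysis): HOW the inputs are chosen --
        // probe just below, at, and just above the 100.0 threshold.
        check(discount(99.99) == 0.0, "below threshold: no discount");
        check(discount(100.0) == 0.10, "at threshold: discount applies");
        check(discount(100.01) == 0.10, "above threshold: discount applies");
        System.out.println("all checks passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError(what);
    }
}
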

February 2, 2009

Project Terminology

Filed under: Terminology,Test Management — Triaged Tester @ 7:17 am

Are you lost when someone bowls a googly at you? Not anymore!!! Now you can belt them for a six!!!

Term Abbr. Definition
alpha   A very early release of a product to get preliminary feedback about the feature set and usability.
beta   A pre-released version of software that is sent to customers for evaluation and feedback.
blocking bug   A defect that prevents further or more detailed analysis or verification of a functional area or feature, or any issue that would prevent the product from shipping.
buddy build   A build of a product or component that is meant for verifying a fix or unblocking an area. (Also known as buddy drop and private build.)
buddy drop   See buddy build.
buddy test   A set of tests run by a tester on a private build from a developer, looking for system integration issues prior to check-in.
bug committee   See war team.
build acceptance test (BAT) BAT See build verification test.
build verification test (BVT) BVT An automated test suite run on each new build to validate the integrity of the build and basic functionality before the new build is released for general testing. (Also known as build acceptance test.)
check-in test CIT A test run by a developer to determine whether his code has affected the general stability of the product. (Also known as developer quicktest.)
code complete   A development milestone marking the point at which all features for the release are implemented and functionality has been verified against the functional specification. Not to be confused with feature code complete.
code freeze   A determinate time when no check-ins can be made into the source tree of a project without approval of the triage committee.
configuration testing   Testing on a variety of hardware platforms and software configurations (OS/drivers) to determine how the software behaves.
content plan   A non-technical document that describes the content associated with a user assistance project in detail. For example, the text and images that appear on each page of a wizard, tutorial, or interactive content. 
critical update   A broadly released fix for a specific problem addressing a critical, non-security related bug.
Create-Read-Update-Delete CRUD Create, Read, Update, Delete. The fundamental operations performed on a database.
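
For illustration only, here is a minimal JDBC sketch of the four CRUD operations in Java. The users table and its data are hypothetical, and the in-memory H2 connection URL is an assumption (it requires the H2 driver on the classpath; any JDBC data source works the same way).

import java.sql.*;

class CrudExample {
    public static void main(String[] args) throws SQLException {
        // Illustrative connection string; swap in any JDBC data source.
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement s = c.createStatement()) {
                s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            }
            // CREATE
            try (PreparedStatement p = c.prepareStatement("INSERT INTO users VALUES (?, ?)")) {
                p.setInt(1, 1);
                p.setString(2, "Ada");
                p.executeUpdate();
            }
            // READ
            try (Statement s = c.createStatement();
                 ResultSet r = s.executeQuery("SELECT name FROM users WHERE id = 1")) {
                while (r.next()) System.out.println(r.getString("name"));
            }
            // UPDATE
            try (Statement s = c.createStatement()) {
                s.executeUpdate("UPDATE users SET name = 'Grace' WHERE id = 1");
            }
            // DELETE
            try (Statement s = c.createStatement()) {
                s.executeUpdate("DELETE FROM users WHERE id = 1");
            }
        }
    }
}
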
design change request DCR A requested change to the functional specification after it is deemed frozen.
developer quicktest   See check-in test.
documentation plan   A document that includes the non-technical details of a user assistance deliverable. For example, an online Help plan describes the audience for the Help content, the style guide to be used, topic types, and format that will be delivered. (See also content plan).
dot release   An incremental release to the product that signifies that only one or a couple of files were recompiled and added to the setup image.
exit criteria EC A set of criteria that a product or service must meet before a particular milestone is complete.
feature code complete   An intra-milestone deliverable marking the point at which all code to enable a feature is supposed to be written and is fully testable.
feature team   A team (of developers, testers, user interface designers, writers, editors, localizers, product planners, product marketing, and program managers) that is responsible for a feature.
freeze   A point at which an implementation or functional specification cannot change without significant justification and approval in a product-wide triage meeting.
functional specification FS A document that describes the user problem, requirements, and functionality details of a feature or set of features.
globalization   The process of designing and implementing a product and/or content (including text and non-text elements) so that it can accommodate any local market (locale).
golden master   The final version of software for manufacturing, which has been virus checked, time stamped, and, if needed, compressed. 
hard triage   See lockdown.
hotfix QFE A single cumulative package composed of one or more files used to address a defect in a product. Hotfixes address a specific customer situation and may not be distributed outside the customer organization without written legal consent from Microsoft. When broadly released to address a security vulnerability, such a fix is a security patch.
Individual Contributor IC An engineer who has responsibilities at the individual level and does not formally manage other people.
launch   The activities leading up to and through a product’s release into the marketplace.
localizability   The ability of a product and/or content (including text and non-text elements) to be adapted for any local market (locale).
localization LOC The process of adapting a product and/or content (including text and non-text elements) to meet the language, cultural, and political expectations and/or requirements of a specific local market (locale).
lockdown   A development process for tightly controlling code changes in an effort to reduce bug regressions. (Also known as hard triage).
marketing plan   A document that includes the details of positioning, a situation analysis, marketing objectives, marketing strategies, analysis of competition, description of target audience, pricing, estimated cost of goods, and profitability.
marketization   The process of modifying the user experience through changes, additions or deletions of functionality and/or content to better suit different markets (not limited to the localizable portion of the product).
milestone   1) A specific, measurable event that sharply defines the product development stage; 2) The current phase in the product cycle.  Binary in nature, either 100% complete or 100% not complete.
milestone 0 (M0) M0 The planning and design phase of a product in which functional specifications, designs, and schedules are completed before implementation begins.
milestone 1 (M1) M1 The first full coding milestone in the implementation phase, in which all developers are coding against the functional specification.
milestone 2 – n M2-Mn Additional implementation milestones.
milestone criteria   See exit criteria.
point release   See dot release.
postmortem   A review at the end of a project to discuss what went well and what went poorly so that effective processes are reinforced and improvements are identified.
private build   See buddy build.
production beta   A beta release that is intended for a set of customers to put into a production environment. Effectively a mini-RTM, with product team support and QFE/hotfix support. Implies that the production beta is upgradeable to RTM bits.
quick fix engineering QFE See hotfix
real-time mode   See lockdown.
release   A particular version of a piece of software, most commonly associated with the most recent version (as in “the latest release”).
release candidate RC A build of the product produced with no known issues that the product team believes should prevent it from being released to manufacturing or to the Web. 
release candidate 0  RC0 The first build of the product produced after code complete that has no known issues and no new active bugs accepted by the triage committee for a certain number of days. (Also known as zero bug release).
release criteria   See exit criteria.
release to manufacturing RTM The point at which the final disks and full sets of documentation are sent to manufacturing for product build and subsequent release.
release to Web RTW The point at which the final code is declared ready to be propagated to the data center.
rolling build   A system that allows incremental code to be compiled into a build. Instead of compiling a large batch of code at once, a rolling build compiles one changelist from the depot at a time, isolating build breaks by check-in.
security patch   See hotfix
service level agreement SLA An agreement between two organizations detailing the level and nature of support that one team agrees to provide, and to which the other team agrees to reciprocal commitments.
service pack SP A cumulative set of all hotfixes, security patches, critical updates, and updates created since the release of the product, together with fixes for defects found internally. Service packs may also contain a limited number of customer-requested design changes or features.
signoff   The point at which program management, development, test, user assistance, localization, and Product Support Services agree that a release is official before manufacturing will start reproducing it. 
silver master    The release disk set that the localization vendor sends to Microsoft, which contains all of the final-tested software files that comprise the product. 
sim-ship   The simultaneous release to manufacturing of localized versions with the source product.
stock keeping unit SKU A number associated with a product for inventory purposes; describes different versions of the same product (e.g., VS.NET Pro, VS.NET Enterprise).
testability   The degree to which systems are designed and implemented to make it easier for test automation to achieve complete code-path coverage and simulate all usage situations in a cost-efficient manner. Testability is also defined as visibility and control: visibility is our ability to observe the states, outputs, resource usage, and other side effects of the software under test; control is our ability to apply inputs to the software under test or place it in specified states.
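
As a small sketch of visibility and control in Java (the Session component below is hypothetical), note how the injected clock gives the test control over time, while the exposed expiry state gives it visibility:

import java.util.function.LongSupplier;

// Hypothetical component designed for testability.
class Session {
    private final LongSupplier clock;   // CONTROL: the test injects time instead of
    private final long timeoutMillis;   // waiting for a real wall clock to advance.
    private long lastActivity;

    Session(LongSupplier clock, long timeoutMillis) {
        this.clock = clock;
        this.timeoutMillis = timeoutMillis;
        this.lastActivity = clock.getAsLong();
    }

    void touch() { lastActivity = clock.getAsLong(); }

    // VISIBILITY: internal state is observable, so a test can assert on it
    // directly rather than inferring it from side effects.
    boolean isExpired() { return clock.getAsLong() - lastActivity > timeoutMillis; }
}

class TestabilityExample {
    public static void main(String[] args) {
        long[] now = {0};                         // fake clock the test fully controls
        Session s = new Session(() -> now[0], 1000);
        now[0] = 500;
        if (s.isExpired()) throw new AssertionError("should still be live");
        now[0] = 2000;
        if (!s.isExpired()) throw new AssertionError("should have expired");
        System.out.println("testability checks passed");
    }
}
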
test design specification TDS A document that defines the testing requirements for an area of the product. (Also known as test requirements document.)
test release document TRD A document that the development team writes that defines which parts of the product are testable and which are not. 
test requirements document TRD See test design specification.
triage   The process of deciding which bugs to fix, which to postpone to a future release, and which not to fix.
triage committee   See war team.
unit test   A test written and run by a developer to test specific modules and behaviors of his code in depth. Often a subset of the unit tests is also used as check-in tests.
user assistance functional specification   A technical document that describes the user problems, requirements, and functionality details of a user assistance feature or set of features, such as an online Help or content-tracking system. (See also content plan, user assistance plan, and documentation plan).
user assistance plan   A non-technical document that contains vision, business case, and customer focus for each type of user assistance content; lists user assistance features, documents, deliverables, and other content that will ship with the product or service. (See also content plan, documentation plan.)
user experience UX A team of designers who perform usability studies with customers, as well as design the user interface.
vision document   A document that describes the strategic goals and direction for the product, service, or feature.
vision statement   A one or two sentence summary of the principle objectives for the current release, which can be used by any team member to help prioritize work and make project decisions.
visual freeze   A point after which the user interface cannot change without approval from the triage committee.
war team   A committee that represents the entire product group and meets to decide which bugs to fix, which not to fix, and which to postpone. Triage teams roll up into the war team and usually have one representative attending. (Also known as ship room, ship team.)
zero bug bounce ZBB The first point after code complete at which there are no active bugs accepted by the triage committee that are older than a certain number of hours.
zero bug release ZBR See release candidate 0.

January 12, 2009

Test Terms

Filed under: Black Box Testing — Triaged Tester @ 5:40 am

These are a few of the testing terms that normally do the rounds in the office.

Term AKA Definition
Agent daemon Automation System component – A process running on any machine which is a part of the automation system.  Talks to the distributor to give “ready” and “busy” status. Also usually gathers machine software and hardware configuration data to upload to a configuration database.
Automation   Executable test code that requires no human intervention during execution in order to complete successfully.
Automation System   An end-to-end solution for automated test case execution. Provides the infrastructure for test case automation to be executed and reported on.
Automation Target Machine/Machine Under Test   Automation System Component – Machine where Harness is launched and test cases are executed.   A single machine.
Code Coverage   The degree to which automation exercises a component or product, measured by some code metric: function coverage, branch coverage, block coverage, or arc coverage. Block coverage is most often used. Product code needs to be instrumented for code coverage in order to track it.
Configuration database   Automation System Component – Agents send machine software and hardware information to this database.  Used by the scheduler to find valid automation target machines for automation.
Code Writer Code Generators A tool that can query some structure of a target of test (an assembly, a database) and generate test code that can be executed against that target of test.
Configuration Matrix   The hardware and software configuration dimensions supported for a product release. The dimensions are multiplied out into a full matrix.
Data Driven Automation   Automation that accepts variable input at run time.  Variables are stored in some persisted state (database, file) and read into the code as it is executing.
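
A minimal Java sketch of the idea follows; the add function and the inlined rows are illustrative (a real system would read the rows from a file or database at run time):

import java.util.List;

// Data-driven automation: one test body, many persisted inputs.
class DataDrivenExample {
    // Stand-in for rows read from a file or database at run time.
    static final List<String[]> ROWS = List.of(
            new String[]{"2", "3", "5"},
            new String[]{"10", "-4", "6"},
            new String[]{"0", "0", "0"});

    static int add(int a, int b) { return a + b; }   // hypothetical code under test

    public static void main(String[] args) {
        for (String[] row : ROWS) {
            int a = Integer.parseInt(row[0]);
            int b = Integer.parseInt(row[1]);
            int expected = Integer.parseInt(row[2]);
            int actual = add(a, b);
            // Each data row becomes one variation of the same test case.
            System.out.printf("add(%d,%d)=%d expected=%d -> %s%n",
                    a, b, actual, expected, actual == expected ? "PASS" : "FAIL");
        }
    }
}
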
Delta diff The resultant difference between two things.
Distributor/scheduler/controller Scheduler, controller Automation System Component – This is the part of the automation system where test automation is queued up to be executed on machines within a machine pool (lab). It often has access to configuration data from the automation target machines in order to send tests to the correct automation targets (e.g., if a test requires IA64). The distributor is usually in communication with agents on the Automation Target Machines. In some cases, distributors are overloaded with coordination instructions for multi-machine automation (better to do this in a harness, for portability). Distributors are basically fancy ShellExec engines.
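
A toy Java sketch of the matching step only; the machine names, architectures, and record fields are illustrative, and a real distributor would talk to agents over the network and shell-exec the harness on the target:

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

class DistributorSketch {
    record Agent(String name, String arch, boolean ready) {}
    record Run(String testCase, String requiredArch) {}

    public static void main(String[] args) {
        List<Agent> pool = List.of(
                new Agent("lab-01", "x86", true),
                new Agent("lab-02", "IA64", true),
                new Agent("lab-03", "IA64", false));   // busy
        Queue<Run> queue = new ArrayDeque<>(List.of(
                new Run("SmokeTest", "x86"),
                new Run("StressTest", "IA64")));

        while (!queue.isEmpty()) {
            Run run = queue.poll();
            // Use configuration data to pick a valid automation target machine.
            Agent target = pool.stream()
                    .filter(a -> a.ready() && a.arch().equals(run.requiredArch()))
                    .findFirst().orElse(null);
            if (target == null) {
                System.out.println("no ready machine for " + run.testCase()
                        + " (a real distributor would requeue it)");
                continue;
            }
            // A real distributor would now shell-exec the harness on the target.
            System.out.println("dispatch " + run.testCase() + " -> " + target.name());
        }
    }
}
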
Genetic Algorithms   Data driven automation that “evolves” towards the favored result.  User “selects” the favored result prior to execution and the genetic algorithm can mutate and become more and more efficient at finding the least number of steps required to get from start to favored result.
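
A toy Java sketch of the mechanism, not a real test generator: the favored result (the string below) and the mutation scheme are made up, but they show selection and mutation driving a population toward a user-chosen target:

import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

class GeneticSketch {
    static final String FAVORED = "PASS";           // user-selected favored result
    static final Random RNG = new Random(42);

    static int fitness(String candidate) {          // lower = closer to favored
        int score = 0;
        for (int i = 0; i < FAVORED.length(); i++)
            if (candidate.charAt(i) != FAVORED.charAt(i)) score++;
        return score;
    }

    static String mutate(String parent) {           // flip one random character
        char[] c = parent.toCharArray();
        c[RNG.nextInt(c.length)] = (char) ('A' + RNG.nextInt(26));
        return new String(c);
    }

    public static void main(String[] args) {
        String[] population = new String[20];
        Arrays.fill(population, mutate(mutate("AAAA")));
        for (int gen = 0; fitness(population[0]) > 0; gen++) {
            // Selection: keep the best candidate, refill the rest with its mutants.
            Arrays.sort(population, Comparator.comparingInt(GeneticSketch::fitness));
            for (int i = 1; i < population.length; i++)
                population[i] = mutate(population[0]);
            System.out.println("gen " + gen + ": best = " + population[0]);
        }
        System.out.println("favored result reached: " + population[0]);
    }
}
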
Harness   Automation System Component – The end point executable run on an Automation Target Machine which knows how to interpret structure in a binary or some other blob which contains test cases and  execution instructions.  Easy to use harnesses have both a GUI and a command line mode (for use with a distributor).
Instrumented Build   A build of the product which has been modified in some way to trace code behaviors, such as code coverage, or memory tracing.
ITE   Integrated Test Environment.  A tool used to create and execute Model Based Tests.
Log File   Verbose data about the run, parameters used, system configuration used, trace data, and the result of test cases. Log files are parsed and can have result data uploaded to a result database. Log files are often saved to a file share and linked to results if needed for later analysis.
Logger Logging API API used by test automation authors to write comments and pass/fail results to the log file.
Machine Paver   Automation System Component – A component that can provision machines depending on test requirements. A paver can blow away and restore machines in the lab to allow the install of custom or standard test baseline images. It may be reactive (test cases submitted to the system request a configuration that is not available, so the paver blows away machines and restores or creates the correct image for the test case) or proactive (the paver sets up machines to base configurations prior to test case execution, and test cases are designed to run only on the configurations available in the lab).
Machine Pool   Automation System Component – A logical grouping of random machines, usually in a Lab.  Might be secured.
MAGEN   A tool that allows users to create dimensions and expand them into matrices.
Model Based Testing   A testing method that usually uses a finite state machine from which test cases and paths for testing are derived.
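
A minimal Java sketch of deriving test paths from a finite state machine; the login model and the fixed path length are illustrative:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class ModelSketch {
    // state -> (action -> next state)
    static final Map<String, Map<String, String>> FSM = Map.of(
            "LoggedOut", Map.of("login", "LoggedIn"),
            "LoggedIn", Map.of("logout", "LoggedOut", "browse", "LoggedIn"));

    // Enumerate every action sequence up to a fixed length; each path is a test case.
    static void walk(String state, List<String> path, int depth, List<List<String>> out) {
        if (depth == 0) { out.add(new ArrayList<>(path)); return; }
        for (var step : FSM.get(state).entrySet()) {
            path.add(step.getKey());
            walk(step.getValue(), path, depth - 1, out);
            path.remove(path.size() - 1);
        }
    }

    public static void main(String[] args) {
        List<List<String>> tests = new ArrayList<>();
        walk("LoggedOut", new ArrayList<>(), 3, tests);
        tests.forEach(t -> System.out.println("derived test: " + t));
    }
}
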
OASYS   Office Automation System. One of two end-to-end automation systems at Microsoft.
Offline Run   Automation System Component – a persisted representation of a Run, can be used for offline automation.
Offline automation portable automation Automation that can run outside of the DMZ. The only part of the system used is the Harness. The Offline Run is passed to the harness and tests are executed. The assumption is that the automation target machine(s) have already been created and exist in the correct state for the test cases to be executed.
Parameter Matrix   A set of dimensions and values for a method signature that takes more than one parameter.
PICT Pairwise Independent Combinatorics A tool that can be used to create equivalency classes based on pairwise combination of a set of dimensions. Results in far fewer outputs than a full matrix expansion. Guarantees that all values from any two dimensions are combined at least once in the result set.
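
This is not PICT itself, but a greedy Java toy over three made-up dimensions that shows why all-pairs coverage needs far fewer rows than the full matrix:

import java.util.*;

class PairwiseSketch {
    public static void main(String[] args) {
        String[][] dims = {
                {"Win7", "Vista", "XP"},        // OS
                {"x86", "x64"},                 // architecture
                {"en-US", "ja-JP", "de-DE"}};   // locale

        // Build the full matrix (3 * 2 * 3 = 18 rows) as the candidate pool.
        List<String[]> full = new ArrayList<>();
        for (String os : dims[0])
            for (String arch : dims[1])
                for (String loc : dims[2])
                    full.add(new String[]{os, arch, loc});

        // Every value pair from any two dimensions must appear at least once.
        Set<String> uncovered = new HashSet<>();
        for (String[] row : full) {
            uncovered.add("01:" + row[0] + "," + row[1]);
            uncovered.add("02:" + row[0] + "," + row[2]);
            uncovered.add("12:" + row[1] + "," + row[2]);
        }

        List<String[]> picked = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            String[] best = null;
            int bestGain = -1;
            for (String[] row : full) {          // greedily pick the row covering
                int gain = 0;                    // the most still-uncovered pairs
                if (uncovered.contains("01:" + row[0] + "," + row[1])) gain++;
                if (uncovered.contains("02:" + row[0] + "," + row[2])) gain++;
                if (uncovered.contains("12:" + row[1] + "," + row[2])) gain++;
                if (gain > bestGain) { bestGain = gain; best = row; }
            }
            uncovered.remove("01:" + best[0] + "," + best[1]);
            uncovered.remove("02:" + best[0] + "," + best[2]);
            uncovered.remove("12:" + best[1] + "," + best[2]);
            picked.add(best);
        }
        System.out.println(picked.size() + " rows instead of " + full.size());
        picked.forEach(r -> System.out.println(Arrays.toString(r)));
    }
}
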
Report   Visualization of automation results.
Result   The actual Pass/Fail/Other from a test case.   Also a logical object in the result database.   Linked to test case and contains metadata describing the Automation Target machine on which the test was executed.   Data for the result is retrieved from the log file.
Run Job A logical object that represents the execution instructions, parameters, binaries, command line, and any other information required by the harness to execute a test case or set of test cases.
Scenario End user case An end-to-end test case that covers a user scenario.
TCM Test Case Manager A database where test cases can be created, modified, stored, and deleted.  Stores metadata about automated test cases, stores descriptions and test steps for manual test cases.
Test Case (automated)   • A test case *must* be atomic – 100% self-contained; it can be executed independently, or in any order with any other set of test cases.
• Test cases *must* be implemented so that they can be run in any order (API and UI)
• API (not UI) Test cases *must* be implemented so that they can be run in parallel (or multi-threaded).
• Test cases *must* be globalized, that is to say that any test can run on any international platform/OS.
• Test cases *must not* have dependencies on each other, execution sequence *must* be irrelevant
• Test cases *should* clean up any data they wrote, or restore any data they deleted or modified during execution. Test cases *should not* have side/after-effects that could affect other tests.
• Test cases *must* verify a valid starting state exists on the machine under test.
• Test cases *must* be able to configure the machine under test with data required for the test.
• Test cases *must* be able to execute successfully on a valid automation target machine that is not connected to a network.
• Test cases *must* specify an execution timeout which if exceeded results in a failure.
• A test case *should* map to a method or function in an assembly DLL (assuming we will use a harness like NUnit or JUnit; see the sketch after this list).
– For example, a single assembly could contain multiple test cases, but each of those cases should be independently executable. 
– One programmatic entry point per test case is a good rule of thumb.
• A test case *must* specify any special execution constraints that are beyond the baseline, and be able to verify those execution constraints prior to execution.
– For example, a test case might be specifically built to run *only* on a specific SKU of Green.  The test case needs to verify that it is being executed on that SKU prior to test step execution to avoid a false failure due to invalid execution environment.
• A test case *must* log one and only one Pass/Fail result per execution.
– Internal test steps can log pass fail, but the Test case pass/fail is the result that we record for reporting purposes. 
– Test step results are logically “anded” together to create the test case result.
• If you have a “test case” that requires another “test case” to execute before it, you actually have 2 test steps within 1 test case.  Log one result.  This is critical for results to mean something for reporting later on.
Test Case: TCM-level definition (metadata):
• A test case is represented in a Test Case manager, with the following required data:
– A test case *must* have a description of each execution step in the test. 
– A test case *must* have a description of required environment/hardware that is beyond the baseline configuration which is needed for successful execution.
– A test case *must* have a reference to the automation scripts/binaries as well as source code.
– A test case *must* have a description and a reference to any data required to set up the test.  
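
Pulling the rules above together, here is an illustrative JUnit 4 sketch of one atomic test case (the CartService under test is hypothetical, and JUnit 4 must be on the classpath): the starting state is verified, a timeout is specified, exactly one pass/fail result is logged, and cleanup restores what the test wrote.

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class CartTests {
    private CartService cart;

    @Before
    public void verifyAndConfigureStartingState() {
        // Verify a valid starting state exists on the machine under test.
        cart = new CartService();
        Assert.assertTrue("precondition: cart must start empty", cart.isEmpty());
    }

    @Test(timeout = 5000)   // exceeding the execution timeout is a failure
    public void addItemIncreasesCount() {
        cart.add("widget");
        // One and only one pass/fail result per execution: the assertion below.
        Assert.assertEquals(1, cart.count());
    }

    @After
    public void cleanUp() {
        // Restore anything the test wrote so order and parallelism stay irrelevant.
        cart.clear();
    }

    // Hypothetical code under test, inlined so the sketch is self-contained.
    static class CartService {
        private final java.util.List<String> items = new java.util.ArrayList<>();
        boolean isEmpty() { return items.isEmpty(); }
        void add(String item) { items.add(item); }
        int count() { return items.size(); }
        void clear() { items.clear(); }
    }
}
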

Test Suite   A collection of test cases.
Test Variation   Same test case, using different variables for each execution.
