Criteria for selecting a test management tool (or quality management tool)

5 02 2009

This article will help you choose a test management tool with peace of mind. Why? Because for some clients it is not as easy as you might believe. Example: a client already has a tool from HP (Quality Center) and at the same time uses Enterprise Architect (EA) to manage its requirements. The question was: « Should we use Quality Center or EA? ». That question caused quite a stir over the choice, and inter-departmental politics plus the reluctance of some users did not help: everything stayed on stand-by… until the list below was drawn up.

This list exists only to provide FACTUAL information about the main functions a test management tool should perform. As you can see, it is not slanted so that Quality Center gets chosen (and no, that is not irony).

A test repository should be able to manage at least the following basic functions:

Level 1 Functions
Manage test requirements:
* Assisted creation
* Multi-user: easy sharing between all project stakeholders
* Easy modification
* Easy linking between requirements > test sheets > defects
* Generation of a coverage matrix: requirements ==> campaigns / scenarios (a minimal sketch follows this list)
* Coverage analysis
* Print-friendly lists
* Version management of requirements
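
To make the requirements > test sheets > defects linking and the coverage matrix concrete, here is a minimal sketch in Python. All the names (REQ-001, TS-010, coverage_matrix, and so on) are illustrative assumptions, not the API of Quality Center or EA:

# Minimal traceability sketch: requirements linked to test sheets,
# plus a coverage report listing uncovered requirements.
requirements = {
    "REQ-001": "The user can log in",
    "REQ-002": "The user can reset a password",
    "REQ-003": "The session expires after 30 minutes",
}

# Link table: test sheet -> requirements it covers
test_sheets = {
    "TS-010": ["REQ-001"],
    "TS-011": ["REQ-001", "REQ-002"],
}

def coverage_matrix(requirements, test_sheets):
    """Return {requirement: [test sheets covering it]}."""
    matrix = {req: [] for req in requirements}
    for sheet, reqs in test_sheets.items():
        for req in reqs:
            matrix.setdefault(req, []).append(sheet)
    return matrix

matrix = coverage_matrix(requirements, test_sheets)
for req, sheets in matrix.items():
    status = ", ".join(sheets) if sheets else "NOT COVERED!"
    print(f"{req}: {status}")
# REQ-003 shows up as NOT COVERED!

The same walk over the link table gives you both the matrix and the coverage analysis: an empty list is a requirement with no test behind it.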

Manage the test plan down to the test cases:
* Assisted creation
* Easy modification
* Ability to attach documents
* Multi-user
* Version management

Manage test books / folders / test plans:
* Assisted creation of scenarios
* Multi-user
* Version management

Help steer the tests:
* Status of test design
* Status of test execution
* Overall status of a test campaign (rolled up as in the sketch after this list)
* Printing of pre-formatted, pre-filled reports
* Multi-user
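
To illustrate how these statuses relate, here is a small hedged sketch that rolls per-sheet execution statuses up into an overall campaign status. The status labels and the aggregation rule are assumptions for the example, not any tool's behavior:

from collections import Counter

# Per-test-sheet execution statuses for one campaign (illustrative values).
runs = {
    "TS-010": "Passed",
    "TS-011": "Failed",
    "TS-012": "Not Run",
}

def campaign_status(runs):
    """Summarize a campaign: counts per status plus an overall verdict."""
    counts = Counter(runs.values())
    if counts.get("Failed", 0):
        overall = "FAILED"
    elif counts.get("Not Run", 0):
        overall = "IN PROGRESS"
    else:
        overall = "PASSED"
    return counts, overall

counts, overall = campaign_status(runs)
print(dict(counts), "->", overall)
# {'Passed': 1, 'Failed': 1, 'Not Run': 1} -> FAILED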

Level 2 Functions:
Manage defects:
* Tools for creating a defect workflow (a sketch follows this list)
* Creation and completion of a defect report sheet
* Support for attachments, screen captures, etc.
* Integrated search engine
* Easy, direct link between a test sheet / test case and a defect
* Email notification to the people concerned by the project's defects
* Multi-user
* Printing of pre-formatted reports filled with figures and statistics per module / function, etc.
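
As an illustration of a defect workflow, here is a minimal sketch of one as a state machine. The statuses and transitions are an example to be agreed with all the actors, not a standard:

# A defect workflow as a simple state machine: each status lists the
# statuses it may move to.
WORKFLOW = {
    "New":      ["Open", "Rejected"],
    "Open":     ["Fixed", "Rejected"],
    "Fixed":    ["Retest"],
    "Retest":   ["Closed", "Open"],   # reopened if the retest fails
    "Rejected": [],
    "Closed":   [],
}

def move(defect, new_status):
    """Apply a transition only if the workflow allows it."""
    current = defect["status"]
    if new_status not in WORKFLOW[current]:
        raise ValueError(f"{current} -> {new_status} is not allowed")
    defect["status"] = new_status
    return defect

bug = {"id": "DEF-042", "status": "New"}
move(bug, "Open")
move(bug, "Fixed")
move(bug, "Retest")
print(bug)  # {'id': 'DEF-042', 'status': 'Retest'}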

Interfacing with, and direct steering of:
* Functional test automation tools
* Technical test automation tools (performance)
* Defect tracking (if not integrated)





Best practices and test repository

22 10 2008

Say Uncle, can you give me some best practices pleaaaaaaaaaaase?
The first good practices are:
– Common sense,
– Organization,
– Reflection.

Nothing magical here, but properly implemented, these will guarantee the proper use of a test repository. I will present them as questions to ask at the right moments:
Requirements management, or Requirements:
– Uniqueness?
o Is this the only requirement on this subject?

– Clarity?
o Think about the tree structure to establish,
o Is the need expressed clearly enough, with no gray areas or ambiguities in how it is understood?
o Which implicit requirements are not listed in the statement of needs (for example) but must still be considered?
– Precision?
o Is the requirement precise enough to be measurable and quantifiable via specific test objectives?
– Validity?
o Is the document that led to the requirement up to date and validated by the customer?
– Testability?
o Is it possible to create one or more relevant test cases to effectively test every aspect of this requirement? (These questions are sketched as a checklist below.)
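
These questions lend themselves to an executable checklist. A minimal sketch, assuming a simple {criterion: yes/no} review format; the criteria mirror the list above, everything else is illustrative:

# A requirement review checklist as executable questions.
CHECKLIST = {
    "uniqueness":  "Is this the only requirement on this subject?",
    "clarity":     "Is the need expressed without gray areas or ambiguity?",
    "precision":   "Is it measurable via specific test objectives?",
    "validity":    "Is the source document up to date and customer-validated?",
    "testability": "Can one or more relevant test cases cover every aspect?",
}

def review(requirement_id, answers):
    """answers: {criterion: bool}; returns the criteria that failed."""
    failed = [c for c in CHECKLIST if not answers.get(c, False)]
    verdict = "OK" if not failed else f"REWORK needed: {', '.join(failed)}"
    return f"{requirement_id}: {verdict}"

print(review("REQ-003", {"uniqueness": True, "clarity": False,
                         "precision": True, "validity": True,
                         "testability": True}))
# REQ-003: REWORK needed: clarity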

Managing the test sheet catalog, or the Test Plan:
Fundamentals:
– Each test sheet should be linked to one or more requirements, or you can say goodbye to test coverage
– Once the test sheets are drafted, a trick is to check that every sheet is linked to at least one requirement; otherwise you have holes in your racket!
– Depending on the strategy, the Test Plan should be prioritized according to the customer's functional, organizational or business priorities
– The test plan tree must:
o Be easy for the functional staff to understand
o Follow the order of the client's business processes / applications (use numerical codes to order the directories, for example)
o Be agreed at the outset by all the actors
o Have 4 to 5 levels of directories MAXIMUM, otherwise madness awaits
– Take care with the « Details » tab:
o Enter a description of the test sheet's purpose that is clear to ordinary mortals
– Take care with the « Design steps » tab:
o Be sufficiently precise and detailed to avoid any ambiguity or interpretation at execution time
o Indicate the data to be used and where it comes from
o ONE STEP = ONE ACTION
o Fill in the expected result (RA) precisely and clearly, because it will be compared to the observed result (RO) during execution (a sketch of such a sheet follows this list)
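
To show what those rules look like when enforced, here is a hedged sketch of a test sheet model; the field names, the path convention and the checks are assumptions for the example:

from dataclasses import dataclass, field

# Illustrative model of a test sheet: one action and one expected
# result (RA) per step, plus the tree path (4 to 5 levels max).
@dataclass
class Step:
    action: str            # ONE STEP = ONE ACTION
    data: str              # data to use and where it comes from
    expected_result: str   # RA, compared to the observed result (RO) at run time

@dataclass
class TestSheet:
    sheet_id: str
    path: str              # e.g. "01_Sales/02_Orders/01_Creation"
    description: str       # clear to ordinary mortals
    requirements: list = field(default_factory=list)
    steps: list = field(default_factory=list)

    def check(self):
        assert self.requirements, "no requirement linked: goodbye coverage"
        assert len(self.path.split("/")) <= 5, "more than 5 levels: madness awaits"
        assert all(s.expected_result for s in self.steps), "a step has no RA"

sheet = TestSheet(
    "TS-011", "01_Sales/02_Orders/01_Creation",
    "Create a standard order and check the confirmation screen",
    requirements=["REQ-001", "REQ-002"],
    steps=[Step("Click « New order »", "customer C001 (test base)",
                "The order entry form opens, empty")],
)
sheet.check()  # raises nothing: this sheet respects the rules above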

Good questions:
– Uniqueness?
o One objective = one test sheet
– Clarity?
o Are the steps of the procedure clearly identified, with no gray areas or ambiguities about how to proceed?
– Data?
o Is all the representative data needed for the tests identified?
o Do you have data that is:
consistent
clear
representative
prepared
– Coverage?
o Is the test sheet's procedure relevant and complete with respect to the identified objective?
– Generic?
o No hard-coded values in a test sheet, otherwise a value is baked into the test (see the sketch below)
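
A minimal sketch of what « generic » means in practice: the test procedure takes its values from a data set at execution time, so nothing is hard-coded. The function and data set names are made up for the illustration:

# "No hard-coded values": the same generic procedure is run with
# values injected at execution time.
def run_login_test(url, user, password, expected_message):
    """Generic procedure; nothing client-specific is baked in."""
    observed = f"Welcome {user}"          # stand-in for the real action
    return observed == expected_message

# Values live in a data set, not in the test itself:
data_sets = [
    {"url": "https://test.example.org", "user": "alice",
     "password": "s3cret", "expected_message": "Welcome alice"},
    {"url": "https://test.example.org", "user": "bob",
     "password": "s3cret", "expected_message": "Welcome bob"},
]

for ds in data_sets:
    print(ds["user"], "->", "OK" if run_login_test(**ds) else "KO")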

Design and execution of test campaigns, or the Test Lab:
– Acceptance environment?
o Is an environment available to run the tests?
o Is the application under test installed in the correct version (per the client's versioning policy)?
o Are the credentials (logins / passwords) for the applications and other servers valid?
– Preparation?
o Are the test sheets up to date, or do they date from a year ago?
o Are the automated test scripts identified and stored in the right directories?
– Clarity?
o Are the names of the campaigns and test sheets explicit enough for a third party to understand?
– Data?
o Do we have all the data needed to run the test sheets?
– Coverage?
o Does the campaign as a whole cover the identified objective? (A readiness sketch follows this list.)
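
Those readiness questions can be checked mechanically before launching a campaign. A minimal sketch, where every item and its yes/no answer are illustrative:

# Pre-campaign readiness check mirroring the questions above.
READINESS = [
    ("environment available",        True),
    ("correct application version",  True),
    ("logins / passwords valid",     False),
    ("test sheets up to date",       True),
    ("automated scripts in place",   True),
    ("data prepared",                True),
]

blockers = [item for item, ok in READINESS if not ok]
if blockers:
    print("Campaign NOT ready, fix first:", "; ".join(blockers))
else:
    print("Campaign ready: fire away")
# -> Campaign NOT ready, fix first: logins / passwords valid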

Good practices during the execution of the test sheets:
– Have the environments, platforms and networks available
– Effective availability of the coffee machine
– During the execution of the steps:
o Follow the procedure in the « description » field to the LETTER!
o Tests outside the marked paths are BANNED!
o Compare the observed result (RO) with the Expected Result field (RA), as in the sketch below.
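
The RA/RO comparison at step level, as a tiny sketch (names and messages are illustrative):

# Comparing the observed result (RO) to the expected result (RA) for one step.
# A mismatch is exactly what should trigger opening a defect.
def check_step(step_name, expected_ra, observed_ro):
    if observed_ro == expected_ra:
        return f"{step_name}: Passed"
    return (f"{step_name}: Failed - RA was « {expected_ra} », "
            f"RO was « {observed_ro} » -> open a defect")

print(check_step("Step 3", "Order total = 100.00 EUR", "Order total = 99.99 EUR"))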

Managing anomalies, or Defects:
Fundamentals:
– Have a defined defect workflow that is known, validated, shared and clear, and this for all the actors
– If so, are all the actors aware of their roles and responsibilities?
– If the execution of a step fails:
o Verify that the specified expected result is correct
o Record the observed result as clearly as possible
o Check that the anomaly is reproducible, or the integrator will ridicule you and will not take the anomaly into account
o Open a defect
o Describe the anomaly as clearly as possible, otherwise your work will be useless and you will lose a lot of time in explanations
o ALWAYS link it to a test sheet (a sketch of such a defect record follows this list)
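
A hedged sketch of a defect record that refuses to exist without its test sheet link and a valid severity; all the field names are assumptions, not any tool's schema:

from dataclasses import dataclass

# A defect report that cannot be created without its test sheet link,
# per the rule above.
SEVERITIES = ("Critical", "Major", "Minor")

@dataclass
class Defect:
    defect_id: str
    test_sheet: str       # ALWAYS linked to a test sheet
    summary: str
    steps_to_reproduce: str
    severity: str
    reproducible: bool

    def __post_init__(self):
        if not self.test_sheet:
            raise ValueError("a defect must be linked to a test sheet")
        if self.severity not in SEVERITIES:
            raise ValueError(f"severity must be one of {SEVERITIES}")
        if not self.reproducible:
            raise ValueError("make it reproducible before opening it")

d = Defect("DEF-042", "TS-011", "Order total off by 0.01 EUR",
           "Run TS-011 step 3 with customer C001", "Major", True)
print(d.defect_id, "opened against", d.test_sheet)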

– Reproducibility?
o Is the anomaly reproducible under the same conditions?
– Compliance?
o Was the anomaly detected by following the procedure laid down in the test sheet?
o Are the expected results written in the sheet up to date?
– Uniqueness?
o Is the detected anomaly not already open?
To check, asking your colleagues and your manager is a good start
Use the filters and search fields
o Is there not already an open ticket on an almost identical subject?
– Clarity?
– Severity?
o What is the severity level of the anomaly?
Critical
Major
Minor

A final practice for the road? Copy and paste the practices above into an Excel file named « Checklist guaranteeing the quality of the test repository ». All that is left for you to do is:
– Go and ask your manager for a bonus ;-)
– And present it to your client as, at last, a real example of industrializing the testing tools

And for the record, know that putting these recommendations into practice every day will bring your use of this tool to CMMI level 3 (see the article on CMMI 3 and QC).