
Test execution and evidence

Once test planning and scripting have been completed and the necessary test tools are in place, we are ready to test.

When test execution can commence

Before testing can begin we must ensure the test entry criteria, specified in the Test Strategy, are met. One of the most common reasons for delay is instability of the software under test or of the test environment itself. A good way of measuring the stability of the environment is to develop a smoke test. A smoke test is a very quick health check of the test environment: a happy-path run across the most important features of the system. If an environment fails smoke testing, the integrated environment is rejected and testing is suspended. Once the fault is fixed and smoke testing passes, testing can start.
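A smoke test of this kind can be sketched as a short list of happy-path checks, each covering one key feature. The feature names and check functions below are illustrative assumptions, not part of any particular test tool; in a real suite each check would drive the application under test.

```python
# Minimal smoke-test sketch: run quick happy-path checks in order and
# reject the environment on the first failure. The checks are stubs.

def check_login() -> bool:
    # Would log in with known-good credentials in a real environment.
    return True

def check_search() -> bool:
    # Would run a simple search and confirm results are returned.
    return True

def check_checkout() -> bool:
    # Would complete a basic end-to-end transaction.
    return True

SMOKE_CHECKS = [
    ("login", check_login),
    ("search", check_search),
    ("checkout", check_checkout),
]

def run_smoke_test() -> bool:
    """Return True only if every happy-path check passes."""
    for name, check in SMOKE_CHECKS:
        if not check():
            print(f"Smoke test failed at '{name}': environment rejected, testing suspended")
            return False
    print("Smoke test passed: test execution may begin")
    return True

if __name__ == "__main__":
    run_smoke_test()
```

Keeping the checks fast and shallow is the point: the smoke test measures whether the environment is stable enough to test, not whether the features are correct.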

The test environment

The test environment is a collection of components that make up the system under test. It usually comprises hardware, an operating system, third-party software (possibly including a database) and the application software. If the system under test is a subsystem, other systems, possibly simulators, may be connected to it. It is very important to maintain the integrity of the test environment, so usually only testing personnel access it; developers are generally not allowed to alter its configuration or apply code changes to it.

It is imperative that a build note accompanies the delivery of the software prior to installation; the build note should detail the features delivered along with any defect solutions. It is also important that the environment is built to the level specified in the Test Plan. By controlling the hardware and software build levels you can be confident of what you have tested and when you tested it.
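One way to picture this control is a simple check that the delivered build note matches the level named in the Test Plan. The build-note fields and values below are invented for illustration; real build notes vary by organisation.

```python
# Hypothetical build-note check: accept a delivery only when its build
# level matches the level specified in the Test Plan.

build_note = {
    "build_id": "2.4.1-rc3",                    # delivered build level
    "features": ["feature-101", "feature-102"],  # features delivered
    "defect_fixes": ["DEF-231", "DEF-245"],      # defect solutions included
}

planned_build = "2.4.1-rc3"  # build level specified in the Test Plan

def delivery_matches_plan(note: dict, planned: str) -> bool:
    """Return True if the delivered build is the one the Test Plan expects."""
    return note.get("build_id") == planned

if delivery_matches_plan(build_note, planned_build):
    print("Build accepted for installation")
else:
    print("Build rejected: does not match the Test Plan level")
```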

Test execution and evidence collection

Once smoke testing has completed in the integrated environment, test execution can begin. Normally test sets are maintained in, and executed from, the test repository. Each step in the test script is followed exactly and the actual result is compared against the expected result. If they match, the step is set to pass and the next step is performed; this is repeated until all the test steps are complete. If the expected and actual results differ, the step is failed and the test is marked as a fail.
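The step-by-step procedure above can be sketched as a small loop. The script steps here are invented examples; a real repository tool would record the statuses rather than return them.

```python
# Sketch of script execution: walk the steps in order, compare actual
# against expected, and fail the whole test on the first mismatch.

steps = [
    {"action": "enter valid credentials",
     "expected": "home page shown", "actual": "home page shown"},
    {"action": "open account summary",
     "expected": "balance displayed", "actual": "balance displayed"},
]

def execute_script(steps: list) -> str:
    """Mark each step pass/fail; the test fails if any step fails."""
    for step in steps:
        if step["actual"] == step["expected"]:
            step["status"] = "pass"
        else:
            step["status"] = "fail"
            return "fail"  # expected and actual differ: test marked as a fail
    return "pass"

print(execute_script(steps))
```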

It is important to capture test evidence wherever possible and store it against the test in the repository. I have encountered many cases where the software under test has regressed, and I have been able to prove this by comparing test evidence with that from previous test iterations on earlier builds. If you don't capture test evidence, it cannot be proved that regression has occurred, and the suspicion may be that an error went unnoticed in earlier test iterations. All errors should have test defects raised against them.
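The comparison against earlier iterations can be as simple as checking stored evidence against the current capture. The evidence contents and the use of a hash are assumptions for illustration; in practice the evidence might be screenshots, logs or output files stored in the repository.

```python
# Hypothetical regression check: compare evidence captured on the current
# build against the evidence stored for an earlier test iteration.
import hashlib

def evidence_hash(content: bytes) -> str:
    """Fingerprint a piece of evidence so iterations can be compared."""
    return hashlib.sha256(content).hexdigest()

# Evidence stored in the repository from an earlier iteration (invented).
previous = evidence_hash(b"account summary: balance 100.00")

# Evidence captured on the current build (invented).
current = evidence_hash(b"account summary: balance 100.00")

def has_regressed(previous: str, current: str) -> bool:
    """Differing evidence for the same test suggests a regression."""
    return previous != current

print(has_regressed(previous, current))
```

Without the stored `previous` evidence there is nothing to compare against, which is exactly why regression cannot be proved when evidence is not captured.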


 Copyright © 2016 Chic Computer Consultants. All rights reserved