Test automation

01 January 2009


Why is enterprise-scale test automation necessary? In the era of Web-enabled development, cycle times have shrunk and speed to market is a central focus. As a result, many products and services that were previously shipped in boxes are now easily and quickly published to the Web.

While development cycle times have shortened, the time required to test software has not. Testers are under intense pressure to complete their work quickly, and failing to do so creates the perception that they are the bottleneck in the software delivery process. Automation promises testing that keeps pace with development, but it comes with a variety of challenges.

Challenges with test automation

Many companies have used test automation, but only on small projects. The successes and lessons learned during small-scale test automation projects can be extended to larger implementations. For example, at BT, automated testing is imperative due to the required time to market, complexity and scale of its 21st Century Network programme. Organisations with less complex requirements can also derive business benefit from automating their testing.

It is important that organisations considering test automation recognise that the process requires a substantial commitment in terms of investment and sponsorship, and it may not yield an immediate return. With sustained dedication, however, it can produce invaluable results. For test automation to succeed, the right people must apply it.

It is important to first find testers who understand the software for which they will be writing automated scripts. Testers without the proper experience and knowledge can only exercise the software at the most basic level, preventing the full benefits of automation – such as modularisation, data externalisation and re-use – from being realised. If the pool of experienced testers is limited, it is better to use them for defining test requirements and designing test cases than for automation work. Once the experienced testers have been identified, a dedicated test script production team can be created. This team industrialises the production of test scripts and becomes the Test Automation Centre (TAC).

Inside the TAC

For each application or business area, the company needs to define a framework for how to automate the tests. This identifies the factors that drive the greatest efficiency and maximum re-use while minimising future maintenance. For example, companies must decide early on which tests should not be automated.

This decision is based on the degree of technical difficulty of automation; if the difficulty level is too high, the automation will be too costly. Whether automation will pay off can be judged by weighing factors such as the frequency of test runs, the lifespan of the tests, the cost of automation against the cost of manual testing, and ongoing script maintenance.
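The cost comparison described above reduces to simple arithmetic. The sketch below is illustrative only – the function name and the effort figures are hypothetical, not from the article – but it shows how the listed factors (run frequency over the test's lifespan, automation outlay, manual cost per run, and script maintenance) combine into an automate-or-not decision.

```python
def worth_automating(runs_over_lifespan, manual_hours_per_run,
                     automation_hours, maintenance_hours_per_run):
    """Return True if automating the test costs less than running it manually.

    All inputs are effort in hours; the comparison is total automated cost
    (one-off scripting plus per-run maintenance) against total manual cost.
    """
    manual_total = manual_hours_per_run * runs_over_lifespan
    automated_total = automation_hours + maintenance_hours_per_run * runs_over_lifespan
    return automated_total < manual_total

# Hypothetical test: run 50 times over its lifespan, 2h manual per run,
# 20h to automate, 0.2h maintenance per run -> 30h automated vs 100h manual.
print(worth_automating(50, 2.0, 20.0, 0.2))   # True
# The same test run only 3 times never repays the scripting effort.
print(worth_automating(3, 2.0, 20.0, 0.2))    # False
```

A rarely run or short-lived test fails this comparison however easy it is to script, which is why run frequency and lifespan sit alongside raw automation cost in the decision.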

The next step is to script a small number of proof-of-concept tests to ensure that time and resources are not misused. Running these early tests lets a company validate its overall approach: they demonstrate the complexity of the tests, highlight any technical issues in using them with the applications, and provide an accurate estimate of the cost of testing.

Finally, companies should implement a three-stage review process for the ongoing creation of test scripts to ensure their quality. First, the scripts undergo a peer review within the automation team; next, they are reviewed by one of the automation designers; finally, they are signed off by an external automation expert.

Identifying return on investment


Is automation more cost-effective than manual testing? To gauge ROI, the common tasks that would be required in either case must first be removed from consideration; the remaining tasks are then assessed for effort and cost. The key measure is the number of test cycles needed before the outlay is offset by the savings. Depending on the business functionality being tested, the break-even point typically falls between six months and a year.

Just over 18 months after BT set out to establish an industrial-scale automated testing capability, its TAC is paying real dividends as part of the overall effort to improve customer satisfaction and reduce cycle times. BT estimates that every test execution run of 100 test cases saves an average of 35 hours of manual test effort. Without automated testing, it would have taken twice the number of testers to validate each system and process incorporated into the 21st Century Network programme. BT also found additional benefits, such as identifying errors and omissions in the tests and catching bugs in the applications that manual testers had missed.
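The key ROI measure described above – the number of test cycles before the outlay is offset by the savings – can be sketched directly. The 35-hours-saved-per-cycle figure is BT's, quoted in the article; the 280-hour outlay is a hypothetical illustration, not a BT number.

```python
import math

def break_even_cycles(automation_outlay_hours, saving_hours_per_cycle):
    """Number of full test cycles before cumulative savings cover the outlay."""
    return math.ceil(automation_outlay_hours / saving_hours_per_cycle)

# With BT's reported saving of 35 hours per run of 100 test cases,
# a hypothetical 280-hour scripting outlay is repaid after 8 cycles.
print(break_even_cycles(280, 35))  # 8
```

Mapping the break-even cycle count onto the calendar (how often cycles actually run) is what yields the six-month-to-one-year ROI point the article cites.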

Marta Zarraga

Director, End-to-end testing and service introduction, BT