IEEE NoVA Chapter

presented by
John Paul and Jeff Rashka

ABSTRACT

How test teams introduce an automated software test tool on a new project is nearly as important as selecting the most appropriate test tool for the project. A tool is only as good as the process used to implement it.

Over the last several years, test teams have commonly implemented automated testing tools on projects without a process or strategy in place that describes in detail the steps involved in using the test tool productively. This approach typically results in test scripts that are not reusable, meaning that each script serves a single test run but cannot be applied to a subsequent release of the software application. With each incremental software build, these scripts must be recreated or adjusted repeatedly to accommodate even minor software changes. The rework increases the testing effort and brings schedule delays and cost overruns in its wake.
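
To make the reusability point concrete, consider the sketch below. It is our own illustration, not material from the presentation or the book, and every function name and data value in it is invented. A script that hard-codes one release's expected values must be edited for every build, whereas a data-driven script keeps the logic fixed and moves the release-specific values into an external table:

    # A minimal sketch of why hard-coded test scripts fail to survive
    # new builds while data-driven scripts do. All names and values are
    # invented for illustration.

    def compute_discount(order_total):
        # Stand-in for the application function under test.
        return round(order_total * 0.05, 2)

    # Brittle: release-specific expected values live inside the script,
    # so every software change forces an edit to the script itself.
    def test_discount_hard_coded():
        assert compute_discount(100.00) == 5.00

    # Reusable: the script logic stays fixed; per-release expectations
    # sit in a table (in practice, an external data file), so only the
    # data changes between builds.
    CASES = [  # (order_total, expected_discount)
        (100.00, 5.00),
        (250.00, 12.50),
    ]

    def test_discount_data_driven():
        for order_total, expected in CASES:
            assert compute_discount(order_total) == expected

    test_discount_hard_coded()
    test_discount_data_driven()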

The fallout from a bad experience with a test tool on a project can ripple throughout an organization. The experience may tarnish the reputation of the test group. Product and project managers may lose confidence in the tool to the point where the test team has difficulty obtaining approval for a test tool on future efforts. Likewise, when budget pressures materialize, planned expenditures for test tool licenses and related tool support may be among the first items cut.

By developing and following a strategy for rolling out an automated test tool, the test team can avoid major unplanned adjustments throughout the test process. The presentation "Automated Software Testing" addresses these issues and their solutions.


BIOGRAPHIES

John Paul, co-author of Automated Software Testing (Addison-Wesley, 1999), has performed as a senior programmer/analyst on financial and budgeting systems as well as a host of other information systems. His software development leadership responsibilities have included system analysis and design, application prototyping, and application development using a number of different methodologies and programming techniques. His software development responsibilities have also included application testing using automated test tools, as well as performing Year 2000 compliance testing.

Jeff Rashka, co-author of Automated Software Testing (Addison-Wesley, 1999), is a Systems Engineering Manager at Science Applications International Corporation (SAIC) and holds a master's degree in Information Systems from George Mason University. He has performed as a manager on a multitude of significant information system and systems integration projects. System applications have included worldwide transportation asset management, enterprise information management, financial management, bar-coded inventory management, and shipboard information systems. Jeff also has process improvement management experience implementing the guidelines contained within the Software Engineering Institute’s Capability Maturity Model (CMM).