ABSTRACT
The September chapter meeting of the NoVA/DC IEEE Computer Society will feature a debate panel on two different approaches to software quality assurance: software process improvement versus software product assessment. The panel will open with two 10-minute presentations by strong advocates on each side of the issue. A 30-minute moderated discussion between the panel and the audience will follow, and the panelists will then each be given 5 minutes for closing remarks.
Software product assessment techniques will typically employ:
- formal inspections,
- requirements analysis,
- configuration control,
- structured programming,
- unit testing under coverage tools (see the sketch following this list),
- integration testing, and
- design for test.
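As a concrete example of "unit testing under coverage tools," here is a minimal, self-contained Python sketch; the clamp function and the test names are hypothetical, chosen purely for illustration.

    import unittest

    def clamp(value, low, high):
        # Constrain value to the closed interval [low, high].
        if value < low:
            return low
        if value > high:
            return high
        return value

    class ClampTests(unittest.TestCase):
        # Each test exercises one branch of clamp; a coverage tool reports
        # any branch the suite never executed.
        def test_below_range(self):
            self.assertEqual(clamp(-5, 0, 10), 0)

        def test_above_range(self):
            self.assertEqual(clamp(15, 0, 10), 10)

        def test_within_range(self):
            self.assertEqual(clamp(7, 0, 10), 7)

    if __name__ == "__main__":
        unittest.main()

Saved as test_clamp.py, the suite can be run under a coverage tool such as coverage.py with "coverage run -m unittest test_clamp" followed by "coverage report"; any lines reported as uncovered are code the tests never exercised.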
Software process improvement methodologies include the Software Engineering Institute's (SEI) Capability Maturity Model (CMM), NASA's Quality Improvement Paradigm, the ISO 9000 and 9001 standards, and the Cleanroom software process model. The main argument made by software process advocates is that improved software development processes prevent bugs from being introduced into software, whereas software assessment approaches detect and correct bugs after they exist. Software processes will often keep track of metrics such as:
- software defect insertion history (see the sketch following this list)
- staffing history
- productivity history
- costs of software development
- code re-use
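As a hedged illustration of what tracking such metrics can look like, the short Python sketch below computes a defect insertion density per development phase from a defect insertion history and a product size figure; the phase names and all numbers are invented for illustration, not drawn from any real project.

    # Hypothetical defect insertion history: defects later traced to the
    # phase in which they were inserted, plus the product size in KLOC.
    defect_insertion_history = {
        "requirements": 12,
        "design": 30,
        "code": 95,
    }
    size_kloc = 48.0  # thousand lines of code (illustrative figure)

    def insertion_density(history, kloc):
        # Defects inserted per KLOC, by phase -- one simple process metric.
        return {phase: count / kloc for phase, count in history.items()}

    for phase, density in insertion_density(defect_insertion_history, size_kloc).items():
        print(f"{phase}: {density:.2f} defects inserted per KLOC")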
Advocates on either side have strongly differing views on which approach to software assurance results in higher quality software. Whatever their differences, both camps aim to improve the quality of the software being produced today. Thus even staunch advocates of product assurance will acknowledge the benefits of good software development processes, and software process advocates will likewise acknowledge the benefits of software product assessment techniques.
Perhaps one of the most contentious issues, however, is the role of software testing in software process improvement. The premier example of this debate occurs between Cleanroom process advocates and software testing advocates. The Cleanroom process substitutes off-line reading of code for on-line software testing, relying on human discipline rather than product assessment for software quality. Orthodox Cleanroom practice holds that development teams do not need to compile or execute the software code. The only testing acceptable in the Cleanroom process is stochastic testing, used specifically for reliability estimation rather than defect detection.
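To make the reliability-estimation role of stochastic testing concrete, the following Python sketch simulates a batch of randomly generated tests and derives a naive mean-time-between-failures figure from the outcome. The failure probability, run times, and function names are all invented for illustration; real Cleanroom certification uses considerably more sophisticated statistical reliability models than this simple estimate.

    import random

    def run_stochastic_tests(num_tests, failure_probability=0.02, mean_run_time=5.0):
        # Simulate executing randomly generated tests drawn from an assumed
        # operational profile; each test consumes some execution time and
        # either passes or fails. All parameters here are invented.
        total_time = 0.0
        failures = 0
        for _ in range(num_tests):
            total_time += random.expovariate(1.0 / mean_run_time)
            if random.random() < failure_probability:
                failures += 1
        return total_time, failures

    def naive_mtbf(total_time, failures):
        # Naive mean time between failures: execution time divided by
        # observed failures (infinite if no failures were observed).
        return total_time / failures if failures else float("inf")

    total_time, failures = run_stochastic_tests(1000)
    print(f"{failures} failures in {total_time:.0f} time units, "
          f"estimated MTBF = {naive_mtbf(total_time, failures):.0f}")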
To illustrate the contentiousness of the issue, in the March/April 1997 issue of IEEE Software, software testing expert Boris Beizer proclaims in his soapbox article, "Cleanroom Process Model: A Critical Examination", that "Cleanroom is an underdeveloped and overexposed minority practice". Further, Beizer states, "The statistically valid supporting evidence for Cleanroom, despite its continual republication (which doesn't add to the data set), is nil". Beizer poses a formal challenge to Cleanroom advocates: "Develop your software using Cleanroom. Then let us test it further [using the stochastic tests prescribed by Cleanroom] under coverage tools (can't hurt, can it?). We'll then remove all uncovered code. A repetition of your stochastic tests will yield the same mean time between failures; and by the Cleanroom doctrine this revised code should be equally acceptable. The challenge: agree or explain the fallacy in this reasoning."
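Beizer's challenge turns on a simple mechanical step: code never executed by the stochastic test suite cannot have contributed to the failure behavior those tests measured. The toy Python sketch below illustrates only that step; the function names and the set of executed functions are invented, and in a real experiment the executed set would come from a coverage tool's report rather than being written out by hand.

    # Functions making up a (hypothetical) delivered program.
    all_functions = {"parse_input", "compute_route", "format_report", "legacy_export"}

    # Functions a coverage tool observed the stochastic tests executing.
    executed_functions = {"parse_input", "compute_route", "format_report"}

    # Code the stochastic tests never reached; under Beizer's challenge,
    # removing it cannot change the failures those same tests observe.
    uncovered = all_functions - executed_functions
    print("Candidates for removal under the challenge:", sorted(uncovered))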
Our panelists will debate the fallacies, theories, merits, and pitfalls of software process improvement and software product assessment approaches to software assurance.
BIOGRAPHIES
Don O'Neill
Don O'Neill is a seasoned software engineering manager and technologist currently serving as an independent consultant. Following his twenty-seven-year career with IBM's Federal Systems Division, Mr. O'Neill completed a three-year residency at Carnegie Mellon University's Software Engineering Institute (SEI) under IBM's Technical Academic Career Program.
As an independent consultant, Mr. O'Neill conducts defined programs for managing strategic software improvement. These include implementing an organizational Software Inspections Process, directing the National Software Quality Experiment, implementing Software Risk Management on the project, conducting the Project Suite Key Process Area Defined Program, and conducting Global Software Competitiveness Assessments. Each of these programs includes the necessary practitioner and management training.
Mr. O'Neill served on the Executive Board of the IEEE Software Engineering Technical Committee and as a Distinguished Visitor of the IEEE. He is a founding member of the National Software Council (NSC) and the Washington DC Software Process Improvement Network (SPIN).
Don O'Neill's Position Statement:
"Process Improvement Versus Product Assessment"
Critical Questions
- What do projects depend upon - process improvement or product assessment?
- What does the enterprise need?
- What is the outlook?
While we debate the merits of process improvement versus product assessment, the global software competitiveness of the nation is being threatened. Of course both process improvement and product assessment are essential in a well-balanced software strategy. The problem is that the industry is so dynamic that it is continually off balance.
In this environment on the factory floor, practitioners are forced to practice triage. Their choices are forging a shared vision among stakeholders [people], evolving a domain-specific reference architecture [product], and maturing the organization's software process [process].
In focusing on near-term demands, the industry tilts toward product assessment. Process improvement operates over the long planning horizon; product assessment operates over the short term, the here and now! The project focus is on meeting deliverables... favoring product assessment. The enterprise focus is on new business acquisition and competitiveness... forging a shared vision among stakeholders and partners.
Jonathan D. Addelston
Jonathan D. Addelston has recently re-energized his consulting practice, UpStart Systems, after three tours of duty with BDM International, as Chief Technology Officer; with PRC (now Litton/PRC), as Vice President, Software Engineering; and with the Software Productivity Consortium, as its first Vice President, Software Product Development. He has been very active in the development and review of Software Engineering Institute Process Program materials, serving as BDM's and PRC's chief liaison, a member of the CMM Correspondents Group, the CMM-based Appraisals Review Group, and the Blue Ribbon Panel which evaluated the entire SEI Program before its second contract extension as a Federally Funded Research and Development Center. He is a co-founder, with Judah Mogelinsky, of the first Software Process Improvement Network (SPIN), located in the Washington, D.C. area. Mr. Addelston has been intensively involved in software development since graduating from MIT 32 years ago. Jonathan believes he has participated in about four Society for Software Quality National Software Debates, but he didn't keep metrics.
Jonathan D. Addelston's Position Statement
First let's admit that the framers of tonight's meeting have posed for us a very ambiguous challenge. This is a "Panel on Software Quality Assurance," but the description alternatively calls it a "debate panel" and a "moderated discussion." I hope we are provocative enough to challenge the participants' and audience's professional understanding of software engineering without reverting to primitive name-calling (a step beyond the Beizer challenge quoted by Terry Bollinger in his position statement).
It is indicative of our juvenile life stage in the software engineering profession that the IEEE and the SEI cannot even agree on a common taxonomy for discussing process vs. product. The SEI Software CMM (V1.1) has Software Quality Assurance as a Key Process Area of its lowest Dantean Circle, and places "Inspections" within its Defined Level 3 as a potential approach to Peer Reviews (a process approach, not a product approach, and not part of quality assurance).
A further problem with the debate's framework is the peculiar choice of exemplar metrics which "software processes will often keep track of," according to our panel organizers. The list omits the key atomic measures: size (not necessarily lines of code nor function points!), effort, defects, and risk. The debate abstract also calls "structured programming" a software product assessment technique. Is Humpty Dumpty at work here? ("When I use a word, it means exactly what I intend it to mean, neither more, nor less.")
I support Don O'Neill's observation that product assessment addresses immediate concerns, while process improvement best applies to overarching influences spanning project lifecycles and best addresses organizational, rather than individual product, needs.
Here's a boating analogy that might work: product development methods are the sails; product assurance is the keel and tiller; project management is the captain; process improvement is the shipyard filled with naval architects, materials engineers, and construction workers. Hear all those champagne bottles shattering at the new launches?
Terry Bollinger
Terry Bollinger is perhaps best known for his 1992 IEEE Software paper "A Critical Look at Software Capability Assessments," in which he and Clem McGowan analyzed the methods used by the Software Engineering Institute to evaluate industrial software development processes. The pointed combination of both criticism and praise in that paper won them the IEEE Best Paper award that year, and also got them blackballed from speaking at any conference where SEI had had an opportunity to chat with the organizers. In his continuing quest to annoy the powers of process at least once or twice a year, Terry will have a Binary Critic essay entitled "Physics, Software, and Group Intelligence" in the upcoming October 1997 issue of IEEE Computer. In that piece he argues that genuine process improvement is actually one of the most intellectually challenging undertakings imaginable, as it will be possible on a significant scale only when we understand the mechanisms of creative intelligence well enough to extend them into complex systems of computers and people.
Terry has Master's and Bachelor's degrees in Computer Science, and he currently works for a Department of Defense contractor. Among his other interests are physics (he was invited once to give a university physics lecture on possible collapse mechanisms in ultrasonically driven bubbles), science, and spending time with his wife and four young children.
Terry Bollinger's Position Statement
The idea that process alone can eliminate software defects demonstrates how astonishingly little theoretical basis there is behind most of the current crop of "process improvement" methods. The process-only defect removal premise can be traced to production lines, where the goal is to replicate a well-defined product design with enough predictability to ensure that the resulting copies will fall within specified tolerance limits. Within this cookie-cutter process context, the idea of convergent elimination of defects through process refinement makes excellent sense. Indeed, the value of such process-based defect elimination methods has been proven repeatedly for large-scale replicative activities such as automobile production and chip manufacturing.
The problem is that software design is emphatically not a replicative process. Design processes are readily distinguished from replicative processes by the fact that producing exactly the same product multiple times in a design process represents a serious failure of the process, not a success. The difference is crucial, because without the constraining effect of "convergent" products -- that is, products that are identical to within some well-defined level of tolerance -- the argument that process refinement alone can predict and prevent all possible failure modes becomes specious. Such a claim would be akin to trying to predict the outcome of all future presidential elections based only on knowledge of past elections, without bothering to analyze the future candidates and their actual political situations.
What is really needed for design processes is a structured combination of proofs, tests, and creative insights aimed specifically at identifying the faults that are not obvious and cannot be predicted from similar past examples alone. Without such "expect the unexpected" analysis and testing techniques, relying on the idea that process techniques alone can eliminate all possible failure modes in a new software design is very risky indeed -- not to mention flat-out wrong.