Topic 5: Acquisition strategy and development models
In project management, the fundamental purpose of an acquisition strategy is to reduce the risk and uncertainty inherent in the acquisition of complex systems. The project manager's role in formulating an appropriate system acquisition strategy is supportive rather than comprehensive: acquisition strategy is the organisation's responsibility, not the project's.
Without an acquisition strategy, major resources may be expended on procuring a system that will be fatally outdated, does not meet user needs, does not integrate with planned future systems, or cannot be completed because of conflicting requirements.
Acquisition strategy is fundamentally a long-term issue. It focuses on defining the details of the program objectives and the development of an integrated approach to achieving those objectives. The acquisition strategy and program risk management plan should be complementary.
Regardless of differences in detail, the framework used will be based on the logic of the system lifecycle. The acquisition process has to encompass four logical parts:
* a concept exploration phase where the feasibility of alternative concepts is evaluated and a solution based on the most promising concept is determined
* a program definition and risk reduction phase where the system requirements are refined and critical features analysed
* a solution development phase where the design is finalised, manufacturing and production processes are validated, and the system is integrated and tested
* a final phase where the system is produced and installed in its target environment, operated and modified as required
There is no 'typical' acquisition. However, any process will be based on three fundamental program strategies. The names may change from organisation to organisation, but the principles are the same. The names that are commonly used are:
* 'Grand Design' often called 'big-bang'. Essentially a 'once-through, do-each-step-once' approach—determine user needs, define requirements, design the system, implement the system, test, fix, deliver.
The Grand Design strategy is only suitable for small, clearly defined development products that will be used in known ways by a known user community.
* 'Incremental'. Determines user needs and defines the system requirements upfront, then performs the rest of the development in a sequence of builds.
In general, the incremental strategy is used to reduce risk in the development phase, whereas the evolutionary strategy is used to reduce risk in the operational phase.
* 'Evolutionary'. Develops the system in builds; user needs and system requirements are partially defined upfront, then refined in each succeeding build.
The term 'build' means a version of the system that meets a specified subset of the system requirements. Don't confuse it with a subsystem.
Two elements are intrinsic to each of these development strategies:
* Formal design reviews mark and determine the transition points and focus of the development stages. We will discuss design reviews in the next section.
* The hierarchy of specifications and design documents aligns with the reviews and development stages.
The subject 'development methods' is deeply embedded in the systems engineering process, the systems lifecycle, and the acquisition process. The focus of development is the specification, detailed design, verification and validation of the actual hardware and software that will comprise the final integrated system. To be more accurate we could say that development methods are really various applications of the development process.
The software development 'waterfall' method — a version of Grand Design — remains the basic building block for all current software development methods, although you will often hear the comment that with third-generation development, 'waterfall' is irrelevant. You need to understand why it remains relevant.
The model fits the way that many systems are developed (Define, Plan, Design, Build, Test, Deploy), and was a useful tool for discussion and phasing. The model was expanded, formalised, used as the basis for several standards, and remained the accepted development model until the 1980s. As with any process, though, one of the issues is defining the point at which a project moves from one stage to another. At the end of a stage, documentation for that stage is completed and evidence is produced that the work has been completed to a sufficient standard.
When a change is made within a waterfall process though, all of the material subsequent to the initial point of change needs to be checked to see if it is affected by the change. In a waterfall approach then, the project regresses 'up' the waterfall to the point where the change first affects the work. The project then needs to re-do all of the steps down the waterfall to catch back up to where it was.
Because it takes time to go back, change all the artefacts and re-run review meetings and the like, the waterfall model is inefficient in dealing with change. Many large projects have failed because the rate of change exceeded the rate at which the project could implement the changes. Such projects enter a 'death spiral' in which, each month, they report that the time to the end of the project is longer than it was the month before.
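The cost of regressing 'up' the waterfall can be sketched with a short example. The stage names follow the Define-to-Deploy sequence mentioned above; the change-impact rule is a simplifying assumption for illustration.

```python
# Illustrative sketch: estimating rework when a change hits a waterfall project.
# The impact rule (re-do every stage from the change point onwards) is a
# simplification of the regression described in the text.

STAGES = ["Define", "Plan", "Design", "Build", "Test", "Deploy"]

def rework_stages(current_stage: str, change_stage: str) -> list[str]:
    """Return the stages that must be re-done when a change first affects
    `change_stage` while the project has progressed to `current_stage`."""
    start = STAGES.index(change_stage)
    end = STAGES.index(current_stage)
    if start > end:
        return []  # the change lands ahead of where the project currently is
    # The project regresses 'up' the waterfall to the point of change,
    # then repeats every stage back down to catch up to where it was.
    return STAGES[start:end + 1]

# A planning-level change discovered during testing forces four stages of rework:
print(rework_stages("Test", "Plan"))  # ['Plan', 'Design', 'Build', 'Test']
```

The later a change is discovered and the earlier it strikes, the longer the rework list — which is why projects with high rates of change can fall into the 'death spiral' described above.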
The 'spiral' process is one of the models more suited to changing requirements. It can be thought of as a number of waterfalls wrapped around each other. The idea of this model is that a relatively small, stable part of the project is undertaken and completed while the more unstable areas are further defined. Once more stable parts of the project can be defined, then they may be started and a second 'part-project' undertaken. You should note that while the term 'spiral model' is not generally used, the spiral model is the process that almost all organisations use to develop their products over the long term.
While we have depicted the spiral as one waterfall following another, it is also common to overlap the waterfall stages. In a typical 'waterfall sequence', artefacts from one waterfall 'drop' into the next waterfall and form the basis for the work there. In this sense, each artefact is iteratively developed throughout the project.
The 'vee' model was not really a development model as such. It was more a refocussing of the waterfall into two system processes: decomposition and integration, and the linking of the two processes through verification and validation. It is really a simple waterfall process, but it emphasises diagrammatically the verification and validation aspects of the waterfall.
With the incremental approach, we start by assuming that we know the requirements and the succession of builds will allow functionality to be added in a controlled way ('build a little, test a little'). A benefit of the successive builds is that the high-risk elements can be built first, with the lower-risk elements following later. The focus is on reducing the risk of the full development.
An 'evolutionary' approach to development is chosen when we don't fully understand the user need and we cannot define all requirements upfront. It reflects the recognition that systems evolve as a result of changing user needs, changing technology and knowledge gained in operation. With an evolutionary development, a core capability is defined, developed and delivered. As knowledge is gained through system use or as technology changes, new requirements are identified and the core system enhanced to become the new, evolved system.
As with the development models, the reviews should be considered as a frame of reference. The important thing is to be aware of the logic and intent of the reviews—why they are where they are in the development cycle, and what they are seeking to assess.
As the system moves through its levels of development from concept to finished product, reviews are established at each development transition point to check design maturity, review technical risk, and determine whether to proceed to the next level of development. If the design is not mature enough to proceed, then the development level is continued until the review exit criteria are met.
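The gate logic at each transition point can be expressed as a simple check. This is a minimal sketch; the exit criteria shown are invented examples, not a standard checklist.

```python
# Illustrative sketch of a review gate: development only advances to the next
# level when every exit criterion for the review has been met.
# The criteria names below are hypothetical examples.

def review_gate(exit_criteria: dict[str, bool]) -> str:
    """Decide whether the development may proceed past this review."""
    unmet = [name for name, met in exit_criteria.items() if not met]
    if unmet:
        # Design maturity is insufficient: stay at the current level.
        return "continue current level; unmet criteria: " + ", ".join(unmet)
    return "proceed to next development level"

pdr_criteria = {
    "allocated baseline documented": True,
    "interfaces defined": True,
    "technical risks acceptable": False,  # design not yet mature enough
}
print(review_gate(pdr_criteria))
```

The point of the sketch is that a review is a binary gate on evidence: either the exit criteria are demonstrably met, or the current development level continues.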
The definition and management of the review program is a major project management responsibility. As we have seen, the schedule for the review program is integrated with the development cycle, which in turn is tied to the system lifecycle.
The number of reviews and the material to be assessed in each review is tailored to fit the particular project development. Factors to consider include:
* the project complexity—number and complexity of subsystems and components to be developed
* the level of technical risk—e.g. state-of-the-art technology or commercial off-the-shelf technology?
* the number of subcontractors—a series of separate subsystem reviews or one main review might be needed.
There are three basic points of focus for the reviews: requirements reviews, design reviews, and verification reviews. By verification we mean verifying that what is built complies with the product specifications it was built to. Note that this is different from validation, which compares the functionality and performance of the integrated system to the system specification.
The Alternative System Review (ASR) is conducted at the end of the concept exploration phase in order to confirm that a viable system concept has been established. The things that the review would want demonstrated are that the selected system:
* provides a cost-effective, operationally effective and suitable solution to identified needs
* meets the agreed budget targets
* can be developed within the required time at an acceptable level of risk.
The Systems Requirements Review (SRR) is intended to confirm that the user's requirements are clearly defined and understood, and that they have been translated into a set of contractor specifications for the system.
The System Functional Review (SFR)—also called the System Design Review (SDR)—is the key transition milestone between the requirements reviews and the design reviews. The SFR is conducted primarily to ensure that there is a system design that will meet the technical requirements (functionality and performance) and program requirements (cost, schedule, full lifecycle ownership costs).
As the design is developed the system functions are allocated to the hardware and software elements. A separate set of specifications is developed for the software items that will define the functions, performance, interfaces and other information that will guide the design and development of the software items. In preparation for the Preliminary Design Review and the establishment of the Allocated Baseline, the system software specifications are reviewed at the Software Specification Review (SSR).
The Preliminary Design Review (PDR) represents the establishment of the Allocated Baseline and approval to begin detailed design of the subsystems using the baseline documents. Consequently, the inputs to this review are the specifications that define the functions, performance, interfaces and other information for the development of the subsystems. These include the software specifications reviewed at the SSR.
A successful system Critical Design Review marks the point at which design is complete and production can start on the software and hardware elements. For software, this means coding, integrating and testing; for hardware, this means fabrication, assembly and testing. As with the PDR, a number of Critical Design Reviews would be conducted: a review for each of the subsystems and an overall review.
The Test Readiness Reviews (TRRs) are conducted as needed for each subsystem to confirm:
* completeness of the test procedures
* that the subsystem is ready for testing
* that those people who are involved in the conduct and approval of formal testing are prepared and available.
The Functional Configuration Audit/System Verification Review is a series of audits followed by a consolidating review—the System Verification Review. The audits re-examine the original user requirements and then check that they are traceable through the system and subsystem documentation—that is, through the Functional and Allocated Baseline documentation. The FCA also verifies that the test activity traces properly to the specifications and test plans.
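The traceability check at the heart of the FCA can be sketched as follows. The requirement and test identifiers are invented for illustration; a real audit would trace through the full Functional and Allocated Baseline documentation.

```python
# Illustrative sketch of an FCA-style traceability check: every user
# requirement should trace through the specifications to at least one test.
# Requirement and test identifiers here are hypothetical examples.

def untraced_requirements(requirements: set[str],
                          trace: dict[str, list[str]]) -> set[str]:
    """Return the requirements that have no verifying test traced to them."""
    return {req for req in requirements if not trace.get(req)}

requirements = {"REQ-001", "REQ-002", "REQ-003"}
trace = {
    "REQ-001": ["TEST-11"],
    "REQ-002": ["TEST-12", "TEST-13"],
    # REQ-003 has no verifying test: the audit should flag it.
}
print(untraced_requirements(requirements, trace))  # {'REQ-003'}
```

Anything the check flags is a break in the trace from user requirements to test evidence, which is exactly what the audits are looking for.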
The Physical Configuration Audit (PCA) is the formal examination of the 'as built' version of the system. This 'as built' version will be the system reviewed at the FCA/SVR, incorporating any changes that those audits and the review initiated.
Managing software development
There is no such thing as 'low risk' software development, so the size of the budget is irrelevant to the level of risk the development represents to the project.
The generalised term IT (information technology) is not very helpful when thinking about software development. The challenge here is to maintain high levels of team participation in an environment that requires high levels of control. Project teams need to understand that work in a development project has different management constraints from work in a department responsible for developing products. A project's success is measured by how well it performs against its cost, schedule and product success measures.
One of the big contributors to the problems associated with software development has been loose management oversight.