Chapter 6: Crosscutting Technical Management
The technical management processes are the bridges between project management and the technical team. In this portion of the engine, eight crosscutting processes provide the integration of the crosscutting functions that allow the design solution to be realized. Every member of the technical team relies on technical planning; management of requirements, interfaces, technical risk, configuration, and technical data; technical assessment; and decision analysis to meet the project’s objectives. Without these crosscutting processes, individual members and tasks cannot be integrated into a functioning system that meets the ConOps within cost and schedule.
The Technical Planning Process, the first of the eight technical management processes contained in the systems engineering engine, establishes a plan for applying and managing each of the common technical processes that will be used to drive the development of system products and associated work products. Technical planning, as opposed to program or project planning, addresses the scope of the technical effort required to develop the system products.
Requirements management activities apply to the management of all stakeholder expectations, customer requirements, and technical product requirements down to the lowest level product component requirements (hereafter referred to as expectations and requirements). The Requirements Management Process is used to:
* Manage the product requirements identified, baselined, and used in the definition of the WBS model products during system design;
* Provide bidirectional traceability back to the top WBS model requirements; and
* Manage the changes to established requirement baselines over the life cycle of the system products.
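The bidirectional traceability called for above can be sketched as a simple parent-child link structure. This is an illustrative sketch only; the requirement IDs, names, and helper functions below are hypothetical, not part of any NASA data standard.

```python
# Minimal sketch of bidirectional requirements traceability.
# All requirement IDs and texts are hypothetical examples.

class Requirement:
    def __init__(self, req_id, text, parent_id=None):
        self.req_id = req_id
        self.text = text
        self.parent_id = parent_id  # link upward toward the top WBS model requirement

def trace_up(reqs, req_id):
    """Walk child -> parent links back to the top-level requirement."""
    chain = []
    current = reqs[req_id]
    while current is not None:
        chain.append(current.req_id)
        current = reqs.get(current.parent_id)
    return chain

def trace_down(reqs, req_id):
    """Find all requirements directly derived from a given parent."""
    return [r.req_id for r in reqs.values() if r.parent_id == req_id]

reqs = {
    "SYS-001": Requirement("SYS-001", "The system shall operate for 5 years."),
    "SUB-010": Requirement("SUB-010", "The battery shall support a 5-year life.", "SYS-001"),
    "CMP-100": Requirement("CMP-100", "Cells shall tolerate 30,000 charge cycles.", "SUB-010"),
}

print(trace_up(reqs, "CMP-100"))    # ['CMP-100', 'SUB-010', 'SYS-001']
print(trace_down(reqs, "SYS-001"))  # ['SUB-010']
```

Tracing upward answers "why does this requirement exist?"; tracing downward supports change-impact analysis when an established baseline is modified.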
“Requirements creep” is the term used to describe the subtle way that requirements grow imperceptibly during the course of a project. Some of this creep, however, involves genuinely new requirements that did not exist, and could not have been anticipated, during the Technical Requirements Definition Process.
The management and control of interfaces is crucial to successful programs or projects. Interface management is a process to assist in controlling product development when efforts are divided among parties (e.g., Government, contractors, geographically diverse technical teams, etc.) and/or to define and maintain compliance among the products that must interoperate. During product integration, interface management activities would support the review of integration and assembly procedures to ensure interfaces are properly marked and compatible with specifications and interface control documents. The interface management process has a close relationship to verification and validation.
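Checking products against an interface control document, as described above, can be sketched as a simple comparison of declared interface parameters. The ICD fields, connector designation, and values below are illustrative assumptions, not drawn from any actual specification.

```python
# Hedged sketch: compare two products' declared interfaces against an
# interface control document (ICD). Field names and values are illustrative.

icd = {"connector": "D38999/26", "voltage_v": 28.0, "data_rate_mbps": 10}

def check_interface(product_name, product_iface, icd):
    """Return a list of mismatches between a product's interface and the ICD."""
    issues = []
    for key, expected in icd.items():
        actual = product_iface.get(key)
        if actual != expected:
            issues.append(f"{product_name}: {key} = {actual!r}, ICD requires {expected!r}")
    return issues

avionics = {"connector": "D38999/26", "voltage_v": 28.0, "data_rate_mbps": 10}
payload = {"connector": "D38999/26", "voltage_v": 24.0, "data_rate_mbps": 10}

print(check_interface("avionics", avionics, icd))  # [] -- compliant
print(check_interface("payload", payload, icd))    # one voltage mismatch
```

In practice such checks run against controlled baselines during integration reviews, so that incompatibilities surface before assembly rather than during test.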
The Technical Risk Management Process is one of the crosscutting technical management processes. Risk is defined as the combination of (1) the probability that a program or project will experience an undesired event and (2) the consequences, impact, or severity of the undesired event, were it to occur. Both the probability and consequences may have associated uncertainties. Technical risk management is an organized, systematic risk-informed decisionmaking discipline that proactively identifies, analyzes, plans, tracks, controls, communicates, documents, and manages risk to increase the likelihood of achieving project goals.
Strategies for risk management include transferring performance risk, eliminating the risk, reducing the likelihood of undesired events, reducing the negative effects of the risk (i.e., reducing consequence severity), reducing uncertainties if warranted, and accepting some or all of the consequences of a particular risk. Continuous Risk Management (CRM) is a widely used technique within NASA, initiated at the beginning and continuing throughout the program life cycle to monitor and control risk. It is an iterative and adaptive process that promotes the successful handling of risk.
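The definition of risk as a combination of likelihood and consequence is often operationalized as a scored risk matrix. The sketch below illustrates the idea with 1-5 scales; the thresholds, risk names, and scores are illustrative assumptions, not a NASA standard.

```python
# Illustrative risk scoring combining likelihood and consequence.
# Thresholds and example risks are assumptions for demonstration only.

def risk_level(likelihood, consequence):
    """Classify a risk from 1-5 likelihood and 1-5 consequence scores."""
    score = likelihood * consequence
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

risks = [
    ("battery cell degradation", 4, 4),
    ("launch window slip", 2, 3),
    ("test facility unavailable", 1, 2),
]

# Rank risks so the riskiest items get management attention first.
for name, like, cons in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: {risk_level(like, cons)}")
```

A real CRM implementation would also track uncertainty in both scores and re-rank risks as mitigation actions are tracked and controlled over time.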
Configuration Management is a management discipline applied over the product’s life cycle to provide visibility into and to control changes to performance and functional and physical characteristics. CM ensures that the configuration of a product is known and reflected in product information, that any product change is beneficial and is effected without adverse consequences, and that changes are managed. CM reduces technical risks by ensuring correct product configurations, distinguishes among product versions, ensures consistency between the product and information about the product, and avoids the embarrassment of stakeholder dissatisfaction and complaint.
“Redline” refers to the controlled process of marking up drawings and documents that are found, during design, fabrication, production, or testing, to contain errors or inaccuracies. All redlines require the approval of, at a minimum, the responsible hardware manager and the quality assurance manager.
The Technical Data Management Process is used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system. Data Management (DM) spans development, deployment, operations and support, and eventual retirement, and includes retention of appropriate technical data, including mission and science data, beyond system retirement.
Technical assessment is the crosscutting process used to help monitor technical progress of a program/project through Periodic Technical Reviews (PTRs). It also provides status information to support assessing system design, product realization, and technical management decisions.
Configuration audits confirm that the configured product is accurate and complete. The two types of configuration audits are the Functional Configuration Audit (FCA) and the Physical Configuration Audit (PCA). The FCA examines the functional characteristics of the configured product and verifies that the product has met, via test results, the requirements specified in its functional baseline documentation approved at the PDR and CDR. FCAs will be conducted on both hardware and software configured products and will precede the PCA of the configured product. The PCA (also known as a configuration inspection) examines the physical configuration of the configured product and verifies that the product corresponds to the build-to (or code-to) product baseline documentation previously approved at the CDR. PCAs will be conducted on both hardware and software configured products.
[Sidebar]
Analyzing the Estimate at Completion
The appropriate formula for calculating the statistical Estimate at Completion (EAC) depends on the reasons for any variances that exist. If a variance is due to a one-time event, such as an accident, then EAC = ACWP + (BAC – BCWP), where ACWP is the Actual Cost of Work Performed, BAC is the Budget at Completion, and BCWP is the Budgeted Cost of Work Performed (earned value). The Cost Performance Index (CPI) and Schedule Performance Index (SPI) should also be considered in developing the EAC. If there is a growing number of liens, action items, or significant problems that will increase the difficulty of future work, the EAC may grow at a greater rate than this equation estimates.
[/Sidebar]
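The sidebar's one-time-variance formula can be computed directly; a common companion (used when the cost variance is expected to persist) divides the remaining work by the CPI. The figures below are illustrative, not from any actual project.

```python
# EAC per the sidebar's one-time-variance formula, plus the widely used
# CPI-adjusted variant. All dollar figures are illustrative assumptions.

def eac_one_time(acwp, bac, bcwp):
    """EAC when the variance stems from a one-time event: ACWP + (BAC - BCWP)."""
    return acwp + (bac - bcwp)

def eac_cpi(acwp, bac, bcwp):
    """EAC assuming remaining work continues at the current cost efficiency (CPI)."""
    cpi = bcwp / acwp  # cost performance index: earned value per dollar spent
    return acwp + (bac - bcwp) / cpi

bac = 1000.0   # Budget at Completion
bcwp = 400.0   # Budgeted Cost of Work Performed (earned value)
acwp = 500.0   # Actual Cost of Work Performed

print(eac_one_time(acwp, bac, bcwp))  # 1100.0
print(eac_cpi(acwp, bac, bcwp))       # 1250.0
```

With a CPI of 0.8, the CPI-adjusted estimate is notably higher, illustrating why the choice of formula should match the diagnosed cause of the variance.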
The purpose of this section is to provide a description of the Decision Analysis Process, including alternative tools and methodologies. Decision analysis offers individuals and organizations a methodology for making decisions; it also offers techniques for modeling decision problems mathematically and finding optimal decisions numerically. The problem is structured by identifying alternatives, one of which must be decided upon; possible events, one of which occurs thereafter; and outcomes, each of which results from a combination of decision and event.
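The alternative-event-outcome structure described above is commonly evaluated as an expected-value decision tree. The sketch below is a minimal illustration; the alternatives, probabilities, and outcome values are hypothetical.

```python
# Minimal decision-analysis sketch: each alternative has chance events
# (probability, outcome value); choose the alternative with the highest
# expected value. All names and numbers are illustrative assumptions.

alternatives = {
    "build in-house": [(0.7, 120.0), (0.3, 40.0)],   # (event probability, outcome value)
    "buy commercial": [(0.9, 90.0), (0.1, 60.0)],
}

def expected_value(event_outcomes):
    """Probability-weighted sum of outcome values for one alternative."""
    return sum(p * v for p, v in event_outcomes)

best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
for name, events in alternatives.items():
    print(f"{name}: expected value = {expected_value(events):.1f}")
print(f"optimal decision: {best}")
```

Expected value is only one decision rule; where outcome uncertainties matter, techniques such as utility functions or sensitivity analysis refine the choice.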
Once high-level design decisions are made, nested systems engineering processes occur at progressively more detailed design levels flowed down through the entire system. Each progressively more detailed decision is affected by the assumptions made at the previous levels. This is an iterative process among elements of the system. Also early in the life cycle, the technical team should determine the types of data and information products required to support the Decision Analysis Process during the later stages of the project.
Typical evaluation methods include: simulations; weighted tradeoff matrices; trade studies of engineering, manufacturing, cost, and technical opportunity; surveys; extrapolations based on field experience and prototypes; user review and comment; and testing. Regardless of the methods or tools used, results must include:
* Evaluation of assumptions related to evaluation criteria and of the evidence that supports the assumptions, and
* Evaluation of whether uncertainty in the values for alternative solutions affects the evaluation.
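One of the methods listed above, the weighted tradeoff matrix, can be sketched in miniature. The criteria, weights, and scores below are illustrative assumptions chosen only to show the mechanics.

```python
# A weighted tradeoff matrix in miniature. Criteria weights (summing to 1.0)
# and 1-10 alternative scores are illustrative assumptions.

criteria = {"performance": 0.5, "cost": 0.3, "schedule": 0.2}

scores = {
    "design A": {"performance": 8, "cost": 5, "schedule": 6},
    "design B": {"performance": 6, "cost": 9, "schedule": 7},
}

def weighted_score(alt_scores, weights):
    """Sum of criterion scores weighted by criterion importance."""
    return sum(weights[c] * alt_scores[c] for c in weights)

for alt, alt_scores in scores.items():
    print(f"{alt}: {weighted_score(alt_scores, criteria):.2f}")
```

Per the bullets above, the result should be accompanied by a check of the assumptions behind the weights and a sensitivity check, for example, whether plausible shifts in the weights change which alternative ranks highest.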
