https://sebokwiki.org/w/api.php?action=feedcontributions&user=Skmackin&feedformat=atomSEBoK - User contributions [en]2024-03-19T10:28:45ZUser contributionsMediaWiki 1.35.13https://sebokwiki.org/w/index.php?title=Application_of_Systems_Engineering_Standards&diff=9689Application of Systems Engineering Standards2011-08-09T21:47:55Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Alignment and Comparison of the Standards|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Applications of Systems Engineering|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Alignment_and_Comparison_of_Systems_Engineering_Standards&diff=9688Alignment and Comparison of Systems Engineering Standards2011-08-09T21:47:22Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Relevant Standards|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Application of Systems Engineering Standards|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Alignment_and_Comparison_of_Systems_Engineering_Standards&diff=9687Alignment and Comparison of Systems Engineering Standards2011-08-09T21:46:52Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Relevant Standards|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Application of Systems Engineering Standards|Next Article ->]]</center><br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Relevant_Standards&diff=9686Relevant Standards2011-08-09T21:46:28Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Systems Engineering Standards|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Alignment and Comparison of the Standards|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Systems_Engineering_Standards&diff=9685Systems Engineering Standards2011-08-09T21:45:46Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
===Topics===<br />
The topics contained within this knowledge area include:<br />
*[[Relevant Standards for Systems Engineering]]<br />
*[[Alignment and Comparison of Systems Engineering Standards]]<br />
*[[Application of Systems Engineering Standards]]<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Disposal and Retirement|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Relevant Standards|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Knowledge Area]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Disposal_and_Retirement&diff=9684System Disposal and Retirement2011-08-09T21:44:14Z<p>Skmackin: </p>
<hr />
<div>Design for product or service disposal and retirement is an important part of system life management. At some point, any deployed system will become uneconomical to maintain or perhaps become obsolete. A comprehensive systems engineering process anticipates an equipment phase-out period and includes disposal in the design and in life cycle cost impacts. <br />
Public focus on sustaining a clean environment encourages contemporary systems engineering design to consider recycling, reuse, and responsible disposal techniques.<br />
<br />
<br />
== Topic Overview ==<br />
<br />
According to [[INCOSE Systems Engineering Handbook]], “The purpose of the Disposal Process is to remove a system element from the operation environment with the intent of permanently terminating its use; and to deal with any hazardous or toxic materials or waste products in accordance with the applicable guidance, policy, regulation, and statutes.” (INCOSE 2011)<br />
<br />
In addition to technological and economic factors, the system being developed must be compatible and acceptable, and its design must ultimately address the environment in terms of ecological, political, and social considerations. In particular, the ecological considerations associated with system disposal or retirement are of prime importance. Of particular concern are the problems dealing with the waste identified below.<br />
* Air Pollution and Control <br />
* Water Pollution and Control <br />
* Noise Pollution and Control <br />
* Radiation <br />
* Solid Waste<br />
<br />
In the United States, the United States [[Acronyms|'''Environmental Protection Agency (EPA)''']] and the Occupational Safety and Health Administration (OSHA) govern disposal and retirement of commercial systems; equivalent organizations perform this function in other countries.<br />
<br />
The [[Acronyms|'''Occupational Safety and Health Administration (OSHA)''']] addresses hazardous materials under 1910.119 Appendix A, List of Highly Hazardous Chemicals, Toxics and Reactives (OSHA 1996). System disposal and retirement spans both commercial and government-developed products and services. While both the commercial and government sectors have common goals, the methods used to accomplish disposition of materials associated with military systems are different.<br />
<br />
The [[Acronyms|'''OSD AT&L''']] online reference provides guidance regarding military system disposal. Directive 4160.21-M, Defense Material Disposition Manual, dated 18 August 1997, implements the requirements of the federal property management regulation (FPMR) and other laws and regulations, as appropriate, regarding the disposition of excess, surplus, and foreign excess personal property (FEPP). Military system disposal activities are compliant with EPA and OSHA requirements.<br />
<br />
== Application to Product Systems ==<br />
<br />
Product system retirement may include system disposal activities or preservation activities (e.g., mothballing) if there is a chance the system may be called upon for use at a later time. [[Acronyms|'''OSD AT&L''']] provides guidance for the preservation of military systems such as naval ships and aircraft.<br />
<br />
Blanchard and Fabrycky's 5th edition of [[Systems Engineering and Analysis]] includes several chapters (16 and 17) that discuss design for goals such as "green engineering," reliability, maintainability, logistics, supportability, producibility, disposability, and sustainability. Chapter 16 provides a succinct discussion of "green engineering" considerations and "ecology-based manufacturing." Chapter 17 also discusses life cycle costing and the inclusion of system disposal and retirement costs, represented in Figure 17.6.<br />
<br />
Some disposal of systems components occurs during the system’s operational life. This happens when the components fail and are replaced. As a result, the tasks and resources needed to remove them from the system need to be planned well before the actual demand for disposal occurs. Planning must consider transportation of failed items, handling equipment, special training requirements for personnel, facilities, technical procedures, technical documentation updates, hazardous material (HAZMAT) remediation, all associated costs, and reclamation or salvage value for precious metals or recyclable components. Phase-out and disposal planning addresses what, where, and when disposal should take place, the economic feasibility of the disposal methods used, and what the effects on the inventory and support infrastructure, safety, environmental requirements, and impact to the environment will be (Blanchard 2010). Disposal is the least efficient and least desirable alternative for the processing of waste material (Finlayson and Herdlick 2008). <br />
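The costing reasoning above can be made concrete with a simple discounted life cycle cost (LCC) calculation that includes the end-of-life term. This Python sketch is purely illustrative and not drawn from any cited reference; the function name, figures, and discount rate are hypothetical assumptions.

```python
def life_cycle_cost(acquisition, annual_om, years, disposal, salvage, rate):
    """Present-value LCC: acquisition cost plus discounted annual O&M cost,
    plus the discounted net disposal cost at end of life (disposal cost
    minus reclamation/salvage value)."""
    om_pv = sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
    disposal_pv = (disposal - salvage) / (1 + rate) ** years
    return acquisition + om_pv + disposal_pv

# Illustrative numbers only: a positive salvage value for recyclable
# components partially offsets the phase-out cost at end of life.
lcc = life_cycle_cost(acquisition=1_000_000, annual_om=50_000, years=20,
                      disposal=120_000, salvage=30_000, rate=0.05)
print(round(lcc, 2))
```

Because the disposal term is discounted over the full service life, it is easy to underweight during design, which is one reason its cost is described above as "initially hidden."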
<br />
The EPA collects information regarding the generation, management, and final disposition of hazardous wastes regulated under the Resource Conservation and Recovery Act of 1976 (RCRA). <br />
<br />
EPA waste management regulations are codified at 40 C.F.R. parts 239-282. Regulations regarding management of hazardous wastes begin at 40 C.F.R. part 260. Most states have enacted laws and promulgated regulations that are at least as stringent as the federal regulations. Due to extensive tracking of the life of the hazardous waste, the overall process has become known as the cradle to grave system. Stringent bookkeeping and reporting requirements have been levied on generators, transporters, and operators of treatment, storage, and disposal facilities handling hazardous waste.<br />
See the EPA website for a comprehensive list of wastes including resource conservation, hazardous wastes, and non-hazardous wastes. <br />
<br />
Unfortunately, disposability often has a lower priority than other activities associated with product development. This is because the disposal process is typically viewed as external to the entity that is in custody of the system at the time. Some of the reasons behind this view include:<br />
*There is no direct revenue associated with the disposal process, and the majority of its cost is initially hidden.<br />
*Disposal activities are typically performed by someone outside of systems engineering, fostering an attitude of "not my problem." For example, a car manufacturer may not be concerned about a vehicle's disposal, nor is the first buyer, since there is a good chance that he or she will sell the car before it is time for disposal.<br />
<br />
The European Union’s Registration, Evaluation, Authorization, and Restriction of Chemicals (REACH) regulation requires manufacturers and importers of chemicals and products to register and disclose substances in products when meeting specific thresholds and criteria (European Parliament 2007). The [[Acronyms|European Chemicals Agency (ECHA)]] manages the REACH processes.<br />
Numerous substances will be added to the list of substances already restricted under European legislation, such as the 2003 regulation on the Restriction of Hazardous Substances (RoHS) in electrical and electronic equipment.<br />
Requirements for substance use and availability are changing across the globe. Identifying the use of materials in the supply chain that may face restriction is an important system life management consideration.<br />
System disposal and retirement requires upfront planning and the development of a disposal plan to manage the activities. An important consideration during system retirement is the proper planning required to update facilities that are required to support the system during retirement, as explained in the California Department of Transportation Systems Engineering Guidebook. <br />
<br />
Disposal needs to take into account environmental and personal risks associated with decommission of the system, and all hazardous material needs to be accounted for. The decommissioning of a nuclear power plant is a prime example of hazardous material control and the need for properly handling and transportation of residual material resulting from the retirement of the plant.<br />
<br />
The [[Acronyms|'''Defense Logistics Agency (DLA)''']] is the lead military agency responsible for providing guidance for worldwide reuse, recycling, and disposal of military products. A critical responsibility of the military services and defense agencies is demilitarization prior to disposal.<br />
<br />
== Application to Service Systems ==<br />
An important consideration during service system retirement or disposal is the proper continuation of services for the consumers of the system. As service systems are retired, it is important to continue to provide the same quality and capacity of services offered by the system. As an existing service system is decommissioned, it is important to plan to bring new systems online operating in parallel with the existing system, so that service interruption is kept to a minimum. This parallel operation can occur over a significant period of time and needs to be carefully scheduled. Examples of parallel operation include phasing in a new Air Traffic Control (ATC) system (FAA 2006), migration from analog TV to digital TV modulation (FCC 2009), the Internet's transition to Internet Protocol version 6 (IPv6), water handling systems, and large commercial transportation systems such as rail and shipping vessels.<br />
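The parallel-operation scheduling described above can be sketched as a simple feasibility check: the legacy system ramps down as the replacement ramps up, and the plan is viable only if combined capacity never drops below demand. The phase data, names, and numbers in this Python sketch are hypothetical, illustrative assumptions, not a prescribed method.

```python
def cutover_ok(phases, demand):
    """Each phase is (legacy_capacity, new_capacity). The cutover plan is
    viable only if every phase's combined capacity covers the demand, so
    service interruption is kept to a minimum."""
    return all(legacy + new >= demand for legacy, new in phases)

# Hypothetical five-phase plan: the legacy system is decommissioned in
# steps while the replacement is brought online in parallel.
plan = [(100, 0), (75, 40), (50, 70), (25, 90), (0, 110)]
print(cutover_ok(plan, demand=100))  # every phase covers the demand
```

A plan that retires legacy capacity before the replacement is ready (e.g., a phase of `(40, 40)` against a demand of 100) would fail this check, which is exactly the service-gap risk the parallel-operation period is meant to avoid.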
[[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]], version 1.1, provides planning guidance for the retirement and replacement of large transportation systems. Chapter 4.7 identifies several factors that can shorten the useful life of a transportation system, leading to early retirement, such as a lack of proper documentation, a lack of an adequate operations and maintenance budget, and the lack of an effective configuration management process.<br />
<br />
== Application to Enterprises ==<br />
The disposal and retirement of large enterprise service systems requires a phased approach in which capital planning is implemented in stages. As is the case with service systems, enterprise system disposal and retirement requires parallel operation of the replacement system alongside the existing (older) system to prevent loss of functionality for users of the enterprise. <br />
<br />
== Other Topics ==<br />
See the [[Acronyms|OSHA]] standard and [[Acronyms|EPA]] website for references that provide listings of hazardous materials. See the [http://www.dtc.dla.mil DLA Disposal Services website] for disposal services sites and additional information on hazardous materials.<br />
<br />
== Practical Considerations ==<br />
A prime objective is to design a product or service such that its components can be recycled after the system has been retired. The recycling process should not create any detrimental effects on the environment.<br />
One of the latest movements in the industry is called green engineering. According to the Environmental Protection Agency (EPA), green engineering is the design, commercialization, and use of processes and products that are technically and economically feasible while minimizing:<br />
*Generation of pollution at the source<br />
*Risk to human health and the environment<br />
<br />
<br />
==References== <br />
<br />
===Citations===<br />
Blanchard, B. S. 2010. Logistics engineering and management. 5th ed. Englewood Cliffs, NJ, USA: Prentice Hall: p341-342.<br />
<br />
Blanchard, B. S., and W. J. Fabrycky. 2005. Systems engineering and analysis. Prentice-Hall international series in industrial and systems engineering. 4th ed. Englewood Cliffs, NJ, USA: Prentice-Hall: p 541-565.<br />
<br />
DLA. Defense logistics agency disposition services [homepage]. in Defense Logistics Agency (DLA)/U.S. Department of Defense [database online]. Battle Creek, MI, USA, 2010 [cited June 19 2010]: p5. Available from http://www.dtc.dla.mil.<br />
<br />
EPA. Wastes. in U.S. Environmental Protection Agency (EPA) [database online]. Washington, DC, 2010. Available from http://www.epa.gov/epawaste/index.htm.<br />
<br />
ECHA. European chemicals agency (ECHA) [home page]. in European Chemicals Agency (ECHA) [database online]. Helsinki, Finland, 2010. Available from http://echa.europa.eu/home_en.asp. <br />
<br />
European Parliament. 2007. Regulation (EC) no 1907/2006 of the european parliament and of the council of 18 december 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH), establishing a european chemicals agency, amending directive 1999/45/EC and repealing council regulation (EEC) no 793/93 and commission regulation (EC) no 1488/94 as well as council directive 76/769/EEC and commission directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC. Official Journal of the European Union 29 (5): 136/3,136/280.<br />
<br />
FAA. 2006. Section 4.1. In Systems engineering manual. Washington, D.C.: U.S. Federal Aviation Administration (FAA). <br />
<br />
FCC. 2009. Radio and television broadcast rules. Washington, D.C.: U.S. Federal Communications Commission (FCC), 47 CFR Part 73, FCC Rule 09-19: p 11299-11318.<br />
<br />
Finlayson, B., and B. Herdlick. 2008. Systems engineering of deployed systems. Baltimore, MD, USA: Johns Hopkins University: p28.<br />
<br />
OSHA. 1996. Hazardous materials: Appendix A: List of highly hazardous chemicals, toxics and reactives. Washington, D.C.: Occupational Safety and Health Administration (OSHA)/U.S. Department of Labor (DoL), 1910.119(a).<br />
<br />
<br />
===Primary References===<br />
Blanchard and Fabrycky. 2006. [[Systems Engineering and Analysis]]. 4th edition. Prentice Hall International Series.<br />
<br />
Caltrans, and USDOT. 2005. [[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]], version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1.<br />
<br />
INCOSE. 2011. [[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities. Version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1<br />
<br />
Jackson. 2007. [[A Multidisciplinary Framework for Resilience to Disasters and Disruptions]]. Journal of Integrated Design and Process Science Volume 11 (2), IOS Press<br />
<br />
OUSD AT&L. ''Logistics and Material Readiness''. in Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (OUSD AT&L)/U.S. Department of Defense (DoD) [database online]. Arlington, VA, USA, 2010. Available from http://www.acq.osd.mil/log/ (accessed August 5, 2010).<br />
<br />
Seacord, Plakosh, Lewis. 2003. [[Modernizing Legacy Systems]], Addison Wesley, Pearson Education Inc.<br />
<br />
===Additional References===<br />
<br />
Blanchard, B. S. 2010. Logistics engineering and management. 5th ed. Englewood Cliffs, NJ, USA: Prentice Hall: p341-342.<br />
<br />
Casetta, E. 2001. Transportation systems engineering: Theory and methods. New York, NY: Kluwer Publishers Academic, Springer. <br />
<br />
DAU. Acquisition community connection (ACC): Where the DoD AT&L workforce meets to share knowledge. in Defense Acquisition University (DAU)/US Department of Defense (DoD) [database online]. Ft. Belvoir, VA, USA, 2010. Available from https://acc.dau.mil/ (accessed August 5, 2010). <br />
<br />
DLA. Defense logistics agency disposition services [homepage]. in Defense Logistics Agency (DLA)/U.S. Department of Defense [database online]. Battle Creek, MI, USA, 2010 [cited June 19 2010]: p5. Available from http://www.dtc.dla.mil.<br />
<br />
ECHA. European chemicals agency (ECHA) [home page]. in European Chemicals Agency (ECHA) [database online]. Helsinki, Finland, 2010. Available from http://echa.europa.eu/home_en.asp. <br />
<br />
Elliot, T., K. Chen, and R. C. Swanekamp. 1998. Standard handbook of powerplant engineering. New York, NY: McGraw Hill, section 6.5.<br />
<br />
EPA. Wastes. in U.S. Environmental Protection Agency (EPA) [database online]. Washington, DC, 2010. Available from http://www.epa.gov/epawaste/index.htm.<br />
<br />
European Parliament. 2007. Regulation (EC) no 1907/2006 of the european parliament and of the council of 18 december 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH), establishing a european chemicals agency, amending directive 1999/45/EC and repealing council regulation (EEC) no 793/93 and commission regulation (EC) no 1488/94 as well as council directive 76/769/EEC and commission directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC. Official Journal of the European Union 29 (5): 136/3,136/280. <br />
<br />
FAA. 2006. Section 4.1. In Systems engineering manual. Washington, D.C.: U.S. Federal Aviation Administration (FAA). <br />
<br />
FCC. 2009. Radio and television broadcast rules. Washington, D.C.: U.S. Federal Communications Commission (FCC), 47 CFR Part 73, FCC Rule 09-19: p 11299-11318.<br />
<br />
Finlayson, B., and B. Herdlick. 2008. Systems engineering of deployed systems. Baltimore, MD, USA: Johns Hopkins University: p28. <br />
<br />
FSA. Template for 'system retirement plan' and 'system disposal plan'. in Federal Student Aid (FSA)/U.S. Department of Education (DoEd) [database online]. Washington, DC, 2010. Available from http://federalstudentaid.ed.gov/business/lcm.html (accessed August 5, 2010). <br />
<br />
IEEE. 2005. IEEE Standard for Software Configuration Management Plans. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 828. <br />
<br />
Ishii, K., C. F. Eubanks, and P. Di Marco. 1994. Design for product retirement and material life-cycle. Materials & Design 15 (4): 225-33. <br />
<br />
INCOSE. 2010. In-service systems working group. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). <br />
<br />
INCOSE UK Chapter. 2010. Applying systems engineering to in-service systems: Supplementary guidance to the INCOSE systems engineering handbook, version 3.2, issue 1.0. Foresgate, UK: International Council on Systems Engineering (INCOSE) UK Chapter, p10, 13, 23. <br />
<br />
Institute of Engineers Singapore. 2009. Systems engineering body of knowledge, provisional version 2.0. Singapore. <br />
<br />
Mays, L., ed. 2000. Water distribution systems handbook. New York, NY: McGraw-Hill Book Company: Chapter 3.<br />
<br />
MDIT. 2008. System maintenance guidebook (SMG), version 1.1: A companion to the systems engineering methodology (SEM) of the state unified information technology environment (SUITE). MI, USA: Michigan Department of Information Technology (MDIT), DOE G 200: p38. <br />
<br />
Minneapolis-St. Paul Chapter. 2003. Systems engineering in systems deployment and retirement, presented to INCOSE. Minneapolis-St. Paul, MN, USA: International Society of Logistics (SOLE), Minneapolis-St. Paul Chapter. <br />
<br />
NAS. 2006. National airspace system (NAS) system engineering manual, version 3.1 (volumes 1-3). Washington, DC: Air Traffic Organization (ATO)/U.S. Federal Aviation Administration (FAA), NAS SEM 3.1. <br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
OSHA. 1996. Hazardous materials: Appendix A: List of highly hazardous chemicals, toxics and reactives. Washington, D.C.: Occupational Safety and Health Administration (OSHA)/U.S. Department of Labor (DoL), 1910.119(a). <br />
<br />
Ryen, E. 2008. Overview of the systems engineering process. Bismarck, ND, USA: North Dakota Department of Transportation (NDDOT). <br />
<br />
SAE International. 2010. Standards: Automotive--maintenance and aftermarket. Warrendale, PA, USA: Society of Automotive Engineers (SAE) International. <br />
<br />
Schafer, D.L. 2003. Keeping Pace With Technology Advances When Funding Resources Are Diminished. Paper presented at AUTOTESTCON 2003. IEEE Systems Readiness Technology Conference, Anaheim, CA :p 584.<br />
<br />
SOLE. Applications divisions. in The International Society of Logistics (SOLE) [database online]. Hyattsville, MD, USA, 2009. Available from http://www.sole.org/appdiv.asp (accessed August 5, 2010). <br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Capability Updates, Upgrades, and Modernization|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Systems Engineering Standards|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Capability_Updates,_Upgrades,_and_Modernization&diff=9683Capability Updates, Upgrades, and Modernization2011-08-09T21:42:46Z<p>Skmackin: </p>
<hr />
<div>Modernization and upgrades involve changing the product or service to include new functions or new interfaces, improve system performance, and/or improve system supportability. A product or service reaches a point in its life where modernization is required to resolve supportability problems and to reduce operational costs. The [[INCOSE Systems Engineering Handbook]] (INCOSE 2011) and Blanchard and Fabrycky (2005) stress the importance of using life cycle costs when determining whether a product or service should be modernized. Systems can be modernized in the field or returned to a depot or factory for modification. <br />
<br />
Design for system modernization and upgrade is an important part of the system engineering process and should be considered as part of the early requirements and design activities. <br />
[[Engineering Change Proposal (ECP) (glossary)|Engineering Change Proposals (ECP) (glossary)]] are used to initiate updates and modifications to the original system. Product and service upgrades can include new technology insertion, removing old equipment, or adding new equipment. [[Form, Fit, Function, and Interface (F3I) (glossary)|Form, fit, function, and interface (F3I) (glossary)]] is an important principle for upgrades where backward compatibility is a requirement. <br />
<br />
Product and service modernization occurs for many reasons. For example:<br />
*The system or one of its subsystems is experiencing reduced performance, safety or reliability.<br />
*A customer or other stakeholder may desire a new capability for the system. <br />
*Some of the system components may be experiencing obsolescence, including the lack of spare parts. <br />
*New uses for the system require modification to add capabilities not built into the originally deployed system.<br />
<br />
The first three reasons above are discussed in more detail in the INCOSE UK Chapter Guidebook (INCOSE UK Chapter 2010). <br />
<br />
== Topic Overview ==<br />
Product and service modernization involves the same systems engineering processes and principles that are employed during the upfront design, development, integration, and test. The primary difference is the constraints imposed by the existing system architecture, design, and components. <br />
Modernizing a legacy system requires a detailed understanding of the product or service prior to making any changes.<br />
The UK chapter of INCOSE developed supplementary guidance to the INCOSE Systems Engineering Handbook. This guidance applies to any system for which multiple units are produced; such systems may be buildings, transmission networks, aircraft, automobiles or military vehicles, trains, naval vessels, and mass transit systems.<br />
Government and military products provide a comprehensive body of knowledge for system modernization and updates. Key references have been developed by the defense industry and may be specific to its needs.<br />
<br />
Key factors and questions that must be considered by the systems engineer when making modifications and upgrades to a product or service include:<br />
<br />
*Type of system (space, air, ground, maritime, and safety-critical),<br />
*Missions and scenarios of expected operational usage,<br />
*Policy and legal requirements that are imposed by certain agencies or business markets,<br />
*Product or service life cycle costs,<br />
*Electromagnetic spectrum usage expected, including changes in RF emissions,<br />
*System Original Equipment Manufacturer (OEM) and key suppliers, including availability of parts and subsystems,<br />
*Understanding and documenting the functions, interfaces, and performance requirements, including environmental testing and validation,<br />
*System integration challenges posed by the prevalence of system-of-systems solutions and corresponding interoperability issues between legacy, modified, and new systems,<br />
*The amount of regression testing to be performed on the existing software.<br />
<br />
Key processes and procedures that should be considered during product and service modernization include:<br />
*Legislative policy adherence review and certification,<br />
*Safety critical review,<br />
*Engineering change management and configuration control,<br />
*Analysis of Alternatives,<br />
*Warranty and product return process implementation,<br />
*Availability of manufacturing and supplier sources and products. <br />
<br />
<br />
== Application to Product Systems ==<br />
<br />
Product modernization involves understanding and managing a list of product deficiencies, prioritized change requests, and customer issues associated with product usage. The INCOSE SE Handbook emphasizes the use of [[Failure Modes and Effects Criticality Analysis (glossary)]] (FMECA) to understand the root causes of product failures as the basis for making any product changes. <br />
<br />
Product modernization uses the engineering change management principle of change control boards to review and implement product changes and improvements. [[Acronyms|OSD AT&L]] provides on-line references for product modernization and for the use of an Engineering Change Proposal to document planned product or service modernization efforts.<br />
<br />
Product modernization and upgrades require the use of system documentation. A key part of the product change process is updating the supporting system documentation (functions, interfaces, modes, performance requirements, and limitations). Both INCOSE (2011) and Blanchard and Fabrycky (2006) stress the importance of understanding the intended usage of the product or service, documented in the form of a concept of operations.<br />
<br />
If system documentation is not available, [[Reverse Engineering (glossary)|reverse engineering (glossary)]] is required to capture the proper “as-is” configuration of the system and to gain an understanding of system behavior prior to making any changes. Seacord, Plakosh, and Lewis's [[Modernizing Legacy Systems]] (2003) stresses the importance of documenting the existing architecture of a system, including the software architecture, prior to making any changes.<br />
<br />
Chapter 5 of Seacord, Plakosh, and Lewis provides a framework for understanding and documenting a legacy system. They point out that the product or service software will undergo a transformation during modernization and upgrades, and they introduce a horseshoe model that includes “functional transformation,” “code transformation,” and “architecture transformation.” <br />
<br />
During system verification and validation (after product change), it is important to perform regression testing on the portions of the system that were not modified to confirm that the upgrades did not affect the existing functions and behaviors of the system. The degree and amount of regression testing depend on the type of change made to the system and on whether the upgrade changes any functions or interfaces involved with system safety. INCOSE (2011, 126–128) recommends the use of a requirements verification traceability matrix to assist the systems engineer during regression testing.<br />
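The idea of using a traceability matrix to scope regression testing can be sketched as follows. This is a minimal illustration, not a prescribed RVTM format: the requirement IDs, component names, test names, and the dictionary-based representation are all hypothetical.

```python
# Minimal sketch: scope regression tests from a requirements
# verification traceability matrix (RVTM) after a modification.
# All requirement IDs, component names, and test names below are
# invented for illustration.

# Map each requirement to the verification tests that cover it.
rvtm = {
    "SYS-001 navigation accuracy": ["test_nav_accuracy", "test_nav_degraded_mode"],
    "SYS-002 display refresh rate": ["test_display_refresh"],
    "SYS-003 safe shutdown": ["test_safe_shutdown", "test_power_loss"],
}

# Map each requirement to the components that implement it.
allocation = {
    "SYS-001 navigation accuracy": {"gps_receiver", "nav_filter"},
    "SYS-002 display refresh rate": {"display_driver"},
    "SYS-003 safe shutdown": {"power_manager", "nav_filter"},
}

def regression_tests(changed_components):
    """Return the tests for every requirement touched by the change."""
    selected = []
    for req, components in allocation.items():
        if components & changed_components:  # any overlap with the change set
            selected.extend(rvtm[req])
    return sorted(set(selected))

# Example: a modification replaces the navigation filter software;
# tests for every requirement allocated to that component are selected.
print(regression_tests({"nav_filter"}))
```

In practice the matrix would also record verification methods and results, but even this reduced form shows how traceability lets the engineer bound the regression scope by the components actually changed.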
<br />
It is important to consider changes to the system support environment. Change may require modification or additions to the system test equipment and other support elements such as packaging and transportation.<br />
<br />
Some commercial products involve components and subsystems for which modernization cannot be performed. Examples are consumer electronics such as radios and computer components: the purchase price of these systems is low enough that upgrades are not economical.<br />
<br />
== Application to Service Systems ==<br />
Service system modernization may require regulatory changes to allow the use of new technologies and new materials. It also requires backward compatibility with the previously provided service capability during the period of change. <br />
<br />
Service system modernization that spans large geographical areas requires a phased change and implementation strategy. Transportation systems such as interstate highways provide service to many different types of consumers and span such large geographical areas. <br />
<br />
Modernization often requires reverse engineering prior to making changes, for example to understand how traffic monitoring devices such as metering, TV cameras, and toll tags interface with the rest of the system. The California Department of Transportation's [[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]] adds reverse engineering to the process steps for system upgrade. In addition, this reference points out the need to maintain system integrity, and defines integrity to include the accurate documentation of the system's functional, performance, and physical requirements in the form of requirements, design, and support specifications.<br />
<br />
== Application to Enterprises ==<br />
<br />
The global positioning system (GPS) is an enterprise system implemented by the military but used by both commercial and government consumers worldwide. Modernization may involve changes to only a certain segment of the enterprise, such as the ground user segment, to reduce size, weight, and power, and may occur only in certain geographical areas of operation.<br />
Enterprise system modernization must consider the location of the modification and the conditions under which the work will be performed. The largest challenge is implementing the changes while the system remains operational; in these cases, disruption of ongoing operations is a serious risk. For some systems, the transition between the old and new configurations is particularly important and must be carefully planned. <br />
The air transportation system involves multiple countries and governing bodies dispersed over the entire world; modernizing such an enterprise requires coordination of changes across international boundaries. <br />
Enterprise modifications normally occur at a lower level of the system hierarchy. A change in requirements at the system level would normally constitute a new system or a new model of a system. <br />
The UK Chapter Guidebook discusses changes to the architecture of the system: in cases where a component is added or changed, this constitutes a change to the architecture.<br />
<br />
== Other Topics ==<br />
'''The Vee Model for Modifications'''<br />
<br />
The figure below illustrates how the standard [[Vee (V) Model (glossary)|Vee model (glossary)]] would be applied to a system modification. This Vee model is for the entire system: it applies to the whole aircraft, automobile, or other system. The key point is that if a modification is being initiated at a lower level of the system hierarchy, the Vee model must be entered at that level, as shown in the figure, which shows three entry points. As the INCOSE UK Chapter Guidebook points out, the Vee model may be entered multiple times during the life of the system.<br />
<br />
A change to the system that does not change the system capabilities but does change the requirements and design of a subsystem may be introduced into the process at point B on the Vee model. Changes of this type could provide a new subsystem, such as a computer system, that meets the system-level requirements but has differences from the original that necessitate modifications to the lower-level requirements and design, such as changing disk memory to solid state memory. <br />
<br />
The process for implementing changes starting at this point has been described by Nguyen (2006). Modifications introduced at points B or C in the figure necessitate flowing the requirements upward through their “parent” requirements to the system-level requirements. <br />
<br />
[[File:052411_BSBW_The_Vee_Model.png|600px|The Vee Model for Modifications at the Three Different Levels]]<br />
<br />
There are many cases where a change to the system needs to be introduced at the lowest levels of the architectural hierarchy; here, the entry point to the process is at point C on the Vee model. These cases are typically related to parts made obsolete by changes in technology, or to reliability issues with subsystems and parts chosen for the original design. A change at this level should be F3I compatible so that none of the higher-level requirements are affected. The systems engineer must ensure there is no impact at the higher levels; when there is, the impact must be identified and resolved with the customer and the other stakeholders.<br />
<br />
== Practical Considerations ==<br />
As pointed out by the INCOSE UK Chapter Guidebook, there may be multiple modifications to a system in its lifetime, and these modifications often occur concurrently. This situation requires special attention, and there are two methods for managing it. <br />
The first method is called the “block” method: a group of systems is modified simultaneously and deployed together as a group at a specific time. This method is meant to ensure that at the end state, all the modifications have been coordinated and integrated so there are no conflicts and no non-compliance issues with the system-level requirements. <br />
The second method is called continuous integration and is meant to occur concurrently with the block method. <br />
Information management systems provide an example of a commercial system where multiple changes can occur concurrently. The information management system hardware and network modernization will cause the system software to undergo changes. Software release management is used to coordinate the proper timing for the distribution of system software changes to end-users (Michigan Department of Information Technology, 2008).<br />
<br />
===Application of Commercial-off-the-Shelf Components===<br />
<br />
One of the more prominent considerations is the use of commercial-off-the-shelf (COTS) components. The application of COTS subsystems, components, and technologies to system life management brings a combination of advantages and risks. The first advantage is the inherent technological advancement that comes with COTS components, which continue to evolve toward a higher degree of functional integration, providing increased functionality while shrinking in physical size. The other advantage of using COTS components is cost. The risks of using COTS during system life management involve component obsolescence and changes to system interfaces: commercial market forces drive some components to obsolescence within two years or less. Application of COTS therefore requires careful consideration of form factor and interface (physical and electrical).<br />
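The obsolescence risk described above can be screened mechanically by comparing each COTS component's projected end-of-life date against the planned system support horizon. The sketch below is illustrative only: the part names, dates, and support horizon are invented, and real obsolescence forecasts would come from supplier or DMSMS data.

```python
# Minimal sketch: flag COTS parts whose projected obsolescence date
# falls before the planned end of system support. All part names and
# dates are hypothetical illustrations.
from datetime import date

support_end = date(2030, 1, 1)  # assumed planned end of system support

# Projected obsolescence (e.g., last-time-buy) dates per COTS part.
cots_parts = {
    "single_board_computer": date(2026, 6, 1),
    "ethernet_switch": date(2031, 3, 1),
    "graphics_module": date(2025, 9, 1),
}

def at_risk(parts, horizon):
    """Return parts projected to go obsolete before support ends."""
    return sorted(p for p, eol in parts.items() if eol < horizon)

# Parts returned here need a bridge buy, redesign, or an F3I substitute.
print(at_risk(cots_parts, support_end))
```

A screen like this would typically be rerun at each technology refresh review, since commercial end-of-life forecasts shift as the market changes.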
<br />
==References== <br />
<br />
===Citations===<br />
Blanchard and Fabrycky. 2006. Systems Engineering and Analysis, 4th edition. Prentice Hall International Series.<br />
<br />
INCOSE. 2011. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1.<br />
<br />
INCOSE UK Chapter. 2010. Applying systems engineering to in-service systems: Supplementary guidance to the INCOSE systems engineering handbook, version 3.2, issue 1.0. Foresgate, UK: International Council on Systems Engineering (INCOSE) UK Chapter, p10, 13, 23.<br />
<br />
MDIT. 2008. System maintenance guidebook (SMG), version 1.1: A companion to the systems engineering methodology (SEM) of the state unified information technology environment (SUITE). MI, USA: Michigan Department of Information Technology (MDIT), DOE G 200: p38. <br />
<br />
Nguyen, L. 2006. Adapting the vee model to accomplish systems engineering on change projects. Paper presented at 9th Annual National Defense Industrial Association (NDIA) Systems Engineering Conference, San Diego, CA, USA. <br />
<br />
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. On-line policies, procedures, and planning references. http://www.acq.osd.mil/log/<br />
<br />
Seacord, R.C., D. Plakosh, and G.A. Lewis. 2003. Modernizing Legacy Systems. Boston, MA, USA: Addison-Wesley (Pearson Education, Inc.).<br />
<br />
Systems Engineering Guidebook for Intelligent Transportation Systems, version 1.1. 2005. Sacramento, CA, USA: California Department of Transportation, Division of Research and Innovation.<br />
<br />
<br />
===Primary References===<br />
Blanchard and Fabrycky. 2006. [[Systems Engineering and Analysis]], 4th edition. Prentice Hall International Series.<br />
<br />
Caltrans, and USDOT. 2005. [[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]], version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1.<br />
<br />
INCOSE. 2011. [[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
Jackson, S. 2007. [[A Multidisciplinary Framework for Resilience to Disasters and Disruptions]]. Journal of Integrated Design and Process Science 11 (2). IOS Press.<br />
<br />
OUSD AT&L. 2010. [[Logistics and Materiel Readiness]]. Arlington, VA, USA: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (OUSD AT&L)/U.S. Department of Defense (DoD) [database online]. Available from [http://www.acq.osd.mil/log/ http://www.acq.osd.mil/log/] (accessed August 5, 2010).<br />
<br />
Seacord, R.C., D. Plakosh, and G.A. Lewis. 2003. [[Modernizing Legacy Systems]]. Boston, MA, USA: Addison-Wesley (Pearson Education, Inc.).<br />
<br />
===Additional References===<br />
<br />
Blanchard, B. S., and W. J. Fabrycky. 2006. Systems engineering and analysis. Prentice-Hall international series in industrial and systems engineering. 4th ed. Englewood Cliffs, NJ, USA: Prentice-Hall: p 541-565.<br />
<br />
Braunstein, A. 2007. Balancing hardware end-of-life costs and responsibilities. Westport, CT, USA: Experture Group, ETS 07-12-18. <br />
<br />
Casetta, E. 2001. Transportation systems engineering: Theory and methods. New York, NY: Kluwer Publishers Academic, Springer. <br />
<br />
DAU. 2010. Acquisition community connection (ACC): Where the DoD AT&L workforce meets to share knowledge. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/US Department of Defense (DoD) [database online]. Available from https://acc.dau.mil/ (accessed August 5, 2010). <br />
<br />
Elliot, T., K. Chen, and R. C. Swanekamp. 1998. Standard handbook of powerplant engineering. New York, NY: McGraw Hill, section 6.5.<br />
<br />
FAA. 2006. Section 4.1. In Systems engineering manual. Washington, D.C.: U.S. Federal Aviation Administration (FAA). <br />
<br />
FCC. 2009. Radio and television broadcast rules. Washington, D.C.: U.S. Federal Communications Commission (FCC), 47 CFR Part 73, FCC Rule 09-19: p 11299-11318.<br />
<br />
Finlayson, B., and B. Herdlick. 2008. Systems engineering of deployed systems. Baltimore, MD, USA: Johns Hopkins University: p28. <br />
<br />
IEC. 2007. Obsolescence management - application guide, ed 1.0. Geneva, Switzerland: International Electrotechnical Commission, IEC 62302. <br />
<br />
IEEE. 2010 IEEE Standard Framework for Reliability Prediction of Hardware. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1413.<br />
<br />
IEEE. 1998 IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1332. <br />
<br />
IEEE. 2008. IEEE Recommended practice on Software Reliability. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1633. <br />
<br />
IEEE. 2008. ISO/IEC/IEEE Systems and Software Engineering – System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization, ISO/IEC 15288.<br />
<br />
IEEE. 2005. IEEE Standard for Software Configuration Management Plans. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 828. <br />
<br />
INCOSE. 2010. In-service systems working group. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). <br />
<br />
INCOSE UK Chapter. 2010. Applying systems engineering to in-service systems: Supplementary guidance to the INCOSE systems engineering handbook, version 3.2, issue 1.0. Foresgate, UK: International Council on Systems Engineering (INCOSE) UK Chapter, p10, 13, 23. <br />
<br />
Institute of Engineers Singapore. 2009. Systems engineering body of knowledge, provisional version 2.0. Singapore. <br />
<br />
ISO/IEC. 2003. Industrial automation systems integration—integration of life-cycle data for process plants including oil and gas production facilities. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC).<br />
<br />
Jackson, S. 2007. A multidisciplinary framework for resilience to disasters and disruptions. Journal of Design and Process Science 11 (2): p91-108, 10.<br />
<br />
———. 1997. Systems engineering for commercial aircraft. Surrey, UK: Ashgate Publishing Ltd. <br />
<br />
Koopman, P. 1999. Life cycle considerations. Pittsburgh, PA, USA: Carnegie-Mellon University (CMU) [database online]. Available from http://www.ece.cmu.edu/~koopman/des_s99/life_cycle/index.html (accessed August 5, 2010). <br />
<br />
Livingston, H. 2010. GEB1: Diminishing manufacturing sources and material shortages (DMSMS) management practices. McClellan, CA, USA: Defense MicroElectronics Activity (DMEA)/U.S. Department of Defense (DoD). <br />
<br />
Mays, L., ed. 2000. Water distribution systems handbook. New York, NY: McGraw-Hill Book Company: Chapter 3.<br />
<br />
MDIT. 2008. System maintenance guidebook (SMG), version 1.1: A companion to the systems engineering methodology (SEM) of the state unified information technology environment (SUITE). MI, USA: Michigan Department of Information Technology (MDIT), DOE G 200: p38. <br />
<br />
NAS. 2006. National airspace system (NAS) system engineering manual, version 3.1 (volumes 1-3). Washington, DC: Air Traffic Organization (ATO)/U.S. Federal Aviation Administration (FAA), NAS SEM 3.1. <br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
Nguyen, L. 2006. Adapting the vee model to accomplish systems engineering on change projects. Paper presented at 9th Annual National Defense Industrial Association (NDIA) Systems Engineering Conference, San Diego, CA, USA. <br />
<br />
Reason, J. 1997. Managing the risks of organizational accident. Aldershot, UK: Ashgate. <br />
<br />
Ryen, E. 2008. Overview of the systems engineering process. Bismarck, ND, USA: North Dakota Department of Transportation (NDDOT). <br />
<br />
SAE International. 2010. Standards: Automotive--maintenance and aftermarket. Warrendale, PA, USA: Society of Automotive Engineers (SAE) International. <br />
<br />
SEI. 2010. Software Engineering Institute. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU) [database online]. Available from http://www.sei.cmu.edu (accessed August 5, 2010). <br />
<br />
Schafer, D.L. 2003. Keeping Pace With Technology Advances When Funding Resources Are Diminished. Paper presented at AUTOTESTCON 2003. IEEE Systems Readiness Technology Conference, Anaheim, CA :p 584.<br />
<br />
SOLE. 2009. Applications divisions. Hyattsville, MD, USA: The International Society of Logistics (SOLE) [database online]. Available from http://www.sole.org/appdiv.asp (accessed August 5, 2010). <br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Service Life Extension|<- Previous Article]] | [[Service Life Extension|Parent Article]] | [[Disposal and Retirement|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>

Skmackin — https://sebokwiki.org/w/index.php?title=Service_Life_Extension&diff=9682

Service Life Extension (2011-08-09T21:41:56Z)
<p>Skmackin: </p>
<hr />
<div>Product and service life extension involves continued usage of a product or service after the system has reached its original [[design life (glossary)]]. It requires assessing the [[Life Cycle Cost (LCC)|life cycle cost (LCC) (glossary)]] of continuing to use the product or service versus the cost of a replacement system.<br />
<br />
[[Service Life Extension (SLE) (glossary)|Service life extension (SLE) (glossary)]] emphasizes reliability upgrades and component replacement, or rebuilding of the system, to delay the system’s entry into wear-out status when sustainment becomes prohibitively expensive or reliability and performance requirements can no longer be met. The goal is typically to return the system to as near new condition as possible, consistent with the economic constraints of the program.<br />
<br />
SLE is also regarded as an environmentally friendly way to reduce waste by prolonging the useful life of aging products and preventing them from being discarded before their remaining value is realized. However, given fast-changing technology and physical deterioration, a major concern in planning a product SLE is whether the product is fit to serve an extended life.<br />
<br />
<br />
==Topic Overview==<br />
<br />
Key factors and questions that must be considered by the systems engineer during service life extension include:<br />
*Current life cycle costs of the system,<br />
*Design life and expected remaining useful life of the system,<br />
*Software maintenance,<br />
*Configuration management,<br />
*Warranty policy,<br />
*Availability of parts, subsystems, and manufacturing sources,<br />
*Availability of system documentation to support life extension.<br />
<br />
System [[Design Life (glossary)|design life (glossary)]] is a major consideration for service life extension. System design life parameters are established early in the system design phase and include key assumptions involving safety limits and material life; both are critical to defining system life extension. Jackson (2007, 91-108) emphasizes that architecting for system resiliency increases system life, pointing out that a system can be architected to withstand internal and external disruptions. Systems that age through use, such as aircraft, bridges, and nuclear power plants, require periodic inspection to ascertain the degree of aging and fatigue. The results of inspections determine the need for actions to extend the product life (Elliot, Chen, and Swanekamp 1998, section 6.5).<br />
<br />
Software maintenance is a critical aspect of service life extension. The [[Legacy System (glossary)|legacy system (glossary)]] may include multiple computer resources that have been in operation for many years, with functions that are essential and must not be disrupted during the upgrade or integration process. Typically, legacy systems include a computer resource or application software program that continues to be used because the cost of replacing or redesigning it is prohibitive. The Software Engineering Institute (SEI) has addressed the need for service life extension of software products and services and provides useful guidance in its on-line library for Software Product Lines (SEI 2010, 1). <br />
<br />
Systems engineers have found that service life can be extended through the proper selection of materials, for example, those with special coatings. Transportation system elements such as highway bridges and rail systems are being designed for extended service life by using special epoxy-coated steel (Brown, Weyers, and Sprinkel 2006, 13).<br />
<br />
Diminishing manufacturing sources and diminishing suppliers need to be addressed early in the service life extension process. Livingston (2010) in ''Diminishing Manufacturing Sources and Material Shortages (DMSMS) Management Practices'' provides a method for addressing product life extension when the sources of supply are an issue. He addresses the product life cycle model and describes a variety of methods that can be applied during system design to minimize the impact of future component obsolescence issues.<br />
<br />
During product and service life extension, it is often necessary to revisit and challenge the assumptions behind any previous life cycle cost analysis (and its constituent analyses) early in the process to evaluate their continued validity and applicability.<br />
<br />
==Application to Product Systems==<br />
<br />
Product life extension requires an analysis of the life cycle cost associated with continued use of the existing product versus the cost of a replacement product. The [[INCOSE Systems Engineering Handbook]] v3.2.1 points out in chapter 3.3 (life cycle stages) that the support stage includes service life extension, and chapter 7 provides a life cycle cost (LCC) model that can frame the decision of whether a product’s life should be extended. Chapter 17 of Blanchard and Fabrycky’s [[Systems Engineering and Analysis]] provides a life cycle cost methodology and emphasizes the analysis of different alternatives before deciding on product life extension. <br />
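The extend-versus-replace comparison described above can be illustrated with a simple discounted life cycle cost calculation. This is a sketch only: the cost figures, growth rate, discount rate, and planning horizon are invented for illustration and are not drawn from the handbook or from Blanchard and Fabrycky.

```python
# Minimal sketch of a life-cycle-cost comparison supporting a service
# life extension decision. All dollar figures, the discount rate, the
# O&M growth rate, and the horizon are hypothetical assumptions.

def npv(cash_flows, rate):
    """Net present value of a list of yearly costs (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

years = 10
rate = 0.05  # assumed discount rate

# Option A: extend the existing system — upgrade now, O&M costs rising
# 8% per year as the system ages.
extend = [400_000] + [120_000 * 1.08 ** t for t in range(1, years)]

# Option B: replace the system — large acquisition cost, lower flat O&M.
replace = [1_500_000] + [60_000] * (years - 1)

lcc_extend = npv(extend, rate)
lcc_replace = npv(replace, rate)

print(f"Extend:  ${lcc_extend:,.0f}")
print(f"Replace: ${lcc_replace:,.0f}")
print("Lower-cost option:", "extend" if lcc_extend < lcc_replace else "replace")
```

The point of the sketch is the structure of the comparison, not the numbers: with different assumptions about O&M growth or the remaining horizon, the decision can flip, which is why the analysis of alternatives matters.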
<br />
For military systems, service life extension is considered a subset of modification or modernization, and well-developed detailed guidance exists for service life extension programs (SLEPs). The Office of the Under Secretary of Defense for AT&L provides an on-line reference (Defense Acquisition University, https://www.dau.mil) for policies, procedures, planning guidance, and white papers on military product service life extension. Continuous military system modernization is a process by which state-of-the-art technologies are inserted continuously into weapon systems to increase reliability, lower sustainment costs, and increase the war-fighting capability of a system to meet evolving customer requirements throughout an indefinite service life. <br />
<br />
Aircraft service life can be extended by reducing dynamic loads which lead to structural fatigue. The Boeing B-52 military aircraft and the Boeing 737 commercial aircraft are prime examples of system life extension. The B-52 aircraft was first fielded in 1955 and continues to be operated in 2011. The Boeing 737 passenger aircraft has been fielded since 1967 and continues operation today.<br />
<br />
For nuclear reactors, system safety is the most important precondition for service life extension and must be maintained while the service life is extended (Paks 2010). Built-in test, automated fault reporting and prognostics, analysis of failure modes, and the detection of early signs of wear and aging may be applied to predict when maintenance actions will be required to extend the service life of the product.<br />
<br />
==Application to Service Systems==<br />
<br />
For systems that provide services to a large consumer base, service life extension involves continued delivery of the service to end consumers without disruption. It requires capital investment, financial planning, and phased deployment of changes. Examples are transportation systems, water treatment facilities, energy generation and delivery systems, and the health care industry. <br />
<br />
As new technologies are introduced, service delivery can be improved while reducing life cycle costs. Service systems have to continuously assess delivery costs based upon the use of newer technologies. <br />
<br />
Water handling systems provide a good example of a service system that undergoes life extension. Water handling systems have existed since early civilization. Since such systems remain in use as long as a site is occupied (e.g., the Roman aqueducts), and upgrades are required as the population expands, they are a good example of "systems that live forever." For example, there are still US water systems that use a few wooden pipes, since there has been no reason to replace them. Water system life extension must deal with the issues of water quality and capacity for future users (Mays 2000). Water quality requirements can be further understood from AWWA standards (AWWA 2010).<br />
<br />
==Application to Enterprises==<br />
Service life extension of a large enterprise such as NASA’s national space transportation system involves service life extension of the elements of the enterprise, such as the space vehicle (the Shuttle), ground processing systems for launch operations and mission control, and space-based communication systems that support space vehicle tracking and status monitoring. Service life extension of an enterprise requires a holistic look across the entire enterprise, and a balanced approach to address the cost of operating older system components versus the cost of implementing service life improvements.<br />
<br />
Large enterprise systems, such as oil and natural gas reservoirs that span large geographical areas, can use advanced technology to increase service life. The economic extraction of natural resources from previously established reservoirs can extend the system life of oil and natural gas reservoirs. Life extension methods include pumping special liquids or gases into the reservoir to push the remaining oil or natural gas to the surface for extraction (Office of Natural Gas and Oil Technology 1999).<br />
<br />
==Other Topics==<br />
<br />
Commercial product developers have been required to retain product information for extended periods of time (up to twenty years) after the last operational unit leaves active service. Regulatory requirements should be considered when extending service life (INCOSE 2011).<br />
<br />
==Practical Considerations==<br />
The cost associated with life extension is one of the main inputs into the decision to extend the service life of a product or a service. It is often the case that the funding required for service life extension of large, complex systems spans several fiscal planning cycles and is therefore subject to changes in attitude by the elected officials who appropriate the funding. <br />
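The cost trade described above can be illustrated with a simple discounted cash flow comparison. The sketch below is a minimal, hypothetical example; the cost figures, time horizon, and discount rate are invented for illustration and are not drawn from any cited program.

```python
def npv(cash_flows, rate):
    """Net present value of a stream of annual costs (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# Hypothetical figures (in $M): extend service life (modest refurbishment,
# higher ongoing support) versus replace the system (large upfront build,
# cheaper ongoing support), each evaluated over a 10-year horizon.
extend = [20.0] + [8.0] * 10
replace = [60.0] + [3.0] * 10

rate = 0.05  # assumed discount rate
npv_extend = npv(extend, rate)
npv_replace = npv(replace, rate)
cheaper = "extend" if npv_extend < npv_replace else "replace"
print(f"extend: {npv_extend:.1f}, replace: {npv_replace:.1f} -> {cheaper}")
```

In practice such a comparison would also account for capability differences, risk, and disposal costs, not just the direct outlays shown here.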
<br />
<br />
==References== <br />
<br />
===Citations===<br />
AWWA. 2010. AWWA manuals of water supply practices. Denver, CO, USA: American Water Works Association (AWWA). Available from http://www.awwa.org/Resources/standards.cfm?ItemNumber=47829&navItemNumber=47834 (accessed August 5, 2010).<br />
<br />
Blanchard and Fabrycky. 2006. Systems Engineering and Analysis, 4th edition. Prentice Hall International Series.<br />
<br />
Brown, M., R. Weyers, and M. Sprinkel. 2006. Service life extension of Virginia bridge decks afforded by epoxy-coated reinforcement. Journal of ASTM International (JAI) 3 (2) (February 2005): 13.<br />
<br />
DAU. 2010. Acquisition community connection (ACC): Where the DoD AT&L workforce meets to share knowledge. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/US Department of Defense (DoD). Available from https://acc.dau.mil/ (accessed August 5, 2010). <br />
<br />
Elliot, T., K. Chen, and R. C. Swanekamp. 1998. Standard handbook of powerplant engineering. New York, NY: McGraw Hill, section 6.5.<br />
<br />
INCOSE. 2011. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1.<br />
<br />
Jackson, S. 2007. A Multidisciplinary Framework for Resilience to Disasters and Disruptions. Journal of Integrated Design and Process Science 11 (2). IOS Press.<br />
<br />
Livingston, H. 2010. GEB1: Diminishing manufacturing sources and material shortages (DMSMS) management practices. McClellan, CA, USA: Defense MicroElectronics Activity (DMEA)/U.S. Department of Defense (DoD).<br />
<br />
Mays, L., ed. 2000. Water distribution systems handbook. New York, NY: McGraw-Hill Book Company: Chapter 3.<br />
<br />
Office of Natural Gas and Oil Technology. 1999. Reservoir LIFE extension program: Encouraging production of remaining oil and gas. Washington, DC: U.S. Department of Energy (DoE).<br />
<br />
Paks Nuclear Power Plant. Paks nuclear power plant: Service life extension. in Paks Nuclear Power Plant Ltd. [database online]. Hungary, 2010. Available from http://paksnuclearpowerplant.com/service-life-extension (accessed August 5, 2010). <br />
<br />
SEI. Software Engineering Institute. in Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU) [database online]. Pittsburgh, PA, USA, 2010. Available from http://www.sei.cmu.edu (accessed August 5, 2010).<br />
<br />
<br />
===Primary References===<br />
Blanchard and Fabrycky. 2006. [[Systems Engineering and Analysis]], 4th edition. Prentice Hall International Series.<br />
<br />
INCOSE. 2011. [[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities. Version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1<br />
<br />
Jackson, S. 2007. [[A Multidisciplinary Framework for Resilience to Disasters and Disruptions]]. Journal of Integrated Design and Process Science 11 (2). IOS Press.<br />
<br />
OUSD(AT&L). 2010. [[Logistics and Materiel Readiness]]. On-line policies, procedures, and planning references. Arlington, VA, USA: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)). Available at: [http://www.acq.osd.mil/log/ http://www.acq.osd.mil/log]<br />
<br />
Caltrans and USDOT. 2005. [[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]]. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research and Innovation and U.S. Department of Transportation (USDOT), SEG for ITS 1.1.<br />
<br />
Seacord, R.C., D. Plakosh, and G.A. Lewis. 2003. [[Modernizing Legacy Systems]]. Boston, MA, USA: Addison Wesley, Pearson Education Inc.<br />
<br />
<br />
<br />
===Additional References===<br />
<br />
AWWA. AWWA manuals of water supply practices. in American Water Works Association (AWWA) [database online]. Denver, CO, USA, 2010. Available from http://www.awwa.org/Resources/standards.cfm?ItemNumber=47829&navItemNumber=47834 (accessed August 5, 2010). <br />
<br />
Blanchard, B. S. 2010. Logistics engineering and management. 5th ed. Englewood Cliffs, NJ, USA: Prentice Hall: p341-342.<br />
<br />
Braunstein, A. 2007. Balancing hardware end-of-life costs and responsibilities. Westport, CT, USA: Experture Group, ETS 07-12-18. <br />
<br />
Brown, M., R. Weyers, and M. Sprinkel. 2006. Service life extension of Virginia bridge decks afforded by epoxy-coated reinforcement. Journal of ASTM International (JAI) 3 (2) (February 2005): 13. <br />
<br />
Caltrans, and USDOT. 2005. Systems engineering guidebook for ITS, version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1 : p278, 101-103, 107.<br />
<br />
Cascetta, E. 2001. Transportation systems engineering: Theory and methods. New York, NY: Kluwer Publishers Academic, Springer. <br />
<br />
DAU. Acquisition community connection (ACC): Where the DoD AT&L workforce meets to share knowledge. in Defense Acquisition University (DAU)/US Department of Defense (DoD) [database online]. Ft. Belvoir, VA, USA, 2010. Available from https://acc.dau.mil/ (accessed August 5, 2010). <br />
<br />
DLA. Defense logistics agency disposition services [homepage]. in Defense Logistics Agency (DLA)/U.S. Department of Defense [database online]. Battle Creek, MI, USA, 2010 [cited June 19 2010]: p5. Available from http://www.dtc.dla.mil.<br />
<br />
Elliot, T., K. Chen, and R. C. Swanekamp. 1998. Standard handbook of powerplant engineering. New York, NY: McGraw Hill, section 6.5.<br />
<br />
FAA. 2006. Section 4.1. In Systems engineering manual. Washington, D.C.: U.S. Federal Aviation Administration (FAA). <br />
<br />
FCC. 2009. Radio and television broadcast rules. Washington, D.C.: U.S. Federal Communications Commission (FCC), 47 CFR Part 73, FCC Rule 09-19: p 11299-11318.<br />
<br />
Finlayson, B., and B. Herdlick. 2008. Systems engineering of deployed systems. Baltimore, MD, USA: Johns Hopkins University: p28. <br />
<br />
Gehring, G., D. Lindemuth, and W. T. Young. 2004. Break Reduction/Life extension program for CAST and ductile iron water mains. Paper presented at NO-DIG 2004, Conference of the North American Society for Trenchless Technology (NASTT), March 22-24, 2004, New Orleans, LA, USA. <br />
<br />
Hovinga, M. N., and G. J. Nakoneczny. May 2000. Standard recommendations for pressure part inspection during a boiler life extension program. Paper presented at ICOLM (International Conference on Life Management and Life Extension of Power Plant), Xi’an, P.R. China. <br />
<br />
IEC. 2007. Obsolescence management - application guide, ed 1.0. Geneva, Switzerland: International Electrotechnical Commission, IEC 62302. <br />
<br />
IEEE. 2010. IEEE Standard Framework for Reliability Prediction of Hardware. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1413.<br />
<br />
IEEE. 1998. IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1332. <br />
<br />
IEEE. 2008. IEEE Recommended practice on Software Reliability. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1633. <br />
<br />
ISO/IEC/IEEE. 2008. Systems and Software Engineering – System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization (ISO), ISO/IEC 15288:2008.<br />
<br />
IEEE. 2005. IEEE Standard for Software Configuration Management Plans. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 828. <br />
<br />
Ishii, K., C. F. Eubanks, and P. Di Marco. 1994. Design for product retirement and material life-cycle. Materials & Design 15 (4): 225-33. <br />
<br />
INCOSE. 2010. In-service systems working group. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). <br />
<br />
INCOSE UK Chapter. 2010. Applying systems engineering to in-service systems: Supplementary guidance to the INCOSE systems engineering handbook, version 3.2, issue 1.0. Foresgate, UK: International Council on Systems Engineering (INCOSE) UK Chapter, p10, 13, 23. <br />
<br />
Institute of Engineers Singapore. 2009. Systems engineering body of knowledge, provisional version 2.0. Singapore. <br />
<br />
ISO/IEC. 2003. Industrial automation systems integration - Integration of life-cycle data for process plants including oil and gas production facilities. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC). <br />
<br />
Jackson, S. 1997. Systems engineering for commercial aircraft. Surrey, UK: Ashgate Publishing Ltd. <br />
<br />
Koopman, P. Life cycle considerations. in Carnegie-Mellon University (CMU) [database online]. Pittsburgh, PA, USA, 1999. Available from http://www.ece.cmu.edu/~koopman/des_s99/life_cycle/index.html (accessed August 5, 2010). <br />
<br />
L3 Communications. 2010. Service life extension program (SLEP). Newport News, VA, USA: L3 Communications, Flight International Aviation LLC. <br />
<br />
Livingston, H. 2010. GEB1: Diminishing manufacturing sources and material shortages (DMSMS) management practices. McClellan, CA, USA: Defense MicroElectronics Activity (DMEA)/U.S. Department of Defense (DoD). <br />
<br />
Mays, L., ed. 2000. Water distribution systems handbook. New York, NY: McGraw-Hill Book Company: Chapter 3.<br />
<br />
MDIT. 2008. System maintenance guidebook (SMG), version 1.1: A companion to the systems engineering methodology (SEM) of the state unified information technology environment (SUITE). MI, USA: Michigan Department of Information Technology (MDIT), DOE G 200: p38. <br />
<br />
NAS. 2006. National airspace system (NAS) system engineering manual, version 3.1 (volumes 1-3). Washington, DC: Air Traffic Organization (ATO)/U.S. Federal Aviation Administration (FAA), NAS SEM 3.1. <br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
Office of Natural Gas and Oil Technology. 1999. Reservoir LIFE extension program: Encouraging production of remaining oil and gas. Washington, DC: U.S. Department of Energy (DoE). <br />
<br />
Paks Nuclear Power Plant. Paks nuclear power plant: Service life extension. in Paks Nuclear Power Plant Ltd. [database online]. Hungary, 2010. Available from http://paksnuclearpowerplant.com/service-life-extension (accessed August 5, 2010). <br />
<br />
Reason, J. 1997. Managing the risks of organizational accident. Aldershot, UK: Ashgate. <br />
<br />
Ryen, E. 2008. Overview of the systems engineering process. Bismarck, ND, USA: North Dakota Department of Transportation (NDDOT). <br />
<br />
SAE International. 2010. Standards: Automotive--maintenance and aftermarket. Warrendale, PA, USA: Society of Automotive Engineers (SAE) International. <br />
<br />
SEI. Software Engineering Institute. in Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU) [database online]. Pittsburgh, PA, USA, 2010. Available from http://www.sei.cmu.edu (accessed August 5, 2010). <br />
<br />
Schafer, D.L. 2003. Keeping Pace With Technology Advances When Funding Resources Are Diminished. Paper presented at AUTOTESTCON 2003. IEEE Systems Readiness Technology Conference, Anaheim, CA :p 584.<br />
<br />
SOLE. Applications divisions. in The International Society of Logistics (SOLE) [database online]. Hyattsville, MD, USA, 2009. Available from http://www.sole.org/appdiv.asp (accessed August 5, 2010). <br />
<br />
Sukamto, S. 2003. Plant aging and life extension program at Arun LNG plant, Lhokseumawe, North Aceh, Indonesia. Paper presented at the 22nd Annual World Gas Conference, 1-5 June 2003, Tokyo, Japan.<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Product and Service Life Management|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[Capability Updates, Upgrades, and Modernization|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Service_Life_Management&diff=9681Service Life Management2011-08-09T21:41:16Z<p>Skmackin: </p>
<hr />
<div>Product and service life management deals with the overall life cycle planning and support of a system. The life of a product or service spans a considerably longer period of time than the time required to design and develop the product or service. Systems engineers need to understand and apply the principles of life management throughout the life cycle of the system.<br />
<br />
Product and service life management is also referred to as system sustainment. Sustainment involves the supportability of operational systems from initial procurement to disposal. Sustainment is a key upfront task for systems engineering that influences product and service performance and support cost for the entire life of the program. Sustainment activities include design for maintainability; application of built-in test, diagnostics, prognostics, and other condition-based maintenance techniques; implementation of logistics footprint reduction strategies; identification of technology insertion opportunities; identification of operations and support cost reduction opportunities; and monitoring of key support metrics. Life cycle sustainment plans should be created for large, complex systems (DAU 2010).<br />
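The "key support metrics" mentioned above commonly include availability measures. The sketch below computes two standard ones, inherent and operational availability, using formulas that follow standard logistics engineering practice (e.g., Blanchard 2010); all numeric figures are hypothetical and for illustration only.

```python
def inherent_availability(mtbf, mttr):
    """Ai = MTBF / (MTBF + MTTR): design-level availability,
    considering only corrective maintenance time."""
    return mtbf / (mtbf + mttr)

def operational_availability(uptime, downtime):
    """Ao = uptime / (uptime + downtime): availability as actually
    experienced in the field, including logistics and admin delays."""
    return uptime / (uptime + downtime)

# Hypothetical figures for one fielded unit over a year (hours).
ai = inherent_availability(mtbf=500.0, mttr=4.0)
ao = operational_availability(uptime=8400.0, downtime=360.0)
print(f"Ai = {ai:.3f}, Ao = {ao:.3f}")
```

Tracking the gap between Ai and Ao over time is one way a sustainment organization can see whether support delays, rather than inherent design reliability, are driving unavailability.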
<br />
Product and Service Life Management applies to both commercial and government systems. Examples of large commercial systems include energy generation and distribution, information management systems, the Internet, and health industries. Government systems include defense systems, transportation systems, water handling systems, and government services.<br />
It is critical that the planning for system life management occur during the requirements phase of system development. The requirements phase includes analysis of life cycle cost alternatives and an understanding of how the system will be sustained and modified once it is operational. <br />
<br />
The body of knowledge associated with product and service life management includes the following areas: (1) '''Service Life Extension'''; (2) '''Modernization and Upgrades'''; and (3) '''Disposal and Retirement'''. Systems engineers need to understand the principles of service life extension, the challenges that occur during system modifications, and the issues involved with disposal and retirement after a system has reached its [[useful life (glossary)]]. Managing service life extension uses the [[Engineering Change Management (glossary)]] process with an understanding of the design life constraints of the system. Modernizing existing [[Legacy System (glossary)|legacy systems (glossary)]] requires special attention to understanding the legacy requirements and the importance of having a complete inventory of all the system interfaces and technical drawings. Disposal and retirement of a product after it reaches its useful life requires attention to environmental concerns, special handling of hazardous waste, and concurrent operation of a replacement system as the existing system is being retired.<br />
<br />
The principles of product and service life management apply to different types of systems and domains. The type of system (commercial or government) should be used to select the appropriate body of knowledge and best practices that exist in different domains. For example, military systems would rely on sustainment references and best practices from the Department of Defense (e.g., military service instructions, the Defense Acquisition University) and military standardization bodies (e.g., AIAA, SAE, SOLE, National Geo-spatial consortium). <br />
<br />
Systems such as commercial aviation, power distribution, transportation systems, water handling, the Internet, and health industries would rely on system life management references and best practices from a combination of government agencies, local municipalities, and commercial standardization bodies and associations (e.g., the Department of Transportation, the State of Michigan, ISO, IEEE, INCOSE). <br />
<br />
Some standardization bodies have developed system life management practices that bridge both military and commercial systems (e.g., INCOSE, SOLE, ISO, IEEE).<br />
<br />
There are multiple commercial associations involved with defining engineering policies, best practices, and requirements for commercial product and service life management. Each commercial association has a specific focus for the market or domain area where the product is used. Key commercial associations include: American Society of Hospital Engineering (ASHE); Association of Computing Machinery (ACM); Society of Automotive Engineers (SAE); American Society of Mechanical Engineers (ASME); American Society for Testing & Materials (ASTM) International; National Association of Home Builders (NAHB); and Internet Society (ISOC) including Internet Engineering Task Force (IETF).<br />
<br />
The [[INCOSE Systems Engineering Handbook]] v.3.2.1 (INCOSE 2011) identifies several relevant points regarding product and service life management.<br />
<br />
[[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]], version 1.1, provides guidance on product changes and system retirement. <br />
<br />
Blanchard and Fabrycky's [[Systems Engineering and Analysis]] (2006) emphasizes design for supportability and provides a framework for product and service supportability and for planning system retirement.<br />
<br />
Seacord, Plakosh, and Lewis's [[Modernizing Legacy Systems]] (2003) identifies strategies for product and service modernization.<br />
<br />
The Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)) [[Logistics and Materiel Readiness]] site ([http://www.acq.osd.mil/log/ http://www.acq.osd.mil/log/]) provides on-line policies, procedures, and planning references for product and service life extension, modernization, and retirement.<br />
<br />
Jackson's [[A Multidisciplinary Framework for Resilience to Disasters and Disruptions]] (2007) provides insight into architecting a system for extended service life.<br />
<br />
===Typical Pitfalls that Occur after Product and Service Deployment===<br />
Major pitfalls associated with systems engineering after the deployment of products and services can be avoided if the systems engineer:<br />
*Recognizes that the systems engineering process does not stop when the product or service becomes operational.<br />
*Understands that certain life management functions and organizations, especially in the post-delivery phase of the life cycle, are part of the systems engineering process.<br />
*Knows that modifications need to comply with the system requirements. <br />
*Considers that the users must be able to continue the maintenance activities drawn up during the system requirements phase after an upgrade or modification to the system is made.<br />
*Accounts for changing user requirements over the system life cycle.<br />
*Adapts the support concepts, drawn up during development, throughout the life cycle.<br />
*Applies engineering change management to the total system.<br />
<br />
Not addressing these areas of concern early in development and throughout the product or service’s life cycle can have dire consequences.<br />
<br />
<br />
===Topics===<br />
The topics contained within this knowledge area include:<br />
*[[Service Life Extension]]<br />
*[[Capability Updates, Upgrades, and Modernization]]<br />
*[[Disposal and Retirement]]<br />
<br />
==References== <br />
<br />
===Citations===<br />
Blanchard and Fabrycky. 2006. [[Systems Engineering and Analysis]], 4th edition. Prentice Hall International Series.<br />
<br />
DAU. 2010. Acquisition Community Connection (ACC): Where the DoD AT&L workforce meets to share knowledge. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/US Department of Defense (DoD). Available from https://acc.dau.mil/.<br />
<br />
INCOSE. 2011. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1.<br />
<br />
Jackson, S. 2007. [[A Multidisciplinary Framework for Resilience to Disasters and Disruptions]]. Journal of Integrated Design and Process Science 11 (2). IOS Press.<br />
<br />
OUSD(AT&L). 2011. [[Logistics and Materiel Readiness]]. On-line policies, procedures, and planning references. Arlington, VA, USA: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)). Available at: [http://www.acq.osd.mil/log/ http://www.acq.osd.mil/log/].<br />
<br />
Seacord, R.C., D. Plakosh, and G.A. Lewis. 2003. [[Modernizing Legacy Systems]]. Boston, MA, USA: Addison Wesley, Pearson Education Inc.<br />
<br />
Caltrans and USDOT. 2005. Systems Engineering Guidebook for Intelligent Transportation Systems (ITS), version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research and Innovation and U.S. Department of Transportation (USDOT), SEG for ITS 1.1.<br />
<br />
===Primary References===<br />
<br />
Blanchard and Fabrycky. 2006. [[Systems Engineering and Analysis]], 4th edition. Prentice Hall International Series.<br />
<br />
INCOSE. 2011. [[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities. Version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1<br />
<br />
Jackson, S. 2007. [[A Multidisciplinary Framework for Resilience to Disasters and Disruptions]]. Journal of Integrated Design and Process Science 11 (2). IOS Press.<br />
<br />
OUSD(AT&L). 2011. [[Logistics and Materiel Readiness]]. On-line policies, procedures, and planning references. Arlington, VA, USA: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)). Available at: [http://www.acq.osd.mil/log/ http://www.acq.osd.mil/log/].<br />
<br />
Caltrans and USDOT. 2005. [[Systems Engineering Guidebook for Intelligent Transportation Systems (ITS)]], version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research and Innovation and U.S. Department of Transportation (USDOT), SEG for ITS 1.1.<br />
<br />
Seacord, R.C., D. Plakosh, and G.A. Lewis. 2003. [[Modernizing Legacy Systems]]. Boston, MA, USA: Addison Wesley, Pearson Education Inc.<br />
<br />
===Additional References===<br />
<br />
AWWA. AWWA manuals of water supply practices. in American Water Works Association (AWWA) [database online]. Denver, CO, USA, 2010. Available from http://www.awwa.org/Resources/standards.cfm?ItemNumber=47829&navItemNumber=47834 (accessed August 5, 2010). <br />
<br />
Blanchard, B. S. 2010. Logistics engineering and management. 5th ed. Englewood Cliffs, NJ, USA: Prentice Hall: p341-342.<br />
<br />
———. 1992. Chapter 8. In Logistics engineering and management. 4th ed., 341-342. Englewood Cliffs, NJ, USA: Prentice Hall. <br />
<br />
Blanchard, B. S., and W. J. Fabrycky. 2005. Systems engineering and analysis. Prentice-Hall international series in industrial and systems engineering. 4th ed. Englewood Cliffs, NJ, USA: Prentice-Hall: p 541-565.<br />
<br />
Braunstein, A. 2007. Balancing hardware end-of-life costs and responsibilities. Westport, CT, USA: Experture Group, ETS 07-12-18. <br />
<br />
Brown, M., R. Weyers, and M. Sprinkel. 2006. Service life extension of Virginia bridge decks afforded by epoxy-coated reinforcement. Journal of ASTM International (JAI) 3 (2) (February 2005): 13. <br />
<br />
Caltrans, and USDOT. 2005. Systems engineering guidebook for ITS, version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1 : p278, 101-103, 107.<br />
<br />
Cascetta, E. 2001. Transportation systems engineering: Theory and methods. New York, NY: Kluwer Publishers Academic, Springer. <br />
<br />
DAU. Acquisition community connection (ACC): Where the DoD AT&L workforce meets to share knowledge. in Defense Acquisition University (DAU)/US Department of Defense (DoD) [database online]. Ft. Belvoir, VA, USA, 2010. Available from https://acc.dau.mil/ (accessed August 5, 2010). <br />
<br />
DLA. Defense logistics agency disposition services [homepage]. in Defense Logistics Agency (DLA)/U.S. Department of Defense [database online]. Battle Creek, MI, USA, 2010 [cited June 19 2010]: p5. Available from http://www.dtc.dla.mil.<br />
<br />
ECHA. European Chemicals Agency (ECHA) [home page]. in European Chemicals Agency (ECHA) [database online]. Helsinki, Finland, 2010. Available from http://echa.europa.eu/home_en.asp. <br />
<br />
Elliot, T., K. Chen, and R. C. Swanekamp. 1998. Standard handbook of powerplant engineering. New York, NY: McGraw Hill, section 6.5.<br />
<br />
EPA. Wastes. in U.S. Environmental Protection Agency (EPA) [database online]. Washington, DC, 2010. Available from http://www.epa.gov/epawaste/index.htm.<br />
<br />
European Parliament. 2007. Regulation (EC) no 1907/2006 of the european parliament and of the council of 18 december 2006 concerning the registration, evaluation, authorisation and restriction of chemicals (REACH), establishing a european chemicals agency, amending directive 1999/45/EC and repealing council regulation (EEC) no 793/93 and commission regulation (EC) no 1488/94 as well as council directive 76/769/EEC and commission directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC. Official Journal of the European Union 29 (5): 136/3,136/280. <br />
<br />
FAA. 2006. Section 4.1. In Systems engineering manual. Washington, D.C.: U.S. Federal Aviation Administration (FAA). <br />
<br />
FCC. 2009. Radio and television broadcast rules. Washington, D.C.: U.S. Federal Communications Commission (FCC), 47 CFR Part 73, FCC Rule 09-19: p 11299-11318.<br />
<br />
Finlayson, B., and B. Herdlick. 2008. Systems engineering of deployed systems. Baltimore, MD, USA: Johns Hopkins University: p28. <br />
<br />
FSA. Template for 'system retirement plan' and 'system disposal plan'. in Federal Student Aid (FSA)/U.S. Department of Education (DoEd) [database online]. Washington, DC, 2010. Available from http://federalstudentaid.ed.gov/business/lcm.html (accessed August 5, 2010). <br />
<br />
Gehring, G., D. Lindemuth, and W. T. Young. 2004. Break Reduction/Life extension program for CAST and ductile iron water mains. Paper presented at NO-DIG 2004, Conference of the North American Society for Trenchless Technology (NASTT), March 22-24, 2004, New Orleans, LA, USA. <br />
<br />
Hovinga, M. N., and G. J. Nakoneczny. May 2000. Standard recommendations for pressure part inspection during a boiler life extension program. Paper presented at ICOLM (International Conference on Life Management and Life Extension of Power Plant), Xi’an, P.R. China. <br />
<br />
IEC. 2007. Obsolescence management - application guide, ed 1.0. Geneva, Switzerland: International Electrotechnical Commission, IEC 62302. <br />
<br />
IEEE. 2010. IEEE Standard Framework for Reliability Prediction of Hardware. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1413.<br />
<br />
IEEE. 1998. IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1332. <br />
<br />
IEEE. 2008. IEEE Recommended practice on Software Reliability. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 1633. <br />
<br />
ISO/IEC/IEEE. 2008. Systems and Software Engineering – System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization (ISO), ISO/IEC 15288:2008.<br />
<br />
IEEE. 2005. IEEE Standard for Software Configuration Management Plans. New York, NY: Institute of Electrical and Electronics Engineers (IEEE), IEEE STD 828. <br />
<br />
Ishii, K., C. F. Eubanks, and P. Di Marco. 1994. Design for product retirement and material life-cycle. Materials & Design 15 (4): 225-33. <br />
<br />
INCOSE. 2010. In-service systems working group. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). <br />
<br />
———. 2010. Life cycle management working group. San Diego, CA, USA: International Council on Systems Engineering (INCOSE). <br />
<br />
INCOSE UK Chapter. 2010. Applying systems engineering to in-service systems: Supplementary guidance to the INCOSE systems engineering handbook, version 3.2, issue 1.0. Foresgate, UK: International Council on Systems Engineering (INCOSE) UK Chapter, p10, 13, 23. <br />
<br />
Institute of Engineers Singapore. 2009. Systems engineering body of knowledge, provisional version 2.0. Singapore. <br />
<br />
ISO/IEC. 2003. Industrial automation systems integration - Integration of life-cycle data for process plants including oil and gas production facilities. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC). <br />
<br />
Jackson, S. 2007. A multidisciplinary framework for resilience to disasters and disruptions. Journal of Design and Process Science 11 (2): p91-108, 10.<br />
<br />
———. 1997. Systems engineering for commercial aircraft. Surrey, UK: Ashgate Publishing Ltd. <br />
<br />
Koopman, P. Life cycle considerations. In Carnegie-Mellon University (CMU) [database online]. Pittsburgh, PA, USA, 1999. Available from http://www.ece.cmu.edu/~koopman/des_s99/life_cycle/index.html (accessed August 5, 2010). <br />
<br />
L3 Communications. 2010. Service life extension program (SLEP). Newport News, VA, USA: L3 Communications, Flight International Aviation LLC. <br />
<br />
Livingston, H. 2010. GEB1: Diminishing manufacturing sources and material shortages (DMSMS) management practices. McClellan, CA, USA: Defense MicroElectronics Activity (DMEA)/U.S. Department of Defense (DoD). <br />
<br />
Mays, L., ed. 2000. Water distribution systems handbook. New York, NY: McGraw-Hill Book Company: Chapter 3.<br />
<br />
MDIT. 2008. System maintenance guidebook (SMG), version 1.1: A companion to the systems engineering methodology (SEM) of the state unified information technology environment (SUITE). MI, USA: Michigan Department of Information Technology (MDIT), DOE G 200: p38. <br />
<br />
Minneapolis-St. Paul Chapter. 2003. Systems engineering in systems deployment and retirement, presented to INCOSE. Minneapolis-St. Paul, MN, USA: International Society of Logistics (SOLE), Minneapolis-St. Paul Chapter. <br />
<br />
NAS. 2006. National airspace system (NAS) system engineering manual, version 3.1 (volumes 1-3). Washington, DC: Air Traffic Organization (ATO)/U.S. Federal Aviation Administration (FAA), NAS SEM 3.1. <br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
Nguyen, L. 2006. Adapting the vee model to accomplish systems engineering on change projects. Paper presented at 9th Annual National Defense Industrial Association (NDIA) Systems Engineering Conference, San Diego, CA, USA. <br />
<br />
Office of Natural Gas and Oil Technology. 1999. Reservoir LIFE extension program: Encouraging production of remaining oil and gas. Washington, DC: U.S. Department of Energy (DoE).<br />
<br />
OSHA. 1996. Hazardous materials: Appendix A: List of highly hazardous chemicals, toxics and reactives. Washington, D.C.: Occupational Safety and Health Administration (OSHA)/U.S. Department of Labor (DoL), 1910.119(a). <br />
<br />
Paks Nuclear Power Plant. Paks nuclear power plant: Service life extension. In Paks Nuclear Power Plant Ltd. [database online]. Hungary, 2010. Available from http://paksnuclearpowerplant.com/service-life-extension (accessed August 5, 2010). <br />
<br />
Reason, J. 1997. Managing the risks of organizational accident. Aldershot, UK: Ashgate. <br />
<br />
Ryen, E. 2008. Overview of the systems engineering process. Bismarck, ND, USA: North Dakota Department of Transportation (NDDOT). <br />
<br />
SAE International. 2010. Standards: Automotive--maintenance and aftermarket. Warrendale, PA, USA: Society of Automotive Engineers (SAE) International. <br />
<br />
———. 2010. Standards: Commercial vehicle--maintenance and aftermarket. Warrendale, PA, USA: Society of Automotive Engineers (SAE) International. <br />
<br />
———. 2010. Standards: Maintenance and aftermarket. Warrendale, PA, USA: Society of Automotive Engineers (SAE) International. <br />
<br />
SEI. Software engineering institute. In Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU) [database online]. Pittsburgh, PA, USA, 2010. Available from http://www.sei.cmu.edu (accessed August 5, 2010). <br />
<br />
Schafer, D.L. 2003. Keeping Pace With Technology Advances When Funding Resources Are Diminished. Paper presented at AUTOTESTCON 2003, IEEE Systems Readiness Technology Conference, Anaheim, CA, USA: p. 584.<br />
<br />
SOLE. Applications divisions. In The International Society of Logistics (SOLE) [database online]. Hyattsville, MD, USA, 2009. Available from http://www.sole.org/appdiv.asp (accessed August 5, 2010). <br />
<br />
Sukamto, S. 2003. Plant aging and life extension program at Arun LNG Plant, Lhokseumawe, North Aceh, Indonesia. Paper presented at 22nd Annual World Gas Conference, 1-5 June 2003, Tokyo, Japan.<br />
<br />
FSA. Template for 'system retirement plan' and 'system disposal plan'. In Federal Student Aid (FSA)/U.S. Department of Education (DoEd) [database online]. Washington, DC, 2010. Available from http://federalstudentaid.ed.gov/business/lcm.html (accessed August 5, 2010). <br />
<br />
<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Quality Management|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[Service Life Extension|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Knowledge Area]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Quality_Management&diff=9680Quality Management2011-08-09T21:40:41Z<p>Skmackin: </p>
<hr />
<div>== Overview ==<br />
<br />
Whether a systems engineer delivers a product, a service, or an enterprise, the deliverable should meet the needs of the customer and be fit for use. Such a deliverable is said to be of high quality. <br />
<br />
<br />
Over the past 80 years, a ''quality movement'' has emerged to enable organizations to produce high-quality deliverables. This movement has gone through four stages, each discussed below. First, '''acceptance sampling''' was developed to apply statistical tests to decide whether or not to accept a lot of material based on a random sample of its content. Second, '''statistical process control''' was developed to determine whether production processes were stable. Instead of necessarily measuring products, processes were measured; processes that departed from a state of statistical control were far more likely to produce low-quality deliverables. Third, '''design for quality''' focused on designing processes that were robust against causes of variation, reducing the likelihood that a process would go out of control and accordingly reducing the monitoring requirements. Fourth, '''six sigma''' methods applied the tools and power of statistical thinking to improve other aspects of the organization.<br />
<br />
== Definitions ==<br />
<br />
[http://asq.org/glossary/index.html The American Society for Quality] provides the following definitions:<br />
<br />
[[Quality (glossary)]] A subjective term for which each person or sector has its own definition. In technical usage, quality can have two meanings: 1. the characteristics of a product or service that bear on its ability to satisfy stated or implied needs; 2. a product or service free of deficiencies. According to Joseph Juran, quality means “fitness for use;” according to Philip Crosby, it means “conformance to requirements.” <br />
<br />
[[Acceptance Sampling (glossary)]] Inspection of a sample from a lot to decide whether to accept the entire lot. There are two types: attributes sampling and variables sampling. In attributes sampling, the presence or absence of a characteristic is noted in each of the units inspected. In variables sampling, the numerical magnitude of a characteristic is measured and recorded for each inspected unit; this involves reference to a continuous scale of some kind.<br />
<br />
[[Statistical Process Control (glossary)|Statistical Process Control (SPC) (glossary):]] The application of statistical techniques to control a process; often used interchangeably with the term “statistical quality control.”<br />
<br />
[[Six Sigma (glossary)]] A method that provides organizations tools to improve the capability of their business processes. This increase in performance and decrease in process variation lead to defect reduction and improvement in profits, employee morale, and quality of products or services. Six Sigma quality is a term generally used to indicate that a process is well controlled (±6 standard deviations from the centerline in a control chart).<br />
<br />
== Quality Attributes ==<br />
<br />
Quality attributes, also known as quality factors, quality characteristics, or “ilities”, are the set of a system’s non-functional requirements that are used to evaluate the system’s performance. A large number of system quality attributes can be found in the literature (http://en.wikipedia.org/wiki/List_of_system_quality_attributes). Depending on the type of system, some of these attributes are more prominent than others. Ideally, one would optimize for all the quality attributes that are important to the system, but this is rarely possible. Therefore, it is important to conduct a trade-off analysis to identify the relationships between the attributes, and whether a change in one attribute affects any other attribute positively or negatively. An example of such a trade-off table is shown below.<br />
<br />
{|<br />
|+ '''Table Title''' - Attribute Trade-offs<br />
|-<br />
!<br />
!Flexibility<br />
!Maintainability<br />
!Reliability<br />
|-<br />
|Flexibility<br />
|<br />
| +<br />
| -<br />
|-<br />
|Maintainability<br />
| +<br />
|<br />
| +<br />
|-<br />
|Reliability<br />
| -<br />
| +<br />
|<br />
|}<br />
<br />
<br />
Finding the right set of quality attributes is the first step in quality control and management. In order to achieve high quality, quality has to be measured, monitored, managed, and improved upon. Therefore, in order to increase the overall system quality, you should be able to:<br />
<br />
* Identify the quality attributes<br />
* Prioritize these attributes<br />
* Identify the metrics that can be used for these attributes<br />
* Measure and monitor the attributes<br />
* Validate the measurements<br />
* Analyze the result of those measurements<br />
* Based on the analysis, establish processes and procedures that result in improved system quality<br />
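The prioritize-measure-analyze steps above can be sketched as a single tracked number. This is a minimal illustration only: the attribute names, priority weights, and normalized measurements below are hypothetical, not drawn from the SEBoK.<br />

```python
def quality_score(measurements, weights):
    """Aggregate normalized attribute measurements (0..1) into one
    weighted score so overall system quality can be tracked over time."""
    total = sum(weights.values())
    return sum(weights[attr] * measurements[attr] for attr in weights) / total

# Hypothetical priorities and measured values -- illustration only.
priorities = {"reliability": 5, "maintainability": 3, "flexibility": 2}
measured = {"reliability": 0.92, "maintainability": 0.70, "flexibility": 0.55}

score = quality_score(measured, priorities)  # 0.78
```

Re-running the same aggregation after a process change shows whether the change improved or degraded the weighted quality of the system.<br />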
<br />
=== Quality attributes for products ===<br />
<br />
=== Quality attributes for services ===<br />
<br />
Throughout the SEBoK, the majority of the discussion concentrates on products. However, the quality of services also plays a major role in customer satisfaction, which is the measurement of the overall system quality. Services can be divided into two major categories: primary and secondary. The city public transportation system, the United States Postal Service, and the medical services provided by a hospital are all examples of primary services. On the other hand, a service that helps a customer assemble a BBQ grill is an example of a secondary service, typically referred to as customer service. Identifying the appropriate quality attributes is the key step in quality management for services. Some examples of service quality attributes include: affordability, availability, dependability, efficiency, predictability, reliability, responsiveness, safety, security, usability, etc. Again, depending on the type of service, some of these attributes are more prominent than others.<br />
<br />
For example, in the case of services provided by a hospital, one may be more interested in availability, reliability, and responsiveness than in security (hospitals are typically assumed to be safe) or affordability (insurance typically covers the majority of the cost). Of course, if the patient does not have good insurance coverage, then the importance of affordability will increase.<br />
<br />
=== Quality attributes for enterprises ===<br />
<br />
An enterprise typically refers to a large, complex set of interconnected entities that includes people, technologies, processes, and financial and physical elements. A typical enterprise has a number of internal and external stakeholders, and as a result there are a large number of quality attributes that define its quality. Identifying the right set of attributes is typically more challenging in such a complex system. An example of an enterprise is the air traffic management system, which is mainly responsible for the safe and efficient operation of civil aviation within a country or collection of countries. A large number of stakeholders are concerned about the overall quality of the system; some examples of these stakeholders, and some of the primary quality attributes they are concerned with, are identified in the following table.<br />
<br />
{|<br />
|+ '''Table Title''' - Enterprise Stakeholders and their Quality Attributes<br />
|-<br />
!Stakeholders<br />
!Primary Quality Attributes<br />
|-<br />
|Passengers<br />
|Safety, affordability, reliability, etc.<br />
|-<br />
|Airlines<br />
|Adaptability, efficiency, profitability, etc.<br />
|-<br />
|Air traffic controller<br />
|Safety, reliability, usability, etc.<br />
|-<br />
|Hardware & software developers<br />
|Reliability, fault tolerance, maintainability, etc.<br />
|-<br />
|Government/regulatory agency<br />
|Safety, reliability, affordability, etc.<br />
|-<br />
|}<br />
<br />
=== Measuring quality attributes===<br />
As previously mentioned, you cannot achieve quality if you cannot measure it. Measurement System Analysis (MSA) (Wheeler and Lyday 1989) is a set of measuring instruments that provide an adequate capability for the team to conduct appropriate measurements in order to monitor and control quality. The MSA is a collection of:<br />
<br />
* Tools: measuring instruments, calibration, etc.<br />
* Processes: testing and measuring methods, sets of specifications, etc.<br />
* Procedures: policies, procedures, and methodologies defined by the company and/or regulatory agency.<br />
* People: personnel (managers, testers, analysts, etc.) who are involved in the measurement activities.<br />
* Environment: both the environmental and physical settings that best simulate the operational environment and/or best allow the most accurate measurements.<br />
<br />
Once the quality attributes are identified and prioritized, the MSA will help in monitoring and controlling the overall system quality.<br />
<br />
Additional detail about measurement is presented in the measurement section.<br />
<br />
== Quality management strategies ==<br />
<br />
<br />
=== Acceptance Sampling ===<br />
In [[Acceptance Sampling (glossary)|acceptance sampling]], a lot of products is presented for delivery. The consumer samples from the lot. Each member of the sample is then either categorized as acceptable or unacceptable based on some attribute (attribute sampling), or measured against one or more metrics (variable sampling). Based on the measurements, inference is made as to whether the lot meets the customer requirements.<br />
<br />
There are four possible outcomes of the sampling of a lot. <br />
<br />
{|<br />
|+ '''Truth Table''' - Outcomes of acceptance sampling<br />
|-<br />
!<br />
!'''Lot meets requirement'''<br />
!'''Lot fails requirement'''<br />
|-<br />
| '''Sample passes test'''<br />
| No error<br />
| Consumer risk<br />
|-<br />
| '''Sample fails test'''<br />
| Producer risk<br />
| No error<br />
|}<br />
<br />
An acceptance sampling plan balances the risk of error between the producer and consumer. Detailed ANSI/ISO/ASQ standards describe how this allocation is performed. [ANSI/ISO/ASQ A3534-2-1993: Statistics—Vocabulary and Symbols—Statistical Quality Control.]<br />
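The producer/consumer risk balance can be illustrated with a short sketch. The single-sampling plan below (sample size n, acceptance number c) and the defect rates are invented for illustration; real plans are taken from the published standards, not computed ad hoc like this.<br />

```python
from math import comb

def accept_probability(p, n, c):
    """Probability of accepting the lot: at most c defective units in a
    random sample of n, when each unit is defective with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: inspect n = 50 units, accept if at most c = 1 defective.
producer_risk = 1 - accept_probability(0.01, n=50, c=1)  # good lot (1%) rejected
consumer_risk = accept_probability(0.10, n=50, c=1)      # bad lot (10%) accepted
```

Tightening the plan (larger n or smaller c) lowers the consumer's risk at the cost of raising the producer's risk, which is exactly the trade-off the truth table above describes.<br />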
<br />
=== Statistical Process Control ===<br />
[[Statistical Process Control (glossary)|Statistical process control (SPC)]] is a method, invented by Walter A. Shewhart, that adopts statistical thinking to monitor and control the behavior and performance of a process. It means using statistical analysis techniques, in an appropriate way, to estimate the variation in the performance of a process, to investigate the causes of this variation, and to recognize from the data when the process is not performing as it should (Chrissis, Konrad, and Shrum 2006, p. 441). By performance here we mean how well the process is performed. <br />
<br />
The theory of quality management emphasizes managing processes by fact and maintaining systematic improvement. All product development is a series of interconnected processes whose results vary. Understanding variation with SPC techniques can help process executors understand the facts of their processes and find improvement opportunities from a systematic view. <br />
<br />
The key tools in SPC are control charts. The control chart, also called the Shewhart 3-sigma chart, consists of three lines: the center line, which is the mean of the statistical samples, and the upper and lower control limits, which are calculated from the mean and standard deviation of the statistical samples. The observed data points, or a statistic computed from them (such as the sample mean), are plotted in time or other sequence order. The upper and lower control limits indicate the threshold beyond which the process output is considered ‘unlikely.’ <br />
<br />
There are two sources of process variation. One is common cause variation, which is due to inherent interaction among process components. The other is assignable cause variation, which is due to events that are not part of the normal process. SPC stresses bringing a process into a state of statistical control, where only common cause variation exists, and keeping it in control. The control chart is used to distinguish between variation resulting from common causes and variation resulting from assignable causes. <br />
<br />
If the process is in control and standard assumptions are met, more than 99.73% of points will plot within the control limits. Any point outside the limits, or any systematic pattern in the points, implies that a new source of variation has been introduced. New variation means increased quality cost.<br />
<br />
The control limits are based on an understanding of the past behavior of the process, so they are also called the natural bounds of the process. The control chart presents a graphic display of process stability or instability over time, monitors the current state of the process, and predicts its future behavior.<br />
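The limits described above can be sketched for individual observations. This is a simplification: practical charts usually estimate limits from rational subgroups using tabulated constants rather than a raw standard deviation, and the data here are invented.<br />

```python
from statistics import mean, stdev

def control_limits(reference):
    """Shewhart 3-sigma limits estimated from in-control reference data."""
    center = mean(reference)
    spread = stdev(reference)
    return center - 3 * spread, center, center + 3 * spread

def out_of_control(points, lcl, ucl):
    """Indices of observations that signal a possible assignable cause."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

# Invented in-control data (centered on 10.0) and new points to monitor.
reference = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
lcl, center, ucl = control_limits(reference)
flagged = out_of_control([10.0, 9.5, 11.0], lcl, ucl)  # [2]
```

Only the third point falls outside the natural bounds; the first two are treated as common cause variation and left alone.<br />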
<br />
More advanced control charts exist. Cumulative sum (CUSUM) charts detect small, persistent step-change departures from the model. Moving average charts, with different possible weighting schemes, also detect persistent changes.<br />
<br />
=== Design for Quality ===<br />
<br />
Variation in the inputs to a process usually results in variation in the outputs. Processes can be designed, however, to be robust against variation in the inputs.<br />
<br />
Response surface experimental design and analysis is the statistical technique used to assist in determining the sensitivity of the process to variations in the input. Such an approach was pioneered by Taguchi.<br />
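As a toy illustration of this robustness idea: the process model, candidate settings, and noise distribution below are all invented, and a real study would use a formal response surface design rather than this Monte Carlo sketch.<br />

```python
import random

def process_output(x, noise):
    # Invented process model: the output's sensitivity to input noise
    # depends on the controllable setting x.
    return 5.0 + (x - 2.0) ** 2 * noise

def output_variance(x, trials=10_000, seed=0):
    """Estimate output variance at setting x under random input noise."""
    rng = random.Random(seed)
    values = [process_output(x, rng.gauss(0.0, 1.0)) for _ in range(trials)]
    avg = sum(values) / len(values)
    return sum((v - avg) ** 2 for v in values) / (len(values) - 1)

# Pick the candidate setting most robust against input variation.
best = min([0.0, 1.0, 2.0, 3.0], key=output_variance)  # 2.0
```

Here the setting x = 2.0 makes the process insensitive to the noise, so the output needs far less monitoring, which is the point of designing for quality.<br />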
<br />
=== Six Sigma ===<br />
<br />
Six sigma methodology is a set of tools to improve the quality of business processes; in particular, to improve performance and reduce variation. <br />
<br />
Six sigma methods were pioneered by Motorola and came into wide acceptance after they were championed by General Electric.<br />
<br />
Problems are addressed by six sigma projects. These projects follow a five-stage process: <br />
# Define the problem, the stakeholders, and the goals<br />
# Measure key aspects and collect relevant data<br />
# Analyze the data to determine cause-effect relationships<br />
# Improve the current process / Design a new process<br />
# Control the future state / Verify the design<br />
<br />
These steps are known as DMAIC (define, measure, analyze, improve, control) for existing processes and DMADV (define, measure, analyze, design, verify) for new processes.<br />
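The “sigma” in the name refers to process capability. The conventional conversion from an observed defect rate to a sigma level, including the customary 1.5-sigma long-term shift, can be sketched as follows; the defect rates shown are the textbook values, and the exact convention varies between practitioners.<br />

```python
from statistics import NormalDist

def sigma_level(dpmo, shift=1.5):
    """Convert defects per million opportunities (DPMO) to a short-term
    sigma level, applying the conventional 1.5-sigma long-term shift."""
    defect_free = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(defect_free) + shift

# The canonical six sigma defect rate of 3.4 DPMO maps to ~6.0 sigma.
six = sigma_level(3.4)
three = sigma_level(66_807)  # textbook three-sigma defect rate
```

The computation makes concrete why six sigma performance is so demanding: moving from three to six sigma means cutting defects from tens of thousands per million opportunities to a few per million.<br />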
<br />
An extensive literature exists on six sigma. <br />
<br />
A variant of six sigma is called lean six sigma, where the emphasis is on improving or maintaining quality while driving out waste.<br />
<br />
===Standards===<br />
Primary standards for quality management are maintained by ISO, principally the ISO 9000 series. [http://www.iso.org/iso/iso_catalogue/management_and_leadership_standards/quality_management.htm ISO 9000 home site.] The ISO standards provide requirements for the quality management systems of a wide range of enterprises, without specifying how the standards are to be met, and have worldwide acceptance. The key requirement is that the system must be audited.<br />
<br />
In the United States, the Malcolm Baldrige National Quality Award presents up to three awards in six categories: manufacturing, service company, small business, education, healthcare, and nonprofit. The [http://www.nist.gov/baldrige/publications/criteria.cfm Baldrige Criteria] have become de facto standards for assessing the quality performance of organizations.<br />
<br />
==References==<br />
<br />
Chrissis, M.B., M. Konrad, and S. Shrum. 2006. CMMI: Guidelines for Process Integration and Product Improvement, 2nd ed. Boston, MA: Addison-Wesley. ISBN 0321279670.<br />
<br />
Evans, J.R. and W.M. Lindsay. 2010. Managing for Quality and Performance Excellence. ISBN 0324783205.<br />
<br />
Juran, J.M. 1992. Juran on Quality by Design: The New Steps for Planning Quality into Goods and Services. New York, NY: The Free Press. ISBN 0029166837.<br />
<br />
Moen, R.D., T.W. Nolan, and L.P. Provost. 1991. Improving Quality Through Planned Experimentation. New York, NY: McGraw-Hill. ISBN 0070426732.<br />
<br />
Pyzdek, T. and P.A. Keller. 2009. The Six Sigma Handbook, 3rd ed. New York, NY: McGraw-Hill. ISBN 0071623388.<br />
<br />
Wheeler, D.J. and R.W. Lyday. 1989. Evaluating the Measurement Process. Knoxville, TN: SPC Press. ISBN 9780945320067.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the ‘’’reference title only’’’ ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Information Management|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Product and Service Life Management|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Information_Management&diff=9677Information Management2011-08-09T21:39:31Z<p>Skmackin: </p>
<hr />
<div>Information can exist in many forms in an organization - some of it related to a specific system development program, some held at an enterprise level and made available to programs as required. It may be held in electronic format or in physical form (for instance, paper drawings or documents, microfiche or other photographic records).<br />
<br />
The Information Management process ensures that necessary information is created, stored, retained, protected, managed and made easily available to those with a need and who are permitted access. It also ensures that information is disposed of when it is no longer relevant. <br />
<br />
== The Information Management Process ==<br />
To quote from ISO/IEC 15288:2008:<br />
<br />
''The purpose of the Information Management Process is to provide relevant, timely, complete, valid and, if required, confidential information to designated parties during, and, as appropriate, after the system life cycle.''<br />
<br />
''This process generates, collects, transforms, retrieves, disseminates and disposes of information, including technical, project, organizational, agreement and user information.''<br />
<br />
The first step in the Information Management process is to Plan Information Management. The output of this step is the Information Management Strategy or Information Management Plan.<br />
<br />
The second step is to Perform Information Management. The outputs of this step are the creation, population and maintenance of one or more Information Repositories, together with the creation and dissemination of Information Management Reports.<br />
<br />
=== Plan Information Management ===<br />
Issues that should be considered when creating the Information Management Strategy/Plan include:<br />
*What information has to be managed?<br />
*How long does the information have to be retained?<br />
*What level of configuration control has to be applied to the information?<br />
*Are there any regulatory requirements relating to the management of information for this project? This could include export control requirements.<br />
*Are there any customer requirements or agreements relating to the management of project information?<br />
*Are there any Industry Standards relating to the management of project information?<br />
*Are there any Organization/Enterprise directives, procedures or standards relating to the management of project information?<br />
*Are there any Project directives, procedures or standards relating to the management of project information?<br />
*Who is allowed access to the information? This could include people working on the project, other members of the Organization/Enterprise, Customers, Partners, Suppliers and Regulatory Authorities.<br />
*Are there requirements to protect the information from unauthorized access? This could include Intellectual Property rights that have to be respected - for instance if information from suppliers is to be stored and there is the possibility of a supplier gaining access to information belonging to a competitor who is also a supplier for the project. <br />
*Is a system data dictionary required to be able to "tag" information for ease of search and retrieval?<br />
*What media will be used for the information to be managed - physical or electronic or both?<br />
*What data repository or repositories are to be used?<br />
*Has Work in Progress (WIP) as well as formally released information been considered when establishing data repositories?<br />
*Has the volume of information to be stored been considered when selecting repositories?<br />
*Has speed of access and search been considered when selecting repositories? <br />
*If electronic information is to be stored, what file formats are/are not allowed?<br />
*If electronic information is to be stored for a long time, how will it be "future-proofed" - for instance, are neutral file formats available, or will copies of the software that created or used the information be retained?<br />
*Have disaster recovery requirements been considered - for instance if a server holding electronic information is destroyed, are there back-up copies of the information? Are the back-up copies regularly accessed to show that information recovery is flawless?<br />
*Is there a formal requirement to archive designated information for compliance with legal (including regulatory), audit and information retention requirements? If so, has an archive and archiving method been defined?<br />
*Some information may not be required to be stored (for instance, the results files for analyses when the information occupies a large volume and can be regenerated by the analysis tool and the input file). However, if the cost to re-generate the information is high, consider doing a cost/benefit analysis for storage versus regeneration. <br />
*Have requirements been defined to ensure that the information being stored is valid?<br />
*Have requirements been defined to ensure that information is disposed of correctly when it is no longer required to be stored, or when it is no longer valid? For instance, has a review period been defined for each piece of information? <br />
<br />
=== Perform Information Management ===<br />
Issues that should be considered when performing Information Management include:<br />
*Is the information valid (is it traceable to the Information Management Strategy/Plan and the list of information to be managed)? <br />
*Has the workflow for review and approval of information been defined to transfer information from "Work in Progress" to "Released"?<br />
*Are the correct Configuration Management requirements being applied to the information? Has the information been baselined?<br />
*Have the correct "tags" been applied to the information to allow for easy search and retrieval?<br />
*Have the correct access rules been applied to the information?<br />
*If required, has the information been translated into neutral file format prior to storage?<br />
*Has a review date been set for assessing the continued validity of the information?<br />
*Has the workflow for review and removal of unwanted, invalid or unverifiable information (as defined in Organization/Enterprise policy, Project policy, Security or Intellectual Property requirements) been defined?<br />
*Has the information been backed up, and has the backup recovery system been tested? <br />
*Has designated information been archived in compliance with legal (including regulatory), audit and information retention requirements?<br />
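The data-dictionary and tagging considerations above can be sketched as a toy repository. The tag vocabulary, class, and method names are hypothetical illustrations, not part of any SEBoK-mandated tool; the point is only that validating tags against a data dictionary at storage time keeps later searches reliable.<br />

```python
# Hypothetical tag vocabulary acting as a minimal data dictionary.
DATA_DICTIONARY = {
    "domain": {"requirements", "design", "test"},
    "status": {"wip", "released"},
}

class Repository:
    """Toy information repository: items are stored with metadata tags
    validated against the data dictionary, so searches are not defeated
    by inconsistent naming."""

    def __init__(self):
        self._items = []

    def store(self, name, tags):
        # Reject any tag value not defined in the data dictionary.
        for key, value in tags.items():
            if value not in DATA_DICTIONARY.get(key, set()):
                raise ValueError(f"tag not in data dictionary: {key}={value}")
        self._items.append((name, dict(tags)))

    def search(self, **criteria):
        return [name for name, tags in self._items
                if all(tags.get(k) == v for k, v in criteria.items())]

repo = Repository()
repo.store("SRS-001", {"domain": "requirements", "status": "released"})
repo.store("SDD-002", {"domain": "design", "status": "wip"})
released = repo.search(status="released")  # ["SRS-001"]
```

Attempting to store an item with an undefined tag value fails immediately, which is far cheaper than discovering an unsearchable repository later.<br />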
<br />
== Linkages to Other Systems Engineering Management Topics ==<br />
The Systems Engineering Information Management process is closely coupled with the [[System Definition]], [[Planning]] and [[Configuration Management]] processes. The requirements for information management are elicited from stakeholders as part of the [[System Definition]] process. What information is to be stored, and when in the Systems Engineering lifecycle, is defined in the [[Planning]] process, and configuration control requirements are defined in the [[Configuration Management]] process. <br />
<br />
== Practical Considerations ==<br />
Key pitfalls and good practices related to Information Management are described in the next two sections.<br />
<br />
=== Pitfalls ===<br />
Some of the key pitfalls encountered in planning and performing Information Management are:<br />
*Not defining a data dictionary for the project, resulting in inconsistencies in naming conventions for information and a proliferation of meta-data "tags", reducing the accuracy and completeness of searches for information and adding time to the performance of a comprehensive search.<br />
*Not "tagging" information with metadata, or doing this inconsistently, so that searches based on metadata tags are ineffective and can overlook key information.<br />
*Not checking that information can be retrieved effectively from a back-up repository, and when access to the back-up is needed, discovering that the back-up information is corrupted or not accessible.<br />
*Saving information in an electronic format that ceases to be accessible and not retaining a working copy of the obsolete software to be able to access the information.<br />
*Archiving information on an electronic medium that does not have the required durability to be readable through the required retention life of the information, and not regularly accessing and re-archiving the information.<br />
*Not checking the continued validity of information, resulting in outdated or incorrect information being retained and used. <br />
<br />
=== Good Practices ===<br />
Some good practices gathered from the references are:<br />
*The DAMA Guide to the Data Management Body of Knowledge provides an excellent overview of Information Management, at both the Project and Enterprise level, together with detailed information on data management.<br />
*Recognize that information is a strategic asset for the organization and needs to be managed and protected.<br />
*Plan for the Organization's information repository storage capacity to need to double every 12 to 18 months.<br />
*Information that sits in a repository adds no value. It only adds value when it is used. So the right people need to be able to access the right information easily and quickly.<br />
*Invest time and effort in designing data models that are consistent with the underlying structure and information needs of the organization<br />
*The cost impact of using poor quality information can be enormous. Be rigorous about managing the quality of information.<br />
*The impact of managing information poorly can also be enormous - for instance, by violating Intellectual Property or Export Control rules. Make sure that these requirements are captured and implemented in the Information Repository, and that all users of the repository are aware of the rules that they need to follow and the penalties for infringement. <br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008.<br />
<br />
Mosley, Mark (ed.). 2010. The DAMA Guide to the Data Management Body of Knowledge (DAMA-DMBOK Guide). Technics Publications. ISBN 9781935504023.<br />
<br />
Redman, T. 2008. Data Driven: Profiting from Your Most Important Business Asset. Harvard Business Press. ISBN 9781422119129.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Configuration Management|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Quality Management|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Configuration_Management&diff=9675Configuration Management2011-08-09T21:38:48Z<p>Skmackin: </p>
<hr />
<div>The purpose of configuration management (CM) is to establish and maintain the integrity of all identified outputs of a project or process and make them available to concerned parties. (ISO/IEC 2008) Since unmanaged changes to system artifacts (such as those associated with plans, requirements, design, software, hardware, testing, and documentation) can lead to problems that persist throughout the system lifecycle, a primary objective of CM is to manage and control the change to such artifacts.<br />
<br />
Configuration management is the discipline of identifying and formalizing the functional and physical characteristics of a configuration item at discrete points in the product evolution for the purpose of maintaining the integrity of the product system and controlling changes to the baseline. The baseline for a project contains all of the technical requirements and related cost and schedule requirements that are sufficiently mature to be accepted and placed under change control by the project manager. The project baseline consists of two parts: the technical baseline and the business baseline. The system engineer is responsible for managing the technical baseline and ensuring that it is consistent with the costs and schedules in the business baseline. Typically, the project control office manages the business baseline. <br />
<br />
The ANSI/GEIA EIA-649-A standard presents configuration management from the viewpoint that configuration management practices are employed because they make good business sense rather than because requirements are imposed by an external customer. (ANSI/GEIA October 2005) The standard discusses configuration management principles and practices from an enterprise view; it does not prescribe which CM activities individual organizations or teams within the enterprise should perform. Each enterprise assigns responsibilities in accordance with its own management policy. See also the GEIA-HB-649 Implementation Guide for Configuration Management, which supports and is related to this standard. (ANSI/GEIA October 2005)<br />
<br />
==Configuration Management Process Overview==<br />
Effective Configuration Management depends on the establishment, maintenance, and implementation of an effective CM process. The CM process should include, but not be limited to, the following activities: <br />
<br />
*Identification and involvement of relevant stakeholders<br />
*Setting of CM goals and expected outcomes<br />
*Identification and description of CM tasks<br />
*Assignment of responsibility and authority for performing the CM process tasks<br />
*Establishment of procedures for monitoring and control of the CM Process<br />
*Measurement and assessment of the CM process effectiveness.<br />
<br />
As a minimum, the CM process should incorporate and detail the following tasks (CMMI Product Team 2006):<br />
*Identifying the configuration of selected work products that compose the baselines at given points in time<br />
*Controlling changes to configuration items<br />
*Building or providing specifications to build work products from the configuration management system<br />
*Maintaining the integrity of baselines <br />
*Providing accurate status and current configuration data to developers, end users, and customers.<br />
<br />
The figure below shows the primary functions of systems configuration management.<br />
<br />
<br />
[[File:Cm_functions.png | CM Functions]]<br />
<br />
<div style="text-align: center;"><br />
'''Configuration Management Functions<br />
''' </div><br />
<br />
<br />
<br />
===CM Planning===<br />
The CM plan must be developed in consideration of the organizational context and culture; it must adhere to or incorporate applicable policies, procedures, and standards; and it must accommodate acquisition and subcontractor situations. A CM plan details and schedules the tasks to be performed as part of the CM process: configuration identification, change control, configuration status accounting, configuration auditing, and release management and delivery. <br />
<br />
===Configuration Identification===<br />
This activity is focused on identifying those configuration items which will be managed and controlled under a CM process. The identification activity involves establishing a procedure for labeling items and their versions. The labeling provides a context for each item within the system configuration and shows the relationship between system items.<br />
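A labeling procedure of the kind described can be sketched as follows (Python); the identifier convention here is a hypothetical example of a project choice, not a prescribed format:

```python
# Illustrative CI labeling scheme: identifiers encode the item's place in the
# system breakdown plus a version, so related items are visible at a glance.
def ci_label(system: str, subsystem: str, item: str, version: int) -> str:
    return f"{system}-{subsystem}-{item}-v{version:03d}"

label = ci_label("SAT", "EPS", "BATT", 4)
print(label)                       # SAT-EPS-BATT-v004

# Parsing the label back recovers the relationships between system items.
system, subsystem, item, version = label.rsplit("-", 3)
assert version == "v004" and subsystem == "EPS"
```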
<br />
===Establishing Baseline===<br />
As the configuration items are identified, they are assembled into a baseline, which specifies how a system will be viewed for the purposes of management, control, and evaluation. A baseline can only be changed through formal change procedures. The baseline is fixed at a specific point in time in the system life cycle and represents the current approved configuration.<br />
<br />
===Change Control===<br />
A disciplined change control process is critical for systems engineering. A generalized change control process is shown in the figure below, adapted from (Blanchard and Fabrycky 2005).<br />
<br />
[[File:cm_change_control_process.png | CM change control process]]<br />
<br />
<br />
<div style="text-align: center;"><br />
'''Configuration Change Control Process<br />
''' </div><br />
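A change control flow of this kind can be sketched as a simple state machine (Python); the state names and allowed transitions below are illustrative assumptions, not prescribed by the figure or any standard:

```python
# Hedged sketch of a change-control state machine; states/transitions are illustrative.
ALLOWED = {
    "submitted":   {"in_review"},
    "in_review":   {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"verified"},
    "rejected":    set(),   # terminal
    "verified":    set(),   # terminal: baseline is updated
}

class ChangeRequest:
    def __init__(self, cr_id: str):
        self.cr_id, self.state, self.history = cr_id, "submitted", ["submitted"]

    def advance(self, new_state: str):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.cr_id}: {self.state} -> {new_state} not allowed")
        self.state = new_state
        self.history.append(new_state)   # retained as a status-accounting trail

cr = ChangeRequest("ECR-042")
for s in ("in_review", "approved", "implemented", "verified"):
    cr.advance(s)
print(cr.history)   # ['submitted', 'in_review', 'approved', 'implemented', 'verified']
```

Rejecting out-of-sequence transitions is what makes the process "disciplined": a change cannot reach the baseline without passing review and verification.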
<br />
===Configuration Auditing===<br />
Audits are independent evaluations of the current status of configuration items and of the conformance of configuration activities to the CM process. Adherence to applicable CM plans, regulations, and standards is assessed.<br />
<br />
===Constraints and Guidance===<br />
Constraints affecting, and guidance for, the CM process come from a number of sources. Policies, procedures, and standards set forth at corporate or other organizational levels might influence or constrain the design and implementation of the CM process. Also, the contract with an acquirer or supplier may contain provisions affecting the CM process. Finally, the system life cycle process adopted and the tools, methods, and other processes used in system development can affect the CM process (Abran et al. 2004).<br />
There are a variety of sources for guidance on the development of a CM process. These include the ISO standards on system life cycle processes (ISO/IEC 2008) and configuration management guidelines (ISO 10007, 2003); the guide to the software engineering body of knowledge (SWEBOK) (Abran et al. 2004); and the CMMI for Development (CMMI Product Team 2006).<br />
<br />
===Organizational Issues===<br />
Successful CM planning, management, and implementation require an understanding of the organizational context for, and the constraints placed on, the design and implementation of the CM process.<br />
To plan a CM process for a project, it is necessary to understand the organizational context and the relationships among the organizational elements. CM interacts with other organizational elements, which may be structured in various ways. Although the responsibility for performing certain CM tasks might be assigned to other parts of the organization, the overall responsibility for CM often rests with a distinct organizational element or designated individual (Abran et al. 2004).<br />
<br />
===Measurement===<br />
In order to carry out certain CM functions, such as status accounting and auditing, and to monitor and assess the effectiveness of CM processes, it is necessary to measure and collect data related to CM activities and system artifacts. CM libraries and automated reporting tools provide convenient access and facilitate data collection. Examples of metrics include the size of documentation artifacts, the number of change requests, the mean time to change a configuration item, and rework costs. <br />
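Metrics of the kind listed can be computed directly from a change log exported from a CM library; a minimal sketch follows (Python), where the record fields and values are hypothetical:

```python
from statistics import mean

# Illustrative change-log records from a CM library; field names are assumptions.
change_log = [
    {"ci": "SRS-001", "days_to_close": 5,  "rework_cost": 1200},
    {"ci": "ICD-014", "days_to_close": 12, "rework_cost": 4500},
    {"ci": "SRS-001", "days_to_close": 3,  "rework_cost": 800},
]

num_change_requests = len(change_log)
mean_time_to_change = mean(r["days_to_close"] for r in change_log)
total_rework_cost   = sum(r["rework_cost"] for r in change_log)

print(num_change_requests, round(mean_time_to_change, 2), total_rework_cost)
# 3 6.67 6500
```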
<br />
===CM Tools===<br />
Configuration management employs a variety of tools to automate the process including:<br />
*Library Management<br />
*Tracking and Change Management<br />
*Version Management<br />
*Release Management<br />
<br />
The INCOSE Tools Database Working Group (INCOSE TDWG, 2010) maintains an extensive list of tools including configuration management.<br />
<br />
==Linkages to Other Systems Engineering Management Topics==<br />
Configuration management is involved in the management and control of artifacts produced and modified throughout the system lifecycle in all areas of system development, operations, and maintenance. This includes being applied to the artifacts of all the other management processes (plans, analyses, reports, statuses, etc.).<br />
<br />
==Practical Considerations==<br />
Key pitfalls and good practices related to systems engineering CM are described in the next two sections. <br />
<br />
===Pitfalls===<br />
<br />
Some of the key pitfalls encountered in planning and performing Configuration Management are in the following table. <br />
<br />
{| <br />
|+ CM Pitfalls<br />
|-<br />
! Name<br />
! Description<br />
|-<br />
| Shallow Visibility<br />
| <br />
*Not involving all affected disciplines in the change control process. <br />
|-<br />
|Poor Tailoring<br />
|<br />
*Inadequate CM tailoring to adapt to the project scale, number of subsystems, etc.<br />
|-<br />
|Limited CM Perspective <br />
|<br />
*Not considering and integrating the CM processes of all contributing organizations including COTS vendors and subcontractors. <br />
|}<br />
<br />
<br />
===Good Practices===<br />
<br />
Some good practices gathered from the references are below.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Name<br />
! Description<br />
|-<br />
| Cross-functional CM<br />
| <br />
*Implement cross-functional communication and CM processes for software, hardware, firmware, data or other types of items as appropriate.<br />
|-<br />
|Full Lifecycle Perspective<br />
|<br />
*Plan for integrated CM through the life cycle. Do not assume that it will just happen as part of the program. <br />
|-<br />
|CM Planning<br />
|<br />
*Processes are documented in a single, comprehensive CM Plan early in the project. The Plan should be a (Systems) CM Plan. <br />
*Include tools selected and used.<br />
<br />
|-<br />
|Requirements Traceability<br />
|<br />
*Initiate requirements traceability at the start of the CM activity.<br />
<br />
|-<br />
|CCB Hierarchy<br />
|<br />
*Use a hierarchy of Configuration Control Boards commensurate with the program elements.<br />
<br />
|-<br />
|Consistent Identification<br />
|<br />
*Software CI and Hardware CI use consistent identification schemes.<br />
<br />
|-<br />
|CM Automation <br />
<br />
|<br />
*Configuration status accounting should be as automated as possible. <br />
<br />
|}<br />
<br />
Additional good practices can be found in (ISO/IEC/IEEE 2009, Clause 6.4) and (INCOSE 2010, Section 5.4.1.5).<br />
<br />
==Additional References and Readings==<br />
<br />
<br />
<br />
==Glossary==<br />
===Acronyms===<br />
<br />
{| <br />
|-<br />
! Acronym<br />
! Definition<br />
|-<br />
| CM<br />
| Configuration Management<br />
|}<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
Abran, A., J. W. Moore, P. Bourque, R. Dupuis, and L. L. Tripp. 2004. SWEBOK: Guide to the software engineering body of knowledge: 2004 version. Los Alamitos, CA; Tokyo, Japan: IEEE Computer Society Press. <br />
<br />
ANSI/GEIA. October 2005. GEIA-HB-649, implementation guide for configuration management. Arlington, VA, USA: American National Standards Institute/Government Electronics & Information Technology Association, GEIA-HB-649. <br />
<br />
GEIA. 2004. GEIA consensus standard for data management. Arlington, VA, USA: Government Electronics & Information Technology Association, GEIA-859. <br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E). <br />
<br />
SEI. 2010. [[Capability Maturity Model Integrated (CMMI) for Development]], version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).<br />
<br />
===Additional References===<br />
<br />
Blanchard, B. S., and W. J. Fabrycky. 2005. [[Systems Engineering and Analysis]]. Prentice-hall international series in industrial and systems engineering. 4th ed. Englewood Cliffs, NJ, USA: Prentice-Hall. <br />
<br />
INCOSE Tools Database Working Group (TDWG). 2010. In International Council on Systems Engineering (INCOSE) [database online]. San Diego, CA, USA. Available from http://www.incose.org/practice/techactivities/wg/tools/. <br />
<br />
---. 2008. INCOSE Measurement Tools Survey. In International Council on Systems Engineering (INCOSE) [database online]. San Diego, CA, USA. Available from http://www.incose.org/productspubs/products/SEtools/meassurv.html (accessed 15 August 2010). <br />
<br />
ISO/IEC/IEEE. 2009. Systems and software engineering - life cycle processes - project management. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE 16326:2009(E). <br />
----<br />
===Article Discussion===<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Decision Management|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Information Management|Next Article ->]]</center><br />
==Signature==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Decision_Management&diff=9672Decision Management2011-08-09T21:36:00Z<p>Skmackin: </p>
<hr />
<div>Making decisions is one of the most important processes practiced by systems engineers, project managers, and all team members. Sound decisions are based on good judgment and experience. There are concepts, methods, processes, and tools that can assist in the process of decision making, especially in making comparisons of decision alternatives. These tools can also assist in building team consensus in selecting and supporting the decision made and in defending it to others.<br />
<br />
==Decision Judgment Methods==<br />
<br />
Common alternative judgment methods range from indifference (“I don’t care what we decide to do, just do something…”) to probability-based judgment. Methods that the practitioner should be aware of include:<br />
*Emotion-based judgment<br />
*Intuition-based judgment<br />
*Expert-based judgment<br />
*Fact-based judgment<br />
*Probability-based judgment<br />
These are elaborated upon in the following paragraphs.<br />
<br />
===Emotion Based Judgment===<br />
While most people would claim that their decisions are based on sound rationale, once a decision is made public (even within a small team) the decision-makers will vigorously defend their choice, often even in the face of contrary evidence. It is easy to become emotionally tied to a decision and to refuse to consider alternatives that are later proven superior. Another phenomenon is that people often need “permission” to support an action or idea, as explained by Cialdini (Cialdini 2006), and this inherent human trait also suggests why teams often resist new ideas. <br />
<br />
===Intuition Based Judgment===<br />
Intuition plays a key role in leading development teams to creative solutions. Malcolm Gladwell (Gladwell 2005) makes the strong argument that we intuitively see the powerful benefits or fatal flaws inherent in a newly proposed solution. Kelly Johnson, the founder of the highly successful and creative Skunk Works at the Lockheed Aircraft Corporation (Rich and Janos 1996), was an amazing practitioner of intuitive decisions, but he always had his decisions backed up with detailed studies of alternatives, and these then became fact-based decisions.<br />
Intuition can be an excellent guide when based on relevant past experience but it may blind you to as-yet undiscovered concepts. And even when it is appropriate, it is a starting point, not an end point. Ideas generated based on intuition should be considered seriously, but should be treated as an output of a brainstorming session, and evaluated using one of the three approaches in the next three subsections.<br />
<br />
===Expert Based Judgment===<br />
For certain problems, especially ones involving technical expertise outside your field, calling in experts is a cost effective approach. When facing decisions such as surgery consultation, automobile repair, or electronic component troubleshooting, it makes sense to benefit from expert knowledge. The decision-making challenge is to establish perceptive criteria for selecting the right experts.<br />
<br />
===Fact Based Judgment===<br />
This is the most common situation, as illustrated in the decision selection flowchart below.<br />
<br />
[[File:decision_selection_flowchart.png|600px|Decision Selection flowchart]]<br />
<br />
===Probability Based Judgment===<br />
Probability-based decisions are made when there is uncertainty.<br />
Decision management techniques and tools for decisions based on uncertainty include probability theory, utility functions, decision tree analysis, models, and simulations. A classic mathematically oriented reference in the area of decision analysis is (Raiffa 1997) for understanding decision trees and probability analysis. Another classic introduction is (Schlaiffer 1969) with more of an applied focus. The aspect of modeling and simulation is covered in the popular textbook (Law 2007), which also has good coverage of Monte Carlo analysis. Some of these more commonly used and fundamental methods are overviewed below.<br />
<br />
Decision trees and influence diagrams are visual analytical decision support tools where the expected values (or expected utility) of competing alternatives are calculated. A decision tree uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. Influence diagrams are used for decision models as alternate, more compact graphical representations of decision trees.<br />
<br />
The figure below demonstrates a simplified make vs. buy decision analysis tree and the associated calculations. Suppose making a product costs $200K more than buying an alternative off the shelf, reflected as a difference in the net payoffs in the figure. The custom development is also expected to be a better product with a corresponding larger probability of high sales at 80% vs. 50% for the bought alternative. With these assumptions, the monetary expected value of the make alternative is .8*2.0M + .2*0.5M = 1.7M and the buy alternative is .5*2.2M + .5*0.7M = 1.45M. <br />
<br />
[[File:decision_tree_example.png|400px|Decision Tree Example]]<br />
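The make-vs-buy arithmetic above can be reproduced in a few lines; the following sketch (Python) computes the expected monetary value of each alternative from the stated probabilities and payoffs (in $M):

```python
# Expected monetary value for the make-vs-buy decision tree in the text.
def expected_value(branches):
    """branches: list of (probability, payoff) pairs for one alternative."""
    return sum(p * v for p, v in branches)

make = expected_value([(0.8, 2.0), (0.2, 0.5)])   # high/low sales outcomes, make
buy  = expected_value([(0.5, 2.2), (0.5, 0.7)])   # high/low sales outcomes, buy

best = "make" if make > buy else "buy"
print(round(make, 2), round(buy, 2), best)   # 1.7 1.45 make
```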
<br />
Influence diagrams focus attention on the issues and relationships between events. They are generalizations of Bayesian networks whereby maximum expected utility criteria can be modeled. A good reference is (Detwarasiti and Shachter 2005, 207-228) for using influence diagrams in team decision analysis.<br />
<br />
Expected utility is more general than expected value. Utility is a measure of relative satisfaction that takes into account the decision maker's preference function, which may be nonlinear. Expected utility theory deals with the analysis of choices with multidimensional outcomes. The analyst should determine the decision-maker's utility for money and select the alternative course of action that yields the highest expected utility, rather than the highest expected monetary value. A classic reference on applying multiple-objective methods, utility functions, and allied techniques is (Keeney and Raiffa 1976). References with applied examples of decision tree analysis and utility functions include (Samson 1988) and (Skinner 1999).<br />
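To illustrate how a nonlinear preference function can change a decision, the sketch below re-runs the make-vs-buy numbers with a hypothetical exponential (risk-averse) utility; the utility form and the risk tolerance R are illustrative assumptions, not taken from the cited references:

```python
from math import exp

# Hedged illustration: exponential utility u(x) = 1 - exp(-x/R), with risk
# tolerance R in $M. A small R models a strongly risk-averse decision maker.
def expected_utility(branches, R):
    return sum(p * (1 - exp(-payoff / R)) for p, payoff in branches)

make = [(0.8, 2.0), (0.2, 0.5)]
buy  = [(0.5, 2.2), (0.5, 0.7)]

# With R = 0.1, "buy" (whose worst case pays more) overtakes "make",
# reversing the expected-monetary-value ranking:
eu_make = expected_utility(make, R=0.1)
eu_buy  = expected_utility(buy,  R=0.1)
print(eu_buy > eu_make)   # True
```

The reversal happens because the risk-averse utility weights the worst outcome heavily, and the buy alternative's low payoff ($0.7M) exceeds the make alternative's ($0.5M).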
<br />
(Blanchard 2004) shows a variety of these decision analysis methods in many technical decision scenarios. A comprehensive reference demonstrating decision analysis methods for software-intensive systems is (Boehm 1981). It is a major treatment of multiple goal decision analysis, dealing with uncertainties, risks, and the value of information.<br />
Facets of a decision situation which cannot be explained by a quantitative model should be reserved for the intuition and judgment of the decision maker. Sometimes outside parties are also called upon. One method to canvass experts is the Delphi Technique, a procedure for organizing and sharing expert forecasts about future outcomes or parameter values. The Delphi Technique is a method of group decision-making and forecasting that involves successively collating the judgments of experts. A variant called the Wideband Delphi technique, described in (Boehm 1981), improves upon the standard Delphi with more rigorous iterations of statistical analysis and feedback forms.<br />
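The statistical feedback step of a Delphi round can be sketched as follows (Python); the expert estimates are hypothetical, and the median/interquartile-range summary is one plausible choice of feedback statistic, not a prescribed one:

```python
from statistics import quantiles

# Hypothetical expert estimates (e.g., effort in person-months) over two rounds;
# estimates typically converge after the summary is fed back to the experts.
round1 = [12, 20, 8, 30, 15]
round2 = [14, 18, 12, 22, 15]

def feedback(estimates):
    """Summarize a round: central value and spread to share with the experts."""
    q1, q2, q3 = quantiles(estimates, n=4)
    return {"median": q2, "iqr": q3 - q1}

print(feedback(round1))
print(feedback(round2))   # spread shrinks while the median holds steady
```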
<br />
General tools, such as spreadsheets and simulation packages, can be used with these methods. There are also tools targeted specifically at aspects of decision analysis such as decision trees, evaluation of probabilities, Bayesian influence networks, and others. The INCOSE website for the tools database (INCOSE 2010) has an extensive list of analysis tools.<br />
<br />
==Linkages to Other Systems Engineering Management Topics==<br />
<br />
The Decision Management process is closely coupled with the [[Measurement]], [[Planning]], [[Assessment and Control]], and [[Risk Management]] processes.<br />
The [[Measurement]] process describes how to derive quantitative indicators as input to decisions. Refer to the [[Planning]] process area for more information about incorporating decision results into project plans. <br />
<br />
<br />
==Practical Considerations==<br />
Key pitfalls and good practices related to decision analysis are described below. <br />
<br />
===Pitfalls===<br />
Some of the key pitfalls are: <br />
#False confidence in the accuracy of values used in decisions.<br />
#Not engaging experts and holding peer reviews. The decision-maker should engage experts to validate decision values.<br />
#Prime sources of errors in risky decision-making include false assumptions, not having an accurate estimation of the probabilities, relying on expectations, difficulties in measuring the utility function, and forecast errors.<br />
#The analytic hierarchy process may not handle real-life situations well, given the theoretical difficulties in using eigenvectors.<br />
<br />
===Good Practices===<br />
Some good practices are below.<br />
<br />
{| <br />
|-<br />
! Name<br />
! Description<br />
|-<br />
|Progressive Decision Modeling <br />
| <br />
*Use progressive model building. Detail and sophistication can be added as confidence in the model is built up.<br />
|-<br />
|Necessary Measurements <br />
|<br />
*Measurements need to be tied to the information needs of the decision makers. <br />
|-<br />
|Define Selection Criteria<br />
|<br />
*Define selection criteria and process (and success criteria) before identifying trade alternatives.<br />
<br />
|}<br />
<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
Cialdini, Robert B. 2006. [[Influence: The Psychology of Persuasion]]. Collins Business Essentials.<br />
<br />
Forsberg, K., H. Mooz, H. Cotterman. 2005. [[Visualizing Project Management]], 3rd Ed. John Wiley and Sons. pg 154-155.<br />
<br />
Gladwell, Malcolm. 2005. [[Blink: the Power of Thinking without Thinking]]. Little, Brown & Co.<br />
<br />
Kepner, C. H., B. B. Tregoe. 1997. [[The New Rational Manager]]. Princeton University Press.<br />
<br />
Raiffa, H. 1997. [[Decision Analysis: Introductory Lectures on Choices under Uncertainty]]. New York, NY: McGraw-Hill.<br />
<br />
Rich, Ben, and Leo Janos. 1996. [[Skunk Works]]. Little, Brown & Company.<br />
<br />
Saaty, Thomas L. 2008. Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. Pittsburgh, PA, USA: RWS Publications. ISBN 0-9620317-8-X.<br />
<br />
Schlaiffer, R. 1969. [[Analysis of Decisions under Uncertainty]]. New York, NY: McGraw-Hill Book Company.<br />
<br />
Wikipedia. 2011. "Decision making software."<br />
<br />
===Additional References===<br />
<br />
Blanchard, B. S. 2004. Systems engineering management. 3rd ed. New York, NY: John Wiley & Sons.<br />
<br />
Boehm, B. 1981. Software Engineering Economics. Englewood Cliffs, NJ, USA: Prentice-Hall.<br />
<br />
Detwarasiti, A., and R. D. Shachter. 2005. Influence diagrams for team decision analysis. Decision Analysis 2 (4): 207-28.<br />
<br />
INCOSE. 2011. INCOSE systems engineering handbook, version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
Keeney, R. L., and H. Raiffa. 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York, NY: John Wiley & Sons.<br />
<br />
Law, A. 2007. [[Simulation Modeling and Analysis]]. 4th ed. New York, NY: McGraw Hill.<br />
<br />
Parnell, G. S., P. J. Driscoll, and D. L. Henderson. 2010. Decision Making in Systems Engineering and Management. New York, NY: John Wiley & Sons.<br />
<br />
Samson, D. 1988. Managerial decision analysis. New York, NY: Richard D. Irwin, Inc.<br />
<br />
Skinner, D. 1999. Introduction to decision analysis. 2nd ed. Sugar Land, TX, USA: Probabilistic Publishing.<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Measurement|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Configuration Management|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Measurement&diff=9670Measurement2011-08-09T21:35:08Z<p>Skmackin: </p>
<hr />
<div><br />
<br />
==Introduction==<br />
SE measurement and the accompanying analysis are fundamental elements of SE and technical management. SE measurement provides information relating to the products developed, services provided, and processes implemented to support effective management of the processes and to objectively evaluate product or service quality. This measurement supports realistic planning, provides insight into actual performance, and facilitates assessment of suitable actions. (Roedler and Jones 2005, 1-65; Frenz et al. 2010)<br />
<br />
Appropriate measures and indicators are essential inputs to tradeoff analyses to balance cost, schedule and technical objectives. Periodic analysis of the relationships between measurement results and the requirements and attributes of the system provides insight that helps identify issues early, when they can be resolved with less impact. Historical data, together with project or organizational context information, forms the basis for predictive models and methods that should be used. <br />
<br />
==Fundamental Concepts==<br />
The discussion of measurement here is based on some fundamental concepts. Roedler and Jones state three key SE measurement concepts that are paraphrased here (Roedler and Jones 2005):<br />
<br />
#'''SE measurement is a consistent but flexible process''' that is tailored to the unique information needs and characteristics of a particular project or organization and revised as information needs change. <br />
#'''Decision makers must understand what is being measured.''' Key decision makers must be able to connect “what is being measured” to “what they need to know”. <br />
#'''Measurement must be used to be effective'''.<br />
<br />
==Measurement Process Overview==<br />
The measurement process as presented here consists of four activities that follow the structure developed by the Practical Software and Systems Measurement (PSM) project and described in (ISO/IEC/IEEE 2007; McGarry et al. 2002; Murdoch 2006). <br />
<br />
It has been the basis for establishing a common process across the software and systems engineering communities. This measurement approach has been adopted by the Capability Maturity Model Integration (CMMI) measurement and analysis process area (SEI 2006) and by international systems and software engineering standards, such as (ISO/IEC 2008; ISO/IEC/IEEE 2007; ISO/IEEE 2008). The International Council on Systems Engineering (INCOSE) Measurement Working Group has also adopted this measurement approach for several of its measurement assets, such as the INCOSE SE Measurement Primer (Frenz et al. 2010) and the Technical Measurement Guide (Roedler and Jones 2005). This approach has provided a consistent treatment of measurement that allows the engineering community to communicate more effectively about measurement. The process is illustrated in Figure 1, from (Roedler and Jones 2005) and (McGarry et al. 2002). <br />
<br />
<br />
<br />
Figure 1. Four Key Measurement Process Activities (Source: PSM May 7, 2010)<br />
<br />
===Establish and Sustain Commitment===<br />
This activity focuses on establishing the resources, training, and tools to implement a measurement program and ensure that there is management commitment to use the information that is produced. Refer to (PSM May 7, 2010) and (SPC 2010) for additional detail. <br />
<br />
===Plan Systems Engineering Measurement===<br />
This activity focuses on defining measures that provide insight into project or organization information needs. This includes identifying what the decision makers need to know, relating these information needs to those entities that can be measured, and then identifying, prioritizing, selecting, and specifying measures based on project and organization processes. (Jones 2003, 15-19)<br />
<br />
There are a few widely used approaches to identify the information needs and derive associated measures. Each focuses on identifying measures that are needed for SE Management. These include:<br />
<br />
*The PSM approach, which uses a set of Information Categories, Measurable Concepts, and Candidate Measures to aid the user in determining relevant information needs and aspects about the information needs on which to focus. (PSM May 7, 2010)<br />
<br />
*The Goal-Question-Metric (GQM) approach, which identifies explicit measurement goals. Each goal is decomposed into several questions that help in the selection of measures that address the question and provide insight into the goal achievement. (Park, Goethert, and Florac 1996)<br />
<br />
*Software Productivity Center’s 8-step Metrics Program, which also includes stating the goals and defining measures needed to gain insight for achieving the goals. (SPC 2010)<br />
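As an illustration of the GQM decomposition, the sketch below shows how a goal traces through questions to candidate measures; the goal, questions, and measures are invented for illustration, not taken from any cited guide.

```python
# Minimal sketch of Goal-Question-Metric (GQM) traceability.
# The goal, questions, and candidate measures are illustrative only.

gqm = {
    "goal": "Improve requirements stability before the design review",
    "questions": {
        "How often are requirements changing?": [
            "requirements added per month",
            "requirements modified per month",
        ],
        "Are changes concentrated in one subsystem?": [
            "changes per subsystem",
        ],
    },
}

def measures_for_goal(model):
    """Collect every candidate measure derived from the goal's questions."""
    return sorted({m for ms in model["questions"].values() for m in ms})

print(measures_for_goal(gqm))
```

Walking the structure in either direction preserves the traceability from each selected measure back to the goal it supports.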
<br />
The following are good sources for candidate measures that address the information needs and measurable concepts/questions:<br />
*PSM Guide, Version 4.0, Chapters 3 and 5 (PSM May 7, 2010)<br />
*SE Leading Indicators Guide, Version 2.0, Section 3 (Roedler et al. 2010)<br />
*Technical Measurement Guide, Version 1.0, Section 10 (Roedler and Jones 2005, 1-65)<br />
*Safety Measurement (PSM White Paper), Version 3.0, Section 3.4 (Murdoch 2006, 60)<br />
*Security Measurement (PSM White Paper), Version 3.0, Section 7 (Murdoch 2006, 67)<br />
*Measuring Systems Interoperability, Section 5 and Appendix C (Kasunic and Anderson 2004)<br />
*Measurement for Process Improvement (PSM Technical Report), Version 1.0, Appendix E (Statz 2005)<br />
<br />
The INCOSE SE Measurement Primer (Frenz et al. 2010) provides a list of attributes of a good measure with definitions for each attribute. The attributes include ''relevance, completeness, timeliness, simplicity, cost effectiveness, repeatability, and accuracy''. Evaluating candidate measures against these attributes can help assure the selection of more effective measures. <br />
<br />
The details of the measure need to be unambiguously defined and documented. Templates for the specification of measures and indicators are available on the PSM website and in (Goethert and Siviy 2004).<br />
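The kind of information such a specification template captures can be sketched as a simple record; the fields and example values below are an illustrative subset assumed for this sketch, not the full template from the PSM website or (Goethert and Siviy 2004).

```python
from dataclasses import dataclass

# Illustrative subset of the fields a measure specification might record;
# the published templates define a fuller set.

@dataclass
class MeasureSpecification:
    name: str
    information_need: str
    base_measures: list
    unit: str
    collection_frequency: str
    analysis_method: str = "trend against plan"
    decision_criteria: str = ""

spec = MeasureSpecification(
    name="Requirements volatility",
    information_need="Is the requirements baseline stable enough to proceed?",
    base_measures=["requirements added", "requirements changed", "total requirements"],
    unit="percent of baseline per month",
    collection_frequency="monthly",
    decision_criteria="investigate if volatility exceeds 5% per month",
)
print(spec.name, "->", spec.unit)
```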
<br />
===Perform Systems Engineering Measurement===<br />
This activity focuses on collection and preparation of measurement data, measurement analysis, and the presentation of the results to inform decision making. The preparation of the measurement data includes verification, normalization, and aggregation of the data, as applicable. Analysis includes estimation, feasibility analysis of plans, and performance analysis of actual data against plans. <br />
<br />
The quality of the measurement results is dependent on the collection and preparation of valid, accurate, unbiased data. Data verification, validation, preparation, and analysis techniques are discussed in (PSM May 7, 2010), Chapters 1 and 4 and (SEI 2006, 10). Per TL 9000, Quality Management System Guidance, “The analysis step should integrate quantitative measurement results and other qualitative project information, in order to provide managers the feedback needed for effective decision making.” (Quest 2010, 5-10) This provides richer information that gives the users the broader picture and puts the information in the appropriate context. <br />
<br />
There is a significant body of guidance available on good ways to present quantitative information. Edward Tufte has several books focused on the visualization of information, including (Tufte 2001). <br />
<br />
More information about understanding and using measurement results can be found in:<br />
*(PSM May 7, 2010)<br />
*(ISO/IEC/IEEE 2007), clauses 4.3.3 and 4.3.4<br />
*(Roedler and Jones 2005), sections 6.4, 7.2, and 7.3<br />
<br />
===Evaluate Systems Engineering Measurement===<br />
This activity focuses on the periodic evaluation and improvement of the measurement process and of specific measures. One objective is to ensure that the measures continue to align with the business goals and information needs, and provide useful insight. Refer to (PSM May 7, 2010) and (McGarry et al. 2002) for additional detail.<br />
<br />
==Systems Engineering Leading Indicators==<br />
Leading indicators are aimed at providing predictive insight regarding an information need. A systems engineering leading indicator “is a measure for evaluating the effectiveness of how a specific activity is applied on a project in a manner that provides information about impacts that are likely to affect the system performance objectives.” Leading indicators may be individual measures or collections of measures and associated analysis that provide future systems engineering performance insight throughout the life cycle of the system. “Leading indicators support the effective management of systems engineering by providing visibility into expected project performance and potential future states.” <br />
<br />
As shown in Figure 2, a leading indicator is composed of characteristics, a condition and a predicted behavior. The characteristics and condition are analyzed on a periodic or as-needed basis to predict behavior within a given confidence and within an accepted time range into the future. More information is found in (Roedler et al. 2010).<br />
<br />
<br />
<br />
<br />
<br />
Figure 2. Composition of a Leading Indicator (Source: Roedler et al. 2010)<br />
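One simple way the condition of a leading indicator can be evaluated is to fit a trend to recent measurements and project it forward against a threshold; the measurement series, threshold, and horizon below are invented for illustration.

```python
# Hedged sketch: evaluating a leading indicator's condition by fitting a
# least-squares trend to recent measurements and projecting it forward.
# The data, threshold, and horizon are invented for illustration.

def project(series, periods_ahead):
    """Least-squares linear projection of a measurement series."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

volatility = [2.0, 2.5, 3.1, 3.8, 4.4]   # percent change per month
projected = project(volatility, periods_ahead=3)
print(f"projected volatility in 3 months: {projected:.1f}%")
if projected > 5.0:  # threshold assumed to come from the measurement plan
    print("leading indicator triggered")
```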
<br />
==Technical Measurement==<br />
Technical measurement is the set of measurement activities used to provide information about progress in the definition and development of the technical solution, ongoing assessment of the associated risks and issues, and the likelihood of meeting the critical objectives of the acquirer. This insight helps make better decisions throughout the life cycle to increase the probability of delivering a technical solution that meets both the specified requirements and the mission needs. The insight is also used in trade-off decisions when performance is not within the thresholds or goals.<br />
<br />
Technical measurement includes measures of effectiveness (MOEs), measures of performance (MOPs), and TPMs. (Roedler and Jones 2005, 1-65) The relationships between these types of technical measures are shown in Figure 3 and explained in the reference. Using the measurement process described above, technical measurement can be planned early in the life cycle and then performed throughout the life cycle with increasing levels of fidelity as the technical solution is developed, facilitating predictive insight and preventive or corrective actions. More information about technical measurement can be found in (NASA December 2007, Section 6.7.2.2), (Wasson 2006, Chapter 34), and (Roedler and Jones 2005).<br />
<br />
<br />
<br />
<br />
Figure 3. Relationship of the Technical Measures (Source: Roedler and Jones 2005)<br />
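A TPM assessment of the kind described above can be sketched as a comparison of the current estimate against the planned profile and the threshold; the parameter, values, and threshold below are invented for illustration.

```python
# Illustrative TPM check: compare the current estimate of a technical
# parameter against its planned profile value and its threshold.
# The mass values are invented.

def tpm_status(current, planned, threshold, lower_is_better=True):
    """Return margin against the threshold and whether the value is on plan."""
    if lower_is_better:
        margin = threshold - current
        on_plan = current <= planned
    else:
        margin = current - threshold
        on_plan = current >= planned
    return {"margin": margin, "on_plan": on_plan, "breach": margin < 0}

# Mass TPM: threshold 1200 kg, planned value at this milestone 1100 kg
status = tpm_status(current=1150, planned=1100, threshold=1200)
print(status)   # within the threshold, but behind the planned profile
```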
<br />
==Service Measurement==<br />
The same measurement activities can be applied to service measurement; however, the context and measures will be different. Service providers have a need to balance efficiency and effectiveness, which may be opposing objectives. Good service measures are outcome-based, focus on elements important to the customer (such as service availability, reliability, and performance), and provide timely, forward-looking information. <br />
<br />
For services, the terms critical success factors (CSF) and key performance indicators (KPI) are used often when discussing measurement. CSFs are the key elements of the service or service infrastructure that are most important to achieve the business objectives. Key performance indicators are specific values or characteristics measured to assess achievement of those objectives.<br />
More information about service measurement can be found in the Service Design and Continual Service Improvement volumes of (BMP 2010, 1). Service SE can be found in the [[Service Systems Engineering]] article. <br />
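A KPI of the kind described above can be computed directly from outage records; the CSF, availability target, and outage data below are invented for illustration.

```python
# Sketch of a service KPI (availability) computed from outage records.
# The CSF, target, and outage minutes are invented.

def availability(period_minutes, outages_minutes):
    """Fraction of the reporting period the service was usable."""
    return 1 - sum(outages_minutes) / period_minutes

# CSF: "service available to customers"; KPI target: 99.9% monthly
month = 30 * 24 * 60
kpi = availability(month, outages_minutes=[12, 35])
print(f"availability {kpi:.4%}, target met: {kpi >= 0.999}")
```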
<br />
==Linkages to Other Systems Engineering Management Topics==<br />
SE Measurement has linkages to the other SEM topics. The following are a few key linkages adapted from (Roedler and Jones 2005):<br />
*[[Planning]] – SE measurement provides the historical data and supports the estimation for and feasibility analysis of the plans for realistic planning. <br />
*[[Assessment and Control]] – SE measurement provides the objective information needed to perform the assessment and determine appropriate control actions. The use of leading indicators allows for early assessment and control actions that identify risks and/or provide insight to allow early treatment of risks to minimize potential impacts.<br />
*[[Risk Management]] – SE risk management identifies the information needs that can impact project and organizational performance. SE measurement data helps to quantify risks and subsequently provides information about whether risks have been successfully managed.<br />
*[[Decision Management]] – SE Measurement results inform decision making by providing objective insight.<br />
<br />
==Practical Considerations==<br />
Key pitfalls and good practices related to systems engineering measurement are described in the next two sections.<br />
<br />
===Pitfalls===<br />
Some of the key pitfalls encountered in planning and performing SE Measurement are: <br />
#Looking for the one measure or small set of measures that apply to all projects. There is no one-size-fits-all measure or measurement set. Each project has a unique set of information needs (objective, risks, issues). <br />
#Viewing measurement as a single-pass activity. To be effective, measurement needs to be performed continuously, including the periodic identification and prioritization of information needs and associated measures. <br />
#Performing measurement activities without the understanding of why the measures are needed and what information they provide. This can lead to wasted effort. <br />
#Using measurement inappropriately, to measure performance of individuals or make interpretations without context information. This can lead to bias in the results or incorrect interpretations. <br />
<br />
===Good Practices===<br />
Some good practices, gathered from the references:<br />
#Regularly review each measure collected.<br />
#Measurement by itself does not control or improve process performance. Measurement results must be provided to decision makers for appropriate action.<br />
#SE Measurement should be integrated into the project as part of the ongoing project business rhythm, where data is collected as processes are performed, not recreated as an afterthought. <br />
#Successful measurement requires the communication of meaningful information to the decision makers. The presentation of the results should be in the preferred format of the decision maker, in order to allow accurate and expeditious interpretation of the results. <br />
#Information should be obtained early enough to allow decision makers to take the actions necessary to control or treat risks, adjust tactics and strategies, etc. When such actions are not successful, measurement results need to help decision makers determine when to take contingency actions or correct problems. <br />
#Decisions can rarely wait for a complete or perfect set of data, so measurement information often needs to be derived from analysis of the best available data, complemented by real-time events and qualitative insight (including experience).<br />
#The information model defined in (ISO/IEC/IEEE 2007) provides a means to link the entities that are measured to the associated measures and to the identified information need, as well as how the measures are converted into indicators that provide insight to decision makers.<br />
#Use historical data as the basis of plans, measure what is planned versus what is achieved, archive actual achieved results, and use archived data as historical basis of next planning effort.<br />
<br />
Additional information can be found in (Frenz et al. 2010), Section 4.2 and (INCOSE 2010, Section 5.7.1.5).<br />
<br />
==Primary References==<br />
*PSM Guide, Version 4.0 (PSM May 7, 2010)<br />
*SE Leading Indicators Guide, Version 2.0 (Roedler et al. 2010)<br />
*Technical Measurement Guide, Version 1.0 (Roedler and Jones 2005)<br />
*ISO/IEC/IEEE 15939:2007, Measurement Process (ISO/IEC/IEEE 2007)<br />
*INCOSE Systems Engineering Measurement Primer (Frenz et al. 2010)<br />
<br />
==Additional Reading==<br />
*(Park, Goethert, and Florac 1996)<br />
*Safety Measurement, Version 3.0 (Murdoch 2006, 60)<br />
*Security Measurement, Version 3.0 (Murdoch 2006, 67)<br />
*Measuring Systems Interoperability (Kasunic and Anderson 2004)<br />
*Measurement for Process Improvement, Version 1.0 (Statz 2005)<br />
*(McGarry et al. 2002)<br />
*(NASA December 2007)<br />
*(Wasson 2006) <br />
<br />
==Glossary Terms==<br />
To be supplied.<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Risk Management|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Decision Management|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackin<br />
https://sebokwiki.org/w/index.php?title=Risk_Management&diff=9669<br />
Risk Management<br />
2011-08-09T21:33:25Z<br />
<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
The purpose of risk management is to evaluate concerns and take action to reduce potential risks to an acceptable level before they occur throughout the life of the product or project. Risk management is a continuous, forward-looking process that is applied to anticipate and avert risks that may adversely impact the project. Risk management can be considered a project management or a systems engineering process. A balance must be achieved on each project in terms of overall risk management ownership, implementation, and day-to-day responsibility between these two top-level processes.<br />
<br />
Risk is a measure of the potential inability to achieve overall program objectives within defined cost, schedule, and technical constraints. It has two components: 1) the probability (or likelihood) of failing to achieve a particular outcome, and 2) the consequences (or impact) of failing to achieve that outcome. (DAU, 2003a) (In the domain of catastrophic risk analysis, risk has three components: 1) threat, 2) vulnerability, and 3) consequence.) (Willis et al. 2005) <br />
<br />
Risk management involves defining a risk management strategy, identifying and analyzing risks, handling selected risks, and monitoring the progress in reducing risks to an acceptable level. (SEI 2010; DoD 2006; DAU 2003a; DAU 2003b; PMI 2008) (Opportunity and opportunity management is briefly discussed in Subsection 3 below.)<br />
<br />
==Risk Management Process Overview==<br />
The SE risk management process includes the following activities: <br />
*Risk planning <br />
*Risk identification <br />
*Risk analysis <br />
*Risk handling <br />
*Risk monitoring<br />
<br />
===Risk Planning===<br />
Risk planning involves establishing and maintaining a strategy for identifying, analyzing, handling, and monitoring risks, and how the strategy will be implemented on the project. The strategy, both the process and its implementation, is documented in a risk management plan (RMP).<br />
<br />
The risk management process and its implementation should be tailored to each project, updated as appropriate throughout the life of the project, and the RMP transmitted in an appropriate means to the project team and key stakeholders. <br />
<br />
The RMP should contain key risk management information, including: 1) a project summary; 2) project acquisition and contracting strategies; 3) key definitions; 4) a list of key documents; 5) process steps; 6) inputs, tools and techniques, and outputs per process step; 7) linkages of risk management with other project processes; 8) key ground rules and assumptions; 9) risk categories; 10) seller and buyer roles and responsibilities; and 11) organizational and personnel roles and responsibilities. (Conrow 2003) The level of detail should be risk-driven: simple plans for low-risk projects; detailed plans for high-risk projects.<br />
<br />
===Risk Identification===<br />
Risk identification is the process of examining the project products, processes, and requirements to identify and document candidate risks. Risk identification should be performed continuously at the individual level, as well as through formally structured events at regular intervals and following major program changes (e.g., project initiation, re-baselining, change in acquisition phase).<br />
<br />
It should use one or more top-level approaches (e.g., Work Breakdown Structure, key processes evaluation, key requirements evaluation) and one or more lower-level approaches (e.g., affinity, brainstorming, checklists and taxonomies, examining critical path activities, expert judgment, fishbone diagrams). (Conrow 2009) For example, lower-level checklists and taxonomies exist for software risk identification (Conrow and Shishido 1997, 83-89, p. 84; Boehm 1989, 115-125, Carr et al. 1993, p. A-2) and operational risk identification (Gallagher et al. 2005, p. 4), and have been used on a wide variety of programs. Both the top and lower-level approaches are essential but there is no single acceptable method—all approaches should be examined and used as appropriate. <br />
<br />
Candidate risk documentation should include the following items where possible: 1) risk title; 2) a structured risk description; 3) applicable risk categories; 4) potential root causes; 5) relevant historical information (e.g., actions to date); and 6) responsible individual and manager. (Conrow 2003, p. 198) <br />
<br />
It is important to use structured risk descriptions, such as an If (an event occurs—trigger) Then (an outcome or effect occurs) Because (of the following reasons, or root cause) format. Another useful construct is a Condition (that exists) which leads to a potential Consequence (outcome). (Gluch 1994) These approaches help the analyst to better think through the potential nature of the risk. <br />
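The If-Then-Because construct can be rendered as a small template; the example risk below is invented for illustration.

```python
# The If-Then-Because structured risk description from the text,
# rendered as a tiny template. The example risk is invented.

def risk_statement(trigger, outcome, root_cause):
    return f"IF {trigger}, THEN {outcome}, BECAUSE {root_cause}."

print(risk_statement(
    trigger="the vendor delivers the flight battery late",
    outcome="integration testing slips past the launch window",
    root_cause="the test schedule has no slack after battery installation",
))
```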
<br />
Risk analysis and risk handling activities should be performed only on approved risks, because resources are scarce and there is an opportunity cost in focusing on the wrong risks.<br />
<br />
===Risk Analysis===<br />
Risk analysis is the process of systematically evaluating each identified, approved risk to estimate the probability of occurrence (likelihood) and consequence of occurrence (impact), converting the results to a corresponding risk level or rating.<br />
<br />
There is no single “best” approach for a given risk category: risk scales and a corresponding matrix, simulations, and probabilistic risk assessments are often used for technical risks; decision trees, simulations, and payoff matrices for cost risk; and simulations for schedule risk. Risk analysis approaches are sometimes grouped into qualitative and quantitative methods. A structured, repeatable methodology should be used in order to increase analysis accuracy and reduce uncertainty.<br />
<br />
The most common qualitative method uses (typically) ordinal probability and consequence scales coupled with a risk matrix (also known as a risk cube or mapping matrix) to convert the resulting values to a risk level. Here, one or more probability of occurrence scales, coupled with three consequence of occurrence scales (cost, performance, schedule), are typically used. Mathematical operations should not be performed on ordinal scale values, as this can produce erroneous results. (Conrow 2003, pp. 187-364)<br />
<br />
Once the risk level for each risk is determined, the risks need to be prioritized. Prioritization is typically performed by risk level (e.g., low, medium, high), risk score [the pair of max (probability), max (consequence) values], and other considerations such as time-frame, frequency of occurrence, and interrelationship with other risks. (Conrow 2003, pp. 187-364) An additional prioritization technique is to convert results into an estimated cost, performance, and schedule value (e.g., probability * dollar consequence). However, the result is only a point estimate and not a distribution of risk.<br />
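The qualitative mapping and prioritization described above can be sketched as follows; the matrix cell assignments and the example risks are invented, and, per the caution above, no arithmetic is performed on the ordinal values themselves.

```python
# Sketch of the qualitative approach: ordinal probability and consequence
# levels (1-5) are mapped through a lookup matrix -- no arithmetic on the
# ordinals -- then risks are prioritized by level and risk score.
# The matrix cells and risks are illustrative.

LEVELS = ["low", "medium", "high"]

# Rows: probability level 1-5; columns: consequence level 1-5.
# Each cell is assigned by judgment, not computed from the ordinal values.
MATRIX = [
    ["low",    "low",    "low",    "medium", "medium"],
    ["low",    "low",    "medium", "medium", "medium"],
    ["low",    "medium", "medium", "medium", "high"],
    ["medium", "medium", "medium", "high",   "high"],
    ["medium", "medium", "high",   "high",   "high"],
]

def risk_level(p, c):
    """Look up the risk level for ordinal probability p and consequence c."""
    return MATRIX[p - 1][c - 1]

# (name, probability level, consequence level) -- invented risks
risks = [("late battery", 4, 5), ("staff turnover", 2, 3), ("parts obsolescence", 5, 2)]

# Prioritize by risk level, then by risk score (max of the two ordinals).
ranked = sorted(
    risks,
    key=lambda r: (LEVELS.index(risk_level(r[1], r[2])), max(r[1], r[2])),
    reverse=True,
)
for name, p, c in ranked:
    print(f"{name}: {risk_level(p, c)}")
```

Other considerations from the text (time-frame, frequency of occurrence, interrelationships) would enter as further sort keys.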
<br />
Widely used quantitative methods include decision trees and the associated expected monetary value analysis (Clemen and Reilly 2001), modeling and simulation (Law 2007; Mun 2010; Vose 2000), payoff matrices (Kerzner 2009, pp. 747-751), probabilistic risk assessments (Kumamoto and Henley 1996; NASA 2002), and other techniques. Risk prioritization can directly result from the quantitative methods employed.<br />
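Expected monetary value analysis, as used with decision trees, reduces each alternative to a probability-weighted sum of its outcomes; the alternatives, probabilities, and dollar figures below are invented for illustration.

```python
# Expected monetary value (EMV) for two invented alternatives, as used
# with decision tree analysis.

def emv(branches):
    """branches: list of (probability, monetary outcome) pairs."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

redesign    = emv([(0.7, -200_000), (0.3, -900_000)])   # redesign mostly succeeds
accept_risk = emv([(0.4, 0), (0.6, -1_500_000)])        # do nothing
print("redesign EMV:", redesign, " accept EMV:", accept_risk)
```

Here the redesign alternative has the less negative EMV and would be preferred, all else being equal.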
<br />
For quantitative approaches, care is needed in developing the model structure, since the results will only be as good as the accuracy of the structure coupled with the characteristics of probability estimates or distributions (Law 2007; Evans, Hastings, and Peacock 2000) used to model risk that is present.<br />
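As a minimal Monte Carlo sketch of schedule risk, the example below assumes a purely serial three-task path with invented triangular duration estimates; a real model would also capture the schedule network structure and correlations.

```python
import random

# Monte Carlo sketch of schedule risk on a serial path of three tasks,
# each with (optimistic, most likely, pessimistic) durations in days.
# Task data are invented; real models also need the network structure.

TASKS = [(10, 12, 20), (5, 8, 15), (20, 22, 35)]

def simulate(tasks, trials=20_000, seed=1):
    rng = random.Random(seed)
    return sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )

totals = simulate(TASKS)
p80 = totals[int(0.8 * len(totals))]
print(f"sum of most-likely estimates: {sum(m for _, m, _ in TASKS)} days; "
      f"80th-percentile completion: {p80:.1f} days")
```

The gap between the sum of most-likely estimates and the 80th-percentile result is one way the distribution of risk, rather than a point estimate, informs planning.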
<br />
If multiple risk facets exist for a given item (e.g., cost risk, schedule risk, and technical risk), the different results should be integrated into a cohesive three-dimensional “picture” of risk. Sensitivity analyses can be applied to both qualitative and quantitative approaches in an attempt to understand how potential variability will affect results. Particular emphasis should be paid to compound risks (e.g., highly coupled technical risks with inadequate fixed budgets and schedules).<br />
<br />
===Risk Handling===<br />
Risk handling is the process that identifies and selects options and implements the desired option to reduce a risk to an acceptable level, given program constraints (budget, other resources) and objectives. (DAU 2003a, pp. 20-23, 70-78)<br />
<br />
For a given system of interest, risk handling is primarily performed at two levels. At the system level, the overall ensemble of system risks is initially determined and prioritized, and second-level draft risk element plans (REPs) are prepared for handling the risks. For more complex systems, it is important that the REPs at the higher system-of-interest level are kept consistent with the system RMPs at the lower system-of-interest level, and that the top-level RMP preserves continuing risk traceability across the systems of interest.<br />
<br />
The risk handling strategy selected is the combination of the most desirable risk handling option coupled with a suitable implementation approach for that option. (Conrow 2003) Risk handling options include assumption, avoidance, control (mitigation), and transfer. All four options should be evaluated and the best one chosen for each risk. An appropriate implementation approach is then chosen for that option. Hybrid strategies can be developed that include more than one risk handling option, but with a single implementation approach. Additional risk handling strategies can also be developed for a given risk and either implemented in parallel with the primary strategy or made a contingent and implemented if a particular trigger event occurs during the execution of the primary strategy. Often, option choice is difficult because of uncertainties in the risk probabilities and impacts. In such cases, buying information to reduce risk uncertainty via prototypes, benchmarking, surveying, modeling, etc. will clarify risk handling decisions (Boehm, 1981).<br />
<br />
====Risk Handling Plans====<br />
A risk handling plan (RHP, a REP at the system level), should be developed and implemented for all high and medium risks and selected low risks as warranted.<br />
<br />
Each RHP should include : 1) a risk owner and management contacts, 2) selected option, 3) implementation approach, 4) estimated probability and consequence of occurrence levels at the start and conclusion of each activity, 5) specific measurable exit criteria for each activity, 6) appropriate metrics, and 7) resources needed to implement the RHP (e.g., personnel, funding, test equipment, etc.). (Conrow 2003, pp. 365-387)<br />
<br />
Metrics included in each RHP should provide an objective means of determining whether the risk handling strategy is “on track,” and whether it needs to be updated. On larger projects these can include earned value, variation in schedule and technical performance measurements (TPMs), and changes in risk level vs. time. <br />
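The "on track" check these metrics support can be sketched as a comparison of actual versus planned risk levels at each completed RHP activity; the activities and levels below are invented for illustration.

```python
# Illustrative risk burn-down check for an RHP: compare the actual risk
# level after each completed activity against the level the plan
# predicted. Activities and levels are invented; 3=high, 2=medium, 1=low.

planned = {"design review": 3, "prototype test": 2, "qualification": 1}
actual  = {"design review": 3, "prototype test": 3}

def off_track(planned, actual):
    """Activities where risk did not come down as the RHP predicted."""
    return [a for a, level in actual.items() if level > planned[a]]

print("RHP off track at:", off_track(planned, actual))
```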
<br />
The activities present in each RHP should be integrated into the project’s integrated master schedule or equivalent; otherwise there will be ineffective risk monitoring and control.<br />
<br />
===Risk Monitoring===<br />
Risk monitoring is used to evaluate the effectiveness of risk handling activities against established metrics and provide feedback to the other risk management process steps. Risk monitoring results may also provide a basis to update RHPs, develop additional risk handling options and approaches, and re-analyze risks. In some cases, monitoring results may also be used to identify new risks, revise an existing risk with a new facet, or revise some aspects of risk planning. (DAU 2003a, p. 20) Some risk monitoring approaches that can be applied include: 1) earned value, 2) program metrics, 3) TPMs, 4) schedule analysis, and 5) variations in risk level. Risk monitoring approaches should be updated and evaluated at the same time and WBS level; otherwise, the results may be inconsistent.<br />
<br />
==Opportunity and Opportunity Management==<br />
In principle, opportunity management is the dual of risk management, with two components: the probability of achieving an improved outcome, and the impact of achieving that outcome. Thus, both should be addressed in risk management planning and execution. In practice, however, a positive opportunity exposure will not match a negative risk exposure in utility space, since the positive utility magnitude of improving an expected outcome is considerably less than the negative utility magnitude of failing to meet an expected outcome (Canada 1971; Kahneman and Tversky 1979). Further, since many opportunity-management initiatives have failed to anticipate serious side effects, all candidate opportunities should be thoroughly evaluated for potential risks to prevent unintended consequences from occurring.<br />
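The gain/loss asymmetry noted above can be illustrated with the prospect-theory value function of Kahneman and Tversky, using their estimated parameters (α = β = 0.88, λ = 2.25); this is an illustration of the asymmetry, not part of the risk management process itself.

```python
# Prospect-theory value function (Kahneman and Tversky's estimated
# parameters), illustrating why a loss looms larger than an equal gain.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

gain, loss = value(100_000), value(-100_000)
print(f"utility of a $100k gain: {gain:.0f}; of a $100k loss: {loss:.0f}")
# The loss outweighs the equal-sized gain, so a positive opportunity
# exposure does not offset an equal negative risk exposure.
```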
<br />
==Linkages to Other Systems Engineering Management Topics==<br />
The measurement process provides indicators for risk analysis. Project planning involves the identification of risk and planning for stakeholder involvement. Project Assessment and Control monitors project risks. Decision management evaluates alternatives for selection and handling of identified and analyzed risks. <br />
<br />
==Practical Considerations==<br />
Key pitfalls and good practices related to systems engineering risk management are described in the next two sections. <br />
<br />
===Pitfalls===<br />
Some of the key pitfalls encountered in performing risk management are: <br />
#Over-reliance on the process side of risk management without sufficient attention to human and organizational behavioral considerations. <br />
#Failure to implement risk management as a continuous process. Risk management will be ineffective if it is done just to satisfy project reviews or other discrete criteria (Charette, Dwinnell, and McGarry 2004, 18-24; Scheinin 2008).<br />
#Over-reliance on tools and techniques, with insufficient thought and resources expended on how the process will be implemented and run on a day-to-day basis.<br />
#Assuming that a comprehensive risk identification has captured all risks. Some risks will always escape detection, which reinforces the need for risk identification to be performed continuously.<br />
#Automatically selecting the control (mitigation) risk handling option, rather than evaluating all four options in an unbiased fashion and choosing the best one.<br />
<br />
===Good Practices===<br />
Some good practices, gathered from the references: <br />
#Risk management should be both “top down” and “bottom up” in order to be effective. The project manager or deputy needs to own the process at the top level, but risk management principles should be considered and used by all project personnel. <br />
#Include the planning process step in the risk management process. Failure to adequately perform risk planning early in the project contributes to ineffective risk management.<br />
#Understand the limitations of risk analysis tools and techniques. Risk analysis results should be challenged because considerable input uncertainty and/or potential errors may exist.<br />
#The risk handling strategy should attempt to reduce both the probability and consequence of occurrence terms. It is also imperative that the resources needed to properly implement the chosen strategy be available in a timely manner; otherwise, the risk handling strategy, and the entire risk management process, will be viewed as a “paper tiger.”<br />
#Risk monitoring should be a structured approach to compare actual vs. anticipated cost, performance, schedule, and risk outcomes associated with implementing the RHP. When ad-hoc or unstructured approaches are used, or when risk level vs. time is the only metric tracked, the resulting risk monitoring usefulness can be greatly reduced.<br />
#The risk management database (registry) should be updated throughout the course of the program, striking a balance between excessive resources required and insufficient updates performed. Database updates should occur at both a tailored, regular interval and following major program changes.<br />
<br />
==Glossary==<br />
===Acronyms===<br />
<br />
*AIAA – American Institute of Aeronautics and Astronautics<br />
*CDF – Cumulative Distribution Function<br />
*EV – Earned Value<br />
*IEC – International Electrotechnical Commission<br />
*IEEE – Institute of Electrical and Electronics Engineers<br />
*INCOSE – International Council on Systems Engineering<br />
*ISO – International Organization for Standardization<br />
*PDF – Probability Density Function<br />
*REP – Risk Element Plan<br />
*RHP – Risk Handling Plan<br />
*RMP – Risk Management Plan<br />
*TPM – Technical Performance Measurement<br />
<br />
===Terminology===<br />
<br />
Issue—(1) An area of concern that may impact the achievement of program/organizational objectives – a problem (existing), a risk (future uncertainty), or lack of information (existing). (ISO/IEC 15939, PSM) (2) A concern that has a probability of occurrence equal to one, a consequence of occurrence greater than zero, and a time-frame in the future. An issue will occur and will have a negative impact on the project, but not immediately. (Conrow, 2008)<br />
<br />
Problem—A concern that has a probability of occurrence equal to one, a consequence of occurrence greater than zero, and a time-frame that is current (now); that is, a concern that has occurred and has a negative impact on the project. (Conrow, 2008) <br />
<br />
Risk—(1) Risk is a measure of the potential inability to achieve overall program objectives within defined cost, schedule, and technical constraints; it has two components: <br />
1.The probability (or likelihood) of failing to achieve a particular outcome and <br />
2.The consequences (or impact) of failing to achieve that outcome. (DAU, 2003a) <br />
A risk has a probability of occurrence that is greater than zero but less than one, a consequence of occurrence greater than zero, and a time-frame in the future. (Conrow, 2008) <br />
(2) In the domain of catastrophic risk analysis, such as for terrorist attacks or natural disasters, risk has three components: <br />
1.Threat (the probability that a specific target is attacked in a specific way during a specified period) <br />
2.Vulnerability (the probability that damage occurs given a threat), and <br />
3.Consequence (the magnitude and type of damage resulting from an attack or disaster). (Willis et al. 2005) <br />
<br />
Risk management—Risk management is the act or practice of dealing with risk. It includes risk management planning, identification, analysis, responses (handling), and monitoring and control and associated documentation. Given that risk emergence and risk management are continuous processes, these activities will be performed concurrently rather than sequentially.<br />
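The Conrow (2008) definitions above distinguish a problem, an issue, and a risk purely by probability of occurrence and time-frame (all three assume a consequence of occurrence greater than zero). A minimal sketch of that distinction, with hypothetical function and argument names not taken from the cited sources:<br />

```python
# Classify a concern per the Conrow (2008) definitions quoted above:
#   problem: probability == 1, time-frame current (now)
#   issue:   probability == 1, time-frame in the future
#   risk:    0 < probability < 1, time-frame in the future
# A consequence of occurrence greater than zero is assumed throughout.
# Function and argument names are illustrative only.

def classify_concern(probability, in_future):
    if not 0.0 < probability <= 1.0:
        raise ValueError("probability of occurrence must be in (0, 1]")
    if probability == 1.0:
        return "issue" if in_future else "problem"
    if not in_future:
        # 0 < p < 1 with a current time-frame is not defined by these terms
        raise ValueError("a concern with p < 1 must have a future time-frame")
    return "risk"

print(classify_concern(1.0, in_future=False))  # the concern has already occurred
print(classify_concern(0.3, in_future=True))   # uncertain and in the future
```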
<br />
==References== <br />
===Citations===<br />
Boehm, B. 1981. Software Engineering Economics, Prentice Hall. <br />
<br />
Boehm, B. 1989. Software Risk Management. IEEE CS Press: 115-125.<br />
<br />
Canada, J.R. 1971. Intermediate Economic Analysis for Management and Engineering. Prentice Hall.<br />
<br />
Carr, M., S. Konda, I. Monarch, F. Ulrich, and C. Walker. 1993. Taxonomy-based risk identification. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-93-TR-6.<br />
<br />
Charette, R., L. Dwinnell, and J. McGarry. 2004. Understanding the roots of process performance failure. CROSSTALK: The Journal of Defense Software Engineering (August 2004): 18-24.<br />
<br />
Clemen, R., and T. Reilly. 2001. Making hard decisions. Boston, MA, USA: Duxbury.<br />
<br />
Conrow, E. 2003. [[Effective Risk Management: Some Keys to Success]]. 2nd ed. Reston, VA, USA: American Institute of Aeronautics and Astronautics (AIAA).<br />
<br />
Conrow, E. 2008. Risk analysis for space systems. Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February, 2008, Los Angeles, CA, USA. <br />
<br />
Conrow, E., and P. Shishido. 1997. Implementing risk management on software intensive projects. IEEE Software 14 (3) (May/June 1997): 83-9.<br />
<br />
DAU. 2003a. Risk Management Guide for DoD Acquisition: Fifth Edition, Version 2. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.<br />
<br />
DAU. 2003b. U.S. department of defense extension to: A guide to the project management body of knowledge (PMBOK(R) guide), first edition, version 1. 1st ed. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.<br />
<br />
DoD. 2006. [[Risk Management Guide for DoD Acquisition]]. Washington, D. C.: Office of the Under Secretary of Defense (Acquisition, Technology & Logistics)/Department of Defense, Sixth Edition, Version 1.<br />
<br />
Evans, M., N. Hastings, and B. Peacock. 2000. Statistical distributions. 3rd ed. New York, NY: Wiley-Interscience.<br />
<br />
Gallagher, B., P. Case, R. Creel, S. Kushner, and R. Williams. 2005. A taxonomy of operational risk. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-2005-TN-036.<br />
<br />
Gluch, P. 1994. A Construct for Describing Software Development Risks. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-94-TR-14.<br />
<br />
Kerzner, H. 2009. Project management: A systems approach to planning, scheduling, and controlling. 10th ed. Hoboken, NJ: John Wiley & Sons.<br />
<br />
Kahneman, D., and A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47 (2) (March 1979): 263-292.<br />
<br />
Kumamoto, H., and E. Henley. 1996. Probabilistic Risk Assessment and Management for Engineers and Scientists. 2nd ed. Piscataway, NJ, USA: Institute of Electrical and Electronics Engineers (IEEE) Press.<br />
<br />
Law, A. 2007. Simulation modeling and analysis. 4th ed. New York, NY: McGraw Hill.<br />
<br />
Mun, J. 2010. Modeling risk. 2nd ed. Hoboken, NJ: John Wiley & Sons. <br />
<br />
NASA. 2002. Probabilistic risk assessment procedures guide for NASA managers and practitioners, version 1.1. Washington, D.C.: Office of Safety and Mission Assurance/National Aeronautics and Space Administration (NASA).<br />
<br />
PMI. 2008. A guide to the project management body of knowledge (PMBOK guide). 4th ed. Newtown Square, PA, USA: Project Management Institute (PMI).<br />
<br />
Scheinin, W. 2008. Start early and often: The need for persistent risk management in the early acquisition phases. Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.<br />
<br />
SEI. 2010. [[Capability Maturity Model Integrated (CMMI) for Development]], version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).<br />
<br />
Vose, D. 2000. Quantitative risk analysis. 2nd ed. New York, NY: John Wiley & Sons.<br />
<br />
Willis, H. H., A. R. Morral, T. K. Kelly, and J. J. Medby. 2005. Estimating terrorism risk. Santa Monica, CA: The RAND Corporation, MG-388.<br />
<br />
===Primary References===<br />
Boehm, B. 1981. [[Software Engineering Economics]], Prentice Hall. <br />
<br />
Boehm, B. 1989. [[Software Risk Management]]. IEEE CS Press: 115-125.<br />
<br />
Conrow, E. H. 2003. [[Effective Risk Management: Some Keys to Success]]. 2nd ed. Reston, VA, USA: American Institute of Aeronautics and Astronautics (AIAA).<br />
<br />
DoD. 2006. [[Risk Management Guide for DoD Acquisition]]. Washington, D. C.: Office of the Under Secretary of Defense (Acquisition, Technology & Logistics)/Department of Defense, Sixth Edition, Version 1.<br />
<br />
SEI. 2010. [[Capability Maturity Model Integrated (CMMI) for Development]], version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).<br />
<br />
===Additional References===<br />
Canada, J.R. 1971 Intermediate Economic Analysis for Management and Engineering, Prentice Hall.<br />
<br />
Carr, M., S. Konda, I. Monarch, F. Ulrich, and C. Walker. 1993. Taxonomy-based risk identification. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-93-TR-6. <br />
<br />
Charette, R. 1990. Application strategies for risk management. New York, NY: McGraw-Hill.<br />
<br />
Charette, R. 1989. Software engineering risk analysis and management. New York, NY: McGraw-Hill (MultiScience Press).<br />
<br />
Charette, R., L. Dwinnell, and J. McGarry. 2004. Understanding the roots of process performance failure. CROSSTALK: The Journal of Defense Software Engineering (August 2004): 18-24. <br />
<br />
Clemen, R. T., and T. Reilly. 2001. Making hard decisions. Boston, MA, USA: Duxbury. <br />
<br />
Conrow, E. 2010. Space program schedule change probability distributions. Paper presented at American Institute of Aeronautics and Astronautics (AIAA) Space 2010, 1 September 2010, Anaheim, CA, USA.<br />
<br />
Conrow, E. 2009. Tailoring risk management to increase effectiveness on your project. Presentation to the Project Management Institute, Los Angeles Chapter, 16 April, 2009, Los Angeles, CA.<br />
<br />
Conrow, E. 2008. Risk analysis for space systems. Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February, 2008, Los Angeles, CA, USA. <br />
<br />
Conrow, E., and P. Shishido. 1997. Implementing risk management on software intensive projects. IEEE Software 14 (3) (May/June 1997): 83-9. <br />
<br />
DAU. 2003a. Risk Management Guide for DoD Acquisition: Fifth Edition, Version 2. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.<br />
<br />
DAU. 2003b. U.S. department of defense extension to: A guide to the project management body of knowledge (PMBOK(R) guide), first edition, version 1. 1st ed. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press. <br />
<br />
Dorofee, A., J. Walker, C. Alberts, R. Higuera, R. Murphy, and R. Williams, eds. 1996. Continuous risk management guidebook. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU). <br />
<br />
Evans, M., N. Hastings, and B. Peacock. 2000. Statistical distributions. 3rd ed. New York, NY: Wiley-Interscience. <br />
<br />
Gallagher, B., P. Case, R. Creel, S. Kushner, and R. Williams. 2005. A taxonomy of operational risk. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-2005-TN-036.<br />
<br />
Gluch, P. 1994. A Construct for Describing Software Development Risks. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-94-TR-14. <br />
<br />
Haimes, Y. Y. 2009. Risk modeling, assessment, and management. Hoboken, NJ: John Wiley & Sons, Inc. <br />
<br />
Hall, E. 1998. Managing risk: Methods for software systems development. New York, NY: Addison Wesley Professional. <br />
<br />
INCOSE. 2011. INCOSE systems engineering handbook, version 3.2.1. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.1.<br />
<br />
ISO. 2009. Risk management—Principles and guidelines. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 31000:2009.<br />
<br />
ISO/IEC. 2009. Risk Management—Risk Assessment Techniques. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 31010:2009.<br />
<br />
ISO. 2003. Space systems - Risk management. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 17666:2003.<br />
<br />
Jones, C. 1994. Assessment and control of software risks. Upper Saddle River, NJ, USA: Prentice-Hall. <br />
<br />
Kahneman, D., and A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47 (2) (March 1979): 263-292.<br />
<br />
Kerzner, H. 2009. Project management: A systems approach to planning, scheduling, and controlling. 10th ed. Hoboken, NJ: John Wiley & Sons. <br />
<br />
Kumamoto, H., and E. Henley. 1996. Probabilistic Risk Assessment and Management for Engineers and Scientists. 2nd ed. Piscataway, NJ, USA: Institute of Electrical and Electronics Engineers (IEEE) Press.<br />
<br />
Law, A. 2007. Simulation modeling and analysis. 4th ed. New York, NY: McGraw Hill.<br />
<br />
Mun, J. 2010. Modeling risk. 2nd ed. Hoboken, NJ: John Wiley & Sons. <br />
<br />
NASA. 2002. Probabilistic risk assessment procedures guide for NASA managers and practitioners, version 1.1. Washington, D.C.: Office of Safety and Mission Assurance/National Aeronautics and Space Administration (NASA). <br />
<br />
PMI. 2008. A guide to the project management body of knowledge (PMBOK guide). 4th ed. Newtown Square, PA, USA: Project Management Institute (PMI). <br />
<br />
Scheinin, W. 2008. Start early and often: The need for persistent risk management in the early acquisition phases. Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA. <br />
<br />
USAF. 2005. SMC systems engineering primer & handbook: Concepts, processes, and techniques. 3rd ed. Los Angeles, CA: Space & Missile Systems Center/U.S. Air Force (USAF). <br />
<br />
Vose, D. 2000. Quantitative risk analysis. 2nd ed. New York, NY: John Wiley & Sons.<br />
<br />
Willis, H. H., A. R. Morral, T. K. Kelly, and J. J. Medby. 2005. Estimating terrorism risk. Santa Monica, CA: The RAND Corporation, MG-388.<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Assessment and Control|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Measurement|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>

Assessment and Control (Skmackin, 2011-08-09T21:31:06Z)
<hr />
<div>The purpose of Systems Engineering Assessment and Control (SEAC) is to provide adequate visibility into the project’s actual technical progress and risks with respect to the technical plans (i.e., Systems Engineering Management Plan (SEMP) and subordinate plans). The visibility allows the project team to take timely preventive action when trends are recognized or corrective action when performance deviates beyond established thresholds or expected values. SEAC includes preparing for and conducting reviews and audits to monitor performance. The results of the reviews and measurement analyses are used to identify and record findings/discrepancies and may lead to causal analysis and corrective/preventive action plans. Action plans are implemented, tracked, and monitored to closure. (NASA 2007, Section 6.7) (SEG-ITS, 2009, Section 3.9.3, 3.9.10) (SEI, 1995, PA11) (INCOSE, 2010, Clause 6.2) (CMMI Product Team, 2006)<br />
<br />
The Systems Engineering Assessment and Control process includes determination of appropriate handling strategies and actions for findings and/or discrepancies that are uncovered in the enterprise, infrastructure, or life cycle activities associated with the project, and for initiating the identified actions or replanning. Analysis of the causes of the findings/discrepancies aids in the determination of appropriate handling strategies. The approved preventive, corrective, or improvement actions are implemented to ensure satisfactory completion of the project within planned technical, schedule, and cost objectives. Potential action plans for findings and/or discrepancies are reviewed in the context of the overall set of actions and priorities in order to optimize the benefits to the project and/or organization. Interrelated items are analyzed together to obtain a consistent and cost-effective resolution. <br />
<br />
==SE Assessment and Control Process Overview==<br />
The SE assessment and control process includes the following activities:<br />
*Monitor and review technical performance and resource usage against plan<br />
*Monitor technical risk, escalate significant risks to the Project Risk register and seek project funding to execute risk mitigation plans<br />
*Hold technical reviews and report outcomes at the Project reviews<br />
*Analyze issues and determine appropriate actions<br />
*Manage actions to closure<br />
*Hold a Post Delivery Assessment (also known as a Post Project Review) to capture knowledge associated with the Project (this may be a separate technical assessment or it may be conducted as part of the Project Assessment and Control process). <br />
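The purpose statement above distinguishes preventive action (taken when an adverse trend is recognized) from corrective action (taken when performance deviates beyond an established threshold). That assess-and-act logic can be sketched as follows; the function name, the naive linear trend extrapolation, and the example TPM values are illustrative assumptions, not drawn from the cited sources.<br />

```python
# Hypothetical sketch of the assess-and-act logic: corrective action on a
# threshold breach, preventive action when the recent trend projects a
# breach within `lookahead` periods. Higher values are assumed worse here.

def assess(history, threshold, lookahead=3):
    """history: periodic actuals for one measure, oldest first."""
    current = history[-1]
    if current > threshold:
        return "corrective action"  # deviation beyond the established threshold
    if len(history) >= 2:
        slope = history[-1] - history[-2]  # naive per-period trend
        if current + lookahead * slope > threshold:
            return "preventive action"  # adverse trend recognized early
    return "on plan"

mass_kg = [40.0, 41.0, 42.5]  # e.g., a weight TPM tracked over three reviews
print(assess(mass_kg, threshold=45.0))  # trend projects a breach: preventive action
```

A real implementation would draw thresholds from the SEMP or subordinate plans and use the project's agreed measurement analyses rather than a two-point slope.<br />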
<br />
Note that the following activities are normally conducted as part of the '''Project''' Assessment and Control process: <br />
*Authorization, release and closure of work<br />
*Monitor project performance and resource usage against plan<br />
*Monitor project risk and authorize expenditure of project funds to execute risk mitigation plans<br />
*Hold Project reviews<br />
*Analyze issues and determine appropriate actions<br />
*Manage actions to closure<br />
*Hold a Post Delivery Assessment (also known as a Post Project Review) to capture knowledge associated with the Project <br />
<br />
The figure below shows major technical reviews used in SEAC.<br />
<br />
<div style="text-align: left;"><br />
'''Major Technical Reviews<br />
''' </div><br />
<br />
[[File:Major_technical_reviews.png|600px|Major Technical Reviews]]<br />
<br />
==Linkages to Other Systems Engineering Management Topics==<br />
The Systems Engineering assessment and control process is closely coupled with the [[Measurement]], [[Planning]], [[Decision Management]], and [[Risk Management]] processes.<br />
The [[Measurement]] process provides indicators for comparing actuals to plans. [[Planning]] provides the estimates and milestones that constitute the plans for monitoring, as well as the project plan and the measures used to monitor progress. [[Decision Management]] uses the results of project monitoring as decision criteria for making control decisions.<br />
<br />
==Practical Considerations==<br />
Key pitfalls and good practices related to SEAC are described in the next two sections.<br />
<br />
===Pitfalls===<br />
Some of the key pitfalls encountered in planning and performing SE Assessment and Control are: <br />
*Since the assessment and control activities are highly dependent on insightful measurement information, it is usually ineffective to proceed independently of the measurement efforts - what you get is what you measure.<br />
*Some things are easier to measure than others - for instance, delivery to cost and schedule. Don't focus on these and neglect things that are harder to measure, like the quality of the system. Avoid a "something in time" culture where meeting the schedule takes priority over everything else, but what is delivered is not fit for purpose and drives rework into the project. <br />
*Make sure that the Technical Review Gates have "teeth". Sometimes the Project Manager is given authority (or can appeal to someone with authority) to override a gate decision and allow work to proceed, even when the gate has exposed significant issues with the technical quality of the system or associated work products. This is a major risk if the organization is strongly schedule-driven; it can't afford the time to do it right, but somehow it finds the time to do it again (rework).<br />
*Don't baseline requirements or designs too early. Often there is strong pressure to baseline system requirements and designs before they are fully understood or agreed, in order to start subsystem or component development. This just guarantees high levels of rework. <br />
<br />
===Good Practices===<br />
Some good practices gathered from the references are:<br />
*Provide independent (from customer) assessment and recommendations on resources, schedule, technical status, and risk based on experience and trend analysis.<br />
*Use peer review to ensure the quality of work products before they are submitted for gate review.<br />
*Communicate uncertainties in requirements or designs and accept that uncertainty is a normal part of developing a system.<br />
*Do not penalize a Project at Gate Review if they admit uncertainty in requirements - ask for their risk mitigation plan to manage the uncertainty.<br />
*Baseline requirements and designs only when you need to - when other work is committed based on the stability of the requirement or design. If work has to start and the requirement or design is still uncertain, consider how you can build robustness into the system to handle the uncertainty with minimum rework. <br />
*Document and communicate status findings and recommendations to stakeholders. <br />
*Ensure that action items and action-item status, as well as other key status items, are visible to all project participants.<br />
*When performing root cause analysis, consider root cause and resolution data documented in previous related findings/discrepancies.<br />
*Plan and perform Assessment and Control concurrently with the activities for [[Measurement]] and [[Risk Management]]. <br />
*Hold Post Delivery Assessments or Post Project Reviews to capture knowledge associated with the Project - for instance, to augment and improve estimation models, Lessons Learned databases, Gate Review checklists.<br />
*Additional good practices can be found in (INCOSE 2010, Clause 6.2), (SEG-ITS, 2009, Sections 3.9.3 and 3.9.10), (INCOSE, 2010, Section 5.2.1.5), (NASA, 2007, Section 6.7).<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
NASA. 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
SEI. 1995. A systems engineering capability maturity model, version 1.1. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-95-MM-003. <br />
<br />
CMMI Product Team. 2006. Capability maturity model integration (CMMI) for development, version 1.2, measurement and analysis process area. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).<br />
<br />
SEG-ITS. 2009. Systems engineering guidebook for intelligent transport systems, version 3.0. U.S. Department of Transportation, Federal Highway Administration. <br />
<br />
===Additional References===<br />
<br />
ISO/IEC/IEEE. 2009. Systems and software engineering - life cycle processes - project management. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE 16326:2009(E). <br />
<br />
==Glossary==<br />
===Acronyms===<br />
<br />
Acronym Definition <br />
<br />
SEAC Systems Engineering Assessment and Control<br />
<br />
SEMP Systems Engineering Management Plan<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Planning|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Risk Management|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>

Technical Planning (Skmackin, 2011-08-09T21:29:47Z)
<hr />
<div>Systems Engineering Planning is performed concurrently and collaboratively with project planning, and involves developing and integrating technical plans to achieve the technical project objectives within the resource constraints and risk thresholds. The planning needs to include the success-critical stakeholders to ensure that necessary tasks are defined with the right timing in the life cycle to manage acceptable risk levels and avoid costly omissions. As indicated in (NASA December 2007, 1-360, Section 6.1), (Caltrans and USDOT 2005, 278, Section 3.4.2), (INCOSE 2010, Section 5.1), (DAU February 19, 2010, Section 4.5.1), and (USAF 2004, Chapter 4), SE planning is intended to provide the following elements in a form that best meets the project usage preferences:<br />
*Definition of the project from a technical perspective. <br />
*Definition or tailoring of engineering processes, practices, methods, and supporting enabling environments to be used to develop products or services, as well as transition and implementation of the products or services, as required by agreements.<br />
*Definition of the technical organizational, personnel, and team functions and responsibilities, as well as all disciplines required during the project life cycle.<br />
*Input to the definition of the appropriate life cycle model or approach for the products or services.<br />
*Definition and timing of technical reviews, product or service assessments, and control mechanisms across the life cycle, including the success criteria in terms of cost, schedule, and technical performance at identified project milestones. <br />
*Estimation of technical cost and schedule based on the effort needed to meet the requirements, which becomes input to project cost and schedule planning.<br />
*Determination of critical technologies and associated risks and actions needed to manage and transition the technologies.<br />
*Identification of linkages to other project management efforts.<br />
<br />
SE planning begins with analyzing the scope of technical work to be performed, and understanding the constraints, risks, and objectives that define and bound the solution space for the product or service. The planning includes estimating the size of the work products, establishing a schedule (or integrating the technical tasks into the project schedule), identification of risks that the planning must consider, and negotiating commitments. Iteration of these planning tasks may be necessary to establish a balanced plan with respect to cost, schedule, technical performance, and quality. The planning continues to evolve with each life cycle phase of the project. (NASA December 2007, 1-360, Section 6.1; SEI 1995, PA 12) <br />
<br />
SE planning requires collaboration with all programmatic and technical elements of the project to ensure a comprehensive and integrated planning effort for all of the project's technical aspects. The SE planning should account for the full scope of technical activities, including system development and definition, risk management, quality management, configuration management, measurement, information management, production, verification and test, integration, validation, and deployment. The SE planning integrates all SE functions to ensure that plans, requirements, operational concepts, and architectures are consistent and feasible.<br />
<br />
The scope of the planning can vary between planning a specific task and developing a major technical plan. The integrated planning effort will determine what level of planning and documentation of that planning is appropriate for the project. The integration of each plan with other higher level, peer, or subordinate plans is an essential part of SE planning. For the technical effort, the systems engineering management plan (SEMP), also known as the systems engineering plan (SEP), is the highest level technical plan. It is subordinate to the Project Plan, and often has a number of subordinate technical plans providing detail on specific technical focus areas.<br />
<br />
Task planning identifies the specific work products, deliverables, and success criteria for systems engineering effort in support of integrated planning and project objectives. The success criteria are defined in terms of cost, schedule, and technical performance at identified project milestones. Detailed task planning identifies specific resource requirements (skills, equipment, facilities, dollars) as a function of time and project milestones.<br />
<br />
SE planning is accomplished by both the acquirer and supplier. The SE planning set of activities is performed in the context of the enterprise. Enterprise activities establish and identify relevant policies and procedures for managing and executing the project management and technical effort; identifying the management and technical tasks, their interdependencies, risks and opportunities; and providing estimates of needed resources/budgets. Plans are updated and refined throughout the development process based on status updates and evolving project requirements. (SEI 2007)<br />
<br />
<br />
==SE Planning Process Overview==<br />
The SE planning process includes the following activities:<br />
*Define the project and technical work<br />
*Define the engineering processes, practices, methods, and supporting enabling environments<br />
*Define the life cycle, technical reviews, assessments, and control mechanisms<br />
*Define the technical organizational structure and resources<br />
*Define the schedule and cost of the technical effort<br />
*Develop plans<br />
*Obtain commitment to the Plans<br />
<br />
The figure below shows the SEMP and Integrated Plans.<br />
<br />
<br />
[[File:semp_and_integrated_plans.png | 400px|SEMP and Integrated Plans ]]<br />
<br />
<div style="text-align: center;"><br />
'''SEMP and Integrated Plans<br />
''' </div><br />
<br />
==Linkages to Other Systems Engineering Management Topics==<br />
The project planning process is closely coupled with the [[Measurement]], [[Assessment and Control]], [[Decision Management]], and [[Risk Management]] processes. <br />
<br />
The [[Measurement]] process provides inputs for estimation models. Estimates from Planning are used in [[Decision Management]]. Systems engineering [[Assessment and Control]] processes use Planning results for setting milestones and assessing progress. [[Risk Management]] uses the Planning cost models, schedule estimates, and estimate uncertainty distributions to support quantitative risk analysis (as desired).<br />
<br />
Additionally, Planning needs to use the outputs from [[Assessment and Control]] and [[Risk Management]] to ensure corrective actions have been accounted for in planning future activities. The planning may need to be updated based on results from technical reviews (from [[Assessment and Control]]), issues identified during the performance of [[Risk Management]] activities, or decisions made as a result of the decision management activities. (INCOSE 2010, Section 6.1)<br />
<br />
==Practical Considerations==<br />
Key pitfalls and good practices related to systems engineering planning are described in the next two sections.<br />
<br />
===Pitfalls===<br />
Some of the key pitfalls encountered in planning and performing SE Planning include: <br />
#Inadequate SE planning has significant adverse impacts on all other engineering activities. Although it may be tempting to “save time” by rushing the planning, inadequate planning can create additional cost and schedule due to omissions in the planning, lack of integration of efforts, infeasible schedules, etc. <br />
#Failing to use engineering staff members who are highly experienced, especially on similar projects, will likely result in inadequate planning. Due to the number of concurrent and high-priority tasks during the start of a project, less experienced engineers are often assigned significant roles in the SE planning. Even though the more experienced engineering staff members are busy early in the project with many other responsibilities, it is essential to assign the SE planning tasks to those with relevant experience. <br />
<br />
===Good Practices===<br />
Some good practices, gathered from the references, include:<br />
#Get technical resources from all disciplines involved in the planning process.<br />
#Resolve schedule and resource conflicts early.<br />
#Tasks should be as independent as possible.<br />
#Develop dependency networks to define task interdependencies.<br />
#Integrate risk management with the SE planning to identify areas that require special attention and/or trades. <br />
#The amount of management reserve should be based on the risk associated with the plan.<br />
#Use historical data for estimates and adjust for differences of the project.<br />
#Identify lead times and account for them in the planning, e.g., development of analytical tools.<br />
#Prepare to update plans as additional information becomes available or changes are needed.<br />
#Integrated product development teams (IPDTs) are often useful to ensure adequate communication across the necessary disciplines; timely integration of all design considerations, integration, and test; and consideration of the full range of risks that need to be addressed. Although some issues need to be managed when using them, IPDTs tend to break down the communication and knowledge stovepipes that often exist. <br />
#Additional good practices can be found in (Caltrans and USDOT 2005, 278, Section 3.4.2; NASA December 2007, 1-360, Section 6.1; INCOSE 2010, Section 5.1; USAF 2004, Chapter 4; and ISO/IEC/IEEE 2009, Clause 6.1).<br />
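As an illustrative sketch of practice 6 above (basing the amount of management reserve on the risk associated with the plan), the following example computes a reserve from the expected monetary value of a simple risk register. The function name, the expected-value formula, and all figures are assumptions for illustration, not part of the SEBoK guidance:<br />

```python
# Hypothetical sketch: sizing management reserve from a simple risk register.
# Reserve = sum of (probability x cost impact) over identified risks,
# i.e. the expected monetary value of the total risk exposure.

def management_reserve(risk_register):
    """Return a reserve based on expected risk exposure."""
    return sum(prob * impact for prob, impact in risk_register)

# Each entry: (probability of occurrence, cost impact if it occurs).
# Risk names and figures below are illustrative only.
risks = [
    (0.3, 100_000),  # supplier slip
    (0.1, 500_000),  # failed qualification test
    (0.5, 20_000),   # rework of interface documentation
]

reserve = management_reserve(risks)
print(reserve)  # 30000 + 50000 + 10000 = 90000.0
```

Other reserve policies (for example, percentile-based reserves from a Monte Carlo cost model) are equally defensible; the point is only that the reserve is derived from quantified risk rather than a fixed percentage.<br />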
<br />
==Glossary==<br />
===Acronyms===<br />
<br />
;SEMP : Systems Engineering Management Plan<br />
;SEP : Systems Engineering Plan<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
Caltrans, and USDOT. 2005. Systems engineering guidebook for ITS, version 1.1. Sacramento, CA, USA: California Department of Transportation (Caltrans) Division of Research & Innovation/U.S. Department of Transportation (USDOT), SEG for ITS 1.1. <br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense. <br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E). <br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
SEI. 1995. A systems engineering capability maturity model, version 1.1. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-95-MM-003. <br />
<br />
---. 2007. Capability maturity model integrated (CMMI) for development, version 1.2, measurement and analysis process area. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).<br />
<br />
===Additional References===<br />
<br />
ISO/IEC/IEEE. 2009. Systems and software engineering - life cycle processes - project management. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE 16326:2009(E). <br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Systems Engineering Management|<- Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Assessment and Control|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>

Systems Engineering Management (revision of 2011-08-09T21:28:35Z by Skmackin)
https://sebokwiki.org/w/index.php?title=Systems_Engineering_Management&diff=9663
<p>Skmackin: </p>
<hr />
<div>The Systems Engineering Management (SEM) Knowledge Area (KA) is the knowledge associated with managing the resources and assets allocated to perform systems engineering activities, often in the context of a project or a service, but sometimes in the context of a less well-defined activity. SEM is distinguished from general project management by its focus on the technical or engineering aspects of a project. It also includes exploratory research and development (R&D) activities at the enterprise level in commercial or government organizations.<br />
<br />
Implementing systems engineering requires the coordination of technical and managerial endeavors; success in the technical effort is not possible in the absence of the managerial effort (Blanchard 1992, 341-342). Management provides the planning, organizational structure, collaborative environment, and program controls to ensure that stakeholder needs are met.<br />
<br />
<br />
<br />
[[File:Scope_BoundariesSE_PM_SM.png | 500px | center | Scope_BoundariesSE_PM_SM.png]]<br />
<br />
<div style="text-align: center;"><br />
'''Figure: Systems Engineering Management Boundaries<br />
''' </div><br />
<br />
===Topics===<br />
The topics contained within this knowledge area include:<br />
*[[Planning]]<br />
*[[Assessment and Control]]<br />
*[[Risk Management]]<br />
*[[Measurement]]<br />
*[[Decision Management]]<br />
*[[Configuration Management]]<br />
*[[Information Management]]<br />
*[[Quality Management]]<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Logistics|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[Planning|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Knowledge Area]]</div>

Logistics (revision of 2011-08-09T21:24:50Z by Skmackin)
https://sebokwiki.org/w/index.php?title=Logistics&diff=9662
<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
There are several definitions for logistics within systems engineering and the definition used will determine what activities are considered part of logistics. The SEBoK defines logistics as “the science of planning and implementing the acquisition and use of the resources necessary to sustain the operation of a system.” (Cooke HUM - The Government Computer Magazine)<br />
<br />
===Process Approaches===<br />
<br />
This section is in early draft form; it will be completed for SEBoK version 0.5.<br />
<br />
*Document initial and life-cycle resource requirements for performing operations and support. This includes identifying and providing for initial spares, operational and support training capabilities, facilities, etc. Eventual disposal of the system, as well as disposal of any existing systems being replaced, should also be considered.<br />
*Integrated Logistics Support (ILS). Logistics supports many elements of system operations and maintenance. ILS deliverables are the resources necessary to sustain the capabilities required of the system. The most common deliverables include:<br />
**Training and Training System.<br />
**Spares and Repair Parts (including Material Management Data).<br />
**Consumables and Raw Materials.<br />
**Technical Data/Documentation.<br />
**Support & Test Equipment (STE).<br />
**Facilities.<br />
**Maintenance Repair Capability.<br />
**Storage and Preservation.<br />
<br />
===Methods & Tools===<br />
<br />
This section has not yet been written; it will be included in SEBoK version 0.5.<br />
<br />
===Evaluation===<br />
<br />
This section has not yet been written; it will be included in SEBoK version 0.5.<br />
<br />
==Practical Considerations==<br />
<br />
During deployment and use, considerations of cost, schedule, and performance or quality must be balanced. Often, cost and schedule are non-negotiable; as a result, when there is a budget crunch, there is a tendency to cut back on the planned level of ILS. When this occurs, the systems engineer has to take note of the deficiencies in ILS and assess their impact on supportability and sustainability. For example, because spare parts may require long lead times (18 to 27 months is not uncommon in the defense sector), reordering of spares may need to be initiated almost immediately in the use phase, even if a 2 to 3 year lay-in of service spares was catered for as part of the initial support package. <br />
As the operators become more proficient with the system, expectations of how the system can be used increase. This may result in increased utilization of the system and demands for change requests. (For more information on system evolution, please see the System Life Management KA.) An increase in utilization is likely to lead to logistics bottlenecks, such as a buildup of the return pipeline or a shortage of spare parts. SE activities would include review of the situation, conduct of logistics support analysis, and/or reliability, maintainability, and availability reviews. (For additional information, please see the Cross-Cutting KA.)<br />
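The lead-time arithmetic discussed above (long spares lead times versus the coverage provided by the initial lay-in) can be sketched as a simple calculation. This is illustrative only; the function name and safety-margin parameter are assumptions:<br />

```python
def reorder_month(initial_coverage_months, lead_time_months,
                  safety_margin_months=0):
    """Month into the use phase by which a spares reorder must be placed
    so that replenishment arrives before the initial lay-in runs out.
    A non-positive result means the order must be placed before (or at)
    the start of the use phase."""
    return initial_coverage_months - lead_time_months - safety_margin_months

# Figures echo the text: a 2-3 year lay-in against 18-27 month lead times.
print(reorder_month(24, 27))  # -3 -> order before the use phase even starts
print(reorder_month(36, 18))  # 18 -> order by month 18 of the use phase
```

This is why the text notes that reordering may need to begin almost immediately upon entering the use phase.<br />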
<br />
==Application to Product, Enterprise, and Service Systems Engineering==<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Maintenance|<- Previous Article]] | [[System Deployment and Use|Parent Article]] | [[Systems Engineering Management|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>

System Maintenance (revision of 2011-08-09T21:22:45Z by Skmackin)
https://sebokwiki.org/w/index.php?title=System_Maintenance&diff=9661
<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
For a system to be sustained throughout its system life cycle, the Maintenance Process has to be executed concurrently with the Operations Process (ISO 15288 Clause 6.4.9). The requirements for maintenance have to be defined upfront during the Stakeholder’s Requirement Definition Process (Clause 6.4.1). Considerations include:<br />
<br />
*Maximizing system availability to meet the operational requirements. This has to take into account the designed-in reliability and maintainability of the system.<br />
<br />
*Preserving system operating potential through proper planning of scheduled maintenance. This requires a reliability-centered maintenance strategy that incorporates preventive maintenance in order to preempt failures, thereby extending the mean time between corrective maintenance actions and enhancing the availability of the system. <br />
<br />
*Outsourcing non-critical maintenance activities so as to optimize scarce technical manpower resources.<br />
<br />
*Harnessing information technology for maintenance management. This involves rigorous and systematic capture and tracking of operating and maintenance activity to facilitate analysis and planning.<br />
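The relationship among designed-in reliability, maintainability, and availability noted above is often summarized by the inherent-availability formula A<sub>i</sub> = MTBF / (MTBF + MTTR). The sketch below is a minimal illustration with assumed figures, not a SEBoK-prescribed calculation:<br />

```python
def inherent_availability(mtbf_hours, mttr_hours):
    """Inherent availability: the fraction of time the system is up,
    derived from designed-in reliability (mean time between failures)
    and maintainability (mean time to repair)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures: 500 h between failures, 5 h to repair.
print(round(inherent_availability(500.0, 5.0), 4))  # 0.9901
```

Preventive maintenance aims to raise the effective MTBF term (fewer corrective actions), which is how it enhances availability in this formulation.<br />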
<br />
Maintenance management concerns the development and review of maintenance plans, securing and coordinating resources such as budget, service parts provisioning, etc., and management of supporting tasks such as contract administration, engineering support and quality assurance.<br />
<br />
<br />
===Process Approaches===<br />
<br />
The purpose of the Maintenance Process is to sustain the capability of the system to provide a service. This process monitors the system’s capability to deliver services, records problems for analysis, takes corrective, adaptive, perfective and preventive actions and confirms restored capability.<br />
<br />
As a result of the successful implementation of the Maintenance Process:<br />
*A maintenance strategy is developed.<br />
*Maintenance constraints are provided as inputs to requirements.<br />
*Replacement system elements are made available.<br />
*Services meeting stakeholder requirements are sustained.<br />
*The need for corrective design changes is reported.<br />
*Failure and lifetime data is recorded.<br />
<br />
The project should implement the following activities and tasks in accordance with applicable organization policies and procedures with respect to the Maintenance Process:<br />
<br />
*System preparation for operations, including system performance verification before operation.<br />
*Scheduled servicing such as daily inspection/checks, servicing, cleaning.<br />
*Unscheduled servicing (carrying out fault detection and isolation to the faulty replaceable unit and replacement of the failed unit).<br />
*Re-configuration of the system for different roles or functions.<br />
*Higher-level scheduled servicing (above routine servicing but below depot level).<br />
*Unscheduled servicing (carrying out more complicated fault isolation to the faulty replaceable unit and replacement of the failed unit).<br />
*Minor modifications.<br />
*Minor damage repairs.<br />
*Major scheduled servicing, e.g. overhaul, corrosion treatment<br />
*Major repairs (beyond normal removal and replacement tasks). <br />
*Major modifications.<br />
<br />
The maintenance plan specifies the scheduled servicing tasks and intervals (Preventive Maintenance) and the unscheduled servicing tasks (Adaptive or Corrective Maintenance). All the tasks in the maintenance plan are allocated to the various maintenance agencies. A Maintenance Allocation Chart is developed to tag the maintenance tasks to the appropriate maintenance agencies. These include in-service or in-house work centers, approved contractors, affiliated maintenance or repair facilities, OEMs, etc. The maintenance plan also establishes the requirements for the support resources. <br />
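A Maintenance Allocation Chart, as described above, can be thought of as a mapping from maintenance tasks to the agencies responsible for them. The sketch below is hypothetical; the task names, agency names, and fallback behavior are illustrative assumptions:<br />

```python
# Hypothetical sketch of a Maintenance Allocation Chart (MAC): each
# maintenance task from the maintenance plan is tagged to an agency
# (in-house work center, approved contractor, OEM, etc.).
mac = {
    "daily inspection":      "operator work center",
    "fault isolation (LRU)": "in-house work center",
    "overhaul":              "depot / OEM",
    "corrosion treatment":   "approved contractor",
}

def agency_for(task):
    """Look up the responsible agency; unallocated tasks are flagged."""
    return mac.get(task, "unallocated - escalate to maintenance planning")

print(agency_for("overhaul"))            # depot / OEM
print(agency_for("minor modification"))  # unallocated - escalate to maintenance planning
```

In practice a MAC also records the maintenance level, task interval, and required support resources alongside the responsible agency.<br />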
<br />
Related activities such as resource planning, budgeting, performance monitoring, upgrade, longer term supportability and sustenance also need to be managed. These activities are being planned, managed and executed over a longer time horizon and they concern the well being of the system over the entire life cycle. <br />
<br />
Proper maintenance of the system relies very much on the availability of support resources such as Support and Test Equipment (STE), Technical Data, and Facilities. These have to be factored in during the Acquisition Agreement Process.<br />
<br />
===Methods & Tools===<br />
<br />
Training and Certification<br />
<br />
Adequate training must be provided for the technical personnel maintaining the system. While initial training may have been provided during the Transition Process, additional personnel may need to be trained to cope with the increased number of systems being fielded as well as to cater to staff turnover. It is important to define the certification standards and contract for the training materials as part of the Supply Agreement.<br />
<br />
==Practical Considerations==<br />
<br />
The organization responsible for maintaining the system should have clear thresholds established to determine whether a change requested by end users, changes to correct latent defects, or changes required to fulfill the evolving mission are within the scope of a maintenance change or require a more formal project to step through the entire systems engineering life-cycle. Evaluation criteria to make such a decision could include cost, schedule, risk, or criticality characteristics.<br />
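A threshold-based decision rule of the kind described above might be sketched as follows; the criteria names, limit values, and return strings are illustrative assumptions rather than recommended thresholds:<br />

```python
def change_scope(cost, schedule_weeks, risk, criticality,
                 cost_limit=50_000, schedule_limit=4):
    """Classify a requested change as a maintenance action or as a
    formal project requiring the full SE life cycle, using threshold
    criteria (cost, schedule, risk, criticality). Limits are illustrative."""
    if (cost <= cost_limit and schedule_weeks <= schedule_limit
            and risk == "low" and criticality == "low"):
        return "maintenance change"
    return "formal project (full SE life cycle)"

print(change_scope(10_000, 2, "low", "low"))   # maintenance change
print(change_scope(10_000, 2, "high", "low"))  # formal project (full SE life cycle)
```

The value of such a rule is less the specific limits than that the thresholds are agreed in advance, so scope decisions are made consistently rather than case by case.<br />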
<br />
<br />
<br />
==References== <br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
Blanchard, B.S., and W.J. Fabrycky. 1997. Systems engineering and analysis, 3rd ed. Prentice Hall.<br />
<br />
DAU. 2009. Defense acquisition guidebook. Defense Acquisition University, https://acc.dau.mil/CommunityBrowser.aspx?id=289207, 17 December 2009.<br />
<br />
INCOSE. 2007. INCOSE systems engineering handbook, version 3.1, section 6.7, acquisition process. INCOSE-TP-2003-002-03.1, August 2007.<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes, 2nd ed. ISO/IEC 15288:2008 (IEEE Std 15288-2008). Software & Systems Engineering Standards Committee, IEEE Computer Society.<br />
<br />
Systems engineering book of knowledge, ver. 2.0. Singapore: Institution of Engineers, Singapore.<br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Operation of the System|<- Previous Article]] | [[System Deployment and Use|Parent Article]] | [[Logistics|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 3]][[Category:Topic]]</div>

System Operation (revision of 2011-08-09T21:20:34Z by Skmackin)
https://sebokwiki.org/w/index.php?title=System_Operation&diff=9660
<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
The role of SE during the operation of the system consists of ensuring that the system maintains key mission and business functions and is operationally effective; that maintenance actions and other major changes are performed according to the long-term vision of the system, meet the evolving needs of stakeholders, and are consistent with the architecture; and that the eventual decommissioning or disposal of the system occurs according to disposal/retirement plans and is compliant with relevant laws and regulations. (For additional information on disposal or retirement, please see the System Life Cycle Management KA.) When the system-of-interest (SOI) replaces an existing or legacy system, it may be necessary to manage the migration between systems such that persistent stakeholders do not experience a breakdown in services. (INCOSE 2010, p. 145)<br />
<br />
===Definition & Purpose===<br />
<br />
This process assigns personnel to operate the system, and monitors the services and operator‐system performance. In order to sustain services it identifies and analyzes operational problems in relation to agreements, stakeholder requirements and organizational constraints. (ISO/IEC 2009, 1)<br />
<br />
===Process Approaches===<br />
<br />
During the operational phase of a program, SE activities are focused on ensuring the system maintains certain operational attributes and usefulness throughout its expected life span. Maintaining operational effectiveness consists of evaluating certain operationally relevant attributes and trends, taking actions to prevent degradation of performance (see 11.4 Maintenance, below), evolving the system to meet changing mission or business needs (see the System Life Management KA), and eventually decommissioning the system and disposing of its components (see the System Life Management KA). Several activities are specifically associated with system use, including:<br />
<br />
*Development of training requirements for operational and support personnel. Identification of training requirements is generally most effective when they are developed early and fulfilled consistently with operational or support needs before system transition.<br />
<br />
*Evaluation of the readiness of the operational and support personnel to operate and assume support responsibility for the system. Evaluation of personnel readiness may include completion of required training or demonstration of capability to operate or support the system. <br />
<br />
*Evaluation of Operational Effectiveness. Early in the planning phases of a new system or capability, measures of operational effectiveness are established based on mission and business goals. (For more information, please see the System Definition KA.) Many times, these measures are described as “key technical performance measures” and are used to support transition of the system into operational use (See Section 8.9, Systems Engineering Measurement). These measures are equally important during system operation to ensure certain important system quality attributes are maintained. These attributes are unique for each system and represent characteristics describing the usefulness of the system as defined and agreed to by system stakeholders. Systems engineers monitor and analyze these measurements and recommend actions. <br />
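Monitoring a key technical performance measure against its agreed threshold, as described above, can be sketched as follows; the function name, threshold value, and sample data are illustrative assumptions:<br />

```python
def assess_tpm(measurements, threshold, higher_is_better=True):
    """Flag a technical performance measure whose latest value breaches
    the threshold agreed with stakeholders."""
    latest = measurements[-1]
    ok = latest >= threshold if higher_is_better else latest <= threshold
    return "within threshold" if ok else "breach - recommend corrective action"

# e.g. mission availability sampled monthly against an assumed 0.95 target
print(assess_tpm([0.97, 0.96, 0.93], threshold=0.95))  # breach - recommend corrective action
```

A real implementation would typically also examine the trend (not just the latest sample) so degradation can be acted on before the threshold is crossed.<br />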
<br />
===Applicable Methods & Tools===<br />
<br />
*Training and Certification. Adequate training must be provided for the operators who are required to operate the system. The objectives of training are to:<br />
**Provide initial training for all operators in order to equip them with the skill and knowledge to operate the system. Ideally, this process will begin prior to system transition and will facilitate delivery of the system. It is important to define the certification standards and required training materials up front. (For more information on material supply, please see section 11.6 Logistics.)<br />
**Provide continuation training to ensure currency of knowledge.<br />
**Monitor the qualification/certification of the operators to ensure that all personnel operating the system meet the minimum skill requirements, and that their currency remains valid.<br />
**Monitor and evaluate the job performance to determine the adequacy of the training program.<br />
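Monitoring operator qualification/certification currency, as in the third objective above, amounts to checking each operator's certification expiry against the current date. A minimal sketch with hypothetical operator names and dates:<br />

```python
from datetime import date

def currency_status(operators, today):
    """Partition operators into those with current certifications and
    those whose certifications have lapsed."""
    current = [name for name, expiry in operators.items() if expiry >= today]
    lapsed = [name for name, expiry in operators.items() if expiry < today]
    return current, lapsed

# Hypothetical certification records: operator -> certification expiry date
ops = {"Lee": date(2025, 6, 30), "Kim": date(2023, 1, 15)}
current, lapsed = currency_status(ops, today=date(2024, 3, 19))
print(current, lapsed)  # ['Lee'] ['Kim']
```

Operators in the lapsed list would be routed to continuation training and recertification before resuming system operation.<br />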
<br />
<br />
<br />
==Practical Consideration==<br />
The Operation Process sustains system services by assigning trained personnel to operate the system, monitoring operator-system performance, and monitoring the system performance. In order to sustain services it identifies and analyzes operational problems in relation to agreements, stakeholder requirements and organizational constraints. When the system replaces an existing system, it may be necessary to manage the migration between systems such that persistent stakeholders do not experience a breakdown in services.<br />
<br />
As a result of the successful implementation of the Operation Process:<br />
*An operation strategy is defined and refined along the way.<br />
*Services that meet stakeholder requirements are delivered.<br />
*Approved corrective action requests are satisfactorily completed.<br />
*Stakeholder satisfaction is maintained.<br />
<br />
Outputs of the Operation Process include:<br />
*Operational strategy – including staffing and sustainment of enabling systems and materials. This may incorporate the strategy first defined during the Transition Process.<br />
*System performance reports (statistics, usage data, and operational cost data)<br />
*System trouble/anomaly reports with recommendations for appropriate action<br />
*Operational availability constraints – to influence future design and specification of similar systems or reused systems-elements<br />
<br />
Activities of the Operation Process include:<br />
*Provide operator training to sustain a pool of operators<br />
*Track system performance and account for operational availability<br />
*Perform operational analysis<br />
*Manage operational support logistics<br />
*Document system status and actions taken<br />
*Report malfunctions and recommendations for improvement<br />
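Tracking system performance and accounting for operational availability, as listed in the activities above, can be done by accumulating logged up/down hours from system status reports. A hedged sketch; the log format and figures are assumptions:<br />

```python
def observed_availability(status_log):
    """Observed operational availability: the fraction of logged hours
    in which the system was up, accumulated from (state, hours) entries
    such as daily system status reports."""
    up = sum(hours for state, hours in status_log if state == "up")
    total = sum(hours for _, hours in status_log)
    return up / total

# Illustrative log: 960 up-hours out of 1000 logged hours
log = [("up", 700.0), ("down", 20.0), ("up", 260.0), ("down", 20.0)]
print(observed_availability(log))  # 0.96
```

Comparing this observed figure against the designed-in availability helps identify whether shortfalls stem from the system itself or from support and logistics delays.<br />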
<br />
==References== <br />
<br />
<br />
<br />
<br />
===Primary References===<br />
Blanchard, B.S., and W.J. Fabrycky. 1997. Systems engineering and analysis, 3rd ed. Prentice Hall.<br />
<br />
INCOSE. 2007. INCOSE systems engineering handbook, version 3.1, section 6.7, acquisition process. INCOSE-TP-2003-002-03.1, August 2007.<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes, 2nd ed. ISO/IEC 15288:2008 (IEEE Std 15288-2008). Software & Systems Engineering Standards Committee, IEEE Computer Society.<br />
<br />
Systems engineering book of knowledge, ver. 2.0. Singapore: Institution of Engineers, Singapore.<br />
<br />
<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Deployment|<- Previous Article]] | [[System Deployment and Use|Parent Article]] | [[System Maintenance|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>

System Transition (revision of 2011-08-09T21:19:30Z by Skmackin)
https://sebokwiki.org/w/index.php?title=System_Transition&diff=9659
<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
As part of system qualification, on-site installation, checkout, integration, and test activities must be carried out to ensure that the system is fit to be deployed into the field and/or put into an operational context. Transition is the process that bridges the gap between qualification and use; it deals explicitly with the handoff from development to logistics, operations, maintenance, and support.<br />
===Definition & Purpose===<br />
There are many different approaches to transition, or deployment, and many different views on what is included within transition. The SEBoK uses the ISO/IEC 15288:2008 definition of transition, as seen below (2008):<br />
:''[The transition] process installs a verified system, together with relevant enabling systems, e.g., operating system, support system, operator training system, user training system, as defined in agreements. This process is used at each level in the system structure and in each stage to complete the criteria established for exiting the stage.''<br />
Thinking in a linear fashion, the system is transitioned into operation and then used and maintained in the operational environment. However, there are other views on transition. For example, the NASA Systems Engineering Handbook states that transition can include delivery for end use as well as delivery of components for integration (NASA December 2007, 1-360). Using this view, transition is the mechanism for moving system components from implementation activities into integration activities. The NASA discussion of transition also implies that transition can include sustainment activities (NASA December 2007, 1-360):<br />
:''The act of delivery or moving of a product from the location where the product has been implemented or integrated, as well as verified and validated, to a customer.''<br />
Many systems are deployed using an iterative or evolutionary approach where operationally useful capabilities are developed and deployed incrementally. While these operationally useful capabilities are fully deployed and transitioned into operational use, transition of logistics, maintenance and support may occur incrementally or be delayed until after the full system capability is delivered.<br />
===Process Approaches===<br />
Just as there are multiple views on the definition of transition and deployment, there are also several ways to divide the activities required for transition. For example, the NASA Systems Engineering Handbook definition of transition states, “This act can include packaging, handling, storing, moving, transporting, installing, and sustainment activities.” However, the SEBoK includes the topic of sustainment as separate from transition; this is instead covered under the Maintenance and Logistics topics (please see Sections 11.5 and 11.6). The International Council on Systems Engineering (INCOSE) views the transition process as two-step: planning and performance. Though there are several processes for deployment and transition, most generally include the following activities as a minimum:<br />
*Develop a Deployment/Transition Strategy. Planning for transition activities ideally would begin early in the SE life cycle, though it is possible to conduct these activities concurrently with realization activities. Planning should generally include some consideration of the common lower-level activities of installation, checkout, integration, and tests. Such activities are crucial to demonstrate that the system and the interfaces with the operational environment can function as intended and meet the contractual system specifications. For these activities to be effectively managed and efficiently implemented, the criteria, responsibility, and procedures for carrying out these activities should be clearly established and agreed upon during the planning phase.<br />
*Develop plans for transitioning systems or system capabilities into operational use and support. Transition plans for the system or incremental system capabilities should be consistent with the overall transition strategy and agreed to by relevant stakeholders. <br />
Planning for transition will often include establishing a strategy for support, which may include organic support infrastructures, contractor logistics support, or other sources. (Bernard et al. 2005, 1-49) It can also include defining the levels of support to be established. The strategy is important because it drives most of the other transition planning activities, as well as product design considerations.<br />
Transition plans should include considerations for coordination with:<br />
*Installation. Installation generally refers to the activities required to physically instate the system; this will likely include connecting interfaces to other systems such as electrical, computer, or security systems, and may include software interfaces as well. Installation planning should generally document the complexity of the system, the range of environmental conditions expected in the operational environment, any interface specifications, and human factors requirements such as safety. When real-world conditions require changes in the installation requirements, these should be documented and discussed with the relevant stakeholders.<br />
*Integration. Though system integration activities will generally be performed prior to installation, there may be additional steps for integrating the system into its operational setting. Additionally, if the system is being delivered incrementally, there will likely be integration steps associated with the transition. (For more information on integration, please see the System Realization Knowledge Area.)<br />
*Verification and Validation (V&V). At this stage, V&V for physical, electrical, and mechanical checks may be performed in order to verify that the system has been appropriately installed. Acceptance tests conducted after delivery may become part of this process. (For additional information on V&V, please see the System Realization Knowledge Area.) There are several types of acceptance tests which may be used:<br />
**On-site Acceptance Test (OSAT). This test includes any field acceptance test and is performed only after the system has successfully been situated in the operational environment. It may consist of functional tests to demonstrate that the system is functioning and performing properly.<br />
**Field Acceptance Test. This test includes flight and sea acceptance tests and is performed, if applicable, only after the system has successfully passed the OSAT. The purpose of field testing is to demonstrate that the system meets the performance specifications called for in the system specifications in the actual operating environment.<br />
**Operational Test and Evaluation (OT&E). An OT&E consists of a test series designed to estimate the operational effectiveness of the system. <br />
*Evaluate the readiness of the system to transition into operations. This is based upon the transition criteria identified in the transition plan. These criteria should support an objective evaluation of the system’s readiness for transition. The integration, verification, and validation (IV&V) activities associated with transition may be used to gauge whether the system meets the transition criteria.<br />
*Analyze the results of transition activities and identify any necessary follow-on actions. As a result of the analysis, additional transition activities and actions may be required. The analysis may also identify areas for improvement in future transition activities. <br />
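The criteria-based readiness evaluation described above can be sketched as a simple check; the criterion names, thresholds, and measured values below are purely hypothetical.<br />

```python
# Sketch: evaluating transition readiness against agreed transition criteria.
# All criterion names, thresholds, and values are illustrative only.

def evaluate_transition_readiness(results, criteria):
    """Compare IV&V results against transition criteria.

    results  -- dict mapping criterion name to the measured/observed value
    criteria -- dict mapping criterion name to a predicate on that value
    Returns (ready, failed) where failed lists the unmet criteria.
    """
    failed = [name for name, passes in criteria.items()
              if not passes(results.get(name))]
    return (not failed, failed)

# Illustrative criteria for a transition readiness review
criteria = {
    "osat_passed": lambda v: v is True,
    "open_severity_1_defects": lambda v: v == 0,
    "operator_training_complete_pct": lambda v: v is not None and v >= 95.0,
}

results = {
    "osat_passed": True,
    "open_severity_1_defects": 0,
    "operator_training_complete_pct": 97.5,
}

ready, failed = evaluate_transition_readiness(results, criteria)
```

Each criterion is expressed as a predicate so that pass/fail rules of different kinds (booleans, counts, percentages) can be mixed in a single objective review.<br />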
Common issues that require additional consideration and SE activities are the utilization or replacement of legacy systems. It is also common for an organization to continue testing into the early operational phase. The following activities support these circumstances.<br />
*System Run-in. After the successful completion of the various acceptance tests, the system(s) will be handed over to the user or designated post-deployment support organization. The tested system(s) may have to be verified for a stated period (called the system run-in, normally 1 to 2 years) for the adequacy of reliability and maintainability (R&M) and integrated logistics support (ILS) deliverables. R&M are vital system operational characteristics with a dominant impact on operational effectiveness, the economy of in-service maintenance support, and the life cycle cost (LCC). <br />
*Phasing-In/Phasing-Out. The need for phasing-in will usually be identified during system definition, when it is clear that the new system entails the replacement of an existing system(s). (For additional information, please see the System Definition KA.) These activities should help to minimize disruption to operations and, at the same time, minimize the adverse effect on operational readiness. It is also important that the phasing-in of a new system and the phasing-out of an existing system occur in parallel with the system run-in activities to maximize resource utilization. Other aspects of phasing-in/phasing-out to be considered include: <br />
**Proper planning for the phasing out of an existing system, if any.<br />
**For multi-user or complex systems, phase-by-phase introduction of the system according to levels of command, formation hierarchy, etc.<br />
**Minimum disruption to the current operations of the users.<br />
**Establishment of a feedback system from users on problems encountered in operation, etc.<br />
**Disposal process, including handling of hazardous items, cost of disposal, approval, etc.<br />
===Applicable Methods & Tools===<br />
*Reliability Demonstration Testing (RDT). The system may have to undergo RDT to ensure that it meets its contractual R&M guarantees. RDT is conducted under actual field conditions, especially for large systems purchased in small quantities. During RDT, the system is operated in the field for a stated test duration and all field data are systematically recorded. At the end of the test period, the RDT data are analyzed; this analysis should facilitate determination of system reliability. One possible output of this analysis is shown in the figure below.<br />
<br />
[[File:052611_SJ_notional_reliability_analysis.png|400px|Notional Reliability Analysis]]<br />
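As a minimal sketch of the RDT data analysis described above, the following assumes a constant failure rate (exponential model) and pooled field data; the figures are illustrative, not from any real test programme.<br />

```python
# Sketch: minimal Reliability Demonstration Test (RDT) data analysis under a
# constant-failure-rate (exponential) assumption. Numbers are illustrative.
import math

def analyze_rdt(total_operating_hours, failure_count):
    """Return (failure_rate, mtbf) point estimates from pooled field data."""
    failure_rate = failure_count / total_operating_hours  # failures per hour
    mtbf = 1.0 / failure_rate if failure_rate > 0 else math.inf
    return failure_rate, mtbf

def mission_reliability(mtbf, mission_hours):
    """R(t) = exp(-t / MTBF) under the exponential assumption."""
    return math.exp(-mission_hours / mtbf)

# Example: 4 failures observed over 20,000 pooled operating hours
rate, mtbf = analyze_rdt(20_000, 4)     # point estimate: MTBF = 5000 h
r_100 = mission_reliability(mtbf, 100)  # reliability over a 100 h mission
```

A real analysis would also put confidence bounds on the MTBF estimate and check the constant-failure-rate assumption against the recorded failure times.<br />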
<br />
==References== <br />
<br />
<br />
===Primary References===<br />
INCOSE. 2007. ''Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities'', version 3.1, section 6.7 (Acquisition Process). INCOSE-TP-2003-002-03.1.<br />
<br />
ISO/IEC. 2008. ''Systems and software engineering – System life cycle processes'', 2nd ed. ISO/IEC 15288:2008 (IEEE Std 15288-2008). Geneva, Switzerland: International Organization for Standardization.<br />
<br />
NASA. 2007. ''Systems Engineering Handbook''. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Deployment and Use|<- Previous Article]] | [[System Deployment and Use|Parent Article]] | [[Operation of the System|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Deployment_and_Use&diff=9658System Deployment and Use2011-08-09T21:15:06Z<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
System deployment and use are critical systems engineering (SE) activities that ensure that the developed system is operationally acceptable and that the responsibility for the effective, efficient, and safe operation of the system is transferred to the owner. System deployment includes transition of the capability to the ultimate end-user, as well as transition of support and maintenance responsibilities to the post-deployment support organization or organizations. It may include a period of reliability demonstration tests and the phasing out of legacy systems that the developed system replaces. System use includes a continual assessment of the operational effectiveness of the deployed system or service, identification of mission threat and operational risk, and performance of the actions required to maintain operational effectiveness or evolve the capability to meet changing needs. Evolution of the operational system may occur with smaller maintenance actions or, if the changes cross an agreed-to threshold (complexity, risk, cost, etc.), may require a formal development project with deliberate planning and SE activities resulting in an enhanced system.<br />
<br />
As the operational phase is generally the longest in the system life cycle, activities that may occur during operation are allocated between two knowledge areas (KAs): System Deployment and Use and System Life Cycle Management. The System Life Cycle Management KA specifically deals with SE activities required for system evolution and end of system life: these include service life extension, capability updates/upgrades and modernization during system operation, and system disposal and retirement. In contrast, the System Deployment and Use KA specifically deals with activities required to ensure that system operation can continue as expected. This includes the following topics:<br />
<br />
*Deployment/Transition<br />
<br />
*System Use<br />
<br />
*System Maintenance<br />
<br />
Planning for system deployment and use should begin early in the SE process to ensure successful transition into operational use.<br />
<br />
===Topics===<br />
The topics contained within this knowledge area include:<br />
*[[System Deployment]]<br />
*[[Operation of the System]]<br />
*[[System Maintenance]]<br />
*[[Logistics]]<br />
<br />
==System Deployment and Use Fundamentals==<br />
<br />
System deployment and use includes the processes used to plan for and manage the transition of new or evolved systems and capabilities into operational use and the transition of support responsibilities to the eventual maintenance or support organization. The use stage normally represents the longest period of a system life cycle and, hence, generally accounts for the largest portion of the life cycle cost. These activities need to be properly managed in order to evaluate the actual system performance, effectiveness, and cost in the intended environment and within the specified utilization over the life cycle. System use also includes the continuation of personnel training and certification. <br />
As part of deployment/transition activities, special conditions that may apply during the eventual decommissioning or disposal of the system, or of a legacy system it may replace, are identified and accommodated in life cycle plans and system architectures and designs (see System Development KA). SE leadership ensures that the developed system meets the specified requirements, that it can be used in the intended environment, and that, when the system is transitioned into operation, it achieves the users’ defined mission capabilities and can be maintained throughout the intended life cycle. <br />
SE ensures that plans and clear transition criteria into operations are developed and agreed to by relevant stakeholders, and that planning is completed for system maintenance and support after the system is deployed. These plans should generally include reasonable accommodation for planned and potential evolution of the system and its eventual removal from operational use. (For additional information on evolution and retirement, please see the System Life Cycle Management KA.)<br />
<br />
==Practical Considerations==<br />
<br />
==Glossary==<br />
<br />
<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
===Additional References===<br />
<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Verification and Validation|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[System Deployment|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Knowledge Area]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Verification&diff=9657System Verification2011-08-09T21:13:54Z<p>Skmackin: </p>
<hr />
<div><br />
<br />
==Introduction, Definition and Purpose==<br />
'''Introduction''' - Verification is a set of actions used to check the correctness of any element, such as a System Element, a system, a document, a service, a task, or a requirement. Validation is a set of actions used to check the compliance of these elements with their intended purpose.<br />
<br />
These verification and validation actions are planned and carried out throughout the life cycle of the system. Verification and validation are generic terms that need to be instantiated within the context in which they occur. Understood as processes, Verification and Validation are transverse activities that span every life cycle stage of the system. In particular, during the development cycle of the system, the Verification Process and the Validation Process are performed in parallel with the System Definition and System Realization processes, and apply to every activity and to every product resulting from an activity. The activities of each life cycle process and those of the Verification and Validation Processes are interwoven. <br />
<br />
The Integration Process makes intensive use of the Verification Process. The Verification Process is performed iteratively on every engineering element produced.<br />
<br />
The Validation Process generally occurs at the end of a set of life cycle tasks or activities, and at least at every milestone of a development project. The Validation Process is not limited to a phase at the end of the development of the system. It may be performed iteratively on every engineering element produced during development, and may begin with the validation of the expressed Stakeholders' Requirements. The Validation Process applied to the completely integrated system is often called Final Validation – see [[System Integration]] Figure 1.<br />
<br />
<br />
'''Definitions and Purposes'''<br />
<br />
'''The purpose of Verification''', as a generic action, is to identify the faults/defects introduced at the time of any transformation of inputs into outputs. Verification is used to prove that the transformation was made according to the selected and appropriate methods, techniques, standards, or rules. If the verification cannot be performed on the transformation itself, the outcomes of the transformation are used to establish evidence that these outcomes have the expected characteristics. Verification is used to prove that the transformation was done in the “right” way.<br />
<br />
'''The purpose of Validation''', as a generic action, is to establish the compliance of any activity's output with the inputs of that activity. Validation is used to prove that the transformation of inputs produced the expected, "right" result.<br />
<br />
<br />
The term Verification is often associated with the term Validation, and the two are understood as a single concept, Verification and Validation (V&V). Validation is used to ensure that “one is working the right problem,” whereas Verification is used to ensure that “one has solved the problem right.” (Martin 1997)<br />
<br />
Verification and Validation are based on tangible evidence: that is, on information whose veracity can be demonstrated by factual results obtained through techniques such as inspection, measurement, test, analysis, or calculation.<br />
<br />
<br />
'''Verifying a system''' (Product, Service, or Enterprise) consists of comparing the realized characteristics or properties of the Product, Service, or Enterprise against its expected Design Properties. These Design Properties are either independent of the System Requirements (state of the art) or specific, i.e., derived from the System Requirements.<br />
<br />
<br />
'''Validating a system''' (Product, Service, or Enterprise) consists of demonstrating that the Product, Service, or Enterprise satisfies its System Requirements, and possibly its Stakeholder Requirements, depending on the contractual practices of the industrial sector concerned. From a global point of view, validating a system consists of acquiring confidence in its ability to achieve its intended mission or use under specific operational conditions.<br />
<br />
<br />
'''Remarks about definitions''': <br />
<br />
There are several books and standards that provide different definitions of Verification and Validation. The most generally accepted definitions can be found in ISO/IEC 12207:2008, ISO/IEC 15288:2008, ISO 25000:2005, and ISO 9000:2005:<br />
<br />
'''Verification''': confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. A note added in ISO/IEC 15288 states: Verification is a set of activities that compares a system or system element against the required characteristics. This may include, but is not limited to, specified requirements, design description, and the system itself.<br />
<br />
<br />
'''Validation''': confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. A note added in ISO 9000:2005 states: Validation is the set of activities ensuring and gaining confidence that a system is able to accomplish its intended use, goals, and objectives (i.e., meet stakeholder requirements) in the intended operational environment.<br />
<br />
<br />
==Principles==<br />
<br />
===Concept of Verification and Validation Action===<br />
<br />
'''Verification and Validation Action''' – The terms "verification" and "validation" are generic and are used in association with other terms to define engineering elements. Hereafter, the term [[Verification and Validation Action (glossary)]] is used to denote an action of verification or of validation. <br />
<br />
A Verification and Validation Action is defined then performed. <br />
<br />
The definition of a Verification and Validation Action applied to an engineering element includes (see Figure 1): <br />
*identifying the element on which the Verification and Validation Action will be performed, and<br />
*identifying the reference/baseline used to define the expected result of the Verification and Validation Action.<br />
<br />
<br />
The performance of the Verification and Validation Action includes:<br />
*obtaining a result by performing the Verification and Validation Action on the submitted element, <br />
*comparing the obtained result with the expected result, <br />
*deducing the degree of correctness and of conformance/compliance of the submitted element, and<br />
*deciding on the acceptability of this conformance/compliance; sometimes the result of the comparison may require a judgment of value regarding its relevance in the context of use (generally by analyzing it against a threshold or limit).<br />
<br />
Note: If there is uncertainty about the conformance/compliance, the cause may be ambiguity in the requirements; a typical example is a measure of effectiveness expressed without a "limit of acceptance" (a threshold above or below which the measure is declared unfulfilled).<br />
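The definition and performance steps above can be sketched as follows; the element names, values, and tolerances are illustrative, and the tolerance parameter models the threshold/limit judgment mentioned in the note.<br />

```python
# Sketch: one Verification and Validation Action — identify the element and
# its reference, obtain a result, compare it with the expected result, and
# decide on acceptability. All names and values are illustrative.

def perform_vv_action(element_name, obtained, expected, tolerance=0.0):
    """Compare an obtained result with the expected result from the baseline.

    A zero tolerance gives the binary comparison typical of verification;
    a non-zero tolerance models the judgment of value used in validation.
    """
    deviation = abs(obtained - expected)
    conformant = deviation <= tolerance
    return {
        "element": element_name,
        "obtained": obtained,
        "expected": expected,
        "deviation": deviation,
        "conformant": conformant,
    }

# Verification-style check (binary): measured mass must equal the design mass
verif = perform_vv_action("battery mass (kg)", obtained=12.0, expected=12.0)

# Validation-style check (judged against a limit): range within 5 km of target
valid = perform_vv_action("vehicle range (km)", obtained=498.0,
                          expected=500.0, tolerance=5.0)
```
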
<br />
[[File:SEBoKv05_KA-SystRealiz_Definition_&_performance_of_V&V_Action.png|300 px|Definition and Performance of a Verification and Validation Action|center|]]<br />
<br />
Figure 1 – Definition and performance of a Verification and Validation Action (Faisandier, 2011)<br />
<br />
<br />
<br />
'''What to verify and validate?''' – Any engineering element can be verified and validated using a specific reference for comparison: Stakeholder Requirement, System Requirement, Function, System Element, Document, etc. Examples are provided in Table 1 below.<br />
<br />
<br />
<br />
Table 1 – Examples of verified and validated items<br />
<br />
<br />
===Validation versus Verification===<br />
<br />
This section discusses the fundamental differences between the two concepts and their associated processes. <br />
<br />
'''Source of the terms''' – Etymologically, the term verification comes from the Latin ''verus'' (truth) and ''facere'' (to make or perform); verification thus means to prove that something is “true” or correct (a property, a characteristic, etc.). The term validation comes from the Latin ''valere'' (to become strong) and has the same root as value; validation thus means to prove that something has the right features to produce the expected effects. Reference: "Verification and Validation in Plain English" (Lake 1999).<br />
<br />
<br />
'''Process similarities and differences''' - There are similarities between the Verification Process and the Validation Process in terms of activities. The techniques used to define and perform Verification Actions and Validation Actions are identical. The main differences concern the reference used to check the correctness of an element, and the acceptability of its effective correctness. Within verification, the comparison between the expected result and the obtained result is generally binary (true or not), whereas within validation, the result of the comparison may require a judgment of value on whether to accept the obtained result when compared to a threshold or limit.<br />
<br />
<br />
Table 2 below provides supplementary information to help in understanding the differences from several points of view.<br />
<br />
[[File:JSTable_vvcomparison.png|VV Comparison]]<br />
<br />
Table 2 – Summary of differences between Verification and Validation<br />
<br />
<br />
'''Verification vs. Validation''' – According to the NASA Systems Engineering Handbook, from a process perspective the product verification and product validation processes may be similar in nature, but their objectives are fundamentally different: the distinction between Verification and Validation is significant. Verification consists of proof of compliance with specifications and may be determined by tests, analysis, demonstration, inspection, etc. Validation consists of proof that the system accomplishes its purpose. It is usually much more difficult (and much more important) to validate a system than to verify it. Strictly speaking, validation can be accomplished only at the system level, while verification must be accomplished throughout the entire system architectural hierarchy. (NASA 2007)<br />
<br />
===Integration, Verification, Validation, Final Validation, Operational Validation===<br />
There is sometimes a misconception that Verification occurs after Integration and before Validation. In most cases, it is more appropriate to begin verification activities during development (definition and realization) and to continue them into deployment and use. <br />
<br />
<br />
Once the System Elements have been realized, they are integrated to form the complete system. Integration consists of assembling the System Elements and performing Verification and Validation Actions, as stated in [[System Integration]]. A Final Validation activity generally occurs when the system is integrated, but a number of Verification and Validation Actions are also performed in parallel with System Integration, in order to reduce the total number of Verification and Validation Actions as much as possible while controlling the risks that could arise if some checks were dropped. Integration, Verification, and Validation are intimately linked, due to the necessity of optimizing the Verification and Validation strategy and the Integration strategy together.<br />
<br />
<br />
System Validation concerns the global system (Product, Service, or Enterprise) seen as a whole and is based on the totality of the requirements (System Requirements, Stakeholder Requirements). It is obtained gradually throughout the development stage of the system by pursuing three non-exclusive approaches: <br />
*first, by accumulating the Verification and Validation Action results provided by applying the Verification Process and the Validation Process to every definition element and to every integration element;<br />
*second, by performing Final Validation on the complete integrated system in an industrial environment (as close as possible to the operational environment);<br />
*third, by performing Operational Validation Actions on the complete system in its operational environment (context of use – higher-level system).<br />
<br />
Operational Validation relates to the operational mission of the system and to the acceptance of the system as ready for use or for production. For example, Operational Validation may require showing, in the operational environment, that a vehicle has the expected autonomy (is able to cover a defined distance), can cross obstacles, performs safety scenarios as required, etc.<br />
<br />
<br />
===Integration, Verification and Validation level per level===<br />
It is impossible to carry out only a single, global Validation on a complete integrated complex system. The sources of faults/defects would be too numerous, and it would be impossible to determine the causes of a non-conformance raised during this global check. Since the System of Interest has generally been decomposed during design into a set of blocks and layers of systems and System Elements, every system and System Element is verified, validated, and possibly corrected before being integrated into the parent system block of the higher level, as shown in Figure 2.<br />
<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_Verification_and_Validation_level_by_level.png|700px|center|Verification and Validation Level by Level]]<br />
<br />
Figure 2 – Verification and Validation level per level (Faisandier & Roussel, 2011)<br />
<br />
<br />
As necessary, the systems and System Elements are partially integrated into subsets in order to limit the number of properties/characteristics to be verified within a single step – see [[System Integration]]. For each level, it is necessary to ensure, through a set of Verification and Validation Actions, that the features established at the preceding level have not been compromised. Moreover, a compliant result obtained in a given environment (for example, the final validation environment) can become non-compliant if the environment changes (for example, the operational validation environment). Therefore, as long as the sub-system is not completely integrated and/or does not operate in the real operational environment, no result should be regarded as definitive.<br />
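A minimal sketch of this level-per-level approach, assuming a simple decomposition tree and per-element check functions (both hypothetical):<br />

```python
# Sketch: level-per-level verification and validation of a system
# decomposition. Each element is checked (and would be corrected) before
# being integrated into its parent block. The tree and the check functions
# are purely illustrative.

def verify_and_integrate(element, checks, order=None):
    """Bottom-up traversal: V&V every child before its parent block.

    element -- dict with "name" and optional "children" (the decomposition)
    checks  -- dict mapping element name to a no-argument check function
    Returns the order in which elements passed their checks.
    """
    if order is None:
        order = []
    for child in element.get("children", []):
        verify_and_integrate(child, checks, order)  # lower levels first
    if not checks[element["name"]]():
        raise RuntimeError(f"V&V failed for {element['name']}")
    order.append(element["name"])                   # then this level
    return order

system = {"name": "system",
          "children": [{"name": "subsystem-A",
                        "children": [{"name": "element-A1"}]},
                       {"name": "subsystem-B"}]}
checks = {name: (lambda: True)
          for name in ("system", "subsystem-A", "element-A1", "subsystem-B")}

order = verify_and_integrate(system, checks)
# element-A1 is checked before subsystem-A; the system as a whole is last
```
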
<br />
When modifications are made to a system, the temptation is to focus on the newly adapted configuration while forgetting the environment and the other configurations. However, a modification can have significant consequences for other configurations. Thus, any modification requires regression Verification and Validation Actions (often called Regression Testing).<br />
<br />
===Verification Actions and Validation Actions inside and transverse to levels===<br />
Inside each level of the system decomposition, Verification and Validation Actions are performed during System Definition and System Realization, as represented in Figure 3 for the upper levels and in Figure 4 for the lower levels. The Stakeholder Requirements Definition and the Operational Validation make the link between two levels of the system decomposition.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_V&V_Actions_upper_levels.png|500px|center|Verification and Validation Actions in Upper Levels of System Decomposition]]<br />
<br />
Figure 3 – Verification and Validation Actions in upper levels of system decomposition (Faisandier, 2011) <br />
<br />
<br />
The System Element Requirements and the End Products Operational Validation make the link between the two lower levels of the decomposition – see Figure 4.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_V&V_Actions_lower_levels.png|500px|center|Verification and Validation Actions in Lower Levels of System Decomposition]]<br />
<br />
Figure 4 - Verification and Validation Actions in lower levels of system decomposition (Faisandier, 2011) <br />
<br />
<br />
Note 1: The two figures above show an ideal allocation of verification and validation activities on the right-hand side, using the corresponding references provided by the System Definition processes on the left-hand side. Sometimes, in actual practice, the outputs of the Stakeholder Requirements Definition Process are not sufficiently formalized, or do not contain sufficient operational scenarios, and cannot serve as a reference for defining the Verification and Validation Actions to be performed in the operational environment. In this case, the outputs of the System Requirements Definition Process may be used in their place.<br />
<br />
Note 2: The last level of the system decomposition is dedicated to the Realization of the System Elements, and the vocabulary and the number of activities used in Figure 4 may differ – see [[System Implementation]].<br />
<br />
===Verification and Validation strategy===<br />
The difference between verification and validation is especially useful for elaborating the Integration strategy, the Verification strategy, and the Validation strategy. In fact, the efficiency of System Realization is gained by optimizing the three strategies together to form what is often called the Verification and Validation (V&V) strategy. The optimization consists of defining and performing the minimum set of Verification and Validation Actions that detects the maximum number of errors/faults/defects and provides the maximum confidence in the use of the Product, Service, or Enterprise. Of course, the optimization takes into account the risks potentially generated if Verification and Validation Actions are dropped.<br />
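One crude way to picture this optimization is a greedy selection of Verification and Validation Actions under a cost budget, accepting the residual risk of the actions dropped; the actions, costs, and risk values below are notional, and real strategies weigh many more factors.<br />

```python
# Sketch: a knapsack-style heuristic for a V&V strategy — select actions that
# cover the most risk per unit cost within a budget, and report the residual
# risk of the actions dropped. All actions, costs, and risks are notional.

def select_vv_actions(actions, budget):
    """actions: list of (name, cost, risk_covered); returns (selected, residual)."""
    remaining = budget
    selected = []
    # Greedy pass over actions ranked by risk-covered per unit cost
    for name, cost, risk in sorted(actions, key=lambda a: a[2] / a[1],
                                   reverse=True):
        if cost <= remaining:
            selected.append(name)
            remaining -= cost
    residual = sum(risk for name, cost, risk in actions
                   if name not in selected)
    return selected, residual

actions = [
    ("inspect wiring harness",    2.0, 3.0),
    ("full environmental test",  10.0, 8.0),
    ("software regression suite", 4.0, 6.0),
    ("EMC analysis",              3.0, 2.0),
]
selected, residual_risk = select_vv_actions(actions, budget=9.0)
# The costly environmental test is dropped; its risk remains as residual risk
```
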
<br />
<br />
==Process Approach==<br />
===Introduction===<br />
As explained above, there are two processes: the Verification Process and the Validation Process.<br />
<br />
Because of the generic nature of these processes, they can be applied to any engineering element that has contributed to the definition and realization of the System Elements, the systems, and the System of Interest.<br />
<br />
However, given the huge number of potential Verification and Validation Actions that may be generated by this approach, it is necessary to optimize the verification and validation strategy. This strategy is based on a balance between what must be verified; the constraints, such as time, cost, and feasibility of testing, that naturally limit the number of Verification and Validation Actions; and the risks one accepts by dropping some Verification and Validation Actions.<br />
<br />
Several approaches exist for defining the Verification and Validation Processes. INCOSE defines two main steps: plan and perform the Verification Actions. (INCOSE 2010) <br />
<br />
NASA has a slightly more detailed approach that includes five main steps: prepare verification, perform verification, analyze outcomes, produce a report, and capture work products. (NASA 2007, 102) <br />
<br />
Any approach may be used, provided that it is appropriate to the scope of the system and the constraints of the project, includes the activities listed above in some way, and is appropriately coordinated with other activities (including System Definition, System Realization, and extension to the rest of the life cycle).<br />
<br />
<br />
===Verification Process===<br />
====Purpose of the Verification Process====<br />
The purpose of the [System] Verification Process is to confirm that the specified design requirements are fulfilled by the system. This process provides the information required to effect the remedial actions that correct non-conformances in the realized system or the processes that act on it. (ISO/IEC. 2008)<br />
<br />
It is possible to generalize the process with an extended purpose as follows: the purpose of the Verification Process applied to any element is to confirm that the applicable design reference is fulfilled by that element.<br />
<br />
Each System Element, each system, and the complete System of Interest should be compared against its own design references. As stated by Dennis Buede, “verification is the matching of [configuration items], components, sub-systems, and the system to corresponding requirements to ensure that each has been built right.” (Buede 2009) This means that the Verification Process is instantiated as many times as necessary during the global development of the system: it occurs at every level of the system decomposition, as necessary, throughout system development.<br />
<br />
'''The generic inputs''' are the baseline references of the submitted element. If the element is a system, the inputs are the functional and physical architecture elements as described in a System Design Document, the design descriptions of the interfaces internal to the system (Input/Output Flows, Physical Interfaces), and the Interface Requirements external to the system.<br />
<br />
'''The generic outputs''' are elements of the Verification and Validation Plan, which includes the verification and validation strategy, the selected Verification and Validation Actions, the Verification and Validation Procedures, and the Verification and Validation Tools, as well as the verified element or system, the verification reports, and the issue/trouble reports and change requests on the design.<br />
<br />
====Activities of the Verification Process====<br />
Major activities and tasks performed during this process include: <br />
<br />
#'''Establish a verification strategy''' drafted in a [[Verification and Validation Plan (glossary)]] (this activity is carried out concurrently with System Definition activities), obtained by the following tasks:<br />
##Identify the verification scope by listing, as exhaustively as possible, the characteristics or properties that should be checked; the number of Verification and Validation Actions can be very high;<br />
##Identify the constraints according to their origin (technical feasibility; management constraints such as cost, time, and availability of verification and validation means or qualified personnel; contractual constraints such as criticality of the mission) that potentially limit the Verification and Validation Actions; <br />
##Define the appropriate verification and validation techniques to be applied, such as inspection, analysis, simulation, peer review, and testing, depending on the best project step in which to perform every Verification and Validation Action according to constraints;<br />
##Trade off what should be verified (scope), taking into account all the constraints or limits, and deduce what can be verified; the selection of Verification and Validation Actions is made according to the type of system, the objectives of the project, the acceptable risks, and the constraints;<br />
##Optimize the Verification and Validation strategy by defining the most appropriate verification technique for every Verification and Validation Action, defining the necessary verification and validation means (tools, test benches, personnel, location, facilities) according to the selected verification technique, scheduling the execution of the Verification and Validation Actions within the project steps or milestones, and defining the configuration of the elements submitted to Verification and Validation Actions (mainly for testing on physical elements).<br />
#'''Perform the Verification and Validation Actions''' includes the following tasks:<br />
##Detail each Verification and Validation Action; in particular, the expected results, the verification technique to be applied, and the corresponding means (equipment, resources, and qualified personnel);<br />
##Acquire the verification and validation means used during the system definition steps (qualified personnel, modeling tools, mock-ups, simulators, facilities), then those used during the integration step (qualified personnel, Verification and Validation Tools, measuring equipment, facilities, Verification and Validation Procedures, etc.);<br />
##Carry out the Verification and Validation Procedures at the right time, in the expected environment, with the expected means, tools and techniques;<br />
##Capture and record the results obtained when performing the Verification and Validation Actions using Verification and Validation Procedures and means.<br />
#'''Analyze obtained results''' and compare them to the expected results; record the status (compliant or not); generate verification reports and, as necessary, issue/trouble reports and change requests on the design.<br />
#'''Control the process''' includes the following tasks:<br />
##Update the Verification and Validation Plan according to the progress of the project; in particular, the planned Verification and Validation Actions may be redefined because of unexpected events (addition, deletion, or modification of actions);<br />
##Coordinate the verification activities with the project manager for schedule, acquisition of means, personnel, and resources; with the designers for issue/trouble/non-conformance reports; and with the configuration manager for versions of physical elements, design baselines, etc.<br />
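As an illustration only, the plan-perform-analyze cycle above can be sketched as a small data model; the element names, techniques, and expected results below are hypothetical and not drawn from any standard:<br />

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VVAction:
    """One Verification and Validation Action from the V&V Plan."""
    element: str                       # element submitted to verification
    technique: str                     # inspection, analysis, simulation, test...
    expected: str                      # expected result
    observed: Optional[str] = None     # result obtained when performed
    compliant: Optional[bool] = None   # status recorded after analysis

def perform(action: VVAction, observed: str) -> bool:
    """Perform the action: record the obtained result and compare it
    with the expected one (the 'analyze obtained results' step)."""
    action.observed = observed
    action.compliant = (observed == action.expected)
    return action.compliant

# Establish a (tiny) verification strategy: two planned actions
plan = [
    VVAction("power supply", "test", expected="28 V +/- 1 V"),
    VVAction("casing", "inspection", expected="no visible crack"),
]

# Perform the actions and analyze the results
perform(plan[0], "28 V +/- 1 V")
perform(plan[1], "hairline crack on rib 3")

# Non-compliant elements trigger issue/trouble reports and change requests
print([a.element for a in plan if a.compliant is False])   # ['casing']
```

A real Verification and Validation Matrix would additionally record the execution step, the configuration, and the responsible means for each action.<br />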
<br />
<br />
===Validation Process===<br />
====Purpose of the Validation Process====<br />
The purpose of the [System] Validation Process is to provide objective evidence that the services provided by a system when in use comply with stakeholder requirements, achieving its intended use in its intended operational environment. (ISO/IEC 2008)<br />
<br />
This process performs a comparative assessment and confirms that the stakeholders' requirements are correctly defined. Where variances are identified, these are recorded and guide corrective actions. System validation is ratified by stakeholders. (ISO/IEC 2008)<br />
<br />
The validation process demonstrates that the realized end product satisfies its stakeholders' (customers and other interested parties) expectations within the intended operational environments, with validation performed by anticipated operators and/or users. (NASA 2007)<br />
<br />
It is possible to generalize the process using an extended purpose as follows: the purpose of the Validation Process applied to any element is to demonstrate or prove that this element complies with its applicable requirements achieving its intended use in its intended operational environment.<br />
<br />
Each System Element, system, and the complete System of Interest are compared against their own applicable requirements (System Requirements, Stakeholder Requirements). This means that the Validation Process is instantiated as many times as necessary during the global development of the system. The Validation Process occurs at every level of the system decomposition and as necessary throughout the system development. Because of the generic nature of a process, the Validation Process can be applied to any engineering element that has led to the definition and realization of the System Elements, the systems, and the System of Interest. <br />
<br />
In order to ensure that validation is feasible, the implementation of the requirements must be verifiable on the submitted element. Ensuring that requirements are properly written (i.e., quantifiable, measurable, unambiguous, etc.) is essential. In addition, verification/validation requirements are often written in conjunction with Stakeholder and System Requirements and provide the method for demonstrating the implementation of each System Requirement or Stakeholder Requirement.<br />
<br />
'''The generic inputs''' are the baseline references of requirements applicable to the submitted element. If the element is a system, the inputs are the System Requirements and Stakeholder Requirements.<br />
<br />
'''The generic outputs''' are elements of the Verification and Validation Plan.<br />
<br />
====Activities of the Validation Process====<br />
Major activities and tasks performed during this process include:<br />
<br />
#'''Establish a validation strategy''' drafted in a Verification and Validation Plan (this activity is carried out concurrently with System Definition activities), obtained by the following tasks: <br />
##Identify the validation scope, which is represented by the [System and/or Stakeholder] Requirements; normally, every requirement should be checked; the number of Verification and Validation Actions can be high; <br />
##Identify the constraints according to their origin: same as for the Verification Process; <br />
##Define the appropriate verification/validation techniques to be applied: same as for the Verification Process; <br />
##Trade off what should be validated (scope): same as for the Verification Process; <br />
##Optimize the Verification and Validation strategy: same as for the Verification Process. <br />
#'''Perform the Verification and Validation Actions''': same tasks as for the Verification Process.<br />
#'''Analyze obtained results''' and compare them to the expected results; decide on the acceptability of the conformance/compliance; record the decision and the status (compliant or not); generate validation reports and, as necessary, issue/trouble reports and change requests on the [System or Stakeholder] Requirements. <br />
#'''Control the process''': same tasks as for the Verification Process.<br />
<br />
<br />
===Artifacts and Ontology Elements===<br />
These processes may create several artifacts such as: <br />
<br />
#Verification and Validation Plan (contains in particular the Verification and Validation strategy with objectives, constraints, the list of the selected Verification and Validation Actions, etc.) <br />
#Verification and Validation Matrix (contains for each Verification and Validation Action, the submitted element, the applied technique / method, the step of execution, the system block concerned, the expected result, the obtained result, etc.) <br />
#Verification and Validation Procedures (describe the Verification and Validation Actions to be performed, the Verification and Validation Tools needed, the Verification and Validation Configuration, resources, personnel, schedule, etc.) <br />
#Verification and Validation Reports <br />
#Verification and Validation Tools <br />
#Verified and validated element (system, system element, etc.) <br />
#Issue / Non Conformance / Trouble Reports <br />
#Change Requests on requirement, product, service, enterprise <br />
<br />
<br />
These processes handle the ontology elements of Table 3. <br />
<br />
Table 3 - Main ontology elements as handled within Verification and Validation <br />
<br />
[[File:SEBoKv05_KA-SystRealiz_ontology_V&V.png|650px|center|Main Ontology Elements as Handled within Verification and Validation]]<br />
<br />
<br />
The main relationships between ontology elements are presented in Figure 5.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_V&V_relationships.png|600px|center|Verification and Validation Elements Relationships with other Engineering Elements]] <br />
<br />
Figure 5 – Verification and Validation elements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
<br />
Note: "Design Reference" is a generic term; instances depend on the type of submitted engineering elements, for example: specified requirements, descriptions of design characteristics or properties, drafting rules, standards, regulations, etc.<br />
<br />
===Checking and Correctness of Verification and Validation===<br />
The main items to be checked during the Verification and Validation Processes are the items those processes produce: <br />
*The Verification and Validation Plan, the Verification and Validation Actions, the Verification and Validation Procedures, and the Verification and Validation Reports respect their corresponding templates. <br />
*Every verification and validation activity has been planned, performed, and recorded, and has generated outcomes as defined in the process descriptions above.<br />
<br />
<br />
===Methods and Techniques===<br />
====Verification and Validation techniques====<br />
<br />
There are several verification/validation techniques/methods for checking that an element or a system complies with its [System, Stakeholder] Requirements. These techniques are common to verification and validation, but their purposes differ: verification is used to detect faults/defects, whereas validation is used to prove satisfaction of [System and/or Stakeholder] Requirements. Table 4 below provides synthetic descriptions of some techniques.<br />
<br />
<br />
Table 4 – Verification and Validation techniques<br />
<br />
<br />
<br />
Note: Demonstration and testing can be functional or structural. Functional demonstration and testing are designed to ensure that correct outputs are produced given specific inputs. For structural demonstration and testing, there are performance, recovery, interface, and stress considerations. These considerations will determine the system’s ability to perform and survive given expected conditions.<br />
<br />
====Validation/Traceability Matrix====<br />
<br />
The importance of traceability is introduced in every topic of the [[System Definition]] KA using a Traceability Matrix. The matrix may also be extended to record data such as the list of Verification and Validation Actions, the verification/validation technique selected to verify/validate the implementation of each engineering element (in particular each Stakeholder and System Requirement), the expected results, and the results obtained when each Verification and Validation Action has been performed. The use of such a matrix enables the development team to ensure that the selected Stakeholder and System Requirements have been verified, and to evaluate the percentage of Verification and Validation Actions completed. In addition, the matrix helps to check the performed Verification and Validation activities against the planned activities outlined in the Verification and Validation Plan, and finally to ensure that System Validation has been appropriately conducted.<br />
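A minimal sketch of such an extended Traceability Matrix follows; the requirement identifiers, techniques, and statuses are invented for illustration. It shows how the completion percentage and the not-yet-verified requirements can be derived:<br />

```python
# Hypothetical extended Traceability Matrix: each [System or Stakeholder]
# Requirement maps to its selected technique, its expected result, and the
# status obtained once the V&V Action has been performed (None = not yet run).
matrix = {
    "SR-001": {"technique": "test",       "expected": "latency < 10 ms", "status": "compliant"},
    "SR-002": {"technique": "analysis",   "expected": "MTBF > 5000 h",   "status": None},
    "SR-003": {"technique": "inspection", "expected": "label affixed",   "status": "non-compliant"},
}

# Percentage of V&V Actions completed
performed = [req for req, row in matrix.items() if row["status"] is not None]
coverage = 100 * len(performed) / len(matrix)
print(f"{coverage:.0f}% of V&V Actions completed")   # 67% of V&V Actions completed

# Requirements not yet verified against the plan
print([req for req, row in matrix.items() if row["status"] is None])   # ['SR-002']
```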
<br />
===Application to Product systems, Service systems, Enterprise systems===<br />
<br />
Because the process is generic, it is applied as defined above. The main difference resides in the detailed implementation of the verification techniques for each type of system, as shown in Table 5.<br />
<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_Verification_techniques_and_types_of_system.png|600px|center|Verification Techniques and Types of System]]<br />
<br />
Table 5 – Verification techniques and types of system<br />
<br />
==Practical Considerations==<br />
Major pitfalls encountered with System Verification and Validation are presented in Table 6.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_pitfalls_with_V&V.png|650px|center|Major Pitfalls with System Verification and Validation]] <br />
<br />
<br />
Table 6 – Major pitfalls with System Verification and Validation <br />
<br />
<br />
<br />
Major proven practices encountered with System Verification and Validation are presented in Table 7.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_practices_with_V&V.png|650px|center|Proven Practices with System Verification and Validation]]<br />
<br />
<br />
Table 7 – Proven practices with System Verification and Validation<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
Buede, D. M. 2009. The engineering design of systems: Models and methods. 2nd ed. Hoboken, NJ: John Wiley & Sons Inc. <br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E). <br />
<br />
NASA. 2007. Systems Engineering Handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105, December 2007.<br />
<br />
===Primary References===<br />
INCOSE. 2010. [[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide to Life Cycle Processes and Activities, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
ISO/IEC. 2008. [[ISO/IEC/IEEE 15288|Systems and Software Engineering - System Life Cycle Processes]]. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), [[ISO/IEC/IEEE 15288|ISO/IEC/IEEE 15288:2008 (E)]]. <br />
<br />
NASA. 2007. [[NASA Systems Engineering Handbook|Systems Engineering Handbook]]. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105, December 2007.<br />
<br />
===Additional References===<br />
<br />
Buede, D. M. 2009. The engineering design of systems: Models and methods. 2nd ed. Hoboken, NJ: John Wiley & Sons Inc. <br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense. <br />
<br />
ECSS. 6 March 2009. Systems engineering general requirements. Noordwijk, Netherlands: Requirements and Standards Division, European Cooperation for Space Standardization (ECSS), ECSS-E-ST-10C. <br />
<br />
SAE International. 1996. Certification considerations for highly-integrated or complex aircraft systems. Warrendale, PA, USA: SAE International, ARP4754. <br />
<br />
SEI. 2007. Capability maturity model integration (CMMI) for development, version 1.2, measurement and analysis process area. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Integration|<- Previous Article]] | [[System Realization|Parent Article]] | [[System Deployment and Use|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Integration&diff=9656System Integration2011-08-09T21:07:18Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
==Introduction, Definition and Purpose==<br />
<br />
'''Introduction''' – System Integration consists of taking delivery of the Implemented (System) Elements that compose the System of Interest (SoI), assembling these Implemented Elements together, and performing the [[Verification and Validation Action (glossary)|Verification and Validation Actions (glossary)]] (V&V Actions) in the course of the assembly. The ultimate goal of System Integration is to ensure that the individual System Elements function properly as a whole and satisfy the Design Properties or characteristics of the system. System Integration is one part of the realization effort and relates only to developmental items. Integration should not be confused with the assembly of end products on a production line; a production assembly line uses a different assembly order from the one used for integration. <br />
<br />
<br />
'''Definition and Purpose''' – System Integration consists of a process that "combines system elements (Implemented Elements) to form complete or partial system configurations in order to create a product specified in the system requirements." (ISO/IEC 2008, 44) The process extends to any kind of Product system, Service system, or Enterprise system. <br />
<br />
The purpose of System Integration is to make the System of Interest ready for final validation and transition, either for use or for production. Integration consists of progressively assembling aggregates of the Implemented Elements that compose the System of Interest, as defined by the design of its architecture, and of checking the correctness of the static and dynamic aspects of the interfaces between the Implemented Elements. <br />
<br />
The Defense Acquisition University provides the following context for integration: The integration process will be used [. . .] for the incorporation of the final system into its operational environment to ensure that the system is integrated properly into all defined external interfaces. The interface management process is particularly important for the success of the integration process, and iteration between the two processes will occur. (DAU 2010)<br />
<br />
The purpose of system integration can be summarized as follows: (1) completely assemble the Implemented Elements to make sure that they are compatible with each other; (2) demonstrate that the aggregates of Implemented Elements perform the expected functions and meet the expected performance/effectiveness; (3) detect defects/faults related to design and assembly activities by submitting the aggregates to focused Verification and Validation Actions. <br />
<br />
Note: In the systems engineering literature, the term "integration" is sometimes used in a broader sense than in the present topic. In this broader sense, it concerns the technical effort to simultaneously design and develop the system and the processes for developing the system, through concurrent consideration of all life cycle stages, needs, and competences. This approach requires the "integration" of numerous skills, activities, or processes.<br />
<br />
==Principles==<br />
<br />
===Boundary of integration activity===<br />
<br />
In the present sense, integration is understood as the complete bottom-up branch of the V-cycle, including the assembly tasks and the appropriate verification tasks. See Figure 1.<br />
<br />
[[File:Limits_of_integration_activities.png|300px|center|Limits of Integration Activities]]<br />
<br />
Figure 1 – Limits of integration activities <br />
<br />
The assembly activity joins together and physically links the Implemented Elements. Each Implemented Element is individually verified and validated prior to entering integration. Integration then adds the verification activity to the assembly activity, excluding the final validation.<br />
<br />
The final validation performs the operational tests that authorize the transition for use or the transition for production. Remember that system integration only endeavors to obtain preproduction prototypes of the concerned Product, Service, or Enterprise. If the Product, Service, or Enterprise is delivered as a single unit, the final validation activity serves as acceptance for delivery and transfer for use. If the prototype is to be produced in several units, the final validation serves as acceptance to launch their production. The definition of the optimized assembly operations that will be carried out on a production line relates to the Manufacturing Process, not to the Integration Process.<br />
<br />
Integration activity can sometimes reveal issues or anomalies that require modifications of the design of the system. Modifying the design is not part of the Integration Process; it concerns only the Design Process. Integration deals only with the assembly of the Implemented Elements and the verification of the system against its properties as designed. <br />
<br />
During the assembly, it is nevertheless possible to carry out finishing tasks that require several Implemented Elements simultaneously (e.g., painting the whole of two parts after assembly, calibrating a biochemical component, etc.). These tasks must be planned in the context of integration and are not carried out on separate Implemented Elements. In any case, they do not include modifications related to design.<br />
<br />
===Aggregation of Implemented Elements===<br />
Integration is used to systematically assemble a higher-level system from implemented lower-level ones (implemented System Elements). Integration often begins with analysis and simulations (e.g., various types of prototypes) and progresses through increasingly realistic systems and system elements until the final Product, Service, or Enterprise is achieved. <br />
<br />
System integration is based on the notion of [[Aggregate (glossary)]]. An Aggregate is a subset of the system made up of several Implemented Elements (implemented System Elements and Physical Interfaces) on which a set of Verification and Validation Actions is applied. Each Aggregate is characterized by a configuration that specifies the Implemented Elements to be physically assembled and their configuration status. <br />
<br />
To perform Verification and Validation Actions, a [[Verification and Validation Configuration (glossary)]] is constituted that includes the Aggregate plus [[Verification and Validation Tool (glossary)|Verification and Validation Tools (glossary)]]. The Verification and Validation Tools are enabling products and can be simulators (simulated Implemented Elements), stubs or caps, activators (launchers, drivers), harnesses, measuring devices, etc.<br />
<br />
===Integration by Level of System===<br />
According to the V model (see Introduction), System Definition (the top-down branch) proceeds by successive levels of decomposition; each level corresponds to the physical architecture of systems and System Elements. Integration (the bottom-up branch) follows the opposite path, composing the system level by level.<br />
<br />
At a given level, integration is done on the basis of the physical architecture defined during System Definition.<br />
<br />
<br />
===Integration Strategy===<br />
The integration of Implemented Elements is generally performed according to a predefined strategy. The definition of the integration strategy is based on the architecture of the system and relies on the way that architecture was designed. The strategy is described in an Integration Plan that defines the configuration of the expected Aggregates and the order of assembly of these Aggregates, in order to carry out efficient Verification and Validation Actions (for example, inspections and/or testing). The integration strategy is thus elaborated starting from the selected Verification and Validation strategy; see the [[System Verification and Validation]] topic. <br />
<br />
To define an integration strategy, one can use one or several possible integration approaches/techniques; any of these may be used individually or in combination. The selection of integration techniques depends on several factors, in particular the type of System Element, delivery time, order of delivery, risks, constraints, etc. Each integration technique has strengths and weaknesses that should be considered in the context of the System of Interest. Some integration techniques are summarized in Table 1.<br />
<br />
<br />
Table 1 – Integration techniques <br />
<br />
<br />
<br />
Usually, a mixed integration technique is selected as a trade-off between the different techniques listed above, allowing work to be optimized and the process to be adapted to the system under development. The optimization takes into account the realization time of the Implemented Elements, their scheduled delivery order, their level of complexity, the technical risks, the availability of assembly tools, cost, deadlines, specific personnel capability, etc.<br />
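As a sketch of how a bottom-up assembly order can be derived from a system decomposition, the following uses the Python standard-library graphlib module on a purely hypothetical decomposition; it yields an order in which every Implemented Element precedes any Aggregate that contains it:<br />

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical decomposition: each Aggregate lists the implemented
# elements (or lower-level Aggregates) that must be assembled and
# verified before it can itself be assembled.
depends_on = {
    "AGG-1":  {"elem-A", "elem-B"},
    "AGG-2":  {"elem-C"},
    "system": {"AGG-1", "AGG-2"},
}

# static_order() emits every node after all of its prerequisites
order = list(TopologicalSorter(depends_on).static_order())
print(order)   # elements first, then AGG-1 and AGG-2, then "system"
```

Scheduling constraints (delivery dates, tool availability) would then be applied on top of this precedence order to obtain the plan actually written into the Integration Plan.<br />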
<br />
==Process Approach==<br />
===Purpose and Principle of Approach===<br />
<br />
'''The purpose''' of the System Integration Process is to assemble the Implemented Elements in order to obtain the system that is compliant with its physical and functional architecture design. <br />
<br />
The activities of the Integration Process and those of the Verification Process fit into each other. <br />
<br />
The process is used at every level of the decomposition of the System of Interest. It is used iteratively, starting from a first Aggregate of Implemented Elements and continuing until the complete system is assembled; the last "loop" of the process results in the entirely integrated System of Interest. <br />
<br />
<br />
'''The generic inputs''' are: the elements of the Functional Architecture (functions; functional interfaces: inputs, outputs, and control flows); the elements of the Physical Architecture (System Elements, physical interfaces, and ports); and the specified requirements applicable to the design of the concerned system. <br />
<br />
<br />
'''The generic outputs''' are: the Integration Plan containing the integration strategy; the integrated system; the integration means (enabling products such as tools and procedures); integration reports; and possibly issue/trouble reports and change requests concerning the design.<br />
<br />
<br />
The outcomes of the Integration Process are used by the Verification Process and by the Validation Process.<br />
<br />
<br />
===Activities of the Process===<br />
<br />
Major activities and tasks performed during this process include:<br />
<br />
# '''Establish the Integration Plan''' (this activity is carried out concurrently with the design activity of the system) that defines:<br />
## The optimized integration strategy: order of Aggregates assembly using appropriate integration techniques.<br />
## The Verification and Validation Actions to be processed for the purpose of integration.<br />
## The configurations of the Aggregates to be assembled and verified.<br />
## The integration means and verification means (dedicated enabling products) that may include [[Assembly Procedure (glossary)|assembly procedures (glossary)]], [[Assembly Tool (glossary)|assembly tools (glossary)]] (harness, specific tools), [[Verification and Validation Tool (glossary)|verification and validation tools (glossary)]] (simulators, stubs/caps, launchers, test benches, devices for measuring, etc.), [[Verification and Validation Procedure (glossary)|verification and validation procedures (glossary)]].<br />
# '''Obtain the integration means''' and verification means as defined in the Integration Plan; the acquisition of the means can be done in various ways, such as procurement, development, reuse, or sub-contracting; usually the acquisition of the complete set of means is a mix of these ways.<br />
# '''Take delivery''' of each Implemented Element:<br />
## Unpack and reassemble the Implemented Element with its accessories.<br />
## Check the delivered configuration, the conformance of the Implemented Element, the compatibility of its interfaces, and the presence of the mandatory documentation.<br />
# '''Assemble the Implemented Elements''' into Aggregates:<br />
## Gather the Implemented Elements to be assembled, the integration means (Assembly Tools, Assembly Procedures), and the verification means (Verification and Validation Tools, Verification and Validation Procedures).<br />
## Connect the Implemented Elements to each other to constitute Aggregates in the order prescribed by the Integration Plan and in the Assembly Procedures, using Assembly Tools. <br />
## Add or connect the Verification and Validation Tools to the Aggregates as predefined.<br />
## Carry out any operations of welding, gluing, drilling, tapping, adjusting, tuning, painting, parameter setting, etc.<br />
# '''Verify each Aggregate''':<br />
## Check the Aggregate is correctly assembled according to established procedures.<br />
## Perform the Verification Process using the Verification and Validation Procedures, and check that the Aggregate exhibits the right Design Properties / specified requirements.<br />
## Record integration results or reports and any issue reports, change requests, etc.<br />
<br />
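The take-delivery and assembly activities above can be sketched as follows; the element names and the checks applied are purely illustrative:<br />

```python
def check_delivery(element: dict) -> bool:
    """'Take delivery': check the configuration, the interface
    compatibility, and the presence of mandatory documentation."""
    return all(element[key] for key in ("configuration_ok", "interfaces_ok", "docs_ok"))

def assemble(aggregate_name: str, deliveries: list) -> dict:
    """Constitute an Aggregate from the deliveries that pass the checks;
    the others would trigger issue/anomaly/trouble reports."""
    accepted = [d["name"] for d in deliveries if check_delivery(d)]
    rejected = [d["name"] for d in deliveries if not check_delivery(d)]
    return {"aggregate": aggregate_name, "elements": accepted, "rejected": rejected}

deliveries = [
    {"name": "elem-A", "configuration_ok": True, "interfaces_ok": True,  "docs_ok": True},
    {"name": "elem-B", "configuration_ok": True, "interfaces_ok": False, "docs_ok": True},
]

agg = assemble("AGG-1", deliveries)
print(agg["rejected"])   # ['elem-B'] (incompatible interface, issue report raised)
```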
===Artifacts and Ontology Elements===<br />
<br />
This process may create several artifacts such as:<br />
<br />
# Integrated System<br />
# Assembly Tool<br />
# Assembly Procedure<br />
# Integration Plan<br />
# Integration Report<br />
# Issue / Anomaly / Trouble Report<br />
# Change Request (about design)<br />
<br />
<br />
This process handles the ontology elements of Table 2.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_ontology_within_System_Integration.png|650px|center|Main Ontology Elements as Handled within System Integration]]<br />
<br />
Table 2 - Main ontology elements as handled within System Integration <br />
<br />
<br />
Note: Verification and Validation ontology elements are described in the [[Verification and Validation]] topic.<br />
<br />
<br />
The main relationships between ontology elements are presented in Figure 2.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_Integration_relationships.png|600px|center|Integration Elements Relationships with Other Engineering Elements ]]<br />
<br />
Figure 2 – Integration elements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
===Checking and Correctness of Integration===<br />
<br />
The main items to be checked during the integration process are as follows:<br />
<br />
*The Integration Plan respects its template<br />
*The expected assembly order (integration strategy) is realistic<br />
*No System Element or Physical Interface set out in the System Design Document is forgotten<br />
*Every interface and interaction between Implemented Elements is verified<br />
*Assembly Procedures and Assembly Tools are available and validated prior to beginning the assembly<br />
*Verification and Validation Procedures and Verification and Validation Tools are available and validated prior to beginning the verification<br />
*Integration reports are recorded<br />
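The first coverage check above (no System Element forgotten) can be automated. The following sketch uses hypothetical element names invented for illustration only; it compares the System Elements listed in a notional System Design Document with those appearing in the Integration Plan's assembly order:<br />

```python
# Illustrative sketch (element names are hypothetical): verify that every
# System Element set out in the System Design Document appears exactly once
# in the Integration Plan's assembly order.

design_elements = {"power_unit", "controller", "sensor_a", "sensor_b", "chassis"}

# Each inner list is one Aggregate, in the prescribed assembly order.
integration_plan = [
    ["chassis", "power_unit"],
    ["controller", "sensor_a"],
]

def check_plan_coverage(design_elements, integration_plan):
    """Return (forgotten elements, duplicated elements) for the plan."""
    planned = [e for aggregate in integration_plan for e in aggregate]
    forgotten = design_elements - set(planned)
    duplicated = {e for e in planned if planned.count(e) > 1}
    return forgotten, duplicated

forgotten, duplicated = check_plan_coverage(design_elements, integration_plan)
print("forgotten:", sorted(forgotten))    # sensor_b is missing from the plan
print("duplicated:", sorted(duplicated))  # no element is assembled twice
```

A real Integration Plan would of course carry far more information (procedures, tools, schedule); the point is only that the completeness check is mechanical once the plan is captured as data.<br />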
<br />
<br />
===Methods and Techniques===<br />
<br />
Several approaches that may be used for integration are summarized above in the section "Integration Strategy". Others exist, in particular for software-intensive systems, such as vertical integration, horizontal integration, and star integration.<br />
<br />
'''Coupling matrix and N-square diagram'''<br />
One of the most basic methods for defining the aggregates and the order of integration is the use of N-square diagrams (Grady 1994, 190).<br />
<br />
In the integration context, the coupling matrices are useful for optimizing the Aggregate definition and verification of interfaces:<br />
<br />
*The integration strategy is defined and optimized by reorganizing the coupling matrix in order to group the Implemented Elements into Aggregates, minimizing the number of interfaces to be verified between Aggregates (see Figure 3).<br />
<br />
[[File:JS_Figure_9.png|600 px|center|Initial arrangement of aggregates on the left; final arrangement after reorganization on the right]]<br />
<br />
Figure 3 - Initial arrangement of aggregates on the left; final arrangement after reorganization on the right (developed for BKCASE)<br />
<br />
<br />
*When verifying the interactions between Aggregates, the matrix is an aid for fault detection. If an error is detected when adding an Implemented Element to an Aggregate, the fault can be related to the Implemented Element, to the Aggregate, or to the interfaces. If the fault is related to the Aggregate, it can relate to any Implemented Element or any interface between the Implemented Elements internal to the Aggregate.<br />
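As an illustration of the coupling-matrix approach, the following sketch (hypothetical elements and couplings, not taken from the text above) counts the interfaces that must be verified between Aggregates for two alternative groupings, showing why reorganizing the matrix reduces the verification effort:<br />

```python
# Illustrative sketch: count the interfaces to be verified *between*
# aggregates for a given grouping of implemented elements, using a
# symmetric coupling (N-square) matrix. The matrix values are hypothetical.

def inter_aggregate_interfaces(coupling, aggregates):
    """Count couplings between elements placed in different aggregates.

    coupling   -- symmetric 0/1 matrix; coupling[i][j] == 1 means elements
                  i and j share a physical interface.
    aggregates -- list of lists of element indices (a partition).
    """
    group_of = {}
    for g, members in enumerate(aggregates):
        for m in members:
            group_of[m] = g
    n = len(coupling)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if coupling[i][j] and group_of[i] != group_of[j]:
                count += 1
    return count

# Four elements: pairs 0-1 and 2-3 are coupled, and 1-2 are coupled.
coupling = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]

poor = [[0, 2], [1, 3]]   # splits both coupled pairs across aggregates
good = [[0, 1], [2, 3]]   # keeps coupled pairs together

print(inter_aggregate_interfaces(coupling, poor))  # 3 interfaces to verify
print(inter_aggregate_interfaces(coupling, good))  # 1 interface to verify
```

Reordering the rows and columns of the matrix is equivalent to choosing the partition; an optimization over candidate partitions would seek the grouping with the fewest cross-aggregate interfaces.<br />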
<br />
===Application to Product systems, Service systems, Enterprise systems===<br />
<br />
As the nature of the implemented System Elements and Physical Interfaces differs for these types of systems, the Aggregates, the Assembly Tools, and the Verification and Validation Tools differ as well. Some integration techniques are more appropriate to certain types of systems. Table 3 below provides some examples. <br />
<br />
[[File:SEBoKv05_KA-SystRealiz_integration_elements_for_Product-Service-Enterprise.png|600px|center|Different Integration Elements for Product, Service, and Enterprise Systems ]]<br />
<br />
<br />
Table 3 – Different integration elements for Product, Service, and Enterprise systems<br />
<br />
===Practical Considerations===<br />
<br />
Major pitfalls encountered with System Integration are presented in Table 4:<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_pitfalls_with_System_Integration.png|650px|center|Major Pitfalls with System Integration]]<br />
<br />
Table 4 – Major pitfalls with System Integration<br />
<br />
<br />
<br />
Major proven practices encountered with System Integration are presented in Table 5:<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_practices_with_System_Integration.png|650px|center|Proven Practices with System Integration]]<br />
<br />
Table 5 – Proven practices with System Integration<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense. <br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
===Primary References===<br />
<br />
INCOSE. 2010. [[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
NASA. 2007. [[NASA Systems Engineering Handbook|Systems Engineering Handbook]]. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.<br />
<br />
===Additional References===<br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense. <br />
<br />
Gold-Bernstein, B., and W. A. Ruh. 2004. Enterprise integration: The essential guide to integration solutions. Boston, MA, USA: Addison Wesley Professional. <br />
<br />
Grady, J. O. 1994. System integration. Boca Raton, FL, USA: CRC Press, Inc. <br />
<br />
Hitchins, D. 2009. What are the General Principles Applicable to Systems? Insight. International Council on Systems Engineering. <br />
<br />
Jackson, S. 2010. Architecting Resilient Systems: Accident Avoidance and Survival and Recovery from Disruptions. Hoboken, NJ, USA: John Wiley & Sons. <br />
<br />
Reason, J. 1997. Managing the Risks of Organisational Accidents. Aldershot, UK: Ashgate Publishing Limited. <br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Implementation|<- Previous Article]] | [[System Realization|Parent Article]] | [[System Verification and Validation|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Implementation&diff=9655System Implementation2011-08-09T21:04:41Z<p>Skmackin: </p>
<hr />
<div>'''Introductory Paragraph(s)'''<br />
<br />
==Introduction, Definition and Purpose== <br />
'''Introduction''' - Implementation uses the structure created during [[Architectural Design]] and the results of [[System Analysis]] to construct System Elements that meet the Stakeholder Requirements and System Requirements developed in the early life cycle phases. These System Elements are then integrated to form intermediate [[Aggregate (glossary)|aggregates (glossary)]] and finally the complete system-of-interest. See [[System Integration]]. <br />
<br />
<br />
'''Definition and Purpose''' - Implementation is the process that actually yields the lowest-level System Elements in the system hierarchy (system breakdown structure). The System Elements are made, bought, or reused. Production involves the hardware fabrication processes of forming, removing, joining, and finishing; or the software realization processes of coding and testing; or the operational procedures development processes for operators' roles. If implementation involves a production process, a manufacturing system which uses the established technical and management processes may be required. <br />
<br />
The purpose of the implementation process is to design and create or fabricate a System Element conforming to that element’s design properties and/or requirements. The element is constructed employing appropriate technologies and industry practices. This process bridges the System Definition Processes and the Integration process. Figure 1 below portrays how the outputs of System Definition relate to System Implementation, which produces the Implemented (System) Elements required to produce Aggregates and the System of Interest. <br />
<br />
<br />
[[File:JS_Figure_4.png|600px|center|Simplification of how the outputs of system definition relate to system implementation.]]<br />
<br />
Figure 1 - Simplification of how the outputs of System Definition relate to System Implementation. (Developed for BKCASE)<br />
<br />
==Process Approach==<br />
===Purpose and Principle of the approach===<br />
During the implementation process, engineers apply the design properties and/or requirements allocated to a system element to design and produce a detailed description. They then fabricate, code, or build each individual element using the specified materials, processes, physical or logical arrangements, standards, technologies, and/or information flows outlined in the detailed description (drawings or other design documentation). This System Element will be verified against the detailed description of properties and validated against its requirements. <br />
<br />
If subsequent verification and validation actions or configuration audits reveal discrepancies, recursive interactions occur with predecessor activities or processes as required to mitigate those discrepancies and to modify, repair, or correct the System Element in question. <br />
<br />
Figure 2 provides the context for the implementation process from the perspective of DAU. <br />
<br />
[[File:JS_Figure_5.png|600 px|center|Context Diagram for the Implementation Process / DAU ]]<br />
<br />
<br />
Figure 2 - Context Diagram for the Implementation Process / DAU (Source: (DAU February 19, 2010)/Released)<br />
<br />
The International Council on Systems Engineering (INCOSE) provides a similar, but somewhat more detailed view on the context of implementation, as seen in Figure 3. <br />
<br />
[[File:JS_Figure_6.png|600 px|center|Context Diagram for the Implementation Process / INCOSE]]<br />
<br />
Figure 3 - Context diagram for the implementation process / INCOSE (INCOSE. 2010)<br />
<br />
These figures provide a useful overview of the systems engineering community’s perspectives on what is required for implementation and what the general results of implementation may be. These are further supported by the discussion of implementation inputs, outputs, and activities found in the National Aeronautics and Space Administration (NASA) handbook (NASA 2007). It is important to understand that these views are process-oriented. While this is a useful model, examining implementation only in terms of process can be limiting. <br />
<br />
Depending on the technologies and systems chosen when a decision is made to produce a System Element, the implementation process outcomes may generate constraints to be applied to the architecture of the higher-level system; those constraints are normally identified as derived System Requirements and added to the set of System Requirements applicable to this higher-level system. The architectural design has to take those constraints into account. <br />
<br />
If the decision is made to purchase or reuse an existing System Element, this has to be identified as a constraint or System Requirement applicable to the architecture of the higher-level system. Conversely, the implementation process may involve some adaptation or adjustment of the System Element in order for it to be integrated into a higher-level system or aggregate. <br />
<br />
Implementation also involves packaging, handling, and storage, depending on the technologies concerned and on where or when the System Element needs to be integrated into a higher-level aggregate. Developing the supporting documentation for the System Element, such as the manuals for operation, maintenance, and/or installation, is also a part of the implementation process. <br />
<br />
The System Element Requirements and the associated verification and validation criteria are inputs to this process; these inputs come from the detailed outputs of the [[Architectural Design]] process. <br />
<br />
Execution of the implementation process is governed by both industry and government standards and the terms of all applicable agreements. This may include conditions for packaging and storage, as well as preparation-for-use activities such as operator training. In addition, packaging, handling, storage, and transportation (PHS&T) considerations will constrain the implementation activities. For more information, refer to the discussion of PHS&T in the [[System Deployment and Use]] knowledge area. In addition, the developing or integrating organization will likely have enterprise-level safety practices and guidelines that must also be considered.<br />
<br />
===Activities of the Process===<br />
Major activities and tasks performed during this process include:<br />
<br />
#'''Define the implementation strategy'''. Implementation process activities begin with detailed design and include developing an Implementation Strategy that defines fabrication and coding procedures, tools and equipment to be used, implementation tolerances, and the means and criteria for auditing the configuration of resulting elements against the detailed design documentation. In the case of repeated system element implementations (such as for mass manufacturing or replacement elements), the implementation strategy is defined and refined to achieve consistent and repeatable element production; it is retained in the project decision database for future use. The implementation strategy also contains the arrangements for packaging, storing, and supplying the Implemented Element. <br />
#'''Realize the System Element'''. Realize or adapt and produce the concerned System Element using the implementation strategy items as defined above. Realization or adaptation is conducted with regard to standards that govern applicable safety, security, privacy, and environmental guidelines or legislation and the practices of the relevant implementation technology. This requires the fabrication of hardware elements, development of software elements, definition of training capabilities and drafting of training documentation, and the training of initial operators and maintainers. <br />
#'''Provide evidence of compliance'''. Record evidence that the System Element meets its requirements and the associated verification and validation criteria, as well as applicable legislation and policy. This requires conducting peer reviews and unit testing, as well as inspecting operation and maintenance manuals. Acquire the Measured Properties that characterize the Implemented Element (weight, capacities, effectiveness, level of performance, reliability, availability, etc.).<br />
#'''Package, store and supply the Implemented Element'''. This should be defined in the implementation strategy.<br />
<br />
<br />
===Artifacts and Ontology Elements===<br />
This process may create several artifacts such as:<br />
<br />
#Implemented System<br />
#Implementation Tools<br />
#Implementation Procedures<br />
#Implementation Plan or Strategy<br />
#Verification Reports<br />
#Issue / Anomaly / Trouble Report<br />
#Change Request (about design)<br />
<br />
This process handles the ontology elements of Table 1.<br />
<br />
<br />
Table 1 - Main ontology elements as handled within System Element implementation<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_ontology_within_implementation.png|650px|center|Main Ontology Elements as Handled within System Element Implementation ]]<br />
<br />
<br />
The main relationships between ontology elements are presented in Figure 4.<br />
<br />
[[File:SEBoKv05_KA-SystRealiz_Implementation_relationships.png|300px|center|Implementation Elements Relationships with Other Engineering Elements]]<br />
<br />
Figure 4 - Implementation elements relationships with other engineering elements. (Faisandier, 2011)<br />
<br />
<br />
===Methods, Techniques and Tools===<br />
Many software tools are available to support the implementation and integration phases. One of the most basic methods is the use of N-square diagrams, as discussed in Jeff Grady’s book on system integration (Grady 1994).<br />
<br />
===Checking and Correctness of Implementation===<br />
Checking the correctness of implementation should include testing to determine whether the Implemented Element (i.e., a piece of software, hardware, or other product) works in its intended use. Testing could include mockups and breadboards, as well as modeling and simulation of a prototype or completed pieces of a system. Once this testing is completed successfully, the next process is system integration.<br />
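A minimal sketch of such a check, using hypothetical property names and requirement limits invented for illustration, might compare an Implemented Element's Measured Properties against its specified limits before handing it over to integration:<br />

```python
# Illustrative sketch (property names and limits are hypothetical): check an
# Implemented Element's Measured Properties against the specified requirement
# limits before the element proceeds to system integration.

requirements = {          # property: (min allowed, max allowed); None = no limit
    "mass_kg":        (None, 12.0),
    "output_power_w": (45.0, None),
}

measured = {"mass_kg": 11.2, "output_power_w": 48.5}

def check_element(measured, requirements):
    """Return the list of properties that violate their requirement limits."""
    failures = []
    for prop, (low, high) in requirements.items():
        value = measured[prop]
        if (low is not None and value < low) or (high is not None and value > high):
            failures.append(prop)
    return failures

failures = check_element(measured, requirements)
print("element conforms" if not failures else f"non-conformances: {failures}")
```

In practice these results would be recorded in the verification reports listed among the artifacts above, and any non-conformance would raise an issue report or change request.<br />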
<br />
==References== <br />
<br />
===Citations===<br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense. <br />
<br />
<br />
Grady, J. O. 1994. System integration. Boca Raton, FL, USA: CRC Press, Inc.<br />
<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
===Primary References===<br />
<br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense.<br />
<br />
<br />
Grady, J. O. 1994. System integration. Boca Raton, FL, USA: CRC Press, Inc.<br />
<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105. <br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Realization|<- Previous Article]] | [[System Realization|Parent Article]] | [[System Integration|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Realization&diff=9654System Realization2011-08-09T20:58:44Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
===Topics===<br />
The topics contained within this knowledge area include:<br />
*[[System Implementation]]<br />
*[[System Integration]]<br />
*[[System Verification and Validation]]<br />
<br />
=Introduction=<br />
The SEBoK divides the traditional life cycle process steps into four stages. This knowledge area discusses the realization stage. The processes included in realization are those required to build a system, integrate disparate system elements, and ensure that the system both meets the needs of stakeholders and aligns with the requirements identified in the system definition stages. These processes are not sequential; their iteration and flow are depicted in Figure 1, which also shows how these processes fit within the context of the System Definition and System Deployment and Use knowledge areas.<br />
<br />
[[File:JS_Figure_1.png|600 px|center|System Realization Context]]<br />
Figure 1 - System Realization (Developed for BKCASE)<br />
<br />
Essentially, the outputs of system definition are used during implementation to create [[System Element (glossary)|system elements (glossary)]] and during integration to provide plans and criteria for combining these elements. The requirements derived as part of [[System Definition]] are used to verify and validate System Elements, systems, and the overall [[System of Interest (SoI) (glossary)]]. These activities provide feedback into the system design, particularly when problems or challenges are identified. Finally, when the system is considered verified and validated, it becomes an input to system deployment and use. It is important to understand that there is overlap in these activities; they do not have to occur in sequence. The way these activities are performed depends upon the life cycle model in use (for additional information on life cycles, see the Systems Engineering Life Cycles Knowledge Area (KA)).<br />
<br />
The realization processes are designed to ensure that the system will be ready to transition and will have the appropriate structure and behavior to enable the desired operation and functionality throughout the system’s life span. Both DAU and NASA include “transition” in realization, in addition to implementation, integration, verification, and validation (Prosnik 2010; NASA 2007, 1-360). However, the SEBoK includes transition in the System Deployment and Use KA.<br />
<br />
<br />
==Fundamentals==<br />
<br />
===Macro view of realization processes===<br />
<br />
Figure 2 illustrates a macro view of the generic outputs of realization activities when using a Vee life cycle model. The left side of the Vee represents the various design activities that proceed 'down' the system hierarchy.<br />
[[File:The V - A Macro View.png]]<br />
<br />
<br />
Figure 2 - The Vee Activity Diagram (Prosnik 2010)<br />
<br />
<br />
<br />
The left side of the Vee model represents the development of System Element specifications and design descriptions. In this stage, verification and validation plans are developed, which are later used to determine whether realized System Elements (Products, Services, or Enterprises) are compliant with specifications and [[Stakeholder Requirement (glossary)|stakeholder requirements (glossary)]]. Also during this stage, initial specifications become flow-down requirements for lower-level system models. In terms of time frame, these activities occur early in the system’s life cycle. These activities are discussed in the System Definition Knowledge Area. However, it is important to understand that some of the System Realization activities are initiated at the same time as some System Definition activities.<br />
<br />
The right side of the Vee model, as illustrated in Figure 2, results in System Elements that are assembled into Products, Services, or Enterprises according to the system model described during the left side of the Vee. Verification and validation activities determine how well the realized system fulfills the [[Stakeholder Requirement (glossary)|stakeholder requirements (glossary)]], the [[System Requirement (glossary)|system requirements (glossary)]] and [[Design Property (glossary)|design properties (glossary)]]. These activities should follow the plans developed on the left side of the Vee. <br />
<br />
For example, the U.S. Defense Acquisition University (DAU) provides this overview of what occurs during system realization:<br />
<br />
"Once the products of all system models have been fully defined, Bottom-Up End Product Realization can be initiated. This begins by applying the Implementation Process to buy, build, code or reuse end products. These implemented end products are verified against their design descriptions and specifications, validated against Stakeholder Requirements and then transitioned to the next higher system model for integration. End products from the Integration Process are successively integrated upward, verified and validated, transitioned to the next acquisition phase or transitioned ultimately as the End Product to the user." (Prosnik 2010)<br />
<br />
While the systems engineering technical processes are life cycle processes, the processes are concurrent, and the emphasis of the respective processes depends on the phase and maturity of the design. Figure 3 portrays (from left to right) a notional emphasis of the respective processes throughout the systems acquisition life cycle from the perspective of the U.S. Department of Defense (DoD). It is important to note that, from this perspective, these processes do not follow a linear progression. Instead, they are concurrent, with the amount of activity in a given area changing over the system’s life cycle. The red boxes indicate the topics that will be discussed below as part of realization.<br />
<br />
[[File:JS_Figure_3.png|Notional Emphasis of Systems Engineering Technical Processes and Program Life-cycle Phases]]<br />
<br />
Figure 3 - Notional Emphasis of Systems Engineering Technical Processes and Program Life-Cycle Phases (Source: DAU February 19, 2010/Released)<br />
<br />
===Ontology for System Realization===<br />
<br />
For general explanations about ontology, refer to the section "Ontology for System Development" in the [[System Definition]] introduction. Figure 4 below presents an overview of the ontology elements used within System Realization activities. A set of major entities, attributes, and relationships is suggested and defined in the following sections; they are consistent with the System Definition ontology elements.<br />
<br />
<br />
Figure 4 - A simplified view of a meta-data model for System Realization. (Faisandier, 2011)<br />
<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense.<br />
<br />
<br />
Prosnik, G. 2010. Materials from "systems 101: Fundamentals of systems engineering planning, research, development, and engineering". DAU distance learning program. eds. J. Snoderly, B. Zimmerman. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).<br />
<br />
<br />
===Primary References===<br />
<br />
<br />
INCOSE . 2011. INCOSE Systems Engineering Handbook, Version 3.2.1 (Jan 2011) San Diego, CA, USA: International Council on Systems Engineering (INCOSE).<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E). <br />
<br />
<br />
Martin, J. N. 1997. Systems engineering guidebook: A process for developing systems and products. 1st ed. Boca Raton, FL, USA: CRC Press.<br />
<br />
<br />
NASA. December 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.<br />
<br />
<br />
===Additional References===<br />
<br />
DAU. February 19, 2010. Defense acquisition guidebook (DAG). Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense.<br />
<br />
<br />
DAU. 2009. Your acquisition policy and discretionary best practices guide. In Defense Acquisition University (DAU)/U.S. Department of Defense (DoD) [database online]. Ft. Belvoir, VA, USA. Available from https://dag.dau.mil/Pages/Default.aspx (accessed 2010).<br />
<br />
<br />
ECSS. 6 March 2009. Systems engineering general requirements. Noordwijk, Netherlands: Requirements and Standards Division, European Cooperation for Space Standardization (ECSS), ECSS-E-ST-10C.<br />
<br />
<br />
<br />
----<br />
====Article Discussion====<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Analysis|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[System Implementation|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Analysis&diff=9653System Analysis2011-08-09T20:58:05Z<p>Skmackin: </p>
<hr />
<div>==Introduction, Definition and Purpose of System Analysis==<br />
System Analysis allows the developers of systems to carry out, in the most objective way possible, assessments of engineering data in order to select the most efficient system [[Architecture (glossary)]] and to provide a set of consistent and accurate engineering data, including System Requirements.<br />
<br />
During engineering, assessments should be performed every time technical choices or decisions have to be made or justified, not only to compare different design architectures but also to take the System Requirements into account. System Analysis provides a rigorous approach to technical decision-making. It is used to perform trade-off studies, which include analyses such as cost analysis, technical risk analysis, and effectiveness analysis. “Assess and select” is one of the major tasks of the systems engineer.<br />
<br />
<br />
----<br />
<br />
==Principles about System Analysis==<br />
One of the major development issues is to ensure that the global set of engineering data created for a System of Interest (mainly Stakeholder Requirements, System Requirements, and Design Properties) is consistent and that the data values are relevant. Trade-off studies are at the center of System Analysis, focusing on these aspects and providing the means and techniques:<br />
*to define [[Assessment Criterion (glossary)|Assessment Criteria (glossary)]] based on System Requirements;<br />
*to assess [[Design Property (glossary)|Design Properties (glossary)]] of each candidate solution in comparison to these criteria;<br />
*to score the candidate solutions globally, and to justify the scores.<br />
<br />
<br />
The number and importance of the Assessment Criteria to be assessed depend on the type of system considered and on its context of operational use. The various assessments use theoretical models, representation models, mock-ups, simulations, analytical models, etc.<br />
<br />
<br />
<br />
===Trade-off studies===<br />
In the context of the definition of a system, a trade-off study consists of comparing the characteristics of each candidate solution to determine the solution that best addresses the Assessment Criteria globally. The various characteristics analyzed are gathered in Costs Analysis, Technical Risks Analysis, and Effectiveness Analysis (NASA. 2007, page 1-360); each class of analysis is addressed in a dedicated section below. A trade-off study relies on the following elements:<br />
#[[Assessment Criterion (glossary)|Assessment Criteria (glossary)]] are used to classify the various candidate solutions relative to one another. They may be absolute or relative; for example: the maximum cost per unit produced is cc$; the cost reduction shall be x%; the effectiveness improvement is y%; the risk mitigation is z%.<br />
#'''Boundaries''' identify and limit the characteristics or criteria to be taken into account in the analysis. Examples: kind of costs to be taken into account; acceptable technical risks; type and level of effectiveness.<br />
#'''Scales''' are used to quantify the characteristics or properties or criteria and to make comparisons. Their definition requires knowing the highest and lowest limits as well as the type of evolution of the characteristic (linear, logarithmic, etc.).<br />
#An [[Assessment Score (glossary)]] is assigned to a characteristic or criterion for each candidate solution. The goal of the trade-off study is to succeed in quantifying the three variables (and their decomposition in sub variables) of cost, risk, and effectiveness for each candidate solution. This operation is generally complex and requires the use of '''models'''.<br />
#The '''optimization''' of the characteristics or properties improves the scoring of interesting solutions.<br />
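As a minimal illustration of the scoring mechanism described above, the following Python sketch combines normalized Assessment Scores into a global score per candidate solution using a weighted sum. The criteria names, weights, and raw scores are purely hypothetical and are not taken from the SEBoK text:<br />

```python
def weighted_score(raw_scores, weights):
    """Combine normalized criterion scores (0..1 scales) into a global
    Assessment Score using a weighted sum."""
    total_weight = sum(weights.values())
    return sum(raw_scores[c] * w for c, w in weights.items()) / total_weight

# Weights reflect the relative importance of each Assessment Criterion
# (illustrative values only).
weights = {"cost": 0.5, "technical_risk": 0.3, "effectiveness": 0.2}

# Normalized scores per candidate solution (1.0 = best on the scale).
candidates = {
    "architecture_A": {"cost": 0.8, "technical_risk": 0.6, "effectiveness": 0.7},
    "architecture_B": {"cost": 0.5, "technical_risk": 0.9, "effectiveness": 0.9},
}

scores = {name: weighted_score(s, weights) for name, s in candidates.items()}
best = max(scores, key=scores.get)
```

With these assumed figures, architecture_A scores 0.72 against 0.70 for architecture_B, showing how the weighting of criteria (here, cost dominating) drives the selection.<br />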
<br />
<br />
A decision-making process is not an exact science, and trade-off studies have limits. The following concerns shall be taken into account:<br />
*Subjective variables: for example, the component has to be beautiful. What is a beautiful component?<br />
*Uncertain data: for example, inflation has to be taken into account to estimate the cost of maintenance during the complete life cycle. What will inflation be for the next five years?<br />
*Sensitivity analysis: a global assessment score associated with a candidate solution is not absolute; it is recommended to verify the robustness of the selection by performing a sensitivity analysis, which consists of stimulating the decision model with small variations of the assessment criteria values (weights). The selection is robust if these variations do not change the order of the scores.<br />
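The sensitivity analysis described in the last bullet can be sketched as follows: perturb each criterion weight by a small delta and check whether the ranking of candidate solutions changes. All names, weights, and the perturbation size are illustrative assumptions:<br />

```python
import itertools

def rank(candidates, weights):
    """Order candidate solutions by weighted-sum score, best first."""
    def score(s):
        return sum(s[c] * w for c, w in weights.items()) / sum(weights.values())
    return sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)

def is_robust(candidates, weights, delta=0.05):
    """The selection is robust if perturbing each weight by +/-delta
    never changes the ranking order."""
    baseline = rank(candidates, weights)
    for criterion, sign in itertools.product(weights, (+1, -1)):
        perturbed = dict(weights)
        perturbed[criterion] = max(0.0, perturbed[criterion] + sign * delta)
        if rank(candidates, perturbed) != baseline:
            return False
    return True

weights = {"cost": 0.5, "risk": 0.3, "effectiveness": 0.2}
candidates = {
    "A": {"cost": 0.8, "risk": 0.6, "effectiveness": 0.7},
    "B": {"cost": 0.4, "risk": 0.5, "effectiveness": 0.6},
}
robust = is_robust(candidates, weights)
```

In this assumed case, solution A dominates B on every criterion, so small weight variations cannot change the order and the selection is robust.<br />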
<br />
<br />
A relevant trade-off study specifies the assumptions, variables, and confidence intervals of the results. Knowledge management and prior experience, which result in significant databases and models able to demonstrate relevance and efficiency, are a considerable advantage.<br />
<br />
===Costs Analysis ===<br />
<br />
A [[Cost (glossary)]] analysis considers the full life cycle costs. The cost baseline can be adapted according to the project and the system. The global life cycle cost may include labor and non-labor cost items such as those indicated in Table 1.<br />
<br />
<br />
Table 1 - Types of costs NEW TABLE<br />
<br />
<br />
<br />
Methods for determining cost are described in the [[Planning]] topic.<br />
<br />
===Technical Risks Analysis===<br />
Every [[Risk (glossary)]] analysis, in any domain, is based on three elements:<br />
*Analysis of potential threats or undesired events and their probability of occurrence.<br />
*Analysis of the consequences of these threats or undesired events and their classification on a scale of gravity.<br />
*The installation of protections or preventions to reduce the probabilities of threats and/or the levels of harmful effect to acceptable values.<br />
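The three elements above can be sketched as a simple risk-exposure calculation, combining the probability of an undesired event with its class on a gravity scale to decide whether mitigation (protection or prevention) is required. The gravity scale, threshold, and example risks are assumptions for illustration only:<br />

```python
# Hypothetical gravity scale: ordinal classes mapped to numeric levels.
GRAVITY_SCALE = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}

def exposure(probability, gravity):
    """Risk exposure = probability of the undesired event x its gravity level."""
    return probability * GRAVITY_SCALE[gravity]

def needs_mitigation(probability, gravity, threshold=1.0):
    """Flag risks whose exposure exceeds an assumed acceptability threshold."""
    return exposure(probability, gravity) > threshold

# Illustrative technical risks: (probability of occurrence, gravity class).
risks = {
    "part_obsolescence": (0.6, "critical"),   # exposure 1.8 -> mitigate
    "supplier_delay": (0.3, "marginal"),      # exposure 0.6 -> acceptable
}
to_mitigate = [name for name, (p, g) in risks.items() if needs_mitigation(p, g)]
```

Mitigation then reduces either the probability (prevention) or the gravity (protection) until the exposure falls below the acceptable value.<br />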
<br />
<br />
Technical risks appear when the system can no longer satisfy the System Requirements. The causes reside in the solution itself and/or in the requirements. They are expressed in the form of insufficient effectiveness and can have multiple causes: incorrect assessment of the technological capabilities; failure of parts, breakdowns, or breakage; obsolescence of equipment, parts, or software; weakness of a supplier (non-compliant parts, supply delays, etc.); human factors (insufficient training, wrong tunings, error handling, unsuited procedures, malice); etc. Some risks come from context modifications against which the project or the company cannot react: natural events (storm, flood, etc.), political decisions or events, regulation and standardization evolutions, or changes of policy or strategy.<br />
<br />
Note: Technical risks must not be confused with project risks, even if the method to manage them is the same. Technical risks address the system itself, not the project for its development; of course, technical risks may affect project risks. Technical risks can be managed in the same way as project risks throughout development, until they are satisfactorily mitigated.<br />
<br />
See [[Risk Management]] for more details.<br />
<br />
<br />
===Effectiveness Analysis===<br />
Effectiveness studies use the different types of requirements as a starting point. The effectiveness of the system concerns several essential characteristics that are generally gathered in the following list of analyses, including but not limited to: performance, usability, dependability, manufacturing, maintenance or support, environment, etc. These analyses highlight candidate solutions under various aspects. It is essential to establish a classification in order to limit the number of analyses to the really significant aspects. The main difficulty of effectiveness analysis is to sort and select the right set of effectiveness aspects; for example, if the product is made for a single use, maintainability and the capability for evolution will not be relevant criteria.<br />
<br />
<br />
----<br />
<br />
==Process Approach - System Analysis==<br />
===Purpose and principles of the approach===<br />
The system analysis process is used to: (1) provide a rigorous basis for technical decision making, resolution of requirement conflicts, and assessment of alternative physical solutions; (2) determine progress in satisfying System Requirements and derived requirements; (3) support risk management; and (4) ensure that decisions are made only after evaluating the cost, schedule, performance, and risk effects on the engineering or reengineering of the system. (ANSI/EIA. 1998)<br />
<br />
This process is named "Decision Analysis Process" in (NASA. 2007) page 1-360: The Decision Analysis Process is used to help evaluate technical issues, alternatives, and their uncertainties to support decision-making. See [[Decision Management]] for more details.<br />
<br />
<br />
The system analysis supports other system definition processes:<br />
*The Stakeholder Requirements definition and System Requirements definition processes use system analysis to solve issues relating to conflicts among the set of requirements, in particular those related to costs, technical risks, and effectiveness (performances, operational conditions, and constraints). System Requirements that are subject to high risks, or that would require different architectures, are discussed.<br />
*The Architectural Design process uses it to assess characteristics or Design Properties of candidate Functional and Physical Architectures, providing arguments for selecting the most efficient one in terms of costs, technical risks, and effectiveness (example: performances, dependability, human factors, etc.).<br />
<br />
Like any system definition process, the System Analysis Process is iterative. Each operation is carried out several times; each step improves the precision of analysis.<br />
<br />
<br />
===Activities of the Process ===<br />
Major activities and tasks performed during this process include:<br />
#Planning the trade-off studies.<br />
##Determine the number of candidate solutions to analyze, the methods and procedures to be used, the expected results (objects to be selected: Functional Architecture/Scenario, Physical Architecture, System Element, etc.), the justification items.<br />
##Schedule the analyses according to the availability of models, engineering data (System Requirements, Design Properties), skilled personnel, and procedures.<br />
#Define the selection criteria model.<br />
##Select the Assessment Criteria from non-functional requirements (performances, operational conditions, constraints, etc.).<br />
##Sort and order the Assessment Criteria.<br />
##Establish a scale of comparison for each Assessment Criterion, and weigh every Assessment Criterion according to its relative importance with respect to the others.<br />
#Identify candidate solutions and related models, and data.<br />
#Assess candidate solutions using previously defined methods or procedures.<br />
##Carry out costs analysis, technical risks analysis, and effectiveness analysis placing every candidate solution on every Assessment Criterion comparison scale.<br />
##Score every candidate solution as an Assessment Score.<br />
#Provide results to the calling process: Assessment Criteria, comparison scales, solutions’ scores, [[Assessment Selection (glossary)]], and possibly recommendations with related arguments.<br />
<br />
<br />
===Artifacts and Ontology Elements===<br />
This process may create several artifacts such as:<br />
#Selection criteria model (list, scales, weighing)<br />
#Costs, risks, effectiveness analysis reports<br />
#Justification reports<br />
<br />
<br />
This process handles the ontology elements of Table 2 within System Analysis.<br />
<br />
[[File:SEBoKv05_KA-SystDef_ontology_elements_System_Analysis.png|600px|center|Main Ontology Elements as Handled within System Analysis]]<br />
<br />
Table 2. Main ontology elements as handled within System Analysis<br />
<br />
<br />
The main relationships between ontology elements of System Analysis are presented in Figure 1.<br />
<br />
[[File:SEBoKv05_KA-SystDef_System_Analysis_relationships.png|550px|center|System Analysis Elements Relationships with Other Engineering Elements ]]<br />
<br />
Figure 1. System Analysis elements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
===Checking and correctness of System Analysis===<br />
<br />
TO BE WRITTEN<br />
<br />
ADD ACTION RELATED TO COMMENT 408<br />
<br />
<br />
===Methods and Modeling Techniques===<br />
*'''General usage of models''': Various types of models can be used in the context of System Analysis:<br />
**'''Physical models''' are scale models that allow the simulation of physical phenomena. They are specific to each discipline; associated tools include, for example, mock-ups, vibration tables, test benches, prototypes, decompression chambers, wind tunnels, etc.<br />
**'''Representation models''' are mainly used to simulate the behavior of a system; for example Enhanced Functional Flow Block Diagrams (EFFBD), Statecharts, State machine diagram (SysML), etc.<br />
**'''Analytical models''' are mainly used to establish values of estimates; they can be deterministic models or probabilistic models (also known as stochastic models). Analytical models use equations or diagrams to approximate the real operation of the system. They range from the simplest (an addition) to the most complicated (a probabilistic distribution with several variables).<br />
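As a small illustration of a probabilistic (stochastic) analytical model, the sketch below estimates a total life cycle cost by Monte Carlo simulation, drawing each cost item from a triangular distribution (minimum, most likely, maximum). The cost items and figures are invented for the example:<br />

```python
import random

def monte_carlo_cost(n_runs=10_000, seed=42):
    """Stochastic analytical model: total life cycle cost as the sum of
    uncertain cost items, each drawn from a triangular distribution.
    All figures are illustrative, in arbitrary cost units."""
    rng = random.Random(seed)
    cost_items = {                       # (low, most likely, high)
        "development": (80, 100, 140),
        "production": (50, 60, 90),
        "maintenance": (30, 45, 70),
    }
    totals = [
        sum(rng.triangular(low, high, mode) for low, mode, high in cost_items.values())
        for _ in range(n_runs)
    ]
    totals.sort()
    return {
        "mean": sum(totals) / n_runs,
        "p80": totals[int(0.8 * n_runs)],   # 80th-percentile cost value
    }

estimate = monte_carlo_cost()
```

Such a model yields not a single figure but a distribution, from which a mean and a confidence value (here the 80th percentile) can be quoted, consistent with the recommendation to specify confidence intervals in trade-off results.<br />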
<br />
<br />
*'''Use the right models''' depending on project progress:<br />
**At the beginning of the project, the first studies use simple tools that allow rough approximations, which have the advantage of not requiring too much time and effort; these approximations are often sufficient to eliminate unrealistic or unpromising candidate solutions.<br />
**As the project progresses, it becomes necessary to improve the precision of the data in order to compare the candidate solutions still competing. The work is more complicated if the level of innovation is high.<br />
**A system engineer alone cannot model a complex system; he or she has to be supported by skilled people from the different disciplines involved.<br />
<br />
<br />
*'''Specialist expertise''': When the values of Assessment Criteria cannot be given in an objective or precise way, or when the subjective aspect is dominant, the expertise of specialists can be sought. The estimation proceeds in four steps:<br />
**Select interviewees to collect the opinions of people qualified in the field considered.<br />
**Draft a questionnaire; a precise questionnaire allows an easy analysis, but an overly closed questionnaire risks neglecting significant points.<br />
**Interview a limited number of specialists using the questionnaire, including an in-depth discussion to get precise opinions.<br />
**Analyze the data with several different people, comparing their impressions until agreement is reached on a classification of the Assessment Criteria and/or candidate solutions.<br />
<br />
<br />
Often used analytical models in the context of System Analysis are summarized in Table 3.<br />
<br />
<br />
<br />
Table 3 - Often used analytical models in the context of System Analysis NEW TABLE<br />
<br />
<br />
----<br />
<br />
==Application to Product systems, Service systems, Enterprise systems==<br />
<br />
TO BE WRITTEN<br />
<br />
<br />
----<br />
<br />
==Practical Considerations about System Analysis==<br />
<br />
Major pitfalls encountered with System Analysis are presented in Table 4.<br />
<br />
[[File:SEBoKv05_KA-SystDef_pitfalls_System_Analysis.png|600px|center|Pitfalls with System Analysis]]<br />
<br />
Table 4 – Pitfalls with System Analysis<br />
<br />
<br />
<br />
Proven practices with System Analysis are presented in Table 5.<br />
<br />
[[File:SEBoKv05 KA-SystDef practices System Analysis.png|600px|center|Proven practices with System Analysis ]]<br />
<br />
Table 5 – Proven practices with System Analysis<br />
<br />
<br />
<br />
----<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
NASA. 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.<br />
<br />
<br />
ANSI/EIA. 1998. Processes for engineering a system. Philadelphia, PA, USA: American National Standards Institute (ANSI)/Electronic Industries Association (EIA), ANSI/EIA-632-1998.<br />
<br />
===Primary References===<br />
<br />
ANSI/EIA. 1998. Processes for engineering a system. Philadelphia, PA, USA: American National Standards Institute (ANSI)/Electronic Industries Association (EIA), ANSI/EIA-632-1998.<br />
<br />
<br />
NASA. 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.<br />
<br />
===Additional References===<br />
<br />
Faisandier, A. 2011. Engineering and architecting multidisciplinary systems. (expected--not yet published). <br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Architectural Design|<- Previous Article]] | [[System Definition|Parent Article]] | [[System Realization|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Logical_Architecture&diff=9652Logical Architecture2011-08-09T20:56:26Z<p>Skmackin: </p>
<hr />
<div>==Introduction, Definition and Purpose of Architectural Design==<br />
'''Introduction''' - Architectural design explores, defines, and formalizes the solutions that meet the System Requirements, and selects the optimal one, taking those requirements into account. Design is, in fact, the core of Systems Engineering.<br />
<br />
<br />
'''Definition and Purpose''' - The design of a system is the creation of a global solution based on principles and concepts that are logically related and consistent with each other. The solution has properties and characteristics that satisfy, as much as possible, the problem expressed by a set of System Requirements, and it is implementable through technologies.<br />
<br />
The properties and characteristics of a complex system that encompasses several disciplines are numerous. They can be classified and modeled during the design activities in sectors or views such as, but not limited to, functional, temporal, behavioral, performance, operational, environmental, and structural - see [[Fundamentals of System Definition]] topic, System Architecture and System Design sections.<br />
<br />
The designer or architect is greatly helped when [[System Requirement (glossary)|System Requirements (glossary)]] are classified with a design-property-oriented classification, as suggested in the [[System Requirements]] Definition topic, section "Classification of System Requirements".<br />
<br />
<br />
Some essential issues to be addressed during the design of systems are related to the properties, characteristics, and goals listed below:<br />
<br />
*'''Functional, behavioral, temporal views''' - The functional modeling of the solution, as independent as possible from the implementation technologies, is an essential step toward working out an optimal physical solution. This functional model is established according to various aspects of transformations ([[Function (glossary)|Functions (glossary)]] and [[Input-Output Flow (glossary)|Input-Output Flows (glossary)]]) and considers the behavior of the system ([[Operational Mode (glossary)|Operational Modes (glossary)]], [[Transition of Modes (glossary)|Transitions of Modes (glossary)]], and trigger events), otherwise known as the [[Scenario (glossary)|Scenarios (glossary)]] of the system operation (use cases). A temporal and decision monitoring model usefully supplements the functional and behavioral models by organizing the in-service management of the system so that it permanently achieves its [[Mission (glossary)]] and [[Purpose (glossary)]].<br />
<br />
*'''Performance, operational, environmental, structural views''' - The projection or allocation of the functional, behavioral, and temporal models onto a [[Physical Architecture (glossary)]], dependent on the implementation technologies, includes the definition of systems, [[System Element (glossary)|System Elements (glossary)]], and physical connections ([[Physical Interface (glossary)|Physical Interfaces (glossary)]]) that together have [[Design Property (glossary)|Design Properties (glossary)]] such as:<br />
**structural properties (simplicity, modularity, adaptability, scalability, reusability, portability, commonality, expandability, etc.),<br />
**effectiveness/performance levels, accuracy, etc., <br />
**operational characteristics (usability, availability, maintainability, reliability, testability, robustness, interoperability, integrity, generality, training, etc.),<br />
**environmental characteristics (heatproof, shockproof, electrical resistance, radiation resistance, etc.).<br />
<br />
*'''Confidence in the solution''' - The confidence of having correctly designed the architecture and found the optimal option, given the complete set of System Requirements. This essential aspect is related to the assessment of the properties and characteristics of the system that is performed during design; refer to the [[System Analysis]] topic.<br />
<br />
<br />
The Physical Architecture models should cover properties such as those listed above, but it is impossible for a single model to represent all the properties. A current issue is ensuring the consistency of all the models that represent the global solution.<br />
<br />
==Principles about Architectural Design==<br />
This section provides a short explanation of the general mechanism of intellectual creation, and then focuses on concepts and/or patterns known by system designers or architects, such as interface, function, input-output-control flow, dynamics, temporal and decision hierarchy, allocation and partitioning, emerging properties, etc.<br />
<br />
<br />
===Intellectual creation principles and patterns===<br />
Intellectual creation works in more or less short "analysis – synthesis" cycles: we analyze objects, ideas, and their relationships, and then we try to express or represent a synthesis, more or less successfully. Figure 1 summarizes this mechanism:<br />
<br />
*'''Analysis''' includes two major tasks: perception and understanding. <br />
**'''Perception''' consists of identifying words, figures, drawings, ideas, etc. <br />
**'''Understanding''' consists of associating what is perceived with existing concepts recorded in our memory, sorting the concepts in order to use them or not. <br />
*'''Synthesis''' also includes two major tasks: imagination and expression. <br />
**'''Imagination''' consists of transforming concepts into images and ideas, then organizing, structuring, globalizing, clarifying, and making them precise. If a concept has not been recognized during the analysis, it does not exist in our mind, but it can be created from associations of existing concepts by similarity, aggregation, modification, etc. <br />
**'''Expression''' consists of transcribing ideas and images into words, drawings, symbols, or sounds, using standards known by the concerned community.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Intellectual_creation_principles.png|400px|center|Intellectual Creation Principles and Mechanism]]<br />
<br />
Figure 1. Intellectual creation principles and mechanism (Faisandier, 2011)<br />
<br />
<br />
This mechanism is used at any time in engineering activities, and in particular when designing architectural views. To design a [[Functional Architecture (glossary)]], the designer analyzes the System Requirements, recognizes concepts of Functions and Input-Output Flows, and then imagines how to organize them and represent Scenarios as functional flow diagrams using standard representations. If a sequence of functions does not exist, the designer is able to use existing "patterns" and/or create new ones by similarity, aggregation, modification, etc. The same mechanism applies to a Physical Architecture: understanding the Functions, imagining or recognizing System Elements supporting the functions while taking into account properties and characteristics, and then expressing them with textual or graphical models for representation. <br />
<br />
Note: The term "pattern" originates as an architectural concept by Christopher Alexander – refer to (Alexander, Christopher; Sara Ishikawa, Murray Silverstein, Max Jacobson, Ingrid Fiksdahl-King, Shlomo Angel. 1977). In software engineering, a design pattern is a general reusable solution to a commonly occurring problem in software design. A design pattern is not a finished design that can be transformed directly into code; it is a description or template for how to solve a problem that can be used in many different situations – refer to (Gamma, Erich; Richard Helm, Ralph Johnson, and John Vlissides. 1995). <br />
Like software engineering, several domains have adopted the notion of pattern and defined their basic ones. Systems Engineering would have to do the same for every topic or view.<br />
<br />
===Concept of Interface===<br />
The concept of interface is probably one of the most important to take into account when designing the architecture of the [[System of Interest (SoI) (glossary)]]. From an etymology point of view, the term “interface” comes from the Latin words "inter" and "face" and means “to do something between things”. Therefore, the fundamental aspect of an interface is functional and is defined as the inputs and outputs of functions. As the Functions are performed by physical elements (systems or System Elements), the Input/Output Flows of the functions are also carried by physical elements, called Physical Interfaces. Consequently, the engineer must consider both the functional and physical aspects in the complete notion of interface. A detailed analysis of a complete interface shows that one must also consider the Function “send,” located in one of the System Elements, the Function “receive,” located in the other one, and the Function “carry,” performed by the Physical Interface which supports the Input/Output Flow – see Figure 2.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Complete_Interface_Representation.png|500px|center|Complete Interface Representation]]<br />
<br />
Figure 2. Complete Interface Representation (Faisandier, 2011)<br />
<br />
<br />
In the context of complex exchanges between some System Elements, a protocol is seen as a Physical Interface which carries or supports the exchanges of data (functional interface).<br />
<br />
===Functional / Logical Architecture===<br />
====Concept of Function====<br />
A Function is an action that transforms inputs and generates outputs such as materials, energies, or information (or a combination of them). These inputs and outputs are the flow items exchanged between the functions. The general mathematic notation of a function is y = ƒ(x,t) and can be represented graphically – refer to section '''Methods and Modeling Techniques'''.<br />
<br />
The design activities shall proceed from a functional framework, specifically in the case of processes dealing with control-command, data processing, or services. In order to define the complete set of functions of the system, one must identify all the functions necessitated by the system and its derived requirements, as well as the corresponding input and output exchanges driven by those functions. These two kinds of functions are:<br />
#The functions directly deduced from the functional requirements and from the interface requirements. They express the expected services of a system to meet the System Requirements.<br />
#The derived functions issued from the alternative solutions of Physical Architecture as the result of the design. They depend on technology choice to implement the Functional Architecture.<br />
<br />
<br />
====Functions hierarchy – Decomposition of Functions====<br />
At the highest level of a hierarchy, it is possible to represent a system as a unique main function (defined as the system's mission) just like a "black box." In order to understand in detail what the system does, this "head-of-hierarchy" is broken down into sub-functions grouped to form a sub-level of the hierarchy, and so on. The functions of the last level of a functional hierarchy can be called leaf-functions. Hierarchies (or breakdowns) decompose a complex or global function into a set of functions for which the physical solutions are known or possible to imagine.<br />
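The hierarchical decomposition just described can be sketched as a simple tree walk that collects the leaf-functions under the head-of-hierarchy. The function names below are hypothetical examples, not drawn from the article:<br />

```python
# Hypothetical functional hierarchy: the mission is the head-of-hierarchy;
# each function maps to its sub-functions; functions with no entry are
# leaf-functions for which a physical solution is known or imaginable.
hierarchy = {
    "provide transport service": ["move vehicle", "manage energy"],
    "move vehicle": ["accelerate", "brake", "steer"],
    "manage energy": ["store energy", "distribute energy"],
}

def leaf_functions(hierarchy, root):
    """Collect the functions of the last level of the hierarchy."""
    children = hierarchy.get(root)
    if not children:
        return [root]
    leaves = []
    for child in children:
        leaves.extend(leaf_functions(hierarchy, child))
    return leaves

leaves = leaf_functions(hierarchy, "provide transport service")
```

Walking the tree from the mission down yields the five leaf-functions, i.e., the level at which physical solutions can be allocated.<br />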
<br />
<br />
====Input/Output and Control Flows – Interfaces between Functions====<br />
The decomposition into functional hierarchy is an interesting way of understanding the system, but it represents an incomplete part of the Functional Architecture because it does not represent the exchanged flows of inputs and outputs. To get a more complete view, representations of the Functional Architecture should be able to model the various transformations operated by the system, using diagrams such as Functional Flow Block diagrams (FFBD)(Oliver, Kelliher, and Keegan. 1997), or Activity Diagram of SysML (OMG. 2010).<br />
<br />
In SE three types of Input/Output Flows exchanged are considered: material, energy, and information.<br />
<br />
<br />
====Control Flow (Trigger)====<br />
The control flow is an element that activates a function as a condition of its execution. The state of this element (or the condition it represents) activates or deactivates the function (or elements thereof). A control flow can be a signal, an event such as position "on", an alarm or a trigger, a temperature variation, the push of a key on a keyboard, etc.<br />
<br />
Note: in SysML (OMG. 2010), all interactions are initiated by signals; there are five kinds of signals: creation event, destruction event, asynchronous message, synchronous message, and duration constraints.<br />
<br />
====Concept of Dynamics====<br />
The preponderance of mathematics consists of continuous functions that are generally solved through differential equations. In SE, one also considers the set of functions that interact among themselves and with functions outside of the system. Since functions start and stop their execution depending on events, triggers, or durations, the discontinuous aspect of functions is a major consideration when designing the Functional Architecture.<br />
<br />
Modeling using only functional hierarchy decomposition is insufficient to give a complete idea of what the system will do. A Function is action, transformation, and dynamics, so it is necessary to consider the exchanges between the functions, as well as the reactions of the system in its context and operational environment, through control flows, Scenarios, and behaviors of functions, states, or Operational Modes.<br />
<br />
'''Scenario of Functions''' - A Scenario is a chain of Functions performed as a sequence that synchronizes the functions between them, using their control flows to achieve a global transformation of inputs into outputs. A Scenario of Functions expresses the dynamic of an upper level Function; inversely, a Function can be expressed dynamically by a Scenario of (sub) Functions. The Functional Architecture is worked out with Scenarios for each level of the functional hierarchy and for each level of the system hierarchy. The coherent combination of the Scenarios can be considered as the global Functional Architecture of the system.<br />
<br />
'''Operational State/Operational Mode''' – A Scenario of Functions can be viewed by abstracting the transformation of inputs into outputs of each function and focusing on the active or non-active state of the function and on its controls. Now, in a discussion of Operational Mode, the focus is on a scenario of modes. A scenario of modes is a chain of modes performed as a sequence of transitions between the various modes of the system. The transition from one mode to another one is triggered by the arrival of a control flow (event, trigger). An action (function) can be generated within a transition between two modes, following the arrival of an event or a trigger.<br />
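A scenario of modes, as described above, can be sketched as a transition table: each (mode, event) pair yields the next mode and the action performed on the transition. The modes, events, and actions below are illustrative assumptions only:<br />

```python
# Hypothetical transition table for a scenario of Operational Modes:
# (current mode, triggering event) -> (next mode, action on transition).
TRANSITIONS = {
    ("standby", "start_cmd"): ("operational", "power_up"),
    ("operational", "failure_detected"): ("degraded", "isolate_fault"),
    ("degraded", "repair_done"): ("operational", "restore_service"),
    ("operational", "stop_cmd"): ("standby", "power_down"),
}

def run_scenario(initial_mode, events):
    """Replay a scenario of modes: apply each control-flow event in
    sequence, recording the action generated within each transition."""
    mode, actions = initial_mode, []
    for event in events:
        mode, action = TRANSITIONS[(mode, event)]
        actions.append(action)
    return mode, actions

final_mode, actions = run_scenario(
    "standby", ["start_cmd", "failure_detected", "repair_done"])
```

Replaying the three events drives the system from standby through a degraded mode back to operational, each transition generating its action.<br />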
<br />
<br />
====Functional Design Patterns====<br />
When designing Scenarios or Functional Architectures, the designer has to recognize and use known models to perform the expected transformations. Patterns are generic basic models, more or less sophisticated depending on the complexity of the treatment. A pattern can be represented with different notations. Functional design patterns can be classified into several categories, for example:<br />
*Basic patterns linking functions: sequence, iteration, selection, concurrence, multiple exits, loop with exit, replication without monitoring, replication with monitoring, etc.<br />
*Complex patterns: monitor a treatment, exchange a message, Man Machine Interface, mutual exchanges, states-transition monitoring, real-time monitoring of processes, queue management, continuous monitoring with supervision, etc.<br />
*Failure detection, identification, recovery (FDIR) patterns: general FDIR pattern, passive redundancies, active redundancies, semi-active redundancies, continuation of treatment in degraded performance, etc.<br />
*Etc.<br />
<br />
<br />
====Temporal and Decision Hierarchy Concept====<br />
Not every Function of a system is performed at the same frequency; the frequency changes depending on the time and on the way functions are started and executed. One must therefore consider several classes of performance.<br />
<br />
There are synchronous functions, which are executed cyclically, and asynchronous functions, which are executed when an event or trigger happens. The decision monitoring inside a system follows the same temporal classification, because decisions are related to functions.<br />
<br />
"Real-time" systems and command/control systems combine cyclic (synchronous) operations with event-driven (asynchronous) behavior. The cyclic operations distribute the execution of the functions across frequencies that depend on the constraints of capturing or dispatching the input/output and control flows. Two types of asynchronous events can be distinguished:<br />
*Disturbances at the high frequencies (bottom of Figure 3) – decisions are made at the level where the disturbance occurs, or at the level immediately above. The goal is to ensure that disturbances do not propagate up to the low frequencies, so that the system continues to achieve its mission objectives. This is how exception operations are introduced; typical examples are breakdowns or failures.<br />
*Changes occurring at the low frequencies (top of Figure 3) – decisions to change are made at the upper levels. The goal is to transmit them down to the lower levels, which implement the modifications. Typical examples are operator actions and maintenance operations.<br />
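The temporal hierarchy above can be sketched as a small tick-based dispatcher: synchronous functions run cyclically, each at its own frequency, while asynchronous events are routed to a handler at the level where they occur. All function, event, and handler names are hypothetical.<br />

```python
# Sketch of mixing cyclic (synchronous) functions, each with its own
# period, with asynchronous event handling. Names are hypothetical.

class Scheduler:
    def __init__(self):
        self.cyclic = []     # list of (period_in_ticks, function_name)
        self.handlers = {}   # event name -> handler function name
        self.trace = []      # (tick, function_name) execution record

    def every(self, period, name):
        self.cyclic.append((period, name))

    def on_event(self, event, name):
        self.handlers[event] = name

    def run(self, ticks, events=None):
        events = events or {}   # tick -> asynchronous event name
        for t in range(1, ticks + 1):
            for period, name in self.cyclic:
                if t % period == 0:              # synchronous, frequency-based
                    self.trace.append((t, name))
            if t in events:                      # asynchronous, event-based
                self.trace.append((t, self.handlers[events[t]]))

sched = Scheduler()
sched.every(1, "acquire_sensors")            # high-frequency loop
sched.every(4, "update_display")             # low-frequency loop
sched.on_event("failure", "isolate_fault")   # handled where it occurs
sched.run(4, events={3: "failure"})
```

The trace shows the high-frequency function firing every tick, the low-frequency one only at tick 4, and the failure handled at the tick where the disturbance occurred.<br />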
<br />
<br />
[[File:SEBoKv05_KA-SystDef_Temporal_and_decision_hierarchy_levels.png|500px|center|Temporal and Decision Hierarchy Levels]]<br />
<br />
Figure 3. Temporal and Decision Hierarchy Levels (Faisandier, 2011)<br />
<br />
===Physical Architecture===<br />
<br />
====Allocation of Functional Elements to Physical Elements and Partitioning====<br />
A complex system composed of thousands of physical and intangible parts is structured in layers of systems and System Elements. To keep the decomposition manageable, the number of elements at each level is limited to a few – see the figure "Hierarchical decomposition of a system-of-interest" (ISO/IEC 15288) in the introductory paragraphs of the [[System Definition]] KA.<br />
<br />
A system Physical Architecture is a structure of the System of Interest that identifies the System Elements and their Physical Interfaces. A Physical Architecture and its elements exist in any kind of system, including systems of [[Service (glossary)|Services (glossary)]] (human roles, infrastructures, procedures, protocols, etc.), organizations or [[Enterprise (glossary)|Enterprises (glossary)]] (departments, sections, divisions, projects, procedures, protocols, etc.), and software-intensive systems (pieces of code, objects, databases, application programming interfaces, etc.).<br />
<br />
The System Elements of the system perform the Functions, and the Physical Interfaces between system elements carry the Input/Output Flows and control flows. Partitioning and allocation are activities to decompose, gather, or separate Functions in order to be able to identify feasible System Elements that could perform the Functions of the system.<br />
<br />
====Criteria to Partition and Allocate Functions onto System Elements====<br />
Partitioning and allocation use criteria to find potential affinities between the functions. Several criteria are provided by the list of System Requirements, including the periodicity of each function, whether functions are subject to similar effectiveness measures, whether they share commonalities in input, output, or control flows, whether similar transformations occur, whether functions occur in similar environments, the level of reuse of system elements, and project or enterprise constraints.<br />
<br />
The partitioning of functions stops when the designer is able to identify one or several system elements that could perform the function or a set of functions. Either the system element exists, is re-usable, or can be developed and technically implemented.<br />
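A minimal, hypothetical sketch of allocation by affinity: each function is allocated to the candidate system element with the highest affinity score, here computed from shared input/output flows and environmental compatibility. The criteria, weights, and names are invented for illustration and do not represent a prescribed method.<br />

```python
# Sketch of allocating functions to candidate system elements using a
# simple affinity score (shared flows plus compatible environment).
# Functions, elements, criteria, and weights are all hypothetical.

functions = {
    "measure_temp": {"flows": {"temp_raw"}, "env": "outdoor"},
    "log_data":     {"flows": {"temp_raw", "log"}, "env": "indoor"},
}
elements = {
    "sensor_unit":  {"flows": {"temp_raw"}, "env": "outdoor"},
    "base_station": {"flows": {"temp_raw", "log"}, "env": "indoor"},
}

def affinity(func, elem):
    shared = len(func["flows"] & elem["flows"])   # common I/O flows
    env = 1 if func["env"] == elem["env"] else 0  # same environment
    return 2 * shared + env                       # arbitrary weighting

allocation = {
    f: max(elements, key=lambda e: affinity(spec, elements[e]))
    for f, spec in functions.items()
}
```

Here "measure_temp" lands on the outdoor sensor unit and "log_data" on the base station, because each pairing maximizes the (invented) affinity score.<br />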
<br />
<br />
====Designing Physical Candidate Architectures====<br />
Several candidate arrangements of System Elements might potentially implement the system's Functions; these feed the list of candidate Physical Architectures from which the preferred Physical Architecture is selected. Every viable Physical Architecture must completely implement all required system Functions. The preferred Physical Architecture represents the optimum design.<br />
<br />
For a given system level, the number of levels of functional decomposition should not exceed three or four, because the lower sub-functions reside in the lower system levels. Even if the designer identifies more than ten leaf system elements at the current system level, he or she should synthesize them into a single level of 7 ± 2 higher-level systems and/or system elements. Because systems are designed by people, this recommended limit derives from the capacity of human short-term memory. It allows the designer to master the elements, their interfaces, and their interrelationships, and to navigate the architecture for analyses such as impact analysis and flow analysis.<br />
<br />
Synthesis is done by grouping the leaf system elements into a set of (sub)systems. Grouping must follow design criteria or principles such as reducing the number of physical interfaces, modularity, testability of system elements, maintainability, compatibility of technologies, usability, consumption of means or resources, control of emerging properties (see the Emerging Properties section below), etc.<br />
<br />
<br />
====Evaluating and Selecting the Preferred Alternative====<br />
The goal of the physical design activities is to provide the best possible set of System Elements and the "best" architecture – that is, the solution that best satisfies all the requirements, within the agreed limits or margins of each requirement. To achieve this, there is no alternative to producing several candidate architectures, comparing them, and selecting the most suitable one. <br />
Depending on the kind of system, certain analyses (efficiency, dependability, cost, risks, etc.) are required to obtain sufficient data characterizing the global or detailed behavior of the candidate architectures with respect to the System Requirements. These analyses are gathered under the term [[System Analysis]].<br />
<br />
Design activity includes optimization to obtain a balance among Design Properties, cost, risks, etc. The architectural structure of a system is determined more strongly by non-functional requirements (performance, safety, security, environmental conditions, constraints, etc.) than by functions: there may be many ways to achieve the functions, but fewer ways to satisfy the non-functional requirements.<br />
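One common way to compare candidate architectures is a weighted-sum trade study; this is only one of many assessment techniques, and the criteria, weights, scores, and architecture names below are hypothetical.<br />

```python
# Sketch of comparing candidate physical architectures against weighted
# assessment criteria (a simple weighted-sum trade study). All values
# are invented for illustration.

weights = {"cost": 0.4, "safety": 0.4, "maintainability": 0.2}

candidates = {
    "arch_A": {"cost": 7, "safety": 9, "maintainability": 6},
    "arch_B": {"cost": 9, "safety": 6, "maintainability": 7},
}

def weighted_score(scores):
    # sum of criterion score times criterion weight
    return sum(weights[c] * scores[c] for c in weights)

preferred = max(candidates, key=lambda a: weighted_score(candidates[a]))
```

With these invented numbers, arch_A scores 7.6 against 7.4 for arch_B, so it would be retained as the preferred architecture; in practice the weights themselves must be justified and agreed with stakeholders.<br />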
<br />
<br />
===Emerging Properties===<br />
The notion of emerging property (see [[Emergence (glossary)]]) is used during the design of the system to highlight the necessary derived functions and internal physical or environmental constraints of the system. The corresponding “derived requirements” result in particular from the studies of these emerging properties. They should be added to the System Requirements baseline when they impact the System of Interest (SOI).<br />
<br />
The System Elements that compose the System of Interest interact between themselves and can create desirable or undesirable phenomena called "emerging properties," such as interference or resonance. The definition of the system includes an analysis of the interactions between the System Elements in order to prevent the undesirable properties (negative behaviors) and reinforce the desirable ones (positive behaviors).<br />
<br />
A property which emerges from a system can have various origins: from a single System Element, from several System Elements, or from the interaction between several System Elements.<br />
<br />
The emerging properties of a system can be classified as in Thome (1993) – see Table 1.<br />
<br />
<br />
Table 1. Classification of emerging properties<br />
<br />
===Systems of Systems Architecting===<br />
Given a set of existing Systems of Interest, each with its own existence and context of use, the issue is whether it is possible to constitute a higher-level System of Interest that includes them as System Elements. This higher-level system has its own Mission, Purpose, context of use, objectives, and architectural elements. The engineering of such systems is somewhat particular and generally combines [[Reverse Engineering (glossary)]] with a top-down approach. This is the case, for example, when a company upgrades its information technology facilities while keeping legacy systems.<br />
The architecture activity combines a top-down approach (as for a standard system) with a bottom-up approach, which is imposed by the need to integrate existing or legacy systems to which no or very few modifications can be applied. Additional tasks therefore consist of identifying these existing or legacy systems and determining their capabilities and interfaces. The architecture activity has to answer two questions:<br />
*How can the requirements of the new System of Interest be fulfilled?<br />
*How can the legacy systems be managed and integrated?<br />
<br />
The characteristics of a [[System of Systems (SoS) (glossary)]] are provided in the [[Systems of Systems (SoS)]] KA, and are considered as requirements or constraints when designing their architecture.<br />
<br />
<br />
----<br />
<br />
==Process Approach – Architectural Design==<br />
===Purpose and Principle of the approach===<br />
The purpose of the Architectural Design Process is to synthesize a solution that satisfies the System Requirements (ISO/IEC. 2008).<br />
<br />
The architectural solution consists of both a Functional/Logical Architecture (expressed as a set of Functions, Scenarios, and/or Operational Modes) and a Physical Architecture (expressed as a set of systems and System Elements physically connected to one another), associated with a set of Design Properties.<br />
<br />
<br />
====Transition from System Requirements to Physical Architecture====<br />
The aim of the approach is to progress from the System Requirements baseline – representing the problem from a supplier/designer point of view, and as much as possible independent of technology - to an intermediate model of Functional Architecture - dependent on design decisions - then to allocate the elements of the Functional Architecture to the elements of potential Physical Architectures. The design decisions and the technological solutions are selected according to performance criteria and non-functional requirements such as the operational conditions and life cycle constraints (for example: environmental conditions or maintenance constraints) – see Figure 4.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Progressive_Approach_for_Designing.png|500px|center|Progressive Approach for Designing]]<br />
<br />
Figure 4. Progressive Approach for Designing (Faisandier, 2011)<br />
<br />
====Iterations between Functional and Physical Architectures Design====<br />
The design activities require several iterations from functional design to physical design and back again, until both the functional and the physical architectures are exhaustive and consistent.<br />
<br />
The first design activity is always the creation of a functional design that is based on the nominal Scenarios. The goal is to get a first model that could achieve the mission of the system. The physical design then enables the engineer to determine the main system elements that will perform these Functions and to organize them into a Physical Architecture.<br />
<br />
A second functional design loop takes into account the allocation of Functions onto System Elements and the derived functions arising from the physical solution choices; it also supplements the initial functional model by introducing other and/or altered modes, failure analyses, and every operational requirement not taken into account in the first loop. The derived Functions must, in their turn, be allocated to System Elements, and this again affects the physical design.<br />
<br />
Other design loops are usually needed to produce an exhaustive and consistent functional and physical solution. During design, technological choices potentially lead to new Functions, new Input/Output and control Flows, and new Physical Interfaces. These new elements can in turn lead to the creation of new System Requirements, called "derived requirements", which become part of the requirements baseline.<br />
<br />
<br />
====Generic inputs and outputs of the design process====<br />
Because design is necessarily executed iteratively, the inputs and outputs of the process evolve incrementally. The generic inputs include the System Requirements, the generic design patterns that the designer identifies and uses to satisfy the requirements, the outcomes from [[System Analysis]], and the feedback from [[System Verification and Validation]].<br />
<br />
The generic outputs are the selected Functional and Physical Architectures of the System of Interest (SOI), the (stakeholder) requirements of every System Element that composes the Physical Architecture of the SOI, the Interface Requirements between the System Elements, and the rejected solution elements.<br />
<br />
<br />
<br />
===Activities of the Process===<br />
<br />
<br />
Major activities and tasks performed during this process include:<br />
<br />
#'''Define the functional architecture''' of the system: <br />
##Identify Functions, Input/Output Flows, Operational Modes, Transition of Modes, and operational Scenarios from the System Requirements by analyzing the functional, interface, and operational requirements.<br />
##Define the necessary inputs and controls (energy, material, and data flows) to each Function and the outputs generated thereby; deduce the necessary Functions which use, transform, move, and generate the Input/Output Flows.<br />
##Allocate performance, effectiveness, and constraints requirements to Functions and to Input, Output, and control Flows.<br />
##Design candidate Functional Architectures using the previous defined elements to model Scenarios of functions and/or model sequences of Operational Modes and Transition of Modes. Integrate the Scenarios of functions in order to get a complete picture of the dynamic behavior of the system and allocate the temporal constraints. Decompose the functional elements as necessary to look towards implementation. Perform functional failure modes and effects analysis and update the Functional Architecture as necessary.<br />
##Select the Functional Architecture by assessing the candidate Functional Architectures against [[Assessment Criterion (glossary)|Assessment Criteria (glossary)]] (related to requirements) and comparing them. Use System Analysis Process to perform the assessments – see [[System Analysis]] topic.<br />
##Synthesize the selected Functional Architecture, verifying its dynamic behavior. Identify the derived functional elements created for the necessity of design.<br />
##Establish traceability between System Requirements and Functional Architecture elements.<br />
#'''Define the Physical Architecture''' of the system. That is:<br />
##Search for System Elements able to perform the Functions and for Physical Interfaces able to carry the Input, Output, and control Flows; determine whether each System Element already exists or must be engineered. Use a partitioning method to perform this allocation (when it is impossible to identify a System Element that performs a Function, decompose the function until implementable System Elements can be identified).<br />
##Design candidate Physical Architectures using the previously defined elements to model networks of System Elements and Physical Interfaces. For each candidate, this requires working out a low-level Physical Architecture with the elementary System Elements. Because these are generally too numerous (ten or more), they have to be grouped into higher-level System Elements, also called systems. It is then possible to work out a high-level Physical Architecture with these systems and System Elements.<br />
##Select the most suitable Physical Architecture by assessing the candidate Physical Architectures against Assessment Criteria (related to non-functional requirements) and comparing them. Use the System Analysis Process to perform the assessments – see the [[System Analysis]] topic.<br />
##Synthesize the selected Physical Architecture, verifying that it satisfies the System Requirements and is realistic. Identify the derived physical elements and functional elements created for the necessity of design. Establish traceability between System Requirements and physical architecture elements and allocation matrices between functional and physical elements.<br />
#'''Feed back the architectural design into the system requirements'''. That is:<br />
##Model the “allocated functional architecture” onto systems and system elements if such a representation is possible.<br />
##Define derived functional and physical elements induced by the selected functional and physical architectures. Define the corresponding derived requirements and allocate them on appropriate functional and physical architectures elements. Incorporate these derived requirements in the requirements baselines of the systems impacted.<br />
#'''Prepare the technical elements for the acquisition of each system or system element''':<br />
##Define the mission and objectives of the system or System Element from the Functions allocated to the system or System Element and the allocation of performance and effectiveness to the system or System Element, respectively.<br />
##Define the Stakeholder Requirements for this system or System Element (the concerned stakeholder being the System of Interest). Additional discussion on the development of the stakeholder requirements can be found in [[Mission Analysis and Stakeholders Requirements]] topic.<br />
##Establish traceability between the Stakeholder Requirements of the system or System Element and the design elements of the System of Interest. This allows the traceability of requirements between two layers of systems.<br />
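The traceability activities above can be supported by a simple requirements-to-design matrix; the sketch below (with invented identifiers) shows one way to record the links and to detect requirements not yet covered by any function.<br />

```python
# Sketch of a traceability matrix linking System Requirements to the
# functions and system elements that satisfy them, plus a check for
# untraced requirements. All identifiers are hypothetical.

trace = {
    "SR-001": {"functions": ["F1"], "elements": ["E1"]},
    "SR-002": {"functions": ["F2", "F3"], "elements": ["E2"]},
    "SR-003": {"functions": [], "elements": []},   # not yet covered
}

def untraced(matrix):
    # requirements with no allocated function indicate a design gap
    return sorted(r for r, links in matrix.items() if not links["functions"])

gaps = untraced(trace)
```

Such a matrix also supports impact analysis: given a changed requirement, the linked functions and elements identify where the change propagates.<br />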
<br />
<br />
===Artifacts and Ontology Elements===<br />
This process may create several artifacts such as:<br />
#System Design Document (describes the selected functional and physical architectures)<br />
#System Design Justification Document (traceability matrices and design choices)<br />
#System Element Stakeholder Requirements Document (one for each system or system element of the SOI)<br />
#System Elements Interface Requirements Document (interfaces between the system elements of the SOI)<br />
<br />
Note: The interfaces between the System Elements of the System of Interest are normally described in the System Design Document and can then be grouped in the System Elements Interface Requirements Documents for interface management purposes.<br />
<br />
<br />
This process handles the ontology elements of Table 2 for System Functional Design and of Table 3 for System Physical Design.<br />
<br />
[[File:SEBoKv05_KA-SystDef_ontology_elements_Functional_Design.png|650px|center|Main Ontology Elements as Handled within System Functional Design]]<br />
<br />
Table 2. Main ontology elements as handled within System Functional Design<br />
<br />
<br />
Note: The element Scenario is used for the Functional Architecture because, as defined, a Scenario includes a large set of functional and behavioral elements: flows of functions, flows of inputs/outputs, and flows of controls (triggers) arranged to model the transformations performed by the system and its behavior. Sequences of Operational Modes and Transitions of Modes can be used as an alternative, depending on the modeling techniques used.<br />
<br />
<br />
The main relationships between ontology elements of Functional Design are presented in Figure 5.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Functional_Design_relationships.png|500px|center|Functional Design Elements Relationships with Other Engineering Elements]]<br />
<br />
Figure 5. Functional Design elements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
<br />
[[File:SEBoKv05_KA-SystDef_ontology_elements_Physical_Design.png|700px|center|Main Ontology Elements as Handled within System Physical Design]]<br />
<br />
Table 3. Main ontology elements as handled within System Physical Design<br />
<br />
<br />
Note: The element "interface" may include both functional aspects (I/O Flows) and physical aspects (Physical Interfaces). It can be used for technical management purposes, for example to generate Interface Description Documents. It is not represented in the figures.<br />
<br />
Note: The System Element Requirements become one part of the Stakeholder Requirements applicable to the System Element of the lower layer of decomposition.<br />
<br />
The main relationships between ontology elements of Physical Design are presented in the Figure 6.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Physical_Design_relationships.png|600px|center|Physical Design Elements Relationships with Other Engineering Elements]]<br />
<br />
Figure 6. Physical Design elements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
===Checking and Correctness of Architectural Design===<br />
The main items to be checked during design concern functional and physical architectures. <br />
<br />
Concerning functional design, check that:<br />
*Every functional and interface requirement corresponds to one or several functions.<br />
*The outputs of functions correspond to submitted inputs.<br />
*Every function produces at least one output.<br />
*Functions are triggered by control flows as needed.<br />
*Functions are sequenced in the right order and synchronized.<br />
*The execution duration of the functions is in the range of the effectiveness or performance requirements.<br />
*All requested operational scenarios are taken into account. <br />
*The simulation of the functional architecture is complete in every possible case and shows that the consumption of input flows and the production of output flows are correctly sized (when simulation of models is possible).<br />
<br />
<br />
Concerning physical design, check that:<br />
*Every system element performs one or several functions of the functional architecture.<br />
*Every function has been allocated to one system element.<br />
*Every input/output flow is carried by a physical interface.<br />
*The components of the context of the System of Interest are linked to system elements of the System of Interest with physical interfaces.<br />
*The functional architecture is correctly projected onto the physical architecture and the allocated functional architecture reflects this projection correctly.<br />
*The physical architecture can be implemented using mature, well-mastered industrial technologies.<br />
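Some of the functional and physical checks listed above lend themselves to automation. The sketch below (with hypothetical data) verifies that every function is allocated to a system element and that every input/output flow is carried by a physical interface.<br />

```python
# Sketch of automating two design checks: every function allocated to a
# system element, and every I/O flow carried by a physical interface.
# Functions, flows, elements, and interfaces are hypothetical.

functions = {"F1", "F2", "F3"}
flows = {"temp_raw", "log"}

allocation = {"F1": "E1", "F2": "E2"}   # function -> system element
interfaces = {"IF1": {"temp_raw"}}      # physical interface -> carried flows

# functions with no allocated system element
unallocated_functions = sorted(functions - set(allocation))

# flows not carried by any physical interface
carried = set().union(*interfaces.values())
uncarried_flows = sorted(flows - carried)
```

In this invented example, F3 and the "log" flow are flagged as design gaps, so the design loop must either allocate them or remove them from the architecture.<br />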
<br />
<br />
===Methods and Modeling Techniques===<br />
Design uses modeling techniques that are grouped under the following types of models. Several methods have been developed to support these types of models:<br />
*Functional models such as the structured analysis design technique (SADT/IDEF0), system analysis & real time (SA-RT), enhanced functional flow block diagrams (eFFBD), function analysis system technique (FAST), etc.<br />
*Semantic models such as entities-relationships diagram, class diagram, data flow diagram, etc.<br />
*Dynamic models such as state-transition diagrams, state-charts, eFFBDs, state machine diagrams (SysML), activity diagram (SysML) (OMG. 2010), Petri nets, etc.<br />
*Physical models such as physical block diagrams (PBD), SysML blocks (OMG. 2010), etc.<br />
<br />
<br />
<br />
----<br />
<br />
==Application to Product systems, Service systems, Enterprise systems==<br />
<br />
Table 4 provides some examples or types of System Elements and Physical Interfaces in each category of system.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Types_of_System_Elements_and_Physical_Interfaces.png|600px|center|Types of System Elements and Physical Interfaces]]<br />
<br />
Table 4. Types of System Elements and Physical Interfaces<br />
<br />
<br />
<br />
----<br />
<br />
==Practical Considerations about Architectural Design==<br />
Major pitfalls encountered with Architectural Design are presented in Table 5:<br />
<br />
[[File:SEBoKv05_KA-SystDef_Pitfalls_architectural_design.png|700px|center|Pitfalls with Architectural Design of Systems]]<br />
<br />
Table 5. Pitfalls with architectural design of systems<br />
<br />
<br />
Proven practices with architectural design of systems are presented in Table 6:<br />
<br />
[[File:SEBoKv05_KA-SystDef_practices_architectural_design.png|700px|center|Proven Practices with Architectural Design of System]]<br />
<br />
Table 6. Proven practices with architectural design of system<br />
<br />
[[File:SEBoKv05_KA-SystDef_Requirements_Traceability_between_system-blocks.png|500px|center|Requirements Traceability Between the System-blocks ]]<br />
<br />
Figure 7 - Requirements Traceability between the system-blocks. (Faisandier, 2011)<br />
<br />
<br />
----<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
Alexander, Christopher, Sara Ishikawa, Murray Silverstein, Max Jacobson, Ingrid Fiksdahl-King, and Shlomo Angel. 1977. A pattern language: towns, buildings, construction. New York, NY: Oxford University Press.<br />
<br />
<br />
Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design patterns: elements of reusable object-oriented software. Addison-Wesley.<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
Oliver, D., T. Kelliher, and J. Keegan. 1997. Engineering complex systems with models and objects. New York, NY: McGraw-Hill.<br />
<br />
<br />
OMG. 2010. OMG Systems Modeling Language – specification, version 1.2, July 2010. http://www.omg.org/technology/documents/spec_catalog.htm<br />
<br />
<br />
Thome, B. 1993. Systems engineering, principles & practice of computer-based systems engineering. New York, NY: Wiley.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
ANSI/IEEE. 2000. IEEE Recommended Practice for Architectural Description for Software-Intensive Systems. Institute of Electrical and Electronics Engineers, ANSI/IEEE Std 1471:2000.<br />
<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
ISO/IEC/IEEE 42010. 2011. Systems and software engineering - Architecture description. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE FDIS 42010:2011 (E).<br />
<br />
<br />
Maier, M., and E. Rechtin. 2009. The art of systems architecting. 3rd ed. Boca Raton, FL, USA: CRC Press.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
<br />
Faisandier, A. 2011. Engineering and architecting multidisciplinary systems. (expected--not yet published). <br />
<br />
<br />
Oliver, D., T. Kelliher, and J. Keegan. 1997. Engineering complex systems with models and objects. New York, NY: McGraw-Hill. <br />
<br />
<br />
Thome, B. 1993. Systems engineering, principles & practice of computer-based systems engineering. New York, NY: Wiley.<br />
<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Requirements|<- Previous Article]] | [[System Definition|Parent Article]] | [[System Analysis|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Stakeholder_Requirements_Definition&diff=9651Stakeholder Requirements Definition2011-08-09T20:53:45Z<p>Skmackin: </p>
<hr />
<div>==Introduction, Definition and Purpose of System Requirements==<br />
'''Introduction''' - This section provides knowledge about the notion of system requirements, the translation of Stakeholder Requirements into System Requirements, the relationships between the system requirements of the System of Interest and those of its systems or system elements, and the related SE activities and methods. The set of baselined System Requirements is one of the major outcomes of these activities. This set represents the problem from a supplier's and a designer's point of view, and it can also be viewed as a model of the expected System of Interest.<br />
<br />
<br />
'''Definition and Purpose''' - A requirement is “a statement that identifies a system, product or process characteristic or constraint, which is unambiguous, clear, unique, consistent, stand‐alone (not grouped), and verifiable, and is deemed necessary for stakeholder acceptability.” (INCOSE. 2010) p. 362. The System Requirements are all of the requirements at the “system level” that have been properly translated from the list of stakeholders' requirements. The System Requirements will form the basis of system design, verification, and stakeholder acceptance. In this context, the “stakeholders” include, but are not limited to, end users, end user organizations, supporters, developers, producers, trainers, maintainers, disposers, acquirers, customers, operators, supplier organizations, accreditors, and regulatory bodies. (ISO/IEC. 2011)<br />
<br />
System Requirements play several major roles in the engineering activities. They serve:<br />
*as the essential inputs to the system design activities, so that designers know what characteristics are expected and can give the corresponding properties to the solution;<br />
*as the reference for the system validation activities, so that testers can review the requirements during their development, check that they are properly expressed as a validation reference, and identify the necessary means to test the system against them;<br />
*as a means of communication between the various technical staff who interact within the project.<br />
<br />
<br />
<br />
----<br />
<br />
==Principles about System Requirements==<br />
===Translation of Stakeholder Requirements into System Requirements===<br />
Quite often, the set of Stakeholder Requirements contains vague, ambiguous, and qualitative "user-oriented" requirements that are difficult to use for design or to verify. Each of these requirements may need to be further clarified and translated into "engineering-oriented" language to enable proper design and verification activities. The System Requirements resulting from this translation are expressed in technical language useful for design: unambiguous, consistent, free of contradiction, exhaustive, and verifiable. Close coordination with the stakeholders is necessary to ensure the translation is accurate.<br />
<br />
As an example, a need or expectation such as "to manoeuvre a car easily in order to park it" will be transformed into a set of Stakeholder Requirements such as "increase the drivability of the car", "decrease the effort needed for handling", "assist the driver", "protect the coachwork against shocks and scratches", etc. Analyzing, for example, the Stakeholder Requirement "increase the drivability of the car", it will be translated into a set of System Requirements specifying measurable characteristics such as the turning circle (steering lock), the wheelbase, etc.<br />
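The translation above can be sketched as data (a minimal illustration; all identifiers, requirement texts, and limit values below are hypothetical, not taken from any standard):<br />

```python
# Illustrative sketch: one "user-oriented" Stakeholder Requirement is
# translated into measurable, verifiable System Requirements.
# All identifiers, texts, and limits are hypothetical.
stakeholder_requirement = {
    "id": "StRS-010",
    "text": "Increase the drivability of the car",
}

system_requirements = [
    {"id": "SyRS-101", "trace": "StRS-010",
     "text": "The car shall have a turning circle of no more than 11 m",
     "characteristic": "turning circle", "limit": 11.0, "unit": "m"},
    {"id": "SyRS-102", "trace": "StRS-010",
     "text": "The car shall have a wheelbase between 2.4 m and 2.7 m",
     "characteristic": "wheelbase", "limit": (2.4, 2.7), "unit": "m"},
]

def trace_down(stakeholder_id, sys_reqs):
    """Return the System Requirements translated from one Stakeholder Requirement."""
    return [r["id"] for r in sys_reqs if r["trace"] == stakeholder_id]

print(trace_down("StRS-010", system_requirements))  # ['SyRS-101', 'SyRS-102']
```

Note how each System Requirement carries a measurable characteristic and unit, where the Stakeholder Requirement stated only a qualitative intent.<br />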
<br />
<br />
===Traceability and assignment of System Requirements during design===<br />
Requirements traceability provides the ability to trace information from the origin of the Stakeholder Requirements at the top level to the lowest level of the system hierarchy - see section "Top-down and recursive approach to system decomposition" in the topic [[Fundamentals of System Definition]]. Traceability is also used to provide an understanding of the extent of a change as an input to impact analyses conducted with respect to proposed engineering improvements or requests for change.<br />
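Traceability links of this kind can be sketched as a simple parent-to-child map; the impact analysis of a proposed change then walks the links downward through the system hierarchy (the identifiers below are hypothetical):<br />

```python
# Minimal traceability sketch (hypothetical identifiers): each entry maps a
# requirement to the lower-level requirements derived from it.
traces = {
    "StRS-010": ["SyRS-101", "SyRS-102"],   # stakeholder -> system level
    "SyRS-101": ["SW-201", "HW-202"],       # system -> element level
    "SyRS-102": ["HW-203"],
}

def impact(req_id, traces):
    """Return every lower-level requirement affected if req_id changes."""
    affected = []
    for child in traces.get(req_id, []):
        affected.append(child)
        affected.extend(impact(child, traces))   # recurse down the hierarchy
    return affected

print(impact("StRS-010", traces))
# ['SyRS-101', 'SW-201', 'HW-202', 'SyRS-102', 'HW-203']
```

The same map read upward gives the origin of any low-level requirement, which is the other use of traceability described above.<br />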
<br />
During design, the assignment of requirements from one level to lower levels in the system hierarchy can be accomplished using several methods, as appropriate - see Table 1.<br />
<br />
<br />
<br />
Table 1 - Assignment types for a System Requirement<br />
<br />
<br />
===Classification of System Requirements===<br />
Several classifications of System Requirements are possible, depending on the requirements definition and/or design methods used. (ISO/IEC. 2011) provides a classification, summarized below; see the references for other classifications. An example is given in Table 2.<br />
<br />
<br />
<br />
Table 2 - Example of System Requirements classification<br />
<br />
<br />
<br />
----<br />
<br />
==Process Approach – System Requirements==<br />
===Purpose and Principle of the Approach===<br />
The purpose of the system requirements analysis process is to transform the stakeholder, user-oriented view of desired services into a technical view of the product that meets the operational needs of the user. This process builds a representation of the system that will meet Stakeholder Requirements and that, as far as constraints permit, does not imply any specific implementation. It results in measurable System Requirements that specify, from the supplier’s perspective, what performance and non-performance characteristics it must possess in order to satisfy stakeholders' requirements. (ISO/IEC. 2008)<br />
<br />
<br />
===Activities of the process===<br />
Major activities and tasks performed during this process include:<br />
#Analyzing the Stakeholder Requirements to check completeness of expected services and [[Operational Scenario (glossary)|Operational Scenarios (glossary)]], conditions, Operational Modes, and constraints.<br />
#Defining the System Requirements and their [[Rationale (glossary)]].<br />
#Classifying the System Requirements using the suggested classifications – see examples above.<br />
#Incorporating the derived requirements (coming from design) into the System Requirements baseline.<br />
#Establishing the upward traceability with the Stakeholder Requirements.<br />
#Verifying the quality and completeness of each System Requirement and the consistency of the set of System Requirements.<br />
#Validating the content and relevance of each System Requirement against the set of Stakeholder Requirements.<br />
#Identifying potential [[Risk (glossary)|Risks (glossary)]] (or threats and hazards) that could be generated by the System Requirements.<br />
#Synthesizing, recording, and managing the System Requirements and potential associated Risks.<br />
<br />
<br />
===Artifacts and Ontology Elements===<br />
This process may create several artifacts such as:<br />
#System Requirements Document<br />
#System Requirements Justification Document (for traceability purposes)<br />
#System Requirements Database, including traceability, analysis, rationale, decisions, and attributes, where appropriate.<br />
#System External Interface Requirements Document (this document describes the interfaces of the system with the external elements of its context of use; the interface requirements can be integrated into the System Requirements Document above or kept separate).<br />
<br />
<br />
This process handles the ontology elements of Table 3.<br />
<br />
[[File:SEBoKv05_KA-SystDef_ontology_elements_system_requirements.png|600px|center|Main Ontology Elements as Handled within System Requirements Definition]]<br />
<br />
Table 3. Main ontology elements as handled within system requirements definition<br />
<br />
<br />
The main relationships between ontology elements are presented in Figure 1.<br />
<br />
[[File:SEBoKv05_KA-SystDef_System_Requirements_relationships.png|400px|center|System Requirements Relationships with Other Engineering Elements]]<br />
<br />
Figure 1. System Requirements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
===Checking and Correctness of System Requirements===<br />
System Requirements should be checked to gauge whether they are well expressed and appropriate. There are a number of characteristics that can be used to check System Requirements. The set of System Requirements can be verified using standard peer review techniques and by comparing each requirement against the characteristics listed in Table 4 and Table 5 of the section "Presentation and Quality of Requirements" below.<br />
<br />
The requirements can be validated using the requirements elicitation and rationale capture techniques described below in the section "Methods and Modeling Techniques".<br />
<br />
<br />
===Methods and Modeling Techniques===<br />
====Requirements Elicitation and Prototyping====<br />
Requirements Elicitation is one approach that requires user involvement, and can be effective in gaining stakeholder involvement and buy-in. QFD (Quality Function Deployment) and prototyping are two common techniques that can be applied and are defined in this section. In addition, interviews, focus groups, and Delphi techniques are often applied to elicit requirements.<br />
<br />
QFD is a powerful technique to elicit requirements and compare design characteristics against user needs. The inputs to the QFD application are user needs and operational concepts, so it is essential that the users participate. Users from across the life cycle should be included so that all aspects of user needs are accounted for and prioritized.<br />
<br />
Early prototyping can help the users and developers identify the functional and operational requirements and user interface constraints through interactive use of system elements, models, and simulations. The prototyping allows for realistic user interaction, discovery, and feedback, as well as some sensitivity analysis. This helps to form a more complete set of requirements by improving the user’s requirements understanding.<br />
<br />
====Capturing Requirements Rationale====<br />
One powerful and cost-effective technique to translate Stakeholder Requirements to System Requirements is to capture the rationale for each requirement. Requirements rationale is merely a statement as to why the requirement exists, any assumptions made, the results of related design studies, or any other related supporting information. This is especially powerful in capturing the rationale for stakeholder requirements, as this supports further requirements analysis and decomposition. The rationale can be captured directly in the requirements database. Of course, stakeholder involvement is essential for this process.<br />
It is possible to extend the method by providing rationale that explains why a set of requirements satisfies a higher-level one – see (Hull, M. E. C., Jackson, K., Dick, A. J. J. 2010) Chapter 7.<br />
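Capturing rationale directly in a requirements database can be sketched as follows (the field names and texts are illustrative, not from a specific tool):<br />

```python
# Sketch of a requirements database record carrying rationale and assumptions
# alongside each requirement. All identifiers and texts are hypothetical.
requirements = [
    {"id": "StRS-010", "text": "Increase the drivability of the car",
     "rationale": "Market survey ranked maneuverability among the top "
                  "purchase criteria.",
     "assumptions": ["Primarily urban driving"]},
    {"id": "StRS-011", "text": "Protect the coachwork against shocks",
     "rationale": "", "assumptions": []},
]

def missing_rationale(reqs):
    """Flag requirements whose rationale was never captured, for review."""
    return [r["id"] for r in reqs if not r["rationale"].strip()]

print(missing_rationale(requirements))  # ['StRS-011']
```

A review that questions each captured rationale (and flags each missing one, as above) is what exposes faulty assumptions and disguised design requirements early.<br />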
<br />
Some of the benefits include:<br />
*'''Reducing the total number of requirements'''. As requirements are gathered and rationale is required for each, the requirement author may realize that a particular requirement may not be required or supportable, or that it duplicates an existing requirement. Reducing requirements count will reduce project cost and risk.<br />
*'''Early exposure of bad assumptions'''. Many poorly written or incorrect requirements are based upon faulty assumptions. By capturing (and questioning) requirements rationale, incorrect or faulty assumptions can be exposed early in the process, and the related requirements can be eliminated or properly rewritten.<br />
*'''Removing design implementation'''. Many poorly written stakeholder requirements are design requirements in disguise, in that the customer is intentionally or unintentionally specifying a candidate implementation. The requirements authors may be jumping to the solution space, rather than stating the problem to be solved. By properly capturing requirements rationale, these “design requirements” are often exposed and rewritten to allow appropriate design latitude to develop a cost-effective solution.<br />
*'''Improving communication with the stakeholder community'''. By capturing the requirements rationale for all Stakeholder Requirements, the line of communication between the users and the designers is greatly improved. This results in a better understanding of the problem space, the elimination of ambiguous requirements, and, eventually, a greatly improved system validation process, since the intent of the stakeholders is well documented and clarified. Adapted from (Hooks, I. F., and K. A. Farry. 2000) Chapter 8.<br />
<br />
====Modeling Techniques====<br />
Modeling techniques can be used when requirements must be detailed or refined, or when they address topics not considered during the Stakeholder Requirements Definition and Mission Analysis:<br />
*State-charts models (ISO/IEC. 2011) Section 8.4.2<br />
*Scenarios modeling (ISO/IEC. 2011) Section 6.2.3.1<br />
*Simulations, prototyping (ISO/IEC. 2011) Section 6.3.3.2<br />
*Quality Function Deployment (INCOSE. 2010) p. 83<br />
*Sequence diagrams, activity diagrams, use cases, state machine diagrams, and requirements diagrams of SysML<br />
*Functional Flow Block Diagram for Operational Scenario<br />
*Etc.<br />
<br />
====Presentation and Quality of Requirements====<br />
Generally, requirements are provided in a textual form. Guidelines exist to write good requirements; they include recommendations about sentence syntax, wording (exclusions, representation of concepts, etc.), semantics (specific, measurable, achievable, realistic, testable). Refer to (INCOSE. 2010) section 4.2.2.2.<br />
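A simple wording check of this kind can be sketched as follows (the list of discouraged terms below is a hypothetical sample, not the rule set of INCOSE or ISO/IEC 29148):<br />

```python
import re

# Illustrative quality check: scan a requirement statement for wording that
# writing guidelines typically discourage (vague or unverifiable terms).
# The term list is a hypothetical sample.
WEAK_TERMS = ["easily", "user-friendly", "etc.", "and/or", "as appropriate",
              "minimize", "maximize", "quickly"]

def lint_requirement(text):
    """Return the discouraged terms found in one requirement statement."""
    return [t for t in WEAK_TERMS
            if re.search(r"(?<!\w)" + re.escape(t) + r"(?!\w)", text,
                         re.IGNORECASE)]

print(lint_requirement("The car shall be easily maneuverable, user-friendly, etc."))
# ['easily', 'user-friendly', 'etc.']
print(lint_requirement("The car shall have a turning circle of no more than 11 m"))
# []
```

Such a mechanical check only supplements, and cannot replace, peer review against the characteristics of Tables 4 and 5.<br />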
<br />
There are several characteristics of requirements and sets of requirements that ensure the quality of requirements. These are used both to aid the development of the requirements and to verify the implementation of requirements into the solution. Table 4 provides a list and descriptions of the characteristics for individual requirements and Table 5 provides a list and descriptions of characteristics for a set of requirements, as adapted from (ISO/IEC. 2011) sections 5.2.5 and 5.2.6.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Characteristics_of_Individual_Requirements.png|600px|center|Characteristics of Individual Requirements ]]<br />
<br />
Table 4. Characteristics of Individual Requirements (ISO/IEC 29148)<br />
<br />
<br />
<br />
[[File:SEBoKv05_KA-SystDef_Characteristics_of_a_set_of_Requirements.png|600px|center|Characteristics of a Set of Requirements ]]<br />
<br />
Table 5. Characteristics of a Set of Requirements (ISO/IEC 29148)<br />
<br />
====Requirements in Tables====<br />
Requirements may be provided in a table, especially when specifying a set of parameters for the system or a system element. It is good practice to make standard table templates available. For tables, the following conventions apply: <br />
*Invoke each requirements table through a statement in the requirements set that clearly points to the table.<br />
*Identify each table with a unique title and table number.<br />
*Include the word “requirements” in the table title.<br />
*Identify the purpose of the table in the text immediately preceding it and include an explanation of how to read and use the table, including context and units.<br />
*For independent–dependent variable situations, organize the table in a way that best accommodates the use of the information.<br />
*Each cell should contain, at most, a single requirement.<br />
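Two of the conventions above can be checked mechanically, as in this sketch (the table structure and parameter values are hypothetical):<br />

```python
# Sketch of checking a parameter-requirements table against two conventions:
# the title must include the word "requirements", and each cell should hold
# at most one requirement. Structure and values are hypothetical.
table = {
    "number": 1,
    "title": "Engine performance requirements",
    "rows": [
        {"parameter": "Maximum power", "requirement": ">= 90 kW"},
        {"parameter": "Idle speed", "requirement": "800 +/- 50 rpm"},
    ],
}

def check_table(t):
    """Return a list of convention violations for one requirements table."""
    issues = []
    if "requirements" not in t["title"].lower():
        issues.append("title must include the word 'requirements'")
    for row in t["rows"]:
        # crude heuristic: a cell holding several requirements often joins
        # them with ';'
        if ";" in row["requirement"]:
            issues.append(f"{row['parameter']}: more than one requirement in a cell")
    return issues

print(check_table(table))  # []
```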
<br />
====Requirements in Flow Charts====<br />
Flow charts often contain requirements in a graphical form. These requirements may include logic that must be incorporated into the system, operational requirements, process or procedural requirements, or other situations that are best defined graphically by a sequence of interrelated steps. For flow charts, the following conventions apply:<br />
*Invoke each flow chart through a statement in the requirements set that clearly points to the flow chart.<br />
*Identify each flow chart with a unique title and figure number.<br />
*Include the word “requirements” in the title of the flow chart.<br />
*Clearly indicate and explain unique symbols that represent requirements in the flow chart.<br />
<br />
====Requirements in Drawings====<br />
Drawings also provide a graphical means to define requirements. The type of requirement defined in a drawing depends on the type of drawing. The following conventions apply:<br />
*Drawings are used when they can aid in the description of the following:<br />
** Spatial requirements<br />
**Interface requirements<br />
**Layout requirements<br />
*Invoke each drawing through a statement in the requirements set that clearly points to the drawing.<br />
<br />
==Application to Product systems, Service systems, Enterprise systems==<br />
<br />
The classification of System Requirements may differ between these types of systems.<br />
<br />
<br />
==Practical Considerations about System Requirements==<br />
There are several '''pitfalls''' that will inhibit the generation and management of an optimal set of System Requirements. See Table 6.<br />
<br />
[[File:SEBoKv05_KA-SystDef_pitfalls_System_Requirements.png|600px|center|Major Pitfalls with Definition of System Requirements]]<br />
<br />
Table 6. Major pitfalls with definition of System Requirements <br />
<br />
<br />
The following '''proven practices''' in system requirements engineering have repeatedly been shown to reduce project risk and cost, foster customer satisfaction, and produce successful system development. See Table 7.<br />
<br />
<br />
[[File:SEBoKv05_KA-SystDef_practices_System_Requirements.png|600px|center|Proven Practices with Definition of System Requirements]]<br />
<br />
Table 7. Proven practices with definition of System Requirements <br />
<br />
<br />
<br />
----<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
Hooks, I. F., and K. A. Farry. 2000. Customer-centered products: Creating successful products through smart requirements management. New York, NY: American Management Association.<br />
<br />
<br />
Hull, M. E. C., Jackson, K., Dick, A. J. J. 2010. Systems Engineering, 3rd ed. London: Springer.<br />
<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
ISO/IEC. 2011. Systems and software engineering - Life cycle processes - Requirements engineering. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC FDIS 29148.<br />
<br />
===Primary References===<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
ISO/IEC. 2011. Systems and software engineering - Life cycle processes - Requirements engineering. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC FDIS 29148.<br />
<br />
<br />
van Lamsweerde, A. 2009. Requirements engineering. New York, NY: Wiley.<br />
<br />
===Additional References===<br />
<br />
Faisandier, A. 2011. Engineering and architecting multidisciplinary systems. (expected--not yet published). <br />
<br />
<br />
Hooks, I. F., and K. A. Farry. 2000. Customer-centered products: Creating successful products through smart requirements management. New York, NY: American Management Association. <br />
<br />
<br />
Hull, M. E. C., Jackson, K., Dick, A. J. J. 2010. Systems Engineering, 3rd ed. London: Springer.<br />
<br />
<br />
Roedler, G., D. Rhodes, C. Jones, and H. Schimmoller. 2010. Systems engineering leading indicators guide, version 2.0. San Diego, CA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2005-001-03.<br />
<br />
<br />
----<br />
<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Mission Analysis and Stakeholders Requirements|<- Previous Article]] | [[System Definition|Parent Article]] | [[Architectural Design|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Business_or_Mission_Analysis&diff=9650Business or Mission Analysis2011-08-09T20:51:35Z<p>Skmackin: </p>
<hr />
<div>==Introduction, Definition and Purpose==<br />
'''Introduction''' - The starting point of engineering a system is the definition of the problem to be solved. The stakeholders’ requirements represent the view of the [[User (glossary)|users (glossary)]], [[Acquirer (glossary)|acquirers (glossary)]], and customers of the problem. An important set of activities has to be performed to establish a set of stakeholders’ requirements for a system that can provide the services needed by the acquirer, the users, and the other stakeholders in a defined environment. This section provides knowledge about the notions of needs, expectations, stakeholders’ requirements, concept of operations, and the related systems engineering activities and methods. The set of Stakeholder Requirements represents one of the major outcomes of these activities. For better understanding of this chapter, it is recommended that you read first the Introduction to [[System Definition]] Knowledge Area and the topic [[Fundamentals of System Definition]].<br />
<br />
<br />
'''Definition and Purpose''' - An initial presentation of stakeholders’ intention is not necessarily a requirement, since it has not been defined, analyzed, or determined to be feasible. This intention is distinguished from a requirement as being (stakeholder) needs, goals or objectives. The distinction between requirements and needs is related to which system is concerned; for example, a [[Requirement (glossary)]] for a system may be allocated to a software element of the system, viewed as needs by the supplier of this software element, and further elaborated as (a) requirement(s) for this software element.<br />
<br />
An important aspect of engineering is “requirements engineering.” Some activities gathered under this term include:<br />
*the capture of the needs, goals, and objectives of the various [[Stakeholder (glossary)|Stakeholders (glossary)]];<br />
*the preliminary identification of the engineering elements of this [[System of Interest (SoI) (glossary)]] in terms of purpose, expected mission, services or operations, objectives, exchanges, and physical interfaces with the objects of its context of use;<br />
*the enrichment of the needs through the analysis of these first engineering elements of the system, in particular what is called the “mission analysis”;<br />
*the transformation of these needs into clear, concise, and verifiable Stakeholder Requirements applicable to the System of Interest;<br />
*the analysis and translation of the Stakeholder Requirements into a set of System Requirements (see topic [[System Requirements]]).<br />
<br />
Requirements engineering is performed in an iterative manner with the other life cycle processes and recursively through the system design hierarchy.<br />
<br />
<br />
----<br />
<br />
==Principles about Stakeholder Requirements==<br />
===From capture of needs to the Stakeholder Requirements definition===<br />
Several steps are necessary to characterize the maturity of needs, from "real needs" to a realized solution (which could be called "realized needs"). Figure 1 below presents the “cycle of needs” as it can be deduced from the works and courses of Professor Shoji Shiba and Professor Noriaki Kano, adapted here for systems engineering purposes.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Cycle_of_needs.png|500px|center|Cycle of Needs]]<br />
<br />
Figure 1. Cycle of needs (Faisandier, 2011)<br />
<br />
<br />
This figure shows these steps and the position of the Stakeholder Requirements and System Requirements in the engineering cycle:<br />
#'''Real needs''' are those that lie behind any perception, and indeed may not be perceived at all; they are conditioned by the context in which people live. As an example, a generic need could be “identify infectious diseases easily”. It often looks like a simple action.<br />
#'''Perceived needs''' are based on a person’s awareness (possibly imperfect) that something is wrong, that there is a lack of something, that there is something that could improve a situation, or that there are business / investment / market opportunities. Perceived needs are often presented as a list of disorganized expectations, often resulting from an analysis of the usage conditions for the considered action. Following on from the example above, the real need might be perceived as a need to "carry out medical tests in particular circumstances (laboratories, point of care, hospital, human dispensary)". Since the real need is seldom expressed, the richness of the knowledge of the perceived needs is used as a basis for potential solutions. This step has to be as complete as possible to cover all the contexts of use.<br />
#'''Expressed needs''' originate from perceived needs in the form of generic actions or constraints, and are typically prioritized. For example, if safety is the top concern, the expressed need “Protect the operator against contamination” may take priority over other expressed needs, such as “Assist in the execution of tests.” Here the analysis of the expected mission or services in terms of operational scenarios takes place.<br />
#'''Retained needs''' are selected from among the expressed needs. The selection uses the prioritization of the expressed needs in order to achieve something or to make solutions feasible. The retained needs allow the consideration of potential solutions for a System of Interest. The retained needs are generally called "Stakeholder Requirements," and the exploration of potential solutions must start from this step. The various solutions suggested at this step are not yet products, but describe means of satisfying the Stakeholder Requirements. Each potential solution imposes constraints on the future System of Interest.<br />
#'''Specified needs''', generally called "System Requirements," are the translation of the Stakeholder Requirements to represent the supplier's view of the problem, with potential feasible solutions in mind. The expression of the System Requirements means that potential solutions exist as systems, or can be developed, manufactured, or bought.<br />
#'''Realized needs''' are the [[Product (glossary)]], [[Service (glossary)]], or [[Enterprise (glossary)]] realized, taking into account every System Requirement (and hence every Stakeholder Requirement).<br />
<br />
===Mission Analysis and Operational Concept===<br />
The notions of Mission Analysis and Concept of Operations originated from Defense organizations to define the tasks and actions performed within the context of military operations, taking into account the strategic, operational, and tactical aspects of a given situation.<br />
<br />
*'''Mission analysis''' is the process used to identify all unit missions and critical collective tasks. Collective tasks must be identified to determine exactly what the unit must be trained in to support the accomplishment of unit missions. (TRADOC. 1999) appendix A; see specifically the references to AR 25-30, The Army Integrated Publishing and Printing Program; FM 25-100, Training the Force; FM 25-101, Battle Focused Training; and FM 100-23, Peace Operations.<br />
<br />
*'''The concept of operations''' is a clear, concise statement of where, when, and how the commander intends to concentrate combat power to accomplish the mission. It broadly outlines considerations necessary for developing a scheme of maneuver. (U.S. Army. 1997), pages 1-268.<br />
<br />
<br />
These notions are also used in industrial sectors such as aviation administration and air transportation, health care, space, etc., with adapted definitions and/or terms such as operational concept, utilization/usage concept, and/or technological concept. Examples include:<br />
<br />
*“Mission Analysis” is the term used to describe the mathematical analysis of satellite orbits, performed to determine how best to achieve the objectives of a space mission (European Space Agency).<br />
<br />
*In large-scale systems, such as chemical and nuclear plants, the system functional requirements change during operation, and the components must also play various roles depending on the system state. Since the logical relation between the system and its components changes during system operation, phased mission analysis (PMA) must be introduced into their reliability analysis (Kohda, Wada, and Inoue. 1994) pages 299-309.<br />
<br />
<br />
Wherever these notions are used, they are based on fundamental concepts such as the [[Operational Mode (glossary)]] (or operational state of the system), scenarios (of actions), operational concepts, and functions (provided services). In other words, the system encounters different states during its utilization or operation (Operational Modes). During each state, scenarios of services are performed (the operational concept); a scenario is thus a sequence of actions, functions, or services. This means that during the Stakeholder Requirements definition, the description of operational scenarios is a powerful means of identifying the expected or required services of the future system. Conversely, if the expected services are (pre)defined, the expression of Operational Scenarios provides the expected behavior of the system in use; that is to say, how the user intends to use the system in its various operational modes. For more explanation about the notion of Concept of Operations, refer to (IEEE. 1998). Useful information can be found in (ISO/IEC. 2011) Annex A (System Operational Concept) and Annex B (Concept of Operations).<br />
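The relation between Operational Modes and scenarios can be sketched as a small state machine, where a scenario is a sequence of actions and each action is only allowed in certain modes (the modes, actions, and transitions below are hypothetical):<br />

```python
# Sketch of Operational Modes as a state machine. Each key is a
# (current mode, action) pair; the value is the resulting mode.
# Modes and actions are hypothetical.
TRANSITIONS = {
    ("off", "power_on"): "standby",
    ("standby", "start_mission"): "operational",
    ("operational", "end_mission"): "standby",
    ("standby", "power_off"): "off",
}

def run_scenario(initial_mode, actions):
    """Replay a scenario; fail if an action is not allowed in the current mode."""
    mode = initial_mode
    for action in actions:
        key = (mode, action)
        if key not in TRANSITIONS:
            raise ValueError(f"action '{action}' not allowed in mode '{mode}'")
        mode = TRANSITIONS[key]
    return mode

print(run_scenario("off", ["power_on", "start_mission", "end_mission"]))
# standby
```

Replaying each operational scenario against the declared modes, as above, is one way to check that the scenarios and modes captured during requirements definition are mutually consistent.<br />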
<br />
<br />
===Classification of Stakeholder Requirements===<br />
Several classifications of Stakeholder Requirements are possible. ISO/IEC 29148, section 9.4.2.3 (ISO/IEC. 2011) provides useful elements for classification. The goal of classifying the Stakeholder Requirements is to facilitate the classification of the System Requirements and, subsequently, to prepare the design and validation activities. One possible way to classify the Stakeholder Requirements is under the categories indicated in Table 1.<br />
<br />
<br />
Table 1 – Example of Stakeholder Requirements classification<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
----<br />
<br />
==Process Approach - Stakeholder Requirements==<br />
===Purpose and principles of the approach===<br />
The purpose of the Stakeholder Requirements Definition Process is to identify all the needs and expectations from every stakeholder (acquirer, users, and others) and to define the Stakeholder Requirements that express the intended interaction the System of Interest will have with its operational environment. They are the reference against which each resulting operational service is validated.<br />
<br />
Needs and expectations are collected through stakeholders' interviews (including user surveys), technical documents, feedback from the Verification Process and the Validation Process, and from outcomes of the System Analysis Process. (ISO/IEC. 2008)<br />
<br />
From these needs, the stakeholders’ requirements can be developed. To get a complete set of needs, expectations, and, finally, Stakeholder Requirements, it is necessary to consider the System of Interest at various stages across its typical life cycle. Every system has its own stages of life, from the initial need to disposal; these may include transfer for use or deployment, normal use in operation, production, maintenance, and disposal. For each stage, a list of all actors having an interest in the future system is identified. Those actors are called [[Stakeholder (glossary)|Stakeholders (glossary)]]. The aim is to get every stakeholder's point of view for every stage of the system's life in order to establish the list of Stakeholder Requirements as exhaustively as possible.<br />
<br />
An example is provided in Table 2 below:<br />
<br />
[[File:SEBoKv05_KA-SystDef_Stakeholders_Identification_Based_on_Life.png|600px|center|Stakeholders Identification Based on Life Cycle Stages]]<br />
<br />
Table 2. Stakeholders Identification Based on Life Cycle Stages <br />
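The life-cycle-stage approach of Table 2 can be sketched as a simple coverage matrix (the stage and stakeholder names below are illustrative, not the full content of the table):<br />

```python
# Sketch of a stakeholder identification matrix per life cycle stage.
# Stages and stakeholder names are illustrative.
matrix = {
    "development": ["acquirer", "developers", "regulatory bodies"],
    "production":  ["producers", "suppliers"],
    "utilization": ["end users", "operators"],
    "support":     ["maintainers"],
    "retirement":  [],   # not yet analyzed
}

def uncovered_stages(m):
    """List the life cycle stages with no identified stakeholder yet."""
    return [stage for stage, who in m.items() if not who]

print(uncovered_stages(matrix))  # ['retirement']
```

Flagging empty cells, as above, helps keep the elicitation exhaustive: every stage must have at least one stakeholder point of view before the set of Stakeholder Requirements can be considered complete.<br />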
<br />
<br />
Considering and questioning the classification, as presented in the previous section, allows completion of the set of Stakeholder Requirements.<br />
<br />
A synthesis of the potential System of Interest that could satisfy the Stakeholder Requirements is established by defining its major objectives and its [[Purpose (glossary)]], [[Mission (glossary)]], or services.<br />
<br />
===Activities of the Process===<br />
<br />
Major activities and tasks performed during this process include:<br />
#Eliciting or capturing the stakeholders' needs, expectations, and requirements; determining the purpose, mission, or services and the major objectives of the potential System of Interest.<br />
#Analyzing the acquirers’ and users' needs and defining the corresponding Stakeholder Requirements, including the definition of operational/utilization concepts or [[Scenario (glossary)|Scenarios (glossary)]].<br />
#Exploring, analyzing, and identifying the needs and expectations of the other stakeholders (the list of the stakeholders depends on the system life cycle), including various life cycle constraints.<br />
#Verifying the quality and completeness of each Stakeholder Requirement and the consistency of the set of Stakeholder Requirements.<br />
#Validating the content and relevance of each Stakeholder Requirement with the corresponding stakeholder representative, providing the [[Rationale (glossary)]] for the existence of the requirement.<br />
#Identifying potential [[Risk (glossary)|Risks (glossary)]] (or threats and hazards) that could be generated by the Stakeholder Requirements.<br />
#Synthesizing, recording, and managing the Stakeholder Requirements and potential associated Risks.<br />
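<br />
The verification and validation activities above (items 4 through 7) lend themselves to simple tooling. The sketch below flags obviously incomplete Stakeholder Requirement records; the record fields and checks are illustrative assumptions, not a prescribed schema.<br />

```python
from dataclasses import dataclass, field

# Illustrative record for a single stakeholder requirement; the field
# names are assumptions for this sketch, not a SEBoK-mandated schema.
@dataclass
class StakeholderRequirement:
    identifier: str
    statement: str
    stakeholder: str          # originating stakeholder representative
    life_cycle_stage: str     # stage in which the need applies
    rationale: str = ""       # justification for the requirement's existence
    risks: list = field(default_factory=list)  # associated risks, if any

def quality_issues(req: StakeholderRequirement) -> list:
    """Flag obvious completeness problems in one requirement record."""
    issues = []
    if not req.statement.strip():
        issues.append("empty statement")
    if not req.rationale.strip():
        issues.append("missing rationale")   # needed for validation
    if not req.stakeholder.strip():
        issues.append("no stakeholder to validate against")
    return issues

req = StakeholderRequirement("STK-001", "The operator shall ...",
                             stakeholder="Operator", life_cycle_stage="Operation")
print(quality_issues(req))  # ['missing rationale']
```

A real requirements database would add consistency checks across the whole set, not just per-record completeness.<br />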
<br />
<br />
===Artifacts and Ontology Elements===<br />
This process may create several artifacts such as:<br />
#Stakeholder Requirements Document<br />
#Organizational Concept of Operations<br />
#System Operational Concept<br />
#Stakeholder Interview Report<br />
#Stakeholder Requirements Database<br />
#Stakeholder Requirements Justification Document (for traceability purposes)<br />
<br />
<br />
This process handles the ontology elements of Table 3.<br />
<br />
[[File:SEBoKv05 KA-SystDef ontology elements stakeholder requirements.png|600px|center|Main Ontology Elements as Handled within Stakeholder Requirements Definition]]<br />
<br />
Table 3. Main Ontology Elements as Handled within Stakeholder Requirements Definition<br />
<br />
<br />
The main relationships between ontology elements are presented in Figure 2.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Stakeholder_Requirements_relationships.png|400px|center|Stakeholder Requirements Relationships with Other Engineering Elements]]<br />
<br />
Figure 2. Stakeholder Requirements relationships with other engineering elements (Faisandier, 2011)<br />
<br />
===Checking and correctness of Stakeholder Requirements===<br />
Stakeholder Requirements should be checked to gauge whether they are well expressed and appropriate. There are a number of characteristics which can be used to check Stakeholder Requirements. In general, Stakeholder Requirements should have characteristics as described in section "Presentation and Quality of Requirements" in the topic [[System Requirement]].<br />
<br />
Determining the correctness of Stakeholder Requirements requires user-domain expertise; if the systems engineering team does not have stakeholder domain expertise, one or more subject matter experts should be involved.<br />
<br />
<br />
===Methods and Modeling Techniques===<br />
It is strongly recommended that engineers consider several different techniques or methods for identifying needs, expectations, and requirements during the elicitation activity in order to better accommodate the diverse set of requirements sources, including:<br />
*Structured workshops with brainstorming<br />
*Interviews and questionnaires<br />
*Observation of environment or work patterns (e.g., time and motion studies)<br />
*Technical documentation review<br />
*Market analysis or competitive system assessment<br />
*Simulations, prototyping, and modeling<br />
*Benchmarking processes and systems<br />
*Organizational analysis techniques (e.g., Strengths, Weaknesses, Opportunities, Threats - SWOT analysis, product portfolio)<br />
*Quality Function Deployment (QFD), which can be used during the needs analysis. QFD is a technique for deploying the "Voice of the Customer"; it provides a fast way to translate customer needs into requirements.<br />
*Use Case diagram of SysML<br />
*Context diagram based on Block diagram of SysML<br />
*Functional Flow Block Diagram<br />
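<br />
As an illustration of one technique from the list above, the core of a QFD analysis is a simple weighted-matrix calculation: customer-need weights are propagated through a need-versus-characteristic relationship matrix to rank technical characteristics. In this sketch, all needs, characteristics, weights, and relationship strengths are invented for illustration.<br />

```python
# Minimal sketch of the core QFD calculation. Customer-need weights are
# propagated through a need-vs-characteristic relationship matrix to rank
# candidate technical characteristics. All numbers here are illustrative.
need_weights = {"easy to operate": 5, "low maintenance": 3, "safe": 4}

# Relationship strength between each need and each technical characteristic
# (0, 1, 3, 9 is the conventional QFD scale).
relationships = {
    "easy to operate": {"control latency": 9, "MTBF": 1, "guard interlock": 0},
    "low maintenance": {"control latency": 0, "MTBF": 9, "guard interlock": 1},
    "safe":            {"control latency": 3, "MTBF": 1, "guard interlock": 9},
}

# Score of a characteristic = sum over needs of (need weight * strength).
scores = {}
for need, weight in need_weights.items():
    for characteristic, strength in relationships[need].items():
        scores[characteristic] = scores.get(characteristic, 0) + weight * strength

# Print characteristics from highest to lowest priority.
for characteristic in sorted(scores, key=scores.get, reverse=True):
    print(characteristic, scores[characteristic])
```

The ranking that results is only as good as the weights and strengths elicited from stakeholders, which is why QFD is paired with the elicitation techniques above.<br />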
<br />
<br />
----<br />
<br />
==Application to Product systems, Service systems, Enterprise systems==<br />
<br />
The classification of Stakeholder Requirements may differ among product systems, service systems, and enterprise systems.<br />
<br />
----<br />
<br />
<br />
==Practical Considerations about Stakeholder requirements==<br />
Some major pitfalls encountered with definition of Stakeholder Requirements are indicated in Table 4.<br />
<br />
[[File:SEBoKv05_KA-SystDef_pitfalls_Stakeholder_Requirements.png|600px|center|Major Pitfalls with Definition of Stakeholder Requirements]]<br />
<br />
Table 4. Major pitfalls with definition of Stakeholder Requirements<br />
<br />
<br />
<br />
The following proven practices in Stakeholder Requirements engineering have repeatedly been shown to reduce project risk and cost, foster customer satisfaction, and produce successful system development (see Table 5).<br />
<br />
[[File:SEBoKv05_KA-SystDef_practices_Stakeholder_Requirements.png|600px|center|Proven Practices with Definition of Stakeholder Requirements]]<br />
<br />
Table 5. Proven practices with definition of Stakeholder Requirements<br />
<br />
<br />
----<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
IEEE. 1998. Guide for Information Technology – System Definition – Concept of Operations (ConOps) Document. Institute of Electrical and Electronics Engineers, IEEE 1362:1998.<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
ISO/IEC. 2011. Systems and software engineering - Life cycle processes - Requirements engineering. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC FDIS 29148.<br />
<br />
<br />
Kohda, T., M. Wada, and K. Inoue. 1994. A simple method for phased mission analysis. Reliability Engineering & System Safety 45 (3): 299-309.<br />
<br />
<br />
TRADOC. 1999. Systems approach to training management, processes, and products. Ft. Monroe, VA, USA: U.S. Army Training and Doctrine Command (TRADOC), TRADOC 350-70.<br />
<br />
<br />
U.S. Army. 1997. Army field manual: Staff organization and operations. Washington, DC: U.S. Department of the Army, FM 101-5.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the ‘’’reference title only’’’ ([[title]]). Please do not include version numbers in the links.<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
ISO/IEC. 2011. Systems and software engineering - Life cycle processes - Requirements engineering. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC FDIS 29148.<br />
<br />
<br />
van Lamsweerde, A. 2009. Requirements Engineering. New York, NY: Wiley.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
<br />
Center for Quality Management. 1993. Special issue on kano's methods for understanding customer defined quality. Center for Quality Management Journal 2 (4) (Fall 1993). <br />
<br />
<br />
Faisandier, A. 2011. Engineering and architecting multidisciplinary systems. (expected--not yet published). <br />
<br />
<br />
Hull, M. E. C., K. Jackson, and A. J. J. Dick. 2010. Systems Engineering, 3rd ed. London: Springer.<br />
<br />
<br />
IEEE. 1998. Guide for Information Technology – System Definition – Concept of Operations (ConOps) Document. Institute of Electrical and Electronics Engineers, IEEE 1362:1998.<br />
<br />
<br />
Kano, N. 1984. Attractive quality and must-be quality. Quality JSQC 14 (2) (October 1984). <br />
<br />
<br />
Kohda, T., M. Wada, and K. Inoue. 1994. A simple method for phased mission analysis. Reliability Engineering & System Safety 45 (3): 299-309. <br />
<br />
<br />
Marca, D. A., and C. L. McGowan. 1987. SADT: Structured analysis and design techniques. Software engineering. New York, NY: McGraw-Hill. <br />
<br />
<br />
TRADOC. 1999. Systems approach to training management, processes, and products. Ft. Monroe, VA, USA: U.S. Army Training and Doctrine Command (TRADOC), TRADOC 350-70. <br />
<br />
<br />
U.S. Army. 1997. Army field manual: Staff organization and operations. Washington, DC: U.S. Department of the Army, FM 101-5.<br />
<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Fundamentals of System Definition|<- Previous Article]] | [[System Definition|Parent Article]] | [[System Requirements|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Definition&diff=9648System Definition2011-08-09T20:47:21Z<p>Skmackin: </p>
<hr />
<div>'''Introductory Paragraphs'''<br />
==Topics==<br />
The topics contained within this knowledge area include:<br />
*[[Fundamentals of System Definition]]<br />
*[[Mission Analysis and Stakeholders Requirements]]<br />
*[[System Requirements]]<br />
*[[Architectural Design]]<br />
*[[System Analysis]]<br />
<br />
<br />
----<br />
<br />
==Introduction to System Definition Knowledge Area==<br />
<br />
===System Definition activities===<br />
<br />
The SEBoK divides the traditional life cycle process steps into four stages. This chapter discusses the definition stage. System Definition is the set of technical, creative activities of systems engineering (SE). The activities are grouped and described as generic processes that are performed iteratively and concurrently, depending on the selected development cycle or [[Life Cycle (glossary)]]. The activities of a process are not performed in a strictly sequential mode; they may interact with activities of other related processes. The processes of System Definition include mission analysis and stakeholder requirements, system requirements, architectural design, and system analysis. The Figure below gives an example of the iteration of three processes as defined in ISO/IEC 15288.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Example_of_iterations_of_processes_related_to_System_Definition.png|420px|center|Example of iterations of processes related to System Definition ]]<br />
<br />
<br />
Figure 1. Example of iterations of processes related to System Definition (ISO/IEC 2003)<br />
<br />
<br />
The complete definition of a [[System of Interest (SoI) (glossary)]] is generally achieved by considering decomposition layers of systems and of [[System Element (glossary)|System Elements (glossary)]]. The Figure below presents a fundamental schema of a system breakdown structure.<br />
<br />
[[File:SEBoKv05_KA-SystDef_Hierarchical_decomposition_of_a_system-of-interest.png|600px|center|Hierarchical decomposition of a system-of-interest ]]<br />
<br />
Figure 2. Hierarchical decomposition of a system-of-interest (ISO/IEC. 2008)<br />
<br />
<br />
In each decomposition layer and for each system, the System Definition processes are applied recursively because the notion of system is itself recursive; a System of Interest, a system, and a system element have the same nature – see [[Systems|Part 2]]. The Figure below shows an example of the recursion of three processes as defined in ISO/IEC 15288.<br />
<br />
<br />
[[File:SEBoKv05_KA-SystDef_Recursion_of_processes_on_layers.png|550px|center|Recursion of processes on layers ]]<br />
<br />
Figure 3. Recursion of processes on layers (ISO/IEC 2003)<br />
<br />
<br />
Note: To facilitate readers' understanding of ISO/IEC 15288 (ISO/IEC 2008), the Figure below shows the relative position of the technical Knowledge Areas (KAs) of this SEBoK with respect to the processes stated in the standard.<br />
<br />
<br />
<br />
Figure 4. Mapping of technical topics of Knowledge Areas of SEBoK with ISO/IEC 15288 Technical Processes <br />
<br />
<br />
====Top-down approach: from the problem to the solution====<br />
<br />
In a top-down approach, the System Definition activities are focused primarily on understanding the problem, the conditions that constrain the system, and the design of solutions. The outcomes of the System Definition are used for the System Realization, System Deployment and Use, and Product and Service Life Management. In this approach, System Definition includes the activities that are completed primarily in the front-end portion of the system design and the design itself. These consist of mission analysis and stakeholders’ requirements, system requirements, architectural design, and system analysis. <br />
<br />
*'''Mission Analysis and [[Stakeholder Requirement (glossary)|Stakeholder Requirements (glossary)]]''' focus on the identification and definitions of stakeholders' needs, the development of operational and environmental conditions, of operational concepts, and the definition of applicable constraints.<br />
*These elements then are used for the development of '''System Requirements (glossary)''' that consists of the refinement and translation of the stakeholders’ requirements into System (technical) Requirements. <br />
*These System Requirements are then utilized as inputs to the '''Architectural Design''', which includes [[Functional Architecture (glossary)]], dynamic behavior, and [[Physical Architecture (glossary)]], in addition to temporal and decision levels. <br />
*'''System Analysis''' studies are performed to evaluate and select potential System Elements that compose the system and to compare potential architectures and to select the most suitable one. Finally, System Analysis provides a “best value” balanced solution involving all the relevant engineering elements (Stakeholder Requirements, System Requirements, and architectural Design Properties).<br />
<br />
====Bottom up approach and evolution of the solution====<br />
<br />
During [[Product (glossary)]] and [[Service (glossary)]] Life Management, because of the evolution of the [[context (glossary)]] of use or to improve the existing solution, engineers are led to reconsider the System Definition in order to modify or adapt some structural, functional, or temporal properties. Before attempting any modification, and because the System of Interest already exists, [[Reverse Engineering (glossary)]] is often necessary to (re)characterize its properties or those of its systems or system elements.<br />
<br />
A bottom-up approach is necessary for analysis purposes, or for (re)using existing elements in a design [[architecture (glossary)]]. In contrast, a top-down approach is generally used to define a design solution corresponding to a problem or a set of needs.<br />
<br />
Thus, over the real life of a System of Interest, bottom-up and top-down approaches are often mixed in order to engineer new solutions and/or to re-engineer or re-use existing [[Product (glossary)|Products]], [[Service (glossary)|Services]], [[Enterprise (glossary)|Enterprises]], physical or functional models, requirements, etc.<br />
<br />
====Separation and iteration between Problem Area and Solution Area====<br />
<br />
Inside the System Definition activities, the systems engineering discipline makes a significant distinction between the definition of the problem and the design of the solution. Too often, the design team develops solution elements ("how the system must be done") without sufficiently defining the problem to be solved and the constraints to be respected ("what the system must do").<br />
<br />
To correctly engineer a system, it is recommended, as a first step, to use a top-down approach that differentiates the activities treating the problem area (needs, expectations, constraints, and requirements) from those treating the solution area (functional, behavioral, and physical design). Nevertheless, the two must work together iteratively. The design of the solution is made of decisions and trade-offs according to requirements, constraints, and enterprise capabilities (know-how, technological control, financial capability, etc.). This work requires several iterations to tune the "problem-solution" pair. System Analysis activities are used to perform the link between the two areas.<br />
<br />
The distinction between the two areas shall be effective at each level of the system breakdown structure – see Figure 2 above.<br />
<br />
Most of the time, systems are not created from scratch and generally integrate existing [[System Element (glossary)|System Elements (glossary)]]. A bottom-up approach is used in parallel with the top-down approach to take such elements (legacy systems, products, or services, for example) into account; it consists of identifying the services and capabilities they provide in order to define applicable interface requirements and constraints.<br />
<br />
<br />
----<br />
<br />
===Ontology for System Development===<br />
<br />
====Why an ontology for system development?====<br />
<br />
Ontology is the set of entities presupposed by a theory (Collins English Dictionary). SE, and in particular system development, can be considered a theory because it is based on mathematical concepts, even if it is not always described as such. A SE ontology can be defined considering the following path.<br />
<br />
SE provides engineers with an approach based on a set of concepts (i.e., Stakeholder, Requirement, Function, Scenario, System Element, etc.) and generic processes. Each process is composed of a set of activities and tasks federated around a theme or a purpose. A process describes "what to do" using the concepts. The implementation of the activities and tasks is supported by methods and modeling techniques, themselves composed of elementary tasks; they describe the "how to do". The activities and tasks of SE are transformations of generic data using the predefined concepts. These generic data are called Entities, Classes, or Types. Each ''entity'' is characterized by specific ''attributes'', and each attribute can take different values. Throughout their execution, the activities and tasks of processes, methods, and modeling techniques exchange instances of the generic entities according to logical ''relationships''. These relationships allow the engineer to link the entities among themselves (traceability) and to follow a logical sequence of the activities and the global progression (engineering management). A cardinality is associated with every relationship, expressing the minimum and maximum number of entities required to make the relationship valid. Additional information may be found in (Oliver, Kelliher, and Keegan 1997).<br />
<br />
The set of SE entities and their relationships form an ontology also often called Engineering Meta-model. Such an approach is used and defined in the standard (ISO 2007). The benefits of using an ontology are many. The ontology allows or forces:<br />
<br />
*the use of a standardized vocabulary, using the right names and avoiding using synonyms in the processes, methods, and modeling techniques;<br />
*reconcilement of the vocabulary used in different modeling techniques and methods;<br />
*automatic traceability of requirements when the ontology is implemented in databases, SE tools, or workbenches, and quick identification of the impacts of modifications in the engineering data set;<br />
*checks of the consistency and completeness of engineering data, etc.<br />
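<br />
As a minimal sketch of these benefits, an ontology of typed entities and relationships (with cardinalities) immediately yields requirement traceability and impact identification. The entity and relationship names below follow this section; the tiny API itself is an illustrative assumption.<br />

```python
# Minimal sketch of an engineering meta-model: typed entities with
# attributes, and typed relationships. Entity and relationship names
# follow the text; this tiny API is an illustrative assumption.
entities = {}          # id -> (entity type, attributes)
relationships = []     # (source id, relationship type, target id)

# Each relationship type carries a cardinality: (min, max) targets per
# source; None means unbounded.
CARDINALITY = {"satisfies": (1, None), "performs": (1, None)}

def add_entity(eid, etype, **attributes):
    entities[eid] = (etype, attributes)

def relate(source, relation, target):
    assert relation in CARDINALITY, "unknown relationship type"
    relationships.append((source, relation, target))

add_entity("SR-1", "SystemRequirement", statement="Respond within 2 s")
add_entity("F-1", "Function", name="Process command")
add_entity("SE-1", "SystemElement", name="Controller")
relate("F-1", "satisfies", "SR-1")   # traceability: design element -> requirement
relate("SE-1", "performs", "F-1")

def impacted_by(eid):
    """Entities that directly depend on eid -- the impact of modifying it."""
    return [s for (s, _, t) in relationships if t == eid]

print(impacted_by("SR-1"))  # ['F-1']
```

Chasing `impacted_by` transitively gives the full impact of a modification, which is the consistency-checking benefit listed above.<br />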
<br />
An engineering ontology can be represented using Entity-Relationship Diagrams or Class Diagrams. The Figure below shows a simplified and truncated view of a meta-data model for system development. The entities are logically grouped according to the activities of development. One can distinguish three views of the system that correspond also to three statuses of the system: <br />
<br />
*the system "'''as required'''", represented by [[System Requirement (glossary)|System Requirements (glossary)]] that are obtained by refinement of the [[Stakeholder Requirement (glossary)|Stakeholder Requirements (glossary)]];<br />
*the system "'''as designed'''", represented by [[Functional Architecture (glossary)]] and behavioral models or [[Scenario (glossary)|Scenarios (glossary)]] (generally composed of [[Function (glossary)|Functions (glossary)]], [[Input-Output Flow (glossary)|Input-Output Flows (glossary)]], and/or [[Operational Mode (glossary)|Operational Modes (glossary)]] and [[Transition of Modes (glossary)]]), and by [[Physical Architecture (glossary)]] models (composed of [[System Element (glossary)|System Elements (glossary)]], [[Physical Interface (glossary)|Physical Interfaces (glossary)]], [[Port (glossary)|Ports (glossary)]], and their associated [[Design Property (glossary)|Design Properties]]);<br />
*the system "'''as realized'''", represented by a [[Product (glossary)]], a [[Service (glossary)]] or an [[Enterprise (glossary)]] that integrates [[Implemented Element (glossary)|Implemented Elements (glossary)]] characterized by a set of [[Measured Property (glossary)|Measured Properties (glossary)]].<br />
<br />
[[File:SEBoKv05_KA-SystDef_A_simplified_meta-data_model_for_system_development.png|700px|center|A simplified view of a meta-data model for system development.]]<br />
<br />
Figure 5. A simplified view of a meta-data model for system development (Faisandier. 2011) <br />
<br />
<br />
As they are progressively instantiated, the entities and their relationships also represent the maturity of the progress of the development of the system. Rather than providing a fixed work flow of activities, the relationships between the entities allow several threads of activities and tasks to be selected, depending on the project type and on the availability of the engineering data at a given time.<br />
<br />
====Approaches supported by the ontology====<br />
<br />
Such an ontology easily supports different approaches for life cycle purposes, such as top-down, bottom-up, or mixed approaches.<br />
<br />
*A top-down approach can be used starting from the left of the diagram, as follows: <br />
**Stakeholders write Stakeholder Requirements that are then translated into System Requirements;<br />
**these are then satisfied by design elements, in particular Functions, which are performed by System Elements, and Input-Output Flows, which are carried by Physical Interfaces; <br />
**the System Elements and Physical Interfaces are implemented by Implemented Elements, depending on the selected technologies; <br />
**finally, these last elements are integrated into intermediate [[Aggregate (glossary)|Aggregates (glossary)]] until the final realized System of Interest (Product, Service, or Enterprise) is obtained.<br />
<br />
Throughout the design activity, the Design Properties of the system architecture can be assessed and compared to [[Assessment Criterion (glossary)|Assessment Criteria (glossary)]] extracted from the System Requirements. This makes it possible to identify discrepancies and then to tune the design or to decide on modifications of the requirements (System Analysis). When the system is realized and in operation, the Measured Properties can be compared to the Design Properties and to the System Requirements. These comparisons make it possible to verify the correct performance of the system and to decide on eventual maintenance operations.<br />
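<br />
The comparison of Design Properties (or Measured Properties) against Assessment Criteria can be sketched as a simple range check; the property names and limits below are invented for illustration.<br />

```python
# Illustrative check of Design Properties (or Measured Properties) against
# Assessment Criteria extracted from System Requirements. Property names
# and limits are invented for this sketch.
assessment_criteria = {"mass_kg": (None, 120.0),   # (min, max); None = unbounded
                       "range_km": (400.0, None)}

def discrepancies(properties):
    """Return the properties that violate their assessment criterion."""
    out = []
    for name, value in properties.items():
        low, high = assessment_criteria.get(name, (None, None))
        if (low is not None and value < low) or (high is not None and value > high):
            out.append(name)
    return out

design_properties = {"mass_kg": 112.5, "range_km": 385.0}
print(discrepancies(design_properties))  # ['range_km'] -- falls short of the criterion
```

Each discrepancy found this way triggers either a design change or a negotiated change to the requirement, which is the System Analysis trade-off described above.<br />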
<br />
*A bottom-up approach associated with reverse engineering can also be used in order to re-structure or re-architect existing systems to fit the evolution of the context of use: <br />
**First, the Implemented Elements are identified, and then the corresponding System Elements and Physical Interfaces are identified in their turn. <br />
**The second step of reverse engineering consists of identifying the Functions performed by the System Elements and the Input-Output Flows carried by the Physical Interfaces. This makes it possible to establish physical and functional architecture models of the existing system. <br />
**Finally, it is possible to characterize the existing system by a set of Design Properties and System Requirements extracted from the Physical and Functional Architecture models. <br />
**Any modification of what exists (re-engineering) is then possible by reconsidering new System Requirements or evolutions in a top-down approach.<br />
<br />
<br />
Because of the evolution of the context of use of a system during its life cycle, it is essential that the set of its engineering data and models be permanently consistent, updated, and accessible.<br />
<br />
<br />
----<br />
<br />
===Overview of ontology elements related to System Definition activities===<br />
The Figure below shows a synthetic view of the same meta-data model as above but restricted to System Definition and extended with the main relationships used during System Definition.<br />
<br />
[[File:SEBoKv05_KA-SystDef_A_simplified_ontology_for_System_Definition.png|700px|center|A simplified ontology for System Definition.]]<br />
<br />
Figure 6. A simplified ontology for System Definition (Faisandier. 2011) <br />
<br />
<br />
Based on the overview of the figure above, a set of major ''entities'', ''attributes'', and ''relationships'' are suggested and detailed in the topics of the System Definition knowledge area.<br />
<br />
<br />
----<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
Faisandier, A. 2011. Engineering and architecting multidisciplinary systems. (expected--not yet published).<br />
<br />
<br />
ISO. 2007. Systems engineering and design. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 10303-AP233.<br />
<br />
<br />
ISO/IEC. 2003. Systems Engineering — A Guide for the application of ISO/IEC 15288 System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 19760:2003 (E).<br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
Oliver, D., T. Kelliher, and J. Keegan. 1997. Engineering complex systems with models and objects. New York, NY: McGraw-Hill.<br />
<br />
===Primary References===<br />
<br />
ANSI/EIA. 1998. Processes for engineering a system. Philadelphia, PA, USA: American National Standards Institute (ANSI)/Electronic Industries Association (EIA), ANSI/EIA-632-1998. <br />
<br />
<br />
INCOSE. 2010. INCOSE systems engineering handbook, version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2. <br />
<br />
<br />
ISO/IEC. 2003. Systems Engineering — A Guide for the application of ISO/IEC 15288 System Life Cycle Processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 19760:2003 (E).<br />
<br />
<br />
ISO/IEC. 2007. Systems engineering--application and management of the systems engineering process. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 26702:2007. <br />
<br />
<br />
ISO/IEC. 2008. Systems and software engineering - system life cycle processes. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 15288:2008 (E).<br />
<br />
<br />
NASA. 2007. Systems engineering handbook. Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.<br />
<br />
===Additional References===<br />
<br />
Faisandier, A. 2011. Engineering and architecting multidisciplinary systems. (expected--not yet published).<br />
<br />
<br />
ISO. 2007. Systems engineering and design. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 10303-AP233.<br />
<br />
<br />
Oliver, D., T. Kelliher, and J. Keegan. 1997. Engineering complex systems with models and objects. New York, NY: McGraw-Hill.<br />
<br />
<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Integration of Process and Product Models|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[Fundamentals of System Definition|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Knowledge Area]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Process_Integration&diff=9646Process Integration2011-08-09T20:43:33Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Representative System Life Cycle Process Models|<- Previous Article]] | [[Life Cycle Models|Parent Article]] | [[System Definition|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Lifecycle_Process_Models:_Vee&diff=9645System Lifecycle Process Models: Vee2011-08-09T20:42:26Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph<br />
<br />
==Software lifecycle models==<br />
<br />
<br />
The lifecycle processes of software engineering follow the pattern of systems engineering: analysis, design, construction, verification and validation, acceptance, installation, sustainment, and retirement. However, software is a flexible and malleable medium, which facilitates iterative analysis, design, construction, verification, and validation to a greater degree than is usually possible for the physical components of a system. Each iteration of an iterative development model adds code to the growing software base; the expanded code base is tested, reworked as necessary, and demonstrated to satisfy the requirements for that baseline.<br />
<br />
Some process models for software development support iterative development on daily cycles. Other iterative development models support weekly, bi-weekly, and monthly iterations. The outcome of each iterative cycle is demonstrated to the satisfaction of the software team and other internal stakeholders while the results of some iterations are demonstrated for users, customers, and other external stakeholders. The evolving baselines of code may be tested and demonstrated on the actual system hardware or on emulated hardware.<br />
Table 1 lists three iterative software development models presented in this article and the aspects of software development that are emphasized by those models. <br />
<br />
Table 1. Primary emphases of three iterative software development models<br />
<br />
*Incremental-build: Iterative implementation-verification-validation-demonstration cycles<br />
*Agile: Iterative evolution of requirements and code<br />
*Spiral: Iterative risk-based analysis of alternative approaches and evaluation of outcomes<br />
<br />
===Iterative-development process models===<br />
<br />
Developing and modifying software involves creative processes that are subject to many external and changeable forces. Long experience has shown that it is impossible to “get it right” the first time, and that iterative development processes are preferable to linear, sequential development process models such as the well-known Waterfall model. <br />
In iterative development, each cycle of the iteration subsumes the software of the previous iteration and adds new capabilities to the evolving product to produce a next, expanded version of the software. Iterative development processes provide the following advantages: <br />
*continuous integration, verification, and validation of the evolving product, <br />
*frequent demonstrations of progress, <br />
*early detection of defects, <br />
*early warning of process problems, <br />
*systematic incorporation of the inevitable rework that occurs in software development, and <br />
*early delivery of subset capabilities (if desired). <br />
<br />
Iterative development takes many forms in software engineering, including these:<br />
<br />
*An Incremental-build process is used to produce periodic (typically weekly) builds of increasing product capabilities; <br />
*Agile development is used to closely involve a prototypical customer in an iterative process that may repeat on a daily basis; <br />
*The Spiral model is used to confront and mitigate risk factors encountered in developing the successive versions of a product. <br />
Each of these models is briefly described.<br />
<br />
=== The Incremental-build model ===<br />
<br />
<br />
The Incremental-build model is a build-test-demonstrate model of iterative cycles in which frequent demonstrations of progress and verification & validation of work to date are emphasized. The Incremental-build model is based on stable requirements and a software architectural specification. Prioritized requirements are allocated to various elements of the software architecture, which is the basis for specifying a prioritized sequence of builds. Each build adds new capabilities to the incrementally growing product. The development process ends when Version N (the final version) is verified, validated, demonstrated, and accepted by the customer.<br />
<br />
Table 2 lists some partitioning criteria for incremental development. Partitions are decomposed into incremental build units of, typically, one calendar-week each. One-week increments and the number of developers available to work on the project determine the number of features that can be included in each incremental build. This, in turn, determines the overall schedule.<br />
<br />
Table 2. Some partitioning criteria for incremental builds<br />
<br />
*Application package: Priority of features<br />
*Safety-critical systems: Safety features first; prioritized others follow<br />
*User-intensive systems: User interface first; prioritized others follow<br />
*System software: Kernel first; prioritized utilities follow<br />
<br />
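The scheduling arithmetic described above (one-week builds sized by team capacity) can be sketched as follows. This is an illustrative sketch only, not from the article; the feature names and the one-feature-per-developer-per-week rate are assumptions.<br />

```python
# Hypothetical sketch: partitioning a prioritized feature list into one-week
# incremental builds, where team size determines each build's capacity.

def build_schedule(features, developers, features_per_dev_per_week=1):
    """Group prioritized features into one-week incremental builds."""
    per_build = developers * features_per_dev_per_week  # capacity of one build
    return [features[i:i + per_build]
            for i in range(0, len(features), per_build)]

features = [f"F{i}" for i in range(1, 8)]   # 7 prioritized features (assumed)
schedule = build_schedule(features, developers=3)
# 3 developers at 1 feature/week each => 3 weekly builds
```

The number of builds returned is the overall schedule in weeks, mirroring the article's point that increment size and team size jointly determine schedule.<br />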
Figure 1 illustrates the details of the build-verify-validate-demonstrate cycles in the Incremental-build process. Each “build” includes detailed design, coding, integration, and review and testing by the developers. In cases where code is to be reused without modification, some or all of an incremental build may consist of review, integration, and test of the base code augmented with the reused code. <br />
<br />
Insert Figure 1 here<br />
<br />
Figure 1. Incremental build-verify-validate-demonstrate cycles<br />
<br />
Development of an increment may result in rework of previously developed components to better accommodate the new components being integrated, or to fix defects in previously developed components exposed by addition of the new components, or to fix defects in the new components. In any case, frequent demonstrations of progress (or lack thereof) are a major benefit of an Incremental-build development process.<br />
<br />
Incremental verification, validation, and demonstration, as illustrated in Figure 1, overcome two of the major problems of a Waterfall approach: 1) problems are exposed early and can be corrected as they occur; and 2) minor in-scope changes to requirements that occur as a result of incremental demonstrations can be incorporated in subsequent incremental builds.<br />
<br />
Figure 1 also illustrates that it may be possible to overlap successive builds of the product. It may be possible, for example, to start detailed design of the next version while the present version is being validated. Three factors determine the degree of overlap that can be achieved: <br />
<br />
#availability of sufficient personnel to concurrently pursue multiple activities,<br />
#adequate progress on the previous version to provide needed capabilities for the next version, and<br />
#the risk of significant rework that must be accomplished on the next overlapped build because of changes to the previous in-progress build. <br />
<br />
The Incremental-build process works well when each team consists of 2 to 5 developers plus a team leader (who is also a technical contributor). Team members may work as individuals, or perhaps in pairs. Each individual or pair may produce unofficial builds on a daily basis using a copy of the current official version as a test bed. An official build that integrates, verifies, validates, and demonstrates progress made by all developer teams is typically produced on a weekly or bi-weekly basis. <br />
<br />
The Incremental-build model can be scaled up for large projects by partitioning the architecture into well-defined subsystems and allocating requirements and interfaces to each subsystem. The subsystems can be independently tested and demonstrated, perhaps using stubs and drivers for the subsystem interfaces, or perhaps using early, incremental versions of other evolving subsystems. System integration can proceed incrementally as intermediate versions of the various subsystems become operational.<br />
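The stubs mentioned above can be sketched as follows. This is an illustrative example, not from the article: the subsystem names and interfaces are invented to show how a stub lets one subsystem be integrated and demonstrated before the subsystems it depends on exist.<br />

```python
# Hypothetical sketch: a "stub" stands in for an unfinished subsystem so that
# a dependent subsystem can be tested and demonstrated early.

class NavigationStub:
    """Stub for a navigation subsystem still under development (assumed name)."""
    def current_position(self):
        return (0.0, 0.0)  # canned response, enough to exercise callers

class Autopilot:
    """Subsystem under test; depends only on the navigation interface."""
    def __init__(self, nav):
        self.nav = nav
    def heading_correction(self, target):
        x, y = self.nav.current_position()
        return (target[0] - x, target[1] - y)

# Driver: exercises Autopilot against the stub during early integration.
ap = Autopilot(NavigationStub())
correction = ap.heading_correction((3.0, 4.0))
# correction == (3.0, 4.0) because the stub always reports position (0, 0)
```

When an early incremental version of the real navigation subsystem becomes operational, it replaces the stub without changing the Autopilot code.<br />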
<br />
A significant advantage of an Incremental-build process is that features built first are verified, validated, and demonstrated most frequently because subsequent builds incorporate the features of the earlier builds. In building the software to control a nuclear reactor, for example, the emergency shutdown software could be built first. Operation of emergency shutdown (scramming) would then be verified and validated in conjunction with the features of each successive build.<br />
<br />
In summary, the Incremental-build model, like all iterative models, provides the advantages of continuous integration and validation of the evolving product, frequent demonstrations of progress, early warning of problems, early delivery of subset capabilities, and systematic incorporation of the inevitable rework that occurs in software development.<br />
<br />
===Agile development models===<br />
<br />
Agile development models are evolutionary, in that the requirements evolve during implementation. Agile development is best suited to software projects that are conducted with the involvement of a knowledgeable customer/user who has a clear understanding of the needs to be satisfied by the system that is being built. There are several variations on the agile theme, but most agile-process models emphasize the following aspects [Agile]: <br />
<br />
#continuous, on-going involvement of a knowledgeable customer/user; <br />
#development of test cases and test scenarios before implementing the next version of the product; <br />
#implementation and testing of the resulting software; <br />
#demonstration of each version of the evolving product to the customer; <br />
#eliciting the next set of requirement(s) from the customer; and <br />
#periodic delivery into the operational environment, if desired. <br />
<br />
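The cycle of aspects listed above can be sketched as follows. This is a hypothetical illustration, not from the article; all names are invented, and the "acceptance test" is a stand-in for the test cases and scenarios written before implementation.<br />

```python
# Hypothetical sketch of one agile cycle: tests are derived before
# implementing the customer's next story, and the product grows each cycle.

def agile_iteration(product, story, implement, acceptance_test):
    """One cycle: test-first implementation of a single customer story."""
    candidate = implement(product, story)          # implement the story
    assert acceptance_test(candidate), "rework needed before demonstration"
    return candidate                               # demonstrate; product grows

product = []                                 # the evolving product (capabilities)
stories = ["login", "search", "checkout"]    # customer's story line (assumed)
for story in stories:
    product = agile_iteration(
        product, story,
        implement=lambda p, s: p + [s],
        acceptance_test=lambda p: len(p) > 0)
```

Each pass through the loop corresponds to one demonstrated iteration; the customer then supplies the next story.<br />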
The customer’s roles are to provide the “story line” that determines the requirements, to review demonstrated capabilities, and to specify the next chapter of the story for the next iteration. An iterative process model for agile development is depicted in Figure 2. <br />
<br />
Insert Figure 2 here<br />
<br />
Figure 2. Depicting the Agile development process <br />
<br />
As indicated in Figure 2, there is no explicit design step and no design documentation in an Agile development process. This is compensated for by a design “metaphor” that is shared among the developers. A design metaphor might be based on an architectural style; for example, the system may be based on a layered style (e.g., a 3-tier architecture) or a separation-of-concerns architecture (e.g., a Model-View-Controller architecture). Lack of explicit design documentation requires that the developers be highly skilled; otherwise “agile” becomes a euphemism for “code it, test it, fix it.” <br />
<br />
In some versions of the agile process, the duration of an iterative cycle is one day; in other versions it is extended to one month. However, the software developers always have a running version of the software to which they add capabilities on a daily basis. Some agile models use pair programming, in which pairs of developers share one computer terminal and develop software together. <br />
<br />
Experience with agile models indicates that the resulting products are rated low in defect levels and high in user satisfaction. However, user satisfaction is critically dependent on having a knowledgeable and prototypical user as the customer in the iterative development loop. Some critics have raised the concern that an agile process may result in a functionally structured product that, lacking design documentation, will be hard to modify in the future. This problem can be minimized if the software developers share a common design metaphor and use standard coding and code documentation practices.<br />
<br />
Agile development seems to be best suited to small projects that develop applications software. It suits small projects because there is no partitioning of an a priori design or allocation of requirements to subsystems, which is necessary if members of large project teams are to work concurrently. Agile processes are appropriate for applications projects because the user stories provided by the customer and the design metaphors used by developers are best suited to end-item software that people will use in their work activities or recreational pastimes, as opposed to complex embedded and mission-critical systems.<br />
<br />
Development models other than Agile are sometimes characterized as “plan-driven,” in contrast to the intentional lack of emphasis on written requirements specifications, design documentation, and V&V plans in the agile models. The text Balancing Agility and Discipline by Boehm and Turner contrasts plan-driven and agile approaches to software development and presents a middle-ground approach that incorporates aspects of both, based on the particular situation [BOEHM04].<br />
<br />
===The Spiral Model===<br />
<br />
Originally, the Spiral model was presented as a software development model [Boehm88]. In recent times, it has come to be regarded as a meta-model (i.e., a development process framework) from which various kinds of iterative models can be derived. As illustrated in Figure 3, each cycle of a Spiral process involves: <br />
<br />
#analyzing objectives, identifying alternative approaches, and establishing constraints for the next process cycle; <br />
#planning the next cycle by evaluating alternative approaches, identifying the risk factors of each approach, and selecting an approach; <br />
#implementing the selected alternative; and <br />
#evaluating the outcome and deciding what to do next. <br />
<br />
Insert Figure 3 here<br />
<br />
Figure 3. The Spiral model<br />
<br />
What-to-do-next depends on the particular instantiation of the Spiral meta-model. In an Incremental-build Spiral model the next cycle involves building and integrating the next set of features. The duration of a Spiral cycle might range from one day for an Agile Spiral to one month for an Evolutionary Spiral.<br />
<br />
Although systematic evaluation of risk is a major theme of Spiral models, it should not be inferred that the lowest risk approach should always be chosen. High-risk endeavors, if successful, often result in high payoffs; a spin-off parallel investigation of a high risk approach might be pursued while implementing a lower risk alternative. The evaluation step would then consider both outcomes, which provides information for the next cycle.<br />
<br />
The concepts of the Spiral model can be integrated into all iterative process models; the Spiral model adds the dimensions of systematically generating alternative approaches for the next iteration, evaluating the risk of each, selecting an alternative for implementation, and evaluating the outcome.<br />
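One spiral cycle as described above can be sketched as follows. This is an illustrative sketch only: the alternatives, their numeric risk and payoff scores, and the simple payoff-minus-risk weighting are invented stand-ins for a real risk analysis.<br />

```python
# Hypothetical sketch of one spiral cycle: evaluate alternative approaches,
# select one by weighing risk against payoff, implement it, and record the
# outcome as input to the next cycle.

def spiral_cycle(alternatives, implement):
    """One cycle: risk analysis -> selection -> implementation -> evaluation."""
    # Plan: score each alternative (assumed weighting; higher is better).
    best = max(alternatives, key=lambda a: a["payoff"] - a["risk"])
    outcome = implement(best)      # implement the selected approach
    return best, outcome           # evaluate: feeds the what-to-do-next decision

alternatives = [
    {"name": "reuse legacy code", "risk": 2, "payoff": 3},
    {"name": "new framework",     "risk": 5, "payoff": 9},
]
chosen, outcome = spiral_cycle(
    alternatives, implement=lambda a: f"built via {a['name']}")
```

Note that, as the article cautions, a real spiral project would not always pick the lowest-risk option; a high-risk, high-payoff alternative may be pursued in parallel.<br />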
<br />
==The role of prototyping in software development==<br />
<br />
In software engineering, a prototype is a mock-up of the desired functionality of some part of the system. This is in contrast to physical systems, where a prototype is usually a first full-functionality version of a system. Software prototypes are constructed to investigate a situation or to evaluate a proposed approach to solving a technical problem. A prototype of a user interface, for example, might be constructed to promote dialog with users and to thus better understand their needs and concerns. A prototype implementation of an algorithm might be undertaken to study the performance or security aspects of the algorithm.<br />
<br />
Prototypes are not constructed with the same attention to architectural structure, interfaces, documentation, and quality concerns as is devoted to product components. Prototypes may be built using different tools than are used to build production systems. For example, a prototype of a user interface might be rapidly developed in Visual Basic but the production version of the interface might be implemented in C to provide the required performance and compatibility with other system components. <br />
<br />
In the past, incorporating prototype software into production systems has created many problems. Prototyping is a useful technique that should be employed whenever appropriate; however, prototyping is not a process model for software development. Some organizations use the term “prototyping” in conjunction with other terms, such as “structured” or “rapid,” to describe their software development model. In many cases this is a euphemism for chaotic development. <br />
<br />
When building a software prototype, we keep the knowledge we have gained but we do not use the code in the deliverable version of the system unless we are willing to do additional work to develop production-quality code from the prototype code. In many cases it is more efficient and more effective to build the production code “from scratch” using the knowledge gained by prototyping than to re-engineer the prototype code.<br />
<br />
==Lifecycle sustainment of software==<br />
<br />
Software, like all systems, requires sustainment efforts to enhance capabilities, to adapt to new environments, and to correct defects. The primary distinction for software is that sustainment efforts change the software; unlike physical entities, software components do not have to be replaced because of physical wear and tear. Changing the software requires re-verification and re-validation, which may involve extensive regression testing to determine that the change has the desired effect and that the change has not altered other aspects of functionality or behavior.<br />
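A minimal illustration of the regression testing mentioned above follows. The function and values are invented for illustration: after a sustainment change, the suite re-runs tests of unchanged behavior alongside a test of the fix.<br />

```python
# Hypothetical sketch: regression testing after a sustainment change.

def discount(price, rate):
    """Changed component: the fix caps the discount rate at 1.0 (assumed)."""
    return price * (1 - min(rate, 1.0))

# Regression tests: re-verify that old behavior still holds, and that the
# change has the desired effect, without altering other functionality.
assert discount(100, 0.2) == 80.0   # pre-change behavior is unchanged
assert discount(100, 1.5) == 0.0    # corrected defect behaves as intended
```

In a real project these checks would live in an automated suite that is re-run in full after every change.<br />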
<br />
==Retirement of software==<br />
<br />
Useful software is seldom retired. Software that is useful often experiences many upgrades during its lifetime so that a later upgrade may bear little similarity to the initial version. In some cases, software that ran in a former operational environment is executed on hardware emulators that provide a virtual machine on newer hardware. In other cases, a major enhancement may replace, and rename, an older version of the software, but the enhanced version provides all of the capabilities of the previous software in a compatible manner. Sometimes, however, a newer version of software may fail to provide compatibility with the older version, which necessitates other changes to a system.<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Life Cycle Process Drivers and Choices|<- Previous Article]] | [[Life Cycle Models|Parent Article]] | [[Integration of Process and Product Models|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Lifecycle_Process_Drivers_and_Choices&diff=9642System Lifecycle Process Drivers and Choices2011-08-09T20:39:21Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Life Cycle Characteristics|<- Previous Article]] | [[Life Cycle Models|Parent Article]] | [[Representative System Life Cycle Process Models|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Generic_Life_Cycle_Model&diff=9640Generic Life Cycle Model2011-08-09T20:36:24Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Life Cycle Models|<- Previous Article]] | [[Life Cycle Models|Parent Article]] | [[System Life Cycle Process Drivers and Choices|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Lifecycle_Models&diff=9637System Lifecycle Models2011-08-09T20:31:01Z<p>Skmackin: </p>
<hr />
<div>Introductory Paragraph(s)<br />
<br />
===Topics===<br />
The topics contained within this knowledge area include:<br />
*[[Life Cycle Characteristics]]<br />
*[[System Life Cycle Process Drivers and Choices]]<br />
*[[Representative System Life Cycle Process Models]]<br />
*[[Integration of Process and Product Models]]<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Systems Engineering and Management|<- Previous Article]] | [[Systems Engineering and Management|Parent Article]] | [[Life Cycle Characteristics|Next Article ->]]</center><br />
==Signatures==<br />
<br />
[[Category: Part 3]][[Category:Knowledge Area]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Systems_Engineering_and_Management&diff=9635Systems Engineering and Management2011-08-09T20:29:02Z<p>Skmackin: </p>
<hr />
<div>This part of the SEBoK concentrates on the generic knowledge of HOW to engineer systems. It provides a basis for the engineering of [[Product System (glossary)|Product Systems (glossary)]], the engineering of [[Service System (glossary)|Service Systems (glossary)]], the engineering of an [[Enterprise System (glossary)|Enterprise System (glossary)]], as well as the engineering of [[System of Systems (SoS) (glossary)|Systems of Systems (glossary)]] as described in Part 4.<br />
<br />
==Managing System Assets==<br />
<br />
Organizations and their enterprises must continually monitor their portfolio of system assets; that is, their value-added product system and/or service system offerings, as well as all of the systems that support their development or operations (often referred to as infrastructure systems). Proper management of system assets, as illustrated in the System Coupling Diagram, is essential for achieving enterprise purposes, goals, and missions in responding to situations. <br />
<br />
Key to the operations of an enterprise are the decisions made concerning system assets. Thus, prudent change management, based upon enterprise needs and the problem and opportunity situations the enterprise encounters, must be a central function of enterprise leadership and management at all levels (strategic, tactical, and operational). Changes can involve the creation of new systems, the modification of existing systems, or the deletion (retiring) of systems, as well as the altering of operational parameters for systems in operation.<br />
<br />
The knowledge areas of this part provide generic insight into various aspects of how to accomplish life cycle relevant changes in respect to the types of engineered systems described in Part 4.<br />
<br />
==Generic Systems Engineering Paradigm== <br />
<br />
In order to establish a basis for the Knowledge Areas of Part 3 and Part 4, the paradigm appearing in Figure 1 identifies the general goal of any Systems Engineering effort: the transformation of a need into a system product or service that provides for that need.<br />
<br />
[[File:062211_BL_Paradigm.png|800px|Generic Systems Engineering Model]]<br />
<br />
On the left-hand side of the figure, observe that there are three Systems of Interest identified in the form of a [[System Breakdown Structure (glossary)]]. SOI 1 is decomposed into its elements, which in this case are systems as well (SOI 2 and SOI 3). These two systems are composed of [[Element (glossary)|System Elements (glossary)]], which are not further refined.<br />
<br />
On the right-hand side of the figure, observe that each of the Systems of Interest has a corresponding [[Life Cycle Model (glossary)]] composed of stages that are populated with processes used to define the work to be performed. Note that some of the requirements defined to meet the need are distributed in the early stages of the life cycle of SOI 1 to the life cycles of SOI 2 and SOI 3, respectively. This decomposition of the system illustrates the fundamental concept of [[Recursion (glossary)]] as defined in the ISO/IEC 15288 standard; that is, the standard is reapplied for each of the systems of interest.<br />
<br />
Note that the system elements are integrated in SOI 2 and SOI 3, respectively, thus realizing a product or service that is delivered to the life cycle of SOI 1 for integration in realizing the product or service that meets the stated need.<br />
<br />
Some examples that relate to this system need are: an embedded system (SOI 1) composed of a hardware system (SOI 2) and a software system (SOI 3); a sub-assembly composed of a chassis and a motor; and a human resource system composed of a recruitment system and a capability management system. <br />
<br />
In performing the process work in stages, [[Iteration (glossary)]] between stages is most often required; for example, in successive refinement of the definition of the system, or in providing an update (an upgrade or a problem solution) to a realized, and even delivered, product or service.<br />
<br />
The work performed in the processes and stages can be carried out in a [[Concurrent (glossary)]] manner within the life cycle of any of the systems of interest, and concurrently among the multiple life cycles.<br />
<br />
This paradigm provides a fundamental framework for understanding generic systems engineering in Part 3 as well as for the application of systems engineering in the provisioning of the various types of systems described in Part 4.<br />
<br />
==Knowledge Areas in Part 3==<br />
*[[Life Cycle Models]]<br />
*[[System Definition]]<br />
*[[System Realization]]<br />
*[[System Deployment and Use]]<br />
*[[Systems Engineering Management]]<br />
*[[Product and Service Life Management]]<br />
*[[Systems Engineering Standards]]<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Dynamically Changing Systems|<- Previous Article]] | [[Main_Page|Parent Article]] | [[Life Cycle Models|Next Article ->]]</center><br />
==Signatures==<br />
[[Category: Part 3]][[Category:Part]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Applying_the_Systems_Approach&diff=9627Applying the Systems Approach2011-08-09T19:26:29Z<p>Skmackin: </p>
<hr />
<div>For engineered systems, the application of the Systems Approach is Systems Engineering. This section provides a direct mapping of the principles of the Systems Approach to the elements of Systems Engineering, with direct links to the descriptions of those elements. The following subsections are organized according to the Systems Approach principles above, with a discussion of how these principles are linked to the elements of Systems Engineering.<br />
The Systems Approach is not a linear, one-dimensional process with a predictable solution at the end, and neither is its offspring, Systems Engineering. This section describes the incremental and evolutionary nature of that process, and how the Systems Engineering process applies the Systems Approach principles throughout its breadth and scope.<br />
==Exploring a Problem or Opportunity==<br />
The problem or opportunity to be addressed can best be determined by conducting a Mission Analysis and by determining the requirements from the stakeholders. See the section in Part 3 called [[Mission Analysis and Stakeholders Requirements]] to determine how this is done. Hence, the Systems Approach principle of Exploring a Problem or Opportunity engenders the Systems Engineering concepts of Mission Analysis and Stakeholder Requirements.<br />
<br />
==System Analysis Approach==<br />
===Identification of the Elements of the System===<br />
Within the Part 3 section called [[Architectural Design]] the functional architecture is defined and the physical elements are allocated to the elements of the functional architecture. <br />
In the early phases, some elements are identified, but they are not yet physically visible; they are defined abstractly, and perhaps only their functions are known at this time. Yet these functions eventually become physical objects. This progression from the abstract to the concrete is discussed by Hitchins (2009, p. 59).<br />
The relationships and interactions between these elements are also defined, as are the architectural relationships. When the abstract elements are transformed into defined elements, these two sets of elements are said to be coupled. According to Blanchard and Fabrycky (2006, Chapters 3-5), when translated into Systems Engineering terminology, these stages of system evolution are known as Conceptual Design, Preliminary Design, and Detail Design.<br />
===Grouping of Elements===<br />
Also within the [[Architectural Design]] section groups of elements are defined that can perform a given function. These groups of elements lead to the concept of a subsystem. According to this section, subsystems are identified during the synthesis process. A set of criteria are defined for grouping elements. Hence, the Systems Approach concept of grouping gives rise to the Systems Engineering concept of subsystems.<br />
===Identification of the Boundary of a System===<br />
The [[Architectural Design]] section also describes the concept of the system of interest (SOI). The boundary of the SOI is the interface between the SOI, its environment, and other systems. According to Checkland (Checkland, 1999, p. 312) a boundary is "the set of elements that define the limit of the System of Interest (SoI)." Hence, the boundary of the SOI in Systems Engineering fulfills the Systems Approach principle of a boundary. <br />
===Identification of the Interactions Among the Elements===<br />
The [[Architectural Design]] section also has a subsection called "Concept of Interface". The Systems Engineering concept of interface is the application of the Systems Approach principle of interactions among elements. The [[Architectural Design]] section distinguishes between physical interfaces and functional interfaces. This section emphasizes that both types of interfaces should be considered in the system definition.<br />
<br />
==Synthesis of a System==<br />
The [[Architectural Design]] section has a subsection called "Designing Physical Candidate Architecture". This subsection describes the process of synthesis and provides a set of criteria for synthesis. According to this subsection, “synthesis is done by grouping the leaf system elements to constitute a group of (sub) systems.”<br />
First, in compliance with the principles of the Systems Approach, the Systems Engineering process of synthesis does not begin with a defined system; there is only a problem or opportunity that has been defined. There may be some existing elements (see Identification of the Elements of a System, above) that will eventually become part of a system, but that fact does not change the approach. The objective is to progress from the problem to a defined system; but how does that happen? When a set of assets becomes a system, the latter is called a respondent system, according to Lawson (Lawson, 2010, Chapter 1). The set of assets and the respondent system are said to be coupled. <br />
Complexity makes the process even more non-linear. As emergent properties begin to be observed, changes may need to be made in component definition and perhaps even in the architectural arrangement. Hence, iterative definition becomes the necessary process.<br />
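The iterative definition described above can be sketched as a simple loop. This is an illustrative sketch only: the function names, the acceptance test, and the toy `evaluate`/`revise` callables are assumptions invented for this example, not part of the SEBoK text.

```python
def synthesize(elements, evaluate, revise, max_iterations=10):
    """Iteratively refine a candidate architecture.

    evaluate(candidate) -> (acceptable, observations): judge emergent behavior.
    revise(candidate, observations) -> new candidate: adjust the definition.
    """
    candidate = list(elements)  # initial, naive arrangement of the elements
    for _ in range(max_iterations):
        acceptable, observations = evaluate(candidate)
        if acceptable:
            return candidate    # emergent behavior judged acceptable
        candidate = revise(candidate, observations)
    return candidate            # best effort within the iteration budget

# Toy stand-ins: "acceptable" here just means alphabetically ordered.
evaluate = lambda c: (c == sorted(c), "elements out of order")
revise = lambda c, obs: sorted(c)
result = synthesize(["sensor", "controller", "actuator"], evaluate, revise)
# result -> ['actuator', 'controller', 'sensor'] after one revision
```

The point of the sketch is the control flow, not the toy predicate: definition, observation of emergent behavior, and revision repeat until the whole behaves acceptably.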
<br />
==Proving a System==<br />
The [[System Realization]] section notes that both Verification and Validation are components of the System Realization process. Implementation and Integration are the other two. These two processes are the application of the Systems Approach principle of Proving the System.<br />
===Verification===<br />
According to the [[System Realization]] section of Part 3, the Systems Engineering verification process uses the results of the system design including requirements to determine whether the system is designed in the way it was intended to be designed and it meets its performance requirements. This process is the Systems Engineering implementation of the Systems Approach principle of verification.<br />
===Validation=== <br />
The [[Systems Approach]] section of Part 3 also provides detail as to how the Systems Engineering process of validation meets the Systems Approach principle of validation. <br />
Finally, with good judgment and patience a system will emerge. This system must be proved, as previously discussed. If all has been successful, the system will solve the problem or exploit the opportunity that was identified in the beginning.<br />
<br />
==Owning and Making Use of a System==<br />
<br />
The [[System Deployment and Use]] section of Part 3 also provides detail about the Systems Engineering aspect of deployment and use to apply the Systems Approach principle of the same name. Factors include transition to deployment, maintenance, logistics, and system operation.<br />
<br />
==Linkages to other topics==<br />
<br />
This topic is linked to the Systems Engineering processes of Conceptual Design, Preliminary Design, and Detail Design.<br />
<br />
==References== <br />
Blanchard, B. & Fabrycky, W. J. 2006. Systems Engineering and Analysis, Upper Saddle River, NJ, Prentice Hall.<br />
<br />
Checkland, P. 1999. Systems Thinking, Systems Practice, New York, John Wiley & Sons.<br />
<br />
Hitchins, D. 2009. What are the General Principles Applicable to Systems? Insight. International Council on Systems Engineering.<br />
<br />
Lawson, H. 2010. A Journey Through the Systems Landscape, London, College Publications, Kings College.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
BLANCHARD, B. & FABRYCKY, W. J. 2006. Systems Engineering and Analysis, Upper Saddle River, NJ, Prentice Hall.<br />
----<br />
====Article Discussion====<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Owning and Making Use of a System|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Systems Challenges|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Applying_the_Systems_Approach&diff=9623Applying the Systems Approach2011-08-09T19:21:42Z<p>Skmackin: </p>
<hr />
<div>For engineered systems the application of the Systems Approach is Systems Engineering. This section will provide a direct mapping of the various principles of the Systems Approach to the elements of Systems Engineering and a direct link to the descriptions of those elements. The following subsections are organized according to the Systems Approach principles above with a discussion of how these principles are linked to the elements of Systems Engineering.<br />
The Systems Approach is not a linear, one-dimensional process with a predictable solution at the end, and neither is its offspring, Systems Engineering. This section describes the incremental and evolutionary nature of the process and shows how Systems Engineering applies the Systems Approach principles throughout its breadth and scope.<br />
==Exploring a Problem or Opportunity==<br />
The problem or opportunity to be addressed can best be determined by conducting a Mission Analysis and by determining the requirements from the stakeholders. See the section in Part 3 called [[Mission Analysis and Stakeholders Requirements]] to determine how this is done. Hence, the Systems Approach principle of Exploring a Problem or Opportunity engenders the Systems Engineering concepts of Mission Analysis and Stakeholder Requirements.<br />
<br />
==System Analysis Approach==<br />
===Identification of the Elements of the System===<br />
Within the Part 3 section called [[Architectural Design]] the functional architecture is defined and the physical elements are allocated to the elements of the functional architecture. <br />
In the early phases, some elements are identified, but they are not yet physically visible; they are defined abstractly, and perhaps only their functions are known at this time. Yet these functions eventually become physical objects. This progression from the abstract to the concrete is discussed by Hitchins (2009, p. 59).<br />
The relationships and interactions between these elements are also defined, as are the architectural relationships. When the abstract elements are transformed into defined elements, these two sets of elements are said to be coupled. According to Blanchard and Fabrycky (2006, Chapters 3-5), when translated into Systems Engineering terminology, these stages of system evolution are known as Conceptual Design, Preliminary Design, and Detail Design.<br />
===Grouping of Elements===<br />
Also within the [[Architectural Design]] section groups of elements are defined that can perform a given function. These groups of elements lead to the concept of a subsystem. According to this section, subsystems are identified during the synthesis process. A set of criteria are defined for grouping elements. Hence, the Systems Approach concept of grouping gives rise to the Systems Engineering concept of subsystems.<br />
===Identification of the Boundary of a System===<br />
The [[Architectural Design]] section also describes the concept of the system of interest (SOI). The boundary of the SOI is the interface between the SOI, its environment, and other systems. According to Checkland (Checkland, 1999, p. 312) a boundary is "the set of elements that define the limit of the System of Interest (SoI)." Hence, the boundary of the SOI in Systems Engineering fulfills the Systems Approach principle of a boundary. <br />
===Identification of the Interactions Among the Elements===<br />
The [[Architectural Design]] section also has a subsection called "Concept of Interface". The Systems Engineering concept of interface is the application of the Systems Approach principle of interactions among elements. The [[Architectural Design]] section distinguishes between physical interfaces and functional interfaces. This section emphasizes that both types of interfaces should be considered in the system definition.<br />
<br />
==Synthesis of a System==<br />
The [[Architectural Design]] section has a subsection called "Designing Physical Candidate Architecture". This subsection describes the process of synthesis and provides a set of criteria for synthesis. According to this subsection, “synthesis is done by grouping the leaf system elements to constitute a group of (sub) systems.”<br />
First, in compliance with the principles of the Systems Approach, the Systems Engineering process of synthesis does not begin with a defined system; there is only a problem or opportunity that has been defined. There may be some existing elements (see Identification of the Elements of a System, above) that will eventually become part of a system, but that fact does not change the approach. The objective is to progress from the problem to a defined system; but how does that happen? When a set of assets becomes a system, the latter is called a respondent system, according to Lawson (Lawson, 2010, Chapter 1). The set of assets and the respondent system are said to be coupled. <br />
Complexity makes the process even more non-linear. As emergent properties begin to be observed, changes may need to be made in component definition and perhaps even in the architectural arrangement. Hence, iterative definition becomes the necessary process.<br />
<br />
==Proving a System==<br />
The [[System Realization]] section notes that both Verification and Validation are components of the System Realization process. Implementation and Integration are the other two. These two processes are the application of the Systems Approach principle of Proving the System.<br />
===Verification===<br />
According to the [[System Realization]] section of Part 3, the Systems Engineering verification process uses the results of the system design including requirements to determine whether the system is designed in the way it was intended to be designed and it meets its performance requirements. This process is the Systems Engineering implementation of the Systems Approach principle of verification.<br />
===Validation=== <br />
The [[Systems Approach]] section of Part 3 also provides detail as to how the Systems Engineering process of validation meets the Systems Approach principle of validation. <br />
Finally, with good judgment and patience a system will emerge. This system must be proved, as previously discussed. If all has been successful, the system will solve the problem or exploit the opportunity that was identified in the beginning.<br />
<br />
==Owning and Making Use of a System==<br />
<br />
The [[System Deployment and Use]] section of Part 3 also provides detail about the Systems Engineering aspect of deployment and use to apply the Systems Approach principle of the same name. Factors include transition to deployment, maintenance, logistics, and system operation.<br />
<br />
==Linkages to other topics==<br />
<br />
This topic is linked to the Systems Engineering processes of Conceptual Design, Preliminary Design, and Detail Design.<br />
<br />
==References== <br />
Blanchard, B. & Fabrycky, W. J. 2006. Systems Engineering and Analysis, Upper Saddle River, NJ, Prentice Hall.<br />
<br />
Checkland, P. 1999. Systems Thinking, Systems Practice, New York, John Wiley & Sons.<br />
<br />
Hitchins, D. 2009. What are the General Principles Applicable to Systems? Insight. International Council on Systems Engineering.<br />
<br />
Lawson, H. 2010. A Journey Through the Systems Landscape, London, College Publications, Kings College.<br />
<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the '''reference title only''' ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
BLANCHARD, B. & FABRYCKY, W. J. 2006. Systems Engineering and Analysis, Upper Saddle River, NJ, Prentice Hall.<br />
----<br />
====Article Discussion====<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Owning and Making Use of a System|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Systems Challenges|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Deploying,_Using,_and_Sustaining_Systems_to_Solve_Problems&diff=9622Deploying, Using, and Sustaining Systems to Solve Problems2011-08-09T19:21:05Z<p>Skmackin: </p>
<hr />
<div>It is logical to assume that, once a system has been defined using the Systems Approach, Owning and Making Use of a System would follow as a topic. However, this topic is not found in the literature. The Systems Engineering topic of Owning and Operating a System elaborates on it. Hence, it is left for future authors to develop this topic within the framework of the Systems Approach. <br />
<br />
<br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
All primary references should be listed in alphabetical order. Remember to identify primary references by creating an internal link using the ‘’’reference title only’’’ ([[title]]). Please do not include version numbers in the links.<br />
<br />
===Additional References===<br />
All additional references should be listed in alphabetical order.<br />
----<br />
====Article Discussion====<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Proving a System|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Applying the Systems Approach|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Implementing_and_Proving_a_Solution&diff=9621Implementing and Proving a Solution2011-08-09T19:20:30Z<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
The Systems Approach also requires that the system be proved. In Systems Engineering this is called verification and validation. All of the system principles come into play in proving the system. <br />
<br />
==Proving the System Overview==<br />
<br />
This topic covers both the sub-topics of verification and validation. <br />
<br />
===Verification===<br />
<br />
Verification is the determination that each element of the system meets the requirements of a documented specification (see principle of elements). Verification is performed at each level of the system hierarchy (see principle of grouping and System Analysis).<br />
<br />
===Validation===<br />
<br />
Validation is the determination that the entire system meets the needs of the stakeholders. Validation only occurs at the top level of the system hierarchy. <br />
<br />
In a Systems Engineering context, Wasson (2006, pp. 691-709) provides a comprehensive guide to the methods of both system verification and system validation. <br />
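As a hedged illustration of the distinction, the sketch below models a two-level system hierarchy. The element names, specifications, and stakeholder needs are invented for this example and are not from the SEBoK text or Wasson: verification checks each element against its own documented specification at every level, while validation checks only the whole system against stakeholder needs.

```python
def verify(node):
    """Verification: each element meets its documented specification,
    applied recursively at every level of the system hierarchy."""
    ok = node["spec"](node["properties"])
    return ok and all(verify(child) for child in node.get("children", []))

def validate(system, stakeholder_needs):
    """Validation: the entire system meets stakeholder needs,
    applied only at the top level of the hierarchy."""
    return all(need(system) for need in stakeholder_needs)

# Invented example: a bicycle with one subsystem element.
wheel = {"properties": {"diameter_mm": 660},
         "spec": lambda p: p["diameter_mm"] > 0}
bicycle = {"properties": {"mass_kg": 12},
           "spec": lambda p: p["mass_kg"] < 15,
           "children": [wheel]}
stakeholder_needs = [lambda s: s["properties"]["mass_kg"] < 15]

system_verified = verify(bicycle)                        # True
system_validated = validate(bicycle, stakeholder_needs)  # True
```

Note the asymmetry in the sketch: `verify` recurses over the hierarchy, while `validate` never looks inside the system at all.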
<br />
==Linkages to other topics==<br />
<br />
The Systems Approach topic is linked to the Systems Engineering topics of Verification and Validation. <br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
JACKSON, S., HITCHINS, D. & EISNER, H. 2010. What is the Systems Approach? INCOSE Insight. International Council on Systems Engineering.<br />
<br />
===Additional References===<br />
WASSON, C. S. 2006. System Analysis, Design, and Development, Hoboken, NJ, John Wiley & Sons.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Synthesis of a System|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Owning and Making Use of a System|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Synthesizing_Possible_Solutions&diff=9620Synthesizing Possible Solutions2011-08-09T19:19:46Z<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
According to INCOSE (1998, p. 236), synthesis is “the combination of parts, elements, or diverse conceptions into a coherent whole; to put together.” This section describes the synthesis process at the Systems Approach level and shows how it links to the Systems Engineering process of the same name. <br />
<br />
==Synthesis Overview==<br />
<br />
Essential to synthesis is the Systems Engineering concept of holism, discussed by Hitchins (2009), which states that a system must be considered as a whole and not simply as a collection of its elements. In Systems Engineering, holism requires that the properties of the whole be determined by considering the behavior of the whole and not simply as the accumulation of the properties of the elements. The latter process is known as reductionism and is the opposite of holism. Hitchins (2009) puts it this way: “The properties, capabilities, and behavior of a system derive from its parts, from interactions between those parts, and from interactions with other systems.” When systems are synthesized holistically, emergent properties will be identified.<br />
<br />
In complex systems the individual elements will adapt to the behavior of the other elements and to the whole. The entire collection of elements will behave as an organic whole. For complex systems the entire Systems Engineering synthesis effort itself must be dynamic to treat these systems.<br />
<br />
When the system is considered as a whole, properties called emergent properties often appear (see Concept of Emergence above). These properties cannot be predicted from the elements alone and must be evaluated within the Systems Engineering effort to determine the complete set of performance levels of the system. According to Jackson et al. (2010), these properties can be designed into the system, but an iterative Systems Engineering approach is required to do so.<br />
<br />
The Systems Approach aspect of synthesis leads to the Systems Engineering process of the same name. Essential to the synthesis of the system is the principle of holism, which expresses the idea that a system must be considered as a whole; the principles of interaction and cohesion are also essential to synthesis. Preferring to use the terms “design and development,” Wasson (2006, pp. 390-690) describes synthesis from a Systems Engineering point of view. <br />
<br />
In a Systems Engineering context, White (2009, pp. 512-515) provides a comprehensive discussion of methods for achieving design synthesis. <br />
<br />
==Linkages to other topics==<br />
<br />
The Systems Approach principle of Synthesis is directly linked to the Systems Engineering principle of Synthesis. <br />
<br />
==References== <br />
Please make sure all references are listed alphabetically and are formatted according to the Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Citations===<br />
List all references cited in the article. Note: SEBoK 0.5 uses Chicago Manual of Style (15th ed). See the [http://www.bkcase.org/fileadmin/bkcase/files/Wiki_Files__for_linking_/BKCASE_Reference_Guidance.pdf BKCASE Reference Guidance] for additional information.<br />
<br />
===Primary References===<br />
HITCHINS, D. 2009. What are the General Principles Applicable to Systems? Insight. International Council on Systems Engineering.<br />
<br />
JACKSON, S., HITCHINS, D. & EISNER, H. 2010. What is the Systems Approach? INCOSE Insight. International Council on Systems Engineering.<br />
<br />
===Additional References===<br />
INCOSE 1998. INCOSE SE Terms Glossary. In: INCOSE CONCEPTS AND TERMS WG (ed.). Seattle, WA: International Council on Systems Engineering.<br />
<br />
WASSON, C. S. 2006. System Analysis, Design, and Development, Hoboken, NJ, John Wiley & Sons.<br />
<br />
WHITE, Jr., R. PRESTON, 2009. Systems Design. In: SAGE, A. P. (ed.) Handbook of Systems Engineering and Management. Second ed. Hoboken, NJ: John Wiley & Sons.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Systems Analysis Approach|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Proving a System|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Analysis_and_Selection_between_Alternative_Solutions&diff=9619Analysis and Selection between Alternative Solutions2011-08-09T19:18:51Z<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
<br />
According to the Oxford English Dictionary (OED, 1973), analysis is “the resolution of anything complex into its simple elements.” This section discusses system analysis from a Systems Approach point of view.<br />
<br />
==Analysis of Systems Overview==<br />
<br />
The six elements of System Analysis to be discussed below are: Identification of the Elements of a System, Division of Elements into Smaller Elements, Grouping of Elements, Identification of the Boundary of a System, Identification of the Function of Each Element, and Identification of the Interactions Among the Elements. <br />
<br />
===Identification of the Elements of a System===<br />
<br />
The Systems Approach calls for the identification of the elements of a system. Jackson et al. (2010, pp. 41-42) identify the kinds of elements of which a system may consist. Integral to this aspect of the Systems Approach is the principle of elements discussed above. Typical elements treated within Systems Engineering may be hardware, software, humans, processes, conceptual ideas, or any combination of these. Systems Engineering defines the properties of these elements, verifies their capability, and validates the capability of the entire system. According to Page (2009), in complex systems the individual elements of the system are characterized by their adaptability.<br />
<br />
In a Systems Engineering context, according to Blanchard and Fabrycky (2006, p. 7), elements may be physical, conceptual, or processes. Physical elements may be hardware, software, or humans. Conceptual elements may be “ideas, plans, concepts, or hypotheses.” Processes may be mental, mental-motor (writing, drawing, etc.), mechanical, or electronic.<br />
<br />
In addition to the operational elements of a system upon which focus is placed (i.e., a System of Interest, or SOI), ISO/IEC 15288 (2008) also calls for the identification of the “enabling” systems, utilized at various stages in the life cycle and including, for example, maintenance and other systems that support the operational elements in solving the problem or achieving the opportunity.<br />
<br />
===Division of Elements into Smaller Elements===<br />
<br />
The next aspect of the Systems Approach is that elements can be divided into smaller elements. This division allows the elements to be grouped, as discussed above under the principle of grouping and below under the Systems Approach principle of grouping of elements. <br />
<br />
The division of elements into smaller elements leads to the Systems Engineering concept of physical architecture, as described by Levin (2009, pp. 493-495). Each layer of division leads to another layer of the hierarchical view of a system. As Levin points out, there are many ways to depict the physical architecture, including wiring diagrams, block diagrams, etc. All of these views depend on arranging the elements and dividing them into smaller elements. According to the principle of recursion, these decomposed elements are either terminal elements or are themselves systems.<br />
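The recursion principle described above can be illustrated with a small tree walk. This is a hypothetical sketch: the element names are invented, and a dictionary tree merely stands in for whatever architecture representation (wiring diagram, block diagram, etc.) a project actually uses.

```python
def leaf_elements(element):
    """Collect terminal elements by walking the hierarchy depth-first.
    Per the principle of recursion, each node is either a terminal
    element or is itself divided into smaller elements."""
    children = element.get("children", [])
    if not children:                 # terminal element: no further division
        return [element["name"]]
    leaves = []                      # a (sub)system: recurse into its parts
    for child in children:
        leaves.extend(leaf_elements(child))
    return leaves

# Invented example hierarchy, two layers of division deep.
aircraft = {
    "name": "aircraft",
    "children": [
        {"name": "propulsion", "children": [
            {"name": "engine"},
            {"name": "fuel system"},
        ]},
        {"name": "avionics"},        # not yet divided at this design stage
    ],
}
terminals = leaf_elements(aircraft)  # -> ['engine', 'fuel system', 'avionics']
```

Each level of nesting in the dictionary corresponds to one layer of the hierarchical view of the system that the division of elements produces.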
<br />
===Grouping of Elements===<br />
<br />
The next aspect of the Systems Approach is that elements can be grouped. This grouping leads to the principle of grouping discussed above, which in turn leads to the identification of the subsystems essential to the definition of a system. Systems Engineering determines how a system may be partitioned and how each sub-system fits and functions within the whole system. The grouping of all the elements of a system is called the system of interest (SOI), also called the relevant system by Checkland (1999, p. 166). In Systems Engineering, the SOI is the focus of the Systems Engineering effort. According to Hitchins (2009, p. 61), some of the properties of an SOI are as follows: the SOI is open and dynamic; the SOI interacts with other systems; the SOI contains sub-systems; and the SOI is brought together through the concept of synthesis, as described below.<br />
<br />
===Identification of the Boundary of a System===<br />
<br />
The Systems Approach principle of the identification of the boundary of a system is directly linked to the principle of boundaries discussed above. The boundary of a system is essential to Systems Engineering to determine the interaction of the system with its environment and with other systems and to determine the extent of the system of interest (SOI).<br />
<br />
Buede (2009, p. 1102) provides a comprehensive discussion of the importance and methods of defining the boundary of a system in a Systems Engineering context.<br />
<br />
===Identification of the Function of Each Element===<br />
<br />
The identification of the function of an element is rooted in the concept of system functions, discussed above. The function of a system or of its elements is essential to Systems Engineering and to the determination of the purpose of the system or of its elements. Buede (2009, pp. 1091-1126) provides a comprehensive description of functional analysis in a Systems Engineering context. <br />
<br />
===Identification of the Interactions among the Elements===<br />
<br />
The next element of the Systems Approach is the identification of the interactions among the elements. These interactions lead to the Systems Engineering process of interface analysis. Integral to this aspect is the principle of interactions discussed above. These interactions occur both with other system elements and also with external elements and the environment.<br />
In a Systems Engineering context, interfaces have both technical and managerial importance. Browning (2009, pp. 1418-1419) provides a list of desirable technical and managerial interface characteristics. <br />
<br />
==Linkages to other topics==<br />
<br />
The identification of the boundary of a system is essential to the Systems Engineering concept of a System of Interest (SOI).<br />
<br />
The identification of the Functions of an element is linked to Functional Analysis within Systems Engineering.<br />
<br />
The identification of the interactions between elements is linked to Interface Analysis within Systems Engineering.<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
ISO/IEC 2008. Systems and software engineering -- System life cycle processes. Geneva, Switzerland: International Organisation for Standardisation / International Electrotechnical Commission.<br />
<br />
JACKSON, S., HITCHINS, D. & EISNER, H. 2010. What is the Systems Approach? INCOSE Insight. International Council on Systems Engineering.<br />
<br />
OED 1973. In: ONIONS, C. T. (ed.) The Shorter Oxford English Dictionary on Historical Principles. Third ed. Oxford: Oxford University Press.<br />
<br />
===Additional References===<br />
BLANCHARD, B. & FABRYCKY, W. J. 2006. Systems Engineering and Analysis, Upper Saddle River, NJ, Prentice Hall.<br />
<br />
BROWNING, T. R. 2009. Using the Design Structure Matrix to Design Program Organizations. In: SAGE, A. P. & ROUSE, W. B. (eds.) Handbook of Systems Engineering and Management. Second ed. Hoboken, NJ: John Wiley & Sons.<br />
<br />
BUEDE, D. M. 2009. Functional Analysis. In: SAGE, A. P. & ROUSE, W. B. (eds.) Handbook of Systems Engineering and Management. Second ed. Hoboken, NJ: John Wiley & Sons.<br />
<br />
LEVIN, A. H. 2009. System Architectures. In: SAGE, A. P. & ROUSE, W. B. (eds.) Handbook of Systems Engineering and Management. Second ed. Hoboken, NJ: John Wiley & Sons.<br />
<br />
PAGE, S. E. 2009. Understanding Complexity. The Great Courses. Chantilly, VA, USA: The Teaching Company.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Exploring a Problem or Opportunity|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Synthesis of a System|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Identifying_and_Understanding_Problems_and_Opportunities&diff=9618Identifying and Understanding Problems and Opportunities2011-08-09T19:18:00Z<p>Skmackin: </p>
<hr />
<div>==Introduction==<br />
Jenkins, according to (Checkland, 1999, p. 140), states that the first step in the Systems Approach is “the recognition and formulation of the problem,” and, of course, of the opportunity. This section summarizes problem and opportunity exploration as described by (Edson, 2008) and others.<br />
==Topic Overview==<br />
According to (Blanchard and Fabrycky, 2006, pp. 55-56), defining a problem is sometimes the most important and difficult step. Defining a problem means asking the questions: What needs to be improved? What is the purpose of the system you want to define? Sometimes a problem is known as a “need.” In short, a system cannot be defined unless you can define what it is supposed to accomplish. <br />
According to (Edson, 2008, pp. 26-29), some of the questions that need to be asked are as follows:<br />
First, how difficult or well understood is the problem? Problems can be “tame,” “regular,” or “wicked.” The answer to this question will help define the tractability of the problem. For tame problems, the solution may be well defined and obvious. <br />
Regular problems are those that are encountered on a regular basis. Their solutions may not be obvious, so serious attention should be given to all aspects of them.<br />
Wicked problems may not be solvable using obvious approaches, so detailed attention should be given to them. <br />
The next factor that needs to be considered is who or what is impacted. There may be elements of the situation that are causing the problem, other elements that are impacted by the problem, and other elements that are just in the loop. Beyond these factors, what is the environment, and what are the external factors that affect the problem? <br />
Finally, what are the viewpoints on the problem? Does everyone think it is a problem? Perhaps there are conflicting viewpoints. All these viewpoints need to be defined. Persons who are affected by the system, who stand to benefit from it, or who can be harmed by it are called stakeholders. (Wasson, 2006, pp. 42-45) provides a comprehensive list of stakeholder types. <br />
An important factor in defining the problem or opportunity is the scenario in which the problem or opportunity will exist. (Armstrong, 2009, p. 1030) suggests two scenarios: the first is the descriptive scenario, the situation as it exists now; the second is the normative scenario, the situation as it may be at some time in the future. Armstrong suggests that this may be the most difficult part of the problem or opportunity (Armstrong uses the term issue) to define.<br />
<br />
==Linkages to other topics==<br />
<br />
Capturing of Stakeholder Needs in Systems Engineering.<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
ARMSTRONG, Jr., JAMES E., 2009. Issue Formulation. In: SAGE, A. P. & ROUSE, W. B. (eds.) Handbook of Systems Engineering and Management. Second ed. Hoboken, NJ: John Wiley & Sons.<br />
<br />
BLANCHARD, B. & FABRYCKY, W. J. 2006. Systems Engineering and Analysis, Upper Saddle River, NJ, Prentice Hall.<br />
<br />
CHECKLAND, P. 1999. Systems Thinking, Systems Practice, New York, John Wiley & Sons.<br />
<br />
EDSON, R. 2008. Systems Thinking. Applied. A Primer. In: ASYST INSTITUTE (ed.). Arlington, VA: Analytic Services.<br />
<br />
===Additional References===<br />
WASSON, C. S. 2006. System Analysis, Design, and Development, Hoboken, NJ, John Wiley & Sons.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Overview of the Systems Approach|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Systems Analysis Approach|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Overview_of_the_Systems_Approach&diff=9617Overview of the Systems Approach2011-08-09T19:17:27Z<p>Skmackin: </p>
<hr />
<div>The Systems Approach must be viewed in the context of [[Systems Thinking (glossary)]] as discussed by (Checkland, 1999) and by (Edson, 2008). According to (Checkland, 1999, p. 318), Systems Thinking is “an epistemology which, when applied to human activity is based on four basic ideas: [[emergence (glossary)]], [[hierarchy (glossary)]], communication, and control as characteristics of systems.”<br />
(Senge, 1990) provides an expanded definition as follows: “Systems thinking is a discipline for seeing wholes. It is a framework for seeing interrelationships rather than things, for seeing patterns of change rather than static "snapshots." It is a set of general principles -- distilled over the course of the twentieth century, spanning fields as diverse as the physical and social sciences, engineering, and management.... During the last thirty years, these tools have been applied to understand a wide range of corporate, urban, regional, economic, political, ecological, and even psychological systems. And systems thinking is a sensibility -- for the subtle interconnectedness that gives living systems their unique character”.<br />
<br />
Systems Thinking has two parts. The first part is a set of concepts to assist in learning how to think in terms of systems. These principles were previously enumerated in the System Concepts topic. Edson (2008, p. 6) captures the primary principles by listing three conditions for a set of elements to satisfy to be a system: “(1) The [[behavior (glossary)]] of each element has an effect on the behavior of the whole. (2) The behavior of the elements and their effects on the whole are interdependent. (3) [[Element (glossary)|Elements (glossary)]] of a system are so connected that independent subgroups of them cannot be formed.” <br />
<br />
The second part of Systems Thinking is the how-to part. It is an abstract set of principles to apply Systems Thinking. This abstract set of principles is called the Systems Approach, the subject of this section. The Systems Approach can relate thinking to exploring the [[problem (glossary)]] or [[opportunity (glossary)]] and proceeding through the steps of [[System Analysis (glossary)|system analysis (glossary)]], [[synthesis (glossary)]], [[proving (glossary)]] the system, and incremental solutions to solve the problem or achieve the opportunity. Models suggested by (Checkland, 1999), (Boardman and Sauser, 2008), (Senge, 1990), and others are employed in this process. When this process is executed in the real world of human-made systems, the discipline of Systems Engineering emerges. <br />
<br />
The relation between systems, Systems Thinking, the Systems Approach, and Systems Engineering can be found in Lawson (2010), where key aspects of the Systems Approach are identified as the mindset capabilities to “think” and “act” in terms of systems. The development of these capabilities is promoted by several paradigms, including the following, called the [[System Coupling Diagram (glossary)]]:<br />
<br />
[[File:052311_SJ_System_Coupling_Diagram.png|System Coupling Diagram]]<br />
<br />
*Situation System – The problem or opportunity situation, either unplanned or planned. The situation may be the work of nature, man-made, a combination of the two, or a postulated situation that is used as a basis for deeper understanding and training (for example, business games or military exercises).<br />
<br />
*Respondent System – The system created to respond to the situation, where the parallel bars indicate that this system interacts with the situation and transforms it into a new situation. A Respondent System, based upon the situation being treated, can have several names such as Project, Program, Mission, Task Force, or, in a scientific context, Experiment. Note that one of the system elements of this system is a control element that directs the operation of the respondent system in its interaction with the situation. This element is based upon an instantiation of a Control System asset, for example a Command and Control System, or a control process of some form. <br />
<br />
*System Assets – The sustained assets of one or more enterprises that are utilized in responding to situations. System assets must be adequately life-cycle managed so that, when instantiated in a Respondent System, they will perform their function. These are the systems that are the primary objects for Systems Engineers. Examples of assets include concrete systems, such as value-added products or services, facilities, instruments, and tools, and abstract systems, such as theories, knowledge, processes, and methods. <br />
<br />
This generic model portrays the essence of a systems approach and is applicable to Product Systems Engineering, Service Systems Engineering, and Enterprise Systems Engineering. Further, it forms the basis for Systems of Systems, where System Assets from multiple actors are collected in a Respondent System that responds to a situation.<br />
<br />
Since the premise is that the Systems Approach is a mind-set prerequisite to Systems Engineering, projects and programs executed with this mind-set are more likely to solve the problem or achieve the opportunity identified at the beginning.<br />
<br />
The Systems Approach is often invoked in applications beyond product systems. For example, the Systems Approach may be used in the educational domain. According to (Biggs, 1993), the system of interest includes “the student, the classroom, the institution, and the community.”<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
<br />
===Additional References===<br />
----<br />
====Article Discussion====<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Systems Approach|<- Previous Article]] | [[Systems Approach|Parent Article]] | [[Exploring a Problem or Opportunity|Next Article ->]]</center><br />
==Signatures==<br />
<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Systems_Approach_Applied_to_Engineered_Systems&diff=9616Systems Approach Applied to Engineered Systems2011-08-09T19:15:38Z<p>Skmackin: </p>
<hr />
<div>According to (Jackson et al., 2010, pp. 41-43), the Systems Approach is a set of top-level principles and is the foundation of Systems Engineering. The following sub-paragraphs outline the Systems Approach and show how the systems concepts are reflected in each element of the approach. <br />
<br />
===Topics===<br />
<br />
The topics contained within this knowledge area include:<br />
*[[Overview of the Systems Approach]]<br />
*[[Exploring a Problem or Opportunity]]<br />
*[[Systems Analysis Approach]]<br />
*[[Synthesis of a System]]<br />
*[[Proving a System]]<br />
*[[Owning and Making Use of a System]]<br />
*[[Applying the Systems Approach]]<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
Checkland, P. 1999. [[Systems Thinking, Systems Practice]], New York, John Wiley & Sons.<br />
<br />
Hitchins, D. 2009. [[What are the General Principles Applicable to Systems?]] Insight. International Council on Systems Engineering.<br />
<br />
Jackson, S., Hitchins, D. & Eisner, H. 2010. [[What is the Systems Approach?]] INCOSE Insight. International Council on Systems Engineering.<br />
<br />
Lawson, H. 2010. [[A Journey Through the Systems Landscape]]. London, UK: College Publications, Kings College.<br />
<br />
Senge, P. M. 1990. [[The Fifth Discipline]]: The Art and Practice of the Learning Organization, New York, Doubleday / Currency<br />
<br />
===Additional References=== <br />
BIGGS, J. B. 1993. From Theory to Practice: A Cognitive Systems Approach [Online]. Hong Kong: Journal of Higher Education & Development. Available: http://www.informaworld.com/smpp/content~db=all~content=a758503083 [Accessed Routledge 2011].<br />
<br />
BOARDMAN, J. & SAUSER, B. 2008. Systems Thinking - Coping with 21st Century Problems, Boca Raton, FL, CRC Press.<br />
<br />
EDSON, R. 2008. Systems Thinking. Applied. A Primer. In: ASYST INSTITUTE (ed.). Arlington, VA: Analytic Services.<br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Modeling Standards|<- Previous Article]] | [[Systems|Parent Article]] | [[Overview of the Systems Approach|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Knowledge Area]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Modeling_Standards&diff=9615Modeling Standards2011-08-09T19:13:05Z<p>Skmackin: </p>
<hr />
<div>The evolution of modeling standards is an enabling factor for broad adoption of model-based systems engineering (MBSE).<br />
==Motivation for Modeling Standards==<br />
Different types of models are needed to support the analysis, specification, design, and verification of systems. Each type of model can be used to represent different aspects of a system, such as representing the set of system components and their interconnections and interfaces, or representing a system to support performance analysis or reliability analysis. Modeling standards play an important role in defining agreed upon system modeling concepts that can be represented for a particular domain of interest. They also enable integration of different types of models across domains of interest. Modeling standards are extremely important to support model-based systems engineering, which must integrate across disciplines, products, and technologies.<br />
<br />
Standards for system modeling languages can also enable cross discipline, cross project, and cross organization communications. This offers the potential to reduce training requirements for practitioners who need to learn about a particular system, and enables the reuse of system artifacts. Standard modeling languages also provide a common foundation for advancing the practice of systems engineering.<br />
<br />
==Types of Modeling Standards==<br />
There are many different standards that apply to systems modeling. These include standards for modeling languages, for data exchange between models, and for transformation of one model to another to achieve semantic interoperability, as well as more general modeling standards. The following is a partial list of representative modeling standards, classified by these types. <br />
<br />
===Modeling Languages for Systems===<br />
'''Descriptive models'''<br />
These standards apply to general descriptive modeling of systems<br />
*[[Functional Flow Block Diagram (FFBD)]] <br />
*[[Integration Definition for Functional Modeling (IDEF0)]]<br />
*Object Process Diagrams (OPD) and Object Process Language (OPL) <br />
*[[Systems Modeling Language (SysML)]]<br />
*[[Unified Profile for DoDAF and MODAF (UPDM)]] <br />
*[[Web ontology language (OWL)]]<br />
<br />
'''Analytical models and simulations'''<br />
These standards apply to both analytical models and simulations<br />
*[[Distributed Interactive Simulation (DIS)]] <br />
*[[High Level Architecture]]<br />
*[[Modelica]]<br />
*[[Semantics of a Foundational Subset for Executable UML Models (FUML)]]<br />
<br />
===Data Exchange Standards===<br />
*Application Protocol for Systems Engineering Data Exchange (ISO 10303-233) (AP-233)<br />
*Requirements Interchange Format (ReqIF)<br />
*XML Metadata Interchange (XMI)<br />
*Resource Description Framework (RDF)<br />
<br />
===Model Transformations===<br />
These standards apply to transforming one model to another to support semantic interoperability.<br />
*Query View Transformations (QVT)<br />
*SysML-Modelica Transformation<br />
*SysML-OPM Transformation<br />
<br />
===General Modeling Standards===<br />
These standards provide general frameworks for modeling<br />
*Model driven architecture (MDA®)<br />
*IEEE 1471-2000 - Recommended Practice for Architectural Description of Software-Intensive Systems<br />
<br />
===Other Domain-specific Modeling Standards===<br />
<br />
'''Software design models'''<br />
<br />
These standards apply to modeling application software and/or embedded software design<br />
*Architecture Analysis and Design Language (AADL)<br />
*Modeling and Analysis for Real-Time and Embedded Systems (MARTE)<br />
*Unified Modeling Language (UML)<br />
<br />
'''Hardware design models'''<br />
<br />
These standards apply to modeling hardware design<br />
*VHSIC Hardware Description Language (VHDL)<br />
<br />
'''Business process models'''<br />
<br />
These standards apply to modeling business processes<br />
*Business Process Modeling Notation (BPMN)<br />
<br />
==References== <br />
<br />
===Citations===<br />
<br />
===Primary References===<br />
ANSI/IEEE. 2000. Recommended Practice for Architectural Description for Software-Intensive Systems. New York, NY: American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE), ANSI/[[IEEE 1471]]-2000.<br />
<br />
Application Protocol for Systems Engineering Data Exchange (ISO 10303-233) (AP-233). Available at http://www.exff.org/ap233<br />
<br />
Architecture Analysis & Design Language (AADL). Available at http://standards.sae.org/as5506a/ <br />
<br />
Business Process Modeling Notation (BPMN). Available at http://www.bpmn.org/<br />
<br />
Distributed Interactive Simulation (DIS). Available at http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation <br />
<br />
Functional flow block diagram (FFBD). Available at http://en.wikipedia.org/wiki/Functional_flow_block_diagram <br />
<br />
ISO/IEC 42010:2007. Systems and Software Engineering — Recommended Practice for Architectural Description of Software-intensive Systems, International Organization for Standardization/International Electrotechnical Commission, September 12, 2007. ISO/IEC 42010:2007. Available at http://en.wikipedia.org/wiki/ISO/IEC_42010 <br />
<br />
Integration Definition for Functional Modeling (IDEF0). Available at http://www.idef.com/IDEF0.htm <br />
<br />
IEEE Standard 1516, IEEE Standard for High Level Architecture, Institute for Electrical and Electronic Engineers. Available at http://www.sisostds.org/ProductsPublications/Standards/IEEEStandards.aspx <br />
<br />
Wikipedia. High Level Architecture (HLA). Available at http://en.wikipedia.org/wiki/High_level_architecture_(simulation) <br />
<br />
Modeling and Analysis for Real-Time and Embedded Systems (MARTE). Available at http://www.omgwiki.org/marte/ <br />
<br />
Model driven architecture (MDA®). Available at http://en.wikipedia.org/wiki/Model-driven_architecture<br />
<br />
Modelica. Available at https://www.modelica.org/<br />
<br />
Object Management Group. Query View Transformations (QVT). Available at http://www.omg.org/spec/QVT/1.1/ <br />
<br />
Object Management Group. Requirements Interchange Format (ReqIF). Available at http://www.omg.org/spec/ReqIF/ <br />
<br />
Resource Description Framework (RDF). Available at http://www.w3.org/RDF/.<br />
<br />
Object Management Group. Semantics of a Foundational Subset for Executable UML Models (FUML). Available at http://www.omg.org/spec/FUML/<br />
<br />
Object Management Group. Systems Modeling Language (SysML). Available at http://www.omgsysml.org/<br />
<br />
Object Management Group. SysML-Modelica Transformation Specification. Available at http://www.omg.org/spec/SyM/<br />
<br />
Object Management Group. Unified Modeling Language™ (UML). Available at http://www.uml.org/#UML2.0<br />
<br />
Object Management Group. Unified Profile for DoDAF and MODAF (UPDM). Available at http://www.omg.org/spec/UPDM/<br />
<br />
Object Management Group. XML Metadata Interchange (XMI). Available at http://en.wikipedia.org/wiki/XML_Metadata_Interchange<br />
<br />
VHSIC hardware description language (VHDL). Available at http://en.wikipedia.org/wiki/VHDL<br />
<br />
Web Ontology Language (OWL). Available at http://www.w3.org/2004/OWL/<br />
<br />
===Additional References===<br />
Dori, D. 2002. Object-Process Methodology: A Holistic Systems Paradigm. Springer.<br />
<br />
Friedenthal, S., A. Moore, and R. Steiner. 2009. A Practical Guide to SysML: The Systems Modeling Language. Morgan Kaufman. Needham, MA, USA: OMG Press.<br />
<br />
Fritzon, P. 2004. Object-oriented modeling and simulation with Modelica 2.1. New York, NY: Wiley Interscience and IEEE Press.<br />
<br />
Grobshtein, Y. and Dori, D. 2011. Generating SysML Views from an OPM Model: Design and Evaluation. Systems Engineering, DOI 10.1002/sys.20181. Available at http://esml.iem.technion.ac.il/site/wp-content/uploads/2011/02/GeneratingSysMLViewsFromAnOPMModel.pdf<br />
<br />
Paredis, C. J. J., and et al. 2010. An overview of the SysML-modelica transformation specification. Paper presented at 20th Annual International Council on Systems Engineering (INCOSE) International Symposium, 12-15 July, 2010, Chicago, IL.<br />
<br />
Weilkiens, T. 2008. Systems Engineering with SysML/UML. Morgan Kaufmann. Needham, MA, USA: OMG Press.<br />
<br />
ISO. Product Data Representation and Exchange (STEP). International Organization for Standardization (ISO) 10303. Available at http://www.tc184sc4.org/SC4_Open/SC4%20Legacy%20Products%20(2001-08)/STEP_(10303)/ <br />
----<br />
====Article Discussion====<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[System Modeling Concepts|<- Previous Article]] | [[Representing Systems with Models|Parent Article]] | [[Systems Approach|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=System_Modeling_Concepts&diff=9614System Modeling Concepts2011-08-09T19:11:38Z<p>Skmackin: </p>
<hr />
<div>System modeling concepts are the foundation concepts that enable system models to represent systems. These include concepts needed to describe systems, and concepts needed to create abstractions of systems, such as view and viewpoint.<br />
==System Concept Model==<br />
A system model represents aspects of a system and its environment. A [[System Concept Model (glossary)|system concept model (glossary)]] captures the key concepts for representing systems, including [[Requirement (glossary)|requirements (glossary)]], [[Behavior (glossary)|behavior (glossary)]], [[Structure (glossary)|structure (glossary)]], and [[System Property (glossary)|properties (glossary)]]. The concept model identifies the concepts and their relationships to other concepts. In addition, the concept model is accompanied by a set of definitions for each concept. Such models are typically defined using an Entity Relationship diagram or a UML class diagram.<br />
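Purely for illustration, a small fragment of such a concept model can be expressed in code rather than in a diagram; the classes and relationships below are assumptions chosen for this sketch, not the SEBoK or SysML concept model itself.

```python
# Hypothetical fragment of a system concept model: components (structure)
# perform functions (behavior) that satisfy requirements, and carry
# properties. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str

@dataclass
class Function:                                     # behavior concept
    name: str
    satisfies: list = field(default_factory=list)   # relationship to Requirement

@dataclass
class Component:                                    # structure concept
    name: str
    performs: list = field(default_factory=list)    # relationship to Function
    properties: dict = field(default_factory=dict)  # system properties

req = Requirement("The system shall measure altitude.")
fn = Function("Measure altitude", satisfies=[req])
comp = Component("Altimeter", performs=[fn], properties={"mass_kg": 0.2})

# Traverse the relationships: which requirement does this component address?
print(comp.performs[0].satisfies[0].text)
# → The system shall measure altitude.
```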
<br />
A preliminary [[Systems Engineering Concept Model]] was developed in support of the integration efforts between the development of the OMG Systems Modeling Language and the ISO AP233 Data Exchange Standard for systems engineering. The concept model was captured in an informal way, but the model and associated concepts were rigorously reviewed by a broad representation from the systems engineering community, including members from INCOSE, and the AP233 and SysML development teams.<br />
<br />
A fragment from the top level [[Systems Engineering Concept Model]] is included in Figure 2. This model provides concepts for requirements, behavior, structure and properties of the system, as well as other concepts related to project management. The concept model is augmented by a well defined glossary of terms called the semantic dictionary. The concept model and glossary of terms served as a key input to the requirements for the OMG Systems Modeling Language that was called the [[UML for Systems Engineering Request for Proposal]].<br />
<br />
[[File:060611_SF_System_Conept_Model-Top_Levelnofig10.png|600px|System Concept Model-Top Level]]<br />
<br />
The concept model is sometimes referred to as a meta-model (glossary), domain meta-model, or schema, and can be used to specify the abstract syntax of a modeling language (refer to the MDA Foundation Model (Object Management Group 2010)). Several other system concept models have been developed but not standardized. Future standardization efforts should establish a standard systems concept model. The model can then evolve over time as the systems engineering community continues to formalize and advance the practice of systems engineering.<br />
<br />
==Abstraction – a Key Modeling Concept==<br />
In order to manage complexity, it is important for a model to represent different levels of [[Abstraction (glossary)|abstraction (glossary)]] and to present only the information needed to communicate a particular intent, hiding other aspects that are not relevant to that intent. Key concepts for modeling different levels of abstraction include view and viewpoint and the distinction between black-box and white-box models, as described below. Different modeling languages and tools employ other techniques as well.<br />
<br />
===View and Viewpoint===<br />
[[IEEE 1471]] defines view and viewpoint as follows:<br />
#[[View (glossary)]] - A representation of a whole system from the perspective of a related set of concerns.<br />
#[[Viewpoint (glossary)]] - A specification of the conventions for constructing and using a view. A pattern or template from which to develop individual views by establishing the purposes and audience for a view and the techniques for its creation and analysis.<br />
<br />
The viewpoint specifies the stakeholders and their concerns, and provides the conventions for constructing a view to address those concerns. The view represents aspects of the system that address the stakeholder concerns. Models can be created to represent the different views of the system.<br />
<br />
===Typical System Views===<br />
A systems model should be able to represent multiple views of the system to address a range of stakeholder concerns. Standard views may include requirements, functional, structural, and parametric views, as well as a multitude of discipline specific views to address system reliability, safety, security, and other quality characteristics. <br />
<br />
===Black-box and White-box Models===<br />
A very common abstraction technique is to model the [[Black-Box System (glossary) |black-box (glossary)]] of a system, which exposes only the features of the system that are visible to an external observer and hides the internal details of the design. This includes stimulus-response characteristics and other externally observable physical characteristics, such as the system mass or weight. The model of the [[White-Box System (glossary)|white-box (glossary)]] of a system shows the internal structure and behavior of the system. Black-box and white-box modeling can be applied at the next level of design decomposition to create a black-box and white-box model of each system component.<br />
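The black-box/white-box distinction can be illustrated with a deliberately simple sketch; the amplifier example and all names below are assumptions for illustration, not drawn from the cited sources.

```python
# Hypothetical example: the same system viewed as a black box (external
# stimulus-response and observable properties only) and as a white box
# (internal structure exposed). Names are illustrative assumptions.

class AmplifierBlackBox:
    """Black-box view: only externally visible features."""
    mass_kg = 0.5                      # externally observable physical property

    def respond(self, stimulus: float) -> float:
        return 2.0 * stimulus          # stimulus-response characteristic

class AmplifierWhiteBox(AmplifierBlackBox):
    """White-box view: internal structure and behavior are shown."""
    def __init__(self):
        self.stages = [4.0, 0.5]       # internal gain stages (product is 2.0)

    def respond(self, stimulus: float) -> float:
        signal = stimulus
        for gain in self.stages:       # internal behavior realizing the response
            signal *= gain
        return signal

# Both views exhibit the same external stimulus-response characteristic.
print(AmplifierBlackBox().respond(3.0), AmplifierWhiteBox().respond(3.0))
# → 6.0 6.0
```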
<br />
==References== <br />
<br />
===Citations===<br />
Object Management Group. 2010. MDA Foundation Model. Object Management Group (OMG), document number ORMSC/2010-09-06.<br />
<br />
===Primary References===<br />
ANSI/IEEE. 2000. [[IEEE 1471|Recommended Practice for Architectural Description for Software-Intensive Systems]]. New York, NY: American National Standards Institute (ANSI)/Institute of Electrical and Electronics Engineers (IEEE), ANSI/[[IEEE 1471]]-2000.<br />
<br />
Guizzardi, G. 2007. [[On Ontology, Ontologies, Conceptualizations, Modeling Languages, and (Meta)Models]]. Proceedings of the 2007 Conference on Databases and Information Systems IV. Available at http://portal.acm.org/citation.cfm?id=1565425.<br />
<br />
INCOSE 2003. [[Systems Engineering Concept Model]]. Draft 12 Baseline. Available at http://syseng.omg.org/SE_Conceptual%20Model/SE_Conceptual_Model.htm <br />
<br />
Object Management Group 2003. [[UML for Systems Engineering Request for Proposal]]. OMG document number ad/2003-3-41. Available at http://www.omg.org/cgi-bin/doc?ad/2003-3-41<br />
<br />
===Additional References===<br />
Object Management Group. 2010. MDA Foundation Model. Object Management Group (OMG), document number ORMSC/2010-09-06. <br />
----<br />
<br />
==Article Discussion==<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Types of Models|<- Previous Article]] | [[Representing Systems with Models|Parent Article]] | [[Modeling Standards|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackinhttps://sebokwiki.org/w/index.php?title=Types_of_Models&diff=9613Types of Models2011-08-09T19:07:21Z<p>Skmackin: </p>
<hr />
<div>This section introduces a classification for the many different types of models, and highlights how different models must work together to support the broader engineering effort.<br />
==Model Classification==<br />
There are many different types of models and associated modeling languages to address different aspects of a system. Since different models serve different purposes, a classification of models can be useful for selecting the right type of model for the intended purpose and scope. <br />
<br />
===Formal versus Informal Models===<br />
Since a system model is a representation of a system, many different expressions that vary in their degree of formalism could be considered models. In particular, one could draw a picture of a system and consider it a model. Similarly, one could write a description of a system in text and refer to that as a model. Both examples are representations of a system. However, unless there is some agreement on the meaning of the terms, such representations risk imprecision and ambiguity.<br />
<br />
The primary focus of system modeling is on the use of models supported by a well-defined [[Modeling Language (glossary)|modeling language (glossary)]]. While less formal representations can be useful, there are certain expectations a model must meet to be considered within the scope of model-based systems engineering (MBSE). In particular, the initial classification distinguishes between informal models and more formal models supported by a modeling language with a defined syntax and semantics for the domain of interest.<br />
<br />
===Physical Models versus Abstract Models===<br />
The DoD 5000.59-M (1998) definition [1] of a model asserts that a model can be “a physical, mathematical, or otherwise logical representation of a system”. This definition provides a starting point for a high-level model classification. A [[Physical Model (glossary)|physical model (glossary)]] is a concrete representation, distinguished from mathematical and logical models, which are more abstract representations of the system. The [[Abstract Model (glossary)|abstract model (glossary)]] can be further classified as descriptive (similar to logical) or analytical (similar to mathematical). Some example models are shown in Figure 1 (to be updated).<br />
<br />
[[File:060611_SF_Typical_Models.png|800px|Typical Models]]<br />
<br />
===Descriptive Models===<br />
A [[Descriptive Model (glossary)|descriptive model (glossary)]] describes logical relationships, such as the system's whole-part relationship that defines its parts tree, the interconnection between its parts, the functions its components perform, or the test cases used to verify the system requirements. Typical descriptive models include models that describe the system architecture and computer-aided design models that describe the three-dimensional geometric representation of a system.<br />
<br />
===Analytical Models===<br />
An [[Analytical Model (glossary)|analytical model (glossary)]] describes mathematical relationships, such as differential equations, that support quantitative analysis of the system parameters. Analytical models can be further classified into static and dynamic models. Dynamic models describe the time-varying state of a system, whereas static models perform computations that do not represent the time-varying state of a system. A static model may represent a mass properties calculation or a reliability prediction, whereas a dynamic model may represent the performance of a system, such as aircraft position, velocity, acceleration, and fuel consumption over time.<br />
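The static/dynamic distinction can be sketched as follows: the mass roll-up has no notion of time, while the flight model integrates time-varying state with a simple forward-Euler scheme. All values are invented for illustration:<br />

```python
# Static analytical model: total mass is a computation with no time variable.
component_masses_kg = {"airframe": 12000.0, "engines": 4500.0, "avionics": 800.0}
total_mass_kg = sum(component_masses_kg.values())

# Dynamic analytical model: aircraft speed, position, and fuel as
# time-varying state, advanced step by step with forward-Euler integration.
def simulate(duration_s, dt=1.0, accel=2.0, burn_rate_kg_per_s=1.5):
    t, velocity, position, fuel = 0.0, 0.0, 0.0, 5000.0
    while t < duration_s and fuel > 0.0:
        position += velocity * dt          # integrate position from velocity
        velocity += accel * dt             # integrate velocity from acceleration
        fuel -= burn_rate_kg_per_s * dt    # deplete fuel over time
        t += dt
    return velocity, position, fuel

v, x, fuel = simulate(60.0)
print(total_mass_kg, v, x, fuel)
```

The static result is the same regardless of when it is computed; the dynamic result depends on how long the simulated time advances.<br />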
<br />
===Hybrid Descriptive and Analytical Models===<br />
A particular model may include both descriptive and analytical aspects as described above, but most models emphasize one or the other. It should be noted that the logical relationships of a descriptive model can also be analyzed, and inferences can be made by reasoning about the system; however, this logical analysis provides different insights than a quantitative analysis of the system parameters. <br />
<br />
===Domain-Specific Models===<br />
Both descriptive and analytical models can be further classified according to the domain they represent. The following classification is partially derived from the presentation ''OWL, Ontologies and SysML Profiles: Knowledge Representation and Modeling'' (Jenkins and Rouquette 2010). Models may represent:<br />
*properties of the system, such as reliability, mass properties, power, structural, or thermal models. <br />
*design and technology implementations such as electrical, mechanical, and software design models. <br />
*subsystems and products, such as communications, fault management, or power distribution models.<br />
*system applications such as information system, automotive system, aerospace system, or medical device models.<br />
<br />
A single model may include multiple domain categories from above. For example, a reliability, thermal, or power model may be defined for the electrical design of a communications subsystem for an aerospace system such as an aircraft or satellite.<br />
<br />
===System Models===<br />
System models can be hybrid models that are both descriptive and analytical. They often span several modeling domains that must be integrated to ensure a consistent and cohesive system representation. As such, the system model must provide both general purpose system constructs and domain-specific constructs that are shared across modeling domains.<br />
<br />
According to Wikipedia [2], a system model is a conceptual model that describes and represents a system. A system comprises multiple views, such as planning, requirements (analysis), design, implementation, deployment, structure, behavior, input data, and output data views. A system model is required to describe and represent all of these views.<br />
<br />
One of the original efforts to formally define a system model using a mathematical framework was developed by Wayne Wymore and documented in his book entitled [[Model-Based Systems Engineering|Model-Based Systems Engineering: An Introduction to the Mathematical Theory of Discrete Systems and to the Tricotyledon Theory of System Design]]. This approach establishes a rigorous mathematical framework for designing systems in a model-based context. A summary of his work can be found in [[A Survey of Model-Based Systems Engineering (MBSE) Methodologies]].<br />
<br />
===Simulation versus Model===<br />
The term simulation, or more specifically [[Computer Simulation (glossary)|computer simulation (glossary)]], refers to an analytical model that can be executed by a computing infrastructure. A computer simulation includes the analytical model and the computing infrastructure, as well as the initial conditions required to execute the model. There are many different types of computer simulations. According to Wikipedia [3], computer simulations can be characterized along the following dimensions:<br />
<br />
*Stochastic or deterministic <br />
*Steady-state or dynamic<br />
*Continuous or discrete<br />
*Local or distributed<br />
<br />
Simulations are often integrated with actual hardware, software, and operators of the system to evaluate how actual components and users of the system perform in a simulated environment. <br />
<br />
In addition to representing the system and its environment, the simulation must provide efficient computational methods for solving the equations. Simulations may be required to operate in real time, particularly if there is an operator in the loop. Other simulations may be required to operate much faster than real time, and perform thousands of simulation runs to provide statistically valid simulation results. Several computational and other simulation methods are described in [[Simulation Modeling and Analysis]].<br />
<br />
===Visualization===<br />
Computer simulation results and other analytical results often need to be processed so they can be presented to users in a meaningful way. Visualization techniques and tools are used to display the results in various visual forms. Examples include a simple plot of the state of the system versus time, and input and output values from several simulation executions displayed on a response surface showing the sensitivity of the output to the input. Additional statistical analysis of the results may be performed to provide probability distributions for selected parameter values. Animation is often used to provide a virtual representation of the system and its dynamic behavior, such as displaying an aircraft's three-dimensional position and orientation as a function of time, along with a projection of its path on the surface of the earth represented by detailed terrain maps.<br />
<br />
==Integration of Models==<br />
Many different types of models may be developed as artifacts of a model-based systems engineering effort. Many other domain specific models are created for component design and analysis. The different descriptive and analytical models must be integrated in order to fully realize the benefits of a model-based approach. The role of MBSE to integrate across multiple domains is a primary theme in the [[INCOSE Systems Engineering Vision 2020]].<br />
<br />
As an example, system models can be used to specify the components of the system. The descriptive model of the system architecture may be used to identify and partition the components of the system and define their interconnections and other relationships. Analytical models for performance, physical, and other quality characteristics, such as reliability, may be employed to determine the required values of specific component properties needed to satisfy the system requirements. An [[Executable System Model (glossary)|executable system model (glossary)]] that represents the interaction of the system components may be used to validate that the component requirements can satisfy the system behavioral requirements. The descriptive, analytical, and executable system models must represent different facets of the same system consistently.<br />
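A minimal illustration of this kind of integration, assuming a hypothetical latency requirement: a descriptive model (the components in a signal path) is combined with an analytical model (latency values allocated to each component) to check a system-level requirement. All names and numbers are invented:<br />

```python
# System-level requirement to be satisfied by the component allocations.
system_latency_requirement_ms = 100.0

# Descriptive model: the components in the end-to-end signal path,
# in connection order.
signal_path = ["sensor", "processor", "actuator"]

# Analytical model: latency allocated to each component.
allocated_latency_ms = {"sensor": 20.0, "processor": 45.0, "actuator": 25.0}

# Integration: roll the component allocations up along the descriptive
# structure and compare against the system requirement.
end_to_end_ms = sum(allocated_latency_ms[c] for c in signal_path)
requirement_met = end_to_end_ms <= system_latency_requirement_ms
print(end_to_end_ms, requirement_met)
```

The check only makes sense because both models refer to the same components, which is the consistency obligation described above.<br />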
<br />
The component designs must satisfy the component requirements that are specified by the system models. As a result, the component design and analysis models must have some level of integration with the system models to ensure the design models are traceable to the requirements models. The different design disciplines, such as electrical, mechanical, and software, each create their own models that represent different facets of the same system as well. It is evident that the different models must be sufficiently integrated to ensure a cohesive system solution.<br />
<br />
In order to support the integration, the models must establish [[Semantic Interoperability (glossary)|semantic interoperability (glossary)]] to ensure that a construct in one model has the same meaning as a corresponding construct in another model. In addition to semantic interoperability for models to share common definitions when referring to the same thing, the information must also be exchanged from one modeling tool to another.<br />
<br />
An approach to achieve semantic interoperability is to use [[Model Transformation (glossary)|model transformations (glossary)]] between different models. This approach defines a transformation to establish correspondence between the concepts in one model and the concepts in another. In addition to establishing correspondence, the tools must have a means to exchange the model data in order to share the information. There are multiple means for exchanging data between tools including file exchange, use of application program interfaces (API), and a shared repository.<br />
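A minimal sketch of such a transformation, using invented schemas for both models: an architecture-style "block" is mapped to a simulation-style "node", and the result is serialized for file-based exchange:<br />

```python
# Hedged illustration of a model transformation plus file exchange.
# Both model schemas are hypothetical, chosen only to show the pattern.
import json

architecture_model = {
    "blocks": [
        {"name": "Battery", "properties": {"capacity_Wh": 500}},
        {"name": "Motor",   "properties": {"max_power_W": 1200}},
    ]
}

def to_simulation_model(arch):
    """Transformation rule establishing correspondence:
    block -> node, properties -> parameters."""
    return {
        "nodes": [
            {"id": b["name"], "parameters": dict(b["properties"])}
            for b in arch["blocks"]
        ]
    }

sim_model = to_simulation_model(architecture_model)

# File exchange: one tool serializes the transformed model,
# and a second tool parses it back.
exchanged = json.dumps(sim_model)
received = json.loads(exchanged)
assert received["nodes"][0]["id"] == "Battery"
```

The transformation rule carries the semantic correspondence, while the serialization step addresses the separate problem of moving model data between tools.<br />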
<br />
The use of modeling standards for modeling languages, model transformations, and data exchange, is an important enabler to achieve integration across modeling domains.<br />
<br />
==References== <br />
<br />
====Citations====<br />
<br />
[1] DoD 5000.59-M. DoD Modeling and Simulation (M&S) Glossary. January 1998.<br />
<br />
[2] Wikipedia. System Model. Available at http://en.wikipedia.org/wiki/System_model<br />
<br />
[3] Wikipedia. Computer simulation. Available at http://en.wikipedia.org/wiki/Computer_simulation#Types<br />
<br />
====Primary References====<br />
Law, A. 2007. [[Simulation Modeling and Analysis]]. 4th ed. New York, NY: McGraw-Hill.<br />
<br />
Wymore, A.W. 1993. [[Model-Based Systems Engineering]]. CRC Press, Inc. Boca Raton, FL.<br />
<br />
====Additional References====<br />
Estefan, J. 2008. [[A Survey of Model-Based Systems Engineering (MBSE) Methodologies]]. INCOSE-TD-2007-003-02, 23 May 2008. Available at http://www.incose.org/ProductsPubs/pdf/techdata/MTTC/MBSE_Methodology_Survey_2008-0610_RevB-JAE2.pdf<br />
<br />
INCOSE. 2007. [[INCOSE Systems Engineering Vision 2020|Systems Engineering Vision 2020]]. INCOSE-TP-2004-004-02, September 2007. Available at http://www.incose.org/ProductsPubs/products/sevision2020.aspx<br />
<br />
Jenkins, S., and N. Rouquette. 2010. OWL, Ontologies and SysML Profiles: Knowledge Representation and Modeling. Presentation at the NASA-ESA PDE Workshop, May 2010. Available at http://www.congrex.nl/10m05post/presentations/pde2010-Jenkins.pdf<br />
<br />
Wikipedia. Computer simulation. Available at http://en.wikipedia.org/wiki/Computer_simulation#Types <br />
----<br />
<br />
==Article Discussion==<br />
<br />
[[{{TALKPAGENAME}}|[Go to discussion page]]]<br />
<center>[[Why Model?|<- Previous Article]] | [[Representing Systems with Models|Parent Article]] | [[System Modeling Concepts|Next Article ->]]</center><br />
==Signatures==<br />
[[Category:Part 2]][[Category:Topic]]</div>Skmackin