SG 2 Perform Testing using Statistical Methods
Test design and execution are guided by statistical methods based on operational or usage profiles.
SP 2.1 Develop operational profiles
Operational profiles (or usage models) are developed early in the development lifecycle to serve as a basis from which a statistically correct sample of test cases can be derived.

Example work products
1. Operational profile of the system to be tested

Sub-practices
1. Develop the customer profile
   A customer is the person, group or organization that is acquiring the product being developed. A customer group is the set of customers that will be using the product in the same way. The customer profile is the complete set of customer groups and their associated frequency distribution across the profile.
2. Develop the user profile
   The user profile is the complete set of user groups (the sets of actual users that will engage the system in the same way) and their associated frequency distribution across the profile.
3. Develop the system mode profile
   The system mode profile is the set of system modes (functions or operations grouped in order to analyze execution behavior) and their associated occurrence probabilities.
4. Develop the functional profile
   The functional profile provides, per system mode, a quantitative view of the relative use of each of the different system functions.
5. Develop the operational profile
   An operation represents a task being accomplished by a system. The final operational profile is developed in a series of steps using information from the profiles already developed, including the following [Musa]:
   - Dividing the execution into runs
   - Identifying the input space (a comprehensive list of input variables)
   - Partitioning the input space into operations
   - Determining the occurrence probability for operations
6. Review and agree the operational profile with stakeholders
7. Revise the operational profile as appropriate
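The layering of profiles described above can be sketched numerically. In this minimal example, the occurrence probability of each operation in the final operational profile is the product of its system mode's occurrence probability and the operation's conditional probability within that mode. All mode names, operation names and probability values below are illustrative assumptions, not data from the practice itself.

```python
# Hypothetical system mode profile: mode -> occurrence probability.
system_modes = {
    "normal": 0.85,
    "maintenance": 0.15,
}

# Hypothetical functional profile per mode: operation -> conditional
# probability of that operation being invoked while in the mode.
operations_per_mode = {
    "normal": {"login": 0.30, "query": 0.50, "update": 0.20},
    "maintenance": {"backup": 0.60, "restore": 0.40},
}

# Operational profile: each operation's overall occurrence probability
# is mode probability x conditional probability within the mode.
operational_profile = {
    op: mode_p * op_p
    for mode, mode_p in system_modes.items()
    for op, op_p in operations_per_mode[mode].items()
}

# A well-formed profile's occurrence probabilities must sum to 1.
assert abs(sum(operational_profile.values()) - 1.0) < 1e-9
print(operational_profile)
```

A real profile would of course be derived from measured customer and user frequency data rather than assumed constants, but the sum-to-one check is a useful sanity test at every layer.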
SP 2.2 Generate and execute statistically selected test cases
Test cases are generated based on statistically selected samples of the usage of the product, and subsequently executed.

Example work products
1. Test cases
2. Test results
3. Record of representativeness monitoring

Sub-practices
1. Select samples of the usage of the product based on the developed usage model or operational profile
2. Generate test cases, based on the selected usage samples, that are characteristic of the operational use of the product
   The generated test cases will reflect the probabilities in the usage model or operational profile and represent a sample of the input space according to the usage patterns.
3. Review and agree the test cases with stakeholders
4. Execute the test cases and record the actual results
5. Monitor that the test coverage is representative of the actual usage
   Testing will use tools and measurements to determine whether the set of executed test cases is representative of actual use. Only when the testers are satisfied that the tests are sufficient to simulate expected operation in the field can they use the test results, along with other data, to help make stop-test decisions.
6. Revise the test cases as appropriate when test coverage of actual usage is not adequate
7. Analyze and draw statistical conclusions from the test results
   In this sub-practice the statistical sample is used to draw conclusions about the entire population of customers and uses. This will typically be done using reliability models. Typical issues to be addressed include:
   - How quickly is product quality improving?
   - Can testing be completed within the constraints associated with the project and test resources?
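Sub-practices 1, 2 and 5 above can be illustrated with a short sketch: test cases are drawn as a weighted random sample from the operational profile, and representativeness is monitored by comparing each operation's executed share against its occurrence probability. The profile values, the tolerance threshold and the function names are assumptions for illustration only; a real process would use a statistical goodness-of-fit test rather than a fixed tolerance.

```python
import random
from collections import Counter

# Hypothetical operational profile: operation -> occurrence probability.
profile = {"login": 0.255, "query": 0.425, "update": 0.17,
           "backup": 0.09, "restore": 0.06}

def generate_test_cases(profile, n, seed=42):
    """Draw a statistically representative sample of n operations,
    weighted by their occurrence probabilities in the profile."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)

def coverage_is_representative(executed, profile, tolerance=0.05):
    """Check that each operation's share of executed tests is within
    `tolerance` of its occurrence probability in the profile."""
    counts = Counter(executed)
    total = len(executed)
    return all(abs(counts[op] / total - p) <= tolerance
               for op, p in profile.items())

cases = generate_test_cases(profile, 1000)
print(coverage_is_representative(cases, profile))
```

If the representativeness check fails, sub-practice 6 applies: the test case set is revised (e.g., by drawing additional samples for under-covered operations) until coverage again mirrors expected field usage.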
SP 2.3 Apply statistical test data to make stop-test decisions
Estimates are made of the reliability of the product and the confidence level of product quality. These estimates are the basis for making stop-test decisions.

Example work products
1. Definition of severity levels of failures
2. Reliability and confidence goals
3. Reliability and confidence measures
4. Documented review results, e.g., minutes of the review meeting

Sub-practices
1. Establish levels of severity of failures
   It is important to identify different classes or levels of failure and consider how they should be treated when measuring the reliability of the product. Typically, reliability requirements are established for each failure level.
2. Define quantitative reliability goals to be used as exit criteria and to make stop-test decisions
   Examples of types of reliability goals include the following:
   - Reliability, expressed in terms such as Mean Time Between Failures (MTBF), Mean Time To Repair (MTTR) and Mean Time To Failure (MTTF)
   - Availability
   - Recoverability
   - Trustworthiness
   - Confidence level (when confidence levels are used as a reliability goal, the technique of fault seeding will typically be applied as part of the statistical testing process)
3. Review and agree the reliability goals with stakeholders
4. Select a suitable reliability growth model
   Examples of types of reliability growth models include the following [Musa and Ackerman]:
   - Static model, which is best applied to unchanging software with an unchanged operational profile
   - Basic model, which is useful for modeling failure occurrences for software being tested and continuously debugged
   - Logarithmic Poisson model, which is best applied when it is assumed that some defects are more likely to cause failures than others, and that on average the improvement in failure intensity with each correction decreases exponentially as corrections are made
5. Collect statistical data on failures and system execution time
6. Calculate and estimate reliability measures using the reliability growth model, fitting the model to extrapolate from the collected data
7. Review the status regarding the reliability goals with stakeholders
8. Document the results of the reviews, action items and stop-test decisions
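To make sub-practices 4-6 concrete, the following sketch uses Musa's basic execution-time model, in which failure intensity decreases linearly with the number of failures experienced, and the additional execution time needed to reach a target intensity follows from the model's closed form. The parameter values (nu0, lam0, the observed failure count and the goal) are illustrative assumptions, not real project data; in practice nu0 and lam0 would be estimated by fitting the model to the collected failure data.

```python
import math

# Assumed model parameters (in practice, fitted to failure data).
nu0 = 100.0   # total failures expected over infinite execution time
lam0 = 10.0   # initial failure intensity (failures per CPU hour)

def failure_intensity(mu):
    """Basic model: intensity decays linearly with failures experienced."""
    return lam0 * (1.0 - mu / nu0)

def extra_test_time(lam_present, lam_goal):
    """Additional execution time needed to drive the present failure
    intensity down to the goal, per the basic model's closed form."""
    return (nu0 / lam0) * math.log(lam_present / lam_goal)

mu_observed = 75.0   # failures experienced so far (assumed)
lam_now = failure_intensity(mu_observed)
lam_goal = 0.5       # quantitative reliability goal / exit criterion

print(lam_now)                                        # 2.5
print(round(extra_test_time(lam_now, lam_goal), 2))   # 16.09
```

The stop-test decision then reduces to a comparison: if the estimated current intensity is at or below the agreed goal, testing can stop; otherwise the model indicates roughly how much further execution time is needed, which feeds the review with stakeholders in sub-practice 7.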