The output of the risk analysis process is a prioritized list of risks to the project. This list must be translated into an ordering of the use cases, which in turn determines the amount of testing applied to each use case. For the purposes of system testing, I will consider only those business risks that pertain to the domain in which the application operates.
The tester can assign a risk rating to an individual use case by considering how the risks identified at the project level apply to that specific use case. For example, requirements that are rated most likely to change are high risks; requirements that are outside the expertise of the development team are even higher risks; and requirements that rely on new technology, such as hardware being developed in parallel with the software, are also high risks. In fact, it is usually harder to find low-risk use cases than high-risk ones. The exact classification scheme can vary from one project to another. It should have enough levels to separate the use cases into reasonable groupings, but not so many that some categories have no members.
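As a minimal sketch of such a scheme (the three-level scale, the use case names, and the ratings below are hypothetical, purely for illustration), the classification might be recorded in Python as:

    from enum import IntEnum

    # A hypothetical three-level scheme; a real project may need more levels,
    # but not so many that some levels end up with no members.
    class Risk(IntEnum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    # Illustrative ratings applying the project-level risk factors above.
    use_case_risk = {
        "Browse catalog": Risk.LOW,           # stable, well-understood requirement
        "Apply discount rules": Risk.MEDIUM,  # requirement likely to change
        "Drive new card reader": Risk.HIGH,   # hardware developed in parallel
    }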
The criticality value, combined with the risk associated with the use case, produces a classification that identifies those use cases that describe behavior critical to the success of the system yet most vulnerable to the risks faced by the project. A highly critical use case with high risk should obviously receive a high rating for the number of test cases to be generated, while a non-critical use case with low risk should receive a low rating.
There are several possible strategies for combining the risk and criticality values when the result is not so obvious. An averaging strategy would assign a medium rating to a low-risk yet highly critical use case, while a conservative strategy would assign a high rating to that same use case, as the sketch below illustrates. The choice of strategy is not important to our discussion and is domain and application specific.
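A minimal sketch of the two strategies, assuming the same hypothetical three-level scale for both risk and criticality:

    from enum import IntEnum

    class Level(IntEnum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    def combine_averaging(risk: Level, criticality: Level) -> Level:
        # Averaging: integer mean, rounding half up; a low-risk but
        # highly critical use case comes out MEDIUM.
        return Level((risk + criticality + 1) // 2)

    def combine_conservative(risk: Level, criticality: Level) -> Level:
        # Conservative: take the higher (worse) of the two values, so the
        # same low-risk, highly critical use case comes out HIGH.
        return Level(max(risk, criticality))

    print(combine_averaging(Level.LOW, Level.HIGH))     # Level.MEDIUM
    print(combine_conservative(Level.LOW, Level.HIGH))  # Level.HIGH

Either function could serve as the combining rule; the point is only that the policy is explicit and applied uniformly across use cases.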
The combined rating for the use case is used to determine which combinations of values for domain objects are used in test cases. A low rating indicates that each equivalence class for each object should be represented in some test case. A high rating indicates that each equivalence class for each object should be used in combination with every equivalence class from the other objects (the full cross product), as sketched below.
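As a sketch of the two extremes, with invented domain objects and equivalence classes, each-choice coverage would serve a low rating and the full cross product a high rating:

    import itertools

    # Hypothetical equivalence classes for two domain objects of one use case.
    account_classes = ["empty", "active", "frozen"]
    amount_classes = ["zero", "in_range", "over_limit"]

    def each_choice(*class_lists):
        # Low rating: every equivalence class appears in at least one test
        # case; shorter lists repeat their last class to fill out the rows.
        rows = max(len(c) for c in class_lists)
        return [tuple(c[min(i, len(c) - 1)] for c in class_lists)
                for i in range(rows)]

    def cross_product(*class_lists):
        # High rating: every class combined with every class of the others.
        return list(itertools.product(*class_lists))

    print(each_choice(account_classes, amount_classes))         # 3 test cases
    print(len(cross_product(account_classes, amount_classes)))  # 9 test cases

Intermediate ratings would use selection schemes falling between these two extremes.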
Conclusion
Good planning can find faults before live testing begins. By examining relationships among the objects required by the various use cases, the requirements can be checked for consistency and completeness. This finds faults far more cheaply than discovering them through test cases built on faulty requirements.
Good planning can optimize scarce resources. There is much information that can be used to make testing as productive as possible. By considering the frequency of use of program features and the risk associated with each use, the tester can make more intelligent decisions about where to emphasize testing. Considering equivalence classes of objects allows the tester to achieve better coverage without increasing the number of test cases.