Allocation of resources

Testing can be a resource-intensive activity. The tester may need to reserve special hardware or construct large, complex data sets, and will always have to spend a large amount of time verifying that the expected results section of each test case actually corresponds to the correct behavior. In this section I want to present two techniques for determining which parts of the product should be tested more intensively than others. This information will be used to reduce the amount of effort expended while only marginally affecting the quality of the resulting product.

Use Profiles

One technique for allocating testing resources uses use profiles to determine which parts of the application will be exercised the most, and then concentrates testing on those parts. The principle is to test the most heavily used parts of the program over a wider range of inputs than less frequently used portions, to ensure the greatest user satisfaction.

A use profile is simply a frequency graph that illustrates the number of times an end-user function is used, or is anticipated to be used, in the actual operation of the program. The profile can be constructed in a couple of ways. First, data can be collected from actual use, such as during usability testing; this results in a raw-count profile. Second, a profile can be constructed by reasoning about the meanings and responsibilities of the system interface; the result is a relative ordering of the end-user functions rather than a precise frequency count.
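As a minimal sketch of the first, raw-count way of building a profile, the fragment below tallies end-user function invocations from a hypothetical usage log captured during usability testing. The log format, the sample data, and the build_use_profile helper are illustrative assumptions, not part of the original article.

    from collections import Counter

    def build_use_profile(log_entries):
        """Tally how often each end-user function appears in a usage log.

        Each entry is assumed to name one invoked function, e.g. "SAVE".
        Returns (function, count) pairs ordered from most to least used.
        """
        counts = Counter(entry.strip() for entry in log_entries if entry.strip())
        return counts.most_common()

    # Hypothetical data collected during a usability-testing session.
    usability_log = ["SAVE", "SAVE", "EXIT", "SAVE", "Create FOOTNOTE", "SAVE"]

    for function, count in build_use_profile(usability_log):
        print(f"{function:20s} {count}")

Sorting the counts gives the ordering of functions from most to least used, which is exactly the shape of profile the raw-count approach produces.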

The second, reasoning-based way of constructing a profile works from the interface itself. The EXIT function of the system will be successfully completed exactly once per invocation of the program, but the SAVE function may be used numerous times, and it is conceivable that the Create FOOTNOTE function might not be used at all during a session. This reasoning yields a profile that orders the functions SAVE, EXIT, and then Create FOOTNOTE. The SAVE function will therefore be tested over a much wider range of inputs than the Create FOOTNOTE function.

A second approach to use profiling is to rate each use case on a scale. In projects mentored by Software Architects, we use a use case form that includes fields recording an estimate of how frequently the use will be activated and how critical the use described in the use scenario is to the operation of the system. An estimate is also made of the relative complexity of each use case. Table 1 shows an example from a recent project. During the initial iterations, the team responsible for the use case model uses domain and application information to complete these fields.

Table 1 - Example Use Case

Use Case #: 001
Use Scenario: The user selects the SAVE option from the menu. The system responds by saving the current file using the current file name, if it exists. If it does not exist, a dialog box requests a new filename and a new file is created.
Frequency: High
Criticality: High
Complexity: Medium
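To show how a completed form of this kind might be captured for later analysis, the sketch below models one use case record with the frequency, criticality, and complexity fields from Table 1. The class and field names are hypothetical, chosen for illustration rather than taken from any published template.

    from dataclasses import dataclass

    @dataclass
    class UseCase:
        """One completed use case form, with the rating fields shown in Table 1."""
        number: str
        scenario: str
        frequency: str    # e.g. "Low", "Medium", "High"
        criticality: str  # e.g. "Low", "Medium", "High"
        complexity: str   # e.g. "Low", "Medium", "High"

    # Use case 001 from Table 1, captured as a record.
    save_use_case = UseCase(
        number="001",
        scenario="The user selects the SAVE option from the menu.",
        frequency="High",
        criticality="High",
        complexity="Medium",
    )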

The frequency field can be used to support the first approach to ordering the use cases, and the criticality field can also be used to order them. However, neither attribute is really adequate by itself. For example, we might paint a logo in the lower right-hand corner of each window. This would be a relatively frequent event, but should it fail, the system would still be able to provide its important functionality to the user. Likewise, attaching to the local database server happens very seldom, but its success is critical to the success of certain other functions.

What is required is a technique for combining these two attributes of the use cases. Table 2 illustrates this using a matrix in which the vertical axis is the frequency rating scale and the horizontal axis is the criticality rating scale. This produces cells that represent categories such as highly critical and frequently used, or minimally critical and seldom used. These two classifications are combined into a single scale that ranges from low in the upper left-hand corner to high in the lower right-hand corner. In the next section I will illustrate a couple of strategies for mapping the attribute values onto the rating scale.

Table 2 - Combining Use Case Rankings

Frequency \ Criticality | minimally critical | moderately critical | highly critical
seldom                  | low                |                     |
moderate                |                    |                     |
frequent                |                    |                     | high

The amount of resources and the size of the project will help determine the granularity of the scale. The objective is to identify a set of uses, in the top one or two categories of the rating scale, whose combined size is manageable within the available resources. The scale may simply be low, medium, and high. If too many cases are categorized as high, we might add a very high category to discriminate further among the cases.
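A minimal sketch of one possible combination rule appears below. It encodes the two three-point scales as ranks and sums them to pick a band from Table 2. The corner cells match the table, but the rule for the interior cells is an assumption of this sketch, not one of the mapping strategies the article presents later.

    # Assumed ordinal encoding of the two three-point scales from Table 2.
    FREQUENCY_RANK = {"seldom": 0, "moderate": 1, "frequent": 2}
    CRITICALITY_RANK = {"minimally critical": 0,
                        "moderately critical": 1,
                        "highly critical": 2}

    def combined_rating(frequency, criticality):
        """Map a (frequency, criticality) pair onto a single low/medium/high band.

        Summing the ranks puts seldom + minimally critical at low and
        frequent + highly critical at high, matching the corners of Table 2.
        """
        score = FREQUENCY_RANK[frequency] + CRITICALITY_RANK[criticality]
        if score <= 1:
            return "low"
        if score >= 3:
            return "high"
        return "medium"

    print(combined_rating("seldom", "minimally critical"))     # low
    print(combined_rating("moderate", "moderately critical"))  # medium
    print(combined_rating("frequent", "highly critical"))      # high

If too many use cases end up in the high band, the same idea extends naturally: reserve a separate very high band for the maximum score.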

Once the target group of use cases is identified, these highly rated use cases can be tested over the largest possible range of input values. Below I will present one scheme for varying the number of test cases systematically.

