
 

quarterly and monthly data. One could decide to choose these models rather than use the AIC model selection method. However, one can be reassured that the AIC will do as well as or better than the encompassing model, and it will lead to the selection of simpler models when possible.

 

The information in Table 7.5b is more complicated. The potential models for the selection methods in this table include the ten non-seasonal models for the annual time series and all 30 models from Tables 2.2 and 2.3 for the quarterly and monthly time series. The results for quarterly and monthly data are similar to those for the linear models. The AIC does nearly as well as or better than the other model selection methods. As in the case of the linear models, a single damped trend model performs well. Unlike the case of the linear models, neither the ETS(M,Ad,M) model nor the ETS(A,Ad,M) model is an encompassing model for the 30 models, and therefore it is not as clear which damped trend model to pick. Another observation is that the mean MASE and median MASE do not decrease, as we would hope, when the number of models in the selection process is increased from six to 30. For both monthly and quarterly time series, one should consider using the AIC with an expanded set of linear models, but far fewer than all 30 models.
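As a rough illustration (not the code used in the study) of how the candidate set affects AIC-based selection, the following sketch uses the ets() function of the forecast package; the package, the ts object y, and the choice of a quarterly or monthly series are all assumptions of the sketch.

library(forecast)

# Automatic selection by AIC over all admissible ETS models
fit_all    <- ets(y, model = "ZZZ", ic = "aic")
# Restrict the search to the linear (additive error, trend and seasonality) models
fit_linear <- ets(y, model = "ZZZ", ic = "aic", additive.only = TRUE)
# Fit a single damped trend model, ETS(A,Ad,A), with no selection at all
fit_damped <- ets(y, model = "AAA", damped = TRUE)

forecast(fit_linear, h = 18)   # 18-step-ahead forecasts from the selected model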

 

The annual data in Table 7.5b tend to be shorter than the quarterly and monthly data. The longest annual series had 41 observations, and there are many very short time series. Hence, it is not unexpected to find that the model selection methods did not do as well as a single model. The same comparisons as in Table 7.5b were done for annual time series of length greater than or equal to 20, with essentially no change to the results. In fact, model selection comparisons were also done for quarterly and monthly data of length greater than or equal to 28 and 72, respectively, with no change to the general implications of Table 7.5b. For annual time series, these comparisons indicate that one should either use the damped trend model ETS(A,Ad,N) or limit the AIC to the three linear models. These findings match and help to explain the poor performance of the AIC for choosing among innovations state space models in Hyndman et al. (2002) for annual data from the M3 competition.

 

All of the comparisons in Table 7.5a, b were repeated using the mean absolute percentage error (MAPE) from Sect. 2.7.2, and again the implications were the same. See Table 7.5c, d.
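For reference, the two accuracy measures can be written as small R functions. The MASE scaling shown here (the in-sample mean absolute error of the one-step naive forecast, following Hyndman and Koehler 2006) is an assumption of this sketch; Sect. 2.7.2 gives the exact definitions used in the comparisons.

# Percentage error measure: unstable when 'actual' contains values near zero
mape <- function(actual, fc) 100 * mean(abs((actual - fc) / actual))

# Scaled error measure: the scale comes from the fitting data, not the test data
mase <- function(actual, fc, train) {
  scale <- mean(abs(diff(train)))   # in-sample MAE of the one-step naive forecast
  mean(abs(actual - fc)) / scale
}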

 

For a comparison of the individual methods (five ICs and VAL) using the MASE, see Table 7.6. This table allows the reader to see more detail for the model selection methods that are summarized in the last two columns of Table 7.5a, b. In the next section, we will examine model selection for a set of hospital data and will present the results in the same form as Table 7.6.



 

Table 7.6. Comparisons of methods on the M3 data using MASE for the models in Tables 2.2 and 2.3.

Measure Data type AIC BIC HQIC AICc LEIC VAL
(a) Comparison of methods using MASE for linear models      
Mean Rank Annual 1.84 1.86 1.86 1.88 1.86 1.97
  Quarterly 3.08 3.24 3.14 3.16 3.12 3.26
  Monthly 3.07 3.15 3.05 3.03 3.23 3.20
Mean MASE Annual 2.94 2.96 2.95 2.96 2.95 3.04
  Quarterly 2.15 2.21 2.16 2.17 2.15 2.19
  Monthly 2.06 2.13 2.09 2.05 2.19 2.17
Median MASE Annual 1.82 1.85 1.85 1.85 1.85 1.95
  Quarterly 1.47 1.58 1.50 1.49 1.49 1.53
  Monthly 1.08 1.11 1.08 1.07 1.12 1.10
(b) Comparison of methods using MASE for all models      
Mean Rank Annual 5.42 5.29 5.39 5.33 5.31 5.55
  Quarterly 13.97 14.75 14.20 14.47 15.14 14.87
  Monthly 13.50 13.60 13.33 13.29 14.78 13.92
Mean MASE Annual 3.30 3.28 3.29 3.26 2.91 3.37
  Quarterly 2.27 2.38 2.29 2.29 2.40 2.29
  Monthly 2.08 2.10 2.08 2.08 2.19 2.20
Median MASE Annual 1.98 1.95 1.97 1.97 1.92 2.00
  Quarterly 1.54 1.57 1.55 1.56 1.61 1.55
  Monthly 1.10 1.11 1.07 1.09 1.14 1.09
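As an illustration of how the three summary measures in Tables 7.6 and 7.7 might be assembled (the exact definitions are those of Sect. 7.2.1), suppose mase_mat is a matrix of out-of-sample MASE values with one row per time series and one column per selection method; the name mase_mat and the ranking convention are assumptions of this sketch.

# Rank the methods within each series (1 = best), then average the ranks by method
mean_rank   <- colMeans(t(apply(mase_mat, 1, rank)))
mean_mase   <- colMeans(mase_mat)
median_mase <- apply(mase_mat, 2, median)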

 

 

7.2.3 Comparing Selection Procedures on a Hospital Data Set

 

For another comparison of the model selection procedures, we used time series from a hospital data set.¹ Each time series comprises a monthly patient count for one of 20 products that are related to medical problems. We included only time series that had a mean patient count of at least ten and no individual values of 0. There were 767 time series that met these conditions. As in the comparisons using the M3 data, we withheld H = 18 time periods from the fitting set for the LEIC and the VAL model selection methods; similarly, we set H = 18 time periods for the comparisons in the forecasting set. Every time series consisted of 7 years of monthly observations, so the length n_j of every fitting set had the same value of 66 time periods (i.e., 84 − 18 = 66).

 

¹ The data were provided by Paul Savage of Healthcare, LLC Intelligence and Hans Levenbach of Delphus, Inc.
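A hedged sketch of the screening and fitting/forecasting split just described, assuming a list hospital_series of monthly ts objects (the list name and structure are hypothetical):

H <- 18
# Keep series with a mean patient count of at least ten and no zero values
keep <- Filter(function(y) mean(y) >= 10 && all(y != 0), hospital_series)
length(keep)   # 767 series met these conditions in the study

splits <- lapply(keep, function(y) {
  n <- length(y)                                   # 84 monthly observations (7 years)
  list(fit  = window(y, end   = time(y)[n - H]),   # 66 fitting periods (84 - 18)
       test = window(y, start = time(y)[n - H + 1]))
})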



 

Table 7.7. Comparisons of methods on the hospital data set using MASE for models in Tables 2.2 and 2.3.

Measure Data type AIC BIC HQIC AICc LEIC VAL
(a) Comparison of methods using MASE for linear models      
Mean Rank Monthly 3.10 3.07 3.01 3.10 3.07 3.36
Mean MASE Monthly 0.94 0.91 0.92 0.94 0.91 0.96
Median MASE Monthly 0.83 0.83 0.83 0.83 0.83 0.84
(b) Comparison of methods using MASE for all models      
Mean Rank Monthly 13.66 13.25 13.22 13.52 13.53 14.80
Mean MASE Monthly 0.98 0.96 0.97 0.98 0.95 1.00
Median MASE Monthly 0.84 0.84 0.83 0.84 0.85 0.86

 

 

By examining Table 7.7, we see that the results for the hospital data set using the MASE are somewhat similar to those for the monthly time series in the M3 data set. Because there are time series with values near 0, we believe that in this case the MASE is a more reliable measure than the MAPE for comparing forecasts. For selection from only the linear models in Table 7.7a, the three measures (mean rank, mean MASE, and median MASE) indicate that there is not much difference between the five IC methods (AIC, BIC, HQIC, AICc, and LEIC). The VAL method seems to be clearly the worst choice. In Table 7.7b, where the selection is among all 30 models, the VAL method remains the poorest choice, and, as with the M3 data, there is no improvement with the increase in potential models. A difference from the findings with the M3 data is that here it is not a good idea to use a single damped trend model for forecasting.

 

 

7.3 Implications for Model Selection Procedures

 

The comparisons of the model selection procedures in Sects. 7.2.2 and 7.2.3 provide us with some interesting information on how to select models, even though the study was limited to the M3 data and the hospital data. First, the AIC model selection method was shown to be a reasonable choice among the six model selection methods for the three types of data (annual, quarterly, and monthly) in the M3 data and for the monthly time series in the hospital data. The number of observations for annual data is always likely to be small (i.e., less than or equal to 40), and thus the IC procedures may not have sufficient data to compete with simply choosing a single model such as the ETS(A,Ad,N) model when all ten non-seasonal models are considered. However, using the AIC on the three linear non-seasonal models fared as well as the ETS(A,Ad,N) model and allows the possibility of choosing simpler models, especially when there is little trend in the data. Thus, for annual time series we recommend using the AIC and choosing among the three linear non-seasonal models.

 

For the monthly data, the AIC is better than the choice of selecting a single damped trend model in both the M3 data and the hospital data. Because it is not clear which single model to use, we suggest using the AIC. One might also consider limiting the choice of models to a set that includes the linear models but is smaller than the complete set of 30 models. We make the same recommendations for the quarterly time series, with additional emphasis on reducing the number of models from 30.

 

In common with other studies of model selection, our focus has been exclusively on selection methods that relate to point forecasts. Model selection procedures designed to produce good interval forecasts are likely to have similar properties to those discussed in this chapter, but the issue is one to be addressed in future research.

 

 

7.4 Exercises

 

Exercise 7.1. Select a data set with monthly time series, and write some R code to do the following:

 

a. Find the maximum likelihood estimates, forecasts for h = 1, ..., 18, forecast errors for h = 1, ..., 18, and MASE(18, i, j) for each time series j in the data set and each linear model i from Table 2.1.

 

b. Use the AIC to pick a model k_j for each time series j and identify the MASE(18, k_j, j) for each time series from the values in part a above.

 

c. Use the BIC to pick a model k_j for each time series j and identify the MASE(18, k_j, j) for each time series from the values in part a above.

 

d. Compare the forecast accuracy obtained when selecting a model with the AIC or the BIC, and when using the ETS(A,Ad,A) model for all series (see Sect. 7.2.1 for suggested measures); one possible starting point is sketched below.
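The following sketch is one possible starting point for parts a-d. It assumes the forecast package, a list series_list of monthly ts objects, and takes the linear models to be the six additive models ETS(A,N,N), ETS(A,A,N), ETS(A,Ad,N), ETS(A,N,A), ETS(A,A,A), and ETS(A,Ad,A); all object names are hypothetical, and this is not presented as the only way to carry out the exercise.

library(forecast)

H <- 18
# The six linear (additive) models, written as (model code, damped) pairs
linear_specs <- list(
  list(model = "ANN", damped = FALSE),   # ETS(A,N,N)
  list(model = "AAN", damped = FALSE),   # ETS(A,A,N)
  list(model = "AAN", damped = TRUE),    # ETS(A,Ad,N)
  list(model = "ANA", damped = FALSE),   # ETS(A,N,A)
  list(model = "AAA", damped = FALSE),   # ETS(A,A,A)
  list(model = "AAA", damped = TRUE)     # ETS(A,Ad,A)
)

results <- t(sapply(series_list, function(y) {
  n     <- length(y)
  train <- window(y, end   = time(y)[n - H])
  test  <- window(y, start = time(y)[n - H + 1])

  # Part a: maximum likelihood fits, 18-step forecasts, and MASE for each linear model
  fits <- lapply(linear_specs, function(s) ets(train, model = s$model, damped = s$damped))
  fcs  <- lapply(fits, forecast, h = H)
  mase <- sapply(fcs, function(fc) accuracy(fc, test)["Test set", "MASE"])
  aic  <- sapply(fits, function(f) f$aic)
  bic  <- sapply(fits, function(f) f$bic)

  # Parts b and c: MASE of the models picked by the AIC and by the BIC
  # Part d: compare with always using ETS(A,Ad,A) (the sixth specification)
  c(AIC = mase[which.min(aic)], BIC = mase[which.min(bic)], AAdA = mase[6])
}))

colMeans(results)           # mean MASE for each strategy
apply(results, 2, median)   # median MASE for each strategy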

 

Exercise 7.2. Repeat Exercise 7.1 with the set of potential models in part a expanded to include the ETS(M,Ad,M) model and its submodels, and with the ETS(M,Ad,M) model added to the comparison in part d.



 

Appendix: Model Selection Algorithms

 

The Linear Empirical Information Criterion

 

In the linear empirical information criterion (LEIC) (Billah et al. 2003, 2005), ζ(n) = c, where c is estimated empirically from an ensemble of N similar time series for M competing models. The number of observations in the fitting set for time series {y_t^(j)}, j = 1, ..., N, is denoted by n_j. Each of the N sets of observations is divided into two segments: the first segment consists of the first n_j − H observations and the second segment consists of the last H observations. Let n = max{n_j; j = 1, ..., N}. Then, values of c between 0.25 and 2 log(n) in steps of δ provide a range of values wide enough to contain all the commonly used penalty functions. The value δ = 0.25 has worked well in practice. The procedure for estimating c for the LEIC is as follows:
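As a hedged illustration of the quantities just defined (not of the estimation steps themselves), the candidate grid and segment lengths can be sketched in R; here n_j is a hypothetical numeric vector holding the fitting-set lengths of the N series, and H = 18 is the value used in the comparisons of Sect. 7.2.

H      <- 18                                     # withheld observations per series
n_fit  <- n_j - H                                # first-segment lengths, one per series
n_max  <- max(n_j)                               # n = max{n_j; j = 1, ..., N}
delta  <- 0.25
c_grid <- seq(0.25, 2 * log(n_max), by = delta)  # candidate values of the penalty c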

 

