Exercise 8.3.

a. Following on from Exercise 8.2, show that for the normalized form to produce the same one-step-ahead forecasts, we require the normalized form of the model represented in (8.26) to be written as:

$$\tilde\ell_{t-1}\,\tilde b_{t-1}^{\phi} = \ell_{t-1} b_{t-1}^{\phi} + A_{t-1},$$

so that

$$\tilde\ell_t = \ell_t + A_t = \tilde\ell_{t-1}\,\tilde b_{t-1}^{\phi} + \alpha\varepsilon_t + a_t.$$
b. Hence show that, in order to have $\tilde b_t = b_t$, we must use the recurrence relationship

$$\tilde b_t = c_t\left(\tilde b_{t-1}^{\phi} - A_{t-1}/\tilde\ell_t + \beta\varepsilon_t/\tilde\ell_t\right),$$

where $c_t = \tilde\ell_t/(\tilde\ell_t - A_t)$.

 

c. Show that yet another normalization would be required to maintain the same two-step-ahead forecast.


 

Exercise 8.4. Derive (8.21), (8.22), (8.23) and (8.24) for the ETS(M,Ad,M), ETS(A,Ad,M), ETS(A,Md,M) and ETS(M,Md,M) models.


Appendix: Derivations for Additive Seasonality

The purpose of this appendix is to derive the results in (8.8)–(8.13). We will prove the first three of the six items jointly by mathematical induction. Because we have observed $y_1, y_2, \dots, y_t$, the values of $\varepsilon_i$ for $i = 1, 2, \dots, t$ are replaced by $\varepsilon_i = (y_i - \hat y_{i|i-1})/q(x_{i-1})$ in the non-normalized case and by $\tilde\varepsilon_i = (y_i - \tilde y_{i|i-1})/q(\tilde x_{i-1})$ in the normalized case. For $t = 1$, result (8.8) is true because

$$
\begin{aligned}
A_1 &= \frac{1}{m}\sum_{i=0}^{m-1} s_1^{(i)} \\
&= \frac{1}{m}\left[ s_0^{(m-1)} + \gamma q(x_0)\varepsilon_1 + \sum_{i=0}^{m-2} s_0^{(i)} \right] \\
&= 0 + (\gamma/m)\, q(x_0)\, \varepsilon_1 \\
&= 0 + (\gamma/m)\, q(\tilde x_0)\, \tilde\varepsilon_1 \\
&= A_0 + a_1.
\end{aligned}
$$

 

It is also easily seen that (8.9) and (8.10) hold for t = 1.

 

Assume that (8.8)–(8.10) are true for time $t$. Observe that $\varepsilon_{t+1} = \tilde\varepsilon_{t+1}$ because $\tilde y_{t+1|t}$ is assumed to be the same as $\hat y_{t+1|t}$. Then, $A_{t+1} = A_t + a_{t+1}$ follows by the same argument as for $t = 1$ and the assumptions for $t$. The other items can be justified for $t + 1$ as follows:

$$
\begin{aligned}
\tilde\ell_{t+1} &= \tilde\ell_t + \phi\tilde b_t + \alpha q(\tilde x_t)\tilde\varepsilon_{t+1} + a_{t+1} \\
&= \ell_t + A_t + \phi b_t + \alpha q(x_t)\varepsilon_{t+1} + a_{t+1} \\
&= \ell_{t+1} + A_{t+1}, \\[4pt]
\tilde b_{t+1} &= \phi\tilde b_t + \beta q(\tilde x_t)\tilde\varepsilon_{t+1} \\
&= b_{t+1}, \\[4pt]
\tilde s_{t+1}^{(0)} &= \tilde s_t^{(m-1)} + \gamma q(\tilde x_t)\tilde\varepsilon_{t+1} - a_{t+1} \\
&= s_t^{(m-1)} - A_t + \gamma q(x_t)\varepsilon_{t+1} - a_{t+1} \\
&= s_{t+1}^{(0)} - A_{t+1}, \\[4pt]
\tilde s_{t+1}^{(i)} &= \tilde s_t^{(i-1)} - a_{t+1} \\
&= s_t^{(i-1)} - A_t - a_{t+1}, \qquad i = 1, \dots, m-1 \\
&= s_{t+1}^{(i)} - A_{t+1}, \qquad i = 1, \dots, m-1, \\[4pt]
\tilde y_{t+1+h|t+1} &= \tilde\ell_{t+1} + \phi_h \tilde b_{t+1} + \tilde s_{t+1}^{(m - h_m^+)} \\
&= (\ell_{t+1} + A_{t+1}) + \phi_h b_{t+1} + \left( s_{t+1}^{(m - h_m^+)} - A_{t+1} \right) \\
&= \hat y_{t+1+h|t+1},
\end{aligned}
$$

where $h_m^+ = \left[(h-1) \bmod m\right] + 1$.
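The induction can also be checked numerically. The following sketch (our own illustration, not code from the book) runs the non-normalized and normalized ETS(A,A,A) recursions side by side on shared innovations, taking $\phi = 1$ and $q(x) = 1$ (a Class 1 case), and confirms (8.8)–(8.10) at every step: the level offset equals $A_t$, each seasonal offset equals $-A_t$, and the one-step forecasts coincide. All parameter values and seeds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 4, 40                        # season length, sample size
alpha, beta, gamma = 0.3, 0.05, 0.2 # illustrative smoothing parameters
l, b = 10.0, 0.5                    # seed level and trend
s = np.array([1.0, -0.5, 2.0, -2.5])
s -= s.mean()                       # normalized seeds, so A_0 = 0

lt, bt, st = l, b, s.copy()         # non-normalized states
ln, bn, sn = l, b, s.copy()         # normalized states
A = 0.0                             # accumulated mean correction A_t

for t in range(n):
    eps = rng.normal()
    # one-step forecasts agree (8.10); q(x) = 1 in this Class 1 case
    assert np.isclose(lt + bt + st[m - 1], ln + bn + sn[m - 1])
    a = (gamma / m) * eps           # a_t = (gamma/m) q(x~_{t-1}) eps~_t
    # non-normalized ETS(A,A,A) recursion
    lt, bt = lt + bt + alpha * eps, bt + beta * eps
    st = np.concatenate(([st[m - 1] + gamma * eps], st[:m - 1]))
    # normalized recursion: the level absorbs a_t, every seasonal sheds it
    ln, bn = ln + bn + alpha * eps + a, bn + beta * eps
    sn = np.concatenate(([sn[m - 1] + gamma * eps - a], sn[:m - 1]))
    A += a                          # (8.8): A_t = A_{t-1} + a_t
    assert np.isclose(ln, lt + A)           # (8.9) for the level
    assert np.allclose(sn, st - A)          # (8.9) for the seasonals
    assert np.isclose(sn.mean(), 0.0)       # seasonals stay normalized
```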



 

To prove (8.11) and (8.12), we use the notation in Chap. 6 to write the normalized Class 1 and Class 2 models as

 

$$\tilde y_t = w'\tilde x_{t-1} + q(\tilde x_{t-1})\varepsilon_t, \qquad (8.27a)$$
$$\tilde x_t = F\tilde x_{t-1} + \tilde g\, q(\tilde x_{t-1})\varepsilon_t, \qquad (8.27b)$$

where $q(\tilde x_{t-1}) = 1$ for Class 1 models and $q(\tilde x_{t-1}) = w'\tilde x_{t-1} = \hat y_{t|t-1}$ for Class 2 models. In addition, $\tilde g = g + \gamma$, where $\gamma = \left[\gamma/m,\ 0,\ -\gamma/m,\ -(\gamma/m)\mathbf{1}'_{(m-1)}\right]'$. Recall from Example 6.2 (p. 96) that in the ETS(A,Ad,A) and ETS(M,Ad,A) models

$$w'F^i = [1,\ \phi_{i+1},\ d_{i+1,m},\ d_{i+2,m},\ \dots,\ d_{i+m,m}], \qquad (8.28)$$

where $\phi_i = \phi + \phi^2 + \cdots + \phi^i$, and $d_{j,m} = 1$ if $j \equiv 0 \pmod m$ and $d_{j,m} = 0$ otherwise. It follows that $w'F^i\gamma = 0$.
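Here is a quick numerical sanity check of (8.28) and the identity $w'F^i\gamma = 0$ (a sketch under an assumed state ordering $(\ell_t, b_t, s_t^{(0)}, \dots, s_t^{(m-1)})'$ and illustrative parameter values; not code from the book). It also confirms the consequence used below: adding $\gamma$ to $g$ does not change $w'F^{j-1}g$.

```python
import numpy as np

m, phi, gam = 4, 0.9, 0.2          # illustrative values
k = m + 2                          # state is (level, trend, m seasonals)'

w = np.zeros(k); w[0], w[1], w[-1] = 1.0, phi, 1.0
F = np.zeros((k, k))
F[0, 0], F[0, 1] = 1.0, phi        # level row
F[1, 1] = phi                      # damped trend row
F[2, -1] = 1.0                     # new seasonal picks up s^{(m-1)}
F[3:, 2:-1] = np.eye(m - 1)        # remaining seasonals shift down

gvec = np.zeros(k)                 # the adjustment vector "gamma"
gvec[0] = gam / m                  # +gamma/m added to the level
gvec[2:] = -gam / m                # -gamma/m from every seasonal

g = np.zeros(k)
g[:3] = [0.3, 0.05, gam]           # illustrative (alpha, beta, gamma)

for i in range(3 * m):
    Fi = np.linalg.matrix_power(F, i)
    assert abs(w @ Fi @ gvec) < 1e-12                   # w'F^i gamma = 0
    assert np.isclose(w @ Fi @ (g + gvec), w @ Fi @ g)  # c~_j = c_j
print("w'F^i gamma = 0 and c~_j = c_j hold numerically")
```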

Because we have shown that $\tilde y_{t+h|t} = \hat y_{t+h|t}$, the proofs in Chap. 6 can be applied to the two different cases in (8.27) to find $\tilde v_{t+h|t}$. The two variances have the forms in (8.11) and (8.12) with $\tilde c_j = w'F^{j-1}\tilde g$. Because $w'F^i\gamma = 0$,

$$\tilde c_j = w'F^{j-1}\tilde g = w'F^{j-1}(g + \gamma) = w'F^{j-1}g = c_j,$$

and $\tilde v_{t+h|t} = v_{t+h|t}$ for all of the Class 1 and Class 2 models in Tables 2.2 and 2.3.

 

The verification of (8.13) is now addressed. When we have observed values for $y_1, y_2, \dots, y_t$, we can use these values and the models in (8.6) and (8.7) to find $x_t$ and $\tilde x_t$, respectively. Then, starting with $x_t$ and $\tilde x_t$, we can use the same models to generate values for $y_{t+1}, y_{t+2}, \dots, y_{t+h}$ and $\tilde y_{t+1}, \tilde y_{t+2}, \dots, \tilde y_{t+h}$, respectively, by randomly selecting values $\varepsilon_{t+1}, \varepsilon_{t+2}, \dots, \varepsilon_{t+h}$ from a probability distribution with mean 0 and standard deviation $\sigma$. If we treat the simulated values $y_{t+1}, y_{t+2}, \dots, y_{t+h}$ as the observed values, we can extend the results in (8.8)–(8.10) to the simulated values for $x_{t+1}, x_{t+2}, \dots, x_{t+h}$ and $\tilde x_{t+1}, \tilde x_{t+2}, \dots, \tilde x_{t+h}$. It follows that the simulated prediction distributions using (8.6) and (8.7) are identical because, for the $i$th simulated value,

$$
\begin{aligned}
\tilde y^{(i)}_{t+h} &= \tilde\ell^{(i)}_{t+h-1} + \phi\tilde b^{(i)}_{t+h-1} + \tilde s^{(m-1)(i)}_{t+h-1} + q\big(\tilde x^{(i)}_{t+h-1}\big)\varepsilon_{t+h} \\
&= \big(\ell^{(i)}_{t+h-1} + A_{t+h-1}\big) + \phi b^{(i)}_{t+h-1} + \big(s^{(m-1)(i)}_{t+h-1} - A_{t+h-1}\big) + q\big(x^{(i)}_{t+h-1}\big)\varepsilon_{t+h} \\
&= y^{(i)}_{t+h}.
\end{aligned}
$$


 

Models with Regressor Variables

 

Up to this point in the book, we have considered models based upon a single series. However, in many applications, additional information may be available in the form of input or regressor variables; the name may be rather opaque, but we prefer it to the commonly used but potentially misleading description of independent variables. We then refer to the series of interest as the dependent series. Regressor series may represent either explanatory or intervention variables.

 

An explanatory variable is one that provides the forecaster with additional information. For example, futures prices for petroleum products can foreshadow changes for consumers in prices at the pump. Despite the term “explanatory,” we do not require a causal relationship between the input and dependent variables, but rather a series that is available in timely fashion to improve the forecasting process. Thus, stock prices or surveys of consumer sentiment are explanatory in this sense, even though they may not have causal underpinnings in their relationship with a dependent variable.

 

An intervention is often represented by an indicator variable taking values 0 and 1, although more general forms are possible. These variables may represent planned changes (e.g., the introduction of new legislation) or unusual events that are recognized only in retrospect (e.g., extreme weather conditions). Indicator variables may also be used to flag unusual observations or outliers; if such values are not identified, they can distort the estimates of other parameters in the model.

 

In the next section we introduce the general linear innovations model and then examine a special case which provides insights into its structure. The model development parallels that of the multiple source of error model (see Harvey 1989, Chap. 7). We illustrate the use of these methods with two examples in Sect. 9.2; the first uses intervention variables to modify a univariate sales series and the second considers a leading indicator model for gasoline prices. We conclude the chapter with a discussion of diagnostic tests based upon the residuals.



 

9.1 The Linear Innovations Model with Regressors

 

We start from the standard linear innovations model introduced in Chap. 3:

 

$$y_t = w'x_{t-1} + \varepsilon_t, \qquad (9.1a)$$
$$x_t = Fx_{t-1} + g\varepsilon_t. \qquad (9.1b)$$

The regressor variables are incorporated into the measurement equation (9.1a) and the model has the general form:

 

$$y_t = w'x_{t-1} + z_t'p + \varepsilon_t, \qquad (9.2a)$$
$$x_t = Fx_{t-1} + g\varepsilon_t. \qquad (9.2b)$$

The vector $p$, formed from the regression coefficients, consists of unknown quantities that need to be estimated. The vector $z_t$ contains the regressor variables.

 

Although $p$ is time invariant, it is convenient to provide it with a time subscript and rewrite the model (9.2) as


 

$$y_t = w'x_{t-1} + z_t'p_{t-1} + \varepsilon_t,$$
$$x_t = Fx_{t-1} + g\varepsilon_t,$$
$$p_t = p_{t-1}.$$

These equations can be stacked to give

 

$$y_t = \bar w_t'\bar x_{t-1} + \varepsilon_t, \qquad (9.3a)$$
$$\bar x_t = \bar F\bar x_{t-1} + \bar g\varepsilon_t, \qquad (9.3b)$$

where

$$\bar x_t = \begin{bmatrix} x_t \\ p_t \end{bmatrix}, \qquad \bar w_t = \begin{bmatrix} w \\ z_t \end{bmatrix}, \qquad \bar F = \begin{bmatrix} F & 0 \\ 0 & I \end{bmatrix}, \qquad \bar g = \begin{bmatrix} g \\ 0 \end{bmatrix}.$$


Equations (9.3) have the form of a general time-invariant innovations state space model.
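The stacking in (9.3) is mechanical and can be expressed as a small helper. This is a sketch in NumPy; the function name and interface are our own, not from the text.

```python
import numpy as np

def stack(w, F, g, z_t):
    """Return (w_bar_t, F_bar, g_bar) for the stacked model (9.3)."""
    k = len(z_t)                                   # number of regressors
    w_bar_t = np.concatenate([w, z_t])             # time-varying via z_t
    F_bar = np.block([
        [F, np.zeros((F.shape[0], k))],
        [np.zeros((k, F.shape[1])), np.eye(k)],    # p_t = p_{t-1}
    ])
    g_bar = np.concatenate([g, np.zeros(k)])       # no error enters p_t
    return w_bar_t, F_bar, g_bar
```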

 

As an example, consider a local level model where a single intervention occurs at time $t = 10$ that has a transient effect on the series (a spike) of an unknown amount $p_1$. The measurement equation becomes $y_t = \ell_{t-1} + z_t p_1 + \varepsilon_t$ and the transition equation is simply $\ell_t = \ell_{t-1} + \alpha\varepsilon_t$, where $z_t$ is an indicator variable that is 1 in period 10 and 0 in all other periods. Similarly, if the effect is permanent (a step), we define the regressor variable as $z_t = 1$ if $t \geq 10$ and $z_t = 0$ otherwise.

In either case, the model may be written in the form (9.3) as

$$y_t = \begin{bmatrix} 1 & z_t \end{bmatrix}\begin{bmatrix} \ell_{t-1} \\ p_1 \end{bmatrix} + \varepsilon_t,$$
$$\begin{bmatrix} \ell_t \\ p_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \ell_{t-1} \\ p_1 \end{bmatrix} + \begin{bmatrix} \alpha \\ 0 \end{bmatrix}\varepsilon_t.$$
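A short simulation of this intervention model in its stacked form (9.3) follows. The values of $\alpha$, $p_1$, the seed level, and the noise scale are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, p1, n = 0.4, 5.0, 30         # illustrative values

F_bar = np.eye(2)                   # [[1, 0], [0, 1]] as displayed above
g_bar = np.array([alpha, 0.0])
x_bar = np.array([10.0, p1])        # stacked state (l_t, p1)'

y = np.empty(n)
for t in range(n):
    w_bar_t = np.array([1.0, 1.0 if t == 10 else 0.0])  # spike at t = 10
    eps = rng.normal(0.0, 0.5)
    y[t] = w_bar_t @ x_bar + eps            # measurement (9.3a)
    x_bar = F_bar @ x_bar + g_bar * eps     # transition (9.3b)
# for a permanent (step) effect use: 1.0 if t >= 10 else 0.0
```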


 

We make the usual assumption that the error terms are independent and follow Gaussian distributions with zero means and constant variance; that is, $\varepsilon_t \sim \mathrm{N}(0, \sigma^2)$. The method of estimation in Chap. 5 may be readily adapted to fit a model with the general form (9.3). It treats the seed state as a fixed vector and combines it with the model's parameters for optimization of the likelihood or sum of squared errors functions. Because the regression coefficients form part of the seed state vector, estimates of them are obtained from the optimized value of the seed state vector.
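The following sketch illustrates that estimation scheme on the local level model with a spike regressor from the previous section: $\alpha$ and the seed state $(\ell_0, p_1)$ are chosen jointly to minimize the sum of squared one-step errors. The data are simulated, and the use of scipy's Nelder–Mead optimizer is our own assumption, not a prescription from the text.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, alpha_true, p1_true = 120, 0.3, 4.0
z = (np.arange(n) == 10).astype(float)      # spike regressor
level, y = 10.0, np.empty(n)
for t in range(n):                          # simulate the model
    eps = rng.normal(0.0, 0.5)
    y[t] = level + z[t] * p1_true + eps
    level += alpha_true * eps

def sse(theta):
    """Sum of squared one-step errors for (alpha, l0, p1)."""
    alpha, l0, p1 = theta
    lev, total = l0, 0.0
    for t in range(n):
        e = y[t] - (lev + z[t] * p1)        # one-step innovation
        total += e * e
        lev += alpha * e                    # local level transition
    return total

res = minimize(sse, x0=[0.5, y[0], 0.0], method="Nelder-Mead")
print("estimates of (alpha, l0, p1):", res.x)
```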

 

Predictions can be undertaken with a suitable adaptation of the method in Chap. 6. At this stage, when developing prediction distributions, we assume that the errors are homoscedastic. It is now necessary to supplement this method with future values of the regressors. If the regressors consist of leading indicator variables, such values will be known up to a certain future point of time. Moreover, if they consist of indicator variables reflecting the effect of known future interventions that have also occurred in the past, then such values are also known. However, when they are unknown, predictions of the future values of the regressors are needed. It is then best to revert to a multivariate time series framework (see Chap. 17).

 

This approach is easily adapted to accommodate heteroscedastic innovations of the type considered in Chap. 4. A grand model is obtained that has the general form

$$y_t = \bar w_t'\bar x_{t-1} + r(\bar x_{t-1})\varepsilon_t, \qquad (9.4a)$$
$$\bar x_t = \bar F\bar x_{t-1} + \bar g(\bar x_{t-1})\varepsilon_t. \qquad (9.4b)$$

The model may be fitted using the method from Chap. 5. Forecasts and prediction distributions may then be obtained by methods for heteroscedastic data described in Chap. 6.

 

 

9.2 Some Examples

 

In this section, we assume that a homoscedastic model is adequate and that the errors are independent and follow a Gaussian distribution; that is, $\varepsilon_t \sim \mathrm{N}(0, \sigma^2)$.

 

9.2.1 An Example Using Indicator Variables

 

We now examine a simple example to illustrate the methods developed so far. We consider a series that gives the sales of a product for 62 weeks starting in early 2003. We refer to the series, which was supplied by a company, as “FM Sales.” The series is plotted in Fig. 9.1.

 

We will incorporate three indicator variables as regressors:

 

$z_1 = 1$ in weeks 1–12, when product advertising was in a low-profile mode, and $z_1 = 0$ otherwise



 

