
Heteroscedasticity

 

A common form of heteroscedasticity arises when the variance at time t is a function of the mean level of the series at that time. Such structures are a major motivation for the multiplicative models introduced in Chap. 4 and discussed at various points throughout this book. If model-building starts from the linear innovations state space form, it is quite likely that the residuals will not indicate a uniform variance over time. Because many series display a positive trend, the variability at the end of the series will often be greater than that at the outset. Based upon this intuition, Harvey (1989, pp. 259–260) suggests dividing the series into three nearly equal parts of length $I = \lfloor (n+1)/3 \rfloor$. The test statistic is defined as


\[
H(I) = \left(\sum_{t=n-I+1}^{n} e_t^2\right) \Bigg/ \left(\sum_{t=1}^{I} e_t^2\right).
\]


 

When the error process is Gaussian and the null hypothesis of homoscedasticity holds, the sampling distribution of H(I) is approximately F(I, I).

 

The LL + seasonals + Lspot(1) model has H = 6.3 with I = 44, which is highly significant and indicates the need to account for heteroscedasticity.
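As a concrete illustration (not code from the book), the sketch below computes H(I) from a vector of one-step residuals and compares it with the F(I, I) distribution using SciPy. The residual vector, the significance level and the simulated example series are all assumptions made for the demonstration.

```python
import numpy as np
from scipy import stats

def harvey_h_test(e, alpha=0.05):
    """Harvey's heteroscedasticity test: ratio of the sum of squared
    residuals over the last third of the sample to that over the first third."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    I = (n + 1) // 3                                  # floor((n + 1) / 3)
    H = np.sum(e[n - I:] ** 2) / np.sum(e[:I] ** 2)
    # Under Gaussian errors and homoscedasticity, H is approximately F(I, I).
    p_value = 1.0 - stats.f.cdf(H, I, I)
    reject = H > stats.f.ppf(1.0 - alpha, I, I)
    return H, p_value, reject

# Simulated residuals whose variance grows over time (n = 132 gives I = 44).
rng = np.random.default_rng(0)
e = rng.normal(scale=np.linspace(1.0, 3.0, 132), size=132)
print(harvey_h_test(e))
```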

 

Because the purpose of this section was to illustrate ideas, rather than to discuss model-building in detail (outlier identification, additional variables, etc.), we do not pursue matters further at this stage. However, it is evident that the most critical concern is the increased volatility in the series, and we return to that question in Chap. 19.

 

 

9.4 Exercises

 

Exercise 9.1. Extend the model given in (9.2) to include regressor variables in the transition equations, where the vectors of coefficients $p_1$ and $p_2$ will typically include some zero elements:

 

\[
\begin{aligned}
y_t &= w'x_{t-1} + z_t'p_1 + \varepsilon_t,\\
x_t &= F x_{t-1} + z_t'p_2 + g\varepsilon_t.
\end{aligned}
\]

 

Express the model in reduced form by eliminating the state variables. Hence show that the regressors in the transition equation only affect the dependent variable after a one-period delay.

 

Exercise 9.2. Consider the special case of the model defined in Exercise 9.1 corresponding to the ETS(A,N,N) process with a single regressor variable:

 

\[
\begin{aligned}
y_t &= x_{t-1} + z_t p_1 + \varepsilon_t,\\
x_t &= x_{t-1} + z_t p_2 + \alpha\varepsilon_t.
\end{aligned}
\]



 

Show that the reduced form is:

 

\[
y_t - y_{t-1} = p_1(z_t - z_{t-1}) + p_2 z_{t-1} + \varepsilon_t - (1-\alpha)\varepsilon_{t-1}.
\]

 

Further show that the same reduced form could be obtained by including both $z_{t+1}$ and $z_t$ in the transition equation, but omitting them from the measurement equation.

 

Exercise 9.3. The residual checks in Sect. 9.3 suggest the need for an autoregressive term at lag 2 in the oil price model. Develop such a model and compare its performance with the results given in Table 9.2.

 

Exercise 9.4. The events of 11 September 2001 produced a substantial short-term drop in the number of air passengers. Use the monthly series on the number of enplanements on domestic aircraft (data set enplanements) to develop an intervention model to describe the series. Use two indicator variables to model the changes: SEPT1 = 1 in September 2001 and = 0 otherwise; and SEPT2 = 1 in and after October 2001 and = 0 otherwise. Hence estimate the overall effects upon air travel. [For further discussion, see Ord and Young (2004).]
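As a starting point for this exercise (not code from the book), the sketch below constructs the two indicator variables with pandas. The monthly index used here is a placeholder assumption; it should be replaced by the actual index of the enplanements series once the data have been loaded.

```python
import pandas as pd

# Placeholder monthly index; align this with the enplanements data when loaded.
idx = pd.period_range(start="1996-01", periods=120, freq="M")

# SEPT1: pulse indicator, 1 in September 2001 only.
sept1 = pd.Series((idx == pd.Period("2001-09", freq="M")).astype(int), index=idx)

# SEPT2: step indicator, 1 in and after October 2001.
sept2 = pd.Series((idx >= pd.Period("2001-10", freq="M")).astype(int), index=idx)

X = pd.DataFrame({"SEPT1": sept1, "SEPT2": sept2})
print(X.loc["2001-08":"2001-11"])
```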

 

Exercise 9.5. The US Conference Board carries out a monthly survey on consumer confidence. Although the use of this measure as a true explanation of economic change is debatable, its primary benefit is that it appears before many macroeconomic indices are released. The data set unemp.cci contains 100 monthly observations on the consumer confidence index (CCI) and seasonally adjusted civilian unemployment (UNEMP) in the US, covering the period June 1997–September 2005:

 

a. Develop univariate models for each series and establish that each series is close to a random walk.

b. Develop a state space regression model that uses CCI (lagged one month) and the SEPT2 indicator defined in Exercise 9.4 to predict UNEMP.


 

10 Some Properties of Linear Models

 

In this chapter, we discuss some of the mathematical properties of the linear innovations state space models described in Chap. 3. These results are based on Hyndman et al. (2008).

 

We provide conditions that ensure the model is of minimal dimension (Sect. 10.1) and conditions that guarantee the model is stable (Sect. 10.2). We will see that the non-seasonal models are already of minimal dimension, but that the seasonal models are slightly larger than necessary. The normalized seasonal models, introduced in Chap. 8, are of minimal dimension.

 

The stability conditions discussed in Sect. 10.2 can be used to derive the associated parameter space. We find that the usual parameter restrictions (requiring all smoothing parameters to lie between 0 and 1) do not always lead to stable models. Exact parameter restrictions are derived for all the linear models.

 

 

10.1 Minimal Dimensionality for Linear Models

 

The linear innovations state space models (defined in Chap. 3) are of the form

 

\begin{align}
y_t &= w'x_{t-1} + \varepsilon_t, \tag{10.1a}\\
x_t &= F x_{t-1} + g\varepsilon_t. \tag{10.1b}
\end{align}

The model is not unique; for example, an equivalent model can be obtained simply by adding an extra row to the state vector and adding a row containing only zeros to each of w, F and g. Therefore it is of interest to know when the model has the shortest possible state vector $x_t$, in which case we say it has “minimal dimension.”
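The following sketch (an illustration, not from the book) makes this non-uniqueness concrete: it simulates a local level model, ETS(A,N,N), and an augmented version carrying a redundant zero state, and confirms that both generate identical observations from the same innovations.

```python
import numpy as np

def simulate(w, F, g, x0, eps):
    """Simulate y_t = w'x_{t-1} + eps_t, x_t = F x_{t-1} + g*eps_t."""
    x = np.array(x0, dtype=float)
    y = np.empty(len(eps))
    for t, e in enumerate(eps):
        y[t] = w @ x + e
        x = F @ x + g * e
    return y

rng = np.random.default_rng(1)
eps = rng.normal(size=50)
alpha, level0 = 0.3, 10.0

# Minimal ETS(A,N,N): one-dimensional state (the level).
y1 = simulate(np.array([1.0]), np.array([[1.0]]), np.array([alpha]), [level0], eps)

# Equivalent two-dimensional version with a redundant zero state.
w2 = np.array([1.0, 0.0])
F2 = np.array([[1.0, 0.0], [0.0, 0.0]])
g2 = np.array([alpha, 0.0])
y2 = simulate(w2, F2, g2, [level0, 0.0], eps)

print(np.allclose(y1, y2))   # True: the extra state changes nothing
```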

 

In particular, we wish to know whether the specific cases of the model given in Table 2.2 on page 21 are of minimal dimension. The coefficient matrices F, g and w can easily be determined from Table 2.2, and are given below.



 

Here $I_k$ denotes the $k \times k$ identity matrix and $0_k$ denotes a zero vector of length k.

 

\[
\begin{aligned}
\text{ETS(A,N,N):}\quad & w = 1, & F &= 1, & g &= \alpha\\[6pt]
\text{ETS(A,A$_d$,N):}\quad & w = \begin{bmatrix}1\\ 1\end{bmatrix}, & F &= \begin{bmatrix}1 & 1\\ 0 & \phi\end{bmatrix}, & g &= \begin{bmatrix}\alpha\\ \beta\end{bmatrix}\\[6pt]
\text{ETS(A,N,A):}\quad & w = \begin{bmatrix}1\\ 0_{m-1}\\ 1\end{bmatrix}, & F &= \begin{bmatrix}1 & 0'_{m-1} & 0\\ 0 & 0'_{m-1} & 1\\ 0_{m-1} & I_{m-1} & 0_{m-1}\end{bmatrix}, & g &= \begin{bmatrix}\alpha\\ \gamma\\ 0_{m-1}\end{bmatrix}\\[6pt]
\text{ETS(A,A$_d$,A):}\quad & w = \begin{bmatrix}1\\ 1\\ 0_{m-1}\\ 1\end{bmatrix}, & F &= \begin{bmatrix}1 & 1 & 0'_{m-1} & 0\\ 0 & \phi & 0'_{m-1} & 0\\ 0 & 0 & 0'_{m-1} & 1\\ 0_{m-1} & 0_{m-1} & I_{m-1} & 0_{m-1}\end{bmatrix}, & g &= \begin{bmatrix}\alpha\\ \beta\\ \gamma\\ 0_{m-1}\end{bmatrix}
\end{aligned}
\]
The matrices for ETS(A,A,N) and ETS(A,A,A) are the same as for ETS(A,Ad,N) and ETS(A,Ad,A) respectively, but with φ = 1.
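For experimentation, the following sketch (not code from the book) assembles the ETS(A,N,A) coefficient matrices in NumPy for a given seasonal period m, transcribing the block structure listed above; the parameter values are arbitrary.

```python
import numpy as np

def ets_ana_matrices(alpha, gamma, m):
    """Coefficient matrices of ETS(A,N,A); the state vector is
    (level, s_t, s_{t-1}, ..., s_{t-m+1}), so p = m + 1."""
    p = m + 1
    w = np.zeros(p); w[0] = 1.0; w[-1] = 1.0       # y_t = level_{t-1} + s_{t-m} + eps_t
    F = np.zeros((p, p))
    F[0, 0] = 1.0                                  # level carries over
    F[1, -1] = 1.0                                 # new seasonal term uses s_{t-m}
    F[2:, 1:-1] = np.eye(m - 1)                    # shift the stored seasonal terms
    g = np.zeros(p); g[0] = alpha; g[1] = gamma    # innovation loadings
    return w, F, g

w, F, g = ets_ana_matrices(alpha=0.2, gamma=0.1, m=4)
print(F)
```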

The following definitions are given by Hannan and Deistler (1988, pp. 44–45):

 

Definition 10.1. The model (10.1) is said to be observable if Rank (O) = p where

 

\[
O = \bigl[\, w,\ F'w,\ (F')^2 w,\ \dots,\ (F')^{p-1} w \,\bigr]
\]
and p is the length of the state vector $x_t$.

 

Definition 10.2. The model (10.1) is said to be reachable if Rank (R) = p where

 

\[
R = \bigl[\, g,\ Fg,\ F^2 g,\ \dots,\ F^{p-1} g \,\bigr]
\]
and p is the length of the state vector $x_t$.

 

Reachability and observability are desirable properties of a state space model because of the following result from Hannan and Deistler (1988, p. 48):

 

Theorem 10.1. The state space model (10.1) is of minimal dimension if and only if it is observable and reachable.

 

Example 10.1: ETS(A,A,N)

 

The observability matrix is

 

\[
O = [\,w,\ F'w\,] = \begin{bmatrix}1 & 1\\ 1 & 2\end{bmatrix},
\]



 

which has rank 2. The reachability matrix is

\[
R = [\,g,\ Fg\,] = \begin{bmatrix}\alpha & \alpha+\beta\\ \beta & \beta\end{bmatrix},
\]

 

which has rank 2 unless $\beta = 0$. Consequently, the model is of minimal dimension provided $\beta \neq 0$.

 

A similar argument can be used (see Exercise 10.1a) to show that the non-seasonal models ETS(A,N,N) and ETS(A,Ad,N) are both reachable and observable, and therefore of minimal dimension.
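The rank conditions of Definitions 10.1 and 10.2 are straightforward to verify numerically. The sketch below (with illustrative parameter values, not the book's code) does so for ETS(A,A,N), reproducing the conclusion of Example 10.1.

```python
import numpy as np

def observability(w, F):
    """O = [w, F'w, (F')^2 w, ..., (F')^(p-1) w]."""
    p = len(w)
    return np.column_stack([np.linalg.matrix_power(F.T, j) @ w for j in range(p)])

def reachability(g, F):
    """R = [g, Fg, F^2 g, ..., F^(p-1) g]."""
    p = len(g)
    return np.column_stack([np.linalg.matrix_power(F, j) @ g for j in range(p)])

# ETS(A,A,N) with illustrative values alpha = 0.5, beta = 0.1.
alpha, beta = 0.5, 0.1
w = np.array([1.0, 1.0])
F = np.array([[1.0, 1.0], [0.0, 1.0]])
g = np.array([alpha, beta])

O, R = observability(w, F), reachability(g, F)
print(np.linalg.matrix_rank(O), np.linalg.matrix_rank(R))   # 2 2, so minimal dimension
```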

 

10.1.1 Seasonal Models

 

Consider the ETS(A,N,A) model, for which the rank of O is less than p and the rank of R is less than p. This is because, for the ETS(A,N,A) model, $(F')^{p-1} = F^{p-1} = I_p$, so the final column of O repeats w and the final column of R repeats g. Therefore, model ETS(A,N,A) is neither reachable nor observable. A similar argument (Exercise 10.1b) shows that models ETS(A,A,A) and ETS(A,Ad,A) are also neither reachable nor observable.

 

These problems arise because of a redundancy in the model. For example, the ETS(A,N,A) model is given by $y_t = \ell_{t-1} + s_{t-m} + \varepsilon_t$, where the level and seasonal components are given by

 

\[
\ell_t = \ell_{t-1} + \alpha\varepsilon_t
\qquad\text{and}\qquad
s_t = s_{t-m} + \gamma\varepsilon_t.
\]

 

So both the level and seasonal components have long run features due to unit roots. In other words, both can model the level of the series, and the seasonal component is not constrained to lie anywhere near zero. This is the same problem that led to the use of normalizing in Chap. 8.

 

Let L denote the lag operator defined by $Ly_t = y_{t-1}$. Then, by expanding $s_t = e_t/(1 - L^m)$, where $e_t = \gamma\varepsilon_t$, it can be seen that $s_t$ can be decomposed into two processes: a level displaying a unit root at the zero frequency, and a purely seasonal process having unit roots at the seasonal frequencies:
\[
s_t = \ell_t^{*} + s_t^{*},
\qquad\text{where}\qquad
\ell_t^{*} = \ell_{t-1}^{*} + m^{-1}e_t
\quad\text{and}\quad
S(L)\,s_t^{*} = \theta(L)\,e_t,
\]
$S(L) = 1 + L + \cdots + L^{m-1}$ represents the seasonal summation operator, and
\[
\theta(L) = m^{-1}\bigl[(m-1) + (m-2)L + \cdots + 2L^{m-3} + L^{m-2}\bigr].
\]
The long-run component $\ell_t^{*}$ should be part of the level term.

This leads to an alternative model specification where the seasonal equation for models ETS(A,N,A), ETS(A,A,A) and ETS(A,Ad,A) is replaced by

 

\[
S(L)\,s_t = \theta(L)\,\gamma\varepsilon_t. \tag{10.2}
\]


 

The other equations remain the same, as the additional level term can be absorbed into the original level equation by a simple change of parameters.

 

Noting that $\theta(L)/S(L) = \bigl[1 - m^{-1}S(L)\bigr]/(1 - L^m)$, we see that (10.2) can be written as
\[
s_t = s_{t-m} + \gamma\varepsilon_t - \frac{\gamma}{m}\bigl(\varepsilon_t + \varepsilon_{t-1} + \cdots + \varepsilon_{t-m+1}\bigr).
\]

In other words, the seasonal term is calculated as in the original models, but is then adjusted by subtracting the average of the last m shocks. The effect of this adjustment is equivalent to the normalization procedure outlined in Chap. 8, in which the seasonal terms $s_t, \dots, s_{t-m+1}$ are adjusted every time period to ensure that they sum to zero. Models using the seasonal component (10.2) will be referred to as “normalized” versions of ETS(A,N,A), ETS(A,A,A) and ETS(A,Ad,A). It can be shown (Exercise 10.1c) that the normalized models are of minimal dimension.
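The adjusted recursion is easy to simulate directly. The sketch below is only an illustration: the parameter values and initial seasonal states are invented, and pre-sample shocks are treated as zero, a convention chosen for the sketch rather than taken from the text.

```python
import numpy as np

def normalized_seasonal(eps, gamma, m, s_init):
    """Seasonal component under (10.2): the usual update s_t = s_{t-m} + gamma*eps_t,
    followed by subtraction of (gamma/m) times the sum of the last m shocks
    (pre-sample shocks are treated as zero in this sketch)."""
    eps = np.asarray(eps, dtype=float)
    n = len(eps)
    # s holds the initial values s_{1-m}, ..., s_0 followed by s_1, ..., s_n.
    s = np.concatenate([np.asarray(s_init, dtype=float), np.zeros(n)])
    for t in range(n):
        recent = eps[max(0, t - m + 1): t + 1]      # eps_t, ..., eps_{t-m+1}
        s[m + t] = s[t] + gamma * eps[t] - (gamma / m) * recent.sum()
    return s[m:]

rng = np.random.default_rng(2)
eps = rng.normal(size=24)
s = normalized_seasonal(eps, gamma=0.1, m=4, s_init=[1.0, -0.5, 0.3, -0.8])
print(np.round(s[:8], 3))
```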

 

 

10.2 Stability and the Parameter Space

 

In Chap. 3, we found (p. 36) that, for linear models of the form (10.1), we could write the state vector as

 

\[
x_t = D^t x_0 + \sum_{j=0}^{t-1} D^j g\, y_{t-j},
\]

 

where $D = F - gw'$ is the discount matrix. So for initial conditions to have a negligible effect on future states, we need $D^t$ to converge to zero. Therefore, we require D to have all eigenvalues inside the unit circle. We call this condition stability (following Hannan and Deistler 1988, p. 48).

 

Definition 10.3. The model (10.1) is said to be stable if all eigenvalues of $D = F - gw'$ lie inside the unit circle.

 

Stability is a desirable property of a time series model because we want models where the distant past has a negligible effect on the present state.
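Checking stability numerically amounts to a single eigenvalue computation. The sketch below (illustrative parameter values, not code from the book) applies it to ETS(A,A,N), i.e. the damped model with φ = 1.

```python
import numpy as np

def is_stable(F, g, w):
    """A linear innovations model is stable if every eigenvalue of
    D = F - g w' lies strictly inside the unit circle."""
    D = F - np.outer(g, w)
    return bool(np.all(np.abs(np.linalg.eigvals(D)) < 1.0))

# ETS(A,A,N): state (level, growth), illustrative smoothing parameters.
alpha, beta = 0.5, 0.1
w = np.array([1.0, 1.0])
F = np.array([[1.0, 1.0], [0.0, 1.0]])
g = np.array([alpha, beta])

print(is_stable(F, g, w))   # True for these values
```

The same function applies to the other models once F, g and w are built from the listing in Sect. 10.1.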

 

In Chap. 3, we also found that

 

\[
\hat{y}_{t|t-1} = w'x_{t-1} = a_t + \sum_{j=1}^{t-1} c_j\, y_{t-j},
\]

 

where $a_t = w'D^{t-1}x_0$ and $c_j = w'D^{j-1}g$. Thus, the forecast is a linear function of the past observations and the seed state vector. This result shows that, for forecasting purposes, we may impose a condition weaker than stability, which we call forecastability:

 

Definition 10.4. The model (10.1) is forecastable if

 

                       
\[
\sum_{j=1}^{\infty} |c_j| < \infty
\quad\text{and}\quad
\lim_{t \to \infty} a_t = a. \tag{10.3}
\]

 

Obviously, any model that is stable is also forecastable.



 

On the other hand, it is possible for a model to have a unit eigenvalue for D, but to satisfy the forecastability condition (10.3). In other words, an unstable model can still produce stable forecasts provided the eigenvalues which cause the instability have no effect on the forecasts. This arises because D may have unit eigenvalues where w is orthogonal to the eigenvectors corresponding to the unit eigenvalues.

 

To avoid complications, we will assume that all the eigenvalues are distinct. In this case, we can write the eigendecomposition of D as $D = U\Lambda V$, where the columns of U are the eigenvectors of D, $\Lambda$ is a diagonal matrix containing the eigenvalues of D, and $V = U^{-1}$. Then

\[
c_{j+1} = w'D^j g = w'U\Lambda^j V g = \sum_i \lambda_i^j\, (w'u_i)(v_i'g),
\]

 

where $u_i$ is a column of U (a right eigenvector) and $v_i'$ is a row of V (a left eigenvector). By inspection, we see that the sequence converges to zero provided either $|\lambda_i| < 1$, $w'u_i = 0$ or $v_i'g = 0$, for each i. Further, the sequence only converges under these conditions. Similarly,

 

\[
a_{t+1} = w'D^t x_0 = \sum_i \lambda_i^t\, (w'u_i)(v_i'x_0).
\]

 

In this case, the sequence converges to a constant if and only if either $|\lambda_i| \le 1$, $w'u_i = 0$ or $v_i'x_0 = 0$, for each i. Thus, we can restate forecastability as follows.

 

Theorem 10.2. Let $\lambda_i$ denote an eigenvalue of $D = F - gw'$, and let $u_i$ be the corresponding right eigenvector and $v_i$ the corresponding left eigenvector. Then the model (10.1) is forecastable if and only if, for each i, at least one of the following four conditions is met:

1. $|\lambda_i| < 1$
2. $w'u_i = 0$
3. $|\lambda_i| = 1$ and $v_i'g = 0$
4. $v_i'x_0 = 0$ and $v_i'g = 0$

 

The concept of forecastability was noted by Sweet (1985) and Lawton (1998) for ETS(A,A,A) (additive Holt-Winters) forecasts, although neither author used a stochastic model as we do here. The phenomenon was also observed by Snyder and Forbes (2003) in connection with the ETS(A,A,A) model. The first general definition of this property was given by Hyndman et al. (2008).

 

We now establish stability and forecastability conditions for each of the linear models. For the damped models, we assume that φ is a fixed damping parameter between 0 and 1, and we consider the values of the other parameters that would lead to a stable model.



 

The value of D for each model is given below.

 

\[
\begin{aligned}
\text{ETS(A,N,N):}\quad & D = 1 - \alpha\\[6pt]
\text{ETS(A,A$_d$,N):}\quad & D = \begin{bmatrix}1-\alpha & 1-\alpha\\ -\beta & \phi-\beta\end{bmatrix}\\[6pt]
\text{ETS(A,N,A):}\quad & D = \begin{bmatrix}1-\alpha & 0'_{m-1} & -\alpha\\ -\gamma & 0'_{m-1} & 1-\gamma\\ 0_{m-1} & I_{m-1} & 0_{m-1}\end{bmatrix}\\[6pt]
\text{ETS(A,A$_d$,A):}\quad & D = \begin{bmatrix}1-\alpha & 1-\alpha & 0'_{m-1} & -\alpha\\ -\beta & \phi-\beta & 0'_{m-1} & -\beta\\ -\gamma & -\gamma & 0'_{m-1} & 1-\gamma\\ 0_{m-1} & 0_{m-1} & I_{m-1} & 0_{m-1}\end{bmatrix}
\end{aligned}
\]
Again, for ETS(A,A,N) and ETS(A,A,A), the corresponding result is obtained from ETS(A,Ad,N) and ETS(A,Ad,A) by setting φ = 1.
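As anticipated at the start of the chapter, the conventional restriction that the smoothing parameters lie between 0 and 1 does not coincide with the stable region. The sketch below is a numerical illustration (not the book's derivation): for ETS(A,N,N), where D = 1 − α, stability requires |1 − α| < 1, i.e. 0 < α < 2, and for ETS(A,A,N) a point with α > 1 can still be stable.

```python
import numpy as np

# ETS(A,N,N): D = 1 - alpha is scalar, so stability means |1 - alpha| < 1,
# i.e. 0 < alpha < 2, which is wider than the conventional interval (0, 1).
alphas = np.linspace(-0.5, 2.5, 601)
stable = np.abs(1.0 - alphas) < 1.0
print(alphas[stable].min(), alphas[stable].max())   # roughly 0 and 2

# ETS(A,A,N) (phi = 1): a point outside the unit box, alpha = 1.5, beta = 0.4.
alpha, beta = 1.5, 0.4
D = np.array([[1 - alpha, 1 - alpha], [-beta, 1 - beta]])
print(np.abs(np.linalg.eigvals(D)).max() < 1.0)     # True: stable although alpha > 1
```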

 

