
The transformation from series values to prediction errors can be shown to be

 

\hat{y}_{t|t-1} = \ell_{t-1} + b_{t-1} + s_{t-m}, \qquad \varepsilon_t = y_t - \hat{y}_{t|t-1},

\ell_t = \ell_{t-1} + b_{t-1} + \alpha\varepsilon_t, \qquad b_t = b_{t-1} + \beta\varepsilon_t,

s_t = s_{t-m} + \gamma\varepsilon_t.

 

This corresponds to a commonly used additive version of seasonal exponential smoothing (Winters 1960). An equivalent form of these transition equations is


 

\hat{y}_{t|t-1} = \ell_{t-1} + b_{t-1} + s_{t-m},   (3.16a)

\varepsilon_t = y_t - \hat{y}_{t|t-1},   (3.16b)

\ell_t = \alpha (y_t - s_{t-m}) + (1 - \alpha)(\ell_{t-1} + b_{t-1}),   (3.16c)

b_t = \beta^* (\ell_t - \ell_{t-1}) + (1 - \beta^*)\, b_{t-1},   (3.16d)

s_t = \gamma^* (y_t - \ell_t) + (1 - \gamma^*)\, s_{t-m},   (3.16e)


 

where the series value is deseasonalized in the trend equations and detrended in the seasonal equation, β* = β/α and γ* = γ/(1 − α). Equations (3.16c–e) can be interpreted as weighted averages, in which case the natural parametric restrictions are that each of α, β* and γ* lies in the (0, 1) interval. Equivalently, 0 < α < 1, 0 < β < α and 0 < γ < 1 − α. However, a consideration of the properties of the discount matrix D leads to a different parameter region; this will be discussed in Chap. 10.
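The weighted-average form (3.16) translates directly into a short recursion. The following sketch is a minimal illustration only — the function name, seed states and data are hypothetical, not taken from the book — and runs the additive seasonal smoothing recursions for given α, β* and γ*.

import numpy as np

def additive_holt_winters(y, m, alpha, beta_star, gamma_star, level0, trend0, seasonals0):
    """One pass of the additive seasonal smoothing recursions (3.16).
    Returns one-step-ahead forecasts and the final states."""
    level, trend = level0, trend0
    seasonals = list(seasonals0)          # seed seasonal components s_{1-m}, ..., s_0
    forecasts = []
    for t, y_t in enumerate(y):
        s_tm = seasonals[t]               # seasonal component from m periods ago
        y_hat = level + trend + s_tm      # (3.16a)
        e_t = y_t - y_hat                 # (3.16b)
        new_level = alpha * (y_t - s_tm) + (1 - alpha) * (level + trend)            # (3.16c)
        trend = beta_star * (new_level - level) + (1 - beta_star) * trend           # (3.16d)
        seasonals.append(gamma_star * (y_t - new_level) + (1 - gamma_star) * s_tm)  # (3.16e)
        level = new_level
        forecasts.append(y_hat)
    return np.array(forecasts), level, trend, seasonals[-m:]

# Illustrative quarterly series and seed states (hypothetical numbers).
y = np.array([10.2, 14.8, 9.1, 12.4, 11.0, 15.9, 9.8, 13.1])
fc, level, trend, seas = additive_holt_winters(
    y, m=4, alpha=0.3, beta_star=0.1, gamma_star=0.2,
    level0=11.0, trend0=0.1, seasonals0=[-1.0, 3.0, -2.5, 0.5])

Because the two forms are algebraically equivalent, running the error-correction form above on the same series gives identical one-step-ahead forecasts.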



 

3.5 Variations on the Common Models

 

A number of variations on the basic models of the previous section can be helpful in some applications.

 

3.5.1 Damped Level Model

 

One feature of the models in the framework described in Chap. 2 is that the mean and variance are local properties. We may define these moments given the initial conditions, but they do not converge to a stable value as t increases without limit. In other words, the models are all nonstationary; the F matrix has at least one unit root in every case. However, it is possible to describe analogous models that are stationary.
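As a quick numerical illustration of the unit-root remark, one can inspect the eigenvalues of the transition matrix of the local trend model of Sect. 3.4.2 (a minimal sketch; the matrix is quoted from that earlier section rather than derived here):

import numpy as np

# Transition matrix of the local trend model ETS(A,A,N) (Sect. 3.4.2);
# the two states are the level and the growth rate.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.linalg.eigvals(F))   # both eigenvalues equal 1 (unit roots), so the model is nonstationary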

 

Consider the damped local level model

 

y_t = \mu + \phi\ell_{t-1} + \varepsilon_t, \qquad \ell_t = \phi\ell_{t-1} + \alpha\varepsilon_t.

 

The transition matrix is simply F = φ, whose only root is φ. Thus, the model is stationary provided |φ| < 1.

The discount matrix is D = φ − α. Thus, the model is stable provided |φ − α| < 1, or equivalently, φ − 1 < α < φ + 1.

We may eliminate the state variable to arrive at

 

y_t = \mu + \phi^t \ell_0 + \varepsilon_t + \alpha\left[\phi\varepsilon_{t-1} + \phi^2\varepsilon_{t-2} + \cdots + \phi^{t-1}\varepsilon_1\right].

 

When |φ| < 1, the mean and variance approach finite limits as t → ∞:

\mathrm{E}(y_t \mid \ell_0) = \mu + \phi^t \ell_0 \;\to\; \mu,

\mathrm{V}(y_t \mid \ell_0) = \sigma^2\left[1 + \frac{\alpha^2\phi^2\left(1 - \phi^{2(t-1)}\right)}{1 - \phi^2}\right] \;\to\; \sigma^2\left[1 + \frac{\alpha^2\phi^2}{1 - \phi^2}\right].

Thus, whenever |φ| < 1, the mean reverts to the stable value µ and the variance remains finite. When the series has an infinite past, the limiting values are the unconditional mean and variance. Such stationary series play a major role in the development of Autoregressive Integrated Moving Average (ARIMA) models, as we shall see in Chap. 11.
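A small simulation makes the mean reversion and the limiting variance concrete. The parameter values below are arbitrary illustrations, not recommendations:

import numpy as np

rng = np.random.default_rng(0)
mu, phi, alpha, sigma = 5.0, 0.8, 0.3, 1.0     # illustrative values with |phi| < 1
n, n_paths = 400, 50000

level = np.zeros(n_paths)                      # seed state ell_0 = 0 for every path
for _ in range(n):
    eps = rng.normal(0.0, sigma, n_paths)
    y = mu + phi * level + eps                 # observation equation
    level = phi * level + alpha * eps          # state transition

print(y.mean())                                               # close to mu = 5.0
print(y.var())                                                # close to the limiting variance below
print(sigma**2 * (1 + alpha**2 * phi**2 / (1 - phi**2)))      # = 1.16 for these values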

There are two reasons why our treatment of mean reversion (or stationarity) is so brief. First, the use of a finite start-up assumption means that stationarity is not needed in order to define the likelihood function. Second, stationary series are relatively uncommon in business and economic applications. Nevertheless, our estimation procedures (Chap. 5) allow mean reverting processes to be fitted if required.



 

3.5.2 Local Level Model with Drift

 

A local trend model allows the growth rate to change stochastically over time. If β = 0, the growth rate is constant and equal to a value that will be denoted by b. The local trend model then reduces to

 

y_t = \ell_{t-1} + b + \varepsilon_t, \qquad \ell_t = \ell_{t-1} + b + \alpha\varepsilon_t,

 

where ε_t ∼ NID(0, σ²). It is called a “local level model with drift” and has a state space structure with

 

x_t = \begin{pmatrix} \ell_t \\ b \end{pmatrix}, \qquad w = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad F = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \qquad \text{and} \qquad g = \begin{pmatrix} \alpha \\ 0 \end{pmatrix}.

This model can be applicable to economic time series that display an upward (or downward) drift. It is sometimes preferred for longer term forecasting because projections are made with the average growth that has occurred throughout the sample rather than a local growth rate, which essentially represents the growth rate that prevails towards the end of the sample.

 

The discount matrix for this model is

 

D = \begin{pmatrix} 1-\alpha & 1-\alpha \\ 0 & 1 \end{pmatrix},

which has eigenvalues of 1 and 1 − α. Thus, the model is not stable as D^j does not converge to 0. It is, however, forecastable, provided 0 < α < 2. The model is also forecastable when α = 0, as it then reduces to the linear regression model y_t = ℓ_0 + bt + ε_t. Discussion of this type of discount matrix will occur in Chap. 10.
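These properties are easy to confirm numerically. The sketch below (illustrative only) forms the discount matrix as D = F − gw′, the same construction behind the discount matrices quoted in this chapter, and verifies the eigenvalues 1 and 1 − α:

import numpy as np

alpha = 0.4                                   # any illustrative value
F = np.array([[1.0, 1.0], [0.0, 1.0]])
g = np.array([[alpha], [0.0]])
w = np.array([[1.0], [1.0]])

D = F - g @ w.T                               # discount matrix D = F - g w'
print(D)                                      # [[1-alpha, 1-alpha], [0, 1]]
print(np.linalg.eigvals(D))                   # eigenvalues 1 and 1-alpha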

 

The local level model with drift is also known as “simple exponential smoothing with drift.” Hyndman and Billah (2003) showed that this model is equivalent to the “Theta method” of Assimakopoulos and Nikolopoulos (2000) with b equal to half the slope of a linear regression of the observed data against their time of observation.

 

3.5.3 Damped Trend Model: ETS(A,Ad,N)

 

Another possibility is to take the local trend model and dampen its growth rate with a factor φ in the region 0 ≤ φ < 1. The resulting model is

 

y_t = \ell_{t-1} + \phi b_{t-1} + \varepsilon_t, \qquad \ell_t = \ell_{t-1} + \phi b_{t-1} + \alpha\varepsilon_t, \qquad b_t = \phi b_{t-1} + \beta\varepsilon_t.



 

The characteristics of the damped local trend model are compatible with features observed in many business and economic time series. It sometimes yields better forecasts than the local trend model. Note that the local trend model is a special case where φ = 1.
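For concreteness, the damped-trend recursions can be run as in the following sketch; the function name, parameter values and data are illustrative assumptions rather than anything prescribed by the model:

import numpy as np

def ets_a_ad_n(y, alpha, beta, phi, level0, trend0):
    """Damped trend recursions: returns one-step-ahead forecasts and final states."""
    level, trend = level0, trend0
    forecasts = []
    for y_t in y:
        y_hat = level + phi * trend          # one-step-ahead forecast
        e = y_t - y_hat                      # innovation
        level = level + phi * trend + alpha * e
        trend = phi * trend + beta * e
        forecasts.append(y_hat)
    return np.array(forecasts), level, trend

y = np.array([100.0, 103.2, 105.9, 107.1, 108.0, 108.4])   # illustrative data
fc, level, trend = ets_a_ad_n(y, alpha=0.5, beta=0.1, phi=0.9,
                              level0=100.0, trend0=2.0)
# Setting phi = 1 reproduces the (undamped) local trend model.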

 

The ETS(A,Ad,N) model performs remarkably well when forecasting real data (Fildes 1992).

 

3.5.4 Seasonal Model Based only on Seasonal Levels

 

If there is no trend in a time series with a seasonal pattern, the ETS(A,N,A) model can be simplified to a model that has a different level in each season. A model for a series with m periods per annum is


 

y_t = \ell_{t-m} + \varepsilon_t,   (3.17a)

\ell_t = \ell_{t-m} + \alpha\varepsilon_t.   (3.17b)

It conforms to a state space model where

w' = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \end{pmatrix},

x_t = \begin{pmatrix} \ell_t \\ \ell_{t-1} \\ \vdots \\ \ell_{t-m+1} \end{pmatrix}, \qquad F = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix} \qquad \text{and} \qquad g = \begin{pmatrix} \alpha \\ 0 \\ \vdots \\ 0 \end{pmatrix}.
The weighted average requirement is satisfied if 0 < α < 1. Because there is no link between the observations other than those m periods apart, we may consider the m sub-models separately. It follows directly that the model is stable when all the sub-models are stable, which is true provided 0 < α < 2.
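A direct way to see this structure is to build the matrices for a small m and inspect the discount matrix D = F − gw′; the snippet below (illustrative values only) does this for m = 4 and confirms that every eigenvalue modulus is below one for 0 < α < 2:

import numpy as np

m, alpha = 4, 0.7                              # illustrative values
F = np.roll(np.eye(m), 1, axis=0)              # first row (0,...,0,1); remaining rows shift the levels down
w = np.zeros((m, 1)); w[-1, 0] = 1.0           # observation picks out ell_{t-m}
g = np.zeros((m, 1)); g[0, 0] = alpha          # only the current level is revised

D = F - g @ w.T                                # discount matrix
print(np.abs(np.linalg.eigvals(D)))            # all moduli below 1 whenever 0 < alpha < 2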

 

3.5.5 Parsimonious Local Seasonal Model

 

The problem with the seasonal models (3.15) and (3.17) is that they potentially involve a large number of states, and the initial seed state x_0 contains a set of parameters that need to be estimated. Modeling weekly demand data, for example, would entail 51 independent seed values for the seasonal recurrence relationships. Estimation of the seed values then makes relatively high demands on computational facilities. Furthermore, the resulting predictions may not be as robust as those from more parsimonious representations.

 

To emphasize the possibility of a more parsimonious approach, consider the case of a product with monthly sales that peak in December for Christmas, but which tend to be the same, on average, in the months of January to November. There are then essentially two seasonal components, one for the months of January to November, and a second for December. There is no need for 12 separate monthly components.



 

We require a seasonal model in a form that allows a reduced number of seasonal components. First, redefine m to denote the number of seasonal components, as distinct from the number of seasons per year. In the above example, m = 2 instead of 12. An m-vector z_t indicates which seasonal component applies in period t. If seasonal component j applies in period t, then the element z_{tj} = 1 and all other elements equal 0. It is assumed that the typical seasonal component j has its own level, which in period t is denoted by ℓ_{tj}. The levels are collected into an m-vector denoted by ℓ_t. Then the model is

 

y_t = z_t' \ell_{t-1} + b_{t-1} + \varepsilon_t,   (3.18a)

\ell_t = \ell_{t-1} + \mathbf{1} b_{t-1} + (\mathbf{1}\alpha + z_t \gamma)\varepsilon_t,   (3.18b)

b_t = b_{t-1} + \beta\varepsilon_t,   (3.18c)

where 1 represents an m-vector of ones. The term z_t'ℓ_{t−1} picks out the level of the seasonal component relevant to period t. The term 1b_{t−1} ensures that each level is adjusted by the same growth rate. It is assumed that the random change has a common effect and an idiosyncratic effect. The term 1αε_t represents the common effect, and the term z_tγε_t is the adjustment to the seasonal component associated with period t.
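One period of recursion (3.18) can be written out as in the sketch below; the two-component example, the indicator construction and all numerical values are hypothetical illustrations:

import numpy as np

m = 2                                          # two seasonal components: Jan-Nov and December
alpha, beta, gamma = 0.2, 0.05, 0.3            # illustrative smoothing parameters
levels = np.array([50.0, 120.0])               # seed levels for the two components
growth = 1.0                                   # seed growth rate b_0
ones = np.ones(m)

def step(y_t, component, levels, growth):
    """Advance model (3.18) by one period; `component` indexes the active season."""
    z = np.zeros(m)
    z[component] = 1.0                         # indicator vector z_t
    y_hat = z @ levels + growth                # one-step forecast implied by (3.18a)
    e = y_t - y_hat                            # innovation
    levels = levels + ones * growth + (ones * alpha + z * gamma) * e   # (3.18b)
    growth = growth + beta * e                 # (3.18c)
    return y_hat, levels, growth

# A November observation (component 0) followed by a December observation (component 1):
for y_t, comp in [(52.0, 0), (125.0, 1)]:
    y_hat, levels, growth = step(y_t, comp, levels, growth)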

 

This model must be coupled with a method that searches systematically for months that possess common seasonal components. We discuss this problem in Chap. 14. In the special case where no common components are found (e.g., m = 12 for monthly data), the above model is then equivalent to the seasonal model in Sect. 3.4.3. If, in addition, there is no growth, the model is equivalent to the seasonal level model in Sect. 3.5.4.

 

Model (3.18) is easily adapted to handle multiple seasonal patterns. For example, daily demand may be influenced by a trading cycle that repeats itself every week, in addition to a seasonal pattern that repeats itself annually. Extensions of this kind are also considered in Chap. 14.

 

An important point to note is that this seasonal model does not conform to the general form (3.1), because the g and w vectors are time-dependent. A more general time-varying model must be used instead.

 

3.5.6 Composite Models

 

Two different models can be used as basic building blocks to yield even larger models. Suppose two basic innovations state space models indexed by i = 1, 2 are given by

 

y_t = w_i' x_{i,t-1} + \varepsilon_{it}, \qquad x_{it} = F_i x_{i,t-1} + g_i \varepsilon_{it},



 

where ε_{it} ∼ NID(0, v_i). A new model can be formed by combining them as follows:

 

y_t = w_1' x_{1,t-1} + w_2' x_{2,t-1} + \varepsilon_t,

\begin{pmatrix} x_{1t} \\ x_{2t} \end{pmatrix} = \begin{pmatrix} F_1 & 0 \\ 0 & F_2 \end{pmatrix} \begin{pmatrix} x_{1,t-1} \\ x_{2,t-1} \end{pmatrix} + \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \varepsilon_t.

For example, the local trend model (3.13) in Sect. 3.4.2 and the seasonal model (3.17) in Sect. 3.5.4 can be combined using this principle. To avoid conflict with respect to the levels, the ℓ_t in the seasonal model (3.17) is replaced by s_t. The resulting model is the local additive seasonal model (3.15) in Sect. 3.4.3.
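In code, composing two components amounts to stacking their state vectors, concatenating the w and g vectors, and placing the transition matrices on the block diagonal. The sketch below assumes the local trend component carries loadings (α, β), as in the damped-trend equations above with φ = 1; everything else is an illustrative choice:

import numpy as np

# Component 1: local trend model (3.13) -- states (level, growth rate).
alpha1, beta1 = 0.4, 0.1
F1 = np.array([[1.0, 1.0], [0.0, 1.0]])
w1 = np.array([1.0, 1.0])
g1 = np.array([alpha1, beta1])

# Component 2: seasonal-levels model (3.17) with m = 4; its levels play the role of s_t.
m, alpha2 = 4, 0.5
F2 = np.roll(np.eye(m), 1, axis=0)
w2 = np.zeros(m); w2[-1] = 1.0
g2 = np.zeros(m); g2[0] = alpha2

# Combined model: block-diagonal transition, concatenated observation and loading vectors.
k = 2 + m
F = np.zeros((k, k)); F[:2, :2] = F1; F[2:, 2:] = F2
w = np.concatenate([w1, w2])
g = np.concatenate([g1, g2])
# y_t = w' x_{t-1} + eps_t and x_t = F x_{t-1} + g eps_t then reproduce the
# local additive seasonal model (3.15).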

 

 

3.6 Exercises

 

Exercise 3.1. Consider the local level model ETS(A,N,N). Show that the process is forecastable and stationary when α = 0 but that neither property holds when α = 2.

 

Exercise 3.2. Consider the local level model with drift, defined in Sect. 3.5.2. Define the detrended variable z_{1t} = y_t − bt and the differenced variable z_{2t} = y_t − y_{t−1}. Show that both of these processes are stable provided 0 < α < 2 but that only z_{2t} is stationary.

 

Exercise 3.3. Consider the local level model ETS(A,N,N). Show that the mean and variance for y_t | ℓ_0 are ℓ_0 and σ²(1 + (t − 1)α²) respectively.

 

Exercise 3.4. For the damped trend model ETS(A,Ad,N), find the discount matrix D and its eigenvalues.


 

4 Nonlinear and Heteroscedastic Innovations State Space Models

 

In this chapter we consider a broader class of innovations state space models, which enables us to examine multiplicative structures for any or all of the trend, the seasonal pattern and the innovations process. This general class was introduced briefly in Sect. 2.5.2. As for the linear models introduced in the previous chapter, this discussion will pave the way for a general discussion of estimation and prediction methods later in the book.

 

One of the intrinsic advantages of the innovations framework is that we preserve the ability to write down closed-form expressions for the recursive relationships and point forecasts. In addition, the time series may be represented as a weighted sum of the innovations, where the weights for a given innovation depend only on the initial conditions and earlier innovations, so that the weight and the innovation are conditionally independent. As before, we refer to this structure as the innovations representation of the time series. We find that these models are inherently similar to those for the linear case.

 

The general innovations form of the state space model is introduced in Sect. 4.1 and various special cases are considered in Sect. 4.2. We then examine seasonal models in Sect. 4.3. Finally, several variations on the core models are examined in Sect. 4.4.

 

4.1 Innovations Form of the General State Space Model

 

We employ the same basic notation as in Sect. 3.1, so that y_t denotes the element of the time series corresponding to time t. Prior to time t, y_t denotes a random variable, but it becomes a fixed value after being observed. The first n values of a time series form the n-vector y.

 

Following the discussion in Sects. 2.5.2 and 3.1, we define the model for the variable of interest, yt, in terms of the state variables that form the state vector, xt. We will select the elements of the state vector to describe the trend and seasonal elements of the series, using these terms as building blocks to enable us to formulate a model that captures the key components of the data generating process.



 

From Sect. 2.5.2, we specify the general model with state vector x_t = (ℓ_t, b_t, s_t, s_{t−1}, ..., s_{t−m+1})' and state space equations of the form:

y_t = w(x_{t-1}) + r(x_{t-1})\varepsilon_t,   (4.1a)

x_t = f(x_{t-1}) + g(x_{t-1})\varepsilon_t,   (4.1b)

where r(·) and w(·) are scalar functions, f(·) and g(·) are vector functions, and ε_t is a white noise process with variance σ². Note that we do not specify that the process is Gaussian because such an assumption may conflict with the underlying structure of the data generating process (e.g., when the series contains only non-negative values). Nevertheless, the Gaussian assumption is often a reasonable approximation when the level of the process is sufficiently far from the origin (or, more generally, the region of impossible values) and it will then be convenient to use the Gaussian assumption as a basis for inference. The functions in this model may all be time-indexed, but we shall concentrate on constant functions (the invariant form), albeit with time-varying arguments. In Chap. 3, the functions r and g were constants, whereas w and f were linear in the state vector. The simplest nonlinear scheme of interest corresponds to {w(x_{t−1}) = r(x_{t−1}) = f(x_{t−1}) = ℓ_{t−1}; g(x_{t−1}) = αℓ_{t−1}} or

y_t = \ell_{t-1}(1 + \varepsilon_t),   (4.2a)

\ell_t = \ell_{t-1}(1 + \alpha\varepsilon_t).   (4.2b)

 

These equations describe the ETS(M,N,N) or local level model given in Table 2.3 (p. 22). We may eliminate the state variable between (4.2a) and (4.2b) to arrive at a reduced form for the model:

 

y_t = y_{t-1}(1 + \varepsilon_t)(1 + \alpha\varepsilon_{t-1})/(1 + \varepsilon_{t-1}),

y_t = \ell_0 (1 + \varepsilon_t)\prod_{j=1}^{t-1}(1 + \alpha\varepsilon_j).

 

We may also eliminate ε_t to arrive at the recursive relationship:

 

\ell_t = \ell_{t-1} + \alpha(y_t - \ell_{t-1}) = \alpha y_t + (1 - \alpha)\ell_{t-1}.

 

The recursive relationship for ETS(M,N,N) is thus seen to be identical to that for ETS(A,N,N). However, the reduced form equations are clearly different, showing that the predictive distributions (and hence the prediction intervals) will differ. This difference underlies the need for a stochastic model for a time series; once a suitable model is selected, valid prediction intervals can be generated. Without an underlying model, only point forecasts are possible.
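This difference can be seen by simulating several steps ahead from a common current level: the point forecasts of ETS(A,N,N) and ETS(M,N,N) agree, but the simulated predictive distributions do not. In the sketch below the additive error standard deviation is matched to σℓ_0 so the two models agree one step ahead; this matching, like all the parameter values, is an illustrative assumption:

import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, level0 = 0.5, 0.2, 100.0         # illustrative values
h, n_sims = 12, 20000                          # forecast horizon and simulation size

def simulate_h_steps(multiplicative):
    """Simulate y_{t+h} from a common current level ell_t = level0."""
    level = np.full(n_sims, level0)
    for _ in range(h):
        e = rng.standard_normal(n_sims)
        if multiplicative:                     # ETS(M,N,N): y = ell(1+eps), ell' = ell(1+alpha*eps)
            y = level * (1 + sigma * e)
            level = level * (1 + alpha * sigma * e)
        else:                                  # ETS(A,N,N) with error s.d. sigma*level0
            y = level + sigma * level0 * e
            level = level + alpha * sigma * level0 * e
    return y

y_add = simulate_h_steps(multiplicative=False)
y_mul = simulate_h_steps(multiplicative=True)
print(y_add.mean(), y_mul.mean())              # point forecasts agree (both close to level0)
print(np.quantile(y_add, [0.05, 0.95]))        # symmetric interval
print(np.quantile(y_mul, [0.05, 0.95]))        # asymmetric interval: the predictive distributions differ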

 

Given the insights provided by the local level model, we may approach the general model in the same way. Reduced-form expressions do not take on a useful form without additional assumptions about the various functions in the model. However, we may eliminate the error term to arrive at the recursive relationship:

 

x_t = f(x_{t-1}) + g(x_{t-1})\,\frac{y_t - w(x_{t-1})}{r(x_{t-1})}.   (4.3)

Further, for convenience, we write

D(x_t) = f(x_t) - \frac{g(x_t)\, w(x_t)}{r(x_t)},

