Exercise 10.1.

 

a. Show that the non-seasonal models ETS(A,N,N) and ETS(A,Ad,N) are of minimal dimension.

b. Show that the seasonal models ETS(A,A,A) and ETS(A,Ad,A) are not of minimal dimension.

 

c. Show that the normalized seasonal models ETS(A,N,A), ETS(A,A,A) and ETS(A,Ad,A) are of minimal dimension.

 

d. Show that the (unnormalized) seasonal models ETS(A,A,A) and ETS(A,Ad,A) are of minimal dimension if the level component is omitted from the models. (This is an alternative to normalization).

 

Exercise 10.2. Complete Example 10.3 by showing that u is proportional to [1, 0, 1, . . . , 1] for the ETS(A,A,A) model.

Exercise 10.3. The expression x_t = D x_{t-1} + g y_t also applies to some of the nonlinear models discussed in Chap. 4. Use this observation to write down the stability conditions for the relevant nonlinear models.


 

Reduced Forms and Relationships with ARIMA Models

 

The purpose of this chapter is to examine the links between the (linear) innovations state space models and autoregressive integrated moving average (ARIMA) models, frequently called "Box–Jenkins models" because Box and Jenkins (1970) proposed a complete methodology for identification, estimation and prediction with these models. We will show that when the state variables are eliminated from a linear innovations state space model, an ARIMA model is obtained. This ARIMA form of the state space model is called its reduced form.

 

The process for deriving the reduced form uses the lag operator, defined by L y_t = y_{t-1}, to eliminate the state variables from the state space model. Another procedure that relies on conventional equation solving methods will be explained in Chap. 13. The latter method has the advantage that its algorithm can be implemented relatively easily in a matrix programming language such as R or Matlab.
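To illustrate the idea of eliminating the state variable, recall the well-known case of the local level model ETS(A,N,N): applying the lag operator gives the ARIMA(0,1,1) reduced form (1 − L)y_t = ε_t − (1 − α)ε_{t-1}. The Python sketch below (the smoothing parameter α = 0.3 and series length are arbitrary illustrative choices) simulates the state space recursion and checks this identity exactly on the simulated path.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3                      # assumed smoothing parameter
eps = rng.standard_normal(200)

# ETS(A,N,N): y_t = l_{t-1} + eps_t,  l_t = l_{t-1} + alpha * eps_t
level = 10.0
y = np.empty(200)
for t in range(200):
    y[t] = level + eps[t]        # observation uses the previous level
    level = level + alpha * eps[t]

# Reduced form: (1 - L) y_t = eps_t - (1 - alpha) * eps_{t-1}
lhs = np.diff(y)                          # (1 - L) y_t for t = 1, ..., 199
rhs = eps[1:] - (1 - alpha) * eps[:-1]
print(np.allclose(lhs, rhs))              # the identity holds exactly
```

The agreement is exact rather than approximate, because eliminating the level is pure algebra: y_t − y_{t-1} = (ℓ_{t-1} − ℓ_{t-2}) + ε_t − ε_{t-1} = ε_t − (1 − α)ε_{t-1}.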

 

We begin the chapter with a brief summary of ARIMA models and their properties. In Sect. 11.2 we obtain reduced forms for the simple cases of the local level model, ETS(A,N,N), and the local trend model, ETS(A,A,N). Then, in Sect. 11.3 we show how to put a general linear innovations state space model into an ARIMA reduced form. (Causal) stationarity and invertibility conditions for the reduced form model are developed in Sect. 11.4, and we explore the links with causal stationarity and stability of the corresponding innovations state space model.

 

In the opposite direction, an ARIMA model can also be put in the form of a linear innovations state space model. This reverse procedure is demonstrated in Sect. 11.5.



 

11.1 ARIMA Models

 

The general form of an ARMA model is conventionally written as:

φ(L) y_t = λ + θ(L) ε_t, (11.1)

 

where L is the lag operator defined above, and φ(L) and θ(L) are polynomials in L. The random errors, ε_t, are assumed to be independent and identically distributed with zero means and equal variances, σ²; we write this as ε_t ~ IID(0, σ²). The parameter λ represents a constant term.

Several special cases serve to illustrate the general model.

 

• First order autoregression—AR(1):
  y_t = λ + φ_1 y_{t-1} + ε_t. (11.2)
• p th order autoregression—AR(p):
  y_t = λ + φ_1 y_{t-1} + φ_2 y_{t-2} + · · · + φ_p y_{t-p} + ε_t. (11.3)
• First order moving average—MA(1):
  y_t = λ + ε_t − θ_1 ε_{t-1}. (11.4)
• q th order moving average—MA(q):
  y_t = λ + ε_t − θ_1 ε_{t-1} − θ_2 ε_{t-2} − · · · − θ_q ε_{t-q}. (11.5)
• p th order AR, q th order MA—ARMA(p, q):
  y_t = λ + φ_1 y_{t-1} + · · · + φ_p y_{t-p} + ε_t − θ_1 ε_{t-1} − · · · − θ_q ε_{t-q}. (11.6)
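All of the special cases above are instances of the single recursion (11.6), so one simulator covers them all. The following Python sketch (the function name, burn-in length, and parameter values are my own choices, not from the text) generates an ARMA(p, q) path; as a sanity check, the sample mean of an AR(1) path with λ = 2 and φ_1 = 0.5 should be close to λ/(1 − φ_1) = 4.

```python
import numpy as np

def simulate_arma(phi, theta, lam, n, sigma=1.0, seed=1):
    """Simulate (11.6): y_t = lam + sum_i phi_i y_{t-i} + eps_t - sum_j theta_j eps_{t-j}.
    A long burn-in approximates the infinite start-up assumed for ARMA models."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    burn = 500
    eps = rng.normal(0.0, sigma, n + burn)
    y = np.zeros(n + burn)
    for t in range(max(p, q, 1), n + burn):
        ar = sum(phi[i] * y[t - 1 - i] for i in range(p))
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(q))
        y[t] = lam + ar + eps[t] - ma
    return y[burn:]

# AR(1) with lam = 2, phi_1 = 0.5: sample mean should be near lam/(1 - phi_1) = 4
y = simulate_arma([0.5], [], 2.0, 20_000)
print(y.mean())
```

Setting `phi=[]` gives a pure MA(q) path, and nonempty `phi` and `theta` together give the general ARMA(p, q) case.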

 

An important aspect of ARMA modeling is that we assume the series started up in the infinite past, in contrast to the innovations state space models we have considered thus far, where a finite start-up has been employed. Intuitively, if the finite start was a long time ago and the effect of the initial conditions diminishes over time, we might expect that the finite start-up system would converge to the limiting infinite start-up scheme. We now specify the conditions under which this convergence occurs.

 

11.1.1 Causal Stationarity

 

The standard assumption made about the autoregressive component is that the roots of the polynomial φ(u) = 0 all lie outside the unit circle. This assumption means that we can rewrite (11.1) as:

 

y_t = λ/φ(1) + [θ(L)/φ(L)] ε_t = λ/φ(1) + ψ(L) ε_t, (11.7)

where  
ψ(u) = 1 + ψ_1 u + ψ_2 u² + · · · (11.8)

 

is an infinite series which is absolutely convergent for |u| ≤ 1. An ARMA model satisfying these conditions is said to be causally stationary. A univariate process is causal if the current value depends only upon current and past values of the error process and past values of the series. Henceforth we assume this to be the case and refer just to stationarity, with the qualifier "causal" always there by implication. Stationarity clearly implies that the coefficients ψ_i converge to zero as we move away from the present time. This condition reduces to the requirement that the roots of the polynomial φ(u) = 0 should lie outside the unit circle. The representation given in (11.7) is known as the Wold representation (or decomposition) of a time series. Thus, any stationary time series may be represented by an infinite order MA scheme. Further, (11.7) shows that such processes may in some cases be represented by finite-order ARMA schemes. By extension, as indicated in Exercise 11.6, state space models always result in finite-order ARIMA reduced-form models. In turn, these conditions imply that the process has an unconditional mean and variance, as illustrated by the following examples.
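Both conditions are easy to check numerically: stationarity is a statement about the roots of φ(u), and the coefficients ψ_j in (11.8) follow from matching powers of u in the identity φ(u)ψ(u) = θ(u), which gives ψ_j = −θ_j + Σ_i φ_i ψ_{j−i}. A Python sketch (helper names are my own):

```python
import numpy as np

def is_stationary(phi):
    """True if all roots of phi(u) = 1 - phi_1 u - ... - phi_p u^p
    lie outside the unit circle."""
    coeffs = np.r_[1.0, -np.asarray(phi, dtype=float)]   # ascending powers of u
    roots = np.polynomial.polynomial.polyroots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

def psi_weights(phi, theta, k):
    """First k+1 coefficients of psi(u) = theta(u)/phi(u) in (11.8),
    via the recursion psi_j = -theta_j + sum_i phi_i * psi_{j-i}."""
    p, q = len(phi), len(theta)
    psi = [1.0]
    for j in range(1, k + 1):
        val = -(theta[j - 1] if j <= q else 0.0)
        val += sum(phi[i - 1] * psi[j - i] for i in range(1, min(j, p) + 1))
        psi.append(val)
    return psi

print(is_stationary([0.5]))       # AR(1) with phi_1 = 0.5: root u = 2 -> True
print(is_stationary([1.1]))       # explosive AR(1): root inside unit circle -> False
print(psi_weights([0.5], [], 4))  # psi_j = 0.5**j: [1.0, 0.5, 0.25, 0.125, 0.0625]
```

For the stationary AR(1) the ψ_j decay geometrically, exactly as the convergence requirement on (11.8) demands.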

 

 

Example 11.1: Mean and variance for AR(1)

 

It follows from (11.8) that the AR(1) process is stationary provided |φ_1| < 1. We may then denote the mean by µ and the variance by ω². Taking expectations on both sides of (11.2) we obtain:

 

E(y_t) = µ = λ + φ_1 E(y_{t-1}) + E(ε_t) = λ + φ_1 µ,

 

so that µ = λ/(1 − φ_1). In general, the mean of an AR(p) process can be written as µ = λ/φ(1). Subtracting out the mean, squaring both sides of (11.2) and taking expectations, we arrive at:

 

E[(y_t − µ)²] = φ_1² E[(y_{t-1} − µ)²] + 2φ_1 E[ε_t (y_{t-1} − µ)] + E(ε_t²).

 

Because ε_t and y_{t-1} are independent, the cross-product term is zero, so this expression reduces to:

 

V(y_t) = ω² = σ²/(1 − φ_1²).
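A quick simulation confirms both moments; this is a sketch, and the parameter values λ = 2, φ_1 = 0.6, σ = 1.5 are arbitrary illustrative choices, giving µ = 5 and ω² = 2.25/0.64 ≈ 3.52.

```python
import numpy as np

rng = np.random.default_rng(42)
lam, phi1, sigma = 2.0, 0.6, 1.5     # arbitrary illustrative values
n, burn = 200_000, 1_000             # burn-in approximates the infinite start-up

eps = rng.normal(0.0, sigma, n + burn)
y = np.zeros(n + burn)
for t in range(1, n + burn):
    y[t] = lam + phi1 * y[t - 1] + eps[t]   # the AR(1) recursion (11.2)
y = y[burn:]

mu = lam / (1 - phi1)                # theoretical mean:     lam/(1 - phi_1) = 5
omega2 = sigma**2 / (1 - phi1**2)    # theoretical variance: sigma^2/(1 - phi_1^2)
print(y.mean(), y.var())             # both should be close to mu and omega2
```

With 200,000 observations the sample mean and variance typically agree with µ and ω² to about two decimal places.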



 

 

