V.I.1.a Basic Definitions and Theorems about ARIMA models
First we define some important concepts. A stochastic process (i.e. a probabilistic process) is defined by a T-dimensional distribution function

F(x_1, x_2, ..., x_T) = P(X_1 \le x_1, X_2 \le x_2, ..., X_T \le x_T)   (V.I.1-1)
Before analyzing the structure of a time series model, one must make sure that the time series is stationary with respect to the variance and with respect to the mean. First, we will assume statistical stationarity of all time series (later on, this restriction will be relaxed).
Statistical stationarity of a time series implies that the marginal probability distribution is time-independent, which means that:

- the expected values and variances are constant

  E[X_t] = \mu,   Var[X_t] = \sigma^2   (t = 1, 2, ..., T)   (V.I.1-2)

  where T is the number of observations in the time series;

- the autocovariances (and autocorrelations) must be constant

  cov(X_t, X_{t+k}) = \gamma_k   for all t   (V.I.1-3)

  where k is an integer time-lag;

- the variable has a joint normal distribution f(X_1, X_2, ..., X_T) with a marginal normal distribution in each dimension   (V.I.1-4)

If only this last condition is not met, the process is said to be weakly stationary.
Now it is possible to define white noise as a stochastic process (which is statistically stationary) defined by a marginal distribution function (V.I.1-1), where all X_t are independent variables (with zero covariances), with a joint normal distribution f(X_1, X_2, ..., X_T), and with

(V.I.1-5)

It is obvious from this definition that for any white noise process the probability function can be written as

f(X_1, X_2, ..., X_T) = \prod_{t=1}^{T} f(X_t)   (V.I.1-6)
Define the autocovariance as

\gamma_k = E[(X_t - \mu)(X_{t+k} - \mu)]   (V.I.1-7)

or

\gamma_k = cov(X_t, X_{t+k})   (V.I.1-8)

whereas the autocorrelation is defined as

\rho_k = \gamma_k / \gamma_0   (V.I.1-9)
In practice, however, we only have the sample observations at our disposal. Therefore we use the sample autocorrelations

r_k = \sum_{t=1}^{T-k} (X_t - \bar{X})(X_{t+k} - \bar{X}) / \sum_{t=1}^{T} (X_t - \bar{X})^2   (V.I.1-10)

for any integer k.
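As a quick illustration (not part of the original text; the function name and the plain-list interface are our own choices), the estimator in (V.I.1-10) can be sketched in Python:

```python
def sample_acf(x, max_lag):
    """Sample autocorrelations r_1, ..., r_max_lag, estimated as in (V.I.1-10):
    sums of lagged cross-products of deviations, divided by the sum of
    squared deviations."""
    T = len(x)
    mean = sum(x) / T
    dev = [v - mean for v in x]
    denom = sum(d * d for d in dev)  # lag-0 sum of squared deviations
    return [sum(dev[t] * dev[t + k] for t in range(T - k)) / denom
            for k in range(1, max_lag + 1)]
```

A trending series yields a first sample autocorrelation near one, while a strictly alternating series yields one near minus one.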
Remark that the autocovariance matrix and the autocorrelation matrix associated with a stationary stochastic process

\Gamma_T = [\gamma_{|i-j|}]  (i, j = 1, 2, ..., T)   (V.I.1-11)

P_T = \Gamma_T / \gamma_0 = [\rho_{|i-j|}]  (i, j = 1, 2, ..., T)   (V.I.1-12)

are always positive definite, which can easily be shown since a linear combination of the stochastic variable

L = l_1 X_1 + l_2 X_2 + ... + l_T X_T   (V.I.1-13)

has a variance of

var(L) = \sum_{i=1}^{T} \sum_{j=1}^{T} l_i l_j \gamma_{|i-j|}   (V.I.1-14)

which is always positive.
This implies for instance for T = 3 that

-1 < \rho_1 < 1   and   -1 < \rho_2 < 1   (V.I.1-15)

or

\rho_1^2 < (1 + \rho_2) / 2   (V.I.1-16)
Bartlett proved that the variance of the autocorrelation of a stationary normal stochastic process can be formulated as

var(r_k) \approx (1/T) \sum_{i=-\infty}^{+\infty} ( \rho_i^2 + \rho_{i+k} \rho_{i-k} - 4 \rho_k \rho_i \rho_{i-k} + 2 \rho_i^2 \rho_k^2 )   (V.I.1-17)

This expression can be shown to reduce to

var(r_k) \approx (1/T) [ (1 + \phi^2)(1 - \phi^{2k}) / (1 - \phi^2) - 2 k \phi^{2k} ]   (V.I.1-18)

if the autocorrelation coefficients decrease exponentially like

\rho_i = \phi^{|i|},   |\phi| < 1   (V.I.1-19)
Since the autocorrelations for i > q (q a natural number) are equal to zero, expression (V.I.1-17) reduces to

var(r_k) \approx (1/T) ( 1 + 2 \sum_{i=1}^{q} \rho_i^2 ),   k > q   (V.I.1-20)

which is the so-called large-lag variance. Now it is possible to vary q from 1 to any desired integer number of autocorrelations, replace the theoretical correlations by their sample estimates, and compute the square root of (V.I.1-20) to find the standard deviation of the sample autocorrelation.
Note that the standard deviation of one autocorrelation coefficient is almost always approximated by

SE(r_k) \approx 1 / \sqrt{T}   (V.I.1-21)
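A small sketch of how the large-lag variance is used in practice (the helper name is ours; `sample_acs` is assumed to hold r_1, r_2, ...):

```python
import math

def large_lag_se(sample_acs, q, T):
    """Large-lag standard error of r_k for k > q, following Bartlett's
    approximation: sqrt((1 + 2 * (r_1^2 + ... + r_q^2)) / T)."""
    return math.sqrt((1.0 + 2.0 * sum(r * r for r in sample_acs[:q])) / T)
```

With q = 0 this collapses to the familiar 1/sqrt(T) approximation of the standard deviation of a single autocorrelation coefficient.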
The covariances between autocorrelation coefficients have also been deduced by Bartlett

cov(r_k, r_{k+j}) \approx (1/T) \sum_{i=-\infty}^{+\infty} \rho_i \rho_{i+j}   (V.I.1-22)

which is a good indicator of dependencies between autocorrelations. Bear in mind that intercorrelated autocorrelations can seriously distort the picture of the autocorrelation function (ACF, i.e. the autocorrelations as a function of the time-lag).
It is, however, possible to remove the intervening correlations between X_t and X_{t-k} by defining a partial autocorrelation function (PACF). The partial autocorrelation coefficients are defined as the last coefficient of a partial autoregression equation of order k

X_t = \phi_{k1} X_{t-1} + \phi_{k2} X_{t-2} + ... + \phi_{kk} X_{t-k} + e_t   (V.I.1-23)

It is obvious that there exists a relationship between the PACF and the ACF, since (V.I.1-23) can be rewritten as

X_{t-j} X_t = \phi_{k1} X_{t-j} X_{t-1} + ... + \phi_{kk} X_{t-j} X_{t-k} + X_{t-j} e_t   (V.I.1-24)

or (on taking expectations and dividing by the variance)

\rho_j = \phi_{k1} \rho_{j-1} + \phi_{k2} \rho_{j-2} + ... + \phi_{kk} \rho_{j-k},   j = 1, 2, ..., k   (V.I.1-25)
Sometimes (V.I.1-25) is written in matrix formulation according to the Yule-Walker relations

[ 1           \rho_1      ...  \rho_{k-1} ] [ \phi_{k1} ]   [ \rho_1 ]
[ \rho_1      1           ...  \rho_{k-2} ] [ \phi_{k2} ] = [ \rho_2 ]
[ ...         ...         ...  ...        ] [ ...       ]   [ ...    ]
[ \rho_{k-1}  \rho_{k-2}  ...  1          ] [ \phi_{kk} ]   [ \rho_k ]   (V.I.1-26)

or simply

P_k \phi_k = \rho_k   (V.I.1-27)

Solving (V.I.1-27) according to Cramer's rule yields

\phi_{kk} = |P_k^{*}| / |P_k|   (V.I.1-28)

Note that the determinant of the numerator contains the same elements as the determinant of the denominator, except for the last column, which has been replaced.
A practical numerical estimation algorithm for the PACF is given by Durbin

\phi_{k+1,k+1} = ( \rho_{k+1} - \sum_{j=1}^{k} \phi_{kj} \rho_{k+1-j} ) / ( 1 - \sum_{j=1}^{k} \phi_{kj} \rho_j )   (V.I.1-29)

with

\phi_{k+1,j} = \phi_{kj} - \phi_{k+1,k+1} \phi_{k,k+1-j},   j = 1, 2, ..., k   (V.I.1-30)
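Durbin's recursion is straightforward to implement. The following sketch (the function name and the list interface, with `rho[0]` holding rho_1, are our own choices) returns the partial autocorrelations phi_{kk}:

```python
def pacf_durbin(rho, max_lag):
    """Partial autocorrelations phi_{kk}, k = 1..max_lag, via Durbin's
    recursion, given (theoretical or sample) autocorrelations
    rho[0] = rho_1, rho[1] = rho_2, ..."""
    phi = {}      # phi[(k, j)] = j-th coefficient of the order-k autoregression
    pacf = []
    for k in range(1, max_lag + 1):
        if k == 1:
            phikk = rho[0]
        else:
            num = rho[k - 1] - sum(phi[(k - 1, j)] * rho[k - 1 - j]
                                   for j in range(1, k))
            den = 1.0 - sum(phi[(k - 1, j)] * rho[j - 1] for j in range(1, k))
            phikk = num / den
        phi[(k, k)] = phikk
        for j in range(1, k):  # update the remaining coefficients
            phi[(k, j)] = phi[(k - 1, j)] - phikk * phi[(k - 1, k - j)]
        pacf.append(phikk)
    return pacf
```

Applied to the autocorrelations rho_k = phi^k of a first-order autoregressive process, only the first partial autocorrelation is non-zero, as the later sections lead one to expect.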
The standard error of a partial autocorrelation coefficient for k > p (where p is the order of the autoregressive data generating process; see later) is given by

SE(\hat{\phi}_{kk}) \approx 1 / \sqrt{T}   (V.I.1-31)
Finally, we define the following polynomial lag processes

(V.I.1-32)

where B is the backshift operator (i.e. B^i Y_t = Y_{t-i}) and where

(V.I.1-33)
These polynomial expressions are used to define linear filters. By definition, a linear filter

\psi(B) = 1 + \psi_1 B + \psi_2 B^2 + ...   (V.I.1-34)

generates a stochastic process

X_t = \mu + \psi(B) a_t = \mu + a_t + \psi_1 a_{t-1} + \psi_2 a_{t-2} + ...   (V.I.1-35)

where a_t is a white noise variable. A simple example is

X_t = X_{t-1} + a_t   (V.I.1-36)

for which the following is obvious

X_t = X_0 + \sum_{i=1}^{t} a_i   (V.I.1-37)

We call eq. (V.I.1-36) the random-walk model: a model that describes time series fluctuating around X_0 in the short and in the long run (since a_t is white noise).
It is interesting to note that a random walk is normally distributed. This can be proved by using the definition of white noise and computing the moment generating function of the random walk

m_{X_t}(s) = E[e^{s X_t}] = e^{s X_0} E[e^{s(a_1 + a_2 + ... + a_t)}] = e^{s X_0} \prod_{i=1}^{t} E[e^{s a_i}]   (V.I.1-38)

= e^{s X_0} \prod_{i=1}^{t} e^{\sigma_a^2 s^2 / 2} = e^{s X_0 + t \sigma_a^2 s^2 / 2}   (V.I.1-39)

from which we deduce

X_t \sim N(X_0, t \sigma_a^2)   (V.I.1-40)

(Q.E.D.).
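The distributional result can also be checked by simulation. This sketch (the helper name, seed, and parameter values are our own) draws many random-walk paths and compares the empirical mean and variance of X_t with X_0 and t * sigma_a^2:

```python
import random

def random_walk(x0, sigma, steps, rng):
    """One random-walk path X_t = X_{t-1} + a_t with Gaussian white noise a_t."""
    x, path = x0, [x0]
    for _ in range(steps):
        x += rng.gauss(0.0, sigma)
        path.append(x)
    return path

# Empirical check: X_50 should be roughly N(X_0, 50 * sigma_a^2).
rng = random.Random(42)
finals = [random_walk(0.0, 1.0, 50, rng)[-1] for _ in range(2000)]
mean = sum(finals) / len(finals)
var = sum((v - mean) ** 2 for v in finals) / len(finals)
# mean should be close to X_0 = 0 and var close to 50 * 1.0 = 50
```

The sample variance grows linearly with the number of steps, which is exactly the non-stationarity in the variance that differencing (discussed later) is meant to remove.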
A deterministic trend is generated by a random-walk model with an added constant

X_t = X_{t-1} + a_t + c   (V.I.1-41)

The trend can be illustrated by re-expressing (V.I.1-41) as

X_t = X_0 + c t + \sum_{i=1}^{t} a_i   (V.I.1-42)

where ct is a linear deterministic trend (as a function of time).
The linear filter (V.I.1-35) is normally distributed with

E[X_t] = \mu,   var(X_t) = \sigma_a^2 \sum_{i=0}^{\infty} \psi_i^2   (with \psi_0 = 1)   (V.I.1-43)

due to the additivity property of eq. (I.III-33), (I.III-34), and (I.III-35) applied to a_t.

Now the autocorrelation of a linear filter can quite easily be computed as

\rho_k = \sum_{i=0}^{\infty} \psi_i \psi_{i+k} / \sum_{i=0}^{\infty} \psi_i^2   (V.I.1-44)

since

\gamma_k = \sigma_a^2 \sum_{i=0}^{\infty} \psi_i \psi_{i+k}   (V.I.1-45)

and

\gamma_0 = \sigma_a^2 \sum_{i=0}^{\infty} \psi_i^2   (V.I.1-46)
Now it is quite evident that, if the linear filter (V.I.1-35) generates the variable X_t, then X_t is a stationary stochastic process ((V.I.1-1) - (V.I.1-3)) defined by a normal distribution (V.I.1-4) (and therefore strongly stationary), and an autocovariance function (V.I.1-45) which depends only on the time-lag k.
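For a filter with finitely many psi-weights, the autocorrelations of the generated process can be computed directly from those weights, as the ratio of lagged cross-products of psi's to their sum of squares. A sketch (the function name is ours; `psi` includes psi_0 = 1):

```python
def filter_acf(psi, max_lag):
    """Theoretical autocorrelations rho_k of a finite linear filter
    X_t = mu + psi_0 a_t + psi_1 a_{t-1} + ...:
    rho_k = sum_i psi_i psi_{i+k} / sum_i psi_i^2."""
    denom = sum(p * p for p in psi)  # proportional to gamma_0
    return [sum(psi[i] * psi[i + k] for i in range(len(psi) - k)) / denom
            for k in range(1, max_lag + 1)]
```

For psi = (1, theta) this reproduces the familiar first-order moving average pattern: rho_1 = theta / (1 + theta^2) and rho_k = 0 beyond the first lag.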
The set of equations resulting from a linear filter (V.I.1-35) with ACF (V.I.1-44) is sometimes called stochastic difference equations. These stochastic difference equations can be used in practice to forecast (economic) time series. The forecasting function is given by

(V.I.1-47)

On using (V.I.1-35), the density of the forecasting function (V.I.1-47) is

(V.I.1-48)

where

(V.I.1-49)

is known, and therefore equal to a constant term. Therefore it is obvious that

(V.I.1-50)

(V.I.1-51)
The concepts defined and described above are all time-related. This implies, for instance, that autocorrelations are defined as a function of time. Historically, this time-domain viewpoint was preceded by the frequency-domain viewpoint, where it is assumed that time series consist of sine and cosine waves at different frequencies. In practice there are advantages and disadvantages to both viewpoints; nevertheless, they should be seen as complementary to each other.
Consider

(V.I.1-52)

for the Fourier series model

(V.I.1-53)

In (V.I.1-53) we define

(V.I.1-54)
The least squares estimates of the parameters in (V.I.1-52) are computed by

a_0 = \bar{X},   a_i = (2/T) \sum_{t=1}^{T} X_t \cos(2 \pi f_i t),   b_i = (2/T) \sum_{t=1}^{T} X_t \sin(2 \pi f_i t)   (V.I.1-55)

In case of a time series with an even number of observations T = 2q, the same definitions are applicable except for

a_q = (1/T) \sum_{t=1}^{T} (-1)^t X_t,   b_q = 0   (V.I.1-56)
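For a series of odd length T = 2q + 1, the least squares Fourier coefficient estimates reduce to the classical formulas a_i = (2/T) sum_t X_t cos(2 pi i t / T) (similarly for b_i), and the fitted Fourier series then reproduces the data exactly, since the sinusoids at the frequencies f_i = i/T form a full basis. A sketch under that assumption (function names are ours; t runs from 1 to T):

```python
import math

def fourier_coeffs(x):
    """Least-squares Fourier coefficients for a series of odd length T = 2q + 1."""
    T = len(x)
    q = (T - 1) // 2
    a0 = sum(x) / T  # mean of the series
    a = [2.0 / T * sum(x[t] * math.cos(2 * math.pi * i * (t + 1) / T)
                       for t in range(T)) for i in range(1, q + 1)]
    b = [2.0 / T * sum(x[t] * math.sin(2 * math.pi * i * (t + 1) / T)
                       for t in range(T)) for i in range(1, q + 1)]
    return a0, a, b

def fourier_reconstruct(a0, a, b, T):
    """Evaluate the fitted Fourier series at t = 1, ..., T."""
    q = len(a)
    return [a0 + sum(a[i - 1] * math.cos(2 * math.pi * i * t / T) +
                     b[i - 1] * math.sin(2 * math.pi * i * t / T)
                     for i in range(1, q + 1))
            for t in range(1, T + 1)]
```

Because T = 2q + 1 sinusoidal regressors (counting the constant) are fitted to T observations, the residuals e_t vanish identically in this case.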
It can furthermore be shown that

(V.I.1-57)

(V.I.1-58)

such that

(V.I.1-59)

(V.I.1-60)

Obviously

(V.I.1-61)

It is also possible to show that

(V.I.1-62)

If

(V.I.1-63)

then

(V.I.1-64)

and

(V.I.1-65)

and

(V.I.1-66)

and

(V.I.1-67)

and

(V.I.1-68)

which state the orthogonality properties of sinusoids and which can be proved. Remark that (V.I.1-67) is a special case of (V.I.1-64) and (V.I.1-68) is a special case of (V.I.1-66). Eq. (V.I.1-66) is particularly interesting for our discussion with regard to (V.I.1-60) and (V.I.1-53), since it states that sinusoids are independent.
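The orthogonality properties of sinusoids can be verified numerically over one full period. In this sketch the sample size T and the harmonic indices are arbitrary choices of ours:

```python
import math

T = 12  # any sample size; harmonic frequencies are f_i = i / T

def c(i, t):
    return math.cos(2 * math.pi * i * t / T)

def s(i, t):
    return math.sin(2 * math.pi * i * t / T)

# cross-products over one full period t = 1, ..., T
cos_sin = sum(c(2, t) * s(3, t) for t in range(1, T + 1))  # always ~0
cos_cos = sum(c(2, t) * c(3, t) for t in range(1, T + 1))  # ~0 for i != j
sin_sin = sum(s(2, t) * s(2, t) for t in range(1, T + 1))  # T/2 for 0 < i < T/2
```

The vanishing cross-products are what make the least squares estimates of the Fourier coefficients mutually independent.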
If (V.I.1-52) is redefined as

(V.I.1-69)

then I(f) is called the sample spectrum.
The sample spectrum is in fact a Fourier cosine transformation of the autocovariance function estimate. Denote the covariance estimate of (V.I.1-7) by the sample covariance c_k (i.e. the numerator of (V.I.1-10)), the complex number by i, and the frequency by f; then

I(f) = (2/T) | \sum_{t=1}^{T} (X_t - \bar{X}) e^{-i 2 \pi f t} |^2   (V.I.1-70)
On using (V.I.1-55) and (V.I.1-70) it follows that

(V.I.1-71)

which can be substituted into (V.I.1-70), yielding

I(f) = (2/T) \sum_{t=1}^{T} \sum_{t'=1}^{T} (X_t - \bar{X})(X_{t'} - \bar{X}) \cos(2 \pi f (t - t'))   (V.I.1-72)

Now from (V.I.1-10) it follows that

c_k = (1/T) \sum_{t=1}^{T-k} (X_t - \bar{X})(X_{t+k} - \bar{X})   (V.I.1-73)

and if (t - t') is substituted by k, then (V.I.1-72) becomes

I(f) = 2 [ c_0 + 2 \sum_{k=1}^{T-1} c_k \cos(2 \pi f k) ]   (V.I.1-74)

which proves the link between the sample spectrum and the estimated autocovariance function.
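This link can be checked numerically: at a harmonic frequency f_i = i/T, the sample spectrum computed from the sample autocovariances as 2[c_0 + 2 sum_k c_k cos(2 pi f k)] equals (T/2)(a_i^2 + b_i^2) computed from the Fourier coefficients. The function names, the test series, and this particular normalization convention are our choices:

```python
import math

def spectrum_from_acov(x, f):
    """Sample spectrum I(f) = 2 * [c_0 + 2 * sum_k c_k * cos(2*pi*f*k)],
    with c_k the sample autocovariances (divisor T)."""
    T = len(x)
    mean = sum(x) / T
    d = [v - mean for v in x]
    c = [sum(d[t] * d[t + k] for t in range(T - k)) / T for k in range(T)]
    return 2.0 * (c[0] + 2.0 * sum(c[k] * math.cos(2 * math.pi * f * k)
                                   for k in range(1, T)))

def spectrum_from_coeffs(x, i):
    """Same quantity at the harmonic frequency f_i = i/T, computed as
    (T/2) * (a_i^2 + b_i^2) from the least squares Fourier coefficients."""
    T = len(x)
    a = 2.0 / T * sum(x[t] * math.cos(2 * math.pi * i * (t + 1) / T)
                      for t in range(T))
    b = 2.0 / T * sum(x[t] * math.sin(2 * math.pi * i * (t + 1) / T)
                      for t in range(T))
    return T / 2.0 * (a * a + b * b)
```

The agreement is exact at the harmonic frequencies, because the mean correction drops out of the Fourier coefficients there.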
On taking expectations of the spectrum we obtain

E[I(f)] = 2 [ E[c_0] + 2 \sum_{k=1}^{T-1} E[c_k] \cos(2 \pi f k) ]   (V.I.1-75)

for which it can be shown that

\lim_{T \to \infty} E[c_k] = \gamma_k   (V.I.1-76)

On combining (V.I.1-75) and (V.I.1-76), and on defining the power spectrum as p(f), we find

p(f) = \lim_{T \to \infty} E[I(f)] = 2 [ \gamma_0 + 2 \sum_{k=1}^{\infty} \gamma_k \cos(2 \pi f k) ]   (V.I.1-77)
It is quite obvious that

(V.I.1-78)

so it follows that the power spectrum converges if the covariance decreases rather quickly. The power spectrum is a Fourier cosine transformation of the (population) autocovariance function. This implies that for any theoretical autocovariance function (cf. the following sections) a respective theoretical power spectrum can be formulated.
Of course the power spectrum can be reformulated with respect to autocorrelations instead of autocovariances

g(f) = p(f) / \gamma_0 = 2 [ 1 + 2 \sum_{k=1}^{\infty} \rho_k \cos(2 \pi f k) ]   (V.I.1-79)

which is the so-called spectral density function.
Since

\int_0^{1/2} p(f) df = \gamma_0   (V.I.1-80)

it follows that

\int_0^{1/2} g(f) df = 1   (V.I.1-81)

and since g(f) > 0 the properties of g(f) are quite similar to those of a frequency distribution function.
Since it can be shown that the sample spectrum fluctuates wildly around the theoretical power spectrum, a modified (i.e. smoothed) estimate of the power spectrum is suggested as

(V.I.1-82)
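The exact smoothing formula (V.I.1-82) is not reproduced here. As one common concrete choice, a Bartlett (triangular) lag window can be applied to the sample autocovariances, down-weighting the high, noisy lags; the function name, the window, and the truncation point M are our own assumptions, not necessarily those of the text:

```python
import math

def smoothed_spectrum(x, f, M):
    """Lag-window smoothed spectrum estimate with a Bartlett window:
    p_hat(f) = 2 * [c_0 + 2 * sum_{k=1}^{M} (1 - k/M) * c_k * cos(2*pi*f*k)],
    where c_k are the sample autocovariances and M < T is the truncation point."""
    T = len(x)
    mean = sum(x) / T
    d = [v - mean for v in x]
    c = [sum(d[t] * d[t + k] for t in range(T - k)) / T for k in range(M + 1)]
    return 2.0 * (c[0] + 2.0 * sum((1.0 - k / M) * c[k] *
                                   math.cos(2 * math.pi * f * k)
                                   for k in range(1, M)))
```

Choosing M trades bias against variance: a small M gives a smooth but possibly distorted estimate, a large M approaches the wildly fluctuating raw sample spectrum.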
