
# NAG Toolbox Chapter Introduction: G13 — Time Series Analysis

## Scope of the Chapter

This chapter provides facilities for investigating and modelling the statistical structure of series of observations collected at equally spaced points in time. The models may then be used to forecast the series.
The chapter covers the following models and approaches.
1. Univariate time series analysis, including autocorrelation functions and autoregressive moving average (ARMA) models.
2. Univariate spectral analysis.
3. Transfer function (multi-input) modelling, in which one time series is dependent on other time series.
4. Bivariate spectral methods including coherency, gain and input response functions.
5. Vector ARMA models for multivariate time series.
6. Kalman filter models.
7. GARCH models for volatility.

## Background to the Problems

### Univariate Analysis

Let the given time series be $x_1, x_2, \dots, x_n$, where $n$ is its length. The structure which is intended to be investigated, and which may be most evident to the eye in a graph of the series, can be broadly described as:
(a) trends, linear or possibly higher-order polynomial;
(b) seasonal patterns, associated with fixed integer seasonal periods. The presence of such seasonality and the period will normally be known a priori. The pattern may be fixed, or slowly varying from one season to another;
(c) cycles or waves of stable amplitude and period $p$ (from peak to peak). The period is not necessarily integer, the corresponding absolute frequency (cycles/time unit) being $f = 1/p$ and angular frequency $\omega = 2\pi f$. The cycle may be of pure sinusoidal form like $\sin(\omega t)$, or the presence of higher harmonic terms may be indicated, e.g., by asymmetry in the wave form;
(d) quasi-cycles, i.e., waves of fluctuating period and amplitude; and
(e) irregular statistical fluctuations and swings about the overall mean or trend.
Trends, seasonal patterns, and cycles might be regarded as deterministic components following fixed mathematical equations, and the quasi-cycles and other statistical fluctuations as stochastic and describable by short-term correlation structure. For a finite dataset it is not always easy to discriminate between these two types, and a common description using the class of autoregressive integrated moving-average (ARIMA) models is now widely used. The form of these models is that of difference equations (or recurrence relations) relating present and past values of the series. You are referred to Box and Jenkins (1976) for a thorough account of these models and how to use them. We follow their notation and outline the recommended steps in ARIMA model building for which functions are available.

#### Transformations

If the variance of the observations in the series is not constant across the range of observations it may be useful to apply a variance-stabilizing transformation to the series. A common situation is for the variance to increase with the magnitude of the observations and in this case typical transformations used are the log or square root transformation. A range-mean plot or standard deviation-mean plot provides a quick and easy way of detecting non-constant variance and of choosing, if required, a suitable transformation. These are plots of either the range or standard deviation of successive groups of observations against their means.
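As an illustration of the range-mean plot idea (a plain NumPy sketch, not a call to the NAG Toolbox itself), the points to be plotted can be computed by splitting the series into successive groups and pairing each group's mean with its range; an upward drift in the ranges suggests a variance-stabilizing transformation is needed:

```python
import numpy as np

def range_mean_points(x, group_size):
    """Split a series into successive non-overlapping groups and return
    (mean, range) pairs; plotting range against mean reveals whether the
    spread of the series grows with its level."""
    x = np.asarray(x, dtype=float)
    n_groups = len(x) // group_size
    groups = x[: n_groups * group_size].reshape(n_groups, group_size)
    means = groups.mean(axis=1)
    ranges = groups.max(axis=1) - groups.min(axis=1)
    return means, ranges

# A series whose spread grows with its level: the ranges trend upward,
# suggesting (for example) a log transformation.
t = np.arange(120)
x = np.exp(0.02 * t) * (1.0 + 0.1 * np.sin(t))
means, ranges = range_mean_points(x, group_size=12)
```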

#### Differencing operations

These may be used to simplify the structure of a time series.
First-order differencing, i.e., forming the new series
$$\nabla x_t = x_t - x_{t-1}$$
will remove a linear trend. First-order seasonal differencing
$$\nabla_s x_t = x_t - x_{t-s}$$
eliminates a fixed seasonal pattern.
These operations reflect the fact that it is often appropriate to model a time series in terms of changes from one value to another. Differencing is also therefore appropriate when the series has something of the nature of a random walk, which is by definition the accumulation of independent changes.
Differencing may be applied repeatedly to a series, giving
$$w_t = \nabla^d \nabla_s^D x_t$$
where $d$ and $D$ are the orders of differencing. The derived series $w_t$ will be shorter, of length $N = n - d - s \times D$, and extend for $t = 1 + d + s \times D, \dots, n$.
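The repeated differencing above can be sketched in a few lines of NumPy (an illustration only, independent of the Toolbox functions); note how a linear trend vanishes under one first difference and a fixed seasonal pattern under one seasonal difference:

```python
import numpy as np

def difference(x, d=0, D=0, s=1):
    """Apply d first-order differences and D seasonal differences of
    period s, returning w_t = (nabla^d)(nabla_s^D) x_t of length
    n - d - s*D."""
    w = np.asarray(x, dtype=float)
    for _ in range(d):
        w = w[1:] - w[:-1]        # nabla x_t = x_t - x_{t-1}
    for _ in range(D):
        w = w[s:] - w[:-s]        # nabla_s x_t = x_t - x_{t-s}
    return w

# Linear trend plus a fixed seasonal pattern of period 4: one ordinary
# and one seasonal difference remove both, leaving a zero series.
t = np.arange(40)
x = 2.0 * t + np.tile([3.0, -1.0, 0.5, -2.5], 10)
w = difference(x, d=1, D=1, s=4)
```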

#### Sample autocorrelations

Given that a series has (possibly as a result of simplifying by differencing operations) a homogeneous appearance throughout its length, fluctuating with approximately constant variance about an overall mean level, it is appropriate to assume that its statistical properties are stationary. For most purposes the correlations $\rho_k$ between terms $x_t, x_{t+k}$ or $w_t, w_{t+k}$ separated by lag $k$ give an adequate description of the statistical structure and are estimated by the sample autocorrelation function (ACF) $r_k$, for $k = 1, 2, \dots$.
As described by Box and Jenkins (1976), these may be used to indicate which particular ARIMA model may be appropriate.
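A minimal NumPy sketch of the sample ACF (the usual mean-corrected, $n$-divisor estimator; this is an illustration, not the Toolbox routine) applied to a simulated AR(1) series, whose theoretical ACF decays as $0.8^k$:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_k = c_k / c_0, with autocovariances c_k
    computed about the sample mean and divided by n (the usual biased
    estimator)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = np.dot(xc, xc) / n
    return np.array([np.dot(xc[:n - k], xc[k:]) / (n * c0)
                     for k in range(1, max_lag + 1)])

# Simulate an AR(1) series x_t = 0.8 x_{t-1} + a_t and estimate its ACF.
rng = np.random.default_rng(0)
a = rng.standard_normal(4000)
x = np.empty(4000)
x[0] = a[0]
for t in range(1, 4000):
    x[t] = 0.8 * x[t - 1] + a[t]
r = sample_acf(x, 5)
```

The geometric decay of `r` is the ACF signature that points towards an autoregressive model.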

#### Partial autocorrelations

The information in the autocorrelations, $\rho_k$, may be presented in a different light by deriving from them the coefficients of the partial autocorrelation function (PACF) $\varphi_{k,k}$, for $k = 1, 2, \dots$, which measures the correlation between $x_t$ and $x_{t+k}$ conditional upon the intermediate values $x_{t+1}, x_{t+2}, \dots, x_{t+k-1}$. The corresponding sample values $\hat{\varphi}_{k,k}$ give further assistance in the selection of ARIMA models.
Both the autocorrelation function (ACF) and the PACF may be rapidly computed, particularly in comparison with the time taken to estimate ARIMA models.

#### Finite lag predictor coefficients and error variances

The partial autocorrelation coefficient $\varphi_{k,k}$ is determined as the final parameter in the minimum variance predictor of $x_t$ in terms of $x_{t-1}, x_{t-2}, \dots, x_{t-k}$,
$$x_t = \varphi_{k,1} x_{t-1} + \varphi_{k,2} x_{t-2} + \cdots + \varphi_{k,k} x_{t-k} + e_{k,t}$$
where $e_{k,t}$ is the prediction error, and the first subscript $k$ of $\varphi_{k,i}$ and $e_{k,t}$ emphasizes the fact that the parameters will alter as $k$ increases. Moderately good estimates $\hat{\varphi}_{k,i}$ of $\varphi_{k,i}$ are obtained from the sample autocorrelation function (ACF), and after calculating the partial autocorrelation function (PACF) up to lag $L$, the successive values $v_1, v_2, \dots, v_L$ of the prediction error variance estimates, $v_k = \mathrm{var}(e_{k,t})$, are available, together with the final values of the coefficients $\hat{\varphi}_{k,1}, \hat{\varphi}_{k,2}, \dots, \hat{\varphi}_{k,L}$. If $x_t$ has nonzero mean, $\bar{x}$, it is adequate to use $x_t - \bar{x}$ in place of $x_t$ in the prediction equation.
Although Box and Jenkins (1976) do not place great emphasis on these prediction coefficients, their use is advocated for example by Akaike (1971), who recommends selecting an optimal order of the predictor as the lag for which the final prediction error (FPE) criterion $(1 + k/n)(1 - k/n)^{-1} v_k$ is a minimum.
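The PACF values and prediction error variances can be obtained together from the sample autocovariances by the Durbin–Levinson recursion; the sketch below (plain NumPy, not a Toolbox call) also evaluates Akaike's FPE criterion over the candidate orders:

```python
import numpy as np

def durbin_levinson(c):
    """Given sample autocovariances c[0..L], return the PACF values
    phi_{k,k} and the prediction error variances v_k, k = 1..L, via the
    Durbin-Levinson recursion."""
    L = len(c) - 1
    pacf, var = [], []
    phi = np.zeros(0)            # current predictor coefficients
    v = c[0]                     # v_0 = c_0
    for k in range(1, L + 1):
        # reflection coefficient phi_{k,k}
        ref = (c[k] - np.dot(phi, c[k - 1:0:-1])) / v
        phi = np.concatenate([phi - ref * phi[::-1], [ref]])
        v = v * (1.0 - ref * ref)
        pacf.append(ref)
        var.append(v)
    return np.array(pacf), np.array(var)

# AR(2) data: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + a_t, so phi_{2,2} = -0.5.
n = 400
rng = np.random.default_rng(1)
a = rng.standard_normal(n)
x = np.empty(n)
x[0] = a[0]
x[1] = a[1] + 0.75 * x[0]
for t in range(2, n):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + a[t]
xc = x - x.mean()
c = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(11)])
pacf, var = durbin_levinson(c)

# Akaike's FPE criterion (1 + k/n)(1 - k/n)^{-1} v_k, minimised over k.
k = np.arange(1, 11)
fpe = (1 + k / n) / (1 - k / n) * var
best = int(k[np.argmin(fpe)])
```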

#### ARIMA models

The correlation structure in stationary time series may often be represented by a model with a small number of parameters belonging to the autoregressive moving-average (ARMA) class. If the stationary series $w_t$ has been derived by differencing from the original series $x_t$, then $x_t$ is said to follow an ARIMA model. Taking $w_t = \nabla^d x_t$, the (non-seasonal) ARIMA$(p,d,q)$ model with $p$ autoregressive parameters $\varphi_1, \varphi_2, \dots, \varphi_p$ and $q$ moving-average parameters $\theta_1, \theta_2, \dots, \theta_q$, represents the structure of $w_t$ by the equation
$$w_t = \varphi_1 w_{t-1} + \cdots + \varphi_p w_{t-p} + a_t - \theta_1 a_{t-1} - \cdots - \theta_q a_{t-q}, \qquad (1)$$
where $a_t$ is an uncorrelated series (white noise) with mean $0$ and constant variance $\sigma_a^2$. If $w_t$ has a nonzero mean $c$, then this is allowed for by replacing $w_t, w_{t-1}, \dots$ by $w_t - c, w_{t-1} - c, \dots$ in the model. Although $c$ is often estimated by the sample mean of $w_t$ this is not always optimal.
A series generated by this model will only be stationary provided restrictions are placed on $\varphi_1, \varphi_2, \dots, \varphi_p$ to avoid unstable growth of $w_t$. These are called stationarity constraints. The series $a_t$ may also be usefully interpreted as the linear innovations in $x_t$ (and in $w_t$), i.e., the error if $x_t$ were to be predicted using the information in all past values $x_{t-1}, x_{t-2}, \dots$, provided also that $\theta_1, \theta_2, \dots, \theta_q$ satisfy invertibility constraints. This allows the series $a_t$ to be regenerated by rewriting the model equation as
$$a_t = w_t - \varphi_1 w_{t-1} - \cdots - \varphi_p w_{t-p} + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q}. \qquad (2)$$
For a series with short-term correlation only, i.e., $r_k$ is not significant beyond some low lag $q$ (see Box and Jenkins (1976) for the statistical test), the pure moving-average model MA$(q)$ is appropriate, with no autoregressive parameters, i.e., $p = 0$.
Autoregressive parameters are appropriate when the autocorrelation function (ACF) pattern decays geometrically, or with a damped sinusoidal pattern which is associated with quasi-periodic behaviour in the series. If the sample partial autocorrelation function (PACF) $\hat{\varphi}_{k,k}$ is significant only up to some low lag $p$, then a pure autoregressive model AR$(p)$ is appropriate, with $q = 0$. Otherwise moving-average terms will need to be introduced, as well as autoregressive terms.
The seasonal ARIMA$(p,d,q,P,D,Q,s)$ model allows for correlation at lags which are multiples of the seasonal period $s$. Taking $w_t = \nabla^d \nabla_s^D x_t$, the series is represented in a two-stage manner via an intermediate series $e_t$:
$$w_t = \Phi_1 w_{t-s} + \cdots + \Phi_P w_{t-s \times P} + e_t - \Theta_1 e_{t-s} - \cdots - \Theta_Q e_{t-s \times Q} \qquad (3)$$
$$e_t = \varphi_1 e_{t-1} + \cdots + \varphi_p e_{t-p} + a_t - \theta_1 a_{t-1} - \cdots - \theta_q a_{t-q} \qquad (4)$$
where $\Phi_i$, $\Theta_i$ are the seasonal parameters and $P$ and $Q$ are the corresponding orders. Again, $w_t$ may be replaced by $w_t - c$.
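Equation (1) can be turned directly into a simulator, which is often useful for checking intuition about ARMA behaviour. The NumPy sketch below (an illustration, with pre-sample values set to zero so the first few observations are transient) generates $w_t$ from chosen $\varphi$ and $\theta$:

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, seed=0):
    """Generate w_t from equation (1):
    w_t = phi_1 w_{t-1} + ... + phi_p w_{t-p}
          + a_t - theta_1 a_{t-1} - ... - theta_q a_{t-q},
    with a_t Gaussian white noise of standard deviation sigma and
    pre-sample values taken as zero."""
    phi, theta = np.asarray(phi, float), np.asarray(theta, float)
    p, q = len(phi), len(theta)
    rng = np.random.default_rng(seed)
    a = sigma * rng.standard_normal(n)
    w = np.zeros(n)
    for t in range(n):
        ar = sum(phi[i] * w[t - 1 - i] for i in range(min(p, t)))
        ma = sum(theta[j] * a[t - 1 - j] for j in range(min(q, t)))
        w[t] = ar + a[t] - ma
    return w

# An ARMA(1,1) series with phi_1 = 0.6 (stationary) and theta_1 = 0.3
# (invertible).
w = simulate_arma(phi=[0.6], theta=[0.3], n=500, seed=42)
```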

#### ARIMA model estimation

In theory, the parameters of an ARIMA model are determined by a sufficient number of autocorrelations $\rho_1, \rho_2, \dots$. Using the sample values $r_1, r_2, \dots$ in their place, it is usually (but not always) possible to solve for the corresponding ARIMA parameters.
These are rapidly computed but are not fully efficient estimates, particularly if moving-average parameters are present. They do provide useful preliminary values for an efficient but relatively slow iterative method of estimation. This is based on the least squares principle by which parameters are chosen to minimize the sum of squares of the innovations at${a}_{t}$, which are regenerated from the data using (2), or the reverse of (3) and (4) in the case of seasonal models.
Lack of knowledge of terms on the right-hand side of (2), when $t = 1, 2, \dots, \max(p,q)$, is overcome by introducing $q$ unknown series values $w_0, w_1, \dots, w_{q-1}$ which are estimated as nuisance parameters, and using correction for transient errors due to the autoregressive terms. If the data $w_1, w_2, \dots, w_N = w$ is viewed as a single sample from a multivariate Normal density whose covariance matrix $V$ is a function of the ARIMA model parameters, then the exact likelihood of the parameters is
$$-\tfrac{1}{2}\log|V| - \tfrac{1}{2} w^{\mathrm{T}} V^{-1} w.$$
The least squares criterion as outlined above is equivalent to using the quadratic form
$$\mathrm{QF} = w^{\mathrm{T}} V^{-1} w$$
as an objective function to be minimized. Neglecting the term $-\tfrac{1}{2}\log|V|$ yields estimates which differ very little from the exact likelihood except in small samples, or in seasonal models with a small number of whole seasons contained in the data. In these cases bias in moving-average parameters may cause them to stick at the boundary of their constraint region, resulting in failure of the estimation method.
Approximate standard errors of the parameter estimates and the correlations between them are available after estimation.
The model residuals, $\hat{a}_t$, are the innovations resulting from the estimation and are usually examined for the presence of autocorrelation as a check on the adequacy of the model.
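The least squares principle is easy to demonstrate for the simplest case. The sketch below (plain NumPy, not the Toolbox estimation routine) regenerates the innovations of an MA(1) model via equation (2) with the pre-sample innovation set to zero, and grid-searches the conditional sum of squares over the invertibility region:

```python
import numpy as np

def css_ma1(w, theta):
    """Regenerate the innovations of an MA(1) model, a_t = w_t + theta*a_{t-1}
    (equation (2) with p = 0, q = 1, a_0 = 0), and return the conditional
    sum of squares."""
    a = np.zeros(len(w))
    for t in range(len(w)):
        a[t] = w[t] + theta * (a[t - 1] if t > 0 else 0.0)
    return np.sum(a * a)

# Simulated MA(1) data with true theta = 0.5.
rng = np.random.default_rng(3)
a = rng.standard_normal(2000)
w = a[1:] - 0.5 * a[:-1]

# Minimise the sum of squares over a grid inside the invertibility region.
grid = np.linspace(-0.9, 0.9, 181)
s = np.array([css_ma1(w, th) for th in grid])
theta_hat = grid[np.argmin(s)]
```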

#### ARIMA model forecasting

An ARIMA model is particularly suited to extrapolation of a time series. The model equations are simply used for $t = n+1, n+2, \dots$ replacing the unknown future values of $a_t$ by zero. This produces future values of $w_t$, and if differencing has been used this process is reversed (the so-called integration part of ARIMA models) to construct future values of $x_t$.
Forecast error limits are easily deduced.
This process requires knowledge only of the model orders and parameters together with a limited set of the terms $a_{t-i}, e_{t-i}, w_{t-i}, x_{t-i}$ which appear on the right-hand side of the models (3) and (4) (and the differencing equations) when $t = n$. It does not require knowledge of the whole series.
We call this the state set. It is conveniently constituted after model estimation. Moreover, if new observations $x_{n+1}, x_{n+2}, \dots$ come to hand, then the model equations can easily be used to update the state set before constructing forecasts from the end of the new observations. This is particularly useful when forecasts are constructed on a regular basis. The new innovations $a_{n+1}, a_{n+2}, \dots$ may be compared with the residual standard deviation, $\sigma_a$, of the model used for forecasting, as a check that the model is continuing to forecast adequately.
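The forecasting recursion, with future innovations set to zero, is easily sketched for a pure AR model (a NumPy illustration, not a Toolbox call); for an AR(1) with positive parameter the forecasts decay geometrically towards the (zero) mean:

```python
import numpy as np

def ar_forecast(w, phi, h):
    """Forecast an AR(p) series h steps ahead from the end of the data by
    iterating the model equation with future innovations a_t set to zero.
    The last p observations are the 'state set' needed."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    hist = list(w[-p:])
    out = []
    for _ in range(h):
        nxt = float(np.dot(phi, hist[::-1][:p]))  # phi_1 w_{t-1} + ...
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

# AR(1) with phi = 0.5, last observation 2.0: forecasts 1.0, 0.5, 0.25.
f = ar_forecast([1.0, 0.5, 2.0], phi=[0.5], h=3)
```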

#### Exponential Smoothing

Exponential smoothing is a relatively simple method of short term forecasting for a time series. A variety of different smoothing methods are possible, including: single exponential, Brown's double exponential, linear Holt (also called double exponential smoothing in some references), additive Holt–Winters and multiplicative Holt–Winters. The choice of smoothing method used depends on the characteristics of the time series. If the mean of the series is only slowly changing then single exponential smoothing may be suitable. If there is a trend in the time series, which itself may be slowly changing, then linear Holt smoothing may be suitable. If there is a seasonal component to the time series, e.g., daily or monthly data, then one of the two Holt–Winters methods may be suitable.
For a time series $y_t$, for $t = 1, 2, \dots, n$, the five smoothing functions are defined by the following:
• Single Exponential Smoothing
$$m_t = \alpha y_t + (1-\alpha) m_{t-1}$$
$$\hat{y}_{t+f} = m_t$$
$$\mathrm{var}(\hat{y}_{t+f}) = \mathrm{var}(\varepsilon_t)\left(1 + (f-1)\alpha^2\right)$$
• Brown Double Exponential Smoothing
$$m_t = \alpha y_t + (1-\alpha) m_{t-1}$$
$$r_t = \alpha (m_t - m_{t-1}) + (1-\alpha) r_{t-1}$$
$$\hat{y}_{t+f} = m_t + \left((f-1) + 1/\alpha\right) r_t$$
$$\mathrm{var}(\hat{y}_{t+f}) = \mathrm{var}(\varepsilon_t)\left(1 + \sum_{i=1}^{f-1} \left(2\alpha + (i-1)\alpha^2\right)^2\right)$$
• Linear Holt Smoothing
$$m_t = \alpha y_t + (1-\alpha)(m_{t-1} + \varphi r_{t-1})$$
$$r_t = \gamma (m_t - m_{t-1}) + (1-\gamma)\varphi r_{t-1}$$
$$\hat{y}_{t+f} = m_t + \sum_{i=1}^{f} \varphi^i r_t$$
$$\mathrm{var}(\hat{y}_{t+f}) = \mathrm{var}(\varepsilon_t)\left(1 + \sum_{i=1}^{f-1} \left(\alpha + \frac{\alpha\gamma\varphi(\varphi^i - 1)}{\varphi - 1}\right)^2\right)$$
• Additive Holt–Winters Smoothing
$$m_t = \alpha (y_t - s_{t-p}) + (1-\alpha)(m_{t-1} + \varphi r_{t-1})$$
$$r_t = \gamma (m_t - m_{t-1}) + (1-\gamma)\varphi r_{t-1}$$
$$s_t = \beta (y_t - m_t) + (1-\beta) s_{t-p}$$
$$\hat{y}_{t+f} = m_t + \sum_{i=1}^{f} \varphi^i r_t + s_{t-p}$$
$$\mathrm{var}(\hat{y}_{t+f}) = \mathrm{var}(\varepsilon_t)\left(1 + \sum_{i=1}^{f-1} \psi_i^2\right)$$
$$\psi_i = \begin{cases} 0 & \text{if } i \ge f \\ \alpha + \dfrac{\alpha\gamma\varphi(\varphi^i - 1)}{\varphi - 1} & \text{if } i \bmod p \ne 0 \\ \alpha + \dfrac{\alpha\gamma\varphi(\varphi^i - 1)}{\varphi - 1} + \beta(1-\alpha) & \text{otherwise} \end{cases}$$
• Multiplicative Holt–Winters Smoothing
$$m_t = \alpha y_t / s_{t-p} + (1-\alpha)(m_{t-1} + \varphi r_{t-1})$$
$$r_t = \gamma (m_t - m_{t-1}) + (1-\gamma)\varphi r_{t-1}$$
$$s_t = \beta y_t / m_t + (1-\beta) s_{t-p}$$
$$\hat{y}_{t+f} = \left(m_t + \sum_{i=1}^{f} \varphi^i r_t\right) \times s_{t-p}$$
$$\mathrm{var}(\hat{y}_{t+f}) = \mathrm{var}(\varepsilon_t)\left(\sum_{i=0}^{\infty} \sum_{j=0}^{p-1} \left(\psi_{j+ip}\, \frac{s_{t+f}}{s_{t+f-j}}\right)^2\right)$$
and $\psi$ is defined as in the additive Holt–Winters smoothing, where $m_t$ is the mean, $r_t$ is the trend and $s_t$ is the seasonal component at time $t$, with $p$ being the seasonal order. The $f$-step ahead forecasts are given by $\hat{y}_{t+f}$ and their variances by $\mathrm{var}(\hat{y}_{t+f})$. The term $\mathrm{var}(\varepsilon_t)$ is estimated as the mean deviation.
The parameters $\alpha$, $\beta$ and $\gamma$ control the amount of smoothing. The nearer these parameters are to one, the greater the emphasis on the current data point. Generally these parameters take values in the range $0.1$ to $0.3$. The linear Holt and two Holt–Winters smoothers include an additional parameter, $\varphi$, which acts as a trend dampener. For $0.0 < \varphi < 1.0$ the trend is dampened and for $\varphi > 1.0$ the forecast function has an exponential trend; $\varphi = 0.0$ removes the trend term from the forecast function and $\varphi = 1.0$ does not dampen the trend.
For all methods, values for $\alpha$, $\beta$, $\gamma$ and $\varphi$ can be chosen by trying different values and then visually comparing the results by plotting the fitted values alongside the original data. Alternatively, for single exponential smoothing a suitable value for $\alpha$ can be obtained by fitting an ARIMA$(0,1,1)$ model. For Brown's double exponential smoothing and linear Holt smoothing with no dampening (i.e., $\varphi = 1.0$), suitable values for $\alpha$ and, in the case of linear Holt smoothing, $\gamma$ can be obtained by fitting an ARIMA$(0,2,2)$ model. Similarly, the linear Holt method, with $\varphi \ne 1.0$, can be expressed as an ARIMA$(1,2,2)$ model and the additive Holt–Winters, with no dampening ($\varphi = 1.0$), can be expressed as a seasonal ARIMA model with order $p$ of the form ARIMA$(0,1,p+1)(0,1,0)$. There is no similar procedure for obtaining parameter values for the multiplicative Holt–Winters method, or the additive Holt–Winters method with $\varphi \ne 1.0$. In these cases parameters could be selected by minimizing a measure of fit using nonlinear optimization.
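The simplest of the five schemes can be written down in a few lines. The NumPy sketch below (an illustration only; the level is initialized at the first observation, one common convention) implements single exponential smoothing, for which the $f$-step-ahead forecast is just the current level $m_t$:

```python
import numpy as np

def single_exponential(y, alpha, m0=None):
    """Single exponential smoothing m_t = alpha*y_t + (1-alpha)*m_{t-1}.
    The level is initialized at y[0] unless m0 is supplied; the f-step
    forecast from time t is simply m_t for every f."""
    y = np.asarray(y, dtype=float)
    m = np.empty(len(y))
    level = y[0] if m0 is None else m0
    for t, yt in enumerate(y):
        level = alpha * yt + (1 - alpha) * level
        m[t] = level
    return m

y = np.array([10.0, 12.0, 11.0, 13.0, 12.5])
m = single_exponential(y, alpha=0.3)
forecast = m[-1]   # flat forecast function: y_hat(t+f) = m_t
```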

### Univariate Spectral Analysis

In describing a time series using spectral analysis the fundamental components are taken to be sinusoidal waves of the form $R\cos(\omega t + \varphi)$, which for a given angular frequency $\omega$, $0 \le \omega \le \pi$, is specified by its amplitude $R > 0$ and phase $\varphi$, $0 \le \varphi < 2\pi$. Thus in a time series of $n$ observations it is not possible to distinguish more than $n/2$ independent sinusoidal components. The frequency range $0 \le \omega \le \pi$ is limited to the shortest wavelength of two sampling units because any wave of higher frequency is indistinguishable upon sampling from (or is aliased with) a wave within this range. Spectral analysis follows the idea that for a series made up of a finite number of sine waves the amplitude of any component at frequency $\omega$ is given to order $1/n$ by
$$R^2 = \frac{1}{n^2} \left| \sum_{t=1}^{n} x_t e^{i\omega t} \right|^2 .$$

#### The sample spectrum

For a series $x_1, x_2, \dots, x_n$ this is defined as
$$f^{*}(\omega) = \frac{1}{2n\pi} \left| \sum_{t=1}^{n} x_t e^{i\omega t} \right|^2 ,$$
the scaling factor now being chosen in order that
$$2 \int_0^{\pi} f^{*}(\omega)\, d\omega = \sigma_x^2 ,$$
i.e., the spectrum indicates how the sample variance ($\sigma_x^2$) of the series is distributed over components in the frequency range $0 \le \omega \le \pi$.
It may be demonstrated that $f^{*}(\omega)$ is equivalently defined in terms of the sample ACF $r_k$ of the series as
$$f^{*}(\omega) = \frac{1}{2\pi} \left(c_0 + 2 \sum_{k=1}^{n-1} c_k \cos k\omega \right),$$
where $c_k = \sigma_x^2 r_k$ are the sample autocovariance coefficients.
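The equivalence of the two forms of $f^{*}(\omega)$ is an exact algebraic identity for a mean-corrected series, which the following NumPy sketch (an illustration, not a Toolbox routine) makes concrete by evaluating both at the same frequency:

```python
import numpy as np

def sample_spectrum_dft(x, omega):
    """f*(omega) = |sum_t x_t e^{i omega t}|^2 / (2 n pi), with the series
    taken about its sample mean."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    t = np.arange(1, n + 1)
    s = np.sum(xc * np.exp(1j * omega * t))
    return np.abs(s) ** 2 / (2 * n * np.pi)

def sample_spectrum_acov(x, omega):
    """Equivalent form (1/2pi)(c_0 + 2 sum_{k=1}^{n-1} c_k cos k omega),
    with c_k the n-divisor sample autocovariances."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(n)])
    k = np.arange(1, n)
    return (c[0] + 2 * np.sum(c[k] * np.cos(k * omega))) / (2 * np.pi)

rng = np.random.default_rng(7)
x = rng.standard_normal(64)
w = 2 * np.pi * 5 / 64          # a Fourier frequency
```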
If the series $x_t$ does contain a deterministic sinusoidal component of amplitude $R$, this will be revealed in the sample spectrum as a sharp peak of approximate width $\pi/n$ and height $(n/2\pi) R^2$. This is called the discrete part of the spectrum, the variance $R^2$ associated with this component being in effect concentrated at a single frequency.
If the series $x_t$ has no deterministic components, i.e., is purely stochastic, being stationary with autocorrelation function (ACF) $r_k$, then with increasing sample size the expected value of $f^{*}(\omega)$ converges to the theoretical spectrum, the continuous part
$$f(\omega) = \frac{1}{2\pi} \left(\gamma_0 + 2 \sum_{k=1}^{\infty} \gamma_k \cos(\omega k)\right),$$
where $\gamma_k$ are the theoretical autocovariances.
The sample spectrum does not however converge to this value but at each frequency point fluctuates about the theoretical spectrum with an exponential distribution, being independent at frequencies separated by an interval of $2\pi/n$ or more. Various devices are therefore employed to smooth the sample spectrum and reduce its variability. Much of the strength of spectral analysis derives from the fact that the error limits are multiplicative, so that features may still show up as significant in a part of the spectrum which has a generally low level, whereas they would be completely masked by other components in the original series. The spectrum can help to distinguish deterministic cyclical components from the stochastic quasi-cycle components, which produce a broader peak in the spectrum. (The deterministic components can be removed by regression and the remaining part represented by an ARIMA model.)
A large discrete component in a spectrum can distort the continuous part over a large frequency range surrounding the corresponding peak. This may be alleviated at the cost of slightly broadening the peak by tapering a portion of the data at each end of the series with weights which decay smoothly to zero. It is usual to correct for the mean of the series and for any linear trend by simple regression, since they would similarly distort the spectrum.

#### Spectral smoothing by lag window

The estimate is calculated directly from the sample autocovariances $c_k$ as
$$f(\omega) = \frac{1}{2\pi} \left(c_0 + 2 \sum_{k=1}^{M-1} w_k c_k \cos k\omega \right),$$
the smoothing being induced by the lag window weights $w_k$ which extend up to a truncation lag $M$ which is generally much less than $n$. The smaller the value of $M$, the greater the degree of smoothing, the spectrum estimates being independent only at a wider frequency separation indicated by the bandwidth $b$ which is proportional to $1/M$. It is wise, however, to calculate the spectrum at intervals appreciably less than this. Although greater smoothing narrows the error limits, it can also distort the spectrum, particularly by flattening peaks and filling in troughs.
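The lag-window formula is straightforward to sketch in NumPy; the Bartlett (triangular) weights $w_k = 1 - k/M$ used below are just one common choice of window, assumed here for illustration (the Toolbox supports its own set of windows):

```python
import numpy as np

def lag_window_spectrum(x, M, omegas):
    """Smoothed spectrum estimate
    f(omega) = (1/2pi)(c_0 + 2 sum_{k=1}^{M-1} w_k c_k cos k omega),
    with Bartlett (triangular) lag-window weights w_k = 1 - k/M."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(M)])
    k = np.arange(1, M)
    w = 1.0 - k / M                        # Bartlett weights
    return np.array([(c[0] + 2 * np.sum(w * c[k] * np.cos(k * om)))
                     / (2 * np.pi) for om in omegas])

# For white noise the theoretical spectrum is flat at sigma^2/(2 pi).
rng = np.random.default_rng(11)
x = rng.standard_normal(512)
omegas = np.linspace(0, np.pi, 65)
f_hat = lag_window_spectrum(x, M=32, omegas=omegas)
```

Decreasing `M` gives a smoother but potentially more distorted estimate, in line with the bandwidth trade-off described above.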

#### Direct spectral smoothing

The unsmoothed sample spectrum is calculated for a fine division of frequencies, then averaged over intervals centred on each frequency point for which the smoothed spectrum is required. This is usually at a coarser frequency division. The bandwidth corresponds to the width of the averaging interval.

### Linear Lagged Relationships Between Time Series

We now consider the context in which one time series, called the dependent or output series, $y_1, y_2, \dots, y_n$, is believed to depend on one or more explanatory or input series, e.g., $x_1, x_2, \dots, x_n$. This dependency may follow a simple linear regression, e.g.,
$$y_t = v x_t + n_t$$
or more generally may involve lagged values of the input
$$y_t = v_0 x_t + v_1 x_{t-1} + v_2 x_{t-2} + \cdots + n_t .$$
The sequence $v_0, v_1, v_2, \dots$ is called the impulse response function (IRF) of the relationship. The term $n_t$ represents that part of $y_t$ which cannot be explained by the input, and it is assumed to follow a univariate ARIMA model. We call $n_t$ the (output) noise component of $y_t$, and it includes any constant term in the relationship. It is assumed that the input series, $x_t$, and the noise component, $n_t$, are independent.
The part of $y_t$ which is explained by the input is called the input component $z_t$:
$$z_t = v_0 x_t + v_1 x_{t-1} + v_2 x_{t-2} + \cdots$$
so $y_t = z_t + n_t$.
The eventual aim is to model both these components of $y_t$ on the basis of observations of $y_1, y_2, \dots, y_n$ and $x_1, x_2, \dots, x_n$. In applications to forecasting or control both components are important. In general there may be more than one input series, e.g., $x_{1,t}$ and $x_{2,t}$, which are assumed to be independent, with corresponding components $z_{1,t}$ and $z_{2,t}$, so
$$y_t = z_{1,t} + z_{2,t} + n_t .$$

#### Transfer function models

In a similar manner to that in which the structure of a univariate series may be represented by a finite-parameter ARIMA model, the structure of an input component may be represented by a transfer function (TF) model with delay time $b$, $p$ autoregressive-like parameters $\delta_1, \delta_2, \dots, \delta_p$ and $q+1$ moving-average-like parameters $\omega_0, \omega_1, \dots, \omega_q$:
$$z_t = \delta_1 z_{t-1} + \delta_2 z_{t-2} + \cdots + \delta_p z_{t-p} + \omega_0 x_{t-b} - \omega_1 x_{t-b-1} - \cdots - \omega_q x_{t-b-q}. \qquad (5)$$
If $p > 0$ this represents an impulse response function (IRF) which is infinite in extent and decays with geometric and/or sinusoidal behaviour. The parameters $\delta_1, \delta_2, \dots, \delta_p$ are constrained to satisfy a stability condition identical to the stationarity condition of autoregressive models. There is no constraint on $\omega_0, \omega_1, \dots, \omega_q$.

#### Cross-correlations

An important tool for investigating how an input series $x_t$ affects an output series $y_t$ is the sample cross-correlation function (CCF) $r_{xy}(k)$, for $k = 0, 1, \dots$, between the series. If $x_t$ and $y_t$ are (jointly) stationary time series this is an estimator of the theoretical quantity
$$\rho_{xy}(k) = \mathrm{corr}(x_t, y_{t+k}).$$
The sequence $r_{yx}(k)$, for $k = 0, 1, \dots$, is distinct from $r_{xy}(k)$, though it is possible to interpret
$$r_{yx}(k) = r_{xy}(-k).$$
When the series $y_t$ and $x_t$ are believed to be related by a transfer function (TF) model, the CCF is determined by the impulse response function (IRF) $v_0, v_1, v_2, \dots$ and the autocorrelation function (ACF) of the input $x_t$.
In the particular case when $x_t$ is an uncorrelated series or white noise (and is uncorrelated with any other inputs):
$$\rho_{xy}(k) \propto v_k$$
and the sample CCF can provide an estimate of $v_k$:
$$\tilde{v}_k = (s_y / s_x) r_{xy}(k)$$
where $s_y$ and $s_x$ are the sample standard deviations of $y_t$ and $x_t$, respectively.
In theory the IRF coefficients $v_b, \dots, v_{b+p+q}$ determine the parameters in the TF model, and using $\tilde{v}_k$ to estimate $v_k$ it is possible to solve for preliminary estimates of $\delta_1, \delta_2, \dots, \delta_p$, $\omega_0, \omega_1, \dots, \omega_q$.
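For a white-noise input, the IRF estimate $\tilde{v}_k = (s_y/s_x)\, r_{xy}(k)$ can be demonstrated with a small NumPy sketch (an illustration, not a Toolbox routine), here on a simulated pure-delay relationship $y_t = 2 x_{t-2} + \text{noise}$:

```python
import numpy as np

def sample_ccf(x, y, max_lag):
    """Sample cross-correlations r_xy(k) = c_xy(k)/(s_x s_y), where
    c_xy(k) = (1/n) sum_t (x_t - xbar)(y_{t+k} - ybar), k = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xc, yc = x - x.mean(), y - y.mean()
    sx = np.sqrt(np.dot(xc, xc) / n)
    sy = np.sqrt(np.dot(yc, yc) / n)
    return np.array([np.dot(xc[:n - k], yc[k:]) / (n * sx * sy)
                     for k in range(max_lag + 1)])

# White-noise input x; output depends on x with delay b = 2 and v_2 = 2.
rng = np.random.default_rng(5)
x = rng.standard_normal(4000)
y = 2.0 * np.roll(x, 2) + 0.5 * rng.standard_normal(4000)
y[:2] = 0.0                      # discard the wrapped-around values

r = sample_ccf(x, y, 5)
v_hat = (y.std() / x.std()) * r  # IRF estimate (s_y/s_x) r_xy(k)
```

The spike in `v_hat` at lag 2 recovers both the delay $b$ and the coefficient $v_2$.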

#### Prewhitening or filtering by an ARIMA model

In general an input series $x_t$ is not white noise, but may be represented by an ARIMA model with innovations or residuals $a_t$ which are white noise. If precisely the same operations by which $a_t$ is generated from $x_t$ are applied to the output $y_t$ to produce a series $b_t$, then the transfer function relationship between $y_t$ and $x_t$ is preserved between $b_t$ and $a_t$. It is then possible to estimate
$$\tilde{v}_k = (s_b / s_a) r_{ab}(k).$$
The procedure of generating $a_t$ from $x_t$ (and $b_t$ from $y_t$) is called prewhitening or filtering by an ARIMA model. Although $a_t$ is necessarily white noise, this is not generally true of $b_t$.

#### Multi-input model estimation

The term multi-input model is used for the situation when one output series $y_t$ is related to one or more input series $x_{j,t}$, as described in Section [Linear Lagged Relationships Between Time Series]. If for a given input the relationship is a simple linear regression, it is called a simple input; otherwise it is a transfer function input. The error or noise term follows an ARIMA model.
Given that the orders of all the transfer function models and the ARIMA model of a multi-input model have been specified, the various parameters in those models may be (simultaneously) estimated.
The procedure used is closely related to the least squares principle applied to the innovations in the ARIMA noise model.
The innovations are derived for any proposed set of parameter values by calculating the response of each input to the transfer functions and then evaluating the noise $n_t$ as the difference between this response (combined for all the inputs) and the output. The innovations are derived from the noise using the ARIMA model in the same manner as for a univariate series, and as described in Section [ARIMA models].
In estimating the parameters, consideration has to be given to the lagged terms in the various model equations which are associated with times prior to the observation period, and are therefore unknown. The function descriptions provide the necessary detail as to how this problem is treated.
Also, as described in Section [ARIMA model estimation] the sum of squares criterion
 S = ∑ at2 $S=∑at2$
is related to the quadratic form in the exact log-likelihood of the parameters:
 − (1/2)log|V| − (1/2)wTV − 1w. $-12log|V|-12wTV-1w.$
Here w$w$ is the vector of appropriately differenced noise terms, and
 wTV − 1w = S / σa2, $wTV-1w=S/σa2,$
where σa2${\sigma }_{a}^{2}$ is the innovation variance parameter.
The least squares criterion is therefore identical to minimization of the quadratic form, but is not identical to exact likelihood. Because V$V$ may be expressed as Mσa2$M{\sigma }_{a}^{2}$, where M$M$ is a function of the ARIMA model parameters, substitution of σa2${\sigma }_{a}^{2}$ by its maximum likelihood (ML) estimator yields a concentrated (or profile) likelihood which is a function of
 |M|1 / NS. $|M|1/NS.$
N$N$ is the length of the differenced noise series w$w$, and |M| = detM$|M|=\mathrm{det}M$.
Use of the above quantity, called the deviance, D$D$, as an objective function is preferable to the use of S$S$ alone, on the grounds that it is equivalent to exact likelihood, and yields estimates with better properties. However, there is an appreciable computational penalty in calculating D$D$, and in large samples it differs very little from S$S$, except in the important case of seasonal ARIMA models where the number of whole seasons within the data length must also be large.
You are given the option of taking the objective function to be either S$S$ or D$D$, or a third possibility, the marginal likelihood. This is similar to exact likelihood but can counteract bias in the ARIMA model due to the fitting of a large number of simple inputs.
Approximate standard errors of the parameter estimates and the correlations between them are available after estimation.
The model residuals ${\stackrel{^}{a}}_{t}$ are the innovations resulting from the estimation, and they are usually examined for the presence of either autocorrelation or cross-correlation with the inputs. Absence of such correlation provides some confirmation of the adequacy of the model.

#### Multi-input model forecasting

A multi-input model may be used to forecast the output series provided future values (possibly forecasts) of the input series are supplied.
Construction of the forecasts requires knowledge only of the model orders and parameters, together with a limited set of the most recent variables which appear in the model equations. This is called the state set. It is conveniently constituted after model estimation. Moreover, if new observations yn + 1,yn + 2,${y}_{n+1},{y}_{n+2},\dots \text{}$ of the output series and xn + 1,xn + 2,${x}_{n+1},{x}_{n+2},\dots \text{}$ of (all) the independent input series become available, then the model equations can easily be used to update the state set before constructing forecasts from the end of the new observations. The new innovations an + 1,an + 2,${a}_{n+1},{a}_{n+2},\dots \text{}$ generated in this updating may be used to monitor the continuing adequacy of the model.

#### Transfer function model filtering

In many time series applications it is desired to calculate the response (or output) of a transfer function (TF) model for a given input series.
Smoothing, detrending, and seasonal adjustment are typical applications. You must specify the orders and parameters of a TF model for the purpose being considered. This may then be applied to the input series.
Again, problems may arise due to ignorance of the input series values prior to the observation period. The transient errors which can arise from this may be substantially reduced by using ‘backforecasts’ of these unknown observations.

### Multivariate Time Series

Multi-input modelling represents one output time series in terms of one or more input series. Although there are circumstances in which it may be more appropriate to analyse a set of time series by modelling each one in turn as the output series with the remainder as inputs, there is a more symmetric approach in such a context. These models are known as vector autoregressive moving-average (VARMA) models.

#### Differencing and transforming a multivariate time series

As in the case of a univariate time series, it may be useful to simplify the series by differencing operations which may be used to remove linear or seasonal trends, thus ensuring that the resulting series to be used in the model estimation is stationary. It may also be necessary to apply transformations to the individual components of the multivariate series in order to stabilize the variance. Commonly used transformations are the log and square root transformations.

#### Model identification for a multivariate time series

Multivariate analogues of the autocorrelation and partial autocorrelation functions are available for analysing a set of k$k$ time series, xi,1,xi,2,,xi,n${x}_{\mathit{i},1},{x}_{\mathit{i},2},\dots ,{x}_{\mathit{i},n}$, for i = 1,2,,k$\mathit{i}=1,2,\dots ,k$, thereby making it possible to obtain some understanding of a suitable VARMA model for the observed series.
It is assumed that the time series have been differenced if necessary, and that they are jointly stationary. The lagged correlations between all possible pairs of series, i.e.,
 ρijl = corr (x i , t ,x j , t + l ) $ρ ijl = corr ( x i , t , x j , t + l )$
are then taken to provide an adequate description of the statistical relationships between the series. These quantities are estimated by sample auto- and cross-correlations rijl${r}_{ijl}$. For each l$l$ these may be viewed as elements of a (lagged) autocorrelation matrix.
Thus consider the vector process xt${x}_{t}$ (with elements xit${x}_{it}$) and lagged autocovariance matrices Γl${\Gamma }_{l}$ with elements σiσjρijl${\sigma }_{i}{\sigma }_{j}{\rho }_{ijl}$ where σi2 = var(xi,t)${\sigma }_{i}^{2}=\mathrm{var}\left({x}_{i,t}\right)$. Correspondingly, Γl${\Gamma }_{l}$ is estimated by the matrix Cl${C}_{l}$ with elements sisjrijl${s}_{i}{s}_{j}{r}_{ijl}$ where si2${s}_{i}^{2}$ is the sample variance of xit${x}_{it}$.
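A direct NumPy sketch of these sample quantities (illustrative only; the function name and interface are invented for the example) computes the lagged covariance matrices $C_l$ and the corresponding correlation matrices:

```python
import numpy as np

def cross_corr_matrices(x, max_lag):
    """Sample lagged correlation matrices for a k-variate series.

    x has shape (n, k); element (i, j) of the lag-l matrix estimates
    corr(x_{i,t}, x_{j,t+l}), i.e., r_ijl.
    """
    n, k = x.shape
    xc = x - x.mean(axis=0)
    s = xc.std(axis=0)
    R = np.empty((max_lag + 1, k, k))
    for l in range(max_lag + 1):
        C = xc[:n - l].T @ xc[l:] / n      # lagged covariance matrix C_l
        R[l] = C / np.outer(s, s)          # divide element (i, j) by s_i s_j
    return R
```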
If a series shows short-term cross-correlation only, i.e., rijl${r}_{ijl}$ is not significant beyond some low lag q$q$, then the pure vector MA(q)$\text{MA}\left(q\right)$ model, with no autoregressive parameters, i.e., p = 0$p=0$, is appropriate.
The correlation matrices provide a description of the joint statistical properties of the series. It is also possible to calculate matrix quantities which are closely analogous to the partial autocorrelations of univariate series (see Section [Partial autocorrelations]). Wei (1990) discusses both the partial autoregression matrices proposed by Tiao and Box (1981) and partial lag correlation matrices.
In the univariate case the partial autocorrelation function (PACF) between xt${x}_{t}$ and xt + l${x}_{t+l}$ is the correlation coefficient between the two after removing the linear dependence on each of the intervening variables xt + 1,xt + 2,,xt + l1${x}_{t+1},{x}_{t+2},\dots ,{x}_{t+l-1}$. This partial autocorrelation may also be obtained as the last regression coefficient associated with xt${x}_{t}$ when regressing xt + l${x}_{t+l}$ on its l$l$ lagged variables xt + l1,xt + l2,,xt${x}_{t+l-1},{x}_{t+l-2},\dots ,{x}_{t}$. Tiao and Box (1981) extended this method to the multivariate case to define the partial autoregression matrix. Heyse and Wei (1985) also extended the univariate definition of the PACF to derive the correlation matrix between the vectors xt${x}_{t}$ and xt + l${x}_{t+l}$ after removing the linear dependence on each of the intervening vectors xt + 1,xt + 2,,xt + l1${x}_{t+1},{x}_{t+2},\dots ,{x}_{t+l-1}$, the partial lag correlation matrix.
Note that the partial lag correlation matrix is a correlation coefficient matrix since each of its elements is a properly normalized correlation coefficient. This is not true of the partial autoregression matrices (except in the univariate case for which the two types of matrix are the same). The partial lag correlation matrix at lag 1$1$ also reduces to the regular correlation matrix at lag 1$1$; this is not true of the partial autoregression matrices (again except in the univariate case).
Both the above share the same cut-off property for autoregressive processes; that is for an autoregressive process of order p$p$, the terms of the matrix at lags p + 1$p+1$ and greater are zero. Thus if the sample partial cross-correlations are significant only up to some low lag p$p$ then a pure vector AR(p)$\text{AR}\left(p\right)$ model is appropriate with q = 0$q=0$. Otherwise moving-average terms will need to be introduced as well as autoregressive terms.
Under the hypothesis that xt${x}_{t}$ is an autoregressive process of order l1$l-1$, n$n$ times the sum of the squared elements of the partial lag correlation matrix at lag l$l$ is asymptotically distributed as a χ2${\chi }^{2}$ variable with k2${k}^{2}$ degrees of freedom where k$k$ is the dimension of the multivariate time series. This provides a diagnostic aid for determining the order of an autoregressive model.
The partial autoregression matrices may be found by solving a multivariate version of the Yule–Walker equations to find the autoregression matrices, using the final regression matrix coefficient as the partial autoregression matrix at that particular lag.
The basis of these calculations is a multivariate autoregressive model:
 xt = φl,1xt − 1 + ⋯ + φl,lxt − l + el,t $xt=ϕl,1xt-1+⋯+ϕl,lxt-l+el,t$
where φl,1,φl,2,,φl,l${\varphi }_{l,1},{\varphi }_{l,2},\dots ,{\varphi }_{l,l}$ are matrix coefficients, and el,t${e}_{l,t}$ is the vector of errors in the prediction. These coefficients may be rapidly computed using a recursive technique which requires, and simultaneously furnishes, a backward prediction equation:
 xt − l − 1 = ψl,1xt − l + ψl,2xt − l + 1 + ⋯ + ψl,lxt − 1 + fl,t $xt-l-1=ψl,1xt-l+ψl,2xt-l+1+⋯+ψl,lxt-1+fl,t$
(in the univariate case ψl,i = φl,i${\psi }_{l,i}={\varphi }_{l,i}$).
The forward prediction equation coefficients, φl,i${\varphi }_{l,i}$, are of direct interest, together with the covariance matrix Dl${D}_{l}$ of the prediction errors el,t${e}_{l,t}$. The calculation of these quantities for a particular maximum equation lag l = L$l=L$ involves calculation of the same quantities for increasing values of l = 1,2,,L$l=1,2,\dots ,L$.
The quantities vl = detDl / detΓ0${v}_{l}=\mathrm{det}{D}_{l}/\mathrm{det}{\Gamma }_{0}$ may be viewed as generalized variance ratios, and provide a measure of the efficiency of prediction (the smaller the better). The reduction from vl1${v}_{l-1}$ to vl${v}_{l}$ which occurs on extending the order of the predictor to l$l$ may be represented as
 vl = vl − 1(1 − ρl2) $vl=vl-1(1-ρl2)$
where ρl2${\rho }_{l}^{2}$ is a multiple squared partial autocorrelation coefficient associated with k2${k}^{2}$ degrees of freedom.
Sample estimates of all the above quantities may be derived by using the series covariance matrices Cl${C}_{\mathit{l}}$, for l = 1,2,,L$\mathit{l}=1,2,\dots ,L$, in place of Γl${\Gamma }_{l}$. The best lag for prediction purposes may be chosen as that which yields the minimum final prediction error (FPE) criterion:
 FPE(l) = vl × ((1 + lk2 / n))/((1 − lk2 / n)). $FPE(l)=vl× (1+lk2/n) (1-lk2/n) .$
An alternative method of estimating the sample partial autoregression matrices is by using multivariate least squares to fit a series of multivariate autoregressive models of increasing order.
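The least squares approach can be sketched as follows, fitting VAR($l$) models of increasing order and evaluating FPE($l$) (an illustrative outline in Python/NumPy, not a Toolbox routine; the function name is invented for the example):

```python
import numpy as np

def var_fpe(x, max_order):
    """Fit VAR(l) by least squares for l = 1..max_order and return FPE(l)."""
    n, k = x.shape
    xc = x - x.mean(axis=0)
    gamma0 = xc.T @ xc / n                       # lag-0 covariance matrix
    fpe = np.empty(max_order)
    for l in range(1, max_order + 1):
        # Regress x_t on x_{t-1}, ..., x_{t-l}
        Z = np.hstack([xc[l - j - 1:n - j - 1] for j in range(l)])
        coef, *_ = np.linalg.lstsq(Z, xc[l:], rcond=None)
        resid = xc[l:] - Z @ coef
        D = resid.T @ resid / (n - l)            # prediction error covariance D_l
        v = np.linalg.det(D) / np.linalg.det(gamma0)
        fpe[l - 1] = v * (1 + l * k**2 / n) / (1 - l * k**2 / n)
    return fpe
```

The order with the smallest FPE value would then be selected as the best predictor order.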

#### VARMA model estimation

The cross-correlation structure of a stationary multivariate time series may often be represented by a model with a small number of parameters belonging to the VARMA class. If the stationary series wt${w}_{t}$ has been derived by transforming and/or differencing the original series xt${x}_{t}$, then wt${w}_{t}$ is said to follow the VARMA model:
 wt = φ1wt − 1 + ⋯ + φpwt − p + εt − θ1εt − 1 − ⋯ − θqεt − q, $wt=ϕ1wt-1+⋯+ϕpwt-p+εt-θ1εt-1-⋯-θqεt-q,$
where εt${\epsilon }_{t}$ is a vector of uncorrelated residual series (white noise) with zero mean and constant covariance matrix Σ$\Sigma$, φ1,φ2,,φp${\varphi }_{1},{\varphi }_{2},\dots ,{\varphi }_{p}$ are the p$p$ autoregressive (AR) parameter matrices and θ1,θ2,,θq${\theta }_{1},{\theta }_{2},\dots ,{\theta }_{q}$ are the q$q$ moving-average (MA) parameter matrices. If wt${w}_{t}$ has a nonzero mean μ$\mu$, then this can be allowed for by replacing wt,wt1,${w}_{t},{w}_{t-1},\dots \text{}$ by wtμ,wt1μ,${w}_{t}-\mu ,{w}_{t-1}-\mu ,\dots \text{}$ in the model.
A series generated by this model will only be stationary provided restrictions are placed on φ1,φ2,,φp${\varphi }_{1},{\varphi }_{2},\dots ,{\varphi }_{p}$ to avoid unstable growth of wt${w}_{t}$. These are stationarity constraints. The series εt${\epsilon }_{t}$ may also be usefully interpreted as the linear innovations in wt${w}_{t}$, i.e., the error if wt${w}_{t}$ were to be predicted using the information in all past values wt1,wt2,${w}_{t-1},{w}_{t-2},\dots \text{}$, provided also that θ1,θ2,,θq${\theta }_{1},{\theta }_{2},\dots ,{\theta }_{q}$ satisfy what are known as invertibility constraints. This allows the series εt${\epsilon }_{t}$ to be generated by rewriting the model equation as
 εt = wt − φ1wt − 1 − ⋯ − φpwt − p + θ1εt − 1 + ⋯ + θqεt − q. $εt=wt-ϕ1wt-1-⋯-ϕpwt-p+θ1εt-1+⋯+θqεt-q.$
The method of maximum likelihood (ML) may be used to estimate the parameters of a specified VARMA model from the observed multivariate time series together with their standard errors and correlations.
The residuals from the model may be examined for the presence of autocorrelations as a check on the adequacy of the fitted model.
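The innovations recursion can be sketched directly (a minimal illustration with known parameter matrices and pre-sample terms taken as zero; the function name is invented for the example):

```python
import numpy as np

def varma_innovations(w, phi, theta):
    """Recover innovations eps_t from w_t for a VARMA(p, q) model with known
    parameter matrices phi[0..p-1] and theta[0..q-1] (pre-sample terms zero)."""
    n, k = w.shape
    eps = np.zeros((n, k))
    for t in range(n):
        e = w[t].copy()
        for i, ph in enumerate(phi, start=1):     # subtract phi_i w_{t-i}
            if t - i >= 0:
                e -= ph @ w[t - i]
        for j, th in enumerate(theta, start=1):   # add back theta_j eps_{t-j}
            if t - j >= 0:
                e += th @ eps[t - j]
        eps[t] = e
    return eps
```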

#### VARMA model forecasting

Forecasts of the series may be constructed using a multivariate version of the univariate method. Efficient methods are available for updating the forecasts each time new observations become available.

### Cross-spectral Analysis

The relationship between two time series may be investigated in terms of their sinusoidal components at different frequencies. At frequency ω$\omega$ a component of yt${y}_{t}$ of the form
 Ry(ω)cos(ωt − φy(ω)) $Ry(ω)cos(ωt-ϕy(ω))$
has its amplitude Ry(ω)${R}_{y}\left(\omega \right)$ and phase lag φy(ω)${\varphi }_{y}\left(\omega \right)$ estimated by
 $R_y(\omega)e^{i\phi_y(\omega)} = \frac{1}{n}\sum_{t=1}^{n} y_t e^{i\omega t}$
and similarly for xt${x}_{t}$. In the univariate analysis only the amplitude was important – in the cross analysis the phase is important.

#### The sample cross-spectrum

This is defined by
 $f_{xy}^{*}(\omega) = \frac{1}{2\pi n}\left(\sum_{t=1}^{n} y_t e^{i\omega t}\right)\left(\sum_{t=1}^{n} x_t e^{-i\omega t}\right).$
It may be demonstrated that this is equivalently defined in terms of the sample cross-correlation function (CCF), rxy(k)${r}_{xy}\left(k\right)$, of the series as
 $f_{xy}^{*}(\omega) = \frac{1}{2\pi}\sum_{k=-(n-1)}^{n-1} c_{xy}(k) e^{i\omega k}$
where cxy(k) = sxsyrxy(k)${c}_{xy}\left(k\right)={s}_{x}{s}_{y}{r}_{xy}\left(k\right)$ is the cross-covariance function.
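The equivalence of the two definitions can be checked numerically (an illustrative Python sketch with an invented function name): the cross-covariance form below agrees at any frequency with the product of the discrete Fourier transforms of the mean-corrected series.

```python
import numpy as np

def sample_cross_spectrum(x, y, omega):
    """Sample cross-spectrum f*_xy(omega) from the cross-covariances c_xy(k)."""
    n = len(x)
    u, v = x - x.mean(), y - y.mean()
    f = 0.0 + 0.0j
    for k in range(-(n - 1), n):
        if k >= 0:
            c = np.sum(u[:n - k] * v[k:]) / n       # c_xy(k) for k >= 0
        else:
            c = np.sum(u[-k:] * v[:n + k]) / n      # c_xy(k) for k < 0
        f += c * np.exp(1j * omega * k)
    return f / (2 * np.pi)
```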

#### The amplitude and phase spectrum

The cross-spectrum is specified by its real part or cospectrum cf*(ω)$c{f}^{*}\left(\omega \right)$ and imaginary part or quadrature spectrum qf*(ω)$q{f}^{*}\left(\omega \right)$, but for the purpose of interpretation the cross-amplitude spectrum and phase spectrum are useful:
 A*(ω) = |fxy * (ω)| ,   φ*(ω) = arg(fxy * (ω)) . $A*(ω) = | fxy* (ω) | , ϕ*(ω) = arg( f xy * (ω)) .$
If the series xt${x}_{t}$ and yt${y}_{t}$ contain deterministic sinusoidal components of amplitudes Ry,Rx${R}_{y},{R}_{x}$ and phases φy,φx${\varphi }_{y},{\varphi }_{x}$ at frequency ω$\omega$, then A*(ω)${A}^{*}\left(\omega \right)$ will have a peak of approximate width π / n$\pi /n$ and height (n / 2π)RyRx$\left(n/2\pi \right){R}_{y}{R}_{x}$ at that frequency, with corresponding phase φ*(ω) = φyφx${\varphi }^{*}\left(\omega \right)={\varphi }_{y}-{\varphi }_{x}$. This supplies no information that cannot be obtained from the two series separately. The statistical relationship between the series is better revealed when the series are purely stochastic and jointly stationary, in which case the expected value of fxy * (ω)${f}_{xy}^{*}\left(\omega \right)$ converges with increasing sample size to the theoretical cross-spectrum
 $f_{xy}(\omega) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\gamma_{xy}(k)e^{i\omega k}$
where γxy(k) = cov(xt,yt + k)${\gamma }_{xy}\left(k\right)=\mathrm{cov}\left({x}_{t},{y}_{t+k}\right)$. The sample spectrum, as in the univariate case, does not converge to the theoretical spectrum without some form of smoothing which either implicitly (using a lag window) or explicitly (using a frequency window) averages the sample spectrum ${f}_{xy}^{*}\left(\omega \right)$ over wider bands of frequency to obtain a smoothed estimate ${\stackrel{^}{f}}_{xy}\left(\omega \right)$.

#### The coherency spectrum

If there is no statistical relationship between the series at a given frequency, then fxy(ω) = 0${f}_{xy}\left(\omega \right)=0$, and the smoothed estimate ${\stackrel{^}{f}}_{xy}\left(\omega \right)$ will be close to 0$0$. This is assessed by the squared coherency between the series:
 Ŵ(ω) = (|f̂xy(ω)|2)/(f̂xx(ω)f̂yy(ω)) $W^(ω)=|f^xy(ω)|2f^xx(ω)f^yy(ω)$
where xx(ω)${\stackrel{^}{f}}_{xx}\left(\omega \right)$ is the corresponding smoothed univariate spectrum estimate for xt${x}_{t}$, and similarly for yt${y}_{t}$. The coherency can be treated as a squared multiple correlation. It is similarly invariant in theory not only to simple scaling of xt${x}_{t}$ and yt${y}_{t}$, but also to filtering of the two series, and provides a useful test statistic for the relationship between autocorrelated series. Note that without smoothing,
 |fxy * (ω)|2 = fxx * (ω)fyy * (ω), $|fxy*(ω)|2=fxx*(ω)fyy*(ω),$
so the coherency is 1$1$ at all frequencies, just as a correlation is 1$1$ for a sample of size 1$1$. Thus smoothing is essential for cross-spectrum analysis.

#### The gain and noise spectrum

If yt${y}_{t}$ is believed to be related to xt${x}_{t}$ by a linear lagged relationship as in Section [Linear Lagged Relationships Between Time Series], i.e.,
 yt = v0xt + v1xt − 1 + v2xt − 2 + ⋯ + nt, $yt=v0xt+v1xt-1+v2xt-2+⋯+nt,$
then the theoretical cross-spectrum is
 fxy(ω) = V(ω)fxx(ω) $fxy(ω )=V(ω )fxx(ω )$
where
 $V(\omega) = G(\omega)e^{i\phi(\omega)} = \sum_{k=0}^{\infty} v_k e^{ik\omega}$
is called the frequency response of the relationship.
Thus if xt${x}_{t}$ were a sinusoidal wave at frequency ω$\omega$ (and nt${n}_{t}$ were absent), yt${y}_{t}$ would be similar but multiplied in amplitude by G(ω)$G\left(\omega \right)$ and shifted in phase by φ(ω)$\varphi \left(\omega \right)$. Furthermore, the theoretical univariate spectrum
 $f_{yy}(\omega) = G(\omega)^2 f_{xx}(\omega) + f_n(\omega)$
where nt${n}_{t}$, with spectrum fn(ω)${f}_{n}\left(\omega \right)$, is assumed independent of the input xt${x}_{t}$.
Cross-spectral analysis thus furnishes estimates of the gain
 Ĝ (ω) = |f̂xy(ω)| / f̂xx (ω) $G^ (ω)= | f^ xy (ω) |/ f^xx (ω)$
and the phase
 ϕ̂(ω) = arg(f̂xy(ω)) . $ϕ^(ω)=arg(f^xy(ω )) .$
From these representations of the estimated frequency response (ω)$\stackrel{^}{V}\left(\omega \right)$, parametric transfer function (TF) models may be recognized and selected. The noise spectrum may also be estimated as
 f̂y ∣ x (ω) = f̂yy (ω) (1 − Ŵ(ω)) $f^ y∣x (ω)= f^yy (ω) (1-W^ (ω) )$
a formula which reflects the fact that in essence a regression is being performed of the sinusoidal components of yt${y}_{t}$ on those of xt${x}_{t}$ over each frequency band.
Interpretation of the frequency response may be aided by extracting from ${\stackrel{^}{V}}\left(\omega \right)$ estimates of the impulse response function (IRF) ${\stackrel{^}{v}}_{k}$. It is assumed that there is no anticipatory response between yt${y}_{t}$ and xt${x}_{t}$, i.e., no coefficients vk${v}_{k}$ with k = − 1$k=-1$ or  − 2$-2$ are needed (their presence might indicate feedback between the series).

#### Cross-spectrum smoothing by lag window

The estimate of the cross-spectrum is calculated from the sample cross-covariances as
 $\hat{f}_{xy}(\omega) = \frac{1}{2\pi}\sum_{k=-M+S}^{M+S} w_{k-S}\, c_{xy}(k) e^{i\omega k}.$
The lag window wk${w}_{k}$ extends up to a truncation lag M$M$ as in the univariate case, but its centre is shifted by an alignment lag S$S$ usually chosen to coincide with the peak cross-correlation. This is equivalent to an alignment of the series for peak cross-correlation at lag 0$0$, and reduces bias in the phase estimation.
The selection of the truncation lag M$M$, which fixes the bandwidth of the estimate, is based on the same criteria as for univariate series, and the same choice of M$M$ and window shape should be used as in univariate spectrum estimation to obtain valid estimates of the coherency, gain, etc., and test statistics.

#### Direct smoothing of the cross-spectrum

The computations are exactly as for smoothing of the univariate spectrum except that allowance is made for an implicit alignment shift S$S$ between the series.

### Kalman Filters

Kalman filtering provides a method for the analysis of multidimensional time series. The underlying model is:
 Xt + 1 = AtXt + BtWt $Xt+1=AtXt+BtWt$ (6)
 Yt = CtXt + Vt $Yt=CtXt+Vt$ (7)
where Xt${X}_{t}$ is the unobserved state vector, Yt${Y}_{t}$ is the observed measurement vector, Wt${W}_{t}$ is the state noise, Vt${V}_{t}$ is the measurement noise, At${A}_{t}$ is the state transition matrix, Bt${B}_{t}$ is the noise coefficient matrix and Ct${C}_{t}$ is the measurement coefficient matrix at time t$t$. The state noise and the measurement noise are assumed to be uncorrelated with zero mean and covariance matrices:
 E {WtWtT} = Qt   and   E {VtVtT} = Rt . $E { Wt WtT } = Qt and E { Vt VtT } = Rt .$
If the system matrices At${A}_{t}$, Bt${B}_{t}$, Ct${C}_{t}$ and the covariance matrices Qt,Rt${Q}_{t},{R}_{t}$ are known then Kalman filtering can be used to compute the minimum variance estimate of the stochastic variable Xt${X}_{t}$.
The estimate of Xt${X}_{t}$ given observations Y1${Y}_{1}$ to Yt1${Y}_{t-1}$ is denoted by tt1 ${\stackrel{^}{X}}_{t\mid t-1}$ with state covariance matrix E {tt1tt1T} = Ptt1$E\left\{{\stackrel{^}{X}}_{t\mid t-1}{\stackrel{^}{X}}_{t\mid t-1}^{\mathrm{T}}\right\}={P}_{t\mid t-1}$ while the estimate of Xt${X}_{t}$ given observations Y1${Y}_{1}$ to Yt${Y}_{t}$ is denoted by tt ${\stackrel{^}{X}}_{t\mid t}$ with covariance matrix E {ttttT} = Ptt$E\left\{{\stackrel{^}{X}}_{t\mid t}{\stackrel{^}{X}}_{t\mid t}^{\mathrm{T}}\right\}={P}_{t\mid t}$.
The update of the estimate, t + 1t ${\stackrel{^}{X}}_{t+1\mid t}$, from time t$t$ to time t + 1$t+1$, is computed in two stages.
First, the update equations are
 X̂t ∣ t = X̂t ∣ t − 1 + Kt rt,   Pt ∣ t = (I − KtCt) Pt ∣ t − 1 $X^ t∣t = X^ t∣t-1 + Kt rt, Pt∣t= ( I- Kt Ct ) Pt∣t-1$
where the residual rt = Yt Ct Xtt1 ${r}_{t}={Y}_{t}-{C}_{t}{X}_{t\mid t-1}$ has an associated covariance matrix Ht = Ct Ptt1 CtT + Rt ${H}_{t}={C}_{t}{P}_{t\mid t-1}{C}_{t}^{\mathrm{T}}+{R}_{t}$, and Kt${K}_{t}$ is the Kalman gain matrix with
 Kt = Pt ∣ t − 1 CtT Ht − 1 . $Kt= Pt∣t-1 CtT Ht-1 .$
The second stage is the one-step-ahead prediction equations given by
 X̂t + 1 ∣ t = At X̂t ∣ t ,   Pt + 1 ∣ t = At Pt ∣ t AtT + Bt Qt BtT . $X^ t+1 ∣ t = At X^ t ∣ t , Pt+1 ∣ t = At Pt ∣ t AtT + Bt Qt BtT.$
These two stages can be combined to give the one-step-ahead update-prediction equations
 X̂t + 1 ∣ t = At X̂t ∣ t − 1 + At Kt rt . $X^ t+1∣t = At X^ t∣t-1 + At Kt rt .$
The above equations thus provide a method for recursively calculating the estimates of the state vectors tt ${\stackrel{^}{X}}_{t\mid t}$ and t + 1t ${\stackrel{^}{X}}_{t+1\mid t}$ and their covariance matrices Ptt ${P}_{t\mid t}$ and Pt + 1t${P}_{t+1\mid t}$ from their previous values. This recursive procedure can be viewed in a Bayesian framework as being the updating of the prior by the data Yt${Y}_{t}$.
The initial values 10${\stackrel{^}{X}}_{1\mid 0}$ and P10${P}_{1\mid 0}$ are required to start the recursion. For stationary systems, P10${P}_{1\mid 0}$ can be computed from the following equation:
 P1 ∣ 0 = A1P1 ∣ 0 A1T + B1Q1 B1T , $P1∣0=A1P1∣0 A1T +B1Q1 B1T ,$
which can be solved by iterating on the equation. For 10${\stackrel{^}{X}}_{1\mid 0}$ the value E{X}$E\left\{X\right\}$ can be used if it is available.
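These recursions, including the iterative computation of $P_{1\mid 0}$ for a stationary system, can be sketched in Python/NumPy (an illustrative outline with invented function names, not Toolbox calls; the system matrices are assumed time-invariant for simplicity):

```python
import numpy as np

def kalman_step(x_pred, P_pred, y, A, B, C, Q, R):
    """One combined update/prediction step of the Kalman filter.

    x_pred, P_pred are X^_{t|t-1} and P_{t|t-1}; returns X^_{t+1|t} and
    P_{t+1|t} together with the residual r_t and its covariance H_t.
    """
    r = y - C @ x_pred                          # residual r_t
    H = C @ P_pred @ C.T + R                    # residual covariance H_t
    K = P_pred @ C.T @ np.linalg.inv(H)         # Kalman gain K_t
    x_filt = x_pred + K @ r                     # X^_{t|t}
    P_filt = (np.eye(len(x_pred)) - K @ C) @ P_pred
    x_next = A @ x_filt                         # X^_{t+1|t}
    P_next = A @ P_filt @ A.T + B @ Q @ B.T     # P_{t+1|t}
    return x_next, P_next, r, H

def stationary_initial_P(A, B, Q, n_iter=200):
    """P_{1|0} for a stationary system, by iterating P = A P A^T + B Q B^T."""
    P = B @ Q @ B.T
    for _ in range(n_iter):
        P = A @ P @ A.T + B @ Q @ B.T
    return P
```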

#### Computational methods

To improve the stability of the computations the square root algorithm is used. One recursion of the square root covariance filter algorithm can be summarised as follows:
 $\begin{pmatrix} R_t^{1/2} & C_t S_t & 0 \\ 0 & A_t S_t & B_t Q_t^{1/2} \end{pmatrix} U = \begin{pmatrix} H_t^{1/2} & 0 & 0 \\ G_t & S_{t+1} & 0 \end{pmatrix}$
where U$U$ is an orthogonal transformation triangularizing the left-hand pre-array to produce the right-hand post-array, St${S}_{t}$ is the lower triangular Cholesky factor of the state covariance matrix Pt + 1∣t${P}_{t+1\mid t}$, Qt1 / 2${Q}_{t}^{1/2}$ and Rt1 / 2${R}_{t}^{1/2}$ are the lower triangular Cholesky factors of the covariance matrices Q$Q$ and R$R$, and H1 / 2${H}^{1/2}$ is the lower triangular Cholesky factor of the covariance matrix of the residuals. The relationship between the Kalman gain matrix, Kt${K}_{t}$, and Gt${G}_{t}$ is given by
 AtKt = Gt (Ht1 / 2) − 1. $AtKt=Gt (Ht1/2) -1.$
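One such recursion can be illustrated by obtaining the orthogonal transformation from a QR factorization of the transposed pre-array (an illustrative sketch with an invented function name; the QR factors may differ from Cholesky factors by column signs, which leaves the recovered covariance products unchanged):

```python
import numpy as np

def sqrt_kalman_step(S, A, B, C, Qh, Rh):
    """One recursion of the square root covariance filter.

    S is the Cholesky factor of P_{t|t-1}; Qh and Rh are Cholesky factors
    of Q and R. Triangularizes the pre-array via QR of its transpose.
    """
    p, n = C.shape
    pre = np.block([
        [Rh,               C @ S, np.zeros((p, B.shape[1]))],
        [np.zeros((n, p)), A @ S, B @ Qh],
    ])
    # pre @ U = post with post lower triangular: QR-factorize pre^T
    q, r = np.linalg.qr(pre.T)
    post = r.T                                  # (p+n) x (p+n), lower triangular
    Hh = post[:p, :p]                           # H_t^{1/2}
    G = post[p:, :p]                            # G_t = A_t K_t H_t^{1/2}
    S_next = post[p:, p:]                       # Cholesky factor of P_{t+1|t}
    return Hh, G, S_next
```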
To improve the efficiency of the computations when the matrices At,Bt${A}_{t},{B}_{t}$ and Ct${C}_{t}$ do not vary with time the system can be transformed to give a simpler structure. The transformed state vector is U*X${U}^{*}X$ where U*${U}^{*}$ is the transformation that reduces the matrix pair (A,C)$\left(A,C\right)$ to lower observer Hessenberg form. That is, the matrix U*${U}^{*}$ is computed such that the compound matrix
 $\begin{pmatrix} C U^{*\mathrm{T}} \\ U^{*} A U^{*\mathrm{T}} \end{pmatrix}$
is a lower trapezoidal matrix. The transformations need only be computed once at the start of a series, and the covariance matrices Qt${Q}_{t}$ and Rt${R}_{t}$ can still be time-varying.

#### Model fitting and forecasting

If the state space model contains unknown parameters, θ$\theta$, these can be estimated using maximum likelihood (ML). Assuming that Wt${W}_{t}$ and Vt${V}_{t}$ are normal variates the log-likelihood for observations Yt${Y}_{\mathit{t}}$, for t = 1,2,,n$\mathit{t}=1,2,\dots ,n$, is given by
 $\text{constant} - \frac{1}{2}\sum_{t=1}^{n}\ln(\det(H_t)) - \frac{1}{2}\sum_{t=1}^{n} r_t^{\mathrm{T}} H_t^{-1} r_t.$
Optimal estimates for the unknown model parameters θ$\theta$ can then be obtained by using a suitable optimizer function to maximize the likelihood function.
Once the model has been fitted forecasting can be performed by using the one-step-ahead prediction equations. The one-step-ahead prediction equations can also be used to ‘jump over’ any missing values in the series.

#### Kalman filter and time series models

Many commonly used time series models can be written as state space models. A univariate ARMA(p,q)$\mathrm{ARMA}\left(p,q\right)$ model can be cast into the following state space form:
 $x_t = A x_{t-1} + B \epsilon_t$
 $w_t = C x_t$
 $A = \begin{pmatrix} \phi_1 & 1 & & & \\ \phi_2 & & 1 & & \\ \vdots & & & \ddots & \\ \phi_{r-1} & & & & 1 \\ \phi_r & 0 & 0 & \cdots & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 1 \\ -\theta_1 \\ -\theta_2 \\ \vdots \\ -\theta_{r-1} \end{pmatrix} \quad\text{and}\quad C^{\mathrm{T}} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \end{pmatrix},$
where r = max (p,q + 1)$r=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(p,q+1\right)$.
The representation for a k$k$-variate ARMA(p,q)$\mathrm{ARMA}\left(p,q\right)$ series (VARMA) is very similar to that given above, except now the state vector is of length kr$kr$, the φ$\varphi$ and θ$\theta$ are k × k$k×k$ matrices, and the 1s in A$A$, B$B$ and C$C$ are replaced by the identity matrix of order k$k$. If p < r$p<r$ or q + 1 < r$q+1<r$ then the appropriate φ$\varphi$ or θ$\theta$ matrices are set to zero, respectively.
Since the compound matrix
 ( C ) A
$C A$
is already in lower observer Hessenberg form (i.e., it is lower trapezoidal with zeros in the top right-hand triangle) the invariant Kalman filter algorithm can be used directly without the need to generate a transformation matrix U*${U}^{*}$.
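The construction of $A$, $B$ and $C$ for a univariate ARMA($p,q$) model can be sketched as follows (illustrative only; the function name is invented, and the sign convention for $B$ is chosen here so that the state space recursion reproduces $w_t = \phi_1 w_{t-1} + \dots + \epsilon_t - \theta_1\epsilon_{t-1} - \dots$):

```python
import numpy as np

def arma_state_space(phi, theta):
    """State space matrices for a univariate ARMA(p, q) model, r = max(p, q+1)."""
    p, q = len(phi), len(theta)
    r = max(p, q + 1)
    phi_full = np.r_[phi, np.zeros(r - p)]          # pad phi to length r
    theta_full = np.r_[theta, np.zeros(r - 1 - q)]  # pad theta to length r-1
    A = np.zeros((r, r))
    A[:, 0] = phi_full                  # first column holds phi_1, ..., phi_r
    A[:-1, 1:] = np.eye(r - 1)          # superdiagonal of ones
    B = np.r_[1.0, -theta_full]
    C = np.r_[1.0, np.zeros(r - 1)]
    return A, B, C
```

Simulating the state recursion with these matrices and comparing against the direct ARMA difference equation (both from zero initial conditions) confirms the construction.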

### GARCH Models

#### ARCH models and their generalizations

Rather than modelling the mean (for example using regression models) or the autocorrelation (by using ARMA models) there are circumstances in which the variance of a time series needs to be modelled. This is common in financial data modelling where the variance (or standard deviation) is known as volatility. The ability to forecast volatility is a vital part in deciding the risk attached to financial decisions like portfolio selection. The basic model for relating the variance at time t$t$ to the variance at previous times is the autoregressive conditional heteroskedastic (ARCH) model. The standard ARCH model is defined as
 $y_t \mid \psi_{t-1} \sim N(0, h_t),$
 $h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2,$
where ψt${\psi }_{t}$ is the information up to time t$t$ and ht${h}_{t}$ is the conditional variance.
In a similar way to that in which autoregressive (AR) models were generalized to ARMA models, the ARCH models have been generalized to GARCH models; see Engle (1982), Bollerslev (1986) and Hamilton (1994):
 $h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i h_{t-i}.$
This can be combined with a regression model:
 $y_t = b_0 + \sum_{i=1}^{k} b_i x_{it} + \epsilon_t,$
where εtψt1N(0,ht)${\epsilon }_{t}\mid {\psi }_{t-1}\sim N\left(0,{h}_{t}\right)$ and where xit${x}_{\mathit{i}t}$, for i = 1,2,,k$\mathit{i}=1,2,\dots ,k$, are the exogenous variables.
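A minimal simulation of a GARCH(1,1) process illustrates the variance recursion (an illustrative sketch with an invented function name; the parameters must satisfy $\alpha_1 + \beta_1 < 1$ for the unconditional variance $\alpha_0/(1-\alpha_1-\beta_1)$ to exist):

```python
import numpy as np

def simulate_garch11(alpha0, alpha1, beta1, n, rng):
    """Simulate eps_t with h_t = alpha0 + alpha1*eps_{t-1}^2 + beta1*h_{t-1}."""
    h = np.empty(n)
    eps = np.empty(n)
    h[0] = alpha0 / (1.0 - alpha1 - beta1)   # start at the unconditional variance
    eps[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = alpha0 + alpha1 * eps[t - 1]**2 + beta1 * h[t - 1]
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    return eps, h
```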
The above models assume that the change in variance, ht${h}_{t}$, is symmetric with respect to the shocks, that is, that a large negative value of εt1${\epsilon }_{t-1}$ has the same effect as a large positive value of εt1${\epsilon }_{t-1}$. A frequently observed effect is that a large negative value εt1${\epsilon }_{t-1}$ often leads to a greater variance than a large positive value. The following three asymmetric models represent this effect in different ways using the parameter γ$\gamma$ as a measure of the asymmetry.
Type I AGARCH(p,q$p,q$)
 $h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i (\epsilon_{t-i} + \gamma)^2 + \sum_{i=1}^{p} \beta_i h_{t-i}.$
Type II AGARCH(p,q$p,q$)
 $h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i (|\epsilon_{t-i}| + \gamma \epsilon_{t-i})^2 + \sum_{i=1}^{p} \beta_i h_{t-i}.$
GJR-GARCH(p,q$p,q$), or Glosten, Jagannathan and Runkle GARCH (see Glosten et al. (1993))
 $h_t = \alpha_0 + \sum_{i=1}^{q} (\alpha_i + \gamma I_{t-i}) \epsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i h_{t-i},$
where It = 1${I}_{t}=1$ if εt < 0${\epsilon }_{t}<0$ and It = 0${I}_{t}=0$ if εt0${\epsilon }_{t}\ge 0$.
The first assumes that the effects of the shocks are symmetric about γ$\gamma$ rather than zero, so that for γ < 0$\gamma <0$ the effect of negative shocks is increased and the effect of positive shocks is decreased. Both the Type II AGARCH and the GJR GARCH (see Glosten et al. (1993)) models introduce asymmetry by increasing the value of the coefficient of εt12${\epsilon }_{t-1}^{2}$ for negative values of εt1${\epsilon }_{t-1}$. In the case of the Type II AGARCH the effect is multiplicative while for the GJR GARCH the effect is additive.
| Coefficient | $\epsilon_{t-1}<0$ | $\epsilon_{t-1}>0$ |
|---|---|---|
| Type II AGARCH | $\alpha_i(1-\gamma)^2$ | $\alpha_i(1+\gamma)^2$ |
| GJR GARCH | $\alpha_i+\gamma$ | $\alpha_i$ |
(Note that in the case of GJR GARCH, γ$\gamma$ needs to be positive to inflate variance after negative shocks while for Type I and Type II AGARCH, γ$\gamma$ needs to be negative.)
A third type of GARCH model is the exponential GARCH (EGARCH). In this model the variance relationship is on the log scale and hence asymmetric.
 $\ln(h_t) = \alpha_0 + \sum_{i=1}^{q} \alpha_i z_{t-i} + \sum_{i=1}^{q} \phi_i \left(|z_{t-i}| - E[|z_{t-i}|]\right) + \sum_{i=1}^{p} \beta_i \ln(h_{t-i}),$
where zt = (εt)/(sqrt(ht)) ${z}_{t}=\frac{{\epsilon }_{t}}{\sqrt{{h}_{t}}}$ and E[|zti|]$E\left[|{z}_{t-i}|\right]$ denotes the expected value of |zti|$|{z}_{t-i}|$.
Note that the $\varphi_i$ terms represent a symmetric contribution to the variance, while the $\alpha_i$ terms give an asymmetric contribution.
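A minimal sketch of the EGARCH(1,1) recursion may help fix ideas (illustrative Python, not the NAG routine; parameter values are hypothetical). Because the recursion is on the log scale, the conditional variance is positive by construction, with no sign constraints on the coefficients:

```python
import math

# Sketch of the EGARCH(1,1) log-variance recursion:
# ln h_t = a0 + a1*z_{t-1} + phi1*(|z_{t-1}| - E|z|) + b1*ln h_{t-1},
# where z_t = eps_t / sqrt(h_t).  For standard Normal z, E|z| = sqrt(2/pi).
def egarch_path(eps, a0, a1, phi1, b1, h0):
    e_abs_z = math.sqrt(2.0 / math.pi)   # E|z| under Normal errors
    h, out = h0, []
    for e in eps:
        z = e / math.sqrt(h)
        out.append(h)
        h = math.exp(a0 + a1 * z + phi1 * (abs(z) - e_abs_z) + b1 * math.log(h))
    return out

h = egarch_path([0.5, -0.5, 1.0], a0=-0.1, a1=-0.05, phi1=0.2, b1=0.9, h0=1.0)
print(all(v > 0 for v in h))   # variance stays positive on the log scale
```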
Another common characteristic of financial data is that it is heavier in the tails (leptokurtic) than the Normal distribution. To model this, the Normal distribution is replaced by a scaled Student's $t$-distribution (that is, a Student's $t$-distribution standardized to have variance $h_t$). The Student's $t$-distribution is such that the smaller the degrees of freedom the higher the kurtosis, for degrees of freedom $> 4$.

#### Fitting GARCH models

The models are fitted by maximizing the conditional log-likelihood. For the Normal distribution the conditional log-likelihood is
$-\frac{1}{2} \sum_{i=1}^{T} \left( \log(h_i) + \frac{\epsilon_i^2}{h_i} \right) .$
For the Student's t$t$-distribution the function is more complex. An approximation to the standard errors of the parameter estimates is computed from the Fisher information matrix.
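The Gaussian case can be sketched directly from the formula above (illustrative Python under stated assumptions, not the NAG implementation; the GARCH(1,1) recursion and starting variance are hypothetical choices):

```python
import math

# Sketch: Gaussian conditional log-likelihood, up to an additive constant,
# of a symmetric GARCH(1,1) model h_t = a0 + a1*eps_{t-1}^2 + b1*h_{t-1}
# evaluated on a residual series eps, starting from variance h0.
def garch11_loglik(eps, a0, a1, b1, h0):
    h, ll = h0, 0.0
    for e in eps:
        ll += -0.5 * (math.log(h) + e * e / h)   # per-observation contribution
        h = a0 + a1 * e * e + b1 * h             # update the conditional variance
    return ll

# A fitting routine would maximize this over (a0, a1, b1), typically
# subject to a0 > 0, a1 >= 0, b1 >= 0 and a1 + b1 < 1.
ll = garch11_loglik([0.3, -0.8, 0.1], a0=0.05, a1=0.1, b1=0.85, h0=1.0)
print(ll < 0.0)
```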

### Inhomogeneous Time Series

If we denote a generic univariate time series as a sequence of pairs of values $(z_i, t_i)$, for $i = 1, 2, \dots$, where the $z$'s represent an observed scalar value and the $t$'s the time that the value was observed, then in a standard time series analysis, as discussed in other sections of this introduction, it is assumed that the series being analysed is homogeneous, that is, the sampling times are regularly spaced with $t_i - t_{i-1} = \delta$ for some value $\delta$. In many real world applications this assumption does not hold, that is, the series is inhomogeneous.
Standard time series analysis techniques cannot be used on an inhomogeneous series without first preprocessing the series to construct an artificial homogeneous series, by for example, resampling the series at regular intervals. Zumbach and Müller (2001) introduced a series of operators that can be used to extract robust information directly from the inhomogeneous time series. In this context, robust information means that the results should be essentially independent of minor changes to the sampling mechanism used when collecting the data, for example, changing a number of time stamps or adding or removing a few observations.
The basic operator available for inhomogeneous time series is the exponential moving average (EMA). This operator has a single parameter, $\tau$, and is an average operator with an exponentially decaying kernel given by:
$\frac{e^{-t/\tau}}{\tau} .$
This gives rise to the following iterative formula:
$\text{EMA}[\tau; z](t_i) = \mu \, \text{EMA}[\tau; z](t_{i-1}) + (\nu - \mu) z_{i-1} + (1 - \nu) z_i$
where
$\mu = e^{-\alpha} \quad \text{and} \quad \alpha = \frac{t_i - t_{i-1}}{\tau} .$
The value of $\nu$ depends on the method of interpolation chosen. Three interpolation methods are available:
 1. Previous point: $\nu = 1$.
 2. Linear: $\nu = (1 - \mu) / \alpha$.
 3. Next point: $\nu = \mu$.
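The iterative formula above translates almost directly into code. The following Python sketch (illustrative only; the NAG routines implement this for production use) applies the EMA to an irregularly spaced series with a chosen interpolation method:

```python
import math

# Sketch of the iterative EMA of Zumbach and Müller (2001):
# EMA(t_i) = mu*EMA(t_{i-1}) + (nu - mu)*z_{i-1} + (1 - nu)*z_i,
# with mu = exp(-alpha) and alpha = (t_i - t_{i-1})/tau.
# Interpolation: "previous" (nu=1), "linear" (nu=(1-mu)/alpha), "next" (nu=mu).
def ema(times, values, tau, interp="linear"):
    out = [values[0]]                      # initialise at the first observation
    for i in range(1, len(values)):
        alpha = (times[i] - times[i - 1]) / tau
        mu = math.exp(-alpha)
        if interp == "previous":
            nu = 1.0
        elif interp == "next":
            nu = mu
        else:
            nu = (1.0 - mu) / alpha
        out.append(mu * out[-1] + (nu - mu) * values[i - 1] + (1.0 - nu) * values[i])
    return out

# A constant series is left unchanged, whatever the (irregular) spacing:
vals = ema([0.0, 0.7, 1.1, 3.0], [2.0, 2.0, 2.0, 2.0], tau=1.5)
print(max(abs(v - 2.0) for v in vals) < 1e-12)   # True
```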
Given the EMA, a number of other operators can be defined, including:
(i) $m$-Iterated Exponential Moving Average, defined as
$\text{EMA}[\tau, m; z] = \text{EMA}[\tau; \text{EMA}[\tau, m-1; z]] \text{ where } \text{EMA}[\tau, 1; z] = \text{EMA}[\tau; z] .$
(ii) Moving Average (MA), defined as
$\text{MA}[\tau, m_1, m_2; z](t_i) = \frac{1}{m_2 - m_1 + 1} \sum_{j=m_1}^{m_2} \text{EMA}[\tilde{\tau}, j; z](t_i) \text{ where } \tilde{\tau} = \frac{2\tau}{m_2 + m_1}$
(iii) Moving Norm (MNorm), defined as
$\text{MNorm}(\tau, m, p; z) = \text{MA}[\tau, 1, m; |z|^p]^{1/p}$
(iv) Moving Variance (MVar), defined as
$\text{MVar}(\tau, m, p; z) = \text{MA}[\tau, 1, m; |z - \text{MA}[\tau, 1, m; z]|^p]$
(v) Moving Standard Deviation (MSD), defined as
$\text{MSD}(\tau, m, p; z) = \text{MA}[\tau, 1, m; |z - \text{MA}[\tau, 1, m; z]|^p]^{1/p}$
(vi) Differential ($\Delta$), defined as
$\Delta[\tau, \alpha, \beta, \gamma; z] = \gamma \left( \text{EMA}[\alpha\tau, 1; z] + \text{EMA}[\alpha\tau, 2; z] - 2 \, \text{EMA}[\alpha\beta\tau, 4; z] \right)$
(vii) Volatility, defined as
$\text{Volatility}[\tau, \tau', m, p; z] = \text{MNorm}(\tau/2, m, p; \Delta[\tau'; z])$
A discussion of each of these operators, their use and, in some cases, alternative definitions is given in Zumbach and Müller (2001).
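The derived operators are all built by composing the basic EMA. The sketch below (self-contained illustrative Python with linear interpolation; not the NAG routines) shows the $m$-iterated EMA and the MA operator defined above:

```python
import math

# Sketch of the m-iterated EMA, EMA[tau, m; z] = EMA[tau; EMA[tau, m-1; z]],
# and the MA operator averaging EMA[tau~, j; z] over j = m1..m2 with
# tau~ = 2*tau/(m2 + m1).  Linear interpolation (nu = (1 - mu)/alpha) is used.
def ema(times, values, tau):
    out = [values[0]]
    for i in range(1, len(values)):
        alpha = (times[i] - times[i - 1]) / tau
        mu = math.exp(-alpha)
        nu = (1.0 - mu) / alpha
        out.append(mu * out[-1] + (nu - mu) * values[i - 1] + (1.0 - nu) * values[i])
    return out

def iterated_ema(times, values, tau, m):
    for _ in range(m):                       # apply EMA[tau] m times
        values = ema(times, values, tau)
    return values

def moving_average(times, values, tau, m1, m2):
    tau_t = 2.0 * tau / (m2 + m1)
    emas = [iterated_ema(times, values, tau_t, j) for j in range(m1, m2 + 1)]
    n = m2 - m1 + 1
    return [sum(e[i] for e in emas) / n for i in range(len(values))]

t = [0.0, 0.5, 1.2, 2.0, 3.5]
z = [1.0, 2.0, 1.5, 3.0, 2.5]
ma = moving_average(t, z, tau=1.0, m1=1, m2=4)
print(len(ma) == len(z))   # one output value per observation time
```

The remaining operators (MNorm, MVar, MSD, the differential and the volatility) follow by applying `moving_average` to transformed series, as the definitions above indicate.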

## Recommendations on Choice and Use of Available Functions

### ARMA-type Models

ARMA-type modelling usually follows the methodology made popular by Box and Jenkins. It consists of four steps: identification, model fitting, model checking and forecasting. The availability of functions for each of these four steps is given below for the three types of modelling situation considered: univariate, input-output and multivariate.

#### Univariate series

(a) Model identification
The function nag_tsa_uni_means (g13au) may be used in obtaining either a range-mean or standard deviation-mean plot for a series of observations, which may be useful in detecting the need for a variance-stabilizing transformation. nag_tsa_uni_means (g13au) computes the range or standard deviation and the mean for successive groups of observations and nag_stat_plot_scatter_2var (g01ag) may then be used to produce a scatter plot of range against mean or of standard deviation against mean.
The function nag_tsa_uni_diff (g13aa) may be used to difference a time series. The $N = n - d - s \times D$ values of the differenced time series, which extends over $t = 1 + d + s \times D, \dots, n$, are stored in the first $N$ elements of the output array.
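The effect of the differencing orders on the series length can be sketched as follows (illustrative Python, not the NAG routine):

```python
# Sketch of simple and seasonal differencing: d applications of (1 - B)
# followed by D applications of (1 - B^s), where B is the backshift
# operator.  The result has N = n - d - s*D values.
def difference(x, d, D, s):
    for _ in range(d):
        x = [x[i] - x[i - 1] for i in range(1, len(x))]       # (1 - B)
    for _ in range(D):
        x = [x[i] - x[i - s] for i in range(s, len(x))]       # (1 - B^s)
    return x

x = list(range(24))                 # a linear trend, n = 24
y = difference(x, d=1, D=1, s=12)
print(len(y) == 24 - 1 - 12 * 1)    # N = n - d - s*D  -> True
print(all(v == 0 for v in y))       # trend removed by first differencing -> True
```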
The function nag_tsa_uni_autocorr (g13ab) may be used for direct computation of the autocorrelations. It requires the time series as input, after optional differencing by nag_tsa_uni_diff (g13aa).
An alternative is to use nag_tsa_uni_spectrum_lag (g13ca), which uses the fast Fourier transform (FFT) to carry out the convolution for computing the autocovariances. Circumstances in which this is recommended are
 (i) if the main aim is to calculate the smoothed sample spectrum;
 (ii) if the series length and maximum lag for the autocorrelations are both very large, in which case appreciable computing time may be saved.
For more precise recommendations, see Gentleman and Sande (1966). In this case the autocorrelations $r_k$ need to be obtained from the autocovariances $c_k$ by $r_k = c_k / c_0$.
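The relationship $r_k = c_k / c_0$ can be sketched with the usual biased sample autocovariance (illustrative Python showing only the definitions; the FFT speed-up used by g13ca is not reproduced here):

```python
# Sketch: sample autocovariances c_k and autocorrelations r_k = c_k / c_0.
# The biased (divide-by-n) estimator is used, which keeps |r_k| <= 1.
def acf(x, max_lag):
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    c = [sum(dev[t] * dev[t + k] for t in range(n - k)) / n
         for k in range(max_lag + 1)]
    return [ck / c[0] for ck in c]

r = acf([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0, 3.0, 4.0], max_lag=3)
print(abs(r[0] - 1.0) < 1e-12)   # r_0 is always 1 -> True
```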
The function nag_tsa_uni_autocorr_part (g13ac) computes the partial autocorrelation function (PACF) and prediction error variance estimates from an input autocorrelation function (ACF). Note that nag_tsa_multi_corrmat_partlag (g13dn), which is designed for multivariate time series, may also be used to compute the PACF together with $\chi^2$ statistics and their significance levels.
Finite lag predictor coefficients are also computed by the function nag_tsa_uni_autocorr_part (g13ac). It may have to be used twice, firstly with a large value for the maximum lag $L$ in order to locate the optimum final prediction error (FPE) lag, then again with $L$ reset to this lag.
The function nag_tsa_uni_arma_roots (g13dx) may be used to check that the autoregressive (AR) part of the model is stationary and that the moving-average (MA) part is invertible.
(b) Model estimation
The function nag_tsa_uni_arima_prelim (g13ad) is used to compute preliminary estimates of the ARIMA model parameters, the sample autocorrelations of the appropriately differenced series being input. The model orders are required.
The main function for parameter estimation for ARIMA models is nag_tsa_uni_arima_estim (g13ae), and an easy-to-use version is nag_tsa_uni_arima_estim_easy (g13af). Both these functions use the least squares criterion of estimation.
In some circumstances the use of nag_tsa_multi_inputmod_estim (g13be) or nag_tsa_multi_varma_estimate (g13dd), which use maximum likelihood (ML), is recommended.
The functions require the time series values to be input, together with the ARIMA orders. Any differencing implied by the model is carried out internally. They also require the maximum number of iterations to be specified, and return the state set for use in forecasting.
nag_tsa_uni_arima_estim (g13ae) should be preferred to nag_tsa_uni_arima_estim_easy (g13af) for:
 (i) more information about the differenced series, its backforecasts and the intermediate series;
 (ii) greater control over the output at successive iterations;
 (iii) more detailed control over the search policy of the nonlinear least squares algorithm;
 (iv) more information about the first and second derivatives of the objective function during and upon completion of the iterations.
nag_tsa_multi_inputmod_estim (g13be) is primarily designed for estimating relationships between time series. It is, however, easily used in a univariate mode for ARIMA model estimation. The advantage is that it allows (optional) use of the exact likelihood estimation criterion, which is not available in nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af). This is particularly recommended for models which have seasonal parameters, because it reduces the tendency of parameter estimates to become stuck at points on the parameter space boundary. The model parameters estimated in this function should be passed over to nag_tsa_uni_arima_forcecast (g13aj) for use in univariate forecasting.
The function nag_tsa_multi_varma_estimate (g13dd) is primarily designed for fitting vector ARMA models to multivariate time series but may also be used in a univariate mode. It allows the use of either the exact or conditional likelihood estimation criterion, and allows you to fit non-multiplicative seasonal models which are not available in nag_tsa_uni_arima_estim (g13ae), nag_tsa_uni_arima_estim_easy (g13af) or nag_tsa_multi_inputmod_estim (g13be).
(c) Model checking
nag_tsa_uni_arima_resid (g13as) calculates the correlations in the residuals from a model fitted by either nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af). In addition the standard errors and correlations of the residual autocorrelations are computed along with a portmanteau test for model adequacy. nag_tsa_uni_arima_resid (g13as) can be used after a univariate model has been fitted by nag_tsa_multi_inputmod_estim (g13be), but care must be taken in selecting the correct inputs to nag_tsa_uni_arima_resid (g13as). Note that if nag_tsa_multi_varma_estimate (g13dd) has been used to fit a non-multiplicative seasonal model to a univariate series then nag_tsa_multi_varma_diag (g13ds) may be used to check the adequacy of the model.
(d) Forecasting using an ARIMA model
Given that the state set produced on estimation of the ARIMA model by either nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af) has been retained, nag_tsa_uni_arima_forecast_state (g13ah) can be used directly to construct forecasts for $x_{n+1}, x_{n+2}, \dots$, together with probability limits. If some further observations $x_{n+1}, x_{n+2}, \dots$ have come to hand since model estimation (and there is no desire to re-estimate the model using the extended series), then nag_tsa_uni_arima_update (g13ag) can be used to update the state set using the new observations, prior to forecasting from the end of the extended series. The original series is not required.
The function nag_tsa_uni_arima_forcecast (g13aj) is provided for forecasting when the ARIMA model is known but the state set is unknown. For example, the model may have been estimated by a procedure other than the use of nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af), such as nag_tsa_multi_inputmod_estim (g13be). nag_tsa_uni_arima_forcecast (g13aj) constructs the state set and optionally constructs forecasts with probability limits. It is equivalent to a call to nag_tsa_uni_arima_estim (g13ae) with zero iterations requested, followed by an optional call to nag_tsa_uni_arima_forecast_state (g13ah), but it is much more efficient.

#### Input-output/transfer function modelling

(a) Model identification
Normally use nag_tsa_multi_xcorr (g13bc) for direct computation of cross-correlations, from which cross-covariances may be obtained by multiplying by $s_y s_x$, and impulse response estimates (after prewhitening) by multiplying by $s_y / s_x$, where $s_y, s_x$ are the sample standard deviations of the series.
An alternative is to use nag_tsa_multi_spectrum_lag (g13cc), which exploits the fast Fourier transform (FFT) to carry out the convolution for computing cross-covariances. The criteria for this are the same as given in Section [Univariate series] for calculation of autocorrelations.
The impulse response function may also be computed by spectral methods without prewhitening using nag_tsa_multi_noise_bivar (g13cg).
nag_tsa_multi_filter_arima (g13ba) may be used to prewhiten or filter a series by an ARIMA model.
nag_tsa_multi_filter_transf (g13bb) may be used to filter a time series using a transfer function model.
(b) Estimation of input-output model parameters
The function nag_tsa_multi_transf_prelim (g13bd) is used to obtain preliminary estimates of transfer function model parameters. The model orders and an estimate of the impulse response function (see Section [Univariate spectral estimation]) are required.
The simultaneous estimation of the transfer function model parameters for the inputs, and ARIMA model parameters for the output, is carried out by nag_tsa_multi_inputmod_estim (g13be). This function requires values of the output and input series, and the orders of all the models. Any differencing implied by the model is carried out internally. The function also requires the maximum number of iterations to be specified, and returns the state set for use in forecasting.
(c) Input-output model checking
The function nag_tsa_uni_arima_resid (g13as), primarily designed for univariate time series, can be used to test the residuals from an input-output model.
(d) Forecasting using an input-output model
Given that the state set produced on estimation of the model by nag_tsa_multi_inputmod_estim (g13be) has been retained, the function nag_tsa_multi_inputmod_forecast_state (g13bh) can be used directly to construct forecasts of the output series. Future values of the input series (possibly forecasts previously obtained using nag_tsa_uni_arima_forecast_state (g13ah)) are required.
If further observations of the output and input series have become available since model estimation (and there is no desire to re-estimate the model using the extended series), then nag_tsa_multi_inputmod_update (g13bg) can be used to update the state set using the new observations prior to forecasting from the end of the extended series. The original series are not required.
The function nag_tsa_multi_inputmod_forecast (g13bj) is provided for forecasting when the multi-input model is known, but the state set is unknown. The set of output and input series must be supplied to the function, which then constructs the state set (for future use with nag_tsa_multi_inputmod_update (g13bg) and/or nag_tsa_multi_inputmod_forecast_state (g13bh)) and also optionally constructs forecasts of the output series in a similar manner to nag_tsa_multi_inputmod_forecast_state (g13bh).
In constructing probability limits for the forecasts, it is possible to allow for the fact that future input series values may themselves have been calculated as forecasts using ARIMA models. Use of this option requires that these ARIMA models be supplied to the function.
(e) Filtering a time series using a transfer function model
The function for this purpose is nag_tsa_multi_filter_transf (g13bb).

#### Multivariate series

(a) Model identification
The function nag_tsa_multi_diff (g13dl) may be used to difference the series. You must supply the differencing parameters for each component of the multivariate series. The order of differencing for each individual component does not have to be the same. The function may also be used to apply a log or square root transformation to the components of the series.
The function nag_tsa_multi_corrmat_cross (g13dm) may be used to calculate the sample cross-correlation or cross-covariance matrices. It requires a set of time series as input. You may request either the cross-covariances or cross-correlations.
The function nag_tsa_multi_corrmat_partlag (g13dn) computes the partial lag correlation matrices from the sample cross-correlation matrices computed by nag_tsa_multi_corrmat_cross (g13dm), and the function nag_tsa_multi_regmat_partial (g13dp) computes the least squares estimates of the partial autoregression matrices and their standard errors. Both functions compute a series of $\chi^2$ statistics that aid the determination of the order of a suitable autoregressive model. nag_tsa_multi_autocorr_part (g13db) may also be used in the identification of the order of an autoregressive model. The function computes multiple squared partial autocorrelations and predictive error variance ratios from the sample cross-correlations or cross-covariances computed by nag_tsa_multi_corrmat_cross (g13dm).
The function nag_tsa_uni_arma_roots (g13dx) may be used to check that the autoregressive part of the model is stationary and that the moving-average part is invertible.
(b) Estimation of VARMA model parameters
The function for this purpose is nag_tsa_multi_varma_estimate (g13dd). This function requires a set of time series to be input, together with values for $p$ and $q$. You must also specify the maximum number of likelihood evaluations to be permitted and which parameters (if any) are to be held at their initial (user-supplied) values.
The fitting criterion is either exact maximum likelihood (ML) or conditional maximum likelihood.
nag_tsa_multi_varma_estimate (g13dd) is primarily designed for estimating relationships between time series. It may, however, easily be used in univariate mode for non-seasonal and non-multiplicative seasonal ARIMA model estimation. The advantage is that it allows (optional) use of the exact maximum likelihood (ML) estimation criterion, which is not available in either nag_tsa_uni_arima_estim (g13ae) or nag_tsa_uni_arima_estim_easy (g13af). The conditional likelihood option is recommended for those models in which the parameter estimates display a tendency to become stuck at points on the boundary of the parameter space. When one of the series is known to be influenced by all the others, but the others in turn are mutually independent and do not influence the output series, then nag_tsa_multi_inputmod_estim (g13be) (the transfer function (TF) model fitting function) may be more appropriate to use.
(c) VARMA model checking
nag_tsa_multi_varma_diag (g13ds) calculates the cross-correlation matrices of residuals for a model fitted by nag_tsa_multi_varma_estimate (g13dd). In addition the standard errors and correlations of the residual correlation matrices are computed along with a portmanteau test for model adequacy.
(d) Forecasting using a VARMA model
The function nag_tsa_multi_varma_forecast (g13dj) may be used to construct a chosen number of forecasts using the model estimated by nag_tsa_multi_varma_estimate (g13dd). The standard errors of the forecasts are also computed. A reference vector is set up by nag_tsa_multi_varma_forecast (g13dj) so that, should any further observations become available, the existing forecasts can be efficiently updated using nag_tsa_multi_varma_update (g13dk).
On a call to nag_tsa_multi_varma_update (g13dk) the reference vector itself is also updated so that nag_tsa_multi_varma_update (g13dk) may be called again each time new observations are available.

#### Exponential Smoothing

A variety of different smoothing methods are provided by nag_tsa_uni_smooth_exp (g13am), including: single exponential, Brown's double exponential, linear Holt (also called double exponential smoothing in some references), additive Holt–Winters and multiplicative Holt–Winters. The choice of smoothing method depends on the characteristics of the time series. If the mean of the series is only slowly changing then single exponential smoothing may be suitable. If there is a trend in the time series, which itself may be slowly changing, then double exponential smoothing may be suitable. If there is a seasonal component to the time series, e.g., daily or monthly data, then one of the two Holt–Winters methods may be suitable.
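The simplest of these, single exponential smoothing, can be sketched in a few lines (illustrative Python, not the NAG routine; the standard recurrence $\ell_t = \alpha x_t + (1-\alpha)\ell_{t-1}$ is assumed):

```python
# Sketch of single exponential smoothing: the smoothed level tracks a
# slowly changing mean, and the one-step-ahead forecast is the current level.
def single_exp_smooth(x, alpha):
    level = x[0]                          # initialise at the first observation
    for v in x[1:]:
        level = alpha * v + (1.0 - alpha) * level
    return level                          # forecast for the next observation

f = single_exp_smooth([10.0, 12.0, 11.0, 13.0, 12.0], alpha=0.3)
print(10.0 < f < 13.0)   # forecast lies within the range of the data -> True
```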

### Spectral Methods

#### Univariate spectral estimation

Two functions are available, nag_tsa_uni_spectrum_lag (g13ca) carrying out smoothing using a lag window and nag_tsa_uni_spectrum_daniell (g13cb) carrying out direct frequency domain smoothing. Both can take as input the original series, but nag_tsa_uni_spectrum_lag (g13ca) alone can use the sample autocovariances as alternative input. This has some computational advantage if a variety of spectral estimates needs to be examined for the same series using different amounts of smoothing.
However, the real choice in most cases will be which of the four shapes of lag window in nag_tsa_uni_spectrum_lag (g13ca) to use, or whether to use the trapezium frequency window of nag_tsa_uni_spectrum_daniell (g13cb). The references may be consulted for advice on this, but the two most recommended lag windows are the Tukey and Parzen. The Tukey window has a very small risk of supplying negative spectrum estimates; otherwise, for the same bandwidth, both give very similar results, though the Parzen window requires a higher truncation lag (more autocorrelation function (ACF) values).
The frequency window smoothing procedure of nag_tsa_uni_spectrum_daniell (g13cb) with a trapezium shape parameter $p \simeq \frac{1}{2}$ generally gives similar results for the same bandwidth as the lag window methods, with the slight advantage of somewhat less distortion around sharp peaks, but at the cost of a rather less smooth appearance in fine detail.
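For reference, the two recommended lag windows have simple closed forms; the sketch below uses the textbook definitions (illustrative Python; g13ca's exact parameterization may differ):

```python
import math

# Sketch of two standard lag windows: weights w_k applied to the
# autocovariances c_k, k = 0..M, before Fourier transforming.
def tukey_window(k, M):
    """Tukey (Tukey-Hanning) window: 0.5*(1 + cos(pi*k/M))."""
    return 0.5 * (1.0 + math.cos(math.pi * k / M))

def parzen_window(k, M):
    """Parzen window, piecewise cubic in u = k/M."""
    u = k / M
    return 1.0 - 6.0 * u * u + 6.0 * u ** 3 if u <= 0.5 else 2.0 * (1.0 - u) ** 3

M = 10
print(tukey_window(0, M) == 1.0 and parzen_window(0, M) == 1.0)   # full weight at lag 0
print(tukey_window(M, M) < 1e-12 and parzen_window(M, M) == 0.0)  # taper to zero at lag M
```

Both taper smoothly from one at lag zero to zero at the truncation lag $M$; the Parzen window decays faster, which is why it needs a higher truncation lag for the same bandwidth.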

#### Cross-spectrum estimation

Two functions are available for the main step in cross-spectral analysis. To compute the cospectrum and quadrature spectrum estimates using smoothing by a lag window, nag_tsa_multi_spectrum_lag (g13cc) should be used. It takes as input either the original series or cross-covariances which may be computed in a previous call of the same function or possibly using results from nag_tsa_multi_xcorr (g13bc). As in the univariate case, this gives some advantage if estimates for the same series are to be computed with different amounts of smoothing.
The window shape will normally be chosen to match that already used in univariate spectrum estimation for the series.
For direct frequency domain smoothing, nag_tsa_multi_spectrum_daniell (g13cd) should be used, with similar consideration for the univariate estimation in choice of degree of smoothing.
The cross-amplitude and squared coherency spectrum estimates are calculated, together with upper and lower confidence bounds, using nag_tsa_multi_spectrum_bivar (g13ce). For input the cross-spectral estimates from either nag_tsa_multi_spectrum_lag (g13cc) or nag_tsa_multi_spectrum_daniell (g13cd) and corresponding univariate spectra from either nag_tsa_uni_spectrum_lag (g13ca) or nag_tsa_uni_spectrum_daniell (g13cb) are required.
The gain and phase spectrum estimates are calculated together with upper and lower confidence bounds using nag_tsa_multi_gain_bivar (g13cf). The required input is as for nag_tsa_multi_spectrum_bivar (g13ce) above.
The noise spectrum estimates and impulse response function estimates are calculated together with multiplying factors for confidence limits on the former, and the standard error for the latter, using nag_tsa_multi_noise_bivar (g13cg). The required input is again the same as for nag_tsa_multi_spectrum_bivar (g13ce) above.

### GARCH Models

The main choice in selecting a type of GARCH model is whether the data is symmetric or asymmetric and, if asymmetric, what form of asymmetry should be included in the model.
A symmetric ARCH or GARCH model can be fitted by nag_tsa_uni_garch_asym1_estim (g13fa) and the volatility forecast by nag_tsa_uni_garch_asym1_forecast (g13fb). For asymmetric data the choice is between the type of asymmetry as described in Section [GARCH Models].
| GARCH Type | Fit | Forecast |
|---|---|---|
| Type I | nag_tsa_uni_garch_asym1_estim (g13fa) | nag_tsa_uni_garch_asym1_forecast (g13fb) |
| Type II | nag_tsa_uni_garch_asym2_estim (g13fc) | nag_tsa_uni_garch_asym2_forecast (g13fd) |
| GJR | nag_tsa_uni_garch_gjr_estim (g13fe) | nag_tsa_uni_garch_gjr_forecast (g13ff) |
| EGARCH | nag_tsa_uni_garch_exp_estim (g13fg) | nag_tsa_uni_garch_exp_forecast (g13fh) |
All functions allow the option of including regressor variables in the model and the choice between Normal and Student's t$t$-distribution for the errors.

### Inhomogeneous Time Series

The following functions deal with inhomogeneous time series: nag_tsa_inhom_iema (g13me), nag_tsa_inhom_iema_all (g13mf) and nag_tsa_inhom_ma (g13mg).
Both nag_tsa_inhom_iema (g13me) and nag_tsa_inhom_iema_all (g13mf) calculate the $m$-iterated exponential moving average (EMA). In most cases nag_tsa_inhom_iema (g13me) can be used, which returns $\text{EMA}[\tau, m; z]$ for a given value of $m$, overwriting the input data. Sometimes it is advantageous to have access to the intermediate results, for example when calculating the differential operator, in which case nag_tsa_inhom_iema_all (g13mf) can be used, which can return $\text{EMA}[\tau, i; z]$, for $i = 1, 2, \dots, m$. nag_tsa_inhom_iema_all (g13mf) can also be used if you do not wish to overwrite the input data.
The last function, nag_tsa_inhom_ma (g13mg), should be used if you require the moving average (MA), moving norm (MNorm), moving variance (MVar) or moving standard deviation (MSD). Other operators can be calculated by calling a combination of these three functions and the use of simple mathematics (additions, subtractions, etc.).

### Time Series Simulation

There are functions available in Chapter G05 for generating a realisation of a time series from a specified model: nag_rand_times_arma (g05ph) for univariate time series and nag_rand_times_mv_varma (g05pj) for multivariate time series. There is also a suite of functions for simulating GARCH models: nag_rand_times_garch_asym1 (g05pd), nag_rand_times_garch_asym2 (g05pe), nag_rand_times_garch_gjr (g05pf) and nag_rand_times_garch_exp (g05pg). The function nag_rand_times_smooth_exp (g05pm) can be used to simulate data from an exponential smoothing model.
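For orientation, simulating an ARMA realisation is conceptually straightforward; the following sketch (illustrative Python, not g05ph; parameter values hypothetical) generates a series from an ARMA(1,1) model with Normal innovations:

```python
import random

# Sketch: generate a realisation from x_t = phi*x_{t-1} + eps_t + theta*eps_{t-1}
# with eps_t ~ N(0, sigma^2).  The process is stationary for |phi| < 1.
def simulate_arma11(n, phi, theta, sigma, seed=0):
    rng = random.Random(seed)
    x, x_prev, eps_prev = [], 0.0, 0.0
    for _ in range(n):
        eps = rng.gauss(0.0, sigma)
        xt = phi * x_prev + eps + theta * eps_prev
        x.append(xt)
        x_prev, eps_prev = xt, eps
    return x

series = simulate_arma11(200, phi=0.6, theta=0.3, sigma=1.0)
print(len(series) == 200)   # True
```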

## Functionality Index

 ARMA modelling,
 ACF nag_tsa_uni_autocorr (g13ab)
 diagnostic checking nag_tsa_uni_arima_resid (g13as)
 differencing nag_tsa_uni_diff (g13aa)
 estimation (comprehensive) nag_tsa_uni_arima_estim (g13ae)
 estimation (easy-to-use) nag_tsa_uni_arima_estim_easy (g13af)
 forecasting from fully specified model nag_tsa_uni_arima_forcecast (g13aj)
 forecasting from state set nag_tsa_uni_arima_forecast_state (g13ah)
 mean/range nag_tsa_uni_means (g13au)
 PACF nag_tsa_uni_autocorr_part (g13ac)
 update state set nag_tsa_uni_arima_update (g13ag)
 Bivariate spectral analysis,
 Bartlett, Tukey, Parzen windows nag_tsa_multi_spectrum_lag (g13cc)
 cross amplitude spectrum nag_tsa_multi_spectrum_bivar (g13ce)
 direct smoothing nag_tsa_multi_spectrum_daniell (g13cd)
 gain and phase nag_tsa_multi_gain_bivar (g13cf)
 noise spectrum nag_tsa_multi_noise_bivar (g13cg)
 Exponential smoothing nag_tsa_uni_smooth_exp (g13am)
 GARCH,
 EGARCH,
 fitting nag_tsa_uni_garch_exp_estim (g13fg)
 forecasting nag_tsa_uni_garch_exp_forecast (g13fh)
 GJR GARCH,
 fitting nag_tsa_uni_garch_gjr_estim (g13fe)
 forecasting nag_tsa_uni_garch_gjr_forecast (g13ff)
 symmetric or type I AGARCH,
 fitting nag_tsa_uni_garch_asym1_estim (g13fa)
 forecasting nag_tsa_uni_garch_asym1_forecast (g13fb)
 type II AGARCH,
 fitting nag_tsa_uni_garch_asym2_estim (g13fc)
 forecasting nag_tsa_uni_garch_asym2_forecast (g13fd)
 Inhomogeneous series,
 iterated exponential moving average,
 final value only returned nag_tsa_inhom_iema (g13me)
 intermediate values returned nag_tsa_inhom_iema_all (g13mf)
 moving average nag_tsa_inhom_ma (g13mg)
 Kalman,
 filter,
 time invariant,
 square root covariance nag_tsa_multi_kalman_sqrt_invar (g13eb)
 time varying,
 square root covariance nag_tsa_multi_kalman_sqrt_var (g13ea)
 Transfer function modelling,
 cross-correlations nag_tsa_multi_xcorr (g13bc)
 filtering nag_tsa_multi_filter_transf (g13bb)
 fitting nag_tsa_multi_inputmod_estim (g13be)
 forecasting from fully specified model nag_tsa_multi_inputmod_forecast (g13bj)
 forecasting from state set nag_tsa_multi_inputmod_forecast_state (g13bh)
 preliminary estimation nag_tsa_multi_transf_prelim (g13bd)
 pre-whitening nag_tsa_multi_filter_arima (g13ba)
 update state set nag_tsa_multi_inputmod_update (g13bg)
 Univariate spectral analysis,
 Bartlett, Tukey, Parzen windows nag_tsa_uni_spectrum_lag (g13ca)
 direct smoothing nag_tsa_uni_spectrum_daniell (g13cb)
 Vector ARMA,
 cross-correlations nag_tsa_multi_corrmat_cross (g13dm)
 diagnostic checks nag_tsa_multi_varma_diag (g13ds)
 differencing nag_tsa_multi_diff (g13dl)
 fitting nag_tsa_multi_varma_estimate (g13dd)
 forecasting nag_tsa_multi_varma_forecast (g13dj)
 partial autoregression matrices nag_tsa_multi_regmat_partial (g13dp)
 partial correlation matrices nag_tsa_multi_corrmat_partlag (g13dn)
 squared partial autocorrelations nag_tsa_multi_autocorr_part (g13db)
 update forecast nag_tsa_multi_varma_update (g13dk)
 zeros of ARIMA operator nag_tsa_uni_arma_roots (g13dx)

## References

Akaike H (1971) Autoregressive model fitting for control Ann. Inst. Statist. Math. 23 163–180
Bollerslev T (1986) Generalised autoregressive conditional heteroskedasticity Journal of Econometrics 31 307–327
Box G E P and Jenkins G M (1976) Time Series Analysis: Forecasting and Control (Revised Edition) Holden–Day
Engle R (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation Econometrica 50 987–1008
Gentleman W S and Sande G (1966) Fast Fourier transforms for fun and profit Proc. Joint Computer Conference, AFIPS 29 563–578
Glosten L, Jagannathan R and Runkle D (1993) Relationship between the expected value and the volatility of nominal excess return on stocks Journal of Finance 48 1779–1801
Hamilton J (1994) Time Series Analysis Princeton University Press
Heyse J F and Wei W W S (1985) The partial lag autocorrelation function Technical Report No. 32 Department of Statistics, Temple University, Philadelphia
Tiao G C and Box G E P (1981) Modelling multiple time series with applications J. Am. Stat. Assoc. 76 802–816
Wei W W S (1990) Time Series Analysis: Univariate and Multivariate Methods Addison–Wesley
Zumbach G O and Müller U A (2001) Operators on inhomogeneous time series International Journal of Theoretical and Applied Finance 4(1) 147–178