Box-Jenkins Model Identification
Stationarity and Seasonality | The first step in developing a Box-Jenkins model is to determine if the series is stationary and if there is any significant seasonality that needs to be modeled.
Detecting stationarity | Stationarity can be assessed from a run sequence plot. The run sequence plot should show constant location and scale. It can also be detected from an autocorrelation plot. Specifically, non-stationarity is often indicated by an autocorrelation plot with very slow decay.
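Code sketch | As a minimal illustration (not part of the original handbook text), the following Python sketch draws a run sequence plot and a sample autocorrelation plot for a simulated random walk; the slowly decaying autocorrelations are the non-stationarity signature described above. The data are synthetic, and statsmodels and matplotlib are assumed to be available.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from statsmodels.graphics.tsaplots import plot_acf

    rng = np.random.default_rng(0)
    y = pd.Series(np.cumsum(rng.normal(size=200)))  # random walk: non-stationary

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
    ax1.plot(y)                    # run sequence plot: check for constant location and scale
    ax1.set_title("Run sequence plot")
    plot_acf(y, lags=40, ax=ax2)   # very slow decay here indicates non-stationarity
    plt.tight_layout()
    plt.show()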
Detecting seasonality | Seasonality (or periodicity) can usually be assessed from an autocorrelation plot, a seasonal subseries plot, or a spectral plot.
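Code sketch | For monthly data, a seasonal subseries plot can be generated with statsmodels' month_plot; this is only a sketch on synthetic data with a built-in period-12 cycle. Clearly different levels across the twelve monthly subseries indicate seasonality.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from statsmodels.graphics.tsaplots import month_plot

    rng = np.random.default_rng(1)
    idx = pd.period_range("2000-01", periods=120, freq="M")
    cycle = 2.0 * np.sin(2 * np.pi * idx.month / 12)   # period-12 seasonal component
    y = pd.Series(cycle + rng.normal(size=120), index=idx)

    month_plot(y)   # one subseries per calendar month; differing levels reveal seasonality
    plt.show()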
Differencing to achieve stationarity | Box and Jenkins recommend the differencing approach to achieve stationarity. However, fitting a curve and subtracting the fitted values from the original data can also be used in the context of Box-Jenkins models.
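Code sketch | First differencing is a one-liner in pandas; in this sketch (synthetic data, nothing here is from the handbook) a random walk is differenced once, which is usually enough to stabilize the level.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    y = pd.Series(np.cumsum(rng.normal(size=200)))  # random walk with a unit root

    dy = y.diff().dropna()    # first difference: y[t] - y[t-1]
    # If the level still drifts, difference a second time: dy.diff().dropna()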
Seasonal differencing | At the model identification stage, our goal is to detect seasonality, if it exists, and to identify the order for the seasonal autoregressive and seasonal moving average terms. For many series, the period is known and a single seasonality term is sufficient. For example, for monthly data we would typically include either a seasonal AR 12 term or a seasonal MA 12 term. For Box-Jenkins models, we do not explicitly remove seasonality before fitting the model. Instead, we include the order of the seasonal terms in the model specification to the ARIMA estimation software. However, it may be helpful to apply a seasonal difference to the data and regenerate the autocorrelation and partial autocorrelation plots. This may help in the model identification of the non-seasonal component of the model. In some cases, the seasonal differencing may remove most or all of the seasonality effect.
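Code sketch | A lag-12 seasonal difference and the regenerated autocorrelation and partial autocorrelation plots might look like this in Python (the monthly series is synthetic; plot_acf and plot_pacf are from statsmodels).

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

    rng = np.random.default_rng(3)
    t = np.arange(144)
    y = pd.Series(2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(size=144))

    y12 = y.diff(12).dropna()          # seasonal difference: y[t] - y[t-12]

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
    plot_acf(y12, lags=36, ax=ax1)     # regenerated autocorrelation plot
    plot_pacf(y12, lags=36, ax=ax2)    # regenerated partial autocorrelation plot
    plt.tight_layout()
    plt.show()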
Identify p and q | Once stationarity and seasonality have been addressed, the next step is to identify the order (i.e., the \(p\) and \(q\)) of the autoregressive and moving average terms.
Autocorrelation and Partial Autocorrelation Plots | The primary tools for doing this are the autocorrelation plot and the partial autocorrelation plot. The sample autocorrelation plot and the sample partial autocorrelation plot are compared to the theoretical behavior of these plots when the order is known.
Order of Autoregressive Process (\(p\)) | Specifically, for an AR(1) process, the sample autocorrelation function should have an exponentially decreasing appearance. However, higher-order AR processes are often a mixture of exponentially decreasing and damped sinusoidal components. For higher-order autoregressive processes, the sample autocorrelation needs to be supplemented with a partial autocorrelation plot. The partial autocorrelation of an AR(\(p\)) process becomes zero at lag \(p + 1\) and greater, so we examine the sample partial autocorrelation function to see if there is evidence of a departure from zero. This is usually determined by placing a 95 % confidence interval on the sample partial autocorrelation plot (most software programs that generate sample autocorrelation plots will also plot this confidence interval). If the software program does not generate the confidence band, it is approximately \(\pm 2/\sqrt{N}\), with \(N\) denoting the sample size.
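Code sketch | The lag-\(p\) cutoff can be checked numerically as well as visually. This sketch simulates an AR(2) series and compares the sample partial autocorrelations to the approximate \(\pm 2/\sqrt{N}\) band; only the first two lags should stand out. All names and data are illustrative.

    import numpy as np
    from statsmodels.tsa.arima_process import ArmaProcess
    from statsmodels.tsa.stattools import pacf

    rng = np.random.default_rng(4)
    ar = np.array([1, -0.6, -0.3])     # AR(2): y[t] = 0.6 y[t-1] + 0.3 y[t-2] + e[t]
    y = ArmaProcess(ar, np.array([1])).generate_sample(
            nsample=500, distrvs=rng.standard_normal)

    values = pacf(y, nlags=10)
    band = 2 / np.sqrt(len(y))         # approximate 95 % limits
    for lag, v in enumerate(values[1:], start=1):
        status = "significant" if abs(v) > band else "within band"
        print(f"lag {lag:2d}: {v:+.3f}  ({status})")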
Order of Moving Average Process (\(q\)) | The autocorrelation function of an MA(\(q\)) process becomes zero at lag \(q + 1\) and greater, so we examine the sample autocorrelation function to see where it essentially becomes zero. We do this by placing the 95 % confidence interval for the sample autocorrelation function on the sample autocorrelation plot. Most software that can generate the autocorrelation plot can also generate this confidence interval. The sample partial autocorrelation function is generally not helpful for identifying the order of the moving average process.
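Code sketch | The corresponding check for a moving average process, again only a sketch on simulated data: for an MA(1) series the sample autocorrelation at lag 1 should fall outside the 95 % band that plot_acf shades, and lags 2 and beyond should fall inside it.

    import numpy as np
    import matplotlib.pyplot as plt
    from statsmodels.tsa.arima_process import ArmaProcess
    from statsmodels.graphics.tsaplots import plot_acf

    rng = np.random.default_rng(5)
    ma = np.array([1, 0.7])            # MA(1): y[t] = e[t] + 0.7 e[t-1]
    y = ArmaProcess(np.array([1]), ma).generate_sample(
            nsample=500, distrvs=rng.standard_normal)

    plot_acf(y, lags=20, alpha=0.05)   # shaded region is the 95 % confidence band
    plt.show()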
Shape of Autocorrelation Function | The following table summarizes how we use the sample autocorrelation function for model identification.

SHAPE | INDICATED MODEL
Exponential, decaying to zero | Autoregressive model. Use the partial autocorrelation plot to identify the order of the autoregressive model.
Alternating positive and negative, decaying to zero | Autoregressive model. Use the partial autocorrelation plot to help identify the order.
One or more spikes, rest are essentially zero | Moving average model, order identified by where plot becomes zero.
Decay, starting after a few lags | Mixed autoregressive and moving average (ARMA) model.
All zero or close to zero | Data are essentially random.
High values at fixed intervals | Include seasonal autoregressive term.
No decay to zero | Series is not stationary.
Mixed Models Difficult to Identify | In practice, the sample autocorrelation and partial autocorrelation functions are random variables and will not give the same picture as the theoretical functions. This makes model identification more difficult. In particular, mixed models can be especially difficult to identify. Although experience is helpful, developing good models using these sample plots can involve much trial and error. For this reason, in recent years information-based criteria such as FPE (Final Prediction Error) and AIC (Akaike Information Criterion) have come to be preferred. These criteria help automate the model identification process, although they require computer software. Fortunately, such techniques are available in many commercial statistical software programs that provide ARIMA modeling capabilities. For additional information on these techniques, see Brockwell and Davis (1987, 2002).
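Code sketch | One way such criteria are used in practice, sketched here with statsmodels' ARIMA class on simulated ARMA(1,1) data: fit a small grid of candidate \((p, q)\) orders and keep the fit with the smallest AIC. The grid bounds and data are illustrative assumptions, not the handbook's procedure.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import ArmaProcess

    rng = np.random.default_rng(6)
    y = ArmaProcess(np.array([1, -0.5]), np.array([1, 0.4])).generate_sample(
            nsample=300, distrvs=rng.standard_normal)   # true order: ARMA(1,1)

    best = None
    for p in range(3):
        for q in range(3):
            aic = ARIMA(y, order=(p, 0, q)).fit().aic   # lower AIC is better
            print(f"ARMA({p},{q}): AIC = {aic:.1f}")
            if best is None or aic < best[0]:
                best = (aic, p, q)
    print(f"selected order: ARMA({best[1]},{best[2]})")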
Examples | We show a typical series of plots for performing the initial model identification for the southern oscillations data and the CO2 monthly concentrations data.