A better definition of a time series is:

...a collection of observations made sequentially in time.

The fundamental goals of time series analysis are forecasting and the explanation (interpretation) of historical trends. The statistical analysis of time series is a large and growing field, and its methods have had to be adapted to the particularities of these kinds of data. Traditional statistical techniques cannot be applied directly, for two reasons:

  1. Independence of observations: Under normal circumstances, the observations to be treated statistically obey the assumption of independence: observation x_i is independent of the other observations in the series. Time series data, like spatial data, obviously violate this assumption (see the short simulation after this list).
  2. Directionality of the series: Statistical methods adapted for the analysis of spatial data also treat observations as interdependent to a certain degree, but the mathematical formulation of that interdependence is a bi-directional function of the distance between observations. In a time series this is clearly not the case: the value of an observation in 1998 may depend in part on the value of the observation in 1996, but the converse is clearly impossible.
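
As a minimal illustration of the first point, the sketch below simulates a series in which each value depends on its predecessor (an AR(1) process) and compares the correlation between consecutive observations with that of a randomly shuffled copy; the coefficient 0.8, the series length, and the seed are illustrative choices, not anything from the text above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series: x_t = 0.8 * x_{t-1} + noise.
# The coefficient 0.8 and length 500 are arbitrary illustrative values.
n = 500
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

def lag1_corr(series):
    """Correlation between each observation and its predecessor."""
    return np.corrcoef(series[:-1], series[1:])[0, 1]

print(f"lag-1 correlation, original series: {lag1_corr(x):.3f}")  # near 0.8

shuffled = rng.permutation(x)  # destroys the temporal ordering
print(f"lag-1 correlation, shuffled series: {lag1_corr(shuffled):.3f}")  # near 0
```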

Two functions are at the foundation of most time series techniques: the autocorrelation function and the autocovariance function. The autocorrelation function is a mathematical expression of the extent to which pairs of observations separated by various distances in time (termed lags) are correlated. The autocovariance function is essentially the same thing, since a correlation is simply a standardized covariance.
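
In symbols, for a stationary series x_t with mean mu and variance sigma^2, the two functions at lag k are conventionally written as follows (standard textbook notation, not notation defined in the text above):

```latex
% Autocovariance at lag k: the covariance of the series with a copy of
% itself shifted k steps forward in time.
\gamma(k) = \operatorname{Cov}(x_t,\, x_{t+k})
          = \mathbb{E}\left[(x_t - \mu)(x_{t+k} - \mu)\right]

% Autocorrelation at lag k: the autocovariance standardized by the
% variance, so that \rho(0) = 1 and -1 \le \rho(k) \le 1.
\rho(k) = \frac{\gamma(k)}{\gamma(0)} = \frac{\gamma(k)}{\sigma^2}
```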

Using these two functions, statisticians are able to build a family of predictive models. The most common models (those found in textbooks and in the academic literature) are the AR (autoregressive), MA (moving average), and ARMA (mixed autoregressive moving average) models.
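
As a sketch of how such a model might be fitted in practice, the example below uses the ARIMA class from the Python statsmodels library (an ARMA(p, q) model is simply an ARIMA(p, 0, q) model, i.e. one with no differencing). The simulated process, its coefficients, and the chosen order (2, 0, 1) are illustrative assumptions, not anything specified in the text above.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

# Simulate 300 observations from a known, stationary ARMA(2, 1) process.
# ArmaProcess takes the full AR and MA lag polynomials, including lag 0.
ar = np.array([1, -0.6, 0.2])  # AR polynomial: 1 - 0.6 L + 0.2 L^2
ma = np.array([1, 0.4])        # MA polynomial: 1 + 0.4 L
y = ArmaProcess(ar, ma).generate_sample(nsample=300)

# Fit an ARMA(2, 1) model: ARIMA with no differencing (d = 0).
model = ARIMA(y, order=(2, 0, 1))
result = model.fit()
print(result.summary())        # estimated coefficients, standard errors, AIC

# Forecast the next 10 observations from the fitted model.
print(result.forecast(steps=10))
```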