Who among you will not remember Marty McFly and Doc Brown’s DeLorean as it whizzes through the folds of space-time in the movie “Back to the Future”… In the realm of finance, understanding the intricacies of time series data is akin to unlocking the secrets of time travel. Just as Marty McFly and Doc Brown embark on an adventure through time, financial analysts delve into the depths of historical data, employing a variety of time series forecasting techniques.
When we start to work with time series, we usually handle just prices and volumes, and we apply many different techniques to benchmark their capacity to predict future movements or future volatility. Querying a Generative Pre-trained Transformer, we collected some of these techniques:
- Autoregressive Integrated Moving Average (ARIMA): ARIMA models are widely used in finance for forecasting stock prices, exchange rates, and other financial time series data. They are effective in capturing trends, seasonality, and irregularities in the data.
- Seasonal Autoregressive Integrated Moving-Average (SARIMA): SARIMA is an extension of the ARIMA model that explicitly supports univariate time series data with a seasonal component. It is commonly utilized in finance for modeling and forecasting data with significant seasonal fluctuations, such as seasonal variations in stock prices, sales data, and economic indicators.
- GARCH (Generalized Autoregressive Conditional Heteroskedasticity): GARCH models are commonly used to model and forecast volatility in financial time series data, such as stock market returns and asset prices. They are valuable for estimating and predicting the risk associated with financial assets.
- Exponential Smoothing (ETS): Exponential smoothing methods are used to forecast short-term trends and seasonal patterns in financial data. They are especially useful for generating smooth forecasts and can be applied to various financial metrics, such as sales, revenues, and demand forecasting.
- Seasonal-Trend decomposition using Loess (STL): A method that decomposes a time series into seasonal, trend, and remainder components, which can then be forecasted individually.
- Vector Autoregression (VAR): VAR models are used to analyze the dynamic relationships among multiple time series variables in finance, such as interest rates, inflation rates, and stock market indices. They are valuable for understanding the interdependencies among different financial indicators.
- Long Short-Term Memory (LSTM) Networks: LSTM networks are increasingly used in finance for time series forecasting tasks, such as predicting stock prices and market trends. They are capable of capturing long-term dependencies and complex patterns in financial data, making them well-suited for modeling non-linear relationships.
- Bayesian Structural Time Series (BSTS): BSTS models are used in finance for forecasting time series data with multiple components, such as trend, seasonality, and irregular fluctuations. They are valuable for incorporating prior information and uncertainty into the forecasting process, making them robust for financial applications.
- Prophet: Prophet is widely used for forecasting in finance as it can handle irregularities in data, such as missing values and outliers. It is effective for predicting trends in financial time series data, including stock prices, commodity prices, and economic indicators.
- Neural Prophet: An extension of Facebook’s Prophet that incorporates neural networks for handling complex time series patterns and dependencies.
- Wavelet Analysis: A technique that decomposes time series data into different frequency components, allowing for analysis at multiple scales. It can be used for denoising, smoothing, and forecasting time series data.

When choosing a time series forecasting method for financial forecasting, it is essential to consider the specific characteristics of the financial data, such as seasonality, trend, volatility, and interdependencies, to select the most appropriate technique for the task at hand. Additionally, it is crucial to assess the performance of the chosen model using appropriate evaluation metrics to ensure its effectiveness in making accurate financial predictions.
ARIMA
ARIMA (AutoRegressive Integrated Moving Average) is a popular and powerful time series analysis and forecasting method. It is a combination of autoregressive (AR) and moving average (MA) models that also incorporates differencing to handle non-stationary time series data. ARIMA models are widely used in various fields, including economics, finance, and weather forecasting, to make predictions based on historical data patterns.
The ARIMA model consists of three main components:
- AutoRegressive (AR) Component: The autoregressive part of the model utilizes the dependent relationship between an observation and a certain number of lagged observations. It predicts the value at a specific time point using a linear combination of the previous values.
- Integrated (I) Component: The integrated part of the model involves differencing the observations to make the time series data stationary. Stationarity implies that the statistical properties of the series, such as mean and variance, do not change over time. Differencing is used to remove trends and seasonality.
- Moving Average (MA) Component: The moving average part of the model considers the dependency between an observation and a residual error from a moving average model applied to lagged observations. It helps to smooth out the noise in the data and identify any underlying patterns.
The general form of an ARIMA model is represented as ARIMA(p, d, q), where:
- p denotes the order of the autoregressive component.
- d denotes the degree of differencing.
- q denotes the order of the moving average component.
The steps involved in building an ARIMA model typically include:
- Data preprocessing, including handling missing values and transforming the data to achieve stationarity if necessary.
- Identification of the appropriate values for p, d, and q through data analysis, autocorrelation function (ACF), and partial autocorrelation function (PACF) plots.
- Model fitting using the selected parameters.
- Model diagnostics to ensure that the model assumptions are met.
- Forecasting future values based on the trained model.
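As a minimal sketch of these steps, assuming a hypothetical pandas Series of closing prices (here a synthetic random walk stands in for real data), an ARIMA(1, 1, 1) can be fitted with statsmodels:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical price series: a random walk standing in for real closing prices
rng = np.random.default_rng(42)
prices = pd.Series(
    100 + np.cumsum(rng.normal(0, 1, 250)),
    index=pd.date_range("2023-01-02", periods=250, freq="B"),
)

# ARIMA(p=1, d=1, q=1): one AR lag, first differencing, one MA lag
fitted = ARIMA(prices, order=(1, 1, 1)).fit()

print(fitted.summary())          # model diagnostics
print(fitted.forecast(steps=5))  # 5-step-ahead forecast
```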
ARIMA models are versatile and can capture a wide range of linear patterns in time series data. However, they may not be suitable for highly complex or nonlinear relationships. In such cases, more advanced models, such as SARIMA or machine learning-based approaches, may be more appropriate.
SARIMA
SARIMA (Seasonal Autoregressive Integrated Moving Average) is an extension of the ARIMA model that is designed to handle time series data with seasonal trends. It is a powerful and flexible forecasting technique that incorporates both the ARIMA components and seasonal components to capture complex seasonal patterns in the data. SARIMA models are particularly useful for forecasting time series data that exhibit seasonal fluctuations or patterns over specific time intervals.
The SARIMA model includes the following components:
- Seasonal AutoRegressive (SAR) Component: This part of the model represents the linear relationship between the observation and its lagged values over seasonal time periods. It captures the seasonal patterns in the data and helps predict the seasonal changes.
- Seasonal Integrated (I) Component: Similar to the non-seasonal integrated component in ARIMA, the seasonal integrated component in SARIMA involves differencing the data at seasonal lags to achieve stationarity.
- Seasonal Moving Average (SMA) Component: This component considers the dependency between an observation and the residual error from a moving average model applied to seasonal lags. It helps to account for the seasonal noise or fluctuations in the data.
The general form of a SARIMA model is represented as SARIMA(p, d, q)(P, D, Q)s, where:
- p, d, and q represent the non-seasonal AR, differencing, and MA components, respectively.
- P, D, and Q represent the seasonal AR, seasonal differencing, and seasonal MA components, respectively.
- s denotes the length of the seasonal cycle.
The steps involved in building a SARIMA model are similar to those for ARIMA models, but with additional considerations for the seasonal components. These steps include data preprocessing, identifying the appropriate orders for both the non-seasonal and seasonal components, model fitting, model diagnostics, and forecasting future values.
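As an illustrative sketch (orders chosen for the example, not as a recommendation), statsmodels' SARIMAX class covers the SARIMA family; here a SARIMA(1, 1, 1)(1, 1, 1)12 is fitted to a hypothetical monthly series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly series with a yearly (s = 12) seasonal pattern
rng = np.random.default_rng(0)
t = np.arange(120)
y = pd.Series(
    10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120),
    index=pd.date_range("2015-01-01", periods=120, freq="MS"),
)

# Non-seasonal (p, d, q) and seasonal (P, D, Q, s) orders
fitted = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
print(fitted.forecast(steps=12))  # forecast one full seasonal cycle
```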
SARIMA models are effective for capturing complex seasonal patterns and fluctuations in time series data. However, they may become computationally intensive and require a significant amount of data for accurate estimation, especially when dealing with multiple seasonal patterns or high-frequency data. Nonetheless, they remain one of the key tools for time series forecasting, especially for data with clear seasonal trends.
GARCH
GARCH, which stands for Generalized Autoregressive Conditional Heteroskedasticity, is a statistical model used to analyze and forecast the volatility of financial time series data. It is an extension of the ARCH (Autoregressive Conditional Heteroskedasticity) model, which was developed by economist Robert F. Engle in the early 1980s. GARCH allows for the modeling of time-varying volatility in financial markets, acknowledging the presence of heteroskedasticity (varying levels of volatility) in the data.
The GARCH model captures the volatility clustering phenomenon often observed in financial data, where periods of high volatility tend to be followed by additional periods of high volatility, and vice versa for low volatility periods. This behavior is important in financial analysis, as it can have significant implications for risk management and the pricing of financial derivatives.
The GARCH model incorporates both past values of the series being modeled and past forecast errors to predict the conditional variance of the series at each point in time. By estimating the parameters of the GARCH model, analysts and researchers can gain insights into the persistence and dynamics of volatility, enabling them to make more accurate forecasts and assess the associated risks.
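As a minimal sketch, the Python `arch` package fits a GARCH(1, 1) in a few lines; the returns below are hypothetical placeholders:

```python
import numpy as np
from arch import arch_model

# Hypothetical daily returns in percent (the arch package behaves best near unit scale)
rng = np.random.default_rng(7)
returns = rng.normal(0, 1, 1000)

# GARCH(1,1): variance driven by one lagged squared shock and one lagged variance
am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
res = am.fit(disp="off")

print(res.summary())
print(res.forecast(horizon=5).variance.iloc[-1])  # 5-day-ahead variance path
```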
Overall, GARCH has become a fundamental tool in the analysis of financial time series data, and its various extensions and modifications have been widely used in the field of quantitative finance and risk management.
ETS
Exponential Smoothing (ETS) is a popular and effective technique for time series forecasting that is based on the principle of smoothing the data to capture the underlying patterns and trends. It is widely used in various industries for making short-term forecasts and is particularly useful for data that does not exhibit complex seasonal patterns or significant historical fluctuations.
ETS models work by recursively updating and smoothing the forecasts based on the most recent observations in the time series data. The main components of an ETS model include the following:
- Level: The level component represents the underlying, smoothed value of the time series data at a specific point in time. It captures the long-term average or the baseline of the data.
- Trend: The trend component accounts for the direction and magnitude of the overall trend in the data. It reflects the rate of change in the data over time and helps in forecasting future trends.
- Seasonality: The seasonality component captures the repetitive and periodic patterns that occur at fixed intervals within the time series data. It helps in identifying and forecasting seasonal fluctuations or patterns.
There are different types of exponential smoothing models, including:
- Simple Exponential Smoothing (SES): This model is suitable for data without any clear trend or seasonal patterns. It applies a smoothing parameter to the previous observation to forecast the next value.
- Holt’s Exponential Smoothing: Holt’s method extends simple exponential smoothing by incorporating a trend component. It is useful for data with a clear linear trend but no seasonality.
- Holt-Winters’ Exponential Smoothing: The Holt-Winters method extends Holt’s method by incorporating a seasonal component. It is suitable for data with both trend and seasonality.
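As a hedged sketch of the Holt-Winters variant, using statsmodels on a hypothetical monthly series with additive trend and seasonality:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly demand with a linear trend and a yearly cycle
rng = np.random.default_rng(1)
t = np.arange(60)
y = pd.Series(
    50 + 0.3 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 60),
    index=pd.date_range("2019-01-01", periods=60, freq="MS"),
)

# Additive trend and additive seasonality with a 12-month cycle
fit = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
print(fit.forecast(12))  # forecast the next year
```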
ETS models are relatively simple and efficient, making them suitable for short-term forecasting tasks and situations where complex patterns are not prevalent. They are easy to implement and are widely used for demand forecasting, inventory management, and other business applications where quick and straightforward forecasting is required. However, for data with more complex patterns or irregularities, more advanced time series models like ARIMA or SARIMA may be more appropriate.
STL
STL, or Seasonal-Trend decomposition using Loess, is a time series decomposition technique that separates a time series into three components: seasonal, trend, and remainder components. This method is particularly useful when dealing with time series data that exhibit complex seasonal patterns, as it can effectively isolate the various components, allowing for more accurate forecasting and analysis.
The key components of STL are as follows:
- Seasonal Component: This represents the repetitive and predictable fluctuations or patterns that occur at specific intervals within the time series, such as daily, weekly, or yearly cycles. In STL, the seasonal component is extracted by applying Loess smoothing to each cyclic subseries (for example, all Januaries in monthly data).
- Trend Component: This component represents the long-term progression or direction of the time series data. It highlights the overall upward or downward movement over an extended period, ignoring the short-term fluctuations or noise. The trend component is estimated using a locally weighted regression method, such as the Loess method.
- Remainder Component (or Residual Component): This component captures the random variability or noise that cannot be explained by the seasonal and trend components. It includes the irregular fluctuations and random variations present in the data.
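A short sketch of such a decomposition, using statsmodels' STL class on a hypothetical monthly series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Hypothetical monthly series with trend and yearly seasonality
rng = np.random.default_rng(3)
t = np.arange(96)
y = pd.Series(
    20 + 0.1 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 96),
    index=pd.date_range("2016-01-01", periods=96, freq="MS"),
)

# Loess-based decomposition into trend, seasonal, and remainder
result = STL(y, period=12).fit()
trend, seasonal, remainder = result.trend, result.seasonal, result.resid
```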
By decomposing the time series into these three components, STL enables analysts to understand the underlying patterns and behaviors within the data more accurately. This, in turn, allows for better forecasting, trend analysis, and anomaly detection. STL has found applications in various fields, including finance, economics, and environmental science, where the identification of seasonal and trend patterns is crucial for decision-making and predictive modeling.
VAR
Vector Autoregression (VAR) is a statistical model used to capture the linear interdependencies among multiple time series variables. It is an extension of the Autoregressive model, which is used for analyzing a single time series variable. VAR models are particularly useful when analyzing the dynamic relationship between multiple variables that influence each other over time.
In a VAR model, each variable is regressed on its own lagged values as well as the lagged values of the other variables in the system. This means that each variable in the system is modeled as a linear function of its own past values and the past values of all the other variables in the system. The order of the VAR model specifies the number of lags used in the model, and it is chosen based on the specific characteristics of the data and the goals of the analysis.
VAR models are widely used in macroeconomics, finance, and other fields where the interactions between multiple economic variables are of interest. They are valuable for studying the dynamic effects of shocks or policy changes on a system of variables, as well as for forecasting the behavior of the variables in the system. Furthermore, VAR models can be extended to incorporate other features, such as exogenous variables or structural breaks, to enhance their explanatory power and forecasting accuracy.
Estimating a VAR model typically involves techniques such as ordinary least squares (OLS) or maximum likelihood estimation (MLE). The parameters of the VAR model can be used to understand the causal relationships and dynamic interactions between the variables, providing valuable insights into the underlying structure of the system being studied.
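A minimal sketch with statsmodels, on two hypothetical interrelated series generated with genuine VAR(1) dynamics so that the lag-order selection has something to find:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical two-variable system with true VAR(1) dynamics
rng = np.random.default_rng(5)
A = np.array([[0.5, 0.1], [0.2, 0.4]])
data = np.zeros((300, 2))
for t in range(1, 300):
    data[t] = A @ data[t - 1] + rng.normal(0, 1, 2)
df = pd.DataFrame(data, columns=["rate", "index_ret"])

results = VAR(df).fit(maxlags=8, ic="aic")  # lag order chosen by AIC
print(results.summary())

# Forecast 5 steps ahead from the last k_ar observed lags
print(results.forecast(df.values[-results.k_ar:], steps=5))
```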
LSTM
Long Short-Term Memory (LSTM) is a type of recurrent neural network (RNN) architecture that is designed to handle sequence prediction problems and time series data. LSTMs are particularly effective for capturing long-term dependencies and complex patterns in sequential data by mitigating the vanishing gradient problem often encountered in traditional RNNs.

LSTM networks consist of various components that enable them to remember information over long periods and effectively learn from sequences of data. The key components of an LSTM network include:
- Cell State: The cell state serves as the information highway running through the entire sequence. It allows the network to preserve or discard information as needed, making it well-suited for capturing long-term dependencies.
- Gates: LSTMs use three types of gates—input gate, forget gate, and output gate—to control the flow of information within the network. These gates regulate the information that flows in and out of the cell state, enabling the network to selectively remember or forget information.
- Hidden State: The hidden state acts as the memory of the network and carries information from previous time steps. It is responsible for capturing and storing relevant information from the input sequence and passing it to the next time step.
LSTM networks are trained using backpropagation through time (BPTT), which allows them to learn from sequential data and update the network’s parameters based on the error calculated at each time step. This training process enables LSTMs to effectively model complex temporal dependencies and make accurate predictions based on historical data.
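A minimal Keras sketch, assuming hypothetical supervised arrays in which each sample holds 30 lagged observations of a single feature and the target is the next value:

```python
import numpy as np
from tensorflow import keras

# Hypothetical supervised dataset: (samples, timesteps, features) and targets
rng = np.random.default_rng(9)
X_train = rng.normal(size=(500, 30, 1)).astype("float32")
y_train = rng.normal(size=(500,)).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(30, 1)),
    keras.layers.LSTM(32),   # 32 memory cells with input, forget, and output gates
    keras.layers.Dense(1),   # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)
```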
LSTMs are widely used in various applications, including natural language processing, speech recognition, time series forecasting, and other tasks that involve sequential data. They have proven to be effective in capturing intricate patterns and dependencies in time series data, making them a popular choice for modeling and forecasting tasks where historical context is crucial for making accurate predictions. However, it’s important to note that training LSTMs can be computationally expensive and may require a significant amount of data to prevent overfitting.
BSTS
Bayesian Structural Time Series (BSTS) is a powerful framework for time series modeling that is based on Bayesian statistics. It allows for the estimation and forecasting of time series data, incorporating various structural components and complex relationships between variables. BSTS is particularly useful when dealing with time series that exhibit irregular patterns, trends, seasonal effects, and other forms of nonlinearity.
The key components of a BSTS model include:
- Local linear trend: This component captures the underlying trend in the time series data, allowing for changes in the trend over time.
- Seasonal effects: BSTS can incorporate seasonal patterns in the data, allowing for the modeling of periodic fluctuations that occur at fixed intervals, such as daily, weekly, or yearly patterns.
- Regression effects: BSTS can include regression components that allow for the incorporation of exogenous variables that may influence the time series data.
- Irregular components: This component captures the irregular or residual fluctuations that cannot be explained by the other components.
BSTS leverages the Bayesian approach, which involves specifying prior distributions for the model parameters and updating these distributions based on the available data to obtain the posterior distributions. Markov Chain Monte Carlo (MCMC) methods are often used to obtain samples from the posterior distribution, enabling the estimation of the model parameters and the uncertainty associated with them.
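The reference implementation is the R `bsts` package; as one hedged Python sketch, TensorFlow Probability's structural-time-series module composes the same kinds of components:

```python
import numpy as np
import tensorflow_probability as tfp

# Hypothetical observed series with trend and noise
rng = np.random.default_rng(11)
y = (10 + 0.05 * np.arange(200) + rng.normal(0, 1, 200)).astype("float32")

# Structural components: local linear trend plus a 12-season cycle
trend = tfp.sts.LocalLinearTrend(observed_time_series=y)
seasonal = tfp.sts.Seasonal(num_seasons=12, observed_time_series=y)
model = tfp.sts.Sum([trend, seasonal], observed_time_series=y)

# Posterior inference (variational or MCMC) then yields parameter and
# component estimates, e.g. via tfp.sts.build_factored_surrogate_posterior
# together with tfp.vi.fit_surrogate_posterior.
```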
The BSTS framework is highly flexible and can be adapted to accommodate various forms of time series data, making it a valuable tool in fields such as economics, finance, and epidemiology. By providing a comprehensive approach to time series modeling, BSTS allows analysts to make more accurate predictions and gain a better understanding of the underlying patterns and structures within the data.
PROPHET
Prophet is a forecasting tool developed by Facebook’s Core Data Science team, designed to handle time series data that exhibits strong seasonal patterns. It is an open-source tool that is easy to use and offers powerful capabilities for producing high-quality forecasts. Prophet is particularly useful for forecasting tasks that involve time series data with various seasonalities, trends, and holiday effects.

Prophet operates based on the following key features and components:
- Additive Modeling: Prophet uses an additive model that decomposes the time series data into three main components: trend, seasonality, and holidays. This additive approach allows for a more flexible and intuitive representation of the time series data.
- Flexibility: Prophet is designed to handle a wide range of time series data, including irregularly spaced data, missing values, and outliers. It can effectively model data with various seasonal patterns and can accommodate both upward and downward trends in the data.
- Holiday Effects: Prophet includes the ability to incorporate holiday effects in the forecasting process. Users can specify custom seasonalities to account for holidays or special events that might affect the time series data.
- Automatic Changepoint Detection: Prophet automatically detects changepoints in the time series data, allowing it to adapt to abrupt changes or shifts in the underlying patterns. This feature helps in capturing changes in trends and seasonality more accurately.
- Uncertainty Estimation: Prophet provides uncertainty intervals for the forecasts, allowing users to assess the reliability of the predicted values. This feature is particularly valuable when making decisions based on the forecasted results.
Prophet is available in both Python and R and provides an intuitive, straightforward interface for defining and fitting the forecasting model. It has gained popularity for its ease of use and its ability to produce high-quality forecasts with minimal manual adjustment. While it may not be as flexible as some advanced time series models, Prophet serves as a valuable tool for users who want a simple and effective solution for forecasting time series data with strong seasonal patterns and various effects.
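A minimal usage sketch with hypothetical placeholder data (Prophet expects a DataFrame with columns `ds` and `y`):

```python
import pandas as pd
from prophet import Prophet

# Hypothetical daily series; replace with real observations
df = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=365, freq="D"),
    "y": [i + (i % 7) * 3 for i in range(365)],  # placeholder with a weekly wiggle
})

m = Prophet()  # trend, changepoints, and default seasonalities
m.fit(df)

future = m.make_future_dataframe(periods=30)  # extend 30 days ahead
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```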
WAVELET ANALYSIS
Wavelet analysis is a mathematical technique used for the analysis of time series data, images, and signals. It allows for the examination of data at various scales, enabling the localization of both frequency and time information. This technique is particularly useful when the frequency content of a signal varies over time.

Here are some key features and aspects of Wavelet Analysis:
- Multiresolution Analysis: Wavelet analysis allows for the decomposition of a signal into different frequency components. This decomposition provides a multiresolution representation of the signal, enabling the analysis of the signal at different scales.
- Time-Frequency Localization: Unlike other frequency analysis techniques like the Fourier transform, which provides frequency information but does not retain time information, wavelet analysis offers both time and frequency localization. This capability is especially advantageous when analyzing signals with localized features or time-varying frequencies.
- Applications: Wavelet analysis finds applications in various fields, including signal processing, image analysis, and time series analysis. In finance, wavelet analysis can be used to study and analyze the volatility of financial time series data, identify patterns, and detect anomalies.
- Wavelet Transform: The wavelet transform is a mathematical transformation that decomposes a signal into wavelets, which are small, localized functions. These wavelets are then scaled and translated to analyze the signal at different resolutions and positions.
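A brief sketch with the PyWavelets package, decomposing and lightly denoising a hypothetical noisy return series:

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical noisy daily return series
rng = np.random.default_rng(13)
returns = rng.normal(0, 0.01, 512)

# Three-level discrete wavelet decomposition with a Daubechies-4 wavelet
coeffs = pywt.wavedec(returns, "db4", level=3)
cA3, cD3, cD2, cD1 = coeffs  # coarse approximation plus detail coefficients

# Crude denoising sketch: discard the finest-scale details and reconstruct
coeffs[-1] = np.zeros_like(coeffs[-1])
smoothed = pywt.waverec(coeffs, "db4")
```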
Overall, Wavelet Analysis is a powerful tool for analyzing signals with time-varying characteristics and local features. Its ability to provide both time and frequency localization makes it particularly useful in applications where the identification of specific patterns and features within a signal is essential for analysis and decision-making.
Of course, this list of techniques does not claim to be exhaustive; it is rather an initial compilation of some of the techniques most commonly used in the financial field. Over time, more complex techniques based on non-linear algorithms, such as neural networks (not only LSTMs but also CNNs and stacked architectures), have come to outperform the more traditional ARIMA, SARIMA, and GARCH models. Ensemble methods based on multiple decision trees have established themselves for price forecasting, and in general, combinations of multiple models seem to provide more robust and enduring results over time.
We have created an educational accelerator that explores many of these techniques (along with the four pillars of AI applied to Quantitative Analysis): the Machine Learning Academy (soon available in English as well!).
Have a good time travel!
Giovanni Trombetta
Founder & Head of R&D
Gandalf Project