Predicting the Future: From Oracles to Psycho-History to Modern Financial Market Forecasting Techniques

Oracles in the Ancient World

In ancient Greece, the concept of an ‘oracle’ had deep roots in religious and spiritual culture. Sacred temples served as centers where priests and priestesses acted as intermediaries between humans and gods. The most famous of these temples was the Temple of Apollo at Delphi, regarded as the most influential oracle center in antiquity.

The oracles received questions from the faithful and would meditate or enter into a trance-like state to communicate with the deities themselves. Their responses, often delivered in verse or enigmatic form, were interpreted by intermediaries as guidance for addressing crucial matters, both personal and political.

A peculiar aspect of the oracles was the ambiguity of their responses. This allowed for multiple interpretations, as intermediaries sought to decipher the true meaning behind the oracle’s words. It was a process that required deep reflection and often left room for self-suggestion.

Oracles had a significant influence on the politics of Greek city-states and the ancient Mediterranean world. Leaders’ decisions were often influenced by the prophecies of oracles, which could lead to significant changes in history.

Isaac Asimov’s ‘Psychohistory’: Science in the Service of Prediction

Isaac Asimov, one of the great science fiction authors of the 20th century, introduced the fascinating concept of “psychohistory” in his famous “Foundation” series of novels. Psychohistory is an imaginary discipline that blends psychology, mathematics, and history to predict the behavior of human masses on a galactic scale.

In Asimov’s imaginary world, psychohistory is developed by Hari Seldon, a visionary mathematician, and the concept is central to the Foundation trilogy. Seldon predicts the fall of the Galactic Empire and an ensuing period of anarchy and chaos lasting thousands of years. To mitigate this dark age, he develops psychohistory as a tool for predicting and guiding the future.

Psychohistory is based on two fundamental principles:

  1. Law of Mass Action: Psychohistory considers the behavior of human masses as a complex system, analyzing how the decisions and actions of billions of individuals combine into predictive patterns.
  2. Historical Cycles: Psychohistory assumes that historical cycles repeat and that humans react similarly to similar circumstances at different times. This allows for the prediction of turning points and future trends.

Hari Seldon uses psychohistory to identify key moments of crisis, the so-called ‘Seldon Crises,’ and to chart a course for addressing future challenges. His vision is to preserve human knowledge and shorten the period of chaos between the fall of the Empire and the birth of a new galactic civilization.

Psychohistory in Asimov’s storytelling raises intriguing philosophical and societal questions, including the possibility of predicting human behavior on a large scale, the consequences of losing control over one’s destiny, and the struggle to maintain progress and knowledge in times of turmoil.

In summary, psychohistory is an iconic part of Isaac Asimov’s work that challenges our understanding of science-based future prediction. While pure fiction, it has inspired generations of readers and opened the door to thought-provoking reflections on the relationship between the individual and society, history and mathematics.

In Asimov’s vision, mathematics is a fundamental ingredient for psychohistory. This discipline employs complex equations and statistical calculations to analyze human behavior on a vast scale.

Mathematical Models and Financial Markets

Applying what was said about psychohistory, nothing embodies Asimov’s theory better than the world of financial markets. Millions of traders interact on nearly instantaneous electronic markets, driven by their own greed, fear, and desires. Add machines to the mix (algorithms now account for a significant share of transactions, depending on the market), and it becomes clear how complex it can be to bring order to chaos.

In this endeavor, artificial intelligence comes to our rescue. As with sentiment-analysis services, it allows us to analyze data in different ways depending on the models used and the data we feed them.

Going into more detail, there are various categories of machine learning models, each designed to address specific types of problems. Here are some (it would be impossible to cover them all in an article like this):

  1. Linear Regression: This model is used for regression problems, where the goal is to predict a continuous numeric value from input variables. Linear regression seeks to establish a linear relationship between the input variables and the output (a minimal sketch follows this list).
  2. Logistic Regression: This model is used for binary classification problems, where the goal is to assign an instance to one of two possible classes. Logistic regression estimates the probability that an instance belongs to a specific class.
  3. Decision Trees: Decision trees are used for classification and regression problems. They represent decisions in the form of a tree by dividing data based on specific conditions.
  4. Random Forest: This is an ensemble learning model that combines multiple decision trees to obtain more accurate predictions. It is used for both classification and regression problems.
  5. Support Vector Machine (SVM): SVMs are used for classification and regression. This model seeks to find a hyperplane that best separates different classes or approximates the relationship between input variables and the output.
  6. Artificial Neural Networks (ANN): Neural networks are models inspired by the functioning of the human brain and are used for complex classification and regression problems. Deep neural networks (Deep Learning) can learn hierarchies of features in the data.
  7. K-Means: This is a clustering model used to group similar data into clusters. It is an unsupervised learning model that identifies patterns in the data.
  9. Naive Bayes: This model is used for classification problems, especially text classification. It is based on Bayes’ theorem and assumes conditional independence among input variables.
  9. Bayesian Networks: This model represents relationships between variables using a directed acyclic graph. It is used for conditional probability modeling.
  10. Autoencoder: This model is used for unsupervised learning and dimensionality reduction. It is particularly useful for data compression and extracting meaningful features from data.
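As a toy illustration of the first item, here is a minimal sketch, not taken from the article, that fits a linear regression on a synthetic random-walk price series using scikit-learn; the five-lag feature window and the chronological train/test split are assumptions made for the example.

```python
# Minimal linear-regression sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))   # synthetic price path
returns = np.diff(np.log(prices))                             # daily log returns

# Features: the previous 5 returns; target: the next return.
window = 5
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], window)
y = returns[window:]

split = int(0.8 * len(X))                                     # chronological split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test MSE:", mean_squared_error(y[split:], pred))
```

On a pure random walk the model should find essentially nothing to predict; the point here is the mechanics, not the signal.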

A second category of AI algorithms widely used in finance is reinforcement learning. This is a branch of machine learning that focuses on training agents to make sequential decisions to maximize a reward. Here are some of the main categories of reinforcement learning models:

  1. Q-Learning: This is one of the primary reinforcement learning algorithms. It is based on estimating a Q-function, which evaluates the best action to take in a given state to maximize cumulative rewards over time (a minimal sketch follows this list).
  2. Deep Q-Networks (DQN): DQNs are an extension of Q-learning that uses deep neural networks to approximate the Q-function. This allows handling more complex state spaces and improving generalization.
  3. Policy Gradient Methods: These approaches aim to directly learn the policy, i.e., the agent’s strategy for selecting actions. Policy gradient algorithms seek to maximize the expected cumulative reward.
  4. Actor-Critic: This is a hybrid approach that combines elements of value-based (critic) and policy-based (actor) learning. The agent uses a neural network (the actor) to make decisions and a second neural network (the critic) to estimate action values.
  5. Model-Based Algorithms: These approaches build an internal model of the environment and use it for planning and decision-making. This internal model can be a neural network or another representation.
  6. Imitation Learning Algorithms: These algorithms train an agent to mimic expert behavior rather than directly discovering a policy. They are often used when a human expert is available for learning.
  7. Continuous Approaches: While many reinforcement learning algorithms operate on discrete states and actions, continuous approaches handle continuous action and state spaces as well.
  8. Multi-Agent Reinforcement Learning: This category focuses on coordinating multiple agents in a shared environment. Agents can collaborate or compete to maximize overall rewards.
  9. Hierarchical Learning: In this context, agents learn hierarchically, breaking down the problem into more manageable subproblems. This can improve learning efficiency in complex environments.

These are some of the main categories of reinforcement learning models, but there are many variations and specific approaches within each category. The choice of the reinforcement learning model will depend on the type of problem, the complexity of the environment, and the specific application requirements.
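To make the Q-learning entry above concrete, here is a minimal tabular sketch on a toy three-regime ‘market’ environment. The environment, its reward function, and all hyperparameters are invented for illustration and are not from the article.

```python
# Minimal tabular Q-learning on a toy market-regime environment.
import numpy as np

n_states, n_actions = 3, 3            # regimes: down/flat/up; actions: sell/hold/buy
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Hypothetical environment: random next regime, toy reward."""
    next_state = int(rng.integers(n_states))
    reward = (action - 1) * (next_state - 1)   # profit when position matches regime
    return next_state, reward

state = 1
for _ in range(10_000):
    # Epsilon-greedy action selection.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(3))   # learned action values per regime
```

Since the toy regimes here are random, the learned Q-values mainly illustrate the update rule rather than a real trading edge.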

Evolutionary algorithms are a class of artificial intelligence algorithms inspired by the theory of Darwinian evolution. These algorithms are also used for financial predictions, especially for optimizing trading strategies and portfolio management. Here are some details about evolutionary algorithms used in this context:

  1. Genetic Optimization: Genetic algorithms are one of the most common types of evolutionary algorithms used for financial predictions. They simulate the process of natural selection, where the best solutions are selected and combined to create new generations of solutions. In the financial context, these “solutions” are often trading strategies: genetic algorithms aim to identify the most effective ones by iteratively modifying and combining existing strategies (a minimal sketch follows this list).
  2. Genetic Programming: This is another evolutionary approach that seeks to create computer programs optimized to solve specific problems. In the financial domain, genetic programming can be used to generate customized trading algorithms.
  3. “Monkey” Algorithms: This is an informal term for algorithms that randomly generate trading strategies to test them in the financial market. Strategies that yield positive results are retained, while those that do not are discarded. This process is repeated iteratively until profitable strategies are obtained.
  4. Multi-Population Algorithms: These algorithms use multiple populations of solutions, each with its own characteristics and parameters. The evolution of populations can generate a diversity of trading strategies, allowing greater flexibility in adapting to changing market conditions.
  5. Evolutionary Feature Selection: This approach aims to identify the most relevant features from financial data through evolution. The selected features can then be used to build more effective prediction models.
  6. Portfolio Optimization: Evolutionary algorithms are also used for portfolio optimization. These algorithms seek to determine the optimal composition of an investment portfolio, taking into account return objectives, risk, and constraints.
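As a toy illustration of genetic optimization (item 1 above), the following sketch evolves the two window lengths of a moving-average-crossover strategy on a synthetic price path. The strategy, the fitness function (total log return), and all genetic-algorithm settings are assumptions made for the example.

```python
# Minimal genetic optimization of a crossover strategy (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 2000)))  # synthetic path

def fitness(genes):
    """Total log return of a long-only crossover strategy with windows (fast, slow)."""
    fast, slow = sorted(max(int(abs(g)), 2) for g in genes)
    slow = max(slow, fast + 1)                       # ensure fast < slow
    ma_fast = np.convolve(prices, np.ones(fast) / fast, "valid")
    ma_slow = np.convolve(prices, np.ones(slow) / slow, "valid")
    n = min(len(ma_fast), len(ma_slow))
    signal = (ma_fast[-n:] > ma_slow[-n:])[:-1]      # long when fast MA above slow MA
    rets = np.diff(np.log(prices[-n:]))              # next-bar log returns
    return float(np.sum(signal * rets))

pop = rng.integers(2, 200, size=(20, 2)).astype(float)    # 20 individuals, 2 genes
for _ in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # selection: keep top half
    idx = rng.integers(10, size=(10, 2))
    children = np.stack([parents[idx[:, 0], 0],           # crossover: mix genes
                         parents[idx[:, 1], 1]], axis=1)
    children = children + rng.normal(0, 5, children.shape)  # mutation: jitter windows
    pop = np.vstack([parents, children])

best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
print("best (fast, slow):", sorted(max(int(abs(g)), 2) for g in best))
```

Optimizing total in-sample return like this invites overfitting; in practice the fitness would be evaluated out of sample or penalized for complexity.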

The combined use of machine learning, reinforcement learning, and genetic algorithms can significantly contribute to improved multi-market prediction. These three approaches can work synergistically to provide more accurate and robust forecasts. Here’s how each of them can contribute to this purpose:

  1. Machine Learning (ML): Machine learning algorithms can be used to analyze historical financial data, identify patterns and trends in market movements, and develop predictive models. These models can be trained to forecast the future behavior of financial assets. Here’s how machine learning can contribute:
    – Data Analysis: Machine learning models can process large amounts of historical data to identify hidden patterns and complex relationships among financial variables.
    – Trend Prediction: Machine learning algorithms, such as neural networks, can be trained to predict the future price of a stock or another asset based on historical information.
    – Risk Management: Machine learning models can help assess the risk associated with specific investment decisions and develop risk mitigation strategies.
  2. Reinforcement Learning (RL): Reinforcement learning algorithms are used to learn how to make optimal decisions in sequential situations. In the financial context, they can be used to manage investment portfolios and make trading decisions. Here’s how reinforcement learning can contribute:
    – Optimizing Trading Strategies: Reinforcement learning agents can be trained to learn which actions to take based on market conditions to maximize portfolio returns.
    – Portfolio Management: Reinforcement learning can help balance and manage an investment portfolio dynamically, considering market fluctuations and investor preferences.
    – Adaptation to Market Conditions: Reinforcement learning algorithms can adapt in real time to changing market conditions, making informed decisions based on the latest data.
  3. Genetic Algorithms: Genetic algorithms are used to optimize trading strategies and investment portfolios. They can contribute to finding optimal combinations of parameters and trading rules. Here’s how genetic algorithms can contribute:
    – Strategy Optimization: Genetic algorithms can explore a wide range of trading strategies by iteratively modifying and combining existing strategies to identify the most effective ones.
    – Optimal Parameter Search: They can be used to find optimal parameters for the machine learning models used in financial predictions.
    – Portfolio Diversification: Genetic algorithms can help identify the optimal composition of an investment portfolio, taking into account the diverse characteristics of assets.

In summary, the integration of these three approaches – machine learning, reinforcement learning, and genetic algorithms – can lead to more accurate and resilient multi-market prediction. Each approach has its strengths and capabilities, and their synergistic use can significantly enhance the ability to adapt to changing market conditions and make more informed decisions in the world of financial investments.

Once the categories of models to be combined (or made to compete) have been decided, there are three main ways to use them:

  1. Ensemble Learning: Ensemble learning is a technique that combines the results of different machine learning models to obtain an overall more accurate prediction. For example, regression models, neural networks, and decision trees can be used, and their predictions aggregated.
  2. Cascade Approach: In this approach, models are organized in a cascade, where the result of one model influences or guides the output of another. This can be useful for further refining predictions and reducing the margin of error.
  3. Majority Vote: A majority-vote approach can also be adopted, where each model makes a prediction and the final prediction is determined by the majority vote. This method is particularly effective when the models are heterogeneous (a minimal sketch follows this list).
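For the majority-vote mode, scikit-learn’s VotingClassifier offers a compact illustration: three heterogeneous models vote on a binary up/down label. The synthetic features and labels below are assumptions made for the example, not data from the article.

```python
# Minimal hard-voting ensemble of heterogeneous classifiers (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 8))                      # e.g. 8 engineered features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)  # up/down

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

vote = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=4)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",                                 # majority vote on class labels
)
vote.fit(X_tr, y_tr)
print("test accuracy:", vote.score(X_te, y_te))
```

With voting="soft", the same object would average predicted probabilities instead, which is closer to the ensemble-learning mode in item 1.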

The Role of Data

The success of these models largely depends on the quality and quantity of the data used. Historical financial data is crucial for training and validating the models. However, when possible, it’s also important to consider real-time data and financial news sources to adapt to evolving market conditions.

Processing matrices of heterogeneous data so that they can be used as input for machine learning models is a common challenge. Heterogeneous data are data that come from different sources or contain different types of variables (e.g., numerical, categorical, textual). Here are the fundamental steps for processing and preparing heterogeneous data for machine learning (a compact sketch of several of these steps follows the list):

  1. Data Collection and Integration
    – Collect all data from various sources and ensure they are accessible in a unified data structure.
    – Resolve any inconsistencies in data formats or units of measurement.
  2. Handling Missing Data
    – Identify and manage missing values appropriately. This may involve imputing missing data or removing affected rows or columns.
  3. Encoding Categorical Variables
    – Categorical variables need to be converted into a usable numeric form. This can be done through encoding techniques such as one-hot encoding or ordinal encoding.
  4. Normalization or Standardization of Numeric Variables
    – To enable better convergence of machine learning algorithms, it is often useful to normalize or standardize numeric variables so that they have a common scale.
  5. Creating Additional Features
    – Sometimes, model performance can be improved by creating new features from existing data. This process is known as feature engineering.
  6. Data Transformation
    – In some cases, it may be necessary to apply data transformations to make them more suitable for use in models. For example, log-transform may be used to make data more similar to a normal distribution.
  7. Feature Selection
    – When dealing with a large number of features, feature selection techniques can be used to identify the most relevant variables for the problem.
  8. Data Partitioning
    – Split the data into a training set and a test set to evaluate model performance. It is important to maintain consistent partitioning across different data sources.
  9. Cross-Validation
    – Use cross-validation to assess the model on multiple subsets of the data and ensure that it can generalize well.
  10. Model Selection
    – Choose from a variety of machine learning models that are most suitable for the problem and the heterogeneous data provided as input.
  11. Model Training and Optimization
    – Train the model on the data and optimize its parameters to maximize performance.
  12. Model Evaluation
    – Evaluate model performance using appropriate metrics and compare results with expectations.
  13. Continuous Updating
    – Keep the data up to date and align the model with new information.
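As an illustration, here is a compact sketch of steps 2 (missing data), 3 (categorical encoding), 4 (standardization), and 10–11 (model selection and training) using scikit-learn’s Pipeline and ColumnTransformer. The column names, the toy DataFrame, and the choice of logistic regression are assumptions made for the example, not part of the article.

```python
# A compact, hypothetical preprocessing pipeline for heterogeneous data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy heterogeneous dataset (columns are invented for illustration).
df = pd.DataFrame({
    "return_1d": [0.010, -0.020, None, 0.003],      # numeric, one missing value
    "volume": [1e6, 2e6, 1.5e6, 8e5],               # numeric
    "sector": ["tech", "energy", "tech", "bank"],   # categorical
    "label": [1, 0, 1, 0],                          # e.g. next-day up/down
})

numeric = ["return_1d", "volume"]
categorical = ["sector"]

preprocess = ColumnTransformer([
    # Step 2 (imputation) + step 4 (standardization) for numeric columns.
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    # Step 3 (one-hot encoding) for categorical columns.
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df[numeric + categorical], df["label"])   # steps 10-11 in miniature
```

The same Pipeline object can then be passed to cross-validation utilities for step 9, keeping all preprocessing inside the validation loop to avoid leakage.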

The Present of Alternative Data

Bringing together all the elements mentioned so far, it is possible to incorporate some or all of the following categories:

  1. Supervised Machine Learning Models
  2. Unsupervised Machine Learning Models
  3. GAM Allocations Generated Through Genetic Algorithms
  4. Trading Systems Generated Through Reinforcement Learning

The categories above make it possible to maintain a sufficient level of diversification among models, which are trained on different datasets (varying the depth of information and the number of features). This ensures good heterogeneity of outputs, generated in a controlled environment where every single element of the project pipeline is known. This approach differs from a purely ‘blind’ wisdom-of-the-crowd approach, in which the development of individual models is delegated to third parties. While something might theoretically be lost in terms of model variety (a single team has limited resources and skills), the advent of Self-Generating AI Systems (AI systems capable of autonomously generating new datasets from known ones, or new variations of machine learning models by combining the state of the art of known models) has once again changed the game, allowing new solutions to be explored. It should be noted that both approaches rely on statistical criteria and may not always be accurate.

For example, it is possible to create combined probability trend classifiers that return five different scenarios for each asset for the following time period (as shown in the figure).

But how can this information be leveraged?

Since these are alternative data, they can be used both to enhance your machine learning model (to improve the accuracy of your results) and to feed your operational asset allocation algorithm. It is also possible to run simulations that consider only the class with the highest associated probability for each asset, assessing any potential improvement in terms of expected risk.
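As a minimal sketch of the ‘highest-probability class’ simulation just described: the five scenario labels, the assets, and the probability matrix below are invented for illustration, not output from the article’s models.

```python
# Picking the most likely of five scenarios per asset (illustrative only).
import pandas as pd

scenarios = ["strong down", "down", "flat", "up", "strong up"]
assets = ["SPY", "TLT", "GLD"]    # hypothetical tickers

# Hypothetical classifier output: one probability row per asset, summing to 1.
proba = pd.DataFrame(
    [[0.05, 0.10, 0.20, 0.40, 0.25],
     [0.15, 0.30, 0.35, 0.15, 0.05],
     [0.10, 0.15, 0.40, 0.25, 0.10]],
    index=assets, columns=scenarios,
)

best = proba.idxmax(axis=1)       # class with the highest probability per asset
print(best)

# E.g. restrict the allocation to assets whose most likely scenario is bullish.
longs = best[best.isin(["up", "strong up"])].index.tolist()
print("candidate longs:", longs)
```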

In the example shown in the figure, you can see the effect of applying asset allocation to classes in a basket with a strong bullish bias, as in the case under examination: expected risk is limited in exchange for a loss of profit (which can be seen as paying an insurance premium against risk).

However, it should be noted that this type of architecture can increase the probability of correctly identifying the positioning of each asset only by a limited (though valuable) percentage compared to random allocation, since it cannot predict exogenous factors not contained in the data itself. As a reference, the erosion percentage produced by this type of architecture on unknown (out-of-sample) data varies between 5% and 10% compared to random allocation.

For further information you can write to info@gandalfproject.com.

Giovanni Trombetta
Founder – Head of R&D
Gandalf Project

Saifer Goldman
Generative AI Expert System
