
Deep learning and stock trading

March 16, 2017

In a recent study, researchers at the School of Business and Economics at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) have shown that algorithms based on artificial intelligence can make profitable investment decisions. When applied to the S&P 500 constituents from 1992 to 2015, their stock selections generated double-digit annual returns, with the highest profits made in times of financial turmoil. The findings were recently published in the European Journal of Operational Research (EJOR), a leading outlet in the field of operational research and decision making.

In March 2016, Lee Sedol -- one of the best human players of the Asian board game Go -- lost against AlphaGo, a software developed by Google DeepMind. Compared to chess, Go is far more complex and has long been considered an Everest of artificial intelligence research. The driving force behind such achievements are computer programs loosely inspired by how biological brains work: they learn from examples and independently extract relationships from millions of data points. 'Artificial neural networks are primarily applied to problems where solutions cannot be formulated with explicit rules,' explains Dr. Christopher Krauss of the Chair for Statistics and Econometrics at FAU. 'Image and speech recognition -- for example Apple's Siri -- are typical fields of application. But deep learning is also becoming increasingly relevant in other domains, such as weather forecasting or the prediction of economic developments.'

Analysing capital market data

The international team around Dr. Christopher Krauss -- including Xuan Anh Do (FAU) and Nicolas Huck (ICN Business School, France) -- was the first to apply a selection of state-of-the-art techniques from artificial intelligence research to a large-scale set of capital market data. 'Equity markets exhibit complex, often non-linear dependencies,' says Krauss. 'However, when it comes to selecting stocks, established methods mainly model simple relationships. For example, the momentum effect only considers a stock's return over the past months and assumes that this performance will continue in the months to come. We saw potential for improvement.' To find out whether automated learning approaches perform better than a naïve buy-and-hold strategy, the researchers studied the S&P 500 index, which comprises the 500 leading US stocks. For the period from 1992 to 2015, they generated predictions for each individual stock for every single trading day, leveraging deep learning, gradient boosting, and random forests.
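The momentum effect mentioned above can be made concrete with a toy sketch. This is not the study's code; the prices and the two-stock long/short split are made up purely for illustration of ranking stocks by trailing return and betting on continuation.

```python
# Toy illustration of a momentum baseline: rank stocks by their
# trailing return and assume winners keep winning.

def trailing_return(prices, lookback):
    """Simple return over the last `lookback` observations."""
    return prices[-1] / prices[-1 - lookback] - 1.0

# Hypothetical month-end prices for four stocks over 13 months.
prices = {
    "AAA": [100, 101, 103, 104, 106, 108, 110, 113, 115, 118, 120, 123, 126],
    "BBB": [50, 50, 49, 48, 48, 47, 47, 46, 46, 45, 45, 44, 44],
    "CCC": [80, 82, 81, 83, 85, 84, 86, 88, 87, 89, 91, 90, 92],
    "DDD": [30, 29, 30, 31, 30, 31, 32, 31, 32, 33, 32, 33, 34],
}

# 12-month momentum signal for each stock.
momentum = {tkr: trailing_return(p, 12) for tkr, p in prices.items()}

# Go long the top performers and short the laggards.
ranked = sorted(momentum, key=momentum.get, reverse=True)
long_leg, short_leg = ranked[:2], ranked[-2:]
print(long_leg, short_leg)
```

The point of the study is that this signal captures only one simple, linear-in-spirit relationship, whereas the machine learning models can combine many such price-based features non-linearly.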

Outperformance with machine learning

Each of these methods was trained on approximately 180 million data points. In the course of this training, the models learned a complex function describing the relationship between price-based features and a stock's future performance. The results were astonishing: 'Since the year 2000, we have observed statistically and economically significant returns of more than 30 per cent per annum. In the nineties, results were even higher, reflecting a time when our machine learning approaches had not yet been invented,' adds Krauss. These results pose a serious challenge to the efficient-market hypothesis. Returns were particularly high during times of financial turmoil, e.g. the collapse of the dot-com bubble around the year 2000 and the global financial crisis of 2008/2009. Dr. Krauss: 'Our quantitative algorithms have turned out to be particularly effective at such times of high volatility, when emotions dominate the markets.'
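To give a feel for how training examples on this scale can arise, the sketch below pairs, for each stock and day, price-based features (returns over several lookback horizons) with a binary label indicating whether the stock beat the cross-sectional median return on the following day. The horizons, prices, and label definition here are simplified assumptions for illustration, not the study's exact specification.

```python
from statistics import median

def features(prices, t, horizons=(1, 2, 5)):
    """Returns over several past horizons, measured at day t."""
    return [prices[t] / prices[t - h] - 1.0 for h in horizons]

def labels(price_table, t):
    """1 if a stock's next-day return beats the cross-sectional median."""
    next_ret = {tkr: p[t + 1] / p[t] - 1.0 for tkr, p in price_table.items()}
    med = median(next_ret.values())
    return {tkr: int(r > med) for tkr, r in next_ret.items()}

# Hypothetical daily prices for four stocks.
price_table = {
    "AAA": [100, 102, 101, 104, 107, 106, 110],
    "BBB": [50, 49, 50, 49, 48, 49, 48],
    "CCC": [80, 81, 83, 82, 84, 86, 85],
    "DDD": [30, 31, 30, 32, 31, 33, 34],
}

t = 5  # one training day; repeating this over 500 stocks and
       # thousands of trading days yields millions of examples
X = {tkr: features(p, t) for tkr, p in price_table.items()}
y = labels(price_table, t)
```

Any of the three model families named above could then be fit to such (X, y) pairs; the complex function the article describes is the learned mapping from these return-based features to next-day outperformance.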

Deep learning still has potential

However, Dr. Krauss lowers expectations: 'During the last years of our sample period, profitability decreased and even became negative at times. We assume that this decline was driven by the rising influence of artificial intelligence in modern trading, enabled by increasing computing power as well as by the popularization of machine learning.' Still, the researchers agree that deep learning has significant potential: 'Currently, we are working on very promising follow-up projects with far larger data sets and very deep network architectures that have been specifically designed for identifying temporal dependencies,' says Krauss. 'First results already show significant improvements in prediction accuracy, also in recent years.'
The results of the study were published under the title "Deep neural networks, gradient-boosted trees, random forests: Statistical arbitrage on the S&P 500" in the European Journal of Operational Research.

University of Erlangen-Nuremberg

Best Science Podcasts 2018

We have hand picked the best science podcasts for 2018. Sit back and enjoy new science podcasts updated daily from your favorite science news services and scientists.
Now Playing: TED Radio Hour

Circular
We're told if the economy is growing, and if we keep producing, that's a good thing. But at what cost? This hour, TED speakers explore circular systems that regenerate and re-use what we already have. Guests include economist Kate Raworth, environmental activist Tristram Stuart, landscape architect Kate Orff, entrepreneur David Katz, and graphic designer Jessi Arrington.
Now Playing: Science for the People

#504 The Art of Logic
How can mathematics help us have better arguments? This week we spend the hour with "The Art of Logic in an Illogical World" author, mathematician Eugenia Cheng, as she makes her case that the logic of mathematics can combine with emotional resonance to allow us to have better debates and arguments. Along the way we learn a lot about rigorous logic using arguments you're probably having every day, while also learning a lot about our own underlying beliefs and assumptions.