AI in Financial Markets, Part 3: What’s Your Problem?

August 13, 2020
by Peter Simon · 7 min read

Previously, in parts one and two of this series, we looked at the reasons that modern data science techniques, especially machine learning, are interesting for financial markets participants. We also started to look at why automating machine learning lowers barriers to entry, makes machine learning more acceptable to regulatory functions, and makes experienced quants, strategists, and financial data scientists much more productive and effective in the quest for alpha. In part 4, we describe our recommendations for ensuring that your machine learning models won’t just look great on paper but will actually work on new data in production, and show how DataRobot’s integrated best practices and guardrails can help you with this.

Check out all of the blog series: part 1, part 2, part 4, part 5.

In this part we look at the advantages of automated machine learning in the front office, the importance of problem framing, and how automated machine learning techniques expand the problem space that can be investigated. Plus, why the choice of machine learning algorithm should be seen as just another parameter to search on.

With great power come great non-disclosure agreements

“Any edge has a half-life that is generally pretty short. It really has always been an arms race.”
— Jody Kochansky, Head of Aladdin Product Group, BlackRock

Let’s dig into this last point a bit more: what exactly can advanced users in front office securities industry roles do with this great increase in productivity and effectiveness? For most other industries, and indeed for other functions in the securities industry (cf. the first post in this series), I’d list a number of use cases that we’ve seen work in the business already ¹. In the financial markets, it’s a little different: there is an ongoing arms race to find and exploit edges before they become widely known, are arbitraged out, and lose value. As a result, it’s difficult to discuss in any level of detail the use cases we’ve seen work across the industry, let alone how, as this runs the risk of eroding our clients’ competitive advantage ². And in any case, the innovators and contrarians in the audience might rather do the opposite of what’s proven, as the greater rewards accrue to those who boldly tread virgin territory. 

Instead, let’s take a different tack. Assume that you’re a front-office financial markets professional with an idea of where there may be an inefficiency in the markets that you want to exploit. You’ve got some excellent quants on your team (maybe you’re an excellent quant yourself), but searching for inefficiencies is a laborious process, and good quants and strategists are scarce and expensive (those Ph.D.s don’t grow on trees). It makes sense to automate the repetitive parts of the process and concentrate their time on the tasks that can’t be automated: a technology that enables practitioners to work through more candidate approaches in the same amount of time can be transformational in the right hands. DataRobot is such a technology.


What Can Be Automated?


At its heart, DataRobot is an enterprise AI platform for the automated building of machine learning models to address two very common classes of AI problems:

  • Supervised machine learning. You have a body of historical observations (data), you know various things about them (variables/features) and you know the outcomes of these observations (the target variable). Supervised machine learning comes into its own when it’s valuable to make reasonably accurate predictions of likely outcomes as new observations come in. 
  • Supervised machine learning use cases come in various forms: 
    • classification tasks (answering yes/no questions, or classifying observations into multiple categories) and 
    • regression tasks (predicting numbers); 
  • Supervised machine learning models can be:
    • cross-sectional (observations are independent of each other) or 
    • time-series (observations exhibit time dependencies/serial correlation; many securities industry models fall into this form).
  • Unsupervised anomaly detection. You have a body of historical observations (data), you know various things about them (variables/features), and, as new observations occur, the ability to score how similar or dissimilar they are to the historical observations is valuable. This can also be done cross-sectionally or with a time-series approach.
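
To make these two problem classes concrete, here is a minimal, self-contained sketch on synthetic data. It uses scikit-learn purely as an illustrative stand-in for any machine learning toolkit; the point of this post is precisely that DataRobot automates this model-building step:

```python
# Minimal sketch on synthetic data; scikit-learn stands in for any ML toolkit.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              GradientBoostingRegressor, IsolationForest)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # historical observations and their features
y_reg = X @ rng.normal(size=5)        # a known numeric outcome  -> regression target
y_cls = (y_reg > 0).astype(int)       # a known yes/no outcome   -> classification target

# Supervised machine learning: learn the mapping from features to known outcomes.
clf = GradientBoostingClassifier().fit(X, y_cls)   # classification task
reg = GradientBoostingRegressor().fit(X, y_reg)    # regression task

# Unsupervised anomaly detection: no outcomes at all; score how unusual
# new observations look relative to the historical ones.
iso = IsolationForest(random_state=0).fit(X)
new_obs = rng.normal(size=(5, 5))
print(iso.score_samples(new_obs))     # lower score = more anomalous
```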

In our experience, some 80% of machine learning problems in business can be framed as one of these two types. Financial markets are no exception: a great deal of work goes into predicting various numbers based on what is known at a given point in time. What these numbers represent is almost irrelevant, as long as:

  • there is data available that is a good representation of the important behaviors/factors driving the number(s) to be predicted; and that
  • the relationship between the underlying data and the number to be predicted is sufficiently stable (or at least consistent) that enough data to build a model can be collected; and, of course, that
  • there is value in being able to predict the number(s) of interest ahead of time.

Problem space. The final frontier.

But what does this mean in practice? Maybe it’s best to illustrate using a “toy” quant finance problem that we built for JPMorgan’s DeepFin series and the Open Data Science Conference. We examined the impact of cuts in dividend expectations on subsequent share price performance, using a set of some 50,000 historical examples of downward revisions from global equity markets over the preceding ten years. In conventional quantitative finance terms, this might be framed as a regression: let’s model, say, the three-month forward return with respect to the magnitude of the cut in dividend expectations. Or one might frame the problem as a rule of thumb: what sort of behavior might I expect when dividend expectations for a stock get cut by 10%?


Translating a Problem Statement into the Language of Data Science

[Graphic: translating a problem statement into the language of data science]

The graphic above illustrates some of the questions that could be searched on — the “problem space.”  In our limited, toy example, we searched on the following:

  • Would viewing this as a classification or a regression problem work better? Should we aim to model the return number itself (a regression problem: more precise), or whether the return will exceed or fall below a certain threshold (a classification problem: perhaps a greater chance of success)?
  • In the case of a classification problem, what threshold should we set for the returns? Zero? 5%? 10%? 50%?
  • Which returns are we interested in to begin with? We looked at absolute returns and returns relative to the stock’s country; one could equally have looked at returns relative to industry. (My hypothesis was that by focusing on idiosyncratic risk, i.e. eliminating market factors, the model would have a better chance of success. Alas, that didn’t seem to be how investing worked last decade.)
  • What exactly did we mean by a cut in expectations? Any month-on-month fall, even a fraction of a percent, or did we want to be more specific and only look at falls beyond a certain threshold? (A sketch of how these dimensions translate into code follows below.)
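
To make this concrete, here is a hypothetical sketch of how those search dimensions might be enumerated with pandas. The file name, column names, and thresholds are all illustrative assumptions, not the actual dataset or code from the talk:

```python
# Hypothetical sketch: the file name, column names, and thresholds are
# illustrative assumptions, not the actual dataset or code from the talk.
import itertools
import pandas as pd

df = pd.read_csv("dividend_revisions.csv")   # assumed: ~50k downward revisions

return_columns = ["fwd_3m_return_abs", "fwd_3m_return_vs_country"]  # which returns?
cut_thresholds = [0.0, -0.05, -0.10]              # how big must the cut be?
class_thresholds = [None, 0.0, 0.05, 0.10, 0.50]  # None = regression framing

variants = []
for ret_col, cut, cls in itertools.product(return_columns,
                                           cut_thresholds,
                                           class_thresholds):
    # Keep only revisions at least as severe as the chosen cut threshold.
    subset = df[df["dividend_expectation_change"] <= cut].copy()
    if cls is None:
        subset["target"] = subset[ret_col]                      # regression
        kind = "regression"
    else:
        subset["target"] = (subset[ret_col] > cls).astype(int)  # classification
        kind = f"classification, threshold {cls:.0%}"
    variants.append((ret_col, cut, kind, subset))

print(f"{len(variants)} problem-statement variants to model")
```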

In order to keep the problem statement reasonably compact and tractable, we stopped defining search criteria here. This set already gave us 110 combinations of “super-duper-hyperparameters” to work on before even getting to the question of which machine learning algorithms to consider and use—a decision we happily outsourced to DataRobot’s automated machine learning. We threw some compute at these questions, and used DataRobot’s Python API to build 110 machine learning projects containing over 4,300 candidate machine learning models in the course of a weekend (well, 9 hours). This needed no more than 150 lines of code to generate the full set of models, plus another 120 lines to retrieve the results ³.
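
The pattern behind those lines might look roughly like the following minimal sketch, which uses the DataRobot Python client (the `datarobot` package) and the hypothetical `variants` list from the snippet above. It illustrates the approach rather than reproducing the actual code from the event:

```python
# Minimal sketch of the pattern, assuming the hypothetical `variants` list
# from the previous snippet; not the actual code from the event.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

projects = {}
for ret_col, cut, kind, subset in variants:
    name = f"div-cuts | {ret_col} | cut<={cut} | {kind}"
    project = dr.Project.create(sourcedata=subset, project_name=name)
    # Kick off autopilot on the chosen target; DataRobot searches the space
    # of algorithms for us, so the algorithm is just another parameter.
    project.set_target(target="target", worker_count=4)
    projects[name] = project

# Later: retrieve the leaderboard from each project and keep the best model.
for name, project in projects.items():
    project.wait_for_autopilot()
    best = project.get_models()[0]   # leaderboard order: best validation score first
    print(name, best.model_type, best.metrics)
```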

At this point, JPMorgan’s Ayub Hanif remarked (in a research report summarizing the DeepFin event) that “it cannot be stressed [enough] how complicated this would be for a human to try to build using traditional machine learning coding practices and the length of time it would take to run.”

Other things we could have looked at include:

  • What period to train the models on? Evaluating different modeling periods would allow us to gain insight into the stability of the factors and whether there may be distinct régimes in the data.
  • What return period to use? We assumed three months; other periods could have been equally relevant, or indeed more so.
  • Whether to narrow the focus to certain markets or market cap brackets?

And the list goes on, limited only by the modeler’s imagination and, ultimately, by the available time; but time is a much lighter constraint than it would be in “traditional” quant work. It’s not hard to imagine how a similar approach could be applied to making predictions on many other financial variables of interest. Automated machine learning therefore carries with it the advantages of scale, allowing multiple variants of investor hypotheses on many different topics to be tested with transformational efficiency.

Managing for dumb luck: it’s not selection bias if you follow best practices

But, I hear you say, aren’t you just using automated machine learning to generate hordes of proverbial (robot) monkeys and darts to throw at the financial pages? How can you be confident that a model built and identified in such a way will actually generalize? We’ll examine this more closely in next week’s blog post, when we look at best practices in building such models, the importance of ensuring that they are baked into your machine learning approach, and how DataRobot’s automated machine learning helps to ensure best practices in both machine learning model construction and model validation.


¹ A cynic might even note that “innovation” teams in too many companies seem far too concerned with the question of whether we’ve seen this use case work with their competitors already.
² Never mind the lawyers. Armies of lawyers ready to descend upon us like a ton of bricks should we do anything stupid or unprofessional.
³ In both cases, the bulk of the code dealt not with the actual machine learning, but with iterating over the data, slicing it as required, and actually keeping track of which machine learning projects related to which variants of the problem statement.

About the author
Peter Simon

Managing Director, Financial Markets Data Science

Peter leads DataRobot’s financial markets data science practice and works closely with fintech, banking, and asset management clients on numerous high-ROI use cases for the DataRobot AI Platform.  Prior to joining DataRobot, he gained twenty-five years’ experience in senior quantitative research, portfolio management, trading, risk management and data science roles at investment banks and asset managers including Morgan Stanley, Warburg Pincus, Goldman Sachs, Credit Suisse, Lansdowne Partners and Invesco, as well as spending several years as a partner at a start-up global equities hedge fund. Peter has an M.Sc. in Data Science from City, University of London, an MBA from Cranfield University School of Management, and a B.Sc. in Accounting and Financial Analysis from the University of Warwick.  His paper, “Hunting High and Low: Visualising Shifting Correlations in Financial Markets”, was published in the July 2018 issue of Computer Graphics Forum.
