Traders' roundtable discussion: the issue of optimization in research

We were having a traders’ roundtable discussion on the topic of researching potential trading systems and the issue of optimization came up. This is a very important topic for traders who want to apply a systematic approach to trading markets. Here are some of the highlights of that discussion for you to consider as you prepare your trading strategies.

Typically, when we think of optimization for a trading system, we are looking at a process that incorporates multiple variables: parameters with a range of possible settings, plus perhaps a number of market filters or conditions, which, taken together with an exit strategy, give us a multitude of ways to trade a particular concept or idea.
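
To make that concrete, here is a minimal sketch, in Python, of how such a system's moving parts might be declared. Every name in it (ma_fast, atr_stop, the filter labels) is a hypothetical placeholder, not a reference to any system discussed at the roundtable:

```python
# A hypothetical system specification: entry parameters, market filters,
# and an exit parameter, each with the settings a trader might test.
from dataclasses import dataclass, field

@dataclass
class SystemSpec:
    # each parameter maps to the candidate settings worth testing
    parameters: dict = field(default_factory=lambda: {
        "ma_fast":  [5, 10, 20],      # fast moving-average lookback
        "ma_slow":  [50, 100, 200],   # slow moving-average lookback
        "atr_stop": [1.5, 2.0, 3.0],  # exit: stop distance in ATR units
    })
    # market filters/conditions that gate entries on or off
    filters: tuple = ("trend_up", "volatility_below_90th_pct")

spec = SystemSpec()
n_combinations = 1
for settings in spec.parameters.values():
    n_combinations *= len(settings)
print(f"{n_combinations} parameter combinations before filters")  # 27
```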

It normally begins with an idea the trader has, based on insights from theory or from reflective practice, where he believes the system gives a persistent advantage over the average market return of simply buying and holding. It is also possible, however, that the edge may come from a brute-force data-mining operation that finds a statistical edge in some combination of market conditions. In other words, the insight comes from the result of massive computation rather than from an intuitive or academic insight.

In either case, what we have is a system of multiple components, each of which can vary, and on an initial pass with middle-of-the-road parameter settings we find a persistent edge in multiple markets with statistical significance. Human nature being what it is, we would want to start testing different parameter settings for each component in order to determine the best mix for the most robust return, and to find which of the parameters have the most power to influence the results. In statistics, this kind of search for the most influential components is the territory of techniques such as factor analysis and principal component analysis.
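
A hedged sketch of what that parameter sweep might look like. The backtest() function here is a stand-in that returns random numbers; a real implementation would simulate the rules over historical data. The spread-of-means comparison at the end is one crude way to ask which parameter moves results the most, not a substitute for formal factor analysis:

```python
# Exhaustive sweep over a parameter grid, recording a score per combination.
import itertools
import random

random.seed(0)

def backtest(params):
    # stand-in for a real simulation over historical data; returns a
    # fake annualized excess return so the sweep has something to rank
    return random.gauss(0.05, 0.10)

grid = {
    "ma_fast":  [5, 10, 20],
    "ma_slow":  [50, 100, 200],
    "atr_stop": [1.5, 2.0, 3.0],
}
results = []
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    results.append((backtest(params), params))

# crude sensitivity check: for each parameter, compare the spread of
# mean returns across its settings -- a wide spread suggests influence
for name, settings in grid.items():
    per_setting = len(results) / len(settings)  # each setting appears 9x
    means = [
        sum(r for r, p in results if p[name] == s) / per_setting
        for s in settings
    ]
    print(f"{name}: spread of mean returns = {max(means) - min(means):.4f}")
```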

In theory, you would want to find the absolute maximum return by finding the optimum setting for each of the possible parameters and then take that into the market to begin trading. Taken to an extreme, however, this produces the phenomenon of curve fitting, or over-optimization. What you can end up with is a system that would be perfect for the unique data set used in the test. The problem, of course, is that the future may never show you that same data set again, and your over-optimized system will underperform, much to your surprise.
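
A synthetic illustration of the trap: even on pure noise, the best of several dozen parameter settings will usually look profitable in-sample. The momentum rule and every number below are invented for the demonstration:

```python
# Curve fitting demonstrated on random data with no real edge.
import random

random.seed(42)
noise = [random.gauss(0, 0.01) for _ in range(1000)]  # fake daily returns

def rule_return(returns, lookback):
    # toy momentum rule: hold the next bar when the trailing sum is positive
    total = 0.0
    for t in range(lookback, len(returns) - 1):
        if sum(returns[t - lookback:t]) > 0:
            total += returns[t + 1]
    return total

best = max(range(2, 60), key=lambda lb: rule_return(noise, lb))
print("best lookback:", best)
print("in-sample P&L:", round(rule_return(noise, best), 3))
# a positive figure here is an artifact of searching 58 settings on noise
```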

The usual response to this phenomenon is to conduct testing with out-of-sample data: design the system on one data set, refine it to a certain degree, and then test it on a completely new set of market data to see if the edge persists. If your systems development practice finds you always discarding your systems after the out-of-sample test, that is probably a symptom of over-optimization.
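
A sketch of that discipline, again on synthetic data: choose the parameter on the design (in-sample) slice, then score it untouched on the later (out-of-sample) slice. The toy rule repeats the one above so the block runs on its own:

```python
# In-sample / out-of-sample split: optimize on one slice, confirm on another.
import random

random.seed(7)
returns = [random.gauss(0.0002, 0.01) for _ in range(2000)]  # synthetic

def evaluate(series, lookback):
    # toy momentum rule: hold the next bar when the trailing sum is positive
    pnl = 0.0
    for t in range(lookback, len(series) - 1):
        if sum(series[t - lookback:t]) > 0:
            pnl += series[t + 1]
    return pnl

split = int(len(returns) * 0.7)
design, confirm = returns[:split], returns[split:]  # IS / OOS slices

best = max(range(2, 60), key=lambda lb: evaluate(design, lb))
print("chosen lookback:", best)
print("in-sample P&L  :", round(evaluate(design, best), 3))
print("out-of-sample  :", round(evaluate(confirm, best), 3))
# if the out-of-sample figure collapses toward zero, the in-sample
# result was curve fit -- which is exactly what the test is for
```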

In practice, then, we want to strike a trade-off: robust performance, confirmed by out-of-sample testing, that yields a persistent advantage across multiple market conditions, without over-tuning the system for one ideal set of conditions.
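
One common way to express that trade-off in code (a general practice, not necessarily what the roundtable prescribed) is to score each setting by the median of its grid neighborhood, so a lone spike loses to a plateau of consistently good neighbors. All the scores below are hypothetical:

```python
# Prefer a parameter plateau over a single spike on the performance grid.
from statistics import median

def plateau_score(scores, i, width=1):
    # median of the setting's score and its immediate grid neighbors
    lo, hi = max(0, i - width), min(len(scores), i + width + 1)
    return median(scores[lo:hi])

# hypothetical backtest return for each setting of one parameter
scores = [0.10, 0.20, 0.90, 0.10, 0.45, 0.50, 0.48, 0.20]

peak = max(range(len(scores)), key=lambda i: scores[i])
robust = max(range(len(scores)), key=lambda i: plateau_score(scores, i))
print("peak setting  :", peak)    # 2 -- a 0.90 spike between 0.20 and 0.10
print("robust setting:", robust)  # 5 -- the middle of a 0.45-0.50 plateau
```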

A way to keep this systematic approach in tune is to continue to monitor performance in a feed-forward manner: examine actual trading results and compare them against the test and confirmation performance curves.
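
A minimal monitoring sketch, assuming you have the backtest's expected per-period mean and standard deviation of returns; the threshold and the live figures are illustrative only:

```python
# Flag live results that drift outside a tolerance band around the
# backtest's expectation, using a simple z-score of the live mean.
import math

def performance_check(live_returns, expected_mean, expected_std, z_limit=2.0):
    n = len(live_returns)
    live_mean = sum(live_returns) / n
    # z-score of the live mean against the backtest's expected distribution
    z = (live_mean - expected_mean) / (expected_std / math.sqrt(n))
    status = "OK" if abs(z) <= z_limit else "REVIEW: live diverges from test"
    return z, status

live = [0.004, -0.002, 0.001, -0.006, 0.0, -0.003, 0.002, -0.005]
z, status = performance_check(live, expected_mean=0.002, expected_std=0.01)
print(f"z = {z:.2f} -> {status}")
```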

The bottom line: the more you rely on automated trading systems, the more important your research and validation process becomes, since you will not have an experienced trader's override to keep you from going off the deep end.
