The profitability of technical analysis in a high frequency setting

By Christian Felde | April 8, 2011

UPDATE: The report has been completed, available here.

One busy month behind me, and another one up next. I'm currently doing exam revision and have had some rather time-consuming coursework, which has sadly kept me from doing anything here on this blog. So, to make up for a month of no blog activity, I'll set aside some time for it now.

This blog post is going to be about my upcoming dissertation. I'm writing about the profitability of technical analysis in a high frequency setting, and I'm also going to be looking at any possible link between this and volatility. I'm very much interested in any feedback, tips, information, or whatever else you think you might want to contribute. Any specific papers you think I should read? Other sources of interesting information? Please post feedback as a comment here, or contact me directly.

I’m copying and pasting in some of what I’ve written to introduce my dissertation project. A PDF is available here with the complete content, also containing the bibliography.

Introduction

Technical analysis consists of several different types of analysis and interpretation based on historical data. Some of these techniques are difficult or impossible to describe mathematically or algorithmically, making them unsuitable or difficult to test empirically. Pring (2002, p. 2) gives the following definition:

‘The technical approach to investment is essentially a reflection of the idea that prices move in trends that are determined by the changing attitudes of investors toward a variety of economic, monetary, political, and psychological forces. The art of technical analysis, for it is an art, is to identify a trend reversal at a relatively early stage and ride on that trend until the weight of the evidence shows or proves that the trend has reversed.’

As described in more detail later, one simple and popular method is looking at moving averages, which essentially act as a low pass filter. The filter removes higher frequency “noise”, allowing the investor to more clearly identify the lower frequency trend. Another popular group of indicators is the oscillators, indicators that swing within a given band. Oscillators are used to discover short-term overbought or oversold conditions.

These systems use historical data, often just historical prices, as a basis for their conclusion about future price movement or price level. This is in direct conflict with the efficient market hypothesis, which states that markets are informationally efficient, and hence that all available information is reflected in the current price (Fama 1970). In an efficient market, any future price would be independent of any current or past price.

Three different levels of market efficiency are defined (Jensen 1978):

  1. The Weak Form of the Efficient markets hypothesis, in which the information set Ωt is taken to be solely the information contained in the past price history of the market at time t.
  2. The Semi-strong Form, where Ωt is taken to be all information that is publicly available at time t.
  3. The Strong Form, in which Ωt is taken to be all information known to anyone at time t.

Technical analysis, as defined here, would be a weak form test as we’re only concerned with past prices.

Other models would be needed to explain how technical analysis could be profitable, given that such profitability would imply markets are not efficient even at the weak-form level. Two such alternative models are noisy rational expectation models and behavioural (or feedback) models.

Noisy rational expectation models (Treynor and Ferguson 1985; Brown and Jennings 1989; Grundy and McNichols 1989; Blume, Easley, and O’Hara 1994) argue that there’s asymmetric information among market participants. This implies that there’s a delay between when information is made available and when it is fully reflected in the market. This delay breaks the independence between successive asset returns, as information is absorbed over a given time period instead of instantaneously. This would therefore allow trends or patterns to form, something which can be exploited by technical analysis.

Behavioural models focus more on irrational behaviour, where the underlying value is to some extent disconnected from the current price (Shiller 2003), which also helps to describe stock market bubbles. A behavioural model consists of two main types of participants: arbitrageurs (defined as investors who form fully rational expectations about security returns) and noise traders (investors who irrationally trade on noise as if it were information) (Black 1986). Noise traders, by following a positive feedback strategy (buy when prices go up, sell when prices go down), can substantially affect the price, thereby contributing to trend formation (De Long et al. 1990a; De Long et al. 1990b). This represents a situation where technical analysis, by its very existence and extensive usage, could be self-fulfilling.

Closely related to positive feedback effects, herding behaviour among short-horizon traders can also lead to informational inefficiency, as demonstrated by Froot, Scharfstein, and Stein (1992). As with the behavioural models, this type of model argues that short-term investors would benefit from technical analysis as long as it's widely adopted, even if there is no fundamental connection between it and the underlying asset.

With these alternative models, technical analysis, when applied correctly, could provide an investor with a feasible trading strategy. Numerous papers have empirically tested the profitability of technical analysis, and as reported by Park and Irwin (2004) this number has increased dramatically during the last decade. They group these papers into two groups, early and modern. Modern studies are defined as those that include a more advanced and extended analysis of the results, which may include transaction costs, out-of-sample testing, statistical tests and/or data snooping tests. Among a total of 92 modern studies, 58 found profitable results, 24 obtained non-profitable results, while the remaining 10 indicated mixed results.

Many studies still suffer from issues like ex post parameter selection, data snooping, or insufficient risk or transaction cost analysis. These could significantly affect the conclusions and need to be addressed. Relevant papers to look at are Brock, Lakonishok, and LeBaron (1992) and Sullivan, Timmermann, and White (1999), who extend the first paper.

Using high frequency data is interesting for several reasons. First of all, it's reasonable to assume that many of the market anomalies (relative to the efficient market hypothesis) described above are only observable at higher granularity. Also, as described by Cooper, Cliff and Gulen (2008), there are significant differences in market behaviour during trading and non-trading hours. Their finding of higher volatility during trading periods vs. non-trading periods is interesting, as there are some claims linking higher volatility with increased profitability for technical analysis based strategies (see for instance Barr 2010).

Methodologies

In brief the steps used when performing the test would be:

  1. Select a set of parameters for a given technical indicator.
  2. Apply the indicator to an in-sample set of data, executing trades as described by the strategy.
  3. Select the most appropriate parameters based on in-sample performance.
  4. Apply the indicator with selected parameters on a successive out-of-sample data set and measure performance.

In brief, performance is defined as a measure relative to a benchmark, with adequate risk and transaction cost penalties. The significance of that performance also plays an important part, through the data snooping tests defined later.

Two types of technical indicators will be analysed: moving averages and price channels. These are selected as they fit with some of the underlying market structures outlined earlier.

Moving averages are very simple indicators, where the trading strategy normally consists of two moving averages with different periods. They are among the most popular indicators used for trend following strategies (Taylor and Allen 1992; Lui and Mole 1998). There are several variants of this indicator regarding how the averaging is performed, namely simple moving average (SMA), weighted moving average (WMA) and exponentially weighted moving average (EWMA). Please see the PDF for formulas and strategy definitions.
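As a minimal sketch of how such a crossover signal could be computed (assuming one-minute closing prices in a plain array; the class and method names are illustrative, not taken from the dissertation):

```java
// Dual moving average crossover: long when the fast average is above the slow
// average, short when it is below. Simple moving average (SMA) variant only.
final class MovingAverageCross {

    static double sma(double[] prices, int end, int period) {
        double sum = 0.0;
        for (int i = end - period + 1; i <= end; i++) {
            sum += prices[i];
        }
        return sum / period;
    }

    /** Returns +1 (long), -1 (short) or 0 (no signal) at bar t. */
    static int signal(double[] prices, int t, int fast, int slow) {
        if (t < slow - 1) return 0; // not enough history yet
        double fastMa = sma(prices, t, fast);
        double slowMa = sma(prices, t, slow);
        if (fastMa > slowMa) return 1;
        if (fastMa < slowMa) return -1;
        return 0;
    }
}
```

The WMA and EWMA variants only change the averaging function; the signal logic stays the same.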

The other indicator is the price channel, known as the Donchian breakout rule (Donchian 1957), or ‘trading range break’ in Brock et al. (1992). This indicator gives a long signal when the latest price is greater than the maximum of the previous N prices. Likewise, a short signal is given when the current price is lower than the minimum of the previous N prices. Again, please see the PDF for more details.
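A corresponding sketch of the breakout signal, under the same assumptions as above:

```java
// Donchian breakout / trading range break: long on a new N-bar high,
// short on a new N-bar low.
final class DonchianBreakout {

    /** Returns +1 (long), -1 (short) or 0 (no signal) at bar t. */
    static int signal(double[] prices, int t, int n) {
        if (t < n) return 0; // need N previous bars
        double high = Double.NEGATIVE_INFINITY;
        double low = Double.POSITIVE_INFINITY;
        for (int i = t - n; i < t; i++) {
            high = Math.max(high, prices[i]);
            low = Math.min(low, prices[i]);
        }
        if (prices[t] > high) return 1;
        if (prices[t] < low) return -1;
        return 0;
    }
}
```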

Given these indicators, we can now produce a set of strategies by varying their parameters. For each parameter one would use a range of relevant values, stepping through the range with a given step size. In addition, any constraints on the parameters (for example, the fast moving average period being shorter than the slow one) must of course be maintained.
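For illustration, enumerating the moving average parameter pairs could look like this (the range and step values are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

// Enumerate (fast, slow) period pairs over [min, max] with a fixed step,
// keeping only pairs that satisfy the fast < slow constraint.
final class ParameterGrid {

    static List<int[]> movingAveragePairs(int min, int max, int step) {
        List<int[]> grid = new ArrayList<>();
        for (int fast = min; fast <= max; fast += step) {
            for (int slow = fast + step; slow <= max; slow += step) {
                grid.add(new int[] {fast, slow}); // fast < slow by construction
            }
        }
        return grid;
    }
}
```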

These parameters would then be optimized on a given set of time-series data. It's at this point that transaction cost and risk first enter the picture. When selecting the optimal parameters, these should not only reflect absolute returns, but include the cost of each transaction. Given a fixed calculation method for the transaction cost of each trade, the optimization routine would be penalized more as the number of trades increases. Risk comes in as a factor in how we score the overall result, and to include this riskiness the Sharpe ratio qualifies as a good measure. A reference benchmark is used, and strategy performance would be measured as defined in Griffioen (2003), with more details in the PDF. In the intraday high frequency setting portrayed here it doesn't make sense to use a buy and hold benchmark with overnight positions. The reason is simply that a high frequency intraday investor is free to choose what to do with his/her money in non-trading hours, and could therefore utilize a buy and hold strategy overnight. The reference benchmark should therefore only represent a buy and hold position during intraday trading hours.
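As an illustration (not necessarily the exact definition used in the PDF, which follows Griffioen 2003), one simple way to write the net-of-cost strategy return and the resulting Sharpe ratio relative to the benchmark is:

$$
r^{net}_t = pos_{t-1}\, r_t - c\,\lvert pos_t - pos_{t-1}\rvert,
\qquad
SR = \frac{\overline{r^{net}_t - r^{bench}_t}}{\sigma\!\left(r^{net}_t - r^{bench}_t\right)},
$$

where $pos_t \in \{-1, 0, 1\}$ is the position given by the strategy, $r_t$ the asset return, $c$ the proportional transaction cost per unit of position change, and $r^{bench}_t$ the intraday buy and hold benchmark return.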

A potential issue with the performance formula lies in the normality assumption behind using the standard deviation as the risk measure. If needed, a suitable replacement would be the modified Sharpe ratio as defined by Gregoriou and Gueyie (2003). With this modification both the skewness and kurtosis of the return distribution are also taken into account.
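For reference, a common statement of the Cornish-Fisher based modification is the following (the exact form used would follow Gregoriou and Gueyie 2003):

$$
MSR = \frac{\bar{r}_p - r_f}{\mathrm{MVaR}},
\qquad
\mathrm{MVaR} = -\left(\mu + z_{CF}\,\sigma\right),
$$
$$
z_{CF} = z_c + \frac{(z_c^2 - 1)S}{6} + \frac{(z_c^3 - 3 z_c)K}{24} - \frac{(2 z_c^3 - 5 z_c)S^2}{36},
$$

where $z_c$ is the (negative) standard normal quantile at the chosen confidence level, and $S$ and $K$ are the skewness and excess kurtosis of the return distribution.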

Given all of this, a method for scoring in-sample strategy performance has been outlined. Nothing has been said so far about the persistence of the selected parameters. To address this, actual strategy performance should be measured on out-of-sample data. Ex hypothesi, one could assume a clustering of “market mood”. That would imply that the out-of-sample period should consist of the immediately following data, with the in-sample selected parameters carried over to that successive out-of-sample period. Testing would thereby consist of a moving window of in-sample historical data, followed by a smaller out-of-sample data set for persistence testing. This window would be moved forward with a defined time step. It would also be interesting to define an optimal time step value based on a combination of parameter persistency and calculation cost.
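A rough sketch of this walk-forward procedure (the names Strategy, evaluate and run are placeholders, not an existing API, and the candidate list is assumed non-empty):

```java
import java.util.ArrayList;
import java.util.List;

// Walk-forward testing: pick the best candidate strategy on an in-sample
// window, score it on the immediately following out-of-sample window,
// then roll both windows forward by a fixed step.
final class WalkForward {

    interface Strategy {
        double evaluate(double[] prices, int from, int to); // e.g. a Sharpe ratio
    }

    static List<Double> run(double[] prices, int inSample, int outSample, int step,
                            List<Strategy> candidates) {
        List<Double> oosScores = new ArrayList<>();
        for (int start = 0; start + inSample + outSample <= prices.length; start += step) {
            int inEnd = start + inSample;
            Strategy best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Strategy s : candidates) { // in-sample parameter selection
                double score = s.evaluate(prices, start, inEnd);
                if (score > bestScore) {
                    bestScore = score;
                    best = s;
                }
            }
            // out-of-sample persistence test with the selected strategy
            oosScores.add(best.evaluate(prices, inEnd, inEnd + outSample));
        }
        return oosScores;
    }
}
```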

Jobson and Korkie (1981) showed that the estimation error of the Sharpe ratio is asymptotically normally distributed. With proper scaling of the Sharpe ratio (based on monitoring frequency) and a defined confidence interval, their formula (in the PDF) could be used as a guide to the minimum number of observations needed in back testing. We would then solve for T (the measurement period) at the given significance level. It would also indicate whether the total out-of-sample data set is sufficiently large for ex post Sharpe ratio calculations.
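As an illustration of the idea (not necessarily the exact formula in the PDF), using the asymptotic variance of the Sharpe ratio estimator one can back out a minimum T for a chosen confidence level and acceptable estimation error ε:

$$
\mathrm{Var}\!\left(\widehat{SR}\right) \approx \frac{1 + \tfrac{1}{2}SR^2}{T}
\quad\Longrightarrow\quad
T \;\ge\; \left(\frac{z_{1-\alpha/2}}{\varepsilon}\right)^2 \left(1 + \tfrac{1}{2}SR^2\right),
$$

with the Sharpe ratio scaled to the one-minute monitoring frequency before applying it.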

Data snooping has previously been mentioned as one important shortcoming of many papers analysing technical analysis. Data snooping (also known as data dredging or data fishing) is the inappropriate use of data mining to uncover misleading relationships in data. White's Reality Check (RC) for data snooping, as described by White (2000), would be used to test for this effect. There are two methods described as part of the framework when performing this test, namely the Monte Carlo Reality Check and the Bootstrap Reality Check. From a computational standpoint the bootstrap variant is preferred, as it is less demanding but produces results of similar quality. This bootstrapping technique resembles the moving block bootstrap, but addresses its lack of stationarity by using blocks of variable length, as described by Politis and Romano (1994); a sketch of the resampling step is shown after the list below. With the in-sample and out-of-sample framework presented here, the reality check is useful in two settings:

  1. Use the test as part of the selection criteria when selecting the optimal in-sample strategy parameters.
  2. Use the test to assess the significance of the optimal strategy parameters when applied to the out-of-sample data sets.
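The resampling step of the stationary bootstrap could look roughly like this (the mean block length, i.e. 1/p, is an assumption to be tuned):

```java
import java.util.Random;

// Stationary bootstrap of Politis and Romano (1994): resample blocks of
// geometrically distributed length (mean 1/p), wrapping around the end of
// the series, to preserve dependence in the returns.
final class StationaryBootstrap {

    static double[] resample(double[] returns, double p, Random rng) {
        int n = returns.length;
        double[] out = new double[n];
        int idx = rng.nextInt(n); // start of the first block
        for (int t = 0; t < n; t++) {
            out[t] = returns[idx];
            if (rng.nextDouble() < p) {
                idx = rng.nextInt(n); // start a new block
            } else {
                idx = (idx + 1) % n; // continue the current block, wrapping around
            }
        }
        return out;
    }
}
```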

Finally, when it comes to linking volatility to profitability, it would be interesting to see if there's any pattern across the in-sample, out-of-sample, and bootstrapped data sets. There are many ways to measure volatility, but given that high frequency data is available, realized volatility (Andersen and Benzoni 2008) would be a natural choice.
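As a small sketch, daily realized volatility from one-minute bars is simply the square root of the sum of squared intraday log returns (assuming the array holds the bar closes for one trading day):

```java
// Daily realized volatility from intraday bar closes.
final class RealizedVolatility {

    static double daily(double[] closes) {
        double sumSq = 0.0;
        for (int i = 1; i < closes.length; i++) {
            double r = Math.log(closes[i] / closes[i - 1]);
            sumSq += r * r; // squared one-minute log return
        }
        return Math.sqrt(sumSq);
    }
}
```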

Data and computational requirements

High frequency US equity data would be used, with high frequency defined as one minute bars. More specifically, only highly liquid stocks would be used, as this removes some of the simulated order fill uncertainty. High liquidity can in part be inferred from high trading volumes, and selecting companies that are part of the S&P 100 and/or NASDAQ 100 to some extent guarantees this.

Performing any extensive analysis on high frequency data can be rather computationally demanding. Given the bootstrap framework used, there would be numerous samples per time series. Luckily these forms of analysis allow for a high degree of parallelism. A system would be developed that performs the full analysis. This includes, per stock and distinct strategy analysed:

  1. Create bootstrapped time series from the original time series, say between 500 and 1000 samples.
  2. Perform trading strategies and tests on each of those time series, with the back testing period length guided by the significance of the Sharpe ratio obtained.
  3. Calculate daily performance measures and confidence intervals, in addition to the realized volatility measure.
  4. Produce summary statistics for all relevant measures.

Even if these forms of analysis permit a high degree of parallelism, it's still in our interest to have high performance code. Using an interpreted language like MATLAB is therefore deemed too inefficient. Java would instead be used for all computationally demanding aspects, as it is a statically typed, compiled language with high performance. In addition to having optimized software, it is also beneficial to distribute the tasks among several worker nodes. Modern infrastructure platforms (typically referred to as cloud computing) would be used for scheduling these resources. Highly relevant infrastructure providers are Amazon Web Services and Rackspace Cloud, to name two.
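Within a single node, the per-stock work parallelises naturally across a thread pool, and the same decomposition carries over to distributing tasks across cloud worker nodes. A minimal sketch, where analyseStock stands in for the full pipeline above:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Run the per-stock analysis tasks in parallel on a fixed-size thread pool.
final class ParallelRunner {

    static void runAll(List<String> symbols) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (String symbol : symbols) {
            pool.submit(() -> analyseStock(symbol));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.DAYS); // wait for all tasks to finish
    }

    static void analyseStock(String symbol) {
        // bootstrap, back-test, performance and volatility measures per stock
    }
}
```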


6 thoughts on “The profitability of technical analysis in a high frequency setting”

  1. Eugene Steele JD CTA

    What we have here is a failure to communicate. HFT is just the same as LFT in the sense of placing an order in the market. In one case you have an advantage as all HFT requires automation you remove errors beyond the programming errors. That would be human error which can then be present in programming. Depending on the method of HFT you are either there first or seeking to in effect do the same thing as running stops. By definition HFT cannot be hand traded and not all market orders are offered to the public. First they may be offered to insiders and done before the public can see them.

    1. Christian Felde

      What’s defined as high frequency trading depends on who you ask. If you look at the FINalternatives July 2009 Technology and Trading Survey responses to the question “What position-holding time qualifies as high-frequency trading?”, these are the results:
      < 1 sec: Just over 50%
      1 sec – 10 min: Just under 65%
      10 min – 1 hour: About 45%
      1 hour – 4 hours: Just under 40%
      4 hours – 1 day: Just under 25%
      There are also some who consider more than 1 day as HFT as well.

  2. Kingsley Jones

    I published a few notes when I was a broker dealing with the interface between TA and economics. It is quite doable. Send you those if you send me your email. Also check out Salih Neftci’s work. He is one of (the very few) economists to understand the right question vis a vis technical signals and linear methods. Engineers who know state space methods or super profitable hedge funds like renaissance know what I am talking about but 99.99 percent of economists are clueless. This is why you never meet wealthy economists 🙂 kingsley

  3. Pingback: Cloud computing makes your service a commodity | Blog of Christian Felde

  4. Pingback: Technical analysis and its dependency on volatility, the report | Blog of Christian Felde
