Forums  > Trading  > When to exit trading strategy  
     

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-04-16 19:26

This question came up in the other thread and I gave it some thought. Most generally, the answer will always be in the nature of the model doing something unusual or unprecedented. But the threshold will have to be, to some extent, arbitrary. For example, one can decide on exceeding the previous max DD by 5%. I can see why some people are not excited about such an approach. I am not too excited myself, mostly because I'm not such a big fan of DDs.

I want to detect the loss of signal based on the relative performance of the strategy against its own history. By performance I mean performance relative to the benchmark (despite my reservation that I may not have one). I also don't want it to be a one-time observation; I prefer to measure it over time.

Before I begin laying out possible implementation details, how does this sound in principle?

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

HitmanH


Total Posts: 462
Joined: Apr 2005
 
Posted: 2018-04-17 00:41
Very sensible.

What I've done is, in % terms, compile some stats from a strategy's backtest - what we expect the worst DD to be, 2nd worst DD, 3rd worst, etc.
Then - when a strategy goes live - you can compare it to that: how is the vol of realised returns looking, DD profiles, Sharpe. What's a 2x 1SD move, a 3x SD move - you know tails are fat, but you can compile a profile of how fat they have been.
Once you have enough data points, you can start subbing out the backtest with the realised results of the strategy itself.

We've started by counting instances of x SD moves in a window and questioning - is there something different in the environment, has the market changed, etc. - rather than turning on/off - but very much the kind of thing you're talking about, I think.

I would say do this for a specific strategy / model - don't do it for a fund or book as a whole...
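A minimal sketch of the SD-move counting above, assuming `backtest_sd` is the return standard deviation estimated from the backtest (names and defaults are illustrative):

```python
import numpy as np

def count_sd_moves(live_returns, backtest_sd, k=2.0, window=20):
    """Trailing-window count of live returns larger than k backtest
    standard deviations; a rising count hints the environment has
    changed relative to the backtest."""
    r = np.asarray(live_returns, float)
    hits = (np.abs(r) > k * backtest_sd).astype(int)
    # rolling sum of outsized moves over the trailing window
    return np.convolve(hits, np.ones(window, dtype=int), mode="valid")
```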

ronin


Total Posts: 340
Joined: May 2006
 
Posted: 2018-04-17 11:49
There is no universal answer. It's also different depending on whether you are running a single strategy or a portfolio of strategies.

But basically, at any point you have to be asking "why am I running this strategy?"

You run some strategies for high returns. You run others for negative beta. You run some others for diversification. Etc.

So you put together a list of targets for your return distribution. Returns like this, volatility like that, correlations like these, etc. Then, every once in a while, test your strategy against those hypotheses. What's the probability that these still hold?

Based on the answers, you make some decisions.
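A toy version of one such hypothesis check - testing whether realised returns are still consistent with a target mean return. This is a sketch using a one-sample t statistic with a normal approximation for the p-value; the function name and inputs are illustrative:

```python
import numpy as np
from math import erf, sqrt

def target_check(realised_returns, target_mean):
    """t statistic and approximate two-sided p-value for the hypothesis
    that the realised mean return still equals the target mean."""
    r = np.asarray(realised_returns, float)
    t = (r.mean() - target_mean) / (r.std(ddof=1) / sqrt(len(r)))
    # normal approximation to the two-sided p-value
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(t) / sqrt(2.0))))
    return t, p
```

A small p-value says the "why am I running this strategy?" target is probably no longer being met.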

I don't want to comment directly on your target vs benchmark - it's up to you how you manage that. My only observation would be that optimising for performance vs benchmark would drive you to high beta. You have to think carefully whether that is what you are really looking for, or what you can sell.

"There is a SIX am?" -- Arthur

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-04-17 18:30
I understand the difference between managing a portfolio and a single strategy.

In terms of a single strategy, I believe both of you are thinking in the same direction as I am: checking characteristics of realized distributions vs. expected ones.

In my case, the goal was to beat the S&P, but beta is relatively low. I do want to make sure that it beats the S&P at least under conditions where it did so historically.

What I can sell is an entirely different question. Probably nothing.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Nonius
Founding Member
Nonius Unbound
Total Posts: 12736
Joined: Mar 2004
 
Posted: 2018-04-17 19:50

"This question came up in the other thread and I gave it some thought. [...] Before I begin laying out possible implementation details, how does this sound in principle?"

if you normally think the Sharpe should be X, and you subsequently see a max DD >> 1/X^2 (expressed as a fraction of average annual PL), then that's an indication something is wrong. That's if your returns are reasonably normally distributed; details left to reader. So a 4.5 Sharpe strat shouldn't be having DDs much larger than 5% of annual PL.
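The rule of thumb above, as a check one might run on realised PnL - a sketch, where `sharpe` is the Sharpe you believe the strategy has and normality of returns is assumed:

```python
import numpy as np

def drawdown_alarm(daily_pnl, sharpe, trading_days=252):
    """Flag when the realised max drawdown exceeds roughly 1/sharpe**2
    of average annual PnL (normal-returns heuristic)."""
    pnl = np.asarray(daily_pnl, float)
    equity = np.cumsum(pnl)
    # drawdown from the running equity high
    max_dd = np.max(np.maximum.accumulate(equity) - equity)
    annual_pnl = pnl.mean() * trading_days
    threshold = annual_pnl / sharpe ** 2
    return max_dd, threshold, bool(max_dd > threshold)
```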

on loss of signal, I'd prefer just checking, on an ongoing basis, the lead-lag corr or the R^2 between the signal and the future return you're trying to predict; the strat has a bunch of other logic (threshold, sizing, costs, slippage, execution) that clouds the assessment of signal quality.
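The ongoing check might look like a rolling correlation between the signal and the return it is meant to predict (a sketch; the window length is arbitrary):

```python
import numpy as np

def rolling_signal_corr(signal, future_ret, window=60):
    """Rolling correlation between a signal and the future return it is
    meant to predict; a sustained drift toward zero suggests decay."""
    s = np.asarray(signal, float)
    r = np.asarray(future_ret, float)
    out = np.full(len(s), np.nan)  # nan until a full window is available
    for i in range(window, len(s) + 1):
        out[i - 1] = np.corrcoef(s[i - window:i], r[i - window:i])[0, 1]
    return out
```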

Chiral is Tyler Durden

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-04-17 20:27
I'd say 1/X^2 is a rather soft criterion for strategies with Sharpe O(1).

I agree with your last point.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Nonius
Founding Member
Nonius Unbound
Total Posts: 12736
Joined: Mar 2004
 
Posted: 2018-04-17 20:56
hahaha, true on O(1), but then again, a Sharpe 1 strat I'd expect to have sort of mean-reverting PL over time....up one year, down another, flat over a few years, etc. like a punter or a CTA

not to say you're punting, I'm sure the Capn has some awesome strats!

Chiral is Tyler Durden

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-04-17 22:39
Dude, check out the next thread:

http://www.nuclearphynance.com/Show%20Post.aspx?PostIDKey=186211

I'm told there's at least $10mln/yr to be made ;)

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-04-17 22:56
I'll assume that my variable of interest, outperformance vs. benchmark, is somehow correlated with, or has a joint distribution with, the returns of the benchmark itself and maybe other variables, e.g. volatility. But for simplicity of exposition, let's say only with the benchmark.

I have time series of, say, monthly returns for both variables. I sort them by the benchmark's returns and bucketize. Within each bucket, I observe a bunch of strategy returns, which I'll also sort from low to high.

In live trading, each month I observe the relative return of the strategy vs. the benchmark. Then I look in the bucket where the benchmark's performance fell and find the current score := percentile of relative performance within the bucket. For example, say I chose to have a bucket for S&P returns of [-1%, 1%]. Assume that in backtesting, when the S&P returned between -1% and 1%, the strategy had returns {-5,-3,-1,0,1,2,4,5,7,8}%. Suppose in a given month of live trading the S&P returned 0 and the strategy returned 6%. Then the strategy scored in the 80th percentile. If the strategy returned -4%, then it scored 10. That's a current score.

How to work with the current score? One low number is not necessarily a problem, but a sequence or a majority of low numbers in recent history is. A low number followed by high numbers soon stops being a problem. I suppose an EMA is a decent way of aggregating the current and recent scores. Roughly, I can probably forget about a bad month in about half a year if no new problems appear, so I'd choose an EMA with a decay factor of about 0.7 per month to build aggregate scores.

I could build a first approximation to the empirical joint distribution with my training set and then use the OOS set to build historical aggregate scores in backtest. Analyzing live performance, I could use the historical lows of the aggregate scores as a benchmark for the current state of the strategy. If it crosses a historical low, that is certainly a sign of trouble.
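The scoring scheme above, sketched in code (the bucket edges and the 0.7 decay are the illustrative numbers from this post):

```python
import bisect

def current_score(bench_ret, strat_ret, buckets):
    """Percentile of the strategy's return within the historical bucket
    matching the benchmark's return. `buckets` maps (lo, hi) benchmark
    ranges to sorted lists of historical strategy returns."""
    for (lo, hi), history in buckets.items():
        if lo <= bench_ret < hi:
            rank = bisect.bisect_left(history, strat_ret)
            return 100.0 * rank / len(history)
    raise ValueError("benchmark return falls outside all buckets")

def aggregate_scores(monthly_scores, decay=0.7):
    """EMA of monthly current scores with the ~0.7/month decay above."""
    ema, out = None, []
    for s in monthly_scores:
        ema = s if ema is None else decay * ema + (1.0 - decay) * s
        out.append(ema)
    return out
```

With the bucket from the example, a 6% strategy return on a flat S&P month scores 80, and -4% scores 10, matching the worked numbers.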

Thoughts?

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

TonyC
Nuclear Energy Trader

Total Posts: 1280
Joined: May 2004
 
Posted: 2018-05-01 06:49
Hitman's idea strikes me as being a lot like the classic CUSUM chart from "manufacturing quality control" stats, all the way back to Deming

flaneur/boulevardier/remittance man/energy trader

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-05-22 18:10
I implemented what I described above with one small change: I re-centered the scores to zero for better visualization. Positive values mean that the strategy performs better than it has on average historically, conditional on the current performance of the benchmark. E.g., high values this Feb-Mar mean that the strategy not only performed well but performed better than it did during similar bear markets in the past.

Here's how it looks for the last year of live trading. Both current and aggregate scores fluctuate around zero, as expected. Looks like I should get really worried when the aggregate score crosses -25 or so.


For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Serg


Total Posts: 2
Joined: Aug 2018
 
Posted: 2018-08-22 15:42
>> "relative performance of the strategy"
Did you consider looking at its relative behavior, relative actions instead? If the strategy has a freedom to decide when to act and how much, it can be a good input to define its "profile". You may also add input related to the activity of the market to profile it more precisely.

Typically any trading strategy has a limited lifespan. The reason is this: traders are continuously looking for trade-able patterns. Large firms may run powerful servers 24/7 looking for them in vast amounts of historical market data. But they all have a certain threshold of statistical significance to avoid false positives. Once such a statistically significant pattern is detected, they rush to implement and use it, and thus bring this market "inefficiency" back to equilibrium. This is why, if your trading strategy is one of them, it's no wonder that its performance will fade with time.

Moreover, once this trading strategy becomes "common knowledge", there will be anti-strategies that prey on it. After all, if you had the code of a competitor's strategy, you could profit from it.

When any of that happens, it should manifest itself in the behavior of the trading strategy, which is probably easier to compare with statistical significance than its performance.

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-08-28 18:07
I'm not sure what exactly you are proposing. If you explain I'll think about it.

Thank you.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

gmetric_Flow


Total Posts: 7
Joined: Oct 2016
 
Posted: 2018-08-28 19:52
Of course Serg should explain, but from my understanding, he is proposing that one compare the actions of the strategy relative to the benchmark, i.e., how the decisions of the strategy differ from the benchmark's (when is one selling while the other is buying).

I'm not quite sure how this would help deal with the inevitable reversion to equilibrium (or prolong the lifespan of the strat in any way).

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-08-28 20:59
Let's see. Say my strategy is long 70% of the time on average, but this number goes from the low 60s to the upper 70s depending on the market regime. If I focus on smaller subsets, e.g. 2008, it is long less than 50% of the time. The benchmark, meanwhile, is long all the time by definition.

Then I am watching my strategy go live, and it's long over 80% of the time in 2017 but only 55% this year. I'm not sure how this kind of information will help me realize that the signal is gone. It will take a very long time to register a percentage significantly different from what I've seen before, and by that time I will likely have lost a lot of money.
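For what it's worth, the behavioural profile being discussed - the fraction of time the strategy is long - is easy to track as a trailing statistic (a sketch; the window length is arbitrary):

```python
import numpy as np

def rolling_long_fraction(positions, window=63):
    """Trailing fraction of periods the strategy is long; compare live
    values against the historical per-regime range (the 60s-70s% above)."""
    is_long = (np.asarray(positions) > 0).astype(float)
    # moving average of the long indicator over the trailing window
    return np.convolve(is_long, np.ones(window) / window, mode="valid")
```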

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

TonyC
Nuclear Energy Trader

Total Posts: 1280
Joined: May 2004
 
Posted: 2018-08-28 21:57
CUSUM charts are an old statistical control technique that might prove useful for determining when a signal is no longer acting as it was.

I read a paper almost twenty years ago that discussed it: "CUSUM Techniques for Technical Trading in Financial Markets"
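A classic one-sided CUSUM chart along those lines - here a lower chart that alarms when returns drift below what you expected (a sketch; the reference value k and decision interval h are the textbook SPC defaults):

```python
def cusum_lower(x, target_mean, sd, k=0.5, h=5.0):
    """One-sided lower CUSUM: accumulate shortfalls below
    target_mean - k*sd and alarm when the sum exceeds h*sd."""
    s, alarms = 0.0, []
    for i, v in enumerate(x):
        s = max(0.0, s + (target_mean - k * sd) - v)
        if s > h * sd:
            alarms.append(i)
            s = 0.0  # restart the chart after each alarm
    return alarms
```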

flaneur/boulevardier/remittance man/energy trader

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-08-28 22:21
Thanks, Tony - will read. It's been a long time ...

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-09-12 21:25
From what I understand, the paper established a mapping between a family of momentum-following strategies (they call it a filter trading rule) and CUSUM techniques. I didn't see how it could be used to determine when a signal has vanished.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

nikol


Total Posts: 519
Joined: Jun 2005
 
Posted: 2018-09-13 16:46
Apart from what was said above, you can also try the banking risk-management approach. It might be an ill-fitting reference, but still:

1. Check uncertainty (e.g. VaR/ES)_horizon against realized PnL_horizon by testing 'violation' frequencies

2. While realized PnL is never repeated, you can perform backtests over a grid of strategy parameters around the set of parameters used for trading. Normally, you would expect smooth behavior of the resulting PnLs as a function of these parameters. Any jumps will indicate the presence of an even bigger source of uncertainty (= risk).
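Point 2 might be sketched like this, with `backtest` a hypothetical user-supplied function returning total PnL for a (single, for simplicity) parameter value:

```python
import numpy as np

def grid_stability(backtest, center, span, n=9, jump_mult=3.0):
    """Backtest over a parameter grid around the live value `center` and
    flag neighbour-to-neighbour PnL jumps much larger than the average
    move, which would indicate an unstable (risky) parameter region."""
    grid = np.linspace(center - span, center + span, n)
    pnls = np.array([backtest(p) for p in grid])
    moves = np.abs(np.diff(pnls))
    flagged = moves > jump_mult * moves.mean()
    return grid, pnls, flagged
```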

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-09-19 23:07
1. Could you expand on it a little bit? I'm not sure I understand.

2. I've been doing it, it's a part of my MO.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

nikol


Total Posts: 519
Joined: Jun 2005
 
Posted: 2018-09-20 11:53
Consider that you measure risk with quantiles of the "next period" distribution.
The expected rate of violation of the respective quantile boundaries (loss or profit side) must be equal (in a statistical sense) to the quantile probability.
The loss-side quantile is defined as VaR (value at risk).

A deviation in the violation rate will signal that your VaR "model" is inaccurate.
You can do the same with cVaR (or Expected Shortfall), but in a slightly more complicated way.

PS. Just thinking afterwards: my proposal is in fact about your ability to predict risk, not about whether the strategy works or not. A comparison of the predicted and realised distributions will do.
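A minimal version of the violation-frequency test (a normal approximation to the binomial count of violations; the Kupiec likelihood-ratio test is the standard refinement):

```python
import numpy as np

def var_violation_test(pnl, var, p=0.01):
    """Count losses beyond VaR and compare the violation rate to its
    expected probability p. |z| well above 2 suggests the VaR model is
    inaccurate. `var` is given as a positive loss amount per period."""
    pnl = np.asarray(pnl, float)
    var = np.asarray(var, float)
    hits = pnl < -var                 # losses exceeding the VaR boundary
    n, k = len(pnl), int(hits.sum())
    z = (k - n * p) / np.sqrt(n * p * (1.0 - p))
    return k, n * p, float(z)
```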

Energetic
Forum Captain

Total Posts: 1488
Joined: Jun 2004
 
Posted: 2018-09-20 21:38
Unfortunately, I don't have the facilities to predict risk. I only predict the direction of the next day's move (or fail to make a prediction -> go to cash). The model says nothing about the distribution of returns.

I don't particularly like the current setup, whereby if not in cash I'm 100% long or short. I wanted to introduce a sizing algorithm (with the intent to trade out of a position gradually when approaching a signal boundary), so I tried to relate the "strength" of the signal to the distribution of future returns, but found little to nothing.

Any VaR/cVaR could only be based on historical model performance, aka historical VaR. I'm familiar with this approach from my actual daily work (employment) and I am not impressed. I cannot imagine why this should be better than what I'm proposing here.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

nikol


Total Posts: 519
Joined: Jun 2005
 
Posted: 2018-09-20 23:06
VaR/ES can also use an MC-generated distribution, which can use either a Real World or a Risk Neutral measure. RW is, yes, again historical. RN captures instant market perception. But, I expect, you know this.

From "Way of the Turtle" by C. Faith I learned that momentum traders like to use ATR as a measure of risk and use it for sizing. Not very quantitative, but I find it practical.