Eulers


Total Posts: 1 
Joined: Apr 2020 


Using metrics to track performance, it has always boggled my mind why Sharpe is the one everybody uses. I mean, what's the use case when return / max DD is the end goal anyway? With martingale or higher-dimensional momentum scale-in/scale-out position sizing, it can go so far as to really hurt the "Sharpe". Who uses Sharpe, why, and what justifies it? 
Short the pops and long the drops 




I would say that maximizing terminal utility or CAGR is the end goal, and return / max DD is just another approximation, like Sharpe? 
First Commander of the USS Enterprise 


kloc


Total Posts: 42 
Joined: May 2017 


Well, there are the Sterling and Calmar ratios if you prefer CAGR/DD-type risk metrics?
Not as popular as Sharpe, but they're pretty standard... 




gamerx


Total Posts: 20 
Joined: Sep 2012 


There are many problems with the Sharpe ratio, but the same goes for other metrics.
I guess it's the preferred one simply because everyone else is using it. It makes performance comparable across the board. 




Optimal Kelly sizing is linearly proportional to the Sharpe ratio. Therefore, for a log-utility investor in an infinite-period game without leverage constraints, expected log wealth scales quadratically with Sharpe, regardless of higher moments.
Optimal log returns are equal to Sharpe^2. For example, a strategy with a Sharpe of 0.5 can achieve 25% annualized log returns at optimal leverage. Sharpe 1.0 can achieve 100%. Sharpe 2.5 can achieve 625%. Sharpe 0.1 can achieve 1%. Etc.
Another way to think about this is to consider the central limit theorem. Treat the period-to-period returns of an asset as an i.i.d. process. (For any reasonably liquid and efficient asset, some sufficiently large period will have sufficiently small autocorrelation between periods.) In the long run, the investor's log wealth converges to a normal random variable. The variance of this "cumulative wealth" variable depends linearly on the variance of the asset's returns; the contribution of any higher moments converges to zero.
In other words, for a sufficiently long-term investor without leverage constraints, all risk metrics besides Sharpe become irrelevant. 
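The convergence argument above is easy to check numerically. A small sketch (with a made-up, deliberately skewed return distribution, not real asset data): summing i.i.d. per-period log returns over a year washes most of the skewness out of cumulative log wealth, exactly as the CLT predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_skewness(x):
    x = np.asarray(x, dtype=float)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)

# One-period log returns drawn from a heavily skewed distribution
# (demeaned exponential), scaled to ~5% per-period volatility.
n_paths, n_periods = 20_000, 252
log_r = 0.05 * (rng.exponential(1.0, size=(n_paths, n_periods)) - 1.0)

skew_one_period = sample_skewness(log_r[:, 0])      # strongly skewed (~2)
skew_long_run = sample_skewness(log_r.sum(axis=1))  # shrinks like 2/sqrt(252)

print(skew_one_period, skew_long_run)
```

The long-horizon skew is an order of magnitude closer to zero than the one-period skew, which is the sense in which higher moments stop mattering for a long-term investor.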
Good questions outrank easy answers.
Paul Samuelson 



wquant


Total Posts: 8 
Joined: Nov 2019 


I am trying to get a better understanding of Sharpe ratios and what they mean for moving forward with a backtested strategy (standard long/short equity returns). Here are a few links to what I've been looking at:
https://www.twosigma.com/wpcontent/uploads/sharpetr1.pdf
https://arxiv.org/pdf/1905.08042.pdf
https://www.davidhbailey.com/dhbpapers/sharpefrontier.pdf
https://alo.mit.edu/wpcontent/uploads/2017/06/TheStatisticsofSharpeRatios.pdf
In these, there seems to be all manner of different statistical tests to ascertain the significance of a backtested Sharpe ratio.
My question is: among practitioners, is there a standard calculation or procedure to determine confidence in a strategy, or to determine a required track-record length?
Any guidance is much appreciated. Thanks! 
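For what it's worth, one back-of-envelope that falls out of the Lo paper linked above is the i.i.d. standard error SE(SR) ≈ sqrt((1 + SR²/2) / T). A rough sketch (function names are mine, and it deliberately ignores autocorrelation and the skew/kurtosis corrections those papers discuss):

```python
import math

def sharpe_se_iid(sr_annual, years, periods_per_year=252):
    """Approximate standard error of an annualized Sharpe estimate
    under i.i.d. returns (Lo 2002), without autocorrelation or
    higher-moment corrections."""
    t = years * periods_per_year
    sr_per_period = sr_annual / math.sqrt(periods_per_year)
    se_per_period = math.sqrt((1 + 0.5 * sr_per_period**2) / t)
    return se_per_period * math.sqrt(periods_per_year)

def years_to_distinguish_from_zero(sr_annual, z=1.96):
    """Rough track-record length (years) before a true annualized Sharpe
    of sr_annual sits z standard errors away from zero."""
    # SE ~ 1/sqrt(years) for small per-period Sharpe, so years ~ (z / SR)^2
    return (z / sr_annual) ** 2

print(round(sharpe_se_iid(1.0, years=3), 3))          # ≈ 0.578
print(round(years_to_distinguish_from_zero(0.5), 1))  # ≈ 15.4 years
```

The second number is the sobering one: a genuine Sharpe-0.5 strategy needs on the order of 15 years of live track record before it is statistically distinguishable from zero at the usual 95% level.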



DouglasP


Total Posts: 12 
Joined: Jan 2014 


Could you expand a bit on how to achieve 25% annually with a Sharpe of 0.5? I would assume that includes a fairly decent risk of losing 99% of your capital? 





Well, just to illustrate, I grabbed daily return data on SPY going back to inception (1993). The realized annualized Sharpe is 0.48, so it's a pretty good example. The annualized return is 10%, so getting to 25% requires adding 150% leverage.
2.5x SPY has a number of major drawdowns. Nothing as serious as 99% of capital, but you would have lost 85% of your capital in the GFC, 80% during the dot-com bust, and 70% to Covid. Keep in mind, though, that those losses were fully recovered by 2012, 2006, and (nearly) summer 2020 respectively. 94% of rolling 5-year periods are positive, and the median 5-year return is 242%.
That being said, I don't advocate full Kelly sizing as a practical matter. In particular, I think most people drastically overestimate their strategy's Sharpe ratio, mostly through a combination of overfitting, overconfidence, performance decay, and the Peso problem. Fractional Kelly, with the fraction somewhere between one fifth and one half depending on an honest and introspective Bayesian assessment, is usually prudent. 
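To make the leverage arithmetic concrete, here is a rough sketch of a daily-rebalanced leveraged position. The returns are synthetic placeholders for a Sharpe-0.5-ish asset (not the actual SPY series), and financing costs beyond the risk-free rate on the borrowed portion are ignored.

```python
import numpy as np

def leveraged_stats(daily_returns, leverage, rf_daily=0.0):
    """Compound a daily-rebalanced leveraged position; return
    annualized growth and maximum drawdown."""
    r = leverage * daily_returns - (leverage - 1) * rf_daily
    wealth = np.cumprod(1 + r)
    peak = np.maximum.accumulate(wealth)
    max_dd = 1 - (wealth / peak).min()
    cagr = wealth[-1] ** (252 / len(r)) - 1
    return cagr, max_dd

# Synthetic stand-in: ~8% drift, ~16% vol => Sharpe ~0.5 (not real SPY data)
rng = np.random.default_rng(1)
r = rng.normal(0.08 / 252, 0.16 / np.sqrt(252), size=252 * 28)

for lev in (1.0, 2.5):
    cagr, max_dd = leveraged_stats(r, lev)
    print(f"leverage {lev}: CAGR {cagr:.1%}, max drawdown {max_dd:.1%}")
```

Running this makes the trade-off visible: the leveraged line compounds faster, but its drawdowns deepen dramatically, which is the "decent risk" DouglasP is asking about.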
Good questions outrank easy answers.
Paul Samuelson 


rftx713


Total Posts: 128 
Joined: May 2016 


EspressoLover, I think I know what you mean by the Peso problem, but can you confirm? 




doomanx


Total Posts: 109 
Joined: Jul 2018 


The Peso problem refers to the pricing (or, in the context EL is using it, the inability to price) of rare, significant shifts in the price and/or fundamentals of an asset. The name comes from the depegging of the Mexican peso from the USD: the peso was pegged to the dollar, but there was a big gap between the interest rates on Mexican and US bank deposits, with the argument that the gap represented uncertainty over whether the peso would be devalued. The peg was removed and the peso tanked.
In stats terms, you are trying to estimate a non-stationary process with a small sample size under fat tails. Historical Sharpe doesn't reflect the uncertainty about when the next structural shift might come, and since we're in fat-tail land you also can't predict when they will happen or how severe they might be. Shrinking your Kelly fraction will reduce drawdown in events like that (and if you're not at full Kelly, it's always better to underbet than to overbet, mathematically and practically; see filthy's book). 
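The "underbet rather than overbet" point has a neat closed form. Under the standard continuous-time approximation with i.i.d. normal returns (my framing, not anything specific to filthy's book), the growth rate at a fraction c of full Kelly, relative to the growth at full Kelly, is g(c)/g(1) = 2c − c²:

```python
# Growth at a fraction c of full Kelly, relative to full-Kelly growth,
# assuming i.i.d. normal returns: g(c)/g(1) = 2c - c^2.
def relative_growth(c):
    return 2 * c - c ** 2

for c in (0.25, 0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"{c:>4}x Kelly: {relative_growth(c):+.2f}x full-Kelly growth")
# 0.5x and 1.5x Kelly earn identical growth, but 1.5x carries 9x the
# return variance; past 2x Kelly, growth turns negative. Overestimating
# Sharpe silently pushes your effective c above 1 -- hence shrink it.
```

The asymmetry is in the risk, not the growth curve itself: a half-Kelly bettor gives up 25% of the growth rate with far smaller swings, while the overbettor pays the same growth penalty on the way toward the c = 2 cliff where growth hits zero.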
did you use VWAP or triple-reinforced GAN execution?


