
So I come from a lower-frequency background where Sharpe and the like make more sense.
I am doing a grid search across 23 parameters and many backtest paths for an HFT strat that both provides and removes liquidity. Let's abstract the simulation out for now and assume the results coming out are realistic (a problem for another thread).
The whole compounding-of-portfolio-returns idea goes out the window here: I have more available exposure limit than the size of the trades, I often have no position, and the size of my position depends on what's available in the book. So I have a PNL time series.
What should my objective function be? I want a Sharpe-like mechanic so that super-risky trading isn't rewarded on final PNL alone, of course.
Ideas I have so far are pretty simple (I like the second one better):
1. hourly PNL / stdev(hourly PNL)
2. (PNL per some fixed unit of volume traded) / stdev(PNL per that same unit)
The problem with option 2 is that if some parameter set drastically reduces my traded volume, the strategy gets a lot less attractive even if the risk/reward is awesome.
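For concreteness, both candidate objectives reduce to the same mean-over-stdev ratio applied to different bucketings of the PnL series. A minimal stdlib sketch (the bucket values below are made-up illustrative numbers, not real results):

```python
import statistics

def sharpe_like(pnl_buckets):
    """Generic ratio: mean bucket PnL over its sample stdev."""
    mu = statistics.fmean(pnl_buckets)
    sd = statistics.stdev(pnl_buckets)
    return mu / sd if sd > 0 else float("inf")

# Option 1: bucket realized PnL by wall-clock hour.
hourly_pnl = [120.0, -40.0, 75.0, 10.0, 95.0, -15.0]
obj1 = sharpe_like(hourly_pnl)

# Option 2: bucket PnL per fixed unit of volume traded ("volume time"),
# e.g. one bucket per N contracts traded.
pnl_per_volume_bucket = [30.0, 12.0, -8.0, 25.0, 18.0]
obj2 = sharpe_like(pnl_per_volume_bucket)
```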
A hint of how this is typically done or any criticism of my planned objective function would be much appreciated :)




nikol


Total Posts: 1195 
Joined: Jun 2005 


Maybe this can help.
1. Why maximise Sharpe? Could you maximise PnL by using Kelly?
2. Why signal/std rather than signal/quantile? If the underlying pdf is normal, it does not matter, of course.
3. Your HFT module relies on a PnL estimate and its uncertainty. Why not use that? Perhaps you do, but then why do you need an additional tune at ~hourly frequency?
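Point 2 (a quantile rather than std in the denominator) can be sketched in a few stdlib lines. Taking the 5% left-tail cut point as the risk measure is my assumption here, not something prescribed above:

```python
import statistics

def std_ratio(pnl):
    """Mean PnL over sample stdev: the usual Sharpe-like ratio."""
    return statistics.fmean(pnl) / statistics.stdev(pnl)

def tail_ratio(pnl, n=20):
    """Mean PnL over the magnitude of the left-tail cut point.

    With n=20, quantiles() returns 19 cut points and cuts[0] is the
    5% quantile, so heavy left tails are penalized directly even when
    the pnl distribution is far from normal.
    """
    cuts = statistics.quantiles(pnl, n=n, method="inclusive")
    return statistics.fmean(pnl) / abs(cuts[0])
```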



Just pick some reasonable maximum position limit, then grid search for max PnL.
There's really no need to optimize around Sharpe, because most working HFT strategies are constrained by liquidity, not risk. Your intuition is right: optimizing on Sharpe will produce very small PnLs. The first marginal unit of risk is almost always going to be the most profitable, so Max(Sharpe) collapses into trading one lot at a time. The same is true for Max(PnL/Volume).
As long as you have a solid HFT alpha/strategy, you'll rarely find yourself with high-risk/high-PnL parameters. At high turnover, most of the portfolio variance is dominated by the trades, not the positions. What makes or breaks a strategy is short-run returns post-fill, not how the inventory drifts over time.
In contrast, low-frequency strategies tend to hover near zero Sharpe, because EMH implies that the ex-ante return of any random position is about zero. Buy and hold stocks picked with a dartboard, and you pretty much match the index.
But in a trade-dominated strategy, the returns come from microstructure alpha minus t-costs. There's no reason to expect that number to cluster around zero, and indeed if you just keep making a bunch of random trades, you'll very quickly lose all your money. Almost all high-turnover parameterizations are going to be either straight up or straight down. In HFT-world, high-PnL/high-risk Sharpes are a rare coincidence that requires a lot of stars to align.
If you preset a small maximum position limit, it forces the optimizer to stay within high-turnover strategies, and therefore avoids the problem of high-PnL/high-risk parameterizations. A tight position limit caps the variance contribution from long-run drift. Unless you literally have no other profitable options (in which case it doesn't matter), high-turnover parameterizations will always dominate the PnL metric under a position-limit constraint.
All that being said, sometimes it's practical to set a high Sharpe floor on a strategy, especially when you're starting out. Not necessarily from a risk perspective, but a validation one. One of the hardest challenges is verifying if/when live trading is not in line with simulated backtests. If you start with a 30-Sharpe parameterization, then even a single losing trading day lets you reject the null hypothesis at p < 0.05. It can be useful to start with Max(PnL) subject to Sharpe > [X], then gradually relax that constraint over time as you build up confidence in the live implementation. 
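The two ideas in that last paragraph, a single losing day being significant at 30 Sharpe, and Max(PnL) subject to a Sharpe floor, can be sketched like this (function names are mine, and the p-value assumes i.i.d. normal daily PnL and 252 trading days):

```python
import math

def daily_sharpe(ann_sharpe, days=252):
    """Convert an annualized Sharpe to a per-day Sharpe."""
    return ann_sharpe / math.sqrt(days)

def p_losing_day(ann_sharpe):
    """P(daily PnL < 0) = Phi(-z) assuming i.i.d. normal daily PnL."""
    z = daily_sharpe(ann_sharpe)
    return 0.5 * math.erfc(z / math.sqrt(2))

# At 30 annualized Sharpe, p_losing_day is roughly 0.03 < 0.05,
# so one down day rejects the "live matches backtest" null.

def pick_params(results, sharpe_floor):
    """Max(PnL) subject to Sharpe > floor.

    `results` maps a parameter-set key to (total_pnl, ann_sharpe).
    Returns None if nothing clears the floor.
    """
    feasible = {k: v for k, v in results.items() if v[1] > sharpe_floor}
    if not feasible:
        return None
    return max(feasible, key=lambda k: feasible[k][0])
```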
Good questions outrank easy answers.
Paul Samuelson 



Thanks guys, helpful thoughts. Managed to confirm most of them empirically.
After making my server suffer for a few consecutive days, I did in fact arrive at the argmax(PNL) parametrization being the best. Conveniently, this also maximizes my hourly Sharpe metric, and is indeed one of the highest-turnover parameter sets.
On the volume-time Sharpe, there were other parameter sets that maximized it, but at the cost of lower PNL and more sporadic traded volume.
Especially appreciate the tips on going live @EL, I want all the sanity checks I can get. Going to be all about the production fill rates from here, so I might come back to complain about those in a month or so. 

