Forums  > Trading  > Performance Metrics for HFT Strat  

Its Grisha

Total Posts: 61
Joined: Nov 2019
Posted: 2020-09-28 22:59
So I come from a lower-frequency background where Sharpe and the like make more sense.

I am doing a grid search across 2-3 parameters and many backtest paths for an HFT strat that both provides and removes liquidity. Let's abstract the simulation out for now and assume the results coming out are realistic. (problem for another thread)

The whole compounding of portfolio returns idea goes out the window here. I have more available exposure limit than the size of the trades. I often have no position, and the size of my position depends on what's available in the book. So I have a PNL time series.

What should my objective function be? I want a Sharpe-like mechanism, of course, so that I'm not rewarding super risky trading on final PNL alone.

Ideas I have so far are pretty simple (I like the second one better):
- hourly PNL / stdev(hourly PNL)
- (PNL per some fixed unit of volume traded) / stdev(PNL per that same unit of volume)

The problem with option 2 is that if some parameter set drastically reduces my traded volume, the strategy gets a lot less attractive even if the risk/reward is awesome.
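Both candidates can be computed in a few lines. A minimal sketch, assuming a fill-level backtest output with hypothetical column names (`ts`, `pnl`, `volume`) and synthetic data in place of real fills:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical backtest output: one row per fill with a timestamp,
# signed PnL contribution, and traded volume. Names are illustrative.
fills = pd.DataFrame({
    "ts": pd.date_range("2020-09-01", periods=5000, freq="30s"),
    "pnl": rng.normal(0.5, 10.0, 5000),
    "volume": rng.integers(1, 5, 5000),
})

# Option 1: Sharpe-like ratio on hourly PnL buckets.
hourly = fills.set_index("ts")["pnl"].resample("1h").sum()
sharpe_hourly = hourly.mean() / hourly.std()

# Option 2: bucket fills into consecutive chunks of ~equal traded
# volume ("volume time"), then take mean/std of per-bucket PnL.
bucket_size = 500  # units of volume per bucket (arbitrary choice)
bucket_id = fills["volume"].cumsum() // bucket_size
per_bucket = fills.groupby(bucket_id)["pnl"].sum()
sharpe_volume = per_bucket.mean() / per_bucket.std()

print(f"hourly Sharpe-like: {sharpe_hourly:.3f}")
print(f"volume-time Sharpe-like: {sharpe_volume:.3f}")
```

Note that option 2's denominator is per unit of volume, which is exactly why a low-volume parameter set can look artificially good or bad under it.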

A hint at how this is typically done, or any criticism of my planned objective function, would be much appreciated :)


Total Posts: 1195
Joined: Jun 2005
Posted: 2020-09-29 09:17
Maybe this can help.

1. Why maximise Sharpe? Could you maximise PnL by using Kelly?
2. Why signal/std rather than signal/quantile? If the underlying pdf is normal then it does not matter, of course.
3. Your HFT module relies on PnL estimation and its uncertainty. Why not use that directly? Perhaps you do, but then why do you need additional tuning at ~hourly frequency?
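On point 2: for a normal pdf a left-tail quantile is an affine function of the std, so the two rankings agree; with fat tails they can diverge. A toy comparison on synthetic t-distributed PnL (purely illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative hourly PnL draws from a fat-tailed distribution.
pnl = rng.standard_t(df=3, size=2000) * 5 + 1.0

# std-based ratio (classic Sharpe-like):
ratio_std = pnl.mean() / pnl.std()

# quantile-based ratio: scale by a left-tail quantile instead of
# the std, which penalizes fat left tails more directly.
q05 = np.quantile(pnl, 0.05)
ratio_quantile = pnl.mean() / abs(q05)

print(ratio_std, ratio_quantile)
```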


Total Posts: 451
Joined: Jan 2015
Posted: 2020-10-08 19:09
Just pick some reasonable maximum position limit, then grid search for max PnL.

There's really no need to optimize around Sharpe, because most working HFT strategies are constrained by liquidity, not risk. Your intuition is right: optimizing on Sharpe will produce very small PnLs. The first marginal unit of risk is almost always the most profitable, so Max(Sharpe) collapses into trading one lot at a time. The same is true for Max(PnL/Volume).

As long as you have a solid HFT alpha/strategy, you'll rarely find yourself with high-risk/high-PnL parameters. At high turnovers, most of the portfolio variance becomes dominated by the trades, not the positions. What makes or breaks a strategy is short-run returns post-fill, not how the inventory drifts over time.

In contrast, low-frequency strategies tend to hover near zero Sharpe, because the EMH implies that the ex-ante return of any random position is about zero. Buy and hold stocks picked with a dartboard, and you pretty much match the index.

But in a trade-dominated strategy, the returns come from microstructure alpha minus t-costs. There's no reason to expect that number to cluster around zero. And indeed, if you just keep making a bunch of random trades, you'll very quickly lose all your money. Almost all high-turnover parameterizations are going to be either straight up or straight down. In HFT-world, a high-PnL/high-risk Sharpe is a rare coincidence that requires a lot of stars to align.

If you pre-set a small maximum position limit, it forces the optimizer to stay within high-turnover strategies, and therefore avoids the problem of high-PnL/high-risk parameterizations. A tight position limit caps the variance contribution from long-run inventory drift. Unless you literally have no other profitable options (in which case it doesn't matter), high-turnover parameterizations will always dominate the PnL metric under a position-limit constraint.
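The selection logic amounts to a plain grid search for Max(PnL) with the limit held fixed. A sketch, where the grid knobs, the `run_backtest` stub, and the limit value are all placeholders for the real simulator:

```python
import itertools

MAX_POS = 10  # pre-set maximum position limit, in lots (arbitrary)

# Hypothetical 2-knob parameter grid.
grid = {
    "edge_ticks": [1, 2, 3],
    "quote_size": [1, 2, 5],
}

def run_backtest(params, max_pos):
    # Placeholder objective so the sketch runs end-to-end;
    # the real event-driven simulator goes here.
    return params["edge_ticks"] * params["quote_size"] - 0.1 * params["edge_ticks"] ** 2

best_params, best_pnl = None, float("-inf")
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    pnl = run_backtest(params, max_pos=MAX_POS)
    if pnl > best_pnl:
        best_params, best_pnl = params, pnl

print(best_params, best_pnl)
```

The point is that `max_pos` is a fixed constraint, not a searched parameter, so the optimizer can't buy PnL by carrying inventory.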

All that being said, sometimes it's practical to set a high Sharpe floor on a strategy, especially when you're starting out. Not necessarily from a risk perspective, but from a validation one. One of the hardest challenges is verifying if/when live trading is out of line with the simulated backtests. If you start with a 30-Sharpe parameterization, then even a single losing trading day lets you reject the null hypothesis at p < 0.05. It can be useful to start with Max(PnL) subject to Sharpe > [X], then gradually relax that constraint over time as you build confidence in the live implementation.
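The p < 0.05 claim checks out under the usual assumptions (about 252 trading days per year and roughly normal daily PnL):

```python
from math import sqrt
from statistics import NormalDist

annual_sharpe = 30.0
trading_days = 252
daily_sharpe = annual_sharpe / sqrt(trading_days)  # about 1.89

# Under the null "live matches backtest" with ~normal daily PnL,
# the probability of any single day finishing down:
p_losing_day = NormalDist().cdf(-daily_sharpe)
print(f"daily Sharpe {daily_sharpe:.2f}, P(losing day) {p_losing_day:.3f}")
```

So one down day has roughly a 3% chance under the null, which is below the 5% threshold.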

Good questions outrank easy answers. -Paul Samuelson

Its Grisha

Total Posts: 61
Joined: Nov 2019
Posted: 2020-10-08 21:45
Thanks guys, helpful thoughts. Managed to confirm most of them empirically.

After making my server suffer for a few consecutive days, I did in fact arrive at the argmax(PNL) parametrization being the best. Conveniently, it also maximizes my hourly Sharpe metric, and is indeed one of the highest-turnover parameter sets.

On the volume-time Sharpe, other parameter sets came out on top, but at the cost of lower PNL and more sporadic traded volume.

Especially appreciate the tips on going live @EL, I want all the sanity checks I can get. Going to be all about the production fill rates from here, so I might come back to complain about those in a month or so.