Why is the native CME iceberg order type still used when MBO data makes it detectable?
     

ESMaestro


Total Posts: 141
Joined: Jun 2009
 
Posted: 2020-10-16 16:23
I know the forum has slowed, but it's still the first place I think of for the best insight. Hope someone can shed some light.

Long-time independent CME IOM/CBOT B2. I generally operate right outside the HFT spread in products like ES/ZN/UB/Bund. Since the '18 VIX blowout, I've found myself having to use TT's iceberg randomizing order type to enter/exit in products like ES, even at sizes as small as 20. I like to think I know my niche well, but in reality my isolation from the institutional side leaves me ignorant in a number of relevant areas.

ES: What are the reasons large funds/traders still utilize the CME native iceberg order type when CME disseminates MBO data that allows one to reconstruct native ice going off with near-100% accuracy? Even retail traders are starting to get feeds (e.g. Rithmic) that glean out this info. Even though the order type is not used as much as synthetic ice (heuristic estimations), there are often tight-range instances where native ice makes up a significant portion of that range's volume. Research literature I have come across puts native ice at ~4% of ES regular-session volume; I see upwards of 5.5%, while synthetic is estimated in a wide 5-14% range of the total. Are there more granular/realistic % assumptions out there from practitioners?
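
For concreteness, this is roughly the replenishment logic I mean when I say MBO lets you reconstruct native ice. Only a toy sketch; the event fields (order_id, price, qty) are my own stand-ins, not the actual MDP 3.0 message layout.

from collections import defaultdict

# Toy MBO-based native iceberg detector. Watches for an order whose
# displayed quantity is fully consumed and then restored at the same
# price under the same order_id -- the native replenishment signature.
class NativeIceDetector:
    def __init__(self):
        self.visible = {}                      # order_id -> (price, displayed qty)
        self.exhausted = {}                    # order_id -> price where display was filled out
        self.hidden_volume = defaultdict(int)  # order_id -> replenished qty observed

    def on_add(self, order_id, price, qty):
        self.visible[order_id] = (price, qty)

    def on_trade(self, order_id, price, qty):
        p, vis = self.visible.get(order_id, (None, 0))
        if p == price and qty >= vis:
            # Display fully consumed; watch for a same-order-id
            # quantity restore at the same price.
            self.exhausted[order_id] = price
        self.visible[order_id] = (p, max(vis - qty, 0))

    def on_modify(self, order_id, price, qty):
        if self.exhausted.get(order_id) == price and qty > 0:
            self.hidden_volume[order_id] += qty  # native ice tranche revealed
            del self.exhausted[order_id]
        self.visible[order_id] = (price, qty)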

Is combination use of both perhaps a strategy... fire off as much synthetic ice as possible, then cap it off with native to leave a "Hi, look at me" that sends less sophisticated HFT running towards your next resting synthetic ice tranche? Or are those using native ice just indifferent/uninformed, because they are not so price sensitive? Or have they come to the realization that HFT is so good at gleaning out synthetic that it just doesn't matter?





EspressoLover


Total Posts: 451
Joined: Jan 2015
 
Posted: 2020-10-16 19:05
Caveat: I haven't really done much at CME for a while. Also, I never really played with iceberg orders of any type. But I would think that the biggest reason to use native ice is to gain atomic matching on the hidden liquidity.

This is important if you either don't have latency supremacy or want to interact with oversized IOC orders. For the former, the book can change state before your synthetic has time to insert its next tranche. So HFTs will tend to swipe the best liquidity before it interacts with your hidden liquidity. For IOCs even latency supremacy is insufficient, since you'll never get a chance to interact with the cancel quantity outside of the atomic matching event.
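
To make the race concrete, here's a stick-figure sketch of a synthetic iceberg's replenish loop. send_limit and wait_fill are made-up stand-ins for whatever your gateway exposes, not a real API.

# Stick-figure synthetic iceberg loop (made-up gateway calls).
def run_synthetic_ice(side, price, total_qty, display_qty):
    remaining = total_qty
    while remaining > 0:
        child = send_limit(side, price, min(display_qty, remaining))
        remaining -= wait_fill(child)   # blocks until the fill ack
        # RACE WINDOW: a full wire round trip passes before the next
        # child is inserted. The level can trade through, or an IOC's
        # residual can cancel, before your next tranche is live. A
        # native iceberg would have matched that quantity atomically
        # inside the engine.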

Again, I can't really tell you how *big* an effect these things have. Certainly they have to be balanced against the very real cost of the obvious visibility that comes with native icebergs. Definitely don't throw out the Occam hypothesis that most native ice users are irrational/lazy/uninformed.

Good questions outrank easy answers. -Paul Samuelson

ESMaestro


Total Posts: 141
Joined: Jun 2009
 
Posted: 2020-10-16 20:05
Espresso, always appreciate your thoughtful input over the years. That goes a long way toward making sense of it.

I feel naive for not giving more credence to CME native being matching-engine side. IOC was not on my horizon.

Curious, to segue into a side topic: any order type/execution advice for someone in my spot (20-120 lot ES scalper) to remain as low profile as possible? I've been getting by "ok" using TT's synthetic ice for entry. I'll display upwards of ~20%, randomize the remaining tranches, never bracket child orders, and will often cross the spread to fill targets on manually placed limits (if I'm seeing stable and large enough display). I have a harder time getting out on TT ice, as I'm occasionally too proximal to my entries and have underestimated HFT prediction accuracy that far out. I never had to consider these execution issues until the last few years, as in relative terms I'm a small fish.

prikolno


Total Posts: 71
Joined: Jul 2018
 
Posted: 2020-10-16 21:28
I think people have a common misconception about icebergs, because the term carries the connotation that it is used by large traders for liquidity taking, and outdated microstructure publications are still steeped in this pre-DMA view.

The intended purpose of max show on CME is to enable MMs to provide liquidity at a tighter spread, the compensation being that you get stronger guarantees of knowing a level has cleared before those listening on the public feed do. I.e. your incentive for catching the falling knife is extra time to pull your foot out of the way, or to take the knife and stab your arch-nemesis, or something. I was only directly responsible for about 1% of ES, but guys working for me did quite a lot more, and I can confirm that as of 2020 this is still the orthodox way of using max show on CME.

This takes advantage of 2 properties of the matching engine: (i) the fill ack reports faster than the public execution report, and (ii) the replenished quantity is guaranteed worst queue priority on the direct book when the level trades through. A regular canary order only exploits the first property. There was a third property that was especially important if you were writing an FPGA parser, but I recall they patched it around Q1 2019.

To synthetically replicate the second property on venues without max show, you have to repeatedly down-prioritize your canary order (rough sketch below), which creates a more complex state space (e.g. if it gets filled while the down-prioritization is in-flight), adds messaging burden, and signals your intent. So there's very much a use for this order instruction regardless of detection, and you'd see a bunch of 106Js like us protesting if you tried to eliminate it. P.S.: MBO (and to a certain degree MBP) lets you determine the replenishment quantity ex post of the match, not ex ante.
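
Roughly what that synthetic replication looks like, assuming a generic gateway. cancel_replace and the handler names are stand-ins for whatever your stack provides; on most venues a plain cancel plus a new order is the portable way to forfeit time priority.

# Generic sketch of keeping a canary at worst queue priority on a
# venue without max show. All API names are stand-ins.
def down_prioritize(canary):
    # Re-entering at the same price sends the order to the queue tail.
    canary.pending_replace = cancel_replace(canary.order_id,
                                            canary.price, canary.qty)
    return canary

def on_fill(canary, fill):
    # The messy state space: a fill on the old order_id can cross the
    # replace in flight, so fills must be reconciled against both ids.
    if fill.order_id in (canary.order_id, canary.pending_replace):
        handle_level_cleared(fill)  # stand-in for your reaction logic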

EspressoLover


Total Posts: 451
Joined: Jan 2015
 
Posted: 2020-10-21 17:18
@prikolno

Thanks for this. Extremely informed and informative as always. Any chance you can share more on the FPGA consideration? I don't have any FPGA background, but am working on an FPGA project at the moment. It'd be helpful as a learning opportunity, even if the issue was already patched.

@ESMaestro

My gut sense is that your problems are mostly driven by the high price and high volatility of the S&P 500 in recent years. The "effective tick size" is much smaller: $0.25 becomes relatively smaller against larger absolute price moves. I haven't looked directly into it, but I'd be virtually certain that top-of-book touch sizes are much smaller than they were five years ago. Take this with a grain of salt, but I'd guess that the easiest remedy would be to re-optimize for a thinner-book regime.

I say this because market makers tend to size impact based on some concave function of queue size (toy illustration below). If you're trying to passively fill with 20-lot child orders, you'll eat a lot more market impact in a regime where touch sizes are 100 contracts, compared to one where they're 1000. Starting from this point, I think you can pursue one or more of the following three solutions.
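
A toy illustration of that concavity; the square-root form and the k = 0.5 coefficient are made up purely for the example, not a calibrated model.

import math

# Toy square-root impact rule: impact is concave in your share of the
# visible queue, so the same child is much costlier against a thin touch.
def expected_impact_ticks(child_size, touch_size, k=0.5):
    return k * math.sqrt(child_size / max(touch_size, 1))

# A 20-lot child: ~0.22 ticks against a 100-lot touch vs ~0.07 ticks
# against a 1000-lot touch -- same order, roughly 3x the impact.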

The most obvious is to scale down the size of the child orders. Maybe only display 10 lots at a time instead of 20. With a thinner book/smaller tick size, touch sizes are smaller but the market moves faster, so you have the freedom of slicing orders into finer granularities. Expected fill time on resting limits should be shorter compared to five years ago. As an aside, I don't know if TT supports something like this, but you may want to adaptively size the child orders depending on something like the rolling touch size or average limit-order fill time (sketch below).
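
Something in this spirit, if your platform lets you script it. The 2% participation fraction and the 1-20 lot clamps are pulled out of thin air for illustration.

from collections import deque

# Rough sketch of adaptive child sizing off a rolling touch size.
class ChildSizer:
    def __init__(self, window=500, frac=0.02, lo=1, hi=20):
        self.touch = deque(maxlen=window)   # recent top-of-book sizes
        self.frac, self.lo, self.hi = frac, lo, hi

    def on_book_update(self, touch_size):
        self.touch.append(touch_size)

    def next_child_size(self):
        if not self.touch:
            return self.lo
        avg = sum(self.touch) / len(self.touch)
        return max(self.lo, min(self.hi, int(avg * self.frac)))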

The second option is to place your limit orders further down the book. Similar reasoning applies: in a thick regime, deep-book orders can take unacceptably long to fill, but with large price moves, away quoting becomes comparatively more viable. Market impact tends to be smaller because you'll mostly be joining large queues. Plus HFTs are less likely to profile liquidity that originates in the deep book.

The third option is to make hay while the sun shines and take advantage of cheaper liquidity. That entails being more willing to cross the spread. Of course, aggressing will create higher impact, so you want to restrict this behavior to near the end of a parent order. If you use a marketable limit, you can both swipe resting liquidity and colonize first position in the queue. This approach can be particularly compelling if you have microstructure alphas to incorporate into the execution algorithm: not only does that shrink the expected cost of crossing the spread, it tends to diminish the long-run impact, since your flow will profile like an HFT's instead of real demand.
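
A sketch of that end-of-parent gate; best_ask/best_bid/send_limit are stand-ins, and alpha_bps, the 20% tail, and the 0.5 bps threshold are placeholders for whatever signal and tuning you actually have.

# Gate spread-crossing to the tail of a parent order, conditioned on
# a short-horizon alpha. All names and numbers are illustrative.
def maybe_cross(side, remaining, parent_total, alpha_bps,
                tail_frac=0.2, gate_bps=0.5):
    in_tail = remaining <= tail_frac * parent_total
    signal_ok = (alpha_bps >= gate_bps if side == "buy"
                 else alpha_bps <= -gate_bps)
    if in_tail and signal_ok:
        # Marketable limit at the far touch: sweeps resting liquidity,
        # and any unfilled residual rests first in queue at the new
        # level instead of chasing the way a market order would.
        px = best_ask() if side == "buy" else best_bid()
        return send_limit(side, px, remaining)
    return None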

Good questions outrank easy answers. -Paul Samuelson