Forums  > Basics  > Lead-lag / latency real time monitoring  


Total Posts: 987
Joined: Jun 2005
Posted: 2019-11-16 22:46
I trade on X and monitor Y, which leads X but with variable latency. Apparently this number impacts performance.

Sornette's paper "Optimal Thermal Causal Path" is nice but seems too "expensive" for real time.
Similar to that, I thought of scanning the cross-correlation over variable intervals, but that is also expensive.

Can you suggest any other methods?
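[One cheap direction, offered as a sketch rather than a known-good method: if a full cross-correlation scan is too expensive, keep exponentially weighted cross-products at a small bank of candidate lags and update them incrementally, one tick at a time. The class name, halflife and lag grid below are all illustrative.]

```python
import numpy as np

class OnlineLagTracker:
    """Exponentially weighted cross-correlation at a fixed bank of candidate
    lags. O(n_lags) work per tick, so cheap enough for real time.
    (Sketch: class name, halflife and lag grid are all illustrative.)"""

    def __init__(self, lags, halflife=500):
        self.lags = np.asarray(list(lags))
        self.alpha = 1.0 - 0.5 ** (1.0 / halflife)
        self.buf = np.zeros(int(self.lags.max()) + 1)  # ring buffer of x increments
        self.pos = 0
        self.cov = np.zeros(len(self.lags))            # EW E[dx[t-lag] * dy[t]]
        self.var_x = 1e-12
        self.var_y = 1e-12

    def update(self, dx, dy):
        """Feed one synchronized pair of increments (dx, dy)."""
        n = len(self.buf)
        self.buf[self.pos % n] = dx
        a = self.alpha
        lagged = self.buf[(self.pos - self.lags) % n]  # dx from `lag` ticks ago
        self.cov = (1 - a) * self.cov + a * lagged * dy
        self.var_x = (1 - a) * self.var_x + a * dx * dx
        self.var_y = (1 - a) * self.var_y + a * dy * dy
        self.pos += 1

    def best_lag(self):
        corr = self.cov / np.sqrt(self.var_x * self.var_y)
        return int(self.lags[np.argmax(corr)])

# toy check: y is exactly x delayed by 5 ticks
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
tracker = OnlineLagTracker(lags=range(1, 11))
for t in range(5, len(x)):
    tracker.update(x[t], x[t - 5])
print(tracker.best_lag())  # -> 5
```

The halflife controls how quickly the estimate forgets an old lag regime; the per-tick cost is just one ring-buffer write plus a vector update over the candidate lags.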


Total Posts: 1090
Joined: Nov 2004
Posted: 2019-11-17 08:33
The day Sornette makes actual money with his research... other than by selling signals... His latest venture with CS looked like a disaster the last time I cared to have a look.

Nice guy though.

If you are not living on the edge you are taking up too much space.


Total Posts: 987
Joined: Jun 2005
Posted: 2019-11-17 12:44
That paper was not related to trading but an attempt to give a solution to the Granger causality criteria.
Or do you measure success through the profit lens?
I guess he is doing well publishing his articles, books and lecturing.

That idea is cool, but still slow. It led me to a couple of other ideas, but they will take a lot of implementation. :(

PS. Perhaps EMA is used in zillions of trading algos. Only 5% of those algos using EMA make profits. Is it still worth employing EMA?


Total Posts: 62
Joined: Jul 2018
Posted: 2019-11-18 03:00
@nikol I can't give you the full method, but there's a way to do this with a directed graph and spectral clustering.
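[A minimal sketch of how that hint could be read; this is one plausible reading, not doomax's actual method, and every name and parameter is illustrative: build a directed adjacency matrix from pairwise lead-lag cross-correlations over a universe of series, symmetrize it, and split it with the sign of the Fiedler vector of the normalized Laplacian.]

```python
import numpy as np

def lead_lag_graph(R, max_lag=5):
    """Directed adjacency: A[i, j] = best positive cross-correlation of
    series i's past against series j's present, i.e. weight of 'i leads j'."""
    T, n = R.shape
    Z = (R - R.mean(0)) / R.std(0)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = max(np.mean(Z[:-l, i] * Z[l:, j])
                              for l in range(1, max_lag + 1))
    return np.maximum(A, 0.0)

def spectral_bisect(A):
    """Two-way spectral clustering on the symmetrized graph: sign of the
    Fiedler vector of the normalized Laplacian of S = A + A.T."""
    S = A + A.T
    d = np.maximum(S.sum(1), 1e-12)
    Dh = np.diag(d ** -0.5)
    L = np.eye(len(S)) - Dh @ S @ Dh
    _, V = np.linalg.eigh(L)
    return (V[:, 1] > 0).astype(int)
```

With two independent factors driving two pairs of series, the bisection recovers the pairs; for the two-series case in this thread the graph view only becomes interesting once you monitor a whole universe of X/Y candidates.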


Total Posts: 987
Joined: Jun 2005
Posted: 2019-11-18 13:57

Thank you for this direction.

My other thoughts were:
- The "Optimal Energy" idea fits well into FPGA combinatorial techniques (I proposed such an idea in my PhD, exploiting the geometry of decays; it is now used at LHC/CERN to measure luminosity). The main problem is that I am not an FPGA programmer and it will take time to become one (with assembler coding experience I am not afraid, but it is time...).
- Make an MC simulation of X and Y (= X + time shift + noise) and use this sample to train an ANN. Then use "transfer learning" to adapt the trained ANN to the real sample. I am aware that it takes quite some time without a guarantee of reaching the wanted level of accuracy ("guaranteed" ~ well controlled, as in the case of pure numerical methods).
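[A minimal sketch of the MC part of that second idea, assuming a random-walk X and Y = X shifted by a hidden lag plus noise; the generator and its parameters are illustrative, not nikol's actual setup. Each call yields one (features, label) training example for an ANN.]

```python
import numpy as np

def make_training_pair(n=512, max_lag=20, noise=0.05, rng=None):
    """One synthetic training example: X is a random walk, Y equals X
    shifted by a hidden lag tau plus noise; the label is tau.
    (Illustrative generator for the MC + transfer-learning idea.)"""
    rng = rng or np.random.default_rng()
    tau = int(rng.integers(1, max_lag + 1))
    x = np.cumsum(rng.standard_normal(n + max_lag))
    y = x[max_lag - tau : n + max_lag - tau] + noise * rng.standard_normal(n)
    return np.stack([x[max_lag:], y]), tau

feats, tau = make_training_pair(rng=np.random.default_rng(0))
print(feats.shape)  # (2, 512)
```

Train an ANN on many such pairs to map the (2, n) window to tau, then fine-tune ("transfer-learn") on real X/Y windows where tau is estimated offline.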


Total Posts: 1212
Joined: Jun 2007
Posted: 2019-11-18 23:37
Hi nikol,
I am a bit confused by the term "lead-lag latency" here. That's obviously my problem, since doomax seems to understand it.

So please help me to understand the problem.

Assuming the relationship between x and y were linear: are we talking about the fact that x leads y with a time-varying lag?

In a simple model where y follows x exactly by a fluctuating lag?

y[t] = x[t-lag[t]]

To be more clear, here's a quick (and ugly!!!) python thingy... but I have to get off the train in a minute, so no time to make it nice (sorry):

import numpy as np
import pandas as pd

p_of_switch_lag = 0.15
min_lag = 1
max_lag = 10
length_of_series = 60

# assuming the lag is "sticky" and doesn't switch too often
switch_lag_state = np.random.binomial(1, p_of_switch_lag, length_of_series)

# picking lags: keep the old one or randomly pick a new one
lags = []
lag = np.random.randint(min_lag, max_lag)
for state in switch_lag_state:
    if state != 0:
        lag = np.random.randint(min_lag, max_lag)
    lags.append(lag)

# x and y --- y exactly follows x by the (random, sticky) lag
x = np.random.standard_normal(length_of_series)
y = []
for t in np.arange(length_of_series):
    y.append(x[max(t - lags[t], 0)])

df = pd.DataFrame(switch_lag_state, columns=["switching_lag"])
df["lag"] = lags
df["x"] = x
df["y"] = y


edit: damn, it is losing tabs...

I came here and saw you and your people smiling, and said to myself: Maggette, screw the small talk, better let your fists do the talking...


Total Posts: 987
Joined: Jun 2005
Posted: 2019-11-19 12:25
> lag = np.random.randint(min_lag,max_lag,(1))[0]

In my case the lag is more or less grouped around a single value. But yes, it can move wildly at some events, so I want to avoid them (maybe later I will want to exploit them).

I measure xcorr like this:

def xcorr(x, y, rng, nodiff=False):
    if nodiff:
        dx = x
        dy = y
    else:
        dx = np.diff(x, 1)
        dy = np.diff(y, 1)
    sel = (np.abs(dx) > 0) & (np.abs(dy) > 0)
    xx = [(i, np.corrcoef(dx[sel], np.roll(dy[sel], i))[0][1]) for i in range(-rng, rng)]
    xxr = list(zip(*xx))
    return xxr[0], xxr[1]

here is a picture of the cross-correlations found from synchronized tick series sampled at 100 ms (resolution of 1 lag):
pd.DataFrame(...synchronized tick data...).resample('100L').ohlc()

If I sample at 200, 250, 500, 1000 ms, then the correlations become stronger but the lag is less "pronounced", from which I conclude that I have a ~100-300 ms advantage.

Still, this result is obtained in backtest (post-production) mode and is completely unusable in real time. Besides, I cannot claim that my advantage is permanent. At some events I anticipate that this lag can change. Therefore, I want to know it right at the moment of trade (+/- 100 us).

PS. I know there is a "wealth" of underlying structure behind these series which might impact the lead-lag pattern, but anyway.


Total Posts: 987
Joined: Jun 2005
Posted: 2019-11-19 20:46
The model behind my problem is this:
X(t) is a continuous price process with Px - point-like "realizations" of X at Poisson rate Rx: X(t_i)
Y(t) = X(t+tau) is also a continuous process with Py - point-like "realizations" of Y at rate Ry: Y(t_j)

Let's say, typically Ry/10 ~ Rx. More rigorously, Y updates when the change of X is "large".

I want to know tau.

The point-like process emulates asynchronous tick data.
It can be trade prices, but in my case it is some kind of mid-price, which sits inside the bid-ask interval.
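[This model can be simulated directly, which also gives a way to sanity-check any tau estimator offline. A sketch assuming a Brownian latent path and made-up rates (with Ry = 10 Rx, as stated); the resampling-and-scanning estimator at the end is the slow backtest-style one, not a real-time answer.]

```python
import numpy as np

rng = np.random.default_rng(42)
tau, rate_x, rate_y = 0.1, 100.0, 1000.0   # seconds, ticks/sec (made-up numbers)
horizon, dt = 100.0, 0.001

# latent Brownian price path on a fine grid covering [0, horizon + tau]
grid = np.arange(0.0, horizon + tau + dt, dt)
latent = np.cumsum(rng.standard_normal(len(grid))) * np.sqrt(dt)

def poisson_ticks(rate):
    t = np.cumsum(rng.exponential(1.0 / rate, size=int(2 * rate * horizon)))
    return t[t < horizon]

tx = poisson_ticks(rate_x)                 # X(t_i) = latent(t_i)
ty = poisson_ticks(rate_y)                 # Y(t_j) = latent(t_j + tau): Y leads
px = latent[np.searchsorted(grid, tx)]
py = latent[np.searchsorted(grid, ty + tau)]

# last-tick resample both asynchronous series onto a 10 ms grid
step = 0.01
g = np.arange(1.0, horizon, step)
xg = px[np.searchsorted(tx, g) - 1]
yg = py[np.searchsorted(ty, g) - 1]
dxg, dyg = np.diff(xg), np.diff(yg)

# scan shifts: Y's increments should predict X's increments tau later
shifts = np.arange(0, 40)
corr = [np.corrcoef(dyg[: -s or None], dxg[s:])[0, 1] for s in shifts]
tau_hat = shifts[int(np.argmax(corr))] * step
print(tau_hat)   # close to tau = 0.1, biased up slightly by X's tick staleness
```

The last-tick resampling makes X stale by ~1/Rx on average, so the recovered peak sits slightly above the true tau; that bias is exactly the kind of thing a simulator like this lets you measure.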


Total Posts: 1212
Joined: Jun 2007
Posted: 2019-11-20 08:17
Ahh. Ok. Thx. And suddenly the posts by you and doomax make a lot of sense now:). Should have been able to figure it out myself. Sorry.

Thx again. Will think about it.

I came here and saw you and your people smiling, and said to myself: Maggette, screw the small talk, better let your fists do the talking...


Total Posts: 62
Joined: Jul 2018
Posted: 2019-11-21 15:56
@nikol something I can talk about that may or may not be relevant - do you actually need to estimate tau, or is it propagated into some higher-level decision making? If you're just looking for an optimal forecast based on some variable lags, try searching the 'mixed delay filter' signal-processing literature.
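[For a feel of what a filter over a bank of delays looks like in the simplest linear case - a stand-in sketch, not taken from the 'mixed delay filter' literature itself: regress y on several delayed copies of x, so a fluctuating lag shows up as weight spread across taps rather than as a single tau.]

```python
import numpy as np

def fit_lag_bank(x, y, max_lag=10, ridge=1e-3):
    """Ridge least-squares filter over a bank of delayed copies of x:
    instead of estimating one tau, learn a weight per candidate lag.
    (Illustrative sketch; function name and parameters are made up.)"""
    T = len(x)
    Xlag = np.column_stack([x[max_lag - l : T - l] for l in range(1, max_lag + 1)])
    yt = y[max_lag:]
    w = np.linalg.solve(Xlag.T @ Xlag + ridge * np.eye(max_lag), Xlag.T @ yt)
    return w   # w[l-1] is the weight on x[t-l]
```

If the lag wanders between, say, 3 and 5 ticks, the fitted weights spread over those taps, and the forecast `Xlag @ w` stays useful without ever committing to a single tau.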


Total Posts: 987
Joined: Jun 2005
Posted: 2019-11-21 17:16

The lead value (tau) enters into the risk estimation, hence, yes, it has an impact on the market order submission policy and on the limit order price. In a sense, tau may change such that my signal lags prices; in this case I have to be aware of it and switch the quoting machine from the signal to the 'actual market'.

Thank you for the directions, I'm looking into it (spectral clustering is a thing).


Total Posts: 13
Joined: Jan 2017
Posted: 2019-11-22 19:07
I was also looking into this a while back; it didn't go anywhere on account of other things, but I remember stumbling across ... I've not actually looked at it, so no idea if it's suitable for purpose, but who knows, maybe it'll help :)