Feature Exposure.....bull crap?
     

Maggette


Total Posts: 1319
Joined: Jun 2007
 
Posted: 2021-08-22 10:21
Hi guys,

hope to get some opinions on this topic: when it comes to modeling with data that seems to be generated by a non-stationary process, I keep running into concepts that are some variation of this:
https://forum.numer.ai/t/model-diagnostics-feature-exposure/899

The basic idea is to "diversify" your model's exposure to individual features. I have seen related ideas outside of finance as well.

I don't see how this is helpful. I never understood why the goal should be a single model that always works. Why not have different models, run something simple like CUSUM or a Bayesian change-point method to detect bleed in your model (state detection), and reallocate to other models (which you might paper trade/simulate on the side)?
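To make the "detect bleed, then reallocate" idea concrete, here is a minimal toy sketch (my own, not from any post in this thread) of a one-sided CUSUM on a strategy's daily PnL. The function name, the allowance `k`, the threshold `h`, and the full-sample sigma estimate are all illustrative choices, not anyone's production setup; in practice sigma would come from a trailing window.

```python
import numpy as np

def cusum_drawdown(pnl, drift=0.0, k=0.5, h=5.0):
    """One-sided (downward) CUSUM on daily PnL.

    Accumulates standardized downside surprises relative to the
    assumed drift; k is the usual allowance, h the alarm threshold,
    both in units of the PnL's standard deviation.
    Returns the indices at which an alarm fired.
    """
    pnl = np.asarray(pnl, dtype=float)
    sigma = pnl.std(ddof=1) or 1.0  # toy: full-sample sigma, use a trailing window in practice
    s, alarms = 0.0, []
    for t, x in enumerate(pnl):
        # standard CUSUM recursion for a downward shift
        s = min(0.0, s + (x - drift) / sigma + k)
        if s < -h:
            alarms.append(t)
            s = 0.0  # restart detection after an alarm
    return alarms

# synthetic example: 250 healthy days, then 250 days of bleed
rng = np.random.default_rng(0)
pnl = np.concatenate([rng.normal(0.05, 1.0, 250),
                      rng.normal(-1.0, 1.0, 250)])
print(cusum_drawdown(pnl, drift=0.05))  # alarms cluster in the second half
```

With these (arbitrary) parameters the detector stays quiet while the realized drift matches the assumed one, and fires within a couple of weeks of simulated days once the bleed starts; the reallocation logic on top of the alarm is a separate design question.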

In a domain like finance, with a tough signal-to-noise ratio where it is so hard to get an edge, does it make any sense to "harden" your strategy by limiting "feature exposure"?
Isn't it better to diversify among models/strategies rather than within a strategy?

I am also not sure I find feature neutralization (which looks like a fancy word for boosting with linear regression) awesome either. I am more open to that one, because it makes comparing models easier and is largely agnostic to the measure I use to evaluate models.
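For readers who haven't seen the term: under the "boosting with linear regression" reading, feature neutralization just subtracts from the predictions their least-squares projection onto the feature matrix. A minimal sketch (function name and `proportion` knob are my own, loosely following the idea in the linked Numerai post, not an exact reproduction of their code):

```python
import numpy as np

def neutralize(preds, features, proportion=1.0):
    """Remove the part of `preds` linearly explained by `features`.

    Fits OLS (with intercept) of preds on the feature matrix and
    subtracts `proportion` of the fitted values, leaving the residual.
    proportion=1.0 gives full neutralization; smaller values give
    partial neutralization.
    """
    F = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(F, preds, rcond=None)
    return preds - proportion * (F @ beta)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
preds = X @ rng.normal(size=5) + rng.normal(size=1000)  # feature-heavy predictions
neutral = neutralize(preds, X)
# after full neutralization the residual is orthogonal to every feature,
# so the sample correlation with each column is ~0 (up to float error)
print(np.corrcoef(neutral, X[:, 0])[0, 1])
```

Whether killing the linear feature exposure like this helps or just throws away edge is exactly the question above.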

But I work on some experiments right now. Maybe my opinion will change.
Thanks

I came here, saw you and your people smiling, and said to myself: Maggette, screw the small talk, better let your fists do the talking...

nikol


Total Posts: 1377
Joined: Jun 2005
 
Posted: 2021-08-22 12:37
I am not in "this space", but apart from CUSUM (which I find to be a good detector), I also like the idea of detecting distribution change with a QQ-plot of the quantiles Q1 at time t against Q2 at time t-1, where a KS or AD test gives you a clear probability-like flag of change.

Talking about feature spaces (M and L), we can think of a certain mapping of the original (hidden) N-dim manifold onto M-dim or L-dim manifolds of features (M and L < N), where KS/AD tests between the M and L spaces can play a role too... Apologies for being too abstract here, just mumbling aloud.

It is also interesting that the two hypotheses:
- "M is different from L"
- "M is the same as L"
may deliver different results (hypothesis testing has a 2x2 table of outcomes: true/false positives and negatives, and which hypothesis is the null changes which errors you control). Worth checking both, to see whether one delivers a better result than the other.

PS. I looked up "feature neutralization" - is it about choosing features that give a roughly risk-neutral outcome with MtM (under the risk-neutral measure)? Or is it really about zeroed-out PnL (under the real-world measure)?

... What is a man
If his chief good and market of his time
Be but to sleep and feed? (c)