Forums > Pricing & Modelling > Forecasting Methodologies

il_vitorio


Total Posts: 103
Joined: Aug 2014
 
Posted: 2016-06-05 04:37
Good night, Phinanciers,

Here's the deal: I have to write a thesis for my MSc, and I am thinking of focusing on a good forecasting technique.

Since I already know the "standard" forecasting techniques (Kalman, ARIMA and most of its derivatives, HMMs),

I was thinking of going further and researching HHMMs (hierarchical hidden Markov models) or even their generalizations (which, from what I have read, are called dynamic Bayesian networks).

So, in this post I would like to hear your opinions on forecasting methods that are worth the time to research and learn.

Thank you very much,
Il_vitorio

P.S.: This could have gone in University, but since I think it is more on the poll side, it could become a good summary of forecasting methods.

One of my most productive days was throwing away 1000 lines of code.

jslade


Total Posts: 1057
Joined: Feb 2007
 
Posted: 2016-06-06 22:15
I always thought that this kind of stuff was pretty neat:

http://www.vovk.net

If I had to spend a year thinking about something in forecasting, it would probably be something along those lines. I doubt it will get you a job afterwards, but it would be fun to think about.

"Learning, n. The kind of ignorance distinguishing the studious."

katastrofa


Total Posts: 344
Joined: Jul 2008
 
Posted: 2016-06-07 00:11
Always compare the complex methods with something really simple, like exponential smoothing or moving average. You may be surprised how good the simple stuff can be, especially if you have a lot of noise.
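
E.g., a minimal pair of baselines to benchmark against (a sketch in plain numpy; alpha and the window are illustrative, not tuned):

import numpy as np

def exp_smoothing_forecast(x, alpha=0.2):
    # One-step-ahead simple exponential smoothing: f[t] only uses data up to t-1.
    x = np.asarray(x, dtype=float)
    f = np.empty(len(x))
    f[0] = x[0]
    for t in range(1, len(x)):
        f[t] = alpha * x[t - 1] + (1 - alpha) * f[t - 1]
    return f

def moving_average_forecast(x, window=20):
    # One-step-ahead trailing moving average; the first `window` forecasts are undefined.
    x = np.asarray(x, dtype=float)
    f = np.full(len(x), np.nan)
    for t in range(window, len(x)):
        f[t] = x[t - window:t].mean()
    return f

# Score any fancy model against these on the same out-of-sample data (e.g. by RMSE).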

il_vitorio


Total Posts: 103
Joined: Aug 2014
 
Posted: 2016-06-08 15:22
Thank you very much for the suggestions.

@katastrofa I will surely be benchmarking against the simple ones.

@jslade I have been poking around the site; this line of thought is pretty amazing. After I finish this research I will surely be thinking about the things on it.

Any further suggestions?

One of my most productive days was throwing away 1000 lines of code.

EspressoLover


Total Posts: 201
Joined: Jan 2015
 
Posted: 2016-06-08 18:58
Deep (recurrent for time series) nets.

http://arxiv.org/abs/1407.5949
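
A bare-bones version, just to show the shape of the thing (a Keras-style sketch; the window, layer width, epochs and the random-walk stand-in series are all arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

def make_windows(x, window=50):
    # Turn a 1-d series into (lagged window, next value) supervised pairs.
    X = np.stack([x[i:i + window] for i in range(len(x) - window)])
    return X[..., None], x[window:]          # shapes (n, window, 1) and (n,)

series = np.cumsum(np.random.randn(5000))    # stand-in for your own series
X, y = make_windows(series)

model = Sequential([LSTM(32, input_shape=(50, 1)), Dense(1)])
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)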

jslade


Total Posts: 1057
Joined: Feb 2007
 
Posted: 2016-06-08 19:40
Honestly, if your purpose is to get a job, knowing the basics really well (as katastrofa says) is your best bet.
Trying to beat the legions of recent Ph.D. student experts in transductive whatevers or dweeb^H^H^H erm deep learning is probably futile for a master's degree. Unless you're just interested, in which case, go nuts!

"Learning, n. The kind of ignorance distinguishing the studious."

EspressoLover


Total Posts: 201
Joined: Jan 2015
 
Posted: 2016-06-08 20:29
Don't disagree at all. In fact I'm pretty skeptical that deep nets would work all that well on asset price prediction. The signal-to-noise ratio is just too low. It's too easy for the auto-encoding to blow all its statistical power on randomness.

But c'mon man. There's plenty of time for tweaking boring Kalman filters over the next 30 years of sitting in a soul-crushing office. A master's thesis is a last hurrah to do something totally impractical, but still badass cool.

jslade


Total Posts: 1057
Joined: Feb 2007
 
Posted: 2016-06-08 21:12
True story for sure, but I occasionally wish I had spent more time learning more useful things in school. I might be retired by now.

I make stink faces at DL for two reasons: 1) there are a ton of deep learner Ph.D. types on the market, and other than recognizing German traffic signs, I haven't seen much in the way of applications; most graduate and do other things with their time. 2) I know big swinging deep learners, and *they* are saying it is overhyped and people should stop doing this.

That said, I've burned some cycles noodling around with it. There's cool stuff there, but it seems to require an awful lot of human intervention and processing power compared to, like, just thinking about the data problem.

There are interesting new results in plain old linear algebra that strike me as potentially more interesting than DL: big SVD stuff, CX/CUR decompositions, generalized eigenvector classifiers, marrying fast k-means with sequential regression.

"Learning, n. The kind of ignorance distinguishing the studious."

deeds


Total Posts: 341
Joined: Dec 2008
 
Posted: 2016-06-09 14:38

jslade - thanks for the informed perspective

anything particularly tasty from the broad menu of new developments you list?

radikal


Total Posts: 252
Joined: Dec 2012
 
Posted: 2016-06-09 16:51
@jslade --

I'm now, almost, on the NN train after being on the fence for a while. I'm finding that I can get either better solutions to existing problems or solve a new class of problems that I previously couldn't, with reasonably simple NNs. (Not that they were impossible before, just impossible for me, as I lacked the math chops to solve them with some fancy hierarchical model.) My sticking point is SPEED, as the sparsity of nets leads to empty and overly expensive computations -- there's a LOT of work on this happening right now, so fingers semi-crossed.

So I'll be the dissenting voice and advise working on something in NNs -- it's way better than working on stochastic vol models =p

There are no surprising facts, only models that are surprised by facts

il_vitorio


Total Posts: 103
Joined: Aug 2014
 
Posted: 2016-06-09 17:37
Thank you very much for the contributions.

@jslade - I have been reading up on those topics and they are pretty interesting, particularly since I have a kind of academic curiosity about random matrix theory and all the magic behind it. I do not know how you come up with this kind of material, and I am pretty grateful for these names. (I hope to repay you in the future.)

@ES - Thank you for pointing that out.

@radikal - I have also done "general" ML stuff: ANNs and a little of the basic RNNs. Among the things in my possible-research bag are LSTMs/GRUs and also deep ConvNets. Is that the kind of work you mean, or are you talking about something else? I would be pretty grateful for some names pointing in the right directions.

Again thanks everyone,
il_vitorio

One of my most productive days was throwing away 1000 lines of code.

EspressoLover


Total Posts: 201
Joined: Jan 2015
 
Posted: 2016-06-10 01:10
@radikal [Sorry to go off on a thread tangent.]

By speed, I assume you mean evaluating an already trained model. Large-scale neural nets, especially deep nets, are basically unworkable in a low-latency context. But have you investigated any of the research on model compression? (Link to an example below.) My best luck has been with training large ensembles during learning, then using something like LSH or heavily pruned nets to get the latency down to something acceptable.

https://arxiv.org/pdf/1504.04788.pdf
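
For anyone skimming: the trick in that paper is to never store the full weight matrix, and instead hash each (i, j) position into a small shared parameter vector. A toy sketch of the idea (the sizes are made up, and a seeded RNG stands in for the cheap hash a real implementation would recompute on the fly):

import numpy as np

def hashed_weights(params, n_in, n_out, seed=0):
    # Virtual weight matrix: W[i, j] = sign(i, j) * params[h(i, j)].
    rng = np.random.RandomState(seed)
    idx = rng.randint(len(params), size=(n_in, n_out))     # h(i, j)
    sign = rng.choice([-1.0, 1.0], size=(n_in, n_out))     # sign hash
    return sign * params[idx]

params = 0.05 * np.random.randn(1000)          # 1k real parameters...
W = hashed_weights(params, 256, 256)           # ...standing in for ~65k virtual weights
h = np.maximum(np.random.randn(256) @ W, 0)    # one ReLU layer using the virtual W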

chiral3
Founding Member

Total Posts: 4969
Joined: Mar 2004
 
Posted: 2016-06-10 01:43
I am late to this discussion... IMO, "good forecasting techniques" are not the trendy little things that will get you a job now. Others have alluded to this. I've discarded very complex and trendy things for a good ol' fashioned Kalman on more than a few occasions.

Something that would be valuable and that stresses good basics would be to look at when certain methods do and do not work. This is relevant in a practical sense. Take vol forecasting (I'll pick something simple) applied to target vol. Say for trading, risk premia, vol as an asset class, etc. EWMA, GARCH, EGARCH, SV, etc., with their myriad parameterizations... They all tend to work well in some markets but not others. If they are too reactive, turnover is too high. You're basically short gamma. Good times are bad and bad times are OK. If you make them less reactive, you bleed less in v-shaped markets but can't get ahead of big moves fast enough. Synthetically, you're striking your option further OTM.
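
To make the reactivity knob concrete: a RiskMetrics-style EWMA forecast is the one-parameter version of this trade-off (numpy sketch; lambda = 0.94 is the classic daily value, not a recommendation):

import numpy as np

def ewma_vol(returns, lam=0.94):
    # sigma2[t] = lam * sigma2[t-1] + (1 - lam) * r[t-1]^2
    # Lower lam: reacts fast, turns over a lot (the short-gamma profile above).
    # Higher lam: bleeds less in v-shaped markets, lags the big moves.
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = r[:20].var()       # seed with a short sample variance
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
    return np.sqrt(sigma2)

# A target-vol book then scales positions by target_vol / ewma_vol(returns)[-1].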

I've developed some really great strategies using Kalman but, whatever I picked up on the front end, I gave back on the back end in tx costs. Bayesian methods have really found a ton of application in my world over the years. I guess my advice to you would be the advice I give the people on my team: don't take your tool and go looking for a problem. Take a problem and apply your toolbox.

Nonius is Satoshi Nakamoto. 物の哀れ

radikal


Total Posts: 252
Joined: Dec 2012
 
Posted: 2016-06-10 16:47
@il_vitorio -- tbh, LSTM/GRU vs. just window-batching your inputs doesn't matter much imho. I haven't played with convolutional nets in finance (I do use them for random side projects, and there's some recent work on applying deep convolutional stuff to non-vision problems, but for what I spend my time trading, that's like painting with a hammer).

@EL - That paper in particular was the moment when I thought this stuff might all fly reasonably in an MM/HFT system. There's some other stuff along similar lines that I haven't circled back to yet. Right now my system is driven by tree ensembles, and swapping out to nets is "the goal", but so is improving the 500 other things needed in a new system. ^^


https://arxiv.org/abs/1605.04859
https://arxiv.org/pdf/1510.00149.pdf

Not to be lazy, but the first link has a bunch of good references. There were a few other really exciting recent things on this topic, but I'm struggling to find them this morning.

TBH, even with really good optimization, you're taking an expensive calculation on the tens/hundreds-of-ms scale down to something perhaps STILL too expensive. From a naive deep net, you're looking to get, what, like a five-order-of-magnitude improvement (hundreds of ms down to the microsecond scale)? Which isn't to say this isn't all awesome, but it's only part of the solution if you're dropping it into an MM system.

There are no surprising facts, only models that are surprised by facts

Nonius
Founding Member
Nonius Unbound
Total Posts: 12651
Joined: Mar 2004
 
Posted: 2016-06-10 16:55
I'm a fan of linear regression. sue me.

Chiral is Tyler Durden

jslade


Total Posts: 1057
Joined: Feb 2007
 
Posted: 2016-06-10 20:00
CX and CUR are PCA-like decompositions in the original space. They're most useful when married to big-SVD for a lot of reasons, but imagine a PCA where instead of some weird ass rotation that doesn't mean anything, you get a couple of rows and columns that do mean something and which represent the whole space fairly well (toy sketch below).
The generalized eigenvector thing generates nice features and uses linear regression to match or beat dweeb learning on important classification problems. I haven't fooled with it much; will do so one day.
Fast k-means is useful everywhere for building RBF classifiers, super-high-dimensional metric space things, recommendation engines, etc. It's good juju.
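
The toy sketch of CX/CUR (leverage-score sampling off an exact SVD; for the big-SVD marriage you'd swap in a randomized SVD):

import numpy as np

def cur(A, k=10, c=20, r=20, seed=0):
    # Pick real columns/rows of A with probability ~ their rank-k leverage
    # scores, then solve for the small middle factor U so that A ~ C @ U @ R.
    rng = np.random.RandomState(seed)
    Usvd, s, Vt = np.linalg.svd(A, full_matrices=False)
    p_col = (Vt[:k] ** 2).sum(axis=0) / k          # column leverage scores
    p_row = (Usvd[:, :k] ** 2).sum(axis=1) / k     # row leverage scores
    cols = rng.choice(A.shape[1], size=c, replace=False, p=p_col / p_col.sum())
    rows = rng.choice(A.shape[0], size=r, replace=False, p=p_row / p_row.sum())
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R, cols, rows     # cols/rows index actual, interpretable features

A = np.random.randn(500, 100)
C, U, R, cols, rows = cur(A)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))   # relative error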

There's more; nuclear norm methods for compressed sensing, etc. Knowing some linear algebra tricks just seems more useful to me than noodling around with something touted everywhere by Nvidia, FB and Google's marketing team.

Random matrix stuff is interesting too, but that's old (I studied that in physics grad school).

"Learning, n. The kind of ignorance distinguishing the studious."

chiral3
Founding Member

Total Posts: 4969
Joined: Mar 2004
 
Posted: 2016-06-10 21:05
The RMT stuff was cool... in physics. I remember the French shop screwing around with that in the early '00s. Bouchaud, Potters et al. They probably got distracted by zetas or quasicrystals, which is why I never heard about them making zee money.

Nonius is Satoshi Nakamoto. 物の哀れ

jslade


Total Posts: 1057
Joined: Feb 2007
 
Posted: 2016-06-11 01:47
Potters and Bouchaud had an active hedge fund the last time I checked. Sornette is active as well (and I think involved with them somehow). Since they are phrench, they don't talk to les roast boeufs like us, though a pal worked with Sornette on his thing.
https://www.cfm.fr/fr/work-with-us/

There was some sweet result in RMT stuff by math/stats guys recently; maybe it was Terry Tao. I dunno; I find bugs in file systems and yell at people on the phone these days.

"Learning, n. The kind of ignorance distinguishing the studious."

EspressoLover


Total Posts: 201
Joined: Jan 2015
 
Posted: 2016-06-11 02:37
@radikal

Thanks for the links. Agree that the limits of compression are probably still beyond O(10us) HFT applications. Even with a fast NN library like FANN, that's only enough clock cycles for ~20k connections.
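
Back-of-envelope behind the ~20k figure (every number here is an assumption; order of magnitude only):

clock_hz = 3e9                    # ~3 GHz core
budget_s = 10e-6                  # O(10us) evaluation budget
cycles = clock_hz * budget_s      # ~30,000 cycles to spend
cycles_per_connection = 1.5       # assumed multiply-accumulate cost incl. overhead
print(int(cycles / cycles_per_connection))    # ~20,000 connections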

@jslade

Deep learning is definitely over-hyped. But I think you're throwing the baby out with the bathwater. There are strong theoretical bounds on the compactness of deep nets. For certain types of problems, that makes shallow learners effectively useless, even if they are universal.

The perennial problem with the ML hype cycle is that everybody forgets the NFL (no free lunch) theorem. Once some new method X makes a breakthrough on problem Y, the entire community leaps to the conclusion that it should be used everywhere and that everything else is now obsolete.

http://nicolas.le-roux.name/publications/LeRoux10_dbn.pdf

chiral3
Founding Member

Total Posts: 4969
Joined: Mar 2004
 
Posted: 2016-06-11 03:17
Jslade, I know a bunch of guys running an active hedge fund. Doesn't mean they're returning benchmark (nav).

Nonius is Satoshi Nakamoto. 物の哀れ

Nonius
Founding Member
Nonius Unbound
Total Posts: 12651
Joined: Mar 2004
 
Posted: 2016-06-11 09:19
This is definitely true. Having said that, Bouchaud's firm has been consistently doing pretty damn well over a long run. They're also strangely polite, which is refreshing. Not Rentech-well, but pretty consistent. I haven't checked the performance of their funds over the last few years. They did have a stupid operational blow-up in one of their funds due to having some of their collateral at a custodian that went into bankruptcy in the mid-to-late 2000s. Can't remember the name.

I'm actually surprised at the transparency of the research- they publish a lot.

Chiral is Tyler Durden

goldorak


Total Posts: 979
Joined: Nov 2004
 
Posted: 2016-06-11 09:35
Sornette is involved with another HF. No doubt he has too much ego to work with Bouchaud.

If you are not living on the edge you are taking up too much space.

Nonius
Founding Member
Nonius Unbound
Total Posts: 12651
Joined: Mar 2004
 
Posted: 2016-06-11 09:45
Bouchaud seems a bit more grounded. Not saying that Sornette is on the same gobbledygook level as a leg-crossing, nonsense-spewing postmodernist Phrench philosopher, but he is a bit zany, isn't he?

Chiral is Tyler Durden

goldorak


Total Posts: 979
Joined: Nov 2004
 
Posted: 2016-06-11 11:51
Just learnt some vocabulary from you, Nonius...

I mean, the guy just loves talking and really considers himself in charge of answering (or to have already answered) the major questions of the universe, life and everything.

He is famous for his diet, and I must reckon he is in crazy good shape for someone approaching his 60s! Live or die trying.

If you are not living on the edge you are taking up too much space.