 Godzilla
|
|
Total Posts: 25 |
Joined: Sep 2008 |
|
|
Any intra-month updates on QIM?
Is it dead or alive? |
|
|
|
 |
 amateur
|
|
Total Posts: 147 |
Joined: Mar 2010 |
|
|
alive
For January:
The Quantitative Global Program returned +0.30% net-of-fees.
Gains in metals (+0.70%) and indices (+0.66%); losses in energies (-1.53%).
The Quantitative Tactical Aggressive Fund returned an estimated +6.17% net-of-fees. |
“unnecessary complex models should not be preferred to simpler ones. However . . . more complex models always fit the data better” |
|
 |
 Azx
|
|
Total Posts: 39 |
Joined: Sep 2009 |
|
|
I made an Excel sheet in secondary school that would spit out candlestick patterns. Some worked really well, but it didn't make sense to trade any of them.
QIM's founder describes himself as a self-taught quant. Maybe university isn't such a good idea after all... |
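A spreadsheet rule like the one Azx describes takes only a few lines of code. Here is a minimal sketch; the OHLC tuple layout and the bullish-engulfing definition are illustrative assumptions, not Azx's actual rules:

```python
# Minimal candlestick-pattern scan over (open, high, low, close) bars.

def bullish_engulfing(prev, curr):
    """True if curr's up-body engulfs prev's down-body."""
    p_open, _, _, p_close = prev
    c_open, _, _, c_close = curr
    return (p_close < p_open          # previous bar closed down
            and c_close > c_open      # current bar closed up
            and c_open <= p_close     # body opens at or below prior close
            and c_close >= p_open)    # and closes at or above prior open

bars = [(101, 102, 98, 99), (98.5, 103, 98, 102)]
hits = [i for i in range(1, len(bars)) if bullish_engulfing(bars[i - 1], bars[i])]
print(hits)  # → [1]
```

Other patterns are just more predicates of the same shape scanned over the bar list.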
|
|
|
 |
 amateur
|
|
Total Posts: 147 |
Joined: Mar 2010 |
|
|
Bloomberg has introduced (recently?) a basic candlestick-pattern recognition function, CNDL GO; not sure how useful it is. |
“unnecessary complex models should not be preferred to simpler ones. However . . . more complex models always fit the data better” |
|
 |
 Godzilla
|
|
Total Posts: 25 |
Joined: Sep 2008 |
|
|
Does anyone know QIM's Feb performance? Thanks |
|
|
|
 |
 amateur
|
|
Total Posts: 147 |
Joined: Mar 2010 |
|
|
Estimated as of 25 Feb:
JAN=0.30% FEB=0.93%
|
“unnecessary complex models should not be preferred to simpler ones. However . . . more complex models always fit the data better” |
|
 |
 svquant
|
|
Total Posts: 113 |
Joined: Apr 2007 |
|
|
As of close Feb 28th
Feb: 1.19% YTD: 1.49% |
|
|
|
 |
 svquant
|
|
Total Posts: 113 |
Joined: Apr 2007 |
|
|
Godzilla, in case you were going to ask... QIM is down ~3% in March so far, now down 1.5% YTD.
It's been a bit ugly in the CTA world since the quake/tsunami in Japan. It wouldn't surprise me if most of the mid-month loss happened in the last week.
|
|
|
 |
 Godzilla
|
|
Total Posts: 25 |
Joined: Sep 2008 |
|
|
Thank you svquant, you are so kind. |
|
|
|
 |
 amateur
|
|
Total Posts: 147 |
Joined: Mar 2010 |
|
|
QIM down in April: -2.42% for the Global Program. The silver short was bad.
Other billion-dollar CTAs mostly up in April, from what I saw. |
“unnecessary complex models should not be preferred to simpler ones. However . . . more complex models always fit the data better” |
|
 |
 gnarsed
|
|
Total Posts: 87 |
Joined: Feb 2008 |
|
|
Their equities program also got cleaned out: down 11% on the month and 4% YTD. |
|
|
|
 |
|
Machine learning is fairly intriguing, but its applicability to finance is pretty limited compared to vanilla statistics. Most of supervised learning is much more suited to classification than regression. If you're trying to model returns using a classification scheme (i.e., classifying daily returns as positive or negative), it's probably biasing you toward strategies with asymmetrical payoffs rather than true alpha. As for unsupervised learning, it's pretty hard to outperform vanilla correlation or PCA. Stuff like k-means clustering just doesn't tell you that much more about overall market co-moment structure. |
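The asymmetric-payoff point can be made with a toy calculation: a sign classifier can be right more often than wrong while the underlying strategy still loses money, because the magnitudes differ. A sketch (the 60/40 split and return sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy daily returns: 60% small gains of +10bp, 40% larger losses of -20bp.
returns = np.where(rng.random(10_000) < 0.6, 0.0010, -0.0020)

sign_hit_rate = (returns > 0).mean()   # what an always-long "sign classifier" scores
mean_return = returns.mean()           # expectancy of actually trading it

print(f"hit rate {sign_hit_rate:.2f}, mean daily return {mean_return:.5f}")
# Right ~60% of the time, yet the expectancy is negative.
```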
|
|
 |
 raf
|
|
Total Posts: 123 |
Joined: Sep 2010 |
|
|
Hello,
I found this one intriguing: http://goo.gl/HJ39E
A while ago I read an interesting paper/note about using fuzzy logic to identify candlestick patterns, but I cannot find the document anymore :(
Have a nice trade,
Raf |
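For what raf describes, the core fuzzy-logic idea is replacing hard pattern thresholds with membership degrees in [0, 1]. A minimal sketch, with made-up thresholds (not from the paper he mentions):

```python
# Fuzzy membership for candlestick features: degrees in [0, 1] instead of yes/no.

def ramp(x, lo, hi):
    """Linear membership: 0 below lo, rising to 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def long_white_body(o, h, l, c):
    """Degree to which a bar is a 'long white (up) body'."""
    body, bar_range = c - o, h - l
    if bar_range <= 0 or body <= 0:
        return 0.0
    return ramp(body / bar_range, 0.5, 0.9)  # body dominates the bar's range

print(long_white_body(100, 109, 99.5, 108.5))  # near-full membership
print(long_white_body(100, 110, 99, 101))      # zero membership
```

Composite patterns are then fuzzy ANDs (e.g., the minimum) of such feature memberships, which degrades gracefully instead of flipping at a threshold.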
No Brain - No Pain ! |
|
|
 |
 quantz
|
|
Total Posts: 237 |
Joined: Jan 2009 |
|
|
HFT Trader, I'm interested in your comments. I take it you are implying that most financial relationships are linear in nature and therefore don't require more flexible models. Of course, linear modeling is itself a form of supervised learning.
Curious why you think machine learning is better suited to classification than regression: do you have some experience with it?
I think your point about the pitfalls of a classification scheme makes sense. However, the idea still intrigues me. If you're interested in discussing further or can recommend any relevant reading, shoot me an email. |
|
|
 |
|
Any update on QIM performance since May?
Raf, you may want to look at this program. It is rather basic but has some interesting features, including an indicator based on patterns: http://bit.ly/prvP2l |
|
|
|
 |
 flip
|
|
Total Posts: 21 |
Joined: Apr 2007 |
|
|
YTD (as of end of July): -4.43%. May: -3.31%, June: -0.92%, July: +1.12%
first week of aug up 2.5% (indicative)
|
|
|
 |
 lj12
|
|
Total Posts: 3 |
Joined: Dec 2009 |
|
|
quantz at 2011-05-23 02:44 wrote: > ... you are implying most financial relationships are linear in nature ...
Relationships may not be linear, but given our limited knowledge, the simplest (= linear) model will be the most robust. Say a model explains 5bps out of a 150bps daily move: about -70dB SNR (negative). Not much ML research is done in domains with SNRs that low. For example, speech recognition starts to fail at +20dB (that's positive); at 0dB it fails completely (WER jumps from <1% to 70%). |
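For readers unused to dB: the standard conversion is 10·log10 of a power ratio, or 20·log10 of an amplitude ratio; lj12's -70dB evidently uses some other normalization, since the plain ratio of his numbers gives a less extreme figure. A quick sketch of the standard conversion:

```python
import math

def snr_db_power(signal_power, noise_power):
    """SNR in dB for a power (variance) ratio."""
    return 10 * math.log10(signal_power / noise_power)

def snr_db_amplitude(signal_amp, noise_amp):
    """SNR in dB for an amplitude ratio."""
    return 20 * math.log10(signal_amp / noise_amp)

# lj12's example: 5bp explained out of a 150bp daily move (145bp unexplained).
print(snr_db_amplitude(5, 145))     # ≈ -29.2 dB treating bps as amplitudes
print(snr_db_power(5**2, 145**2))   # same ratio expressed as powers
```

Either way the qualitative point stands: the signal is far below the noise floor at which mainstream ML applications operate.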
|
|
|
 |
 quantz
|
|
Total Posts: 237 |
Joined: Jan 2009 |
|
|
Hmm, cool. I'd never studied the issue from an SNR perspective; will add it to my list of things to read up on :) |
|
|
 |
 prophet
|
Banned |
Total Posts: 149 |
Joined: Oct 2004 |
|
|
> Machine learning is fairly intriguing, but its applicability to finance is pretty limited compared to vanilla statistics. Most of supervised learning is much more suited to classification than regression. If you're trying to model returns using a classification scheme (i.e., classifying daily returns as positive or negative), it's probably biasing you toward strategies with asymmetrical payoffs rather than true alpha. As for unsupervised learning, it's pretty hard to outperform vanilla correlation or PCA. Stuff like k-means clustering just doesn't tell you that much more about overall market co-moment structure.
Machine learning is more complicated, and has more design degrees of freedom, with regression models than with classification models. Therefore machine learning on regression problems is more likely to fail, as with any complex engineered system. There's also the bias of history: machine learning achieved its early successes on classification problems.
The problem with machine learning in finance is that it's too powerful and must be properly constrained. Without constraints, the learner will latch onto an undesirably strong variety of highly non-deterministic price action while ignoring the more subtle, highly valuable deterministic price action. This is essentially what we call "overfitting," and it is the biggest problem in my opinion. The learner may also favor certain market regimes and fail to generalize to others, simply for lack of constraints on the merit or error function.
There are many useful constraints, falling either on the input side (pre-filter market data to keep highly non-deterministic price action out of the training set) or on the merit/error side (identify regimes and handle them properly).
Now if you spend enough time experimenting with the above-mentioned constraints on real market data, avoid the "machine learning is useless" conclusion, and truly think about the problem (as I have), you'll come to some simple yet bold conclusions about the effective application of machine learning to market data.
If you put in the work and truly understand the nature of the overfitting problem, you'll find yourself arriving at this conclusion from many angles. It also helps to have been through the process of creating and successfully trading profitable algorithmic strategies, to have seen those strategies degrade as the market evolved, and to have witnessed the struggles of others. I've been told I was wrong about my conclusions many times, only to reconfirm them through further analysis.
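prophet's "input-side constraint" can be sketched concretely: estimate rolling volatility and drop the most turbulent stretches from the training set. The window length and percentile cutoff below are arbitrary illustrative choices, not anything prophet specifies:

```python
import numpy as np

def calm_regime_mask(returns, window=20, pct=80):
    """Keep samples whose trailing volatility is below the pct-th percentile."""
    r = np.asarray(returns, dtype=float)
    vol = np.array([r[max(0, i - window):i].std() if i else 0.0
                    for i in range(len(r))])
    cutoff = np.percentile(vol[window:], pct)
    keep = vol <= cutoff
    keep[:window] = False  # no volatility estimate during warm-up
    return keep

rng = np.random.default_rng(1)
rets = rng.normal(0.0, 0.01, 500)
rets[200:250] *= 5  # inject a turbulent stretch

mask = calm_regime_mask(rets)
print(mask.sum(), "of", len(rets), "samples kept for training")
```

The turbulent stretch is excluded from training while the calm majority survives; the same idea carries over to any regime proxy, not just volatility.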
|
|
|
|
 |
 temnik
|
|
Total Posts: 204 |
Joined: Dec 2004 |
|
|
Ok, humor me. I think your "bold conclusion" deals with Takens' theorem.
As far as regimes go, I think market regimes are difficult to identify promptly and reliably, and much of the P&L is lost in that "in-between regime." So they are just another way of overfitting.
|
My trade and my art is to live.
|
|
 |
 prophet
|
Banned |
Total Posts: 149 |
Joined: Oct 2004 |
|
|
There is some relation to Takens' theorem.
Some of my conclusions involve the maximum model complexity that the input data's dimensionality can support, the value of specific types of input data, why machine learning is particularly prone to failure with typical market data and typical trading strategies, and conclusions drawn from large populations of strategies, in-set versus out-of-set, across certain machine learning parameters. These are more a series of practical conclusions than a proper theory.
I think the difference between regime change and general overfitting is that regime change is considered theoretically detectable, while overfitting is always undetectable in in-set testing and only shows up in out-of-set testing.
Given a synthetic, 100% random-walk price series, every predictive model is overfit. However, I can still detect regime variations, such as volatility. |
|
|
|
 |
 kevien
|
|
Total Posts: 1 |
Joined: Jun 2011 |
|
|
What is the future of a quant? |
|
|
 |
 Azx
|
|
Total Posts: 39 |
Joined: Sep 2009 |
|
|
Jaffray Woodriff is interviewed in the latest Market Wizards and talks some about QIM's methodology. Here's the summary from the book:
"1. It is possible to find systems that are neither trend following nor countertrend that work better than either of those more common approaches (judging by the comparison of Woodriff’s return/risk to the return/risk of the universe of systematic traders). 2. It is possible to apply data mining techniques to search huge quantities of data to find useful patterns without necessarily falling victim to curve fitting. (Although, as an important caveat, most people trying to do so will misuse the approach and end up finding patterns that worked very well in the past, but fail in actual trading.) 3. Old price data (e.g., data 30 years old) can be nearly as meaningful as recent data. 4. Systems that work well across many markets are more likely to continue to work in actual trading than systems that do well in specific markets. The lesson is: Design systems that work broadly rather than market-specific systems."
The point that stood out most to me was that he seems to optimize his models on price history going back 30 years. For avoiding over-fitting that makes sense, but given how much the market has changed over that period, it does not. Also, what works on commodities might be very different from what works on equities, so it seems QIM searches only for anomalies that have persisted for several decades across several different asset classes.
Any thoughts? |
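Point 4 of the summary (prefer rules that work broadly) is straightforward to operationalize: test a candidate rule on every market and keep it only if it is profitable on most of them. A toy sketch with random data and a hypothetical 1-day momentum rule (market symbols, data, and the 4-of-5 threshold are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical daily return series for five futures markets (pure noise here).
markets = {name: rng.normal(0.0, 0.01, 1000)
           for name in ["ES", "CL", "GC", "ZB", "EC"]}

def rule_pnl(returns):
    """Toy 1-day momentum rule: hold yesterday's sign."""
    signal = np.sign(returns[:-1])
    return float((signal * returns[1:]).sum())

pnls = {m: rule_pnl(r) for m, r in markets.items()}
keep = sum(p > 0 for p in pnls.values()) >= 4  # profitable on at least 4 of 5
print(pnls)
print("keep rule:", keep)
```

On real data this breadth requirement is one crude guard against the market-specific curve fitting the book warns about, though it is no substitute for out-of-sample testing.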
|
|
|
 |
 quantz
|
|
Total Posts: 237 |
Joined: Jan 2009 |
|
|
He mentions testing "trillions" of systems, which sounds suspicious, as do the origins of his strategies. On the other hand, he seems to know his stuff pretty well. |
|
|
 |
 pj
|
|
Total Posts: 3425 |
Joined: Jun 2004 |
|
|
> testing "trillions" of systems
The oldest scam in the universe. On each iteration, half predict up and half predict down.  |
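pj's point is plain binomial arithmetic: among enough random "systems," some will look flawless purely by chance. A back-of-envelope sketch:

```python
# Expected number of random up/down "systems" that call k moves in a row correctly.
n_systems = 10**12          # "trillions" of candidate systems
n_calls = 40                # independent up/down calls each must get right
expected_perfect = n_systems / 2**n_calls
print(expected_perfect)     # ≈ 0.91: about one flawless track record by luck alone
```

So a trillion coin-flippers are expected to produce roughly one system with forty consecutive correct calls, and it carries no edge whatsoever.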
Job opening: "Programmer, Psychological Support Service"
-Hello! We have an error! It doesn't work, blah-blah-blah
-Would you like to talk about it? |
|
|
 |