Forums  > Pricing & Modelling  > simulation of correlated random variables  
Page 2 of 2

Phorgy Phynance
Total Posts: 2961
Joined: Oct 2004
Posted: 2006-05-05 19:18
sfca, one of the reasons I'm trying to move the discussion is that the noise was burying your question (and getting off the original topic). Hopefully someone can now comment.


Total Posts: 746
Joined: Mar 2006
Posted: 2006-05-05 23:21

sfca: My first comment is that it's essential to backtest a model like this; I think you will find it does a poor job of pricing the derivative. Daily return correlations are not good predictors of longer-term price comovements.

Second, since the correlations are on the returns, there's no reason to update them as price levels change. This should be a straightforward simulation.
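For concreteness, that "straightforward simulation" might be sketched as below. This is a minimal illustration, assuming numpy; the two-asset correlation matrix, the daily vols, and the horizon are all made-up numbers, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])   # assumed daily return correlation
vols = np.array([0.01, 0.02])   # assumed daily vols
days, n_paths = 252, 10_000

L = np.linalg.cholesky(corr)             # corr = L @ L.T, L lower-triangular
z = rng.standard_normal((n_paths, days, 2))
eps = z @ L.T                            # correlated standard normals
rets = eps * vols                        # scale by per-asset vol

# Because the correlations are on the returns, the same Cholesky factor
# is reused every day; nothing is updated as price levels change.
prices = 100.0 * np.exp(rets.cumsum(axis=1))

sample_corr = np.corrcoef(rets[:, :, 0].ravel(), rets[:, :, 1].ravel())[0, 1]
```

The sample correlation of the simulated daily returns should come out close to the 0.6 that was fed in.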

IAmEric: We get sudden large shifts in risk measures all the time. We assume they're data or system errors and fix them. When my Dad was in college, he had a job maintaining cosmic ray detectors, which are located on deserted high mountains. He would drive around to the detectors, take them apart and clean them, then reassemble them. The most common problem was dirt between the plates causing erroneously high counts.

The guy running the study told him about finding a detector giving impossibly high counts, so he cleaned it. It was still high, so he cleaned it again, and reassembled it just in time to catch the tail end of the most spectacular cosmic ray event of the century (he published anyway, extrapolating back from his measurements). So I know it's risky to assume big changes are wrong. But they happen so often, we have no choice.

Financial risk management is not designed to produce red alerts; traders will notice a huge risk shift long before risk management does. Also, any big event makes our correlations useless, so in an emergency you pay attention to more robust measures like PV01 and notional mismatch. VaR gives useful information when there's an explainable trend.

If VaR quadrupled overnight, it would obviously be due to a major market move or a gigantic position shift. In the first case, the main concern would be to manage the positions and help clients, not to reduce VaR. If things calmed down and VaR was still inflated, a decision would be made either to increase capital and live with the larger amount of risk, or to manage things down to the old level of VaR over time.

Normally we would plan a gigantic position shift in advance, but it could happen by surprise, for example if a major clearinghouse defaulted or a natural disaster took out all of our oil refineries. The priority would still be to manage the positions, but there would also be an effort to get more market-neutral rapidly. VaR would not be the measure of choice for that; the main emphasis would be on robust risk measures.


Total Posts: 204
Joined: Dec 2004
Posted: 2006-05-06 02:18
I hate correlations. They usually make sense only in a regime-switching context.

Also, standard correlations are not invariant with respect to vol rescaling, let alone under non-Gaussian distributions.

So my favorite solution is bootstrapped rank correlations. If you have to have a matrix, see this article (no PDF, sorry):

"A distribution-free approach to inducing rank correlations among input variables" R. Iman and W. Conover, Comm. Stat.-Simul. Comp. 11(3) 311-334, 1982.

Good luck!

My trade and my art is living.


Total Posts: 1629
Joined: Jun 2004
Posted: 2006-05-06 10:47

re. VaR and limits

A CEO of a bank once said to me 'VaR is a measure of what gear we are driving in', which I thought was quite profound. What is important is the rough size of the VaR figure, not whether we are one currency unit over or within the VaR limit.

An institution which has not signalled what its response to those two events (just over or just within) would be has something seriously wrong with it. The VaR limit should be an indication by management of the level of risk-taking they are happy with - what gear they want the bank to be in. If a limit is suddenly breached because of a spike in vol etc., then management have to signal whether they want the limit to be re-established very quickly, whether they want it re-established in some orderly fashion over a while, or whether they are happy to operate in a regime which they regard as temporary (the market will revert and we will again be within the limit in due course). Aaron has made this point quite eloquently, I think.

The most interesting case is if we are in 'normal market conditions' and over time a desk operates closer and closer to its limit. Another fairly normal day could occur and that desk could be over. There are game-theoretic aspects to what happens next. If the desk's response 'we are not really over' is accepted, then the limit is seen as being a flexible one. However, the fact is that given slightly different computations, or a slightly different MC sample, they might indeed have been inside the limit. Thus it is important to signal to the desk in advance what the response will be.

Thus, a tangible case of a gross institutional problem is if the VaR calculators say nothing to the desk until the day it goes over its limits, and then suddenly get all anal and require the limit to be re-established. According to best practice, Basle etc., the risk manager is not supposed to behave this way. Nevertheless I do think it happens sometimes.

Graeme West


Total Posts: 1677
Joined: Aug 2005
Posted: 2006-08-11 21:46

I've been working on some code that involves rounding a series of correlated random variables, and I want to quantify, or at least get some idea of, how much error I'm introducing.

Anyone have any rules of thumb or advice? I implemented the algorithm Aaron mentioned earlier in this thread as a generic test to muck around with, and I couldn't get a reasonable error to show up.

In my function (see below) I don't get significant differences between values of n up to n = 1,000,000,000. The largest errors I saw were on the order of 1.0e-13; usually it was more like 1.0e-16. The size of the error didn't seem related to the value of n, indicating to me that this is not a way to aggregate error in MATLAB.

Aaron: When someone writes:

Y = C0 + C1*X + C2*X^2 + . . .

instead of:

Y = C0 + X*(C1 + X*(C2 + ...))

I know the answer will be wrong (and probably the question as well)
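The two forms Aaron contrasts can be sketched in Python; the coefficients are illustrative and `naive`/`horner` are my own helper names. Both evaluate the same polynomial, but the nested (Horner) form uses one multiply and one add per coefficient and never forms large powers of x explicitly:

```python
def naive(coeffs, x):
    # power-by-power evaluation: C0 + C1*x + C2*x^2 + ...
    return sum(c * x**k for k, c in enumerate(coeffs))

def horner(coeffs, x):
    # nested evaluation: C0 + x*(C1 + x*(C2 + ...))
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

coeffs = [3.0, -2.0, 0.5, 1.25]   # illustrative C0..C3
```

For well-behaved inputs the two agree to around machine precision, which is consistent with the tiny errors reported above.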

The Function I implemented:

function [e1 e2] = createError(n)
% sum c*x^c for c = 1..n by powers and by the nested form,
% and compare both against the closed form of the sum
x = 0.5;
s1 = 0;
for c = 1:n
    s1 = s1 + c*x^c;          % power-by-power sum
end
s2 = 0;
for c = n:-1:1
    s2 = x*(c + s2);          % nested (Horner) form
end
exact = x*(1 - (n+1)*x^n + n*x^(n+1))/(1 - x)^2;
e1 = abs(s1 - exact);
e2 = abs(s2 - exact);

the only reason it would be easier to program in C is that you can't easily express complex problems in C, so you don't. -comp.lang.lisp


Total Posts: 746
Joined: Mar 2006
Posted: 2007-01-07 01:46

I'm sorry I didn't see this earlier.

The error in each term of e1 is on the order of the least significant digit in your numeric representation. If all the terms mattered, then to get errors on the order of 10^-13 to 10^-16 with n = 10^9, you'd need to store numbers to something like 23 or 24 digits, say 80 bits. That's a waste.

The reason things don't matter much in this case is that the higher-order terms don't matter much. For most x's, the terms drop below 10^-13 pretty quickly. In that case, why add up all billion numbers? If you had terms like f(c)*x^c instead of c*x^c, with an f such that the higher-order terms mattered, you would get different answers here.
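A quick sketch of that point in plain Python, assuming terms of the form c*x^c with x = 0.5 (an illustrative choice): the loop finds the first term that falls below 10^-13, and it happens after only a few dozen terms, so the remaining terms of a billion-term sum contribute nothing detectable.

```python
# find the first index c whose term c*x^c drops below 1e-13
x = 0.5
c = 1
while c * x**c > 1e-13:
    c += 1
# for x = 0.5 this happens well before c = 60, so virtually all of a
# billion-term sum is invisible at the 1e-13 level
```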


Total Posts: 1677
Joined: Aug 2005
Posted: 2007-01-09 18:19

Thanks for the clarification, Aaron.  I'll dig up that code and write something that illustrates the difference to myself (I am unable to remember concepts until I code them).

BTW, I saw your name on the speaker list for the Feb. GARP conference, but it doesn't say anything about the topics.


the only reason it would be easier to program in C is that you can't easily express complex problems in C, so you don't. -comp.lang.lisp


Total Posts: 746
Joined: Mar 2006
Posted: 2007-01-10 20:52

I'm on a panel about Basel Capital, then speaking about stress testing.

My trouble with these conferences is that I agree to speak six to eight months before they happen. By the time they come around, my thinking about the topic has changed; sometimes I have to change the whole topic in order to have anything useful and relevant to say. So don't pay too much attention to the brochure.


Total Posts: 452
Joined: Jun 2005
Posted: 2017-04-03 14:35
I want to add one more caveat about the Cholesky decomposition: it is not invariant under permutation of the variables, meaning that if I use Cholesky to generate an MC sample of correlated underlyings, I will get a different risk estimate for each permutation.
Simply put, the upper-left matrix element equals 1, so the transformation of the independent {r_i}, i = 1..N into the correlated {v_i} gives r_1 = v_1 always, while variance(v_i - r_i) grows as a function of i. Swapping the r_1 and r_N variables leads to a different result.
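A minimal sketch of the permutation effect, assuming numpy and an illustrative 3x3 correlation matrix. With a lower-triangular Cholesky factor of a correlation matrix, L[0,0] = 1, so the first correlated variable equals the first independent draw exactly, while later variables mix more of the draws; reordering the variables therefore changes the individual simulated paths (though not the joint distribution):

```python
import numpy as np

rng = np.random.default_rng(2)

corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])   # illustrative correlation matrix
L = np.linalg.cholesky(corr)         # lower-triangular, L[0, 0] = 1

r = rng.standard_normal(3)           # independent draws r_1..r_3
v = L @ r                            # correlated draws: v_1 = r_1 exactly

perm = [2, 1, 0]                     # swap variables 1 and 3
Lp = np.linalg.cholesky(corr[np.ix_(perm, perm)])
vp = Lp @ r                          # now variable 3 gets r_1 verbatim
```

With the permuted ordering, the variable placed first is the one reproduced exactly from r_1, so sample-by-sample the two orderings give different paths even though both target the same correlation matrix.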

Keeping this in mind, I googled the problem and found this article (with a long list of references to attempts to solve the problem).

By the way, is there any way to estimate the variance of the final result over all possible permutations? For risk estimation that would be enough, as one could simply use it as an add-on.