Forums  > Trading  > Sizing with an unknown number of signals  

Total Posts: 141
Joined: Sep 2015
Posted: 2021-06-22 13:14
Let's say I have a signal that is triggered by some "event".

The event can happen any number (N) of times per day, month or quarter.

I have a bankroll W.

Ideally, I would have a perfect forecast of N and invest W/N in each idea (say I want equal notional weight).

But, if I don't know N, and have equal (low) conviction in each idea, how do you go about allocating to an opportunity like this?

I could look at the historical N per week (or month) and allocate accordingly. If more events show up, I could sell pro-rata and re-invest in the new event... etc.
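As a minimal sketch of that pro-rata idea (the function name and the dict-of-notionals representation are my own, and transaction costs are ignored):

```python
def add_event(positions, bankroll, event_id):
    """Pro-rata rebalance: shrink each existing notional by N/(N+1)
    and fund the new event with the freed cash, preserving equal
    W/N weights.  Minimal sketch; costs and slippage ignored."""
    n = len(positions)
    if n == 0:
        return {event_id: bankroll}
    scaled = {k: v * n / (n + 1) for k, v in positions.items()}
    # the cash freed by trimming funds the new event
    scaled[event_id] = sum(positions.values()) - sum(scaled.values())
    return scaled

book = add_event({}, 100.0, "e1")        # {"e1": 100.0}
book = add_event(book, 100.0, "e2")      # {"e1": 50.0, "e2": 50.0}
```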

Just wondering if there is something smarter to do here.


Total Posts: 1377
Joined: Jun 2005
Posted: 2021-06-22 19:44
1. You should know the distribution of N.
For example, N ~ Poisson(a), or approximately N ~ Normal(mu = a, sigma = sqrt(a)).
Hence, I would size to an upper quantile of that distribution, e.g. W / (a + 2*sqrt(a)) per idea. From time to time (~2% of the time) you will hit your own limit.

You get the idea.
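A sketch of point 1 under the Poisson assumption, sizing to a ~2-sigma upper quantile of N (the function name and the z parameter are mine):

```python
import math

def per_event_allocation(bankroll, a, z=2.0):
    """Size each idea as W / N_max, where N_max = a + z*sqrt(a) is a
    normal-approximation upper quantile of N ~ Poisson(a).  With z=2
    the realized N exceeds the budget only in the ~2% one-sided tail."""
    n_max = math.ceil(a + z * math.sqrt(a))
    return bankroll / n_max

# e.g. a = 25 expected events: N_max = 25 + 2*5 = 35 slots
```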


2. Knowing the expected ROI and the uncertainty of each idea will change this allocation rule. That is where Sharpe or Kelly comes in.

3. The investment horizon will change the proposed solution too. Without adding much complexity, I would split the "large" current period DT into small steps dt and allocate investments per step. For example, for Idea(dt) and Idea(2*dt) you have 2 allocations for the overlapping period dt, and 1 for the rest.
Likely there is a closed-form solution.
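A toy version of the dt-splitting in point 3, assuming each idea is represented as a (start, end) holding interval (representation and function name are my own):

```python
def slice_allocations(ideas, bankroll, dt=1.0):
    """Split the horizon into small dt slices; within each slice,
    share the bankroll equally among the ideas live in that slice.
    `ideas` is a list of (start, end) intervals -- hypothetical format."""
    t0 = min(s for s, _ in ideas)
    t1 = max(e for _, e in ideas)
    slices = []
    t = t0
    while t < t1:
        live = [i for i, (s, e) in enumerate(ideas) if s <= t < e]
        slices.append({i: bankroll / len(live) for i in live} if live else {})
        t += dt
    return slices

# two ideas overlapping for one dt: each gets W/2 in the overlap,
# the full W in its exclusive slice
slice_allocations([(0, 2), (1, 3)], 100.0)
```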

... What is a man
If his chief good and market of his time
Be but to sleep and feed? (c)


Total Posts: 26
Joined: Nov 2018
Posted: 2021-06-23 03:11
I think you need to be more specific about "equal conviction" here. A signal is triggered and perhaps there will be some sort of decay along its path (exponential decay?). I would assume more return is realized closer to the "event". Then you can test when to rotate out of a "stale" signal and into a newly triggered one and balance the turnover with transaction cost, etc. Be aware of parameter fitting though...
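The decay-and-rotation idea above can be sketched as follows; the exponential half-life model, the bps numbers, and both function names are hypothetical, and this sidesteps the parameter-fitting caveat entirely:

```python
import math

def signal_weight(age, half_life):
    """Exponential decay of a signal's conviction with age since its
    trigger (age and half_life in the same time units; toy model)."""
    return math.exp(-math.log(2) * age / half_life)

def should_rotate(age, half_life, cost_bps, edge_bps):
    """Rotate out of a stale signal into a freshly triggered one only
    if the edge regained (fresh weight minus stale weight, times the
    expected edge) clears the round-trip transaction cost."""
    regained = (1.0 - signal_weight(age, half_life)) * edge_bps
    return regained > cost_bps
```

With a 5-period half-life, a 40 bps edge, and 10 bps round-trip cost, rotating after 1 period does not pay, but after 5 periods it does.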


Total Posts: 684
Joined: May 2006
Posted: 2021-06-23 09:32
> Just wondering if there is something smarter to do here.

Well, smarter or not - it's mostly about details.

E.g., you are weighting W / N - in reality, you would first rescale them all to the same volatility. Cash notional is meaningless.

Then, this all works if the assets you are picking are orthogonal in some sense. If they are not, you need to orthogonalize them. So it's W / N / sigma per orthogonal component, not necessarily per asset.

Then, the weighting. Basically, you can run through a whole spectrum of weights, from weighting in proportion to Sharpe to unit weighting. Of orthogonal components, scaled to the same vol. That's a function of what you are optimizing for. To within reasonable assumptions, the general weighting is Sharpe^q for q between 1 and 0.

So q=1, weighting ~ Sharpe means you really care about the return per unit variance, but you don't mind tail risks. Unit weighting, q=0, means you want to diversify tail risks, and you are happy to give up as much return per unit variance as you need to do that. Something in between gives you a balance - a bit of return and a bit of tail risk.
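A minimal sketch of that Sharpe^q family on already-orthogonal components (function name and inputs are my own, not from the post):

```python
import numpy as np

def sharpe_q_weights(mu, sigma, q):
    """Weight components by Sharpe^q, each leg rescaled to unit vol.
    q=1 chases return per unit variance; q=0 is unit (tail-diversifying)
    weighting; intermediate q trades the two off.  Assumes the
    components are already orthogonal -- otherwise orthogonalize first."""
    sharpe = np.asarray(mu) / np.asarray(sigma)
    raw = np.sign(sharpe) * np.abs(sharpe) ** q  # Sharpe^q, sign kept
    raw = raw / np.asarray(sigma)                # rescale to unit vol
    return raw / np.sum(np.abs(raw))             # gross exposure = 1

# q=0: equal risk per component; q=1: weight ~ mu / sigma^2
```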

And I'm glossing over funding cost. So there is a hurdle rate for each component, which includes the funding costs, uncertainty, and all sorts of other things.

Obviously, in real life, you may not have a clue about any of these things. So you may as well end up weighting W/N per asset...

"There is a SIX am?" -- Arthur


Total Posts: 1446
Joined: Jun 2004
Posted: 2021-06-23 12:46
I'm going to throw in a few buzzwords without thinking through it too much

would you use some sort of backward-induction optimization (like HJB) combined with a Kelly bet sizing (depending on your utility function)

the idea being that this bet has an expected impact on your utility, but so does holding capital for future opportunities.

if you assume the distribution of future events is pretty constant (and just depends on time remaining) then the value of holding back capital will be pretty smooth (in relation to time and events). this might not always be the case though; think, for example, of credit defaults in a small universe.

this scenario reminds me of "teaching computers to play poker" which obviously has elements of bankroll management (combined with game theory) but was only solved, in very limited cases, quite recently.
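Interestingly, under pure log utility the backward induction collapses: a toy grid-search sketch (all names and parameters hypothetical) with an even-money opportunity of win probability p each period recovers the static Kelly fraction 2p - 1 at every horizon, so the value of holding back capital only shows up once you add frictions, lockups, or a non-log utility.

```python
import numpy as np

def backward_induction(p, horizon, grid=np.linspace(0.0, 0.99, 199)):
    """Backward induction over remaining periods: each period, pick the
    bet fraction f maximizing expected log wealth going forward.  With
    log utility the problem separates, so every period's optimum is the
    Kelly fraction f* = 2p - 1."""
    growth = p * np.log1p(grid) + (1 - p) * np.log1p(-grid)
    value = 0.0                      # value with zero periods remaining
    policy = []
    for _ in range(horizon):
        candidates = growth + value  # expected log wealth, this f then optimal play
        best = int(np.argmax(candidates))
        value = candidates[best]
        policy.append(float(grid[best]))
    return policy

policy = backward_induction(p=0.55, horizon=5)
# every entry sits near the Kelly fraction 2*0.55 - 1 = 0.10
```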


Total Posts: 516
Joined: Dec 2008
Posted: 2021-06-23 18:12

'But, if I don't know N, and have equal (low) conviction in each idea, how do you go about allocating to an opportunity like this?'

responses draw out the fact that information is gained over time

one oversimple, broad-brush approach to considering N could be to use the probability symmetry principle as an initial guess (or do we know more?) and update with Bayes... an interesting toy problem from Mosteller comes to mind; he gives a couple of classical approaches, and the free book by Downey covers the Bayesian angle -

'(a) A railroad numbers its locomotives in order, 1, 2, ..., N . One day you see a locomotive and its number is 60. Guess how many locomotives the company has. (b) You have looked at 5 locomotives and the largest number observed is 60. Again guess how many locomotives the company has.'
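For the quoted locomotive (German tank) problem, one classical answer is the minimum-variance unbiased estimator N_hat = m(1 + 1/k) - 1, where m is the largest serial seen among k sightings; the Bayesian version additionally needs a prior on N (Downey uses a bounded uniform prior), which I'll leave out. A sketch (function name is mine):

```python
def locomotive_estimate(max_seen, k):
    """Minimum-variance unbiased estimator for the German tank /
    locomotive problem: N_hat = m * (1 + 1/k) - 1, with m the largest
    serial observed among k independent sightings."""
    return max_seen * (1 + 1 / k) - 1

# (a) one sighting of #60:   2*60 - 1  = 119
# (b) five sightings, max 60: 60*1.2 - 1 = 71
```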

interesting to me that there is enough to do anything with these...seems a great icebreaker at an AI conference