A handful of the comments are skeptical of the utility of this method. I can tell you as a physical scientist, it is common to make the same measurement with a number of measuring devices of differing precision. (e.g. developing a consensus standard using a round-robin.) The technique Cook suggests can be a reasonable way to combine the results to produce the optimal measured value.
I wonder if this minimum variance approach of averaging the measurements agrees with the estimate of the expected value we'd get from a Bayesian approach, at least in a simple scenario: say, a uniform prior over the thing we're measuring, with two measuring devices whose unbiased errors are described by normal distributions.
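For the two-device case, a quick sketch (assuming a flat prior on the true value \mu and known error variances v_1, v_2) suggests they do agree:

```latex
% Independent readings y_i \sim N(\mu, v_i), flat prior on \mu.
% The posterior is then proportional to the likelihood:
p(\mu \mid y_1, y_2) \propto
    \exp\!\left( -\frac{(y_1-\mu)^2}{2 v_1} - \frac{(y_2-\mu)^2}{2 v_2} \right)
% Completing the square in \mu gives a normal posterior whose mean is
\mathbb{E}[\mu \mid y_1, y_2] = \frac{y_1/v_1 + y_2/v_2}{1/v_1 + 1/v_2}
% i.e. the same inverse-variance weights t_i \propto 1/v_i as the
% minimum-variance combination.
```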
I'm not a physical scientist, but I spend a lot of time assessing the performance of numerical algorithms, which is maybe not totally dissimilar to measuring a physical process with a device. I've gotten good results applying Simple and Stupid statistical methods. I haven't tried the method described in this article, but I'm definitely on the lookout for an application of it now.
Yeah, and this is a much more intuitive way of generalising from the n = 2 case. Weights are proportional to inverse variance even for n > 2. Importantly, this assumes independence, so it doesn't translate to portfolio optimisation very easily.
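A quick Monte Carlo sketch of the n > 2 case (the variances here are made up; three independent, unbiased normal estimators):

```python
import numpy as np

# Three independent, unbiased estimators of the same quantity,
# with hypothetical variances 1, 4, and 9.
rng = np.random.default_rng(1)
v = np.array([1.0, 4.0, 9.0])
x = rng.normal(0.0, np.sqrt(v), size=(100_000, 3))

t = (1 / v) / (1 / v).sum()      # weights proportional to inverse variance
print(np.var(x @ t))             # ~0.73: beats every single estimator
print(np.var(x.mean(axis=1)))    # equal weights: ~1.56
print(np.var(x[:, 0]))           # all-in on the best device: ~1.0
```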
Your slippery slope makes no sense to me. What do we need XML for here? Is anybody asking for it? You can use your own grammar checker but you can't render your own equations and submit them.
ADDED. Because the new functionality will be used to create cutesy effects for reasons that have nothing to do with communicating math, increasing the demand for moderation work.
Why? LaTeX is not how maths is supposed to be read, else we'd all be doing that. It's how it might be written.
edit: Nobody is going to use maths for cutesy effects. Where have you ever seen that happen? Downvote them if they do. It is not going to be a big deal.
If A and B have different volatilities, it's rather counter-intuitive to allocate proportionally rather than just all to the one with the lower volatility... :-/
I agree, and I had to think about it for a second, but now it seems obvious. It works for the exact same reason that averaging multiple independent measurements can give a more accurate result. The key fact is that the different random variables are all independent, so it's unlikely that the various deviations from the means will line up in the same direction.
I realize that this is meant as an exercise to demonstrate a property of variance. But most investors are risk-averse when it comes to their portfolio - for the example given, a more practical target to optimize would be worst-case or near-worst-case return (e.g. p99). For calculating that, a summary measure like variance or mean does not suffice - you need the full distribution of the RoR of assets A and B, and find the value of t that optimizes the p99 of At+B(1-t).
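A sketch of that full-distribution approach (reading "near-worst-case return" as the 1st-percentile return, i.e. the p99 loss; both distributions below are made-up placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
A = rng.normal(0.05, 0.10, n)                    # hypothetical RoR of asset A
B = 0.05 + 0.08 * rng.standard_t(df=3, size=n)   # fat-tailed RoR of asset B

# Sweep t and pick the mix that maximizes the near-worst-case return.
ts = np.linspace(0.0, 1.0, 101)
p1 = [np.percentile(t * A + (1 - t) * B, 1) for t in ts]
best = ts[int(np.argmax(p1))]
print(best, max(p1))
```

The optimal t here will generally differ from the minimum-variance t, since the tails of B carry extra weight in the percentile objective.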
There exists a problem in real life that you can solve in the simple case, and invoke a theorem in the general case.
Sure, it's unintuitive that I shouldn't go all in on the smallest variance choice. That's a great start. But, learning the formula and a proof doesn't update that bad intuition. How can I get a generalizable feel for these types of problems? Is there a more satisfying "why" than "because the math works out"? Does anyone else find it much easier to criticize others than themselves and wants to proofread my next blog post?
Here's my intuition: you can reduce the variance of a measurement by averaging multiple independent measurements. That's because when they're independent, the worst-case scenario of the errors all lining up is pretty unlikely. This is a slightly different situation, because the random variables aren't necessarily measurements of a single quantity, but otherwise it's pretty similar, and the intuition about multiple independent errors being unlikely to all line up still applies.
Once you have that intuition, the math just tells you what the optimal mix is, if you want to minimize the variance.
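To make it concrete, a toy example: with independent A and B, Var[A] = 1 and Var[B] = 4, the variance of the mix tA + (1-t)B is t^2 * 1 + (1-t)^2 * 4, which is minimized at t = 4/5. That gives 0.64 + 0.16 = 0.8, which beats going all-in on A (variance 1).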
I wish there was a Strunk and White for mathematics.
While by no means logically incorrect, it feels inelegant to set up a problem using variables A and B in the first paragraph and solve for X and Y in the second (compounded by the implicit X==B and Y==A).
This is why Markowitz isn't used much in the industry, at least not in a plug-and-play fashion. Empirical volatility, and the variance-covariance matrix more generally speaking, is a useful descriptive statistic, but the matrix has high sampling variance, which means Markowitz is garbage in, garbage out. Unlike in other fields, you can't just make/collect more data to reduce the sampling variance of the inputs. So you want to regularize the inputs or have some kind of hybrid approach that has a discretionary overlay.
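One common version of that regularization, sketched with sklearn's Ledoit-Wolf shrinkage estimator (the returns here are simulated placeholders, not real data):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Placeholder data: 250 days of returns for 10 assets.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 10))

# The raw sample covariance has high sampling variance; Ledoit-Wolf
# shrinks it toward a structured target, which stabilizes the inverse.
sigma = LedoitWolf().fit(returns).covariance_

# Global minimum-variance weights: w proportional to sigma^{-1} 1.
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)
w /= w.sum()
print(np.round(w, 3))
```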
That's the first thing I thought of. I read the opening of this article and thought "oh, this could be applied to a load balancing problem", but it immediately becomes obvious that you can't assume the variance is going to be uniform over time.
Doesn't it make more sense to measure and minimize the variance of the underlying cash flows of the companies one is investing in, rather than the prices?
Price variance is a noisy statistic not based on any underlying data about a company, especially if we believe that stock prices are truly random.
----
Write v_i = Var[X_i]. John writes

    t_i = \frac{\prod_{m \neq i} v_m}{\sum_{k=1}^n \prod_{m \neq k} v_m}

But if you multiply top and bottom by (1 / \prod_{m=1}^n v_m), you just get

    t_i = \frac{1/v_i}{\sum_{k=1}^n 1/v_k}

No need to compute elementary symmetric polynomials. If you plug those optimal (t_i) back into the variance, you get

    \mathrm{Var}\Big[\sum_{i=1}^n t_i X_i\Big] = \frac{1}{\sum_{k=1}^n 1/v_k} = \frac{H}{n}

where `H = n / (\sum_{k=1}^n 1/v_k)` is the harmonic mean of the variances.

[0] https://asciimath.org/
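A numerical check of that closed form:

```python
import numpy as np

# Independent X_i with variances v_i; inverse-variance weights.
v = np.array([1.0, 2.0, 5.0, 10.0])
t = (1 / v) / (1 / v).sum()
var_mix = (t**2 * v).sum()        # cross terms vanish by independence
H = len(v) / (1 / v).sum()        # harmonic mean of the variances
print(var_mix, 1 / (1 / v).sum(), H / len(v))   # all three agree
```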
1. How to Write Mathematics — Paul Halmos
2. Mathematical Writing — Donald Knuth, Tracy Larrabee, and Paul Roberts
3. Handbook of Writing for the Mathematical Sciences — Nicholas J. Higham
4. Writing Mathematics Well — Steven Gill Williamson
Don’t make decisions for evolving systems based on statistics.
Insider info on the other hand works much better.