
Hacker Remix

Understanding Gaussians

137 points by lapnect 22 hours ago | 29 comments

mjhay 14 hours ago

Great article, but I wish it had made more explicit mention of the* central limit theorem (CLT), which I think is what makes the normal distribution "normal." For those not familiar, here is the gist: suppose you have `n` independent, finite-variance random variables with support in the real numbers (so things like count R.V.s work). Asymptotically, as n -> infinity, the distribution of their mean approaches a normal distribution. Usually n doesn't have to be big for this to be a reasonable approximation; n ~ 30 is often fine. The CLT extends in a
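A minimal Python sketch of this, for readers who want to see it run (the exponential summands are an arbitrary finite-variance choice, and n = 30 and the trial count are just illustrative):

```python
import random
import statistics

# Sample means of n exponential draws (skewed, but finite variance).
random.seed(0)
n, trials = 30, 20_000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

# By the CLT the means should look approximately normal around the
# exponential's mean 1, with spread shrinking like 1/sqrt(n).
print(statistics.fmean(means))   # close to 1.0
print(statistics.stdev(means))   # close to 1/sqrt(30), about 0.183
```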

To me, this is one of the most astonishing things about probability theory, as well as one of the most useful.

The normal distribution is just one of a class of "stable distributions," all sharing the properties of sums of their R.V.s being in the same family.

The same idea can be generalized much further. The underlying idea is the distribution of "things" as they get asymptotically "bigger." The density of eigenvalues of random matrices with i.i.d. entries approaches the Wigner semicircle distribution, which is exactly what it sounds like. It plays the role of the normal distribution in the practically promising theory of free (noncommutative) probability.

https://en.wikipedia.org/wiki/Wigner_semicircle_distribution
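A quick numerical sketch of the semicircle law (the matrix size and the GOE-style symmetrization are illustrative choices, not from the comment):

```python
import numpy as np

# Symmetric random matrix with i.i.d. Gaussian entries (GOE-style),
# scaled so the limiting spectrum is the semicircle on [-2, 2].
rng = np.random.default_rng(0)
n = 1000
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2 * n)   # off-diagonal variance 1/n

eigs = np.linalg.eigvalsh(h)
# Wigner's law: the eigenvalue density approaches
# rho(x) = sqrt(4 - x^2) / (2 * pi) on [-2, 2].
print(eigs.min(), eigs.max())    # both close to the edges -2 and 2
```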

Further reading:

https://terrytao.wordpress.com/2010/01/05/254a-notes-2-the-c...

*there's a few normal distribution CLTs, but this is the intuitive one that usually matters in practice

mturmon 10 hours ago

> ...most astonishing things about probability theory...

It's a core result, perhaps the most useful core result of standard probability theory.

But from some points of view, the CLT is not actually astonishing.

If you know what Terry Tao (in the convenient link above) calls the "Fourier-analytic proof", the CLT for IID variables can seem inevitable, as long as the underlying distribution is such that the moment generating function (the density's Fourier transform; strictly speaking, the characteristic function) of the first summand exists.

I'd be interested to hear if you have sympathy with the following reasoning:

The Gaussian distribution corresponds to an MGF with second-order behavior like 1 - t^2/2 around the origin. You only care about MGF behavior around the origin because, as N -> \infty, that's all that matters.

Because of the way we normalized the sum (we subtracted the mean), the first-order term in the MGF will vanish. We purposely zeroed it out by centering the sum around zero. That leaves the second-order term, which will give a Gaussian distribution.

So in short:

    - MGF of one summand exists => MGF of (recentered) sum exists
    - We have an expression for the MGF of the recentered sum (convolution property)
    - Only the MGF behavior around the origin matters
    - We re-center the sum, causing the first-order term to vanish
    - We invert the resulting MGF and recover the Gaussian

I'm not being precise here, but I hope the idea comes through.
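The sketch can also be checked numerically, assuming uniform summands (an arbitrary finite-variance choice): estimate the characteristic function of the standardized sum by Monte Carlo and compare it to the Gaussian limit e^{-t^2/2} near the origin.

```python
import numpy as np

# Monte Carlo estimate of the characteristic function (Fourier transform
# of the density) of a standardized sum of n Uniform(-1, 1) variables.
rng = np.random.default_rng(1)
n, samples = 30, 200_000
x = rng.uniform(-1, 1, size=(samples, n))
s = x.sum(axis=1) / np.sqrt(n / 3)   # Var(Uniform(-1,1)) = 1/3, so Var(s) = 1

# Near the origin the Gaussian limit e^{-t^2/2} is already a good
# match at n = 30.
vals = {t: np.exp(1j * t * s).mean().real for t in (0.5, 1.0, 2.0)}
for t, phi in vals.items():
    print(t, phi, np.exp(-t ** 2 / 2))
```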

abetusk 12 hours ago

Good for you for properly stating the assumptions that go into the CLT and for mentioning other stable distributions.

I disagree about the Gaussian being the "normal" case or the "one that usually matters". Finite variance is a big assumption and one that's routinely violated in practice.

For those who are interested, Lévy-stable distributions are the general class of distributions that arise as limits of sums of random variables [0]. They are closely tied to "fat-tailed" or "heavy-tailed" distributions such as the Pareto [1] and the Cauchy [2] (the Cauchy is itself stable; the Pareto is not, but lies in a stable law's domain of attraction).

Is there an intuitive explanation for why the Wigner semicircular law is basically the "logarithm" of the Gaussian in some respect?

[0] https://en.wikipedia.org/wiki/L%C3%A9vy_distribution

[1] https://en.wikipedia.org/wiki/Pareto_distribution

[2] https://en.wikipedia.org/wiki/Cauchy_distribution
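The infinite-variance failure mode described above is easy to demonstrate; a sketch using the Cauchy (which has no mean, so the CLT does not apply):

```python
import math
import random
import statistics

# Standard Cauchy draws via the inverse CDF: tan(pi * (U - 1/2)).
random.seed(0)
draws = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(1_000_000)]

# The median settles near 0, but the mean does not: the average of n
# Cauchy draws is itself standard Cauchy, so no amount of data helps.
print(statistics.median(draws))       # near 0
print(statistics.fmean(draws))        # erratic, dominated by rare huge draws
print(max(abs(d) for d in draws))     # enormous outliers are routine
```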

CrazyStat 7 hours ago

“Normal” in the context of the normal distribution actually derives from the technical meaning of normal as perpendicular, like the normal map in computer graphics. The linguistic overloading with normal in the sense of usual or ordinary is an unfortunate coincidence.

abetusk 6 hours ago

It looks like that story is apocryphal.

There's a reddit question which refutes this idea [0] and provides some sources (which are paywalled) [1] [2].

That reddit question also has a source [3] that claims Galton used the term "normal" in the "standard model, pattern type" sense from the 1880s onwards:

""" ... However in the 1880s he began using the term "normal" systematically: chapter 5 of his Natural Inheritance (1889) is entitled "Normal Variability" and Galton refers to the "normal curve of distributions" or simply the "normal curve." Galton does not explain why he uses the term "normal" but the sense of conforming to a norm ( = "A standard, model, pattern, type." (OED)) seems implied. """

Though I haven't confirmed, it looks like Gauss never used the term "normal" to denote orthogonality of the curve.

Do you have a source?

[0] reddit.com/r/statistics/comments/rvuj4r/q_why_did_karl_pearson_call_the_gaussian

[1] https://www.google.co.uk/books/edition/Statistics_and_Public...

[2] https://www.jstor.org/stable/2684625

[3] https://condor.depaul.edu/ntiourir/NormalOrigin.htm

tylerneylon 10 hours ago

I like the font, images, and layout of this article. Does anyone happen to know if a tool (that I can also use) helped achieve this look?

Or if not, does anyone know how to reach the author? I may have missed it, but I didn't even see the author's name anywhere on the site.

esafak 6 hours ago

The maths typeface is Neo-Euler: https://fontlibrary.org/en/font/euler-otf

generuso 10 hours ago

The author is Peter Bloem, and the html is compiled from these sources: https://github.com/pbloem/gestalt.ink

with the help of mathjax: https://www.mathjax.org/

The font seems to be Georgia.

creata 6 hours ago

The CSS says:

    font-family: charter, Georgia, serif;

You can get a convenient copy of Charter here: https://practicaltypography.com/charter.html

Another free font based on (and largely identical to?) Charter is Charis: https://software.sil.org/charis/

wodenokoto 18 hours ago

> You can see that the data is clustered around the mean value. Another way of saying this is that the distribution has a definite scale. [..] it might theoretically be possible to be 2 meters taller than the mean, but that’s it. People will never be 3 or 4 meters taller than the mean, no matter how many people you see.

The way the author defines definite scale is that there is a maximum and a minimum, but that is not true for a Gaussian distribution. It is also not true that, if we keep sampling wealth (the article's example of a distribution without definite scale), there is no limit to the maximum.
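The contrast the article is gesturing at can be simulated: the running maximum of Gaussian samples grows only like sqrt(2 ln n), while a Pareto maximum keeps making large jumps. A Python sketch (the tail index 1.5 is an arbitrary illustrative choice):

```python
import random

# Running maximum of Gaussian samples (definite scale) vs. Pareto
# samples with tail index 1.5 (no definite scale).
random.seed(0)
results = {}
for n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    g = max(random.gauss(0, 1) for _ in range(n))
    p = max(random.paretovariate(1.5) for _ in range(n))
    results[n] = (g, p)
    print(n, round(g, 2), round(p, 1))
# The Gaussian maximum creeps up roughly like sqrt(2 ln n);
# the Pareto maximum grows polynomially in n.
```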

klysm 17 hours ago

I think he’s saying that the distribution of human heights has definite scale, not the Gaussian?

wodenokoto 3 hours ago

No, the author very much says the Gaussian has definite scale:

> There are a few distributions like this with a definite scale, but the Gaussian is the most famous one.

dekhn 10 hours ago

Human height (by gender) very nearly follows a Gaussian distribution, because height is determined (hand-waving away complexity) by a sum of many independent random variables. In reality it's not truly Gaussian for a number of reasons.

nwnwhwje 14 hours ago

Nothing is Gaussian then. What probability distribution allows for Graham's Number to be a possibility?

deepnet 16 hours ago

Zeng Jinlian (1964–1982) of China was 8 feet, 1 inch (2.46 meters) when she died, making her the tallest woman ever. According to Guinness World Records, Zeng is the only woman to have passed 8 feet (about 2.44 meters).

Mean from the article: 163 cm.

So the facts check out.

Author is correct.

Also very interesting is the suggestion that human height is not Gaussian.

Snip :

“ Why female soldiers only? If we were to mix male and female soldiers, we would get a distribution with two peaks, which would not be Gaussian.

Which begs the question what other human statistics are non Gaussian if sexes are mixed and does this apply to other strong differentiators like historical time, nutrition, neural tribes ?

Statistics is highly non-trivial. “

shiandow 16 hours ago

It's an oversimplification but at some point there is really no difference between impossible and 'incredibly small probability'.

I mean sure it is possible for all air molecules to randomly all go to the same corner of the room at the same time (heck it is inevitable in some sense), you can play it back in reverse to check no laws of physics were broken, but practically that simply does not happen.

KK7NIL 11 hours ago

> at some point there is really no difference between impossible and 'incredibly small probability'.

This is not true.

Using your air molecules example: every microstate (i.e., the locations and speeds of all the molecules) possible under the given macrostate (temperature, number of molecules, etc.) has probability 0 of occurring, but is not impossible, simply because the microstates are real-valued and the real numbers are uncountable. Impossible microstates also have probability 0 but are obviously not the same thing.

lamename 16 hours ago

> The best way to do that, I think, is to do away entirely with the symbolic and mathematical foundations, and to derive what Gaussians are, and all their fundamental properties from purely geometric and visual principles. That’s what we’ll do in this article.

Perhaps I have a different understanding of "symbolic". The article proceeds to use various symbolic expressions and equations. Why say this above if you're not going to follow through? Visuals are there but peppered in.

Torkel 14 hours ago

Agree. This text relies heavily on traditional mathematics to define and work through things. It's quite good at that! But it does become weird when it starts out by declaring that it won't do what it then does.

It also felt like this could be a good topic for a 3b1b video... and... here's the 3b1b video on gaussians: https://www.youtube.com/watch?v=d_qvLDhkg00