Here is a formula for the standard deviation (from the mean), given a normalized density distribution f(x). Note that if the density distribution is valid, N is effectively infinite, so the two versions of the standard deviation (the one dividing by N and the one dividing by N-1) are essentially identical.

First, we will need to establish how to take averages of things in density distributions. For any normalized density distribution foo(x), and a property baz(x) which is a function of x,

Average(foo(x), baz(x)) = ∫foo(x)*baz(x) dx

with the integral taken on the whole domain of foo*.
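As a concrete sketch of that formula, here is a small Python snippet that evaluates Average(foo, baz) numerically. The `average` helper and the midpoint Riemann sum are my own illustrative choices; any numerical integrator would do.

```python
def average(foo, baz, a, b, n=100_000):
    """Average(foo, baz) = integral of foo(x)*baz(x) dx over (a, b),
    approximated with a midpoint Riemann sum (hypothetical helper)."""
    dx = (b - a) / n
    return sum(foo(a + (i + 0.5) * dx) * baz(a + (i + 0.5) * dx)
               for i in range(n)) * dx

# Example density from the footnote: foo(x) = 2x on (0, 1), which is normalized.
f = lambda x: 2 * x
mean = average(f, lambda x: x, 0, 1)  # mean of the density: 2/3
```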

The Standard Deviation of foo(x) = √(Average(foo(x), x^{2}) - Average(foo(x), x)^{2})
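That formula translates directly into code. This sketch (with a hypothetical `average` helper based on a midpoint Riemann sum) checks it against the normalized density foo(x) = 2x on (0, 1), whose standard deviation works out analytically to √(1/2 - (2/3)^{2}) = √(1/18).

```python
import math

def average(foo, baz, a, b, n=100_000):
    """Integral of foo(x)*baz(x) dx over (a, b), via a midpoint Riemann sum."""
    dx = (b - a) / n
    return sum(foo(a + (i + 0.5) * dx) * baz(a + (i + 0.5) * dx)
               for i in range(n)) * dx

def std_dev(foo, a, b):
    """sqrt(Average(foo, x^2) - Average(foo, x)^2) for a normalized density foo."""
    mean = average(foo, lambda x: x, a, b)
    mean_sq = average(foo, lambda x: x * x, a, b)
    return math.sqrt(mean_sq - mean ** 2)

f = lambda x: 2 * x        # normalized on (0, 1)
sigma = std_dev(f, 0, 1)   # sqrt(1/2 - (2/3)^2) = sqrt(1/18) ~ 0.2357
```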

Note that this is functionally identical to the infinite case of the formulation given in Professor Pi's writeup, but completely mangled so that it is easy to compute and difficult to see why it works.

*If you don't see why that works, consider the simplest nontrivial case, Average(foo(x), x). Just think of it as grouping the elements together when adding them up: let's suppose foo(3) = 2, so we add two instances of 3 (i.e. 2*3) to the running total. Fortunately, since foo(x) is normalized, we've already divided by the number of points in the distribution. If you test this with an arbitrary function, make sure that it is normalized. For example, foo(x) = x on the domain (0,1) doesn't work, since it integrates to 1/2 rather than 1. However, foo(x) = 2x on the same domain does.
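The normalization check in that footnote is easy to run yourself. This sketch (the `integral` helper is my own illustrative choice) confirms that x fails the test on (0,1) while 2x passes:

```python
def integral(foo, a, b, n=100_000):
    """Midpoint Riemann sum of foo over (a, b)."""
    dx = (b - a) / n
    return sum(foo(a + (i + 0.5) * dx) for i in range(n)) * dx

bad = integral(lambda x: x, 0, 1)       # integrates to 1/2: not normalized
good = integral(lambda x: 2 * x, 0, 1)  # integrates to 1: safe to average over
```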

Also note that aside from the standard deviation *from* the mean (which is the subject of the node up to this point) there is, distinctly, the standard deviation *of* the mean. This is used when you are sampling a distribution and are attempting to determine how well you have constrained the mean of that distribution from your sampling.
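A quick sampling sketch of that distinction, using only the Python standard library (the Gaussian distribution and sample size here are my own illustrative choices): the standard deviation *of* the mean is the sample standard deviation divided by √N, so it shrinks as you take more samples even though the standard deviation *from* the mean stays put.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible
samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]

sample_std = statistics.stdev(samples)          # std *from* the mean: stays near 1
std_of_mean = sample_std / len(samples) ** 0.5  # std *of* the mean: shrinks as 1/sqrt(N)
```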

Also, you can calculate the standard deviation from any other statistical estimate one can imagine, not just the mean. The standard deviation from the median, for example. Just take the root-mean-square of the difference: √(Average(foo(x), (x - estimate)^{2})).
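To make that concrete, here is a sketch of the root-mean-square deviation from an arbitrary estimate (the `average` and `std_from` helpers are my own illustrative names), again using foo(x) = 2x on (0, 1). Its median is 1/√2, since that solves ∫₀^m 2x dx = 1/2; note the deviation about the median comes out slightly larger than the deviation about the mean, as it must, since the mean minimizes the RMS deviation.

```python
import math

def average(foo, baz, a, b, n=100_000):
    """Integral of foo(x)*baz(x) dx over (a, b), via a midpoint Riemann sum."""
    dx = (b - a) / n
    return sum(foo(a + (i + 0.5) * dx) * baz(a + (i + 0.5) * dx)
               for i in range(n)) * dx

def std_from(foo, estimate, a, b):
    """Root-mean-square deviation of x from `estimate` under the density foo."""
    return math.sqrt(average(foo, lambda x: (x - estimate) ** 2, a, b))

f = lambda x: 2 * x                          # normalized on (0, 1)
median = 1 / math.sqrt(2)                    # solves integral_0^m 2x dx = 1/2
from_median = std_from(f, median, 0, 1)      # deviation about the median
from_mean = std_from(f, 2 / 3, 0, 1)         # deviation about the mean: sqrt(1/18)
```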