There are two commonly used definitions of the standard deviation. The first is the population standard deviation:

σx = (1/N × Σdi²)½

and the sample standard deviation:

σx = (1/(N-1) × Σdi²)½

where σx is the standard deviation, N is the number of measurements, and di = xi - xmean is the difference between measurement i and the mean.
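The two formulas can be written directly in code. The sketch below is a minimal illustration of the definitions above, using a hypothetical list of measurements; it is not tied to any particular library.

```python
import math

def population_sd(xs):
    """Population standard deviation: (1/N × Σdi²)½."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

def sample_sd(xs):
    """Sample standard deviation: (1/(N-1) × Σdi²)½."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

# Hypothetical example data
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(population_sd(data))  # → 2.0
print(sample_sd(data))
```

Note that `sample_sd` divides by N-1, so it raises `ZeroDivisionError` for a single measurement, mirroring the undefined 0/0 result discussed below.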

Both definitions are frequently used. The sample standard deviation corrects for the tendency of the population formula to understate the uncertainty in the measurements, especially when the sample size is small. This is easiest to see in the extreme case of a single measurement (N=1): the population standard deviation gives the absurd result σx = 0, whereas the sample standard deviation is undefined (0/0), which properly represents our complete ignorance of the standard deviation after just one measurement.

For practical purposes, the difference between the two definitions is insignificant. However, when you calculate a standard deviation, be aware of the two definitions and report which one you are using. Keep this in mind as well when using the standard deviation function in your favorite spreadsheet; Excel, for example, uses the STDEV function for the sample standard deviation and the STDEVP function for the population standard deviation (in newer versions, STDEV.S and STDEV.P).
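Python's standard library makes the same distinction: `statistics.stdev` is the sample standard deviation and `statistics.pstdev` is the population standard deviation. A short sketch with hypothetical data:

```python
import statistics

# Hypothetical data for illustration
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

print(statistics.pstdev(data))  # population SD, analogous to Excel's STDEVP → 2.0
print(statistics.stdev(data))   # sample SD, analogous to Excel's STDEV
```

As in the spreadsheet case, it pays to check which convention a library function uses before reporting a result.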

```
Measurement    Measured value      Deviation           di²
     i               xi          di = xi - xmean

     1               17               0.8             0.64
     2               15              -1.2             1.44
     3               16              -0.2             0.04
     4               17               0.8             0.64
     5               16              -0.2             0.04
               --------------                        ------
               xmean = 16.2                    Σdi² = 2.80
```
In this example, the population standard deviation is σx = (1/5 × 2.80)½ = 0.75, and the sample standard deviation is σx = (1/4 × 2.80)½ = 0.84.
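As a quick check, the numbers in the worked example can be reproduced with Python's standard `statistics` module:

```python
import statistics

measurements = [17, 15, 16, 17, 16]

pop = statistics.pstdev(measurements)   # (1/5 × 2.80)½
samp = statistics.stdev(measurements)   # (1/4 × 2.80)½
print(round(pop, 2), round(samp, 2))    # → 0.75 0.84
```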

Use the sample standard deviation for calculating confidence intervals.