Measures of variability (or variation) are numbers computed from a set of observations that describe the spread, or dispersion, of the data.
The range of a data set is equal to the largest observation (the maximum) minus the smallest observation (the minimum). It is the simplest measure of variability to calculate.
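As a minimal sketch, here is the range computed for a small hypothetical data set:

```python
data = [4, 8, 15, 16, 23, 42]  # hypothetical observations

# Range: largest observation minus smallest.
data_range = max(data) - min(data)
print(data_range)  # 42 - 4 = 38
```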
Variance is the average squared deviation of the observed values from their mean: $$\sigma^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2$$ where $\mu$ is the mean of the $n$ observations. (This is the population variance; when estimating from a sample, the sum is divided by $n-1$ instead of $n$.) Variance and its square root, the standard deviation, tell us how widely values of a random variable, such as stock market returns, might vary. A low variance indicates a safer, less risky investment, whereas a high variance indicates a riskier one.
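The formula translates directly into code. The following sketch computes the population variance of the same hypothetical data set by hand and checks it against Python's standard library:

```python
import statistics

data = [4, 8, 15, 16, 23, 42]  # hypothetical observations

# Population variance: mean of squared deviations from the mean
# (divide by n, matching the formula above).
n = len(data)
mu = sum(data) / n
variance = sum((x - mu) ** 2 for x in data) / n

print(variance)
print(statistics.pvariance(data))  # standard-library equivalent
```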
The standard deviation is the square root of the variance, $\sigma=\sqrt{\sigma^2}$, and is therefore expressed in the same units as the data; it measures how far values typically deviate from the distribution mean. Distributions with larger standard deviations have values that deviate further from the mean, whereas smaller standard deviations imply a tighter spread of values.
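Continuing the sketch above, the standard deviation is obtained by taking the square root of the variance:

```python
import math
import statistics

data = [4, 8, 15, 16, 23, 42]  # hypothetical observations

# Standard deviation: the square root of the population variance.
sigma = math.sqrt(statistics.pvariance(data))

print(sigma)
print(statistics.pstdev(data))  # standard-library shortcut, same result
```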
According to the empirical rule, for data that follow a normal (bell-shaped) distribution, 68.27% of the observations fall within one standard deviation of the mean. The percentage increases to 95.45% within two standard deviations and 99.73% within three.
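A quick simulation illustrates the rule: drawing samples from a standard normal distribution and counting the share within $k$ standard deviations of the mean reproduces the three percentages (the sample size and seed below are arbitrary choices):

```python
import random

random.seed(0)
# Draw 100,000 samples from a standard normal (mean 0, std dev 1).
samples = [random.gauss(0, 1) for _ in range(100_000)]

for k in (1, 2, 3):
    share = sum(1 for x in samples if abs(x) <= k) / len(samples)
    print(f"within {k} standard deviation(s): {share:.2%}")
# Shares come out close to 68.27%, 95.45%, and 99.73%.
```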
A large variance (or standard deviation) indicates that the observations are more spread out from the mean; conversely, a small variance (or standard deviation) implies that the observations are more tightly concentrated around it.