Standard Deviation
Standard deviation is a statistical measure of the dispersion of returns around the average, used in finance as the primary measure of investment volatility and total risk.
Borrowed from statistics, standard deviation was adopted by financial economists as the canonical measure of investment risk in Modern Portfolio Theory and the Capital Asset Pricing Model. It quantifies how widely an asset's returns are spread around their historical average: a high standard deviation means returns have been erratic and unpredictable, while a low standard deviation means returns have been relatively stable.
Mathematically, standard deviation is the square root of variance, which is the average of the squared deviations from the mean return. For a set of historical returns, you calculate the average return, measure how far each individual return deviates from that average, square each deviation, average those squared deviations (to get variance), and then take the square root to convert back to the same units as the returns. In portfolio management, standard deviation is almost always annualized, typically by multiplying the standard deviation of periodic returns by the square root of the number of periods per year (for example, √12 for monthly data), to allow comparison across different asset classes and time horizons.
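A minimal sketch of this calculation in Python, using hypothetical monthly returns (the figures are illustrative, not real data); the √12 annualization assumes monthly observations that are independent across periods, and the variance uses the common sample convention of dividing by n − 1:

    import math

    # Hypothetical monthly returns (illustrative values, not real data)
    returns = [0.021, -0.034, 0.015, 0.042, -0.008, 0.011,
               -0.027, 0.033, 0.005, -0.016, 0.024, 0.009]

    # Step 1: the average return
    mean = sum(returns) / len(returns)

    # Steps 2-3: squared deviations from the mean, averaged to get variance
    # (sample convention: divide by n - 1)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)

    # Step 4: the square root converts back to the same units as the returns
    monthly_sd = math.sqrt(variance)

    # Annualize by the square root of periods per year (12 for monthly data)
    annual_sd = monthly_sd * math.sqrt(12)

    print(f"monthly SD: {monthly_sd:.2%}, annualized SD: {annual_sd:.2%}")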
For U.S. equities, the long-run annualized standard deviation of the S&P 500 has historically been approximately 15-20%. Individual stocks typically have much higher standard deviations — often 30-50% or more — reflecting company-specific risks layered on top of market risk. Investment-grade bond funds typically have standard deviations of 3-7%, explaining why bonds are used to dampen overall portfolio volatility. A classic 60/40 (equity/bond) portfolio has historically carried an annualized standard deviation of roughly 10-12%.
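The diversification arithmetic behind figures like the 60/40 number follows from the two-asset portfolio variance formula, sketched below with assumed (not measured) volatilities and correlation; the portfolio's standard deviation comes in below the weighted average of its components whenever the correlation is less than one:

    import math

    # Assumed inputs for illustration only: annualized volatilities
    # for the equity and bond sleeves, and their correlation
    w_eq, w_bond = 0.60, 0.40
    sd_eq, sd_bond = 0.17, 0.05
    corr = 0.10

    # Two-asset portfolio variance:
    #   w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2
    variance = (w_eq ** 2 * sd_eq ** 2
                + w_bond ** 2 * sd_bond ** 2
                + 2 * w_eq * w_bond * corr * sd_eq * sd_bond)

    portfolio_sd = math.sqrt(variance)
    print(f"60/40 portfolio SD: {portfolio_sd:.1%}")  # ~10.6% under these inputs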
In a normal distribution, approximately 68% of outcomes fall within one standard deviation of the mean, 95% fall within two standard deviations, and 99.7% fall within three standard deviations. This is often called the '68-95-99.7 rule.' Investors and risk managers use this framework to estimate the probability of losses of various magnitudes under normal market conditions. Value at Risk (VaR) models, for example, rely heavily on standard deviation as an input.
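As a sketch of how this framework is applied, the following uses Python's standard-library NormalDist to reproduce the 68-95-99.7 bands and a simple parametric 95% VaR; the 8% mean and 16% standard deviation are hypothetical inputs, and the whole exercise rests on the normality assumption noted above:

    from statistics import NormalDist

    # Hypothetical annual inputs: 8% mean return, 16% standard deviation
    mu, sigma = 0.08, 0.16
    dist = NormalDist(mu, sigma)

    # Probability of landing within 1, 2, and 3 standard deviations
    for k in (1, 2, 3):
        low, high = mu - k * sigma, mu + k * sigma
        prob = dist.cdf(high) - dist.cdf(low)
        print(f"within {k} SD: {low:+.0%} to {high:+.0%} ({prob:.1%} of outcomes)")

    # Parametric 95% VaR: the 5th-percentile return of the assumed
    # normal distribution; worse outcomes occur about 5% of the time
    var_95 = dist.inv_cdf(0.05)
    print(f"95% VaR: {var_95:.1%}")  # roughly mu - 1.645 * sigma = -18.3%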
The chief limitation of standard deviation as a risk measure is that it treats upside volatility (returns above the average) the same as downside volatility (returns below it), even though investors are primarily concerned with losses. An asset that sporadically produces very large positive returns will have an elevated standard deviation even if its losses are modest. Alternative measures such as downside deviation or semi-variance focus exclusively on the dispersion of returns below a chosen threshold, which many practitioners argue better represents the risk investors actually care about.
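A sketch of the downside-deviation alternative, using the same style of hypothetical return series and a 0% minimum acceptable return (MAR) as the threshold; only shortfalls below the MAR contribute, so upside surprises no longer inflate the risk number:

    import math

    # Hypothetical monthly returns and a 0% minimum acceptable return
    returns = [0.021, -0.034, 0.015, 0.042, -0.008, 0.011,
               -0.027, 0.033, 0.005, -0.016, 0.024, 0.009]
    mar = 0.0

    # Only returns below the threshold count; everything above it
    # contributes zero deviation
    shortfalls = [min(r - mar, 0.0) ** 2 for r in returns]
    downside_dev = math.sqrt(sum(shortfalls) / len(returns))

    print(f"monthly downside deviation: {downside_dev:.2%}")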