Also referred to as VaR. Normal probability theory, duration, and yield volatility can be tied together to provide insight into the risk of a portfolio or position. VaR is a measure of the potential loss from an unlikely, adverse event in a normal market environment. Specifically, suppose that a manager wants to make the following statement:

"There is a Y% probability that the loss in value from a position will be less than $A in the next T days."

The dollar amount $A in this statement is popularly referred to as the Value at Risk.

The VaR can be exhibited graphically on a normal distribution curve of the change in the value of a position over the next T days. The VaR is the point on that curve at which the area (probability) to the left equals 1-Y%; under the normality assumption, that point is found by scaling the z-score for the 1-Y% tail by the position's standard deviation over the horizon.
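As a minimal sketch of this quantile view (all numbers here are hypothetical, not from the text above), a 99%, 10-day VaR for a position under the normality assumption can be read straight off the 1-Y% quantile:

```python
# Hypothetical example: 99% / 10-day VaR for a $10 million position,
# assuming the 10-day return is normal with zero mean and a 2%
# standard deviation over the horizon (both numbers are made up).
from scipy.stats import norm

position_value = 10_000_000   # assumed position value, $
sigma_T = 0.02                # assumed std. dev. of the T-day return
confidence = 0.99             # Y% = 99%

# z such that the area to the left of z equals 1 - Y% (about -2.326 at 99%)
z = norm.ppf(1 - confidence)

# VaR is quoted as a positive dollar loss
var = -z * sigma_T * position_value
print(f"{confidence:.0%} VaR: ${var:,.0f}")   # roughly $465,000
```

Read as: there is a 99% probability that the loss over the next 10 days will be less than about $465,000, given those assumed inputs.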

The general approaches to VaR computation have fallen into three classes called parametric, historical simulation, and Monte Carlo.
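A rough sketch of how the three approaches differ in practice, assuming a hypothetical array of daily P&L observations and a 99% confidence level; the data, variable names, and models are purely illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
pnl = rng.normal(0, 50_000, 1_000)   # hypothetical daily P&L history, $
confidence = 0.99
z = norm.ppf(1 - confidence)         # z-score for the 1% left tail

# Parametric (variance-covariance): fit a normal distribution to the P&L
# and read off its (1 - Y%) quantile.
mu, sigma = pnl.mean(), pnl.std(ddof=1)
parametric_var = -(mu + z * sigma)

# Historical simulation: take the empirical (1 - Y%) quantile of the
# observed P&L, with no distributional assumption.
historical_var = -np.percentile(pnl, (1 - confidence) * 100)

# Monte Carlo: simulate P&L from a chosen model (here, a fitted normal
# purely for illustration) and take the quantile of the simulated outcomes.
simulated = rng.normal(mu, sigma, 100_000)
monte_carlo_var = -np.percentile(simulated, (1 - confidence) * 100)

print(parametric_var, historical_var, monte_carlo_var)
```

In this toy setup all three numbers land close together because the data really are normal; the approaches diverge when the P&L distribution has fat tails or the model behind the Monte Carlo differs from history.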

Though VaR is very popular among risk managers these days, no theory shows that VaR is the appropriate measure upon which to build optimal decision rules. VaR does not measure "event" risk (e.g., a market crash), which is why portfolio stress tests are recommended to supplement it. VaR does not readily capture liquidity differences among instruments, which is why limits on both tenors and option greeks are still useful. And VaR does not readily capture model risk, which is why model reserves are also necessary.

There is an interesting paradox here, reminiscent of the Heisenberg Uncertainty Principle: you can predict what sort of economic crisis will occur next, or you can predict when it will happen, but you will never predict both.
