What are the ins and outs of crash risk measurement via Value at Risk (VaR)? In their March 2017 paper entitled “A Gentle Introduction to Value at Risk”, Laura Ballotta and Gianluca Fusai provide an introduction to VaR in financial markets, with examples mainly from commodity markets. They address problems related to VaR estimation and backtesting at single asset and portfolio levels. Based largely on mathematics and empirical considerations, *they conclude that:*

- Measurement of VaR requires specifying the:
  - Threshold for crashes. VaR thresholds of the worst 1% to 5% of returns are common in practice.
  - Measurement interval (holding period during which losses may occur). Active traders may use a one-day interval, while less active investors may use much longer intervals.
  - Probability distribution of outcomes (logarithm of returns), generally either: (1) parametric (assuming an underlying mathematical model, such as the normal distribution, and estimating mean and standard deviation parameters); or, (2) non-parametric (building the distribution empirically based on historical simulation/bootstrapping).
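
The two distribution choices above can be sketched side by side. This is a minimal illustration on simulated data, not code from the paper; the sample, the 5% threshold, and the sign convention (VaR reported as a positive loss) are assumptions.

```python
from statistics import NormalDist

import numpy as np

# Hypothetical sample of daily log-returns (a stand-in for real data).
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=2000)

alpha = 0.05  # 5% VaR threshold

# Parametric (normal) VaR: estimate mean and standard deviation, then
# take the alpha-quantile of the fitted normal distribution.
mu, sigma = returns.mean(), returns.std(ddof=1)
var_parametric = -(mu + sigma * NormalDist().inv_cdf(alpha))

# Non-parametric (historical-simulation) VaR: the empirical
# alpha-quantile of the observed returns, with no model assumed.
var_historical = -np.quantile(returns, alpha)

print(f"parametric 5% VaR: {var_parametric:.4f}")
print(f"historical 5% VaR: {var_historical:.4f}")
```

On normally distributed data the two estimates nearly coincide; they diverge when returns are fat-tailed or skewed, which is the crux of the parametric-versus-empirical choice.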

- The parametric approach to specifying the distribution of outcomes:
  - Generally deteriorates for:
    - Extreme VaR thresholds.
    - Long measurement intervals (holding periods).
    - High return volatilities.
    - Small samples of historical data for estimating parameters.
  - May not accommodate fat-tailed and skewed distributions, and does not straightforwardly capture persistent changes in return volatility. Mitigations may include:
    - Measuring volatility over windows of data rather than daily, to account for seasonality.
    - Using the option-implied volatility forward curve.
    - Applying an exponential moving average (EMA) that weights recent data more heavily in volatility estimation.
    - Applying a generalized autoregressive conditional heteroskedasticity (GARCH) model to estimate volatility.
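
The EMA mitigation can be sketched in a few lines. The decay factor 0.94 is the classic RiskMetrics convention for daily data, assumed here rather than taken from the paper.

```python
def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted volatility estimate.

    Each step blends the running variance with the latest squared
    return, so recent observations dominate and the estimate adapts
    to persistent volatility shifts. lam = 0.94 is the classic
    RiskMetrics decay for daily data (an assumed convention, not a
    value from the paper).
    """
    variance = returns[0] ** 2  # seed with the first squared return
    for r in returns[1:]:
        variance = lam * variance + (1.0 - lam) * r ** 2
    return variance ** 0.5
```

With constant returns the estimate settles at that constant; after a shift to larger returns it climbs toward the new level, which is exactly the responsiveness a plain long-window standard deviation lacks.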

- Non-parametric historical simulation requires no assumptions about the shape of the return distribution and can capture fat tails and skewness in historical data. However, it:
  - Is very inaccurate for extreme VaR thresholds.
  - Does not straightforwardly capture persistent changes in return volatility. Mitigations may include:
    - Weighting recent returns more heavily than older returns.
    - Injecting time series behaviors by running simulations on randomly drawn blocks of consecutive returns of variable length.
    - Combining a GARCH model with bootstrap simulation (filtered bootstrap).
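
The variable-length block resampling idea can be sketched as follows. Function and parameter names are illustrative, not taken from the paper; block lengths are drawn geometrically, one common convention.

```python
import random

def block_bootstrap_path(returns, horizon, avg_block=5, rng=None):
    """Resample a return path by splicing together blocks of
    consecutive historical returns.

    Block lengths are drawn geometrically with mean avg_block, so
    they vary from draw to draw; keeping returns contiguous within
    a block preserves short-run time-series structure (e.g.
    volatility clustering) that one-at-a-time bootstrapping
    destroys. Names and parameters are illustrative assumptions.
    """
    rng = rng or random.Random()
    n = len(returns)
    path = []
    while len(path) < horizon:
        start = rng.randrange(n)
        length = 1
        while rng.random() > 1.0 / avg_block:  # geometric block length
            length += 1
        # Wrap around the sample so blocks near the end stay full-length.
        path.extend(returns[(start + i) % n] for i in range(length))
    return path[:horizon]
```

Repeating this many times and taking the empirical quantile of the simulated horizon returns yields a historical-simulation VaR that retains some serial dependence.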

- Backtesting a VaR setup should examine both the number of return threshold violations and the degree to which violations cluster. Clustering indicates that the model will not work well across a variety of market conditions.
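
Both backtest checks reduce to simple counting. This sketch tallies violations and back-to-back violation pairs; it is not the formal Kupiec or Christoffersen tests, and all names are illustrative.

```python
def backtest_var(returns, var_forecasts, alpha=0.05):
    """Tally VaR violations and how often they cluster.

    A violation occurs when the realized loss exceeds the forecast
    VaR. A sound model should show a violation rate near alpha AND
    violations scattered in time; back-to-back violations suggest
    the model misses volatility persistence. A counting sketch, not
    the formal Kupiec / Christoffersen tests.
    """
    hits = [1 if r < -v else 0 for r, v in zip(returns, var_forecasts)]
    violations = sum(hits)
    clustered = sum(1 for a, b in zip(hits, hits[1:]) if a and b)
    return {
        "rate": violations / len(hits),  # compare to alpha
        "clustered_pairs": clustered,    # consecutive violations
    }
```

A model can pass on the violation count yet fail on clustering, which is why both statistics matter.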
- The main issue in applying VaR at the portfolio level is specifying the joint distribution of the log-returns of portfolio components. Positions in non-linear derivatives complicate this task.
  - Historical simulation of portfolio-level returns simplifies the task but obscures each position's contribution to crash risk.
  - The alternative of modeling the interactions among positions may not be tractable when the number of positions is large and the historical sample limited.
  - Commodity portfolios often hold complex derivatives whose pricing can be very time consuming. Two common approaches are: (1) repricing the derivatives in simulated scenarios; or, (2) approximating derivative prices as a quadratic function of risk drivers.
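
The quadratic approximation in option (2) is a second-order Taylor expansion in the risk driver, often called a delta-gamma approximation. A minimal sketch, with the Greeks used generically:

```python
def delta_gamma_pnl(delta, gamma, d_s):
    """Quadratic (delta-gamma) approximation of a derivative's P&L
    for a move d_s in its risk driver:

        dV ~= delta * d_s + 0.5 * gamma * d_s**2

    This second-order Taylor expansion stands in for a full (and
    possibly slow) repricing in each simulated scenario; an
    illustrative sketch, not the paper's implementation.
    """
    return delta * d_s + 0.5 * gamma * d_s ** 2
```

Evaluating this cheap polynomial across thousands of simulated scenarios is what makes the approach attractive relative to full repricing.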

In summary, *market crash conditions tend to magnify the weaknesses of both the theoretical and the empirical approximations used to estimate crash risk (as in VaR).*

Cautions regarding conclusions include:

- VaR is less useful for long-term investors than for those with short-term performance measurements because: (1) as noted, the accuracy of VaR estimates deteriorates with holding period length; and, (2) long-term investors expect to ride out crashes.
- Some investors may not find the paper gentle.