Measuring Extreme Loss Risk
May 13, 2015 - Big Ideas
What is the best approach for measuring extreme loss risk? In their April 2015 paper entitled “Why Risk Is So Hard to Measure”, Jon Danielsson and Chen Zhou analyze the robustness of standard methods for measuring extreme loss risk. They focus on:
- The difference in the reliabilities of forecasts based on Value-at-Risk (VaR) and expected shortfall (ES).
- The reliabilities of these forecasts as sample size decreases.
- The difference in reliability between forecasts that time-scale high-frequency (say, daily) data and forecasts that use overlapping high-frequency data to estimate risk over a multi-day holding period.
In a nutshell, VaR is the loss threshold that a portfolio exceeds with only a specified (small) probability over a specified holding period, and ES is the expected portfolio return across a specified percentage of the worst losses during a specified holding period. The theoretically soundest sampling approach is to use non-overlapping past holding-period returns, but this approach usually means very small samples. Time scaling uses past high-frequency data once and scales the resulting risk estimate to the longer holding period by multiplying by the square root of the number of high-frequency intervals in that period. Overlapping data re-uses past high-frequency data many times, thereby creating observations that are clearly not independent. Based on theoretical analysis and intensive Monte Carlo simulation derived from daily returns for a broad sample of liquid U.S. stocks during 1926 through 2014, they conclude that:
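As an aside on mechanics, here is a minimal Python sketch that makes the VaR/ES definitions and the three sampling approaches above concrete. It is not from the paper: the simulated Student-t daily returns, the 1% tail probability, the 10-day holding period, and the summation of simple returns are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed daily return series (Student-t) standing in
# for actual stock return data; purely illustrative.
daily_returns = rng.standard_t(df=4, size=10_000) * 0.01

def var_es(returns, alpha=0.01):
    """Historical VaR and ES at tail probability alpha.

    VaR is the loss threshold exceeded with probability alpha;
    ES is the average loss across the alpha-fraction of worst outcomes.
    """
    k = max(1, int(alpha * len(returns)))
    worst = np.sort(returns)[:k]          # the k most negative returns
    losses = -worst                       # flip sign: losses are positive
    return losses.min(), losses.mean()    # (VaR, ES)

h = 10  # holding period in days (illustrative)

# Approach 1: square-root-of-time scaling of the one-day estimates.
var_1d, es_1d = var_es(daily_returns)
var_scaled, es_scaled = var_1d * np.sqrt(h), es_1d * np.sqrt(h)

# Approach 2: overlapping h-day returns -- each daily observation is
# re-used up to h times, so the h-day observations are not independent.
# (Summing simple returns is itself an approximation; log returns sum exactly.)
overlapping = np.convolve(daily_returns, np.ones(h), mode="valid")
var_ovl, es_ovl = var_es(overlapping)

# Approach 3: non-overlapping h-day returns -- theoretically cleanest,
# but the sample shrinks by a factor of h.
n = (len(daily_returns) // h) * h
non_overlapping = daily_returns[:n].reshape(-1, h).sum(axis=1)
var_non, es_non = var_es(non_overlapping)

print(f"time-scaled:     VaR={var_scaled:.4f}  ES={es_scaled:.4f}")
print(f"overlapping:     VaR={var_ovl:.4f}  ES={es_ovl:.4f}")
print(f"non-overlapping: VaR={var_non:.4f}  ES={es_non:.4f}")
```

Note that the non-overlapping approach leaves only one h-th as many observations, which is exactly the small-sample problem noted above, while the overlapping approach keeps nearly the full observation count at the cost of strongly dependent observations.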