Is there a way to deal with structural breaks more precisely than just expressing vague skepticism about the usefulness of old data? In the April 2007 draft of their paper entitled “How Useful Are Historical Data for Forecasting the Long-run Equity Return Distribution?”, John Maheu and Thomas McCurdy describe and test a methodology for identifying and calibrating structural breaks in long-term excess equity returns. Using monthly U.S. equity return and risk-free rate data for the period February 1885 through December 2003, *they conclude that:*

- An overall model can reasonably be constructed as a probability-weighted average of submodels. Each submodel describes an interval between two structural breaks and consists of a combination of two normal distributions. The overall model picks and weights submodels at each point in time based on their local predictive powers.
- This methodology identifies clear structural breaks in excess equity returns in 1929, 1934, 1940 and 1969, and possible breaks in the mid-1970s, the early 1990s and sometime between 1998 and 2004.
- The methodology detects structural breaks almost as they occur. For example, data available through April 1931 indicates a 75% probability of a structural break in 1929.
- While the value of old data fades quickly over most of the sample, the rate of fade varies considerably. A methodology that incorporates structural breaks therefore outperforms alternatives that either ignore the breaks or try to sidestep them by using a rolling window of data.
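The model-averaging idea behind these conclusions can be sketched in a few lines: each submodel is a two-component normal mixture describing one between-break regime, and submodels are weighted by how well they predict recent data. All parameters, the simulated "recent" returns, and the uniform-prior weighting below are illustrative assumptions, not the authors' estimates.

```python
import numpy as np

def mixture_loglik(returns, means, stds, weights):
    """Log-likelihood of returns under a two-component normal mixture."""
    dens = sum(w * np.exp(-0.5 * ((returns - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
               for m, s, w in zip(means, stds, weights))
    return float(np.log(dens).sum())

# Hypothetical submodels, one per between-break regime (parameters invented
# for illustration only)
submodels = [
    dict(means=(0.004, -0.020), stds=(0.03, 0.08), weights=(0.90, 0.10)),
    dict(means=(0.006, -0.010), stds=(0.04, 0.10), weights=(0.85, 0.15)),
]

# Simulated recent monthly excess returns, used to score each submodel's
# local predictive power
rng = np.random.default_rng(0)
recent = rng.normal(0.005, 0.04, size=24)

# Convert log-likelihoods into normalized model weights (uniform prior)
logliks = np.array([mixture_loglik(recent, **m) for m in submodels])
model_w = np.exp(logliks - logliks.max())
model_w /= model_w.sum()

# Probability-weighted forecast of the mean monthly excess return
forecast = sum(w * np.dot(m["weights"], m["means"])
               for w, m in zip(model_w, submodels))
```

Because the forecast is a convex combination of the submodels' mixture means, it always lies between the most pessimistic and most optimistic regime estimates.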

The following figure, taken from the paper, compares forecasts for the long-run equity risk premium from the structural break model (break k=2) and from a model that employs a rolling 10-year window of data (rolling window 10 years). The structural break model combines the predictive power of a series of submodels, each optimized for an interval between two structural breaks; each submodel is a combination of two normal distributions (hence k=2). The 10-year rolling window model assigns equal weight to all data within the window and zero weight to data outside it, thereby seeking to avoid the effects of structural breaks, but it generates unrealistically volatile results.
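The instability of the rolling-window approach is easy to reproduce with simulated data: a fixed-length window discards old observations wholesale, so its mean estimate wanders far more over time than an expanding-window estimate. The return process below uses an invented mean shift and invented parameters, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated monthly excess returns with one illustrative mean shift ("break")
returns = np.concatenate([rng.normal(0.008, 0.05, 600),
                          rng.normal(0.003, 0.05, 600)])

window = 120  # a 10-year window of monthly observations

# Mean estimate using only the most recent `window` months
rolling = np.array([returns[t - window:t].mean()
                    for t in range(window, len(returns) + 1)])
# Mean estimate using all data observed so far
expanding = np.array([returns[:t].mean()
                      for t in range(window, len(returns) + 1)])

# The rolling estimate fluctuates much more over time than the expanding one
print(f"rolling std:   {rolling.std():.5f}")
print(f"expanding std: {expanding.std():.5f}")
```

The expanding estimate smooths through the break slowly, while the rolling estimate jumps as break-era observations enter and leave the window, which is the volatility the figure illustrates.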

In summary, *a dynamic and flexible model of long-term equity returns that accommodates structural breaks improves predictive power, at the cost of considerable complexity.*
