
Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Measuring Extreme Loss Risk

What is the best approach for measuring extreme loss risk? In their April 2015 paper entitled “Why Risk Is So Hard to Measure”, Jon Danielsson and Chen Zhou analyze the robustness of standard extreme loss risk analysis methods. They focus on:

  1. The difference in the reliabilities of forecasts based on Value-at-Risk (VaR) and expected shortfall (ES).
  2. The reliabilities of these forecasts as sample size decreases.
  3. The difference in reliabilities of forecasts based on time scaling of high-frequency (say, daily) data versus overlapping high-frequency data to forecast risk over a many-day holding period.

In a nutshell, VaR assesses the probability that a portfolio loses at least a specified amount over a specified holding period, and ES is the expected portfolio return for a specified percentage of the worst losses during a specified holding period. The theoretically soundest sampling approach is to use non-overlapping past holding-period returns, but this approach usually means very small samples. Time scaling uses past high-frequency data once and scales findings to the longer holding period by multiplying by the square root of the holding period. Overlapping data re-uses past high-frequency data many times, thereby creating observations that are clearly not independent. Based on theoretical analysis and intensive Monte Carlo simulation derived from daily returns for a broad sample of liquid U.S. stocks during 1926 through 2014, they conclude that: Keep Reading
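The VaR and ES definitions above, and the contrast between square-root-of-time scaling and overlapping data, can be sketched as follows. This is a minimal illustration on simulated fat-tailed daily returns; the distribution, sample size and tail probability are arbitrary assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
daily = rng.standard_t(df=4, size=5000) * 0.01  # simulated fat-tailed daily returns

def var_es(returns, alpha=0.01):
    """Historical VaR and ES at tail probability alpha (losses reported as positive)."""
    k = max(1, int(alpha * len(returns)))
    losses = -np.sort(returns)[:k]  # the worst alpha fraction of returns, as positive losses
    var = losses[-1]                # smallest of the worst losses = Value-at-Risk
    es = losses.mean()              # average of the worst losses = expected shortfall
    return var, es

# Time scaling: estimate 1-day VaR, then scale by sqrt(10) for a 10-day horizon
var_1d, es_1d = var_es(daily)
var_10d_scaled = var_1d * np.sqrt(10)

# Overlapping data: rolling 10-day returns reuse each daily observation ten times,
# so the resulting observations are clearly not independent
overlap_10d = np.convolve(daily, np.ones(10), mode="valid")
var_10d_overlap, es_10d_overlap = var_es(overlap_10d)
```

By construction ES is at least as large as VaR at the same tail probability, since it averages losses that are all at least the VaR.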

Survey of Recent Research on Factors, Regimes and Robustness

Why and how should investors pursue investment premiums associated with factors that explain performance differences among related assets (like common stocks)? In the January 2015 version of his paper entitled “Better Investing Through Factors, Regimes and Sensitivity Analysis”, Cristian Homescu summarizes recent research on: (1) factor-based investing; (2) enhancement of factor-based investing via regime switching models; and, (3) strategy robustness testing. Factor investing means systematic targeting of premiums associated with factors that explain an exploitable portion of return and risk differences among securities within one or several asset classes. Based on recent streams of research, he concludes that:

Keep Reading

Incorporating the Experience of the Financial Crisis

How should financial education incorporate the experience of the 2007-2009 financial crisis? In their May 2014 publication entitled “Investment Management: A Science to Teach or an Art to Learn?”, Frank Fabozzi, Sergio Focardi and Caroline Jonas summarize the current approach to teaching finance theory and examine post-crisis criticisms and defenses of this approach via review of textbooks and studies and through interviews with finance professors, asset managers and other market players. Based on these sources, they conclude that: Keep Reading

Retirement Income Modeling Risks

How much can the (in)accuracy of retirement portfolio modeling assumptions affect conclusions about the safety of retirement income? In their December 2014 paper entitled “How Risky is Your Retirement Income Risk Model?”, Patrick Collins, Huy Lam and Josh Stampfli examine potential weaknesses in the following retirement income modeling approaches:

  • Theoretically grounded formulas – often complex, with rigid assumptions.
  • Historical backtesting – assumes the future will be like the past and requires long samples.
  • Bootstrapping (reshuffled historical returns) – provides alternate histories, but does not preserve return time series characteristics (such as serial correlation) and requires long samples.
  • Monte Carlo simulation with normal return distributions – sensitive to changes in assumed return statistics and often does not preserve empirical return time series characteristics.
  • Monte Carlo simulation with non-normal return distributions – complex and often does not preserve empirical return time series characteristics.
  • Vector autoregression – better reflects empirical time series characteristics and can incorporate predictive variables, but requires estimation of regression coefficients and is difficult to implement.
  • Regime-switching simulation (multiple interleaved return distributions representing different market states) – complex, requiring estimation of many parameters, and typically involves small samples in terms of number of regimes.

They focus on retirement withdrawal sustainability (probability of shortfall) as a risk metric and on risks associated with assumptions about future asset returns, inflation and longevity. They employ a series of examples to demonstrate how an overly simple model may distort retirement income risk. Based on analysis and this series of examples, they conclude that: Keep Reading
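As a toy illustration of the model sensitivity at issue, a bare-bones Monte Carlo shortfall model with i.i.d. normal annual returns might look like the sketch below. All dollar amounts, return statistics and the horizon are hypothetical, not taken from the paper.

```python
import numpy as np

def shortfall_prob(mean, stdev, start=1_000_000, withdraw=45_000,
                   years=30, n_sims=20_000, seed=1):
    """Probability of depleting the portfolio within the horizon under
    i.i.d. normal annual returns (a deliberately simple model)."""
    rng = np.random.default_rng(seed)
    wealth = np.full(n_sims, float(start))
    failed = np.zeros(n_sims, dtype=bool)
    for _ in range(years):
        wealth -= withdraw                   # annual withdrawal
        failed |= wealth <= 0                # depleted paths stay failed
        wealth = np.maximum(wealth, 0.0) * (1 + rng.normal(mean, stdev, n_sims))
    return failed.mean()

# Sensitivity check: shave one percentage point off the assumed mean return
p_base = shortfall_prob(mean=0.06, stdev=0.12)
p_low = shortfall_prob(mean=0.05, stdev=0.12)
```

A modest change in the assumed mean return shifts the estimated shortfall probability, illustrating how conclusions about retirement income safety can hinge on modeling assumptions rather than on the retiree's actual situation.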

A Few Notes on A Random Walk Down Wall Street

In the preface to the eleventh (2015) edition of his book entitled A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing, author Burton Malkiel states: “The message of the original edition was a very simple one: Investors would be far better off buying and holding an index fund than attempting to buy and sell individual securities or actively managed mutual funds. …Now, over forty years later, I believe even more strongly in that original thesis… Why, then, an eleventh edition of this book? …The answer is that there have been enormous changes in the financial instruments available to the public… In addition, investors can benefit from a critical analysis of the wealth of new information provided by academic researchers and market professionals… There have been so many bewildering claims about the stock market that it’s important to have a book that sets the record straight.” Based on a survey of financial markets research and his own analyses, he concludes that: Keep Reading

Crash Protection Strategies

How can investors protect portfolios from crashes across asset classes? In the November 2014 version of his paper entitled “Tail Risk Protection in Asset Management”, Cristian Homescu describes tail (crash) risk metrics and summarizes the body of recent research on the effectiveness and costs of alternative tail risk protection strategies. The purpose of these strategies is to mitigate or eliminate investment losses during rare events adverse to portfolio holdings. These strategies typically bear material costs. He focuses on some strategies that may be profitable and hence useful for more than crash protection. Based on recent tail risk management research and some examples, he concludes that: Keep Reading

Overview of Equity Factor Investing

Is equity factor investing a straightforward path to premium capture and diversification? In their October 2014 paper entitled “Facts and Fantasies About Factor Investing”, Zelia Cazalet and Thierry Roncalli summarize the body of research on factor investing and provide examples to address the following questions:

  1. What is a risk factor?
  2. Do all risk factors offer attractive premiums?
  3. How stable and robust are these premiums?
  4. How can investors translate academic risk factors into portfolios?
  5. How should investors allocate to different factors?

They define risk factor investing as the attempt to enhance returns in the long run by capturing systematic risk premiums. They focus on the gap between retrospective (academic) analysis and prospective portfolio implementation. They summarize research on the following factors: market beta, size, book-to-market ratio, momentum, volatility, liquidity, carry, quality, yield curve slope, default risk, coskewness and macroeconomic variables. Based on the body of factor investing research and examples, they conclude that: Keep Reading

Static Smart Beta vs. Many Dynamic Proprietary Factors

Which is the better equity investment strategy: (1) a consistent portfolio tilt toward one or a few factors widely accepted, based on linear regression backtests, as effective in selecting stocks with above-average performance (smart beta); or, (2) a more complex strategy that seeks to identify stocks with above-average performance via potentially dynamic relationships with a set of many proprietary factors? In their September 2014 paper entitled “Investing in a Multidimensional Market”, Bruce Jacobs and Kenneth Levy argue for the latter. Referring to recent research finding that many factors are highly significant stock return predictors in multivariate regression tests, they conclude that: Keep Reading

Taming the Factor Zoo?

How should researchers address the issue of aggregate/cumulative data snooping bias, which derives from many researchers exploring approximately the same data over time? In the October 2014 version of their paper entitled “. . . and the Cross-Section of Expected Returns”, Campbell Harvey, Yan Liu and Heqing Zhu examine this issue with respect to studies that discover factors explaining differences in future returns among U.S. stocks. They argue that aggregate/cumulative data snooping bias makes conventional statistical significance cutoffs (for example, a t-statistic of at least 2.0) too low. Researchers should view their respective analyses not as independent single tests, but rather as one of many within a multiple hypothesis testing framework. Such a framework raises the bar for significance according to the number of hypotheses tested, and the authors give guidance on how high the bar should be. They acknowledge that they considered only top journals and relatively few working papers in discovering factors and do not (cannot) count past tests of factors falling short of conventional significance levels (and consequently not published). Using a body of 313 published studies and 63 working papers encompassing 316 factors explaining the cross-section of future U.S. stock returns from the mid-1960s through 2012, they find that: Keep Reading
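To see how a multiple hypothesis testing framework raises the bar, here is a rough sketch using a Bonferroni correction, one simple member of the family of adjustments such studies consider. The normal approximation to the t-distribution and the coarse grid search are simplifications for illustration.

```python
import math

def two_sided_p(t):
    """Two-sided p-value for a t-statistic under the normal approximation
    (adequate for the large samples typical of cross-sectional return tests)."""
    return math.erfc(abs(t) / math.sqrt(2))

def bonferroni_t_cutoff(n_tests, alpha=0.05):
    """Smallest t-statistic that survives a Bonferroni correction for n_tests:
    each individual test must clear significance level alpha / n_tests."""
    target = alpha / n_tests
    t = 0.0
    while two_sided_p(t) > target:  # coarse upward search; fine for illustration
        t += 0.001
    return round(t, 2)

# Single test: |t| > 1.96 suffices at the 5% level.
# With 316 tested factors, the Bonferroni bar rises to roughly 3.8.
cutoff = bonferroni_t_cutoff(316)
```

This is the basic mechanism behind the argument that a t-statistic of 2.0, routine for a single test, is far too low once the full history of factor tests is taken into account.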

Improving Established Multi-factor Stock-picking Models Is Hard

Is more clearly better in terms of the number of factors included in a stock screening strategy? In the October 2014 draft of their paper entitled “Incremental Variables and the Investment Opportunity Set”, Eugene Fama and Kenneth French investigate the effects of adding to an established multi-factor model of stock returns an additional factor that by itself has power to predict stock returns. They focus on size, book-to-market ratio (B/M, measured with lagged book value), and momentum (cumulative return from 12 months ago to one month ago, with a skip-month to avoid systematic reversal). They consider a broad sample of U.S. stocks and three subsamples: microcaps (below the 20th percentile of NYSE market capitalizations); small stocks (20th to 50th percentiles); and, big stocks (above the 50th percentile). They perform factor-return regressions, and they translate regression results into portfolio returns by: (1) ranking stocks into fifths (quintiles) based on full-sample average regression-predicted returns; and, (2) measuring gross average returns from hedge portfolios that are long (short) the equally weighted quintile with the highest (lowest) expected returns. Finally, they perform statistical tests to determine whether the maximum Sharpe ratio for quintile portfolios constructed from three-factor regressions is realistically higher than those for two-factor regressions. Using monthly excess returns (relative to the one-month Treasury bill yield) for a broad sample of U.S. stocks during January 1927 through December 2013, they find that: Keep Reading
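The quintile hedge-portfolio construction described above can be sketched on simulated data. The predicted and realized returns below are random placeholders standing in for regression output, not the authors' actual factor regressions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks = 500
predicted = rng.normal(0.01, 0.02, n_stocks)          # stand-in for regression-predicted returns
realized = predicted + rng.normal(0, 0.05, n_stocks)  # realized = prediction + noise

# Step 1: rank stocks into fifths (quintiles) by predicted return
order = np.argsort(predicted)
quintiles = np.array_split(order, 5)  # lowest-predicted group first, highest last

# Step 2: hedge portfolio is long the equally weighted highest-predicted
# quintile and short the equally weighted lowest-predicted quintile
long_ret = realized[quintiles[-1]].mean()
short_ret = realized[quintiles[0]].mean()
hedge_ret = long_ret - short_ret
```

The hedge return measures how much of the predicted spread survives in realized returns; in the paper this translation from regressions to portfolios is what makes the incremental value of a third factor easy (or hard) to see.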
