
Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Crash Protection Strategies

How can investors protect portfolios from crashes across asset classes? In the November 2014 version of his paper entitled “Tail Risk Protection in Asset Management”, Cristian Homescu describes tail (crash) risk metrics and summarizes the body of recent research on the effectiveness and costs of alternative tail risk protection strategies. The purpose of these strategies is to mitigate or eliminate investment losses during rare events adverse to portfolio holdings. These strategies typically bear material costs. He focuses on some strategies that may be profitable and hence useful for more than crash protection. Based on recent tail risk management research and some examples, he concludes that: Keep Reading
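One common tail-risk metric in this literature is conditional value at risk (CVaR, or expected shortfall): the average loss in the worst fraction of outcomes. The sketch below is a minimal illustration on simulated returns, not the paper's methodology; the 5% tail level and the simulated distributions are illustrative assumptions.

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Conditional Value at Risk (expected shortfall): average loss
    over the worst alpha fraction of return observations."""
    r = np.sort(np.asarray(returns))          # ascending: worst first
    cutoff = max(1, int(np.floor(alpha * len(r))))
    return -r[:cutoff].mean()                 # report loss as a positive number

rng = np.random.default_rng(0)
normal = rng.normal(0, 0.01, 10_000)          # thin-tailed daily "returns"
fat = rng.standard_t(3, 10_000) * 0.01        # fat-tailed daily "returns"
print(cvar(normal), cvar(fat))                # fat tails imply larger CVaR
```

The fat-tailed series produces a materially larger CVaR at the same nominal scale, which is the kind of gap tail-risk protection strategies target.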

Overview of Equity Factor Investing

Is equity factor investing a straightforward path to premium capture and diversification? In their October 2014 paper entitled “Facts and Fantasies About Factor Investing”, Zelia Cazalet and Thierry Roncalli summarize the body of research on factor investing and provide examples to address the following questions:

  1. What is a risk factor?
  2. Do all risk factors offer attractive premiums?
  3. How stable and robust are these premiums?
  4. How can investors translate academic risk factors into portfolios?
  5. How should investors allocate to different factors?

They define risk factor investing as the attempt to enhance returns in the long run by capturing systematic risk premiums. They focus on the gap between retrospective (academic) analysis and prospective portfolio implementation. They summarize research on the following factors: market beta, size, book-to-market ratio, momentum, volatility, liquidity, carry, quality, yield curve slope, default risk, coskewness and macroeconomic variables. Based on the body of factor investing research and examples, they conclude that: Keep Reading

Static Smart Beta vs. Many Dynamic Proprietary Factors

Which is the better equity investment strategy: (1) a consistent portfolio tilt toward one or a few factors widely accepted, based on linear regression backtests, as effective in selecting stocks with above-average performance (smart beta); or, (2) a more complex strategy that seeks to identify stocks with above-average performance via potentially dynamic relationships with a set of many proprietary factors? In their September 2014 paper entitled “Investing in a Multidimensional Market”, Bruce Jacobs and Kenneth Levy argue for the latter. Referring to recent research finding that many factors are highly significant stock return predictors in multivariate regression tests, they conclude that: Keep Reading

Taming the Factor Zoo?

How should researchers address the issue of aggregate/cumulative data snooping bias, which derives from many researchers exploring approximately the same data over time? In the October 2014 version of their paper entitled “. . . and the Cross-Section of Expected Returns”, Campbell Harvey, Yan Liu and Heqing Zhu examine this issue with respect to studies that discover factors explaining differences in future returns among U.S. stocks. They argue that aggregate/cumulative data snooping bias makes conventional statistical significance cutoffs (for example, a t-statistic of at least 2.0) too low. Researchers should view their respective analyses not as independent single tests, but rather as one of many within a multiple hypothesis testing framework. Such a framework raises the bar for significance according to the number of hypotheses tested, and the authors give guidance on how high the bar should be. They acknowledge that they considered only top journals and relatively few working papers in discovering factors, and that they do not (cannot) count past tests of factors falling short of conventional significance levels (and consequently not published). Using a body of 313 published studies and 63 working papers encompassing 316 factors explaining the cross-section of future U.S. stock returns from the mid-1960s through 2012, they find that: Keep Reading
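To illustrate how a multiple hypothesis testing framework raises the significance bar, the sketch below computes a Bonferroni-adjusted t-statistic cutoff under a normal approximation. The 316-factor count comes from the paper's sample, but the 5% significance level and the approximation are illustrative assumptions, not the authors' exact procedure (they also consider less conservative corrections).

```python
from statistics import NormalDist

def bonferroni_t_cutoff(n_tests, alpha=0.05):
    """Two-sided t-statistic cutoff after a Bonferroni correction for
    n_tests hypotheses, using the standard normal approximation."""
    return NormalDist().inv_cdf(1 - (alpha / n_tests) / 2)

print(round(bonferroni_t_cutoff(1), 2))    # single test: the familiar 1.96
print(round(bonferroni_t_cutoff(316), 2))  # 316 tested factors: well above 2.0
```

With 316 tested factors, the Bonferroni cutoff climbs to nearly twice the conventional 2.0 threshold, which is why most published factors fail the stricter bar.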

Improving Established Multi-factor Stock-picking Models Is Hard

Is more clearly better in terms of number of factors included in a stock screening strategy? In the October 2014 draft of their paper entitled “Incremental Variables and the Investment Opportunity Set”, Eugene Fama and Kenneth French investigate the effects of adding to an established multi-factor model of stock returns an additional factor that by itself has power to predict stock returns. They focus on size, book-to-market ratio (B/M, measured with lagged book value), and momentum (cumulative return from 12 months ago to one month ago, with a skip-month to avoid systematic reversal). They consider a broad sample of U.S. stocks and three subsamples: microcaps (below the 20th percentile of NYSE market capitalizations); small stocks (20th to 50th percentiles); and, big stocks (above the 50th percentile). They perform factor-return regressions, and they translate regression results into portfolio returns by: (1) ranking stocks into fifths (quintiles) based on full-sample average regression-predicted returns; and, (2) measuring gross average returns from hedge portfolios that are long (short) the equally weighted quintile with the highest (lowest) expected returns. Finally, they perform statistical tests to determine whether the maximum Sharpe ratio for quintile portfolios constructed from three-factor regressions is realistically higher than those for two-factor regressions. Using monthly excess returns (relative to the one-month Treasury bill yield) for a broad sample of U.S. stocks during January 1927 through December 2013, they find that: Keep Reading
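The quintile hedge portfolio construction in step (1) and (2) can be sketched as follows on simulated data. This is a bare-bones illustration, not the authors' implementation: the predicted returns, the 0.3 signal strength and the 500-stock cross-section are all made-up inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
predicted = rng.normal(0, 1, n)                    # regression-predicted returns
realized = 0.3 * predicted + rng.normal(0, 1, n)   # realized returns: signal + noise

# (1) Rank stocks into quintiles on predicted return
order = np.argsort(predicted)
quintiles = np.array_split(order, 5)

# (2) Equally weight each quintile; hedge return = top minus bottom quintile
avg = [realized[q].mean() for q in quintiles]
hedge = avg[-1] - avg[0]
print(round(hedge, 3))
```

Because realized returns load on the predicted signal, the long-short hedge return is positive on average; the paper's question is whether adding a third factor raises this spread (and the attainable Sharpe ratio) by more than noise.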

Better Four-factor Model of Stock Returns?

Are the widely used Fama-French three-factor model (market, size, book-to-market ratio) and the Carhart four-factor model (adding momentum) the best factor models of stock returns? In their September 2014 paper entitled “Digesting Anomalies: An Investment Approach”, Kewei Hou, Chen Xue and Lu Zhang construct the q-factor model comprised of market, size, investment and profitability factors and test its ability to predict stock returns. They also test its ability to account for 80 stock return anomalies (16 momentum-related, 12 value-related, 14 investment-related, 14 profitability-related, 11 related to intangibles and 13 related to trading frictions). Specifically, the q-factor model describes the excess return (relative to the risk-free rate) of a stock via its dependence on:

  1. The market excess return.
  2. The difference in returns between small and big stocks.
  3. The difference in returns between stocks with low and high investment-to-assets ratios (change in total assets divided by lagged total assets).
  4. The difference in returns between high-return on equity (ROE) stocks and low-ROE stocks.

They estimate the q-factors from a triple 2-by-3-by-3 sort on size, investment-to-assets and ROE. They compare the predictive power of this model with those of the Fama-French and Carhart models. Using returns, market capitalizations and firm accounting data for a broad sample of U.S. stocks during January 1972 through December 2012, they find that: Keep Reading
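The triple-sort factor construction can be sketched as below on simulated firm characteristics. This is a simplified illustration, not the authors' procedure: it uses random data, equal weighting and simple percentile breakpoints, whereas the paper uses NYSE breakpoints and value-weighted portfolio returns.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 900
size = rng.normal(0, 1, n)
inv = rng.normal(0, 1, n)                  # investment-to-assets
roe = rng.normal(0, 1, n)
ret = 0.5 * roe + rng.normal(0, 1, n)      # simulated returns load on ROE

def bucket(x, cuts):
    """Assign each value to a group given percentile cutpoints."""
    return np.searchsorted(np.percentile(x, cuts), x)

s = bucket(size, [50])                     # 2 size groups
i = bucket(inv, [30, 70])                  # 3 investment groups
r = bucket(roe, [30, 70])                  # 3 ROE groups

# ROE factor: average return of the 6 high-ROE cells minus the 6 low-ROE cells
highs = [ret[(s == a) & (i == b) & (r == 2)].mean() for a in range(2) for b in range(3)]
lows = [ret[(s == a) & (i == b) & (r == 0)].mean() for a in range(2) for b in range(3)]
roe_factor = np.mean(highs) - np.mean(lows)
print(round(roe_factor, 3))
```

Averaging across the size and investment cells controls for those characteristics, so the spread isolates the ROE dimension; the investment and size factors come from analogous spreads along the other two sort dimensions.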

Forget CAPM Beta?

Does the Capital Asset Pricing Model (CAPM) make predictions useful to investors? In his October 2014 paper entitled “CAPM: an Absurd Model”, Pablo Fernandez argues that the assumptions and predictions of CAPM have no basis in the real world. A key implication of CAPM for investors is that an asset’s expected return relates positively to its expected beta (regression coefficient relative to the expected market risk premium). Based on a survey of related research, he concludes that: Keep Reading
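For reference, the beta at the center of this debate is just the slope of a stock's excess return on the market excess return. The sketch below estimates it on simulated monthly data; the true beta of 1.2 and all return parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
mkt = rng.normal(0.005, 0.04, 240)                      # monthly market excess returns
stock = 0.002 + 1.2 * mkt + rng.normal(0, 0.05, 240)    # stock with true beta = 1.2

# OLS beta: cov(stock, market) / var(market)
beta = np.cov(stock, mkt)[0, 1] / np.var(mkt, ddof=1)

# CAPM's prediction: expected excess return proportional to beta
capm_expected = beta * mkt.mean()
print(round(beta, 2), round(capm_expected, 4))
```

Fernandez's critique is not that beta cannot be estimated this way, but that realized betas are unstable and that the predicted beta-return relation fails empirically.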

Snooping for Fun and No Profit

How much distortion can data snooping inject into expected investment strategy performance? In their October 2014 paper entitled “Statistical Overfitting and Backtest Performance”, David Bailey, Stephanie Ger, Marcos Lopez de Prado, Alexander Sim and Kesheng Wu note that powerful computers let researchers test an extremely large number of model variations on a given set of data, thereby inducing extreme overfitting. In finance, this snooping often takes the form of refining a trading strategy to optimize its performance within a set of historical market data. The authors introduce a way to explore snooping effects via an online simulator that finds the optimal (maximum Sharpe ratio) variant of a simple trading strategy by testing all possible integer values for strategy parameters as applied to a set of randomly generated daily “returns.” The simple trading strategy each month trades a single asset by (1) choosing a day of the month to enter either a long or a short position and (2) exiting after a specified number of days or a stop-loss condition. The randomly generated “returns” come from a source Gaussian (normal) distribution with zero mean. The simulator allows a user to specify a maximum holding period, a maximum percentage stop loss, sample length (number of days), sample volatility (number of standard deviations) and sample starting point (random number generator seed). After identifying optimal parameter values on “backtest” data, the simulator runs the optimal strategy variant on a second set of randomly generated returns to show the effect of backtest overfitting. Using this simulator, they conclude that: Keep Reading
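The overfitting mechanism the simulator demonstrates can be sketched in a few lines: exhaustively optimize entry day and holding period on one set of pure-noise "returns," then apply the winning parameters to a fresh noise sample. This is a stripped-down analogue, not the authors' simulator (it omits the short side and the stop-loss), and the 21-day month and 10-year samples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def monthly_strategy_returns(daily, entry_day, hold):
    """Each 21-day 'month': enter long on entry_day, exit after hold days."""
    months = len(daily) // 21
    return np.array([daily[m*21 + entry_day : m*21 + entry_day + hold].sum()
                     for m in range(months)])

def sharpe(r):
    """Annualized Sharpe ratio of monthly returns (zero risk-free rate)."""
    return r.mean() / r.std(ddof=1) * np.sqrt(12)

backtest = rng.normal(0, 0.01, 21 * 120)   # 10 years of zero-mean noise
oos = rng.normal(0, 0.01, 21 * 120)        # a second, independent noise sample

# Exhaustively pick the (entry_day, hold) pair with the best in-sample Sharpe
best = max(((d, h) for d in range(21) for h in range(1, 22 - d)),
           key=lambda p: sharpe(monthly_strategy_returns(backtest, *p)))
print(sharpe(monthly_strategy_returns(backtest, *best)))  # maximized by construction
print(sharpe(monthly_strategy_returns(oos, *best)))       # on fresh noise
```

Since the data contain no signal at all, whatever Sharpe ratio the search "discovers" in-sample is pure overfitting, and it should not persist out of sample.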

Survey of Recent Research on Constructing and Monitoring Portfolios

What’s the latest research on portfolio construction and risk management? In the introduction to the July 2014 version of his (book-length) paper entitled “Many Risks, One (Optimal) Portfolio”, Cristian Homescu states: “The main focus of this paper is to analyze how to obtain a portfolio which provides above average returns while remaining robust to most risk exposures. We place emphasis on risk management for both stages of asset allocation: a) portfolio construction and b) monitoring, given our belief that obtaining above average portfolio performance strongly depends on having an effective risk management process.” Based on a comprehensive review of recent research on portfolio construction and risk management, he reports on:

Keep Reading

When Bollinger Bands Snapped

Do financial markets adapt to widespread use of an indicator, such as Bollinger Bands, thereby extinguishing its informativeness? In the August 2014 version of their paper entitled “Popularity versus Profitability: Evidence from Bollinger Bands”, Jiali Fang, Ben Jacobsen and Yafeng Qin investigate the effectiveness of Bollinger Bands as a stock market trading signal before and after its introduction in 1983. They focus on bands defined by 20 trading days of prices to create the middle band and two standard deviations of these prices to form upper and lower bands. They consider two trading strategies based on Bollinger Bands:

  1. Basic volatility breakout, which generates buy (sell) signals when price closes outside the upper (lower) band.
  2. Squeeze refinement of volatility breakout, which generates buy (sell) signals when band width drops to a six-month minimum and price closes outside the upper (lower) band.

They assess the popularity (and presumed level of use) of Bollinger Bands over time based on a search of articles from U.S. media in the Factiva database. They evaluate the predictive power of Bollinger Bands across their full sample and three subsamples: before 1983, 1983 through 2001, and after 2001. Using daily levels of 14 major international stock market indexes (both the Dow Jones Industrial Average and the S&P 500 Index for the U.S.) from initial availabilities (ranging from 1885 to 1971) through March 2014, they find that: Keep Reading
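The basic volatility breakout rule (strategy 1 above) can be sketched as follows, using the same 20-day window and two-standard-deviation bands the study examines. The toy price series is an invented example, and this is a minimal signal generator, not the authors' full backtest.

```python
import numpy as np

def bollinger_signals(close, window=20, k=2.0):
    """Basic breakout rule: +1 (buy) when the close exceeds the upper band,
    -1 (sell) below the lower band, else 0. Bands are a simple moving
    average of the prior `window` closes +/- k standard deviations."""
    close = np.asarray(close, dtype=float)
    signals = np.zeros(len(close), dtype=int)
    for t in range(window, len(close)):
        w = close[t - window:t]
        mid, sd = w.mean(), w.std()
        if close[t] > mid + k * sd:
            signals[t] = 1
        elif close[t] < mid - k * sd:
            signals[t] = -1
    return signals

prices = [100] * 25 + [110]        # flat series, then a sharp upside breakout
print(bollinger_signals(prices))   # buy signal fires only on the final day
```

The squeeze refinement (strategy 2) would add one more condition before acting on a breakout: band width (upper minus lower band) must first have contracted to a six-month minimum.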
