Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.


Better Four-factor Model of Stock Returns?

Are the widely used Fama-French three-factor model (market, size, book-to-market ratio) and the Carhart four-factor model (adding momentum) the best factor models of stock returns? In their September 2014 paper entitled “Digesting Anomalies: An Investment Approach”, Kewei Hou, Chen Xue and Lu Zhang construct the q-factor model, comprising market, size, investment and profitability factors, and test its ability to predict stock returns. They also test its ability to account for 80 stock return anomalies (16 momentum-related, 12 value-related, 14 investment-related, 14 profitability-related, 11 related to intangibles and 13 related to trading frictions). Specifically, the q-factor model describes the excess return (relative to the risk-free rate) of a stock via its dependence on:

  1. The market excess return.
  2. The difference in returns between small and big stocks.
  3. The difference in returns between stocks with low and high investment-to-assets ratios (change in total assets divided by lagged total assets).
  4. The difference in returns between high-return on equity (ROE) stocks and low-ROE stocks.

They estimate the q-factors from a triple 2-by-3-by-3 sort on size, investment-to-assets and ROE. They compare the predictive power of this model with those of the Fama-French and Carhart models. Using returns, market capitalizations and firm accounting data for a broad sample of U.S. stocks during January 1972 through December 2012, they find that: Keep Reading
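The model above amounts to a monthly time-series regression of a stock's excess return on the four factor returns. As a rough illustration only (simulated data and a plain OLS fit, not the authors' triple-sort estimation procedure), such a regression might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 240  # months of hypothetical data

# Hypothetical factor returns: market excess return, size spread,
# investment-to-assets spread (low minus high) and ROE spread (high minus low).
factors = rng.normal(0.005, 0.03, size=(T, 4))
true_betas = np.array([1.1, 0.4, 0.2, 0.3])
stock_excess = factors @ true_betas + rng.normal(0, 0.02, T)  # zero true alpha

# OLS regression of the stock's excess return on the four q-factors.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, stock_excess, rcond=None)
est_alpha, est_betas = coef[0], coef[1:]
print("alpha:", round(est_alpha, 4))
print("betas:", np.round(est_betas, 2))
```

A significant alpha in such a regression indicates returns the four factors fail to explain, which is how a factor model's ability to "digest" an anomaly is gauged.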

Forget CAPM Beta?

Does the Capital Asset Pricing Model (CAPM) make predictions useful to investors? In his October 2014 paper entitled “CAPM: an Absurd Model”, Pablo Fernandez argues that the assumptions and predictions of CAPM have no basis in the real world. A key implication of CAPM for investors is that an asset’s expected return relates positively to its expected beta (regression coefficient relative to the expected market risk premium). Based on a survey of related research, he concludes that: Keep Reading
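For context, the beta in question is simply the slope of an asset's excess returns on the market's excess returns, equivalently covariance over variance. A minimal sketch with simulated data (not drawn from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical daily observations

market_excess = rng.normal(0.0003, 0.01, n)
asset_excess = 1.5 * market_excess + rng.normal(0, 0.01, n)  # true beta = 1.5

# CAPM beta: covariance with the market divided by market variance.
beta = np.cov(asset_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)
print(round(beta, 2))
```

Fernandez's critique is not that this calculation is difficult, but that realized betas are unstable over time and bear little relation to subsequent returns.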

Snooping for Fun and No Profit

How much distortion can data snooping inject into expected investment strategy performance? In their October 2014 paper entitled “Statistical Overfitting and Backtest Performance”, David Bailey, Stephanie Ger, Marcos Lopez de Prado, Alexander Sim and Kesheng Wu note that powerful computers let researchers test an extremely large number of model variations on a given set of data, thereby inducing extreme overfitting. In finance, this snooping often takes the form of refining a trading strategy to optimize its performance within a set of historical market data. The authors introduce a way to explore snooping effects via an online simulator that finds the optimal (maximum Sharpe ratio) variant of a simple trading strategy by testing all possible integer values for strategy parameters as applied to a set of randomly generated daily “returns.” The simple trading strategy each month trades a single asset by (1) choosing a day of the month to enter either a long or a short position and (2) exiting after a specified number of days or a stop-loss condition. The randomly generated “returns” come from a source Gaussian (normal) distribution with zero mean. The simulator allows a user to specify a maximum holding period, a maximum percentage stop loss, sample length (number of days), sample volatility (number of standard deviations) and sample starting point (random number generator seed). After identifying optimal parameter values on “backtest” data, the simulator runs the optimal strategy variant on a second set of randomly generated returns to show the effect of backtest overfitting. Using this simulator, they conclude that: Keep Reading
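The authors' simulator is online, but the mechanism is easy to reproduce in miniature. The sketch below (a stripped-down analog, not their code, with the stop-loss feature omitted) grid-searches entry day, holding period and trade direction on pure noise, then re-runs the "optimal" variant on fresh noise:

```python
import numpy as np

rng = np.random.default_rng(42)

def strategy_sharpe(daily, entry_day, hold_days, sign):
    """Sharpe ratio of entering on a fixed day of each 21-day 'month'
    (long if sign=+1, short if -1) and holding for hold_days days."""
    months = daily.reshape(-1, 21)
    trades = sign * months[:, entry_day:entry_day + hold_days].sum(axis=1)
    return trades.mean() / trades.std(ddof=1)

# In-sample "backtest" data: zero-mean noise, so any edge found is overfitting.
insample = rng.normal(0, 0.01, 21 * 120)  # 120 months
sr_in, e, h, s = max((strategy_sharpe(insample, e, h, s), e, h, s)
                     for e in range(10) for h in range(2, 11) for s in (1, -1))

# Out-of-sample: the tuned variant faces fresh noise and loses its edge.
outsample = rng.normal(0, 0.01, 21 * 120)
sr_out = strategy_sharpe(outsample, e, h, s)
print(round(sr_in, 2), round(sr_out, 2))
```

Even this tiny 180-variant search reliably "discovers" a positive in-sample Sharpe ratio on returns that are random by construction.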

Survey of Recent Research on Constructing and Monitoring Portfolios

What’s the latest research on portfolio construction and risk management? In the introduction to the July 2014 version of his (book-length) paper entitled “Many Risks, One (Optimal) Portfolio”, Cristian Homescu states: “The main focus of this paper is to analyze how to obtain a portfolio which provides above average returns while remaining robust to most risk exposures. We place emphasis on risk management for both stages of asset allocation: a) portfolio construction and b) monitoring, given our belief that obtaining above average portfolio performance strongly depends on having an effective risk management process.” Based on a comprehensive review of recent research on portfolio construction and risk management, he reports on:

Keep Reading

When Bollinger Bands Snapped

Do financial markets adapt to widespread use of an indicator, such as Bollinger Bands, thereby extinguishing its informativeness? In the August 2014 version of their paper entitled “Popularity versus Profitability: Evidence from Bollinger Bands”, Jiali Fang, Ben Jacobsen and Yafeng Qin investigate the effectiveness of Bollinger Bands as a stock market trading signal before and after its introduction in 1983. They focus on bands defined by 20 trading days of prices to create the middle band and two standard deviations of these prices to form upper and lower bands. They consider two trading strategies based on Bollinger Bands:

  1. Basic volatility breakout, which generates buy (sell) signals when price closes outside the upper (lower) band.
  2. Squeeze refinement of volatility breakout, which generates buy (sell) signals when band width drops to a six-month minimum and price closes outside the upper (lower) band.

They assess the popularity (and presumed level of use) of Bollinger Bands over time based on a search of articles from U.S. media in the Factiva database. They evaluate the predictive power of Bollinger Bands across their full sample and three subsamples: before 1983, 1983 through 2001, and after 2001. Using daily levels of 14 major international stock market indexes (both the Dow Jones Industrial Average and the S&P 500 Index for the U.S.) from initial availabilities (ranging from 1885 to 1971) through March 2014, they find that: Keep Reading
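A minimal sketch of the basic breakout rule (strategy 1; the squeeze refinement is omitted), using the paper's 20-day window and two standard deviations on a toy price series:

```python
import numpy as np

def bollinger_signals(close, window=20, k=2.0):
    """Return +1 (buy) where price closes above the upper band,
    -1 (sell) where it closes below the lower band, else 0."""
    close = np.asarray(close, dtype=float)
    signals = np.zeros(len(close), dtype=int)
    for t in range(window, len(close)):
        win = close[t - window:t]
        mid = win.mean()              # 20-day middle band
        width = k * win.std(ddof=0)   # two standard deviations
        if close[t] > mid + width:
            signals[t] = 1
        elif close[t] < mid - width:
            signals[t] = -1
    return signals

# Toy series: flat prices, then a sharp jump through the upper band.
prices = [100.0] * 25 + [110.0]
print(bollinger_signals(prices)[-1])  # the final close triggers a buy signal
```

The research question is whether such signals retained any predictive power once the indicator became popular after 1983.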

Evaluating Systematic Trading Programs

How should investors assess systematic trading programs? In his August 2014 paper entitled “Evaluation of Systematic Trading Programs”, Mikhail Munenzon offers a non-technical overview of issues involved in evaluating systematic trading programs. He defines such programs as automated processes that generate signals, manage positions and execute orders for exchange-listed instruments or spot currency rates with little or no human intervention. He states that the topics he covers are not exhaustive but should be sufficient for an investor to initiate successful relationships with systematic trading managers. Based on his years of experience as a systematic trader and as a large institutional investor who has evaluated many diverse systematic trading managers on a global scale, he concludes that: Keep Reading

Snooping Bias Accounting Tools

How can researchers account for the snooping bias derived from testing of multiple strategy alternatives on the same set of data? In the July 2014 version of their paper entitled “Evaluating Trading Strategies”, Campbell Harvey and Yan Liu describe tools that adjust strategy evaluation for multiple testing. They note that conventional thresholds for statistical significance assume an independent (single) test. Applying these same thresholds to multiple testing scenarios induces many false discoveries of “good” trading strategies. Evaluation of multiple tests requires making significance thresholds more stringent. In effect, such adjustments mean demanding higher Sharpe ratios or, alternatively, applying “haircuts” to computed strategy Sharpe ratios according to the number of strategies tried. They consider two approaches: one that aggressively excludes false discoveries, and another that scales avoidance of false discoveries with the number of strategy alternatives tested. Using mathematical derivations and examples, they conclude that:

Keep Reading
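One simple version of such a haircut (a Bonferroni-style sketch, not the authors' full framework, which also covers Holm and BHY adjustments) inflates the Sharpe ratio's p-value by the number of strategies tried:

```python
from math import sqrt
from statistics import NormalDist

def haircut_sharpe(sr_annual, years, n_tests):
    """Bonferroni-style haircut: multiply the single-test p-value of an
    annualized Sharpe ratio by the number of strategies tried, then map
    the adjusted p-value back to an adjusted Sharpe ratio."""
    nd = NormalDist()
    t_stat = sr_annual * sqrt(years)      # rough t-statistic
    p = 2 * (1 - nd.cdf(t_stat))          # two-sided single-test p-value
    p_adj = min(n_tests * p, 1.0)         # Bonferroni adjustment
    return nd.inv_cdf(1 - p_adj / 2) / sqrt(years)

# A 0.8 Sharpe ratio over 10 years survives a single test intact...
print(round(haircut_sharpe(0.8, 10, 1), 2))
# ...but takes a large haircut once 20 tried strategies are acknowledged.
print(round(haircut_sharpe(0.8, 10, 20), 2))
```

Bonferroni is the aggressive end of the spectrum; the authors' less stringent adjustments scale the correction with the dependence structure among the tests.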

Sensitivity of Risk Adjustment to Measurement Interval

Are widely used volatility-adjusted investment performance metrics, such as Sharpe ratio, robust to different measurement intervals? In the July 2014 version of their paper entitled “The Divergence of High- and Low-Frequency Estimation: Implications for Performance Measurement”, William Kinlaw, Mark Kritzman and David Turkington examine the sensitivity of such metrics to the length of the return interval used to measure them. They consider hedge fund performance, conventionally estimated as Sharpe ratio calculated from monthly returns and annualized by multiplying by the square root of 12. They also consider mutual fund performance, usually evaluated as excess return divided by excess volatility relative to an appropriate benchmark (information ratio). Finally, they consider Sharpe ratios of risk parity strategies, which periodically rebalance portfolio asset weights according to the inverse of their return standard deviations. Using monthly and longer-interval return data over available sample periods for each case, they find that: Keep Reading
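The core issue is that annualizing a monthly Sharpe ratio by the square root of 12 assumes serially uncorrelated returns. A sketch with simulated autocorrelated monthly returns (illustrative only, not the paper's data) shows the divergence:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monthly returns with positive autocorrelation (AR(1)).
n = 12 * 500
eps = rng.normal(0.005, 0.02, n)
r = np.empty(n)
r[0] = eps[0]
for t in range(1, n):
    r[t] = 0.4 * r[t - 1] + eps[t]

# Conventional annualization from monthly returns.
sr_monthly = r.mean() / r.std(ddof=1) * np.sqrt(12)

# Sharpe ratio computed directly from non-overlapping annual returns.
annual = r.reshape(-1, 12).sum(axis=1)
sr_annual = annual.mean() / annual.std(ddof=1)

print(round(sr_monthly, 2), round(sr_annual, 2))
```

With positively autocorrelated returns, annual volatility exceeds the square root of 12 times monthly volatility, so the conventionally annualized Sharpe ratio overstates risk-adjusted performance.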

Sharper Sharpe Ratio?

Is there some tractable investment performance metric that corrects weaknesses commonly encountered in financial markets research? In the July 2014 version of their paper entitled “The Deflated Sharpe Ratio: Correcting for Selection Bias, Backtest Overfitting and Non-Normality”, David Bailey and Marcos Lopez de Prado introduce the Deflated Sharpe Ratio (DSR) as a tool for evaluating investment performance that accounts for both non-normality and data snooping bias. They preface DSR development by noting that:

  • Many investors use performance statistics, such as Sharpe ratio, that assume test sample returns have a normal distribution.
  • Fueled by high levels of randomness in liquid markets, testing of a sufficient number of strategies on the same data essentially guarantees discovery of an apparently profitable, but really just lucky, strategy.
  • The in-sample/out-of-sample hold-out approach does not eliminate data snooping bias when multiple strategies are tested against the same hold-out data.
  • Researchers generally publish “successes” as isolated analyses, ignoring all the failures encountered along the road to statistical significance.

The authors then transform Sharpe ratio into DSR by incorporating sample return distribution skewness and kurtosis and by correcting for the bias associated with the number of strategies tested in arriving at the “winning” strategy. Based on mathematical derivations and an example, they conclude that:

Keep Reading
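The non-normality correction in the DSR rests on the probabilistic Sharpe ratio from the same authors' earlier work. A sketch of that component as commonly stated (the full DSR additionally raises the benchmark Sharpe ratio to the level expected from the best of many random trials, which is omitted here):

```python
from math import sqrt
from statistics import NormalDist

def probabilistic_sharpe(sr_hat, sr_benchmark, n_obs, skew, kurt):
    """Probability that the true Sharpe ratio exceeds sr_benchmark, given an
    observed per-period Sharpe sr_hat over n_obs returns with the stated
    skewness and kurtosis (kurt = 3 for a normal distribution)."""
    denom = sqrt(1 - skew * sr_hat + (kurt - 1) / 4 * sr_hat ** 2)
    z = (sr_hat - sr_benchmark) * sqrt(n_obs - 1) / denom
    return NormalDist().cdf(z)

# Normal returns: an observed per-period Sharpe of 0.1 over 250 periods.
psr_normal = probabilistic_sharpe(0.1, 0.0, 250, 0.0, 3.0)
# Negative skewness and fat tails reduce confidence in the same Sharpe.
psr_fat = probabilistic_sharpe(0.1, 0.0, 250, -1.0, 6.0)
print(round(psr_normal, 3), round(psr_fat, 3))
```

The same observed Sharpe ratio thus warrants less confidence when returns are negatively skewed and fat-tailed, which is the non-normality half of the deflation.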

Basic Equity Return Statistics

What do the basic statistics of stock market returns tell us about risk and predictability? Basic statistics are the measures of the moments of the return distribution: mean (average), standard deviation, skewness and kurtosis. Are these stock market return statistics (and the risk-reward environment they describe) stable over time? Do they reliably relate to future returns? To make the investigation tractable, we calculate these four statistics month-by-month based on daily returns. Using daily closes of the Dow Jones Industrial Average (DJIA) for January 1930 through April 2014 (1012 months) and the S&P 500 index for January 1950 through April 2014 (772 months), we find that: Keep Reading
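The month-by-month calculation can be sketched as follows (simulated prices for illustration; the study uses actual DJIA and S&P 500 daily closes):

```python
import numpy as np

def monthly_moments(daily_closes, days_per_month=21):
    """Compute mean, standard deviation, skewness and excess kurtosis
    of daily returns, month by month."""
    closes = np.asarray(daily_closes, dtype=float)
    rets = closes[1:] / closes[:-1] - 1
    n_months = len(rets) // days_per_month
    out = []
    for m in range(n_months):
        r = rets[m * days_per_month:(m + 1) * days_per_month]
        mu, sd = r.mean(), r.std(ddof=1)
        z = (r - mu) / sd
        out.append((mu, sd, (z ** 3).mean(), (z ** 4).mean() - 3))
    return out

# Hypothetical price path built from random daily returns.
rng = np.random.default_rng(3)
closes = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, 22 * 12))
stats = monthly_moments(closes)
print(len(stats))  # one (mean, std, skew, kurtosis) row per month
```

Tracking these four series over time is what reveals whether the risk-reward environment is stable and whether any moment carries information about future returns.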
