CXO Advisory

Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Emptying the Equity Factor Zoo?

As described in “Quantifying Snooping Bias in Published Anomalies”, anomalies published in leading journals offer substantial opportunities for exploitation on a gross basis. What profits are left after accounting for portfolio maintenance costs? In their November 2017 paper entitled “Accounting for the Anomaly Zoo: A Trading Cost Perspective”, Andrew Chen and Mihail Velikov examine the combined effects of post-publication return deterioration and portfolio reformation frictions on 135 cross-sectional stock return anomalies published in leading journals. Their proxy for trading frictions is modeled stock-level effective bid-ask spread based on daily returns, representing a lower bound on costs for investors using market orders. Their baseline tests employ hedge portfolios that are long (short) the equally weighted fifth, or quintile, of stocks with the highest (lowest) expected returns for each anomaly. They also consider capitalization weighting, sorts into tenths (deciles) rather than quintiles, and portfolio constructions that apply cost-suppression techniques. Using data as specified in published articles for replication of 135 anomaly hedge portfolios, they find that:

Keep Reading
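The baseline long-short construction described above (long the equally weighted top quintile by expected return, short the bottom quintile) can be sketched in a few lines. This is an illustration on hypothetical data; the function name and inputs are assumptions, not the authors' code:

```python
import numpy as np

def quintile_hedge_return(expected_returns, realized_returns):
    """Equal-weighted hedge portfolio return: long the quintile of stocks
    with the highest expected returns, short the lowest quintile."""
    expected_returns = np.asarray(expected_returns)
    realized_returns = np.asarray(realized_returns)
    order = np.argsort(expected_returns)  # ascending by anomaly signal
    n = len(order) // 5                   # quintile size
    short_leg = realized_returns[order[:n]].mean()   # lowest expected returns
    long_leg = realized_returns[order[-n:]].mean()   # highest expected returns
    return long_leg - short_leg

# Hypothetical example: 10 stocks whose signal perfectly ranks realized returns
signal = np.arange(10, dtype=float)
realized = np.arange(10, dtype=float) / 100.0  # returns 0% .. 9%
hedge = quintile_hedge_return(signal, realized)  # long {8%, 9%}, short {0%, 1%}
```

A gross result like this ignores frictions; the paper's point is that each reformation of the two legs incurs roughly the effective bid-ask spread on the stocks traded.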

Quantifying Snooping Bias in Published Anomalies

Is data snooping bias a material issue for cross-sectional stock return anomalies published in leading journals? In the September 2017 update of their paper entitled “Publication Bias and the Cross-Section of Stock Returns”, Andrew Chen and Tom Zimmermann: (1) develop an estimator for anomaly data snooping bias based on noisiness of associated returns; (2) apply it to replications of 172 anomalies published in 15 highly selective journals; and, (3) compare results to post-publication anomaly returns to distinguish between in-sample bias and out-of-sample market response to publication. If predictability is due to bias, post-publication returns should be (immediately) poor because pre-publication performance is a statistical figment. If predictability is due to true mispricing, post-publication returns should degrade as investors exploit new anomalies. Their baseline tests employ hedge portfolios that are long (short) the equally weighted fifth, or quintile, of stocks with the highest (lowest) expected returns for each anomaly. Results are gross, ignoring the impact of periodic portfolio reformation frictions. Using data as specified in published articles for replication of 172 anomaly hedge portfolios, they find that:

Keep Reading
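The core idea of correcting in-sample returns for snooping bias can be illustrated with a stylized shrinkage estimator: the noisier an anomaly's estimated mean return relative to the true dispersion of anomaly returns, the more it is shrunk toward zero. This sketch illustrates the logic only; it is not the authors' actual estimator:

```python
def shrunk_return(observed_mean, se, cross_sectional_sd):
    """Shrink an anomaly's in-sample mean return toward zero.
    observed_mean: estimated mean return from the published sample.
    se: standard error of that estimate (sampling noise).
    cross_sectional_sd: dispersion of true mean returns across anomalies.
    The shrinkage weight falls as noise (se) grows relative to true dispersion."""
    weight = cross_sectional_sd**2 / (cross_sectional_sd**2 + se**2)
    return weight * observed_mean

# Hypothetical: 1% monthly in-sample mean with standard error equal to the
# cross-anomaly dispersion -> the bias-corrected estimate is halved
corrected = shrunk_return(0.01, 0.005, 0.005)  # -> 0.005
```

Under this logic, a published anomaly with a precisely estimated return keeps most of it after correction, while a noisy one is mostly attributed to luck.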

Seven Habits of Highly Ineffective Quants

Why don’t machines rule the financial world? In his September 2017 presentation entitled “The 7 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado explores causes of the high failure rate of quantitative finance firms, particularly those employing machine learning. He then outlines fixes for those failure modes. Based on more than two decades of experience, he concludes that: Keep Reading

Best Market Forecasting Practices?

Are more data, higher levels of signal statistical significance and more sophisticated prediction models better for financial forecasting? In their August 2017 paper entitled “Practical Significance of Statistical Significance”, Ben Jacobsen, Alexander Molchanov and Cherry Zhang perform sensitivity testing of forecasting practices along three dimensions: (1) length of lookback interval (1 to 300 years); (2) required level of statistical significance for signals (1%, 5%, 10%…); and, (3) different signal detection methods that rely on difference from an historical average. They focus on predicting whether returns for specific calendar months will be higher or lower than the market, either excluding or including January. Using monthly UK stock market returns since 1693 and U.S. stock market returns since 1792, both through 2013, they find that:

Keep Reading
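The month-versus-rest signal detection described above reduces to a two-sample t-statistic computed over a chosen lookback interval and compared to a chosen significance hurdle. A minimal sketch on hypothetical data (the function and the simulated "January effect" are illustrative assumptions, not the authors' code):

```python
import numpy as np

def month_vs_rest_tstat(returns, months, target_month, lookback):
    """t-statistic for the difference between target_month's mean return
    and the mean return of all other months, over a lookback window."""
    r = np.asarray(returns)[-lookback:]
    m = np.asarray(months)[-lookback:]
    a, b = r[m == target_month], r[m != target_month]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# Hypothetical data: 10 years of monthly returns with a strong January effect
rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 10)
returns = rng.normal(0.0, 0.01, 120) + np.where(months == 1, 0.05, 0.0)
t = month_vs_rest_tstat(returns, months, target_month=1, lookback=120)
# accept the signal only if |t| clears the chosen significance hurdle
```

The paper's sensitivity tests amount to varying `lookback` and the t-statistic hurdle and measuring how out-of-sample profitability responds.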

Brute Force Stock Trading Signal Discovery

How serious is the snooping bias (p-hacking) derived from brute force mining of stock trading strategy variations? In their August 2017 paper entitled “p-Hacking: Evidence from Two Million Trading Strategies”, Tarun Chordia, Amit Goyal and Alessio Saretto test a large number of hypothetical trading strategies to estimate an upper bound on the seriousness of p-hacking and to estimate the likelihood that a researcher can discover a truly abnormal trading strategy. Specifically, they:

  • Collect historical data for 156 firm accounting and stock price/return variables as available for U.S. common stocks in the top 80% of NYSE market capitalizations with price over $3.
  • Exhaustively construct about 2.1 million trading signals from these variables based on their levels, changes and certain combination ratios.
  • Calculate three measures of trading signal effectiveness:
    1. Gross 6-factor alphas (controlling for market, size, book-to-market, profitability, investment and momentum) of value-weighted, annually reformed hedge portfolios that are long the value-weighted tenth, or decile, of stocks with the highest signal values and short the decile with the lowest.
    2. Linear regressions that test ability of the entire distribution of trading signals to explain future gross returns based on linear relationships.
    3. Gross Sharpe ratios of the hedge portfolios used for alpha calculations.
  • Apply three multiple hypothesis testing methods that account for cross-correlations in signals and returns (family-wise error rate, false discovery rate and false discovery proportion).

They deem a signal effective if it survives both statistical hurdles (alpha t-statistic of 3.79 and regression t-statistic of 3.12) and has a monthly Sharpe ratio higher than that of the market (0.12). Using monthly values of the 156 specified input variables during 1972 through 2015, they find that:

Keep Reading
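One of the multiple hypothesis testing adjustments named above, false discovery rate control, is commonly implemented with the Benjamini-Hochberg procedure. A self-contained sketch (an illustration of the technique on hypothetical p-values, not the authors' implementation, which also handles cross-correlations):

```python
import numpy as np

def benjamini_hochberg(p_values, fdr=0.05):
    """Boolean mask of discoveries under the Benjamini-Hochberg procedure:
    sort p-values ascending, find the largest k with p_(k) <= k/m * fdr,
    and declare the k smallest p-values discoveries."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    thresholds = fdr * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(len(p), dtype=bool)
    mask[order[:k]] = True
    return mask

# Hypothetical: five strategy p-values; only the strongest two survive at 5% FDR
survivors = benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.9], fdr=0.05)
```

With roughly two million candidate strategies, such corrections raise the effective t-statistic hurdle far above the conventional 2.0, which is the paper's central point.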

A Few Notes on Trend Following

Michael Covel prefaces the 2017 Fifth Edition of his book, Trend Following: How to Make a Fortune in Bull, Bear, and Black Swan Markets, by stating that: “The 233,092 words in this book are the result of my near 20-year hazardous journey for the truth about this trading called trend following. …Trend following…aims to capture the majority of a connected market trend up or down for outsize profit. It is designed for potential gain in all major asset classes–stocks, bonds, metals, currencies, and hundreds of other commodities. …if you want outside-the-box different, the truth of how out-sized returns are made without any fundamental predictions or forecasts, this is it. And if you want the honest data-driven proof, I expect my digging will give everyone the necessary confidence to break their comfort addiction to the box they already know and go take a swing at making a fortune…” Based on his experience as a trader/portfolio manager and the body of trend following research, he concludes that: Keep Reading

Financial Markets as Massively Multiplayer Gambling

Are financial markets best viewed as massively multiplayer gambling? In his March 2017 paper entitled “Why Markets Are Inefficient: A Gambling ‘Theory’ of Financial Markets for Practitioners and Theorists”, Steven Moffitt presents a model of financial markets based on the perspective of an analytical/enlightened gambler. The gambler believes that: (1) actions of many players (some astute, some mediocre and some fools) drive prices; and, (2) markets adapt such that all static trading systems eventually fail. The gambler combines fundamental laws of gambling, knowledge of trading strategies of other market participants and data analysis to identify and exploit trading opportunities. The gambler translates this general strategy into a specific plan that algorithmically generates trades. Key aspects of the model are, as proposed: Keep Reading
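A canonical example of the "fundamental laws of gambling" such a model draws on is the Kelly criterion for bet sizing, which maximizes long-run logarithmic growth of capital. A minimal sketch (an illustration of the general principle, not necessarily Moffitt's formulation):

```python
def kelly_fraction(p_win, win_mult, loss_mult=1.0):
    """Kelly-optimal fraction of capital to stake on a repeated bet that
    multiplies the stake by win_mult with probability p_win and loses
    loss_mult times the stake otherwise. Derived by maximizing
    p*ln(1 + f*win_mult) + (1-p)*ln(1 - f*loss_mult) over f.
    A non-positive result means the gambler has no edge and should not bet."""
    q = 1.0 - p_win
    edge = p_win * win_mult - q * loss_mult  # expected profit per unit staked
    return edge / (win_mult * loss_mult)

# Hypothetical: 55% chance to win even money -> stake 10% of capital
f = kelly_fraction(0.55, 1.0)  # -> 0.1
```

The gambler's perspective follows directly: edge determines whether to trade at all, and edge relative to odds determines position size.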

The Power of Stories?

Do narratives (stories) sometimes trump rationality in financial markets? In his January 2017 paper entitled “Narrative Economics”, Robert Shiller considers the epidemiology (spread, mutation and fading) of stories as related to economic fluctuations. He explores the 1920-21 depression, the Great Depression of the 1930s, the Great Recession of 2007-9 and the political-economic situation of today as manifestations of popular stories. Based on these examples, other examples from other fields and his experience, he concludes that: Keep Reading

Robustness of Accounting-based Stock Return Anomalies

Do accounting-based stock return anomalies exist in samples that precede and follow those in which researchers discover them? In their November 2016 paper entitled “The History of the Cross Section of Stock Returns”, Juhani Linnainmaa and Michael Roberts examine the robustness of 36 accounting-based stock return anomalies, with initial focus on profitability and investment factors. Anomalies tested consist of six profitability measures, four earnings quality measures, five valuation ratios, 10 growth and investment measures, five financing measures, three distress measures and three composite measures. For each anomaly, they compare pre-discovery, in-sample and post-discovery anomaly average returns, Sharpe ratios, 1-factor (market) and 3-factor (market, size, book-to-market) model alphas and information ratios. Key to the analysis are previously uncollected pre-1963 data. They assume accounting data are available six months after the end of the firm fiscal year and generally employ annual anomaly factor portfolio rebalancing. Using firm accounting data and stock returns for a broad sample of U.S. stocks during 1918 through December 2015, they find that: Keep Reading
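The pre-discovery/in-sample/post-discovery comparison described above can be sketched as follows. The data layout and function interface are illustrative assumptions (the authors' actual tests also compute alphas and information ratios):

```python
import numpy as np

def subsample_sharpes(monthly_returns, sample_start, sample_end):
    """Annualized Sharpe ratios of an anomaly's hedge portfolio returns in
    the pre-discovery, in-sample and post-discovery subsamples.
    monthly_returns: dict mapping (year, month) -> monthly hedge return.
    sample_start/sample_end: first and last year of the discovery sample."""
    def sharpe(r):
        r = np.asarray(r)
        return np.sqrt(12) * r.mean() / r.std(ddof=1)  # annualize monthly SR
    pre = [r for (y, _), r in monthly_returns.items() if y < sample_start]
    ins = [r for (y, _), r in monthly_returns.items()
           if sample_start <= y <= sample_end]
    post = [r for (y, _), r in monthly_returns.items() if y > sample_end]
    return sharpe(pre), sharpe(ins), sharpe(post)
```

If the in-sample Sharpe ratio clearly exceeds both the pre-discovery and post-discovery values, that pattern is consistent with data snooping rather than a stable risk premium.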

Remedies for Publication Bias, Poor Research Design and p-Hacking?

How can the financial markets research community shed biases that exaggerate predictability and associated expected performance of investment strategies? In his January 2017 paper entitled “The Scientific Outlook in Financial Economics”, Campbell Harvey assesses the conventional approach to empirical research in financial economics, sharing insights from other fields. He focuses on the meaning of p-value, its limitations and various approaches to p-hacking (manipulating models/data to increase statistical significance, as in data snooping). He then outlines and advocates a Bayesian alternative approach to research. Based on research metadata and examples, he concludes that: Keep Reading
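One concrete device in the Bayesian direction Harvey advocates is the minimum Bayes factor bound, which converts a p-value into the strongest possible evidence against the null it can represent. A sketch using the Sellke-Bayarri-Berger bound (an illustration of the general approach; Harvey's paper develops the Bayesian framework more fully):

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound, -e * p * ln(p), on the Bayes
    factor in favor of the null hypothesis given a p-value (valid for
    p < 1/e). The data can shift the odds against the null by at most a
    factor of 1/result -- typically far less impressive than the raw
    p-value suggests."""
    assert 0 < p < 1 / math.e, "bound applies for p < 1/e"
    return -math.e * p * math.log(p)

# A "significant" p = 0.05 still leaves a Bayes factor of at least ~0.41
# for the null, i.e. odds shift against it by at most a factor of ~2.5
mbf = min_bayes_factor(0.05)
```

This kind of translation is one reason Harvey argues that conventional 5% significance is far too lenient once p-hacking and multiple testing are considered.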
