
Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Cautions Regarding Findings Include…

What are common cautions regarding exploitation of academic and practitioner papers on financial markets? To investigate, we collect, collate and summarize our cautions on findings from papers reviewed over the past year. These papers survive screening for relevance to investors from a much larger number of papers, mostly from the Financial Economics Network (FEN) Subject Matter eJournals and Journal of Economic Literature (JEL) Code G1 sections of the Social Sciences Research Network (SSRN). Based on review of cautions in 109 summaries of papers relevant to investors posted during mid-March 2018 through mid-March 2019, we conclude that: Keep Reading

Equity Factor Census

Should investors trust academic equity factor research? In their February 2019 paper entitled “A Census of the Factor Zoo”, Campbell Harvey and Yan Liu announce a comprehensive database of hundreds of equity factors from top academic journals and working papers through January 2019, including a link to citation and download information. They distinguish among six types of common factors and five types of firm characteristic-based factors. They also explore incentives for factor discovery and reasons why many factors are lucky findings that exaggerate expectations and disappoint in live trading. Finally, they announce a project that allows researchers to add published and working papers to the database. Based on their census of published factors and analysis of implications, they conclude that: Keep Reading

Relative Wealth Effects on Investors

How does investor competitiveness (a goal of relative rather than absolute wealth) affect optimal allocations? In their February 2019 paper entitled “The Growth of Relative Wealth and the Kelly Criterion”, Andrew Lo, Allen Orr and Ruixun Zhang compare optimal portfolios for maximizing relative wealth versus absolute wealth at both short and long investment horizons. They define an individual’s relative wealth as that individual’s fraction of the total wealth of all investors. Their model assumes that investors allocate to two assets, one risky and one riskless. They identify when an investor should allocate according to the Kelly criterion (series of allocations that maximize terminal wealth over the long run) and when the investor should deviate from it. Based on derivations and modeling, they conclude that:

Keep Reading
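For reference, the Kelly criterion for one risky and one riskless asset reduces, under a standard continuous-time approximation, to a simple fraction of wealth. A minimal sketch with illustrative parameter values (not taken from the paper):

```python
# Kelly fraction for one risky asset under a continuous-time
# approximation: f* = (mu - r) / sigma**2, the risky-asset weight
# that maximizes long-run growth of absolute wealth.
# Parameter values below are illustrative, not from the paper.

def kelly_fraction(mu, r, sigma):
    """Growth-optimal risky-asset weight given expected return mu,
    risk-free rate r, and return volatility sigma."""
    return (mu - r) / sigma ** 2

# Example: 8% expected return, 2% risk-free rate, 16% volatility.
f = kelly_fraction(0.08, 0.02, 0.16)
print(round(f, 3))  # 2.344 -> a leveraged Kelly allocation
```

A fraction above 1 implies leverage, which is one reason practitioners often use "fractional Kelly" allocations in practice.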

Inflated Expectations of Factor Investing

How should investors feel about factor/multi-factor investing? In their February 2019 paper entitled “Alice’s Adventures in Factorland: Three Blunders That Plague Factor Investing”, Robert Arnott, Campbell Harvey, Vitali Kalesnik and Juhani Linnainmaa explore three critical failures of U.S. equity factor investing:

  1. Returns are far short of expectations due to overfitting and/or trade crowding.
  2. Drawdowns far exceed expectations.
  3. Diversification of factors occasionally disappears when correlations soar.

They focus on 15 factors most closely followed by investors: the market factor; a set of six factors from widely used academic multi-factor models (size, value, operating profitability, investment, momentum and low beta); and, a set of eight other popular factors (idiosyncratic volatility, short-term reversal, illiquidity, accruals, cash flow-to-price, earnings-to-price, long-term reversal and net share issuance). For some analyses they employ a broader set of 46 factors. They consider both long-term (July 1963-June 2018) and short-term (July 2003-June 2018) factor performances. Using returns for the specified factors during July 1963 through June 2018, they conclude that:

Keep Reading
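Blunder 2 concerns drawdowns far exceeding expectations. As a reference point, maximum drawdown of a factor wealth path can be computed as follows (hypothetical monthly long-short returns, not the paper's data):

```python
import numpy as np

def max_drawdown(returns):
    """Maximum peak-to-trough decline of cumulative wealth
    built from a series of periodic returns."""
    wealth = np.cumprod(1 + np.asarray(returns, dtype=float))
    peaks = np.maximum.accumulate(wealth)  # running high-water mark
    return (wealth / peaks - 1).min()

# Hypothetical monthly long-short factor returns.
rets = [0.02, -0.05, 0.01, -0.10, 0.03, 0.04, -0.02]
print(round(max_drawdown(rets), 4))
```

Applying such a calculation to long factor histories, rather than to recent samples only, is one way to form more realistic drawdown expectations.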

Stopping Tests after Lucky Streaks?

Might purveyors of trading strategies be presenting performance results biased by stopping them when falsely successful? In other words, might they be choosing lucky closing conditions for reported positions? In the December 2018 revision of their paper entitled “p-Hacking and False Discovery in A/B Testing”, Ron Berman, Leonid Pekelis, Aisling Scott and Christophe Van den Bulte investigate whether online A/B experimenters bias results by stopping monitored commercial (marketing) experiments based on latest p-value. They hypothesize that such a practice may exist due to: (1) poor training in statistics; (2) self-deception motivated by desire for success; or, (3) deliberate deception for selling purposes. They employ regression discontinuity analysis to estimate whether reaching a particular p-value causes experimenters to end their tests. Using data from 2,101 online A/B experiments with daily tracking of results during 2014, they find that:

Keep Reading
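The stopping bias the authors investigate can be illustrated by simulation: when both arms of an A/B test are identical (a true null), checking the p-value every day and stopping at the first crossing of the significance threshold produces far more false "successes" than a single test at a fixed horizon. A simplified sketch (assumed normal data and a z-test, not the authors' regression discontinuity design):

```python
import math

import numpy as np

rng = np.random.default_rng(0)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rates(n_days=20, n_per_day=100, trials=1000, alpha=0.05):
    """Under a true null (identical A and B arms), compare the share of
    experiments declared significant when (a) the experimenter peeks at
    the p-value daily and stops at the first crossing of alpha, versus
    (b) a single test at the fixed horizon."""
    peeked = fixed = 0
    for _ in range(trials):
        a = rng.normal(size=(n_days, n_per_day))
        b = rng.normal(size=(n_days, n_per_day))
        diff = np.cumsum(a.sum(axis=1) - b.sum(axis=1))  # cumulative A-B gap
        n = n_per_day * np.arange(1, n_days + 1)         # observations per arm so far
        z = diff / np.sqrt(2 * n)                        # unit variance per observation
        p = np.array([two_sided_p(v) for v in z])
        peeked += (p < alpha).any()
        fixed += p[-1] < alpha
    return peeked / trials, fixed / trials

peek, once = false_positive_rates()
print(round(peek, 3), round(once, 3))  # daily-peeking rate far exceeds alpha
```

The same logic applies to trading strategy backtests stopped at moments of apparent success.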

It Can’t All Be Data Snooping?

Is it possible that all the 300+ published factors that predict stock returns (such as size, value, profitability, investment, momentum…) derive from data snooping? In his October 2018 paper entitled “The Limits of Data Mining: A Thought Experiment”, Andrew Chen estimates how much data snooping would be required to “discover” all these factors by pure luck. Specifically, he calibrates a pure luck model built on the assumption that the probability of publishing a factor discovery increases with the degree to which the discovery is convincing (t-statistic). Using this model, he estimates the number of unpublished factor studies required for the published set to be attributable to pure luck. He considers two sets of factor t-statistics: 156 from factor replications via equal-weighted long-short extreme fifths (quintiles) of factor stock sorts; and, a hand-collected set of 316 from published factor studies. Using the specified approach and these two sets of t-statistics, he finds that: Keep Reading
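A back-of-envelope version of this thought experiment: if factor premiums were truly zero and t-statistics behaved like standard normals, only about 5% of tests would clear |t| > 1.96 by luck, so "discovering" hundreds of factors would require thousands of tried-and-discarded tests. A minimal sketch (assuming independent tests, a simplification of the paper's calibrated publication-probability model):

```python
import numpy as np

rng = np.random.default_rng(1)

def lucky_discoveries(n_tests, threshold=1.96):
    """Under a pure-luck null (true factor premium zero), count how many
    of n_tests independent t-statistics clear the significance threshold.
    For large samples, null t-statistics are approximately N(0, 1)."""
    t = rng.standard_normal(n_tests)
    return int((np.abs(t) > threshold).sum())

# Roughly 5% of pure-noise tests clear |t| > 1.96, so publishing ~300
# "significant" factors by luck alone would require on the order of
# 300 / 0.05 = 6,000 attempted factors.
print(lucky_discoveries(6000))
```

Chen's contribution is to estimate whether the required number of hidden tests is plausible given the research community's actual capacity.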

Curbing Data Snooping

How should researchers applying machine learning to quantitative finance address the field’s data limitations, which exacerbate data snooping bias? In their October 2018 paper entitled “A Backtesting Protocol in the Era of Machine Learning”, Robert Arnott, Campbell Harvey and Harry Markowitz take a step back and re-examine financial markets research methods, with focus on suppressing backtest overfitting of investment strategies. They introduce a research protocol, grounded in the recognition that self-deception is easy, intended to give live trading the best chance of matching or beating backtest expectations. Based on logic and their collective experience, they conclude that: Keep Reading

Free Data and the Collapse of Trading Costs

How have costs of U.S. stock trading data evolved in recent years? In his October 2018 paper entitled “Retail Investors Get a Sweet Deal: The Cost of a SIP of Stock Market Data”, James Angel examines costs of U.S. stock market data. He also describes the production of these data and their consolidation/distribution via Securities Information Processors (SIP). Using data for U.S. trading costs as far back as 1987, he finds that:

Keep Reading

Investment Strategy Development Coursework

In a series of nine presentation slide sets (Lectures 1-9 of 10) on “Advances in Financial Machine Learning”, Marcos Lopez de Prado provides part of Cornell University’s ORIE 5256 graduate course at the School of Engineering (“Special Topics in Financial Engineering V”). The course description includes: “Machine learning (ML) is changing virtually every aspect of our lives. As it relates to finance, this is the most exciting time to adopt a disruptive technology that will transform how everyone invests for generations [see the chart below]. Students will learn scientifically sound ML tools used in the financial industry.” Key points in these slide sets include: Keep Reading

Most Stock Anomalies Fake News?

How does a large sample of stock return anomalies fare in recent replication testing? In their October 2018 paper entitled “Replicating Anomalies”, Kewei Hou, Chen Xue and Lu Zhang attempt to replicate 452 published U.S. stock return anomalies, comprising 57 momentum, 69 value-growth, 38 investment, 79 profitability, 103 intangibles and 106 trading frictions (trading volume, liquidity, market microstructure) anomalies. Compared to the original papers, they use the same sample populations, original (as early as January 1967) and extended (through 2016) sample periods and similar methods/variable definitions. They test limiting influence of microcaps (stocks in the lowest 20% of market capitalizations) by using NYSE (not NYSE-Amex-NASDAQ) size breakpoints and value-weighted returns. They consider an anomaly replication successful if average high-minus-low tenth (decile) return is significant at the 5% level, translating to t-statistic at least 1.96 for pure standalone tests and at least 2.78 assuming multiple testing (accounting for aggregate data snooping bias). Using required anomaly data and monthly returns for U.S. non-financial stocks during January 1967 through December 2016, they find that:

Keep Reading
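The replication hurdle above is a t-statistic on the average high-minus-low decile return. A minimal sketch of that calculation on hypothetical monthly spread returns (illustrative data, not the authors'):

```python
import numpy as np

def spread_t_stat(monthly_returns):
    """t-statistic of the average high-minus-low decile return:
    mean divided by its standard error."""
    r = np.asarray(monthly_returns, dtype=float)
    return r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))

# Hypothetical anomaly spread: 0.4% mean monthly return with 3%
# monthly volatility over a 600-month (50-year) sample.
rng = np.random.default_rng(2)
rets = rng.normal(0.004, 0.03, size=600)
t = spread_t_stat(rets)
print(round(t, 2), t > 1.96, t > 2.78)
```

Note how much the verdict depends on the hurdle: an anomaly clearing the standalone 1.96 threshold may still fail the stricter 2.78 multiple-testing threshold.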
