Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Variation in the Number of Significant Equity Factors

Does the number of factors significantly predicting next-month stock returns vary substantially over time? If so, what accounts for the variation? In their December 2021 paper entitled “Time Series Variation in the Factor Zoo”, Hendrik Bessembinder, Aaron Burt and Christopher Hrdlicka investigate time variation in the statistical significance of 205 previously identified equity factors before, during and after the sample periods used for their discoveries. Specifically, they track 1-factor (market) alphas of each factor over rolling 60-month intervals over a long sample period. Their criterion for significance for each factor in each interval is a t-statistic of at least 1.96 (95% confidence that alpha is positive). Using monthly returns for all common stocks listed on NYSE, AMEX and NASDAQ exchanges having at least 60 continuous months of data as available during July 1926 (with alpha series therefore starting June 1931) through December 2020, they find that: Keep Reading
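For intuition about the mechanics described above, here is a minimal sketch (not the authors' code) of a rolling 60-month market-model alpha significance test. The 60-month window and 1.96 cutoff come from the summary; the input arrays and variable names are illustrative.

```python
import numpy as np

def rolling_alpha_significance(factor_ret, market_ret, window=60, t_crit=1.96):
    """Flag each window end where the market-model alpha t-statistic is at least t_crit."""
    factor_ret = np.asarray(factor_ret, dtype=float)
    market_ret = np.asarray(market_ret, dtype=float)
    n = len(factor_ret)
    flags = np.zeros(n, dtype=bool)
    for end in range(window, n + 1):
        y = factor_ret[end - window:end]
        X = np.column_stack([np.ones(window), market_ret[end - window:end]])  # intercept = alpha
        coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = resid @ resid / (window - 2)                # residual variance
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_alpha = coef[0] / np.sqrt(cov[0, 0])
        flags[end - 1] = t_alpha >= t_crit                   # dated at the window's last month
    return flags
```

Applying this to each of the 205 factor return series and counting True flags month by month gives the time-varying count of significant factors the study examines.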

Science as Done by Humans

Do the choices researchers make in modeling, sample grooming and programming to test hypotheses materially affect their findings? In their November 2021 paper entitled “Non-Standard Errors”, 164 research teams and 34 peer reviewers representative of the academic empirical finance community investigate this source of uncertainty (non-standard error, as contrasted to purely statistical standard error). Specifically, they explore the following aspects of non-standard errors in financial research:

  • How large are they compared to standard errors?
  • Does research team quality (prior publications), research design quality (reproducibility) or paper quality (peer evaluation score) explain them?
  • Does peer review feedback reduce them?
  • Do researchers understand their magnitude?

To conduct the investigation, they pose six hypotheses that involve devising a metric and computing an average annual percentage change to quantify trends in: (1) market efficiency; (2) realized bid-ask spread; (3) share of client volume relative to total volume; (4) realized spread on client orders; (5) share of client orders that are market orders; and, (6) gross client trading revenue. The common sample for testing these hypotheses is a set of 720 million EuroStoxx 50 index futures trade records spanning 17 years. Each of 164 research teams studies each hypothesis and writes a brief paper, and peer reviewers evaluate and provide feedback to research teams on these papers. They then quantify the dispersion of findings for each hypothesis and further relate deviations of individual study findings from the average finding to team quality, research design quality and paper quality. Using results for all 984 studies, they find that: Keep Reading
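A minimal sketch of the central comparison, with assumed inputs rather than the study's data: the non-standard error for a hypothesis is the dispersion of point estimates across research teams, set against the usual statistical standard error.

```python
import numpy as np

def non_standard_error(team_estimates, team_standard_errors):
    """Cross-team dispersion of estimates vs. the average within-team standard error."""
    nse = np.std(team_estimates, ddof=1)        # dispersion across teams (non-standard error)
    avg_se = np.mean(team_standard_errors)      # typical statistical standard error
    return nse, avg_se, nse / avg_se            # ratio > 1: researcher choices dominate noise

# Example with made-up numbers: 164 teams estimating one hypothesis.
rng = np.random.default_rng(0)
estimates = rng.normal(loc=-2.0, scale=1.5, size=164)   # annual % change estimates
std_errors = rng.uniform(0.5, 1.0, size=164)
print(non_standard_error(estimates, std_errors))
```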

Financial Markets Flouters of Statistical Principles

Should practitioners and academics doing research on financial markets be especially careful (compared to researchers in other fields) when employing statistical inference? In the July 2021 version of their paper entitled “Finance is Not Excused: Why Finance Should Not Flout Basic Principles of Statistics”, David Bailey and Marcos Lopez de Prado argue that three aspects of financial research make it particularly prone to false discoveries:

  1. Due to intense competition, the probability of finding a truly profitable investment strategy is very low.
  2. True findings are often short-lived due to financial market evolution/adaptation.
  3. It is impossible to verify statistical findings through controlled experiments.

Based on statistical analysis principles and their experience in performing and reviewing financial markets research, they conclude that: Keep Reading
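As a rough back-of-the-envelope illustration (not from the paper) of the first point above, Bayes' rule shows how a low prior probability of a truly profitable strategy inflates the share of false discoveries among "significant" findings, here assuming a conventional 5% significance level and 80% test power.

```python
def false_discovery_rate(prior_true, power=0.80, alpha=0.05):
    """Share of significant findings that are false, via Bayes' rule."""
    true_positives = prior_true * power            # truly profitable strategies detected
    false_positives = (1 - prior_true) * alpha     # unprofitable strategies passing the test
    return false_positives / (true_positives + false_positives)

# With only 1% of tested strategies truly profitable, about 86% of "discoveries" are false.
print(round(false_discovery_rate(prior_true=0.01), 2))
```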

Post-discovery Effects on Anomaly Return Sequence

Does anomaly publication lead to its speedy exploitation? In his March 2021 paper entitled “The Race to Exploit Anomalies and the Cost of Slow Trading”, Guy Kaplanski studies a sample of widely accepted U.S. stock return anomalies to determine how discovery and publication of an anomaly affects the timing of future returns. He quantifies anomalies by each month sorting stocks into fifths, or quintiles, on each anomaly variable and reforming a portfolio that is long (short) the quintile with the highest (lowest) predicted returns. Using discovery (December of the last year in the discovery sample) and publication dates for 71 anomalies, along with associated anomaly data and daily prices for all reasonably liquid U.S. common stocks during January 1973 through December 2018, he finds that: Keep Reading
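A minimal pandas sketch (hypothetical column names, not the author's code) of the portfolio construction just described: each month, sort stocks into quintiles on the anomaly variable and go long the quintile with the highest predicted returns, short the lowest.

```python
import pandas as pd

def quintile_long_short(df, signal_col="anomaly_var", ret_col="next_month_return"):
    """df: one row per stock-month with columns month, signal_col, ret_col;
    a higher signal is assumed to predict a higher return."""
    def one_month(group):
        q = pd.qcut(group[signal_col], 5, labels=False, duplicates="drop")  # quintile codes 0..4
        long_leg = group.loc[q == q.max(), ret_col].mean()                  # highest predicted returns
        short_leg = group.loc[q == q.min(), ret_col].mean()                 # lowest predicted returns
        return long_leg - short_leg
    return df.groupby("month").apply(one_month)   # monthly long-short return series
```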

DeFi Risks and Crypto-asset Growth

What Decentralized Finance (DeFi) issues may dampen associated interest in crypto-assets by undermining its promises of lower costs and risks compared to traditional, centralized financial intermediaries? In their June 2021 book chapter entitled “DeFi Protocol Risks: the Paradox of DeFi”, Nic Carter and Linda Jeng discuss five sources of DeFi risk:

  1. Interconnections with the traditional financial system.
  2. Blockchain-related operational issues.
  3. Smart contract vulnerabilities.
  4. Other governance and regulatory concerns.
  5. Scalability challenges.

A general objective of DeFi is automating rules for behavior in a publicly available financial system, eliminating human discretion from financial transactions/contracts. In practice, however, core DeFi protocols retain some human oversight to address unpredictable problems as they emerge, but such retention allows incompetent or malicious governance, administration and validation. Based on review of the body of research and opinion, they conclude that:

Keep Reading

Modeling the Level of Snooping Bias in Asset Pricing Factors

Is aggregate data snooping bias (p-hacking) in financial markets research a big issue or a minor concern? In their June 2021 paper entitled “Uncovering the Iceberg from Its Tip: A Model of Publication Bias and p-Hacking”, Campbell Harvey and Yan Liu model the severity of p-hacking based on the view that there are, in fact, both some true anomalies and many false anomalies. This view contrasts with other recent research that models the severity of p-hacking by initially assuming that there are no true anomalies. They test their model on a sample of 156 published equal-weighted long-short anomaly time series and 18,113 comparable data-mined equal-weighted long-short strategy time series, focusing on series exhibiting alphas with t-statistics greater than 2.0. They detail how conclusions differ under the initial assumption that some anomalies are true versus the assumption that none are. Applying their model to the specified time series, they find that:

Keep Reading
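A minimal sketch (assumed inputs) of the screening step described above, simplified to the t-statistic of the mean monthly return rather than a regression alpha: compute each long-short series' t-statistic and keep only those exceeding 2.0, the set to which the publication-bias/p-hacking model is fit.

```python
import numpy as np

def passes_t_screen(monthly_returns, threshold=2.0):
    """t-statistic of the mean monthly return; True if it clears the screening threshold."""
    r = np.asarray(monthly_returns, dtype=float)
    t_stat = r.mean() / (r.std(ddof=1) / np.sqrt(len(r)))
    return t_stat > threshold, t_stat
```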

Interesting vs. Exploitable

Does failure to replicate dampen interest in previously published research? In their May 2021 paper entitled “Non-replicable Publications Are Cited More Than Replicable Ones”, Marta Serra-Garcia and Uri Gneezy use results of three recent replication studies to compare citation rates for papers published in top psychology, economics and general science journals that fail to replicate versus those that do replicate. Replication rates in those past studies are 39% in psychology (replication study published 2015), 61% in economics (replication study published 2016) and 62% in general science (replication study published 2018). Relative to original-study findings, effect strengths in replications average 75% for findings that do replicate and 0% for those that do not. The authors look at citation rates before and after publication of the associated replication studies and also assess the nature/potential impact of citations. Using citations of the studied papers through 2019, they find that: Keep Reading

Why Stock Anomalies Weaken After Publication

Is the known weakening of stock anomalies after publication due more to in-sample overfitting by researchers or post-publication exploitation by arbitrageurs (market adaptation)? In their May 2021 paper entitled “Why and How Systematic Strategies Decay”, Antoine Falck, Adam Rej and David Thesmar examine the typical post-publication risk-adjusted performance (Sharpe ratio) of U.S. stock anomalies. They include only anomalies published through 2010 to allow significant out-of-sample testing in their dataset, which ends in 2014. In general, their anomaly return calculations: (1) are based on long-short portfolios of top minus bottom tenth (decile) of anomaly variable sorts; (2) assume that annual variables are available four months after fiscal year end; and, (3) are market beta-hedged based on 36-month rolling betas. They consider date of publication, six proxies for in-sample overfitting and four proxies for ease of anomaly exploitation (arbitrage) to explain weaker post-publication performance. Using a sample of 72 published investment strategies as applied to U.S. stocks during January 1963 through April 2014 and as applied to international stocks as available during January 1995 through December 2018, they find that:

Keep Reading
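A minimal sketch (not the authors' code) of the beta-hedging step in item (3) above: estimate each anomaly's market beta over the trailing 36 months, then subtract beta times the market return to get the hedged return for the following month. Input arrays are hypothetical monthly return series.

```python
import numpy as np

def beta_hedged_returns(strategy_ret, market_ret, window=36):
    """Both inputs are 1-D monthly return arrays of equal length; output is NaN until enough history."""
    strategy_ret = np.asarray(strategy_ret, dtype=float)
    market_ret = np.asarray(market_ret, dtype=float)
    hedged = np.full(len(strategy_ret), np.nan)
    for t in range(window, len(strategy_ret)):
        s = strategy_ret[t - window:t]
        m = market_ret[t - window:t]
        beta = np.cov(s, m, ddof=1)[0, 1] / np.var(m, ddof=1)   # trailing 36-month beta
        hedged[t] = strategy_ret[t] - beta * market_ret[t]       # remove market exposure
    return hedged
```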

The State of Systematic (Algorithmic) Investing

How has systematic investment, with trades generated by rules or algorithms, evolved? What are its strengths and weaknesses? In his February 2021 paper entitled “Why Is Systematic Investing Important?”, Campbell Harvey summarizes the history, advantages and disadvantages of systematic (algorithmic) investing. Based on the body of research and personal experience, he concludes that: Keep Reading

Re-examining Equity Factor Research Replicability

Several recent papers find that most studies identifying factors that predict stock returns are not replicable or derive from snooping of many factors. Is there a good counter-argument? In their January 2021 paper entitled “Is There a Replication Crisis in Finance?”, Theis Ingerslev Jensen, Bryan Kelly and Lasse Pedersen apply a Bayesian model of factor replication to a set of 153 factors applied to stocks across 93 countries. For each factor in each country, they each month:

  1. Sort stocks into thirds (top/middle/bottom) with breakpoints based on non-micro stocks in that country.
  2. For each third, compute a “capped value weight” gross return (winsorizing market equity at the NYSE 80th percentile to ensure that tiny stocks have tiny weights and no mega-stock dominates).
  3. Calculate the gross return for a hedge portfolio that is long (short) the third with the highest (lowest) expected return.
  4. Calculate the corresponding 1-factor gross alpha via simple regression versus the country portfolio.

They further propose a taxonomy that systematically assigns each of the 153 factors to one of 13 themes based on high within-theme return correlations and conceptual similarities. Using firm and stock data required to calculate the specified factors starting 1926 for U.S. stocks and 1986 for most developed countries (in U.S. dollars), and 1-month U.S. Treasury bill yields to compute excess returns, all through 2019, they find that: Keep Reading
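To make steps 1-3 above concrete, here is a minimal pandas sketch for a single country-month with hypothetical column names. Step 4 (the 1-factor alpha regression against the country portfolio) is omitted, and if no NYSE 80th-percentile cap is supplied the sketch falls back to the country's own 80th percentile as a simplification.

```python
import numpy as np
import pandas as pd

def tercile_long_short(df, signal_col, ret_col="next_month_return",
                       me_col="market_equity", nyse80_cap=None):
    """df: one row per stock for a single country-month; a higher signal is assumed better."""
    # 1. Tercile breakpoints from non-micro stocks only; assign all stocks to terciles.
    lo, hi = df.loc[~df["is_micro"], signal_col].quantile([1 / 3, 2 / 3])
    tercile = np.where(df[signal_col] <= lo, 0, np.where(df[signal_col] <= hi, 1, 2))
    # 2. Capped value weights: winsorize market equity at the NYSE 80th percentile
    #    (here approximated by the country's own 80th percentile if no cap is given).
    cap = nyse80_cap if nyse80_cap is not None else df[me_col].quantile(0.80)
    weight = df[me_col].clip(upper=cap)
    # 3. Long the top tercile, short the bottom, each leg capped-value-weighted.
    def leg(mask):
        return np.average(df.loc[mask, ret_col], weights=weight[mask])
    return leg(tercile == 2) - leg(tercile == 0)
```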
