Objective research to aid investing decisions

Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Book Preview – Chapter 2

Here is this Friday’s installment of Avoiding Investment Strategy Flame-outs, a short book we are previewing for subscribers. Chapter previews will continue for the next seven Fridays.

Chapter 2: “Making the Strategy Logical”

“Making an investment/trading strategy logical essentially means making it testable and implementable, with inputs, outputs and rules clearly defined, properly sequenced and inclusive of all material factors. Clearly defined inputs, outputs and rules enable verification and extension. Definitions that require subjective interpretation are not clear. Properly sequenced inputs, outputs and rules fit the real world, representing an analysis and implementation scenario available to an investor in real time. Some strategies are more forgiving of tight sequencing than others. Including all material factors means accounting for all significant contributions to (capital gains, dividends, interest) and debits from (costs of data, trading frictions, cost of shorting, cost of leverage) investment outcome. The materiality of factors varies with strategy specifics.

“How can investors make sure their strategies are logical?”

Book Preview – Introduction and Chapter 1

Starting today and continuing for the next eight Fridays, we are previewing for subscribers a short book entitled Avoiding Investment Strategy Flame-outs.

The initial installments are:

“Introduction”

“Why do investment/trading strategies that test well on historical data flame out when put to actual use? Are there steps investors can take to improve the odds that strategies they develop will perform as tested? This book draws upon reviews of hundreds of academic and practitioner studies that seek to predict asset prices and exploit the predictions. It focuses on widespread weaknesses and limitations in these studies to help investors: (1) avoid or mitigate the weaknesses in developing their own strategies; and, (2) perform due diligence on strategies offered by others.”

Chapter 1: “Some Statistical Practices that Make Sense”

“Financial systems, such as stock markets, involve a large number of interacting decisions based on many different time-varying levels of knowledge, processing capabilities, motivations and financial resources. Due to this complexity, theories of financial system behavior cannot determine future prices and returns. Said differently, the models termed “financial theories” are actually just working hypotheses generally formed retrospectively (empirically) to fit the past.

“Lack of solid theories leaves researchers to explore a jungle of empirical data via statistical inference, constructing samples and looking for past conditions (indicators) that relate strongly to future outcomes (returns) within those samples. Investors then make the leap (despite limitations in empirical research and changes in the market conditions) that future data is enough like past data to apply findings from such inferences to investment decisions.

“How should investors generate and interpret research findings in such an environment?”

To make room for Avoiding Investment Strategy Flame-outs on the CXOAdvisory.com main menu, we are retiring our “Investment Demons” (largely subsumed by the book). The demons will, however, remain available here.

Navigating the Data Snooping Icebergs

Iterative testing of strategies on a set of data introduces snooping bias, such that a winning (losing) strategy is to some degree lucky (unlucky). Sharing of strategies across a community of researchers carries the luck forward, with accretion of additional bias from testing by subsequent researchers. Is there a rigorous way to account for this accumulation of snooping bias? In the October 2013 version of their paper entitled “Backtesting”, Campbell Harvey and Yan Liu describe three types of adjustment for snooping bias and apply them to quantify the snooping bias “haircut” appropriate for any reported Sharpe ratio (in lieu of a 50% rule-of-thumb discount). Using mathematical derivations and examples, they conclude that: Keep Reading
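The haircut idea can be illustrated with a minimal sketch. Harvey and Liu develop several multiple-testing adjustments; the version below uses only the simplest, Bonferroni-style logic (convert the Sharpe ratio to a t-statistic, inflate its p-value by the number of trials, convert back) and is a generic illustration rather than the authors' method:

```python
import math
from statistics import NormalDist

def haircut_sharpe(sr_annual, years, n_trials, periods_per_year=12):
    """Discount an annualized Sharpe ratio for data snooping:
    convert to a t-statistic, inflate the two-sided p-value by the
    number of trials (Bonferroni), then convert back."""
    nd = NormalDist()
    T = years * periods_per_year                   # number of observations
    t_stat = sr_annual / math.sqrt(periods_per_year) * math.sqrt(T)
    p = 2 * (1 - nd.cdf(t_stat))                   # two-sided p-value
    p_adj = min(p * n_trials, 1.0)                 # Bonferroni adjustment
    if p_adj >= 1.0:
        return 0.0                                 # no significance survives
    t_adj = nd.inv_cdf(1 - p_adj / 2)
    return t_adj / math.sqrt(T) * math.sqrt(periods_per_year)

raw = haircut_sharpe(1.0, years=10, n_trials=1)    # one trial: no haircut
cut = haircut_sharpe(1.0, years=10, n_trials=100)  # 100 trials: haircut
```

With a single trial the Sharpe ratio is unchanged; with 100 trials the same reported ratio shrinks substantially, the direction of adjustment the paper formalizes.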

Measuring Investment Strategy Snooping Bias

Investors typically employ backtests to estimate future performance of investment strategies. Two approaches to assess in-sample optimization bias in such backtests are:

  1. Reserve (hold out) some of the historical data for out-of-sample testing. However, surreptitious direct use or indirect use (as in strategy construction based on the work of others) of hold-out data may contaminate its independence. Moreover, small samples result in even smaller in-sample and hold-out subsamples.
  2. Randomize the data for Monte Carlo testing. However, randomization assumptions may distort the data and destroy real patterns in them, and the process is time-consuming.
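The first approach can be sketched in a few lines. The demo below (illustrative only, with hypothetical noise "strategies") shows why a hold-out segment matters: selecting the best of many unskilled strategies in-sample produces an inflated in-sample Sharpe ratio that the untouched hold-out data does not support.

```python
import random
import statistics

def sharpe(returns):
    """Per-period Sharpe ratio, zero risk-free rate assumed."""
    sd = statistics.stdev(returns)
    return statistics.mean(returns) / sd if sd > 0 else 0.0

def holdout_split(returns, frac_in=0.7):
    """Chronological split into in-sample and hold-out segments."""
    cut = int(len(returns) * frac_in)
    return returns[:cut], returns[cut:]

# Demo: select the best of 100 pure-noise "strategies" in-sample,
# then measure the winner on its untouched hold-out segment.
rng = random.Random(1)
strategies = [[rng.gauss(0.0, 0.01) for _ in range(200)] for _ in range(100)]
splits = [holdout_split(s) for s in strategies]
is_part, oos_part = max(splits, key=lambda p: sharpe(p[0]))
is_sr, oos_sr = sharpe(is_part), sharpe(oos_part)
```

The winner's in-sample Sharpe ratio is positive by construction of the selection; its hold-out Sharpe ratio has no such guarantee, which is the essence of in-sample optimization bias.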

Is there a better way to assess data snooping bias? In their September 2013 paper entitled “The Probability of Backtest Overfitting”, David Bailey, Jonathan Borwein, Marcos Lopez de Prado and Qiji Zhu derive an approach for assessing the probability of backtest overfitting that depends on the number of trials (strategy alternatives) employed to select a strategy. They use Sharpe ratio to measure strategy attractiveness. They define an optimized strategy as overfitted if its out-of-sample Sharpe ratio is less than the median out-of-sample Sharpe ratio of all strategy alternatives considered. By this definition, overfitted backtests are harmful. Their process is very general, specifying multiple (in-sample) training and (out-of-sample) testing subsamples of equal size and reusing all training sets as testing sets and vice versa. Based on interpretation of mathematical derivations, they conclude that: Keep Reading
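The combinatorial logic just described can be sketched compactly. The code below is a simplified illustration of the idea (equal-size blocks, every combination of half the blocks used as training data, training and testing roles swapped), not the authors' full procedure:

```python
import itertools
import random
import statistics

def sharpe(r):
    sd = statistics.stdev(r)
    return statistics.mean(r) / sd if sd > 0 else 0.0

def pbo(strategy_returns, n_blocks=4):
    """Probability of backtest overfitting (sketch): over all
    combinations of half the equal-size blocks used as training data,
    how often does the best in-sample strategy fall below the median
    out-of-sample Sharpe ratio of all strategies considered?"""
    T = len(strategy_returns[0])
    blocks = [list(range(i * T // n_blocks, (i + 1) * T // n_blocks))
              for i in range(n_blocks)]
    combos = list(itertools.combinations(range(n_blocks), n_blocks // 2))
    overfit = 0
    for train_ids in combos:
        train = [t for b in train_ids for t in blocks[b]]
        test = [t for b in range(n_blocks) if b not in train_ids
                for t in blocks[b]]
        is_sr = [sharpe([s[t] for t in train]) for s in strategy_returns]
        oos_sr = [sharpe([s[t] for t in test]) for s in strategy_returns]
        best = max(range(len(is_sr)), key=is_sr.__getitem__)
        if oos_sr[best] < statistics.median(oos_sr):
            overfit += 1
    return overfit / len(combos)

rng = random.Random(0)
noise = [[rng.gauss(0.0, 0.01) for _ in range(80)] for _ in range(10)]
p = pbo(noise)  # hypothetical pure-noise strategies
```

For pure-noise strategies, the in-sample winner has no genuine edge, so its out-of-sample rank is essentially random and the estimated probability of overfitting tends toward one half.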

Insidiousness of Overfitting Investment Strategies via Iterative Backtests

Should investors worry that investment strategies available in the marketplace may derive from optimization via intensive backtesting? In the September 2013 update of their paper entitled “Backtest Overfitting and Out-of-Sample Performance”, David Bailey, Jonathan Borwein, Marcos Lopez de Prado and Qiji Zhu examine the implications of overfitting investment strategies via multiple backtest trials. Using Sharpe ratio as the measure of strategy attractiveness, they compute the minimum backtest sample length an investor should require based on the number of strategy configurations tried. They also investigate situations for which more backtesting may produce worse out-of-sample performance. Based on interpretations of mathematical derivations, they conclude that: Keep Reading
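The core intuition, that trying more strategy configurations inflates the best backtest Sharpe ratio found by luck alone, can be shown with a small Monte Carlo sketch. This is illustrative only and does not reproduce the paper's derivation of minimum backtest length:

```python
import math
import random
import statistics

def best_random_sharpe(n_trials, n_days, seed=0):
    """Highest annualized Sharpe ratio found among n_trials pure-noise
    daily 'strategies', each backtested over n_days days."""
    rng = random.Random(seed)
    best = float("-inf")
    for _ in range(n_trials):
        r = [rng.gauss(0.0, 0.01) for _ in range(n_days)]
        sr = statistics.mean(r) / statistics.stdev(r) * math.sqrt(252)
        best = max(best, sr)
    return best

one = best_random_sharpe(1, 252)     # a single unskilled backtest
many = best_random_sharpe(200, 252)  # best of 200 unskilled backtests
```

The best-of-many figure exceeds the single-trial figure despite zero skill in every strategy, which is why the required backtest length grows with the number of configurations tried.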

Long-term Investors: Focus on Terminal Wealth?

Should long-term investors focus on terminal wealth and ignore interim volatility? In his August 2013 paper entitled “Rethinking Risk”, Javier Estrada compares distributions of terminal wealth for $100 initial investments in stocks or bonds over investment horizons of 10, 20 or 30 years. He utilizes mean, median, tail (extreme 1%, 5% and 10%) and risk-adjusted performance metrics. He employs real returns for 19 country markets adjusted by local inflation and in local currency for individual country markets, and adjusted by U.S. inflation and in dollars for the (capitalization-weighted) World market. Using real annual total returns for indexes of stocks and government bonds in each country during 1900 through 2009 (101, 91, and 81 overlapping intervals of 10, 20, and 30 years), he finds that: Keep Reading
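The terminal wealth calculation over overlapping windows reduces to straightforward compounding. The sketch below is a generic illustration of that arithmetic, not Estrada's implementation:

```python
def terminal_wealth_windows(annual_returns, horizon, initial=100.0):
    """Terminal wealth of `initial` compounded over every overlapping
    `horizon`-year window of an annual (real) return series."""
    out = []
    for start in range(len(annual_returns) - horizon + 1):
        wealth = initial
        for r in annual_returns[start:start + horizon]:
            wealth *= 1.0 + r
        out.append(wealth)
    return out

# A 110-year return series yields 101 overlapping 10-year windows,
# matching the interval counts cited above.
flat = terminal_wealth_windows([0.1, 0.1, 0.1], horizon=2)
```

Collecting these terminal wealth values across all windows gives the distribution whose mean, median and tails the paper analyzes.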

Unified Carry Trade Theory

Does the carry trade concept provide a useful framework for valuation of securities within and across all asset classes? In their July 2013 paper entitled “Carry”, Ralph Koijen, Tobias Moskowitz, Lasse Pedersen and Evert Vrugt investigate expected return across asset classes via decomposition into “carry” (expected return assuming price does not change) and expected price appreciation. They measure carry for: global equities; global 10-year bonds; global bond yield spread (10-year minus 2-year); currencies; commodities; U.S. Treasuries; credit; equity index call options; and equity index put options. Their measurements of carry vary by asset class (based on futures prices for equity indexes, currencies and commodities; modeled futures prices for global bonds, U.S. Treasuries and credit; and option prices for options). They further decompose carry returns into passive and dynamic components. The passive component is the return to a hedge (carry trade) portfolio designed to capture differences in average carry returns across securities, and the dynamic component indicates how well carry predicts future price appreciation. Finally, they determine the conditions under which carry strategies perform poorly across all asset classes. Using monthly price/yield data for multiple assets within each class as available during January 1972 through September 2012, they find that: Keep Reading
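For futures-based asset classes, carry as defined above (the return if the price does not change, so the futures price converges to spot) admits a very simple sketch. The quotes below are hypothetical and for illustration only:

```python
def futures_carry(spot, futures):
    """Return on a fully collateralized futures position if the spot
    price does not change: the futures price converges to spot."""
    return (spot - futures) / futures

def rank_by_carry(assets):
    """Sort assets by carry, descending, as a carry-trade portfolio
    would: long the high-carry names, short the low-carry names."""
    return sorted(assets,
                  key=lambda a: futures_carry(a["spot"], a["futures"]),
                  reverse=True)

# Hypothetical quotes for illustration only.
quotes = [{"name": "A", "spot": 100.0, "futures": 99.0},
          {"name": "B", "spot": 100.0, "futures": 95.0}]
ranked = rank_by_carry(quotes)
```

A deeper futures discount relative to spot means higher carry, so asset B ranks ahead of asset A in the sketch above.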

Capturing Factor Premiums

How can investors capture returns from widely accepted risk factors associated with asset classes and subclasses? In the June 2013 version of his book chapter entitled “Factor Investing”, Andrew Ang provides advice on capturing risk premiums associated with factors such as value, momentum, illiquidity, credit risk and volatility risk. Based on the body of research, he concludes that: Keep Reading

One-factor Return Model for All Asset Classes?

Is downside risk the critical driver of investor asset valuation? In the January 2013 version of their paper entitled “Conditional Risk Premia in Currency Markets and Other Asset Classes”, Martin Lettau, Matteo Maggiori and Michael Weber explore the ability of a simple downside risk capital asset pricing model (DR-CAPM) to explain and predict asset returns. Their approach captures the idea that downside risk aversion makes investors view assets with high beta during bad market conditions as particularly risky. For all asset classes (but focusing on currencies), they define bad market conditions as months when the excess return on the broad value-weighted U.S. stock market is less than 1.0 standard deviation below its sample period average. To test DR-CAPM on currencies, they rank a sample of 53 currencies by interest rates into six portfolios, excluding for some analyses those currencies in the highest interest rate portfolio with annual inflation at least 10% higher than contemporaneous U.S. inflation. They calculate the monthly return for each currency as the sum of its excess interest rate relative to the dollar and its change in value relative to the dollar. They then calculate overall and downside betas relative to the U.S. stock market based on the full sample. They extend tests of DR-CAPM to six portfolios of U.S. stocks sorted by size and book-to-market ratio, five portfolios of commodities sorted by futures premium and six portfolios of government bonds sorted by probability of default, and to multi-asset class combinations. They also compare DR-CAPM to optimal models based on principal component analysis within and across asset classes. Using monthly prices and characteristics for currencies and U.S. stocks during January 1974 through March 2010, for commodities during January 1974 through December 2008 and for government bonds during January 1995 through March 2010, they find that: Keep Reading
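The downside beta at the heart of DR-CAPM can be sketched directly from the definition above: an ordinary beta computed only over the "bad" months. The code and sample data below are illustrative, not the authors' implementation:

```python
import statistics

def downside_beta(asset, market, threshold_sd=1.0):
    """Beta of `asset` to `market` computed only over 'bad' months:
    months when the market return is more than `threshold_sd` standard
    deviations below its full-sample mean."""
    mu = statistics.mean(market)
    sd = statistics.stdev(market)
    bad = [i for i, m in enumerate(market) if m < mu - threshold_sd * sd]
    mb = [market[i] for i in bad]
    ab = [asset[i] for i in bad]
    mbar, abar = statistics.mean(mb), statistics.mean(ab)
    cov = sum((m - mbar) * (a - abar)
              for m, a in zip(mb, ab)) / (len(bad) - 1)
    return cov / statistics.variance(mb)

# Illustrative data: the asset falls twice as hard as the market in crashes.
market = [0.01] * 20 + [-0.10, -0.12, -0.15]
asset = [2 * m for m in market]
beta_minus = downside_beta(asset, market)
```

An asset whose losses are amplified in bad months shows a high downside beta even if its full-sample beta is modest, which is exactly the risk DR-CAPM prices.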

Linear Factor Stock Return Models Misleading?

Does use of alphas from linear factor models to identify anomalies in U.S. stock returns mislead investors? In the February 2013 draft of their paper entitled “Using Maximum Drawdowns to Capture Tail Risk”, Wesley Gray and Jack Vogel investigate maximum drawdown (largest peak-to-trough loss over a time series of compounded returns) as a simple measure of tail risk missed by linear factor models. Specifically, they quantify maximum drawdowns for 11 widely cited U.S. stock return anomalies identified via one-factor (market), three-factor (plus size and book-to-market ratio) and four-factor (plus momentum) linear models. These anomalies are: financial distress; O-score (probability of bankruptcy); net stock issuance; composite stock issuance; total accruals; net operating assets; momentum; gross profitability; asset growth; return on assets; and, investment-to-assets ratio. They calculate alphas for each anomaly by using the specified linear model risk factors to adjust gross monthly returns from a portfolio that is long (short) the value-weighted or equal-weighted tenth of stocks that are “good” (“bad”) according to that anomaly, reforming the portfolio annually or monthly depending on anomaly input frequency. Using monthly returns and firm fundamentals for a broad sample of U.S. stocks, and contemporaneous stock return model factor returns, during July 1963 through December 2012, they find that: Keep Reading
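Maximum drawdown as defined above (largest peak-to-trough loss over a time series of compounded returns) reduces to a few lines. This is a generic sketch of the standard calculation, not Gray and Vogel's code:

```python
def max_drawdown(returns):
    """Largest peak-to-trough loss of compounded wealth, as a fraction."""
    wealth = peak = 1.0
    mdd = 0.0
    for r in returns:
        wealth *= 1.0 + r
        peak = max(peak, wealth)                 # running high-water mark
        mdd = max(mdd, 1.0 - wealth / peak)      # loss from that peak
    return mdd

mdd = max_drawdown([0.10, -0.50, 0.20])  # a 50% drop from the peak
```

Because it tracks the compounded path rather than monthly return moments, this measure captures tail risk that a linear factor alpha can miss.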
