Objective research to aid investing decisions

Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Methods for Mitigating Data Snooping Bias

What methods are available to suppress data snooping bias arising from testing multiple strategies or strategy variations on the same set of historical data? Which methods work best? In their March 2018 paper entitled “Systematic Testing of Systematic Trading Strategies”, Kovlin Perumal and Emlyn Flint survey statistical methods for suppressing data snooping bias and compare the effectiveness of these methods on simulated asset return data and artificial trading rules. They choose a jump diffusion model to simulate asset return data because it reasonably captures the volatility and jumps observed in real markets. They define artificial trading rules simply in terms of the probability of correctly predicting next-interval return sign. They test the power of each method by: (1) measuring its ability to avoid selecting inaccurate trading rules; and, (2) relating the confidence levels it assigns to strategies to the profitability of those strategies. Using the specified asset return data and trading rule simulation approaches, they conclude that: Keep Reading
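As a concrete illustration (not the authors' code; all parameter values are arbitrary assumptions), a Merton-style jump diffusion return simulator and a trading rule that predicts next-interval return sign with specified success probability might look like:

```python
import numpy as np

def simulate_jump_diffusion(n_days=2520, mu=0.06, sigma=0.15, jump_lambda=5.0,
                            jump_mean=-0.02, jump_vol=0.03, dt=1.0 / 252, seed=0):
    """Daily log returns from a Merton-style jump-diffusion process."""
    rng = np.random.default_rng(seed)
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_days)
    n_jumps = rng.poisson(jump_lambda * dt, n_days)  # jump count arriving each day
    jumps = n_jumps * jump_mean + np.sqrt(n_jumps) * jump_vol * rng.standard_normal(n_days)
    return diffusion + jumps

def artificial_rule(returns, p_correct, seed=1):
    """Predict next-interval return sign, correct with probability p_correct."""
    rng = np.random.default_rng(seed)
    signs = np.sign(returns)
    wrong = rng.random(len(returns)) >= p_correct  # randomly flip the rest
    return np.where(wrong, -signs, signs)
```

Running many such rules with known accuracy against simulated data lets one check whether a snooping-bias correction rejects the inaccurate rules.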

Data Perturb/Replay to Test Strategy Sensitivities

How can investment advisors apply historical asset performance data to address client views regarding future market/economic conditions? In their February 2018 paper entitled “Matching Market Views and Strategies: A New Risk Framework for Optimal Selection”, Adil Reghai and Gaël Riboulet present an approach for quantitatively relating historical asset return statistics to investor views. They intend this approach to address the widespread problem of backtest overfitting, whereby researchers discover good performance by fitting strategy features to noise in an historical dataset. Specifically, they:

  1. Collect historical return data for assets of interest and run backtests of alternative strategies on these data.
  2. Perturb historical average return, volatility, skewness and pairwise correlations up or down for these assets and rerun backtests of alternative strategies on multiple perturbations.
  3. Analyze relationships between directions of these perturbations and performance of alternative strategies.
  4. Match investor views first to directions of perturbations and then to strategies responding favorably (or least unfavorably) to these directions.

They apply this approach to generic algorithmic strategies (equal weight, momentum, mean reversion and carry). Based on mathematical derivations and examples, they conclude that: Keep Reading
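The perturb-and-rerun steps above can be sketched as follows (an illustrative toy, not the authors' implementation; the momentum rule and perturbation grid are assumptions for demonstration):

```python
import numpy as np

def perturb_returns(returns, mean_shift=0.0, vol_scale=1.0):
    """Shift the average return and scale volatility around the mean."""
    m = returns.mean()
    return m + mean_shift + vol_scale * (returns - m)

def momentum_backtest(returns, lookback=12):
    """Toy momentum rule: hold next period iff the trailing mean return is positive."""
    pnl = 0.0
    for t in range(lookback, len(returns)):
        if returns[t - lookback:t].mean() > 0:
            pnl += returns[t]
    return pnl

def sensitivity_map(returns, shifts=(-0.01, 0.0, 0.01), scales=(0.5, 1.0, 1.5)):
    """Rerun the backtest on each perturbation of mean and volatility."""
    return {(s, v): momentum_backtest(perturb_returns(returns, s, v))
            for s in shifts for v in scales}
```

The resulting map from perturbation directions to performance is what gets matched against investor views.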

Chess, Jeopardy, Poker, Go and… Investing?

How can machine investors beat humans? In the introductory chapter of his January 2018 book entitled “Financial Machine Learning as a Distinct Subject”, Marcos Lopez de Prado prescribes success factors for machine learning as applied to finance. He intends that the book: (1) bridge the divide between academia and industry by sharing experience-based knowledge in a rigorous manner; (2) promote a role for finance that suppresses guessing and gambling; and, (3) unravel the complexities of using machine learning in finance. He intends that investment professionals with a strong machine learning background apply the knowledge to modernize finance and deliver actual value to investors. Based on 20 years of experience, including management of several multi-billion dollar funds for institutional investors using machine learning algorithms, he concludes that: Keep Reading

Mimicking Anything with ETFs

Can a simple set of exchange-traded funds (ETF), weighted judiciously, mimic the behaviors of most financial assets? In their January 2018 paper entitled “Mimicking Portfolios”, Richard Roll and Akshay Srivastava present and test a way of constructing mimicking portfolios using a small set of ETFs as investment factor proxies. They define a mimicking portfolio as a weighted set of tradable assets that match factor sensitivities of a target, which may be a specific asset, a fund or a non-tradable variable such as an economic indicator. They state that mimicking portfolios should: (1) consist of liquid, easily tradable assets; and, (2) exhibit little return volatility not explained by the factors used. They first winnow a large number of potential factor proxy ETFs spanning major asset classes and geopolitical regions by retaining only one ETF from any pair with daily return correlation greater than 0.70. They begin mimicking portfolio tests at the end of January 2009, when enough reasonably unique ETFs become available. They test this set of ETFs by creating portfolios from them that mimic each NYSE stock that has daily returns over the full sample period. Specifically, on the last day of each month, they reform a mimicking portfolio for each stock via a regression of stock return versus factor proxy ETF returns over the prior 300 trading days (or as few as 250 if 300 are not yet available) to reset coefficients for the ETFs. They perform an ancillary test by attempting to mimic iShares iBoxx $ Investment Grade Corporate Bond (LQD) and SPDR Dow Jones International Real Estate (RWX) ETFs, which are not in the factor proxy set. Using daily returns for the large number of ETFs and 1,634 NYSE stocks from the end of January 2009 through December 2016, they find that: Keep Reading
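The monthly weight-resetting regression can be sketched as below (a minimal illustration, not the authors' code; the synthetic example dimensions are assumptions):

```python
import numpy as np

def mimicking_weights(target, etfs):
    """
    Regress target daily returns on factor-proxy ETF daily returns (with an
    intercept) over a trailing window of up to 300 trading days; the slope
    coefficients are the mimicking portfolio weights.
    target: (T,) array of target returns; etfs: (T, K) array of ETF returns.
    """
    X = np.column_stack([np.ones(len(target)), etfs])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]  # drop the intercept
```

At each month end, the weights would be refit on the trailing window and held for the next month.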

10 Steps to Becoming a Better Quant

Want your machine to excel in investing? In his January 2018 paper entitled “The 10 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado examines common errors made by machine learning experts when tackling financial data and proposes correctives. Based on more than two decades of experience, he concludes that: Keep Reading

Categorization of Risk Premiums

What is the best way to think about the reliabilities and risks of the various anomaly premiums that investors commonly believe to be available for exploitation? In their December 2017 paper entitled “A Framework for Risk Premia Investing”, Kari Vatanen and Antti Suhonen present a framework for categorizing widely accepted anomaly premiums to facilitate construction of balanced investment strategies. They first categorize each premium as fundamental, behavioral or structural based on its robustness as indicated by clarity, economic rationale and capacity. They then designate each premium in each category as either defensive or offensive depending on whether it is feasible as long-only or requires short-selling and leverage, and on its return skewness and tail risk. Based on expected robustness and riskiness of selected premiums as described in the body of research, they conclude that: Keep Reading

Emptying the Equity Factor Zoo?

As described in “Quantifying Snooping Bias in Published Anomalies”, anomalies published in leading journals offer substantial opportunities for exploitation on a gross basis. What profits are left after accounting for portfolio maintenance costs? In their November 2017 paper entitled “Accounting for the Anomaly Zoo: A Trading Cost Perspective”, Andrew Chen and Mihail Velikov examine the combined effects of post-publication return deterioration and portfolio reformation frictions on 135 cross-sectional stock return anomalies published in leading journals. Their proxy for trading frictions is modeled stock-level effective bid-ask spread based on daily returns, representing a lower bound on costs for investors using market orders. Their baseline tests employ hedge portfolios that are long (short) the equally weighted fifth, or quintile, of stocks with the highest (lowest) expected returns for each anomaly. They also consider capitalization weighting, sorts into tenths (deciles) rather than quintiles and portfolio constructions that apply cost-suppression techniques. Using data as specified in published articles for replication of 135 anomaly hedge portfolios, they find that:

Keep Reading
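The baseline hedge portfolio construction, with a crude spread-based cost deduction, can be sketched as follows (illustrative only, not the authors' code; charging half the effective spread per leg is an assumption standing in for their modeled lower-bound cost):

```python
import numpy as np

def quintile_hedge_return(expected, realized, spreads=None):
    """
    Equal-weight hedge portfolio: long the quintile of stocks with the highest
    expected returns, short the lowest quintile; optionally charge half the
    effective bid-ask spread on each leg as a crude trading cost.
    """
    order = np.argsort(expected)
    n = len(expected) // 5
    short_idx, long_idx = order[:n], order[-n:]
    gross = realized[long_idx].mean() - realized[short_idx].mean()
    if spreads is None:
        return gross
    return gross - (spreads[long_idx].mean() + spreads[short_idx].mean()) / 2
```

Comparing the gross and net versions across anomalies is the essence of their exercise.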

Quantifying Snooping Bias in Published Anomalies

Is data snooping bias a material issue for cross-sectional stock return anomalies published in leading journals? In the September 2017 update of their paper entitled “Publication Bias and the Cross-Section of Stock Returns”, Andrew Chen and Tom Zimmermann: (1) develop an estimator for anomaly data snooping bias based on noisiness of associated returns; (2) apply it to replications of 172 anomalies published in 15 highly selective journals; and, (3) compare results to post-publication anomaly returns to distinguish between in-sample bias and out-of-sample market response to publication. If predictability is due to bias, post-publication returns should be (immediately) poor because pre-publication performance is a statistical figment. If predictability is due to true mispricing, post-publication returns should degrade as investors exploit new anomalies. Their baseline tests employ hedge portfolios that are long (short) the equally weighted fifth, or quintile, of stocks with the highest (lowest) expected returns for each anomaly. Results are gross, ignoring the impact of periodic portfolio reformation frictions. Using data as specified in published articles for replication of 172 anomaly hedge portfolios, they find that:

Keep Reading
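One common empirical-Bayes shrinkage scheme in the spirit of their noise-based bias estimator (a simplified sketch, not their exact method) decomposes the cross-sectional variance of observed anomaly mean returns into signal and sampling noise:

```python
import numpy as np

def noise_share_shrinkage(mean_returns, standard_errors):
    """
    Cross-sectional variance of observed anomaly mean returns equals
    true-signal variance plus sampling-noise variance; shrink each observed
    mean toward zero by the estimated signal share to de-bias it.
    """
    mean_returns = np.asarray(mean_returns, dtype=float)
    total_var = mean_returns.var()
    noise_var = np.mean(np.asarray(standard_errors, dtype=float) ** 2)
    signal_share = max(total_var - noise_var, 0.0) / total_var if total_var > 0 else 0.0
    return signal_share * mean_returns  # de-biased mean return estimates
```

If the noise share is large, in-sample anomaly returns are mostly statistical figment; if small, post-publication decay instead points to investor exploitation.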

Seven Habits of Highly Ineffective Quants

Why don’t machines rule the financial world? In his September 2017 presentation entitled “The 7 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado explores causes of the high failure rate of quantitative finance firms, particularly those employing machine learning. He then outlines fixes for those failure modes. Based on more than two decades of experience, he concludes that: Keep Reading

Best Market Forecasting Practices?

Are more data, higher levels of signal statistical significance and more sophisticated prediction models better for financial forecasting? In their August 2017 paper entitled “Practical Significance of Statistical Significance”, Ben Jacobsen, Alexander Molchanov and Cherry Zhang perform sensitivity testing of forecasting practices along three dimensions: (1) length of lookback interval (1 to 300 years); (2) required level of statistical significance for signals (1%, 5%, 10%…); and, (3) different signal detection methods that rely on difference from an historical average. They focus on predicting whether returns for specific calendar months will be higher or lower than the market, either excluding or including January. Using monthly UK stock market returns since 1693 and U.S. stock market returns since 1792, both through 2013, they find that:

Keep Reading
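A minimal sketch of one such signal detection method (illustrative, not the authors' code; the t-test form and thresholds are assumptions) tests whether a calendar month's returns differ from the rest over a chosen lookback interval:

```python
import numpy as np

def month_signal(monthly_returns, month_labels, target_month, lookback_years, t_crit):
    """
    Within a trailing lookback window, t-test the target calendar month's mean
    return against the mean of all other months; trade only when |t| clears
    the chosen critical value. Returns +1 (expect outperformance),
    -1 (expect underperformance) or 0 (no signal).
    """
    n = lookback_years * 12
    r = np.asarray(monthly_returns[-n:], dtype=float)
    m = np.asarray(month_labels[-n:])
    in_m, out_m = r[m == target_month], r[m != target_month]
    se = np.sqrt(in_m.var(ddof=1) / len(in_m) + out_m.var(ddof=1) / len(out_m))
    t = (in_m.mean() - out_m.mean()) / se
    return 0 if abs(t) < t_crit else (1 if t > 0 else -1)
```

Varying `lookback_years` and `t_crit` reproduces the paper's two main sensitivity dimensions.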
