CXO Advisory

Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.


Putting Strategic Edges and Tactical Views into Portfolios

What is the best way to put strategic edges and tactical views into investment portfolios? In their March 2018 paper entitled “Model Portfolios”, Debarshi Basu, Michael Gates, Vishal Karir and Andrew Ang describe and illustrate a three-step optimized asset allocation process incorporating investor preferences and beliefs that is rigorous, repeatable, transparent and scalable. The three steps are: 

  1. Select a benchmark portfolio matched to investor risk tolerance via simple combination of stocks and bonds. They represent stocks with a mix of 70% MSCI All Country World Index and 30% MSCI USA Index. They represent bonds with the Barclays US Universal Bond Index. In their first illustration, they focus on 20-80, 60-40 and 80-20 stocks-bonds benchmarks, rebalanced quarterly.
  2. Construct a strategic portfolio that has the same expected volatility as the selected benchmark but generates a higher long-term Sharpe ratio by including optimized exposure to styles/factors expected to outperform the market over the long run. Key inputs are long-run asset returns and covariances plus a risk aversion parameter. In their first illustration, they constrain the strategic model portfolio to have the same overall equity exposure and regional equity exposures as the selected benchmark.
  3. Add tactical modifications to the strategic portfolio by varying strategic positions based on short-term expected returns and risks. In their second illustration, they employ a 100-0 stocks-bonds benchmark consisting of 80% MSCI USA Net Total Return Index and 20% MSCI USA Minimum Volatility Net Total Return Index. The corresponding strategic portfolio reflecting long-term expectations is an equally weighted combination of value, momentum, quality, size and minimum volatility equity factor indexes. They specify short-term return and risk expectations based on four indicators involving: economic cycle variables; aggregate stock valuation metrics; factor momentum; and, dispersion of factor measures (such as difference in valuations between value stocks and growth stocks). They apply these indicators to underweight or overweight strategic positions using an optimizer. They rebalance these portfolios monthly. 

For their asset universe, they focus on indexes accessible via exchange-traded funds (ETF). Using monthly data for five broad capitalization-weighted equity indexes, six broad bond/credit indexes of varying durations and six style/factor (smart beta) equity indexes as available during January 2000 through June 2017, they find that: Keep Reading
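
Step 2 of this process lends itself to a compact illustration. The sketch below is only a rough stand-in for the paper's optimizer: it chooses weights that maximize expected return subject to matching the benchmark's expected volatility. The asset list and all return, volatility and correlation inputs are assumed placeholders rather than the authors' estimates, and the paper's additional constraints (risk aversion parameter, overall and regional equity exposures) are omitted.

```python
# Minimal sketch of step 2 under assumed inputs: maximize expected return subject to
# matching the benchmark's expected volatility. All numbers below are placeholders,
# not the paper's estimates.
import numpy as np
from scipy.optimize import minimize

assets = ["stocks", "bonds", "value", "momentum", "min_vol"]   # hypothetical universe
mu = np.array([0.060, 0.030, 0.070, 0.070, 0.055])             # assumed long-run returns
vol = np.array([0.16, 0.05, 0.17, 0.18, 0.12])                 # assumed volatilities
corr = np.full((5, 5), 0.5)
np.fill_diagonal(corr, 1.0)                                    # assumed correlation matrix
cov = np.outer(vol, vol) * corr

bench_w = np.array([0.6, 0.4, 0.0, 0.0, 0.0])                  # 60-40 stocks-bonds benchmark
bench_vol = np.sqrt(bench_w @ cov @ bench_w)

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},                     # fully invested
    {"type": "eq", "fun": lambda w: np.sqrt(w @ cov @ w) - bench_vol},  # match benchmark risk
]
res = minimize(lambda w: -(w @ mu), bench_w, bounds=[(0.0, 1.0)] * 5,
               constraints=constraints, method="SLSQP")
weights = dict(zip(assets, res.x.round(3)))
print(weights, "expected vol:", round(float(np.sqrt(res.x @ cov @ res.x)), 4))
```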

Methods for Mitigating Data Snooping Bias

What methods are available to suppress data snooping bias derived from testing multiple strategies/strategy variations on the same set of historical data? Which methods are best? In their March 2018 paper entitled “Systematic Testing of Systematic Trading Strategies”, Kovlin Perumal and Emlyn Flint survey statistical methods for suppressing data snooping bias and compare effectiveness of these methods on simulated asset return data and artificial trading rules. They choose a Jump Diffusion model to simulate asset return data, because it reasonably captures volatility and jumps observed in real markets. They define artificial trading rules simply in terms of probability of successfully predicting next-interval return sign. They test the power of each method by: (1) measuring its ability to avoid selecting inaccurate trading rules; and, (2) relating the confidence levels it assigns to strategies to the profitabilities of those strategies. Using the specified asset return data and trading rule simulation approaches, they conclude that: Keep Reading
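
The simulation setup can be sketched briefly. The example below, with all parameters assumed for illustration, generates daily returns from a simple jump diffusion, defines artificial rules that predict the next day's return sign with a given probability, and shows how the best of many skill-less rules still posts an attractive in-sample Sharpe ratio, which is the data snooping effect the surveyed methods are meant to correct.

```python
# Minimal sketch of the test setup under assumed parameters: simulate jump-diffusion
# daily returns, define artificial rules by their probability of correctly predicting
# the next day's return sign, and show the snooping effect from picking the best of
# many skill-less rules in sample.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_rules = 1500, 200
dt, drift, sigma = 1 / 252, 0.05, 0.15                        # assumed diffusion parameters
jump_prob, jump_mean, jump_sigma = 0.01, -0.02, 0.03          # assumed jump parameters

diffusion = (drift - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_days)
jumps = rng.binomial(1, jump_prob, n_days) * rng.normal(jump_mean, jump_sigma, n_days)
asset_ret = diffusion + jumps                                 # simulated daily returns

def rule_returns(hit_prob):
    """Artificial rule: correct about the next day's return sign with probability hit_prob."""
    correct = rng.random(n_days) < hit_prob
    position = np.where(correct, np.sign(asset_ret), -np.sign(asset_ret))
    return position * asset_ret

# Best in-sample Sharpe ratio among many rules with no real skill (hit probability 0.5)
sharpes = [np.mean(r) / np.std(r) * np.sqrt(252)
           for r in (rule_returns(0.5) for _ in range(n_rules))]
print("best Sharpe among 200 skill-less rules:", round(max(sharpes), 2))
```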

Data Perturb/Replay to Test Strategy Sensitivities

How can investment advisors apply historical asset performance data to address client views regarding future market/economic conditions? In their February 2018 paper entitled “Matching Market Views and Strategies: A New Risk Framework for Optimal Selection”, Adil Reghai and Gaël Riboulet present an approach for quantitatively relating historical asset return statistics to investor views. They intend this approach to address the widespread problem of backtest overfitting, whereby researchers discover good performance by fitting strategy features to noise in an historical dataset. Specifically, they:

  1. Collect historical return data for assets of interest and run backtests of alternative strategies on these data.
  2. Perturb historical average return, volatility, skewness and pairwise correlations up or down for these assets and rerun backtests of alternative strategies on multiple perturbations.
  3. Analyze relationships between directions of these perturbations and performance of alternative strategies.
  4. Match investor views first to directions of perturbations and then to strategies responding favorably (or least unfavorably) to these directions.

They apply this approach to generic algorithmic strategies (equal weight, momentum, mean reversion and carry). Based on mathematical derivations and examples, they conclude that: Keep Reading
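
The perturb-and-replay loop in steps 2 and 3 can be illustrated with a toy example. The sketch below, using placeholder data and an assumed momentum rule, shifts the average return and scales the volatility of a return series and reruns the backtest on each perturbed version; the paper also perturbs skewness and pairwise correlations, which are omitted here.

```python
# Minimal sketch of perturb-and-replay under assumed inputs: shift the mean and scale
# the volatility of a return series, rerun a toy momentum backtest on each perturbed
# version, and tabulate how performance responds to each perturbation direction.
import numpy as np

rng = np.random.default_rng(1)
hist_ret = rng.normal(0.0003, 0.01, 2000)             # placeholder for real daily returns

def momentum_sharpe(ret, lookback=60):
    """Toy rule: long only when the trailing lookback-day return is positive."""
    prices = np.cumprod(1 + ret)
    signal = np.r_[np.zeros(lookback), (prices[lookback:] > prices[:-lookback]).astype(float)]
    strat = signal[:-1] * ret[1:]                     # act on the next day's return
    return np.mean(strat) / np.std(strat) * np.sqrt(252)

results = {}
for mean_shift in (-0.0002, 0.0, 0.0002):             # perturb average return down / flat / up
    for vol_scale in (0.8, 1.0, 1.2):                 # perturb volatility down / flat / up
        perturbed = (hist_ret - hist_ret.mean()) * vol_scale + hist_ret.mean() + mean_shift
        results[(mean_shift, vol_scale)] = round(momentum_sharpe(perturbed), 2)
print(results)
```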

Chess, Jeopardy, Poker, Go and… Investing?

How can machine investors beat humans? In the introductory chapter of his January 2018 book entitled “Financial Machine Learning as a Distinct Subject”, Marcos Lopez de Prado prescribes success factors for machine learning as applied to finance. He intends that the book: (1) bridge the divide between academia and industry by sharing experience-based knowledge in a rigorous manner; (2) promote a role for finance that suppresses guessing and gambling; and, (3) unravel the complexities of using machine learning in finance. He intends that investment professionals with a strong machine learning background apply the knowledge to modernize finance and deliver actual value to investors. Based on 20 years of experience, including management of several multi-billion dollar funds for institutional investors using machine learning algorithms, he concludes that: Keep Reading

Mimicking Anything with ETFs

Can a simple set of exchange-traded funds (ETF), weighted judiciously, mimic the behaviors of most financial assets? In their January 2018 paper entitled “Mimicking Portfolios”, Richard Roll and Akshay Srivastava present and test a way of constructing mimicking portfolios using a small set of ETFs as investment factor proxies. They define a mimicking portfolio as a weighted set of tradable assets that match factor sensitivities of a target, which may be a specific asset, a fund or a non-tradable variable such as an economic indicator. They state that mimicking portfolios should: (1) consist of liquid, easily tradable assets; and, (2) exhibit little return volatility not explained by the factors used. They first winnow a large number of potential factor proxy ETFs spanning major asset classes and geopolitical regions by retaining only one ETF from any pair with daily return correlation greater than 0.70. They begin mimicking portfolio tests at the end of January 2009, when enough reasonably unique ETFs become available. They test this set of ETFs by creating portfolios from them that mimic each NYSE stock that has daily returns over the full sample period. Specifically, on the last day of each month, they reform a mimicking portfolio for each stock via a regression of stock return versus factor proxy ETF returns over the prior 300 trading days (or as few as 250 if 300 are not yet available) to reset coefficients for the ETFs. They perform an ancillary test by attempting to mimic iShares iBoxx $ Investment Grade Corporate Bond (LQD) and SPDR Dow Jones International Real Estate (RWX) ETFs, which are not in the factor proxy set. Using daily returns for the large number of ETFs and 1,634 NYSE stocks from the end of January 2009 through December 2016, they find that: Keep Reading
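
The monthly portfolio reformation reduces to a rolling regression. The sketch below forms mimicking weights for one target by regressing its daily returns on a handful of factor-proxy ETF returns over the prior 300 trading days. The ticker list and simulated data are illustrative assumptions, not the paper's proxy set, and the paper's treatment of the regression intercept and any weight constraints may differ.

```python
# Minimal sketch of one mimicking portfolio under assumed inputs: regress the target's
# daily returns on factor-proxy ETF returns over the prior 300 trading days and use the
# slope coefficients as mimicking weights. Tickers and data below are placeholders.
import numpy as np
import pandas as pd

def mimicking_weights(target_ret: pd.Series, etf_ret: pd.DataFrame, window: int = 300) -> pd.Series:
    """OLS of target on ETF returns over the last `window` days; returns the slopes."""
    y = target_ret.iloc[-window:].to_numpy()
    X = np.column_stack([np.ones(window), etf_ret.iloc[-window:].to_numpy()])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pd.Series(beta[1:], index=etf_ret.columns)          # drop the intercept

# Hypothetical usage with simulated data standing in for real ETF and stock returns
rng = np.random.default_rng(2)
dates = pd.bdate_range("2015-01-01", periods=400)
etfs = pd.DataFrame(rng.normal(0, 0.01, (400, 4)), index=dates,
                    columns=["SPY", "TLT", "EFA", "GLD"])      # illustrative proxy set
stock = 0.9 * etfs["SPY"] + 0.2 * etfs["TLT"] + rng.normal(0, 0.005, 400)
print(mimicking_weights(stock, etfs).round(2))
```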

10 Steps to Becoming a Better Quant

Want your machine to excel in investing? In his January 2018 paper entitled “The 10 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado examines common errors made by machine learning experts when tackling financial data and proposes correctives. Based on more than two decades of experience, he concludes that: Keep Reading

Categorization of Risk Premiums

What is the best way to think about reliabilities and risks of various anomaly premiums that investors commonly believe to be available for exploitation? In their December 2017 paper entitled “A Framework for Risk Premia Investing”, Kari Vatanen and Antti Suhonen present a framework for categorizing widely accepted anomaly premiums to facilitate construction of balanced investment strategies. They first categorize each premium as fundamental, behavioral or structural based on its robustness as indicated by clarity, economic rationale and capacity. They then designate each premium in each category as either defensive or offensive, depending on whether it is feasible as long-only or requires short-selling and leverage, and on its return skewness and tail risk. Based on expected robustness and riskiness of selected premiums as described in the body of research, they conclude that: Keep Reading
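
The two-level categorization can be represented as a simple data structure. The sketch below encodes one possible reading of the scheme; how the defensive/offensive criteria combine, the thresholds, and the example premiums and their categories are all illustrative assumptions rather than the paper's assignments.

```python
# Minimal sketch of the two-level categorization as a data structure. Thresholds and
# example premiums are assumptions for illustration, not the paper's assignments.
from dataclasses import dataclass

@dataclass
class Premium:
    name: str
    category: str        # "fundamental", "behavioral" or "structural"
    long_only: bool      # feasible without short-selling or leverage
    skewness: float      # return skewness
    tail_risk: float     # e.g., expected shortfall as a fraction of capital

    def style(self) -> str:
        """Label as defensive when long-only feasible with modest skew and tail risk."""
        defensive = self.long_only and self.skewness > -0.5 and self.tail_risk < 0.10
        return "defensive" if defensive else "offensive"

# Hypothetical entries; actual category assignments are the paper's judgment calls
for p in [Premium("equity value", "fundamental", True, -0.3, 0.08),
          Premium("FX carry", "behavioral", False, -1.2, 0.20)]:
    print(f"{p.name}: {p.category} / {p.style()}")
```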

Emptying the Equity Factor Zoo?

As described in “Quantifying Snooping Bias in Published Anomalies”, anomalies published in leading journals offer substantial opportunities for exploitation on a gross basis. What profits are left after accounting for portfolio maintenance costs? In their November 2017 paper entitled “Accounting for the Anomaly Zoo: A Trading Cost Perspective”, Andrew Chen and Mihail Velikov examine the combined effects of post-publication return deterioration and portfolio reformation frictions on 135 cross-sectional stock return anomalies published in leading journals. Their proxy for trading frictions is modeled stock-level effective bid-ask spread based on daily returns, representing a lower bound on costs for investors using market orders. Their baseline tests employ hedge portfolios that are long (short) the equally weighted fifth, or quintile, of stocks with the highest (lowest) expected returns for each anomaly. They also consider capitalization weighting, sorts into tenths (deciles) rather than quintiles and portfolio constructions that apply cost-suppression techniques. Using data as specified in published articles for replication of 135 anomaly hedge portfolios, they find that:

Keep Reading
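
One widely used way to model effective bid-ask spread from daily returns alone is the Roll (1984) serial-covariance estimator, sketched below on placeholder data. The authors' specific spread model may differ, so treat this as an illustration of the general daily-return approach rather than their exact proxy.

```python
# Sketch of the Roll (1984) spread estimator on simulated data; this is an assumed
# stand-in for the paper's daily-return-based effective spread model.
import numpy as np

def roll_effective_spread(daily_ret) -> float:
    """Roll (1984): spread = 2 * sqrt(-Cov(r_t, r_{t-1})), defined when the autocovariance is negative."""
    r = np.asarray(daily_ret, dtype=float)
    autocov = np.cov(r[1:], r[:-1])[0, 1]
    return 2 * np.sqrt(-autocov) if autocov < 0 else 0.0   # common convention: zero otherwise

# Hypothetical usage: mid-quote returns plus simulated bid-ask bounce with half-spread 0.005
rng = np.random.default_rng(3)
mid_ret = rng.normal(0, 0.005, 1000)
sides = rng.integers(0, 2, 1000) * 2 - 1                   # trade at bid (-1) or ask (+1)
bounce = 0.005 * np.diff(np.r_[0, sides])                  # bounce component of returns
print("estimated spread:", round(roll_effective_spread(mid_ret + bounce), 4))
```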

Quantifying Snooping Bias in Published Anomalies

Is data snooping bias a material issue for cross-sectional stock return anomalies published in leading journals? In the September 2017 update of their paper entitled “Publication Bias and the Cross-Section of Stock Returns”, Andrew Chen and Tom Zimmermann: (1) develop an estimator for anomaly data snooping bias based on noisiness of associated returns; (2) apply it to replications of 172 anomalies published in 15 highly selective journals; and, (3) compare results to post-publication anomaly returns to distinguish between in-sample bias and out-of-sample market response to publication. If predictability is due to bias, post-publication returns should be (immediately) poor because pre-publication performance is a statistical figment. If predictability is due to true mispricing, post-publication returns should degrade as investors exploit new anomalies. Their baseline tests employ hedge portfolios that are long (short) the equally weighted fifth, or quintile, of stocks with the highest (lowest) expected returns for each anomaly. Results are gross, ignoring the impact of periodic portfolio reformation frictions. Using data as specified in published articles for replication of 172 anomaly hedge portfolios, they find that:

Keep Reading
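
The baseline hedge portfolio construction used in this study (and in the related trading cost study above) is straightforward to sketch. The example below sorts a placeholder cross-section into quintiles on an anomaly signal and takes the equally weighted top-minus-bottom return for one period, gross of trading frictions; the signal and return data are simulated stand-ins.

```python
# Minimal sketch of the baseline hedge portfolio: equally weighted long the top
# expected-return quintile and short the bottom quintile, gross of frictions.
# The cross-sectional data below are simulated placeholders.
import numpy as np
import pandas as pd

def quintile_hedge_return(signal: pd.Series, next_ret: pd.Series) -> float:
    """Equal-weight top-minus-bottom quintile return for one rebalance period."""
    q = pd.qcut(signal, 5, labels=False)          # 0 = lowest expected return, 4 = highest
    return next_ret[q == 4].mean() - next_ret[q == 0].mean()

# Hypothetical one-month cross-section of 500 stocks
rng = np.random.default_rng(4)
tickers = [f"S{i:03d}" for i in range(500)]
signal = pd.Series(rng.normal(size=500), index=tickers)     # anomaly expected-return proxy
next_ret = pd.Series(0.02 * signal.to_numpy() + rng.normal(0, 0.05, 500), index=tickers)
print("hedge portfolio return:", round(quintile_hedge_return(signal, next_ret), 4))
```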

Seven Habits of Highly Ineffective Quants

Why don’t machines rule the financial world? In his September 2017 presentation entitled “The 7 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado explores causes of the high failure rate of quantitative finance firms, particularly those employing machine learning. He then outlines fixes for those failure modes. Based on more than two decades of experience, he concludes that: Keep Reading
