
Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

Chess, Jeopardy, Poker, Go and… Investing?

How can machine investors beat humans? In “Financial Machine Learning as a Distinct Subject”, the introductory chapter of his January 2018 book, Marcos Lopez de Prado prescribes success factors for machine learning as applied to finance. He intends that the book: (1) bridge the divide between academia and industry by sharing experience-based knowledge in a rigorous manner; (2) promote a role for finance that suppresses guessing and gambling; and, (3) unravel the complexities of using machine learning in finance. He further intends that investment professionals with a strong machine learning background apply this knowledge to modernize finance and deliver actual value to investors. Based on 20 years of experience, including management of several multi-billion dollar funds for institutional investors using machine learning algorithms, he concludes that: Keep Reading

10 Steps to Becoming a Better Quant

Want your machine to excel in investing? In his January 2018 paper entitled “The 10 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado examines common errors made by machine learning experts when tackling financial data and proposes correctives. Based on more than two decades of experience, he concludes that: Keep Reading

Seven Habits of Highly Ineffective Quants

Why don’t machines rule the financial world? In his September 2017 presentation entitled “The 7 Reasons Most Machine Learning Funds Fail”, Marcos Lopez de Prado explores causes of the high failure rate of quantitative finance firms, particularly those employing machine learning. He then outlines fixes for those failure modes. Based on more than two decades of experience, he concludes that: Keep Reading

Financial Analysts 25% Optimistic?

How accurate are consensus firm earnings forecasts worldwide at a 12-month horizon? In his May 2016 paper entitled “An Empirical Study of Financial Analysts Earnings Forecast Accuracy”, Andrew Stotz measures the accuracy of consensus 12-month earnings forecasts by financial analysts for the companies they cover around the world. He defines consensus as the average forecast across analysts covering a specific stock. He prepares data by starting with all stocks listed in all equity markets and sequentially discarding:

  1. Stocks with market capitalizations less than $50 million (U.S. dollars) as of December 2014 or the last day traded before delisting during the sample period.
  2. Stocks with no analyst coverage.
  3. Stocks without at least one target price and recommendation.
  4. The 2.1% of stocks with extremely small earnings, which may result in extremely large percentage errors.
  5. All observations of errors outside ±500% as outliers.
  6. Stocks without at least three analysts, one target price and one recommendation.

He focuses on scaled forecast error (SFE), defined as 12-month consensus forecasted earnings minus actual earnings, divided by the absolute value of actual earnings, as the key accuracy metric. Using monthly analyst earnings forecasts and subsequent actual earnings for all listed firms around the world during January 2003 through December 2014, he finds that: Keep Reading
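
As a rough illustration, the SFE calculation reduces to a one-line formula; the function and example values below are hypothetical, not from the paper:

  def scaled_forecast_error(consensus_forecast_eps, actual_eps):
      # Scaled forecast error: (forecast - actual) / |actual|;
      # positive values indicate analyst optimism
      return (consensus_forecast_eps - actual_eps) / abs(actual_eps)

  # Example: consensus forecast of $1.25 per share versus actual earnings of $1.00
  print(scaled_forecast_error(1.25, 1.00))  # 0.25, i.e., 25% optimistic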

Guru Re-grades

What happens to the rankings of Guru Grades after weighting each forecast by forecast horizon and specificity? In their March 2017 paper entitled “Evaluation and Ranking of Market Forecasters”, David Bailey, Jonathan Borwein, Amir Salehipour and Marcos Lopez de Prado re-evaluate and re-rank market forecasters covered in Guru Grades after weighting each forecast by these two parameters. They employ original Guru Grades forecast data as the sample of forecasts, including assessments of the accuracy of each forecast. However, rather than weighting each forecast equally, they:

  • Apply to each forecast a weight of 0.25, 0.50, 0.75 or 1.00 according to whether the forecast horizon is less than a month/indeterminate, 1-3 months, 3-9 months or greater than 9 months, respectively.
  • Apply to each forecast a weight of either 0.5 for less specificity or 1.0 for more specificity (see the sketch below).

Using a sample of 6,627 U.S. stock market forecasts by 68 forecasters from CXO Advisory Group LLC, they find that: Keep Reading
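
As a rough sketch (not the authors' code), the re-weighting scheme described above amounts to scoring each forecast by the product of its horizon weight and its specificity weight, with a forecaster's grade equal to the weighted fraction of correct forecasts. The function and field names below are hypothetical:

  def horizon_weight(horizon_months):
      # 0.25 for under a month or indeterminate (None), 0.50 for 1-3 months,
      # 0.75 for 3-9 months, 1.00 for more than 9 months
      if horizon_months is None or horizon_months < 1:
          return 0.25
      if horizon_months <= 3:
          return 0.50
      if horizon_months <= 9:
          return 0.75
      return 1.00

  def weighted_accuracy(forecasts):
      # forecasts: list of dicts with 'horizon_months', 'specific' (bool) and 'correct' (bool)
      weights = [horizon_weight(f["horizon_months"]) * (1.0 if f["specific"] else 0.5)
                 for f in forecasts]
      correct = sum(w for w, f in zip(weights, forecasts) if f["correct"])
      return correct / sum(weights)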

How Large University Endowments Allocate Investments

How are the asset allocations of the largest university endowments, conventionally accepted as among the best investors, evolving? In their December 2016 paper entitled “The Evolution of Asset Classes: Lessons from University Endowments”, John Mulvey and Margaret Holen summarize recent public reports from large U.S. university endowments, focusing on asset category definitions and allocations. Using public disclosures of 50 large university endowments for 2015, they find that: Keep Reading

The Value of Fund Manager Discretion?

Are there material average performance differences between hedge funds that emphasize systematic rules/algorithms for portfolio construction versus those that do not? In their December 2016 paper entitled “Man vs. Machine: Comparing Discretionary and Systematic Hedge Fund Performance”, Campbell Harvey, Sandy Rattray, Andrew Sinclair and Otto Van Hemert compare average performances of systematic and discretionary hedge funds for the two largest fund styles covered by Hedge Fund Research: Equity Hedge (6,955 funds) and Macro (2,182 funds). They designate a fund as systematic if its description contains “algorithm”, “approx”, “computer”, “model”, “statistical” and/or “system”. They designate a fund as discretionary if its description contains none of these terms. They focus on net fund alphas, meaning after-fee returns in excess of the risk-free rate, adjusted for exposures to three kinds of risk factors well known at the start of the sample period: (1) traditional equity market, bond market and credit factors; (2) dynamic stock size, stock value, stock momentum and currency carry factors; and, (3) a volatility factor specified as monthly returns from buying one-month, at‐the‐money S&P 500 Index calls and puts and holding to expiration. Using monthly after-fee returns for the specified hedge funds (excluding backfilled returns but including dead fund returns) during June 1996 through December 2014, they find that: Keep Reading
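
The systematic/discretionary designation is a simple keyword screen on fund descriptions; a minimal sketch (illustrative only, not the authors' code) might look like:

  SYSTEMATIC_TERMS = ("algorithm", "approx", "computer", "model", "statistical", "system")

  def classify_fund(description):
      # Systematic if the description contains any keyword; otherwise discretionary
      text = description.lower()
      return "systematic" if any(term in text for term in SYSTEMATIC_TERMS) else "discretionary"

  print(classify_fund("Global macro strategy driven by a statistical model"))   # systematic
  print(classify_fund("Fundamental stock selection by the portfolio manager"))  # discretionary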

Robo Advisor Expected Performance and Acceptance

Does a flexible robo advisor (offering automated, passive investment strategies tailored to investor situation/preferences) perform well in comparison to the mutual fund/stock portfolios it might replace? If so, what inhibits investors from switching? In their November 2016 paper entitled “Robo Advisers and Mutual Fund Stickiness”, Michael Reher and Celine Sun compare actual mutual fund/stock portfolios held by individuals to Wealthfront robo advisor portfolios constructed by assigning weights to 10 exchange-traded funds based on investor responses to questions about financial situation and risk tolerance. The robo advisor portfolio construction process includes a critique of the original portfolio's diversification, fees and cash holdings. They focus on stock, mutual fund and ETF holdings in retirement (non-taxable) portfolios. They project net portfolio performance at the asset level based principally on Capital Asset Pricing Model (CAPM) estimates (alpha plus market beta) of asset returns. They group findings by: individuals who manage their own portfolios versus those who rely on mutual funds; and, individuals who choose to set up robo advisor accounts versus those who do not. Using original investor portfolio and corresponding robo advisor portfolio holdings collected during mid-January 2016 through early November 2016, fund loads and fees as of September 2016, and monthly returns for all assets and factors as available since January 1975, they find that: Keep Reading
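
A minimal sketch of a CAPM-style projection of net asset return (alpha plus market beta, less fees) follows; the function and inputs are illustrative assumptions, not the authors' model:

  def capm_expected_net_return(alpha, beta, market_return, risk_free_rate, annual_fee=0.0):
      # Expected net return = risk-free rate + alpha + beta * market excess return - fee
      return risk_free_rate + alpha + beta * (market_return - risk_free_rate) - annual_fee

  # Example: zero-alpha asset, beta 0.9, 7% expected market return, 2% risk-free rate, 1% fee
  print(capm_expected_net_return(0.0, 0.9, 0.07, 0.02, 0.01))  # 0.055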

Self-grading of the Morningstar Fund Rating System

How well does the Morningstar fund rating system (one star to five stars) work? In their November 2016 paper entitled “The Morningstar Rating for Funds: Analyzing the Performance of the Star Rating Globally”, suggested for review by a subscriber, Jeffrey Ptak, Lee Davidson, Christopher Douglas and Alex Zhao analyze the global performance of star ratings in terms of ability to predict fund performance. They use two test methodologies:

  1. Monthly two-stage regressions that test the ability of fund star ratings to add value to a linear factor model for each asset class at a one-month horizon. The first stage estimates fund dependencies (betas) on commonly used predictive factors over the past 36 months. The second stage measures the ability of fund star ratings to add predictive power to those betas in the following month. For stock funds, they consider fee, equity market, size, value and momentum factors. For bond funds, they consider fee, credit and term factors. For stock-bond funds, they consider all these factors. For alternative asset class funds, they consider fee and equity market factors. (A simplified sketch of this two-stage approach appears below.)
  2. An event study that tracks performances of equally weighted portfolios of funds formed by prior-month star rating over the next 1, 3, 6, 12, 36 and 60 months.

Using fund categories, monthly fund star ratings and returns, and asset class factor returns during January 2003 through December 2015, they find that: Keep Reading
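
A simplified sketch of the two-stage regression logic, using ordinary least squares on hypothetical arrays (the study runs such regressions monthly per asset class), follows:

  import numpy as np

  def first_stage_betas(fund_returns, factor_returns):
      # Regress a fund's past 36 monthly returns on factor returns to estimate betas
      X = np.column_stack([np.ones(len(fund_returns)), factor_returns])
      coefs, *_ = np.linalg.lstsq(X, fund_returns, rcond=None)
      return coefs[1:]  # drop the intercept

  def star_rating_coefficient(next_month_returns, betas, star_ratings):
      # Cross-sectional regression of next-month fund returns on betas plus star rating;
      # the last coefficient measures the rating's incremental predictive power
      X = np.column_stack([np.ones(len(next_month_returns)), betas, star_ratings])
      coefs, *_ = np.linalg.lstsq(X, next_month_returns, rcond=None)
      return coefs[-1]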

Institutional Stock Trading Expertise

Does trading by expert investors boost performance (profitably exploit information), or depress performance (unprofitably exploit information or wastefully churn on noise)? In their September 2016 paper entitled “Trading Frequency and Fund Performance”, Jeffrey Busse, Lin Tong, Qing Tong and Zhe Zhang investigate the relationship between trading frequency and performance among institutional investors (funds). They specify fund daily trading frequency as number of trades divided by the number of unique stocks traded. They calculate fund quarterly trading frequency as average daily trading frequency during the quarter. For each buy or sell, they calculate the return from execution date (at execution price) to end of the quarter, including stock splits, dividends and sometimes commissions. They estimate quarterly fund trading performance by aggregating performances of buys and sells separately, weighted either equally or by trade size, such that the average holding interval is about half a quarter. They subtract fund benchmark return over the same holding interval to calculate abnormal return. They then examine the relationship between abnormal return and fund size. Using daily common stock transaction details for 843 fund managers and 5,277 unique funds, along with associated stock return and firm data, during January 1999 through December 2009, they find that: Keep Reading
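
As a rough illustration, the frequency and abnormal return measures reduce to simple ratios and differences (the function names below are hypothetical, not from the paper):

  def daily_trading_frequency(num_trades, num_unique_stocks_traded):
      # Fund daily trading frequency: number of trades divided by unique stocks traded that day
      return num_trades / num_unique_stocks_traded

  def quarterly_trading_frequency(daily_frequencies):
      # Quarterly trading frequency: average of daily trading frequencies during the quarter
      return sum(daily_frequencies) / len(daily_frequencies)

  def abnormal_return(trade_return, benchmark_return):
      # Trade return from execution to quarter end minus the fund benchmark's return
      return trade_return - benchmark_return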
